diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzdunm" "b/data_all_eng_slimpj/shuffled/split2/finalzzdunm" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzdunm" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nInvestigating thermodynamic structures of nonequilibrium steady states (NESSs) has been a topic of active researches in nonequilibrium statistical mechanics~\\cite{Landauer,Oono,Raulle,Komatsu1,Komatsu2,Saito,Sasa,Seifert,Hatano,Speck,Esposito1,Esposito2,Lebowitz,Nemoto,Komatsu0,Maes1,Maes2}. For example, the extension of the relations in equilibrium thermodynamics, such as the Clausius equality, to NESSs is a great challenge~\\cite{Landauer,Oono,Raulle,Komatsu1,Komatsu2,Saito,Sasa,Seifert}. The extended thermodynamics, which is called steady state thermodynamics (SST)~\\cite{Oono}, is expected to be useful to analyze and to predict the dynamical properties of NESSs. However, the complete picture of SST has not been understood.\n\n\nIn equilibrium thermodynamics, the Clausius equality tells us how one can determine thermodynamic potential (entropy) by measuring the heat: \n\\begin{equation}\n\\Delta S - \\sum_\\nu \\beta^\\nu Q^\\nu = 0,\n\\label{Clausius}\n\\end{equation}\nwhich is universally valid for quasi-static transitions between equilibrium states. Here, $\\nu$ is an index of the heat baths, $\\beta^\\nu$ is the inverse temperature of bath $\\nu$, $Q^\\nu$ is the heat that the system absorbed from bath $\\nu$, and $S$ is the Shannon entropy of the system. The second term of the left-hand side (LHS) of (\\ref{Clausius}) is called the entropy production in the baths.\nTo generalize the Clausius equality to nonequilibrium situations, it has been proposed~\\cite{Landauer,Oono} that heat $Q^\\nu$ needs to be replaced by excess heat $Q_{\\rm ex}^\\nu$, which describes an additional heat induced by a transition between NESSs with time-dependent external control parameters such as the electric field. Correspondingly, the total heat can be decomposed as $Q^\\nu = Q_{\\rm ex}^\\nu + Q_{\\rm hk}^\\nu$, where housekeeping heat $Q_{\\rm hk}^\\nu$ describes the steady heat current in a NESS without any parameter change. \nQuantitative definitions of these quantities will be given later.\nOne may then expect that there exists some thermodynamic potential $S_{\\rm SST}$ which characterizes NESSs such that\n\\begin{equation}\n \\Delta S_{\\rm SST} - \\sum_\\nu \\beta^\\nu Q^{\\nu}_{\\rm ex} = 0,\n\\label{Clausius1}\n\\end{equation}\nholds for quasi-static transitions between NESSs, where the second term of the LHS corresponds to the excess part of the entropy production in the baths. Komatsu, Nakagawa, Sasa, and Tasaki (KNST) found that $S_{\\rm SST}$ in Eq.~(\\ref{Clausius1}) is a symmetrized version of the Shannon entropy in the lowest order of nonequilibriumness~\\cite{Komatsu1,Komatsu2}. However, the full order expression of the extended Clausius equality~(\\ref{Clausius1}) has been elusive. Then, some fundamental questions arise: What is the nonequilibrium thermodynamic potential $S_{\\rm SST}$ in Eq.~(\\ref{Clausius1}) in the full order expression? Does there exist such a potential at all?\n\nIn this paper, we answer these questions, and derive a full order expression of the excess entropy production for Markovian jump processes. We note that driven lattice gases are special cases of our formulation. 
We have found that an extended Clausius equality in the form of (\\ref{Clausius1}) does not hold in general; scalar thermodynamic potential $S_{\\rm SST}$ should be replaced by a vector potential. In other words, the first term of the LHS of (\\ref{Clausius1}) should be replaced by a geometrical quantity that depends only on trajectories in the parameter space. Our result includes equilibrium Clausius equality~(\\ref{Clausius}) and the KNST's extended Clausius equality as special cases. We will also derive the general condition that there exists a thermodynamic potential $S_{\\rm SST}$ such that Eq.~(\\ref{Clausius1}) holds.\n\nWe have used the technique of the full counting statistics~\\cite{Esposito1,Bagrets} to prove our main results. In the context of the full counting statistics (and also stochastic ratchets), it has been reported~\\cite{Parrondo,Sinitsyn1,Sinitsyn2,Sinitsyn3,Jarzynski1,Ohkubo1,Ren,Ohkubo2} that several phenomena in classical stochastic processes are analogous to the Berry's geometrical phase in quantum mechanics~\\cite{Berry,Samuel}. In this analogy, the above-mentioned vector potential corresponds to the gauge field that induces the Berry phase. Our result can also be regarded as a generalization of these previous studies on the classical Berry phase.\n\nThis paper is organized as follows. In Sec.~II, we formulate the model of our system and define the decomposition of the entropy production into the housekeeping and excess parts based on the full counting statistics. In Sec.~III, we derive our main results, which consist of the geometrical expressions of the excess parts of the cumulant generating function and the average of the entropy production. In Sec.~IV, we apply our main results to two special cases; one is equilibrium thermodynamics with the detailed balance, and the other is the KNST's extended Clausius equality. In Sec.~V, we discuss a quantum dot as a simple example, where Eq.~(\\ref{Clausius1}) does not hold in general. In Sec.~VI, we conclude this paper with some discussions.\n\n\n\\section{Setup}\n\nWe first formulate our setup and define the decomposition of the cumulant generating function of the entropy production into the excess and housekeeping parts.\n\n\\subsection{Dynamics}\n\nWe consider Markovian jump processes with $N < \\infty$ microscopic states. Let $p_x$ be the probability that the system is in state $x$. The probability distribution of the system is then characterized by vector $| p \\rangle := [p_1, p_2, \\cdots, p_N]^T$, where $1, 2, \\cdots, N$ describe the states, and ``$T$'' describes the transpose of the vector.\nThe time evolution of the probability distribution is given by a master equation $| \\dot p (t) \\rangle = R (\\bm{\\alpha} (t)) | p (t) \\rangle$,\nwhere $| \\dot p (t) \\rangle$ describes the time derivative of $| p (t) \\rangle$, $R(\\bm{\\alpha})$ is a $N \\times N$ matrix characterizing the transition rate of the dynamics with external parameters $\\bm{\\alpha}$. Here, the external parameters correspond to, for example, a potential or a nonconservative force applied to a lattice gas, or the temperatures of the heat baths. We drive the system by changing $\\bm{\\alpha}$. 
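As a concrete illustration of this setup (a hedged sketch, not part of the original formulation), the following Python fragment builds a hypothetical three-state generator $R=\sum_\nu R^\nu$ coupled to two baths, checks the conservation condition $\langle 1 | R = 0$, and extracts the steady distribution $| p^S \rangle$ as the right eigenvector with eigenvalue $0$; all numerical rates are arbitrary values chosen for demonstration only.

\begin{verbatim}
import numpy as np

# Hypothetical bath-resolved rates R^nu_{xy} (rows x = "to", columns y = "from");
# the diagonal of the total generator is fixed by probability conservation.
R_nu = [np.array([[0.0, 0.3, 0.1],
                  [0.2, 0.0, 0.4],
                  [0.1, 0.2, 0.0]]),
        np.array([[0.0, 0.1, 0.2],
                  [0.3, 0.0, 0.1],
                  [0.2, 0.3, 0.0]])]
R = sum(R_nu)
np.fill_diagonal(R, -R.sum(axis=0))      # every column of R sums to zero

assert np.allclose(np.ones(3) @ R, 0.0)  # <1| R = 0

# Steady state |p^S>: right eigenvector of eigenvalue 0, normalized to sum to one.
w, V = np.linalg.eig(R)
p_S = np.real(V[:, np.argmin(np.abs(w))])
p_S = p_S / p_S.sum()
\end{verbatim}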
For simplicity of notations, we will often omit ``$(\\bm{\\alpha}(t))$'' or ``$(t)$'' in the following discussions.\nWe note that $\\sum_{x} R_{xy} = 0$ holds for every $y$, where $R_{xy}$ is the $xy$-component of $R$ that characterizes the transition rate from state $y$ to $x$.\nWe assume that $R$ is irreducible such that $R$ has eigenvalue $0$ without degeneracy due to the Perron-Frobenius theorem. We write as $\\langle 1 |$ and $| p^S \\rangle$ the left and right eigenvectors of $R$ corresponding to eigenvalue $0$ such that $\\langle 1 | R = 0$ and $R | p^S \\rangle = 0$ hold. We note that $\\langle 1 | = [1, 1, \\cdots, 1]$ holds and that $| p^S \\rangle = [p^S_1, p^S_2, \\cdots, p^S_N]^T$ is the unique steady distribution of the dynamics with a given $\\bm \\alpha$. For simplicity, we assume that $R$ is diagonalizable.\nWe also assume that the transition matrix can be decomposed into the contributions from multiple heat baths, labeled by $\\nu$, as $R_{xy} = \\sum_\\nu R^\\nu_{xy}$.\n\nWe next introduce the entropy production that depends on trajectories of the system. Such a trajectory-dependent entropy production has been studied in terms of nonequilibrium thermodynamics of stochastic systems~\\cite{Lebowitz,Crooks,Jarzynski2,Seifert}.\nThe entropy production in bath $\\nu$ with transition from $y$ to $x$ is given by\n\\begin{eqnarray}\n\\sigma^\\nu_{xy} = \\left\\{\n\\begin{array}{l}\n\\ln \\frac{R^\\nu_{xy}}{R^\\nu_{y x}} = - \\beta^\\nu Q^\\nu_{xy} \\ ({\\rm if} \\ R^\\nu_{xy}\\neq 0 \\ {\\rm and} \\ R^\\nu_{y x}\\neq 0), \\\\\n0 \\ ({\\rm if} \\ R^\\nu_{xy} = 0 \\ {\\rm and} \\ R^\\nu_{y x} = 0), \n\\end{array}\n\\right.\n\\label{entropy}\n\\end{eqnarray}\nwhere $Q_{xy}^\\nu$ is the heat that is absorbed in the system from bath $\\nu$ during the transition from $y$ to $x$. Equality (\\ref{entropy}) is consistent with the detailed fluctuation theorem~\\cite{Lebowitz,Crooks,Jarzynski2,Seifert}. \nThe integrated entropy production from time $0$ to $\\tau$ is determined by the trajectory of system's states during the time interval as \n\\begin{equation}\n\\sigma = \\sum_{t : \\ {\\rm jump}} \\sigma^\\nu_{x(t+0)y(t-0)},\n\\end{equation}\nwhere the sum is taken over all times at which the system jumps, and $y(t-0)$ and $x(t+0)$ are the states immediately before and after the jump at $t$, respectively. \nWe note that the ensemble average of $\\sigma$ is equivalent to the entropy production in the conventional thermodynamics of macroscopic systems. A reason why we consider the trajectory-dependent entropy production lies in the fact that the entropy production is connected to the heat through Eq.~(\\ref{entropy}) at the level of each trajectories.\n\n\n\n\n\n\\subsection{Full Counting Statistics}\n\nWe then discuss the full counting statistics of $\\sigma$. Let $P(\\sigma)$ be the probability of $\\sigma$. Its cumulant generating function is given by \n\\begin{equation}\nS(i\\chi) := \\ln \\int d\\sigma e^{i\\chi \\sigma}P(\\sigma),\n\\end{equation}\nwhere $\\chi \\in \\mathbb R$ is the counting field. 
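As a hedged illustration of the per-jump entropy production in Eq.~(\ref{entropy}) (the rates below are illustrative only and are not taken from any model in this paper), the short fragment below tabulates $\sigma^\nu_{xy} = \ln (R^\nu_{xy}/R^\nu_{yx})$ for one bath-resolved rate matrix and checks the antisymmetry $\sigma^\nu_{xy} = -\sigma^\nu_{yx}$ that follows directly from the definition.

\begin{verbatim}
import numpy as np

# One hypothetical bath-resolved rate matrix R^nu (rows x = "to", columns y = "from").
R_nu_example = np.array([[0.0, 0.3, 0.1],
                         [0.2, 0.0, 0.4],
                         [0.1, 0.2, 0.0]])

def entropy_per_jump(Rn):
    # sigma^nu_{xy} = ln(R^nu_{xy} / R^nu_{yx}) where both rates are nonzero, else 0.
    sigma = np.zeros_like(Rn)
    nz = (Rn > 0) & (Rn.T > 0)
    sigma[nz] = np.log(Rn[nz] / Rn.T[nz])
    return sigma

sigma_example = entropy_per_jump(R_nu_example)
assert np.allclose(sigma_example, -sigma_example.T)  # antisymmetric in (x, y)
\end{verbatim}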
$S(i \\chi)$ leads to the cumulants of $\\sigma$ like $\\langle \\sigma \\rangle = \\partial S (i \\chi) \/ \\partial (i \\chi) |_{\\chi = 0}$,\nwhere $\\langle \\cdots \\rangle$ describes the statistical average.\nTo calculate $S(i\\chi)$, we define matrix $R_\\chi$ as $( R_\\chi )_{xy} := \\sum_\\nu R_{xy}^\\nu \\exp (i\\chi \\sigma_{xy}^\\nu)$,\nand consider the time evolution of vector $| p_\\chi (t) \\rangle$ corresponding to\n\\begin{equation}\n | \\dot p_\\chi (t) \\rangle = R_\\chi (\\bm \\alpha (t)) | p_\\chi (t) \\rangle\n\\label{master_chi}\n\\end{equation}\nwith initial condition $| p_\\chi (0) \\rangle := | p(0) \\rangle$. The formal solution of Eq.~(\\ref{master_chi}) is given by $| p_\\chi (t) \\rangle = {\\rm T} \\! \\exp_{\\leftarrow} \\left( \\int_0^\\tau R_\\chi (\\bm{\\alpha} (t)) dt \\right) | p(0) \\rangle$, where ${\\rm T} \\! \\exp_{\\leftarrow}$ describes the left-time-ordered exponential.\nThen we can show that\n\\begin{equation}\ne^{S(i\\chi)} = \\langle 1 | p_\\chi (\\tau) \\rangle\n\\end{equation}\nholds, where $\\langle \\cdot | \\cdot \\rangle$ means the inner product of left and right vectors.\n\nWe write the eigenvalues of $R_\\chi$ as $\\lambda_\\chi^n$'s, where $n=0$ corresponds to the eigenvalue with the maximum real part. \nIf $| \\chi |$ is sufficiently small, $\\lambda_\\chi^0$ is not degenerated and $R_\\chi$ is diagonalizable. \nWe write as $\\langle \\lambda_\\chi^n |$ and $| \\lambda_\\chi^n \\rangle$ the left and right eigenvectors corresponding to $\\lambda_\\chi^n$, which we can normalize as $\\langle \\lambda_\\chi^n | \\lambda_\\chi^m \\rangle = \\delta_{nm}$ with $\\delta_{nm}$ being the Kronecker's delta. \nIn particular, we write $\\langle \\lambda_\\chi^0 | =: \\langle 1_\\chi |$ and $| \\lambda_\\chi^0 \\rangle =: | p^S_\\chi \\rangle$.\nWe note that, if $\\chi = 0$, $\\langle 1_\\chi |$ and $| p_\\chi^S \\rangle$ reduce to $\\langle 1 |$ and $| p^S \\rangle$, respectively. \n\n\\subsection{Decomposition of the Entropy Production}\n\nIt is known that $\\lambda_\\chi^0 (\\bm \\alpha)$ is the cumulant generating function of $\\sigma$ in the steady distribution with parameter $\\bm \\alpha$.\nMore precisely, $\\lambda_\\chi^0 (\\bm \\alpha)$ satisfies\n\\begin{equation}\n\\lambda_\\chi^0 (\\bm \\alpha) = \\lim_{\\tau \\to +\\infty} \\frac{S(i \\chi; \\bm{\\alpha}; \\tau)}{\\tau},\n\\label{hk_average}\n\\end{equation}\nwhere $S(i \\chi; \\bm{\\alpha}, \\tau)$ is the cumulant generating function of $\\sigma$ from $0$ to $\\tau$ with $\\bm \\alpha$ being fixed.\n\n\nWe then decompose the cumulant generating function into two parts:\n\\begin{equation}\nS(i\\chi) = S_{\\rm hk} (i \\chi) + S_{\\rm ex} (i \\chi),\n\\end{equation}\nwhere $S_{\\rm hk} (i\\chi)$ is the house-keeping part defined as\n\\begin{equation}\nS_{\\rm hk} (i\\chi) : = \\int_0^\\tau \\lambda_\\chi^0 (\\bm{\\alpha} (t)) dt, \n\\end{equation}\nand $S_{\\rm ex} (i\\chi)$ is the excess part defined as $S_{\\rm ex}(i\\chi) := S (i \\chi) - S_{\\rm hk} (i \\chi)$. \nThe average of the excess entropy production is given by \n\\begin{equation}\n\\langle \\sigma \\rangle_{\\rm ex} = \\frac{\\partial S_{\\rm ex} (i \\chi)}{\\partial (i \\chi)} \\biggr|_{\\chi = 0}.\n\\label{excess_average}\n\\end{equation}\n\n\nWe note that the above decomposition is consistent with that in Refs.~\\cite{Komatsu1,Komatsu2}. 
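To make the housekeeping part concrete, the following sketch (again with hypothetical rates; an illustration under our own assumptions, not code from the original analysis) assembles the tilted generator $(R_\chi)_{xy} = \sum_\nu R^\nu_{xy} e^{i\chi \sigma^\nu_{xy}}$ at a fixed $\bm\alpha$, identifies $\lambda^0_\chi$ with the eigenvalue of maximal real part, and estimates the steady-state entropy-production rate $\partial \lambda^0_\chi / \partial (i\chi)|_{\chi=0}$, i.e., the integrand of $S_{\rm hk}$ at first order in $i\chi$, by a central finite difference.

\begin{verbatim}
import numpy as np

# Hypothetical bath-resolved rates for two baths (illustrative values only).
R_nu = [np.array([[0.0, 0.3, 0.1], [0.2, 0.0, 0.4], [0.1, 0.2, 0.0]]),
        np.array([[0.0, 0.1, 0.2], [0.3, 0.0, 0.1], [0.2, 0.3, 0.0]])]

def entropy_per_jump(Rn):
    sigma = np.zeros_like(Rn)
    nz = (Rn > 0) & (Rn.T > 0)
    sigma[nz] = np.log(Rn[nz] / Rn.T[nz])
    return sigma

def tilted_generator(chi):
    # (R_chi)_{xy} = sum_nu R^nu_{xy} exp(i chi sigma^nu_{xy});
    # the diagonal equals that of the untilted generator since sigma_{xx} = 0.
    R_chi = sum(Rn * np.exp(1j * chi * entropy_per_jump(Rn)) for Rn in R_nu)
    np.fill_diagonal(R_chi, -sum(R_nu).sum(axis=0))
    return R_chi

def lambda0(chi):
    # lambda^0_chi: eigenvalue of R_chi with the maximal real part.
    w = np.linalg.eigvals(tilted_generator(chi))
    return w[np.argmax(w.real)]

# Steady entropy-production rate at fixed parameters, by a central finite
# difference of lambda^0_chi with respect to (i chi) at chi = 0.
eps = 1e-4
rate_hk = ((lambda0(eps) - lambda0(-eps)) / (2j * eps)).real
print("steady entropy-production rate:", rate_hk)
\end{verbatim}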
In fact, from Eqs.~(\\ref{hk_average}) and (\\ref{excess_average}), we can show \n\\begin{equation}\n\\langle \\sigma \\rangle_{\\rm ex} = \\langle \\sigma \\rangle - \\int_0^\\tau \\langle \\dot \\sigma \\rangle_{{\\rm hk}; \\bm{\\alpha} (t)} dt,\n\\end{equation}\nwhere $\\langle \\dot \\sigma \\rangle_{{\\rm hk}; \\bm{\\alpha}} := \\partial \\lambda_\\chi^0 (\\bm \\alpha) \/ \\partial (i \\chi) |_{\\chi = 0}$ is the long-time average of the entropy production per unit time with $\\bm{\\alpha}$ being fixed.\n\n\n\n\\section{Main Results}\n\nWe now discuss the main results of this paper, which we will refer to as Eqs.~(\\ref{berry2}) and (\\ref{average_e}). \nFirst of all, we expand $| p_\\chi (t) \\rangle$ as \n\\begin{equation}\\\n| p_\\chi (t) \\rangle = \\sum_n c_n(t) e^{\\Lambda_\\chi^n (t)} | \\lambda_\\chi^n (\\bm{\\alpha} (t)) \\rangle,\n\\label{c_0}\n\\end{equation}\nwhere $\\Lambda_\\chi^n (t) := \\int_0^t \\lambda_\\chi^n (\\bm{\\alpha}(t')) dt'$. We can show that $\\dot{c}_0 = - \\sum_n c_n \\langle 1_\\chi | \\dot{\\lambda}_\\chi^n \\rangle e^{\\Lambda_\\chi^n - \\Lambda_\\chi^0}$ and $\\langle 1_\\chi | \\dot \\lambda_\\chi^n \\rangle = \\langle 1_\\chi | \\dot R_\\chi | \\lambda_\\chi^n \\rangle \/ (\\lambda^n_\\chi - \\lambda_\\chi^0)$ hold. Therefore, if the speed of the change of the external parameters is much smaller than the relaxation speed of the system, we obtain\n\\begin{equation}\n\\dot c_0 (t) \\simeq - c_0 (t) \\langle 1_\\chi (\\bm{\\alpha} (t)) | \\dot p_\\chi^S (\\bm{\\alpha} (t)) \\rangle.\n\\label{equation}\n\\end{equation}\nHere, we have used that the real part of $\\Lambda^n_\\chi - \\Lambda_\\chi^0$ is negative for all $n \\neq 0$. \nWe note that this result is similar (but not equivalent) to the adiabatic theorem in quantum mechanics. \n\n\n\n\nAssume that we quasi-statically change parameter $\\bm \\alpha$ between time $0$ and $\\tau$ along a curve $C$ in the parameter space. The solution of Eq.~(\\ref{equation}) is given by\n\\begin{equation}\n\\begin{split}\nc_0 (\\tau) &= c_0 (0) e^{- \\int_0^\\tau dt \\langle 1_\\chi (\\bm{\\alpha} (t)) | \\dot p_\\chi^S (\\bm{\\alpha} (t)) \\rangle } \\\\\n&= c_0 (0) e^{- \\int_C \\langle 1_\\chi | d | p_\\chi^S \\rangle },\n\\end{split}\n\\label{berry1}\n\\end{equation}\nwhere ``$d$'' on the right-hand side (RHS) means the total differential in terms of $\\bm{\\alpha}$ such that $d|p^S_\\chi \\rangle := d\\bm{\\alpha} \\cdot \\frac{\\partial}{\\partial \\bm{\\alpha}}|p^S_\\chi \\rangle$.\nLet the initial distribution be the steady distribution $| p (0) \\rangle = | p^S (\\bm \\alpha (0)) \\rangle$, which leads to $c_0 (0) = \\langle 1_\\chi (\\bm{\\alpha} (0)) | p^S(\\bm{\\alpha}(0)) \\rangle$. We then obtain the excess part of the cumulant generating function as\n\\begin{equation}\n\\begin{split}\nS_{\\rm ex} (i\\chi) = &\\int_C \\langle 1_\\chi | d | p_\\chi^S \\rangle \\\\\n&+ \\ln \\langle 1_\\chi (\\bm{\\alpha}(0) )| p^S(\\bm{\\alpha}(0)) \\rangle + \\ln \\langle 1 | p_\\chi^S (\\bm{\\alpha}(\\tau)) \\rangle,\n\\end{split}\n\\label{berry2}\n\\end{equation}\nwhere the RHS is geometrical and analogous to the Berry phase in quantum mechanics~\\cite{Berry}; it only depends on trajectory $C$ in the parameter space. More precisely, the RHS of (\\ref{berry2}) is analogous to the non-cyclic Berry phase~\\cite{Ohkubo2,Samuel}.\nWe note that $\\Lambda_\\chi^n (\\tau)$ is analogous to the dynamical phase. 
In this analogy, $| p_\\chi^S \\rangle$ and $R_\\chi$ respectively correspond to a state vector and a Hamiltonian.\nEquality~(\\ref{berry2}) is our first main result.\n\n\nIn terminologies of the Berry phase, $\\langle 1_\\chi | d | p_\\chi^S \\rangle$ corresponds to a vector potential or a gauge field whose base space is the parameter space. \nThe second and the third terms of the RHS of (\\ref{berry2}) confirms the gauge invariance of $S_{\\rm ex}(i \\chi)$ as is the case for quantum mechanics~\\cite{Samuel}, where the gauge transformation corresponds to the transformation of the left and right eigenvectors of $R_\\chi (\\bm{\\alpha})$ as $\\langle 1_\\chi (\\bm{\\alpha}) | \\mapsto \\langle 1_\\chi (\\bm{\\alpha}) | e^{ - \\theta (\\bm{\\alpha})}$ and $| p_\\chi^S (\\bm{\\alpha}) \\rangle \\mapsto e^{\\theta (\\bm{\\alpha})}| p^S (\\bm{\\alpha}) \\rangle$ with $\\theta (\\bm {\\alpha})$ being a scalar. \nWe note that several formulae that are similar to Eq.~(\\ref{berry2}) have been obtained for different setups~\\cite{Sinitsyn1,Sinitsyn2,Sinitsyn3,Ohkubo1,Ren,Ohkubo2}.\n\n\nBy differentiating Eq.~(\\ref{berry2}) in terms of $i \\chi$, we obtain a simple expression of the average of the excess entropy production:\n\\begin{equation}\n \\int_C \\langle 1' | d | p^S \\rangle + \\langle \\sigma \\rangle_{\\rm ex} = 0,\n\\label{average_e}\n\\end{equation}\nwhere $\\langle 1' | := \\partial \\langle 1_\\chi | \/ \\partial (i\\chi) |_{\\chi = 0}$. \nEquality~(\\ref{average_e}) is the second main result, which is the full order expression of the average of the excess entropy production. On the contrary to Eq.~(\\ref{Clausius1}), the first term of the LHS of (\\ref{average_e}) is not given by the difference of a scalar potential $S_{\\rm SST}$, but by a geometrical quantity. We also refer to $\\langle 1' | d | p^S \\rangle$ as a vector potential.\n\nWe can explicitly calculate $\\langle 1' |$. By differentiate the both-hand sides of $\\langle 1_\\chi | R_\\chi = \\lambda_\\chi^0 R_\\chi$ in terms of $i\\chi$, we have $\\langle 1' | = - \\langle 1 | \\partial R_\\chi \/ \\partial (i\\chi) |_{\\chi = 0} R^\\dagger + k \\langle 1 |$,\nwhere $R^\\dagger$ is the Moore-Penrose pseudo-inverse of $R$ and $k$ is an unimportant constant. Therefore, we obtain\n\\begin{equation}\n\\langle \\sigma \\rangle_{\\rm ex} = \\int_C \\sum_{\\nu x y z} \\sigma_{xy}^\\nu R_{xy}^\\nu R_{yz}^\\dagger dp_z^S.\n\\end{equation}\nSome similar formulae for particle currents have been obtained in Refs.~\\cite{Parrondo,Jarzynski1}.\n\n\nWe next consider the condition for the existence of thermodynamic potential $S_{\\rm SST}$ that satisfies Eq.~(\\ref{Clausius1}). For simplicity, we assume that the parameter space is simply-connected, i.e., there is no ``hole'' or singularity. The necessary and sufficient condition for the existence of $S_{\\rm SST}$ is that the integral in the first term of the LHS of (\\ref{average_e}) is always determined only by the initial and final points of $C$; or equivalently, $\\oint_C \\langle 1' | d | p^S \\rangle = 0$ holds for every closed curve $C$. On the other hand, the Stokes theorem states that $\\oint_C \\langle 1' | d | p^S \\rangle = \\int_S d\\left( \\langle 1' | d | p^S \\rangle \\right)$ holds,\nwhere $S$ is a surface whose boundary is $C$, and ``$d$'' means the exterior derivative. 
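Before turning to the local form of this condition, we note, as a hedged numerical illustration (the two-parameter family of rates below is an arbitrary assumption made only for demonstration), that the line-integral expression for $\langle \sigma \rangle_{\rm ex}$ given above is straightforward to evaluate: one discretizes the path $C$, accumulates $\sum_{\nu x y z} \sigma^\nu_{xy} R^\nu_{xy} R^\dagger_{yz}\, dp^S_z$ with the Moore--Penrose pseudo-inverse, and the same routine, run around a closed loop, probes whether the loop integral of the vector potential vanishes.

\begin{verbatim}
import numpy as np

def rates(a, b):
    # Hypothetical two-parameter family: bath 0 scaled by exp(a), bath 1 by exp(b).
    R0 = np.exp(a) * np.array([[0.0, 0.3, 0.1], [0.2, 0.0, 0.4], [0.1, 0.2, 0.0]])
    R1 = np.exp(b) * np.array([[0.0, 0.1, 0.2], [0.3, 0.0, 0.1], [0.2, 0.3, 0.0]])
    return [R0, R1]

def generator_and_steady_state(R_nu):
    R = sum(R_nu)
    np.fill_diagonal(R, -R.sum(axis=0))
    w, V = np.linalg.eig(R)
    p = np.real(V[:, np.argmin(np.abs(w))])
    return R, p / p.sum()

def sigma_of(Rn):
    s = np.zeros_like(Rn)
    nz = (Rn > 0) & (Rn.T > 0)
    s[nz] = np.log(Rn[nz] / Rn.T[nz])
    return s

def excess_entropy(path):
    # <sigma>_ex = int_C sum_{nu,x,y,z} sigma^nu_{xy} R^nu_{xy} (R^+)_{yz} dp^S_z,
    # accumulated over a discretized parameter path [(a_0, b_0), (a_1, b_1), ...].
    _, p_prev = generator_and_steady_state(rates(*path[0]))
    total = 0.0
    for a, b in path[1:]:
        R_nu = rates(a, b)
        R, p = generator_and_steady_state(R_nu)
        Rpinv = np.linalg.pinv(R)        # Moore-Penrose pseudo-inverse of R
        dp = p - p_prev
        total += sum(np.einsum("xy,xy,yz,z->", sigma_of(Rn), Rn, Rpinv, dp)
                     for Rn in R_nu)
        p_prev = p
    return total

# Quasi-static change of the parameters along an open path C.
open_path = [(0.3 * t, 0.0) for t in np.linspace(0.0, 1.0, 200)]
print("excess entropy production along C:", excess_entropy(open_path))
\end{verbatim}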
By using the wedge product ``$\\wedge$,'' we have $d\\left( \\langle 1' | d | p^S \\rangle \\right) = d\\langle 1' | \\wedge d | p^S \\rangle := \\sum_x d 1'_x \\wedge d p^S_x = \\sum_{xkl} \\frac{\\partial 1'_x}{\\partial \\alpha_k} \\frac{\\partial p^S_x}{\\partial \\alpha_l} d\\alpha_k \\wedge d\\alpha_l$, where $1'_x$ means the $x$-component of vector $\\langle 1' |$, and $\\alpha_k$ is the $k$-component of $\\bm{\\alpha}$. Therefore the necessary and sufficient condition is that \n\\begin{equation}\nd\\langle 1' | \\wedge d | p^S \\rangle = 0\n\\label{no_curvature}\n\\end{equation}\nholds in every point of the parameter space. Equation~(\\ref{no_curvature}) is equivalent to \n\\begin{equation}\n\\sum_x \\left( \\frac{\\partial 1' _x}{\\partial \\alpha_k} \\frac{\\partial p^S_x}{\\partial \\alpha_l} - \\frac{\\partial 1'_x}{\\partial \\alpha_l} \\frac{\\partial p^S_x}{\\partial \\alpha_k} \\right) = 0\n\\end{equation}\nfor all $(k, l)$. In terminology of the gauge theory, $d\\langle 1' | \\wedge d | p^S \\rangle$ corresponds to the strength of the gauge field or the curvature. For the case of the $U(1)$-gauge theory, the curvature is the magnetic field.\n\n\nIn equilibrium thermodynamics, Eq.~(\\ref{no_curvature}) holds due to the Maxwell relation, and $\\langle 1' | d | p^S \\rangle$ becomes the total differential of the Shannon entropy as we will see in the next section. On the other hand, Eq.~(\\ref{no_curvature}) does not hold for transitions between NESSs in general. In this sense, vector potential $\\langle 1' | d | p^S \\rangle$ plays a fundamental role instead of the scalar thermodynamic potential (i.e., the Shannon entropy) in SST. \n\n\n\n\n\n\n\\section{Special Cases}\n\nIn this section, we discuss two special cases, in which the first term of the LHS of~(\\ref{average_e}) reduces to the total differential of a scalar thermodynamic potential.\n\n\\subsection{Equilibrium Thermodynamics}\n\nIn general, we can explicitly show that Eq.~(\\ref{average_e}) reduces to the equilibrium Clausius equality if the detailed balance is satisfied. Let $E_x$ be the energy of state $x$. The transition rate is given by $R_{xy} = e^{\\beta (E_y - W_{xy})}$ with $W_{xy} = W_{y x}$, the steady distribution by $p_x^S = e^{-\\beta E_x} \/ Z$ with $Z$ being the partition function, and the entropy production in a bath by $\\sigma_{xy} = -\\beta (E_x - E_y) = -\\beta Q_{xy}$. In the quasi-static limit, the system is in contact with a single heat bath with inverse temperature $\\beta$ at each time, while $\\beta$ can be time-dependent. We then obtain \n\\begin{equation}\n \\langle 1' | d | p^S \\rangle = \\sum_x \\beta E_x dp^S_x = d \\left(- \\sum_x p_x^S \\ln p_x^S \\right),\n\\end{equation}\nwhich means that $\\langle 1' | d | p^S \\rangle $ is the total differential of the Shannon entropy. \n\n\\subsection{KNST's Extended Clausius Equality}\n\nWe now show that Eq.~(\\ref{average_e}) reduces to the KNST's extended Clausius equality~\\cite{Komatsu1,Komatsu2} in the lowest order of nonequilibriumness. Here we assume that, for every $(x,y)$, there exists at most single $\\nu$ that satisfies $R_{xy}^\\nu \\neq 0$, so that we can remove index $\\nu$. This is the same assumption as in Refs.~\\cite{Komatsu1,Komatsu2}.\nMoreover, we formally introduce the time-reversal of states; the time-reversal of state $x$, denoted as $x^\\ast$, is assigned in the phase space. 
Since we are considering stochastic jump processes that do not have any momentum term usually, we just interpret the correspondence $x \\mapsto x^\\ast$ as a formal mathematical map. \nCorrespondingly, we should replace $\\ln (R^\\nu_{xy} \/ R^\\nu_{y x})$ in Eq.~(\\ref{entropy}) by $\\ln (R^\\nu_{xy} \/ R^\\nu_{y^\\ast x^\\ast})$. Only with this replacement, all of the foregoing arguments remain unchanged in the presence of the time-reversal. We also assume that, in thermal equilibrium, $p_{x}^S = p_{x^\\ast}^S$ holds.\nWe define\n\\begin{equation}\n\\eta := \\sum_{xyz} \\ln (p_{x^\\ast}^S \/ p_{y^\\ast}^S )R_{xy}R_{yz}^\\dagger dp_z^S = \\sum_x \\left( \\ln p_{x^\\ast}^S \\right) dp_x^S\n\\end{equation}\nand $\\tilde{R}_{xy} := R_{y^\\ast x^\\ast} p_{x^\\ast}^S \/ p_{y^\\ast}^S$.\nHere, $\\tilde R$ is the adjoint of $R$ for the cases of $x = x^\\ast$~\\cite{Hatano,Esposito2}. We note that $\\sum_x \\tilde{R}_{xy} = 0$ holds for every $y$. Since $R = \\tilde R$ holds if the detailed balance is satisfied, we characterize the nonequilibriumness of the dynamics by $\\varepsilon := \\max_{xy} | (\\tilde{R}_{xy} - R_{xy})\/R_{xy} |$.\nWe then obtain \n\\begin{equation}\n\\begin{split}\n\\langle 1' | d | p^S \\rangle +\\eta &= \\sum_{xyz} \\ln (\\tilde{R}_{xy} \/ R_{xy}) R_{xy} R_{yz}^\\dagger dp_z^S \\\\\n &= \\sum_{xyz} (\\tilde{R}_{xy} - R_{xy}) R_{yz}^\\dagger dp_z^S + O(\\varepsilon^2 \\Delta) \\\\\n&= O(\\varepsilon^2 \\Delta),\n \\end{split}\n \\label{eta1}\n \\end{equation}\nwhere $\\Delta := \\max_x | dp_x^S |$ characterizes the amount of the infinitesimal change of the steady distribution. \nOn the other hand, \n\\begin{equation}\n\\eta = d \\left( \\sum_x p_x^S \\ln \\sqrt{p_x^S p_{x^\\ast}^S} \\right) + O(\\varepsilon^2 \\Delta)\n\\label{eta2}\n\\end{equation}\nholds~\\cite{Komatsu2}. From Eqs.~(\\ref{eta1}) and (\\ref{eta2}), we obtain\n\\begin{equation}\n\\langle 1' | d | p^S \\rangle = d \\left( - \\sum_x p_x^S \\ln \\sqrt{p_x^S p_{x^\\ast}^S} \\right) + O(\\varepsilon^2 \\Delta),\n\\end{equation}\nwhich implies the KNST's extended Clausius equality, where the first term of the RHS is the total differential of the symmetrized Shannon entropy. \nWe note that, if we gradually change parameter $\\bm \\alpha$ from an equilibrium distribution, then the KNST's extended Clausius equality is valid up to the order of $O(\\varepsilon^2)$ because $\\Delta = O(\\varepsilon)$ holds.\n\n\n\n\n\\section{Example} \n\nAs a simple example that illustrates the absence of a scalar thermodynamic potential, we consider a stochastic model of a quantum dot that is in contact with two baths that are labeled by $\\nu = L$ and $R$ (see also Fig.~1 (a))~\\cite{Bagrets}. \nThis model describes the stochastic dynamics of the number of electrons in the dot by a classical master equation. \n\n\nAn electron is transfered from the baths to the dot one by one or vise-versa. We assume that the states of the dot are $x=0$ and $1$, which respectively describe that the electron is absent and occupies the dot. 
The probability distribution is described by $| p \\rangle = [p_0, p_1]^T$, and the transition rate is given by $R = \\sum_{\\nu = L, R} R^\\nu$ with\n\\begin{eqnarray}\nR^\\nu = \\left[ \n\\begin{array}{cc}\n- \\gamma_\\nu f_\\nu & \\gamma_\\nu (1 - f_\\nu) \\\\\n\\gamma_\\nu f_\\nu & -\\gamma_\\nu (1 - f_\\nu) \\\\\n\\end{array} \n\\right],\n\\end{eqnarray}\nwhere $\\gamma_\\nu$ is the tunneling rate between the dot and bath $\\nu$, and $f_\\nu = (e^{\\beta (E - \\mu_\\nu)}+1)^{-1}$ is the Fermi distribution function with $\\beta$ being the inverse temperature of the baths, $\\mu_\\nu$ being the chemical potential of bath $\\nu$, and $E$ being the excitation energy of the dot. The entropy production is given by $\\sigma_{00}^\\nu = \\sigma_{11}^\\nu = 0$ and $\\sigma_{10}^\\nu = - \\sigma_{01}^\\nu = \\sigma_\\nu$ with $\\sigma_\\nu := \\beta (\\mu_\\nu - E)$. For simplicity, we set $\\gamma_L = \\gamma_R =: \\gamma$. Without loss of generality, we assume that the control parameters are $\\sigma_L$ and $\\sigma_R$. \n\\begin{figure}[htbp]\n \\begin{center}\n \\includegraphics[width=80mm]{Figure_dot_revised.eps}\n \\end{center}\n \\caption{(a) A schematic of the model of a quantum dot. A single electron is transfered to\/from the two heat baths with chemical potentials $\\mu_L$ and $\\mu_R$. (b) $\\langle \\sigma \\rangle_{\\rm ex}$ (the solid line) and $-\\Delta S$ (the dashed line) for quasi-static processes. They are coincident with each other up to the second order of the nonequilibriumness of the final state that is denoted by $u$.} \n\\end{figure}\n\n\n\nWe can explicitly calculate the vector potential as\n\\begin{equation}\n\\langle 1' | d | p^S \\rangle = -\\frac{1}{4} (\\sigma_L + \\sigma_R) (f_L(1-f_L) d \\sigma_L + f_R (1-f_R)d\\sigma_R),\n\\end{equation}\nand the curvature as\n\\begin{equation}\nd\\langle 1' | \\wedge d | p^S \\rangle = \\frac{1}{4} (f_L (1 - f_L) - f_R (1-f_R)) d\\sigma_L \\wedge d \\sigma_R.\n\\end{equation}\nTherefore, the curvature vanishes only if $\\mu_L = \\mu_R$ or $2E = \\mu_L + \\mu_R$ holds. The former case corresponds to equilibrium thermodynamics. Since the curvature vanishes only on the two lines in the two-dimensional parameter space, any scalar potential cannot be defined on the entire parameter space. We note that the quantities that we have calculated here are different from those in the previous researches~\\cite{Sinitsyn1,Sinitsyn2,Sinitsyn3,Ohkubo1,Ren}. \n\nAs a simple illustration, we consider the following situation. The dot is initially in thermal equilibrium with $\\sigma_L = \\sigma_R = 0$. We then quasi-statically change $\\sigma_L$ from $0$ to $u$, while $\\sigma_R$ is not changed. We calculate $\\langle \\sigma \\rangle_{\\rm ex} = \\int_0^u \\sigma_L f_L(1-f_L) d\\sigma_L \/ 4$ for this process. For comparison, we also calculate the difference of the Shannon entropy between the initial and final distributions of the dot, denoted as $\\Delta S$. Figure 1 (b) shows $\\langle \\sigma \\rangle_{\\rm ex}$ (the solid line) and $-\\Delta S$ (the dashed line) versus $u$. They are coincident with each other up to the order of $O(u^2)$, which is consistent with the extended Clausius equality discussed in Sec.~IV.~ B with $u = O(\\varepsilon) = O(\\Delta)$.\n\\\n\\section{Conclusions and Discussions}\nWe have derived the geometrical expressions of the excess entropy production for quasi-static transitions between NESSs: Eq.~(\\ref{berry2}) for $S_{\\rm ex}(i \\chi)$ and Eq.~(\\ref{average_e}) for $\\langle \\sigma \\rangle_{\\rm ex}$. 
Our results imply that the vector potentials $\\langle 1_\\chi | d | p_\\chi^S \\rangle$ and $\\langle 1' | d | p^S \\rangle$ play important roles in SST. We have also derived condition~(\\ref{no_curvature}) that a scalar thermodynamic potential exists. \n\nWe note that the arguments in Secs.~II and III are not restricted to the case of entropy production $\\sigma_{xy}^\\nu$, but can be formally applied to an arbitrary quantity $f_{xy}^\\nu$ that satisfies $f_{xx}^\\nu = 0$. In fact, even if we replace $\\sigma_{xy}^\\nu$ by any $f_{xy}^\\nu$, the formal expressions of the main results in Sec.~III remain unchanged. However, we have explicitly used the properties of $\\sigma_{xy}^\\nu$ such as Eq.~(\\ref{entropy}) in Sec.~IV.\n\nWe also note that, as is the case for the gauge theory, we can rephrase our results (\\ref{berry1}) and (\\ref{berry2}) in terms of differential geometry~\\cite{Nakahara}. We consider a trivial vector bundle whose base manifold is parameter space $\\{ \\bm \\alpha \\}$. The fiber is $\\mathbb C$, and $c_0 (t)$ in Eq.~(\\ref{c_0}) is an element of the fiber. Then $\\langle 1_\\chi | d | p_\\chi^S \\rangle$ is a connection form, and Eq.~(\\ref{equation}) describes the parallel displacement of $c_0$ with the connection along curve $C$. \n\n\nIn this paper, we have assumed that nonequilibrium dynamics is modeled by a Markovian jump process with transition rate $R$ being diagonalizable. To generalize our results to other models of nonequilibrium dynamics is a future issue. For example, it is worth investigating whether our result can be generalized to Langevin systems.\nMoreover, to investigate the usefulness of our results in nonequilibrium thermodynamics is also a future challenge.\n\n\n\\begin{acknowledgments}\nWe are grateful to T. S. Komatsu, N. Nakagawa, T. Nemoto, J. Ohkubo, S. Okazawa, K. Saito, S. Sasa, and H. Tasaki for valuable discussions. This work was supported by the Global COE Program ``The Next Generation of Physics, Spun from Universality and Emergence'' from the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan, the Grantin-Aid of MEXT (Grants No.~21540384), and the Grant-in-Aid for Research Activity Start-up (Grants No.~11025807).\n\\end{acknowledgments}\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\subsection{Pose R-CNN}\nOur base network is derived from the Mask R-CNN~\\cite{Mask-RCNN} with the VGG backbone~\\cite{SimonyanZ14}. \n\\paragraph{Mask R-CNN:} Let us first brief the Mask-RCNN. Mask R-CNN consists of two main component. The first component, called a Region Proposal Network (RPN), proposes candidate object bounding boxes (RoIs). The second component extracts features using RoIAlign layer~\\cite{Mask-RCNN} from each candidate box and performs classification, bounding-box regression, and segmentation.\nWe refer readers to~\\cite{Mask-RCNN} details of Mask-RCNN. \n\n\\junk{\nHere we present our adaptation from Mask-RCNN with VGG backbone. The input to our network is a RGB image with the size $480 \\times 640$. The Region Proposal Network takes the last feature map of VGG (i.e., $conv5\\_3$) as input and output RoIs at different sizes and shapes. We use 15 anchors in the RPN (5 scales which are $16 \\times 16$, $32\\times32$, $64 \\times 64$, $128\\times128$ and $256 \\times 256$ and 3 aspect ratios which are $2:1$, $1:1$, $1:2$). This design allows the network to detect small objects. 
For each output RoI of RPN, a fixed-size $7\\times7$ feature map is pooled from the $conv5\\_3$ feature map using the RoIAlign layer. This pooled feature map is used to regress box coordinate, classify, segment, and estimate 6D pose for the object inside the box.\n}\n\n\\subsection{Pose R-CNN}\nPose-CNN also consist of two main components, with an identical first component (which is RPN) as Mask R-CNN. In the second component, in parallel to predicting the class, box\ncoordinate, and segment mask, Pose R-CNN also outputs a 6D pose for each RoI. \n\n\\paragraph{Pose representation}\n\n\\paragraph{Multi-task loss function}\n\n\\paragraph{Network architecture}\n\n\\section{Conclusion}\n\\label{sec:conl}\nIn this paper, we propose \\method{}, a deep learning approach for jointly detecting, segmenting, and most importantly recovering 6D poses of object instances from an single RGB image. \\method{} is end-to-end trainable and can directly output estimated poses without any post-refinements. The novelty design is at a new pose head branch, which uses the Lie algebra to represent the rotation space. \\method{} compares favorably with the state-of-the-art RGB-based 6D object pose estimation methods. Furthermore, \\method{} also allows a fast inference which is around 10 fps. An interesting future work is to improve the network for handling with rotationally symmetric objects. \n\n\n\\subsection{Training data}\n\\section{Experiments}\n\\label{sec:exp}\nWe evaluate \\method{} on two widely used datasets, i.e., the single object pose dataset LINEMOD provided by Hinterstoisser et al.~\\cite{ACCV12} and the multiple object instance pose dataset provided by Tejani et al.~\\cite{DBLP:conf\/eccv\/TejaniTKK14}. \nWe also compare \\method{} to the state-of-the-art methods for 6D object pose estimation from RGB images~\\cite{CVPR16,BB8,SSD-6D}.\n\n\\junk{When evaluating on the dataset of Hinterstoisser et al.~\\cite{ACCV12}, we mainly target to compare our work with the recent state-of-the-art 6D pose estimation method from Brachmann et al.~\\cite{CVPR16} because that work also estimates 6D pose from a single RGB input~\\cite{CVPR16}. For reference purposes, we also mention the results of LINE2D~\\cite{LINE2D}, which is a template-based approach\\footnote{LINE2D~\\cite{LINE2D} is originally proposed for object detection. It is then extended for 6D pose estimation by~\\cite{CVPR16}.}. It is because the method~\\cite{CVPR16} has already improves over LINE2D~\\cite{LINE2D}.}\n\n\\textbf{Metric:}\nDifferent 6D pose measures have been proposed in the past. In order to evaluate the recovered poses, we use the \\textit{standard} metrics used in~\\cite{CVPR16,BB8}. To measure pose error in 2D, we project the 3D object model into the image using the groundtruth pose and the estimated pose. The estimated pose is accepted if the IoU between two project boxes is higher than 0.5. This metric is called as \\textit{2D-pose} metric. To measure the pose error in 3D, the $5cm5^\\circ$ and \\textit{$\\textrm{ADD}$} metrics is used. \nIn $5cm5^\\circ$ metric, an estimated pose is accepted if it is within $5cm$ translational error and $5^\\circ$ angular error of the ground truth pose.\nIn $\\textrm{ADD}$ metric, an estimated pose is accepted if the average distance between transformed model point clouds by the groundtruth pose and the estimated pose is smaller than $10\\%$ of the object's diameter. We also provide the $F1$ score of the detection and segmentation results. 
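As a hedged sketch of the two 3D pose metrics just described (the helper names below are ours, and we assume that poses, model points, and the object diameter are all expressed in meters; this is not code released with any of the cited works):

\begin{verbatim}
import numpy as np

def add_accepted(R_gt, t_gt, R_est, t_est, model_points, diameter, tau=0.1):
    # ADD: mean distance between the model point cloud transformed by the
    # ground-truth pose and by the estimated pose; accept if below tau * diameter.
    gt  = model_points @ R_gt.T + t_gt
    est = model_points @ R_est.T + t_est
    return np.linalg.norm(gt - est, axis=1).mean() < tau * diameter

def cm_degree_accepted(R_gt, t_gt, R_est, t_est, t_max=0.05, ang_max_deg=5.0):
    # 5cm5deg: accept if the translational error is below 5 cm and the angular
    # error theta = arccos((tr(R_est R_gt^T) - 1) / 2) is below 5 degrees.
    t_err = np.linalg.norm(t_gt - t_est)
    cos_theta = np.clip((np.trace(R_est @ R_gt.T) - 1.0) / 2.0, -1.0, 1.0)
    return (t_err < t_max) and (np.degrees(np.arccos(cos_theta)) < ang_max_deg)
\end{verbatim}

Here $R$ and $t$ denote $3\times3$ rotation matrices and translation vectors, and \texttt{model\_points} is an $N\times3$ array of 3D model points.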
A detection \/ segmentation is accepted if its IoU with the groundtruth box \/ segmentation mask is higher than a threshold. We report results with the widely used thresholds, i.e., $0.5$ and $0.9$~\\cite{Faster-RCNN,Mask-RCNN}.\n\\input{exp_hoistin}\n\n\\input{exp_tenaji}\n\n\\subsection{Timing}\nWhen testing on the LINEMODE dataset~\\cite{ACCV12}, Brachman {{et al.}}~\\cite{CVPR16} reports a running time around $0.45s$ per image. \\method{} is several times faster than their method, i.e., its end-to-end architecture allows the inference at around $0.1s$ per image on a Titan X GPU. SSD-6D~\\cite{SSD-6D} and BB8~\\cite{BB8} report the running time around $0.1s$ and $0.3s$ per image, respectively. \\method{} is comparable to SSD-6D in the inference speed, while it is around 3 times faster than BB8. It is worth noting that due to using post-refinement, the inference time of SSD-6D and BB8 may be increased when the input image contains multiple object instances. We note that although \\method{} is fast, its parameters are still not optimized for the speed. \n As shown in the recent study~\\cite{DBLP:journals\/corr\/HuangRSZKFFWSG016}, better trade-off between speed and accuracy may be achieved by carefully selecting parameters, e.g., varying the number of proposals after RPN, image sizes, which is beyond scope of the current work. \n\n\n\\subsection{Single object pose estimation}\n\\label{exp:hoi}\n\n\\begin{table*}[!t]\n\\vspace{0.5cm}\n \\centering\n \\footnotesize\n \\begin{center}\n \\begin{tabular}{c| c c c c c c c c c c c c c c} \n&Ape &Bvise &Cam &Can &Cat &Driller &Duck &Box &Glue &Holep &Iron &Lamp &Phone &Average\\\\ \n\\Xhline{0.75pt}\n\t&\\multicolumn{14}{c}{\\textbf{IoU 0.5}} \\\\ \nDetection &99.8 &100 &99.7 &100 &99.5 &100 &99.8 &99.5 &99.2 &99.0 &100 &99.8 &100 &99.7\\\\ \nSegmentation &99.5 &99.8 &99.7 &100 &99.1 &100 &99.4 &99.5 &99.0 &98.6 &99.2 &99.4 &99.7 &99.4\n\\\\ \\hline\n\n\t&\\multicolumn{14}{c}{\\textbf{IoU 0.9}} \\\\\nDetection \t &85.4 &91.7 &93.3 &93.6 &89.3 &87.5 &86.3 &94.2 &81.1 &93.2 &92.5 &91.3 &90.8 &90.0\\\\ \nSegmentation &80.6 &57.0 &91.4 &62.5 &52.1 &74.6 &81.2 &91.9 &73.3 &84.6 &90.3 &85.0 &84.6 &77.6\\\\ \\hline\n \\end{tabular}\n \\end{center}\n \\vspace{-0.2cm}\n \\caption{F1 score for 2D detection and segmentation of \\method{} on LINEMOD dataset~\\cite{ACCV12} for single object.} \n \\label{tab:2D_det_seg}\n\\end{table*}\n\n\n\\begin{table*}[!t]\n \\centering\n \\footnotesize\n \\begin{center}\n \\begin{tabular}{c| c c c c c c c c c c c c c c} \n&Ape &Bvise &Cam &Can &Cat &Driller &Duck &Box &Glue &Holep &Iron &Lamp &Phone &Average\\\\ \n\\Xhline{1pt}\n\t&\\multicolumn{14}{c}{\\textbf{2D-pose metric}}\\\\ \n\\method{} &99.8 &100 &99.7 &100 &99.2 &100 &99.8 &99.0 &97.1 &98.0 &99.7 &99.8 &99.1 &99.3\\\\ \nBrachmann\\cite{CVPR16} &98.2 &97.9 &96.9 &97.9 &98.0 &98.6 &97.4 &98.4 &96.6 &95.2 &99.2 &97.1 &96.0 &97.5\\\\\nSSD-6D\\cite{SSD-6D} &- &- &- &- &- &- &- &- &- &- &- &- &- &99.4\\\\\n\n \t\\hline\n \t \t \t&\\multicolumn{14}{c}{$\\mathbf{5cm5^\\circ}$ \\textbf{metric}} \\\\\n\\method{} &57.8 &72.9 &75.6 &70.1 &70.3 &72.9 &67.1 &68.4 &64.6 &70.4 &60.7 &70.9 &69.7 &68.5\\\\ \nBrachmann\\cite{CVPR16} &34.4 &40.6 &30.5 &48.4 &34.6 &54.5 &22.0 &57.1 &23.6 &47.3 &58.7 &49.3 &26.8 &40.6\\\\\nBB8\\cite{BB8} &80.2 &81.5 &60.0 &76.8 &79.9 &69.6 &53.2 &81.3 &54.0 &73.1 &61.1 &67.5 &58.6 &69.0\\\\\n \t\\hline \t\n \t \t&\\multicolumn{14}{c}{$\\mathbf{ADD}$ \\textbf{metric}} \\\\\n\\method{} &38.8 &71.2 &52.5 &86.1 &66.2 &82.3 &32.5 &79.4 &63.7 &56.4 &65.1 
&89.4 &65.0 &65.2\\\\ \nBrachmann\\cite{CVPR16} &33.2 &64.8 &38.4 &62.9 &42.7 &61.9 &30.2 &49.9 &31.2 &52.8 &80.0 &67.0 &38.1 &50.2\\\\\nBB8\\cite{BB8} &40.4 &91.8 &55.7 &64.1 &62.6 &74.4 &44.3 &57.8 &41.2 &67.2 &84.7 &76.5 &54.0 &62.7\\\\\nSSD-6D\\cite{SSD-6D} &- &- &- &- &- &- &- &- &- &- &- &- &- &76.3\\\\\n \t\\hline \t\n \\end{tabular}\n \\end{center}\n \\vspace{-0.2cm}\n \\caption{Pose estimation accuracy on the LINEMOD dataset~\\cite{ACCV12} for single object.}\n \\label{tab:pose} \n\\end{table*}\n\n\\begin{figure*}[!t]\n\\vspace{0.1cm}\n\\centering\n\\subfigure{\n \\includegraphics[scale=0.17]{Hinter\/05color00013.jpg}\n}\n\\subfigure{\n \n \\includegraphics[scale=0.17]{Hinter\/05color00013_m.jpg} \n}\n\\subfigure{\n \\includegraphics[scale=0.17]{Hinter\/05color00013_p2.jpg} \n}\n\\\\\n\\vspace{0.1cm}\n\\subfigure{\n \\includegraphics[scale=0.17]{Hinter\/06color00050.jpg}\n}\n\\subfigure{\n \n \\includegraphics[scale=0.17]{Hinter\/06color00050_m.jpg} \n}\n\\subfigure{\n \\includegraphics[scale=0.17]{Hinter\/06color00050_p2.jpg} \n}\n\\\\\n\\vspace{0.1cm}\n\\subfigure{\n \\includegraphics[scale=0.17]{Hinter\/08color00002.jpg} \n}\n\\subfigure{\n \\includegraphics[scale=0.17]{Hinter\/08color00002_m.jpg} \n}\n\\subfigure{\n \\includegraphics[scale=0.17]{Hinter\/08color00002_p2.jpg} \n}\n\\caption[]{Qualitative results for single object pose estimation on the LINEMOD dataset~\\cite{ACCV12}. From left to right: (i) original images, (ii) the predicted 2D bounding boxes, classes, and segmentations, (iii) 6D poses in which the green boxes are the groundtruth poses and the red boxes are the predicted poses. Best view in color.}\n\\label{fig:singleobj}\n\\vspace{-0.3cm}\n\\end{figure*}\nIn~\\cite{ACCV12}, the authors publish LINEMOD, a RGBD dataset, which has become a \\textit{de facto} standard benchmark for 6D pose estimation. The dataset contains poorly textured objects in a cluttered scene. We only use the RGB images to evaluate our method. The dataset contains 15 object sequences. For fair comparison to~\\cite{CVPR16,BB8,SSD-6D}, we evaluate our method on the $13$ object sequences for which the 3D models are available. \nThe images in each object sequence contain multiple objects, however, only one object is annotated with the groundtruth class label, bounding box, and 6D pose. The camera intrinsic matrix is also provided with the dataset. Using the given groundtruth 6D poses, the object models, and the camera matrix, we are able to compute the groundtruth segmentation mask for the annotated objects. \nWe follow the evaluation protocol in~\\cite{CVPR16,BB8} which uses RGB images from the object sequences for training and testing.\nFor each object sequence,\n\\red{we randomly select $30\\%$ of the images for training and validation. The remaining images serve as the test set.} \nA complication when using this dataset for training arises because \nnot all objects in each image are annotated, i.e., only one object is annotated per sequence, even though multiple objects are present. This is problematic for training a detection\/segmentation network such as~\\cite{Faster-RCNN,Mask-RCNN} because the training may be confused, e.g. slow or fail to converge, if an object is annotated as foreground in some images and as background other images. \nHence, we preprocess the training images as follows. For each object sequence, we use the RefineNet~\\cite{Lin:2017:RefineNet}, a state-of-the-art semantic segmentation algorithm, to train a semantic segmentation model. 
The trained model is applied on all training images in other sequences. The predicted masks in other sequences are then filtered out, so that the appearance of the objects without annotated information does not hinder the training. \n\n\\junk{\n\\paragraph{Evaluation protocol} \nIn the evaluation protocol of~\\cite{CVPR16}, the authors assume that for each testing image, the object sequence which the testing image belongs to is known. Hence they can select the learned features corresponding to the known object class for computing the pose. It helps to improve the accuracy. \nWe do not require the object sequence to be known in our evaluation. From the detection results which have already filtered by a global threshold, i.e. 0.9, for each object class, we keep only one detection with the highest classification score. This allows us to access the 2D detection and segmentation performance. We count a 2D detection \/ segmentation to be correct if its IoU with the groundtruth box \/ segmentation mask is higher than a threshold. We report the detection and segmentation results with IoU thresholds 0.5 and 0.9. In order to evaluate the recovered poses, we follow the metrics used in~\\cite{CVPR16}. To measure pose error in 2D, we project the 3D object model into the image using the groundtruth pose and the estimated pose. The estimated pose is accepted if the IoU between two project boxes is higher than 0.5. This metric is called as \\textit{2D pose}. To measure the pose error in 3D, the $5cm5^\\circ$ metric is used. An estimated pose is accepted if it is within $5cm$ translational error and $5^\\circ$ angular error of the ground truth pose. \\junk{The angular distance between two rotation matrices $P$ and $Q$ is computed as follows \n\\begin{equation}\n\\theta = \\arccos \\frac{tr(R)-1}{2}\n\\end{equation}\nwhere $R=PQ^T$.}\n}\n\n\\textbf{Results:}\nTable~\\ref{tab:2D_det_seg} presents the 2D detection and segmentation results of \\method{}. At an IoU 0.5, the results show that both detection and segmentation achieve nearly perfect scores for all object categories. This reconfirms the effective design of Faster R-CNN~\\cite{Faster-RCNN} and Mask R-CNN~\\cite{Mask-RCNN} for object detection and instance segmentation. \nWhen increasing the IoU to 0.9, both detection and segmentation accuracy significantly decrease. The more decreasing is observed for the segmentation, i.e., the dropping around \\red{10\\%} and \\red{22\\%} for detection and segmentation, respectively.\n\nWe put our interest on the pose estimation results, which is the main focus of this work. \nTable~\\ref{tab:pose} presents the comparative pose estimation accuracy between \\method{} and the state-of-the-art works of Brachmann et al.~\\cite{CVPR16}, BB8~\\cite{BB8}, SSD-6D~\\cite{SSD-6D} which also use RGB images as inputs to predict the poses. Under \\textit{2D-pose} metric, \\method{} is comparable to SSD-6D, while outperforms over \\cite{CVPR16} around \\red{2\\%}.\nUnder $5cm5^\\circ$ metric, \\method{} is slightly lower than BB8, while it significantly outperforms~\\cite{CVPR16}, i.e., around \\red{28\\%}. Under $\\textrm{ADD}$ metric, \\method{} outperforms BB8 \\red{2.5\\%}.\nThe results of \\method{} are also more stable than BB8~\\cite{BB8}, e.g., under $5cm5^\\circ$ metric, the standard deviations in the accuracy of \\method{} and BB8~\\cite{BB8} are \\red{5.0} and \\red{10.6}, respectively. \nThe Table~\\ref{tab:pose} also shows that both \\method{} and BB8~\\cite{BB8} are worse than SSD-6D. 
However, it is worth noting that SSD-6D~\\cite{SSD-6D} does not use images from the object sequences for training. \nThe authors~\\cite{SSD-6D} perform a discrete sampling over \\textit{whole} rotation space and use the known 3D object models to generate synthetic images used for training. By this way, the training data of SSD-6D is able to cover more rotation space than~\\cite{CVPR16}, BB8~\\cite{BB8}, and~\\method{}. \nFurthermore, SSD-6D also further uses an ICP-based refinement to improve the accuracy. Different from SSD-6D, \\method{} directly outputs the pose without any post-processing. \nFigure~\\ref{fig:singleobj} shows some qualitative results of \\method{} for single object pose estimation on the LINEMOD dataset.\n\n\n\n\\subsection{Multiple object instance pose estimation}\n\\label{exp:tejani}\nIn~\\cite{DBLP:conf\/eccv\/TejaniTKK14}, Tejani et al. publish a dataset consisting of six object sequences in which the images in each sequence contain multiple instances of the same object with different levels of occlusion. Each object instance is provided with the groundtruth class label, bounding box, and 6D pose. Using the given groundtruth 6D poses, the object models, and the known camera matrix, we are able to compute the groundtruth segmentation masks for object instances. \nWe use the RGB images provided by the dataset for training and testing. \\red{We randomly split $30\\%$ images in each sequence for training and validation. The remaining images serve as the test set.}\n\n\\junk{\n\\paragraph{Evaluation protocol} \nBecause there are multiple instances of an object in an image, we do not constraint the maximum number of the detection results. We accept all detections whose the classification scores are higher than a threshold. The threshold is fixed to $0.9$ for all sequences. A 2D detection \/ segmentation is accepted if its IoU with the groundtruth box \/ segmentation mask is higher than a threshold. We report the results with IoU thresholds $0.5$ and $0.9$. \nIn order to evaluate the recovered poses, we use {2D pose} and $5cm5^\\circ$ metrics as mentioned in the first dataset. 
\nBecause there are multiple object instances in each image, follow the original work~\\cite{DBLP:conf\/eccv\/TejaniTKK14}, we report results with $F1$ score, which is the harmonic mean of precision and recall.\n}\n\n\\begin{table*}[!t]\n \n \\footnotesize\n \\begin{center}\n \\begin{tabular}{c| c c c c c c c} \n&Camera &Coffee &Joystick &Juice &Milk &Shampoo &Average\\\\ \n\\Xhline{0.75pt}\n\t&\\multicolumn{7}{c}{\\textbf{IoU 0.5}} \\\\ \nDetection &99.8 &100 &99.8 &99.2 &99.7 &99.5 &99.6 \\\\ \nSegmentation &99.8 &99.7 &99.6 &99.0 &99.3 &99.5 &99.4 \\\\ \\hline\n\t&\\multicolumn{7}{c}{\\textbf{IoU 0.9}} \\\\ \nDetection &88.3 &97.2 &97.7 &89.4 &83.5 &82.4 &89.7 \\\\ \nSegmentation &81.7 &95.0 &95.5 &86.9 &77.8 &74.7 &85.2 \\\\ \\hline\n \\end{tabular}\n \\end{center}\n \\vspace{-0.2cm}\n \\caption{F1 score for 2D detection and segmentation of \\method{} on the dataset of Tejani et al.~\\cite{DBLP:conf\/eccv\/TejaniTKK14} for multiple object instances.} \n\n \\label{tab:2D_det_seg_tejani}\n\\end{table*}\n\\begin{table*}[!t]\n\\vspace{-0.2cm}\n \\centering\n \\footnotesize\n \\begin{center}\n \\begin{tabular}{c| c c c c c c c} \n&Camera &Coffee &Joystick &Juice &Milk &Shampoo &Average\\\\ \\Xhline{1pt}\n\t&\\multicolumn{7}{c}{\\textbf{2D pose metric}}\\\\ \n\\method{}\t \t&99.2 &100 &99.6 &98.4 &99.5 &99.1 &99.3 \\\\ \nSSD-6D\\cite{SSD-6D} &- &- &- &- &- &- &98.8 \\\\\n \t\\hline\n \t&\\multicolumn{7}{c}{$\\mathbf{5cm5^\\circ}$ \\textbf{metric}}\\\\ \n\\method{} &76.5 &18.7 &60.2 &85.6 &73.5 &72.4 &64.5 \\\\ \\hline\n \t&\\multicolumn{7}{c}{$\\mathbf{ADD}$ \\textbf{metric}}\\\\ \n\\method{} &80.4 &35.4 &27.5 &81.2 &71.6 &75.8 &62.0 \\\\ \n\n \t\\hline \t\n \\end{tabular}\n \\end{center}\n \\vspace{-0.2cm}\n \\caption{Pose estimation accuracy on the dataset of Tejani et al.~\\cite{DBLP:conf\/eccv\/TejaniTKK14} for multiple object instances.} \n \\label{tab:pose_tejani}\n\\end{table*}\n\\begin{figure*}[!t]\n\\centering\n\\vspace{-0.3cm}\n\\subfigure{\n \\includegraphics[scale=0.17]{tejani\/01color00011.jpg}\n}\n\\subfigure{\n \\includegraphics[scale=0.17]{tejani\/01color00011_m.jpg} \n}\n\\subfigure{\n \n \\includegraphics[scale=0.17]{tejani\/01color00011_p2.jpg} \n\n}\\\\\n\\vspace{-0.2cm}\n\\subfigure{\n \\includegraphics[scale=0.17]{tejani\/04color00291.jpg}\n}\n\\subfigure{\n \n \\includegraphics[scale=0.17]{tejani\/04color00291_m.jpg} \n}\n\\subfigure{\n \\includegraphics[scale=0.17]{tejani\/04color00291_p2.jpg} \n}\n\\\\\n\\vspace{-0.2cm}\n\\subfigure{\n \\includegraphics[scale=0.17]{tejani\/06color00002.jpg}\n}\n\\subfigure{\n\n \\includegraphics[scale=0.17]{tejani\/06color00002_m.jpg} \n}\n\\subfigure{\n \\includegraphics[scale=0.17]{tejani\/06color00002_p2.jpg} \n}\n\\caption[]{Qualitative results for pose estimation on the multiple object instance dataset of Tejani et al.~\\cite{DBLP:conf\/eccv\/TejaniTKK14}. From left to right: (i) the original images, (ii) the predicted 2D bounding boxes, classes, and segmentations (different instances are shown with different colors), (iii) 6D poses in which the green boxes are the groundtruth poses and the red boxes are the predicted poses. Best view in color.}\n\\label{fig:singleobj_tenaji}\n\\vspace{-0.3cm}\n\\end{figure*}\n\n\n\\begin{figure}[!t]\n\\centering\n\\subfigure{\n \n \\includegraphics[scale=0.15]{tejani\/02color00083_m.jpg}\n}\n\\subfigure{\n \\includegraphics[scale=0.15]{tejani\/02color00083_p2.jpg} \n}\n\\caption[]{One fail case on the Coffee sequence in the dataset of Tejani et al.~\\cite{DBLP:conf\/eccv\/TejaniTKK14}. 
Although the network can correctly predict the bounding box, the estimated 6D is incorrect due to the rotational symmetry of the object. We can see that the incorrect rotations are mainly in the Yaw axis.}\n\\label{fig:fail_tejani}\n\\end{figure}\n\\textbf{Results:} \nThe 2D detection and segmentation results are presented in Table~\\ref{tab:2D_det_seg_tejani}. \nAt an IoU $0.5$, \\method{} achieves nearly perfect scores. When increasing the IoU to $0.9$, the average accuracy decreases around \\red{10\\%} and \\red{14\\%} for detection and segmentation, respectively. We found that the Shampoo category has the most drop. It is caused by its flat shape, e.g., at some certain poses, the projected 2D images only contain a small side edge of the object, resulting the drop of scores at high IoU. \n\n\nThe accuracy of the recovered poses is reported in Table~\\ref{tab:pose_tejani}. Under \\textit{2D-pose} metric, both \\method{} and SSD-6D~\\cite{SSD-6D} achieve mostly perfect scores. \n Under $5cm5^\\circ$ and $\\textrm{ADD}$ metrics, on the average, \\method{} achieves \\red{64.5\\%} and \\red{62.0\\%} accuracy, respectively. We note that no previous works reports the standard $5cm5^\\circ$ and $\\textrm{ADD}$ metrics on this dataset using only RGB images.\n\nFigure~\\ref{fig:singleobj_tenaji} shows some qualitative results for the predicted bounding boxes, classes, segmentations, and 6D poses for multiple object instances. \n\n\nWe also found that \\method{} is not very robust to nearly rotationally symmetric objects. That is because if an object is rotational symmetry in $z$ axis, any rotation of the 3D object in the Yaw angle will produce the same object appearance in the 2D image. This makes confusion for the network to predict the rotation using only appearance information. \nAs shown in Table~\\ref{tab:pose_tejani}, the Coffee sequence, which is nearly rotational symmetry in both shape and texture, has very low $5cm5^\\circ$ score. Figure~\\ref{fig:fail_tejani} shows a failure case for this object sequence. \n\n\\subsection{Inference}\n\\section{Introduction}\nDetecting objects and their 6D poses (3D location and orientation) is an important task for many robotic applications including object manipulations (e.g., pick and place), parts assembly, to name a few. In clutter environments, objects must be first detected before their poses can be estimated. Accurate object segmentation is also important, especially when objects are occluded. Our goal is to develop a deep learning framework that jointly detects, segments, and most importantly recovers 6D poses of object instances from a single RGB image. While the first two tasks are getting more and more mature thanks to the power of deep learning, the 6D pose estimation problem remains a challenging problem.\n\n\n\n\n\n\nTraditional object pose estimation methods are mainly based on the matching of hand-crafted local features (e.g. SIFT~\\cite{SIFT_Lowe}). However, these local feature matching based approaches are only suitable for richly textured objects. For poorly textured objects, template-based matching~\\cite{ACCV12,DBLP:conf\/eccv\/TejaniTKK14,DBLP:conf\/iccv\/Rios-CabreraT13} or dense feature learning approaches are usually used~\\cite{ECCV14,ICCV15,CVPR17,CVPR17_2}. However, the template-based methods are usually sensitive to illuminations and occlusions. 
\nThe feature learning approaches~\\cite{ECCV14,ICCV15,CVPR17,CVPR17_2} have shown better performances against template-based methods, but they suffer from several disadvantages, i.e., \nthey require a time-consuming multi-stage processing for learning dense features, generating coarse pose hypotheses, and refining the coarse poses. \n\n\n\nWith the rising of deep learning, especially Convolutional Neural Networks (CNN), the object classification~\\cite{krizhevsky2012imagenet}, object detection~\\cite{Fast-RCNN,Faster-RCNN}, and recently object instance segmentation~\\cite{Mask-RCNN,affnet} tasks have achieved remarkable improvements. However, the application of CNN to 6D object pose estimation problem is still limited. \nRecently, there are few works~\\cite{SSD-6D,BB8,posecnn} which apply deep learning for 6D object pose estimation. These methods, however, are not end-to-end or only estimate a coarse object poses. They require further post-refinements to improve the accuracy, which linearly increases the running time, w.r.t. the number of detected objects. \n\n\\junk{\nThe key idea for recent large improvements in object detection and object instance segmentation problems~\\cite{Faster-RCNN,Mask-RCNN} is the development of a CNN architecture, i.e., Region Proposal Network (RPN)~\\cite{Faster-RCNN}. RPN is actually a CNN which is trained to produce multiple object (bounding boxes) proposals in an image at different shapes and sizes. \nFaster R-CNN~\\cite{Faster-RCNN} further refines and classify bounding boxes produced by RPN using additional fully connected layers.\nThe recent work Mask R-CNN~\\cite{Mask-RCNN} goes beyond Faster-RCNN, i.e., it performs binary segmentation in each bounding box produced by RPN.\n}\n\nRecently, Mask R-CNN~\\cite{Mask-RCNN} achieves state-of-the-art results in the instance segmentation problem. The key component of Mask R-CNN is a Region Proposal Network (RPN)~\\cite{Faster-RCNN}, which predicts multiple object (bounding boxes) proposals in an image at different shapes and sizes. \nMask R-CNN further segments instances inside bounding boxes produced from RPN by using additional convolutional layers. \n\nInspired by the impressive results of Mask R-CNN for object instance segmentation, \nwe are motivated to find the answer for the question that, \\textit{can we exploit the merits of RPN to not only segment but also recover the poses of object instances in a single RGB image, in an end-to-end fashion?}\nTo this end, we design a network which simultaneously detects\\footnote{The detection means the prediction of both bounding boxes and class labels.}, segments, and also recovers 6D poses of object instances from a single RGB image. In particular, we propose a \\method{} network, which goes beyond Mask R-CNN by adding a novel branch for regressing the poses for the object instances inside bounding boxes produced by RPN. The proposed pose branch is parallel with the detection and segmentation branches. \n\nOur main contribution is a novel object pose regressor, where the network regresses translation and rotation parameters seperately. Cares must be taken when regressing 3D rotation matrices as not all $3\\times3$ matrices are valid rotation matrices. To work around, we resort to the Lie algebra associated with the $SO(3)$ Lie group for our 3D rotation representation. Compared to other representations such as quaternion or orthonormal matrix, Lie algebra is an optimal choice as it is less parammeters and unconstrained, thus making the training process easier. 
Although the Lie algebra representation has been widely used in geometry-based robot vision problems~\\cite{DBLP:conf\/iros\/Agrawal06,DBLP:conf\/icra\/RosGSPL13}, to our best knowledge, this is the first work which successfully uses the Lie algebra in a CNN for regressing 6D object poses.\n\n\nDifferent from recent deep learning-based 6D pose estimation methods which are not end-to-end trainable~\\cite{BB8} or only predict a rough pose followed by a pose refinement step~\\cite{SSD-6D,BB8,posecnn}, the proposed \\method{} is a single deep learning architecture. It takes a RGB image as input and directly outputs 6D object poses without any pose post-refinements. Additionally, our system also returns segmentation masks of object instances. The experimental results show that \\method{} is competitive or outperforms the state-of-the-art methods on standard datasets. Furthermore, \\method{} is simple and elegant, allows the inference at the speed of $\\textrm{10 fps}$, which is several times faster than many existing methods.\n\nThe remainder of this paper is organized as follows. Section~\\ref{sec:related} presents related works. Section~\\ref{sec:method} details the proposed \\method{}. Section~\\ref{sec:exp} evaluates and compares \\method{} to the state-of-the-art 6D object pose estimation methods. Section~\\ref{sec:conl} concludes the paper. \n\\section{Method}\n\\label{sec:method}\nOur goal is to simultaneously detect, segment, and estimate the 6D poses of object instances in the input image. \nMask R-CNN performs well for the first two tasks, except the 6D pose estimation. In order to achieve a complete system, we propose a novel branch which takes RoIs from RPN as inputs and outputs the 6D poses of the instances inside the RoIs.\nAlthough the concept is simple, the additional 6D pose is distinct from the other branches. It requires an effective way to represent the 6D pose and a careful design of the loss function. \nIn this paper, we represent a pose by a 4-dimensional vector, in which the first three elements represent the Lie algebra associated with the rotation matrix of the pose; the last element represents the $z$ component of the translation vector of the pose. Given the predicted $z$ component and the predicted bounding box from the box regression branch, we use projective property to recover the full translation vector. The architecture of \\method{} is shown in Figure~\\ref{fig:overview}.\n\n\\subsection{\\method{}}\n\n\\begin{figure*}[!t] \n\\centering \t\t\t\t\n\\includegraphics[scale=0.4]{figure-1_edit.pdf} \n \\caption{An overview of \\method{} framework. {From left to right:} The input to \\method{} is a RGB image. A deep CNN backbone (i.e., VGG) is used to extract features over the whole image. The RPN is attached on the last convolutional layer of VGG (i.e., $conv5\\_3$) and outputs RoIs. For each RoI, the corresponding features from the feature map $conv5\\_3$ are extracted and pooled into a fixed size $7\\times 7$. The pooled features are used as inputs for $4$ head branches. For the box regression and classification heads, we follow Mask-RCNN~\\cite{Mask-RCNN}. The segmentation head is \\textit{adapted} from~\\cite{Mask-RCNN}, i.e., four $3\\times3$ consecutive convolutional layers (denoted as `$\\times4$') are used. The ReLu is used after each convolutional layer. A deconvolutional layer is used to upsample the feature map to $28\\times28$ which is the segmentation mask. The proposed pose head consists of four fully connected layers. 
The ReLu is used after each of the first three fully connected layers. The last fully connected layer outputs four numbers which represent for the pose. As shown on the right image, the network outputs the detected instances (with classes, i.e., Shampoo), the predicted segmentation masks (different object instances are shown with different colors) and the predicted 6D poses for detected instances (shown with 3D boxes).\n}\n \\label{fig:overview} \n\\end{figure*}\n\nLet us first briefly recap Mask R-CNN~\\cite{Mask-RCNN}. Mask R-CNN consists of two main components. The first component is a RPN~\\cite{Faster-RCNN} which produces candidate RoIs. The second component extracts features from each candidate RoI using RoIAlign layer~\\cite{Mask-RCNN} and performs the classification, the bounding box regression, and the segmentation. We refer readers to~\\cite{Mask-RCNN} for details of Mask-RCNN. \n\n\\method{} also consists of two main components. The first component is also a RPN. In the second component, in \\textit{parallel} to the existing branches of Mask R-CNN, \\method{} also outputs a 6D pose for the objects inside RoIs. \n\n\\paragraph{Pose representation}\nAn important task when designing the pose branch is the representation space of the output poses. We learn the translation in the Euclidean space. In stead of predicting full translation vector, our network is trained to regress the $z$ component only. The reason is that when projecting a 3D object model into a 2D image, two translation vectors with the same $z$ and the different $x$ and $y$ components may produce two objects which have very similar appearance and scale in 2D image (at different positions in the image -- in the extreme case of parallel projection, there is no difference at all). This causes difficulty for the network to predict the $x$ and $y$ components by using only appearance information as input. However, the object size and the scale of its textures in a 2D image provide strong cues about the $z$-coordinate. \nThis projective property allows the network to learn the $z$ component of the translation using the 2D object appearance only. Given the $z$ component, it is used together with predicted bounding box, which is outputted by the bounding box regression branch, to fully recover the translation. \nThe detail of this recovering process is presented in the following sections. \n\nRepresenting the rotation part of the pose is more complicated than the translation part. Euler angles are intuitive due to the explicit meaning of parameters.\nHowever, the Euler angles wrap around at 2$\\pi$ radians, i.e., having multiple values representing the same angle. This causes difficulty in learning a uni-modal scalar regression task. \nFurthermore, the Euler angles-based representation suffers from the well-studied problem of gimbal lock~\\cite{pose}. Another alternative, the use of $3\\times3$ orthonormal matrix is over-parametrised, and creates the problem of enforcing the orthogonality constraint when training the network through back-propagation. A final common representation is the unit length 4-dimensional quaternion. \nOne of the downsides of quaternion representation is its norm should be unit. This constraint may harm the optimization~\\cite{DBLP:conf\/iccv\/KendallGC15}. \n \nIn this work, we use the Lie algebra $so(3)$ associated with the Lie group $SO(3)$ (which is space of 3D rotation matrices) as our rotation representation. 
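To make the comparison above concrete, the short sketch below (an illustration, not code from the paper) shows that an arbitrary, unconstrained 3-vector already corresponds to a valid rotation through the exponential map, whereas a regressed quaternion represents a rotation only after being re-normalised; SciPy is used purely for brevity.

\begin{verbatim}
import numpy as np
from scipy.spatial.transform import Rotation

# Any 3-vector is a valid so(3) element: its direction is the rotation
# axis and its norm is the rotation angle, so no constraint is needed.
r = np.array([0.3, -1.2, 2.0])             # e.g. a raw network output
R = Rotation.from_rotvec(r).as_matrix()    # exponential map so(3) -> SO(3)
assert np.allclose(R @ R.T, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)

# A regressed quaternion, in contrast, must first be projected back onto
# the unit sphere before it represents a rotation.
q = np.array([0.9, -0.1, 0.4, 0.2])        # raw 4-vector (x, y, z, w)
R_q = Rotation.from_quat(q / np.linalg.norm(q)).as_matrix()
\end{verbatim}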
\nThe Lie algebra $so(3)$ is known as the tangent space at the identity element of the Lie group $SO(3)$. \nWe choose the Lie algebra $so(3)$ to represent the rotation because an arbitrary element of $so(3)$ admits a skew-symmetric matrix representation parameterized by a vector in ${\\mathbb R}^3$ which is continuous and smooth.\nThis means that the network needs to regress only three scalar numbers for a rotation, without any constraints. \nTo our best knowledge, this paper is the first one which uses Lie algebra for representing rotations in training a deep network for 6D object pose estimation task. \n\nDuring training, we map the groundtruths of rotation matrices to their associated elements in $so(3)$ by the closed form Rodrigues logarithm mapping~\\cite{log-exp-map}.\nThe mapped values are used as regression targets when learning to predict the rotation. \n\nIn summary, the pose branch is trained to regress a 4-dimensional vector, in which the first three elements represent rotation part and the last element represents the $z$ component of the translation part of the pose. \n\n\\paragraph{Multi-task loss function}\nIn order to train the network, we define a multi-task loss to jointly train the bounding\nbox class, the bounding box position, the segmentation, and the pose of the object inside the box. Formally, the loss function is defined as follows\n\\begin{equation}\n L=\\alpha_1L_{cls} + \\alpha_2L_{box} + \\alpha_3L_{mask} + \\alpha_4L_{pose} \n \\label{eq:allloss}\n\\end{equation}\n The classification loss $L_{cls}$, the bounding box regression loss $L_{box}$, and the segmentation loss $L_{mask}$ are defined similar as~\\cite{Mask-RCNN}, which are \\textit{softmax} loss, \\textit{smooth L1} loss, and \\textit{binary cross entropy} loss, respectively. The $\\alpha_1, \\alpha_2, \\alpha_3, \\alpha_4$ coefficients are scale factors to control the important of each loss during training. \n\nThe pose branch outputs 4 numbers for each RoI, which represents the Lie algebra for the rotation and $z$ component of the translation. It is worth noting that in our design, the output of pose branch is class-agnostic,\nbut the class-specific counterpart (i.e., with a $4C$-dimensional output vector in which $C$ is the number of classes) is also applicable. \nThe pose regression loss $L_{pose}$ is defined as follows\n\\begin{equation}\nL_{pose} = \\norm{r-\\hat{r}}_p + \\beta\\norm{t_z - \\hat{t}_z}_p\n\\label{eq:poseloss}\n\\end{equation}\nwhere $r$ and $\\hat{r}$ are two 3-dimensional vectors representing the regressed rotation and the groundtruth rotation, respectively; $t_z$ and $\\hat{t}_z$ are two scalars representing the regressed $z$ and the groundtruth $z$ of the translation; $p$ is a distance norm; $\\beta$ is a scale factor to control the rotation and translation regression errors.\n\n\\paragraph{Network architecture}\nFigure~\\ref{fig:overview} shows the schematic overview of \\method{}. We differentiate two parts of the network, i.e., the backbone and the head branches. The backbone is used to extract features over the whole image and is shared between head branches. There are four head branches corresponding to the four different tasks, i.e., the bounding box regression, the bounding box classification, the segmentation, and the 6D pose estimation for the object inside the box. \nFor the backbone, we follow Faster R-CNN~\\cite{Faster-RCNN} which uses VGG~\\cite{SimonyanZ14} together with a RPN attached on the last convolutional layer of VGG (i.e., $conv5\\_3$). 
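To illustrate how the regression targets and the pose loss defined above fit together, the following sketch builds the 4-dimensional target from a ground-truth pose and evaluates the loss with $p=1$; the logarithm map is written out explicitly (the branch near a rotation angle of $\pi$ is omitted for brevity), and all names are ours rather than the paper's.

\begin{verbatim}
import numpy as np

def log_map(R, eps=1e-8):
    # Rodrigues logarithm: rotation matrix -> so(3) vector (axis * angle).
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < eps:                        # close to the identity
        return np.zeros(3)
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return theta * axis

def pose_target(R_gt, t_gt):
    # 4-D regression target: so(3) vector plus the z translation component.
    return np.concatenate([log_map(R_gt), [t_gt[2]]])

def pose_loss(pred, target, beta=1.5, p=1):
    # L_pose = ||r - r_hat||_p + beta * ||t_z - t_hat_z||_p
    rot_err = np.linalg.norm(pred[:3] - target[:3], ord=p)
    tz_err = np.abs(pred[3] - target[3])
    return rot_err + beta * tz_err
\end{verbatim}

During training this loss would be evaluated only for positive RoIs, as described in the training details below.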
For each output RoI of RPN, a fixed-size $7\\times7$ feature map is pooled from the $conv5\\_3$ feature map using the RoIAlign layer~\\cite{Mask-RCNN}. This pooled feature map is used as input for head branches. For the network heads of the bounding box regression and classification, we closely follow the Mask R-CNN~\\cite{Mask-RCNN}. \nFor the segmentation head, we adapt from Mask R-CNN. In our design, four $3\\times3$ consecutive convolutional layers (denoted as `$\\times4$' in Figure~\\ref{fig:overview}) are used after the pooled feature map. The ReLu is used after each convolutional layer. A deconvolutional layer is used to upsample the feature map to $28\\times28$ which is the segmentation mask. It is worth noting that for segmentation head, we use the class-agnostic design, i.e., this branch outputs a single mask, regardless of class. We empirically found that this design reduces the model complexity and the inference time, while it is nearly effective as the class-specific design. This observation is consistent with the observation in Mask R-CNN~\\cite{Mask-RCNN}. \n\nIn order to adapt the shared features to the specific pose estimation task, the pose head branch consists of a sequence of 4 fully connected layers in which the number of outputs are $4096 \\to 4096 \\to 384 \\to 4$. The ReLU is used after each fully layer, except for the last layer. This is because the regressing targets (i.e., the groundtruths) contain both negative and positive values. We note that our pose head has a simple structure. More complex design may have potential improvements. \n\n\\subsection{Training and inference}\n\\paragraph{Training}\nWe implement \\method{} using Caffe deep learning library~\\cite{DBLP:conf\/mm\/JiaSDKLGGD14}. The input to our network is a RGB image with the size $480 \\times 640$. \nThe RPN outputs RoIs at different sizes and shapes. We use 5 scales and 3 aspect ratios, resulting 15 anchors in the RPN. The 5 scales are $16 \\times 16$, $32\\times32$, $64 \\times 64$, $128\\times128$ and $256 \\times 256$; the 3 aspect ratios are $2:1$, $1:1$, $1:2$. This design allows the network to detect small objects. \n\nThe $\\alpha_1, \\alpha_2, \\alpha_3$, and $\\alpha_4$ in (\\ref{eq:allloss}) are empirically set to 1, 1, 2, 2, respectively. The values of $\\beta$ in (\\ref{eq:poseloss}) is empirically set to 1.5. \nAn important choice for the pose loss (\\ref{eq:poseloss}) is the regression norm $p$. Typically, deep learning models use $p=1$ or $p=2$. With the datasets used in this work, we found that $p=1$ give better results and hence is used in our experiments. \n\nWe train the network in an end-to-end manner using stochastic gradient descent with $0.9$ momentum and $0.0005$ weight decay. The network is trained on a Titan X GPU for $350k$ iterations. Each mini batch has $1$ image.\nThe learning rate is set to $0.001$ for the first $150k$ iterations and then decreased by 10 for the remaining iterations. The top $2000$ RoIs from RPN (with a ratio of 1:3 of positive to negative) are subsequently used for computing the multi-task loss. A RoI is considered positive if it has an intersection over union (IoU) with a groundtruth box of at least 0.5 and negative otherwise. The losses $L_{mask}$ and $L_{pose}$ are defined for only positive RoIs.\n\n\\paragraph{Inference}\nAt the test phase, we run a forward pass on the input image. 
The top $1,000$ RoIs produced by the RPN are selected and fed into the box regression and classification branches, followed by non-maximum suppression~\\cite{Fast-RCNN}. Based on the outputs of the classification branch, we select the boxes output by the regression branch that have classification scores higher than a certain threshold (i.e., $0.9$) as the detection results. The segmentation branch and the pose branch are then applied to the detected boxes and output segmentation masks and the 6D poses for the objects inside the boxes. \n\n\\paragraph{From the 4-dimensional regressed pose to the full 6D pose}\nGiven the predicted Lie algebra, i.e., the first three elements of the predicted 4-dimensional vector from the pose branch, we use the exponential Rodrigues mapping~\\cite{log-exp-map} to map it to the corresponding rotation matrix. \nIn order to recover the full translation, we rely on the predicted $z$ component ($t_z$ -- the last element of the 4-dimensional predicted vector) and the predicted bounding box coordinates to compute the two missing components $t_x$ and $t_y$. We assume that the bounding box center (in the 2D image) is the projected point of the 3D object center (the origin of the object coordinate system). Under this assumption, using the 3D-2D projection formulation, we compute $t_x$ and $t_y$ as follows\n\\begin{equation}\nt_x = \\frac{(u_0 - c_x)t_z}{f_x}\n\\end{equation}\n\\begin{equation}\nt_y = \\frac{(v_0 - c_y)t_z}{f_y}\n\\end{equation}\nwhere $(u_0, v_0)$ is the bounding box center in the 2D image, and the matrix $[f_x, 0, c_x; 0, f_y, c_y; 0, 0, 1]$ is the known intrinsic camera calibration matrix.\n\n\\section{Related Work}\n\\label{sec:related}\nIn this section, we first review 6D object pose estimation methods. We then briefly review the main design of recent RPN-based methods for object detection and segmentation.\n\n\\textbf{Classical approaches.} The topic of pose estimation has received great attention in the past few years. For richly textured objects, sparse feature matching approaches have shown good accuracy~\\cite{DBLP:conf\/clor\/GordonL06,SIFT_Lowe,DBLP:conf\/icra\/MartinezCS10}. Recently, researchers have put more focus on poorly textured or texture-less objects. The most traditional approaches for poorly textured objects are to use object templates~\\cite{ACCV12,DBLP:conf\/eccv\/TejaniTKK14,DBLP:conf\/iccv\/Rios-CabreraT13}. The most notable work belonging to this category is LINEMOD~\\cite{ACCV12} which is based on stable gradient and normal features. However, LINEMOD is designed to work with RGBD images. Furthermore, template-based approaches are sensitive to lighting and occlusion. \n\n\\textbf{Feature learning approaches.} Recent 6D pose estimation research has relied on feature learning to deal with poorly textured objects~\\cite{ECCV14,ICCV15,CVPR17,CVPR17_2}. In~\\cite{ECCV14,ICCV15}, the authors show that the dense feature learning approach outperforms the matching approach. The basic design of \\cite{ECCV14,ICCV15,CVPR17,CVPR17_2} is a time-consuming multi-stage pipeline, i.e., a random forest is used for jointly learning the object categories for pixels\nand the coordinates of pixels w.r.t. object coordinate systems (known as object coordinates). A set of pose hypotheses is generated by using the outputs of the forest and the depth channel of the input image.\nAn energy function is defined on the generated pose hypotheses to select hypotheses. The selected pose hypotheses are further refined to obtain the final pose. 
Note that the pipelines in those works heavily depend on the depth channel. The depth information is required in both pose hypothesis generation and refinement. The work~\\cite{CVPR16} also follows a multi-stage approach as~\\cite{ECCV14,ICCV15} but is designed to work with RGB inputs.\nIn order to deal with the missing depth information, the distribution of object coordinates is approximated as a mixture model when generating pose hypotheses. \n\n\\junk{\nThe disadvantage of feature learning approaches~\\cite{ECCV14,ICCV15,CVPR16,CVPR17_2} is that the generation of pose hypotheses uses only local information, i.e., only three or four pixels are used to generate a hypothesis. As result, this may generate bad hypotheses because it does not consider a global context over the whole object. Furthermore, by requiring multiple processing steps, those approaches are are time-consuming, making them unsuitable for real-time applications. \n}\n\n\\textbf{CNN-based approach.} In recent years, CNN has been applied for 6D pose problem, firstly for camera pose~\\cite{DBLP:conf\/iccv\/KendallGC15,kendall2017posenet}, and recently for object pose~\\cite{ICCV15,CVPR17_2,SSD-6D,BB8,posecnn,yolo-6D}. \n\nIn~\\cite{DBLP:conf\/iccv\/KendallGC15,kendall2017posenet}, the authors train CNNs to directly regress 6D camera pose from a single RGB image. The camera pose estimation task is arguably easier than the object pose estimation task, because to estimate object pose, it also requires accurate detection and classification of the object, while these steps are not typically needed for camera pose. \n\nIn~\\cite{ICCV15}, the authors use a CNN in their object pose estimation system. However, \nthe CNN is only used as a probabilistic model to learn to compare the learned information (produced by a random forest) and the rendered image. The CNN outputs an energy value for selecting pose hypotheses which are further refined.\nThe work in~\\cite{CVPR17_2} improves over~\\cite{ICCV15} by using a CNN and reinforcement learning for joint selecting and refining pose hypotheses. \n\nIn SSD-6D~\\cite{SSD-6D}, the authors extend SSD detection framework~\\cite{SSD} to 3D detection and 3D rotation estimation. The authors decompose 3D rotation space into discrete viewpoints and in-plane rotations. They then treat the rotation estimation as a classification problem.\nHowever, to get the good results, it is required to manually find an appropriate sampling for the rotation space. Furthermore, the approach SSD-6D does not directly output the translation, i.e., to estimate the translation, for each object, an offline stage is required to precomputes bounding boxes w.r.t. all possible sampled rotations. This precomputed information is used together with the estimated bounding box and rotation to estimate the 3D translation. \nIn the recent technical report~\\cite{posecnn}, the authors propose a network, dubbed PoseCNN, which jointly segments objects and estimates the rotation and the distance of segmented objects to camera. However, by relying on a semantic segmentation approach (which is a FCN~\\cite{FCN}) to localize objects, it may be difficult for PoseCNN to deal with input image which contains multiple instances of an object. \nBoth SSD-6D and PoseCNN also require further pose refinement steps to improve the accuracy. \n\nIn BB8~\\cite{BB8}, the authors propose a cascade of multiple CNNs for object pose estimation task. A segmentation network is firstly applied to the input image to localize objects. 
Another CNN is then used to predict 2D projections of the corners of the 3D bounding boxes around objects. The 6D pose is estimated for the correspondences between the projected 2D coordinates and the 3D ground control points of \nbounding box corners using a PnP algorithm. Finally, a CNN per object is trained to refine the pose. By using multiple separated CNNs, BB8 is not end-to-end and is time-consuming for the inference.\nSimilar to~\\cite{BB8}, in the recent technical report~\\cite{yolo-6D}, the authors extend YOLO object detection network~\\cite{yolo9000} to predict 2D projections of the corners of the 3D bounding boxes around objects. Given the projected 2D coordinates and the 3D ground control points of bounding box corners, a PnP algorithm is further used to estimate the 6D object pose. \n\n\\textbf{RPN-based detection and segmentation.} One of key components in the recent successful object detection method Faster R-CNN~\\cite{Faster-RCNN} and instance segmentation method Mask R-CNN~\\cite{Mask-RCNN} is the Region Proposal Network --- RPN. The core idea of RPN is to dense sampling the whole input image by many overlap bounding boxes at different shapes and sizes. The network is trained to produce multiple object proposals (also known as Region of Interest --- RoI). This design of RPN allows to smoothly search over different scales of feature maps. \n\\junk{\nFor each RoI, a fixed-size small feature map (e.g., $7\\times7$) is pooled from the image feature map using the RoIPool layer~\\cite{RCNN} or RoIAlign layer~\\cite{Mask-RCNN}. These layers work by dividing the RoI into a regular grid and then max-pooling the feature map values in each grid cell. In Faster R-CNN, the outputs of the RoIPool layer are used to refine the RoI coordinates and to classify the RoI label. In Mask-RCNN, the outputs of the RoIAlign layer are used not only for refining and recognizing the RoI but also for segmenting the object inside the RoI.\n}\nFaster R-CNN~\\cite{Faster-RCNN} further refines and classifies RoIs with additional fully connected layers,\nwhile Mask R-CNN~\\cite{Mask-RCNN} further improves over Fast R-CNN by segmenting instances inside RoIs with additional convolutional layers. \n\nIn this paper, we go beyond Mask RCNN. In particular, depart from the backbone of Mask R-CNN, we propose a novel head branch which takes RoIs from RPN as inputs to regress the 6D object poses and is parallel with the existing branches. This results a novel end-to-end architecture which is not only detecting, segmenting but also directly recovering the 6D poses of object instances from a single RGB image. \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nMassive stars play an important role in the evolution of the Universe particularly in the late stages of their evolution. Through their strong stellar winds and supernova explosions, they inject mechanical energy into the interstellar medium (ISM) \\citep{bib:Abbott1982}. Also, they are principal sources of heavy elements in the ISM. In spite of their importance, our understanding of late stages of massive stars' evolution is still poor. One of the reasons for this is that massive stars are extremely rare partly because of their short lifetime. Due to their small numbers, the properties of evolved massive stars are still uncertain. 
For instance, as discussed in \\citet{bib:Massey2003}, \\citet{bib:MasseyOlsen2003} and \\citet{bib:Levesque2005}, there was a discrepancy between observed and theoretically predicted locations of red supergiants, high-mass evolved stars, on the Hertzsprung-Russel (HR) diagram. Compared with stellar evolutionary models, observed red supergiants appear to be too cool and too luminous.\n\nVY Canis Majoris (VY CMa) is one of the most well-studied red supergiants. Similar to other red supergiants, the location of VY CMa on the HR diagram is also uncertain. For instance, \\citet{bib:Monnier1999}, \\citet{bib:Smith2001} and \\citet{bib:Humphreys2007} obtained the luminosity of (2--5) $\\times$ 10$^5$ L$_{\\odot}$ from the spectral energy distribution (SED) assuming the distance of 1.5 kpc \\citep{bib:Lada1978} and effective temperature of 2800--3000 K based on the stellar spectral type \\citep{bib:LeSidaner1996}. However, \\citet{bib:Massey2006} suggested that the above parameters would make VY CMa cooler and more luminous than what current evolutionary models allow and would place it in the ``forbidden zone\" of the HR diagram, which is on the right-hand side of the Hayashi track in the HR diagram (see figure \\ref{fig:HRDthis}). \\citet{bib:Massey2006} reexamined the effective temperature and obtained a new value of 3650 $\\pm$ 25 K based on optical spectrophotometry combined with a stellar atmosphere model and suggested that the luminosity of VY CMa is only 6.0 $\\times$ 10$^4$ L$_{\\odot}$, which is probably a lower limit \\citep{bib:Massey2008}.\n\n\\citet{bib:Levesque2005} determined the effective temperatures of 74 Galactic red supergiants based on optical spectrophotometry and stellar atmosphere models. They obtained the effective temperatures of the red supergiants with a precision of 50 K. Their new effective temperatures are warmer than those in the literature. \nThese new effective temperature values seem to be consistent with the theoretical stellar evolutionary tracks. However, the luminosities of the red supergiants still had large uncertainty due to possible errors in estimated distances. Since most of red supergiants are very far, it has been difficult to apply the most reliable trigonometric parallax method to their distance measurements.\n\nIn the case of VY CMa, the currently accepted distance of 1.5 kpc \\citep{bib:Lada1978} is obtained by assuming that VY CMa is a member of the NGC 2362 star cluster and that the distance of NGC 2362 is same as that of VY CMa. The distance to the NGC 2362 was determined from the color-magnitude diagram \\citep{bib:Johnson1961} with an accuracy of no better than 30 \\%. Since the luminosity depends on the square of the distance, the accuracy of luminosity estimation is worse than 50 \\%. The proper motions measured with H$_2$O masers in previous studies (\\cite{bib:Richards1998}; \\cite{bib:Marvel1998}) have not been contradictory to the above distance value (1.5 kpc). When the H$_2$O maser region around a star is modeled with a symmetrically expanding spherical shell, the distance to the star from the Sun can be inferred by assuming that the observed mean proper motion multiplied by the distance should be equal to the observed mean \nradial velocity of the H$_2$O masers. The proper motion velocities in \\citet{bib:Richards1998} are well consistent with the H$_2$O maser spectral line velocities at the distance of 1.5 kpc. Also, \\citet{bib:Marvel1998} derived the distance of VY CMa to be 1.4 $\\pm$ 0.2 kpc. 
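As a rough illustration of this kind of distance estimate (and not of the actual modelling used in those studies), equating the tangential velocity implied by the mean proper motion with the mean radial expansion velocity gives the distance directly; the numbers below are placeholders, not measured values.

\begin{verbatim}
# Statistical-parallax estimate for an isotropically expanding maser shell:
# the mean tangential speed equals the mean radial speed, and
# v_t [km/s] = 4.74 * mu [mas/yr] * d [kpc].
v_rad_mean = 30.0  # hypothetical mean expansion speed from the spectrum (km/s)
mu_mean = 4.2      # hypothetical mean proper motion of the maser spots (mas/yr)

d_kpc = v_rad_mean / (4.74 * mu_mean)
print("inferred distance: %.2f kpc" % d_kpc)  # about 1.5 kpc for these numbers
\end{verbatim}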
However, such a ``statistical parallax'' method depends on kinematical models fitted to the observed relative proper motions. In fact, if we add rotation or anisotropic expansion to the simple spherical expansion model, the inferred distance value could vary beyond quoted error ranges. An accurate distance determination without an assumption is crucial for obtaining the right location of VY CMa on the HR diagram and for determining fundamental parameters of the star.\n\nThanks to the recent progress in the VLBI technique, the distance measurements with trigonometric parallaxes have become possible even beyond 5 kpc (e.g., \\cite{bib:Honma2007}). Since VY CMa has strong maser emission in its circumstellar envelope, we have conducted an astrometric observations of H$_2$O masers around VY CMa to measure an accurate parallax with VLBI Exploration of Radio Astrometry (VERA) and here we present the results.\n\n\\section{Observations and Data Reduction}\n\nWe have observed H$_2$O masers (H$_2$O 6$_{16}$--5$_{23}$ transition at the rest frequency of 22.235080 GHz) in the red supergiant VY CMa with VERA at 10 epochs over 13 months. The epochs are April 24, May 24, September 2, October 30, November 27 in 2006, January 10, February 14, March 26, April 21 and May 27 in 2007 (day of year 114, 144, 245, 303, 331 in 2006, 010, 045, 085, 111 and 147 in 2007, respectively). In each epoch, VY CMa and a position reference source J0725--2640 [$\\alpha$(J2000) = 07h25m24.413135s, $\\delta$(J2000) = --26d40'32.67907\" in VCS 5 catalog, \\citep{bib:Kovalev2007}] were observed simultaneously in dual-beam mode for about 7 hours. The separation angle between VY CMa and J0725--2640 is 1.059 degrees. The instrumental phase difference between the two beams was measured at each station during the observations based on the correlations of artificial noise sources (\\cite{bib:Kawaguchi2000}; \\cite{bib:Honma2008b}). A bright continuum source (DA193 at 1st--3rd epochs and J0530+1330 at other epochs) was observed every 80 minutes for bandpass and delay calibration at each beam.\n\nLeft-handed circularly polarized signals were sampled with 2-bit quantization and filtered with the VERA digital filter unit \\citep{bib;Iguchi2005}. The data were recorded onto magnetic tapes at a rate of 1024 Mbps, providing a total bandwidth of 256 MHz which consists of 16 $\\times$ 16 MHz IF channels. One IF channel was assigned to the H$_2$O masers in VY CMa and the other 15 IF channels were assigned to J0725--2640, respectively. The correlation processing was carried out on the Mitaka FX correlator \\citep{bib:Chikada1991}. The spectral resolution for H$_2$O maser lines is 15.625 kHz, corresponding to the velocity resolution of 0.21 km s$^{-1}$.\n\nAll data reduction was conducted using the NRAO Astronomical Image Processing System (AIPS) package. The amplitude and the bandpass calibration for target source (VY CMa) and reference source (J0725--2640) were performed independently. The amplitude calibration of each antenna was performed using system temperatures measured during observation. The bandpass calibration was applied using a bright continuum source (DA193 at 1st--3rd epochs and J0530+1330 at other epochs). Doppler corrections were carried out to obtain the radial motions of the H$_2$O masers relative to the Local Standard of Rest (LSR).\n\nThen, we calibrated the clock parameters using the residual delay of a bright continuum calibrator, which is also used for the bandpass calibration. 
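As a quick check of the quoted velocity resolution, the channel spacing translates into velocity through the standard Doppler relation $\Delta v = c\,\Delta\nu/\nu_0$; a one-line verification:

\begin{verbatim}
c_km_s = 299792.458    # speed of light (km/s)
nu0_hz = 22.235080e9   # H2O maser rest frequency (Hz)
dnu_hz = 15.625e3      # channel spacing (Hz)

dv = c_km_s * dnu_hz / nu0_hz
print("velocity resolution: %.2f km/s" % dv)   # ~0.21 km/s
\end{verbatim}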
The fringe fitting was made on the position reference source J0725--2640 with integration time of 2 minutes and time interval of 12 seconds to obtain residual delays, rates, and phases. These phase solutions were applied to the target source VY CMa and we also applied the dual-beam phase calibration \\citep{bib:Honma2008b} to correct the instrumental delay difference between the two beams. We calibrated the tropospheric zenith delay offset by using the GPS measurements of the tropospheric zenith delay made at each station \\citep{bib:Honma2008a}. After these calibrations, synthesized clean images were obtained. A typical synthesized beam size (FWHM) was 2 mas $\\times$ 1 mas with a position angle of --23$^{\\circ}$. We measured positions of the H$_2$O maser features relative to the extragalactic source J0725--2640 with two-dimensional Gaussian fitting.\n\n\\section{Results}\n\\begin{figure}\n \\begin{center}\n \\FigureFile(75mm,75mm){figure1original.eps}\n \n \\end{center}\n \\caption{The total power spectrum of the H$_2$O masers in VY CMa obtained at Iriki station on April 24, 2006.}\\label{fig:Kspectrum}\n\\end{figure}\nThe autocorrelation spectrum of the H$_2$O masers around VY CMa is shown in figure \\ref{fig:Kspectrum}. The spectrum shows rich maser emission over LSR velocities ranging from --5 to 35 km s$^{-1}$. Though there are variations in flux density, overall structures of maser spectrum are common to all the epochs, indicating that maser spots are surviving over the observing period of 13 months.\n\n\\begin{figure*}\n \\begin{center}\n \\FigureFile(100mm,100mm){figure2.eps}\n \n \\end{center}\n \\caption{The distribution of the H$_2$O masers in VY CMa obtained on April 24, 2006. Color, as shown on the scale on the right-hand side, represents the LSR velocities of the maser features.}\\label{fig:Kmap}\n\\end{figure*}\nTo reveal the distribution of the H$_2$O masers, we mapped the H$_2$O maser features in VY CMa at the first epoch (April 24, 2006). We detected 55 maser features that have signal-to-noise ratio larger than 7 in two adjacent channels. The distribution of the H$_2$O maser features is shown in figure \\ref{fig:Kmap}. The most red-shifted component and the most blue-shifted component lie near the apparent center of the distribution and the moderately red-shifted components on a NE-SW line of 150 mas. This distribution well agrees with the previous observational results by \\citet{bib:MarvelPhD}.\n\nAmong the H$_2$O maser features in VY CMa, one maser feature at a LSR velocity of about 0.55 km s$^{-1}$ is analyzed to detect an annual parallax. This feature, the most blue-shifted discrete component, is stably detected at all epochs. It is the third brightest maser component in the total-power spectrum, but in fact it is the second brightest component in the map. We note that while the brightest channel has multiple components, there is only one component in the channel at the LSR velocity of about 0.55 km s$^{-1}$. Because of the simple structure as well as its strength, this velocity channel is most suitable for astrometric measurements. Analyses for the other H$_2$O maser features will be reported in a forthcoming paper.\n\n\\begin{figure*}\n \\begin{center}\n \\begin{tabular}{cc}\n \\FigureFile(85mm,85mm){figure3a.eps} &\n \\FigureFile(85mm,85mm){figure3b.eps} \\\\\n \\end{tabular}\n \\caption{Results of the measured positions of the H$_2$O maser spot at the LSR velocity of 0.55 km s$^{-1}$ in VY CMa using J0725--2640 as a position reference source. 
The position offsets are with respect to $\\alpha$(J2000.0) = ${\\rm 07^h 22^m 58^s.32906}$, $\\delta$(J2000.0) = --25$^{\\circ}$46'03\".1410. \\textit{The left panel} shows the movements of the maser spot in right ascension as a function of time (day of year). \\textit{The right panel} is the same as \\textit{the left panel} in declination. Solid lines represent the best fit model with an annual parallax and a linear proper motion for the maser spot. Dotted lines represent the linear proper motion (--2.09 $\\pm$ 0.16 mas yr$^{-1}$ in right ascension and 1.02 $\\pm$ 0.61 mas yr$^{-1}$ in declination) and points represent observed positions of maser spot with error bars indicating the positional uncertainties in systematic errors (0.17 mas in right ascension and 0.68 mas in declination).}\\label{fig:para}\t\n \\end{center}\n\\end{figure*}\n\nFigure \\ref{fig:para} shows the position measurements of the 0.55 km s$^{-1}$ H$_2$O maser component for 13 months. The position offsets are with respect to $\\alpha$(J2000.0) = ${\\rm 07^h 22^m 58^s.32906}$, $\\delta$(J2000.0) = --25$^{\\circ}$46'03\".1410. Assuming that the movements of maser features are composed of a linear motion and the annual parallax, we obtained a proper motion and an annual parallax by the least-square analysis. The declination data have too large errors to be suitable for an annual parallax and a proper motion measurement. Therefore, we obtained the parallax of VY CMa to be 0.88 $\\pm$ 0.08 mas, corresponding to a distance of 1.14 $^{+0.11} _{-0.09}$ kpc using only the data in right ascension. This is the first distance measurement of VY CMa based on an annual parallax measurement with the highest precision. We estimated the positional uncertainties from the least-square analysis, and the values of the errors are 0.17 mas in right ascension and 0.68 mas in declination making reduced $\\chi^{2}$ to be 1.\n\nWe also determined the absolute proper motion in right ascension and declination. The absolute proper motion is --2.09 $\\pm$ 0.16 mas yr$^{-1}$ in right ascension and 1.02 $\\pm$ 0.61 mas yr$^{-1}$ in declination. Compared with the proper motion of VY CMa obtained with Hipparcos, 9.84 $\\pm$ 3.26 mas yr$^{-1}$ in right ascension and 0.75 $\\pm$ 1.47 mas yr$^{-1}$ in declination \\citep{bib:Perryman1997}, there is a discrepancy in the proper motion in right ascension. Nearly 12 mas yr$^{-1}$ difference (more than 65 km s$^{-1}$ at 1.14 kpc) seems too large even when we take into account possible relative motion of the 0.55 km s$^{-1}$ maser feature with respect to the star itself. The difference may be due to the fact that circumstellar envelope of VY CMa has complex small-scale structure at optical wavelengths. This might seriously affect proper motion measurements with Hipparcos.\n\nThe quantitative estimation of the individual error sources in the VLBI astrometry is difficult as previously mentioned in \\citet{bib:Hachisuka}, \\citet{bib:Honma2007}, and \\citet{bib:Hirota2007}. Therefore, we estimated errors from the standard deviations of the least-square analysis to be 0.17 mas in right ascension and 0.68 mas in declination. When we consider that the statistical errors of the position, 0.02--0.09 mas, are estimated from residuals of the Gaussian fitting, the large values of the standard deviations suggest that some systematic errors affect the result of astrometry. 
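The conversion from the fitted parallax to the quoted distance, including the asymmetric error bars, is a short computation; a minimal sketch:

\begin{verbatim}
parallax_mas = 0.88
sigma_mas = 0.08

d = 1.0 / parallax_mas                          # kpc, for a parallax in mas
d_plus = 1.0 / (parallax_mas - sigma_mas) - d   # upper error
d_minus = d - 1.0 / (parallax_mas + sigma_mas)  # lower error
print("d = %.2f +%.2f -%.2f kpc" % (d, d_plus, d_minus))  # 1.14 +0.11 -0.09
\end{verbatim}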
In the following, we discuss causes of these astrometric errors, although we cannot separate each factor of error.\n\nFirst, we consider the errors originating from the reference source. The positional errors of the reference source J0725--2640 affect those of the target source. The position of J0725--2640 was determined with an accuracy of 0.34 mas in right ascension and 0.94 mas in declination, respectively \\citep{bib:Kovalev2007}. Because these offsets are constant at all epochs, it dose not affect the parallax measurements. Also, when the reference source is not a point source, the positional errors of the target source could occur due to the structure and its variation of the reference source. However, since the reference source for our measurements is point-like and shows no structural variations between epochs, this is not likely to be the main source of the positional errors.\n\nSecondly, we consider the baseline errors originated from the positional errors of each VLBI station. The positions of VERA stations are determined with an accuracy of 3 mm by the geodetic observations at 2 and 8 GHz every 2 weeks. The positional errors derived from the baseline errors are 11 $\\mu$as at a baseline of 1000 km with the baseline error of 3 mm. This error is much smaller than our astrometric errors.\n\nThirdly, a variation of the structure of the maser feature could be one of the error sources. \\citet{bib:Hirota2007}, \\citet{bib:Imai2007}, and \\citet{bib:Hirota2008} proposed the maser structure effect as main sources in their trigonometric parallax measurements for nearby star forming regions Orion KL, IRAS 16293--2422 in $\\rho$ Oph East, and SVS 13 in NGC 1333, correspondingly. However, the effect does not seem dominating in our case because, (1) the 0.55 km s$^{-1}$ maser feature showed a stable structure in the closure phase, spectrum and map at all epochs of our observation, (2) this effect is inversely proportional to the distance of the target source and hence should be more than twice less significant for VY CMa as those in the above cases for a given size of the structure variation, (3) it is difficult to explain the large difference between astrometric errors in right ascension and declination by this effect.\n\nFinally, we have to consider the errors by the zenith delay residual due to tropospheric water vapor. These errors originate from the difference of path length through the atmosphere between the target and the reference sources, and generally larger in declination than in right ascension. According to the result of the simulation in \\citet{bib:Honma2008a}, the positional error by the tropospheric delay is 678 $\\mu$as in declination when the atmospheric zenith residual is 3 cm, the declination is --30 degrees, separation angle (SA) is 1 degree, and the position angle (PA) is 0 degrees. This is well consistent with our measurements. Therefore, the atmospheric zenith delay residual is likely to be the major source of the astrometric errors.\n\n\\section{The Location on the HR Diagram}\nWe successfully detected a trigonometric parallax of 0.88 $\\pm$ 0.08 mas, corresponding to a distance of 1.14 $^{+0.11} _{-0.09}$ kpc to VY CMa. Compared with the previously accepted distance 1.5 kpc \\citep{bib:Lada1978}, the distance to VY CMa became 76 \\%. Since the luminosity depends on the square of the distance, the luminosity should become 58 \\% of previous estimates. 
Hence, here we re-estimate the luminosity of VY CMa with the most accurate distance.\nThe luminosity can be estimated as follows:\n\\begin{equation}\nL = 4 \\pi d^2 F_{bol},\n\\end{equation}\nwhere $L$ is luminosity, $d$ is distance and $F_{bol}$ is the bolometric flux. To obtain F$_{bol}$, we used the SED of VY CMa. The data are based on HST optical images and near-IR ground based images in \\citet{bib:Smith2001} and IRAS fluxes from 25 to 100 $\\mu$m. The F$_{bol}$ is obtained by integrating the observed fluxes. The estimated luminosity of VY CMa with our distance is (3 $\\pm$ 0.5) $\\times$ 10$^5$ L$_{\\odot}$.\n\nWe re-estimate the luminosities of VY CMa in the previous studies with our distance. \\citet{bib:LeSidaner1996} obtained a luminosity of VY CMa to be 9 $\\times$ 10$^5$ L$_{\\odot}$ from the SED at a kinematic distance of 2.1 kpc. With the distance of 1.14 kpc, the luminosity of \\citet{bib:LeSidaner1996} became 2.6 $\\times$ 10$^5$ L$_{\\odot}$. \\citet{bib:Smith2001} also estimated a luminosity of VY CMa from the SED and their luminosity is 5 $\\times$ 10$^5$ L$_{\\odot}$ at a distance of 1.5 kpc, which is is revised to be 3.0 $\\times$ 10$^5$ L$_{\\odot}$ using our distance. These luminosities are well consistent with each other. \\citet{bib:Massey2006} estimated an effective temperature of VY CMa to be 3650 K, which fitted the MARCS atmosphere model to the observed spectrophotometric data. They estimated an absolute V magnitude $M_V$ using the observed V magnitude, currently known distance of 1.5 kpc and a visible extinction $A_V$. Since the MARCS model presents bolometric corrections as a function of effective temperature, they obtained the luminosity of VY CMa to be 6.0 $\\times$ 10$^4$ L$_{\\odot}$, much lower than our value derived above. When they adopt our distance, their luminosity would be increased. The accurate distance is essential to estimate the accurate luminosity. \n\nTo place VY CMa on the HR diagram, we adopted an effective temperature from the literatures. As we already mentioned, \\citet{bib:Massey2006} estimated an effective temperature of VY CMa to be 3650 K based on the observed spectrophotometric data. With the previously accepted spectral type of M4-M5, \\citet{bib:LeSidaner1996} obtained an effective temperature of 2800 K and \\citet{bib:Smith2001} also adopted an effective temperature of 3000 K. Since the effective temperature does not depend on the distance, our measurements cannot judge which effective temperature is correct. \n\nAlthough we cannot estimate the effective temperature of VY CMa with our measurements, when we adopt the effective temperature of 3650 K from the MARCS atmosphere model \\citep{bib:Massey2006}, the location of VY CMa on the HR diagram is determined as the filled square in figure \\ref{fig:HRDthis}. For comparison, we also show VY CMa's locations on the HR diagram obtained in the previous studies as filled circles. Our results suggest that the location of VY CMa on the HR diagram is now consistent with the evolutionary track of an evolved star with an initial mass of 25 M$_{\\odot}$. Also, re-scaled luminosity values of \\citet{bib:LeSidaner1996} and \\citet{bib:Smith2001} imply much closer locations to the 25 M$_{\\odot}$ track than their original ones which were deeply inside the ``forbidden zone''. On the other hand, the lower limit of luminosity of 6.0 $\\times$ 10$^4$ L$_{\\odot}$ in \\citet{bib:Massey2006} would be consistent with 15 M$_{\\odot}$ initial mass of VY CMa. 
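The rescaled literature luminosities quoted above follow directly from the $L \propto d^2$ dependence; a minimal check (small differences with respect to the quoted values are due to rounding):

\begin{verbatim}
d_new = 1.0 / 0.88   # kpc, this work

# (luminosity in units of 1e5 L_sun, adopted distance in kpc)
literature = {"LeSidaner1996": (9.0, 2.1),
              "Smith2001": (5.0, 1.5)}

for ref, (L_old, d_old) in literature.items():
    L_new = L_old * (d_new / d_old) ** 2
    print("%s: %.1f x 1e5 L_sun" % (ref, L_new))
# -> about 2.6 and 2.9, in line with the 2.6 and 3.0 quoted in the text
\end{verbatim}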
According to \\citet{bib:Hirschi2004}, there is an order of magnitude difference in lifetimes between 15 M$_{\\odot}$ and 25 M$_{\\odot}$ in the initial mass. The improvement in mass and age values should affect statistical studies on evolution of massive stars such as the initial mass function \\citep{bib:Salpeter1955}. \n\nOn the other hand, there is an argument against the effective temperature in \\citet{bib:Massey2006}. \\citet{bib:Humphreys2006} suggested that the spectrum in \\citet{bib:Massey2006} are more like their M4-type reference spectrum than M2-type reference spectrum. \\citet{bib:Humphreys2006} also pointed out that the modeling applicable for ``standard\" stars without mass-loss (MARCS) is simply not valid for an object like VY CMa. When we adopt the previously accepted spectral type, instead of M2.5I \\citep{bib:Massey2006}, the effective temperature is 3000 K \\citep{bib:Smith2001}. The location of VY CMa on the HR diagram is shown as the open square in figure \\ref{fig:HRDthis}. In this case, the position on the HR diagram is still not consistent with the theoretical evolutionary track. Table \\ref{tab:first} summarized our adopted parameters for VY CMa.\n\n\\begin{table*}\n \\caption{Adopted parameters for VY CMa}\\label{tab:first}\n \\begin{center}\n \\begin{tabular}{lll}\n \\hline\n Parameter & Value & Note \\\\ \\hline\n Parallax & 0.88 $\\pm$ 0.08 mas \\\\\n Distance & 1.14 $^{+0.11} _{-0.09}$ kpc \\\\\n Luminosity & (3 $\\pm$ 0.5) $\\times$ 10$^5$ L$_{\\odot}$ \\\\\n Mass & 25 M$_{\\odot}$ & \\citet{bib:Meynet2003} \\\\\n Temperature & 3650 $\\pm$ 25 K & \\citet{bib:Massey2006} \\\\\n & 3000 K &\\citet{bib:Smith2001} \\\\\n \\hline\n \\end{tabular}\n \\end{center}\n\\end{table*}\n\n\\begin{figure*}\n \\begin{center}\n \\FigureFile(100mm,100mm){figure4.eps}\n \n \\end{center}\n \\caption{The various locations of VY CMa on the HR diagram. The filled and the open squares represent our results. The luminosity of our result is calculated using the distance based on the trigonometric parallax measurements and the bolometric flux from the SED. The filled square adopted the effective temperature of 3650 K \\citep{bib:Massey2006} and the open square adopted 3000 K\\citep{bib:Smith2001}. The circles ``1,'' ``2'' and ``3'' represent the results of \\citet{bib:LeSidaner1996}, \\citet{bib:Smith2001} and \\citet{bib:Massey2006}, respectively. The evolutionary tracks are from \\citet{bib:Meynet2003}.} \\label{fig:HRDthis}\n\\end{figure*}\n\nThe accurate distance measurements of red supergiants, which provide true luminosities, will greatly contribute to the understanding of massive star evolution, though there is still uncertainty in the effective temperature on the stellar surface. \n\n\\section{Conclusion}\n\nWe have observed the H$_2$O masers around the red supergiant VY CMa with VERA during 10 epochs spread over 13 months. Simultaneous observations for both H$_2$O masers around VY CMa and the position reference source J0725--2640 were carried out. We measured a trigonometric parallax of 0.88 $\\pm$ 0.08 mas, corresponding to a distance of 1.14 $^{+0.11} _{-0.09}$ kpc from the Sun. It is the first result that the distance of VY CMa is determined with an annual parallax measurement. There had been overestimation of the luminosities in previous studies due to the previously accepted distance. 
Using the most accurate distance based on the trigonometric parallax measurements and the bolometric flux from the observed SED, we estimated the luminosity of VY CMa to be (3 $\\pm$ 0.5) $\\times$ 10$^5$ L$_{\\odot}$. The accurate distance measurements provided the improved luminosity. The location of VY CMa on the HR diagram became much close to the theoretically allowable region, though there is still uncertainty in the effective temperature. \n \n~\\\\\nWe are grateful to the referee, Dr. Philip Massey, for helpful comments on the manuscript. We would like to thank Prof. Dr. Karl M. Menten for his invaluable comments and for his help improving the manuscript. The authors also would like to thank all the supporting staffs at Mizusawa VERA observatory for their assistance in observations.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe similarity between the $D=5$ simple supergravity (SUGRA) and $D=11$\nSUGRA has been recognized for a long time \\cite{Cr81,ChNi80}.\nThe $D=11$ SUGRA is supposed to play a fundamental role as the\nlow energy limit of the M-theory \\cite{Wi95} --- an expected unified\nspeculation for the well-known five consistent superstring theories.\nThe field contents of the $D=11$ SUGRA theory consist of the metric,\na single Majorana spin-$\\frac32$ fermion along with a (singlet)\nthree-form gauge potential, with neither ``$N>1$'' extensions\nnor matter coupling permitted \\cite{CrJuSc78}.\nThe simple $D=5$ SUGRA, besides the metric $\\hat g_{AB}$, contains a\nspin-$\\frac32$ field $\\hat \\Psi^a_A$ ($a=1,2$ is an internal index)\nand $U(1)$ gauge field (one-form $\\hat B_A$) which replaces the\nthree-form gauge field in the $D=11$ SUGRA.\nThe ``primeval'' likeness comes directly from the fact that\nthe Lagrangians of both SUGRAs are exactly of the same form,\nexcept for the numbering of the gauge field indices.\nIn addition, their dimensional reduction to $D=4$ can be carried out in\na similar way \\cite{CrJu79}. 
Furthermore,\nthe $D=5$ simple SUGRA can be realized as a Calabi-Yau compactification\nof the $D=11$ SUGRA together with the truncation of the scalar multiplets,\nwhich is always necessary since there arises at least one scalar\nmultiplet for any Calabi-Yau compactification\n\\cite{CaCeAuFe95,FeKhMi96}.\nFurther resemblances between the two SUGRAs are related to the duality\ngroups upon dimensional reduction and the world sheet structure of the\nsolitonic string of the $D=5$ SUGRA \\cite{MiOh98}.\n\nThus the four-dimensional reduced effective action of the $N=2, D=5$\nSUGRA contains an additional Maxwell-like $U(1)$ field and a scalar\nfield regarded as external fields in five dimensions which are contributed\nby $\\hat B$, besides the ones coming from the metric\n$\\hat g_{AB}$ as in the traditional scheme for the Kaluza-Klein theory\n\\cite{AuFrMaRe82,AuMaReFr81,BaFaKe90a,BaFaKe90b}.\nCosmological solutions to this model have been previously considered\nby Balbinot, Fabris and Kerner \\cite{BaFaKe90a,BaFaKe90b}.\nFor the case of spatial homogeneity and isotropy the general\nsolution is non-singular in the scale factor, but unstable due to the\ncollapse to zero of the size of the fifth dimension\n\\cite{BaFaKe90a,BaFaKe90b}.\nBiaxial (with two equal scale factors) anisotropic solutions with a\ncylindrical homogeneous five-dimensional metric lead to singular\nsolutions with positive gravitational coupling \\cite{BaFaKe90b}.\nRecently, an explicit example of a manifestly $U$-duality covariant\nM-theory cosmology in five dimensions resulting from compactification\non a Calabi-Yau three-fold has been obtained in \\cite{LuOvWa98}.\nExact static solutions in $N=2,D=5$ SUGRA have been found by Pimentel\n\\cite{Pi95}, in a metric with cylindrical symmetry, with a particular\ncase corresponding to the exterior of a cosmic string.\n\nThe purpose of the present paper is to construct the general solution\nto the gravitational field equations of the $N=2, D=5$ SUGRA as\nformulated in \\cite{BaFaKe90a,BaFaKe90b} for an anisotropic triaxial\n(all directions have unequal scale factors) Bianchi type I space-time.\nIn this case the general solution of the field equations can be\nexpressed in an exact parametric form.\nFor all cosmological solutions, the singularity at the starting\/ending\ntime of the evolution can not be avoided except in the isotropic limit\nconsidered in \\cite{BaFaKe90a,BaFaKe90b}.\nNevertheless, in the models analyzed in this paper, the anisotropic\nUniverse has non-inflationary evolution for all times and for\nall values of parameters.\n\nThe present paper is organized as follows.\nThe field equations of our model are written down in Section II.\nIn Section III the general solution of the field equations is obtained.\nWe discuss our results and conclusions in Section IV.\n\n\\section{Field Equations, Geometry and Consequences}\nThe bosonic sector of $N=2, D=5$ SUGRA contains the five-dimensional\nmetric $\\hat g_{AB}$ and $U(1)$ gauge field $\\hat B_A$ described by\na Lagrangian which possesses a non-vanishing Chern-Simons term \\cite{Cr81}\n\\begin{eqnarray} \\label{L5}\n\\hat {\\cal L} &=& \\sqrt{-\\hat g} \\left\\{ \\hat R\n - \\frac14 \\hat F_{AB} \\hat F^{AB} \\right\\} \\nonumber \\\\\n &-& \\frac1{12\\sqrt{3}} \\epsilon^{ABCDE}\\hat F_{AB}\\hat F_{CD}\\hat B_E,\n\\end{eqnarray}\nwhere $\\hat F_{AB} = 2\\partial_{[A} \\hat B_{B]}$.\nIn this paper we use the following conventions and notations.\nThe variables with hats are five-dimensional objects all other variables\nare 
four-dimensional. Upper case indices $A,B,...$ are\nused for five-dimensional space-time, greek indices\n$\\mu,\\nu,...$ and low case indices $i,j,...$ are for\nfour-dimensional space-time and three-dimensional space\nrespectively. The signature is $(-,+,+,+,+)$.\n\nAssuming that the five dimensional space-time has locally the structure\nof $M^4 \\times S^1$ with a four-dimensional space-time $M^4$\nwhose spatial sections are homogeneous and asymptotic flat, then\nthe five-dimensional metric can be decomposed along the standard\nKaluza-Klein pattern\n\\begin{equation}\nd \\hat s^2 = \\phi^2 (dx_4 + A_\\mu dx^\\mu)^2 + g_{\\mu\\nu} dx^\\mu dx^\\nu,\n\\end{equation}\nwhere the scale factor $\\phi$ and Kaluza-Klein vector $A_\\mu$ are\nfunctions depending on $x^\\mu$ only.\n\nLooking for a ``ground state'' configuration we set, following\n\\cite{BaFaKe90a,BaFaKe90b}, the Kaluza-Klein vector, $A_\\mu$, equal\nto zero and take the one-form potential $\\hat B_A$ to be $\\hat B_\\mu=0$\nand $\\hat B_4= \\sqrt{3} \\psi(x^\\mu)$.\nUnder this ans\\\"atz, the five-dimensional gravitational field equations\nfor (\\ref{L5}) reduce to a set of four-dimensional equations\n\\begin{eqnarray}\nR_{\\mu\\nu} &-& \\phi^{-1} D_\\mu D_\\nu \\phi \\nonumber\\\\\n &-& \\frac12 \\phi^{-2} \\left[ 3 \\partial_\\mu \\psi \\partial_\\nu \\psi\n - g_{\\mu\\nu} (\\partial\\psi)^2 \\right] = 0, \\label{FEA1} \\\\\nD^2 \\phi &+& \\phi^{-1} (\\partial\\psi)^2 = 0, \\label{FEA2} \\\\\nD^2 \\psi &-& \\phi^{-1} \\partial_\\mu \\phi \\partial^\\mu \\psi = 0,\n \\label{FEA3}\n\\end{eqnarray}\nwhere $D$ denotes the four-dimensional covariant derivative with\nrespect to the metric $g_{\\mu\\nu}$.\nEquivalently, the field equations can be re-derived,\nin the string frame, from the four-dimensional Lagrangian\n\\cite{BaFaKe90a,BaFaKe90b}\n\\begin{equation} \\label{L4}\n{\\cal L} = \\sqrt{-g} \\phi \\left\\{ R\n - \\frac32 \\phi^{-2} (\\partial\\psi)^2 \\right\\},\n\\end{equation}\nvia variation with respect to the fields $g_{\\mu\\nu}, \\phi$ and $\\psi$.\nIn the Lagrangian (\\ref{L4}), the scale factor $\\phi$ is an analogue\nof the Brans-Dicke field whereas the origin of $\\psi$ is purely\nsupersymmetric.\n\nThe line element of an anisotropic homogeneous flat Bianchi type I\nspace-time is given by\n\\begin{equation}\nds^2 = - dt^2 + a_1^2(t) dx^2 + a_2^2(t) dy^2 + a_3^2(t) dz^2.\n\\end{equation}\nDefining the ``volume scale factor'', $V := \\prod_i a_i$,\n``directional Hubble factors'', $H_i := \\dot a_i\/a_i$,\nand ``average Hubble factor'', $H := \\frac13 \\sum_i H_i$, one can\npromptly find the relation $3H = \\dot V\/V$,\nwhere dot means the derivative with respect to time $t$.\nIn terms of those variables, the field equations\n(\\ref{FEA1}) and the equations of motion for $\\phi$ and $\\psi$\n(\\ref{FEA2},\\ref{FEA3}) coupling with the anisotropic Bianchi type I\ngeometry take the concise forms\n\\begin{eqnarray}\n3 \\dot H + \\sum_i H_i^2 + \\phi^{-1} \\ddot \\phi\n + \\phi^{-2} \\dot \\psi^2 &=& 0, \\label{FEB1} \\\\\nV^{-1} \\frac{d}{dt} (VH_i) + H_i \\phi^{-1} \\dot\\phi\n - \\frac12 \\phi^{-2} \\dot \\psi^2 &=& 0, \\; i=1,2,3,\n \\label{FEB2} \\\\\nV^{-1} \\frac{d}{dt} (V\\dot\\phi) + \\phi^{-1} \\dot \\psi^2\n &=& 0, \\label{FEB3} \\\\\nV^{-1} \\frac{d}{dt} (V\\dot\\psi) - \\phi^{-1} \\dot\\phi \\dot\\psi\\\n &=& 0. 
\\label{FEB4}\n\\end{eqnarray}\n\nThe physical quantities of interest in cosmology are the {\\em expansion\nscalar} $\\theta$, the {\\em mean anisotropy parameter} $A$,\nthe {\\em shear scalar} $\\sigma^2$ and the {\\em deceleration parameter}\n$q$ defined as \\cite{Gr85}\n\\begin{eqnarray}\n\\theta &:=& 3H, \\qquad\nA := \\frac13 \\sum_i \\left( \\frac{H-H_i}{H} \\right)^2, \\nonumber \\\\\n\\sigma^2 &:=&\n \\frac12 \\left( \\sum_i H_i^2 - 3 H^2 \\right), \\quad\nq := \\frac{d}{dt} H^{-1} - 1. \\label{Def}\n\\end{eqnarray}\n\nThe sign of the deceleration parameter indicates whether the cosmological\nmodel inflates. A positive sign corresponds to standard decelerating\nmodels whereas a negative sign indicates inflationary behavior.\n\n\\section{General Solution of the Field Equations}\nEquation (\\ref{FEB4}) can immediately be integrated to give\n\\begin{equation}\nV \\dot \\psi = \\omega \\phi, \\label{VPP}\n\\end{equation}\nwith $\\omega$ --- a constant of integration.\nFrom equations (\\ref{FEB3}) and (\\ref{FEB4}) one can find that the\nexpressions of the fields $\\phi(t)$ and $\\psi(t)$ have the following form\n\\begin{eqnarray}\n\\phi(t) &=& \\phi_0 \\cos \\left( \\omega \\int \\frac{dt}{V}\n + \\omega_0 \\right), \\label{phi} \\\\\n\\psi(t) &=& \\psi_0 + \\phi_0 \\sin \\left( \\omega \\int \\frac{dt}{V}\n + \\omega_0 \\right), \\label{psi}\n\\end{eqnarray}\nwhere $\\phi_0, \\psi_0$ and $\\omega_0$ are constants of integration.\n\nBy summing equations (\\ref{FEB2}) one gets\n\\begin{equation}\nV^{-1} \\frac{d}{dt} (VH) + H \\phi^{-1} \\dot\\phi\n - \\frac12 \\phi^{-2} \\dot \\psi^2 = 0, \\label{FEB2a}\n\\end{equation}\nwhich can be transformed, by using equations (\\ref{VPP}) and (\\ref{phi}),\ninto the following differential-integral equation\ndescribing the dynamics and evolution\nof a triaxial Bianchi type I space-time in $N=2,D=5$ SUGRA:\n\\begin{equation} \\label{EqV}\n\\ddot V = \\omega \\frac{\\dot V}{V} \\tan \\left( \\omega \\int \\frac{dt}{V}\n + \\omega_0 \\right) + \\frac32 \\frac{\\omega^2}{V}.\n\\end{equation}\n\nFurthermore, by subtracting equation (\\ref{FEB2a}) from equations\n(\\ref{FEB2}), one can solve for the $H_i$ as\n\\begin{equation}\nH_i = H + \\frac{K_i}{\\phi V}, \\qquad i=1,2,3, \\label{Hi}\n\\end{equation}\nwhere $K_i$ are constants of integration satisfying the following\nconsistency condition\n\\begin{equation}\n\\sum_i K_i = 0. 
\\label{Ki}\n\\end{equation}\nTherefore the physical quantities of interest (\\ref{Def}) reduce to\n\\begin{equation}\nA = \\frac{K^2}{3\\phi^2 V^2 H^2}, \\qquad\n\\sigma^2 = \\frac32 A H^2,\n\\end{equation}\nwhere $K^2 = \\sum_i K_i^2$.\n\nBy introducing a new variable $\\eta$ related to the physical time $t$\nby means of the transformation $d\\eta := dt\/V$ and by denoting\n$u:=\\dot V = dV\/(V d\\eta)$, equation (\\ref{EqV}) reduces to a first\norder linear differential equation for the unknown function $u$\n\\begin{equation}\n\\frac{du}{d\\eta} = \\omega \\tan( \\omega\\eta + \\omega_0) u\n + \\frac32 \\omega^2,\n\\end{equation}\nwhose general solution is given by\n\\begin{equation}\nu = C \\cos^{-1}(\\omega\\eta+\\omega_0)\n + \\frac32 \\omega \\tan(\\omega\\eta+\\omega_0), \\label{dotV}\n\\end{equation}\nwhere $C$ is an arbitrary constant of integration.\n\nDefining a new parameter $\\zeta(\\eta):=\\omega\\eta+\\omega_0$,\nwe can represent the general solution of\nthe field equations for a Bianchi type I space-time in\nthe $N=2,D=5$ SUGRA in the following exact parametric form:\n\\begin{eqnarray}\nt &=& t_0 + \\frac{V_0}{\\omega} \\int\n \\frac{(1+\\sin\\zeta)^\\beta}{(1-\\sin\\zeta)^\\gamma} d\\zeta,\n \\label{T} \\\\\nV &=& V_0 \\frac{(1+\\sin\\zeta)^\\beta}{(1-\\sin\\zeta)^\\gamma},\n \\label{V} \\\\\nH &=& \\frac{\\omega}{2V_0} (\\alpha + \\sin\\zeta)\n \\frac{(1-\\sin\\zeta)^{\\gamma-\\frac12}}{(1+\\sin\\zeta)^{\\beta+\\frac12}}, \\\\\na_{i} &=& a_{i0}\n \\frac{(1+\\sin\\zeta)^{\\frac{\\beta}3+\\frac{K_i}{2\\omega\\phi_0}}}\n {(1-\\sin\\zeta)^{\\frac{\\gamma}3+\\frac{K_i}{2\\omega\\phi_0}}},\n \\quad i=1,2,3, \\label{ai}\n\\end{eqnarray}\nwhere we have denoted $\\alpha = \\frac{2C}{3\\omega}$,\n$\\beta=\\frac34(\\alpha-1)$, $\\gamma=\\frac34(\\alpha+1)$ and the $a_{i0}$\nare arbitrary constants of integration while $V_0=\\Pi_i a_{i0}$.\nThe observationally important physical quantities are given by\n\\begin{eqnarray}\nA &=& \\frac{4K^2}{3\\phi_0^2 \\omega^2}\n \\left( \\alpha + \\sin\\zeta \\right)^{-2}, \\label{AA} \\\\\n\\sigma^2 &=& \\frac{K^2}{2\\phi_0^2 V_0^2}\n \\frac{(1-\\sin\\zeta)^{2\\gamma-1}}{(1+\\sin\\zeta)^{2\\beta+1}}, \\\\\nq &=& 2 \\left\\{ 1 - \\frac{ 1 + \\alpha \\sin\\zeta}\n {(\\alpha + \\sin\\zeta)^2} \\right\\}. 
\\label{qq}\n\\end{eqnarray}\nFinally, the field equation (\\ref{FEB1}) gives a\nconsistency condition relating the constants $K^2, \\omega, \\alpha$ and\n$\\phi_0$:\n\\begin{equation}\nK^2 = \\frac32 \\phi_0^2 \\omega^2 \\left( \\alpha^2 - 1 \\right), \\label{K2}\n\\end{equation}\nleading to\n\\begin{equation}\n\\alpha \\ge 1 \\quad \\hbox{or} \\quad \\alpha \\le -1.\n\\end{equation}\n\nIt is worth noting that these two classes of solutions corresponding to\npositive or negative values of $\\alpha$ and $\\omega$ are not independent.\nIndeed, they can be related via a ``duality'' transformation by\nchanging the signs of $\\omega, \\alpha$ and $\\zeta$ so that\n$t(\\omega,\\alpha,\\zeta)=t(-\\omega,-\\alpha,-\\zeta)$,\n$a_i(\\omega,\\alpha,\\zeta)=a_i(-\\omega,-\\alpha,-\\zeta), i=1,2,3$,\nand $V(\\alpha,\\zeta)=V(-\\alpha,-\\zeta)$ etc.\nThis duality relation can be obtained by a simple inspection of equations\n(\\ref{T})-(\\ref{ai}) and, therefore, all physical quantities are\ninvariant with respect to this transformation.\nMoreover, the physical properties of the cosmological\nmodels presented here are strongly dependent on the signs of the\nparameters $\\alpha$ and $\\omega$.\nNevertheless, due to the duality transformation, hereafter we will consider,\nwithout loss of generality, the cases with positive $\\alpha$ only.\n\nFor some particular values of $\\alpha$, the general solutions\ncan be expressed in an exact non-parametric form, for instance,\nan exact class solutions can be obtained for $\\alpha = \\pm \\frac53$.\nBy introducing a new time variable $\\tau:=\\frac{3\\sqrt{2}\\omega}{V_0}t$,\nand choosing $t_0=\\mp\\frac{V_0}{3\\sqrt{2}\\omega}$,\nthe exact solution in $N=2,D=5$ SUGRA\nfor the Bianchi type I space-time is given by\n\\begin{eqnarray}\n\\tau &=& \\pm \\left[ \\cos^{-3} \\left( \\frac{\\zeta}2 \\pm\\frac{\\pi}4 \\right)\n -1 \\right], \\\\\nV &=& \\frac{V_0}{2\\sqrt{2}} (1\\pm\\tau) \\left[ (1\\pm\\tau)^\\frac23 - 1\n \\right]^\\frac12, \\\\\nH &=& \\frac{\\sqrt{2}\\omega}{V_0} \\frac{\\frac43 (1\\pm\\tau)^\\frac23 -1}\n {(1\\pm\\tau)\\left[ (1\\pm\\tau)^\\frac23 - 1\\right]}, \\\\\na_i &=& \\frac{a_{i0}}{\\sqrt{2}} (1\\pm\\tau)^\\frac13 \\left[\n (1\\pm\\tau)^\\frac23 - 1 \\right]^{\\frac16\\pm\\frac{K_i}{2\\omega\\phi_0}}, \\\\\nA &=& \\frac89 (1\\pm\\tau)^\\frac43 \\left[ \\frac43 (1\\pm\\tau)^\\frac23 - 1\n \\right]^{-2}, \\\\\n\\sigma^2 &=& \\frac{8\\omega^2}{3V_0^2} (1\\pm\\tau)^{-\\frac23}\n \\left[ (1\\pm\\tau)^\\frac23 - 1 \\right]^{-2}, \\\\\nq &=& 2 \\left\\{ 1-\\frac{(1\\pm\\tau)^\\frac23\\left[4(1\\pm\\tau)^\\frac23-5\\right]}\n {6\\left[\\frac43(1\\pm\\tau)^\\frac23-1\\right]^2} \\right\\}.\n\\end{eqnarray}\n\nThe isotropic limit can be achieved by taking $\\alpha=\\pm 1$\nand, consequently, $K_i=0,i=1,2,3$.\nIt is worth noting that our solutions reduce to two different\ntypes of homogeneous space-times when $\\alpha=\\pm 1$.\n\nFor $\\alpha=1$, we obtain (by denoting $a_1=a_2=a_3=a$)\n\\begin{eqnarray}\nt &=& t_0+\\frac{V_0}{2\\sqrt{2}\\omega}\\left[ \\frac{\\sin\\theta}{\\cos^2\\theta}\n + \\ln(\\tan\\theta+\\sec\\theta) \\right], \\label{IT1} \\\\\na &=& \\frac{a_0}{2\\sqrt{2}} \\cos^{-3}\\theta, \\label{IA1}\n\\end{eqnarray}\nwhere $\\theta:=\\frac{\\zeta}2 + \\frac{\\pi}4$.\nEqs.(\\ref{IT1}) and (\\ref{IA1}), describing a homogeneous flat isotropic\nspace-time interacting with two scalar fields (Kaluza-Klein and\nsupersymmetric), have been previously obtained by Balbinot, Fabris and\nKerner \\cite{BaFaKe90a} (for an extra choice of the 
parameter\n$\\omega_0=\\pi\/2$), who extensively studied their physical properties.\nThis isotropic solution also provides a positive gravitational coupling\nat the present time.\n\nIn the isotropic limit corresponding to $\\alpha=-1$, one can obtain\nanother class of isotropic homogeneous flat space-times represented in\nthe following parametric form by\n\\begin{eqnarray}\nt &=& t_0-\\frac{V_0}{2\\sqrt{2}\\omega}\\left[ \\frac{\\cos\\theta}{\\sin^2\\theta}\n - \\ln(\\csc\\theta-\\cot\\theta) \\right], \\label{IT2} \\\\\na &=& \\frac{a_0}{2\\sqrt{2}} \\sin^{-3}\\theta. \\label{IA2}\n\\end{eqnarray}\nThis type of flat space-time has not been previously considered.\n\n\\section{Discussions and Final Remarks}\nIn order to study the physical properties of the Bianchi type I Universe\ndescribed by the Eqs. (\\ref{V})-(\\ref{ai}) we need to fix first the range\nof variation of the parameter $\\zeta$. There are no a priori limitations\nin choosing the admissible range of values, thus both positive and negative\nvalues are permitted since the variable $\\eta=\\int dt\/V$ can also\nbe negative.\nBut from a physical point of view it is natural to impose the condition\nsuch that the gravitational coupling $\\phi$ is always positive during\nthe evolution of the Bianchi type I space-time in $N=2,D=5$ SUGRA.\nConsequently, we shall consider $\\zeta\\in (-\\pi\/2, \\pi\/2)$.\nWith this choice, the Universe for $\\alpha<-1$ starts its evolution\nin the infinite past ($t\\to -\\infty$) and ends at a finite moment $t=t_0$.\nFor $\\alpha>1$ the Universe starts at $t=t_0$ and ends in an infinite\nfuture with $t \\to \\infty$. (All discussion here and hereafter are with\nrespect to positive $\\omega$.)\n\nAs can be easily seen from equation (\\ref{dotV}), if $\\alpha<-1$\nwe have $\\dot V < 0$ for all $t$.\nFor these values of the parameters the\nBianchi type I anisotropic Universe collapses from an initial state\ncharacterized by infinite values of the volume scale factor and of the\nscale factors $a_i,i=1,2,3,$ to a singular state\nwith all factors vanishing.\nBut if $\\alpha>1$, the Universe expands and $\\dot V > 0$ for all $t$.\nThe expanding Bianchi type I Universe starts its evolution at the\ninitial moment, $t_0$, from a singular state with zero values of the\nscale factors, $a_i(t_0)=0,i=1,2,3$.\n\nAnother possible way to investigate the singularity behavior at the\ninitial moment is to consider the sign of the quantity\n$R_{\\mu\\nu} u^\\mu u^\\nu$, where $u^\\mu$ is the vector tangent to the\ngeodesics; $u^\\mu=(-1,0,0,0)$ for the present model.\nFrom the gravitational field equations we easily obtain\n\\begin{equation}\nR_{\\mu\\nu} u^\\mu u^\\nu = 3 H^2 (q-A). 
\\label{Ruu}\n\\end{equation}\nBy using equations (\\ref{AA}), (\\ref{qq}) and (\\ref{K2}) we can\nexpress (\\ref{Ruu}) as\n\\begin{equation}\nR_{\\mu\\nu} u^\\mu u^\\nu\n = 6H^2 \\frac{\\sin\\zeta}{\\alpha+\\sin\\zeta}.\n\\end{equation}\nFor $\\zeta \\to -\\pi\/2$ the sign of $R_{\\mu\\nu}u^\\mu u^\\nu$ is determined\nby the sign of $1-\\alpha$.\nTherefore we obtain\n\\begin{equation}\nR_{\\mu\\nu} u^\\mu u^\\nu < 0 \\quad \\hbox{for} \\quad \\alpha > 1.\n\\end{equation}\nHence, the energy condition of Hawking-Penrose singularity theorems\n\\cite{HaEl73} is not satisfied for the solutions corresponding to Bianchi\ntype I Universes in the four-dimensional reduced two scalar fields theory of\n$N=2,D=5$ SUGRA with $\\alpha>1$.\nNevertheless, for those solutions an initial singular state is\n{\\em unavoidable} at the initial moment $t_0$.\n\nSince for $\\alpha>1$ the Bianchi type I Universe starts its evolution at\nthe initial moment $t=t_0$ ($\\zeta \\to -\\pi\/2$) from a singular state,\ntherefore, the presence of a variable gravitational coupling, $\\phi$,\nand of a supersymmetric field, $\\psi$, in an anisotropic geometry {\\em can\nnot} remove the initial singularity that mars the big-bang cosmology.\nAt the initial moment the degree of the anisotropy of the space-time\nis maximal, with the initial value of the anisotropy\nparameter $A(t_0)=2(\\alpha+1)\/(\\alpha-1)$.\nFor $t>t_0$ the Universe expands and the anisotropy parameter decreases.\n\nThe behavior of the volume scale factor, of the anisotropy parameter\nand of the deceleration parameter is presented for different values of\n$\\alpha$ in Figs. \\ref{FIG1}-\\ref{FIG3}.\nThe evolution of the Universe is non-inflationary, with $q>0$\nfor all $t>t_0$. Non-inflationary behavior is a generic feature of most\nof the supersymmetric models.\nThis is due to the general fact that the effective potential for the\ninflation field $\\sigma$ in SUGRA typically is too curved,\ngrowing as $\\exp (c \\sigma^2\/m)$, with $c$ a parameter\n(typically $O(1)$) and $m$ the stringy Planck mass \\cite{LiRi97}.\nThe typical values of $c$ make inflation impossible because the\ninflation mass becomes of the order of the Hubble constant.\nSee also \\cite{LiRi97} for a simple realization of hybrid inflation\nin SUGRA.\nIn the present model the presence of the supersymmetric field $\\psi$\nprevents the Bianchi type I Universe from inflating.\n\n\\begin{figure}\n\\epsfxsize=10cm\n\\epsffile{fig1.eps}\n\\caption{Behavior of the volume scale factor of the Bianchi type I\n space-time for different values of $\\alpha>1$\n ($V_0=1$ and $\\omega=1$): $\\alpha=3\/2$ (solid curve),\n $\\alpha=2$ (dotted curve) and $\\alpha=5\/2$ (dashed curve).}\n\\label{FIG1}\n\\end{figure}\n\n\\begin{figure}\n\\epsfxsize=10cm\n\\epsffile{fig2.eps}\n\\caption{Time dependence of the parameter\n $a=\\frac{3\\phi_0^2\\omega^2}{4K^2} A$ for different values of\n $\\alpha$: $\\alpha=3\/2$ (solid curve),\n $\\alpha=2$ (dotted curve) and $\\alpha=5\/2$ (dashed curve).}\n\\label{FIG2}\n\\end{figure}\n\n\\begin{figure}\n\\epsfxsize=10cm\n\\epsffile{fig3.eps}\n\\caption{Evolution of the deceleration parameter $q$ of the Bianchi\n type I space-time for different values of $\\alpha$:\n $\\alpha=3\/2$ (solid curve),\n $\\alpha=2$ (dotted curve) and $\\alpha=5\/2$ (dashed curve).}\n\\label{FIG3}\n\\end{figure}\n\nIn the far future, for $\\zeta \\to \\pi\/2$ and $t\\to \\infty$ ($\\alpha>1$),\nwe have $V \\to \\infty, a_i \\to\\infty, i=1,2,3$.\nIn this limit the anisotropy parameter becomes a 
non-zero constant and\nthe Universe ends in a still anisotropic phase, but with\na decrease in the value of the anisotropy parameter $A$,\nas compared with the initial one.\nTherefore during its evolution the Bianchi type I Universe cannot\nexperience a transition from the anisotropic phase to the isotropic\nflat geometry.\nThe time evolution of the gravitational coupling $\\phi$ and of the\nsupersymmetric field $\\psi$ is represented in Fig. \\ref{FIG4}.\nThe $\\phi$-field is positive for all values of time.\n\n\\begin{figure}\n\\epsfxsize=10cm\n\\epsffile{fig4.eps}\n\\caption{Time evolution of the gravitational coupling $\\phi$ (solid curve)\n and of the supersymmetric field $\\psi$ (dashed curve)\n for $\\alpha=5\/2$ ($\\phi_0=1$, $\\psi_0=0$).}\n\\label{FIG4}\n\\end{figure}\n\nIn the present paper we have investigated the evolution and dynamics\nof a Bianchi type I space-time in a SUGRA toy-model, obtained\nby dimensional reduction of the $N=2,D=5$ SUGRA.\nThe inclusion of the supersymmetric term gives some particular\nfeatures to this cosmological model, by preventing the Universe from\ninflating and from attaining complete isotropy.\nGlobally, however, there is a decrease in the degree of anisotropy of the\ngeometry. Hence this model can be used to describe only a specific,\nwell-determined period of the evolution of our Universe.\n\n\\section*{Acknowledgments}\nOne of the authors (CMC) would like to thank Prof. J.M. Nester for\nprofitable discussions.\nThe work of CMC was supported in part by the National Science Council\n(Taiwan) under grant NSC 89-2112-M-008-016.\n\nWe are also grateful to Prof. Pimentel for calling our attention to the\nresults \\cite{Pi92,PiSo93,PiSo95} about several types of Bianchi\ncosmologies in the framework of $N=2,D=5$ supergravity.\n\n\\begin{appendix}\n\\section{Some Exact Forms for Physical Time}\nFor a large class of values of the parameter $\\alpha$ the general\nsolution of the gravitational field equations can be expressed in a\nclosed explicit form.\nThe variation of the physical time $t$ is determined by the integral\nequation (\\ref{T}). After some algebraic manipulation, one can rewrite this\nequation in the following form\n\\begin{equation}\nt = t_0 + \\frac{V_0}{\\sqrt{2}\\omega} \\int \\sin^{\\gamma'-3}\\theta\n \\cos^{-\\gamma'}\\theta d \\theta,\n\\end{equation}\nwhere $\\theta:=\\frac{\\zeta}2+\\frac{\\pi}4$ and\n$\\gamma':=2\\gamma=\\frac32(\\alpha+1)$.\nIn general, for arbitrary $\\gamma'$, this integral cannot be evaluated\nin closed form.\nFortunately, for integer values of $\\gamma'$, the physical time $t$ can\nbe expressed in an explicit form as a function of $\\zeta$. 
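For completeness, we record the substitution behind this rewriting: since\n$\\zeta=2\\theta-\\frac{\\pi}2$ implies $\\sin\\zeta=-\\cos 2\\theta$, one has\n$1+\\sin\\zeta=2\\sin^2\\theta$, $1-\\sin\\zeta=2\\cos^2\\theta$ and $d\\zeta=2d\\theta$,\nso that\n$$\n\\frac{(1+\\sin\\zeta)^\\beta}{(1-\\sin\\zeta)^\\gamma}\\, d\\zeta\n = 2^{\\beta-\\gamma+1} \\sin^{2\\beta}\\theta \\cos^{-2\\gamma}\\theta\\, d\\theta\n = \\frac1{\\sqrt{2}} \\sin^{\\gamma'-3}\\theta \\cos^{-\\gamma'}\\theta\\, d\\theta,\n$$\nwhere we used $\\gamma-\\beta=\\frac32$, $2\\beta=\\gamma'-3$ and $2\\gamma=\\gamma'$.\n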
Some of these\nexact forms of the time function are listed in the following.\n(The outcomes for $\\alpha\\pm1$ are given in (\\ref{IT1}) and (\\ref{IT2}) ).\n\n\\noindent{\\bf For $\\alpha < -1$ :}\n\n\\noindent{(i) $\\alpha=-\\frac53, \\; (\\gamma'=-1),$}\n$$ t=t_0-\\frac{V_0}{3\\sqrt{2}\\omega} \\frac1{\\sin^3\\theta}, $$\n\n\\noindent{(ii) $\\alpha=-\\frac73, \\; (\\gamma'=-2),$}\n$$\nt=t_0-\\frac{V_0}{8\\sqrt{2}\\omega}\\Biggl[\n \\frac{\\cos\\theta(\\cos^2\\theta+1)}{\\sin^4\\theta}\n + \\ln(\\csc\\theta-\\cot\\theta) \\Biggr],\n$$\n\n\\noindent{(iii) $\\alpha=-3, \\; (\\gamma'=-3),$}\n$$\nt=t_0-\\frac{V_0}{15\\sqrt{2}\\omega}\\Biggl(\n \\frac{5\\cos^2-2}{\\sin^5\\theta} \\Biggr).\n$$\n\n\\noindent{\\bf For $\\alpha > 1$ :}\n\n\\noindent{(i) $\\alpha=\\frac53, \\; (\\gamma'=4),$}\n$$ t=t_0+\\frac{V_0}{3\\sqrt{2}\\omega} \\frac1{\\cos^3\\theta}, $$\n\n\\noindent{(ii) $\\alpha=\\frac73, \\; (\\gamma'=5),$}\n$$\nt=t_0+\\frac{V_0}{8\\sqrt{2}\\omega}\\Biggl[\n \\frac{\\sin\\theta(\\sin^2\\theta+1)}{\\cos^4\\theta}\n - \\ln(\\tan\\theta+\\sec\\theta) \\Biggr],\n$$\n\n\\noindent{(iii) $\\alpha=3, \\; (\\gamma'=6),$}\n$$\nt=t_0+\\frac{V_0}{15\\sqrt{2}\\omega}\\Biggl(\n \\frac{5\\sin^2\\theta-2}{\\cos^5\\theta} \\Biggr).\n$$\n\n\\end{appendix}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe \\emph{colouring number $\\mathrm{col}(G)$} of a graph $G$ is the minimum\n$k$ for which there is a linear order~$<_L$ on the vertices of $G$\nsuch that each vertex $v$ has \\emph{back-degree} at most $k-1$, that\nis, $v$ has at most $k-1$ neighbours $u$ with $u<_Lv$. The colouring\nnumber is a measure for uniform sparseness in graphs: we have\n$\\mathrm{col}(G)=k$ if and only if every subgraph $H$ of $G$ has a vertex of\ndegree at most $k-1$. Hence, provided $\\mathrm{col}(G)=k$, not only $G$ is\nsparse, but also every subgraph of $G$ is sparse. The colouring number\nminus one is also known as the \\emph{degeneracy}.\n\nRecently, Ne\\v{s}et\\v{r}il and Ossona de Mendez introduced the notions\nof \\emph{bounded expansion}~\\cite{nevsetvril2008grad} and\n\\emph{nowhere density} \\cite{nevsetvril2011nowhere} as very general\nformalisations of uniform sparseness in graphs. Since then, several\nindependent and seemingly unrelated characterisations of these notions\nhave been found, showing that these concepts behave robustly. For\nexample, nowhere dense classes of graphs can be defined in terms of\nexcluded shallow minors~\\cite{nevsetvril2011nowhere}, in terms of\nuniform quasi-wideness~\\cite{dawar2010homomorphism}, a notion studied\nin model theory, or in terms of a game~\\cite{grohe2014deciding} with\ndirect algorithmic applications. The \\emph{generalised colouring\n numbers} $\\mathrm{adm}_r$, $\\mathrm{col}_r$, and $\\mathrm{wcol}_r$ were introduced by\nKierstead and Yang~\\cite{kierstead2003orders} in the context of\ncolouring and marking games on graphs. As proved by Zhu\n\\cite{zhu2009colouring}, they can be used to characterise both bounded\nexpansion and nowhere dense classes of graphs.\n\nThe invariants $\\mathrm{adm}_r$, $\\mathrm{col}_r$, and $\\mathrm{wcol}_r$ are defined similarly\nto the classic colouring number: for example, the \\emph{weak\n $r$-colouring} number $\\mathrm{wcol}_r(G)$ of a graph $G$ is the minimum\ninteger~$k$ for which there is a linear order of the vertices such\nthat each vertex $v$ can reach at most $k-1$ vertices $w$ by a path of\nlength at most~$r$ in which~$w$ is the smallest vertex on the\npath. 
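\n\nTo make this description concrete, the following short Python sketch (purely\nillustrative; the graph is given as an adjacency dictionary and the order as a\nlist of its vertices, smallest first) computes the weakly $r$-reachable sets with\nrespect to a fixed order, together with the value of $\\mathrm{wcol}_r$ that this\nparticular order witnesses.\n\\begin{verbatim}\nfrom collections import deque\n\ndef wreach_sets(adj, order, r):\n    # For every vertex v, the set WReach_r[G, L, v] with respect to the order L.\n    # A vertex u lies in WReach_r[G, L, v] exactly when u <=_L v and v is within\n    # distance r of u in the subgraph induced by the vertices that are >=_L u.\n    pos = {v: i for i, v in enumerate(order)}\n    wreach = {v: {v} for v in order}\n    for u in order:\n        # truncated breadth-first search from u inside G[{w : w >=_L u}]\n        dist = {u: 0}\n        queue = deque([u])\n        while queue:\n            x = queue.popleft()\n            if dist[x] == r:\n                continue\n            for y in adj[x]:\n                if pos[y] >= pos[u] and y not in dist:\n                    dist[y] = dist[x] + 1\n                    queue.append(y)\n        for v in dist:   # u is weakly r-reachable from every vertex reached here\n            wreach[v].add(u)\n    return wreach\n\ndef wcol_witnessed(adj, order, r):\n    # The value of wcol_r certified by this order (an upper bound on wcol_r(G)).\n    return max(len(s) for s in wreach_sets(adj, order, r).values())\n\\end{verbatim}\nFor an $n$-vertex path ordered from one end to the other, every set returned by\nthis procedure has size at most $r+1$, which already illustrates how the weak\ncolouring numbers measure uniform sparseness.\n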
\n\nThe generalised colouring numbers found important applications in the\ncontext of algorithmic theory of sparse graphs. For example, they play\na key role in Dvo\\v{r}\\'ak's approximation algorithm for minimum\ndominating sets \\cite{dvovrak13}, or in the construction of sparse\nneighbourhood covers on nowhere dense classes, a fundamental step in\nthe almost linear time model-checking algorithm for first-order\nformulas of Grohe et al.~\\cite{grohe2014deciding}.\n \nIn this paper we study the relation between the colouring numbers and\nthe above mentioned characterisations of nowhere dense classes of\ngraphs, namely with uniform quasi-wideness and the splitter game. We\nuse the generalised colouring numbers to give a new proof that every\nbounded expansion class is uniformly quasi-wide. This was first\nproved by Ne\\v{s}et\\v{r}il and Ossona de Mendez in\n\\cite{nevsetvril2010first}; however, the constants appearing in the\nproof of~\\cite{nevsetvril2010first} are huge. We present a very simple\nproof which also improves the appearing constants. Furthermore, for\nthe splitter game introduced in~\\cite{grohe2014deciding}, we show that\nsplitter has a very simple strategy to win on any class of bounded\nexpansion, which leads to victory much faster than in general nowhere\ndense classes of graphs.\n\nEvery graph $G$ from a fixed class $\\mathcal{C}$ of bounded expansion\nsatisfies $\\mathrm{wcol}_r(G)\\leq f(r)$ for some function $f$ and all positive\nintegers~$r$. However, the order that witnesses this inequality for\n$G$ may depend on the value $r$. We say that a class $\\mathcal{C}$ admits\n\\emph{uniform orders} if there is a function $f:\\mathbb{N}\\rightarrow\\mathbb{N}$ such\nthat for each $G\\in\\mathcal{C}$ there is one linear order that witnesses\n$\\mathrm{wcol}_r(G)\\leq f(r)$ for every value of $r$. We show that every\nclass that excludes a fixed topological minor admits uniform orders\nthat can be computed efficiently.\n\nFinally, based on our construction of uniform orders for graphs that\nexclude a fixed topological minor, we provide an alternative proof of\na very recent result of Eickmeyer and\nKawarabayashi~\\cite{eickmeyer2016model}, that the model-checking\nproblem for successor-invariant first-order ($\\mathrm{FO}$) formulas is\nfixed-para\\-meter tractable on such classes (we obtained this result\nindependently of but later than~\\cite{eickmeyer2016model}).\nSuccessor-invariant logics have been studied in database theory and\nfinite model theory, and successor-invariant $\\mathrm{FO}$ is known to be more\nexpressive than plain $\\mathrm{FO}$~\\cite{rossman2007successor}. The\nmodel-checking problem for successor-invariant $\\mathrm{FO}$ is known to be\nfixed-parameter tractable parameterized by the size of the formula on\nany graph class that excludes a fixed minor~\\cite{eickmeyer2013model}.\nVery recently, this result was lifted to classes that exclude a fixed\ntopological minor by Eickmeyer and\nKawarabayashi~\\cite{eickmeyer2016model}. \nThe key point of their proof\nis to use the decomposition theorem for graphs excluding a fixed\ntopological minor, due to Grohe and Marx~\\cite{grohe2015structure}.\nOur approach is similar to that of~\\cite{eickmeyer2016model}.\nHowever, we employ new constructions based on the generalised\ncolouring numbers and use the decomposition theorem\nof~\\cite{grohe2015structure} only implicitly. In particular, we do not\nconstruct a graph decomposition in order to solve the model-checking\nproblem. 
Therefore, we believe that our approach may be easier to\nextend further to classes of bounded expansion, or even to nowhere\ndense classes of graphs.\n\n\\section{Preliminaries}\n\n\\subparagraph*{Notation.} We use standard graph-theoretical notation;\nsee e.g.~\\cite{diestel2012graph} for reference. All graphs considered\nin this paper are finite, simple, and undirected. For a graph $G$, by\n$V(G)$ and $E(G)$ we denote the vertex and edge sets of $G$,\nrespectively. A graph~$H$ is a \\emph{subgraph} of~$G$, denoted\n$H\\subseteq G$, if $V(H)\\subseteq V(G)$ and $E(H)\\subseteq E(G)$. For\nany $M\\subseteq V(G)$, by $G[M]$ we denote the subgraph induced by\n$M$. We write $G-M$ for the graph $G[V(G)\\setminus M]$ and if\n$M=\\{v\\}$, we write $G-v$ for $G-M$. For a non-negative integer\n$\\ell$, a \\emph{path of length $\\ell$} in~$G$ is a sequence\n$P=(v_1,\\ldots, v_{\\ell+1})$ of pairwise different vertices such that\n$v_iv_{i+1}\\in E(G)$ for all $1\\leq i\\leq \\ell$. We write $V(P)$ for\nthe vertex set $\\{v_1,\\ldots, v_{\\ell+1}\\}$ of~$P$ and $E(P)$ for the\nedge set $\\{v_iv_{i+1} : 1\\leq i\\leq \\ell\\}$ of~$P$ and identify~$P$\nwith the subgraph of $G$ with vertex set $V(P)$ and edge set\n$E(P)$. We say that the path $P$ \\emph{connects} its \\emph{endpoints}\n$v_1,v_{\\ell+1}$, whereas $v_2,\\ldots, v_\\ell$ are the \\emph{internal\n vertices} of $P$. The {\\em{length}} of a path is the number of its\nedges. Two vertices $u,v\\in V(G)$ are \\emph{connected} if there is a\npath in $G$ with endpoints $u,v$. The {\\em{distance}} $\\mathrm{dist}(u,v)$\nbetween two connected vertices $u,v$ is the minimum length of a path\nconnecting $u$ and $v$; if $u,v$ are not connected, we put\n$\\mathrm{dist}(u,v)=\\infty$. The \\emph{radius} of~$G$ is\n$\\min_{u\\in V(G)}\\max_{v\\in V(G)}\\mathrm{dist}(u,v)$. The set of all\nneighbours of a vertex $v$ in $G$ is denoted by $N^G(v)$, and the set\nof all vertices at distance at most $r$ from $v$ is denoted by\n$N^G_r(v)$. A graph $G$ is $c$-\\emph{degenerate} if every subgraph\n$H\\subseteq G$ has a vertex of degree at most $c$. A $c$-degenerate\ngraph of order $n$ contains an independent set of order at least\n$n\/(c+1)$.\n\nA graph $H$ with $V(H)=\\{v_1,\\ldots, v_n\\}$ is a \\emph{minor} of $G$,\nwritten $H\\preccurlyeq G$, if there are pairwise disjoint connected\nsubgraphs $H_1,\\ldots, H_n$ of $G$, called {\\em{branch sets}}, such\nthat whenever $v_iv_j\\in E(H)$, then there are $u_i\\in H_i$ and\n$u_j\\in H_j$ with $u_iu_j\\in E(G)$. We call $(H_1,\\ldots, H_n)$ a\n{\\em{minor model}} of $H$ in~$G$. The graph $H$ is a \\emph{topological\n minor} of $G$, written $H\\preccurlyeq^t G$, if there are pairwise\ndifferent vertices $u_1, \\ldots, u_n\\in V(G)$ and a family of paths\n$\\{P_{ij}\\ \\colon\\ v_iv_j\\in E(H)\\}$, such that each $P_{ij}$ connects\n$u_i$ and $u_j$, and paths $P_{ij}$ are pairwise internally\nvertex-disjoint.\n\n\\vspace{-0.5cm}\n\\subparagraph*{Generalised colouring numbers.} Let us fix a graph\n$G$. By $\\Pi(G)$ we denote the set of all linear orders of $V(G)$. For\n$L\\in\\Pi(G)$, we write $u<_L v$ if $u$ is smaller than $v$ in $L$, and\n$u\\le_L v$ if $u<_L v$ or $u=v$. Let $u,v\\in V(G)$. For a\nnon-negative integer $r$, we say that $u$ is \\emph{weakly\n $r$-reachable} from~$v$ with respect to~$L$, if there is a path $P$\nof length $\\ell$, $0\\le\\ell\\le r$, connecting $u$ and~$v$ such that\n$u$ is minimum among the vertices of $P$ (with respect to $L$). 
By\n$\\mathrm{WReach}_r[G,L,v]$ we denote the set of vertices that are weakly\n$r$-reachable from~$v$ w.r.t.\\ $L$.\n\nVertex $u$ is \\emph{strongly $r$-reachable} from $v$ with respect\nto~$L$, if there is a path $P$ of length~$\\ell$, $0\\le\\ell\\le r$,\nconnecting $u$ and $v$ such that $u\\le_Lv$ and such that all internal\nvertices $w$ of~$P$ satisfy $v<_Lw$. Let $\\mathrm{SReach}_r[G,L,v]$ be the set\nof vertices that are strongly $r$-reachable from~$v$ w.r.t.\\ $L$. Note\nthat we have $v\\in \\mathrm{SReach}_r[G,L,v]\\subseteq \\mathrm{WReach}_r[G,L,v]$.\n\nFor a non-negative integer $r$, we define the \\emph{weak $r$-colouring\n number $\\mathrm{wcol}_r(G)$} of $G$ and the \\emph{$r$-colouring number\n $\\mathrm{col}_r(G)$} of $G$ respectively as follows:\n\\begin{eqnarray*}\n\\mathrm{wcol}_r(G)& := & \\min_{L\\in\\Pi(G)}\\:\\max_{v\\in V(G)}\\:\n\\bigl|\\mathrm{WReach}_r[G,L,v]\\bigr|,\\\\\n\\mathrm{col}_r(G) & := & \\min_{L\\in\\Pi(G)}\\:\\max_{v\\in V(G)}\\:\n\\bigl|\\mathrm{SReach}_r[G,L,v]\\bigr|.\n\\end{eqnarray*}\n\nFor a non-negative integer $r$, the \\emph{$r$-admissibility}\n$\\mathrm{adm}_r[G,L, v]$ of $v$ w.r.t.\\ $L$ is the maximum size~$k$ of a\nfamily $\\{P_1,\\ldots,P_k\\}$ of paths of length at most $r$ that start\nin~$v$, end at a vertex $w$ with $w\\leq_Lv$, and satisfy\n$V(P_i)\\cap V(P_j)=\\{v\\}$ for all $1\\leq i< j\\leq k$. As for $r>0$ we\ncan always let the paths end in the first vertex smaller than $v$, we\ncan assume that the internal vertices of the paths are larger\nthan~$v$. Note that $\\mathrm{adm}_r[G,L,v]$ is an integer, whereas\n$\\mathrm{WReach}_r[G,L, v]$ and $\\mathrm{SReach}_r[G,L,v]$ are vertex sets. The\n\\emph{$r$-admissibility} $\\mathrm{adm}_r(G)$ of~$G$~is\n\\begin{eqnarray*} \n\\mathrm{adm}_r(G) & = & \\min_{L\\in\\Pi(G)}\\max_{v\\in V(G)}\\mathrm{adm}_r[G,L,v].\n\\end{eqnarray*} \nThe generalised colouring numbers were introduced by Kierstead and\nYang \\cite{kierstead2003orders} in the context of colouring and marking\ngames on graphs. The authors also proved that the generalised\ncolouring numbers are related by the following inequalities:\n\\begin{equation}\\label{eq:gen-col-ineq}\n\\mathrm{adm}_r(G)\\leq \\mathrm{col}_r(G)\\le \\mathrm{wcol}_r(G)\\le (\\mathrm{adm}_r(G))^r.\n\\end{equation}\n\n\\subparagraph*{Shallow minors, bounded expansion, and nowhere\n denseness.} A graph $H$ with $V(H)=\\{v_1,\\ldots, v_n\\}$ is a\n{\\em{depth-$r$ minor}} of $G$, denoted $H\\preccurlyeq_rG$, if there is a\nminor model $(H_1,\\ldots,H_n)$ of $H$ in $G$ such that each $H_i$ has\nradius at most $r$. We write $d(H)$ for the \\emph{average degree}\nof~$H$, that is, for the number $2|E(H)|\/|V(H)|$. A class $\\mathcal{C}$ of\ngraphs has \\emph{bounded expansion} if there is a function\n$f\\colon\\mathbb{N}\\rightarrow\\mathbb{N}$ such that for all non-negative integers $r$\nwe have $d(H)\\leq f(r)$ for every $H\\preccurlyeq_r G$ with $G\\in\\mathcal{C}$. A\nclass $\\mathcal{C}$ of graphs is \\emph{nowhere dense} if for every real\n$\\epsilon>0$ and every non-negative integer~$r$, there is an integer\n$n_0$ such that if $H$ is an $n$-vertex graph with $n\\geq n_0$ and\n$H\\preccurlyeq_r G$ for some $G\\in\\mathcal{C}$, then~$d(H)\\leq n^\\epsilon$.\n\nBounded expansion and nowhere dense classes of graphs were introduced\nby Ne\\v{s}et\\v{r}il and Ossona de Mendez as models for uniform\nsparseness of graphs\n\\cite{nevsetvril2008grad,nevsetvril2011nowhere}. 
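\n\nStrong reachability, as defined above, can be computed in a similarly illustrative\nfashion by a single depth-bounded breadth-first search from $v$ that is only allowed\nto continue through vertices larger than $v$; the following fragment reuses the\ngraph representation of the previous sketch.\n\\begin{verbatim}\nfrom collections import deque\n\ndef sreach(adj, order, r, v):\n    # SReach_r[G, L, v]: vertices u <=_L v reachable from v by a path of length\n    # at most r all of whose internal vertices are larger than v.\n    pos = {x: i for i, x in enumerate(order)}\n    dist = {v: 0}\n    reach = {v}\n    queue = deque([v])\n    while queue:\n        x = queue.popleft()\n        if dist[x] == r:\n            continue\n        for y in adj[x]:\n            if y in dist:\n                continue\n            dist[y] = dist[x] + 1\n            if pos[y] <= pos[v]:\n                reach.add(y)      # smaller endpoint: record it, do not continue\n            else:\n                queue.append(y)   # larger vertex: may serve as an internal vertex\n    return reach\n\ndef col_witnessed(adj, order, r):\n    # The value of col_r certified by this particular order.\n    return max(len(sreach(adj, order, r, v)) for v in order)\n\\end{verbatim}\nSince $\\mathrm{SReach}_r[G,L,v]\\subseteq \\mathrm{WReach}_r[G,L,v]$, for every fixed\norder the value computed here never exceeds the one computed by the previous\nsketch, in line with the inequalities above.\n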
As proved by\nZhu~\\cite{zhu2009colouring}, the generalised colouring numbers are\ntightly related to densities of low-depth minors, and hence they can\nbe used to characterise bounded expansion and nowhere dense classes.\n\n\\begin{theorem}[Zhu \\cite{zhu2009colouring}] A class $\\mathcal{C}$ of graphs\n has bounded expansion if and only if there is a function\n $f:\\mathbb{N}\\rightarrow\\mathbb{N}$ such that $\\mathrm{wcol}_r(G)\\leq f(r)$ for all $r\\in\\mathbb{N}$\n and all $G\\in\\mathcal{C}$.\n\\end{theorem}\n\nDue to Inequality~(\\ref{eq:gen-col-ineq}), we may equivalently demand that there\nis a function $f:\\mathbb{N}\\rightarrow\\mathbb{N}$ such that $\\mathrm{adm}_r(G)\\leq f(r)$ or\n$\\mathrm{col}_r(G)\\leq f(r)$ for all non-negative integers $r$ and all\n$G\\in\\mathcal{C}$.\n\nSimilarly, from Zhu's result one can derive a characterisation of\nnowhere dense classes of graphs, as presented in\n\\cite{nevsetvril2011nowhere}. A class $\\mathcal{C}$ of graphs is called\n\\emph{hereditary} if it is closed under induced subgraphs, that is, if\n$H$ is an induced subgraph of $G\\in\\mathcal{C}$, then $H\\in\\mathcal{C}$.\n\n\\begin{theorem}[Ne\\v{s}et\\v{r}il and Ossona de Mendez\n \\cite{nevsetvril2011nowhere}] A hereditary class $\\mathcal{C}$ of graphs is\n no\\-where dense if and only if for every real $\\epsilon>0$ and every\n non-negative integer $r$, there is a positive integer $n_0$ such\n that if $G\\in\\mathcal{C}$ is an $n$-vertex graph with $n\\geq n_0$, then\n $\\mathrm{wcol}_r(G)\\leq n^\\epsilon$.\n\\end{theorem}\n\n\nAs shown in \\cite{dvovrak13}, for every non-negative integer $r$,\ncomputing $\\mathrm{adm}_r(G)$ is fixed-parameter tractable on any class of\nbounded expansion (parameterized by $\\mathrm{adm}_r(G)$). For $\\mathrm{col}_r(G)$ and\n$\\mathrm{wcol}_r(G)$ this is not known; however, by~\\eqref{eq:gen-col-ineq} we\ncan use admissibility to obtain approximations of these numbers. On\nnowhere dense classes of graphs, for every $\\epsilon>0$ and every\nnon-negative integer $r$, we can compute an order that witnesses\n$\\mathrm{wcol}_r(G)\\leq n^\\epsilon$ in time $\\mathcal{O}(n^{1+\\epsilon})$ if $G$ is\nsufficiently large \\cite{grohe2014deciding}, based on Ne\\v{s}et\\v{r}il\nand Ossona de Mendez's augmentation technique\n\\cite{nevsetvril2008grad}.\n\n\\section{Uniform quasi-wideness and the splitter game}\n\nIn this section we discuss the relation between weak $r$-colouring\nnumbers and two notions that characterise nowhere dense classes:\nuniform quasi-wideness and the splitter game.\n\nFor a graph $G$, a vertex subset $A\\subseteq V(G)$ is called\n\\emph{$r$-independent} in $G$, if $\\mathrm{dist}_G(a,b)>r$ for all different\n$a,b\\in V(G)$. A vertex subset is called \\emph{$r$-scattered}, if it\nis $2r$-independent, that is, if the $r$-neighbourhoods of different\nelements of $A$ do not intersect.\n\nInformally, uniform quasi-wideness means the following: in any large\nenough subset of vertices of a graph from $\\mathcal{C}$, one can find a large\nsubset that is $r$-scattered in $G$, possibly after removing from $G$\na small number of vertices. 
Formally, a class $\\mathcal{C}$ of graphs is\n\\emph{uniformly quasi-wide} if there are functions\n$N:\\mathbb{N}\\times\\mathbb{N}\\rightarrow\\mathbb{N}$ and $s:\\mathbb{N}\\rightarrow\\mathbb{N}$ such that for all\n$m, r\\in\\mathbb{N}$, if $W\\subseteq V(G)$ for a graph $G\\in\\mathcal{C}$ with\n$|W|>N(m,r)$, then there is a set $S\\subseteq V(G)$ of size at most\n$s(r)$ such that $W$ contains a subset of size at least $m$ that is\n$r$-scattered in $G-S$.\n\nThe notion of quasi-wideness was introduced by\nDawar~\\cite{dawar2010homomorphism} in the context of homomorphism\npreservation theorems. It was shown in \\cite{nevsetvril2010first} that\nclasses of bounded expansion are uniformly quasi-wide and that uniform\nquasi-wideness characterises nowhere dense classes of graphs.\n\n\\begin{theorem}[Ne\\v{s}et\\v{r}il and Ossona de Mendez\n \\cite{nevsetvril2010first}] A hereditary class $\\mathcal{C}$ of graphs is\n no\\-where dense if and only if it is uniformly quasi-wide.\n\\end{theorem}\n\nIt was shown by Atserias et al.\\@ in \\cite{atserias2006preservation} that classes\nthat exclude $K_k$ \nas a minor are uniformly quasi-wide. In fact, in this case we can choose\n$s(r)=k-1$, independent of $r$ (if such a constant function for a class $\\mathcal{C}$\nexists, the class is called \\emph{uniformly almost wide}). However, the function\n$N(m,r)$ that was used in the proof is huge: it comes from an iterated Ramsey\nargument. The same approach was used in \\cite{nevsetvril2010first} to show that\nevery nowhere dense class, and in particular, every class of bounded expansion,\nis uniformly quasi-wide. We present a new proof that every bounded expansion\nclass is uniformly quasi-wide, which gives us a much better bound on $N(m,r)$\nand which is much simpler than the previously known proof.\n\n\\begin{theorem} Let $G$ be a graph and let $r,m\\in \\mathbb{N}$. Let $c\\in\\mathbb{N}$ be such\n that $\\mathrm{wcol}_r(G)\\leq c$ and let $A\\subseteq V(G)$ be a set of size at least\n $(c+1)\\cdot 2^m$. Then there exists a set $S$ of size at most $c(c-1)$ and a set\n $B\\subseteq A$ of size at least $m$ which is $r$-independent\n in $G-S$.\n\\end{theorem}\n\\begin{proof} Let $L\\in \\Pi(G)$ be such that\n $|\\mathrm{WReach}_r[G,L,v]|\\leq c$ for every $v\\in V(G)$. Let $H$ be the\n graph with vertex set $V(G)$, where we put an edge $uv\\in E(H)$ if\n and only if $u\\in \\mathrm{WReach}_r[G,L,v]$ or $v\\in \\mathrm{WReach}_r[G,L,u]$. Then\n $L$ certifies that~$H$ is $c$-degenerate, and hence we can greedily\n find an independent set $I\\subseteq A$ of size $2^m$ in $H$. By the\n definition of the graph $H$, we have that\n $\\mathrm{WReach}_r[G,L,v]\\cap I=\\{v\\}$ for each $v\\in I$.\n\n\\begin{claim}\\label{cl:del}\n Let $v\\in I$. Then deleting $\\mathrm{WReach}_r[G,L,v]\\setminus \\{v\\}$ from\n $G$ leaves $v$ at a distance greater than $r$ (in\n $G-(\\mathrm{WReach}_r[G,L,v]\\setminus\\{v\\})$) from all the other vertices of\n $I$.\n\\end{claim}\n\\begin{claimproof}\n Let $u\\in I$ and let $P$ be a path in $G$ that has length at most\n $r$ and connects $u$ and~$v$. Let $z\\in V(P)$ be minimal with\n respect to $L$. Then $z<_L v$ or $z=v$. If $z<_L v$, then\n $z\\in \\mathrm{WReach}_r[G,L,v]$ and hence the path $P$ no longer exists\n after the deletion of $\\mathrm{WReach}_r[G,L,v]\\setminus\\{v\\}$ from $G$. On\n the other hand, if $z=v$, then $v\\in \\mathrm{WReach}_r[G,L,u]$,\n which contradicts the fact that both $u,v\\in I$. 
\n\\end{claimproof}\n\nWe iteratively find sets\n$B_0\\subseteq \\ldots \\subseteq B_m\\subseteq I$, sets\n$I_0\\supseteq \\ldots \\supseteq I_m$, and sets\n$S_0\\subseteq \\ldots \\subseteq S_m$ such that $B$ is $r$-independent\nin $G-S$, where $B:=B_m$ and $S:=S_m$. We maintain the invariant that\nsets $B_i$, $I_i$, and $S_i$ are pairwise disjoint for each $i$. Let\n$I_0=I$, $B_0 = \\emptyset$ and $S_0 = \\emptyset$. In one step\n$i=1,2,\\ldots, m$, we delete some vertices from $I_i$ (thus obtaining\n$I_{i+1}$), shift one vertex from $I_i$ to $B_i$ (obtaining $B_{i+1}$)\nand, possibly, add some vertices from $V(G)\\setminus I_i$ to $S_i$\n(obtaining $S_{i+1}$). More precisely, let $v$ be the vertex of $I_i$\nthat is the largest in the order $L$. We set\n$B_{i+1} = B_i\\cup\\{v\\}$, and now we discuss how $I_{i+1}$ and\n$S_{i+1}$ are constructed.\n\nWe distinguish two cases. First, suppose $v$ is connected by a path of\nlength at most~$r$ in $G-S_i$ to at most half of the vertices of $I_i$\n(including $v$). Then we remove these reachable vertices from~$I_i$,\nand set $I_{i+1}$ to be the result. We also set $S_{i+1}=S_i$. Note\nthat $|I_{i+1}|\\geq |I_i|\/2$.\n\nSecond, suppose $v$ is connected by a path of length at most $r$ in\n$G-S_i$ to more than half of the vertices of $I_i$ (including $v$). We\nproceed in two steps. First, we add the at most $c-1$ vertices of\n$\\mathrm{WReach}_r[G,L,v]\\setminus\\{v\\}$ to $S_{i+1}$, that is, we let\n$S_{i+1}=S_i\\cup (\\mathrm{WReach}_r[G,L,v]\\setminus\\{v\\})$. (Recall here that\n$\\mathrm{WReach}_r[G,L,v]\\cap I = \\{v\\}$.) By Claim~\\ref{cl:del}, this leaves\n$v$ at a distance greater than~$r$ from every other vertex of $I_i$ in\n$G-S_{i+1}$. Second, we construct $I_{i+1}$ from $I_i$ by removing the\nvertex $v$ and all the vertices of $I_i$ that are not connected to $v$\nby a path of length at most~$r$ in $G-S_i$, hence we have\n$|I_{i+1}|\\geq\n\\lfloor|I_i|\/2\\rfloor$.\n\nObserve the construction above can be carried out for $m$ steps,\nbecause in each step, we remove at most half of the vertices of $I_i$\n(rounded up) when constructing $I_{i+1}$. As $|I_0|=|I|=2^m$, it is\neasy to see that the set $I_i$ cannot become empty within $m$\niterations. Moreover, it is clear from the construction that we end\nup with a set $B=B_m$ that has size~$m$ and is $r$-scattered in $G-S$,\nwhere $S=S_m$. It remains to argue that $|S_m|\\leq c(c-1)$. For this,\nit suffices to show that the second case cannot apply more than $c$\ntimes in total.\n\nSuppose the second case was applied in the $i$th iteration, when\nconsidering a vertex~$v$. Every vertex $u\\in I_i$ with $u<_Lv$ that\nwas connected to~$v$ by a path of length at most $r$ in $G-S_i$\nsatisfies $\\mathrm{WReach}_r[G,L,v]\\cap \\mathrm{WReach}_r[G,L,u]\\neq \\emptyset$.\nThus, every remaining vertex~$u\\in I_{i+1}$ has at least one of its\nweakly $r$-reachable vertices deleted (that is, included in\n$S_{i+1}$). As the number of such vertices is at most $c-1$ at the\nbeginning, and it can only decrease during the construction, this\nimplies that the second case can occur at most~$c$~times.\n\\end{proof}\n\nAs shown in \\cite{siebertz16}, if $K_k\\not\\preccurlyeq G$, then\n$\\mathrm{wcol}_r(G)\\in\\mathcal{O}(r^{k-1})$. 
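\n\nThe argument above is effectively constructive. The following sketch (purely\nillustrative, not an optimised implementation; it reuses the graph representation\nand the \\texttt{wreach\\_sets} helper from the earlier sketch) extracts the sets $B$\nand $S$ by following the two cases of the proof.\n\\begin{verbatim}\nfrom collections import deque\n\ndef ball(adj, banned, src, r):\n    # vertices at distance at most r from src once the vertices in 'banned'\n    # have been deleted from the graph\n    seen = {src: 0}\n    queue = deque([src])\n    while queue:\n        x = queue.popleft()\n        if seen[x] == r:\n            continue\n        for y in adj[x]:\n            if y not in banned and y not in seen:\n                seen[y] = seen[x] + 1\n                queue.append(y)\n    return set(seen)\n\ndef scattered_subset(adj, order, r, A, m):\n    # Schematic rendering of the procedure from the proof above; it assumes that\n    # wreach_sets(adj, order, r) from the earlier sketch is in scope.\n    pos = {v: i for i, v in enumerate(order)}\n    wreach = wreach_sets(adj, order, r)\n    # greedy independent set I in the auxiliary graph H, scanned along the order\n    # (the c-degeneracy of H guarantees that I is large when A is large)\n    I = []\n    for v in sorted(A, key=pos.get):\n        if all(u not in wreach[v] and v not in wreach[u] for u in I):\n            I.append(v)\n    B, S = [], set()\n    while I and len(B) < m:\n        v = max(I, key=pos.get)            # largest remaining vertex w.r.t. L\n        B.append(v)\n        near = ball(adj, S, v, r)\n        reachable = [u for u in I if u in near]\n        if 2 * len(reachable) <= len(I):   # first case: discard the close vertices\n            I = [u for u in I if u not in reachable]\n        else:                              # second case: delete WReach_r[v] - {v}\n            S |= wreach[v] - {v}\n            I = [u for u in reachable if u != v]\n    return B, S    # B is r-independent in G - S, as in the theorem above\n\\end{verbatim}\n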
Hence, for such graphs we have to delete\nonly a polynomial (in $r$) number of vertices in order to find an\n$r$-independent set of size $m$ in a set of vertices of size single\nexponential in $m$.\n\nWe now implement the same idea to find a very simple strategy for\nsplitter in the splitter game, introduced by Grohe et\nal.~\\cite{grohe2014deciding} to characterise nowhere dense classes of\ngraphs. Let~${\\ell},r\\in \\mathbb{N}$. The \\emph{simple $\\ell$-round\n radius-$r$ splitter game} on~$G$ is played by two players,\n\\emph{connector} and \\emph{splitter}, as follows. We let~$G_0:=G$. In\nround~$i+1$ of the game, connector chooses a\nvertex~$v_{i+1}\\in V(G_i)$. Then splitter picks a vertex\n$w_{i+1}\\in N_r^{G_i}(v_{i+1})$. We\nlet~$G_{i+1}:=G_i[N_r^{G_i}(v_{i+1})\\setminus \\{w_{i+1}\\}]$. Splitter\nwins if~$G_{i+1}=\\emptyset$. Otherwise the game continues\nat~$G_{i+1}$. If splitter has not won after~${\\ell}$ rounds, then\nconnector wins.\n\nA \\emph{strategy} for splitter is a function~$\\sigma$ that maps every\npartial play $(v_1, w_1, \\dots,$ $v_s, w_s)$, with associated\nsequence~$G_0, \\dots, G_s$ of graphs, and the next\nmove~$v_{s+1}\\in V(G_s)$ of connector, to a\nvertex~$w_{s+1}\\in N_r^{G_s}(v_{s+1})$ that is the next move of\nsplitter. A strategy~$\\sigma$ is a \\emph{winning strategy} for\nsplitter if splitter wins every play in which she follows the\nstrategy~$f$. We say that splitter \\emph{wins} the simple $\\ell$-round\nradius-$r$ splitter game on~$G$ if she has a winning strategy.\n\n\\begin{theorem}[Grohe et al.\\ \\cite{grohe2014deciding}] A class $\\mathcal{C}$\n of graphs is nowhere dense if and only if there is a function\n $\\ell:\\mathbb{N}\\rightarrow\\mathbb{N}$ such that splitter wins the simple\n $\\ell(r)$-round radius-$r$ splitter game on every graph $G\\in\\mathcal{C}$.\n\\end{theorem}\n\nMore precisely, it was shown in \\cite{grohe2014deciding} that\n$\\ell(r)$ can be chosen as $N(2s(r), r)$, where $N$ and~$s$ are the\nfunctions that characterise $\\mathcal{C}$ as a uniformly quasi-wide class of\ngraphs. We present a proof that on bounded expansion classes, splitter\ncan win much faster.\n\n\\begin{theorem}\\label{thm:splitterwcol} Let $G$ be a graph, let $r\\in\\mathbb{N}$ and let\n $\\ell=\\mathrm{wcol}_{2r}(G)$. Then splitter wins the $\\ell$-round radius-$r$\n splitter game.\n\\end{theorem}\n\\begin{proof} Let $L$ be a linear order that witnesses\n $\\mathrm{wcol}_{2r}(G)=\\ell$. Suppose in round $i+1\\leq \\ell$, connector\n chooses a vertex $v_{i+1}\\in V(G_i)$. Let $w_{i+1}$ (splitter's\n choice) be the minimum vertex of $N_r^{G_i}(v_{i+1})$ with respect\n to $L$. Then for each $u\\in N_r^{G_i}(v_{i+1})$ there is a path\n between $u$ and $w_{i+1}$ of length at most $2r$ that uses only\n vertices of $N_r^{G_i}(v_{i+1})$. As $w_i$ is minimum in\n $N_r^{G_i}(v_{i+1})$, $w_{i+1}$ is weakly $2r$-reachable from each\n $u\\in N_r^{G_i}(v_{i+1})$. Now let\n $G_{i+1}:=G_i[N_r^{G_i}(v_{i+1})\\setminus\\{w_{i+1}\\}]$. As $w_{i+1}$\n is not part of $G_{i+1}$, in the next round splitter will choose\n another vertex which is weakly $2r$-reachable from every vertex of\n the remaining $r$-neighbourhood. 
As $\\mathrm{wcol}_{2r}(G)= \\ell$, the game\n must stop after at most $\\ell$ rounds.\n\\end{proof}\n\n\n\\section{Uniform orders for graphs excluding a topological minor}\\label{sec:uniform}\n\nIf $\\mathcal{C}$ is a class of bounded expansion such that\n$\\mathrm{wcol}_r(G)\\leq f(r)$ for all $G\\in\\mathcal{C}$ and all $r\\in \\mathbb{N}$, the order\n$L$ that witnesses this inequality for $G$ may depend on the value\n$r$. We say that a class $\\mathcal{C}$ \\emph{admits uniform orders} if there\nis a function $f:\\mathbb{N}\\rightarrow\\mathbb{N}$ such that for each $G\\in\\mathcal{C}$, there\nis a linear order $L\\in \\Pi(G)$ such that\n$|\\mathrm{WReach}_r[G,L,v]|\\leq f(r)$ for all $v\\in V(G)$ and all $r\\in\\mathbb{N}$. In\nother words, there is one order that simultaneously certifies the\ninequality $\\mathrm{wcol}_r(G)\\leq f(r)$ for all $r$.\n\nIt is implicit in \\cite{siebertz16} that every class that excludes a\nfixed minor admits uniform orders, which can be efficiently\ncomputed. We are going to show that the same holds for classes that\nexclude a fixed topological minor. Our construction is similar to the\nconstruction of \\cite{siebertz16}, in particular, our orders can be\ncomputed quickly in a greedy fashion. The proof that we find an order\nof high quality is based on the decomposition theorem for graphs with\nexcluded topological minors, due to Grohe and\nMarx~\\cite{grohe2015structure}. Note however, that for the\nconstruction of the order we do not have to construct a tree\ndecomposition according to Grohe and Marx~\\cite{grohe2015structure}.\n\n\\subparagraph*{Construction.} Let $G$ be a graph. We present a\nconstruction of an order of $V(G)$ of high quality. We iteratively\nconstruct a sequence $H_1,\\ldots, H_\\ell$ of pairwise disjoint and\nconnected subgraphs of $G$ such that\n$\\bigcup_{1\\leq i\\leq \\ell}V(H_i)=V(G)$. For $0\\leq i<\\ell$, let\n$G_i \\coloneqq G - \\bigcup_{1\\le j\\le i}V(H_j)$. We say that a\ncomponent $C$ of $G_i$ is \\emph{connected} to a subgraph $H_j$,\n$j\\leq i$, if there is a vertex $u\\in V(H_j)$ and a vertex $v\\in V(C)$\nsuch that $uv\\in E(G)$. For all~$i$, $1\\leq i<\\ell$, we will maintain\nthe following invariant. If $C$ is a component of $G_i$, then the\nsubgraphs $H_{i_1},\\ldots, H_{i_s}\\in \\{H_1,\\ldots, H_i\\}$ that are\nconnected to $C$ form a minor model of the complete graph $K_s$, where\n$s$ is their number.\n\nTo start, we choose an arbitrary vertex $v\\in V(G)$ and let $H_1$ be\nthe connected subgraph $G[\\{v\\}]$. Clearly, $H_1$ satisfies the above\ninvariant. Now assume that for some $i$, $1\\le i< \\ell$, the sequence\n$H_1,\\ldots,H_i$ has already been constructed. Fix some component $C$\nof $G_i$ and, by the invariant, assume that the subgraphs\n$H_{i_1},\\ldots, H_{i_s} \\in \\{H_1,\\ldots, H_i\\}$ with\n$1\\leq i_1<\\ldotsa(k)$, there is a node $t$ with $\\beta(t)$\nintersecting at least $a(k)+1$ branch sets~$H_{i_j}$. By\nLemma~\\ref{lem:corebag}, this node is unique. We call it the\n\\emph{core node} of the minor model. Next we show that if the model\nis large, then its core node must be a bounded degree node. 
Shortly\nspeaking, this is because the model $H_{i_1},\\ldots, H_{i_s}$ trimmed\nto the torso of the core node is already a minor model of $K_s$ in\nthis torso.\n\n\\begin{lemma}\\label{lem:degreebag}\n If $s>\\max\\{a(k), e(k)\\}$, then the core node of the minor model is\n a bounded degree node.\n\\end{lemma}\n\\begin{proof}\n As $s>a(k)$, by Lemma~\\ref{lem:corebag} we can identify the \n unique core node $t$ whose bag intersects all the branch sets $H_{i_j}$. \n Recall that $\\tau(t)$ is the graph induced by\n the bag $\\beta(t)$ in which all adjacent separators are turned into cliques. \n It is easy to see that the subgraphs $H_{i_j}':=\\tau(t)[V(H_{i_j})\\cap \\beta(t)]$ are connected\n in $\\tau(t)$ and form a minor model of $K_s$. As $s>e(k)$, \n we infer that $t$ cannot be an excluded minor node, and hence it is a bounded degree node.\n\\end{proof}\n\nFor vertices outside the bag of the core node, the bound promised in\nLemma~\\ref{lem:disjointpaths} can be proved similarly as\nLemma~\\ref{lem:corebag}.\n\n\\begin{lemma}\\label{lem:connectionsfromoutside}\n Let $C$ be a component of $G_i$ that has a connection to the\n subgraphs $H_{i_1},\\ldots, H_{i_s}$. If $s>a(k)$, then for every\n vertex $v\\in V(C)\\setminus\\beta(t)$, where $t$ is the core node of\n the model, we have that $m(v)\\leq a(k)$.\n\\end{lemma}\n\\begin{proof}\nBy the properties of a tree decomposition, there is an edge $e=tt'$ of $T$ such that\n$\\beta^{-1}(v)$ is contained in the subtree of $T-e$ that contains $t'$.\nSuppose $\\mathcal{P}$ is a family of paths that connect $v$ with distinct branch sets $H_{i_j}$ and are pairwise disjoint apart from $v$.\nRecall that $\\beta(t)$ intersects every branch set $H_{i_j}$.\nTherefore, by extending each path of $\\mathcal{P}$ within the branch set it leads to, we can assume w.l.o.g. that each path of $\\mathcal{P}$ connects $v$ with a vertex of $\\beta(t)$.\nBy Lemma~\\ref{lem:separator}, this implies that each path of $\\mathcal{P}$ intersects $\\beta(t)\\cap \\beta(t')$.\nPaths of $\\mathcal{P}$ share only $v$, which is not contained in $\\beta(t)\\cap \\beta(t')$, and hence we conclude that $|\\mathcal{P}|\\leq |\\beta(t)\\cap \\beta(t')|$.\nAs $\\mathcal{P}$ was chosen arbitrarily, we obtain that $m(v)\\leq |\\beta(t)\\cap \\beta(t')|\\leq a(k)$.\n\\end{proof}\n\n\\vspace{-0.5cm}\nWe now complete the proof of Lemma~\\ref{lem:disjointpaths} by looking\nat the vertices inside the core bag.\n\n\\begin{proof}[Proof of Lemma~\\ref{lem:disjointpaths}]\n We set $\\alpha:=a(k)+c(k)+d(k)+e(k)$. Assume towards a\n contradiction that for some $i$, $1\\leq i<\\ell$, we have that some\n component $C$ of $G_i$ contains a vertex $v_1$ with $m(v_1)>\\alpha$.\n Denote the branch sets that have a connection to $C$ by\n $H_{i_1},\\ldots, H_{i_s}$, where $i_1\\alpha$, we have that $|\\mathcal{P}|>\\alpha$, and in\n particular $s>\\alpha$. As $\\alpha>a(k)$, by~Lemma~\\ref{lem:corebag} \n and~Lemma~\\ref{lem:corebag-exists} we can\n identify the unique core node~$t$ of the minor model. As\n $s>\\max\\{a(k),e(k)\\}$, by Lemma~\\ref{lem:degreebag} the core node is\n a bounded degree node. 
As $m(v_1)>a(k)$, by\n Lemma~\\ref{lem:connectionsfromoutside} we have $v_1\\in \\beta(t)$.\n As $\\mathcal{P}$ contains more than $d(k)$ disjoint paths from $v$\n to distinct branch sets, the degree of $v_1$ in $G$ must be greater\n than $d(k)$, hence $v_1$ is an apex vertex of $\\tau(t)$.\n\n Since $i_1a(k)+c(k)+d(k)+e(k)\\geq a(k)+d(k)+e(k)+1$, the same\n reasoning as above shows that $t$ is also the core vertex of the\n minor model formed by branch sets connected to $C'$. Thus, by\n exactly the same reasoning we obtain that $v_2$ is also an apex\n vertex of $\\tau(t)$.\n\n Since $\\alpha>a(k)+c(k)+d(k)+e(k)$, we can repeat this reasoning\n $c(k)+1$ times, obtaining vertices $v_1,\\ldots, v_{c(k)+1}$, which\n are all apex vertices of $\\tau(t)$. This contradicts the fact that\n $\\tau(t)$ contains at most $c(k)$ apex vertices.\n\\end{proof}\n\n\n\\begin{proof}[Proof of Lemma~\\ref{lem:reachablevertices}]\n We set $\\beta$ so that $\\beta\\cdot r\\geq (2r+1)\\cdot \\alpha$, where\n $\\alpha$ is the constant given by Lemma~\\ref{lem:disjointpaths}.\n For the sake of contradiction, suppose there is a family of paths\n $\\mathcal{P}$ as in the statement, whose size is larger than\n $(2r+1)\\cdot \\alpha$.\n\n Recall that $H_{j}$ was chosen as a subtree of a breadth-first\n search tree in $G_{j-1}$; throughout the proof, we treat $H_j$ as a\n rooted tree. As $H_j$ is a subtree of a BFS tree, every path from a\n vertex $w$ of the tree to the root $v'$ of the tree is an isometric\n path in $G_{j-1}$, that is, a shortest path between $w$ and $v'$ in\n the graph $G_{j-1}$. If $P$ is an isometric path in a graph $H$,\n then $|N_r^H(v)\\cap V(P)|\\leq 2r+1$ for all $v\\in V(H)$ and all\n $r\\in\\mathbb{N}$. As the paths from $\\mathcal{P}$ are all contained in\n $G_{j-1}$, and they have lengths at most $r$, this implies that the\n path family $\\mathcal{P}$ cannot connect $v$ with more than $2r+1$\n vertices of $H_{j}$ which lie on the same root-to-leaf path in\n $H_{j}$. Since $|\\mathcal{P}|> (2r+1)\\cdot \\alpha$, we can find a\n set $X\\subseteq V(H_j)$ such that $|X|>\\alpha$, each vertex of $X$\n is connected to $v$ by some path from $\\mathcal{P}$, and no two\n vertices of $X$ lie on the same root-to-leaf path in $H_j$. Recall\n that, by the construction, each leaf of $H_j$ is connected to a\n different branch set $H_{j'}$ for some $j'0$) is\n chosen as follows. We first select the vertex\n $w'_{i+1}\\in V(H_{i_s})$ that is the largest in the order $L$ among\n those vertices of $H_{i_s}$ that are adjacent to $C$ (the vertices\n of $H_j$ for $j\\leq i$ are already ordered by $L$ at this point).\n Then, we select any its neighbour $w_{i+1}$ in $C$ as the vertex\n that is going to be included in $H_{i+1}$ in its construction.\n Finally, recall that in the construction of $L$, we could order the\n vertices of $H_{i+1}$ arbitrarily. Hence, we fix an order of\n $H_{i+1}$ so that $w_{i+1}$ is the smallest among $V(H_{i+1})$.\n This concludes the description of the restrictions applied to the\n construction.\n\n We now construct $H$ by taking $G$ and adding some edges. During the\n construction, we will mark some edges of $H$ as {\\em{spanning\n edges}}. We start by marking all the edges of all the trees\n $H_i$, for $1\\leq i\\leq \\ell$, as spanning edges. At the end, we\n will argue that the spanning edges form a spanning tree of $H$ with\n maximum degree at most $\\delta$.\n\n For each $i$ with $1\\leq i<\\ell$, let us examine the vertex\n $w_{i+1}$, and let us {\\em{charge}} it to $w_{i+1}'$. 
Note that in\n this manner every vertex $w_{i+1}$ is charged to its neighbour that\n lies before it in the order $L$. For any $w\\in V(G)$, let $D(w)$ be\n the set of vertices charged to $w$. Now examine the vertices of $G$\n one by one, and for each $w\\in V(G)$ do the following. If\n $D(w)=\\emptyset$, do nothing. Otherwise, if\n $D(w)=\\{u_1,u_2,\\ldots,u_h\\}$, mark the edge $wu_1$ as a spanning\n edge, and add edges $u_1u_2,u_2u_3,\\ldots,u_{h-1}u_h$ to $H$,\n marking them as spanning edges as well.\n\n\\begin{claim}\\label{cl:degbnd}\n The spanning edges form a spanning tree of $H$ of maximum degree at\n most $\\alpha+4$, where~$\\alpha$ is the constant given by\n Lemma~\\ref{lem:disjointpaths}.\n\\end{claim}\n\\begin{claimproof}\nBecause the branch sets partition the graph, the spanning edges form a spanning subgraph of $H$.\nBecause we connect the branch set $H_{i+1}$ only to the largest reachable branch set $H_{i_s}$ \n(and this set is never again the largest reachable branch set for $H_j$, $j>i$), the spanning \nsubgraph is acyclic. It is easy to see that the spanning subgraph is also connected. \nBy Lemma~\\ref{lem:degbound}, we have that each $H_i$ has maximum degree at most $\\alpha+1$.\nAlso, for every vertex $w\\in V(G)$, at most $3$ additional edges incident to $w$ in $H$ are marked as spanning (two edges are contributed by the path from $u_1$ to $u_h$ (only $u_1$ charges to a different vertex and has degree $1$ on the path) and one edge \nmay be added if a vertex is charged to it). In total, this means that\n$H$ has maximum degree bounded by $\\alpha+4$.\n\\end{claimproof}\n\nIt remains to argue that $H$ has small admissibility. For this, it\nsuffices to prove the following claim. The proof uses the additional\nrestrictions we introduced in the construction.\n\n\\begin{claim}\\label{cl:adm}\n Let $r$ be a positive integer. If the order $L$ satisfies \n $\\max_{v\\in V(G)} |\\mathrm{SReach}_{2r}[G,L,v]|\\leq m$, that is, \n the order certifies $\\mathrm{col}_{2r}(G)\\leq m$, \n then $\\mathrm{adm}_r(H)\\leq m+2$.\n\\end{claim}\n\\begin{claimproof}\nWe verify that for each $r$, the order $L$ certifies that $\\mathrm{adm}_r(H)\\leq m+2$. For this, take any vertex $v\\in V(H)=V(G)$, and let $\\mathcal{P}$ be any family of paths of length at most $r$ in $H$\nthat start in $v$, end in distinct vertices smaller than $v$ in $L$, and are pairwise internally disjoint.\nWe can further assume that all the internal vertices of all the paths from $\\mathcal{P}$ are larger than $v$ in $L$.\nLet $i$, $0\\leq i<\\ell$, be such that $v\\in V(H_{i+1})$.\nWe distinguish two cases: either $v=w_{i+1}$ or $v\\neq w_{i+1}$.\n\nWe first consider the case $v\\neq w_{i+1}$; the second one will be very similar. By the construction of the order $L$, it follows that $w_{i+1}<_L v$.\nConsider any path $P\\in \\mathcal{P}$. Then $P$ is a path in $H$; we shall modify it to a walk $P'$ in $G$ as follows.\nSuppose $P$ uses some edge $e$ that is not present in~$H$. By the construction of~$H$, it follows that\n$e=u_1u_2$ is an edge connecting two vertices that are charged to the same vertex $w$; suppose w.l.o.g. $P$ traverses $e$ from $u_1$ to $u_2$. \nDefine $P'$ by replacing the traversal of $e$ on $P$ by a path of length two consisting of $u_1w$ and $wu_2$, and making the same replacement for all other edges on $P$ that do not belong to $G$.\n\nWe claim that all the internal vertices of $P'$ are not smaller, in $L$, than $v$. 
For this, it suffices to show that whenever some edge $u_1u_2$ is replaced by a path $(u_1w, wu_2)$ as above,\nthen we have that $v\\leq_L w$. Aiming towards a contradiction, suppose that $u_1u_2$ is the first edge on $P$ for which we have $w<_L v$.\nBy the construction, it must be that $(u_1,u_2)=(w_{j_1},w_{j_2})$ for some $j_1,j_2>i+1$, and $w=w_{j_1}'=w_{j_2}'$. \nLet $j$ be such that $w\\in H_j$.\nWhen constructing $H_{j_1}$, we chose $w=w_{j_1}'$ as the largest, w.r.t. $L$, vertex of $H_j$ which was adjacent to the connected component $C'$ of $G_{j_1-1}$ that contains $H_{j_1}$.\nObserve that the prefix of $P'$ up to $w_{j_1}$ is a path in $G$ that, by the choice of $u_1u_2$, contains only vertices not smaller in $L$ than $v$. This prefix has to access the connected component $C'$\nfrom some vertex $q$, for which we of course have $v\\leq_L q$. If $q\\notin V(H_j)$ then, as $H_j$ is the last among subgraphs connected to $C'$, we have that $q\\in V(H_{j'})$ for some $j'