diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzmhqu" "b/data_all_eng_slimpj/shuffled/split2/finalzzmhqu" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzmhqu" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nThere has been a considerable interest in the quantum theory of information and\ncomputation for the past several\nyears~\\cite{Lloyd,Bennett,Jozsa94,Schumacher95,Shor95,Steane,Laflamme,Nielsen,Jozsa95,Wiesner,Brassard,Shor94,DiVincenzo,Barenco,Loss,Schumacher96}. \nEspecially, quantum-mechanical\nproperties of coding~\\cite{Jozsa94,Schumacher95}, \nnoisy-channels including error-correcting\ncodes~\\cite{Shor95,Steane,Laflamme,Nielsen} and\nchannel fidelity~\\cite{Jozsa95}, and \ncomputation~\\cite{Wiesner,Brassard,Shor94,DiVincenzo,Barenco,Loss} \nhave been studied in detail.\nIt was shown~\\cite{DiVincenzo,Barenco} that any quantum \ncomputation procedure can be decomposed into operations on single-bit\ngates and a two-bit gate which involves an entanglement operation on\ntwo quantum bits or qubits. Presence of decoherence and imperfections \ncause the operations of \nthese quantum gates away from the ideal ones and as a result one can regard\nthese gates as a part of noisy quantum channels. \nDetailed analysis of these channels are necessary for the complete\nunderstanding of general quantum information process.\nMathematically, the dynamics of quantum channels or generalized quantum gates\ninvolves the transformation of input quantum \nstates represented by a density\noperator $\\rho$ into an output states $\\rho'$~\\cite{Schumacher96}, i.e.,\n\\begin{eqnarray}\n\\label{rhodyn}\n\\rho \\buildrel {\\cal E}\\over\\longrightarrow \\rho'={\\cal E}[\\rho],\n\\end{eqnarray}\nwhere we assume ${\\cal E}$ is a linear mapping but is not necessarily a unitary\ntransformation if one considers an open system interacting with the reservoir\nsuch as noisy quantum channels.\nA model of a noisy quantum channel would involve several Hamiltonians for\nthe system representing qubits, reservoir and the mutual interaction between\nthe system and the reservoir that causes the decoherence or noise. \nThe density operator is then governed by the quantum \nLiouville equation~\\cite{Reichl} which\nis an integro-differential equations and in general, it is nontrivial to obtain\nthe solution of the form given by Eq.~(\\ref{rhodyn}).\nRather, one is expected to get the solution for\nthe density operator for the output states in\nVolterra type integral equation:\n\\begin{eqnarray}\n\\label{volterra}\n\\rho(t)=A(t,0)\\rho(0)+\\int d\\tau B(t,\\tau)\\rho(\\tau)\n\\end{eqnarray}\nwhere $A$ is a propagator and $B$ is a memory kernel. In general,\nit is very difficult to solve for the memory kernels of the time-convolution\nform equation (\\ref{volterra}) self-consistently \nand almost always, one must be content\nwith the narrowing limit or the fast modulation limit~\\cite{Saeki82}.\n\n\nSome time ago, the time-convolutionless equations of motion in the Heisenberg\npicture was suggested by Tokuyama and Mori~\\cite{Tokuyama} \nto overcome above mentioned\ndifficulties for problems in nonequilibrium statistical mechanics. \nThese formulations were then developed in the Schr{\\\"o}dinger picture by using\nthe projection operator \ntechnique~\\cite{Hashitsume,Saeki82,Saeki86}. \nOne of the authors applied the\ntime-convolutionless formulation to the model of quantum devices for detailed\nnumerical \nstudy~\\cite{Ahn94,Ahn95,Ahn97,Ahn98}. 
\nIt was shown that the time-convolutionless formulation can\nalso incorporate both non-Markovian relaxation and renormalization of the\nmemory effects.\n\n\nRecently, Loss and DiVincenzo~\\cite{Loss} \nhas made a comprehensive study of the two-bit\nquantum gate taking into account the effect of decoherence on the gate\noperation using the reduced density operator in the\ntime-convolution formulation.\nTheir results indicate that the detailed analysis of \nthe decoherence process is\nimportant for the reliable operation of quantum gates utilizing\ncontrolled, nonequilibrium time evolution of solid-state spin systems.\n\n\nIn order to make the reduce-density operator for the output quantum states\nof the form given by the equation (\\ref{rhodyn}),\nseveral approximations including the Born approximation were made in their\ntheory. In our opinion, it would be more convenient if \nthere is a way to get exact\nsolution for the output density-operator in time-convolutionless form \ngiven by (\\ref{rhodyn}).\n\nIn this paper, we first derive the exact solution for the\nreduced-density-operator of the output quantum states in time-convolutionless\nform by solving the quantum Liouville equation for a quantum channel using the\nprojection operator method. The formalism we develop would be general enough\nto model a realistic quantum channel or a quantum gate. Secondly, we apply the\ntheory to model a two-bit quantum gate composed of coupled spin systems in\nwhich the Heisenberg coupling is controlled by the tunneling barrier between\nneighboring single electron quantum dots.\n\\section{Time-convolutionless reduced-density-operator theory of a quantum\nsystem interacting with a reservoir}\n\nIn this section, we study the quantum Liouville equation for a quantum system\nwhich corresponds to a quantum channel or a generalized quantum gate to derive\nan equation and to solve for a reduced-density-operator of a system coupled to\na reservoir. An interaction between the system and the reservoir leads to\ndecoherence. The Hamiltonian of the total system is assumed to be\n\\begin{eqnarray}\nH_T(t)=H_S(t)+H_B + H_{\\rm int},\n\\end{eqnarray}\nwhere $H_S(t)$ is the Hamiltonian of the system representing a quantum gate\n(or channel), $H_B$ the reservoir and\n$H_{\\rm int}$ the Hamiltonian for the interaction of the system with its\nreservoir. \nThe evolution of the system might include a coding, transmission\nand decoding process. \nThe equation of motion for the density operator $\\rho_T(t)$ of the\ntotal system is given by a quantum Liouville equation\n\\begin{eqnarray}\n\\frac{d}{dt}\\rho_T(t)&=&-i[H_T,\\rho_T] \\nonumber \\\\\n&=&-i L_T \\rho_T,\n\\label{lioueq}\n\\end{eqnarray}\nwhere\n\\begin{eqnarray*}\nL_T(t)=L_S(t) +L_B +L_{\\rm int}\n\\end{eqnarray*}\nis the Liouville superoperator in one-to-one correspondence with the\nHamiltonian. In this work, we use a unit where $\\hbar=1$. In order to derive\nan equation and to solve for a system alone, it is convenient to use the\nprojection operators~\\cite{Nakajiama,Zwanzig} \nwhich decompose the total system by eliminating the\ndegrees of freedom for the reservoir. 
We define time-independent projection\noperators $\\underline{P}$ and $\\underline{Q}$ as~\\cite{Hashitsume} \n\\begin{eqnarray}\n\\underline{P} X= \\rho_B {\\rm tr}_B(X),~~\n\\underline{Q}=1-\\underline{P},\n\\end{eqnarray}\nfor any dynamical variable $X$.\nHere ${\\rm tr}_B$ indicates a partial trace over the quantum reservoir.\nThe projection operators satisfy the operator identities\n$ \\underline{P}^2=\\underline{P},\\underline{Q}^2=\\underline{Q}$ and \n$\\underline{P}\\underline{Q}=\\underline{Q}\\underline{P}=0$.\nThe information of the system is then contained in the reduced density\noperator $\\rho(t)$, which is defined by\n\\begin{eqnarray}\n\\rho(t)&=& {\\rm tr}_B \\rho_T(t)\\nonumber \\\\\n&=& {\\rm tr}_B \\underline{P} \\rho_T(t).\n\\end{eqnarray}\nIn order to derive a time-convolutionless equation, we first multiply \nEq.~(\\ref{lioueq}) by $\\underline{P}$ and $\\underline{Q}$ to obtain coupled\nequations for $\\underline{P}\\rho_T(t)$ and $\\underline{Q}\\rho_T(t)$:\n\\begin{eqnarray}\n\\frac{d}{dt}\\underline{P}\\rho_T(t)\n= -i \\underline{P}L_T(t)\\underline{P} \\rho_T(t)\n-i \\underline{P} L_T(t) \\underline{Q} \\rho_T(t),\n\\label{peq1}\n\\end{eqnarray}\n\\begin{eqnarray}\n\\frac{d}{dt}\\underline{Q}\\rho_T(t)\n= -i \\underline{Q}L_T(t)\\underline{Q} \\rho_T(t)\n-i \\underline{Q} L_T(t) \\underline{P} \\rho_T(t).\n\\label{qeq1}\n\\end{eqnarray}\nWe assume that the channel was turned on at $t=0$ and that the input state prepared\nat $t=0$, $\\rho(t=0)$, was decoupled from the reservoir at $t=0$, i.e., \n$\\underline{Q}\\rho_T(0)=0$~\\cite{Hashitsume}.\n\nThe formal solution of (\\ref{qeq1}) is given by~\\cite{Ahn94}\n\\begin{eqnarray}\n\\label{qsol}\n\\underline{Q}\\rho_T(t)=-i \\int_0^t d\\tau\n\\underline{H}(t,\\tau)\\underline{Q} L_T(\\tau) \\underline{P}\\rho_T(\\tau),\n\\end{eqnarray}\nwhere the projected propagator $\\underline{H}(t,\\tau)$ of the total system is\ngiven by\n\\begin{eqnarray}\n\\underline{H}(t,\\tau)=\\underline{T}\\exp\\left\\{\n-i\\int_\\tau^t ds \\underline{Q}L_T(s)\\underline{Q}\\right\\}.\n\\end{eqnarray}\nHere $\\underline{T}$ denotes the time-ordering operator.\nBecause Eq.~(\\ref{qsol}) is in time-convolution form, we transform the memory\nkernel in (\\ref{qsol}) into time-convolutionless form~\\cite{Ahn94} \nby substituting the formal solution of (\\ref{lioueq})\n\\begin{eqnarray}\n\\label{tsol}\n\\rho_T(\\tau)=\\underline{G}(t,\\tau)\\rho_T(t)\n\\end{eqnarray}\ninto Eq.~(\\ref{qsol}). 
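As a side remark, the action of the projection superoperator defined above is easy to make concrete numerically. The following minimal sketch (our own illustration, not part of the original derivation; it assumes NumPy and an arbitrary two-level system coupled to a two-level reservoir with an arbitrary reference state $\\rho_B$) constructs $\\underline{P} X= \\rho_B {\\rm tr}_B(X)$ explicitly and checks the identities $\\underline{P}^2=\\underline{P}$ and $\\underline{P}\\underline{Q}=0$:
\\begin{verbatim}
# Minimal sketch (not from the original paper): the projection
# superoperator P X = rho_B tr_B(X) on a qubit (system) x qubit (bath) space.
import numpy as np

dS, dB = 2, 2                      # system and bath dimensions
rng = np.random.default_rng(0)

# an arbitrary bath reference state rho_B (random density matrix)
A = rng.normal(size=(dB, dB)) + 1j * rng.normal(size=(dB, dB))
rho_B = A @ A.conj().T
rho_B /= np.trace(rho_B)

def tr_B(X):
    # partial trace over the bath of an operator X acting on H_S (x) H_B
    return np.trace(X.reshape(dS, dB, dS, dB), axis1=1, axis2=3)

def P(X):
    # projection superoperator: P X = rho_B tr_B(X)
    return np.kron(tr_B(X), rho_B)

def Q(X):
    return X - P(X)

# check P^2 = P and P Q = 0 on a random operator of the total system
X = rng.normal(size=(dS * dB, dS * dB)) + 1j * rng.normal(size=(dS * dB, dS * dB))
print(np.allclose(P(P(X)), P(X)))   # True
print(np.allclose(P(Q(X)), 0.0))    # True
\\end{verbatim}
The same bookkeeping, a partial trace over the reservoir followed by re-attaching $\\rho_B$, is what the symbol $\\underline{P}$ stands for in all of the operator manipulations that follow.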
The anti-time evolution operator \n$\\underline{G}(t,\\tau)$ of the total system is defined by\n\\begin{eqnarray*}\n\\underline{G}(t,\\tau)=\\underline{T}^c\n\\exp\\left\\{ i \\int_\\tau^t ds L_T(s)\\right\\},\n\\end{eqnarray*}\nwhere $\\underline{T}^c$ is the anti-time-ordering operator.\nFrom Eq.~(\\ref{qsol}) and (\\ref{tsol}), we obtain\n\\begin{eqnarray}\n\\label{qtsol}\n\\underline{Q}\\rho_T(t)=\\{\\theta(t)-1\\}\\underline{P}\\rho_T(t)\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n\\theta^{-1}(t)&=&g(t)\\nonumber \\\\\n&=&1+i\\int_0^t d\\tau \\underline{H}(t,\\tau)\\underline{Q}\nL_T(\\tau)\\underline{P}~\\underline{G}(t\\tau)\n\\end{eqnarray}\nBy substituting Eq.~({\\ref{qtsol}) into (\\ref{peq1}), we obtain the\ntime-convolutionless equation of motion for $\\underline{P}\\rho_T(t)$ as\n\\begin{eqnarray}\n\\label{peq2}\n\\frac{d}{dt}\\underline{P}\\rho_T(t)\n=-i\\underline{P}L_T(t)\\underline{P}\\rho_T(t)\n-i\\underline{P}L_T(t)\\{\\theta(t)-1\\}\\underline{P}\\rho_T(t)\n\\end{eqnarray}\nIt can be shown that the formal solution of (\\ref{peq2}) is given by\n\\begin{eqnarray}\n\\label{psol}\n\\underline{P}\\rho_T\n=\\underline{U}(t,0)\\underline{P}\\rho_T(0)\n-i\\int_0^t ds \\underline{U}(t,s)\\underline{P}L_T(s)\n\\{\\theta(s)-1\\}\\underline{P}\\rho_T(s),\n\\end{eqnarray}\nwhere the projected propagator $\\underline{U}(t,\\tau)$ of the system is\ndefined by\n\\begin{eqnarray}\n\\underline{U}(t,\\tau)\n=\\underline{T}\\exp\\left\\{\n-i\\int_0^t ds \\underline{P} L_T(s)\\underline{P}\\right\\}.\n\\end{eqnarray}\nTo transform Eq.~(\\ref{psol}) into time-convolutionless form once again, \nwe substitute\n\\begin{eqnarray}\n\\rho_T(s)=\\underline{G}(t,s)\\rho_T(t)\n\\end{eqnarray}\ninto (\\ref{psol}) to obtain:\n\\begin{eqnarray}\n\\underline{P}\\rho_T(t)&=&\n\\underline{U}(t,0)\\underline{P}\\rho_T(0)\n-i\\int_0^t ds\\underline{U}(t,s)\\underline{P}L_T(s)\n\\{\\theta(s)-1\\} \\underline{P}~\\underline{G}(t,s)\\rho_T(t)\\nonumber\\\\\n&=&\\underline{U}(t,0)\\underline{P}\\rho_T(0)\n-i\\int_0^t ds\\underline{U}(t,s)\\underline{P}L_T(s)\n\\{\\theta(s)-1\\} \\underline{P}~\\underline{G}(t,s)\\underline{P}\\rho_T(t)\n\\nonumber \\\\\n&&-i\\int_0^t ds\\underline{U}(t,s)\\underline{P}L_T(s)\n\\{\\theta(s)-1\\} \\underline{P}~\\underline{G}(t,s)\\underline{Q}\\rho_T(t)\n\\nonumber \\\\\n&=&\\underline{U}(t,0)\\underline{P}\\rho_T(0)\n-i\\int_0^t ds\\underline{U}(t,s)\\underline{P}L_T(s)\n\\{\\theta(s)-1\\} \\underline{P}~\\underline{G}(t,s)\n\\theta(t)\\underline{P}\\rho_T(t).\n\\label{psol2}\n\\end{eqnarray}\nBy the way,\n\\begin{eqnarray}\n\\underline{P}\\rho_T(t)&=&\n\\rho_B{\\rm tr}_B \\left(\\rho_T(t)\\right) \\nonumber \\\\\n&=&\\rho_B\\rho(t),\n\\end{eqnarray}\nand\n\\begin{eqnarray}\n\\underline{P}L_T(t)\\underline{P}&=&\n\\underline{P}(L_S(t)+L_B+L_{\\rm int})\\underline{P} \\nonumber \\\\\n&=& \\underline{P}L_S(t)\\underline{P} \\nonumber \\\\\n&=&L_S(t)\\underline{P}.\n\\end{eqnarray}\nThen\n\\begin{eqnarray}\n\\underline{U}(t,0)\\underline{P}\\rho_T(0)\n&=&\\underline{T}\\exp\\left\\{\n-i \\int^t_0 ds \\underline{P} L_T(s)\\underline{P}\\right\\}\n\\underline{P}\\rho_T(0) \\nonumber \\\\\n&=&\\underline{T}\\exp\\left\\{\n-i \\int^t_0 ds L_S(s)\\underline{P}\\right\\} \n\\underline{P}\\rho_T(0) \\nonumber \\\\\n&=&\\underline{U}_S(t,0)\\underline{P}\\rho_T(0) \\nonumber \\\\\n&=&\\underline{U}_S(t,0)\\rho_B \\rho(t).\n\\label{rel1}\n\\end{eqnarray}\nHere $\\underline{U}_S(t,0)$ denotes the propagator of the 
system.\nLikewise,\n\\begin{eqnarray}\n&&\\underline{U}(t,s)\\underline{P}L_T(s)\n\\{\\theta(s)-1\\} \\underline{P}~\\underline{G}(t,s)\n\\theta(t)\\underline{P}\\rho_T(t) \\nonumber \\\\\n&=&\\underline{U}_S(t,s)\\rho_B{\\rm tr}_B\\Big[\nL_T(s)\\{\\theta(s)-1\\}\\rho_B{\\rm tr}_B\\{ \\underline{G}(t,s)\\theta(t)\n\\rho_B\\}\\Big]\\rho(t) \\nonumber \\\\\n&=&\\underline{U}_S(t,s)\\rho_B{\\rm tr}_B\\Big[\nL_T(s)\\{\\theta(s)-1\\}\\rho_B\\Big]\n{\\rm tr}_B\\Big[\\underline{G}(t,s)\\theta(t)\\rho_B\\Big]\\rho(t).\n\\label{rel2}\n\\end{eqnarray}\nSubstituting (\\ref{rel1}) and (\\ref{rel2}) into (\\ref{psol2}), we obtain\n\\begin{eqnarray}\n\\rho(t)&=&\\underline{U}_S(t,0)\\rho(0)\\nonumber \\\\\n&&-i\\int_0^t ds \\underline{U}_S(t,s) {\\rm tr}_B\\Big[\nL_T(s)\\{\\theta(s)-1\\}\\rho_B\\Big]\n{\\rm tr}_B\\Big[\\underline{G}(t,s)\\theta(t)\\rho_B\\Big]\n\\rho(t),\n\\end{eqnarray}\nor \n\\begin{eqnarray}\n\\rho(t)&=&{\\cal E}(t)\\rho(0)\\nonumber \\\\\n&=&\\underline{W}^{-1}(t)\\underline{U}_S(t,0)\\rho(0),\n\\label{rho1}\n\\end{eqnarray}\nwith\n\\begin{eqnarray}\n\\underline{W}(t)&=&\n1+i\\int_0^t ds\\underline{U}_S(t,s) {\\rm tr}_B\\Big[\nL_T(s)\\{\\theta(s)-1\\}\\rho_B\\Big]\n{\\rm tr}_B\\Big[\\underline{G}(t,s)\\theta(t)\\rho_B\\Big] \\nonumber \\\\\n&=&1+i\\int_0^t ds\\underline{U}_S(t,s) {\\rm tr}_B\\Big[\nL_{\\rm int} \\Sigma(s)\\{1-\\Sigma(s)\\}^{-1}\\rho_B\\Big] \\nonumber \\\\\n&&\\hspace{5em}\\times{\\rm tr}_B\\Big[\n\\underline{U}_0(s)\\underline{R}(t,s)\\underline{U}^{-1}_0(t)\n\\{1-\\Sigma(t)\\}^{-1}\\rho_B\\Big].\n\\label{weq}\n\\end{eqnarray}\nHere, we define\n\\begin{eqnarray}\n\\Sigma(t)=1-\\theta^{-1}(t),\n\\end{eqnarray}\n\\begin{eqnarray}\n\\underline{U}_0(t)=e^{-it L_B}\\underline{U}_S(t),\n\\end{eqnarray}\nand\n\\begin{eqnarray}\n\\underline{R}(t,\\tau)=\n\\underline{T}^c \\exp\\left\\{\ni\\int^t_\\tau ds \\underline{U}_0^{-1}(s)L_{\\rm int} \\underline{U}_0\\right\\},\n\\end{eqnarray}\nwhere $\\underline{U}_0(t)$ is the evolution operator of the system with the\nreservoir and $\\underline{R}(t,\\tau)$ is \nthe evolution operator~\\cite{Saeki86} of the total\nsystem in the interacting picture. In (\\ref{weq}), we use the identities\n$\\underline{P}L_T(s)\\underline{Q}=\\underline{P}L_{\\rm int}\\underline{Q}$\nand $\\underline{H}(t,\\tau)\\underline{Q}=\\underline{Q}~\\underline{H}(t,\\tau)$.\n\nDetailed expression for $\\Sigma(t)$ becomes\n\\begin{eqnarray}\n\\Sigma(t)&=&1-\\theta^{-1}(t) \\nonumber \\\\\n&=& -i \\int^t_0 d\\tau \\underline{H}(t,\\tau)\\underline{Q}\nL_T(\\tau)\\underline{P} ~\\underline{G}(t,\\tau) \\nonumber \\\\\n&=&-i \\int^t_0 d\\tau \\underline{H}(t,\\tau)\\underline{Q}\nL_{\\rm int}(\\tau)\\underline{P} ~\\underline{G}(t,\\tau) \\nonumber \\\\\n&=&-i \\int^t_0 d\\tau\n\\underline{U}_0(t)\\underline{S}(t,\\tau)\\underline{U}_0^{-1}\n\\underline{Q}L_{\\rm int}\\underline{P}~\\underline{U}_0(\\tau)\n\\underline{R}(t,\\tau)\\underline{U}_0^{-1}(t),\n\\end{eqnarray}\nwith\n\\begin{eqnarray}\n\\underline{S}(t,\\tau)=\\underline{T}\\exp\\left\\{\n-i \\int^t_\\tau ds ~\\underline{Q}\\underline{U}_0^{-1}(s)\nL_{\\rm int} \\underline{U}_0(s) \\underline{Q}\\right\\},\n\\end{eqnarray}\nwhere $\\underline{S}(t,\\tau)$ is the \nprojected propagator~\\cite{Saeki86} of the total system\nin the interaction picture. 
It is now obvious from (\\ref{rho1}) and\n(\\ref{weq}), the exact solution $\\rho(t)$ for the output quantum state is \nin time-convolutionless form\ngiven by Eq.~(\\ref{rhodyn}) which is employed in the description of\nquantum information processing and computation~\\cite{Schumacher96}. \n\nWe now consider the case when the system is interacting weakly with the\nreservoir and expand (\\ref{weq}) up to the second order in powers of the\ninteraction Hamiltonian $H_{\\rm int}$. The renormalization of the unperturbed\nenergy of the system and the first order of the interaction $H_{\\rm int}$ \ngives~\\cite{Hashitsume,Saeki82,Saeki86}\n\\begin{eqnarray}\n\\underline{P}L_{\\rm int}\\underline{P}=0.\n\\end{eqnarray}\nThen in the lowest order Born approximation which is valid up to the order\n$(H_{\\rm int})^2$, we obtain\n\\begin{eqnarray}\n\\underline{W}^{(2)}(t)&=& \n1+i \\int^t_0 ds \\underline{U}_S(t,s){\\rm tr}_B\\Big[\nL_{\\rm int}\\Sigma^{(1)}(s)\\rho_B\\Big]{\\rm tr}_B\\Big[\n\\underline{U}_0(s) \\underline{U}_0^{-1}(t)\\rho_B\\Big]\\nonumber \\\\\n&=& 1+i \\int^t_0 ds \\underline{U}_S(t,s){\\rm tr}_B\\Big[\nL_{\\rm int}\\Sigma^{(1)}(s)\\rho_B\\Big]\n\\underline{U}_S^{-1}(t,s),\n\\label{w2}\n\\end{eqnarray}\nor\n\\begin{eqnarray}\n\\Big[\\underline{W}^{(2)}(t)\\Big]^{-1}=\n 1-i \\int^t_0 ds \\underline{U}_S(t,s){\\rm tr}_B\\Big[\nL_{\\rm int}\\Sigma^{(1)}(s)\\rho_B\\Big]\n\\underline{U}_S^{-1}(t,s),\n\\end{eqnarray}\nand \n\\begin{eqnarray}\n\\label{Mequation}\n\\underline{{\\cal E}}^{(2)}=\\left[\n1-i\\int^t_0 ds \\underline{U}_S(t,s){\\rm tr}_B\\Big[\nL_{\\rm int}\\Sigma^{(1)}(s)\\rho_B\\Big]\n\\underline{U}_S^{-1}(t,s)\\right]\\underline{U}_S(t,0)\n\\end{eqnarray}\nHere\n\\begin{eqnarray}\n\\Sigma^{(1)}(s)&=&-i \\int^s_0 d\\tau \n\\underline{U}_0(s)\\underline{U}_0^{-1}(\\tau)\n\\underline{Q}L_{\\rm int}\\underline{P}~\\underline{U}_0(\\tau)\n\\underline{U}_0^{-1}(s) \\nonumber \\\\\n&=&-i\\int^s_0 d\\tau \\underline{U}_0(s,\\tau)L_{\\rm\nint}\\underline{U}_0^{-1}(s,\\tau) .\n\\label{sigma1}\n\\end{eqnarray}\nThe time-convolutionless form of the output reduced-density-operator\n\\begin{eqnarray}\n\\rho(t)=\\underline{{\\cal E}}^{(2)}(t)\\rho(0)\n\\end{eqnarray}\ntogether with (\\ref{w2})-(\\ref{sigma1}) can be used in any time scale and \nis valid up to the second order in powers in the interaction between the\nsystem and the reservoir.\n\nIn the next section, reduced-density-operator for the output quantum state is\nused to study the two-bit quantum gate utilizing coupled spin system in\nnonequilibrium situation.\n\n\n\\section{Decoherence of two-bit quantum gate}\n\nWe consider a two-bit quantum gate based on nonequilibrium dynamics of \n the spin of excess electrons in\nquantum dots~\\cite{Loss}. In this system, the gate \noperation is controlled by\nan electrical tunneling between two quantum dots. 
Projecting out the \nspatial parts\nof wavefunctions of electrons, we model the system by \nthe Hubbard Hamiltonian~\\cite{Hubb};\n\\begin{eqnarray}\nH_S(t) = J(t) \\vec{S_1}\\cdot\\vec{S_2}\n\\label{hubbard}\n\\end{eqnarray}\nwhere $J(t)$ is time-dependent Heisenberg coupling which involves\nthe energy difference between the spin singlet \nand triplet states.\nIf we turn on $J(t)$ for $\\int dt J(t)=J_0 \\tau_s = \\pi$,\nthe unitary operator associated with the Hamiltonian (\\ref{hubbard})\ngives the swap operation up to overall phase difference; \nif $|i,j\\rangle$ labels the spin states of two electrons\nin the $S_z$ basis with $i,j=\\uparrow,\\downarrow$, then swap operation\n$U_{\\rm swap}$ on two registers $|i,j\\rangle$ gives \n$U_{\\rm swap} |i,j\\rangle=|j,i\\rangle$.\n\nIn reality, quantum-dot system of our interest is not a closed system, so we\nhave to take into account of the decoherence effects \ndue to the interaction with\nthe environment which is coupled with the system.\n{}For the action of the environment during the gate operation, we use\na Calderia-Leggett-type model~\\cite{Loss} \nwhere a set of harmonic oscillators are coupled\nlinearly to the system spins by\n\\begin{eqnarray}\n\\label{int}\nH_{\\rm int}=\\lambda(\\vec{S_1}\\cdot\\vec{b_1}+\\vec{S_2}\\cdot\\vec{b_2})\n\\end{eqnarray}\nHere, $b^j_i=\\sum_{\\alpha} g_{\\alpha}(a_{\\alpha,i}^j\n+{a^j_{\\alpha,i}}^{\\dagger})$ is a\nfluctuating quantum field whose unperturbed motion is governed \nby the harmonic-oscillator\nHamiltonian,\n\\begin{eqnarray}\n\\label{bath}\nH_B(t) = \\sum_{\\alpha} \\omega_\\alpha a^{\\dagger}_{\\alpha} a_{\\alpha}\n\\end{eqnarray}\nwhere $a^{\\dagger}_{\\alpha}$ $(a_{\\alpha})$ are bosonic creation (annihilation)\noperator and $\\omega_\\alpha$ are the corresponding frequencies with spectral\ndistribution function \n$A(\\omega)=\\pi\\sum_\\alpha g_\\alpha^2 \\delta(\\omega-\\omega_\\alpha)$.\n\nFor a coupled spin system, the \nevolution operator ${\\cal E}^{(2)}$ given by \nEq. (\\ref{Mequation}) can be written down explicitly\nin terms of spin operators.\nSubstituting (\\ref{hubbard})-(\\ref{bath}) into definitions \nfor $\\underline{U}_0$ \nand $L_{\\rm int}$, the integrand\nof Eq. 
(\\ref{Mequation}) can be written as,\n\\begin{eqnarray}\n\\label{trb}\n & & {\\rm tr}_B \\left[ L_{\\rm int}\\Sigma^{(1)}(s)\\rho_B\\right]\n\\underline{U}_S^{-1}(t,s)\n\\underline{U}_S(t,0)\\rho(0) \\nonumber \\\\\n&=& -i\\int_0^s d\\tau {\\rm tr}_B\\left[ L_{\\rm int}\n\\underline{U}_0(s,\\tau)L_{\\rm int} \\underline{U}_0^{-1}(s,\\tau) \\rho_B\\right]\n \\underline{U}_S(s,0)\\rho(0) \\nonumber \\\\\n&=& -i\\lambda^2 \\sum_{ijkl}\\int_0^s d\\tau \\Big\\{\n [ S^j_i,S^l_k(\\tau-s) (\\underline{U}_S(s,0)\\rho(0)) ]\n{\\rm tr_B}\\{b_i^j b_k^l(\\tau-s)\\rho_B\\} \\nonumber \\\\\n& &~~~~~+[(\\underline{U}_S(s,0)\\rho(0)) S^l_k(\\tau-s),S^j_i]\n{\\rm tr_B}\\{b_k^l(\\tau-s) b_i^j \\rho_B\\} \\Big\\} \\nonumber \\\\\n&=&-i\\sum_{ij}\\int_0^sd\\tau \\Big\\{[ S^j_i,S^j_i(\\tau-s) \n(\\underline{U}_S(s,0)\\rho(0)) ]\\{\\Gamma(\\tau-s)-i\\Delta(\\tau-s)\\}\n\\nonumber \\\\\n& &~~~~~+[(\\underline{U}_S(s,0)\\rho(0)) S^j_i(\\tau-s),S^j_i]\n\\{\\Gamma(\\tau-s)+i\\Delta(\\tau-s)\\} \\Big\\},\n\\end{eqnarray}\nwhere the trace over the heat bath is done for \nthe harmonic oscillator eigenstates,\n\\begin{eqnarray}\n\\label{gamma}\n{\\rm Tr_B}\\{b_k^l(t)b_i^j \\rho_B\\} =\n\\delta_{ik}\\delta_{jl}\\frac{1}{\\pi}\\int_0^\\infty A(\\omega)\n\\left\\{e^{-i\\omega t}+\n\\frac{2\\cos(\\omega t)}{e^{\\omega\/k_BT}-1} \\right\\} d\\omega,\n\\end{eqnarray}\nand we define $\\Gamma(t)$ and $\\Delta(t)$ as\n\\begin{eqnarray}\n\\Gamma(t)+i\\Delta(t) = \\lambda^2 {\\rm Tr_B}\\{b_i^j(t)b_i^j \\rho_B\\}.\n\\end{eqnarray}\nThen, Eq. (\\ref{Mequation}) leads to\n\\begin{eqnarray}\n\\label{Dequation}\n\\underline{{\\cal E}}^{(2)} &=& \\underline{U}_S(t,0) \\bigg[\n1-\\int^t_0 ds \\int^s_0 d\\tau\n\\sum_{ij}\\Big\\{ [S_i^j(s),S^j_i(\\tau)\\rho(0)]\\big\\{\n\\Gamma(\\tau-s)-i\\Delta(\\tau-s)\\big\\} \n\\nonumber \\\\\n&+& [\\rho(0) S_i^j(\\tau),S^j_i(s)]\\big\\{\\Gamma(\\tau-s)+i\\Delta(\\tau-s)\\big\\}\n \\Big\\} \\bigg].\n\\end{eqnarray}\n\n\nNow we evaluate the density operator in basis representation;\n$\\rho(t)=\\sum_{\\alpha\\beta}\\rho_{\\alpha\\beta}(t) e_{\\alpha\\beta}$,\n$e_{\\alpha\\beta}$ is the basis for the density operators, and in this work\nwe choose $e_{\\alpha\\beta}$ as the multiplet states, $i.e.$\n$e_{\\alpha\\beta}=|\\alpha\\rangle\\langle\\beta|$\nwith $\\alpha,\\beta=1,2,3,4$;\n$| 1\\rangle=|\\uparrow\\uparrow\\rangle,\n | 2\\rangle=(|\\uparrow\\downarrow\\rangle\n+|\\downarrow\\uparrow\\rangle)\/\\sqrt{2},\n | 3\\rangle=|\\downarrow\\downarrow\\rangle$, and \n$| 4\\rangle=(|\\uparrow\\downarrow\\rangle\n-|\\downarrow\\uparrow\\rangle)\/\\sqrt{2}$.\nBy defining the inner product like $(e_{\\alpha\\beta}, e_{\\gamma\\delta})\n={\\rm tr}[e_{\\alpha\\beta}^\\dag e_{\\gamma\\delta}]\n=\\delta_{\\alpha\\beta}\\delta_{\\gamma\\delta}$, \n$\\rho_{\\alpha\\beta}(t)$ is obtained as;\n\\begin{eqnarray}\n\\label{rhocomp}\n\\rho_{\\alpha\\beta}& =& (e_{\\alpha\\beta},\\rho(t)) \\nonumber \\\\\n &=& (e_{\\alpha\\beta},{\\cal E}^{(2)} \\rho(0))=\n\\sum_{\\gamma\\delta}(e_{\\alpha\\beta},{\\cal E}^{(2)}e_{\\gamma\\delta})\n\\rho(0)_{\\gamma\\delta}\n=\\sum_{\\gamma\\delta}{\\cal E}^{(2)}_{\\alpha\\beta|\\gamma\\delta}\n \\rho(0)_{\\gamma\\delta}\n\\end{eqnarray}\nwhere $\\rho(0)_{\\gamma\\delta}$ expansion coefficients \nof the initial density operator.\nWithout the interaction with environment, \n$i.e.$ the absence of the second term in Eq.~(\\ref{Dequation}),\n${\\cal E}^{(2)}_{\\alpha\\beta|\\gamma\\delta}$ is reduced to\n${U_S(t)}_{\\alpha\\beta|\\gamma\\delta}$ and evaluated \non the multiplet basis 
as\n\\begin{eqnarray}\n(e_{\\alpha\\beta},U_S(t)e_{\\gamma\\delta})\n=\\delta_{\\alpha\\beta}\\delta_{\\gamma\\delta}\ne^{-i\\overline{t}(E_{\\alpha}-E_\\beta)},\n\\label{urep}\n\\end{eqnarray}\nwhere $E_{1,2,3}=J_0\/4$ and $E_4=-3J_0\/4$ are the triplet \nand singlet energy eigenvalues.\nHere, $\\overline{t}$ has its value $t$ ($\\tau_s$) \nif $t$ is less(larger) than $\\tau_s$.\nThen, $\\underline{U}_S(t)$ becomes the swap operator,\n$U_{S}(t)=e^{-i\\pi\/4}\\underline{U}_{\\rm swap}$ if $t=\\tau_s$.\n\nIn order to evaluate ${\\cal E}^{(2)}$, \nwe first calculate the following matrix elements:\n\\begin{eqnarray}\n\\label{S1}\n(e_{\\alpha\\beta},\\sum_{ij}[S^j_i(s),S^j_i(\\tau)e_{\\gamma\\delta}]) &=&\n\\sum_{ij}\\{ \\langle\\alpha| S^j_i(s)S^j_i(\\tau)\n|\\gamma\\rangle\\langle\\delta|\\beta\\rangle\\nonumber \\\\\n&-& \\langle\\alpha| S^j_i(\\tau)|\\gamma\\rangle\\langle\\delta| S^j_i(s)\n|\\beta\\rangle \\}\\nonumber \\\\\n&=&\\delta_{\\delta\\beta}\\sum_\\kappa\n M_{\\alpha\\kappa\\kappa\\gamma}\n e^{i\\overline{\\tau}\\omega_{\\kappa\\gamma}\n+i\\overline{s}\\omega_{\\alpha\\kappa}}\n-M_{\\alpha\\gamma\\delta\\beta}\n e^{i\\overline{\\tau}\\omega_{\\alpha\\gamma}\n+i\\overline{s}\\omega_{\\delta\\beta }}\n\\end{eqnarray}\nand\n\\begin{eqnarray}\n\\label{S2}\n(e_{\\alpha\\beta},\\sum_{ij}[e_{\\gamma\\delta}S^j_i(\\tau),S^j_i(s)]) &=&\n\\sum_{ij}\\{ \\langle\\delta| S^j_i(\\tau)S^j_i(s)\n|\\beta\\rangle\\langle\\alpha|\\gamma\\rangle\\nonumber \\\\\n&-& \\langle\\alpha| S^j_i(s)|\\gamma\\rangle\\langle\\delta\n| S^j_i(\\tau)|\\beta\\rangle \\}\\nonumber \\\\\n&=&\\delta_{\\alpha\\gamma}\\sum_{\\kappa }M_{\\delta\\kappa \\kappa \\beta}\n e^{i\\overline{\\tau}\\omega_{\\delta\\kappa}\n+i\\overline{s}\\omega_{\\kappa\\beta}}\n-M_{\\alpha\\gamma\\delta\\beta}\n e^{i\\overline{\\tau}\\omega_{\\delta\\beta}\n+i\\overline{s}\\omega_{\\alpha\\gamma}}\n\\end{eqnarray}\nwhere $M_{\\alpha\\beta\\gamma\\delta}=\\sum_{ij}\\langle\\alpha| S^j_i\n|\\beta\\rangle\\langle\\gamma| S^j_i|\\delta\\rangle$,\n$\\omega_{\\alpha\\beta}=E_\\alpha-E_\\beta$, and\n\\begin{eqnarray*}\n\\overline{\\tau}(\\overline{s})=\n\\left\\{\\begin{array}{ll}\n\\tau(s) & \\mbox{if } \\tau(s) < \\tau_s\\\\\n\\tau_s & \\mbox{otherwise.} \n\\end{array}\\right.\n\\end{eqnarray*}\nThen, the matrix element of the evolution operator,\n${\\cal E}^{(2)}_{\\alpha\\beta|\\gamma\\delta}$ is obtained \nby substituting (\\ref{urep})-(\\ref{S2}) into (\\ref{Dequation}),\n\\begin{eqnarray}\n{\\cal E}^{(2)}_{\\alpha\\beta|\\gamma\\delta}\n&=& e^{-i\\overline{t}\\omega_{\\alpha\\gamma}}\n\\Big[ \\delta_{\\alpha\\gamma}\\delta_{\\beta\\delta}\n -\\delta_{\\beta\\delta} \\sum_\\kappa M_{\\alpha\\kappa\\kappa\\gamma}\n p_{\\kappa\\kappa|\\gamma\\alpha}(t)\n -\\delta_{\\alpha\\gamma}\\sum_\\kappa M_{\\delta\\kappa\\kappa\\beta}\n p^*_{\\delta\\kappa|\\gamma\\beta}(t) \\nonumber \\\\\n&&\\hspace{4em} +M_{\\alpha\\gamma\\delta\\beta}\n\\{p_{\\alpha\\beta|\\gamma\\delta}(t)+p^*_{\\beta\\alpha|\\delta\\gamma}(t)\\}\\Big]\n\\end{eqnarray}\nwith the time-dependent term $p_{\\alpha\\beta|\\gamma\\delta}(t)$ defined by\n\\begin{eqnarray}\np_{\\alpha\\beta|\\gamma\\delta}(t)\n=\\int^t_0 ds e^{-i\\overline{s}\\omega_{\\beta\\delta}}\n \\int^s_0 d\\tau e^{i\\overline{\\tau}\\omega_{\\alpha\\gamma}}\n \\{\\Gamma(\\tau-s)-i\\Delta(\\tau-s) \\}.\n\\end{eqnarray}\nFor numerical calculations, it is more convenient to split \nthe time integrals of the matrix\n$p_{\\alpha\\beta |\\gamma\\delta}(t)$ into three 
parts;\n\\begin{eqnarray}\np_{\\alpha\\beta|\\gamma\\delta}(t)\n&=&\\int^{\\tau_s}_0 ds e^{-is\\omega_{\\beta\\delta}}\n \\int^{s}_0 d\\tau e^{i\\tau\\omega_{\\alpha\\gamma}}\n \\{\\Gamma(\\tau-s)-i\\Delta(\\tau-s) \\}\n\\nonumber \\\\\n & &+\\int_{\\tau_s}^t ds e^{-i\\tau_s\\omega_{\\beta\\delta}}\n \\int^{\\tau_s}_0 d\\tau e^{i\\tau\\omega_{\\alpha\\gamma}}\n \\{\\Gamma(\\tau-s)-i\\Delta(\\tau-s) \\}\n\\nonumber \\\\\n & &+\\int_{\\tau_s}^t ds e^{-i\\tau_s\\omega_{\\beta\\delta}}\n \\int^{s}_{\\tau_s} d\\tau e^{i\\tau_s\\omega_{\\alpha\\gamma}}\n \\{\\Gamma(\\tau-s)-i\\Delta(\\tau-s) \\}\n\\nonumber \\\\\n &=&\\int^{\\tau_s}_0 ds e^{is(\\omega_{\\delta\\beta}+\\omega_{\\alpha\\gamma})}\n \\int^{s}_0 d\\tau e^{i\\tau\\omega_{\\gamma\\alpha}}\n \\{\\Gamma(\\tau)+i\\Delta(\\tau)\\}\n\\nonumber \\\\\n & & +e^{i\\tau_s\\omega_{\\delta\\beta}}\n \\int_{\\tau_s}^t ds e^{is\\omega_{\\alpha\\gamma}}\n \\int^{s}_{s-\\tau_s} d\\tau e^{i\\tau\\omega_{\\gamma\\alpha}}\n \\{\\Gamma(\\tau)+i\\Delta(\\tau) \\}\n\\nonumber \\\\\n & & +e^{i\\tau_s(\\omega_{\\delta\\beta}+\\omega_{\\alpha\\gamma})}\n \\int_{\\tau_s}^t ds\n \\int^{s-\\tau_s}_0 d\\tau \n \\{\\Gamma(\\tau)+i\\Delta(\\tau) \\}.\n\\label{peq}\n\\end{eqnarray}\nIn order to investigate the dynamics of the density \noperator in non-equilibrium situation, we calculate\nEqs.~(\\ref{rhocomp})-(\\ref{peq}) numerically,\nassuming an Ohmic damping for spectral distribution \nfunction $A(\\omega)=\\eta\\omega$ with a cutoff frequency $\\omega_c$\\cite{Palma}.\n\\section{Numerical Results and Discussions}\nWe now study dynamics of the density operator for various initial states.\nFirst, we calculate the evolution of the spin states \nduring the swap gate operation and\ncompare our results with those obtained by Loss and DiVincenzo\\cite{Loss}. The\ninitial spin state is chosen to be the spin-up for the second electron while\nthe first electron is unpolarized;\n$\\rho(0)=(|\\uparrow\\uparrow\\rangle\\langle\\uparrow\\uparrow|\n+|\\downarrow\\uparrow\\rangle\\langle\\downarrow\\uparrow| )\/2$.\nIn the multiplet basis, the initial state is expanded as;\n\\begin{eqnarray}\n\\label{irho}\n\\rho(0) = \\frac{1}{2}| 1\\rangle\\langle 1| \n +\\frac{1}{4}| 2\\rangle\\langle 2|\n -\\frac{1}{4}| 2\\rangle\\langle 4|\n -\\frac{1}{4}| 4\\rangle\\langle 2|\n +\\frac{1}{4}| 4\\rangle\\langle 4| .\n\\end{eqnarray}\n\n\nFig.~1-(a) shows the spin polarization calculated using\nparameters $\\lambda^2\\eta=1.8\\times 10^{-5}$, $k_BT=300~{\\rm K}$, \n$\\omega_c=400~{\\rm K}$, and $J_0=1~{\\rm K}$(solid lines). For \nthe interval, $0\\le t \\le\\tau_s$ the \nspin polarization of the first\nelectron $s=2\\langle S_z^1\\rangle=2 {\\rm tr}[\\rho(t) S_z^1]$ \nchanges to nearly a unity whereas\nthe spin state of the second electron becomes zero(dashed line), \ndemonstrating the feasibility of the swap operation.\nHowever, due to the decoherence, we find that \na perfect swap operation cannot be achievable. \nIn addition, the perturbing fields \ncause the monotonic decreases of the spin polarization with the elapse time\nafter completion of swap operation.\nThis means that spin states are becoming thermalized \nowing to the\ninteraction with the environment, \nwhich shows the decoherence of the states. \nThe decoherence would be a fundamental problem \nin making a reliable quantum logic gate,\nwhich puts severe restriction on building the realistic quantum computer. 
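As a cross-check of the basis bookkeeping, the expansion in Eq.~(\\ref{irho}) can be reproduced with a few lines of code. The sketch below (our own illustration, assuming NumPy; it is not part of the original numerical procedure) builds the multiplet states $|1\\rangle,\\dots,|4\\rangle$ in the product basis $\\{|\\uparrow\\uparrow\\rangle,|\\uparrow\\downarrow\\rangle,|\\downarrow\\uparrow\\rangle,|\\downarrow\\downarrow\\rangle\\}$ and projects the initial state onto it:
\\begin{verbatim}
# Sketch (not from the paper): expand rho(0) in the multiplet basis, Eq. (irho).
import numpy as np

# product basis ordering: |uu>, |ud>, |du>, |dd>
uu, ud, du, dd = np.eye(4)

# multiplet states |1>..|4>
m = np.array([uu,
              (ud + du) / np.sqrt(2.0),
              dd,
              (ud - du) / np.sqrt(2.0)])

# initial state: second spin up, first spin unpolarized
rho0 = 0.5 * (np.outer(uu, uu) + np.outer(du, du))

# coefficients <alpha| rho(0) |beta> in the multiplet basis
coeff = m @ rho0 @ m.T
print(np.round(coeff, 3))
# nonzero entries: coeff[0,0] = 1/2, coeff[1,1] = coeff[3,3] = 1/4,
#                  coeff[1,3] = coeff[3,1] = -1/4, as in Eq. (irho)
\\end{verbatim}
Acting with the ideal swap $U_{\\rm swap}|i,j\\rangle=|j,i\\rangle$ on this state simply interchanges the two spins, which is the behaviour approached in Fig.~1-(a) near $t=\\tau_s$.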
\nHowever, there are several quantum error-correction techniques \nwhich can compensate for imperfections \nintroduced by decoherence during and after the gate \noperation~\\cite{Shor95,Steane,Laflamme,Nielsen}.\nComparing with the result obtained in the \nprevious work~\\cite{Loss} (dotted line),\n we find that\nboth calculations yield similar results for $t>\\tau_s$ \nexcept for the value at $t=\\tau_s$.\nWe think that the discrepancy at $t=\\tau_s$ results from \nthe somewhat simplified evaluation of \nthe evolution operator in Ref.~\\cite{Loss}\nwhen the swap operation occurs.\n\nIn Figs.~1-(b) and (c), we plot the gate fidelity ${\\cal F}$ and \nthe gate purity ${\\cal P}$, which \ncharacterize the intrinsic properties of the gate\nand are defined as~\\cite{Poyatos}\n\\begin{eqnarray}\n{\\cal F}\n&=& \\overline{ \\langle \\psi_0| U_S^{\\dagger}(\\bar{t})\\rho(t)\n | \\psi_0\\rangle }\n = \\frac{1}{6}+\\frac{1}{24}\\left[\\sum_{\\alpha}\n {\\cal E}^{(2)}_{\\alpha\\alpha|\\alpha\\alpha}\n +\\sum_{\\alpha,\\beta} {\\cal E}^{(2)}_{\\alpha\\beta|\\alpha\\beta}\n e^{i \\bar{t}\\omega_{\\alpha\\beta}}\n \\right ], \n\\label{fidelity}\\\\\n{\\cal P} &=& \\overline{{\\rm tr}[\\rho(t)^2] }\n = \\frac{1}{24}\\sum_{\\alpha,\\beta,\\gamma} \n\\left[| {\\cal E}^{(2)}_{\\alpha\\beta|\\gamma\\gamma}|^2\n +\\sum_{\\delta}\\left({\\cal E}^{(2)}_{\\alpha\\beta|\\gamma\\gamma}\n{\\cal E}^{(2)*}_{\\alpha\\beta|\\delta\\delta}\n +| {\\cal E}^{(2)}_{\\alpha\\beta|\\gamma\\delta}|^2 \\right)\n \\right]\n\\label{purity}\n\\end{eqnarray}\nwhere the overbar means an average over all possible \ninitial states $|\\psi_0\\rangle$ and $U_S(\\bar t)$ is the ideal gate operation\nwhich was turned on during the time interval $0\\le t\\le \\tau_s$.\nThe last equalities in \n(\\ref{fidelity}) and (\\ref{purity}) were derived under the condition\nthat both the trace and the hermiticity of \n${\\cal E}^{(2)}$ are preserved within our approximation scheme.\nFor an ideal quantum gate, the gate fidelity ${\\cal F}$ and\nthe gate purity ${\\cal P}$ must be equal to one during the gate operation,\nbecause in that case the evolution operator is unitary. \nOur calculation shows that both ${\\cal F}$ and ${\\cal P}$ \ndecrease almost\nlinearly as time elapses, which clearly indicates the presence of \nthe decoherence effect. As in the case of the spin polarization,\nthe decreasing rates for ${\\cal F}$ and ${\\cal P}$ are close to \nthose obtained in Ref.~\\cite{Loss}; \nhowever, their values at $t=\\tau_s$ are different, \nand our results show more severe\ndecoherence of the spin state for the same parameters.\n\nAnother interesting property of the two-bit gate is \nthe von Neumann entropy of a quantum state, \ndefined as $\\Lambda=-{\\rm tr}[\\rho(t)\\log_2\\rho(t)]$. \nIn Fig.~1-(d), the calculated von Neumann entropy of the spin system\nis plotted.\nFor the initial density operator of Eq.~(\\ref{irho}), \nthe entropy is $\\Lambda=1~({\\rm bit})$ because\nthe eigenvalues of $\\rho(0)$ are $\\{0,0,1\/2,1\/2\\}$. \nAs time goes on, the entropy becomes\nlarger because the thermalization makes the system \nreside equally in all states.\nEventually, the entropy will reach the maximum value of \n$\\Lambda=2~({\\rm bits})$, where all four states are equally probable.\n\nTo examine the effect of the perturbing field on an entangled state, \nwe now consider\na different initial density operator. 
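Before doing so, we note that the entropy values quoted above are straightforward to reproduce. The following minimal sketch (our own illustration, assuming NumPy and the coefficient matrix of Eq.~(\\ref{irho})) diagonalizes $\\rho(0)$ and evaluates $\\Lambda=-{\\rm tr}[\\rho\\log_2\\rho]$:
\\begin{verbatim}
# Sketch (not from the paper): von Neumann entropy of the initial state.
import numpy as np

# rho(0) in the multiplet basis, Eq. (irho)
rho0 = np.array([[0.5,  0.00, 0.0,  0.00],
                 [0.0,  0.25, 0.0, -0.25],
                 [0.0,  0.00, 0.0,  0.00],
                 [0.0, -0.25, 0.0,  0.25]])

def entropy_bits(rho, eps=1e-12):
    # Lambda = -tr[rho log2 rho], ignoring numerically zero eigenvalues
    w = np.linalg.eigvalsh(rho)
    w = w[w > eps]
    return float(-np.sum(w * np.log2(w)))

print(np.linalg.eigvalsh(rho0))          # eigenvalues {0, 0, 1/2, 1/2}
print(entropy_bits(rho0))                # 1.0 bit
print(entropy_bits(np.eye(4) / 4.0))     # 2.0 bits for the fully mixed state
\\end{verbatim}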
We assume that the system is in a \npure spin singlet at $t=0$;\n$|\\psi_0\\rangle\n=(|\\uparrow\\downarrow\\rangle-|\\downarrow\\uparrow\\rangle)\/\\sqrt{2}$\nand its density operator\nis $\\rho(0) = |\\psi_0\\rangle\\langle \\psi_0|$. \nIn Fig.~2-(a), we plot the diagonal \ncomponents of the density operators in the multiplet basis\nas a function of time. $\\rho_{44}$ (solid line) \nloses its coherence\nlinearly to time while other components \n$\\rho_{\\alpha\\alpha}$ grows as time\nelapses. This behavior gives rise to an increasing \nvalue of the entropy as shown in Fig.~2-(d).\nFor the pure initial state one can calculate \nthe fidelity of the gate without too much difficulty.\nWe compare fidelity ($\\langle\\psi_0 |U^{\\dagger}_S(\\bar{t})\n\\rho(t)|\\psi_0\\rangle$) of a given entangled pure state (dotted line) with \nthe gate fidelity (solid line) in Fig.~2-(c) and in addition the \npurity (${\\rm tr}\\rho(t)^2$) of a\ngiven initial entangled state (dotted line) with gate purity (solid line) in \nFig.~2-(d). \nIn both quantities, there are a slight difference between the cases. \nThis implies that although the gate fidelity ${\\cal F}$ and gate purity \n${\\cal P}$ define the global characteristics of gate, fidelity and purity\nof the gate for a specific input state depends on input itself.\n \nNow, we discuss the strength of the decoherence \nwhich depends on $\\Gamma(t)$ and $\\Delta(t)$\nof Eq. (\\ref{Dequation});\n\\begin{eqnarray}\n\\Gamma(t)+i\\Delta(t)\n=\\frac{\\lambda^2\\eta}{\\pi}\\int_0^{\\omega_c} \\omega\\cos{\\omega t}\n\\coth\\left[\\frac{\\omega}{2k_BT}\\right] d\\omega\n-i\\frac{\\lambda^2\\eta}{\\pi}\\int_0^{\\omega_c} \\omega\\sin{\\omega t} d\\omega.\n\\end{eqnarray}\nFor a sufficiently high temperature \n$k_BT\\gg\\omega_c\/2$, $\\Gamma(t)$ and $\\Delta(t)$\nare further simplified to \n\\begin{eqnarray}\n\\Gamma(t)+i\\Delta(t)=\\frac{2\\Gamma_0}{\\pi\\tau_s}\\frac{\\sin{\\omega_c t}}{t}\n-i\\frac{\\Delta_0}{\\tau_s}\\left[ \\frac{\\sin{\\omega_c t}}{\\omega_c t^2}-\n\\frac{\\cos{\\omega_c t}}{t} \\right]\n\\end{eqnarray}\nwith $\\Gamma_0=\\lambda^2\\eta k_BT \\tau_s$ \nand $\\Delta_0=\\lambda^2\\eta\\omega_c\\tau_s\/\\pi$.\nSince a typical value of $\\tau_s$ is $25{\\rm psec}$ for $J_0=1K$ and,\nthus $\\omega_c\\tau_s\\gg1$, $\\Gamma(t)$ and $\\Delta(t)$ \nare rapidly oscillating functions.\nThis implies that the dominant contribution to the decoherence\ncan be written as \n$\\Gamma(t)+i\\Delta(t)=2\\Gamma_0\\delta(t)\/\\tau_s$ \nin the limit of $\\omega_c\\tau_s\\gg1$.\nIn this approximation, we find that $p_{\\alpha\\beta|\\gamma\\delta}(t)$ \nof Eq. (\\ref{peq})\nis proportional to $\\Gamma_0 t$. This behavior is \nattributed to a linear dependence of\nvarious quantities($s,{\\cal F},{\\cal P}$) on time.\nIn addition, we expect that the degradation of the spin \npolarization is also proportional\nto $\\Gamma_0t$. For this, we examine the evolution of the spin polarization\nof the first electron for the initial density operator \nof Eq. (\\ref{irho}) for various values\nof the coupling constant, $\\lambda^2\\eta$, and plot results in Fig. 3-(a).\nAs $\\lambda^2\\eta$ increases, we find that\nmore strong decoherence occurs in spin states and its dependence is linear\non $\\lambda^2\\eta$ as shown in Fig. 3-(b). 
\nThis linear dependence also appears\nin the fidelity and purity.\n\nIn summary, we first derive an exact reduced-density-operator for the output\nquantum states in time-convolutionless form by solving the quantum Liouville\nequation for a noisy quantum channel. The formalism developed in this paper\nwould be general enough to model a noisy quantum channel if various\nHamiltonians for a channel dynamics, environment and an interaction are given.\nSecondly, we calculated various characteristics \nincluding the fidelity, purity, and the change of entropy of\na two-bit quantum gate which is based on the spin exchange\ninteraction between two quantum dots. \nOur calculation shows it is really important to control the decoherence in\nthe quantum gate to protect quantum information against corruption.\nThe decoherence in the quantum logic gate which is extremely\nsensitive to it may be a major obstacle to building \nthe realistic quantum computer, however, \nit it is expected that \nas long as the error rate is below some threshold value, a quantum\ncomputer which can give arbitrary accurate answer can be built \nwith a reasonable model of decoherence.\nIn this respect, it will be interesting to investigate the \nimplementation of quantum error correction\ntechnique for this model.\nAnother interesting study on the present model is to find \nan operator sum representation for the evolution \noperator ${\\cal E}$:\n\\begin{eqnarray}\n{\\cal E}[\\rho]=\\sum_{\\mu} A_\\mu \\rho A_\\mu^\\dagger,\n\\end{eqnarray}\nwhere $A_\\mu$ is an operator acting on the system alone. \nWith the operator sum representation, we can calculate various information\ntheoretical quantities such as the coherent information, entropy exchange, and the\nchannel capacity~\\cite{Schumacher96}. \nWe would like to leave this subject for future work.\n\\acknowledgments{\nWe thank to Dr. Ki Jeong Kong and Dr. Jinsoo Kim for valuable discussions.\nThis work was supported by the Korean Ministry of Science and Technology\nthrough the Creative Research Initiatives Program\nunder Contract No. 98-CR-01-01-A-08.}\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\begin{figure}[ptbh!]\n\t\t\\begingroup\n\t\t\t\\sbox0{\\includegraphics{.\/fig\/multiscale2.pdf}}\n\t\t\t\\includegraphics[clip,trim={.1\\wd0} {.1\\wd0} {.1\\wd0} {.05\\wd0},width=\\linewidth]{.\/fig\/multiscale2.pdf}\n\t\t\\endgroup\n\\caption{An entangled polymer chain under shear flow. Its structure is defined by the distribution of distances between all monomer pairs $(n,m)$. Conceptually, and for the purpose of data analysis, it is subdivided into two (or more) linear regimes, $\\braket{\\mathbf{r}_{nm}^2} \\propto |n-m|$, each an ideal random walk. On the short scale (red ellipses), the anisotropy is just a few percent, but grows much bigger on a large scale (blue ellipse). The chain gradually stretches along the flow, and shrinks perpendicular to it.}\\label{multiscale}\n\\end{figure}\n\nViscoelastic materials have properties of both viscous liquids and elastic solids. Such non-Newtonian fluids are very common, from daily items like food and cosmetics, to raw materials for plastics and fibres. Their complex response to flow is due to the intricate deformation of polymer molecules, shown in Fig.~\\ref{multiscale}. To describe it, let us consider two extreme cases. On one hand there are fully elastic materials, like rubber, which are composed of permanently cross-linked polymer chains. 
Under stress they exhibit the so-called affine deformation, meaning that the mean square distance (MSD) between two monomers $n$ and $m$ is linearly proportional to their separation $\\braket{\\mathbf{r}_{nm}^2} \\propto |n-m|$. On the opposite side, there is the ideal chain (Rouse model), whose deformation is non-affine, scaling as $|n-m|^2$, see Ref.~\\cite{pincus1977dynamics}. In this article we examine an intermediate case, a semi-dilute entangled polystyrene solution, and using a novel fitting approach we show that the deformation is proportional to $|n-m|^\\xi$, where $1<(\\xi=1.2)<2$ is a viscoelastic signature exponent.\n\nThanks to deuteration, small angle neutron scattering (SANS) can measure the structure of an individual polymer chain, called the form factor. Many previous studies have used extensional flow to characterize the relaxation of polymers (creep) over time~\\cite{muller1990polymer, lopez2017chain, wang2017fingerprinting}. In this work we focus on shear flow, which poses more practical challenges, but has an advantage of eschewing the complicated time response, once steady state has been reached. The effect of a shear rate $\\kappa$ on the material structure is quantified with a dimensionless Weissenberg number $\\text{Wi} = \\kappa \\tau$, where $\\tau$ is the relaxation time specific to each fluid, typically a millisecond or more. For polymer melts~\\cite{wignall1981measurements}, shear can be applied on a heated sample, which is then quenched below the glass transition temperature, and the molecular structure is later examined \\emph{ex situ}. Polystyrene (PS) has been measured with SANS using this technique, at an estimated shear rate of $\\text{Wi} = 4$. An asymmetry of 1.7 was detected between the chain radii of gyration along the flow and the vorticity directions~\\cite{Muller1993}. More difficult, but also more industrially relevant experiments measure the fluid structure under \\emph{in situ} shear. Molten polymers like polydimethylsiloxane (PDMS) and polybutadien (PBD) are popular examples, thanks to their low glass transition temperature and comparatively low viscosity. \\emph{In situ} steady flow SANS experiments have not detected any anisotropy of the form factor for either of these samples. The highest shear rate for the PBD experiment~\\cite{Noirez2009} in Couette geometry was $\\text{Wi} = 5.4$ and for the PDMS experiment~\\cite{kawecki2018direct} in cone-plate geometry it was $\\text{Wi} = 0.8$. The only \\emph{in situ} shear experiment that has shown anisotropy of entangled polymers was performed in a Couette cell with PS at $\\text{Wi}\\approx 1$, but since the relaxation time has not been reported, the Weissenberg number is uncertain~\\cite{Yearley2010}. Anisotropy of 1.5 has also been detected in a dilute solution of long but unentangled PS chains~\\cite{lindner1988shear}.\n\nUp to now, the form factor of entangled semi-dilute polymer solutions has not been characterized by SANS under \\emph{in situ} shear. The advantage of the semi-dilute condition is that it has a lower viscosity and glass transition temperature than a melt, facilitating its handling and enabling higher Wi. However, shear induces massive concentration fluctuations, leading up to complete demixing in the extreme case. The resulting SANS signal contains a strong contribution from the structure factor~\\cite{nakatani1994neutron, morfin1999temperature}, hindering the single chain analysis~\\cite{Nakatani1994}. 
Fortunately, it is possible to use deuteration to match the contrast between the solvent and the polymer, fully canceling the inter-chain contribution to scattering, even at high density~\\cite{Hammouda2008}.\n\nThe scenarios where polymers may deform range from dilute, to semi-dilute, to melts. Moreover, a strong anisotropy can also be found in cross-linked polymer networks of gels and rubbers~\\cite{read1997lozenge,basu2011nonaffine}, as well as nanocomposites like polymer-clay~\\cite{takeda2010rheo,schmidt2002small}. Mechanically, these materials are probed in either shear or extensional flow, applied in a steady, oscillatory, or stepwise mode, or even a superposition of multiple stimuli. As a rule of thumb, a strong deformation will stretch the polymer along flow and shrink it perpendicular to flow. However, the detailed shape of the form factor can have considerable differences in the various cases listed above. While many isotropic theories exist for equilibrium~\\cite{kholodenko1993analytical, pedersen1996scattering, sigel2018form}, anisotropic scattering patterns up to now have been analyzed in mostly \\emph{ad hoc} fashion: fitting 1D radial cuts~\\cite{Muller1993}, comparing angular sector averages~\\cite{Yearley2010}, fitting ellipses to isointensity curves~\\cite{Muller1993}, and fingerprinting with spherical harmonics~\\cite{wang2017fingerprinting}. In the present work, we develop a new approach to extract the underlying real-space structure directly from the data, not requiring any knowledge of the molecular motion. The observed form factor originates from the MSD between the monomers, which is a function of their index separation along the chain, see Fig.~\\ref{multiscale}. At equilibrium, this function is a straight line (ideal random walk), while under a strong deformation it becomes some other, unknown curve. Our main novelty is to approximate this curve with a set of straight segments, or layers. This discrete model converges to the exact mathematical result when the number of segments is brought to infinity (a textbook definition of the Riemann integral). Luckily, in real-world experiments the MSD deviation from a perfect straight line is quite small, almost never exceeding $\\times 2$, so there is no need for an infinity of parameters for a good description, and only a few layers are sufficient. In this case, the model is convenient to integrate analytically, and the resulting formula is fitted to the 2D data, to determine the width and the slope of each layer. This structure is then fitted to reveal a power-law of $\\xi=1.2$, and that is our novel measure of structural non-affinity.\n\n\n\\section{Experimental}\nAn unlabeled polymer solution of $C$ chains with $N$ monomers each is characterized by a quantity known as the structure factor\n\\begin{equation}\\label{Sdef}\nS(\\mathbf{q}) = \\frac{1}{NC} \\Big| \\sum_{n=1}^{N} \\sum_{c=1}^{C} e^{-i\\mathbf{q}\\cdot \\mathbf{r}_{nc}}\\Big|^2 = \\frac{|F|^2}{NC}\n\\end{equation}\nwhich is the modulus squared of the Fourier transform $F$ of all the monomer positions $\\mathbf{r}_{nc}$. While there are $(NC)^2$ terms in the double sum, only the nearest neighbours of each scatterer contribute to the structure, hence it is normalized by $NC$. With this convention, a structureless fluid (i.e. ideal gas) has $S(\\mathbf{q}) = 1$. In real fluids, one can measure deviations from this baseline which is a signature of their molecular interactions~\\cite{pedersen2004scattering}. 
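To make the normalization convention of Eq.~(\\ref{Sdef}) concrete, the short sketch below (our own illustration, assuming NumPy; the Gaussian-coil configurations are generated ad hoc and are not meant to represent the measured sample) evaluates $S(\\mathbf{q})$ by brute force from a set of monomer positions and shows the approach to the ideal-gas baseline $S(\\mathbf{q})=1$ at large $q$:
\\begin{verbatim}
# Sketch (not from the paper): brute-force S(q) of Eq. (Sdef) for C ideal chains.
import numpy as np

rng = np.random.default_rng(1)
N, C, lam = 200, 10, 1.0     # monomers per chain, number of chains, step length

# Gaussian (ideal random walk) chains placed at random in a box of 50*lam
chains = [np.cumsum(rng.normal(scale=lam / np.sqrt(3.0), size=(N, 3)), axis=0)
          + rng.uniform(0.0, 50.0 * lam, size=3) for _ in range(C)]
r = np.concatenate(chains)   # all N*C monomer positions

def S_of_q(q_vec):
    # S(q) = |sum_j exp(-i q.r_j)|^2 / (N C)
    F = np.exp(-1j * (r @ q_vec)).sum()
    return float((F * F.conjugate()).real) / (N * C)

for q in (0.1, 0.5, 2.0, 10.0):   # in units of 1/lam
    print(q, S_of_q(np.array([q, 0.0, 0.0])))
# for q*lam >> 1 the phases decorrelate and S(q) fluctuates around 1
\\end{verbatim}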
However, the focus in this study is to obtain the single chain form factor, defined as\n\\begin{equation}\\label{Pdef}\nP(\\mathbf{q}) = \\frac{1}{N^2} \\Big|\\sum_{n=1}^N e^{-i\\mathbf{q}\\cdot \\mathbf{r}_n}\\Big|^2\n\\end{equation}\nwith the normalization chosen to have $P(\\mathbf{q}\\rightarrow 0) = 1$. Using a mixture of deuterated (D, phase~1) and hydrogenated (H, phase~2) chains, it is possible to isolate the form factor $P(\\mathbf{q})$ even in dense solutions where the chains strongly overlap. This method, called the Zero Average Contrast, is described in the handbook Ref.~\\cite{Hammouda2008} (see Eq.~35 on page 324), and is a standard SANS technique. Here we briefly outline its derivation. The experimental scattering cross-section from a sample of volume $V$ consists of three terms:\n\\begin{equation}\\label{Vsigma}\n\\frac{d\\Sigma}{d\\Omega} = \\frac{1}{V}|b_S F_0 + b_D F_1 + b_H F_2|^2\n\\end{equation}\nwhere $F_{0,1,2}$ are the Fourier transforms of the solvent, D, and H monomer positions respectively, while $b_{S,D,H}$ are the corresponding scattering lengths of each nuclear species. While the volume $v$ of one monomer and $v_S$ of one solvent molecule are in general different, the system can be assumed to be incompressible, leading to: $v_S F_0 + v F = 0$, where $F=F_1+F_2$ is the Fourier transform of all polymers as defined in Eq.~\\eqref{Sdef}. The solvent term $F_0$ is plugged into Eq.~\\eqref{Vsigma}, leaving only the polymer part:\n\\begin{multline}\\label{Ftrans}\n\\left(\\frac{V}{v^2}\\right)\\frac{d\\Sigma}{d\\Omega} = |\\rho_1 F_1 + \\rho_2 F_2|^2 = \\\\\n\\rho_1 \\rho_2 |F|^2 + \\rho_1(\\rho_1-\\rho_2) |F_1|^2 + \\rho_2 (\\rho_2-\\rho_1) |F_2|^2\n\\end{multline}\nFor convenience, the scattering length density (SLD) contrast has been defined as $\\rho_1 = b_D\/v - b_S\/v_S$ and $\\rho_2 = b_H\/v - b_S\/v_S$ for the two labels. The Fourier transform squared of each phase can be further decomposed into the diagonal (intra-chain) and the off-diagonal (inter-chain) terms:\n\\begin{equation}\\label{F1}\n|F_1|^2 = C_1 N^2 P(\\mathbf{q}) + C_1(C_1-1)Q(\\mathbf{q}),\n\\end{equation}\nand similarly for $F_2$ and $F$. Note that the total number of chains $C_1+C_2=C$ is fixed. The auxiliary function\n\\begin{subequations}\n\\begin{align}\nQ(\\mathbf{q}) &= \\sum_{n,m=1}^N \\braket{e^{-i\\mathbf{q}\\cdot \\mathbf{r}_{\\alpha n, \\beta m}}}_{\\alpha \\neq \\beta}\\\\\n&= \\frac{NS(\\mathbf{q})-N^2 P(\\mathbf{q})}{C-1}\\label{Qaux}\n\\end{align}\n\\end{subequations}\nis the interference between any two different chains $\\alpha \\neq \\beta$. The definition of $Q(\\mathbf{q})$ involves only the monomer positions, not their SLD, since the contrast information has already been factored out in Eq.~\\eqref{Ftrans}. The weight in front of $Q(\\mathbf{q})$ is proportional to $C^2$, whereas the weight of $P(\\mathbf{q})$ has a $C^1$ dependence, and this difference enables the tuning of the relative contributions of the form and the structure factors. Eq.~\\eqref{Qaux} is plugged into Eq.~\\eqref{F1}, which is then plugged into Eq.~\\eqref{Ftrans}, revealing the scattered intensity in terms of the form and the structure factors only:\n\\begin{multline}\n\\left(\\frac{\\phi_1+\\phi_2}{v}\\right) \\frac{d\\Sigma}{d\\Omega} =\\\\ (\\rho_1-\\rho_2)^2 \\phi_1 \\phi_2 N P(\\mathbf{q}) + (\\phi_1 \\rho_1 + \\phi_2 \\rho_2)^2 S(\\mathbf{q})\n\\end{multline}\nIt is the same formula as used in other SANS studies~\\cite{hammouda2015single}. 
In particular, it shows that if we set the average contrast to $\\phi_1 \\rho_1 + \\phi_2 \\rho_2 = 0$, the structure factor contribution $S(\\mathbf{q})$ vanishes, since the three inter-chain signals from hPS-hPS, dPS-dPS, and hPS-dPS add up to zero in this case.\n\n\\begin{figure}[ptbh!]\n\t\t\\begingroup\n\t\t\t\\sbox0{\\includegraphics{.\/fig\/static.pdf}}\n\t\t\t\\includegraphics[clip,trim={.1\\wd0} 0 {.15\\wd0} 0,width=\\linewidth]{.\/fig\/static.pdf}\n\t\t\\endgroup\n\\caption{The scattering cross-section from a quiescent solution, multiplied by $q^2$ to reveal the ideal random walk character (flat line) of the chain form factor. The solid black line is the Debye function, Eq.~\\eqref{debye}, fitted to $R = \\SI{27.12}{\\nano\\meter}$.}\\label{static}\n\\end{figure}\n\nOur sample was an entangled semi-dilute polymer solution, with a volume fraction $\\phi_1 = 0.172$ of deuterated PS (\\SI{575}{\\kilo\\gram\\per\\mole}, $N=5127$, $\\text{PDI} = 1.09$) and $\\phi_2 = 0.0998$ of hydrogenated PS (\\SI{510}{\\kilo\\gram\\per\\mole}, $N=5000$, $\\text{PDI} = 1.1$), purchased from Polymer Source. It was prepared by first dissolving the powdered PS mix in a glass beaker with a large amount of deuterated toluene, using a magnetic stirrer. After removing the stirrer, the solution was left in a ventilated fume hood for several days until the toluene has evaporated to the volume fraction quoted above, which was determined by weighing the dry and the dissolved polymer, minus the container. The detailed rheological characterization of a similar sample has been reported in Ref.~\\cite{korolkovas2017polymer}. \n\nIn our region of interest, $\\mathcal{O}(qR) = 1$, the structure and the form factors as defined in Eqs.~\\eqref{Sdef} and \\eqref{Pdef} both have a similar magnitude of $\\mathcal{O}(S(q)\\approx P(q)) = 1$. Using the SLD values $\\rho_1 = \\SI{0.47e10}{\\centi\\meter^{-2}}$ and $\\rho_2 = \\SI{-4.5e10}{\\centi\\meter^{-2}}$ we can estimate the ratio of the two intensities as\n\\begin{equation}\n\\frac{ (\\phi_1 \\rho_1 + \\phi_2 \\rho_2)^2}{(\\rho_1-\\rho_2)^2 \\phi_1 \\phi_2 N} = \\SI{6e-5}{} \\ll 1\n\\end{equation}\nwhich is quite small, thanks also to the high degree of polymerization $N=5000$. Even though our system is not exactly contrast-matched, the structure factor contribution is negligible beyond $q>\\SI{0.04}{\\nano\\meter^{-1}}$, see Fig.~\\ref{static}. This quiescent data~\\cite{d11data} was recorded on the instrument D11 (Institut Laue-Langevin, Grenoble, France) providing a wide $q$ range, in this case \\SIrange{0.004}{4}{\\per\\nano\\meter}, covering distances from multi-chain clusters to a single monomer. Our focus is on intermediate $q$ values, which are well fitted with the Debye function, Eq.~\\eqref{debye}, establishing the radius of gyration to be $R=\\SI{27.12}{\\nano\\meter}$. To assess the validity of this fit, we compare it with literature data~\\cite{fetters1994molecular} for dilute PS of the same molecular weight in toluene $R_{\\text{TOL}} = \\SI{29.9}{\\nano\\meter}$ (maximum swelling in a good solvent), and in cyclohexane $R_{\\text{CH}} = \\SI{20.2}{\\nano\\meter}$ (a theta solvent which fully screens the excluded volume). 
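For reference, the two numerical statements above, the Debye fit and the smallness of the structure-factor weight, can be summarized in a short sketch. It is our own illustration, assuming NumPy and the standard Debye form $P(q)=2\\left(e^{-x}+x-1\\right)/x^2$ with $x=(qR)^2$; the actual data reduction and fitting procedure of the experiment is not reproduced here:
\\begin{verbatim}
# Sketch (not from the paper): Debye form factor and the contrast weight ratio.
import numpy as np

def debye(q, R):
    # standard Debye form factor of an ideal chain, P(q -> 0) = 1
    x = (q * R) ** 2
    return 2.0 * (np.exp(-x) + x - 1.0) / x ** 2

q = np.array([0.01, 0.04, 0.1, 0.4])   # nm^-1
print(debye(q, 27.12))                 # R from the quiescent fit, in nm

# relative weight of S(q) versus P(q) for the sample composition used here
phi1, phi2, N = 0.172, 0.0998, 5000    # dPS and hPS volume fractions; monomers
rho1, rho2 = 0.47e10, -4.5e10          # SLD contrasts in cm^-2
ratio = (phi1 * rho1 + phi2 * rho2) ** 2 / ((rho1 - rho2) ** 2 * phi1 * phi2 * N)
print(ratio)                           # about 6e-5, as quoted in the text
\\end{verbatim}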
Our semi-dilute solution of volume fraction $\\phi=\\phi_1+\\phi_2=0.27$ is partially screened, so the radius is estimated to be $(1-\\phi) R_{\\text{TOL}} + \\phi R_{\\text{CH}} = \\SI{27.3}{\\nano\\meter}$, in agreement with our data.\n\nThe form factor of an ideal random walk has a power law behaviour of $P \\propto (qR)^0$ for $qR\\ll 1$ and $P \\propto (qR)^{-2}$ for $qR\\gg 1$, as evidenced in Fig.~\\ref{static}. Eventually at high $q$ the scattering starts to probe correlations inside the blob of size $\\mathcal{O}(\\lambda) = \\SI{1}{\\nano\\meter}$, which is a typical distance between the semi-dilute chains, called the mesh size. Within the blob ($q\\lambda \\gg 1$) the excluded volume interactions are not screened, so the polymer form factor changes towards the scaling of $P \\propto (q\\lambda)^{-5\/3}$, which is a signature of a self-avoiding random walk (see textbook Refs.~\\cite{deGennesScalingConcepts, doi1988theory}). In addition, the scattering from density fluctuations at the chemical monomer level may become visible for the highest $q$-values (not measured here). On the opposite side of the spectrum, the ultra low $q$ data also deviates from Debye, this time due to scattering from very slowly relaxing density inhomogeneities spanning large distances, likely hundreds of chains or more~\\cite{morfin1999temperature}. Extreme viscoelastic samples like ours are difficult to fully equilibrate, as some residual flow persists for many hours if not days (one experiment has been running for almost 100 years~\\cite{edgeworth1984pitch}). Even when left perfectly still, the sample may keep flowing due to an interplay of gravity and the capillary forces between the narrow gap of the rheometer plates. This can induce concentration fluctuations (see Refs.~\\cite{wu1991enhanced, hashimoto1992butterfly, groisman2000elastic, saito2002structures}), and while their amplitude may be tiny, when integrated over a long distance, a strong SANS signal can result at ultra low $q$.\n\nOur shear experiments were conducted on PAXY (Laboratoire L{\\'e}on Brillouin, Saclay, France), with a narrower $q$ range set at \\SIrange{0.05}{0.5}{\\per\\nano\\meter}, where the scattering is fully described by the Debye function. We have used a custom-made vertical sealed cone-plate shear cell~\\cite{Kawecki2016}, designed for both SANS and NSE instruments and allowing a smaller liquid volume than typical Couette cells~\\cite{Yearley2010}, which can be a considerable advantage for costly and rare deuterated samples. It is also well suited for shearing fluids which exhibit non-linear viscoelastic phenomena such as the rod-climbing effect. A vertical cone-plate geometry is a necessity for rheo-NSE and allows a direct measurement of both structure (SANS) and dynamics (NSE) in the same setup. In our experiment the shear rate was $\\kappa = \\SI{300}{\\per\\second}$, corresponding to $\\text{Wi}=30$. Data collection has lasted \\SI{4.25}{\\hour} per spectrum, at a temperature of \\SI{45}{\\celsius}, which is the same as used at D11 for the quiescent measurement.\n\nIn this article we only report data from the SANS experiment carried over one day. After that, the experiment continued for three more days with NSE, which will be a separate subject. However, for full disclosure we note that after these four days of shearing, we have spotted some wear of the cell sealing, causing aluminum and teflon impurities to have leached into the sample. 
Solid particles are known to give rise to Porod scattering of $P\\propto q^{-4}$, and fortunately there was no trace of it in the range covered by PAXY, where the Debye law $P\\propto q^{-2}$ dominates. As the SANS data was collected during the first day of shearing, the impurities at that stage must have been very dilute and hence invisible to the beam. On top of that, the particle size must have been much greater than the polymer radius of gyration, falling outside of the SANS range. Such big particles cannot interfere with the polymer dynamics, as that is only possible in polymer-nanocomposites where the two components are similar-sized~\\cite{schmidt2002small}. These specialty materials require advanced chemical synthesis and cannot be produced by just using mechanical friction to grind up some aluminum dust. Therefore, even if we would have had a considerable percentage of sample contamination, its effect could not have altered the entanglement physics, but only lowered the overall polymer density. This means that the actual Wi may have been 29 instead of 30 we claim. Either way, it is unlikely that these impurities could have altered the polymer form factor beyond the uncertainty of the fit (\\SI{15}{\\percent}), as explained in the next section.\n\n\\section{Results}\n\\begin{figure*}[ptbh!]\n\\begin{subfigure}{.49\\textwidth}\n\\begingroup\n\t\t\t\\sbox0{\\includegraphics{.\/fig\/hiQexp.pdf}}\n\t\t\t\\includegraphics[clip,trim={.13\\wd0} 0 {.17\\wd0} 0,width=\\linewidth]{.\/fig\/hiQexp.pdf}\n\t\t\\endgroup\\caption{Experimental data}\\label{hiQexp}\n\\end{subfigure}\n\\hfill\n\\begin{subfigure}{.49\\textwidth}\n\t\t\\begingroup\n\t\t\t\\sbox0{\\includegraphics{.\/fig\/hiQfit.pdf}}\n\t\t\t\\includegraphics[clip,trim={.13\\wd0} 0 {.17\\wd0} 0,width=\\linewidth]{.\/fig\/hiQfit.pdf}\n\t\t\\endgroup\\caption{Analytical fit, Eq.~\\eqref{fullq}}\\label{hiQfit}\n\\end{subfigure}\n\n\\begin{subfigure}{.49\\textwidth}\n\t\t\\begingroup\n\t\t\t\\sbox0{\\includegraphics{.\/fig\/residuals.pdf}}\n\t\t\t\\includegraphics[clip,trim={.13\\wd0} 0 {.17\\wd0} 0,width=\\linewidth]{.\/fig\/residuals.pdf}\n\t\t\\endgroup\\caption{Residuals plot}\\label{residuals}\n\\end{subfigure}\n\\hfill\n\\begin{subfigure}{.49\\textwidth}\n\t\t\\begingroup\n\t\t\t\\sbox0{\\includegraphics{.\/fig\/schemeplot.pdf}}\n\t\t\t\\includegraphics[clip,trim={.2\\wd0} 0 {.2\\wd0} 0,width=\\linewidth]{.\/fig\/schemeplot.pdf}\n\t\t\\endgroup\\caption{Real-space structure}\\label{model}\n\t\t\\end{subfigure}\n\\caption{(a-c) The scattered intensity under shear $P(\\mathbf{q})$, divided by the quiescent signal $P_{\\text{iso}}(\\mathbf{q})$. This plot removes the Debye envelope $1\/q^2$, highlighting the structural changes induced by the shear. The data (a) is fitted with the analytical function (b), and their difference is plotted in (c), showing that the fit accounts for \\SI{85}{\\percent} of the signal or more. (d)~The inferred mean square distance (MSD) between two monomers, for different directions probed by the scattering vector $\\mathbf{q}$. The dotted lines are a sketch of what the true function may look like. The straight black lines show our piecewise approximation, Eq.~\\eqref{regimes}. The dashed black line is the isotropic MSD found at equilibrium, Eq.~\\eqref{iso}.}\\label{exp}\n\\end{figure*}\n\\subsection{Theory}\nPrevious studies on sheared polymers have for the most part focused on the deformation as a function of shear rate, or time in the case of extensional flow~\\cite{wang2017fingerprinting}. 
In the present experiment we elucidate the lesser understood aspect, which is the structural, or the $q$ dependence. For this goal, the entire available beamtime was devoted to only one shear rate $\\text{Wi} = 30$, the highest possible with the setup. In Fig.~\\ref{hiQexp} we plot the 2D scattering pattern under shear, divided by the quiescent signal. Anisotropy at low $q$ is clearly visible, showing that the chains stretch along the flow and shrink along the vorticity, but the extent of the deformation decreases as we probe deeper into the chain interior (higher $q$). The observed signal originates from the distribution function $\\Psi(\\mathbf{r}_{nm})$ of the distance $\\mathbf{r}_{nm} = x\\hat{\\textbf{\\i}} + y\\hat{\\textbf{\\j}} + z\\hat{\\textbf{k}}$ between the monomers $n$ and $m$ (we drop the subscript $nm$ from now on). In equilibrium, it is well described by a Gaussian function (the normalization factor is not shown):\n\\begin{equation}\n\\Psi(\\mathbf{r}, \\text{Wi}\\ll 1) = \\exp \\left(-\\frac{x^2+y^2+z^2}{2|n-m|\\lambda^2}\\right)\n\\end{equation}\nThis result is exact for an infinite ideal random walk, and is very often applied for real polymers too. Under an experimentally reasonable amount of deformation, our assumption is that the functional form of the distribution $\\Psi$ remains close to a Gaussian, but its shape is now a tilted ellipsoid rather than a sphere:\n\\begin{multline}\\label{anisogauss}\n\\Psi(\\mathbf{r}, \\text{Wi}\\gtrsim 1) =\\\\\n\\exp \\left(-\\frac{1}{2}\\left(A_{xx}x^2 + 2A_{xy}xy + A_{yy}y^2+ A_{zz}z^2\\right)\\right)\n\\end{multline}\nThe ellipse is specified by the anisotropy matrix $A_{ij}(|n-m|)$. In other words, we account for the observed deformation of the polymer form factor through a change of the Gaussian's dimensions and orientation, rather than a change of the function itself. It is justified, since the scattering is mainly sensitive to the width of the monomer distribution, while its precise shape is less important. Nevertheless, for extreme deformations this assumption may break down, in which case we could extend Eq.~\\eqref{anisogauss}, for example by adding higher order terms of the Hermite expansion. This would introduce additional fitting parameters, which would enable an experimental determination of the magnitude of those extra terms. For now, only the zeroth order term (a regular Gaussian) is considered, as it will be seen to already produce a satisfactory fit to the data. In this case, the scattering contribution from two monomers is (see Appendix~2.1 in Ref.~\\cite{doi1988theory}):\n\\begin{equation}\\label{expi}\n\\braket{e^{i\\mathbf{q}\\cdot \\mathbf{r}}} = e^{-\\braket{(\\mathbf{q}\\cdot \\mathbf{r})^2}\/2}\n\\end{equation}\nThe argument of the exponent, which we call the MSD function,\n\\begin{equation}\\label{msddef}\n\\braket{(\\mathbf{q}\\cdot \\mathbf{r})^2} = \\braket{x^2} q_x^2 + 2\\braket{xy} q_x q_y + \\braket{y^2} q_y^2 + \\braket{z^2} q_z^2\n\\end{equation}\ncontains 4 averages, derived from the 4 components of the anisotropy matrix $A_{ij}$:\n\\begin{equation}\\label{averages}\n\\begin{pmatrix}\n\\braket{x^2}\\\\ \\braket{y^2}\\\\ \\braket{xy}\\\\ \\braket{z^2} \\end{pmatrix} = \n\\begin{pmatrix} D\/A_{xx}\\\\ D\/A_{yy}\\\\ (1-D)\/A_{xy}\\\\ 1\/A_{zz}\\end{pmatrix}\n\\end{equation}\nwhere we have defined $D = 1\/\\left(1-A_{xy}^2\/(A_{xx}A_{yy})\\right)$ for brevity. 
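Before carrying out the chain summation, we note that Eqs.~\\eqref{expi} and \\eqref{averages} are easy to verify numerically: the four averages are simply the elements of the covariance matrix, i.e. the inverse of the quadratic form in Eq.~\\eqref{anisogauss}. A minimal sketch (the matrix entries below are arbitrary positive-definite test values, not fit results; numpy is assumed):
\\begin{verbatim}
import numpy as np

Axx, Axy, Ayy, Azz = 2.0, 0.6, 1.5, 1.2     # test anisotropy matrix

# covariance of the xy block = inverse of the quadratic form in Eq. (anisogauss)
cov = np.linalg.inv(np.array([[Axx, Axy], [Axy, Ayy]]))

# closed-form averages, Eq. (averages); <z^2> is trivially 1/Azz
D = 1.0 / (1.0 - Axy**2 / (Axx*Ayy))
print(np.allclose([D/Axx, D/Ayy, (1 - D)/Axy],
                  [cov[0, 0], cov[1, 1], cov[0, 1]]))   # True

# Monte Carlo check of the Gaussian identity, Eq. (expi)
q = np.array([0.8, -0.5])
r = np.random.default_rng(0).multivariate_normal([0, 0], cov, 200_000)
print(np.mean(np.exp(1j*(r @ q))).real, np.exp(-0.5 * q @ cov @ q))
\\end{verbatim}
The two printed numbers agree to within the Monte Carlo noise, which is of order $10^{-3}$ for this sample size.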
We now plug Eq.~\\eqref{expi} into Eq.~\\eqref{Pdef}, which in the continuous limit becomes a double integral\n\\begin{equation}\\label{Sq}\nP(\\mathbf{q}) = \\frac{1}{N^2}\\int_0^N dn \\int_0^N dm\\, \\exp \\left(-\\frac{\\braket{(\\mathbf{q}\\cdot \\mathbf{r}_{nm})^2}}{2} \\right)\n\\end{equation}\nAn exact analytical solution is available in equilibrium (see Section~2.4 in Ref.~\\cite{doi1988theory}), since the argument\n\\begin{equation}\\label{iso}\n\\braket{(\\mathbf{q}\\cdot \\mathbf{r}_{nm})^2}\/2 = a|n-m|\/N\n\\end{equation}\nis then a straight line with a constant dimensionless slope $a = (qR)^2$, where $R^2=N\\lambda^2\/6 = (\\SI{27.12}{\\nano\\meter})^2$ is the equilibrium radius of gyration. The result is known as the Debye function:\n\\begin{equation}\\label{debye}\nP_{\\text{iso}}(\\mathbf{q}) = \\frac{2\\left( e^{-a}-1+a\\right)}{a^2}\n\\end{equation}\nUnder shear, the slope is not constant, as evidenced by the $q$-dependence of the anisotropy, Fig.~\\ref{hiQexp}. The functional form of the MSD dependence on $|n-m|$ is due to the specific molecular and topological interactions, and at present no suitable theory is available for entangled polymer solutions. Luckily, this information is not necessary to fit the SANS data, and we propose to integrate Eq.~\\eqref{Sq} by approximating the unknown MSD function with a series of straight lines. Any reasonable curve can be approximated to an arbitrarily high accuracy with a set of ever shorter segments. In this study we only use two of them:\n\\begin{equation}\\label{regimes}\n\\frac{\\braket{(\\mathbf{q}\\cdot \\mathbf{r}_{nm})^2}}{2} = \\begin{cases}\nb|n-m|\/N, & |n-m|< N_1\\\\\nc|n-m|\/N + (b-c)\\nu, & |n-m|> N_1\n\\end{cases}\n\\end{equation}\nHere $N_1$ is the first layer ``thickness'', specified through the fitting parameter $0<(\\nu=N_1\/N)<1$. In our cone-plate experiment, only the $xz$ plane could be measured, so the slopes from Eq.~\\eqref{msddef} reduce to\n\\begin{align}\nb &= R^2\\left[(\\alpha q_x)^2 + (\\beta q_z)^2\\right]\\\\\nc &= R^2\\left[(\\gamma q_x)^2 + (\\delta q_z)^2\\right]\n\\end{align}\nalthough if 3D data were available from Couette or \\emph{ex situ} experiments, the full Eq.~\\eqref{msddef} would be retained. Now the two slopes in two dimensions are described by four fitting parameters $(\\alpha,\\beta,\\gamma,\\delta)$, which trace back to the original inter-monomer distribution, Eq.~\\eqref{anisogauss}. Our generic model is plugged into the scattering function, Eq.~\\eqref{Sq}, and integrated piece by piece to yield:\n\\begin{multline}\\label{fullq}\nP(\\mathbf{q}) = \\frac{2}{b^2} \\Bigl( b-1 + e^{-\\nu b} \\times \\\\\n\\left\\{1 + (b\/c)^2\\left[e^{(\\nu-1)c}-1\\right] + (1-\\nu)(b\/c)(b-c)\\right\\} \\Bigr)\n\\end{multline}\nwhich is our main result. It reduces to the Debye function, Eq.~\\eqref{debye}, in the limits $b=c$ and $\\nu \\rightarrow 0$ or $1$, as it must. In principle, more than two layers can be added, keeping in mind that every new layer introduces three fitting parameters (its thickness and the two $(x,z)$ slopes). Although the formulas become tedious with extra layers, the analytical solution always exists and is straightforward to obtain with symbolic algebra software (Matlab, Mathematica, etc.). 
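Equation~\\eqref{fullq} is easy to cross-check against a brute-force evaluation of Eq.~\\eqref{Sq}; a short numerical sketch (the slopes and layer thickness below are arbitrary test values, not the fitted ones):
\\begin{verbatim}
import numpy as np

def debye(a):                          # Eq. (debye)
    return 2.0*(np.exp(-a) - 1.0 + a) / a**2

def p_closed(b, c, nu):                # piecewise result, Eq. (fullq)
    inner = (1.0 + (b/c)**2 * (np.exp((nu - 1.0)*c) - 1.0)
             + (1.0 - nu)*(b/c)*(b - c))
    return 2.0/b**2 * (b - 1.0 + np.exp(-nu*b)*inner)

def p_direct(b, c, nu, n=1000):        # double integral, Eq. (Sq)
    s = (np.arange(n) + 0.5) / n
    mu = np.abs(s[:, None] - s[None, :])
    f = np.where(mu < nu, b*mu, c*mu + (b - c)*nu)   # Eq. (regimes)
    return np.exp(-f).mean()

print(p_closed(3.0, 1.2, 0.3), p_direct(3.0, 1.2, 0.3))  # both ~0.498
print(p_closed(2.0, 2.0, 0.5), debye(2.0))  # b = c recovers the Debye limit
\\end{verbatim}
The brute-force integral also provides a convenient template for testing multi-layer generalizations before deriving their closed forms.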
\n\n\\subsection{Experimental application}\n\\begin{table}[htb]\n\\begin{tabular}{l | l l l}\n\t\t\t\t\t\t\t\t\t\t&\tSlope $x$\t& Slope $z$\t& Thickness\\\\\n\t\t\t\t\t\t\t\t\t\t\\hline\nLayer 1 (high $q$) & $\\alpha = 1.005$\t&\t$\\beta = 0.976$ & $\\nu= 0.09$\\\\\nLayer 2 (low $q$)\t& $\\gamma = 1.556$ &\t$\\delta = 0.835$ & $1-\\nu=0.91$\n\\end{tabular}\\caption{Chain deformation parameters}\\label{fitparams}\n\\end{table}\n\nEq.~\\eqref{fullq} is divided by its isotropic counterpart, Eq.~\\eqref{debye}, and fitted in 2D to the experimental data shown in Fig.~\\ref{hiQexp}. The five fitting parameters are listed in Table~\\ref{fitparams}. They were obtained by a standard genetic fitting algorithm. Using these values, Eq.~\\eqref{fullq} is plotted in Fig.~\\ref{hiQfit} and is seen to match the experimental data reasonably well. To assess the quality of the fit, we show the residuals (difference between the data and the fit) in Fig.~\\ref{residuals}. Admittedly, some structure remains unfitted, mostly the low $q_x$ area with a difference of $0.05$. In comparison, the amplitude of the signal change in the same area is $0.3$, meaning that our fit accounts for at least $0.25\/0.3=\\SI{85}{\\percent}$ of the observed phenomenon. The remainder is likely to be a combination of some structure factor contribution due to imperfect contrast-matching, impurities, instrument bias, and an inexact fitting function.\n\nUsing the parameters from Table~\\ref{fitparams}, the piecewise model of Eq.~\\eqref{regimes} is plotted in Fig.~\\ref{model} with solid black lines for the $q_x$ and $q_z$ directions. Quite obviously, a realistic polymer structure cannot have sharp kinks, so we have fitted two smooth curves (dotted red and blue) to our piecewise model. These fits are made with a semi-empirical function\n\\begin{multline}\\label{semirouse}\n\\frac{\\braket{(\\mathbf{q}\\cdot \\mathbf{r}_{nm})^2}}{2(qR)^2} = \\left|\\frac{n-m}{N}\\right| +\\\\\n\\left( \\frac{(B_x q_x)^2 - (B_z q_z)^2}{q^2} \\right) \\left|\\frac{n-m}{N}\\right|^{\\xi} \n\\end{multline}\nFirst, the anisotropic amplitudes are fixed at $B_x = \\mathpalette\\DHLhksqrt{\\nu \\alpha^2 + (1-\\nu) \\gamma^2} = 1.51$ and $B_z = \\mathpalette\\DHLhksqrt{\\nu \\beta^2 + (1-\\nu) \\delta^2} = 0.85$, to exactly match the endpoints of the piecewise Eq.~\\eqref{regimes}. Second, the exponent $\\xi$ is determined by minimizing the difference between the piecewise and the smooth curves. The result is $\\xi = 1.19$ and $\\xi = 1.18$ for the $x$ and $z$ axes respectively. Optimizing for both axes simultaneously still leaves us with 1.19, since the $z$-axis amplitude is much smaller. This semi-empirical expression requires merely three parameters ($B_x$, $B_z$, $\\xi$) to describe the entire experiment.\n\n\\subsection{Stress estimation}\nSANS is a tool to measure structure on the large scale of the whole molecule. Mechanical stress, on the other hand, arises from the structure on the short scale of one molecular bond. Yet, there is considerable overlap between these two techniques, and we shall now attempt to extract the stress tensor values from our fit of the SANS data. First, SANS is measured in units of length alone, as the intensity is given in 1\/cm, and the $q$-vector is in 1\/nm. In contrast, stress is measured in units of Pa, or N\/m${}^2$, so clearly some additional information is required to connect these two methods. 
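Before introducing that additional information, we note that the amplitudes quoted above follow directly from the entries of Table~\\ref{fitparams}; a two-line check in Python, using the rounded table values:
\\begin{verbatim}
nu, alpha, beta, gamma, delta = 0.09, 1.005, 0.976, 1.556, 0.835
Bx = (nu*alpha**2 + (1 - nu)*gamma**2)**0.5   # 1.51
Bz = (nu*beta**2  + (1 - nu)*delta**2)**0.5   # 0.85
print(round(Bx, 2), round(Bz, 2))
\\end{verbatim}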
Coarse-grained polymers are often described by a mechanical model of beads joined by harmonic springs, in which case the stress tensor is derived to be~\\cite{mcleish2002tube}:\n\\begin{equation}\\label{stress}\n\\sigma_{ij} = \\left(\\frac{\\rho N_A k_B T}{3 M_r}\\right) \\frac{3\\braket{r_i r_j}}{\\lambda^2}\n\\end{equation}\nwhere $r_i$ is the bond vector in the $i^{\\text{th}}$ direction $i=(x,y,z)$. The pre-factor contains the polymer mass density, the Avogadro number, the thermal energy, and the molecular mass of a monomer. The product in the big parentheses amounts to $\\sigma_0=\\SI{2e6}{\\pascal}$ for our polystyrene solution. To extract the bond length from SANS, we go back to Eq.~\\eqref{regimes}, which says that at short distances, the polymer has the structure of an ideal random walk with mean square step lengths $\\braket{r_x^2} = (\\alpha \\lambda)^2\/3$ and $\\braket{r_z^2} = (\\beta \\lambda)^2\/3$ along $x$ and $z$ respectively. Plugging this into Eq.~\\eqref{stress}, we obtain a rheological quantity called the third normal stress difference\n\\begin{equation}\nN_3 = \\sigma_{xx}-\\sigma_{zz} = \\sigma_0 \\left(\\alpha^2 - \\beta^2\\right) = \\SI{1.3e5}{\\pascal}\n\\end{equation}\nSince we could not measure this quantity with our shear apparatus, we compare it with the available literature data for a similar polymer. Ref.~\\cite{kannan1992third} reports oscillatory shear results for polyisoprene of $M_w = \\SI{170}{\\kilo\\gram\\per\\mole}$, which is 3 times shorter than our polystyrene, but also 3 times denser, as they have used a melt instead of a semi-dilute solution. Judging from the dynamical moduli data in Fig.~3c of that study, the cross-over frequency, which corresponds to $\\text{Wi}=1$, is at $\\omega a_T = \\SI{6e-3}{}$. To compare with our conditions of $\\text{Wi}=30$, we look at their Fig.~4c and frequency $\\omega a_T = 0.18$. Reading off the stress axis we find $N_3 = \\SI{6e4}{\\pascal}$, which is about half of the magnitude that we infer from our piecewise fit of SANS. This shows that our structural data analysis is reasonably consistent with an independent rheology perspective. We attribute the remaining discrepancy of $\\mathcal{O}(2)$ partly to the difference of sample chemistry, but mostly to the uncertainty of the SANS data in the high $q$ region, which is the important bit for calculating the stress. A more precise comparison with rheology may become available in the future, by improving the resolution and the counting time of the SANS setup, and by collecting data in more directions than just the $xz$ plane.\n\n\\section{Discussion}\nOur experiment can be compared to an earlier work in Ref.~\\cite{Muller1993}, where an entangled PS melt has been sheared, quenched, and measured with \\emph{ex situ} SANS, using the same PAXY instrument. Their data does not show any change along the vorticity axis, whereas we observe a clear increase in scattering (chain shrinkage), although the effect is $(\\gamma-1)\/(1-\\delta) = 3.37$ times weaker than the stretching seen along the flow axis (see Table~\\ref{fitparams}). This discrepancy could be explained by their slower shear rate of $\\text{Wi} = 4$, compared to our $\\text{Wi} = 30$. The anisotropy in the melt case was thus entirely due to the change along the flow axis. It was quantified by fitting ellipses to the scattering data, and taking the ratio of their axes. 
In the flow-vorticity plane data is available for $\\text{Wi} \\approx 1$, where anisotropy is seen to decrease from 1.39 to 1.23, the value at which it saturates with increasing $q$. In contrast, the anisotropy in our data decreases continuously from $\\gamma\/\\delta = 1.86$ to $\\alpha\/\\beta = 1.03$, and is almost perfectly isotropic at high $q$. To summarize, shear experiments on \\emph{ex situ} melts and \\emph{in situ} semi-dilute solutions bear qualitative similarities at low $q$, but the universality breaks down at high $q$, where we see almost no saturation or plateau of the anisotropy.\n\nWe have extracted the chain deformation parameters, Table~\\ref{fitparams}, using a purely structural model, without any recourse to molecular theories. Nevertheless, to understand why the chain deforms in this particular way, a molecular explanation is needed. Currently no definitive theory exists, but the main contender in this arena is GLaMM~\\cite{graham2003microscopic}, a tube theory~\\cite{de1971reptation} with several modifications. In essence, the many-chain fluid is simplified with just a single chain trapped in a tube, which is the mean field of other chains, and the overall dynamics are described in a self-consistent way. This model can accurately reproduce the rheology of entangled polymer melts, although SANS studies have not reached a consensus yet, with some authors claiming a strong support of tube theory~\\cite{blanchard2005small}, others report no evidence of any tubes~\\cite{boue1987transient}, and others still demonstrate kinetic trends opposite to theoretical predictions~\\cite{wang2017fingerprinting}. The debate centers on how exactly does the tube relax, and how is it affected by a strong deformation.\n\nThere is considerable universality between entangled polymer melts and semi-dilute solutions, especially in the linear regime $\\text{Wi}<1$, where GLaMM could be applied. At higher shear the universality breaks down, as polymer solutions display enhanced concentration fluctuations, which can reach length scales considerably larger than the molecule radius of gyration~\\cite{helfand1989large, mendes1991experimental, hashimoto1992butterfly, boue1994semi, saito2002structures}. Therefore, a single average chain in a tube may not be enough to describe the whole fluid. Furthermore, the shape of an individual molecule is known to fluctuate between highly stretched and collapsed states, a phenomenon called tumbling dynamics~\\cite{teixeira2005shear}. The mean field assumption, a core tenet of tube theory, becomes questionable given such inhomogeneities. Finally, we note that the form factor measured by SANS is a fundamentally static quantity (contains no units of time), and could be consistent with many different dynamical theories. Given the above limitations, it would be premature to interpret our findings in terms of the current tube theories.\n\nInstead, we offer an explanation based on the fact that an entangled polymer liquid is an intermediate case between a rubber and an ideal Rouse chain. A piece of rubber responds to stress with an affine deformation, meaning that the exponent in Eq.~\\eqref{semirouse} is $\\xi = 1$. It is widely believed that entangled polymers, at a large scale, have this rubber-like affine response~\\cite{rubinstein1997nonaffine, basu2011nonaffine}. However, on the short scale, a non-affine liquid-like response is expected. 
In this regime, unentangled polymers are well described by the Rouse model, which contains the following forces: spring, random, and shear (see Chapter~4 in Ref.~\\cite{doi1988theory} for details):\n\\begin{equation}\n\\frac{\\partial \\mathbf{X}_p}{\\partial t} = -\\frac{k_p}{\\zeta_p} \\mathbf{X}_p + \\frac{\\mathbf{f}_p}{\\zeta_p} + \\kappa (\\hat{\\textbf{\\j}} \\cdot \\mathbf{X}_p) \\hat{\\textbf{\\i}}\n\\end{equation}\nIts solution gives the mean square value of the Rouse modes $\\mathbf{X}_p(t)$ in the thermodynamic limit $t\\rightarrow \\infty$:\n\\begin{equation}\n\\braket{(\\mathbf{q} \\cdot \\mathbf{X}_p)^2} = \\frac{k_B T}{k_p} \\left[q^2+q_x^2\\frac{(\\kappa \\zeta_p\/k_p)^2}{2}\\right]\n\\end{equation}\nin agreement with Ref.~\\cite{pincus1976excluded}. We use standard definitions for the mode friction $\\zeta_p = 2N\\zeta$ and the mode stiffness $k_p = 6\\pi^2 k_B T p^2\/(N\\lambda^2)$. The above equation shows that the mean square width of a harmonic dumbbell is elongated by a factor of $1+(\\kappa \\tau_p)^2\/2$, where $\\tau_p = \\zeta_p\/k_p$ is its thermal relaxation time. The quadratic dependence on the dimensionless shear rate $(\\kappa \\tau_p)^2 = \\text{Wi}^2$ is expected to hold even in the complete multi-chain theory, since it is the first non-zero term in the Taylor expansion of any reasonably behaved function, and is sufficient to describe the effect as long as the shear is not too strong. In the future a much stronger shear may become accessible, in which case we would simply argue for adding a $\\text{Wi}^4$ term. The odd terms are all zero, because reversing the flow $\\text{Wi} \\rightarrow -\\text{Wi}$ is equivalent to flipping the axis $q_x \\rightarrow -q_x$, which does not affect the physics.\n\nThe pairwise distance for polymers is obtained by summing all the Rouse modes (dumbbells):\n\\begin{equation}\n\\mathbf{r}_{nm} = 2\\sum_{p=1}^{\\infty} \\mathbf{X}_p [\\cos (p\\pi n\/N) - \\cos (p\\pi m\/N)]\n\\end{equation}\nand this leads to the chain structure\n\\begin{multline}\\label{fseries}\n\\frac{\\braket{(\\mathbf{q}\\cdot \\mathbf{r}_{nm})^2}}{2(qR^2)} = \\frac{|n-m|}{N} +\\\\\n\\left(\\frac{q_x \\kappa \\tau_1}{\\pi q}\\right)^2 \\sum_{p=1}^{\\infty} \\frac{\\braket{[\\cos(p\\pi n\/N)-\\cos(p\\pi m\/N)]^2}}{p^6}\\\\\n= \\mu + \\frac{\\pi^4}{180} \\left(\\frac{q_x \\kappa \\tau_1 \\mu}{q}\\right)^2 \\left(1+\\mu-(\\mu\/2)^2-2\\mu^3 + \\mu^4\\right)\n\\end{multline}\nwhere $\\mu = |n-m|\/N$ has been defined for brevity. The above equation is the exact analytical solution of the Rouse chain structure under shear, and is reported here for the first time. The summation of the Fourier series, Eq.~\\eqref{fseries}, can be performed using formulas tabulated in Ref.~\\cite{gradshteyn2014table}. We can see that for small separations $\\mu\\ll 1$ where the Rouse model should have some validity, it predicts the deformation exponent $\\xi = 2$. Hence, our fitted value of $\\xi=1.19$ lies between the rubber (1) and the liquid (2) predictions, a reasonable outcome for a viscoelastic material.\n\n\\section{Conclusion}\nIn this study we have performed the first SANS experiment on the form factor of entangled semi-dilute polymers under \\emph{in situ} shear. We have verified our quiescent result against an interpolation of literature measurements in different solvents. We have then compared our shear result with earlier \\emph{ex situ} data and found a qualitative agreement. 
To allow a deeper analysis, we have derived an analytical fitting function for SANS in 3D, which is the major novelty of this work. From the fit we have shown that the molecular deformation follows a power law between an elastic rubber and a Rouse liquid. In addition, we have used our SANS fit to calculate the rheological third normal stress difference, and compared the outcome with the literature data. The match was reasonably good, which is encouraging for future studies, as it is now possible to directly connect the SANS spectra with mechanical stress, in both shear and normal components. Our fit is independent of molecular theories, and is therefore applicable to deformed polymers in a wide variety of situations: dilute, semi-dilute, and melts, as well as cross-linked materials like rubbers and gels, in addition to polymer-nanoparticle composites. For such complex materials, far from equilibrium, reliable theories and simulations are yet to be developed. Traditionally, one has to postulate (guess) a theory, calculate the resulting SANS, and compare it with experiment, until a good match is found. Our piecewise fit takes the guesswork out of the equation, and instead directly provides the real-space structure, from which a theory is more straightforward to deduce.\n\n\\section{Conflics of interest}\nThere are no conflicts to declare.\n\n\\section{Acknoledgements}\nThe SANS beamtime was provided by the Laboratoire L{\\'e}on Brillouin, France, instrument scientist Alain Lapp. A.K. acknowledges the financial support of the Swedish research council and the Carl Tryggers stiftelse, grant CTS 16:519. \n\n\\section{Author contributions}\nA.K. has fitted the data and has written the article. S.P. has reduced the raw data with contributions of A.D. F.A.A., P.G. and A.K. have prepared the polymer solution. M.W. has contributed to the shear cell design. M.K. has built the shear cell and has led the experiment. All authors have participated in the neutron experiments.\n\n \n \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nOptical images of galactic \\ion{H}{ii} regions show a mix of bright and dark nebulosity. Foreground cold dust clouds obscure the bright background of warm and ionized gas. The warm plasma in these nebul\\ae\\ accelerates outwards through the interaction with radiation and winds from hot and massive stars. The molecular gas is swept up and forms expanding shells, which are sculpted into complex filamentary formations, like elephant trunks, pillars that point at O stars in the nebula. Blocks of cold gas can detach from the shells and trunks and may fragment into smaller clouds that appear as dark patches on optical images of these nebul\\ae\\ as noted long ago by Bok \\& Reilly (\\cite{bok47}) and Thackeray (\\cite{tha50}). The clumps may be round or shaped like tear drops, some with bright rims facing the central cluster (Herbig \\cite{her74}).\n\nA number of such \\ion{H}{ii} regions have been subject to more detailed studies, and it has been found that many regions contain distinct, but very small clumps, extending over less than one to a few arcseconds. Several studies have focused on the so-called proplyds, which are photoevaporating discs surrounding very young stars (e.g. O'Dell et al. \\cite{ode93}; O'Dell \\& Wen \\cite{ode94}; McCaughrean \\& O'Dell \\cite{mcc96}; Bally et al. \\cite{bal00}; Smith et al. \\cite{smi03}). 
In these studies small cloudlets without any obvious central stellar objects were also found, as also recognized by Hester et al. (\\cite{hes96}) and Reipurth et al. (\\cite{rei97, rei03}) from Hubble Space Telescope images of nebular regions. \n\nMore systematic studies of such star-less cloudlets followed, and from the surveys of more than 20 \\ion{H}{ii} regions by De Marco et al. (\\cite{mar06}), Grenman (\\cite{gre06}), and Gahm et al. (\\cite{gah07}; hereafter Paper~1) it can be concluded that most of the objects have radii $<$10~kAU with size distributions that peak at $\\sim$~2.5~kAU. In Paper~1 masses were derived from extinction measures indicating that most objects have masses $<$~13~M$_{J}$ (Jupiter masses), which currently is taken to be the domain of planetary-mass objects. This class of tiny clouds in \\ion{H}{ii} regions were called {\\it globulettes} in Paper~1 to distinguish them from proplyds and the much larger globules spread throughout interstellar space. We define globulettes as cloudlets with round or slightly elongated shapes with or without bright rims and\/or tails. \n\nSome globulettes are connected by thin filaments to larger molecular blocks and it is then natural to assume that isolated globulettes once detached from shells and trunks. They may also survive in this harsh environment for long times, as concluded in Paper~1. Follow-up 3D numerical simulations in Kuutmann (\\cite{kuu07}) predict lifetimes of $\\sim$~$10^{4}$ years, increasing with mass. Owing to the outer pressure exerted on the globulettes from surrounding warm gas, and the penetrating shock generated by photoionization, it was found that many globulettes may even collapse to form brown dwarfs or planetary-mass objects before evaporation has proceeded very far. The objects are protected against rapid photoevaporation by a screen of expanding ionized gas (e.g. Dyson \\cite{dys68}; Kahn \\cite{kah69}; Tenorio-Tagle \\cite{ten77}). Consequently, the objects are expected to develop bright rims on the side facing the cluster because of the interaction with stellar light. In addition, the models predict that dusty tails emerge from the cloud cores. It is therefore puzzling that most globulettes lack any trace of bright rims in H$\\alpha$, and that most are round, or only slightly elongated, without any trace of tails. \n\nIn a recent study by Gahm et al. (\\cite{gah13}; hereafter Paper~2), based on NIR imaging and radio molecular line observations of globulettes in the Rosette Nebula, it was found that the objects contain dense cores, which strengthens the suggestion that many objects might collapse to form planetary-mass objects or brown dwarfs that are accelerated outwards from the nebular complex. The whole system of globulettes and trunks expands outwards from the central cluster with velocities of about 22 km s$^{-1}$. In the case where more compact objects are formed inside some globulettes, they will escape and become free-floating objects in the galaxy. In both the optical and radio\/NIR surveys (Papers 1 and 2) it was concluded that the density is relatively high even close to the surface layers, which could explain why the objects lack extensive bright rims in H$\\alpha$. Some of the optically completely dark objects were discovered to have thin rims manifested in P$\\beta$ and H$_{2}$ emission. In a follow-up study of the NIR images, M\\\"akel\\\"a et al. (\\cite{mak14}) found that some smaller globulettes are also crowned by thin bright rims that are not seen in H$\\alpha$. 
\n\nThe present study is an inventory of globulettes in the Carina Nebula (NGC 3372) based on images taken from the {\\it Hubble Space Telescope} (HST) through a narrow-band H$\\alpha$ filter. Basic parameters, like size and mass, are derived and we compare the results to surveys of similar objects in other nebulae.\n\nThe Carina complex, with its extended network of bright and dark nebulosity, spans over several degrees in the sky and is one of the most prominent sites of star formation in the galaxy. More than 60 O-type stars and several young clusters (Tr 14, 15, and 16; Collinder 228 and 232; and Bochum 10 and 11) are located in the region, and more than a thousand pre-main sequence stars have been identified from optical, infrared, and X-ray surveys (e.g. Tapia et al. \\cite{tap03}; Ascenso et al. \\cite{asc07}; Sanchawala et al. \\cite{san07a}, \\cite{san07b}; Smith et al. \\cite{smi10a}, \\cite{smi10b}; Povich et al. \\cite{pov11}; Gaczkowski et al. \\cite{gac13}). The global properties of the nebular material was discussed in Smith et al. (\\cite{smi00}), Smith \\& Brooks (\\cite{smi07}), and references to studies based on observations of selected areas can be found in the comprehensive review by Smith (\\cite{smi08}). Additional surveys from the submm range (Preibisch et al. \\cite{pre11}; Pekruhl et al. \\cite{pek13}) and the far IR (Preibisch et al. \\cite{pre12}; Roccatagliata et al. \\cite{roc13}) have been made more recently. The Carina Nebula, in all its glory, is presented in multicolour mosaics found at the Hubble Space Heritage webpage.\n\nA number of small obscuring structures in the Carina Nebula were noted by Smith et al. (\\cite{smi03}) from HST images, and were regarded as possible proplyds. However, the objects studied were found to be larger than the standard cases in the Orion Nebula (Bally et al. \\cite{bal00}). More objects of this nature were recognized by Smith et al. (\\cite{smi04}) who stated that their nature remains ambiguous: \"analogues of Orion's proplyds, starless cometary clouds, or something in between?\" Ascenso et al. (\\cite{asc07}), however, concluded from near-infrared imaging that these candidates do not harbour any stars. Most of these objects are globulettes by our definition and are thereby included in our list of nearly 300 globulettes. Thus the Carina complex is the richest known with regard to total number of globulettes. A number of Herbig-Haro jets emanating from embedded young stars in the region were found by Smith et al. (\\cite{smi10a}). Most of these are related to trunks or larger fragments. However, HH 1006 is related to an isolated cloud with an embedded jet-driving source (Sahai et al. \\cite{sah12}; Reiter \\& Smith \\cite{reit13}). Tentative jet-signatures were also found for a few much smaller isolated clouds like HH 1011 and HHc-1.\n\nThe distance to the Carina complex has been estimated in several investigations with rather different results. A distance of 2.3 kpc has been adopted as a kind of standard (Smith \\cite{smi08}). Recently, Hur et al. (\\cite{hur12}) concluded that the main stellar clusters Tr 14 and 16 are located at a distance of 2.9 kpc. We have adopted this value in the present investigation, but will discuss the implications if the complex is closer. \n\nThe paper is organized as follows. We present the fields we have searched, the objects identified, and their measured properties in Section~\\ref{sec:obs}. 
The results are analysed in Section~\\ref{sec:results} and discussed further in Section~\\ref{sec:disc}. We end with a summary in Section~\\ref{sec:conclude}. \n\n\\begin{figure}[t] \n\\centering\n\\resizebox{9cm}{!}{\\includegraphics[angle=00]{HSTmap.jpg}}\n\\caption{Image of the central region of the Carina Nebula, where the HST fields containing globulettes are marked. The locations of the star $\\eta$~Carin{\\ae} and four stellar clusters are marked. North is up and east to the left. The image spans 1.\\degr3 x 1.\\degr5 (credit: Nathan Smith, Univ. of Minnesota, NOAO, AURA, NSF).} \n\\label{map}\n\\end{figure} \n\n\\begin{table}\n \\caption[]{HST archive data used.}\n \\label{HST}\n $$\n \\begin{array}{*{4}{p{0.1\\textwidth}}}\n \\hline\n \\noalign{\\smallskip}\n \\ Field\/Target & R.A. (J2000.0) \\hfill{} & Dec. (J2000.0)& Images \\\\\n \\noalign{\\smallskip}\n \\hline\n \\noalign{\\smallskip}\n1 \/ Pos 30 & 10:41:27\t& \t-59:47:42 & J9dk09010\t \\\\\n2 \/ Pos 30 & 10:41:38\t& -59:46:17 & J9dk09020 \\\\\n3 \/ Pos 30 & 10:41:40\t& -59:44:41 & J9dka9010 \\\\\n4 \/ Pos 19 & 10:42:23 &-59:20:59 & J9dk12010 \\\\\n5 \/ Pos 19 & 10:42:48\t&-59:19:44 & J9dk32010 \\\\\n6 \/ Tr 14 & 10:43:07\t& -59:29:34 & J900c1010 \\\\ \n7 \/ Tr 14 & 10:43:23\t& \t-59:32:06 & J900b1010 \t \\\\\n8 \/ Tr 14& 10:43:24\t \t& -59:27:55 & J900c2010 \\\\ \n9 \/ Tr 14 & 10:43:39\t& \t-59:34:37 & J900a1010 \\\\ \n10 \/ Tr14 & 10:43:41\t& \t-59:30:17 & J900b2010 \\\\ \n11 \/ Tr 14 & 10:43:47\t& \t-59:35:53 & J90001020 \\\\ \n12 \/ Tr 14 & 10:43:55\t& \t-59:37:09 & J90001010 \\\\ \n13 \/ HH 666& 10:43:58\t& \t-59:54:39 & J900a9010 \\\\\n14 \/ Tr 14 & 10:43:59\t& \t-59:32:38 & J900a2010 \\\\ \n15 \/ Tr 14 & 10:44:00\t& \t-59:28:05 & J900b3010 \t \\\\\n16 \/ HH 666 & 10:44:01\t& \t-59:58:42 & J900b9010 \\\\ \n17 \/ Tr 16 & 10:44:05\t& \t-59:40:16 & J900c5010 \\\\ \n18 \/ Tr 14 & 10:44:07\t& \t-59:33:53 & J90002020 \\\\ \n19 \/ Tr 14 & 10:44:15\t& \t-59:35:09 & J90002010 \\\\ \n20 \/ Tr 14 & 10:44:17\t& \t-59:30:27 & J900a3010 \\\\ \n21 \/ Tr 14 & 10:44:19\t& \t-59:25:54 & J900b4010 \\\\ \n22 \/ Tr 16 & 10:44:20\t& \t-59:42:48 & J900b5010 \\\\ \n23 \/ Tr 16 & 10:44:22\t& \t-59:38:34 & J900c6010 \\\\ \n24 \/ Tr 14 & 10:44:35\t& \t-59:33:09 & J90003010 \\\\ \n25 \/ Tr 16 & 10:44:36\t& \t-59:45:19 & J900a5010 \\\\ \n26 \/ Pos 27 & 10:44:40\t& \t-59:59:46 & J9dk07010 \\\\ \n27 \/ Pos 27& 10:44:43\t& \t-59:56:34 & J9dk27010 \\\\\n28 \/ Tr 16 & 10:44:44 \t& \t-59:46:35 & J90005020 \\\\\n29 \/ Tr 16 & 10:44:49\t& \t-59:37:35 & J900b7020 \\\\\n30 \/ Tr 16 & 10:44:52\t& \t-59:47:51 & J90005010 \\\\ \n31 \/ Tr 14 & 10:44:54\t& \t-59:31:08 & J90004010 \\\\ \n32 \/ Tr 15 & 10:44:58\t& \t-59:26:50 & J9dka0010 \\\\ \n33 \/ Tr 16& 10:44:58\t& \t-59:38:47 & J900b7010 \\\\ \n34 \/ Tr 16 & 10:45:12\t& \t-59:45:51 & J90006010 \\\\\n35 \/ Tr 16 & 10:45:17\t& \t-59:36:38 & J900b8010 \\\\\n36 \/ Tr 15 & 10:45:23\t& \t-59:26:59 & J9dk10010 \\\\ \n37 \/ Tr 16 & 10:45:44\t& \t-59:40:34 & J90008020 \\\\ \n38 \/ Pos 23& 10:45:53 & -60:08:16 \t& j90010010 \\\\ \n39 \/ Pos 23 & 10:45:56\t& \t-60:06:42 & J90010020 \\\\ \n40 \/ Pos 22& 10:46:32\t& \t-60:05:14 & J9dk23010 \\\\\n41 \/ Pos 21& 10:46:47\t& \t-60:09:29 & J9dk22010 \\\\\n42 \/ Pos 20& 10:46:58\t& \t-60:06:26 & J9dk01010 \t \\\\\n43 \/ Pos 20& 10:47:01\t& \t-60:03:14 & J9dk21010 \\\\ \n \\noalign{\\smallskip}\n \\hline\n \\end{array}\n $$\n \\end{table}\n\n\n\\section{Objects and measurements}\n\\label{sec:obs}\n\n\n\\subsection{HST 
fields}\n\\label{sec:fields}\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[angle=00, width=9cm]{mosaik.jpg}\n\\caption{Examples of globulettes found in the Carina Nebula numbered according to Tables A1-A5 in Appendix A. The most typical cases are the dark globulettes shown in the upper four rows followed by objects with bright halos. The last two rows show examples of elongated objects with tails, some with bright rims. We note the scales are different from panel to panel (the dimensions in arcsec are given for each object in the tables). }\n\\label{mosaik}\n\\end{figure}\n\nThe optical images of the Carina complex were downloaded from the HST archive, cycle 13 and 14 programs GO-10241 and 10475 (principal investigator N. Smith) based on observations with the ACS\/WFI camera, which contains two CCDs of 2048~$\\times$~4096 pixels glued together with a small gap in between. The pixel size corresponds to $\\approx$~0.05 arcsec pixel$^{-1}$, and the field of view is $202 \\times 202$~arcsec. All images selected were exposed for 1000~s through the narrow-band filter F658N, covering the nebular emission lines of H$\\alpha$ and [\\ion{N}{ii}]. \n\nMost of the HST fields contains globulettes, and only these are listed in Table~\\ref{HST} with a running field number and references to target and image designations according to the HST archive. Figure~\\ref{map} shows how the areas covered by HST are distributed over the region (see also Smith et al. \\cite{smi10a}). Two regions on opposite sides of a large V-shaped dark cloud are rather well covered by HST. The total area covered is $\\sim$~700~arcmin$^2$, which is larger than covered by all HST-based surveys of \\ion{H}{ii} regions together, but much smaller than the area covered of the Rosette Nebula in Paper~1. This ground-based survey was limited to objects with radii $\\geq$~0.8 arcsec, however. \n\nThe globulettes are easily recognized as dark patches against the bright background. Most are roundish without any bright rims or halos, similar to previous surveys. A number of the elongated objects with tear-drop forms are crowned with bright rims. The Carina complex is also rich in dark irregular blocks and fragments of all sizes, some of which are very elongated and shaped like worms or long, narrow cylinders, and some show very irregular shapes. These objects, which as a rule are much larger than typical globulettes, were not included in our list of globulettes, but in Sect.~\\ref{sec:pec} we highlight some smaller cloudlets with peculiar shapes.\n\nThe Carina Nebula contains a large number of very small globulettes, down to the limit of resolution of HST, but we do not consider objects with dimensions $\\le$ 3 pixels across. Some regions contain quite isolated globulettes that are located far away from any larger molecular block, while in others there are clusters of globulettes. Examples of different types of globulettes are shown in Fig.~\\ref{mosaik}, where the first four rows show round and dark globulettes, which are most abundant. Round objects with bright halos are found in the fifth row followed by objects that are more elongated or have developed pronounced tails, with or without bright rims. \n\n\n\\subsection{Measurements}\n\\label{sec:data}\n\n\\begin{table*}\n\\centering\n\\caption{List of globulettes. The symbols are defined in Sect.~\\ref{sec:data}. The complete list of the 288 globulettes measured is presented in Appendix A.}\n\\begin{tabular} {lccccccccccl}\n \\hline\n \\noalign{\\smallskip}\nCN & Field & x & y & R.A. 
& Dec. & $\\alpha$ & $\\beta$ & P.A. & $\\bar r $ & Mass & Remarks\\\\\n &&&& (J2000.0) & (J2000.0) & (arcsec) & (arcsec) & (degr.) & (kAU) & (M$_{J}$) & \\\\\n\n\\noalign{\\smallskip} \n\\hline\n\\\\\n \\noalign{\\smallskip}\n\n1 & F1 & 360 & 4020 & 10:41:13.3 & -59:49:00 & 0.46 & 0.50 & & 1.39 & 2.4 & \\\\ \n2 & F1 & 660\t & 1084 & 10:41:18.9 & -59:46:38 & 0.30 & 0.36 & & 0.96 & 1.1 & \\\\\n3 & F1 & 1792 & 3251 & 10:41:23.6 & -59:48:35 & 0.38 & 0.40 & & 1.13 & 1.3 & \\\\\n4 & F1 & 1654 & 378 & 10:41:26.2 & -59:46:13 & 0.18 & 0.19 & & 0.54 & 0.4 & \\\\\n5 & F1 & 1819 & 395 & 10:41:27.2 & -59:46:15 & 0.19 & 0.27 & & 0.67 & 0.6 & \\\\\n6 & F1 & 2029 & 221 & 10:41:28.8 & -59:46:09 & 0.26 & 0.33 & 23 & 0.86 & 1.2 & T \\\\\n7 & F2 & 670\t & 1907 & 10:41:28.9 & -59:45:53 & 0.23 & 0.25 & -43& 0.69 &\\it 0.6 & BR,T \\\\\n8 & F1 & 2391 & 1935 & 10:41:29.1 & -59:47:36 & 0.20 & 0.22 & & 0.61 & 0.5 & \\\\\n9 & F1 & 2156 & 597 & 10:41:29.2 & -59:46:29 & 0.29 & 0.33 & & 0.90 & 1.0 & \\\\\n10 & F1 & 2752 & 1904 & 10:41:31.5 & -59:47:38 & 0.54 & 1.46 & 8 & 2.90 & \\it 8.8 & BR,EL,T\\\\\n11 & F1 & 2604 & 373\t & 10:41:32.4 & -59:46:22 & 0.31 & 0.51 & -38 & 1.19 & 2.0 & EL \\\\\n\\\\\ncontinued in Appendix A\\\\\n\\\\\n\\hline\n\n\\end{tabular}\n\\label{glob} \n\\end{table*} \n\nCentral positions were measured in terms of x and y coordinates and R.A. and Dec according to available HST readouts. The globulettes, designated CN (as Carina Nebula plus number), are listed in order of increasing R.A. in Table 2 showing only the first entries. The complete table is found in Tables A.1-A.5 in Appendix A. Finding charts for all fields containing globulettes are found in Figs. B.1-B.6 in Appendix B. In these charts we have also marked some objects that we do not consider to be regular globulettes, like some clumps with peculiar shapes (see Sect.~\\ref{sec:pec}). Some larger fragments are marked as \\emph{Frag} and these features will be commented on in Sect.~\\ref{sec:disc}. Most globulettes have circular or slightly elliptic shapes. The semi-major and semi-minor axes are given in arcseconds in Cols. 7 and 8. These quantites are defined from an outer contour where the intensity level has dropped to 95 \\% of the interpolated background nebular intensity. Outside this contour, the level of noise starts to affect the definition of the boundary, but as a rule very little matter resides in the outskirts. Column 9 gives the position angle of elongated objects, for which the ratio of semi-major and semi-minor axes is $>$ 1.5. \n\nWe derive the physical dimensions of the objects assuming a distance of 2.9 kpc (see Sect.~\\ref{sec:intro}) and define a characteristic radius, $\\bar r$, as the mean of the semi-major and semi-minor axes expressed in kAU (Column 10). For the determination of mass we strictly follow the procedure as described in detail in Paper~1 and Grenman (\\cite{gre06}). In short, we measure the residual intensity for each pixel within a globulette relative to the interpolated bright background. This value relates to extinction due to dust at $\\lambda$~6563~{\\AA} ($A_{\\alpha}$). Two extreme cases are considered: there is no foreground emission at all, or practically all the residual intensity in the darkest areas of each object is caused by foreground emission. 
We assume a standard interstellar reddening law (Savage \\& Mathis \\cite{sav79}) to compute the visual extinction, $A_{V}$ = 1.20 $A_{\\alpha}$, and the column densities of molecular hydrogen, $N(H_{2})$ = 9.4~10$^{20} A_V$, according to the relations in Bohlin et al. (\\cite{boh78}) for each pixel assuming a standard mass ratio of gas to dust of 100, and that all hydrogen is in molecular form. The total column density is derived assuming a cosmic chemical composition. Finally, we sum over all pixels inside the contour defined above to obtain the total mass, and we select the mean of the two extreme cases defined above as a measure of the mass of each object. Column 11 gives the so derived mean mass of each globulette. The maximum and minimum masses rarely differ from the mean by more than a factor of two. \n\nIn the last column remarks about individual objects are found. Elongated globulettes are marked as $EL$. Some objects have developed tails or tear-drop forms and are marked $T$. Objects with pronounced bright rims are marked $BR$, and those with bright halos as $BH$. The derived masses for the $BH$ objects are lower limits, and their masses are set in italics in Column 11. Symbol $C$ indicates that the object is connected by a dark, thin filament to a larger structure, like a nearby trunk, or to another globulette (with number marked). Objects noted in Smith et al. (\\cite{smi03}) are marked $S$ in Column 12 followed by the symbol they used, and two HH candidates recognized in Smith et al. (\\cite{smi10a}) are also noted.\n\nThe derived masses are subject to other uncertainties as well. For instance, uniform density has been assumed, which is consistent (to a first approximation) with column densities derived as a function of radial position (see Paper 1). However, the objects may have developed dense cores that escape detection. Another concern is the use of a normal extinction law since larger-than-normal ratios of $R$ have been found in certain areas (e.g. Th\\'e et al. \\cite{the80}; Smith \\cite{smi02}; Tapia et al. \\cite{tap03}; Hur et al. \\cite{hur12}). Since the globulettes may condense from larger clouds, they may contain larger dust grains than assumed for a normal extinction law. Finally, nebular H$\\alpha$ photons entering a globulette may scatter into the line of sight to the observer (e.g. Mattila et al. \\cite{mat07}). This effect would lead to an underestimation of mass. The effect is expected to be small, but cannot be evaluated further until more precise information exists on locations within the nebula and local radiation fields.\n\n\\begin{figure*}[t] \n\\centering\n\\resizebox{6cm}{!}{\\includegraphics[angle=00]{arcsec2.jpg}}\n\\resizebox{6cm}{!}{\\includegraphics[angle=00]{radii2.jpg}}\n\\resizebox{6cm}{!}{\\includegraphics[angle=00]{mass2.jpg}}\n\\caption{{\\it Left}: Distribution of average radii as measured for all globulettes found in the Carina Nebula expressed in arcsec. {\\it Middle}: The corresponding distribution of average radii expressed in kAU and adopting a distance to the complex of 2.9 kpc. The vertical arrow marks the peak in the corresponding accumulated size distribution for objects in seven \\ion{H}{ii} regions (De Marco et al. \\cite{mar06}). {\\it Right}: Distribution of masses for Carina globulettes less massive than 10~M$_{J}$, expressed in Jupiter masses.}\n\\label{distribution}\n\\end{figure*} \n\n\n\\section{Results}\n\\label{sec:results}\n\nWe have found a total of 288 globulettes in the HST-images of the Carina complex. 
Most of the objects are dark without any bright rims or halos, just like those found in surveys of other \\ion{H}{ii} regions. The globulettes are spread over the entire region, but are more abundant along the western part of the V-shaped dark cloud and in areas surrounding Tr 14 and 16. Examples of quite isolated globulettes can be found in Fields 10 and 25 in Figs.~\\ref{fields2} and ~\\ref{fields4}. Clusters of globulettes are found in, for example, Fields 12 and 41 in Figs.~\\ref{fields2} and \\ref{fields6}. The total number of globulettes found exceeds the number found in any other \\ion{H}{ii} region. This large complex is comparatively well covered by HST observations and the number per unit area is comparable to the areas studied by De Marco et al. (\\cite{mar06}). \n\n\n\\subsection{Distributions of radii and masses} \n\\label{sec:distribute}\n\n\\begin{figure*}[t]\n\\centering\n\\includegraphics[angle=00, width=16cm]{areas.jpg}\n\\caption{In these charts the directions of the major axes of elongated objects are depicted in three areas, labelled I to III. In addition, the directions of objects with tails or bright rims present in round or elongated objects are indicated. The locations of these areas are shown in the lower-left panel on a map composed from Spitzer observations at 4.5 $\\mu$m, This background is used in all areas, and in area III (lower-right) the optical image is superimposed as well. The symbols used are explained in bottom-left corner. North is up and east to the left in these images, and CN numbers are marked according to Tables A.1-A.5.}\n\\label{PAfields}\n\\end{figure*}\n\nThe left panel in Fig.~\\ref{distribution} shows the distribution of average radii of the Carina globulettes expressed in arcsec, and in kAU in the middle panel. The bulk of the Carina globulettes have radii $<$~1000 AU, and the distribution increases steeply towards the detection limit. Hence, the Carina globulettes are, on the whole, significantly smaller than the accumulated distribution for the seven \\ion{H}{ii} regions investigated by De Marco et al. (\\cite{mar06}), which peaks at 2.5 kAU, and with detection limits similar to ours. We note that if we instead assume a distance of 2.3 kpc to the Carina complex, as advocated by Smith (\\cite{smi08}), then the Carina globulettes would be even smaller by $\\sim$~20\\%.\n\nThe masses derived for the tiny globulettes in the Carina complex are consequently also, on the whole, considerably smaller than for other regions. Most of the globulettes have masses well within the domain of planetary masses. The right panel in Fig.~\\ref{distribution} shows that the number of such objects increases rapidly below 3~M$_{J}$ towards the detection limit. Only 4~\\% of the Carina globulettes are more massive than 10~M$_{J}$, the most massive being CN~78 and 80 with $\\sim$~130~M$_{J}$. This is in sharp contrast to the corresponding distribution in the Rosette Nebula that hosts a large number of more massive clumps, some with masses of several hundred M$_{J}$ (Paper~1). However, even though this complex is at half the distance to the Carina complex, tiny objects with masses $<$~2~M$_{J}$ escape detection in this ground-based survey. The largest objects with masses $>$~20~M$_{J}$ are located close to and along the V-shaped dust feature (Fields 9, 11, and 12) and to the south in Field 3, Position 30 (see Fig.~\\ref{map}). They may represent relatively recent detachments from the nearby shell structures. 
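For reference, the conversions behind these radii and masses (Sect.~\\ref{sec:data}) can be condensed into a short script. The sketch below turns an angular semi-axis pair and a mean H$\\alpha$ extinction into a radius, a mass and an average density; the input size and extinction are illustrative values rather than measurements of a particular object, the object is approximated as a uniform ellipse, and a helium-corrected mean molecular weight of 2.8~$m_{\\rm H}$ per H$_2$ molecule is assumed for the cosmic chemical composition:
\\begin{verbatim}
import numpy as np

m_H, M_J, AU = 1.673e-24, 1.898e30, 1.496e13   # cgs constants
d_pc, pix = 2900.0, 0.05      # adopted distance (pc), ACS pixel (arcsec)

a_sec, b_sec = 0.5, 0.5       # semi-axes in arcsec (illustrative)
A_alpha = 0.8                 # mean H-alpha extinction in mag (illustrative)

r_kAU = 0.5*(a_sec + b_sec) * d_pc / 1000.0    # 1 arcsec = d_pc AU at d_pc

A_V  = 1.20 * A_alpha                          # Savage & Mathis (1979)
N_H2 = 9.4e20 * A_V                            # cm^-2, Bohlin et al. (1978)
area = np.pi * (a_sec/pix) * (b_sec/pix) * (pix*d_pc*AU)**2   # cm^2
mass = N_H2 * 2.8*m_H * area / M_J             # Jupiter masses

rho = mass*M_J / (4.0/3.0 * np.pi * (r_kAU*1e3*AU)**3)
print(r_kAU, mass, rho)   # ~1.45 kAU, ~3 M_J, ~1.5e-19 g cm^-3

# at d = 2.3 kpc the radii would shrink by 2.3/2.9 ~ 0.8
# and the masses (column density times area) by (2.3/2.9)**2 ~ 0.6
\\end{verbatim}
Numbers of this order reproduce the typical scale of the entries in Tables A.1-A.5.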
\n\nThe Carina globulettes not only differ in size from those in other regions, but also in density. Their average density amounts to $\\rho$ = 2.8 10$^{-19}$ g cm$^{-3}$ compared to $\\rho$~=~6.2~10$^{-20}$ g cm$^{-3}$ for those in the Rosette Nebula. In terms of number densities of molecular hydrogen they exceed 10$^{5}$ cm$^{-3}$ in several Carina globulettes.\n\n\n\\subsection{Orientations}\n\\label{sec:PAs}\n\nElongated globulettes, with or without tails, line up in the same direction in certain areas, but are more randomly oriented in others. The lower left-panel in Fig.~\\ref{PAfields} shows the location of three selected areas, I to III, projected on a strip composed from images obtained from Spitzer\/IRAC 4.5~$\\mu$m images (key no 23695360). The two upper panels show areas I and II on the 4.5~$\\mu$m background, while for area III (bottom-right panel) the optical and Spitzer images are superimposed to better illustrate the locations of stars and bright and dark nebulosity. Included are also objects with pronounced bright rims and tails and a few round objects surrounded by bright halos that are distinctly brighter at one side. Obviously, some objects classified as round may in fact be elongated if they are oriented closer to the line of sight, and objects surrounded by bright halos flag the presence of bright rims on the remote side.\n\nAll of the objects depicted in areas I and II are oriented in about the same direction and point at the cluster Tr 14, located in area III, and it is clear that the objects have been sculpted by the interaction with photons coming from the bright stars in this cluster. It should be noted that there are also a large number of round, dark objects in these areas. In area III, elongated objects are more randomly oriented, particularly in the central part of the image. An example is CN~93 with 60~M$_{J}$ in Field 18, an isolated globulette seen in projection against the cluster Tr~14. However, both the direction of the tail and the bright rim indicates that the globulette is influenced by some object east of the cluster core. Globulettes just above and along the western extension of the V-shaped dust lane (in the right part of this panel), as well as a group to the left in the panel point in the general direction of the bright nebulosity surrounding $\\eta$ Carina and Tr 16. Several O stars are spread over this nebulosity, and it is likely that their combined radiation caused the shaping and orientation of these elongated globulettes. In addition, we found the same predominant direction of objects in regions south of area III (not shown here). \n\n\n\\subsection{Peculiar objects} \n\\label{sec:pec}\n\nThe Carina Nebula hosts a large number of clouds of very irregular shape like dark worm-like filaments and larger fragments, such as the \"Defiant Fingers\" described by Smith et al. (\\cite{smi04}). Some of these fragments are accompanied by smaller cloudlets with irregular shapes that most likely have eroded from the larger ones. We did not include these objects in the list of globulettes. In Figs.~B1-B6 we have marked some larger fragments, since they might be important birth-places of globulettes (see Sect.~\\ref{sec:disc}). \n\nThe border-line between what we define as a globulette or not is a bit arbitrary, but this is less important in our statistical analysis. Figure~\\ref{strange} shows some examples of cloudlets with peculiar shapes, and their locations can be found in Figs.~B1-B6, except for $j$ which is in an HST field void of globulettes. 
This object, like object $c$, was noted and labelled in Smith et al. (\\cite{smi03}). Objects B and D could be similar to standard globulettes, but since B appears to be surrounded by strong H$\\alpha$ emission, and D is in the background behind strong foreground emission, these objects could not be measured for extinction. The other objects have peculiar dusty tails, and object A even shows three such outgrowths. These tails can be the result of erosion of elongated objects with peculiar density structure, or they may be examples of dust-enshrouded jets from embedded sources. Objects CN~219 and 241 were recognized as HH-objetcs in Smith et al. (\\cite{smi03}), but their nature remains unclear. The objects CN 219 has a bright rim with a detached bright spot just to the north and the thin, twisted dust-tail in CN~241 is remarkable as is the surrounding, faint, and very elongated bright halo. Some of the peculiar objects in Fig.~\\ref{strange} deserve a closer inspection. \n\n\\begin{figure}[t] \n\\centering\n\\resizebox{8cm}{!}{\\includegraphics[angle=00]{strange.jpg}}\n\\caption{Examples of objects with peculiar shapes. The objects are marked in Figs. B.1-B.6, except for object \n$c$ which falls outside areas containing globulettes. This object and $j$ are denoted in Smith et al. (\\cite{smi03}). The images span: A 5\\arcsec x 5\\arcsec, B 4\\arcsec x 4\\arcsec, C 20\\arcsec x 30\\arcsec, D 3\\arcsec x 3\\arcsec, $c$ 8\\arcsec x 8\\arcsec, E 7\\arcsec x 10\\arcsec, F 20\\arcsec x 10\\arcsec, CN241 7\\arcsec x 5\\arcsec, G 35\\arcsec x 18\\arcsec, CN219 7\\arcsec x 7\\arcsec, $j$ 15\\arcsec x 15\\arcsec. }\n\\label{strange}\n\\end{figure} \n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[angle=00, width=10cm]{RhoRadius.pdf}\n\\caption{Average density versus radius for all dark globulettes in the Carina complex.}\n\\label{RhoR}\n\\end{figure}\n\n\n\\section{Discussion}\n\\label{sec:disc}\n\nNone of the 288 globulettes coincides in position with any of the YSO candidates listed in Povich et al. (\\cite{pov11}) and Gaczkowski et al. (\\cite{gac13}). There are stars seen inside the boundaries of two globulettes in optical images, namely CN 138 (Fig.~\\ref{mosaik}) and the object designated $j$ in Fig. 5 in Smith et al. (\\cite{smi03}). These stars are likely to be foreground stars, since they show no sign of IR excess judging from the Two Micron All Sky Survey (Skrutskie et al. \\cite{skr06}) or existing Spitzer images. There are no obvious proplyd candidates in our sample, except possibly for CN 219 with a jet-like extension (see Figure~\\ref{strange}) and listed as a Herbig-Haro object (HH 1011) in Smith et al. (\\cite{smi10a}). \n\n\\begin{figure*}[t] \n\\centering\n\\resizebox{5.72cm}{!}{\\includegraphics[angle=00]{frag1.jpg}}\n\\resizebox{5.31cm}{!}{\\includegraphics[angle=00]{frag2.jpg}}\n\\resizebox{4.55cm}{!}{\\includegraphics[angle=00]{frag4.jpg}}\n\\caption{Three detached fragments in the Carina Nebula. From left to right: Fragments 1, 2, and 4 in Fields 7, 9, and 24 in Figs. B.1-B.6. These larger blocks contain denser cores, some of which appear to be detaching (see Frag. 1). Fragment 4 is surrounded by smaller cloudlets of different shapes. }\n\\label{fragments}\n\\end{figure*} \n\nThe fraction of objects with bright rims and halos is 39\\%, which is large compared to findings from other \\ion{H}{ii} regions (De Marco et al. \\cite{mar06}; Paper~1). 
In the central parts of the Carina Nebula, objects with bright rims even dominate over those without, indicating that the interaction with the radiation field is more intense closer to the centre.\n\nAn important finding is the statistical relation between average density and radius shown in Fig.~\\ref{RhoR}, where we have selected only distinctly dark objects without bright rims and halos. The smallest objects with radii $<$~1~kAU are on average four times denser than those with radii $>$2~kAU. The corresponding distribution including all objects in Tables A1-A5 shows the same general trend but with a larger scatter, and the densities of the BRs and BHs are systematically lower than in the distribution in Fig.~\\ref{RhoR}. This is expected, since the masses for these objects are lower limits, as pointed out in Sect.~\\ref{sec:data}. It is likely that the distribution in Fig.~\\ref{RhoR} reflects how globulettes evolve with time.\n\n\n\\subsection{Origin and fate} \n\\label{sec:origin}\n\nMost dark formations seen in the optical images are located in front of the central regions of bright nebulosity. For the mass estimates we have considered two extreme cases: all residual emission in the darkest parts is due to foreground emission, or there is (practically) no foreground emission. The Carina region is very complex, and it is difficult to judge how deeply embedded a given globulette is in the warm nebulosity. For the same reason it is hard to determine the geometrical distance between a given globulette and other dust formations or clusters in the area. Some globulettes are quite isolated, with projected distances to the closest dust complexes of more than 1.5~pc, for example the group of globulettes in Field 14 containing Tr 14. A possible scenario is that this group is the remnant of a larger cloud that gradually eroded in the intense radiation field from Tr 14. \n\nIt was inferred in Paper~1 that globulettes in the Rosette Nebula originate from condensations in elephant trunks and shell features. In the Carina complex there are a number of isolated, larger fragments that must have detached from shell structures long ago. Such fragments are marked in Fields 7, 9, 12, 24, and 29. Figure~\\ref{fragments} shows examples of such fragments, all of which contain condensations with masses similar to those found in globulettes. Fragment 4 is surrounded by a cluster of smaller irregular fragments that appear to be leftovers from a presumably larger block that has since eroded. We note that Fragment 2, which hosts several condensations, looks like a detached elephant trunk composed of a network of thin twisted filaments, similar to the threaded elephant trunks discussed in Carlqvist et al. (\\cite{car03}). \n\nAs discussed in Paper~2, the lack of distinct bright rims in H$\\alpha$ may be traced to a combination of several circumstances. One is that the density distributions are rather flat and that the density is high even close to the surface, where the gas is in molecular form, as flagged by fluorescent H$_{2}$ emission. In addition, thin P$\\beta$-emitting rims were discovered to be present in several objects, rims that in some cases appear to extend over the remote side of the objects. Such thin bright rims, not detected in H$\\alpha$, were also found in several much smaller globulettes in the Rosette Nebula (M\\\"akel\\\"a et al. \\cite{mak14}). 
Moreover, the Carina objects could be located at considerable distances from the bright UV-radiating stars, in which case the flux of exciting Lyman continuum photons is moderate, producing only weak photodissociation in the outer layer at the dense surface.\n\nThe large number of quite isolated globulettes indicates that the objects have survived for a long time in the nebula. Most of these objects are tiny and dense, and unlike the larger objects they lack thin envelopes. It appears that the population of tiny globulettes in the Carina complex is in a more evolved state than those encountered in other \\ion{H}{ii} regions, either because they have eroded faster in the intense radiation field, or because they are, on the whole, older. A likely scenario is that globulettes detach from larger molecular blocks, such as shell structures, pillars, and fragments. Thin envelopes would gradually be lost, and the remnant cores may become denser with time. This scenario is further supported by the findings presented in Fig.~\\ref{RhoR}, showing that the average density is inversely proportional to radius. \n\nIn Paper~1 we applied simple virial arguments to conclude that most globulettes in the Rosette Nebula could be gravitationally unstable, especially after considering the influence of an outer pressure from the surrounding warm plasma. When applying the same analysis to the Carina globulettes we found that the globulettes are close to virial equilibrium but none is bound, even when assuming an outer pressure (thermal plus turbulence) of the same magnitude as in the Rosette complex. On the other hand, the radiation pressure exerted by light from the numerous O stars in Carina should be much higher. This pressure acts on one side of the globulettes. The derived masses are subject to uncertainties (see Sect.~\\ref{sec:obs}), and we note that very little extra mass is needed to confine the objects, as would be the case if they contain denser yet unresolved cores, or more speculatively, even Jupiter-sized planets. This would explain why the globulettes appear to have survived for such a long time, as can be seen from their distribution over the nebula, where many objects are quite isolated and reside far away from larger molecular structures.\n\nThe total number of unbound planets in the Milky Way could amount to several hundred billion (Sumi et~al. \\cite{sumi11}). Globulettes in \\ion{H}{ii} regions may be an additional source of such free-floating planetary-mass objects, besides those ejected from circumstellar protoplanetary disks (e.g. Veras et~al. \\cite{veras09}).\n\n\\section{Conclusions}\n\\label{sec:conclude}\n\nWe have made an inventory of globulettes in the Carina Nebula complex based on existing HST narrow-band H$\\alpha$ images. A total of 288 globulettes were listed and measured for size, mass, and density. Most objects are either round or slightly elongated, and many of the latter are oriented in the direction of massive young clusters in the area. We discuss why only a minority have developed bright H$\\alpha$-emitting rims and\/or tails, and we note that there is no evidence so far of any embedded young stars.\n\nThe Carina globulettes are, on the whole, much smaller and less massive than those recognized from HST surveys of a number of other \\ion{H}{ii} regions. Practically all are of planetary mass, and most have masses less than one Jupiter mass. 
The corresponding mean densities are much higher than in other regions, exceeding number densities of 10$^{5}$ cm$^{-3}$ in several objects. We found a statistical relation between average density and size in the sense that the smallest globulettes are also the densest. Globulettes may detach from larger blocks of molecular gas, like isolated fragments, elephant trunks, and shell structures, after which their thinner envelopes evaporate and leave denser cores, which may become even more compressed with time.\n\nFrom virial arguments we conclude that the objects are not bound unless they contain a bit more mass than inferred from the derived mean mass. Most of the tiny objects are quite isolated and located at projected distances of $>$~1.5 pc from the closest larger molecular structures, which indicates that the objects can survive for long times in the nebula. We speculate that the objects might contain denser cores or even planetary-mass objects that already have formed in their interior. \n\nWe suggest that the Carina globulettes are a more evolved state than the larger and less dense objects that are abundant in other \\ion{H}{ii} regions. Globulettes in \\ion{H}{ii} regions may be one source of the large number of free-floating planetary-mass objects that has been estimated to exist in the Galaxy.\n\n\n\\begin{acknowledgements}\n\nWe thank the referee Bo Reipurth for valuable comments and suggestions. This work was supported by the Magnus Bergvall Foundation, the L\\\"angmanska Kulturfonden, and the Swedish National Space Board.\n\n\\end{acknowledgements}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nComets are mostly studied via telescopes, and while the European Space Agency (ESA)'s Rosetta mission has told us much more about comet 67P\/Churyumov-Gerasimenko (hereafter 67P) than remote observation alone could ever achieve, remote observation of the comet is necessary for a number of reasons. Firstly, observations were used to characterise the comet ahead of the spacecraft's arrival, to plan the mission. Secondly, the comet's coma and tails stretch thousands to millions of kilometres, far beyond Rosetta's orbit, so a wider view is necessary to understand the total activity and the large scale context that complements the in situ view from the spacecraft. Finally, telescopic observations allow a comparison between 67P and other comets, the vast majority of which will only ever be astronomical objects and not visited directly. Parallel observations allow Rosetta measurements to provide `ground truth' to compare with the interpretation of observations, allowing various techniques to be tested, and for the lessons from Rosetta to be applied to the wider comet population.\n\nThe world-wide campaign of observations of 67P includes most major observatories, and deploys all possible techniques across a wide range of wavelengths, from ultraviolet to radio. Unlike previous comet mission support campaigns (for example those supporting the NASA Deep Impact and EPOXI missions\\cite{Meech05,Meech11}), the Rosetta mission and campaign are unique in their long duration -- there is not a single fly-by or impact to observe, but rather the long-term evolution of the comet as it approached and then retreated from the Sun. The campaign has been coordinated via a website\\footnote{\\url{http:\/\/www.rosetta-campaign.net}}, mailing lists, and regular meetings. 
The coordination largely began with a meeting in London in 2012, sponsored by the European Union FP7 research infrastructure `Europlanet' under its networking activity fund\\footnote{\\url{http:\/\/europlanet-scinet.fi\/}}. Further meetings were hosted by the European Southern Observatory (ESO) and ESA (usually as parallel sessions to Rosetta Science Working Team meetings). Europlanet again sponsored a workshop in June 2016, at Schloss Seggau near Graz in Austria, towards the end of the parallel Rosetta observations, where results could be exchanged and further analyses of data planned. In addition to the wide range of observations from professional observatory facilities, a large number of amateur astronomers have collected a significant and useful data set. The amateur campaign was coordinated with support from the NASA Rosetta project office, in parallel with and as part of the main campaign, and is described in detail elsewhere\\cite{Padma-paper}.\n\nThis paper presents an overview of the observations of 67P, together with a\nreview of some key results from the observing campaign. These include description of the large-scale morphology of\nthe comet (section~\\ref{sec:morphology}), results from spectroscopy (section~\\ref{sec:spec}) and polarimetry (section~\\ref{sec:pol}), and estimates of total activity levels (section~\\ref{sec:activity}). \nFurther detailed studies are ongoing, but some other preliminary results, and discussion on their implications, are\nincluded in section~\\ref{sec:discussion}.\n\n\n\\section{Observations}\n\nPrior to its selection as the Rosetta mission target in 2003, comet 67P was not particularly well studied. It was discovered in 1969 and observed at its 1983, 1995 and 2002 perihelion passages as part of narrowband photometry surveys of comets\\cite{Schleicher2006}, and was targeted at larger heliocentric distance (for nucleus observations) in between these\\cite{Mueller92,Lowry03}. While the original target of the Rosetta mission, comet 46P\/Wirtanen, was studied in detail as the mission was developed\\cite{Schulz1999}, the delay in the launch of the mission (due to concerns about the launch vehicle) meant that 67P was selected only a year before Rosetta launched towards it, and the relatively unknown comet suddenly became the target of many observations. 67P was just past its perihelion at the time, and the first observations from ESO constrained gas activity levels via spectroscopy\\cite{Schulz2004}, while Hubble Space Telescope (HST) imaging was used to estimate nucleus size, shape and rotation rate information using coma subtraction techniques\\cite{Lamy06,Lamy-SSR}. Imaging and polarimetric observations at this and the next perihelion passage (in 2009) were used to constrain the dust activity levels and morphology of the coma\\cite{Lara05,Hadamcik10,Lara11,Tozzi11}, including large scale structures\\cite{Vincent2013}, to monitor changes in dust properties and produce models of the dust size distribution\\cite{Hadamcik10,Agarwal10,Fulle10}. Around the aphelion passage between these, a series of observations were used to pin down nucleus properties\\cite{Tubiana08,Tubiana11,Lowry12,Kelley06,Kelley09}.\n\nThese observations around a full orbit following selection as the Rosetta target meant that, by 2010, 67P was one of the best characterised Jupiter family comets that had not yet been visited by a spacecraft. 
An analysis of images taken from archives over all previously observed orbits allowed predictions on total activity to be made\\cite{Snodgrass2013}, which were largely confirmed during the 2014-2016 Rosetta mission parallel observations. A summary of all observations obtained through the 2009 perihelion passage (up until the end of 2010) is given in table 2 of ref.~\\cite{Snodgrass2013}. Additional archival images from the 1 m Jacobus Kapteyn Telescope (JKT) and 2.5 m Isaac Newton Telescope (INT) on La Palma in April 2003 and February 2004, and the 4 m SOAR in August and September 2007, have subsequently been identified. In addition, there were regular observations during the 2011-2012 aphelion passage from ESO telescopes, despite the comet being both faint and located in the direction of the crowded star fields towards the galactic centre. \n\n\\begin{figure}\n\\begin{center}\n \\includegraphics[width=0.7\\columnwidth]{observability.pdf} \n\\caption{Observability of the comet, as seen from Earth, during the Rosetta mission. The observability of the comet from Earth is shown by hatched, cross-hatched and solid grey areas marking when the solar elongation is less than 50$^\\circ$, 30$^\\circ$ and 15$^\\circ$, respectively. Perihelion (in August 2015) is marked by a vertical dashed line. At that time the comet was 43$^\\circ$ from the Sun. Dash-dot vertical lines show the boundaries between the years 2014-2016. Upper panel: Solar elongation $\\epsilon$ (solid line) and phase angle $\\alpha$ (dashed line); Middle panel: Declination; Lower panel: Heliocentric $r$ (solid line) and geocentric $\\Delta$ (dashed line) distances.}\n\\label{fig:visibility}\n\\end{center}\n\\end{figure}\n\n\\begin{table}\n\\caption{Summary table of observations. }\n\\begin{center}\n\\tiny\n\\begin{tabular}{llllll}\n\\hline\nTelescope\/Instrument & Technique & Wavelength range & Dates (YY\/MM\/DD) & ToT & PI \\\\\n\\hline\nVLT\/FORS & IMG & R & 13\/04\/17 -- 16\/06\/20 & 97.3 & C Snodgrass \\\\\nVLT\/FORS & IMG & R & 13\/05\/11 -- 13\/06\/18 & 0.9 & KJ Meech \\\\\nNOT\/ALFOSC & IMG & V,R & 13\/05\/13 -- 16\/05\/27 & 103.6 & H Lehto \\\\\nNOT\/StanCam & IMG & V,R & 14\/04\/05 -- 16\/05\/22 & 11.4 & H Lehto \\\\\nVLT\/FORS & SPEC & 330-1100 nm & 14\/05\/07 -- 13\/06\/18 & 56.1 & C Snodgrass \\\\\nUH88-TEK & IMG & R & 14\/06\/26 -- 14\/06\/26 & 2.7 & KJ Meech \\\\\nLOT (1-m) & IMG & BVRI, NB OSIRIS set & 14\/06\/30 -- 15\/11\/30 & 50.8 & ZY Lin \\\\\nHST\/ACS\/WFC & POL & F606W & 14\/08\/18 -- 16\/03\/07 & 59.2 & D Hines \\\\\nGemini S\/Flamingos-2 & IMG & J,H,K & 14\/09\/19 -- 15\/06\/30 & 5.5 & MM Knight \\\\\nGemini S\/GMOS & IMG & g,r,i,z & 14\/09\/20 -- 14\/11\/19 & 4.5 & MM Knight \\\\\nOGS\/SDC & IMG & visible & 14\/09\/21 -- 16\/07\/04 & 4.6 & D Koschny \\\\\nCFHT\/MegaCam & IMG & g,r & 14\/10\/24 -- 16\/05\/10 & 0.3 & KJ Meech \\\\\nVLT\/XSHOOTER & SPEC & 0.3-2.5 $\\mu$m & 14\/11\/09 -- 14\/11\/16 & 10.1 & C Snodgrass \\\\\nTRAPPIST 0.6m & IMG & B,V,R,I, CN,C2,BC,GC,RC & 15\/04\/18 -- 16\/06\/07 & 22.2 & E Jehin \\\\\nSATU\/St Augustine - Tuorla CCD & IMG & R & 15\/04\/23 -- 15\/06\/17 & 4.0 & H Lehto \\\\\nALMA & SPEC & 293-307, 343-355 GHz & 15\/05\/17 -- 15\/09\/27 & 5.8 & N Biver \\\\\nVLT\/UVES & SPEC & 304-1040 nm & 15\/06\/24 -- 16\/02\/10 & 10.0 & E Jehin \\\\\nWHT\/ACAM & IMG\/SPEC & R,I, g,r,i \/ 350-940 nm & 15\/07\/07 -- 16\/06\/28 & 4.0 & A Fitzsimmons \/ C Snodgrass \\\\\nTNG\/NICS & IMG & J,H,K & 15\/07\/11 -- 15\/12\/13 & 5.6 & GP Tozzi \/ C Snodgrass \\\\\nSTELLA\/WIFSIP1 & IMG & g,r,i,z & 
15\/07\/18 -- 16\/06\/08 & 39.7 & C Snodgrass \\\\\nLT\/IO:O & IMG & g,r,i,z & 15\/07\/19 -- 16\/06\/11 & 22.4 & C Snodgrass \\\\\nIRTF\/CSHELL & SPEC & 1-5 $\\mu$m & 15\/07\/26 -- 15\/07\/31 & 3.9 & L Paganini \\\\\nGemini N\/NIRI & IMG & J,H,K & 15\/08\/04 -- 16\/05\/23 & 15.3 & MM Knight \\\\\nLCOGT 2.0m\/CCD & IMG & g,r,i,z & 15\/08\/08 -- 15\/09\/22 & 1.5 & T Lister \\\\\n2m BNAO-Rozhen\/FoReRo2 & IMG & R, NB 387,443,614,642,684 nm & 15\/08\/11 -- 16\/04\/28 & 8.1 & G Borisov \/ P Nikolov \\\\\nCA 2.2m\/CAFOS & IMG & R & 15\/08\/14 -- 16\/06\/05 & 49.8 & F Moreno \\\\\nCA 3.5m\/MOSCA & IMG & R & 15\/08\/18 -- 15\/08\/25 & 0.4 & F Moreno \\\\\nTNG\/DOLORES & IMG\/SPEC & B,V,R \/ 300-843 nm & 15\/08\/18 -- 16\/06\/06 & 16.6 & GP Tozzi \/ C Snodgrass \\\\\nLowell 0.8m\/NASAcam & IMG & R,CN & 15\/08\/18 -- 15\/12\/01 & 15.0 & MM Knight \\\\\nWHT\/ISIS & SPEC & 300-1020 nm & 15\/08\/20 -- 16\/04\/27 & 7.5 & A Fitzsimmons \/ C Snodgrass \\\\\nWendelstein\/2m & IMG & g,r,i & 15\/08\/21 -- 16\/05\/09 & 94.9 & H Boehnhardt \\\\\nWendelstein\/0.4m & IMG & r,i & 15\/08\/21 -- 15\/11\/11 & 29.8 & H Boehnhardt\\\\\nLT\/SPRAT & SPEC & 400-800 nm &\t15\/09\/04 -- 16\/01\/12\t& 1.85 & C Snodgrass\\\\\nLT\/LOTUS & SPEC & 320-630 nm & 15\/09\/05 -- 16\/01\/12 & 2.7 & C Snodgrass \\\\\nLowell 1.1m\/Kron photometer & PHOT & OH,NH,CN,C3,C2,UVC,BC,GC & 15\/09\/12 -- 15\/10\/15 & 2.0 & DG Schleicher \\\\\nIRAM-30m\/EMIR & SPEC & 3.4-0.97 mm & 15\/09\/18 -- 15\/09\/22 & 8.0 & N Biver \\\\\nOSN 1.52m\/CCD & IMG & R & 15\/09\/22 -- 15\/11\/28 & 13.0 & F Moreno \\\\\nDCT\/LMI & IMG & R,r,CN,OH,BC,RC & 15\/09\/23 -- 16\/05\/26 & 4.9 & MM Knight \/ MSP Kelley \/ D Bodewits \\\\\nGTC\/OSIRIS & IMG & r,NB 514,530,704,738,923 nm & 15\/09\/29 -- 16\/02\/10 & 5.9 & C Snodgrass \\\\\nINT\/IDS & SPEC & 300-610 nm & 15\/10\/07 -- 15\/10\/07 & 0.7 & C Snodgrass \\\\\nGemini N\/GNIRS & SPEC & 1-2.5 $\\mu$m & 15\/10\/14 -- 16\/01\/04 & 2.6 & MM Knight \\\\\nINT\/WFC & IMG & B,r,i,z & 15\/10\/14 -- 16\/06\/21 & 56.1 & C Snodgrass \/ A Fitzsimmons \/ SC Lowry \\\\\nWHT\/LIRIS & IMG & J,H,K & 15\/10\/29 -- 16\/01\/23 & 3.0 & C Snodgrass \\\\\n6m BTA SAO RAS\/SCORPIO-2 & IMG\/SPEC\/IPOL & g,r \/ 350-707 nm \/ R & 15\/11\/08 -- 16\/04\/05 & 3.7 & N Kiselev \/ V Rosenbush \\\\\nOdin sub-mm receivers & SPEC & 0.54 mm & 15\/11\/09 -- 15\/11\/12 & 63.8 & N Biver \\\\\nLijiang (2.4m) & IMG & R, NB OSIRIS set & 15\/11\/19 -- 16\/01\/06 & 8.3 & ZY Lin \\\\\nTNG\/HARPS-N & SPEC & 383-693 nm & 15\/12\/09 -- 15\/12\/09 & 0.3 & C Snodgrass \\\\\nTBL\/Narval & SPEC & 370-1000 nm & 15\/12\/10 -- 15\/12\/11 & 2.3 & J Lasue \\\\\nOHP 80cm & IMG & visible & 15\/12\/11 -- 16\/01\/08 & 6.0 & E Hadamcik \\\\\nHCT (2m) & IMG & R,I & 15\/12\/12 -- 15\/12\/12 & 2.5 & AK Sen \\\\\nWHT\/ISIS & IPOL & r & 15\/12\/18 -- 16\/03\/11 & 18.0 & C Snodgrass \/ S Bagnulo \\\\\nNEOWISE & IMG & 3.4,4.6 $\\mu$m & 15\/12\/21 -- 16\/05\/23 & 0.1 & A Mainzer \/ J Bauer \\\\\nKeck\/HIRES & SPEC & 350-1000 nm & 15\/12\/26 -- 15\/12\/27 & 8 & A McKay \\\\\nVLT\/FORS & IPOL\/PMOS & R, NB 485 nm \/ 400-950 nm & 16\/01\/10 -- 16\/03\/04 & 8.2 & S Bagnulo \\\\\nOSN 0.9m\/CCD & IMG & R & 16\/01\/14 -- 16\/01\/16 & 3.0 & F Moreno \\\\\nLCOGT 1.0m\/CCD & IMG & r & 16\/01\/30 -- 16\/03\/06 & 1.3 & T Lister \\\\\nIRTF\/SPEX & SPEC & 0.8-2.5 $\\mu$m & 16\/02\/04 -- 16\/03\/28 & 17.5 & S Protopapa \/ Y Ramanjooloo \\\\\nGemini N\/GMOS & IMG & g,r,i,z & 16\/02\/16 -- 16\/05\/28 & 2.0 & MM Knight \\\\\nVLT\/MUSE & SPEC & 465-930 nm & 16\/03\/03 -- 16\/03\/07 & 9.0 & A 
Guilbert-Lepoutre \\\\\nVLT\/SINFONI & SPEC & H+K & 16\/03\/03 -- 16\/03\/07 & 7.4 & A Guilbert-Lepoutre \\\\\nSubaru\/HSC & IMG & HSC-g (480 nm) & 16\/03\/08 -- 16\/03\/08 & 1.1 & M Yagi \\\\\nIRTF\/MORIS & IMG & r & 16\/03\/13 -- 16\/03\/28 & 13.5 & Y Ramanjooloo \\\\\nSpitzer & IMG & 3.6,4.5 $\\mu$m & 16\/04\/08 -- 16\/05\/08 & 1.3 & MSP Kelley \\\\\nVLT\/VIMOS & IMG & R & 16\/05\/09 -- 16\/05\/10 & 1.5 & A Fitzsimmons \\\\\nNTT\/EFOSC & IMG & r & 16\/07\/29 -- 16\/07\/29 & 0.3 & P Lacerda \\\\\nKepler & IMG & visible & 16\/09\/08 -- 16\/09\/20 & 288.0 & C Snodgrass \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\small\n{\\it Notes: \nToT = time on target (hours). Techniques are IMG = imaging, PHOT = photometry, SPEC = spectroscopy, IPOL = imaging polarimetry, PMOS = spectropolarimetry. Filters in letters for standard bands, with lowercase (griz) indicating SDSS type filters and upper case (BVRI) indicating Johnson\/Cousins types. NB = narrowband (followed by central wavelengths), some cometary narrowband filters labelled by name (e.g. CN around CN emission band). Wavelength range given for spectroscopy, in typical unit (nm for visible, $\\mu$m for near-IR, etc.).\n}\n\\label{tab:obs-summary}\n\\end{table}%\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{map.pdf}\n\\caption{Map of locations of contributing observatories.}\n\\label{fig:map}\n\\end{center}\n\\end{figure}\n\n\nThe coordinated campaign of $\\sim$parallel observations with Rosetta began in the 2013 observing season, with approximately monthly imaging with the ESO 8~m VLT from April to October, primarily dedicated to astrometric measurements to improve the orbit determination ahead of Rosetta's arrival at the comet in 2014. Further observations with the VLT followed the beginning of detectable activity through 2014, up until the Philae landing in November\\cite{Snodgrass2016}, which coincided with the end of the 2014 visibility window from Earth. The comet became brighter as it approached perihelion in August 2015, and was observed by a wide range of facilities through the main visibility window in parallel with Rosetta, which stretched from April 2015 until August 2016. The visibility windows around the Rosetta mission are shown in fig.~\\ref{fig:visibility}, and a summary of all observations in the coordinated campaign is given in table~\\ref{tab:obs-summary}. More detailed information on the observations can be found on the online log of observations at \\url{http:\/\/www.rosetta-campaign.net\/observations}. The broad geographical spread of participating observatories is illustrated in fig.~\\ref{fig:map}. Totalling the (approximate) time on target from each set of observations, we calculate that $\\sim$1300 hours of telescope time were dedicated to observing comet 67P during the Rosetta mission.\n\nIn 2014 we were mainly limited to larger telescopes, the 8~m VLT and Gemini-S, due to the comet's faintness and Southern declination. There were also observations using the 2.5~m Nordic Optical Telescope (NOT) on the island of La Palma, which has the advantage of being able to point to low elevations, and was therefore one of the only telescopes able to follow the comet over the full observability range from both hemispheres\\cite{Zaprudin2015}. In the second quarter of 2015 the comet was briefly visible from Southern sites again, before being a Northern hemisphere target through perihelion, although visibility was limited to a short window before sunrise from any given site. 
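\n\nThe solar elongation and phase angle plotted in fig.~\\ref{fig:visibility} follow from the Sun--Earth--comet triangle. As a rough, illustrative cross-check (not part of the actual observation planning), the following sketch assumes that the heliocentric distance $r$, the geocentric distance $\\Delta$ and the Earth--Sun distance are already known from an ephemeris for a given date; the function name and the example distances are illustrative only:\n\\begin{verbatim}\nimport math\n\ndef elongation_and_phase(r, delta, r_es=1.0):\n    # Solar elongation (Sun-Earth-comet angle) and phase angle\n    # (Sun-comet-Earth angle) from the triangle side lengths,\n    # all in au, via the law of cosines.\n    cos_elong = (r_es**2 + delta**2 - r**2) / (2.0 * r_es * delta)\n    cos_phase = (r**2 + delta**2 - r_es**2) / (2.0 * r * delta)\n    return (math.degrees(math.acos(cos_elong)),\n            math.degrees(math.acos(cos_phase)))\n\n# Illustrative (assumed) values near perihelion, r ~ 1.24 au and\n# Delta ~ 1.77 au, give an elongation of ~43 deg, consistent with\n# the value quoted above for August 2015.\nprint(elongation_and_phase(1.24, 1.77))\n\\end{verbatim}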
\n\nAround perihelion robotic telescopes played a large part in the campaign, as these are ideal for obtaining regular short observations\\cite{Snodgrass-MNRAS}. One of the key robotic contributors to the campaign was the 0.6~m TRAPPIST telescope at La Silla observatory in Chile\\cite{TRAPPIST}, which is dedicated to monitoring comets (and extrasolar planets). A larger robotic facility, the 2~m Liverpool Telescope (LT) on La Palma\\cite{Steele2004}, was able to provide spectroscopic monitoring using a new instrument specially designed and commissioned for this observing campaign, LOTUS\\cite{lotus}. The LT observations were performed as part of a large International Time Programme across six Canary Island telescopes, which enabled a wide range of observations to be taken with various techniques (broad- and narrow-band imaging, spectroscopy, polarimetry) across the visible and near-IR wavelengths. Near-IR observations were also taken at the NASA IRTF facility on Hawaii and over a long period at the Gemini telescopes, while even longer wavelength observations were possible with the Spitzer and NEOWISE space telescopes in the IR, and with sub-mm arrays on Earth, including ALMA, near perihelion. Meanwhile, spectroscopic observations continued at the ESO VLT over as wide a time range as possible, despite some very challenging weather in 2016 in Chile. Many other facilities contributed imaging monitoring observations while the comet was relatively bright and well placed in late 2015 and the first half of 2016, with the Wendelstein observatory in Germany\\cite{Boehnhardt-MNRAS}, Calar Alto observatory in Spain, ESA's optical ground station on Tenerife, the Lulin observatory in Taiwan and the Lowell Observatory in the USA providing regular and world-wide coverage. \n\nAs Rosetta entered its extended mission in 2016 the comet was increasingly visible all night, although fading as it retreated from the Sun, and was targeted with wide-field imagers, including a serendipitous deep and wide observation with the 8~m Subaru telescope (Hyper Suprime-Cam) on Hawaii, and with integral field unit spectrographs (MUSE and SINFONI at the VLT), to investigate compositional variations across the gas and dust coma. The dust coma was further investigated by measuring its polarisation as the observing geometry changed, using various facilities including the Russian 6~m, the 2~m at Rozhen observatory in Bulgaria, the 4~m William Herschel Telescope on La Palma, the HST, and the VLT. 
\nFinally, as the Rosetta mission reached its end in September 2016, a last set of remote imaging observations was collected by the NASA Kepler satellite, as the comet happened to be crossing the survey field of this facility in the weeks before the end of mission, after it was no longer visible from Earth.\n\n\n\n\n\n\\section{Large scale morphology}\\label{sec:morphology}\n\n\n \\begin{figure}\n \\centering\n \\begin{overpic}[width=0.325\\columnwidth,trim=111 230 111 230, clip]{IMG-14-02-27.pdf}\\put(5,85){\\textcolor{white}{Feb 2014}}\\end{overpic}\n \\begin{overpic}[width=0.325\\columnwidth,trim=111 230 111 230, clip]{IMG-14-07-01.pdf}\\put(5,85){\\textcolor{white}{Jul 2014}}\\end{overpic}\n \\begin{overpic}[width=0.325\\columnwidth,trim=111 230 111 230, clip]{IMG-14-10-22.pdf}\\put(5,85){\\textcolor{white}{Oct 2014}}\\end{overpic}\\\\\n \\begin{overpic}[width=0.325\\columnwidth,trim=111 230 111 230, clip]{IMG-15-05-21.pdf}\\put(5,85){\\textcolor{white}{May 2015}}\\end{overpic}\n \\begin{overpic}[width=0.325\\columnwidth,trim=111 230 111 230, clip]{IMG-15-07-18.pdf}\\put(5,85){\\textcolor{white}{Jul 2015}}\\end{overpic}\n \\begin{overpic}[width=0.325\\columnwidth,trim=111 230 111 230, clip]{IMG-15-10-07.pdf}\\put(5,85){\\textcolor{white}{Oct 2015}}\\end{overpic}\\\\\n \\begin{overpic}[width=0.325\\columnwidth,trim=111 230 111 230, clip]{IMG-16-01-10.pdf}\\put(5,85){\\textcolor{white}{Jan 2016}}\\end{overpic}\n \\begin{overpic}[width=0.325\\columnwidth,trim=111 230 111 230, clip]{IMG-16-03-10.pdf}\\put(5,85){\\textcolor{white}{Mar 2016}}\\end{overpic}\n \\begin{overpic}[width=0.325\\columnwidth,trim=111 230 111 230, clip]{IMG-16-06-03.pdf}\\put(5,85){\\textcolor{white}{Jun 2016}}\\end{overpic}\\\\\n \n \\caption{$R$-band Images of the comet, 1 arcminute on each side. Arrows indicate the direction of the orbital velocity ($v$) and Sun ($\\odot$) directions, i.e. opposite the expected direction of the dust trail and ion tail respectively. Image dates, telescopes and exposure times: \n 2014\/02\/27, VLT\/FORS, 10x50s; \n 2014\/07\/01, VLT\/FORS, 31x50s; \n 2014\/10\/22, VLT\/FORS, 39x50s; \n 2015\/05\/21, VLT\/FORS, 2x30s;\n 2015\/07\/18, LT\/IO:O, 10x20s;\n 2015\/10\/07, LT\/IO:O, 9x15s;\n 2016\/01\/10, LT\/IO:O, 3x120s;\n 2016\/03\/10, LT\/IO:O, 14x180s;\n 2016\/06\/03, LT\/IO:O, 3x180s.\n May 2015 image shows reflection from bright star out of FOV (above comet).\n }\n \\label{fig:morphology}\n \\end{figure}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{trail_night2.png}\n\\caption{Wide field image taken with the 2.5 m INT in March 2016, showing the long trail (approximately 2 degrees).}\n\\label{fig:INT}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[width=0.32\\columnwidth]{subaru-FoV.jpg}\n\\raisebox{0.9\\height}{\n\\begin{tabular}{l}\n\\includegraphics[width=0.66\\columnwidth]{subaru1.jpg}\\\\\n\\includegraphics[width=0.66\\columnwidth]{subaru2.jpg}\n\\end{tabular}\n}\n\\caption{Wide field image taken with the 8 m Subaru telescope and the Hyper Suprime-Cam, taken on 2016\/03\/08 when the comet was at 2.5 au from the Sun. Left: Full field of view of HSC (single frame), showing the region containing the comet. Top right: Extracted comet region (62.3 $\\times$ 14.2 arcmin), total of 10 $\\times$ 6 minute exposures stacked. 
Bottom right: same images median combined after shifting to account for comet motion.}\n\\label{fig:subaru}\n\\end{figure}\n\nThe appearance of the comet changed significantly during the campaign, as it went from inactive, through perihelion, and then as the activity faded as it retreated from the Sun. A selection of images that illustrate the general appearance of the comet is shown in fig.~\\ref{fig:morphology}. \n\nThe earliest observations in the campaign (in 2013 and early 2014) showed a point source, an apparently inactive nucleus, although photometry and observations from Rosetta\/OSIRIS indicated that detectable activity began early in 2014, when the comet was more than 4~au from the Sun\\cite{Snodgrass2016,Tubiana2015}. The comet became visibly active during 2014, showing a short tail at least 10 arcseconds long, corresponding to $25\\,000$ km at the distance of the comet, by the time of the Philae landing in November.\n\nWhen the comet was again visible from Earth in 2015 it was considerably brighter, with a tear-drop shape and a long tail ($\\sim$ 70\" \/ $120\\,000$ km), showing a similar appearance through perihelion. The apparent tail length increased as the comet continued to brighten, and also as it became visible in darker skies. As the comet began to retreat from the Sun it took on a distinct aspect, similar to that shown on previous orbits, with a broad coma and a clear narrow tail (possibly a so-called `neck-line', composed of dust released 180$^\\circ$ in true anomaly before the date of observation), along with a very long dust trail tracing its orbit. It maintained this appearance until the end of the campaign, although fading as it reached $\\sim$3.5~au from the Sun by the end of observations in 2016.\n\nThe long dust trail was particularly apparent in wide field images obtained in early 2016, when the comet was well placed for deep imaging. Figure~\\ref{fig:INT} shows a mosaic taken with the wide field camera on the 2.5~m INT on La Palma, with each L-shaped field-of-view covering approximately half a degree on a side. The trail can be traced to over $10^7$~km from the comet, but is seen to be at a slightly different angle from the tail\/neck-line feature that is brighter closer to the comet. \nThe `two tails' (trail and neck-line) are also apparent in deep and wide-field images obtained serendipitously with the 8~m Subaru telescope on Mauna Kea, in observations on 2016\/03\/08 using the new Hyper Suprime-Cam, which is a mosaic imager with a 1.5~degree field of view (fig.~\\ref{fig:subaru}). Detailed modelling of these structures is still to be done, but it is clear that the trail and neck-line become more apparent post-perihelion. Finson-Probstein models\\cite{FinsonProbstein,Beisser1987} indicate that the narrow tail structure should contain old dust; for example in early November 2015 it should be dust that is at least 400 days old, released long before perihelion\\cite{Boehnhardt-MNRAS}.\n\nIt is worth noting that only dust tails (or trails) were observed. Despite\ndedicated searches there was no ion tail feature seen. The observing geometry\n(and the low inclination of 67P's orbit) meant that such observations were\nalways challenging, but it seems that any ion features near to the comet were\ntoo faint to separate from the dust, and further away could not be detected\neven in the deepest images. 
While the comet was relatively bright a number of observations were made through narrowband filters, either using special cometary filters from the Hale-Bopp set\\cite{Farnham2000}, or by selecting suitable bandpasses from larger narrowband sets (e.g. for the 10~m Gran Telescopio Canarias [GTC]). At the 1~m Lulin Optical Telescope (LOT) in Taiwan and at the 2.4~m at Lijiang in China, copies of the same narrowband filter set flown on Rosetta\/OSIRIS\\cite{Keller-OSIRIS} were used to observe the comet from the ground, providing a direct comparison between the inner tens of km seen from the spacecraft and the whole coma. Preliminary results from narrowband imaging do not reveal obvious differences between gas and dust morphologies, other than the large scale gas coma being more symmetrical with no tail seen, but the observations generally have relatively poor S\/N and analysis is ongoing.\nPhotometry from these images can be used to derive gas production rates, which are consistent with spectroscopy results (see section~\\ref{sec:spec}), and (in the case of the LOT\/Lijiang data) will be used to make direct comparisons with the Rosetta\/OSIRIS gas observations\\cite{Lin-inprep,Bodewits2016}.\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.315\\columnwidth]{jets_nov2015.pdf}\n\\includegraphics[width=0.675\\columnwidth]{fig_gemini_trim.png}\n\\caption{{\\bf Left:} `Jets' in the coma (labelled J1, J2), as seen from the 6 m BTA telescope of the SAO (Russia), on 2015\/11\/08. Image is $\\sim$~100,000 km across. {\\bf Right:} Enhanced Gemini NIRI $J$-band images of the comet monthly from August 2015 through January 2016 (dates given as YYMMDD). Images are centred on the comet and an azimuthal median profile has been subtracted to reveal the fainter underlying structure. At times (August, January) two distinct structures can be discerned that match those labelled J1, J2 in the SAO image, while at other times they overlap to appear as a single larger structure towards the southeast. All images have the same colour scheme with red\/orange bright and blue\/purple\/black faint, but different colour scales. Each image is 50,000 km on a side and has north up and east to the left. The Sun and the direction of the comet's orbital velocity are towards the southeast in all panels, and do not change significantly over this period (fig.~\\ref{fig:morphology}). The red blob within a few pixels of the centre in all panels is an artifact of the enhancement; trailed stars can be seen as streaks in August, October, and January.}\n\\label{fig:jets}\n\\end{center}\n\\end{figure}\n\nThe morphology within the coma on $10^3$--$10^5$~km scales is more complex than the large scale tail\/trails picture. Various image enhancement techniques can be used to reveal structure within the coma, and a similar pattern is visible in different data sets and using different techniques. A stable pattern of fans or jets is seen using either Larson-Sekanina\\cite{Larson+Sekanina84} processing or subtraction of an azimuthal median profile (fig.~\\ref{fig:jets}). Although we refer to these structures as jets, they may be projections of broader dust flows (fans), and do not necessarily relate to the narrow jets seen in Rosetta images of the inner coma\\cite{Lin-AA}. 
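\n\nThe azimuthal-median enhancement used for fig.~\\ref{fig:jets} can be illustrated with a short sketch. The following Python fragment is not the reduction code used for the Gemini or SAO data, but a minimal illustration of the idea, assuming a background-subtracted image and a known optocentre position (xc, yc); the function name and bin width are illustrative only:\n\\begin{verbatim}\nimport numpy as np\n\ndef subtract_azimuthal_median(image, xc, yc, dr=2.0):\n    # Radial distance of each pixel from the comet optocentre.\n    y, x = np.indices(image.shape)\n    r = np.hypot(x - xc, y - yc)\n    # Median brightness in annuli of width dr pixels gives the\n    # azimuthally averaged coma profile.\n    rbin = (r / dr).astype(int)\n    profile = np.zeros(rbin.max() + 1)\n    for i in range(rbin.max() + 1):\n        mask = rbin == i\n        if mask.any():\n            profile[i] = np.median(image[mask])\n    # Subtracting the profile leaves the azimuthal structure (jets, fans).\n    return image - profile[rbin]\n\\end{verbatim}\nStructures such as J1 and J2 then appear as positive residuals in the difference image; the Larson-Sekanina filter achieves a similar enhancement by differencing rotated copies of the image about the optocentre.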
\nThis pattern showed a slow evolution\\cite{Boehnhardt-MNRAS,Knight-inprep,Zaprudin-inprep}, with the relative intensity of the different jets approximately following the changing seasons on the comet -- the Southern structures are brighter around perihelion when this hemisphere of the nucleus is illuminated\\cite{Keller2015}. Preliminary analysis of coma morphology seen throughout the apparition is consistent with predictions\\cite{Vincent2013} for the source regions and pole solution [Vincent, priv. comm.], indicating that these jets are features that reappear each orbit\\cite{Lara2015,Boehnhardt-MNRAS,Knight-inprep}.\n\nFurther analysis of the shape of the coma reveals evidence for a short-lived change (outburst) in late August 2015, around the time of peak activity post-perihelion, and a change in the slope of the coma profile indicating possible dust fragmentation\\cite{Boehnhardt-MNRAS}. Observations with the HST revealed differences in polarisation within the jets compared with the background coma\\cite{Hadamcik-MNRAS}. Finally, models can be employed to recreate the coma morphology based on assumptions on dust properties. In the case of 67P, where in situ instruments provide many constraints on these properties, detailed models have linked the 2014 observations and early OSIRIS observations\\cite{Moreno2016a} and will further investigate the changing morphology with time\\cite{Moreno2016b}.\n\n\\section{Composition}\\label{sec:spec}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{wht_comet_spec.pdf}\n\\caption{Spectrum taken with the blue arm of ISIS on the WHT, on 2015\/08\/19, with the comet just past perihelion. The narrow red line shows a scaled solar analogue (i.e. the continuum\/dust signal) for comparison. The strongest emission bands are identified.}\n\\label{fig:WHTspec}\n\\end{center}\n\\end{figure}\n\nFrom the ground, the composition of comets (or any astrophysical object) is generally probed by spectroscopy. The solid components of the comet (its nucleus and dust coma) have similar and generally featureless spectra, reflecting sunlight back with a red slope in the visible range and a more neutral reflectance spectrum in the near-IR. The spectrum of the dust coma seen early in the mission matched Rosetta\/VIRTIS observations of the nucleus\\cite{Snodgrass2016,Capaccioni2015Sci}. As the comet approached perihelion Rosetta observations revealed some variation across the surface, including exposed ice patches. Near-IR spectroscopy with Gemini-N (GNIRS) and the NASA IRTF (SpeX) was obtained with the goal to look for and characterise the signatures of water-ice grains in the coma as previously obtained in the much more active comet 103P\/Hartley 2 \\cite{Protopapa2014}. Analyses of the GNIRS and SpeX data are ongoing \\cite{Protopapa-inprep}.\n\nThe gas coma of comets is far more revealing, as emission features from various species can be measured across all possible wavelengths, for bright enough comets. \nRosetta's own remote sensing spectrograph suite observed the gas coma from the UV through to the sub-mm (ALICE, VIRTIS, MIRO), detecting water already from June 2014 onwards and also mapping CO$_2$, OH and CN, among other species\\cite{Gulkis2015Sci,BM-MNRAS,Migliorini2016}.\nFrom the ground we were mostly limited to observations of 67P in the visible range, where emissions from so-called `daughter' species are seen (e.g. fig.~\\ref{fig:WHTspec}). 
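\n\nFor orientation, the link between an observed emission-band flux and the production rates quoted below can be sketched, to first order and neglecting aperture and model-dependent (Haser\/vectorial) corrections, as\n\\[\nM = \\frac{4\\pi\\Delta^{2}F_{\\rm band}}{g}, \\qquad Q \\approx \\frac{M}{\\tau},\n\\]\nwhere $M$ is the number of emitting molecules within the aperture, $F_{\\rm band}$ the measured band flux, $g$ the fluorescence efficiency of the band at the relevant heliocentric distance and velocity, $\\Delta$ the geocentric distance (in consistent units), and $\\tau$ the lifetime of the species; the production rates quoted in this and the following sections rely on more complete treatments of this conversion.\n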
Using the detection of these species to probe the composition of the parent ices in the nucleus requires the use of photochemistry models, which Rosetta presents a unique opportunity to test (by comparing these observations with in situ measurements of parent gasses escaping directly from the nucleus). The spectrum in fig.~\\ref{fig:WHTspec} was taken with the ISIS spectrograph on the WHT within a week of perihelion, and shows a fairly typical comet emission pattern, with obvious OH and CN bands, and weaker C$_2$, C$_3$ and NH features. The intensity of C$_2$ and C$_3$ in spectra of 67P recorded in this campaign is relatively low (compared with the strong CN band), placing 67P in the carbon-chain depleted class of comet, in agreement with earlier observations\\cite{Ahearn95}. \n\nLonger wavelength and\/or higher resolution spectroscopy was possible when the comet was at its brightest. High-resolution near-IR spectroscopy, useful to separate cometary water emission lines from the terrestrial atmosphere, was attempted close to perihelion (2015\/07\/26-31) in good conditions with CSHELL on the IRTF, but resulted only in (3$\\sigma$) upper limits of $Q({\\rm H}_2{\\rm O}) \\le 5.1 \\times 10^{27}$ molecules~s$^{-1}$ and $Q({\\rm C}_2{\\rm H}_6) \\le 9.9 \\times 10^{25}$ molecules~s$^{-1}$, assuming a rotational temperature of 40 K. These limits were close to the total water production interpolated from Rosetta results ($\\sim 3.9 \\times 10^{27}$ molecules~s$^{-1}$ for late July\\cite{Fougere2016}), suggesting a detection was just out of reach. \nAt longer IR wavelengths Spitzer\/IRAC \\cite{Werner2004, Fazio2004} \n and WISE \\cite{Mainzer2011}\nphotometry can be used to estimate the production rate of CO$_2$ \\cite{Reach2013,Bauer2012,Bauer2015}, another major parent species in the coma. The comet's CO$_2$-to-dust ratio at 2.8 to 3.0~AU (post-perihelion) appeared relatively low compared with other comets observed at similar distances in the same survey\\cite{Kelley-inprep}. Figure~\\ref{fig:spitzer-coma} shows the four Spitzer epochs median combined into a single image. An asymmetry in the 4.5~$\\mu$m coma due to emission from the CO$_2$ $\\nu_3$ band at 4.26~$\\mu$m suggests the production of this gas is dominated by the southern hemisphere, similar to Rosetta\/VIRTIS and Rosetta\/ROSINA observations from elsewhere in the orbit \\cite{Haessig2015Sci, BockeleeMorvan2015, Fink2016}. Observations were also carried out in the sub-mm range, using the large ground-based facilities IRAM and ALMA to detect HCN (a parent of CN) and CH$_3$OH at rates of $\\sim 9 \\times 10^{24}$ and $\\sim 2 \\times 10^{26}$ molecules~s$^{-1}$, respectively, in September 2015. From above Earth's atmosphere, observations with the Odin satellite in November 2015 searched for a water signature, but were only able to give upper limits ($Q({\\rm H}_2{\\rm O}) \\le 3.3 \\times 10^{27}$ molecules~s$^{-1}$), again close to the Rosetta value at that time\\cite{Fougere2016}.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{67p-spitzer-combined-3au.pdf}\n\\caption{Spitzer\/IRAC images of the comet at 3.6 and 4.5~$\\mu$m (left and center). The CO$_2$ coma (right) is apparent after the 3.6~$\\mu$m image (dust) is subtracted from the 4.5~$\\mu$m image (dust and gas). Celestial North (N), the projected orbital velocity ($v$), and the projected direction of the Sun ($\\odot$) are marked with arrows. 
Each image is approximately 200\\,000~km on a side.}\n\\label{fig:spitzer-coma}\n\\end{center}\n\\end{figure}\n\n\n\\section{Polarimetry}\\label{sec:pol}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{polariz_nov2015.pdf}\n\\caption{Distribution of linear polarisation (P, \\%) in comet 67P (left: coma and tail; right: zoom in on coma). Observation obtained with the SAO 6~m telescope on 2015\/11\/08.}\n\\label{fig:pol}\n\\end{center}\n\\end{figure}\n\n\nPolarimetry, and more specifically linear polarisation imaging and phase curves, provides evidence for changes in dust physical properties and gives clues to the size and morphology of the dust particles (see, e.g., ref~\\cite{Kolokolova2015}). \nPolarimetric images of 67P have been obtained from the HST ACS\/WFC in 2014, 2015 and 2016. In August and November 2014, the comet, still far away from the Sun, was observed at low galactic latitudes; for a phase angle of about 15.5$^\\circ$, the average polarisation was nominal, about -2\\% \\cite{Hines+ACLR2016}. Three months after perihelion, in November 2015, the comet was still quite active, with conspicuous structures in intensity; for a phase angle of about 33$^\\circ$, the average polarisation was in agreement with what had been noticed at the previous passage \\cite{Hadamcik10}, i.e. above the values typical of comets at this phase angle, suggesting significant changes in the properties of dust aggregates ejected by the comet after perihelion \\cite{Hadamcik-MNRAS,Hines-inprep}.\nPolarimetric images have also been obtained from the VLT, the WHT, Rozhen observatory, and BTA, the 6~m Russian telescope, between August 2015 and April 2016. The polarisation maps (fig.~\\ref{fig:pol}) provide evidence for different properties (e.g. size, shape, porosity) in the dust particles across the coma\\cite{Ivanova-inprep}.\n\n\n\n\n\\section{Total activity}\\label{sec:activity}\n\nOne of the fundamental measurements provided by the Earth-based view of 67P was an assessment of the `total' activity of the comet, which is an important reference for Rosetta results. Activity measurements from the spacecraft necessarily depend on various models, to reconstruct the global activity from a local measurement at one position inside the coma. Reassuringly, attempts to compare the measurements from various instruments with the ground-based total activity view produce largely consistent results (including between different Rosetta instruments, although there are some differences between ROSINA and VIRTIS), suggesting that the models used to interpret local measurements are valid\\cite{Fougere2016,Hansen-MNRAS}.\n\nThe total activity of the comet can be measured in various ways from the ground, looking at the dust or gas coma. The total dust activity is easiest to follow, and can be assessed using broad-band photometry (typically $R$-band, to avoid contamination in the bandpass by gas emissions). Archival imaging was used to measure the total activity in previous orbits\\cite{Snodgrass2013}, and to make predictions for 2014-2016, and the same sort of measurements were applied to the campaign data: we measure the total brightness of the comet in $R$-band within a constant circular aperture of radius $\\rho$ = $10\\,000$ km at the distance of the comet. Some observations were taken with Johnson or Cousins $R$-band filters, while others used the $r'$ filter of the SDSS system (and VLT\/FORS observations used the `R\\_SPECIAL' filter that is somewhere between these). 
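\n\nThe fixed $\\rho$ = $10\\,000$ km aperture corresponds to a different angular radius at each epoch, and SDSS-like $r$ magnitudes must be colour-corrected onto the Cousins $R$ scale. A minimal sketch of this bookkeeping is given below; the function names are illustrative, and the transformation coefficients c0 and c1 are placeholders for whichever published SDSS-to-Cousins relation is adopted, not the values used in the actual reduction:\n\\begin{verbatim}\nimport math\n\nAU_KM = 1.495978707e8  # one astronomical unit in km\n\ndef aperture_radius_arcsec(rho_km, delta_au):\n    # Angular radius subtended by a fixed physical radius rho_km\n    # at geocentric distance delta_au (small-angle approximation).\n    return math.degrees(rho_km / (delta_au * AU_KM)) * 3600.0\n\ndef r_sdss_to_R_cousins(r_mag, g_minus_r, c0=0.0, c1=0.0):\n    # Generic linear colour transformation R = r + c0 + c1*(g-r);\n    # c0 and c1 must be taken from a published relation (placeholders here).\n    return r_mag + c0 + c1 * g_minus_r\n\n# At an assumed Delta ~ 1.77 au the 10,000 km aperture is ~7.8 arcsec in radius.\nprint(aperture_radius_arcsec(1.0e4, 1.77))\n\\end{verbatim}\n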
All photometry was calibrated onto the Cousins $R$ (Landolt) photometric scale, using transformations from the SDSS system and the $(g-r) = 0.62 \\pm 0.04$ colour of the comet measured with the LT near perihelion where necessary\\cite{Snodgrass-MNRAS}. This allowed direct comparison with predictions. \n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.7\\columnwidth,trim=40 35 85 80,clip]{predict.pdf}\n\\caption{Total $R$-band magnitude of the comet, compared with the prediction from previous orbits (solid line). Solar elongation is indicated with hatching as in fig.~\\ref{fig:visibility}.}\n\\label{fig:rmag}\n\\end{center}\n\\end{figure}\n\n\nWe show the measured total brightness of the comet in $R$-band in fig.~\\ref{fig:rmag}, along with the prediction from three previous orbits. It is immediately clear that the comet's brightness followed the prediction very well, implying that the activity level of the comet is consistent from one orbit to the next. It is also clear that the campaign resulted in very complete coverage of the 2014-2016 period, with regular observations whenever it was possible to obtain them. The peak in activity in late August ($\\sim$2 weeks after perihelion) is obvious. There are subtle differences between data sets taken with different filters ($R$ vs $r'$), implying a possible change of colour in the coma with time. The change in colour is small and has not yet been studied in detail, but likely relates to the changing gas production\\cite{Snodgrass-MNRAS,Boehnhardt-MNRAS}. There are no large outbursts or other sudden brightness changes, and the phase function assumed in the prediction (a linear phase function with $\\beta = 0.02$ mag deg$^{-1}$) clearly gives a decent fit over the range of phase angles seen from Earth ($\\alpha \\le 35^\\circ$). The simple power law dependencies on heliocentric distance on which the predictions are based (flux $\\propto r^{-5.2}$ pre-perihelion and $\\propto r^{-5.4}$ post-perihelion\\cite{Snodgrass2013}) can be used to give a good first order description of the dust brightness. Just before perihelion the observed brightness appears to be slightly below the prediction, but this is probably an effect of the peak in activity being offset slightly from perihelion, which is not considered in the simple power law model\\cite{Snodgrass-MNRAS}. In terms of the widely used $Af\\rho$ parameter for quantifying cometary activity\\cite{Ahearn84}, we find that the peak in activity was around $Af\\rho \\approx 1000$~cm. \nThe Wendelstein data support an $Af\\rho$ power law dependence on heliocentric distance with an exponent of -3.7 to -4.2, depending on the phase function assumed, using data from mid-September to the end of December 2015\\cite{Boehnhardt-MNRAS}. This is close to the $Af\\rho \\propto r^{-3.4}$ post-perihelion dependence from the prediction paper, and within the range of previous determinations discussed there\\cite{Snodgrass2013}.\n\nThe total activity can also be measured in terms of gas production rates. The most abundant species released by the comet is water, but this is difficult to measure from the ground -- only comets considerably brighter than 67P can be regularly observed with high resolution spectroscopy to separate cometary water emission lines (e.g. in the near-IR) from the terrestrial atmosphere. In the weeks pre-perihelion an upper limit of $Q({\\rm H}_2{\\rm O}) \\le 5.1 \\times 10^{27}$ molecules s$^{-1}$ was measured with IRTF, as described in section~\\ref{sec:spec}. 
The alternative way to obtain water production rates from the ground is through observation of the daughter species such as OH, via the emission bands around 308 nm in the UV, or the [OI] lines near 630 nm. These are also challenging for a faint comet, given the strong absorption in the UV by atmospheric ozone and the need for high resolution to separate oxygen lines from terrestrial ones. Successful detections of OH were made in 67P, primarily using the ISIS spectrograph on the WHT (fig.~\\ref{fig:WHTspec}). A production rate of $Q({\\rm OH}) = 2.6 \\times 10^{27}$ molecules s$^{-1}$ was found on 2015\/08\/19 (within a week of perihelion), which corresponds to $Q({\\rm H}_2{\\rm O}) = 3.2 \\times 10^{27}$ molecules s$^{-1}$. Further observations with ISIS were attempted until April 2016, but the OH production rate was only measurable relatively close to perihelion\\cite{Fitzsimmons-inprep}. \nObservations of OH emission with the Lowell 1.1m and Kron photoelectric photometer (and narrowband filters) were used to derive water production rates of $Q({\\rm H}_2{\\rm O}) = 7.7$ and $3.4 \\times 10^{27}$ molecules s$^{-1}$ on 2015\/09\/12 and 2015\/10\/15, respectively.\n[OI] lines were detected using UVES on the VLT and HIRES on Keck in late 2015. While used in the past as a reliable proxy for H$_2$O production in comets \\cite{Morgenthaler2001,Fink2009,McKay2015}, the detection of abundant molecular oxygen in the coma of 67P \\cite{Bieler2015} complicates interpretation of the observed [OI] line fluxes in terms of H$_2$O production rates.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.7\\columnwidth,trim=40 35 85 80,clip]{CN-plot.pdf}\n\\caption{CN production rate (in molecules s$^{-1}$) as a function of heliocentric distance (negative values indicate pre-perihelion data, positive post-perihelion). We include data from previous orbits\\cite{Schleicher2006,Schulz2004,Lara11,Guilbert2014}, narrowband photometry from TRAPPIST and the Lowell 1.1~m around perihelion, and spectroscopy from the VLT, SAO 6 m and LT (see key in plot for symbols). Error bars are not included for clarity. The dashed line shows a scaled version of the dust scaling law plotted in fig.~\\ref{fig:rmag}. In general, pre-perihelion data are only upper limits (marked with arrows) until relatively low $r$, when the rate climbs quickly pre-perihelion, with a similar slope to the dust fit. CN emission can be detected to larger distance post-perihelion, with a shallow decrease in $Q$(CN). Data sets from different telescopes mostly agree, although the LT production rates post-perihelion are generally lower than those measured with larger telescopes, but with significant scatter. This behaviour appears to be seen in data from previous orbits too, with little change in total CN production (there appears to be a reasonable match between the `previous' points and those taken in this campaign).}\n\\label{fig:CNplot}\n\\end{center}\n\\end{figure}\n\nThe most easily detected gas species in a typical cometary coma is the CN radical, with a strong emission band around 389 nm, so the longer term gas production rate was monitored by observations of this band. Preliminary Rosetta results suggest that CN production does largely follow the water production rate, although there may be some long term variation in the relative proportions [Altwegg, priv. comm.]. The detection of CN was still challenging in 67P, with sensitive searches with the VLT and FORS in 2014 unsuccessful\\cite{Snodgrass2016}. 
When the comet returned to visibility in 2015 CN was still not immediately detected, despite the considerably brighter coma, and an upper limit of $2\\times10^{23}$ molecules~s$^{-1}$ was found in May with the VLT, even though the heliocentric distance was only 1.6 au. The rate then rapidly increased, with a positive detection finally achieved in early July at $2\\times10^{24}$ molecules~s$^{-1}$ (1.35 au), and a range of facilities were able to make observations via spectroscopy or narrowband imaging in the months after perihelion, while VLT and SAO 6 m observations continued to trace CN out to $\\sim$ 3 au post-perihelion. Estimates of the CN production rate against heliocentric distance are shown in fig.~\\ref{fig:CNplot}. The strong asymmetry around perihelion is clear -- while detection was impossible with even the best telescopes until just before perihelion inbound, CN was measured for many months post-perihelion.\n\n\n\n\\section{Discussion and open questions}\\label{sec:discussion}\n\nThe observing campaign largely demonstrated 67P to be a fairly typical Jupiter family comet, with a predictable and smoothly varying activity level, and no major outbursts or unusual events. In this way it confirmed that Rosetta was seeing typical behaviour of a typical object -- an important statement that allows the conclusions from Rosetta measurements to be taken as generally true for comets. However, there were some surprises that require further investigation. The most obvious of these is the puzzling difference between the symmetrical rise and fall of total activity as measured by dust brightness, and the sharp onset and then slow decrease in activity measured by CN gas production. Taken at face value this implies a significant change in dust-to-gas ratio with time, but the observation does not agree with the symmetrical rise and fall in total gas production seen by in situ Rosetta instruments\\cite{Fougere2016,Hansen-MNRAS}. While Rosetta\/ROSINA sees some change in the CN\/water ratio with time, these are subtle, and in general in situ measurements find that the CN and water production rates appeared to be correlated (while the relative abundance of other major species to water, e.g. CO$_2$\/water, varies across the nucleus and with time\\cite{Haessig2015Sci,BM-MNRAS}). The apparent turn on and off of CN as seen from Earth are near to the dates of equinox on the comet, so this could be a seasonal effect (i.e. the CN parent is mostly released from the Southern hemisphere), but this was not obvious in Rosetta\/ROSINA measurements [Altwegg, priv. comm.]. This implies a difference between in situ and whole coma measurements, which still needs to be explained. One possibility is a distributed source of CN, at distances $> 100$ km from the nucleus, that is not seen by Rosetta. A more detailed analysis of the long term gas and dust production rate monitoring will appear in a future paper\\cite{Opitom-inprep}. \n\nIf one of the conclusions of the parallel Earth and Rosetta observations is that the bright CN band cannot be used as a reliable tracer of total gas production, an equally important test will be to see how well more direct tracers compare with in situ measurements. Although more difficult to perform, observations of OH are generally thought to have the advantage of having a single well known parent (water) and therefore tracing total (water dominated) gas production directly. 
In the few months post-perihelion where we could perform OH measurements from the ground, Rosetta production rates are based on models that extrapolate to the whole coma from a local (or single line of sight) measurement. These models agree with ground-based photometry for (scaled) dust production\\cite{Hansen-MNRAS}, but have not yet been directly compared with ground-based gas measurements. A further complicating factor is the discovery, from Rosetta\/ALICE, that the dissociation models used to get daughter species fluxes need to take into account electron impact as well as Solar UV radiation\\cite{Feldman2015}. This has been taken into account in studies of the gas production via Rosetta\/OSIRIS narrowband imaging of the inner coma\\cite{Bodewits2016}, but the implications for the larger scale coma need to be considered. \n\nFinally, while the total brightness evolution of 67P was very smooth, there is evidence of short term variations (i.e. outbursts). Outbursts from comets vary in scale from the frequent but small scale events seen as Deep Impact approached comet 9P\/Tempel 1\\cite{Ahearn-DI, Feldman2007}, to events that can cause the coma to brighten by many magnitudes (such as the mega-outburst of 17P\/Holmes in 2007). The abundant photometry on 67P from this campaign will allow careful searches for small outbursts, and in particular tests to see if the many short-lived events seen as bright jets in Rosetta imaging\\cite{outbursts} are correlated with changes in the total brightness. One potentially significant outburst, in late August 2015, has already been identified from ground-based data due to the effect it had on the overall shape of the inner coma\\cite{Boehnhardt-MNRAS}. There is also a possible signature in ground-based photometry of the outburst seen by many of Rosetta's instruments in February 2016\\cite{Gruen-outburst}, although the comet was close to the full moon at that time, and also near to opposition and therefore phase function effects on the total brightness need to be carefully considered.\n\n\n\\section{Conclusion}\n\nAn unprecedented long-term campaign of observations followed comet 67P throughout the Rosetta mission, including characterisation of the nucleus and activity levels before the spacecraft arrived, and parallel to its operational period (2014-2016). This made 67P one of the best studied short period comets, with observations following it from its inactive state through perihelion and back out to beyond 3~au, despite challenging geometry (low solar elongation) for much of this apparition. The parallel observations with the long-term in situ monitoring from Rosetta provide a unique opportunity to test observational techniques and models against `ground-truth'. We find that the comet's brightness largely varies in a smooth and predictable way, with no major outbursts or changes from orbit-to-orbit, but subtle variations can be identified. The morphology of both the inner coma and the large scale tails and trails is also repeatable between orbits, and implies a stable pattern of activity, which we hope to correlate with the detailed view of active regions seen by the spacecraft. The comet's composition is typical of the carbon-depleted class, but the dust and CN gas production rates varied in different ways around perihelion, indicating possible differences in composition across the nucleus. 
With $\\sim$~1300 hours of observation over 4 years, there is a wealth of ground-based data to compare with the treasure trove of Rosetta results: A large number of detailed follow up studies are on going, and will be published in the coming year(s). \n\\vskip6pt\n\n\\enlargethispage{20pt}\n\n\n\n\\dataccess{It is our intention that all observational data from the campaign will be archived alongside Rosetta instrument data at the ESA Planetary Science Archive (\\url{http:\/\/www.cosmos.esa.int\/web\/psa}). In addition, much of the raw data are (or will be) available from individual observatory archive facilities. Observing log information available at \\url{http:\/\/www.rosetta-campaign.net} will also be permanently archived at the PSA.}\n\n\\aucontribute{C. Snodgrass coordinated the campaign and drafted the manuscript. All authors contributed to observations and\/or data reduction, and read and approved the manuscript.}\n\n\\competing{The authors declare that they have no competing interests.}\n\n\\funding{C. Snodgrass is supported by an UK Science \\& Technology Facilities Council (STFC) Rutherford fellowship.}\n\n\\ack{We acknowledge the contribution of the Europlanet EU FP7\/H2020 framework in supporting meetings that initiated this campaign in 2012 and brought many of us together to discuss the results in 2016. We thank the Royal Society and the organisers of the `Cometary science after Rosetta' meeting for the invitation to present the results of the campaign. \nS.F. Green acknowledges support from the STFC (Grant ST\/L000776\/1).\nJ. Kleyna is supported by NSF grant 1413736.\nH.J.~Lehto, B. Zaprudin and A. Somero acknowledge the support of the Academy of Finland (grant number 277375).\nJ. Licandro and J. de Le\\'on acknowledge support from the AYA2015-67772-R (MINECO, Spain).\nJ.~Lasue and A.C. Levasseur-Regourd acknowledge support from CNES, the French Space Agency, for this work in relation with CONSERT and MIDAS on board Rosetta.\nZ.Y. Lin and X. Wang were supported by NSC 102-2112-M-008-003-MY3 of the Ministry of Science and Technology, Taiwan, and National Natural Sciences Foundations of China (contract No. 11073051 and No. 11473066).\nC. Opitom acknowledges the support of the FNRS.\nFinally, we thank the many observatories involved in these observations for their support in allocating significant time to observing 67P, especially ESO, Gemini, and Observatorios de Canarias del IAC (through the CCI International Time Programme) for enabling the long-term baseline of observations.\nWe are grateful for the efforts of various support astronomers in assisting with the observations: In particular\nIan Skillen at the ING; \nFumiaki Nakata, Finet Francois, and the HSC Queue Working Group at Subaru;\nThomas Granzer at STELLA;\nDavid Abreu and Pablo Ruiz from Ataman Science, Spain, for supporting the OGS observations.\n}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Conclusion and Future Directions}\n\\vspace{-1mm}\nIn this paper, we propose Multi-Objective Actor-Critics (in short, MoTiAC)\nfor bidding optimization in RTB system. To our best knowledge, MoTiAC is the first to utilize\nspecialized actor-critics to solve the problem of multi-objective bid optimization. 
By learning priors from historical data, our model is able to follow adaptive strategies in a\ndynamic RTB environment and outputs an optimal bidding policy.\nWe conduct extensive experiments on a real-world industrial dataset and provide an interesting analysis of\nmodel properties, especially the convergence to Pareto optimality. Empirical results show that on off-line ad click data, MoTiAC outperforms the\nstate-of-the-art bidding algorithms and can generate a +4.2\% lift in revenue and +2.7\% in ROI for T's advertising platform. \n\nOne future direction could be to extend the multi-objective solution with priors to the multi-agent reinforcement learning setting. Another possible\ndirection is to apply this method to other real-world RL applications.\n\n\n\\section{Experiments} \\label{sec:exp}\nIn the experiments, we use real-world industry data to answer the following\nquestions. Q1: how well can MoTiAC perform in general?\nQ2: what is the best way to combine multiple objectives?\nQ3: where and why does MoTiAC work?\n\n\\subsection{Experiment Setup}\\label{exp:setup}\n\\textbf{Dataset.} In the experiments, the dataset ($67~GB$) is collected from company T's Ads bidding system, ranging from Jan.~7th 2019 to Jan.~11th 2019. There are nearly 10,000 ads in\neach day with a huge volume of clicks and conversions. In line with the real-world\nbusiness, the bidding interval is set to 10 minutes (144 bidding sessions per day), which is much shorter than the 1 hour used in\n\\cite{jin2018real}. Basic statistics can be found in Table~\\ref{tb:statistics}. \n\nThe evaluation requires a large amount of memory. \nWe implement all the methods with PyTorch on two 128 GB memory machines with 56 CPUs. We perform a five-fold cross-validation, i.e., using 4 days for training and\nthe remaining day for testing, and then report the averaged results. Similar settings can be found in\nthe literature \\cite{cai2017real,wang2017ladder,zhu2017optimized}.\n\n\n\n\n\\vspace{-1mm}\n\\subsection{Compared Baselines}\nWe carefully select related methods for comparison, and adopt the same settings\nfor all compared methods with 200 iterations. \\emph{Proportional-Integral-Derivative (PID)} \\cite{bennett1993development}\n is a widely used feedback control policy, which produces the control signal from a linear combination of the\n proportional factor, the integral factor and the derivative factor. PID is\n free from training. In company T's online ad system, PID is currently used to\n control bidding. We employ it as a standard baseline and will report relative experimental\n results with respect to it; a minimal sketch of such a controller is provided at the end of this subsection. Two state-of-the-art RTB methods are selected: \\emph{Reinforcement Learning to Bid (RLB)} \\cite{cai2017real} and \\emph{Distributed Coordinated Multi-Agent Bidding\n \t(DCMAB)} \\cite{jin2018real}. Since they are both based on DQN, we use the same\n discrete action space (interval of 0.01) as \\cite{wang2017ladder}.\n \\emph{Aggregated A3C (Agg-A3C)} \\cite{mnih2016asynchronous} is a standard A3C with reward combination, and we \n employ it to compare with MoTiAC and show the superiority of our \\emph{Reward Partition} scheme. In the\n experiments, without loss of generality, we linearly combine the multiple rewards for all baselines.\n\tWe also adopt two simple variants of our model that only consider one of the objectives: \\emph{Objective1-A3C (O1-A3C)} and \\emph{Objective2-A3C (O2-A3C)}. 
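\nAs referenced above, the following is a minimal sketch of a PID-style bid controller. It is only an illustration written for this discussion: the error signal (the relative deviation of the realized CPA from the target CPA), the gain values and the class interface are our own assumptions, not company T's production logic.\n\\begin{verbatim}
# Illustrative PID-style bid adjuster (hypothetical interface and gains).
class PIDBidController:
    def __init__(self, kp=0.4, ki=0.05, kd=0.1):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def adjust(self, bid, target_cpa, realized_cpa):
        # error > 0: realized CPA below target, the bid can be raised;
        # error < 0: realized CPA above target, the bid is lowered.
        error = (target_cpa - realized_cpa) / target_cpa
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        control = (self.kp * error
                   + self.ki * self.integral
                   + self.kd * derivative)
        return max(bid * (1.0 + control), 0.0)
\\end{verbatim}\n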
We denote our model as \\emph{Multi-objective Actor-Critics (MoTiAC)}.\n\n\\begin{table}[t]\\small\n\t\\centering\n\t\\begin{tabular}{c|cccc}\n\t\t\\toprule\n\t\t\\textbf{Date} & \\textbf{ \\# of Ads} & \\textbf{\\# of clicks} & \\textbf{\\# of conversions} \\\\\n\t\t\\midrule\n\t\t\\textbf{20190107} & 10,201 & 176,523,089 & 3,886,155 \\\\\n\t\t\\textbf{20190108} & {10,416 } & {165,676,734} & {3,661,060} \\\\\n\t\t\\textbf{20190109} & 10,251 & 178,150,666 & 3,656,714 \\\\\n\t\t\\textbf{20190110} & 9,445 & 157,084,102 & 3,287,254 \\\\\n\t\t\\textbf{20190111} & 10,035 & 181,868,321 & 3,768,247 \\\\ \n\t\t\\bottomrule\n\t\\end{tabular}\n\t\\caption{Statistics of Click Data.}\n\t\\label{tb:statistics}\n\\end{table}\n\n\\subsection{Evaluation Metrics}\nIn Sec.~\\ref{sec:pd}, we have claimed that the agent's goal is to (1) maximize total revenue and (2) maintain ROI. In\nthis section, we refine these two objectives with another two terms from platform side, which will appear in our experiment result.\n\n\\emph{GMV.} Gross merchandise value\n(GMV) is a frequently used indicators for advertiser's earnings, and it turns out to be\nproportional w.r.t conversions ($GMV^{(j)} = conversions^{(j)}\\times CPA_{target}^{(j)}, ~~~\\forall ad_j\\in A$). So in the\nexperiment, we will use total $GMV = \\sum_j GMV^{(j)}$ of all ads to show the achievement\nof the first goal.\n\n{$\\frac{GMV}{Cost}$}.\nCost is the amount of money invested by advertisers, and it is defined by $Cost^{(j)} = clicks^{(j)}\\times CPC_{next}^{(j)}$ in this setting. The\nratio $\\frac{GMV}{Cost}$ is so-called return on investment (ROI), showing the joint benefits of all advertisers. We use it to indicate our second goal.\n\nTo make it easy to compare, we use R-score proposed in \\cite{LuYGWLC19} to evaluate the model performance. The higher the \nR-score, the more satisfactory the advertisers and\nplatform will be. Note that most of the comparison result \nwill be based on PID, i.e., $value \\rightarrow \\frac{value}{value_{PID}}$, except for Sec.~\\ref{sub:q3}.\n\n\\subsection{AtoQ1: General Experiment}\nWe report basic comparison of MoTiAC and other approaches in\nTable~\\ref{tb:general}. Note that results are in the basis of PID, and\nvalues in parentheses show improvement\/reduction percentage. \n\n\n\\begin{table}[t!]\n\t\\centering\n \\small\n \\begin{tabular}{c|ccc}\n \\toprule\n \\textbf{Model}&\n \\textbf{Relative GMV} &\n \\textbf{Relative $\\frac{GMV}{Cost}$} &\n \\textbf{R-score} \\\\\n \\midrule\n \\textbf{PID} & {1.0000} & 1.0000 & 1.0000\\\\\n \\textbf{DCMAB} & 1.0019 (+0.19\\%) & 0.9665 (-3.35\\%) & 0.9742\\\\\n \\textbf{RLB} & 0.9840 (-1.60\\%) & 1.0076 (+0.76\\%) & 0.9966\\\\\n \\textbf{Agg-A3C} & 1.0625 (+6.25\\%) & 0.9802 (-1.98\\%) & 0.9929\\\\\n \\midrule\n \\textbf{O1-A3C} & 0.9744 (-2.56\\%) & 1.0170 (+1.70\\%) & 1.0070\\\\\n \\textbf{O2-A3C} & 1.0645 (+6.45\\%) & 0.9774 (-2.26\\%) & 0.9893\\\\\n \\textbf{MoTiAC} & 1.0421 (+4.21\\%) & 1.0267 (+2.67\\%) & 1.0203\\\\\n \\bottomrule\n \\end{tabular}\n \n \\caption{Comparative Result based on PID}\n \\label{tb:general}\n\\end{table}\n\n\\textbf{Result.} In general, it is obvious that MoTiAC outperforms all of the baselines in terms of $GMV$ and $\\frac{GMV}{Cost}$ and also achieves the highest overall R-score. DCMAB is shown to be the worst one relatively,\nthough it gains a slightly higher $GMV$ (first goal) over PID, but the cumulative $\\frac{GMV}{Cost}$ in this method is much lower comparing to PID, which is unaccepted. 
The reason might be that ads are hard to cluster in RTB dynamics, so that the multi-agent design cannot exploit its advantage. Based on a weighted objective, RLB gives a mediocre performance. We deem that the intrinsic dynamic programming algorithm tends to be conservative: it seems to give up less profitable $GMV$ in order to maintain the overall $\\frac{GMV}{Cost}$. These two methods also show that a discrete action space is not an optimal setting for this problem.\nWhen solely applying the weighted sum in a standard A3C (Agg-A3C), the result is not surprising:\nthe RTB\nenvironment varies a lot, and a fixed reward-aggregation\nformula cannot capture that. Moreover, a direct combination of rewards is unable to achieve a better solution.\n\nIt is worth noticing that the two ablation models O1-A3C and O2-A3C present two\nextreme situations. O1-A3C performs well on the second objective but yields\npoor results for the first goal, and vice versa for O2-A3C. Our proposed MoTiAC uses the agent's prior as a reference\nfor future decisions, and the priority of the different objectives evolves over time, which exactly captures\nthe dynamics of the RTB sequence. Therefore, it outperforms all the baselines.\n\n\n\\subsection{AtoQ2: Variation of $\\boldmath\\lambda(t)$}\\label{sub:q2}\nTo give a comprehensive view of MoTiAC, we have tried different\nways to aggregate objectives in our model. We mainly consider the interesting variants of $\\lambda(t)$ and will compare our\nmethod with commonly used variations of the weighted sum.\n\n\\textbf{Variation of \\boldmath $\\lambda(t)$.} In this experiment, we assign five workers to each objective. Four $\\lambda(t)$ variants are considered:\n\\begin{itemize}\n \\item equal priority: $\\lambda(t) = \\frac 12$;\n \\item changing priority: $\\lambda(t) =\n exp(-\\alpha\\cdot t)$ with a scalar $\\alpha$;\n \\item random priority: $\\lambda(t) = random([0,1])$;\n \\item Bayesian\n priority: one can refer to Eq.~\\eqref{eq:phi}.\n\\end{itemize}\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{img\/sensitive.pdf}\n\t\\caption{Result Under Different Priority}\n\t\\label{fig:lambda}\n\\end{figure}\n\nAs shown in Fig.~\\ref{fig:lambda}, we present training curves for $\\frac{GMV}{Cost}$ and $GMV$. The first three strategies are fixed before training, so they cannot adapt to the whole playing process.\n It turns out that they behave very similarly on both objectives, and can gain\n a decent improvement over the PID case of around +2.5\% in $\\frac{GMV}{Cost}$\n and +3\% in $GMV$. To be more specific, in \\emph{equal priority}, the curve of $\\frac{GMV}{Cost}$ generally drops as the iterations go up, which\n might stem from the fact that fixed equal weights cannot fit the dynamic environment. For \\emph{changing priority},\n it is interesting that $\\frac{GMV}{Cost}$ first increases and then decreases as the priority shifts, because different priorities lead to\n different optima. In \\emph{random priority}, the curves change dramatically within a small interval, since the priority also fluctuates\n randomly. The \\emph{Bayesian priority} case, on the contrary, sets the priority based on the conformity of the agent's prior with the current state; a minimal sketch of these four schedules is given below. 
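\nTo make these schedules concrete, a minimal sketch follows. The function names, the decay rate, and the likelihood term fed into the Bayesian update are our own illustrative assumptions; only the formulas mirror the list above.\n\\begin{verbatim}
import math, random

# lambda_t is the weight of objective 1; objective 2 receives 1 - lambda_t.
def equal_priority(t):
    return 0.5

def changing_priority(t, alpha=0.01):
    return math.exp(-alpha * t)        # decays from 1 towards 0 over time

def random_priority(t):
    return random.uniform(0.0, 1.0)

def bayesian_priority(prior, likelihoods):
    # One Bayes step in the spirit of Eq. (eq:phi): likelihoods[k] stands
    # for p(trajectory | objective k) and must be supplied by the agent.
    unnorm = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(unnorm)
    return [u / z for u in unnorm]     # posterior over the objectives
\\end{verbatim}\n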
Reward partition with the agent's prior dominates the first three strategies,\n with an increasingly higher $\\frac{GMV}{Cost}$ achievement of +2.6\% and a better $GMV$ by around +4.3\%.\n\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=3.3in]{img\/caseStudy.pdf}\n\t\\vspace{-1mm}\n\t\\caption{$\\frac{GMV}{Cost}$ and $GMV$ curve of the $1_{st}$ ad's response}\n\t\\label{fig:caseStudy}\n\\end{figure}\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=3.3in]{img\/caseStudy2.pdf}\n\t\\vspace{-1mm}\n\t\\caption{$\\frac{GMV}{Cost}$ and $GMV$ curve of the $2_{nd}$ ad's response}\n\t\\label{fig:caseStudy2}\n\\end{figure}\n\n\\subsection{AtoQ3: Case Study}\\label{sub:q3}\nIn this section, we try to understand where and why MoTiAC\nworks well in the RTB problem. We choose two typical ads with\nlarge conversion volumes and show the bidding process within 24 \nhours. As PID is the current model in our real ad system, we use\nPID for comparison with our model and draw the \n$\\frac{GMV}{Cost}$ and $GMV$ curves in Fig.~\\ref{fig:caseStudy} \nand Fig.~\\ref{fig:caseStudy2}. We also collect the final \nresults in Table~\\ref{tb:twoAds}. Note that in real-world business,\ngain and loss are calculated from the final numbers. Therefore, in this problem,\nwe only care about the final results.\n\nFig.~\\ref{fig:caseStudy} shows the $1_{st}$ ad's response to\nthe PID and MoTiAC models. For the $GMV$ curve, both of them \nrise with respect to time, as expected, and the result of \nMoTiAC dominates that of PID. The $\\frac{GMV}{Cost}$ \ncurve\nfluctuates a lot in the beginning. The\nPID model is well suited to adjusting this metric through its \nnegative feedback mechanism: it drags the red dashdot\nline quickly towards 1.0 and afterwards maintains this trend.\nThe grey solid line shows the process of MoTiAC; we can observe\nthat MoTiAC tries to lift the very low value at first, then\nit starts to explore (maintaining a relatively low \n$\\frac{GMV}{Cost}$) at around 6h. However, at the end of the \nday, the two models reach a similar $\\frac{GMV}{Cost}$ of around 1.0 (a desirable result in the RTB setting). \n\nFig.~\\ref{fig:caseStudy2} shows a different ad with a\npretty low $\\frac{GMV}{Cost}$ initially. For this ad, both \nmodels first try to lift the $\\frac{GMV}{Cost}$. \nAs shown in the left panel, the red dashed curve for PID rises\nsharply from 0 to about 0.7 at around 8h. The likely explanation is\nthat PID has given up most of the bid chances and only\nconcentrates on those with a high conversion rate (CVR), so that\nwe witness a low $GMV$ gain for the PID model in the right panel\nfrom 8h to around 21h. Though the $\\frac{GMV}{Cost}$ of MoTiAC remains in\na relatively low position, our model is able to select good impression-level chances\nin that situation while still considering the other objective.\nAt 24h, neither model manages to bring this ad's \n$\\frac{GMV}{Cost}$ up to 1.0, but MoTiAC finally surpasses PID on this\nmetric because of its high volume of $GMV$. Therefore, with \nlong-term consideration, MoTiAC beats PID on both the \ncumulative $\\frac{GMV}{Cost}$ and $GMV$. 
\n\nWe conclude that PID is kind of greedy, it always concerns with the\ncurrent situation and never considers further benefits.\nWhen the current state is under control as shown in\nFig.~\\ref{fig:caseStudy} (after 4h), PID will appear to be conservative\nand give short-sighted strategy, which usually results in a seemingly\ngood $\\frac{GMV}{Cost}$ and a poor $GMV$ (like the red curve in Fig.~\\ref{fig:caseStudy2}).\nHowever, our model MoTiAC possesses an\noverall perspective, it foresees the long-run benefit and \nwill keep exploration even temporarily deviating from the right \ndirection (grey $\\frac{GMV}{Cost}$ curve for the $1_{st}$ ad \nafter 3h) or slowing down the rising pace (grey \n$\\frac{GMV}{Cost}$ curve for the $2_{nd}$ ad at 8h). Under a \nglobal overview, MoTiAC can finally reach\na similar $\\frac{GMV}{Cost}$ but better $GMV$ than PID.\n\n\n\\begin{table}[t]\n\t\\centering\n\t\\scriptsize\n\t\\begin{tabular}{c|c|ccc}\n\t\t\\toprule\n\t\t& \\textbf{Model} & \\textbf{GMV (CNY)} & \\textbf{Cost (CNY)} & \\textbf{GMV \/ Cost} \\\\\n\t\t\\midrule\n\t\t\\multirow{2}{*}{\\textbf{$1_{st}$ ad}} & \\textbf{PID} & $8.847\\times 10^4$ & $9.099\\times 10^4$ & 0.9723 \\\\\n\t\t& \\textbf{MoTiAC} & $1.181\\times 10^5$ & $1.230\\times 10^5$ & 0.9620 \\\\\n\t\t\\midrule\n\t\t\\multirow{2}{*}{\\textbf{$2_{nd}$ ad}} & \\textbf{PID} & $3.184\\times 10^3$ & $2.548\\times 10^3$ & 0.8003 \\\\\n\t\t& \\textbf{MoTiAC} & $4.298\\times 10^3$ & $5.199\\times 10^3$ & 0.8267 \\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\t\\caption{Statistics of Two Ads after PID and MoTiAC Model}\n\t\\label{tb:twoAds}\n\t\\vspace{-2mm}\n\\end{table}\n\n\n\\section{Experiments} \\label{sec:exp}\nIn the experiment, we use real-world industry data to answer the following\nquestions. Q1: how well can MoTiAC perform in general?\nQ2: what is the best way to combine multiple objectives?\nQ3: where and why does MoTiAC work?\n\n\\begin{table}[t]\\small\n\t\\centering\n\t\\begin{tabular}{c|cccc}\n\t\t\\toprule\n\t\t\\textbf{Date} & \\textbf{ \\# of Ads} & \\textbf{\\# of clicks} & \\textbf{\\# of conversions} \\\\\n\t\t\\midrule\n\t\t\\textbf{20190107} & 10,201 & 176,523,089 & 3,886,155 \\\\\n\t\t\\textbf{20190108} & {10,416 } & {165,676,734} & {3,661,060} \\\\\n\t\t\\textbf{20190109} & 10,251 & 178,150,666 & 3,656,714 \\\\\n\t\t\\textbf{20190110} & 9,445 & 157,084,102 & 3,287,254 \\\\\n\t\t\\textbf{20190111} & 10,035 & 181,868,321 & 3,768,247 \\\\ \n\t\t\\bottomrule\n\t\\end{tabular}\n\t\\caption{Statistics of Click Data.}\n\t\\label{tb:statistics}\n\\end{table}\n\n\\subsection{Experiment Setup}\\label{exp:setup}\n\\textbf{Dataset.} In the experiment, the dataset ($67~GB$) is collected from company T's Ads bidding system, ranging from Jan.~7th 2019 to Jan.~11th 2019. There are nearly 10,000 ads in\neach day with huge volume of click and conversion logs. According to the real-world\nbusiness, the bidding interval is set to be 10 minutes (144 bidding sessions for a day), which is much shorter than 1 hour\n\\cite{jin2018real}. Basic statistics can be found in Table~\\ref{tb:statistics}. \n\nIn the evaluation, huge memory load is required. \nWe implement all the methods with PyTorch on two 128 GB memory machines with 56 CPUs. We perform a five-fold cross-validation, i.e., using 4 days for training and\nanother one day for testing, and then report the averaged results. 
Similar settings can be found in\nliteratures \\cite{cai2017real,wang2017ladder,zhu2017optimized}.\n\n\n\n\n\\vspace{-1mm}\n\\subsection{Compared Baselines}\nWe carefully select related methods for comparison, and adopt the same settings\nfor all compared methods with 200 iterations. \\emph{Proportional-Integral-Derivative (PID)} \\cite{bennett1993development}\n is a widely used feedback control policy, which produces the control signal from a linear combination of the\n proportional, the integral and the derivative factor. PID is\n free from training. In company T's online ad system, PID is currently used to\n control bidding. We employ it as a standard baseline and will show relative experiment\n result with respect to it. Two state-of-the-art RTB methods are selected: \\emph{Reinforcement Learning to Bid (RLB)} \\cite{cai2017real}, \\emph{Distributed Coordinated Multi-Agent Bidding\n \t(DCMAB)} \\cite{jin2018real}. Since they are both based on DQN, we use the same\n discrete action space (interval is 0.01) like \\cite{wang2017ladder}.\n \\emph{Aggregated A3C (Agg-A3C)} \\cite{mnih2016asynchronous} is a standard A3C with linear\n combined reward, and we \n implement it to compare with MoTiAC and show the superiority of our \\emph{Reward Partition} schema. In the\n experiment, without loss of generality we linearly combine multiple rewards (following \\emph{Reward Combination}) for all baselines.\n\tWe also adopt two simple variants of our model by only considering one of the objectives: \\emph{Objective1-A3C (O1-A3C)} and \\emph{Objective2-A3C (O2-A3C)}. We denote our model as \\emph{Multi-objective Actor-Critics (MoTiAC)}.\n\n\\subsection{Evaluation Metrics}\nIn Sec.~\\ref{sec:pd}, we have claimed that agent's goal is to (i) first minimize cost and (ii) then maximize total revenue. In the experiments, we refer to the industrial convention and re-define our goal to be \\emph{maximizing both revenue and return on investment (ROI)}. We give detailed introduction of two terms below.\n\n\\textbf{Revenue.} Revenue is a frequently used indicator for advertiser's earnings, and it turns out to be\nproportional w.r.t conversions (for the $j_{th}$ ad, $Revenue^{(j)} = conversions^{(j)}\\times CPA_{target}^{(j)}$). In the\nexperimental comparison, we will use total $Revenue = \\sum_j Revenue^{(j)}$ of all ads to show the achievement\nof the first objective.\n\n{\\textbf{ROI}}.\nCost is the amount of money invested by advertisers, and it is defined by $Cost^{(j)} = clicks^{(j)}\\times CPC_{next}^{(j)}$ in this setting. The\nratio $Revenue\/Cost$ is so-called return on investment (ROI), showing the joint benefits of all advertisers. We use it to indicate our second objective.\n\nTo make it easy to compare, we also use \\emph{R-score} proposed in \\cite{LuYGWLC19} to evaluate the model performance. The higher the \n\\emph{R-score}, the more satisfactory the advertisers and\nplatform will be. 
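\nAs a small sketch under our own field naming (the per-ad record layout is an assumption; only the formulas follow the definitions above, and we do not reproduce the R-score of \\cite{LuYGWLC19} here), the two metrics and the PID-relative normalization can be computed as:\n\\begin{verbatim}
# Revenue, Cost and ROI per the definitions above; field names are illustrative.
def aggregate(ads):
    revenue = sum(ad["conversions"] * ad["target_cpa"] for ad in ads)
    cost = sum(ad["clicks"] * ad["next_cpc"] for ad in ads)
    return revenue, revenue / cost               # (Revenue, ROI)

def relative_to_pid(value, pid_value):
    # e.g. a ratio of 1.0421 is reported as +4.21% over PID
    return value / pid_value
\\end{verbatim}\n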
Note that most of the comparison results\nwill be based on PID, i.e., $value \\rightarrow \\frac{value}{value_{PID}}$, except for Sec.~\\ref{sub:q3}.\n\n\\begin{table}[t!]\n\t\\centering\n\t\\small\n\t\\begin{tabular}{c|ccc}\n\t\t\\toprule\n\t\t\\textbf{Model}&\n\t\t\\textbf{Relative Revenue} &\n\t\t\\textbf{Relative ROI} &\n\t\t\\textbf{R-score} \\\\\n\t\t\\midrule\n\t\t\\textbf{PID} & {1.0000} & 1.0000 & 1.0000\\\\\n\t\t\\textbf{DCMAB} & 1.0019 (+0.19\\%) & 0.9665 (-3.35\\%) & 0.9742\\\\\n\t\t\\textbf{RLB} & 0.9840 (-1.60\\%) & 1.0076 (+0.76\\%) & 0.9966\\\\\n\t\t\\textbf{Agg-A3C} & 1.0625 (+6.25\\%) & 0.9802 (-1.98\\%) & 0.9929\\\\\n\t\t\\midrule\n\t\t\\textbf{O1-A3C} & 0.9744 (-2.56\\%) & 1.0170 (+1.70\\%) & 1.0070\\\\\n\t\t\\textbf{O2-A3C} & 1.0645 (+6.45\\%) & 0.9774 (-2.26\\%) & 0.9893\\\\\n\t\t\\textbf{MoTiAC} & 1.0421 (+4.21\\%) & 1.0267 (+2.67\\%) & 1.0203\\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\n\t\\caption{Comparative Result based on PID}\n\t\\label{tb:general}\n\\end{table}\n\n\\subsection{AtoQ1: General Experiment}\nWe report a basic comparison of MoTiAC and the other approaches in\nTable~\\ref{tb:general}. Note that results are relative to PID, and\nvalues in parentheses show the improvement\/reduction percentage. \n\n\n\\textbf{Result.} In general, it is obvious that MoTiAC outperforms all of the baselines in terms of Revenue and ROI and also achieves the highest overall R-score. DCMAB turns out to be relatively the worst one:\nalthough it gains a slightly higher Revenue (first objective) than PID, its cumulative ROI is much lower compared to PID, which is unacceptable. The reason might be that ads are hard to cluster in RTB dynamics, so that the multi-agent design in DCMAB cannot exploit its advantage. Similarly, RLB gives a mediocre performance. We deem that the intrinsic dynamic programming algorithm tends to be conservative: it seems to give up less profitable Revenue in order to maintain the overall ROI. These two methods also show that a discrete action space is not an optimal setting for this problem.\nWhen solely applying the weighted sum in a standard A3C (Agg-A3C), the poor result is not surprising:\nthe RTB\nenvironment varies a lot, and a fixed reward-aggregation\nformula cannot capture that.\n\nIt is worth noticing that the two ablation models O1-A3C and O2-A3C present two\nextreme situations. O1-A3C performs well on the second objective but\npoorly on the first goal, and vice versa for O2-A3C. By shifting the priority of the different objectives over time, our proposed MoTiAC uses the agent's prior as a reference\nfor future decisions, which exactly captures\nthe dynamics of the RTB sequence. Therefore, it outperforms all the baselines.\n\n\n\n\\begin{figure}[t]\t\n\t\\centering\n\\includegraphics[width=3.4in]{img\/sensitive.pdf}\n\\caption{Result Under Different Priority}\n\\label{fig:lambda}\n\\end{figure}\n\n\\subsection{AtoQ2: Variation of $\\boldmath w_k$}\\label{sub:q2}\nTo give a comprehensive view of MoTiAC, we have tried different\nways to aggregate objectives in our model. We mainly consider and present the interesting variants of $w_k$.\n\n\\textbf{Variation of \\boldmath $w_k$.} In this experiment, four variants are considered. 
Since we have two objectives, we use $w_1(t)$ for the first objective and $1-w_1(t)$ for the second:\n\\begin{itemize}\n \\item equal priority: $w_1(t) = \\frac 12$;\n \\item changing priority: $w_1(t)=\n exp(-\\alpha\\cdot t)$ with a scalar $\\alpha$;\n \\item random priority: $w_1(t) = random([0,1])$;\n \\item Bayesian\n priority: one can refer to Eqn.~\\eqref{eq:phi}.\n\\end{itemize}\n\n\n\nAs shown in Fig.~\\ref{fig:lambda}, we present training curves for ROI and Revenue. The first three strategies are fixed before training and will not adjust to the changing environment.\n It turns out that they perform similarly on both objectives and gain\n a decent improvement over the PID case of around +2.5\% in ROI\n and +3\% in Revenue. To be more specific, in \\emph{equal priority}, the ROI curve generally drops as the iteration goes up, which stems from the fact that fixed equal weights cannot fit the dynamic environment. For \\emph{changing priority},\n it is interesting that ROI first increases and then decreases as the priority shifts, because different priorities lead to\n different optima. In \\emph{random priority}, the curves change dramatically within a small range, since the priority also fluctuates\n randomly. The \\emph{Bayesian priority} case, on the contrary, sets the priority based on the conformity of the agent's prior (learned from the history) with the current state. Reward partition with the agent's prior dominates the first three strategies,\n with an increasingly higher ROI achievement of +2.7\% and a better Revenue by around +4.2\%.\n\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=3.0in]{img\/caseStudy.pdf}\n\t\\vspace{-1mm}\n\t\\caption{ROI and Revenue curve of the $1_{st}$ ad's response}\n\t\\label{fig:caseStudy}\n\\end{figure}\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=3.0in]{img\/caseStudy2.pdf}\n\t\\vspace{-1mm}\n\t\\caption{ROI and Revenue curve of the $2_{nd}$ ad's response}\n\t\\label{fig:caseStudy2}\n\\end{figure}\n\n\\begin{table}[t]\n\t\\centering\n\t\\scriptsize\n\t\\begin{tabular}{c|c|ccc}\n\t\t\\toprule\n\t\t& \\textbf{Model} & \\textbf{Revenue (CNY)} & \\textbf{Cost (CNY)} & \\textbf{ROI} \\\\\n\t\t\\midrule\n\t\t\\multirow{2}{*}{\\textbf{$1_{st}$ ad}} & \\textbf{PID} & $8.847\\times 10^4$ & $9.099\\times 10^4$ & 0.9723 \\\\\n\t\t& \\textbf{MoTiAC} & $1.181\\times 10^5$ & $1.230\\times 10^5$ & 0.9620 \\\\\n\t\t\\midrule\n\t\t\\multirow{2}{*}{\\textbf{$2_{nd}$ ad}} & \\textbf{PID} & $3.184\\times 10^3$ & $2.548\\times 10^3$ & 0.8003 \\\\\n\t\t& \\textbf{MoTiAC} & $4.298\\times 10^3$ & $5.199\\times 10^3$ & 0.8267 \\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\t\\caption{Case Study Result of Two Ads}\n\t\\label{tb:twoAds}\n\t\\vspace{-2mm}\n\\end{table}\n\n\n\\subsection{AtoQ3: Case Study}\\label{sub:q3}\nIn this section, we try to understand where and why MoTiAC\nworks well in the RTB problem. We choose two typical ads with\nlarge conversion volumes and show the bidding process within 24 \nhours. As PID is the current model in our real ad system, we use\nPID for comparison with our model and draw the \nROI and Revenue curves in Fig.~\\ref{fig:caseStudy} \nand Fig.~\\ref{fig:caseStudy2}. We also collect the final \nresults in Table~\\ref{tb:twoAds}. Note that in real-world business,\nonly the final number matters. Therefore, in this problem,\nwe only care about the final results.\n\nFig.~\\ref{fig:caseStudy} shows the $1_{st}$ ad's response to\nthe PID and MoTiAC models. 
For the Revenue curve, both of them \nrise with respect to time, as expected, and the result of \nMoTiAC dominates that of PID. The ROI\ncurve\nfluctuates a lot in the beginning. The\nPID model is well suited to adjusting this metric through its \nnegative feedback mechanism: it drags the red dashdot\nline quickly towards 1.0, and afterwards maintains this trend.\nThe grey solid line shows the process of MoTiAC; we can observe\nthat MoTiAC tries to lift the very low value at first, then\nit starts to explore (maintaining a relatively low \nROI) at around 6h. However, at the end of the \nday, the two models reach a similar ROI of around 1.0 (a desirable result in RTB). \n\nFig.~\\ref{fig:caseStudy2} shows a different ad with a\npretty low ROI initially. For this ad, both \nmodels first try to lift the ROI. \nAs shown in the left panel, the red dashed curve for PID rises\nsharply from 0 to about 0.7 at around 8h. The likely explanation is\nthat PID has given up most of the bid chances and only\nconcentrates on those with a high conversion rate (CVR), so that\nwe witness a low Revenue gain for the PID model in the right panel\nfrom 8h to around 21h. Though its ROI curve remains in\na relatively low position, our MoTiAC is able to select good impression-level chances\nin that situation while still considering the other objective.\nAt 24h, neither model manages to bring the ROI up to 1.0, but MoTiAC finally surpasses PID on this\nmetric because of the high volume of previously gained Revenue. In sum, with \nlong-term consideration, MoTiAC beats PID on both the \ncumulative ROI and Revenue. \n\nWe conclude that PID is greedy because of its immediate feedback mechanism: it always focuses on the\ncurrent situation and never considers future benefits.\nWhen the current state is under control, as shown in\nFig.~\\ref{fig:caseStudy} (after 4h), PID appears conservative\nand follows a short-sighted strategy, which usually results in a seemingly\ngood ROI and a poor Revenue (like the red curve in Fig.~\\ref{fig:caseStudy2}).\nHowever, our model MoTiAC possesses an\noverall perspective: it foresees the long-run benefit and \nkeeps exploring even when temporarily deviating from the right \ndirection (the ROI curve for the $1_{st}$ ad \nafter 3h) or slowing down the rising pace (the \nROI curve for the $2_{nd}$ ad at 8h). With this \nglobal overview, MoTiAC can finally reach\na similar ROI but a better Revenue than PID.\n\n\n\n\\section{Introduction}\nThe rapid development of the Internet and smart devices has created a fertile\nenvironment for the advertising industry. As a billion-dollar business, real-time bidding (RTB) has gained continuous attention in the past few decades \\cite{yuan2013real}.\n\n\\begin{figure}\n\t\\vspace{3mm}\n\t\\centering\n\t\\includegraphics[width=2.7in]{img\/rtb-framework.pdf}\n\t\\vspace{-1mm}\n\t\\caption{A Typical RTB System. \n}\n\t\\vspace{-1mm}\n\t\\label{fig:rtb-framework}\n\\end{figure}\n\n\\vspace{0.5mm}\n\\textbf{Bidding System.} \\emph{Online users}, \\emph{advertisers} and \\emph{ad\nplatforms} constitute the main players in real-time bidding. A typical RTB setup (Fig.~\\ref{fig:rtb-framework}) consists\nof publishers, supply-side platforms (SSP), data\nmanagement platforms (DMP), ad exchanges (ADX), and demand-side platforms (DSP). In one bidding round, when an online browsing activity triggers an ad request, the SSP sends this request to the DSP through the ADX, where eligible ads compete for the \\emph{impression}. 
The bidding agent, DSP, represents advertisers to come up with an optimal bid and transmits the bid back to the ADX (e.g. usually within\nless than 200ms \\cite{yuan2013real}), where the winner is selected to be displayed and\ncharged by generalized second price (GSP) \\cite{varian2007position}.\n\nOur work focus on DSP, where \\emph{bidding optimization} happens. To conduct real-time bidding, two\nfundamental challenges need to be addressed. Firstly, RTB environment is higly dynamic. In \\cite{zhang2014optimal,wang2017ladder,zhu2017optimized}, researchers make a strong assumption that the bidding process\nis stationary over time. However, the sequence of user queries (e.g.,\nthose incurring impressions, clicks, or conversions)\nis time-dependent and mostly unpredictable \\cite{zhao2018deep}, where the outcome influences the next auction round. \nTraditional algorithms usually learn an independent predictor\nand conduct fixed optimization that amounts to a greedy strategy, which often does\nnot lead to the optimal return \\cite{cai2017real}.\nAgents with reinforcement learning (RL) address the aforementioned challenge to some extent \\cite{zhao2018deep,jin2018real}. \nBy learning from both the immediate feedback and the long-term reward,\n RL based methods are able to alleviate the instability.\n\n\n\nHowever, these methods are all limited to either revenue or ROI, which is only one part of the overall utility of the industry. In the problem of RTB, we posit that the utility is two-fold, as outlined: (i) the cumulative cost should be kept\nwithin the budget; (ii) the overall revenue should be maximized. In sum, the second challenge is that real-world RTB industry needs to consider \\emph{multiple objectives}, which is not adequately addressed in the existing literature.\n\nTo address the aforementioned challenges, we propose a \\emph{Multi-Objective Actor-Critic} model, named MoTiAC. We generalize the popular asynchronous advantage actor-critic (A3C) \\cite{mnih2016asynchronous} reinforcement learning algorithm for multiple objectives in RTB setting. Our model employs several local\nactor-critic networks with different objectives to interact with the same\nenvironment and then updates the global network asynchronously according to\ndifferent reward signals. Instead of using a fixed linear combination of different objectives, MoTiAC is able to decide on adaptive weights over time according to how well the current situation conforms with agent's prior. To our best knowledge, this is the first multi-objective reinforcement learning model for RTB problems. We comprehensively evaluate our model on a large-scale industrial dataset, the experimental results verify the superiority of the approach.\n\n\\textbf{Contributions}. The contributions of our work can be summarized as follows: \n\\begin{itemize}\n\\item We identify two critical challenges in RTB and provide motivation to use multi-objective RL as the solution.\n\\item We generalize A3C and\npropose a novel multi-objective actor-critic model MoTiAC for optimal bidding, which to our knowledge is the first in the literature.\n\\item We mathematically prove that our model will converge to Pareto optimality and empirically evaluate MoTiAC using a proprietary real-world commercial dataset.\n\\end{itemize}\n\\section{Methodology}\nIn this section, we formulate the multi-objective actor-critic model (MoTiAC), to address\nthe aforementioned problem in dynamic RTB environment. 
The organization of this section is:\n(i) in Sec.~\\ref{sec:ac}, we will give a brief introduction of A3C model in RTB setting; (ii) we\npropose to use \\emph{Reward Partition} for multiple rewards and justify its\nsuperiority in Sec.~\\ref{sub:reward_partition}; (iii) in Sec.~\\ref{sec:analysis}, we present interesting analysis of model updating\nand prove that it will converge to Pareto optimality.\n\n\n\\subsection{A3C Model in RTB}\\label{sec:ac}\nActor-critic model was firstly proposed in \\cite{konda2000actor}, then\n\\cite{lillicrap2015continuous} generalized the discrete action space to\ncontinuous one, and \\cite{mnih2016asynchronous} advanced it with\nmultiple actor-critics updating the same global network asynchronously (A3C).\nA typical actor-critic reinforcement learning setting consists of:\n\\begin{itemize}\n \\item \\textbf{state:} each state is a vector representation of a\n bidding round $s$. We use features extracted from user\n profile, bidding environment and so on.\n \\item \\textbf{action:} action is the generated cpcbid\n for each ad based on the input state. Instead of using discrete\n action space \\cite{wang2017ladder}, the output of our model is the action\n distribution, from which we sample the cpcbid.\n \\item \\textbf{reward:} reward is a feedback\n signal from the environment to evaluate how good the current action is.\n In our model, each objective has its own reward.\n \\item \\textbf{policy:} policy defines the way of agent taking action $a$\n at state $s$, which can be denoted as a probability distribution $p(a|s)$. Policy is\n the key in RL based models.\n\\end{itemize}\n\n\nIn an actor-critic architectur\n, the \\emph{critic} uses learning methods with a neural network and the \\emph{actor} is updated in an\napproximated gradient direction based on the information provided by the critic\n\\cite{konda2000actor}. There exists a self-improving loop in this process:\nfollowing the current policy $\\pi_{\\theta}$, RL agent plays an action $a_1$ in\nstate $s_1$, and receives reward signal $r_1$ from environment. Then this\naction will lead to a new state $s_2$, and agent repeats this round again. One\ntrajectory $\\tau$ can be written as $\\{s_1, a_1, r_1, s_2, a_2, r_2, ...\\}$.\nFor each policy $\\pi_{\\theta}$, we define the utility function as,\n\\begin{equation}\nU(\\pi_{\\theta}) = E_{\\tau\\sim{p_{{\\theta}}(\\tau)}} [R(\\tau)],\n\\end{equation}\nwhere $p_{\\theta}(\\tau)$ denotes the distribution of trajectories under policy\n$\\pi_{\\theta}$, and $R(\\tau)$ is a return function over trajectory $\\tau$, typically calculated by summing all the reward signals in the trajectory.\nAfter sampling $N$ trajectories from policy $\\pi_{\\theta}$, parameters $\\theta$\nwill be updated after one or several rounds based on tuple $(s, a, r)$ in each\ntrajectory $\\tau$ in order to maximize the utility $U(\\pi_{\\theta})$. Stochastic gradient descent (SGD) is used in the updating of\nactor parameters ($\\eta$ is the learning rate),\n\\begin{align}\\label{eq:sgd}\n\\theta &\\leftarrow \\theta + \\eta\\nabla_{\\theta}U(\\pi_{\\theta}), \\\\\n\\theta &\\leftarrow \\theta + \\eta\\sum_{n=1}^{N}\\sum_{i=1}^{T_n}R_i^{n} \\nabla\\log_{\\pi}(a_i^{n}\\mid s_i^n),\n\\end{align}\nwhere $R_i^{n}=\\sum_{j=1}^{T_n}\\gamma^{T_n-j}r_j^{n}$ is the cumulative discounted reward and $\\gamma$ denotes the decaying\n factor. Then {critic} calibrates the gradient by adding a baseline reward using value network $V(s)$ \\cite{jin2018real}. 
Consider the simple and robust Monte Carlo method (for the advantage part), formally,\n \\begin{align}\\label{eq:sgdv}\n \\theta &\\leftarrow \\theta + \\eta\\sum_{n=1}^{N}\\sum_{i=1}^{T_n}(R_i^{n}-V(s_i^n))\\nabla\\log_{\\pi}(a_i^{n}\\mid s_i^n).\n \\end{align}\n\nAsynchronous advantage actor-critic (A3C) \\cite{mnih2016asynchronous} is a distributed variant of actor-critic model. In A3C, there is one global and several local networks. The local networks copy global parameters periodically, and they run in parrellel by updating gradients\nto global net asynchronously.\n\n\n\n\n\\subsection{MoTiAC with Reward Partition}\\label{sub:reward_partition}\n\nAs is stated in \\ref{sec:pd}, RTB business shall require multiple objectives. A natural way is to linearly integrate them into a single one, and we call it \\emph{Reward Combination}. However, it is usually ineffective in most real-world cases \\cite{roijers2013survey}. Thus, we are motivated to propose \\emph{Reward Partition}. In this subsection, we consider the general $K$-objective case.\n\n\\textbf{Reward Combination.} One intuitive way\n\\cite{pasunuru2018multi} of handling multiple objectives is to (i) firstly compute a linear\ncombination of the rewards, in which case each element of $w_k$ quantifies the\nrelative importance of the corresponding objectives:\n$R(s)=\\sum_{k=1}^K w_k\\times R_k(s)$;\n(ii) and then define a single-objective agent with the expected return equals to value function $V(s)$ \\cite{roijers2013survey}.\n\nHowever, these implementations ignore the premise that one or both of the\nsteps is infeasible or undesirable in real practice \\cite{roijers2013survey}: (i)\nA weighted combination is only valid when objectives do not compete\n\\cite{sener2018multi}. However, in RTB setting, relation between objectives can be complicated, and they usually\nconflict in terms of different sides. (ii) The\nintuitive combination might flatten the gradient with respect to each\nobjective, and thus the agent is likely to limit itself within a narrow\nboundary of search space. (iii) A pre-defined combination may not be\nflexible in some cases, especially in the changing environment. Overall, \\emph{Reward Combination}\n is unstable and inappropriate in RTB problem \\cite{van2017hybrid}.\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{img\/multi-ac.pdf}\n\t\\caption{Our Model Framework: MoTiAC.}\n\t\\label{fig:MoTiAC}\n\t\\vspace{-1mm}\n\\end{figure}\n\n\\textbf{Reward Partition.} We therefore propose \\emph{Reward Partition} scheme. In our architecture,\nwe design reward for each objective and employ one group of local networks on a feature subset with that reward. Note that all the local network share the same structure. There is one global network with an actor and multiple critics in our model. At the start of one iteration,\neach local network copies parameters from global network and begins\nexploration. Local networks from each group then will explore based on\ntheir own objective and push weighted gradients to the actor and one of the\ncritics (partial update) in the global network asynchronously (in Fig.~\\ref{fig:MoTiAC}). \n\nFormally, we denote the total utility and value function of the $k_{th}$ group ($k=1,\\cdots,K$) as $U^k(\\pi_{\\theta})$ and $V_k(s)$, respectively. 
The parameter updating can be\nformulated as,\n\\begin{align}\\label{eq:sgd2}\n\\theta & \\leftarrow \\theta + \\eta w_k\\nabla_{\\theta}U^k(\\pi_{\\theta}), \\\\\n\\theta & \\leftarrow \\theta + \\eta w_k\\sum (R_k(s)-V_k(s))\\nabla\\log_{\\pi}(a\\mid s).\n\\end{align}\nMotivated by Bayesian RL \\cite{ghavamzadeh2015bayesian}, we parameterize $w_k$ by introduing a latent multinomial variable $\\phi$ with $w_k = p(\\phi=k|\\tau)$ under that trajectory $\\tau$. We call it \\emph{agent's prior}. In the beginning, \nwe set the initial prior as,\n\\begin{equation}\np(\\phi=k|\\tau_0)=\\frac1K,~~~\\forall~k = 1, 2, \\dots, K.\n\\end{equation}\nwhere trajectory $\\tau_0$ just begins. When $\\tau_t$ is up to state $s_t$, we update the posterior using Bayes' rule,\n\\begin{align}\\label{eq:phi}\np(\\phi=k|\\tau_t) = \\frac{p(\\tau_t|\\phi=k)p(\\phi=k)}{\\sum_k p(\\tau_t|\\phi=k)p(\\phi=k)},\n\\end{align}\nwhere $p(\\tau_t|\\phi=k)$ tells how well the current trajectory agrees with the utility of objective $k$.\n\nOur scheme shows several advantages. First, different rewards are not\nexplicitly combined in the model, and thus the conflicts should be addressed\nto some extent. Second, each local network aims at only one objective, so\nthe model could explore in a relatively larger space. Third, we do not use a\ncomplex reward combination, which is usually hard to learn in most of the real\ncases \\cite{van2017hybrid}. Instead, we use multiple value functions to approximate multiple single rewards in subsets of features, making critics easy to learn.\n\n\\subsection{Analysis of MoTiAC} \\label{sec:analysis}\nIn this section, we will view our model from mathematical perspective and provide sound properties of MoTiAC.\nFirstly, we show that if we attach the weights of \\emph{Reward Combination} to the gradients in \\emph{Reward\nPartition}, the result of parameters updating should be identical on average. Secondly, we prove that with the specially designed agent's prior (in Eqn.~\\eqref{eq:phi}), our model will converge to Pareto optimality.\n\n\\textbf{Gradient Analysis.} For \\emph{Reward Combination}, we have mentioned that \nrewards of different objectives are linearly aggregated by weight $\\{w_k\\}$. Like Eqn.~\\eqref{eq:sgdv}, by applying standard SGD (Eqn.~\\eqref{eq:sgd}), the parameter $\\theta$ is updated,\n\\begin{align}\n\\theta & \\leftarrow \\theta + \\eta\\sum_i \\left[\\left[\\sum_kw_kR_k(s_i)-V(s_i)\\right]\\nabla\\log_{\\pi}(a_i\\mid s_i)\\right],\\notag\n\\end{align}\nwhile in \\emph{Reward Partition}, each group of local networks update gradients w.r.t their own objectives. If the updating manner follows\nthe same weights, we can easily aggregate Eqn.~\\eqref{eq:sgd2}. Then, the expectation of gradient is given by,\n\\begin{align}\n\\theta & \\leftarrow \\theta + \\eta \\sum_kw_k\\left[\\sum_i \\left(R_k(s_i)-V_k(s_i)\\right)\\nabla\\log_{\\pi}(a_i\\mid s_i)\\right],\\notag \\\\\n\\theta & \\leftarrow \\theta + \\eta\\sum_i \\left[\\left[\\sum_kw_k(R_k(s_i)-V_k(s_i))\\right]\\nabla\\log_{\\pi}(a_i\\mid s_i)\\right].\\notag\n\\end{align}\n\nBy comparing the updating formulas (of \\emph{Reward Combination's} and \\emph{Reward Partition's}), we find the differenece lies on the \\emph{advantage} part and that \nthe effect of update depends exactly on how well the critic(s) can learn from its reward(s). 
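\nTo make the contrast concrete, the following framework-agnostic sketch (our own simplification, using plain Python lists in place of tensors) spells out the two advantage terms being compared:\n\\begin{verbatim}
# Advantage terms entering the policy gradient at a visited state s_i.

# Reward Combination: a single critic V approximates the combined reward.
def advantage_combination(w, R, V_combined):
    # w[k]: weight of objective k, R[k]: return of objective k at s_i
    return sum(w[k] * R[k] for k in range(len(w))) - V_combined

# Reward Partition: critic V[k] approximates the single reward of objective
# k, and group k pushes a w[k]-weighted gradient built from (R[k] - V[k]).
def advantage_partition(w, R, V):
    return sum(w[k] * (R[k] - V[k]) for k in range(len(w)))
\\end{verbatim}\n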
By learning at this decomposed level, \\emph{Reward Partition} improves on \\emph{Reward Combination} by using easy-to-learn value functions to approximate the single rewards, and thus yields a better policy.\n\n\\textbf{Convergence Analysis.} Next, we prove that the global policy converges to Pareto optimality across these objectives. The expected utility of objective $k$ is denoted as $E[U^k(\\pi_{\\theta})]$. We begin the analysis with Theorem~\\ref{thm:pareto}.\n\n\\begin{theorem}\\label{thm:pareto}\n\t(Pareto Optimality). If $\\pi^*$ is a Pareto optimal policy, then for any other policy $\\pi$, one can find at least\n\tone $k$ such that $E[U^k(\\pi)] \\le E[U^k(\\pi^*)]$. Moreover, there exists a set of weights $\\{l_k>0: \\sum_kl_k=1\\}$ such that,\n \t\\begin{equation}\n \t\\pi^* \\in \\underset{\\pi}{\\operatorname{arg\\,max}}~\\left[\\sum_kl_kE[U^k(\\pi)]\\right]. \\label{eq:pareto2}\n \t\\end{equation}\n \\end{theorem}\n\\begin{proof}\n\tWe derive the gradient by aggregating Eqn.~\\eqref{eq:sgd2} as,\n\t\\begin{align}\n\t\\nabla &= \\sum_{\\tau_t}\\sum_{k}p(\\phi=k|\\tau_t)\\nabla_{\\theta}U^k(\\tau_t; \\pi_{\\theta}) \\label{eq:pareto3}\\\\\n\t& \\propto \\sum_{k}p(\\phi=k)\\sum_{\\tau_t}p(\\tau_t|\\phi=k)\\nabla_{\\theta}U^k(\\tau_t; \\pi_{\\theta}) \\label{eq:pareto4}\\\\\n\t& =\\sum_{k}p(\\phi=k)\\nabla_{\\theta}E_{\\tau_t}[U^k(\\tau_t; \\pi_{\\theta})] \\\\\n\t& =\\nabla_{\\theta}\\left[ \\sum_{k}p(\\phi=k)E_{\\tau_t}[U^k(\\tau_t; \\pi_{\\theta})]\\right].\n\t\\end{align}\n\t\n\tFrom Eqn.~\\eqref{eq:pareto3} to Eqn.~\\eqref{eq:pareto4}, we use the relation from Eqn.~\\eqref{eq:phi}. By setting $l_k=p(\\phi=k)$ (note that $\\sum_kp(\\phi=k)=1$), we find that the overall gradient conforms to the characterization of Pareto optimality in Eqn.~\\eqref{eq:pareto2}. Therefore, we conclude that our algorithm converges to a Pareto optimal policy.\n\\end{proof}\n\n\n\\section{Methodology}\nIn this section, we present our multi-objective actor-critic framework in\ndetail. We try to answer the following questions: What is the advancement of\nMoTiAC compared to the traditional AC? Why can MoTiAC work well? How can we\ndeploy it in a distributed ad system? The organization of this section is:\n(1) we present the basic actor-critic framework in Sec.~\\ref{sec:ac}; (2) we\npropose \\emph{Reward Partition} to handle multiple rewards and justify its\nsuperiority in Sec.~\\ref{sub:reward_partition}; (3) we fully specify MoTiAC\nand present the algorithm in Sec.~\\ref{sec:acalgo}; (4) multiple rewards are designed in Sec.~\\ref{sec:aac};\n(5) a large-scale implementation scheme for distributed systems is given in\nSec.~\\ref{sec:dac}.\n\n\n\\subsection{Actor-Critic Framework}\\label{sec:ac}\nThe actor-critic algorithm was first proposed in \\cite{konda2000actor}; \\cite{lillicrap2015continuous,silver2014deterministic} then generalized the discrete action space to a\ncontinuous one (DPG), and \\cite{mnih2016asynchronous} advanced it with\nmultiple actor-critics updating the same global network asynchronously (A3C).\nA typical actor-critic reinforcement learning setting consists of:\n\\begin{itemize}\n \\item \\textbf{state:} each state is a feature-vector representation of a\n bidding situation $s$. We use features extracted from the user\n profile, the bidding environment and so on.\n \\vspace{0.5mm}\n \\item \\textbf{action:} in our model, the action is the bid\n generated for each ad based on the input state. 
Instead of using discrete\n action space \\cite{wang2017ladder}, our model will output an action\n distribution, and then we sample actions according to its\n probability.\n \\vspace{0.5mm}\n \\item \\textbf{reward:} obviously, reward is a feedback\n signal from the environment to evaluate how good previous action is,\n which guides RL agent towards a better policy. In our model, we design\n multiply rewards based on multiple goals. Each actor-critic couple will\n observe one type of reward from environment, and together they\n achieve multiple objectives.\n \\vspace{0.5mm}\n \\item \\textbf{policy:} in an actor-critic thread, policy is present as\n $p(a_t\\mid s_t)$, which denotes the probability to take action $a_t$\n under state $s_t$. Different from model-based MDP stated in\n Sec.~\\ref{sec:rw}, where transition probability is given by\n $p(s_{t+1}\\mid s_t, a_t)$, actor-critic framework is totally model-free\n and thus is able to handle infinite states in real world\n \\cite{mnih2016asynchronous,hessel2018rainbow}.\n\\end{itemize}\n\n\nIn an actor-critic architectur\n, the critic uses\nTD learning with a linear approximation and the actor is updated in an\napproximated gradient direction based on the information provided by the critic\n\\cite{konda2000actor}. There exists a self-improving loop in this process:\nfollowing the current policy $\\pi_{\\theta}$, RL agent plays an action $a_1$ in\nstate $s_1$, and receives reward signal $r_1$ from environment. Then this\naction will lead to a new state $s_2$, where agent repeats this round again. One\ntrajectory $\\tau$ can be written as $\\{s_1, a_1, r_1, s_2, a_2, r_2, ...\\}$.\nFor each policy $\\pi_{\\theta}$, we define the utility function as,\n\\begin{equation}\nU(\\pi_{\\theta}) = E_{\\tau\\sim{p_{\\pi_{\\theta}}(\\tau)}} [R(\\tau)],\n\\end{equation}\nwhere $p_{\\theta}(\\tau)$ denotes the distribution of trajectories under policy\n$\\pi_{\\theta}$, and $R(\\tau)$ is a reward function over trajectory $\\tau$.\nAfter sampling $N$ trajectories from policy $\\pi_{\\theta}$, parameters $\\theta$\nwill be updated after one or several rounds based on tuple $(s, a, r)$ in each\ntrajectory $\\tau$ in order to maximize the utility $U(\\pi_{\\theta})$. Stochastic gradient descent (SGD) is used in the updating, \nformally,\n\\begin{align}\n\\theta &\\leftarrow \\theta + \\eta_1\\nabla\\sum_{n=1}^{N}R_i^{(n)}p_{\\pi}(\\tau^{(n)}),\\\\\n\\theta &\\leftarrow \\theta + \\eta_1\\sum_{n=1}^{N}\\sum_{i=1}^{T_n}R_i^{(n)}\\times \\nabla\\log_{\\pi}(a_i^{(n)}\\mid s_i^{(n)}),\n\\end{align}\nwhere, $R_i^{(n)}=\\sum_{j=i}^{T_n}\\gamma^{T_n-j}r_j^{(n)}$ is the cumulative\ndiscounted reward for state $s_i$, $\\gamma$ denotes decaying factor. Then\ncritic calibrates the gradient by adding a baseline reward using value function\n$V(s; \\theta_v)$ \\cite{sutton2018reinforcement, degris2012model} (MC-based\npolicy iteration is used) as,\n\\vspace{-1mm}\n\\begin{align}\n&\\theta \\leftarrow \\theta + \\eta_1\\sum_{n=1}^{N}\\sum_{i=1}^{T_n}(R_i^{(n)}-V(s_i^{(n)}; \\theta_v))\\times \\nabla\\log_{\\pi}(a_i^{(n)}\\mid s_i^{(n)}),\n\\end{align}\n\n\n\n\n\\vspace{-2mm}\n\\subsection{Reward Combination \\& Partition}\\label{sub:reward_partition}\n\nAs is stated in \\ref{sec:pd}, RTB business shall require multiple objectives. A natural way is to integrate multiple rewards into a single one and we call it \\emph{Reward Combination}. However, it turns out to be ineffective in real world \\cite{roijers2013survey}. 
\n\\vspace{-2mm}\n\\subsection{Reward Combination \\& Partition}\\label{sub:reward_partition}\n\nAs stated in Sec.~\\ref{sec:pd}, the RTB business involves multiple objectives. A natural approach is to integrate the multiple rewards into a single one, which we call \\emph{Reward Combination}. However, this turns out to be ineffective in the real world \\cite{roijers2013survey}. Thus, we are motivated to propose \\emph{Reward Partition}. In this subsection, we compare the two schemes and justify the superiority of \\emph{Reward Partition}.\n\n\\vspace{0.5mm}\n\\textbf{Reward Combination.} One intuitive way\n\\cite{friedman2018generalizing, pasunuru2018multi} is to (1) compute a linear\ncombination of the rewards, in which each weight $w_k$ quantifies the\nrelative importance of the corresponding objective:\n$R(s_i)=\\sum_{k=1}^{K}w_k\\times R_k(s_i)$;\nand (2) define a single-objective MDP whose expected return is approximated by a single value\nfunction $V(s_i;\\theta_v)$.\n\nHowever, one or both of these\nsteps can be impossible, infeasible, or undesirable \\cite{roijers2013survey}. (1)\nA weighted combination is only valid when the objectives do not compete\n\\cite{sener2018multi}; in the RTB setting, however, the objectives of the different sides usually\nconflict, making this strategy blind in practice. (2) The\nnaive combination flattens the gradient with respect to each\nobjective, and thus the agent is likely to confine itself to a narrow\nregion of the search space. (3) Explicitly combining objectives may not be\nflexible, especially in a changing environment. Even if we\nintegrate everything into a single scalar reward, this composite signal\nmay not be easily approximated by a single value function, so learning can be\nslow and unstable \\cite{van2017hybrid}.\n\n\\vspace{0.5mm}\n\\textbf{Reward Partition.} We propose the \\emph{Reward Partition} scheme\nand design one reward function per objective.\nEach reward is assigned to one worker group, in which actor-critic pairs\nexplore with respect to the same objective using a low-dimensional state\nrepresentation. In our scheme, we have one global network with an actor\nnetwork and multiple critic networks. At the beginning of an iteration,\nthe actor-critics copy parameters from the global network and start\nexploring. The actor-critics in each worker group then explore according to\ntheir own mission and asynchronously push weighted gradients to the actor and to one of the\ncritics in the global network.\n\nOur scheme has several advantages. First, different rewards are never\nexplicitly combined in our model, so the conflicts between them are mitigated\nto some extent. Second, each actor-critic targets only one objective, so\nour model explores a relatively larger space. Third, we do not use a\ncomplex combined reward, which is usually hard to learn in most real\ncases \\cite{van2017hybrid}. Instead, we use multiple value functions to\napproximate the individual rewards on small subsets of features. This\nstrategy maps a low-dimensional feature representation to a single reward,\nmaking our critics easy to learn.\n
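\nFor concreteness, the sketch below shows how a worker in group $k$ would compute its per-objective advantage and push gradients only to critic $k$ and to the shared actor. It reuses the toy linear-softmax actor and linear critics from the previous sketch; the class and function names, shapes, and learning rates are illustrative assumptions rather than our deployed implementation, and the asynchronous push is shown as a direct in-place update.\n\\begin{verbatim}\nimport numpy as np\n\nclass GlobalNet:\n    # one shared actor, one critic per objective (Reward Partition)\n    def __init__(self, state_dim, n_actions, n_objectives):\n        self.theta = np.zeros((n_actions, state_dim))\n        self.theta_v = [np.zeros(state_dim) for _ in range(n_objectives)]\n\ndef worker_update(net, k, trajectory, eta=1e-3, gamma=0.99):\n    # trajectory: list of (state, action, reward_vector), where action is an\n    # integer index and reward_vector[k] is the objective-k reward of the step\n    theta = net.theta.copy()          # copy parameters from the global net\n    theta_v = net.theta_v[k].copy()\n    d_theta = np.zeros_like(theta)\n    d_theta_v = np.zeros_like(theta_v)\n    T = len(trajectory)\n    for i, (s, a, _) in enumerate(trajectory):\n        s = np.asarray(s, dtype=float)\n        R_k = sum(gamma ** (T - 1 - j) * trajectory[j][2][k]\n                  for j in range(i, T))\n        advantage = R_k - theta_v @ s         # R_k(s_i) - V_k(s_i; theta_v)\n        logits = theta @ s\n        probs = np.exp(logits - np.max(logits))\n        probs \/= probs.sum()\n        grad_log_pi = np.outer(-probs, s)\n        grad_log_pi[a] += s\n        d_theta += advantage * grad_log_pi    # gradient for the shared actor\n        d_theta_v += advantage * s            # gradient for critic k only\n    net.theta += eta * d_theta                # push to the shared actor\n    net.theta_v[k] += eta * d_theta_v         # push to critic k\n\\end{verbatim}\nNote that no other critic is touched by this worker, which is exactly what keeps each value function focused on a single, easy-to-learn reward.\n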
\n\\textbf{Analysis.} In addition, we present a mathematical analysis of \\emph{Reward Partition} in\nterms of the parameter update and the value-function approximation. If we attach the weights of \\emph{Reward Combination} to the gradients in \\emph{Reward\nPartition}, the parameter-update strategies of the two schemes are identical on average.\n\nFor \\emph{Reward Combination}, there is a combined value function $V(s_i; \\theta_v)$, and the\nrewards of the different objectives are linearly aggregated with weights $w_k$. The actor parameter $\\theta$ is updated by,\n\\begin{equation*}\n\\theta \\leftarrow \\theta + \\eta_1\\sum_{i}\\left( \\left(\\sum_{k=1}^{K}w_k\\times\nR_k(s_i)-V(s_i; \\theta_v)\\right)\\times \\nabla\\log\\pi_{\\theta}(a_i\\mid s_i)\\right),\n\\end{equation*}\nwhile in \\emph{Reward Partition}, there are multiple value functions $V_k(s_i;\n\\theta_v)$, each with its respective reward $R_k(s_i)$. If the update uses\nthe same weights, the expected gradient is,\n\\begin{align*}\n\\theta &\\leftarrow \\theta +\n\\eta_1\\sum_{k=1}^{K}w_k\\times\\left(\\sum_{i}(R_k(s_i)-V_k(s_i;\n\\theta_v))\\times \\nabla\\log\\pi_{\\theta}(a_i\\mid s_i)\\right),\\notag\\\\\n\\theta &\\leftarrow \\theta +\n\\eta_1\\sum_{i}\\left(\\left(\\sum_{k=1}^{K}w_k\\times(R_k(s_i)-V_k(s_i;\n\\theta_v))\\right)\\times \\nabla\\log\\pi_{\\theta}(a_i\\mid s_i)\\right).\n\\end{align*}\n\nThe difference clearly lies in the advantage term, so the effect of the update depends on how precisely each critic can\nlearn from its reward. By learning at a decomposed level, multiple\neasy-to-learn critics should outperform the naive linear combination, which\nmeans that with this update rule, \\emph{Reward Partition} yields better critics and thus a better actor.\n\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=3.3in]{img\/multi-ac.pdf}\n \\caption{MoTiAC Framework. In this framework, we show two worker\n groups. Each group contains multiple workers with the same\n mission, and they upload and download local gradients asynchronously\n according to their own objectives. Different workers explore and\n exploit together at the macro- and micro-level.}\n \\label{fig:MoTiAC}\n \\vspace{-3mm}\n\\end{figure}\n\n\n\n\\subsection{MoTiAC Algorithm}\\label{sec:acalgo}\n\nFig.~\\ref{fig:MoTiAC} shows our model architecture. As stated in\nSec.~\\ref{sec:pd}, there are two objectives in this problem. We extend\nthe traditional A3C framework and replace its uniform workers with multiple\nobjective-specialized actor-learners. On the one hand, for \\emph{objective 1}\n(the $CPA$ goal), several workers interact with the environment to achieve a\nlower real $CPA$, while on the other hand, another batch of workers feeds\ngradients to the global network based on \\emph{objective 2} (the conversion goal). Different\nobjective-specialized workers running in parallel are likely to explore\ndifferent parts of the environment. For each objective, one can also\nexplicitly use different policies in each actor to\nmaximize this diversity \\cite{mnih2016asynchronous}. With macro- and\nmicro-level exploration, MoTiAC is able to reach a better policy.\n\n\\vspace{0.5mm}\n\\textbf{Macro-level Exploration.} As stated above, optimizing two\nobjectives is a non-trivial setting in this problem.\nTraditional reward-combination tricks usually narrow the solution to a\nlimited space, because the integrated reward cannot be correlated with the\nsignal from each individual objective. MoTiAC addresses this problem well, since\nwe directly\ndesign a reward function for each objective and\nthen let different groups of workers\nlearn from different rewards. Through\n\\emph{inter-group} coordination, they explore at the macro level.\n\n\\cite{zhuang2018dual}\nproposed a weighted function to balance global convolutions and local\nconvolutions in graph-based semi-supervised classification. In the loss\nback-propagation, we also introduce a similar weighting function $\\lambda(t)$\nto shift priority across objectives; specifically,\n\\begin{align}\n&\\theta \\leftarrow \\theta + \\eta_1\\times\\lambda^{(j)}(t)\\times\\nabla E_{\\tau\\sim{p_{\\pi_{\\theta}}}} [\\mathrm{Reward}_1(\\tau)], ~~~ \\forall ad_j \\in A, \\notag\\\\\n&\\theta \\leftarrow \\theta + \\eta_1\\times[1-\\lambda^{(j)}(t)]\\times\\nabla E_{\\tau\\sim{p_{\\pi_{\\theta}}}} [\\mathrm{Reward}_2(\\tau)], \\label{eq:1}\n\\end{align}\nwhere $\\lambda^{(j)}(t)$ is a priority function of time $t$ taking values in $(0,1)$\nand tailored for each $ad_j \\in A$. Eqn.~\\eqref{eq:1} handles the\nmulti-objective conundrum well: (1) it pushes gradients to the global net with respect\nto \\emph{objective 1} and \\emph{objective 2} simultaneously; (2) each ad's\nspecific situation and fluctuation are properly captured.\n
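\nThe sketch below illustrates how Eqn.~\\eqref{eq:1} would be applied when workers push their gradients to the global actor: the gradient from the \\emph{objective 1} group is scaled by $\\lambda^{(j)}(t)$ and the gradient from the \\emph{objective 2} group by $1-\\lambda^{(j)}(t)$. The particular priority function \\texttt{example\\_lambda} is only a hypothetical placeholder, not the function used in our deployed system, and the remaining names and values are likewise illustrative.\n\\begin{verbatim}\nimport numpy as np\n\ndef example_lambda(t, tau=100.0):\n    # hypothetical priority in (0, 1) that decays with time t;\n    # in practice the priority function is tailored for each ad_j\n    return 1.0 \/ (1.0 + np.exp(t \/ tau - 3.0))\n\ndef push_priority_weighted(global_theta, grad_obj1, grad_obj2, t,\n                           lam_fn=example_lambda, eta=1e-3):\n    # scale the two objective gradients by lambda and (1 - lambda)\n    # before applying them to the shared global actor parameters\n    lam = lam_fn(t)                                 # lambda^{(j)}(t)\n    global_theta += eta * lam * grad_obj1           # objective-1 group\n    global_theta += eta * (1.0 - lam) * grad_obj2   # objective-2 group\n    return global_theta\n\\end{verbatim}\nIn the asynchronous setting, each group would of course apply only its own weighted term whenever it happens to push, which has the same effect on average.\n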
\n\\begin{algorithm}[t]\\small\n \\SetAlgoLined\n \/\/ Assume global shared parameter vectors $\\theta$ and $\\theta_v$\\;\n \/\/ Assume objective-specific parameter vectors $\\theta_k'$ and $\\theta_{v,k}'$, $k=1,2$\\;\n initialize step counter $t \\leftarrow 1$, epoch $T_{max}$, and discount factor $\\gamma$\\;\n \\While{$t