{"text":"\\section{\\label{}}\n\\section{INTRODUCTION}\nOPERA~\\cite{OPERA} is a long baseline neutrino experiment located\n in the Gran Sasso underground laboratory (LNGS) in Italy. The collaboration is composed\nof about 200 physicists coming from 36 institutions in 13 different countries. \nThe experiment is a massive hybrid detector\n with nuclear emulsions used as very precise tracking devices and electronic detectors \nto locate the neutrino interaction events in the emulsions.\n It is designed to primarily search\nfor $\\nu_{\\tau}$ appearance in the CERN high energy $\\nu_{\\mu}$ beam CNGS~\\cite{CNGS} at \n730 km from the neutrino source,\nin order to establish unambiguously the origin of the neutrino\n oscillations observed at the \"atmospheric\" $\\Delta m^{2}$ scale. The preferred hypothesis \nto describe this phenomenon being $\\nu_{\\mu} \\rightarrow \\nu_{\\tau}$ oscillation.\nCombining all the present known neutrino data \n the best fit values of a global three flavour \nanalysis of neutrino oscillations~\\cite{Fogli2008} give for \n $\\nu_{\\mu} \\rightarrow \\nu_{\\tau}$ oscillation parameters \n $\\Delta m^{2}=2.39$x$10^{-3}$$ \\mathrm{eV}^{2}$ and $\\mathrm{sin}^{2}2\\theta$=0.995.\nThe range of allowed values at 3 $\\sigma$ is \n2.06x$10^{-3} < \\Delta m^{2} <$ 2.81x$10^{-3}$$ \\mathrm{eV}^{2}$.\nIn addition to the dominant $\\nu_{\\mu}\\rightarrow\\nu_{\\tau}$ oscillation in $\\nu_{\\mu}$ beam, it is possible that a \nsub-leading $\\nu_{\\mu}\\rightarrow\\nu_{e}$ transition occurs as well.\nThis process will also be investigated by OPERA profiting from its \nexcellent electron identification capabilities \nto asses a possible improvement on the knowledge of the third yet unknown mixing angle $\\theta_{13}$.\n\nThe $\\nu_{\\tau}$ direct appearance search is based on the observation\nof events produced by charged current interaction (CC) with the\n$\\tau$ decaying in leptonic and hadronic modes.\nIn order to directly observe the $\\tau$ 
kinematics,\nthe principle of the OPERA experiment is to observe the $\\tau$ trajectories\nand the decay products in emulsion films composed of\ntwo thin emulsion layers (44 $\\mu$m thick) put on either side of a plastic base\n(205 $\\mu$m thick). The detector concept, which is described in the next section, combines\nmicrometric tracking resolution and a large target mass with good lepton identification.\nThis concept allows efficient rejection of the main topological background coming from charm production in\n$\\nu_\\mu$ charged current interactions.\n\n\n\\section{DETECTOR OVERVIEW}\n\nThe OPERA detector is installed in Hall C of the Gran Sasso underground laboratory.\nFigure~\\ref{fig:opera} shows a recent picture of the detector, which is 20 m long with a\ncross section of about 8$\\times$9 $\\mathrm{m}^{2}$ and composed\nof two identical parts called super modules (SM). Each SM has a target section and\na muon spectrometer.\n\\begin{figure*}[htb]\n\\vspace{-0.3cm}\n\\begin{center}\n \\includegraphics[width=17cm]{opera_det.eps}\n\\end{center}\n\\vspace{-0.8cm}\n\\caption{View of the OPERA detector in Hall C of the Gran Sasso Underground Laboratory in May 2007.}\n\\label{fig:opera}\n\\vspace{0cm}\n\\end{figure*}\n\nThe spectrometer allows a determination of the charge and momentum\nof muons by measuring their curvature in a\ndipolar magnet made of 990 tons of iron, providing a 1.53 T field transverse to the\nneutrino beam axis. Each spectrometer is equipped with six vertical planes of\ndrift tubes as a precision tracker, together with 22 planes (8$\\times$8 $\\mathrm{m}^{2}$)\nof bakelite RPC chambers reaching a spatial resolution of $\\sim$1 cm and an efficiency of 96\\%.\nThe precision tracker planes are composed of 4 staggered layers of 168 aluminium tubes,\n8 m long with 38 mm outer diameter. 
The spatial resolution\nof this detector is better than 500 $\\mu$m.\nThe complete spectrometer should reduce the charge confusion\nto less than 0.3\\% and give a momentum resolution better than 20\\% for momenta below\n50 GeV. The muon identification efficiency reaches 95\\% when the target\ntracker information is added for the cases where the muons stop inside the target.\n\nThe target section is composed of 31 vertical light supporting steel structures, called walls,\ninterleaved with double layered planes of\n6.6 m long scintillator strips in the two transverse\ndirections. The main goals\nof this electronic detector are to provide a trigger for the\nneutrino interactions and an efficient event pattern recognition which, together with the magnetic spectrometer,\nallows a clear classification of the $\\nu$ interactions\nand a precise localisation of the event.\nThe electronic target tracker spatial resolution\nreaches $\\sim$0.8 cm and its efficiency is 99\\%.\\\\\nThe walls contain the basic target detector units, called ECC bricks, sketched in Fig.~\\ref{fig:brick},\nwhich are obtained by stacking 56 lead plates with 57 emulsion films. This structure provides\nmany advantages, such as a massive target coupled to a very precise tracker, as well as a standalone\ndetector to measure electromagnetic showers and charged particle momenta using multiple\nCoulomb scattering in the lead. The ECC concept was\nalready successfully used for the direct $\\nu_\\tau$ observation performed in 2000 by the DONUT\nexperiment~\\cite{donut}.\n\\begin{figure*}[htb]\n\\vspace{-0.3cm}\n\\begin{center}\n \\includegraphics[width=6cm]{brick_emul3.eps}\n \\includegraphics[width=6cm]{brique.eps}\n\\end{center}\n\\vspace{-0.8cm}\n\\caption{a) Schematic structure of an ECC cell. The $\\tau$ decay kink is\nreconstructed by using four track segments in the emulsion films. b) Picture of an assembled brick. 
\nEach brick weighs about 8.6 kg and has a thickness of 10 radiation lengths $X_{0}$. }\n\\label{fig:brick}\n\\vspace{0cm}\n\\end{figure*}\n\nBehind each brick, an emulsion film doublet, called Changeable Sheet (CS), is attached in\na separate envelope. The CS can be detached from the brick for analysis\nto confirm and locate the tracks produced in neutrino interactions.\n\nBy the time of this conference, 146500 bricks (1.25 kton of target) had been assembled\nunderground at an average rate of about 700 bricks\/day by\na dedicated fully automated Brick Assembly Machine (BAM) with precise robotics and\ninstalled in the support steel structures from the sides of the walls\nusing two automated manipulator systems (BMS) running on each side of the experiment. \\\\\nWhen a candidate brick has been located by the electronic detectors,\nit is removed using the BMS and the changeable\nsheet is detached and developed. The film is then scanned to\nsearch for the tracks originating from the neutrino interaction. If none\nare found, the brick is left untouched and another\none is removed. When a neutrino\nevent is confirmed, the brick is exposed to cosmics\nto collect enough alignment tracks before development.\nAfter development the emulsions are sent to the scanning laboratories hosting\nautomated optical microscopes in Europe and Japan, each region using\na different technology~\\cite{scan1,scan2}. This step is the\nstart of the detailed analysis, consisting of finding the neutrino vertex\nand looking for a decay kink topology in the vertex region. 
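The $\nu_\tau$ appearance rates discussed in the next section follow from the standard two-flavour oscillation probability $P(\nu_{\mu} \rightarrow \nu_{\tau})=\sin^{2}2\theta \, \sin^{2}(1.267\,\Delta m^{2}[\mathrm{eV}^{2}]\,L[\mathrm{km}]\/E[\mathrm{GeV}])$. A minimal numerical sketch, using the best fit parameter values quoted in the introduction (illustrative only, not part of the OPERA analysis chain):

```python
import numpy as np

# Two-flavour nu_mu -> nu_tau appearance probability (standard formula;
# the constant 1.267 converts dm2 [eV^2] * L [km] / E [GeV] to a phase in radians).
# Default parameter values are the best-fit numbers quoted in the introduction.
def p_mu_tau(E_GeV, L_km=730.0, dm2=2.39e-3, sin2_2theta=0.995):
    return sin2_2theta * np.sin(1.267 * dm2 * L_km / E_GeV) ** 2

# At the CNGS mean energy of ~17 GeV the experiment sits well below the
# oscillation maximum, hence the small tau appearance probability.
print(round(p_mu_tau(17.0), 4))  # prints 0.0167
```

The probability only approaches $\sin^{2}2\theta$ near $E\approx1.4$ GeV; the much higher CNGS mean energy is nevertheless required to produce the heavy $\tau$ lepton in charged current interactions.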
\n\n\n\\section{THE CNGS BEAM STARTUP}\nThe CNGS neutrino beam~\\cite{CNGS} is a high energy $\\nu_\\mu$ beam optimised\nto maximise the number of $\\nu_\\tau$ charged current interactions at Gran Sasso produced\nby the oscillation mechanism at the atmospheric $\\Delta m^{2}$.\nThe mean neutrino energy is about 17 GeV with a contamination of 2.4\\% $\\overline{\\nu}_{\\mu}$,\n0.9\\% $\\nu_{e}$ and less than 0.06\\% $\\overline{\\nu}_{e}$.\nUsing the CERN SPS accelerator in shared mode with the fixed target experiments and the LHC,\n$4.5\\times10^{19}$ protons on target (pot) per year should normally be delivered,\nassuming 200 days of operation.\nThe numbers of charged current and neutral current interactions from $\\nu_\\mu$ expected in the Gran Sasso\nlaboratory\nare then about 2900 \/kton\/year and 875 \/kton\/year, respectively.\nIf the $\\nu_{\\mu}\\rightarrow\\nu_{\\tau}$ oscillation hypothesis is confirmed,\nthe number of $\\tau$'s produced via\ncharged current interactions at Gran Sasso should be\nof the order of 14 \/kton\/year\nfor $\\Delta m^{2}=2.5\\times10^{-3}~\\mathrm{eV}^{2}$ at full mixing.\n\nA first short CNGS run took place in August 2006. The OPERA target was empty at that time\nbut the electronic detectors were taking data.\nDuring this run, 319 events correlated in time with the beam and coming from neutrino\ninteractions in the surrounding rock and inside the detector were recorded. 
\nThe delivered intensity corresponded to $7.6\\times10^{17}$ pot, with a\npeak intensity of $1.7\\times10^{13}$ pot per extraction, corresponding to 70\\% of the expected nominal value.\nThe reconstructed zenith angle distribution from penetrating muon tracks showed a clear\npeak centered around $3.4^{\\circ}$, as expected for neutrinos originating from CERN.\nDetails and results can be found in Ref.~\\cite{cngs2006}.\n\n\\section{FIRST NEUTRINO EVENTS AND DETECTOR PERFORMANCE}\nA second CNGS physics run took place in October\n2007 with a total of $8.24\\times10^{17}$ pot delivered and 369 reconstructed beam related events.\nSelection criteria similar to the 2006 analysis~\\cite{cngs2006},\nbased on GPS timing systems and synchronisation between OPERA and CNGS, were used\nto select events compatible with the CNGS proton extraction time window.\nThe OPERA target was filled with 80\\% of the first\nsuper module, corresponding to a total target mass of 0.5 kton.\nAmong the selected beam events, 38 were recorded and reconstructed inside the OPERA target,\ncompared to 31.5$\\pm$6 expected.\nAmong them, 29 were classified as Charged Current (CC) and 9\nas Neutral Current (NC), in agreement with expectation.\nFor each event the electronic detector hits were used to find the most probable\nbrick where the neutrino interaction may have occurred.\nThe left part of Figure~\\ref{fig:cngs_event} shows an event display of the first neutrino\ninteraction located in the OPERA detector. The black dots represent hits in the\nelectronic detector. The event is a charged current event with a clear muon track traversing\nboth target and spectrometer sections over more than 18 m. 
The right part of the figure\nshows the result of the detailed analysis of the emulsions after scanning the identified\nbrick, where a clearly reconstructed\ninteraction vertex is visible with two photon conversions compatible with\na $\\pi^{0}$ decay.\n\n\\begin{figure*}[htb]\n\\vspace{-0.3cm}\n\\begin{center}\n \\includegraphics[width=10cm]{cngs_event.eps}\n \\includegraphics[width=7cm]{vertex.eps}\n\n\\end{center}\n\\vspace{-0.8cm}\n\\caption{a) Charged current neutrino interaction recorded in OPERA. The event display\nshows the hits left in the electronic detectors. b) Emulsion reconstruction of the\nneutrino interaction vertex in the corresponding target brick.}\n\\label{fig:cngs_event}\n\\vspace{0cm}\n\\end{figure*}\n\nThe extensive study of the recorded events has confirmed the OPERA\nperformance and the validity of the methods and algorithms used, which provide, for example, an\nimpact parameter resolution of the order of a few microns, particle momentum estimation, and\nshower detection for e\/$\\pi$ separation.\nFigure~\\ref{fig:charm2007} shows the longitudinal and\ntransverse views of another reconstructed event vertex, where a clear decay topology similar to what is expected\nfrom a $\\tau$ decay is visible. However, the presence of a prompt muon attached to the primary vertex and the momentum\nbalance in the transverse plane are in favour of a $\\nu_\\mu^{CC}$ interaction producing a charm particle.\n\n\\begin{figure*}[htb]\n\\vspace{-0.3cm}\n\\begin{center}\n \\includegraphics[width=9cm]{charm2007_v2.eps}\n\\end{center}\n\\vspace{-0.8cm}\n\\caption{Longitudinal and transverse views of a reconstructed neutrino interaction vertex with a charm\ndecay candidate topology.}\n\\label{fig:charm2007}\n\\vspace{0cm}\n\\end{figure*}\n\n\n\\section{CONCLUSIONS}\n\nThe OPERA detector is complete, with a 1.25 kton lead-emulsion target offering\na huge and precise tracking device. 
\nWith the cosmic data taking and the first CNGS neutrino runs in 2006 and 2007,\nthe design goals and detector performance\nwere reached and the first levels of the reconstruction software and analysis tools were validated.\nThe observation in 2007 of 38 neutrino events in the target bricks and the\nlocalization and reconstruction of neutrino vertices in emulsions were an important phase which\nsuccessfully validated the OPERA detector concept.\\\\\nHaving now the full OPERA target, the next important step is the 2008 CNGS neutrino run, which\nalready started in June.\nAbout $2.28\\times10^{19}$ pot are expected in 123 days of SPS running, assuming a nominal\nintensity of $2\\times10^{13}$ pot\/extraction. This intensity, when reached, should lead\nto about 20 neutrino interactions\/day in the target and eventually the\nobservation of the first $\\tau$ event candidate.\\\\\nIn 5 years of CNGS running at $4.5\\times10^{19}$ pot per year, OPERA should be able\nto observe 10 to 15 $\\nu_\\tau$ events after oscillation at\nfull mixing in the range $2.5\\times10^{-3} < \\Delta m^{2} < 3\\times10^{-3}~\\mathrm{eV}^{2}$,\nwith a total background of less than 0.76 events.\n\n\n\n\\section{Introduction} The combination of control theory techniques with quantum mechanics (see, e.g., \\cite{AltaTico}, \\cite{Mikobook}, \\cite{Optrev}) has generated a rich set of control algorithms for quantum mechanical systems modeled by the\nSchr\\\"odinger (operator) equation\n\\be{Scrod11}\n\\dot X=AX+ \\sum_{j=1}^m B_j u_jX, \\qquad X(0)={\\bf 1}.\n\\end{equation}\nHere we assume that we have a finite dimensional model, $A$ and $B_j$ are matrices in $\\mathfrak{su}(n)$ for each $j=1,...,m$, $X$ is the unitary propagator, which is equal to the identity ${\\bf 1}$ at time zero, and $u_j$ are the controls. 
These are usually electromagnetic fields, constant in space but possibly time-varying, which are the output of an appropriately engineered pulse-shaping device. Many of the proposed algorithms in the literature involve control functions which are only piecewise continuous and in fact have `jumps' at certain points of the interval of control. For example, control algorithms based on {\\it Lie group decompositions} (see, e.g., \\cite{Rama1}) involve `switches' between different Hamiltonians; algorithms based on\n{\\it optimal control}, even if they produce smooth control functions, often require a jump at the beginning of the control interval in order for the control to achieve the prescribed value in norm (assuming a bound in norm of the optimal control as in \\cite{Bosca1}). Besides the practical problem of generating (almost) instantaneous switches with pulse shapers, such discontinuities introduce undesired high frequency\ncomponents in the dynamics of the controlled system. For these reasons, it is important to have algorithms which produce {\\it smooth} control functions whose values at the beginning and the end of the control interval are equal to zero.\n\nThis paper describes a method to design control functions without discontinuities in order to drive the state of a class of quantum systems of the form (\\ref{Scrod11}) to an arbitrary final configuration. Our main example of application will be the simultaneous control of two quantum bits in zero field NMR, a system which was also considered in \\cite{Xinhua} in the context of optimal control. Compared to that paper, we abandon the requirement of time optimality (under a bound on the control norm) but introduce a novel\nmethod which allows more flexibility in the control design. The result is a control algorithm that does not present discontinuities, with the control equal to zero at the beginning and at the end of the control interval. 
\n\nThe paper is organized in two main sections, each of which is divided into several subsections. In Section \\ref{GenTheor} we describe the class of systems we consider and the general theory underlying our method. We also present two simple examples of quantum systems where the theory applies. In Section \\ref{App2QB} we detail the application to the system of two spin $\\frac{1}{2}$ particles in zero field NMR mentioned above. This section includes a description of the model as well as the explicit numerical treatment of a control problem: the independent control of the two spin $\\frac{1}{2}$ particles to two different types of Hadamard gates.\n\n\n\n\\section{General Theory}\\label{GenTheor}\n\n\\subsection{Class of systems considered}\\label{Classo}\n\nConsider the class of control systems (\\ref{Scrod11}) with $A$ and $B_j$, $j=1,...,m$, in $\\mathfrak{su}(n)$ and let ${\\cal L}$ denote the Lie algebra generated by $\\{A, B_1,...,B_m\\}$. We assume that ${\\cal L}$ is semisimple, which implies, since ${\\cal L} \\subseteq \\mathfrak{su}(n)$, that the associated Lie group $e^{\\cal L}$ is compact. The Lie algebra ${\\cal L}$ is called, in quantum control theory, the {\\it dynamical Lie algebra} associated to the system (\\ref{Scrod11}). Since $e^{\\cal L}$ is compact, the Lie group $e^{\\cal L}$ is the set of states for (\\ref{Scrod11}) reachable by varying the control \\cite{Suss}. In particular, if ${\\cal L}=\\mathfrak{su}(n)$, the system is said to be {\\it controllable} because every special unitary matrix can be obtained with appropriate control. These are known facts in quantum control theory (see, e.g., \\cite{Mikobook}). We assume that ${\\cal L}$ has a (vector space) decomposition ${\\cal L}={\\cal K} \\oplus {\\cal P}$, such that $[{\\cal K}, {\\cal K}] \\subseteq {\\cal K}$, i.e., ${\\cal K}$ is a {\\it Lie subalgebra of ${\\cal L}$}, which we also assume to be semisimple so that $e^{\\cal K}$ is compact. 
Moreover $[{\\cal K}, {\\cal P}] \\subseteq {\\cal P}$. A special case is when, in addition, $[{\\cal P}, {\\cal P}] \\subseteq {\\cal K}$, in which case the decomposition ${\\cal L}={\\cal K} \\oplus {\\cal P}$ defines a symmetric space of $e^{\\cal L}$ \\cite{Helgason}. We assume, in the model (\\ref{Scrod11}), that such a decomposition exists so that $A \\in {\\cal K}$ and $\\{B_1,...,B_m\\}$ forms a basis for ${\\cal P}$.\n\n\nUnder such circumstances, we can reduce ourselves to the case $A=0$ in (\\ref{Scrod11}), i.e., to systems of the form\n\\be{Scro}\n\\dot U= \\sum_{k=1}^m \\hat u_k B_k U, \\qquad U(0)={\\bf 1}.\n\\end{equation}\nTo see this, assume that for any fixed interval $[0,t_f]$ and any desired final condition $U_f$, we are able to find a control $\\hat u_k$ steering the state $U$ in (\\ref{Scro}) from the identity ${\\bf 1}$ to $U_f$. Let $a_{kj}=a_{kj}(t)$, $k,j=1,...,m$, be the coefficients forming an $m \\times m$ orthogonal matrix, so that, for any $j=1,...,m$,\n\\be{transforB}\ne^{-At} B_j e^{At}=\\sum_{k=1}^m a_{kj}(t) B_k.\n\\end{equation}\nLet $X_f$ be the desired final condition for (\\ref{Scrod11}) and $\\hat u_k$ be the control steering the state $U$ of system (\\ref{Scro}) from the identity ${\\bf 1}$ to $e^{-At_f} X_f$, in time $t_f$. Then the controls $u_j$ obtained by inverting\n\\be{tobeinverted}\n\\hat u_k(t):=\\sum_{j=1}^m a_{kj}(t)u_j(t),\n\\end{equation}\nsteer the state $X$ of (\\ref{Scrod11}) from the identity to $X_f$. This follows from the fact that, if $U=U(t)$ is the solution of (\\ref{Scro}) with the control $\\hat u_k$, and final condition $e^{-At_f} X_f$, then $X=e^{At}U$ is a solution of (\\ref{Scrod11}), with the controls $u_j$ given by (\\ref{tobeinverted}) and therefore the final condition at $t_f$ is $X_f$. 
Notice that the transformation (\\ref{tobeinverted}) does not modify the smoothness properties of the control, neither does it modify the fact that the control is zero at the beginning and at the end of the control interval (or at any other point). \nTherefore in the following we shall deal with {\\it driftless systems} of the form (\\ref{Scro}) with the Lie algebraic ${\\cal L}={\\cal K} \\oplus {\\cal P}$, structure above described. In particular $\\{B_1,...,B_m\\}$ forms a basis for ${\\cal P}$. \n\n\\subsection{Symmetry reduction}\n \nThe compact Lie group $e^{\\cal K}$ can be seen as a Lie \ntransformation group which acts on $e^{\\cal L}$ via conjugation $X \\in e^{\\cal L} \\rightarrow KXK^{-1}$, $ K \\in e^{\\cal K}$. Moreover this is a {\\it group of symmetries} for system (\\ref{Scro}) \nin the sense that $K B_j K^{-1} \\in {\\cal P}$ for each $j$, for every $K \\in e^{\\cal K}$. In particular let $K B_j K^{-1} :=\\sum_{k=1}^m a_{kj} B_k$ for an orthogonal matrix $\\{a_{kj}\\}$ depending on $K \\in e^{\\cal K}$ (cf. (\\ref{transforB})). If $U=U(t)$ is a trajectory corresponding to a control $\\hat u_k$, $KUK^{-1}$ is the trajectory \ncorresponding to controls $u_k:=\\sum_{j=1}^ma_{kj} \\hat u_j$, as it is easily seen from (\\ref{Scro}) and \n$$\nK\\dot U K^{-1}=\\sum_{j=1}^m \\hat u_j KB_j K^{-1} KUK^{-1}= \\sum_{j=1}^m \\hat u_j \\left( \\sum_{k=1}^m a_{kj} B_k \\right) KUK^{-1}=$$\n$$=\\sum_{k=1}^m \\left( \\sum_{j=1}^m a_{kj} \\hat u_j \\right) B_k KUK^{-1}=\\sum_{k=1}^m u_k B_k KUK^{-1}. \n$$ \nThis suggests to treat the control problem on the {\\it quotient space} $e^{\\cal L}\/e^{\\cal K}$ corresponding \nto the above action of $e^{\\cal K}$ on $e^{\\cal L}$. 
\n\n\nFrom the theory of Lie transformation groups (see, e.g., \\cite{Bredon}) we know that the quotient space $e^{\\cal L}\/e^{\\cal K}$ has the structure of a {\\it stratified space} where each stratum corresponds to an {\\it orbit type}, i.e., a set of points in $e^{\\cal L}$ which have conjugate isotropy groups. The stratum corresponding to the smallest possible isotropy group, $K_{min}$, is known to be a connected manifold which is {\\it open and dense} in $e^{\\cal L}\/e^{\\cal K}$. We denote it here by $e^{\\cal L}_{reg}\/e^{\\cal K}$, where $reg$ stands for the {\\it regular} part. Its preimage in $e^{\\cal L}$, $e^{\\cal L}_{reg}$, under the natural projection $\\pi \\, : \\, e^{\\cal L} \\rightarrow e^{\\cal L}\/e^{\\cal K}$ is open and dense in $e^{\\cal L}$\nas well. This is called the {\\it regular part} of $e^{\\cal L}\/e^{\\cal K}$ (resp. $e^{\\cal L}$). The complementary set in $e^{\\cal L}\/e^{\\cal K}$ (resp. $e^{\\cal L}$) is called the {\\it singular part}. The dimension of $e^{\\cal L}_{reg}\/e^{\\cal K}$ as a manifold is\n\\be{dimensio}\n\\dim (e^{\\cal L}_{reg}\/e^{\\cal K}) =\\dim (e^{\\cal L})-\\dim (e^{\\cal K})+\\dim K_{min}=\\dim ({\\cal L})-\\dim ({\\cal K})+\\dim K_{min},\n\\end{equation}\nwhere $\\dim K_{min}$ is the dimension of the minimal isotropy group as a Lie group.\\footnote{More discussion on these basic facts in the theory of Lie transformation groups can be found in \\cite{NOIJDCS} and references therein.} In particular, if\n$K_{min}$ is a {\\it discrete Lie group}, i.e., it has dimension zero, the right hand side of (\\ref{dimensio}) is the dimension of the subspace ${\\cal P}$. This is verified for instance in $K-P$ problems (cf., e.g., \\cite{conBenECC}) when $e^{\\cal L}=SU(n)$. We shall assume this to be the case in the following. 
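As a quick consistency check of (\ref{dimensio}), take $e^{\cal L}=SU(2)$ with $e^{\cal K}$ its diagonal subgroup: conjugation by $K=\mathrm{diag}(e^{i\phi},e^{-i\phi})$ multiplies the off-diagonal entries of $U$ by $e^{\pm 2i\phi}$, so a non-diagonal $U$ is fixed only by $K=\pm{\bf 1}$ and $K_{min}$ is discrete. Then
$$
\dim (e^{\cal L}_{reg}\/e^{\cal K})=\dim(\mathfrak{su}(2))-\dim({\cal K})+\dim K_{min}=3-1+0=2=\dim {\cal P},
$$
consistent with the two real dimensions of the open unit disc that will appear as the regular part of the quotient in the single-spin example below.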
\n\nAccording to a result in \\cite{conBenECC}, under the assumption that the minimal isotropy group $K_{min}$ is discrete, the restriction of $\\pi_*$ to $R_{x*} {\\cal P}$ is an isomorphism onto $T_{\\pi(x)} \\left( e^{\\cal L}_{reg}\/{e^{\\cal K}} \\right)$ for each point $x$ in the regular part, $e^{\\cal L}_{reg}$. Here, as is often done, we have identified the Lie algebra ${\\cal L}$ with the tangent space of $e^{\\cal L}$ at the identity ${\\bf 1}$, and therefore ${\\cal P}$ is identified with a subspace of the tangent space at $\\bf{1}$. The map $R_x$ denotes the {\\it right translation} by $x$ so that $R_{x*}{\\cal P}$ is a subspace (with the same dimension) of the tangent space at $x$, $T_x {e^{\\cal L}}$.\\footnote{Recall that for a map $f \\, : \\, M \\rightarrow N$ between two manifolds $M$ and $N$, $f_*$ denotes the {\\it differential} (also called {\\it push-forward}) $f_* \\, : \\, T_xM \\rightarrow T_{f(x)}N$ between two tangent spaces. When we want to emphasize the point $x$ we write $f_*|_x$. } In Appendix B, we show that in given coordinates the determinant of the restriction of $\\pi_*$ to $R_{x*}{\\cal P}$ is invariant under the action of $e^{\\cal K}$. The above isomorphism result says that in the regular part $\\det \\pi_* \\not=0$. In this situation, for every regular point $U \\in e^{\\cal L}$ and every tangent vector $V \\in T_{\\pi(U)}(e^{\\cal L}_{reg}\/e^{\\cal K})$, we can find a tangent vector $\\pi_*^{-1} V \\in R_{U*} {\\cal P}$. Such a tangent vector is {\\it horizontal} for system (\\ref{Scro}), which means that it can be written as a linear combination of the available vector fields $\\{ B_k U \\}$ in (\\ref{Scro}). If $\\Gamma=\\Gamma(t)$ is a curve entirely contained in $e^{\\cal L}_{reg}\/e^{\\cal K}$ and $U=U(t)$ a curve in $e^{\\cal L}_{reg}$ such that $\\pi(U(t))=\\Gamma(t)$ for every $t$, i.e., $U$ is a `{\\it lift}' of $\\Gamma$, then $\\pi_*|_{U}^{-1} \\dot \\Gamma$ is a horizontal tangent vector at $U(t)$ for every $t$. 
If $\\Gamma$ joins two points $\\Gamma_0$ and $\\Gamma_1$ in $e^{\\cal L}_{reg}\/e^{\\cal K}$, in the interval $[t_0,t_1]$, and $U_0$ is such that $\\pi(U_0)=\\Gamma_0$, then the solution of the differential system\n\\be{diffsys}\n\\dot U=\\pi_*|_{U}^{-1} \\dot \\Gamma, \\qquad U(t_0)=U_0,\n\\end{equation}\nis such that $\\pi(U(t_1))=\\Gamma_1$. Therefore, once we prescribe an arbitrary trajectory $\\Gamma$ to move in the quotient space between two given orbits $\\Gamma_0$ and $\\Gamma_1$ in the regular part, the control specified by\n\\be{deficontr}\n\\pi_*|_{U}^{-1} \\dot \\Gamma=\\sum_{j=1}^m u_j B_j U\n\\end{equation}\nwill allow us to move between two states $U_0$ and $U_1$ such that $\\pi(U_0)=\\Gamma_0$ and $\\pi(U_1)=\\Gamma_1$.\n\n\\subsection{Methodology for Control}\nThe above treatment suggests a general methodology to design control laws\nfor systems of the form (\\ref{Scro}) described in subsection \\ref{Classo}. In fact,\ngiven the freedom in the choice of the trajectory $\\Gamma=\\Gamma(t)$ mentioned above,\nwe can design such controls satisfying various requirements and in particular without discontinuities. Such a methodology can be summarized as follows.\n\nFirst of all we need to obtain a geometric description of the orbit space $e^{\\cal L}\/e^{\\cal K}$, and in particular of its regular part $e^{\\cal L}_{reg}\/e^{\\cal K}$, and verify that the minimal isotropy group, which is the isotropy group of the elements in $e^{\\cal L}_{reg}$, is discrete so that the right hand side of\n(\\ref{dimensio}) is equal to $\\dim {\\cal P}$. This is a weak assumption, easily verified in the examples that follow, and it can be proven true in several cases \\cite{conBenECC}, \\cite{BenTesi}. Then one chooses\ncoordinates for the manifold $e^{\\cal L}_{reg}\/e^{\\cal K}$. These are expressed in terms of the original coordinates in\n$e^{\\cal L}$ or, more commonly, in terms of the entries of the matrices in $e^{\\cal L}$. 
Such coordinates are a complete set of {\\it independent invariants} with respect to the (conjugacy) action of the group $e^{\\cal K}$. The word `complete' here means that the knowledge of their values uniquely determines the {\\it orbit}, i.e., a point in $e^{\\cal L}_{reg}\/e^{\\cal K}$. There are $m=\\dim({\\cal P})$ of them, as this is the dimension of $e^{\\cal L}_{reg}\/e^{\\cal K}$ (cf. (\\ref{dimensio})). Once we have coordinates $\\{x^1,...,x^m\\}$, the tangent vectors $\\left \\{ \\frac{\\partial}{\\partial x^1},...,\\frac{\\partial}{\\partial x^m}\\right\\}$ at every regular point in the quotient space determine a basis of the tangent space of $e^{\\cal L}_{reg}\/e^{\\cal K}$. For any trajectory $\\Gamma$ in\n$e^{\\cal L}_{reg}\/e^{\\cal K}$,\nwe can write the tangent vector $\\dot \\Gamma(t)$ as $\\dot \\Gamma(t)=\\sum_{j=1}^m \\dot x^{j}\\frac{\\partial}{\\partial x^j}$, for some functions $\\dot x^j$. With this choice of coordinates, one then needs to calculate, for every regular point $U$, the matrix for $\\pi_*|_U$ as restricted to $R_{U*} {\\cal P}$ and its inverse $\\pi_*|_U^{-1}$. This allows us to implement\nformula (\\ref{deficontr}) to obtain the control from a given prescribed trajectory in the orbit space.\n\n\nWe remark that there is an issue concerning the fact that our initial condition, which is the identity ${\\bf 1}$ in (\\ref{Scro}) (and possibly the final desired condition), is not in the regular part of $e^{\\cal L}$. In fact the whole Lie group $e^{\\cal K}$ is the isotropy group of the identity. If we take for $\\Gamma$ a trajectory which starts from the class corresponding to the identity, the matrix corresponding to $\\pi_*$ may become singular as $t \\rightarrow 0$ and therefore it will be impossible to derive the control directly from formula (\\ref{deficontr}). 
This problem can be overcome by applying a {\\it preliminary control} which takes the state of system (\\ref{Scro}) out of the singular part and into the regular part of $e^{\\cal L}$. To avoid discontinuities, such a control is chosen to be zero at the beginning and at the end of the control interval. It takes the system to a point $U_0$ with $\\pi(U_0)=\\Gamma_0 \\in e^{\\cal L}_{reg}\/e^{\\cal K}$. Then we choose a trajectory $\\Gamma$ in the regular part of the quotient space which joins $\\Gamma_0$ and $\\Gamma_1$, where $\\Gamma_1$ is the orbit of the desired final condition. The control obtained through (\\ref{deficontr}) will steer system (\\ref{Scro}) to a state $\\hat U_f$ in the same orbit as the desired final condition. Therefore if $U_f$ is the desired final condition we will have\n$\\pi(\\hat U_f)=\\pi(U_f)=\\Gamma_1$. Notice that we also want $\\dot \\Gamma\\to 0$ at both the initial and final point so that the control is zero and can be {\\it concatenated} continuously with the preliminary control described above.\n\n\nIt is possible that the desired final condition $U_f$ is also in the singular part of $e^{\\cal L}$. This problem can be tackled in two ways. We can recall that the regular part is open and dense and therefore we can always drive to a state in the regular part arbitrarily close to the desired $U_f$. This means that our algorithm will only give an approximate control, which will nevertheless steer the system arbitrarily close to $U_f$. Alternatively we can select a regular element $S \\in e^{\\cal L}$ such that $U_f S^{-1}$ is also regular.\\footnote{Such an element $S\\in e^{\\mathcal{L}}_{\\text{reg}}$ always exists for any $U\\in e^{\\cal L}$, by the following argument: Assume that it does not exist. Then for every regular $S$, $US$ is singular. Therefore, denoting by $L_U$ the left translation by $U$, we have $L_U(e^{\\mathcal{L}}_{\\text{reg}})\\subseteq e^{\\mathcal{L}}_{\\text{sing}}$. 
Then the unique bi-invariant Haar measure $\\mu$ on $e^{\\cal L}$ with $\\mu(e^{\\cal L})=1$ gives $\\mu(e^{\\mathcal{L}}_{\\text{reg}})=\\mu(L_U(e^{\\mathcal{L}}_{\\text{reg}}))\\leq\\mu(e^{\\mathcal{L}}_{\\text{sing}})$. On the other hand, $\\mu(e^{\\mathcal{L}}_{\\text{sing}})=0$ since $\\mu$ must also correspond to the Riemannian volume of the bi-invariant Killing metric (normalized if necessary) and each stratum in $e^{\\mathcal{L}}_{\\text{sing}}$ has dimension strictly less than the dimension of $e^{\\cal L}$ and thus has volume $0$ and therefore invariant measure $0$. But $\\mu(e^{\\mathcal{L}}_{\\text{reg}})=\\mu(e^{\\mathcal{L}})-\\mu(e^{\\mathcal{L}}_{\\text{sing}})=\\mu(e^{\\mathcal{L}})=1$. This is a contradiction.} Then we find two controls: $u_1$, driving the state $U$ of (\\ref{Scro}) from the identity to $S$, and $u_2$, driving it from the identity to $U_fS^{-1}$. In particular, because of the right invariance of system (\\ref{Scro}), $u_2$ also drives $S$ to $U_f$.\nTherefore, the concatenation of $u_1$ (first) and $u_2$ (second) will drive the system to the desired final configuration. Hence in the following, for simplicity, we shall assume that the final desired state is in the regular part.\n\n\nThe (concatenated) control $\\hat u_j$ obtained from the tangent vector $\\dot \\Gamma$ at every time $t$ for a trajectory on the quotient space $\\Gamma$ (cf. (\\ref{deficontr})) will drive the state $U$ of (\\ref{Scro}) from the identity ${\\bf 1}$ only to a state $\\hat U_f$ which is in the same orbit as the desired final state $U_f$.\nThere exists $K \\in e^{\\cal K}$ such\nthat $K\\hat U_f K^{-1}=U_f$. Once such a $K$ is found, it determines, through\n$\\sum_{j=1}^m KB_j \\hat u_jK^{-1}= \\sum_{k=1}^m B_k u_k$, the actual control $\\{u_k\\}$ to apply. 
We remark that this transformation does not modify the smoothness properties of the control, nor the fact that it is zero at some point (in particular at the beginning and at the end of the control interval).

\subsection{Examples}

\subsubsection{Control of a single spin $\frac{1}{2}$ particle}\label{Twol}

Consider the Schr\"odinger operator equation (\ref{Scro}) in the form
\be{Scro1}
\dot U=\begin{pmatrix} 0 & \alpha(t) \cr
-\alpha^*(t) & 0
\end{pmatrix} U, \qquad U(0)={\bf 1}_2,
\end{equation}
with $U$ in $SU(2)$. The complex-valued function $\alpha$ is a control field representing the $x$ and $y$ components of an electromagnetic field. The dynamical Lie algebra ${\cal L}$ is $\mathfrak{su}(2)$ and it has a decomposition $\mathfrak{su}(2)={\cal K} \oplus {\cal P}$, with ${\cal K}$ the diagonal and ${\cal P}$ the anti-diagonal matrices. The one-dimensional Lie group of {\it diagonal} matrices in $SU(2)=e^{\cal L}$ is a symmetry group $e^{\cal K}$ for the above system, and the structure of the quotient space $SU(2)/e^{\cal K}$ is that of a closed unit disc \cite{NoiAutomat}. The entry $u_{1,1}$ of $U \in SU(2)$, which is a complex number with absolute value $\leq 1$, determines the orbit of the matrix $U$. The regular part of $SU(2)$ corresponds to matrices with $|u_{1,1}| < 1$, i.e., the interior $\mathring D$ of the unit disc in the complex plane. The singular part is the boundary of the unit disc. Denote by $z$ the (complex) coordinate in the interior of the complex unit disc. This corresponds to two {\it real} coordinates invariant under the action of $e^{\cal K}$ (conjugation by diagonal matrices). Let $\Gamma=\Gamma(t)$ be a desired trajectory inside the unit disc, which we denote by $z_d$ in the chosen coordinates.
The tangent vector $\dot \Gamma$ in (\ref{diffsys}) is given in complex coordinates by $\dot{z_d} \frac{\partial}{\partial z}$.\footnote{This means $\dot x_d \frac{\partial}{\partial x_d} +\dot y_d \frac{\partial}{\partial y_d}$ where $x_d$ and $y_d$ are the real and imaginary parts of $z_d$.} In the coordinates on $SU(2)$ used in equation (\ref{Scro}), the corresponding tangent vector for $\dot U$ is given by (cf. (\ref{Scro1})) $\begin{pmatrix} 0 & \alpha \cr -\alpha^* & 0 \end{pmatrix} U$, and the value of the control $\alpha$ is obtained by solving (\ref{diffsys}), which gives
\be{diffebis}
\dot z_d=\frac{d}{dt}|_{t=0} z \left( e^{\begin{pmatrix}0 & \alpha \cr -\alpha^* & 0 \end{pmatrix}t} U \right),
\end{equation}
where $z(P)$ denotes the $(1,1)$ entry of the matrix $P$. Equation (\ref{diffebis}) gives $\alpha=\frac{\dot z_d}{u_{2,1}}$, which, as expected from the isomorphism theorem of \cite{conBenECC} recalled above, gives a one-to-one correspondence between $\alpha$ and $\dot z_d$ as long as $U$ is in the regular part of $SU(2)$, i.e., it is not diagonal, i.e., $u_{2,1} \not=0$.

\subsubsection{Control of a three-level system in the $\Lambda$ configuration}\label{Lambda}
Consider a three-level quantum system where the controls couple level $|1\rangle $ to level $|2\rangle $ and level $|1 \rangle $ to level $|3\rangle$, but not levels $|2\rangle$ and $|3\rangle$ directly. Assuming that $|1\rangle$ is the highest energy level, the energy level diagram takes the so-called $\Lambda$ configuration (see, e.g., \cite{Lambda1}). The Schr\"odinger operator equation (\ref{Scro}) is such that
\be{lambdascro}
\sum_{j=1}^m u_j B_j =\sum_{j=1}^4 u_j B_j=
\begin{pmatrix}0 & \alpha & \beta \cr -\alpha^* & 0 & 0 \cr
- \beta^* & 0 & 0 \end{pmatrix},
\end{equation}
with the complex control functions $\alpha$ and $\beta$.
Such a system admits a group of symmetries given by $e^{\cal K}=S(U(1) \times U(2))$, i.e., block diagonal matrices in $SU(3)$ with one block of dimension $1 \times 1$ and one block of dimension $2 \times 2$, and determinant equal to $1$. The Lie subalgebra ${\cal K}$ consists of matrices in $\mathfrak{su}(3)$ with a block diagonal structure with one block of dimension $1 \times 1$ and one block of dimension $2 \times 2$. The complementary subspace ${\cal P}$ is spanned by the antidiagonal matrices in $\mathfrak{su}(3)$ with the same partition of rows and columns. Such a system was studied in \cite{ADS} in the context of optimal control, and a description of the orbit space $SU(3)/e^{\cal K}$ was given. It was shown that the regular part $SU(3)_{reg}/e^{\cal K}$ is homeomorphic to the product of two open unit discs $\mathring D_1 \times \mathring D_2$ in the complex plane. Up to a similarity transformation in $e^{\cal K}=S(U(1) \times U(2))$, a matrix $U$ in $SU(3)$ can be written as
\be{canonicalformSU3}
U=\begin{pmatrix} z_1 & \sqrt{1-|z_1|^2} & 0 \cr
-\sqrt{1-|z_1|^2} & z_1^* & 0 \cr 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 0 & 0 \cr 0 & z_2 & w \cr 0 & - w^* & z_2^* \end{pmatrix},
\end{equation}
for complex numbers $z_1,$ $z_2$ and $w$, where $|z_1|\leq 1$ and $|z_2| \leq 1$. Strict inequalities hold if and only if $U$ is in the regular part, in which case $z_1$ and $z_2$ can be taken as the coordinates in $SU(3)_{reg}/e^{\cal K}$. An alternative set of (complex) coordinates is given by $(z_1,T)$, where $T$ is the trace of the (lower) $2 \times 2$ block of the element $U \in SU(3)$, which is invariant (along with $z_1$) under the conjugation action of elements in $e^{\cal K}=S(U(1) \times U(2))$. The coordinates $(z_1,T)$ are related to the coordinates $(z_1,z_2)$ by (from (\ref{canonicalformSU3})) $T=z_1^*z_2+z_2^*$, which is inverted as $z_2=\frac{T^*-z_1 T}{1-|z_1|^2}$.
A desired trajectory $\Gamma$ in $SU(3)_{reg}/S(U(1)\times U(2))$ is written in these coordinates as $(z_{1d},T_d):= (z_{1d}(t),T_d(t))$. The associated tangent vector $\dot \Gamma$ in (\ref{diffsys}) is $\dot z_{1d}(t) \frac{\partial}{\partial z_{1d}}+\dot T_{d} \frac{\partial}{\partial T}$. By applying $\pi_*|_{U}\dot U$ in (\ref{diffsys}) to $z_1$ and $T$, with the restriction that $\dot U$ is of the form $\begin{pmatrix}0 & \alpha & \beta \cr -\alpha^* & 0 & 0 \cr
- \beta^* & 0 & 0 \end{pmatrix}U$, we obtain two equations for $\alpha$ and $\beta$,
$$
\alpha u_{2,1}+\beta u_{3,1}=\dot z_{1d}, \qquad -\alpha u_{1,2}^*-\beta u_{1,3}^*=\dot T_{d}^*.
$$
These are solved, using $\hat D:=u_{1,3}^*u_{2,1}-u_{3,1} u_{1,2}^*$, by
\be{alphabetasol}
\alpha=\frac{u_{1,3}^* \dot z_{1,d}+ u_{3,1} \dot T_d^*}{\hat D}, \qquad
\beta=-\frac{u_{1,2}^* \dot z_{1,d}+ u_{2,1} \dot T_d^*}{\hat D}.
\end{equation}

The quantity $\hat D$ is different from zero if and only if the matrix $U$ is in the regular part of $SU(3)$. This can be shown in two steps: First, one shows that $\hat D$ is invariant under the action of $S(U(1) \times U(2))$ by writing a matrix in $S(U(1) \times U(2))$ with an Euler-type decomposition as $F_1 R F_2$, with $F_1$ and $F_2$ diagonal and $R$ of the form
$
R=\begin{pmatrix} 1 & 0 & 0 \cr
0 & \cos(\theta) & \sin (\theta) \cr
0 & -\sin(\theta) & \cos(\theta)
\end{pmatrix},
$
and verifying that conjugation by each factor in $F_1 R F_2$ leaves $\hat D$ unchanged. The second step is to verify that $\hat D$ for a matrix $U$ in the form (\ref{canonicalformSU3}) is different from zero if and only if $|z_1| \not=1$ and $|z_2| \not=1$. This gives a quick test to check whether a matrix is in the regular part, i.e., whether its isotropy group is the smallest possible one, which, in this case, is the group of scalar matrices in $SU(3)$.
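As a sanity check, the linear system above can be verified numerically. The following Python sketch (our own illustration, not code from the paper; it assumes NumPy is available) builds a regular element of $SU(3)$ from the canonical form (\ref{canonicalformSU3}) and checks that the controls (\ref{alphabetasol}) reproduce arbitrarily prescribed velocities $\dot z_{1d}$ and $\dot T_d$; all variable names are ours.

```python
import numpy as np

# Build a regular U in SU(3) via the canonical form (canonicalformSU3):
# |z1| < 1 and |z2| < 1 guarantee U is in the regular part.
z1, z2 = 0.3 + 0.2j, -0.1 + 0.4j
s1 = np.sqrt(1 - abs(z1) ** 2)
w = np.sqrt(1 - abs(z2) ** 2)
A3 = np.array([[z1, s1, 0], [-s1, np.conj(z1), 0], [0, 0, 1]])
B3 = np.array([[1, 0, 0], [0, z2, w], [0, -np.conj(w), np.conj(z2)]])
U = A3 @ B3

z1dot, Tdot = 0.2 + 0.1j, -0.3 + 0.05j    # arbitrary prescribed velocities

# \hat D and the controls alpha, beta from (alphabetasol)
D = np.conj(U[0, 2]) * U[1, 0] - U[2, 0] * np.conj(U[0, 1])
alpha = (np.conj(U[0, 2]) * z1dot + U[2, 0] * np.conj(Tdot)) / D
beta = -(np.conj(U[0, 1]) * z1dot + U[1, 0] * np.conj(Tdot)) / D

# Residuals of the two linear equations; both should be ~ 0.
err1 = abs(alpha * U[1, 0] + beta * U[2, 0] - z1dot)
err2 = abs(-alpha * np.conj(U[0, 1]) - beta * np.conj(U[0, 2]) - np.conj(Tdot))
print(err1, err2)   # ~ 0, ~ 0
```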
\n{This fact also follows from the result in Appendix B which shows in general that $\\det(\\pi_*)$, with $\\pi_* :\\, R_{p *}{\\cal P} \\rightarrow T_{\\pi(p)} e^{\\cal L}_{reg}\/e^{\\cal K}$ is invariant under the action of $e^{\\cal K}$. }\n\nAs always, we have the problem that the initial point ${\\bf 1}$ is in the singular part of the orbit space decomposition and therefore $\\hat D$ in (\\ref{alphabetasol}) is zero at time $0$. As suggested in the previous section, we can however apply a preliminary control to \nsteer away from the singular part. \n\n\n\n\n \n \n\n\n \n\n\\section{Simultaneous control of two independent spin $\\frac{1}{2}$ particles}\\label{App2QB}\n\n\\subsection{The model}\nThe dynamics of two spin $\\frac{1}{2}$ particles with different gyromagnetic ratios in zero field NMR can be described by the Schr\\\"odinger equation (\\ref{Scro}) (after appropriate normalization) where \n\\be{Hamilto}\n\\sum_{k=1}^m \\hat u_k B_k:=\\sum_{x,y,z} u_{x,y,z}(t)( i\\sigma_{x,y,z} \\otimes {\\bf 1}+ \\gamma\n i {\\bf 1} \\otimes \\sigma_{x,y,z}). \n\\end{equation} \nHere $u_{x,y,z}$ are the controls representing the $x,y,z$ components of the electromagnetic field, and $\\sigma_{x,y,z}$ are the Pauli matrices defined as \n\\be{Pauli}\n\\sigma_x:= \\begin{pmatrix} 0 & 1 \\cr 1 & 0 \\end{pmatrix}, \\qquad\n\\sigma_y:= \\begin{pmatrix} 0 & -i \\cr i & 0 \\end{pmatrix}, \\qquad\n\\sigma_z:= \\begin{pmatrix} 1 & 0 \\cr 0 & -1 \\end{pmatrix}. \\qquad\n\\end{equation}\nThe parameter $\\gamma$ is the ratio of the two gyromagnetic ratios and \nwe shall assume that $|\\gamma|\\not=1$. 
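The control Hamiltonians in (\ref{Hamilto}) can be written down explicitly as $4\times 4$ matrices, and the dimension of the dynamical Lie algebra they generate can be checked numerically. The sketch below (our own check, assuming NumPy; not code from the paper) closes the span of the three generators under commutators and computes its dimension.

```python
import numpy as np

# Pauli matrices and the three generators B_k of (Hamilto)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)
gamma = 1 / 0.2514                     # |gamma| != 1

gens = [1j * (np.kron(s, I2) + gamma * np.kron(I2, s)) for s in (sx, sy, sz)]

def rank(mats):
    # dimension of the real span of a list of complex matrices
    rows = [np.concatenate([m.real.ravel(), m.imag.ravel()]) for m in mats]
    return np.linalg.matrix_rank(np.array(rows))

# close the span under brackets with the generators
basis = list(gens)
while True:
    new = [a @ b - b @ a for a in basis for b in gens]
    if rank(basis + new) == rank(basis):
        break
    basis += new
print(rank(basis))   # 6
```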
Under this assumption, the dynamical Lie algebra ${\cal L}$ for system (\ref{Scro}), (\ref{Hamilto}) is the $6$-dimensional Lie algebra spanned by $\{\sigma_1 \otimes {\bf 1} + {\bf 1} \otimes \sigma_2 \, | \, \sigma_1, \sigma_2 \in \mathfrak{su}(2)\}$.\footnote{This Lie algebra is isomorphic to $\mathfrak{so}(4)$.} The corresponding Lie group $e^{\cal L}$, which is the set of reachable states for system (\ref{Scro}), (\ref{Hamilto}), is $\{X_1 \otimes X_2 \, | \, X_1, X_2 \in SU(2)\}$, i.e., the tensor product $SU(2)\otimes SU(2)$. It is convenient to slightly relax the description of the state space and look at system (\ref{Scro}), (\ref{Hamilto}) as a system on the Lie group $SU(2) \times SU(2)$, i.e., the Cartesian direct product of $SU(2)$ with itself, with the dynamical equations (\ref{Scro}), (\ref{Hamilto}) replaced by
\be{reple}
\dot U= \sigma(t)U, \quad U(0)={\bf 1}, \qquad \dot V= \gamma \sigma(t)V, \quad V(0)={\bf 1},
\end{equation}
with $\sigma(t):=\sum_{x,y,z} i u_{x,y,z}(t) \sigma_{x,y,z}$. The controls that drive system (\ref{reple}) to $(\pm U_f, \pm V_f)$ drive system (\ref{Scro}), (\ref{Hamilto}) to the state $U_f \otimes V_f$. Therefore we shall focus on the steering problem for system (\ref{reple}), which consists of steering one spin to $U_f$ and the other to $V_f$, simultaneously. Since $|\gamma| \not=1$, the dynamical Lie algebra associated with (\ref{reple}) is spanned by the pairs $(\sigma_1, \sigma_2)$ with $\sigma_1$ and $\sigma_2$ in $\mathfrak{su}(2)$. Such a Lie algebra can be written as ${\cal K} \oplus {\cal P}$, with ${\cal K}$ spanned by elements of the form $(\sigma, \sigma)$ with $\sigma \in \mathfrak{su}(2)$ and ${\cal P}$ spanned by elements of the form $(\sigma, \gamma \sigma)$ with $\sigma \in \mathfrak{su}(2)$. At every $p \in G=SU(2) \times SU(2)$, the vector fields in (\ref{reple}) belong to $R_{p*} {\cal P}$.
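The reduction from the $SU(4)$ dynamics (\ref{Scro}), (\ref{Hamilto}) to the pair of $SU(2)$ equations (\ref{reple}) can be seen explicitly for a constant control, for which all the equations integrate in closed form. The following sketch (our own check, assuming NumPy/SciPy) verifies that the $SU(4)$ solution is exactly the tensor product of the two $SU(2)$ solutions.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)
gamma = 1 / 0.2514
u = (0.3, -0.4, 0.2)                      # constant controls, arbitrary values
T = 1.7

sigma = 1j * (u[0] * sx + u[1] * sy + u[2] * sz)
H = np.kron(sigma, I2) + gamma * np.kron(I2, sigma)   # sum_k u_k B_k in (Hamilto)

W = expm(T * H)                           # solution of (Scro) at time T
U = expm(T * sigma)                       # first equation of (reple)
V = expm(T * gamma * sigma)               # second equation of (reple)
print(np.abs(W - np.kron(U, V)).max())    # ~ 0: W = U tensor V
```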
\n\n\\subsection{Symmetries and the the structure of the quotient space} \n\nThe Lie group $SU(2)$ acts on $G:=SU(2) \\times SU(2)$ by {\\it simultaneous conjugation}, that is, for $K \\in SU(2)$, $(U_f, V_f) \\rightarrow (K U_f K^{-1}, K V_f K^{-1})$ and this \nis a group of symmetries for system (\\ref{reple}) in that if $\\sigma=\\sigma(t)$ is the control steering to $(U_f,V_f)$, then $K\\sigma K^{-1}$ is the control steering to $(KU_f K^{-1}, KV_f K^{-1})$. The quotient space of \n$SU(2) \\times SU(2)$ under this action, $(SU(2) \\times SU(2))\/SU(2)$ was described in \\cite{Xinhua} as follows. \n\nConsider \na pair $(U_f, V_f)$ and let $\\phi \\in [0,\\pi]$ be a real number so that the two eigenvalues of $U_f$ are $e^{i\\phi}$ \nand $e^{-i\\phi}$. If $0< \\phi < \\pi$ then $U_f \\not= \\pm {\\bf 1}$ and there exists a unitary \nmatrix $S$ such that $SU_f S^{\\dagger}:=D_f$ is diagonal. Therefore the matrix $(U_f, V_f)$ is in the same orbit as $(D_f, SV_f S^\\dagger)$. The parameter $\\phi$ determines the orbit, along with the $(1,1)-$entry of $SV_f S^\\dagger$, which does not depend on the choice of $S$.\\footnote{All the possible diagonalizing matrices differ by a diagonal factor that does not affect the $(1,1)$ entry of $SV_f S^\\dagger$.} Such a $(1,1)$-entry has absolute value $\\leq 1$ and therefore it is an element of the unit disk in the complex plane. The orbits corresponding to the values of $ 0< \\phi < \\pi$ (for the eigenvalue of the first matrix) are in one-to-one correspondence with the points of a solid cylinder with height equal to $\\phi$. When $\\phi=0$ (or $\\phi=\\pi$), the matrix $U_f$ is $\\pm$ identity and therefore the equivalence class is determined by the eigenvalue of the matrices $V_f$, which are $e^{\\pm i \\psi}$ for $\\psi\\in [0, \\pi]$. In the geometric description, the solid cylinder degenerates at the two ends to become a segment $[0,\\pi]$. 
The regular part $G_{reg}$ of $G$ is represented in the orbit space by the points in the interior of the solid cylinder. Such points correspond to pairs
$$
\left( \begin{pmatrix} e^{i \phi} & 0 \cr 0 & e^{-i\phi} \end{pmatrix}, \begin{pmatrix} z & w \cr -w^* & z^* \end{pmatrix}\right)
$$
with $\phi \in (0,\pi)$ and $|z|<1$. For these pairs, the isotropy group is the discrete group $\{ \pm {\bf 1} \}$. In general, the points in the singular part correspond to pairs of matrices $(U_f, V_f)$ which can be simultaneously diagonalized. Therefore the condition that they commute,
\be{commut34}
U_f V_f=V_fU_f,
\end{equation}
is necessary and sufficient for a pair $(U_f,V_f)$ to belong to the singular part.

Assume that $p$ is a regular point in $G_{reg}$ for this problem and $\pi$ is the natural projection $\pi: G_{reg} \rightarrow G_{reg}/SU(2)$. Then, from the theory in the previous section, the differential $\pi_*|_{p}$ is an isomorphism from $R_{p*}{\cal P}$ to $T_{\pi(p)} G_{reg}/SU(2)$. Let us choose a basis for ${\cal P}$ given by $(i\sigma_{x,y,z}, \gamma i\sigma_{x,y,z})$. To choose the three coordinates in $G_{reg}/SU(2)$, we consider a general element $p$ in $SU(2) \times SU(2)$ written as
\be{puntop}
p:=(U_f,V_f):=\left( \begin{pmatrix} x & y \cr -y^* & x^* \end{pmatrix}, \begin{pmatrix} z & w \cr
- w^* & z^* \end{pmatrix}\right).
\end{equation}
For a complex number $q$ we shall denote $q_R:=\texttt{Re}(q)$ and $q_I:=\texttt{Im}(q)$. Notice that in (\ref{puntop}) we have
$$
x_R^2+x_I^2+y_R^2+y_I^2=z_R^2+z_I^2+w_R^2+w_I^2=1.
$$
Coordinates in $G_{reg}/SU(2)$ must be independent invariant functions of $(U_f,V_f)$ in (\ref{puntop}).
We choose
\be{ccordinates}
x^1:=x_R, \qquad x^2:=z_R, \qquad x^3:=x_I z_I+w_R y_R+ w_I y_I.
\end{equation}
It is a direct verification that, at any point $p \in SU(2) \times SU(2)$, $x^1$, $x^2$ and $x^3$ are unchanged by the (simultaneous conjugation) action of $SU(2)$, i.e., they are invariant. We also remark that, if we consider the two unit vectors $\vec V:=(x_R,x_I, y_R, y_I)$ and $\vec W:=(z_R,z_I, w_R, w_I)$, then $x^3=\vec V \cdot \vec W-x_Rz_R$.

\subsection{Choice of invariants}
We pause a moment to detail how the invariant coordinates in (\ref{ccordinates}) were chosen. We do this because the method can be used for other examples. We consider the vectors $\vec V:=[x_R, x_I, y_R, y_I]^T$ and $\vec W:=[z_R, z_I, w_R, w_I]^T$ and the adjoint action of $SU(2)$ on $SU(2) \times SU(2)$, which gives a linear action on $\vec Q:=[\vec V^T, \vec W^T]^T$. We are looking for functions $f=f(\vec V, \vec W)$ invariant under this action. Given that every element of $SU(2)$ can be written according to Euler's decomposition as $e^{i \sigma_z \alpha} e^{i \sigma_y \theta} e^{i \sigma_z \beta}$, for real parameters $\alpha, \beta$ and $\theta$, it is enough that $f$ is invariant with respect to transformations of the form $e^{i \sigma_z \beta}$ and $e^{i \sigma_y \theta}$, for general real $\beta$ and $\theta$, in order for $f$ to be invariant with respect to all of $SU(2)$. If $X_z:=X_z(\beta):=e^{i \sigma_z \beta}$ then $Ad_{X_z}$ acting on $[\vec V^T, \vec W^T]^T$ is
\be{adXz}
Ad_{X_z(\beta)}:=\begin{pmatrix} 1 & 0 & 0 & 0 \cr 0 & 1 & 0 & 0 \cr
0 & 0 & \cos(\beta) & - \sin (\beta) \cr
0 & 0 & \sin(\beta) & \cos(\beta)
\end{pmatrix}.
\n\\end{equation}\nIf $X_y:=X_y(\\theta):=e^{i \\sigma_y \\theta}$ then $Ad_{X_y}$ acting on $[\\vec V^T, \\vec W^T]^T$ is \n\\be{adXy}\nAd_{X_y(\\theta)}:=\\begin{pmatrix} 1 & 0 & 0 & 0 \\cr 0 & \\cos(\\theta) & 0 & \\cos(\\theta) \\cr \n0 & 0 & 1 & 0 \\cr \n0 & -\\sin(\\theta) & 0 & \\cos(\\theta) \n\\end{pmatrix}. \n\\end{equation}\nWe first look for {\\it linear invariants}, i.e., invariant functions $f$ of the form \n$f(\\vec V, \\vec W):=\\vec a^{\\, T} \\vec V + \\vec b^{\\, T} \\vec W$. From the condition\n$$\n\\vec a^{\\, T} \\vec V + \\vec b^{\\, T} \\vec W=\\vec a^{\\, T} Ad_{X}\\vec V + \\vec b^{\\, T} Ad_{X}\\vec W, \n$$ \nwhere $X$ may be equal to $X_z(\\beta)$ or $X_{y}(\\theta)$, with arbitrary $\\beta$ and $\\theta$, we find that the last three components of $\\vec a$ and $\\vec b$ must be zero. Therefore all linear invariants $f$ must be of the form \n$f=a_1 x_R+b_1 z_R$, from which we get the invariant $x_R$ and $z_R$ in (\\ref{ccordinates}). \n\n\n\nWe then try to find {\\it quadratic invariants} and therefore an $8 \\times 8$ symmetric matrix $P$ so that\n$f(\\vec V, \\vec W)=[\\vec V^T, \\vec W^T] P [\\vec V^T, \\vec W^T]^T$ and \n$$\n[\\vec V^T, \\vec W^T] P [\\vec V^T, \\vec W^T]^T=[\\vec V^T, \\vec W^T] \\begin{pmatrix}Ad_X^T & 0 \\cr 0 & Ad_X^T \\end{pmatrix}P \\begin{pmatrix}Ad_X & 0 \\cr 0 & Ad_X \\end{pmatrix} [\\vec V^T, \\vec W^T]^T, \n$$\nfor $X=X_z(\\beta)$ and $X=X_{y}(\\theta)$ as defined in (\\ref{adXz}) and (\\ref{adXy}) for every $\\beta$ and $\\theta$ (and for every $\\vec V$ and $\\vec W$). This leads to the condition \n$$\n\\begin{pmatrix} Ad_X & 0 \\cr \n0 & Ad_X \\end{pmatrix} P=P \\begin{pmatrix} Ad_X & 0 \\cr \n0 & Ad_X \\end{pmatrix}. 
$$
From this, we find that the matrix $P$ must be of the form
$$
P=\begin{pmatrix} e & 0 & 0 & 0& d & 0& 0 & 0\cr
 0 & c & 0 & 0 &0 & g & 0 & 0 \cr
 0 & 0 & c & 0 & 0 & 0 & g & 0 \cr
 0 & 0 & 0 & c & 0 & 0 & 0 & g \cr
 d & 0 & 0 & 0 & r & 0 & 0 & 0\cr
 0 & g & 0 & 0 & 0 & h & 0 & 0 \cr
 0 & 0 & g & 0 & 0 & 0 & h& 0 \cr
 0 & 0 & 0 & g & 0 & 0 & 0 & h \end{pmatrix}.
$$
It follows that all quadratic invariants must be of the form
$$
f=ex_R^2+2dx_R z_R + r z_R^2+c(x_I^2+y_R^2+y_I^2) + h (z_I^2+w_R^2+w_I^2)+ 2g(x_I z_I+w_R y_R+ w_I y_I).
$$
Because of $x_R^2+x_I^2+y_R^2+y_I^2=z_R^2+z_I^2+w_R^2+w_I^2=1$, all terms can be written in terms of the (linear) invariants $x_R$ and $z_R$ except the last one, which we choose as the third coordinate in (\ref{ccordinates}).

\subsection{Algorithm for control}\label{AlgoCon}

At the point $\pi(p) \in G_{reg}/SU(2)$, the tangent vectors $\frac{\partial}{\partial x^j},$ $j=1,2,3$, span $T_{\pi(p)} G_{reg}/SU(2)$, so that a general tangent vector at $\pi(p)$ can be written as $a_1\frac{\partial}{\partial x^1}+a_2 \frac{\partial}{\partial x^2}+ a_3 \frac{\partial}{\partial x^3}$. We calculate the matrix of the isomorphism $\pi_*|_{p}$ mapping the coordinates $\alpha_x, \alpha_y, \alpha_z$ in $R_{p*} (\sigma, \gamma \sigma):=R_{p*}(\alpha_x(i \sigma_x, \gamma i \sigma_x) +\alpha_y (i \sigma_y, \gamma i \sigma_y)+ \alpha_z (i \sigma_z, \gamma i \sigma_z) ) \in R_{p*} {\cal P}$ to $(a_1,a_2,a_3)$ (cf. (\ref{diffsys})). Denote this matrix by $\Pi_{j,l}:=\Pi_{j,l}(p)$ with $j=1,2,3$ and $l=x,y,z$. We have
$$
\Pi_{j,l}(p)= \pi_{*}|_p R_{p*} (i \sigma_l, i \gamma \sigma_l) x^j.
$$
For the sake of illustration, let us calculate $\Pi_{1,x}(p)$.
This is given by (recall $p$ is defined in (\ref{puntop}))
$$
\Pi_{1,x}(p):=\pi_{p*}R_{p*} (i \sigma_x, i \gamma \sigma_x) x^1=R_{p*} (i \sigma_x, i \gamma \sigma_x) (x^1 \circ \pi)=\frac{d}{dt}|_{t=0} x^1\circ \pi \left( e^{i \sigma_x t} \begin{pmatrix} x & y \cr -y^* & x^*\end{pmatrix}\, , \,e^{i \gamma \sigma_x t} \begin{pmatrix} z & w \cr -w^* & z^*\end{pmatrix} \right).
$$
This simplifies because $x^1 \circ \pi $ does not depend on the second factor. Therefore the entry $\Pi_{1,x}(p)$ is the derivative at $t=0$ of the real part of the $(1,1)$ entry of the matrix
$$
e^{i \sigma_x t} \begin{pmatrix} x & y \cr -y^* & x^*\end{pmatrix}
=
\begin{pmatrix} \cos(t) & i \sin(t) \cr i \sin(t) & \cos(t) \end{pmatrix} \begin{pmatrix} x & y \cr -y^* & x^*\end{pmatrix}.
$$
This leads to the result
$$
\Pi_{1,x}(p)=-y_I.
$$

The quantities
\be{D1D2D3}
\Delta_1:=z_I y_R- x_I w_R, \qquad \Delta_2:=z_I y_I-x_Iw_I, \qquad \Delta_3:=w_Ry_I-w_Iy_R,
\end{equation}
appear routinely in the calculations that follow.

Calculations similar to the one above for $\Pi_{1,x}(p)$ lead to the full matrix $\Pi(p)$, which is given by
{
\be{PiofP}
\Pi(p):=
\begin{pmatrix} - y_I & -y_R & -x_I \cr
- \gamma w_I & - \gamma w_R & - \gamma z_I \cr
(\gamma -1) \Delta_1 +\gamma z_R y_I
+w_I x_R
& (1-\gamma) \Delta_2
+ w_R x_R
+ \gamma z_R y_R
 &
(\gamma-1) \Delta_3
+x_R z_I
+ \gamma z_R x_I
\end{pmatrix}.
\end{equation} }
The determinant of this matrix is different from zero if and only if $p$ is in the regular part, and it is another invariant under the action of $SU(2)$ on $SU(2) \times SU(2)$ (cf. Appendix B). It can be explicitly computed as
\be{determinante}
\det (\Pi (p))=\gamma(\gamma-1)(\Delta_1^2+\Delta_2^2+\Delta_3^2),
\end{equation}
which can be seen to be equal to zero if and only if condition (\ref{commut34}) is verified.
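These formulas can be checked numerically. The sketch below (our own illustration, assuming NumPy/SciPy; not code from the paper) computes each column of $\Pi(p)$ by finite differences of the invariants along the flow of $(i\sigma_l, i\gamma\sigma_l)$ and compares with (\ref{PiofP}); it also checks the determinant formula (\ref{determinante}) and the invariance of $(x^1,x^2,x^3)$ under simultaneous conjugation.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
gamma = 1 / 0.2514

def su2(a, b, c):                      # a generic element of SU(2)
    return expm(1j * (a * sx + b * sy + c * sz))

def invariants(U, V):                  # (x^1, x^2, x^3) of (ccordinates)
    x, y, z, w = U[0, 0], U[0, 1], V[0, 0], V[0, 1]
    return np.array([x.real, z.real,
                     x.imag * z.imag + w.real * y.real + w.imag * y.imag])

def Pi(U, V):                          # closed form (PiofP), plus Delta
    x, y, z, w = U[0, 0], U[0, 1], V[0, 0], V[0, 1]
    d1 = z.imag * y.real - x.imag * w.real
    d2 = z.imag * y.imag - x.imag * w.imag
    d3 = w.real * y.imag - w.imag * y.real
    P = np.array([
        [-y.imag, -y.real, -x.imag],
        [-gamma * w.imag, -gamma * w.real, -gamma * z.imag],
        [(gamma - 1) * d1 + gamma * z.real * y.imag + w.imag * x.real,
         (1 - gamma) * d2 + w.real * x.real + gamma * z.real * y.real,
         (gamma - 1) * d3 + x.real * z.imag + gamma * z.real * x.imag]])
    return P, d1 ** 2 + d2 ** 2 + d3 ** 2

U, V = su2(0.3, -0.5, 0.7), su2(-0.2, 0.4, 0.1)   # an arbitrary regular pair
P, Delta = Pi(U, V)

# finite-difference version of Pi, column by column
h = 1e-6
P_fd = np.zeros((3, 3))
for l, s in enumerate((sx, sy, sz)):
    dU, dV = expm(1j * s * h) @ U, expm(1j * gamma * s * h) @ V
    P_fd[:, l] = (invariants(dU, dV) - invariants(U, V)) / h

# invariance under simultaneous conjugation by an arbitrary K in SU(2)
K = su2(1.1, 0.2, -0.6)
err_inv = np.abs(invariants(U, V)
                 - invariants(K @ U @ K.conj().T, K @ V @ K.conj().T)).max()

print(np.abs(P - P_fd).max())                           # ~ 0
print(np.linalg.det(P) - gamma * (gamma - 1) * Delta)   # ~ 0, (determinante)
print(err_inv)                                          # ~ 0
```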
The invariant \n$\\Delta:=\\Delta_1^2+\\Delta_2^2+\\Delta_3^2$ can be expressed in terms of the (minimal) invariants $x_R$, $z_R$ and $x^3$ in (\\ref{ccordinates}) as\\footnote{This can be seen by expanding the left hand side using the definitions of $\\Delta_{1,2,3}$ (\\ref{D1D2D3}) and the right hand side using the definition of $x^3$ (\\ref{ccordinates}), so that (\\ref{delt}) reduces to $y_I^2w_R^2+w_I^2y_R^2+y_R^2 z_I^2+x_I^2 w_R^2+x_I^2 w_I^2+y_I^2 z_I^2=(1-x_R^2)(1-z_R^2)-z_I^2x_I^2-y_R^2 w_R^2-y_I^2 w_I^2$, and writing $(1-x_R^2)=x_I^2+y_R^2+y_I^2$ and $(1-z_R^2)=z_I^2+w_R^2+w_I^2$, we obtain an identity.}\n\\be{delt}\n\\Delta=\\Delta_1^2+\\Delta_2^2+\\Delta_3^2=(1-x_R^2)(1-z_R^2)-(x^3)^2. \n\\end{equation}\n\n\n\n\n\nWhen we design a control law, the components $a_1,a_2,a_3$ of the tangent vector at every time $t$ in the tangent space at $\\pi(p(t))$ are given by the derivatives $\\dot x^1$, $\\dot x^2$, $\\dot x^3$ of the desired trajectory in the quotient space. The corresponding components, $\\alpha_x$, $\\alpha_y$ and $\\alpha_z$, of the tangent vector in $R_{p(t)*}{\\cal P}$ give the appropriate control functions $(u_x, u_y, u_z)$. The matrix $\\Pi(p)$ in (\\ref{PiofP}) gives the map from the control to trajectories. Since we want to specify trajectories and compute the corresponding controls, we need the inverse of the matrix $\\Pi(p)$ (cf. (\\ref{deficontr})). 
This is found from (\\ref{PiofP}) to be \n\\be{Piinverse}\n\\det(\\Pi(p)) \\Pi^{-1}(p):=\n\\begin{pmatrix}\n\\gamma(\\gamma -1)(-w_R \\Delta_3 - z_I \\Delta_2) + \\gamma^2z_R \\Delta_1 &\n(\\gamma -1)(x_I \\Delta_2 + y_R \\Delta_3)+ x_R \\Delta_1 & \\gamma \\Delta_1 \\cr \n\\gamma (\\gamma -1)(w_I \\Delta_3 - z_I \\Delta_1)- \\gamma^2 z_R \\Delta_2 & \n(\\gamma -1) (x_I \\Delta_1 - y_I \\Delta_3) - x_R \\Delta_2 & \n- \\gamma \\Delta_2 \\cr \n\\gamma ( \\gamma-1) ( w_I \\Delta_2 + w_R \\Delta_1) + \\gamma^2 z_R \\Delta_3 \n& \n-(\\gamma-1)(y_I \\Delta_2 +y_R \\Delta_1) + x_R \\Delta_3 \n& \n\\gamma \\Delta_3\n\\end{pmatrix}. \n\\end{equation}\n\n\nWe remark that $\\Pi^{-1}(p)$ is not defined if we are in the singular part of the space $G=SU(2) \\times SU(2)$ as the determinant of $\\Pi$ is zero there. This is in particular true at the beginning as the initial point $p \\in SU(2) \\times SU(2)$ is the identity. In order to follow a prescribed trajectory in the quotient space $G_{reg}\/SU(2)$, we need therefore to apply a preliminary control to drive the state to an arbitrary point in $G_{reg}$ and after that we \nshall apply the control corresponding to a prescribed trajectory in the quotient space. \n\n\n\nThe preliminary control in an interval $[0,T_1]$ to move the state from the singular part of the quotient space has to involve at least two different directions in the tangent space. In other terms, if we use $\\sigma(t)=u(t) \\sigma$ for some function $u=u(t)$ and a constant matrix $\\sigma \\in \\mathfrak{su}(2)$ we remain in the singular part. To see this, notice that if \n$d:=\\int_0^{T_1} u(s)ds$, then the solution of (\\ref{reple}) will be $(U_f, V_f)=(e^{d \\sigma}, e^{\\gamma d \\sigma})$, a pair that satisfies the condition (\\ref{commut34}). Therefore the simplest control strategy of moving in one direction only will not work if we want to move the state from the singular part. 
Furthermore, we want $u_x(0)=u_y(0)=u_z(0)=0$ and $u_x(T_1)=u_y(T_1)=u_z(T_1)=0$ to avoid discontinuities at the initial time $t=0$ and at the time of concatenation with the second portion of the control. We propose to prescribe a trajectory \nfor $U=U(t)$ in (\\ref{reple}) and, from that trajectory, to derive the control to be used in the equation for $V=V(t)$ in (\\ref{reple}). We choose a smooth function $\\delta:=\\delta(t)$ such that \n$\\delta(0)=0$ and $\\delta(T_1)=\\delta_0 \\not=0 $, and $\\dot \\delta(0)=\\dot \\delta(T_1)=0$. \nWe also choose a smooth function $\\epsilon:=\\epsilon(t)$, with $\\epsilon(0)=\\epsilon_0 \\not=0 $ and $\\epsilon(T_1)=0$, and $\\dot \\epsilon(0)=\\dot \\epsilon(T_1)=0$. We choose for $U=U(t)$ in (\\ref{reple}) \n\\be{Uoft}\nU(t)=\\begin{pmatrix} \\cos(\\delta(t)) & e^{i \\epsilon(t)} \\sin(\\delta(t)) \\cr \n-e^{-i\\epsilon(t)} \\sin(\\delta(t)) & \\cos(\\delta(t)) \\end{pmatrix}, \n\\end{equation}\nwhich at time $T_1$ gives \n\\be{Poil}\nU(T_1)=\\begin{pmatrix} \\cos(\\delta_0) & \\sin(\\delta_0) \\cr \n-\\sin(\\delta_0) & \\cos(\\delta_0) \\end{pmatrix}. \n\\end{equation}\nThe corresponding control $\\sigma$ is $\\sigma(t)=\\dot U U^\\dagger$, which is \n\\be{controlSS}\n\\sigma(t)=\\begin{pmatrix} i \\dot \\epsilon \\sin^2(\\delta) & \\dot \\delta e^{i \\epsilon}+ \\frac{i}{2} \\, \\dot \\epsilon \\,\n{\\sin(2 \\delta)}\\, e^{i\\epsilon} \\cr \n- \\dot \\delta e^{-i \\epsilon}+ \\frac{i}{2} \\dot \\epsilon \\, {\\sin(2 \\delta)} \\, e^{-i\\epsilon} & -i \\dot \\epsilon \\sin^2(\\delta) \\end{pmatrix}.\n\\end{equation}\nPlacing this in the second equation of (\\ref{reple}) and integrating numerically we obtain the values for $V(T_1)$, the second component of $(U,V)$, and therefore the values of $(z_R,z_I,w_R,w_I)$. 
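The closed form (\ref{controlSS}) can be checked at any time instant against a finite-difference evaluation of $\dot U U^\dagger$ for the parametrization (\ref{Uoft}); the sketch below (our own check, assuming NumPy, with arbitrary values of $\delta$, $\dot\delta$, $\epsilon$, $\dot\epsilon$) does exactly that.

```python
import numpy as np

delta, ddelta = 0.4, 0.7      # delta(t), \dot delta(t) at the chosen instant
eps, deps = 0.9, -0.3         # epsilon(t), \dot epsilon(t)

def Uof(d, e):                # the parametrization (Uoft)
    return np.array([[np.cos(d), np.exp(1j * e) * np.sin(d)],
                     [-np.exp(-1j * e) * np.sin(d), np.cos(d)]])

# finite-difference \dot U U^dagger
h = 1e-6
Udot = (Uof(delta + ddelta * h, eps + deps * h) - Uof(delta, eps)) / h
sigma_fd = Udot @ Uof(delta, eps).conj().T

# closed form (controlSS)
sigma = np.array([
    [1j * deps * np.sin(delta) ** 2,
     (ddelta + 0.5j * deps * np.sin(2 * delta)) * np.exp(1j * eps)],
    [(-ddelta + 0.5j * deps * np.sin(2 * delta)) * np.exp(-1j * eps),
     -1j * deps * np.sin(delta) ** 2]])
print(np.abs(sigma - sigma_fd).max())   # ~ 0
```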
Using these values and the expression for $(x_R,x_I,y_R,y_I)$ in (\ref{Poil}), and using the formula for $\Delta$ given in (\ref{delt}), we obtain
\be{accor5}
\Delta=\Delta_1^2+\Delta_2^2+\Delta_3^2=\sin^2(\delta_0)(1-z_R^2(T_1)-w_R^2(T_1))=\sin^2(\delta_0)(z^2_I(T_1)+w^2_I(T_1)),
\end{equation}
which has to be different from zero in order for the state to be in the regular part.

The second portion of the control depends on the trajectory followed, $(x^1,x^2,x^3)=(x^1(t),x^2(t),x^3(t))$, and it is obtained by multiplying $(\dot x^1,\dot x^2, \dot x^3)^T$ by $\Pi^{-1}$ in (\ref{Piinverse}). The trajectory $(x^1,x^2,x^3)$ is almost completely arbitrary; however, it has to satisfy certain conditions which we now discuss. Let us denote the interval where the second part of the control is used by $[0,T_2]$. The initial condition $(x^1(0),x^2(0),x^3(0))$ has to agree with the one given by the previous interval of control. The final condition $(x^1(T_2),x^2(T_2),x^3(T_2))$ has to agree with the orbit of the desired final condition. Moreover, care has to be taken to make sure that the trajectory is such that $\Delta$ in (\ref{delt}) is never zero, because this would create a singularity in $\Pi^{-1}(p)$. Furthermore, we need $\dot x^1(0)=\dot x^2(0)=\dot x^3(0)=0$, which gives $\sigma(0)=0$, to ensure continuity with the control in the previous interval, and we also choose $\dot x^1(T_2)=\dot x^2(T_2)=\dot x^3(T_2)=0$ to ensure that the control is switched off at the end of the procedure. Finally, the functions $(x^1, x^2, x^3)$ have to be representative of a possible trajectory for special unitary matrices. This means that, with $\vec V:=(x_R, x_I, y_R, y_I)^T$ and $\vec W:=(z_R, z_I, w_R, w_I)^T$, $\|\vec V (t)\|^2=\|\vec W(t)\|^2=1$ at every $t$.
Therefore $|x_R(t)| < 1$ and $|z_R(t)| < 1$ at every $t$ (to avoid singularities), and from the Schwarz inequality $|\vec V \cdot \vec W |\leq 1$ we also must have $|x_R z_R + x^3|\leq 1$, and therefore
\be{condx3}
 -1-x_R z_R \leq x^3 \leq 1 - x_R z_R.
\end{equation}

Once the functions $(\dot x^1, \dot x^2,\dot x^3)$ are chosen, the system to integrate numerically is (\ref{reple}) with $(u_x,u_y,u_z)$ given by $\Pi^{-1}(p)(\dot x^1, \dot x^2,\dot x^3)^T$. By computing $(u_x,u_y,u_z)$ with the explicit expression of $\Pi^{-1}$ given in (\ref{Piinverse}) and replacing into (\ref{reple}), it is possible to obtain a simplified system of differential equations for $(x_R,x_I,...,z_R,z_I,...,w_I)$ without implementing the preliminary step of calculating the control. We found this system to be more stable in numerical integration with MATLAB and report it in Appendix A for future use.

\subsection{Numerical example: Driving to two different Hadamard gates}

We now apply the above technique to a specific numerical example: The problem is to drive the system (\ref{Hamilto}) so that the first spin performs the Hadamard-type gate
\be{Hada1}
H_1:=\begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \cr
\frac{-1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\end{pmatrix}
\end{equation}
and the second spin performs the Hadamard gate
\be{Hada2}
H_2:=\begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{-i}{\sqrt{2}} \cr
\frac{-i}{\sqrt{2}} & \frac{1}{\sqrt{2}}\end{pmatrix}.
\end{equation}
We want to drive system (\ref{reple}) to $(U_f, V_f)=(H_1, H_2)$. The orbit of the desired final condition is characterized by the invariant coordinates
\be{finalorbit}
x^1=x_R=\frac{1}{\sqrt{2}}, \qquad x^2=z_R=\frac{1}{\sqrt{2}}, \qquad x^3=x_I z_I+ y_R w_R+ y_I w_I=0.
\end{equation}
We take a physical value for the ratio $\gamma$ between the two gyromagnetic ratios.
In particular, we will choose $\gamma\approx \frac{1}{0.2514}$, which corresponds to the Hydrogen-Carbon ($^{1}\mathrm{H}$--$^{13}\mathrm{C}$) system also considered in \cite{Xinhua}.

We first consider the control that moves the state away from the singular part, in a time interval $[0,1]$. We choose $\sigma$ in (\ref{controlSS}) with the functions $\delta$ and $\epsilon$ as follows:
\be{deltaandepsilon}
\delta=6 \delta_0 \left( \frac{t^2}{2} -\frac{t^3}{3} \right), \qquad \epsilon=\epsilon_0 + 6 \epsilon_0 \left( \frac{t^3}{3} -\frac{t^2}{2} \right).
\end{equation}
With these functions $\delta$ and $\epsilon$, $\sigma$ satisfies all the requirements described above. From (\ref{deltaandepsilon}) and (\ref{controlSS}) we obtain the controls $u_x,u_y,u_z$, which, replaced into (\ref{reple}), give the dynamics in the interval $[0,T_1]=[0,1]$. Numerical integration with the values of the parameters $\delta_0=0.5$ and $\epsilon_0=1$ gives the following conditions at time $T_1=1$ (cf. (\ref{Poil}))
\be{condizioni}
U(1)=\begin{pmatrix} \cos(0.5) & \sin(0.5) \cr
-\sin(0.5) & \cos(0.5) \end{pmatrix}, \qquad V(1) \approx \begin{pmatrix} -0.3472+i0.7769 & -0.5123 -i 0.1157\cr
0.5123-i0.1157 & -0.3472-i0.7769 \end{pmatrix}.
\end{equation}
The value of $\Delta$ is, according to (\ref{accor5}), $\Delta \approx \sin^2(0.5)\left( (0.7769 )^2+(0.1157)^2 \right) \not= 0$, as desired.
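This preliminary step can be reproduced with any standard integrator; the sketch below is our own Python/RK4 re-implementation (the paper's integration was done in MATLAB). It integrates (\ref{reple}) on $[0,1]$ with $\sigma$ from (\ref{controlSS}) and $\delta$, $\epsilon$ from (\ref{deltaandepsilon}), with $\delta_0=0.5$, $\epsilon_0=1$, and checks that $U(1)$ matches the closed form (\ref{Poil}) and that $\Delta$ from (\ref{accor5}) is nonzero.

```python
import numpy as np

gamma, d0, e0 = 1 / 0.2514, 0.5, 1.0

def sig(t):                             # the control matrix (controlSS)
    d = 6 * d0 * (t ** 2 / 2 - t ** 3 / 3)
    dd = 6 * d0 * (t - t ** 2)          # \dot delta
    e = e0 + 6 * e0 * (t ** 3 / 3 - t ** 2 / 2)
    de = 6 * e0 * (t ** 2 - t)          # \dot epsilon
    return np.array([
        [1j * de * np.sin(d) ** 2,
         (dd + 0.5j * de * np.sin(2 * d)) * np.exp(1j * e)],
        [(-dd + 0.5j * de * np.sin(2 * d)) * np.exp(-1j * e),
         -1j * de * np.sin(d) ** 2]])

def rk4(M, g, t, h):                    # one RK4 step of Mdot = g*sig(t)*M
    k1 = g * sig(t) @ M
    k2 = g * sig(t + h / 2) @ (M + h / 2 * k1)
    k3 = g * sig(t + h / 2) @ (M + h / 2 * k2)
    k4 = g * sig(t + h) @ (M + h * k3)
    return M + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

U = np.eye(2, dtype=complex)
V = np.eye(2, dtype=complex)
n = 1000
h = 1.0 / n
for k in range(n):
    U = rk4(U, 1.0, k * h, h)           # \dot U = sigma U
    V = rk4(V, gamma, k * h, h)         # \dot V = gamma sigma V

z, w = V[0, 0], V[0, 1]
Delta = np.sin(d0) ** 2 * (z.imag ** 2 + w.imag ** 2)   # (accor5)
print(abs(U[0, 0] - np.cos(d0)), abs(U[0, 1] - np.sin(d0)), Delta)
```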
\n\n\n\nThe values of the variables to be used as initial conditions in the integration in the subsequent interval of the procedure are \n$\nx_R(1)=x^1(1)=\\cos(0.5), \\quad x_I(1)=0, \\quad y_R(1)=\\sin(0.5), \\quad y_I(1)=0, \\quad \nz_R(1)=x^2(1)=-0.3472, \\quad z_I(1)=0.7769, \\quad w_R(1)=-0.5123, \\quad w_I(1)=-0.1157, \n$ \nand $x^3(1)=x_I(1) z_I(1)+y_R(1) w_R(1)+y_I(1) w_I(1)=\\sin(0.5) \\times (-0.5123)\\approx -0.2456.$ For the subsequent \n interval $[0,T_2]$ we choose the trajectory $x^1(t),x^2(t),x^3(t)$ in the quotient space as follows: \n$T_2=10$ and the trajectory in the interval $[0,T_2]$ is \n\\be{x1t}\nx^1(t)=-\\frac{1}{500} \\left( \\frac{1}{\\sqrt{2}}-\\cos(0.5)\\right)t^3+\\frac{3}{100} \\left( \\frac{1}{\\sqrt{2}} - \\cos(0.5) \\right)t^2+ \\cos(0.5);\n\\end{equation}\n\\be{x2t}\nx^2(t)=-\\frac{1}{500} \\left( \\frac{1}{\\sqrt{2}}+0.3472 \\right) t^3+\\frac{3}{100} \\left( \\frac{1}{\\sqrt{2}}+\n0.3472 \\right)t^2-0.3472; \n\\end{equation} \n\\be{x3t}\nx^3(t)=\\frac{-0.2456}{500} t^3-\\frac{3 \\times (-0.2456)}{100} t^2-0.2456, \n\\end{equation}\nwhich are easily seen to satisfy the conditions at the endpoints. Moreover, by plotting $x^1$ and $x^2$ we see that \n$|x^1(t)|\\leq 1$ and $|x^2(t)|\\leq 1$ for every $t \\in [0,10]$ (Figure \\ref{Fig1}). \nBy plotting $x^3$ vs $1-x^1 x^2$ and $-1-x^1x^2$ (Figure \\ref{Fig2}) we find that $-1-x^1 x^2(t) \\leq x^3(t) \\leq 1-x^1 x^2(t)$ for every $t \\in [0,10]$, as required by condition (\\ref{condx3}). \nBy plotting $\\Delta=\\Delta(t)$ in $[0,10]$ we verify that $\\Delta(t) \\not=0$ for every $t \\in [0,10]$ (Figure \\ref{Fig3}). Therefore the whole trajectory is in the regular part. 
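The three cubics above are smoothstep-type interpolants: each has zero derivative at both endpoints of $[0,10]$. As a cross-check of the plots in Figures \ref{Fig1} and \ref{Fig2}, the following Python sketch (ours; the constant term of $x^1$ is taken to be $\cos(0.5)$, the value of $x^1$ at the end of the first interval) rebuilds the trajectory and verifies the endpoint conditions and the constraints $|x^1|\leq 1$, $|x^2|\leq 1$ and (\ref{condx3}) on a fine grid.

```python
import math

c1 = 1 / math.sqrt(2) - math.cos(0.5)   # x^1: final value minus initial value
c2 = 1 / math.sqrt(2) + 0.3472          # x^2: final value minus initial value
e = -0.2456                             # x^3: initial value (final value is 0)

x1 = lambda t: -c1 / 500 * t**3 + 3 * c1 / 100 * t**2 + math.cos(0.5)
x2 = lambda t: -c2 / 500 * t**3 + 3 * c2 / 100 * t**2 - 0.3472
x3 = lambda t: e / 500 * t**3 - 3 * e / 100 * t**2 + e

# Endpoint conditions at t = 0 and t = T_2 = 10.
assert abs(x1(0) - math.cos(0.5)) < 1e-12 and abs(x1(10) - 1 / math.sqrt(2)) < 1e-12
assert abs(x2(0) + 0.3472) < 1e-12 and abs(x2(10) - 1 / math.sqrt(2)) < 1e-12
assert abs(x3(0) - e) < 1e-12 and abs(x3(10)) < 1e-12

# Constraints |x^1| <= 1, |x^2| <= 1 and -1 - x^1 x^2 <= x^3 <= 1 - x^1 x^2.
for k in range(1001):
    t = 10 * k / 1000
    a, b, c = x1(t), x2(t), x3(t)
    assert abs(a) <= 1 and abs(b) <= 1
    assert -1 - a * b <= c <= 1 - a * b
print("all trajectory constraints hold on [0, 10]")
```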
\n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=\\textwidth, height=0.4\\textheight]{Fig1}\n\\vspace*{-16mm}\n\\caption{Prescribed $x^1$ and $x^2$ variables in the (second) interval $[0,10]$.}\n\\label{Fig1}\n\\end{figure}\n\n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=\\textwidth, height=.4\\textheight]{Fig2}\n\\vspace*{-16mm}\n\\caption{$x^3=x^3(t)$ vs $-1-x^1(t)x^2(t)$ and $ 1-x^1(t)x^2(t)$ in the (second) interval $[0,10]$.}\n\\label{Fig2}\n\\end{figure}\n\n\n\n\n \n \n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=\\textwidth, height=.4\\textheight]{Fig3}\n\\vspace*{-16mm}\n\\caption{$\\Delta=\\Delta(t)=\\Delta_1^2(t)+\\Delta_2^2(t)+\\Delta_3^2(t)$ in the (second) interval $[0,10]$.}\n\\label{Fig3}\n\\end{figure}\n\nThe {\\it full} trajectory, in the union of the two intervals, and with the concatenation \nof the two controls is depicted in Figure \\ref{Fig4}. Let us denote the full control by $\\hat \\sigma=\\hat \\sigma(t)=\nu_x i\\sigma_x+ u_y i \\sigma_y+ u_z i \\sigma_z$. The final condition $(\\hat U_f, \\hat V_f)$ is given by \n\\be{Finalprel}\n\\hat U_f=\\begin{pmatrix} 0.7071-0.2795i & 0.5913+0.2685i \\cr \n-0.5913+0.2685i & 0.7071+0.2795i \\end{pmatrix}, \n\\qquad \\hat V_f=\\begin{pmatrix} 0.7071+0.2708i & 0.3718-0.5369i \\cr \n -0.3718-0.5369i & 0.7071-0.2708i \\end{pmatrix}. \n\\end{equation} \n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=\\textwidth, height=.4\\textheight]{BeforeConjugation}\n\\vspace*{-16mm}\n\\caption{Trajectory under the preliminary control in the total interval $[0,11]$.}\n\\label{Fig4}\n\\end{figure}\nThis condition, as expected, is in the same orbit as the desired final condition $(H_1,H_2)$ in (\\ref{Hada1}), (\\ref{Hada2}), that is, there exists a matrix $K \\in SU(2)$ such that\n$K\\hat U_fK^\\dagger=H_1$ and $K\\hat V_fK^\\dagger=H_2$. 
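Conjugation by a fixed $K \in SU(2)$ preserves traces, so a necessary condition for $(\hat U_f, \hat V_f)$ to lie on the same orbit as $(H_1, H_2)$ is $\mathrm{tr}\, \hat U_f = \mathrm{tr}\, H_1 = \sqrt{2}$ and $\mathrm{tr}\, \hat V_f = \mathrm{tr}\, H_2 = \sqrt{2}$. This is quick to confirm numerically from the matrices in (\ref{Finalprel}); the Python check below is ours, not part of the paper.

```python
import numpy as np

# Final conditions (Finalprel) reached under the preliminary control.
U_f = np.array([[0.7071 - 0.2795j, 0.5913 + 0.2685j],
                [-0.5913 + 0.2685j, 0.7071 + 0.2795j]])
V_f = np.array([[0.7071 + 0.2708j, 0.3718 - 0.5369j],
                [-0.3718 - 0.5369j, 0.7071 - 0.2708j]])

sqrt2 = 2 ** 0.5  # tr H_1 = tr H_2 = 2/sqrt(2) = sqrt(2)

# Traces are invariant under U -> K U K^dagger, so they must match sqrt(2).
assert abs(np.trace(U_f) - sqrt2) < 1e-3
assert abs(np.trace(V_f) - sqrt2) < 1e-3

# Both matrices should also be unitary up to the 4-digit rounding above.
assert np.allclose(U_f @ U_f.conj().T, np.eye(2), atol=1e-3)
assert np.allclose(V_f @ V_f.conj().T, np.eye(2), atol=1e-3)
print("trace and unitarity checks passed")
```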
The matrix $K$ solving these equations is found to be \n$$\nK=\\begin{pmatrix} 0.1485-0.2460i & -0.2444+0.9260i \\cr \n 0.2444+0.9260i & 0.1485+0.2460i\\end{pmatrix}. \n$$\nIn particular, to find $K$ one can diagonalize $\\hat U_f$ and $H_1$, i.e., $\\hat U_f=P \\Lambda P^\\dagger$ and $H_1=R\\Lambda R^\\dagger$ for a diagonal matrix $\\Lambda$, so that, from $KP\\Lambda P^\\dagger K^\\dagger=R \\Lambda R^\\dagger$, we find that $R^\\dagger K P=D$ for some diagonal matrix $D$. This matrix is found by solving $DP^\\dagger \\hat V_f P=R^\\dagger H_2 R D$. The control $K \\hat \\sigma K^\\dagger$ then steers the system to the desired final condition. The resulting trajectory leading to the desired final condition (\\ref{Hada1}), (\\ref{Hada2}) is given in Figure \\ref{Fig5}. \n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=\\textwidth, height=.4\\textheight]{AfterConjugation}\n\\vspace*{-16mm}\n\\caption{Trajectory under the control algorithm leading to the desired final condition (\\ref{Hada1}) and (\\ref{Hada2}).}\n\\label{Fig5}\n\\end{figure}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} {"text":"\\section*{Introduction}\n\n The notion of the complexity $c(M)$ of a compact $3$-manifold $M$\nwas introduced in \\cite{Matveev-1990}. The complexity is defined as\nthe minimal possible number of true vertices of an almost simple\nspine of $M$. If $M$ is closed and irreducible and $c(M)>0$, then\n$c(M)$ is the minimal number of tetrahedra needed to obtain $M$ by\ngluing together their faces. The problem of calculating the\ncomplexity $c(M)$ is very difficult. The exact values of the\ncomplexity are presently known only for certain infinite series of\nirreducible boundary irreducible $3$-manifolds \\cite{FMP,\nAnisov-2005, Jaco-Rubinstein-Tillmann-2009}. In addition, this\nproblem is solved for all closed orientable irreducible manifolds up\nto complexity $12$ (see \\cite{Matveev-2005}). 
Note that the table\ngiven in \\cite{Matveev-2005} contains $36833$ manifolds and is only\navailable in electronic form \\cite{Atlas}.\n\n The task of finding an upper bound for the complexity of a manifold\n$M$ does not present any particular difficulties. To do that it\nsuffices to construct an almost simple spine $P$ of $M$. The number\nof true vertices of $P$ will serve as an upper bound for the\ncomplexity. It is known \\cite[2.1.2]{Matveev-2003} that an almost\nsimple spine can be easily constructed from practically any\nrepresentation of a manifold. The rather large number of manifolds\nin \\cite{Atlas} gives rise to a new task of finding potentially\nsharp upper bounds for the complexity, i.e. upper bounds that would\nyield the exact value of the complexity for all manifolds from the\ntable \\cite{Atlas}. An important result in this direction was\nobtained by Martelli and Petronio \\cite{Martelli-Petronio-2004}.\nThey found a potentially sharp upper bound for the complexity of all\nclosed orientable Seifert manifolds. Similar results for infinite\nfamilies of graph manifolds can be found in\n\\cite{Fominykh-Ovchinnikov-2005, Fominykh-2008}.\n\n An upper bound $h(r\/s, t\/u, v\/w)$ for the complexity of hyperbolic\nmanifolds obtained by surgeries on the link $6^3_1$ (in Rolfsen's\nnotation \\cite{Rolfsen-1976}) with rational parameters $(r\/s, t\/u,\nv\/w)$ is given by Martelli and Petronio in\n\\cite{Martelli-Petronio-2004}. It turns out that the bound is not\nsharp for a large number of manifolds, as the following two examples\nshow. First, the value of $h$ is equal to $10$ only for $13$ of $24$\nmanifolds of complexity $10$ obtained by surgeries on $6^3_1$ (see\n\\cite{Martelli-Petronio-2004}). Second, on analyzing the table\n\\cite{Atlas} we noticed that the bound is not sharp for $44$ of $46$\nmanifolds of the type $6^3_1(1, 2, v\/w)$ with complexity less or\nequal to $12$. 
Denote by $4_1(p\/q)$ the closed orientable\n$3$-manifold obtained from the figure eight knot $4_1$ by\n$p\/q$-surgery. Since the manifolds $4_1(p\/q)$ and $6^3_1(1, 2,\np\/q+1)$ are homeomorphic, a potentially sharp upper bound for the\ncomplexity of such manifolds becomes important.\n\n The following theorem is the main result of the paper. To give an\nexact formulation, we need to introduce a certain\n$\\mathbb{N}$-valued function $\\omega(p\/q)$ on the set of\nnon-negative rational numbers. Let $p\\geqslant 0$, $q\\geqslant 1$ be\nrelatively prime integers, let $[p\/q]$ be the integer part of $p\/q$,\nand let $rem(p,q)$ be the remainder of the division of $p$ by $q$.\nAs in \\cite{Matveev-2003}, we denote by $S(p,q)$ the sum of all\npartial quotients in the expansion of $p\/q$ as a regular continued\nfraction. Now we define:\n $$\\omega(p\/q) = a(p\/q) + \\max\\{[p\/q]-3, 0\\} + S(rem(p,q),q),$$\nwhere\n $$a(p\/q) = \\left\\{\n\\begin{array}{ll}\n 6, & \\hbox{if } p\/q=4, \\\\\n 7, & \\hbox{if } p\/q\\in \\mathbb{Z} \\hbox{ and } p\/q\\neq 4,\\\\\n 8, & \\hbox{if } p\/q\\not\\in \\mathbb{Z}.\\\\\n\\end{array}\n\\right.$$\n\n\\begin{theorem}\n For any two relatively prime integers $p\\geqslant 0$ and\n$q\\geqslant 1$ we have the inequality $c(4_1(p\/q))\\leqslant\n\\omega(p\/q)$. Moreover, if $\\omega(p\/q)\\leqslant 12$, then\n$c(4_1(p\/q)) = \\omega(p\/q)$.\n\\end{theorem}\n\n Note that the restrictions $p\\geqslant 0$ and $q\\geqslant 1$ in the\nabove theorem are inessential, since the knot $4_1$ is equivalent to\nits mirror image, which implies $4_1(-p\/q)$ is homeomorphic to\n$4_1(p\/q)$.\n\n\n\\section{Preliminaries}\n\n In this section we recall some known definitions and facts that\nwill be used in the paper.\n\n\\subsection{Theta-curves on a torus}\n\n By a theta-curve $\\theta\\subset T$ on a torus $T$ we mean a graph\nthat is homeomorphic to a circle with a diameter and such that\n$T\\setminus \\theta$ is an open disc. 
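The function $\omega(p/q)$ defined just before the theorem is easy to evaluate mechanically: $S(p,q)$ is the sum of the quotients produced by the Euclidean algorithm applied to $(p,q)$. A possible Python transcription (ours, for illustration only):

```python
from math import gcd

def S(p, q):
    """Sum of all partial quotients of the regular continued fraction of p/q."""
    total = 0
    while q:
        total += p // q
        p, q = q, p % q
    return total

def omega(p, q):
    """omega(p/q) for relatively prime p >= 0, q >= 1."""
    assert p >= 0 and q >= 1 and gcd(p, q) == 1
    if q == 1:                      # p/q is an integer
        a = 6 if p == 4 else 7
    else:
        a = 8
    return a + max(p // q - 3, 0) + S(p % q, q)

# The five non-hyperbolic fillings p/q in {0, 1, 2, 3, 4} all give omega = 7,
# which matches the complexity-7 values quoted in the proof of the theorem.
assert all(omega(p, 1) == 7 for p in range(5))
print(omega(5, 1), omega(1, 2))  # 9 10
```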
It is well known\n\\cite{Martelli-Petronio-2004, Anisov-1994} that any two theta-curves\non $T$ can be transformed into each other by isotopies and by a\nsequence of flips (see Fig.~\\ref{flip-transformation}). Let us endow\nthe set $\\Theta(T)$ of theta-curves on $T$ with the distance\nfunction $d$, where for given $\\theta, \\theta'\\in \\Theta(T)$ the\ndistance $d(\\theta, \\theta')$ between them is the minimal number of\nflips required to transform $\\theta$ into $\\theta'$.\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.6]{flip.eps}\n\\caption{A flip-transformation} \\label{flip-transformation}\n\\end{figure}\n\n For calculating the distance between two theta-curves on a torus\nwe use the classical ideal triangulation $\\mathbb{F}$ (Farey\ntessellation) of the hyperbolic plane $\\mathbb{H}^2$. If we view the\nhyperbolic plane $\\mathbb{H}^2$ as the upper half plane of\n$\\mathbb{C}$ bounded by the circle $\\partial \\mathbb{H}^2 =\n\\mathbb{R}\\cup \\{\\infty\\}$, then the triangulation $\\mathbb{F}$ has\nvertices at the points of $\\mathbb{Q}\\cup \\{1\/0\\}\\subset \\partial\n\\mathbb{H}^2$, where $1\/0=\\infty$, and its edges are all the\ngeodesics in $\\mathbb{H}^2$ with endpoints the pairs $a\/b$, $c\/d$\nsuch that $ad-bc=\\pm 1$. For convenience, the images of the\nhyperbolic plane $\\mathbb{H}^2$ and of the triangulation\n$\\mathbb{F}$ under the mapping $z\\to (z-i)\/(z+i)$ are shown in\nFig.~\\ref{triangulation}.\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.5]{circle.eps}\n\\caption{The ideal Farey triangulation of the hyperbolic plane}\n\\label{triangulation}\n\\end{figure}\n\n Fix some coordinate system $(\\mu, \\lambda)$ on a torus $T$. We now\nconstruct a map $\\Psi_{\\mu, \\lambda}$ from $\\Theta(T)$ to the set of\ntriangles of $\\mathbb{F}$. 
To do that we consider the map\n$\\psi_{\\mu, \\lambda}$ that assigns to each nontrivial simple closed\ncurve $\\mu^{\\alpha}\\lambda^{\\beta}$ on $T$ the point $\\alpha\/\n\\beta\\in \\partial\\mathbb{H}^2$. Note that each theta-curve $\\theta$\non $T$ contains three nontrivial simple closed curves $\\ell_1$,\n$\\ell_2$, $\\ell_3$, that are formed by the pairs of edges of\n$\\theta$. Since the intersection index of every two curves $\\ell_i$,\n$\\ell_j$, $i\\neq j$, is equal to $\\pm 1$, the points $\\psi_{\\mu,\n\\lambda}(\\ell_1)$, $\\psi_{\\mu, \\lambda}(\\ell_2)$, $\\psi_{\\mu,\n\\lambda}(\\ell_3)$ are the vertices of a triangle $\\triangle$ of the\nFarey triangulation, and we define $\\Psi_{\\mu, \\lambda}(\\theta)$ to\nbe $\\triangle$.\n\n Denote by $\\Sigma$ the graph dual to the triangulation\n$\\mathbb{F}$. This graph is a tree because the triangulation is\nideal. We now define the distance between any two triangles of\n$\\mathbb{F}$ to be the number of edges of the only simple path in\n$\\Sigma$ that joins the corresponding vertices of the dual graph.\nThe key observation used for the practical calculations is that for\nany coordinate system $(\\mu, \\lambda)$ on $T$ the distance between\nany two theta-curves $\\theta$, $\\theta'$ is equal to the distance\nbetween the triangles $\\Psi_{\\mu, \\lambda}(\\theta)$, $\\Psi_{\\mu,\n\\lambda}(\\theta')$ of the Farey triangulation. 
The reason is that if\n$\\theta'$ is obtained from $\\theta$ via a flip, the corresponding\ntriangles have a common edge.\n\n\\subsection{Simple and special spines}\n\n A compact polyhedron $P$, following Matveev \\cite{Matveev-2003}, is\ncalled simple if the link of each point $x\\in P$ is homeomorphic to\none of the following $1$-dimensional polyhedra:\n\\begin{itemize}\n \\item[(a)] a circle (the point $x$ is then called nonsingular);\n \\item[(b)] a circle with a diameter (then $x$ is a triple point);\n \\item[(c)] a circle with three radii (then $x$ is a true vertex).\n\\end{itemize}\n\n The components of the set of nonsingular points are said to\nbe the $2$-components of $P$, while the components of the set of\ntriple points are said to be the triple lines of $P$. A simple\npolyhedron is special if each of its triple lines is an open\n$1$-cell and each of its $2$-components is an open $2$-cell.\n\n A subpolyhedron $P$ of a $3$-manifold $M$ is a spine of $M$ if\n$\\partial M\\neq\\emptyset$ and the manifold $M\\setminus P$ is\nhomeomorphic to $\\partial M \\times (0,1]$, or $\\partial M=\\emptyset$\nand $M\\setminus P$ is an open ball. A spine of a $3$-manifold is\ncalled simple or special if it is a simple or special polyhedron,\nrespectively.\n\n\n\n\\subsection{Relative spines}\n\n A manifold with boundary pattern, following Johannson\n\\cite{Johannson-1979}, is a $3$-manifold $M$ with a fixed graph\n$\\Gamma \\subset \\partial M$ that does not have any isolated\nvertices. A manifold $M$ with boundary pattern $\\Gamma$ can be\nconveniently viewed as a pair $(M, \\Gamma)$. The case $\\Gamma =\n\\emptyset$ is also allowed.\n\n\\begin{definition}\n Let $(M, \\Gamma)$ be a $3$-manifold with boundary pattern. 
Then a\nsubpolyhedron $P\\subset M$ is called a relative spine of $(M,\n\\Gamma)$ if the following holds:\n \\begin{enumerate}\n \\item $M\\setminus P$ is an open ball;\n \\item $\\partial M \\subset P$;\n \\item $\\partial M \\cap Cl(P\\setminus\\partial M) = \\Gamma$.\n \\end{enumerate}\n\\end{definition}\n\nA relative spine is simple if it is a simple polyhedron.\nObviously, if $M$ is closed, then any relative spine of $(M,\n\\emptyset)$ is a spine of $M$.\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.6]{2blocks.eps}\n\\caption{Examples of simple relative spines} \\label{blocks}\n\\end{figure}\n\n\\begin{example}\n Let $V$ be a solid torus with a meridian $m$. Choose a\nsimple closed curve $\\ell$ on $\\partial V$ that intersects $m$ twice\nin the same direction. Note that $\\ell$ decomposes $m$ into two\narcs. Consider a theta-curve $\\theta_V\\subset \\partial V$ consisting\nof $\\ell$ and an arc (denote it by $\\gamma$) of $m$. Then the\nmanifold $(V, \\theta_V)$ has a simple relative spine without\ninterior true vertices. This spine is the union of $\\partial V$, a\nM\\\"{o}bius strip inside V, and a part of meridional disc bounded by\n$\\gamma$ (Figure~\\ref{blocks}a).\n\n Note that among the three nontrivial simple closed curves contained in\n$\\theta_V$, none is isotopic to the meridian $m$ of $V$. On the\nother hand, applying the flip to $\\theta_V$ along $\\gamma$, we get a\ntheta-curve $\\theta_m\\subset \\partial V$ containing $m$.\n\\end{example}\n\n\\begin{example}\n Let $\\theta$, $\\theta'$ be two theta-curves on a torus $T$ such that\n$\\theta'$ is obtained from $\\theta$ by exactly one flip. Then the\nmanifold\n $$(T\\times [0, 1], (\\theta\\times \\{0\\})\\cup (\\theta'\\times \\{1\\}))$$\nhas a simple relative spine $R$ with one interior true vertex (in\nFigure~\\ref{blocks}b the torus $T$ is represented as a square with\nthe sides identified). 
Note that $R$ satisfies the following\nconditions:\n \\begin{itemize}\n \\item[(1)] for each $t\\in [0, 1\/2)$ a theta-curve $\\theta_t$, where\n $R\\cap (T\\times \\{t\\}) = \\theta_t\\times \\{t\\}$, is isotopic to\n $\\theta$;\n \\item[(2)] for each $t\\in (1\/2, 1]$ the theta-curve $\\theta_t$ is isotopic to\n $\\theta'$;\n \\item[(3)] $R\\cap (T\\times \\{1\/2\\})$ is a wedge of two circles.\n \\end{itemize}\n\\end{example}\n\n\\subsection{Assembling of manifolds with boundary patterns}\n\n Denote by $\\mathscr{T}$ the class of all manifolds $(M,\n\\Gamma)$ such that any component $T$ of $\\partial M$ is a torus and\n$T \\cap \\Gamma$ is a theta-curve. Let $(M, \\Gamma)$ and $(M',\n\\Gamma')$ be two manifolds in $\\mathscr{T}$ with nonempty\nboundaries. Choose two tori $T\\subseteq \\partial M$, $T'\\subseteq\n\\partial M'$ and a homeomorphism $\\varphi: T\\to T'$ taking the theta-curve\n$\\theta = T\\cap \\Gamma$ to the theta-curve $\\theta' = T'\\cap\n\\Gamma'$. Then we can construct a new manifold $(W, \\Xi)\\in\n\\mathscr{T}$, where $W = M\\cup_\\varphi M'$, and $\\Xi =\n(\\Gamma\\setminus \\theta)\\cup (\\Gamma'\\setminus \\theta')$. 
In this\ncase we say that the manifold $(W, \\Xi)$ is obtained by assembling $(M,\n\\Gamma)$ and $(M', \\Gamma')$ \\cite{Martelli-Petronio-2001}.\n\n Note that if manifolds $(M, \\Gamma)$ and $(M', \\Gamma')$ have\nsimple relative spines denoted $P$ and $P'$ respectively, with $v$\nand $v'$ interior true vertices, then the manifold $(W, \\Xi)$ has a\nsimple relative spine $R$ with $v + v'$ interior true vertices.\nIndeed, $R$ can be obtained by gluing $P$ and $P'$ along $\\varphi$\nand removing the open disc in $P\\cup_\\varphi P'$ that is obtained by\nidentifying $T\\setminus \\theta$ with $T'\\setminus \\theta'$.\n\n To prove the main theorem of the paper we generalize the notion of assembling\nby removing the restriction $\\varphi(\\theta) = \\theta'$.\n\n\\begin{lemma}\n \\label{assembling}\n Let $(M, \\Gamma)$ and $(M', \\Gamma')$ be two manifolds in\n$\\mathscr{T}$ with nonempty boundaries that admit simple relative\nspines with $v$ and $v'$ interior true vertices respectively. Then\nfor any homeomorphism $\\varphi: T\\to T'$ of a torus $T\\subseteq\n\\partial M$ onto a torus $T'\\subseteq \\partial M'$ there exists a simple\nrelative spine of a manifold $(W, \\Upsilon)$, where $W =\nM\\cup_\\varphi M'$ and $\\Upsilon = (\\Gamma\\setminus \\theta)\\cup\n(\\Gamma'\\setminus \\theta')$, with $v + v' + d(\\varphi(\\theta),\n\\theta')$ interior true vertices.\n\\end{lemma}\n\n\\begin{proof}\n First, by induction on the number $n = d(\\varphi(\\theta), \\theta')$\nwe prove that there exists a simple relative spine of the manifold\n $$(M'', \\Gamma'') = (T'\\times [0, 1], (\\varphi(\\theta)\\times \\{0\\})\\cup\n (\\theta'\\times \\{1\\}))$$\nwith $n$ interior true vertices. If $n=0$, i.e. the theta-curve\n$\\varphi(\\theta)$ is isotopic to the theta-curve $\\theta'$, the\ndesired spine is isotopic to the polyhedron\n$(\\varphi(\\theta)\\times [0, 1])\\cup \\partial M''$. Suppose that $n > 0$. 
As noted at the beginning of Section 1.1, there exists a sequence $\\{\\theta_i\\}_{i=0}^n$ of\npairwise distinct theta-curves on the torus $T'$ such that\n$\\theta_0 = \\varphi(\\theta)$, $\\theta_n = \\theta'$, and $\\theta_i$\nis obtained from $\\theta_{i-1}$ by a flip, for $i=1,\\ldots, n$. The\ninduction assumption implies that the manifold\n \\begin{equation}\n \\label{m1}\n (T'\\times [0, 1\/2], (\\theta_0\\times \\{0\\})\\cup\n (\\theta_{n-1}\\times \\{1\/2\\}))\n \\end{equation}\nhas a simple relative spine with $n-1$ interior true vertices.\nFurthermore, the simple relative spine of the manifold\n \\begin{equation}\n \\label{m2}\n (T'\\times [1\/2, 1], (\\theta_{n-1}\\times \\{1\/2\\})\\cup\n (\\theta_n\\times \\{1\\}))\n \\end{equation}\nwith one interior true vertex is described in Example 2. Then\nthe desired spine of the manifold $(M'', \\Gamma'')$ is obtained by\nassembling the manifolds (\\ref{m1}) and (\\ref{m2}) along the\nidentity map on $T'\\times \\{1\/2\\}$.\n\n Now, note that the consecutive assemblings of the manifolds $(M,\n\\Gamma)$, $(M'', \\Gamma'')$ and $(M', \\Gamma')$ along natural\nhomeomorphisms that take each point $x\\in T$ to the point\n$(\\varphi(x), 0)\\in T'\\times \\{0\\}$, and each point $(y, 1)\\in\nT'\\times \\{1\\}$ to the point $y\\in T'$, yield the manifold $(W,\n\\Upsilon)$ and its simple relative spine with $v + v' +\nd(\\varphi(\\theta), \\theta')$ interior true vertices.\n\\end{proof}\n\n\n\\section{Relative spines of the figure eight knot complement}\n\n In this section we construct some simple relative spines of\nthe figure eight knot complement $E(4_1)$. Let us fix a canonical\ncoordinate system on the boundary torus $\\partial E(4_1)$ consisting\nof oriented closed curves $\\mu$, $\\lambda$ such that the meridian\n$\\mu$ generates $H_1(E(4_1); \\mathbb{Z})$ and the longitude\n$\\lambda$ bounds a surface in $E(4_1)$. 
This system determines the\nmap $\\Psi_{\\mu, \\lambda}$ from $\\Theta(T)$ to the set of triangles\nof the Farey triangulation. Denote by $\\triangle^{(i)}$ the triangle\nof $\\mathbb{F}$ with the vertices at $i$, $i+1$, and $\\infty$.\n\n\\begin{proposition}\n \\label{spine}\n For any $i\\in\\{ 0, 1, 2, 3\\}$ there exists a theta-curve\n$\\theta^{(i)}$ on the torus $\\partial E(4_1)$ such that the\nmanifold $(E(4_1), \\theta^{(i)})$ has a simple relative spine with\n$10$ interior true vertices and $\\Psi_{\\mu, \\lambda}(\\theta^{(i)})\n= \\triangle^{(i)}$.\n\\end{proposition}\n\n\\begin{proof}\n Step 1. Let $P$ be a special spine of an arbitrary compact orientable\n$3$-manifold $M$ whose boundary is a torus, and let $\\theta$ be a\ntheta-curve on $\\partial M$. We begin the proof by describing a\nmethod for constructing a simple relative spine $R(P, \\theta)$ of\nthe manifold $M$.\n\n By Theorem 1.1.7 \\cite{Matveev-2003}, $M$ can be identified with\nthe mapping cylinder of a local embedding $f:\\partial M\\to P$.\nDenote by $f_{|\\theta}:\\theta\\to P$ the restriction to $\\theta$ of\nthe map $f$. Then the union $R(P, \\theta)$ of the mapping cylinder\nof $f_{|\\theta}$ and of $\\partial M$ is a relative spine of $M$,\nsince $\\partial M \\subset R(P, \\theta)$, $\\partial M \\cap Cl(R(P,\n\\theta)\\setminus\\partial M) = \\theta$, and $M\\setminus R(P, \\theta)$\nis homeomorphic to the direct product of the open disc $\\partial\nM\\setminus \\theta$ with an interval. In general, $R(P, \\theta)$ just\nconstructed is not necessarily a simple polyhedron. This can be\ndealt with by introducing the notion of general position. 
We say\nthat a theta-curve $\\theta\\subset \\partial M$ is in general position\nwith respect to the map $f$, if the image $f(\\theta)$ satisfies the\nfollowing conditions.\n\\begin{enumerate}\n \\item $f(\\theta)$ contains no true vertices of $P$.\n \\item For any intersection point $x$ of $f(\\theta)$ with\n the triple lines of $P$ there exists a neighborhood $U(x)\\subset P$\n such that the intersection $U(x)\\cap f(\\theta)$ is an arc\n meeting the set of the triple lines of $P$ transversally exactly at $x$.\n \\item For any intersection point $x$ of the set $f(\\theta)$ with\n the $2$-components of $P$ its inverse image $f^{-1}_{|\\theta}(x)$ consists of at most\n two points of $\\theta$. Moreover, if $f^{-1}_{|\\theta}(x)$ consists of exactly\n two points, then there exists a neighborhood $U(x)\\subset P$ such that\n the inverse image $f^{-1}_{|\\theta}(U(x)\\cap f(\\theta))$ of the\n intersection $U(x)\\cap f(\\theta)$ is the disjoint union of two\n arcs $\\gamma_1$, $\\gamma_2$ of $\\theta$, and the images $f(\\gamma_1)$, $f(\\gamma_2)$\n intersect each other transversally at exactly one point $x$.\n Such a point $x$ is called the self-intersection point of the image $f(\\theta)$\n of $\\theta$.\n\\end{enumerate}\n\nObviously, if a theta-curve $\\theta$ is in general position with\nrespect to the map $f$, then the relative spine $R(P, \\theta)$ of\nthe manifold $M$ is simple.\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.6]{P_2.eps}\n\\caption{A minimal spine of the complement of the figure eight knot}\n\\label{minspine}\n\\end{figure}\n\n Step 2. We consider now the minimal special spine $P$ of the manifold\n$M = E(4_1)$ shown in Figure \\ref{minspine} (see\n\\cite[2.4.2]{Matveev-2003}). To construct the theta-curves\n$\\theta^{(i)}$, $i\\in\\{ 0, 1, 2, 3\\}$, we need to describe certain\ncell decompositions of the torus $T=\\partial M$ and of its universal\ncovering $\\tilde{T}$. 
The local embedding $f:T\\to P$ determines a\ncell decomposition of $T$ as follows.\n\\begin{enumerate}\n \\item The inverse image $f^{-1}(C)$ of every open $k$-dimensional\n cell $C$ of $P$ consists of two open $2$-cells if $k=2$,\n three open arcs if $k=1$, and four points if $k=0$.\n \\item The restriction of $f$ to each of these cells\n is a homeomorphism onto the corresponding cell of $P$.\n\\end{enumerate}\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.7]{decompositions.eps}\n\\caption{Cell decompositions of $\\tilde{T}$ (left) and $T$ (right)}\n\\label{decomposition}\n\\end{figure}\n\n Construct the universal covering $\\tilde{T}$ of $T$. It can\nbe presented as a plane decomposed into hexagons, see Fig.\n\\ref{decomposition}a. The group of covering translations is\nisomorphic to the group $\\pi_1(T) = H_1(T; \\mathbb{Z})$. We choose a\nbasis $\\tilde{\\mu}$, $\\tilde{\\lambda}$ as shown in Fig.\n\\ref{decomposition}a. It is easy to see that the corresponding\nelements of $\\pi_1(T)$ (which can be also viewed as oriented loops)\nform the canonical coordinate system $(\\mu, \\lambda)$ on $T$. If we\nfactor this covering by the translations $\\tilde{\\mu}$,\n$\\tilde{\\lambda}$, we recover $T$. If we additionally identify the\nhexagons marked by the letter $A$ with respect to the composition of\nthe symmetry in the dotted diagonal of the hexagon and the\ntranslation by $-\\tilde{\\mu} + \\tilde{\\lambda}\/2$, and do the same\nfor the hexagons marked by the letter $B$, we obtain $P$. The torus\n$T$ is shown in Fig. \\ref{decomposition}b as a polygon $D$ composed\nof four hexagons. Each side of $D$ is identified with some other one\nvia the translation along one of the three vectors $\\tilde{\\mu}$,\n$-2\\tilde{\\mu}+\\tilde{\\lambda}$, and $-\\tilde{\\mu}+\\tilde{\\lambda}$.\nThe spine $P$ can be presented as the union of two hexagons, see\nFig. \\ref{theta0} (right). The edges of the hexagons are decorated\nwith four different patterns. 
To recover $P$, one should identify\nthe edges having the same pattern.\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.4]{theta0_2.eps}\n\\caption{The theta-curve $\\theta^{(0)}$} \\label{theta0}\n\\end{figure}\n\n Step 3. Now for each $i\\in\\{ 0, 1, 2, 3\\}$ we exhibit a theta-curve\n$\\theta^{(i)}\\subset \\partial M$ such that the simple relative\nspine $R(P, \\theta^{(i)})$ of $M$ has $10$ interior true vertices\nand $\\Psi_{\\mu, \\lambda}(\\theta^{(i)}) = \\triangle^{(i)}$.\n\n Consider the wedge of the three arcs on $\\tilde{T}$, see Fig.\n\\ref{theta0} (left). The projections of the arcs onto $T$ yield a\ntheta-curve that we denoted by $\\theta^{(0)}$. It can be checked\ndirectly that $\\theta^{(0)}$ is in general position with respect to\nthe map $f$, and $\\Psi_{\\mu, \\lambda}(\\theta^{(0)}) =\n\\triangle^{(0)}$. It remains to note that the set of the interior\ntrue vertices of $R(P, \\theta^{(0)})$ consists of (a) the two true\nvertices of the special polyhedron $P$, (b) the images under $f$ of\nthe two vertices of $\\theta^{(0)}$, (c) the five intersection points\nof the set $f(\\theta^{(0)})$ with the triple lines of $P$, see Fig.\n\\ref{theta0} (left), and (d) one self-intersection point of the\nimage $f(\\theta^{(0)})$ of $\\theta^{(0)}$ (shown in Fig.\n\\ref{theta0} (right) as a fat gray dot).\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.4]{theta1_2.eps}\n\\caption{The theta-curve $\\theta^{(1)}$} \\label{theta1}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.4]{theta2_2.eps}\n\\caption{The theta-curve $\\theta^{(2)}$} \\label{theta2}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.4]{theta3_2.eps}\n\\caption{The theta-curve $\\theta^{(3)}$} \\label{theta3}\n\\end{figure}\n\n The theta-curves $\\theta^{(1)}$, $\\theta^{(2)}$, $\\theta^{(3)}$\nsatisfying the conclusion of the Proposition are shown in Fig.\n\\ref{theta1}, \\ref{theta2}, \\ref{theta3}. 
We point out that among\nthe $10$ interior true vertices of $R(P, \\theta^{(3)})$ there are\n$6$ intersection points of the set $f(\\theta^{(3)})$ with the triple\nlines of $P$, see Fig. \\ref{theta3} (left), while there are no\nself-intersection points of the image $f(\\theta^{(3)})$ of\n$\\theta^{(3)}$, see Fig. \\ref{theta3} (right).\n\\end{proof}\n\n\n\\section{Proof of the main theorem}\n\n Let $p\\geqslant 0$ and $q\\geqslant 1$ be two relatively prime integers.\nTo prove the inequality $c(4_1(p\/q))\\leqslant \\omega(p\/q)$ it\nsuffices to construct a simple spine of the manifold $4_1(p\/q)$ with\n$\\omega(p\/q)$ true vertices.\n\n Thurston \\cite{Thurston-1978} proved that the manifold $4_1(p\/q)$ is\nhyperbolic except for $p\/q\\in \\{0, 1, 2, 3, 4, \\infty\\}$. The case\n$p\/q=\\infty$ does not satisfy the assumptions of the Theorem. In\neach of the five remaining cases the non-hyperbolic manifold\n$4_1(p\/q)$ has complexity $7$ and $\\omega(p\/q)=7$.\n\n Let us construct a simple spine of the hyperbolic manifold\n$4_1(p\/q)$. Recall that the meridian $m$ and the theta-curve\n$\\theta_m$ on the boundary of $(V, \\theta_V)$ were fixed in Example\n1. Let $(\\mu, \\lambda)$ be the canonical coordinate system on the\nboundary torus $\\partial E(4_1)$ of the figure eight knot complement\n$E(4_1)$. Among all homeomorphisms $\\partial V \\to\n\\partial E(4_1)$ that take $m$ to the curve $\\mu^p\\lambda^q$, we\nchoose a homeomorphism $\\varphi$ such that the distance between the\ntheta-curves $\\varphi(\\theta_m)$ and $\\theta^{(0)}$ be as small as\npossible. For convenience denote by $z$ the number $\\min\\{[p\/q],\n3\\}$. By the Proposition, the manifold $(E(4_1), \\theta^{(z)})$ has\na simple relative spine with $10$ interior true vertices. 
Since\n$4_1(p\/q) = V\\cup_\\varphi E(4_1)$, it follows from Lemma~\\ref{assembling} that the\nmanifold $(4_1(p\/q), \\emptyset)$ has a simple relative spine\n$Q_{p\/q}$ with $10 + d(\\varphi(\\theta_V), \\theta^{(z)})$ interior\ntrue vertices. Moreover, $Q_{p\/q}$ is a spine of $4_1(p\/q)$, since\n$\\partial 4_1(p\/q) = \\emptyset$.\n\n Now let us prove that $d(\\varphi(\\theta_V), \\theta^{(z)}) = -2 +\n\\max\\{[p\/q]-3, 0\\} + S(rem(p,q),q)$. Recall that for each $i\\in\\{ 0,\n1, 2, 3\\}$ the map $\\Psi_{\\mu, \\lambda}$ takes $\\theta^{(i)}$ to the\ntriangle $\\triangle^{(i)}$ of the Farey triangulation with the\nvertices at $i$, $i+1$, and $\\infty$. Denote by $\\triangle_V$ and\n$\\triangle_m$ the triangles $\\Psi_{\\mu, \\lambda}(\\varphi(\\theta_V))$\nand $\\Psi_{\\mu, \\lambda}(\\varphi(\\theta_m))$, respectively. Since\nthe distance between theta-curves on $\\partial E(4_1)$ is equal to\nthe distance between the corresponding triangles of $\\mathbb{F}$, it\nis sufficient to find $d(\\triangle_V, \\triangle^{(z)})$.\n\n The choice of $\\varphi$ guarantees that $\\triangle_m$ is\nthe closest triangle to $\\triangle^{(0)}$ among all the triangles\nwith a vertex at $p\/q$. This implies (see \\cite[Proposition\n4.3]{Martelli-Petronio-2004} and \\cite[Lemma 2]{Fominykh-2008}) that\n$d(\\triangle_m, \\triangle^{(0)}) = S(p,q)-1$. Since the theta-curve\n$\\theta_V$ is obtained from $\\theta_m$ by exactly one flip and\n$\\theta_V$ does not contain the meridian $m$, the triangle\n$\\triangle_V$ has a common edge with $\\triangle_m$ and $p\/q$ is not\na vertex of $\\triangle_V$. Hence, $d(\\triangle_V, \\triangle^{(0)}) =\nS(p,q)-2$. Analyzing the Farey triangulation, we can notice that\n$d(\\triangle_V, \\triangle^{(z)}) = d(\\triangle_V, \\triangle^{(0)}) -\nd(\\triangle^{(z)}, \\triangle^{(0)})$. 
Taking into account that\n$d(\\triangle^{(z)}, \\triangle^{(0)})=z$, $S(p,q) = [p\/q] +\nS(rem(p,q),q)$, and $[p\/q] - \\min\\{[p\/q], 3\\} = \\max\\{[p\/q]-3, 0\\}$,\nwe get the equality $d(\\varphi(\\theta_V), \\theta^{(z)}) =\nd(\\triangle_V, \\triangle^{(z)}) = -2 + \\max\\{[p\/q]-3, 0\\} +\nS(rem(p,q),q)$.\n\n Note that if $p\/q\\not\\in \\mathbb{Z}$, then $Q_{p\/q}$ is the desired\nspine, since it contains $\\omega(p\/q)$ true vertices. On the other\nhand, if $p\/q\\in \\mathbb{Z}$, the spine $Q_{p\/q}$ contains\n$\\omega(p\/q)+1$ true vertices. In this case $Q_{p\/q}$ can be\ntransformed into another simple spine $Q_{p\/q}'$ of $4_1(p\/q)$ by a\nsequence of moves along boundary curves of length $4$ (similar\narguments can be found in \\cite[page 81]{Matveev-2003}). The spine\n$Q_{p\/q}'$ has the same number of true vertices but possesses a\nboundary curve of length $3$, hence it can be simplified. The result\nis a new spine of $4_1(p\/q)$ with $\\omega(p\/q)$ true vertices.\n\n To conclude the proof of the theorem, it remains to note that the\ntable \\cite{Atlas} contains $46$ hyperbolic manifolds of the type\n$4_1(p\/q)$ satisfying $\\omega(p\/q)\\leqslant 12$. For each of them\nour upper bound is sharp.\n\n\n\\footnotesize\n\n\\section*{Acknowledgment}\n\nThe authors greatly appreciate the financial support from the Rail Manufacturing Cooperative Research Centre (funded jointly by participating rail organizations and the Australian Federal Government's Business-Cooperative Research Centres Program) through Project R3.7.3 - Rail infrastructure defect detection through video analytics.\n\n\\small\n\n\\section{Conclusion}\n\nWe propose a Poisson Transfer Network (PTN) to tackle the semi-supervised few-shot problem, aiming to explore the value of unlabeled novel-class data from two aspects. 
We propose to employ the Poisson learning model to capture the relations between the few labeled and unlabeled data, which results in a more stable and informative classifier than previous semi-supervised few-shot models. Moreover, we propose to adopt unsupervised contrastive learning to improve the generality of the embedding on novel classes, which avoids the possible over-fitting problem when training with few labeled samples. \nIntegrating the two modules, the proposed PTN can fully explore the unlabeled auxiliary information, boosting the performance of few-shot learning.\nExtensive experiments indicate that PTN outperforms state-of-the-art few-shot and semi-supervised few-shot methods.\n\n\n\\section{Experiments}\n\\subsection{Datasets}\nWe evaluate the proposed PTN on two few-shot benchmark datasets: miniImageNet and tieredImageNet. The miniImageNet dataset \\cite{vinyals2016matching} is a subset of ImageNet, consisting of 100 classes, each containing 600 images of size 84$\\times$84. We follow the standard split of 64 base, 16 validation, and 20 test classes \\cite{vinyals2016matching,tian2020rethinking}. The tieredImageNet dataset \\cite{ren2018meta} is another subset of ImageNet, with 608 classes. We follow the standard split of 351 base, 97 validation, and 160 test classes for the experiments \\cite{ren2018meta,liu2018learning}. We resize the images from tieredImageNet to 84$\\times$84 pixels, and randomly select $C$ classes from the novel classes to construct each few-shot task. Within each class, $K$ examples are selected as the labeled data, and $V$ examples from the rest as queries. The extra $N$ unlabeled samples are selected from the $C$ classes or from the remaining novel classes. We set $C=5$, $K=\\{1,5\\}$, $V=15$, and study different sizes of $N$. 
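The episode construction just described can be sketched in a few lines (a hypothetical in-memory dataset is assumed: a dict mapping each novel class to its list of images; `sample_episode` and `mean_with_ci` are illustrative names, not part of any released code, and $N$ is assumed divisible by $C$):

```python
import random
import statistics

def sample_episode(class_to_images, C=5, K=1, V=15, N=100):
    """Sample one C-way K-shot episode with V queries per class and
    N extra unlabeled images (assumes N is divisible by C)."""
    classes = random.sample(sorted(class_to_images), C)
    support, query, unlabeled = [], [], []
    for label, c in enumerate(classes):
        imgs = random.sample(class_to_images[c], K + V + N // C)
        support += [(x, label) for x in imgs[:K]]
        query += [(x, label) for x in imgs[K:K + V]]
        unlabeled += imgs[K + V:]          # labels are discarded
    return support, query, unlabeled

def mean_with_ci(accuracies):
    """Mean accuracy and 95% confidence half-interval over episodes."""
    m = statistics.mean(accuracies)
    half = 1.96 * statistics.stdev(accuracies) / len(accuracies) ** 0.5
    return m, half
```

For the reported numbers, `mean_with_ci` would be applied to the per-episode accuracies of 600 such tasks.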
We run 600 few-shot tasks and report the mean accuracy with the 95\% confidence interval.\n\n\subsection{Implementation Details}\nAs in previous works~\cite{rusu2018meta,dhillon2019baseline,liu2019prototype,tian2020rethinking,yu2020transmatch}, we adopt the wide residual network (WRN-28-10) \cite{zagoruyko2016wide} as the backbone of our base model $W_{\phi} \circ f_{\theta_0}$, and we follow the protocols in \cite{tian2020rethinking,yu2020transmatch}, fusing the base and validation classes to train the base model from scratch. We set the batch size to 64, with an SGD learning rate of 0.05 and a weight decay of $5\times 10^{-4}$. We reduce the learning rate by a factor of 0.1 after 60 and 80 epochs. The base model is trained for 100 epochs.\n\n\n\begin{table*}[t]\n\centering\n\resizebox{2.0\columnwidth}{!}{%\n\begin{tabular}{lcccc}\n\hline\n\multicolumn{1}{c}{\multirow{2}{*}{Methods}} & \multirow{2}{*}{Type} & \multirow{2}{*}{Backbone} & \multicolumn{2}{c}{miniImageNet} \\ \cline{4-5} \n\multicolumn{1}{c}{} & & & 1-shot & 5-shot \\ \hline\nPrototypical-Net \cite{snell2017prototypical} & Metric, Meta & ConvNet-256 & 49.42$\pm$0.78 & 68.20$\pm$0.66 \\\nRelation Network \cite{sung2018learning} & Metric, Meta & ConvNet-64 & 50.44$\pm$0.82 & 65.32$\pm$0.70 \\\nTADAM \cite{oreshkin2018tadam} & Metric, Meta & ResNet-12 & 58.50$\pm$0.30 & 76.70$\pm$0.30 \\\nDPGN \cite{yang2020dpgn} & Metric, Meta & ResNet-12 & 67.77$\pm$0.32 & 84.60$\pm$0.43 \\\nRFS \cite{tian2020rethinking} & Metric, Transfer & ResNet-12 & 64.82$\pm$0.60 & 82.14$\pm$0.43 \\ \hline\nMAML \cite{finn2017model} & Optimization, Meta & ConvNet-64 & 48.70$\pm$1.84 & 63.11$\pm$0.92 \\\nSNAIL \cite{mishra2018simple} & Optimization, Meta & ResNet-12 & 55.71$\pm$0.99 & 68.88$\pm$0.92 \\\nLEO \cite{rusu2018meta} & Optimization, Meta & WRN-28-10 & 61.76$\pm$0.08 & 77.59$\pm$0.12 \\\nMetaOptNet \cite{lee2019meta} & Optimization, Meta & ResNet-12 & 64.09$\pm$0.62 & 
80.00$\\pm$0.45 \\\\ \\hline\nTPN \\cite{liu2018learning} & Transductive, Meta & ConvNet-64 & 55.51$\\pm$0.86 & 69.86$\\pm$0.65 \\\\\nBD-CSPN \\cite{liu2019prototype} & Transductive, Meta & WRN-28-10 & 70.31$\\pm$0.93 & 81.89$\\pm$0.60 \\\\\nTransductive Fine-tuning \\cite{dhillon2019baseline} & Transductive, Transfer & WRN-28-10 & 65.73$\\pm$0.68 & 78.40$\\pm$0.52 \\\\\nLaplacianShot \\cite{ziko2020laplacian} & Transductive, Transfer & DenseNet & 75.57$\\pm$0.19 & 84.72$\\pm$0.13 \\\\ \\hline\nMasked Soft k-Means \\cite{ren2018meta} & Semi, Meta & ConvNet-128 & 50.41$\\pm$0.31 & 64.39$\\pm$0.24 \\\\\nTPN-semi \\cite{liu2018learning} & Semi, Meta & ConvNet-64 & 52.78$\\pm$0.27 & 66.42$\\pm$0.21 \\\\\nLST \\cite{li2019learning} & Semi, Meta & ResNet-12 & 70.10$\\pm$1.90 & 78.70$\\pm$0.80 \\\\ \\hline\nTransMatch \\cite{yu2020transmatch} & Semi, Transfer & WRN-28-10 & 62.93$\\pm$1.11 & 82.24$\\pm$0.59 \\\\\nDPN (Ours) & Semi, Transfer & WRN-28-10 & \\multicolumn{1}{l}{79.67$\\pm$1.06} & \\multicolumn{1}{l}{86.30$\\pm$0.95} \\\\\nPTN (Ours) & Semi, Transfer & WRN-28-10 & \\textbf{82.66$\\pm$0.97} & \\textbf{88.43$\\pm$0.67} \\\\ \\hline \\hline\n\\multicolumn{1}{c}{\\multirow{2}{*}{Methods}} & \\multirow{2}{*}{Type} & \\multirow{2}{*}{Backbone} & \\multicolumn{2}{c}{tieredImageNet} \\\\ \\cline{4-5} \n\\multicolumn{1}{c}{} & & & 1-shot & 5-shot \\\\ \\hline\nPrototypical-Net \\cite{snell2017prototypical} & Metric, Meta & ConvNet-256 & 53.31$\\pm$0.89 & 72.69$\\pm$0.74 \\\\\nRelation Network \\cite{sung2018learning} & Metric, Meta & ConvNet-64 & 54.48$\\pm$0.93 & 71.32$\\pm$0.78 \\\\\nDPGN \\cite{yang2020dpgn} & Metric, Meta & ResNet-12 & 72.45$\\pm$0.51 & 87.24$\\pm$0.39 \n \\\\\nRFS \\cite{tian2020rethinking} & Metric, Transfer & ResNet-12 & 71.52$\\pm$0.69 & 86.03$\\pm$0.49 \\\\ \\hline\nMAML \\cite{finn2017model} & Optimization, Meta & ConvNet-64 & 51.67$\\pm$1.81 & 70.30$\\pm$1.75 \\\\\nLEO \\cite{rusu2018meta} & Optimization, Meta & WRN-28-10 & 66.33$\\pm$0.05 & 
81.44$\pm$0.09 \\\nMetaOptNet \cite{lee2019meta} & Optimization, Meta & ResNet-12 & 65.81$\pm$0.74 & 81.75$\pm$0.53 \\ \hline\nTPN \cite{liu2018learning} & Transductive, Meta & ConvNet-64 & 59.91$\pm$0.94 & 73.30$\pm$0.75 \\\nBD-CSPN \cite{liu2019prototype} & Transductive, Meta & WRN-28-10 & 78.74$\pm$0.95 & 86.92$\pm$0.63 \\\nTransductive Fine-tuning \cite{dhillon2019baseline} & Transductive, Transfer & WRN-28-10 & 73.34$\pm$0.71 & 85.50$\pm$0.50 \\\nLaplacianShot \cite{ziko2020laplacian} & Transductive, Transfer & DenseNet & 80.30$\pm$0.22 & 87.93$\pm$0.15 \\ \hline\nMasked Soft k-Means \cite{ren2018meta} & Semi, Meta & ConvNet-128 & 52.39$\pm$0.44 & 69.88$\pm$0.20 \\\nTPN-semi \cite{liu2018learning} & Semi, Meta & ConvNet-64 & 55.74$\pm$0.29 & 71.01$\pm$0.23 \\\nLST \cite{li2019learning} & Semi, Meta & ResNet-12 & 77.70$\pm$1.60 & 85.20$\pm$0.80 \\ \hline\nDPN (Ours) & Semi, Transfer & WRN-28-10 & \multicolumn{1}{l}{82.18$\pm$1.06} & \multicolumn{1}{l}{88.02$\pm$0.72} \\\nPTN (Ours) & Semi, Transfer & WRN-28-10 & \textbf{84.70$\pm$1.14} & \textbf{89.14$\pm$0.71} \\ \hline\n\end{tabular} }\n\caption{The 5-way, 1-shot and 5-shot classification accuracy (\%) on the two datasets with 95\% confidence intervals. The best results are in bold. The upper and lower parts of the table show the results on miniImageNet and tieredImageNet, respectively.}\n\label{Res1}\n\end{table*}\n\nIn unsupervised embedding transfer, the data augmentation $T$ is defined the same as in \cite{lee2019meta,tian2020rethinking}. For fair comparisons against TransMatch~\cite{yu2020transmatch}, we also augment each labeled image 10 times by random transformations and generate the prototypes of each class as labeled samples. We apply the SGD optimizer with a momentum of 0.9. The learning rate is initialized to $10^{-3}$, and a cosine learning rate scheduler is used for 10 epochs. We set the batch size to 80 with $\lambda = 1$ in Eq. 
(\ref{UT}).\nFor Poisson inference, we construct the graph by connecting each sample to its $K$-nearest neighbors with Gaussian weights. We set $K = 30$, and the weight matrix $W$ is symmetrized with $w_{ii} = 0$, which accelerates the convergence of the iteration in Algorithm \ref{algorithm} without changing the solution of Equation (\ref{PO}). We set the maximum $tp = 100$ in step 7 of Algorithm \ref{algorithm} by referring to the stopping constraint discussed in the Proposed Algorithm section. \nWe set the hyper-parameters $\mu = 1.5$, $M_1 = 20$, $M_2 = 40$, and $M_3 = 100$ empirically. Moreover, we set ${\varphi} = 10$, $\upsilon_{\alpha}=0.5$, and $\upsilon_{\sigma}=1.0$.\n\subsection{Experimental Results}\n\subsubsection{Comparison with the State of the Art}\nIn our experiments, we group the compared methods into five categories, and the experimental results on the two datasets are summarized in Table \ref{Res1}. \nWith the auxiliary unlabeled data available, our proposed PTN outperforms the metric-based and optimization-based few-shot models by large margins, indicating that the proposed model effectively utilizes the unlabeled information for assisting few-shot recognition.\nBy integrating the unsupervised embedding transfer and the PoissonMBO classifier, PTN achieves superior performance over both transductive and existing SSFSL approaches. Specifically, under the 5-way-1-shot setting, the classification accuracies are 81.57\% vs. 63.02\%~TransMatch \cite{yu2020transmatch}, 84.70\% vs. 80.30\%~LaplacianShot \cite{ziko2020laplacian} on miniImageNet and tieredImageNet, respectively; under the 5-way-5-shot setting, the classification accuracies are 88.43\% vs. 78.70\%~LST \cite{li2019learning}, 89.14\% vs. 
81.89\%~BD-CSPN \cite{liu2019prototype} on miniImageNet and tieredImageNet, respectively.\nThese results demonstrate the superiority of PTN for SSFSL tasks.\n\subsubsection{Different Extra Unlabeled Samples}\n\begin{table}[t]\n\centering\n\begin{tabular}{@{}cccc@{}}\n\toprule\nMethods & Num\_U & 1-shot & 5-shot \\ \midrule\n~~PTN$^*$ & \multicolumn{1}{c}{0} & 76.20$\pm$0.82 & 84.25$\pm$0.61 \\\nPTN & \multicolumn{1}{c}{0} & 77.01$\pm$0.94 & 85.32$\pm$0.68 \\\nPTN & \multicolumn{1}{c}{20} & 77.20$\pm$0.92 & 85.93$\pm$0.82 \\\nPTN & \multicolumn{1}{c}{50} & 79.92$\pm$1.06 & 86.09$\pm$0.75 \\\nPTN & \multicolumn{1}{c}{100} & 81.57$\pm$0.94 & 87.17$\pm$0.58 \\\nPTN & \multicolumn{1}{c}{200} & \textbf{82.66$\pm$0.97} & \textbf{88.43$\pm$0.76} \\ \bottomrule\n\end{tabular}\n\caption{The 5-way, 1-shot and 5-shot classification accuracy (\%) with different numbers of extra unlabeled samples on miniImageNet. PTN$^*$ denotes that we adopt PTN as a transductive model without fine-tuning the embedding. The best results are in bold.}\n\label{Res2}\n\end{table}\n\nWe show the results of using different numbers of extra unlabeled instances in Table \ref{Res2}. For Num\_U = 0, PTN$^*$ can be viewed as a transductive model without extra unlabeled data, where we treat the query samples as the unlabeled data and do not fine-tune the embedding with query labels, for fair comparisons. 
Contrary to PTN$^*$, the proposed PTN model utilizes the query samples to fine-tune the embedding when Num\_U=0.\nIt can be observed that our PTN model achieves better performance with more extra unlabeled samples, which indicates the effectiveness of PTN in mining the unlabeled auxiliary information for the few-shot problem.\n\n\n\subsubsection{Results with Distractor Classes}\n\begin{figure}[t]\n\begin{center}\n\includegraphics[width=0.95\linewidth]{.\/images\/Few-shot.pdf}\n\end{center}\n\caption{The 5-way, 1-shot and 5-shot classification accuracy (\%) with different numbers of extra unlabeled samples on miniImageNet. w\/D means with distractor classes.}\n\label{dist}\n\end{figure}\n\nInspired by \cite{ren2018meta,liu2018learning,yu2020transmatch}, we further investigate the influence of distractor classes, where the extra unlabeled data are collected from classes that do not overlap with the labeled support samples. We follow the settings in \cite{ren2018meta,liu2018learning}. As shown in Figure \ref{dist}, even with distractor-class data, the proposed PTN still outperforms other SSFSL methods by a large margin, which indicates the robustness of the proposed PTN in dealing with distractor unlabeled data.\n\n\subsection{Ablation Study}\n\begin{table}[t]\n\begin{minipage}{16.5cm}\n\begin{tabular}{@{}lcc@{}}\n\toprule\nMethods & 1-shot & 5-shot \\ \midrule\nTransMatch & 62.93$\pm$1.11 & 82.24$\pm$0.59 \\\nLabel Propagation (LP) & 74.04$\pm$1.00 & 82.60$\pm$0.68 \\\nPoissonMBO & 79.67$\pm$1.02 & 86.30$\pm$0.65 \\\nDPN & 80.00$\pm$0.83 & 87.17$\pm$0.51 \\\nUnsup Trans+LP \footnote{Unsup Trans means Unsupervised Embedding Transfer.} & 75.65$\pm$1.06 & 84.46$\pm$0.68 \\\nUnsup Trans+PoissonMBO & 80.73$\pm$1.11 & 87.41$\pm$0.63 \\\nUnsup Trans+PTN \footnote{PTN consists of Unsup Trans and DPN.} & \textbf{82.66$\pm$0.97} & \textbf{88.43$\pm$0.76} \\ \bottomrule\n\end{tabular}\n\end{minipage}\n\caption{Ablation studies 
of the proposed PTN. All methods are based on a pre-trained embedding with 200 extra unlabeled samples per class on miniImageNet for 5-way, 1-shot and 5-shot classification (\%). The best results are in bold.}\n\label{aby}\n\end{table}\n\nWe analyze the different components of PTN and summarize the results in Table~\ref{aby}. All compared approaches are based on the pre-trained WRN-28-10 embedding. \n\nFirst of all, we investigate the graph propagation component (classifier).\nIt can be observed that graph-based models such as Label Propagation~\cite{zhou2004learning} and PoissonMBO~\cite{calder2020poisson} outperform the inductive model TransMatch~\cite{yu2020transmatch}, which is consistent with previous research~\cite{zhu2005semi,liu2018learning,ziko2020laplacian}. Compared to directly applying PoissonMBO to few-shot tasks, the proposed DPN \textit{\textbf{(without Unsupervised Embedding Transfer)}} achieves better performance, which indicates that it is necessary to perform feature calibration to eliminate the cross-class biases between the support and query data distributions before label inference.\n\nTo investigate the proposed unsupervised embedding transfer in representation learning, we observe that all the graph-based models achieve clear improvements after incorporating the proposed transfer module. For instance, Label Propagation obtains 1.61\% and 1.86\% performance gains on 5-way-1-shot and 5-way-5-shot miniImageNet classification, respectively. These results indicate the effectiveness of the proposed unsupervised embedding transfer. \nFinally, by integrating the unsupervised embedding transfer and the graph propagation classifier, the PTN model achieves the best performance among all the approaches in Table \ref{aby}. 
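The cross-class bias elimination discussed in this ablation can be illustrated with a minimal sketch: shift each query feature by the difference between the support and query feature means, so the two distributions are re-centred on each other. This is a simplified stand-in for the calibration step of DPN, and `calibrate_queries` is an illustrative name:

```python
import numpy as np

def calibrate_queries(support_feats, query_feats):
    """Shift query features by the support-query mean difference,
    a simple form of cross-class bias elimination (a sketch; the
    exact calibration used by DPN may differ)."""
    bias = support_feats.mean(axis=0) - query_feats.mean(axis=0)
    return query_feats + bias
```

After this shift the query mean coincides with the support mean, so the subsequent graph-based inference no longer sees a constant offset between the two sets.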
\n\n\subsection{Inference Time}\nWe conduct inference time experiments to investigate the computational efficiency of the proposed Poisson Transfer Network (PTN) on the \textit{mini}ImageNet~\cite{vinyals2016matching} dataset. Following \cite{ziko2020laplacian}, we compute the average inference time required for each 5-shot task. The results are shown in Table~\ref{time}. Compared with inductive models, the proposed PTN costs more time due to the graph-based Poisson inference. However, our model achieves better classification performance than inductive ones and other transductive models, with an affordable inference time. \n\n\begin{table}[h]\n\caption{Average inference time (in seconds) for the 5-shot tasks on the \textit{mini}ImageNet dataset.}\n\begin{tabular}{@{}lc@{}}\n\toprule\nMethods & Inference Time \\ \midrule\nSimpleShot~\cite{wang2019simpleshot} & 0.009 \\\nLaplacianShot~\cite{ziko2020laplacian} & 0.012 \\\nTransductive fine-tuning~\cite{dhillon2019baseline} & 20.7 \\\nPTN (Ours) & 13.68 \\ \bottomrule\n\end{tabular}\n\n\label{time}\n\end{table}\n\n\begin{table*}[t!]\n\centering\n\caption{Accuracy with various extra unlabeled samples for different semi-supervised few-shot methods on the \textit{mini}ImageNet dataset. All results are averaged over 600 episodes with 95\% confidence intervals. 
The best results are in bold.}\n\begin{tabular}{@{}lccccc@{}}\n\toprule\n & \multicolumn{5}{c}{\textit{mini}ImageNet 5-way-1-shot} \\ \midrule\n & 0 & 20 & 50 & 100 & 200 \\ \midrule\nTransMatch~\cite{yu2020transmatch} & - & 58.43$\pm$0.93 & 61.21$\pm$1.03 & 63.02$\pm$1.07 & 62.93$\pm$1.11 \\\nLabel Propagation~\cite{zhou2004learning} & 69.74$\pm$0.72 & 71.80$\pm$1.02 & 72.97$\pm$1.06 & 73.35$\pm$1.05 & 74.04$\pm$1.00 \\\nPoissonMBO~\cite{calder2020poisson} & 74.79$\pm$1.06 & 76.01$\pm$0.99 & 76.67$\pm$1.02 & 78.28$\pm$1.02 & 79.67$\pm$1.02 \\\nDPN (Ours) & 75.85$\pm$0.97 & 76.10$\pm$1.06 & 77.01$\pm$0.92 & 79.55$\pm$1.13 & 80.00$\pm$0.83 \\\nPTN (Ours) & \textbf{77.01$\pm$0.94} & \textbf{77.20$\pm$0.92} & \textbf{79.92$\pm$1.06} & \textbf{81.57$\pm$0.94} & \textbf{82.66$\pm$0.97} \\ \midrule\n & \multicolumn{5}{c}{\textit{mini}ImageNet 5-way-5-shot} \\ \midrule\n & 0 & 20 & 50 & 100 & 200 \\ \midrule\nTransMatch & - & 76.43$\pm$0.61 & 79.30$\pm$0.59 & 81.19$\pm$0.59 & 82.24$\pm$0.59 \\\nLabel Propagation & 75.50$\pm$0.60 & 78.47$\pm$0.60 & 80.40$\pm$0.61 & 81.65$\pm$0.59 & 82.60$\pm$0.68 \\\nPoissonMBO & 83.89$\pm$0.66 & 84.43$\pm$0.67 & 84.94$\pm$0.82 & 85.51$\pm$0.81 & 86.30$\pm$0.65 \\\nDPN (Ours) & 84.74$\pm$0.63 & 85.04$\pm$0.66 & 85.36$\pm$0.60 & 86.09$\pm$0.63 & 87.17$\pm$0.51 \\\nPTN (Ours) & \textbf{85.32$\pm$0.68} & \textbf{85.93$\pm$0.82} & \textbf{86.09$\pm$0.75} & \textbf{87.17$\pm$0.58} & \textbf{88.43$\pm$0.76} \\ \bottomrule\n\end{tabular}\n\label{unlab}\n\end{table*}\n\n\subsection{Results with Different Extra Unlabeled Samples}\nWe conduct further experiments to investigate how well current semi-supervised few-shot methods mine the value of the unlabeled data. All approaches are based on a pre-trained WRN-28-10~\cite{zagoruyko2016wide} model for fair comparisons. 
As indicated in Table \ref{unlab}, with more unlabeled samples, all the models achieve higher classification performance. Moreover, our proposed PTN model achieves the highest performance among the compared methods, which validates the superior capacity of the proposed model in using the extra unlabeled information to boost few-shot methods.\n\n\begin{table*}[t]\n\centering\n\caption{Semi-supervised comparison on the \textit{mini}ImageNet dataset.}\n\begin{threeparttable}\n\begin{tabular}{@{}lcccc@{}}\n\toprule\nMethods & 1-shot & 5-shot & \multicolumn{1}{l}{1-shot w\/D} & \multicolumn{1}{l}{5-shot w\/D} \\ \midrule\nSoft K-Means~\cite{ren2018meta} & 50.09$\pm$0.45 & 64.59$\pm$0.28 & 48.70$\pm$0.32 & 63.55$\pm$0.28 \\\nSoft K-Means+Cluster~\cite{ren2018meta} & 49.03$\pm$0.24 & 63.08$\pm$0.18 & 48.86$\pm$0.32 & 61.27$\pm$0.24 \\\nMasked Soft k-Means~\cite{ren2018meta} & 50.41$\pm$0.31 & 64.39$\pm$0.24 & 49.04$\pm$0.31 & 62.96$\pm$0.14 \\\nTPN-semi~\cite{liu2018learning} & 52.78$\pm$0.27 & 66.42$\pm$0.21 & 50.43$\pm$0.84 & 64.95$\pm$0.73 \\\nTransMatch~\cite{yu2020transmatch} & 63.02$\pm$1.07 & 81.19$\pm$0.59 & 62.32$\pm$1.04 & 80.28$\pm$0.62 \\ \midrule\nPTN (Ours) & \textbf{82.66$\pm$0.97} & \textbf{88.43$\pm$0.67} & \textbf{81.92$\pm$1.02} & \textbf{87.59$\pm$0.61} \\ \bottomrule\n\end{tabular}\n\begin{tablenotes}\n\item[$\divideontimes$]``w\/D\" means with distractor classes. In this setting, many extra unlabeled samples come from distractor classes, which are disjoint from the labeled support classes. All results are averaged over 600 episodes with 95\% confidence intervals. The best results are in bold. 
\n\end{tablenotes}\n\end{threeparttable}\n\label{mini}\n\end{table*}\n\n\begin{table*}[!hpbt]\n\centering\n\caption{Semi-supervised comparison on the \textit{tiered}ImageNet dataset.}\n\begin{threeparttable}\n\begin{tabular}{@{}lcccc@{}}\n\toprule\nMethods & 1-shot & 5-shot & \multicolumn{1}{l}{1-shot w\/D} & \multicolumn{1}{l}{5-shot w\/D} \\ \midrule\nSoft K-Means~\cite{ren2018meta} & 51.52$\pm$0.36 & 70.25$\pm$0.31 & 49.88$\pm$0.52 & 68.32$\pm$0.22 \\\nSoft K-Means+Cluster~\cite{ren2018meta} & 51.85$\pm$0.25 & 69.42$\pm$0.17 & 51.36$\pm$0.31 & 67.56$\pm$0.10 \\\nMasked Soft k-Means~\cite{ren2018meta} & 52.39$\pm$0.44 & 69.88$\pm$0.20 & 51.38$\pm$0.38 & 69.08$\pm$0.25 \\\nTPN-semi~\cite{liu2018learning} & 55.74$\pm$0.29 & 71.01$\pm$0.23 & 53.45$\pm$0.93 & 69.93$\pm$0.80 \\ \midrule\nPTN (Ours) & \textbf{84.70$\pm$1.14} & \textbf{89.14$\pm$0.71} & \textbf{83.84$\pm$1.07} & \textbf{88.06$\pm$0.62} \\ \bottomrule\n\end{tabular}\n\begin{tablenotes}\n\item[$\divideontimes$]``w\/D\" means with distractor classes. In this setting, many extra unlabeled samples come from distractor classes, which are disjoint from the labeled support classes. All results are averaged over 600 episodes with 95\% confidence intervals. The best results are in bold. \n\end{tablenotes}\n\end{threeparttable}\n\label{tiered}\n\end{table*}\n\n\n\subsection{Results with Distractor Classification}\nWe report the results of the proposed PTN on both the \textit{mini}ImageNet~\cite{vinyals2016matching} and \textit{tiered}ImageNet~\cite{ren2018meta} datasets under different settings in Table~\ref{mini} and Table~\ref{tiered}, respectively. It can be observed that the classification results of all semi-supervised few-shot models are degraded by the distractor classes. However, the proposed PTN model still outperforms the other semi-supervised few-shot methods by a large margin. 
This also indicates the superiority of the proposed PTN model over previous approaches in dealing with semi-supervised few-shot classification tasks.\n\n\n\n\section{Introduction}\n\noindent\nFew-shot learning \cite{miller2000learning,fei2006one,vinyals2016matching} aims to learn a model that generalizes well with a few instances of each novel class.\nIn general, a few-shot learner is first trained on a substantial annotated dataset, also known as the base-class set, and then adapted to unseen novel classes with a few labeled instances.\nDuring the evaluation, a set of few-shot tasks is fed to the learner, where each task consists of a few support (labeled) samples and a certain number of query (unlabeled) data.\nThis research topic has proven immensely appealing in the past few years, as a large number of few-shot learning methods have been proposed from various perspectives. Mainstream methods can be roughly grouped into two categories. The first one is learning from episodes \cite{vinyals2016matching}, also known as meta-learning, which adopts the base-class data to create a set of episodes. Each episode is a few-shot learning task, with support and query samples that simulate the evaluation procedure.\nThe second type is the transfer-learning based method, which focuses on learning a decent classifier by transferring domain knowledge from a model pre-trained on the large base-class set~\cite{chen2018closer,qiao2018few}. This paradigm decouples the few-shot learning process into representation learning and classification, and has shown favorable performance against meta-learning methods in recent works \cite{tian2020rethinking,ziko2020laplacian}. 
Our method shares a similar motivation with transfer-learning based methods and proposes to utilize the extra unlabeled novel-class data and a pre-trained embedding to tackle the few-shot problem.\n\nCompared with collecting labeled novel-class data, it is much easier to obtain abundant unlabeled data from these classes. Therefore, semi-supervised few-shot learning (SSFSL)~\cite{ren2018meta,liu2018learning,li2019learning,yu2020transmatch} is proposed to combine the auxiliary information from labeled base-class data and extra unlabeled novel-class data to enhance the performance of few-shot learners.\nThe core challenge in SSFSL is how to fully explore the auxiliary information from these unlabeled data.\nPrevious SSFSL works indicate that graph-based models~\cite{liu2018learning,ziko2020laplacian} can learn a better classifier than inductive ones~\cite{ren2018meta,li2019learning,yu2020transmatch}, since these methods directly model the relationship between the labeled and unlabeled samples during inference.\nHowever, current graph-based models adopt Laplace learning~\cite{zhu2003semi} to conduct label propagation, whose solutions develop localized spikes near the labeled samples but are almost constant far from them, \textit{i.e.,} label values are not propagated well, especially with few labeled samples. Therefore, these models suffer from an underdeveloped message-passing capacity for the labels.\nOn the other hand, most SSFSL methods adapt the feature embedding pre-trained on base-class data (meta- or transfer-pre-trained) as the novel-class embedding. 
This may lead to an embedding degeneration problem: since the pre-trained model is designed for base-class recognition, it tends to learn an embedding that represents only base-class information and to lose information that might be useful outside the base classes.\n\nTo address the above issues, we propose a novel transfer-learning based SSFSL method, named Poisson Transfer Network (PTN). Specifically, \textbf{\textit{to improve the capacity of graph-based SSFSL models in message passing}}, we revise the Poisson model for few-shot problems by incorporating query feature calibration and the Poisson MBO model. Poisson learning~\cite{calder2020poisson} is provably more stable and informative than traditional Laplace learning in low-label-rate semi-supervised problems. However, directly employing Poisson MBO for SSFSL may suffer from a cross-class bias due to the data distribution drift between the support and query data. Therefore, we improve the Poisson MBO model by explicitly eliminating the cross-class bias before label inference. \n\textbf{\textit{To tackle the novel-class embedding degeneration problem}}, we propose to transfer the pre-trained base-class embedding to a novel-class embedding by adopting unsupervised contrastive training \cite{he2020momentum,chen2020simple} on the extra unlabeled novel-class data. By constraining the distances between augmented positive pairs while pushing negative ones apart, the proposed transfer scheme captures the novel-class distribution implicitly. 
This strategy effectively avoids the possible overfitting of retraining the feature embedding on the few labeled instances.\n\nBy integrating Poisson learning and the novel-class-specific embedding, the proposed PTN model can fully explore the auxiliary information of the extra unlabeled data for SSFSL tasks.\nThe contributions are summarized as follows:\n\begin{itemize}\n\item We propose a Poisson learning based model to improve the capacity of mining the relations between the labeled and unlabeled data for graph-based SSFSL.\n\n\item We propose to adapt unsupervised contrastive learning in the representation learning with extra unlabeled data to improve the generality of the pre-trained base-class embedding for novel-class recognition. \n\n\item Extensive experiments are conducted on two benchmark datasets to investigate the effectiveness of PTN, and PTN achieves state-of-the-art performance.\n\end{itemize}\n\section{Methodology}\n\subsection{Problem Definition}\label{def}\nIn standard few-shot learning, there exists a labeled support set $S$ of $C$ different classes, ${ S } = \left\{ \left({x} _ { s } , {y} _ { s } \right) \right\} _ { s = 1 } ^ { K \times C }$, where $x_s$ is a labeled sample and $y_s$ denotes its label. We use the standard basis vector $\mathbf{e}_{i} \in \mathbb{R}^{C}$ to represent the $i$-th class, \textit{i.e.}, $y_s \in \left\{\mathbf{e}_{1}, \mathbf{e}_{2}, \ldots, \mathbf{e}_{C}\right\}$. Given an unlabeled query sample $x_q$ from the query set ${ Q } = \left\{ {x} _ { q } \right\} _ { q=1 } ^ { V }$, the goal is to assign the query to one of the $C$ support classes. The labeled support set and unlabeled query set share the same label space, and the novel-class dataset $\mathcal{D}_{novel}$ is thus defined as $\mathcal{D}_{novel} = S \cup Q$. 
If $S$ contains $K$ labeled samples for each of the $C$ categories, the task is denoted a $C$-way-$K$-shot problem.\nAn ideal classifier can hardly be obtained with only the limited annotated set $S$. Therefore, few-shot models usually utilize a fully annotated auxiliary dataset $\mathcal{D}_{base}$, known as the base-class set, which has a data distribution similar to, but a label space disjoint from, that of $\mathcal{D}_{novel}$.\n\nFor semi-supervised few-shot learning (SSFSL), we have an extra unlabeled support set ${ U } = \left\{ {x} _ { u } \right\} _ { u = 1 } ^ { N }$. These additional $N$ unlabeled samples are usually drawn from each of the $C$ support classes in the standard setting, or from other novel classes under the distractor classification setting. Then the new novel-class dataset $\mathcal{D}_{novel}$ is defined as $\mathcal{D}_{novel} = S \cup Q \cup U$. The goal of SSFSL is to maximize the value of the extra unlabeled data to improve few-shot methods.\n\nFor a clear understanding, the details of the proposed PTN are introduced as follows: we first introduce the proposed representation learning, and then we illustrate the proposed Poisson learning model for label inference.\n\n\subsection{Representation Learning}\nThe representation learning aims to learn a well-generalized novel-class embedding through Feature Embedding Pre-training and Unsupervised Embedding Transfer. \n\subsubsection{Feature Embedding Pre-training}\nAs shown on the left side of Figure \ref{framework}, the first part of PTN is the feature embedding pre-training.\n\nBy employing the cross-entropy loss between predictions and ground-truth labels on $\mathcal{D}_{base}$, we train the base encoder $f_{\theta_0}$ in a fully-supervised way, the same as \cite{chen2018closer,yu2020transmatch,tian2020rethinking}. 
This stage can generate a powerful embedding for the downstream few-shot learner.

\subsubsection{Unsupervised Embedding Transfer} \label{UET}
Directly employing the pre-trained base-class embedding for the novel classes may suffer from a degeneration problem. However, retraining the base-class embedding with the limited labeled instances easily leads to overfitting. How can we train a novel-class embedding to represent things beyond labels when our only supervision is the limited labels? Our solution is unsupervised contrastive learning.
Unsupervised learning, especially contrastive learning~\cite{he2020momentum,chen2020simple}, has recently shown great potential in representation learning for various downstream vision tasks, and most of these works train a model from scratch.
However, unsupervised pre-trained models perform worse than fully-supervised pre-trained models.
Unlike previous works, we propose to adopt contrastive learning to retrain the pre-trained embedding with the unlabeled novel data. In this way, we can learn a decent novel-class embedding by integrating the fully-supervised pre-training scheme with unsupervised contrastive fine-tuning.

Specifically, for a minibatch of $n$ examples from the unlabeled novel-class subset ${ U_i = \{x_u\}_{u=1}^n }$, randomly sampling two data augmentation operators $t,t'\in {T}$, we can generate a new feature set $Z = \{ Z_t = \{f_{\theta_0} \circ t(x_u)\}_{u=1}^n \} \cup \{ Z_{t'} = \{f_{\theta_0} \circ t'(x_u)\}_{u=1}^n \}$, resulting in $n$ pairs of feature points. We treat each feature pair originating from the same raw data input as a positive pair, and the other $2(n-1)$ feature points as negative samples.
Then the contrastive loss for the minibatch is defined as
\begin{equation}
\ell_{cont}=- \sum_{i,j = 1}^{n} \log \frac{\exp \left(\operatorname{cosine}\left({z}_{i}, {z}_{j}\right) / \tau\right)}{\sum_{k \neq i} \exp \left(\operatorname{cosine}\left({z}_{i}, {z}_{k}\right) / \tau\right)},
\label{NCE}
\end{equation}

where $z_i,z_j$ denote a positive feature pair from $Z$, $\tau$ is a temperature parameter, and $\operatorname{cosine}(\cdot)$ represents the cosine similarity. Then, we adopt a Kullback-Leibler divergence ($\ell_{KL}$) between the two feature subsets $Z_t$ and $Z_{t'}$ as a regularization term. Therefore, the final unsupervised embedding transfer loss $\ell_{UT}$ is defined as
\begin{equation}
\ell_{UT} = \ell_{cont} + \lambda \ell_{KL}(Z_t ~\|~ Z_{t'}).
\label{UT}
\end{equation}
By training on the extra unlabeled data with this loss, we can learn a robust novel-class embedding $f_{\theta}$ from $f_{\theta_0}$.

\subsection{Poisson Label Inference}
Previous studies \cite{zhu2003semi,zhou2004learning,zhu2005semi,liu2018learning,ziko2020laplacian} indicate that graph-based few-shot classifiers show superior performance over inductive ones. Therefore, we propose constructing the classifier with a graph-based Poisson model, which adopts an optimization strategy different from that of the representation learning.
The Poisson model~\cite{calder2020poisson} has been proven superior to traditional Laplace-based graph models~\cite{zhu2003semi,zhou2004learning} both theoretically and experimentally, especially for low-label-rate semi-supervised problems.
However, directly applying this model to the few-shot task will suffer from a cross-class bias challenge, caused by the data distribution bias between support data (including labeled support and unlabeled support data) and query data.

Therefore, we revise this powerful model into our classifier by eliminating the support-query bias.
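The transfer loss of Eqs. (\ref{NCE})-(\ref{UT}) can be sketched in a few lines of numpy. Note that the exact form of $\ell_{KL}$ is not pinned down above, so taking the KL term between softmax-normalized feature sets is our own assumption here, as is the helper name:

```python
import numpy as np

def unsupervised_transfer_loss(Zt, Ztp, tau=0.5, lam=1.0):
    """l_UT = l_cont + lambda * l_KL for one minibatch of n positive pairs.
    Zt, Ztp: (n, d) features of the two augmented views of the same inputs."""
    n = len(Zt)
    Z = np.vstack([Zt, Ztp])
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)      # unit-normalize rows
    sim = Z @ Z.T / tau                                    # cosine similarities / tau
    l_cont = 0.0
    for i in range(2 * n):
        j = (i + n) % (2 * n)                              # positive partner of i
        denom = np.exp(sim[i]).sum() - np.exp(sim[i, i])   # sum over k != i
        l_cont -= np.log(np.exp(sim[i, j]) / denom)
    # KL regularizer between the two views (softmax-normalized: an assumption)
    def softmax(A):
        e = np.exp(A - A.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)
    P, Q = softmax(Zt), softmax(Ztp)
    l_kl = np.sum(P * np.log(P / Q)) / n
    return l_cont + lam * l_kl
```

Perfectly aligned views give a lower loss than unrelated views, which is the signal that pulls the two augmentations of each unlabeled sample together.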
We explicitly propose a query feature calibration strategy before the final Poisson label inference.
It is worth noting that the proposed graph-based classifier can be directly appended to the pre-trained embedding without adopting the unsupervised embedding transfer training. We dub this baseline model the \textit{Decoupled Poisson Network} (\textit{DPN}).

\subsubsection{Query Feature Calibration}
The support-query data distribution bias, also referred to as the cross-class bias~\cite{liu2019prototype}, is one of the reasons for the degeneracy of the few-shot learner. In this paper, we propose a simple but effective method to eliminate this distribution bias for Poisson graph inference. For an SSFSL task, we fuse the labeled support set $S$ and the extra unlabeled set $U$ into the final support set $ B = S \cup U$. Denoting the normalized embedded support feature set and query feature set as $Z_b = \{z_b\}$ and $Z_q = \{z_q\}$,
the cross-class bias is defined as
\begin{equation}
\begin{split}
& \Delta_{\text {cross}}=\mathbb{E}_{z_{b} \sim p_{\mathcal{B}}}\left[z_{b}\right]-\mathbb{E}_{z_{q} \sim p_{\mathcal{Q}}}\left[z_{q}\right] \\
& \quad\quad~ = \frac{1}{|\mathcal{B}|} \sum_{b=1}^{|\mathcal{B}|} {z}_{b}-\frac{1}{|\mathcal{Q}|} \sum_{q=1}^{|\mathcal{Q}|} {z}_{q}.
\end{split}
\label{QFR}
\end{equation}
We then add the bias $\Delta_{cross}$ to the query features, which largely eliminates the support-query bias. After that, a Poisson MBO model is adopted to infer the query labels.

\subsubsection{The Poisson Merriman--Bence--Osher Model}
We denote the embedded feature set as $Z_{novel} = Z_b \cup Z_q = \{z_1, z_2, \dots, z_m\}$ ($m=K\times C + N + V$), where the first $K \times C$ feature points belong to the labeled support set, the last $V$ feature points belong to the query set, and the remaining $N$ points denote the unlabeled support set.
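The calibration of Eq. (\ref{QFR}) amounts to a one-line mean shift; a minimal numpy sketch (the function name is ours):

```python
import numpy as np

def calibrate_queries(Z_b, Z_q):
    """Shift query features by the cross-class bias of Eq. (QFR):
    Delta_cross = mean(Z_b) - mean(Z_q), so the shifted queries
    share the support set's first-order statistics."""
    delta_cross = Z_b.mean(axis=0) - Z_q.mean(axis=0)
    return Z_q + delta_cross
```

After the shift, the query mean coincides with the support mean exactly, which is the sense in which the support-query bias is eliminated.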
We build a graph with the feature points as the vertices, and the edge weight $w_{ij}$ is the similarity between feature points $z_i$ and $z_j$, defined as $w_{i j}=\exp \left(-4\left|z_{i}-z_{j}\right|^{2} / d_{K}\left(z_{i}\right)^{2}\right)$, where $d_{K}\left(z_{i}\right)^{2}$ is the squared distance between $z_i$ and its $K$-th nearest neighbor. We have $w_{ij} \ge 0$ and set $w_{ij} = w_{ji}$. Correspondingly, we define the weight matrix as $W=[w_{ij}]$, the degree matrix as $D=\operatorname{diag}([d_i=\sum_{j=1}^{m}w_{ij}])$, and the unnormalized Laplacian as $L = D - W$.
As the first $K\times C$ feature points have ground-truth labels, we use $\bar y = \frac{1}{K \times C} \sum_{s=1}^{K\times C} y_s $ to denote the average label vector, and we let the indicator $\mathbb{I}_{ij} = 1 $ if $i=j$, and $\mathbb{I}_{ij} = 0 $ otherwise. The goal of this model is to learn a classifier $g: z \rightarrow \mathbb{R}^{C}$.
By solving the Poisson equation:
\begin{equation}
\begin{split}
& L g\left(z_{i}\right)=\sum_{j=1}^{K\times C}\left(y_{j}-\bar{y}\right) \mathbb{I}_{ij} \quad \text { for } i=1, \ldots, m,
\end{split}
\label{PO}
\end{equation}
satisfying $\sum_{i=1}^{m} \sum_{k=1}^{m}w_{ik} g\left(z_{i}\right)=0$, we obtain the label prediction function $g(z_i)=(g_1(z_i),g_2(z_i),\dots,g_C(z_i))$. The predicted label $\hat{y_i}$ of vertex $z_i$ is then determined as $\hat{y_i} = {\arg \max_{j \in\{1, \ldots, C\}} }\left\{g_{j}(z_i)\right\}$. Let $G$ denote the $ m \times C $ matrix of predicted labels for all the data. We concatenate the support labels to form a label matrix $Y = [y_s] \in \mathbb{R}^{C \times (K\times C)} $. Let $A = [Y - \bar y, \mathbf{0}^{C \times (m-K\times C)}]$ denote the initial labels of all the data, in which the labels of all unlabeled data are zero. The query label of Eq.
(\ref{PO}) can be determined by
\begin{equation}
 G^{tp+1} = G^{tp} + D^{-1} ( A^T - LG^{tp}),
 \label{POSO}
\end{equation}
where $G^{tp}$ denotes the predicted labels of all the data at time step $tp$.
We can obtain a stable classifier $g$ after a certain number of iterations of Eq. (\ref{POSO}). After that, we adopt a graph-cut method to improve the inference performance by incrementally adjusting the classifier's decision boundary. The graph-cut problem is defined as
\begin{equation}
\min_{g: Z\rightarrow H \atop(g)_{z}=o}\left\{ g^T L g -\mu \sum_{i=1}^{K \times C}\left(y_{i}-\bar{y}\right) \cdot g\left(z_{i}\right)\right\},
\label{POMBO}
\end{equation}
where $H = \{ \mathbf{e}_{1}, \mathbf{e}_{2}, \ldots, \mathbf{e}_{C} \}$ denotes the annotated samples' label set, $(g)_z = \frac{1}{m}\sum_{i=1}^m g(z_i)$ is the fraction of vertices assigned to each of the $C$ classes, and $o =[o_1,o_2,\dots,o_C]^T \in \mathbb{R}^{C}$ is the prior knowledge of the class size distribution, where $o_i$ is the fraction of data belonging to class $i$. With the constraint $(g)_z = o$, we can encode this prior knowledge into the Poisson model.
The term $g^T L g = \frac{1}{2} \sum_{i, j=1}^{m} w_{i j}(g(z_i)-g(z_j))^{2}$ is the graph-cut energy of the classification given by $g=[g(z_1),g(z_2),\dots,g(z_m)]^T$, widely used in semi-supervised graph models~\cite{zhu2003semi,zhu2005semi,zhou2004learning}.

In Eq.
(\ref{POMBO}), the solution is constrained to discrete values, which makes the problem hard to solve.
To relax this problem, we use the Merriman-Bence-Osher (MBO) scheme~\cite{garcia2014multiclass} and replace the graph-cut energy with the Ginzburg-Landau approximation:

\begin{equation}
\begin{split}
& \min _{g\in \mathrm{SP}\{Z\rightarrow \mathbb{R}^C\} \atop(g)_{z}=o}\left\{ \mathrm{GL}_{\tau'} (g) -\mu \sum_{i=1}^{K \times C}\left(y_{i}-\bar{y}\right) \cdot g\left(z_{i}\right)\right\},\\
& \mathrm{GL}_{\tau'}(g)= g^T L g +\frac{1}{\tau'} \sum_{i=1}^{m} \prod_{j=1}^{C}\left|g\left(z_{i}\right)-\mathbf{e}_{j}\right|^{2}.
\end{split}
\label{GLPOMBO}
\end{equation}

In Eq. (\ref{GLPOMBO}), $\mathrm{SP}\{Z\rightarrow \mathbb{R}^C\}$ represents the space of projections $g: Z\rightarrow \mathbb{R}^C$, which allows the classifier $g$ to take on any real values, instead of the discrete values from $H$ in Eq. (\ref{POMBO}). More importantly, this leads to a more efficient computation of the Poisson model. Eq. (\ref{GLPOMBO}) can be efficiently solved with an alternating gradient descent strategy, as shown in lines 9-20 of Algorithm \ref{algorithm}.

\begin{algorithm}[t]
\caption{PTN for SSFSL}\label{algorithm}
\SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up}
 \SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress}
 \SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}
 \SetKwProg{PoissonMBO}{$PoissonMBO$}{}{$G\leftarrow G[m-V:m,:]$;}
 \Input{$\mathcal{D}_{base}$, $\mathcal{D}_{novel}=S\cup U \cup Q$,\\ $o$, $\mu$, $M_{1}, M_{2}, M_{3}$}
 \Output{Query samples' label prediction $G$}

Train a base model $\mathbf{W}_{\phi} \circ f_{\theta_0} (x)$ with all samples and labels from $\mathcal{D}_{base}$;

Apply the unsupervised embedding transfer method to fine-tune $f_{\theta_0}$ with the novel unlabeled data $U$ by using $\ell_{UT}$ in Eq.
(\ref{UT}), resulting in $f_{\theta}$;

Apply $f_{\theta}$ to extract features on $\mathcal{D}_{novel}$ as $Z_{novel}$;

Apply query feature calibration using Eq. (\ref{QFR});

Compute $W, D, L, A$ according to $Z_{novel}$, $G \leftarrow \mathbf{0}^{m \times C}$;

\PoissonMBO{}{

Update $G$ using Eq. (\ref{POSO}) for a given number of steps;

$\mathrm{d}_{mx} \leftarrow 1 / \max _{1 \leq i \leq m} {D}_{i i}$, $G \leftarrow \mu G$;

\For{ $i=1$ \KwTo $M_{1} $}{
\For{ $j=1$ \KwTo $M_{2} $}{${G} \leftarrow {G}-\mathrm{d}_{mx}\left({L} {G}-\mu {A}^{T}\right)$}
$r \leftarrow \textbf{ones}(1,C)$\\
\For{ $j=1$ \KwTo $M_3$}{$\hat{o} \leftarrow \frac{1}{n} \mathbf{1}^{T} \mathbf{P r o j}_{H}({G} \cdot \operatorname{diag}({r}))$\\
${r} \leftarrow \max \left(\min \left({r}+{\varphi} \cdot ({o}-\hat{o}), \upsilon_{\alpha}\right), \upsilon_{\sigma }\right)$
}
$G \leftarrow \mathbf{P r o j}_{H}({G} \cdot \operatorname{diag}({r}))$ }
}
\end{algorithm}
\subsection{Proposed Algorithm}
The overall proposed algorithm is summarized in Algorithm \ref{algorithm}. Given the base-class set $\mathcal{D}_{base}$, the novel-class set $\mathcal{D}_{novel}$, the prior class distribution $o$, and the other parameters, PTN predicts the query samples' labels $G \in \mathbb{R}^{V \times C}$. The query label $\hat{y}_q$ is then determined as $\hat{y}_q = \arg \max _{1 \leq j \leq C} G_{qj}$.
More specifically, once the base encoder $f_{\theta_0}$ is learned using the base set $\mathcal{D}_{base}$, we employ the proposed unsupervised embedding transfer method in step 2 of Algorithm \ref{algorithm}. After that, we build the graph with the feature set $Z_{novel}$ and compute the related matrices $W, D, L, A$ in steps 3-5.
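The graph construction of steps 5-7 and the propagation of Eq. (\ref{POSO}) can be sketched in a few lines of numpy. This is a toy stand-in for the full Algorithm \ref{algorithm}: the weight rule and the update follow the text, while the neighbor count and iteration count below are arbitrary choices of ours:

```python
import numpy as np

def build_graph(Z, k=5):
    """Edge weights w_ij = exp(-4 |z_i - z_j|^2 / d_k(z_i)^2), symmetrized."""
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    dk2 = np.sort(d2, axis=1)[:, k]                       # squared distance to k-th neighbor
    W = np.exp(-4 * d2 / dk2[:, None])
    W = 0.5 * (W + W.T)                                   # enforce w_ij = w_ji
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(axis=1))
    return W, D, D - W                                    # weight, degree, Laplacian L

def poisson_propagate(L, D, A, steps=300):
    """Iterate Eq. (POSO): G <- G + D^{-1} (A^T - L G), starting from G = 0."""
    G = np.zeros((L.shape[0], A.shape[0]))
    D_inv = np.diag(1.0 / np.diag(D))
    for _ in range(steps):
        G = G + D_inv @ (A.T - L @ G)
    return G
```

Here $A$ is the $C \times m$ matrix of initial labels ($y_s - \bar y$ in the labeled columns, zeros elsewhere), and the predicted class of point $i$ is $\arg\max$ over row $i$ of $G$.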
In the label inference stage in steps 6-20, we first apply the Poisson model to robustly propagate the labels in step 7, and then solve the graph-cut problem by using the MBO scheme with several gradient-descent steps to boost the classification performance. The stopping condition in step 7 follows the constraint $\left\|\mathbf{sp}_{tp}-{W} \mathbf{1} /\left(\mathbf{1}^{T} {W} \mathbf{1}\right)\right\|_{\infty} \leq 1 / m$, where $\mathbf{1}$ is an all-ones column vector, $ \mathbf{sp}_{tp}={W} {D}^{-1} \mathbf{sp}_{tp-1}$, and $\mathbf{sp_{0}}$ is an $m$-dimensional column vector with ones in the first $K\times C$ positions and zeros elsewhere.
Steps 9-19 aim to solve the graph-cut problem in Eq. (\ref{GLPOMBO}).
To solve it, we first divide Eq. (\ref{GLPOMBO}) into $E_1 = g^TLg-\mu \sum_{i=1}^{K \times C}\left(y_{i}-\bar{y}\right) \cdot g\left(z_{i}\right)$ and $E_2=\frac{1}{\tau'} \sum_{i=1}^{m} \prod_{j=1}^{C}\left|g\left(z_{i}\right)-\mathbf{e}_{j}\right|^{2}$, and then employ gradient descent alternately on these two energy functions.
Steps 10-12 optimize $E_1$.
We optimize $E_2$ in steps 14-17, where $\mathbf{Proj}_{H}: \mathbb{R}^{C} \rightarrow H$ is the closest-point projection, $r =[r_1,\dots,r_C]^T$ ($r_i > 0$), ${\varphi}$ is the time step, and $\upsilon_{\alpha}, \upsilon_{\sigma }$ are the clipping values.
By adopting the gradient descent scheme in steps 14-17, we generate a vector $r$ that also satisfies the constraint $(g)_z = o$ in Eq. (\ref{GLPOMBO}). After obtaining the PoissonMBO solution $G$, the query samples' label prediction matrix is obtained in step 20.

The main inference complexity of PTN is $\mathcal{O}(M_1 M_2 E)$, where $E$ is the number of edges in the graph. As a graph-based model, PTN has a heavier inference cost than inductive models.
However, previous studies \cite{liu2018learning,calder2020poisson} indicate that this complexity is affordable for few-shot tasks since the data scale is not very large. Moreover, we do not claim that our model is the final solution for SSFSL; we aim to design a new method that makes full use of the extra unlabeled information. We report inference time comparison experiments in Table~\ref{time}. The average inference time of PTN is 13.68 s.

\section{Related Work}
\subsection{Few-Shot Learning}
Among the learning methods with limited samples, \textit{e.g.,} weakly supervised learning~\cite{lan2017robust,zhang2018adversarial} and semi-supervised learning~\cite{zhu2003semi,calder2019properly},
few-shot learning can be roughly grouped into two categories: meta-learning models and transfer-learning models. Meta-learning models adopt the episode training mechanism~\cite{vinyals2016matching}, of which metric-based models optimize the transferable embedding of both auxiliary and target data, and queries are identified according to the embedding distances~\cite{sung2018learning,li2019distribution,Simon_2020_CVPR,zhang2020sgone}. Meanwhile, meta-optimization models~\cite{finn2017model,rusu2018meta} aim to design optimization-centered algorithms that adapt the knowledge from meta-training to meta-testing.
Instead of separating base classes into a set of few-shot tasks, transfer-learning methods~\cite{qiao2018few,gidaris2018dynamic,chen2018closer,qi2018low} utilize all base classes to pre-train the few-shot model, which is then adapted to novel-class recognition.
Most recently, Tian \textit{et al.} \cite{tian2020rethinking} decoupled the learning procedure into base-class embedding pre-training and novel-class classifier learning. By adopting multivariate logistic regression and knowledge distillation, their model outperforms the meta-learning approaches.
Our proposed method is inspired by the transfer-learning framework; we adapt this framework to semi-supervised few-shot learning by exploring both unlabeled novel-class data and base-class data to boost the performance of few-shot tasks.

\subsection{Semi-Supervised Few-shot Learning (SSFSL)}
SSFSL aims to leverage extra unlabeled novel-class data to improve few-shot learning. \citeauthor{ren2018meta} \cite{ren2018meta} propose a meta-learning based framework that extends the prototypical network \cite{snell2017prototypical} with unlabeled data to refine class prototypes. LST \cite{li2019learning} re-trains the base model using the unlabeled data with generated pseudo labels. During the evaluation, it dynamically adds unlabeled samples with high prediction confidence into testing. In \cite{yu2020transmatch}, TransMatch proposes to initialize the novel-class classifier with pre-trained feature imprinting, and then employs MixMatch \cite{berthelot2019mixmatch} to fine-tune the whole model with both labeled and unlabeled data.
As research closely related to SSFSL, transductive few-shot approaches~\cite{liu2018learning,kim2019edge,ziko2020laplacian} also attempt to utilize unlabeled data to improve the performance of few-shot learning. These methods adopt the entire query set as the unlabeled data and perform inference on all query samples together. For instance, TPN \cite{liu2018learning} employs graph-based transductive inference to address the few-shot problem, and a semi-supervised extension of the model is also presented in their work.

Unlike the above approaches, in this paper, we adopt the transfer-learning framework and propose to fully explore the extra unlabeled information in both classifier learning and embedding learning, with different learning strategies.
\n\n\\begin{figure*}[t]\n\\begin{center}\n\\includegraphics[width=0.85\\linewidth]{.\/images\/flowchart.pdf}\n\\end{center}\n \\caption{The overview of the proposed PTN. We first pre-train a feature embedding $f_{\\theta_0}$ from the base-class set using standard cross-entropy loss. This embedding is then fine-tuned with the external novel-class unlabeled data by adopting unsupervised transferring loss $\\ell_{UT}$ to generate $f_{\\theta}$. Finally, we revise a graph model named PoissonMBO to conduct the query label inference.}\n\\label{framework}\n\\end{figure*}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} {"text":"\\section*{Introduction}\n\nDynamics of liquid drops is familiar in daily life: we observe rain drops\nrolling on a new umbrella, honey dripping off from a spoon, and oil droplets\nfloating on the surface of vegetable soup and so on. Such everyday phenomena\nare in fact important not only in physical sciences\n\\cite{RichardClanetQuere2002,DoshiCohenZhangSiegelHowellBasaranNagel2003,CouderProtiereFortBoudaoud2005,RistenpartBirdBelmonteDollarStone2009,PhysRevLett.79.1265,BirdDeCourbinStone2010,EtienneWedge2014}\nbut also in a variety of practical issues such as ink-jet printing\n\\cite{Calvert2001}, microfluidics manipulations\n\\cite{SquiresQuake2005,MathileTabeling2014}, and emulsification, formation of\nspray and foams \\cite{DynamicsDroplets,PhysicsFoams,Sylvie}. From such\nphenomena familiar to everybody, researchers have successfully extracted a\nnumber of scaling laws representing the essential physics \\cite{CapilaryText},\nwhich include scaling laws associated with the lifetime of a bubble in viscous\nliquid \\cite{DebregeasGennesBrochard-Wyart1998,EriOkumura2007} and contact\ndynamics of a drop to another drop\n\\cite{AartsLekkerkerkerGuoWegdamBonn2005,YokotaPNAS2011} or to a solid plate\n\\cite{BirdMandreStone2008,DavidCoalPRE}. 
Here, we report on a crossover between two scaling regimes experimentally revealed for the viscous friction acting on a fluid drop in a confined space. In particular, we study the descending motion (due to gravity) of an oil droplet surrounded by another immiscible oil in a Hele-Shaw cell. The friction law thus revealed is nonlinear and replaces the well-known Stokes' law in the Hele-Shaw cell geometry.

The closely related topic of a bubble rising in a Hele-Shaw cell was theoretically discussed by Taylor and Saffman in a pioneering paper \cite{TAYLORSAFFMAN1959} in 1958 (earlier than Bretherton's paper on bubbles in tubes \cite{Bretherton,Clanet2004}). The solution of Taylor and Saffman was further discussed by Tanveer \cite{Tanveer1986}. There are many other theoretical works on fluid drops in the Hele-Shaw cell geometry, notably in the context of the topological transition associated with droplet breakup \cite{Eggers1997,ConstantinDupontGoldsteinKadanoffShelleyZhou1993,GoldsteinPesciShelley1995,Howell1999}. As for experimental studies, a number of researchers have investigated the rising motion of a bubble in a Hele-Shaw cell \cite{Maxworthy1986,Kopf-SillHomsy1988,MaruvadaPark1996}.
However, unlike the present study, systematic and quantitative studies in a constant-velocity regime have mostly concerned the case in which there is a forced flow in the outer fluid phase, and most of the studies have been performed with the cell strongly inclined, nearly to a horizontal position (one of the few examples of the case with the cell set in the upright position, but with external flow \cite{HeleShawPetroleum2010}, demonstrates the relevance of the present work to important problems in the petroleum industry, such as the suction of crude oil from the well).

One of the features of the present study compared with most previous ones on the dynamics of fluid drops in a Hele-Shaw cell is that in the present case the existence of a thin liquid film surrounding the fluid drop plays a crucial role: in many previous works, the existence of such thin films is not considered. In this respect, the present problem is closely related to dynamics governed by thin-film dissipation, such as the imbibition of textured surfaces \cite{StoneNM2007,IshinoReyssatEPL2007,ObaraPRER2012,TaniPlosOne2014,TaniSR2015,DominicVellaImbibition2016}. In this sense, our problem is quasi two-dimensional, although the geometry of the Hele-Shaw cell is often associated with a purely two-dimensional problem.

\section*{Experiment}

We fabricated a Hele-Shaw cell of thickness $D$ \cite{EriOkumura2007,EriOkumura2010,YokotaPNAS2011} and filled the cell with olive oil (150-00276, Wako; kinematic viscosity $\nu_{ex}=60$ cS and density $\rho_{ex}=910$ kg/m$^{3}$). This oil plays the role of an external surrounding liquid for a drop of poly(dimethylsiloxane) (PDMS) inserted at the top of the cell using a syringe (SS-01T, Termo). We observe the inserted drop going down in the cell, as illustrated in Fig. \ref{Fig1}(a), because of the density difference $\Delta\rho=\rho_{in}-\rho_{ex}>0$.
The drop density $\rho_{in}$ depends on its kinematic viscosity $\nu_{in}$ only slightly (see Methods for details). The drop size is characterized by the cell thickness $D$ and the width $R_{T}$, i.e., the size in the direction transverse to that of gravity (see Fig. \ref{Fig1}(b)), which is slightly smaller than the size in the longitudinal direction, $R_{L}$. As shown in Fig. \ref{Fig1}(c), a thin film of olive oil exists between a cell plate and the surface of the drop. We can think of two limiting cases for the distribution of liquid flow: (1) Internal Regime: the velocity gradient is predominantly created on the internal side of the droplet, as in the left illustration. (2) External Regime: the gradient predominantly exists on the external side of the droplet, as in the right illustration.

The width and height of the cell are 10 cm and 40 cm, respectively, and are much larger than the drop size, to remove any finite-size effects in the directions of width and height. The cell is made of acrylic plates of thickness 5 mm, to avoid thinning deformation of the cell due to the effect of capillary adhesion \cite{CapilaryText}.

We took snapshots of the descending drop at regular time intervals using a digital camera (Lumix DMC-G3, Panasonic) and a camera controller (PS1, Etsumi). The obtained data were analyzed with the software ImageJ to obtain the position as a function of time and thus determine the descending velocity of the drop. Some examples are shown in Fig. \ref{Fig1}(d). This plot shows the following facts. (1) The descending motion can be characterized by a well-defined constant velocity (to guarantee a long stationary regime, the cell height is made significantly larger (40 cm) than the drop size; because of the small density difference, the constant-velocity regime starts after a long transient regime).
(2) The descending velocity is dependent on the kinematic viscosity of the internal liquid of the drop, $\nu_{in}$, for the thinner cell ($D=0.7$ mm), as predicted in the previous study \cite{EriSoftMat2011}, which is not the case for the thicker cell ($D=1.5$ mm); these examples clearly demonstrate the existence of a novel scaling regime different from the one discussed in the previous study \cite{EriSoftMat2011}.

In the present study, the dependence of the descending velocity on the drop size is negligible. In the previous study \cite{EriSoftMat2011}, it was found that the descending speed of drops is dependent on $R_{T}$ for $R_{T}/D<10$ if a glycerol drop goes down in PDMS oil. However, in the present combination (i.e., a PDMS drop going down in olive oil), we do not observe a significant dependence on $R_{T}$ in our data even for fairly small drops, whereas $R_{T}$ is in the range $1.31k_{3}D/\kappa^{-1}$.
In other words, the phase boundary between the internal and external regimes is given by
\begin{equation}
\eta_{ex}/\eta_{in}=k_{3}D/\kappa^{-1}\label{eq10}%
\end{equation}
with $k_{3}=k_{1}^{3}/k_{in}$. This means that the phase boundary between the internal and external regimes is a straight line with slope $k_{3}$ in the plot of $\eta_{ex}/\eta_{in}$ as a function of $D/\kappa^{-1}$.

\section*{Experiment and theory}

The experimental data for the descending velocity of drops $V$ are plotted as a function of $\Delta\rho gD^{2}/\eta_{in}$ in Fig. \ref{Fig2}(a). In view of Eq. (\ref{eq4}), the data points in the internal regime would lie on a straight line of slope 1. This is almost true: there is a series of data points well on the dashed line of slope close to one.
Naturally, there is a slight deviation from the theory: the slope of the straight dashed line obtained by numerical fitting is in fact $1.24\pm0.06$, a value slightly larger than one, but the coefficient corresponding to $k_{in}$ is $0.150\pm0.015$, the order of magnitude of which is consistent with the scaling arguments.

Some detailed remarks on the above arguments are as follows. (1) Even in the previous study \cite{EriSoftMat2011}, in which the internal scaling regime was confirmed for the first time, the scaling regime described by Eq. (\ref{eq4}) was shown with some deviations, similarly to the present case (whereas another scaling regime first established in that paper \cite{EriSoftMat2011} is almost perfectly demonstrated). (2) We note here that the data represented by the red filled circle and the red filled inverse triangle are exceptional, and their seemingly strange behavior will be explained in the Discussion. (3) We have confirmed that even if we replace $D$ with $D-2h$ in the analysis (by using the thickness $h$ estimated from Eq. (\ref{eq6})) when $D$ is used as a length scale characterizing the viscous gradient (i.e., when $D$ is used in the expression $V/D$ in Eq. (2)), no visible differences are introduced into the plots given in Fig. \ref{Fig2} (this correction could be motivated by considering the existence of thin films surrounding the drops, as in Fig. \ref{Fig1}(c), as mentioned above).

In Fig. \ref{Fig2}(b), it is shown that some of the data we obtained clearly satisfy Eq. (\ref{eq8}), which describes the external regime. In Fig. \ref{Fig2}(b), we collected the data points that are off the dashed line of slope close to one in Fig. \ref{Fig2}(a) and that are thus ruled out from the internal regime. The data thus selected and plotted in Fig. \ref{Fig2}(b) are almost on the straight line of slope 3, in accordance with Eq. (\ref{eq8}).
The straight line is obtained by a numerical fitting with the slope fixed to 3.0; as a result of this fitting, the coefficient is given as $k_{1}=0.167\pm0.003$, the order of magnitude of which is consistent with the scaling arguments.

We confirm the scaling law of Eq. (\ref{eq8}) also in Fig. \ref{Fig2}(a). In light of Eq. (\ref{eq8}), the data in the external regime for a given $D$ should take almost the same values, because $\eta_{ex}$ and $\gamma$ are both constant and $\kappa^{-1}$ is almost constant (note that $\Delta\rho$ is almost constant) in the present study. In fact, in Fig. \ref{Fig2}(a), the data points for a fixed $D$ that are off the dashed line, which are shown to be in the external regime in Fig. \ref{Fig2}(b), take almost a constant value, that is, they are located almost on a horizontal line. This fact also confirms that the data in question are independent of $\eta_{in}$, that is, they are certainly not in the internal regime. Strictly speaking, the data labeled as a given $D$ can have slightly different measured values of $D$ (see Methods), which is the main reason the data for a ``given'' $D$ that are off the dashed line in Fig. \ref{Fig2}(a) deviate slightly from the straight horizontal line corresponding to that $D$ value.

The scaling law of Eq. (\ref{eq8}) can be confirmed in Fig. \ref{Fig2}(a) in yet another way. The open marks of the same shape, say diamonds, but with different colors (i.e., the data for a given $\nu_{in}$ but with different $D$) are almost on a straight line of slope close to one (this slope may seem slightly larger than one, which may be because of the uncertainty in the cell spacing $D$ already mentioned in the last sentence of the preceding paragraph, or because the exponent 3 in Eq. (\ref{eq8}) may in fact be slightly larger than 3 in a more complete theory beyond the present arguments at the level of scaling laws).
For such a series of data, the velocity $V$ scales with $D^{3}$ according to Eq. (\ref{eq8}); thus, when plotted as a function of $D^{2}$ as in Fig. \ref{Fig2}(a), the quantity scales linearly with $D$, as reasonably well confirmed.

The phase diagram based on Eq. (\ref{eq10}) is shown in Fig. \ref{Fig2}(c), in which we plot all the data (except for the special data mentioned above), to demonstrate further consistency of the present arguments. As expected from Eq. (\ref{eq10}), we can indeed draw a straight line of slope 1 in Fig. \ref{Fig2}(c) that divides the internal and external regimes: above the straight line of slope 1 in Fig. \ref{Fig2}(c) lie the data in the internal regime described by Eq. (\ref{eq4}), i.e., the data on the straight dashed line in Fig. \ref{Fig2}(a); below the straight line in Fig. \ref{Fig2}(c) lie the data in the external regime described by Eq. (\ref{eq8}), i.e., the data on the straight line in Fig. \ref{Fig2}(b). The coefficient $k_{3}$ of Eq. (\ref{eq10}), i.e., of the line dividing the two regimes shown in Fig. \ref{Fig2}(c), is $k_{3}=0.017$, the order of magnitude of which is consistent with the scaling arguments in a profound sense: the numerical coefficients $k_{in}$, $k_{1}$, and $k_{3}$ are predicted to satisfy the relation $k_{3}=k_{1}^{3}/k_{in}$, and this relation is satisfied at a quantitative level in the present analysis (0.017 vs $(0.167)^{3}/0.15\simeq0.031$). This quantitative agreement is indeed quite satisfactory if we consider the slight deviations of the data from the predicted theory. For example, the value $0.15$ used in the estimation in the parentheses is not the value of $k_{in}$ itself (the precise definition of $k_{in}$ is the coefficient appearing in Eq.
(\ref{eq4}), $V_{in}=k_{in}\Delta\rho gD^{2}/\eta_{in}$); the value 0.15 used above is in fact the value of the coefficient $k_{in}^{\prime}$ appearing in the relation $V_{in}=(k_{in}^{\prime}\Delta\rho gD^{2}/\eta_{in})^{\alpha}$, obtained when the data corresponding to the internal regime in Fig. \ref{Fig2}(a) are numerically fitted by this relation with $\alpha$ determined to be not one but close to 1.24, as mentioned in the first paragraph of Experiment and Theory. In addition, the exponent in Eq. (\ref{eq8}) might also deviate slightly from 3, as suggested in the previous paragraph.

The crossover from the internal to the external regime can be seen explicitly in the data for $D=1.0$ mm (red data) in Fig. \ref{Fig2}(a). As $\eta_{in}$ decreases from the left-most data for $\nu_{in}=30000$ cS (red open diamonds) to the data for $\nu_{in}=5000$ cS (red open inverse triangles), the velocity is independent of $\nu_{in}$, which reveals that the three data points on the horizontal line are in the external regime. However, the data for $\nu_{in}=1000$ cS and $\nu_{in}=500$ cS are on the straight dashed line with a slope close to one, which confirms that these two data points are in the internal regime. Since the phase boundary expressed by Eq. (\ref{eq10}) is obtained also by equating $V_{in}$ and $V_{ex}$ in Eqs. (\ref{eq4}) and (\ref{eq8}), the crossover between the two regimes occurs in Fig. \ref{Fig2}(a) near the crossing point between the horizontal line connecting the external-regime data for a given $D$ and the straight dashed line of slope close to one representing the internal regime.

The behavior of the data close to the crossover points is quite intriguing. The data for $D=2.0$ mm and 3.0 mm at $\nu_{in}=1000$ cS (green filled square and purple filled square) are located close to the phase boundary in Fig. 
\ref{Fig2}(c) (these data have already been confirmed to be in the internal regime in Fig. \ref{Fig2}(a): in that plot, they lie reasonably well on the dashed line). We have confirmed that, when these two data points are plotted in Fig. \ref{Fig2}(b), they are nearly on the straight line of slope close to 3. These two points can thus be described by both Eq. (\ref{eq4}) and Eq. (\ref{eq8}), which is reasonable because they are nearly on the phase boundary. However, this is not always the case. The data for $D=0.7$ mm and $\nu_{in}=5000$ cS (black filled inverse triangle) and for $D=1.5$ mm and $\nu_{in}=3000$ cS (blue open triangle) are also positioned close to the phase boundary in Fig. \ref{Fig2}(c). However, the former is rather in the internal regime and the latter rather in the external regime. This is in a sense logical, because the blue open triangle is relatively far from the crossover point for $D=1.5$ mm in Fig. \ref{Fig2}(a), whereas the black filled inverse triangle is not. In general, how quickly the crossover occurs seems to be a subtle problem.

\section*{Discussion}

The direct measurement of the thickness $h$ supports the above analysis. We used a laser distance sensor (ZS-HLDS2+ZS-HLDC11+Smart Monitor Zero Pro., Omron), as illustrated in Fig. \ref{Fig3}(a). The measurement is extremely delicate and difficult, because there are six reflective planes, I to VI, with significantly different reflection strengths, of which the two target reflections, II and III, are the smallest and the second smallest (see Fig. \ref{Fig3}(b)). The six surfaces are the front and back surfaces of the front cell plate (interfaces I and II), the front and back interfaces between the olive oil and the PDMS drop (interfaces III and IV), and the front and back surfaces of the back cell plate (interfaces V and VI).
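The hierarchy of these reflection strengths can be checked with the normal-incidence Fresnel reflectance, $R=\left((n_{1}-n_{2})/(n_{1}+n_{2})\right)^{2}$. A minimal sketch using the refractive indices quoted in the next paragraph:

```python
def fresnel_reflectance(n1, n2):
    """Intensity reflectance at normal incidence between media of
    refractive indices n1 and n2."""
    return ((n1 - n2) / (n1 + n2)) ** 2

# Refractive indices from the text.
n_air, n_acr, n_olive, n_pdms = 1.0, 1.491, 1.47, 1.403

interfaces = {
    "I   (air/acrylic)":   (n_air, n_acr),
    "II  (acrylic/olive)": (n_acr, n_olive),
    "III (olive/PDMS)":    (n_olive, n_pdms),
}
for name, (n1, n2) in interfaces.items():
    print(f"{name}: R = {fresnel_reflectance(n1, n2):.2e}")
```

The close index match between acrylic and olive oil makes reflection II weaker than III by roughly an order of magnitude, and both are orders of magnitude weaker than the air/acrylic reflection I, which is what makes the measurement delicate.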
To determine $h$, we need to detect the reflections from interfaces II and III, where the reflection from II is small compared with that from III (see Fig. \ref{Fig3}(b)) and significantly smaller than that from I, because the refractive index of olive oil is $n_{olive}=1.47$, that of the acrylic plate is $n_{acr}=1.491$, that of PDMS oil is $n_{PDMS}=1.403$, and that of air is $n_{air}=1$. Furthermore, the object (the descending drop) is moving. In spite of these experimental difficulties, we obtained a reasonably good correlation between the directly measured thickness and the value expected from the scaling argument, as shown in Fig. \ref{Fig3}(c), thanks to various efforts (for example, in the screen shot in Fig. \ref{Fig3}(b), the two target peaks are intentionally positioned off-center, because the measurement precision is maximized when the reflection angle is largest). Here, the slope of the line obtained by a numerical fit is $0.749\pm0.027$ (the slope here is not an exponent but the coefficient of the linear relationship), the order of magnitude of which is consistent with the scaling argument.

The exceptional data mentioned above reveal an intriguing phenomenon. In Fig. \ref{Fig2}(a), the data for $D=1$ mm and $\nu_{in}=10000$ cS are represented by two different marks, the red filled circle and the red open circle, with the former described by the internal regime and the latter by the external regime. The data for $D=1$ mm and $\nu_{in}=5000$ cS are likewise split into filled and open symbols. The experimental difference between these two types of data, obtained for identical drop viscosity and cell spacing, is that, when the drop goes down the same path multiple times in the same cell, the first drop is in the external regime (open marks), whereas any drop going down after the first one is always in the internal regime (filled marks).
This apparently mysterious effect is quite reproducible and can be understood by considering possible mixing of olive oil and PDMS at the surface of the drops. For the first drop, such a mixing effect is negligible and the drop is governed by the dynamics of the external regime. After the first descent, however, because of the mixing effect, the viscosity of the thin film surrounding the drop increases (since $\nu_{in}\gg\nu_{ex}$), so that sustaining a velocity gradient in the external thin film is no longer energetically favorable; instead, the velocity gradient develops inside the drop, realizing the dynamics of the internal regime. For this reason, the red filled circles and the red filled inverse triangles are not shown in the phase diagram in Fig. \ref{Fig2}(c). This seemingly mysterious behavior tends to be suppressed if the viscosity is too small (because the ``external'' film does not become sufficiently viscous) or too large (because the mixing is not sufficiently effective). This is why we observed this phenomenon only for the two values of viscosity.

The present study suggests that Stokes' drag friction $F=6\pi\eta_{ex}VR$ for a solid sphere of radius $R$ surrounded by a viscous liquid of viscosity $\eta_{ex}$ is replaced, in the external regime of the Hele-Shaw cell geometry, by
\begin{equation}
F_{ex}\simeq\eta_{ex}VR_{T}R_{L}/h\simeq\eta_{ex}Ca^{-2/3}VR_{T}R_{L}/\kappa^{-1}.\label{eq11}
\end{equation}
This expression possesses a nonlinear dependence on the velocity $V$ due to the extra $V$ dependence contained in the capillary number $Ca$.
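The nonlinearity can be made explicit: with the film thickness $h\simeq\kappa^{-1}Ca^{2/3}$ implied by the two forms of Eq. (\ref{eq11}) and $Ca=\eta_{ex}V/\gamma$,
\[
F_{ex}\simeq\frac{\eta_{ex}VR_{T}R_{L}}{\kappa^{-1}}\left(\frac{\eta_{ex}V}{\gamma}\right)^{-2/3}\propto\eta_{ex}^{1/3}\gamma^{2/3}V^{1/3},
\]
that is, the drag force grows only as $V^{1/3}$.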
This is strikingly different from the two other expressions for the drag force, $F_{in}\simeq\eta_{in}VR_{T}R_{L}/D$ and $F_{bubble}\simeq\eta_{ex}VR_{T}^{2}/D$, which are both linear in the velocity: the former corresponds to the internal regime in the present study, whereas the latter corresponds to the case in which the dominant dissipation is associated with the velocity gradient $V/D$ in the surrounding external liquid \cite{EriSoftMat2011}. The viscous friction forces, including the nonlinear friction in Eq. (\ref{eq11}), are relevant to the dynamics of emulsions, foams, antifoams, and soft gels \cite{Sylvie,AnnLaureSylvieSM2009,DominiqueMicrogravity2015}, and in particular to the nonlinear rheology of such systems \cite{DenkovSoftMat2009,DurianPRL10,CloitreNP2011}.

We intentionally used several times the phrase ``the order of magnitude of which is consistent with the scaling argument,'' which may seem vague compared with an expression like ``being of order one further supports the scaling argument.'' The reason we used the seemingly vague expression is that whether a coefficient in a scaling law is of order one is in fact a subtle issue. Depending on the problem, or on the definition of the coefficient, the coefficient can be considerably larger or smaller than one. An example of such a case can be given by exploiting the relation $k_{3}=k_{1}^{3}/k_{in}$ given above: the three coefficients $k_{1}$, $k_{3}$, and $k_{in}$ are all coefficients of scaling laws, so that, for example, $k_{1}$ and $k_{in}$ can be 5 and 1, respectively, but this implies that $k_{3}$ is much larger than one ($k_{3}=5^{3}$).

In the present study, the consistency of the whole set of scaling arguments is checked in several ways, which clearly deepens our physical understanding.
For example, a new scaling regime is demonstrated through a clear data collapse (Fig. \ref{Fig2}(b)), and the crossover from this regime to another is shown (Fig. \ref{Fig2}(a)); the picture is completed by the phase diagram (Fig. \ref{Fig2}(c)) and a separate measurement of the thin-film thickness (Fig. \ref{Fig3}(c)). In addition, the arrangement of the data in the crossover diagram (Fig. \ref{Fig2}(a)) is interpreted from various viewpoints, confirming the consistency of the arguments.

\section*{Conclusion}

In summary, we show in Fig. \ref{Fig2}(b) the existence of a novel scaling regime for the descending velocity of a drop surrounded by a thin external fluid film in a Hele-Shaw cell, in which the viscous dissipation in the thin film is essential. This regime corresponds to a nonlinear form of viscous drag friction. In this regime, the thickness of the film is determined by the LLD law, as directly confirmed in Fig. \ref{Fig3}(c). The crossover between this regime and another regime, in which the viscous dissipation on the internal side of the drop governs the dynamics, is shown in Fig. \ref{Fig2}(a). The phase boundary between the two regimes is given in Fig. \ref{Fig2}(c).

There are other scaling regimes for viscous drag friction in the Hele-Shaw cell geometry in the presence of thin films surrounding a fluid drop. For example, the dissipation associated with the velocity gradient $V/D$ in the internal drop liquid has been shown to be important for a rising bubble in a Hele-Shaw cell \cite{EriSoftMat2011}. The dissipation associated with the dynamic meniscus (in the context of the LLD theory \cite{LandauLevich,Derjaguin1943,CapilaryText}) formed in the external thin film has been found to be important in a non-Hele-Shaw geometry \cite{PascalEPL2002}.
In addition, the present external regime will give another scaling law if the capillary length $\kappa^{-1}$ is, unlike in the present study, larger than the cell thickness $D$.

Confirmation of such other regimes for viscous drag friction in the Hele-Shaw cell geometry, as well as of the crossovers among the various scaling regimes, could be explored in future studies. The simple friction laws for confined fluid drops and the crossovers between them revealed in the present study (and in future studies) are relevant to fundamental issues, including the rheology of foams and emulsions, as well as to applications such as microfluidics.

\section*{Methods}

The density of PDMS oil $\rho_{in}$ depends slightly on viscosity: (1) 970 kg/m$^{3}$ for the kinematic viscosities $\nu_{in}=500,1000,$ and $3000$ cS (SN-4, SN-5, and SN-6, As One); (2) 975 kg/m$^{3}$ for $\nu_{in}=5000$ and $10000$ cS (SN-7 and SN-8, As One); (3) 976 kg/m$^{3}$ for $\nu_{in}=30000$ cS (KF-96H, ShinEtsu).

The cell thickness $D$ is controlled by spacers and is directly measured using the laser distance sensor (ZS-HLDS5, Omron) for most of the cells. In all the figures of the present study, for simplicity, the cell thickness $D$ is represented by an approximate value, which differs slightly from the measured values. For some of the data the measurement of $D$ was not performed, and in such cases an approximate value of $D$ is used instead of a measured value to plot the data points. This does not cause serious difficulties in analyzing and interpreting the data, because the difference between the $D$ value used for labeling and the measured cell thickness is rather small.

The interfacial tension between PDMS and olive oil was measured by pendant drop tensiometry.
It has recently been discussed that values measured with pendant drops depend on the Bond number and the Worthington number, both of which scale with $B=\Delta\rho gR_{0}^{2}/\gamma$ ($R_{0}$: the drop radius at the apex of the pendant drop) when the drop size is of the same order of magnitude as the needle diameter, and that the measured value approaches the correct value as $B$ approaches one \cite{PendantDrop2015} (one could expect the experimental precision to be optimized when the drop is most ``swelled,'' that is, when the droplet is on the verge of detaching from the needle tip due to gravity, i.e., when $B=1$). We measured the tension as a function of $B$ by using the software OpenDrop, developed by Michael Neeson, Joe Berry, and Rico Tabor. We extrapolated the data thus obtained to the value at $B=1$ to obtain a pragmatic value, $\gamma=0.78$ mN/m, because it was experimentally difficult to approach $B=1$. This is possibly because the tension is quite small, which might lead to an extra error in the measurement.

Even though the measurement of the interfacial tension contains an extra error and our analysis numerically depends on the measured value, this does not introduce any uncertainty into the present arguments at the level of scaling laws. We explain this with an example. Introducing the experimentally measured value of the surface tension $\gamma_{m}$, we define a numerical coefficient $\beta$ through $\gamma=\beta^{2}\gamma_{m}$ and the corresponding capillary length $\kappa^{-1}=\beta\kappa_{m}^{-1}$. With these ``measured'' quantities, Eq. (\ref{eq8}) can be expressed as $\eta_{ex}V_{ex}/\gamma_{m}=k_{1,m}^{3}(D/\kappa_{m}^{-1})^{3}$ with $k_{1,m}^{3}=k_{1}^{3}/\beta$. By noting that the values of the interfacial tension and capillary length used in Fig. \ref{Fig2}(b) that experimentally confirms the relation Eq. 
(\ref{eq8}) are in fact not $\gamma$ and $\kappa^{-1}$ but $\gamma_{m}$ and $\kappa_{m}^{-1}$, respectively, we see that the coefficient we determined from Fig. \ref{Fig2}(b) is in fact not $k_{1}$ but $k_{1,m}$. However, since Eq. (\ref{eq10}) can be expressed as $\eta_{ex}/\eta_{in}=k_{3,m}D/\kappa_{m}^{-1}$ with $k_{3,m}=k_{3}/\beta$, the phase boundary line $\eta_{ex}/\eta_{in}=k_{3,m}D/\kappa_{m}^{-1}$ in the $(\eta_{ex}/\eta_{in},D/\kappa_{m}^{-1})$ plane and the line $\eta_{ex}/\eta_{in}=k_{3}D/\kappa^{-1}$ in the $(\eta_{ex}/\eta_{in},D/\kappa^{-1})$ plane have the same physical meaning. For these reasons, special care is needed when comparing the numerical coefficients obtained experimentally in the present study with more sophisticated experiments or calculations.
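For completeness, the rescaling used in the preceding paragraph can be verified explicitly. Since $\kappa^{-1}=(\gamma/(\Delta\rho g))^{1/2}$, setting $\gamma=\beta^{2}\gamma_{m}$ indeed gives $\kappa^{-1}=\beta\kappa_{m}^{-1}$, and
\[
\frac{\eta_{ex}V_{ex}}{\gamma_{m}}=\beta^{2}\,\frac{\eta_{ex}V_{ex}}{\gamma}=\beta^{2}k_{1}^{3}\left(\frac{D}{\beta\kappa_{m}^{-1}}\right)^{3}=\frac{k_{1}^{3}}{\beta}\left(\frac{D}{\kappa_{m}^{-1}}\right)^{3},
\]
so that $k_{1,m}^{3}=k_{1}^{3}/\beta$; the same manipulation applied to Eq. (\ref{eq10}) gives $k_{3,m}=k_{3}/\beta$.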