diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzimac" "b/data_all_eng_slimpj/shuffled/split2/finalzzimac" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzimac" @@ -0,0 +1,5 @@ +{"text":"\\section{intro}\n\nStudies to exorcise Maxwell's demon \\cite{leff2014maxwell} shed light on the relationship between information theory and statistical physics \\cite{szilard1929entropieverminderung,bennett1982thermodynamics,lloyd1989use,lloyd1997quantum,kim2011quantum,schaller2011probing}.\nResearchers reported models for the Maxwell demon, including the demon as a feedback controller \\cite{touchette2000information,touchette2004information,sagawa2008second,allahverdyan2009thermodynamic,cao2009thermodynamics,fujitani2010jarzynski,ponmurugan2010generalized,horowitz2010nonequilibrium,sagawa2010generalized,abreu2012thermodynamics,sagawa2012fluctuation,sagawa2012nonequilibrium,esposito2012stochastic,hartich2014stochastic,horowitz2014thermodynamics,horowitz2014second,shiraishi2015role,miyahara2018work,ribezzi2019large,ito2013information,sandberg2014maximum}, information reservoir such as a bit sequence \\cite{mandal2012work,mandal2013maxwell,deffner2013information,strasberg2014second,barato2014stochastic,merhav2015sequence,strasberg2015thermodynamics,chapman2015autonomous,boyd2016identifying,mcgrath2017biochemical,strasberg2017quantum,stopnitzky2019physical,boyd2017leveraging}, and the unifying approaches \\cite{horowitz2013imitating,barato2014unifying,shiraishi2016measurement}.\nIn those studies, researchers have shown second-law-like inequalities ({\\SLLI}s), establishing that the second law of thermodynamics (SLT)\n\\begin{IEEEeqnarray}{lCr}\n\\Delta\\shaent{\\Xtot}+\\Qtot\\geq 0,\n\\label{eq:SLT}\n\\end{IEEEeqnarray}\nis not violated by the demon in principle, where we denote by $\\shaent{\\Xtot}$ and $\\Qtot$ the entropy and the heat transfer of the whole system in contact with a heat bath.\nWe assume that all processes are isothermal processes with a heat bath of temperature $T$ in this Letter. 
We set $k_B T = 1$.\n\nThe {\\SLLI}s appeared in those previous studies can be tentatively classified into two types: the first one, which was reported in the studies of the feedback control model, asserting that the entropy of the system can be reduced by the amount of information gathered by the controller (demon):\n\\begin{IEEEeqnarray}{lCr}\n\\Delta \\shaent{X} + Q \\geq \\genI,\n\\IEEEeqnarraynumspace\n\\label{eq:HItype1}\n\\end{IEEEeqnarray}\nand the second one, which was reported in the studies of the information reservoir, asserting that work can be extracted in exchange for the entropy production of the information reservoir:\n\\begin{IEEEeqnarray}{lCr}\n\\Wavg \\leq \\Delta \\shaent{X},\n\\IEEEeqnarraynumspace\n\\label{eq:WHtype2}\n\\end{IEEEeqnarray}\nwhere $X$ denotes the state of the controlled system such as gases enclosed in a piston or a bit sequence;\n$Q$ denotes the heat transfer from the controlled system;\n$\\Wavg$ denotes the work extracted from the controlled system;\nFor $\\genI$ in the inequality~(\\ref{eq:HItype1}), several quantities such as transfer entropy have been employed.\nIn the following,\nwe specify it as the mutual information production: $\\genI = \\muI{\\Xsys\\prm}{\\memX}-\\muI{\\Xsys}{\\memX}$, where $\\memX$ is the memory's state.\nThen, the \\SLLI~(\\ref{eq:HItype1}) coincides with the one shown in Ref.~\\cite{sagawa2012fluctuation}, \n\n\n\nBoth types of the previous {\\SLLI}s do not imply that feedback control increases the \\textit{net maximum work}.\nHere, we call the upper limit of extracted work that accounts for the memory cost determined by the Landauer's principle \\cite{landauer1961irreversibility,sagawa2009minimal} the net maximum work. \nIn other words, the net maximum work is the upper limit of $\\Wavg$ minus the lower limit of the memory cost.\nAccording to the Landauer's principle, the work required to exploit the memory is greater than $\\genI$.\nThus, the \\SLLI~(\\ref{eq:HItype1}) does not imply that the net maximum work increases by measurements because $\\genI$ is canceled out by the memory cost.\nLikewise, the \\SLLI~(\\ref{eq:WHtype2}) does not provide the difference of the net maximum work between feedback and open-loop control because all of the terms in the \\SLLI~(\\ref{eq:WHtype2}) do not vary depending on whether the control is feedback or open-loop.\n\nOur first main result\nis that the sum of the entropy bounds for all subsystems leads to\nthe first \\SLLI\n\\begin{IEEEeqnarray}{lCr}\n\\Delta\\Htot + \\Qtot \\geq \\DelI\n\\IEEEeqnarraynumspace\n\\label{eq:HHQD5}\n\\end{IEEEeqnarray}\nimplying that feedback control increases the net maximum work.\nHere, $\\DelI$ is the positive quantity defined later.\nDue to the positivity of $\\DelI$, this \\SLLI is a stronger bound than the SLT~(\\ref{eq:SLT}).\nIn terms of extracting work, $\\DelI$ coincides with how much the system's entropy production is useless for a controller to extract the work from the system.\nAlthough $\\DelI$ can grow in proportional to the system size, the memory cost can be constant with the system size when there are correlations within the system.\nHence, the feedback control can vanish $\\DelI$ with significantly less memory cost than $\\DelI$.\nThus, the feedback control can increase the net maximum work.\n\n\nIn addition, the sum of the entropy bounds of the controlled subsystems leads to\nthe stronger inequality than {\\SLLI}~(\\ref{eq:HItype1}):\n\\begin{IEEEeqnarray}{lCr}\n\\Delta \\Hsys + \\Qsys \\geq \\genI + 
\\DelIsys,\n\\label{eq:HQID23}\n\\end{IEEEeqnarray}\nwhere $\\DelIsys$ denotes the positive quantity similar to $\\DelI$. \nThis inequality is our second main result and derived at the end of the main result part.\nLikewise, the stronger bound corresponding to \\SLLI~(\\ref{eq:WHtype2}) is immediately derived from \\SLLI~(\\ref{eq:HHQD5}) by\nregarding all subsystems as a controlled system such as a bit sequence and ignoring the change in the internal energy:\n\\begin{IEEEeqnarray}{lCr}\n\\Wavg \\leq \\Delta \\Htot - \\DelI,\n\\label{eq:WQHD4}\n\\end{IEEEeqnarray}\n\n\n\n\n\n\\textit{Main result --}\nWe study a classical system in contact with a heat bath of temperature $T$.\nWe assume that the system and the heat bath constitute a closed system.\nThe system consists of non-overlapping subsystems $1, 2,\\dots, \\Nx$.\nThe evolution is Markovian with discretized time intervals, and we focus on a single step of this discrete Markov process.\nThe initial and final states of the $k$-th subsystem are denoted by $\\Xs{k}$ and $\\Xs{k}\\prm$, respectively.\nLet $\\DepX{k}$ be the subset of $\\Xtot=\\{\\Xs{1},\\dots,\\Xs{\\Nx}\\}$ that influences the evolution of $\\Xk$ other than $\\Xk$ itself.\nIn other words, the state $\\Xs{k}$ evolves depending only on $\\{\\Xs{k}\\}\\cup\\DepX{k}$\n\\footnote{For the formal definition of the independence, see Definition~\\ref{def:dependence} in Section~\\ref{sect:Positivity of Co-dissipation}.\n}\n.\n\nWe premise the local detailed balance for each subsystem, which result in the entropy bound for a subsystem\n\\footnote{We demonstrate the inequality~(\\ref{eq:yuragi_k}) in\nSection~\\ref{ap:Fluctuation relation for a subsystem}.}\n:\n\\begin{IEEEeqnarray}{rCl}\n\\YuragiK := \\shaent{\\Xs{k}\\prm\\mid \\DepX{k}} - \\shaent{\\Xs{k}\\mid \\DepX{k}} + \\Qs{k} \\geq 0,\n\\label{eq:yuragi_k}\n\\end{IEEEeqnarray}\nwhere we denote by $\\YuragiK$ the total entropy production accompanied with the evolution of $\\Xs{k}$ and by $\\Qs{k}$ the heat transfer from the $k$-th subsystem to the heat bath.\n\nThe dependency among subsystems induces the graph $\\GcaC{0}=\\{\\VcaC{0},\\EcaC{0}\\}$ as follows:\n\\begin{IEEEeqnarray}{lCr}\n\\VcaC{0} := \\{\\{\\Xs{1}\\},\\{\\Xs{2}\\},\\dots,\\{\\Xs{\\Nx}\\}\\},\n\\label{eq:Vdef}\n\\\\\n\\EcaC{0} :=\\{\\EdgeX{j}{k}\\mid \\Xj \\in \\DepXX{\\Xk}, \\Xk\\in\\Xtot\\},\n\\label{eq:Edef}\n\\end{IEEEeqnarray}\nwhere we denote by $\\EdgeX{j}{k}$ the directed edge from the vertex $\\{\\Xs{j}\\}$ to the vertex $\\{\\Xs{k}\\}$.\nThe graph $\\GcaC{0}$ may contain cycles.\nTo eliminate dependencies causing these cycles, we consider the coarse-grained subsystems where original subsystems are merged if they are members of the same cycle in $\\Gorg$.\nHereinafter, $\\Xj$ denotes the states of the $j$-th subsystem in the coarse-grained subsystems, and $\\Nx$ denotes the number of the coarse-grained subsystems.\nWe denote by $\\Gca=(\\Vca,\\Eca)$ the graph induced from the coarse-grained subsystems by the same way shown in Eqs.~(\\ref{eq:Vdef}) and (\\ref{eq:Edef}).\nBy its definition, $\\Gca$ is a directed acyclic graph (DAG).\n\nLet $\\Gcomp=(\\Vcomp,\\Ecomp)$ be a connected component of $\\Gca$.\nThe graph sequence $\\GcompS{1}, \\GcompS{2}, \\cdots, \\GcompS{\\Ncomp}$ is constructed by the following procedure applying the edge contraction \\cite{gross2003handbook} repeatedly to $\\Gcomp$:\n\\begin{easylist}[enumerate]\n@ Initialization: let $j=1$, $\\GcompS{1}=\\Gcomp$.\n@ If the graph $\\GcompS{j}$ has exactly one vertex, terminate. 
Otherwise, go to step \\ref{list:step3}.\n\\label{list:step2}\n@ Select a vertex couple $\\pcPair$ from $\\GcompS{j}$ such that every child of $\\p$ is a sink, and $\\c$ is a child of $\\p$.\n\\label{list:step3}\n@ Execute the edge contraction for $\\p$ and $\\c$, and let the result be $\\GcompS{j+1}$.\n\\label{list:step4}\n@ Increment $j$, and go back to step \\ref{list:step2}.\n\\end{easylist}\n\\vspace{0.5em}\n\\noindent This procedure is feasible because a DAG with at least two vertices always has a vertex whose children are all sinks.\nThe essence of this procedure is that any $\\GcompS{j}$ is a DAG, and the adjacency of $\\Gca$ is preserved \\footnote{See Lemma~\\ref{lem:ReduceGraphProperties} in Section~\\ref{sect:Properties of the graph sequence}.}.\n\n\\newcommand{\\hat{V}}{\\hat{V}}\n\nFor simplicity, we introduce a convention regarding collections of subsystems $\\hat{V}$.\nSuppose a function $f$ takes an arbitrary number of subsystems as arguments.\nThen, we just write $f(\\hat{V})$ in the sense of $f(\\Xs{V_1},\\Xs{V_2}\\cdots)$, where $\\Xs{V_j}$ is the $j$-th member of $\\hat{V}$.\nLikewise, we denote by $\\hat{V}\\prm$ the collection of the subsystems' final states in $\\hat{V}$.\nFurthermore, we denote by $\\Flatten(\\hat{V})$ the set of the subsystems created by flattening the nested structure of $\\hat{V}$.\nFor example, if $\\hat{V}=\\{\\{\\Xs{1},\\Xs{2}\\},\\{\\Xs{3},\\Xs{4}\\}\\}$, then\n\\begin{IEEEeqnarray}{cCc}\n\\Prb{}{\\hat{V}}=P(\\Xs{1},\\Xs{2},\\Xs{3},\\Xs{4}),\n\\label{eq:PVPXXXX8}\n\\\\\n\\hat{V}\\prm=\\{\\{\\Xs{1}\\prm,\\Xs{2}\\prm\\},\\{\\Xs{3}\\prm,\\Xs{4}\\prm\\}\\},\n\\label{eq:VXXXX9}\n\\\\\n\\Flatten(\\hat{V})=\\{\\Xs{1},\\Xs{2},\\Xs{3},\\Xs{4}\\}.\n\\end{IEEEeqnarray}\nWe denote by $\\Qset{\\hat{V}} := \\Sum{X\\in\\Flatten(\\hat{V})}{}\\Qs{}$ the heat transfer from all the subsystems composing $\\hat{V}$.\nWe denote by $\\DepXX{\\hat{V}}:=\\bigcup_{\\Xx\\in \\Flatten(\\hat{V})}{\\DepXX{\\Xx}}$ the set of subsystems influencing the evolution of the members of $\\hat{V}$.\n\nWe use another notation to ignore the empty set as the conditioning event:\n\\begin{IEEEeqnarray}{cCr}\nP(V\\mid\\emptyset) := P(V),\n\\quad\n\\shaent{V\\mid\\emptyset} := \\shaent{V},\n\\\\ \n\\MutualInfo{V}{W\\mid\\emptyset}{}:=\\MutualInfo{V}{W}{}.\n\\label{eq:convention_empty_condition}\n\\end{IEEEeqnarray}\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=0.45\\textwidth]{F3_PC_graph.eps}\n\\caption{(Color online). 
The schematic diagram of the sets defined as Eqs.~(\\ref{eq:dpdpdp14}) and $\\DelITwoPC$, $\\DelIThreePC$, and $\\DelIFourPC$.\n}\n\\label{fig:pc_depend}\n\\end{center}\n\\end{figure}\n\nIn addition, \nwe prepare notations for sets of subsystems regarding the dependency of $\\p$ and $\\c$:\n\\begin{IEEEeqnarray}{rCl}\n\\aaa := \\DepXX{c} \\cap \\p&,&\n\\quad\n\\CE := \\DepXX{\\c} \\setminus \\p,\n\\nonumber\n\\\\\n\\D := \\DepXX{\\p} \\setminus \\DepXX{c}&,&\n\\label{eq:dpdpdp14}\n\\quad\n\\DCE := \\DepXX{\\p\\cup\\c} \\setminus \\p,\n\\\\\n\\C := \\DepXX{\\c} \\setminus \\DepXX{\\p} \\setminus \\vMergeP&,&\n\\quad\n\\E := \\DepXX{\\p} \\cap \\DepXX{\\c} \\setminus \\p\\,.\n\\nonumber\n\\end{IEEEeqnarray}\nAs illustrated in Fig.~\\ref{fig:pc_depend}, these sets have the following meaning: \n$\\aaa$ are the subsystems that are members of $\\p$ and influence to $\\c$; $\\D$ are the subsystems that influence $\\p$ but not $\\c$; $\\E$ are the subsystems that influence both $\\p$ and $\\c$; $\\C$ are the subsystems that influences $\\c$ but not $\\p$. We also note that $\\CE=\\C\\cup\\E$ and $\\DCE = \\C\\cup\\E\\cup\\D$. The minus symbol ``-'' put at the superscript of $\\Dep$ represents the elimination of $\\p$.\n\nThe key quantity $\\DelI$ is the sum of two modules, each caused by the independency of the evolution within or among connected components in $\\Gca$.\nThe first module $\\DelIinner{\\Gcomp}$, which is caused by the independency within a connected component, is provided by\n\\begin{IEEEeqnarray}{cCc}\n\\DelIinner{\\Gcomp}\n:= \n- \\Sum{(p,c)\\in \\Vpccc}{}\n\\left(\\DelIOne{p,c} + \\DelITwo{p,c}+\\DelIThree{p,c} +\\DelIFour{p,c}\\right),\n\\IEEEeqnarraynumspace\n\\label{eq:def_DELI_intra}\n\\end{IEEEeqnarray}\nwhere $\\Vpccc$ is the set of all $\\pcPair$ selected through constructing the graph sequence $\\GcompS{1},\\cdots,\\GcompS{\\Ncomp}$, and each term of $\\DelIinner{\\Gcomp}$ is mutual information production as follows:\n\\begin{IEEEeqnarray}{cCc}\n\\DelIOne{p,c} := \\muI{\\p\\prm}{\\c\\prm\\mid\\DCE}-\\MutualInfo{p}{c\\prm\\mid \\DCE}{},\n\\label{eq:def_delIOne}\n\\\\\n\\DelITwo{p,c} := \\MutualInfo{p\\prm}{\\C\\mid \\DE}{}- \\MutualInfo{p}{ \\C\\mid \\DE}{},\n\\label{eq:def_delITwo}\n\\\\\n\\DelIThree{p,c} := \\MutualInfo{c\\prm}{\\D\\mid p,\\CE}{}- \\MutualInfo{c}{\\D\\mid p,\\CE}{} ,\n\\label{eq:def_delIThree}\n\\\\\n\\DelIFour{p,c} :=\n\\MutualInfo{\\c\\prm}{\\p\\mid \\DepC}{}\n-\n\\MutualInfo{\\c}{\\p\\mid \\DepC}{},\n\\label{eq:def_delIFour}\n\\end{IEEEeqnarray}\nwhere we used the convention introduced in Eqs.~(\\ref{eq:PVPXXXX8}) and (\\ref{eq:VXXXX9}).\nThe independencies of the evolutions induced from the definitions~(\\ref{eq:dpdpdp14})\nlead to the negativity of $\\DelIOnePC$, $\\DelITwoPC, \\DelIThreePC$, and $\\DelIFourPC$ \n\\footnote{For the formal definition of the independent evolution, see Definition~\\ref{def:independent Markov process} in Section~\\ref{sect:Positivity of Co-dissipation}.\nIn addition, Lemma~\\ref{lem:ind_mutual_prod_nonincrease1} in this section establishes the negativity of the mutual information productions such as $\\DelIOnePC$, $\\DelITwoPC, \\DelIThreePC$ and $\\DelIFourPC$.}.\nFor example, $\\DelIOnePC$ is the mutual information production through the independent process of $\\p$ from $\\c\\prm$ conditioned on $\\DCE$.\nThis fact leads to the negativity of $\\DelIOnePC$.\nConsequently, $\\DelIinner{\\Gcomp}$ is positive.\nLikewise, the following quantity is caused by the independency among the connected components in 
$\\Gca$:\n\\begin{IEEEeqnarray}{rCl}\n\\DelIinter{\\Gca}\n :=\\Sum{j=2}{\\Ncc}\\left[\\MutualInfo{\\VcompI{j}}{\\VcompI{1:j-1}}{}\n - \\MutualInfo{{\\VcompIprm{j}}}{{\\VcompIprm{1:j-1}}}{}\n \\right],\n\\IEEEeqnarraynumspace\n\\label{eq:defDelIinter}\n\\end{IEEEeqnarray}\nwhere we denote by $\\Ncc$ the number of connected components in $\\Gca$ and by $\\VcompI{j}$ all vertices in the $j$-th connected component of $\\Gca$.\nWe use the colons to represent the union of indexed sets such as $\\VcompI{1:j-1}:=\\bigcup_{k=1}^{j-1}\\VcompI{k}$. \nSince subsystems contained in different connected components evolve mutually independently, the mutual information production $\\MutualInfo{\\VcompIprm{j}}{\\VcompIprm{1:j-1}}{}\n - \\MutualInfo{{\\VcompI{j}}}{{\\VcompI{1:j-1}}}{}$ is negative, which leads to the positivity of $\\DelIinter{\\Gca}$.\nWe define $\\DelI$ as the sum of the two contributions:\n\\begin{IEEEeqnarray}{rCl}\n\\DelI := \\Sum{\\Gcomp\\in\\Gca}{}\\DelIinner{\\Gcomp}+\n\\DelIinter{\\Gca}.\n\\IEEEeqnarraynumspace\n\\label{eq:defDelI39}\n\\end{IEEEeqnarray}\nThe first main result of this Letter is the SLLI~(\\ref{eq:HHQD5}) with $\\DelI$ provided by Eq.~(\\ref{eq:defDelI39}).\nFor a rigorous proof of this result, see Theorem~\\ref{lem:sum_all_is_bound} in Section~\\ref{subsec:Calculation of the sum of YuragiRel using Gca}, where we demonstrate that the sum $\\Sum{k=1}{\\Nx}\\YuragiK$ results in the SLLI~(\\ref{eq:HHQD5}).\nMoreover, since all of its modules are positive, $\\DelI$ takes a positive value. \nThe positivity is formally proved in Theorem~\\ref{lem:JKLMI} in Section~\\ref{sect:Positivity of Co-dissipation}.\nThus, by the positivity of $\\DelI$, the \\SLLI~(\\ref{eq:HHQD5}) is a stronger bound than the SLT~(\\ref{eq:SLT}).\nIn particular, if all subsystems influence each other, then $\\DelI$ vanishes, and \\SLLI coincides with the SLT~(\\ref{eq:SLT}).\n\nTo obtain the \\SLLI~(\\ref{eq:HQID23}), we take the sum of the entropy bounds within the controlled system.\nWe regard $\\Xs{1}$ as the state $Y$ of the memory and $\\Xs{2:\\Nx}$ as the state $\\Xsys$ of the controlled system.\nLet $\\DelIsysS$ be $\\DelI$ determined by the definition~(\\ref{eq:defDelI39}) under the setting where the subsystems $2,3,\\dots,\\Nx$ constitute the whole system.\nIn addition, let $\\DelIsys$ be the quantity provided by adding $Y$ as the conditioning event in $\\DelIOnePC, \\DelITwoPC, \\DelIThreePC$, and $\\DelIFourPC$ when calculating $\\DelIsysS$.\nThen, we can show\n\\footnote{See Corollary~\\ref{th:strong_type1} in Section~\\ref{subsec:Calculation of the sum of YuragiRel using Gca}.}\nthat the sum $\\Sum{k=2}{\\Nx}\\YuragiK$ leads to the \\SLLI~(\\ref{eq:HQID23}).\nThe positivity of $\\DelIsys$ is established by a similar way of proving the positivity of $\\DelI$.\n\n\n\n\\textit{Example 1 --}\nWe observe that \\SLLI~(\\ref{eq:HHQD5}) implies that the feedback control increases the net maximum work contrary to the SLT~(\\ref{eq:SLT}) and the {\\SLLI}s~(\\ref{eq:HItype1})(\\ref{eq:WHtype2}) by the model inspired by the Szilard engine. \nWe regard subsystem 1 as a controller with a 1-bit memory and subsystems 2 through $N$ as a controlled system.\nAs illustrated in Fig.~\\ref{fig:f1}, the controlled system consists of $N-1$ ideal gas particles in a rigid box.\nIn the initial state, the box is separated by a wall, and all particles are on the left or right side of the wall with a probability of 0.5. 
Then, subsystems of the controlled system correlate with each other: $\\muI{\\Xj}{\\Xk}=\\ln2$ for $j,k\\geq2$.\nWe assume that feasible controls consist of three actions: the controller moves the wall to the left or right edge of the box or pulls the wall out from the box.\nWe investigate only when the final state is the maximum entropy state, where the particles move around the entire box.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.43\\textwidth]{PRL_F1.eps}\n \\caption{The initial and final state in Example 1.}\n \\label{fig:f1}\n\\end{figure}\n\nWe suppose that the feedback control consists of two steps: 1) the controller performs a binary measurement of whether a single particle stays on the left or right side without error; 2) the controller moves the wall to the opposite side of the particles.\nIn the second step, the entropy productions of the whole system and the controlled system are $\\Delta\\Htot=(N-1)\\ln2$ and $\\Delta\\Hsys=(N-2)\\ln2$. In addition, the increase in mutual information between the memory and the controlled system is $-\\ln2$, and the memory cost determined by the Landauer's principle is $\\ln2$.\n\nBesides the feedback control, we can consider three cases in terms of the measurement and the dependency.\nWith the measurement and without the influence from the memory's state on the controlled system's evolution, which we call the ``open-loop with measurement (\\OpenLoopWithMeasure),''\n$\\Delta\\Htot=(N-1)\\ln2$, $\\Delta\\Hsys=(N-2)\\ln2$ and the memory cost is $\\ln2$.\nWithout the measurement, regardless of the influence of the memory state on the controlled evolution, $\\Delta\\Htot=\\Delta\\Hsys=(N-2)\\ln2$ with the zero memory cost. \nWe call these cases the ``open-loop control without the measurement (\\OpenLoopWithoutMeasure).''\n\n\nThen, although the SLT~(\\ref{eq:SLT}) and the {\\SLLI}s~(\\ref{eq:HItype1})(\\ref{eq:WHtype2}) claim the same net maximum work $(N-2)\\ln2$ for all four cases,\nour \\SLLI~(\\ref{eq:HHQD5}) asserts a strict limit such that no work can be extracted without the feedback control:\n\\begin{IEEEeqnarray}{lCr}\n\\Wavg \\leq\n\\begin{cases}\n(N-1)\\ln2 & (\\text{feedback control})\\\\\n0 & (\\text{\\OpenLoopWithMeasure or \\OpenLoopWithoutMeasure})\n\\end{cases}\n\\label{eq:NLN205}\n\\end{IEEEeqnarray}\nbecause\nwe have \\footnote{\nSee Section~\\ref{ap:Calculation of $\\DelI$ in Example 1}.\n}\n\\begin{IEEEeqnarray}{lCr}\n\\DelI=\n\\begin{cases}\n0 & (\\text{feedback control}) \\\\\n(N-1)\\ln2 & (\\text{\\OpenLoopWithMeasure})\\\\\n(N-2)\\ln2. 
& (\\text{\\OpenLoopWithoutMeasure})\n\\end{cases}\n\\end{IEEEeqnarray}\nAccordingly, only the feedback control enjoys the positive net maximum work $(N-2)\\ln2$ despite $-\\ln2$ or zero for the \\OpenLoopWithMeasure or the \\OpenLoopWithoutMeasure case.\nIn addition, this advantage of the feedback control \nincreases with the number of particles, which means the feedback control can be significantly beneficial more at a macroscopic level.\n\n\n\n\n\\textit{Example 2 --}\nTo observe how $\\DelIOnePC$ appears,\nwe consider a two-body system with the following dependency:\n\\begin{IEEEeqnarray}{lCr}\n\\DepX{1} = \\emptyset, \\quad \\DepX{2} = \\{\\Xs{1}\\}\n\\label{eq:x1tox2depen}\n\\end{IEEEeqnarray}\nThen, $\\Ncc=1$ and $\\pcPair=(\\Xs{1},\\Xs{2})$.\nSuppose that all subsystems take binary states, and\nthe entropy of each subsystem takes the maximum amount: $\\shaent{\\Xj}=\\shaent{\\Xj\\prm}=\\ln2$ for all $j$.\nWe assume the following correlations:\n\\begin{IEEEeqnarray}{cCc}\n\\muI{\\Xs{1}}{\\Xs{2}} = \\muI{\\Xs{1}}{\\Xs{2}\\prm} = \\muI{\\Xs{2}}{\\Xs{2}\\prm} = \\ln2,\n\\\\\n\\muI{\\Xs{1}}{\\Xs{1}\\prm} = \\muI{\\Xs{2}}{\\Xs{1}\\prm} = \\muI{\\Xs{2}\\prm}{\\Xs{1}\\prm} = 0.\n\\end{IEEEeqnarray}\nThen, we have $\\Delta \\Htot=\\ln2$. Since $\\DelIOnePC=\\ln2$ and $\\DelITwoPC = \\DelIThreePC = \\DelIFourPC = 0$, we obtain $\\DelI=\\ln2$. \nAssuming that the internal energy is constant, our inequality~(\\ref{eq:WQHD4}) implies $\\Wavg \\leq 0$\nalthough the SLT~(\\ref{eq:SLT}) and the {\\SLLI}s~(\\ref{eq:HItype1}) (\\ref{eq:WHtype2}) claim that the net maximum work is $\\ln 2$.\n\n\\textit{Example 3 --}\nTo observe how $\\DelITwoPC, \\DelIThreePC$ and $\\DelIFourPC$ appear, we consider a system consisting of five subsystems with the following dependency:\n\\begin{IEEEeqnarray}{cCr}\n\\DepX{1} = \\{\\Xs{3}\\},\\quad\n\\DepX{2} = \\{\\Xs{3}, \\Xs{5}\\},\\quad\n\\DepX{3} = \\{\\Xs{4}\\},\n\\NonumberNewline\n\\DepX{4} = \\{\\Xs{5}\\},\\quad\n\\DepX{5} = \\emptyset\n\\IEEEeqnarraynumspace\n\\end{IEEEeqnarray}\nThen, $\\Ncc=1$ and the sequence of $\\pcPair$ can be $(\\Xs{3},\\Xs{1})$, $(\\{\\Xs{1},\\Xs{3}\\}, \\Xs{2})$, $(\\Xs{4}, \\{\\Xs{1},\\Xs{2}, \\Xs{3}\\})$, $(\\Xs{5}, \\{\\Xs{1}, \\Xs{2},\\Xs{3}, \\Xs{4}\\})$. \nWe suppose all subsystems take binary states except $\\Xs{1}$, which takes four states.\nWe denote by $\\Xs{1H}$ and $\\Xs{1V}$ the first and second bit of $\\Xs{1}$.\nThe entropy of each subsystem takes the maximum amount: $\\shaent{\\Xj}=\\shaent{\\Xj\\prm}=\\ln2$ for all $j$ except $\\shaent{\\Xs{1}}=\\shaent{\\Xs{1}\\prm}=2\\ln2$.\nThere are no correlations among the subsystems other than the following:\n\\begin{IEEEeqnarray}{rCl}\n\\muI{\\Xs{1H}}{\\Xs{2}} &=& \\muI{\\Xs{2}}{\\Xs{4}} \n= \\muI{\\Xs{1V}}{\\Xs{3}} \n\\NonumberNewline\n&=& \\muI{\\Xs{3}}{\\Xs{5}} = \\ln2.\n\\end{IEEEeqnarray}\nThen, we have $\\Delta \\Htot = 4\\ln2$. \nSince $\\DelIOnePC=\\DelITwoPC = \\DelIThreePC=\\DelIFourPC=0$ other than $\\DelIThree{\\Xs{3},\\Xs{1}}=\\DelITwo{\\{\\Xs{3},\\Xs{1}\\}, \\Xs{2}}=\\DelIFour{\\{\\Xs{3},\\Xs{1}\\}, \\Xs{2}}=\\ln2$, we obtain $\\DelI=3\\ln2$. \nAssuming that the internal energy is constant, our inequality~(\\ref{eq:WQHD4}) implies $\\Wavg \\leq \\ln2$, although the SLT~(\\ref{eq:SLT}) and the {\\SLLI}s~(\\ref{eq:HItype1})(\\ref{eq:WHtype2}) assert that the maximum work is $4\\ln2$. \nIndeed, when the third subsystem performs feedback control, the maximum work $\\ln2$ is achieved. 
\n\nMeanwhile, if the dependency is provided by \n$\\DepX{1} = \\emptyset,\\quad\\DepX{2} = \\DepX{3} = \\DepX{4} = \\DepX{5} = \\{\\Xs{1}\\}$,\nthen $\\DelI$ is zero, and our \\SLLI claims the same upper limit $\\Wavg \\leq 4\\ln2$ as the SLT and the {\\SLLI}s~(\\ref{eq:HItype1})(\\ref{eq:WHtype2}).\n\n\\textit{Discussion --}\nBy definition, $\\DelI$ is the decrease in mutual informations between subsystems evolving independently.\nPrecisely, $\\DelIinner{\\Gcomp}$ is the negative sum of $\\DelIOnePC, \\DelITwoPC, \\DelIThreePC$, and $\\DelIFourPC$, which are the mutual information productions through the independent processes conditioned on the directly influencing subsystems.\nFor example, $\\DelITwoPC$ is the mutual information production between $\\p$ and $\\CE$, where $\\p$ evolves independently from $\\CE$ conditioned on $\\D$.\nWe can refer to Fig.~\\ref{fig:pc_depend}, where the arrows represent dependencies, to understand the independencies regarding $\\DelIOnePC, \\DelITwoPC, \\DelIThreePC$, and $\\DelIFourPC$.\nLikewise, $\\DelIinter{}$ is the decrease in the mutual informations through the mutually independent processes, as mentioned earlier.\nHence, $\\DelI$ varies with the dependency and the conditioning event.\n\nIn Example 1, we can interpret that the feedback control reduces $\\DelI$ by turning the independent process not conditioned on other states into the independent process conditioned on the correlated memory's state.\nWhen the controller neither performs the measurement nor influences the controlled system's evolution,\n$\\DelI$ is the decrease in the mutual informations through \\textit{the independent processes that are not conditioned on another state}.\nIn contrast, in the feedback control, the memory state influences the controlled system's evolution, and then\n$\\DelI$ becomes the mutual information decrease through the \\textit{independent process conditioned on the correlated memory state}.\nSince the measurement provides the correlation $\\muI{\\Xs{1}}{\\Xj}$, this conditioning by the memory state causes less $\\DelI$ in the feedback control.\n\n\n\nFinally, we compare the present study with the exorcism of Maxwell's demon.\nThe latter study recovers the SLT\nfrom a situation in which the feedback control appeared to extract more work than the upper limit set by the second law.\nIn contrast, this study illustrates that the feedback control improves the maximum work, which, under the open-loop control, is lower than what the SLT claims.\nMoreover, our {\\SLLI}s imply that the feedback control does not violate the SLT because our {\\SLLI}s are always tighter than the SLT.\n\n\n\n\n\n\n\n\n\n\n\\textit{Conclusion --}\nWe reported the first {\\SLLI}s~(\\ref{eq:HHQD5})-(\\ref{eq:WQHD4}) implying that the feedback control can increase the \\textit{net maximum work}. 
\nAlthough neither of the {\\SLLI}s (\\ref{eq:HItype1}) and (\\ref{eq:WHtype2}) implies that the feedback control increases the net maximum work, \nour {\\SLLI}s imply it\nbecause the open-loop control causes $\\DelI$, which leads to less net maximum work than what the SLT claims.\nBy the positivity of $\\DelI$,\nour {\\SLLI}s are stronger bounds than the SLT and the previous {\\SLLI}s.\nSince all components of $\\DelI$ are the decreases of the mutual informations among the subsystems, the advantage of the feedback control suggested in this Letter comes from the internal correlations of the controlled system.\nOur work might serve as a theoretical basis for quantifying the usefulness of information processing in physical systems.\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\\IEEEPARstart{I}{n} recent decades, the rapid development of the digital camera hardware has revolutionized human lives. On one hand, even mid-level mobile devices can easily produce high resolution images and videos. Besides the physical elements, the widespread use of the images and videos also reflects the importance of developing software technology for them. On the other hand, numerous registration techniques for images and video frames have been developed for a long time. The existing registration techniques work well on problems of a moderate size. However, when it comes to the current high quality images and videos, most of the current registration techniques suffer from extremely long computations. This limitation in software seriously impedes fully utilizing the state-of-the-art camera hardware.\n\nOne possible way to accelerate the computation of the registration is to introduce a much coarser grid on the images or video frames. Then, the registration can be done on the coarse grid instead of the high resolution images or video frames. Finally, the fine details can be added back to the coarse registration. It is noteworthy that the quality of the coarse grid strongly affects the quality of the final registration result. If the coarse grid cannot capture the important features of the images or video frames, the final registration result is likely to be unsatisfactory. In particular, for the conventional rectangular coarse grids, since the partitions are restricted in the vertical and horizontal directions, important features such as slant edges and irregular shapes cannot be effectively recorded. By contrast, triangulations allow more freedom in the partition directions as well as the partition sizes. Therefore, it is more desirable to make use of triangulations in simplifying the registration problems.\n\nIn this work, we propose a novel content-aware algorithm to {\\em TR}iangulate {\\em IM}ages, abbreviated as \\emph{TRIM}, that largely simplifies the registration problem for high quality images and video frames. Specifically, given a possibly very high definition image or video frame, we compute a coarse triangulation representation of it. The computation involves a series of steps including subsampling, unsharp masking, segmentation and sparse feature extraction for locating sample points on important features. Then, a high quality triangulation can be created on the set of the sample points using the Delaunay triangulation. With a coarse triangular representation of the images, the registration problem can be significantly accelerated. 
We apply a landmark-based quasi-conformal registration algorithm \\cite{Lam14} for computing the coarse registration. Finally, the fine detail of the image or video frame is computed with the aid of the coarse registration result. Our proposed framework also serves as a highly efficient and accurate initialization for many different registration approaches.\n\nThe rest of this paper is organized as follows. In Section \\ref{previous}, we review the literature on image and triangular mesh registration. In Section \\ref{contribution}, we highlight the contribution of our work. Our proposed method is explained in details in Section \\ref{main}. In Section \\ref{experiment}, we demonstrate the effectiveness of our approach with numerous real images. The paper is concluded in Section \\ref{conclusion}.\n\n\n\\section{Previous works} \\label{previous}\n\nIn this section, we describe the previous works closely related to our work.\n\nImage registration have been widely studied by different research groups. Surveys on the existing image registration approaches can be found in \\cite{Zitova03,Crum04,Klein09}. In particular, one common approach for guaranteeing the accuracy of the registration is to make use of landmark constraints. Bookstein \\cite{Bookstein78,Bookstein91,Bookstein99} proposed the unidirectional landmark thin-plate spline (UL-TPS) image registration. In \\cite{Johnson02}, Johnson and Christensen presented a landmark-based consistent thin-plate spline (CL-TPS) image registration algorithm. In \\cite{Joshi00}, Joshi et al. proposed the Large Deformation Diffeomorphic Metric Mapping (LDDMM) for registering images with a large deformation. In \\cite{Glaunes04,Glaunes08}, Glaun\\`es et al. computed large deformation diffeomorphisms of images with prescribed displacements of landmarks.\n\nA few works on image triangulations have been reported. In \\cite{Gee94}, Gee et al. introduced a probabilistic approach to the brain image matching problem and described the finite element implementation. In \\cite{Kaufmann13}, Kaufmann et al. introduced a framework for image warping using the finite element method. The triangulations are created using the Delaunay triangulation method \\cite{Shewchuk96} on a point set distributed according to variance in saliency. In \\cite{Lehner07,Lehner08}, Lehner et al. proposed a data-dependent triangulation scheme for image and video compression. Recently, a commercial triangulation image generator called DMesh \\cite{dmesh} has been designed.\n\nIn our work, we handle image registration problems with the aid of triangulations. Numerous algorithms have been proposed for the registration of triangular meshes. In particular, the landmark-driven approaches use prescribed landmark constraints to ensure the accuracy of mesh registration. In \\cite{Wang05,Lui07,Wang07}, Wang et al. proposed a combined energy for computing a landmark constrained optimized conformal mapping of triangular meshes. In \\cite{Lui10}, Lui et al. used vector fields to represent surface maps and computed landmark-based close-to-conformal mappings. Shi et al. \\cite{Shi13} proposed a hyperbolic harmonic registration algorithm with curvature-based landmark matching on triangular meshes of brains. In recent years, quasi-conformal mappings have been widely used for feature-endowed registration \\cite{Zeng11,Zeng14,Lui14,Meng15}. Choi et al. 
\\cite{Choi15} proposed the FLASH algorithm for landmark aligned harmonic mappings by improving the algorithm in \\cite{Wang05,Lui07} with the aid of quasi-conformal theories. In \\cite{Lam14}, Lam and Lui reported the quasi-conformal landmark registration (QCLR) algorithm for triangular meshes.\n\n\n\\section{Contribution} \\label{contribution}\nOur proposed \\emph{TRIM} approach for accelerating image and video frame registration is advantageous in the following aspects:\n\\begin{enumerate}\n \\item The triangulation algorithm is fully automatic. The important features of the input image are well recorded in the resulting coarse triangulation.\n \\item The algorithm is fast and robust. The coarse triangulation of a typical high resolution image can be computed within seconds.\n \\item The registration of the original images can be simplified by a mapping of the coarse triangulation. The fine details are then restored with the aid of the coarse registration result.\n \\item Our proposed \\emph{TRIM} algorithm is particularly suitable for landmark-based registration as the landmark constraints can be easily included in the coarse triangulations. By contrast, for regular grid-based approaches, accurate landmark correspondences can only be achieved on a very dense grid domain representation.\n \\item Using our proposed approach, the problem scale of the image and video frame registration is significantly reduced. Our method can serve as a fast and accurate initialization for the state-of-the-art image registration algorithms.\n\\end{enumerate}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[height=95mm]{pl.png}\n \\caption{The pipeline of our proposed {\\em TRIM} algorithm for accelerating image registration via coarse triangulation.}\n \\label{fig:pl}\n\\end{figure}\n\n\n\n\\section{Proposed method} \\label{main}\n\nIn this section, we describe our proposed {\\em TRIM} image triangulation approach for efficient registration in details. The pipeline of our proposed framework is described in Figure \\ref{fig:pl}.\n\n\\bigbreak\n\n\\subsection{Construction of coarse triangulation on images}\nGiven two high resolution images $I_1$ and $I_2$, our goal is to compute a fast and accurate mapping $f: I_1 \\to I_2$. Note that directly working on the high resolution images can be inefficient. To accelerate the computation, the first step is to construct a coarse triangular representation of the image $I_1$. In the following, we propose an efficient image triangulation scheme called {\\em TRIM}. Our triangulation scheme is content-aware. Specifically, special objects and edges in the images are effectively captured by a segmentation step, and a suitable coarse triangulation is constructed with the preservation of these features. Our proposed {\\em TRIM} method consists of 6 steps in total.\n\n\\bigbreak\n\n\\subsubsection{Subsampling the input image without affecting the triangulation quality}\nDenote the input image by $I$. To save the computational time for triangulating the input image $I$, one simple remedy is to reduce the problem size by performing certain subsampling on $I$. For ordinary images, subsampling unavoidably creates adverse effects on the image quality. Nevertheless, it does not affect the quality of the coarse triangulation we aim to construct on images.\n\nIn our triangulation scheme, we construct triangulations based on the straight edges and special features on the images. Note that straight edges are preserved in all subsamplings of the images because of the linearity. 
Hence, our edge-based triangulation is not affected by the mentioned adverse effects. In other words, we can subsample high resolution images to a suitable size for enhancing the efficiency of the remaining steps for the construction of the triangulations. We denote the subsampled image by $\\tilde{I}$.\n\n\\bigbreak\n\n\\subsubsection{Performing unsharp masking on the subsampled image}\nAfter obtaining the subsampled image $\\tilde{I}$, we perform an unsharp masking on $\\tilde{I}$ in order to preserve the edge information in the final triangulation. More specifically, we first transform the data format of the subsampled image $\\tilde{I}$ to the CIELAB standard. Then, we apply the unsharp masking method in \\cite{Polesel00} on the intensity channel of the CIELAB representation of $\\tilde{I}$. The unsharp masking procedure is briefly described as follows.\n\nBy an abuse of notation, we denote by $\\tilde{I}(x,y)$ and $\\bar{I}(x,y)$ the intensities of the input subsampled image $\\tilde{I}$ and the output image $\\bar{I}$ respectively, and by $G_{\\sigma}(x,y)$ the Gaussian mean of the intensity of the pixel $(x,y)$ with standard deviation $\\sigma$. Specifically, $G_{\\sigma}(x,y)$ is given by\n\n\\begin{equation}\n G_{\\sigma}(x,y) \\triangleq \\frac{1}{\\sigma\\sqrt{2\\pi}}\\int_{(u,v) \\in \\Omega} e^{-\\frac{(u-x)^2 + (v-y)^2}{2\\sigma^2}}.\n\\end{equation}\n\nWe perform an unsharp masking on the image using the following formula:\n\\begin{equation}\n \\bar{I}(x,y) = \\tilde{I}(x,y) - \\lambda\\begin{cases} G_{\\sigma}*\\tilde{I}(x,y) & \\text{ if } V_s(x,y)>\\theta, \\\\ 0 & \\text{ if } V_s(x,y) < \\theta, \\end{cases}\n\\end{equation}\nwhere\n\\begin{equation}\n V_s(x,y) \\triangleq \\sqrt{\\frac{1}{Area(M_s)} \\int_{(u,v) \\in M_s} (\\tilde{I}(u,v)-\\tilde{I}_{mean}(x,y))^2 }\n\\end{equation}\nand\n\\begin{equation}\n\\tilde{I}_{mean}(x,y) = \\frac{1}{Area(M_s)} \\int_{(u,v) \\in M_s} \\tilde{I}(u,v).\n\\end{equation}\n\nHere, $M_s$ is the disk with center $(x,y)$ and radius $s$. The effect of the unsharp masking is demonstrated in Figure \\ref{fig:unsharp}. With the aid of this step, we can highlight the edge information in the resulting image $\\bar{I}$ for the construction of the triangulation in the later steps. In our experiment, we choose $\\lambda = 0.5,~\\sigma = 2,~s = 2$, and $\\theta = 0.5$.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.22\\textwidth]{edge_2.png}\n \\includegraphics[width=0.22\\textwidth]{edge_1.png}\n \\caption{An illustration of unsharp masking. Left: the input image. Right: the resulting image. The unsharp masking procedure helps preserve the edge information of the input image to ensure that the vertices in unclear edges can also be extracted.}\n \\label{fig:unsharp}\n\\end{figure}\n\n\\bigbreak\n\n\\subsubsection{Segmenting the image}\nAfter obtaining the image $\\bar{I}$ upon unsharp masking, we perform a segmentation in this step in order to optimally locate the mesh vertices for computing the coarse triangulation. Mathematically, our segmentation problem is described as follows.\n\nSuppose the image $\\bar{I}$ has $L$ intensity levels in each RGB channel. Denote $i$ as a specific intensity level $(i.e. ~ 0 \\leq i \\leq L-1)$. Let $C$ be a color channel of the image $(i.e. 
~ C \\in \\lbrace R,G,B \\rbrace)$, and let $h_{i}^{C}$ denote the image histogram for channel $C$, in other words, the number of pixels which correspond to its $i$-th intensity level.\n\nDefine $p_{i}^{C} \\triangleq \\frac{h_{i}^{C}}{N}$, where $N$ represents the total number of pixels in the image $\\bar{I}$. Then we have\n\\begin{equation}\n\\sum\\limits_{\\substack{i = 0, \\\\ C \\in \\lbrace R,G,B \\rbrace}}^L p_{i}^{C} = 1 \\ \\ \\text{ and } \\ \\ \\mu_{T}^{C} = \\sum\\limits_{\\substack{i = 0, \\\\ C \\in \\lbrace R,G,B \\rbrace}}^L ip_{i}^{C}.\n\\end{equation}\n\nSuppose that we want to compress the color space of the image $\\bar{I}$ to $l$ intensity levels. Equivalently, $\\bar{I}$ is to be segmented into $l$ regions $D_{1}^{C},\\cdots,D_{l}^{C}$ by the ordered threshold levels $ x_{j}^{C}, j = 1,\\cdots,l-1$. We define the best segmentation criterion to be maximizing the inter-region intensity-mean variance. More explicitly, we define the cost\n\\begin{equation}\\label{eqt:pso_cost}\n \\sigma^C \\triangleq \\sum\\limits_{\\substack{j = 1, \\\\ C \\in \\lbrace R,G,B \\rbrace}}^l w_{j}^{C}(\\mu_{j}^{C} - \\mu_{T}^{C})^2,\n\\end{equation}\nwhere the probability $w_{j}^{C}$ of occurrence and the intensity-mean $\\mu_{j}^{C}$of the region $D_{j}^{C}$ are respectively given by\n\\begin{equation}\n w_{j}^{C} =\n\\begin{cases}\n\\sum\\limits_{\\substack{i = 0, \\\\ C \\in \\lbrace R,G,B \\rbrace}}^{t_{j}^{C}} p_{i}^{C} & \\text{ if } j = 1, \\\\\n\\sum\\limits_{\\substack{i = t_{j-1}^{C} + 1, \\\\ C \\in \\lbrace R,G,B \\rbrace}}^{t_{j}^{C}} p_{i}^{C} & \\text{ if } 1 < j < l, \\\\\n\\sum\\limits_{\\substack{i = t_{j}^{C} + 1, \\\\ C \\in \\lbrace R,G,B \\rbrace}}^{L - 1} p_{i}^{C} & \\text{ if } j = l,\n\\end{cases}\n\\end{equation}\nand\n\\begin{equation}\n\\mu_{j}^{C} =\n\\begin{cases}\n\\sum\\limits_{\\substack{i = 0, \\\\ C \\in \\lbrace R,G,B \\rbrace}}^{t_{j}^{C}} \\frac{ip_{i}^{C}}{w_{j}^{C}} & \\text{ if } j = 1, \\\\\n\\sum\\limits_{\\substack{i = t_{j-1}^{C} + 1, \\\\ C \\in \\lbrace R,G,B \\rbrace}}^{t_{j}^{C}} \\frac{ip_{i}^{C}}{w_{j}^{C}}\n& \\text{ if } 1 < j < l, \\\\\n\\sum\\limits_{\\substack{i = t_{j}^{C} + 1, \\\\ C \\in \\lbrace R,G,B \\rbrace}}^{L - 1} \\frac{ip_{i}^{C}}{w_{j}^{C}}\n& \\text{ if } j = l.\n\\end{cases}\n\\end{equation}\n\nHence, we maximize three objective functions of each RGB channel\n\\begin{equation}\n\\underset{1 < x_{1}^{C} < \\cdots < x_{l-1}^{C} < L}{\\arg\\max} \\sigma^C(\\lbrace x_{j}^{C} \\rbrace_{j = 1}^{l-1}),\n\\end{equation}\nwhere $C \\in \\{R, G, B\\}$. Our goal is to find a set of $\\textbf{x} = \\lbrace x_{j}^{C} \\rbrace_{j = 1}^{l-1}$ such that above function is maximized for each RGB channel.\n\nTo solve the aforementioned segmentation optimization problem, we apply the Particle Swarm Optimization (PSO) segmentation algorithm \\cite{Ghamisi12} on the image $\\bar{I}$. The PSO method is used in this segmentation optimization problem for reducing the chance of trapping in local optimums.\n\nAn illustration of the segmentation step is provided in Figure \\ref{fig:pso}. After performing the segmentation, we extract the boundaries of the segments. Then, we can obtain a number of large patches of area in each of which the intensity information is almost the same. 
They provide us with a reasonable edge base for constructing a coarse triangulation in later steps.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.22\\textwidth]{mnm_1.png}\n \\includegraphics[width=0.22\\textwidth]{mnm_2.png}\n \\caption{An illustration of the segmentation step for compressing the color space to achieve a sparse intensity representation. Left: the original image. Right: the segmentation result.}\n \\label{fig:pso}\n\\end{figure}\n\n\\bigbreak\n\\subsubsection{Sparse feature extraction on the segment boundaries}\nAfter computing the segment boundaries $\\mathcal{B}$ on the image $\\bar{I}$, we aim to extract sparse feature points on $\\mathcal{B}$ in this step. For the final triangulation, it is desirable that the edges of the triangles are as close as possible to the segment boundaries $\\mathcal{B}$, so as to preserve the geometric features of the original image $I$. Also, to improve the efficiency for the computations on the triangulation, the triangulation should be much coarser than the original image. To achieve the mentioned goals, we consider extracting sparse features on the segment boundaries $\\mathcal{B}$ and use them as the vertices of the ultimate triangulated mesh.\n\nConsider a rectangular grid table $G$ on the image $\\bar{I}$. Apparently, the grid table $G$ intersects the segment boundaries $\\mathcal{B}$ at a number of points. Denote $\\mathcal{P}$ as our desired set of sparse features. Conceptually, $\\mathcal{P}$ is made up of the set of points at which $\\mathcal{B}$ intersect the grid $G$, with certain exceptions.\n\nIn order to further reduce the number of feature points for a coarse triangulation, we propose a merging procedure for close points. Specifically, let $g_{i,j}$ be the vertex of the grid $G$ at the $i$-th row and the $j$-th column. We denote $\\mathcal{P}_{i,j}^1$ and $\\mathcal{P}_{i,j}^2$ respectively as the set of points at which $\\mathcal{B}$ intersect the line segment $\\displaystyle \\overline{g_{i,j} g_{i,j+1}}$ and the line segment $\\displaystyle \\overline{g_{i,j} g_{i+1,j}}$.\n\nThere are 3 possible cases for $\\mathcal{P}_{i,j}^k$:\n\\begin{enumerate}[(i)]\n \\item If $|\\mathcal{P}_{i,j}^k|=0$, then there is no intersection point between the line segment and $\\mathcal{B}$ and hence we can neglect it.\n \\item If $|\\mathcal{P}_{i,j}^k|=1$, then there is exactly one intersection point $p_{i,j}^k$ between the line segment and $\\mathcal{B}$. We include this intersection point $p_{i,j}^k$ in our desired set of sparse features $\\mathcal{P}$.\n \\item If $|\\mathcal{P}_{i,j}^k|>1$, then there are multiple intersection points between the line segment and $\\mathcal{B}$. Since these multiple intersection points lie on the same line segment, it implies that they are sufficiently close to each other. In other words, the information they contain about the segment boundaries $\\mathcal{B}$ is highly similar and redundant. Therefore, we consider merging these multiple points as one point.\n\\end{enumerate}\nMore explicitly, for the third case, we compute the centre $m_{i,j}^k$ of the points in $\\mathcal{P}_{i,j}^k$ by\n\\begin{equation}\n\\displaystyle m_{i,j}^k = mean_{\\{p | p \\in \\mathcal{P}_{i,j}^k\\}} p.\n\\end{equation}\nThe merged point $m_{i,j}^k$ is then considered as a desired feature point. 
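The following MATLAB fragment sketches this per-segment bookkeeping. It assumes that \\texttt{segs} lists the grid segments of $G$ (one row $[x_1\\ y_1\\ x_2\\ y_2]$ per segment) and that a hypothetical helper \\texttt{segment\\_intersections} returns the points at which the segment boundaries meet a given grid segment; both names are placeholders used only to make the three cases explicit.\n\\begin{verbatim}\nP = zeros(0, 2);                      % sparse feature points, one per row\nfor k = 1:size(segs, 1)\n  pts = segment_intersections(B, segs(k,1:2), segs(k,3:4));\n  if isempty(pts)                     % case (i): no intersection\n    continue;\n  elseif size(pts, 1) == 1            % case (ii): keep the single point\n    P(end+1, :) = pts;\n  else                                % case (iii): merge into the centre\n    P(end+1, :) = mean(pts, 1);\n  end\nend\n\\end{verbatim}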
In summary, our desired set of sparse features is given by\n\\begin{equation}\n\\displaystyle \\mathcal{P} = \\bigcup_i \\bigcup_j \\{p_{i,j}^1, p_{i,j}^2, m_{i,j}^1, m_{i,j}^2\\}.\n\\end{equation}\n\nAn illustration of the sparse feature extraction scheme is given in Figure \\ref{fig:feature}.\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.28\\textwidth]{feature.png}\n \\caption{An illustration of our sparse feature extraction scheme. The chosen sparse feature points are represented by the red dots. If the segment boundary does not intersect an edge, no point is selected. If the segment boundary intersects an edge at exactly one point, the point is selected as a feature point. If the segment boundary intersects an edge at multiple points, the centre of the points is selected as a feature point.}\n \\label{fig:feature}\n\\end{figure}\nHowever, one important problem in this scheme is to determine a suitable size of the grid $G$ so that the sparse feature points are optimally computed. Note that to preserve the regularity of the extracted sparse features, it is desirable that the elements of the grid $G$ are close to perfect squares. Also, to capture the important features as complete as possible, the elements of $G$ should be small enough. Mathematically, the problem can be formulated as follows.\n\nDenote $w$ as the width of the image $\\bar{I}$, $h$ as the height of the image $\\bar{I}$, $w'$ as the number of columns in $G$, $h'$ as the number of rows in $G$, $l_{w}$ as the horizontal length of every element of $G$, and $l_{h}$ as the vertical length of every element of $G$. We further denote $p$ as the percentage of grid edges in $G$ which intersect the segment boundaries $\\mathcal{B}$, and $n$ as the desired number of the sparse feature points. Given the two inputs $p$ and $n$ which respectively correspond to the desired sparse ratio and the desired number of feature points, to find a suitable grid size of $G$, we aim to minimize the cost function\n\n\\begin{equation}\nc(l_{w},l_{h}) = \\left| l_{w} - l_{h} \\right|^2\n\\end{equation}\nsubject to\n\\begin{enumerate}[(i)]\n \\item \\begin{equation}h = h'l_{h},\\end{equation}\n \\item \\begin{equation}w = w'l_{w},\\end{equation}\n \\item \\begin{equation} \\label{eqt:number} p(w'+h'+2w'h') = n.\\end{equation}\n\\end{enumerate}\n\nHere, the first and the second constraint respectively correspond to the horizontal and vertical dimensions of the grid $G$, and the third constraint corresponds to the total number of intersection points. To justify Equation (\\ref{eqt:number}), note that\n\\begin{equation}\n\\begin{split}\n&\\text{Total number of line segments} \\\\\n= \\ &\\text{Total number of horizontal line segments} \\\\\n& + \\text{Total number of vertical line segments}\\\\\n= \\ &h'(w'+1) + w'(h'+1)\\\\\n= \\ &w'+h'+2w'h'.\\\\\n\\end{split}\n\\end{equation}\n\nNote that this minimization problem is nonlinear. To simplify the computation, we assume that $w'$, $h'$ are very large, that is, the grid $G$ is sufficiently dense. Then, from Equation (\\ref{eqt:number}), we have\n\\begin{equation}\n\\begin{split}\n\\frac{p}{n} &= \\frac{1}{w'+h'+2w'h'} \\\\\n& \\approx \\frac{1}{2w'h'} \\\\\n&= \\frac{1}{2\\left(\\frac{w}{l_w}\\right) \\left(\\frac{h}{l_h}\\right)} \\\\\n&= \\frac{l_{w}l_{h}}{2wh}.\n\\end{split}\n\\end{equation}\nBy further assuming that the grid $G$ is sufficiently close to a square grid, we have $l_{w} \\approx l_{h}$. 
Then, it follows that\n\\begin{equation}\n\\frac{p}{n} \\approx \\frac{l_{w}^2}{2wh}\n\\end{equation}\nand hence\n\\begin{equation}\nl_{w} \\approx \\sqrt{\\frac{2pwh}{n}}.\n\\end{equation}\nSimilarly,\n\\begin{equation}\nl_{h} \\approx \\sqrt{\\frac{2pwh}{n}}.\n\\end{equation}\n\nTo satisfy the integral constraints for $w'$ and $h'$, we make use of the above approximations and set\n\\begin{equation}\nh' = h_0' := \\left\\lfloor \\frac{h}{\\sqrt{\\frac{2pwh}{n}}} \\right\\rfloor = \\left\\lfloor \\sqrt{\\frac{nh}{2pw}} \\right\\rfloor.\n\\end{equation}\nSimilarly, we set\n\\begin{equation}\nw' = w_0' := \\left\\lfloor \\frac{w}{\\sqrt{\\frac{2pwh}{n}}} \\right\\rfloor = \\left\\lfloor \\sqrt{\\frac{nw}{2ph}} \\right\\rfloor.\n\\end{equation}\nFinally, we take\n\\begin{equation}\nl_{h} = \\frac{h}{h_0'}\n \\end{equation}\nand\n\\begin{equation}\nl_{w} = \\frac{w}{w_0'}.\n \\end{equation}\n\nTo summarize, with the abovementioned strategy for the feature point extraction, we obtain a set of sparse feature points which approximates the segment boundaries $\\mathcal{B}$. Specifically, given the inputs $p$ and $n$, the rectangular grid $G$ we introduce leads to approximately $n$ regularly-extracted sparse feature points. An illustration of the sparse feature extraction scheme is shown in Figure \\ref{fig:delaunay} (left). In our experiments, $p$ is set to be 0.2. A denser triangulated representation can be achieved by increasing the value of $p$.\n\n\\bigbreak\n\n\\subsubsection{Adding landmark points to the vertex set of the desired coarse triangulation}\nThis step is only required when our {\\em TRIM} algorithm is used for landmark-constrained registration. For accurate landmark-constrained registration, it is desirable to include the landmark points in the vertex set of the coarse representations of the input image $I$. One of the most important features of our coarse triangulation approach is that it allows registration with exact landmark constraints on a coarse triangular representation. By contrast, the regular grid-based registration can only be achieved on very dense rectangular grid domains in order to reduce the numerical errors.\n\nWith the abovementioned advantage of our approach, we can freely add a set of landmark points $\\mathcal{P}_{LM}$ to the set of sparse features $\\mathcal{P}$ extracted by the previous procedure. In other words, the landmark points are now considered as a part of our coarse triangulation vertices:\n\\begin{equation}\n\\mathcal{P} = \\bigcup_i \\bigcup_j \\{p_{i,j}^1, p_{i,j}^2, m_{i,j}^1, m_{i,j}^2\\} \\cup \\mathcal{P}_{LM}.\n\\end{equation}\nThen, the landmark-constrained registration of images can be computed by the existing feature-matching techniques for triangular meshes. The existing feature detection approaches such as \\cite{harris88} and \\cite{Lowe04} can be applied for obtaining the landmark points.\n\n\\bigbreak\n\n\\subsubsection{Computing a Delaunay triangulation}\nIn the final step, we construct a triangulation based on the set $\\mathcal{P}$ of feature points. Among all triangulation schemes, the Delaunay triangulation method is chosen since the triangles created by the Delaunay triangulations are more regular. More specifically, if $\\alpha$ and $\\beta$ are two angles opposite to a common edge in a Delaunay triangulation, then they must satisfy the inequality\n\\begin{equation}\n \\alpha + \\beta \\leq \\pi.\n\\end{equation}\n\nIn other words, Delaunay triangulations avoid the formation of sharp and irregular triangles. 
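A minimal MATLAB sketch of these steps is given below. It assumes that the feature extraction above has produced the point list \\texttt{P} (one point per row) together with the image size \\texttt{w}, \\texttt{h} and the parameters \\texttt{p}, \\texttt{n}, and that \\texttt{P\\_lm} holds any prescribed landmark points.\n\\begin{verbatim}\n% Grid size from the closed-form approximation derived above\nlw = sqrt(2*p*w*h/n);                 % l_w ~ l_h ~ sqrt(2pwh/n)\nw0 = floor(w/lw);  h0 = floor(h/lw);  % integral numbers of columns and rows\nlw = w/w0;  lh = h/h0;                % final element lengths\n\n% Append the landmark points (if any) and triangulate\nP = unique([P; P_lm], 'rows');        % landmarks become mesh vertices\nT = delaunay(P(:,1), P(:,2));         % one triangle (vertex triple) per row\ntriplot(T, P(:,1), P(:,2));           % optional: visualize the coarse mesh\n\\end{verbatim}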
Note that the regularity does not only enhance the visual quality of the resulting triangulation but also lead to a more stable approximation of the derivatives on the triangles when applying various registration schemes. Therefore, we compute a Delaunay triangulation on the set $\\mathcal{P}$ of feature points for achieving the ultimate triangulation $\\mathcal{T}$. An illustration of the construction of the Delaunay triangulations is shown in Figure \\ref{fig:delaunay}.\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.23\\textwidth]{mnm_sparse_point.png} \\ \\\n \\includegraphics[width=0.23\\textwidth]{mnm_mesh.png} \\ \\\n \\includegraphics[width=0.23\\textwidth]{mnm_mesh_color.png}\n \\caption{An illustration of extracting the vertices and computing a Delaunay triangulation. Left: the points obtained by the feature extraction step from Figure \\ref{fig:pso}. Middle: a Delaunay triangulation generated on the feature points. Right: the triangulation with a color approximated on each triangle.}\n \\label{fig:delaunay}\n\\end{figure*}\n\nThese 6 steps complete our {\\em TRIM} algorithm as summarized in Algorithm \\ref{algorithm:triangulation}.\n\n\\begin{algorithm}[h]\n\\KwIn{An image $I$, the desired number of image intensity levels $l$ for segmentation, the desired number of feature points $n$, the sparse ratio $p$.}\n\\KwOut{A coarse triangulation $\\mathcal{T}$ that captures the main features of the image.}\n\\BlankLine\n Subsample the input image $I$ to a suitable size and denote the result by $\\tilde{I}$\\;\n Apply an unsharp masking on the subsampled image $\\tilde{I}$ and denote the result by $\\bar{I}$\\;\n Apply the PSO segmentation for compressing the color space of $\\bar{I}$ to $l$ intensity levels, and extract boundaries $\\mathcal{B}$ of the segments\\;\n Extract a set of sparse feature points $\\mathcal{P}$ from the segment boundaries $\\mathcal{B}$ based on the parameters $n$ and $p$\\;\n Add a set of extra landmark points $\\mathcal{P}_{LM}$ to $\\mathcal{P}$ if necessary\\;\n Compute a Delaunay triangulation $\\mathcal{T}$ on the sparse feature points $\\mathcal{P}$.\n\\caption{Our proposed {\\em TRIM} algorithm for triangulating an image.}\n\\label{algorithm:triangulation}\n\\end{algorithm}\n\nIt is noteworthy that our proposed {\\em TRIM} algorithm significantly trims down high resolution images without distorting their important geometric features. Experimental results are shown in Section \\ref{experiment} to demonstrate the effectiveness of the {\\em TRIM} algorithm.\n\n\\subsection{Simplification of the registration using the coarse triangulation}\nWith the above triangulation algorithm for images, we can simplify the image registration problem as a mapping problem of triangulated surfaces. Many conventional image registration approaches are hindered by the long computational time and the accuracy of the initial maps. With the new strategy, it is easy to obtain a highly efficient and reasonably accurate registration of images. Our registration result can serve as a high quality initial map for various algorithms.\n\nIn this work, we apply our proposed {\\em TRIM} algorithm for landmark-aligned quasi-conformal image registration. Ideally, conformal registration are desired as conformal mappings preserve angles and hence the local geometry of the images. However, with the presence of landmark constraints, conformal mappings may not exist. 
We turn to consider quasi-conformal mappings, a type of mapping which is closely related to conformal mappings. Mathematically, a \\emph{quasi-conformal mapping} $f: \\mathbb{C} \\to \\mathbb{C}$ satisfies the Beltrami equation\n\\begin{equation}\n\\frac{\\partial f}{\\partial \\bar{z}} = \\mu(z) \\frac{\\partial f}{\\partial z},\n\\end{equation}\nwhere $\\mu$ is a complex-valued function with sup norm less than 1. $\\mu$ is called the \\emph{Beltrami coefficient} of $f$. Intuitively, a conformal mapping maps infinitesimal circles to infinitesimal circles, while a quasi-conformal mapping maps infinitesimal circles to infinitesimal ellipses (see Figure \\ref{fig:beltrami}). Readers are referred to \\cite{Gardiner00} for more details.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.45\\textwidth]{beltrami.png}\n \\caption{An illustration of quasi-conformal mappings. The maximal magnification and shrinkage are determined by the Beltrami coefficient $\\mu$ of the mappings.}\n \\label{fig:beltrami}\n\\end{figure}\n\nIn \\cite{Lam14}, Lam and Lui proposed the quasi-conformal landmark registration (QCLR) algorithm for triangular meshes. In this work, we apply the QCLR algorithm to our coarse triangulations of images. More explicitly, to compute a registration mapping $f:I_1 \\to I_2$ between two images $I_1$ and $I_2$ with prescribed point correspondences\n\\begin{equation} \\label{eqt:constraint}\np_i \\longleftrightarrow q_i, i = 1,2,\\cdots,n,\n\\end{equation}\nwe first apply our proposed {\\em TRIM} algorithm and obtain a coarse triangulation $\\mathcal{T}_1$ on $I_1$. Here, we include the feature points $\\{p_i\\}_{i=1}^n$ in the generation of the coarse triangulation, as described in the fifth step of the {\\em TRIM} algorithm. Then, instead of directly computing $f$, we can solve for a map $\\tilde{f}: \\mathcal{T}_1 \\to I_2$. Since the problem size is significantly reduced under the coarse triangulation, the computation for $\\tilde{f}$ is much more efficient than that for $f$.\n\nThe QCLR algorithm \\cite{Lam14} aims to compute the desired quasi-conformal mapping $\\tilde{f}$ by minimizing the energy functional\n\\begin{equation} \\label{eqt:qclr}\nE_{LM}(\\nu) = \\int_{\\mathcal{T}_1} | \\nabla \\nu|^2 + \\alpha \\int_{\\mathcal{T}_1} |\\nu|^2\n\\end{equation}\nsubject to\n\\begin{enumerate}[(i)]\n \\item $\\tilde{f}(p_i) = q_i$ for all $i = 1,2,\\cdots,n$,\n \\item $\\|\\nu\\|_{\\infty} <1$, and\n \\item $\\nu = \\mu(\\tilde{f})$.\n\\end{enumerate}\n\nTo solve the above minimization problem (\\ref{eqt:qclr}), the QCLR algorithm makes use of the penalty splitting method and minimizes\n\n\\begin{equation}\n E_{LM}^{split}(\\nu,\\tilde{f}) = \\int_{\\mathcal{T}_1} |\\nabla \\nu|^2 + \\alpha \\int_{\\mathcal{T}_1} |\\nu|^2 + \\rho \\int_{\\mathcal{T}_1} |\\nu - \\mu(\\tilde{f})|^2\n\\end{equation}\nsubject to\n\\begin{enumerate}[(i)]\n \\item $\\tilde{f}(p_i) = q_i$ for all $i = 1,2,\\cdots,n$ and\n \\item $\\|\\nu\\|_{\\infty} <1$.\n\\end{enumerate}\nThe QCLR algorithm then alternately minimizes the energy $E_{LM}^{split}$ over one of $\\nu$ and $\\tilde{f}$ while fixing the other one.\n\nSpecifically, for computing $\\tilde{f}_n$ while fixing $\\nu_n$ and the landmark constraints, the QCLR algorithm applies the Linear Beltrami Solver proposed by Lui et al.~\\cite{Lui13}. 
On the other hand, for computing $\\nu_{n+1}$ while fixing $\\tilde{f}_n$, by considering the Euler-Lagrange equation, it suffices to solve\n\\begin{equation}\n (-\\Delta + 2 \\alpha I + 2\\rho I) \\nu_{n+1} = 2\\rho \\mu(\\tilde{f}_n).\n\\end{equation}\n\nFrom $\\nu_{n+1}$, one can compute the associated quasi-conformal mapping $\\tilde{f}_{n+1}$. However, note that $\\tilde{f}_{n+1}$ may not satisfy the hard landmark constraints (\\ref{eqt:constraint}) anymore. To enforce the landmark constraints, instead of directly using $\\tilde{f}_{n+1}$ as the updated result, the QCLR algorithm updates $\\nu_{n+1}$ by\n\\begin{equation}\n \\nu_{n+1} \\leftarrow \\nu_{n+1} + t(\\mu(\\tilde{f}_{n+1}) - \\nu_{n+1})\n\\end{equation}\nfor some small $t$. By repeating the abovementioned procedures, we achieve a landmark-constrained quasi-conformal mapping $\\tilde{f}$. For more details of the QCLR algorithm, readers are referred to \\cite{Lam14}.\n\nAfter computing the quasi-conformal mapping $\\tilde{f}$ on the coarse triangulation, we apply an interpolation to retrieve the fine details of the registration. Since the triangulations created by our proposed {\\em TRIM} algorithm preserve the important geometric features and prominent straight lines of the input image, the details of the registration results can be accurately interpolated. Moreover, since the coarse triangulation largely simplifies the input image and reduces the problem size, the computation is greatly accelerated.\n\nThe registration procedure is summarized in Algorithm \\ref{algorithm:registration}. Experimental results are illustrated in Section \\ref{experiment} to demonstrate the significance of our coarse triangulation in the registration scheme.\n\n\\begin{algorithm}[h]\n\\KwIn{Two images or video frames $I_1$, $I_2$ to be registered, with the prescribed feature correspondences.}\n\\KwOut{A feature-matching registration mapping $f:I_1 \\to I_2$.}\n\\BlankLine\nCompute a coarse triangulation $\\mathcal{T}_1$ of $I_1$ using our proposed {\\em TRIM} algorithm (Algorithm \\ref{algorithm:triangulation}). Here, we include the prescribed feature points on $I_1$ in the generation of the coarse triangulation $\\mathcal{T}_1$\\;\nSelect landmark correspondences of the coarse triangulation $\\mathcal{T}_1$ and the target image $I_2$. Denote the landmark points on $\\mathcal{T}_1$ and $I_2$ by $\\{p_i\\}_{i=1}^n$ and $\\{q_i\\}_{i=1}^n$ respectively\\;\nCompute a landmark-based quasi-conformal mapping $\\tilde{f}: \\mathcal{T}_1 \\to \\mathbb{C}$ by the QCLR algorithm in \\cite{Lam14}\\;\nObtain $f$ from $\\tilde{f}$ with a bilinear interpolation between $\\mathcal{T}_1$ and $I_1$.\n\\caption{Feature-based registration via our proposed {\\em TRIM} algorithm.}\n\\label{algorithm:registration}\n\\end{algorithm}\n\n\n\\section{Experimental results} \\label{experiment}\n\nIn this section, we demonstrate the effectiveness of our proposed triangulation scheme. The algorithms are implemented using MATLAB. For solving the mentioned linear systems, the backslash operator ($\\backslash$) in MATLAB is used. The test images are courtesy of the RetargetMe dataset \\cite{Rubinstein10}. The bird image is courtesy of the first author. 
All experiments are performed on a PC with an Intel(R) Core(TM) i7-4500U CPU @1.80 GHz processor and 8.00 GB RAM.\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.215\\textwidth]{bird.png} \\\n \\includegraphics[width=0.16\\textwidth]{book.png} \\\n \\includegraphics[width=0.32\\textwidth]{pencils.png} \\\n \\includegraphics[width=0.255\\textwidth]{bear_1.png}\n \\center{\\hspace{5mm} Bird \\hspace{27mm} Book \\hspace{35mm} Pencils \\hspace{45mm} Teddy}\n \\newline\n \\newline\n \\includegraphics[width=0.215\\textwidth]{bird_mesh.png} \\\n \\includegraphics[width=0.16\\textwidth]{book_mesh.png} \\\n \\includegraphics[width=0.32\\textwidth]{pencils_mesh.png} \\\n \\includegraphics[width=0.255\\textwidth]{bear_mesh.png}\n \\center{Triangulation of Bird \\hspace{3mm} Triangulation of Book \\hspace{9mm} Triangulation of Pencils \\hspace{19mm} Triangulation of Teddy}\n \\newline\n \\caption{Several images and the triangulations created by our {\\em TRIM} algorithm. Top: the input images. Bottom: the resulting triangulations. The key features of the original images are well represented in our triangulations, and the regions with similar color can be represented by coarse triangulations.}\n \\label{fig:triangulation}\n\\end{figure*}\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.24\\textwidth]{bee.png}\n \\includegraphics[width=0.24\\textwidth]{bee_mesh.png}\n \\includegraphics[width=0.24\\textwidth]{bee_mesh_color.png}\n \\includegraphics[width=0.24\\textwidth]{bee_dmesh.png}\n \\caption{A bee image and the triangulations created by our {\\em TRIM} algorithm and DMesh \\cite{dmesh}. Left: The input image. Middle left: The coarse triangulation created by {\\em TRIM}. Middle right: Our {\\em TRIM} coarse triangulation with a color approximated on each triangle. Right: The triangulation by DMesh \\cite{dmesh}.}\n \\label{fig:triangulation_bee}\n\\end{figure*}\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.24\\textwidth]{butterfly.png}\n \\includegraphics[width=0.24\\textwidth]{butterfly_mesh.png}\n \\includegraphics[width=0.24\\textwidth]{butterfly_mesh_color.png}\n \\includegraphics[width=0.24\\textwidth]{butterfly_dmesh.png}\n \\caption{A butterfly image and the triangulations created by our {\\em TRIM} algorithm and DMesh \\cite{dmesh}. Left: The input image. Middle left: The coarse triangulation created by {\\em TRIM}. Middle right: Our {\\em TRIM} coarse triangulation with a color approximated on each triangle. Right: The triangulation by DMesh \\cite{dmesh}.}\n \\label{fig:triangulation_butterfly}\n\\end{figure*}\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.27\\textwidth]{baseball.png} \\\n \\includegraphics[width=0.27\\textwidth]{baseball_mesh_color.png} \\\n \\includegraphics[width=0.27\\textwidth]{baseball_dmesh.png}\n \\newline\n \\newline\n \\includegraphics[width=0.27\\textwidth]{eagle.png} \\\n \\includegraphics[width=0.27\\textwidth]{eagle_triangulation.png} \\\n \\includegraphics[width=0.27\\textwidth]{eagle_dmesh.png}\n \\newline\n \\caption{Two more examples created by our {\\em TRIM} algorithm and DMesh \\cite{dmesh}. Our coarse triangulations capture the important features and closely resemble the original images. Left: The input image. Middle: The coarse triangulation created by {\\em TRIM}. 
Right: The triangulation by DMesh \\cite{dmesh}.}\n \\label{fig:triangulation_more}\n\\end{figure*}\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.24\\textwidth]{bear_1.png} \\\n \\includegraphics[width=0.24\\textwidth]{bear_trim.png} \\\n \\includegraphics[width=0.24\\textwidth]{bear_noisy.png} \\\n \\includegraphics[width=0.24\\textwidth]{bear_noisy_trim.png}\n \\newline\n \\newline\n \\includegraphics[width=0.24\\textwidth]{cone_1.png} \\\n \\includegraphics[width=0.24\\textwidth]{cone_trim.png} \\\n \\includegraphics[width=0.24\\textwidth]{cone_noisy.png} \\\n \\includegraphics[width=0.24\\textwidth]{cone_noisy_trim.png}\n \\newline\n \\caption{Two triangulation examples by our {\\em TRIM} algorithm for noisy images. Left: The noise-free images. Middle left: The triangulations computed by {\\em TRIM} based on the noise-free images. Middle right: The noisy images. Right: The triangulations computed by {\\em TRIM} based on the noisy images. It can be observed that the important features of the images are preserved even for noisy images.}\n \\label{fig:noisy}\n\\end{figure*}\n\n\\subsection{The performance of our proposed {\\em TRIM} triangulation scheme}\nIn this subsection, we demonstrate the effectiveness of our triangulation scheme by various examples.\n\nOur proposed algorithm is highly content-aware. Specifically, regions with high similarities or changes in color on an image can be easily recognized. As a result, the triangulations created faithfully preserve the important features by a combination of coarse triangles with different sizes. Some of our triangulation results are illustrated in Figure \\ref{fig:triangulation}. For better visualizations, we color the resulting triangulations by mean of the original colors of corresponding patches. In Figure \\ref{fig:triangulation_bee}, we apply our {\\em TRIM} algorithm on a bee image. It can be observed that the regions of the green background can be effectively represented by coarser triangulations, while the region of the bee and flowers with apparent color differences is well detected and represented by a denser triangulation. Figure \\ref{fig:triangulation_butterfly} shows another example of our triangulation result. The butterfly and the flowers are well represented in our triangulation result. The above examples demonstrate the effectiveness of our triangulation scheme for representing images in a simplified but accurate way. Some more triangulation examples created by our {\\em TRIM} algorithm are shown in Figure \\ref{fig:triangulation_more}. Figure \\ref{fig:noisy} shows some triangulation examples for noisy images. It can be observed that our {\\em TRIM} algorithm can effectively compute content-aware coarse triangulations even for noisy images.\n\nWe have compared our algorithm with the DMesh triangulator \\cite{dmesh} in Figure \\ref{fig:triangulation_bee}, Figure \\ref{fig:triangulation_butterfly} and Figure \\ref{fig:triangulation_more}. It can be observed that our triangulation scheme outperforms DMesh \\cite{dmesh} in terms of the triangulation quality. Our results can better capture the important features of the images. Also, the results by DMesh \\cite{dmesh} may contain unwanted holes while our triangulation results are always perfect rectangles. The comparisons reflect the advantage of our coarse triangulation scheme.\n\nThen, we evaluate the efficiency of our triangulation scheme for various images. Table \\ref{table:trim} shows the detailed statistics. 
The relationship between the target coarse triangulation size and the computational time is illustrated in Figure \\ref{fig:triangulation_time}. Even for high resolution images, the computational time for the triangulation is only around 10 seconds. It is noteworthy that our {\\em TRIM} algorithm significantly compresses the high resolution images as coarse triangulations with only several thousand triangles.\n\n\\begin{table*}[h!]\n\\centering\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\nImage & Size & Triangulation time (s) & \\# of triangles & Compression rate \\\\ \\hline\nSurfer & 846 $\\times$ 421 & 5.78 & 1043 & 0.1536\\% \\\\ \\hline\nHelicopter & 720 $\\times$ 405 & 5.78 & 1129 & 0.1989\\%\n\\\\ \\hline\nBee & 640 $\\times$ 425 & 7.13 & 1075 & 0.2029\\% \\\\ \\hline\nBird & 1224 $\\times$ 1224 & 7.04 & 1287 & 0.0859\\% \\\\ \\hline\nButterfly & 1024 $\\times$ 700 & 8.00 & 1720 & 0.1232\\% \\\\ \\hline\nBook & 601 $\\times$ 809 & 8.38 & 1629 & 0.3350\\% \\\\ \\hline \nBaseball & 410 $\\times$ 399 & 7.85 & 2315 & 0.7201\\% \\\\ \\hline\nBear & 450 $\\times$ 375 & 7.48 & 2873 & 0.8652\\% \\\\ \\hline\nPencil & 615 $\\times$ 410 & 8.93 & 2633 & 0.5838\\% \\\\ \\hline\nTiger & 2560 $\\times$ 1600 & 13.91 & 3105 & 0.0414\\% \\\\ \\hline\nEagle & 600 $\\times$ 402 & 13.27 & 1952 & 0.4299\\% \\\\ \\hline\n\\end{tabular}\n\\bigbreak\n\\caption{Performance of our {\\em TRIM} algorithm. Here, the compression rate is defined by $\\frac{\\text{\\# of triangle nodes}}{\\text{\\# of pixels}} \\times 100\\%$.}\n\\label{table:trim}\n\\end{table*}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.48\\textwidth]{time_graph.png}\n \\caption{The relationship of the desired coarse triangulation size and the computational time of our proposed {\\it TRIM} algorithm.}\n \\label{fig:triangulation_time}\n\\end{figure}\n\nIt is noteworthy that the combination of the steps in our {\\em TRIM} algorithm is important for achieving a coarse triangulation. More specifically, if certain steps in our algorithm are removed, the triangulation result will become unsatisfactory. Figure \\ref{fig:segmentation} shows two examples of triangulations created by our entire {\\em TRIM} algorithm and by our algorithm with the segmentation step excluded. It can be easily observed that without the segmentation step, the resulting triangulations are extremely dense and hence undesirable for simplifying further computations. By contrast, the number of triangles produced by our entire {\\em TRIM} algorithm is significantly reduced. The examples highlight the importance of our proposed combination of steps in the {\\em TRIM} algorithm for content-aware coarse triangulation.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.22\\textwidth]{bee_seg_923.png}\n \\includegraphics[width=0.22\\textwidth]{bee_no_seg_3612.png}\n \\includegraphics[width=0.22\\textwidth]{bird_seg_1496.png}\n \\includegraphics[width=0.22\\textwidth]{bird_no_seg_8685.png}\n \\caption{The triangulations created by our entire {\\em TRIM} algorithm (left) and by the algorithm without the segmentation step (Right). It can be observed that the segmentation step is crucial for achieving a coarse triangulation. 
Number of triangles produced: 923 (top left), 3612 (top right), 1496 (bottom left), 8685 (bottom right).}\n \\label{fig:segmentation}\n\\end{figure}\n\\subsection{The improvement in efficiency of image registration by our {\\em TRIM} algorithm}\nIn this subsection, we demonstrate the effectiveness of our proposed triangulation-based method for landmark-based image registration. In our experiments, the feature points on the images are extracted using the algorithm by Harris and Stephens \\cite{harris88} as landmark constraints. To simplify image registration problems, one conventional approach is to make use of coarse regular grids followed by interpolation. It is natural to ask whether our proposed coarse triangulation-based method produces better results. In Figure \\ref{fig:registration_teddy}, we consider a stereo registration problem of two scenes. With the prescribed feature correspondences, we compute the feature-endowed stereo registration via the conventional grid-based approach, the DMesh triangulation approach \\cite{dmesh} and our proposed {\\it TRIM} method. For the grid-based approach and the DMesh triangulation approach \\cite{dmesh}, we take the mesh vertices nearest to the prescribed feature points on the source image as source landmarks. For our proposed {\\it TRIM} method, as the landmark vertices are automatically embedded in the content-aware coarse triangulation, the source landmarks are exactly the feature points detected by the method in \\cite{harris88}.\n\nIt can be observed that our triangulation-based approach produces a much more natural and accurate registration result when compared with both the grid-based approach and the DMesh triangulation approach. In particular, sharp features such as edges are well preserved using our proposed method. By contrast, the edges are seriously distorted in the other two methods. In addition, the geometry of the background in the scenes is well retained by our {\\em TRIM} method but not by the other two methods. The higher accuracy of the registration result by our approach can also be visualized by the intensity difference plots. Our triangulation-based approach results in an intensity difference plot with more dark regions than the other two approaches. The advantage of our method over the other two methods is attributed to the geometry-preserving feature of our {\\it TRIM} algorithm, in the sense that the triangulations created by {\\it TRIM} can fit complex features better and offer more flexibility in size than regular grids. Also, the triangulations created by DMesh \\cite{dmesh} do not capture the geometric features and hence the registration results are unsatisfactory. These observations reflect the significance of our content-aware {\\it TRIM} triangulation scheme in computing image registration. Another example is illustrated in Figure \\ref{fig:registration_cone}. Again, it can be easily observed that our proposed {\\em TRIM} triangulation approach leads to a more accurate registration result.\n\nTo highlight the improvement in the efficiency by our proposed {\\em TRIM} algorithm, Table \\ref{table:registration} records the computational time and the error of the registration via the conventional grid-based approach and our {\\it TRIM} triangulation-based approach. It is noteworthy that our proposed coarse triangulation-based method significantly reduces the computational time by over 85\\% on average when compared with the traditional regular grid-based approach. 
To quantitatively assess the quality of the registration results, we define the matching accuracy by\n\\begin{equation}\n\\frac{\\text{\\# pixels s.t.} \\|\\text{final intensity - original intensity}\\|_1 < \\epsilon}{\\text{Total \\# of pixels}} \\times 100\\%.\n\\end{equation}\nThe threshold $\\epsilon$ is set to be $0.2$ in our experiments. Our triangulation-based method produces registration results with the matching accuracy higher than that of the regular grid-based method by 3\\% on average. The experimental results reflect the advantages of our {\\it TRIM} content-aware coarse triangulations for image registration\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.27\\textwidth]{bear_1.png} \\\n \\includegraphics[width=0.27\\textwidth]{bear_2.png} \\\n \\includegraphics[width=0.27\\textwidth]{bear_registration.png}\n \\newline\n \\newline\n \\includegraphics[width=0.27\\textwidth]{bear_grid_result.png} \\\n \\includegraphics[width=0.27\\textwidth]{dmesh_bear.png} \\\n \\includegraphics[width=0.27\\textwidth]{bear_result.png}\n \\newline\n \\newline\n \\includegraphics[width=0.27\\textwidth]{bear_difference_map_grid.png} \\\n \\includegraphics[width=0.27\\textwidth]{bear_difference_map_dmesh.png} \\\n \\includegraphics[width=0.27\\textwidth]{bear_difference_map_TRIM.png}\n \\newline\n \\caption{Stereo landmark registration using different algorithms. Top left: The source image. Top middle: The target image. Top Right: The prescribed feature correspondences. Middle left: The registration result by the dense grid-based approach (4 pixels per grid). Middle: The registration result via DMesh \\cite{dmesh}. Middle right: The registration result by our proposed coarse triangulation-based method. Bottom left: The intensity difference after the registration by the dense grid-based approach. Bottom middle: The intensity difference after the registration via DMesh \\cite{dmesh}. Bottom right: The intensity difference after the registration by our proposed coarse triangulation-based method. Because of the greater flexibility of our triangulation scheme, the accuracy of registration by our method is higher than that of the dense grid-based approach.}\n \\label{fig:registration_teddy}\n\\end{figure*}\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.27\\textwidth]{cone_1.png} \\\n \\includegraphics[width=0.27\\textwidth]{cone_2.png} \\\n \\includegraphics[width=0.27\\textwidth]{cone_registration.png}\n \\newline\n \\newline\n \\includegraphics[width=0.27\\textwidth]{cone_grid_result.png} \\\n \\includegraphics[width=0.27\\textwidth]{dmesh_cone.png} \\\n \\includegraphics[width=0.27\\textwidth]{cone_result.png}\n \\newline\n \\newline\n \\includegraphics[width=0.27\\textwidth]{cone_difference_map_grid.png} \\\n \\includegraphics[width=0.27\\textwidth]{cone_difference_map_dmesh.png} \\\n \\includegraphics[width=0.27\\textwidth]{cone_difference_map_TRIM.png}\n \\newline\n \\caption{Stereo landmark registration using different algorithms. Top left: The source image. Top middle: The target image. Top Right: The prescribed feature correspondences. Middle left: The registration result by the dense grid-based approach (4 pixels per grid). Middle: The registration result via DMesh \\cite{dmesh}. Middle right: The registration result by our proposed coarse triangulation-based method. Bottom left: The intensity difference after the registration by the dense grid-based approach. Bottom middle: The intensity difference after the registration via DMesh \\cite{dmesh}. 
Bottom right: The intensity difference after the registration by our proposed coarse triangulation-based method.}\n \\label{fig:registration_cone}\n\\end{figure*}\n\\begin{table*}[h!]\n\\centering\n\\begin{tabular}{|c|c|c|c|c|c|c|}\n\\hline\nImages & Size & \\multicolumn{4}{c|}{Registration} & Time ratio \\\\ \\cline{3-6}\n& & \\multicolumn{2}{c|}{Via regular grids} & \\multicolumn{2}{c|}{Via {\\em TRIM}} & \\\\ \\cline{3-6}\n& & Time (s) & Matching accuracy (\\%) & Time (s) & Matching accuracy (\\%) & \\\\ \\hline\nBear & 450 $\\times$ 375 & 102.3 & 59.5 & 13.8 & 70.7 & 13.4897\\% \\\\ \\hline\nCone & 450 $\\times$ 375 & 108.7 & 51.3 & 28.2 & 61.2 & 25.9430\\%\\\\ \\hline\nCloth& 1252 $\\times$ 1110 & 931.0 & 70.7 & 36.0 & 75.4 & 3.8668\\% \\\\ \\hline\nBook & 1390 $\\times$ 1110 & 1204.5 & 59.0 & 51.0 & 63.0 & 4.2341\\% \\\\ \\hline\nBaby & 1390 $\\times$ 1110 & 94.3 & 62.3 & 11.0 & 62.3 & 11.6649\\% \\\\ \\hline\n\n\\end{tabular}\n\n\\bigbreak\n\\caption{The performance of feature-based image registration via our proposed {\\em TRIM} coarse triangulation method and the ordinary coarse grids. Here, the time ratio is defined by $\\frac{\\text{Registration time via {\\em TRIM}}}{\\text{Registration time via regular grids}} \\times 100\\%$.}\n\\label{table:registration}\n\\end{table*}\n\n\n\\section{Conclusion and future work} \\label{conclusion}\nIn this paper, we have established a new framework for accelerating image registration. Using our proposed {\\em TRIM} algorithm, a content-aware coarse triangulation is built on a high resolution image. Then, we can efficiently compute a landmark-based registration using the coarse triangulation. From the coarse registration result, we can obtain the desired fine registration result. Our proposed method is advantageous for a large variety of registration applications, with a significant improvement in computational efficiency and registration accuracy. Hence, our proposed method can serve as an effective initialization for various registration algorithms. In the future, we aim to extend our proposed {\\em TRIM} algorithm in the following two directions.\n\n\\bigbreak\n\n\\subsubsection{n-dimensional registration via {\\em TRIM}}\nOur work can be naturally extended to higher dimensional registration problems. For accelerating $n$-dimensional volumetric registration, one can consider constructing a coarse representation using $n$-simplices. Analogous to our 2-dimensional {\\em TRIM} algorithm, the $n$-dimensional coarse representation should also result in a significant improvement of the efficiency for $n$-dimensional registration problems.\n\n\\bigbreak\n\n\\subsubsection{{\\em TRIM} image compression format}\nRecall that in our proposed {\\em TRIM} algorithm, there are two important parameters, namely, the number of intensity threshold levels $l$ in the image segmentation step and the sparse ratio $p$ for controlling the fineness of the triangular representation. By increasing the values of these two parameters, an image intensity representation becomes more accurate. Hence, given a compression error threshold, we aim to develop an iterative {\\em TRIM} scheme that progressively increases these two values until the intensity difference between the compressed image and the original image is smaller than the prescribed error threshold. This provides us with a new image compression format using {\\em TRIM}. 
Since the {\\em TRIM} compression format only consists of the coarse triangular mesh vertices and the corresponding colors, the compression size is much smaller than that of the existing image compression formats.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nNear term quantum devices have a small number of noisy qubits that can support execution of shallow depth circuits (i.e., those with few operational cycles) only. Variational Quantum Algorithms (VQA) aim to leverage the power as well as the limitations imposed by these devices to solve problems of interest such as combinatorial optimization \\cite{farhi2014quantum,wang2018quantum,hadfield2019quantum,cook2020quantum}, quantum chemistry \\cite{mcclean2016theory, grimsley2019adaptive}, and quantum machine learning \\cite{10.1007\/978-3-030-50433-5_45,biamonte2017quantum,torlai2020machine}. VQA divides the entire computation into functional modules, and outsources some of these modules to classical computers. The general framework of VQA can be divided into four steps: (i) encode the problem into a parameterized quantum state $\\ket{\\psi(\\theta)}$ (called the ansatz), where $\\theta = \\{\\theta_1,\\theta_2, \\hdots, \\theta_k\\}$ are $k$ parameters; (ii) prepare and measure the ansatz in a quantum computer, and determine the value of some objective function $C(\\theta)$ (which depends on the problem at hand) from the measurement outcome; (iii) in a classical computer, optimize the set of parameters to find a better set $\\theta' = \\{\\theta'_1,\\theta'_2, \\hdots, \\theta'_k\\}$ such that it minimizes (or maximizes) the objective function; (iv) repeat steps (ii) and (iii) with the new set of parameters until convergence.\n\nQuantum Approximate Optimization Algorithm (QAOA) is a type of VQA that focuses on finding good approximate solutions to combinatorial optimization problems. It has been studied most widely for finding the maximum cut of a (weighted or unweighted) graph (called the Max-Cut problem) \\cite{farhi2014quantum}. For this problem, given a graph $G = (V,E)$ where $V$ is the set of vertices and $E$ is the set of edges, the objective is to partition $V = V_1 \\cup V_2$, such that $V_1 \\cap V_2 = \\phi$, and the number of edges crossing the partition is maximized. Throughout this paper, we shall consider {\\it connected graphs} with $|V| = n$ and $|E| = m$, but the results can be easily extended to disconnected graphs as well.\n\nIn the initial algorithm proposed by Farhi \\cite{farhi2014quantum} for the Max-Cut problem, a depth-$p$ QAOA consists of $p \\geq 1$ layers of alternating operators on the initial state $\\ket{\\psi_0}$ \n\\begin{equation}\n\\label{eq:ansatz}\n \\ket{\\psi(\\gamma,\\beta)} = ( \\displaystyle \\Pi_{l = 1}^{p} e^{(-i\\beta_l H_M)} e^{(-i\\gamma_l H_P)}) \\ket{\\psi_0}\n\\end{equation}\n\nwhere $H_P$ and $H_M$ are called the Problem and Mixer Hamiltonian respectively, and $\\gamma = \\{\\gamma_1, \\gamma_2, \\hdots, \\gamma_p\\}$ and $\\beta = \\{\\beta_1, \\beta_2, \\hdots, \\beta_p\\}$ are the parameters. It is to be noted that the depth $p$ of the QAOA is not related to the depth of the quantum circuit realizing the algorithm. 
The problem Hamiltonian describing the Max-Cut can be represented as in Eq.~(\\ref{eq:max_cut}), where $w_{jk}$ is the weight associated with the edge $(j,k)$.\n\\begin{equation}\n\\label{eq:max_cut}\n H_P = \\frac{1}{2}\\displaystyle \\sum_{(j,k) \\in E} w_{jk} (I - Z_j Z_k)\n\\end{equation}\n\nFurthermore, the mixer Hamiltonian should be an operator that does not commute with the Problem Hamiltonian. In the traditional QAOA, the mixer Hamiltonian is $H_M = \\displaystyle \\sum_{i} X_i$.\n\nVariations to this have been studied to improve the performance of the algorithm --- such as using other mixers \\cite{bartschi2020grover,zhu2020adaptive, yu2021quantum}, training the parameters to reduce the classical optimization cost \\cite{larkin2020evaluation}, and modifying the cost function for faster convergence \\cite{barkoutsos2020improving}. In this paper we stick to the original problem and mixer Hamiltonians proposed in the algorithm by Farhi et al. \\cite{farhi2014quantum}. The applicability and effectiveness of our proposed method on these modified algorithms is left as follow-up work. However, our proposed methods optimize the circuit corresponding to the problem Hamiltonian. Since most of the modifications suggested in the literature aim to design more efficient mixers, our proposed optimization should be applicable to those as well.\n\nThe realization of the QAOA circuit for Max-Cut requires two CNOT gates for each edge (details given in Sec.~\\ref{sec:ansatz}). Hardware realization of a CNOT gate is, in general, significantly more erroneous than a single qubit gate. Even in the higher end devices of IBM Quantum, such as \\textit{ibmq\\_montreal}, \\textit{ibmq\\_manhattan}, the probabilities of error for a single qubit gate and a CNOT gate are $\\mathcal{O}(10^{-4})$ and $\\mathcal{O}(10^{-2})$, respectively~\\cite{ibmquantum}. In other words, CNOT gates are $100$ times more likely to be erroneous than single qubit gates. Therefore, we focus primarily on reducing the number of CNOT gates in the design of the QAOA ansatz for Max-Cut.\n\n\\subsubsection*{Contributions of this paper}\n\nIn this paper, we\n\n\\begin{enumerate}[(i)]\n \\item propose two optimization methods for reducing the number of CNOT gates in the first layer of the QAOA ansatz based on (1) an Edge Coloring that can reduce up to $\\lfloor \\frac{n}{2} \\rfloor$ CNOT gates, and (2) a Depth First Search (DFS) that can reduce $n-1$ CNOT gates.\n \n \\item prove that there exists no method that can reduce more than $n-1$ CNOT gates while still maintaining a fidelity of 1 with the original QAOA ansatz \\cite{farhi2014quantum}.\n \n \\item show that while the Edge Coloring based optimization reduces the depth of the circuit, the DFS based method may increase the depth. 
We further analytically derive the criteria (involving the increase in the depth and the reduction in the number of CNOT gates) for which the DFS based optimization method still leads to a lower probability of error in the circuit, and show that the IBM Quantum hardware \\cite{ibmquantum} conforms to these criteria.\n \n \n \n \\item simulate our proposed optimization methods in Qiskit~\\cite{Qiskit} with the \\textit{ibmq\\_manhattan} noise model and show that for graphs of different sparsity (Erdos-Renyi graphs with the edge probability varying from 0.4 to 1)\n \\begin{enumerate}\n \\item the proposed reduction in the CNOT gate count is retained post transpilation, and\n \\item the DFS based method has a lower error probability than the Edge Coloring method, which in turn has a lower error probability than the traditional QAOA ansatz.\n \\end{enumerate}\n\\end{enumerate}\n\nTherefore, for any graph $G = (V,E)$, our proposed method provides a reduction in the number of CNOT gates, and hence lowers the error probability of the circuit. Although the DFS method provably surpasses the Edge Coloring method, both in terms of reduction in CNOT gates and lowering the error probability, the latter reduces the depth of the QAOA circuit, and is also used in the DFS based method to arrange the edges which do not form a part of the DFS tree.\n\nFor the rest of this paper, we consider \\emph{unweighted and connected graphs}, $i.e.$, $w_{jk} = 1$, $\\forall$ $(j,k) \\in E$. However, the circuit corresponding to the ansatz does not change if we have a weighted graph \\cite{hadfield2019quantum}. Therefore, every analysis in this paper holds for a weighted graph as well. Furthermore, the analysis of this paper will hold as it is, or with some minimal modification, for disconnected graphs as well.\n\nThe rest of the paper is organized as follows: Section~\\ref{sec:ansatz} briefly discusses the traditional QAOA ansatz design. In Section~\\ref{sec:thm} we provide the proposed optimization and the criteria for it. Sections~\\ref{sec:edge_col} and~\\ref{sec:dfs} describe two methods of optimization based on Edge Coloring and DFS respectively. We provide the respective algorithms and analyze the conditions under which each one reduces the probability of error. We present the results of our simulation in Section~\\ref{sec:sim} and conclude in Section~\\ref{sec:con}.\n\n\\section{Traditional ansatz design for QAOA}\n\\label{sec:ansatz}\n\nThe objective function of a depth-$p$ QAOA for Max-Cut~\\cite{farhi2014quantum} can be expressed as\n\\begin{equation}\n\\label{eq:objective}\n \\max_{\\psi(\\gamma,\\beta)} \\bra{\\psi(\\gamma,\\beta)} H_P \\ket{\\psi(\\gamma,\\beta)}\n\\end{equation}\nwhere $\\gamma = \\{\\gamma_1, \\gamma_2, \\hdots, \\gamma_p\\}$ and $\\beta = \\{\\beta_1, \\beta_2, \\hdots, \\beta_p\\}$ are the parameters. The trial wavefunction $\\ket{\\psi(\\gamma,\\beta)}$ is called the ansatz. The QAOA ansatz has a fixed form as described in Eq.~(\\ref{eq:ansatz}). The initial state $\\ket{\\psi_0}$ is usually the equal superposition of $n$ qubits, where $n = |V|$. Note that the depth of the circuit required to prepare $\\ket{\\psi_0}$ is 1 (Hadamard gates acting simultaneously on all the qubits). Similarly, for each layer of QAOA, the operator $exp(-i \\beta_l H_M)$ can be realized by a depth one circuit of $R_x(2\\beta_l)$ gates acting simultaneously on all the qubits.\n\nThe operator $exp(-i \\gamma_l H_P)$ has a more costly implementation. 
Note that\n\\begin{eqnarray*}\nexp(-i \\gamma_l H_P) = \\displaystyle \\Pi_{(j,k) \\in E} exp\\left(-i \\gamma_l \\left(\\frac{I-Z_j Z_k}{2}\\right)\\right).\n\\end{eqnarray*}\n\nThe operator $exp\\left(-i \\gamma_l \\left(\\frac{I-Z_j Z_k}{2}\\right)\\right)$ acts on each edge $(j,k)$, and is realized as shown below:\n\\begin{figure}[H]\n\\centering\n\t\\begin{quantikz}\n\t\t{q_{j}}&&\\ctrl{1} & \\qw & \\ctrl{1} & \\qw \\\\\n\t\t{q_{k}}&&\\targ{} & \\gate{R_z(2\\gamma_l)} & \\targ{} & \\qw\n\t\\end{quantikz}\n\t\\label{fig:z_jz_k}\n\\end{figure}\n\nHere, $q_j$ and $q_k$ represent qubit indices $j$ and $k$, respectively. Note that Max-Cut is a symmetric problem, and therefore, the selection of control and target from qubits $q_j$ and $q_k$ for the CNOT gate corresponding to the edge $(j,k)$ is irrelevant, $i.e.$ the operator $exp\\left(-i \\gamma_l \\left(\\frac{I-Z_j Z_k}{2}\\right)\\right)$ can be equivalently realized as $CNOT_{kj} (I_k \\otimes R_z(2\\gamma_l)_{j})$ $CNOT_{kj}$. In Fig.~\\ref{fig:depth_qaoa}(a) and (b), we show a 2-regular graph with four vertices and its corresponding QAOA circuit for $p = 1$, respectively.\n\n\\begin{figure}[htb]\n \\centering\n \\begin{subfigure}[b]{0.3\\textwidth}\n \\centering\n \\includegraphics[scale=0.35]{graph.png}\n \\caption{A 2-regular graph with four vertices}\n \\label{col}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.4\\textwidth}\n \\centering\n \\includegraphics[scale=0.4]{qaoa_circ.png}\n \\caption{Max-Cut QAOA circuit for $p=1$ corresponding to the graph}\n \\label{dfs}\n \\end{subfigure}\n \\caption{The Max-Cut QAOA circuit for $p=1$ corresponding to the 2-regular graph with four vertices; the values of $\\gamma$ and $\\beta$ can be arbitrary but those in this figure are the optimum values for this graph when $p = 1$}\n \\label{fig:depth_qaoa}\n\\end{figure}\n\n\n\\section{Methods for Optimized ansatz design}\n\\label{sec:thm}\n\nSome recent studies have proposed optimization methods for the circuit of the QAOA ansatz with respect to the underlying hardware architecture \\cite{alam2020circuit}. In this paper we propose two \\textit{hardware independent} methods to reduce the number of CNOT gates in the traditional QAOA ansatz. The intuition is that in the circuit realization of the operator $exp\\left(-i \\gamma_l \\left(\\frac{I-Z_j Z_k}{2}\\right)\\right)$ as $CNOT_{jk} (I_j \\otimes R_z(2\\gamma_l)_{k})$ $CNOT_{jk}$, the first CNOT gate can be removed whenever it does not make any contribution to the overall effect of the operator. Our proposed method reduces the number of CNOT gates in the circuit irrespective of the hardware architecture, and hence is applicable to any quantum device.\n\nIn Theorem~\\ref{thm:equiv} we prescribe the condition under which the first CNOT gate is irrelevant to the effect of the said operator, and hence may be removed.\n\n\\begin{theorem}\n\\label{thm:equiv}\nLet $\\ket{\\psi}$ be an $n$-qubit state prepared in a uniform superposition (up to relative phase) over all basis states $\\ket{x_1, \\hdots, x_n}$ such that the relative phase on each basis state is a function of a subset $S \\subset \\{1,2,...,n\\}$ of the $n$ qubits (and independent of the remaining qubits), $i.e.$\n\\begin{center}\n $\\ket{\\psi} = \\frac{1}{\\sqrt{2^n}}\\displaystyle \\sum_{x_1,...,x_n} e^{(i \\phi(x_S))}\\ket{x_1,...,x_n}$\n\\end{center}\nwhere $x_S = \\{x_i : i \\in S\\}$ and $\\phi(x_S)$ depicts the relative phase of each superposition state. 
For any two qubits $\\ket{j}$ and $\\ket{k}$, where $\\ket{k} \\notin S$, and for the two operators $U_1 = CNOT_{jk} (I_j \\otimes R_z(2\\gamma_l)_{k}) CNOT_{jk}$ and $U_2 = (I_j \\otimes R_z(2\\gamma_l)_{k}) CNOT_{jk}$, we have \n\\begin{center}\n $U_1\\ket{\\psi} = U_2\\ket{\\psi}$.\n\\end{center}\n\\end{theorem}\n\n\\begin{proof}\nLet us consider the action of the operators $U_1$ and $U_2$ on any edge $(j,k)$.\n\\begin{equation}\nU_1\\ket{\\psi} = CNOT_{jk} (I_j \\otimes R_z(2\\gamma_l)_{k}) (CNOT_{jk}) \\ket{\\psi} \\nonumber\n\\end{equation}\n\\begin{align}\n=&\\displaystyle \\sum_{x_1,...,x_n} CNOT_{jk} (I_j \\otimes R_z(2\\gamma_l)_{k}) (CNOT_{jk})\\nonumber \\\\\n& e^{i\\phi(x_S)} \\ket{x_1,...,x_n} \\\\\n=&\\displaystyle \\sum_{x_1,...,x_n} CNOT_{jk} (I_j \\otimes R_z(2\\gamma_l)_{k}) \\nonumber \\\\\n& e^{i \\phi(x_S)} \\ket{x_1,..,x_k'=x_j \\oplus x_k,.,x_n} \\\\\n=&\\displaystyle \\sum_{x_1,...,x_n} e^{i(\\phi(x_S)- \\gamma_l (x_j \\oplus x_k))} CNOT_{jk} \\nonumber \\\\\n& \\ket{x_1,..,x_k'=x_j \\oplus x_k,.,x_n} \\\\\n\\label{eq:u1}\n=& \\displaystyle \\sum_{x_1,...,x_n} e^{i(\\phi(x_S) - \\gamma_l (x_j \\oplus x_k))}\\ket{x_1,...,x_n}\n\\end{align}\nwhere $e^{i \\phi(x_S)}$ is the cumulative effect of operators acting on previous edges (= 0 if $(j,k)$ is the first). We have dropped the normalization constant for brevity.\n\nSimilarly,\n\\begin{equation}\n U_2\\ket{\\psi} = CNOT_{jk} (I_j \\otimes R_z(2\\gamma_l)_{x_k}) \\ket{\\psi} \\nonumber\n\\end{equation}\n\\begin{equation}\n = CNOT_{jk} \\displaystyle \\sum_{x_1,...,x_n} e^{i((\\phi(x_S)) - \\gamma_l x_k)}\\ket{x_1,...,x_n} \\nonumber\n\\end{equation}\n\\begin{equation}\n\\label{eq:u2_mid}\n = \\displaystyle \\sum_{x_1,...,x_n} e^{i((\\phi(x_S)) - \\gamma_l x_k)} \\ket{x_1,..,x_j \\oplus x_k,..,x_n}\n\\end{equation}\n\nwhere the qubit in $k^{\\text{th}}$ position changes to $x_j \\oplus x_k$ due to the $CNOT_{jk}$ operation. Now, substituting $x_k' = x_j \\oplus x_k$ in the above equation, we get\n\n\\begin{equation}\n U_2\\ket{\\psi} = \\displaystyle \\sum_{x_1,...,x_n} e^{i((\\phi(x_S)) - \\gamma_l x_k)} \\ket{x_1,..,x_j \\oplus x_k,..,x_n} \\nonumber\n\\end{equation}\n\\begin{equation}\n = \\displaystyle \\sum_{x_1,..,x_k',..,x_n} e^{i((\\phi(x_S)) - \\gamma_l (x_j \\oplus x_k'))} \\ket{x_1,..,x_k',..,x_n} \\nonumber\n\\end{equation}\n\\begin{equation}\n\\label{eq:u2}\n = \\displaystyle \\sum_{x_1,..,x_k,..,x_n} e^{i((\\phi(x_S)) - \\gamma_l (x_j \\oplus x_k))} \\ket{x_1,..,x_k,..,x_n}\n\\end{equation}\n\nHere since $k \\notin S$, the substitution in second last step, does not change the phase $e^{i \\phi(x_S)}$. The last step is valid since $x_k'$ is a running index and hence can be changed to $x_k$. Thus Eq.~(\\ref{eq:u1}) and Eq.~(\\ref{eq:u2}) are identical.\n\\end{proof}\n\n\\begin{corollary}\n\\label{cor:cond}\nFor a graph $G$, we can optimize the circuit for the operator $exp\\left(-i \\gamma_l \\left(\\frac{I-Z_j Z_k}{2}\\right)\\right)$ corresponding to an edge $(j,k)$ replacing $U_1$ by $U_2$, provided that the target vertex does not occur in any of the edge operators run earlier. 
In other words, the following conditions are sufficient to optimize an edge:\n\\begin{enumerate}\n \\item if the vertex $j$ is being operated on for the first time, then it may act either as the control or as the target of the CNOT gate corresponding to the operator;\n \\item if the vertex $j$ occurs as a part of any edge operator run earlier, then it must not act as the target of the CNOT gate.\n\\end{enumerate}\n\\end{corollary}\n\n\\begin{proof}\n\nThe first time we consider an edge adjacent to a vertex $j$, where $j \\notin S$ (see Theorem~\\ref{thm:equiv}), the relative phase $\\phi(x_S)$ does not depend on $j$. Thus it satisfies the condition of Theorem~\\ref{thm:equiv} and allows optimization of the operator.\n\n On the other hand, if the vertex $j$ occurs as part of an edge operator already run, the relative phase $\\phi$ on the basis state can potentially depend on $j$, $i.e.$, $j \\in S$. By not allowing it to act as the target, we satisfy the conditions of Theorem~\\ref{thm:equiv}.\n\\end{proof}\n\nFrom the above discussion, it follows that if we arbitrarily choose edges for applying the operator $exp\\left(-i \\gamma_l \\left(\\frac{I-Z_j Z_k}{2}\\right)\\right)$, then it cannot be guaranteed that a large number of edges will conform to Corollary~\\ref{cor:cond}. The requirement, in fact, imposes a precedence ordering among the edges. In Sections~\\ref{sec:edge_col} and~\\ref{sec:dfs}, we provide two algorithmic procedures for maximizing the number of edges that satisfy the requirement in order to reduce the number of CNOT gates in the ansatz.\n\nFor the rest of the paper, we say that {\\it an edge is optimized} if the operator $U_2$ can be operated on that edge instead of $U_1$.\n\n\\section{Edge Coloring based Ansatz Optimization}\n\\label{sec:edge_col}\nThe total error one incurs in a circuit depends on the number of operators (since a larger number of operators tend to incur more error) and the depth of the circuit (corresponding to relaxation error). In this section, we discuss how one can minimize the depth of the circuit. We also discuss the possibility of reducing CNOT gates in the depth-optimized circuit.\n\nThe operators in $H_M$ act on distinct qubits and hence can be run in parallel contributing to a depth of 1 (for each step of the QAOA). On the other hand, the operators in $H_P$ can potentially contribute a lot to depth since the edge operators do not act on disjoint vertices.\nAt a given level of the circuit, we can only apply edge operators corresponding to a vertex disjoint set of edges. Thus the minimum depth of the circuit will correspond to the minimum value $k$ such that we can partition the set of edges $E$ as a disjoint union $\\cup_i E_i$ where each subset $E_i$ consists of a vertex disjoint collection of edges. This in turn corresponds to the edge coloring problem in a graph.\n\nGiven a graph $G = (V,E)$ and a set of colors $\\chi' = \\{\\chi'_1, \\chi'_2, \\hdots, \\chi'_k\\}$, an edge coloring \\cite{west2001introduction} assigns a color to each edge $e \\in E$, such that any two adjacent edges ($i.e.$, edges incident on a common vertex) must be assigned distinct colors. The edge coloring problem consists of coloring the edges using the minimum number of colors $k$. 
Note that the operators corresponding to edges having the same color can therefore be executed in parallel.\nMoreover,\n\\begin{enumerate}\n \\item the number of colors in an optimal coloring, called the chromatic index, corresponds to the minimum depth of the circuit;\n \\item edges having the same color correspond to operators $exp\\left(-i \\gamma_l \\left(\\frac{I-Z_j Z_k}{2}\\right)\\right)$ that can be executed simultaneously.\n\\end{enumerate}\n\nOptimal edge coloring is an NP-complete problem \\cite{west2001introduction}. It is not practical, however, to allocate exponential time to find an optimal edge coloring as a pre-processing step for QAOA. Vizing's Theorem states that every simple undirected graph can be edge-colored using at most $\\Delta + 1$ colors, where $\\Delta$ is the maximum degree of the graph \\cite{vizing1964estimate}. This is within an additive factor of 1 since any edge-coloring must use at least $\\Delta$ colors. The Misra and Gries algorithm \\cite{misra1992constructive} achieves this bound constructively in $\\mathcal{O}(n\\cdot m)$ time. Therefore, we use the Misra and Gries edge coloring algorithm. Algorithm~\\ref{alg:edgecol} below computes the sets of edges having the same color using the Misra and Gries algorithm as a subroutine, and returns the largest set $S_{max}$ of edges having the same color in the computed coloring.\n\n\\begin{algorithm}[H]\n\\caption{Edge Coloring based Ansatz Optimization}\n\\label{alg:edgecol}\n\\begin{algorithmic}[1]\n\\REQUIRE A graph $G = (V,E)$.\n\\ENSURE Largest set $S_{max}$ of edges having the same color.\n\\STATE Use the Misra and Gries algorithm to color the edges of the graph $G$.\n\\STATE $S_i \\leftarrow$ set of edges having the same color $i$, $1 \\leq i \\leq \\chi'$.\n\\STATE $S_{max} \\leftarrow$ the set among $S_1, S_2, \\hdots, S_{\\chi'}$ with the maximum cardinality.\n\\STATE Return $S_{max}$.\n\\end{algorithmic}\n\\end{algorithm}\n\nThis edge coloring approach provides a depth for the QAOA ansatz that is within one layer of the minimum, using polynomial time pre-processing. After reducing the depth, we now try to further reduce errors by decreasing the number of CNOT gates.\nRecall that the operators corresponding to edges with the same color can be executed in parallel. We use the operators corresponding to the edges of $S_{max}$ as the first layer of operators. The remaining layers can be applied in any order.\n\n\\begin{lemma}\nEvery edge in the first layer can be optimized according to Corollary~\\ref{cor:cond}.\n\\end{lemma}\n\n\\begin{proof}\nFor every edge $(u,v)$ in the first layer, both the vertices are adjacent to an edge for the first time, $i.e.$, both $u, v \\notin S$. Therefore, the edge satisfies the criteria of Corollary~\\ref{cor:cond}, and hence can be optimized. In fact, any one of the qubits corresponding to the two vertices can be selected as the control for the CNOT operation.\n\\end{proof}\n\nSome edges in the subsequent layers may be optimized as well. Nevertheless, it is trivial to come up with examples where this is not the case (e.g., a complete graph on 4 vertices). Therefore, in the worst case scenario, only the edges in the first layer can be optimized. However, since this method does not increase the depth of the circuit, it always leads to a more efficient circuit design than the traditional QAOA circuit, with lower depth (by 1, since the first layer of CNOT gates is absent) and fewer CNOT gates.\n\nFor general graphs, the worst case scenario is, therefore, that only the edges in the first layer can be optimized. 
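\n\nFor illustration, a minimal Python prototype of Algorithm~\ref{alg:edgecol} is given below. It uses a greedy proper coloring of the line graph, as available in NetworkX, as a stand-in for the Misra and Gries subroutine, so the number of colors may exceed $\Delta + 1$; it is a sketch rather than our actual implementation.\n\\begin{verbatim}\nimport networkx as nx\n\ndef largest_color_class(G):\n    # A proper edge coloring of G is a proper vertex coloring of its line graph.\n    coloring = nx.coloring.greedy_color(nx.line_graph(G),\n                                        strategy="largest_first")\n    classes = {}\n    for edge, color in coloring.items():\n        classes.setdefault(color, []).append(edge)\n    # The edges in the returned class are vertex-disjoint; they form the\n    # first layer, whose leading CNOT gates can all be removed.\n    return max(classes.values(), key=len)\n\\end{verbatim}\n\n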
In the following subsection we provide an analysis on the number of optimized edges using this method.\n\n\\subsection{Lower and upper bound on the number of optimized edges}\nLet us assume that the chromatic index of a graph $G = (V,E)$ is $\\chi'$. Using the Misra and Gries algorithm \\cite{misra1992constructive} we can find, in polynomial time, a coloring using at most $\\Delta + 1$ colors, where $\\Delta$ is the maximum degree of the graph. Therefore, on average, $\\lceil \\frac{m}{\\Delta + 1} \\rceil$ edges have the same color.\n\nMore precisely, two extreme cases arise: (i) the colors may be uniformly distributed, and the maximum number of edges having the same color is $\\lceil \\frac{m}{\\Delta + 1} \\rceil$; or (ii) one of the colors is used dominantly for most of the edges. Nevertheless, note that for all the edges adjacent to the same vertex, a particular color can be assigned to one of the edges only. Therefore, the dominant color can be used on at most $\\lfloor \\frac{n}{2} \\rfloor$ edges, where $n = |V|$. Hence, the possible number of optimized edges that can be obtained via the Edge Coloring method is as shown in Eq.~(\\ref{eq:edge_col}).\n\\begin{equation}\n\\label{eq:edge_col}\n \\lceil \\frac{m}{\\Delta + 1} \\rceil \\leq \\#~\\text{Optimized Edges} \\leq \\lfloor \\frac{n}{2} \\rfloor.\n\\end{equation}\n\n\\section{Depth First Search based Ansatz Optimization}\n\\label{sec:dfs}\nAs the edge coloring based algorithm can optimize at most $\\lfloor \\frac{n}{2} \\rfloor$ edges, in this section, we present a Depth First Search (DFS) based optimization procedure which can optimize $n-1$ edges. Algorithm~\\ref{alg:dfs}, for obtaining the optimized QAOA ansatz, uses the standard DFS algorithm \\cite{cormen2009introduction} and returns the tree (discovery) edges forming the DFS tree.\n\nIn this method, we start from the root vertex of the DFS tree. For every edge $e = (u,v)$ in the DFS tree, the vertex $u$ is made the control and $v$ is made the target for the CNOT gate corresponding to that edge. The edges are operated on sequentially, in the order in which they appear in the set $E_{dfs}$ (the tree edges). Once every edge in the DFS tree has been operated on, the remaining edges can be executed in any order. In fact, one may opt to use the Edge Coloring method on the remaining edges to obtain the minimum depth of the circuit corresponding to these edges, although CNOT gates cannot be reduced any further.\n\n\\begin{algorithm}[H]\n\\caption{DFS based Ansatz Optimization}\n\\label{alg:dfs}\n\\begin{algorithmic}[1]\n\\REQUIRE A graph $G = (V,E)$.\n\\ENSURE A list $E_{dfs}$ of $n-1$ edges.\n\\STATE $E_{dfs} = \\{\\}$\n\\STATE $u \\leftarrow$ randomly selected vertex from $V$.\n\\STATE Start DFS from the vertex $u$. For every vertex $v$ discovered from its predecessor $v'$, $E_{dfs} = E_{dfs} \\cup \\{(v',v)\\}$.\n\\STATE Return $E_{dfs}$.\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{theorem}\nEach edge in the DFS tree can be optimized according to Corollary~\\ref{cor:cond}.\n\\label{thm:dfs}\n\\end{theorem}\n\n\\begin{proof}\nWe prove this by induction. Let $u$ be the vertex from which the DFS tree starts. Then $u$ is being operated on for the first time, and, hence, can act as either the control or the target for the CNOT operation corresponding to the first edge (Corollary~\\ref{cor:cond}). 
Choose $u$ to be the control.\n\n\\textbf{Base case}: If $v$ is the vertex that is discovered from $u$ via the edge $(u,v)$, then choosing $u$ as the control and $v$ as the target satisfies Corollary~\\ref{cor:cond}. Therefore, the edge $(u,v)$ can be optimized.\n\n\\textbf{Induction hypothesis}: Assume that the DFS tree has been constructed up to some vertex $j$, and that every edge $(e_1, e_2)$ in this DFS tree so far can be optimized, $i.e.$, $e_1$ acts as the control and $e_2$ as the target.\n\n\\textbf{Induction step}: Let $k$ be the next vertex in the DFS tree, discovered from some vertex $i$. From the DFS algorithm, the vertex $i$ must have been discovered in some previous step. Since vertex $k$ was not previously discovered, $k \\notin S$, and hence the edge $(i,k)$ can be optimized if we select $i$ to be the control and $k$ as the target.\n\\end{proof}\n\nTherefore, the DFS based optimization method provides $n-1$ optimized edges, $i.e.$, a reduction in the number of CNOT gates by $n-1$. We now show in Theorem~\\ref{thm:optimal} that this is the maximum number of edges that can be optimized.\n\n\\begin{figure}[H]\n \\centering\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[scale=0.4]{col.png}\n \\caption{Edge Coloring Based Optimization}\n \\label{col}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[scale=0.4]{dfs.png}\n \\caption{Depth First Search Based Optimization}\n \\label{dfs}\n \\end{subfigure}\n \\caption{Depth of the ansatz circuit when using (a) Edge Coloring and (b) DFS based method; edges having the same color can be executed simultaneously. The depth of the spanning tree in the DFS based method is 4, compared to depth 2 for the Edge Coloring based method. However, the number of optimized edges in the Edge Coloring based method is 2, while that by the DFS based method is 3.}\n \\label{fig:depth}\n\\end{figure}\n\n\\begin{theorem}\nOptimization of the ansatz for Max-Cut QAOA with $p=1$, by deleting the first CNOT gate of the operator corresponding to an edge of the graph, can be done for no more than $n-1$ edges.\n\\label{thm:optimal}\n\\end{theorem}\n\n\\begin{proof}\nLet us assume that there is some method by which at least $n$ edges can be optimized. Now, the subgraph formed by the $n$ vertices and at least $n$ optimized edges must contain a cycle. Let $(u,v)$ be an edge of this cycle, $i.e.$, an edge whose removal leaves the residual graph a tree (in case there are more than $n$ edges, the removal of edges can be performed recursively till such an edge $(u,v)$ is obtained whose removal makes the residual graph a tree). For this edge $(u,v)$, both the vertices $u$ and $v$ are endpoints of some other optimized edges as well. Therefore, from Corollary~\\ref{cor:cond}, both $u$ and $v$ must act as the control for the CNOT gate corresponding to the edge $(u,v)$ in order for this edge to be optimized, which is not possible. Therefore, it is not possible to optimize more than $n-1$ edges.\n\\end{proof}\n\nTherefore, the DFS method is optimal in the number of optimized edges. However, we note that the DFS based method imposes an ordering on the edges, $i.e.$, some of the edges which could have been operated on simultaneously can no longer be.\nThis, in turn, can lead to an increase in the depth of the circuit. Hence, the penalty for the optimal reduction in CNOT gates produced by this method is an increase in the depth of the circuit. 
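\n\nA possible realization of the resulting cost layer, combining the DFS based optimization for the tree edges with the unoptimized operators for the remaining edges, is sketched below using NetworkX and Qiskit. It is a simplified illustration (a single QAOA layer, vertices labeled $0,\dots,n-1$, a fixed value of $\gamma$, and no hardware-aware transpilation) rather than our simulation code.\n\\begin{verbatim}\nimport networkx as nx\nfrom qiskit import QuantumCircuit\n\ndef optimized_cost_layer(G, gamma):\n    n = G.number_of_nodes()\n    qc = QuantumCircuit(n)\n    tree_edges = list(nx.dfs_edges(G, source=0))   # n-1 optimizable edges\n    for (u, v) in tree_edges:                      # u: control (parent), v: target (child)\n        qc.rz(2 * gamma, v)                        # first CNOT removed (edge is optimized)\n        qc.cx(u, v)\n    remaining = set(map(frozenset, G.edges())) - set(map(frozenset, tree_edges))\n    for e in remaining:                            # non-tree edges: full operator U_1\n        u, v = tuple(e)                            # control/target choice is irrelevant here\n        qc.cx(u, v)\n        qc.rz(2 * gamma, v)\n        qc.cx(u, v)\n    return qc\n\\end{verbatim}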
\n\n\\begin{figure}[htb]\n \\centering\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[scale=0.37]{qaoa_col.png}\n \\caption{Optimized circuit by Edge Coloring based method}\n \\label{col}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.48\\textwidth}\n \\centering\n \\includegraphics[scale=0.37]{qaoa_dfs.png}\n \\caption{Optimized circuit by DFS based method}\n \\label{dfs}\n \\end{subfigure}\n \\caption{Max-Cut QAOA ansatz with $p=1$ corresponding to (a) Edge Coloring and (b) DFS based optimization. In (a), the first CNOT gates of the operators have been deleted. The operators corresponding to $(q_1,q_2)$ and $(q_3,q_0)$ act in parallel. In (b), the first CNOT gates of three operators have been deleted, but the depth has increased.}\n \\label{fig:opt}\n\\end{figure}\n\nIn Fig.~\\ref{fig:depth}, we show a 2-regular graph with four vertices. In Fig.~\\ref{fig:depth}(a), the depth of the circuit corresponding to the operator $exp(-i\\gamma_l H_P)$ is 2; the edges of the same color can be operated on simultaneously. If the red (or blue) edges form the first layer, then those two edges are optimized. However, if we use the DFS method, with the DFS tree starting from, say, vertex 1, then the edges $(1,2),(2,3)$ and $(3,4)$ can be optimized (Fig.~\\ref{fig:depth}(b)). Now these three edges must be operated on one after another, followed by the fourth edge. Thus the depth of the circuit corresponding to the operator $exp(-i\\gamma_l H_P)$ becomes 4. The circuits corresponding to these two scenarios are depicted in Fig.~\\ref{fig:opt}(a) and (b) respectively.\n\n\nThe question, therefore, is whether this increase in depth is acceptable given the larger reduction in the number of CNOT gates, since with increased depth the circuit becomes more prone to relaxation error.\nNumerical analysis and simulation (Section~\\ref{sec:sim}) establishes that although the depth of the circuit is increased, the overall error probability of the circuit is reduced further.\n\n\\subsection{When is the DFS based method useful?}\n\nIn this subsection, we formulate a relation under which, despite the increase in depth, the reduction in the number of CNOT gates still leads to a lower probability of error. For this analysis, we assume that the error in the circuit arises only from noisy CNOT gates and the depth of the circuit ($i.e.$, the $T_1$ time). Although this assumption is idealistic, the ansatz primarily consists of layers of CNOT gates. Furthermore, in superconducting devices, $R_z$ gates are executed virtually \\cite{mckay2017efficient}, and hence do not lead to any gate error. Therefore, CNOT gates are the primary source of gate error, and with increasing depth the qubits become more prone to relaxation error. This assumption thus allows for a simple but powerful model for analyzing the question at hand.\n\nLet us assume that the time duration and the error probability of each CNOT gate are $t_{cx}$ and $p_{cx}$, respectively. Let there be $N$ layers of CNOT operations. Note that although there can be multiple CNOT gates in each layer, the time duration of each layer is $t_{cx}$ only. Therefore, the probability of no error ($i.e.$, the probability that the circuit remains error free) after $N$ layers of operations, considering only relaxation error, is\n $exp(-\\frac{N t_{cx}}{T_1})$.\n\nLet there be $k$ CNOT gates in the original circuit. 
Therefore, the probability of no error after the operation of the CNOT gates, considering only CNOT gate error, is\n $(1 - p_{cx})^k$.\n\nCombining both sources of error, Eq.~(\ref{eq:no_err}) gives the probability of success ($i.e.$, the probability of no error) after a single cycle of computation of the QAOA ansatz.\n\begin{equation}\n \label{eq:no_err}\n P_{success} = (1 - p_{cx})^k \cdot exp(-\frac{N t_{cx}}{T_1})\n\end{equation}\n\nHenceforth, $P_{success}$ will refer to the probability of success ($i.e.$, how close the noisy outcome is to the noise-free ideal outcome) of the ansatz circuit execution for a single run of the algorithm. Note that in QAOA, the ansatz is computed multiple times for multiple cycles, and the objective is to maximize the expectation value of the outcome.\n\nWe further assume that after the optimization using the DFS based method, $k_1$ CNOT gates have been removed, leading to an increase of $N_1$ layers of operations. The probability that this optimized circuit remains error-free is given in Eq.~(\ref{eq:opt_err}).\n\begin{equation}\n \label{eq:opt_err}\n P^{opt}_{success} = (1 - p_{cx})^{(k-k_1)} \cdot exp(-\frac{(N+N_1) t_{cx}}{T_1})\n\end{equation}\n\nThe optimization is fruitful only when $P^{opt}_{success} \geq P_{success}$. Note that\n\begin{center}\n $P^{opt}_{success} = P_{success} \cdot exp(-\frac{N_1 t_{cx}}{T_1}) \/ (1 - p_{cx})^{k_1}$\n\end{center}\n\nSince $P_{success} > 0$, the required inequality holds if and only if $exp(- \frac{N_1 t_{cx}}{T_1}) \/ (1 - p_{cx})^{k_1} \geq 1$. In other words,\n\begin{eqnarray}\n\label{eq:cond1}\nexp(-\frac{N_1 t_{cx}}{T_1}) &\geq& (1 - p_{cx})^{k_1} \nonumber\\\n\Rightarrow N_1 &\leq& \lambda \times k_1, \nonumber\\\n~where && \lambda = \frac{-ln(1 - p_{cx}) \cdot T_1}{t_{cx}}.\n\end{eqnarray}\n\nThe constant $\lambda$ is defined in terms of parameters specific to the quantum device.\n\n\n\subsubsection{Effect of varying $\lambda$}\nGiven that $\lambda = f(t_{cx},p_{cx},T_1)$, we expect the $T_1$ value to increase, and the $t_{cx}$ and $p_{cx}$ values to decrease, as technology improves. The value of $\lambda$ increases for increasing $T_1$ and\/or decreasing $t_{cx}$, whereas it decreases for decreasing $p_{cx}$. Therefore, \n\n\begin{itemize}\n \item If $p_{cx}$, the probability of error of a CNOT gate, decreases, then the optimization becomes less useful: the increase in depth still raises the probability of relaxation error, while the gain from removing CNOT gates becomes smaller. Accordingly, for smaller $\lambda$, Eq.~(\ref{eq:cond1}) is satisfied only when the increase in depth is correspondingly small.\n \n \item Similarly, if (i) $T_1$ increases, then the qubit can retain its coherence for a longer period of time, or (ii) $t_{cx}$ decreases, then the overall computation time of the circuit decreases as well, and the circuit can tolerate some increase in depth even if $T_1$ remains unchanged. We observe that in both of these cases, by Eq.~(\ref{eq:cond1}), $\lambda$ increases, thus allowing a larger increase in depth for a given reduction in the number of CNOT gates.\n\end{itemize}\n\n\subsection{Trade-off between depth and reduction in CNOT gates}\n\nIf the DFS based method is not applied, then the number of layers of CNOT gates is equal to the number of color classes (as in the Edge Coloring method).
The maximum number of color classes is $\\Delta + 1$ (as discussed in the previous section), and hence the maximum depth of the circuit is $\\Delta + 1$ as well. Now, when the DFS based method is applied, the circuit can be divided into two disjoint sets of edges:\n\\begin{enumerate}\n \\item The set of edges belonging to the DFS tree which can be optimized. The depth of this portion of the circuit is at most $n-1$ ($i.e.$, the depth of the DFS tree). Each of the operators corresponding to these edges contains a single CNOT gate only, and hence the number of CNOT gate layers is $n-1$ as well.\n \\item The set of edges that do not belong to the DFS tree and hence are not optimized. The operators corresponding to these edges can be applied in any order, but after all the optimized edges. When removing the edges of the DFS tree, the degree of each vertex is reduced by at least 1. Therefore, the maximum degree of the remaining subgraph is at most $\\Delta - 1$. Therefore, the depth of this portion of the circuit will be at most $\\Delta$ (From Misra and Gries Algorithm). Each of the layer in this portion contains 2 CNOT gates, and hence the number of CNOT gate layers is $2\\Delta$.\n\\end{enumerate}\n\nTherefore, the maximum depth of the circuit after applying the DFS based optimization is $n-1 + \\Delta$. In other words, the increase in depth due to this method is given by Eq.~(\\ref{eq:cond2}).\n\\begin{eqnarray}\n\\label{eq:cond2}\n n-1 + \\Delta - (\\Delta + 1) = n - 2\n\\end{eqnarray}\n\nRecall that the number of CNOT gates reduced due to the DFS method is always $n-1$. Therefore, from Eq.~(\\ref{eq:cond1}) and $~(\\ref{eq:cond2})$, we get\n\\begin{eqnarray}\n\\label{eq:condition}\nn - 2 &\\leq& \\lambda \\cdot (n-1) \\nonumber \\\\\n\\Rightarrow \\lambda &\\geq& \\frac{n-2}{n-1}\n\\end{eqnarray}\n\nIn Table~\\ref{tab:lambda} we show the average value of $\\lambda$ for some IBM Quantum~\\cite{ibmquantum} devices, ranging from the comparatively more noisy \\textit{ibmq\\_melbourne} to the comparatively less noisy \\textit{ibmq\\_manhattan}.\n\n\\begin{table}[htb]\n \\centering\n \\caption{Average value of $\\lambda$ for four IBM Quantum machines ~\\cite{ibmquantum}}\n \\begin{tabular}{|c|c|}\n \\hline\n IBM Quantum devices & Avg value of $\\lambda$\\\\\n \\hline\n \\textit{ibmq\\_manhattan} & 3.6\\\\\n \\hline\n \\textit{ibmq\\_montreal} & 2.47\\\\\n \\hline\n \\textit{ibmq\\_sydney} & 3.35\\\\\n \\hline\n \\textit{ibmq\\_melbourne} & 2.03\\\\\n \\hline\n \\end{tabular}\n \\label{tab:lambda}\n\\end{table}\n\nNote that the lower bound on $\\lambda$, $\\frac{n-2}{n-1}$ (Eq.~(\\ref{eq:condition}), is always less than $1$ for all $n$. In the asymptotic limit, $\\frac{n-2}{n-1} \\rightarrow 1$. Thus, the proposed DFS based optimization method leads to a lower error probability on any quantum device for which $\\lambda \\geq 1$. 
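As a quick numerical illustration of Eq.~(\\ref{eq:cond1}) and Eq.~(\\ref{eq:condition}), the short Python sketch below computes $\\lambda$ from device parameters and checks the trade-off condition; the parameter values are placeholders of a realistic order of magnitude and are not the calibration data of any particular device.

\\begin{verbatim}
import math

# Placeholder device parameters (order of magnitude only, not real calibration data)
t_cx = 300e-9     # CNOT duration (s)
p_cx = 1e-2       # CNOT error probability
T1   = 100e-6     # relaxation time (s)

lam = -math.log(1.0 - p_cx) * T1 / t_cx      # lambda from Eq. (cond1)

n = 10                                       # number of vertices in the graph
depth_increase = n - 2                       # Eq. (cond2)
cnot_reduction = n - 1                       # edges optimized by the DFS method
useful = depth_increase <= lam * cnot_reduction
print(lam, useful)     # lambda is roughly 3.3 here, so the condition holds
\\end{verbatim}

Since $\\frac{n-2}{n-1} < 1$, the check succeeds for every $n$ whenever $\\lambda \\geq 1$, in line with the discussion above.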
Table~\\ref{tab:lambda} readily shows that the IBM Quantum hardwares conform to this requirement.\n\n\n\n\n\\section{Results of simulation}\n\\label{sec:sim}\n\n\\begin{table*}[t]\n \\centering\n \\caption{Comparison of Max-Cut QAOA ansatz circuits post transpilation on \\textit{ibmq\\_manhattan}: (i) Traditional, (ii) Edge coloring and (iii) DFS based optimization}\n \\begin{tabular}{|c|c|c|c|c|}\n \\hline\n \\multirow{2}{*}{Graph Family} & \\multirow{2}{*}{\\# qubits} & \\multicolumn{3}{c|}{\\# CNOT gates in Max-Cut QAOA ansatz circuit}\\\\\n \\cline{3-5}\n & & Traditional & Edge coloring & DFS\\\\\n \\hline\n \\multirow{6}{*}{Complete graph} & 10 & 90 & 85 & 81\\\\\n \\cline{2-5}\n & 20 & 380 & 370 & 361\\\\\n \\cline{2-5}\n & 30 & 870 & 855 & 841\\\\\n \\cline{2-5}\n & 40 & 1560 & 1540 & 1521\\\\\n \\cline{2-5}\n & 50 & 2450 & 2425 & 2401\\\\\n \\cline{2-5}\n & 60 & 3540 & 3510 & 3481\\\\\n \\hline\n \\multirow{6}{*}{Erdos-Renyi ($p_{edge}$ = 0.8)} & 10 & 70 & 66 & 61\\\\\n \\cline{2-5}\n & 20 & 302 & 292 & 283\\\\\n \\cline{2-5}\n & 30 & 698 & 683 & 669\\\\\n \\cline{2-5}\n & 40 & 1216 & 1197 & 1177\\\\\n \\cline{2-5}\n & 50 & 1956 & 1931 & 1907\\\\\n \\cline{2-5}\n & 60 & 2822 & 2792 & 2763 \\\\\n \\hline\n \\multirow{6}{*}{Erdos-Renyi ($p_{edge}$ = 0.6)} & 10 & 50 & 46 & 41\\\\\n \\cline{2-5}\n & 20 & 234 & 225 & 215\\\\\n \\cline{2-5}\n & 30 & 504 & 491 & 475\\\\\n \\cline{2-5}\n & 40 & 960 & 940 & 921\\\\\n \\cline{2-5}\n & 50 & 1504 & 1479 & 1455\\\\\n \\cline{2-5}\n & 60 & 2114 & 2085 & 2055 \\\\\n \\hline\n \\multirow{6}{*}{Erdos-Renyi ($p_{edge}$ = 0.4)} & 10 & 36 & 31 & 27\\\\\n \\cline{2-5}\n & 20 & 164 & 154 & 145\\\\\n \\cline{2-5}\n & 30 & 362 & 348 & 333\\\\\n \\cline{2-5}\n & 40 & 586 & 566 & 547\\\\\n \\cline{2-5}\n & 50 & 950 & 925 & 901\\\\\n \\cline{2-5}\n & 60 & 1468 & 1440 & 1409 \\\\\n \\hline\n \\end{tabular}\n \\label{tab:cx_count}\n\\end{table*}\n\nIn this section we show the effect of our optimization methods on reducing the probability of error and the CNOT count of QAOA for Max-Cut. We first show that our proposed reduction is retained in the post transpilation circuit, which is executed on the quantum hardware. Next, we run our simulation with the noise model for \\textit{ibmq\\_manhattan} from IBM Quantum; this noise model corresponds to the actual noise in the IBM Quantum Manhattan device which has $65$ qubits and a Quantum Volume of $32$~\\cite{ibmquantum}. For our simulation purpose, we have considered Erdos-Renyi graphs, where the probability that an edge exists between two vertices, $p_{edge}$, varies respectively from 0.4 to 1 (complete graph). The choice of Erdos-Renyi graph allows us to study the performance of these proposed methods for various sparsity of graphs.\n\nThe circuit that we construct is usually not executed as it is in the IBM Quantum hardware. It undergoes a process called transpilation in which\n\n\\begin{enumerate}[(i)]\n \\item the gates of the circuit are replaced with one, or a sequence of, basis gates which are actually executed in the quantum hardware. 
The basis gates of the IBM Quantum devices are \\{$CNOT$, $SX$, $X$, $R_z$ and Identity\\} \\cite{ibmquantum},\n \n \\item the circuit is mapped to the underlying connectivity (called the coupling map) of the hardware \\cite{bhattacharjee2020survey},\n \n \\item the number of gates in the circuit is reduced using logical equivalence \\cite{burgholzer2020advanced}.\n\\end{enumerate}\n\nA natural question, therefore, is whether the reduction in CNOT gates is retained post transpilation. In Table~\\ref{tab:cx_count} we show the number of CNOT gates in the post transpilation circuit for the \\textit{ibmq\\_manhattan} device as the number of vertices is varied from $10 - 60$ for each of the graph family considered. Our results readily show that the proposed optimization in the number of CNOT gates still hold good even in the transpiled circuit. Since the \\textit{ibmq\\_manhattan} device is a 65-qubits device, we show the results upto 60 qubits, but the results show that the trend will continue for higher qubit devices as well, when they become available.\n\n\\begin{figure*}[t]\n \\centering\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[scale=0.3]{erdos04.png}\n \\caption{Erdos-Renyi graphs with $p_{edge} = 0.4$}\n \\label{col}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[scale=0.3]{erdos06.png}\n \\caption{Erdos-Renyi graphs with $p_{edge} = 0.6$}\n \\label{dfs}\n \\end{subfigure}\n \\newline\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[scale=0.3]{erdos08.png}\n \\caption{Erdos-Renyi graphs with $p_{edge} = 0.8$}\n \\label{col}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[scale=0.3]{complete.png}\n \\caption{Complete graphs}\n \\label{dfs}\n \\end{subfigure}\n \\caption{$|\\braket{\\psi|\\psi_e}|^2$ for graphs of various sparsity: Erdos Renyi graphs ($p_{edge} = 0.4,~0.6,~ 0.8$) and complete graphs}\n \\label{fig:prob_succ}\n\\end{figure*}\n\nLet $\\ket{\\psi}$ be the state obtained from the noise-free (ideal) computation of the QAOA circuit, and the state obtained from noisy computation be $\\ket{\\psi_e}$. The probability of success of the noisy computation, then, is defined as\n\\begin{equation}\n \\label{eq:graph_succ}\n P_{succ} = |\\braket{\\psi|\\psi_e}|^2\n\\end{equation}\nIn Fig.~\\ref{fig:prob_succ}(a) - (d) we plot $P_{succ}$ of the traditional QAOA ansatz, Edge Coloring based and the DFS based optimization method for Erdos-Renyi graphs, where $p_{edge}$, the probability that an edge exists between two vertices, varies from 0.4 to 1 (complete graph). The choice of Erdos-Renyi graph allows us to study the performance of these proposed methods for various sparsity of graphs. For each case we vary the number of vertices $n$ from 4 to 12. For each value of $n$ and $p_{edge}$, the results are averaged over 20 input graph instances, and each instance is an average of 100 randomly generated noisy circuits by the simulator model for \\textit{ibmq\\_manhattan} with noise. 
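For reference, the success metric of Eq.~(\\ref{eq:graph_succ}) is a squared overlap of state vectors and can be evaluated with a few lines of NumPy; the sketch below uses synthetic vectors only, since obtaining $\\psi$ and $\\psi_e$ requires the noise-free and noisy simulator runs described above.

\\begin{verbatim}
import numpy as np

def success_probability(psi_ideal, psi_noisy):
    # squared overlap |<psi|psi_e>|^2 between normalised state vectors
    psi_ideal = psi_ideal / np.linalg.norm(psi_ideal)
    psi_noisy = psi_noisy / np.linalg.norm(psi_noisy)
    return float(np.abs(np.vdot(psi_ideal, psi_noisy)) ** 2)

# Synthetic 2-qubit example: a slightly perturbed copy of the ideal state.
rng = np.random.default_rng(0)
psi   = rng.normal(size=4) + 1j * rng.normal(size=4)
psi_e = psi + 0.05 * (rng.normal(size=4) + 1j * rng.normal(size=4))
print(success_probability(psi, psi_e))    # close to, but below, 1
\\end{verbatim}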
Our results readily show that the DFS based method outperforms both the Edge Coloring based method and the traditional QAOA in terms of lower error probability.\n\nFrom Table~\\ref{tab:cx_count}, and our simulation results in Fig.~\\ref{fig:prob_succ}(a)-(d), we can infer that the DFS based optimization outperforms the Edge Coloring based optimization, which again, outperforms the traditional QAOA in the reduction in CNOT count, and the probability of error in the circuit in (i) the actual transpiled circuit that is executed on the quantum devices, as well as (ii) in realistic noisy scenario of quantum devices.\n\n\\section{Conclusion}\n\\label{sec:con}\n\nIn this paper we have proposed two methods to reduce the number of CNOT gates in the traditional QAOA ansatz. The Edge Coloring based method can reduce upto $\\lfloor \\frac{n}{2} \\rfloor$ CNOT gates whereas the DFS based method can reduce $n - 1$ CNOT gates. While the former method provides a depth-optimized circuit, the latter method can increase the depth of the circuit. We analytically derive the constraint for which a particular increase in depth is acceptable given the number of CNOT gates reduced, and show that every graph satisfies this constraint. Therefore, these methods can reduce the number of CNOT gates in the QAOA ansatz for any graph. Finally, we show via simulation, with the \\textit{ibmq\\_manhattan} noise model, that the DFS based method outperforms the Edge Coloring based method, which in its turn, outperforms the traditional QAOA in terms of lower error probability in the circuit. The transpiler procedure of Qiskit maps a circuit to the underlying hardware connectivity graph, and some gates are reduced in this process. This transpiled circuit is executed on the real hardware. We show, with the \\textit{ibmq\\_manhattan} coupling map, that the reduction in the number of CNOT gates still holds post transpilation. Therefore, our proposed methods provide a universal way to an improved QAOA ansatz design. On a final note, all the calculations in this paper considers connected graph, but these carry over easily to disconnected graphs as well.\n\n\\section*{Acknowledgement}\n\nWe acknowledge the use of IBM Quantum services for this work. The views expressed are those of the authors, and do not reflect the official policy or position of IBM or the IBM Quantum team. In this paper we have used the noise model of \\textit{ibmq\\_manhattan}, which is one of IBM Quantum Hummingbird r2 Processors. \n\n\\section*{Code Availability}\n\nA notebook providing the code to generate the plots of Fig.~\\ref{fig:prob_succ}(a)-(d) is available open source at \\href{https:\/\/github.com\/RitajitMajumdar\/Optimizing-Ansatz-Design-in-QAOA-for-Max-cut}{https:\/\/github.com\/RitajitMajumdar\/Optimizing-Ansatz-Design-in-QAOA-for-Max-cut}.\n\n\n\n\n\n\\bibliographystyle{unsrt}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\nMagnetic Resonance Fingerprinting (MRF)~\\cite{ref:ma2013} is an emerging Quantitative MRI technology for simultaneous measurement of multiple quantitative bio-properties (e.g. T1 and T2 relaxation times, and proton density (PD)) of tissues in a single and time-efficient scan. However, due to the aggressive spatiotemporal subsampling needed for short scan times, the MRF time-series data and consequently the tissue maps usually contain aliasing artefacts. 
\n\nCompressed sensing reconstruction algorithms based on using analytical image priors (e.g., sparsity, Total Variation and\/or low-rank) have been proposed to address this problem \\cite{ref:mcgivney2014, ref:asslander2018, ref:golbabaee2021-lrtv_mrfresnet}. Recent works e.g.~\\cite{ref:fang2019oct, ref:chen2020} have been focussed around deep learning approaches that utilise spatiotemporal image priors learned from data for artefact removal. These approaches which are conventionally trained end-to-end on pairs of subsampled and clean data, outperform those using analytical image priors and produce excellent results. However, unlike the traditional compressed sensing algorithms, current trained deep learning models are specific to the subsampling processes used during their training and unable to generalise and remove aliasing artefacts from other subsampling processes given at the testing time.\n\nThe contribution of this work is to propose the first deep Plug-and-Play (PnP) iterative reconstruction algorithm for MRF to address this issue, and to demonstrate that this approach effectively adapts to changing acquisition models, specifically, the MRF k-space subsampling patterns. Iterations of the PnP algorithm~\\cite{ref:venkatakrishnan2013, ref:ahmad2020} follow steps of the Alternating Direction Method of Multipliers (ADMM)~\\cite{ref:boyd2011} which is an optimisation method for model-based compressed sensing reconstruction (for imaging applications of deep learning ADMM and competing methods, see~\\cite{ref:ahmad2020, ref:liang2020, ref:romano2017, ref:sun2019, ref:sun2021}). In our work, the spatiotemporal MRF image priors are learned from data through an image denoiser i.e. a Convolutional Neural Network (CNN), that is pre-trained for removing Additive White Gaussian Noise (AWGN), and not any particular subsampling artefact. This CNN denoiser is used as a data-driven shrinkage operator within the ADMM's iterative reconstruction process. The reconstructed (de-aliased) image time-series are then fed to an MRF dictionary matching step~\\cite{ref:ma2013} for mapping tissues' quantitative parameters.\n\n\n\\section{The MRF Inverse imaging Problem}\n\\label{sec:mrf}\n\nMRF adopts a linear spatiotemporal compressive acquisition model:\n\\begin{equation}\n\t\\label{eq:standard_linear_inverse_problem}\n\t\\bm{y} = \\bm{A}\\bm{x} + \\bm{w}\n\\end{equation}\nwhere $ \\bm{y} \\in \\mathbb{C}^{\\,m\\times T}$ are the k-space measurements collected at $T$ temporal frames and corrupted by some bounded noise $\\bm{w}$. The acquisition process i.e. the linear forward operator $ \\bm{A}: \\mathbb{C}^{\\,n\\times t}\\rightarrow \\mathbb{C}^{\\,m\\times T}$ models Fourier transformations subsampled according to a set of temporally-varying k-space locations in each timeframe combined with a temporal-domain compression scheme~\\cite{ref:mcgivney2014,ref:asslander2018,ref:golbabaee2021-lrtv_mrfresnet} for (PCA subspace) dimensionality reduction i.e. $t\\ll T$. $ \\bm{x}\\in \\mathbb{C}^{\\,n\\times t} $ is the Time-Series of Magnetisation Images (TSMI) across $n$ voxels and $t$ dimension-reduced timeframes (channels). Accelerated MRF acquisition implies working with under-sampled data which makes the inversion of~\\eqref{eq:standard_linear_inverse_problem} i.e. estimating the TSMIs $\\bm{x}$ from the compressed MRF measurements $\\bm{y}$, an ill-posed problem. 
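To fix ideas, a toy single-coil Cartesian version of this forward operator (a per-timeframe subsampled Fourier transform combined with a fixed temporal subspace) is sketched below in Python; the variable names are ours and the snippet is schematic rather than the operator used in the experiments reported later.

\\begin{verbatim}
import numpy as np

def mrf_forward(x, masks, V):
    # x:     TSMI of shape (n1, n2, t), already in the t-dimensional subspace
    # masks: boolean k-space masks of shape (T, n1, n2), one per timeframe
    # V:     (T, t) temporal subspace matrix with orthonormal columns
    full = x @ V.T                                # expand back to T timeframes
    y = []
    for j in range(masks.shape[0]):
        kspace = np.fft.fft2(full[..., j], norm='ortho')
        y.append(kspace[masks[j]])                # keep only sampled locations
    return y                                      # list of per-frame measurements

# Tiny synthetic example (dimensions chosen only for illustration)
rng = np.random.default_rng(1)
n1 = n2 = 16; T, t = 50, 5
V = np.linalg.qr(rng.normal(size=(T, t)))[0]
x = rng.normal(size=(n1, n2, t))
masks = rng.random(size=(T, n1, n2)) < 0.1        # ~10% of k-space per frame
y = mrf_forward(x, masks, V)
\\end{verbatim}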
\n\\vspace{\\baselineskip}\n\\\\\n\\textbf{The Bloch response model:} \nThe TSMIs magnetisation time-responses are the main source of tissue quantification. At each voxel $v$, the time responses i.e. the fingerprints, are related to the corresponding voxel's bio-properties, namely, the T1 and T2 relaxation times, through the solutions of the Bloch differential equations $\\mathcal{B}$ scaled by the proton density (PD)~\\cite{ref:ma2013, jiang2015mr}:\n\\begin{equation}\n\t\\label{eq:magnetisation_response}\n\t\\bm{x}_{v} \\approx \\text{PD}_{v} \\, \\mathcal{B} \\! \\left( \\text{T1}_{v}, \\text{T2}_v \\right)\n\\end{equation}\nWhile this relationship could temporally constrain the inverse problem~\\eqref{eq:standard_linear_inverse_problem}, it turns out to be inadequate to make the problem well-posed~\\cite{ref:golbabaee2021-lrtv_mrfresnet}. The model~\\eqref{eq:magnetisation_response} alone does not capture cross-voxel correlations. Spatial-domain image priors that account for this must be further added to resolve the ill-posedness problem. For this we rely on datasets of anatomical quantitative maps i.e. the T1, T2 and PD maps, to create the TSMIs via \\eqref{eq:magnetisation_response}, and train a denoiser model on them to learn the spatiotemporal structures\/priors for $\\bm{x}$. We then use this trained denoiser to iteratively solve~\\eqref{eq:standard_linear_inverse_problem} for any given forward model $\\bm{A}$. This process is detailed in the next section.\n\n\n\\section{image reconstruction Algorithm}\n\\label{sec:algorithms}\n\nWe describe our algorithm to reconstruct artefact-free TMSIs before feeding them to MRF dictionary-matching for quantitative mapping.\n\n\n\\subsection{The PnP-ADMM algorithm}\n\\label{ssec:pnp_admm}\n\nA model-based reconstruction approach to solve inverse problems like (1) would typically lead to an optimisation of the form\n\\begin{equation}\n\t\\label{eq:opt}\n\t\\arg\\min_{x} \\| \\bm{y} - \\bm{A}\\bm{x}\\|_2^2 + \\phi(\\bm{x})\n\\end{equation}\nwhich can be solved by variety of iterative shrinkage algorithms including ADMM~\\cite{ref:boyd2011} and by iterating between a step to minimise the first term and promote the k-space data consistency according to the tested acquisition process, and another shrinkage step according to a regularisation term $\\phi$ to promote certain structural priors on $\\bm{x}$ to combat the ill-posedness of (1). In the Plug-and-Play (PnP) approach~\\cite{ref:venkatakrishnan2013}, the shrinkage term is an AWGN image denoiser $\\bm{f}$ that builds an implicit regularisation for (1). The denoiser model $\\bm{f}$ could be a trained convolutional neural network (CNN) like~\\cite{ref:ahmad2020} that captures the structural image priors for $\\bm{x}$ by removing generic gaussian noise from $\\bm{x}+\\bm{n}$, where $\\bm{n} \\sim\\mathcal{N}(0,\\sigma)$, and the noise power $\\sigma$ is an experimentally-chosen hyperparameter. 
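For concreteness, the resulting denoiser-driven reconstruction loop, made precise in the ADMM iterations displayed next, can be sketched as follows; here \\texttt{A}, \\texttt{AH} and \\texttt{denoise} are placeholder handles for the forward operator, its adjoint and the pre-trained CNN denoiser, and the code is illustrative only.

\\begin{verbatim}
import numpy as np

def conjugate_gradient(op, b, x0, n_iters=20):
    # standard CG for the Hermitian positive-definite system op(x) = b
    x = x0
    r = b - op(x)
    p, rs = r, np.vdot(r, r)
    for _ in range(n_iters):
        if abs(rs) < 1e-30:
            break
        Ap = op(p)
        alpha = rs / np.vdot(p, Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = np.vdot(r, r)
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def pnp_admm(y, A, AH, denoise, gamma=0.05, n_iters=100):
    v = AH(y)                         # v_0 = A^H y
    u = np.zeros_like(v)
    for _ in range(n_iters):
        z = v - u
        # data-consistency step: solve (A^H A + gamma I) x = A^H y + gamma z
        x = conjugate_gradient(lambda w: AH(A(w)) + gamma * w,
                               AH(y) + gamma * z, x0=z)
        v = denoise(x + u)            # plug-in denoising (shrinkage) step
        u = u + (x - v)               # dual update
    return v

# Toy usage with identity operators standing in for A, A^H and the CNN denoiser
y0 = np.ones(8, dtype=complex)
out = pnp_admm(y0, A=lambda z: z, AH=lambda z: z, denoise=lambda z: z, n_iters=5)
\\end{verbatim}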
\n\nWe use the following ADMM based iterations of the PnP algorithm~\\cite{ref:ahmad2020}: $\\bm{v}_0 =\\bm{A}^Hy$, $\\bm{u}_0=\\mathbf{0}$, $\\forall k, \\text{iteration number} =1,2,...\\!\\!\\!$ \n\\begin{subequations}\n\t\\begin{align}\n\t\\label{eq:pnp_admm_a}\n\t\\bm{x}_k & = \\bm{h} \\left( \\bm{v}_{k-1} - \\bm{u}_{k-1} \\right) \\\\\n\t\\label{eq:pnp_admm_b}\n\t\\bm{v}_k & = \\bm{f} \\left( \\bm{x}_{k} + \\bm{u}_{k-1} \\right) \\\\\n\t\\label{eq:pnp_admm_c}\n\t\\bm{u}_k & = \\bm{u}_{k-1} + \\left( \\bm{x}_{k} - \\bm{v}_{k} \\right)\n\t\\end{align}\n\\end{subequations}\nwhere\n\\begin{equation}\n\t\\label{eq:h}\n\t\\bm{h} \\left( \\bm{z} \\right) = \\arg\\min_{x} \\|\\bm{y}-\\bm{Ax}\\|_2^2\n\t + \\gamma \\|\\bm{x}-\\bm{z}\\|_2^2 \n\\end{equation}\nThe step \\eqref{eq:pnp_admm_a} enforces the k-space data consistency, where $ \\bm{h} $ is solved using the conjugate gradient algorithm. Here the ADMM's internal convergence parameter is set to $\\gamma=0.05$ and $ \\bm{z} = \\bm{v}_{k-1} - \\bm{u}_{k-1} $. The step \\eqref{eq:pnp_admm_b} applies the image denoising shrinkage to promote spatiotemporal TSMI priors, and the final step aggregates the two previous steps to update the next iteration. \n\n\n\\subsection{CNN denoiser}\n\\label{ssec:cnn_denoiser}\nOur PnP algorithm is combined with a pre-trained CNN denoiser which is plugged in as $ \\bm{f} $ in \\eqref{eq:pnp_admm_b} to iteratively restore the TMSI using learned image priors. The denoiser has a U-Net shape architecture following that of~\\cite{ref:zhang2021}. Here we modified the network's input and output dimensions to match the number of TSMI's multiple channels (i.e. we used real-valued TSMIs with $t=10$ in our experiments) to enable multichannel spatiotemporal image processing. A noise level map, filled with $\\sigma$ values, of the same dimensions as other channels was appended to the network's input for multi-noise level denoising, following ~\\cite{ref:zhang2021}. Other hidden layers follow exactly the same specifications as~\\cite{ref:zhang2021}. We trained this model using (multichannel) image patches extracted from $\\{(\\bm{x},\\bm{x}+\\bm{n})\\}$ pairs of clean and noise-contaminated TSMIs for various levels $\\sigma$ of AWGN noise. Further, image patches are patch-wise normalised to the [0,1] range. The clean TSMIs were obtained from a dataset of anatomical quantitative maps via (2). \n\n\n\\section{Numerical Experiments}\n\\label{sec:experiments}\n\n\\textbf{Dataset:} A dataset of quantitative T1, T2 and PD tissue maps of 2D axial brain scans of 8 healthy volunteers across 15 slices each were used in this study\\footnote{Data was obtained from a 3T GE scanner (MR750w system - GE Healthcare, Waukesha, WI) with 8-channel receive-only head RF coil, $230\\times230$ mm$^2$ FOV, $230\\times230$ image pixels, $5$ mm slice thickness, and used a FISP acquisition protocol with $T=1000$ repetitions, the same flip angles as~\\cite{jiang2015mr}, inversion time of 18 ms, repetition time of 10 ms and echo time of 1.8 ms. The groundtruth tissue maps were obtained by the LRTV algorithm~\\cite{ref:golbabaee2021-lrtv_mrfresnet}. }.\nClean (groundtruth) TSMIs were retrospectively simulated from these maps via~\\eqref{eq:magnetisation_response} using the EPG Bloch model formalism~\\cite{weigel2015extended} and a truncated (accelerated) variant of the FISP MRF protocol~\\cite{jiang2015mr} with $T=200$ repetitions. PCA was applied to obtain $t=10$ channel dimension-reduced TSMI data~\\cite{ref:mcgivney2014}. 
The TSMIs were real-valued and their spatial resolution was cropped \nfrom $ 230 \\times 230 $ pixels to $ 224 \\times 224 $ pixels for the U-Net. The dataset was split into 105 slices (from 7 subjects) for training and 15 slices (for 1 held-out subject) for testing.\n\\vspace{\\baselineskip}\n\\\\\n\\textbf{Training the CNN denoiser:} The TSMI training data was augmented by random resizing of the spatial resolution. \nThe CNN is a multichannel image-patch denoiser. For this we extracted patches of size $128\\times128\\times10$ with spatial strides of $17$ from the TSMIs. We augmented the patches by flipping across image axes and used random rotations. The patches were then [0,1] normalised. Random AWGN was then added during each iteration of the training algorithm to create pairs of clean and noisy TSMI patches. The CNN was trained following~\\cite{ref:zhang2021} to 500 epochs using L1 loss, Adam optimiser, batch size $16$, initialised weights using Kaiming uniform with no bias, and the learning rate initialised at $10^{-4}$ and halved every 100,000 iterations. Randomly selected levels of AWGN noise from $\\sigma=10^{\\{-4\\,\\text{to}\\,0\\}}$ were used for training the denoiser, following~\\cite{ref:zhang2021}. For the PnP algorithm, the denoiser was tested with five levels of AWGN noise $\\sigma=10^{\\{-4,-3,-2,-1,0\\}}$ from which $\\sigma=10^{-2}$ yielded the best result. A second denoiser trained solely on the optimum noise level found was utilised for our comparisons.\n\\vspace{\\baselineskip}\n\\\\\n\\textbf{Subsampling models:} We simulated two single-coil Cartesian acquisition processes with distinct k-space subsampling patterns: (i) a spiral readout as in~\\cite{ref:chen2020} with rotating patterns across $T=200$ repetitions was used to subsample the $224\\times224$ Cartesian FFT grid across all timeframes, and (ii) k-space multiple rows subsampling pattern i.e. the multi-shot Echo Planar Imaging (EPI)~\\cite{benjamin2019multi}, with shifted readout lines across the timeframes was used for MRF subsampling. \nBoth MRF acquisitions subsampled 771 k-space locations in each timeframe from a total of $224\\times224$, leading to a compression ratio of 65, and were contaminated with AWGN noise of $30$ dB SNR.\n\\vspace{\\baselineskip}\n\\input{.\/sections\/results_fig_tissue_maps_spiral.tex}\n\\input{.\/sections\/results_fig_tissue_maps_epi.tex}\n\\\\\n\\textbf{Tested reconstruction algorithms:} \nWe compared the performance of the proposed PnP-ADMM algorithm to the SVD-MRF~\\cite{ref:mcgivney2014} and LRTV~\\cite{ref:golbabaee2021-lrtv_mrfresnet} TSMI reconstruction baselines. All algorithms incorporated subspace dimensionality reduction ($t=10$). Further they can all adapt to the MRF subsampled acquisition process at testing time. SVD-MRF is a non-iterative approach based on zero-filling reconstruction $\\hat{\\bm{x}}=\\bm{A}^Hy$. The LRTV is an iterative approach to~\\eqref{eq:opt} based on an analytical Total Variation image (prior) regularisation for $\\phi$. The PnP approach uses data-driven image priors learned by the CNN denoiser. The PnP-ADMM ran for $100$ iterations, used $\\gamma=0.05$ and its conjugate gradient steps ran to the solution tolerance $10^{-4}$. The LRTV ran for $200$ iterations and used regularisation parameter $\\lambda=4\\times10^{-5}$.\n\\vspace{\\baselineskip}\n\\\\\n\\textbf{Quantitative mapping:} Dictionary matching was used for mapping the reconstructed TSMIs to the T1, T2 and PD images. 
For this an MRF dictionary of $94'777$ atoms (fingerprints) was created using the EPG Bloch response simulations for a logarithmically-spaced grid of $($T1, T2$)\\in[0.01, 6]\\times [0.004,0.6]$ (s). PCA was applied to compress this dictionary's (i.e. the Bloch responses) temporal dimension to a $t=10$ dimensional subspace~\\cite{ref:mcgivney2014}. The same subspace was used for the TSMIs dimensionality reduction and within all tested reconstruction algorithms.\n\\vspace{\\baselineskip}\n\\input{.\/sections\/results_table.tex}\n\\\\\n\\textbf{Evaluation Metrics:} For evaluation of the TSMI reconstruction performance, the Peak Signal-to-Noise-Ratio (PSNR) and Structural Similarity Index Measure (SSIM) averaged across all 10 temporal channels were used. To evaluate tissue maps, the Mean Absolute Error (MAE), PSNR and SSIM were used. Metrics were calculated for all 15 slices of the held-out test subject and averaged.\n\\vspace{\\baselineskip}\n\\\\\n\\textbf{Results and Discussion:} Fig.\\ref{fig:spiral_results}, Fig.\\ref{fig:epi_results} and Table.\\ref{tab:results_table}, shows PnP-ADMM utilising the same CNN denoiser, trained on generic AWGN, can apply to two different forward models with drastically different subsampling patterns. The output is consistent de-aliasing performance for time-series data and consequently the tissue maps (see supplementary material for a comparison of TSMIs).\n\nPnP-ADMM outperforms tested baselines subjectively (Fig.\\ref{fig:spiral_results}, Fig.\\ref{fig:epi_results}) and objectively (Table.\\ref{tab:results_table}) across all tested metrics, for both tested subsampling patterns. Recovering tissue maps from EPI subsampled data is observed to be generally more challenging than the spiral subsampling scheme (because the centre of k-space is densely sampled by spiral readouts), however, as observed the PnP-ADMM algorithm succeeds while other tested baselines fail. The key to the superior performance of PnP-ADMM lies in its ability to utilise spatiotemporal prior information related to the dataset. SVD-MRF utilises only temporal prior information through the use of PCA, while LRTV utilises generic prior information in the form of temporal priors through PCA and spatial priors through Total Variation. The use of dataset specific spatiotemporal priors, learned by a CNN denoiser, is the crux of PnP-ADMM's superior performance.\n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\nA proof-of-concept is proposed for a PnP-ADMM approach using deep CNN image denoisers for multi-parametric tissue property quantification using MRF compressed sampled acquisitions, which may be useful for cases where the measurement acquisition scheme is unknown during training for deep learning methods and\/or may change during testing. \nThe method was validated on simulated data and consistently outperformed the tested baselines. This was possible due to the use of data-driven spatiotemporal priors learnt by a pre-trained CNN denoiser, which were critical for enhancing the reconstruction of the TSMIs. Future work will include a variety of measurement acquisition settings, the use of non-gridded sampling trajectories and prospective in-vivo scans.\n\n\n\\section{Compliance with ethical standards}\n\\label{sec:ethics}\n\nThis research study was conducted retrospectively using anonymised human subject scans made available by GE Healthcare who obtained informed consent in compliance with the German Act on Medical Devices. 
Approval was granted by the Ethics Committee of The University of Bath (Date. Sept 2021 \/ No. 6568). \n\n\n\\section{Acknowledgments}\n\\label{sec:acknowledgments}\n\nCarolin M. Pirkl and Marion I. Menzel receive funding from the European Union's Horizon 2020 research and innovation programme, grant agreement No. 952172.\n\n\\bibliographystyle{IEEEbib}\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\@ifstar{\\origsection*}{\\@startsection{section}{1}\\z@{.7\\linespacing\\@plus\\linespacing}{.5\\linespacing}{\\normalfont\\scshape\\centering\\S}}{\\@ifstar{\\origsection*}{\\@startsection{section}{1}\\z@{.7\\linespacing\\@plus\\linespacing}{.5\\linespacing}{\\normalfont\\scshape\\centering\\S}}}\n\\def\\@startsection{section}{1}\\z@{.7\\linespacing\\@plus\\linespacing}{.5\\linespacing}{\\normalfont\\scshape\\centering\\S}{\\@startsection{section}{1}\\z@{.7\\linespacing\\@plus\\linespacing}{.5\\linespacing}{\\normalfont\\scshape\\centering\\S}}\n\\makeatother\n\n\\usepackage{amsmath,amssymb,amsthm}\n\\usepackage{mathrsfs}\n\\usepackage{mathabx}\\changenotsign\n\\usepackage{dsfont}\n\\usepackage{bbm}\n\n\\usepackage{xcolor}\n\\usepackage[backref=section]{hyperref}\n\\usepackage[ocgcolorlinks]{ocgx2}\n\\hypersetup{\n\tcolorlinks=true,\n\tlinkcolor={red!60!black},\n\tcitecolor={green!60!black},\n\turlcolor={blue!60!black},\n}\n\n\n\\usepackage[open,openlevel=2,atend]{bookmark}\n\n\\usepackage[abbrev,msc-links,backrefs]{amsrefs}\n\\usepackage{doi}\n\\renewcommand{\\doitext}{DOI\\,}\n\n\\renewcommand{\\PrintDOI}[1]{\\doi{#1}}\n\n\\renewcommand{\\eprint}[1]{\\href{http:\/\/arxiv.org\/abs\/#1}{arXiv:#1}}\n\n\n\\usepackage[T1]{fontenc}\n\\usepackage{lmodern}\n\\usepackage[babel]{microtype}\n\\usepackage[english]{babel}\n\n\\linespread{1.3}\n\\usepackage{geometry}\n\\geometry{left=27.5mm,right=27.5mm, top=25mm, bottom=25mm}\n\n\\numberwithin{equation}{section}\n\\numberwithin{figure}{section}\n\n\\usepackage{enumitem}\n\\def\\upshape({\\itshape \\roman*\\,}){\\upshape({\\itshape \\roman*\\,})}\n\\def\\upshape(\\Roman*){\\upshape(\\Roman*)}\n\\def\\upshape({\\itshape \\alph*\\,}){\\upshape({\\itshape \\alph*\\,})}\n\\def\\upshape({\\itshape \\Alph*\\,}){\\upshape({\\itshape \\Alph*\\,})}\n\\def\\upshape({\\itshape \\arabic*\\,}){\\upshape({\\itshape \\arabic*\\,})}\n\n\\let\\polishlcross=\\ifmmode\\ell\\else\\polishlcross\\fi\n\\def\\ifmmode\\ell\\else\\polishlcross\\fi{\\ifmmode\\ell\\else\\polishlcross\\fi}\n\n\\def\\ \\text{and}\\ {\\ \\text{and}\\ }\n\\def\\quad\\text{and}\\quad{\\quad\\text{and}\\quad}\n\\def\\qquad\\text{and}\\qquad{\\qquad\\text{and}\\qquad}\n\n\\let\\emptyset=\\varnothing\n\\let\\setminus=\\smallsetminus\n\\let\\backslash=\\smallsetminus\n\n\\makeatletter\n\\def\\mathpalette\\mov@rlay{\\mathpalette\\mov@rlay}\n\\def\\mov@rlay#1#2{\\leavevmode\\vtop{ \\baselineskip\\z@skip \\lineskiplimit-\\maxdimen\n\t\t\\ialign{\\hfil$\\m@th#1##$\\hfil\\cr#2\\crcr}}}\n\\newcommand{\\charfusion}[3][\\mathord]{\n\t#1{\\ifx#1\\mathop\\vphantom{#2}\\fi\n\t\t\\mathpalette\\mov@rlay{#2\\cr#3}\n\t}\n\t\\ifx#1\\mathop\\expandafter\\displaylimits\\fi}\n\\makeatother\n\n\\newcommand{\\charfusion[\\mathbin]{\\cup}{\\cdot}}{\\charfusion[\\mathbin]{\\cup}{\\cdot}}\n\\newcommand{\\charfusion[\\mathop]{\\bigcup}{\\cdot}}{\\charfusion[\\mathop]{\\bigcup}{\\cdot}}\n\n\n\\DeclareFontFamily{U} {MnSymbolC}{}\n\\DeclareSymbolFont{MnSyC} {U} {MnSymbolC}{m}{n}\n\\DeclareFontShape{U}{MnSymbolC}{m}{n}{\n\t<-6> MnSymbolC5\n\t<6-7> MnSymbolC6\n\t<7-8> MnSymbolC7\n\t<8-9> 
MnSymbolC8\n\t<9-10> MnSymbolC9\n\t<10-12> MnSymbolC10\n\t<12-> MnSymbolC12}{}\n\\DeclareMathSymbol{\\powerset}{\\mathord}{MnSyC}{180}\n\\DeclareMathSymbol{\\YY}{\\mathord}{MnSyC}{42}\n\n\n\\usepackage{tikz}\n\\usetikzlibrary{calc,decorations.pathmorphing}\n\\usetikzlibrary{arrows,decorations.pathreplacing}\n\\pgfdeclarelayer{background}\n\\pgfdeclarelayer{foreground}\n\\pgfdeclarelayer{front}\n\\pgfsetlayers{background,main,foreground,front}\n\n\\usepackage{multicol}\n\\usepackage{subcaption}\n\\captionsetup[subfigure]{labelfont=rm}\n\n\\let\\epsilon=\\varepsilon\n\\let\\eps=\\epsilon\n\\let\\phi=\\varphi\n\\let\\rho=\\varrho\n\\let\\theta=\\vartheta\n\\let\\wh=\\widehat\n\n\\def{\\mathds E}{{\\mathds E}}\n\\def{\\mathds N}{{\\mathds N}}\n\\def{\\mathds Q}{{\\mathds Q}}\n\\def{\\mathds G}{{\\mathds G}}\n\\def{\\mathds Z}{{\\mathds Z}}\n\\def{\\mathds P}{{\\mathds P}}\n\\def{\\mathds R}{{\\mathds R}}\n\\def{\\mathds C}{{\\mathds C}}\n\\def{\\mathds 1}{{\\mathds 1}}\n\\def<_{\\mathrm{lex}}{<_{\\mathrm{lex}}}\n\n\\newcommand{\\mathcal{A}}{\\mathcal{A}}\n\\newcommand{\\mathcal{B}}{\\mathcal{B}}\n\\newcommand{\\mathcal{C}}{\\mathcal{C}}\n\\newcommand{\\mathcal{E}}{\\mathcal{E}}\n\\newcommand{\\mathcal{F}}{\\mathcal{F}}\n\\newcommand{\\mathcal{G}}{\\mathcal{G}}\n\\newcommand{\\mathcal{H}}{\\mathcal{H}}\n\\newcommand{\\mathcal{I}}{\\mathcal{I}}\n\\newcommand{\\mathcal{J}}{\\mathcal{J}}\n\\newcommand{\\mathcal{K}}{\\mathcal{K}}\n\\newcommand{\\mathcal{L}}{\\mathcal{L}}\n\\newcommand{\\mathcal{N}}{\\mathcal{N}}\n\\newcommand{\\mathcal{P}}{\\mathcal{P}}\n\\newcommand{\\mathcal{Q}}{\\mathcal{Q}}\n\\newcommand{\\mathcal{R}}{\\mathcal{R}}\n\\newcommand{\\mathcal{S}}{\\mathcal{S}}\n\\newcommand{\\mathcal{T}}{\\mathcal{T}}\n\\newcommand{\\mathcal{X}}{\\mathcal{X}}\n\n\\newcommand{\\mathscr{A}}{\\mathscr{A}}\n\\newcommand{\\mathscr{B}}{\\mathscr{B}}\n\\newcommand{\\mathscr{S}}{\\mathscr{S}}\n\\newcommand{\\mathscr{C}}{\\mathscr{C}}\n\\newcommand{\\mathscr{P}}{\\mathscr{P}}\n\\newcommand{\\mathscr{U}}{\\mathscr{U}}\n\\newcommand{\\mathscr{W}}{\\mathscr{W}}\n\\newcommand{\\mathscr{X}}{\\mathscr{X}}\n\\newcommand{\\mathscr{Z}}{\\mathscr{Z}}\n\n\\newcommand{\\mathfrak{S}}{\\mathfrak{S}}\n\\newcommand{\\mathfrak{B}}{\\mathfrak{B}}\n\\newcommand{\\mathfrak{U}}{\\mathfrak{U}}\n\\newcommand{\\mathfrak{A}}{\\mathfrak{A}}\n\\newcommand{\\mathfrak{E}}{\\mathfrak{E}}\n\\newcommand{\\mathfrak{X}}{\\mathfrak{X}}\n\\newcommand{\\mathfrak{P}}{\\mathfrak{P}}\n\n\\newtheoremstyle{note} {4pt} {4pt} {\\sl} {} {\\bfseries} {.} {.5em} {}\n\\newtheoremstyle{introthms} {3pt} {3pt} {\\itshape} {} {\\bfseries} {.} {.5em} {\\thmnote{#3}}\n\\newtheoremstyle{remark} {2pt} {2pt} {\\rm} {} {\\bfseries} {.} {.3em} {}\n\n\\theoremstyle{plain}\n\\newtheorem{theorem}{Theorem}[section]\n\\newtheorem{lemma}[theorem]{Lemma}\n\\newtheorem{prop}[theorem]{Proposition}\n\\newtheorem{proposition}[theorem]{Proposition}\n\\newtheorem{constr}{Construction}\n\\newtheorem{conj}[theorem]{Conjecture}\n\\newtheorem{cor}[theorem]{Corollary}\n\\newtheorem{fact}[theorem]{Fact}\n\\newtheorem{claim}[theorem]{Claim}\n\n\n\\theoremstyle{note}\n\\newtheorem{dfn}[theorem]{Definition}\n\\newtheorem{definition}[theorem]{Definition}\n\\newtheorem{setup}[theorem]{Setup}\n\n\\theoremstyle{remark}\n\\newtheorem{remark}[theorem]{Remark}\n\\newtheorem{notation}[theorem]{Notation}\n\\newtheorem{exmpl}[theorem]{Example}\n\\newtheorem{question}[theorem]{Question}\n\n\n\\usepackage{lineno}\n\\newcommand*\\patchAmsMathEnvironmentForLineno[1]{\n\t\\expandafter\\let\\csname 
old#1\\expandafter\\endcsname\\csname #1\\endcsname\n\t\\expandafter\\let\\csname oldend#1\\expandafter\\endcsname\\csname end#1\\endcsname\n\t\\renewenvironment{#1}\n\t{\\linenomath\\csname old#1\\endcsname}\n\t{\\csname oldend#1\\endcsname\\endlinenomath}}\n\\newcommand*\\patchBothAmsMathEnvironmentsForLineno[1]{\n\t\\patchAmsMathEnvironmentForLineno{#1}\n\t\\patchAmsMathEnvironmentForLineno{#1*}}\n\\AtBeginDocument{\n\t\\patchBothAmsMathEnvironmentsForLineno{equation}\n\t\\patchBothAmsMathEnvironmentsForLineno{align}\n\t\\patchBothAmsMathEnvironmentsForLineno{flalign}\n\t\\patchBothAmsMathEnvironmentsForLineno{alignat}\n\t\\patchBothAmsMathEnvironmentsForLineno{gather}\n\t\\patchBothAmsMathEnvironmentsForLineno{multline}\n}\n\n\n\\usepackage{scalerel}\n\n\\makeatletter\n\\newcommand{\\overrighharpoonup}[1]{\\ThisStyle{%\n\t\t\\vbox {\\m@th\\ialign{##\\crcr\n\t\t\t\t\\rightharpoonupfill \\crcr\n\t\t\t\t\\noalign{\\kern-\\p@\\nointerlineskip}\n\t\t\t\t$\\hfil\\SavedStyle#1\\hfil$\\crcr}}}}\n\n\\def\\rightharpoonupfill{%\n\t$\\SavedStyle\\m@th\\mkern+0.8mu\\cleaders\\hbox{$\\shortbar\\mkern-4mu$}\\hfill\\rightharpoonuptip\\mkern+0.8mu$}\n\n\\def\\rightharpoonuptip{%\n\t\\raisebox{\\z@}[2pt][1pt]{\\scalebox{0.55}{$\\SavedStyle\\rightharpoonup$}}}\n\n\\def\\shortbar{%\n\t\\smash{\\scalebox{0.55}{$\\SavedStyle\\relbar$}}}\n\\makeatother\n\n\\let\\seq=\\overrighharpoonup\n\n\\makeatletter\n\\newcommand{\\overlefharpoonup}[1]{\\ThisStyle{%\n\t\t\\vbox {\\m@th\\ialign{##\\crcr\n\t\t\t\t\\leftharpoonupfill \\crcr\n\t\t\t\t\\noalign{\\kern-\\p@\\nointerlineskip}\n\t\t\t\t$\\hfil\\SavedStyle#1\\hfil$\\crcr}}}}\n\n\\def\\leftharpoonupfill{%\n\t$\\SavedStyle\\m@th\\mkern+0.8mu\\cleaders\\hbox{$\\shortbar\\mkern-4mu$}\\hfill\\leftharpoonuptip\\mkern+0.8mu$}\n\n\\def\\leftharpoonuptip{%\n\t\\raisebox{\\z@}[2pt][1pt]{\\scalebox{0.55}{$\\SavedStyle\\leftharpoonup$}}}\n\n\\let\\seqq=\\overlefharpoonup\n\n\\let\\lra=\\longrightarrow\n\\def\\mathrm{Red}{\\mathrm{Red}}\n\\def\\mathrm{Green}{\\mathrm{Green}}\n\\def\\mathrm{left}{\\mathrm{left}}\n\\def\\mathrm{right}{\\mathrm{right}}\n\\def\\mathrm{L}{\\mathrm{L}}\n\\def\\Ra\\mathrm{R}\n\n\n\\makeatletter\n\\newsavebox\\myboxA\n\\newsavebox\\myboxB\n\\newlength\\mylenA\n\n\\newcommand*\\xoverline[2][0.75]{%\n\t\\sbox{\\myboxA}{$\\m@th#2$}%\n\t\\setbox\\myboxB\\nul\n\t\\ht\\myboxB=\\ht\\myboxA%\n\t\\dp\\myboxB=\\dp\\myboxA%\n\t\\wd\\myboxB=#1\\wd\\mybox\n\t\\sbox\\myboxB{$\\m@th\\overline{\\copy\\myboxB}$\n\t\\setlength\\mylenA{\\the\\wd\\myboxA\n\t\\addtolength\\mylenA{-\\the\\wd\\myboxB}%\n\t\\ifdim\\wd\\myboxB<\\wd\\myboxA%\n\t\\rlap{\\hskip 0.5\\mylenA\\usebox\\myboxB}{\\usebox\\myboxA}%\n\t\\else\n\t\\hskip -0.5\\mylenA\\rlap{\\usebox\\myboxA}{\\hskip 
0.5\\mylenA\\usebox\\myboxB}%\n\t\\fi}\n\\makeatother\n\n\\DeclareSymbolFont{symbolsC}{U}{txsyc}{m}{n}\n\\SetSymbolFont{symbolsC}{bold}{U}{txsyc}{bx}{n}\n\\DeclareFontSubstitution{U}{txsyc}{m}{n}\n\\DeclareMathSymbol{\\strictif}{\\mathrel}{symbolsC}{74}\n\n\n\\DeclareSymbolFont{stmry}{U}{stmry}{m}{n}\n\\DeclareMathSymbol\\arrownot\\mathrel{stmry}{\"58}\n\\DeclareMathSymbol\\Arrownot\\mathrel{stmry}{\"59}\n\\def\\mathrel{\\mkern5.5mu\\arrownot\\mkern-5.5mu}{\\mathrel{\\mkern5.5mu\\arrownot\\mkern-5.5mu}}\n\\def\\mathrel{\\mkern5.5mu\\Arrownot\\mkern-5.5mu}{\\mathrel{\\mkern5.5mu\\Arrownot\\mkern-5.5mu}}\n\\def\\longarrownot\\longrightarrow{\\mathrel{\\mkern5.5mu\\arrownot\\mkern-5.5mu}\\longrightarrow}\n\\def\\Longarrownot\\Longrightarrow{\\mathrel{\\mkern5.5mu\\Arrownot\\mkern-5.5mu}\\Longrightarrow}\n\\DeclareMathOperator{\\ER}{ER}\n\n\n\\begin{document}\n\\dedicatory{Dedicated to the memory of Ronald Graham}\n\\title{On quantitative aspects of a canonisation theorem for edge-orderings}\n\n\\author[Chr.~Reiher]{Christian Reiher}\n\\address{Fachbereich Mathematik, Universit\\\"at Hamburg, Hamburg, Germany}\n\\email{Christian.Reiher@uni-hamburg.de}\n\\email{schacht@math.uni-hamburg.de}\n\\email{kevin.sames@uni-hamburg.de} \n\n\\author[V.~R\\\"{o}dl]{Vojt\\v{e}ch R\\\"{o}dl}\n\\address{Department of Mathematics,\nEmory University, Atlanta, USA}\n\\email{vrodl@emory.edu\n\\email{marcelo.tadeu.sales@emory.edu}\n\n\\thanks{The second author is supported by NSF grant DMS 1764385.}\n\n\\author[M.~Sales]{Marcelo Sales}\n\\thanks{The third author is partially supported by NSF grant DMS 1764385.}\n\n\\author[K.~Sames]{Kevin Sames}\n\n\n\n\n\\author[M.~Schacht]{Mathias Schacht}\n\\thanks{The fifth author is supported by the ERC (PEPCo 724903)}\n\n\\begin{abstract}\n\tFor integers $k\\ge 2$ and $N\\ge 2k+1$ there are $k!2^k$ {\\it canonical orderings} of \n\tthe edges of the complete $k$-uniform hypergraph with vertex set $[N] = \\{1,2,\\dots, N\\}$.\n\tThese are exactly the orderings with the property that any two subsets $A, B\\subseteq [N]$ \n\tof the same size induce isomorphic suborderings. We study the associated {\\it canonisation} problem \n\tto estimate, given $k$ and $n$, the least integer $N$ such that no matter how the $k$-subsets \n\tof $[N]$ are ordered there always exists an $n$-element set $X\\subseteq [N]$ whose $k$-subsets \n\tare ordered canonically. For fixed $k$ we prove lower and upper bounds on these numbers \n\tthat are $k$ times iterated exponential \n\tin a polynomial of $n$. \n\\end{abstract}\n\n\\maketitle\n\n\\@ifstar{\\origsection*}{\\@startsection{section}{1}\\z@{.7\\linespacing\\@plus\\linespacing}{.5\\linespacing}{\\normalfont\\scshape\\centering\\S}}{Introduction}\\label{sec:intro}\n\nWe consider numerical aspects of an unpublished result of Klaus Leeb from the early seventies\n(see~\\cites{NPRV85, NR17}). For a natural number $N$ we set $[N]=\\{1,2,\\dots,N\\}$. \nGiven a set $X$ and a nonnegative integer $k$ we write $X^{(k)}=\\{e\\subseteq X\\colon |e|=k\\}$ \nfor the set of all $k$-element subsets of $X$. \n\n\\subsection{Ramsey theory}\nRecall that for any integers $k, n, r\\ge 1$, Ramsey's theorem~\\cite{Ramsey30} informs us \nthat every sufficiently large integer $N$ satisfies the partition relation \n\\begin{equation}\\label{eq:1500}\n\tN\\longrightarrow (n)^k_r\\,,\n\\end{equation}\nmeaning that for every colouring of $[N]^{(k)}$ with $r$ colours there exists a set $X\\subseteq [N]$\nof size~$n$ such that~$X^{(k)}$ is monochromatic. 
The least number~$N$ validating~\\eqref{eq:1500}\nis denoted by~$\\mathrm{R}^{(k)}(n, r)$. The {\\it negative} partition relation $N\\longarrownot\\longrightarrow (n)^k_r$\nexpresses the fact that~\\eqref{eq:1500} fails. \n\n\nFor colourings $f\\colon [N]^{(k)}\\longrightarrow {\\mathds N}$ with infinitely many colours, however, \none can, in general, not even hope to obtain a monochromatic set of size $k+1$. \nFor instance, it may happen \nthat every $k$-element subset of $[N]$ has its own private colour. \nNevertheless, Erd\\H{o}s and Rado~\\cites{ER52, ER50}\nestablished a meaningful generalisation of Ramsey's theorem to colourings with infinitely many \ncolours, even if the ground sets stay finite. In this context it is convenient to regard such \ncolourings as equivalence relations, two $k$-sets being in the same equivalence class if and only \nif they are of the same colour. \nNow Erd\\H{o}s and Rado~\\cites{ER52, ER50} call an equivalence relation \non $[N]^{(k)}$ {\\it canonical} if for any two subsets $A, B\\subseteq [N]$ of the same size \nthe equivalence relations induced on~$A^{(k)}$ and~$B^{(k)}$ correspond to each other via \nthe order preserving map between $A$ and $B$. So, roughly speaking, an equivalence relation is \ncanonical if you cannot change its essential properties by restricting your attention to a subset. \nIt turns out that for~$N\\ge 2k+1$ there are always exactly $2^k$ canonical equivalence relations \non $[N]^{(k)}$ that can be parametrised by subsets~$I\\subseteq [k]$ in the following natural way. \nLet $x, y\\in [N]^{(k)}$ be two $k$-sets and let $x=\\{x_1, \\dots, x_k\\}$ \nas well as $y=\\{y_1, \\dots, y_k\\}$ enumerate their elements in increasing order. \nWe write $x\\equiv_I y$ if and only if $x_i=y_i$ holds for all~$i\\in I$.\nNow $\\{\\equiv_I\\colon I\\subseteq [k]\\}$ is the collection of all canonical equivalence \nrelations on~$[N]^{(k)}$. The {\\it canonical Ramsey theorem} of Erd\\H{o}s and Rado \nasserts that given two integers~$k$ and~$n$ there exists an integer $\\ER^{(k)}(n)$ such that \nfor every $N\\ge\\ER^{(k)}(n)$ and every equivalence relation~$\\equiv$ on~$[N]^{(k)}$ there exists a \nset $X\\subseteq [N]$ of size $n$ with the property that $\\equiv$ is canonical on~$X^{(k)}$. \n\nThis result sparked the development of {\\it canonical Ramsey theory} in the seventies. \nLet us commemorate some contributions to the area due to {\\sc Ronald Graham}, \nwho was among the main protagonists of Ramsey theory in the $20^{\\mathrm{th}}$ century: Together \nwith Erd\\H{o}s~\\cite{EG80}, he established a canonical version of van der Waerden's theorem, \nstating that if for $N\\ge \\mathrm{EG}(k)$ we colour $[N]$ with arbitrarily many colours, then there \nexists an arithmetic progression of length $k$ which is either monochromatic or receives~$k$ \ndistinct colours. With Deuber, Pr\\\"omel, \nand Voigt~\\cite{DGPV} he later obtained a canonical version of the Gallai-Witt theorem, \nwhich is more difficult to state --- in $d$ dimensions the canonical colourings \nfor a given configuration $F\\subseteq {\\mathds Z}^d$ are now parametrised by vector \nsubspaces $U\\subseteq {\\mathds Q}^d$ having a basis of vectors of the form $x-y$ with $x, y\\in F$.\nInterestingly, {\\sc Ronald Graham} was also among the small number of co-authors of Klaus \nLeeb~\\cite{GLR, GLK2} and in joint work with Rothschild they settled a famous conjecture of Rota \non Ramsey properties of finite vector spaces.\n\nThe canonisation discussed here concerns linear orderings. 
We say that a linear \norder $([N]^{(k)}, \\strictif)$ is {\\it canonical} if for any two sets $A, B\\subseteq [N]$\nof the same size the order preserving map from $A$ to $B$ induces an order preserving \nmap from $(A^{(k)}, \\strictif)$ to $(B^{(k)}, \\strictif)$. It turns out that for $N\\ge 2k+1$ \nthere are exactly $k!2^k$ canonical orderings of $[N]^{(k)}$ that can uniformly be parametrised by \npairs $(\\eps, \\sigma)$ consisting of a {\\it sign vector} $\\eps\\in \\{-1,+1\\}^k$ and a \npermutation $\\sigma\\in \\mathfrak{S}_k$, i.e., a bijective map $\\sigma\\colon [k]\\lra [k]$. \n\n\\begin{definition}\\label{def:canonical}\n\tLet $N, k\\ge 1$ be integers, and let $(\\eps, \\sigma)\\in \\{+1, -1\\}^k\\times \\mathfrak{S}_k$ be a pair \n\tconsisting of a sign vector $\\eps=(\\eps_1, \\dots, \\eps_k)$ and a permutation $\\sigma$ of $[k]$. \n\tThe ordering $\\strictif$ on~$[N]^{(k)}$ \n\t{\\it associated with $(\\eps, \\sigma)$} is the unique ordering with the property that \n\tif \n\t%\n\t\\[\n\t\t1\\le a_1< \\dots 2$, where $c$ is a positive constant. \nThey also proved $\\mathrm{R}^{(k)}(n, 4)\\ge t_{k-1}(c_k n)$ for all $k\\ge 2$, \nwhere $c_k>0$ is an absolute constant. In the other direction, it is known \nthat $\\mathrm{R}^{(k)}(n, r)\\le t_{k-1}(C_{k, r}n)$.\n\nEstimates on the canonical Ramsey numbers $\\ER^{(k)}(n)$ were studied in \\cites{LR95, Sh96}. \nNotably, Lefmann and R\\\"{o}dl proved the lower bound $\\ER^{(k)}(n)\\ge t_{k-1}(c_kn^2)$,\nwhile Shelah obtained the complementary upper bound $\\ER^{(k)}(n)\\le t_{k-1}(C_kn^{8(2k-1)})$.\n\nLet us now turn our attention to the Leeb numbers $\\mathrm{L}^{(k)}(n)$. For $k=1$, there exist\nonly two canonical orderings of $[N]^{(1)}$, namely the ``increasing'' and the ``decreasing''\none corresponding to the sign vectors $\\eps=(+1)$ and $\\eps=(-1)$, respectively. \nThus the well-known Erd\\H{o}s-Szekeres theorem~\\cite{ES35} yields the exact \nvalue $\\mathrm{L}^{(1)}(n)=(n-1)^2+1$. Accordingly, Leeb's theorem \ncan be viewed as a multidimensional version of the Erd\\H{o}s-Szekeres theorem and we refer \nto~\\cite{Lang} for further variations on this theme. Our own results can be summarised as follows.\n\n\\begin{theorem}\\label{thm:lower}\n\tIf $n\\ge 4$ and $R\\longarrownot\\longrightarrow (n-1)^2_2$, then $\\mathrm{L}^{(2)}(n)> 2^{R}$. Moreover, if $n\\ge k\\ge 3$, \n\tthen $\\mathrm{L}^{(k)}(4n+k) > 2^{\\mathrm{L}^{(k-1)}(n)-1}$. \n\\end{theorem}\n\nDue to the known lower bounds on diagonal Ramsey numbers this implies \n\\[\n\t\\mathrm{L}^{(2)}(n)\\ge t_2(n\/2)\n\t\\quad \\text {as well as } \\quad \n\t\\mathrm{L}^{(k)}(n)\\ge t_{k}(c_kn) \\text{ for } k\\ge 3\\,. \n\\]\nWe offer an upper bound with the same number of exponentiations.\n\n\\begin{theorem}\\label{thm:upper}\n\tFor every $k\\ge 2$ there exists a constant $C_k$ such that $\\mathrm{L}^{(k)}(n)\\le t_{k}(n^{C_k})$\n\tholds for every $n\\ge 2k$.\n\\end{theorem}\n\nThe case $k=2$ of Theorems~\\ref{thm:lower} and~\\ref{thm:upper} was obtained independently by Conlon, \nFox, and Sudakov in unpublished work~\\cite{Fox}.\n\n\\subsection*{Organisation} We prove Theorem~\\ref{thm:lower} in Section~\\ref{sec:comb}.\nThe upper bound, Theorem~\\ref{thm:upper}, is established in Section~\\ref{sec:upper}. \nLemma~\\ref{lem:2150} from this proof turns out to be applicable to Erd\\H{o}s-Rado \nnumbers as well and Section~\\ref{sec:ERS} illustrates this by showing a variant of \nShelah's result, namely $\\ER^{(k)}(n)\\le t_{k-1}(C_kn^{6k})$. 
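Before turning to the constructions we record a small computational illustration. The Python sketch below realises the ordering associated with a pair $(\\eps, \\sigma)$ as a sort key, using one concrete sign and permutation convention (an assumption on our part), and verifies by brute force for $k=2$ and $N=5$ that any two subsets of the same size induce isomorphic suborderings and that the $k!2^k$ orderings arising in this way are pairwise distinct.

\\begin{verbatim}
from itertools import combinations, permutations, product

def sort_key(eps, sigma):
    # compare k-sets (written as increasing tuples) at positions sigma(1),
    # sigma(2), ... in turn, increasingly where eps = +1, decreasingly where -1
    return lambda x: tuple(eps[j] * x[j] for j in sigma)

def induced_pattern(A, eps, sigma, k):
    subs = list(combinations(sorted(A), k))
    ranked = sorted(subs, key=sort_key(eps, sigma))
    return tuple(ranked.index(s) for s in subs)

N, k = 5, 2
orderings = set()
for eps in product((+1, -1), repeat=k):
    for sigma in permutations(range(k)):
        # canonicity: all subsets of a fixed size induce the same pattern
        for size in range(k, N + 1):
            pats = {induced_pattern(A, eps, sigma, k)
                    for A in combinations(range(1, N + 1), size)}
            assert len(pats) == 1
        orderings.add(tuple(sorted(combinations(range(1, N + 1), k),
                                   key=sort_key(eps, sigma))))
assert len(orderings) == 8   # k! * 2^k pairwise distinct orderings for N >= 2k+1
\\end{verbatim}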
\n\n\\@ifstar{\\origsection*}{\\@startsection{section}{1}\\z@{.7\\linespacing\\@plus\\linespacing}{.5\\linespacing}{\\normalfont\\scshape\\centering\\S}}{Lower bounds: Constructions}\\label{sec:comb}\n\n\\subsection{Trees and combs}\nOur lower bound constructions take inspiration from the negative stepping-up lemma due\nto Erd\\H{o}s, Hajnal, and Rado \\cite{EHR65}. Notice that in order to prove an inequality \nof the form $\\mathrm{L}^{(k)}(n)>N$ one needs to exhibit a special ordering $([N]^{(k)}, \\strictif)$\nfor which there is no set $Z\\subseteq [N]$ of size $n$ such that $\\strictif$ orders $Z^{(k)}$ \ncanonically. For us, $N=2^m$ will always be a power of two and for reasons of visualisability \nwe identify the numbers in $[N]$ with the leaves of a binary tree of height $m$. \nWe label the levels of our tree from the bottom to the top with the numbers in $[m]$. \nSo the root is at level $1$ and the leaves are immediately above \nlevel $m$ (see Figure~\\ref{fig:optimalline}).\n\n\\begin{figure}[h]\n\\centering\n{\\hfil \\begin{tikzpicture}[scale=1.0\n \\coordinate (A) at (0,0);\n \\coordinate (B) at (-2,1);\n \\coordinate (C) at (2,1);\n \\coordinate (D) at (-3,2);\n \\coordinate (E) at (-1,2);\n\t\\coordinate (F) at (1,2);\n \\coordinate (G) at (3,2);\n \\coordinate [label=above:$1$] (H1) at (-3.5,3);\n \\coordinate [label=above:$2$] (H2) at (-2.5,3);\n \\coordinate [label=above:$3$] (H3) at (-1.5,3);\n \\coordinate [label=above:$4$] (H4) at (-0.5,3);\n \\coordinate [label=above:$5$] (H5) at (0.5,3);\n \\coordinate [label=above:$6$] (H6) at (1.5,3);\n \\coordinate [label=above:$7$] (H7) at (2.5,3);\n \\coordinate [label=above:$8$] (H8) at (3.5,3);\n \n \\coordinate [label=left:$1$] (L1) at (-5,0);\n \\coordinate [label=left:$2$] (L2) at (-5,1);\n \\coordinate [label=left:$3$] (L3) at (-5,2);\n \\coordinate (R1) at (5,0);\n \\coordinate (R2) at (5,1);\n \\coordinate (R3) at (5,2);\n\n \\draw[line width=0.5] (C)--(A)--(B);\n \\draw[line width=0.5] (D)--(B)--(E);\n \\draw[line width=0.5] (F)--(C)--(G);\n \\draw[line width=0.5] (H1)--(D)--(H2);\n \\draw[line width=0.5] (H3)--(E)--(H4);\n \\draw[line width=0.5] (H5)--(F)--(H6);\n \\draw[line width=0.5] (H7)--(G)--(H8);\n \\draw[line width=0.5] (L1)--(L2)--(L3);\n \\draw[dashed] (L1)--(R1);\n \\draw[dashed] (L2)--(R2);\n \\draw[dashed] (L3)--(R3);\n \\draw (L1) circle [radius=0.05];\n \t\\draw (L2) circle [radius=0.05];\n \t\\draw (L3) circle [radius=0.05];\n \n \\end{tikzpicture}\\hfil}\n\\caption{The binary tree and its levels for $m=3$.}\n\\label{fig:optimalline}\n\\end{figure}\n\nAlternatively, we can identify $[N]$ with the set $\\{+1, -1\\}^m$ of all $\\pm1$-vectors of length~$m$\nsuch that the standard ordering on $[N]$ corresponds to the lexicographic one \non~$\\{+1, -1\\}^m$. The literature often works with the number $0$ instead of $-1$ here, but for the \napplication we have in mind our choice turns out to be advantageous. 
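This identification is easy to make explicit. The short Python sketch below converts a number in $[2^m]$ into the corresponding $\\pm1$-vector, taking $-1$ before $+1$ so that the standard order on $[N]$ becomes the lexicographic one, and computes the first level at which two leaves disagree, a quantity used repeatedly in what follows.

\\begin{verbatim}
def leaf_vector(a, m):
    # a in {1, ..., 2**m}  ->  (+1/-1)-vector of length m, lexicographic order
    bits = format(a - 1, f'0{m}b')
    return tuple(+1 if b == '1' else -1 for b in bits)

def first_difference(*vectors):
    # smallest level at which the given vectors do not all agree
    m = len(vectors[0])
    for mu in range(m):
        if len({v[mu] for v in vectors}) > 1:
            return mu + 1             # levels are numbered 1, ..., m
    return None                       # the vectors coincide

m = 3
assert [leaf_vector(a, m) for a in (1, 8)] == [(-1, -1, -1), (+1, +1, +1)]
assert first_difference(leaf_vector(2, m), leaf_vector(3, m)) == 2
\\end{verbatim}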
\n\nNow every set $X\\subseteq [N]$ generates a tree $T_X$, which is the subtree of our binary tree \ngiven by the union of the paths from the leaves in $X$ to the root (see Figure~\\ref{fig:2057}).\n\n\n\\begin{figure}[h]\n\\centering\n{\\hfil \\begin{tikzpicture}[scale=1.0\n \\coordinate (A) at (0,0);\n \\coordinate (B) at (-2,1);\n \\coordinate (C) at (2,1);\n \\coordinate (D) at (-3,2);\n \\coordinate (E) at (-1,2);\n\t\\coordinate (F) at (1,2);\n \\coordinate (G) at (3,2);\n \\coordinate [label=above:$1$] (H1) at (-3.5,3);\n \\coordinate [label=above:$2$] (H2) at (-2.5,3);\n \\coordinate [label=above:$3$] (H3) at (-1.5,3);\n \\coordinate [label=above:$4$] (H4) at (-0.5,3);\n \\coordinate [label=above:$5$] (H5) at (0.5,3);\n \\coordinate [label=above:$6$] (H6) at (1.5,3);\n \\coordinate [label=above:$7$] (H7) at (2.5,3);\n \\coordinate [label=above:$8$] (H8) at (3.5,3);\n \n \\coordinate [label=left:$1$] (L1) at (-5,0);\n \\coordinate [label=left:$2$] (L2) at (-5,1);\n \\coordinate [label=left:$3$] (L3) at (-5,2);\n \\coordinate (R1) at (5,0);\n \\coordinate (R2) at (5,1);\n \\coordinate (R3) at (5,2);\n\n \\draw[line width=2.5] (C)--(A)--(B);\n \\draw[line width=2.5] (D)--(B)--(E);\n \\draw[line width=0.5] (F)--(C);\n \\draw[line width=2.5] (G)--(C);\n \\draw[line width=0.5] (H1)--(D);\n \\draw[line width=2.5] (H2)--(D);\n \\draw[line width=2.5] (H3)--(E);\n \\draw[line width=0.5] (H4)--(E);\n \\draw[line width=0.5] (H5)--(F)--(H6);\n \\draw[line width=2.5] (H7)--(G);\n \\draw[line width=0.5] (H8)--(G);\n \\draw[line width=0.5] (L1)--(L2)--(L3);\n \\draw[dashed] (L1)--(R1);\n \\draw[dashed] (L2)--(R2);\n \\draw[dashed] (L3)--(R3);\n \\draw (L1) circle [radius=0.05];\n \t\\draw (L2) circle [radius=0.05];\n \t\\draw (L3) circle [radius=0.05];\n \t\\draw[fill] (H2) circle [radius=0.1];\n \t\\draw[fill] (H3) circle [radius=0.1];\n \t\\draw[fill] (H7) circle [radius=0.1];\n \t\\draw[fill] (B) circle [radius=0.1];\n \t\\draw[fill] (A) circle [radius=0.1];\n \n \\end{tikzpicture}\\hfil}\n\\caption{The auxiliary tree $T_X$ for $X=\\{2,3,7\\}$.}\n\\label{fig:2057}\n\\end{figure}\n\nAn essential observation on which both the double-exponential lower bound construction \nfor the four-colour Ramsey number of triples and our lower bound on $\\mathrm{L}^{(2)}(n)$ rely \nis that there are two essentially different kinds of triples: those engendering a ``left tree'' \nand those engendering a ``right tree''. Roughly speaking the difference is that when descending\nfrom the leaves to the root, the two elements concurring first are the two smaller \nones for left trees and the two larger ones for right trees. For instance, Figure~\\ref{fig:2057} \ndisplays a left tree.\n\nTo make these notions more precise we introduce the following notation. Given at least two distinct \nvectors $x_1, \\dots, x_t\\in \\{-1, +1\\}^m$ with coordinates $x_i=(x_{i1}, \\dots, x_{im})$ we write \n\\[\n\t\\delta(x_1, \\dots, x_t)=\\min\\{\\mu\\in [m]\\colon x_{1\\mu}= \\dots = x_{t\\mu} \\text{ is not the case}\\}\\,.\n\\]\nfor the first level where two of them differ. \n\nNow let $xyz\\in [N]^{(3)}$ be an arbitrary triple with $x\\delta(y, z)$ and a right tree whenever $\\delta(x, y)<\\delta(y, z)$. \n\nWhen the triples are coloured with $\\{\\mathrm{left}, \\mathrm{right}\\}$ depending on whether they form left\ntrees or right trees, the monochromatic sets are called {\\it combs}. 

More explicitly, for 
\[
	1\le x_1<\dots < x_t\le N
\]
the set $X=\{x_1, \dots, x_t\}$ is a {\it left comb} if 
$\delta(x_1, x_2)>\delta(x_2, x_3)>\dots >\delta(x_{t-1}, x_t)$
and a {\it right comb} if 
$\delta(x_1, x_2)<\delta(x_2, x_3)<\dots <\delta(x_{t-1}, x_t)$ (see Figure~\ref{fig:0001}). 
For instance, the empty set, every singleton, and every pair are both a left comb and a 
right comb. Since every 
triple is either a left tree or a right tree, triples are combs in exactly one of the two 
directions. Some quadruples fail to be combs. 

\begin{figure}[h]
\centering
{\hfil \begin{tikzpicture}[scale=0.7]
 \coordinate (R1) at (-3,0);
 \coordinate (R2) at (-3.5,1);
 \coordinate (R3) at (-4,2);
 \coordinate (R4) at (-4.5,3);
 \coordinate (R5) at (-5,4);
 \coordinate (R6) at (-4,4);
 \coordinate (R7) at (-3,4);
 \coordinate (R8) at (-2,4);
 \coordinate (R9) at (-1,4);
 \coordinate (H1) at (3,0);
 \coordinate (H2) at (3.5,1);
 \coordinate (H3) at (4,2);
 \coordinate (H4) at (4.5,3);
 \coordinate (H5) at (5,4);
 \coordinate (H6) at (4,4);
 \coordinate (H7) at (3,4);
 \coordinate (H8) at (2,4);
 \coordinate (H9) at (1,4);

 \draw[line width=1] (R1)--(R2)--(R3)--(R4)--(R5);
 \draw[line width=1] (R1)--(R9);
 \draw[line width=1] (R2)--(R8);
 \draw[line width=1] (R3)--(R7);
 \draw[line width=1] (R4)--(R6);

 \draw[line width=1] (H1)--(H2)--(H3)--(H4)--(H5);
 \draw[line width=1] (H1)--(H9);
 \draw[line width=1] (H2)--(H8);
 \draw[line width=1] (H3)--(H7);
 \draw[line width=1] (H4)--(H6);

 \draw[fill] (R1) circle [radius=0.1];
 \draw[fill] (R2) circle [radius=0.1];
 \draw[fill] (R3) circle [radius=0.1];
 \draw[fill] (R4) circle [radius=0.1];
 \draw[fill] (R5) circle [radius=0.1];
 \draw[fill] (R6) circle [radius=0.1];
 \draw[fill] (R7) circle [radius=0.1];
 \draw[fill] (R8) circle [radius=0.1];
 \draw[fill] (R9) circle [radius=0.1];
 \draw[fill] (H1) circle [radius=0.1];
 \draw[fill] (H2) circle [radius=0.1];
 \draw[fill] (H3) circle [radius=0.1];
 \draw[fill] (H4) circle [radius=0.1];
 \draw[fill] (H5) circle [radius=0.1];
 \draw[fill] (H6) circle [radius=0.1];
 \draw[fill] (H7) circle [radius=0.1];
 \draw[fill] (H8) circle [radius=0.1];
 \draw[fill] (H9) circle [radius=0.1];
 \end{tikzpicture}\hfil}
\caption{A left comb and a right comb.}
\label{fig:0001}
\end{figure}

\subsection{Ordering pairs}
Let us recall that for Ramsey numbers the negative stepping-up lemma shows, e.g., that if 
$m\longarrownot\longrightarrow (n-1)^2_2$, then $2^m\longarrownot\longrightarrow (n)^3_4$. As we shall reuse the argument, we provide a brief
sketch. Let $f\colon [2^m]^{(3)}\lra \{\mathrm{left}, \mathrm{right}\}$ be the colouring telling
us whether a triple generates a left tree or a right tree and 
let $g\colon [m]^{(2)}\lra \{+1, -1\}$ be a colouring exemplifying the negative partition 
relation $m\longarrownot\longrightarrow (n-1)^2_2$. Define a 
colouring 
\[
	h\colon [2^m]^{(3)}\lra \{\mathrm{left}, \mathrm{right}\}\times \{+1, -1\}
\]
of the triples with four colours by 
\[
	h(xyz)=\bigl(f(xyz), g\bigl(\delta(xy), \delta(yz)\bigr)\bigr)
\]
whenever $1\le x<y<z\le 2^m$. As soon as we know that no $n$-element set $Z\subseteq [2^m]$ has 
all of its triples of the same colour under $h$, the proof of $2^m\longarrownot\longrightarrow (n)^3_4$ will be complete. 
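
To see this, one can argue as follows. Suppose that $Z=\{z_1<\dots<z_n\}\subseteq [2^m]$ had all of 
its triples of the same colour under $h$. If, say, every triple of $Z$ forms a left tree, then the 
consecutive triples yield $\delta(z_1, z_2)>\dots>\delta(z_{n-1}, z_n)$, i.e., $Z$ is a left comb, 
and a moment's thought shows $\delta(z_i, z_j)=\delta(z_{j-1}, z_j)$ whenever $i<j$. So the constancy 
of the second coordinate of $h$ on $Z^{(3)}$ would render the $(n-1)$-element 
set $\{\delta(z_1, z_2), \dots, \delta(z_{n-1}, z_n)\}\subseteq [m]$ monochromatic with respect 
to $g$, contrary to the choice of $g$. The case that all triples of $Z$ are right trees is symmetric.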
\n\nLet us introduce one final piece of notation.\nGiven a vector $x=(x_1, \\dots, x_m)\\in \\{-1, +1\\}^m$ and~$\\delta\\in [m]$ we \ndefine the vector $x\\ltimes\\delta\\in\\{-1, +1\\}^{m-\\delta}$ by\n\\[\n\tx\\ltimes\\delta=\\bigl(x_{\\delta+1}g(\\delta, \\delta+1), \\dots, x_mg(\\delta, m)\\bigr)\\,.\n\\]\n\nNow let $\\strictif$ be an arbitrary ordering with the property that for any two distinct \npairs $xy$ and $x'y'$ with $xz_{r-1}$ \n\t\t\t\tis in $Z_\\nu\\cup A_\\nu$, then $f(z_1, \\dots, z_r)=f'(z_1, \\dots, z_{r-1})$.\n\t\t\\item\\label{it:52} If $i\\in I$, the numbers $z_1<\\dotsz_{s-1}$ is in $Z_\\nu\\cup A_\\nu$, \n\t\t\t\tthen $g_i(z_1, \\dots, z_{s})=g'_i(z_1, \\dots, z_{s-1})$.\n\t\t\\item\\label{it:53} If $i\\in I$, the numbers $z_1<\\dots0$ is always valid.\nLet us check that, conversely, in the permutation-definite case we can define a permutation $\\sigma$ \nhaving this property. \n\n\\begin{lemma}\\label{lem:1518}\n\tSuppose that $N\\ge 2k$. If the ordering $([N]^{(k)}, \\strictif)$ is both sign-definite \n\tand permutation-definite, then there exists a permutation $\\sigma\\in \\mathfrak{S}_k$ such that \n\t%\n\t\\[\n\t\t\\sigma_{ij}\\cdot \\bigl(\\sigma^{-1}(j)-\\sigma^{-1}(i)\\bigr)>0\n\t\\]\n\n\tholds whenever $1\\le i0$ holds.\n\\end{proof}\n\nThe remainder of this section deals with the question how much a sign-definite and \npermutation-definite ordering $([N]^{(k)}, \\strictif)$ with sign-vector $\\eps$ and \npermutation $\\sigma$ (obtained by means of Lemma~\\ref{lem:1518}) needs to have in \ncommon with the canonical ordering associated \nwith $(\\eps, \\sigma)$ in the sense of Definition~\\ref{def:canonical}. For instance, Lemma~\\ref{lem:1458} below asserts that under these circumstances there\nexists a dense subset $W\\subseteq [N]$ such that $(W^{(k)}, \\strictif)$ is ordered in this \ncanonical way.\n\nThe strategy we use for this purpose is the following. Suppose that $x, y\\in [N]^{(k)}$ \nare written in the form $x=\\{x_1, \\dots, x_k\\}$ and $y=\\{y_1, \\dots, y_k\\}$, \nwhere $x_1<\\dots\\max\\{x_{\\sigma(i)-1}, y_{\\sigma(i)-1}\\}$\n\t\\end{enumerate} \n\t%\n\thold, then the pair $(x, y)$ is $(\\eps, \\sigma)$-sound. \n\\end{lemma}\n\n\\begin{proof}\n\tWithout loss of generality we may assume $\\eps_{\\sigma(i)}=+1$. We are to prove \n\tthat $x\\strictif y$ and in all cases this will be accomplished by finding a $k$-set \n\t$z\\subseteq L$ such that the pairs~$(x, z)$ and~$(z, y)$ are $(\\eps, \\sigma)$-sound\n\tand $x\\strictif z\\strictif y$ holds. \n\t\n\tSuppose first that $\\min(I)<\\sigma(i)<\\max(I)$. \n\tDue to $a\\in (x_{\\sigma(i)}, y_{\\sigma(i)})\\subseteq (x_{\\sigma(i)-1}, y_{\\sigma(i)+1})$\n\tthe set $z=\\{x_1, \\dots, x_{\\sigma(i)-1}, a, y_{\\sigma(i)+1}, \\dots, y_k\\}$ has $k$ \n\telements that have just been enumerated in increasing order. Moreover $\\sigma(i)\\ne \\min(I)$\n\tyields $|\\Delta(x, z)|\\le t$ and, therefore, $(x, z)$ is indeed $(\\eps, \\sigma)$-sound, which\n\timplies $x\\strictif z$. Similarly, $\\sigma(i)\\ne \\max(I)$ leads to $z\\strictif y$.\n\t\n\tNext we suppose that $\\sigma(i)=\\min(I)$, which owing to the second bullet entails \n\t%\n\t\\[\n\t\ta<\\min\\{x_{\\sigma(i)+1}, y_{\\sigma(i)+1}\\}\\,.\n\t\\]\n\n\tLet $j\\in [k]$ be the second smallest index with $\\sigma(j)\\in I$. 
\n\t\n\t\\smallskip\n \t%\n\t\\begin{center}\n\t\\begin{tabular}{c|c|c}\n\n\tIf & and & then we set $z=$\\\\ \\hline \n\t\\rule{0pt}{3ex} $x_{\\sigma(j)}< y_{\\sigma(j)}$ & $\\sigma(j)<\\max(I)$ & \n\t\t$\\{x_1, \\dots, x_{\\sigma(i)-1}, a, x_{\\sigma(i)+1}, \\dots, x_{\\sigma(j)}, \n\t\t\ty_{\\sigma(j)+1}, \\dots, y_k\\}$,\\\\\n\t\\rule{0pt}{3ex} $x_{\\sigma(j)}< y_{\\sigma(j)}$ & $\\sigma(j)=\\max(I)$ & \n\t\t$\\{x_1, \\dots, x_{\\sigma(i)-1}, a, x_{\\sigma(i)+1}, \\dots, x_{\\sigma(j)-1}, \n\t\t\ty_{\\sigma(j)}, \\dots, y_k\\}$,\\\\\n\t\\rule{0pt}{3ex} $x_{\\sigma(j)}> y_{\\sigma(j)}$ & $\\sigma(j)<\\max(I)$ & \n\t\t$\\{x_1, \\dots, x_{\\sigma(i)-1}, a, y_{\\sigma(i)+1}, \\dots, y_{\\sigma(j)}, \n\t\t\tx_{\\sigma(j)+1}, \\dots, x_k\\}$,\\\\\n\t\\rule{0pt}{3ex} $x_{\\sigma(j)}> y_{\\sigma(j)}$ & $\\sigma(j)=\\max(I)$ &\n\t\t$\\{x_1, \\dots, x_{\\sigma(i)-1}, a, y_{\\sigma(i)+1}, \\dots, y_{\\sigma(j)-1}, \n\t\t\tx_{\\sigma(j)}, \\dots, x_k\\}$.\n\t\\end{tabular}\n\t\\end{center}\n\\medskip\n\nIn all four cases we have listed the elements of $z$ in increasing order. Furthermore, \nin almost all cases we have $|\\Delta(x, z)|\\le t$ and, consequently, $x\\strictif z$. \nIn fact, the only exception occurs if we are in the second case and $t=2$. But if this happens, \nthen \n\\[\n\tP=\\bigl(\\{x_1, \\dots, x_{\\sigma(i)-1}\\}, \\{x_{\\sigma(i)}, a\\}, \n\t\t\\{x_{\\sigma(i)+1}, \\dots, x_{\\sigma(j)-1}\\}, \\{x_{\\sigma(j)}, y_{\\sigma(j)}\\}, \n\t\t\\{y_{\\sigma(j)+1}, \\dots, y_k\\}\\bigr)\n\\]\nis an $\\sigma(i)\\sigma(j)$-decider. \nSince the ordering $\\strictif$ is permutation-definite, we know that it orders the four-element \nset $\\langle P\\rangle$ ``correctly'' and, as $x$, $z$ belong to this set, $x\\strictif z$ holds \nin this case as well. \n\nSimilarly, in almost all cases we have $\\Delta(y, z)\\le t$ and $y\\strictif z$, the only \nexception occurring if we are in the fourth case and $t=2$. Under these circumstances \n\\[\n\tQ=\\bigl(\\{x_1, \\dots, x_{\\sigma(i)-1}\\}, \\{a, y_{\\sigma(i)}\\}, \n\t\t\\{y_{\\sigma(i)+1}, \\dots, y_{\\sigma(j)-1}\\}, \\{x_{\\sigma(j)}, y_{\\sigma(j)}\\}, \n\t\t\\{x_{\\sigma(j)+1}, \\dots, x_k\\}\\bigr)\n\\]\nis an $\\sigma(i)\\sigma(j)$-decider and, as before, $y, z\\in \\langle Q\\rangle$ implies $y\\strictif z$.\n\nThis concludes our discussion of the case $\\sigma(i)=\\min(I)$ and it remains to deal with \nthe case $\\sigma(i)=\\max(I)$. Here one can either perform a similar argument, or one argues\nthat this case reduces to the previous one by reversion the ordering of the ground set. That\nis, one considers the ordering $\\bigl((-L)^{(k)}, \\strictif^\\star\\bigr)$ defined by \n\\[\n\tx\\strictif^\\star y \n\t\\,\\,\\, \\Longleftrightarrow \\,\\,\\,\n\t(-x) \\strictif (-y)\\,,\n\\]\nwhere $-L$ means $\\{-\\ell\\colon \\ell\\in L\\}$ and $-x$, $-y$ are defined similarly. \nIf one replaces $\\strictif$ by $\\strictif^\\star$, \nthen $I^\\star=(k+1)-I$ assumes the r\\^{o}le of $I$, minima correspond to maxima and the second and \nthird bullet in the assumption of our lemma are exchanged. \n\\end{proof}\n\nWe proceed with the existence of ``dense'' canonical subsets promised earlier. 
\n\n\\begin{lemma}\\label{lem:1458}\n\tIf $N\\ge 2k$ and the ordering $([N]^{(k)}, \\strictif)$ is both sign-definite \n\tand permutation-definite, then there exists a set $W\\subseteq [N]$ of size $|W|\\ge 2^{1-k}N$\n\tsuch that $\\strictif$ is canonical on~$W^{(k)}$.\n\\end{lemma}\n\n\\begin{proof}\n\tFor every $t\\in [k]$ we set \n\t%\n\t\\[\n\t\tW_t=\\bigl\\{n\\in [N]\\colon n\\equiv 1\\pmod{2^{t-1}}\\bigr\\}\\,.\n\t\\]\n\t%\n\tClearly $|W_k|\\ge 2^{1-k}N$, so it suffices to prove that $\\strictif$ is canonical on~$W_k^{(k)}$.\n\t\n\tDenote the sign-vector of $\\strictif$ by $\\eps=(\\eps_1, \\dots, \\eps_k)$ and let $\\sigma\\in \\mathfrak{S}_k$\n\tbe the permutation obtained from $\\strictif$ by means of Lemma~\\ref{lem:1518}. \n\tLet us prove by induction on $t\\in [k]$ that if two $k$-element subsets $x, y\\subseteq W_t$\n\tsatisfy $|\\Delta(x, y)|\\le t$, then $(x, y)$ is $(\\eps, \\sigma)$-sound. \n\t\n\tIn the base case $t=1$ we have $W_1=[N]$ and everything follows from our assumption \n\tthat $\\strictif$ be sign-definite. In the induction step from $t\\in [k-1]$ to $t+1$ \n\twe appeal to Lemma~\\ref{lem:1610}. Since no two elements of $W_{t+1}$ occur in consecutive \n\tpositions of $W_t$, the bulleted assumptions are satisfied and thus there are no problems \n\twith the induction step. \n\t\n\tSince $|\\Delta(x, y)|\\le k$ holds for all $k$-element subsets of $W_k$, the case $t=k$ of \n\tour claim proves the lemma. \n\\end{proof}\n\nNext we characterise the canonical orderings on $([N]^{(k)}, \\strictif)$\nfor $N\\ge 2k+1$. \n\n\\begin{lemma}\\label{lem:canonical}\n\tIf $N\\ge 2k+1$ and the ordering $([N]^{(k)}, \\strictif)$ is canonical, then there exists\n\ta pair $(\\eps, \\sigma)\\in \\{-1, +1\\}^k\\times \\mathfrak{S}_k$ such that $\\strictif$ is the ordering \n\tassociated with $(\\eps, \\sigma)$. \n\\end{lemma}\n\n\\begin{proof}\n\tLet $\\eps$ and $\\sigma$ denote the sign vector and the permutation of $\\strictif$ as introduced\n\tin \\S\\ref{subsec:32}. \n\tWe shall show that $\\strictif$ coincides with the canonical ordering associated to $(\\eps, \\sigma)$.\n\t\n\tOtherwise there exists a pair $(x, y)$ of subsets of $[N]$ that fails to be $(\\eps, \\sigma)$-sound.\n\tAssume that such a pair $(x, y)$ is chosen with $t+1=|\\Delta(x, y)|$ minimum. Since $\\strictif$ \n\tis sign-definite we know that $t\\in [k-1]$. Set $I=\\Delta(x, y)$ and let $i\\in [k]$ be the \n\tsmallest index with $\\sigma(i)\\in I$. By symmetry we may assume that $x_{\\sigma(i)} x_{\\sigma(i)}$. Now by Lemma~\\ref{lem:1610}\n\tthe pair $\\bigl(\\phi[x], \\phi[y]\\bigr)$ is $(\\eps, \\sigma)$-sound. Moreover, the canonicity of\n\t$\\strictif$ tells us that $\\big((x\\cup y)^{(k)}, \\strictif\\bigr)$ \n\tand $\\big(\\phi[x\\cup y]^{(k)}, \\strictif\\bigr)$ are isomorphic via $\\phi$. For these reasons,\n\tthe pair $(x, y)$ is $(\\eps, \\sigma)$-sound as well. \n\n\t\\smallskip\n\n\t{\\it \\hskip2em Second Case: $\\max(x\\cup y)=N$}\n\n\t\\smallskip\n \n \tLet $\\eta\\colon x\\cup y\\lra [|x\\cup y|]$ be order-preserving. In view of $|x\\cup y|\\le 2k0$ there exists a largest \n\tinteger $i(\\star) \\in [k]$ such that $a_{i(\\star)} \\ne b_{i(\\star)}$. \n\tApplying the assumption that $\\equiv$ be $i(\\star)$-purged to \n\t%\n\t\\begin{enumerate}\n\t\t\\item[$\\bullet$] $a_1<\\dots