diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzeehc" "b/data_all_eng_slimpj/shuffled/split2/finalzzeehc" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzeehc" @@ -0,0 +1,5 @@ +{"text":"\\section{Proving the Correctness of Await Model Checking}\\label{s:amc}\nIn this section we formally prove \\cref{thm:amc} to show that our AMC is correct. In order to formally prove this theorem, first we define a tiny concurrent assembly-like language in \\S\\ref{sec:lan}, which follows our Bounded-Length principle and allows us to map the execution graphs to the execution of instruction sequences. Then we formalize the Bounded-Effect principle in \\S\\ref{sec:vegas}. Finally we give the formal representation of \\cref{thm:amc} and prove it in \\S\\ref{sec:main}.\nThroughout this section we use the notation of \\cite{GenMC}.\n\n\\begin{theorem}[\\textbf{AMC Correctness}]\\label{thm:amc}\\em\n\tFor programs which satisfy {the Bounded-Length principle} and {the Bounded Effect principle}, 1) AMC terminates, 2) AMC detects every possible safety violation, 3) AMC detects every possible non-terminating await, and 4) AMC has no false positives.\n\\end{theorem}\n\n\n\\subsection{The tiny concurrent assembly-like language}\\label{sec:lan}\nTo have a formal foundation for these proofs we need to provide a formal programming language.\nWithout such a programming language, execution graphs just float in the air, detached from any program. \nUsing the consistency predicate $\\mathrm{cons}_M$ we can state that an execution graph can be generated by the weak memory model, but not whether it can be generated by a given program.\n\\begin{figure}[H]\n\t\\centering\n\t\\begin{tikzpicture}[\n\tnode distance = 9mm and 2mm,\n\n\tpo\/.style={->,thick, blue!50!black!50},\n\trf\/.style={->,thick, green!70!black},\n\tmo\/.style={->,thick, red},\n\tctrl\/.style={->,thick, dashed},\n\tev\/.style={rounded corners, fill=gray!10},\n\tevx\/.style={rounded corners, fill=red!70!black!40},\n\tevent\/.style={ev, anchor=west},\n\n\tthread\/.style={text=white, fill=black!70},\n\t]\n\t\\node[ev] (l) {$W_\\mathit{init}$ {\\tt x, 0}};\n\t\n\t\\node[below left=9mm and -4mm of l,ev] (t1 init next) {$R_\\barrier{rlx}$ {\\tt x, 0}};\n\t\\node[below=of t1 init next.west,event] (t1 init lock) {$R_\\barrier{rlx}$ {\\tt x, 0}};\n\t\\node[below=of t1 init lock.west,event] (t1 xchg read) {$R_\\barrier{rlx}$ {\\tt x, 0}};\n\t\\node[below=of t1 xchg read.west,event] (t1 xchg tail) {$R_\\barrier{rlx}$ {\\tt x, 0}};\n\t\\node[below right=9mm and 3mm of t1 xchg tail.west,event] (t1 write next) {\\vdots};\n\t\n\t\\draw[po] ($(t1 init next.south west)+(5mm,0)$) -- ($(t1 init lock.north west)+(5mm,0)$);\n\t\\draw[po] ($(t1 init lock.south west)+(5mm,0)$) -- ($(t1 xchg read.north west)+(5mm,0)$);\n\t\\draw[po] ($(t1 xchg read.south west)+(5mm,0)$) -- ($(t1 xchg tail.north west)+(5mm,0)$);\n\t\\draw[po] ($(t1 xchg tail.south west)+(5mm,0)$) -- ($(t1 write next.north west)+(2mm,0)$);\n\t\n\n\n\n\n\n\n\t\n\t\n\t\\draw[rf] (l) to[bend right=-50]\n\tnode[midway, fill=white, font=\\footnotesize] {rf}\n\t(t1 init next.east);\n\t\n\t\\draw[rf] (l) to[bend right=-50]\n\tnode[midway, fill=white, font=\\footnotesize] {rf}\n\t(t1 init lock.east);\n\t\n\t\\draw[rf] (l) to[bend right=-50]\n\tnode[midway, fill=white, font=\\footnotesize] {rf}\n\t(t1 xchg read.east);\n\t\n\t\\draw[rf] (l) to[bend right=-50]\n\tnode[midway, fill=white, font=\\footnotesize] {rf}\n\t(t1 xchg tail.east);\n\t\n\t\\node[thread, above=2mm of t1 init next] 
{Thread 1};\n\t\end{tikzpicture}\n\t\caption{Divergent execution graph} \label{non-terminating execution}\n\end{figure}\nFor example, the non-terminating execution graph in \cref{non-terminating execution}, consisting only of reads from the initial store, is consistent with all standard memory models, but is obviously irrelevant to most programs.\nIf we only had to decide whether a non-terminating execution graph is consistent with the memory model or not, we could simply always return \lstinline[language=C,style=mainexample]|true| and be done with it.\nWhat we really want to decide is whether a given program (which satisfies our two principles) has a non-terminating execution graph which is consistent with the memory model.\nFor this purpose we define in this section a tiny assembly-like programming language, and define whether a program $P$ can generate an execution graph $G$ through a new consistency predicate $\mathrm{cons}^P(G)$.\nThis requires formally defining an execution-graph driven semantics for the language, in which threads are executed in isolation using values provided by an execution graph.\nAfter we define the programming language, we formally define the Bounded-Effect principle.\nWe then show that if the Bounded-Effect principle is satisfied, we can always remove one failed iteration of an await from a graph without making the graph inconsistent with the program.\nThis will allow us to show that graphs in $\mathbb G^F$ can always be ``trimmed'' to a graph in \ensuremath{\mathbb G_\ast^F}{} which has the same error events, and thus all safety violations are detected by AMC.\nNext we show that we can also add failed iterations of an await; indeed, for graphs in \ensuremath{\mathbb G_\ast^\infty}{} we can add infinitely many such iterations.\nThus these graphs can be ``extended'' to graphs in $\mathbb G^\infty$ which are consistent with the program.\nThis implies that there are no false positives.\nFinally we show that due to the Bounded-Length principle and the Bounded-Effect principle, graphs in \ensuremath{\mathbb G_\ast^F}{} and \ensuremath{\mathbb G_\ast^\infty}{} have a bounded number of failed iterations of awaits, and the remaining steps are bounded as well.\nThus the search space itself must be finite, and AMC always terminates.\n\lstdefinelanguage{Lambda}{%\n\tmorekeywords={%\n\t\tif,then,else,fix,match,await,step,with\n\t},%\n\tmorekeywords={[2]int}, \n\totherkeywords={:},\n\tliterate=\n\t\t{->}{{$\to$}}{2}\n\t\t{lambda}{{$\lambda$}}{1}\n\t\t{rlx}{{$^\barrier{rlx}$}}{2}\n\t\t{|}{{$\big|$}}{1}\n\t\t{sigma}{{$\sigma$}}{1}\n\t},\n\tbasicstyle={\sffamily},\n\tkeywordstyle={\bfseries},\n\tkeywordstyle={[2]\itshape},\n\tkeepspaces,\n\tcolumns=flexible,\n\tmathescape\n}[keywords,comments,strings]%\n\n\n\newcommand{m}{m}\n\newcommand{e}{e}\n\newcommand{\sigma}{\sigma}\n\newcommand\dowhile[2]{\lstinline[language=Lambda]|await(#1,#2)|}\n\newcommand\loopcons{\kappa}\n\newcommand\inst[2]{\lstinline[language=Lambda]|step(#1,#2)|}\n\newcommand{\epsilon}{\epsilon}\n\newcommand{\delta}{\delta}\n\newcommand{\mu}{\mu}\n\begin{figure}[H]\n\t\begin{align*}\n\t\textit{(Program)} \ P &:= T_0 \parallel \ldots \parallel \ T_i \parallel\ldots \ \parallel T_n && \text{Composition of parallel threads $T_i$} \\\n\t\textit{(Thread)} \ T &:= S_0 ; \ \ldots ; \ S_n && \text{Sequence of statements $S_i$} \\\n\t\textit{(Stmts)} \ S &:= \dowhile{$n$}{$\loopcons$} \mid \inst{$\epsilon$}{$\delta$}
&& \text{Await-loop and non-await-loop steps.}\n\t\\ \midrule \n\t\textit{(LoopCon)} \ \loopcons &\in \mathit{State} \to \Set{0,1} && \text{Loop condition}\n\t\\ \textit{(EvtGen)} \ \epsilon &\in \mathit{State} \to \mathit{Events} && \text{Event generator}\n\t\\ \textit{(StTrans)} \ \delta &\in \mathit{State} \times \mathit{Value}? \to \mathit{Update} && \text{State transformer}\n\t\\ \midrule\n\t\textit{(Events)} \ e &:= R^m(x) \mid W^m(x,v) \mid F^m \mid E && \text{Read, write, fence, error events. $x \in \mathit{Location}$, $v \in \mathit{Value}$.}\n\t\\ \textit{(Modes)} \ m &:= \barrier{rlx} \mid \barrier{rel} \mid \barrier{acq} \mid \barrier{sc} && \text{Barrier modes}\n\t\\ \midrule \n\t\textit{(State)}\ \sigma & \in \mathit{Register} \to \mathit{Value} && \text{Set of thread-local states.}\n\t\\\n\t\textit{(Update)}\ \mu & \in \mathit{Register} \rightharpoonup \mathit{Value} && \text{Updated register values.}\n\t\\\n\t\textit{(Register)} & \quad \ldots && \text{Set of thread-local registers.}\n\t\\ \textit{(Location)} & \quad \ldots&& \text{Set of shared memory locations.}\n\t\\ \textit{(Value)} & \quad \ldots && \text{Set of possible values of registers and memory locations.}\n\t\end{align*}\n\t\caption{Compact Syntax and Types of our Language}\label{syntax summary}\n\end{figure}\n\n\begin{figure}[H]\n\t\centering\n\t\begin{tabular}{lp{0.05\textwidth}l}\n\t\tC-like program && Our toy language \\\n\t\t{\lstset{showlines=true,}\n\t\t\t\begin{lstlisting}\n\t\t\tx = r1;\n\t\t\tr1 = y;\n\t\t\tif (r1 == 0)\n\t\t\tr2 = x;\n\t\t\t\n\t\t\t\n\t\t\t\n\t\t\t\n\t\t\t\end{lstlisting}}\n\t\t&&\n\t\t\begin{lstlisting}[language=Lambda]\n\t\tstep(lambda sigma. Wrlx(x, sigma(r1)), lambda sigma _. [ ]);\n\t\tstep(lambda sigma. Rrlx(y), lambda sigma v. [r1 -> v]);\n\t\tstep(lambda sigma. match sigma(r1) with\n\t\t| 0 -> Rrlx(x)\n\t\t| _ -> Frlx,\n\t\tlambda sigma v. match sigma(r1) with\n\t\t| 0 -> [r2 -> v]\n\t\t| _ -> [ ]) \n\t\t\end{lstlisting}\n\t\end{tabular}\n\t\caption{Using lambda functions inside \lstinline[language=Lambda]|step| to implement different control paths} \label{example control flow}\n\end{figure}\n\n\n\subsubsection{Syntax}\nRecall that the Bounded-Length principle requires that the number of steps inside await loops and the number of steps outside awaits are bounded.\nWe define a tiny concurrent assembly-like language which represents such programs, but not programs that violate the Bounded-Length principle.\nThe purpose of this language is to allow us to prove things easily, not to conveniently program in it.\nThus instead of a variety of statements, we consider only two statements: await loops (\lstinline[language=Lambda]|await|) and event generating instructions (\lstinline[language=Lambda]|step|).\nThe syntax and types of our language are summarized in \cref{syntax summary}. The event generating instructions \lstinline[language=Lambda]|step| use a pair of lambda functions to generate events and to modify the thread-local state.\nThus the execution of the steps yields a sequence $\sigma^{(t)}$ of thread-local states.
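As a first intuition (the update operator $\ll$ and the read results are defined formally in the semantics below), a \lstinline[language=Lambda]|step($\epsilon$,$\delta$)| executed in state $\sigma^t$ first emits the event $\epsilon(\sigma^t)$ and then applies the register update computed by $\delta$\n\[ \sigma^{t+1} \ = \ \sigma^t \ll \delta(\sigma^t, v^t) \]\nwhere $v^t$ is the value returned by the event in case it is a read, and $\bot$ otherwise.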
We illustrate this with the small example in \cref{example control flow}, which we execute starting with the (arbitrarily picked for demonstrative purposes) thread-local state\n\[ \sigma^0(r) = \begin{cases} 5 & r = \lstinline[language=Lambda]|r1| \\ 0 & r = \lstinline[language=Lambda]|r2| \end{cases} \]\nin which the value of \lstinline[language=Lambda]|r1| is 5 and the value of \lstinline[language=Lambda]|r2| is 0.\nThe first instruction of the program evaluates \lstinline[language=Lambda]|lambda sigma. Wrlx(x, sigma(r1))| on $\sigma^0$ to determine which event (if any) should be generated by this instruction. In this case, the generated event is\n\[ W^\barrier{rlx}(\lstinline[language=Lambda]|x|,\sigma^0(\lstinline[language=Lambda]|r1|)) = W^\barrier{rlx}(\lstinline[language=Lambda]|x|,5) \]\nwhich writes 5 to variable \lstinline[language=Lambda]|x|. Next, the function \lstinline[language=Lambda]|lambda sigma _. [ ]| is evaluated on $\sigma^0$ to determine the set of changed registers and their new values in the next thread-local state $\sigma^1$. This function takes a second parameter which represents the value returned by the generated event in case the generated event is a read. Because the event in this case is not a read, no value is returned, and the function simply ignores the second parameter. The empty list \lstinline[language=Lambda]|[ ]| indicates that no registers should be updated. Thus\n\[ \sigma^1 = \sigma^0 \]\nand execution proceeds with the next instruction. The second instruction generates the read event\n\[ R^\barrier{rlx}(y) \]\nwhich reads the value of variable \lstinline[language=Lambda]|y|. Assume for the sake of demonstration that this read (\emph{e.g.}, due to some other thread not shown here) returns the value 8.\nNow the function \lstinline[language=Lambda]|lambda sigma v. [r1 -> v]| is evaluated on $\sigma^1$ and $v=8$. The result \lstinline[language=Lambda]|[r1 -> 8]| indicates that the value of \lstinline[language=Lambda]|r1| should be updated to 8, and the next state $\sigma^2$ is computed as\n\[ \sigma^2(r) = \begin{cases} 8 & r = \lstinline[language=Lambda]|r1| \\ \sigma^1(r) & \text{o.w.} \end{cases} \]\nIn this state, the third instruction is executed. Because in $\sigma^2$, the value of \lstinline[language=Lambda]|r1| is not 0, the \lstinline[language=Lambda]|match| goes to the second case, in which no event is generated (indicated by \lstinline[language=Lambda]|Frlx|, \emph{i.e.}, a relaxed fence which acts as a NOP). Thus again there is no read result $v$, and the next state $\sigma^3$ is computed simply as\n\[ \sigma^3 = \sigma^2 \]\n\n\n\begin{figure}[H]\n\centering\n\begin{tabular}{lp{0.04\textwidth}ll}\n\toprule \nnormal code && \multicolumn{2}{l}{encoding in toy language} \\\n&& event generators & state transformers\n\\\midrule\n{\lstset{showlines=true,}\n\begin{lstlisting}\nr1 = x;\n\end{lstlisting}}\n&&\n{\lstset{showlines=true,}\n\begin{lstlisting}[language=Lambda]\nlambda _. Rrlx(x)\n\end{lstlisting}}\n&\n{\lstset{showlines=true,}\n\begin{lstlisting}[language=Lambda]\nlambda _ v. [r1 -> v]\n\end{lstlisting}}\n\\\midrule\n{\lstset{showlines=true,}\n\begin{lstlisting}\ny = r1+2;\n\end{lstlisting}}\n&&\n{\lstset{showlines=true,}\n\begin{lstlisting}[language=Lambda]\nlambda sigma. Wrlx(y, sigma(r1)+2)\n\end{lstlisting}}\n&\n{\lstset{showlines=true,}\n\begin{lstlisting}[language=Lambda]\nlambda _ _.
[ ]\n\end{lstlisting}}\n\\\midrule\n{\lstset{showlines=true,}\n\begin{lstlisting}\nif (x==1){\n\ty = 2;\n\tr2 = z;\n} else {\n\tr2 = z;\n\ty = r2;\n}\n\n\n\end{lstlisting}}\n&&\n{\lstset{showlines=true,}\n\begin{lstlisting}[language=Lambda]\nlambda _. Rrlx(x)\nlambda sigma. match sigma(r1) \n with \n | 1 -> Wrlx(y, 2)\n | _ -> Rrlx(z)\nlambda sigma. match sigma(r1)\n with\n | 1 -> Rrlx(z)\n | _ -> Wrlx(y, sigma(r2))\n\end{lstlisting}}\n&\n{\lstset{showlines=true,}\n\begin{lstlisting}[language=Lambda]\nlambda _ v. [r1 -> v]\nlambda sigma v. match sigma(r1) \n with\n | 1 -> [ ]\n | _ -> [r2 -> v]\nlambda sigma v. match sigma(r1) \n with\n | 1 -> [r2 -> v]\n | _ -> [ ]\n\end{lstlisting}}\n\\\midrule\n\begin{lstlisting}\nfor (r1 = 0; r1 < 3; r1++)\n{\n\tx = r1;\n}\n\end{lstlisting} \n&&\n{\lstset{showlines=true,}\n\begin{lstlisting}[language=Lambda]\nlambda sigma. Wrlx(x, sigma(r1))\nlambda sigma. Wrlx(x, sigma(r1))\nlambda sigma. Wrlx(x, sigma(r1))\n\n\end{lstlisting}}\n&\n{\lstset{showlines=true,}\n\begin{lstlisting}[language=Lambda]\nlambda sigma _. [r1 -> sigma(r1)+1]\nlambda sigma _. [r1 -> sigma(r1)+1]\nlambda sigma _. [r1 -> sigma(r1)+1]\n\n\end{lstlisting}}\n\\\bottomrule\n\end{tabular}\n\caption{Example Encodings of Language Constructs as Event-generator\/State-transformer Pairs} \label{fig:example encodings}\n\end{figure}\n\n\begin{figure}[h]\n\centering\n\begin{tabular}{lp{0.05\linewidth}l}\n\begin{lstlisting}\ndo_awaitwhile({\n r1 = y;\n},x==1)\n\n\end{lstlisting} \n&&\n{\lstset{showlines=true,}\n\begin{lstlisting}[language=Lambda]\nstep(lambda _. Rrlx(y), lambda _ v. [r1 -> v]);\nstep(lambda _. Rrlx(x), lambda _ v. [r2 -> v]);\nawait(2, lambda sigma. sigma(r2) == 1)\n\end{lstlisting}}\n\end{tabular}\n\caption{Encoding of Do-Await-While as a Program in our Language} \label{fig:await example}\n\end{figure}\n\n\nNote that each thread's program text is finite, and the only allowed loops are awaits.\nThus programs with infinite behaviors or unbounded executions (which violate the Bounded-Length principle) cannot be represented in this language.\n\nEach statement generates up to one event that depends on a thread-local state, and modifies the thread-local state based on the previous state and (in case a read event was generated) the result of the read.\nThis is encoded using two types of lambda functions: the \emph{event generators} that map the current state to an event (possibly $F^\barrier{rlx}$), \emph{i.e.}, have type\n\[ \mathit{State} \rightarrow \mathit{Event} \]\nand the \emph{state transformers} that map the current state and possibly a read result to an update to the thread-local state representing the new values of all the registers that are changed by the instruction, \emph{i.e.}, have type \n\[ \mathit{State} \times \mathit{Value}? \rightarrow \mathit{Update} \]\nHere $T?$ is a so-called \emph{option} type, which is similar to the nullable types of C\#: each value $v \in T?$ is either a value of $T$ or $\bot$ (standing for ``none'' or ``null''):\n\[ v \in T? \iff v \in T \lor v = \bot \]\n\nAwait loops are the only control construct in our language. Apart from awaits, the control of the thread only moves forward, one statement at a time. \nDifferent conditional branches are implemented through internal logic of the event generating instructions: the state keeps track of the active branch in the code, and the event generator and state transformer functions do a case split on this state.
\nBounded loops have to be unrolled. See \cref{fig:example encodings}.\n\nWe formalize the syntax of the language. There is a fixed, finite set of threads $\mathcal T$ and each thread $T \in \mathcal T$ has a finite program text $P_T$ which is a sequence of statements.\nWe denote the $k$-th statement in the program text of thread $T$ by $P_T(k)$. \nA statement is either an event generating instruction or a do-await-while statement. \nWe assume that the sets of registers, values, and locations are all finite.\n\n\paragraph{Event Generating Instruction}\nAn event generating instruction has the syntax\n\n\begin{minipage}{\linewidth}\n\centering\t\lstinline[language=Lambda]|step($\epsilon$,$\delta$)|\n\end{minipage}\nwhere \n\[ \epsilon : \mathit{State} \rightarrow \mathit{Event} \]\nis an event generator and \n\[ \delta : \mathit{State} \times \mathit{Value}? \rightarrow \mathit{Update} \]\nis a state transformer.\nNote that the event generating instruction is essentially a pair of the two functions $\epsilon$ and $\delta$. When the statement is executed in a thread-local state $\sigma \in \mathit{State}$, we first evaluate $\epsilon(\sigma)$ to determine which event is generated. If this event is a read, it returns a value $v$ (defined based on reads-from edges in an execution graph $G$), which is then passed to $\delta$ to compute the new values for updated registers in the update $\delta(\sigma,v)$. The next state is then defined by taking all new values from $\delta(\sigma,v)$ and the remaining (unchanged) values from $\sigma$.\n\n\n\paragraph{Do-Await-While}\nA do-await-while statement has the syntax\n\n\begin{minipage}{\linewidth}\n\centering\t\lstinline[language=Lambda]|await($n$,$\loopcons$)|\n\end{minipage}\nwhere $n \in \Set{0,1,2,\ldots}$ is the number of statements in the loop, and the loop condition $\loopcons : \mathit{State} \rightarrow \Set{0,1}$ is a predicate over states telling us whether we must stay in the loop. \nIf this statement is executed in thread-local state $\sigma$, we first evaluate $\loopcons(\sigma)$.
\nIn case $\loopcons(\sigma)$ evaluates to true, the control jumps back $n$ statements, thus repeating the loop; otherwise it moves one statement ahead, thus exiting the loop.\n\n\paragraph{Syntactic Restriction of Awaits}\n\nWe add two syntactic restrictions on awaits: 1) no nesting of awaits and 2) an await which jumps back $n$ statements needs to be at least at position $n$ in the program.\n\[ P_T(k) = \t\lstinline[language=Lambda]|await($n$,_)| \quad\to\quad n \le k \ \land \ \forall k' \in [k-n:k).\ P_T(k') \not= \lstinline[language=Lambda]|await(_,_)| \]\nThese restrictions will allow us to easily identify the steps in an iteration of an await as the steps executing the statements in the range $[k-n:k]$.\n\n\begin{figure}[t]\n\centering\n\begin{tikzpicture}\n\draw (0,0) rectangle (2,0.5) node[pos=.5] (N) {\lstinline[language=Lambda]|await($n$,_);|};\n\draw (0,1.2) rectangle (2,1.7) node[pos=.5] (V) {\lstinline[language=Lambda]|await($v$,_);|};\n\draw (0,-0.5) -- (0,4);\n\draw (2,-0.5) -- (2,4);\n\draw [->] (N.east) -- ++(0.5,0) -- node [right] {$n$} ++(0,3.5) -- ++(-0.5,0);\n\draw (0,2.8) rectangle (2,3.3) node[pos=.5] (Q) {};\n\draw [->,draw=blue!60!black] (V.east) -- ++(0.4,0) -- node [right,text=blue!60!black] {$v$} ++($(Q.center)-(V.center)$) -- ++(-0.4,0);\n\node [left=0.9 of N.west] (t) {$k_G^T(t)$};\n\draw [->] (t) -- (N.west);\n\n\draw (t |- Q) node (m) {$k_G^T(t)-n$};\n\draw [->] (m) -- (N.west |- Q);\n\n\draw (t |- V) node (v) {$k_G^T(t)-n+v$};\n\draw [->] (v) -- (N.west |- V);\n\n\node at ($(Q)!0.45!(V)$) {$\vdots$};\n\node at ($(V)!0.45!(N)$) {$\vdots$};\n\end{tikzpicture}\n\caption{Two overlapping awaits}\label{fig:awaits position}\n\end{figure}\n\n\subsubsection{Semantics}\nThe semantics of our language consist of two components: an execution graph $G$, which represents the concurrent execution of the events, and local instruction sequences that generate and refer to these events.
\nAt first glance, the instruction sequences and the event graph interlock like gears: the instruction sequences generate the events in the event graph, \emph{e.g.}, the reads and writes, and the event graph provides the values that are returned by those reads to the instruction sequences.\nOf course, the values returned by reads determine which events are generated next by the instruction sequences.\nUnfortunately, it is a bit more complex than this: due to weak memory models, the interlock is actually \emph{cyclical}; a write event $w$ of thread $A$ can be generated based on a value returned by a previous read of $A$, which reads from a write event of thread $B$, which is generated based on a previous read of $B$ which reads from $w$.\nThus a simple step-by-step parallel construction of $G$ and the instruction sequences is not possible.\n\nInstead, we follow a more indirect (so-called axiomatic) semantics: we take an arbitrary (potentially cyclical) execution graph $G$ and try to justify it post hoc by finding local instruction sequences that are consistent with the events in the graph, \emph{i.e.}, 1) every event in $G$ is generated by the instruction sequences, and 2) the instruction sequences use read results from the graph $G$.\nWe define this in two steps: at first we ignore for simplicity the consistency predicate $\mathrm{cons}_M(G)$ which states that $G$ is consistent with the memory model, and only check that $G$ can be justified by the program text.\nWe define this by a predicate $\mathrm{cons}^P(G)$ stating that $G$ is consistent with the program.\nThen we make the definition complete by combining the two consistency predicates into a single predicate\n\[ \mathrm{cons}^P_M(G) = \mathrm{cons}^P(G) \land \mathrm{cons}_M(G) \]\nwhich states that the execution graph $G$ is consistent with the program $P$ under the weak memory model $M$. We borrow the notation of execution graphs from the work of Vafeiadis et al.\ \cite{GenMC}, with the minor change that our events include barrier modes.\n\n\n\paragraph{Defining $\mathrm{cons}^P(G)$}\nThe semantics of our language is defined relative to an execution graph $G$, which provides the values that the individual reads return. With reference to these values, the local program text of each thread can be executed locally.\nThe graph is consistent with the program text if it contains exactly the events that occur during this local execution.\nThe local execution of thread $T$ is described by four sequences, which are defined by mutual recursion on the number of executed steps $t \in \Set{0,1,2,\ldots}$: the thread-local state $\sigma_G^T(t)$ after $t$ steps, the position of control $k_G^T(t)$ after $t$ steps, the (potential) event generated in the $t$-th step $e_G^T(t)$, and the (potential) read result $v_G^T(t)$ of that event. The definitions of $e_G^T(t)$ and $v_G^T(t)$ are not themselves recursive; they refer to $\sigma_G^T(t)$, and are in turn referenced in the definition of $\sigma_G^T({t+1})$.\nFor this reason, the four sequences do not all have the same length.
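Schematically, and only as a reading aid for the formal definitions that follow, one step of thread $T$ unfolds as\n\[ \big(\sigma_G^T(t),\ k_G^T(t)\big) \ \xrightarrow{\ \epsilon\ } \ e_G^T(t) \ \xrightarrow{\ G.\mathrm{rf}\ } \ v_G^T(t) \ \xrightarrow{\ \delta,\ \loopcons\ } \ \big(\sigma_G^T(t+1),\ k_G^T(t+1)\big) \]\nThat is: the current state determines the generated event, the graph provides the value returned in case that event is a read, and state, event, and read result together determine the next state and the next position of control.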
We denote the number of execution steps of thread $T$ by $N_G^T \in \mathbb N \cup \Set{ \infty }$, where $N_G^T = \infty$ indicates that the thread does not terminate and hence makes infinitely many steps.\nThe number of steps $N_G^T$ coincides with the lengths $\lvert e_G^T \rvert$ and $\lvert v_G^T \rvert$ of the sequences $e_G^T$ of events and $v_G^T$ of read results of thread $T$, as every step generates up to one event and returns up to one read result\n\[ \lvert e_G^T \rvert = \lvert v_G^T \rvert = N_G^T\]\nAs usual for such fencepost cases, the number of states and positions of control is $N_G^T+1$\n\[ \lvert \sigma_G^T \rvert = \lvert k_G^T \rvert = N_G^T+1 \]\n\n\n\paragraph{Position of Control} The position of control\n\[ k_G^T(t) \in \Set{0,1,2,\ldots} \]\nis the index of the next statement $P_T(k_G^T(t))$ to be executed by thread $T$ after executing $t$ steps.\nAll programs start at the first statement, \emph{i.e.},\n\[ k_G^T(0) = 0 \]\nand thus the first statement to be executed is the first statement of the program $P_T(k_G^T(0)) = P_T(0)$.\nAfter $t \le N_G^T$ steps, the position of control may leave the program text, \emph{i.e.}, $k_G^T(t)$ may no longer be an index in the sequence $P_T$ of statements of thread $T$\n\[ k_G^T(t) \ge \lvert P_T \rvert \]\nIn this case, the computation of thread $T$ ends:\n\[ k_G^T(t) \ge \lvert P_T \rvert \ \to \ N_G^T = t \]\nand thus there is no $k_G^T({t+1})$ that needs to be defined. Otherwise, the execution of the $t$-th step changes the position based on the statement $P_T(k_G^T(t))$ executed in that step.\nWe abbreviate\n\[ S_G^T(t) = P_T(k_G^T(t)) \]\n\nIf this statement is an event generating instruction, we always move to the next statement, \emph{i.e.}, \n\[ P_T(k_G^T(t)) = \lstinline[language=Lambda]|step(_,_)| \quad\to\quad k_G^T({t+1}) = k_G^T(t) + 1\]\nFor a do-await-while \lstinline[language=Lambda]|await($n$,$\loopcons$)|, $k$ is either also incremented by 1 (the loop is exited) or decremented by $n$ (the loop is continued), depending on whether $\loopcons$ evaluates to false or true in the internal state $\sigma_G^T(t)$ of thread $T$ after $t$ steps:\n\[ P_T(k_G^T(t)) = \lstinline[language=Lambda]|await($n$,$\loopcons$)| \quad\to\quad k_G^T({t+1}) = \begin{cases} k_G^T(t) + 1 & \loopcons(\sigma_G^T(t)) = 0 \\ k_G^T(t) - n & \text{o.w.} \end{cases} \]\n\n\paragraph{Event Sequence}\nThe $t$-th step (with $t < N_G^T$) generates the event\n\[ e_G^T(t) \in \mathit{Event} \]\nwhich is either $\epsilon(\sigma_G^T(t))$ if the statement executed in this step is an event generating instruction \lstinline[language=Lambda]|step($\epsilon$,_)|\n\[ P_T(k_G^T(t)) = \lstinline[language=Lambda]|step($\epsilon$,_)| \quad\to\quad e_G^T(t) = \epsilon(\sigma_G^T(t)) \]\nor, in case the statement is a do-await-while, a NOP event\n\[ P_T(k_G^T(t)) = \lstinline[language=Lambda]|await(_,_)| \quad\to\quad e_G^T(t) = F^\barrier{rlx}\]\nThis event must be in $G$, or else $G$ is not consistent with the program.\nRecall that $G$ stores the event together with metadata indicating the thread $T$ and the event index $t$ in the program order of $T$.\nIf no event with this metadata exists in the graph, the graph represents a partial execution of the program.
In this case we stop execution of thread $T$ before the event is generated\n\[ \langle T,\ t,\ - \rangle \not\in G.\mathrm{E} \quad\to\quad N_G^T = t \]\nOtherwise, the event and its metadata form the triplet $\langle T,\ t,\ e_G^T(t) \rangle$.\nIf some event with this metadata exists in the graph, but not this particular event, then the program generated a different event than the one provided by the graph; the graph is inconsistent with the program.\n\[ \langle T,\ t,\ e \rangle \in G.\mathrm{E} \ \land \ e \not= e_G^T(t) \quad\to\quad \neg \mathrm{cons}^P(G) \]\nNote that $T$ and $t$ already uniquely identify $e_G^T(t)$ in a consistent execution graph. To avoid redundancy we abbreviate\n\[ \event{T}{t}{G} = \langle T,\ t,\ e_G^T(t) \rangle \]\n\n\paragraph{Read Result} The read result\n\[ v_G^T(t) \in \mathit{Value}? \]\nis the value returned by a read event generated in step $t < N_G^T$ of thread $T$.\nIf no read event is generated, there is no read result $v_G^T(t)$\n\[ e_G^T(t) \not= R^-(-) \quad \to\quad v_G^T(t) = \bot \]\nOtherwise, the read reads from the write $w = G.\mathrm{rf}(\event{T}{t}{G})$ and returns the value $w.\mathrm{val}$ written by that write. \nNote that in the case of a missing \rf-edge, $w$ may be $\bot$ even though $e_G^T(t)$ is a read event. In such cases, we return the read result $\bot$.\nHowever, an instruction that generates a read event usually depends on the read result to compute the next state.\nThus we will define below that the thread-local execution terminates in case $e_G^T(t)$ is a read event but the read result is $\bot$. \nWe collect the read result as the value $v_G^T(t)$\n\[ e_G^T(t) = R^-(-) \quad\to\quad v_G^T(t) = \begin{cases} G.\mathrm{rf}(\event{T}{t}{G}).\mathrm{val} & G.\mathrm{rf}(\event{T}{t}{G}) \not= \bot \\ \bot & \text{o.w.} \end{cases} \]\n\n\paragraph{State Sequence}\nThe thread-local state of thread $T$ after executing $t$ steps\n\[ \sigma_G^T(t) \in \mathit{State} \]\ncontains the values of all thread-local registers. We leave the initial state\n\[ \sigma_G^T(0) \]\nof thread $T$ uninterpreted. Each step then updates the local state based on the executed statement.
Do-await-whiles never change the program state\n\[ P_T(k_G^T(t)) = \lstinline[language=Lambda]|await(_,_)| \quad\to\quad \sigma_G^T({t+1}) = \sigma_G^T(t)\]\nFor event generating instructions, we consider two cases: the first (regular) case is that the value $v_G^T(t)$ matches the event $e_G^T(t)$ in the sense that $v_G^T(t)$ provides a value if $e_G^T(t)$ is a read.\nIn this case the event generating instruction \lstinline[language=Lambda]|step(_,$\delta$)| with state transformer $\delta$ updates the state based on $\delta$ under control of the read result $v_G^T(t)$:\n\begin{align*} \nP_T(k_G^T(t)) &= \lstinline[language=Lambda]|step(_,$\delta$)| \ \land \ (e_G^T(t) \not= R^-(-) \, \lor \, v_G^T(t) \not=\bot) \\ &\quad\to\quad \sigma_G^T({t+1}) \ =\ \sigma_G^T(t) \ll \delta(\sigma_G^T(t), v_G^T(t)) \n\end{align*}\nHere $\ll$ is the update operator which takes all the new (updated) register values from $\delta(\sigma,v)$ and the remaining (unchanged) register values from $\sigma$:\n\[ (\sigma \ll \sigma')(r) = \begin{cases}\sigma'(r) & r \in \mathit{Dom}(\sigma') \\ \sigma(r) & \text{o.w.} \end{cases} \]\nIn the second (irregular) case, step $t$ generated a read event but no read result was returned. In this case the computation gets stuck. We define that $t$ is the last step. Since steps $0, \ \ldots, \ t$ have been executed, the number of steps is thus $N_G^T = t+1$. Thus formally we need to define a state $\sigma_G^T({t+1})$, despite the computation being stuck. We arbitrarily define $\sigma_G^T({t+1}) = \sigma_G^T(t)$.\n\[ P_T(k_G^T(t)) = \lstinline[language=Lambda]|step(_,$\delta$)| \ \land \ e_G^T(t) = R^-(-) \ \land \ v_G^T(t) =\bot \quad\to\quad N_G^T = t+1 \ \land \ \sigma_G^T({t+1}) = \sigma_G^T(t) \]\n\n\paragraph{No Superfluous Events}\nWe have so far checked that every event generated by the program is also in $G$. However, $G$ is only consistent with a program if there is also no event in $G$ that was not generated by the program, \emph{i.e.}, there are no superfluous events.\nMore precisely, $G$ is consistent with the program $P$ exactly when the set of events $G.\mathrm{E}$ in $G$ is exactly the set of events (plus metadata) generated by $P$:\n\[ \mathrm{cons}^P(G) \quad\iff\quad G.\mathrm{E} = \Set{ \event{T}{t}{G} | T \in \mathcal T,\ t < N_G^T } \]\n\n\n\paragraph{Register Read-From}\nRecall that the Bounded-Effect principle states that there must not be a visible side effect of a failed iteration of an await. \nBetween threads, the only potentially visible side effects are the generated stores.\nWithin a thread, updates to registers can also be visible, provided these registers are not overwritten in the meantime.\nWe define a register read-from relation within events of a single thread\n\[ G.\mathrm{rrf} \subseteq G.\mathrm{po} \]\nwhich holds between events $e$ and $e'$ exactly when the statement that generated $e'$ ``depends'' on registers updated by the statement that generated $e$ which have not been overwritten in the meantime.\nMore precisely, we put a register read-from edge in the graph between the $t$-th and $u$-th steps (with $u > t$) if any function in the statement $P_T(k_G^T(u))$ executed by the $u$-th step depends on the visible output of the $t$-th to the $u$-th step:\n\[ \event{T}{t}{G} \overset{\mathrm{rrf}}\longrightarrow \event{T}{u}{G} \quad\iff\quad u > t \, \land \, \exists f \in F(P_T(k_G^T(u))).
\ \mathit{depends\hy on}(f, \mathit{vis}_G^T(t,u)) \]\nTo define what it means to ``depend'' on these registers, we look at the functions $\epsilon$, $\delta$, $\loopcons$ of the statement and see whether the registers can affect the functions.\nWe collect the functions in statement $S$ in a set $F(S)$ defined as follows\n\[ F(S) = \begin{cases} \Set{ \epsilon, \delta } & S = \lstinline[language=Lambda]|step($\epsilon$, $\delta$)| \\ \Set{ \loopcons } & S = \lstinline[language=Lambda]|await(_, $\loopcons$)| \end{cases} \]\nA function $f \in F(S)$ \emph{depends on} a set of registers $R \subseteq \mathit{Register}$ if there are two states which only differ on registers in $R$ on which $f$ produces different results\footnote{For the sake of simplicity we use here curried notation for $f = \delta$, \emph{i.e.}, $\delta(\sigma) \not= \delta(\sigma')$ iff there is a $v$ such that $\delta(\sigma,v) \not= \delta(\sigma',v)$.}\n\[ \mathit{depends\hy on}(f,R) \quad\iff\quad \exists \sigma,\, \sigma' \in \mathit{State}. \ (\forall r \not\in R.\ \sigma(r) = \sigma'(r)) \ \land \ f(\sigma) \not= f(\sigma') \]\nTo unify notation we define an update $\delta_G^T(t)$ for each step $t$ by\n\[ \delta_G^T(t) = \begin{cases} \delta(\sigma_G^T(t), v_G^T(t)) & P_T(k_G^T(t)) = \lstinline[language=Lambda]|step(_, $\delta$)| \land (e_G^T(t) \not= R^-(-) \lor v_G^T(t) \not= \bot) \\ \emptyset & \text{o.w.} \end{cases} \]\nwhere $\emptyset$ is the empty update (no registers changed). A straightforward induction shows:\n\begin{lemma} \label{lem:strans sequence}\n\[ \sigma_G^T(t+1) = \sigma_G^T(t) \ll \delta_G^T(t) \]\n\end{lemma}\nWe define the visible output of the $t$-th step to the $u$-th step to be the set of registers that are updated by the $t$-th step but not by the steps between $t$ and $u$\n\[ \mathit{vis}_G^T(t,u) \ = \ \mathit{Dom}(\delta_G^T(t)) \setminus \bigcup_{u' \in (t:u)} \mathit{Dom}(\delta_G^T(u')) \]\nFor example, if step $t$ updates registers $r$ and $r'$, and some intermediate step $u' \in (t:u)$ overwrites $r'$, then only $r$ remains visible, \emph{i.e.}, $\mathit{vis}_G^T(t,u) = \Set{r}$.\n\n\paragraph{Iterations of Await}\nIn this section we define the steps that constitute iterations of awaits.\nThese are the steps that execute statements with numbers $k' \in [k-n:k]$ where statement number $k$ is a do-await-while statement that jumps back $n$ statements.
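To make these notions concrete, consider the program of \cref{fig:await example}: the await at position $k=2$ jumps back $n=2$ statements, so an iteration consists of the steps executing statements $0$, $1$, and $2$. In an execution in which the loop condition evaluates to $1$ at the first await (a failed iteration) and to $0$ at the second, the positions of control are\n\[ k_G^T(0),\ \ldots,\ k_G^T(6) \ = \ 0,\ 1,\ 2,\ 0,\ 1,\ 2,\ 3 \]\nand the two iterations occupy the step intervals $[0:2]$ and $[3:5]$. (This worked trace is for illustration only; whether the loop condition holds is of course determined by the values read from $G$.)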
We enumerate the steps $t$ which are the endpoints of await iterations.\n\begin{align*}\n\mathit{end}_G^T(0) &= \min \Set{ t | P_T(k_G^T(t)) = \lstinline[language=Lambda]|await(_, _)| }\\ \n\mathit{end}_G^T(q+1) &= \min \Set{t > \mathit{end}_G^T(q) | P_T(k_G^T(t)) = \lstinline[language=Lambda]|await(_, _)| }\n\end{align*} \nWe denote the length of such an iteration, \emph{i.e.}, the number $n$ of statements jumped back by the do-await-while statement, by\n\[ \mathit{len}_G^T(q) =n \quad\text{where}\quad P_T(k_G^T(\mathit{end}_G^T(q))) = \lstinline[language=Lambda]|await($n$, _)| \]\nThe start point of the $q$-th iteration (the step executing statement $k-n$) is defined by\n\[ \mathit{start}_G^T(q) \ = \ \mathit{end}_G^T(q) - \mathit{len}_G^T(q) \] \n\nWe show that the intervals $[\mathit{start}_G^T(q):\mathit{end}_G^T(q)]$ for all $q$ do not overlap.\nFor this it suffices to show that only the last step in such an interval executes a do-await-while statement.\n\begin{lemma}\label{lem:step forward iteration}\n\[ t \in [\mathit{start}_G^T(q):\mathit{end}_G^T(q)) \quad\to\quad P_T(k_G^T(t)) \not= \lstinline[language=Lambda]|await(_, _)| \]\n\end{lemma}\n\begin{proof}\nAssume for the sake of contradiction that step $t$ executes a do-await-while.\nW.l.o.g. $t$ is the last step to do so\n\[ t = \max \Set{ u \in [\mathit{start}_G^T(q):\mathit{end}_G^T(q)) | P_T(k_G^T(u)) = \lstinline[language=Lambda]|await(_, _)| } \]\nSince the remaining steps between $t$ and $\mathit{end}_G^T(q)$ are not do-await-while statements, they move the position of control ahead one statement at a time. Thus the difference in the positions of control is equal to the difference in the step numbers.\n\[ k_G^T(\mathit{end}_G^T(q)) - k_G^T(t+1) \ =\ \mathit{end}_G^T(q)-(t+1) \]\nFurthermore, the loop condition of step $t$ cannot be satisfied since otherwise control would jump back and then have to cross position $k_G^T(t)$ a second time; but this contradicts the assumption that no more do-await-while statements are executed.\nThus control moves one statement ahead\n\[ k_G^T(t+1) = k_G^T(t) + 1 \]\nand we conclude with simple arithmetic\n\[ k_G^T(t) \ =\ k_G^T(\mathit{end}_G^T(q)) - (\mathit{end}_G^T(q) - t) \]\nSince $t$ is in the interval $[\mathit{start}_G^T(q):\mathit{end}_G^T(q))$ which by definition has length $\mathit{len}_G^T(q)$, we conclude first\n\[ 0 < \mathit{end}_G^T(q) - t \le \mathit{len}_G^T(q) \]\nand then that the await must be positioned in the last $\mathit{len}_G^T(q)$ statements before the $q$-th executed await\n\[ k_G^T(t) \in [k_G^T(\mathit{end}_G^T(q))-\mathit{len}_G^T(q):k_G^T(\mathit{end}_G^T(q))) \]\nbut this contradicts the syntactic restriction that awaits are never nested.\n\end{proof}\n\nWe conclude that interval number $q+1$ starts after interval number $q$ ends.
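In the example trace given above, for instance, the two intervals are $[\mathit{start}_G^T(0):\mathit{end}_G^T(0)] = [0:2]$ and $[\mathit{start}_G^T(1):\mathit{end}_G^T(1)] = [3:5]$, which indeed do not overlap.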
Monotonicity then immediately implies that the intervals are pairwise disjoint.\n\begin{lemma}\label{lem:iterations disjoint}\n\[ \mathit{end}_G^T(q)< \mathit{start}_G^T(q+1) \]\n\end{lemma}\n\begin{proof}\nBy definition, step $\mathit{end}_G^T(q)$ executes a do-await-while statement\n\[ P_T(k_G^T(\mathit{end}_G^T(q))) = \lstinline[language=Lambda]|await(_, _)| \]\nBy contraposition of \cref{lem:step forward iteration}, the step is not in interval number $q+1$\n\[ \mathit{end}_G^T(q) \not\in [\mathit{start}_G^T(q+1):\mathit{end}_G^T(q+1)) \]\nDue to monotonicity, interval number $q$ ends before interval number $q+1$\n\[ \mathit{end}_G^T(q) < \mathit{end}_G^T(q+1) \]\nand the claim follows\n\[ \mathit{end}_G^T(q) < \mathit{start}_G^T(q+1) \]\n\end{proof}\n\nIteration $q$ is failed if the loop condition in step $\mathit{end}_G^T(q)$ evaluates to $1$:\n\[ \mathit{fail}_G^T(q) \ \iff \ \loopcons(\sigma_G^T(\mathit{end}_G^T(q)))=1 \quad\text{where}\quad P_T(k_G^T(\mathit{end}_G^T(q))) = \lstinline[language=Lambda]|await(_, $\loopcons$)| \]\nWe plan to cut from the graph all failed iterations which make the graph wasteful. These are failed iterations $q$ in which the next iteration reads from exactly the same stores.\nWe define this precisely through a predicate $\mathit{WI}_G^T(q)$:\n\begin{align*}\n\mathit{WI}_G^T(q) \ \iff \ \mathit{fail}_G^T(q) \ \land \ \forall m \le \mathit{len}_G^T(q).&\ e_G^T(\mathit{start}_G^T(q)+m) = \mathit{R}^-(-) \n \\ & \ \to \ G.\mathrm{rf}(\event{T}{\mathit{start}_G^T(q) + m}{G}) = G.\mathrm{rf}(\event{T}{\mathit{start}_G^T(q+1) + m}{G})\n\end{align*}\nIf a graph has any such iterations, we say that it is wasteful. Formally:\n\[ \mathrm{W}(G) \ \iff \ \exists T,\,q.\ \mathit{WI}_G^T(q) \]\n\n\n\subsection{Formalizing the Bounded-Effect principle}\label{sec:vegas}\nThe Bounded-Effect principle is now easily formalized. We define $\mathit{BE}(G)$ to hold if in graph $G$ no register read-from arrow leaves a failed iteration of an await.\nFor the sake of simplicity we fully forbid generating write events in failed iterations of awaits. This is unlikely to be a practical restriction\footnote{It is possible to construct theoretical examples where this restriction changes the logic of the code, but we are not aware of any practical examples.
In any case, the restriction can be lifted with considerable elbow grease.}; as reads-from edges from such writes that leave the iteration are forbidden anyway, this directly only affects loop-internal write-read pairs, which can be simulated using registers.\n\begin{definition}\nGraph $G$ satisfies the Bounded-Effect principle, \emph{i.e.},\n\( \mathit{BE}(G) \), iff for all threads $T$, failed await iterations $q$ with $\mathit{fail}_G^T(q)$, and step numbers $t \in [\mathit{start}_G^T(q):\mathit{end}_G^T(q))$ we have: \n\begin{enumerate}\n\item no write event is generated in step $t$ of thread $T$\n\[ e_G^T(t) \not= W^-(-,-) \]\n\item if the event generated in step $t$ of thread $T$ is register read-from by the event generated in step $u$ of thread $T$, then $u$ is in the same failed iteration $q$\n\[ \event{T}{t}{G} \overset{\mathrm{rrf}}\longrightarrow \event{T}{u}{G} \quad\to\quad u \in [\mathit{start}_G^T(q):\mathit{end}_G^T(q)) \]\n\end{enumerate}\n\end{definition}\nIn the remainder of this text, we assume that all graphs that are consistent with the memory model and with $P$ satisfy the Bounded-Effect principle\n\begin{equation} \forall G.\ \mathrm{cons}_M^P(G) \quad\to\quad \mathit{BE}(G) \end{equation}\n\n\subsection{Proving the main theorem}\label{sec:main}\nWe define the sets $\mathbb G^F$ and $\mathbb G^\infty$ of execution graphs with a finite resp. infinite number of failed await iterations by\n\begin{align*}\n \mathbb G^F &= \Set{ G | \mathrm{cons}_M^P(G) \ \land \ \forall T.\ \exists q.\ \forall q' \ge q. \ \neg \mathit{fail}_G^T(q') } \n \\\n \mathbb G^\infty &= \Set{ G | \mathrm{cons}_M^P(G) \ \land \ \exists T.\ \forall q.\ \exists q' \ge q. \ \mathit{fail}_G^T(q') } \n \\ &= \Set{ G | \mathrm{cons}_M^P(G)} \setminus \mathbb G^F\n\end{align*}\nAwait-termination holds if $\mathbb G^\infty$ is empty\n\[ \mathit{AT} \ \iff \ \mathbb G^\infty = \emptyset \]\n\nWe now show a series of lemmas that lead us to the main theorem:\n\begin{reptheorem}{thm:amc} \leavevmode\n\begin{enumerate}\n\item The set $\ensuremath{\mathbb G_\ast^F}{} = \Set{G \in \mathbb G^F | \neg \mathrm{W}(G)}$ of non-wasteful execution graphs is finite, and every graph $G \in \ensuremath{\mathbb G_\ast^\infty}{} = \Set{ G \in \ensuremath{\mathbb G_\ast^F}{} | \mathrm{stagnant}(G) }$ is finite.\n\item if an error event $E$ exists in a graph $G \in \mathbb G^F$, it also exists in a graph $G' \in \ensuremath{\mathbb G_\ast^F}$\n\item every graph $G \in \mathbb G^\infty$ can be cut to a graph $G' \in \ensuremath{\mathbb G_\ast^\infty}{}$\n\item every graph $G \in \ensuremath{\mathbb G_\ast^\infty}{}$ can be extended to a graph $G' \in \mathbb G^\infty$\n\end{enumerate}\n\end{reptheorem}\n\nIn the proofs we ignore the memory model consistency. This can be easily proven on a case-by-case basis (\emph{i.e.}, for concrete $M$), but a generic proof must rely on certain abstract features of the memory model which are hard to identify in a generic fashion.
We leave a generic proof with appropriate conditions as future work.\n\nWe first show that after a failed iteration, we return to the start of the await and immediately repeat the iteration (possibly with a different outcome).\n\begin{lemma} \label{lem:failed repeat}\n\begin{align*}\n \mathit{fail}_G^T(q) \quad\to\quad k_G^T(\mathit{end}_G^T(q)+1) &= k_G^T(\mathit{start}_G^T(q)) \n \\ {}\land \mathit{start}_G^T(q+1) &= \mathit{end}_G^T(q)+1\n \\ {}\land \mathit{end}_G^T(q+1) &= \mathit{end}_G^T(q)+1+\mathit{len}_G^T(q)\n\end{align*}\n\end{lemma}\n\begin{proof}\nBy definition the loop condition $\loopcons$ in step $\mathit{end}_G^T(q)$ is satisfied\n\[ P_T(k_G^T(\mathit{end}_G^T(q))) = \lstinline[language=Lambda]|await($n$,$\loopcons$)| \ \land \ \loopcons(\sigma_G^T(\mathit{end}_G^T(q))) = 1 \]\nThus by the semantics of the language, control jumps back $n = \mathit{len}_G^T(q)$ statements\n\[ k_G^T(\mathit{end}_G^T(q)+1) = k_G^T(\mathit{end}_G^T(q)) - n \]\nBy \cref{lem:step forward iteration} none of the previous $n$ steps executes a do-await-while statement, and thus these steps moved control forward linearly; the first part of the claim follows\n\[ k_G^T(\mathit{end}_G^T(q)) - n = k_G^T(\mathit{end}_G^T(q)-n) = k_G^T(\mathit{start}_G^T(q))\]\nWe next prove the third part of the claim. Observe that the next $n$ statements are exactly the same (non-await) statements, and thus after an additional $n$ steps we have again\n\[ k_G^T(\mathit{end}_G^T(q)+1+n) = k_G^T(\mathit{end}_G^T(q)+1)+n = k_G^T(\mathit{end}_G^T(q)) \]\nwhich is a do-await-while. Hence by the definition of $\mathit{end}_G^T$ we have\n\[ \mathit{end}_G^T(q+1) = \mathit{end}_G^T(q)+1+n \] \nwhich is the third part of the claim. For the remaining second part of the claim, note that the do-await-while still jumps back $n$ statements.\nThus by definition of $\mathit{start}_G^T$ and $\mathit{len}_G^T$ we have\n\[ \mathit{start}_G^T(q+1) = \mathit{end}_G^T(q+1) - \mathit{len}_G^T(q+1) = \mathit{end}_G^T(q)+1+n - n = \mathit{end}_G^T(q)+1 \]\nwhich is the claim.\n\end{proof}\nIf an iteration is wasteful, then the next iteration repeats exactly the same events, positions of control, and states (except that registers that were modified in the first iteration may have changed values).\n\begin{lemma} \label{lem:failed progress}\nLet $q$ be the index of an iteration that is wasteful, and $m$ be the number of steps taken by the thread inside the iteration (without leaving it)\n\[ \mathit{WI}_G^T(q) \ \land \ m \le \mathit{len}_G^T(q) \]\nThen all of the following hold:\n\begin{enumerate}\n\item the same events are generated in iterations $q$ and $q+1$ after $m$ steps \label{lem:failed repeat:same e}\n\[ e_G^T(\mathit{start}_G^T(q)+m) = e_G^T(\mathit{start}_G^T(q+1)+m) \]\n\item the same values are observed \ldots \label{lem:failed repeat:same v}\n\[ v_G^T(\mathit{start}_G^T(q)+m) = v_G^T(\mathit{start}_G^T(q+1)+m) \]\n\item the same position of control is reached \ldots \label{lem:failed repeat:same k}\n\[ k_G^T(\mathit{start}_G^T(q)+m) = k_G^T(\mathit{start}_G^T(q+1)+m) \]\n\item if the value of register $r$ is not the same after $m$ steps in the two iterations \label{lem:failed repeat:same state}\n\[ \sigma_G^T(\mathit{start}_G^T(q)+m)(r) \not= \sigma_G^T(\mathit{start}_G^T(q+1)+m)(r) \]\nthen $r$ must be an output of one of the steps $u$ of the failed iteration $q$ which is still visible after $m$ steps\n\[ \exists u \in
[\mathit{start}_G^T(q):\mathit{end}_G^T(q)].\ r \in \mathit{vis}_G^T(u,\mathit{start}_G^T(q+1)+m) \]\n\end{enumerate}\n\end{lemma}\n\begin{proof}\nWe first show that claims \ref{lem:failed repeat:same e} and \ref{lem:failed repeat:same v} follow from claims \ref{lem:failed repeat:same k} and \ref{lem:failed repeat:same state}.\nWe know from claim \ref{lem:failed repeat:same k} that the position of control is the same. Thus also the executed statement is the same\n\[ P_T(k_G^T(\mathit{start}_G^T(q)+m)) = P_T(k_G^T(\mathit{start}_G^T(q+1)+m)) \]\nWe split cases on the type of statement executed by the steps; in case it is a do-await-while, both steps generate the NOP event $F^\barrier{rlx}$ and no read result, and we are done. In the other case, we have\n\[ P_T(k_G^T(\mathit{start}_G^T(q)+m)) = P_T(k_G^T(\mathit{start}_G^T(q+1)+m)) = \lstinline[language=Lambda]|step($\epsilon$, _)| \]\nFrom the Bounded-Effect principle we know that there is no register read-from from a step $u \in [\mathit{start}_G^T(q):\mathit{end}_G^T(q)]$ of the failed iteration $q$ to step $\mathit{start}_G^T(q+1)+m$, which is outside that iteration (\cref{lem:iterations disjoint})\n\[ \event{T}{u}{G} \centernot{\overset{\mathrm{rrf}}\longrightarrow} \event{T}{\mathit{start}_G^T(q+1)+m}{G} \]\nand thus in particular $\epsilon$ does not depend on the visible outputs of step $u$ to that step\n\[ \neg \mathit{depends\hy on}(\epsilon, \mathit{vis}_G^T(u, \mathit{start}_G^T(q+1)+m)) \]\nFrom claim \ref{lem:failed repeat:same state} we know that the only differences between the two states are on registers which are such visible outputs. Thus with the definition of $\mathit{depends\hy on}$ we know\n\[ \epsilon(\sigma_G^T(\mathit{start}_G^T(q)+m)) = \epsilon(\sigma_G^T(\mathit{start}_G^T(q+1)+m)) \]\nand thus the generated events are the same, which is claim \ref{lem:failed repeat:same e}\n\[ e_G^T(\mathit{start}_G^T(q)+m) = e_G^T(\mathit{start}_G^T(q+1)+m) \]\nFor claim \ref{lem:failed repeat:same v} we only consider read events\n\[ e_G^T(\mathit{start}_G^T(q)+m) = R^-(-) \]\nWe have by assumption that iteration $q$ is wasteful, thus the two events read from the same store\n\[ G.\mathrm{rf}(\event{T}{\mathit{start}_G^T(q)+m}{G}) = G.\mathrm{rf}(\event{T}{\mathit{start}_G^T(q+1)+m}{G}) \]\nand thus read the same value. Claim \ref{lem:failed repeat:same v} immediately follows.\n\nClaims \ref{lem:failed repeat:same state} and \ref{lem:failed repeat:same k} are shown by joint induction on $m$. In the base case $m=0$ claim \ref{lem:failed repeat:same k} follows immediately from \cref{lem:failed repeat}\n\[ k_G^T(\mathit{start}_G^T(q+1)) = k_G^T(\mathit{end}_G^T(q)+1) = k_G^T(\mathit{start}_G^T(q)) \]\nFor claim \ref{lem:failed repeat:same state} observe that every register $r$ which differs before and after iteration $q$\n\[ \sigma_G^T(\mathit{start}_G^T(q))(r) \not= \sigma_G^T(\mathit{end}_G^T(q)+1)(r)\]\nmust have been modified by some step $u \in [\mathit{start}_G^T(q):\mathit{end}_G^T(q)]$ in that iteration\n\[ r \in \mathit{Dom}(\delta_G^T(u)) \]\nW.l.o.g.
$u$ is the last such update, in which case the effect is still visible to step $\mathit{end}_G^T(q)+1$\n\[ r \in \mathit{vis}_G^T(u,\mathit{end}_G^T(q)+1) \]\nand the claim follows as by \cref{lem:failed repeat}, step $\mathit{end}_G^T(q)+1$ is the start of iteration $q+1$\n\[ \mathit{end}_G^T(q)+1 = \mathit{start}_G^T(q+1)\]\n\nIn the induction step $m \to m+1$, we know by the induction hypothesis that the position of control is the same after $m$ steps in the respective iteration and that the states are the same (modulo visible outputs). \nAs we have shown before, this implies that the read result is also the same\n\[ v_G^T(\mathit{start}_G^T(q)+m) = v_G^T(\mathit{start}_G^T(q+1)+m) \]\nAnalogous to the proof that $\epsilon$ produces the same event due to the Bounded-Effect principle, one can also conclude that the new position of control must be the same (\emph{i.e.}, claim \ref{lem:failed repeat:same k} holds)\n\[ k_G^T(\mathit{start}_G^T(q)+m+1) = k_G^T(\mathit{start}_G^T(q+1)+m+1) \]\nand that the computed update is the same (which apart from the state also depends on the read result)\n\[ \delta_G^T(\mathit{start}_G^T(q)+m) =\delta_G^T(\mathit{start}_G^T(q+1)+m) \]\nAssume for the sake of showing the only remaining claim that register $r$ has a different value in the new states\n\[ \sigma_G^T(\mathit{start}_G^T(q)+m+1)(r) \not= \sigma_G^T(\mathit{start}_G^T(q+1)+m+1)(r) \]\nBy \cref{lem:strans sequence} the updates $\delta_G^T$ determine the state change; since the states were updated in the same way, $r$ cannot have been updated\n\[ r \not\in \mathit{Dom}(\delta_G^T(\mathit{start}_G^T(q)+m)) \]\nand the difference was already present before the step\n\[ \sigma_G^T(\mathit{start}_G^T(q)+m)(r) \not= \sigma_G^T(\mathit{start}_G^T(q+1)+m)(r) \]\nBy the induction hypothesis this implies that $r$ was a visible output of some step $u$ from iteration $q$ to the previous step\n\[ \exists u \in [\mathit{start}_G^T(q):\mathit{end}_G^T(q)] . \ r \in \mathit{vis}_G^T(u,\mathit{start}_G^T(q+1)+m) \]\nand since it is not updated in this step, it is still visible to the next step\n\[ \exists u \in [\mathit{start}_G^T(q):\mathit{end}_G^T(q)] . \ r \in \mathit{vis}_G^T(u,\mathit{start}_G^T(q+1)+m+1) \]\nwhich is the claim.\n\end{proof}\n\n\nNext we show that such a failed iteration can be safely removed. For this we define a deletion operation $G-(T,q)$ which deletes the $q$-th iteration of thread $T$ from the graph. We only define this in case the $q$-th iteration failed. \nIn this case by the Bounded-Effect principle there are no write events that are deleted, and thus we do not have to pay attention to deleting writes that are referenced by other reads.\nEvents of other threads are not affected at all. Neither are events generated before the start of the deleted iteration.
For events generated after the deleted iteration we simply reduce the event index by the number of events in the deleted iteration ($\mathit{len}_G^T(q)+1$)\n\begin{align*}\n(G-(T,q)).\mathrm{E}_U &= G.\mathrm{E}_U \quad \text{if} \quad U \not= T\n\\\n(G-(T,q)).\mathrm{E}_T &= \Set{ \langle T,\ t,\ e \rangle \in G.\mathrm{E}_T | t < \mathit{start}_G^T(q) } \cup \Set{ \langle T,\ t-(\mathit{len}_G^T(q)+1),\ e \rangle | \langle T,\ t,\ e \rangle \in G.\mathrm{E}_T \land t > \mathit{end}_G^T(q) }\n\end{align*}\nThis can also be defined by means of a partial, invertible renaming function\n\[ r : G.\mathrm{E} \rightharpoonup (G-(T,q)).\mathrm{E} \]\nwhich maps each non-deleted event to its renamed event in $G-(T,q)$:\n\[ r(\langle U,\ t,\ e \rangle ) = \begin{cases} \langle U,\ t,\ e \rangle & U \not= T \lor t < \mathit{start}_G^T(q) \\ \langle U,\ t-(\mathit{len}_G^T(q)+1),\ e \rangle & U=T \land t > \mathit{end}_G^T(q) \end{cases} \]\nWe have:\n\[ (G-(T,q)).\mathrm{E} = r(G.\mathrm{E}) \]\nFor the reads-from relationship, we simply re-map the edges between the renamed events:\n\[ (G-(T,q)).\mathrm{rf}(e) = r(G.\mathrm{rf}(r^{-1}(e))) \]\n\nWe show that this graph is still consistent with the program.\n\begin{lemma} \label{lem:cons preserved}\n\[ \mathrm{cons}^P(G) \ \land \ \mathit{fail}_G^T(q) \quad\to\quad \mathrm{cons}^P(G-(T,q)) \]\n\end{lemma}\n\begin{proof}\nFor threads other than $T$ there is nothing to show as the event sequences and \rf-edges are fully unchanged.\nFor $T$, we focus on the steps after the deletion, which may be affected by the change in registers.\nWe will show: any changes to registers after step $\mathit{start}_G^T(q)$ were visible changes of a register by one of the deleted steps; other things have not changed.\nThus any dependence on these changed registers would imply a register read-from relation in the original graph $G$, which is forbidden by the Bounded-Effect principle.\nFor the sake of brevity we define\n\[ G' = G-(T,q) \]\n\begin{lemma} \label{lem:deleted:aux}\nIf $t$ is a step behind the deleted parts in the new graph,\n\[ t \ge \mathit{start}_G^T(q) \]\nthen both of the following hold:\n\begin{enumerate}\n\item position of control in, event generated by, and read result seen in step $t$ are unaffected by the deletion (relative to the original values of step $t+\mathit{len}_G^T(q)+1$)\n\[ k_{G'}^T(t) = k_G^T(t+\mathit{len}_G^T(q)+1) \ \land \ e_{G'}^T(t) = e_G^T(t+\mathit{len}_G^T(q)+1) \ \land \ v_{G'}^T(t) = v_G^T(t+\mathit{len}_G^T(q)+1) \]\n\item if $r$ is a register whose value was changed by the deletion\n\[ \sigma_{G'}^T(t)(r) \not= \sigma_G^T(t+\mathit{len}_G^T(q)+1)(r) \]\nthen there is a step $u$ in the original graph which still has a visible effect on $r$ \n\[ \exists u \in [\mathit{start}_G^T(q):\mathit{end}_G^T(q)].\ r \in \mathit{vis}_G^T(u,t+\mathit{len}_G^T(q)+1) \]\n\end{enumerate}\n\end{lemma}\n\begin{proof}\nThe proof is analogous to the proof of \cref{lem:failed progress} and omitted.\n\end{proof}\nNow to show that $\mathrm{cons}^P$ is preserved we simply consider the sets of events of the individual threads $U \in \mathcal T$ and show that they are not affected:\n\begin{equation} \forall U. \ G'.\mathrm{E}_U = \Set{ \event{U}{t}{G'} | t < N_{G'}^U } \label{lem:deleted:same events} \end{equation}\nWe split cases on $U \not= T$.
For threads $U \\not= T$, nothing has changed and the claim follows from the consistency of $G$\n\\begin{align}\n G'.\\mathrm{E}_U &= G.\\mathrm{E}_U \\nonumber\n \\\\ &= \\Set{ \\event{U}{t}{G} | t < N_{G}^U } \\nonumber\n\\\\ &= \\Set{ \\event{U}{t}{G'} | t < N_{G'}^U } \\label{lem:deleted:U unchanged}\n\\end{align}\nFor thread $T$, we split the set into those events generated before the cut (which have not changed)\n\\begin{equation} \\Set{ \\event{T}{t}{G} | t < \\mathit{start}_{G}^T(q) } = \\Set{ \\event{T}{t}{G'} | t < \\mathit{start}_{G}^T(q) } \\label{lem:deleted:prev unchanged} \\end{equation}\nand the events generated after the cut, for which \\cref{lem:deleted:aux} shows that only indices have changed:\n\\begin{align}\n&\\hphantom{{}={}} \\Set{ \\langle T, \\ t-(\\mathit{len}_G^T(q)+1), \\ e_{G}^T(t) \\rangle | t > \\mathit{end}_G^T(q) \\land t < N_{G}^T } \\nonumber\n\\\\ &= \\Set{ \\langle T, \\ t-(\\mathit{len}_G^T(q)+1), \\ e_{G}^T(t) \\rangle | t \\ge \\mathit{start}_G^T(q)+(\\mathit{len}_G^T(q)+1) \\land t < N_{G}^T } \\nonumber\n\\\\ &= \\Set{ \\langle T, \\ t-(\\mathit{len}_G^T(q)+1), \\ e_{G}^T(t) \\rangle | t-(\\mathit{len}_G^T(q)+1) \\ge \\mathit{start}_G^T(q) \\land t < N_{G}^T } \\nonumber\n\\\\ &= \\Set{ \\langle T, \\ t, \\ e_{G}^T(t+(\\mathit{len}_G^T(q)+1)) \\rangle | t \\ge \\mathit{start}_G^T(q) \\land t+(\\mathit{len}_G^T(q)+1) < N_{G}^T } \\nonumber & \\text{rebase $t$}\n\\\\ &= \\Set{ \\langle T, \\ t, \\ e_{G'}^T(t) \\rangle | t \\ge \\mathit{start}_G^T(q) \\land t < N_{G}^T-(\\mathit{len}_G^T(q)+1) } \\nonumber & \\text{L \\ref{lem:deleted:aux}}\n\\\\ &= \\Set{ \\event{T}{t}{G'} | t \\ge \\mathit{start}_G^T(q) \\land t < N_{G'}^T } \\label{lem:deleted:succ unchanged}\n\\end{align}\n(Note that $t > \\mathit{end}_G^T(q)$ is equivalent to $t \\ge \\mathit{start}_G^T(q)+(\\mathit{len}_G^T(q)+1)$, since $t$ is an integer and $\\mathit{end}_G^T(q) = \\mathit{start}_G^T(q)+\\mathit{len}_G^T(q)$.)\nJointly with \\cref{lem:deleted:prev unchanged} this proves \\cref{lem:deleted:same events} for $U:=T$:\n\\begin{align*}\n G'.\\mathrm{E}_T &= r(G.\\mathrm{E}_T)\n \\\\ &= \\Set{ \\event{T}{t}{G} | t < \\mathit{start}_{G}^T(q) } \\cup \\Set{ \\langle T, \\ t-(\\mathit{len}_G^T(q)+1), \\ e_{G}^T(t) \\rangle | t > \\mathit{end}_G^T(q) \\land t < N_{G}^T }\n \\\\ &= \\Set{ \\event{T}{t}{G'} | t < \\mathit{start}_{G}^T(q) } \\cup \\Set{ \\event{T}{t}{G'} | t \\ge \\mathit{start}_G^T(q) \\land t < N_{G'}^T} & \\text{E \\eqref{lem:deleted:prev unchanged}, \\eqref{lem:deleted:succ unchanged}}\n \\\\ &= \\Set{ \\event{T}{t}{G'} | t < N_{G'}^T }\n\\end{align*}\nTogether with \\cref{lem:deleted:U unchanged} this shows \\cref{lem:deleted:same events}. By \\cref{lem:deleted:same events} we conclude that the union over all threads $U$ of the events in $G'.\\mathrm{E}_U$ is equal to the set of events generated by all threads:\n\\[ \\bigcup_{U \\in \\mathcal T} G'.\\mathrm{E}_U \\ = \\ \\bigcup_{U \\in \\mathcal T} \\Set{ \\event{U}{t}{G'} | t < N_{G'}^U } \\]\nIt immediately follows that the set of all events in $G'$ is equal to the set of events generated by all threads\n\\[ G'.\\mathrm{E} = \\Set{ \\event{U}{t}{G'} | U \\in \\mathcal T,\\ t < N_{G'}^U } \\]\nwhich is the definition of $\\mathrm{cons}^P(G')$, \\emph{i.e.}, the claim.\n\\end{proof}\n
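\nTo illustrate the deletion and the renaming function concretely, consider a small hypothetical instance (the numbers are invented for illustration only): suppose the failed iteration $q$ of thread $T$ spans steps $3$ to $5$, \\emph{i.e.},\n\\[ \\mathit{start}_G^T(q) = 3 \\qquad \\mathit{end}_G^T(q) = 5 \\qquad \\mathit{len}_G^T(q) = 2 \\]\nThen $r$ leaves earlier events of $T$ untouched and shifts later ones down by the $\\mathit{len}_G^T(q)+1 = 3$ deleted events,\n\\[ r(\\langle T,\\ 2,\\ e \\rangle) = \\langle T,\\ 2,\\ e \\rangle \\qquad r(\\langle T,\\ 7,\\ e' \\rangle) = \\langle T,\\ 4,\\ e' \\rangle \\]\nwhile it is undefined on the deleted events $\\langle T,\\ 3,\\ - \\rangle$, $\\langle T,\\ 4,\\ - \\rangle$, and $\\langle T,\\ 5,\\ - \\rangle$.\n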
\nNext we iteratively eliminate all wasteful iterations of awaits. This takes us from any graph $G \\in \\mathbb G^F$ to a graph $G' \\in \\ensuremath{\\mathbb G_\\ast^F}{}$ but preserves at least some error events.\n\\begin{lemma} \\label{lem:cut finite}\n\\[ G \\in \\mathbb G^F \\ \\land \\ \\langle -, -, E \\rangle \\in G.\\mathrm{E} \\quad\\to\\quad \\exists G' \\in \\ensuremath{\\mathbb G_\\ast^F}{}. \\ \\langle -, -, E \\rangle \\in G'.\\mathrm{E} \\]\n\\end{lemma}\n\\begin{proof}\nWe construct a series \\( G^{(i)} \\) of graphs in which $i$ wasteful iterations have been eliminated from $G$. Since $G \\in \\mathbb G^F$ there are only finitely many failed iterations we need to remove, thus the sequence is finite.\nWe begin with graph $G$, from which $0$ iterations have been deleted\n\\[ G^0 = G \\]\nand then progressively cut one additional, arbitrarily chosen failed iteration at a time\n\\[ G^{i+1} = G^i-\\epsilon \\Set{ (T,q) | \\mathit{WI}_{G^i}^T(q) } \\]\nThe last graph $G^{I-1}$ in the sequence, where $I = \\lvert G^{(i)} \\rvert$ is the length of the sequence, by definition does not have any wasteful iterations\n\\[ \\nexists T,\\,q.\\ \\mathit{WI}_{G^{I-1}}^T(q)\\]\nand thus is not wasteful\n\\[ \\neg \\mathrm{W}(G^{I-1}) \\]\nBy repeated application of \\cref{lem:cons preserved} we conclude that all the graphs in the sequence, including $G^{I-1}$, are consistent with the program\n\\[ \\mathrm{cons}^P(G^{I-1}) \\]\nand thus $G':=G^{I-1}$ is in $\\ensuremath{\\mathbb G_\\ast^F}{}$\n\\[ G^{I-1} \\in \\ensuremath{\\mathbb G_\\ast^F}{} \\]\n\nIt now suffices to show that the error event is preserved (although possibly generated by a different step). \nAssume\n\\[ \\langle -, -, E \\rangle \\in G.\\mathrm{E} \\]\nBy definition we only delete events in wasteful iterations. \nBy \\cref{lem:failed progress} every event in such an iteration is repeated in the next iteration. The events outside the deleted iteration are maintained (cf. \\cref{lem:deleted:aux}).\nWe conclude: the event $E$ is generated in every graph in the sequence, in particular also in the last one\n\\[ \\langle -, -, E \\rangle \\in G^{I-1}.\\mathrm{E} \\]\nwhich is the claim.\n\\end{proof}\n\nThis shows that no bugs are missed by AMC. Our next goal is to show that AMC terminates.\nWe first show that the number of writes generated in a graph is bounded.\n\\begin{lemma} \\label{lem:bounded W}\n\\[ \\exists b. \\ \\forall G. \\ \\mathrm{cons}_M^P(G) \\quad\\to\\quad \\lvert \\Set{ (T, t) | e_G^T(t) = W^-(-,-), T \\in \\mathcal T, t < N_G^T} \\rvert \\le b \\]\n\\end{lemma}\n\\begin{proof}\nThe bound is equal to the sum of the program lengths of the threads\n\\[ b:= \\sum_{T \\in \\mathcal T} \\lvert P_T \\rvert \\]\nThe reason for this is that threads only repeat statements in case they fail an await iteration; but by the Bounded-Effect principle these failed iterations do not produce writes.\nWe show: if step $t$ of thread $T$ generates a write event, statement $k_G^T(t)$ is never executed again.\n\\begin{equation} e_G^T(t) = W^-(-,-) \\quad\\to\\quad \\forall u >t.\\ k_G^T(u) > k_G^T(t) \\label{lem:bound:T}\\end{equation}\nAssume for the sake of contradiction that some step after $t$ violates this, and let $u$ w.l.o.g. be the first step after $t$ in which the position of control is at $k_G^T(t)$ or before\n\\[ k_G^T(u) \\le k_G^T(t) \\ \\land \\ k_G^T(u-1) > k_G^T(t) \\]\nBy the semantics of the language, step $u-1$ must have been a failed await iteration\n\\[ P_T(k_G^T(u-1)) = \\lstinline[language=Lambda]|await($n$,$\\loopcons$)| \\ \\land \\ \\loopcons(\\sigma_G^T(u-1)) = 1 \\]\nSince there are no nested awaits, all lines between $k_G^T(u)$ and $k_G^T(u-1)$ are not awaits\n\\[ \\forall k \\in [k_G^T(u):k_G^T(u-1)). \\ P_T(k) \\not= \\lstinline[language=Lambda]|await(-,-)| \\]\nOf course $k_G^T(t)$ is in that interval; thus all steps between $k_G^T(t)$ and $k_G^T(u-1)$ are not awaits.\nBy the semantics of the language, these steps moved control ahead one statement per step.\n
Thus at most $n$ steps have passed between $t$ and $u-1$\n\\[ (u-1) - t = k_G^T(u-1) - k_G^T(t) < n \\]\nSince step $u-1$ ends an iteration $q$ of an await of length $n$\n\\[ \\mathit{end}_G^T(q) = u-1 \\ \\land \\ \\mathit{len}_G^T(q) = n \\]\nit follows that step $t$ is one of the steps in that iteration\n\\[ t \\in [\\mathit{start}_G^T(q):\\mathit{end}_G^T(q)] \\]\nBecause the loop condition is satisfied, the iteration is a failed iteration\n\\[ \\mathit{fail}_G^T(q) \\]\nand we conclude from the Bounded-Effect principle: step $t$ does not produce a write\n\\[ e_G^T(t) \\not= W^-(-,-) \\]\nwhich is a contradiction. This proves \\cref{lem:bound:T}, \\emph{i.e.}, each statement can produce at most one write.\nThus the total number of writes produced by thread $T$ is at most the size $\\lvert P_T \\rvert$ of the program text of thread $T$\n\\[ \\lvert \\Set{ t | e_G^T(t) = W^-(-,-),\\ t < N_G^T } \\rvert \\ \\le \\ \\lvert P_T \\rvert \\]\nand summing over all threads $T \\in \\mathcal T$ yields the bound $b$.\n\\end{proof}\n\n\\begin{description}\n\\item[Earlier versions:]\n\tEarlier versions had a barrier issue around the write to {\\tt next}, similarly to DPDK's bug discussed in \\S\\ref{s:dpdk}.\n\tIn version 4.16, the experts used a \\barrier{rel}\\ barrier in the atomic write immediately after the {\\tt decode_tail} function, but finally replaced that with an atomic \\barrier{sc}-fence in the current version.\n\tOptimizations with \\mbox{\\textsc{VSync}}\\xspace are verified and hence not affected by such bugs.\n\n\\item[Version 5.6 -- current version:]\n\t\\Cref{fig:qspin-curr} shows the barrier modes used in the current version of qspinlock\\cite{linux-qspinlock-version5.6}.\n\tThe dotted lines connect our barriers in \\cref{fig:qspin-optimize} with the equivalent barriers in the current version.\n\tThe few differing barrier modes are due to two reasons:\n\tFirst, there exist multiple maximally-relaxed combinations that are correct.\n\tSecond, both optimizations are based on different memory models (LKMM and IMM).\n\t\\mbox{\\textsc{VSync}}\\xspace extended with an LKMM module would likely suggest the same barriers as used in Linux.\n\\end{description}\n\n\\begin{figure}[H]\n\t\\begin{minipage}[b]{.44\\textwidth}\n\t\n\\begin{lstlisting}[style=verbcodelarge]\nlock\n |\\tikz[overlay, remember picture]\\coordinate (a1);\n |atomic32_cmpxchg_rel --> $acquire$|\\tikz[overlay, remember picture]\\coordinate (o1);|\n |\\tikz[overlay, remember picture]\\coordinate (b1);\n |atomic_fence --> $remove$\n queued_spin_lock_slowpath\n atomic32_await_neq_rlx\n |\\tikz[overlay, remember picture]\\coordinate (a2);\n |atomic32_cmpxchg_rel --> $acquire$|\\tikz[overlay, remember picture]\\coordinate (o2);|\n |\\tikz[overlay, remember picture]\\coordinate (b2);\n |atomic_fence --> $remove$\n atomic32_await_mask_eq_acq --> $relaxed$\n |\\tikz[overlay, remember picture]\\coordinate (b8);\n |atomic32_add_rlx --> $acquire$|\\tikz[overlay, remember picture]\\coordinate (o3);|\n encode_tail\n atomic32_write_rlx\n atomicptr_write_rlx\n atomic32_read_rlx\n |\\tikz[overlay, remember picture]\\coordinate (a3);\n |atomic32_cmpxchg_rel --> $acquire$|\\tikz[overlay, remember picture]\\coordinate (o4);|\n |\\tikz[overlay, remember picture]\\coordinate (b3);\n |atomic_fence --> $remove$\n |\\tikz[overlay, remember picture]\\coordinate (a9);\n |atomic32_read_rlx\n |\\tikz[overlay, remember picture]\\coordinate (a4);\n |atomic32_cmpxchg_rel --> $seq_cst$|\\tikz[overlay, remember picture]\\coordinate (o5);|\n |\\tikz[overlay, remember picture]\\coordinate (b4);\n |atomic_fence --> $remove$\n decode_tail\n atomicptr_write_rlx\n atomic32_await_neq_acq|\\tikz[overlay, remember picture]\\coordinate (o6);|\n atomicptr_read_rlx\n |\\tikz[overlay, remember\n
picture]\\coordinate (a10);\n |atomic32_await_mask_eq_acq --> $relaxed$\n atomic32_or_rlx --> $acquire$|\\tikz[overlay, remember picture]\\coordinate (o7);|\n |\\tikz[overlay, remember picture]\\coordinate (a5);\n |atomic32_cmpxchg_rel --> $acquire$|\\tikz[overlay, remember picture]\\coordinate (o8);|\n |\\tikz[overlay, remember picture]\\coordinate (b5);\n |atomic_fence --> $remove$\n atomicptr_await_neq_rlx\n |\\tikz[overlay, remember picture]\\coordinate (b10);\n |atomic32_write_rel|\\tikz[overlay, remember picture]\\coordinate (o9);|\nunlock\n |\\tikz[overlay, remember picture]\\coordinate (a6);\n |atomic_fence --> $remove$\n |\\tikz[overlay, remember picture]\\coordinate (b6);\n |atomic32_sub_rlx --> $release$|\\tikz[overlay, remember picture]\\coordinate (o10);\n \\begin{tikzpicture}[overlay, remember picture, b\/.style={opacity=0.2, draw=none, rounded corners}, n\/.style={circle, inner sep=1pt, font=\\rm\\footnotesize\\bf, draw=none, text=white}]\n\n \\coordinate (rex) at ($(a1)+(58mm,3mm)$);\n \\draw[b,fill=MyRed] ($(b1)+(-1mm,-1mm)$) rectangle (rex);\n \\node[n,fill=MyRed] at ($(a1)+(-3mm,-.8mm)$) {1};\n\n \\draw[b,fill=MyRed] ($(b2)+(-1mm,-1mm)$) rectangle ($(rex|-a2)+(4mm,3mm)$);\n \\node[n,fill=MyRed] at ($(a2)+(-3mm,-.8mm)$) {2};\n\n \\draw[b,fill=MyRed] ($(b3)+(-1mm,-1mm)$) rectangle ($(rex|-a3)+(4mm,3mm)$);\n \\node[n,fill=MyRed] at ($(a3)+(-3mm,-.8mm)$) {3};\n\n \\draw[b,fill=MyRed] ($(b4)+(-1mm,-1mm)$) rectangle ($(rex|-a4)+(4mm,3mm)$);\n \\node[n,fill=MyRed] at ($(a4)+(-3mm,-.8mm)$) {4};\n\n \\draw[b,fill=MyRed] ($(b5)+(-1mm,-1mm)$) rectangle ($(rex|-a5)+(4mm,3mm)$);\n \\node[n,fill=MyRed] at ($(a5)+(-3mm,-.8mm)$) {5};\n\n \\draw[b,fill=MyBlue] ($(b6)+(-1mm,-1mm)$) rectangle ($(a6)+(50mm,3mm)$);\n \\node[n,fill=MyBlue] at ($(a6)+(-3mm,-.8mm)$) {6};\n \\end{tikzpicture}\n |\n\\end{lstlisting}\n\\caption{Barrier modes in version 4.4 and \\mbox{\\textsc{VSync}}\\xspace optimizations in {\\bf bold}.\n\tOptimizations in the red boxes are similar to version 4.5;\nthose in the blue box are identical to version 4.8.}\n\t\t\\label{fig:qspin-optimize}\n\t\\end{minipage}\\hfill\n\t\\begin{minipage}[b]{.42\\textwidth}\n\\begin{lstlisting}[style=verbcodelarge]\nlock\n |\\tikz[overlay,remember picture]\\coordinate (c1);\n |@atomic32_cmpxchg_acq@\n queued_spin_lock_slowpath\n atomic32_await_counter_neq_rlx\n |\\tikz[overlay,remember picture]\\coordinate (c2);\n |@atomic32_get_or_acq@\n atomic32_sub_rlx\n |\\tikz[overlay,remember picture]\\coordinate (c3);\n |@atomic32_await_mask_eq_acq@\n atomic32_add_rlx\n encode_tail\n grab_mcs_node\n atomic32_write_rlx\n atomicptr_write_rlx\n atomic32_read_rlx\n |\\tikz[overlay,remember picture]\\coordinate (c4);\n |@atomic32_cmpxchg_acq@\n |\\tikz[overlay,remember picture]\\coordinate (c5);\n |@atomic_fence@\n atomic32_read_rlx\n atomic32_cmpxchg_rlx\n decode_tail\n atomicptr_write_rlx\n |\\tikz[overlay,remember picture]\\coordinate (c6);\n |atomic32_await_neq_acq\n atomicptr_read_rlx\n |\\tikz[overlay,remember picture]\\coordinate (c7);\n |@atomic32_await_mask_eq_acq@\n atomic32_cmpxchg_rlx\n atomic32_or_rlx\n atomicptr_await_neq_rlx\n |\\tikz[overlay,remember picture]\\coordinate (c8);\n |@atomic32_write_rel@\nunlock\n |\\tikz[overlay,remember picture]\\coordinate (c9);\n |@atomic32_sub_rel@\n \n |\n \\begin{tikzpicture}[overlay, remember picture, bline\/.style={dotted, thick}]\n\t \\draw[bline] ($(o1)+(.5mm,.5mm)$) to[out=0,in=180] ($(c1)+(-.5mm,.5mm)$);\n\t \\draw[bline] ($(o2)+(.5mm,.5mm)$) to[out=0,in=180] ($(c2)+(-.5mm,.5mm)$);\n\t 
\\draw[bline] ($(o3)+(.5mm,.5mm)$) to[out=0,in=180] ($(c3)+(-.5mm,.5mm)$);\n\t \\draw[bline] ($(o4)+(.5mm,.5mm)$) to[out=0,in=180] ($(c4)+(-.5mm,.5mm)$);\n\t \\draw[bline] ($(o5)+(.5mm,.5mm)$) to[out=0,in=180] ($(c5)+(-.5mm,.5mm)$);\n\t \\draw[bline] ($(o6)+(.5mm,.5mm)$) to[out=0,in=180] ($(c6)+(-.5mm,.5mm)$);\n\t \\draw[bline] ($(o7)+(.5mm,.5mm)$) to[out=0,in=180] ($(c7)+(-.5mm,.5mm)$);\n\t \\draw[bline] ($(o8)+(.5mm,.5mm)$) to[out=0,in=180] ($(c7)+(-.5mm,.5mm)$);\n\t \\draw[bline] ($(o9)+(.5mm,.5mm)$) to[out=0,in=180] ($(c8)+(-.5mm,.5mm)$);\n\t \\draw[bline] ($(o10)+(.5mm,.5mm)$) to[out=0,in=180] ($(c9)+(-.5mm,.5mm)$);\n \\end{tikzpicture}\n|\n\\end{lstlisting}\n\\caption{Barrier mode information for qspinlock in Linux version 5.6 (current version).\n\tDotted lines connect related barrier optimizations of \\mbox{\\textsc{VSync}}\\xspace and the current version.\n}\n\t\t\\label{fig:qspin-curr}\n\t\\end{minipage}\n\\end{figure}\n\n\\begin{figure}[h]\n\t\t\\centering\n\t\t\\begin{minipage}[b]{.6\\linewidth} %\n\t\t\t\\begin{lstlisting}[style=casecode,tabsize=8]\n#define __linux_atomic_cmpxchg(mod1, mod2, l, a, b) \\\n({ \\\n\ttypeof(a) __r = @atomic_cmpxchg@##$mod1$(l, a, b); \\\n\tif (__r == a) @atomic_fence@##$mod2$(); \\\n\t__r; \\\n})\n\n#define linux_cmpxchg(l, a, b)\t\t__linux_atomic_cmpxchg($_rel$, , l, a, b)\n#define linux_cmpxchg$_rlx$(l, a, b)\t__linux_atomic_cmpxchg($_rlx$, $_rlx$, l, a, b)\n#define linux_cmpxchg$_acq$(l, a, b)\t__linux_atomic_cmpxchg($_acq$, $_rlx$, l, a, b)\n#define linux_cmpxchg$_rel$(l, a, b)\t__linux_atomic_cmpxchg($_rel$, $_rlx$, l, a, b)\n#define linux_cmpxchg$_seq$(l, a, b)\t__linux_atomic_cmpxchg( , $_rlx$, l, a, b)\n\\end{lstlisting}\n\\end{minipage}\n\\caption{Using \\mbox{\\textsc{VSync}}\\xspace atomics to implement code compatible with Linux's {\\tt cmpxchg}.}\n\t\\label{fig:linux-cmpxchg}\n\\end{figure}\n
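\nFor illustration, the default variant above can be expanded by hand.\nThis is only a sketch of the macro expansion, assuming the \\mbox{\\textsc{VSync}}\\xspace primitives {\\tt atomic_cmpxchg_rel} and {\\tt atomic_fence} produced by the token pasting: a successful {\\tt linux_cmpxchg} is a \\barrier{rel}\\ compare-exchange followed by an \\barrier{sc}\\ fence, which together behave like Linux's fully ordered {\\tt cmpxchg}.\n\\begin{lstlisting}[style=casecode]\n\/* linux_cmpxchg(l, a, b) expands roughly to: *\/\ntypeof(a) __r = atomic_cmpxchg_rel(l, a, b); \/* rel compare-exchange *\/\nif (__r == a)\n\tatomic_fence(); \/* sc fence, executed only on success *\/\n\\end{lstlisting}\nOn failure the fence is skipped, so the failing path stays cheap.\n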
\n\n\\section{Study Cases}\n\\label{s:cases}\n\nIn this section, we discuss in detail three study cases:\na bug in the MCS lock of the DPDK library, a bug in the MCS lock of an internal Huawei product, and a comparison of expert-optimization and \\mbox{\\textsc{VSync}}\\xspace-optimization of the Linux qspinlock.\nWe report on bugs found with \\mbox{\\textsc{VSync}}\\xspace as well as on limitations.\n\n\\subsection{DPDK MCS lock}\n\\label{s:dpdk}\n\nThe Data Plane Development Kit (DPDK\\footnote{\\url{https:\/\/github.com\/DPDK\/dpdk}}) is a popular set of libraries used to\ndevelop packet processing software in user space.\n\\mbox{\\textsc{VSync}}\\xspace found a bug in the MCS lock of the current DPDK version (v20.05).\n\\Cref{fig:dpdk-mcs} shows the part of the implementation that concerns us.\nAt the end of the code, we added the bug scenario in which two threads, Alice and Bob,\nare involved.\nAlice wants to acquire the lock (see the \\verb|run_alice()| function), and Bob currently\nholds the lock and is about to release it (see \\verb|run_bob()|).\nNote that we removed the slowpath of \\verb|rte_mcslock_unlock()| since the bug\nonly occurs in the fastpath.\nThe core of the bug is a missing \\barrier{rel}\\ barrier before or at Line~\\ref{ln:dpdk-bug}, which\ncauses Alice to hang and never enter the critical section.\n\n\\begin{figure}[ht]\n\n\\begin{multicols}{2}\n \n \n \\begin{lstlisting}[style=casecode]\n\/* SPDX-License-Identifier: BSD-3-Clause\n * Copyright(c) 2019 Arm Limited\n *\/\n typedef struct rte_mcslock {\n\tstruct rte_mcslock *next;\n\tint locked; \/* 1 if the queue locked, 0 otherwise *\/\n} rte_mcslock_t;\n\nstatic inline void\nrte_mcslock_lock(rte_mcslock_t **msl, rte_mcslock_t *me) {\n rte_mcslock_t *prev;\n\n \/* Init me node *\/\n __atomic_store_n(&me->locked, 1, __ATOMIC_RELAXED); |\\label{ln:dpdk-init}|\n __atomic_store_n(&me->next, NULL, __ATOMIC_RELAXED);\n\n \/* If the queue is empty, the exchange operation is |\\label{ln:dpdk-comment1}|\n * enough to acquire the lock. Hence, the exchange\n * operation requires acquire semantics. The store to\n * me->next above should complete before the node is\n * visible to other CPUs\/threads. Hence, the exchange\n * operation requires release semantics as well. *\/\n prev = __atomic_exchange_n(msl, me, __ATOMIC_ACQ_REL); |\\label{ln:dpdk-xchg}|\n if (prev == NULL) {\n \treturn;\n }\n __atomic_store_n(&prev->next, me, __ATOMIC_RELAXED); |\\label{ln:dpdk-bug}|\n\n \/* The while-load of me->locked should not move above\n * the previous store to prev->next. Otherwise it will\n * cause a deadlock. Need a store-load barrier. *\/\n __atomic_thread_fence(__ATOMIC_ACQ_REL); |\\label{ln:dpdk-fence}|\n while (__atomic_load_n(&me->locked, __ATOMIC_ACQUIRE))\n rte_pause();\n}\n\nstatic inline void\nrte_mcslock_unlock(rte_mcslock_t **msl, rte_mcslock_t *me) {\n\tif (__atomic_load_n(&me->next, __ATOMIC_RELAXED) == NULL) {\n \/\/ **ignore this branch**\n\t}\n\n\t\/* Pass lock to next waiter. *\/\n\t__atomic_store_n(&me->next->locked, 0, __ATOMIC_RELEASE);\n}\n\/\/---------------------------------------------------------\n\/\/ bug scenario\n\/\/---------------------------------------------------------\n\/\/ 2 threads: alice and bob.\nrte_mcslock_t alice, bob;\n\/\/ bob has the lock\nrte_mcslock_t *tail = &bob;\n\nvoid run_alice() { rte_mcslock_lock(&tail, &alice); }\nvoid run_bob() { rte_mcslock_unlock(&tail, &bob); }\n\\end{lstlisting}\n\\end{multicols}\n\\caption{Part of the DPDK MCS lock implementation describing the scenario in which\nAlice hangs.}\\label{fig:dpdk-mcs}\n\\end{figure}\n\n\\begin{figure}[t]\n \\begin{minipage}{.48\\textwidth}\n \\resizebox{\\textwidth}{!}{\\input{figures\/dpdk-imm-bug}}\n \\vspace{-3em}\n \\caption{{\\bf IMM}. Bug results in Alice hanging: Alice writes to {\\tt bob->next} with \\barrier{rlx}\\ mode,\n and Bob reads with \\barrier{rlx}\\ mode, which allows Bob's write to occur before the initialization of {\\tt me->locked}.}\\label{fig:dpdk-imm-bug}\n \\end{minipage}\\hfill\n \\begin{minipage}{.48\\textwidth}\n \\resizebox{\\textwidth}{!}{\\input{figures\/dpdk-imm-fix}}\n \\vspace{-3em}\n \\caption{{\\bf IMM}.\n
With the bug fixed, Alice writes to {\\tt prev->next} with \\barrier{rel}\\ mode,\n and Bob reads with \\barrier{acq}\\ mode, creating a synchronizes-with edge, which forces Bob's write to occur after the initialization.}\\label{fig:dpdk-imm-fix}\n \\end{minipage}\n\\end{figure}\n\n\\paragraph{The bug on IMM.}\n\\Cref{fig:dpdk-imm-bug} shows an execution graph in which Alice hangs.\nAMC gives exactly this execution graph as a counter-example for await termination, but in text form.\nThe $U_\\barrier{sc}$ pair of events are the ``read part'' and ``write part'' of the atomic exchange (Line~\\ref{ln:dpdk-xchg} of the code in \\cref{fig:dpdk-mcs}).\nTo understand why the exchange is modeled with two events, remember that atomic exchange is implemented with\nload-linked\/store-conditional instruction pairs on many architectures.\nFor example in ARMv8, \\verb|atomic_exchange_n(msl, me, __ATOMIC_ACQ_REL)| is compiled to\n\\begin{verbatim}\n 38:\tc85ffc02 \tldaxr\tx2, [x0]\n 3c:\tc803fc01 \tstlxr\tw3, x1, [x0]\n 40:\t35ffffc3 \tcbnz\tw3, 38 \n\\end{verbatim}\nIntuitively, the load instruction is the ``read part'' of the exchange,\nwhereas the store instruction is the ``write part'' of the exchange.\nAlso note that for the sake of this bug, \\verb|__ATOMIC_ACQ_REL| is equivalent to \\verb|__ATOMIC_SEQ_CST|, \\emph{i.e.}, even with the stronger \\barrier{sc}\\ barrier mode, the bug can still manifest.\n\nReturning to the bug in IMM,\nAlice starts by initializing her node, in particular, setting {\\tt alice->locked} to 1.\nAfter exchanging the tail, Alice writes to {\\tt bob->next}.\nAlthough Bob reads from Alice's write to {\\tt bob->next}, IMM allows Alice's write\nto {\\tt alice->locked} to happen after Bob's write to {\\tt alice->locked}\nbecause no happens-before relation is established between Alice and Bob.\nThe \\mo\\ relation shows this order of modifications.\nIf that occurs, Alice waits forever for {\\tt alice->locked} to become 0.\nTo establish the correct happens-before relation between Alice and Bob,\nAlice's write to {\\tt bob->next} has to be \\barrier{rel},\nand Bob's read of {\\tt bob->next} has to be \\barrier{acq}\\ (see \\cref{fig:dpdk-imm-fix}).\nThat causes both events to ``synchronize-with'', guaranteeing that Alice's write to {\\tt alice->locked}\nhappens before Bob's.\nNote that in IMM the happens-before relation projected to one memory location, \\emph{e.g.}, {\\tt alice->locked}, implies the same order in the visible memory updates of that location, \\emph{i.e.}, in the modification order \\mo of {\\tt alice->locked}.\nThe happens-before relation does not, however, imply an ordering between the writes to distinct memory locations\\cite{podkopaev2019bridging}.\n
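\nConcretely, the fix amounts to strengthening two lines of \\cref{fig:dpdk-mcs} (shown here as a sketch, not as the upstream patch):\n\\begin{lstlisting}[style=casecode]\n\/* in rte_mcslock_lock(): publish the node with release semantics *\/\n__atomic_store_n(&prev->next, me, __ATOMIC_RELEASE); \/* was RELAXED *\/\n\n\/* in rte_mcslock_unlock(): read the successor with acquire semantics *\/\nif (__atomic_load_n(&me->next, __ATOMIC_ACQUIRE) == NULL) \/* was RELAXED *\/\n { \/* slowpath elided *\/ }\n\\end{lstlisting}\n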
\n\\paragraph{The bug on ARM.}\nThe bug is not exclusive to the IMM model.\n\\Cref{fig:dpdk-arm-bug} shows an execution graph manually adapted to the ARM memory model (ARM for short).\nIn this memory model, a global order of events exists, so we number the events with one possible global order.\nARM allows the write to {\\tt alice->locked} (event 7) to happen after the read part of the atomic exchange $U^R_\\barrier{sc}$ (event 3),\nbut does not allow it to happen after the write part $U^W_\\barrier{sc}$ (event 8).\nMoreover, since Alice's write to {\\tt bob->next} is \\barrier{rlx}\\ (event 4),\nARM allows the write to {\\tt alice->locked} (along with $U^W_\\barrier{sc}$) to happen after it.\nAs a consequence, although Bob reads (event 5) from Alice's write to {\\tt bob->next} (event 4), the effect of\nsetting {\\tt alice->locked} to 1 (event 7) happens after Bob has set it to 0 (event 6).\nIn contrast to IMM, making Alice's write \\barrier{rel}\\ is sufficient in ARM (see \\cref{fig:dpdk-arm-fix}) because\nthe control\/address dependency between Bob's events guarantees that writes that happen before the read event also happen before the subsequent dependent events.\n\n\\paragraph{Validation of the bug.}\nSo far, we have not been able to reproduce the effect of the bug on real hardware.\nThe situation that triggers the bug is very unlikely to happen, but it is nevertheless\npossible and still a potential problem for code using DPDK on ARM platforms.\nTo gain higher confidence in the bug on ARM, we checked the scenario\nof \\cref{fig:dpdk-mcs} with Rmem\\cite{pulte2017simplifying}, a stateful model checking tool capable of verifying\nsmall pieces of binary code compiled for the ARMv8 architecture.\nAlthough Rmem cannot deal with the infinite loop of Alice, we can reproduce the\nbug by asserting that Bob does not see {\\tt alice->locked} being reverted to 1 after\nhe has set it to 0 -- as expected, the assertion fails.\n\n\\paragraph{Discussion.}\nThe DPDK MCS lock bug is a good example of how understanding WMMs can be\nchallenging even for experts.\nNote that the MCS lock was contributed by ARM Limited to the DPDK project.\nIn \\cref{fig:dpdk-mcs}, Line~\\ref{ln:dpdk-comment1}, the developer\nconsiders exactly the situation observed in this bug:\n``The store to {\\tt me->next} above should complete before the node is\nvisible to other CPUs\/threads. Hence, the exchange operation requires release\nsemantics as well.''\nHowever, making the exchange \\barrier{rel}\\ is not sufficient because the node can also become\nvisible to another thread via the write at Line~\\ref{ln:dpdk-bug},\nand nothing stops the store of Line~\\ref{ln:dpdk-init} and the write part of the exchange of Line~\\ref{ln:dpdk-xchg}\nfrom being reordered after Line~\\ref{ln:dpdk-bug}.\nAnother interesting finding in this code is that, as far as we can verify,\nthe explicit fence at Line~\\ref{ln:dpdk-fence} is useless and can be removed.\n\n\\begin{figure}[t]\n \\begin{minipage}{.48\\textwidth}\n \\resizebox{\\textwidth}{!}{\\input{figures\/dpdk-arm-bug}}\n \\vspace{-3em}\n \\caption{{\\bf ARM memory model}. Bug results in Alice hanging: Alice writes to {\\tt bob->next} with \\barrier{rlx}\\ mode,\n causing the initialization of {\\tt alice->locked} to be reordered after Bob's write.}\\label{fig:dpdk-arm-bug}\n \\end{minipage}\\hfill\n \\begin{minipage}{.48\\textwidth}\n \\resizebox{\\textwidth}{!}{\\input{figures\/dpdk-arm-fix}}\n \\vspace{-3em}\n \\caption{{\\bf ARM memory model}.\n
To fix the bug, Alice has to write to {\\tt bob->next} with \\barrier{rel}\\ mode,\n forcing Bob's write to occur after the initialization of {\\tt alice->locked}.}\\label{fig:dpdk-arm-fix}\n \\end{minipage}\n\\end{figure}\n\n\n\n\n\\subsection{MCS lock of an internal Huawei product}\n\n\\begin{figure}[h]\n\\begin{minipage}{.40\\textwidth}\n \n \n \\begin{lstlisting}[style=casecode]\nstatic inline void\nmcslock_acquire(volatile mcslock_t *tail,\n volatile mcs_node_t *me)\n{\n mcs_node_t *prev;\n\n me->next = NULL;\n me->spin = 1;\n\n smp_wmb(); \/\/ ** consider to be SC fence **\n\n \/\/ equivalent to xchg_acq\n prev = __sync_lock_test_and_set(tail, me);\n if (!prev)\n return;\n\n prev->next = me;\n smp_mb(); \/\/ ** consider to be SC fence ** |\\label{ln:huawei-useless-fence}|\n while(me->spin); |\\label{ln:huawei-loop}|\n \/\/ BUG: Missing ACQ barrier, eg, smp_mb();\n}\n\n\nstatic inline void\nmcslock_release(volatile mcslock_t *tail,\n volatile mcs_node_t *me)\n{\n if (!me->next) {\n \/\/ SC cmpxchg\n if (__sync_val_compare_and_swap(\n tail, me, NULL) == me) {\n return;\n }\n while(!me->next);\n }\n\n smp_mb(); \/\/ ** consider to be SC fence **\n me->next->spin = 0;\n}\n \\end{lstlisting}\n \\caption{MCS implementation in a commercial OS. A barrier bug causes data races in the critical section.}\\label{fig:huawei-mcs}\n\\end{minipage}\\hfill\n\\begin{minipage}{.57\\textwidth}\n\\resizebox{\\linewidth}{!}{ \\input {figures\/huawei-mcs}}\n\\caption{In IMM, Alice's read of {\\tt x} may happen before Bob's write to {\\tt x}.}\n\\label{fig:huawei-graph}\n\\end{minipage}\n\\end{figure}\nOur next study case is concerned with the MCS lock implementation found in an internal Huawei product.\nIn this implementation, \\mbox{\\textsc{VSync}}\\xspace\\ identified a missing \\barrier{acq}\\ barrier that causes serious data corruption problems.\nWe were able to reproduce the problem on real hardware and reported the bug along with a simple fix to the maintainers.\nHere, we describe this issue to illustrate the challenges of porting x86 code to ARM, which is how such a bug was introduced into the code base.\nWith the recently increased demand for software on ARM servers, we believe that similar bugs are going to become more and more common in production.\n\n\\paragraph{The bug on IMM.}\n\\Cref{fig:huawei-mcs} presents a slightly simplified version of the original MCS lock implementation.\nThe bug is a missing \\barrier{acq}\\ barrier at the end of {\\tt mcslock_acquire()}.\nTo understand the scenario, consider the execution graph in \\cref{fig:huawei-graph},\nwhere the critical section is a simple increment \\verb|x++|,\nAlice wants to enter, and Bob is inside the critical section.\nSimilarly to the DPDK bug, Bob sees Alice's node when releasing the lock and sets Alice's {\\tt spin = 0}; this flag is called {\\tt locked} in DPDK.\nThe first fence in Alice's \\verb|mcslock_acquire| synchronizes with the fence in Bob's \\verb|mcslock_release| due to the write and read of the {\\tt bob->next} field.\nThat establishes a happens-before relation marked with the dashed arrows in the figure.\nThe happens-before relation, however, does not specify whether Bob's critical-section execution happens before Alice's critical-section execution, or vice versa.\nIn this execution graph, Alice and Bob run their critical sections concurrently and both read from the initial write to {\\tt x}, causing one of the increments to be lost.\nBy introducing an \\barrier{acq}\\ barrier in the reads of {\\tt me->spin} or after them (\\cref{fig:huawei-mcs}, Line~\\ref{ln:huawei-loop}), Alice is guaranteed to execute her critical section after Bob.\nNote that, although the ARM model also introduces control dependencies between the reads of {\\tt me->spin} and the read of {\\tt x} inside Alice's critical section, a reordering of these operations is not precluded because they are all read operations.\n
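\nA minimal sketch of the fix we reported (the exact patch may differ) is to strengthen the end of the await loop in {\\tt mcslock_acquire()}, as already hinted at by the {\\tt BUG} comment in \\cref{fig:huawei-mcs}:\n\\begin{lstlisting}[style=casecode]\n \/* end of mcslock_acquire(): *\/\n while(me->spin); \/* await the lock handover *\/\n smp_mb(); \/* FIX: barrier with ACQ semantics after the await *\/\n\\end{lstlisting}\nThis prevents the reads of the critical section from being reordered before the await, at the cost of a full fence; a load-acquire of {\\tt me->spin} would also suffice.\n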
\n\\paragraph{Discussion.}\nBesides the barrier bug, a few further issues are worth pointing out.\nThe developers that implemented this code opted to use compiler-specific atomic operations.\nWe do not recommend their use because they hide the barrier mode used underneath.\nIn particular,\n{\\tt __sync_lock_test_and_set} has an \\barrier{acq}\\ mode,\nwhereas {\\tt __sync_val_compare_and_swap} has an \\barrier{sc}\\ mode.\nMoreover, the developers overuse fences:\nthe {\\tt smp_mb()} fence in \\verb|mcslock_acquire()|, Line~\\ref{ln:huawei-useless-fence}, is redundant and can be eliminated.\n\n\n\n\\input{cases-qspinlock}\n\n\n\n\\section{Optimized-code Evaluation}\n\\label{s:eval}\n\nIn this section, we present details of the setup used in our ``optimized-code evaluation'' section.\nMoreover, we discuss the results obtained with microbenchmarks at length.\nSee the full paper for results with real-world workloads~\\cite{vsync}.\n\n\\subsection{Setup details}\n\n \\subsubsection{Evaluation platforms}\n \\label{subsub:eval-platforms}\n\n We conduct our experiments on the following hardware platforms:\n\n \\begin{itemize}\n \\item a Huawei \\textbf{TaiShan 200} (Model 2280)\\footnoteurl{\n https:\/\/e.huawei.com\/uk\/products\/servers\/taishan-server\/taishan-2280-v2\n }\n rack server that has \\texttt{128 GB} of RAM and 2 \\textbf{Kunpeng 920-6426} processors, a HiSilicon chip\n with \\textbf{64} \\texttt{ARMv8.2} 64-bit cores\\footnoteurl{\n https:\/\/en.wikichip.org\/wiki\/hisilicon\/kunpeng\/920-6426\n },\n totaling 128 cores running at a nominal 2.6 GHz frequency.\n The identifier to denote this machine in this document is \\texttt{taishan200-128c}.\n\n \\item a GIGABYTE \\textbf{R182-Z91-00}\\footnoteurl{\n https:\/\/www.gigabyte.com\/Rack-Server\/R182-Z91-rev-100\n }\n rack server that has \\texttt{128 GB} of RAM and 2 \\textbf{EPYC 7352} processors, an AMD chip\n with \\textbf{24} \\texttt{x86\\_64} cores\\footnoteurl{\n https:\/\/www.amd.com\/en\/products\/cpu\/amd-epyc-7352\n },\n totaling 48 cores (96 if counting hyperthreading) running at a nominal 2.3 GHz frequency.\n The identifier to denote this machine in this document is \\texttt{gigabyte-96c}.\n \\end{itemize}\n\n We installed on all these servers the Ubuntu 18.04.4 LTS operating system with Linux kernel version \\texttt{5.3.0-42-generic}.\n\n\n \\subsubsection{Environment setup}\n\n To produce stable benchmark results on a kernel as complex as Linux, we took some precautions in terms of\n environment configuration of our experiment target platforms.\n We list these precautions here:\n\n \\begin{enumerate}\n \\item \\textbf{Atomic types isolation.}\n Linux and \\mbox{\\textsc{VSync}}\\xspace each declare their own atomic types such as \\texttt{atomic\\_t} and \\texttt{atomic64\\_t}.\n When writing the kernel benchmark module, to avoid name conflicts between Linux kernel headers and \\mbox{\\textsc{VSync}}\\xspace library headers, we separated the benchmark ``main'' code (where the entry point lies and where the kernel threads are created) from the lock primitive function\n
definitions and data-structure instantiations (where the contention loops are executed, see Section~\\ref{subsub:eval-micro-details}), placing the two into different translation units.\n Therefore, the ``main'' code of the benchmark kernel module could use the classic Linux headers for its needs, while the \\mbox{\\textsc{VSync}}\\xspace test units could include the \\mbox{\\textsc{VSync}}\\xspace library headers (which define the required atomic types) and use only primitive types from Linux (such as \\texttt{uint32\\_t} and the like).\n This technique also makes it possible to individually benchmark modules from the Linux kernel, such as the qspinlock located in the \\texttt{\"linux\/spinlock.h\"} header.\n\n \\item \\textbf{Thread to core affinity assignment.}\n The benchmark module spawns as many \\emph{kernel} threads as requested on module invocation.\n To measure the multi-core overheads of the locks, these threads must be pinned to individual cores (both within the same NUMA node and on different NUMA nodes).\n For this purpose, the Linux \\texttt{kthread\\_bind()} function is used; a minimal sketch of this setup follows the list.\n\n \\item \\textbf{Operating frequency fixing.}\n To avoid suffering from thermal effects, and thus from the OS dynamically changing the operating frequency while the benchmarks were running (and thereby skewing our results), we fixed the frequency to 1.5 GHz, a frequency point available on all the platforms used in our evaluations.\n For this purpose, we used the Linux \\texttt{cpufreq} mechanism.\n We set the governor to \\texttt{userspace} to be able to choose the frequency.\n We observed that using a fixed governor such as \\texttt{userspace} instead of an adaptive one (such as \\texttt{ondemand}) yields far better predictability in our results.\n\n \\item \\textbf{Disable network.}\n In the preliminary experiments we conducted for our work, we observed that the network introduced a lot of noise into the evaluations by widely spreading the distributions of results (more than 10\\% difference between minimum and maximum observed throughput for a benchmark repeated with the same parameters).\n The simplest solution to avoid this interference was to disable the network and therefore to operate the benchmarks directly on the machine console (on the server \\texttt{tty}).\n\n \\item \\textbf{Disable IRQ balancing.}\n \\texttt{irqbalance} is a Linux daemon in charge of distributing the hardware Interrupt Requests (IRQs) among the different processing cores of the platform to improve overall system performance.\n However, sporadic IRQs and subsequent executions of Interrupt Service Routines (ISRs) occurring on an \\emph{uncontrolled} set of cores would bring unpredictability to the system response time and interfere with our benchmark measurements.\n We simply disable this mechanism.\n The Linux fallback strategy is then to pin all IRQs to the first core, which we exclude from our thread affinity assignment to completely avoid running ISRs and benchmarks concurrently on the same cores.\n\n \\item \\textbf{Disable NUMA balancing.}\n On platforms with a large number of cores such as the ones used in these experiments (see Section~\\ref{subsub:eval-platforms}), the CPU cores are organized in NUMA nodes (for \\emph{Non-Uniform Memory Access}).\n This structure mitigates the unavoidable pressure on the memory bus caused by the large number of processing cores operating in parallel.\n Banks of memory are allocated per NUMA node, reflecting the cache hierarchy.\n \\textbf{NUMA balancing} is a feature of Linux\n
that periodically moves tasks closer to the memory they use, \\emph{i.e.}, to the right NUMA node.\n \\textbf{NUMA control} is an additional tool that allows configuring NUMA-aware task scheduling and memory allocation in a fine-grained manner, thus overriding the system-wide NUMA balancing.\n We disable system NUMA balancing, and we enable task-local NUMA control, using this syntax:\n\\begin{lstlisting}[language=bash]\nsudo numactl --cpubind=0 --membind=0 \n\\end{lstlisting}\n This goes one step further than task affinity assignment, as it forces memory allocation to be bound to NUMA node 0.\n Our benchmark is inherently concurrent: as soon as there are more threads than cores per NUMA node, the additional threads are allocated on the next NUMA node.\n\t\tFor userspace benchmarks, we use {\\tt libnuma} directly in our {\\tt pthread} wrapper to pin threads to cores and control the allocation of context data structures.\n\t Therefore, the cross-node threads will suffer some performance loss when trying to access shared data (\\emph{e.g.}{} spinlock data structures).\n\n \\item \\textbf{Completely isolate the cores.}\n The Linux kernel provides the possibility to isolate a subset of the CPU cores.\n To do so, the parameter \\texttt{isolcpus} must be filled with the list of CPU core identifiers to isolate.\n This parameter is given on the Linux boot command line (\\emph{i.e.}, in Grub in our case, prior to kernel boot).\n This has the effect of completely preventing the scheduler and other task-balancing mechanisms (such as SMP balancing) from operating on these cores.\n Unless \\emph{explicitly} required with a task affinity configuration (with the corresponding system calls or by using the \\emph{taskset} program), the OS will not schedule any task on these isolated cores.\n In our case, we decided to isolate all cores but the first, with the idea of running our benchmarks on the isolated cores, while the rest of the Linux processes would run on Core 0 to avoid interfering with our results.\n\n \\item \\textbf{Kernel threads priority.}\n We tried several configurations of \\texttt{niceness} for our kernel threads by calling the \\texttt{set\\_user\\_nice()} Linux function, but this did not seem to impact the distribution of our results.\n This is to be expected with the precautions described above.\n The benchmark response-time variability was not influenced by the priority of the kernel threads.\n \\end{enumerate}\n
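\nAs an illustration of the thread-affinity item above, here is a minimal sketch of how kernel threads can be spawned and pinned; it is a simplification ({\\tt worker_fn}, {\\tt cpu}, and the surrounding module code are placeholders), not the module's actual code:\n\\begin{lstlisting}[style=casecode]\n#include <linux\/kthread.h>\n\n\/* one kernel thread per requested core, pinned before it starts *\/\nstruct task_struct *t;\nt = kthread_create(worker_fn, NULL, \"bench\/%u\", cpu);\nif (!IS_ERR(t)) {\n kthread_bind(t, cpu); \/* pin to the (isolated) core *\/\n wake_up_process(t); \/* start the pinned thread *\/\n}\n\\end{lstlisting}\n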
\n\n \\subsection{Microbenchmark evaluation}\n\n In this section, we present the microbenchmark experiments carried out for the paper.\n We first describe the experiment itself and then discuss the results obtained.\n\n \\subsubsection{Experiment details}\n \\label{subsub:eval-micro-details}\n\n The microbenchmark works as follows:\n each thread repeatedly acquires a (writer) lock, increments a shared counter, and releases the lock.\n This is summarized in pseudo-C in Listing~\\ref{listing:eval-benchstructure}.\n\n\\begin{lstlisting}[style=tutorialcode,label=listing:eval-benchstructure,caption=Pseudo-C code of microbenchmark]\n\/* the tested lock is parameterized here *\/\n#include <lock.h>\n\n\/* lock variable *\/\nstatic lock_s lock_var;\n\n\/*\n * supposedly already allocated;\n * we also take care of cacheline-alignment\n * to avoid false sharing.\n *\/\nunsigned long long* shared_counter;\n\n\/* ... *\/\n\nunsigned long long run()\n{\n *shared_counter = 0ull;\n lock_init(&lock_var);\n\n do {\n lock_acquire(&lock_var);\n\n (*shared_counter)++;\n\n lock_release(&lock_var);\n } while (!thread_should_stop());\n\n return *shared_counter;\n}\n\\end{lstlisting}\n\n \\input{generated\/microbench-constants}\n\n The returned counter is used to compute the throughput (the number of times the critical section was entered by any thread).\n We vary the number of threads in the following set~\\footnote{\n Obviously, the 127-thread case can only be run on platforms with 128 cores.\n It is omitted in the other cases.\n }:\n $\\lbrace 1, 2, 4, 8, 16, 23, 31, 63, 95, 127{} \\rbrace$.\n We run each experiment for a fixed period of time (30{} seconds) and measure the throughput (number of critical sections per second).\n We run the experiments 5{} times to ensure the stability of the results (for each case, we pick the median of these repeated runs).\n\n For spinlocks and reader-writer locks, the benchmark runs as a Linux kernel module.\n That is, we insert into the kernel a module (\\texttt{*.ko} file) that is linked with the code listed in Listing~\\ref{listing:eval-benchstructure}.\n When inserted, the module runs an initialization routine that consists of spawning as many kernel threads as requested (as explained above, this is a parameter of the experiment), and each thread runs the code listed in Listing~\\ref{listing:eval-benchstructure} until interrupted by a timer (the 30{}-second execution).\n This timer is triggered externally, when removing the module from the kernel, as the exit routine of the module requires all the threads to finish their execution (it is not possible to kill a kernel thread with a kill signal).\n This kernel module is inspired by the previous work of Kashyap \\textit{et al.}~\\cite{shflock2019}, where they (nano-)benchmarked the behavior of a hash table in kernel mode to evaluate several locking primitives.\n\n For mutexes, we replace \\verb|pthread_mutex| using \\verb|LD_PRELOAD|, and the benchmark runs in Linux userspace.\n\n Regarding the selected locking primitives, we compare two variants of each primitive:\n an \\barrier{sc}-only variant,\n and a \\mbox{\\textsc{VSync}}\\xspace-optimized variant with barriers.\n\n \\subsubsection{Results}\n\n \\paragraph{Grouping and filtering records.}\n\n The raw experiment results look like a list of records, as shown in Table~\\ref{table:raw-records}.\n The \\texttt{count} column represents the value returned by the function \\texttt{run()} of Listing~\\ref{listing:eval-benchstructure}, \\emph{i.e.}, the number of times a thread accesses the critical section.\n The \\texttt{duration} column is the measured duration (even if it is fixed, some deviation may occur).\n Lastly, the \\texttt{throughput} column is simply $\\frac{\\texttt{count}}{\\texttt{duration}}$, effectively capturing the number of critical sections per second.\n\n \\begin{table}\n \\begin{minipage}{\\textwidth}\n \\begin{center}\n \\resizebox{1.0\\textwidth}{!}{%\n \\input{generated\/raw-records}\n }\n \\end{center}\n \\end{minipage}\n \\caption{Raw captured records, with parameters and output values.}\n \\label{table:raw-records}\n \\end{table}\n\n\n The records then get grouped together by parameters, and the throughput mean, median and stability are computed, as reported in Table~\\ref{table:grouped-records}.\n\n \\begin{table}\n \\begin{minipage}{\\textwidth}\n \\begin{center}\n \\resizebox{1.0\\textwidth}{!}{%\n \\input{generated\/grouped-records}\n }\n
\\end{center}\n \\end{minipage}\n \\caption{Records grouped by target platform, lock algorithm, \\barrier{sc}-only\/\\mbox{\\textsc{VSync}}\\xspace-optimized version, and number of threads.\n Computed values are the median, mean, standard deviation and stability of the \\texttt{throughput} column of Table~\\ref{table:raw-records}.}\n \\label{table:grouped-records}\n \\end{table}\n\n As can be seen in the table, these values are computed for different versions of the algorithm.\n \\texttt{opt} refers to the \\mbox{\\textsc{VSync}}\\xspace-optimized version of the algorithm, while \\texttt{seq} refers to the \\barrier{sc}-only variant.\n\n Mean, median and standard deviation are computed using the usual definitions, while the stability is computed by dividing the maximum throughput by the minimum throughput, effectively giving an indication of the stability of the data set.\n The closer the stability is to $1.00$, the more stable the sample is for these fixed values of the parameters.\n Figure~\\ref{fig:eval-microbench-stability} shows the distribution of stability among the records of the above table.\n As can be observed in the density chart, most observed values are stable.\n\n \\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.6\\textwidth]{generated\/chart-density-stability}\n \\caption{Density of the stability of the different records, per architecture.\n The chart exhibits the fact that most results are very stable ($<1.16$ for stability).}\n \\label{fig:eval-microbench-stability}\n \\end{figure}\n\n Another way to view the records is to group the lines of Table~\\ref{table:grouped-records} by stability values, as done in Table~\\ref{table:stability-records}.\n We see that more than 84\\% of the results have a stability within 10\\% (value $\\leq 1.1$ in the table), which we believe is satisfactory.\n\n \\begin{table}\n \\begin{minipage}{\\textwidth}\n \\begin{center}\n \\input{generated\/stability-table}\n \\end{center}\n \\end{minipage}\n \\caption{Number of experiments categorized by stability.\n The mentioned records are lines of Table~\\ref{table:grouped-records}.}\n \\label{table:stability-records}\n \\end{table}\n\n In practice, we filter out all records whose stability exceeds $1.2$ (\\emph{i.e.}, more than 20\\% spread) to avoid skewing the results.\n Indeed, sometimes the optimization speedup can be in the 0 to 20\\% improvement range, so, statistically speaking, computing speedups from unstable values would make little sense.\n For our evaluations, this means we had to discard less than 7\\% of the results (see Table~\\ref{table:stability-records}).\n\n Instead of simply dropping unstable records, an alternative method would be to redraw experiment samples until the record becomes stable (\\emph{e.g.}, with a stability threshold value of $1.1$).\n However, for time reasons, we did not use this method.\n We believe the results obtained with this method and the method we actually used would be very similar.\n
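\n For reference, here is a small sketch (with invented helper names, not our actual evaluation scripts) of how the stability metric and the filter can be computed from the repeated runs of one configuration:\n\\begin{lstlisting}[style=casecode]\n\/* stability = max throughput \/ min throughput over the repetitions *\/\ndouble stability(const double *tp, int n) {\n double lo = tp[0], hi = tp[0];\n for (int i = 1; i < n; i++) {\n if (tp[i] < lo) lo = tp[i];\n if (tp[i] > hi) hi = tp[i];\n }\n return hi \/ lo;\n}\n\n\/* keep a record only if its spread is at most 20% *\/\nint keep_record(const double *tp, int n) {\n return stability(tp, n) <= 1.2;\n}\n\\end{lstlisting}\n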
\n \\paragraph{Analysis of speedups of \\mbox{\\textsc{VSync}}\\xspace-optimized over \\barrier{sc}-only implementations.}\n\n Then, we use the values in the filtered table of records to compute the speedup $\\frac{T_o}{T_s} - 1$, where $T_o$ and $T_s$ are the median throughputs of the \\mbox{\\textsc{VSync}}\\xspace-optimized and \\barrier{sc}-only variants, respectively.\n Descriptive statistics about the observed speedups are shown in Table~\\ref{tab:eval-microbench-speedup} and the density of the speedup values is shown in Figure~\\ref{fig:eval-microbench-speedups}.\n In the paper, for the sake of space, we only reported maximum observed speedups (the \\texttt{max} column in Table~\\ref{tab:eval-microbench-speedup}).\n We can observe in Figure~\\ref{fig:eval-microbench-speedups} that most speedups are close to 0.\n This effect is mainly due to the highly contended cases (number of threads from $8$ and up), where the impact of optimizing barriers is negligible.\n On the other hand, if we observe the same data split by measured contention level (\\emph{i.e.}, number of threads), as depicted per architecture in Figures~\\ref{fig:eval-microbench-speedups-heatmap-arm} and~\\ref{fig:eval-microbench-speedups-heatmap-x86}, we can analyze the results in finer-grained detail.\n For \\texttt{ARMv8 (taishan200-128c)} (Fig.~\\ref{fig:eval-microbench-speedups-heatmap-arm}), good results are scattered across the different contention levels, but speedups tend to be better at low contention levels (especially the $1$-thread case).\n In the case of \\texttt{x86} (Fig.~\\ref{fig:eval-microbench-speedups-heatmap-x86}), the tremendous low-contention speedup case (up to $7\\times$ for $1$ thread) stands out.\n It is so large that it overshadows the other cases.\n However, the \\texttt{qspinlock} column is clearly better than the others, illustrating that in the \\texttt{x86} case, \\texttt{qspinlock} has \\emph{no negative speedup}.\n\n \\begin{table}\n \\begin{minipage}{\\textwidth}\n \\begin{center}\n \\resizebox{1.0\\textwidth}{!}{%\n \\input{generated\/speedup-table}\n }\n \\end{center}\n \\end{minipage}\n \\caption{Speedups of the \\mbox{\\textsc{VSync}}\\xspace-optimized version of each algorithm over the \\barrier{sc}-only variant.\n This descriptive summary must be read with care (especially the values of \\texttt{mean}), as they are only aggregated from \\emph{our own} experiment samples (our arbitrarily selected thread numbers, etc.).}\n \\label{tab:eval-microbench-speedup}\n \\end{table}\n\n \\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.6\\textwidth]{generated\/chart-density-speedups}\n \\caption{Density of the speedups of the different locks, per architecture.}\n \\label{fig:eval-microbench-speedups}\n \\end{figure}\n\n \\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.6\\textwidth]{generated\/chart-heatmap-speedups-arm}\n \\caption{Heat map showing the speedups of the different locks on \\texttt{ARMv8 (taishan200-128c)}.\\\\\n White squares correspond to data filtered out for instability.}\n \\label{fig:eval-microbench-speedups-heatmap-arm}\n \\end{figure}\n\n \\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.6\\textwidth]{generated\/chart-heatmap-speedups-x86}\n \\caption{Heat map showing the speedups of the different locks on \\texttt{x86\\_64 (gigabyte-96c)}.\\\\\n White squares correspond to data filtered out for instability.}\n \\label{fig:eval-microbench-speedups-heatmap-x86}\n \\end{figure}\n\n \\paragraph{MCS lock comparisons.}\n\n Figure~\\ref{fig:eval-microbench-mcs} compares the performance of several MCS lock implementations on \\texttt{ARMv8 (taishan200-128c)} and \\texttt{x86\\_64 (gigabyte-96c)}.\n As reported in the paper, the different MCS lock implementations are: DPDK\\cite{DPDK}, Concurrency Kit (ck)\\cite{CK}, CertiKOS\\cite{gu2016certikos} and our \\mbox{\\textsc{VSync}}\\xspace-optimized implementation.\n\n \\begin{figure}[t]\n \\centering\n \\includegraphics[width=1.0\\textwidth]{generated\/chart-mcslock}\n \\caption{Performance comparison of different MCS lock implementations on\n
\\texttt{ARMv8 (taishan200-128c)} and \\texttt{x86\\_64 (gigabyte-96c)}.}\n \\label{fig:eval-microbench-mcs}\n \\end{figure}\n\n \\paragraph{Critical and non-critical section sizes.}\n\n We observed a few other things while conducting this campaign of experiments.\n Our benchmark setting allows for additional parameters: \\texttt{cs\\_size} and \\texttt{es\\_size} (not reported in the charts of this report).\n \\begin{itemize}\n \\item The \\texttt{cs\\_size} parameter (for ``critical section size'') allows us to artificially increase (and control) the size of the critical section.\n Instead of only touching one cache line by increasing a counter (as depicted in Listing~\\ref{listing:eval-benchstructure}), we can touch an arbitrary number of cache lines, corresponding to the value of the \\texttt{cs\\_size} parameter.\n\n \\item The \\texttt{es\\_size} parameter allows us to set an arbitrary number of cache lines touched \\emph{outside} the critical section, to simulate different relative sizes of the critical section with regard to the size of the whole program.\n \\end{itemize}\n We observed the following:\n \\begin{enumerate}\n \\item The \\texttt{es\\_size} parameter did not influence the results, meaning that the lock primitive performance and the speedup of the \\mbox{\\textsc{VSync}}\\xspace-optimized over the \\barrier{sc}-only variants are not affected by the size of the program outside the critical section.\n\n \\item The \\texttt{cs\\_size} parameter strongly influenced the results in the following way: the bigger the critical section, the smaller the impact of the barrier optimization.\n Additionally, overall, all locking primitives converge towards the same performance value for an increasing critical section, which is expected as the entry\/exit protocols are negligible relative to a sufficiently large critical section.\n \\end{enumerate}\n From this, we can conclude that barrier optimizations and locking protocols matter especially for small critical sections and fine-grained locking.\n For the final results of the paper, we decided to set \\texttt{cs\\_size} to $1$ and \\texttt{es\\_size} to $0$.\n\n \\paragraph{Hash table benchmarks.}\n\n Linked to these last findings: prior to running our custom-made kernel benchmark module, we tried to use the work of Kashyap \\textit{et al.}~\\cite{shflock2019} (which is publicly available on Github~\\footnote{\\url{https:\/\/github.com\/sslab-gatech\/shfllock\/tree\/master\/benchmarks\/kernel-syncstress}}), but we were not able to produce predictable results.\n Indeed, the variability of such results was very high, and each time we changed a small parameter it produced different output values.\n This happened even for details that should not influence the results, such as changing the linking order of the object modules in the makefile.\n This was unusable for our work, and can be explained in the following way:\n basically, the critical section in the kernel \\emph{syncstress} module of Kashyap~\\textit{et al.}{} accesses nodes of a hash table.\n The data of this hash table is randomly populated.\n However, accessing a hash table is not a predictable operation in terms of run-time, and different accesses can yield very different execution times (especially if the hash table is seeded with different random values at each run).\n This leads to critical-section sizes that differ widely between runs (even with the same parameter values), which made it unusable for comparing different techniques.\n In comparison,\n
our microbenchmark framework, although simpler in its structure, produces very predictable results (with very small deviations and good stability, as showcased above in this section) and allows us to precisely measure the overheads of barriers and the performance of different locking primitive implementations.\n\n\n\n\\section{Await Model Checking in Detail}\n\\label{s:spinning}\n\nAMC is an enhancement of \\emph{stateless model checking} (SMC) capable of handling programs with awaits on WMMs.\nSMC constructs all possible executions of a program, and filters out those inconsistent with respect to the underlying memory model.\nHowever, SMC falls short when the program has infinitely many or non-terminating executions (\\emph{e.g.}, due to await loops) because the check never terminates.\nAMC overcomes this limitation by filtering out executions in which multiple iterations of an await loop read from the same writes.\n\nWe start by introducing basic notation and definitions, including execution graphs, which are used to represent executions.\nNext, we explain how awaits lead to infinitely many and\/or non-terminating executions, and how AMC overcomes these problems.\nWe present sufficient conditions under which AMC correctly verifies programs, including await termination (AT) and safety.\nFinally, we show the integration of AMC into a stateless model checker from the literature.\n\n\\subsection{Notations and Definitions}\n\n\n\\paragraph{\\emph{Executions as graphs.}}\nAn execution graph $G$ is a formal abstraction of executions,\nwhere nodes are events such as reads and writes,\nand the edges indicate (\\po) program order, (\\mo) modification order,\nand (\\rf) read-from relationships, as illustrated in \\cref{ua execution graph}. \nA read event $R^m_T(x,v)$ reads the value $v$ from the variable $x$ by thread $T$ with mode $m$, a write event $W^m_T(x,v)$ updates $x$ with $v$, and $W_{init}(x,v)$ initializes $x$ with $v$. The short notations $R_T(x,v)$ and $W_T(x,v)$ represent the relaxed events $R^{\\barrier{rlx}}_T(x,v)$ and $W^{\\barrier{rlx}}_T(x,v)$, respectively.\nNote that the \\po is identical in \\execution{a} and \\execution{b} because it is\nthe order of events in the program text.\nIn contrast, \\mo and \\rf edges differ; \\emph{e.g.}, in \\execution{a}, $W_{T_1}(l,1)$ precedes $W_{T_2}(l,0)$ in \\mo, while in \\execution{b} it is the other way around.\nFurthermore, the \\rf edges indicate that $W_{T_1}(l,1)$ is never read in \\execution{a}, while it is read by $R_{T_2}(l,1)$ in \\execution{b}.\n\n\\begin{figure}[h]\n\t\\hspace{-5mm}\n\t\\begin{minipage}[b]{.5\\textwidth}\n\t\t\\hspace{-4mm}\n\t\\begin{tabular}{c}\n\t\t\\begin{lstlisting}[language=C,style=mainexample]\nlocked = 0, q = 0;\n\t\t\\end{lstlisting}\n\t\t\\\\[1em]\n\t\t\\begin{tabular}{c||c}\n\t\t\t $T_1: $ \\lstinline[language=C,style=mainexample]|lock| & $T_2: $ \\lstinline[language=C,style=mainexample]|unlock| \\\\\n\t\t\t{\\lstset{showlines=true}\n\t\t\t\\begin{lstlisting}[language=C,style=mainexample]\nlocked = 1;\nq = 1;\nwhile (locked == 1);\n\/* Critical Section *\/\n\t\t\t\\end{lstlisting}}\n\t\t\t&\n\t\t\t{\\lstset{showlines=true}\n\t\t\t\\begin{lstlisting}[language=C,style=mainexample]\nwhile (q == 0);\nlocked = 0;\nassert (locked == 0);\n\n\t\t\\end{lstlisting}}\n\t\t\\end{tabular}\n\t\\end{tabular}\n\n\t\\caption{Awaits in one path of a partial MCS lock.\n
$T_1$ signals \\lstinline[language=C,style=mainexample]|q = 1| to notify $T_2$ that it enqueued, and $T_2$ waits for the notification, then signals \\lstinline[language=C,style=mainexample]|locked = 0| to pass the lock to $T_1$.}\\label{unbounded wait}\n\n\\end{minipage}\\hspace{8pt}\n\\begin{minipage}[b]{.5\\textwidth}\n\t\\centering\n\t\\begin{tabular}{cc}\n\t\t\\begin{tikzpicture}\n\t\t\\node (fignum) at (-1.65,0.1) {\\tiny a};\n\t\t\\draw[black] (fignum) circle [radius=3.5pt];\n\t\n\t\t\\clip (-1.75,-4.3) rectangle (2.08, 0.25);\n\t\n\t\t\\node[event] (INITx) at (-0.76,0) {$W_{init}(l,0)$};\n\t\t\\node[event] (INITy) at (0.76,0) {$W_{init}(q,0)$};\n\t\t\\node[event] (updx) at (-1,-0.9) {$W_{T_1}(l,1)$};\n\t\t\\node[event] (updy) at (-1,-1.7) {$W^{\\barrier{rel}}_{T_1}(q,1)$};\n\t\t\\node[event] (updx2) at (1,-3.3) {$W_{T_2}(l,0)$};\n\t\t\n\t\t\\draw[->,draw=orange] (INITx) -- node[midway,left,orange] {\\small mo} (updx);\n\t\t\\draw[->,draw=orange] (INITy) -- node[right,orange,pos=0.6] {\\small mo} (updy);\n\t\t\\draw[->,draw=orange,inner sep=0] (updx) to[in = 170, out = 210] node[pos=0.7,right,orange] {\\small mo} \n\t\t(updx2);\n\t\t\n\t\t\\draw[->,draw=blue] (updx) -- (updy) node[midway,left,blue] {\\small po};\n\t\t\n\t\n\t\t\\node[event] (y1) at (1,-0.9) {$R^{\\barrier{acq}}_{T_2}(q,0)$};\n\t\t\\node[event] (y2) at (1,-1.7) {$R^{\\barrier{acq}}_{T_2}(q,0)$};\n\t\t\\node[event] (y3) at (1,-2.5) {$R^{\\barrier{acq}}_{T_2}(q,1)$};\n\t\t\n\t\t\\node[event] (x) at (1,-4.1) {$R_{T_2}(l,0)$};\n\t\t\n\t\t\n\t\t\\node[event] (unlock) at (-1,-3.3) {$R_{T_1}(l,0)$};\n\t\t\n\t\t\\draw[->,draw=blue] (y1) -- (y2) node[midway,left,blue] {\\small po};\n\t\t\\draw[->,draw=blue] (y2) -- (y3) node[midway,left,blue] {\\small po};\n\t\t\\draw[->,draw=blue] (y3) -- (updx2) node[midway,left,blue] {\\small po};\n\t\t\\draw[->,draw=blue] (updx2) -- (x) node[midway,left,blue] {\\small po};\n\t\t\n\t\t\\draw[->,draw=blue] (updy) -- (unlock) node[midway,left,blue] {\\small po};\n\t\t\n\t\t\\draw[->,draw=teal] (updx2) to[in = 30, out = 330] node[midway,left ,teal] {\\small rf} (x);\n\t\t\\draw[->,draw=teal] (INITy) to[in = 30, out = 340] node[midway,left ,teal] {\\small rf} (y1);\n\t\t\\draw[->,draw=teal] (INITy) to[in = 30, out = 340] node[midway,right,teal] {\\small rf} (y2);\n\t\t\\draw[->,draw=teal] (updy) -- node[midway,above,teal] {\\small rf} (y3);\n\t\t\\draw[->,draw=teal] (updx2) -- node[midway,above,teal] {\\small rf} (unlock);\n\t\t\n\t\t\\end{tikzpicture}\n\t\t&\n\t\t\\begin{tikzpicture}\n\t\n\t\t\\node (fignum) at (-1.65,0.1) {\\tiny b};\n\t\t\\draw[black] (fignum) circle [radius=3.5pt];\n\t\t\\clip (-1.8,-4.3) rectangle (2.08, 0.25);\n\t\n\t\t\\node[event] (INITx) at (-0.76,0) {$W_{init}(l,0)$};\n\t\t\\node[event] (INITy) at (0.76,0) {$W_{init}(q,0)$};\n\t\t\\node[event] (updx) at (-1,-0.9) {$W_{T_1}(l,1)$};\n\t\t\\node[event] (updy) at (-1,-1.7) {$W^{\\barrier{rel}}_{T_1}(q,1)$};\n\t\t\\node[event] (updx2) at (1,-3.3) {$W_{T_2}(l,0)$};\n\t\t\n\t\t\\draw[->,draw=orange] (INITx) to[out = 310, in=162] node[near end,right,orange] {\\small mo} (updx2);\n\t\t\\draw[->,draw=orange] (INITy) -- node[midway,right,orange,pos=0.6] {\\small mo} (updy);\n\t\n\t\t\n\t\t\\draw[draw=yellow,opacity=0.5,line width=3mm] (updx) -- (updy);\n\t\t\\draw[->,draw=blue,very thick] (updx) -- (updy) node[midway,left,blue] {\\small po};\n\t\t\n\t\n\t\t\\node[event] (y1) at (1,-0.9) {$R^{\\barrier{acq}}_{T_2}(q,0)$};\n\t\t\\node[event] (y2) at (1,-1.7) {$R^{\\barrier{acq}}_{T_2}(q,0)$};\n\t\t\\node[event] (y3) at (1,-2.5) 
{$R^{\\barrier{acq}}_{T_2}(q,1)$};\n\t\t\n\t\t\\node[event] (x) at (1,-4.1) {$R_{T_2}(l,1)$ \\color{red}{\\text{\\ding{55}}}};\n\t\t\n\t\t\\node[event] (unlockf) at (-1,-3.3) {$R_{T_1}(l,1)$};\n\t\t\n\t\t\\node[event] (forever) at (-1,-4) {$\\vdots$};\t\t\n\t\t\n\t\t\\draw[->,draw=blue] (y1) -- (y2) node[midway,left,blue] {\\small po};\n\t\t\\draw[->,draw=blue] (y2) -- (y3) node[midway,left,blue] {\\small po};\n\t\t\\draw[draw=yellow,opacity=0.5,line width=3mm] (y3) -- (updx2);\n\t\t\\draw[->,draw=blue,very thick] (y3) -- (updx2) node[midway,left,blue] {\\small po};\n\t\t\\draw[->,draw=blue] (updx2) -- (x) node[midway,left,blue] {\\small po};\n\t\t\n\t\t\\draw[->,draw=blue] (updy) -- (unlockf) node[midway,left,blue] {\\small po};\n\t\t\\draw[->,draw=blue] (unlockf) -- (forever.north) node[midway,left,blue] {\\small po};\n\t\t\n\t\t\\draw[->,draw=teal] (updx) to[in = 160, out = 211] node[midway,left ,teal] {\\small rf} (x);\n\t\t\\draw[->,draw=teal] (INITy) to[in = 30, out = 340] node[midway,left ,teal]{\\small rf} (y1);\n\t\t\\draw[->,draw=teal] (INITy) to[in = 30, out = 340] node[midway,right,teal] {\\small rf} (y2);\n\t\t\\draw[draw=yellow,opacity=0.5,line width=3mm] (updy) -- (y3);\n\t\t\\draw[->,draw=teal,very thick] (updy) -- node[midway,above,teal] {\\small rf} (y3);\n\t\t\\draw[->,draw=teal] (updx) to[out=214, in=135] \n\t\t\t\t\t\t\t\t\t(unlockf);\n\t\t\n\t\t\\draw[draw=yellow,opacity=0.5,line width=3mm] (updx2) to[in = 214, out = 170, looseness=1.2] (updx);\n\t\t\\draw[->,draw=orange,very thick] (updx2) to[in = 214, out = 170, looseness=1.2] node[pos = 0.3,right,orange] {\\small mo} (updx);\n\t\t\n\t\t\\end{tikzpicture}\n\t\\end{tabular}\n\t\\caption{Two execution graphs of \\cref{unbounded wait} where $l =$ \\lstinline[language=C,style=mainexample]|locked|.} \\label{ua execution graph}\n\\end{minipage}\n\\end{figure}\n\n\n\\paragraph{\\emph{Consistency predicates.}}\nA weak memory model $M$ is defined by a consistency predicate $\\mathrm{cons}_M$ over graphs, where $\\mathrm{cons}_M(G)$ holds iff $G$ is consistent with $M$.\nFor instance, the `IMM' model used by \\mbox{\\textsc{VSync}}\\xspace forbids the cyclic path\\footnote{A path is cyclic if it starts and ends with the same node.} of {\\setlength{\\fboxsep}{0pt}\\colorbox{yellow!50}{highlighted}} edges in \\execution{b} of \\cref{ua execution graph} due to the \\rel\\xspace and \\acq\\xspace modes, as it forbids all cyclic paths consisting of edges in this order: 1) \\po ending in $W^\\barrier{rel}$, 2) \\rf ending in $R^\\barrier{acq}$, 3) \\po, 4) \\mo.\nSuch a path is compactly written as ``\\(\\po ; [W^{\\barrier{rel}}] ; \\rf ; [R^{\\barrier{acq}}] ; \\po ; \\mo\\)'', and is never cyclic in graphs consistent with IMM.\nThus $\\mathrm{cons}_\\mathrm{IMM}(\\execution{b})$ does not hold.\nIf, say, the \\barrier{rel}\\ barriers on the accesses to \\lstinline[language=C,style=mainexample]|q| were removed, the graph would be consistent with IMM.\n\n\n\\begin{figure}\n\t\\centering\n\t\\begin{minipage}{.4\\textwidth}\n \\begin{lstlisting}[language=C]\n\/* lock acquire *\/\ndo {\n atomic_await_neq(&lock, 1);\n} while(atomic_xchg(&lock, 1) != 0);\n\nx++; \/* CS *\/\n\n\/* lock release *\/\natomic_write(&lock, 0);\n\\end{lstlisting}\n\\caption{TTAS lock example.} \\label{fig:ttas-lock}\n\\end{minipage}\n\\end{figure}\n\n\n\\paragraph{\\emph{Awaits}.}\nIntuitively, an await is a special type of loop which waits for a write of another thread.\nTo make this intuition more precise, imagine a demonic scheduler that prioritizes 
threads currently inside awaits.\nUnder such a scheduler, an await has two possible outcomes: either the write of the other thread is currently visible, and the await terminates immediately; or the write of the other thread is not visible.\nIn the latter case, the scheduler continuously prevents the write from becoming visible by never scheduling the writer thread, and hence the await never terminates.\nA more precise definition: an {\\em await} is a loop that, for every possible value of the polled variables\\footnote{%\nPolled variables are the variables read in each loop iteration to evaluate the loop's condition.},\neither exits immediately or loops forever when executed in isolation.\nWe illustrate this with the two loops of the TTAS lock from \\cref{fig:ttas-lock}.\nThe inner loop is an await; to show this, we need to consider every potential value $v$ of \\lstinline[language=C,style=mainexample]|lock|: \nfor $v = 0$, the loop repeats forever, and for $v \\not= 0$ the loop exits during the first iteration.\nThe outer loop is not an await; to show this, we need to find one value $v$ of \\lstinline[language=C,style=mainexample]|lock| for which the loop is not executed infinitely often but also not exited immediately. One such value is $v=1$, for which the thread never reaches the outer loop again after entering the inner loop.\n\n\\paragraph{\\emph{Sets of execution graphs.}}\\label{infinity mc}\nGiven a fair scheduler, awaits either exit after a number of failed iterations or (intentionally) loop forever.\nWe separate execution graphs that satisfy $\\mathrm{cons}_M$ into two sets, $\\mathbb G^F$ and $\\mathbb G^\\infty$.\n$\\mathbb G^F$ is the set of execution graphs where all awaits exit after a finite number of failed iterations.\nIn \\cref{ua execution graph}, for example, \\execution{a} is in $\\mathbb G^F$ since $\\mathrm{cons}_M(\\execution{a})$ holds\nand the await of $T_2$ exits after two failed iterations. \nNote that $\\mathbb G^F$ consists exactly of the graphs in which all awaits terminate, but that does not imply that these graphs are finite: there may still be infinitely many steps \\emph{outside} of the awaits. We will later state a sufficient condition that excludes such cases.\n$\\mathbb G^\\infty$ is the set of the remaining consistent graphs. In each of these graphs, at least one await loops forever. We define:\n\\begin{definition}[\\textbf{Await termination}]\\label{def:at} \\em AT holds iff $\\mathbb G^\\infty$ is $\\emptyset$.\n\\end{definition}\n\\noindent\nDue to the barriers on \\lstinline[language=C,style=mainexample]|q|, $\\mathrm{cons}_\\mathrm{IMM}(\\execution{b})$ does not hold, and hence \\execution{b} is not in $\\mathbb G^\\infty$. In fact, with these $\\barrier{acq}$ and $\\barrier{rel}$ barriers, all graphs with an infinite number of failed iterations violate consistency, and hence $\\mathbb G^\\infty$ is empty; AT is not violated.\n\nNote that we can splice an additional failed iteration into \\execution{a} by repeating $R^\\barrier{acq}_{T_2}(q,0)$, resulting in a new graph in $\\mathbb G^F$. We generalize this idea. 
Let $\\mathbb{G}_k \\subseteq \\mathbb{G}^F$ be the set of consistent\nexecution graphs with a total\\footnote{We count here the sum of failed iterations of all executed instances of awaits, including multiple instances by the same thread, \\emph{e.g.}, when the inner await loop in the TTAS lock from \\cref{fig:ttas-lock} is executed multiple times.} of $k \\in \\{0,1,\\ldots\\}$ failed iterations.\nThus, $\\mathbb G_0$ is the set of consistent graphs with no failed iterations. With two failed iterations of $T_2$'s await and zero of $T_1$'s await, $\\execution a$ has a total of $2+0=2$ failed iterations of awaits, and is thus in $\\mathbb{G}_2$.\nLet now $G \\in \\mathbb{G}_k$ with $k>0$, \\emph{i.e.}, $G$ has at least one failed iteration; we can always repeat one of its failed iterations to obtain a graph $G' \\in \\mathbb{G}_{k+1}$ due to the non-deterministic number of iterations of await loops.\nSince all $\\mathbb G_k$ are disjoint, their union (denoted by $\\mathbb G^F = \\biguplus_{k \\in \\Set{0,1,\\ldots}} \\mathbb G_k$) is infinite despite every set $\\mathbb G_k$ being finite.\n\n\n\\subsection{Verifying Await Termination with AMC}\nState-of-the-art stateless model checkers~\\cite{GenMC,HMC,RCMC} cannot construct all execution graphs in $\\mathbb G^F$ or any in $\\mathbb G^\\infty$.\nIn order for SMC to complete in finite time, the user has to limit the search space to a finite subset of execution graphs in $\\mathbb G^F$.\nConsequently, SMC cannot verify AT (\\cref{def:at}) and can only verify safety within this subset of execution graphs.\n\n\n\\begin{figure*}\n\t\\centering\n\t\\hfill\n\t\\begin{minipage}{.30\\linewidth}\n\t\t\\centering $T$\n\t\t{\\lstset{showlines=true,}\t\t\t\n\t\t\t\\begin{lstlisting}[language=C, style=mainexample, escapechar=$]\n\/* lock acquire *\/\ndo { $\\bh 1$d = 2;$\\eh 1$\n $\\!\\!\\bh 2$while (d--);$\\eh 2$\n} await_while$\\label{whiletas}$\n (atomic_xchg(lock, 1) != 0);\nassert (d==0);\n\\end{lstlisting}}\n\t\\end{minipage}\n\t\\hfill\n\t\\begin{minipage}{.63\\linewidth}\n\t\t\\resizebox{\\textwidth}{!}{\n\t\t\\small \n\t\t\\begin{tikzpicture}\n\t\t\\node[event] (aWd0) at (0,0) {$W_{T}(\\texttt{d},2)$};\n\t\t\\node (IL) at (aWd0) at ($(aWd0)+(-0.9,0.4)$) {\\color{purple}{$B$}:};\n\t\t\\begin{scope}[local bounding box=ingroup1]\n\t\t\\node[event] (aRd1) at ($(aWd0)+(1.8,0)$) {$R_{T}(\\texttt{d},2)$};\n\t\t\\node[event] (aWd1) at ($(aRd1)+(1.6,0)$) {$W_{T}(\\texttt{d},1)$};\n\t\t\\end{scope}\n\t\t\\begin{scope}[local bounding box=ingroup2]\n\t\t\\node[event] (aRd2) at ($(aWd1)+(1.8,0)$) {$R_{T}(\\texttt{d},1)$};\n\t\t\\node[event] (aWd2) at ($(aRd2)+(1.6,0)$) {$W_{T}(\\texttt{d},0)$};\n\t\t\\end{scope}\n\t\t\\begin{scope}[local bounding box=ingroup3]\n\t\t\\node[event] (aRd3) at ($(aWd2)+(1.8,0)$) {$R_{T}(\\texttt{d},0)$};\n\t\t\\end{scope}\n\t\t\\node (rightsep) at ($(aRd3)+(0.8cm,0cm)$) {};\n\t\t\\node[fit=(aWd0)(rightsep), draw=yellow, ultra thick, name path=testX, rounded rectangle, inner xsep=2pt,inner ysep=10pt] (X) {};\n\t\n\t\t\\draw[densely dotted] ($(ingroup1.north west) + (-2.5pt, 5.5pt)$) rectangle ($(ingroup1.south east) + (2.5pt, -5.5pt)$);\n\t\t\\draw[densely dotted] ($(ingroup2.north west) + (-2.5pt, 5.5pt)$) rectangle ($(ingroup2.south east) + (2.5pt, -5.5pt)$);\n\t\t\\draw[] ($(ingroup3.north west) + (-2.5pt, 5.5pt)$) rectangle ($(ingroup3.south east) + (2.5pt, -5.5pt)$);\n\t\n\t\t\\draw[->,draw=blue] (aWd0) -- node[midway,below,blue] {\\small 
po} (aRd1);\n\t\t\\draw[->,draw=blue] (aRd1) -- node[midway,below,blue] {\\small po} (aWd1);\n\t\t\\draw[->,draw=blue] (aWd1) -- node[midway,below,blue] {\\small po} (aRd2);\n\t\t\\draw[->,draw=blue] (aRd2) -- node[midway,below,blue] {\\small po} (aWd2);\n\t\t\\draw[->,draw=blue] (aWd2) -- node[midway,below,blue] {\\small po} (aRd3);\n\t\t\n\t\t\\draw[->,draw=teal] (aWd0) to[in = 155, out = 25] node[midway,above,teal] {\\small rf} (aRd1);\n\t\t\\draw[->,draw=teal] (aWd1) to[in = 155, out = 25] node[midway,above] {\\small \\rf $\\lightning$} (aRd2);\n\t\t\\draw[->,draw=teal] (aWd2) to[in = 155, out = 25] node[midway,above] {\\small \\rf $\\lightning$} (aRd3);\n\t\t\\coordinate (AWD0) at (current bounding box.center);\n\t\t\\coordinate (FinalD) at (aWd2.south west);\n\t\t\n\t\n\t\t\n\t\t\\begin{scope}[local bounding box=outgroup1]\n\t\t\\node[event] (aB) at ($(aWd0)+(-1,-1.5)$) {\\color{purple}{$B$}};\n\t\t\\node[event] (aRx) at ($(aB)+(1.3,0)$) {$R^\\barrier{acq}_{T}(\\texttt{lock} ,1)$};\n\t\t\\end{scope}\n\t\t\\draw[densely dotted] ($(outgroup1.north west) + (-2.5pt, 5.5pt)$) rectangle ($(outgroup1.south east) + (2.5pt, -5.5pt)$);\n\t\t\\draw[->,draw=blue] (aB) -- node[midway,below,blue] {\\small po} (aRx);\n\t\t\n\t\t\n\t\t\\begin{scope}[local bounding box=outgroup2]\n\t\t\\node[event] (bB) at ($(aRx)+(1.5,0)$) {\\color{purple}{$B$}};\n\t\t\\node[event] (bRx) at ($(bB)+(1.3,0)$) {$R^\\barrier{acq}_{T}(\\texttt{lock},1)$};\n\t\t\\end{scope}\n\t\t\\draw[densely dotted] ($(outgroup2.north west) + (-2.5pt, 5.5pt)$) rectangle ($(outgroup2.south east) + (2.5pt, -5.5pt)$);\n\t\t\\draw[->,draw=blue] (aRx) -- node[midway,below,blue] {\\small po} (bB);\n\t\t\\draw[->,draw=blue] (bB) -- node[midway,below,blue] {\\small po} (bRx);\n\t\t\n\t\t\\begin{scope}[local bounding box=outgroup3]\n\t\t\\node[event] (cB) at ($(bRx)+(1.5,0)$) {\\color{purple}{$B$}};\n\t\t\\node[event] (cRx) at ($(cB)+(1.3,0)$) {$R^\\barrier{acq}_{T}(\\texttt{lock},0)$};\n\t\t\\node[event] (cWx) at ($(cRx)+(1.9,0)$) {$W_{T}(\\texttt{lock},1)$};\n\t\t\\end{scope}\n\t\t\\coordinate (AB) at (cB);\n\t\t\\draw[yellow, ultra thick, name path=testY] (AB) circle[radius=8pt];\n\t\t\n\t\t\\draw[] ($(outgroup3.north west) + (-2.5pt, 5.5pt)$) rectangle ($(outgroup3.south east) + (2.5pt, -5.5pt)$);\n\t\t\\draw[->,draw=blue] (bRx) -- node[midway,below,blue] {\\small po} (cB);\n\t\t\\draw[->,draw=blue] (cB) -- node[midway,below,blue] {\\small po} (cRx);\n\t\t\\draw[->,draw=blue] (cRx) -- node[midway,below,blue] {\\small po} (cWx);\n\t\t\n\t\t\n\t\t\\node[event] (Rd) at ($(cWx)+(1.8,0)$) {$R_{T}(\\texttt{d},0)$};\n\t\t\\coordinate (FINALB) at (cB.north east);\n\t\t\\draw[->,draw=blue] (cWx) -- node[midway,below,blue] {\\small po} (Rd);\n\t\t\\draw[->,draw=teal] (cB.north east) to[out=20, in=160] node[midway,above,teal] {\\small rf} (Rd.north west);\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\path[name path=connectfigs] (AB) -- (AWD0);\n\t\t\\node[name intersections={of=testX and connectfigs, by={INT1}}] (int1) at ($(INT1)$) {};\n\t\t\\draw[ultra thick,yellow,name intersections={of=testY and connectfigs, by={INT2}}] (int1.center) -- (INT2);\n\t\t\\draw[draw=teal] (FinalD) -- (FINALB);\t\t\n\t\t\\end{tikzpicture}\n\t\t\\begin{tikzpicture}[overlay,remember picture]\n\t\t\\draw[yellow,line width=10pt,opacity=0.5] (begin highlight 1) -- (end highlight 1);\n\t\t\\draw[yellow,line width=10pt,opacity=0.5] (begin highlight 2) -- (end highlight 2);\n\t\t\\end{tikzpicture}\n\t\t}\n\t\\end{minipage}\n\t\\caption{Execution graph {\\color{purple}{$B$}} represents 
the inner loop of $T$ (marked \\colorbox{yellow!50}{yellow}). Failed await iterations are indicated with dotted boxes, the final (non-failed) iteration with a solid box. The inner loop of $T$ violates the Bounded-Effect principle, as \\rf-edges (marked with $\\lightning$) leave failed await iterations. The outer loop obeys the principle, as \\rf-edges only leave the final await iteration.} \\label{Bounded-Effect principle violated}\n\\end{figure*}\n\n\\paragraph{\\em Key challenges.} For SMC to become feasible in our problem domain, we need to solve three key challenges:\n\\begin{description}[topsep=2pt, parsep=4pt, itemsep=0pt]\n\t\\item[Infinity:] We need to produce an answer in finite time without a user-specified search space, even though the search space $\\mathbb G^F \\cup \\mathbb G^\\infty$ is infinite.\n\t\n\t\\item[Soundness:] We need to make sure not to miss any execution graph that may potentially uncover a safety bug.\n\t\\item[Await termination:] We need to verify that $\\mathbb G^\\infty$ is $\\emptyset$.\n\\end{description}\nUnder certain conditions specified later, AMC overcomes these\nchallenges through three crucial implications:\n\\begin{enumerate}[topsep=2pt, parsep=4pt, itemsep=0pt]\n\t\\item The infinite set $\\mathbb G^F$ is collapsed into a finite set of finite execution graphs $\\ensuremath{\\mathbb G_\\ast^F} \\subset \\mathbb G^F$.\n\tMoreover, the infinite execution graphs in $\\mathbb G^\\infty$ are collapsed into finite execution graphs in a (possibly infinite) set \\ensuremath{\\mathbb G_\\ast^\\infty}{}.\n\tAMC explores at most all graphs in \\ensuremath{\\mathbb G_\\ast^F}{} and up to one graph in \\ensuremath{\\mathbb G_\\ast^\\infty}{}.\n\\item For all $G\\in \\mathbb G^F$, there exists $G'\\in \\ensuremath{\\mathbb G_\\ast^F}{}$ such that $G$ and $G'$ are equivalent. 
Thus, bugs present in $\\mathbb G^F$ are also in \\ensuremath{\\mathbb G_\\ast^F}{}.\n\t\\item Detecting whether there exists a finite execution graph in \\ensuremath{\\mathbb G_\\ast^\\infty}{} is sufficient to conclude whether $\\mathbb G^\\infty$ is empty.\n\n\tThus AMC can stop after exploring one graph in the set \\ensuremath{\\mathbb G_\\ast^\\infty}{} and report an AT violation, and if AMC does not come across such a graph, AT is not violated.\n\\end{enumerate}\nWe now explain how AMC achieves these three implications, as well as the conditions under which it does so.\n\n\\paragraph{\\em The key to AMC.}\n\\begin{figure}[b!]\n\t\\centering\n\t\\begin{tabular}{cc}\n\t\t\\multicolumn{1}{c|}{\n\t\t\\begin{tikzpicture}[baseline=-60pt,auto]\n\t\t\\node (fignum) at (-0.6,0.1) {\\tiny \\textalpha};\n\t\t\\draw[black] (fignum) circle [radius=3.5pt];\n\t\n\t\t\\node[event] (INIT) at (0.9,0) {$W_\\mathit{init}(l,0)$};\n\t\t\\node[event] (x1) at (0,-0.65) {$W_{T_1}(l,1)$};\n\t\t\\node[event] (x2) at (1.8,-0.6\n\t\t) {$W_{T_2}(l,0)$};\n\t\t\n\t\t\\draw[->,draw=orange] (INIT) -- (x1) node[midway,left,orange] {\\small mo};\n\t\t\\draw[->,draw=orange] (x1) -- (x2) node[midway,above,orange] {\\small mo};\n\t\t\n\t\n\t\t\\node[event] (ra) at (0,-1.6) {$R_{T_1}(l,1)$};\n\t\t\\draw[->,draw=teal] (x1) to[out=225,in=135] node[left,teal] {\\small rf} (ra);\n\t\t\\draw[->,draw=blue] (x1) -- (ra) node[midway,right,blue] {\\small po};\n\t\t\n\t\t\\node[event] (rb) at (0,-2.35) {$R_{T_1}(l,1)$};\n\t\t\\draw[->,draw=blue] (ra) -- (rb) node[midway,right,blue] {\\small po};\n\t\t\\draw[->,draw=teal] (x1) to[out=225,in=135] (rb);\n\t\t\n\t\t\n\t\t\\node[event] (rc) at (0,-3.1) {$R_{T_1}(l,0)$};\n\t\t\\draw[->,draw=blue] (rb) -- (rc) node[midway,right,blue] {\\small po};\n\t\t\\draw[->,draw=teal] (x2) to[out=255,in=25] node[midway,above,teal] {\\small rf} (rc);\n\t\t\n\t\t\\end{tikzpicture}} & \n\t\t\\begin{tikzpicture}[baseline=-60pt]\n\t\t\\node (fignum) at (-0.6,0.1) {\\tiny \\textbeta};\n\t\t\\draw[black] (fignum) circle [radius=3.5pt];\n\t\n\t\t\\node[event] (INIT) at (0.9,0) {$W_\\mathit{init}(l,0)$};\n\t\t\\node[event] (x1) at (0,-0.65) {$W_{T_1}(l,1)$};\n\t\t\\node[event] (x2) at (1.8,-0.65) {$W_{T_2}(l,0)$};\n\t\t\n\t\t\\draw[->,draw=orange] (INIT) -- (x2) node[midway,right,orange] {\\small mo};\n\t\t\\draw[->,draw=orange] (x2) -- (x1) node[midway,above,orange] {\\small mo};\n\t\t\n\t\n\t\t\\node[event] (ra) at (0,-1.6) {$R_{T_1}(l,1)$};\n\t\t\\draw[->,draw=teal] (x1) to[out=225,in=135] node[left,teal] {\\small rf} (ra);\n\t\t\\draw[->,draw=blue] (x1) -- (ra) node[midway,right,blue] {\\small po};\n\t\t\n\t\t\\node[event] (rb) at (0,-2.35) {$R_{T_1}(l,\\lightning)$};\n\t\t\\draw[->,draw=blue] (ra) -- (rb) node[midway,right,blue] {\\small po};\n\t\t\n\t\t\\end{tikzpicture} \\\\\n\t\t\\hline\n\t\t\\begin{tikzpicture}\n\t\t\\node (fignum) at (-0.6,0.1) {\\tiny 1};\n\t\t\\draw[black] (fignum) circle [radius=3.5pt];\n\t\n\t\t\\node[event] (INIT) at (0.9,0) {$W_\\mathit{init}(l,0)$};\n\t\t\\node[event] (x1) at (0,-0.65) {$W_{T_1}(l,1)$};\n\t\t\\node[event] (x2) at (1.8,-0.65) {$W_{T_2}(l,0)$};\n\t\t\n\t\t\\draw[->,draw=orange] (INIT) -- (x1) node[midway,left,orange] {\\small mo};\n\t\t\\draw[->,draw=orange] (x1) -- (x2) node[midway,above,orange] {\\small mo};\n\t\t\n\t\n\t\t\\node[event] (ra) at (0,-1.6) {$R_{T_1}(l,1)$};\n\t\t\\draw[->,draw=teal] (x1) to[out=225,in=135] node[left,teal] {\\small rf} (ra);\n\t\t\\draw[->,draw=blue] (x1) -- (ra) node[midway,right,blue] {\\small po};\n\t\t\n\t\t\\node[event] (rb) 
at (0,-2.35) {$R_{T_1}(l,0)$};\n\t\t\\draw[->,draw=blue] (ra) -- (rb) node[midway,right,blue] {\\small po};\n\t\t\\draw[->,draw=teal] (x2) to[out=235,in=25] node[midway,above,teal] {\\small rf} (rb);\n\t\t\n\t\t\\end{tikzpicture} &\n\t\t\\begin{tikzpicture}\n\t\t\\node (fignum) at (-0.6,0.1) {\\tiny 2};\n\t\t\\draw[black] (fignum) circle [radius=3.5pt];\n\t\n\t\t\\node[event] (INIT) at (0.9,0) {$W_\\mathit{init}(l,0)$};\n\t\t\\node[event] (x1) at (0,-0.65) {$W_{T_1}(l,1)$};\n\t\t\\node[event] (x2) at (1.8,-0.65) {$W_{T_2}(l,0)$};\n\t\t\n\t\t\\draw[->,draw=orange] (INIT) -- (x1) node[midway,left,orange] {\\small mo};\n\t\t\\draw[->,draw=orange] (x1) -- (x2) node[midway,above,orange] {\\small mo};\n\t\t\n\t\n\t\t\\node[event] (ra) at (0,-1.6) {$R_{T_1}(l,0)$};\n\t\t\\draw[->,draw=teal] (x2) to[out=245,in=15] node[midway,above,teal] {\\small rf} (ra);\n\t\t\\draw[->,draw=blue] (x1) -- (ra) node[midway,right,blue] {\\small po};\n\t\t\n\t\t\t\t\\node[event, draw=none] (rb) at (0,-2.35) {\\phantom{$R_{T_1}(l,0)$}};\n\t\t\\end{tikzpicture}\n\t\\end{tabular}\n\t\\caption{Execution graphs of \\cref{unbounded wait} where $l =$ \\lstinline[language=C,style=mainexample]|locked|.} \\label{await AP}\n\\end{figure}\n\n\\begin{figure*}[t]\n\t\\centering\n\t\\resizebox{\\textwidth}{!}{\n\t\\begin{tikzpicture}[scale = 0.75, every node\/.style={scale=0.75}, auto]\n\t\\node [cloud] (start) {$S.\\mathit{push}(G_{init})$};\n\t\\node [decision, below right =1.8cm and 0.2cm of start.center] (done) {$\\left| S \\right| > 0$};\n\t\\node [block, right = 0.2cm of done] (pop) {$G := S.\\mathit{pop}()$};\n\t\\node [decision, right= 0.2cm of pop,align=center] (cons) {$\\mathrm{cons}_M(G)$\\\\$\\land \\neg\\mathrm{W}(G)$};\n\t\\draw [draw=highlightnewstuffcolor,line width=4pt,opacity=0.5]($(cons)+(0,-0.2)$) ellipse (0.65cm and 0.35cm);\n\n\t\\node [decision, above= 0.2cm of cons] (term) {$\\mathcal T_G \\not= \\emptyset$};\n\t\\node [decision, left= 0.2cm of term,align=center] (AT) {$\\exists r. \\bot{\\overset\\rf\\rightarrow}r\\in\\!G$};\n\t\\draw [draw=highlightnewstuffcolor,line width=4pt,opacity=0.5](AT) ellipse (1.2cm and 0.5cm);\n\t\\node [block, right = 0.5cm of term] (pick) {Pick $T \\in \\mathcal T_G$};\n\t\\node [decision, below = 0.2cm of pick,inner sep =-4pt,align=center] (switch) {Next\\\\instruction of\\\\$T$};\n\t\n\t\\path [line] (cons) -- node [midway] {yes} (term);\n\t\n\t\\node [block,right=1.8 cm of switch.center,align=left] (read) {for all $W^{m'}_U(x,v) \\in G$:\\\\\n\t\t\\hspace{1em}$S.\\mathit{push}(G[W^{m'}_U(x,v) \\!\\overset{\\rf}\\rightarrow\\! 
R^{m}_T(x,v)])$}; \n\t\\path [line] (switch) -- node [above, pos=0.42,align=center] {R: reads\\\\from $x$} (read);\n\t\n\t\n\t\\node [block,below=0.1 cm of read,align=left] (write) {for all $G' \\in \\mathsf{CalcRev.}(G,W^m_T(x,v))$:\n\t\t\\\\\\hspace{1em}$S.\\mathit{push}(G')$}; \n\t\\path [line] (switch.south east) |- node [below, pos=0.65] {W: writes $v$ to $x$} (write);\n\t\n\t\n\t\\node [decision, right = 0.2cm of read,align=center, inner sep=-2pt] (inUA) {read\\\\in await\\\\loop};\n\t\\node [block, right = 0.3 cm of inUA, align=left] (UAread) {$S.\\mathit{push}(G[\\bot \\overset{\\rf}\\rightarrow R^{m}_T(x,\\lightning)])$};\n\t\n\t\\node [block,above = 0.1 cm of read,align=left,fill=red!50,text=black] (error) {report $G$ as a\\\\counterexample}; \n\t\n\t\\path [name path=angledpat\n\t] (switch.north east) -- ++ (45:2cm);\n\t\\path [name path=horizontalpat\n\t] (error) -- +(-4cm,0);\n\t\\draw [->,name intersections={of=angledpath and horizontalpath, by=X}] (switch.north east) -- (X) -- node {F: assertion $\\lightning$} (error);\n\t\n\t\\path [line] (AT.north) |- +(0,0.1cm) -| node [near start] (ATyes) {yes (\\textbf{AT violation})} (error);\n\t\\draw [draw=highlightnewstuffcolor,line width=4pt,opacity=0.5] (ATyes) ellipse (2.5cm and 0.5cm);\n\t\n\t\\path [line] (term) -- node [above, near start] {no} (AT);\n\t\\path [line] (AT) -| node [near start] {no} (done);\n\t\\path [line] (term) -- node {yes} (pick);\n\t\\path [line] (pick) -- (switch);\n\t\\path [line] (pop) -- (cons);\n\t\\path [line] (done) -- node [pos=0.38] {yes} (pop);\n\t\n\t\\path [line] (read) -- (inUA);\n\t\\path [line] (inUA) -- node [pos=0.35] {yes} (UAread);\n\t\n\t\\path [line, name path=returnarrow] (UAread) -- +(0,-2cm) -| (done);\n\t\\node [block,below left= 0.5cm and 0.1cm of done,fill=blue!40,text=black] (end) {report success};\n\t\\clip (current bounding box.south west) rectangle (current bounding box.north east);\n\t\n\t\n\t\\path [name path=downinUA] (inUA) -- +(0,-2cm);\n\t\\path [name path=downwrite] (write) -- +(0,-2cm);\n\t\\path [name path=downcons] (cons) -- +(0,-2cm);\n\t\n\t\\path [line, name intersections={of=downinUA and returnarrow, by=XinUA}] (inUA) -- node{no} (XinUA);\n\t\\path [line, name intersections={of=downwrite and returnarrow, by=Xwrite}] (write) -- (Xwrite);\n\t\\path [line, name intersections={of=downcons and returnarrow, by=Xcons}] (cons) -- node{no} (Xcons);\n\t\\draw [draw=highlightnewstuffcolor,line width=4pt,opacity=0.5] ($(inUA.west)!0.5!(UAread.east)$) ellipse (3.1cm and 1cm);\n\t\n\t\n\t\\path [line] (done.west) -| node [below,pos=0.1] {no} (end);\n\t\\path [line] (start) |- (done.north west);\n\t\\end{tikzpicture}\n}\n\t\\caption{AMC exploration}\\label{exploration algorithm}\n\\end{figure*}\nIn contrast to existing SMCs, AMC filters out execution graphs that contain\nawaits where multiple iterations read from the same writes to the polled variables.\nThis idea is captured by the predicate $\\mathrm{W}(G)$ which defines wasteful executions.\n\\begin{definition} [\\textbf{Wasteful}]\\em An execution graph $G$ is wasteful, \\emph{i.e.}, $\\mathrm{W}(G)$ holds, if an await in $G$ reads the same combination of writes in two consecutive iterations.\n\\end{definition}\n\nAMC does not generate wasteful executions, as they do not add any additional information.\nFor instance, AMC does not generate execution graph $\\execution{\\textalpha} \\in \\mathbb G_2$ from \\cref{await AP} because $\\mathrm{W}(\\execution\\textalpha)$ holds:\n$T_1$ reads from its own write twice in the 
await.\nSimilarly, AMC does not generate any of the infinitely many variations of \\execution{\\textalpha} in which $T_1$ reads even more often from that write.\nInstead, if we remove all references to \\lstinline[language=C,style=mainexample]|q| in the program of \\cref{unbounded wait}, AMC generates only the two execution graphs in $\\mathbb G_\\ast^F = \\Set{\\execution{1},\n\\execution{2} }$, in which each write is read at most once by the await of $T_1$.\n$T_1$ can read from at most two different writes, thus there is at most one failed iteration of the await in the execution graphs in \\ensuremath{\\mathbb G_\\ast^F}{}, and we have $\\ensuremath{\\mathbb G_\\ast^F}{} \\subseteq \\biguplus_{k\\in\\Set{0,1}}\\mathbb G_k$; in general, if there are at most $n \\in \\mathbb{N}$ writes each await can read from, there are at most\n$n-1$ failed iterations of any await in graphs in \n$\\mathbb{G}^F_\\ast$.\nIf there are at most $a \\in \\mathbb N$ executed instances of awaits, we have\n$\\ensuremath{\\mathbb G_\\ast^F} \\subseteq \\biguplus_{k\\in\\Set{0,\\ldots,a \\cdot (n-1)}}\\mathbb G_k$ which is a union of a finite number of finite sets and thus finite. We will later define sufficient conditions to ensure this.\n\nWe proceed to discuss how AMC discovers AT violations.\nConsider execution graph \\execution{\\textbeta}.\nIn the first iteration of the await, $T_1$ reads from $W_{T_1}(l,1)$.\nIn the next iteration, $T_1$'s read has no incoming \\rf-edge; \ncoherence forbids $T_1$ from reading an older write and the\nawait progress condition forbids it from reading the same write.\nSince there is no further write to the same location, AMC detects an AT violation and uses the finite graph \\execution{\\textbeta} as evidence.\nIn general, if the \\mo of every polled variable is finite, then AT violations from $\\mathbb G^\\infty$ are represented by graphs in $\\ensuremath{\\mathbb G_\\ast^\\infty}$ where some read has no incoming \\rf-edge. AMC exploits this fact to detect AT violations. \n\n\\paragraph{\\em Conditions of AMC.} \\label{amc conditions}\nState-of-the-art SMC only terminates and produces correct results for terminating and loop-free programs\\footnote{Exploration depths are often used to transform programs with loops into loop-free programs, potentially changing the behavior of the program.}.\nAMC extends the domain of SMC to a fragment of looping and\/or non-terminating programs,\non which AMC not only produces correct results but can also decide termination.\nWith the generic client code provided by \\mbox{\\textsc{VSync}}\\xspace, all synchronization primitives we have studied are in this fragment, showing it is practically useful.\nThe fragment includes\nall programs satisfying the following two principles:\n\\begin{description}[topsep=2pt, parsep=4pt, itemsep=0pt]\n\t\\item[Bounded-Length Principle:]\n\t\tThere is a bound $b$ (chosen globally for the program) so that all executions in $\\mathbb{G}_1$ of the program have length $\\le b$.\n\\item[Bounded-Effect Principle:] The effect of a failed await iteration is limited to that iteration. 
\n\\end{description}\n\nInformally, \\textbf{the Bounded-Length principle} means that the number of execution steps outside of awaits is bounded, and each individual iteration of a failed await is also bounded.\nObviously, infinite loops in the client code are disallowed by the Bounded-Length principle.\n\n\n\\textbf{The Bounded-Effect principle} means that side-effects of failed await iterations must not be referenced by subsequent loop iterations, by other threads, or outside the loop.\nThe principle can be defined more precisely in terms of execution graphs: \\rf-edges starting with writes\\footnote{Only writes that change the value of the variable matter here.} generated by a failed await iteration must go to read events that are generated in the same iteration.\n\\Cref{Bounded-Effect principle violated} illustrates the principle: \\rf-edges from decrements in the failed iterations of the loop body {\\color{purple}{$B$}} go to subsequent iterations of the loop, but for the outer loop only the final iteration has outgoing \\rf-edges.\nThe Bounded-Effect principle allows removing any failed iteration from a graph without affecting the rest of the graph since the effects of the failed iteration are never referenced outside the iteration.\nThis implies that any bugs in graphs from $\\mathbb{G}^F$ are also present in graphs in $\\mathbb G_1$.\nFurthermore, if the Bounded-Effect principle and the Bounded-Length principle hold,\nthen graphs in $\\mathbb G_k$ are bounded for every $k$.\nThe bound for $\\mathbb G_k$ can be computed as \\mbox{$b + (k-1) \\cdot x$} where $b$ is the bound for $\\mathbb G_1$ and $x$ is the maximum number of steps in a failed iteration of an await in $\\mathbb G_1$. \n\nThe two principles jointly imply that {\\mo}s and the number of awaits are bounded, thus, as discussed before, $\\ensuremath{\\mathbb G_\\ast^F}$ is a finite set and $\\ensuremath{\\mathbb G_\\ast^\\infty}$ contains only finite graphs, and AMC always terminates.\nIn synchronization primitives, awaits either just poll a variable (without side effects) or perform some operation which only changes global state if it succeeds, \\emph{e.g.},\n\\lstinline[language=C,style=mainexample]{await_while(q==0);} \\quad or \\quad \\lstinline[language=C,style=mainexample]{await_while(!trylock(&L));}\nThese awaits satisfy the Bounded-Effect principle. The former does not have any side effects. The latter encapsulates its local side effects inside \\lstinline[language=C,style=mainexample]|trylock(&L)|, which therefore cannot leave a failed iteration of the loop. A global side effect (\\emph{i.e.}, acquiring the lock) only occurs in the last iteration of the await (cf. 
\\cref{Bounded-Effect principle violated}).\nWhen called in our generic client code, synchronization primitives also satisfy the Bounded-Length principle: the client code invokes the functions of the primitives only a bounded number of times, and each function of the primitives is also bounded.\n\n\n\\paragraph{\\em AMC Correctness.}\nFor programs which satisfy {the Bounded-Length principle} and {the Bounded-Effect principle}, 1) AMC terminates, 2) AMC detects every possible safety violation, 3) AMC detects every possible non-terminating await, and 4) AMC has no false positives.\nSee \\cref{s:amc} for the formal proof.\n\n\n\n\\subsection{Implementing AMC} \\label{AMCawaitwhile}\n\nWe implement AMC on top of GenMC \\cite{GenMC,HMC}, a highly\nadvanced SMC from the literature.\nThe exploration algorithm in \\cref{exploration algorithm} extends GenMC's algorithm with the {\\setlength{\\fboxsep}{0pt}\\colorbox{yellow!50}{highlighted}} essential changes: 1) detecting AT violations through reads with no incoming \\rf-edge; 2) checking if $\\mathrm{W}(G)$ holds to filter out graphs $G$ in which an await reads from the same writes in multiple iterations.\nAMC builds up execution graphs through a kind of depth-first search, starting with an empty graph $G_\\mathit{init}$, which is extended step-by-step with new events and edges.\nThe search is driven by a stack $S$ of possibly incomplete and\/or inconsistent graphs that is initialized to contain only $G_\\mathit{init}$.\n\nEach iteration of the exploration pops a graph $G$ from $S$. If the graph $G$\nviolates the consistency predicate $\\mathrm{cons}_M$\nor is wasteful (\\emph{i.e.}, $\\mathrm{W}(G)$ holds), it is discarded and the iteration ends.\nOtherwise, a program state is reconstructed by emulating the execution of threads until every thread executed all its events in $G$; \\emph{e.g.}, if a thread $U$ executes a read instruction that corresponds to $R_U(x,0)$ in $G$, the emulator looks into $G$ for the corresponding read event and returns the value read by the event, in this case $0$.\nWe denote the set of runnable threads in the reconstructed program state by $\\mathcal T_G$.\nInitially, all threads are runnable; a thread is removed from $\\mathcal T_G$ once it terminates or if it is stuck in an await.\nIf the set is not empty, we can explore further and pick some arbitrary thread $T \\in \\mathcal T_G$ to run next.\nWe emulate the next instruction of $T$ in the reconstructed program state.\nFor the sake of brevity, we discuss only three types of instructions:\nfailed assertions\\footnote{The assertion expression \\lstinline[language=C,style=mainexample]|assert(x==0)| consists of at least two instructions: the first reads \\lstinline[language=C,style=mainexample]|x|, and the second just compares the result to zero and potentially fails the assertion. Note that once the \\rf-edge for the event corresponding to the first instruction has been fixed, whether the second instruction is a failed assertion or not is also fixed.}, writes, and reads.\n\\begin{description}[topsep=2pt, parsep=4pt, itemsep=0pt]\n\t\\item[F:] an assertion failed. We stop the exploration and report $G$. \n\\end{description}\nIf the instruction executes a read or write, a new graph with the corresponding event should be generated. Usually there are several options for the event; for each option, a new graph is generated and pushed on a stack $S$. In particular:\n\\begin{description}[topsep=2pt, parsep=4pt, itemsep=0pt]\n\t\\item[W:] a write event $w$ is added. 
In this case, for every existing read $r$ to the same variable, a partial\\footnote{For details we refer the reader to the function $\\mathsf{CalcRevisits}$ from \\cite{GenMC}.} copy of $G$ with an edge $w \\overset{\\rf[\\tiny]}\\rightarrow r$ is generated and pushed into $S$.\n\t\\item[R:] a read event $r$ is added; for every write event $w$ in $G$ a copy of $G$ with an additional edge $w \\overset{\\rf[\\tiny]}\\rightarrow r$ is generated and pushed into $S$.\n\\end{description}\nCrucially, if the read event $r$ is in an await, an additional copy of $G$ is generated in which $r$ has no incoming \\rf-edge (we write this new graph as $G[\\bot \\overset{\\rf[\\tiny]}\\rightarrow r]$). \nThis missing \\rf-edge indicates a potential AT violation. It is not an actual AT violation \\emph{yet} because a new write $w$ to the same variable might be added by another thread later during exploration.\nIf such a write is added, it leads to the generation of two types of graphs: graphs in which $r$ is still missing an \\rf-edge, and graphs with the edge $w \\overset{\\rf[\\tiny]}\\rightarrow r$ where $r$ no longer has a missing \\rf-edge (and the potential AT violation got resolved).\nOtherwise, if a missing \\rf-edge is still present and no other thread can be run, we know that such a write cannot become available anymore; the potential AT violation turns into an actual AT violation.\nThe algorithm detects the violation after popping a graph $G$ in which no threads can be run ($\\mathcal T_G = \\emptyset$), but a read without incoming \\rf-edge is present ($\\bot\\overset{\\rf[\\tiny]}\\rightarrow r\\in G$), and this missing \\rf-edge could not be resolved except through a wasteful execution, \\emph{i.e.}, every consistent graph $G'$ obtained by adding the missing \\rf-edge and completing the await iteration is wasteful. 
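\n\nFor concreteness, the exploration loop can be summarized by the following Python sketch. This is a minimal sketch rather than the actual GenMC implementation: all helper names (\\texttt{cons\\_M}, \\texttt{wasteful}, \\texttt{runnable\\_threads}, \\texttt{next\\_instruction}, \\texttt{calc\\_revisits}, and the graph operations) are hypothetical placeholders for the model- and implementation-specific components described above.\n\\begin{lstlisting}[language=Python]\ndef amc_explore(G_init):\n    S = [G_init]                       # stack of (partial) execution graphs\n    while S:\n        G = S.pop()\n        if not cons_M(G) or wasteful(G):\n            continue                   # inconsistent or wasteful: discard G\n        threads = runnable_threads(G)  # the set T_G\n        if not threads:\n            # no runnable thread left: a read whose rf-edge is still\n            # missing can no longer be resolved, an actual AT violation\n            if has_bottom_rf(G):\n                return report_at_violation(G)\n            continue                   # complete, consistent execution\n        T = pick(threads)              # any runnable thread\n        kind, e = next_instruction(G, T)\n        if kind == 'F':                # failed assertion\n            return report_counterexample(G)\n        elif kind == 'R':              # read event e: one graph per write\n            for w in writes_to(G, e.loc):\n                S.append(G.with_rf(w, e))\n            if in_await(e):            # potential AT violation (bottom rf)\n                S.append(G.with_rf(BOTTOM, e))\n        elif kind == 'W':              # write event e: revisit earlier reads\n            for G2 in calc_revisits(G, e):\n                S.append(G2)\n    return report_success()\n\\end{lstlisting}\nNote how the wasteful-execution filter and the $\\bot$-reads interact in this sketch: all graphs that resolve the missing \\rf-edge are discarded as wasteful, so the $\\bot$-read survives until $\\mathcal T_G = \\emptyset$ and the AT violation is reported.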
\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nMatrix models have been considered as one of the most\npowerful frameworks to formulate string theories\nin a nonperturbative manner.\nA fundamental viewpoint which links matrix models\nto string theories was given by 't Hooft \\cite{'tHooft:1973jz}.\nThere Feynman diagrams which appear in matrix models are\nidentified with {\\em discretized} string worldsheets.\nHowever, if one takes the large-$N$ limit naively\n(the so-called planar limit),\nonly the planar diagrams survive, \nwhich implies the appearance of a classical string theory.\nOne way to formulate nonperturbative string theory\nusing matrix models is therefore \nto look for a nontrivial large-$N$ limit, \nin which Feynman diagrams with all kinds of \ntopology survive.\\footnote{\nAnother possibility to realize string theory using matrix models\nis to keep $N$ finite\nas in the AdS\/CFT correspondence \\cite{Aharony:1999ti}, \ntopological string theory \\cite{Gopakumar:1998ki_Dijkgraaf:2002fc}\nand the Kontsevich model \\cite{Kontsevich:1992ti}.}\nThe existence of such a limit\nhas been first demonstrated in matrix models \nfor noncritical string theory\n\\cite{Brezin:1990rb,Douglas:1989ve,Gross:1989vs},\nand it is called the double scaling limit.\n(See also refs.\\ \n\\cite{DSLgeneral,Bietenholz:2002ch,Bietenholz:2004xs,Bietenholz:2006cz}\nfor recent works, in which\nthe double scaling limit appears in various contexts.)\n\nIt is generally believed that a similar idea can be \napplied also to critical string theories.\nThe corresponding matrix models have been proposed\nin refs.\\ \\cite{BFSS,Ishibashi:1996xs,DVV},\nbut the existence of a nontrivial large-$N$ limit\nis yet to be confirmed.\nTo address such an issue, the 2d Eguchi-Kawai model \\cite{EK}\nhas been studied as a toy model.\nIndeed Monte Carlo simulation \\cite{Nakajima:1998vj} demonstrated\nthe existence of\na one-parameter family of large-$N$ limits, which \ngeneralizes the Gross-Witten \n\\cite{Gross:1980he} planar large-$N$ limit.\nIf one modifies the Eguchi-Kawai model by introducing the \ntwist \\cite{GAO},\nthe double scaling limit\ncan be identified with the continuum limit of\nfield theories on discrete non-commutative (NC) geometry \\cite{AMNS}.\nThe actual existence of such limits \nhas been demonstrated by\nMonte Carlo simulations\nin the case of NC gauge theory in 2d \\cite{Bietenholz:2002ch}\nand 4d \\cite{Bietenholz:2006cz} and also\nin 3d NC scalar field theory \\cite{Bietenholz:2004xs}.\nIn all these cases, it was observed\nthat non-planar diagrams indeed affect\nthe infrared dynamics drastically through the\nUV\/IR mixing mechanism \\cite{MRS}.\n\n\nWe consider that\nMonte Carlo simulation would be a powerful tool\nalso to study matrix models for critical string theories.\nTechnically the IIB matrix model \\cite{Ishibashi:1996xs} would be the\nleast difficult among them since the space-time, on which\nthe ten-dimensional ${\\cal N}=1$ super Yang-Mills theory is defined,\nis totally reduced to a point.\nHowever, the integration over the fermionic matrices yields\na complex Pfaffian, which makes the Monte Carlo simulation\nstill very hard \\cite{Ambjorn:2000dx,Anagnostopoulos:2001yb}.\nAn analogous model, which can be obtained by dimensionally\nreducing {\\em four-dimensional} ${\\cal N}=1$ \nsuper Yang-Mills theory to a point, \ndoes not have that problem, and Monte Carlo studies \nsuggest the existence of a nontrivial large-$N$ limit 
\n\\cite{Ambjorn:2000bf}.\n\n\nThe developments in the matrix description of\ncritical string theories have also given a new perspective\nto noncritical string theory.\nFor instance, in matrix quantum mechanics which describe\n$(1+1)$-dimensional string theory in the double scaling limit,\nthe matrix degrees of freedom\nhave been interpreted as the tachyonic open-string field \nliving on unstable D0-branes \\cite{McGreevy:2003kb,Klebanov:2003km}.\nBased on this interpretation,\nmatrix models with the double-well potential, which are\nknown to be solvable, have been identified as a dual description of \nnoncritical string theory with worldsheet supersymmetry\n\\cite{Takayanagi:2003sm,Douglas:2003up,Klebanov:2003wg}.\nAn important property of these models is \nthat they possess a stable nonperturbative vacuum\nunlike their bosonic counterparts,\nand therefore one can obtain a complete constructive formulation\nof string theory.\nIt also provides us with a unique opportunity to test the\nvalidity and the feasibility of Monte Carlo methods for studying \nstring theories nonperturbatively.\nIn particular we are concerned with such questions as\nwhat kind of analysis is possible\nto extract the double scaling limit,\nand how large the matrix size should be.\n\n\n\nIn this work we consider the simplest model \\cite{Cicuta:1986pu},\nnamely a hermitian one-matrix model\nidentified \\cite{Klebanov:2003wg}\nas a dual of $\\hat c=0$ noncritical string theory,\\footnote{The \nmodel studied in this paper\nwas also used in ref.\\ \\cite{Kawai:2004pj} \nto calculate the chemical potential of D-instantons,\nwhich is shown to be a universal quantity in the\ndouble scaling limit \\cite{Hanada:2004im}. \nThese works are generalized to \nother noncritical string theories \\cite{Sato:2004tz,%\nIshibashi:2005zf,Matsuo:2005nw}\nand discussed in various contexts \n\\cite{deMelloKoch:2004en,Ishibashi:2005dh,%\nFukuma:2005nm,Kuroki:2007an}.}\nwhich is sometimes referred to as the pure supergravity \nin the literature.\nWe calculate correlation functions \nnear the critical point, and investigate their scaling behavior\nto extract the double scaling limit. The results are then\ncompared with a prediction obtained by a different approach.\nWe hope that the lessons from this work\nwould be useful in applying the same method to models\nwhich are not accessible by analytic methods.\n\n\n\n\n\n\n\n\n\n\nThe rest of this paper is organized as follows.\nIn section \\ref{section:MA} we introduce the one-matrix model, \nand present some simulation details.\nIn section \\ref{section:PL} we obtain explicit results\nin the planar limit,\nand compare them with the known analytical results.\nIn section \\ref{section:DSL} we search for\na double scaling limit by using only Monte Carlo data.\nThe results are compared with the prediction obtained\nby the orthogonal-polynomial technique. \nIn section \\ref{section:exact} we present more\ndetailed comparison with the analytical prediction.\nSection \\ref{section:summary} is devoted to a summary and discussions.\nIn the Appendix we briefly review the derivation of some asymptotic\nbehaviors in the double scaling limit.\n\n\n\n\\section{The model and some simulation details}\n\\label{section:MA}\n\nThe model we study in this paper is defined by\n\\begin{eqnarray}\nZ&=& \\int d^{N^2} \\!\\! 
\\phi \\, \n\\exp\\left( -S \\right) \\ ,\n\\label{Zdef}\n\\\\\nS &=& \\frac{N}{g} \\, {\\rm tr\\,}\\left(-\\phi^2 + \\frac{1}{4}\\phi^4 \\right) \\ ,\n\\label{omm}\n\\end{eqnarray} \nwhere $\\phi$ is an $N\\times N$ hermitian matrix.\nWe assume that the coupling constant $g$ is positive\nso that the action is bounded from below.\nSince the action takes the form of a double-well,\nthe standard Metropolis algorithm using a trial configuration\nobtained by slightly modifying some components of the matrix \nwould have a problem with ergodicity.\nIn order to circumvent this problem, we perform the simulation\nas follows.\n\nLet us diagonalize the hermitian matrix $\\phi$ as\n$\\phi = U \\Lambda U^{-1}$,\nwhere $\\Lambda = {\\rm diag}(\\lambda_1,\\cdots,\\lambda_N)$\nis a real diagonal matrix.\nDue to the SU($N$) invariance of the model,\nthe angular variable $U$ can be integrated out.\\footnote{An analogous model including a kinetic term\nrepresenting the fuzzy sphere background has been\nstudied by Monte Carlo simulation in refs.\\ \\cite{Martin,Panero}.\nThe basic idea to avoid the ergodicity\nproblem can be applied there as well,\nalthough in that case the angular variables $U$ have to be treated\nin Monte Carlo simulation. We thank \nMarco Panero for communications on this issue.}\nThus we are left with a system of eigenvalues $\\lambda_i$\n\\begin{eqnarray}\nZ &=& \\int \\prod_{i=1}^{N} \nd\\lambda_i \\, \\exp (-\\widetilde{S}) \\ , \n\\label{d_pf} \\\\\n\\widetilde{S} &=& \\frac{N}{g}\n\\sum_{i=1}^{N}\\left(-\\lambda_i^2 + \\frac{1}{4}\\lambda_i^4\n\\right)\n- \\sum_{i < j} \\log |\\lambda_i-\\lambda_j|^2 \\ ,\n\\label{d_action}\n\\end{eqnarray}\nwhere the log term in eq.\\ (\\ref{d_action})\ncomes from the Vandermonde determinant. \nDue to the $\\lambda _i ^4$ term in the action,\nthe probability of $\\lambda_i$ having a large absolute\nvalue is strongly suppressed.\nThis can be seen also from\nthe eigenvalue distribution in fig.\\ \\ref{fig_rho},\nwhich actually has a compact support in the \nplanar large-$N$ limit; see eqs.\\ (\\ref{rho_largeN-1})\nand (\\ref{rho_largeN-2}).\nWe therefore restrict $|\\lambda_i|$ to be less than \nsome value $X$.\n\nWe first run a simulation with a reasonably large $X$.\nBy measuring the eigenvalue distribution, we can\nobtain an estimate of $X$ that \ncan be used without affecting the Monte Carlo results.\nWe generate a trial configuration by replacing\none eigenvalue by a uniform random number \nwithin the range $[-X,X]$. \nThe trial configuration is accepted as a new configuration\nwith the probability min($1,\\exp(-\\Delta \\widetilde{S})$), \nwhere $\\Delta \\widetilde{S}$ \nis the increase of the action $\\widetilde{S}$\n($\\Delta \\widetilde{S}<0$ in case it decreases).\nThe acceptance rate turns out to be \nof the order of a few percent.\\footnote{We could have\nincreased the acceptance rate by suggesting a number\nfor the eigenvalue with a non-uniform probability\nand taking it into account in the Metropolis accept\/reject\nprocedure. In this work, however, we stayed with the simplest\nalgorithm for illustrative purposes.}\nWe repeat this procedure for all the eigenvalues,\nand that defines our ``one sweep''.\n\nTypically we make 500,000 sweeps for each set of parameters.\nWe discard the first 10,000 sweeps for thermalization,\nand measure quantities every 100 sweeps, taking the\nauto-correlation into account. 
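\nFor illustration, one sweep can be sketched in Python as follows. This is a minimal sketch under the conventions above, not the code used for our runs; \\texttt{lam} denotes the array of eigenvalues $\\lambda_i$, and \\texttt{rng} is assumed to be a standard pseudo-random number generator such as \\texttt{numpy.random.default\\_rng()}.\n\\begin{lstlisting}[language=Python]\nimport numpy as np\n\ndef sweep(lam, N, g, X, rng):\n    # one Metropolis sweep over the eigenvalues of the reduced model\n    for i in range(N):\n        old = lam[i]\n        new = rng.uniform(-X, X)   # trial value for one eigenvalue\n        # potential part of the change of the action S-tilde\n        dS = (N \/ g) * ((-new**2 + new**4 \/ 4) - (-old**2 + old**4 \/ 4))\n        # Vandermonde part, from  - sum_{i<j} log |l_i - l_j|^2\n        others = np.delete(lam, i)\n        dS -= 2.0 * np.sum(np.log(np.abs(new - others))\n                           - np.log(np.abs(old - others)))\n        # accept\/reject with probability min(1, exp(-dS))\n        if dS < 0 or rng.random() < np.exp(-dS):\n            lam[i] = new\n    return lam\n\\end{lstlisting}\nEach eigenvalue update costs O($N$) because of the Vandermonde term, so one sweep costs O($N^2$), in line with the scaling of the CPU time discussed below.\n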
The statistical errors are estimated by\nthe standard jack-knife method, although in most cases \nthe error bars are invisible compared with the symbol size.\nThe simulation has been performed on\nPCs with Pentium 4 (3GHz), and it took a few weeks\nto get results for each value of $g$ with the largest \nsystem size $N=2048$.\nNote that the required CPU time is of O($N^2$) \nthanks to the fact that we only have to deal with\nthe eigenvalues but not the whole matrix degrees\nof freedom. Otherwise the required CPU time would\ngrow as O($N^3$) at least.\nNote also that our algorithm allows\nthe eigenvalues to move from one well\nto the other with finite probability.\nThus the problem with ergodicity is \navoided.\n\n\n\n\\section{The planar limit} \n\\label{section:PL}\n\nIn this section we investigate\nthe planar limit of the model by Monte Carlo simulation.\nThis limit corresponds to sending the matrix size\n$N$ to infinity with fixed $g$.\nIt is necessary to study the planar limit first since\nwe have to identify the critical point, and calculate\ncorrelation functions at that point,\nwhich will be used when we search for \na double scaling limit.\n\n\n\\FIGURE[t]{\n\\epsfig{file=planar_exact3.eps, width=.5\\textwidth}\n\\caption{The eigenvalue distribution $\\rho(x)$ is plotted\nfor $g=0.5,1.0,2.0,3.0$ with $N=32$. \nThe curves represent the exact results \n(\\ref{rho_largeN-1}), (\\ref{rho_largeN-2}) \nobtained in the planar large-$N$ limit.\n}\\label{fig_rho}}\n\n\n\nLet us define the eigenvalue density distribution\n\\begin{equation}\n\\rho(x)\\equiv \\frac{1}{N}\n\\Bigl\\langle {\\rm tr\\,}\\delta(x-\\phi)\n\\Bigr\\rangle \\ ,\n\\label{rho}\n\\end{equation}\nfrom which one can calculate\nthe expectation value of any single trace operator.\nIn the planar limit the distribution $\\rho(x)$ is \nobtained analytically \\cite{Cicuta:1986pu}\nusing the method developed in ref.\\ \\cite{Brezin:1977sv}.\nFor $g\\ge 1$ the distribution is given by\n\\begin{eqnarray}\n\\lim_{N\\to \\infty}\\rho(x) &=&\n\\frac{1}{\\pi g} \\left(\\frac{1}{2}x^2+r^2-1 \\right)\n\\nonumber \\\\\n&~& \n \\sqrt{4r^2-x^2} \n\\label{rho_largeN-1}\n\\end{eqnarray}\nin the range $-2r \\leq x\\leq 2r$,\nwhere $r^2=\\frac{1}{3}(1+\\sqrt{1+3 g})$.\nFor $g \\le 1$ it is given by\n\\begin{equation}\n\\lim_{N\\to \\infty}\\rho(x)=\n\\frac{1}{2\\pi g}|x|\\sqrt{(x^2-r_-^2)(r_+^2-x^2)} \n\\label{rho_largeN-2}\n\\end{equation}\nin the range $ r_-\\leq |x|\\leq r_+ $, \nwhere $r_\\pm^2=2(1\\pm \\sqrt{g})$.\nOutside the specified region, \nthe distribution is constantly zero,\nand hence it has a compact support\nfor $g \\ge 1$, which splits into two for $g \\le 1$.\nThis implies a phase transition of the Gross-Witten type\n\\cite{Gross:1980he} at the critical point\n\\begin{equation}\ng=g_{\\rm cr}\\equiv 1 \\ .\n\\end{equation} \nOur Monte Carlo results for $N=32$\nshown in fig.\\ \\ref{fig_rho}\nagree well with the exact results \nin the planar limit.\n\n\n\\DOUBLEFIGURE[h] \n{planar_p2p2.eps, width=.49\\textwidth}\n{planar_pp.eps, width=.49\\textwidth}\n{The two-point correlation function \n$\\langle {\\rm tr\\,} \\phi^2 \\, {\\rm tr\\,} \\phi^2 \\rangle_{\\rm c}$ \nis plotted against $g$ for various $N$.\nThe solid line represents the analytic\nresult (\\ref{p2p2_largeN}) in the planar large-$N$ limit.\n\\label{fig_p2p2}\n}\n{The two-point correlation function \n$\\langle {\\rm tr\\,} \\phi \\, {\\rm tr\\,} \\phi \\rangle_{\\rm c}$ \nis plotted against $g$ for various $N$.\nThe solid line represents the analytic\nresult 
(\\ref{pp_largeN}) in the planar large-$N$ limit.\n \\label{fig_pp}}\n\nLet us next consider two-point correlation functions\n$\\langle {\\rm tr\\,}\\phi^2 \\, {\\rm tr\\,}\\phi^2 \\rangle_{\\rm c}$ and \n$\\langle {\\rm tr\\,}\\phi \\, {\\rm tr\\,}\\phi \\rangle_{\\rm c}$, \nwhere the suffix ``c'' \nimplies that the connected part is taken.\nIn the planar limit the correlation functions are\nobtained analytically as (See Appendix for derivation)\n\\begin{eqnarray}\n\\lim_{N\\to \\infty}\\langle{\\rm tr\\,}\\phi^2 \\, {\\rm tr\\,}\\phi^2\\rangle_{\\rm c}\n&=& \\left\\{\n\\begin{array}{cc}\n\\frac{2}{9}\\left(1+\\sqrt{1+3 g}\\right)^2 \n\\quad & \\textrm{for~} g \\geq 1\\ , \\\\\n2g\n\\quad & \\textrm{for~} g \\leq 1\\ . \\\\\n\\end{array}\n\\right.\n\\label{p2p2_largeN}\\\\\n\\lim_{N\\to \\infty}\\langle{\\rm tr\\,}\\phi \\, {\\rm tr\\,}\\phi\\rangle_{\\rm c}\n&=& \\left\\{\n\\begin{array}{cc}\n\\frac{1}{3}\\left(1+\\sqrt{1+3 g}\\right) \n\\quad & \\textrm{for~} g \\geq 1 \\ , \\\\\n1-\\sqrt{1-g}\n\\quad & \\textrm{for~} g \\leq 1 \\ . \\\\\n\\end{array}\n\\right.\\label{pp_largeN}\n\\end{eqnarray}\nOur Monte Carlo results for various $N$ shown in\nfigs.\\ \\ref{fig_p2p2} and \\ref{fig_pp} approach\nthe planar limit with increasing $N$.\n\n\nIn passing, let us consider \nthe free energy of the system (\\ref{Zdef}) defined by\n\\begin{equation}\nF\\equiv \\log Z- \\frac{1}{4} N^2 \\log g \\ ,\n\\label{defF}\n\\end{equation}\nwhere the log term is subtracted in order to make\n$F$ finite in the free case ($g=0$).\nOne can easily see that the correlation function\n$\\langle {\\rm tr\\,}\\phi^2 \\, {\\rm tr\\,}\\phi^2 \\rangle_{\\rm c}$\nis related to the second derivative of the free energy\nwith respect to $g^{-1\/2}$ as\n\\begin{equation}\n\\langle {\\rm tr\\,}\\phi^2 \\, {\\rm tr\\,}\\phi^2 \\rangle_{\\rm c}\n=\\frac{g}{N^2} \\, \\frac{\\partial^2\n}{\\partial(g^{-1\/2})^2} \\, F \\ . 
\n\\label{u_to_F}\n\\end{equation} \nTherefore, the behavior (\\ref{p2p2_largeN})\nat the critical point $g=1$ \nimplies that the phase transition is of third order\nin accord with ref.\\ \\cite{Gross:1980he}.\n\n\n\n\n \n\\section{The double scaling limit}\n\\label{section:DSL}\n\nIn this section we search for \na double scaling limit, in which \nwe send the coupling constant\n$g$ to the critical point $g_{\\rm cr}= 1$\nsimultaneously with the $N \\rightarrow \\infty$ limit\nkeeping\n\\begin{equation}\n \\mu\\equiv N^{p\/3}(1\n-g)\n\\label{mu}\n\\end{equation}\nfixed.\nWe investigate whether the quantities\n\\begin{eqnarray}\nA (\\mu , N) &\\equiv&\n- N^{q\/3} \\Bigl( \\langle{\\rm tr\\,}\\phi^2 \\, \n{\\rm tr\\,}\\phi^2\\rangle_{\\rm c} - 2\n\\Bigr) \\ , \n\\label{p2p2_extracted}\\\\\nB (\\mu , N) &\\equiv&\n- N^{r\/3}\n\\Bigl( \\langle{\\rm tr\\,}\\phi \\, {\\rm tr\\,}\\phi \\rangle_{\\rm c} - 1\n\\Bigr)\n\\label{pp_extracted}\n\\end{eqnarray}\nhave large-$N$ limits as functions of $\\mu$\nfor some choice of the parameters $p$, $q$ and $r$.\nIn eqs.\\ (\\ref{p2p2_extracted}) and (\\ref{pp_extracted}),\nwe have subtracted the values\nin the planar large-$N$ limit\nat the critical point $g=1$,\nwhich are 2 and 1, respectively, for each correlation function;\nsee, eqs.\\ (\\ref{p2p2_largeN}) and (\\ref{pp_largeN}).\n\n\n\n\\DOUBLEFIGURE[htb]\n{dsl_p2p2.eps, width=.49\\textwidth}\n{dsl_pp.eps, width=.49\\textwidth}\n{The observable $2-\\langle {\\rm tr\\,} \\phi^2 \\, {\\rm tr\\,} \\phi^2 \\rangle_{\\rm c}$ \nat the critical point $g=1$\nis plotted against $N$ in the log-log scale.\nThe straight line represents a fit to the power law \nbehavior $N^{-1.7(3)}$.\n\\label{mu0_graph1}}\n{The observable $1-\\langle {\\rm tr\\,} \\phi \\, {\\rm tr\\,} \\phi \\rangle_{\\rm c}$ \nat the critical point $g=1$\nis plotted against $N$ in the log-log scale.\nThe straight line represents a fit to the power law \nbehavior $N^{-1.00(2)}$.\n\\label{mu0_graph2}}\n\n\nIn fact, by merely \nlooking at the behavior of the planar results\n(\\ref{p2p2_largeN}) and (\\ref{pp_largeN})\nnear the critical point $g\\sim 1$,\none can readily deduce the existence of a double scaling limit\nfor $\\mu \\sim \\pm \\infty$,\nwhere the $\\pm$ sign corresponds \nto the behavior for $g \\rightarrow 1 \\pm \\epsilon$,\nrespectively.\nNamely, plugging $g = 1 - \\mu N^{-p\/3}$ into \n(\\ref{p2p2_largeN}) and (\\ref{pp_largeN}), one obtains\n\\begin{eqnarray}\n\\lim _{N\\to \\infty}\nA(\\mu, N) &=& \n\\left\\{\n\\begin{array}{cc}\n2 \\mu \\quad & \\textrm{for~} \\mu\\sim \\infty \\ , \\\\\n\\mu \\quad & \\textrm{for~} \\mu\\sim -\\infty \\ ,\n\\end{array}\n\\right.\n\\label{AtoNlim}\n\\\\\n\\lim _{N\\to \\infty}\nB(\\mu, N) &=& \n\\left\\{\n\\begin{array}{cc}\n\\sqrt{\\mu} \\quad & \\textrm{for~} \\mu\\sim \\infty \\ , \\\\\n0 \\quad & \\textrm{for~} \\mu\\sim -\\infty \\ ,\n\\end{array}\n\\right.\n\\label{BtoNlim}\n\\end{eqnarray}\n\\begin{equation}\n\\mbox{~~with~~}\nq=p \\quad {\\rm and} \\quad r=\\frac{p}{2} \\ .\n\\label{qr-p-rel}\n\\end{equation}\n\nWhen we search for a double scaling limit,\nwe have to impose (\\ref{qr-p-rel}) in order to ensure\nthe scaling behavior at large $|\\mu|$.\nThe nontrivial question then is whether we can \nchoose the parameters within the constraints (\\ref{qr-p-rel})\nin such a way that\nthe scaling extends to small $|\\mu|$.\nIn general, the planar results can be used in this way to\nimpose some constraints on the parameters\nthat appear in searching for a double scaling limit.\nA similar 
strategy has been used,\nfor instance,\nin refs.\\ \\cite{Nakajima:1998vj,Bietenholz:2002ch,Bietenholz:2004xs}.\nWe emphasize, however, that \nthis is just meant to make the analysis simpler,\nand that the relation (\\ref{qr-p-rel}) would come out\nanyway when we attempt to optimize the scaling behavior \nat large $|\\mu|$.\n \n\nLet us search for a scaling behavior \nat the particular point $\\mu=0$.\nThis corresponds to $g=1$ for any choice of $p$\ndue to (\\ref{mu}), and therefore, we can actually\ndetermine $q$ and $r$ without using (\\ref{qr-p-rel}).\nIn fig.\\ \\ref{mu0_graph1} we plot\nthe r.h.s.\\ of (\\ref{p2p2_extracted}) omitting the \nfactor $N^{q\/3}$. The observed power behavior \nimplies $q=1.7(3)$.\nSimilarly from fig.\\ \\ref{mu0_graph2}, \nwe obtain $r=1.00(2)$.\nUsing this value of $r$, the other exponents\n$p$ and $q$ may be obtained \nfrom the relation (\\ref{qr-p-rel})\nas $p=q=2.00(4)$.\nThis is consistent with the value of $q$ \nextracted from fig.\\ \\ref{mu0_graph1} directly.\nThe latter has a larger error bar, though.\nThe reason for this is that the quantities\nplotted in figs.\\ \\ref{mu0_graph1} and \\ref{mu0_graph2}\nare of the order of $\\frac{1}{N^2}$ and $\\frac{1}{N}$,\nrespectively.\n\n\n\\DOUBLEFIGURE[htb]\n{dsl_p2p2_p2.eps,width=.49\\textwidth}\n{dsl_pp_r1.eps, width=.49\\textwidth}\n{The quantity $A(\\mu , N)$\nis plotted against $\\mu$ for $N=8, 16, \\cdots , 512$\nwith $p=q=2.0$. \nThe solid line represents the result (\\ref{AtoN}) \nobtained in the double scaling limit.\n\\label{dsl_p2p2}}\n{The quantity $B(\\mu , N)$\nis plotted against $\\mu$ for $N=8, 16, \\cdots , 512$\nwith $p=2.0$ and $r=1.0$.\nThe solid line represents the result (\\ref{BtoN}) \nobtained in the double scaling limit.\n\\label{dsl_pp}}\n\n\n\n\nNow let us see whether \nthese values of $p$, $q$ and $r$\nmake the quantities $A (\\mu , N)$ \nand $B(\\mu , N)$ \nscale also for $\\mu \\neq 0$.\nUsing the Monte Carlo data shown in figs.\\ \\ref{fig_p2p2} and \\ref{fig_pp},\nwe plot the quantities as functions of $\\mu$.\nFigs.\\ \\ref{dsl_p2p2} and \\ref{dsl_pp} show the results.\nThe scaling functions given below in eqs.\\\n(\\ref{AtoN}) and (\\ref{BtoN}) \nare also plotted for comparison. \nThe Monte Carlo results for \n$A (\\mu , N)$ \nshow a nice scaling behavior, and they agree with\nthe prediction (\\ref{AtoN}).\nOn the other hand, \nthe quantity $B (\\mu , N)$ scales and agrees\nwith the prediction (\\ref{BtoN}) only in \nthe $\\mu \\gtrsim 0$ region.\nIn the $\\mu \\lesssim 0$ region,\nwe observe \nsome tendency towards scaling as $N$ increases up to $N=512$, \nbut the convergence to the prediction (\\ref{BtoN}) seems to be slow.\nThis behavior \nis due to the next-leading $1\/N$ corrections,\nas we discuss in the next section.\n\n\nIn fact, the analysis based on the orthogonal polynomial technique\n\\cite{Brezin:1990rb,Douglas:1989ve,Gross:1989vs}\nsuggests the existence of a double scaling limit with \n\\begin{equation}\np=q=2 \\quad {\\rm and} \\quad r=1 \\ ,\n\\label{q-r-p-sol}\n\\end{equation}\nwhich agrees with our observation.\nIn this limit the model (\\ref{omm})\nis conjectured \\cite{Klebanov:2003wg}\nto be a dual description of the $\\hat{c}=0$ \nnoncritical string theory, where\nthe parameter $\\mu$ is identified with the\ncosmological constant in the corresponding\nsuper Liouville theory. \nNote that we were able to deduce the existence of the \ndouble scaling limit from Monte Carlo data alone.
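\n\nTo make the scaling logic above concrete, the planar formulas themselves can serve as a toy data set. The following minimal Python sketch (our own illustration, not part of the original analysis; the function names and the use of numpy are ours) evaluates $A(\\mu , N)$ of eq.\\ (\\ref{p2p2_extracted}) on the planar expression (\\ref{p2p2_largeN}) with $p=q=2$ and exhibits the limits (\\ref{AtoNlim}) as $N$ grows:\n\\begin{verbatim}\nimport numpy as np\n\ndef p2p2_planar(g):\n    # planar limit of <tr phi^2 tr phi^2>_c, eq. (p2p2_largeN)\n    g = np.asarray(g, dtype=float)\n    return np.where(g >= 1.0,\n                    (2.0 \/ 9.0) * (1.0 + np.sqrt(1.0 + 3.0 * g)) ** 2,\n                    2.0 * g)\n\ndef A(mu, N, p=2.0, q=2.0):\n    # A(mu, N) of eq. (p2p2_extracted), evaluated on the planar formula\n    g = 1.0 - mu * N ** (-p \/ 3.0)\n    return -(N ** (q \/ 3.0)) * (p2p2_planar(g) - 2.0)\n\nfor N in [10 ** 2, 10 ** 4, 10 ** 6]:\n    print(N, A(5.0, N), A(-5.0, N))   # -> 2 mu = 10 and mu = -5\n\\end{verbatim}\nFor $\\mu>0$ the $g\\leq 1$ branch is linear in $g$, so $A=2\\mu$ holds exactly at any $N$, while for $\\mu<0$ the output approaches $\\mu$ with corrections of order $N^{-p\/3}$, in line with (\\ref{AtoNlim}).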
\n\n\nLet us also note that due to eq.\\ (\\ref{u_to_F}),\n$A (\\mu , N)$ is related to the ``specific heat''\n\\begin{equation}\nC(\\mu , N ) \\equiv\n\\frac{\\partial^2 F}{\\partial \\mu^2} - \n \\left.\n\\frac{\\partial^2 F}{\\partial \\mu^2} \n\\right|_{\\mu =0} \n\\label{Cdef}\n\\end{equation}\nas\n\\begin{equation}\nA (\\mu , N) = \n- \\frac{1}{4} \\, N^{(q+2p-6)\/3}\n\\left\\{\nC(\\mu , N ) + {\\rm O}\\left(N^{-2p\/3}\\right) \\right\\} \\ .\n\\end{equation}\nTherefore, the scaling of $A(\\mu , N)$ with\nthe choice (\\ref{q-r-p-sol}) \nimplies that the ``specific heat'',\nwhich has a physical meaning in the dual string theory,\nbecomes finite in the double scaling limit.\n\n\n\n\\DOUBLEFIGURE[htb]\n{dsl_n3_3.eps, width=.49\\textwidth}\n{dsl_n2_2.eps, width=.49\\textwidth}\n{The observable $\\langle {\\rm tr\\,} \\phi^2 \\, {\\rm tr\\,} \\phi^2 \\rangle_{\\rm c}$ \nis plotted against $a(\\equiv N^{-1\/3})$ for various $\\mu$\nwith $N=32,64,\\cdots , 2048$.\nFor each $\\mu$ we fit the data\nto the behavior (\\ref{O_p2p2})\nwithout the ${\\rm O}(a^4)$ terms\ntreating $h(\\mu)$ as a fitting parameter.\n\\label{dsl_graph2}}\n{The observable $\\langle {\\rm tr\\,} \\phi \\, {\\rm tr\\,} \\phi \\rangle_{\\rm c}$ \nis plotted against $a(\\equiv N^{-1\/3})$ for various $\\mu$\nwith $N=32,64,\\cdots , 2048$.\nFor each $\\mu$ we fit the data\nto the behavior (\\ref{O_pp})\nwithout the ${\\rm O}(a^3)$ terms\ntreating $h(\\mu)$ as a fitting parameter.\n\\label{dsl_graph1}}\n\n\n\n\\section{Next-leading $1\/N$ corrections}\n\\label{section:exact}\n\nSo far we have been analyzing our Monte Carlo data\nwithout using the knowledge obtained from analytical\nresults. \nThe purpose of this section is to discuss\nmore detailed behaviors in the double scaling limit\nwhich are obtained analytically, and to see whether\nour Monte Carlo data reproduce those behaviors as well.\n\n\nAs we briefly review in the Appendix,\none can actually derive the asymptotic large-$N$ \nbehavior of the correlation functions (for even $N$) \nin the double scaling limit as\n\\begin{eqnarray}\n\\langle {\\rm tr\\,} \\phi^2 \\, {\\rm tr\\,} \\phi^2 \\rangle_{\\rm c} &=&\n2-\\Bigl\\{ \\mu+h^2(\\mu)\\Bigr\\} a^2 -\\frac{1}{2}\\Bigl\\{\n\\mu \\, h(\\mu)-h^3(\\mu)\\Bigr\\}a^3 + {\\rm O}(a^4) \\ ,\n\\label{O_p2p2}\\\\\n\\langle {\\rm tr\\,} \\phi \\, {\\rm tr\\,} \\phi \\rangle_{\\rm c} &=& 1-h(\\mu)\\, a-\n\\frac{1}{4}\\Bigl\\{ \\mu-h^2(\\mu)\n\\Bigr\\} a^2 +\n{\\rm O}(a^3) \\ ,\n\\label{O_pp}\n\\end{eqnarray}\nwhere we have introduced a parameter $a\\equiv N^{-1\/3}$,\nand $h(\\mu)$ is a function which \nsatisfies the differential equation \\cite{Douglas:1990xv}\n\\begin{equation}\n\\mu \\, h(\\mu) = h^3(\\mu) -2 \\, h''(\\mu) \\ ,\n\\label{PainleveII}\n\\end{equation}\nand the boundary conditions\n\\begin{equation}\nh(\\mu) \\sim \n\\left\\{\n\\begin{array}{cc}\n\\sqrt{\\mu} \\quad & \\textrm{for~} \\mu\\sim \\infty \\ , \\\\\n0 \\quad & \\textrm{for~} \\mu\\sim -\\infty \\ . 
\n\\end{array}\n\\right.\n\\label{bc2}\n\\end{equation}\nEquation (\\ref{PainleveII}) \nis nothing but the Painleve-II equation,\nwhich is proven \\cite{Hastings:1980} to \nhave a unique real solution\\footnote{In \nthe case of $\\phi^3$ matrix model,\nwhich corresponds to the noncritical string theory\nwithout worldsheet supersymmetry, one can obtain\nonly one boundary condition, since one can approach\nthe critical point only from one direction.\nAccordingly the solution of the Painleve equation\nhas a one-parameter ambiguity \\cite{Douglas:1989ve}.\nThis is essentially\nbecause the vacuum of the matrix model\nis nonperturbatively\nunstable. The ambiguity arises from how one regularizes the\ninstability. The model we study in this paper does not \nhave this problem.} \nunder the boundary conditions (\\ref{bc2}).\nThe solution is obtained numerically in ref.\\ \\cite{num}\nto high accuracy, and we use it in plotting the exact\nresults in figs.\\ \\ref{dsl_p2p2}, \\ref{dsl_pp} \nand \\ref{fig_h}.\n\n\n{}From (\\ref{O_p2p2}) and (\\ref{O_pp}),\nthe large-$N$ limits of the quantities\n(\\ref{p2p2_extracted}), (\\ref{pp_extracted})\nare obtained as\n\\begin{eqnarray}\n\\lim _{N\\to \\infty}A(\\mu, N) &=& \\mu + h^2(\\mu) \\ , \n\\label{AtoN}\n\\\\\n\\lim _{N\\to \\infty}B(\\mu, N) &=& h(\\mu) \\ , \n\\label{BtoN}\n\\end{eqnarray}\nwhich we plot as exact results in figs.\\ \n\\ref{dsl_p2p2} and \\ref{dsl_pp}.\nPlugging in the boundary conditions (\\ref{bc2}),\nwe reproduce eqs.\\ (\\ref{AtoNlim}) and (\\ref{BtoNlim}) \nobtained from the planar results.\n\nThe analysis in the previous section therefore amounts to\nextracting the leading $1\/N$ corrections in\n(\\ref{O_p2p2}) and (\\ref{O_pp}).\nThe reason for the observed slow approach to the \nlimit (\\ref{BtoN}) for $\\mu \\lesssim 0$ is that\nthe coefficient of the O($a$) term in the expansion (\\ref{O_pp})\nbecomes much smaller than that of the O($a^2$) term\nas $\\mu$ decreases due to the boundary conditions (\\ref{bc2}).\n\n\n\\FIGURE[t]{\n\\epsfig{file=h_mu3.eps, width=.5\\textwidth}\n\\caption{\nThe crosses and the circles represent\nthe function $h(\\mu)$, which is \nextracted from figs.\\ \\ref{dsl_graph2} and \\ref{dsl_graph1},\nrespectively.\nThe solid line represents\nthe solution of the Painleve-II equation.\n} \n\\label{fig_h}\n}\n\n\nIt would therefore be interesting to see whether\nthe {\\em next-leading} $1\/N$ corrections in\neqs.\\ (\\ref{O_p2p2}) and (\\ref{O_pp}) are reproduced\nby Monte Carlo simulation.\nIn fig.\\ \\ref{dsl_graph2} \nwe plot \n$\\langle {\\rm tr\\,} \\phi^2 \\, {\\rm tr\\,} \\phi^2 \\rangle_{\\rm c}$\nagainst $a$ for various $\\mu$.\nIndeed the data can be nicely fitted\nto the behavior (\\ref{O_p2p2})\nwithout the ${\\rm O}(a^4)$ terms,\nwhere $h(\\mu)$ is determined as a fitting parameter\nby optimizing the fit for each $\\mu$.\nIn fig.\\ \\ref{dsl_graph1}\nwe plot the observable \n$\\langle {\\rm tr\\,} \\phi \\, {\\rm tr\\,} \\phi \\rangle_{\\rm c}$ against $a$ \nfor various $\\mu$.\nAgain the data can be nicely fitted\nto the behavior (\\ref{O_pp})\nwithout the ${\\rm O}(a^3)$ terms,\nwhere $h(\\mu)$ is determined similarly.\nThe function $h(\\mu)$ obtained in this way\nis plotted in fig.\\ \\ref{fig_h}.\nThe crosses and the circles represent the results\nobtained from \n$\\langle {\\rm tr\\,} \\phi^2 \\, {\\rm tr\\,} \\phi^2 \\rangle_{\\rm c}$\nand $\\langle {\\rm tr\\,} \\phi \\, {\\rm tr\\,} \\phi \\rangle_{\\rm c}$, respectively,\nwhich turn out to be consistent with each other\nwithin error 
bars. \nFurthermore, the results agree with \nthe solution of the Painleve-II equation (\\ref{PainleveII})\nwith the boundary conditions (\\ref{bc2}).\n\n\n\\section{Summary}\n\\label{section:summary}\n\nIn this paper we have shown how one can use\nMonte Carlo simulation to search for a double scaling\nlimit, and, if it exists, to obtain the corresponding\nscaling functions.\nFor that purpose we studied a solvable one-matrix model\nwhich has recently been proposed as a constructive\nformulation of noncritical strings with worldsheet\nsupersymmetry.\nIn particular, we have shown how the results in the \nplanar limit provide useful information in such an\ninvestigation.\nThe required matrix size is not very large in most cases,\nbut we have also encountered a case in which\nthe approach to the large-$N$ limit turns out to be \nslow due to large next-leading $1\/N$ corrections.\n\nConsidering that even a simple two-matrix model\nis not solvable except in some special cases \n\\cite{Itzykson:1979fi},\nwe believe that Monte Carlo simulation provides\na powerful tool to investigate the universality class\nof matrix models in the double scaling limit.\nFor instance, in ref.\\ \\cite{Fukuma:2006ny}\na string field theory of minimal $(p,q)$ superstrings\nhas been constructed from the two-cut ansatz for\nthe two-matrix model.\nIt would be interesting to confirm their results\nby taking the double scaling limit explicitly.\n\n\nIn general, if there exists a continuous phase transition\nin the planar limit,\none has a chance to take the double scaling limit by\napproaching the critical point with increasing $N$.\nHow generically this holds remains to be investigated.\nFor instance, it is known that the \nunitary matrix model \\cite{Gross:1980he} has\na third order phase transition, which allows \na double scaling limit \\cite{Periwal:1990gf}.\nThe obtained limit belongs to the same universality class\n\\cite{Douglas:1990xv} as the one studied in this paper.\nWhether a double scaling limit defines a sensible \nnonperturbative string theory is also an important issue,\nwhich was addressed in refs.\\ \n\\cite{Ambjorn:2000bf,Anagnostopoulos:2001cb,Hanada}\nby Monte Carlo simulation.\nWe hope that Monte Carlo studies of matrix models\nwill also shed light on nonperturbative dynamics\nof critical strings.\n\n\n\\acknowledgments\n\nIt is our pleasure to thank Takehiro Azuma,\nMasanori Hanada and Hirotaka Irie \nfor valuable discussions.\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nGiven an ensemble of ``quantum chaotic'' Hamiltonians $\\{H\\}$, the averaged Spectral Form Factor (SFF) is defined as \n\\begin{equation}\n    \\text{SFF}(T) = \\overline{|\\tr(e^{-i H T})|^2},\n\\end{equation}\nwhere the overline denotes an ensemble average. The SFF is known to exhibit a ramp-like structure at intermediate times which is characteristic of a random-matrix-like spectrum for $H$, a defining feature of quantum chaos~\\cite{haake2010quantum,PhysRevLett.52.1,mehta2004random}. In this paper, we study a generalization of the SFF called the Loschmidt SFF (LSFF). The LSFF is defined in terms of two Hamiltonians $H_1$ and $H_2$ as\n\\begin{equation}\n    \\text{LSFF}(T) = \\overline{\\tr(e^{i H_1 T}) \\tr(e^{-i H_2 T})},\n\\end{equation}\nwhere again the overline denotes an average. 
The goal of the paper is to motivate the study of the LSFF and to study it in a variety of representative contexts.\n\nTo explain why the LSFF is a natural object to consider, let us begin with another basic feature of chaotic systems: the exponential decay of auto-correlation functions. Consider a complete set $\\{A\\}$ of Hermitian operators and define the infinite temperature auto-correlation function for each $A$ as\n\\begin{equation}\n    G_{A}(T) = \\frac{1}{D} \\tr\\left( e^{i H T} A e^{-i H T} A \\right),\n\\end{equation}\nwhere $H$ is the system Hamiltonian and $D$ is the Hilbert space dimension. In a chaotic system, one typically expects $G_A(T) \\sim e^{- \\gamma_A T} + \\cdots$, at least at intermediate times. \n\nRather than considering a single such $G_A$, it is often convenient to consider the sum over all $A$, in which case we obtain a result proportional to the SFF,\n\\begin{equation}\n    \\sum_A G_A(T) = \\frac{\\text{SFF}(T)}{D}.\n\\end{equation}\nAs mentioned above, the form factor directly probes the correlations between energy levels of the Hamiltonian in an operator-independent way. Moreover, it can be argued on effective field theory grounds that exponential decay of correlations indeed implies the characteristic ``ramp'' phenomenon in the spectral form factor~\\cite{winer2021hydrodynamic}.\n\nThe correlators $G_A$ can be measured in the following way. Consider two copies of the system, $S$ and $\\bar{S}$, prepared in an infinite-temperature thermofield double state along with a control qubit $C$ initialized in the state $\\frac{|0\\rangle + |1\\rangle}{\\sqrt{2}}$. Using a conditional application of the operator $A$, followed by an unconditional time-evolution, followed by another conditional application of $A$, the state becomes \n\\begin{equation}\n    \\frac{1}{\\sqrt{2}} \\left(|0\\rangle_C \\otimes e^{- i H T} |\\infty\\rangle_{S\\bar{S}} + |1\\rangle_C \\otimes A e^{- i H T} A |\\infty\\rangle_{S\\bar{S}} \\right).\n\\end{equation}\nBy measuring the Pauli operators $X_C$ and $Y_C$ and repeating to collect statistics, one can then estimate $G_A(T)$ via\n\\begin{equation}\n    \\langle X_C \\rangle + i \\langle Y_C\\rangle = G_A(T).\n\\end{equation}\n\nFrom this point of view it is natural to ask what happens if the time-evolution is itself conditional. Suppose the system evolves according to $H_1$ if the control is in state $|0\\rangle$ and according to $H_2$ if the control is in state $|1\\rangle$. In this case, the experimental procedure now yields\n\\begin{equation}\n    \\langle X_C\\rangle + i \\langle Y_C\\rangle = L_A(T),\n\\end{equation}\nwhere $L_A(T)$ is the ``Loschmidt'' auto-correlation function,\n\\begin{equation}\n    L_A(T) = \\frac{1}{D} \\tr\\left( e^{i H_1 T} A e^{-i H_2 T} A \\right).\n\\end{equation}\n\nWe refer to this as a Loschmidt auto-correlator since it is a correlation-function version of the traditional Loschmidt echo, which is defined by taking an initial state $|\\psi\\rangle$, evolving for time $T$ with Hamiltonian $H_2$, then evolving for time $T$ with Hamiltonian $-H_1$. The return amplitude is $\\langle \\psi | e^{i H_1 T} e^{- i H_2 T} | \\psi \\rangle$, and when the state $|\\psi\\rangle$ is the infinite temperature thermofield double state, the return amplitude is $L_{\\text{Id}}(T)$. \n\nThe Loschmidt correlator is thus a natural generalization of the return amplitude in the Loschmidt echo. 
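As an aside, these operator sums follow from completeness alone. Taking the basis $\\{A\\}$ to be orthonormal with respect to the Hilbert-Schmidt inner product, $\\tr(AB)=\\delta_{AB}$ (a normalization convention we adopt here for illustration), completeness reads $\\sum_A A_{jk}A_{li}=\\delta_{ji}\\delta_{kl}$, so that for any pair of unitaries $U$ and $V$\n\\begin{equation}\n    \\frac{1}{D}\\sum_A \\tr\\left(U A V A\\right)=\\frac{1}{D}\\sum_{i,j,k,l}U_{ij}V_{kl}\\sum_A A_{jk}A_{li}=\\frac{\\tr(U)\\,\\tr(V)}{D}.\n\\end{equation}\nSetting $U=e^{iHT}$ and $V=e^{-iHT}$ reproduces the sum rule for $G_A$ above, while $U=e^{iH_1T}$ and $V=e^{-iH_2T}$ gives the Loschmidt version that follows. 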
Moreover, if we sum the Loschmidt correlator over all choices of $A$, we get precisely the LSFF,\n\\begin{equation}\n    \\sum_A L_A(T) = \\frac{\\text{LSFF}(T)}{D}.\n\\end{equation}\nHence, the LSFF is an object that can probe both spectral correlations and the physics of the Loschmidt echo in an operator-independent way.\n\nAs we discuss below, in addition to these elementary motivations, the LSFF appears in a variety of other contexts, including as a part of the SFF in systems with spontaneous symmetry breaking. In fact, this symmetry breaking application is how we first came to consider the LSFF. Finally, the LSFF is a time-domain version of the long-studied phenomenon known as parametric correlations, e.g.~\\cite{PhysRevLett.70.4063,Weidenmuller_2005,guhr1998random,https:\/\/doi.org\/10.48550\/arxiv.2205.12968}. For all these reasons, the LSFF is a natural extension of the SFF which is worthy of study in its own right.\n\nIn the remainder of the introduction, we include two subsections, one that defines the LSFF and its filtered cousins in more detail, and one that discusses our motivations. The rest of the paper is organized as follows. Section \\ref{sec:Loschmidt} derives a formula for the Loschmidt SFF, connecting it to the Loschmidt echo. This section also extends beyond the standard Loschmidt analysis to include more complicated connected diagrams. Section \\ref{sec:hydro} reformulates these results in a hydrodynamic language, and calculates new results for the Loschmidt SFF in hydrodynamic systems with spatial extent. Finally, Section \\ref{sec:conclusion} contains concluding remarks.\n\n\n\n\\subsection{Random matrices and Form Factors}\n\nHere we more thoroughly introduce the SFF and the LSFF. The energy level repulsion that is a hallmark of quantum chaos is an important prediction of random matrix theory. It is commonly diagnosed by the averaged Spectral Form Factor (SFF)~\\cite{haake2010quantum,BerrySemiclassical,Sieber_2001,saad2019semiclassical,PhysRevResearch.3.023118,PhysRevResearch.3.023176}, which is defined as\n\\begin{equation}\n    \\text{SFF}(T,f)=\\overline{\\tr[ f(H) e^{iHT}] \\tr[ f(H) e^{-iHT}]}.\n\\end{equation}\nHere we consider a more general definition in which we allow for a filter function $f(H)$. The filter should be some slowly varying function used to focus in on a particular energy range of interest. Useful choices include $f(H)=1, f(H)=e^{-\\beta H}$ and $f(H)=\\exp(-(H-E_0)^2\/4\\sigma^2)$.\n\nIn chaotic systems, the SFF exhibits a dip-ramp-plateau structure (figure \\ref{fig:sffPic}), while in integrable systems, the ramp is not present. \n\\begin{figure}\n    \\centering\n    \\includegraphics[scale=0.75]{sffBasic.png}\n    \\caption{Full (blue) and connected (orange) SFF for GUE random matrix theory on a log-log plot. The connected SFF has the ramp-plateau structure emblematic of level repulsion and quantum chaos.}\n    \\label{fig:sffPic}\n\\end{figure}\nIt can be shown that in systems with conserved modes, or even slow modes or nearly conserved quantities, the ramp is significantly enhanced~\\cite{Friedman_2019,winer2021hydrodynamic,roy_2020,WinerGlass}.\n\nThe ensemble average, denoted by an overline, is important for rendering the averaged SFF a smooth function of time. For a single Hamiltonian, the non-averaged SFF is an erratic function of time, although an appropriate time average is typically sufficient to make it smooth. However, we are also interested in explicitly disordered systems with fixed random couplings. 
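Such ensemble averages are straightforward to estimate numerically. The following minimal Python sketch (our own illustration, assuming numpy; the matrix size, $\\epsilon$, and sample count are arbitrary choices) estimates the averaged LSFF by direct diagonalization of sampled pairs $H_{1,2}=H\\pm\\epsilon\\,\\delta H$ with $H$ and $\\delta H$ drawn from the GUE, reducing to the averaged SFF at $\\epsilon=0$:\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\n\ndef gue(n):\n    # Hermitian matrix with independent Gaussian entries (GUE)\n    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))\n    return 0.5 * (m + m.conj().T)\n\ndef lsff(T, n=200, eps=0.02, samples=100):\n    # average of tr e^{+i H1 T} tr e^{-i H2 T} over sampled pairs\n    out = np.zeros(len(T), dtype=complex)\n    for _ in range(samples):\n        h, dh = gue(n), gue(n)\n        e1 = np.linalg.eigvalsh(h + eps * dh)\n        e2 = np.linalg.eigvalsh(h - eps * dh)\n        z1 = np.exp(1j * np.outer(T, e1)).sum(axis=1)\n        z2 = np.exp(-1j * np.outer(T, e2)).sum(axis=1)\n        out += z1 * z2\n    return out \/ samples\n\nT = np.linspace(1.0, 60.0, 30)\nprint(np.real(lsff(T)))\n\\end{verbatim}\nSubtracting the product of the separately averaged traces then isolates the connected part defined next. 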
In any case, the presence of the average means that the averaged SFF can be decomposed into the square of an average and a variance. These are often called the disconnected and connected SFF, respectively. \n\nMathematically, one can write\n\\begin{equation}\n \\text{SFF}_{\\text{conn}}=\\overline{Z Z^*}-\\overline{Z}\\ \\overline{Z^*},\n\\end{equation}\nwhere \n\\begin{equation}\n Z(T,f)=\\sum_{n} f(E_n)\\exp(-iE_nT)=\\tr f(H)e^{-iHT}.\n\\end{equation}\n$Z$ has a simple interpretation as the Fourier transform of the level density. In random matrix theory, the connected SFF has a ramp-plateau structure, with a long linear ramp terminating in a plateau (see Figure \\ref{fig:sffPic}). The plateau is the variance one would get assuming random phases for the complex exponential. The ramp, where the SFF takes on a smaller value, thus represents a suppression of variation in the Fourier transform of the density. This suppression is greater at lower frequencies, owing to the long-range nature of the repulsion.\n\nSetting the $f$ factors equal to $1$, the SFF is a path integral on a particular time contour, depicted in Figure \\ref{fig:sffcontour}.\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.2]{SFFcontour.png}\n \\caption{Two disconnected periodic contours in opposite directions. Evaluating the path integral on this contour calculates an SFF. As shown, there is no interaction between the two contours and the SFF factors, but this factorization is destroyed by the introduction of a disorder average.}\n \\label{fig:sffcontour}\n\\end{figure}\nThis contour has two periodic time legs with periods $+T$ and $-T$. There is no direct interaction between them. Introducing disorder leads to an effective interaction between the two legs, allowing them to be treated with a single interacting theory \\cite{saad2019semiclassical}. This theory has a $U(1)\\times U(1)$ translational symmetry, one cycle for each leg of the contour. When treated semiclassically, certain saddle points spontaneously break this symmetry. The resulting Goldstone manifold has size proportional to $T$, leading to a linear ramp.\n\nWe define the (connected) Loschmidt SFF (LSFF) as\n\\begin{equation}\n \\text{LSFF}_{\\text{conn}}=\\overline{\\tr[ f(H_1) e^{iH_1T}]\\tr[ f(H_2) e^{-iH_2T}]}-\\overline{\\tr[ f(H_1) e^{iH_1T}]}\\ \\overline{\\tr[ f(H_2) e^{-iH_2T}]}.\n\\end{equation}\nIf the connected SFF is a variance, the Loschmidt SFF is a correlation. We focus on the case where we draw two Hamiltonians $H_1,H_2$ from a joint distribution which makes them similar. For instance, they might satisfy $H_1=H+\\epsilon \\delta H,H_2=H-\\epsilon \\delta H$, for some small $\\epsilon$ and $H,\\delta H$ drawn from a normalized GUE ensemble. Alternatively, $H_1$ and $H_2$ might be random-field Heisenberg models or SYK clusters~\\cite{sachdev_sy_1993,kitaev_syk_2015,rosenhaus_syk_2016,maldacena_syk_2016} with strongly correlated but not identical disorder. We will focus mainly on the case where $T>0$, noting that $\\text{LSFF}(-T)=\\text{LSFF}(T)^*$.\n\nAs discussed above, the name comes from an analogy with the Loschmidt echo \\cite{2012,2012Loschmidt,2006Loschmidt,Goussev:2012}. The echo can be written as $|\\bra \\psi e^{iH_1 T}e^{-iH_2 T}\\ket \\psi|^2,$ and it can be interpreted as a diagnostic of the fidelity of time reversal. 
If one starts with a state $\\ket \\psi$, evolves under Hamiltonian $H_1$ for time $T$, then evolves under $-H_2\\approx -H_1$ for time $T$, the Loschmidt echo diagnoses how close one comes to the original state.\n\n\\subsection{Motivations for the Loschmidt Spectral Form Factor}\n\nThere are several motivations to think about the LSFF. The most important is to answer the question of ``How different is different enough'' when it comes to fine-grained spectral statistics. This is an important question when considering ensembles of the form $H=H_0+\\delta H$, where $H_0$ is some fixed large Hamiltonian and $\\delta H$ is some smaller disordered perturbation. Can we think of the spectral statistics of such $H$s as independent? As we shall see, the answer for large times $T$ is yes. In physics, this ensemble has an interpretation as an ordered system with some small amount of disorder. \n\nIn mathematics, the concept of the Dyson Process \\cite{Dyson:1962brm,DysonNew,Joyner} or Matrix Brownian motion refers to starting with an initial matrix $H_1$ and adding in a GUE matrix $\\delta H$ with variance proportional to some small $t$. This can be interpreted as Brownian motion in the space of $N\\times N$ Hermitian matrices, lasting for some fictitious time $t$ with no bearing on any physical time. As the matrix evolves under this process, the eigenvalues diffuse while repelling each other. Our results show that the Fourier mode of the eigenvalue density with wavenumber $T$ decays like $\\exp(-\\#t|T|)$. This contrasts with pure eigenvalue diffusion, which would result in a decay like $\\exp(-\\#tT^2)$.\n\nThe Loschmidt SFF relates to a phenomenon called parametric correlations, e.g.~\\cite{PhysRevLett.70.4063,Weidenmuller_2005,guhr1998random,https:\/\/doi.org\/10.48550\/arxiv.2205.12968}. Mathematicians and physicists have studied the spectral correlation functions of similar matrices since the 90s. The study of parametric correlations was typically done in the energy domain as opposed to the time domain, and was focused on systems well-described by random matrices. Our work extends it to the time domain and provides physical justification for similar results by relating them to the Loschmidt echo and to the SFF hydrodynamics \\cite{winer2021hydrodynamic,Winer_2022}.\n\nThe Loschmidt SFF also emerges naturally when one is calculating full SFFs of specific systems. For systems whose Hamiltonian can be written in block diagonal form as \n\\begin{equation}\n    H=\\begin{pmatrix}H_0-\\delta H&0\\\\0&H_0+\\delta H\\end{pmatrix},\n\\end{equation}\nthe SFF naturally decomposes into the sum of the SFFs of $H_0-\\delta H$ and $H_0+\\delta H$, plus twice the Loschmidt SFF between the two blocks.\n\nSuch Hamiltonians arise naturally, for instance, in the case of spontaneous symmetry breaking (SSB), where different charge sectors have very similar Hamiltonians acting on collections of 'cat states'. Indeed, in \\cite{Winer_2022} we performed this calculation and obtained results consistent with the more general results here. \n\n\\section{A Simple Formula for the Loschmidt Spectral Form Factor}\n\\label{sec:Loschmidt}\n\nRecall that we are interested in the quantity\n\\begin{equation}\n    \\text{LSFF}(T,f)=\\overline {\\tr[f(H_1) e^{iH_1T}]\\tr [f(H_2) e^{-iH_2T}]},\n\\end{equation}\nin the case where $H_1$ and $H_2$ are two highly correlated but not identical Hamiltonians drawn from some joint distribution. 
We define $H_1 = H+ \\epsilon \\delta H$ and $H_2 = H - \\epsilon \\delta H$, where $\\epsilon$ controls the closeness of $H_1$ and $H_2$.\n\nLet us begin by elaborating on the similarities between this object and the traditional Loschmidt echo amplitude~\\cite{2012Loschmidt,2006Loschmidt,Goussev:2012}, $\\bra{\\psi}e^{iH_1T}e^{-iH_2T}\\ket{\\psi}$. If we average the echo amplitude over $|\\psi\\rangle$ weighted by factors of $f(H_1)$ and $f(H_2)$, then we obtain $\\tr [f(H_1)e^{iH_1T}e^{-iH_2T}f(H_2)]$. This is a single-trace version of $\\text{LSFF}(T,f)$ which can be evaluated with a Schwinger-Keldysh path integral \\cite{Keldysh:1964ud,Kamenev_2009,kamenev_2011,CHOU19851,Haehl_2017}. \n\nThe standard S-K path integral is defined on the contour depicted in Figure \\ref{fig:keldysh}. The contour consists of a thermal circle to prepare a canonical ensemble, and then forward and backward time evolution. Insertions can be placed along either or both of the real time legs of the contour. We first review this construction.\n\\begin{figure}\n    \\centering\n    \\includegraphics[scale=0.5]{schwingerKeldysh.png}\n    \\caption{A Schwinger-Keldysh contour evolves forwards in time by $T$, then backwards by $T$, then by $i\\beta$ in imaginary time. It is often used to calculate dynamical correlations in a thermal background. When identical sources are present on the forward and backwards legs, the partition function on this contour is equal to the traditional thermal partition function. But when the sources differ, the partition function on this contour is decreased by an amount related to the Loschmidt echo.}\n    \\label{fig:keldysh}\n\\end{figure}\n\nLet's assume at some distant time in the past the system is prepared in a microcanonical ensemble at energy $E$ under the Hamiltonian $H$. This can be a single state, a sampling of states from a narrow energy window, or even a thermal state. We denote the choice generically by the state $|\\psi\\rangle$. We assume that the Hamiltonian $H$ is chaotic and obeys the eigenstate thermalization hypothesis (ETH)\\cite{DeutschETH,SredETH,Rigol_2008,D_Alessio_2016}. As such, it doesn't matter very much what choice we make. Our density matrix evolves under just $H$ until time $0$, at which point sources $\\pm \\epsilon \\delta H$ are turned on along the two real-time legs of the contour.\n\nThis can be evaluated to leading order in perturbation theory in $\\epsilon$ in terms of a cumulant expansion~\\cite{2006Loschmidt}. We want the echo amplitude,\n\\begin{equation}\n    \\overline {\\mathcal P\\exp(i\\int_0^T \\epsilon \\delta H(t)-i\\int_T^0 \\epsilon \\delta H(t))},\n    \\label{eq:LoschmidtFormula}\n\\end{equation}\nwhere $\\mathcal P$ denotes path ordering on the Schwinger-Keldysh contour, and the overline represents both a disorder average and a quantum expectation value of the operator in the interaction picture of $H$. Since equation \\eqref{eq:LoschmidtFormula} is the expected value of an exponential, it can be expressed as a cumulant expansion using the general identity $\\overline{\\mathcal P \\exp(\\epsilon \\int O(t) dt)}=\\exp(\\sum_i \\epsilon^i \\kappa_i\/i!)$, where $\\kappa_i$ is the $i$th path-ordered cumulant of $\\int O(t) dt$. \n\nThe first cumulant in our expansion is just the mean,\n\\begin{equation}\n    \\overline{i\\int_0^T \\epsilon \\delta H(t)-i\\int_T^0 \\epsilon \\delta H(t)}=2i\\epsilon \\overline{\\delta H}T.\n\\end{equation} \nThe second cumulant is the variance of $\\left(i\\int_0^T \\epsilon \\delta H(t)-i\\int_T^0 \\epsilon \\delta H(t)\\right)$. 
Expressed in terms of \n\\begin{equation}\n    G(t) =\\overline{\\delta H(t)\\delta H(0)}-\\overline{\\delta H}^2,\n\\end{equation}\nwhere again the overline includes the quantum average, the second cumulant is\n\\begin{equation}\n    4\\epsilon^2|T| \\int_{-\\infty}^\\infty G(t) dt \\ .\n\\end{equation} \nPlugging in the first two terms of the cumulant expansion gives\n\\begin{equation}\n\\begin{split}\n    \\bra{\\psi}e^{iH_1T}e^{-iH_2T}\\ket{\\psi}=\\exp(i\\lambda T)\\\\\n    \\lambda=2\\epsilon \\overline {\\delta H}+4i\\epsilon^2\\int_{0}^\\infty G(t)dt+O(\\epsilon^3).\n\\end{split}\n\\end{equation}\nAs we can see, the final answer for $\\bra{\\psi}e^{iH_1T}e^{-iH_2T}\\ket{\\psi}$ is exponential in the sum of the cumulants. This is true for exactly the same reason that the partition function is exponential in the sum of the connected diagrams.\n\n \n\nNow we modify the construction to access the LSFF along the same lines as in \\cite{winer2021hydrodynamic,saad2019semiclassical}. The filter functions allow us to restrict the energy range, while the averaging yields a non-erratic function of time and justifies coupling the two legs of the contour. The cumulant expansion on the SFF contour instead of the Schwinger-Keldysh contour gives the same cumulants, because finite-time correlations aren't sensitive to the change in boundary conditions. Plugging them into the cumulant expansion gives the novel result \n\\begin{equation}\n\\begin{split}\n    \\text{LSFF}(T,f)=|T|\\int dE \\frac{f^2(E)e^{i\\lambda T}}{2 \\pi}\\\\\n    \\lambda=2\\epsilon \\overline {\\delta H}+4i\\epsilon^2\\int_{0}^\\infty G(t)dt+O(\\epsilon^3).\n\\end{split}\n    \\label{eq:bigResult}\n\\end{equation}\n\n\nAs a comment, it will be helpful to write $\\text{Im}(\\lambda)$ in another form. The integrated two-point function appearing in $\\lambda$ can also be written \n\\begin{equation}\n\\begin{split}\n    4\\epsilon^2\\int_{0}^\\infty G(t)dt=4\\pi \\epsilon^2 \\sigma^2 \\rho(E)\\\\\n    \\sigma^2=\\left|\\textrm{root-mean-square matrix element of $\\delta H$ between states with energy $\\sim E$}\\right|^2.\n\\end{split}\n\\end{equation}\nTo give a simple example, suppose $|\\psi\\rangle$ is an infinite temperature state and $\\delta H$ is traceless. Then the real part of $\\lambda$ vanishes and the cumulant expansion predicts a pure exponential decay of the echo amplitude. \n\nOne useful analogy to have in mind is that the exponent $\\lambda$ in our cumulant expansion is a type of free energy density. The Loschmidt SFF is a path integral evaluated with a nontraditional action on a nontraditional doubled spacetime of size proportional to $T$. It can be interpreted as a partition function (with imaginary temperature) of a system with Hamiltonian $H_1\\otimes I-I\\otimes H_2$. From this point of view, the sum of the connected diagrams is like a free energy density. To get the partition function, we multiply $\\lambda$ by $T$ and exponentiate, just as we extract partition functions by doing the same to the free energy density.\n\nWhat then of the more complicated connected diagrams? Whether higher cumulants can contribute meaningfully or not depends on the details of the situation in question. For instance, consider the case of a large lattice of volume $V$ with some small number of disordered defects which add some local operator $O(x)$ to the Hamiltonian. Realistically, the number of defects is proportional to $V$. The cumulants of any given $O(x)$ over time are independent of $V$, and the joint cumulants are roughly zero. 
So we'd expect every term in the expansion to be of order $V$, which means the Loschmidt SFF vanishes in the thermodynamic limit. To prevent this, we can either have fewer defects, or smaller defects. In the first case, we would still see every term in the expansion contributing at the same order, whereas in the second case the leading terms would dominate.\n\\begin{figure}\n    \\centering\n    \\includegraphics[scale=0.5]{RMT1.png}\\includegraphics[scale=0.5]{RMT2.png}\n    \\caption{Loschmidt SFF divided by the SFF, numerical results (blue) vs prediction (orange). The first graph is for a sample of 1320 pairs of 1500 by 1500 GUE matrices correlated with $r=.998$. The second graph is for a sample of 10830 pairs of 300 by 300 GUE matrices correlated with $r=.98$.}\n    \\label{fig:RMT}\n\\end{figure}\nFigure \\ref{fig:RMT} illustrates the validity of equation \\eqref{eq:bigResult} for pure random matrix theory, where only the leading terms in the cumulant expansion contribute. Figure \\ref{fig:SYK} shows the same for the SYK model, a fermion model with all-to-all interactions~\\cite{sachdev_sy_1993,kitaev_syk_2015,rosenhaus_syk_2016,maldacena_syk_2016}.\n\\begin{figure}\n    \\centering\n    \\includegraphics[scale=0.5]{syksim.png}\n    \\caption{Loschmidt SFF divided by the SFF, numerical results (blue) vs prediction (orange). These results are for two instances of the SYK model \\cite{Maldacena:2016hyu,Rosenhaus_2019,Davison_2017} with Hamiltonian $H=\\sum_{ijkl}J_{ijkl}\\psi_i\\psi_j\\psi_k\\psi_l$. The two instances have $J$'s correlated with $r=.999$.}\n    \\label{fig:SYK}\n\\end{figure}\nA useful sanity check is to consider $\\delta H=i[H,O]$ for some Hermitian operator $O$. Because this operator is a time derivative, we can see that $\\lambda$ vanishes. This is what we expect, since to leading order adding $i[H,O]$ to $H$ doesn't change the spectrum, only the eigenvectors.\n\n\\subsection{Higher Cumulants}\n\nA useful toy model which is analytically tractable and where higher-order cumulants of $\\delta H$ affect the Loschmidt SFF is obtained from an $N\\times N$ GUE matrix $H_0$ and a projection matrix $P_k=\\sum_{i=1}^k \\ket{i} \\bra{i}$ onto a random basis (not the energy basis) with $k \\ll N$ nonzero eigenvalues. The perturbed Hamiltonians are $H_1=H_0+\\epsilon P_k$ and $H_2=H_0-\\epsilon P_k$. Explicitly, $H_0$ is a Hermitian matrix with elements chosen independently (except for $H_{0ij}=H_{0ji}^*$) such that each element is drawn from a complex Gaussian distribution with variance $\\sigma^2$. For convenience, in this section we will make the assumption that $T>0$. The $T<0$ case can be obtained from $\\textrm{LSFF}(-T)=\\textrm{LSFF}(T)^*$.\n\nWe will need the $n$-point functions of $\\delta H=\\epsilon P_k$ with a background at energy $E$. This can be evaluated by inserting $P_k=\\sum_{i=1}^k \\ket i\\bra i $, and using $\\bra {i_1} e^{-iH_0T}\\ket {i_2}\\approx \\delta_{i_1,i_2}\\frac{J_1(2\\sqrt{N}\\sigma T)}{\\sqrt{N}\\sigma T}$, where we make use of the fact that the states $\\ket i$ are random with respect to the energy basis. As such, we have \n\\begin{equation}\n    G(t_1,t_2,\\dots,t_n)=\\epsilon^n\\bra{E}P_k(t_1)P_k(t_2)\\dots P_k(t_n)\\ket{E}=k \\frac {\\epsilon^n}{N}e^{iEt_{1n}}\\frac{ J_1(2\\sqrt{N}\\sigma t_{12})}{\\sqrt{N}\\sigma t_{12}}\\frac{J_1(2\\sqrt{N}\\sigma t_{23})}{\\sqrt{N}\\sigma t_{23}}\\dots \\frac{ J_1(2\\sqrt{N}\\sigma t_{n-1,n})}{\\sqrt{N}\\sigma t_{n-1,n}},\n\\end{equation}\nwhere $t_{ij}=t_i-t_j$. 
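(The Bessel kernel here is just the Fourier transform of the Wigner semicircle law, a standard identity we record for orientation: for spectral radius $R=2\\sqrt{N}\\sigma$,\n\\begin{equation}\n    \\bra{i}e^{-iH_0t}\\ket{i}\\approx\\int dE\\,\\frac{2\\sqrt{R^2-E^2}}{\\pi R^2}\\,e^{-iEt}=\\frac{2J_1(Rt)}{Rt}=\\frac{J_1(2\\sqrt{N}\\sigma t)}{\\sqrt{N}\\sigma t}\\ ,\n\\end{equation}\nwhich is the form used above.) 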
To leading order in $N$, these correlation functions are exactly equal to the cumulants.\n\nIntegrating over all time-ordered configurations of the $t_i$ we get a total contribution of\n\\begin{equation}\n    \\int d^n t\\, G(t_1,t_2,\\dots,t_n)=2n k\\frac {\\epsilon^n}{N^n}(\\pi \\rho(E))^{n-1} T.\n\\end{equation}\nThe factor of $2n$ comes from the number of time-ordered ways to assign the insertions to the two contours. Once that is done, the $t_{ij}$ integrals can be done separately. Summing these contributions as a geometric-type series (restoring the factor of $i$ that accompanies each insertion, so that the $n$th term contributes $2nk(\\epsilon\/N)^n(i\\pi\\rho(E))^{n-1}$ to $\\lambda$) gives\n\\begin{equation}\n    \\begin{split}\n    \\lambda=2k\\frac{\\epsilon}{N} \\frac 1 {(1-\\pi i \\rho(E) \\epsilon\/N)^2}\n    \\label{eq:higherCumulantSummed},\n    \\end{split}\n\\end{equation}\nwhich can be shown to agree with equation \\eqref{eq:bigResult} up to the first two orders in $\\epsilon$. The accuracy of equation \\eqref{eq:higherCumulantSummed} is borne out in Figure \\ref{fig:cumulant}.\n\\begin{figure}\n    \\centering\n    \\includegraphics{higherCumulantReal.pdf}\n    \\includegraphics{higherCumulantImag.pdf}\n    \\caption{The numerical results (blue) line up almost perfectly with the higher-cumulant predictions of equation \\eqref{eq:higherCumulantSummed} (orange), far outperforming the two-cumulant prediction (green).}\n    \\label{fig:cumulant}\n\\end{figure}\n\n\\subsection{The Loschmidt Spectral Correlation Function in Energy Space}\n\nFor GUE systems, the connected spectral form factor is given by\n\\begin{equation}\n\\begin{split}\n    \\text{SFF}_{conn}(T,f)=\\int dE f^2(E) g(T,\\rho(E))\\\\\n    g(T,\\rho)=\\begin{cases} \n      \\frac {|T|}{2\\pi} & |T|\\leq 2\\pi \\rho \\\\\n      \\rho & |T|> 2\\pi \\rho\n   \\end{cases}.\n\\end{split}\n\\end{equation}\nWe can take the Fourier transform of this to get the two-point function in energy space:\n\\begin{equation}\n    \\langle \\rho(E)\\rho(E+\\Delta E)\\rangle_{conn}=- \\frac{\\sin^2(\\pi\\rho(E)\\Delta E)}\n{(\\pi\\Delta E)^2}+\\rho(E)\\delta(\\Delta E).\n\\label{eq:RMTtwopoint}\n\\end{equation}\nTaking the same Fourier transform of equation \\eqref{eq:bigResult}, we obtain the two-point function of the densities of two different Hamiltonians:\n\\begin{equation}\n    \\langle \\rho_1(E)\\rho_2(E+\\Delta E)\\rangle_{conn}=- \\frac{(\\Delta E+\\textrm{Re }\\lambda)^2-(\\textrm{Im }\\lambda)^2}\n{2((\\Delta E+\\textrm{Re } \\lambda)^2+(\\textrm{Im }\\lambda)^2)^2}.\n\\label{eq:Loschmidttwopoint}\n\\end{equation}\nFor large $\\Delta E$, equations \\eqref{eq:RMTtwopoint} and \\eqref{eq:Loschmidttwopoint} agree, but the short-range behavior is entirely different, confirming that small changes to the Hamiltonian have a drastic effect at small energy separations (long times) but a negligible effect at large energy separations (short times).\n\n\n\\subsection{How Different Is Different Enough?}\n\nOne of the central questions we hope to answer is how different two samples need to be in order to have effectively independent spectral statistics. We now have the power to answer this question. If we have two samples with a single defect on a single site different between them, this will lead to a non-extensive decay of the Loschmidt SFF. 
A decay of the Loschmidt SFF independent of system size contrasts with many SFF-related quantities that grow with system size, such as the Heisenberg time, which grows exponentially with system size, and the Thouless time, which grows (for systems with a local conserved energy) at least quadratically in system size \\cite{winer2021hydrodynamic,roy_2020, Schiulaz_2019}.\n\nIf the number of defects grows with system size at all, then we find that the Loschmidt SFF decays even more quickly. This means it would be extremely difficult to observe the Loschmidt SFF directly in any large system, unless it was prepared extremely carefully. It also means that when measuring an SFF experimentally, even tiny changes (such as changing a small but extensive fraction of the couplings) to the system guarantee that the samples are effectively independent.\n\n\n\\section{The Loschmidt SFF for Hydrodynamic Systems}\n\\label{sec:hydro}\n\nSo far we have primarily considered GXE Hamiltonians, which serve as toy models for chaotic quantum systems without any conserved quantities. It is also interesting to consider models with slow modes due to the presence of conserved or almost conserved quantities. In this context, one powerful technique for calculating SFFs is a formulation of hydrodynamics known as the Doubled Periodic Time (DPT) formalism \\cite{winer2021hydrodynamic}. This technology is itself built around a hydrodynamic theory known as the CTP formalism \\cite{crossley2017effective,Glorioso_2017,glorioso2018lectures}. The CTP formalism is a theory of hydrodynamic slow modes on the Schwinger-Keldysh contour, and the DPT formalism transfers these results onto the SFF contour.\n\\begin{figure}\n    \\centering\n    \\includegraphics[scale=0.1]{Masterpiece.png}\n    \\caption{The idea behind the Doubled Periodic Time (DPT) formalism. Just as microscopic fields on a Schwinger-Keldysh contour are integrated out to give an effective field theory of hydrodynamics, the microscopic degrees of freedom on an SFF contour are integrated out to give that same action, up to different boundary conditions and exponentially small (in $T$) corrections.}\n    \\label{fig:BrianMasterpiece}\n\\end{figure}\nIn this section, we review the DPT formalism and then extend it to the Loschmidt SFF.\n\n\\subsection{Quick Review of CTP Formalism}\n\\label{subsec:CTP}\nHydrodynamics can be viewed as the program of creating effective field theories (EFTs) for systems based on the principle that long-time and long-range physics is driven primarily by conservation laws and other protected slow modes. One particular formulation is the CTP formalism explained concisely in~\\cite{glorioso2018lectures}. For more details see~\\cite{crossley2017effective,Glorioso_2017}. Additional information about fluctuating hydrodynamics can be found in~\\cite{Grozdanov_2015,Kovtun_2012,Dubovsky_2012,Endlich_2013}.\n\nThe CTP formalism is the theory of the following partition function: \n\\begin{equation}\n    Z[A^\\mu_1(t,x),A^\\mu_2(t,x)]=\\tr \\left( e^{-\\beta H} \\mathcal{P} e^{i\\int dt d^d x A^\\mu_1j_{1\\mu}} \\mathcal{P} e^{-i\\int dt d^d x A^\\mu_2 j_{2\\mu}}\\right),\n\\end{equation}\nwhere $\\mathcal P$ is a path ordering on the Schwinger-Keldysh contour. Here the $j$ operators are local conserved currents. $j_1$ and $j_2$ act on the forward and backward contours, respectively. \n\nFor $A_1=A_2=0$, $Z$ is just the thermal partition function at inverse temperature $\\beta$. 
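Concretely, a single functional derivative at vanishing sources inserts one current operator on the chosen leg; schematically (suppressing contour ordering),\n\\begin{equation}\n    \\left.\\frac{1}{i}\\frac{\\delta Z}{\\delta A^\\mu_1(t,x)}\\right|_{A_1=A_2=0}=\\tr\\left( e^{-\\beta H} j_{1\\mu}(t,x)\\right),\n\\end{equation}\nwith $j_{1\\mu}(t,x)$ evolved to time $t$ in the Heisenberg picture, and repeated derivatives build up contour-ordered products. 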
Differentiating $Z$ with respect to the $A$s generates insertions of the conserved current density $j_\\mu$ along either leg of the Schwinger-Keldysh contour. Thus $Z$ is a generator of all possible contour-ordered correlation functions of current operators.\n\nOne always has a representation of $Z$ as a path integral,\n\\begin{equation}\n    Z[A^\\mu_1,A^\\mu_2]=\\int \\prod_a \\mathcal D \\phi^a_1\\, \\mathcal D \\phi^a_2 \\exp\\left(i\\int dt dx W_{micro}[A_{1\\mu},A_{2\\mu},\\phi^a_1,\\phi^a_2]\\right),\n\\end{equation}\nfor some collection of microscopic local fields, the $\\phi^a$s. The fundamental insight of hydrodynamics is that at long times and distances, any $\\phi^a$s that decay rapidly can be integrated out. What's left over is one effective $\\phi$ per contour to enforce the conservation law $\\partial^\\mu j_{i\\mu}=0$. Our partition function can thus be written \n\\begin{equation}\n    \\begin{split}\n    Z[A^\\mu_1,A^\\mu_2]=\\int \\mathcal D \\phi_1\\mathcal D \\phi_2 \\exp\\left(i\\int dt dx W[B_{1\\mu},B_{2\\mu}]\\right),\\\\\n    B_{i\\mu}(t,x)=\\partial_\\mu \\phi_i(t,x)+A_{i\\mu}(t,x),\n    \\end{split}\n    \\label{eq:AbelianB}\n\\end{equation}\nwhich is essentially a definition of the hydrodynamic field $\\phi$ and the effective action $W$.\n\nInsertions of the currents are obtained by differentiating $Z$ with respect to the background gauge fields $A_{i\\mu}$. A single such functional derivative gives a single insertion of the current, and so one presentation of current conservation is the identity $\\partial_\\mu \\frac{\\delta Z}{\\delta A_{i\\mu}} = 0$. A demonstration that this enforces the conservation law is given in appendix \\ref{app:CTPstuff}.\n\nCrucially, the functional $W$ is not arbitrary. The key assumption of hydrodynamics is that $W$ is the spacetime integral of a local action. Moreover, when expressed in terms of\n\\begin{equation}\n\\begin{split}\n    B_a=B_1-B_2,\\\\\n    B_r=\\frac{B_1+B_2}{2},\n\\end{split}\n\\end{equation}\nthere are several constraints which follow from unitarity:\n\\begin{itemize}\n    \\item $W$ terms all have at least one power of $B_a$, that is $W=0$ when $B_a=0$.\n    \\item Terms odd (even) in $B_a$ make a real (imaginary) contribution to the action.\n    \\item All imaginary contributions to the action are positive imaginary.\n    \\item A KMS constraint imposing fluctuation-dissipation relations.\n    \\item Unless the symmetry is spontaneously broken, all factors of $B_r$ have at least one time derivative.\n    \\item Any correlator in which the operator at the latest time is of $a$ type will evaluate to 0 (known as the last time theorem, or LTT).\n\\end{itemize}\nWhen calculating SFFs, one typically sets the external sources $A$ to zero, so the action can be written purely in terms of the derivatives of the $\\phi$s.\n\nThe $\\phi$s have a physical interpretation depending on the precise symmetry in question. In the case of time translation \/ energy conservation, the $\\phi$s are the physical time corresponding to a given fluid time (and are often denoted $\\sigma$). In the case of a U(1) internal symmetry \/ charge conservation, they are local fluid phases. One simple quadratic action for an energy-conserving system consistent with the above conditions is\n\\begin{equation}\n    L=\\sigma_a\\left(D\\kappa \\beta^{-1}\\nabla^2\\partial_t \\sigma_r-\\kappa \\beta^{-1}\\partial_t^2\\sigma_r\\right)\n    +i\\beta^{-2}\\kappa(\\nabla \\sigma_a)^2. 
\n\\end{equation}\nThe reader should imagine that this action is corrected by cubic and higher-order terms in the $\\sigma$s and by higher derivative corrections even at the quadratic level.\n\n\n\\subsection{Moving on to DPT and the SFF}\n\nAs argued in \\cite{winer2021hydrodynamic}, the SFF enhancement for a diffusive system can be calculated by evaluating the path integral\n\\begin{equation}\n    \\text{SFF} = \\int \\mathcal{D} \\sigma_1 \\mathcal{D} \\sigma_2 f(E_1)f(E_2)e^{i W(\\sigma_1,\\sigma_2)},\n\\end{equation}\nwhere $\\sigma_{1,2}$ represent reparameterization modes on the two legs of the contour. It is often useful to define $\\sigma_a=\\sigma_1-\\sigma_2$ and $\\rho=\\frac {\\partial S}{\\partial(\\partial_t \\sigma_a)}$. Here $\\rho$ is the average energy density between the two contours, and can be written $\\rho=\\kappa \\beta^{-1}\\partial_t \\sigma_r$. In this notation, \n\\begin{equation}\n    L=-\\sigma_a\\left(\\partial_t\\rho-D\\nabla^2\\rho\\right)+i\\beta^{-2}\\kappa(\\nabla \\sigma_a)^2,\n\\end{equation}\nand the effective action is $W = \\int dt L$. Importantly, the boundary conditions are no longer those of the Schwinger-Keldysh contour but those of the SFF contour.\n\nSince the action is entirely Gaussian, we can evaluate the path integral exactly. We first break into Fourier modes in the spatial directions. The remaining integral is\n\\begin{equation}\n    \\prod_{k}\\int\\mathcal D\\rho_k\\,\\mathcal D\\sigma_{ak}\\,f(E_1)f(E_2) \\exp\\left(-\\int dt \\left[i\\sigma_{ak}\\left(\\partial_t\\rho_k+Dk^2\\rho_k\\right)+\\beta^{-2}\\kappa k^2 \\sigma_{ak}^2\\right]\\right) \\ .\n\\end{equation}\nFor $k\\neq 0$, breaking the path integral into time modes gives an infinite product which works out to $\\frac{1}{1-e^{-Dk^2T}}$. For $k=0$, we just integrate over the full manifold of possible zero-frequency $\\sigma_a$s and $\\rho$s to get $\\frac T{2\\pi} \\int f^2(E) dE$. Including other modes gives\n\\begin{equation}\n    \\text{SFF}=\\left[\\prod_k \\frac{1}{1-e^{-Dk^2T}} \\right]\\frac T{2\\pi} \\int f^2(E) dE.\n\\end{equation}\n\n\\subsection{Coupling In Sources}\n\nIn this subsection, we will focus specifically on sources that couple to conserved currents, but the next paragraph applies to any operator. Because of the relative minus sign between the two contours, $A_r$ couples to $j_a$ and vice versa. A configuration where $A_a=0$, $A_r=A$ corresponds to unitary evolution with background potential $A$. With CTP boundary conditions, the partition function for $A_a=0$, $A_r=A$ is exactly $Z(\\beta)$, regardless of $A_r$. So when $A_a=0$, any number of derivatives with respect to $A_r$ (insertions of $j_a$) results in a correlator of zero.\n\nWith periodic boundary conditions, this is no longer entirely true. The trace can take on different values, and changing details of the unitary evolution results in a change in the SFF, albeit one which decays as $T$ grows and the effects of the periodic boundary conditions grow more mild. The intuitive explanation for this is that the SFF at times less than the Thouless time depends on non-universal properties of the Hamiltonian, and coupling in $A_r$ is effectively a change to the Hamiltonian. Thus it can affect the SFF before the Thouless time.\n\n$A_a$ is a different story. Turning on a nonzero $A_a$ term corresponds to having a different Hamiltonian on the forward vs backward path. This changes the partition function even in the CTP case. 
In the periodic-time setting, this transforms our SFF into a Loschmidt SFF, a pair of periodic contours with slightly different Hamiltonians along the two legs.\n\n\\subsection{A Perturbative Look at the Loschmidt SFF in Hydro}\n\nIn this subsection we will restrict our attention to systems with no conserved quantities besides the energy $H$, and thus only one conserved current $j_\\mu$.\n\nAssuming the perturbation $\\epsilon \\delta H$ has an overlap $\\epsilon \\delta (x-x_0)$ with the local energy density $j_0(x)$, we can model the Loschmidt SFF as\n\\begin{equation}\n    \\text{LSFF}(T,f)=Z_{\\text{DPT}}[A_{\\mu r}=0, A_{\\mu a} =2\\epsilon\\delta(x-x_0)\\delta_{\\mu 0}] \\ .\n\\end{equation}\nTo leading order in $\\epsilon$, the `free energy' $\\lambda$ is just a $-2i\\epsilon j_{0r}(x_0)$ insertion, which is a typical diagonal matrix element in the energy shell, so we have\n\\begin{equation}\n    \\text{LSFF}(T,f)=\\frac T{2\\pi}\\int dE f^2(E)\\exp\\left(2i\\epsilon j_{0r}(x_0)T+O(\\epsilon^2)\\right).\n\\end{equation}\n\nTo second order in $\\epsilon$, the object in the exponent is\n\\begin{equation}\n\\begin{split}\n    \\textrm{Re}\\, \\lambda=2\\epsilon j_{0r}(x_0)\\\\\n    \\textrm{Im}\\, \\lambda=4\\epsilon^2\\int_0^T dt\\, G_{rr;DPT}(x_0,x_0,t) \\ .\n\\end{split}\n\\end{equation}\nThis last integrand is the correlation function of the energy density. Because the DPT correlation functions wrap around the periodic time, this is the same as the `unwound' CTP integral \n\\begin{equation}\n    \\int_0^T dt G_{rr;DPT}(x_0,x_0,t)=\\int_{-\\infty}^\\infty dt G_{rr;CTP}(x_0,x_0,t).\n    \\label{eq:gIntegral}\n\\end{equation}\nAt times below the Thouless time and assuming spatial translation symmetry, $G_{rr;CTP}(x_0,x_0,t)$ can be modeled as \n\\begin{equation}\n    G_{rr;CTP}(0,t)=\\frac{\\kappa}{\\beta^2D}\\frac{1}{(2\\pi D|t|)^{d\/2}}.\n\\end{equation}\nFor $d\\geq 2$, this has a UV divergence, which can be cured by imposing a UV cutoff on the extent of our operator $\\delta H$. The IR divergence for $d\\leq 2$ is more interesting. It is also cured by a cutoff: the system has some finite size, and the IR behavior of $G_{rr}$ depends on that system's size and shape, as well as the precise location of $x_0$ and how strong the slowest modes are there.\n\n\\subsection{Exact Evaluation for $d=1$}\n\nAs an illustration of how the integral \\eqref{eq:gIntegral} can depend on $x_0$, we can evaluate it exactly when we have a diffusive system in 1d with length $L$.\n\nWe first express $G_{rr;CTP}(x_0,x_0,t)$ as an infinite sum:\n\\begin{equation}\n    G_{rr;CTP}(x_0,x_0,t)=\\sum_i f_i(x_0)^2\\beta^{-2}\\kappa e^{-Dk_i^2 |t|},\n\\end{equation}\nwhere the $f_i$s are eigenfunctions of $\\nabla^2$ with eigenvalues $-k_i^2$. Performing the integral gives us the sum\n\\begin{equation}\n    \\int_{-\\infty}^\\infty dt G_{rr;CTP}(x_0,x_0,t)=\\sum_i f_i(x_0)^2\\frac {2\\kappa}{\\beta^2D k_i^2}.\n    \\label{eq:inverseSum}\n\\end{equation}\nThis is just a multiple of $-(\\nabla^2)^{-1}(x_0,x_0)$. We define\n\\begin{equation}\n    C(x_1,x_2)=\n    \\begin{cases} \n      \\frac 1L (L-x_2)x_1 & x_1\\leq x_2 \\\\\n      \\frac 1L x_2 (L-x_1) & x_1\\geq x_2\n   \\end{cases}.\n\\end{equation}\nThen\n\\begin{equation}\n    \\nabla_1^2 C(x_1,x_2)=-\\delta(x_1-x_2).\n\\end{equation}\nSo the sum in equation \\eqref{eq:inverseSum} is \n\\begin{equation}\n    \\frac{2 \\kappa}{\\beta^2 D}C(x_0,x_0)=\\frac{2 \\kappa x_0(L-x_0)}{\\beta^2 DL}.\n\\end{equation}\n\n\\section{Conclusion and Discussion}\n\\label{sec:conclusion}\nIn this paper, we defined and studied the Loschmidt SFF. 
Just as the Loschmidt echo measures the similarity between $e^{iH_1T}$ and $e^{iH_2T}$ in terms of their actions on a state, the Loschmidt SFF measures their similarity in terms of spectral statistics. We found that the Loschmidt SFF decays exponentially as a function of $T$, with the same exponential rate as the echo. We studied this quantity in several situations, including an RMT model where the exponent required just a two-point function, and a model requiring higher cumulants. In both cases, our analytical prediction matched up with numerical results. We also obtained analytic results about the Loschmidt SFF in a theory where the slow dynamics is governed by a hydrodynamic theory of diffusion.\n\nOne natural extension of our work would be to study the Loschmidt SFF for integrable systems. In particular, does it always have the same long-time behavior as the Loschmidt echo?\n\nAnother important direction is to look for the Loschmidt analogue of the other connection between random matrix theory and quantum chaos: the eigenstate thermalization hypothesis. If an operator $O$ is written in the energy eigenbases of Hamiltonians $H_1$ and $H_2$, is there any relation between the matrix elements in the two bases? \n\nThis work was supported by the Joint Quantum Institute (M.W.) and by the Air Force Office of Scientific Research under award number FA9550-19-1-0360 (B.S.).\n\nNote: Just before we posted this work, we learned of an independent study of the LSFF in a holographic context that appeared a few days before~\\cite{cotler_precision_2022}.
Moreover, it can be argued on effective field theory grounds that exponential decay of correlations indeed implies the characteristic ``ramp'' phenomenon in the spectral form factor~\cite{winer2021hydrodynamic}.

The correlators $G_A$ can be measured in the following way. Consider two copies of the system, $S$ and $\bar{S}$, prepared in an infinite-temperature thermofield double state along with a control qubit $C$ initialized in the state $\frac{|0\rangle + |1\rangle}{\sqrt{2}}$. Using a conditional application of operator $A$, followed by an unconditional time-evolution, followed by another conditional application of $A$, the state becomes 
\begin{equation}
    \frac{1}{\sqrt{2}} \left(|0\rangle_C \otimes e^{- i H T} |\infty\rangle_{S\bar{S}} + |1\rangle_C \otimes A e^{- i H T} A |\infty\rangle_{S\bar{S}} \right).
\end{equation}
By measuring the Pauli operators $X_C$ and $Y_C$ and repeating to collect statistics, one can then estimate $G_A(T)$ via
\begin{equation}
    \langle X_C \rangle + i \langle Y_C\rangle = G_A(T).
\end{equation}

From this point of view it is natural to ask what happens if the time-evolution is itself conditional. Suppose the system evolves according to $H_1$ if the control is in state $|0\rangle$ and according to $H_2$ if the control is in state $|1\rangle$. In this case, the experimental procedure now yields
\begin{equation}
    \langle X_C\rangle + i \langle Y_C\rangle = L_A(T),
\end{equation}
where $L_A(T)$ is the ``Loschmidt'' auto-correlation function,
\begin{equation}
    L_A(T) = \frac{1}{D} \tr\left( e^{i H_1 T} A e^{-i H_2 T} A \right).
\end{equation}

We refer to this as a Loschmidt auto-correlator since it is a correlation-function version of the traditional Loschmidt echo, which is defined by taking an initial state $|\psi\rangle$, evolving for time $T$ with Hamiltonian $H_2$, then evolving for time $T$ with Hamiltonian $-H_1$. The return amplitude is $\langle \psi | e^{i H_1 T} e^{- i H_2 T} | \psi \rangle$, and when the state $|\psi\rangle$ is the infinite temperature thermofield double state, the return amplitude is $L_{\text{Id}}(T)$. 

The Loschmidt correlator is thus a natural generalization of the return amplitude in the Loschmidt echo. Moreover, if we sum the Loschmidt correlator over all choices of $A$, we get precisely the LSFF,
\begin{equation}
    \sum_A L_A(T) = \frac{\text{LSFF}(T)}{D}.
\end{equation}
Hence, the LSFF is an object that can probe both spectral correlations and the physics of the Loschmidt echo in an operator-independent way.

As we discuss below, in addition to these elementary motivations, the LSFF appears in a variety of other contexts, including as a part of the SFF in systems with spontaneous symmetry breaking. In fact, this symmetry breaking application is how we first came to consider the LSFF. Finally, the LSFF is a time-domain version of the long-studied phenomenon known as parametric correlations, e.g.~\cite{PhysRevLett.70.4063,Weidenmuller_2005,guhr1998random,https://doi.org/10.48550/arxiv.2205.12968}. For all these reasons, the LSFF is a natural extension of the SFF which is worthy of study in its own right.

In the remainder of the introduction, we include two subsections, one that defines the LSFF and its filtered cousins in more detail and one that discusses our motivations further. The rest of the paper is organized as follows. Section \ref{sec:Loschmidt} derives a formula for the Loschmidt SFF, connecting it to the Loschmidt echo.
This section also extends beyond the standard Loschmidt analysis to include more complicated connected diagrams. Section \ref{sec:hydro} reformulates these results in a hydrodynamic language, and calculates new results for the Loschmidt SFF in hydrodynamic systems with spatial extent. Finally, Section \ref{sec:conclusion} contains concluding remarks.

\subsection{Random Matrices and Form Factors}

Here we more thoroughly introduce the SFF and the LSFF. The energy level repulsion that is a hallmark of quantum chaos is an important prediction of random matrix theory. It is commonly diagnosed by the averaged Spectral Form Factor (SFF)~\cite{haake2010quantum,BerrySemiclassical,Sieber_2001,saad2019semiclassical,PhysRevResearch.3.023118,PhysRevResearch.3.023176}, which is defined as
\begin{equation}
    \text{SFF}(T,f)=\overline{\tr[ f(H) e^{iHT}] \tr[ f(H) e^{-iHT}]}.
\end{equation}
Here we consider a more general definition in which we allow for a filter function $f(H)$. The filter should be some slowly varying function used to focus on a particular energy range of interest. Useful choices include $f(H)=1$, $f(H)=e^{-\beta H}$, and $f(H)=\exp(-(H-E_0)^2/4\sigma^2)$.

In chaotic systems, the SFF exhibits a dip-ramp-plateau structure (Figure \ref{fig:sffPic}), while in integrable systems, the ramp is not present. 
\begin{figure}
    \centering
    \includegraphics[scale=0.75]{sffBasic.png}
    \caption{Full (blue) and connected (orange) SFF for GUE random matrix theory on a log-log plot. The connected SFF has the ramp-plateau structure emblematic of level repulsion and quantum chaos.}
    \label{fig:sffPic}
\end{figure}
It can be shown that in systems with conserved modes, or even slow modes or nearly conserved quantities, the ramp is significantly enhanced~\cite{Friedman_2019,winer2021hydrodynamic,roy_2020,WinerGlass}.

The ensemble average, denoted by an overline, is important for rendering the averaged SFF a smooth function of time. For a single Hamiltonian, the non-averaged SFF is an erratic function of time, although an appropriate time average is typically sufficient to make it smooth. However, we are also interested in explicitly disordered systems with fixed random couplings. In any case, the presence of the average means that the averaged SFF can be decomposed into the square of an average and a variance. These are often called the disconnected and connected SFF, respectively. 

Mathematically, one can write
\begin{equation}
    \text{SFF}_{\text{conn}}=\overline{Z Z^*}-\overline{Z}\ \overline{Z^*},
\end{equation}
where 
\begin{equation}
    Z(T,f)=\sum_{n} f(E_n)\exp(-iE_nT)=\tr f(H)e^{-iHT}.
\end{equation}
$Z$ has a simple interpretation as the Fourier transform of the level density. In random matrix theory, the connected SFF has a ramp-plateau structure, with a long linear ramp terminating in a plateau (see Figure \ref{fig:sffPic}). The plateau is the variance one would get assuming random phases for the complex exponential. The ramp, where the SFF takes on a smaller value, thus represents a suppression of variation in the Fourier transform of the density. This suppression is greater at lower frequencies, owing to the long-range nature of the repulsion.

Setting the $f$ factors equal to $1$, the SFF is a path integral on a particular time contour, depicted in Figure \ref{fig:sffcontour}.
\begin{figure}
    \centering
    \includegraphics[scale=0.2]{SFFcontour.png}
    \caption{Two disconnected periodic contours in opposite directions.
Evaluating the path integral on this contour calculates an SFF. As shown, there is no interaction between the two contours and the SFF factorizes, but this factorization is destroyed by the introduction of a disorder average.}
    \label{fig:sffcontour}
\end{figure}
This contour has two periodic time legs with periods $+T$ and $-T$. There is no direct interaction between them. Introducing disorder leads to an effective interaction between the two legs, allowing them to be treated with a single interacting theory \cite{saad2019semiclassical}. This theory has a $U(1)\times U(1)$ translational symmetry, one cycle for each leg of the contour. When treated semiclassically, certain saddle points spontaneously break this symmetry. The resulting Goldstone manifold has size proportional to $T$, leading to a linear ramp.

We define the (connected) Loschmidt SFF (LSFF) as
\begin{equation}
    \text{LSFF}_{\text{conn}}=\overline{\tr[ f(H_1) e^{iH_1T}]\tr[ f(H_2) e^{-iH_2T}]}-\overline{\tr[ f(H_1) e^{iH_1T}]}\ \overline{\tr[ f(H_2) e^{-iH_2T}]}.
\end{equation}
If the connected SFF is a variance, the Loschmidt SFF is a correlation. We focus on the case where we draw two Hamiltonians $H_1$, $H_2$ from a joint distribution which makes them similar. For instance, they might satisfy $H_1=H+\epsilon \delta H$, $H_2=H-\epsilon \delta H$, for some small $\epsilon$ and $H,\delta H$ drawn from a normalized GUE ensemble. Alternatively, $H_1$ and $H_2$ might be random-field Heisenberg models or SYK clusters~\cite{sachdev_sy_1993,kitaev_syk_2015,rosenhaus_syk_2016,maldacena_syk_2016} with strongly correlated but not identical disorder. We will focus mainly on the case where $T>0$, noting that $\text{LSFF}(-T)=\text{LSFF}(T)^*$.

As discussed above, the name comes from an analogy with the Loschmidt echo \cite{2012,2012Loschmidt,2006Loschmidt,Goussev:2012}. The echo can be written as $|\bra \psi e^{iH_1 T}e^{-iH_2 T}\ket \psi|^2,$ and it can be interpreted as a diagnostic of the fidelity of time reversal. If one starts with a state $\ket \psi$, evolves under Hamiltonian $H_1$ for time $T$, then evolves under $-H_2\approx -H_1$ for time $T$, the Loschmidt echo diagnoses how close one comes to the original state.

\subsection{Motivations for the Loschmidt Spectral Form Factor}

There are several motivations to think about the LSFF. The most important is to answer the question of ``How different is different enough'' when it comes to fine-grained spectral statistics. This is an important question when considering ensembles of the form $H=H_0+\delta H$, where $H_0$ is some fixed large Hamiltonian and $\delta H$ is some smaller disordered perturbation. Can we think of the spectral statistics of such $H$s as independent? As we shall see, the answer for large times $T$ is yes. In physics, this ensemble has an interpretation as an ordered system with some small amount of disorder. 

In mathematics, the concept of the Dyson process \cite{Dyson:1962brm,DysonNew,Joyner} or matrix Brownian motion refers to starting with an initial matrix $H_1$ and adding in a GUE matrix $\delta H$ with variance proportional to some small $t$. This can be interpreted as Brownian motion in the space of $N\times N$ Hermitian matrices lasting for some fictitious time $t$ with no bearing on any physical time. As the matrix evolves under this process, the eigenvalues diffuse while repelling each other.
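As a rough numerical illustration of this process (our own sketch, not part of the original analysis; the matrix size, fictitious time, and sample count below are arbitrary choices), one can draw a GUE matrix, add a small Brownian increment, and correlate the spectral Fourier modes of the two matrices:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def gue(n, rng):
    # Hermitian matrix with Gaussian entries (GUE up to normalization).
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

N, t_fict, samples = 300, 1e-3, 100
Ts = np.linspace(0.0, 40.0, 200)
lsff = np.zeros(len(Ts), dtype=complex)
for _ in range(samples):
    H1 = gue(N, rng)
    H2 = H1 + np.sqrt(t_fict) * gue(N, rng)  # Brownian increment of size t
    E1, E2 = np.linalg.eigvalsh(H1), np.linalg.eigvalsh(H2)
    z1 = np.exp(1j * np.outer(Ts, E1)).sum(axis=1)   # tr exp(+i H1 T)
    z2 = np.exp(-1j * np.outer(Ts, E2)).sum(axis=1)  # tr exp(-i H2 T)
    lsff += z1 * z2 / samples
# On the ramp, |lsff| should fall off roughly like exp(-# * t_fict * |T|).
\end{verbatim}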
Our results show that the Fourier mode of the eigenvalue density with wavenumber $T$ decays like $\exp(-\#t|T|)$. This contrasts with pure eigenvalue diffusion, which would result in a decay like $\exp(-\#tT^2)$.

The Loschmidt SFF relates to a phenomenon called parametric correlations, e.g.~\cite{PhysRevLett.70.4063,Weidenmuller_2005,guhr1998random,https://doi.org/10.48550/arxiv.2205.12968}. Mathematicians and physicists have studied the spectral correlation functions of similar matrices since the 90s. The study of parametric correlations was typically done in the energy domain as opposed to the time domain, and was focused on systems well-described by random matrices. Our work extends it to the time domain and provides physical justification for similar results by relating them to the Loschmidt echo and to the hydrodynamics of the SFF \cite{winer2021hydrodynamic,Winer_2022}.

The Loschmidt SFF also emerges naturally when one is calculating full SFFs of specific systems. For systems whose Hamiltonian can be written in block-diagonal form as 
\begin{equation}
    H=\begin{pmatrix}H_0-\delta H&0\\0&H_0+\delta H\end{pmatrix},
\end{equation}
the SFF naturally decomposes into the sum of the SFFs of $H_0-\delta H$ and $H_0+\delta H$, plus twice the Loschmidt SFF between the two blocks.

Such Hamiltonians arise naturally, for instance, in the case of spontaneous symmetry breaking (SSB), where different charge sectors have very similar Hamiltonians acting on collections of ``cat states''. Indeed, in \cite{Winer_2022} we performed this calculation and obtained results consistent with the more general results here. 

\section{A Simple Formula for the Loschmidt Spectral Form Factor}
\label{sec:Loschmidt}

Recall that we are interested in the quantity
\begin{equation}
    \text{LSFF}(T,f)=\overline {\tr[f(H_1) e^{iH_1T}]\tr [f(H_2) e^{-iH_2T}]},
\end{equation}
in the case where $H_1$ and $H_2$ are two highly correlated but not identical Hamiltonians drawn from some joint distribution. We define $H_1 = H+ \epsilon \delta H$ and $H_2 = H - \epsilon \delta H$, where $\epsilon$ controls the closeness of $H_1$ and $H_2$.

Let us begin by elaborating on the similarities between this object and the traditional Loschmidt echo amplitude~\cite{2012Loschmidt,2006Loschmidt,Goussev:2012}, $\bra{\psi}e^{iH_1T}e^{-iH_2T}\ket{\psi}$. If we average the echo amplitude over $|\psi\rangle$ weighted by factors of $f(H_1)$ and $f(H_2)$, then we obtain $\tr [f(H_1)e^{iH_1T}e^{-iH_2T}f(H_2)]$. This is a single-trace version of $\text{LSFF}(T,f)$ which can be evaluated with a Schwinger-Keldysh path integral \cite{Keldysh:1964ud,Kamenev_2009,kamenev_2011,CHOU19851,Haehl_2017}. 

The standard Schwinger-Keldysh path integral is defined on the contour depicted in Figure \ref{fig:keldysh}. The contour consists of a thermal circle to prepare a canonical ensemble, and then forward and backward time evolution. Insertions can be placed along either or both of the real time legs of the contour. We first review this construction.
\begin{figure}
    \centering
    \includegraphics[scale=0.5]{schwingerKeldysh.png}
    \caption{A Schwinger-Keldysh contour evolves forwards in time by $T$, then backwards by $T$, then by $i\beta$ in imaginary time. It is often used to calculate dynamical correlations in a thermal background. When identical sources are present on the forward and backwards legs, the partition function on this contour is equal to the traditional thermal partition function.
But when the sources differ, the partition function on this contour is decreased by an amount related to the Loschmidt echo.}
    \label{fig:keldysh}
\end{figure}

Let us assume that at some distant time in the past the system is prepared in a microcanonical ensemble at energy $E$ under the Hamiltonian $H$. This can be a single state, a sampling of states from a narrow energy window, or even a thermal state. We denote the choice generically by the state $|\psi\rangle$. We assume that the Hamiltonian $H$ is chaotic and obeys the eigenstate thermalization hypothesis (ETH)~\cite{DeutschETH,SredETH,Rigol_2008,D_Alessio_2016}. As such, it does not matter very much what choice we make. Our density matrix evolves under just $H$ until time $0$, when sources $\pm \epsilon \delta H$ are turned on along the two real-time legs of the contour.

This can be evaluated to leading order in perturbation theory in $\epsilon$ in terms of a cumulant expansion~\cite{2006Loschmidt}. We want the echo amplitude,
\begin{equation}
    \overline {\mathcal P\exp\left(i\int_0^T \epsilon \delta H(t)\,dt-i\int_T^0 \epsilon \delta H(t)\,dt\right)},
    \label{eq:LoschmidtFormula}
\end{equation}
where $\mathcal P$ denotes path ordering on the Schwinger-Keldysh contour, and the overline represents both a disorder average and a quantum expectation value of the operator in the interaction picture of $H$. Since equation \eqref{eq:LoschmidtFormula} is the expected value of an exponential, it can be expressed as a cumulant expansion using the general identity $\overline{\mathcal P \exp(\epsilon \int O(t) dt)}=\exp(\sum_i \epsilon^i \kappa_i/i!)$, where $\kappa_i$ is the $i$th path-ordered cumulant of $\int O(t) dt$. 

The first cumulant in our expansion is just the mean,
\begin{equation}
    \overline{i\int_0^T \epsilon \delta H(t)\,dt-i\int_T^0 \epsilon \delta H(t)\,dt}=2i\epsilon \overline{\delta H}T.
\end{equation} 
The second cumulant is the variance of $\left(i\int_0^T \epsilon \delta H(t)\,dt-i\int_T^0 \epsilon \delta H(t)\,dt\right)$. Expressed in terms of 
\begin{equation}
    G(t) =\overline{\delta H(t)\delta H(0)}-\overline{\delta H}^2,
\end{equation}
where again the overline includes the quantum average, the second cumulant has magnitude
\begin{equation}
    4\epsilon^2|T| \int_{-\infty}^\infty G(t)\, dt.
\end{equation} 
Plugging in the first two terms of the cumulant expansion gives
\begin{equation}
\begin{split}
    \bra{\psi}e^{iH_1T}e^{-iH_2T}\ket{\psi}=\exp(i\lambda T)\\
    \lambda=2\epsilon \overline {\delta H}+4i\epsilon^2\int_{0}^\infty G(t)dt+O(\epsilon^3).
\end{split}
\end{equation}
As we can see, the final answer for $\bra{\psi}e^{iH_1T}e^{-iH_2T}\ket{\psi}$ is exponential in the sum of the cumulants. This is true for exactly the same reason that the partition function is exponential in the sum of the connected diagrams.

Now we modify the construction to access the LSFF along the same lines as in \cite{winer2021hydrodynamic,saad2019semiclassical}. The filter functions allow us to restrict the energy range, while the averaging yields a non-erratic function of time and justifies coupling the two legs of the contour. The cumulant expansion on the SFF contour instead of the Schwinger-Keldysh contour gives the same cumulants, because finite-time correlations are not sensitive to the change in boundary conditions.
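To spell out the second-cumulant step (a short consistency check of the signs; we assume $T$ far exceeds the correlation time of $\delta H$ and that $G$ is stationary), write the exponent in \eqref{eq:LoschmidtFormula} as $\epsilon\int_0^T O(t)\,dt$ with $O(t)=i\left[\delta H_+(t)+\delta H_-(t)\right]$, the $\pm$ labeling the two legs (both contribute with the same sign once the orientation of the return leg is accounted for). Including the factors of $i$, the second path-ordered cumulant is then minus the variance quoted above,
\begin{equation}
    \kappa_2 = -4\int_0^T\!\!dt_1\int_0^T\!\!dt_2\, G(t_1-t_2) \approx -4|T|\int_{-\infty}^{\infty} G(t)\,dt = -8|T|\int_0^{\infty} G(t)\,dt,
\end{equation}
so the term $\epsilon^2\kappa_2/2!$ contributes $-4\epsilon^2|T|\int_0^\infty G(t)\,dt$ to the exponent, which is precisely the $4i\epsilon^2\int_0^\infty G(t)\,dt$ piece of $\lambda$ once the exponent is written as $i\lambda T$.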
Plugging them into the cumulant expansion gives the novel result 
\begin{equation}
\begin{split}
    \text{LSFF}(T,f)=\frac{|T|}{2\pi}\int dE\, f^2(E)e^{i\lambda T}\\
    \lambda=2\epsilon \overline {\delta H}+4i\epsilon^2\int_{0}^\infty G(t)dt+O(\epsilon^3).
\end{split}
    \label{eq:bigResult}
\end{equation}

As a comment, it will be helpful to write $\text{Im}(\lambda)$ in another form. The two-point contribution $4\epsilon^2\int_{0}^\infty G(t)dt$ can also be written
\begin{equation}
    4\epsilon^2\int_{0}^\infty G(t)dt=4\pi \epsilon^2 \sigma^2 \rho(E),
\end{equation}
where $\sigma$ is the root-mean-square matrix element of $\delta H$ between states with energy near $E$. To give a simple example, suppose $|\psi\rangle$ is an infinite temperature state and $\delta H$ is traceless. Then the $2\epsilon\overline{\delta H}$ term vanishes and the cumulant expansion predicts pure exponential decay of the echo amplitude. 

One useful analogy to have in mind is that the exponent $\lambda$ in our cumulant expansion is a type of free energy density. The Loschmidt SFF is a path integral evaluated with a nontraditional action on a nontraditional doubled spacetime of size proportional to $T$. It can be interpreted as a partition function (with imaginary temperature) of a system with Hamiltonian $H_1\otimes I-I\otimes H_2$. From this point of view, the sum of the connected diagrams is like a free energy density. To get the partition function, we multiply $\lambda$ by $T$ and exponentiate, just as we extract partition functions by doing the same to the free energy density.

What then of the more complicated connected diagrams? Whether higher cumulants can contribute meaningfully or not depends on the details of the situation in question. For instance, consider the case of a large lattice of volume $V$ with some small number of disordered defects which add some local operator $O(x)$ to the Hamiltonian. Realistically, the number of defects is proportional to $V$. The cumulants of any given $O(x)$ over time are independent of $V$, and the joint cumulants are roughly zero. So we would expect every term in the expansion to be of order $V$, which means the Loschmidt SFF vanishes in the thermodynamic limit. To prevent this, we can either have fewer defects or smaller defects. In the first case, we would still see every term in the expansion contributing at the same order, whereas in the second case the leading terms would dominate.
\begin{figure}
    \centering
    \includegraphics[scale=0.5]{RMT1.png}\includegraphics[scale=0.5]{RMT2.png}
    \caption{Loschmidt SFF divided by the SFF, numerical results (blue) vs prediction (orange). The first graph is for a sample of 1320 pairs of 1500 by 1500 GUE matrices correlated with $r=0.998$. The second graph is for a sample of 10830 pairs of 300 by 300 GUE matrices correlated with $r=0.98$.}
    \label{fig:RMT}
\end{figure}
Figure \ref{fig:RMT} illustrates the validity of equation \eqref{eq:bigResult} for pure random matrix theory, where only the leading terms in the cumulant expansion contribute. Figure \ref{fig:SYK} shows the same for the SYK model, a fermion model with all-to-all interactions~\cite{sachdev_sy_1993,kitaev_syk_2015,rosenhaus_syk_2016,maldacena_syk_2016}.
\begin{figure}
    \centering
    \includegraphics[scale=0.5]{syksim.png}
    \caption{Loschmidt SFF divided by the SFF, numerical results (blue) vs prediction (orange).
These results are for two instances of the SYK model \cite{Maldacena:2016hyu,Rosenhaus_2019,Davison_2017} with Hamiltonian $H=\sum_{ijkl}J_{ijkl}\psi_i\psi_j\psi_k\psi_l$. The two instances have $J$'s correlated with $r=0.999$.}
    \label{fig:SYK}
\end{figure}
A useful sanity check is to consider $\delta H=i[H,O]$ for some Hermitian operator $O$. Because this operator is a time derivative, we can see that $\lambda$ vanishes. This is what we expect, since to leading order adding $i[H,O]$ to $H$ does not change the spectrum, only the eigenvectors.

\subsection{Higher Cumulants}

A useful toy model which is analytically tractable and where higher-order cumulants of $\delta H$ affect the Loschmidt SFF is obtained from an $N\times N$ GUE matrix $H_0$ and a projection matrix $P_k=\sum_{i=1}^k \ket{i} \bra{i}$ onto a random basis (not the energy basis) with $k \ll N$ nonzero eigenvalues. The perturbed Hamiltonians are $H_1=H_0+\epsilon P_k$ and $H_2=H_0-\epsilon P_k$. Explicitly, $H_0$ is a Hermitian matrix with elements chosen independently (up to the constraint $H_{0ij}=H_{0ji}^*$) such that each element is drawn from a complex distribution with variance $\sigma^2$. For convenience, in this section we will make the assumption that $T>0$. The $T<0$ case can be obtained from $\textrm{LSFF}(-T)=\textrm{LSFF}(T)^*$.

We will need the $n$-point functions of $\delta H=\epsilon P_k$ in a background at energy $E$. These can be evaluated by inserting $P_k=\sum_{i=1}^k \ket i\bra i$, and using $\bra {i_1} e^{-iH_0T}\ket {i_2}\approx \delta_{i_1,i_2}\frac{J_1(2\sqrt{N}\sigma T)}{\sqrt{N}\sigma T}$, where we make use of the fact that the states $\ket i$ are random with respect to the energy basis. As such, we have 
\begin{equation}
    G(t_1,t_2,\dots,t_n)=\epsilon^n\bra{E}P_k(t_1)P_k(t_2)\cdots P_k(t_n)\ket{E}=k \frac {\epsilon^n}{N}e^{iEt_{1n}}\frac{ J_1(2\sqrt{N}\sigma t_{12})}{\sqrt{N}\sigma t_{12}}\frac{J_1(2\sqrt{N}\sigma t_{23})}{\sqrt{N}\sigma t_{23}}\cdots \frac{ J_1(2\sqrt{N}\sigma t_{n-1,n})}{\sqrt{N}\sigma t_{n-1,n}},
\end{equation}
where $t_{ij}=t_i-t_j$. To leading order in $N$, these correlation functions are exactly equal to the cumulants.

Integrating over all time-ordered configurations of the $t$s, we get a total contribution of
\begin{equation}
    \int d^n t\, G(t_1,t_2,\dots,t_n)=2n k\frac {\epsilon^n}{N^n}(\pi \rho(E))^{n-1} T.
\end{equation}
The factor of $2n$ comes from the number of time-ordered ways to assign the insertions to the two contours. Once that is done, the $t_{ij}$ integrals can be done separately. Summing these contributions gives
\begin{equation}
    \begin{split}
    \lambda=2k\frac{\epsilon}{N} \frac 1 {(1-\pi i \rho(E) \epsilon/N)^2}
    \label{eq:higherCumulantSummed},
    \end{split}
\end{equation}
which can be shown to agree with equation \eqref{eq:bigResult} up to the first two orders in $\epsilon$.
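As a quick consistency check (our own expansion of the above), note that
\begin{equation}
    \lambda=\frac{2k\epsilon}{N}\left(1+\frac{2\pi i\rho(E)\epsilon}{N}+O(\epsilon^2)\right)=\frac{2k\epsilon}{N}+\frac{4\pi i k\rho(E)\epsilon^2}{N^2}+O(\epsilon^3).
\end{equation}
This matches equation \eqref{eq:bigResult}: here $\overline{\delta H}=\bra{E}P_k\ket{E}\approx k/N$, and treating the $\ket i$ as random vectors relative to the energy basis gives a mean-square matrix element of $P_k$ between energy eigenstates of approximately $k/N^2$, so the second-order term is $4\pi \epsilon^2\rho(E)\cdot k/N^2=4\pi k\rho(E)\epsilon^2/N^2$, reproducing the expansion above.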
The accuracy of equation \eqref{eq:higherCumulantSummed} is borne out in Figure \ref{fig:cumulant}.
\begin{figure}
    \centering
    \includegraphics{higherCumulantReal.pdf}
    \includegraphics{higherCumulantImag.pdf}
    \caption{The numerical results (blue) line up almost perfectly with the higher-cumulant predictions of equation \eqref{eq:higherCumulantSummed} (orange), far outperforming the two-cumulant prediction (green).}
    \label{fig:cumulant}
\end{figure}

\subsection{The Loschmidt Spectral Correlation Function in Energy Space}

For GUE systems, the connected spectral form factor is given by
\begin{equation}
\begin{split}
    \text{SFF}_{conn}(T,f)=\int dE f^2(E) g(T,\rho(E))\\
    g(T,\rho)=\begin{cases} 
    \frac {|T|}{2\pi} & |T|\leq 2\pi \rho \\
    \rho & |T|> 2\pi \rho
    \end{cases}.
\end{split}
\end{equation}
We can take the Fourier transform of this to get the two-point function in energy space:
\begin{equation}
    \langle \rho(E)\rho(E+\Delta E)\rangle_{conn}=- \frac{\sin^2(\pi\rho(E)\Delta E)}{(\pi\Delta E)^2}+\rho(E)\delta(\Delta E).
\label{eq:RMTtwopoint}
\end{equation}
Taking the same Fourier transform of equation \eqref{eq:bigResult}, we obtain the two-point function of the level densities of two different Hamiltonians:
\begin{equation}
    \langle \rho_1(E)\rho_2(E+\Delta E)\rangle_{conn}=- \frac{(\Delta E+\textrm{Re}\,\lambda)^2-(\textrm{Im}\,\lambda)^2}
{2\pi^2\left((\Delta E+\textrm{Re}\,\lambda)^2+(\textrm{Im}\,\lambda)^2\right)^2}.
\label{eq:Loschmidttwopoint}
\end{equation}
For large $\Delta E$, equations \eqref{eq:RMTtwopoint} and \eqref{eq:Loschmidttwopoint} agree, but the short-range behavior is entirely different, confirming that small changes to the Hamiltonian have a drastic effect at small energy separations (long times) but a negligible effect at large separations (short times).


\subsection{How Different Is Different Enough?}

One of the central questions we hope to answer is how different two samples need to be in order to have effectively independent spectral statistics. We now have the power to answer this question. If two samples differ by a single defect on a single site, this will lead to a non-extensive decay of the Loschmidt SFF. A decay of the Loschmidt SFF independent of system size contrasts with many SFF-related quantities that grow with system size, such as the Heisenberg time, which grows exponentially with system size, and the Thouless time, which grows (for systems with a local conserved energy) at least quadratically in system size~\cite{winer2021hydrodynamic,roy_2020, Schiulaz_2019}.

If the number of defects grows with system size at all, then we find that the Loschmidt SFF decays even more quickly. This means it would be extremely difficult to observe the Loschmidt SFF directly in any large system, unless it was prepared extremely carefully. It also means that when measuring an SFF experimentally, even tiny changes (such as changing a small but extensive fraction of the couplings) to the system guarantee that the samples are effectively independent.


\section{The Loschmidt SFF for Hydrodynamic Systems}
\label{sec:hydro}

So far we have primarily considered GUE Hamiltonians, which serve as a toy model for chaotic quantum systems without any conserved quantities. It is also interesting to consider models with slow modes due to the presence of conserved or almost conserved quantities.
In this context, one powerful technique for calculating SFFs is a formulation of hydrodynamics known as the Doubled Periodic Time (DPT) formulation \cite{winer2021hydrodynamic}. This technology is itself built around a hydrodynamic theory known as the CTP formalism \cite{crossley2017effective,Glorioso_2017,glorioso2018lectures}. The CTP formalism is a theory of hydrodynamic slow modes on the Schwinger-Keldysh contour, and the DPT formalism transfers these results onto the SFF contour.
\begin{figure}
    \centering
    \includegraphics[scale=0.1]{Masterpiece.png}
    \caption{The idea behind the Doubled Periodic Time (DPT) formalism. Just as microscopic fields on a Schwinger-Keldysh contour are integrated out to give an effective field theory of hydrodynamics, the microscopic degrees of freedom on an SFF contour are integrated out to give that same action, up to different boundary conditions and exponentially small (in $T$) corrections.}
    \label{fig:BrianMasterpiece}
\end{figure}
In this section, we review the DPT formalism and then extend it to the Loschmidt SFF.

\subsection{Quick Review of CTP Formalism}
\label{subsec:CTP}
Hydrodynamics can be viewed as the program of creating effective field theories (EFTs) for systems based on the principle that long-time and long-range physics is driven primarily by conservation laws and other protected slow modes. One particular formulation is the CTP formalism, explained concisely in~\cite{glorioso2018lectures}. For more details see~\cite{crossley2017effective,Glorioso_2017}. Additional information about fluctuating hydrodynamics can be found in~\cite{Grozdanov_2015,Kovtun_2012,Dubovsky_2012,Endlich_2013}.

The CTP formalism is the theory of the following partition function: 
\begin{equation}
    Z[A^\mu_1(t,x),A^\mu_2(t,x)]=\tr \left( e^{-\beta H} \mathcal{P} e^{i\int dt d^d x A^\mu_1j_{1\mu}} \mathcal{P} e^{-i\int dt d^d x A^\mu_2 j_{2\mu}}\right),
\end{equation}
where $\mathcal P$ denotes path ordering on the Schwinger-Keldysh contour. Here the $j$ operators are local conserved currents. $j_1$ and $j_2$ act on the forward and backward contours, respectively. 

For $A_1=A_2=0$, $Z$ is just the thermal partition function at inverse temperature $\beta$. Differentiating $Z$ with respect to the $A$s generates insertions of the conserved current density $j_\mu$ along either leg of the Schwinger-Keldysh contour. Thus $Z$ is a generator of all possible contour-ordered correlation functions of current operators.

One always has a representation of $Z$ as a path integral,
\begin{equation}
    Z[A^\mu_1,A^\mu_2]=\int \prod_a\mathcal D \phi^a_1\,\mathcal D \phi^a_2 \exp\left(i\int dt dx W_{micro}[A_{1\mu},A_{2\mu},\phi^a_1,\phi^a_2]\right),
\end{equation}
for some collection of microscopic local fields, the $\phi^a$s. The fundamental insight of hydrodynamics is that at long times and distances, any $\phi^a$s that decay rapidly can be integrated out. What is left over is one effective $\phi$ per contour to enforce the conservation law $\partial^\mu j_{i\mu}=0$.
Our partition function can thus be written 
\begin{equation}
    \begin{split}
    Z[A^\mu_1,A^\mu_2]=\int \mathcal D \phi_1\mathcal D \phi_2 \exp\left(i\int dt dx W[B_{1\mu},B_{2\mu}]\right),\\
    B_{i\mu}(t,x)=\partial_\mu \phi_i(t,x)+A_{i\mu}(t,x),
    \end{split}
    \label{eq:AbelianB}
\end{equation}
which is essentially a definition of the hydrodynamic field $\phi$ and the effective action $W$.

Insertions of the currents are obtained by differentiating $Z$ with respect to the background gauge fields $A_{i\mu}$. A single such functional derivative gives a single insertion of the current, and so one presentation of current conservation is the identity $\partial_\mu \frac{\delta Z}{\delta A_{i\mu}} = 0$. A demonstration that this enforces the conservation law is given in appendix \ref{app:CTPstuff}.

Crucially, the functional $W$ is not arbitrary. The key assumption of hydrodynamics is that $W$ is the spacetime integral of a local action. Moreover, when expressed in terms of
\begin{equation}
\begin{split}
    B_a=B_1-B_2,\\
    B_r=\frac{B_1+B_2}{2},
\end{split}
\end{equation}
there are several constraints which follow from unitarity:
\begin{itemize}
    \item All terms in $W$ contain at least one power of $B_a$; that is, $W=0$ when $B_a=0$.
    \item Terms odd (even) in $B_a$ make a real (imaginary) contribution to the action.
    \item All imaginary contributions to the action are positive imaginary.
    \item There is a KMS constraint imposing fluctuation-dissipation relations.
    \item Unless the symmetry is spontaneously broken, all factors of $B_r$ have at least one time derivative.
    \item Any correlator in which the chronologically last insertion is of $a$ type evaluates to 0 (known as the last time theorem, or LTT).
\end{itemize}
When calculating SFFs, one typically sets the external sources $A$ to zero, so the action can be written purely in terms of the derivatives of the $\phi$s.

The $\phi$s have a physical interpretation depending on the precise symmetry in question. In the case of time translation / energy conservation, the $\phi$s are the physical time corresponding to a given fluid time (and are often denoted $\sigma$). In the case of a U(1) internal symmetry / charge conservation, they are local fluid phases. One simple quadratic action for an energy-conserving system consistent with the above conditions is
\begin{equation}
    L=\sigma_a\left(D\kappa \beta^{-1}\nabla^2\partial_t \sigma_r-\kappa \beta^{-1}\partial_t^2\sigma_r\right)
    +i\beta^{-2}\kappa(\nabla \sigma_a)^2. 
\end{equation}
The reader should imagine that this action is corrected by cubic and higher-order terms in the $\sigma$s and by higher-derivative corrections even at the quadratic level.


\subsection{Moving on to DPT and the SFF}

As argued in \cite{winer2021hydrodynamic}, the SFF enhancement for a diffusive system can be calculated by evaluating the path integral
\begin{equation}
    \text{SFF} = \int \mathcal{D} \sigma_1 \mathcal{D} \sigma_2 f(E_1)f(E_2)e^{i W(\sigma_1,\sigma_2)},
\end{equation}
where $\sigma_{1,2}$ represent reparameterization modes on the two legs of the contour. It is often useful to define $\sigma_a=\sigma_1-\sigma_2$ and $\rho=\frac {\partial S}{\partial(\partial_t \sigma_a)}$. Here $\rho$ is the average energy density between the two contours, and can be written $\rho=\kappa \beta^{-1}\partial_t \sigma_r$.
In this notation, 
\begin{equation}
    L=-\sigma_a\left(\partial_t\rho-D\nabla^2\rho\right)+i\beta^{-2}\kappa(\nabla \sigma_a)^2,
\end{equation}
and the effective action is $W = \int dt L$. Importantly, the boundary conditions are no longer those of the Schwinger-Keldysh contour but those of the SFF contour.

Since the action is entirely Gaussian, we can evaluate the path integral exactly. We first break into Fourier modes in the spatial directions. The remaining integral is
\begin{equation}
    \prod_{k}\int\mathcal D\rho_k\,\mathcal D\sigma_{ak}\,f(E_1)f(E_2) \exp\left(-i\int dt \left[\sigma_{ak}\partial_t\rho_k+Dk^2\sigma_{ak}\rho_k-i\beta^{-2}\kappa k^2 \sigma_{ak}^2\right]\right).
\end{equation}
For $k\neq 0$, breaking the path integral into time modes gives an infinite product which works out to $\frac{1}{1-e^{-Dk^2T}}$. For $k=0$, we just integrate over the full manifold of possible zero-frequency $\sigma_a$s and $\rho$s to get $\frac T{2\pi} \int f^2(E) dE$. Including the other modes gives
\begin{equation}
    \text{SFF}=\left[\prod_k \frac{1}{1-e^{-Dk^2T}} \right]\frac T{2\pi} \int f^2(E) dE.
\end{equation}

\subsection{Coupling In Sources}

In this subsection, we will focus specifically on sources that couple to conserved currents, but the next paragraph applies to any operator. Because of the relative minus sign between the two contours, $A_r$ couples to $j_a$ and vice versa. A configuration where $A_a=0$, $A_r=A$ corresponds to unitary evolution with background potential $A$. With CTP boundary conditions, the partition function for $A_a=0$, $A_r=A$ is exactly $Z(\beta)$, regardless of $A$. So when $A_a=0$, any number of derivatives with respect to $A_r$ (insertions of $j_a$) results in a correlator of zero.

With periodic boundary conditions, this is no longer entirely true. The trace can take on different values, and changing details of the unitary evolution results in a change in the SFF, albeit one which decays as $T$ grows and the effects of the periodic boundary conditions grow milder. The intuitive explanation for this is that the SFF at times less than the Thouless time depends on non-universal properties of the Hamiltonian, and coupling in $A_r$ is effectively a change to the Hamiltonian. Thus it can affect the SFF before the Thouless time.

$A_a$ is a different story. Turning on a nonzero $A_a$ term corresponds to having a different Hamiltonian on the forward versus the backward path. This changes the partition function even in the CTP case.
In the periodic-time setting, this transforms our SFF into a Loschmidt SFF, a pair of periodic contours with slightly different Hamiltonians along the two legs.

\subsection{A Perturbative Look at the Loschmidt SFF in Hydro}

In this subsection we will restrict our attention to systems with no conserved quantities besides the energy $H$, and thus only one conserved current $j_\mu$.

Assuming the perturbation $\epsilon\, \delta H$ has an overlap $\epsilon \delta (x-x_0)$ with the local energy density $j_0(x)$, we can model the Loschmidt SFF as
\begin{equation}
    \text{LSFF}(T,f)=Z_{\text{DPT}}[A_{\mu r}=0, A_{\mu a} =2\epsilon\delta(x-x_0)\delta_{\mu 0}].
\end{equation}
To leading order in $\epsilon$, the ``free energy'' $\lambda$ is just a $-2i\epsilon j_{0r}(x_0)$ insertion, which is a typical diagonal matrix element in the energy shell, so we have
\begin{equation}
    \text{LSFF}(T,f)=\frac T{2\pi}\int dE f^2(E)\exp\left(2i\epsilon j_{0r}(x_0)T+O(\epsilon^2)\right).
\end{equation}

To second order in $\epsilon$, the object in the exponent is
\begin{equation}
\begin{split}
    \textrm{Re}\, \lambda=2\epsilon j_{0r}(x_0),\\
    \textrm{Im}\, \lambda=4\epsilon^2\int_0^T dt\, G_{rr;DPT}(x_0,x_0,t).
\end{split}
\end{equation}
This last integrand is the correlation function of the energy density. Because the DPT correlation functions wrap around the periodic time, this is the same as the ``unwound'' CTP integral 
\begin{equation}
    \int_0^T dt\, G_{rr;DPT}(x_0,x_0,t)=\int_{-\infty}^\infty dt\, G_{rr;CTP}(x_0,x_0,t).
    \label{eq:gIntegral}
\end{equation}
At times below the Thouless time and assuming spatial translation symmetry, $G_{rr;CTP}(x_0,x_0,t)$ can be modeled as 
\begin{equation}
    G_{rr;CTP}(0,t)=\frac{\kappa}{\beta^2}\frac{1}{\sqrt{4\pi D|t|}^{\,d}}.
\end{equation}
For $d\geq 2$, this has a UV divergence, which can be cured by imposing a UV cutoff on the extent of our operator $\delta H$. The IR divergence for $d\leq 2$ is more interesting. It is also cured by a cutoff: the system has some finite size, and the IR behavior of $G_{rr}$ depends on that system's size and shape, as well as on the precise location of $x_0$ and how strong the slowest modes are there.

\subsection{Exact Evaluation for $d=1$}

As an illustration of how the integral \eqref{eq:gIntegral} can depend on $x_0$, we can evaluate it exactly for a diffusive system in 1d with length $L$ (with Dirichlet boundary conditions, so that there is no zero mode).

We first express $G_{rr;CTP}(x_0,x_0,t)$ as an infinite sum:
\begin{equation}
    G_{rr;CTP}(x_0,x_0,t)=\sum_i f_i(x_0)^2\beta^{-2}\kappa e^{-Dk_i^2 |t|},
\end{equation}
where the $f_i$s are eigenfunctions of $\nabla^2$ with eigenvalues $-k_i^2$. Performing the integral gives us the sum
\begin{equation}
    \int_{-\infty}^\infty dt\, G_{rr;CTP}(x_0,x_0,t)=\sum_i f_i(x_0)^2\frac {2\kappa}{\beta^2D k_i^2}.
    \label{eq:inverseSum}
\end{equation}
This is just a multiple of $-(\nabla^2)^{-1}(x_0,x_0)$. We define
\begin{equation}
    C(x_1,x_2)=
    \begin{cases} 
    \frac 1L (L-x_2)x_1 & x_1\leq x_2 \\
    \frac 1L x_2 (L-x_1) & x_1\geq x_2
    \end{cases}.
\end{equation}
Then
\begin{equation}
    \nabla_1^2 C(x_1,x_2)=-\delta(x_1-x_2).
\end{equation}
So the sum in equation \eqref{eq:inverseSum} is 
\begin{equation}
    \frac{2 \kappa}{\beta^2 D}C(x_0,x_0)=\frac{2 \kappa x_0(L-x_0)}{\beta^2 DL}.
\end{equation}

\section{Conclusion and Discussion}
\label{sec:conclusion}
In this paper, we defined and studied the Loschmidt SFF.
Just as the Loschmidt echo measures the similarity between $e^{iH_1T}$ and $e^{iH_2T}$ in terms of their actions on a state, the Loschmidt SFF measures their similarity in terms of spectral statistics. We found that the Loschmidt SFF decays exponentially as a function of $T$, with the same exponential rate as the echo. We studied this quantity in several situations, including an RMT model where the exponent required just a two-point function, and a model requiring higher cumulants. In both cases, our analytical prediction matched up with numerical results. We also obtained analytic results about the Loschmidt SFF in theories where the slow dynamics is governed by a hydrodynamic theory of diffusion.

One natural extension of our work would be to study the Loschmidt SFF for integrable systems. In particular, does it always have the same long-time behavior as the Loschmidt echo?

Another important direction is to look for the Loschmidt analogue of the other connection between random matrix theory and quantum chaos: the eigenstate thermalization hypothesis. If an operator $O$ is written in the energy eigenbases of Hamiltonians $H_1$ and $H_2$, is there any relation between the matrix elements in the two bases? 

This work was supported by the Joint Quantum Institute (M.W.) and by the Air Force Office of Scientific Research under award number FA9550-19-1-0360 (B.S.).

Note: Just before we posted this work, we learned of an independent study of the LSFF in a holographic context that appeared a few days before~\cite{cotler_precision_2022}. 


\section{Introduction}
Image restoration is a classic ill-posed inverse problem that aims to recover high-quality images from damaged images affected by various kinds of degradations. According to the type of degradation, it can be categorized into different subtasks such as image super-resolution, image denoising, JPEG image deblocking, etc. 

The rise of deep learning has greatly facilitated the development of these subtasks. But these methods are often goal-specific, and we need to retrain the network when we deal with images different from the training dataset. Furthermore, most methods usually aim to pursue high reconstruction accuracy in terms of PSNR or SSIM. However, image quality assessment is relatively subjective, and low reconstruction distortion is not always consistent with high visual quality \cite{blau2018perception}. In addition, in many practical applications (\emph{e.g.}, mobile), it is often challenging to obtain the user\textquotesingle s preference and the real degradation level of the corrupted images. All of these considerations call for an interactive image restoration framework which can be applied to a wide variety of subtasks. However, to the best of our knowledge, there are currently few available networks which can satisfy both interactivity and generality requirements.

Some designs have been proposed to improve the flexibility of deep methods. Take image denoising for example: data augmentation is widely used to improve the generalization of a model. Trained on a dataset which contains a range of noise levels, a single model can be applied to the blind denoising task \cite{zhang2017beyond}. However, this method still produces a fixed reconstruction result for a given input, which does not necessarily guarantee satisfactory perceptual quality (as shown in Fig. \ref{first_visual}).
An alternative choice, Zhang \emph{et~al.} \cite{zhang2018ffdnet} concatenated a tunable noise level map with the degraded image as input to handle the blind image denoising task. Though this scheme is also user-friendly, it cannot be generalized to other tasks. In image super-resolution, \cite{michelini2018multi} added noise to the input to control the compromise between perceptual quality and distortion. However, this scheme is specific to image super-resolution and cannot guarantee smooth and continuous control.

In this paper, to rectify these weaknesses, we propose a novel framework equipped with controllability for human perception-oriented interactive image restoration. To be more specific, we realize interactive control of the reconstruction result by tuning the features of each unit block, called a coupling module. Each coupling module consists of a main block and a tuning block. The parameters of the two blocks are obtained under two endpoint optimization objectives. Besides, as the key to achieving fine-grained feature control, we assign to each coupling module high-degree-of-freedom coupling coefficients that are adaptively learned from a control scalar.

Our main contributions can be summarized as follows:
\begin{itemize}

\item[$\blacktriangleright$] We propose a novel controllable end-to-end framework for interactive image restoration in a fine-grained way.
\item[$\blacktriangleright$] We propose a coupling module and an adaptive learning strategy of coupling coefficients to improve reconstruction performance.
\item[$\blacktriangleright$] Our CFSNet outperforms the state-of-the-art methods on super-resolution, JPEG image deblocking and image denoising in terms of flexibility and visual quality.

\end{itemize}
\section{Related Work}

\begin{bfseries} 
Image Restoration.
\end{bfseries}
Deep learning methods have been widely used in image restoration. \cite{kim2016accurate, tai2017memnet, tai2017image, lim2017enhanced, zhang2018residual, zhang2018image, yang2019lightweight} continuously deepen, widen or lighten the network structure, aiming at improving super-resolution accuracy as much as possible, while \cite{johnson2016perceptual, ledig2017photo, sajjadi2017enhancenet, mechrez2018maintaining} paid more attention to the design of loss functions to improve visual quality. Besides, \cite{blau2018perception, wang2018esrgan, liu2018multi} explored the perception-distortion trade-off. In \cite{dong2015compression}, Dong \emph{et~al.} adopted ARCNN, built with several stacked convolutional layers, for JPEG image deblocking. Zhang \emph{et~al.} \cite{zhang2018ffdnet} proposed FFDNet to make image denoising more flexible and effective. Guo \emph{et~al.} \cite{guo2019toward} designed CBDNet to handle blind denoising of real images. Different from these task-specific methods, \cite{zhang2017beyond, zhang2017learning, liu2018multi, liu2018non} proposed some unified schemes that can be employed for different image restoration tasks. However, these fixed networks are not flexible enough to deal with diverse user needs and application requirements.

 \begin{bfseries} 
Controllable Image Transformation.
\end{bfseries}
In high-level vision tasks, many techniques have been explored to implement controllable image transformation.
\cite{lu2018attribute} and \cite{yu2018super} incorporated facial attribute vectors into their networks to control facial appearance (\emph{e.g.}, gender, age, beard). In \cite{upchurch2017deep}, deep feature interpolation was adopted to implement automatic high-resolution image transformation. \cite{karras2019style} also proposed a scheme that controls adaptive instance normalization (AdaIN) in feature space to adjust high-level attributes. Shoshan \emph{et~al.} \cite{shoshan2018dynamic} inserted some tuning blocks in the main network to allow modification of the network. However, all of these methods are designed for high-level vision tasks and cannot be directly applied to image restoration. To apply controllable image transformation to low-level vision tasks, Wang \emph{et~al.} \cite{wang2019deep} performed interpolation in the parameter space, but this method cannot guarantee the optimality of the outputs, which inspires us to further explore fine-grained control of image restoration.
\section{Proposed Method}

\begin{figure*}[t]
\begin{center}
   \includegraphics[scale=0.45]{framework.pdf}
\end{center}
\caption{The framework of our proposed controllable feature space network (CFSNet).}
\label{framework}
\end{figure*}


In this section, we first provide an overview of the proposed framework, called CFSNet, and then present the modeling process inspired by the image super-resolution problem. Instead of specializing for the specific super-resolution task, we then generalize our CFSNet to multiple image restoration tasks, including denoising and deblocking. Moreover, we give an explicit model interpretation based on manifold learning to show the intrinsic rationality of the proposed network. At the end of this section, we show the superiority and improvements of the proposed CFSNet through a detailed comparison with typical related methods.
\subsection{Basic Network Architecture}
As shown in Fig. \ref{framework}, our CFSNet consists of a main branch and a tuning branch. The main branch contains $M$ main blocks (residual blocks \cite{lim2017enhanced}) while the tuning branch contains $M$ tuning blocks with an additional $2M+3$ fully connected layers. Each pair of a main block and a tuning block constitutes a coupling module, which uses a coupling operation to combine the features of the two branches effectively. We take the original degraded image $I_{in}$ and the control scalar $\alpha_{in}$ as the input and output the restored image $I_{rec}$ as the final result.

We first use a $3\times 3$ convolutional layer to extract features from the degraded image $I_{in}$,
\begin{equation}
B_{0}=F_{in}(I_{in}),
\end{equation}
where $F_{in}(\cdot)$ represents the feature extraction function and $B_{0}$ serves as the input of the next stage. Here, together with the input image $I_{in}$, we introduce a control scalar $\alpha_{in}$ to balance the different optimization goals. To be more specific, there are 3 shared fully connected layers to transform the input scalar $\alpha_{in}$ into multi-channel vectors and 2 independent fully connected layers to learn the optimal coupling coefficients for each coupling module:
\begin{equation}
\alpha_{m}=F_{m}^{ind}(F_{m}^{sha}(\alpha_{in})),
\label{alpha_mapping}
\end{equation}
where $F_{m}^{sha}(\cdot)$ and $F_{m}^{ind}(\cdot)$ denote the functions of the shared and independent fully connected layers, respectively, and $\alpha_{m}$ is the coupling coefficient vector of the $m$-th coupling module.
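As an illustration, here is a minimal PyTorch sketch of this coefficient mapping (our own reconstruction; the hidden width of 512 follows the implementation details given later, but the activation functions and exact layer shapes are assumptions, since they are not specified here):
\begin{verbatim}
import torch
import torch.nn as nn

class AlphaMapping(nn.Module):
    """Map the control scalar alpha_in to per-module coupling vectors:
    3 shared fully connected layers, then 2 independent layers per module."""
    def __init__(self, num_modules=10, channels=64, width=512):
        super().__init__()
        self.width = width
        self.shared = nn.Sequential(          # F^sha: 3 shared FC layers
            nn.Linear(width, width), nn.ReLU(inplace=True),
            nn.Linear(width, width), nn.ReLU(inplace=True),
            nn.Linear(width, width), nn.ReLU(inplace=True),
        )
        self.independent = nn.ModuleList([    # F^ind_m: 2 FC layers per module
            nn.Sequential(nn.Linear(width, width), nn.ReLU(inplace=True),
                          nn.Linear(width, channels))
            for _ in range(num_modules)
        ])

    def forward(self, alpha_in):
        # The control input is a constant 512-dim vector scaled by alpha_in.
        v = alpha_in * torch.ones(1, self.width)
        h = self.shared(v)
        return [branch(h) for branch in self.independent]  # M vectors of size C
\end{verbatim}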
Each coupling module couples the output of a main block and a tuning block as follows:
\begin{equation}
\begin{aligned}
B_{m}&=F_{m}(R_{m}, T_{m})\\
&=F_{m}(F_{m}^{main}(B_{m-1}), F_{m}^{tun}(B_{m-1})),
\end{aligned}
\end{equation}
where $F_{m}(\cdot)$ represents the $m$-th coupling operation, $R_{m}$ and $T_{m}$ denote the output features of the $m$-th main block and the $m$-th tuning block respectively, and $F_{m}^{main}(\cdot)$ and $F_{m}^{tun}(\cdot)$ are the $m$-th main block function and the $m$-th tuning block function respectively. To address the image super-resolution task, we add an extra coupling module consisting of an upscaling block before the last convolutional layer, as shown in Fig. \ref{framework}. Specifically, we utilize a sub-pixel convolution (convolution + pixel shuffle) \cite{shi2016real} to upscale the feature maps. Finally, we use a $3\times 3$ convolutional layer to get the reconstructed image,
\begin{equation}
I_{rec}=F_{out}(B_{M}+B_{0}) \;\text{ or }\; I_{rec}=F_{out}(B_{M+1}),
\end{equation}
where $F_{out}(\cdot)$ denotes a convolution operation. The overall reconstruction process can be expressed as
\begin{equation}
I_{rec}=F_{CFSN}(I_{in}, \alpha_{in};\theta_{main},\theta_{tun},\theta_{\alpha}),
\end{equation}
where $F_{CFSN}(\cdot)$ represents the function of our proposed CFSNet, and $\theta_{main}$, $\theta_{tun}$ and $\theta_{\alpha}$ represent the parameters of the main branch, all tuning blocks and all fully connected layers, respectively. 

Since the two branches of our framework are based on different optimization objectives, our training process can be divided into two steps:

\begin{enumerate}[\begin{bfseries} 
Step 1 
\end{bfseries}]

\item Set the control variable $\alpha_{in}$ to 0. Train the main branch with the loss function $L_{1}(I_{rec}, I_{g}; \theta_{main})$, where $I_{g}$ is the corresponding ground truth image.\label{Step1}

\item Set the control variable $\alpha_{in}$ to 1. Map the control variable $\alpha_{in}$ to the different coupling coefficients $\left \{ \alpha_{m} \right \}$, fix the parameters of the main branch, and train the tuning branch with another loss function $L_{2}(I_{rec}, I_{g}; \theta_{tun},\theta_{\alpha})$.
\label{Step2}
\end{enumerate}

\subsection{Coupling Module}
We now present the details of our coupling module. We mainly introduce our design from the perspective of image super-resolution. To balance the trade-off between perceptual quality and distortion, one usually modifies the penalty parameter $\lambda$ of the loss terms \cite{blau2018perception},
\begin{equation}
L_{gen}=L_{distortion}+\lambda L_{adv},
\end{equation}
where $L_{distortion}$ denotes a distortion loss (\emph{e.g.}, MSE or MAE), $L_{adv}$ contains a GAN loss \cite{wang2018esrgan, gulrajani2017improved} and a perceptual loss \cite{johnson2016perceptual}, and $\lambda$ is a scalar. We usually pre-train the network with the $L_{distortion}$ loss; then we fine-tune the network with the combined loss $L_{gen}$ to reach a different working point of the trade-off, determined by the value of $\lambda$. That is to say, if we regard the pre-trained result as a reference point, then we can start from the reference point and gradually convert it to the result of another optimization goal.

However, it is not efficient to train a network for each different value of $\lambda$.
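For concreteness, the combined objective can be sketched as follows (a minimal illustration of $L_{gen}=L_{distortion}+\lambda L_{adv}$; the WGAN-style adversarial term and the default weight are our assumptions, not the exact losses used here):
\begin{verbatim}
import torch.nn.functional as F

def l_gen(sr, hr, disc_fake, perceptual_dist, lam=1e-3):
    # sr, hr          : restored and ground-truth image batches
    # disc_fake       : discriminator scores on sr (WGAN-style critic)
    # perceptual_dist : callable returning a feature-space distance
    l_distortion = F.l1_loss(sr, hr)                    # MAE distortion term
    l_adv = perceptual_dist(sr, hr) - disc_fake.mean()  # perceptual + GAN terms
    return l_distortion + lam * l_adv
\end{verbatim}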
To address this issue, we treat the control scalar as an input and directly control the offset of the reference point in the latent feature space. For this purpose, we implement a controllable coupling module to couple reference features learned with $L_{distortion}$ and new features learned with $L_{gen}$. We set the features based on distortion optimization as the reference point, denoted $R_{m}$. In the process of optimization based on perceptual quality, we keep the reference point unchanged and set $T_{m}-R_{m}$ as the direction of change. In other words, in a coupling module, part of the features are provided by the reference information, and the other part is obtained from new exploration:
\begin{equation}
B_{m} = R_{m}+\alpha_{m}(T_{m}-R_{m})=(1-\alpha_{m})R_{m}+\alpha_{m}T_{m},
\label{coupling_module}
\end{equation}
where $T_{m}\in \mathbb{R}^{W\times H\times C}$, $R_{m}\in \mathbb{R}^{W\times H\times C}$, $\alpha_{m}\in \mathbb{R}^{C}$ denotes the $m$-th coupling coefficient, and $C$ is the number of channels. 

It is worth noting that different main blocks provide different reference information, so we should treat them differently. We expect to endow each coupling module with a different coupling coefficient to make full use of the reference information. Therefore, our control coefficients $\left \{ \alpha_{m} \right \}$ are learned during the optimization process. To be more specific, we use some fully connected layers to map the single input control scalar $\alpha_{in}$ into different coupling coefficients (Eq. \ref{alpha_mapping}). The proposed network will find the optimal coupling mode since our control coefficients $\left \{ \alpha_{m} \right \}$ are adaptive rather than fixed. 

Thanks to the coupling module and the adaptive learning strategy, we can realize a continuous and smooth transition with a single control variable $\alpha_{in}$. Moreover, if this framework achieves an excellent trade-off between perceptual quality and distortion for super-resolution, can we generalize this model to other restoration tasks? After theoretical analysis and experimental tests, we find that this framework is applicable to a wide variety of image restoration tasks. In the next section, we will provide a more general theoretical explanation of our model.

\subsection{Theoretical Analysis}
Suppose there is a high-dimensional space containing all natural images. The degradation process of a natural image can be regarded as continuous in this space, so, approximately, the degraded images are adjacent in the space. It is thus possible to approximate the reconstruction result of an unknown degradation level with the results of known degradation levels. Unfortunately, natural images lie on an approximate non-linear manifold \cite{weinberger2006unsupervised}. As a result, simple image interpolation tends to introduce ghosting artifacts or other unexpected details into the final results. 

Instead of operating in the pixel space, we naturally turn our attention to the feature space. Some works indicate that the data manifold can be flattened by neural network mapping and that we can approximate the mapped manifold as a Euclidean space \cite{bengio2013better, brahma2016deep, shoshan2018dynamic}. Based on this hypothesis, as shown in Fig.
Thanks to the coupling module and the adaptive learning strategy, we can realize a continuous and smooth transition with a single control variable $\alpha_{in}$. Moreover, if this framework achieves an excellent trade-off between perceptual quality and distortion for super-resolution, can we generalize it to other restoration tasks? After theoretical analysis and experimental tests, we find that this framework is applicable to a wide variety of image restoration tasks. In the next section, we provide a more general theoretical explanation of our model.

\subsection{Theoretical analysis}
Suppose there is a high-dimensional space containing all natural images. The degradation process of a natural image can be regarded as continuous in this space, so, approximately, the degraded images are adjacent in the space. It is therefore possible to approximate the reconstruction result of an unknown degradation level with the results of known degradation levels. Unfortunately, natural images lie on an approximately non-linear manifold \cite{weinberger2006unsupervised}. As a result, simple image interpolation tends to introduce ghosting artifacts or other unexpected details into the final results.

Instead of operating in the pixel space, we naturally turn our attention to the feature space. Several works indicate that the data manifold can be flattened by neural network mapping, so that the mapped manifold can be approximated as a Euclidean space \cite{bengio2013better, brahma2016deep, shoshan2018dynamic}. Based on this hypothesis, as shown in Fig. \ref{manifold_analysis}, we denote $X_{i}$ and $Y_{i}$ in the latent space as two endpoints, and we can represent an unknown point $Z_{i}$ as an affine combination of $X_{i}$ and $Y_{i}$:
\begin{equation}
Z_{i} \approx \alpha_{i}X_{i}+(1-\alpha_{i})Y_{i},
\label{coupling_module2}
\end{equation}
\begin{figure}[t]
\begin{center}
 \includegraphics[scale=0.6]{manifold_analysis.pdf}
\end{center}
\caption{Neural network mapping gradually disentangles data manifolds. We can represent an unknown point with known points in the latent space.}
\label{manifold_analysis}
\end{figure}
where $\alpha_{i}$ is the $i$-th combination coefficient. This is exactly the formula of the controllable coupling module, Eq. \ref{coupling_module}. However, we should also note that this hypothesis is influenced by the depth and width of the CNN. In other words, we do not know the degree of flattening that can be achieved at different channels and different layers. Thus the combination coefficients of different channels and different layers should be allowed to differ. Accordingly, we seek the optimal combination coefficients through the optimization process:
\begin{equation}
\alpha^{*}=\mathop{\arg\min}_{\alpha_{ij}} \ \ \sum_{i=1}^{M}\sum_{j=1}^{C}\left \| Z_{ij}-\left ( \alpha_{ij}X_{ij}+\left ( 1-\alpha_{ij} \right )Y_{ij} \right ) \right \|,
\label{combination_coefficients}
\end{equation}
where $\alpha^{*}=\left \{\alpha_{ij} \mid i=1,\cdots,M;\ j=1,\cdots,C \right \}$ represents the optimal solution. However, it is difficult to directly obtain the optimal $\alpha^{*}$, because the unknown working point $Z_{ij}$ cannot in general be computed tractably. We therefore solve Eq. \ref{combination_coefficients} in an implicit way. Specifically, we map the input control variable $\alpha_{in}$ to the different combination coefficients with stacked fully connected layers, and then approximate the above process by optimizing the parameters of this linear mapping network:
\begin{equation}
\alpha^{*}\approx \hat{\alpha}=F_{alpha}(\alpha_{in};\theta_{\alpha}),
\end{equation}
where $F_{alpha}$ denotes the mapping function of $\alpha_{in}$, and $\hat{\alpha}$ is the approximation of the optimal $\alpha^{*}$. Fortunately, this network can be embedded into our framework, so we can optimize the parameters of the linear mapping network and the tuning blocks in one shot. The entire optimization process (corresponding to Step \ref{Step2}) can be expressed as
\begin{equation}
\theta_{tun},\theta_{\alpha}=\mathop{\arg\min}_{\theta_{tun},\theta_{\alpha}} \ \ L_{2}(F_{CFSN}(I_{in}, \alpha_{in};\theta_{tun},\theta_{\alpha}), I_{g}).
\label{whole_optimization}
\end{equation}

\label{section:Theoretical_analysis}
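
To make the implicit solution concrete, the sketch below shows one possible form of the mapping network $F_{alpha}$: the scalar $\alpha_{in}$ is broadcast to a 512-dimensional control input (as in our implementation details) and mapped by stacked fully connected layers to one per-channel coefficient vector for each of the $M$ coupling modules. The hidden sizes are illustrative assumptions, not the exact released architecture.

\begin{verbatim}
import torch
import torch.nn as nn

# Sketch of F_alpha(alpha_in; theta_alpha); M modules, C channels.
M, C = 30, 64
f_alpha = nn.Sequential(nn.Linear(512, 512), nn.ReLU(),
                        nn.Linear(512, M * C))

alpha_in = 0.5
control = alpha_in * torch.ones(512)  # 512-dim control input
alphas = f_alpha(control).view(M, C)  # one alpha_m per module
\end{verbatim}
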
\subsection{Discussions}
\begin{bfseries} 
Difference to Dynamic-Net
\end{bfseries}
Recently, Dynamic-Net \cite{shoshan2018dynamic} realized interactive control of continuous image transformation. There are two main differences between Dynamic-Net and our CFSNet. First, Dynamic-Net is mainly designed for image transformation tasks such as style transfer, and it is difficult to achieve desirable results when Dynamic-Net is used directly for image restoration tasks. In contrast, motivated by super-resolution, we design the proposed CFSNet for low-level image restoration. Second, in Dynamic-Net, as shown in Fig. \ref{basic_module}, multiple scalars $\left \{ \alpha_{m} \right \}$ are tuned directly to get different outputs, whereas in our framework each coupling coefficient $\alpha_{m}$ is a vector learned adaptively from the single input scalar $\alpha_{in}$ through the optimization process. This is more user-friendly, and we explain the rationale of this design in Sec. \ref{section:Theoretical_analysis}.

\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.9\linewidth]{compare.pdf}
\end{center}
\caption{Basic module. Yellow and orange bars represent the main block and the tuning block respectively.}
\label{basic_module}
\end{figure}

\begin{bfseries} 
Difference to Deep Network Interpolation
\end{bfseries}
Deep Network Interpolation (DNI) is another way to control the compromise between perceptual quality and distortion \cite{wang2018esrgan, wang2019deep}, and it can also be applied to many low-level vision tasks \cite{wang2019deep}. However, this method needs to train two networks with the same architecture but different losses, and then generates a third network for control. In contrast, our framework achieves better interactive control with a unified end-to-end network. Moreover, our framework makes better use of the reference information through the coupling module. DNI performs interpolation in the parameter space to generate continuous transition effects, with the interpolation coefficient kept the same over the whole parameter space; this simple strategy cannot guarantee the optimality of the outputs. In our CFSNet, by contrast, we perform the interpolation in the feature space, and the continuous transition of the reconstruction effect is consistent with the variation of the control variable $\alpha_{in}$, so we can produce a better approximation of the unknown working point. See Sec. \ref{section:super-resolution} for experimental comparisons.

\section{Experiments}
In this section, we first describe the implementation details of our framework. Then we validate the control mechanism of our CFSNet. Finally, we apply our CFSNet to three classic tasks: image super-resolution, image denoising, and JPEG image deblocking. All experiments validate the effectiveness of our model. Due to space limitations, more examples and analyses are provided in the appendix.
\subsection{Implementation Details}
For the image super-resolution task, our framework contains 30 main blocks and 30 tuning blocks (i.e., $M=30$). For the other two tasks, the main branch parameters of our CFSNet are kept similar to those of the compared method \cite{zhang2017beyond} (i.e., $M=10$) for a fair comparison. Furthermore, we first generate a 512-dimensional vector with all values set to 1, and then multiply it by the control scalar $\alpha_{in}$ to produce the control input. All convolutional layers have 64 filters, and the kernel size of each convolutional layer is $3\times 3$. We use the method in \cite{he2015delving} to perform weight initialization. For both training stages of all tasks, we use the ADAM optimizer \cite{kingma2014adam} with $\beta_{1}=0.9$, $\beta_{2}=0.999$, and $\varepsilon = 10^{-8}$, and an initial learning rate of $10^{-4}$. We adopt a minibatch size of 128 for the image denoising task and 16 for the other tasks. We use PyTorch to implement our network and perform all experiments on a GTX 1080Ti GPU.

For image denoising and JPEG image deblocking, we follow the settings of \cite{zhang2017beyond} and \cite{dong2015compression} respectively. The training loss function in Step \ref{Step1} and Step \ref{Step2} remains the same: $L_{1}(I_{rec}, I_{g})=L_{2}(I_{rec}, I_{g})$. In particular, for image denoising, we input degraded images of noise level 25 when training the main branch in Step \ref{Step1}, and degraded images of noise level 50 when training the tuning branch in Step \ref{Step2}. Training images are cut into $40\times 40$ patches with a stride of 10, and the learning rate is reduced by a factor of 10 every 50000 steps. For JPEG deblocking, we set the quality factor to 10 in the first training stage and change it to 40 in the second; we choose a patch size of $48\times 48$, and the learning rate is divided by 10 every 100000 steps. For image super-resolution, we first train the main branch with the MAE loss as objective, and then train the tuning branch with the objective $L_{2}=L_{mae}+0.01L_{gan}+0.01L_{per}$, where $L_{mae}$ denotes the mean absolute error (MAE), $L_{gan}$ represents the WGAN-GP loss \cite{gulrajani2017improved}, and $L_{per}$ is a variant of the perceptual loss \cite{wang2018esrgan}. We set the HR patch size to $128\times 128$ and multiply the learning rate by 0.6 every 400000 steps.
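
A sketch of this second-stage super-resolution objective is given below; \texttt{critic} and \texttt{vgg\_features} stand in for the WGAN-GP critic and the pretrained feature extractor of the perceptual loss, and are assumptions for illustration rather than code from our implementation.

\begin{verbatim}
import torch.nn.functional as F

# L2 = L_mae + 0.01 * L_gan + 0.01 * L_per (super-resolution).
# critic and vgg_features are assumed helper networks.
def sr_l2_loss(rec, gt, critic, vgg_features):
    l_mae = F.l1_loss(rec, gt)   # distortion (MAE) term
    l_gan = -critic(rec).mean()  # WGAN generator term
    l_per = F.l1_loss(vgg_features(rec), vgg_features(gt))
    return l_mae + 0.01 * l_gan + 0.01 * l_per
\end{verbatim}
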
\subsection{Ablation Study}
Fig. \ref{Ablation_alpha} presents the ablation study on the effect of the adaptive coupling-coefficient learning strategy. In CFSNet-SA, we directly set the coupling coefficients of all channels and all layers to the same scalar $\alpha_{in}$; that is, compared with CFSNet, CFSNet-SA removes the linear mapping network of the control variable $\alpha_{in}$. Otherwise, the training process of CFSNet-SA is kept consistent with that of CFSNet. We find that, in both the denoising and the deblocking task, the best restored result of CFSNet is better than that of CFSNet-SA for unseen degradation levels. In particular, the curve of CFSNet is concave-shaped, which means that there is a bijective relationship between the reconstruction effect and the control variable. In contrast, there is no clear trend in the curve of CFSNet-SA. The reason is that adaptive coupling coefficients help to produce better intermediate features, and this merit enables friendlier interactive control. Moreover, the JPEG deblocking task is more robust to the control variable than the image denoising task; we speculate that this is because JPEG images of different degradation levels are closer in the latent space.

\begin{figure}[t]
\centering 
\subfigure[image denoising]{
\label{Ablation_alpha.sub.1}
\includegraphics[width=0.22\textwidth]{noise30_samealpha.pdf}}
\subfigure[JPEG image deblocking]{
\label{Ablation_alpha.sub.2}
\includegraphics[width=0.22\textwidth]{level30_samealpha.pdf}}
\caption{Average PSNR curves for noise level 30 on the BSD68 dataset and for quality factor 20 on the LIVE1 dataset.}
\label{Ablation_alpha}
\end{figure}

\subsection{Image Super-resolution}
For image super-resolution, we adopt the widely used DIV2K training dataset \cite{agustsson2017ntire}, which contains 800 images. We down-sample the high-resolution images using the MATLAB bicubic kernel with a scaling factor of 4. Following \cite{michelini2018multi, wang2018esrgan}, we evaluate our models on the PIRM test dataset provided in the PIRM-SR Challenge \cite{blau20182018}. We use the perception index (PI) to measure perceptual quality and RMSE to measure distortion. As in the PIRM-SR challenge, we choose EDSR \cite{lim2017enhanced}, CX \cite{mechrez2018maintaining} and EnhanceNet \cite{sajjadi2017enhancenet} as baseline methods. Furthermore, we also compare our CFSNet with another popular trade-off method, deep network interpolation \cite{wang2019deep, wang2018esrgan}; we directly use the source code of ESRGAN \cite{wang2018esrgan} to produce SR results with different perceptual quality, denoted ESRGAN-I.

\begin{figure}[!t]
\centering
\includegraphics[scale=0.5]{PI_RMSE.pdf}
\caption{Perception-distortion plane on the PIRM test dataset. We gradually increase $\alpha_{in}$ from 0 to 1 to generate different results from the distortion point to the perception point.}
\label{Perception_distortion}
\end{figure}

\begin{figure*}[ht]
\begin{center}
 \includegraphics[scale=0.537]{sr_compare.pdf}
\end{center}
\caption{Perception-distortion balance of ``215'', ``211'' and ``268'' (PIRM test dataset) for $4\times$ image super-resolution.}
\label{sr_compare}
\end{figure*}

\begin{figure*}[ht]
\centering
\includegraphics[scale=0.53]{noise40_51.pdf}
\caption{Gray-image denoising results of ``test051'', ``test017'' and ``test001'' (BSD68) with unknown noise level $\sigma=40$. $\alpha_{in}=0.5$ corresponds to the highest-PSNR results, and the best visual results are marked with red boxes.}
\label{noise40_51and44}
\end{figure*}

\begin{figure*}[ht]
\centering
\includegraphics[width=1.0\textwidth, height=0.18\textheight]{jpeg_20.pdf}
\caption{JPEG image artifact removal results of ``house'' and ``ocean'' (LIVE1) with unknown quality factor 20. $\alpha_{in}=0.5$ corresponds to the highest-PSNR results, and the best visual results are marked with red boxes.}
\label{jpeg_20}
\end{figure*}
Fig. \ref{sr_compare} and Fig. \ref{first_visual} show the visual comparison between our results and the baselines. We can observe that CFSNet achieves a smooth transition from low-distortion results to high-perceptual-quality results without unpleasant artifacts. In addition, our CFSNet outperforms the baselines on edges and shapes. Since user preferences differ, it is necessary to allow users to adjust the reconstruction results freely.

We also provide quantitative comparisons on the PIRM test dataset. Fig. \ref{Perception_distortion} shows the perception-distortion plane. As we can see, CFSNet improves on the baseline (EnhanceNet) in both perceptual quality and reconstruction accuracy. The blue curve shows that our perception-distortion function is steeper than that of ESRGAN-I (orange curve). Meanwhile, CFSNet performs better than ESRGAN-I in most regions, although our network is lighter than ESRGAN. This means that our result is closer to the theoretical perception-distortion bound.

\label{section:super-resolution}

\subsection{Image Denoising}
In the image denoising experiments, we follow \cite{zhang2017beyond} and use 400 images from the Berkeley Segmentation Dataset (BSD) \cite{martin2001database} as the training set. We test our model on BSD68 \cite{martin2001database} using the mean PSNR as the quantitative metric. Both the training set and the test set are converted to grayscale images. We generate the degraded images by adding Gaussian noise of different levels (\emph{e.g.}, 15, 25, 30, 40, and 50) to clean images.

We provide visual comparisons in Fig. \ref{noise40_51and44} and Fig. \ref{first_visual}. As we can see, users can easily control $\alpha_{in}$ to balance noise reduction and detail preservation. It is worth noting that our highest-PSNR results ($\alpha_{in}=0.5$) have visual quality similar to that of other methods, but the highest PSNR does not necessarily imply the best visual effect; for example, the sky patch of ``test017'' enjoys a smoother result when $\alpha_{in}=0.6$. Users can personalize each picture and choose their favorite results by controlling $\alpha_{in}$ at test time.

In addition to perceptual comparisons, we also provide objective quantitative comparisons. We change $\alpha_{in}$ from 0 to 1 with an interval of 0.1 for the preset noise range ($\sigma \in \{25, 30, 40, 50\}$), and then choose the final result according to the highest PSNR. We compare our CFSNet with several state-of-the-art denoising methods: BM3D \cite{4271520}, TNRD \cite{chen2017trainable}, DnCNN \cite{zhang2017beyond}, IRCNN \cite{zhang2017learning}, and FFDNet \cite{zhang2018ffdnet}. Interestingly, as shown in Tab. \ref{Benchmark_denoise}, our CFSNet is comparable with FFDNet at the endpoints ($\sigma=25$ and $\sigma=50$), yet achieves the best performance at noise level 30, which is not contained in the training process. Moreover, our CFSNet can even deal with an unseen outlier ($\sigma=15$). This further verifies that we can obtain a good approximation of the unknown working point.
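
The selection rule used for these quantitative comparisons can be sketched as follows, where \texttt{model} is an assumed inference callable and images are assumed to be scaled to $[0, 1]$:

\begin{verbatim}
import numpy as np

def psnr(rec, gt):
    mse = np.mean((rec - gt) ** 2)
    return 10.0 * np.log10(1.0 / mse)  # images in [0, 1]

# Sweep alpha_in over [0, 1] in steps of 0.1 and keep the
# output with the highest PSNR; model is an assumed callable.
def best_output(model, x, gt):
    outs = [model(x, alpha_in=a) for a in np.arange(0.0, 1.01, 0.1)]
    return max(outs, key=lambda rec: psnr(rec, gt))
\end{verbatim}
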
\begin{table}[!t]
\centering
\caption{Benchmark image denoising results. The average PSNR (dB) for various noise levels on (gray) BSD68. $^{*}$ denotes noise levels unseen by our CFSNet in the training stage.}
\begin{tabular}{|c|c|c|c|c|}
\hline
methods&$\sigma=15^{*}$&$\sigma=25$&$\sigma=30^{*}$&$\sigma=50$\\
\hline
\hline
BM3D&31.08&28.57&27.76&25.62 \\
\hline
TNRD&31.42&28.92&27.66&25.97 \\
\hline
DnCNN-B&31.61&29.16&28.36&26.23 \\
\hline
IRCNN&31.63&29.15&28.26&26.19 \\
\hline
FFDNet&31.63&29.19&28.39&26.29 \\
\hline
CFSNet&31.29&29.24&28.39&26.28 \\
\hline
\end{tabular}
\label{Benchmark_denoise}
\end{table}

\subsection{JPEG Image Deblocking}
We also apply our framework to reduce image compression artifacts. As in \cite{dong2015compression, zhang2017beyond, liu2018multi}, we adopt LIVE1 \cite{moorthy2009visual} as the test dataset and the BSDS500 dataset \cite{martin2001database} as the base training set. For a fair comparison, we perform both training and evaluation on the luminance component of the YCbCr color space. We use the MATLAB JPEG encoder to generate the JPEG deblocking inputs with four JPEG quality settings $q = 10, 20, 30, 40$.

We select the deblocking result in the same way as in the image denoising task. We select SA-DCT \cite{foi2007pointwise}, ARCNN \cite{dong2015compression}, TNRD \cite{chen2017trainable} and DnCNN \cite{zhang2017beyond} for comparison. Tab. \ref{Benchmark_deblocking} shows the JPEG deblocking results on LIVE1. Our CFSNet achieves the best PSNR results for all compression quality factors. In particular, our CFSNet does not degrade much on the unseen factors and still achieves 0.12 dB and 0.18 dB improvements over DnCNN-3 at quality 20 and 30 respectively, although JPEG images of quality 20 and 30 never appear in the training process. Fig. \ref{jpeg_20} shows visual results of different methods on LIVE1. A too-small $\alpha_{in}$ produces over-smoothed results, while a too-large $\alpha_{in}$ leads to incomplete artifact removal. Compared to ARCNN \cite{dong2015compression} and DnCNN \cite{zhang2017beyond}, our CFSNet achieves a better compromise between artifact removal and detail preservation.

\begin{table}[!t]
\centering
\caption{Benchmark JPEG deblocking results. The average PSNR (dB) on the LIVE1 dataset. $^{*}$ denotes quality factors unseen by our CFSNet in the training stage.}
\begin{tabular}{|c|c|c|c|c|}
\hline
methods&$q=10$&$q=20^{*}$&$q=30^{*}$&$q=40$\\
\hline
\hline
JPEG&27.77&30.07&31.41&32.35 \\
\hline
SA-DCT&28.65&30.81&32.08&32.99 \\
\hline
ARCNN&28.98&31.29&32.69&33.63 \\
\hline
TNRD&29.15&31.46&32.84&N/A \\
\hline
DnCNN-3&29.19&31.59&32.98&33.96 \\
\hline
CFSNet&29.36&31.71&33.16&34.16 \\
\hline
\end{tabular}
\label{Benchmark_deblocking}
\end{table}

\section{Conclusion}

In this paper, we introduce a new, carefully designed framework equipped with flexible controllability for image restoration. The reconstruction results can be finely controlled with a single input variable thanks to an adaptive learning strategy for the coupling coefficients. The framework produces high-quality images on restoration tasks such as image super-resolution, blind image denoising and blind image deblocking, and it outperforms existing state-of-the-art methods in terms of user-control flexibility and visual quality. Future work will focus on extending the framework to restoration tasks with multiple degradations.

\begin{bfseries} 
\noindent Acknowledgements.
\end{bfseries}
This work was supported by the Natural Science Foundation of China (Nos. 61471216 and 61771276) and the Special Foundation for the Development of Strategic Emerging Industries of Shenzhen (Nos. JCYJ20170307153940960 and JCYJ20170817161845824).

{\small
\bibliographystyle{ieee}
","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\section{Main}\label{sec2}

A recent, high-impact development in the field of photonics has been the revelation of parity-time (PT) symmetric systems endowed with rich novel physics\cite{bender1999pt, bender1998real, el2018non, ruter2010observation, regensburger2012parity, jing2014pt,lin2011unidirectional,zhu2014p,benisty2011implementation,ding2015coalescence}. At the barycenter of the literature on PT-symmetry in photonics lie peculiar spectral degeneracies called exceptional points\cite{peng2014parity,peng2014loss, cerjan2016exceptional,feng2013experimental,jing2017high,ding2016emergence,kang2016chiral,zhen2015spawning}. Originating from the realm of quantum mechanics, exceptional points (EPs) signify singularities at which the eigenvalues and eigenfunctions of non-Hermitian matrices cannot be described analytically\cite{heiss2012physics,kato1966analytic}. At an EP, two or more eigenvalues coalesce, simultaneously with the coalescence of the associated eigenfunctions\cite{heiss1999phases,moiseyev2011non,el2007theory,pick2017general}. This theoretical construct has led to immense physical consequences in real experimental systems\cite{miri2019exceptional}. Previously discussed in the quantum theory of atomic and molecular resonances\cite{moiseyev1998quantum}, EPs are now demonstrated in carefully designed photonic systems motivated by the enhancement of system functionalities, such as increased output intensity\cite{feng2014single,hodaei2014parity,takata2021observing} and increased sensing capability\cite{hodaei2017enhanced,chen2017exceptional,hokmabadi2019non}.
The ability to nanofabricate precisely and to introduce gain/loss on demand has allowed for the realization of several photonic platforms for the observation of EPs\cite{ozdemir2019parity,brandstetter2014reversing}. However, to our knowledge, there are no experimental reports of observations of EPs in disordered mesoscopic systems, despite several investigations of non-Hermiticity in such systems\cite{vazquez2014gain,bachelard2022coalescence,davy2019probing,huang2021wave,weidemann2020nonhermitian,balasubrahmaniyam2020necklace,sahoo2022anomalous}.

Demonstrations of exceptional points in photonic structures have typically followed a common theme: the gain or loss parameter in one cavity is systematically tuned while the emission spectrum and the spatial intensity in the sample are monitored. For example, in a coupled microring system, the authors fabricated two physically identical microrings at a pre-defined separation and coupling\cite{hodaei2014parity, hodaei2015parity}. Optical excitation provided gain, and the loss in one of the cavities was continuously tuned by filtering out the pump energy. At the exceptional point, the lasing occurred exclusively over one microring, and the emission intensity was considerably higher than that from an individual microring pumped at par. This strategy has been adopted in various photonic systems \cite{takata2021observing, kim2016direct, chang2014parity, peng2014loss} that are carefully designed using physically separate cavities with tunable gain/loss. However, such a strategy is not feasible in an Anderson-localizing structure\cite{anderson1958absence,john1987strong,segev2013anderson,chabanov2000statistical,wiersma1997localization,sapienza2010cavity,schwartz2007transport,lahini2008anderson,riboli2011anderson} for several reasons. Anderson-localizing systems are nondeterministic, wherein the occurrence of a resonance is probabilistic. In a situation of coupled modes, both cavities reside in the same physical structure and often involve common scatterers for distributed feedback. Naturally, the calibrated introduction of gain/loss into individual localized modes is inconceivable. Hence, the conventional strategy of observing exceptional points is implausible in Anderson-localizing systems. In such a nondeterministic scenario, another route towards an EP can be adopted: the formation of an EP within a localizing system is a probabilistic event, so if a sufficiently large number of configurations can be generated and monitored, an exceptional point can be expected with the right probability. Although easily said, this is a challenging prospect, because the three ingredients, namely (i) achieving Anderson localization, (ii) achieving coupled modes thereof, and (iii) adding adequate gain/loss into the system, are all formidable experimental challenges. Nonetheless, this is the route we adopt in this work, and we demonstrate Anderson-localization-based lasing over exceptional points.

We use a technique which allows us to create and monitor thousands of disorder configurations\cite{joshi2020reduction, joshi2022anomalous}. The system is one-dimensional and hence has an increased probability of Anderson localization and, therefore, of coupled Anderson-localized modes. Optical sources are distributed throughout the structure so that localized modes at any spatial location are readily excited.
We employ a simultaneous spectral, spatial and temporal diagnostic setup to completely characterize the emission from the microcavity array. The introduction of temporal diagnostics is a vital step towards the identification of coupled modes. Usually, coupling is identified on the basis of spectral splitting. However, the spectra of Anderson-localizing systems are random and multimodal in character, and one cannot unambiguously distinguish a coupled-mode splitting from a pair of individual isolated modes with comparable spectral separation. The temporal signatures of the two cases, on the other hand, are fundamentally different and provide conclusive evidence of splitting. Since temporal diagnostics at an EP have hitherto not been reported in lasing systems, we first simulate a deterministic prototype lasing structure and introduce the systematic temporal, spectral and spatial behavior at, and during the approach to, an EP. The EP-approach is characterized by quantum echoes (QE) that signify energy transfer between the coupled resonances, equivalent to Rabi oscillations in two-level systems\cite{dietz2007rabi,dembowski2004first}. The echoes vanish at the EP. Subsequently, we illustrate the experimental results in our system, wherein the spectra exhibit coupled modes via spectral splitting and the temporal diagnostics simultaneously exhibit quantum echoes. The spatial intensity distributions exhibit the expected supermodes of the coupled system. The approach to an EP is evident in reduced splittings and lower oscillation frequencies, accompanied by increased output intensity. Eventually, emission peaks with largely enhanced intensity are shown, wherein the spatial intensity resides exclusively in one Anderson-localizing cavity. In these cases, the spectral profiles are seen to follow a squared-Lorentzian profile, identifying an EP.

\begin{figure}[h!]
\centering
\includegraphics[width=1\textwidth]{simulation_arxiv.pdf}
\caption{\textbf{Quantum echo diagnosis of an exceptional point in a simulated, deterministic, model system}: (a) Temporal evolution of emission from two coupled microring resonators, as computed from finite-difference time-domain computations. Top panel: situation for a spectral splitting of $\delta\lambda = 3.6~$nm. The blue curve shows the echo in the emission at one wavelength, with a period of $T$ ($=\frac{1}{\Omega}$, where $\Omega$ is the echo frequency). The echo in the emission at the other wavelength (orange curve) is out of phase, quantified by $\Delta T/T$. Middle panel: dynamics for $\delta\lambda = 1~$nm, where $T$ is increased while $\Delta T/T$ is reduced. Bottom panel: behavior at the exceptional point (EP), i.e., $\delta\lambda = 0~$nm, where there is no echo or oscillation. A single temporal profile is seen because of the eigenvalue coalescence. (b) Spectro-spatial observations corresponding to (a). Insets show the intensity distribution in the coupled system averaged over the entire echo. The energy resides in both resonators at large splittings, while it exists exclusively in the amplifying resonator at the EP. (c) The phase difference ($\Delta T/T$) between the two echoes (blue markers, left Y-axis) reduces systematically with $\delta\lambda$, and is zero at the EP. The peak emission intensity $I_p$ (red markers, right Y-axis) diverges at the EP. Inset shows $\Omega$ versus $\delta\lambda$. (d) Correspondence between the quantum echo behavior of (a) and PT-symmetry breaking and EP-formation.
Circle markers mark the situations in (a) and (b).}
\label{fig:simring}
\end{figure}

We first illustrate the quantum echo in a known EP system, namely a coupled microring cavity. We simulate identical resonators with a diameter of $d = 2.0~\mu$m, thickness $t = 0.2~\mu$m and separation $s = 0.4~\mu$m. The system was simulated using a finite-difference time-domain scheme (Lumerical) in the time domain, followed by a full-wave finite-element computation (COMSOL) in the frequency domain. Optical gain or loss is introduced through the imaginary part of the complex refractive index, $n_{im}$ (see Methods for details). The $n_{im}$ of the first cavity is set to $-0.01$ (amplification), while the $n_{im}$ of the second cavity is employed as the tunable parameter to approach an exceptional point. Fig.~\ref{fig:simring}(a) depicts the temporal behavior of the system, while (b) describes the corresponding eigenvalue behavior, along with the intensity distribution in the coupled resonators. The top panels of (a) and (b) depict the situation for identical gain, $n_{im} = -0.01$, in the two cavities. The resulting strong coupling realizes a large splitting, corresponding to a wavelength splitting $\delta\lambda$ of 3.6~nm. The emission clearly shows QE oscillations (top panel of Fig.~\ref{fig:simring}(a)) in the temporal profile; the blue and orange curves represent emission from the two cavities. The high QE frequency ($\Omega$) allows for multiple oscillations during the emission cycle, with out-of-phase oscillations from the two cavities. The corresponding split eigenvalues are shown in the top panel of (b), along with the intensity distribution (inset) in the cavities. The intensity is shown here instead of the fields because the former is an experimental observable. Both supermodes corresponding to the two eigenvalues show identical intensity, although the fields depict the bonding and antibonding character (see the Supplementary Information for the field distributions). With reducing gain, the splitting reduces proportionately. For example, the middle panel of (a) shows the situation for $n_{im} = -0.0073$ in cavity 2, where $\delta\lambda = 1~$nm. The QE oscillation frequency and the phase difference ($\Delta T/T$) diminish. The intensity distribution of the two supermodes is skewed in favour of cavity 1, as seen in (b). Ultimately, at $n_{im} = -0.0056$ in cavity 2, the system hits an exceptional point. The temporal emission coalesces into a single profile, seen in the bottom panel of (a). The two eigenvalues coalesce and a single spectral peak is observed. The entire energy is accumulated in cavity 1, indicative of the eigenvector coalescence seen in the field distributions (see the Supplementary Information for the fields). Fig.~\ref{fig:simring}(c) describes the systematic variation of the phase difference between the two echoes and the output intensity with $\delta\lambda$. A monotonic decrease in $\Delta T/T$ (blue markers, left Y-axis) with $\delta\lambda$ is seen. The peak output intensity $I_p$ increases upon approaching the EP, and maximises at the EP, indicative of the enhanced functionality of a system operating at the exceptional point. The inset shows that $\Omega$ reduces monotonically with $\delta\lambda$. The correspondence of the quantum echo with the PT-symmetry breaking was confirmed by calculating the complex eigenvalues of the system using a finite-element eigensolver computation, wherein the exact same structure was simulated. The gain/loss parameter $n_{im}$ was the same as in the FDTD. The real (blue curve) and imaginary (red curve) parts of the eigenvalues are shown in subplot (d); the circle markers on the blue curve are the situations discussed in (a) and (b). A clear correspondence between the temporal behavior and the theoretically expected PT-symmetry breaking and EP-formation is seen.
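
The essence of this correspondence is captured by a two-mode toy model, sketched below. This is not the finite-element computation, and the numbers are illustrative. Two resonators with equal real frequency $\omega_0$, gain/loss rates $g_{1,2}$ and coupling $\kappa$ have eigenvalues $\omega_0 + i(g_1+g_2)/2 \pm \sqrt{\kappa^2 - (g_1-g_2)^2/4}$, so the real splitting closes exactly at $|g_1-g_2| = 2\kappa$, the exceptional point:

\begin{verbatim}
import numpy as np

# Toy coupled-mode model (illustrative, not the FEM computation):
# eigenvalues of H = [[w0 + i*g1, k], [k, w0 + i*g2]]. The real
# splitting closes at |g1 - g2| = 2k (the EP) and reopens in the
# imaginary part beyond it (the PT-broken phase).
w0, k, g1 = 190.35, 0.5, 0.05
for g2 in np.linspace(g1, g1 - 2.0, 5):
    H = np.array([[w0 + 1j * g1, k], [k, w0 + 1j * g2]])
    ev = np.linalg.eigvals(H)
    print(f"g2={g2:+.3f}"
          f"  Re-split={abs(ev[0].real - ev[1].real):.3f}"
          f"  Im-split={abs(ev[0].imag - ev[1].imag):.3f}")
\end{verbatim}
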
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\textwidth]{1Dapprox_arxiv.pdf}
\caption{Schematic of linear arrays of spherical microcavities: (a) white arrows show the emission and scattering directions: longitudinal emission (transmitted signal) and out-of-axis emission. The sizes of the cavities are about 18~$\mu$m. (b) A finite-element-method (FEM) simulation showing the longitudinal mode.}
\label{fig:1Dapprox}
\end{figure}

\begin{figure}[h!]
\centering
\includegraphics[width=1\textwidth]{Exp_setup_arxiv.pdf}
\caption{\textbf{Experimental setup and characterization of the Anderson localized lasing system:} (a) Complete schematic of the setup; see main text for details. (b) CCD image of the microresonator array. (c) Zoomed view of the rectangular region in (a): the microresonator array is pumped by a mode-locked Nd:YAG laser (green beam, $\lambda = 532$~nm, pulse width $= 30$~ps, repetition rate $= 10$~Hz). The emission in the longitudinal direction (transmitted light) is used for the temporal measurement $I_{tmp}$, and the transverse out-of-axis scatter simultaneously gives the spatial profile $I_{sp}$; the schematic of the emission directions is shown in (d). The transverse scatter is imaged onto the input slit of a spectrometer S1 coupled with an ICCD using a 4f imaging setup [L2: imaging lens (10 cm focal length); F2: notch filter (to remove the pump beam)]. The longitudinal emission is measured using a mirror M located at an angle of $45^{\circ}$ with respect to the array, as shown in the schematic. This emission is directed towards the input slit of a second identical spectrometer S2 coupled to a streak camera (Optronis SC-10, temporal resolution $\sim3$~ps) through a focusing lens L3 and a notch filter F2. (e, f) represent the spatial data, where (e) shows the spectrographic image of the multiple lasing modes, and (f) shows a typical extracted spatial profile of one mode, yielding the localization length; (g) shows the simultaneous streak image of the same modes, with (h) showing a typical extracted temporal profile. (i) Lasing spectrum obtained by integrating the streak image over the time axis, or the spectrographic image over the spatial axis.}
\label{fig:expsetup}
\end{figure}

The structures we employ for these experiments, schematized in Fig.~\ref{fig:1Dapprox}(a), are linear arrays of microcavities that couple to form photonic bands and bandgaps. The microcavities are spherical in shape with diameters of about $18~\mu$m. The large diameters of the spheres and the linear configuration of the array induce coupling of the longitudinal Fabry-Perot modes of the microcavities. Figure~\ref{fig:1Dapprox}(b) illustrates a finite-element computation of an eigenmode of the periodic array, clearly depicting the longitudinal mode with quasi-planar wavefronts, akin to a multilayer system. Loss in this structure is induced by out-of-axis scattering of the mode at the interfaces. In order to introduce gain, we choose the microcavity material appropriately, as described in the Methods.
Disorder is introduced by deliberately modifying the axial dimensions and spatial separations of the microcavities. In our earlier works, we have shown that the periodic arrays sustain Bloch modes\cite{joshi2022anomalous}, while the disordered arrays realize Anderson-localized modes\cite{joshi2020reduction, joshi2019effect, kumar2017temporal}. As the fabrication process of the microcavity arrays is dynamic in nature, we can monitor a huge number of configurations, as discussed next.

Figure~\ref{fig:expsetup} illustrates the creation of the microcavity array and its simultaneous spectro-spatio-temporal characterization. The generation technique, described in detail in the Methods section, creates the microcavities from a Rhodamine-in-alcohol solution. The Rhodamine molecules act as sources distributed throughout the length of the array, whilst also providing gain to the mode [a CCD image is shown in (b)]. Optical excitation is provided by pulses of an Nd:YAG laser fired at 1 Hz, illustrated by the green beam. The emission transverse to the array comprises the out-of-axis scatter, which is imaged onto a spectrometer (S1) coupled to an imaging CCD. The longitudinal emission of the array, which carries the temporal signature of the emission in the Anderson-localized modes, is directed to an identical spectrometer (S2) coupled to a streak camera SC (Optronis SC-10). Since the microcavities are constantly created and introduced into the pump laser focus, every excitation laser pulse sees a different disorder configuration of the array. This feature enables us to generate the vast ensemble of configurations required in this study. Importantly, the streak camera and the spectrometer CCD are synchronized with the pump laser to register the spectral, spatial and temporal data of every configuration simultaneously. Panels (c) and (d) illustrate the characterization scheme. Panel (e) depicts the spectrographic image from S1 for a particular pump pulse, showing the multiple emission modes from the corresponding configuration illuminated by that laser pulse. The horizontal axis represents the spatial axis, while the vertical axis represents the wavelength. Subplot (f) shows the extracted spatial profile of a representative mode, labeled by the white arrow in image (e). This is an Anderson-localized mode, whose localization length $\xi$ can be extracted from the decay in the wings. Panel (g) shows the streak image of the emission of the same configuration. Since this emission is directed through a spectrometer, the vertical axis corresponds to the wavelength, while the horizontal axis corresponds to the time axis. A comparison between (g) and (e) shows that the same modes are captured in both devices, endorsing the simultaneous diagnostic capability of the setup. Subplot (h) is the extracted temporal profile of the same mode as in (f). Subplot (i) shows the emission spectrum obtained by integrating over the time axis of (g); the same can be obtained by integrating over the space axis of (e).

We used this diagnostic system to characterize 3,000 different configurations of disorder, providing a massive ensemble of about 15,000 Anderson-localized lasing modes. The limit on the configuration number is set by the volume of the liquid present in the generator, as mentioned in the Methods.
Although all modes were captured at the same nominal pump energy, inherent pump fluctuations motivated us to further process the ensemble and choose modes within a very narrow range of pump energy, to within $\sim1~\mu$J of the average pump energy of $\sim87~\mu$J. The filtered ensemble still had over 5,000 modes. Among all the captured modes, pairs of coupled modes were identified through the temporal profiles, since the coupling resulted in a quantum echo. This clearly stood apart from the smooth profiles of individual uncoupled modes, such as the one shown in Figure \ref{fig:expsetup}(h). The coupling was further corroborated by the splitting in the spectral profile. The various degrees of splitting allowed us to identify the approach to a second-order exceptional point, and eventually the EP itself.
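
The echo parameters quoted below can be extracted from the measured temporal profiles along the lines of the following sketch; the peak-detection threshold is illustrative and does not reproduce the exact analysis pipeline used for the data.

\begin{verbatim}
import numpy as np
from scipy.signal import find_peaks

# Extract the echo frequency Omega and phase offset dT/T from two
# temporal profiles i1, i2 of a coupled pair, sampled on a common
# time axis t (in ps). Requires at least two echo maxima in i1.
def echo_parameters(t, i1, i2):
    p1, _ = find_peaks(i1, prominence=0.05 * i1.max())
    p2, _ = find_peaks(i2, prominence=0.05 * i2.max())
    T = np.mean(np.diff(t[p1]))    # echo period from peak spacing
    dT = abs(t[p1[0]] - t[p2[0]])  # offset between the two echoes
    return 1.0 / T, dT / T         # Omega and phase difference
\end{verbatim}
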
\begin{figure}[h!]
\centering
\includegraphics[width=1\textwidth]{experimental_sptemp_coupled_arxiv.pdf}
\caption{\textbf{Experimentally measured temporal, spectral, and spatial characteristics of Anderson localized lasing modes around and at an exceptional point (EP):} Subplots (a-d) show the measured quantum echo from the lasing emission, with (d) being the EP. Unlike Fig.~\ref{fig:expsetup}(h), the temporal profiles in (a) exhibit a quantum echo that decisively certifies that the two modes are coupled. A reducing echo frequency, with a decreasing phase difference between the echoes at the two wavelengths, is clearly observed. Subplots (e-h) depict the corresponding eigenvalue spectra from the same configurations as (a-d), with the splittings indicated. The echo behavior is clearly consistent with the splitting. (i-l) represent the corresponding spatial intensity distributions of the coupled localized states. Each spatial mode exhibits two peaks, and signatures of shape exchange. Energy is distributed over both Anderson cavities at large splittings, but resides mostly in one cavity at $\delta\lambda = 0.2$~nm. A single profile exists at the EP in (l). The intensity behavior certifies the coalescence of eigenvectors towards and at the EP. Subplots (d), (h) and (l) represent Anderson-localization lasing over an exceptional point, further confirmed in Fig.~\ref{fig:lorentz}.}
\label{fig:sptemp}
\end{figure}

Figure~\ref{fig:sptemp} illustrates a representative set of measurements in the Anderson-localizing system showing the approach to an EP. The first column of panels shows the measured quantum echo, the middle column shows the corresponding spectral measurement, and the third column illustrates the corresponding spatial intensity distributions. Each row of three panels shows simultaneous diagnostics from the same disorder configuration, while different rows represent data from different configurations. Subplot (a) shows a temporal profile in which a high-frequency quantum echo is observed. The two profiles (orange and blue) arising from the two coupled modes are out of phase by $\Delta T/T$, as indicated in the plot. Subplot (e) shows the spectral splitting in the emission from the two coupled modes in this configuration. Fairly symmetric spectral peaks are observed; the marked $\delta\lambda = 0.6$~nm was measured using a peak-finding algorithm. Subplot (i) shows the spatial intensity of the two modes extracted from the spectrographic data as described earlier. Strong spatial overlap, with signatures of separated peaks, is seen in the profiles. The coupled Anderson-localized modes are spread over almost the entire system length. Two distinct peaks are seen in both profiles, indicative of the two resonant cavities that underwent coupling. These profiles strongly emphasize the fact that the two coupled cavities are not physically separated, unlike conventional photonic coupled cavities (Fig.~\ref{fig:simring}). In such a disordered system, the gain/loss values in the two cavities are self-determined by the random configuration, which decides the relevant parameters such as quality factors, pump distributions, etc. The lower rows show a gradually decreasing quantum echo frequency, which corresponds to the reduction in splitting seen in the middle panels. At the same time, the intensity distributions morphologically coalesce as the splitting diminishes. Notably, subplot (k) shows a strong peak accompanied by a shoulder, which can be interpreted as the intensity preferentially residing in one cavity. These configurations represent the approach to a second-order exceptional point. Ultimately, the bottom row (d, h, l) represents lasing {\it at} the exceptional point, showing only one echoless temporal profile, one spectral peak and one intensity profile. Subplots (a) through (d) illustrate the vanishing of the echo as the EP is hit. The conclusive inference that (d) corresponds to a second-order EP is provided in the next figure, where we present a statistical analysis of the data.

\begin{figure}[h!]
\centering
\includegraphics[width=1\textwidth]{Lorentzfit_arxiv.pdf}
\caption{\textbf{Confirmation of the EP in a statistical ensemble:} (a) Scatter points for the QE frequency $\Omega$ show an overall reduction with decreasing $\delta\lambda$. Pink diamonds indicate $\langle \Omega \rangle$ at a particular $\delta\lambda$. (b) Measured $\Delta T/T$ and (c) peak emission intensity ($I_p$) as a function of $\delta\lambda$, measured over $\sim100$ modes. Each datapoint in (b) is an average over a few oscillations in the temporal profile. Pink squares indicate $\langle \Delta T/T \rangle$ at a particular $\delta\lambda$. The scatter clearly shows an approach to $\Delta T/T \rightarrow 0$ as $\delta\lambda \rightarrow 0$. The yellow band around $\delta\lambda = 0$ signifies the EP region; all datapoints here are placed at $\delta\lambda = 0$, as the $\delta\lambda$ is beyond the experimental spectral resolution. Although $\Delta T/T$ does not exist here, a suggestive datapoint is marked at $\Delta T/T = 0$ in logical follow-up to the neighbouring datapoints. (c) shows the peak intensity $I_p$ increasing towards the EP region, with a significant jump of over $300\%$ in $I_{p}$ in the EP region. Inset shows $P(I_p)$, with emphasis shading on the outlier points of high intensity. (d) Spectral lineshape at the EP (filled green plot). Blue and red plots are Lorentzian and squared-Lorentzian fits respectively. The profile possesses a clear squared-Lorentzian character, which is the classic signature of an EP. Inset shows the same on a logarithmic Y-axis.}
\label{fig:lorentz}
\end{figure}

Fig.~\ref{fig:lorentz}(a) illustrates the frequency of the quantum echo for various splittings. The mean frequency $\langle \Omega \rangle$ at each $\delta\lambda$ is marked by pink diamonds. The trend in $\langle \Omega \rangle$ is determined by the distribution of the scatter points, which are constrained to a lower limit imposed by the total pulse duration ($\sim 50$~ps). (b) shows the phase difference between the echograms of the coupled modes as a function of $\delta\lambda$ over several configurations.
The scatter in the data reflects the probabilistic variation in the phase difference for a given splitting, with a large scatter existing in the range $0.2$~nm$<\delta\lambda<0.4$~nm, wherein a large number of coupled modes were observed. Such statistical variations are expected in non-deterministic systems due to configurational variations. The $\langle \Delta T/T \rangle$ is marked by pink squares. As $\delta\lambda \rightarrow 0$~nm, $\langle \Delta T/T \rangle \rightarrow 0$, and it rises steadily with $\delta\lambda$. The yellow band at $\delta\lambda = 0$ signifies the EP range. Clearly, $\Delta T/T$ is undefined here, since there is only one mode and one temporal profile; nonetheless, a datapoint is intentionally marked to signify the completeness of the trend. Figure \ref{fig:lorentz}(c) shows the peak intensity of the modes. The intensity rises as the modes begin to coalesce spectrally, in agreement with the typical behavior during an EP-approach. Inside the EP range (yellow band at $\delta\lambda = 0$), six modes were identified that lased at a very high intensity: up to $300\%$ enhancement in lasing intensity was observed at these points. Such high intensity is an indicator of enhanced functionality, a preliminary identifier of a possible EP. The inset shows the intensity distribution over the entire ensemble of modes, emphasizing the outlier nature of the high-intensity points. Finally, subplot (d) confirms the exceptional-point character of the modes using an analytical fit to the lineshape of one of the high-intensity peaks. It is known that the local density of states (LDOS) is modified at the exceptional point such that the resonant lineshape approaches the $n$-th power of a Lorentzian for an $n$-th-order EP. Consistent with this, the red solid line in (d) depicts the squared-Lorentzian profile $\propto \frac{\gamma^2}{4\pi[(\Delta\omega)^2 + \gamma^2]^2}$ that perfectly fits the experimental peak. In comparison, the blue line is the conventional Lorentzian fit, which visibly deviates from the observed data. The inset shows the same on a logarithmic Y-axis, emphasizing the fit by the squared Lorentzian. The modified LDOS at the EP leads to a maximal peak enhancement of 4 in passive systems, and can be even larger in the presence of gain. Although our system is random and every configuration has cavities of varying quality factors, the average behavior of $I_p$ in subplot (c) does exhibit instances of peak enhancement $>4$ compared to the coupled modes. The Supplementary Information provides further data regarding fits of other exceptional-point modes. Thus, these spectral lineshape diagnostics confirm that the lasing modes shown here occurred over exceptional points in non-Hermitian Anderson-localizing systems.
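
The lineshape test can be reproduced along the lines of the sketch below, where \texttt{wl} and \texttt{intensity} denote the measured spectrum around the peak (assumed arrays) and the initial guesses are illustrative:

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(w, a, w0, g):
    return a * g**2 / ((w - w0)**2 + g**2)

def sq_lorentzian(w, a, w0, g):
    return a * g**4 / ((w - w0)**2 + g**2)**2

# wl, intensity: measured spectrum around the peak (assumed).
# A clearly smaller residual for the squared Lorentzian is the
# EP signature used in subplot (d).
for shape in (lorentzian, sq_lorentzian):
    p0 = (intensity.max(), wl[np.argmax(intensity)], 0.1)
    popt, _ = curve_fit(shape, wl, intensity, p0=p0)
    res = np.sum((intensity - shape(wl, *popt)) ** 2)
    print(shape.__name__, "residual:", res)
\end{verbatim}
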
In summary, we have identified exceptional points in one-dimensional Anderson-localizing systems, and have extracted lasing over the corresponding modes. The methodology adopted here deviates significantly from the tuning-parameter mechanism that is conventionally utilized in photonics. Our technique preserves the statistical variations in disorder that are sacrosanct in the domain of mesoscopic optics. We sample thousands of disorder configurations, and identify the quantum echo which is set up whenever coupled Anderson-localized modes are realized within the structures. The echo provides unambiguous signatures of coupling, and sets the coupled modes apart from random neighbouring modes in the spectrum. Our method realizes a large multitude of disorder configurations and simultaneously measures the spatial, spectral and temporal dynamics therein. The presence of a quantum echo in the temporal signal allows us to track configurations with spectral splitting. We find that reduced splitting is accompanied by enhanced output intensity, which allows us to identify individual modes that provide exceedingly large outcoupled intensity. Spectral shape analysis of these intense modes reveals the characteristic squared-Lorentzian shapes originating from a second-order exceptional-point degeneracy, as against Lorentzian peaks otherwise. The excess intensity in these modes testifies to the enhanced functionality afforded by the exceptional points.

\section*{Methods}

\subsection*{Sample preparation and system characterization}
The system comprises an array of amplifying micro-droplets made of Rhodamine-6G (at a concentration of 1.0~mM) dissolved in a mixture of methanol and ethylene glycol in equal proportion (50:50$\%$). Ethylene glycol (EG) is added to obtain good stability and a high index contrast (refractive index of the droplet, $n_{d}$ = 1.38) owing to its high viscosity and refractive index. Rh6G has a broadband gain profile in the range of 540$-$610~nm.

The array is generated using a vibrating orifice aerosol generator (VOAG). The liquid from a chamber (1000~ml liquid capacity) is pressurized using a dry-nitrogen cylinder and passed through the VOAG. The VOAG has a circular opening with a diameter of $10~\mu$m, which generates an inhomogeneous and unstable cylindrical jet of droplets. In order to make stable, spherical droplets, a piezoelectric gate is attached at the circular opening of the VOAG, and the gate is perturbed by applying a periodic electronic signal. Depending upon the amplitude and frequency of the signal, the size and spacing of the droplets can be varied in a controlled way, which allows us to generate an ensemble of monodisperse droplets that acts as a periodic array. Controlled deviations in the electronic signal perturb the microdroplets and form a disordered array. For this work, we generated a large ensemble of disordered arrays with a mean droplet diameter of 18.6~$\mu$m. The standard deviation in the diameters is 340.0~nm, which determines the strength of disorder in the array.

\subsection*{Coupled mode analysis of the binary system}

1. An eigenvalue solver based on the finite element method (FEM: COMSOL Multiphysics) was used to calculate the complex eigenfrequencies of the coupled ring resonator. Simulations were performed in 2D. The structural parameters of the coupled cavities are: ring diameter $D = 2.0~\mu$m, thickness $t = 0.2~\mu$m and separation $s = 0.4~\mu$m. The cavity resonance frequency for these parameters is $\sim$190.35~THz. The gain or loss of the cavities is provided in terms of their complex refractive indices, $n = n_{r}+ in_{im}$. The real part of the refractive index ($n_{r}$) for both cavities is set to 3.3. The separation between the cavities is fixed at $s$, thus fixing the coupling strength ($\kappa$) between the cavities. We provide a fixed amount of gain ($n_{im} = -0.01$) in the $1^{st}$ cavity. The simulation was then performed varying the $n_{im}$ of the second cavity from $-0.01$ to 0. During this tuning, the exceptional point was identified to occur at $n_{im}\sim-0.0056$.

2. Temporal analyses were done using the finite-difference time-domain method (Lumerical FDTD). The structural and material parameters are the same as in part 1 above. In this method, we use a Lorentz gain model with a Lorentz permittivity and a resonance frequency close to the cavity frequency of the ring resonator. The permittivity of the material is given by the relation
\begin{equation}
\epsilon(\omega) = \epsilon + \frac{\epsilon_{l}\,\omega_{0}^{2}}{\omega_{0}^{2}-2i\delta_{0}\omega-\omega^{2}}.
\end{equation}

The gain center frequency is set at $\omega_{0}$ (the Lorentz resonance), while the width is set by $\delta_{0}$. The strength of the gain (gain amplitude) is given by the permittivity $\epsilon_{l}$, whose sign determines whether the material has loss or gain.

The gain level ($n_{l} = \sqrt{\epsilon_{l}}$) in the $1^{st}$ cavity is kept fixed at $-0.01$. It was confirmed that this value of gain did not lead to any divergence due to an exponential growth of the field. The gain level in the second cavity is gradually reduced from this value and replaced with loss. Both cavities are uniformly excited using a set of identical dipole sources at the cavity resonance frequency. The time signals of the field and intensity are measured by placing field time monitors inside the cavities. Spectra are calculated using the standard Fourier transform method. A full apodization method is used to measure the frequency-resolved temporal evolution of the cavity modes: this technique applies a window function to the fields $E(t)$ before the Fourier transform, making it possible to calculate $E(\omega)$ from a portion of the time signal.

\bmhead{Data availability}
All data relating to the paper are available from the corresponding author upon reasonable request.

\bmhead{Code availability}
All the relevant computing codes used in this study are available from the corresponding author upon reasonable request.

\bmhead{Supplementary information}
A Supplementary Information document has been attached with the manuscript.

\bmhead{Author contributions}
S.~M. conceived the project. S.~M. and K.~J. designed the experimental setup. K.~J. performed the experiments and numerical simulations and analyzed the data. K.~J. and S.~M. wrote the manuscript. S.~M. supervised the whole project.

\bmhead{Competing interests}
The authors declare no competing interests.

\bmhead{Acknowledgments}
SM would like to acknowledge the Swarnajayanti Fellowship from the Department of Science and Technology, Government of India. We acknowledge funding from the Department of Atomic Energy, Government of India (12-R $\&$ D-TFR-5.02-0200).
","meta":{"redpajama_set_name":"RedPajamaArXiv"}}