\\section{Introduction}\nIn a combinatorial optimisation problem, such as MAX-CUT or the TRAVELLING-SALESPERSON \\cite{Pap81}, the aim is to evolve from some initial state to a final state that encodes the solution to the optimisation problem. One approach might be to evolve adiabatically, encoding each of the initial and final states as the ground state of some Hamiltonian and interpolating sufficiently slowly between them. In practice this approach is limited by the minimum spectral gap of the interpolating Hamiltonian \\cite{Ami09, Rei04}. This approach is known as adiabatic quantum optimisation (AQO) \\cite{Apo89,Far00, Kad98,Fin94,Alb18}.\n\nIn the absence of mature hardware, AQO has relied on the adiabatic principle as a guiding design tenet. In turn, AQO has led to Quantum Annealing (QA). Similar to AQO, QA attempts to interpolate continuously between the initial and final Hamiltonians. QA denotes a broader approach than AQO to finding (or approximating) the solution of the optimisation problem. This could include: thermal effects \\cite{Dic13}; intentionally operating diabatically \\cite{Cro21,Fry21,Som12}; or adding new terms to the Hamiltonian \\cite{Far11, Zen16, Far02, Fein22, Cro14, Cho21}. QA has gone on to help inspire a number of other approaches, such as: the gate-based Quantum Approximate Optimisation Algorithm (QAOA) \\cite{Far14}; continuous-time quantum walks (QW) for optimisation \\cite{Cal19,Ken20}; and combinations of different approaches \\cite{Cal21, Mor19}. In short, the adiabatic principle has been a successful design principle for developing new heuristic quantum algorithms for optimisation. 
\n\nAn alternative approach for evolving from an initial state to the final state might be via Hamiltonians for optimal state transfer \\cite{Bro06}. These are Hamiltonians that transfer the system from the initial state to the final state in the shortest possible time. In this paper we use these Hamiltonians as our underlying design principle. Such Hamiltonians typically require knowledge of the final state (i.e., the solution to the optimisation problem). In the absence of this information, we therefore focus on how the behaviour of these Hamiltonians might be approximated to find approximate solutions to optimisation problems. The result is a rapid continuous-time approach with a single variational parameter. \n\nThe structure of this paper is as follows: Sections \\ref{sec:QAframe}, \\ref{sec:hamdes}, \\ref{sec:met}, and \\ref{sec:prob} introduce the requisite background material and the Hamiltonians considered in the rest of the paper (Sec.\\ \\ref{sec:hamdes}). Sections \\ref{sec:H1}, \\ref{sec:QZ}, and \\ref{sec:lpa} provide analytical and numerical evidence for the performance of these Hamiltonians. Throughout the paper we adopt the convention $\\hbar=1$. The Pauli matrices are denoted by $X$, $Y$, and $Z$. The identity is denoted by $I$. The commutator is denoted by $[\\cdot,\\cdot]$. The simulations make use of the Python packages QuTiP \\cite{Joh12,Joh13} and Qiskit \\cite{Qiskit}.\n\n\\section{The QA-framework}\n\\label{sec:QAframe}\nBefore elaborating on our new approach, in this section we provide a very brief overview of the adiabatically influenced algorithms mentioned in the introduction. \n\nIn QA, the optimisation problem is typically encoded as finding the ground-state, $\\ket{\\psi_f}$, of an Ising Hamiltonian, $H_f$. The system is initialised in the ground-state, $\\ket{\\psi_i}$, of an easy-to-prepare Hamiltonian $H_i$. Usually, $H_i$ is taken to be the transverse-field Hamiltonian. 
The optimisation problem is solved by evolving the system to the ground-state of $H_f$. We refer to this set-up as the QA-framework. We utilise this encoding of the optimisation problem for our new approach. \n\nFor the adiabatically influenced approaches (AQO, QA, QAOA and QW) the system is subjected to the following Hamiltonian: \n\\begin{equation}\n\\label{eq:QAham}\n H(t)=A(t) H_i + B(t) H_f,\n\\end{equation}\nwhere $t\\in[0,T]$ denotes time. The design philosophy behind how the schedules $A(t)$ and $B(t)$ are chosen is what distinguishes these noisy intermediate-scale quantum (NISQ) \\cite{Pre18} approaches. For AQO, the schedules are chosen to interpolate adiabatically between the two ground states \\cite{Apo89,Far00, Kad98,Fin94}. In QA, this restriction is loosened: the schedules are typically continuous, with $A(t)$ monotonically decreasing and $B(t)$ monotonically increasing \\cite{Cro21,Hau20}. In QW, $A(t)$ and $B(t)$ are taken to be constants over the whole evolution \\cite{Cal19,Ken20}. \n\nQAOA \\cite{Far14} is a gate-based design philosophy for determining the schedules. In QAOA either $A(t)=1$ and $B(t)=0$, or $A(t)=0$ and $B(t)=1$. The switching parameter, $p$, controls the number of times the schedules alternate between the two parameter settings. The duration between switches is either determined prior to the evolution by a classical computer or by using a classical variational outer-loop attempting to minimise $\\langle H_f \\rangle$ by varying the $2p$ free parameters. QAOA again relies on the adiabatic principle to provide a guarantee of finding the ground-state in the limit of infinite $p$. However, far from this limit, the variational method allows QAOA to exploit non-adiabatic evolution \\cite{Zho20}. QAOA has been further generalised, retaining its switching framework, to the `Quantum Alternating Operator Ansatz' \\cite{Had19}. It has also become a popular choice for benchmarking the performance of quantum hardware \\cite{Har21, Gra22, Ott17}. 
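The alternating QAOA schedule described above can be sketched numerically. The following minimal Python sketch (illustrative only, not the implementation used in this paper) builds the $p=1$ QAOA state for MAX-CUT on a single edge; the angles $\gamma=\pi/4$, $\beta=\pi/8$ are an assumed choice that happens to solve this two-qubit instance exactly.

```python
import numpy as np

# Minimal p=1 QAOA sketch for MAX-CUT on a single edge (two qubits).
# H_f = Z Z (problem Hamiltonian), H_i = -(X_1 + X_2) (transverse-field mixer).
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
I2 = np.eye(2)

H_f = np.kron(Z, Z)
H_i = -(np.kron(X, I2) + np.kron(I2, X))

def U(H, t):
    """e^{-i t H} for a Hermitian H, via its eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * t * w)) @ V.conj().T

def qaoa_state(betas, gammas):
    """Alternate e^{-i gamma H_f} and e^{-i beta H_i} layers on |+>|+>."""
    psi = np.ones(4, dtype=complex) / 2.0   # |+>|+>, the ground state of H_i
    for beta, gamma in zip(betas, gammas):
        psi = U(H_i, beta) @ (U(H_f, gamma) @ psi)
    return psi

psi = qaoa_state(betas=[np.pi / 8], gammas=[np.pi / 4])
energy = float(np.real(psi.conj() @ H_f @ psi))   # <H_f> = -1 at these angles
```

Here the single $\gamma$ layer acts first, matching the right-to-left action of the product ansatz; a classical outer loop would vary the $2p$ angles to minimise `energy`.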
\n\n\nRecently, Brady et al.\\ applied optimal control theory to help design and investigate schedules \\cite{Bra21_1,Bra21,fei22}. They demonstrated, for $0\\leq A(t) \\leq1$ and $B(t)=1-A(t)$, that the optimal schedule starts and finishes with a `bang' (i.e., one of the controls takes its maximum value) \\cite{Bra21_1}. This claim was further investigated in \\cite{Venuti_2021} and applied to open quantum systems.\n\nClearly there are many design philosophies for tackling problems within the QA-framework. Although some have been inspired by the adiabatic theorem, their practical implementation might bear little resemblance to this original idea. \n\nThe next section provides a brief introduction to Hamiltonians for optimal state-transfer, as well as the motivation for the choice of Hamiltonians used in this paper. The bulk of this paper will examine the performance of this approach on a variety of optimisation problems.\n\n\n\\section{Hamiltonian design}\n\\label{sec:hamdes}\n\nThe aim of optimal state-transfer is to find a Hamiltonian that transfers the system from an initial state (i.e., $\\ket{\\psi_i}$) to a known final state (i.e., $\\ket{\\psi_f}$) in the shortest possible time. We shall refer to this Hamiltonian as the optimal Hamiltonian (although it is by no means unique).\n\nOne notable approach to finding the optimal Hamiltonian comes from Nielsen et al.\\ \\cite{Nie05,Nie06,Dow07}, who investigated the use of differential geometry, at the level of unitaries, to find geodesics connecting the identity to the desired unitary. The length of the geodesic was linked to the computational complexity of the problem \\cite{Nie06}. A second approach comes from Carlini et al. Inspired by the brachistochrone problem, they developed a variational approach at both the level of state-vectors \\cite{Car06} and unitaries \\cite{Car07}. The quantum brachistochrone problem has generated a considerable amount of literature and interest \\cite{Rez09,Wan15,Wak20,Wan21, San21, Yan22}. 
\n\nBoth approaches allow for constraints to be imposed on the Hamiltonian. Both concluded that if the only constraint is on the total energy of the Hamiltonian, the optimal Hamiltonian is constant in time. In the rest of this section we outline the geometric argument put forward by Brody et al.\\ \\cite{Bro06} to find the optimal Hamiltonian in this case. The reader is invited to refer to the original work for the explicit details. \n\nThe optimal Hamiltonian will generate evolution in a straight line in the (complex-projective) space in which the states live. Intuitively, a line in this space between $\\ket{\\psi_i}$ and $\\ket{\\psi_f}$ consists of superpositions of the two states. Hence, the line in the complex-projective space can be represented on the Bloch sphere.\n\nIf, without loss of generality, $\\ket{\\psi_i}$ and $\\ket{\\psi_f}$ are placed in the traditional $z-x$ plane of the Bloch sphere, it is clear that the optimal Hamiltonian generates rotations in this plane. The optimal Hamiltonian is then (reminiscent of the cross-product):\n\\begin{equation}\n H_{opt}=-i \\left(\\ket{\\psi_i}\\bra{\\psi_f}-\\ket{\\psi_f}\\bra{\\psi_i}\\right).\n \\label{eq:optHam}\n\\end{equation}\nThis can be scaled to meet the condition on the energy of the Hamiltonian. It then remains to calculate the time required to transfer between the two states. This will depend on how far apart the states are and how fast the evolution is. This is encapsulated in the Anandan-Aharonov relationship \\cite{Ana90}:\n\\begin{equation}\n \\frac{ds}{dt}=2\\delta E(t).\n\\end{equation}\n\nThe left-hand-side denotes the speed of the state, $\\ket{\\psi(t)}$, where $ds$ is the infinitesimal distance between $\\ket{\\psi\\left(t+dt\\right)}$ and $\\ket{\\psi\\left(t\\right)}$ \\footnote{The distance is measured by the Fubini-study metric, $ds^2=4\\left(1-\\abs{\\bra{\\psi\\left(t\\right)}\\ket{\\psi\\left(t+dt\\right)}}^2\\right)$, on the complex-projective space.}. 
Under evolution by the Schr\\\"odinger equation, with Hamiltonian $H$, the instantaneous speed of the evolution is given by the uncertainty in the energy, $\\delta E(t)^2=\\bra{\\psi(t)}H(t)^2\\ket{\\psi(t)}-\\bra{\\psi(t)}H(t)\\ket{\\psi(t)}^2$.\n\nSince the optimal Hamiltonian is constant in time, $\\delta E$ can be evaluated using the initial state. Therefore, the time of evolution is:\n\\begin{equation}\nT=\\frac{\\arccos{\\abs{\\bra{\\psi_f}\\ket{\\psi_i}}}}{ \\sqrt{1-\\abs{\\bra{\\psi_f}\\ket{\\psi_i}}^2}}.\n\\end{equation}\n\n In summary, $e^{-i H_{opt} T}\\ket{\\psi_i}$ generates the state $\\ket{\\psi_f}$. The goal of the rest of this paper is to harness some of the physics behind this expression for computation. To this end we primarily focus on Hamiltonians which are constant in time. Appendix \\ref{sec:opHamQa} contains more details on what optimal Hamiltonians might look like within the QA-framework. \n \n In the style of a variational quantum eigensolver (VQE) \\cite{Per14,Fed21}, we allow $T$ to be a variational parameter that needs to be optimised in our new approach. In this paper we select the $T$ that minimises the final value of $\\langle H_f \\rangle$. There is scope to extend this to different metrics \\cite{Li20,Bar20}. As $T$ is a variational parameter the Hamiltonian is only important up to some constant factor. Rewriting Eq.\\ \\ref{eq:optHam} up to some constant gives:\n\\begin{equation}\n \\label{eq:optHamCom}\n H_{opt}\\propto\\frac{1}{2i}\\left[\\ket{\\psi_i}\\bra{\\psi_i},\\ket{\\psi_f}\\bra{\\psi_f}\\right],\n\\end{equation}\nassuming $\\ket{\\psi_i}$ and $\\ket{\\psi_f}$ have a non-zero overlap (this is a given in the standard QA-framework). This equation (Eq.\\ \\ref{eq:optHamCom}) provides the starting point for all the Hamiltonians considered in this paper. \n\nThe optimal Hamiltonian (i.e., Eq.\\ \\ref{eq:optHamCom}) requires knowledge of the final state. 
In practice, when attempting to solve an optimisation problem, one doesn't have direct access to $\\ket{\\psi_f}$. Instead, one has easy access to $H_i$ and $H_f$. Therefore, we make the pragmatic substitutions $\\ket{\\psi_i}\\bra{\\psi_i} \\rightarrow H_i$ and $\\ket{\\psi_f}\\bra{\\psi_f} \\rightarrow H_f$ in Eq.\\ \\ref{eq:optHamCom}:\n\n\\begin{align}\nH_{opt}\\propto\\frac{1}{2i}&\\left[\\ket{\\psi_i}\\bra{\\psi_i},\\ket{\\psi_f}\\bra{\\psi_f}\\right] \\nonumber \\\\\n & \\hspace{0.6cm}\\downarrow\\hspace{1.3cm}\\downarrow \\nonumber\\\\\n H_1=\\frac{1}{2i}&\\left[\\hspace{0.4cm}H_i\\hspace{0.4cm},\\hspace{0.6cm}H_f\\hspace{0.4cm}\\right].\n\\end{align}\nThis Hamiltonian is the most amenable to NISQ implementation of all the Hamiltonians considered in this paper; therefore, the bulk of the paper is devoted to demonstrating its performance. The results can be seen in Sec.\\ \\ref{sec:H1}.\n\nThe substitutions $\\ket{\\psi_i}\\bra{\\psi_i}\\rightarrow H_i$ and $\\ket{\\psi_f}\\bra{\\psi_f}\\rightarrow H_f$ introduce errors, such that the evolution under $H_1$ no longer closely follows the evolution under $H_{opt}$. 
In Sec.\\ \\ref{sec:QZ} we try to correct for this error by adding a new term to the Hamiltonian\n\\begin{equation}H_{1,improved}=H_1+H_{QZ}.\n\\end{equation}\nThe proposed form of $H_{QZ}$ is motivated by the quantum Zermelo problem \\cite{Bro15,Brody_2015,Rus14,Rus15}.\n\nFinally, in Sec.\\ \\ref{sec:lpa} we exploit our knowledge of the initial state and propose the substitution $\\ket{\\psi_f}\\bra{\\psi_f}\\rightarrow f(H_f)$, where $f(\\cdot)$ is some real function:\n\n\\begin{align}\n H_{opt}\\propto\\frac{1}{2i}&\\left[\\ket{\\psi_i}\\bra{\\psi_i},\\ket{\\psi_f}\\bra{\\psi_f}\\right] \\nonumber\\\\\n & \\hspace{2.1cm}\\downarrow \\nonumber\\\\\n H_{\\psi_i}=\\frac{1}{2i}&\\left[\\ket{\\psi_i}\\bra{\\psi_i},\\hspace{0.2cm}f\\left(H_f\\right)\\hspace{0.1cm}\\right].\n\\end{align}\n\n\n\\section{Metrics used to assess the performance}\n\\label{sec:met}\nOnce a problem and algorithm have been determined, it remains to decide on the metric (or metrics) by which to assess the performance. A common choice is the ground-state probability, $P_{gs}$. This is a reasonable measure for an exact solver, but it fails to capture the performance of an approximate solver (a solver that finds \\textit{good enough} solutions). A common measure for approximate solvers, such as QAOA, is the approximation ratio. The approximation ratio is a measure on the final distribution produced by the approach. Here we define the approximation ratio to be $\\langle H_f \\rangle\/E_{min}$, where $E_{min}$ is the energy of the ground-state solution and the expectation is with respect to the final state. If the approach finds the ground-state exactly, then $\\langle H_f \\rangle\/E_{min}=1$. Random guessing (for all the problem Hamiltonians considered in Sec.\\ \\ref{sec:prob}) has an approximation ratio of $0$. If an algorithm for a specific problem is cited to have an approximation ratio with no explicit reference to time, this refers to an optimised approximation ratio. 
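As a concrete illustration of these metrics, the following Python sketch (an illustrative helper of our own, not code from the paper) computes the approximation ratio $\langle H_f\rangle/E_{min}$ and the ground-state probability for a diagonal problem Hamiltonian; the uniform superposition recovers the random-guessing values quoted above.

```python
import numpy as np

def metrics(psi, H_f_diag):
    """Approximation ratio <H_f>/E_min and ground-state probability for a
    state vector psi, given the diagonal of an Ising problem Hamiltonian."""
    probs = np.abs(psi) ** 2
    energy = float(probs @ H_f_diag)        # <H_f> in the final state
    E_min = float(np.min(H_f_diag))
    gs_prob = float(np.sum(probs[H_f_diag == E_min]))
    return energy / E_min, gs_prob          # ratio is 1 for the exact ground state

# Two-qubit example: H_f = Z_1 Z_2 has diagonal (+1, -1, -1, +1).
H_diag = np.array([1.0, -1.0, -1.0, 1.0])
uniform = np.ones(4) / 2.0                  # random guessing
ratio, gs_prob = metrics(uniform, H_diag)   # ratio = 0.0, gs_prob = 0.5
```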
\n\nIn Sec.\\ \\ref{sec:H1Num}, we consider the width of the final energy distribution. Wider distributions may mean the algorithm is harder to optimise in practice. We use $\\sigma=\\sqrt{\\langle H_f^2\\rangle-\\langle H_f \\rangle^2}\/E_{min}$ to measure the width of the distribution.\n\nOf crucial interest for NISQ implementations is the duration of each run. For continuous-time approaches this is simply the time of each run. We would like to compare this approach to QAOA, which has the following ansatz:\n\\begin{equation}\n \\label{eq:QAOA}\n \\ket{\\psi_{QAOA}}=\\prod_{k=1}^p \\left(e^{-i \\beta_k H_i}e^{-i \\gamma_k H_f}\\right) \\ket{+},\n\\end{equation}\nwhere the $\\beta_k$s and $\\gamma_k$s are the variational parameters. We take the time of a QAOA run to be the sum of the variational parameters. In Sec.\\ \\ref{sec:QAOA} we compare the optimal times of QAOA $p=1$ and $H_1$, the optimal time being the time that maximises the approximation ratio. To make a fair comparison between the two approaches with different problem sizes, we fix the energy of the Hamiltonians in Sec.\\ \\ref{sec:QAOA} to be: \n\\begin{equation}\n \\label{eq:norm}\n \\frac{1}{2^n}\\Tr{H_*^2}=n,\n\\end{equation}\nfor $*=i,f,1$, where $n$ is the number of qubits.\n\nThese are by no means all possible metrics to assess the performance of an algorithm. For example, time-to-solution is another popular choice of metric \\cite{Alb18,Kadowaki_2022}.
With this caveat in mind, this paper makes use of two canonical optimisation problems: MAX-CUT and a Sherrington-Kirkpatrick-inspired problem. This section briefly outlines these problems. Readers familiar with these problems may skip this section.\n\n\\subsection{MAX-CUT}\n\nMAX-CUT seeks to find the maximum cut of a graph, $G=(V,E)$. A cut separates the nodes of the graph into two disjoint sets. The value of the cut is equal to the number of edges between the two disjoint sets. In general, MAX-CUT is an NP-hard problem \\cite{Gar76}. Indeed, even finding very good approximations to MAX-CUT is a computationally hard problem \\cite{Pap88}.\n\n\nThe corresponding Ising formulation of this problem is:\n\\begin{equation}\n H_f=\\sum_{(i,j) \\in E} Z_i Z_j, \n\\end{equation}\nup to a constant offset (i.e.\\ a term proportional to the identity) and a multiplicative factor. In this paper we explore MAX-CUT on a range of graphs.\n\n\\subsubsection{Two-regular graphs}\n\nMAX-CUT on two-regular graphs (also known as the Ring of Disagrees or the anti-ferromagnetic ring) is a well-studied problem in the context of QA and QAOA \\cite{Far00,Wan18,mbe19,Far14}. The problem Hamiltonian\n\\begin{equation}\n H_f=\\sum_{i=1}^n Z_i Z_{i+1},\n\\end{equation}\nwhere index $n+1$ is identified with index $1$ (periodic boundary conditions), consists of nearest-neighbour terms only. The performance of QA and QAOA on this problem has been understood by applying the Jordan-Wigner transformation to map the problem onto free fermions \\cite{Far00,Wan18}. In Sec.\\ \\ref{sec:RoD} we follow the approach laid out by \\cite{Wan18} to apply this technique to $H_1$. A further useful tutorial for tackling the Ising chain with the Jordan-Wigner transformation can be found in \\cite{Mbe20}.\n\nAlternatively, provided $p$ is sufficiently small, the performance of QAOA can be understood in terms of locality \\cite{Far14}. 
Due to the structure of the ansatz in QAOA (i.e.\\ Eq.\\ \\ref{eq:QAOA}), to find the expectation of a term in $H_f$, such as $Z_iZ_{i+1}$, it is only necessary to consider a subgraph. This subgraph consists of all nodes connected by no more than $p$ edges to a node in the support of the expectation value being calculated. For two-regular graphs this is a chain consisting of $2p+2$ nodes. Provided this subgraph is smaller than the problem graph (i.e.\\ the two-regular graph has more than $2p+2$ nodes), QAOA is operating locally.\n\nFor a given $p$, all the subgraphs are identical for a two-regular graph. Hence, the performance of QAOA for this problem depends only on its performance on this subgraph. As a direct consequence of the locality, the approximation ratio of QAOA will not change as the size of the two-regular graph is scaled. By optimising over this subgraph with a classical resource, it is therefore possible to find the optimal time and approximation ratio of QAOA for this problem.\n\n\n\\subsubsection{Three-regular graphs}\n\\label{sec:prob3reg}\nMAX-CUT on three-regular graphs was considered in the original QAOA paper by Farhi et al.\\ \\cite{Far14}. The local nature of QAOA allowed them to calculate explicit bounds on the performance of their algorithm. To this end, the graph was broken down into subgraphs to measure local expectation values (i.e., $\\langle Z_iZ_j\\rangle$). For QAOA $p=1$, there are three distinct subgraphs. By simulating QAOA for the subgraphs and bounding the proportion of subgraphs in the problem, they calculated a lower bound on the performance. The small number of relevant subgraphs for three-regular graphs makes this approach particularly amenable. They found that QAOA $p=1$ will produce a distribution whose average will correspond to at least $0.6924$ times the best cut. This lower bound is saturated by triangle-free graphs. These graphs consist of a single subgraph. 
Therefore, the performance of QAOA $p=1$ is largely dependent on the proportion of edges in this problem that belong to this subgraph. We refer to the performance of a local approach being limited by its performance on one subgraph (or possibly a small handful of subgraphs) as being dominated by this subgraph.\n\nThis method was later extended by Braida et al.\\ \\cite{Bra22}, who applied it to QA by using an approach inspired by Lieb-Robinson bounds. By operating QA for short times, it can be treated as a local algorithm. By then simulating local subgraphs (as in the QAOA case) and calculating the bounds outlined in the paper, the worst-case performance for each subgraph can be calculated. This approach does not necessarily find a tight bound, but it is nonetheless impressive, as bounds in continuous-time quantum computation are a rarity. They show that QA finds at least $0.5933$ times the best cut. The authors conjecture that $0.6963$ times the best cut might be a tighter bound on the performance but are unable to rigorously show that this is the case.\n\nIn Sec.\\ \\ref{sec:3reg} we make use of the approach formulated by Braida et al.\\ to prove a lower bound for $H_1$ on this problem.\n\n\\subsubsection{Random graph instances}\n\nTo generalise the MAX-CUT instances discussed above, we also consider MAX-CUT on randomly generated graphs. These graphs do not have fixed degree. The graphs are generated by selecting an edge between any two nodes with probability $p$. MAX-CUT undergoes a computational phase transition for random graphs at $p=1\/2$ (harder problems appear for $p>1\/2$) \\cite{Gam18,Cop03,Pol22}. We set $p=2\/3$ for this paper (note that this is no guarantee of hardness for the problem instances considered). \n\n\\subsection{Sherrington-Kirkpatrick inspired model}\n\nIn a MAX-CUT problem, all the couplers in the graph are set to the same value. Here we introduce a second problem, the Sherrington-Kirkpatrick model (SKM), where this is not the case. 
The problem is to find the ground-state of \n\\begin{equation}\n H_f=\\sum_{i<j} J_{i,j} Z_i Z_j, \n\\end{equation}\nwhere the $J_{i,j}$'s are randomly selected from a normal distribution with mean 0 and variance 1 \\cite{She75}. Each qubit is coupled to every other qubit. \n\nTo further distinguish the SKM from the MAX-CUT problems, we introduce bias terms to the Hamiltonian,\n\\begin{equation}\n H_f=\\sum_{i<j} J_{i,j} Z_i Z_j+\\sum_{i=1}^n h_i Z_i,\n\\end{equation}\nwhere the $h_{i}$'s are also randomly selected from a normal distribution with mean 0 and variance 1. Here we use the SKM to give an indicative idea of the performance of the proposed Hamiltonians on a wider range of problem instances, rather than just MAX-CUT.\n\n\nThe rest of the paper will examine the performance of each of the Hamiltonians introduced in Sec.\\ \\ref{sec:hamdes}. Each Hamiltonian will be applied to one or more of the problems outlined in Sec.\\ \\ref{sec:prob}. The performance will be optimised to give the best approximation ratio for each problem instance. The discussion of the performance will utilise the metrics outlined in Sec.\\ \\ref{sec:met}. \n\n\n\\section{Taking the commutator between the initial and final Hamiltonian}\n\\label{sec:H1}\nIn Sec.\\ \\ref{sec:hamdes} we motivated the Hamiltonian\n\\begin{equation}\n \\label{eq:sqham}\n H_1=\\frac{1}{2i}\\left[H_i,H_f\\right]\n\\end{equation}\nby substituting out the projectors in Eq.\\ \\ref{eq:optHamCom} for easily accessible Hamiltonians. In this section we explore the effectiveness of these substitutions. We begin by demonstrating that Eq.\\ \\ref{eq:sqham} generates the optimal rotation for a single qubit (Sec.\\ \\ref{sec:singleQ}). In Sec.\\ \\ref{sec:SERand} we show that $H_1$ has the potential to outperform random guessing within the QA-framework. 
The rest of the section analyses the performance of $H_1$ on the problems outlined in Sec.\\ \\ref{sec:prob}.\n\n\n\\subsection{The optimal approach for a single qubit}\n\\label{sec:singleQ}\n\nHere we outline a simple geometric argument which shows that $H_1$ generates the optimal rotation for a single qubit (hence the name $H_1$). The eigenstates of $H_i$ and $H_f$ can be represented as points on the surface of the Bloch sphere, see Fig.\\ \\ref{fig:bloch}. Since these points lie in a plane, the aim is to write down a Hamiltonian that generates rotation in this plane. By writing the Hamiltonians in the Pauli basis, it is clear the Hamiltonian that generates the correct rotation is:\n\\begin{equation*}\n H_1=\\frac{1}{2i}\\left[H_i,H_f\\right].\n\\end{equation*}\nFor the full details see Appendix \\ref{app:sq}. Again, by using the Anandan-Aharonov relationship:\n\\begin{equation}\n \\frac{d\\theta}{dt}=2\\delta E,\n\\end{equation}\nwhere $\\delta E$ is the uncertainty in the energy and $\\theta$ is the distance between the desired states (Fig.\\ \\ref{fig:bloch}), we can calculate the time required to transfer between the two ground-states. 
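This single-qubit picture can be checked numerically. The sketch below is illustrative, with $H_i=-X$ and $H_f=-Z$ as assumed example Hamiltonians: it builds $H_1=[H_i,H_f]/2i$, obtains the transfer time from the Anandan-Aharonov relationship, and confirms that the ground state of $H_i$ is mapped exactly onto the ground state of $H_f$.

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

H_i = -X                                  # ground state |+>
H_f = -Z                                  # ground state |0>
H_1 = (H_i @ H_f - H_f @ H_i) / 2j        # H_1 = [H_i, H_f]/(2i) = -Y

psi_i = np.array([1.0, 1.0]) / np.sqrt(2.0)   # ground state of H_i
psi_f = np.array([1.0, 0.0])                  # ground state of H_f

# Anandan-Aharonov: ds/dt = 2*dE, with dE the energy uncertainty of H_1 in the
# initial state and s = 2*arccos|<psi_f|psi_i>| the Fubini-Study distance.
dE = np.sqrt(np.real(psi_i @ (H_1 @ H_1) @ psi_i - (psi_i @ H_1 @ psi_i) ** 2))
T = 2.0 * np.arccos(abs(psi_f @ psi_i)) / (2.0 * dE)   # pi/4 for this example

w, V = np.linalg.eigh(H_1)                # evolve e^{-i T H_1} via the eigenbasis
psi_T = (V * np.exp(-1j * T * w)) @ V.conj().T @ psi_i
fidelity = abs(psi_f.conj() @ psi_T) ** 2  # 1.0: perfect state transfer
```

Note that $T$ depends on the chosen normalisation of $H_1$; rescaling the Hamiltonian rescales the transfer time accordingly.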
\n\n\\begin{figure}\n\\begin{tikzpicture}[line cap=round, line join=round, >=Triangle]\n \\clip(-2.19,-2.49) rectangle (2.66,2.58);\n\n \\draw [shift={(0,0)}, lightgray, fill, fill opacity=0.1] (0,0) -- (-135.7:0.4) arc (-135.7:-41:0.4) -- cycle;\n \\draw(0,0) circle (2cm);\n \\draw [rotate around={0.:(0.,0.)},dash pattern=on 3pt off 3pt] (0,0) ellipse (2cm and 0.9cm);\n\n \\draw [->] (0,0) -- (0,2);\n \\draw [->] (0,0) -- (-0.81,-0.79);\n \\draw [->] (0,0) -- (0.90,-0.8);\n \\draw [->, style=dashed] (0,0) -- (0.81,0.79);\n \\draw [->, style=dashed] (0,0) -- (-0.9,0.8);\n\n \\draw[ ->] (0.5,1.3);\n \\draw (0,1.2) ellipse (0.5cm and 0.225cm);\n \\draw (-0.08,-0.3) node[anchor=north west] {$\\theta$};\n \n \\draw (-1.05,-0.75) node[anchor=north west] {$\\mathbf {\\hat{m}}$};\n \\draw (0.8,-0.7) node[anchor=north west] {$\\mathbf {\\hat{n}}$};\n \\draw (-0.5,2.6) node[anchor=north west] {$\\mathbf {\\hat{k}}=\\mathbf {\\hat{m}}\\cross\\mathbf {\\hat{n}}$};\n\\end{tikzpicture}\n\\caption{The geometric intuition behind finding the Hamiltonian for optimally transferring between the ground-states of $H_i$ and $H_f$ on the Bloch sphere. The vectors $\\pm \\hat{m}$ ($\\pm \\hat{n}$) are the eigenvectors of $H_i$ ($H_f$). The aim is to generate a rotation of $\\theta$ around $\\hat{k}$ to map $\\pm \\hat{m}$ to $\\pm \\hat{n}$. The handedness of the cross-product takes into account the direction.}\n\\label{fig:bloch}\n\\end{figure}\n\nHaving established $H_1$ as the optimal Hamiltonian for a single qubit, the next section investigates its performance on larger problems. \n\n\\subsection{Application to larger problems}\n\\subsubsection{Outperforming random guessing for short times}\n\\label{sec:SERand}\n\nIn this section we demonstrate that $H_1$ can always do better than random guessing within the QA-framework. 
Starting with the time-dependent Schr\\\"odinger equation:\n\\begin{equation*}\n \\ket{\\dot{\\psi}(t)}=-i H_1 \\ket{\\psi(t)},\n\\end{equation*}\nwe expand $\\ket{\\psi(t)}$ in terms of the eigenbasis of $H_f$, so $\\ket{\\psi(t)}=\\sum_k c_{k}(t) \\ket{k}$, where $\\ket{k}$ are the eigenvectors of $H_f$ with associated eigenvalues $E_k$. The eigenvalues are ordered such that $E_0\\leq E_1 \\leq E_2 \\dots$. Substituting this into the Schr\\\"odinger equation gives:\n\\begin{align*}\n \\sum_k \\dot{c}_k(t) \\ket{k}&=-\\frac{i}{2i}\\sum_k c_k(t) \\left(H_i H_f -H_f H_i \\right)\\ket{k}\\\\\n &=-\\frac{1}{2}\\sum_k c_k(t) \\left(E_k H_i-H_f H_i\\right)\\ket{k}.\n\\end{align*}\n\nActing with $\\bra{j}$ on each side, to find $\\dot{c}_j(t)$, gives:\n\\begin{equation}\n \\dot{c}_j(t)=-\\frac{1}{2}\\sum_k c_k(t) \\underbrace{\\left(E_k-E_j\\right)}_{\\text{\"Velocity\"}}\\overbrace{\\bra{j}H_i\\ket{k}}^{\\substack{\\text{How the basis}\\\\ \\text{states of }H_f \\\\ \\text{are connected}}}.\n \\label{eq:TDSEcoef}\n\\end{equation}\n\nIn the standard QA-framework $H_i=-\\sum_{k=1}^n X_k$, $c_k(0)=1\/\\sqrt{2^n}$ for all $k$, and the basis states $\\ket{k}$ correspond to computational basis states. Accordingly, $H_1$ connects computational basis states that are a Hamming distance of one apart.\n\nThe difference in energy of the computational basis states intuitively provides something akin to a velocity, with greater rates of change between states which are further apart in energy.\n\nFocusing on the derivative of the ground-state amplitude at $t=0$ we have:\n\\begin{equation}\n \\dot{c}_0(0)=-\\frac{1}{2}\\sum_k \\underbrace{c_k(0)}_{>0}\\underbrace{\\left(E_k-E_0\\right)}_{\\geq 0}\\underbrace{\\bra{0}-\\sum_j X_j\\ket{k}}_{=0 \\text{ or }-1},\n\\end{equation}\nso $\\dot{c}_0(0)\\geq0$, with equality if all states within a Hamming distance of one have the same energy as $\\ket{0}$. In this case, the above logic can be repeated for these states. 
Hence, at $t=0$, the ground-state amplitude is increasing, meaning $H_1$ can do better than random guessing by measuring at short times. This is evidence that $H_1$ is capturing something of the optimal Hamiltonian for short times. Indeed, for short times all the amplitudes flow from higher-energy states to lower-energy states. \n\nThe above logic can be extended to the case where $H_i$ is any stoquastic Hamiltonian in the computational basis \\cite{Bra10,Alb18} and $\\ket{\\psi_i}$ the corresponding ground-state. That is to say, we require $H_i$ to have non-positive off-diagonal elements in the computational basis (i.e.\\ stoquastic), and as a consequence we can write the ground-state of $H_i$ with real non-negative amplitudes \\cite{Alb18}. Consequently, for any stoquastic choice of $H_i$ the ground-state amplitude is increasing at $t=0$ and can do better than the initial value of $c_0$ at short times. \n\nWe could also take a generalised version of $H_1$:\n\\begin{equation}\n H_{1,gen}=\\frac{1}{2i}\\left[f\\left(H_i\\right),g\\left(H_f\\right)\\right],\n\\end{equation}\nwhere $f$ and $g$ are real functions. Eq.\\ \\ref{eq:TDSEcoef} becomes:\n\\begin{equation}\n \\dot{c}_j(t)=-\\frac{1}{2}\\sum_k c_k(t) \\left(g\\left(E_k\\right)-g\\left(E_j\\right)\\right)\\bra{j}f\\left(H_i\\right)\\ket{k}.\n\\end{equation}\n\nThe function acting on $H_i$ (i.e., $f(\\cdot)$) controls how the computational-basis states are connected, while the function acting on $H_f$ (i.e., $g(\\cdot)$) controls the velocity between computational basis states. If $f(\\cdot)$ is the identity and $H_i$ is stoquastic, then any monotonically increasing function for $g(\\cdot)$ (e.g., $H_f^3$, $H_f^5$, $\\exp{H_f}$, ...) will do better than $c_0(0)$ for short $t$. Taking $H_i$ to be the transverse-field Hamiltonian, this means doing better than random guessing. \n\nThe above analysis demonstrates that $H_1$ has potential for tackling generic problems within the QA-framework. 
The next sections apply $H_1$ to specific examples in an attempt to quantify the success of this approach. For the rest of this paper we take $H_i$ to be the transverse-field Hamiltonian.\n\n\\subsubsection{MAX-CUT on two-regular graphs}\n\\label{sec:RoD}\n\nHere we study the performance of $H_1$ on MAX-CUT with two-regular graphs. The explicit form of $H_1$ is then:\n\\begin{equation}\n H_1=\\sum_{j=1}^n \\left(Y_jZ_{j+1}+Z_jY_{j+1}\\right).\n\\end{equation}\nThis can be solved analytically by mapping the problem onto free fermions via the Jordan-Wigner transformation. Details can be found in Appendix \\ref{app:ferm}.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.48\\textwidth]{images\/RoDtd.png}\n \\caption{A time-domain plot of the ground-state probability (in pink) and approximation ratio (in blue) for $H_1$ applied to MAX-CUT on a 2-regular graph with 400 qubits. Random guessing corresponds to a ground-state probability of $2^{-399}\\approx 10^{-120}$. The dashed purple line shows the location of the optimal time, corresponding to the maximum in approximation ratio.}\n \\label{fig:RoDtd}\n\\end{figure}\n\nA time-domain plot of the approximation ratio and ground-state probability is shown in Fig.\\ \\ref{fig:RoDtd} for 400 qubits. The peak in approximation ratio corresponds to the optimal time. As expected from the previous section (Sec.\\ \\ref{sec:SERand}), the approximation ratio is increasing at $t=0$. There is a clear peak in ground-state probability at a time of $t\\approx 0.275$. This peak remains present for larger problem sizes too. The peak also occurs at a later time than the peak in approximation ratio. Further insight into this phenomenon may be found in Sec.\\ \\ref{sec:lpa}.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.48\\textwidth]{images\/RoDeC.png}\n \\caption{The approximation ratio for $H_1$ applied to MAX-CUT on two-regular graphs with an even number of qubits (solid line). 
The dashed lines show the performance of QAOA for this problem when $p < n\/2$ \cite{Far14,mbe19}. The corresponding optimal times can be found in Fig.\ \ref{fig:RoDts}. The approximation ratio for $H_1$ freezes out at $0.5819$, with a corresponding time of $0.2301$.}\n \label{fig:RodeC}\n\end{figure}\n\nThe key result of this section is shown in Fig.\ \ref{fig:RodeC}, which shows the optimal approximation ratio versus problem size for even numbers of qubits only. The corresponding plot for odd numbers of qubits can be found in Appendix \ref{app:ferm}. Notably, the approximation ratio saturates for large problem sizes, achieving an approximation ratio of $0.5819$, compared to $0.5$ for QAOA $p=1$. This is despite QAOA $p=1$ having two variational parameters, compared to the single variational parameter for $H_1$. This behaviour is suggestive of $H_1$ optimising locally, since its approximation ratio is largely independent of problem size.\n\nDespite MAX-CUT on two-regular graphs being an easy problem, the ground-state probability decays exponentially with problem size (Fig.\ \ref{fig:RoDegsp}). This is not necessarily a problem, as we present $H_1$ as an approximate approach only. Optimising for the best ground-state probability instead provides a small gain in performance but does not change the overall exponential scaling. The optimal times for both approximation ratio and ground-state probability can be found in Fig.\ \ref{fig:RoDts}. Both times freeze out at constant values for problem sizes greater than 10 qubits. \n\n\begin{figure}\n \centering\n \includegraphics[width=0.48\textwidth]{images\/RoDegsp.png}\n \caption{The ground-state probability for different problem sizes under the evolution of $H_1$. The blue line shows the ground-state probability for times that maximise the approximation ratio. The optimised ground-state probability is shown in pink. The dashed purple line shows the probability of randomly guessing the ground-state. 
Only even qubit numbers are plotted here.}\n \label{fig:RoDegsp}\n\end{figure}\n\n\n\n\begin{figure}\n \centering\n \includegraphics[width=0.48\textwidth]{images\/Rodts.png}\n \caption{The optimal times for MAX-CUT on two-regular graphs. The blue (pink) line shows the time that optimises the approximation ratio (ground-state probability). Only even numbers of qubits are plotted.}\n \label{fig:RoDts}\n\end{figure}\n\n\nWe have demonstrated that, for MAX-CUT on two-regular graphs at large problem sizes, $H_1$ can provide a better approximation ratio than QAOA $p=1$. We explore this comparison with QAOA $p=1$ further in Sec.\ \ref{sec:QAOA}. \n\n\n\n\n\n\n\subsubsection{Performance on MAX-CUT problems with three-regular graphs}\n\label{sec:3reg}\n\nIn Sec.\ \ref{sec:prob3reg} we outlined the approach taken by Braida et al.\ to prove bounds on QA on MAX-CUT with three-regular graphs. Here we apply this approach to $H_1$. For details of the method the reader is referred to \cite{Bra22}, and for explicit details of this computation to Appendix \ref{app:LRB}. We find that $H_1$ finds at least $0.6003$ times the best cut. Hence $H_1$ has a marginally better worst-case bound than QA (which finds 0.5933 times the best cut \cite{Bra22}), when this method is applied. Neither bound is necessarily tight, and indeed both are unlikely to be. Resorting to numerical simulation gives Fig.\ \ref{fig:MC3Reg}. Here we can see, for the random instances considered, that $H_1$ never does worse than the QAOA $p=1$ worst-case bound. Direct comparisons to QAOA $p=1$ can be found in Sec.\ \ref{sec:QAOA}. Fig.\ \ref{fig:MC3Reg} also shows that the approximation ratio of $H_1$ on three-regular graphs has little dependence on the problem size, again suggesting that it is optimising locally.\n\n\begin{figure}\n \centering\n \includegraphics[width=0.48\textwidth]{images\/3RegPer.png}\n \caption{The performance of $H_1$ on randomly generated instances of three-regular graphs. 
For each problem size, 100 instances were generated. After accounting for graph isomorphisms, the numbers of samples, in order of ascending problem size, were $[1,2,6,15,46,87,97]$. Disconnected graphs were allowed. The y-axis shows the average cut-value from sampling $H_1$. The final time has been numerically optimised to give the best approximation ratio.\n }\n \label{fig:MC3Reg}\n\end{figure}\n\n\subsubsection{Numerical simulations on random instances of MAX-CUT and Sherrington-Kirkpatrick problems}\n\label{sec:H1Num}\n\n\begin{figure*}\n \centering\n \begin{subfigure}[t]{0.3\textwidth}\n \centering\n \includegraphics[width=\textwidth]{images\/MCRandc.png}\n \caption{Approximation ratio for randomly generated MAX-CUT.}\n \label{fig:MCRandc}\n \end{subfigure}\n \hfill\n \begin{subfigure}[t]{0.3\textwidth}\n \centering\n \includegraphics[width=\textwidth]{images\/MCRandgsp_log.png}\n \caption{Ground-state probability for randomly generated MAX-CUT.}\n \label{fig:MCRandgsp}\n \end{subfigure}\n \hfill\n \begin{subfigure}[t]{0.3\textwidth}\n \centering\n \includegraphics[width=\textwidth]{images\/MCRAndVar.png}\n \caption{Width of the final distribution, $\sigma$, for randomly generated instances of MAX-CUT.}\n \label{fig:MCRandVar}\n \end{subfigure}\n \hfill\n \\\n \begin{subfigure}[t]{0.3\textwidth}\n \centering\n \includegraphics[width=\textwidth]{images\/SKc.png}\n \caption{Approximation ratio for the SKM.}\n \label{fig:SKc}\n \end{subfigure}\n \hfill\n \begin{subfigure}[t]{0.3\textwidth}\n \centering\n \includegraphics[width=\textwidth]{images\/SKgsp_log.png}\n \caption{Ground-state probability for the SKM.}\n \label{fig:SKgsp}\n \end{subfigure}\n \hfill\n \begin{subfigure}[t]{0.3\textwidth}\n \centering\n \includegraphics[width=\textwidth]{images\/SKVar.png}\n \caption{Width of the final distribution, $\sigma$, for the SKM.}\n \label{fig:SKVar}\n \end{subfigure}\n \caption{Performance of $H_1$ on 100 
randomly-generated instances of MAX-CUT and SKM.}\n \label{fig:NumPer}\n\end{figure*}\n\nIn the previous two sections we established the performance of $H_1$ on problems with a high degree of structure, allowing for more analytical investigation. In this section we explore the performance of $H_1$ numerically on MAX-CUT with random graphs and on the SKM. Consequently, we are restricted to exploring problem sizes that can be simulated classically.\n\nWe consider 100 randomly-generated instances for each problem size. The results can be seen in Fig.\ \ref{fig:NumPer}. For each instance the time has been numerically optimised to maximise the approximation ratio within the interval $[0,2\pi)$. From Fig.\ \ref{fig:NumPer} we draw some conclusions from the simulations, with the caveat that much larger problem sizes and\/or further analytic work would be required to fully substantiate the claims. On all the problems considered $H_1$ performed better than random guessing (which results in an approximation ratio of 0). \n\nFocusing first on MAX-CUT: there appears to be some evidence that the distribution of approximation ratios becomes narrower as the problem size is increased (Fig.\ \ref{fig:MCRandc}). In addition the approximation ratio tends to a constant value, independent of the problem size. From analysing the regular graphs, it is reasonable to assume that $H_1$ is optimising locally. Therefore, we would expect the performance of $H_1$ to depend on the subgraphs in the problem. If the performance is limited by one subgraph, or a certain combination of subgraphs, then that would explain the constant approximation ratio. 
As the problem size is increased, the chance of having an atypical combination of subgraphs is likely to decrease, resulting in a narrower distribution.\n \nThe ground-state probability shows a clear exponential dependence on problem size (Fig.\ \ref{fig:MCRandgsp}).\n %\nThe width of the final energy distribution is tending to a constant value around $0.3$ (Fig.\ \ref{fig:MCRandVar}), meaning that $H_1$ finds neither a computational basis state nor a superposition of degenerate computational basis states. \n\n\nMoving on to the SKM, the approximation ratio also appears to have little dependence on the problem size for more than 7 qubits, an indicator that the dynamics under $H_1$ is approximately local for these times. The ground-state probability also appears to decline exponentially (Fig.\ \ref{fig:SKgsp}). Finally, $\sigma$ is non-zero (Fig.\ \ref{fig:SKVar}), suggesting the final state is not a computational basis state. \n\nWe have explained some characteristics of the data in Fig.\ \ref{fig:NumPer} by treating $H_1$ as a local algorithm. This insight helps to motivate the next section on choosing good initial starting guesses for the time $T$.\n\n\subsubsection{Estimating the optimal time}\nPresenting the application of $H_1$ as a variational approach raises the question of how to find good initial guesses for the time, $T$, at which to measure the system. As previously noted, the optimal time corresponds to the maximum in the approximation ratio.\n\nFrom Sec.\ \ref{sec:SERand} we expect the first turning point in the approximation ratio after $t=0$ to be a local maximum. Consequently, we might expect $T$ to be small. Further to this, throughout the work so far we have seen evidence of $H_1$ behaving locally. This local behaviour will allow us to motivate good initial guesses for $H_1$. 
As $H_1$ is optimising locally, its performance does not depend on the graph as a whole, but only on subgraphs.\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.48\\textwidth]{images\/3RegOt.png}\n \\caption{The optimal time for the three-regular MAX-CUT instances considered in Fig.\\ \\ref{fig:MC3Reg}. The optimal time was found by dividing the interval $[0,2 \\pi] $ into 1000 time steps.}\n \\label{fig:3RegOt}\n\\end{figure}\n\n\n\\begin{figure}\n\\centering\n\\begin{tikzpicture}[scale=1.8, auto,swap]\n\n\\foreach \\pos\/\\name in { {(-1,0)\/0}, {(0,0)\/1}, {(1,0)\/2}, {(2,0)\/3}}\n \\node[vertex] (\\name) at \\pos {$\\name$};\n \n\n\\foreach \\source\/ \\dest in {0\/1, 1\/2, 2\/3}\n \\path[edge] (\\source) -- node {}(\\dest);\n\n\\end{tikzpicture}\n\\caption{A subgraph of two-regular graphs}\n\\label{fig:subgraph_RD}\n\\end{figure}\n\n\n\\begin{figure}\n\\centering\n\\begin{tikzpicture}[scale=1.8, auto,swap]\n\n\\foreach \\pos\/\\name in { {(-1,1)\/0}, {(-1,-1)\/1}, {(0,0)\/2}, {(2,0)\/3},{(3,1)\/4},{(3,-1)\/5}}\n \\node[vertex] (\\name) at \\pos {$\\name$};\n \n\n\\foreach \\source\/ \\dest in {0\/2, 1\/2, 2\/3,3\/4,3\/5}\n \\path[edge] (\\source) -- node {}(\\dest);\n\n\\end{tikzpicture}\n\\caption{A small subgraph contained in three-regular graphs which is prevalent in graphs that $H_1$ performs worse on.}\n\\label{fig:subgraph_3Reg}\n\\end{figure}\n\n\n\n\\begin{figure}\n \\centering\n \\begin{subfigure}[t]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/MCRandOt.png}\n \\caption{Optimal times for MAX-CUT on randomly generated graphs. The optimal time was found by dividing the interval $[0,2 \\pi] $ into 1000 time steps.}\n \\label{fig:MCRandOt}\n \\end{subfigure}\n \\\\\n \\begin{subfigure}[t]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/SKOt.png}\n \\caption{Optimal times for the SKM. 
The optimal time was found by dividing the interval $[0,2 \\pi] $ into 1000 time steps.}\n \\label{fig:SKOt}\n \\end{subfigure}\n\\caption{Optimal time for $H_1$ on 20 randomly generated instances of MAX-CUT and SK.}\n\\label{fig:NumTime}\n\\end{figure}\n\n\n\\begin{figure*}\n \\centering\n \\begin{subfigure}[t]{0.3\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/QAOA_Comp_3_reg.png}\n \\caption{Approximation ratio comparison for MAX-CUT on three-regular graphs.}\n \\label{fig:comp3regc}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[t]{0.3\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/QAOA_Comp_rand.png}\n \\caption{Approximation ratio comparison for MAX-CUT on randomly generated graphs.}\n \\label{fig:comprandc}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[t]{0.3\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/QAOA_Comp_sk.png}\n \\caption{Approximation ratio comparison for the SKM.}\n \\label{fig:compskc}\n \\end{subfigure}\n \\hfill\n \\\\\n \\begin{subfigure}[t]{0.3\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/QAOA_comp_time_3_reg.png}\n \\caption{Optimal time comparison for MAX-CUT on three-regular graphs.}\n \\label{fig:comp3regt}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[t]{0.3\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/QAOA_comp_time_rand.png}\n \\caption{Optimal time comparison for MAX-CUT on randomly generated graphs.}\n \\label{fig:comprandt}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[t]{0.3\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/QAOA_comp_time_sk.png}\n \\caption{Optimal time comparison for the SKM.}\n \\label{fig:compskt}\n \\end{subfigure}\n \\caption{Comparison of $H_1$ (y-axis on the above plots) with QAOA $p=1$ (x-axis on the above plots) for the problem instances considered in Sec.\\ \\ref{sec:3reg} and Sec.\\ \\ref{sec:H1Num}. 
The top line of figures compares approximation ratios. The bottom line of figures compares the optimal times (i.e.\ the times that maximise the approximation ratio) of the two approaches. }\n \label{fig:comph1QAOA}\n\end{figure*}\n\nThe optimal time for MAX-CUT on two-regular graphs was $T=0.23$. Under the assumption that $H_1$ is behaving locally, we can estimate the optimal times by considering a smaller subgraph. The subgraph in question is shown in Fig.\ \ref{fig:subgraph_RD}. By numerically optimising $\langle Z_1Z_2 \rangle$ for this subgraph with $H_f=Z_0Z_1+Z_1Z_2+Z_2Z_3$, we can find good estimates for the optimal $T$. Optimising this subgraph, within the interval $T\in[0,1)$, gives $T=0.22$. This estimate matches the optimal time well. Choosing a larger subgraph will give a better estimate of the time.\n\nFig.\ \ref{fig:3RegOt} shows the optimal time for the larger instances of three-regular graphs considered in Fig.\ \ref{fig:MC3Reg}. The range of optimal times varied very little between problem instances and problem sizes, centred around $T=0.176$. As with the two-regular case, we can examine subgraphs. In this case we consider the subgraph shown in Fig.\ \ref{fig:subgraph_3Reg}. This is the subgraph that saturates the Lieb-Robinson bound. Numerically optimising $\langle Z_2Z_3 \rangle$ for this subgraph with $H_f=Z_0Z_2+Z_1Z_2+Z_2Z_3+Z_3Z_4+Z_3Z_5$ gives a time of $T=0.19$. This again is a good estimate of the optimal time.\n\nFor problems with well-understood local structure, such as regular graphs, we have shown that we can exploit this knowledge to provide reasonable estimates of the optimal times. These subgraphs are also of the same size as those used in finding the optimal time in QAOA $p=1$ \cite{Far14}.\n\nFor the MAX-CUT and SKM problems in Sec.\ \ref{sec:H1Num}, the optimal times can be found in Fig.\ \ref{fig:NumTime}. It appears that the optimal time tends to a constant value (or a small range of values), with $T<1$. 
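The subgraph recipe for estimating $T$ is cheap to reproduce. The following is a minimal sketch (assuming the conventions $H_i=-\sum_j X_j$ and $H_1=\frac{1}{2i}\left[H_i,H_f\right]$, which reproduce the $\sum_j(Y_jZ_{j+1}+Z_jY_{j+1})$ form of $H_1$) that evolves the four-qubit path subgraph of Fig.\ \ref{fig:subgraph_RD} and scans $T\in[0,1)$ for the time minimising $\langle Z_1Z_2 \rangle$:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op(paulis, n=4):
    return reduce(np.kron, [paulis.get(j, I2) for j in range(n)])

# The path subgraph 0-1-2-3: H_f = Z0Z1 + Z1Z2 + Z2Z3.
H_f = op({0: Z, 1: Z}) + op({1: Z, 2: Z}) + op({2: Z, 3: Z})
H_i = -sum(op({j: X}) for j in range(4))
H_1 = (H_i @ H_f - H_f @ H_i) / 2j          # assumed convention: H_1 = [H_i, H_f] / 2i

psi0 = np.full(16, 0.25, dtype=complex)     # |+>^4, the ground state of H_i
ZZ_mid = op({1: Z, 2: Z})                   # the central-edge observable <Z_1 Z_2>

evals, evecs = np.linalg.eigh(H_1)          # diagonalise once, then evolve cheaply
ts = np.linspace(0, 1, 1000, endpoint=False)
expect = []
for t in ts:
    psi_t = evecs @ (np.exp(-1j * evals * t) * (evecs.conj().T @ psi0))
    expect.append((psi_t.conj() @ ZZ_mid @ psi_t).real)

t_opt = ts[int(np.argmin(expect))]          # minimising <Z1Z2> favours cutting the central edge
```

Under these assumptions the minimiser should land near the $T=0.22$ quoted above; the three-regular subgraph of Fig.\ \ref{fig:subgraph_3Reg} can be handled identically by swapping in its $H_f$ and optimising $\langle Z_2Z_3 \rangle$.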
The optimal times are clustered together, suggesting good optimal times might be transferable between problem instances. This approach is common within the QAOA literature \cite{Gal21}.\n\nSo far we have explored the performance of $H_1$. We have demonstrated that $H_1$ can provide a better approximation ratio than QAOA $p=1$ for MAX-CUT on two-regular graphs. The intuition gained by studying QAOA $p=1$ has been transferable to the understanding of $H_1$. In the final part of this section we make some direct comparisons between QAOA $p=1$ and $H_1$.\n\n\n\n\subsection{Direct numerical comparisons to QAOA $p=1$}\n\label{sec:QAOA}\n\nWe have established in the previous sections that $H_1$ operates in a local fashion. Calculating the optimal time for subgraphs of the same size as those involved in QAOA $p=1$ gave good estimates for the optimal time at larger problem sizes. Therefore it is reasonable to assume that $H_1$ sees a similar proportion of the graph to QAOA $p=1$. Both approaches are also variational with short run-times. Since both approaches use comparable resources, in this section we attempt to compare the two. Again we focus on the problems outlined in Sec.\ \ref{sec:prob}. \n\nFor two-regular graphs, the optimal time for QAOA $p=1$ is 2.4 times longer than the optimal time for $H_1$ at large problem sizes, despite providing a poorer approximation ratio.\n\nThe results comparing $H_1$ and QAOA $p=1$, for all the problem instances considered in Sec.\ \ref{sec:3reg} and Sec.\ \ref{sec:H1Num}, are shown in Fig.\ \ref{fig:comph1QAOA}. \n\nThe approximation ratios for each approach are largely correlated, suggesting that, in general, harder problems for QAOA $p=1$ corresponded to harder problems for $H_1$. For all the MAX-CUT instances considered, $H_1$ gave an approximation ratio greater than or equal to that of QAOA $p=1$ (Fig.\ \ref{fig:comp3regc} and Fig.\ \ref{fig:comprandc}). 
For a handful of problems with the SKM, QAOA $p=1$ outperformed $H_1$, but for the vast majority of problem instances $H_1$ performed better (Fig.\ \ref{fig:compskc}).\n\nTurning now to the optimal time, $H_1$ had in the majority of cases the shorter optimal time (Figs.\ \ref{fig:comp3regt}, \ref{fig:comprandt} and \ref{fig:compskt}), though there are more exceptions to this with the SKM (Fig.\ \ref{fig:compskt}). In Appendix \ref{app:QAOA_better} we elaborate further on the exceptions, that is the MAX-CUT problems that have longer run-times than QAOA $p=1$ and the SKM problems on which QAOA $p=1$ does better. \n\nThis section has numerically demonstrated that $H_1$ provides a better approximation ratio than QAOA $p=1$ in a significantly shorter time for the majority of instances considered, justifying our description of this approach as rapid, which is crucial for NISQ implementation \cite{Pre18}. Given that $H_1$ tends to provide a better approximation ratio, in a shorter time, with fewer variational parameters, this raises the question: does QAOA $p=1$, the foundation of any QAOA circuit, make effective use of its afforded resources?\n\n\section{An improvement inspired by the Quantum Zermello problem}\n\label{sec:QZ}\n\subsection{The approach}\n\label{sec:QZapp}\nWith QAOA it is clear how to get better approximation ratios, that is, by increasing $p$. It is less clear how to do this with $H_1$. One suggestion might be to append this Hamiltonian to a QAOA circuit. However, the aim of this paper is to explore how Hamiltonians for optimal state-transfer can provide a guiding design principle. Therefore, in this section we explore adding another term, motivated by this new design principle, to $H_1$ to improve the approximation ratio.\n\nIn Sec.\ \ref{sec:hamdes} we motivated $H_1$ from the optimal Hamiltonian by making the pragmatic substitutions $\ket{\psi_i}\bra{\psi_i} \rightarrow H_i$ and $\ket{\psi_f}\bra{\psi_f} \rightarrow H_f$. 
Subsequently, we demonstrated that $H_1$ provides a reasonable performance. However, $H_1$ no longer closely followed the evolution under the optimal Hamiltonian. To partially correct for this error we add another term to the Hamiltonian:\n\\begin{equation}\n \\label{eq:corQZ}\n H_{1,improved}=H_1+H_{QZ}.\n\\end{equation}\n\n Again, we make use of Hamiltonians for optimal state-transfer to motivate the form of $H_{QZ}$. Finding the optimal Hamiltonian in the presence of an uncontrollable term in the Hamiltonian is known as the quantum Zermello (QZ) problem \\cite{Bro15,Brody_2015,Rus14,Rus15}. \n\nIn the rest of this section we expand on the details of the QZ problem. From the exact form of the optimal correction, $H_{cor}$, we then apply a series of approximations so that $H_{cor}$ is time-independent and does not rely on knowledge of $\\ket{\\psi_f}$. This final Hamiltonian will be $H_{QZ}$ in Eq.\\ \\ref{eq:corQZ}.\n\n\n\\begin{figure}\n \\centering\n \n\\begin{tikzpicture}\n\n\\fill[black] (-1,5) circle (0.15cm) node[anchor=south east] {$\\ket{\\psi_i}$};\n\n\\fill[black] (5,-1) circle (0.15cm) node[anchor=north west] {$\\ket{\\psi_f}$};\n\n\\draw[myblue,ultra thick,dashed] (5,-1) .. controls (4,5) and (4,0) .. (0,2);\n\n\\draw[myblue, ultra thick] (-0.1,1.9) -- (0.1,2.1);\n\n\\draw[myblue, ultra thick] (-0.1,2.1) -- (0.1,1.9)node[anchor=south west] {$e^{i H_{QW}T} \\ket{\\psi_f}$};\n\n\\draw[mypink, ultra thick](-1,5)--(-2\/3,4) node[anchor=south west] {$e^{-i H t} \\ket{\\psi_i}$};\n\n\\draw[mypink, ultra thick, dashed](-2\/3,4)--(-1\/3,3);\n\\end{tikzpicture}\n \\caption{A cartoon of the evolution of states in the QZ problem for constant $H_{QW}$. In the interaction picture, with background Hamiltonian $H_{QW}$, it appears the final state is moving under the influence of this Hamiltonian. In this frame Eq. \\ref{eq:optHam} can then be applied. It then remains to move out of the interaction picture to get Eq.\\ \\ref{eq:opHQZ}. 
}\n \\label{fig:QZcartoon}\n\\end{figure}\n\nThe QZ problem, like the quantum brachistochrone problem, asks what is the Hamiltonian that transfers the system from $\\ket{\\psi_i}$ to the final state $\\ket{\\psi_f}$ in the shortest possible time. Unlike the quantum brachistochrone problem, part of the Hamiltonian is uncontrollable. In the case of a constant uncontrollable term, the total Hamiltonian can be written as:\n\\begin{equation}\n H_{opt | QW}=H_{QW}+H_{cor}(t),\n\\end{equation}\nwhere $H_{QW}$ is the constant `quantum wind' that cannot be changed and $H(t)$ is the Hamiltonian we are free to vary. Typically, $H_{QW}$ is understood as a noise term \\cite{Lap10,Muk13}. Instead, here we will take $H_{QW}$ to be $H_1$ to provide a favourable quantum wind that $H_{cor}(t)$ can provide an improvement on. \n\nThe optimal form of $H_{cor}(t)$ is (up to some factor) \\cite{Brody_2015}:\n\n\\begin{multline}\n H_{cor}(t)=-i e^{-iH_{1}t}\\left(\\ket{\\psi_i}\\bra{\\psi_f}e^{-iH_{1}T} \\right. \\\\\n \\left. -e^{i H_{1}T}\\ket{\\psi_f}\\bra{\\psi_i} \\right)e^{iH_{1}t},\n \\label{eq:opHQZ}\n\\end{multline}\nwhere $t$ is the time and $T$ is the final time. The motivation for this equation can be found in Fig.\\ \\ref{fig:QZcartoon}. This Hamiltonian requires knowledge of the final state, so we introduce a series of approximations to make $H_{cor}(t)$ more amenable for implementation. \n\n Since we know that the optimum evolution under $H_1$ tends to be short, we make the assumption that the total time $T$ is small. Therefore, we approximate the optimal correction with $H_{cor}(t)$ with\n\\begin{equation}\n H_{cor}(0)=-i \\left(\\ket{\\psi_i}\\bra{\\psi_f}e^{-iH_{1}T} \\right. \\\\\n \\left. -e^{i H_{1}T}\\ket{\\psi_f}\\bra{\\psi_i} \\right).\n\\end{equation}\n\nNote that this Hamiltonian is time-independent. 
We can estimate the error in this step by:\n\begin{align*}\n \delta&=\left\|H_{cor}(t)-H_{cor}(0)\right\|\\\n &=\left\|\int_0^t ds\, \left(-i\right)e^{-iH_{1}s}\left[H_{1},H_{cor}(0)\right]e^{iH_{1}s}\right\|\\\n &\leq \int_0^t ds\, \left\|e^{-iH_{1}s}\left[H_{1},H_{cor}(0)\right]e^{iH_{1}s}\right\|\\\n &=\left\|\left[H_{1},H_{cor}(0)\right]\right\|t.\n\end{align*}\n\nHence, $\delta \leq \left\|\left[H_{1},H_{cor}(0)\right]\right\|t$.\n\n\nIntroducing the commutator structure (Sec.\ \ref{sec:hamdes}) with the same pragmatic substitutions as before for $H_1$ gives:\n\begin{equation}\n \label{eq:infT}\n H_{QZ}=-i \left[H_i,e^{i H_{1}T}H_fe^{-iH_{1}T} \right],\n\end{equation}\nwhere we have introduced the subscript $QZ$ to distinguish this Hamiltonian from $H_{cor}(0)$ prior to the substitutions. Quantifying the error in this step remains an open challenge. Expanding this expression in $T$ gives:\n\n\begin{multline}\n \label{eq:QZexp}\n H_{QZ}=-i \left[H_i, H_f+iT \left[H_{1},H_f\right]\right.\\\n \left.-T^2\left[H_{1},\left[H_{1},H_f\right]\right]\/2+\mathcal{O}\left(T^3\right)\right].\n\end{multline}\n\nFrom the QZ problem we have motivated the form of the correction $H_{QZ}$ in Eq.\ \ref{eq:corQZ}. However, we have no guarantee of its performance; to this end we carry out numerical simulations. \n\nIn both its philosophy and structure $H_{QZ}$ is reminiscent of shortcuts to adiabaticity (STA) \cite{Gue19}. In STA the aim is to modify the Hamiltonian in Eq.\ \ref{eq:QAham} to reach the adiabatic result in a shorter time, typically by appending to the standard QA Hamiltonian. The approach here is distinctly different for three key reasons, besides the different starting Hamiltonian. 
\n\\begin{enumerate}\n \\item The aim here is to do something distinctly different from $H_1$, not to recover its behaviour in a shorter time.\n \\item In the QZ-inspired approach we make use of the excited states, with the aim of finding approximate solutions, as opposed to exact solutions. \n \\item Here we only consider time-independent Hamiltonians.\n\\end{enumerate}\n\nA clear downside to $H_{QZ}$ is the increased complexity, compared to say QAOA. However, if $H_{QZ}$ is decomposed into a QAOA-style circuit, the single free parameter in $H_{QZ}$ might translate to fewer free parameters in the QAOA circuit, allowing for easier optimisation. \n\n\n\\subsection{Numerical simulations}\n\\subsubsection{MAX-CUT on two-regular graphs}\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.48\\textwidth]{images\/QZperRoD_std.png}\n \\caption{Numerically optimised performance of Eq.\\ \\ref{eq:QZexp}. Each point has been optimised in the time interval [0,0.3] by considering 3000 divisions. The legend shows the order of $T$, with `exp' referring to Eq.\\ \\ref{eq:infT}. The dashed lines show the asymptotic performance of QAOA. }\n \\label{fig:SerPer}\n\\end{figure}\n\nHere we focus on applying Eq.\\ \\ref{eq:QZexp} up to various orders in $T$ to MAX-CUT on two-regular graphs with an even number of qubits. We focus on this problem as it is trivial to scale and the performance of QAOA and $H_1$ on this problem is well understood. \n\nThe results for MAX-CUT with two-regular graphs can be seen in Fig.\\ \\ref{fig:SerPer}. Increasing the expansion in $T$ appears to improve the approximation ratio. But the improvement is capped, shown by the data labelled `exp'. Notably, this approach with a single variational parameter at order $T^2$ is performing better than QAOA $p=3$ (with 6 variational parameters) for 10 qubits. \n\nThe optimal $T$ for the QZ-inspired Hamiltonians can be seen in Fig.\\ \\ref{fig:Sertime}. 
Again, the optimal time for each order in $T$ appears to be tending to a constant value, suggesting this approach is still acting in a local fashion. This is consistent with the approximation ratio plateauing. As we can see the QZ-inspired approach is still operating in a rapid fashion. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.48\\textwidth]{images\/QZperRoD_t_std.png}\n \\caption{The corresponding optimal times for Fig.\\ \\ref{fig:SerPer}. The norm of each Hamiltonian, for each problem size has been fixed so Eq.\\ \\ref{eq:norm} is true, to make comparisons fair.}\n \\label{fig:Sertime}\n\\end{figure}\n\n\\subsubsection{Random instances on MAX-CUT and Sherrington-Kirkpatrick problems}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.48\\textwidth]{images\/BoxMacCutIter.png}\n \\caption{The approximation ratio for the QZ-inspired approach on 100 random MAX-CUT instances. The x-axis label refers to the order of $T$ in the expansion of Eq.\\ \\ref{eq:QZexp}, with $0$ being $H_1$ and $e$ referring to the full exponential (i.e.\\ Eq.\\ \\ref{eq:infT}).}\n \\label{fig:qzmc}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.48\\textwidth]{images\/BoxSKIter.png}\n \\caption{The approximation ratio for the QZ-inspired approach on 100 instances of the SKM. The x-axis label refers to the order of $T$ in the expansion of Eq.\\ \\ref{eq:QZexp}, with $0$ being $H_1$ and $e$ referring to the full exponential (i.e.\\ Eq.\\ \\ref{eq:infT}).}\n \\label{fig:qzsk}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.48\\textwidth]{images\/BoxMacCutIter_time.png}\n \\caption{The optimal time for the QZ-inspired approach on 100 random Max-CUT instances. The x-axis label refers to the order of $T$ in the expansion of Eq.\\ \\ref{eq:QZexp}, with $0$ being $H_1$ and $e$ referring to the full exponential (i.e.\\ Eq.\\ \\ref{eq:infT}). 
The norm of each Hamiltonian, for each problem size, has been fixed according to Eq.\ \ref{eq:norm}, ensuring a fair comparison.}\n \label{fig:QZmct}\n\end{figure}\n\n\begin{figure}\n \centering\n \includegraphics[width=0.48\textwidth]{images\/BoxSKIterpng_time.png}\n \caption{The optimal time for the QZ-inspired approach on 100 instances of the SKM. The x-axis label refers to the order of $T$ in the expansion of Eq.\ \ref{eq:QZexp}, with $0$ being $H_1$ and $e$ referring to the full exponential (i.e.\ Eq.\ \ref{eq:infT}). The norm of each Hamiltonian, for each problem size, has been fixed according to Eq.\ \ref{eq:norm}, ensuring a fair comparison.}\n \label{fig:QZSKt}\n\end{figure}\n\n\nTo complete this section we examine the performance of the QZ-inspired approach (Eq.\ \ref{eq:QZexp}) on the randomly generated instances of MAX-CUT and the SKM detailed in Sec.\ \ref{sec:prob}. \n\nThe results for different orders in $T$ for the approximation ratio can be seen in Fig.\ \ref{fig:qzmc} for MAX-CUT and Fig.\ \ref{fig:qzsk} for the SKM. All the QZ-inspired approaches provide an improvement on the original $H_1$ Hamiltonian, indexed by 0 in the figures. However, the performance is not monotonically increasing with the order of the expansion. This is not unusual for a Taylor series of an oscillatory function. Consequently, achieving better approximation ratios is not as simple as increasing the order of $T$. At the same time, this means that it is not necessary to go to high orders in $T$, with very non-local terms, to achieve a significant gain in performance. For example, in both cases going to first order achieves a substantial improvement. \n\nThe optimal times for the QZ-inspired approach can be found in Fig.\ \ref{fig:QZmct} for MAX-CUT and Fig.\ \ref{fig:QZSKt} for the SKM. For clarity we only show the optimal times for the larger problem instances. As with $H_1$, the optimal times are clustered for a given order. 
The lack of dependence on problem size for optimal times and approximation ratios suggests that the QZ-inspired approach is still optimising locally. Compared to the $H_1$ case, however, the operators have larger support: while still local, they are less local than $H_1$, hence the increased performance.\n\nHere we have numerically demonstrated that the QZ-inspired approach can provide an improvement over $H_1$, suggesting how this new design philosophy might be extended. The numerics also suggest that going to first order may provide the best trade-off between performance and complexity. \n\n\section{Using knowledge of the initial state}\n\label{sec:lpa}\n\nAs introduced in Sec.\ \ref{sec:hamdes}, in this section we exploit our knowledge of the initial state and evaluate the performance of \n\begin{equation}\n H_{\psi_i}=-i\left[\ket{\psi_i}\bra{\psi_i},f(H_f)\right] \n\end{equation}\nwithin the QA-framework. We take $f(\cdot)$ to be a real function such that:\n\begin{equation}\n f(H_f)=\sum_k f(E_k)\ket{E_k}\bra{E_k},\n\end{equation}\nwhere $\ket{E_k}$ and $E_k$ are the eigenvectors and associated eigenvalues of $H_f$.\n\nEvolution under $H_{\psi_i}$ can be calculated analytically; the details can be found in Appendix \ref{app:expHam}. By evolving $\ket{\psi_i}$ the state:\n\begin{equation}\n \ket{\omega}=\frac{1}{\sqrt{\Tr f^2(H_f)}}\sum_k f(E_k)\ket{E_k},\n\end{equation}\ncan be reached. Indeed, $H_{\psi_i}$ will generate linear superpositions of $\ket{\omega}$ and $\ket{\psi_i}$ only.\n\nIf the state $\ket{\omega}$ is prepared, then the probability of finding the ground-state is \n\begin{equation}\n P_{gs}=\frac{g f^2(E_{gs})}{\Tr f^2(H_f)},\n\end{equation}\nwhere $g$ is the ground-state degeneracy and $E_{gs}$ the associated energy. If $f(\cdot)$ is the identity, then $\Tr f^2(H_f)=\Tr H_f^2$ scales approximately as $2^n$ and $E_{gs}$ might scale with $n$. Hence the ground state probability will scale as $\sim n^2\/2^n$. 
Indeed if $f(H_f)=H_f^m$, where $m$ is some positive integer, then the ground state probability might scale as $\\sim n^{2m}\/2^n$. This is an improvement over random guessing, but still with exponential scaling. This may be of some practical benefit, depending on the computational cost of calculating $f(H_f)$. \nIf $f(\\cdot)$ is the projector onto the ground-state then $P_{gs}=1$ (as expected). \n\nCalculating the expectation of $H_f$ for $\\ket{\\omega}$ gives:\n\\begin{equation}\n \\langle H_f \\rangle =\\frac{1}{\\Tr f^2(H_f)}\\sum_k E_k f^2(E_k).\n\\end{equation}\nHere we can see that $\\langle H_f \\rangle$ will be dominated by states for which $f^2(E_k)$ is large. If $f(\\cdot)$ is the identity, this most likely means low-energy and high-energy states. Hence, we do not expect a good approximation ratio. This provides some insight into Sec.\\ \\ref{sec:RoD} where the observed peak in ground-state probability did not coincide with the optimal approximation ratio.\n\nThis approach has the potential to provide a modest practical speed-up with a polynomial prefactor on the hardest problems. However, the success of this approach depends on the (unlikely) feasibility of implementing $H_{\\psi_i}$ and $f(H_f)$. It does however provide further evidence of the power of commutators for designing algorithms to tackle optimisation problems.\n\n\n\n\\section{Discussion and Conclusion}\n\nDesigning quantum algorithms to tackle combinatorial optimisation problems, especially within the NISQ framework, remains a challenge. Many algorithms have drawn on AQO for inspiration, such as in the choice of Hamiltonians. In this paper we have explored using optimal Hamiltonians as a guiding design principle. \n\nWith $H_1$, the commutator between $H_i$ and $H_f$, we demonstrated that we can outperform QAOA $p=1$ with fewer resources. The short run-times, which do not appear to scale with problem size, suggest that this approach is acting locally. 
An effective Lieb-Robinson bound prevents the information about the problem propagating instantaneously \\cite{Lieb1972,WangZ20}. This helps provide some insights into the performance of $H_1$:\n\\begin{itemize}\n \\item In the local regime, the effective local Hilbert space is smaller than the global Hilbert space; consequently, $H_1$ will be a better approximation of the optimal Hamiltonian. This accounts for why we might expect $H_1$ to work better at short run-times.\n \\item A local algorithm is unlikely to be able to solve an optimisation problem, as it cannot see the whole graph. It follows that such an approach would have poor scaling of the ground-state probability. \n\\end{itemize}\n\nDue to the local nature of $H_1$ we were able to utilise some analytical tricks to assist the numerical assessment of its performance, allowing for some guarantee of the performance of the approach on large problem sizes. The techniques used had already been developed or deployed by the continuous-time quantum computing community in the context of QA\/QAOA, indicative of the wide applicability of the tools being developed to assess these algorithms.\n\nLocal approaches have clear advantages in NISQ-era computations. The short run-times put fewer demands on the coherence times of the device. The local nature can also help mitigate some errors. If, for example, there is a control misspecification in one part of the Hamiltonian, this is unlikely to propagate through the whole system and affect the entire computation. \n\nBuoyed by the relative success of utilising Hamiltonians for optimal state-transfer, we turned to the quantum Zermello problem to help find improvements. These Hamiltonians comprised a single variational parameter and short run-times, at the cost of increased complexity in the Hamiltonian. 
Again, the saturation of the optimal time suggests these approaches are still operating locally.\n\nThe success of this approach, within the NISQ era, will depend on the feasibility of implementing these Hamiltonians. This might be achieved through decomposition into a product formula \\cite{Childs_2013} for gate-based approaches, resulting in a QAOA-like circuit. Alternatively, one could attempt to explicitly engineer the interactions involved. Indeed, for exponentially scaling problems, implementing these Hamiltonians for short times could be less challenging than maintaining coherence for exponentially increasing times.\n\nAlthough the results of this paper are not fully conclusive, we have shown that by considering Hamiltonians for optimal state-transfer we can develop promising new algorithms. We hope the results presented in this paper will encourage further work on these Hamiltonians. There is scope for taking this work further. This could include changing the choice of $H_i$, exploiting our observation that any stoquastic Hamiltonian can lead to an increase in ground state probability. For $H_f$ we have only explored problems with trivial Ising encodings. There is scope to explore new encodings such as LHZ \\cite{Lec15} or Domain-wall \\cite{Cha19} encodings. Such encodings will result in different $H_1$ and presumably distinct dynamics.\n\n\\section*{Acknowledgments}\nWe gratefully acknowledge Filip Wudarski, Glen Mbeng, Henry Chew, Natasha Feinstein, and Sougato Bose for inspiring discussion and helpful comments. 
This work was supported by the Engineering and Physical Sciences Research Council through the Centre for Doctoral Training in Delivering Quantum Technologies [grant number EP\/S021582\/1] and the EPSRC Hub in Quantum Computing and Simulation [grant number EP\/T001062\/1].\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction} \\label{sec:intro}\n\nAlgorithms with exponential speedups over classical counterparts \\cite{shorsalgorithm,groversalgorithm}, \nsimulation of quantum systems \\cite{sethlloyd, du_hydrogen,lidar_thermalrate} and testing basic \nprinciples of quantum mechanics \\cite{jharana_nohiding,mahesh_legget} make \nquantum information processing (QIP) and quantum computation intensively investigated fields of physics over the last decade. \nThe idea of simulating quantum systems in a quantum computer was proposed \nby Feynman \\cite{feynman_qs} in 1982 and is one of the most important practical applications of the \nquantum computer. Quantum simulation has the potential to revolutionize physics and chemistry and has recently drawn attention \nby solving problems such as molecular hydrogen simulation \\cite{du_hydrogen}, \ncalculation of thermal rate constants of chemical reactions \\cite{lidar_thermalrate} and quantum chemical dynamics \n\\cite{kassal_chemicaldynamics}. \n\n\nThe Dzyaloshinsky-Moriya (DM) interaction is an anisotropic antisymmetric exchange interaction arising from spin-orbit \ncoupling \\cite{dzyaloshinsky, moriya}.\nIt was proposed by Dzyaloshinsky to explain the weak ferromagnetism of antiferromagnetic crystals \n($\\alpha$-$Fe_2O_3$, $MnCO_3$) \\cite{dzyaloshinsky}.\nThe DM interaction is crucial in the description of many antiferromagnetic systems \\cite{dender,kohgi,greven} and is \nimportant in the entanglement properties of the system. 
Here we present \na generic unitary operator decomposition which will help to simulate the Hamiltonian -- the DM interaction in the presence of the Heisenberg \nXY interaction -- in a two-qubit system with almost any basic interaction between the qubits.\n\n\\begin{comment}\nEntanglement behavior in Ising model with DM interaction \\cite{kargarian_ent}, role of DM interaction in \nentanglement transfer \\cite{maruyama_et}, quantum phase interference of spins\\cite{wernsdorfer} and significance of \nDM interaction in quantum dots \\cite{kavokin_quantumdots} are the recent investigations in DM interactions. \n\\end{comment}\n\n\n\n \nLong-lasting coherence and high-fidelity controls make nuclear magnetic resonance (NMR) ideal for quantum \ninformation processing. Experimental implementations of quantum algorithms \n(the Deutsch-Jozsa algorithm, Grover's search algorithm and Shor's factorization algorithm), tests of basic \nprinciples of quantum mechanics (the no-hiding theorem \\cite{jharana_nohiding} and the Leggett-Garg inequality \n\\cite{mahesh_legget}) and quantum simulation \n(the hydrogen molecule \\cite{du_hydrogen} and systems with competing two- and three-body interactions \n\\cite{suter_2and3bodyinteractions}) have been performed in liquid-state NMR (LSNMR).\n\nGenetic algorithms (GA) are stochastic global search methods based on the mechanics of natural biological \nevolution \\cite{whitely_ga}. They were first proposed by John Holland in 1975 \\cite{holland_ga}. A GA operates on \na population of solutions of a specific problem by encoding the solutions in a simple chromosome-like \ndata structure, and applies recombination operators. GAs are attractive in engineering design and applications because \nthey are easy to use and are likely to find the globally best design or solution, superior to \nother designs or solutions \\cite{rasheed_ga}. Here we used Genetic algorithm optimization to solve the \nUOD for the generic DM Hamiltonian with Heisenberg XY interaction. 
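The GA machinery involved is compact enough to sketch. The toy example below (plain NumPy; the two-angle single-qubit decomposition and its target are hypothetical choices for illustration, not the actual UOD search of this work) evolves a population of candidate rotation angles by truncation selection, averaging crossover and Gaussian mutation, maximizing the gate fidelity $|Tr(U^\\dagger V)|\/2$.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(7)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def U(a, b):
    # Toy two-parameter decomposition: Rx(a) Rz(b)
    return expm(-0.5j * a * X) @ expm(-0.5j * b * Z)

target = U(0.7, 1.3)            # hypothetical target gate (an exact solution exists)

def fitness(p):
    # Gate fidelity |Tr(U^dag V)|/2, insensitive to a global phase
    return abs(np.trace(U(*p).conj().T @ target)) / 2

pop = rng.uniform(0.0, 2.0 * np.pi, size=(40, 2))    # initial "chromosomes"
for gen in range(80):
    order = np.argsort([fitness(p) for p in pop])[::-1]
    parents = pop[order[:10]]                        # truncation selection (elitist)
    i, j = rng.integers(0, 10, size=(2, 30))
    # Recombination: average random parent pairs, then apply Gaussian mutation
    children = 0.5 * (parents[i] + parents[j]) + rng.normal(0.0, 0.1, (30, 2))
    pop = np.vstack([parents, children])

best = max(fitness(p) for p in pop)
print(f"best fidelity found: {best:.4f}")            # typically > 0.99
```

The actual FPO of this work differs in that it optimizes a full two-qubit pulse sequence and additionally fits the optimized angles as smooth functions of the Hamiltonian parameters.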
\n\nSection \\ref{sec:theory} presents the theoretical discussion of the DM Hamiltonian simulation, followed by the experimental \nimplementation in Section \\ref{sec:expt}.\n\n\n\n\\section{Theory} \\label{sec:theory}\n\n\nThe DM interaction in the presence of the Heisenberg XY interaction, $H(J,D)$, is \n\\begin{equation} \\label{eqn:dmh}\nH(J,D)=J(\\sigma_{1x}\\sigma_{2x}+\\sigma_{1y}\\sigma_{2y})+D(\\sigma_{1x}\\sigma_{2y}-\\sigma_{1y}\\sigma_{2x}),\n\\end{equation}\nwhere $J$ and $D$ respectively represent the strengths of the Heisenberg and DM interactions.\n\nExperimental simulation of $H(J,D)$ (Eqn. \\ref{eqn:dmh}) in a quantum system (with Hamiltonian $H_{sys}$) requires a \nUOD of the evolution operator $U(J,D,t)$,\n\\begin{equation} \\label{eqn:dmu}\nU(J,D,t)=exp(-i H(J,D) \\times t),\n\\end{equation}\nin terms of Single Qubit Rotations \n$R^n(\\theta,\\phi)$ (a rotation by angle $\\theta$ about the axis $\\phi$ on the $n^{th}$ spin),\n\\begin{equation} \\label{eqn:usqr}\nR^n(\\theta,\\phi) = exp(-i \\theta\/2 \\times [Cos\\phi ~\\sigma_{nx} +Sin\\phi ~\\sigma_{ny}]),\n\\end{equation}\nand evolution under the system Hamiltonian, $U_{sys}$ (Eqn. \\ref{eqn:sysu}),\n\\begin{equation} \\label{eqn:sysu}\nU_{sys}(t)=exp(-i H_{sys} \\times t).\n\\end{equation}\n\nWithout loss of generality, Eqn. \\ref{eqn:dmu} can be written as\n\\begin{equation} \\label{eqn:dmu1}\n\\begin{split}\nU(\\gamma,\\tau)=exp(-i [(\\sigma_{1x}\\sigma_{2x}+\\sigma_{1y}\\sigma_{2y})+ \\\\ \n\\gamma (\\sigma_{1x}\\sigma_{2y}-\\sigma_{1y}\\sigma_{2x})]~ \\tau),\n\\end{split}\n\\end{equation}\nwhere $\\gamma$ represents the ratio of the interaction strengths ($\\gamma = D\/J$) and $\\tau=J\\times t$.\n\nEqn. \\ref{eqn:dmu1} forms the complete unitary operator for the Hamiltonian (Eqn. \\ref{eqn:dmh}) with \n$\\gamma$ and $\\tau$ varying from 0 to $\\infty$. We performed the UOD for Eqn. 
\\ref{eqn:dmu1} using \nGenetic algorithm optimization \\cite{vsmanu_ga}.\nIn an operator optimization (as shown in \\cite{vsmanu_ga}), the optimization is performed for a constant unitary \nmatrix -- corresponding to a single fidelity point. \nHere the optimization has to be performed over a two-dimensional fidelity profile generated by $\\gamma$ and $\\tau$. We call this Fidelity Profile \nOptimization (FPO). FPO for the present case is explained in the following steps.\n\nIn the first step, we performed Fidelity Profile Optimization with the following assumptions -- (a) the range of \n$\\tau$ is from 0 to 15, (b) the range of $\\gamma$ is from 0 to 1 and (c) the system Hamiltonian ($H_{sys}$) is \ngiven by Eqn. \\ref{eqn:zzh},\n\\begin{equation} \\label{eqn:zzh}\nH_{sys}=H_{zz}=J_{zz} (\\sigma_{1z}\\sigma_{2z}),\n\\end{equation}\nwhere $J_{zz}$ is the strength of the zz-interaction.\n\nThe optimization procedure using the Genetic algorithm is explained in the Supporting Information. \n\nThe optimized UOD (Eqn. \\ref{eqn:uh1}) has seven SQRs and two system Hamiltonian evolutions. \n\\begin{equation} \\label{eqn:uh1}\n\\begin{split}\nU(\\gamma,\\tau) = R^1(\\tfrac{\\pi}{2},-\\tfrac{\\pi}{2}) R^1(\\tfrac{\\pi}{2},\\theta_2) R^2(\\pi,\\pi) \t\nU_{zz}(\\tfrac{\\pi}{4})\\\\ R^1(\\theta_1,\\theta_2+\\tfrac{\\pi}{2})R^2(\\pi-\\theta_1,0)\t\t\t\nU_{zz}(\\tfrac{\\pi}{4})\\\\ R^1(\\tfrac{\\pi}{2},\\theta_2+\\pi) R^2(\\tfrac{\\pi}{2},\\tfrac{\\pi}{2}), \n\\end{split}\n\\end{equation}\nwhere $\\theta_1$ and $\\theta_2$ (Eqn. \\ref{eqn:th1}) impart the $\\gamma$ and $\\tau$ dependence to the UOD \nand $U_{zz}$ is given by Eqn. \\ref{eqn:sysu} with $J_{zz} \\times t =\\pi\/4$.\n\\begin{equation} \\label{eqn:th1}\n\\begin{split}\n\\theta_1=[0.8423-0.3455~Cos(1.117~\\gamma)+ \\\\0.01806~Sin(1.117 \\gamma)]~\\tau \\\\\n\\theta_2=1.345~exp(-0.8731 \\gamma)+1.796.\n\\end{split}\n\\end{equation}\nThe fidelity \\cite{vsmanu_ga} profile of the UOD is \nshown in Fig. \\ref{fig:fp1}. 
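The quoted decomposition can be spot-checked by direct matrix computation. The sketch below (plain NumPy) rebuilds the sequence from the fitted angles and evaluates the gate fidelity $|Tr(U_{seq}^\\dagger U_{target})|\/4$ on a $(\\gamma,\\tau)$ grid; it assumes the conventions of Eqns. \\ref{eqn:usqr} and \\ref{eqn:sysu}, with the leftmost operator in Eqn. \\ref{eqn:uh1} applied last, and whether the transcribed coefficients reproduce the quoted fidelity surface is precisely what such a scan tests.

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def R(spin, theta, phi):
    # Single-qubit rotation exp(-i theta/2 [cos(phi) X + sin(phi) Y]) on spin 1 or 2
    r = expm(-0.5j * theta * (np.cos(phi) * X + np.sin(phi) * Y))
    return np.kron(r, I2) if spin == 1 else np.kron(I2, r)

def Uzz(theta):
    # System-Hamiltonian (zz) evolution with J_zz * t = theta
    return expm(-1j * theta * np.kron(Z, Z))

def U_target(g, tau):
    H = (np.kron(X, X) + np.kron(Y, Y)
         + g * (np.kron(X, Y) - np.kron(Y, X)))
    return expm(-1j * H * tau)

def U_seq(g, tau):
    # Angles fitted in Eqn. (th1); sequence transcribed from Eqn. (uh1)
    th1 = (0.8423 - 0.3455 * np.cos(1.117 * g) + 0.01806 * np.sin(1.117 * g)) * tau
    th2 = 1.345 * np.exp(-0.8731 * g) + 1.796
    pi = np.pi
    return (R(1, pi / 2, -pi / 2) @ R(1, pi / 2, th2) @ R(2, pi, pi)
            @ Uzz(pi / 4) @ R(1, th1, th2 + pi / 2) @ R(2, pi - th1, 0)
            @ Uzz(pi / 4) @ R(1, pi / 2, th2 + pi) @ R(2, pi / 2, pi / 2))

def fidelity(A, B):
    return abs(np.trace(A.conj().T @ B)) / 4

F_min = min(fidelity(U_seq(g, t), U_target(g, t))
            for g in np.linspace(0.0, 1.0, 5)
            for t in np.linspace(0.0, 15.0, 7))
print(f"minimum fidelity on the scan: {F_min:.4f}")
```

Each factor is unitary by construction, so any fidelity loss on the scan reflects the transcription or the fit itself rather than numerics.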
The minimum of the fidelity profile is greater than 99.99 \\%. \nIt should be noted that the total length of the UOD (Eqn. \\ref{eqn:uh1}) \nis invariant under $\\gamma$ and $\\tau$ (with the assumption that all the SQRs are instantaneous).\n\n\n\\begin{figure}\n\\includegraphics[width=0.35\\textwidth, height=0.27\\textwidth]{h1.jpeg}\n\\caption{Fidelity profile of the UOD given in Eqn. \\ref{eqn:uh1}.} \\label{fig:fp1}\n\\end{figure}\nTo relax the assumption on $\\tau$, we solved Eqn. \\ref{eqn:pf} numerically and found the \nperiod P($\\gamma$) of U($\\gamma$,$\\tau$) (Eqn. \\ref{eqn:p}). \n\\begin{equation} \\label{eqn:pf}\n H(\\gamma,\\tau+n \\times P(\\gamma))= H(\\gamma,\\tau).\n\\end{equation}\n\\begin{equation} \\label{eqn:p}\n P(\\gamma)=3.008~\\gamma^3-6.627~\\gamma^2-0.1498~\\gamma+12.59.\n\\end{equation}\nEqn. \\ref{eqn:p} has a maximum value of 12.59 at $\\gamma$=0. Since the maximum value of the period is \nless than 15 (FPO was performed up to $\\tau$=15), the UOD (Eqn. \\ref{eqn:uh1}) can be used for any value of $\\tau$.\nThe same argument can be used to extend the range of $\\tau$ to $-\\infty$.\n\nIn order to incorporate the range of $\\gamma$ from 0 to $\\infty$, we performed FPO for the \noperator (Eqn. \\ref{eqn:dmu2}),\n\\begin{equation}\\label{eqn:dmu2}\n\\begin{split} \nU'(\\gamma',\\tau')=exp(-i [\\gamma' (\\sigma_{1x}\\sigma_{2x}+\\sigma_{1y}\\sigma_{2y})+ \\\\ \n(\\sigma_{1x}\\sigma_{2y}-\\sigma_{1y}\\sigma_{2x})]~ \\tau'),\n\\end{split}\n\\end{equation}\nand the optimized unitary decomposition is\n\\begin{equation} \\label{eqn:uh2}\n\\begin{split}\nU'(\\gamma',\\tau') = R^1(\\tfrac{\\pi}{2},\\tfrac{\\pi}{2}) R^2(\\tfrac{\\pi}{2},\\theta_3) \t\nU_{zz}(\\tfrac{\\pi}{4}) R^1(\\theta_2+\\theta_3,0)\\\\ R^2(\\theta_1,\\theta_4)\t\t\t\nU_{zz}(\\tfrac{\\pi}{4}) R^1(\\tfrac{\\pi}{2},\\tfrac{\\pi}{2}) R^2(\\tfrac{\\pi}{2},\\theta_3),\n\\end{split}\n\\end{equation}\nwhere $\\theta_1 \\cdots \\theta_4$ (Eqn. 
\\ref{eqn:th2}) impart the $\\gamma$ and $\\tau$ dependence to the UOD.\n\n\\begin{small}\\begin{align} \\label{eqn:th2}\n\\begin{split}\n&\\theta=[ 0.09812~ exp(-2.42 \\gamma)+0.4023 ~exp(0.5524 \\gamma)] \\tau,\\\\\n&\\theta_1=-\\theta+3.142, \\\\\n&\\theta_2=\\theta-[1.242 ~exp(-0.9617 \\gamma)+ 0.3546~exp(-0.1145 \\gamma)], \\\\\n&\\theta_3=1.259~exp(-0.957 \\gamma)+3.479~exp(-0.0087 \\gamma), \\\\\n&\\theta_4=1.256~exp(-0.959 \\gamma)+ 1.912~exp(-0.0166 \\gamma),\n\\end{split}\n\\end{align}\n\\end{small}\nwhere $\\gamma'$ varies from 0 to 1 and $\\tau'$ from 0 to 15.\n\nEqn. \\ref{eqn:dmu2} satisfies the same periodicity relation as shown in Eqn. \\ref{eqn:p}, and hence \nthe same reasoning can be used to extend the range of $\\tau'$ from 0 to $+\\infty$.\n\nFor $\\gamma > 1$, Eqn. \\ref{eqn:dmu1} can be written as\n\\begin{equation} \\label{eqn:dmut}\n\\begin{split}\nU(\\gamma'',\\tau'')=exp(-i [\\gamma''(\\sigma_{1x}\\sigma_{2x}+\\sigma_{1y}\\sigma_{2y})+ \\\\ \n (\\sigma_{1x}\\sigma_{2y}-\\sigma_{1y}\\sigma_{2x})]~ \\tau''),\n\\end{split}\n\\end{equation}\nwhere $\\gamma''=1\/\\gamma$ and $\\tau''=\\gamma \\times \\tau$.\n\n\nEqn. \\ref{eqn:dmu2} and Eqn. \\ref{eqn:dmut} are equivalent, and hence the UOD for Eqn. \\ref{eqn:dmu1} can be written as\n\\begin{equation}\\label{eqn:uod}\nU(\\gamma, \\tau) = \\left\\{ \n \\begin{array}{l l}\n \\text{Eqn. \\ref{eqn:uh1}} & \\quad \\text{if $\\gamma \\leqslant $ 1}\\\\\n \\text{Eqn. \\ref{eqn:uh2}} & \\quad \\text{if $\\gamma >$ 1}\\\\\n \\end{array} \\right.\n\\end{equation}\n\nThe UOD given in Eqn. \\ref{eqn:uh1} is based on $H_{sys}=H_{zz}$ (Eqn. \\ref{eqn:zzh}). \nIt can be generalized to almost any interaction by the \\textit{term isolation} procedure of Bremner et al. \\cite{bremner,hill}. \n\nAs an example, consider the case\n\\begin{equation} \\label{eqn:hsystst}\nH_{sys}= J(\\sigma_x\\sigma_x+\\sigma_y\\sigma_y+\\sigma_z\\sigma_z).\n\\end{equation}\n\nThe $H_{zz}$ terms can be isolated from Eqn. 
\\ref{eqn:hsystst} as shown in Eqn. \\ref{eqn:hsyststiso},\n\n\\begin{equation} \\label{eqn:hsyststiso}\n\\begin{split}\nexp(-iJ_{zz}\\sigma_z\\sigma_z t)=R^1(\\pi,z)exp(-iH_{sys}t)R^1(\\pi,z)\\\\exp(-iH_{sys}t),\n\\end{split}\n\\end{equation}\nwhere $R^1(\\pi,z) $ represents a $\\pi$-SQR on spin 1 along the Z axis.\n\nCombining all the steps above forms the most generic UOD of the Hamiltonian -- the DM interaction in the presence of the Heisenberg XY interaction.\n\n\\section{Experimental Quantum Simulation} \\label{sec:expt}\nWe performed quantum simulation experiments in the two-qubit NMR system $^{13}CHCl_3$ (dissolved in Acetone-D6) \n(Fig. \\ref{fig:chcl3}), with the $^{13}C$ and $^1H$ spins acting as the two qubits and the scalar coupling (zz interaction -- Eqn. \\ref{eqn:zzh}) \nbetween them as the system Hamiltonian. We performed all the experiments in a Bruker AV-500 \nspectrometer.\n\n\\begin{figure}\n\\includegraphics[width=0.14\\textwidth, height=0.12\\textwidth]{chcl3.png}\n\\caption{$^{13}C$-labeled chloroform used for quantum simulation. $^{13}C$ and $^1H$ act as qubits with the zz \ninteraction ($J_{CH}$=215.1 Hz) between them.} \\label{fig:chcl3}\n\\end{figure}\n\nQuantum computation experiments in NMR start with (i) preparation of pseudo-pure states \n\\cite{cory_pps, gershenfeld_pps, Mahesh_sallt}, (ii) processing the state by evolving under different average \nHamiltonians \\cite{ernstbook} and (iii) read-out by quantum state tomography \\cite{avik_adiabatic}. \nHere we studied the entanglement dynamics (quantified by concurrence \\cite{coffman_concurrence}) of a Bell state \n(Eqn. \\ref{eqn:bell2}) under the Hamiltonian given in Eqn. \\ref{eqn:dmh}.\n\n\\begin{equation} \\label{eqn:bell2}\n |\\phi\\rangle_-=\\frac{1}{\\sqrt{2}}(|01\\rangle-|10\\rangle)\n\\end{equation}\n\nUsing the unitary operator decompositions shown in Eqn. 
\\ref{eqn:uod}, we have simulated the Hamiltonian $H(\\gamma, \\tau)$ \nfor $\\gamma$=\\{0.33, 0.66, 0.99\\} and studied the entanglement dynamics of the singlet Bell state (Eqn. \\ref{eqn:bell2}) under these Hamiltonians.\n\nIn the experimental implementation, $R^n(\\theta,\\phi)$ is implemented by a hard pulse \\cite{spindynamics} of suitable length \n(determined by $\\theta$) along the axis $\\phi$ on the $^1H$ or $^{13}C$ spin, and $U_{zz}(\\theta')$ is implemented by system \nHamiltonian evolution for a time determined by $\\theta'$ \\cite{ernstbook}. The experimental simulation results \n(Fig. \\ref{fig:ed}) show good agreement with the theoretical simulation. The average experimental deviation ($AED$) \nin concurrence -- calculated \nusing the formula in Eqn. \\ref{eqn:cd} -- is 3.83\\%, which is mainly due to decoherence effects and static and rf inhomogeneities. \n\n\n\\begin{equation} \\label{eqn:cd}\n AED= \\displaystyle \\frac{1}{n}\\sum_{i=1}^{n} \\frac{|C_{es}(i)-C_{ts}(i)|}{C_{ts}(i)}\n\\end{equation}\n\n\n\nwhere $C_{es}(i)$ and $C_{ts}(i)$ are the concurrences in the experimental and theoretical simulations and $n$ is the \nnumber of experimental points (here $n=16$).\n\\subsubsection{Entanglement Preservation}\n\nHou et al. \\cite{hou2012preservation} demonstrated a mechanism for entanglement preservation of a quantum state in a Hamiltonian of the \ntype given in Eqn. \\ref{eqn:dmu}. \n\nPreservation of the initial entanglement of a quantum state is achieved by free evolution interrupted with a certain operator $O$, which drives the \nstate back to its initial form. The operator sequence for preservation is given in Eqn. \\ref{eqn:epp},\n\n\\begin{equation} \\label{eqn:epp}\n OU{(\\gamma, \\tau)}OU{(\\gamma, \\tau)}\\equiv I,\n\\end{equation}\nwhere $O=I_1\\otimes \\sigma_{2z}$.\n\nWe performed the entanglement preservation experiment for the singlet state (Eqn. 
\\ref{eqn:bell2}) in $H(\\gamma,\\tau)$ with $\\gamma$=\\{0.33, 0.66, 0.99\\}.\nThe experimental results (Fig. \\ref{fig:ep}) show excellent entanglement preservation and good agreement with the theoretical simulation. \nThe experimental deviation of the concurrence (Eqn. \\ref{eqn:cd}) is less than 2\\%.\n\n\\begin{figure}[t]\n \\subfigure[~]{\\includegraphics[width=0.35\\textwidth, height=0.25\\textwidth]{ed.jpg} \\label{fig:ed}} \\\\\n \\subfigure[~]{\\includegraphics[width=0.35\\textwidth, height=0.25\\textwidth]{ep.jpeg} \\label{fig:ep}}\n\\caption{(a) Entanglement (concurrence) dynamics of the $^{13}C-H$ system under the Hamiltonian of Eqn. \\ref{eqn:dmh}. \n(b) Entanglement preservation experiment using Eqn. \\ref{eqn:epp}. Starting from the singlet state, the concurrence \nremains at 1 under the preservation procedure.}\n\\end{figure}\n\\section*{Conclusion}\nWe have performed Fidelity Profile Optimization for the Hamiltonian -- the DM interaction in the presence \nof the Heisenberg XY interaction. The optimized UOD can be used for all relative strengths ($\\gamma$) of the \ninteractions, and its length is invariant under $\\gamma$ and the evolution time. Using these decompositions, \nwe have experimentally verified the entanglement preservation mechanism \nsuggested by Hou et al.\n\\section*{Acknowledgments}\n\nWe thank Prof. Apoorva Patel for discussions and suggestions,\nand the NMR Research Centre for use of the AV-500 NMR spectrometer.\nV.S.M. 
thanks UGC-India for support through a fellowship.\n\n\n\\bibliographystyle{unsrtnat}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\\label{introsec}\\eqnoset\n\n\n\nIn this paper, for any bounded domain $\\Og$ with smooth boundary in ${\\rm I\\kern -1.6pt{\\rm R}}^n$, $n\\ge2$, we consider the following elliptic system of $m$ equations ($m\\ge2$) \n\\beqno{e1}\\left\\{\\begin{array}{l} -\\Div(A(x,u)Du)=\\hat{f}(x,u,Du),\\quad x\\in \\Og,\\\\\\mbox{$u=0$ or $\\frac{\\partial u}{\\partial \\nu}=0$ on $\\partial \\Og$}. \\end{array}\\right.\\end{equation}\nwhere $A(x,u)$ is an $m\\times m$ matrix in $x\\in\\Og$ and $u\\in{\\rm I\\kern -1.6pt{\\rm R}}^m$, and $\\hat{f}:\\Og\\times{\\rm I\\kern -1.6pt{\\rm R}}^m\\times {\\rm I\\kern -1.6pt{\\rm R}}^{mn}\\to{\\rm I\\kern -1.6pt{\\rm R}}^m$ is a vector valued function. \n\nWe say that $u$ is a strong solution if $u$ is continuous on $\\bar{\\Og}$ with $Du\\in L^\\infty_{loc}(\\Og)$ and $D^2u\\in L^2_{loc}(\\Og)$. Hence, $u$ solves \\mref{e1} almost everywhere in $\\Og$.\n\nThe strongly coupled system \\mref{e1} appears in many applications, ranging from differential geometry to physical models. For instance, it describes the steady states of Maxwell-Stefan systems describing the diffusive transport of multicomponent mixtures, models of reaction and diffusion in electrolysis and diffusion of polymers, or population dynamics, among others. 
\n\nIt is always assumed that the matrix $A(x,u)$ is elliptic in the sense that there exist two scalar positive continuous functions $\\llg_1(x,u), \\llg_2(x,u)$ such that \\beqno{genelliptic} \\llg_1(x,u)|\\zeta|^2 \\le \\myprod{A(x,u)\\zeta,\\zeta}\\le \\llg_2(x,u)|\\zeta|^2 \\quad \\mbox{for all } x\\in\\Og,\\,u\\in{\\rm I\\kern -1.6pt{\\rm R}}^m,\\,\\zeta \\in{\\rm I\\kern -1.6pt{\\rm R}}^{nm}.\\end{equation} If there exist positive constants $c_1,c_2$ such that $c_1\\le \\llg_1(x,u)$ and $\\llg_2(x,u)\\le c_2$ then we say that $A(x,u)$ is {\\em regular elliptic}. If $c_1\\le \\llg_1(x,u)$ and $\\llg_2(x,u)\/\\llg_1(x,u)\\le c_2$, we say that $A(x,u)$ is {\\em uniform elliptic}. On the other hand, if we allow $c_1=0$ and $\\llg_1(x,u)$ tend to zero (respectively, $\\infty$) when $|u|\\to\\infty$ then we say that $A(x,u)$ is {\\em singular} (respectively, {\\em degenerate}). \n\nThe first fundamental problem in the study of \\mref{e1} is the existence and regularity of its solutions. One can decide to work with either weak or strong solutions. In the first case, the existence of a weak solution can be achieved via Galerkin or variational methods \\cite{Gius} but its regularity (e.g., boundedness, H\\\"older continuity of the solution and its higher derivatives) is an open issue and difficult to address. Several works (see \\cite{Gius} and the reference therein) have been done along this line to establish only {\\em partial regularity} of {\\em bounded} weak solutions, wherever they are VMO. The assumption on the boundedness of weak solutions is a very severe and hard to check one, as maximum principles do not generally exist for systems (i.e. $m>1$) like \\mref{e1}. One usually needs to use ad hoc techniques on the case by case basis to show that $u$ is bounded. Even for bounded weak solutions, we only know that they are partially regular, i.e. H\\\"older continuous almost everywhere. 
Techniques in this approach rely heavily on the fact that $A(x,u)$ is {\\em regular elliptic}, and hence on the boundedness of weak solutions.\n\nIn our recent work \\cite{dleJFA}, we chose a different approach, making use of fixed point theory, and discussed the existence of {\\em strong} solutions of \\mref{e1} under the weak assumption that they are a priori VMO, not necessarily bounded, and under general structural conditions on the data of \\mref{e1} which are independent of $x$; we assumed only that $A(u)$ is {\\em uniformly elliptic}. Applications were presented in \\cite{dleJFA} when $\\llg_1(u)$ has a positive polynomial growth in $|u|$; without the boundedness assumption on the solutions, \\mref{e1} can thus be degenerate as $|u|\\to\\infty$.\n\nIn this paper, we will establish much stronger results than those in \\cite{dleJFA} under much more general assumptions on the structure of \\mref{e1} below. Besides the minor fact that the data can depend on $x$, we further allow that:\n\\begin{itemize}\n\t\\item $A(x,u)$ can be either degenerate or singular as $|u|$ tends to infinity;\n\t\\item $\\hat{f}(x,u,Du)$ can have a quadratic growth in $Du$.\n\t\n\\end{itemize} \n\n\nMost remarkably, the key assumption in \\cite{dleJFA} that $u$ is VMO will be replaced by a more versatile one in this paper: $K(u)$ is VMO for some suitable map $K:{\\rm I\\kern -1.6pt{\\rm R}}^m\\to{\\rm I\\kern -1.6pt{\\rm R}}^m$. This allows us to consider the singular case, where one may not be able to estimate the BMO norm of $u$ in small balls but can estimate that of $K(u)$. Examples of this case in applications will be provided in \\refsec{res}.\n\nOne of the key ingredients in the proof in \\cite{dleJFA} is the {\\em local} weighted Gagliardo-Nirenberg inequality involving BMO norm \\cite[Lemma 2.4]{dleANS}. This allows us to consider VMO solutions in \\cite{dleJFA}. 
In this paper, we make use of a new version of this inequality reported in our work \\cite{dleGNnew} replacing the BMO norm of $u$ by that of $K(u)$ for some suitable map $K:{\\rm I\\kern -1.6pt{\\rm R}}^m\\to{\\rm I\\kern -1.6pt{\\rm R}}^m$.\n\n\n\nWe consider the following structural conditions on the data of \\mref{e1}.\n\n\n\n\n\\begin{description}\n\n\\item[A)] $A(x,u)$ is $C^1$ in $x\\in\\Og$, $u\\in{\\rm I\\kern -1.6pt{\\rm R}}^m$ and there exist a constant $C_*>0$ and scalar $C^1$ positive functions $\\llg(u),\\og(x)$ such that for all $u\\in{\\rm I\\kern -1.6pt{\\rm R}}^m$, $\\zeta\\in{\\rm I\\kern -1.6pt{\\rm R}}^{mn}$ and $x\\in\\Og$ \n\\beqno{A1} \\llg(u)\\og(x)|\\zeta|^2 \\le \\myprod{A(x,u)\\zeta,\\zeta} \\mbox{ and } |A(x,u)|\\le C_*\\llg(u)\\og(x).\\end{equation} \n\nIn addition, there is a constant $C$ such that $|\\llg_u(u)||u|\\le C\\llg(u)$ and \\beqno{Axcond} |A_u(x,u)|\\le C|\\llg_u(u)|\\og(x),\\; |A_x(x,u)|\\le C|\\llg(u)||D\\og|.\\end{equation}\n\n\n\\end{description}\n\nHere and throughout this paper, if $B$ is a $C^1$ (vector valued) function in $u\\in {\\rm I\\kern -1.6pt{\\rm R}}^m$ then we abbreviate its derivative $\\frac{\\partial B}{\\partial u}$ by $B_u$. 
Also, with a slight abuse of notation, $A(x,u)\\zeta$, $\\myprod{A(x,u)\\zeta,\\zeta}$ in \\mref{genelliptic}, \\mref{A1} should be understood in the following way: For $A(x,u)=[a_{ij}(x,u)]$, $\\zeta\\in{\\rm I\\kern -1.6pt{\\rm R}}^{mn}$ we write $\\zeta=[\\zeta_i]_{i=1}^m$ with $\\zeta_i=(\\zeta_{i,1},\\ldots\\zeta_{i,n})$ and $$A(x,u)\\zeta=[\\Sigma_{j=1}^m a_{ij}\\zeta_j]_{i=1}^m,\\; \\myprod{A(x,u)\\zeta,\\zeta}=\\Sigma_{i,j=1}^m a_{ij}\\myprod{\\zeta_i,\\zeta_j}.$$\n\n\n\nWe also assume that $A(x,u)$ is regular elliptic for {\\em bounded} $u$.\n\n\\begin{description}\\item[AR)] $\\og\\in C^1(\\Og)$ and there are positive numbers $\\mu_*,\\mu_{**}$ such that \n\\beqno{Aregcond2} \\mu_*\\le\\og(x)\\le \\mu_{**}, \\; |D\\og(x)|\\le \\mu_{**} \\quad \\forall x\\in\\Og.\\end{equation} For any bounded set $K\\subset{\\rm I\\kern -1.6pt{\\rm R}}^m$ there is a constant $\\llg_*(K)>0$ such that\n\\beqno{Aregcond1} \\llg_*(K)\\le\\llg(u) \\quad \\forall u\\in K.\\end{equation} \\end{description}\n\nConcerning the reaction term $\\hat{f}(x,u,Du)$, which may have linear or {\\em quadratic} growth in $Du$, we assume the following condition.\n\\begin{description} \\item[F)] There exist a constant $C$ and a nonnegative differentiable function $f:{\\rm I\\kern -1.6pt{\\rm R}}^m\\to{\\rm I\\kern -1.6pt{\\rm R}}$ such that $\\hat{f}$ satisfies: For any differentiable vector valued functions $u:{\\rm I\\kern -1.6pt{\\rm R}}^n\\to{\\rm I\\kern -1.6pt{\\rm R}}^m$ and $p:{\\rm I\\kern -1.6pt{\\rm R}}^n\\to{\\rm I\\kern -1.6pt{\\rm R}}^{mn}$ we assume either that\n\\begin{description}\\item[f.1)] $\\hat{f}$ has a linear growth in $p$ \n\\beqno{FUDU11}|\\hat{f}(x,u,p)| \\le C\\llg(u)|p|\\og(x) + f(u)\\og(x),\\end{equation} $$|D\\hat{f}(x,u,p)| \\le C(\\llg(u)|Dp|+ |\\llg_u(u)||p|^2)\\og+ C\\llg(u)|p||D\\og|+C|D(f(u)\\og(x))|;$$\\end{description} or \\begin{description} \n\\item[f.2)] $\\llg_{uu}(u)$ exists and $\\hat{f}$ has a quadratic growth in $p$ 
\\beqno{FUDU112}|\\hat{f}(x,u,p)| \\le C|\\llg_u(u)||p|^2\\og(x) + f(u)\\og(x),\\end{equation} $$\\begin{array}{lll}|D\\hat{f}(x,u,p)| &\\le& C(|\\llg_u(u)||p||Dp|+ |\\llg_{uu}(u)||p|^3)\\og+ C|\\llg_u(u)||p|^2|D\\og|\\\\&&+C|D(f(u)\\og(x))|.\\end{array}$$ Furthermore, we assume that \\beqno{llgquadcond} |\\llg_{uu}(u)|\\llg(u)\\le C|\\llg_u(u)|^2.\\end{equation}\n\\end{description}\n\\end{description}\n\nBy a formal differentiation of \\mref{FUDU11} and \\mref{FUDU112}, one can see that the growth conditions for $\\hat{f}$ naturally imply those of $D\\hat{f}$ in the above assumption. The condition \\mref{llgquadcond} is verified easily if $\\llg(u)$ has a polynomial growth in $|u|$.\n\n\n\n\nWe organize our paper as follows. In \\refsec{res} we state our main results and their applications, which are actually consequences of the most general but technical \\reftheo{gentheo1} in \\refsec{w12est}, where we deal with a general map $K$. The proofs of the results in \\refsec{res}, which provide examples of the map $K$, will thus be provided in \\refsec{proofmainsec}. In \\refsec{GNsec} we state the new version of the local weighted Gagliardo-Nirenberg inequality in \\cite{dleGNnew} to prepare for the proof of the main technical theorem in \\refsec{w12est}. We collect some elementary but useful facts for our proof in \\refsec{appsec}.\n\n\n\n\\section{Preliminaries and Main Results}\\eqnoset\\label{res}\n\n\n\n\n\n\n\n\nWe state the main results of this paper in this section. 
In fact, these results are consequences of our main technical results in \\refsec{w12est} assuming the general conditions A) and F) and, roughly speaking, some a~priori knowledge of the smallness of the BMO norm of $K(u)$ for a general map $K:{\\rm I\\kern -1.6pt{\\rm R}}^m\\to{\\rm I\\kern -1.6pt{\\rm R}}^m$ and any strong solution $u$ to \\mref{e1}.\n\n\n\n\nWe define the measure $d\\mu =\\og(x)dx$ and recall that a vector-valued function $f\\in L^1(\\Og,\\mu)$ is said to be in $BMO(\\Og,\\mu)$ if \\beqno{bmodef} [f]_{*,\\mu}:=\\sup_{B_R\\subset\\Og}\\mitmu{B_R}{|f-f_{B_R}|}<\\infty,\\quad f_{B_R}:=\\frac{1}{\\mu(B_R)}\\iidmu{B_R}{f}.\\end{equation} We then define $$\\|f\\|_{BMO(\\Og,\\mu)}:=[f]_{*,\\mu}+\\|f\\|_{L^1(\\Og,\\mu)}.$$\n\n\n\n\nThroughout this paper, in our statements and proofs, we use $C,C_1,\\ldots$ to denote various constants which can change from line to line but depend only on the parameters of the hypotheses in an obvious way. We will write $C(a,b,\\ldots)$ when we need to emphasize that $C$ is bounded in terms of its parameters $a,b,\\ldots$. We also write $a\\lesssim b$ if there is a universal constant $C$ such that $a\\le Cb$. In the same way, $a\\sim b$ means $a\\lesssim b$ and $b\\lesssim a$.\n\nTo begin, as in \\cite{dleANS}, where $A$ is independent of $x$, we assume that the eigenvalues of the matrix $A(x,u)$ are not too far apart. 
Namely, for $C_*$ defined in \\mref{A1} of A) we assume\n\\begin{description}\\item[SG)] $(n-2)\/n<C_*^{-1}$.\\end{description}\n\nOur first main result is the following.\n\n\\btheo{dleNL-mainthm} Assume A), F) and SG). Suppose that there exist $r_0>n\/2$, $\\bg_0\\in(0,1)$ such that for any strong solution $u$ of \\mref{e1famzzz} the following quantities\n\\beqno{llgmainhyp009} \\|\\llg^{-1}(u)\\|_{L^{\\frac n2}(\\Og,\\mu)},\\;\\||f_u(u)|\\llg^{-1}(u)\\|_{L^{r_0}(\\Og,\\mu)},\\; \\|(\\llg(u)|u|^2)^{\\bg_0}\\|_{L^{1}(\\Og,\\mu)},\\end{equation} \n\\beqno{llgmainhyp0099}\\iidmu{\\Og}{(|f_u(u)|+\\llg(u))|Du|^{2}}\\end{equation} are bounded by some constant $C_0$.\n\nDefine $K_0:{\\rm I\\kern -1.6pt{\\rm R}}^m\\to{\\rm I\\kern -1.6pt{\\rm R}}^m$ by $K_0(u)=|\\log(|U|)||U|^{-1}U$, $U=[\\llg_0+|u_i|]_{i=1}^m$. We assume that the BMO norms of $K_0(u)$ and $\\log(\\llg_0+|u|)$ are small in small balls in the following sense: For any $\\eg>0$ there is $R_\\eg>0$ such that \\beqno{mainloghyp} \\|\\log(\\llg_0+|u|)\\|_{BMO(B_R,\\mu)},\\;\\|K_0(u)\\|_{BMO(B_R,\\mu)}<\\eg \\quad \\mbox{for all $B_R\\subset\\Og$ with $R\\le R_\\eg$}.\\end{equation} \n\nThen \\mref{e1} has a strong solution $u$.\n\n \\end{theorem}\n \n\n\nThe condition \\mref{mainloghyp} on the smallness of the BMO norm of $K_0(u)$ in small balls is the most crucial one in applications. The next result is more convenient for checking this condition. \n\n\\bcoro{dlec1coro} The conclusion of \\reftheo{dleNL-mainthm} holds if \\mref{mainloghyp} is replaced by: There exist $\\ag\\in(0,1)$ and a constant $C_0$ such that \n\\beqno{pis20az5} \\iidmu{\\Og}{(\\llg_0+|u|)^{-n\\ag}|Du|^{n}}\\le C_0.\\end{equation}\n\n\\end{corol}\n\nIn \\cite{dleJFA}, we consider the case $\\llg(u)\\sim(\\llg_0+|u|)^k$ with $k>0$ and assume that $u$ has small BMO norm in small balls, which can be verified by establishing that $\\|Du\\|_{L^n(\\Og)}$ is bounded. The assumption \\mref{pis20az5} is of course much weaker, especially when $|u|$ is large, and applies also to the case $k<0$.\n\n\n\nWe present an application of \\refcoro{dlec1coro}. This example concerns cross diffusion systems with polynomial growth data on planar domains. 
This type of system occurs in many applications in mathematical biology and ecology. We will see that the assumptions of the corollary can be verified by a very simple integrability assumption on the solutions. \n\n\n\\bcoro{2dthm1} Let $n=2$. Suppose A), F) and $f(u)\\lesssim (\\llg_0+|u|)^l$ and $\\llg(u)\\sim(\\llg_0+|u|)^{k}$ for some $k,l$ satisfying \\beqno{klcond} k>\\frac{-2C_*}{C_*-1} \\mbox{ and } l-k<\\frac{C_*+1}{C_*-1}.\\end{equation} \nIf $\\hat{f}$ has a quadratic growth in $Du$ as in \\mref{FUDU112} of f.2) then we assume further that \\beqno{FUDU112z} |\\hat{f}(x,u,p)| \\le \\eg_0|\\llg_u(u)||p|^2 + f(u),\\end{equation} with $\\eg_0$ being sufficiently small.\n\n\nIf there is a constant $C_0$ such that for {\\em any strong} solution $u$ of \\mref{e1famzzz} \\beqno{keyn2lnorm0} \\|u\\|_{L^{l_0}(\\Og,\\mu)}\\le C_0\\quad \\mbox{for some $l_0>\\max\\{l,l-k-1\\}$},\\end{equation} then \\mref{e1} has a strong solution $u$. \\end{corol}\n\nThe assumption \\mref{keyn2lnorm0} is a very weak one. For example, if $k\\ge-1$ then we see from the growth condition $|f(u)|\\lesssim (\\llg_0+|u|)^l$ that \\mref{keyn2lnorm0} simply requires that $l_0>l$, or equivalently, $f(u)\\in L^r(\\Og,\\mu)$ for some $r>1$. \n\nThis result greatly generalizes \\cite[Corollary 3.9]{dleJFA} in many aspects. Besides the fact that we allow quadratic growth in $Du$ for $\\hat{f}(x,u,Du)$ and $k<0$, we also consider a much more general relation in \\mref{klcond} between the growths of $f(u)$ and $\\llg(u)$, while we assume in \\cite{dleJFA} that $f(u)\\lesssim \\llg(u)|u|$ (i.e. 
$l-k=1$).\n\n\n\nIn the second main result, we consider the following generalized SKT system (see \\cite{dleJFA,SKT,yag}) with Dirichlet or Neumann boundary conditions on a bounded domain $\\Og\\subset{\\rm I\\kern -1.6pt{\\rm R}}^n$ with $n\\le4$.\n\\beqno{genSKT} -\\Delta(P_i(u))=B_i(u,Du)+ f_i(u), \\quad i=1,\\ldots,m.\\end{equation} Here, $P_i:{\\rm I\\kern -1.6pt{\\rm R}}^m\\to{\\rm I\\kern -1.6pt{\\rm R}}$ are $C^2$ functions. The functions $B_i, f_i$ are $C^1$ functions on ${\\rm I\\kern -1.6pt{\\rm R}}^m\\times{\\rm I\\kern -1.6pt{\\rm R}}^{mn}$ and ${\\rm I\\kern -1.6pt{\\rm R}}^m$ respectively. We will assume that $B_i(u,Du)$ has linear growth in $Du$.\n\nBy a different choice of the map $K$ in the main technical theorem, we have the following.\n\n\\btheo{n3SKT} Assume that the matrix $A(u)=\\mbox{diag}[(P_i)_u(u)]$ satisfies the condition A) with $\\llg(u)\\sim (\\llg_0+|u|)^k$ for some $k\\ge-1$. Moreover, $\\hat{f}(u,Du)=[B_i(u,Du)+f_i(u)]_{i=1}^m$ satisfies the following special version of f.1) in F): $$|B_i(u,Du)|\\le C\\llg(u)|Du|, \\; |f_i(u)|\\le f(u).$$\n\n\nThus, \\mref{genSKT} can be written as \\mref{e1}. Assume that there exist $r_0>n\/2$ and a constant $C_0$ such that for {\\em any strong} solution $u$ of \\mref{e1famzzz}\n\\beqno{fuhyp0}\\|f_u(u)\\llg^{-1}(u)\\|_{L^{r_0}(\\Og)}\\le C_0,\\end{equation} and that the following conditions hold. \\begin{description}\\item[i)] If $k\\ge0$ then $\\|u\\|_{L^1(\\Og)}\\le C_0$. \\item[ii)] If $k\\in[-1,0)$ then $\\|u\\|_{L^{-kn\/2}(\\Og)}\\le C_0$. Furthermore,\n\\beqno{fuhyp0k}\\|\\llg^{-2}(u)f_u(u)\\|_{L^\\frac{n}{2}(\\Og)}\\le C_0.\\end{equation}\n\\end{description}\n\nThen \n\\mref{genSKT} has a strong solution for $n=2,3,4$.\n\\end{theorem}\n\nThe above result generalizes \\cite[Corollary 3.10]{dleJFA}, where we assumed that $\\hat{f}$ is independent of $Du$, $k>0$ and $f(u)\\lesssim \\llg(u)|u|$. In this case, it is natural to assume that $|f_u(u)|\\lesssim\\llg(u)$ so that \\mref{fuhyp0} obviously holds. 
We also have $|\\llg^{-2}(u)f_u(u)|\\lesssim \\llg^{-1}(u)$ so that \\mref{fuhyp0k} is in fact a consequence of the assumption $\\|u\\|_{L^{-kn\/2}(\\Og)}\\le C_0$ in ii).\n\nWe should remark that all the assumptions on strong solutions of the family \\mref{e1famzzz} can be checked by considering the case $\\sg=1$ (i.e. \\mref{e1} or \\mref{genSKT}) because these systems satisfy the same structural conditions uniformly with respect to the parameter $\\sg\\in[0,1]$.\n\n\n\\section{A general local weighted Gagliardo-Nirenberg inequality} \\eqnoset\\label{GNsec}\nIn this section, we present a local weighted Gagliardo-Nirenberg inequality from our recent work \\cite{dleGNnew}, which will be one of the main ingredients of the proof of our main technical theorem in \\refsec{w12est}. \nThis inequality generalizes \\cite[Lemma 2.4]{dleANS} by replacing the Lebesgue measure with a general one and the BMO norm of $u$ with that of $K(u)$, where $K$ is a suitable map on ${\\rm I\\kern -1.6pt{\\rm R}}^m$, so the applications of our main technical theorem in the next section will be much more versatile than those in \\cite{dleJFA,dleANS}.\n\nLet us begin by describing the assumptions in \\cite{dleGNnew} for this general inequality.\nWe need to recall some well-known notions from Harmonic Analysis.\n\nLet $\\og\\in L^1(\\Og)$ be a nonnegative function and define the measure $d\\mu=\\og(x)dx$. For any $\\mu$-measurable subset $A$ of $\\Og$ and any locally $\\mu$-integrable function $U:\\Og\\to{\\rm I\\kern -1.6pt{\\rm R}}^m$ we denote by $\\mu(A)$ the measure of $A$ and by $U_A$ the average of $U$ over $A$. That is, $$U_A=\\mitmu{A}{U(x)} =\\frac{1}{\\mu(A)}\\iidmu{A}{U(x)}.$$\n\n\n\n\n\nWe say that $\\Og$ and $\\mu$ support a $q_*$-Poincar\\'e inequality if the following holds. 
\\begin{description} \\item[P)] There exist $q_*\\in(0,2]$, $\\tau_*\\ge 1$ and some constant $C_P$ such that \\beqno{Pineq2} \n\\mitmu{B}{|h-h_{B}|}\\le C_Pl(B)\n\\left(\\mitmu{\\tau_*B}{|Dh|^{q_*}}\\right)^\\frac{1}{q_*}\\end{equation}\nfor any cube $B\\subset\\Og$ with side length $l(B)$ and any function $h\\in C^1(B)$. \\end{description}\n\nHere and throughout this section, we denote by $l(B)$ the side length of $B$ and by $\\tau B$ the cube which is concentric with $B$ and has side length $\\tau l(B)$. We also write $B_R(x)$ for a cube centered at $x$ with side length $R$ and sides parallel to the standard axes of ${\\rm I\\kern -1.6pt{\\rm R}}^n$. We will omit $x$ in the notation $B_R(x)$ if no ambiguity can arise.\n\nWe consider the following conditions on the density $\\og(x)$.\n\n\\begin{description} \\item[LM.1)] For some $N\\in(0,n]$ and any ball $B_r$ we have $\\mu(B_r)\\le C_\\mu r^N$. \nAssume also that $\\mu$ supports the 2-Poincar\\'e inequality \\mref{Pineq2} in P). Furthermore, $\\mu$ is doubling and satisfies the following inequality for some $s_*>0$ \\beqno{fracmu} \\left(\\frac{r}{r_0}\\right)^{s_*}\\le C_\\mu\\frac{\\mu(B_r(x))}{\\mu(B_{r_0}(x_0))},\\end{equation} where $B_r(x), B_{r_0}(x_0)$ are any cubes with $x\\in B_{r_0}(x_0)$.\n\n\\item[LM.2)] $\\og=\\og_0^2$ for some $\\og_0\\in C^1(\\Og)$ and $d\\mu=\\og_0^2 dx$ also supports a Hardy type inequality: There is a constant $C_H$ such that for any function $u\\in C^1_0(\\Og)$\\beqno{lehr1m} \\iidx{\\Og}{|u|^2|D\\og_0|^2}\\le C_H\\iidx{\\Og}{|Du|^2\\og_0^2}.\\end{equation} \n\\end{description}\n\n\n\nFor $\\cg\\in(1,\\infty)$ we say that a nonnegative locally integrable function $w$ belongs to the class $A_\\cg$, or $w$ is an $A_\\cg$ weight on $\\Og$, if the quantity\n\\beqno{aweight} [w]_{\\cg,\\Og} := \\sup_{B\\subset\\Og} \\left(\\mitmu{B}{w}\\right) \\left(\\mitmu{B}{w^{1-\\cg'}}\\right)^{\\cg-1} \\quad\\mbox{is finite}.\\end{equation}\nHere, $\\cg'=\\cg\/(\\cg-1)$. 
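To fix ideas, we recall the classical power-weight example; it is stated for the unweighted case $\\og\\equiv1$ (Lebesgue measure) and is included here only for orientation, not taken from \\cite{dleGNnew}; the exponent $\\tau$ below is ours.

```latex
% Classical power weights in the Muckenhoupt class $A_\cg$, stated with
% respect to the Lebesgue measure (i.e. $\og\equiv1$); $\tau$ is a real exponent.
\[
  w(x)=|x|^{\tau}\ \mbox{is an $A_\cg$ weight on ${\rm I\kern -1.6pt{\rm R}}^{n}$}
  \quad\Longleftrightarrow\quad -n<\tau<n(\cg-1).
\]
% The restriction $\tau>-n$ makes $w$ locally integrable, so averages of $w$
% over balls $B_r(0)$ behave like $r^{\tau}$; the restriction $\tau<n(\cg-1)$,
% i.e. $\tau(1-\cg')>-n$, makes $w^{1-\cg'}$ locally integrable, with averages
% behaving like $r^{\tau(1-\cg')}$.  Since $\tau+(\cg-1)\tau(1-\cg')=0$, the
% product in \mref{aweight} is bounded uniformly over all balls.
```

Away from the origin both factors in \\mref{aweight} are essentially constant, so balls centered at $0$ are the extremal ones in this example.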
For more details on these classes we refer the reader to \\cite{OP,st}. If the domain $\\Og$ is clear from the context we simply denote $ [w]_{\\cg,\\Og}$ by $ [w]_{\\cg}$.\n\n\n\n\n\nWe assume the following hypotheses.\n\n\\begin{description}\n\\item[A.1)] Let $K:\\mbox{dom}(K)\\to{\\rm I\\kern -1.6pt{\\rm R}}^m$ be a $C^1$ map on a domain $\\mbox{dom}(K)\\subset{\\rm I\\kern -1.6pt{\\rm R}}^m$ such that $\\DKTmU=(K_U(U)^{-1})^T$ exists and $\\mccK_U\\in L^\\infty(\\mbox{dom}(K))$.\n\nFurthermore, let $\\Fg,\\LLg:\\mbox{dom}(K)\\to{\\rm I\\kern -1.6pt{\\rm R}}^+$ be $C^1$ positive functions. We assume that for all $U\\in \\mbox{dom}(K)$\n\\beqno{kappamain} |\\DKTmU|\\lesssim \\LLg(U)\\Fg^{-1}(U),\\end{equation}\n\\beqno{logcondszmain}|\\Fg_U(U)||\\mccK(U)|\\lesssim \\Fg(U).\\end{equation}\n\\end{description}\n\n\nLet $\\Og_*$ be a proper subset of $\\Og$ and $\\og_*$ be a function in $C^1(\\Og)$ satisfying \n\\beqno{subogm} \\og_*\\equiv 1 \\mbox{ in $\\Og_*$ and } \\og_*\\le 1 \\mbox{ in $\\Og$}.\\end{equation}\n\nFor any $U\\in C^2(\\Og,\\mbox{dom}(K))$ we denote\n\\beqno{Idefm} I_1:=\\iidmu{\\Og}{\\Fg^2(U)|DU|^{2p+2}},\\;\nI_2:=\\iidmu{\\Og}{\\LLg^2(U)|DU|^{2p-2}|D^2U|^2},\\end{equation}\n\\beqno{Idef1zm} \\myIbar:=\\iidmu{\\Og}{|\\LLg_U(U)|^2|DU|^{2p+2}},\\;I_{1,*}:=\\iidmu{\\Og_*}{\\Fg^2(U)|DU|^{2p+2}},\\end{equation} \\beqno{I0*m} \\breve{I}_{0,*}:=\\sup_\\Og|D\\og_*|^2\\iidmu{\\Og}{\\LLg^2(U)|DU|^{2p}}.\\end{equation} \n\nBy \\refrem{PSrem} below, the assumption PS) in \\cite{dleGNnew} that $\\mu$ supports a Poincar\\'e-Sobolev inequality is then satisfied. We established the following local weighted Gagliardo-Nirenberg inequality in \\cite{dleGNnew}.\n\n\\btheo{GNlocalog1m} Suppose LM.1)-LM.2), A.1). Let $U\\in C^2(\\Og,\\mbox{dom}(K))$ satisfy\n\\beqno{boundaryzm}\\myprod{\\og_*\\og_0^2 \\Fg^2(U)\\DKTmU DU,\\vec{\\nu}}=0\\end{equation} on $\\partial\\Og$, where $\\vec{\\nu}$ is the outward normal vector of $\\partial\\Og$. 
Let $\\myPi(x):=\\LLg^{p+1}(U(x))\\Fg^{-p}(U(x))$ and assume that $[\\myPi^{\\ag}]_{\\bg+1}$ is finite for some $\\ag>2\/(p+2)$ and $\\bg>0$. Then for any $\\eg>0$ there are constants $C, C([\\myPi^{\\ag}]_{\\bg+1})$ such that\n\\beqno{GNlocog11m}I_{1,*}\\le \\eg I_1+\\eg^{-1}C\\|K(U)\\|_{BMO(\\mu)}^2[I_2+\\myIbar+C([\\myPi^{\\ag}]_{\\bg+1}) [I_2+\\myIbar+\\breve{I}_{0,*}]].\\end{equation}\nHere, $C$ also depends on $C_{P},C_\\mu$ and $C_H$.\n\n\\end{theorem}\n\n\n\nFor our purpose in this paper we need only a special case of \\reftheo{GNlocalog1m} in which $\\Og,\\Og_*$ are concentric balls $B_t,B_s$ with $0<s<t$ and $\\og_*$ is a cutoff function for them.\n\n\\bcoro{GNlocalog1mcoro} Suppose LM.1)-LM.2), A.1) and let $B_s(x_0),B_t(x_0)$ be concentric balls with $0<s<t$, with $\\og_*$ a cutoff function satisfying \\mref{subogm} for $\\Og_*=B_s(x_0)$ and $|D\\og_*|\\lesssim(t-s)^{-1}$. Assume that $[\\myPi^{\\ag}]_{\\bg+1}$ is finite for some $\\ag>2\/(p+2)$ and $\\bg>0$. Then for any $\\eg>0$ and any ball $B_s(x_0)$, $0<s<t$, we have\n\\beqno{GNlocog11mcoro}I_{1,*}\\le \\eg I_1+C(\\eg,U,\\myPi)[I_2+\\myIbar+\\breve{I}_{0,*}],\\end{equation} where $C(\\eg,U,\\myPi)$ is determined by $\\eg^{-1}$, $\\|K(U)\\|_{BMO(B_t,\\mu)}^2$ and $C([\\myPi^{\\ag}]_{\\bg+1})$.\\end{corol}\n\n\\brem{PSrem} We say that $\\Og$ and $\\mu$ support a Poincar\\'e-Sobolev inequality if for some $\\pi_*>2$ and $q_*<2$ and some constant $C_{PS}$\n\\beqno{PSineq} \\frac{1}{l(B)}\n\\left(\\mitmu{B}{|u-u_{B}|^{\\pi_*}}\\right)^\\frac1{\\pi_*} \\le C_{PS}\n\\left(\\mitmu{\\tau_*B}{|Du|^{2}}\\right)^\\frac{1}{2},\\quad \\pi_*>2.\\end{equation}\\erem In fact, if $q_*<s_*$ then \\cite[1) of Theorem 5.1]{Haj} shows that \\mref{Pineq2} yields \\mref{PSineq} with $\\pi_*=s_*q_*\/(s_*-q_*)$, and $\\pi_*>2$ if $s_*<2q_*\/(2-q_*)$. This is the case if we choose $q_*<2$ and close to 2. Hence, the assumption PS) in \\cite{dleGNnew} that $\\mu$ supports a Poincar\\'e-Sobolev inequality \\mref{PSineq} is then satisfied for some $q_*<2$ and $\\pi_*>2$ (the dimensional parameters $d,n$ in that paper are now denoted by $n,N$ respectively).\n\n\\brem{PSremH} If $q_*=s_*$, \\cite[2) of Theorem 5.1]{Haj} shows that \\mref{PSineq} holds true\nfor any $\\pi_*>1$. In addition, if $q_*>s_*$ then the H\\\"older norm of $u$ is bounded in terms of $\\|Du\\|_{L^{q_*}(\\Og,\\mu)}$. \n\n\\erem\n\n\n\n\\section{The main technical theorem}\\eqnoset\\label{w12est}\n\nIn this section, we establish the main result of this paper. We consider the following system\n\\beqno{gensys}\\left\\{\\begin{array}{l} -\\Div(A(x,u)Du)=\\hat{f}(x,u,Du),\\quad x\\in \\Og,\\\\\\mbox{$u=0$ or $\\frac{\\partial u}{\\partial \\nu}=0$ on $\\partial \\Og$}. 
\\end{array}\\right.\\end{equation}\n\nWe embed this system in the following family of systems \n\\beqno{gensysfam}\\left\\{\\begin{array}{l} -\\Div(A(x,\\sg u)Du)=\\hat{f}(x,\\sg u,\\sg Du),\\quad x\\in \\Og, \\sg\\in[0,1],\\\\\\mbox{$u=0$ or $\\frac{\\partial u}{\\partial \\nu}=0$ on $\\partial \\Og$}. \\end{array}\\right.\\end{equation}\n\n\nFirst of all, we will assume that the system \\mref{gensys} satisfies the structural conditions A) and F). We then impose additional assumptions which guarantee the validity of the local weighted Gagliardo-Nirenberg inequality of \\refcoro{GNlocalog1mcoro} with $\\LLg(U)=\\llg^\\frac12(U)$.\n\n\n\\begin{description} \n\\item[H)] There is a $C^1$ map $K:{\\rm I\\kern -1.6pt{\\rm R}}^m\\to{\\rm I\\kern -1.6pt{\\rm R}}^m$ such that $\\mathbb{K}(u)=(K_u(u)^{-1})^T$ exists and $\\mathbb{K}_U\\in L^\\infty({\\rm I\\kern -1.6pt{\\rm R}}^m)$.\nFurthermore, for all $u\\in {\\rm I\\kern -1.6pt{\\rm R}}^m$\n\\beqno{kappamainz} |\\mathbb{K}(u)|\\lesssim \\llg(u)|\\llg_u(u)|^{-1}.\\end{equation}\n\\end{description}\n\n\\brem{polyrem} We can see that the condition H) implies the condition A.1) in \\reftheo{GNlocalog1m} with $\\LLg(u)=\\llg^\\frac12(u)$ and $\\Fg(u)=|\\LLg_u(u)|$, so that \\refcoro{GNlocalog1mcoro} and \\mref{GNlocog11mcoro} are then applicable. Indeed, the assumption \\mref{kappamain} in this case is \\mref{kappamainz}. It is not difficult to see that the assumption in f.2) that $|\\llg_{uu}(u)|\\llg(u)\\lesssim |\\llg_u(u)|^2$ and \\mref{kappamainz} imply $|\\Fg_u(u)||\\mathbb{K}(u)|\\lesssim \\Fg(u)$, which gives \\mref{logcondszmain} of A.1). Hence, A.1) holds by H). In particular, if $\\llg$ has a polynomial growth in $u$, i.e. $\\llg(u)\\sim (\\llg_0+|u|)^k$ for some $k\\ne0$ and $\\llg_0\\ge0$, then H) reduces to the simple condition $|\\mathbb{K}(u)|\\lesssim|u|$.\n\\erem\n\n\nFor any strong solution $u$ of \\mref{gensysfam} we will consider the following assumptions. 
The exponents $s_*,\\pi_*$ are defined in \\mref{fracmu} and in the Poincar\\'e-Sobolev inequality \\mref{PSineq}.\n\n\\begin{description} \n\n\\item[M.0)] There exist a constant $C_0$ and some $r_0>r_*:={\\pi_*}\/({\\pi_*}-2)$ such that\n\\beqno{llgmainhyp0} \\|\\llg^{-1}(u)\\|_{L^{r_*}(\\Og,\\mu)},\\;\\|\\llg^{-1}(u)f_u(u)\\|_{L^{r_0}(\\Og,\\mu)}\\le C_0,\\end{equation}\n\\beqno{llgB3}\\iidmu{\\Og}{(|f_u(u)|+\\llg(u))|Du|^{2}}\\le C_0.\\end{equation}\n\n\\item[M.1)] For any given \n$\\mu_0>0$ \nthere is positive $R_{\\mu_0}$ sufficiently small in terms of the constants in A) and F) such that \\beqno{Keymu0} \\sup_{x_0\\in\\bar{\\Og}}\\|K(u)\\|_{BMO(B_{R}(x_0)\\cap\\Og,\\mu)}^2 \\le \\mu_0.\\end{equation} \n\nFurthermore, for $\\myPi_p(x):= \\llg^{p+\\frac12}(u)|\\llg_u(u)|^{-p}$ and any $p\\in[1,s_*\/2]$ there exist some $\\ag>2\/(p+2)$, $\\bg>0$ and a constant $C_0$ such that $\\sup_{x_0\\in\\bar{\\Og}}[\\myPi_p^{\\ag}]_{\\bg+1,\\Og_R(x_0)}\\le C_0$ for all $R\\le R_{\\mu_0}$.\n\\end{description}\n\n\\btheo{gentheo1} Assume A), F), H), M.0) and M.1). Then \\mref{gensys} has a strong solution $u$.\\end{theorem}\n\nThe key step of the proof is to show that there exist $p>s_*\/2$ and a constant $M_*$ depending only on the constants in A) and F) such that any strong solution $u$ of \\mref{gensysfam} will satisfy \\beqno{M*def} \\|Du\\|_{L^{2p}(\\Og,\\mu)}\\le M_*.\\end{equation} This and \\refrem{PSremH} imply that there are positive constants $\\ag,M_0$ such that \\beqno{M0def} \\|u\\|_{C^{\\ag}(\\Og)}\\le M_0.\\end{equation}\n\nFor $\\sg\\in[0,1]$ and any $u\\in {\\rm I\\kern -1.6pt{\\rm R}}^m$ and $\\zeta\\in{\\rm I\\kern -1.6pt{\\rm R}}^{mn}$ we define $F(\\sg,x,u,\\zeta):=\\hat{f}(x,\\sg u,\\sg\\zeta)$ and the vector-valued functions $F^{(\\sg)}$ and $f^{(\\sg)}$ by \\beqno{Bfdef}F^{(\\sg)}(x,u,\\zeta):=\\int_{0}^{1}\\partial_\\zeta F(\\sg,x,u,t\\zeta)\\,dt,\\quad f^{(\\sg)}(x,u):=\\int_{0}^{1}\\partial_u F(\\sg,x,tu,0)\\,dt.\\end{equation}\n\n\n\n \n For any given $u,w\\in W^{1,2}(\\Og)$ we write \\beqno{Bfdefalin}\\mathbf{\\hat{f}}(\\sg,x, u,w)=F^{(\\sg)}(x,u,Du)Dw+f^{(\\sg)}(x,u)w+\\hat{f}(x,0,0).\\end{equation}\n\n\nFor any given $u\\in W^{1,2}(\\Og)$ satisfying \\mref{M0def} we consider the following linear systems, noting that $\\mathbf{\\hat{f}}(\\sg,x,u,w)$ is linear in $w,Dw$\n\\beqno{Tmapdef}\\left\\{\\begin{array}{l} -\\Div(A(x,\\sg 
u)Dw)+Lw=\\mathbf{\\hat{f}}(\\sg,x,u,w)+Lu\\quad x\\in \\Og, \\\\\\mbox{$w=0$ or $\\frac{\\partial w}{\\partial \\nu}=0$ on $\\partial \\Og$}. \\end{array}\\right.\\end{equation}\nHere, $L$ is a suitable positive definite matrix depending on the constant $M_0$ such that the above system has a unique weak solution $w$ if $u$ satisfies \\mref{M0def}. We then define $T_\\sg(u)=w$ and apply the Leray-Schauder fixed point theorem to establish the existence of a fixed point of $T_1$.\nIt is clear from \\mref{Bfdefalin} that $\\hat{f}(x,\\sg u,\\sg Du)=\\mathbf{\\hat{f}}(\\sg,x,u,u)$. \nTherefore, from the definition of $T_\\sg$ we see that a fixed point of $T_\\sg$ is a weak solution of \\mref{gensysfam}. By an appropriate choice of $\\mX$, we will show that these fixed points are strong solutions of \\mref{gensysfam}, and so a fixed point of $T_1$ is a strong solution of \\mref{gensys}.\n\n\nFrom the proof of the Leray-Schauder fixed point theorem in \\cite[Theorem 11.3]{GT}, we need to find some ball $B_M$ of radius $M$ centered at $0$ in $\\mX$ such that $T_\\sg: \\bar{B}_M\\to \\mX$ is compact and that $T_\\sg$ has no fixed point on the boundary of $B_M$. The topological degree $\\mbox{ind}(T_\\sg, B_M)$ is then well defined and invariant under homotopy, so that $\\mbox{ind}(T_1, B_M)=\\mbox{ind}(T_0, B_M)$. It is easy to see that the latter is nonzero because the linear system $$\\left\\{\\begin{array}{l} -\\Div(A(x,0)Du)=\\hat{f}(x,0,0)\\quad x\\in \\Og, \\\\\\mbox{$u=0$ or $\\frac{\\partial u}{\\partial \\nu}=0$ on $\\partial \\Og$}, \\end{array}\\right.$$ has a unique solution in $B_M$. Hence, $T_1$ has a fixed point in $B_M$.\n\nTherefore, the theorem is proved once we establish the following claims.\n\n\\begin{description} \\item[Claim 1] There exist a Banach space $\\mX$ and $M>0$ such that the map $T_\\sg:\\bar{B}_M\\to\\mX$ is well defined and compact. \n\\item[Claim 2] $T_\\sg$ has no fixed point on the boundary of $\\bar{B}_M$. 
That is, $\\|u\\|_\\mX< M$ for any fixed point $u=T_\\sg(u)$.\n\\end{description}\n\n\nThe following lemma establishes Claim 1.\n\n\\blemm{Tmaplem} Suppose that there exist $p>s_*\/2$ and a constant $M_*$ such that any strong solution $u$ of \\mref{gensysfam} satisfies \\beqno{M*defz} \\|Du\\|_{L^{2p}(\\Og,\\mu)}\\le M_*.\\end{equation} Then, there exist $M,\\bg>0$ such that for $\\mX=C^{\\bg}(\\Og)\\cap W^{1,2}(\\Og)$ the map $T_\\sg:\\bar{B}_M\\to\\mX$ is well defined and compact for all $\\sg\\in[0,1]$. Moreover, $T_\\sg$ has no fixed points on $\\partial B_M$. \\end{lemma}\n\n\n{\\bf Proof:~~} \nFor some constant $M_0>0$ we consider $u:\\Og\\to{\\rm I\\kern -1.6pt{\\rm R}}^m$ satisfying \n\\beqno{Xdefstartzz} \\|u\\|_{C(\\Og)}\\le M_0,\\; \\|Du\\|_{L^2(\\Og)}\\le M_0,\\end{equation}\nand write the system \\mref{Tmapdef} as a linear elliptic system for $w$\n\\beqno{LSUsys} -\\Div(\\mathbf{a}(x) Dw) +\\mathbf{b}(x)Dw+\\mathbf{g}(x)w+Lw =\\mathbf{f}(x),\\end{equation} where $ \\mathbf{a}(x)=A(x,\\sg u)$, $\\mathbf{b}(x)=F^{(\\sg)}(x,u,Du)$, $\\mathbf{g}(x)=f^{(\\sg)}(x,u)$, and $\\mathbf{f}(x)=\\hat{f}(x,0,0)+Lu$.\n\nThe matrix $\\mathbf{a}(x)$ is then regular elliptic with uniform ellipticity constants by A), AR) because $u$ is bounded. From the theory of {\\em linear} elliptic systems it is well known that if the operator $\\mathcal{L}(w)=-\\Div(\\mathbf{a}(x) Dw) +\\mathbf{g}(x)w +Lw$ is monotone and there exist positive constants $m$ and $q$ such that \\beqno{LSUcond}\\|\\mathbf{b}\\|_{L^q(\\Og)},\\; \\|\\mathbf{g}\\|_{L^q(\\Og)},\\; \\|\\mathbf{f}\\|_{L^q(\\Og)}\\le m,\\; \\mbox{$q> n\/2$},\\end{equation} then the system \\mref{LSUsys} has a unique weak solution $w$. \n\nIt is easy to find a matrix $L$ such that $\\mathcal{L}(w)$ is monotone, because the matrix $\\mathbf{a}$ is regular elliptic and $\\mathbf{g}$ is bounded (see below). 
We just need to choose a positive definite matrix $L$ satisfying $\\myprod{Lw,w}\\ge l_0|w|^2$ for some $l_0>0$ sufficiently large in terms of $M_0$.\n\nNext, we will show that \\mref{LSUcond} holds by F) and \\mref{Xdefstartzz}. We consider the two cases f.1) and f.2). If f.1) holds then from the definition \\mref{Bfdef} there is a constant $C(|u|)$ such that $$|\\mathbf{b}(x)|=|F^{(\\sg)}(x,u,\\zeta)|\\le C(|u|),\\; |\\mathbf{g}(x)|=|f^{(\\sg)}(x,u)|\\le C(|u|).$$\n \n From \\mref{Xdefstartzz}, we see that $\\|u\\|_\\infty\\le M_0$ and so there is a constant $m$ depending on $M_0$ such that \\mref{LSUcond} holds for any $q$ and $n$.\n \n \nIf f.2) holds then\n\\beqno{Bfdeff2}|F^{(\\sg)}(x,u,\\zeta)|\\le C(|u|)|\\zeta|,\\; |f^{(\\sg)}(x,u)|\\le C(|u|).\\end{equation} Therefore, $\\|\\mathbf{b}\\|_{L^2(\\Og)}$ is bounded by $C\\|Du\\|_{L^2(\\Og)}$. Again, if $n\\le3$ then \\mref{Xdefstartzz} implies the condition \\mref{LSUcond} for $q=2$.\n\n \n\nIn both cases, \\mref{LSUsys} (or \\mref{Tmapdef}) has a unique weak solution $w$. \nWe then define $T_\\sg(u)=w$. Moreover, from the regularity theory of linear systems, $w\\in C^{\\ag_0}(\\Og)$ for some $\\ag_0>0$ depending on $M_0$.\n\nThe bound in the assumption \\mref{M*defz} and \\refrem{PSremH} imply that $u$ is H\\\"older continuous and provide positive constants $\\ag, C(M_*)$ such that $\\|u\\|_{C^{\\ag}(\\Og)}\\le C(M_*)$. Also, the assumption \\mref{llgB3} and the fact from AR) that $\\llg(u),\\og$ are bounded from below yield that $\\|Du\\|_{L^2(\\Og)}\\le C(C_0)$. Thus, there is a constant $M_1$, depending on $M_*,C_0$, such that any strong solution $u$ of \\mref{gensysfam} satisfies \n\\beqno{Xdefstart} \\|u\\|_{C^{\\ag}(\\Og)}\\le M_1,\\; \\|Du\\|_{L^2(\\Og)}\\le M_1.\\end{equation}\n\nIt is well known that there is a constant $c_0>1$, depending on $\\ag$ and the diameter of $\\Og$, such that $\\|\\cdot\\|_{C^{\\bg}(\\Og)}\\le c_0\\|\\cdot\\|_{C^{\\ag}(\\Og)}$ for all $\\bg\\in(0,\\ag)$. 
We now let $M_0$, the constant in \\mref{Xdefstartzz}, be $M=(c_0+1)M_1$ and define $\\ag_0$ in the previous argument accordingly. \n\nDefine $\\mX=C^{\\bg}(\\Og)\\cap W^{1,2}(\\Og)$ for some positive $\\bg<\\min\\{\\ag,\\ag_0\\}$. The space $\\mX$ is equipped with the norm $$\\|u\\|_\\mX = \\max\\{\\|u\\|_{C^{\\bg}(\\Og)},\\|Du\\|_{L^2(\\Og)}\\}.$$ \n \n\nWe now see that $T_\\sg$ is well defined and maps the ball $\\bar{B}_M$ of $\\mX$ into $\\mX$. Moreover, from the definition $M=(c_0+1)M_1$, it is clear that $T_\\sg$ has no fixed point on the boundary of $B_M$ because such a fixed point $u$ satisfies \\mref{Xdefstart}, which implies $\\|u\\|_{\\mX}\\le c_0M_1<M$. The lemma is proved. \\begin{flushright} Q.E.D. \\end{flushright}\n\nIt remains to establish Claim 2, i.e. the bound \\mref{M*defz} for some $p>s_*\/2$. Because the data of \\mref{Tmapdef} satisfy the structural conditions A), F) with the same set of constants and the assumptions of the theorem are assumed to be uniform for all $\\sg\\in[0,1]$, we will only present the proof for the case $\\sg=1$ in the sequel.\n\n\n\nLet $u$ be a strong solution of \\mref{e1} on $\\Og$. We begin with an energy estimate for $Du$. For $p\\ge1$ and any ball $B_s$ with center $x_0\\in\\bar{\\Og}$ we denote $\\Og_s=B_s\\cap\\Og$ and \n\\beqno{Hdef}\\ccH_{p}(s):=\n\\iidmu{\\Og_s}{\\llg(u)|Du|^{2p-2}|D^2u|^2},\\end{equation}\n\\beqno{Bdef}\\ccB_{p}(s):= \\iidmu{\\Og_s}{\\frac{|\\llg_u(u)|^2}{\\llg(u)}|Du|^{2p+2}},\\end{equation} \\beqno{Cdef}\\ccC_{p}(s):=\\iidmu{\\Og_s}{(|f_u(u)|+\\llg(u))|Du|^{2p}},\\end{equation} and \\beqno{cIdef}\\mccIee_{\\og,p}(s):=\\iidx{B_s}{(\\llg(u)|Du|^{2p}|D\\og_0|^2+|f(u)||Du|^{2p-1}|D\\og_0|\\og_0)}.\\end{equation}\n\n\n\n\\blemm{dleANSenergy} Assume A), F). 
\nLet $u$ be any strong solution of \\mref{gensys} on $\\Og$ and $p$ be any number in $[1,\\max\\{1,s_*\/2\\}]$.\n\nThere is a constant $C$, which depends only on the parameters in A) and F), such that for any two concentric balls $B_s,B_t$ with center $x_0\\in\\bar{\\Og}$ and $s<t$\n\\beqno{keydupANSenergy}\\ccH_{p}(s)\\le C\\ccB_{p}(t)+C(t-s)^{-2}\\ccC_{p}(t)+C\\mccIee_{\\og,p}(t).\\end{equation}\\end{lemma}\n\nCombining the above energy estimate with the Gagliardo-Nirenberg inequality \\mref{GNlocog11mcoro}, we have the following.\n\n\\blemm{dleANSprop} Assume A), F) and H). Suppose that for any $\\mu_0>0$ there exist a constant $C_0$ and a positive $R_{\\mu_0}$ sufficiently small in terms of the constants in A) and F) such that\n\\beqno{Keymu01} \\sup_{x_0\\in\\bar{\\Og}}[\\myPi_p^{\\ag}]_{\\bg+1,\\Og_R(x_0)}\\le C_0,\\;\\|K(u)\\|_{BMO(\\Og_{R}(x_0),\\mu)}^2 \\le \\mu_0.\\end{equation}\n\nThen for sufficiently small $\\mu_0$ there is a constant $C$ depending only on the parameters of A) and F) such that for $2R\\le R_{\\mu_0}$\n\\beqno{keydupANSppb}\\ccB_{p}(R)+\\ccH_{p}(R)\\le C(1+R^{-2})[\\ccC_{p}(2R)+\\mccIee_{\\og,p}(2R)].\\end{equation}\\end{lemma}\n\n{\\bf Proof:~~} For any $\\eg>0$ we can use \\mref{GNlocog11mcoro} to obtain a constant $C$ such that (using the bound $[\\myPi_p^{\\ag}]_{\\bg+1,B_{R_{\\mu_0}}(x_0)\\cap\\Og}\\le C_0$ and the definitions of $\\mu_0$ in \\mref{Keymu01} and $C(\\eg,U,\\myPi)$ in \\refcoro{GNlocalog1mcoro})\n$$ \\ccB_{p}(s) \\le \\eg\\ccB_{p}(t)+C\\eg^{-1}\\mu_0\\ccH_{p}(t)+C\\eg^{-1}\\mu_0(t-s)^{-2}\\ccC_{p}(t)\\quad 0<s<t\\le 2R.$$ Combining this with the energy estimate \\mref{keydupANSenergy} and choosing $\\eg$ and then $\\mu_0$ sufficiently small, a standard iteration argument yields \\mref{keydupANSppb}. \\begin{flushright} Q.E.D. \\end{flushright}\n\n\\blemm{dleANSpropt0} Assume as in \\reflemm{dleANSprop}. Suppose further that there exist a constant $C_0$ and some $r_0>r_*$ such that \\beqno{llgmainhyp} \\|\\llg^{-1}(u)\\|_{L^{r_*}(\\Og,\\mu)},\\;\\||f_u(u)|\\llg^{-1}(u)\\|_{L^{r_0}(\\Og,\\mu)}\\le C_0 ,\\end{equation}\n\\beqno{hypoiterpis1}\\iidmu{\\Og}{(|f_u(u)|+\\llg(u))|Du|^{2}}\\le C_0,\\end{equation} and\n\\beqno{hypoiterpis1z}\\iidmu{\\Og}{|f_u(u)|(1+f(u)^{s_*}|f_u(u)|^{-s_*})}\\le C_0.\\end{equation}\n\nThen there exist $p>s_*\/2$ and a constant $M_*$ depending only on the parameters of A) and F), $\\mu_0$, $R_{\\mu_0}$, $C_0$ and the geometry of $\\Og$ such that \n\\beqno{keydupANSt0}\\iidmu{\\Og}{|Du|^{2p}}\\le M_*.\\end{equation}\n \\end{lemma}\n\n\n{\\bf Proof:~~} First of all, by the condition AR), there is a constant $C_\\og$ such that $|D\\og_0|\\le C_\\og\\og_0$ and therefore we have from the definition \\mref{cIdef} that \\beqno{cIdef1}\\mccIee_{\\og,p}(s)\\le C_\\og\\iidx{B_s}{(\\llg(u)|Du|^{2p}+f(u)|Du|^{2p-1})\\og_0^2}.\\end{equation} By Young's inequality, 
$f(u)|Du|^{2p-1}\\lesssim |f_u(u)||Du|^{2p}+(f(u)|f_u(u)|^{-1})^{2p}|f_u(u)|$. It follows easily from the assumption \\mref{hypoiterpis1z} that the integral of $(f(u)|f_u(u)|^{-1})^{2p}|f_u(u)|$ is bounded by $C_0$ for any $p\\in[1,s_*\/2]$. We then have from \\mref{keydupANSppb} and \\mref{cIdef1} that\n\\beqno{keydupANSppbbb}\\ccB_{p}(R)+\\ccH_{p}(R)\\le C(1+R^{-2})[\\ccC_{p}(2R)+C_0].\\end{equation}\n\n\nThe main idea of the proof is to show that the above estimate is self-improving in the sense that if it is true for some exponent $p\\ge1$ then it is also true for $\\cg_*p$ with some fixed $\\cg_*>1$ and $R$ being replaced by $R\/2$. To this end, assume that for some $p\\ge1$ we can find a constant $C(C_0,R,p)$ such that \\beqno{piter1} \\ccC_{p}(2R)\\le C(C_0,R,p).\\end{equation} \n\nIt then clearly follows from \\mref{keydupANSppbbb} and the definition of $\\ccB_{p}(R),\\ccH_{p}(R)$ that \\beqno{keydupANSpp1} \\iidmu{\\Og_R}{[V^2+\n|DV|^2]}\\le C(C_0,R,p), \\quad \\mbox{where $V=\\llg^\\frac12(u)|Du|^{p}$}.\\end{equation}\n\n\nLet $\\pi_*$ be the exponent in the Poincar\\'e-Sobolev inequality \\mref{PSineq}. We have \n\\beqno{genPSpi}\\iidmu{\\Og_R}{|V|^{\\pi_*}}\\lesssim \\iidmu{\\Og_R}{|DV|^{2}}+\\mypar{\\iidmu{\\Og_R}{|V|^2}}^{{\\pi_*}\/2}.\\end{equation} We see that \\mref{keydupANSpp1} and the above inequality imply\n\\beqno{piter1az} \\iidmu{\\Og_R}{\\llg^{{\\pi_*}\/2}(u)|Du|^{p {\\pi_*}}} \\le C(C_0,R,p).\\end{equation}\n\nLet $\\cg_*\\in(1,{\\pi_*}\/2)$. We denote $\\bg_*=\\pi_*\/(\\pi_*-2\\cg_*)$ and $g(u)=\\max\\{|f_u(u)|,\\llg(u)\\}$. 
We write $g(u)|Du|^{2\\cg_* p}= \\llg(u)^{\\cg_*}|Du|^{2\\cg_* p}g(u)\\llg(u)^{-\\cg_*}$ and use H\\\"older's inequality and \\mref{piter1az} to get\n\\beqno{piter1z1}\\iidmu{\\Og_R}{g(u)|Du|^{2\\cg_* p}}\\le C(C_0,R,p)^\\frac{2\\cg_*}{\\pi_*}\\mypar{\\iidmu{\\Og_R}{(g(u)\\llg(u)^{-\\cg_*})^{\\bg_*}}}^{1-\\frac{2\\cg_*}{{\\pi_*}}}.\\end{equation} Again, as $(g(u)\\llg(u)^{-\\cg_*})^{\\bg_*}=(g(u)\\llg(u)^{-1})^{\\bg_*}\\llg(u)^{-(\\cg_*-1)\\bg_*}$, the last integral can be bounded via H\\\"older's inequality by $$\\mypar{\\iidmu{\\Og_R}{(g(u)\\llg(u)^{-1})^{\\bg_*\\ag}}}^\\frac1\\ag\\mypar{\\iidmu{\\Og_R}{\\llg(u)^{-(\\cg_*-1)\\bg_*\\ag'}}}^\\frac{1}{\\ag'}.$$\nBy the assumption \\mref{llgmainhyp}, $|f_u(u)|\\llg^{-1}(u)$ is in $L^{r_0}(\\Og,\\mu)$ for some $r_0>r_*=\\pi_*\/(\\pi_*-2)$ and $\\llg^{-1}(u)\\in L^{r_*}(\\Og,\\mu)$. We can find $\\ag,\\cg_*>1$ such that $\\ag\\bg_*\\le r_0$ and $(\\cg_*-1)\\bg_*\\ag'\\le r_*$, so that the two integrals above are bounded by $C(C_0)$ and \\mref{piter1z1} yields\n\\beqno{piter1z}\\iidmu{\\Og_R}{g(u)|Du|^{2\\cg_* p}}\\le C(C_0,R,p).\\end{equation}\n\nHence, if \\mref{piter1} holds for some $p\\ge1$ and $R>0$ then \\mref{piter1z} provides some fixed $\\cg_{*}>1$ such that \\mref{piter1} remains true for the new exponent $\\cg_{*} p$ and $R\/2$. By the assumption \\mref{hypoiterpis1}, \\mref{piter1} holds for $p=1$. It is now clear that, as long as the energy estimate \\mref{keydupANSenergy} is valid by \\reflemm{dleANSenergy}, we can repeat the argument $k_0$ times to find a number $p>s_*\/2$ such that \\mref{piter1} holds. 
It follows that \nthere is a constant $C$ depending only on the parameters of A) and F), $\\mu_0$, $R_{\\mu_0}$ and $k_0$ \nsuch that for some $p>s_*\/2$\n\\beqno{keydupANSt0z}\\iidmu{\\Og_{R_0}}{\\llg^{{\\pi_*}\/2}(u)|Du|^{{\\pi_*}p}}\\le C\\mbox{ for $R_0=2^{-k_0}R_{\\mu_0}$}.\\end{equation}\n\nWe now write $|Du|^{2p}=\\llg(u)|Du|^{2p}\\llg^{-1}(u)$ and have\n$$\\iidmu{\\Og_{R_0}}{|Du|^{2p}}\\le \\mypar{\\iidmu{\\Og_{R_0}}{\\llg^{{\\pi_*}\/2}(u)|Du|^{{\\pi_*}p}}}^\\frac{2}{{\\pi_*}}\\mypar{\\iidmu{\\Og_{R_0}}{\\llg(u)^{-{{\\pi_*}\/({\\pi_*}-2)}}}}^{1-\\frac{2}{{\\pi_*}}}.$$ By \\mref{llgmainhyp}, the last integral in the above estimate is bounded.\nUsing \\mref{keydupANSt0z} and summing the above inequalities over a finite covering of balls $B_{R_0}$ for $\\Og$, we find a constant $C$, depending also on the geometry of $\\Og$, and obtain the desired estimate \\mref{keydupANSt0}. The lemma is proved. \\begin{flushright} Q.E.D. \\end{flushright}\n\n\\brem{hypoiterpis1zrem} The condition \\mref{hypoiterpis1z} is void if $\\hat{f}$ is independent of $x$. Indeed, it was used only to estimate $\\mccIee_{\\og,p}$, which results from \\mref{Giter} in the proof of \\reflemm{dleANSprop}, and to obtain \\mref{keydupANSppbbb}. If $\\hat{f}$ is independent of $x$ then $\\mccIee_{\\og,p}=0$ from the proof of the energy estimate in \\reflemm{dleANSenergy} (see \\mref{f1duest}), so that \\mref{hypoiterpis1z} is not needed. \\erem\n\n\n\n\n\\brem{murem1} It is also important to note that the estimate of \\reflemm{dleANSpropt0}, based on those in \\reflemm{dleANSenergy}, \\reflemm{dleANSprop}, is {\\em independent} of the lower\/upper bounds of the function $\\llg_*$ in AR); it depends only on the integrals in M.0). The assumption AR) was used only in \\reflemm{Tmaplem} to define the map $T_\\sg$ and in \\reflemm{Dulocbound} to show that fixed points of $T_\\sg$ are strong solutions. \\erem\n\n\n\n\nWe are ready to provide the proof of the main theorem of this section. 
\n\n{\\bf Proof of \\reftheo{gentheo1}:} \nIt is now clear that the assumptions M.0) and M.1) of our theorem allow us to\napply \\reflemm{dleANSpropt0} and obtain an a priori uniform bound for any continuous strong solution $u$ of \\mref{gensysfam}. The uniform estimate \\mref{keydupANSt0} shows that the assumption \\mref{M*defz} of \\reflemm{Tmaplem} holds true so that the map $T_\\sg$ is well defined and compact on a ball $\\bar{B}_M$ of $\\mX$ for some $M$ depending on the bound $M_*$ provided by \\reflemm{dleANSpropt0}. Combined with \\reflemm{Dulocbound}, this shows that the fixed points of $T_\\sg$ are strong solutions of the system \\mref{gensysfam} so that $T_\\sg$ does not have a fixed point on the boundary of $\\bar{B}_M$. Thus, by the Leray-Schauder fixed point theorem, $T_1$ has a fixed point in $B_M$ which is a strong solution to \\mref{gensys}. The proof is complete.\n\\begin{flushright} Q.E.D. \\end{flushright}\n\n{\\bf Proof of \\refcoro{gentheo1coro}:} We just need to show that \\reflemm{dleANSpropt0} remains true with the condition \\mref{hypoiterpis1z}, which is \\mref{hypoiterpis1z00}, being replaced by \\mref{hypoiterpis1z11} and \\mref{hypoiterpis1z11z}. We revisit the proof of \\reflemm{dleANSpropt0}. First of all, we use Young's inequality in the estimate \\mref{cIdef1} for $\\mccIee_{\\og,p}(s)$ to get $f(u)|Du|^{2p-1}\\lesssim |f_u(u)||Du|^{2p}+(f(u)|f_u(u)|^{-1})^{2p}|f_u(u)|$. Using \\mref{specfcond} and \\mref{hypoiterpis1z11}, \\mref{keydupANSppbbb} now yields $$\\ccB_{p}(R)+\\ccH_{p}(R)\\le C(1+R^{-2})[\\ccC_{p}(2R)+C_0+I_p(2R)], \\quad I_p(s)=\\iidmu{\\Og_s}{|u|^{2p}|f_u(u)|}.$$ \n\nAs in \\mref{piter1}, we will first assume for some $p\\ge1$ that \\beqno{piter111} \\ccC_{p}(2R)+I_p(2R)\\le C(C_0,R,p)\\end{equation} and show that this assumption is self-improving, i.e., if it holds for some $p\\ge1$ then it remains true for $\\cg_*p$ for some fixed $\\cg_*>1$ and $2R$ being replaced by $R$. 
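As a quick numerical aside (not part of the proof), the Young's inequality step above holds even with constant $1$: with the conjugate exponents $2p\/(2p-1)$ and $2p$ one gets $f|Du|^{2p-1}\\le |f_u||Du|^{2p}+(f|f_u|^{-1})^{2p}|f_u|$ for positive quantities. The Python sketch below checks this on a grid of sample values; all names and values are illustrative.

```python
import itertools

# Sanity check of the Young's inequality step (constant 1 suffices):
#   f * du^(2p-1) <= fu * du^(2p) + (f/fu)^(2p) * fu
# for positive f, fu, du, using exponents 2p/(2p-1) and 2p.
p = 2
samples = [0.1, 0.5, 1.0, 3.0, 10.0]
ok = all(
    f * du ** (2 * p - 1) <= fu * du ** (2 * p) + (f / fu) ** (2 * p) * fu + 1e-12
    for f, fu, du in itertools.product(samples, repeat=3)
)
print(ok)  # True
```

Dropping the factors $1\/q$, $1\/q'$ of Young's inequality only enlarges the right-hand side, which is why no implicit constant is needed here.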
We need only consider $I_p(R)$ and assume first that \\beqno{improvep1} \\iidmu{\\Og_R}{(|f_u(u)|+\\llg(u))|u|^{2}}\\le C(C_0,R).\\end{equation}\n\nDenote $g(u)=|f_u(u)|$ and $\\cg_*=2-\\frac{1}{r_0}-\\frac{2}{\\pi_*}$. Since $r_0>\\frac{\\pi_*}{\\pi_*-2}$ it is clear that $\\cg_*>1$. We then define $r_1=\\frac{\\pi_*}{(\\cg_*-1)(\\pi_*-2)}$, $r_2=\\frac{\\pi_*}{2\\cg_*}$ and note that\n$$\\frac{1}{r_0}+\\frac{1}{r_1}+\\frac{1}{r_2}=\\frac{1}{r_0}+\\frac{(\\cg_*-1)(\\pi_*-2)}{\\pi_*}+\\frac{2\\cg_*}{\\pi_*}=\\frac{1}{r_0}+\\cg_*-1+\\frac{2}{\\pi_*}=1.$$ Therefore, writing\n$g(u)|u|^{2p\\cg_*}=g(u)\\llg^{-1}(u)\\llg^{1-\\cg_*}(u)\\llg^{\\cg_*}(u)|u|^{2p\\cg_*}$, we can use H\\\"older's inequality to see that $I_{p\\cg_*}(R)$ can be bounded by \\beqno{fullgup}\\mypar{\\iidmu{\\Og_R}{(g(u)\\llg(u)^{-1})^{r_0}}}^\\frac{1}{r_0}\\mypar{\\iidmu{\\Og_R}{\\llg(u)^{-\\frac{\\pi_*}{\\pi_*-2}}}}^\\frac{1}{r_1}\\mypar{\\iidmu{\\Og_R}{\\llg(u)^{\\frac{\\pi_*}{2}}|u|^{p\\pi_*}}}^\\frac{1}{r_2}.\n\\end{equation}\nThanks to \\mref{llgmainhyp}, the first two integrals are bounded by $C(C_0)$. We estimate the third integral. Let $U=\\llg(u)^{\\frac{\\pi_*}{4}}|u|^\\frac{p\\pi_*}{2}$. Because $|\\llg_u(u)||u|\\lesssim\\llg(u)$, $|DU|^2\\lesssim \\llg(u)^{\\frac{\\pi_*}{2}}|u|^{p\\pi_*-2}|Du|^2$. 
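As an aside, the conjugate-exponent bookkeeping above can be confirmed numerically; the sketch below (with illustrative sample values for $\\pi_*$ and $r_0$) checks that $\\cg_*>1$ and that $1\/r_0+1\/r_1+1\/r_2=1$.

```python
# Exponent bookkeeping for the Hoelder step: with gamma_* = 2 - 1/r_0 - 2/pi_*,
# r_1 = pi_*/((gamma_* - 1)(pi_* - 2)) and r_2 = pi_*/(2 gamma_*), the three
# exponents are conjugate: 1/r_0 + 1/r_1 + 1/r_2 = 1.  Sample values only.
pi_star = 4.0
r0 = 3.0                                  # indeed r0 > pi_*/(pi_* - 2) = 2
gamma = 2.0 - 1.0 / r0 - 2.0 / pi_star    # gamma_* = 7/6 > 1
r1 = pi_star / ((gamma - 1.0) * (pi_star - 2.0))
r2 = pi_star / (2.0 * gamma)
total = 1.0 / r0 + 1.0 / r1 + 1.0 / r2
print(gamma > 1.0, abs(total - 1.0) < 1e-12)  # True True
```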
By Poincar\\'e and Young's inequalities, we have for any $\\bg\\in(0,1]$\n$$\\iidmu{\\Og_R}{U^2}\\le R^2\\iidmu{\\Og_R}{\\llg(u)^{\\frac{\\pi_*}{2}}(\\eg|u|^{p\\pi_*}+C(\\eg)|Du|^{p\\pi_*})}+R^{n-n\/\\bg}\\mypar{\\iidmu{\\Og_R}{U^{2\\bg}}}^\\frac{1}{\\bg}.$$\n\nChoosing $\\eg$ small and $\\bg=2\/\\pi_*$, we see that the right hand side is bounded by $$ CR^2\\iidmu{\\Og_R}{\\llg(u)^{\\frac{\\pi_*}{2}}|Du|^{p\\pi_*}}+L_p(R)^\\frac{\\pi_*}{2},\\; \\mbox{where } L_p(s)=\\iidmu{\\Og_s}{\\llg(u)|u|^{2p}}.$$\n\nPutting these estimates together and using \\mref{piter1az}, which holds because of \\mref{piter111}, we see that the third integral in \\mref{fullgup} is bounded by a constant $C(C_0,R,p)$ if $L_p(R)\\le C(C_0,R,p)$. We then have $I_{\\cg_* p}(R)\\le C(C_0,R,p)$.\n\nOn the other hand, we can show that the estimate $L_p(R)\\le C(C_0,R,p)$ is also self-improving. We repeat the argument in \\mref{fullgup} with $g(u)=\\llg(u)$ and, of course, $r_0=\\infty$ to see that if $L_p(R)\\le C(C_0,R,p)$ then $L_{\\cg_* p}(R)\\le C(C_0,R,p)$ for $\\cg_*=2-\\frac{2}{\\pi_*}>1$ because $\\pi_*>2$.\n\nHence, the estimate \\mref{piter111} remains true for $p,2R$ being replaced by $\\cg_*p,R$ respectively for some fixed $\\cg_*>1$. The proof of \\reflemm{dleANSpropt0} can continue with \\mref{improvep1} replacing \\mref{hypoiterpis1z}.\n\nFinally, we show that \\mref{improvep1} follows from the assumptions \\mref{hypoiterpis1} and \\mref{hypoiterpis1z11}. From \\mref{fullgup} with $p=1$, we see that $I_1(R)$ can be estimated in terms of the integral of $\\llg(u)^{\\frac{\\pi_*}{2}}|u|^{\\pi_*}$ over $\\Og_R$. The latter can be estimated by $L_1(R)$ via the Poincar\\'e-Sobolev inequality \\mref{genPSpi}, using $V=\\llg(u)^{\\frac{1}{2}}|u|$ and the assumption on the integrability of $|DV|^2\\sim\\llg(u)|Du|^2$ in \\mref{hypoiterpis1}. Hence, we need only consider $L_1(R)$. 
By the interpolation inequality, we have $$\\iidmu{\\Og_R}{\\llg(u)|u|^{2}}\\le R^2\\iidmu{\\Og_R}{\\llg(u)|Du|^2}+R^{n-n\/\\bg}\\mypar{\\iidmu{\\Og_R}{(\\llg(u)|u|^{2})^\\bg}}^\\frac{1}{\\bg}.$$ We then use \\mref{hypoiterpis1} and \\mref{hypoiterpis1z11} to see that $L_1(R)\\le C(C_0,R)$ and conclude the proof. \\begin{flushright} Q.E.D. \\end{flushright}\n\n\n\\section{Proof of the main results} \\label{proofmainsec} \\eqnoset\n\nWe now present the proof of the results in \\refsec{res} which are in fact just applications of the main technical theorem with different choices of the map $K$ verifying the conditions H) and M.1). Again, since the systems in \\mref{e1famzzz} satisfy the same set of conditions uniformly with respect to $\\sg\\in[0,1]$ we will only present the proof for $\\sg=1$ in the sequel.\n\n\n\n{\\bf Proof of \\reftheo{dleNL-mainthm}:} This theorem is a consequence of \\refcoro{gentheo1coro} whose integrability conditions in M.0) are already assumed in \\mref{llgmainhyp009} and \\mref{llgmainhyp0099}. We need only verify the condition M.1) of \\refcoro{gentheo1coro} (or \\reftheo{gentheo1}) with the map $K$ being defined by\n\\beqno{Kmapdef} K_{\\eg_0}(u)=(|\\log(|U|)|+\\eg_0)|U|^{-1}U, \\quad U=[\\llg_0+|u_i|]_{i=1}^m.\\end{equation} This map satisfies for any $\\eg_0>0$ the condition H) of \\reftheo{gentheo1} (see \\reflemm{H2lemm} and \\refrem{H2lemmrem} in the Appendix). It remains to check the condition M.1). Because $$\\|K_{\\eg_0}(u)\\|_{BMO(B_R,\\mu)}\\le \\|K_0(u)\\|_{BMO(B_R,\\mu)}+\\eg_0\\||U|^{-1}U\\|_{BMO(B_R,\\mu)},$$ and $\\||U|^{-1}U\\|_{BMO(B_R,\\mu)}=1$, we have $\\|K_{\\eg_0}(u)\\|_{BMO(B_R,\\mu)}<\\mu_0$ for any given $\\mu_0>0$ if $\\eg_0<\\mu_0\/2$ and $R$ is small, thanks to the assumption \\mref{mainloghyp}. 
Therefore, the smallness condition \\mref{Keymu0} of M.1) holds.\n\nNext, we consider $\\myPi_p=\\llg^{p+\\frac12}(u)|\\llg_u(u)|^{-p}\\sim (\\llg_0+|u|)^{k_p}$ for some number $k_p$ because $\\llg(u), |\\llg_u(u)|$ have polynomial growth. As we assume in \\mref{mainloghyp} that $\\log(\\llg_0+|u|)$ has small BMO norm in small balls, $[\\myPi_p^\\ag]_{\\bg+1,B_R}\\sim [(\\llg_0+|u|)^{k_p\\ag}]_{\\bg+1,B_R}$ is bounded (see \\reflemm{Westlemm} following this proof) for any given $\\ag,\\bg>0$ if $R$ is sufficiently small. Thus, M.1) is verified and \\refcoro{gentheo1coro} applies here to complete the proof. \\begin{flushright} Q.E.D. \\end{flushright}\n\n\\newcommand{\\mmc}{\\mathbf{c}}\n\n\nWe have the following lemma which was used in the above proof to establish that $[\\myPi_p^\\ag]_{\\bg+1,B_R}$ is bounded. This lemma will be frequently referred to in the rest of this section. \n\n\\blemm{Westlemm} Let $\\mu$ be a doubling measure and $U$ be a nonnegative function on a ball $B\\subset \\Og$. There is a constant $\\mmc_2$ depending only on the doubling constant of $\\mu$ such that for any given $l$ and $\\bg>0$, if $[\\log U]_{*,\\mu}$ is sufficiently small, then $[U^l]_{\\bg+1}\\le \\mmc_2^{1+\\bg}$.\n\\end{lemma}\n\n{\\bf Proof:~~} We first recall the John-Nirenberg inequality (\\cite[Chapter 9]{Graf}): For any BMO($\\mu$) function $v$ \nthere are constants $\\mmc_1,\\mmc_2$, which depend only on the doubling constant of $\\mu$, such that \n\\beqno{JNineq}\\mitmu{B}{e^{\\frac{\\mmc_1}{[v]_{*,\\mu}}|v-v_B|}} \\le \\mmc_2.\\end{equation}\n\n\nFor any $\\bg>0$ we know that $e^v$ is an $A_{\\bg+1}$ weight with $[e^v]_{\\bg+1}\\le \\mmc_2^{1+\\bg}$ (e.g. see \\cite[Chapter 9]{Graf}) if \\beqno{Acgcond}\\sup_B \\mitmu{B}{e^{(v-v_B)}} \\le \\mmc_2,\\; \\sup_B \\mitmu{B}{e^{-\\frac{1}{\\bg}(v-v_B)}} \\le \\mmc_2.\\end{equation}\n\nIt is clear that \\mref{Acgcond} follows from \\mref{JNineq} if $\\mmc_1[v]_{*,\\mu}^{-1}\\ge \\max\\{1,\\bg^{-1}\\}$. 
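A discrete illustration of this mechanism (an aside, not part of the proof): for a function $U$ whose logarithm has small oscillation, the $A_2$ quantity $\\mitmu{B}{U}\\,\\mitmu{B}{U^{-1}}$ stays uniformly close to $1$. The Python sketch below checks this on a one-dimensional grid with averages over subintervals; the grid size, the oscillation $0.05$, and the bound are all illustrative choices.

```python
import math

# If log U has oscillation at most 0.05, then max(U)/min(U) <= e^{0.1}, so the
# A_2 quantity avg(U) * avg(1/U) over any subinterval is at most e^{0.1} < e^{0.2}.
N = 200
xs = [(i + 0.5) / N for i in range(N)]
U = [math.exp(0.05 * math.sin(2.0 * math.pi * x)) for x in xs]

sup_A2 = 0.0
for a in range(0, N, 10):
    for b in range(a + 10, N + 1, 10):
        w = U[a:b]
        avg_w = sum(w) / len(w)
        avg_winv = sum(1.0 / v for v in w) / len(w)
        sup_A2 = max(sup_A2, avg_w * avg_winv)
print(sup_A2 < math.exp(0.2))  # True: the A_2 quantity stays near 1
```

By the Cauchy-Schwarz inequality the quantity is always at least $1$, so the computed supremum is pinched in a small interval around $1$.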
Therefore, for $v= l\\log U$ we see that if $[\\log U]_{*,\\mu}\\le \\mmc_1\\min\\{1,\\bg\\}|l|^{-1}$ then $[U^l]_{\\bg+1}\\le \\mmc_2^{1+\\bg}$. Hence, for any given $l$ and $\\bg>0$, if $[\\log U]_{*,\\mu}$ is sufficiently small then $[U^l]_{\\bg+1}\\le \\mmc_2^{1+\\bg}$.\n\\begin{flushright} Q.E.D. \\end{flushright}\n\n\n{\\bf Proof of \\refcoro{dlec1coro}:} We just need to show that the assumption \\mref{pis20az5} implies \\mref{mainloghyp} of \\reftheo{dleNL-mainthm}. For $U=[\\llg_0+|u_i|]_{i=1}^m$ we have from the calculation in \\refrem{H2lemmrem} that\n$$(K_0)_u(u) = \\frac{|\\log(|U|)|}{|U|}\\left[I + \\left(\\frac{\\mbox{sign}(\\log(|U|))}{|\\log(|U|)|}-1\\right)\\zeta\\zeta^T\\right]\\mbox{diag}[\\mbox{sign} (u_i)],$$ where $\\zeta=|U|^{-1}U$. It is clear that $|(K_0)_u(u)|\\le C(1+|\\log(|U|)|)|U|^{-1}$. Since $|U|$ is bounded from below by $\\llg_0$, for any $\\ag\\in(0,1)$ there is a constant $C(\\ag)$ such that $|(K_0)_u(u)|\\le C(\\ag)|U|^{-\\ag}$. Therefore, $|D(K_0(u))|$ and $|D(\\log(\\llg_0+|u|))|$ can be bounded by $C(\\llg_0+|u|)^{-\\ag}|Du|$. \nIt follows from the assumption \\mref{pis20az5} that \\beqno{pis20az5zz} \\iidmu{\\Og}{|DK_0(u)|^{n}},\\; \\iidmu{\\Og}{|D(\\log(\\llg_0+|u|))|^{n}}\\le C(C_0).\\end{equation}\n\nFrom the Poincar\\'e-Sobolev inequality, using the assumption that $\\og$ is bounded from below and above by AR) \n$$\\left(\\mitmu{B_r}{|K_0(u)-K_0(u)_{B_r}|^2}\\right)^\\frac12 \\le C(n)\n\\left(\\iidmu{B_r}{|D(K_0(u))|^{n}}\\right)^\\frac{1}{n}.$$ The continuity of the integral of $|D(K_0(u))|^{n}$ and the uniform bound \\mref{pis20az5zz} show that the last integral is small if $r$ is. The same argument applies to the function $\\log(\\llg_0+|u|)$. We then see that the BMO norms of $K_0(u)$ and $\\log(\\llg_0+|u|)$ are small in small balls, and so \\mref{mainloghyp} of \\reftheo{dleNL-mainthm} holds. The proof is complete.\n\\begin{flushright} Q.E.D. 
\\end{flushright}\n\nFor the proof of \\refcoro{2dthm1} we first need the following lemma.\n\n\\blemm{2dlemma} Suppose A), F) and, if $\\hat{f}$ has a quadratic growth in $Du$, \\mref{FUDU112z} with $\\eg_0$ being sufficiently small. For any $s$ satisfying \\beqno{sconds}s>-1 \\mbox{ and } C_*^{-1}>s\/(s+2)\\end{equation} and any ${\\ag_0}\\in(0,1)$ we have for $U:=\\llg_0+|u|$ that \\beqno{nequal2est}\\iidmu{\\Og}{U^{k+s}|Du|^2} \\le C\\mypar{\\iidmu{\\Og}{U^{{\\ag_0}(k+s+2)}}}^\\frac1{\\ag_0}+C\\iidmu{\\Og}{U^sf(u)|u|}.\\end{equation} \\end{lemma}\n\n{\\bf Proof:~~} \nLet $X=[\\llg_0+|u_i|]_{i=1}^m$ and test the system with $|X|^su$ to get \\beqno{llgDu2}\\iidmu{\\Og}{\\myprod{A(u)Du,D(|X|^su)}} \\le \\iidmu{\\Og}{\\myprod{\\hat{f}(u,Du),|X|^su}}.\\end{equation} We note that $\\myprod{A(u)Du,D(|X|^su)}=\\myprod{A(u) DX,D(|X|^sX)}$ so that, by the assumption \\mref{sconds} on $s$, there is $c_0>0$ such that $\\myprod{A(u)Du,D(|X|^su)}\\ge c_0\\llg(u)|X|^s|DX|^2$ (see \\mref{SGrem0} in the Appendix). Because $|DX|=|Du|$ and $|X|\\sim U$, the above yields \\beqno{fuduu}\\iidmu{\\Og}{\\llg(u)U^s|Du|^2}\\le C\\iidmu{\\Og}{U^s\\myprod{\\hat{f}(u,Du),u}}.\\end{equation}\n\nIf $\\hat{f}$ satisfies f.1) then a simple use of Young's inequality gives $$|\\myprod{\\hat{f}(u,Du),u}| \\le \\eg\\llg(u)|Du|^2 + C(\\eg)\\llg(u)|u|^2+f(u)|u|.$$ \n\nIf f.2) holds with \\mref{FUDU112z} then $|\\myprod{\\hat{f}(u,Du),u}| \\le C|\\llg_u(u)||u||Du|^2 + f(u)|u|$. Because $|\\llg_u(u)||u|\\lesssim \\llg(u)$, we obtain the above inequality again with $\\eg=C\\eg_0$. 
Therefore, if $\\eg_0$ is sufficiently small then \\mref{fuduu} and the fact that $\\llg(u)\\sim U^k$ imply $$\\iidmu{\\Og}{U^{k+s}|Du|^2} \\le C\\iidmu{\\Og}{U^{k+s+2}}+C\\iidmu{\\Og}{U^sf(u)|u|}.$$\nWe apply the interpolation inequality $\\|w\\|_{L^2(\\Og,\\mu)}^2 \\le \\eg\\|Dw\\|_{L^2(\\Og,\\mu)}^2+C(\\eg)\\|w^{\\ag_0}\\|_{L^2(\\Og,\\mu)}^{1\/{\\ag_0}}$ with ${\\ag_0}\\in(0,1)$ and $w=U^{(k+s+2)\/2}$ to the first integral on the right hand side, noting also that $|Dw|^2\\sim U^{k+s}|Du|^2$. For $\\eg$ sufficiently small, we derive \\mref{nequal2est} from the above estimate and complete the proof. \\begin{flushright} Q.E.D. \\end{flushright}\n\n{\\bf Proof of \\refcoro{2dthm1}:} We apply \\refcoro{dlec1coro} here. We will verify first the condition \\mref{pis20az5} and then the integrability assumptions \\mref{llgmainhyp009} and \\mref{llgmainhyp0099}.\n\nIn the sequel we denote $U=\\llg_0+|u|$. From \\mref{nequal2est} of \\reflemm{2dlemma} we see that if there exist $\\ag_0\\in(0,1)$, $C_0>0$ and $s$ satisfying \\mref{sconds} such that\n\\beqno{nequal2conda}\\|U^{p}\\|_{L^1(\\Og)}\\le C_0,\\;p=\\ag_0(s+k+2)\\mbox{ and }p=s+l+1,\\end{equation} then \\mref{nequal2est}, with the assumption that $f(u)\\lesssim U^l$, implies \\beqno{nequal2condb}\\iidx{\\Og}{U^{k+s}|Du|^2}\\le C(C_0).\\end{equation}\n\nWe will first show that the assumption \\mref{keyn2lnorm0}, that $\\|u\\|_{L^{l_0}(\\Og,\\mu)}\\le C_0$ for some $l_0>\\max\\{l,l-k-1\\}$, implies \\mref{nequal2condb} for some $s=s_0>\\max\\{-1,-k-2\\}$. Indeed, for any such $l_0$ we have $s_0+l+1\\le l_0$ if $s_0$ is close to $\\max\\{-1,-k-2\\}$. Moreover, $s_0$ satisfies \\mref{sconds}. This clearly holds if $s_0\\le0$, i.e. $k\\ge-2$. Otherwise, the assumption $k>-2C_*\/(C_*-1)$ yields that $C_*^{-1}>s_0\/(s_0+2)$ if $s_0$ is close to $-k-2>0$. Thus, \\mref{nequal2est} holds for such $s_0$. We also choose $\\ag_0\\in(0,1)$ sufficiently small such that $\\ag_0(s_0+k+2)\\le l_0$. 
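A concrete numerical instance of these choices (illustrative values only, not part of the proof): for $C_*=2$ the hypothesis reads $k>-2C_*\/(C_*-1)=-4$, and with $k=-3$ one can take $s_0$ slightly above $\\max\\{-1,-k-2\\}=1$.

```python
# Illustrative instance of the choice of s_0 and alpha_0; all numbers are sample values.
C_star = 2.0
k = -3.0
assert k > -2.0 * C_star / (C_star - 1.0)   # k > -4

s0 = -k - 2.0 + 0.01                         # s_0 close to -k-2 > 0
check_sconds = (s0 > -1.0) and (1.0 / C_star > s0 / (s0 + 2.0))

l0 = 2.0                                     # a sample l_0
alpha0 = 0.5                                 # alpha_0*(s_0+k+2) = 0.005 <= l_0
check_alpha = 0.0 < alpha0 < 1.0 and alpha0 * (s0 + k + 2.0) <= l0
print(check_sconds, check_alpha)  # True True
```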
With these choices of $\\ag_0, s_0$ and the assumption \\mref{keyn2lnorm0}, we see that \\mref{nequal2conda} and then \\mref{nequal2condb} hold for $s=s_0$.\n\nAs $k+s_0>-2$, we can find $\\ag\\in(0,1)$ such that $-2\\ag\\le k+s_0$. Therefore, \\mref{nequal2condb} with $s=s_0$ yields \\mref{pis20az5} of \\refcoro{dlec1coro} for $n=2$ because $$\\iidx{\\Og}{(\\llg_0+|u|)^{-2\\ag}|Du|^2} \\le \\iidx{\\Og}{(\\llg_0+|u|)^{k+s_0}|Du|^2}\\le C(C_0).$$\n\nWe now check the integrability conditions \\mref{llgmainhyp009} and \\mref{llgmainhyp0099} of \\reftheo{dleNL-mainthm} which read\n\\beqno{llgmainhyp00} \\|\\llg^{-1}(u)\\|_{L^{\\frac n2}(\\Og,\\mu)},\\;\\||f_u(u)|\\llg^{-1}(u)\\|_{L^{r_0}(\\Og,\\mu)}\\le C_0,\\end{equation}\n\\beqno{llgB30}\\iidmu{\\Og}{(|f_u(u)|+\\llg(u))|Du|^{2}}\\le C_0,\\end{equation} \n\\beqno{llgB40}\\iidmu{\\Og}{(\\llg(u)|u|^2)^{\\bg_0}}\\le C_0.\\end{equation}\n\n\n\n\nBecause $n=2$, we have the inequality $\\|w\\|_{L^q(\\Og)} \\le C\\|Dw\\|_{L^2(\\Og)}+C\\|w^\\bg\\|_{L^1(\\Og)}^\\frac{1}{\\bg}$ which holds for all $q\\ge1$ and $\\bg\\in(0,1)$. Applying this to $w=|U|^{(k+s_0)\/2+1}$ and using \\mref{nequal2condb} and the assumption \\mref{keyn2lnorm0}, we see that $\\|U^q\\|_{L^1(\\Og)}\\le C(C_0)$ for all $q\\ge1$. By H\\\"older's inequality this is also true for $q\\ge0$. It is also true for $q<0$ because $U\\ge\\llg_0>0$. We then have\\beqno{n2allq} \\|U^q\\|_{L^1(\\Og)}\\le C(C_0,\\llg_0,q) \\quad \\mbox{for all $q$}.\\end{equation}\n\nThe above then immediately implies the integrability conditions \\mref{llgmainhyp00} and \\mref{llgB40} because $\\llg(u)$ and $|f_u(u)|$ are powers of $U$.\n\n\nSimilarly, \\mref{n2allq} implies that \\mref{nequal2conda} holds for any $p$ so that \\mref{nequal2condb} is valid if $s\\ge0$ and $C_*^{-1}>s\/(s+2)$. 
To verify \\mref{llgB30} we need to find a constant $C(C_0)$ such that \\beqno{llgB30zz}\\iidx{\\Og}{(\\llg_0+|u|)^{l-1}|Du|^2}+\\iidx{\\Og}{(\\llg_0+|u|)^k|Du|^{2}}\\le C(C_0).\\end{equation} Letting $s=0$ in \\mref{nequal2condb}, we see that the second integral on the left hand side is bounded by a constant $C(C_0)$. If $l\\le 1+k$ then the first integral in \\mref{llgB30zz} is bounded by the second one and we obtain the desired bound. If $l>k+1$ we let $s=l-k-1$ in \\mref{nequal2condb}. The condition on $s$ in \\mref{sconds} holds because $$\\frac{s}{s+2}n\/2$ and $C_0$ such that\n\\beqno{llgn3} \\|f_u(u)\\llg^{-1}(u)\\|_{L^{r_0}(\\Og)}\\le C_0.\\end{equation}\nThen for any $\\bg_0\\in(0,1]$ there exists a constant $C(C_0,\\bg_0)$, such that \\beqno{DUUest}\\|DP(u)\\|_{L^\\frac{2n}{n-2}(\\Og)}\\le C(C_0,\\bg_0)\\||P(u)|^{\\bg_0}\\|_{L^1(\\Og)}.\\end{equation} \\end{lemma}\n{\\bf Proof:~~} In the sequel, we write $U=[U_i]_{i=1}^m$, $U_i=P_i(u)$. Multiplying the $i$-th equation in \\mref{genSKT} by $-\\Delta U_i$, integrating over $\\Og$ and summing the results, we get\n$$\\iidx{\\Og}{|\\Delta U|^2}=-\\sum_i\\iidx{\\Og}{\\myprod{B_i(u,Du),\\Delta U_i}}-\\sum_i\\iidx{\\Og}{\\myprod{f_i(u),\\Delta U_i}}.$$ Applying integration by parts to the last integral, we have $$\\iidx{\\Og}{|\\Delta U|^2}=-\\sum_i\\iidx{\\Og}{\\myprod{B_i(u,Du),\\Delta U_i}}+\\sum_{i,j}\\iidx{\\Og}{\\myprod{(f_i)_{u_j}(u)Du_j,D U_i}}.$$\n\nThe condition A) implies $\\llg(u)|Du|^2\\le \\myprod{A(u)Du,Du}=\\myprod{DU,Du}$ and so Young's inequality yields $\\llg(u)|Du|^2\\le \\frac12\\llg^{-1}(u)|DU|^2+\\frac12\\llg(u)|Du|^2$. 
We then have $\\llg(u)|Du|\\le |DU|$.\nUsing this fact, the assumption that $|B_i(u,Du)|\\le C\\llg(u)|Du|$ and applying Young's inequality to the first integral on the right hand side of the above, we get $$\\|\\Delta U\\|_{L^2(\\Og)}^2\\lesssim\\iidx{\\Og}{|DU|^2}+\\iidx{\\Og}{|f_u(u)\\llg^{-1}(u)||DU|^2}.$$\nBy H\\\"older's inequality and \\mref{llgn3}, the last integral is estimated by\n$$ \\mypar{\\iidx{\\Og}{|f_u(u)\\llg^{-1}(u)|^{r_0}}}^\\frac{1}{r_0}\\|DU\\|_{L^{2r'_0}(\\Og)}^2 \\le C(C_0)\\|DU\\|_{L^{2r'_0}(\\Og)}^2.$$\n\nUsing Schauder's inequality $\\|D^2 U\\|_{L^2(\\Og)}\\le C\\|\\Delta U\\|_{L^2(\\Og)}$, we obtain from the above two inequalities that\n\\beqno{D2UPP}\\|D^2U\\|_{L^2(\\Og)}^2\\lesssim \\|DU\\|_{L^2(\\Og)}^2+C(C_0)\\|DU\\|_{L^{2r'_0}(\\Og)}^2.\\end{equation}\n\nWe recall the following interpolating Sobolev inequality: for any $\\eg>0$ \\beqno{intineq}\\|W\\|_{L^p(\\Og)}\\le \\eg\\|DW\\|_{L^2(\\Og)} + C(\\eg)\\|W^\\bg\\|_{L^1(\\Og)}^\\frac{1}{\\bg} \\mbox{ for any $p\\in[1,\\frac{2n}{n-2})$ and $\\bg\\in(0,1]$}.\\end{equation} \n\nBecause $r_0>n\/2$, $2r'_0<2n\/(n-2)$ so that we can apply \\mref{intineq} to $W=DU$ with $p=2$, $p=2r'_0$, $\\bg=1$ and $\\eg$ is sufficiently small in \\mref{D2UPP} to see that $\\|D^2U\\|_{L^2(\\Og)}^2\\le C(C_0)\\|DU\\|_{L^1(\\Og)}^2$. As $\\|D U\\|_{L^1(\\Og)}\\le \\eg\\|D^2U\\|_{L^2(\\Og)} + C(\\eg)\\|U\\|_{L^1(\\Og)}$,\nwe obtain for small $\\eg$ that $\\|D^2U\\|_{L^2(\\Og)}\\le C(C_0)\\|U\\|_{L^1(\\Og)}$.\nSobolev's embedding theorem then yields $$\\|D U\\|_{L^\\frac{2n}{n-2}(\\Og)}\\le C(C_0)\\|U\\|_{L^1(\\Og)}.$$\nApplying \\mref{intineq} again, with $W=|U|$, $p=1$ and $\\bg=\\bg_0$, to estimate the norm $\\|U\\|_{L^1(\\Og)}$ and noting that $\\|DU\\|_{L^2(\\Og)}\\lesssim\\|DU\\|_{L^\\frac{2n}{n-2}(\\Og)}$, we obtain \\mref{DUUest}. \n\\begin{flushright} Q.E.D. \\end{flushright}\n\n\n{\\bf Proof of \\reftheo{n3SKT}:} The proof is again based on \\reftheo{gentheo1}. 
The assumptions i) and ii) state \\beqno{uintcondk}\\left\\{\\begin{array}{ll}\\|u\\|_{L^1(\\Og)}\\le C_0 & \\mbox{if $k\\ge0$,}\\\\\\|u\\|_{L^{-kn\/2}(\\Og)}\\le C_0 &\\mbox{if $k\\in[-1,0)$,}\\end{array} \\right.\\end{equation} and clearly imply\n \\beqno{fuhyp0z}\\|\\llg^{-1}(u)\\|_{L^{\\frac{n}{2}}(\\Og)}\\le C(C_0,\\llg_0).\\end{equation} This and the assumption \\mref{fuhyp0} provide the integrability condition \\mref{llgmainhyp0} of M.0). Concerning the integrability condition \\mref{llgB3} in M.0), we use H\\\"older's inequality, writing $\\llg(u)|Du|^2= \\llg^{-1}(u)\\llg^2(u)|Du|^2$, and \\mref{fuhyp0z} and \\mref{DUUest} to have\n $$\\iidx{\\Og}{\\llg(u)|Du|^2}\\le\\|\\llg^{-1}(u)\\|_{L^\\frac{n}{2}(\\Og)}\\|\\llg(u)Du\\|_{L^\\frac{2n}{n-2}(\\Og)}^2\\le C_0\\|DP(u)\\|_{L^\\frac{2n}{n-2}(\\Og)}^2\\le C(C_0).$$\n \n Similarly, the integral of $|f_u(u)||Du|^2$ can be estimated by\n $$\\|\\llg^{-2}(u)f_u(u)\\|_{L^\\frac{n}{2}(\\Og)}\\|\\llg(u)Du\\|_{L^\\frac{2n}{n-2}(\\Og)}^2\\le C(C_0)\\|\\llg^{-2}(u)f_u(u)\\|_{L^\\frac{n}{2}(\\Og)}.$$\n \n If $k\\ge0$ then $\\llg^{-2}(u)|f_u(u)|\\le C(\\llg_0)\\llg^{-1}(u)|f_u(u)|$ so that the last norm in the above is bounded, thanks to \\mref{fuhyp0}. If $k<0$ then this norm is bounded by the assumption \\mref{fuhyp0k}. We conclude that the condition \\mref{llgB3} in M.0) holds.\n \n \nWe discuss the condition M.1). First of all, \nbecause $|P(u)|\\le \\llg(u)|u|\\le (\\llg_0+|u|)^{k+1}$, \\mref{uintcondk} also shows that for any positive and sufficiently small $\\bg_0$ \\beqno{Puhypz} \\|(\\llg_0^{k+1}+|P(u)|)^{\\bg_0}\\|_{L^1(\\Og)}\\le C(\\llg_0,C_0).\\end{equation}\n\nNext, using the Poincar\\'e-Sobolev inequality as in the proof of \\refcoro{dlec1coro}, we show that $K(u)$ has small BMO norm in small balls by estimating\n\\beqno{3dlog}\\|DK_i(u)\\|_{L^n(\\Og)} \\le \\|(\\llg_0^{k+1}+|P_i(u)|)^{-1}DP_i(u)\\|_{L^n(\\Og)}\\le C(\\llg_0)\\|DP_i(u)\\|_{L^n(\\Og)}.\\end{equation}\nBecause $n\\le4$, $n\\le 2n\/(n-2)$. 
By the assumption \\mref{fuhyp0}, $\\|f_u(u)\\llg^{-1}(u)\\|_{L^{r_0}(\\Og)}\\le C_0$, \\reflemm{DU6lem} shows that $\\|DP(u)\\|_{L^n(\\Og)}\\le C(C_0,\\bg_0)\\||P(u)|^{\\bg_0}\\|_{L^1(\\Og)}$. This and \\mref{Puhypz} and \\mref{3dlog} provide a constant $C(\\llg_0,C_0)$ such that $\\|DK(u)\\|_{L^n(\\Og)}\\le C(\\llg_0,C_0)$. We then see that $K(u)$ has small BMO norm in small balls.\n\n\n\nConcerning the weight $\\myPi_p$, because $|Du|\\lesssim\\llg^{-1}(u)|DP(u)|$, $k+1\\ge0$, we have\n$$|D(\\log(\\llg_0+|u|))|=\\frac{|Du|}{\\llg_0+|u|}\\lesssim \\frac{|DP(u)|}{(\\llg_0+|u|)\\llg(u)}\\sim \\frac{|DP(u)|}{(\\llg_0+|u|)^{k+1}}\\le \\llg_0^{-k-1}|DP(u)|.$$ \\reflemm{DU6lem} then shows that $D\\log(\\llg_0+|u|)\\in L^n(\\Og)$ so that $\\log(\\llg_0+|u|)$ has small BMO norm in small balls. \\reflemm{Westlemm} applies to yield that $[\\myPi_p^\\ag]_{\\bg+1,B_R}\\sim [(\\llg_0+|u|)^{k_p\\ag}]_{\\bg+1,B_R}$ is bounded for any given $\\ag,\\bg>0$ if $R$ is sufficiently small. The assumption M.1) of \\reftheo{gentheo1} is verified.\n\n\nWe thus establish the conditions of \\reftheo{gentheo1} and complete the proof. \\begin{flushright} Q.E.D. \\end{flushright}\n\n\\section{Appendix}\\label{appsec}\\eqnoset\n\nLet $m,l$ be any integers. For $X=[X_i]_{i=1}^m$, $X_i\\in {\\rm I\\kern -1.6pt{\\rm R}}^l$ and for any $C^1$ function $k:{\\rm I\\kern -1.6pt{\\rm R}}^+\\to{\\rm I\\kern -1.6pt{\\rm R}}$ we consider the maps\n\\beqno{kdef} K(X)=k(|X|)|X|^{-1}X,\\;\\zeta=|X|^{-1}X=[\\zeta_i]_{i=1}^m.\\end{equation}\n\nWe see that $D_X(|X|)=\\zeta$ and $D_X\\zeta=|X|^{-1}(I-\\zeta\\zeta^T)$, where $\\zeta\\zeta^T=[\\myprod{\\zeta_i,\\zeta_j}]$. 
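The two derivative formulas just stated can be verified numerically by central finite differences; the Python sketch below does so for one sample vector $X$ (the test point and step size are illustrative).

```python
import math

def norm(X):
    return math.sqrt(sum(x * x for x in X))

def zeta(X):
    n = norm(X)
    return [x / n for x in X]

# Analytic Jacobian from the text: D_X zeta = |X|^{-1} (I - zeta zeta^T)
def jac_analytic(X):
    n, z, m = norm(X), zeta(X), len(X)
    return [[(float(i == j) - z[i] * z[j]) / n for j in range(m)] for i in range(m)]

# Central finite differences of zeta(X) = X/|X|
def jac_numeric(X, h=1e-6):
    m = len(X)
    J = [[0.0] * m for _ in range(m)]
    for j in range(m):
        Xp, Xm = list(X), list(X)
        Xp[j] += h
        Xm[j] -= h
        zp, zm = zeta(Xp), zeta(Xm)
        for i in range(m):
            J[i][j] = (zp[i] - zm[i]) / (2 * h)
    return J

X = [1.0, 2.0, 2.0]   # |X| = 3, zeta = (1/3, 2/3, 2/3)
Ja, Jn = jac_analytic(X), jac_numeric(X)
err = max(abs(Ja[i][j] - Jn[i][j]) for i in range(3) for j in range(3))

# D_X(|X|) = zeta: check the first partial derivative numerically
h = 1e-6
dn = (norm([1.0 + h, 2.0, 2.0]) - norm([1.0 - h, 2.0, 2.0])) / (2 * h)
print(err < 1e-6, abs(dn - zeta(X)[0]) < 1e-6)  # True True
```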
Hence, \n$$D_XK(X)= k(|X|)D_X\\zeta+D_Xk(|X|)\\zeta^T=k(|X|)|X|^{-1}(I-\\zeta\\zeta^T)+k'(|X|)\\zeta\\zeta^T.$$ \nWe then introduce the notations \\beqno{kudef}b=k'(|X|)|X|\/k(|X|),\\; \\ccK(\\zeta)=I+(b-1)\\zeta\\zeta^T.\\end{equation} Therefore, the calculation for $D_XK(X)$ yields \\beqno{kxdef} D_XK(X)=k(|X|)|X|^{-1}(I+(b-1)\\zeta\\zeta^T)=k(|X|)|X|^{-1}\\ccK(\\zeta).\\end{equation}\n\n\nIf $k(|X|)\\ne 0$ and $k'(|X|)\\ne 0$ then $\\ccK(\\zeta)$ is invertible. \nWe can use the Sherman-Morrison formula $(I+wv^T)^{-1}=I-(1+v^Tw)^{-1}wv^T$, setting $w=(b-1)\\zeta$ and $v=\\zeta$, to see that \\beqno{DXKinv} (D_XK(X))^{-1}=\\frac{|X|}{k(|X|)}(I+(b^{-1}-1)\\zeta\\zeta^T).\\end{equation} \n\nOtherwise, if $k(|X|)=0$ (resp. $k'(|X|)=0$) then $D_XK(X)= k'(|X|)\\zeta\\zeta^T$ (resp. $D_XK(X)= k(|X|)|X|^{-1}(I-\\zeta\\zeta^T)$) and $D_XK(X)$ is not invertible.\n\nThe following lemma was used to check the condition H) for the map $K_{\\eg_0}(u)$ in the proof of \\reftheo{dleNL-mainthm}.\n\\blemm{H2lemm} For any $\\eg_0,\\llg_0>0$ let $k(t)=|\\log(t)|+\\eg_0$ and $X(u)=[\\llg_0+|u_i|]_{i=1}^m$ in \\mref{kdef}. There exists a constant $C(\\eg_0)$ such that the map $\\mathbb{K}(u)=(K_u(X(u))^{-1})^T$ satisfies \\beqno{H2check}|\\mathbb{K}(u)|\\le C(\\eg_0)|X|,\\; \\|\\mathbb{K}_u(u)\\|_{L^\\infty({\\rm I\\kern -1.6pt{\\rm R}}^m)}\\le C(\\eg_0).\\end{equation}\n\n\\end{lemma}\n\n{\\bf Proof:~~} As $k'(t)=\\mbox{sign}(\\log t) t^{-1}$, we have $$b^{-1}=k(|X|)(k'(|X|)|X|)^{-1}=\\mbox{sign}(\\log(|X|)) (|\\log(|X|)|+\\eg_0).$$ \n\nDefine $X_u=\\mbox{diag}[\\mbox{sign} u_i]$. We have from \\mref{DXKinv} and the definition $X=[\\llg_0+|u_i|]_{i=1}^m$ that\n$$\\mathbb{K}(u)=(X_u)^{-1}\\frac{|X|}{|\\log(|X|)|+\\eg_0}(I+(\\mbox{sign}(\\log(|X|)) (|\\log(|X|)|+\\eg_0)-1)\\zeta\\zeta^T).$$\n\nAs $\\eg_0>0$, we easily see that $|\\mathbb{K}(u)|\\le C(\\eg_0)|X|$ for some constant $C(\\eg_0)$. 
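An aside on the algebra behind \\mref{DXKinv}, which was used above: since $(\\zeta\\zeta^T)^2=\\zeta\\zeta^T$, one can check directly that $(I+(b-1)\\zeta\\zeta^T)(I+(b^{-1}-1)\\zeta\\zeta^T)=I$. A small numerical sketch (the unit vector and the value of $b$ are illustrative):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

# zeta is a unit vector, so P = zeta zeta^T is a rank-one projection (P^2 = P)
z = [1.0 / 3, 2.0 / 3, 2.0 / 3]
P = [[z[i] * z[j] for j in range(3)] for i in range(3)]
I = [[float(i == j) for j in range(3)] for i in range(3)]

b = 2.5  # any b != 0
A    = [[I[i][j] + (b - 1.0) * P[i][j] for j in range(3)] for i in range(3)]        # I + (b-1) P
Ainv = [[I[i][j] + (1.0 / b - 1.0) * P[i][j] for j in range(3)] for i in range(3)]  # I + (b^{-1}-1) P

prod = matmul(A, Ainv)
err = max(abs(prod[i][j] - I[i][j]) for i in range(3) for j in range(3))
print(err < 1e-9)  # True: the product is the identity
```

The coefficient of $P$ in the product is $(b-1)+(b^{-1}-1)+(b-1)(b^{-1}-1)=0$, which is exactly the Sherman-Morrison computation with $w=(b-1)\\zeta$, $v=\\zeta$.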
A straightforward calculation also shows that $\\|\\mathbb{K}_u(u)\\|_{L^\\infty({\\rm I\\kern -1.6pt{\\rm R}}^m)}\\le C(\\eg_0)$. \\begin{flushright} Q.E.D. \\end{flushright}\n\n\\brem{H2lemmrem}\nSuppose $\\llg(u)\\sim (\\llg_0+|u|)^k\\sim |X(u)|^k$ with $k\\ne0$, and set $\\LLg(u)= \\llg^\\frac12(u)$ and $\\Fg(u)=|\\LLg_u(u)|$. We then have $\\LLg(u)\\Fg^{-1}(u)\\sim |X(u)|$ and $\\Fg(u)|\\Fg_u(u)|^{-1}\\sim |X(u)|$. We obtain from \\mref{H2check} that $|\\mathbb{K}(u)|\\lesssim \\LLg(u)\\Fg^{-1}(u)$ and $|\\mathbb{K}(u)||\\Fg_u(u)|\\lesssim\\Fg(u)$.\nThus, the assumptions on the map $K$ for the local Gagliardo-Nirenberg inequality are verified here. \\erem\n\n\nWe then need that $K(U)$ is BMO and $\\myPi^\\ag$ is a weight. By \\mref{kudef} $$K_U(U) = \\frac{|\\log(|X|)|+\\eg_0}{|X|}\\left(I + (\\frac{\\mbox{sign}(\\log(|X|))}{|\\log(|X|)|+\\eg_0}-1)\\zeta\\zeta^T\\right)X_U, \\quad X=[\\llg_0+|U_i|]_{i=1}^m.$$\n\nRecall that $$\\myPi=\\LLg^{p+1}(U)\\Fg^{-p}(U)\\sim |U|^p\\llg^\\frac12(U),\\; \\ag>2\/(p+2),\\; \\bg0$ then \\beqno{DXDK}\\myprod{DX,D(K(X))}\\ge0.\\end{equation} Moreover, for $\\ag=1-(\\frac{b-1}{b+1})^2$\n\\beqno{DXDK1}\\myprod{DX,D(K(X))}\\ge \\ag^\\frac12 |DX||D(K(X))|. \\end{equation}\n\n\\end{lemma}\n\n{\\bf Proof:~~} By \\mref{kxdef} $D(K(X))=\\mmk(|X|)\\ccK(\\zeta)DX$, where $\\mmk(t):=k(t)t^{-1}$, and so\n$$\\myprod{DX,D(K(X))}=\\mmk(|X|)(|DX|^2+(b-1)\\myprod{DX,\\zeta\\zeta^T DX}).$$ Note that $|\\zeta\\zeta^T|\\le1$ so that $\\myprod{DX,D(K(X))}\\ge0$ if $s=b-1>-1$. This gives \\mref{DXDK}.\n\nSince $\\zeta\\zeta^T$ is a projection, i.e. 
$(\\zeta\\zeta^T)^2=\\zeta\\zeta^T$, we have, setting $J=\\myprod{\\zeta\\zeta^T DX,DX}$\n$$\\begin{array}{lll}|D(K(X))|^2&=&|K_X(X)DX|^2=\\mmk^2(|X|)\\myprod{\\ccK(\\zeta)DX,\\ccK(\\zeta)DX}\\\\&=&\n\\mmk^2(|X|)(|DX|^2+(2s+s^2)J).\\end{array}$$\nHence, we can write $\\myprod{DX,D(K(X))}^2-\\ag |DX|^2|D(K(X))|^2$ as\n$$\\mmk^2(|X|)[(1-\\ag)|DX|^4+(2s-\\ag(2s+s^2))|DX|^2 J+s^2J^2].$$\nIf we choose $\\ag=1-(\\frac{s}{s+2})^2$ then the above is $\\mmk^2(|X|)\\left(\\frac{s}{s+2}|DX|^2-sJ\\right)^2\\ge0$.\nTherefore $\\myprod{DX,D(K(X))}^2\\ge \\ag |DX|^2|D(K(X))|^2$. This and \\mref{DXDK} yield \\mref{DXDK1}. \\begin{flushright} Q.E.D. \\end{flushright}\n\nWe now consider a matrix $A$ satisfying for some positive $\\llg,\\LLg$ and any vector $\\chi$\\beqno{Aellcond} \\myprod{A\\chi,\\chi}\\ge \\llg|\\chi|^2,\\quad |A\\chi|\\le \\LLg|\\chi|.\\end{equation} \n\n\\blemm{AuK} Assume \\mref{Aellcond} and that $b>0$ (see \\mref{kudef}). For $\\kappa=\\llg\/\\LLg^2$ and $\\nu=\\llg\/\\LLg$ we have $$\\myprod{\\kappa A DX,D(K(X))}\\ge (\\ag^\\frac12-(1-\\nu^2)^\\frac12) |DX||D(K(X))|.$$\n\\end{lemma}\n\n{\\bf Proof:~~} From \\mref{Aellcond} with $\\chi=DX$, we note that $$\\begin{array}{lll}|\\kappa ADX-DX|^2 &=& \\kappa^2|ADX|^2-2\\kappa\\myprod{ADX,DX}+|DX|^2\\\\&\\le& (\\kappa^2\\LLg^2-2\\kappa\\llg+1)|DX|^2=(1-\\nu^2)|DX|^2.\\end{array}$$\n\nTherefore, using \\mref{DXDK1}$$\\begin{array}{lll}\\myprod{\\kappa ADX,D(K(X))}&=&\\myprod{\\kappa ADX-DX,D(K(X))}+\\myprod{DX,D(K(X))}\\\\&\\ge& -|\\kappa ADX-DX||DK(X)|+\\ag^\\frac12 |DX||D(K(X))|.\\end{array}$$ As $|\\kappa ADX-DX|^2\\le (1-\\nu^2)|DX|^2$, we obtain the lemma. \\begin{flushright} Q.E.D. \\end{flushright}\n\nLet $k(t)=|t|^{s+1}$ then $K(X)=|X|^sX$ and $b=s+1$. The above lemma then gives the following result which was used in the energy estimate.\n\\blemm{SGrem} Assume \\mref{Aellcond}. 
If $s>-1$ and $\\nu=\\frac{\\llg}{\\LLg}>\\frac{s}{s+2}$, then \\beqno{SGrem0}\\myprod{A DX,D(|X|^sX)}\\ge c_0\\frac{\\LLg^2}{\\llg}|X|^s|DX|^{2},\\end{equation} where $c_0=(1-(\\frac{s}{s+2})^2)^\\frac12-(1-\\nu^2)^\\frac12>0$.\n\\end{lemma}\n\n\n\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Appendix}\n\\irlabel{cut|cut}\n\nThis appendix briefly discusses generalized uses and forms of the differential ghost axioms and how it generalizes the differential auxiliaries proof rule \\cite{DBLP:journals\/lmcs\/Platzer12}.\n\n\\paragraph{Differential Lipschitz Ghosts}\nThe differential ghost axiom \\irref{DG} generalizes to arbitrary Lipschitz-continuous differential equations \\m{\\pevolve{\\D{y}=g(x,y)}}:\n\\[\n \\cinferenceRule[DLG|DG$_\\ell$]{differential Lipschitz ghost variables} %\n {\\linferenceRule[lpmil]\n {\\big({\\dbox{\\pevolvein{\\D{x}=f(x)}{q(x)}}{p(x)}} \\lbisubjunct \n {\\lexists{y}{\\dbox{\\pevolvein{\\D{x}=f(x)\\syssep\\D{y}=g(x,y)}{q(x)}}{p(x)}}}\\big)}\n {\\lexists{\\ell}{\\lforall{x,y,z}{\\abs{g(x,y)-g(x,z)} \\leq \\ell\\abs{y-z}}}}\n }\n {}%\n\\]\nThe soundness argument for \\irref{DLG} is an extension of the soundness proof for \\irref{DG}.\nThe direction ``$\\lylpmi$'' of \\irref{DG} is sound for all differential equations.\nThe proof for the direction ``$\\limply$'' extends the proof for \\irref{DG} with an adaptation of the function $F$ from \\rref{eq:diffghost-extra-ODE} to the differential equation \\m{\\pevolve{\\D{y}=g(x,y)}}:\n\\begin{equation}\n\\begin{aligned}\n y(0) &=d\\\\\n \\D{y}(t) &= F(t,y(t)) \\mdefeq \\ivaluation{\\imodif[state]{\\Iff[t]}{y}{y(t)}}{g(x,y)}\n\\end{aligned}\n\\label{eq:Lipschitz-diffghost-extra-ODE}\n\\end{equation}\nThis function $F(t,\\delta)$ is still continuous on $[0,r]\\times\\reals$ since it is a composition of the continuous evaluation (of the, by assumption, continuous term $g(x,y)$) with the (continuous) composition of the continuous function 
$\\iget[state]{\\Iff[t]}$ of $t$ with the continuous modification of the value of variable $y$ to $\\delta$.\nBy assumption $F(t,y)$ is Lipschitz in $y$, since there is an $\\ell\\in\\reals$ such that for all $t,a,b\\in\\reals$:\n\\begin{multline*}\n\\abs{F(t,a)-F(t,b)} = \\abs{ \\ivaluation{\\imodif[state]{\\Iff[t]}{y}{a}}{g(x,y)} - \\ivaluation{\\imodif[state]{\\Iff[t]}{y}{b}}{g(x,y)} }\n= \\abs{\\ivaluation{\\imodif[state]{\\imodif[state]{\\Iff[t]}{y}{a}}{z}{b}}{g(x,y) - g(x,z)}}\n\\\\\n= \\ivaluation{\\imodif[state]{\\imodif[state]{\\Iff[t]}{y}{a}}{z}{b}}{\\underbrace{\\abs{g(x,y) - g(x,z)}}_{\\leq\\ell\\ivaluation{\\imodif[state]{\\imodif[state]{\\Iff[t]}{y}{a}}{z}{b}}{\\abs{y-z}}}}\n\\leq \\ell \\abs{a-b}\n\\end{multline*}\nThis establishes the only two properties of $F$ that the soundness proof of \\irref{DG} was based on.\nThe existence of a solution \\(y:[0,r]\\to\\reals\\) of \\rref{eq:Lipschitz-diffghost-extra-ODE} is, thus, established again by Picard-Lindel\\\"of as needed for the soundness proof.\n\n\n\\paragraph{Differential Auxiliaries Rule}\nThe differential auxiliaries proof rule \\cite{DBLP:journals\/lmcs\/Platzer12} is derivable from \\irref{DG} and monotonicity \\irref{M}.\n\n\\begin{calculuscollections}{10cm}\n\\begin{calculus}\n\\cinferenceRule[diffaux|$DA$]{differential auxiliary variables}\n{\\linferenceRule[sequent]\n {\\lsequent[s]{}{p(x)\\lbisubjunct\\lexists{y}{r(x,y)}}\n &\\lsequent{r(x,y)} {\\dbox{\\hevolvein{\\D{x}=f(x)\\syssep\\D{y}=g(x,y)}{q(x)}}{r(x,y)}}}\n {\\lsequent{p(x)} {\\dbox{\\hevolvein{\\D{x}=f(x)}{q(x)}}{p(x)}}}\n}{}%\n\\end{calculus}\n\\end{calculuscollections}\n\n\\noindent\nwhere $y$ is new and \\m{\\hevolve{\\D{y}=g(x,y),y(0)=y_0}} has a solution $y:[0,\\infty)\\to\\reals^n$ for each $y_0$.\n\nThe derivation proceeds as follows (the middle premise uses \\irref{vacuousexists} with $y\\not\\in p(x)$):\n\n{%\n\\renewcommand{\\linferPremissSeparation}{~}%\n\\begin{sequentdeduction}[array]\n\\linfer[cut]\n {\\lsequent{} 
{p(x)\\lbisubjunct\\lexists{y}{r(x,y)}}\n !\\linfer[existsinst]%\n {\\linfer[DG]\n {\\linfer[existsinst]%\n {\\linfer[M]\n {\\linfer[vacuousexists]%\n {\\lsequent{\\lexists{y}{r(x,y)}} {p(x)}}\n {\\lsequent{r(x,y)} {p(x)}}\n !\\lsequent{r(x,y)} {\\dbox{\\pevolvein{\\D{x}=f(x)\\syssep\\D{y}=g(x,y)}{q(x)}}{r(x,y)}}\n }\n {\\lsequent{r(x,y)} {\\dbox{\\pevolvein{\\D{x}=f(x)\\syssep\\D{y}=g(x,y)}{q(x)}}{p(x)}}}\n }\n {\\lsequent{r(x,y)} {\\lexists{y}{\\dbox{\\pevolvein{\\D{x}=f(x)\\syssep\\D{y}=g(x,y)}{q(x)}}{p(x)}}}}\n }\n {\\lsequent{r(x,y)} {\\dbox{\\pevolvein{\\D{x}=f(x)}{q(x)}}{p(x)}}}\n }\n {\\lsequent{\\lexists{y}{r(x,y)}} {\\dbox{\\pevolvein{\\D{x}=f(x)}{q(x)}}{p(x)}}}\n }\n {\\lsequent{p(x)} {\\dbox{\\pevolvein{\\D{x}=f(x)}{q(x)}}{p(x)}}}\n\\end{sequentdeduction}\n}%\nUsing the following duals of \\irref{allinst} and \\irref{vacuousall} as well as monotonicity rule \\irref{M} \\cite{DBLP:journals\/tocl\/Platzer15} that derives from \\irref{G+K}:\n\n\\begin{calculuscollections}{\\columnwidth}\n \\begin{calculus}\n \\cinferenceRule[existsinst|$\\exists$i]{existential instantiation}\n {p(f) \\limply (\\lexists{x}{p(x)})}\n {}\n \\cinferenceRule[vacuousexists|V$_\\exists$]{vacuous existential quantifier}\n {\\lexists{x}{p} \\limply p}\n {}%\n \\cinferenceRule[M|M]{$\\dbox{}{}$ monotonic \/ $\\dbox{}{}$-generalization} %\n {\\linferenceRule[formula]\n {\\phi\\limply\\psi}\n {\\dbox{\\alpha}{\\phi}\\limply\\dbox{\\alpha}{\\psi}}\n }{}\n \\end{calculus}\n\\end{calculuscollections}\n\n\\section{Introduction}\n\n\\emph{Differential dynamic logic} (\\dL) \\cite{DBLP:journals\/jar\/Platzer08,DBLP:conf\/lics\/Platzer12b} is a logic for proving correctness properties of hybrid systems.\nIt has a sound and complete proof calculus relative to differential equations \\cite{DBLP:journals\/jar\/Platzer08,DBLP:conf\/lics\/Platzer12b} and a sound and complete proof calculus relative to discrete systems \\cite{DBLP:conf\/lics\/Platzer12b}.\nBoth sequent calculi 
\\cite{DBLP:journals\/jar\/Platzer08} and Hilbert-type axiomatizations \\cite{DBLP:conf\/lics\/Platzer12b} have been presented for \\dL but only the former has been implemented.\nThe implementation of \\dL's sequent calculus in \\KeYmaera\\iflongversion \\cite{DBLP:conf\/cade\/PlatzerQ08}\\fi makes it straightforward for users to prove properties of hybrid systems, because it provides rules performing natural decompositions for each operator.\nThe downside is that the implementation of the rule schemata and their side conditions on occurrence constraints and relations of reading and writing of variables as well as rule applications in context is nontrivial and inflexible in \\KeYmaera.\n\nThe goal of this paper is to identify how to make it straightforward to implement the axioms and proof rules of differential dynamic logic by writing down a finite list of \\emph{axioms} (concrete formulas, not axiom schemata that represent an infinite list of axioms subject to sophisticated soundness-critical schema variable matching implementations).\nSuch axioms need to be combined with one another to obtain the effect that a user would want for proving a hybrid system conjecture.\nThis paper argues that this is still a net win for hybrid systems, because a substantially simpler prover core is easier to implement correctly, and the need to combine multiple axioms to obtain user-level proof steps can be achieved equally well by appropriate tactics, which are not soundness-critical.\n\nTo achieve this goal, this paper follows observations for differential game logic \\cite{DBLP:journals\/corr\/Platzer14:dGL} that highlight the significance and elegance of \\emph{uniform substitutions}, a classical proof rule for first-order logic \\cite[\\S35,40]{Church_1956}.\nUniform substitutions uniformly instantiate predicate and function symbols with formulas and terms, respectively, as functions of their arguments.\nIn the presence of the nontrivial binding structure that 
nondeterminism and differential equations of hybrid programs induce for the dynamic modalities of differential dynamic logic, flexible but sound uniform substitutions become more complex for \\dL, but can still be read off elegantly from its static semantics.\nIn fact, \\dL's static semantics is solely captured\\footnote{\nThis approach is dual to other successful ways of solving the intricacies and subtleties of substitutions \\cite{DBLP:journals\/jsl\/Church40,DBLP:journals\/jsl\/Henkin53} by imposing occurrence side conditions on axiom schemata and proof rules, which is what uniform substitutions can get rid of.\n}\nin the implementation of uniform substitution (and bound variable renaming), thereby leading to a completely modular proof calculus.\n\nThis paper introduces a static and dynamic semantics for \\emph{differential-form} \\dL, proves coincidence lemmas and uniform substitution lemmas, culminating in a soundness proof for uniform substitutions (\\rref{sec:usubst}).\nIt exploits the new \\emph{differential forms} that this paper adds to \\dL for internalizing differential invariants \\cite{DBLP:journals\/logcom\/Platzer10}, differential cuts \\cite{DBLP:journals\/logcom\/Platzer10,DBLP:journals\/lmcs\/Platzer12}, differential ghosts \\cite{DBLP:journals\/lmcs\/Platzer12}, differential substitutions, total differentials and Lie-derivations \\cite{DBLP:journals\/logcom\/Platzer10,DBLP:journals\/lmcs\/Platzer12} as first-class citizens in \\dL, culminating in entirely modular axioms for differential equations and a superbly modular soundness proof (\\rref{sec:dL-axioms}).\nThis approach is to be contrasted with earlier approaches for differential invariants that were based on complex built-in rules \\cite{DBLP:journals\/logcom\/Platzer10,DBLP:journals\/lmcs\/Platzer12}.\nThe relationship to related work from previous presentations of differential dynamic logic \\cite{DBLP:journals\/jar\/Platzer08,DBLP:conf\/lics\/Platzer12b} continues to apply except that 
\\dL now internalizes differential equation reasoning axiomatically via differential forms.\n\n\\newsavebox{\\Rval}%\n\\sbox{\\Rval}{$\\scriptstyle\\mathbb{R}$}\n\\irlabel{qear|\\usebox{\\Rval}}\n\n\\newsavebox{\\USarg}%\n\\sbox{\\USarg}{$\\boldsymbol{\\cdot}$}\n\n\\newsavebox{\\UScarg}%\n\\sbox{\\UScarg}{$\\boldsymbol{\\_}$}\n\n\\newsavebox{\\Lightningval}%\n\\sbox{\\Lightningval}{$\\scriptstyle\\textcolor{red}{\\lightning}$}\n\\irlabel{clash|clash\\usebox{\\Lightningval}} %\n\\irlabel{unsound|\\usebox{\\Lightningval}} %\n\n\n\\section{Differential-Form Differential Dynamic Logic}\n\\subsection{Syntax}\nFormulas and hybrid programs (\\HPs) of \\dL are defined by simultaneous induction based on the following definition of terms.\nSimilar simultaneous inductions are used throughout the proofs for \\dL.\nThe set of all \\emph{variables} is $\\mathcal{V}$.\nFor any $V\\subseteq\\mathcal{V}$ is \\(\\D{V}\\mdefeq\\{\\D{x} : x\\in V\\}\\) the set of \\emph{differential symbols} $\\D{x}$ for the variables in $V$.\nFunction symbols are written $f,g,h$, predicate symbols $p,q,r$, and variables $x,y,z\\in\\mathcal{V}$ with differential symbols $\\D{x},\\D{y},\\D{z}\\in\\D{\\mathcal{V}}$.\nProgram constants are $a,b,c$.\n\n\\begin{definition}[Terms]\n\\emph{Terms} are defined by this grammar\n(with $\\theta,\\eta,\\theta_1,\\dots,\\theta_k$ as terms, $x\\in\\mathcal{V}$ as variable, $\\D{x}\\in\\D{\\mathcal{V}}$ differential symbol, and $f$ function symbol):\n\\[\n \\theta,\\eta ~\\mathrel{::=}~\n x\n ~|~ \\D{x}\n ~|~\n f(\\theta_1,\\dots,\\theta_k)\n ~|~\n \\theta+\\eta\n ~|~\n \\theta\\cdot\\eta\n ~|~ \\der{\\theta}\n\\]\n\\end{definition}\nNumber literals such as 0,1 are allowed as function symbols without arguments that are always interpreted as the numbers they denote.\nBeyond differential symbols $\\D{x}$, \\emph{differential-form \\dL} allows \\emph{differentials} \\(\\der{\\theta}\\) of terms $\\theta$ as terms for the purpose of axiomatically internalizing reasoning 
about differential equations.\n\\begin{definition}[Hybrid program]\n\\emph{Hybrid programs} (\\HPs) are defined by the following grammar (with $\\alpha,\\beta$ as \\HPs, program constant $a$, variable $x$, term $\\theta$ possibly containing $x$, and formula $\\psi$ of first-order logic of real arithmetic):\n\\[\n \\alpha,\\beta ~\\mathrel{::=}~\n a~|~\n \\pupdate{\\pumod{x}{\\theta}}\n ~|~ \\Dupdate{\\Dumod{\\D{x}}{\\theta}}\n ~|~\n \\ptest{\\psi}\n ~|~\n \\pevolvein{\\D{x}=\\genDE{x}}{\\psi}\n ~|~\n \\alpha\\cup\\beta\n ~|~\n \\alpha;\\beta\n ~|~\n \\prepeat{\\alpha}\n\\]\n\\end{definition}\n\\emph{Assignments} \\m{\\pupdate{\\pumod{x}{\\theta}}} of $\\theta$ to variable $x$, \\emph{tests} \\m{\\ptest{\\psi}} of the formula $\\psi$ in the current state, \\emph{differential equations} \\(\\pevolvein{\\D{x}=\\genDE{x}}{\\psi}\\) restricted to the evolution domain constraint $\\psi$, \\emph{nondeterministic choices} \\(\\pchoice{\\alpha}{\\beta}\\), \\emph{sequential compositions} \\(\\alpha;\\beta\\), and \\emph{nondeterministic repetition} \\(\\prepeat{\\alpha}\\) are as usual in \\dL \\cite{DBLP:journals\/jar\/Platzer08,DBLP:conf\/lics\/Platzer12b}.\nThe effect of the \\emph{differential assignment} \\m{\\Dupdate{\\Dumod{\\D{x}}{\\theta}}} to differential symbol $\\D{x}$ is similar to the effect of the assignment \\m{\\pupdate{\\pumod{x}{\\theta}}} to variable $x$, except that it changes the value of the differential symbol $\\D{x}$ around instead of the value of $x$.\nIt is not to be confused with the differential equation \\(\\pevolve{\\D{x}=\\genDE{x}}\\), which will follow said differential equation continuously for an arbitrary amount of time.\nThe differential assignment \\m{\\Dupdate{\\Dumod{\\D{x}}{\\theta}}}, instead, only assigns the value of $\\theta$ to the differential symbol $\\D{x}$ discretely once at an instant of time.\nProgram constants $a$ are uninterpreted, i.e.\\ their behavior depends on the interpretation in the same way that the values of 
function symbols $f$ and predicate symbols $p$ depend on their interpretation.\n\n\\begin{definition}[\\dL formula]\nThe \\emph{formulas of (differential-form) differential dynamic logic} ({\\dL}) are defined by the grammar\n(with \\dL formulas $\\phi,\\psi$, terms $\\theta,\\eta,\\theta_1,\\dots,\\theta_k$, predicate symbol $p$, \\predicational symbol $C$, variable $x$, \\HP $\\alpha$):\n \\[\n \\phi,\\psi ~\\mathrel{::=}~\n \\theta\\geq\\eta ~|~\n p(\\theta_1,\\dots,\\theta_k) ~|~\n \\contextapp{C}{\\phi} ~|~\n \\lnot \\phi ~|~\n \\phi \\land \\psi ~|~\n \\lforall{x}{\\phi} ~|~ \n \\lexists{x}{\\phi} ~|~\n \\dbox{\\alpha}{\\phi}\n ~|~ \\ddiamond{\\alpha}{\\phi}\n \\]\n\\end{definition}\nOperators $>,\\leq,<,\\lor,\\limply,\\lbisubjunct$ are definable, e.g., \\(\\phi\\limply\\psi\\) as \\(\\lnot(\\phi\\land\\lnot\\psi)\\).\nLikewise \\(\\dbox{\\alpha}{\\phi}\\) is equivalent to \\(\\lnot\\ddiamond{\\alpha}{\\lnot\\phi}\\) and \\(\\lforall{x}{\\phi}\\) equivalent to \\(\\lnot\\lexists{x}{\\lnot\\phi}\\).\nThe modal formula \\(\\dbox{\\alpha}{\\phi}\\) expresses that $\\phi$ holds after all runs of $\\alpha$, while the dual \\(\\ddiamond{\\alpha}{\\phi}\\) expresses that there is a run of $\\alpha$ after which $\\phi$ holds.\n\\emph{\\Predicational symbols} $C$ (with formula $\\phi$ as argument), i.e.\\ higher-order predicate symbols that bind all variables of $\\phi$, are unnecessary but internalize contextual congruence reasoning efficiently.\n\n\\subsection{Dynamic Semantics}\n\nA state is a mapping from variables $\\mathcal{V}$ \nand differential symbols $\\D{\\mathcal{V}}$ to $\\reals$.\nThe set of states is denoted \\(\\linterpretations{\\Sigma}{V}\\).\nLet\n\\m{\\iget[state]{\\imodif[state]{\\vdLint[const=I,state=\\nu]}{x}{r}}} denote the state that agrees with state~$\\iget[state]{\\vdLint[const=I,state=\\nu]}$ except for the value of variable~\\m{x}, which is changed to~\\m{r \\in \\reals}, and accordingly for 
\\m{\\iget[state]{\\imodif[state]{\\vdLint[const=I,state=\\nu]}{x'}{r}}}. %\nThe interpretation of a function symbol $f$ with arity $n$ (i.e.\\ with $n$ arguments) is a smooth function \\(\\iget[const]{\\vdLint[const=I,state=\\nu]}(f):\\reals^n\\to\\reals\\) of $n$ arguments.\n\n\\begin{definition}[Semantics of terms] \\label{def:dL-valuationTerm}\nFor each interpretation $\\iget[const]{\\vdLint[const=I,state=\\nu]}$, the \\emph{semantics of a term} $\\theta$ in a state $\\iget[state]{\\vdLint[const=I,state=\\nu]}\\in\\linterpretations{\\Sigma}{V}$ is its value in $\\reals$.\nIt is defined inductively as follows\n\\begin{compactenum}\n\\item \\m{\\ivaluation{\\vdLint[const=I,state=\\nu]}{x} = \\iget[state]{\\vdLint[const=I,state=\\nu]}(x)} for variable $x\\in\\mathcal{V}$\n\\item \\m{\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\D{x}} = \\iget[state]{\\vdLint[const=I,state=\\nu]}(\\D{x})} for differential symbol $\\D{x}\\in\\D{\\mathcal{V}}$\n\\item \\(\\ivaluation{\\vdLint[const=I,state=\\nu]}{f(\\theta_1,\\dots,\\theta_k)} = \\iget[const]{\\vdLint[const=I,state=\\nu]}(f)\\big(\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\theta_1},\\dots,\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\theta_k}\\big)\\) for function symbol $f$\n\\item \\m{\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\theta+\\eta} = \\ivaluation{\\vdLint[const=I,state=\\nu]}{\\theta} + \\ivaluation{\\vdLint[const=I,state=\\nu]}{\\eta}}\n\\item \\m{\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\theta\\cdot\\eta} = \\ivaluation{\\vdLint[const=I,state=\\nu]}{\\theta} \\cdot \\ivaluation{\\vdLint[const=I,state=\\nu]}{\\eta}}\n\\item\n\\m{\n\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\der{\\theta}}\n\\displaystyle\n=\n\\newcommand{\\vdLint[const=I,state=]}{\\vdLint[const=I,state=]}\n\\sum_x \\iget[state]{\\vdLint[const=I,state=\\nu]}(\\D{x}) 
\\Dp[x]{\\ivaluation{\\vdLint[const=I,state=]}{\\theta}}(\\iget[state]{\\vdLint[const=I,state=\\nu]})\n=\n\\def\\Im{\\imodif[state]{\\vdLint[const=I,state=\\nu]}{x}{X}}%\n\\sum_x \\iget[state]{\\vdLint[const=I,state=\\nu]}(\\D{x})\n \\itimes \\Dp[X]{\\ivaluation{\\Im}{\\theta}}\n}\n\\end{compactenum}\n\\end{definition}\n\n\\noindent\nTime-derivatives are undefined in an isolated state $\\iget[state]{\\vdLint[const=I,state=\\nu]}$.\nThe clou is that differentials can still be given a local semantics:\n\\m{\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\der{\\theta}}} is the sum of all (analytic) spatial partial derivatives of the value of $\\theta$ by all variables $x$ (or rather their values $X$) multiplied by the corresponding tangent described by the value $\\iget[state]{\\vdLint[const=I,state=\\nu]}(\\D{x})$ of differential symbol $\\D{x}$.\nThat sum over all variables $x\\in\\mathcal{V}$ has finite support, because $\\theta$ only mentions finitely many variables $x$ and the partial derivative by variables $x$ that do not occur in $\\theta$ is 0.\nThe spatial derivatives exist since $\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\theta}$ is a composition of smooth functions, so smooth.\nThus, the semantics of \\m{\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\der{\\theta}}} is the \\emph{differential}\\footnote{%\nA slight abuse of notation rewrites the differential as \\(\\ivalues{\\vdLint[const=I,state=\\nu]}{\\der{\\theta}} = d\\ivalues{\\vdLint[const=I,state=\\nu]}{\\theta} = \\sum_{i=1}^n \\Dp[x^i]{\\ivalues{\\vdLint[const=I,state=\\nu]}{\\theta}} dx^i\\)\nwhen $x^1,\\dots,x^n$ are the variables in $\\theta$ and their differentials \\(dx^i\\) form the basis of the cotangent space, which, when evaluated at a point $\\iget[state]{\\vdLint[const=I,state=\\nu]}$ whose values \\(\\iget[state]{\\vdLint[const=I,state=\\nu]}(\\D{x})\\) determine the tangent vector alias vector field, coincides with \\rref{def:dL-valuationTerm}.\n}\nof (the value of) $\\theta$, hence a 
differential one-form giving a real value for each tangent vector (i.e.\\ vector field) described by the values \\(\\iget[state]{\\vdLint[const=I,state=\\nu]}(\\D{x})\\).\nThe values \\m{\\iget[state]{\\vdLint[const=I,state=\\nu]}(\\D{x})} of the differential symbols $\\D{x}$ describe an arbitrary tangent vector or vector field.\nAlong the flow of (the vector field of a) differential equation, though, the value of the differential \\m{\\der{\\theta}} coincides with the analytic time-derivative of $\\theta$ (\\rref{lem:differentialLemma}).\nThe interpretation of predicate symbol $p$ with arity $n$ is an $n$-ary relation \\(\\iget[const]{\\vdLint[const=I,state=\\nu]}(p)\\subseteq\\reals^n\\).\nThe interpretation of \\predicational symbol $C$ is a functional \\(\\iget[const]{\\vdLint[const=I,state=\\nu]}(C)\\) mapping subsets \\(M\\subseteq\\linterpretations{\\Sigma}{V}\\) to subsets \\(\\iget[const]{\\vdLint[const=I,state=\\nu]}(C)(M)\\subseteq\\linterpretations{\\Sigma}{V}\\).\n\n\\begin{definition}[\\dL semantics] \\label{def:dL-valuation}\nThe \\emph{semantics of a \\dL formula} $\\phi$, for each interpretation $\\iget[const]{\\vdLint[const=I,state=\\nu]}$ with a corresponding set of states $\\linterpretations{\\Sigma}{V}$, is the subset \\m{\\imodel{\\vdLint[const=I,state=\\nu]}{\\phi}\\subseteq\\linterpretations{\\Sigma}{V}} of states in which $\\phi$ is true.\nIt is defined inductively as follows\n\\begin{compactenum}\n\\item \\(\\imodel{\\vdLint[const=I,state=\\nu]}{\\theta\\geq\\eta} = \\{\\iget[state]{\\vdLint[const=I,state=\\nu]} \\in \\linterpretations{\\Sigma}{V} \\with \\ivaluation{\\vdLint[const=I,state=\\nu]}{\\theta}\\geq\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\eta}\\}\\)\n\\item \\(\\imodel{\\vdLint[const=I,state=\\nu]}{p(\\theta_1,\\dots,\\theta_k)} = \\{\\iget[state]{\\vdLint[const=I,state=\\nu]} \\in \\linterpretations{\\Sigma}{V} \\with 
(\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\theta_1},\\dots,\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\theta_k})\\in\\iget[const]{\\vdLint[const=I,state=\\nu]}(p)\\}\\)\n\\item \\(\\imodel{\\vdLint[const=I,state=\\nu]}{\\contextapp{C}{\\phi}} = \\iget[const]{\\vdLint[const=I,state=\\nu]}(C)\\big(\\imodel{\\vdLint[const=I,state=\\nu]}{\\phi}\\big)\\) for \\predicational symbol $C$\n\\item \\(\\imodel{\\vdLint[const=I,state=\\nu]}{\\lnot\\phi} = \\scomplement{(\\imodel{\\vdLint[const=I,state=\\nu]}{\\phi})}\n= \\linterpretations{\\Sigma}{V}\\setminus\\imodel{\\vdLint[const=I,state=\\nu]}{\\phi}\\) \n\\item \\(\\imodel{\\vdLint[const=I,state=\\nu]}{\\phi\\land\\psi} = \\imodel{\\vdLint[const=I,state=\\nu]}{\\phi} \\cap \\imodel{\\vdLint[const=I,state=\\nu]}{\\psi}\\)\n\\item\n\\(\\def\\Im{\\imodif[state]{\\vdLint[const=I,state=\\nu]}{x}{r}}%\n\\imodel{\\vdLint[const=I,state=\\nu]}{\\lexists{x}{\\phi}} = \\{\\iget[state]{\\vdLint[const=I,state=\\nu]} \\in \\linterpretations{\\Sigma}{V} \\with \\iget[state]{\\Im} \\in \\imodel{\\vdLint[const=I,state=\\nu]}{\\phi} ~\\text{for some}~r\\in\\reals\\}\\)\n\n\\item \\(\\imodel{\\vdLint[const=I,state=\\nu]}{\\ddiamond{\\alpha}{\\phi}} = \\iaccess[\\alpha]{\\vdLint[const=I,state=\\nu]}\\compose\\imodel{\\vdLint[const=I,state=\\nu]}{\\phi}\\)\n\\(=\\{\\iget[state]{\\vdLint[const=I,state=\\nu]} \\with \\imodels{\\vdLint[const=I,state=\\omega]}{\\phi} ~\\text{for some}~\\iget[state]{\\vdLint[const=I,state=\\omega]}~\\text{such that}~\\iaccessible[\\alpha]{\\vdLint[const=I,state=\\nu]}{\\vdLint[const=I,state=\\omega]}\\}\\)\n\n\\item \\(\\imodel{\\vdLint[const=I,state=\\nu]}{\\dbox{\\alpha}{\\phi}} = \\imodel{\\vdLint[const=I,state=\\nu]}{\\lnot\\ddiamond{\\alpha}{\\lnot\\phi}}\\)\n\\(=\\{\\iget[state]{\\vdLint[const=I,state=\\nu]} \\with \\imodels{\\vdLint[const=I,state=\\omega]}{\\phi} ~\\text{for all}~\\iget[state]{\\vdLint[const=I,state=\\omega]}~\\text{such 
that}~\\iaccessible[\\alpha]{\\vdLint[const=I,state=\\nu]}{\\vdLint[const=I,state=\\omega]}\\}\\)\n\n\\end{compactenum}\nA \\dL formula $\\phi$ is \\emph{valid in $\\iget[const]{\\vdLint[const=I,state=\\nu]}$}, written \\m{\\iget[const]{\\vdLint[const=I,state=\\nu]}\\models{\\phi}}, iff \\m{\\imodel{\\vdLint[const=I,state=\\nu]}{\\phi}=\\linterpretations{\\Sigma}{V}}, i.e.\\ \\m{\\imodels{\\vdLint[const=I,state=\\nu]}{\\phi}} for all $\\iget[state]{\\vdLint[const=I,state=\\nu]}$.\nFormula $\\phi$ is \\emph{valid}, written \\m{\\entails\\phi}, iff \\m{\\iget[const]{\\vdLint[const=I,state=\\nu]}\\models{\\phi}} for all interpretations $\\iget[const]{\\vdLint[const=I,state=\\nu]}$.\n\\end{definition}\n\n\\noindent\nThe interpretation of a program constant $a$ is a state-transition relation \\(\\iget[const]{\\vdLint[const=I,state=\\nu]}(a)\\subseteq\\linterpretations{\\Sigma}{V}\\times\\linterpretations{\\Sigma}{V}\\),\nwhere \\(\\related{\\iget[const]{\\vdLint[const=I,state=\\nu]}(a)}{\\iget[state]{\\vdLint[const=I,state=\\nu]}}{\\iget[state]{\\vdLint[const=I,state=\\omega]}}\\) iff $a$ can run from initial state $\\iget[state]{\\vdLint[const=I,state=\\nu]}$ to final state $\\iget[state]{\\vdLint[const=I,state=\\omega]}$.\n\n\\begin{definition}[Transition semantics of \\HPs] \\label{def:HP-transition}\nFor each interpretation $\\iget[const]{\\vdLint[const=I,state=\\nu]}$, each \\HP $\\alpha$ is interpreted semantically as a binary transition relation \\m{\\iaccess[\\alpha]{\\vdLint[const=I,state=\\nu]}\\subseteq\\linterpretations{\\Sigma}{V}\\times\\linterpretations{\\Sigma}{V}} on states, defined inductively by\n\\begin{compactenum}\n\\item \\m{\\iaccess[a]{\\vdLint[const=I,state=\\nu]} = \\iget[const]{\\vdLint[const=I,state=\\nu]}(a)} for program constants $a$\n\\item\n\\(\\def\\Im{\\imodif[state]{\\vdLint[const=I,state=\\nu]}{x}{r}}%\n\\iaccess[\\pupdate{\\pumod{x}{\\theta}}]{\\vdLint[const=I,state=\\nu]} = 
\\{(\\iget[state]{\\vdLint[const=I,state=\\nu]},\\iget[state]{\\Im}) \\with r=\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\theta}\\}\n= \\{(\\iget[state]{\\vdLint[const=I,state=\\nu]},\\iget[state]{\\vdLint[const=I,state=\\omega]}) \\with \n \\iget[state]{\\vdLint[const=I,state=\\omega]}=\\iget[state]{\\vdLint[const=I,state=\\nu]}~\\text{except}~\\ivaluation{\\vdLint[const=I,state=\\omega]}{x}=\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\theta}\\}\n\\)\n\n\\item\n\\(\\def\\Im{\\imodif[state]{\\vdLint[const=I,state=\\nu]}{x'}{r}}%\n\\iaccess[\\Dupdate{\\Dumod{\\D{x}}{\\theta}}]{\\vdLint[const=I,state=\\nu]} = \\{(\\iget[state]{\\vdLint[const=I,state=\\nu]},\\iget[state]{\\Im}) \\with r=\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\theta}\\}\n= \\{(\\iget[state]{\\vdLint[const=I,state=\\nu]},\\iget[state]{\\vdLint[const=I,state=\\omega]}) \\with \n \\iget[state]{\\vdLint[const=I,state=\\omega]}=\\iget[state]{\\vdLint[const=I,state=\\nu]}~\\text{except}~\\ivaluation{\\vdLint[const=I,state=\\omega]}{\\D{x}}=\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\theta}\\}\n\\)\n\n\n\\item \\m{\\iaccess[\\ptest{\\psi}]{\\vdLint[const=I,state=\\nu]} = \\{(\\iget[state]{\\vdLint[const=I,state=\\nu]},\\iget[state]{\\vdLint[const=I,state=\\nu]}) \\with \\imodels{\\vdLint[const=I,state=\\nu]}{\\psi}\\}}\n\\item\n \\m{\\iaccess[\\pevolvein{\\D{x}=\\genDE{x}}{\\psi}]{\\vdLint[const=I,state=\\nu]} = \\{(\\iget[state]{\\vdLint[const=I,state=\\nu]},\\iget[state]{\\vdLint[const=I,state=\\omega]}) \\with\n \\imodels{\\DALint[const=I,flow=\\varphi]}{\\D{x}=\\genDE{x}\\land\\psi}},\n i.e.\n \\(\\imodels{\\Iff[\\zeta]}{\\D{x}=\\genDE{x}\\land\\psi}\\)\n for all \\(0\\leq \\zeta\\leq r\\),\n for some function \\m{\\iget[flow]{\\DALint[const=I,flow=\\varphi]}:[0,r]\\to\\linterpretations{\\Sigma}{V}} of some duration $r$ for which all\n \\(\\iget[state]{\\Iff[\\zeta]}(\\D{x}) = \\D[t]{\\iget[state]{\\Iff[t]}(x)}(\\zeta)\\) exist\n and 
\\(\\iget[state]{\\vdLint[const=I,state=\\nu]}=\\iget[state]{\\Iff[0]}\\) on $\\scomplement{\\{\\D{x}\\}}$ and \\(\\iget[state]{\\vdLint[const=I,state=\\omega]}=\\iget[state]{\\Iff[r]}\\)$\\}$;\n i.e., $\\iget[flow]{\\DALint[const=I,flow=\\varphi]}$ solves the differential equation\n and satisfies $\\psi$ at all times.\n In case $r=0$, the only condition is that \\(\\iget[state]{\\vdLint[const=I,state=\\nu]}=\\iget[state]{\\vdLint[const=I,state=\\omega]}\\) on $\\scomplement{\\{\\D{x}\\}}$ and \\(\\iget[state]{\\vdLint[const=I,state=\\omega]}(\\D{x})=\\ivaluation{\\vdLint[const=I,state=\\omega]}{\\genDE{x}}\\) and \\(\\imodels{\\vdLint[const=I,state=\\omega]}{\\psi}\\).\n\n\\item \\m{\\iaccess[\\pchoice{\\alpha}{\\beta}]{\\vdLint[const=I,state=\\nu]} = \\iaccess[\\alpha]{\\vdLint[const=I,state=\\nu]} \\cup \\iaccess[\\beta]{\\vdLint[const=I,state=\\nu]}}\n\n\\item\n\\newcommand{\\iconcat[state=\\mu]{\\I}}{\\iconcat[state=\\mu]{\\vdLint[const=I,state=\\nu]}}\n\\m{\\iaccess[\\alpha;\\beta]{\\vdLint[const=I,state=\\nu]} = \\iaccess[\\alpha]{\\vdLint[const=I,state=\\nu]} \\compose\\iaccess[\\beta]{\\vdLint[const=I,state=\\nu]}}\n\\(= \\{(\\iget[state]{\\vdLint[const=I,state=\\nu]},\\iget[state]{\\vdLint[const=I,state=\\omega]}) : (\\iget[state]{\\vdLint[const=I,state=\\nu]},\\iget[state]{\\iconcat[state=\\mu]{\\I}}) \\in \\iaccess[\\alpha]{\\vdLint[const=I,state=\\nu]}, (\\iget[state]{\\iconcat[state=\\mu]{\\I}},\\iget[state]{\\vdLint[const=I,state=\\omega]}) \\in \\iaccess[\\beta]{\\vdLint[const=I,state=\\nu]}\\}\\)\n\n\\item \\m{\\iaccess[\\prepeat{\\alpha}]{\\vdLint[const=I,state=\\nu]} = \\displaystyle\n\\closureTransitive{\\big(\\iaccess[\\alpha]{\\vdLint[const=I,state=\\nu]}\\big)}\n=\n\\cupfold_{n\\in\\naturals}\\iaccess[{\\prepeat[n]{\\alpha}}]{\\vdLint[const=I,state=\\nu]}} \nwith \\m{\\prepeat[n+1]{\\alpha} \\mequiv \\prepeat[n]{\\alpha};\\alpha} and \\m{\\prepeat[0]{\\alpha}\\mequiv\\,\\ptest{\\ltrue}}\n\\end{compactenum}\nwhere $\\closureTransitive{\\rho}$ 
denotes the reflexive transitive closure of relation $\\rho$.\n\\end{definition}\nThe initial values \\(\\iget[state]{\\vdLint[const=I,state=\\nu]}(\\D{x})\\) of differential symbols $\\D{x}$ do \\emph{not} influence the behavior of\\\\ \\(\\iaccessible[\\pevolvein{\\D{x}=\\genDE{x}}{\\psi}]{\\vdLint[const=I,state=\\nu]}{\\vdLint[const=I,state=\\omega]}\\), because they may not be compatible with the time-derivatives for the differential equation, e.g. in \\m{\\Dupdate{\\Dumod{\\D{x}}{1}};\\pevolve{\\D{x}=2}}, with a $\\D{x}$ mismatch.\n\\iflongversion\nThe final values \\(\\iget[state]{\\vdLint[const=I,state=\\omega]}(\\D{x})\\) will coincide with the derivatives, though.\n\\fi\n\nFunctions and predicates are interpreted by $\\iget[const]{\\vdLint[const=I,state=\\nu]}$ and are only influenced indirectly by $\\iget[state]{\\vdLint[const=I,state=\\nu]}$ through the values of their arguments.\nSo \\(p(e)\\limply\\dbox{\\pupdate{\\pumod{x}{x+1}}}{p(e)}\\) is valid if $x$ is not in $e$ since the change in $x$ does not change whether $p(e)$ is true (\\rref{lem:coincidence-term}).\nBy contrast \\(p(x)\\limply\\dbox{\\pupdate{\\pumod{x}{x+1}}}{p(x)}\\) is invalid, since it is false when \\(\\iget[const]{\\vdLint[const=I,state=\\nu]}(p)=\\{d \\with d\\leq5\\}\\) and \\(\\iget[state]{\\vdLint[const=I,state=\\nu]}(x)=4.5\\).\nIf the semantics of $p$ were to depend on the state $\\iget[state]{\\vdLint[const=I,state=\\nu]}$, then there would be no discernible relationship between the truth-values of $p$ in different states, so not even \\(p\\limply\\dbox{\\pupdate{\\pumod{x}{x+1}}}{p}\\) would be valid.\n\n\\subsection{Static Semantics}\n\nThe static semantics of \\dL and \\HPs defines some aspects of their behavior that can be read off directly from their syntactic structure without running their programs or evaluating their dynamical effects.\nThe most important aspects of the static semantics concern free or bound occurrences of variables\\iflongversion (which are closely 
related to the notions of scope and definitions\/uses in compilers)\\fi.\nBound variables $x$ are those that are bound by $\\lforall{x}{}$ or $\\lexists{x}{}$, but also those that are bound by modalities such as \\(\\dbox{\\pupdate{\\pumod{x}{5y}}}{}\\)\nor \\(\\ddiamond{\\pevolve{\\D{x}=1}}{}\\)\nor \\(\\dbox{\\pchoice{\\pumod{x}{1}}{\\pevolve{\\D{x}=1}}}{}\\)\nor \\(\\dbox{\\pchoice{\\pumod{x}{1}}{\\ptest{\\ltrue}}}{}\\).\n\nThe notions of free and bound variables are defined by simultaneous induction in the subsequent definitions: free variables for terms ($\\freevars{\\theta}$), formulas ($\\freevars{\\phi}$), and \\HPs ($\\freevars{\\alpha}$), as well as bound variables for formulas ($\\boundvars{\\phi}$) and for \\HPs ($\\boundvars{\\alpha}$).\nFor \\HPs, there will be a need to distinguish must-bound variables ($\\mustboundvars{\\alpha}$) that are bound\/written to on all executions of $\\alpha$ from (may-)bound variables ($\\boundvars{\\alpha}$) which are bound on some (not necessarily all) execution paths of $\\alpha$, such as in \\(\\dbox{\\pchoice{\\pumod{x}{1}}{(\\pupdate{\\pumod{x}{0}};\\pupdate{\\pumod{y}{x+1}})}}{}\\), which has bound variables \\(\\{x,y\\}\\) but must-bound variables only $\\{x\\}$, because $y$ is not written to in the first choice.\n\n\n\\begin{definition}[Bound variable] \\label{def:boundvars}\n The set $\\boundvars{\\phi}\\subseteq\\mathcal{V}\\cup\\D{\\mathcal{V}}$ of \\emph{bound variables} of \\dL formula $\\phi$ is defined inductively as\n \\begin{align*}\n \\boundvars{\\theta\\geq\\eta} = \\boundvars{p(\\theta_1,\\dots,\\theta_k)} &= \\emptyset\\\\\n \\boundvars{\\contextapp{C}{\\phi}} &= \\mathcal{V}\\cup\\D{\\mathcal{V}}\\\\ %\n \\boundvars{\\lnot\\phi} &= \\boundvars{\\phi}\\\\\n \\boundvars{\\phi\\land\\psi} &= \\boundvars{\\phi}\\cup\\boundvars{\\psi}\\\\\n \\boundvars{\\lforall{x}{\\phi}} = \\boundvars{\\lexists{x}{\\phi}} &= \\{x\\}\\cup\\boundvars{\\phi}\\\\\n \\boundvars{\\dbox{\\alpha}{\\phi}} = 
\\boundvars{\\ddiamond{\\alpha}{\\phi}} &= \\boundvars{\\alpha}\\cup\\boundvars{\\phi}\n \\end{align*}\n\\end{definition}\n\n\\begin{definition}[Free variable] \\label{def:freevars}\n The set $\\freevars{\\theta}\\subseteq\\mathcal{V}\\cup\\D{\\mathcal{V}}$ of \\emph{free variables} of term $\\theta$, i.e.\\ those that occur in $\\theta$, is defined inductively as\n \\begin{align*}\n \\freevars{x} &= \\{x\\}\\\\\n \\freevars{\\D{x}} &= \\{\\D{x}\\}\\\\\n \\freevars{f(\\theta_1,\\dots,\\theta_k)} &= \\freevars{\\theta_1}\\cup\\dots\\cup\\freevars{\\theta_k}\\\\\n \\freevars{\\theta+\\eta} = \\freevars{\\theta\\cdot\\eta} &= \\freevars{\\theta}\\cup\\freevars{\\eta}\n \\\\\n \\freevars{\\der{\\theta}} &= \\freevars{\\theta} \\cup \\D{\\freevars{\\theta}}\n \\end{align*}\n The set $\\freevars{\\phi}$ of \\emph{free variables} of \\dL formula $\\phi$, i.e.\\ all those that occur in $\\phi$ outside the scope of quantifiers or modalities binding it, is defined inductively as\n \\begin{align*}\n \\freevars{\\theta\\geq\\eta} &= \\freevars{\\theta}\\cup\\freevars{\\eta}\\\\\n \\freevars{p(\\theta_1,\\dots,\\theta_k)} &= \\freevars{\\theta_1}\\cup\\dots\\cup\\freevars{\\theta_k}\\\\\n \\freevars{\\contextapp{C}{\\phi}} &= \\mathcal{V}\\cup\\D{\\mathcal{V}}\\\\ %\n \\freevars{\\lnot\\phi} &= \\freevars{\\phi}\\\\\n \\freevars{\\phi\\land\\psi} &= \\freevars{\\phi}\\cup\\freevars{\\psi}\\\\\n \\freevars{\\lforall{x}{\\phi}} = \\freevars{\\lexists{x}{\\phi}} &= \\freevars{\\phi}\\setminus\\{x\\}\\\\\n \\freevars{\\dbox{\\alpha}{\\phi}} = \\freevars{\\ddiamond{\\alpha}{\\phi}} &= \\freevars{\\alpha}\\cup(\\freevars{\\phi}\\setminus\\mustboundvars{\\alpha})\n \\end{align*}\n\\end{definition}\nSoundness requires that\n\\(\\freevars{\\dbox{\\alpha}{\\phi}}\\) is not defined as \\(\\freevars{\\alpha}\\cup(\\freevars{\\phi}\\setminus\\boundvars{\\alpha})\\),\notherwise \\(\\dbox{\\pchoice{\\pupdate{\\pumod{x}{1}}}{\\pupdate{\\pumod{y}{2}}}}{x\\geq1}\\) would have no free 
variables,\nbut its truth-value depends on the initial value of $x$,\ndemanding \\(\\freevars{\\dbox{\\pchoice{\\pupdate{\\pumod{x}{1}}}{\\pupdate{\\pumod{y}{2}}}}{x\\geq1}}=\\{x\\}\\).\n\\iflongversion\n The simpler definition \n \\(\\freevars{\\dbox{\\alpha}{\\phi}} = \\freevars{\\alpha}\\cup\\freevars{\\phi}\\)\n would be correct, but the results would be less precise then.\n Likewise for $\\ddiamond{\\alpha}{\\phi}$.\n\\fi\n\\iflongversion\nSoundness requires \\(\\freevars{\\der{\\theta}}\\) not to be defined as \\(\\D{\\freevars{\\theta}}\\),\nbecause\nthe value of \\(\\der{x y}\\) depends on $\\{x,\\D{x},y,\\D{y}\\}$, since \\(\\der{x y}\\) equals \\(\\D{x} y + x \\D{y}\\) (\\rref{lem:derivationLemma}).\n\\fi\n\nThe static semantics defines which variables are free so may be read ($\\freevars{\\alpha}$), which are bound ($\\boundvars{\\alpha}$) so may be written to somewhere in $\\alpha$, and which are must-bound ($\\mustboundvars{\\alpha}$) so must be written to on all execution paths of $\\alpha$.\n\n\\begin{definition}[Bound variable] \\label{def:boundvars-HP}\n The set $\\boundvars{\\alpha}\\subseteq\\mathcal{V}\\cup\\D{\\mathcal{V}}$ of \\emph{bound variables} of \\HP $\\alpha$, i.e.\\ all those that may potentially be written to, is defined inductively:\n \\begin{align*}\n \\boundvars{a} &= \\mathcal{V}\\cup\\D{\\mathcal{V}} &&\\text{for program constant $a$}\\\\\n \\boundvars{\\pupdate{\\pumod{x}{\\theta}}} &= \\{x\\}\n \\\\\n \\boundvars{\\Dupdate{\\Dumod{\\D{x}}{\\theta}}} &= \\{\\D{x}\\}\n \\\\\n \\boundvars{\\ptest{\\psi}} &= \\emptyset\n \\\\\n \\boundvars{\\pevolvein{\\D{x}=\\genDE{x}}{\\psi}} &= \\{x,\\D{x}\\}\n \\\\\n \\boundvars{\\alpha\\cup\\beta} = \\boundvars{\\alpha;\\beta} &= \\boundvars{\\alpha}\\cup\\boundvars{\\beta}\n \\\\\n \\boundvars{\\prepeat{\\alpha}} &= \\boundvars{\\alpha}\n \\end{align*}\n\\end{definition}\n\n\\begin{definition}[Must-bound variable] \\label{def:mustboundvar}\n The set 
$\\mustboundvars{\\alpha}\\subseteq\\boundvars{\\alpha}\\subseteq\\mathcal{V}\\cup\\D{\\mathcal{V}}$ of \\emph{must-bound variables} of \\HP $\\alpha$, i.e.\\ all those that must be written to on all paths of $\\alpha$, is defined inductively as\n \\begin{align*}\n \\mustboundvars{a} &= \\emptyset &&\\text{for program constant $a$}\\\\\n \\mustboundvars{\\alpha} &= \\boundvars{\\alpha} &&\\text{for other atomic \\HPs $\\alpha$}\n \\\\\n \\mustboundvars{\\alpha\\cup\\beta} &= \\mustboundvars{\\alpha}\\cap\\mustboundvars{\\beta}\n \\\\\n \\mustboundvars{\\alpha;\\beta} &= \\mustboundvars{\\alpha}\\cup\\mustboundvars{\\beta}\n \\\\\n \\mustboundvars{\\prepeat{\\alpha}} &= \\emptyset\n \\end{align*}\n\\end{definition}\n\\iflongversion\nObviously, \\(\\mustboundvars{\\alpha}\\subseteq\\boundvars{\\alpha}\\).\nIf $\\alpha$ is only built by sequential compositions from atomic programs without program constants, then \\(\\mustboundvars{\\alpha}=\\boundvars{\\alpha}\\).\n\\fi\n\n\\begin{definition}[Free variable] \\label{def:freevars-HP}\n The set $\\freevars{\\alpha}\\subseteq\\mathcal{V}\\cup\\D{\\mathcal{V}}$ of \\emph{free variables} of \\HP $\\alpha$, i.e.\\ all those that may potentially be read, is defined inductively as\n \\begin{align*}\n \\freevars{a} &= \\mathcal{V}\\cup\\D{\\mathcal{V}} &&\\hspace{-10pt}\\text{for program constant $a$}\\\\\n \\freevars{\\pupdate{\\pumod{x}{\\theta}}}\n = \\freevars{\\Dupdate{\\Dumod{\\D{x}}{\\theta}}} &= \\freevars{\\theta}\n \\\\\n \\freevars{\\ptest{\\psi}} &= \\freevars{\\psi}\n \\\\\n \\freevars{\\pevolvein{\\D{x}=\\genDE{x}}{\\psi}} &= \\{x\\}\\cup\\freevars{\\genDE{x}}\\cup\\freevars{\\psi}\n \\\\\n \\freevars{\\pchoice{\\alpha}{\\beta}} &= \\freevars{\\alpha}\\cup\\freevars{\\beta}\n \\\\\n \\freevars{\\alpha;\\beta} &= \\freevars{\\alpha}\\cup(\\freevars{\\beta}\\setminus\\mustboundvars{\\alpha})\n \\\\\n \\freevars{\\prepeat{\\alpha}} &= \\freevars{\\alpha}\n \\end{align*}\n\\iflongversion\nThe \\emph{variables} of \\HP 
$\\alpha$, whether free or bound, are \\(\\vars{\\alpha}=\\freevars{\\alpha}\\cup\\boundvars{\\alpha}\\).\n\\fi\n\\end{definition}\n\\iflongversion\n The simpler definition \n \\(\\freevars{\\alpha\\cup\\beta} = \\freevars{\\alpha}\\cup\\freevars{\\beta}\\)\n would be correct, but the results would then be less precise.\n\\fi\nUnlike $x$, the left-hand side $\\D{x}$ of differential equations is not added to the free variables of \\(\\freevars{\\pevolvein{\\D{x}=\\genDE{x}}{\\psi}}\\), because its behavior does not depend on the initial value of differential symbols $\\D{x}$, but only on the initial value of $x$.\n Free and bound variables are the set of all variables $\\mathcal{V}$ and differential symbols $\\D{\\mathcal{V}}$ for program constants $a$, because their effect depends on the interpretation $\\iget[const]{\\vdLint[const=I,state=\\nu]}$, so they may read and write all variables, \\(\\freevars{a}=\\boundvars{a}=\\mathcal{V}\\cup\\D{\\mathcal{V}}\\), though not necessarily on all paths, \\(\\mustboundvars{a}=\\emptyset\\).\n Subsequent results about free and bound variables are, thus, vacuously true when program constants occur.\nCorresponding observations hold for \\predicational symbols.\n\nThe static semantics defines which variables are readable or writable; it is an overapproximation, so there may be no run of $\\alpha$ in which a variable is actually read or written to.\nIf $x\\not\\in\\freevars{\\alpha}$, though, then $\\alpha$ cannot read the value of $x$.\nIf $x\\not\\in\\boundvars{\\alpha}$, it cannot write to $x$.\n\\iflongversion\n\\rref{def:freevars-HP} is parsimonious.\nFor example, $x$ is not a free variable of the following program\n\\[\n(\\pchoice{\\pupdate{\\pumod{x}{1}}}{\\pupdate{\\pumod{x}{2}}}); \\pupdate{\\pumod{z}{x+y}}\n\\]\nbecause $x$ is never actually read, since $x$ must have been defined on \\emph{every} execution path of the first part before being read by the second part.\nNo execution of the above program, thus, depends on the initial value of $x$, which is why it is not a free variable.\nThis 
would have been different for the simpler definition\n\\[\n\\freevars{\\alpha;\\beta} = \\freevars{\\alpha}\\cup\\freevars{\\beta}\n\\]\nThere is a limit to the precision with which any static analysis can determine which variables are really read or written \\cite{DBLP:journals\/ams\/Rice53}.\nThe static semantics in \\rref{def:freevars-HP} will, e.g., call $x$ a free variable of the following program even though no execution could read it, because all runs fail the test $\\ptest{\\lfalse}$ before reaching the branch reading $x$:\n\\[\n\\pupdate{\\pumod{z}{0}}; \\prepeat{(\\ptest{\\lfalse};\\pupdate{\\pumod{z}{z+x}})}\n\\]\n\\fi\n\nThe \\dfn{signature}, i.e.\\ the set of function symbols, predicate symbols, \\predicational symbols, and program constants in $\\phi$, is denoted by \\(\\intsigns{\\phi}\\) (accordingly for terms and programs).\nIt is defined like $\\freevars{\\phi}$ except that all occurrences are free.\nVariables in $\\mathcal{V}\\cup\\D{\\mathcal{V}}$ are interpreted by state $\\iget[state]{\\vdLint[const=I,state=\\nu]}$.\nThe symbols in $\\intsigns{\\phi}$ are interpreted by interpretation $\\iget[const]{\\vdLint[const=I,state=\\nu]}$.\n\n\\subsection{Correctness of Static Semantics}\nThe following result reflects that \\HPs have bounded effect: for a variable $x$ to be modified during a run of $\\alpha$, $x$ needs to be a bound variable in \\HP $\\alpha$, i.e.\\ \\(x\\in\\boundvars{\\alpha}\\), so that $\\alpha$ can write to $x$.\nThe converse is not true, because $\\alpha$ may bind a variable $x$, e.g. 
by having an assignment to $x$, that never actually changes the value of $x$, such as \\(\\pupdate{\\pumod{x}{x}}\\) or because the assignment can never be executed.\n\\iflongversion\nThe following program, for example, binds $x$ but will never change the value of $x$ because there is no way of satisfying the test $\\ptest{\\lfalse}$:\n\\(\n\\pchoice{(\\ptest{\\lfalse}; \\pupdate{\\pumod{x}{42}})}{\\pupdate{\\pumod{z}{x+1}}}\n\\).\n\\fi\n\\begin{lemma}[Bound effect lemma] \\label{lem:bound}\n If \\(\\iaccessible[\\alpha]{\\vdLint[const=I,state=\\nu]}{\\vdLint[const=I,state=\\omega]}\\), then \\(\\iget[state]{\\vdLint[const=I,state=\\nu]}=\\iget[state]{\\vdLint[const=I,state=\\omega]}\\) on $\\scomplement{\\boundvars{\\alpha}}$.\n\\end{lemma}\n\\begin{proofatend}\nThe proof is by a straightforward structural induction on $\\alpha$.\n\\begin{compactitem}\n\\item Since \\(\\boundvars{a} = \\mathcal{V}\\cup\\D{\\mathcal{V}}\\), the statement is vacuously true for program constant $a$, because \\(\\scomplement{\\boundvars{a}}=\\emptyset\\).\n\n\\item \\m{\\iaccessible[\\pupdate{\\pumod{x}{\\theta}}]{\\vdLint[const=I,state=\\nu]}{\\vdLint[const=I,state=\\omega]} = \\{(\\iget[state]{\\vdLint[const=I,state=\\nu]},\\iget[state]{\\vdLint[const=I,state=\\omega]}) \\with \\iget[state]{\\vdLint[const=I,state=\\omega]}=\\iget[state]{\\vdLint[const=I,state=\\nu]}~\\text{except that}~\\ivaluation{\\vdLint[const=I,state=\\omega]}{x}=\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\theta}\\}} \nimplies that \\(\\iget[state]{\\vdLint[const=I,state=\\nu]}=\\iget[state]{\\vdLint[const=I,state=\\omega]}\\) except for \\(\\{x\\}=\\boundvars{\\pupdate{\\pumod{x}{\\theta}}}\\).\n\n\\item \\m{\\iaccessible[\\Dupdate{\\Dumod{\\D{x}}{\\theta}}]{\\vdLint[const=I,state=\\nu]}{\\vdLint[const=I,state=\\omega]} = \\{(\\iget[state]{\\vdLint[const=I,state=\\nu]},\\iget[state]{\\vdLint[const=I,state=\\omega]}) \\with 
\\iget[state]{\\vdLint[const=I,state=\\omega]}=\\iget[state]{\\vdLint[const=I,state=\\nu]}~\\text{except that}~\\ivaluation{\\vdLint[const=I,state=\\omega]}{\\D{x}}=\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\theta}\\}} \nimplies that \\(\\iget[state]{\\vdLint[const=I,state=\\nu]}=\\iget[state]{\\vdLint[const=I,state=\\omega]}\\) except for \\(\\{\\D{x}\\}=\\boundvars{\\Dupdate{\\Dumod{\\D{x}}{\\theta}}}\\).\n\n\\item \\m{\\iaccessible[\\ptest{\\psi}]{\\vdLint[const=I,state=\\nu]}{\\vdLint[const=I,state=\\nu]} = \\{(\\iget[state]{\\vdLint[const=I,state=\\nu]},\\iget[state]{\\vdLint[const=I,state=\\nu]}) \\with \\imodels{\\vdLint[const=I,state=\\nu]}{\\psi} ~\\text{i.e.}~ \\iget[state]{\\vdLint[const=I,state=\\nu]}\\in\\imodel{\\vdLint[const=I,state=\\nu]}{\\psi}\\}}\nmatches \\(\\boundvars{\\ptest{\\psi}}=\\emptyset\\).\n\n\\item\n \\m{\\iaccessible[\\pevolvein{\\D{x}=\\genDE{x}}{\\psi}]{\\vdLint[const=I,state=\\nu]}{\\vdLint[const=I,state=\\omega]}} \n implies that \\(\\iget[state]{\\vdLint[const=I,state=\\nu]}=\\iget[state]{\\vdLint[const=I,state=\\omega]}\\) except for the variables with differential equations, which are \\(\\{x,\\D{x}\\}=\\boundvars{\\pevolvein{\\D{x}=\\genDE{x}}{\\psi}}\\),\n observing that \\(\\iget[state]{\\vdLint[const=I,state=\\nu]}(\\D{x})\\) and \\(\\iget[state]{\\vdLint[const=I,state=\\omega]}(\\D{x})\\) may differ by \\rref{def:HP-transition}.\n\n\\item \\m{\\iaccessible[\\pchoice{\\alpha}{\\beta}]{\\vdLint[const=I,state=\\nu]}{\\vdLint[const=I,state=\\omega]} = \\iaccess[\\alpha]{\\vdLint[const=I,state=\\nu]} \\cup \\iaccess[\\beta]{\\vdLint[const=I,state=\\nu]}}\nimplies \\(\\iaccessible[\\alpha]{\\vdLint[const=I,state=\\nu]}{\\vdLint[const=I,state=\\omega]}\\) or \\(\\iaccessible[\\beta]{\\vdLint[const=I,state=\\nu]}{\\vdLint[const=I,state=\\omega]}\\),\nwhich, by induction hypothesis,\nimplies \\(\\iget[state]{\\vdLint[const=I,state=\\nu]}=\\iget[state]{\\vdLint[const=I,state=\\omega]}\\) on 
$\\scomplement{\\boundvars{\\alpha}}$\nor\n\\(\\iget[state]{\\vdLint[const=I,state=\\nu]}=\\iget[state]{\\vdLint[const=I,state=\\omega]}\\) on $\\scomplement{\\boundvars{\\beta}}$, respectively.\nEither case implies \\(\\iget[state]{\\vdLint[const=I,state=\\nu]}=\\iget[state]{\\vdLint[const=I,state=\\omega]}\\) on $\\scomplement{(\\boundvars{\\alpha}\\cup\\boundvars{\\beta})}=\\scomplement{\\boundvars{\\pchoice{\\alpha}{\\beta}}}$.\n\n\\item\n\\newcommand{\\iconcat[state=\\mu]{\\I}}{\\dLint[state=\\mu]}%\n\\m{\\iaccessible[\\alpha;\\beta]{\\vdLint[const=I,state=\\nu]}{\\vdLint[const=I,state=\\omega]} = \\iaccess[\\alpha]{\\vdLint[const=I,state=\\nu]} \\compose\\iaccess[\\beta]{\\vdLint[const=I,state=\\nu]}},\ni.e.\\ there is a $\\iget[state]{\\iconcat[state=\\mu]{\\I}}$ such that \\(\\iaccessible[\\alpha]{\\vdLint[const=I,state=\\nu]}{\\iconcat[state=\\mu]{\\I}}\\) and \\(\\iaccessible[\\beta]{\\iconcat[state=\\mu]{\\I}}{\\vdLint[const=I,state=\\omega]}\\).\nSo, by induction hypothesis, \\(\\iget[state]{\\vdLint[const=I,state=\\nu]}=\\iget[state]{\\iconcat[state=\\mu]{\\I}}\\) on $\\scomplement{\\boundvars{\\alpha}}$ and \\(\\iget[state]{\\iconcat[state=\\mu]{\\I}}=\\iget[state]{\\vdLint[const=I,state=\\omega]}\\) on $\\scomplement{\\boundvars{\\beta}}$.\nBy transitivity, \\(\\iget[state]{\\vdLint[const=I,state=\\nu]}=\\iget[state]{\\vdLint[const=I,state=\\omega]}\\) on $\\scomplement{(\\boundvars{\\alpha}\\cup\\boundvars{\\beta})}=\\scomplement{\\boundvars{\\alpha;\\beta}}$.\n\n\\item\n\\renewcommand{\\iconcat[state=\\mu]{\\I}}[1][]{\\dLint[state=\\nu_{#1}]}%\n\\m{\\iaccessible[\\prepeat{\\alpha}]{\\vdLint[const=I,state=\\nu]}{\\vdLint[const=I,state=\\omega]} = \\displaystyle\\cupfold_{n\\in\\naturals}\\iaccess[{\\prepeat[n]{\\alpha}}]{\\vdLint[const=I,state=\\nu]}}, then there is an $n\\in\\naturals$ and a sequence 
\\(\\iget[state]{\\iconcat[state=\\mu]{\\I}[0]}=\\iget[state]{\\vdLint[const=I,state=\\nu]},\\iget[state]{\\iconcat[state=\\mu]{\\I}[1]},\\dots,\\iget[state]{\\iconcat[state=\\mu]{\\I}[n]}=\\iget[state]{\\vdLint[const=I,state=\\omega]}\\) such that \\(\\iaccessible[\\alpha]{\\iconcat[state=\\mu]{\\I}[i]}{\\iconcat[state=\\mu]{\\I}[i+1]}\\) for all $i<n$.\nThe case $n>0$ proceeds as follows.\nSince \\(\\freevars{\\prepeat[n]{\\alpha}}=\\freevars{\\prepeat{\\alpha}}=\\freevars{\\alpha}\\), the induction hypothesis applied to the structurally simpler \\HP $\\prepeat[n]{\\alpha}$ implies\nthat there is an $\\iget[state]{\\Italt}$ such that\n \\(\\iaccessible[\\alpha^n]{\\Ialt}{\\Italt}\\)\n and \\(\\iget[state]{\\vdLint[const=I,state=\\omega]}=\\iget[state]{\\Italt}\\) on $V\\cup\\mustboundvars{\\prepeat[n]{\\alpha}} \\supseteq V = V\\cup\\mustboundvars{\\prepeat{\\alpha}}$,\n since $\\mustboundvars{\\prepeat{\\alpha}}=\\emptyset$.\n Since \\(\\iaccess[{\\prepeat[n]{\\alpha}}]{\\Ialt}\\subseteq\\iaccess[\\prepeat{\\alpha}]{\\Ialt}\\), this concludes the proof.\n\n\\end{compactitem}\n\\end{proofatend}\n\n\\iflongversion\nWhen assuming $\\iget[state]{\\vdLint[const=I,state=\\nu]}$ and $\\iget[state]{\\Ialt}$ to agree on all $\\vars{\\alpha}$, whether free or bound, $\\iget[state]{\\vdLint[const=I,state=\\omega]}$ and $\\iget[state]{\\Italt}$ will continue to agree on $\\vars{\\alpha}$:\n\\begin{corollary}[Coincidence lemma] \\label{cor:coincidence-HP}\n If \\(\\iget[state]{\\vdLint[const=I,state=\\nu]}=\\iget[state]{\\Ialt}\\) on $\\vars{\\alpha}$\n and \\(\\iget[const]{\\vdLint[const=I,state=\\nu]}=\\iget[const]{\\Ialt}\\) on $\\intsigns{\\alpha}$\n and \\(\\iaccessible[\\alpha]{\\vdLint[const=I,state=\\nu]}{\\vdLint[const=I,state=\\omega]}\\),\n then there is an $\\iget[state]{\\Italt}$ such that\n \\(\\iaccessible[\\alpha]{\\Ialt}{\\Italt}\\)\n and \\(\\iget[state]{\\vdLint[const=I,state=\\omega]}=\\iget[state]{\\Italt}\\) on $\\vars{\\alpha}$.\n The same continues to hold if 
\\(\\iget[state]{\\vdLint[const=I,state=\\nu]}=\\iget[state]{\\Ialt}\\) only on $\\vars{\\alpha}\\setminus\\mustboundvars{\\alpha}$.\n\\end{corollary}\n\\begin{proofatend}\nBy \\rref{lem:coincidence-HP} with \\(V=\\vars{\\alpha}\\supseteq\\freevars{\\alpha}\\) or \\(V=\\vars{\\alpha}\\setminus\\mustboundvars{\\alpha}\\), respectively.\n\n\\end{proofatend}\n\n\n\\begin{remark}\n Using hybrid computation sequences, the agreement in \\rref{lem:coincidence-HP} continues to hold for\n \\(\\iget[state]{\\vdLint[const=I,state=\\omega]}=\\iget[state]{\\Italt}\\) on $V\\cup W$, where $W$ is the set of must-bound variables on the hybrid computation sequence that $\\alpha$ actually took for the transition \\(\\iaccessible[\\alpha]{\\vdLint[const=I,state=\\nu]}{\\vdLint[const=I,state=\\omega]}\\), which could be larger than $\\mustboundvars{\\alpha}$.\n\\end{remark}\n\\fi\n\n\n\\section{Uniform Substitutions} \\label{sec:usubst}\n\nThe uniform substitution rule \\irref{US0} from first-order logic \\cite[\\S35,40]{Church_1956} substitutes \\emph{all} occurrences of predicate $p(\\usarg)$ by a formula $\\mapply{\\psi}{\\usarg}$, i.e.\\ it replaces all occurrences of $p(\\theta)$, for any (vectorial) term $\\theta$, by the corresponding $\\mapply{\\psi}{\\theta}$ simultaneously:\n\\[\n \\cinferenceRule[US0|US$_1$]{uniform substitution}\n {\\linferenceRule[formula]\n {\\preusubst[\\phi]{p}}\n {\\usubst[\\phi]{p}{\\psi}}\n }{}%\n \\qquad\\qquad\n \\cinferenceRule[US|US]{uniform substitution}\n {\\linferenceRule[formula]\n {\\phi}\n {\\applyusubst{\\sigma}{\\phi}}\n }{}%\n\\]\nRule \\irref{US0} \\cite{DBLP:journals\/corr\/Platzer14:dGL} requires all relevant substitutions of $\\mapply{\\psi}{\\theta}$ for $p(\\theta)$ to be admissible, i.e.\\ that no $p(\\theta)$ occurs in the scope of a quantifier or modality binding a variable of $\\mapply{\\psi}{\\theta}$ other than the occurrences in $\\theta$; see \\cite[\\S35,40]{Church_1956}.\n\nThis section considers a constructive 
definition of this proof rule that is more general: \\irref{US}.\nThe \\dL calculus uses uniform substitutions that affect terms, formulas, and programs.\nA \\dfn{uniform substitution} $\\sigma$ is a mapping\nfrom expressions of the\nform \\(f(\\usarg)\\) to terms $\\applysubst{\\sigma}{f(\\usarg)}$,\nfrom \\(p(\\usarg)\\) to formulas $\\applysubst{\\sigma}{p(\\usarg)}$,\nfrom \\(\\contextapp{C}{\\uscarg}\\) to formulas $\\applysubst{\\sigma}{\\contextapp{C}{\\uscarg}}$,\nand from program constants \\(a\\) to \\HPs $\\applysubst{\\sigma}{a}$.\nVectorial extensions are defined accordingly for uniform substitutions of other arities $k\\geq0$.\nHere $\\usarg$ is a reserved function symbol of arity zero and $\\uscarg$ a reserved \\predicational symbol of arity zero.\nFigure~\\ref{fig:usubst} defines the result $\\applyusubst{\\sigma}{\\phi}$ of applying to a \\dL formula~$\\phi$ the \\dfn{uniform substitution} $\\sigma$ that uniformly replaces all occurrences of function~$f$ by an (instantiated) term and all occurrences of predicate~$p$ or \\predicational~$C$ by an (instantiated) formula \nas well as all occurrences of program constant $a$ by a program.\nThe notation $\\applysubst{\\sigma}{f(\\usarg)}$ denotes the replacement for $f(\\usarg)$ according to $\\sigma$, i.e.\\ the value $\\applysubst{\\sigma}{f(\\usarg)}$ of function $\\sigma$ at $f(\\usarg)$.\nBy contrast, $\\applyusubst{\\sigma}{\\phi}$ denotes the result of applying $\\sigma$ to $\\phi$ according to \\rref{fig:usubst} (likewise for $\\applyusubst{\\sigma}{\\theta}$ and $\\applyusubst{\\sigma}{\\alpha}$).\nThe notation $f\\in\\replacees{\\sigma}$ signifies that $\\sigma$ replaces $f$, i.e.\\ \\(\\applysubst{\\sigma}{f(\\usarg)} \\neq f(\\usarg)\\).\nFinally, $\\sigma$ is a total function when augmented with \\(\\applysubst{\\sigma}{g(\\usarg)}=g(\\usarg)\\) for all $g\\not\\in\\replacees{\\sigma}$.\nAccordingly for predicate symbols, \\predicational{}s, and program constants.\n\n\\begin{definition}[Admissible uniform 
substitution] \\label{def:usubst-admissible}\n \\index{admissible|see{substitution, admissible}}\n The uniform substitution~$\\sigma$ is $U$-\\emph{\\text{admissible}\\xspace} for $\\phi$ (or $\\theta$ or $\\alpha$, respectively) with respect to the set $U\\subseteq\\mathcal{V}\\cup\\D{\\mathcal{V}}$ iff\n \\(\\freevars{\\restrict{\\sigma}{\\intsigns{\\phi}}}\\cap U=\\emptyset\\),\n where \\({\\restrict{\\sigma}{\\intsigns{\\phi}}}\\) is the restriction of $\\sigma$ that only replaces symbols that occur in $\\phi$\n and\n \\(\\freevars{\\sigma}=\\cupfold_{f\\in\\replacees{\\sigma}} \\freevars{\\applysubst{\\sigma}{f(\\usarg)}} \\cup \\cupfold_{p\\in\\replacees{\\sigma}} \\freevars{\\applysubst{\\sigma}{p(\\usarg)}}\\)\n are the \\emph{free variables} that $\\sigma$ introduces. \n The uniform substitution~$\\sigma$ is \\emph{\\text{admissible}\\xspace} for $\\phi$ (or $\\theta$ or $\\alpha$, respectively) iff all admissibility conditions during its application according to \\rref{fig:usubst} hold, \n which check that the bound variables $U$ of each operator are not free in the substitution on its arguments, i.e.\\ $\\sigma$ is $U$-admissible.\n %\n Otherwise the substitution clashes so its result $\\applyusubst{\\sigma}{\\phi}$ ($\\applyusubst{\\sigma}{\\theta}$ or $\\applyusubst{\\sigma}{\\alpha}$) is not defined.\n %\n\\end{definition}\n\n\\irref{US} is only applicable if $\\sigma$ is admissible for $\\phi$.\nIn all subsequent results, all applications of uniform substitutions are required to be defined (no clash).\n\n\\begin{figure}[tbhp]\n \\begin{displaymath}\n \\begin{array}{@{}rcll@{}}\n \\applyusubst{\\sigma}{x} &=& x & \\text{for variable $x\\in\\mathcal{V}$}\\\\\n \\applyusubst{\\sigma}{\\D{x}} &=& \\D{x} & \\text{for differential symbol $\\D{x}\\in\\D{\\mathcal{V}}$}\\\\\n \\applyusubst{\\sigma}{f(\\theta)} &=& (\\applyusubst{\\sigma}{f})(\\applyusubst{\\sigma}{\\theta})\n \\mdefeq 
\\applyusubst{\\{\\usarg\\mapsto\\applyusubst{\\sigma}{\\theta}\\}}{\\applysubst{\\sigma}{f(\\usarg)}} &\n \\text{for function symbol}~f\\in\\replacees{\\sigma}\n \\\\\n \\applyusubst{\\sigma}{g(\\theta)} &=& g(\\applyusubst{\\sigma}{\\theta}) &\\text{for function symbol}~g\\not\\in\\replacees{\\sigma}\n \\\\\n \\applyusubst{\\sigma}{\\theta+\\eta} &=& \\applyusubst{\\sigma}{\\theta} + \\applyusubst{\\sigma}{\\eta}\n \\\\\n \\applyusubst{\\sigma}{\\theta\\cdot\\eta} &=& \\applyusubst{\\sigma}{\\theta} \\cdot \\applyusubst{\\sigma}{\\eta}\n \\\\\n \\applyusubst{\\sigma}{\\der{\\theta}} &=& \\der{\\applyusubst{\\sigma}{\\theta}} &\\text{if $\\sigma$ $\\mathcal{V}\\cup\\D{\\mathcal{V}}$-admissible for $\\theta$}\n \\\\\n \\hline\n \\applyusubst{\\sigma}{\\theta\\geq\\eta} &\\mequiv& \\applyusubst{\\sigma}{\\theta} \\geq \\applyusubst{\\sigma}{\\eta}\\\\\n \\applyusubst{\\sigma}{p(\\theta)} &\\mequiv& (\\applyusubst{\\sigma}{p})(\\applyusubst{\\sigma}{\\theta})\n \\mdefequiv \\applyusubst{\\{\\usarg\\mapsto\\applyusubst{\\sigma}{\\theta}\\}}{\\applysubst{\\sigma}{p(\\usarg)}} &\n \\text{for predicate symbol}~p\\in\\replacees{\\sigma}\\\\\n \\applyusubst{\\sigma}{q(\\theta)} &\\mequiv& q(\\applyusubst{\\sigma}{\\theta}) &\\text{for predicate symbol}~q\\not\\in\\replacees{\\sigma}\\\\\n \\applyusubst{\\sigma}{\\contextapp{C}{\\phi}} &\\mequiv& \\contextapp{\\applyusubst{\\sigma}{C}}{\\applyusubst{\\sigma}{\\phi}}\n \\mdefequiv \\applyusubst{\\{\\uscarg\\mapsto\\applyusubst{\\sigma}{\\phi}\\}}{\\applysubst{\\sigma}{\\contextapp{C}{\\uscarg}}} &\n \\text{if $\\sigma$ $\\mathcal{V}\\cup\\D{\\mathcal{V}}$-admissible for $\\phi$, $C\\in\\replacees{\\sigma}$}\n %\n %\n \\\\\n \\applyusubst{\\sigma}{\\contextapp{C}{\\phi}} &\\mequiv& \\contextapp{C}{\\applyusubst{\\sigma}{\\phi}} &\n \\text{if $\\sigma$ $\\mathcal{V}\\cup\\D{\\mathcal{V}}$-admissible for $\\phi$, $C\\not\\in\\replacees{\\sigma}$}\n %\n \\\\\n \\applyusubst{\\sigma}{\\lnot\\phi} &\\mequiv& 
\\lnot\\applyusubst{\\sigma}{\\phi}\\\\\n \\applyusubst{\\sigma}{\\phi\\land\\psi} &\\mequiv& \\applyusubst{\\sigma}{\\phi} \\land \\applyusubst{\\sigma}{\\psi}\\\\\n \\applyusubst{\\sigma}{\\lforall{x}{\\phi}} &=& \\lforall{x}{\\applyusubst{\\sigma}{\\phi}} & \\text{if $\\sigma$ $\\{x\\}$-admissible for $\\phi$}\\\\\n \\applyusubst{\\sigma}{\\lexists{x}{\\phi}} &=& \\lexists{x}{\\applyusubst{\\sigma}{\\phi}} & \\text{if $\\sigma$ $\\{x\\}$-admissible for $\\phi$}\\\\\n \\applyusubst{\\sigma}{\\dbox{\\alpha}{\\phi}} &=& \\dbox{\\applyusubst{\\sigma}{\\alpha}}{\\applyusubst{\\sigma}{\\phi}} & \\text{if $\\sigma$ $\\boundvars{\\applyusubst{\\sigma}{\\alpha}}$-admissible for $\\phi$}\\\\\n \\applyusubst{\\sigma}{\\ddiamond{\\alpha}{\\phi}} &=& \\ddiamond{\\applyusubst{\\sigma}{\\alpha}}{\\applyusubst{\\sigma}{\\phi}} & \\text{if $\\sigma$ $\\boundvars{\\applyusubst{\\sigma}{\\alpha}}$-admissible for $\\phi$}\n \\\\\n \\hline\n \\applyusubst{\\sigma}{a} &\\mequiv& \\applysubst{\\sigma}{a} &\\text{for program constant $a\\in\\replacees{\\sigma}$}\\\\\n \\applyusubst{\\sigma}{b} &\\mequiv& b &\\text{for program constant $b\\not\\in\\replacees{\\sigma}$}\\\\\n \\applyusubst{\\sigma}{\\pupdate{\\umod{x}{\\theta}}} &\\mequiv& \\pupdate{\\umod{x}{\\applyusubst{\\sigma}{\\theta}}}\\\\\n \\applyusubst{\\sigma}{\\Dupdate{\\Dumod{\\D{x}}{\\theta}}} &\\mequiv& \\Dupdate{\\Dumod{\\D{x}}{\\applyusubst{\\sigma}{\\theta}}}\\\\\n \\applyusubst{\\sigma}{\\pevolvein{\\D{x}=\\genDE{x}}{\\psi}} &\\mequiv&\n \\hevolvein{\\D{x}=\\applyusubst{\\sigma}{\\genDE{x}}}{\\applyusubst{\\sigma}{\\psi}} & \\text{if $\\sigma$ $\\{x,\\D{x}\\}$-admissible for $\\genDE{x},\\psi$}\\\\\n \\applyusubst{\\sigma}{\\ptest{\\psi}} &\\mequiv& \\ptest{\\applyusubst{\\sigma}{\\psi}}\\\\\n \\applyusubst{\\sigma}{\\pchoice{\\alpha}{\\beta}} &\\mequiv& \\pchoice{\\applyusubst{\\sigma}{\\alpha}} {\\applyusubst{\\sigma}{\\beta}}\\\\\n \\applyusubst{\\sigma}{\\alpha;\\beta} &\\mequiv& \\applyusubst{\\sigma}{\\alpha}; 
\\applyusubst{\\sigma}{\\beta} &\\text{if $\\sigma$ $\\boundvars{\\applyusubst{\\sigma}{\\alpha}}$-admissible for $\\beta$}\\\\\n \\applyusubst{\\sigma}{\\prepeat{\\alpha}} &\\mequiv& \\prepeat{(\\applyusubst{\\sigma}{\\alpha})} &\\text{if $\\sigma$ $\\boundvars{\\applyusubst{\\sigma}{\\alpha}}$-admissible for $\\alpha$}\n \\end{array}%\n \\end{displaymath}%\n \\vspace{-\\baselineskip}\n \\caption{Recursive application of uniform substitution~$\\sigma$}%\n \\index{substitution!uniform|textbf}%\n \\label{fig:usubst}\n\\end{figure}\n\n\\subsection{Correctness of Uniform Substitutions}\n\nLet\n\\m{\\iget[const]{\\imodif[const]{\\vdLint[const=I,state=\\nu]}{p}{R}}} denote the interpretation that agrees with interpretation~$\\iget[const]{\\vdLint[const=I,state=\\nu]}$ except for the interpretation of predicate symbol~$p$, which is changed to~\\m{R \\subseteq \\reals}.\nAccordingly for predicate symbols of other arities, for function symbols $f$, and \\predicational{s} $C$.\n\n\\begin{corollary}[Substitution adjoints] \\label{cor:adjointUsubst}\n\\def\\Ialta{\\iadjointSubst{\\sigma}{\\Ialt}}%\nThe \\emph{adjoint interpretation} $\\iget[const]{\\Ia}$ to substitution $\\sigma$ for $\\iportray{\\vdLint[const=I,state=\\nu]}$ is the interpretation that agrees with $\\iget[const]{\\vdLint[const=I,state=\\nu]}$ except that for each function symbol $f\\in\\replacees{\\sigma}$, predicate symbol $p\\in\\replacees{\\sigma}$, \\predicational $C\\in\\replacees{\\sigma}$, and program constant $a\\in\\replacees{\\sigma}$:\n\\begin{align*}\n \\iget[const]{\\Ia}(f) &: \\reals\\to\\reals;\\, d\\mapsto\\ivaluation{\\imodif[const]{\\vdLint[const=I,state=\\nu]}{\\,\\usarg}{d}}{\\applysubst{\\sigma}{f}(\\usarg)}\\\\\n \\iget[const]{\\Ia}(p) &= \\{d\\in\\reals \\with \\imodels{\\imodif[const]{\\vdLint[const=I,state=\\nu]}{\\,\\usarg}{d}}{\\applysubst{\\sigma}{p}(\\usarg)}\\}\n \\\\\n \\iget[const]{\\Ia}(C) &: \\powerset{\\reals}\\to\\powerset{\\reals};\\, 
R\\mapsto\\imodel{\\imodif[const]{\\vdLint[const=I,state=\\nu]}{\\,\\uscarg}{R}}{\\applysubst{\\sigma}{\\contextapp{C}{\\uscarg}}}\n \\\\\n \\iget[const]{\\Ia}(a) &= \\iaccess[\\applysubst{\\sigma}{a}]{\\vdLint[const=I,state=\\nu]}\n\\end{align*}\nIf \\(\\iget[state]{\\vdLint[const=I,state=\\nu]}=\\iget[state]{\\vdLint[const=I,state=\\omega]}\\) on \\(\\freevars{\\sigma}\\),\nthen \\(\\iget[const]{\\Ia}=\\iget[const]{\\Ita}\\).\nIf $\\sigma$ is $U$-admissible for $\\phi$ (or $\\theta$ or $\\alpha$, respectively) and \\(\\iget[state]{\\vdLint[const=I,state=\\nu]}=\\iget[state]{\\vdLint[const=I,state=\\omega]}\\) on $\\scomplement{U}$, then\n\\begin{align*}\n \\ivalues{\\Ia}{\\theta} &= \\ivalues{\\Ita}{\\theta}\n ~\\text{i.e.}~\n \\ivaluation{\\iconcat[state=\\mu]{\\Ia}}{\\theta} = \\ivaluation{\\iconcat[state=\\mu]{\\Ita}}{\\theta} ~\\text{for all}~\\mu\n \\\\\n \\imodel{\\Ia}{\\phi} &= \\imodel{\\Ita}{\\phi}\\\\\n \\iaccess[\\alpha]{\\Ia} &= \\iaccess[\\alpha]{\\Ita}\n\\end{align*}\n\\end{corollary}\n\\begin{proofatend}\nFor well-definedness of $\\iget[const]{\\Ia}$, note that $\\iget[const]{\\Ia}(f)$ is a smooth function since \\({\\applysubst{\\sigma}{f}(\\usarg)}\\) has smooth values.\nFirst, \\(\\iget[const]{\\Ia}(a) = \\iaccess[\\applysubst{\\sigma}{a}]{\\vdLint[const=I,state=\\nu]} = \\iget[const]{\\Ita}(a)\\) holds because the adjoint to $\\sigma$ for $\\iportray{\\vdLint[const=I,state=\\nu]}$ in the case of programs is independent of $\\iget[state]{\\Ia}$ (the program has access to its respective initial state at runtime).\nLikewise \\(\\iget[const]{\\Ia}(C) = \\iget[const]{\\Ita}(C)\\) for \\predicational{s}.\nBy \\rref{lem:coincidence-term},\n\\(\\ivaluation{\\imodif[const]{\\vdLint[const=I,state=\\nu]}{\\,\\usarg}{d}}{\\applysubst{\\sigma}{f}(\\usarg)}\n= \\ivaluation{\\imodif[const]{\\vdLint[const=I,state=\\omega]}{\\,\\usarg}{d}}{\\applysubst{\\sigma}{f}(\\usarg)}\\) when 
\\(\\iget[state]{\\vdLint[const=I,state=\\nu]}=\\iget[state]{\\vdLint[const=I,state=\\omega]}\\) on \\(\\freevars{\\applysubst{\\sigma}{f}(\\usarg)}\\).\nAlso \\(\\imodels{\\imodif[const]{\\vdLint[const=I,state=\\nu]}{\\,\\usarg}{d}}{\\applysubst{\\sigma}{p}(\\usarg)}\\)\niff \\(\\imodels{\\imodif[const]{\\vdLint[const=I,state=\\omega]}{\\,\\usarg}{d}}{\\applysubst{\\sigma}{p}(\\usarg)}\\)\nby \\rref{lem:coincidence} when \\(\\iget[state]{\\vdLint[const=I,state=\\nu]}=\\iget[state]{\\vdLint[const=I,state=\\omega]}\\) on \\(\\freevars{\\applysubst{\\sigma}{p}(\\usarg)}\\).\nThus, \\(\\iget[const]{\\Ia}=\\iget[const]{\\Ita}\\) when $\\iget[state]{\\vdLint[const=I,state=\\nu]}=\\iget[state]{\\vdLint[const=I,state=\\omega]}$ on $\\freevars{\\sigma}$.\n\nIf $\\sigma$ is $U$-admissible for $\\phi$ (or $\\theta$ or $\\alpha$), then\n \\(\\freevars{\\applysubst{\\sigma}{f(\\usarg)}}\\cap U=\\emptyset\\)\n so\n \\(\\freevars{\\applysubst{\\sigma}{f(\\usarg)}}\\subseteq\\scomplement{U}\\)\n for every function symbol $f\\in\\intsigns{\\phi}$ (or $\\theta$ or $\\alpha$) and likewise for predicate symbols $p\\in\\intsigns{\\phi}$.\n Since \\(\\iget[state]{\\vdLint[const=I,state=\\nu]}=\\iget[state]{\\vdLint[const=I,state=\\omega]}\\) on $\\scomplement{U}$,\n it follows that \\(\\iget[const]{\\Ita}=\\iget[const]{\\Ia}\\) on the function and predicate symbols in $\\intsigns{\\phi}$ (or $\\theta$ or $\\alpha$).\n Finally, \\(\\iget[const]{\\Ita}=\\iget[const]{\\Ia}\\) implies that\n \\(\\imodels{\\Ita}{\\phi}\\) iff \\(\\imodels{\\Ia}{\\phi}\\) by \\rref{lem:coincidence}\n and that \\(\\ivalues{\\Ia}{\\theta} = \\ivalues{\\Ita}{\\theta}\\) by \\rref{lem:coincidence-term}\n and that \\(\\iaccess[\\alpha]{\\Ita} = \\iaccess[\\alpha]{\\Ia}\\) by \\rref{lem:coincidence-HP}.\n\n\\end{proofatend}\n\nSubstituting equals for equals is sound by the compositional semantics of \\dL.\nThe more general uniform substitutions are still sound, because interpretations of uniform substitutes correspond to 
interpretations of their adjoints.\nThe semantic modification of adjoint interpretations has the same effect as the syntactic uniform substitution, proved by simultaneous induction.\n\\iflongversion\nRecall that all substitutions in the following are assumed to be defined (no clash); otherwise the lemmas make no claim.\n\\fi\n\n\\begin{lemma}[Uniform substitution lemma] \\label{lem:usubst-term}\nThe uniform substitution $\\sigma$ and its adjoint interpretation $\\iportray{\\Ia}$ to $\\sigma$ for $\\iportray{\\vdLint[const=I,state=\\nu]}$ have the same \\emph{term} semantics:\n\\[\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\theta}} = \\ivaluation{\\Ia}{\\theta}\\]\n\\end{lemma}\n\\begin{proofatend}\n\\def\\Im{\\vdLint[const=\\usebox{\\ImnI},state=\\nu\\oplus(x\\mapsto\\usebox{\\Imnx})]}%\n\nThe proof is by structural induction on $\\theta$.\n\\begin{compactitem}\n\\item\n \\(\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{x}} \n = \\ivaluation{\\vdLint[const=I,state=\\nu]}{x} = \\iget[state]{\\vdLint[const=I,state=\\nu]}(x) \\)\n = \\(\\ivaluation{\\Ia}{x}\\)\n \\(\\text{since}~ x\\not\\in\\replacees{\\sigma}\\)\n for variable $x\\in\\mathcal{V}$\n\n\\item\n \\(\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\D{x}}} \n = \\ivaluation{\\vdLint[const=I,state=\\nu]}{\\D{x}} = \\iget[state]{\\vdLint[const=I,state=\\nu]}(\\D{x}) \\)\n = \\(\\ivaluation{\\Ia}{\\D{x}}\\)\n \\(\\text{as}~ \\D{x}\\not\\in\\replacees{\\sigma}\\)\n for differential symbol $\\D{x}\\in\\D{\\mathcal{V}}$\n\n\\item\n \\(\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{f(\\theta)}}\n = \\ivaluation{\\vdLint[const=I,state=\\nu]}{(\\applyusubst{\\sigma}{f})\\big(\\applyusubst{\\sigma}{\\theta}\\big)}\n = \\ivaluation{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\{\\usarg\\mapsto\\applyusubst{\\sigma}{\\theta}\\}}{\\applysubst{\\sigma}{f(\\usarg)}}}\\)\n \\(\\stackrel{\\text{IH}}{=} 
\\ivaluation{\\Iminner}{\\applysubst{\\sigma}{f(\\usarg)}}\\)\n \\(= (\\iget[const]{\\Ia}(f))(d)\\)\n \\(= (\\iget[const]{\\Ia}(f))(\\ivaluation{\\Ia}{\\theta})\n = \\ivaluation{\\Ia}{f(\\theta)}\\)\n with\n \\(d\\mdefeq\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\theta}} \n \\stackrel{\\text{IH}}{=} \\ivaluation{\\Ia}{\\theta}\\)\n by using the induction hypothesis twice,\n once for \\(\\applyusubst{\\sigma}{\\theta}\\) on the smaller $\\theta$ \n and once for\n \\(\\applyusubst{\\{\\usarg\\mapsto\\applyusubst{\\sigma}{\\theta}\\}}{\\applysubst{\\sigma}{f(\\usarg)}}\\)\n on the possibly bigger term\n \\({\\applysubst{\\sigma}{f(\\usarg)}}\\)\n but the structurally simpler uniform substitution\n \\(\\applyusubst{\\{\\usarg\\mapsto\\applyusubst{\\sigma}{\\theta}\\}}{\\dots}\\)\n that is a substitution on the symbol $\\usarg$ of arity zero, not a substitution of functions with arguments.\n For well-foundedness of the induction note that the $\\usarg$ substitution only happens for function symbols $f$ with at least one argument $\\theta$\n (\\(\\text{for}~f\\in\\replacees{\\sigma}\\)).\n \n\\item\n \\(\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{g(\\theta)}}\n = \\ivaluation{\\vdLint[const=I,state=\\nu]}{g(\\applyusubst{\\sigma}{\\theta})}\\)\n = \\(\\iget[const]{\\vdLint[const=I,state=\\nu]}(g)\\big(\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\theta}}\\big)\\)\n \\(\\stackrel{\\text{IH}}{=} \\iget[const]{\\vdLint[const=I,state=\\nu]}(g)\\big(\\ivaluation{\\Ia}{\\theta}\\big)\n = \\iget[const]{\\Ia}(g)\\big(\\ivaluation{\\Ia}{\\theta}\\big)\n = \\ivaluation{\\Ia}{g(\\theta)}\\)\n by induction hypothesis and since \\(\\iget[const]{\\vdLint[const=I,state=\\nu]}(g)=\\iget[const]{\\Ia}(g)\\) as the interpretation of $g$ does not change in $\\iget[const]{\\Ia}$\n \\(\\text{for}~g\\not\\in\\replacees{\\sigma}\\).\n\n\\item\n 
\\(\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\theta+\\eta}}\n = \\ivaluation{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\theta} + \\applyusubst{\\sigma}{\\eta}}\n = \\ivaluation{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\theta}} + \\ivaluation{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\eta}}\\)\n \\(\\stackrel{\\text{IH}}{=} \\ivaluation{\\Ia}{\\theta} + \\ivaluation{\\Ia}{\\eta}\n = \\ivaluation{\\Ia}{\\theta+\\eta}\\)\n by induction hypothesis.\n\n\\item\n \\(\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\theta\\cdot\\eta}}\n = \\ivaluation{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\theta} \\cdot \\applyusubst{\\sigma}{\\eta}}\n = \\ivaluation{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\theta}} \\cdot \\ivaluation{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\eta}}\\)\n \\(\\stackrel{\\text{IH}}{=} \\ivaluation{\\Ia}{\\theta} \\cdot \\ivaluation{\\Ia}{\\eta}\n = \\ivaluation{\\Ia}{\\theta\\cdot\\eta}\\)\n by induction hypothesis.\n\n\\item\n\\def\\Im{\\imodif[state]{\\vdLint[const=I,state=\\nu]}{x}{X}}%\n\\def\\Ima{\\iadjointSubst{\\sigma}{\\Im}}%\n\\def\\Iam{\\imodif[state]{\\Ia}{x}{X}}%\n\\(\n\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\der{\\theta}}}\n= \\ivaluation{\\vdLint[const=I,state=\\nu]}{\\der{\\applyusubst{\\sigma}{\\theta}}}\n= \\sum_x \\iget[state]{\\vdLint[const=I,state=\\nu]}(\\D{x}) \\itimes \\Dp[X]{\\ivaluation{\\Im}{\\applyusubst{\\sigma}{\\theta}}}\n\\stackrel{\\text{IH}}{=} \\sum_x \\iget[state]{\\vdLint[const=I,state=\\nu]}(\\D{x}) \\itimes \\Dp[X]{\\ivaluation{\\Ima}{\\theta}}\n= \\sum_x \\iget[state]{\\vdLint[const=I,state=\\nu]}(\\D{x}) \\itimes \\Dp[X]{\\ivaluation{\\Iam}{\\theta}}\n= \\ivaluation{\\Ia}{\\der{\\theta}}\n\\)\nby induction hypothesis,\nprovided $\\sigma$ is $\\mathcal{V}\\cup\\D{\\mathcal{V}}$-admissible for $\\theta$, i.e. 
does not introduce any variables or differential symbols, \nso that \rref{cor:adjointUsubst} implies \(\iget[const]{\Ia}=\iget[const]{\Ita}\) for all $\iget[state]{\vdLint[const=I,state=\nu]},\iget[state]{\vdLint[const=I,state=\omega]}$ (that agree on $\scomplement{(\mathcal{V}\cup\D{\mathcal{V}})}=\emptyset$, which imposes no condition on $\iget[state]{\vdLint[const=I,state=\nu]},\iget[state]{\vdLint[const=I,state=\omega]}$).\n\n\end{compactitem}\n\end{proofatend}\n\n\n\n\begin{lemma}[Uniform substitution lemma] \label{lem:usubst}\nThe uniform substitution $\sigma$ and its adjoint interpretation $\iportray{\Ia}$ to $\sigma$ for $\iportray{\vdLint[const=I,state=\nu]}$ have the same \emph{formula} semantics:\n\[\imodels{\vdLint[const=I,state=\nu]}{\applyusubst{\sigma}{\phi}} ~\text{iff}~ \imodels{\Ia}{\phi}\]\n\end{lemma}\n\begin{proofatend}\nThe proof is by structural induction on $\phi$.\n\begin{compactitem}\n\item\n \(\imodels{\vdLint[const=I,state=\nu]}{\applyusubst{\sigma}{\theta\geq\eta}}\)\n iff \(\imodels{\vdLint[const=I,state=\nu]}{\applyusubst{\sigma}{\theta} \geq \applyusubst{\sigma}{\eta}}\)\n iff \(\ivaluation{\vdLint[const=I,state=\nu]}{\applyusubst{\sigma}{\theta}} \geq \ivaluation{\vdLint[const=I,state=\nu]}{\applyusubst{\sigma}{\eta}}\),\n by \rref{lem:usubst-term},\n iff \(\ivaluation{\Ia}{\theta} \geq \ivaluation{\Ia}{\eta}\)\n iff \(\imodels{\Ia}{\theta\geq\eta}\)\n\n\item\n \(\imodels{\vdLint[const=I,state=\nu]}{\applyusubst{\sigma}{p(\theta)}}\)\n iff \(\imodels{\vdLint[const=I,state=\nu]}{(\applyusubst{\sigma}{p})\big(\applyusubst{\sigma}{\theta}\big)}\)\n iff \(\imodels{\vdLint[const=I,state=\nu]}{\applyusubst{\{\usarg\mapsto\applyusubst{\sigma}{\theta}\}}{\applysubst{\sigma}{p(\usarg)}}}\)\n iff, by IH, \(\imodels{\Iminner}{\applysubst{\sigma}{p(\usarg)}}\)\n iff \(d \in 
\\iget[const]{\\Ia}(p)\\)\n iff \\((\\ivaluation{\\Ia}{\\theta}) \\in \\iget[const]{\\Ia}(p)\\)\n iff \\(\\imodels{\\Ia}{p(\\theta)}\\)\n with \\(d\\mdefeq\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\theta}} = \\ivaluation{\\Ia}{\\theta}\\)\n by using \\rref{lem:usubst-term} for \\(\\applyusubst{\\sigma}{\\theta}\\)\n and by using the induction hypothesis\n for \\(\\applyusubst{\\{\\usarg\\mapsto\\applyusubst{\\sigma}{\\theta}\\}}{\\applysubst{\\sigma}{p(\\usarg)}}\\)\n on the possibly bigger formula \\({\\applysubst{\\sigma}{p(\\usarg)}}\\) but the structurally simpler uniform substitution \\(\\applyusubst{\\{\\usarg\\mapsto\\applyusubst{\\sigma}{\\theta}\\}}{\\dots}\\) that is a mere substitution on symbol $\\usarg$ of arity zero, not a substitution of predicates\n (\\(\\text{for}~p\\in\\replacees{\\sigma}\\)).\n\n\\item\n \\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{q(\\theta)}}\\)\n iff \\(\\imodels{\\vdLint[const=I,state=\\nu]}{q(\\applyusubst{\\sigma}{\\theta})}\\)\n iff \\(\\big(\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\theta}}\\big)\\in \\iget[const]{\\vdLint[const=I,state=\\nu]}(q)\\)\n so, by \\rref{lem:usubst-term}, iff \\(\\big(\\ivaluation{\\Ia}{\\theta}\\big) \\in \\iget[const]{\\vdLint[const=I,state=\\nu]}(q)\\)\n iff \\(\\big(\\ivaluation{\\Ia}{\\theta}\\big) \\in \\iget[const]{\\Ia}(q)\\)\n iff \\(\\imodels{\\Ia}{q(\\theta)}\\)\n since \\(\\iget[const]{\\vdLint[const=I,state=\\nu]}(q)=\\iget[const]{\\Ia}(q)\\) as the interpretation of $q$ does not change in $\\iget[const]{\\Ia}$\n (\\(\\text{for}~q\\not\\in\\replacees{\\sigma}\\))\n\n\\item\n\\def\\ImM{\\imodif[const]{\\vdLint[const=I,state=\\nu]}{\\uscarg}{R}}%\n\\let\\ImN\\ImM%\n\\def\\IaM{\\imodif[const]{\\Ia}{\\uscarg}{R}}%\n For the case \\({\\applyusubst{\\sigma}{\\contextapp{C}{\\phi}}}\\), first show \n \\(\\imodel{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\phi}} = \\imodel{\\Ia}{\\phi}\\).\n By induction 
hypothesis for the smaller $\\phi$:\n \\(\\imodels{\\vdLint[const=I,state=\\omega]}{\\applyusubst{\\sigma}{\\phi}}\\)\n iff\n \\(\\imodels{\\Ita}{\\phi}\\),\n where \n \\(\\imodel{\\Ita}{\\phi}=\\imodel{\\Ia}{\\phi}\\)\n by \\rref{cor:adjointUsubst}\n for all $\\iget[state]{\\Ia},\\iget[state]{\\Ita}$\n (that agree on $\\scomplement{(\\mathcal{V}\\cup\\D{\\mathcal{V}})}=\\emptyset$, which imposes no condition on $\\iget[state]{\\vdLint[const=I,state=\\nu]},\\iget[state]{\\vdLint[const=I,state=\\omega]}$)\n since $\\sigma$ is $\\mathcal{V}\\cup\\D{\\mathcal{V}}$-admissible for $\\phi$.\n The proof then proceeds:\n\n \\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\contextapp{C}{\\phi}}}\\)\n \\(=\\imodel{\\vdLint[const=I,state=\\nu]}{\\contextapp{\\applyusubst{\\sigma}{C}}{\\applyusubst{\\sigma}{\\phi}}}\\)\n \\(= \\imodel{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\{\\uscarg\\mapsto\\applyusubst{\\sigma}{\\phi}\\}}{\\applysubst{\\sigma}{\\contextapp{C}{\\uscarg}}}}\\),\n so, by induction hypothesis for the structurally simpler uniform substitution ${\\{\\uscarg\\mapsto\\applyusubst{\\sigma}{\\phi}\\}}$ that is a mere substitution on symbol $\\uscarg$ of arity zero, iff\n \\(\\imodels{\\ImM}{\\applysubst{\\sigma}{\\contextapp{C}{\\uscarg}}}\\)\n since the adjoint to \\(\\{\\uscarg\\mapsto\\applyusubst{\\sigma}{\\phi}\\}\\) is $\\iget[const]{\\ImM}$ with \\(R\\mdefeq\\imodel{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\phi}}\\).\n \n Also\n \\(\\imodels{\\Ia}{\\contextapp{C}{\\phi}}\\)\n \\(= \\iget[const]{\\Ia}(C)\\big(\\imodel{\\Ia}{\\phi}\\big)\\)\n \\(= \\imodel{\\ImN}{\\applysubst{\\sigma}{\\contextapp{C}{\\uscarg}}}\\)\n for \\(R=\\imodel{\\Ia}{\\phi}=\\imodel{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\phi}}\\) by induction hypothesis.\n Both sides are, thus, equivalent.\n \n\\item\n The case \\({\\applyusubst{\\sigma}{\\contextapp{C}{\\phi}}}\\) for $C\\not\\in\\replacees{\\sigma}$ again first shows\n 
\\(\\imodel{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\phi}} = \\imodel{\\Ia}{\\phi}\\)\n for all $\\iget[state]{\\vdLint[const=I,state=\\nu]}$ using that $\\sigma$ is $\\mathcal{V}\\cup\\D{\\mathcal{V}}$-admissible for $\\phi$.\n Then\n \\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\contextapp{C}{\\phi}}}\\)\n \\(= \\imodel{\\vdLint[const=I,state=\\nu]}{\\contextapp{C}{\\applyusubst{\\sigma}{\\phi}}}\\)\n \\(= \\iget[const]{\\vdLint[const=I,state=\\nu]}(C)\\big(\\imodel{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\phi}}\\big)\\)\n \\(= \\iget[const]{\\vdLint[const=I,state=\\nu]}(C)\\big(\\imodel{\\Ia}{\\phi}\\big)\\)\n \\(= \\iget[const]{\\Ia}(C)\\big(\\imodel{\\Ia}{\\phi}\\big)\\)\n \\(= \\imodel{\\Ia}{\\contextapp{C}{\\phi}}\\)\n iff \\(\\imodels{\\Ia}{\\contextapp{C}{\\phi}}\\)\n\n\\item\n \\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\lnot\\phi}}\\)\n iff \\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\lnot\\applyusubst{\\sigma}{\\phi}}\\)\n iff \\(\\inonmodels{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\phi}}\\),\n by induction hypothesis,\n iff \\(\\inonmodels{\\Ia}{\\phi}\\)\n iff \\(\\imodels{\\Ia}{\\lnot\\phi}\\)\n\n\\item\n \\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\phi\\land\\psi}}\\)\n iff \\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\phi} \\land \\applyusubst{\\sigma}{\\psi}}\\)\n iff \\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\phi}}\\) and \\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\psi}}\\),\n by induction hypothesis,\n iff \\(\\imodels{\\Ia}{\\phi}\\) and \\(\\imodels{\\Ia}{\\psi}\\)\n iff \\(\\imodels{\\Ia}{\\phi\\land\\psi}\\)\n\n\\item\n\\def\\Imd{\\imodif[state]{\\vdLint[const=I,state=\\nu]}{x}{d}}%\n\\def\\Iamd{\\imodif[state]{\\Ia}{x}{d}}%\n\\def\\Imda{\\iadjointSubst{\\sigma}{\\Imd}}%\n \\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\lexists{x}{\\phi}}}\\)\n 
iff \\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\lexists{x}{\\applyusubst{\\sigma}{\\phi}}}\\)\n (provided $\\sigma$ is $\\{x\\}$-admissible for $\\phi$)\n iff \\(\\imodels{\\Imd}{\\applyusubst{\\sigma}{\\phi}}\\) for some $d$,\n so, by induction hypothesis,\n iff \\(\\imodels{\\Imda}{\\phi}\\) for some $d$,\n which is equivalent to\n \\(\\imodels{\\Iamd}{\\phi}\\) by \\rref{cor:adjointUsubst} as $\\sigma$ is $\\{x\\}$-admissible for $\\phi$ and $\\iget[state]{\\vdLint[const=I,state=\\nu]}=\\iget[state]{\\Imd}$ on $\\scomplement{\\{x\\}}$.\n Thus, this is equivalent to\n \\(\\imodels{\\Ia}{\\lexists{x}{\\phi}}\\).\n \n\\item The case\n \\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\lforall{x}{\\phi}}}\\)\n follows by duality \\(\\lforall{x}{\\phi} \\mequiv \\lnot\\lexists{x}{\\lnot\\phi}\\), which is respected in the definition of uniform substitutions.\n\n\\item\n \\newcommand{\\Iat}{\\iconcat[state=\\omega]{\\Ia}}%\n \\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\ddiamond{\\alpha}{\\phi}}}\\)\n iff \\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\ddiamond{\\applyusubst{\\sigma}{\\alpha}}{\\applyusubst{\\sigma}{\\phi}}}\\)\n (provided $\\sigma$ is $\\boundvars{\\applyusubst{\\sigma}{\\alpha}}$-admissible for $\\phi$)\n iff there is a $\\iget[state]{\\vdLint[const=I,state=\\omega]}$ such that\n \\(\\iaccessible[\\applyusubst{\\sigma}{\\alpha}]{\\vdLint[const=I,state=\\nu]}{\\vdLint[const=I,state=\\omega]}\\) and \\(\\imodels{\\vdLint[const=I,state=\\omega]}{\\applyusubst{\\sigma}{\\phi}}\\),\n which, by \\rref{lem:usubst-HP} and induction hypothesis, respectively, is equivalent to:\n there is a $\\iget[state]{\\Ita}$ such that\n \\(\\iaccessible[\\alpha]{\\Ia}{\\Ita}\\) and \\(\\imodels{\\Ita}{\\phi}\\),\n which is equivalent to\n \\(\\imodels{\\Ia}{\\ddiamond{\\alpha}{\\phi}}\\),\n because \\(\\imodels{\\Ita}{\\phi}\\) is equivalent to \\(\\imodels{\\Iat}{\\phi}\\) by \\rref{cor:adjointUsubst}\n as $\\sigma$ is 
$\\boundvars{\\applyusubst{\\sigma}{\\alpha}}$-admissible for $\\phi$ and \\(\\iget[state]{\\Ia}=\\iget[state]{\\Iat}\\) on $\\scomplement{\\boundvars{\\applyusubst{\\sigma}{\\alpha}}}$ by \\rref{lem:bound} since\n \\(\\iaccessible[\\applyusubst{\\sigma}{\\alpha}]{\\vdLint[const=I,state=\\nu]}{\\vdLint[const=I,state=\\omega]}\\).\n\n\\item The case\n \\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\dbox{\\alpha}{\\phi}}}\\)\n follows by duality \\(\\dbox{\\alpha}{\\phi} \\mequiv \\lnot\\ddiamond{\\alpha}{\\lnot\\phi}\\), which is respected in the definition of uniform substitutions.\n\n\\end{compactitem}\n\\end{proofatend}\n\n\n\\begin{lemma}[Uniform substitution lemma] \\label{lem:usubst-HP}\nThe uniform substitution $\\sigma$ and its adjoint interpretation $\\iportray{\\Ia}$ to $\\sigma$ for $\\iportray{\\vdLint[const=I,state=\\nu]}$ have the same \\emph{program} semantics:\n\\[\n\\iaccessible[{\\applyusubst{\\sigma}{\\alpha}}]{\\vdLint[const=I,state=\\nu]}{\\vdLint[const=I,state=\\omega]}\n~\\text{iff}~\n\\iaccessible[\\alpha]{\\Ia}{\\Ita}\n\\]\n\\end{lemma}\n\\begin{proofatend}\nThe proof is by structural induction on $\\alpha$.\n\\begin{compactitem}\n\\item\n \\(\\iaccessible[\\applyusubst{\\sigma}{a}]{\\vdLint[const=I,state=\\nu]}{\\vdLint[const=I,state=\\omega]} = \\iaccess[\\applysubst{\\sigma}{a}]{\\vdLint[const=I,state=\\nu]} = \\iget[const]{\\Ia}(a) = \\iaccess[a]{\\Ia}\\)\n for program constant $a\\in\\replacees{\\sigma}$\n (the proof is accordingly for $a\\not\\in\\replacees{\\sigma}$).\n\n\\item \n \\(\\iaccessible[\\applyusubst{\\sigma}{\\pumod{x}{\\theta}}]{\\vdLint[const=I,state=\\nu]}{\\vdLint[const=I,state=\\omega]}\n = \\iaccess[\\pumod{x}{\\applyusubst{\\sigma}{\\theta}}]{\\vdLint[const=I,state=\\nu]}\\)\n iff \\(\\iget[state]{\\vdLint[const=I,state=\\omega]} = \\modif{\\iget[state]{\\vdLint[const=I,state=\\nu]}}{x}{\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\theta}}}\\)\n = 
\\(\\modif{\\iget[state]{\\vdLint[const=I,state=\\nu]}}{x}{\\ivaluation{\\Ia}{\\theta}}\\)\n by \\rref{lem:usubst}, which is, thus, equivalent to\n \\(\\iaccessible[\\pumod{x}{\\theta}]{\\Ia}{\\Ita}\\).\n\n\\item\n \\(\\iaccessible[\\applyusubst{\\sigma}{\\Dumod{\\D{x}}{\\theta}}]{\\vdLint[const=I,state=\\nu]}{\\vdLint[const=I,state=\\omega]}\n = \\iaccess[\\Dumod{\\D{x}}{\\applyusubst{\\sigma}{\\theta}}]{\\vdLint[const=I,state=\\nu]}\\)\n iff \\(\\iget[state]{\\vdLint[const=I,state=\\omega]} = \\modif{\\iget[state]{\\vdLint[const=I,state=\\nu]}}{\\D{x}}{\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\theta}}}\\)\n = \\(\\modif{\\iget[state]{\\vdLint[const=I,state=\\nu]}}{\\D{x}}{\\ivaluation{\\Ia}{\\theta}}\\)\n by \\rref{lem:usubst}, which is, thus, equivalent to\n \\(\\iaccessible[\\Dumod{\\D{x}}{\\theta}]{\\Ia}{\\Ita}\\).\n\n\\item\n \\(\\iaccessible[\\applyusubst{\\sigma}{\\ptest{\\psi}}]{\\vdLint[const=I,state=\\nu]}{\\vdLint[const=I,state=\\omega]}\n = \\iaccess[\\ptest{\\applyusubst{\\sigma}{\\psi}}]{\\vdLint[const=I,state=\\nu]}\\)\n iff \\(\\iget[state]{\\vdLint[const=I,state=\\omega]}=\\iget[state]{\\vdLint[const=I,state=\\nu]}\\)\n and \\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\psi}}\\),\n iff, by \\rref{lem:usubst},\n \\(\\iget[state]{\\Ita}=\\iget[state]{\\Ia}\\) and\n \\(\\imodels{\\Ia}{\\psi}\\),\n which is equivalent to\n \\(\\iaccessible[\\ptest{\\psi}]{\\Ia}{\\Ita}\\).\n\n\\item\n\\newcommand{\\iconcat[state=\\varphi(t)]{\\I}}{\\iconcat[state=\\varphi(t)]{\\vdLint[const=I,state=\\nu]}}\n\\def\\Izetaa{\\iadjointSubst{\\sigma}{\\iconcat[state=\\varphi(t)]{\\I}}}%\n\\newcommand{\\iconcat[state=\\varphi(t)]{\\Ia}}{\\iconcat[state=\\varphi(t)]{\\Ia}}\n \\(\\iaccessible[\\applyusubst{\\sigma}{\\pevolvein{\\D{x}=\\genDE{x}}{\\psi}}]{\\vdLint[const=I,state=\\nu]}{\\vdLint[const=I,state=\\omega]}\n = \\iaccess[\\pevolvein{\\D{x}=\\applyusubst{\\sigma}{\\genDE{x}}}\n 
{\\applyusubst{\\sigma}{\\psi}}]{\\vdLint[const=I,state=\\nu]}\\)\n (provided $\\sigma$ $\\{x,\\D{x}\\}$-admissible for $\\genDE{x},\\psi$)\n iff \\(\\mexists{\\varphi:[0,T]\\to\\linterpretations{\\Sigma}{V}}\\)\n with \\(\\varphi(0)=\\iget[state]{\\vdLint[const=I,state=\\nu]}, \\varphi(T)=\\iget[state]{\\vdLint[const=I,state=\\omega]}\\) and for all $t\\geq0$:\n \\(\\D{\\varphi}(t) = \\ivaluation{\\iconcat[state=\\varphi(t)]{\\I}}{\\applyusubst{\\sigma}{\\genDE{x}}}\n = \\ivaluation{\\Izetaa}{\\genDE{x}}\\) by \\rref{lem:usubst-term}\n as well as\n \\(\\imodels{\\iconcat[state=\\varphi(t)]{\\I}}{\\applyusubst{\\sigma}{\\psi}}\\),\n which, by \\rref{lem:usubst}, is equivalent to\n \\(\\imodels{\\Izetaa}{\\psi}\\).\n \n Also\n \\(\\iaccessible[\\pevolvein{\\D{x}=\\genDE{x}}{\\psi}]{\\Ia}{\\Ita}\\)\n iff \\(\\mexists{\\varphi:[0,T]\\to\\linterpretations{\\Sigma}{V}}\\)\n with \\(\\varphi(0)=\\iget[state]{\\vdLint[const=I,state=\\nu]}, \\varphi(T)=\\iget[state]{\\vdLint[const=I,state=\\omega]}\\) and for all $t\\geq0$:\n \\(\\D{\\varphi}(t) = \\ivaluation{\\iconcat[state=\\varphi(t)]{\\Ia}}{\\genDE{x}}\\)\n and\n \\(\\imodels{\\iconcat[state=\\varphi(t)]{\\Ia}}{\\psi}\\).\n Finally,\n \\(\\ivalues{\\iconcat[state=\\varphi(t)]{\\Ia}}{\\genDE{x}}=\\ivalues{\\Izetaa}{\\genDE{x}}\\) and\n \\(\\imodel{\\Izetaa}{\\psi}=\\imodel{\\iconcat[state=\\varphi(t)]{\\Ia}}{\\psi}\\)\n by \\rref{cor:adjointUsubst}\n since $\\sigma$ is $\\{x,\\D{x}\\}$-admissible for $\\genDE{x},\\psi$ and \\(\\iget[state]{\\vdLint[const=I,state=\\nu]}=\\iget[state]{\\iconcat[state=\\varphi(t)]{\\Ia}}\\) on $\\scomplement{\\boundvars{\\pevolvein{\\D{x}=\\genDE{x}}{\\psi}}}=\\scomplement{\\{x,\\D{x}\\}}$ by \\rref{lem:bound}.\n \n\\item \n \\(\\iaccessible[\\applyusubst{\\sigma}{\\pchoice{\\alpha}{\\beta}}]{\\vdLint[const=I,state=\\nu]}{\\vdLint[const=I,state=\\omega]}\n = \\iaccess[\\pchoice{\\applyusubst{\\sigma}{\\alpha}}{\\applyusubst{\\sigma}{\\beta}}]{\\vdLint[const=I,state=\\nu]}\\)\n = 
\\(\\iaccess[\\applyusubst{\\sigma}{\\alpha}]{\\vdLint[const=I,state=\\nu]} \\cup \\iaccess[\\applyusubst{\\sigma}{\\beta}]{\\vdLint[const=I,state=\\nu]}\\),\n which, by induction hypothesis, is equivalent to\n \\(\\iaccessible[\\alpha]{\\Ia}{\\Ita}\\) or \\(\\iaccessible[\\beta]{\\Ia}{\\Ita}\\),\n which is equivalent to\n \\(\\iaccessible[\\alpha]{\\Ia}{\\Ita} \\cup \\iaccess[\\beta]{\\Ia} = \\iaccess[\\pchoice{\\alpha}{\\beta}]{\\Ia}\\).\n \n\n\\item\n{\\newcommand{\\iconcat[state=\\mu]{\\I}}{\\iconcat[state=\\mu]{\\vdLint[const=I,state=\\nu]}}%\n\\newcommand{\\Iza}{\\iadjointSubst{\\sigma}{\\iconcat[state=\\mu]{\\I}}}%\n\\newcommand{\\Iaz}{\\iconcat[state=\\mu]{\\Ia}}%\n \\(\\iaccessible[\\applyusubst{\\sigma}{\\alpha;\\beta}]{\\vdLint[const=I,state=\\nu]}{\\vdLint[const=I,state=\\omega]}\n = \\iaccess[\\applyusubst{\\sigma}{\\alpha}; \\applyusubst{\\sigma}{\\beta}]{\\vdLint[const=I,state=\\nu]}\\)\n = \\(\\iaccess[\\applyusubst{\\sigma}{\\alpha}]{\\vdLint[const=I,state=\\nu]} \\compose \\iaccess[\\applyusubst{\\sigma}{\\beta}]{\\vdLint[const=I,state=\\nu]}\\)\n (provided $\\sigma$ is $\\boundvars{\\applyusubst{\\sigma}{\\alpha}}$-admissible for $\\beta$)\n iff there is a $\\iget[state]{\\iconcat[state=\\mu]{\\I}}$ such that\n \\(\\iaccessible[\\applyusubst{\\sigma}{\\alpha}]{\\vdLint[const=I,state=\\nu]}{\\iconcat[state=\\mu]{\\I}}\\) and \\(\\iaccessible[\\applyusubst{\\sigma}{\\beta}]{\\iconcat[state=\\mu]{\\I}}{\\vdLint[const=I,state=\\omega]}\\),\n which, by induction hypothesis, is equivalent to\n \\(\\iaccessible[\\alpha]{\\Ia}{\\Iza}\\) and \\(\\iaccessible[\\beta]{\\Iza}{\\Ita}\\).\n Yet, \\(\\iaccess[\\beta]{\\Iza}=\\iaccess[\\beta]{\\Ia}\\)\n by \\rref{cor:adjointUsubst}, because $\\sigma$ is $\\boundvars{\\applyusubst{\\sigma}{\\alpha}}$-admissible for $\\beta$ and \\(\\iget[state]{\\vdLint[const=I,state=\\nu]}=\\iget[state]{\\vdLint[const=I,state=\\omega]}\\) on $\\scomplement{\\boundvars{\\applyusubst{\\sigma}{\\alpha}}}$ by \\rref{lem:bound} since 
\\(\\iaccessible[\\applyusubst{\\sigma}{\\alpha}]{\\vdLint[const=I,state=\\nu]}{\\iconcat[state=\\mu]{\\I}}\\).\n Finally,\n \\(\\iaccessible[\\alpha]{\\Ia}{\\Iaz}\\) and \\(\\iaccessible[\\beta]{\\Iaz}{\\Ita}\\) for some $\\iget[state]{\\Iaz}$\n is equivalent to \\(\\iaccessible[\\alpha;\\beta]{\\Ia}{\\Ita}\\).\n\n}\n\n\\item\n\\newcommand{\\iconcat[state=\\mu]{\\I}}[1][]{\\iconcat[state=\\nu_{#1}]{\\vdLint[const=I,state=\\nu]}}%\n\\newcommand{\\Iza}[1][]{\\iadjointSubst{\\sigma}{\\iconcat[state=\\mu]{\\I}[#1]}}%\n\\newcommand{\\Iaz}[1][]{\\iconcat[state=\\nu_{#1}]{\\Ia}}%\n \\(\\iaccessible[\\applyusubst{\\sigma}{\\prepeat{\\alpha}}]{\\vdLint[const=I,state=\\nu]}{\\vdLint[const=I,state=\\omega]}\n = \\iaccess[\\prepeat{(\\applyusubst{\\sigma}{\\alpha})}]{\\vdLint[const=I,state=\\nu]}\n = \\closureTransitive{\\big(\\iaccess[\\applyusubst{\\sigma}{\\alpha}]{\\vdLint[const=I,state=\\nu]}\\big)}\n = \\cupfold_{n\\in\\naturals} (\\iaccess[\\applyusubst{\\sigma}{\\alpha}]{\\vdLint[const=I,state=\\nu]})^n\n \\)\n (provided $\\sigma$ is $\\boundvars{\\applyusubst{\\sigma}{\\alpha}}$-admissible for $\\alpha$)\n iff there are $n\\in\\naturals$ and \\(\\iget[state]{\\iconcat[state=\\mu]{\\I}[0]}=\\iget[state]{\\Ia},\\iget[state]{\\iconcat[state=\\mu]{\\I}[1]},\\dots,\\iget[state]{\\iconcat[state=\\mu]{\\I}[n]}=\\iget[state]{\\Ita}\\) such that\n \\(\\iaccessible[\\applyusubst{\\sigma}{\\alpha}]{\\iconcat[state=\\mu]{\\I}[i]}{\\iconcat[state=\\mu]{\\I}[i+1]}\\) for all $i\\usarg}}}\\):\n{\\footnotesize\n\\[\n\\hspace{-1cm}\n\\linfer[US]\n{\\dbox{\\pupdate{\\umod{x}{f}}}{p(x)} \\lbisubjunct p(f)}\n{\\dbox{x:=x^2}{\\dbox{(\\pchoice{y:=y{+}1}{\\prepeat{z:=x{+}z}});z:=x{+}yz}{y{>}x}} \\lbisubjunct\n\\dbox{(\\pchoice{y:=y{+}1}{\\prepeat{z:=x^2{+}z}});z:=x^2{+}yz}{y{>}x^2}\n}\n\\]\n}%\nIt is soundness-critical that \\irref{US} clashes when trying to instantiate $p$ in \\irref{vacuousall} with a formula that mentions the bound variable $x$:\n\\[\n\\linfer[clash]\n{p \\limply 
\lforall{x}{p}}\n{x\geq0 \limply \lforall{x}{x\geq0}}\n\qquad\n\usubstlist{\usubstmod{p}{x\geq0}}\n\]\nIt is soundness-critical that \irref{US} clashes when substituting $p$ in the vacuous program axiom \irref{V} with a formula with a free occurrence of a variable bound by $a$:\n\[\n\linfer[clash]\n{p \limply \dbox{a}{p}}\n{x\geq0 \limply \dbox{x:=x-1}{x\geq0}}\n\qquad\n\usubstlist{\usubstmod{a}{x:=x-1},\usubstmod{p}{x\geq0}}\n\]\nG\"odel's generalization rule \irref{G} uses $p(\bar{x})$ instead of $p$ from \irref{V}, and so allows the proof:\n\[\n\linfer[US]\n{(-x)^2\geq0}\n{\dbox{x:=x-1}{(-x)^2\geq0}}\n\]\n\n\noindent\nWith \(\bar{x}=(x,y)\) and\n\(\usubstlist{\usubstmod{a}{x:=x+1},\usubstmod{b}{x:=0;y:=0},\usubstmod{p(\bar{x})}{x\geq y}}\), \irref{US} derives:%\n\vspace{-\baselineskip}\n\begin{sequentdeduction}\n \linfer[US]\n {\linfer[choiceb]\n {\lclose}\n {\dbox{\pchoice{a}{b}}{p(\bar{x})} \lbisubjunct \dbox{a}{p(\bar{x})} \land \dbox{b}{p(\bar{x})}}\n }\n {\dbox{\pchoice{x:=x+1}{(x:=0;y:=0)}}{x\geq y} \lbisubjunct \dbox{x:=x+1}{x\geq y} \land \dbox{x:=0;y:=0}{x\geq y}}\n\end{sequentdeduction}\n\n\noindent\nWith \(\bar{x}=(x,y)\) and\n\(\usubstlist{\usubstmod{a}{\pchoice{x:=x+1}{y:=0}},\usubstmod{b}{y:=y+1},\usubstmod{p(\bar{x})}{x\geq y}}\), \irref{US} derives:\n\vspace{-\baselineskip}\n\begin{sequentdeduction}\n\hspace{-0.5cm}\n \linfer[US]\n {\linfer[composeb]\n {\lclose}\n {\dbox{a;b}{p(\bar{x})} \lbisubjunct \dbox{a}{\dbox{b}{p(\bar{x})}}}\n }\n {\dbox{(\pchoice{x:=x+1}{y:=0});y:=y+1}{x\geq y} \lbisubjunct \dbox{\pchoice{x:=x+1}{y:=0}}{\dbox{y:=y+1}{x\geq y}}}\n\end{sequentdeduction}\n\nNot all axioms fit into the uniform substitution framework.\nThe Barcan axiom was used in a completeness proof for the Hilbert-type calculus for differential dynamic logic \cite{DBLP:conf\/lics\/Platzer12b} (but not in the completeness proof for its sequent calculus 
\\cite{DBLP:journals\/jar\/Platzer08}):\n\\[\n \\cinferenceRule[B|B]{Barcan$\\dbox{}{}\\forall{}$} %\n {\\linferenceRule[impl]\n {\\lforall{x}{\\dbox{\\alpha}{p(x)}}}\n {\\dbox{\\alpha}{\\lforall{x}{p(x)}}}\n }{\\m{x\\not\\in\\alpha}}\n\\]\n\\irref{B} is unsound without the restriction \\(x\\not\\in\\alpha\\), though, so that the following would be an unsound axiom:\n\\begin{equation}\n{\\lforall{x}{\\dbox{a}{p(x)}}\\limply{\\dbox{a}{\\lforall{x}{p(x)}}}}\n\\label{eq:unsound-B-attempt}\n\\end{equation}\nbecause $x\\not\\in a$ cannot be enforced for program constants, since their effect might very well depend on the value of $x$ or since they might write to $x$.\nIn \\rref{eq:unsound-B-attempt}, $x$ cannot be written by $a$ without violating soundness:\n\\[\n\\linfer[unsound]\n {\\lforall{x}{\\dbox{a}{p(x)}}\\limply{\\dbox{a}{\\lforall{x}{p(x)}}}}\n {\\lforall{x}{\\dbox{x:=0}{x\\geq0}}\\limply{\\dbox{x:=0}{\\lforall{x}{x\\geq0}}}}\n\\qquad\n\\usubstlist{\\usubstmod{a}{x:=0},\\usubstmod{p(\\usarg)}{\\usarg\\geq0}}\n\\]\nnor can $x$ be read by $a$ in \\rref{eq:unsound-B-attempt} without violating soundness:\n\\[\n\\linfer[unsound]\n {\\lforall{x}{\\dbox{a}{p(x)}}\\limply{\\dbox{a}{\\lforall{x}{p(x)}}}}\n {\\lforall{x}{\\dbox{\\ptest{y=x^2}}{y=x^2}}\\limply{\\dbox{\\ptest{y=x^2}}{\\lforall{x}{y=x^2}}}}\n\\qquad\n\\usubstlist{\\usubstmod{a}{\\ptest{y=x^2}},\\usubstmod{p(\\usarg)}{y=\\usarg^2}}\n\\]\n\nThus, the completeness proof for differential dynamic logic from prior work \\cite{DBLP:conf\/lics\/Platzer12b} does not directly carry over.\nA more general completeness result for differential game logic \\cite{DBLP:journals\/corr\/Platzer14:dGL} implies, however, that \\irref{B} is unnecessary for completeness.\n\\fi\n\n\n\\section{Differential Equations and Differential Axioms} \\label{sec:differential}\n\n\\rref{sec:dL-axioms} leverages the first-order features of \\dL and \\irref{US} to obtain a finite list of axioms without side-conditions.\nThey lack axioms for 
differential equations, though.\nClassical calculi for \\dL have axioms for replacing differential equations with a quantifier for time $t\\geq0$ and an assignment for their solutions $\\solf(t)$ \\cite{DBLP:journals\/jar\/Platzer08,DBLP:conf\/lics\/Platzer12b}.\nBesides being limited to simple differential equations, such axioms have the inherent side-condition ``if $\\solf(t)$ is a solution of the differential equation \\(\\pevolve{\\D{x}=\\genDE{x}}\\) with symbolic initial value $x$''.\nSuch a side-condition is more difficult than occurrence and read\/write conditions, but equally soundness-critical.\nThis section leverages \\irref{US} and the new differential forms in \\dL to obtain a logically internalized version of differential invariants and related proof rules for differential equations \\cite{DBLP:journals\/logcom\/Platzer10,DBLP:journals\/lmcs\/Platzer12} as axioms (without schema variables and free of side-conditions).\nThese axioms can prove properties of more general ``unsolvable'' differential equations. 
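For instance (a simple illustration of the point, not an example taken from the figure), the differential equation \(\pevolve{\D{x}=x^2+1}\) only has transcendental solutions involving \(\tan\), which are outside the reach of solution-based axioms, yet \irref{DI} proves\n\[\nx\geq0 \limply \dbox{\pevolve{\D{x}=x^2+1}}{x\geq0}\n\]\ndirectly, because, after selecting the vector field by \irref{DE} and substituting by \irref{Dassignb}, the differential \(\der{x\geq0}\) reduces to the valid arithmetic formula \(x^2+1\geq0\).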
They can also prove all properties of differential equations that can be proved with solutions \\cite{DBLP:journals\/lmcs\/Platzer12} while guaranteeing correctness of the solution as part of the proof.\n\n\n\\subsection{Differentials: Invariants, Cuts, Effects, and Ghosts} \\label{sec:diffind}\n\nFigure~\\ref{fig:dL-ODE} shows differential equation axioms for differential weakening (\\irref{DW}), differential cuts (\\irref{DC}), differential effect (\\irref{DE}), differential invariants (\\irref{DI}) \\cite{DBLP:journals\/logcom\/Platzer10}, differential ghosts (\\irref{DG}) \\cite{DBLP:journals\/lmcs\/Platzer12}, solutions (\\irref{DS}), differential substitutions (\\irref{Dassignb}), and differential axioms (\\irref{Dplus+Dtimes+Dcompose}).\nAxioms identifying \\(\\der{x}=\\D{x}\\) for variables $x\\in\\mathcal{V}$ and \\(\\der{f}=0\\) for functions $f$ and number literals of arity 0 are used implicitly.\nSome axioms use reverse implications \\((\\phi\\lylpmi\\psi)\\mequiv(\\psi\\limply\\phi)\\) for emphasis.%\n\\begin{figure}[tbhp]\n \\begin{calculuscollections}{\\columnwidth}\n \\renewcommand*{\\irrulename}[1]{\\text{#1}}%\n \\iflongversion\\else\\linferenceRulevskipamount=0.4em\\fi\n \\begin{calculus}\n \\cinferenceRule[DW|DW]{differential evolution domain} %\n {\\dbox{\\pevolvein{\\D{x}=f(x)}{q(x)}}{q(x)}}\n {}\n \\cinferenceRule[DC|DC]{differential cut} %\n {\\linferenceRule[lpmi]\n {\\big(\\dbox{\\pevolvein{\\D{x}=f(x)}{q(x)}}{p(x)} \\lbisubjunct \\dbox{\\pevolvein{\\D{x}=f(x)}{q(x)\\land r(x)}}{p(x)}\\big)}\n {\\dbox{\\pevolvein{\\D{x}=f(x)}{q(x)}}{r(x)}}\n }\n {}%\n \\cinferenceRule[DE|DE]{differential effect} %\n {\\linferenceRule[viuqe]\n {\\dbox{\\pevolvein{\\D{x}=f(x)}{q(x)}}{p(x,\\D{x})}}\n {\\dbox{\\pevolvein{\\D{x}=f(x)}{q(x)}}{\\dbox{\\Dupdate{\\Dumod{\\D{x}}{f(x)}}}{p(x,\\D{x})}}}\n }\n {}%\n \\cinferenceRule[DI|DI]{differential induction} %\n {\\linferenceRule[lpmi]\n {\\dbox{\\pevolvein{\\D{x}=f(x)}{q(x)}}{p(x)}}\n {\\big(q(x)\\limply 
p(x)\\land\\dbox{\\pevolvein{\\D{x}=f(x)}{q(x)}}{\\der{p(x)}}}\\big)\n }\n {}%\n \\cinferenceRule[DG|DG]{differential ghost variables} %\n {\\linferenceRule[viuqe]\n {\\dbox{\\pevolvein{\\D{x}=f(x)}{q(x)}}{p(x)}}\n {\\lexists{y}{\\dbox{\\pevolvein{\\D{x}=f(x)\\syssep\\D{y}=a(x)y+b(x)}{q(x)}}{p(x)}}}\n }\n {}%\n \\cinferenceRule[DS|DS]{(constant) differential equation solution} %\n {\\linferenceRule[viuqe]\n {\\dbox{\\pevolvein{\\D{x}=f}{q(x)}}{p(x)}}\n {\\lforall{t{\\geq}0}{\\big((\\lforall{0{\\leq}s{\\leq}t}{q(x+f\\itimes s)}) \\limply \\dbox{\\pupdate{\\pumod{x}{x+f\\itimes t}}}{p(x)}\\big)}}\n %\n }\n {}%\n \\cinferenceRule[Dassignb|$\\dibox{':=}$]{differential assignment}\n {\\linferenceRule[equiv]\n {p(f)}\n {\\dbox{\\Dupdate{\\Dumod{\\D{x}}{f}}}{p(\\D{x})}}\n }\n {}%\n \\cinferenceRule[Dplus|$+'$]{derive sum}\n {\\linferenceRule[eq]\n {\\der{f(\\bar{x})}+\\der{g(\\bar{x})}}\n {\\der{f(\\bar{x})+g(\\bar{x})}}\n }\n {}\n \\cinferenceRule[Dtimes|$\\cdot'$]{derive product}\n {\\linferenceRule[eq]\n {\\der{f(\\bar{x})}\\cdot g(\\bar{x})+f(\\bar{x})\\cdot\\der{g(\\bar{x})}}\n {\\der{f(\\bar{x})\\cdot g(\\bar{x})}}\n }\n {}\n \\cinferenceRule[Dcompose|$\\compose'$]{derive composition}\n {\n \\dbox{\\pupdate{\\pumod{y}{g(x)}}}{\\dbox{\\Dupdate{\\Dumod{\\D{y}}{1}}}\n {\\big( \\der{f(g(x))} = \\der{f(y)}\\stimes\\der{g(x)}\\big)}}\n }\n {}%\n \\end{calculus}%\n\\end{calculuscollections}%\n \\caption{Differential equation axioms and differential axioms}\n \\label{fig:dL-ODE}\n\\end{figure}\n\n\\emph{Differential weakening} axiom \\irref{DW} internalizes that differential equations can never leave their evolution domain $q(x)$.\n\\irref{DW} implies\\footnote{%\n\\(\\dbox{\\pevolvein{\\D{x}=f(x)}{q(x)}}{(q(x)\\limply p(x))} \\limply \\dbox{\\pevolvein{\\D{x}=f(x)}{q(x)}}{p(x)}\\) derives by \\irref{K} from \\irref{DW}.\nThe converse\n\\(\\dbox{\\pevolvein{\\D{x}=f(x)}{q(x)}}{p(x)} \\limply \\dbox{\\pevolvein{\\D{x}=f(x)}{q(x)}}{(q(x)\\limply p(x))}\\)\nderives by 
\\irref{K} since \\irref{G} derives \\(\\dbox{\\pevolvein{\\D{x}=f(x)}{q(x)}}{\\big(p(x)\\limply(q(x)\\limply p(x))\\big)}\\).\n}\n\\({\\dbox{\\pevolvein{\\D{x}=f(x)}{q(x)}}{p(x)}} \\lbisubjunct\n {\\dbox{\\pevolvein{\\D{x}=f(x)}{q(x)}}{(q(x)\\limply p(x))}} \\irlabel{diffweaken|DW}\\)\nalso called \\irref{diffweaken},\nwhose (right) assumption is best proved by \\irref{G}\\iflongversion yielding premise \\(q(x)\\limply p(x)\\)\\fi.\nThe \\emph{differential cut} axiom \\irref{DC} is a cut for differential equations.\nIt internalizes that differential equations staying in $r(x)$ stay in $p(x)$ iff $p(x)$ always holds after the differential equation that is restricted to the smaller evolution domain \\(\\pevolvein{}{q(x)\\land r(x)}\\).\n\\irref{DC} is a differential variant of modal modus ponens \\irref{K}.\n\n\\emph{Differential effect} axiom \\irref{DE} internalizes that the effect on differential symbols along a differential equation is a differential assignment assigning the right-hand side $f(x)$ to the left-hand side $\\D{x}$.\nAxiom \\irref{DI} internalizes \\emph{differential invariants},\ni.e.\\ that a differential equation stays in $p(x)$ if it starts in $p(x)$ and if its differential $\\der{p(x)}$ always holds after the differential equation \\(\\pevolvein{\\D{x}=f(x)}{q(x)}\\).\nThe differential equation also vacuously stays in $p(x)$ if it starts outside $q(x)$, since it is stuck then.\nThe (right) assumption of \\irref{DI} is best proved by \\irref{DE} to select the appropriate vector field \\(\\D{x}=f(x)\\) for the differential $\\der{p(x)}$\nand a subsequent \\irref{diffweaken+G} to make the evolution domain constraint $q(x)$ available as an assumption.\nFor simplicity, this paper focuses on atomic postconditions for which \\(\\der{\\theta\\geq\\eta} \\mequiv \\der{\\theta>\\eta} \\mequiv \\der{\\theta}\\geq\\der{\\eta}\\)\nand \\(\\der{\\theta=\\eta} \\mequiv \\der{\\theta\\neq\\eta} \\mequiv \\der{\\theta}=\\der{\\eta}\\), etc.\nAxiom \\irref{DG} 
internalizes \\emph{differential ghosts},\ni.e. that additional differential equations can be added if their solution exists long enough.\nAxiom \\irref{DS} solves differential equations with the help of \\irref{DG+DC}.\nVectorial generalizations to systems of differential equations are possible for the axioms in \\rref{fig:dL-ODE}.\n\nThe following proof proves a property of a differential equation using differential invariants without having to solve that differential equation.\nOne use of \\irref{US} is shown explicitly, other uses of \\irref{US} are similar for \\irref{DI+DE+Dassignb} instances.\n\\vspace{-\\baselineskip}\n{\\footnotesize\n\\renewcommand{\\linferPremissSeparation}{~}%\n\\let\\orgcdot\\cdot%\n\\def\\cdot{{\\orgcdot}}%\n\\begin{sequentdeduction}[array]\n\\linfer[DI]\n{\\linfer[DE]\n {\\linfer[CE]%\n {\\linfer[G]\n {\\linfer[Dassignb]\n {\\linfer[qear]\n {\\lclose}\n {\\lsequent{}{x^3\\cdot x + x\\cdot x^3\\geq0}}\n }%\n {\\lsequent{}{\\dbox{\\Dupdate{\\Dumod{\\D{x}}{x^3}}}{\\D{x}\\cdot x+x\\cdot\\D{x}\\geq0}}}\n %\n }%\n {\\lsequent{}{\\dbox{\\pevolve{\\D{x}=x^3}}{\\dbox{\\Dupdate{\\Dumod{\\D{x}}{x^3}}}{\\D{x}\\cdot x+x\\cdot\\D{x}\\geq0}}}}\n !\n \\linfer%\n {\\linfer[CQ] %\n {\\linfer%\n {\\linfer[US]\n {\\linfer[Dtimes]\n {\\lclose}\n {\\lseqalign{\\der{f(\\bar{x})\\cdot g(\\bar{x})}} {= \\der{f(\\bar{x})}\\cdot g(\\bar{x}) + f(\\bar{x})\\cdot\\der{g(\\bar{x})}}}\n }\n {\\lseqalign{\\der{x\\cdot x}} {= \\der{x}\\cdot x + x\\cdot\\der{x}}}\n }\n {\\lseqalign{\\der{x\\cdot x}} {= \\D{x}\\cdot x + x\\cdot\\D{x}}}\n }\n {\\lseqalign{\\der{x\\cdot x}\\geq0}{\\lbisubjunct\\D{x}\\cdot x+x\\cdot\\D{x}\\geq0}}\n }\n {\\lseqalign{\\der{x\\cdot x\\geq1}}{\\lbisubjunct\\D{x}\\cdot x+x\\cdot\\D{x}\\geq0}}\n }%\n {\\lsequent{}{\\dbox{\\pevolve{\\D{x}=x^3}}{\\dbox{\\Dupdate{\\Dumod{\\D{x}}{x^3}}}{\\der{x\\cdot x\\geq1}}}}}\n }%\n {\\lsequent{}{\\dbox{\\pevolve{\\D{x}=x^3}}{\\der{x\\cdot x\\geq1}}}}\n}%\n{\\lsequent{x\\cdot x\\geq1} 
{\\dbox{\\pevolve{\\D{x}=x^3}}{x\\cdot x\\geq1}}}\n\\end{sequentdeduction}\n}%\nPrevious calculi \\cite{DBLP:journals\/logcom\/Platzer10,DBLP:journals\/lmcs\/Platzer12} collapse this proof into a single proof step with complicated built-in operator implementations that silently perform the same reasoning in an opaque way.\nThe approach presented here combines separate axioms to achieve the same effect in a modular way, because they have individual responsibilities internalizing separate logical reasoning principles in \\emph{differential-form} \\dL.\n\\iflongversion Tactics combining the axioms as indicated make the axiomatic way equally convenient.\nClever cuts or \\irref{MP} enable proofs in which the main argument remains as fast \\cite{DBLP:journals\/logcom\/Platzer10,DBLP:journals\/lmcs\/Platzer12} while the additional premises subsequently check soundness.\nBoth \\irref{CQ} and also \\irref{CE} simplify the proof substantially but are not necessary:\n\\vspace{-\\baselineskip}\n{\\footnotesize\n\\begin{sequentdeduction}%\n\\hspace*{-0.7cm}\n\\linfer[MP]\n {\\linfer\n {\\lclose}\n {\\sdots\\limply(\\der{x\\cdot x}\\geq0 \\lbisubjunct \\D{x}\\cdot x + x\\cdot\\D{x}\\geq0)}\n &\\linfer%\n {\\linfer[US]\n {\\linfer[Dtimes]\n {\\lclose}\n {\\der{f(\\bar{x})\\cdot g(\\bar{x})} = \\der{f(\\bar{x})}\\cdot g(\\bar{x}) + f(\\bar{x})\\cdot\\der{g(\\bar{x})}}\n }\n {\\der{x\\cdot x} = \\der{x}\\cdot x + x\\cdot\\der{x}}\n }\n {\\der{x\\cdot x} = \\D{x}\\cdot x + x\\cdot\\D{x}}\n }\n {\\linfer[G]\n {\\der{x\\cdot x}\\geq0 \\lbisubjunct \\D{x}\\cdot x + x\\cdot\\D{x}\\geq0}\n {\\linfer[K]\n {\\dbox{\\Dupdate{\\Dumod{\\D{x}}{x^3}}}{(\\der{x\\cdot x}\\geq0 \\lbisubjunct \\D{x}\\cdot x + x\\cdot\\D{x}\\geq0)}}\n {\\dbox{\\Dupdate{\\Dumod{\\D{x}}{x^3}}}{\\der{x\\cdot x}\\geq0} \\lbisubjunct \\dbox{\\Dupdate{\\Dumod{\\D{x}}{x^3}}}{\\D{x}\\cdot x + x\\cdot\\D{x}\\geq0}}\n 
}\n}\n\\end{sequentdeduction}\n\\vspace{-\\baselineskip}\n\\begin{sequentdeduction}[array]\n\\linfer[DI]\n{\\linfer[DE]\n {\\linfer[G]\n {\\linfer[MP]\n {\\text{use proof above}\n !\\linfer[Dassignb]\n {\\linfer[qear]\n {\\lclose}\n {\\lsequent{}{x^3\\cdot x + x\\cdot x^3\\geq0}}\n }%\n {\\lsequent{}{\\dbox{\\Dupdate{\\Dumod{\\D{x}}{x^3}}}{\\D{x}\\cdot x + x\\cdot\\D{x}\\geq0}}}\n %\n }%\n {\\lsequent{}{\\dbox{\\Dupdate{\\Dumod{\\D{x}}{x^3}}}{\\der{x\\cdot x}\\geq0}}}\n }%\n {\\lsequent{}{\\dbox{\\pevolve{\\D{x}=x^3}}{\\dbox{\\Dupdate{\\Dumod{\\D{x}}{x^3}}}{\\der{x\\cdot x}\\geq0}}}}\n }%\n {\\lsequent{}{\\dbox{\\pevolve{\\D{x}=x^3}}{\\der{x\\cdot x}\\geq0}}}\n}%\n{\\lsequent{x\\cdot x\\geq1} {\\dbox{\\pevolve{\\D{x}=x^3}}{x\\cdot x\\geq1}}}\n\\end{sequentdeduction}\n}%\nThe proof uses (implicit) cuts with equivalences predicting the outcome of the right branch, which is simple but inconvenient.\n\\newcommand{\\mydiffcond}[1][x,\\D{x}]{j(#1)}%\nA constructive direct proof uses a free function symbol $\\mydiffcond$, instead, which is ultimately instantiated by \\irref{US} as in \\rref{thm:dL-sound}.\n\nThe same technique is helpful for invariant search, in which case a free predicate symbol $p(\\bar{x})$ is used and instantiated by \\irref{US} lazily when the proof closes.\n{\\footnotesize\n\\begin{sequentdeduction}[array]\n\\linfer[DI]\n{\\linfer[DE]\n {\\linfer[CE]%\n {\\linfer[G]\n {\\linfer[Dassignb]\n {\\linfer%\n {\\linfer[qear]\n {\\lclose}\n {\\lsequent{}{x^3\\cdot x + x\\cdot x^3\\geq0}}\n }\n {\\lsequent{}\\mydiffcond[x,x^3]\\geq0}\n }%\n {\\lsequent{}{\\dbox{\\Dupdate{\\Dumod{\\D{x}}{x^3}}}{\\mydiffcond\\geq0}}}\n %\n }%\n {\\lsequent{}{\\dbox{\\pevolve{\\D{x}=x^3}}{\\dbox{\\Dupdate{\\Dumod{\\D{x}}{x^3}}}{\\mydiffcond\\geq0}}}}\n !\n \\linfer%\n {\\linfer[CQ] %\n {\\linfer%\n {\\linfer%\n {\\linfer[US]\n {\\linfer[Dtimes]\n {\\lclose}\n {\\lseqalign{\\der{f(\\bar{x})\\cdot g(\\bar{x})}} {= \\der{f(\\bar{x})}\\cdot g(\\bar{x}) + 
f(\\bar{x})\\cdot\\der{g(\\bar{x})}}}\n }\n {\\lseqalign{\\der{x\\cdot x}} {= \\der{x}\\cdot x + x\\cdot\\der{x}}}\n }\n {\\lseqalign{\\der{x\\cdot x}} {= \\D{x} \\cdot x + x\\cdot\\D{x}}}\n }\n {\\lseqalign{\\der{x\\cdot x}} {= \\mydiffcond}}\n }\n {\\lseqalign{\\der{x\\cdot x}\\geq0}{\\lbisubjunct\\mydiffcond\\geq0}}\n }\n {\\lseqalign{\\der{x\\cdot x\\geq1}}{\\lbisubjunct\\mydiffcond\\geq0}}\n }%\n {\\lsequent{}{\\dbox{\\pevolve{\\D{x}=x^3}}{\\dbox{\\Dupdate{\\Dumod{\\D{x}}{x^3}}}{\\der{x\\cdot x\\geq1}}}}}\n }%\n {\\lsequent{}{\\dbox{\\pevolve{\\D{x}=x^3}}{\\der{x\\cdot x\\geq1}}}}\n}%\n{\\lsequent{x\\cdot x\\geq1} {\\dbox{\\pevolve{\\D{x}=x^3}}{x\\cdot x\\geq1}}}\n\\end{sequentdeduction}\n}%\n\\fi\n\n\\iflongversion\nProofs based entirely on equivalences for solving differential equations involve \\irref{DG} for introducing a time variable, \\irref{DC} to cut the solutions in, \\irref{DW} to export the solution to the postcondition, inverse \\irref{DC} to remove the evolution domain constraints again, inverse \\irref{DG} to remove the original differential equations, and finally \\irref{DS} to solve the differential equation for time:\n\\def\\prem{\\phi}%\n{\\footnotesize\n\\begin{sequentdeduction}[array]\n\\linfer[DG]\n {\\linfer%\n {\\linfer[DC]\n {\\linfer[DC]\n {\\linfer[diffweaken]\n {\\linfer[G+K]%\n {\\linfer[DC]\n {\\linfer[DC]\n {\\linfer[DG]\n {\\linfer[DG]\n {\\linfer[DS]\n {\\linfer[assignb]\n {\\linfer[qear]\n {\\lclose}\n {\\lsequent{\\prem} {\\lforall{s{\\geq}0}{(x_0+\\frac{a}{2}s^2+v_0s\\geq0)}}}\n }%\n {\\lsequent{\\prem} {\\lforall{s{\\geq}0}{\\dbox{\\pupdate{\\pumod{t}{0+1s}}}{x_0+\\frac{a}{2}t^2+v_0t\\geq0}}}}\n }%\n {\\lsequent{\\prem} {\\dbox{\\pevolve{\\D{t}=1}}{x_0+\\frac{a}{2}t^2+v_0t\\geq0}}}\n }%\n {\\lsequent{\\prem} {\\dbox{\\pevolve{\\D{v}=a\\syssep\\D{t}=1}}{x_0+\\frac{a}{2}t^2+v_0t\\geq0}}}\n }%\n {\\lsequent{\\prem} {\\dbox{\\pevolve{\\D{x}=v\\syssep\\D{v}=a\\syssep\\D{t}=1}}{x_0+\\frac{a}{2}t^2+v_0t\\geq0}}}\n }%\n 
{\\lsequent{\\prem} {\\dbox{\\pevolvein{\\D{x}=v\\syssep\\D{v}=a\\syssep\\D{t}=1}{v=v_0+at}}{x_0+\\frac{a}{2}t^2+v_0t\\geq0}}}\n }%\n {\\lsequent{\\prem} {\\dbox{\\pevolvein{\\D{x}=v\\syssep\\D{v}=a\\syssep\\D{t}=1}{v=v_0+at\\land x=x_0+\\frac{a}{2}t^2+v_0t}}{x_0+\\frac{a}{2}t^2+v_0t\\geq0}}} \n }%\n {\\lsequent{\\prem} {\\dbox{\\pevolvein{\\D{x}=v\\syssep\\D{v}=a\\syssep\\D{t}=1}{v=v_0+at\\land x=x_0+\\frac{a}{2}t^2+v_0t}}{(x{=}x_0{+}\\frac{a}{2}t^2{+}v_0t\\limply x{\\geq}0)}}}\n }%\n {\\lsequent{\\prem} {\\dbox{\\pevolvein{\\D{x}=v\\syssep\\D{v}=a\\syssep\\D{t}=1}{v=v_0+at\\land x=x_0+\\frac{a}{2}t^2+v_0t}}{x\\geq0}}} \n }%\n {\\lsequent{\\prem} {\\dbox{\\pevolvein{\\D{x}=v\\syssep\\D{v}=a\\syssep\\D{t}=1}{v=v_0+at}}{x\\geq0}}} \n }%\n {\\lsequent{\\prem} {\\dbox{\\pevolve{\\D{x}=v\\syssep\\D{v}=a\\syssep\\D{t}=1}}{x\\geq0}}} \n }%\n {\\lsequent{\\prem} {\\lexists{t}{\\dbox{\\pevolve{\\D{x}=v\\syssep\\D{v}=a\\syssep\\D{t}=1}}{x\\geq0}}}}\n }%\n {\\lsequent{\\prem} {\\dbox{\\pevolve{\\D{x}=v\\syssep\\D{v}=a}}{x\\geq0}}} \n\\end{sequentdeduction}\n}%\nwhere $\\prem$ is \\({a\\geq0\\land v=v_0\\geq0 \\land x=x_0\\geq0}\\). 
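The closed-form solution used in this proof can also be cross-checked numerically. The following is a quick sanity check outside the calculus (the function name `simulate`, the Euler scheme, and the tolerances are illustrative choices of ours, not part of dL): it integrates x' = v, v' = a, compares the flow against the solution x_0 + (a/2)t^2 + v_0 t that the differential cuts introduce, and confirms that x stays nonnegative under the premise a >= 0, v = v_0 >= 0, x = x_0 >= 0.

```python
# Numerical sanity check (outside the dL calculus) for the dynamics
# x' = v, v' = a solved above. Under the premise a >= 0, v0 >= 0, x0 >= 0,
# the flow should track the differential-cut invariants v = v0 + a*t and
# x = x0 + (a/2)*t^2 + v0*t, and keep x >= 0 throughout.

def simulate(x0, v0, a, t_end, dt=1e-4):
    """Euler-integrate the ODE and return the maximum deviation from the
    closed-form solution x0 + (a/2)*t^2 + v0*t."""
    x, v, t = x0, v0, 0.0
    max_err = 0.0
    for _ in range(int(round(t_end / dt))):
        x += v * dt          # x' = v
        v += a * dt          # v' = a
        t += dt
        # invariant cut in first by DC: v = v0 + a*t (Euler is exact here,
        # since v is linear in t, so only float rounding remains)
        assert abs(v - (v0 + a * t)) < 1e-6
        # invariant cut in second by DC: x = x0 + (a/2)*t^2 + v0*t
        max_err = max(max_err, abs(x - (x0 + (a / 2) * t * t + v0 * t)))
        # postcondition x >= 0, which holds under the premise
        assert x >= -1e-9
    return max_err
```

A call such as `simulate(1.0, 0.5, 2.0, 1.0)` should return a deviation on the order of the Euler step size, consistent with the exact solution the proof cuts in.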
%\nThe existential quantifier for $t$ is instantiated by $0$, leading to $\\dbox{\\pupdate{\\pumod{t}{0}}}{}$ (suppressed in the proof for readability reasons).\nThe 4 uses of \\irref{DC} lead to 2 additional premises proving that \\(v=v_0+at\\) and then \\(x=x_0+\\frac{a}{2}t^2+v_0t\\) are differential invariants (using \\irref{DI+DE+diffweaken}).\nShortcuts using \\irref{diffweaken} are possible but the above proof generalizes to $\\ddiamond{}{}$ because it is an equivalence proof.\nThe additional premise for \\irref{DC} with \\(v=v_0+at\\) proves as follows:\n{\\footnotesize\n\\begin{sequentdeduction}[array]\n\\linfer[DI]\n{\\linfer[DE]\n {\\linfer[G]\n {\\linfer[CE]%\n {\\linfer[Dassignb]\n {\\linfer[qear]\n {\\lclose}\n {\\lsequent{}{a=0+a\\cdot1}}\n }%\n {\\lsequent{} {\\dbox{\\Dupdate{\\Dumod{\\D{v}}{a}}}{\\dbox{\\Dupdate{\\Dumod{\\D{t}}{1}}}{\\D{v}=0+a\\D{t}}}}}\n %\n !\n \\linfer%\n {\\linfer[CQ] %\n {\\linfer[Dtimes]%\n {\\linfer[US]\n {\\linfer[Dplus]\n {\\lclose}\n {\\lseqalign{\\der{f(\\bar{x})+ g(\\bar{x})}} {= \\der{f(\\bar{x})} + \\der{g(\\bar{x})}}}\n }\n {\\lseqalign{\\der{v_0+at}} {= \\der{v_0}+\\der{a t}}}\n }\n {\\lseqalign{\\der{v_0+at}} {= 0+a(\\D{t})}}\n }\n {\\lseqalign{\\D{v}=\\der{v_0+at}}{\\lbisubjunct\\D{v}=0+a\\D{t}}}\n }\n {\\lseqalign{\\der{v=v_0+at}}{\\lbisubjunct\\D{v}=0+a\\D{t}}}\n }%\n {\\lsequent{} {\\dbox{\\Dupdate{\\Dumod{\\D{v}}{a}}}{\\dbox{\\Dupdate{\\Dumod{\\D{t}}{1}}}{\\der{v=v_0+at}}}}}\n }%\n {\\lsequent{} {\\dbox{\\pevolve{\\D{x}=v\\syssep\\D{v}=a\\syssep\\D{t}=1}}{\\dbox{\\Dupdate{\\Dumod{\\D{v}}{a}}}{\\dbox{\\Dupdate{\\Dumod{\\D{t}}{1}}}{\\der{v=v_0+at}}}}}}\n }%\n {\\lsequent{} {\\dbox{\\pevolve{\\D{x}=v\\syssep\\D{v}=a\\syssep\\D{t}=1}}{\\der{v=v_0+at}}}}\n}%\n{\\lsequent{\\prem} {\\dbox{\\pevolve{\\D{x}=v\\syssep\\D{v}=a\\syssep\\D{t}=1}}{v=v_0+at}}}\n\\end{sequentdeduction}\n}\nThe additional premise for \\irref{DC} with \\(x=x_0+\\frac{a}{2}t^2+v_0t\\) proves as 
follows:\n\n{\\tiny%\n\\begin{sequentdeduction}[array]\n\\linfer[DI]\n{\\linfer[DE]\n {\\linfer[diffweaken]\n {\\linfer[G]\n {\\linfer[CE]%\n {\\linfer[Dassignb]\n {\\linfer[qear]\n {\\lclose}\n {\\lsequent{}{{v=v_0+at} \\limply v=at\\cdot1+v_0\\cdot1}}\n }%\n {\\lsequent{} {{v=v_0+at} \\limply \\dbox{\\Dupdate{\\Dumod{\\D{x}}{v}}}{\\dbox{\\Dupdate{\\Dumod{\\D{t}}{1}}}{\\D{x}=at\\D{t}+v_0\\D{t}}}}}\n %\n !\n \\linfer%\n {\\linfer[CQ] %\n {\\linfer[Dplus+Dtimes]%\n {\\linfer%\n {\\lclose}\n {\\lseqalign{2\\frac{a}{2}t\\D{t}+v_0\\D{t}} {= at\\D{t}+v_0\\D{t}}}\n }\n {\\lseqalign{\\der{x_0+\\frac{a}{2}t^2+v_0t}} {= at\\D{t}+v_0\\D{t}}}\n }\n {\\lseqalign{\\D{x}=\\der{x_0+\\frac{a}{2}t^2+v_0t}}{\\lbisubjunct\\D{x}=at\\D{t}+v_0\\D{t}}}\n }\n {\\lseqalign{\\der{x=x_0+\\frac{a}{2}t^2+v_0t}}{\\lbisubjunct\\D{x}=at\\D{t}+v_0\\D{t}}}\n }%\n {\\lsequent{} {{v=v_0+at} \\limply \\dbox{\\Dupdate{\\Dumod{\\D{x}}{v}}}{\\dbox{\\Dupdate{\\Dumod{\\D{t}}{1}}}{\\der{x=x_0+\\frac{a}{2}t^2+v_0t}}}}}\n }%\n {\\lsequent{} {\\dbox{\\pevolvein{\\D{x}=v\\syssep\\D{v}=a\\syssep\\D{t}=1}{v=v_0+at}}{({v=v_0+at}\\limply\\dbox{\\Dupdate{\\Dumod{\\D{x}}{v}}}{\\dbox{\\Dupdate{\\Dumod{\\D{t}}{1}}}{\\der{x=x_0+\\frac{a}{2}t^2+v_0t}}})}}}\n }%\n {\\lsequent{} {\\dbox{\\pevolvein{\\D{x}=v\\syssep\\D{v}=a\\syssep\\D{t}=1}{v=v_0+at}}{\\dbox{\\Dupdate{\\Dumod{\\D{x}}{v}}}{\\dbox{\\Dupdate{\\Dumod{\\D{t}}{1}}}{\\der{x=x_0+\\frac{a}{2}t^2+v_0t}}}}}}\n }%\n {\\lsequent{} {\\dbox{\\pevolvein{\\D{x}=v\\syssep\\D{v}=a\\syssep\\D{t}=1}{v=v_0+at}}{\\der{x=x_0+\\frac{a}{2}t^2+v_0t}}}}\n}%\n{\\lsequent{\\prem} {\\dbox{\\pevolvein{\\D{x}=v\\syssep\\D{v}=a\\syssep\\D{t}=1}{v=v_0+at}}{x=x_0+\\frac{a}{2}t^2+v_0t}}}\n\\end{sequentdeduction}\n}%\n\\fi\n\n\n\\subsection{Differential Substitution Lemmas}\n\nThe key insight for the soundness of \\irref{DI} is that the analytic time-derivative of the value of a term $\\eta$ along a differential equation \\(\\pevolvein{\\D{x}=\\genDE{x}}{\\psi}\\) agrees with the values of its 
differential $\\der{\\eta}$ along the vector field of that differential equation.\n\n\\begin{lemma}[Differential lemma] \\label{lem:differentialLemma}\n %\n If \\m{\\imodels{\\DALint[const=I,flow=\\varphi]}{\\D{x}=\\genDE{x}\\land\\psi}}\n holds for some flow \\m{\\iget[flow]{\\DALint[const=I,flow=\\varphi]}:[0,r]\\to\\linterpretations{\\Sigma}{V}} \nof any duration $r>0$,\n then for all $0\\leq\\zeta\\leq r$:\n \\[\n \\ivaluation{\\Iff[\\zeta]}{\\der{\\eta}}\n = \\D[t]{\\ivaluation{\\Iff[t]}{\\eta}} (\\zeta)\n \\]\n\\end{lemma}\n\\begin{proofatend}\n\\def\\Ifzm{\\imodif[state]{\\Iff[\\zeta]}{x}{X}}%\n\\newcommand{\\vdLint[const=I,state=]}{\\vdLint[const=I,state=]}\nBy chain rule \\cite[\\S3.10]{Walter:Ana2}:\n\\[\n\\D[t]{\\ivaluation{\\Iff[t]}{\\eta}} (\\zeta)\n=\n\\D{(\\ivaluation{\\vdLint[const=I,state=]}{\\eta} \\compose \\iget[flow]{\\DALint[const=I,flow=\\varphi]})}(\\zeta) = (\\gradient{\\ivaluation{\\vdLint[const=I,state=]}{\\eta}})\\big(\\iget[flow]{\\DALint[const=I,flow=\\varphi]}(\\zeta)\\big)\n\\stimes \\D{\\iget[flow]{\\DALint[const=I,flow=\\varphi]}}(\\zeta)\n= \\sum_x \\Dp[x]{\\ivaluation{\\vdLint[const=I,state=]}{\\eta}}\\big(\\iget[flow]{\\DALint[const=I,flow=\\varphi]}(\\zeta)\\big) \\D{\\iget[flow]{\\DALint[const=I,flow=\\varphi]}}(\\zeta)(x)\n\\]\nwhere \\((\\gradient{\\ivaluation{\\vdLint[const=I,state=]}{\\eta}})\\big(\\iget[flow]{\\DALint[const=I,flow=\\varphi]}(\\zeta)\\big)\\), the spatial gradient \\(\\gradient{\\ivaluation{\\vdLint[const=I,state=]}{\\eta}}\\)\nat \\(\\iget[flow]{\\DALint[const=I,flow=\\varphi]}(\\zeta)\\),\nis the vector of\n\\(\\Dp[x]{\\ivaluation{\\vdLint[const=I,state=]}{\\eta}}\\big(\\iget[flow]{\\DALint[const=I,flow=\\varphi]}(\\zeta)\\big)\n= \\Dp[X]{\\ivaluation{\\Ifzm}{\\eta}}\\).\nChain rule and \\rref{def:dL-valuationTerm} and \\rref{def:HP-transition}, thus, imply:\n\\[\n\\ivaluation{\\Iff[\\zeta]}{\\der{\\eta}}\n= \\sum_x \\iget[state]{\\Iff[\\zeta]}(\\D{x}) \\itimes \\Dp[X]{\\ivaluation{\\Ifzm}{\\eta}}\n= 
\\sum_x \\Dp[X]{\\ivaluation{\\Ifzm}{\\eta}} \\itimes \\D[t]{\\iget[state]{\\Iff[t]}(x)}(\\zeta)\n= \\D[t]{\\ivaluation{\\Iff[t]}{\\eta}} (\\zeta)\n\\]\n\\end{proofatend}\n\nThe key insight for the soundness of differential effects \\irref{DE} is that differential assignments mimicking the differential equation are vacuous along that differential equation.\nThe differential substitution resulting from a subsequent use of \\irref{Dassignb} is crucial to relay the values of the time-derivatives of the state variables $x$ along a differential equation by way of their corresponding differential symbol $\\D{x}$.\nIn combination, this makes it possible to soundly substitute the right-hand side of a differential equation for its left-hand side in a proof.\n\n\\begin{lemma}[Differential assignment] \\label{lem:differentialAssignLemma}\n %\n If \\m{\\imodels{\\DALint[const=I,flow=\\varphi]}{\\D{x}=\\genDE{x}\\land\\psi}}\n for some flow \\m{\\iget[flow]{\\DALint[const=I,flow=\\varphi]}:[0,r]\\to\\linterpretations{\\Sigma}{V}} \nof any duration $r\\geq0$,\n then\n \\[\n \\imodels{\\DALint[const=I,flow=\\varphi]}{\\phi \\lbisubjunct \\dbox{\\Dupdate{\\Dumod{\\D{x}}{\\genDE{x}}}}{\\phi}}\n \\]\n\\end{lemma}\n\\begin{proofatend}\n\\m{\\imodels{\\DALint[const=I,flow=\\varphi]}{\\D{x}=\\genDE{x}\\land\\psi}} implies\n\\(\\imodels{\\Iff[\\zeta]}{\\D{x}=\\genDE{x}\\land\\psi}\\),\ni.e. 
\\(\\iget[state]{\\Iff[\\zeta]}(\\D{x}) = \\ivaluation{\\Iff[\\zeta]}{\\genDE{x}}\\) and \\(\\imodels{\\Iff[\\zeta]}{\\psi}\\)\nfor all $0\\leq \\zeta\\leq r$.\nConsequently \\(\\iaccessible[\\Dupdate{\\Dumod{\\D{x}}{\\genDE{x}}}]{\\Iff[\\zeta]}{\\Iff[\\zeta]}\\)\ndoes not change the state, so that $\\phi$ and \\(\\dbox{\\Dupdate{\\Dumod{\\D{x}}{\\genDE{x}}}}{\\phi}\\) are equivalent along $\\iget[flow]{\\DALint[const=I,flow=\\varphi]}$.\n\n\\end{proofatend}\n\nThe final insights for differential invariant reasoning for differential equations are syntactic ways of computing differentials, which can be internalized as axioms (\\irref{Dplus+Dtimes+Dcompose}), since differentials are syntactically represented in differential-form \\dL. \n\\begin{lemma}[Derivations] \\label{lem:derivationLemma}\n The following equations of differentials are valid:\n \\begin{align}%\n \\der{f} & = 0\n &&\\text{for arity 0 functions\/numbers}~f\n \\label{eq:Dconstant}\\\\\n %\n \\der{x} & = \\D{x}\n &&\\text{for variables}~x\\in\\mathcal{V}\\label{eq:Dpolynomial}\\\\\n \\der{\\theta+\\eta} & = \\der{\\theta} + \\der{\\eta}\n \\label{eq:Dadditive}\\\\\n \\der{\\theta\\cdot \\eta} & = \\der{\\theta}\\cdot \\eta + \\theta\\cdot\\der{\\eta}\n \\label{eq:DLeibniz}\n \\\\\n \\dbox{\\pupdate{\\pumod{y}{\\theta}}}{\\dbox{\\Dupdate{\\Dumod{\\D{y}}{1}}}\n {}&{\\big( \\der{f(\\theta)} = \\der{f(y)}\\stimes\\der{\\theta}\\big)}}\n &&\\text{for $y,\\D{y}\\not\\in\\theta$}\n \\label{eq:Dcompose}\n %\n %\n \\end{align}%\n\\end{lemma}\n\\begin{proofatend}\n\\def\\Im{\\imodif[state]{\\vdLint[const=I,state=\\nu]}{x}{X}}%\n\\edef\\Imyypyval{\\lenvelope\\theta\\renvelope^I \\nu}%\n\\def\\Imyyp{\\vdLint[const=I,state=\\modif{\\nu}{y}{\\Imyypyval}\\modif{}{\\D{y}}{1}]{\\vdLint[const=I,state=\\nu]}}%\n\\newcommand{\\vdLint[const=I,state=]}{\\vdLint[const=I,state=]}%\nThe proof shows each equation separately.\nThe first parts consider any constant function (i.e. 
arity 0) or number literal $f$ for \\rref{eq:Dconstant} and align the differential \\(\\der{x}\\) of a term that happens to be a variable $x\\in\\mathcal{V}$ with its corresponding differential symbol $\\D{x}\\in\\D{\\mathcal{V}}$ for \\rref{eq:Dpolynomial}.\nThe other cases exploit linearity for \\rref{eq:Dadditive} and Leibniz properties of partial derivatives for \\rref{eq:DLeibniz}.\nCase \\rref{eq:Dcompose} exploits the chain rule and assignments and differential assignments for the fresh $y,\\D{y}$ to mimic partial derivatives.\nEquation \\rref{eq:Dcompose} generalizes to functions $f$ of arity $n>1$, in which case $\\stimes$ is the (definable) Euclidean scalar product.\n\\iflongversion\n\\def\\aptag#1{\\tag{#1}}%\n\\else%\n\\def\\aptag#1{\\notag&\\hspace*{-8pt}(#1)}%\n\\fi\n\\begin{align}\n\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\der{f}}\n&= \\sum_x \\iget[state]{\\vdLint[const=I,state=\\nu]}(\\D{x}) \\itimes \\Dp[X]{\\ivaluation{\\Im}{f}}\n= \\sum_x \\iget[state]{\\vdLint[const=I,state=\\nu]}(\\D{x}) \\itimes \\Dp[X]{\\iget[const]{\\Im}(f)}\n= 0\n\\aptag{\\ref{eq:Dconstant}}\n\\\\\n\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\der{x}}\n&=\n\\def\\Imy{\\imodif[state]{\\vdLint[const=I,state=\\nu]}{y}{X}}%\n\\sum_y \\iget[state]{\\vdLint[const=I,state=\\nu]}(\\D{y}) \\itimes \\Dp[X]{\\ivaluation{\\Imy}{x}}\n= \\iget[state]{\\vdLint[const=I,state=\\nu]}(\\D{x}) \\itimes \\Dp[X]{\\ivaluation{\\Im}{x}}\n= \\iget[state]{\\vdLint[const=I,state=\\nu]}(\\D{x}) \\itimes \\Dp[X]{X}\n= \\iget[state]{\\vdLint[const=I,state=\\nu]}(\\D{x})\n= \\ivaluation{\\vdLint[const=I,state=\\nu]}{\\D{x}}\n\\aptag{\\ref{eq:Dpolynomial}}\n\\\\\n\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\der{\\theta+\\eta}}\n&= \\sum_x \\iget[state]{\\vdLint[const=I,state=\\nu]}(\\D{x}) \\itimes \\Dp[X]{\\ivaluation{\\Im}{\\theta+\\eta}}\n= \\sum_x \\iget[state]{\\vdLint[const=I,state=\\nu]}(\\D{x}) \\itimes \\Dp[X]{(\\ivaluation{\\Im}{\\theta}+\\ivaluation{\\Im}{\\eta})}\n\\notag\n\\\\&= \\sum_x 
\\iget[state]{\\vdLint[const=I,state=\\nu]}(\\D{x}) \\itimes \\Big(\\Dp[X]{\\ivaluation{\\Im}{\\theta}} + \\Dp[X]{\\ivaluation{\\Im}{\\eta}}\\Big)\n\\notag\n\\\\&= \\sum_x \\iget[state]{\\vdLint[const=I,state=\\nu]}(\\D{x}) \\itimes \\Dp[X]{\\ivaluation{\\Im}{\\theta}} + \\sum_x \\iget[state]{\\vdLint[const=I,state=\\nu]}(\\D{x}) \\itimes \\Dp[X]{\\ivaluation{\\Im}{\\eta}}\n\\notag\n\\\\&= \\ivaluation{\\vdLint[const=I,state=\\nu]}{\\der{\\theta}} + \\ivaluation{\\vdLint[const=I,state=\\nu]}{\\der{\\eta}}\n= \\ivaluation{\\vdLint[const=I,state=\\nu]}{\\der{\\theta} + \\der{\\eta}}\n\\aptag{\\ref{eq:Dadditive}}\n\\\\\n\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\der{\\theta\\cdot\\eta}}\n&= \\sum_x \\iget[state]{\\vdLint[const=I,state=\\nu]}(\\D{x}) \\itimes \\Dp[X]{\\ivaluation{\\Im}{\\theta\\cdot\\eta}}\n= \\sum_x \\iget[state]{\\vdLint[const=I,state=\\nu]}(\\D{x}) \\itimes \\Dp[X]{(\\ivaluation{\\Im}{\\theta}\\cdot\\ivaluation{\\Im}{\\eta})}\n\\notag\n\\\\\n&= \\sum_x \\iget[state]{\\vdLint[const=I,state=\\nu]}(\\D{x}) \\itimes \\Big(\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\eta} \\itimes \\Dp[X]{\\ivaluation{\\Im}{\\theta}}\n+ \\ivaluation{\\vdLint[const=I,state=\\nu]}{\\theta} \\itimes \\Dp[X]{\\ivaluation{\\Im}{\\eta}}\\Big)\n\\notag\n\\\\\n&=\n\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\eta} \\sum_x \\iget[state]{\\vdLint[const=I,state=\\nu]}(\\D{x}) \\itimes \\Dp[X]{\\ivaluation{\\Im}{\\theta}}\n+ \\ivaluation{\\vdLint[const=I,state=\\nu]}{\\theta} \\sum_x \\iget[state]{\\vdLint[const=I,state=\\nu]}(\\D{x}) \\itimes \\Dp[X]{\\ivaluation{\\Im}{\\eta}}\n\\notag\n\\\\\n&= \n\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\der{\\theta}}\\cdot \\ivaluation{\\vdLint[const=I,state=\\nu]}{\\eta} + \\ivaluation{\\vdLint[const=I,state=\\nu]}{\\theta}\\cdot\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\der{\\eta}}\n= \\ivaluation{\\vdLint[const=I,state=\\nu]}{\\der{\\theta}\\cdot \\eta + \\theta\\cdot\\der{\\eta}}\n\\aptag{\\ref{eq:DLeibniz}}\n\\intertext{Proving 
that\n\\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\dbox{\\pupdate{\\pumod{y}{\\theta}}}{\\dbox{\\Dupdate{\\Dumod{\\D{y}}{1}}}{\\big(\\der{f(\\theta)} = \\der{f(y)}\\stimes\\der{\\theta}\\big)}}}\\)\nrequires showing that\n\\newline\n\\(\\imodels{\\Imyyp}{\\der{f(\\theta)} = \\der{f(y)}\\stimes\\der{\\theta}}\\),\ni.e.\\\n\\(\\ivaluation{\\Imyyp}{\\der{f(\\theta)}} = \\ivaluation{\\Imyyp}{\\der{f(y)}\\stimes\\der{\\theta}}\\).\nThis is equivalent to\n\\(\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\der{f(\\theta)}} = \\ivaluation{\\Imyyp}{\\der{f(y)}}\\stimes\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\der{\\theta}}\\)\nby \\rref{lem:coincidence-term} since\n\\(\\iget[state]{\\vdLint[const=I,state=\\nu]}=\\iget[state]{\\Imyyp}\\) on $\\scomplement{\\{y,\\D{y}\\}}$ and\n\\(y,\\D{y}\\not\\in\\freevars{\\theta}\\) by assumption,\nso \\(y,\\D{y}\\not\\in\\freevars{\\der{f(\\theta)}}\\) and \\(y,\\D{y}\\not\\in\\freevars{\\der{\\theta}}\\).\nThe latter equation proves using the chain rule and a fresh variable $z$ when denoting \\(\\ivaluation{\\vdLint[const=I,state=]}{f} \\mdefeq \\iget[const]{\\vdLint[const=I,state=]}(f)\\):}\n \\ivaluation{\\vdLint[const=I,state=\\nu]}{\\der{f(\\theta)}} & =\n \\sum_x \\iget[state]{\\vdLint[const=I,state=\\nu]}(\\D{x}) \\Dp[x]{\\ivaluation{\\vdLint[const=I,state=]}{f(\\theta)}}(\\iget[state]{\\vdLint[const=I,state=\\nu]})\n = \\sum_x \\iget[state]{\\vdLint[const=I,state=\\nu]}(\\D{x}) \\Dp[x]{(\\ivaluation{\\vdLint[const=I,state=]}{f}\\compose\\ivaluation{\\vdLint[const=I,state=]}{\\theta})}(\\iget[state]{\\vdLint[const=I,state=\\nu]})\n \\notag\n \\\\&\n %\n \\stackrel{\\text{chain}}{=} \\sum_x \\iget[state]{\\vdLint[const=I,state=\\nu]}(\\D{x}) \\Dp[y]{\\ivaluation{\\vdLint[const=I,state=]}{f}}\\big(\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\theta}\\big) \\stimes \\Dp[x]{\\ivaluation{\\vdLint[const=I,state=]}{\\theta}}(\\iget[state]{\\vdLint[const=I,state=\\nu]})\n\\notag\n \\\\&\n = 
\\Dp[y]{\\ivaluation{\\vdLint[const=I,state=]}{f}}\\big(\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\theta}\\big) \\stimes \\sum_x \\iget[state]{\\vdLint[const=I,state=\\nu]}(\\D{x}) \\Dp[x]{\\ivaluation{\\vdLint[const=I,state=]}{\\theta}}(\\iget[state]{\\vdLint[const=I,state=\\nu]})\n = \\Dp[y]{\\ivaluation{\\vdLint[const=I,state=]}{f}}\\big(\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\theta}\\big) \\stimes \\ivaluation{\\vdLint[const=I,state=\\nu]}{\\der{\\theta}}\n\\notag\n \\\\&= \\Dp[y]{\\iget[const]{\\vdLint[const=I,state=]}(f)}\\big(\\ivaluation{\\vdLint[const=I,state=\\nu]}{\\theta}\\big) \\stimes \\ivaluation{\\vdLint[const=I,state=\\nu]}{\\der{\\theta}}\n\\notag\n \\\\&=\n \\Dp[z]{\\iget[const]{\\vdLint[const=I,state=]}(f)}(\\Imyypyval) \\itimes 1 \\stimes \\ivaluation{\\vdLint[const=I,state=\\nu]}{\\der{\\theta}}\n\\notag\n \\\\&=\n \\Dp[z]{\\iget[const]{\\vdLint[const=I,state=]}(f)}\\big(\\ivaluation{\\Imyyp}{y}\\big) \\itimes \\Dp[y]{\\ivaluation{\\vdLint[const=I,state=]}{y}}(\\iget[state]{\\Imyyp}) \\stimes \\ivaluation{\\vdLint[const=I,state=\\nu]}{\\der{\\theta}}\n\\notag\n \\\\&\\stackrel{\\text{chain}}{=}\n \\Dp[y]{(\\iget[const]{\\vdLint[const=I,state=]}(f)\\compose\\ivaluation{\\vdLint[const=I,state=]}{y})}(\\iget[state]{\\Imyyp}) \\stimes \\ivaluation{\\vdLint[const=I,state=\\nu]}{\\der{\\theta}}\n\\notag\n \\\\&=\n \\left(\\Dp[y]{\\ivaluation{\\vdLint[const=I,state=]}{f(y)}}(\\iget[state]{\\Imyyp})\\right) \\stimes \\ivaluation{\\vdLint[const=I,state=\\nu]}{\\der{\\theta}}\n\\notag\n \\\\&=\n \\left(\\iget[state]{\\Imyyp}(\\D{y}) \\Dp[y]{\\ivaluation{\\vdLint[const=I,state=]}{f(y)}}(\\iget[state]{\\Imyyp})\\right) \\stimes \\ivaluation{\\vdLint[const=I,state=\\nu]}{\\der{\\theta}}\n\\notag\n \\\\&=\n \\left(\\sum_{x\\in\\{y\\}} \\iget[state]{\\Imyyp}(\\D{x}) \\Dp[x]{\\ivaluation{\\vdLint[const=I,state=]}{f(y)}}(\\iget[state]{\\Imyyp})\\right) \\stimes \\ivaluation{\\vdLint[const=I,state=\\nu]}{\\der{\\theta}}\n\\notag\n \\\\&=\n 
\\ivaluation{\\Imyyp}{\\der{f(y)}} \\stimes \\ivaluation{\\vdLint[const=I,state=\\nu]}{\\der{\\theta}}\n %\n \\aptag{\\ref{eq:Dcompose}}\n\\end{align}\n\n\\end{proofatend}\n\n\\subsection{Soundness}\n\n\\begin{theorem}[Soundness] \\label{thm:dL-sound}\n The \\dL axioms and proof rules in \\rref{fig:dL} and \\rref{fig:dL-ODE} are sound, i.e.\\ the axioms are valid formulas and the conclusion of a rule is valid if its premises are.\n All \\irref{US} instances of the proof rules (with $\\freevars{\\sigma}=\\emptyset$) are sound.\n\\end{theorem}\n\\begin{proofatend}\nThe axioms (and most proof rules) in \\rref{fig:dL} are special instances of corresponding axiom schemata and proof rules for differential dynamic logic \\cite{DBLP:conf\/lics\/Platzer12b} and, thus, sound.\nAll proof rules except \\irref{US} are even \\emph{locally sound}, i.e. for all $\\iget[const]{\\vdLint[const=I,state=\\nu]}$: if all their premises $\\phi_j$ are valid in $\\iget[const]{\\vdLint[const=I,state=\\nu]}$ (\\m{\\iget[const]{\\vdLint[const=I,state=\\nu]}\\models{\\phi_j}}) then their conclusion $\\psi$ is, too (\\m{\\iget[const]{\\vdLint[const=I,state=\\nu]}\\models{\\psi}}).\nLocal soundness implies soundness.\nIn addition, local soundness implies that \\irref{US} can be used to soundly instantiate proof rules just like it soundly instantiates axioms (\\rref{thm:usubst-sound}).\nIf\n\\begin{equation}\n\\linfer\n{\\phi_1 \\quad \\dots \\quad \\phi_n}\n{\\psi}\n\\label{eq:proofrule}\n\\end{equation}\nis a locally sound proof rule then its substitution instance is locally sound:\n\\begin{equation}\n\\linfer\n{\\applyusubst{\\sigma}{\\phi_1} \\quad \\dots \\quad \\applyusubst{\\sigma}{\\phi_n}}\n{\\applyusubst{\\sigma}{\\psi}}\n\\label{eq:usubstituted-proofrule}\n\\end{equation}\nwhere $\\sigma$ is any uniform substitution (for which the above results are defined, i.e.\\ no clash) with $\\freevars{\\sigma}=\\emptyset$.\nTo show this, consider any $\\iget[const]{\\vdLint[const=I,state=\\nu]}$ in 
which all premises of \\rref{eq:usubstituted-proofrule} are valid, i.e.\\\n\\(\\iget[const]{\\vdLint[const=I,state=\\nu]}\\models{\\applyusubst{\\sigma}{\\phi_j}}\\) for all $j$.\nThat is, \\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\phi_j}}\\) for all $\\iget[state]{\\vdLint[const=I,state=\\nu]}$ and all $j$.\nBy \\rref{lem:usubst}, \\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\phi_j}}\\) is equivalent to\n\\(\\imodels{\\Ia}{\\phi_j}\\),\nwhich, thus, also holds for all $\\iget[state]{\\vdLint[const=I,state=\\nu]}$ and all $j$.\nBy \\rref{cor:adjointUsubst}, \\(\\imodel{\\Ia}{\\phi_j}=\\imodel{\\Ita}{\\phi_j}\\) for any $\\iget[state]{\\Ita}$, since $\\freevars{\\sigma}=\\emptyset$.\nConsequently, all premises of \\rref{eq:proofrule} are valid in $\\iget[const]{\\Ita}$, i.e. \\(\\iget[const]{\\Ita}\\models{\\phi_j}\\) for all $j$.\nThus, \\(\\iget[const]{\\Ita}\\models{\\psi}\\) by local soundness of \\rref{eq:proofrule}.\nThat is, \\(\\imodels{\\Ia}{\\psi}=\\imodel{\\Ita}{\\psi}\\) by \\rref{cor:adjointUsubst} for all $\\iget[state]{\\Ia}$.\nBy \\rref{lem:usubst}, \\(\\imodels{\\Ia}{\\psi}\\) is equivalent to \\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\applyusubst{\\sigma}{\\psi}}\\),\nwhich continues to hold for all $\\iget[state]{\\vdLint[const=I,state=\\nu]}$.\nThus, \\(\\iget[const]{\\vdLint[const=I,state=\\nu]}\\models{\\applyusubst{\\sigma}{\\psi}}\\), i.e.\\ the conclusion of \\rref{eq:usubstituted-proofrule} is valid in $\\iget[const]{\\vdLint[const=I,state=\\nu]}$, hence \\rref{eq:usubstituted-proofrule} is locally sound.\nConsequently, all \\irref{US} instances of the locally sound proof rules of \\dL with $\\freevars{\\sigma}=\\emptyset$ are locally sound.\nNote that \\irref{gena+MP} can be augmented soundly to use $p(\\bar{x})$ instead of $p(x)$ or $p$, respectively, such that the $\\freevars{\\sigma}=\\emptyset$ requirement will be met during \\irref{US} instances of all 
rules.\n\n\\begin{compactitem}\n\\item[\\irref{DW}]\nSoundness of \\irref{DW} uses that differential equations can never leave their evolution domain by \\rref{def:HP-transition}.\nTo show \\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\dbox{\\pevolvein{\\D{x}=f(x)}{q(x)}}{q(x)}}\\), consider any $\\iget[flow]{\\DALint[const=I,flow=\\varphi]}$ of any duration $r\\geq0$ solving \\(\\imodels{\\DALint[const=I,flow=\\varphi]}{\\D{x}=f(x)\\land q(x)}\\).\nThen \\(\\imodels{\\DALint[const=I,flow=\\varphi]}{q(x)}\\) hence \\(\\imodels{\\Iff[r]}{q(x)}\\).\n\n\n\\item[\\irref{DC}]\nSoundness of \\irref{DC} is a stronger version of soundness for the differential cut rule \\cite{DBLP:journals\/logcom\/Platzer10}.\n\\irref{DC} is a differential version of the modal modus ponens \\irref{K}.\nThe core is that if $r(x)$ always holds after the differential equation\nand $p(x)$ always holds after the differential equation \\(\\pevolvein{\\D{x}=f(x)}{q(x)\\land r(x)}\\) that is restricted to $r(x)$,\nthen $p(x)$ always holds after the differential equation \\(\\pevolvein{\\D{x}=f(x)}{q(x)}\\) without that additional restriction.\nLet \\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\dbox{\\pevolvein{\\D{x}=f(x)}{q(x)}}{r(x)}}\\).\nSince all restrictions of solutions are solutions, this is equivalent to\n\\(\\imodels{\\DALint[const=I,flow=\\varphi]}{r(x)}\\) for all $\\iget[flow]{\\DALint[const=I,flow=\\varphi]}$ of any duration solving \\(\\imodels{\\DALint[const=I,flow=\\varphi]}{\\D{x}=f(x)\\land q(x)}\\) and starting in \\(\\iget[state]{\\Iff[0]}=\\iget[state]{\\vdLint[const=I,state=\\nu]}\\) on $\\scomplement{\\{\\D{x}\\}}$.\nConsequently, for all $\\iget[flow]{\\DALint[const=I,flow=\\varphi]}$ starting in \\(\\iget[state]{\\Iff[0]}=\\iget[state]{\\vdLint[const=I,state=\\nu]}\\) on $\\scomplement{\\{\\D{x}\\}}$:\n\\(\\imodels{\\DALint[const=I,flow=\\varphi]}{\\D{x}=f(x)\\land q(x)}\\) is equivalent to\n\\(\\imodels{\\DALint[const=I,flow=\\varphi]}{\\D{x}=f(x)\\land q(x) \\land 
r(x)}\\).\nHence, \\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\dbox{\\pevolvein{\\D{x}=f(x)}{q(x)\\land r(x)}}{p(x)}}\\)\nis equivalent to \\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\dbox{\\pevolvein{\\D{x}=f(x)}{q(x)}}{p(x)}}\\).\n\n\\item[\\irref{DE}]\nSoundness of \\irref{DE} is genuine to differential-form \\dL leveraging \\rref{lem:differentialAssignLemma}.\nConsider any state $\\iget[state]{\\vdLint[const=I,state=\\nu]}$.\nThen\n\\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\dbox{\\pevolvein{\\D{x}=f(x)}{q(x)}}{p(x,\\D{x})}}\\)\niff\n\\(\\imodels{\\Iff[r]}{p(x,\\D{x})}\\)\nfor all solutions $\\iget[flow]{\\DALint[const=I,flow=\\varphi]}:[0,r]\\to\\linterpretations{\\Sigma}{V}$ of\n\\(\\imodels{\\DALint[const=I,flow=\\varphi]}{\\D{x}=f(x)\\land q(x)}\\) of any duration $r$ starting in \\(\\iget[state]{\\Iff[0]}=\\iget[state]{\\vdLint[const=I,state=\\nu]}\\) on $\\scomplement{\\{\\D{x}\\}}$.\nThat is equivalent to: for all $\\iget[flow]{\\DALint[const=I,flow=\\varphi]}$,\nif\n\\(\\imodels{\\DALint[const=I,flow=\\varphi]}{\\D{x}=f(x)\\land q(x)}\\)\nthen\n\\(\\imodels{\\DALint[const=I,flow=\\varphi]}{p(x,\\D{x})}\\).\nBy \\rref{lem:differentialAssignLemma}, \\(\\imodels{\\DALint[const=I,flow=\\varphi]}{p(x,\\D{x})}\\) iff\n\\(\\imodels{\\DALint[const=I,flow=\\varphi]}{\\dbox{\\Dupdate{\\Dumod{\\D{x}}{f(x)}}}{p(x,\\D{x})}}\\),\nso, that is equivalent to\n\\(\\imodels{\\Iff[r]}{\\dbox{\\Dupdate{\\Dumod{\\D{x}}{f(x)}}}{p(x,\\D{x})}}\\)\nfor all solutions $\\iget[flow]{\\DALint[const=I,flow=\\varphi]}:[0,r]\\to\\linterpretations{\\Sigma}{V}$ of\n\\(\\imodels{\\DALint[const=I,flow=\\varphi]}{\\D{x}=f(x)\\land q(x)}\\) of any duration $r$ starting in \\(\\iget[state]{\\Iff[0]}=\\iget[state]{\\vdLint[const=I,state=\\nu]}\\) on $\\scomplement{\\{\\D{x}\\}}$,\nwhich is, consequently, equivalent 
to\n\\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\dbox{\\pevolvein{\\D{x}=f(x)}{q(x)}}{\\dbox{\\Dupdate{\\Dumod{\\D{x}}{f(x)}}}{p(x,\\D{x})}}}\\).\n\n\n\\item[\\irref{DI}]\nSoundness of \\irref{DI} has some relation to the soundness proof for differential invariants \\cite{DBLP:journals\/logcom\/Platzer10}, yet is generalized to leverage differentials.\nThe proof is only shown for \\m{p(x)\\mdefequiv g(x)\\geq0}, in which case \\(\\der{p(x)} \\mequiv (\\der{g(x)}\\geq0)\\).\nConsider a state $\\iget[state]{\\vdLint[const=I,state=\\nu]}$ in which\n\\\\\n\\(\\imodels{\\vdLint[const=I,state=\\nu]}{q(x) \\limply (p(x) \\land \\dbox{\\pevolvein{\\D{x}=f(x)}{q(x)}}{\\der{p(x)}})}\\).\nIf \\(\\inonmodels{\\vdLint[const=I,state=\\nu]}{q(x)}\\), there is nothing to show, because there is no solution of \\({\\pevolvein{\\D{x}=f(x)}{q(x)}}\\) for any duration, so the consequence holds vacuously.\nOtherwise, \\(\\imodels{\\vdLint[const=I,state=\\nu]}{q(x)}\\) implies \\(\\imodels{\\vdLint[const=I,state=\\nu]}{p(x) \\land \\dbox{\\pevolvein{\\D{x}=f(x)}{q(x)}}{\\der{p(x)}}}\\).\nTo show that\n\\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\dbox{\\pevolvein{\\D{x}=f(x)}{q(x)}}{p(x)}}\\) consider any solution $\\iget[flow]{\\DALint[const=I,flow=\\varphi]}$ of any duration $r\\geq0$.\nThe case $r=0$ follows from \\(\\imodels{\\vdLint[const=I,state=\\nu]}{p(x)}\\) by \\rref{lem:coincidence} since \\(\\freevars{p(x)}=\\{x\\}\\) is disjoint from \\(\\{\\D{x}\\}\\), which is changed by evolutions of any duration.\nThat leaves the case \\(r>0\\).\n\nLet \\(\\imodels{\\DALint[const=I,flow=\\varphi]}{\\pevolvein{\\D{x}=f(x)}{q(x)}}\\),\nwhich, by\n\\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\dbox{\\pevolvein{\\D{x}=f(x)}{q(x)}}{\\der{p(x)}}}\\),\nimplies\n\\(\\imodels{\\DALint[const=I,flow=\\varphi]}{\\der{p(x)}}\\).\nSince \\(r>0\\), \\rref{lem:differentialLemma} implies\n\\(0\\leq\\ivaluation{\\Iff[\\zeta]}{\\der{g(x)}}\n= \\D[t]{\\ivaluation{\\Iff[t]}{g(x)}} (\\zeta)\\)\nfor all 
$\\zeta$.\nTogether with \\(\\imodels{\\Iff[0]}{p(x)}\\) (by \\rref{lem:coincidence} and \\(\\freevars{p(x)}\\cap\\{\\D{x}\\}=\\emptyset\\)), i.e.\\\n\\(\\imodels{\\Iff[0]}{g(x)\\geq0}\\),\nthis implies \\(\\imodels{\\Iff[\\zeta]}{g(x)\\geq0}\\) for all $\\zeta$, including $r$,\nby the mean-value theorem since \\(\\ivaluation{\\Iff[t]}{g(x)}\\) is continuous in $t$ on $[0,r]$ and differentiable on $(0,r)$.\n\n\\item[\\irref{DG}]\n\\def\\Imd{\\imodif[state]{\\vdLint[const=I,state=\\nu]}{x}{d}}%\n\\newcommand{\\DALint[const=I,flow=\\tilde{\\varphi}]}{\\DALint[const=I,flow=\\tilde{\\varphi}]}\n\\newcommand{\\Iffy}[1][t]{\\vdLint[const=I,state=\\tilde{\\varphi}(#1)]}\nSoundness of \\irref{DG} is a constructive variation of the soundness proof for differential auxiliaries \\cite{DBLP:journals\/lmcs\/Platzer12}.\nLet \\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\lexists{y}{\\dbox{\\pevolvein{\\D{x}=f(x)\\syssep\\D{y}=a(x)y+b(x)}{q(x)}}{p(x)}}}\\),\nthat is,\n\\\\\n\\(\\imodels{\\Imd}{\\dbox{\\pevolvein{\\D{x}=f(x)\\syssep\\D{y}=a(x)y+b(x)}{q(x)}}{p(x)}}\\) for some $d$.\nIn order to show that\n\\\\\n\\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\dbox{\\pevolvein{\\D{x}=f(x)}{q(x)}}{p(x)}}\\),\nconsider any \\(\\iget[flow]{\\DALint[const=I,flow=\\varphi]}:[0,r]\\to\\linterpretations{\\Sigma}{V}\\) such that\n\\(\\imodels{\\DALint[const=I,flow=\\varphi]}{\\D{x}=f(x)\\land q(x)}\\) and \\(\\iget[state]{\\Iff[0]}=\\iget[state]{\\vdLint[const=I,state=\\nu]}\\) on $\\scomplement{\\{\\D{x}\\}}$.\nBy modifying the values of $y,\\D{y}$ along $\\iget[flow]{\\DALint[const=I,flow=\\varphi]}$, this function can be augmented to a solution \n\\(\\iget[flow]{\\DALint[const=I,flow=\\tilde{\\varphi}]}:[0,r]\\to\\linterpretations{\\Sigma}{V}\\) such that\n\\(\\imodels{\\DALint[const=I,flow=\\tilde{\\varphi}]}{\\D{x}=f(x)\\land\\D{y}=a(x)y+b(x)\\land q(x)}\\) and \\(\\iget[state]{\\Iffy[0]}(y)=d\\).\nThe assumption then implies \\(\\imodels{\\Iffy[r]}{p(x)}\\), which, by 
\\rref{lem:coincidence}, is equivalent to \\(\\imodels{\\Iff[r]}{p(x)}\\) since \\(y,\\D{y}\\not\\in\\freevars{p(x)}\\) and \\(\\iget[state]{\\Iff[r]}=\\iget[state]{\\Iffy[r]}\\) on $\\scomplement{\\{y,\\D{y}\\}}$,\nwhich implies \\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\dbox{\\pevolvein{\\D{x}=f(x)}{q(x)}}{p(x)}}\\), since $\\iget[flow]{\\DALint[const=I,flow=\\varphi]}$ was arbitrary.\nThe construction of the modification $\\iget[flow]{\\DALint[const=I,flow=\\tilde{\\varphi}]}$ of $\\iget[flow]{\\DALint[const=I,flow=\\varphi]}$ on $\\{y,\\D{y}\\}$ proceeds as follows.\nBy Picard-Lindel\\\"of \\cite[\\S10.VII]{Walter:ODE}, there is a solution \\(y:[0,r]\\to\\reals\\) of the initial-value problem\n\\begin{equation}\n\\begin{aligned}\n y(0) &=d\\\\\n \\D{y}(t) &= F(t,y(t)) \\mdefeq y(t) \\ivaluation{\\Iff[t]}{a(x)} + \\ivaluation{\\Iff[t]}{b(x)}\n\\end{aligned}\n\\label{eq:diffghost-extra-ODE}\n\\end{equation}\nbecause \\(F(t,y)\\) is continuous on \\([0,r]\\times\\reals\\)\n(since \\(\\ivaluation{\\Iff[t]}{a(x)}\\) and \\(\\ivaluation{\\Iff[t]}{b(x)}\\) are continuous in $t$ as compositions of the, by \\rref{def:dL-valuationTerm} smooth, evaluation function and the continuous solution $\\iget[state]{\\Iff[t]}$ of a differential equation)\nand because \\(F(t,y)\\) satisfies the Lipschitz condition\n\\[\n\\norm{F(t,y)-F(t,z)} = \\norm{(y-z)\\ivaluation{\\Iff[t]}{a(x)}} \\leq \\norm{y-z} \\max_{t\\in[0,r]} \\norm{\\ivaluation{\\Iff[t]}{a(x)}}\n\\]\nwhere the maximum exists, because it is a maximum of a continuous function on the compact set $[0,r]$.\nThe modification $\\iget[flow]{\\DALint[const=I,flow=\\tilde{\\varphi}]}$ agrees with $\\iget[flow]{\\DALint[const=I,flow=\\varphi]}$ on $\\scomplement{\\{y,\\D{y}\\}}$ and is defined as \\(\\iget[state]{\\Iffy[t]}(y) = y(t)\\) and \\(\\iget[state]{\\Iffy[t]}(\\D{y}) = F(t,y(t)) = \\D{y}(t)\\) on $\\{y,\\D{y}\\}$, respectively, for the solution $y(t)$ of \\rref{eq:diffghost-extra-ODE}.\nBy construction, 
\\(\\iget[state]{\\Iffy[0]}(y)=d\\) and\n\\(\\imodels{\\DALint[const=I,flow=\\tilde{\\varphi}]}{\\D{x}=f(x)\\land\\D{y}=a(x)y+b(x)\\land q(x)}\\),\nbecause \\(\\iget[state]{\\Iff[t]}=\\iget[state]{\\Iffy[t]}\\) on $\\scomplement{\\{y,\\D{y}\\}}$ so that\n\\(\\pevolvein{\\D{x}=f(x)}{q(x)}\\) continues to hold along $\\iget[flow]{\\DALint[const=I,flow=\\tilde{\\varphi}]}$ by \\rref{lem:coincidence-term} because \\(y,\\D{y}\\not\\in\\freevars{\\pevolvein{\\D{x}=f(x)}{q(x)}}\\),\nand because \\(\\D{y}=a(x)y+b(x)\\) holds along $\\iget[flow]{\\DALint[const=I,flow=\\tilde{\\varphi}]}$ by \\rref{eq:diffghost-extra-ODE}.\n\n\\def\\Imyd{\\imodif[state]{\\vdLint[const=I,state=\\nu]}{y}{d}}%\n\\newcommand{\\Ifrest}{\\DALint[const=I,flow=\\restrict{\\varphi}{\\scomplement{\\{y,\\D{y}\\}}}]}%\n\\newcommand*{\\Iffrest}[1][\\zeta]{\\vdLint[const=I,state=\\restrict{\\varphi}{\\scomplement{\\{y,\\D{y}\\}}}(#1)]}%\nConversely, let \\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\dbox{\\pevolvein{\\D{x}=f(x)}{q(x)}}{p(x)}}\\).\nThis direction shows a stronger version of\n\\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\lexists{y}{\\dbox{\\pevolvein{\\D{x}=f(x)\\syssep\\D{y}=a(x)y+b(x)}{q(x)}}{p(x)}}}\\)\nby showing that\n\\\\\n\\(\\imodels{\\Imyd}{\\dbox{\\pevolvein{\\D{x}=f(x)\\syssep\\D{y}=\\eta}{q(x)}}{p(x)}}\\)\nfor all $d\\in\\reals$ and all terms $\\eta$.\nConsider any \\(\\iget[flow]{\\DALint[const=I,flow=\\varphi]}:[0,r]\\to\\linterpretations{\\Sigma}{V}\\) such that\n\\(\\imodels{\\DALint[const=I,flow=\\varphi]}{\\D{x}=f(x)\\land\\D{y}=\\eta\\land q(x)}\\)\nwith \\(\\iget[state]{\\Iff[0]}=\\iget[state]{\\Imyd}\\) on $\\scomplement{\\{\\D{x},\\D{y}\\}}$.\nThen the restriction \\(\\iget[flow]{\\Ifrest}\\) of $\\iget[flow]{\\DALint[const=I,flow=\\varphi]}$ to $\\scomplement{\\{y,\\D{y}\\}}$ with \\(\\iget[state]{\\Iffrest[t]}=\\iget[state]{\\Imyd}\\) on $\\{y,\\D{y}\\}$ for all $t\\in[0,r]$ still solves\n\\(\\imodels{\\Ifrest}{\\D{x}=f(x)\\land q(x)}\\) by \\rref{lem:coincidence-term} 
since \\(\\iget[flow]{\\Ifrest}=\\iget[flow]{\\DALint[const=I,flow=\\varphi]}\\) on $\\scomplement{\\{y,\\D{y}\\}}$ and \\(y,\\D{y}\\not\\in\\freevars{\\pevolvein{\\D{x}=f(x)}{q(x)}}\\).\nIt also satisfies \\(\\iget[state]{\\Iffrest[0]}=\\iget[state]{\\Imyd}\\) on $\\scomplement{\\{\\D{x}\\}}$,\nbecause \\(\\iget[state]{\\Iff[0]}=\\iget[state]{\\Imyd}\\) on $\\scomplement{\\{\\D{x},\\D{y}\\}}$ yet \\(\\iget[state]{\\Iffrest[t]}(\\D{y})=\\iget[state]{\\Imyd}(\\D{y})\\).\nThus, by assumption, \\(\\imodels{\\Iffrest[r]}{p(x)}\\),\nwhich implies \\(\\imodels{\\Iff[r]}{p(x)}\\)\nby \\rref{lem:coincidence}, because\n\\(\\iget[flow]{\\DALint[const=I,flow=\\varphi]}=\\iget[flow]{\\Ifrest}\\) on $\\scomplement{\\{y,\\D{y}\\}}$ and $y,\\D{y}\\not\\in\\freevars{p(x)}$.\n\n\\item[\\irref{DS}]\n\\def\\iconcat[state=\\varphi(t)]{\\I}{\\imodif[state]{\\vdLint[const=I,state=\\nu]}{t}{\\zeta}}%\n\\def\\constODE{f}%\nSoundness of the solution axiom \\irref{DS} follows from existence and uniqueness of global solutions of constant differential equations.\nConsider any state $\\iget[state]{\\vdLint[const=I,state=\\nu]}$.\nThere is a unique \\cite[\\S10.VII]{Walter:ODE} global solution $\\iget[flow]{\\DALint[const=I,flow=\\varphi]}:[0,\\infty)\\to\\linterpretations{\\Sigma}{V}$ defined as \\(\\iget[state]{\\Iff[\\zeta]}(x) \\mdefeq \\ivaluation{\\iconcat[state=\\varphi(t)]{\\I}}{x+\\constODE t}\\)\nand \\(\\iget[state]{\\Iff[\\zeta]}(\\D{x}) \\mdefeq \\D[t]{\\iget[state]{\\Iff[t]}(x)}(\\zeta) = \\iget[const]{\\iconcat[state=\\varphi(t)]{\\I}}(\\constODE)\\)\nand \\(\\iget[state]{\\Iff[\\zeta]} = \\iget[state]{\\vdLint[const=I,state=\\nu]}\\) on $\\scomplement{\\{x,\\D{x}\\}}$.\nThis solution satisfies\n\\(\\iget[state]{\\Iff[0]}=\\iget[state]{\\vdLint[const=I,state=\\nu]}\\) on $\\scomplement{\\{\\D{x}\\}}$ \nand\n \\m{\\imodels{\\DALint[const=I,flow=\\varphi]}{\\D{x}=\\constODE}},\n i.e.\n \\(\\imodels{\\Iff[\\zeta]}{\\D{x}=\\constODE}\\)\n for all \\(0\\leq \\zeta\\leq r\\).\nAll 
solutions of \\(\\pevolve{\\D{x}=\\constODE}\\) from initial state $\\iget[state]{\\vdLint[const=I,state=\\nu]}$ are restrictions of $\\iget[flow]{\\DALint[const=I,flow=\\varphi]}$ to subintervals of $[0,\\infty)$.\nThe (unique) state $\\iget[state]{\\vdLint[const=I,state=\\omega]}$ that satisfies \\(\\iaccessible[\\pupdate{\\pumod{x}{x+\\constODE t}}]{\\iconcat[state=\\varphi(t)]{\\I}}{\\vdLint[const=I,state=\\omega]}\\) agrees with \\(\\iget[state]{\\vdLint[const=I,state=\\omega]}=\\iget[state]{\\Iff[\\zeta]}\\) on $\\scomplement{\\{\\D{x}\\}}$, so that, by $\\D{x}\\not\\in\\freevars{p(x)}$, \\rref{lem:coincidence} implies that \\(\\imodels{\\vdLint[const=I,state=\\omega]}{p(x)}\\) iff \\(\\imodels{\\Iff[\\zeta]}{p(x)}\\).\n\nFirst consider axiom\n\\({\\dbox{\\pevolve{\\D{x}=\\constODE}}{p(x)}} \\lbisubjunct {\\lforall{t{\\geq}0}{\\dbox{\\pupdate{\\pumod{x}{x+\\constODE t}}}{p(x)}}}\\) for $q(x)\\mequiv\\ltrue$.\nIf\n\\\\\n\\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\dbox{\\pevolve{\\D{x}=\\constODE}}{p(x)}}\\),\nthen \\(\\imodels{\\Iff[\\zeta]}{p(x)}\\) for all $\\zeta\\geq0$,\nbecause the restriction of $\\iget[flow]{\\DALint[const=I,flow=\\varphi]}$ to $[0,\\zeta)$ solves \\(\\D{x}=\\constODE\\) from $\\iget[state]{\\vdLint[const=I,state=\\nu]}$,\nthus \\(\\imodels{\\vdLint[const=I,state=\\omega]}{p(x)}\\),\nwhich implies\n\\(\\imodels{\\iconcat[state=\\varphi(t)]{\\I}}{\\dbox{\\pupdate{\\pumod{x}{x+\\constODE t}}}{p(x)}}\\),\nso \\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\lforall{t{\\geq}0}{\\dbox{\\pupdate{\\pumod{x}{x+\\constODE t}}}{p(x)}}}\\) as $\\zeta\\geq0$ was arbitrary.\n\nConversely, \\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\lforall{t{\\geq}0}{\\dbox{\\pupdate{\\pumod{x}{x+\\constODE t}}}{p(x)}}}\\)\nimplies \\(\\imodels{\\iconcat[state=\\varphi(t)]{\\I}}{\\dbox{\\pupdate{\\pumod{x}{x+\\constODE t}}}{p(x)}}\\)\nfor all $\\zeta\\geq0$,\ni.e. 
$\\imodels{\\vdLint[const=I,state=\\omega]}{p(x)}$ when \\(\\iaccessible[\\pupdate{\\pumod{x}{x+\\constODE t}}]{\\iconcat[state=\\varphi(t)]{\\I}}{\\vdLint[const=I,state=\\omega]}\\).\n\\rref{lem:coincidence} again implies \\(\\imodels{\\Iff[\\zeta]}{p(x)}\\) for all $\\zeta\\geq0$, so \\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\dbox{\\pevolve{\\D{x}=\\constODE}}{p(x)}}\\), since all solutions are restrictions of $\\iget[flow]{\\DALint[const=I,flow=\\varphi]}$.\n\nSoundness of \\irref{DS} now follows using that all solutions $\\iget[flow]{\\DALint[const=I,flow=\\varphi]}:[0,r]\\to\\linterpretations{\\Sigma}{V}$ of \\(\\pevolvein{\\D{x}=f(x)}{q(x)}\\) satisfy \\(\\imodels{\\Iff[\\zeta]}{q(x)}\\) for all $0\\leq\\zeta\\leq r$, which, using \\rref{lem:coincidence} as above, is equivalent to \\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\lforall{0{\\leq}s{\\leq}t}{q(x+\\constODE s)}}\\) when \\(\\iget[state]{\\vdLint[const=I,state=\\nu]}(t)=r\\).\n\n\\item[\\irref{Dassignb}]\nSoundness of \\irref{Dassignb} follows from the semantics of differential assignments (\\rref{def:HP-transition}) and compositionality.\nIn detail: \\m{\\Dupdate{\\Dumod{\\D{x}}{f}}} changes the value of symbol $\\D{x}$ to the value of $f$.\nThe predicate $p$ has the same value for arguments $\\D{x}$ and $f$ that have the same value.\n\n\\item[\\irref{Dplus+Dtimes+Dcompose}] Soundness of the derivation axioms \\irref{Dplus+Dtimes+Dcompose} follows from \\rref{lem:derivationLemma}, since they are special instances of \\rref{eq:Dadditive} and \\rref{eq:DLeibniz} and \\rref{eq:Dcompose}, respectively.\nFor \\irref{Dcompose} observe that $y,\\D{y}\\not\\in g(x)$.\n\n\\item[\\irref{G}]\nLet the premise \\(p(\\bar{x})\\) be valid in some $\\iget[const]{\\vdLint[const=I,state=\\nu]}$, i.e.\\ \\m{\\iget[const]{\\vdLint[const=I,state=\\nu]}\\models{p(\\bar{x})}}, i.e.\\ \\(\\imodels{\\vdLint[const=I,state=\\omega]}{p(\\bar{x})}\\) for all $\\iget[state]{\\vdLint[const=I,state=\\omega]}$.\nThen, the 
conclusion \\(\\dbox{a}{p(\\bar{x})}\\) is valid in the same $\\iget[const]{\\vdLint[const=I,state=\\nu]}$,\ni.e.\\ \\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\dbox{a}{p(\\bar{x})}}\\) for all $\\iget[state]{\\vdLint[const=I,state=\\nu]}$,\nbecause \\(\\imodels{\\vdLint[const=I,state=\\omega]}{p(\\bar{x})}\\) for all $\\iget[state]{\\vdLint[const=I,state=\\omega]}$, so also for all $\\iget[state]{\\vdLint[const=I,state=\\omega]}$ with \\(\\iaccessible[a]{\\vdLint[const=I,state=\\nu]}{\\vdLint[const=I,state=\\omega]}\\).\nThus, \\irref{G} is locally sound.\n\n\\item[\\irref{gena}]\n\\def\\Imd{\\imodif[state]{\\vdLint[const=I,state=\\nu]}{x}{d}}%\nLet the premise \\(p(x)\\) be valid in some $\\iget[const]{\\vdLint[const=I,state=\\nu]}$, i.e.\\ \\m{\\iget[const]{\\vdLint[const=I,state=\\nu]}\\models{p(x)}}, i.e.\\ \\(\\imodels{\\vdLint[const=I,state=\\omega]}{p(x)}\\) for all $\\iget[state]{\\vdLint[const=I,state=\\omega]}$.\nThen, the conclusion \\(\\lforall{x}{p(x)}\\) is valid in the same $\\iget[const]{\\vdLint[const=I,state=\\nu]}$,\ni.e.\\ \\(\\imodels{\\vdLint[const=I,state=\\nu]}{\\lforall{x}{p(x)}}\\) for all $\\iget[state]{\\vdLint[const=I,state=\\nu]}$, i.e.\\ \\(\\imodels{\\Imd}{p(x)}\\) for all $d\\in\\reals$,\nbecause \\(\\imodels{\\vdLint[const=I,state=\\omega]}{p(x)}\\) for all $\\iget[state]{\\vdLint[const=I,state=\\omega]}$, so in particular for all $\\iget[state]{\\vdLint[const=I,state=\\omega]}=\\iget[state]{\\Imd}$ for any $d\\in\\reals$.\nThus, \\irref{gena} is locally sound.\n\n\\item[\\irref{CQ}]\nLet the premise \\(f(\\bar{x})=g(\\bar{x})\\) be valid in some $\\iget[const]{\\vdLint[const=I,state=\\nu]}$, i.e. 
\\m{\\iget[const]{\\vdLint[const=I,state=\\nu]}\\models{f(\\bar{x})=g(\\bar{x})}}, i.e.\\ \\(\\imodels{\\vdLint[const=I,state=\\nu]}{f(\\bar{x})=g(\\bar{x})}\\) for all $\\iget[state]{\\vdLint[const=I,state=\\nu]}$,\ni.e.\\ \\(\\ivaluation{\\vdLint[const=I,state=\\nu]}{f(\\bar{x})}=\\ivaluation{\\vdLint[const=I,state=\\nu]}{g(\\bar{x})}\\) for all $\\iget[state]{\\vdLint[const=I,state=\\nu]}$.\nConsequently, \\(\\ivaluation{\\vdLint[const=I,state=\\nu]}{f(\\bar{x})}\\in\\iget[const]{\\vdLint[const=I,state=\\nu]}(p)\\) iff \\(\\ivaluation{\\vdLint[const=I,state=\\nu]}{g(\\bar{x})}\\in\\iget[const]{\\vdLint[const=I,state=\\nu]}(p)\\).\nSo, \\(\\iget[const]{\\vdLint[const=I,state=\\nu]}\\models {p(f(\\bar{x})) \\lbisubjunct p(g(\\bar{x}))}\\).\nThus, \\irref{CQ} is locally sound.\n\n\\item[\\irref{CE}]\nLet the premise \\(p(\\bar{x})\\lbisubjunct q(\\bar{x})\\) be valid in some $\\iget[const]{\\vdLint[const=I,state=\\nu]}$, i.e. \\m{\\iget[const]{\\vdLint[const=I,state=\\nu]}\\models{p(\\bar{x}) \\lbisubjunct q(\\bar{x})}}, i.e.\\ \\(\\imodels{\\vdLint[const=I,state=\\nu]}{p(\\bar{x}) \\lbisubjunct q(\\bar{x})}\\) for all $\\iget[state]{\\vdLint[const=I,state=\\nu]}$.\nConsequently, \\(\\imodel{\\vdLint[const=I,state=\\nu]}{p(\\bar{x})} = \\imodel{\\vdLint[const=I,state=\\nu]}{q(\\bar{x})}\\).\nThus,\n\\(\\imodel{\\vdLint[const=I,state=\\nu]}{\\contextapp{C}{p(\\bar{x})}}\n= \\iget[const]{\\vdLint[const=I,state=\\nu]}(C)\\big(\\imodel{\\vdLint[const=I,state=\\nu]}{p(\\bar{x})}\\big)\n= \\iget[const]{\\vdLint[const=I,state=\\nu]}(C)\\big(\\imodel{\\vdLint[const=I,state=\\nu]}{q(\\bar{x})}\\big)\n= \\imodel{\\vdLint[const=I,state=\\nu]}{\\contextapp{C}{q(\\bar{x})}}\\).\nThis implies\n\\(\\iget[const]{\\vdLint[const=I,state=\\nu]}\\models{\\contextapp{C}{p(\\bar{x})} \\lbisubjunct \\contextapp{C}{q(\\bar{x})}}\\),\nhence the conclusion is valid in $\\iget[const]{\\vdLint[const=I,state=\\nu]}$.\nThus, \\irref{CE} is locally sound.\n\n\\item[\\irref{CT}]\nRule \\irref{CT} 
is a (locally sound) derived rule and only included for comparison.\n\\irref{CT} is derivable from \\irref{CQ} using \\(p(\\usarg) \\mdefequiv (c(\\usarg)=c(g(\\bar{x})))\\) and reflexivity of $=$.\n\n\\item[\\irref{MP}]\nModus ponens \\irref{MP} is sound with respect to the interpretation $\\iget[const]{\\vdLint[const=I,state=\\nu]}$ and the state $\\iget[state]{\\vdLint[const=I,state=\\nu]}$, which implies local soundness and thus soundness.\nIf \\(\\imodels{\\vdLint[const=I,state=\\nu]}{p\\limply q}\\) and \\(\\imodels{\\vdLint[const=I,state=\\nu]}{p}\\) then \\(\\imodels{\\vdLint[const=I,state=\\nu]}{q}\\).\n\n\\item[\\irref{US}]\nUniform substitution is sound by \\rref{thm:usubst-sound}, just not necessarily locally sound.\n\n\\end{compactitem}\n\\end{proofatend}\n\n\n\\section{Conclusions}\n\nWith differential forms for local reasoning about differential equations, uniform substitutions lead to a simple and modular proof calculus for differential dynamic logic that is entirely based on axioms and axiomatic rules, instead of soundness-critical schema variables with side-conditions in axiom schemata.\nThe \\irref{US} calculus is straightforward to implement and enables flexible reasoning with axioms by contextual equivalence.\nEfficiency can be regained by tactics that combine multiple axioms and rebalance the proof to obtain short proof search branches.\nContextual equivalence rewriting for implications is possible when adding monotone \\predicational{s} $C$ whose substitution instances limit $\\uscarg$ to positive polarity.\n\n\\paragraph*{Acknowledgment.}\nI thank the anonymous reviewers of the conference version \\cite{DBLP:conf\/cade\/Platzer15} for their helpful feedback.\n\nThis material is based upon work supported by the National Science Foundation under\nNSF CAREER Award CNS-1054246.\nThe views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either expressed 
or implied, of any sponsoring institution or government. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the views of any sponsoring institution or government.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nQuantum critical metals (QCM) constitute a rich and ever-evolving field of modern condensed matter physics. The broad interest in their behavior is fueled not only by the increasing number of experimentally probed systems,\n such as cuprates and iron-based materials \\cite{PhysRevB.81.184519, hussey2008phenomenology, doi:10.1146\/annurev-conmatphys-031119-050558,RevModPhys.73.797}, but also by the advent of modern numerical techniques, in particular Quantum Monte-Carlo \\cite{berg2012sign, PhysRevX.6.031028, Lederer4905, doi:10.1146\/annurev-conmatphys-031218-013339}. One exciting aspect of QCMs is their prototypical non-Fermi liquid behavior, and it is a theorist's dream to determine the associated critical behavior exactly.\n So far, this has only been achieved in the special cases \n such as the SYK model \\cite{PhysRevLett.70.3339, Kitaev2015}, a particular matrix large $N$ theory~\\cite{PhysRevLett.123.096402}, a scalar large $N$ model with a particular dispersion of a critical boson \\cite{PhysRevB.82.045121}, and models obtained by dimensional regularization \\cite{PhysRevB.88.245106}.\n Besides, a fully self-consistent description of a spin density wave quantum critical point (QCP) in (2+1)D at the lowest energies has also been proposed in Ref.\\ \\cite{PhysRevX.7.021010} and Ref.~\\cite{PhysRevB.90.045121} presented\n the general description of critical theories with a chiral Fermi surface. 
\n\nA {particular example of a} QCM, which defies complete understanding despite years of study \\cite{POLCHINSKI1994617, PhysRevB.50.14048, PhysRevB.50.17917, PhysRevB.64.195109, PhysRevLett.91.066402, PhysRevB.73.045127, PhysRevB.73.085101, PhysRevB.74.195126, doi:10.1146\/annurev-conmatphys-070909-103925, metlitski2010quantum, holder2015anomalous, PhysRevB.82.045121, PhysRevB.88.245106,PhysRevB.96.155125, PhysRevB.103.235129},\nis the one at an Ising nematic QCP in (2+1)D.\nHere, the fermionic self-energy at both the one- and two-loop level scales as $\\omega^{2\/3}$, which is a hallmark of a non-Fermi liquid. However, the loop expansion lacks a control parameter. It was originally believed \\cite{POLCHINSKI1994617,PhysRevB.50.14048} that one can justify the expansion by extending the model to $N$ fermion flavors and taking the limit $N \\to \\infty$. Indeed, at large $N$, the prefactor for $\\omega^{2\/3}$ at the two-loop order is parametrically smaller than the one-loop result \\cite{PhysRevB.50.14048,PhysRevB.74.195126}. However, it was later recognized by S.-S.\\ Lee \\cite{PhysRevB.80.165102} that this does not hold beyond the two-loop order. He analyzed a simplified ``one-patch'' model without backscattering and explicitly demonstrated that planar diagrams, which emerge at three-loop and higher orders, are not small in $1\/N$. Subsequent studies of the Ising nematic model within the ``two-patch theory'', which includes backscattering, found more contributions from planar diagrams, not small in $1\/N$, and discovered additional logarithmic singularities (Refs.~\\cite{metlitski2010quantum, holder2015anomalous}).\n\nThere are several equivalent characterizations of planar diagrams, which can all be invoked to extract their\nspecial behaviour at large $N$. The original name-giving criterion in the high-energy literature \\cite{HOOFT1974461} is that they can be drawn on a plane without holes. 
From the condensed matter perspective, the planar diagrams\ndescribe a subset of scattering processes which have a (1+1)-dimensional structure, despite the original problem being (2+1)D -- more specifically, in these diagrams the curvature term in the fermionic dispersion cancels out \\cite{metlitski2010quantum}.\\footnote{We note that the two-dimensionality still shows up via Landau damping; without it, the fermionic self-energy would be more singular than $\\omega^{2\/3}$ \\cite{PhysRevB.73.085101, PhysRevLett.97.226403}.}\n\nBecause the leading one-loop self-energy $\\omega^{2\/3}$ also comes from (1+1)D processes\n\\cite{PhysRevLett.95.026402, PhysRevB.73.045128}, the $N \\to \\infty$ limit at a QCP should be described by some effective (1+1)-dimensional theory. However, the detailed structure of higher-loop planar diagrams is not yet known, and the story is further complicated by apparent UV-divergences at higher-loop orders~\\cite{metlitski2010quantum,holder2015anomalous}. As a result, the form of fermionic and bosonic propagators at a QCP in the large $N$ limit remains unclear.\n\n\n\nIn this paper we analyze the scattering rate in a (2+1)D Fermi liquid away from a nematic\n QCP. Our first goal is to understand the role of planar diagrams in a Fermi liquid regime. In a generic 2D\n Fermi liquid, the imaginary part of the one-loop self-energy $\\text{Im}\\left[\\Sigma (\\omega)\\right]$ at $k=k_F$ scales as $\\omega^2 \\ln(\\omega)$ (as $ (\\omega^2 + (\\pi T)^2)\\ln\\left[{\\text{max}} \\left(\\omega, T\\right)\\right]$ at a finite temperature $T$). This non-analytic form comes from forward and backscattering processes~\\cite{PhysRevB.71.205112},\n which can be regarded as effectively one-dimensional. We analyze $\\text{Im}\\left[\\Sigma (\\omega)\\right]$ at $T=0$ beyond one-loop order, and show that it contains a series of higher powers of $\\ln(\\omega)$. 
The logarithms come from the Cooper part of backscattering, which is manifestly (1+1)D, and we argue that these terms can be equally viewed as coming from planar processes. We demonstrate this explicitly by analyzing the system behavior in the limit of large $N$, when only planar processes survive. We explicitly sum up the series of logarithms and obtain the scattering rate to logarithmic accuracy to all orders in the interaction. We show, however, that in a Fermi liquid not all planar processes are (1+1)D -- the ones that do not contain the highest power of $\\ln (\\omega)$ at any loop order involve scattering with deviations from a single direction by momenta $q \\sim M$, where $M^2$ scales with the distance from a QCP \\footnote{At a QCP, all planar processes become (1+1)D, and, simultaneously, the argument of the logarithm becomes of order one. More specifically, it becomes\na function of the combination $q^3\/\\omega$, where $q$ and $\\omega$ are the relevant internal momentum and frequency in a\nplanar diagram. Both are infinitesimally small at vanishing external frequency, and typical $q^3$ are of order $\\omega$. Then the effective dimension of the full $N = \\infty$ theory becomes (1+1)D, with no separation into the leading and subleading terms.}.\n\n\nFor a toy model with repulsive interaction in the particle-particle channel, the full $\\text{Im} \\left[ \\Sigma(\\omega) \\right]$ to logarithmic accuracy\nremains finite and is reduced, roughly by a half, compared to the one-loop expression. 
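The crossover between the two momentum regimes discussed above -- $q^3 \sim \omega$ at criticality versus $q \sim \gamma\omega/M^2$ in the Fermi liquid -- can be checked with a minimal numerical sketch. This is an illustration only: the damping coefficient `gamma` stands in for $g N \rho/v_F$ of the Landau-damped propagator, and the numerical values of `gamma`, `M`, and the frequency windows are arbitrary choices, not quantities taken from the paper. The typical boson momentum $q_*(\omega)$ is defined here as the root of $q^3 + M^2 q = \gamma\omega$, i.e., where the static part $q^2 + M^2$ of the inverse boson propagator balances the Landau-damping term $\gamma\omega/q$.

```python
import numpy as np

def q_star(omega, M, gamma=1.0):
    """Positive root of q^3 + M^2*q - gamma*omega = 0 (illustrative units):
    the momentum where q^2 + M^2 balances the damping term gamma*omega/q."""
    roots = np.roots([1.0, 0.0, M**2, -gamma * omega])
    # The cubic q^3 + M^2*q is strictly increasing, so exactly one real root.
    return roots[np.argmin(np.abs(roots.imag))].real

def fitted_exponent(omegas, M):
    """Least-squares slope of log(q_*) versus log(omega)."""
    qs = [q_star(w, M) for w in omegas]
    slope, _ = np.polyfit(np.log(omegas), np.log(qs), 1)
    return slope

# At criticality (M = 0): q_* = (gamma*omega)^(1/3), i.e. q_*^3 ~ omega.
crit_slope = fitted_exponent(np.logspace(-6, -2, 20), M=0.0)

# Away from criticality, at frequencies far below M^3/gamma,
# the mass term dominates and q_* ~ gamma*omega/M^2 instead.
fl_slope = fitted_exponent(np.logspace(-12, -10, 20), M=0.1)

print(f"critical exponent ~ {crit_slope:.3f}, Fermi-liquid exponent ~ {fl_slope:.3f}")
```

The fitted exponents come out close to $1/3$ and $1$, respectively, illustrating how the bosonic mass $M$ cuts off the critical $q^3 \sim \omega$ scaling at the lowest frequencies.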
A similar result holds for the $T^2$\nterm in the specific heat of a 2D Fermi gas~\\cite{PhysRevB.74.075102, PhysRevB.76.165111}.\n For the realistic case of an attractive pairing interaction,\n the ground state is an $s$-wave superconductor~\\cite{PhysRevLett.77.3009,Moon2010,PhysRevLett.114.097001, PhysRevB.90.165144, PhysRevB.90.174510, PhysRevB.91.115111,PhysRevB.94.115138, CHOWDHURY2020168125}.\n We show that, in this case, the prefactor of the $\\omega^2$ term in $\\text{Im} \\left[ \\Sigma(\\omega) \\right]$ increases with $\\ln(\\omega)$, and the summation of the logarithms breaks down at some energy scale $\\omega \\geq \\Delta_0$, where $\\Delta_0$ is a pairing gap.\n\n\nOur second goal is then to correctly incorporate the superconducting ground state and obtain $\\text{Im} \\left[ \\Sigma(\\omega) \\right]$\nfor $\\omega \\simeq \\Delta_0$. This is of relevance both for experiments and for\nQuantum Monte-Carlo studies, which found~\\cite{PhysRevX.6.031028,Lederer4905} that superconductivity must be included to understand the data for the self-energy at the lowest frequencies\/temperatures.\n We argue that in a 2D Fermi liquid, the frequency range where logarithmic renormalizations from planar diagrams are relevant, but superconductivity can be neglected, can be made wide even near a QCP,\n by restricting to a weak coupling. 
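The role of weak coupling in opening this window can be made explicit with a standard BCS-type estimate. This is a schematic sketch only: the dimensionless coupling $\lambda \ll 1$, the upper cutoff $\Lambda$ of the Cooper logarithm, and the order-one constant $c$ are generic placeholders, not quantities defined in the text above.

```latex
\Delta_0 \sim \Lambda\, e^{-c/\lambda}
\qquad\Longrightarrow\qquad
\ln\frac{\Lambda}{\Delta_0} \sim \frac{c}{\lambda} \gg 1 ,
```

so within the window $\Delta_0 \ll \omega \ll \Lambda$ the Cooper logarithm $\lambda \ln(\Lambda/\omega)$ sweeps through all values from $\ll 1$ up to $\mathcal{O}(1)$: the logarithmic renormalizations become fully developed while superconductivity is still negligible, and this window widens exponentially as $\lambda \to 0$.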
We show that in this case one can incorporate superconductivity in a\n controllable way at $\\omega \\simeq \\Delta_0$ \\footnote{Exactly at a QCP, the scale $\\Delta_0$ is of the same order as the upper edge of the non-Fermi-liquid~\\cite{CHUBUKOV2020168142}, and there is no sizable frequency range for the renormalization of the self-energy by planar diagrams without including superconducting fluctuations, unless one makes specific assumptions about the shape of the Fermi surface \\cite{PhysRevB.80.165102}.}.\n\n\nThe fermionic self-energy at $\\omega \\simeq \\Delta_0$ has been analyzed in the past for a conventional $s$-wave superconductor with momentum-independent pairing interaction ~\\cite{PhysRevB.42.10211,Larkin2008}. We show that our case is qualitatively different, because the interaction, mediated by soft nematic fluctuations, gives rise to near-equal attraction in a large number of pairing channels, and $s$-wave is only slightly preferable\n over other pairing symmetries ~\\cite{PhysRevLett.114.097001, PhysRevB.98.220501}. In this case, we show that $\\text{Im} \\left[ \\Sigma(\\omega) \\right]$ has the same form as in a pure $s$-wave superconductor between $\\Delta_0$ (below which $\\text{Im} \\left[ \\Sigma(\\omega) \\right] = 0$) and slightly over $3\\Delta_0$, where the system realizes that attraction in the $s$-wave channel is larger than that in other channels.\nAt larger $\\omega$, all pairing channels contribute equally, and the form of $\\text{Im} \\left[ \\Sigma(\\omega) \\right]$ changes qualitatively.\n\n\n The\n paper is structured as follows: In Sec.\\ \\ref{modelsec}, we present the\nmodel, recapitulate one-loop results for $\\Sigma[\\omega]$ and discuss the relevant energy scales. In Sec.\\ \\ref{planarnonplanarsec}, we analyze the structure of three-loop planar diagrams and demonstrate that the theory is effectively one-dimensional at logarithmic order, but not beyond. 
In the next Sec.\\ \\ref{flselfsecrep}, we sum up the series of the leading $\\omega^2 (\\ln{\\omega})^n$ contributions to the self-energy from backscattering at all loop orders.\nIn Sec.\\ \\ref{repulsive}, we consider a toy model with repulsive pairing interaction and show that the summation of the leading logarithms converges at any frequency.\nThe realistic case of an attractive interaction is analyzed in Sec. \\ref{attractivenormalstatesec}, where we\nshow that the convergence only holds up to $\\omega \\sim \\Delta_0$. In Sec.\\ \\ref{scscatesec}, we demonstrate how the results get modified once we add superconductivity and obtain the expressions for $\\text{Im} \\left[ \\Sigma(\\omega) \\right]$ at $\\omega \\simeq \\Delta_0$. Conclusions and an outlook are presented in Sec.\\ \\ref{concsec}. Various technical details are relegated to the Appendices.\n\n\n\\section{Model and one-loop results}\n\\label{modelsec}\n\nWe consider a system of interacting $N$-flavor fermions in two dimensions at $T=0$.\nTwo spin projections are incorporated into $N$ such that the physical case of spin-1\/2 fermions corresponds to $N = 2$. We assume that the system is {isotropic and} close to a QCP towards a nematic order {($d$-wave Pomeranchuk instability)} and that near a QCP the dominant interaction between fermions is mediated by soft fluctuations of a nematic order parameter. 
The corresponding $T = 0$ Euclidean action \cite{PhysRevLett.91.066402, PhysRevB.73.045127} is given by
\begin{align}
\label{mainaction}
\mathcal{S} &= \mathcal{S}_0 + \mathcal{S}_{\text{int}} \ , \\ \notag
\mathcal{S}_0 &= \sum_{\sigma = 1}^N \int_{k} \bar\psi_\sigma(k) (-i\omega_m + \xi_{\boldsymbol{k}})\psi_\sigma(k)\ , \\ \notag
\mathcal{S}_\text{int} &=
\frac{g}{V} \sum_{\sigma, \sigma^\prime} \int_{k,p,q} D_0({\boldsymbol{q}})
f_{\boldsymbol{q}}({\boldsymbol{k}}) f_{\boldsymbol{q}}({\boldsymbol{p}}) \times \\ & \notag \quad \bar\psi_\sigma(k + q/2) \bar\psi_{\sigma^\prime} (p - q/2) \psi_{\sigma^\prime}(p+q/2) \psi_\sigma(k-q/2) \ , \\ \notag
\int_k &\equiv \int \frac{d\omega_m}{2\pi} \frac{d{\boldsymbol{k}}}{(2\pi)^2} \ .
\end{align}
Here $\xi_{\boldsymbol{k}} = v_F(|{\boldsymbol{k}}| - k_F)$ is the flavor-independent fermionic dispersion, linearized around the Fermi momentum, $\omega_m$ is a Matsubara frequency, $g$ is the coupling, which we set to be small compared to the Fermi energy, $g/E_F \ll 1$, and
\begin{align}
D_0({\boldsymbol{q}}) = - \frac{1}{q^2 + M^2} \ , \quad q \equiv |{\boldsymbol{q}}| \ ,
\end{align}
is the bare bosonic propagator. The parameter $M$ (bosonic mass) measures the deviation from the quantum critical point and has units of momentum. Near a QCP,
\begin{align}
\epsilon \equiv \frac{M}{k_F} \ll 1 \ .
\end{align}
{Finally, $f_{\boldsymbol{q}}({\boldsymbol{k}}) = \cos(2 \theta_{{\boldsymbol{k}}{\boldsymbol{q}}})$ (with $\theta_{{\boldsymbol{k}}{\boldsymbol{q}}} = \measuredangle({\boldsymbol{k}},{\boldsymbol{q}})$) is the $d$-wave nematic form factor (not related to pairing symmetry), which specifies the symmetry of the ordered state.
At a general boson-fermion vertex with incoming and outgoing fermionic momenta ${\boldsymbol{k}}_{\text{in}}, {\boldsymbol{k}}_\text{out}$, the half-sum $({\boldsymbol{k}}_{\text{in}} + {\boldsymbol{k}}_{\text{out}})/2$ corresponds to ${\boldsymbol{k}}$ in the above definition of $f$, while ${\boldsymbol{k}}_\text{out} - {\boldsymbol{k}}_\text{in}$ corresponds to ${\boldsymbol{q}}$. In our analysis, all momenta will fulfill $|{\boldsymbol{k}}_{\text{in}}|, |{\boldsymbol{k}}_\text{out}| \simeq k_F$, resulting in ${\boldsymbol{k}} \perp {\boldsymbol{q}}$ and $f \simeq - 1$ (see Fig.\ \ref{formfactor}). Since all relevant expressions contain $f^2$, we can set $f = 1$ from the start. The form factor is important only at high energies of order $E_F$ or in the ordered state, neither of which is considered in this paper. For a short discussion of lattice effects, see Sec.\ \ref{concsec}. }
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig1.pdf}
\caption{{Definition of the form factor; see main text.}}
\label{formfactor}
\end{figure}

The action of Eq.\ (\ref{mainaction}) is obtained from a microscopic Hamiltonian with a 4-fermion interaction by integrating out fermions with energies above a certain cutoff. To account for screening from fermions with energies smaller than the cutoff, the bare propagator $D_0({\boldsymbol{q}})$ should be dressed by a particle-hole bubble $\Pi_{\text{ph}}$, see Fig.\ \ref{diagtable}(a) and App.\ \ref{bubblesapp}. The dressed propagator $D(\omega_m,{\boldsymbol{q}})$ is
\begin{align}
\label{Landaudampingdef}
&D(\omega_m,{\boldsymbol{q}}) = - \frac{1}{q^2 + M^2 + g \Pi_{\text{ph}} (\omega_m, {\boldsymbol{q}}) }
 , &\\ \notag
 &{\text {where}} \quad
 \Pi_{\text{ph}} (\omega_m , {\boldsymbol{q}}) = N\rho \frac{|\omega_m|}{\sqrt{\omega^2_m + (v_F q)^2}} \ .
&\n\\end{align}\n$\\rho = m\/(2\\pi)$ the density of states per flavor, and $m = k_F\/v_F$ the fermionic mass.\n We assume, following earlier works \\cite{metlitski2010quantum, doi:10.1080\/0001873021000057123}, that the\nstatic part of $\\Pi_{\\text{ph}}$ is already incorporated into $D(\\omega_m, {\\boldsymbol{q}})$.\n At $\\omega_m \\ll v_F q$, which will be relevant to our analysis, $\\Pi_{\\text{ph}} (\\omega_m , {\\boldsymbol{q}}) \\approx N \\rho |\\omega_m|\/(v_F q)$ describes (in real frequencies) the Landau damping of\n a boson due to a decay into the particle-hole continuum.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=.5\\columnwidth]{fig2.pdf}\n\\caption{One-loop diagrams. (a) Particle-hole bubble $\\Pi_{\\text{ph}}$, which renormalizes the bare interaction $D_0$ (dashed wavy lines). (b) One-loop self-energy $\\Sigma^{(1)}$. Full wavy line represents the renormalized interaction $D$. (c) Diagram which effectively determines $\\Sigma^{(1)}(\\omega)$ for $\\omega \\ll {\\omega_{\\text{FL}}}$ (interaction lines are static).}\n\\label{diagtable}\n\\end{figure}\n\nWith these ingredients, we can evaluate the fermionic self-energy for the external momentum ${\\boldsymbol{k}}$ on the Fermi surface,\n$\\Sigma(\\omega_m,{\\boldsymbol{k}}_F) \\equiv \\Sigma(\\omega_m)$\n$(\\text{with}\\ G^{-1} = G^{-1}_0 - \\Sigma$). The one-loop self-energy $\\Sigma^{(1)}(\\omega_m)$, represented by the diagram in Fig.\\ \\ref{diagtable}(b), has been obtained before, see, e.g., Refs.\\ \\cite{PhysRevB.74.195126, metlitski2010quantum},\nand we just present the result:\n\\begin{align}\n\\label{sigma1resultmain}\n& \\Sigma^{(1)}(\\omega_m) \\simeq \\\\& \\notag \\begin{dcases} - i\n\\lambda \\omega_m + i \\text{sign}(\\omega_m) \\frac{2\\lambda}{\\pi}\n\\frac{\\omega^2_m}{{\\omega_{\\text{FL}}}} \\ln\\left(\\frac{{\\omega_{\\text{FL}}}}{|\\omega_m|}\\right), \\! &\\omega_m \\ll {\\omega_{\\text{FL}}} \\\\\n\\omega_{\\text{IN}}^{1\/3} \\omega^{2\/3}_m, \\! 
&\\omega_m \\gg {\\omega_{\\text{FL}}}.\n\\end{dcases}\n\\end{align}\nThe first line in Eq.\\ \\eqref{sigma1resultmain} has a familiar Fermi-liquid form\n in 2D. It contains\ntwo parameters\n\\begin{align}\n\\label{lambdadef}\n\\lambda \\equiv \\frac{g}{4 \\pi v_F M} , \\quad \\quad {\\omega_{\\text{FL}}} \\equiv \\frac{ v_F M^3}{N g \\rho} \\ .\n\\end{align}\nHere, $\\lambda$ is an effective coupling, which will be used to control perturbation theory, and ${\\omega_{\\text{FL}}}$\n serves as UV cutoff for the logarithms and also marks the crossover from a Fermi-liquid to a non-Fermi-liquid regime.\n The second line in \\eqref{sigma1resultmain} is the one-loop formula for the self-energy in the non-Fermi liquid regime.\n The scale\n \\beq\n\\label{win}\n\\omega_{\\text{IN}} = \\frac{8}{3\\sqrt{3}} \\lambda^3 {\\omega_{\\text{FL}}} \\\n \\eeq\n is the upper boundary of the true non-Fermi-liquid region, where $\\Sigma(\\omega_m) > \\omega_m$.\n The relevant energy scales are summarized in Fig.\\ \\ref{scales} for both $\\lambda \\ll 1$, $\\lambda \\gg 1$.\n\nIn deriving Eq.\\ \\eqref{sigma1resultmain} we assumed that typical internal frequencies $\\omega^\\prime_m$ and momenta $q$ satisfy\n$\\omega^\\prime_m \\ll v_F q$. This condition is naturally satisfied at a QCP, where typical $\\omega^\\prime_m \\sim q^{3}$, see e.g.\\ \\cite{PhysRevX.10.031053}, but in a weakly coupled Fermi liquid at $\\lambda \\ll 1$ we must additionally require $\\lambda > \\epsilon\/N$, such that ${\\omega_{\\text{FL}}}\n \\leq v_F M$.\n\n\nFor $\\omega \\ll {\\omega_{\\text{FL}}}$, the Landau damping is small compared to the mass term $M^2$ in $D(\\omega_m,{\\boldsymbol{q}})$, and the Fermi-liquid form of $\\Sigma^{(1)}(\\omega_m)$ is obtained by expanding in the Landau damping. In effect, this reduces the evaluation of the diagram in Fig.\\ \\ref{diagtable}(b) to the evaluation of the two-loop diagram with static interactions $D_0({\\boldsymbol{q}})$ of Fig.\\ \\ref{diagtable}(c). 
Let us now zoom into the $\omega^2$ part of the Fermi-liquid self-energy at these frequencies:
\begin{align}
\label{sigmaomegaqdef}
\Sigma^{(1)}_{\omega^2} (\omega_m) \equiv i \text{sign}(\omega_m)
\frac{2\lambda}{\pi} \frac{\omega^2_m}{{\omega_{\text{FL}}}} \ln\left(\frac{{\omega_{\text{FL}}}}{|\omega_m|}\right).
\end{align}
Upon analytical continuation, $i\omega_m \rightarrow \omega + i\delta$, it maps to the imaginary part of the retarded self-energy,
\begin{align}
\text{Im}\left[\Sigma^R(\omega)\right] = -\frac{2 \lambda}{\pi} \frac{\omega^2}{{\omega_{\text{FL}}}} \ln\left(\frac{{\omega_{\text{FL}}}}{|\omega|}\right),
\label{a_1}
\end{align}
which determines the fermionic scattering rate.

\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{fig3.pdf}
\caption{Energy scales. For $\lambda \ll 1$, the self-energy is always smaller than the bare frequency, $\Sigma(\omega_m)< \omega_m$, and there is no true non-Fermi-liquid regime. The green sliver marks the superconducting region for attractive interactions discussed in Sec.\ \ref{scscatesec}, with $\Delta_0 \simeq {\omega_{\text{FL}}} \exp(-1/\lambda)$. For $\lambda \gg 1$, a true non-Fermi-liquid region (red) develops, where $\Sigma(\omega_m)> \omega_m$. Superconductivity in the strong-coupling case is outside the scope of this paper. The scale $v_F M > {\omega_{\text{FL}}}$ is not explicitly shown. }
\label{scales}
\end{figure}

\begin{figure}[b]
\centering
\includegraphics[width=\columnwidth]{fig4.pdf}
\caption{One-dimensional scattering at the two-loop level. (a) Most singular momentum configurations (forward scattering and backscattering) which contribute to the logarithm in $\Sigma_{\omega^2}^{(1)}$. The grayed-out diagram is also part of backscattering, but its contribution can be neglected since it involves a large momentum transfer $\sim 2k_F$.
(b) Way to extract the logarithm from the forward scattering and backscattering processes (see main text).}
\label{oneloop1ddiag}
\end{figure}

For our future considerations it is important that the $\omega_m^2 \ln{(\omega_m)}$ dependence in Eq.~(\ref{sigmaomegaqdef}) can be traced to effectively one-dimensional scattering, embedded in two dimensions (see Appendix \ref{oneloopfermiapp} for technical details). Namely, the processes that contribute to Eq.\ (\ref{sigmaomegaqdef}) are forward scattering and backscattering, with momentum deviations from ${\boldsymbol{k}}$ and ${-{\boldsymbol{k}}}$ of the order of the external $\omega_m$ (see Fig.~\ref{oneloop1ddiag}(a)). As sketched in Fig.\ \ref{oneloop1ddiag}(b), instead of evaluating the closed particle-hole bubble in the diagram, one can explicitly obtain one half of the $\omega_m^2 \ln{(\omega_m)}$ term from either of these two scattering processes (the first diagram describes forward scattering and the second one backscattering). In each case, the integration over the internal momentum ${\boldsymbol{q}}_1$ (assuming $|{\boldsymbol{q}}_1| \ll k_F$) and the corresponding frequency in the blue boxes in Fig.\ \ref{oneloop1ddiag}(b) produces a Landau damping term $\sim \omega_m^\prime/q$ as the leading dynamical contribution, where $q = |{\boldsymbol{q}}|$ and $\omega_m^\prime$ are the momentum and the frequency of the remaining internal fermion in the diagrams in Fig.\ \ref{oneloop1ddiag}.
The integral over the angle between ${\boldsymbol{k}}$ and ${\boldsymbol{q}}$ (or, equivalently, over the component of ${\boldsymbol{q}}$ parallel to ${\boldsymbol{k}}$) restricts the internal frequencies to $\omega_m^\prime \in (0,\omega_m)$, and each diagram yields
\begin{align}
\label{schematic1}
\Sigma_{\omega^2}^{(1)} (\omega_m) \propto \int_0^{\omega_m} d\omega_m^\prime \int_{\omega^\prime_m/v_F}^{{\omega_{\text{FL}}}/v_F} dq \frac{\omega_m^\prime}{q} \propto \omega_m^2 \ln(\omega_m) \ ,
\end{align}
as in Eq.~\eqref{sigmaomegaqdef}.

The relevant $q$ in the integral are of order $\omega'_m/v_F$ (to logarithmic accuracy), and the relevant $\omega'_m$ are of order of the external $\omega_m$, which we set to be the smallest scale in the problem. Then the relevant $q^2$ are much smaller than $M^2$, and hence the static part of $D(\omega_m, q)$ can be approximated by $1/M^2$, i.e., at this order the static interaction can be treated as momentum-independent. This explains why the prefactor in Eq.~\eqref{sigmaomegaqdef} is $\lambda/{\omega_{\text{FL}}} \propto 1/M^4$.

\section{Structure and effective dimensionality of higher order diagrams}
\label{planarnonplanarsec}

There are multiple self-energy diagrams at higher loop orders, but we have two tuning knobs to single out the most important ones: small external $\omega_m$ and large $N$. Let us focus on large $N$ first. At the QCP, the diagrams with the highest power of $N$ are the planar ones, as discussed in Refs.~\cite{PhysRevB.73.045128, metlitski2010quantum,PhysRevB.82.045121}. We show below that the same holds away from a QCP. We discuss the general structure of planar diagrams in Appendix \ref{planarstructureapp}.

We compute the self-energy order-by-order in the loop expansion on the Matsubara axis, and then convert the result onto the real axis.
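As an aside, the asymptotics of the double integral in Eq.~\eqref{schematic1} can be verified numerically. The sketch below (not part of the derivation) works in units $v_F = 1$ and uses an arbitrary illustrative cutoff standing in for ${\omega_{\text{FL}}}$; it checks that the integral equals $\tfrac{\omega^2}{2}\ln({\omega_{\text{FL}}}/\omega) + \tfrac{\omega^2}{4}$, i.e., $\propto \omega^2\ln\omega$ to logarithmic accuracy.

```python
# Numerical check of the double integral in Eq. (schematic1), v_F = 1:
#   I(w) = ∫_0^w dω' ∫_{ω'}^{W} dq ω'/q  =  (w²/2) ln(W/w) + w²/4.
# W is an arbitrary illustrative stand-in for ω_FL.
import math
from scipy.integrate import quad

W = 100.0  # plays the role of ω_FL

def inner(wp):
    # ∫_{ω'}^{W} dq ω'/q = ω' ln(W/ω'); guard the wp → 0 limit
    return 0.0 if wp == 0.0 else wp * math.log(W / wp)

def I(w):
    val, _ = quad(inner, 0.0, w)
    return val

w = 0.1
closed = 0.5 * w**2 * math.log(W / w) + 0.25 * w**2
assert abs(I(w) - closed) < 1e-10
```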
We associate the full dynamical $D(\omega_m, {\boldsymbol{q}})$ with each bosonic propagator in the diagrams and absorb the $-i\lambda \omega_m$ term in the self-energy into the bare fermionic propagator.

The first planar diagrams beyond one loop appear at the three-loop order. There are two such diagrams, shown in Figs.\ \ref{planar3loop}(a),(b). Each diagram can be computed explicitly (App.\ \ref{threeloopApp}), and the result is (for $\omega_m>0$)
\begin{align}
\label{forwardresult3loop}
\Sigma^{(3)}_{\omega^2, \text{a}}(\omega_m) &=
i \frac{1}{\pi^2} \frac{\lambda^2}{1+\lambda} \frac{\omega^2_m}{{\omega_{\text{FL}}}}\times(0.56329\hdots)\ ,
\\
\label{backscatteringhere}
\Sigma^{(3)}_{\omega^2,\text{b}}(\omega_m) &= i \frac{1}{2\pi} \frac{\lambda^2}{1+\lambda} \frac{\omega^2_m}{{\omega_{\text{FL}}}} \ln^2\left(\frac{{\omega_{\text{FL}}}}{\omega_m}\right) \ .
\end{align}
Comparing these two terms with the one-loop result, Eq.~(\ref{sigmaomegaqdef}), we find that the power of $N$ is the same, and there is an additional factor $\lambda/(1+\lambda)$. Moreover, $\Sigma^{(3)}_{\omega^2,\text{a}}(\omega_m)$ contains no logarithm, while $\Sigma^{(3)}_{\omega^2,\text{b}}(\omega_m)$ contains $\ln^2{(\omega_m)}$.

\begin{figure}
\centering
\includegraphics[width=.85\columnwidth]{fig5.pdf}
\caption{Three-loop diagrams. Labels indicate four-momenta. (a) Planar diagram in the forward scattering channel. (b) Planar diagram in the backscattering channel. (c) Non-planar diagram.}
\label{planar3loop}
\end{figure}

For comparison, consider one of the non-planar diagrams, such as the one in Fig.\ \ref{planar3loop}(c).
Evaluating the corresponding integrals, we obtain
\begin{align}
\label{backresult3loop_1}
\Sigma^{(3)}_\text{c}(\omega_m) =
- i \frac{2}{N} \frac{\lambda}{(1+\lambda)\pi} \frac{\omega^2_m}{{\omega_{\text{FL}}}} \ln\left(\frac{{\omega_{\text{FL}}}}{\omega_m}\right) \ .
\end{align}
Comparing this result with Eq.~(\ref{sigmaomegaqdef}), we see that $\Sigma^{(3)}_\text{c}(\omega_m)$ contains an additional factor $1/N$.

We now look more closely at the two planar diagrams and explore the other knob: small external $\omega_m$. We remind the reader that the $\omega^2_m \ln{(\omega_m)}$ term in the one-loop self-energy, Eq.~(\ref{sigmaomegaqdef}), comes from forward scattering and backscattering. The issue we want to address is whether the self-energies $\Sigma^{(3)}_{\omega^2,\text{a}}(\omega_m)$ and $\Sigma^{(3)}_{\omega^2,\text{b}}(\omega_m)$ can be cast into the same form as Eq.~(\ref{sigmaomegaqdef}), with dressed static forward scattering and backscattering amplitudes. Simple experimentation shows that this holds if two of the three internal momenta and frequencies in Figs.~\ref{planar3loop}(a),(b) (either $q$ and $q_1$, or $q$ and $q_2$, where $q = ( \omega'_m, {\boldsymbol{q}})$) scale with $\omega_m$ and hence are infinitesimally small at $\omega_m \to 0$. At a QCP, $\omega_m$ is the only scale in the problem, and all internal momenta/frequencies in the planar diagrams are necessarily infinitesimally small at $\omega_m \to 0$. However, in a Fermi liquid there exists an internal momentum scale $M$ (energy scale $v_F M$), and some internal momenta may be of order $M$.
We illustrate this in Fig.~\ref{planar3loopvertices}, where a blue box marks a dressed vertex.

\begin{figure}
\centering
\includegraphics[width=.85\columnwidth]{fig6.pdf}
\caption{Three-loop diagrams as in Fig.\ \ref{planar3loop}, with the dressed vertices marked by blue boxes.}
\label{planar3loopvertices}
\end{figure}

The computation of the dressed vertices at vanishing external $q$ and $q_2$ is straightforward, and we skip the details. For the planar diagram of Fig.~\ref{planar3loopvertices}(b), we find that, to logarithmic accuracy, the result can be expressed via the dressed static backscattering amplitude, i.e., the sum of the backscattering part of $\Sigma^{(1)}_{\omega^2}(\omega_m)$ and $\Sigma^{(3)}_{\omega^2,\text{b}}(\omega_m)$ can be re-expressed as
\begin{align}
\label{sigmaomegaqdef_new}
&\Sigma_{\omega^2, \text{bs}}(\omega_m) \equiv \\ & \notag i \text{sign}(\omega_m)
\frac{2\lambda}{\pi {\omega_{\text{FL}}}} \int_0^{|\omega_m|} d\omega_m^\prime\,\omega_m^\prime \int_{\omega_m^\prime}^{{\omega_{\text{FL}}}} \frac{dz}{z} \left[\frac{M^2}{g}\Gamma_{\text{bs}} (z)\right]^2 ,
\end{align}
where
\beq
\Gamma_{\text{bs}} (z) = - \frac{g}{M^2} \left[1 + \frac{1}{2}
 \lambda Z \ln\left(\frac{{\omega_{\text{FL}}}}{z}\right) + ... \right]
\label{a_2}
\eeq
is the dressed backscattering amplitude and $Z =1/(1+\lambda)$ is the quasiparticle spectral weight. However, if we go beyond logarithmic accuracy, we find that $\Sigma^{(3)}_{\omega^2,\text{b}}(\omega_m)$ contains contributions that come from $|{\boldsymbol{q}}| \sim M$ and cannot be expressed via the backscattering vertex.
Only very close to a QCP, where $M$ is vanishingly small, can these non-logarithmic terms be absorbed into the dressed backscattering amplitude.

For the diagram of Fig.~\ref{planar3loopvertices}(a), which is a candidate for the forward-scattering contribution, the dressed vertex (the blue box) vanishes at external $q, q_2=0$, and to reproduce $\Sigma^{(3)}_{\omega^2,\text{a}}(\omega_m)$ one has to keep $|{\boldsymbol{q}}_2| \sim M$. This implies that $\Sigma^{(3)}_{\omega^2,\text{a}}(\omega_m)$ does not come from forward scattering. This again holds at a finite $M$. The vanishing of the dressed vertex at $q, q_2 =0$ also explains why the three-loop self-energy $\Sigma^{(3)}_{\omega^2,\text{a}}(\omega_m)$ does not contain a logarithm.

We therefore see that in a Fermi liquid, only planar diagrams survive in the limit $N \to \infty$, but only a portion of the self-energy from these diagrams comes from forward scattering and backscattering. The converse is also true: one can easily verify that the self-energy $\Sigma^{(3)}_\text{c}(\omega_m)$ from the non-planar diagram in Fig.~\ref{planar3loopvertices}(c) does come from backscattering. However, this vertex renormalization is small in $1/N$.

Below we keep $M$ finite and compute the full self-energy to all orders in the interaction to logarithmic accuracy. To this accuracy, we can neglect renormalizations from ``near-forward-scattering'' and ``near-backscattering'' processes and compute the renormalization of the backscattering amplitude keeping only the largest power of $\ln{(\omega_m)}$ at each order in the loop expansion. The fact that the self-energy can be obtained from the renormalized backscattering amplitude (to logarithmic accuracy) again implies that the problem is effectively one-dimensional: the relevant internal fermions have momenta $\pm {\boldsymbol{k}}$ up to small deviations of order $\omega$, i.e., they only move along a single direction.

We note that for this calculation we do not need to set $N \to \infty$, as the $1/N$ terms are outside the logarithmic accuracy.

\section{The full result for the self-energy to all orders in the interaction}
\label{flselfsecrep}

Let us now compute the fully renormalized backscattering contribution to the self-energy by extracting the most logarithmically singular contributions to $\Sigma_{\omega^2, \text{bs}}$ at each loop order. One can straightforwardly verify that these contributions arise from the diagrams shown in Fig.\ \ref{maindiagram}, which contain particle-particle (Cooper) ``bubbles''. We label them as ``Cooper diagrams''.
\begin{figure*}
\centering
\includegraphics[width=.8\textwidth]{fig7.pdf}
\caption{Diagram contributing to $\Sigma(\omega)$ with 4 Cooper bubbles. Labels indicate four-momenta; interaction lines are RPA dressed.}
\label{maindiagram}
\end{figure*}
An example of a Cooper diagram is shown in Fig.\ \ref{maindiagram}. A generic diagram with $n$ Cooper bubbles contains $\ln^n({\omega_{\text{FL}}}/|\omega_m|)$. Our goal is to sum up this series. We will see that the series is not geometric, because the interaction is momentum-dependent.

We proceed with the evaluation of the Cooper diagrams. An inspection of these diagrams yields the following general strategy for the calculation of the $O(\omega^2_m)$ term in the self-energy to logarithmic accuracy (see App.\ \ref{maincalcdetailssec} for details): we select one cross-section, from which we take the Landau-damping term (this gives the overall $\omega_m^2$ in $\Sigma_{\omega^2, \text{bs}}$), and sum up the series of Cooper logarithms on both sides of this cross-section, which produces a factor $\Gamma^2_{\text{bs}}$.
For the calculation of the Landau damping in the selected cross-section with internal momentum ${\boldsymbol{p}}_i$ in the Cooper bubble, we set ${{\boldsymbol{q}}}$ in Fig.\ \ref{maindiagram} nearly transverse to the external ${{\boldsymbol{k}}}$ and ${\boldsymbol{p}}_i$ nearly parallel to ${{\boldsymbol{k}}}$. For the other Cooper bubbles, we only use the static interaction between fermions on the Fermi surface,
\bea
\label{interactionform}
g D(p_1 - p_2) \simeq - \frac{g}{k^2_F \left[\epsilon^2 + 4 \sin^2\!\left(\frac{\phi_{1} - \phi_{2}}{2}\right) \right]} \ ,
\eea
where $\phi_i$ are the angles of ${\boldsymbol{p}}_i$ measured relative to ${\boldsymbol{k}}$. We evaluate each particle-particle bubble $\Pi_\text{pp}$ to logarithmic accuracy and for $v_F q \gg \omega_m'$, where $\omega'_m$ is the frequency component of $q$ in Fig.\ \ref{maindiagram}. Under these conditions, the integration of the two fermionic propagators in the particle-particle bubble with four-momenta $q-p_j$ and $p_j$ over the internal frequency $\omega_{jm}$ and over $|{\boldsymbol{p}}_j|$ yields (see App.\ \ref{bubblesapp})
\begin{align}
\label{Pippmain}
 \Pi_{\text{pp}} (|{\boldsymbol{q}}|, \phi_j) = Z \rho\ln\left(\frac{{\omega_{\text{FL}}}}{v_F |{\boldsymbol{q}}| |\sin{\phi_j}| }\right) \ .
 \end{align}
We assume and verify that typical values of {\it all} $\phi_j$ are of order $\epsilon$. To logarithmic accuracy, we can then approximate $\Pi_{\text{pp}} (|{\boldsymbol{q}}|, \phi_j) \simeq \Pi_{\text{pp}} (z) = Z \rho \ln({\omega_{\text{FL}}}/z)$, where $z =v_F |{\boldsymbol{q}}|\epsilon$. Then the remaining integrations over $\phi_{j}$ in the Cooper ladder involve only the interactions.
This leads to the following expression for the backscattering amplitude, with $L(z) = \ln({\omega_{\text{FL}}}/z)$:
\begin{widetext}
\begin{align}
\label{Tfirst}
&
\Gamma_{\text{bs}} (z) \equiv - \frac{g}{k_F^2} \sum_{n = 0}^\infty \left(\frac{g\rho Z L(z)}{k^2_F}\right)^n
 \cdot \frac{1}{(2\pi)^n} \int_0^{2\pi} d\phi_1 d\phi_2 \hdots d\phi_n \frac{1}{\epsilon^2 + 4 \sin^2\left(\frac{\phi_1}{2}\right)} \times \frac{1}{\epsilon^2 + 4 \sin^2\left(\frac{\phi_2 - \phi_1}{2}\right)} \times \hdots \\ & \notag \times \frac{1}{\epsilon^2 + 4\sin^2\left(\frac{\phi_n - \phi_{n-1}}{2}\right)} \times \frac{1}{\epsilon^2 + 4 \sin^2\left(\frac{\phi_{n}}{2}\right)} \ .
\end{align}
To leading order in $\epsilon$, the integrals can be approximated as
\begin{align}
\label{phiint}
&\frac{1}{(2\pi)^n \epsilon^{n+1}} \int_{-\infty}^\infty d\phi_1 \hdots d\phi_n \frac{\epsilon}{\epsilon^2+ \phi_1^2} \times \frac{\epsilon}{\epsilon^2+(\phi_1 - \phi_2)^2} \times \frac{\epsilon}{\epsilon^2+ (\phi_2 - \phi_3)^2} \times \hdots \times \frac{\epsilon}{\epsilon^2 + (\phi_{n-1} - \phi_n)^2} \times \frac{\epsilon}{\epsilon^2+\phi_n^2} \notag \\ & = \frac{1}{(2\pi)^n \epsilon^{n+2}} \times \frac{\pi^n}{(n+1)}
= \frac{1}{n+1} \frac{1}{(2\epsilon)^{n}} \times \frac{k^2_F}{M^2} \ .
\end{align}
\end{widetext}
The integrals in Eq.\ \eqref{phiint} were evaluated by going to Fourier space, where the convolutions of Lorentzians become products of exponentials.
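The $\pi^n/(n+1)$ result of the Lorentzian convolution chain in Eq.~\eqref{phiint} can be verified by direct numerical integration. The sketch below (an independent check, not from the paper) does this for $n = 2$ with the illustrative choice $\epsilon = 1$; the exact result $\pi^n/((n+1)\epsilon)$ scales as $1/\epsilon$.

```python
# Numerical check of the Lorentzian convolution chain in Eq. (phiint):
# with g(x) = ε/(ε² + x²), the chain integral with n internal angles
# equals π^n / ((n+1) ε).  Checked for n = 2, with ε = 1 (illustrative).
# Via Fourier transform ĝ(t) = π exp(-ε|t|), the convolutions become
# products of exponentials, which is how the closed form is obtained.
import math
from scipy.integrate import dblquad

eps = 1.0
g = lambda x: eps / (eps**2 + x**2)

# n = 2:  ∫∫ dφ1 dφ2 g(φ1) g(φ1 - φ2) g(φ2); finite limits suffice
# because the truncated tails contribute only at the 1e-6 level here
val, _ = dblquad(lambda p2, p1: g(p1) * g(p1 - p2) * g(p2),
                 -80.0, 80.0, lambda p1: -80.0, lambda p1: 80.0)

expected = math.pi**2 / 3.0   # π^n/((n+1) ε) with n = 2, ε = 1
assert abs(val - expected) < 1e-3
```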
The expression for the backscattering amplitude then reduces to
\begin{align}
\label{Tsumfirst}
\Gamma_{\text{bs}}(z)
 \simeq - \frac{g}{M^2} \sum_{n = 0}^\infty \left(
\tilde{\lambda}
 L(z)\right)^n \frac{1}{n+1} , \quad \tilde\lambda \equiv \lambda Z = \frac{\lambda}{1+ \lambda}\ .
\end{align}
We see that $\tilde{\lambda}$ becomes the effective coupling controlling the perturbation theory. As long as $\tilde{\lambda} L(z) < 1$, the summation in Eq.\ \eqref{Tsumfirst} converges, and we obtain
\begin{align}
\label{Tlognormal} \Gamma_{\text{bs}}(z) = \frac{g}{M^2}\left(\frac{\ln\left(1- \tilde{\lambda} L(z)\right)}{\tilde{\lambda} L(z)} \right) \ .
\end{align}
The full self-energy from backscattering then becomes
\begin{align}
\label{Sigmap2}
&\Sigma_{\omega^2, \text{bs}}(\omega_m) = i \text{sign}(\omega_m) \frac{\lambda}{\pi} \frac{\omega^2_m}{{\omega_{\text{FL}}}} \int_{\omega_m}^{{\omega_{\text{FL}}}} \frac{dz}{z}
 \left[\frac{M^2}{g} \Gamma_{\text{bs}} (z)\right]^2 \notag \\
 & = i \text{sign}(\omega_m) \frac{\lambda}{\pi} \frac{\omega^2_m}{{\omega_{\text{FL}}}}
 \int_{\omega_m}^{{\omega_{\text{FL}}}} \frac{dz}{z} \left(\frac{\ln\left(1- \tilde{\lambda} L(z)\right)}{\tilde{\lambda} L(z)}\right)^2 \ .
\end{align}
At $\tilde{\lambda} \rightarrow 0$, we get
\begin{align}
\label{wcresult}
\Sigma_{\omega^2, \text{bs}}(\omega_m) = i \text{sign}(\omega_m) \frac{\lambda}{\pi} \frac{\omega^2_m}{{\omega_{\text{FL}}}} \ln\left(\frac{{\omega_{\text{FL}}}}{|\omega_m|}\right) \ .
\end{align}
This is exactly half of the one-loop result, as expected, since Eq.\ \eqref{Sigmap2} only contains the backscattering contribution.
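The resummation leading from Eq.~\eqref{Tsumfirst} to Eq.~\eqref{Tlognormal} rests on the elementary identity $\sum_{n\geq 0} x^n/(n+1) = -\ln(1-x)/x$ for $|x|<1$, with $x = \tilde{\lambda}L(z)$. A minimal numerical confirmation (illustrative values of $x$ only):

```python
# Check of the resummation Eq. (Tsumfirst) → Eq. (Tlognormal):
#   Σ_{n≥0} x^n/(n+1) = -ln(1-x)/x,   x = λ̃ L(z),  |x| < 1.
# With the overall -g/M² prefactor this reproduces the sign and the
# ln(1 - λ̃L)/(λ̃L) structure of Eq. (Tlognormal).
import math

def partial_sum(x, nmax=400):
    return sum(x**n / (n + 1) for n in range(nmax))

for x in (0.1, 0.5, 0.9, -0.7):   # x < 0 mimics the repulsive toy model
    assert abs(partial_sum(x) + math.log(1 - x) / x) < 1e-12
```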
{The weak-coupling result \eqref{wcresult} is only valid for $\tilde{\lambda} \ln({\omega_{\text{FL}}}/|\omega_m|) \ll 1$; at small frequencies, $\tilde{\lambda} \ln({\omega_{\text{FL}}}/|\omega_m|)$ becomes $O(1)$, and one has to evaluate the full integral \eqref{Sigmap2} instead. }

\subsection{Repulsive pairing interaction}
\label{repulsive}

We first analyze Eq.~(\ref{Sigmap2}) for a toy model, in which we assume that ${\tilde \lambda}$ is negative, i.e., that the pairing interaction is repulsive. In this case, the series in Eq.\ (\ref{Tsumfirst}) converges for all $z$, even the smallest ones. The integral over $z$ in Eq.~\eqref{Sigmap2} can then be evaluated exactly, and the result is
\begin{align}
\label{Sigmawithxrep}
\Sigma_{\omega^2, \text{bs}}(\omega_m) &= i \text{sign}(\omega_m)
 \frac{\omega^2_m }{ Z \pi {\omega_{\text{FL}}}} f_\text{bs;r}(\ell) \ ,
 \end{align}
 where
 \beq
 \ell =
 |\tilde{\lambda}| \ln({\omega_{\text{FL}}}/|\omega_m|)
\eeq
 and
 \beq
f_\text{bs;r}(\ell) =
- \frac{(1+\ell) \ln^2(1+\ell)}{\ell} - 2\text{Li}_2(-\ell) \ ,
\eeq
with $\text{Li}_2$ the dilogarithm. The forward scattering contribution, in the same units, is
\begin{align}
\Sigma_{\text{fs}}(\omega_m) = i\text{sign}(\omega_m)
 \frac{\omega^2_m}{Z \pi {\omega_{\text{FL}}}} f_\text{fs}(\ell), \quad f_{\text{fs}}(\ell) = \ell \ .
\end{align}
A plot of $f_\text{bs;r}$ and $f_\text{fs}$ is shown in Fig.\ \ref{frepulsiveplot}. For small $\ell$, i.e., fairly large frequencies, $f_\text{bs;r} = f_{\text{fs}}$, as expected.
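The closed form of $f_\text{bs;r}$ can be checked numerically: the substitution $u = |\tilde{\lambda}|\ln({\omega_{\text{FL}}}/z)$ in Eq.~\eqref{Sigmap2} gives the integral representation $f_\text{bs;r}(\ell) = \int_0^\ell du\,(\ln(1+u)/u)^2$. The sketch below (an independent check, not from the paper) compares this integral with the dilogarithm expression, using the SciPy convention $\text{spence}(z) = \text{Li}_2(1-z)$:

```python
# Check of the closed form of f_bs;r(ℓ) in Eq. (Sigmawithxrep):
#   f_bs;r(ℓ) = ∫_0^ℓ du (ln(1+u)/u)²
#            = -(1+ℓ) ln²(1+ℓ)/ℓ - 2 Li₂(-ℓ),
# with scipy.special.spence(z) = Li₂(1-z), hence Li₂(-ℓ) = spence(1+ℓ).
import math
from scipy.integrate import quad
from scipy.special import spence

def f_bs_r(ell):
    # integrand → 1 as u → 0, so guard that limit explicitly
    val, _ = quad(lambda u: (math.log1p(u) / u)**2 if u else 1.0, 0.0, ell)
    return val

def f_bs_r_closed(ell):
    return -(1 + ell) * math.log1p(ell)**2 / ell - 2 * spence(1 + ell)

for ell in (0.3, 1.0, 5.0):
    assert abs(f_bs_r(ell) - f_bs_r_closed(ell)) < 1e-8

# small-ℓ behaviour: f_bs;r(ℓ) ≈ ℓ = f_fs(ℓ)
assert abs(f_bs_r(1e-3) - 1e-3) < 1e-5
```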
For smaller frequencies (increasing $\ell$), $f_\text{bs;r}(\ell)$ does not grow linearly, but is bounded, and can be approximated by the limiting form
\begin{align}
f_\text{bs;r}(\ell) = \frac{\pi^2}{3} - \frac{\ln^2(\ell)}{\ell} + \mathcal{O} \left( \frac{\ln(\ell)}{\ell}
\right) \ .
\end{align}
Therefore, for a repulsive pairing interaction, the backscattering rate incurs a logarithmic suppression compared to forward scattering; at large $\ell$ the full rate is reduced to the forward scattering part, i.e., by a half compared to the one-loop result.

Upon analytical continuation to the real axis, $i\omega_m \to \omega + i\delta$, Eq.~(\ref{Sigmawithxrep}) becomes the expression for the scattering rate at low frequencies,
\begin{align}
\label{Imsigmarep}
{\text{Im}} \left[\Sigma_{\text{bs;r}}^R (\omega)\right] &= -
\frac{\omega^2}{Z\pi {\omega_{\text{FL}}}} f_\text{bs;r}(\ell) \ ,
\end{align}
where now $\ell = |{\tilde \lambda}| \ln\left({\omega_{\text{FL}}}/|\omega|\right)$.

\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig8.pdf}
\caption{The functions $f_\text{bs;r}(\ell)$ and $f_\text{fs}(\ell)$ for the toy model with repulsive pairing interaction.}
\label{frepulsiveplot}
\end{figure}

\subsection{Attractive pairing interaction}
\label{attractivenormalstatesec}
Let us now consider the physical case of an attractive pairing interaction, $\tilde{\lambda} >0$. Now we have
\begin{align}
\label{sigmaattleading}
\Sigma_{\omega^2, \text{bs}} &= i\text{sign}(\omega_m) \frac{\omega^2_m}{Z\pi {\omega_{\text{FL}}}}
f_\text{bs;a}(\ell) \ , \nonumber \\
f_\text{bs;a}(\ell) &= \frac{ (\ell - 1)\ln^2(1-\ell)}{\ell} + 2\text{Li}_{2}(\ell) \ ,
\end{align}
and $f_{\text{bs;a}}(\ell)$ is well-defined only for $\ell \leq 1$. At larger $\ell$ the results must be modified by including superconductivity, as we show below.
A plot of $f_\\text{bs;a}(\\ell)$ is presented in Fig.\\ \\ref{fattractiveplot}. Again, for small $\\ell$, \n$f_\\text{bs;a} \\simeq \\ell$, but at\n$\\ell \\lesssim 1$ it grows faster. In the limit $\\ell \\rightarrow 1$,\n\\begin{align}\nf_\\text{bs;a}(\\ell) = \\frac{\\pi^2}{3} - (1 - \\ell) \\ln^2(1- \\ell) + \\mathcal{O}\\left(\\ln(1-\\ell) (1- \\ell)\\right) .\n\\end{align}\n$\nf_\\text{bs;a}(1) = \\pi^2\/3$ is finite, but the slope $df_\\text{bs;a}(\\ell)\/d\\ell$ is logarithmically divergent at\n $\\ell \\rightarrow 1$. As a curiosity, we note that $f_{\\text{bs;r}}$ and $f_{\\text{bs;a}}$ are related as\n\\beq\n\\label{fssym}\nf_{\\text{bs;r}}(\\ell) =f_\\text{bs;a}\\left(\\frac{\\ell}{1+\\ell} \\right) \\ .\n\\eeq\nThis can be derived by a simple substitution in the integral defining $f_\\text{bs;a}$. \n\n \\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{fig9.pdf}\n\\caption{\n The function $f_{\\text{bs;a}}$ to leading order in $\\epsilon$, from Eq. (\\ref{sigmaattleading}). \n $\\ell = 1$ corresponds to $\\omega = \\Delta_0$. The inset shows the modification of \n $f_\\text{bs;a}$ due to extra contributions beyond the leading order in $\\epsilon$, \n see Eq.~\\eqref{fbsstarall}.\n}\n\\label{fattractiveplot}\n\\end{figure}\nLet us look at the behavior of\n$f_{\\text{bs;a}} (\\ell)$ as\n $ \\ell \\rightarrow 1$ more carefully. We recall that\n$f_{\\text{bs;a}} (\\ell)$ has been obtained using the expression for the backscattering amplitude $\\Gamma_{\\text{bs}} (z)$, Eq.~\\eqref{Tlognormal}, valid to the leading order in $\\epsilon = M\/k_F$. We now\ncheck whether the dependence of $f_{\\text{bs;a}} (\\ell)$ on $1-\\ell$ gets modified if we compute $\\Gamma_{\\text{bs}} (z)$ beyond $\\mathcal{O}(\\epsilon)$. 
The reasoning for doing this is the following: to leading order in $\\epsilon$, partial amplitudes in pairing channels with different angular momenta are all equal, and this gives rise to the appearance of \n the factor $1\/(n+1)$ in the series for $\\Gamma_{\\text{bs}} (z)$ in \\eqref{Tlognormal}.\n Beyond the order $\\mathcal{O}(\\epsilon)$, partial pairing amplitudes do differ, and the largest one is in the $s$-wave channel.\n If we included only this channel and ignored all other pairing components, we would find that the logarithmic series \n becomes geometric and $f_{\\text{bs;a}} (\\ell) \\propto 1\/(1-\\ell)$ (see Eq.~(\\ref{sigmaattsubleading_1}) below). For a small but finite $\\epsilon$, we therefore expect a crossover from \n $f_{\\text{bs;a}} (\\ell)= \\pi^2\/3- (1-\\ell) \\ln^2(1- \\ell) +...$ at $1-\\ell \\lesssim 1$ to $\n f_{\\text{bs;a}} (\\ell) \\propto 1\/(1-\\ell)$ at the smallest $1- \\ell$. To describe this crossover, we now evaluate $\\Gamma_{\\text{bs}}(z)$ to next-to-leading order in $\\epsilon$.\n This can be achieved by angular momentum decomposition. \n Namely,\n the effective interaction on the Fermi surface, \n\\begin{align}\nV(\\phi) &= \n\\tilde{\\lambda} \\frac{2\\epsilon}{\\epsilon^2 + 4 \\sin^2(\\phi\/2)}\\ , \n\\end{align}\n can be decomposed into angular momenta as\n\\begin{align}\n\\label{Veffangular}\nV_m = \\int_0^{2\\pi} \\frac{d\\phi}{2\\pi} V(\\phi) \\exp(-i\\phi m) \\ .\n\\end{align}\n To a very good approximation (see App.\\ \\ref{Tmatrixapp})\n \\begin{align}\n\\label{Veffangular_1}\nV_m \\approx \\tilde{\\lambda} \\exp(- |m| \\epsilon) \\ .\n\\end{align}\nAt $\\epsilon \\rightarrow 0$, all components $V_m$ become degenerate.
However, for a nonzero $\\epsilon$,\n this is not quite the case yet, and the $s$-wave component \n $V_0 \\simeq \\tilde{\\lambda}$ is largest.\nA straightforward analysis then shows that the series for\n $\\Gamma_{\\text{bs}}(z)$ can be rewritten as\n\\begin{align}\n\\notag\n \\Gamma_{\\text{bs}}(z) &= - \\frac{1}{\\rho} \\sum_m V_m \\sum_{n = 0}^\\infty \\left[ V_m L(z)\\right]^{n} \\\\ &= - \\frac{1}{\\rho} \\frac{V_0}{1-V_0 L(z)} - \\frac{2}{\\rho} \\sum_{m = 1}^\\infty \\frac{V_m}{1- V_m L(z)},\n\\label{remsum}\n\\end{align}\nwhere in the second line we singled out the $s$-wave part. The remaining sum in Eq.\\ \\eqref{remsum} can be evaluated using the Euler-Maclaurin formula, and the result is\n\\begin{align}\n\\label{Tnextto}\n&\\Gamma_{\\text{bs}}(z) \\simeq \\\\ & \\notag -\\frac{g}{M^2}\n\\left[-\\frac{\\ln\\left( 1- \\tilde{\\lambda} L(z)\n +\n \\epsilon\n \\right)}{\\tilde{\\lambda} L(z)} + \\frac{\\epsilon}{2} \\frac{1}{1- \\tilde{\\lambda} L(z)} \\right] .\n\\end{align}\nEq.\\ \\eqref{Tnextto} shows that the $\\ln\\left( 1- \\tilde{\\lambda} L(z)\\right)$ dependence of the\nbackscattering amplitude is the consequence of the near degeneracy of all\npartial pairing amplitudes. Once this degeneracy is lifted at a finite $\\epsilon$,\n the backscattering amplitude develops a conventional $s$-wave pole,\n albeit with a small weight $\\epsilon\/2$. 
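Both statements, the exponential form of the harmonics in Eq.~(\ref{Veffangular_1}) and the splitting of Eq.~(\ref{remsum}) into a divergent $s$-wave term plus a finite remainder, can be illustrated numerically. The plain-Python sketch below (illustrative parameter values $\epsilon = 0.1$, $\tilde\lambda = 0.3$; not part of the derivation) evaluates $V_m$ by quadrature and then approaches the would-be pole $V_0 L \to 1$:

```python
# (i) the angular harmonics V_m of V(phi) are close to lam*exp(-|m|*eps);
# (ii) in the decomposition of Gamma_bs, only the s-wave (m = 0) term develops
#      a pole as V_0*L -> 1, while the m >= 1 sum remains finite (it only
#      builds up the logarithm).
import math

eps, lam = 0.1, 0.3   # illustrative values; eps = M/k_F << 1

def V(phi):
    return lam * 2 * eps / (eps**2 + 4 * math.sin(phi / 2)**2)

def V_m(m, n=4000):
    # periodic trapezoidal rule, spectrally accurate for smooth integrands
    h = 2 * math.pi / n
    return sum(V(j * h) * math.cos(m * j * h) for j in range(n)) * h / (2 * math.pi)

# (i) quadrature vs. the exponential approximation
for m in (0, 1, 5, 10):
    print(m, V_m(m), lam * math.exp(-m * eps))

# (ii) approach the would-be pole V_0*L -> 1, using V_m ~ lam*exp(-m*eps)
L = 0.999 / lam
swave = lam / (1 - lam * L)                     # diverges as V_0*L -> 1
rest = 2 * sum(lam * math.exp(-m * eps) / (1 - lam * math.exp(-m * eps) * L)
               for m in range(1, 500))
print(swave, rest)  # swave is large; rest stays finite, O((lam/eps)|ln eps|)
```

The finite remainder is what the Euler-Maclaurin evaluation converts into the logarithm in Eq.~(\ref{Tnextto}).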
Inserting this\n $\\Gamma_{\\text{bs}} (z)$ into the integral for $\\Sigma_{\\omega^2, \\text{bs}}$, Eq.~(\\ref{Sigmap2}), we obtain, \n \\begin{align}\n\\label{sigmaattsubleading_1}\n&\\Sigma_{\\omega^2, \\text{bs}} = i \\text{sign}(\\omega_m) \\frac{\\omega_m^2}{Z \\pi {\\omega_{\\text{FL}}}}\nf^*_\\text{bs;a}(\\ell), \n\\end{align} \nwhere \\begin{align}\n\\label{fbsstarall}\nf^*_\\text{bs;a}(\\ell) \\simeq f_\\text{bs;a}(\\ell) + \\frac{\\epsilon^2}{4(1-\\ell)} \\ .\n\\end{align} \n At $1-\\ell \\lesssim \\epsilon$, \n \\begin{align} \\notag \nf^*_{\\text{bs;a}}(\\ell) \\simeq &\\frac{\\epsilon^2}{4(1- \\ell)} + \\frac{\\pi^2}{3} + {\\epsilon} \\ln{(1-\\ell + \\epsilon)}\\ln{(1-\\ell)}\n \\\\ & - (1-\\ell +\\epsilon) \\ln^2 (1-\\ell + \\epsilon) \\ .\n \\label{a_5}\n\\end{align}\n$f_\\text{bs;a}(\\ell)$ and $f^*_\\text{bs;a}(\\ell)$ are plotted in the inset of Fig.\\ \\ref{fattractiveplot}. \nWe see that the self-energy diverges as $1\/(1-\\ell)$ at $\\ell \\to 1$, as is expected for an $s$-wave superconductor\n~\\cite{PhysRevB.42.10211,Larkin2008}, but in our case the divergent term appears with the prefactor $\\epsilon^2$ \\footnote{We note that the prefactor of the $s$-wave term gets renormalized ($N \\rightarrow N-1$ in the expression for ${\\omega_{\\text{FL}}}$) by ladder diagrams without a loop, in which a line, expressing an outgoing fermion, is attached to the upper part of the Cooper ladder \\cite{haussmann1993crossover}. 
Such a diagram has the same number of logarithms, but involves a large momentum transfer $2k_F$ and therefore renormalizes the $s$-wave part only}.\n\n\n\n\n\nLike before, upon analytical continuation to the real axis, $i\\omega_m \\to \\omega + i\\delta$, Eq.\\ (\\ref{sigmaattsubleading_1}) becomes the expression for the backscattering contribution to\nIm[$\\Sigma^R(\\omega)$]:\n\\beq\n\\text{Im}\\left[\\Sigma_{\\text{bs;a}}^R(\\omega)\\right] =\n- \\frac{\\omega^2}{Z \\pi {\\omega_{\\text{FL}}}} \n f^*_\\text{\n bs;a}(\\ell) \\ , \n\\label{sigmaattsubleading_2}\n\\eeq\nwhere on the real frequency axis $\\ell = {\\tilde \\lambda} \\ln({{\\omega_{\\text{FL}}}\/|\\omega|})$. Eq.~(\\ref{sigmaattsubleading_2}) shows that the scattering rate increases when $\\omega$ decreases ($\\ell$ increases) and\nbecomes singular at $\\ell =1$.\n\n\nThe enhancement of $\\text{Im}[\\Sigma_{\\text{bs;a}}^R(\\omega)]$ near $\\ell =1$\ncan be interpreted as follows:\nto the lowest order in the interaction, the rate comes from the\n decay of a particle into two particles and one hole; $\\text{Im}[\\Sigma_{\\text{bs;a}}^R(\\omega)]$ measures the phase space for such a process. By energy conservation, the three outgoing states must have energies smaller than $\\omega$.\nWhen Cooper pair formation sets in, the two electrons in the particle-particle ladder form a tightly bound Cooper pair, and there are only two decay products left. Therefore, the phase space restriction is less severe for small $\\omega$, and the scattering rate increases.\n\nBefore concluding this section, we discuss the range of validity of Eqs.~(\\ref{sigmaattsubleading_1}) and (\\ref{sigmaattsubleading_2}). The expression for the self-energy has been obtained with logarithmic accuracy,\n i.e., at each loop order we neglected terms with the same power of ${\\tilde \\lambda}$, but a smaller power of $\\ln({\\omega_{\\text{FL}}}\/|\\omega|)$. 
This is valid when ${\\tilde \\lambda} \\ll 1$ and $|\\omega| \\ll {\\omega_{\\text{FL}}}$.\nThe condition on $\\omega$ does not pose a restriction as throughout the paper we assume that ${\\omega_{\\text{FL}}}$ is finite and focus on the self-energy at the smallest $\\omega$. The condition ${\\tilde \\lambda} = \\lambda\/(1+\\lambda) \\ll 1$ implies weak coupling, $\\lambda \\ll 1$. Because $\\lambda = g\/(4\\pi v_F M) \\sim g\/(E_F \\epsilon)$, we need to keep $g\/E_F$ small to satisfy both $\\lambda \\ll 1$ and $\\epsilon \\ll 1$. \n Weak coupling is advantageous to us because, for ${\\tilde \\lambda} \\ll 1$, \nthere is a wide range of frequencies where on the one hand $|\\omega| \\ll {\\omega_{\\text{FL}}}$ and on the other $\\ell = {\\tilde \\lambda} \\ln({\\omega_{\\text{FL}}}\/|\\omega|) <1$, hence Eqs. (\\ref{sigmaattsubleading_1}) and (\\ref{sigmaattsubleading_2}) are applicable. This is indeed the consequence of the fact that superconductivity at weak coupling emerges at an exponentially small $\\omega \\sim \\Delta_0 ={\\omega_{\\text{FL}}} \\exp(-1\/{\\tilde \\lambda})$. We emphasize that this only holds in a Fermi liquid regime, at small enough $g\/E_F$. Very close to a QCP, $\\lambda$ necessarily becomes\n large,\n and the whole Fermi liquid range $\\omega < {\\omega_{\\text{FL}}}$ falls into $\\omega < \\omega_{\\text{IN}} \\sim \\lambda^3 {\\omega_{\\text{FL}}}$. The pairing instability in this case emerges at a \n larger $\\omega \\leq \\omega_{\\text{IN}}$ (Refs. 
\\cite{PhysRevLett.77.3009,Moon2010,PhysRevB.91.115111,CHUBUKOV2020168142}), hence there is no range of applicability for our analysis.\n\n\n\n\\section{Self-energy in the superconducting state}\n\\label{scscatesec}\n\nWe now show that the singularity of the self-energy at $\\ell=1$ gets regularized once we take into consideration the fact that the ground state is actually an $s$-wave superconductor.\n In practical terms, superconductivity implies that for the calculation of the backscattering amplitude and the fermionic self-energy one has to use normal and anomalous Green's functions\n\\begin{align}\n\\label{Gmeanfieldmain}\nG(\\omega_m, {\\boldsymbol{k}}) &= -\n \\frac{i Z^{-1}\\omega_m + \\xi_{\\boldsymbol{k}}}{Z^{-2}\\omega^2_m + \\xi_{\\boldsymbol{k}}^2 + \\Delta_0^2}\\ , \\\\ \\notag F(\\omega_m, {\\boldsymbol{k}}) &= \\frac{\\Delta_0}{Z^{-2} \\omega^2_m + \\xi_{\\boldsymbol{k}}^2 + \\Delta_0^2} \\ .\n \\end{align}\n\nThe three key processes that determine the decay of a quasiparticle in a superconductor are \n(i)\n scattering by amplitude and phase fluctuations of the superconducting order parameter, \n (ii) scattering by a (potential) resonance mode in the nematic $D(\\omega, {{\\boldsymbol{q}}})$ at (real) $\\omega = \\omega_{\\text{res}} <3\\Delta_0$ and (iii) scattering due to a decay into particles and holes (this process starts at $\\omega =3\\Delta_0$).\n For a charged superconductor, scattering by phase fluctuations is affected by long-range Coulomb interaction, which converts a Goldstone phase fluctuation mode into a plasmon (still gapless in 2D). However, we neglected the effects of long-range Coulomb interaction in the calculations assuming a normal state at $T=0$, and for consistency we also have to neglect this interaction in a superconductor.\n \nIn general, all three processes are important at $\\omega_m \\gtrsim \\Delta_0$, and the full-fledged\n evaluation of $\\Sigma(\\omega_m \\gtrsim \\Delta_0)$\n is a complicated endeavor. 
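As a consistency check of Eq.~(\ref{Gmeanfieldmain}), one can verify that, upon continuation $i\omega_m \to s$, the normal propagator $G$ has quasiparticle poles at $s = \pm Z\sqrt{\xi_{\boldsymbol{k}}^2 + \Delta_0^2}$ whose weights are the BCS coherence factors $Z u^2$ and $Z v^2$ with $u^2 + v^2 = 1$. A plain-Python sketch with illustrative parameter values (not part of the derivation):

```python
# Pole structure of the mean-field G: after i*omega_m -> s (so omega_m^2 -> -s^2),
# G has poles at s = +-Z*E, E = sqrt(xi^2 + Delta^2), with BCS weights Z*u^2, Z*v^2.
import math

Z, xi, Delta = 0.8, 0.3, 0.5      # illustrative numbers
E = math.hypot(xi, Delta)

def G(s):
    # G(omega_m, k) of Eq. (Gmeanfieldmain) as a function of s = i*omega_m
    return -(s / Z + xi) / (xi**2 + Delta**2 - s**2 / Z**2)

def residue(s0, delta=1e-7):
    # numerical residue: (s - s0) * G(s) evaluated just off the pole
    return delta * G(s0 + delta)

u2 = 0.5 * (1 + xi / E)           # BCS coherence factors
v2 = 0.5 * (1 - xi / E)

print(residue(Z * E), Z * u2)     # weight of the pole at s = +Z*E
print(residue(-Z * E), Z * v2)    # weight of the pole at s = -Z*E
print(residue(Z * E) + residue(-Z * E), Z)   # total spectral weight = Z
```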
\nSince our primary goal is to understand how superconductivity affects \n the singular behavior of the self-energy that we found in the previous section, we limit ourselves to the scattering processes which cut off the singularity. \n \n \n We will directly focus on the imaginary part of the backscattering self-energy in real frequencies, \n $\\text{Im}[\\Sigma_\\text{bs;a}^R(\\omega)]$. Consider first the contribution to $\\text{Im}[\\Sigma_\\text{bs;a}^R(\\omega)]$ arising from the sum of all angular momentum channels,\n Eq.~\\eqref{sigmaattleading}. \n This expression shows a slope singularity\n at $\\ell =1$; however,\n since it was obtained with logarithmic accuracy, it is only valid for $ \\ell \n \\lesssim 1- \\tilde{\\lambda}$. At the boundary of this regime, the self-energy \\eqref{sigmaattleading} is of the same order as\nthe one-loop result, Eq.\\ \\eqref{a_1}. Therefore, to estimate this part of the self-energy\n at $1-\\ell <\\tilde{\\lambda}$, it is sufficient to re-evaluate the one-loop diagram of Fig.\\ \\ref{diagtable}(b) in the superconducting state.\n We find (see App.\\ \\ref{Imsigma1app} for details)\n\\begin{align}\\label{Imsigma3delta}\n&-\\text{Im}\\left[\\Sigma_\\text{bs;a}^{R}(\\omega)\\right] \\sim \\\\ & \\frac{\\Delta_0^2} {{\\omega_{\\text{FL}}}} \\tilde{\\lambda} \\ln\\left( \\frac{{\\omega_{\\text{FL}}}}{\\Delta_0}\\right) \\times \\sqrt{ \\frac{\\omega - 3\\Delta_0}{ \\Delta_0}} \\theta(\\omega - 3\\Delta_0) \\ . \\notag\n\\end{align} \nWe have suppressed the dependence on $Z \\simeq 1$. The scattering rate of Eq.\\ \\eqref{Imsigma3delta} starts at $\\omega = 3\\Delta_0$, which is transparent, as the three scattering products, two quasiparticles and a quasihole, all have an energy gap $\\Delta_0$. At $\\omega - 3\\Delta_0 \\simeq \\Delta_0$, i.e., at $1-\\ell \\sim \\tilde{\\lambda}$, Eq.\\ \\eqref{Imsigma3delta} becomes of the same order as the normal state result, Eq.~\\eqref{sigmaattleading}.
\nWe see therefore that the portion of the self-energy that comes from all scattering channels smoothly evolves from\n Eq.~\\eqref{sigmaattleading} to Eq.~(\\ref{Imsigma3delta}). We also computed the contribution to the self-energy \n from the resonance in $D(\\omega, {\\boldsymbol{q}})$ in the superconducting state. The resonance develops at $\\omega_{\\text{res}} = \\Delta_0 \\times \\left[1 + \\mathcal{O}(\\epsilon k_F\/(N \\rho g)^{1\/2})\\right]$. Adding this contribution, we find that $\\text{Im}[\\Sigma_{\\text{bs;a}}^{R}(\\omega)]$ extends down to $\\omega_{\\text{res}}$, but near $\\omega \\sim 3\\Delta_0$ it is given by \n Eq.\\ (\\ref{Imsigma3delta}) up to small corrections. The form of $\\text{Im}[\\Sigma_{\\text{bs;a}}^{R}(\\omega)]$ at $\\omega_{\\text{res}} \\leq \\omega \\leq 3\\Delta_0$ is rather involved and we will discuss it in a separate report. Ultimately, the complicated form of the self-energy is tied to the abundance of low-energy collective excitations in a superconductor with long-ranged interactions, as also recognized in Refs.\\ \\cite{PhysRevB.62.11778, KleinNpj2019}. \n \nTo find the correct cutoff of the $s$-wave pole, Eq.\\ \\eqref{sigmaattsubleading_1}, evaluation of the one-loop diagram is insufficient, and one \n needs to re-evaluate the full backscattering amplitude $\\Gamma_{\\text{bs}}$ using $\\Pi_\\text{pp}= GG + FF$. \n The key new effect coming from this calculation is the scattering of a quasiparticle \n by massless (Goldstone) phase fluctuations of the superconducting order parameter. 
Evaluating the $s$-wave component of $\\Gamma_{\\text{bs}}$ and substituting the result into the self-energy, we obtain\n\\begin{align}\n\\label{Imsigmas}\n-\n \\text{Im}\\left[\\Sigma_\\text{bs;a}^{R}(\\omega)\\right] \\sim \n \\frac{\\epsilon^2}{\\tilde{\\lambda}} \\frac{\\Delta_0^2}{{\\omega_{\\text{FL}}}} \\times \\theta(\\omega - \\Delta_0)\\ .\n\\end{align}\nThis contribution to the self-energy starts at $\\omega = \\Delta_0$, since the phase mode is gapless. \n For $1- \\ell \\simeq \\tilde{\\lambda}$, i.e., $\\omega \\gtrsim \\Delta_0$, this self-energy \nmatches with the $s$-wave piece $\\omega^2\/{\\omega_{\\text{FL}}} \\times \\epsilon^2\/(1- \\ell)$, obtained in the normal state calculation, Eq.\\ \\eqref{sigmaattsubleading_1}.\n A similar expression for the self-energy at $T=0$ in a 3D superfluid has recently been computed in Ref.~\\cite{PhysRevLett.124.073404}, building on older work of Ref.\\ \\cite{haussmann1993crossover}. The contribution from the self-energy to the density of states of an $s$-wave superconductor has been obtained in Refs.\n \\ \\cite{PhysRevB.42.10211, Larkin2008}.\n \n\n\nCollecting the results, we find the full self-energy from backscattering in the form \n\\begin{widetext}\n\\begin{align} \\label{ImsigmafinalresultN}\n&0< \\ \\ell \\lesssim 1- \\mathcal{O}(\\tilde{\\lambda}): \\quad &&\\text{Im}\\left[\\Sigma_{\\text{bs;a}}^R(\\omega)\\right] =\n-\\frac{\\omega^2}{\\pi {\\omega_{\\text{FL}}}} f^*_\\text{bs;a}(\\ell), \\quad\n f^*_\\text{bs;a}(\\ell) \\simeq \\frac{ (\\ell - 1)\\ln^2(1-\\ell)}{\\ell} + 2\\text{Li}_{2}(\\ell) + \\frac{\\epsilon^2}{4(1-\\ell)}, \\\\ &\n1- \\mathcal{O}(\\tilde{\\lambda})\\lesssim \\ \\ell < 1: \\quad &&\\text{Im}\\left[\\Sigma_{\\text{bs;a}}^R(\\omega)\\right] = - \\frac{\\Delta_0^2} {{\\omega_{\\text{FL}}}} \\tilde{\\lambda} \\ln\\left( \\frac{{\\omega_{\\text{FL}}}}{\\Delta_0}\\right) \\times \\sqrt{ \\frac{\\omega - 3\\Delta_0}{ \\Delta_0}} \\theta(\\omega - 3\\Delta_0) - 
\\frac{\\epsilon^2}{\\tilde{\\lambda}} \\frac{\\Delta_0^2}{{\\omega_{\\text{FL}}}} \\times \\theta(\\omega - \\Delta_0) .\n\\label{ImsigmafinalresultS}\n\\end{align}\n\\end{widetext}\n\n\nWe sketch $\\text{Im}[\\Sigma_{\\text{bs;a}}^R(\\omega)]\/\\omega^2$ in Fig.\\ \\ref{interpolfig}. In panel (a) we set $\\epsilon < \\tilde{\\lambda}$. In this situation, the contribution from all angular channels dominates over the $s$-wave part for all $\\ell$. In panel (b) we set $\\epsilon > \\sqrt{\\tilde{\\lambda}}$. Here the contribution from the $s$-wave channel is the dominant one. We see that in this situation Im$[\\Sigma_{\\text{bs;a}}^R(\\omega)]\/\\omega^2$ has a sharp peak at $\\ell \\leq 1$. This can be directly verified in ARPES experiments.\n\nNote that while \n the imaginary part of the self-energy in real frequencies\n %\n vanishes at $|\\omega| < \\Delta_0$, the Matsubara self-energy (which can be obtained from Quantum Monte Carlo) is non-zero.\n At $\\omega_m \\ll \\Delta_0$, $\\Sigma_{\\text{bs;a}}(\\omega_m)$ scales as \n \n $\\omega_m\/\\Delta_0$. At larger $\\omega_m \\sim \\Delta_0$, $\\Sigma_{\\text{bs;a}}(\\omega_m)$ passes through a maximum.\n \n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=.95\\textwidth]{fig10.pdf}\\caption{Backscattering self-energy divided by $\\omega^2$ as a function of the logarithmic variable $\\ell$. Blue curves are the normal state result (Eq.\\ \\eqref{ImsigmafinalresultN} and Fig.\\ \\ref{fattractiveplot}), green curves correspond to the superconducting result, Eq.\\ \\eqref{ImsigmafinalresultS}. The black line is a possible interpolation representing the correct physical backscattering rate. (a) Sum of angular momentum channels dominates; used parameters: $\\epsilon = 0.05, \\tilde{\\lambda} = 0.1$. (b) $s$-wave channel dominates; used parameters: $\\epsilon = 0.3, \\tilde{\\lambda} = 0.05$.
}\n\\label{interpolfig}\n\\end{figure*}\n\n\n\n\n\\section{Conclusions}\n\\label{concsec}\n\nIn this work we computed the fermionic self-energy in a Fermi liquid near a nematic QCP. We considered the \n model of fermions interacting with soft fluctuations of a nematic order parameter, and extended it to $N$ fermionic flavors. The leading contributions to the self-energy at $N \\to \\infty$ come from planar diagrams, as was determined before at a QCP. In a Fermi liquid, the contributions from planar diagrams contain series of logarithms. We identified the \nleading logarithmic contributions at all orders as coming from the fully dressed backscattering amplitude, i.e., \nthey describe one-dimensional scattering processes. We further argued that the subleading contributions (the terms with a smaller power of $\\ln(\\omega)$ at each loop order) come from scattering with a finite momentum transfer counted from either backscattering or forward scattering.\n \nWe found that the logarithmic series are non-geometric and summed them up using several computational tricks. \nWe first considered a toy model with repulsive pairing interaction, and then a realistic model with attractive pairing interaction. For the latter case we found two contributions to the self-energy: one is the combined contribution from multiple pairing channels with near-equal attraction, and the second is the special contribution from the $s$-wave pairing channel, where the pairing interaction is a bit larger than in the other channels. We first computed the two terms assuming the normal state and found that perturbation theory holds only above a certain frequency, comparable to the superconducting gap $\\Delta_0$. At this edge, both contributions become non-analytic and the one from the $s$-wave channel diverges. We then extended the analysis to include superconductivity at $T=0$ and showed that the would-be divergences get regularized.
In particular, the divergence in the $s$-wave channel is regularized by scattering of a fermion by phase fluctuations of the superconducting order parameter. We obtained the full expression for the imaginary part of the retarded self-energy in real frequencies (the scattering rate) at $\\omega \\sim \\Delta_0$ and argued that, depending on parameters, $\\text{Im}[\\Sigma^R(\\omega)]\/\\omega^2$ either evolves smoothly at $\\omega \\geq \\Delta_0$ or has a peak. This peak can be detected in ARPES measurements.\n\nThe unconventional behavior that we found stems from the competition between different pairing channels. In our case this holds near a nematic QCP, but similar behavior is expected whenever multiple pairing channels compete for other reasons, such as the competition between $s$- and $d$-wave channels in iron-based superconductors \n \\cite{PhysRevLett.109.087001, PhysRevLett.107.117001, HOSONO2015399}. Adapting our computations to a \n model that allows for only a finite number of pairing channels, or that selects only pairing channels with even angular momentum, yielding singlet superconductivity, could be an interesting future project. In addition, a recent Monte Carlo study of a quantum critical metal \\cite{grossman2020specific} observed a strong impact of pairing fluctuations on thermodynamic observables (e.g., the specific heat), and it would be worthwhile to elucidate how the competition between different pairing channels shows up in thermodynamics.\n \n{As a final remark, we emphasize that our analysis has been restricted to the isotropic case. On a lattice, the nematic form factor $f$ (which we set to one) becomes important, separating hot (where $f \\simeq 1$) and cold ($f \\ll 1$) regions on the Fermi surface. In the normal state $(\\omega \\gg \\Delta)$, our results carry over to the hot regions.
Deep in the superconducting state ($\\omega < \\Delta$), the gap function will be maximal in the hot regions, and vanish in the cold ones, similar to Ref.\\ \\cite{PhysRevB.98.220501}. Accordingly, in this limit our results for the scattering rate have to be modified to account for the near-gapless regions. }\n\n\n\\section{Acknowledgments}\n\nWe thank E.~Berg, M.~Hecker, Y.~B.~Kim, A.~Klein, S.-S. Lee and Y.~Schattner for useful discussions. D.P.\\ is grateful to the Max-Planck-Institute for the Physics of Complex Systems Dresden (MPIPKS) for hospitality during the initial stage of this project. The work by A.V.C.\\ was supported by the Office of Basic Energy Sciences, U.S. Department of Energy, under award DE-SC0014402. A.K. was supported by NSF grant DMR-2037654.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}