\section{Introduction}
\label{sec:intro}

Over the last few decades the standard model has been subjected to intense scrutiny in the search for traces of new physics. However, up to the energy scales probed so far, and except for some occasional tensions, it has proven to be an extremely successful theory. Moreover, the latest results from the LHC not only seem to confirm the Higgs-like nature of the newly-found scalar~\cite{Aad:2012tfa,Chatrchyan:2012ufa} but also steadily widen the gap between the standard model particles and the scale of new physics $\Lambda$.

In the absence of new heavy particles in direct searches, we should expect new physics to show up first through virtual effects. These can generically be encoded as anomalous couplings for the different sectors of the theory: gauge-fermion interactions, gauge-boson interactions (oblique, triple-gauge and quartic) and scalar interactions. Given the large energy gap between the electroweak and the new-physics scale, an effective field theory (EFT) treatment becomes the best strategy to parametrize the new physics effects in a model-independent way. The main virtue of the EFT treatment is that the standard model symmetries are automatically implemented in the anomalous couplings.
The resulting constraints from $SU(2)_L\times U(1)_Y$ symmetry make it transparent that (i) the number of independent parameters is typically smaller than the number of couplings; (ii) arbitrarily setting some of the couplings to zero in experimental analyses is in general inconsistent with the electroweak symmetry; (iii) the naive scaling with energy of the form factors is ameliorated by ($SU(2)_L\times U(1)_Y$)-induced cancellations. Therefore, adopting an EFT becomes not a matter of choice, but the only way to ensure consistency at the field-theoretical level. The advantages of the EFT approach have recently been re-emphasized in \cite{Degrande:2012wf,Degrande:2013mh}.

Global fits to the electroweak data using an EFT framework have been performed by several groups in the past. Unfortunately, the global analysis contains too many parameters, and the cross-correlations are too strong, to obtain an informative fit~\cite{Han:2004az,Cacciapaglia:2006pk}. As an alternative, the number of coefficients is commonly limited to a reduced set, inspired by the results of different models, and fits have been performed on this basis. A prototype of this approach is the well-known $S$, $T$, $U$ parameter analysis of \cite{Peskin:1991sw}.

In this paper we will explicitly show that the above shortcoming of the global fit to EFT couplings can be ameliorated by studying individual processes at high energies ($v\ll \sqrt{s}\ll \Lambda$). As an example, we will present a detailed EFT-based study of $W^+W^-$ production at linear colliders. $e^+e^-\to W^+W^-$ has been the benchmark process in the study of charged triple-gauge corrections, first at LEP \cite{Alcaraz:2006mx} and subsequently for future linear collider facilities \cite{Gaemers:1978hg,Andreev:2012cj}.
$W^+W^-$ production at the LHC has been considered for instance in \cite{Weiglein:2004hn,Baur:1988qt,:1999fr,ThurmanKeup:2001ka}. However, although several studies in the literature have emphasized the need for an EFT approach to triple-gauge couplings in $e^+e^-\to W^+W^-$ \cite{Degrande:2012wf,Degrande:2013mh}, \cite{Falk:1991cm,He:1997zm}, a complete analysis is still missing.

The present analysis will be performed in the full nonlinear EFT basis recently studied in \cite{Buchalla:2012qq}. Our final results will provide expressions for the (initial and final state) polarized cross sections in the large-$s$ expansion, which is an excellent approximation for the projected energies at future linear colliders. The leading corrections will grow with $s$ relative to the standard model results, reflecting the fact that the nonlinear effective theory violates unitarity in the UV. The $s/v^2$ enhancement at energies where the EFT is still valid improves the visibility of small new physics coefficients. Indeed, the new physics effects at, say, $\sqrt{s}=800$~GeV could typically be as large as $20\%$.

One of the interesting properties of $e^+e^-\to W^+W^-$ is that, up to tiny mass corrections, it is independent of couplings to a physical Higgs sector. We will show that this is indeed the case by comparing our results with the linear EFT basis of \cite{Buchmuller:1985jz,Grzadkowski:2010es}. Besides the results for the cross sections, our main findings can be summarized as follows:
\begin{itemize}
\item Despite the sizable number of operators contributing to the process at next-to-leading order (NLO), the final result for new physics effects in the large-$s$ limit can be encoded in terms of just three parameters. These can be expressed as the corrections to the left- and right-handed gauge-fermion vertices.
\n\\item Three of the gauge-fermion operators and the three leading \n(C, P and CP-con\\-ser\\-ving) triple-gauge operators are related by \nfield redefinitions. Therefore, in the case of $e^+e^-\\to W^+W^-$,\nomitting the gauge-fermion operators is not an approximation but \nan exact field-theoretical result: they can be traded for triple-gauge \noperators and vice versa, depending on the chosen operator basis. We stress \nthat this is because only three independent gauge-fermion couplings enter\nthe process $e^+e^-\\to W^+W^-$. In general, there are many more\ngauge-fermion operators and it is not possible to eliminate all of them.\n\\end{itemize} \nThe last point above implies that statements about gauge-fermion or \ntriple-gauge operators {\\it{per se}} are basis-dependent and therefore \nill-defined. For instance, in the basis where gauge-fermion operators are \nkept, the electroweak fit~\\cite{Han:2004az} does not support the common \nclaim that \nthey are tightly constrained. Furthermore, our analysis contradicts the \nstatement that $W^+W^-$ production directly tests triple-gauge corrections. \nRather, what one finds is that at large-$s$ one can put bounds on \ngauge-fermion couplings or {\\it{equivalently}} on triple-gauge couplings, \nsince they are not independent. \n\nThe existence of field-theoretical relations binding gauge-fermion, \ntriple-gauge and oblique operators raises the question of which basis should \nbe preferred for experimental analyses of electroweak physics. \nIn the particular case of $e^+e^-\\to W^+W^-$ the possibility of eliminating \nthe gauge-fermion operators altogether might suggest itself. However, in view \nof the general electroweak fit, it seems more natural to eliminate triple-gauge \noperators and keep the full set of gauge-fermion operators. 
As we will show, \nthe emerging picture in this basis turns out to be rather simple: only a \nsingle triple-gauge operator appears (${\\cal O}_{XU3}$ in (\\ref{oxu}) below), \nwhich is both parity and isospin-breaking, and therefore expected to be \nnumerically small.\\footnote{CP-violating triple-gauge operators are also \npresent but do not interfere with the standard model in the cross sections.} \nAdditionally, \nin the large-$s$ limit oblique corrections and the surviving triple-gauge \noperator can be shown to be generically subleading, such that the leading \nlarge-$s$ contribution naturally singles out gauge-fermion operators. \n\nThese rather simple and counterintuitive results follow from carefully \neliminating redundant operators and therefore stress the importance of \nworking with a complete and minimal basis in EFT-based analyses. \nComments on how this picture would generalize to hadron colliders \nwill be made but details will be left to future work. \n\nThis paper is organized as follows: in section~2 we will briefly review \nthe EFT of the standard model at NLO and fix our notation and conventions. \nIn section~3 \nwe will apply the EFT formalism to $e^+e^-\\to W^+W^-$, discussing in detail \ndirect contributions and parameter redefinitions. In section~4 we collect the \nresults for the differential cross sections for the different initial and \nfinal state polarizations. The issue of redundant operators and \nchoice of basis is addressed in section~5. A complementary view of the \nlarge-$s$ limit from the perspective of the equivalence theorem \nis given in section~6. \nIn section~7 we discuss the case of a linearly-realized EFT.\nIn order to get an estimate of the expected effects at linear \ncolliders, in section~8 the size of EFT couplings is estimated \nfrom different benchmark UV completions.\nIn section~9 we briefly comment on $W^+W^-$ production at the LHC. 
Conclusions are given in section~10, while technical details are relegated to an Appendix.

\section{Electroweak chiral Lagrangian at NLO}
\label{sec:nlou}

The starting point of our analysis is the well-known leading-order chiral Lagrangian of the electroweak standard model. To define our notation we quote here the terms of the leptonic sector relevant for $e^+e^-\to W^+W^-$. They read
\begin{equation}\label{lsmlo}
{\cal L}_{\rm LO} = -\frac{1}{2}\langle W_{\mu\nu}W^{\mu\nu}\rangle
-\frac{1}{4} B_{\mu\nu}B^{\mu\nu} + \frac{v^2}{4}\ \langle D_\mu U^\dagger D^\mu U\rangle
+\bar l_L i\!\not\!\! D\, l_L + \bar e_R i\!\not\!\! D\, e_R
\end{equation}
Here and in the following the trace of a matrix $M$ is written as $\langle M\rangle$. The doublet of left-handed leptons is denoted by $l_L=(\nu_L,e_L)^T$, the right-handed electron by $e_R$, and we focus our attention on the first-generation fermions. The covariant derivatives of the fermions are
\begin{equation}\label{dcovf}
D_\mu l_L = \partial_\mu l_L +i g W_\mu l_L -\frac{i}{2} g' B_\mu l_L ,
\qquad
D_\mu e_R =\partial_\mu e_R - i g' B_\mu e_R
\end{equation}
The electron mass is negligible and the associated Yukawa terms have been omitted from (\ref{lsmlo}). Couplings to a physical Higgs field do not play a role in $e^+e^-\to W^+W^-$ and are likewise omitted from the Lagrangian. The Goldstone bosons of electroweak symmetry breaking are represented by the matrix field
\begin{equation}\label{uudef}
U=\exp(2i\Phi/v),\qquad
\Phi=\varphi^a T^a=\frac{1}{\sqrt{2}}\left(
\begin{array}{cc}
\frac{\varphi^0}{\sqrt{2}} & \varphi^+\\
\varphi^- & -\frac{\varphi^0}{\sqrt{2}}
\end{array}\right)
\end{equation}
with $T^a=T_a$ the generators of $SU(2)$.
The $U$-field transforms as
\begin{equation}\label{uglgr}
U\rightarrow g_L U g^\dagger_R,\qquad g_{L,R}\in SU(2)_{L,R}
\end{equation}
where $g_L$ and the $U(1)_Y$ subgroup of $g_R$ are gauged, so that the covariant derivative of $U$ is given by
\begin{equation}\label{dcovu}
D_\mu U=\partial_\mu U+i g W_\mu U -i g' B_\mu U T_3
\end{equation}

The effective Lagrangian (\ref{lsmlo}) describes physics at the electroweak scale $v=246\,{\rm GeV}$, assumed to be small in comparison with a new physics scale $\Lambda$. This Lagrangian is non-renormalizable in general, except when a Higgs field $h$ is introduced with specific couplings, in which case the theory reduces to the conventional standard model (see e.g. \cite{Contino:2010rs} for a review). In the general case, additional terms will arise beyond the lowest order from the dynamics of electroweak symmetry breaking at the ${\rm TeV}$ scale. These subleading terms were first considered in \cite{Longhitano:1980iz,Appelquist:1994qz}.
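The group-theoretical content of the Goldstone matrix introduced above is easy to verify numerically: a hermitian, traceless $\Phi$ exponentiates to an $SU(2)$ element. A minimal check (our own illustration, not part of the analysis; numpy and scipy assumed, function names hypothetical):

```python
import numpy as np
from scipy.linalg import expm

v = 246.0  # electroweak scale in GeV

def goldstone_matrix(phi0, phip):
    """Phi = phi^a T^a as in the text: phi0 real, phi^- = (phi^+)^*,
    so Phi is hermitian and traceless."""
    return (1.0 / np.sqrt(2.0)) * np.array(
        [[phi0 / np.sqrt(2.0), phip],
         [np.conj(phip), -phi0 / np.sqrt(2.0)]], dtype=complex)

# U = exp(2i Phi / v) for sample (arbitrary) field values
Phi = goldstone_matrix(10.0, 3.0 + 4.0j)
U = expm(2j * Phi / v)

assert np.allclose(Phi, Phi.conj().T)            # hermitian
assert np.isclose(np.trace(Phi), 0.0)            # traceless
assert np.allclose(U.conj().T @ U, np.eye(2))    # U unitary
assert np.isclose(np.linalg.det(U), 1.0)         # det U = 1: U in SU(2)
```

Hermiticity and tracelessness of $\Phi$ are exactly the conditions that make $U$ unitary with unit determinant, i.e. an element of $SU(2)$.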
A complete list of all NLO operators in this framework, based on a systematic power counting, has recently been given in \cite{Buchalla:2012qq}. Using the notation of this paper, the NLO operators relevant for $e^+e^-\to W^+W^-$ can be written as
\begin{equation}\label{lnloco}
{\cal L}_{\rm NLO} =\beta_1 {\cal O}_{\beta_1} +
\sum_{i=1}^{6} C_{Xi} {\cal O}_{XUi} +
\sum_{i=7}^{10} C_{Vi} {\cal O}_{\psi Vi} +
C^*_{V9} {\cal O}^\dagger_{\psi V9} + \frac{C_{4f}}{\Lambda^2} {\cal O}_{4f} +
\sum_{i=1}^{2} \frac{C_{Wi}}{\Lambda^2} {\cal O}_{Wi}
\end{equation}
with operators ${\cal O}_k$ specified in (\ref{ob1}) -- (\ref{owi}). The complete basis of NLO operators \cite{Buchalla:2012qq} also contains the terms $\bar e_L e_R W^+_\mu W^{-\mu}$ and $\bar e_L\sigma^{\mu\nu} e_R W^+_\mu W^-_\nu$, which could in principle contribute to $e^+e^-\to W^+W^-$. Due to the chirality flip in the electron current, the coefficients of these operators can be expected to be proportional to the electron Yukawa coupling and thus strongly suppressed. In addition, the chirality-changing currents do not interfere with the vectorial currents of the leading-order amplitude. Those operators therefore give no first-order correction to the $e^+e^-\to W^+W^-$ cross sections and we have omitted them from (\ref{lnloco}). We have included the 4-fermion operator ${\cal O}_{4f}$, which contributes only indirectly through the renormalization of the Fermi constant $G_F$. Other 4-fermion operators from \cite{Buchalla:2012qq} do not give rise to first-order corrections to $e^+e^-\to W^+W^-$ cross sections and have been neglected.

The operators ${\cal O}_{Wi}$ (see (\ref{owi}) below) are strictly speaking terms that appear only at next-to-next-to-leading order (NNLO) in the effective Lagrangian.
Their coefficients are generally loop-induced \cite{Arzt:1994gp} and count as $C_{Wi}\sim 1/(16\pi^2)\sim v^2/\Lambda^2$, which multiplies the explicit prefactor $1/\Lambda^2$ in the last term of (\ref{lnloco}). We have included them here in order to facilitate the transition to the basis of operators within the framework of a linearly transforming Higgs field, to be considered in section~\ref{sec:nlophi}. In that case they belong to the full list of operators of dimension 6, and we include them for completeness in our analysis. In the present context, and working consistently to NLO, the coefficients $C_{Wi}$ may be put to zero.

All operators in the Lagrangian (\ref{lnloco}) are hermitian and have real coefficients, except ${\cal O}_{\psi V9}$. They have been known since the work of \cite{Longhitano:1980tm,Appelquist:1984rr,Appelquist:1993ka}. However, the basis of operators used there contains redundant terms, which can be eliminated using the equations of motion \cite{Buchalla:2012qq,Nyffeler:1999ap,Grojean:2006nn}.

The operators in (\ref{lnloco}) have the following explicit form, where the second expression in each case refers to unitary gauge with $U=1$:
\begin{equation}\label{ob1}
{\cal O}_{\beta_1} = v^2 \langle U^\dagger D_\mu U T_3\rangle^2
= -M^2_Z Z_\mu Z^\mu
\end{equation}

\begin{eqnarray}\label{oxu}
{\cal O}_{XU1} &=& g'g\ B_{\mu\nu}\ \langle U^\dagger W^{\mu\nu} U T_3\rangle
 =\frac{g'g}{2}\ B^{\mu\nu}W_{\mu\nu}^{3} \nonumber\\
{\cal O}_{XU2} &=& g^2\ \langle U^\dagger W_{\mu\nu} U T_3\rangle\
 \langle U^\dagger W^{\mu\nu} U T_3\rangle
 =\frac{g^2}{4}W_{\mu\nu}^{3}W^{3\mu\nu}\nonumber\\
{\cal O}_{XU3} &=& g\ \varepsilon^{\mu\nu\lambda\rho}\
 \langle U^\dagger W_{\mu\nu} D_\lambda U\rangle\
 \langle U^\dagger D_\rho U T_3\rangle \nonumber\\
 &=&
\\frac{g}{4}\\varepsilon^{\\mu\\nu\\lambda\\rho}\n\\left[gW_{\\mu\\nu}^{a}W_{\\lambda}^{a}-g'W_{\\mu\\nu}^{3}B_{\\lambda}\\right]\n\\left[g'B_{\\rho}-gW_{\\rho}^{3}\\right] \\nonumber\\\\\n{\\cal O}_{XU4} &=& g' g\\ \\varepsilon^{\\mu\\nu\\lambda\\rho}\\\n B_{\\mu\\nu}\\ \\langle U^\\dagger W_{\\lambda\\rho} U T_3\\rangle \n = \\frac{g'g}{2}\\varepsilon^{\\mu\\nu\\lambda\\rho}\n B_{\\mu\\nu}W_{\\lambda\\rho}^{3} \\nonumber\\\\\n{\\cal O}_{XU5} &=& g^2\\ \\varepsilon^{\\mu\\nu\\lambda\\rho}\\ \n\\langle U^\\dagger W_{\\mu\\nu} U T_3\\rangle\\ \\langle U^\\dagger W_{\\lambda\\rho} U T_3\\rangle \n = \\frac{g^2}{4}\\varepsilon^{\\mu\\nu\\lambda\\rho}\n W_{\\mu\\nu}^{3}W_{\\lambda\\rho}^{3}\\nonumber\\\\\n{\\cal O}_{XU6} &=& g\\ \\langle U^\\dagger W_{\\mu\\nu} D^\\mu U\\rangle\\ \n\\langle U^\\dagger D^\\nu U T_3\\rangle \\nonumber\\\\\n &=& \\frac{g}{4} \\left[gW_{\\mu\\nu}^{a}W^{a\\mu}-\n g' W_{\\mu\\nu}^{3}B^{\\mu}\\right]\\left[g' B^{\\nu} - g W^{3\\nu} \\right]\n\\end{eqnarray}\n\n\n\\begin{eqnarray}\\label{opsiv}\n{\\cal O}_{\\psi V7} &=& \\bar l_L\\gamma^\\mu l_L\\ \\langle U^\\dagger iD_\\mu UT_3\\rangle \n=-\\displaystyle\\frac{\\sqrt{g^2+g'^2}}{2}\\bar l_L\\gamma^\\mu l_L Z_{\\mu} \\nonumber\\\\\n{\\cal O}_{\\psi V8} &=& \\bar l_L\\gamma^\\mu UT_3U^\\dagger l_L\\ \n\\langle U^\\dagger iD_\\mu UT_3\\rangle \n=-\\displaystyle\\frac{\\sqrt{g^2+g'^2}}{2}\\bar l_L\\gamma^\\mu T_3 l_L Z_{\\mu}\n\\nonumber\\\\\n{\\cal O}_{\\psi V9} &=& \\bar l_L\\gamma^\\mu U P_{12} U^\\dagger l_L\\ \n\\langle U^\\dagger iD_\\mu U P_{21}\\rangle \n=-\\displaystyle\\frac{g}{\\sqrt{2}} \\bar\\nu_L \\gamma^{\\mu} e_L W_{\\mu}^+ \\nonumber\\\\\n{\\cal O}_{\\psi V10} &=& \\bar e_R\\gamma^\\mu e_R\\ \\langle U^\\dagger iD_\\mu UT_3\\rangle\n=-\\displaystyle\\frac{\\sqrt{g^2+g'^2}}{2}\\bar e_R\\gamma^\\mu e_R Z_{\\mu}\n\\end{eqnarray}\n\n\\begin{eqnarray}\\label{op4f}\n{\\cal O}_{4f} &=& \\frac{1}{2} ({\\cal O}_{LL5}- 4 {\\cal O}_{LL15})\n=\\bar e_L\\gamma^\\mu\\mu_L\\, \\bar\\nu_{\\mu 
L}\\gamma_\\mu\\nu_{e L} + h.c.\n\\end{eqnarray}\nwhere the appropriate flavour structure is understood for\n${\\cal O}_{LL5}$, ${\\cal O}_{LL15}$ from \\cite{Buchalla:2012qq}.\n\nIn (\\ref{opsiv}) we have used the definitions $P_{12}\\equiv T_1 + i T_2$,\n$P_{21}\\equiv T_1 - i T_2$. \nIt is convenient to work with the following linear combinations\nof operators ${\\cal O}_{\\psi V7,8}$\n\\begin{equation}\\label{opsivpm}\n{\\cal O}_{\\psi V\\pm}\\equiv \\frac{1}{2}{\\cal O}_{\\psi V7}\\pm {\\cal O}_{\\psi V8}\n\\end{equation}\nwhose coefficients become\n\\begin{equation}\\label{cvpm}\nC_{V\\pm}\\equiv C_{V7}\\pm \\frac{1}{2} C_{V8}\n\\end{equation}\nOnly one of these coefficients, $C_{V-}$, appears in the amplitudes for \n$e^+e^-\\to W^+W^-$ within our approximations. This is most clearly seen in \nunitary gauge, where ${\\cal O}_{\\psi V-}$ couples the $Z$ to \nelectrons and ${\\cal O}_{\\psi V+}$ to neutrinos.\n\nFinally, the NNLO terms ${\\cal O}_{Wi}$ are\n\\begin{eqnarray}\\label{owi}\n{\\cal O}_{W1} &=& g^3 \\varepsilon^{abc} \n W^{a\\nu}_\\mu W^{b\\lambda}_\\nu W^{c\\mu}_\\lambda \\nonumber\\\\\n{\\cal O}_{W2} &=& g^3 \\varepsilon^{abc} \n \\tilde W^{a\\nu}_\\mu W^{b\\lambda}_\\nu W^{c\\mu}_\\lambda \n\\end{eqnarray}\nwith \n\\begin{equation}\\label{wtilde}\n\\tilde W^a_{\\mu\\nu}=\\frac{1}{2}\\varepsilon_{\\mu\\nu\\rho\\sigma} W^{a,\\rho\\sigma}\\, ,\n\\qquad \\varepsilon^{0123}=-1\n\\end{equation}\n\n\n\\section{Anomalous couplings}\n\\label{sec:ancoup}\n\nThe NLO terms in the effective Lagrangian modify the\nlowest order vertices of the standard model. 
Their effect\ncan be cast in the form of anomalous couplings.\n\nFor the triple-gauge vertex (TGV), coupling a virtual, neutral vector boson\n$V$ to a $W^+W^-$ pair in the final state, the Feynman rule can be written as \n\\begin{equation}\\label{tgvgam1}\nV^\\rho(k)\\to W^{-\\mu}(p)W^{+\\nu}(q)\\, :\\qquad \n-i \\left\\{\\begin{array}{c} g c_Z\\\\ g s_Z\\end{array}\\right\\}\n\\Gamma^{\\mu\\nu\\rho}_V(p,q;k)\\, ,\\quad \nV=\\left\\{\\begin{array}{c} Z\\\\ A \\end{array}\\right.\n\\end{equation}\nwhere \\cite{Appelquist:1993ka,Hagiwara:1986vm} \n\\begin{eqnarray}\\label{tgvgam2}\n\\Gamma^{\\mu\\nu\\rho}_V(p,q;k) &=& g^V_1 (p-q)^\\rho g^{\\mu\\nu} +\n(g^V_1+\\kappa_V)(k^\\mu g^{\\nu\\rho}-k^\\nu g^{\\mu\\rho}) \\nonumber\\\\\n&& +i g^V_4 (k^\\mu g^{\\nu\\rho}+k^\\nu g^{\\mu\\rho}) \n-i g^V_5 \\varepsilon^{\\mu\\nu\\lambda\\rho} (p-q)_\\lambda \n+\\tilde\\kappa_V \\varepsilon^{\\mu\\nu\\lambda\\rho} k_\\lambda \\nonumber\\\\\n&& -\\frac{\\lambda_V}{\\Lambda^2} (p-q)^\\rho k^\\mu k^\\nu\n-\\frac{\\tilde\\lambda_V}{\\Lambda^2} (p-q)^\\rho \n\\varepsilon^{\\mu\\nu\\sigma\\tau} p_\\sigma q_\\tau\n\\end{eqnarray}\nHere $s_Z$, $c_Z$ are, respectively, sine and cosine of the weak\nmixing angle in the $Z$-standard definition ($\\alpha=\\alpha(M_Z)$)\n\\begin{equation}\\label{szcz}\ns^2_Z c^2_Z\\equiv\\frac{\\pi\\alpha}{\\sqrt{2} G_F M^2_Z}\n\\end{equation}\nand $g$ is the $SU(2)_L$ gauge coupling, where \n$g s_Z = e=\\sqrt{4\\pi\\alpha}$. The anomalous-coupling parameters\nin (\\ref{tgvgam2}) encode deviations from the standard model, in which \n$g^V_1=\\kappa_V=1$ and $g^V_{4,5}=\\tilde\\kappa_V=\\lambda_V=\\tilde\\lambda_V=0$. 
Similarly, the gauge-fermion interactions can be parametrized through the Feynman rules
\begin{equation}\label{gfvert}
\begin{array}{rcccc}
\bar\nu e_{L,R}W\, : & \hspace*{1cm} & -\frac{ig}{\sqrt{2}}\kappa_c\gamma^\mu P_L
 & \hspace*{1cm} & 0 \\
\bar e e_{L,R}Z\, : & & \frac{ig}{2c_Z}(\kappa_1-2 s^2_Z\kappa_2)\gamma^\mu P_L
 & & \frac{ig}{2c_Z}(-2 s^2_Z\kappa_2)\gamma^\mu P_R
\end{array}
\end{equation}
for left- and right-handed electrons, respectively, with the corresponding projectors $P_{L,R}=(1\mp\gamma_5)/2$. The couplings to the photon ($\bar ee_{L,R}A$) are not modified by anomalous couplings because of electromagnetic gauge invariance. The $\kappa_i$ in (\ref{gfvert}) parametrize deviations from the standard model, in which $\kappa_c=\kappa_1=\kappa_2=1$.

Working in the framework of an effective theory, the anomalous couplings should be expressed in terms of the operator coefficients in the effective Lagrangian (\ref{lnloco}). The operators ${\cal O}_{\beta_1}$, ${\cal O}_{XU1}$ and ${\cal O}_{XU2}$ contain terms bilinear in the gauge fields $Z$ and $A$, which can be absorbed into the canonical kinetic terms through the renormalizations \cite{Holdom:1990xq} (see also \cite{Appelquist:1993ka})
\begin{equation}\label{zaren}
Z_0=(1+\delta_Z)Z,\qquad A_0=(1+\delta_A)A + \delta_{AZ} Z,\qquad
M_{Z0}=(1-\delta_{M_Z})M_Z
\end{equation}
Here the subscript $0$ denotes fields and parameters in the absence of any NLO terms in the Lagrangian. We also have
\begin{equation}\label{gse0}
g_0 s_0=e_0,\qquad e_0=(1-\delta_A)e,\qquad G_{F0}=(1-2\delta_G)G_F
\end{equation}
and, from (\ref{szcz}),
\begin{equation}\label{s0c0}
s_0 c_0 = s_Z c_Z (1-\delta_A+\delta_{M_Z}+\delta_G)
\end{equation}

Corrections to the Fermi constant come from ${\cal O}_{\psi V9}$ and ${\cal O}_{4f}$.
They lead to
\begin{equation}\label{deltag}
\delta_G=\frac{1}{2} {\rm Re}~(C^e_{V9}+C^\mu_{V9})-
 \frac{v^2}{4\Lambda^2} C_{4f} =C_{V9} - \frac{v^2}{4\Lambda^2} C_{4f}
\end{equation}
The first expression allows for general, flavour non-universal and complex coefficients of ${\cal O}_{\psi V9}$. If these generalizations are dropped, $\delta_G$ simplifies to the second expression in (\ref{deltag}).

Within the basis of operators in (\ref{ob1}) -- (\ref{opsiv}) the anomalous couplings can finally be expressed as
\begin{equation}\label{gvkap1}
g^Z_1 = 1+\left[\frac{\beta_1-\delta_G +C_{X1} e^2/c^2_Z}{c^2_Z-s^2_Z}\right]
+3\frac{e^2}{s^2_Z}\frac{k^2}{\Lambda^2}C_{W1},
\qquad g^A_1 = 1 +3\frac{e^2}{s^2_Z}\frac{k^2}{\Lambda^2}C_{W1}
\end{equation}
\begin{equation}\label{gvkap2}
\kappa_Z =
1+\left[\frac{\beta_1-\delta_G +C_{X1} e^2/c^2_Z}{c^2_Z-s^2_Z}\right] +
\frac{e^2}{c^2_Z} C_{X1}-\frac{e^2}{s^2_Z} C_{X2}
+ 3\frac{e^2}{s^2_Z}\frac{2M^2_W -k^2}{\Lambda^2}C_{W1}
\end{equation}
\begin{equation}\label{gvkap3}
\kappa_A = 1-\frac{e^2}{s^2_Z}(C_{X1}+C_{X2})
+ 3\frac{e^2}{s^2_Z}\frac{2M^2_W -k^2}{\Lambda^2}C_{W1}
\end{equation}
\begin{equation}\label{gvkap4}
g^Z_4 = \frac{e^2}{4 s^2_Z c^2_Z} C_{X6}, \qquad g^A_4 =0
\end{equation}
\begin{equation}\label{gvkap5}
g^Z_5 = -\frac{e^2}{2 s^2_Z c^2_Z}C_{X3},\qquad g^A_5 =0
\end{equation}
\begin{equation}\label{gvkap6}
\tilde\kappa_Z = 2\left(\frac{e^2}{c^2_Z}C_{X4}-\frac{e^2}{s^2_Z}C_{X5}\right)
-6\frac{e^2}{s^2_Z}\frac{M^2_W}{\Lambda^2}C_{W2}
\end{equation}
\begin{equation}\label{gvkap7}
\tilde\kappa_A = -2\frac{e^2}{s^2_Z}(C_{X4}+C_{X5})
-6\frac{e^2}{s^2_Z}\frac{M^2_W}{\Lambda^2}C_{W2}
\end{equation}
\begin{equation}\label{gvkap8}
\lambda_{Z,A} = 6\frac{e^2}{s^2_Z} C_{W1}, \qquad
\tilde\lambda_{Z,A} = 6\frac{e^2}{s^2_Z} C_{W2}
\end{equation}

\begin{eqnarray}
\kappa_c &=& 1 +\left[\frac{C_{X1}e^2 + c^2_Z(\beta_1-\delta_G)}{c^2_Z-s^2_Z}-
\frac{C_{X2} e^2}{2 s^2_Z}\right] + C_{V9}\\
\kappa_1 &=& 1+\left[\beta_1 -\delta_G\right] -C_{V-}+C_{V10}\\
\kappa_2 &=& 1+\left[\frac{\delta_G-\beta_1-C_{X1} e^2/s^2_Z}{c^2_Z-s^2_Z}\right]
 +\frac{1}{2s^2_Z} C_{V10}\label{kapc12}
\end{eqnarray}

The terms in (\ref{gvkap1}) through (\ref{kapc12}) that arise from renormalizing $A$, $Z$, $e$, $s_Z$, $c_Z$ are indicated by square brackets. The remaining corrections represent the direct effect of the NLO operators on the interaction vertices. Note that the coefficients $\beta_1$ and $\delta_G$ always appear in the combination $\beta_1 - \delta_G$.

\section{Cross sections}
\label{sec:crosec}

In the following we present cross-section formulas for $e^+e^-\to W^+W^-$, focusing on the new-physics corrections from the NLO Lagrangian (\ref{lnloco}). The amplitude is determined by the $s$-channel ($Z$, $\gamma$) and $t$-channel ($\nu$) exchange diagrams. Of particular interest for a future linear collider will be the limit of large centre-of-mass energy $\sqrt{s}$, defined by $v^2 \ll s\ll \Lambda^2$ \cite{Passarino:2012cb}. In this window $\sqrt{s}$ is considered to be much larger than the electroweak scale $v$, and also $M_{W,Z}$, but still smaller than the new-physics scale $\Lambda$ that determines the range of validity of the effective theory. With the inequality $M^2_{W,Z} \ll s$, the corrections to the cross sections can be expanded in inverse powers of $s$. Relative to the standard model, the potentially leading corrections grow as ${\cal O}(s)$, subleading terms are of ${\cal O}(1)$, whereas all further terms, suppressed as ${\cal O}(v^2/s)$ or higher, can be expected to be irrelevant in practice.

We provide results for cross sections with different polarizations of the initial and final state particles \cite{Ahn:1988fx}.
The case of left-handed and right-handed $e^-$ will be denoted by $LH$ and $RH$, respectively. For the $W^+W^-$ bosons we consider either longitudinal ($L$) or transverse polarization ($T$) for each, which leads to the cases $LL$, $TT$ and $LT$. For $LT$ the cross sections are the same whether $W^+$ or $W^-$ is longitudinally polarized ($LT=TL$). The polarized cross sections are quoted relative to their standard model expressions, where only the ${\cal O}(s)$-enhanced terms are given here. The corrections of ${\cal O}(1)$, $f^{LH}_{LL},\ldots$, can be found in the appendix.

\begin{figure}[htbp]
\centering
\includegraphics[width=12cm]{LHs.eps}
\caption{Energy dependence of scattering cross sections for left-handed electrons at $\cos\theta=0$ in units of $R=4\pi\alpha^2/3s$. The solid curves are, from top to bottom (at $\sqrt{s}=600\,{\rm GeV}$), the leading-order standard model results for unpolarized $W^+W^-$, and for $W$ polarizations $TT$, $LL$ and $LT$. The dashed curves are the corresponding results including leading new physics corrections. Note that these corrections are absent in the $TT$ case.}\label{fig:lhs}
\end{figure}

\begin{figure}[htbp]
\centering
\includegraphics[width=12cm]{RHs.eps}
\caption{Energy dependence of scattering cross sections for right-handed electrons at $\cos\theta=0$ in units of $R$. The solid curves are, from top to bottom (at $\sqrt{s}=600\,{\rm GeV}$), the leading-order standard model results for unpolarized $W^+W^-$, and for $W$ polarizations $LL$, $LT$ and $TT$. The dashed curves are the corresponding results including leading new physics corrections.}\label{fig:rhs}
\end{figure}

\begin{figure}[htbp]
\centering
\includegraphics[width=12cm]{LHa.eps}
\caption{Angular dependence of scattering cross sections for left-handed electrons at $s=(750\,{\rm GeV})^2$ in units of $R$.
The solid curves are, from top to bottom (at $\cos\theta=0$), the leading-order standard model results for unpolarized $W^+W^-$, and for $W$ polarizations $TT$, $LL$ and $LT$. The dashed curves are the corresponding results including leading new physics corrections. Note that these corrections are absent in the $TT$ case.}\label{fig:lha}
\end{figure}

\begin{figure}[htbp]
\centering
\includegraphics[width=12cm]{RHa.eps}
\caption{Angular dependence of scattering cross sections for right-handed electrons at $s=(750\,{\rm GeV})^2$ in units of $R$. The solid curves are, from top to bottom (at $\cos\theta=0$), the leading-order standard model results for unpolarized $W^+W^-$, and for $W$ polarizations $LL$, $LT$ and $TT$. The dashed curves are the corresponding results including leading new physics corrections.}\label{fig:rha}
\end{figure}

The $LL$ cross sections read:

\begin{align}
\begin{aligned}\label{slhll}
\dth{\sigma_{LL}^{LH}} &= \dth{\sigma_{LL, SM}^{LH}} \bigg[ 1 +
s \frac{4 \operatorname{Re} C_{ V9}^e}{M_Z^2} + s \frac{2 C_{V-}}{M_Z^2} +
f_{LL}^{LH} + \mathcal{O} \left( s^{-1} \right) \bigg]
\end{aligned}
\end{align}

\begin{align}
\begin{aligned}\label{srhll}
\dth{\sigma_{LL}^{RH}} &= \dth{\sigma_{LL, SM}^{RH}} \bigg[ 1 +
s \frac{C_{ V10}}{M_Z^2 s_Z^2} + f_{LL}^{RH} +
\mathcal{O} \left( s^{-1} \right) \bigg]
\end{aligned}
\end{align}

The notation ${\rm Re}~C_{ V9}^e$ reflects the fact that, in general, the coefficient $C_{V9}$ may be complex and flavour dependent. If these possibilities are neglected, ${\rm Re}~C_{ V9}^e$ can be identified with $C_{V9}$ (taken to be real), as is frequently done throughout this paper.

The $TT$ cross sections read:

\begin{align}
\begin{aligned}\label{slhtt}
\dth{\sigma_{TT}^{LH}} &= \dth{\sigma_{TT, SM}^{LH}} \bigg[ 1 + f_{TT}^{LH} +
 \mathcal{O} \left( s^{-1} \right) \bigg]
\n\\end{aligned} \n\\end{align}\n\n\\begin{align} \n \\begin{aligned}\\label{srhtt} \n \\dth{\\sigma_{TT}^{RH}} &= \\dth{\\sigma_{TT, SM}^{RH}} \\bigg[ 1 + \n s \\frac{C_{ V10}}{M_Z^2 s_Z^2} - s \\frac{2 e^2 C_{X1}}{M_Z^2 s_Z^2 c_Z^2} +\n s\\frac{C_{W1}}{\\Lambda^2} \\frac{6 e^2}{s_Z^2}+ f_{TT}^{RH} \\bigg] \n \\end{aligned} \n\\end{align}\nIn (\\ref{srhtt}) the $s$-dependence of the square bracket is exact,\n$f^{RH}_{TT}=0$, and terms of ${\\cal O}(s^{-1})$ or higher are absent\nin this case.\n\nThe $LT$ cross sections are given by:\n\n\\begin{align} \n \\begin{aligned}\\label{slhlt} \n\\dth{\\sigma_{LT}^{LH}} &= \\dth{\\sigma_{LT, SM}^{LH}} \\bigg[ 1 +\ns\\frac{4 \\operatorname{Re}C_{ V9}^e \\xi}{M_Z^2 \\chi} + \ns\\frac{2C_{V-} \\xi}{M_Z^2 \\chi} -s\\frac{e^2 \\xi C_{X1}}{M_Z^2 c_Z^2 \\chi} - \ns\\frac{e^2 \\xi C_{X2}}{M_Z^2 s_Z^2 \\chi}\\\\ \n&-s \\frac{e^2 \\left( c_Z^2 - s_Z^2 \\right) \\left[ \\left( 1+ \\cos \\theta \n\\right) c_Z^2 + \\cos \\theta \\right] C_{X3} }{M_Z^2 s_Z^2 c_Z^2 \\chi} - \ns \\frac{C_{W1}}{\\Lambda^2}\\frac{6 e^2 c_Z^2 \\xi}{s_Z^2\\chi} \\\\ \n& + f_{LT}^{LH} + \\mathcal{O} \\left( s^{-1} \\right) \\bigg] \n \\end{aligned} \n\\end{align}\n\n\n\\begin{align} \n\\begin{aligned}\\label{srhlt} \n\\dth{\\sigma_{LT}^{RH}} &= \\dth{\\sigma_{LT, SM}^{RH}} \\bigg[ 1 - \ns \\frac{e^2 C_{X3} \\cos \\theta}{M_Z^2 s_Z^2 c_Z^2 (1 + \\cos^2 \\theta)} -\ns\\frac{e^2 C_{X1}}{M_Z^2 c_Z^2 s_Z^2} +s\\frac{C_{ V10}}{M_Z^2 s_Z^2} \\\\ \n& \\quad + f_{LT}^{RH} + \\mathcal{O} \\left( s^{-1} \\right) \\bigg] \n\\end{aligned} \n\\end{align}\n\nwith\n\n\\begin{align} \n \\begin{aligned} \n \\xi &= 1+\\left( 2 c_Z^2 \\left(1+ \\cos \\theta \\right) + \n\\cos \\theta \\right) \\cos \\theta \\\\ \n \\chi &= 1+ \\left( 2 c_Z^2 \\left(1+ \\cos \\theta \\right) + \n\\cos \\theta \\right)^2 \\\\ \n \\end{aligned} \n\\end{align}\n\n\nFinally, we give the corresponding results also for the case\nof unpolarized $W$ bosons (denoted by $\\Sigma$):\n\n\\begin{align} 
\n\\begin{aligned}\\label{slhsum} \n\\dth{\\sigma_{\\Sigma}^{LH}} &= \\dth{\\sigma_{\\Sigma, SM}^{LH}} \\bigg[ 1 + \ns \\frac{16 \\operatorname{Re} C_{ V9}^e \\sin^4 \\frac{\\theta}{2}}{M_Z^2 \\eta } \n+ s \\frac{8 C_{V-} \\sin^4 \\frac{\\theta}{2}}{M_Z^2 \\eta } + f_{\\Sigma}^{LH} + \n\\mathcal{O} \\left( s^{-1} \\right) \\bigg] \n\\end{aligned} \n\\end{align}\n\n\\begin{align} \n\\begin{aligned}\\label{srhsum} \n\\dth{\\sigma_{\\Sigma}^{RH}} &= \\dth{\\sigma_{\\Sigma, SM}^{RH}} \\bigg[ 1 + \ns \\frac{C_{ V10}}{M_Z^2 s_Z^2} + f_{\\Sigma}^{RH} + \n\\mathcal{O} \\left( s^{-1} \\right) \\bigg] \n\\end{aligned} \n\\end{align}\n\nwith\n\n\\begin{align} \n \\begin{aligned} \n \\eta &= \\left(1+\\cos^2\\theta\\right)\\left(1+8c_Z^4\\right)-2\\cos\\theta \n \\end{aligned} \n\\end{align}\n\nIt is useful to present the latter results for unpolarized $W$ bosons \nalso in a slightly more explicit and complementary form.\nIn the high-energy limit ($s\\gg M^2_W$) the differential cross sections\nfor the scattering of polarized $e^+e^-$ into unpolarized $W^+W^-$\ncan be written as\n\\begin{eqnarray}\\label{selww} \n\\frac{d\\sigma(e^-_Le^+_R\\to W^-W^+)}{d\\cos\\theta} &=& \n\\frac{\\pi\\alpha^2}{2 s}\\left[\\frac{1-\\cos^2\\theta}{16 c^4_Z s^4_Z}+ \n\\frac{(1+\\cos\\theta)(1+\\cos^2\\theta)}{2 s^4_Z (1-\\cos\\theta)} \\right. \\\\ \n&-& \\left. \\frac{s(1-\\cos^2\\theta)}{8 M^2_W c^2_Z s^4_Z} \n\\left(\\delta\\kappa_1 - 2\\delta\\kappa_c +\\delta\\kappa_Z \n-2 s^2_Z(\\delta\\kappa_2 - \\delta\\kappa_A +\\delta\\kappa_Z)\\right)\\right] \n\\nonumber \n\\end{eqnarray}\nand\n\\begin{equation}\\label{serww} \n\\frac{d\\sigma(e^-_Re^+_L\\to W^-W^+)}{d\\cos\\theta} = \n\\frac{\\pi\\alpha^2}{2 s} \\frac{M^4_Z}{4M^4_W}(1-\\cos^2\\theta) \n\\left[1+\\frac{2s}{M^2_Z}(\\delta\\kappa_2-\\delta\\kappa_A +\\delta\\kappa_Z)\\right]\n\\end{equation}\nHere only the leading terms in $M^2_W\/s$ have been kept, both\nfor the standard model results and for the new physics corrections. 
The latter are expressed in terms of the anomalous contributions to the 
couplings, $\delta\kappa_i\equiv\kappa_i-\kappa_{i,SM}$, defined in
(\ref{tgvgam2}) and (\ref{gfvert}). 
In terms of the Lagrangian coefficients one finds for the
parameters that determine the leading corrections
\begin{equation}\label{kapel} 
\delta\kappa_1 - 2\delta\kappa_c +\delta\kappa_Z 
-2 s^2_Z(\delta\kappa_2 - \delta\kappa_A +\delta\kappa_Z) 
=-C_{V-} - 2 {\rm Re}~C^e_{V9} 
\end{equation}
\begin{equation}\label{kaper} 
\delta\kappa_2 - \delta\kappa_A +\delta\kappa_Z 
=\frac{C_{V10}}{2 s^2_Z} 
\end{equation}
in agreement with (\ref{slhsum}) and (\ref{srhsum}).

The (full) energy dependence of the leading-order standard model
cross sections is plotted in Figs.~\ref{fig:lhs} and \ref{fig:rhs},
their angular dependence in Figs.~\ref{fig:lha} and \ref{fig:rha}
(solid lines). For illustration, the typical size of potential, $s$-enhanced 
new physics corrections is also indicated (dashed lines).
The following input parameters have been used:
\begin{equation}
M_W=80.4\,{\rm GeV},\qquad M_Z=91.19\,{\rm GeV},
\qquad G_F=1.166\cdot 10^{-5}\,{\rm GeV}^{-2},\qquad \alpha=1/129
\end{equation}
The sine of the weak mixing angle, in the definition used here, 
is then determined through (\ref{szcz}) to be
\begin{equation}\label{sz2num}
s^2_Z = 0.231
\end{equation} 
In order to display the potential impact of new physics we
include for each cross section the leading ${\cal O}(s)$ corrections,
setting the relevant coefficients, as an example, to
$C_{V-}=C_{V9}=C_{V10}=C_{X1}=C_{X2}=C_{X3}=1/(16\pi^2)$. This value corresponds
to the natural size expected from naive dimensional analysis.
In the plots, all cross sections are normalized to the quantity
\begin{equation}\label{rdef}
R=\frac{4\pi\alpha^2}{3s}
\end{equation}

We add several comments on the results presented above.
\n\\begin{itemize}\n\\item \nIt is instructive to recall the large-$s$ behaviour of the\ncross sections in the standard model. The dominant ones scale\nas $1\/s$. They are:\n\\begin{equation}\\label{xsecdom}\n\\sigma^{LH}_{LL},\\qquad \\sigma^{LH}_{TT},\\qquad \\sigma^{RH}_{LL}\n\\end{equation}\nThe remaining cross sections are subleading at high energies\nand scale as\n\\begin{equation}\n\\sigma^{LH}_{LT}\\sim \\frac{1}{s^2},\\qquad \n\\sigma^{RH}_{LT}\\sim \\frac{1}{s^2},\\qquad \\sigma^{RH}_{TT}\\sim \\frac{1}{s^3}\n\\end{equation}\n\\item\nThe leading sensitivity to new physics comes from the ${\\cal O}(s)$\nenhanced corrections to the dominant cross sections (\\ref{xsecdom}).\nIt depends on the coefficients\n\\begin{equation}\n\\sigma^{LH}_{LL}:\\, C_{V-,V9},\\qquad \\sigma^{LH}_{TT}:\\, 0,\n\\qquad \\sigma^{RH}_{LL}:\\, C_{V10}\n\\end{equation}\nThe fact that $\\sigma^{LH}_{TT}$ receives no leading corrections\nis clearly visible from Figs. \\ref{fig:lhs} and \\ref{fig:lha}. This feature \nalso implies (Fig. \\ref{fig:lhs}) that the large-$s$ enhancement in the\ncross section for left-handed electrons into unpolarized\n$W^+W^-$ is contributed entirely by the longitudinal $W$ bosons,\neven though the transverse $W$ bosons have a larger cross section. \n\\item \nThe CP odd operators ${\\cal O}_{XU4}$, ${\\cal O}_{XU5}$, ${\\cal O}_{XU6}$ and\n${\\cal O}_{W2}$ do not contribute to the cross sections considered here.\n\\item\nThe triple-$W$ operators in (\\ref{owi}) arise only at NNLO\n($\\sim v^2\/(16\\pi^2\\Lambda^2)$) in the effective Lagrangian. Accordingly, \ntheir coefficients give only subleading contributions to the cross sections. \nThe coefficient $C_{W1}$ (CP even operator) enters the correction terms \n$f^{RH}_{LT}$, $f^{LH}_{TT}$, $f^{LH}_{LT}$, $f^{LH}_{\\Sigma}$ as well as\nthe ${\\cal O}(s)$ corrections in $\\sigma^{RH}_{TT}$ and $\\sigma^{LH}_{LT}$.\nIn the former case $C_{W1}$ is strongly suppressed by a factor\n$M^2_Z\/\\Lambda^2$. 
In the latter case the suppression is milder,
by a factor $s/\Lambda^2$. However, this is compensated by the overall
suppression of these cross sections at large $s$,
$\sigma^{RH}_{TT}\sim 1/s^3$ and $\sigma^{LH}_{LT}\sim 1/s^2$. 
Therefore the effect of $C_{W1}$ can be expected to be negligible
in practice \cite{Appelquist:1993ka,Arzt:1994gp}.
\item
For high-precision studies, standard-model radiative corrections in 
$e^+e^-\to W^+W^-$, which are neglected here, have to be taken into 
account \cite{Denner:2000bj,Denner:2005fg,Bierweiler:2012kw}.
However, these corrections cannot affect the leading relative
corrections from new physics enhanced by $s/M^2_Z$. 
\item
The expression of the anomalous couplings in terms of effective theory
coefficients, (\ref{gvkap1}) through (\ref{kapc12}), is fully general and 
can be used to compute further observables in 
$e^+e^-\to W^+W^-$ \cite{Dawson:1993in,Diehl:1993br,Diehl:2002nj}.
\end{itemize} 

\section{Redundant operators}
\label{sec:redop}

In addition to the dimension-4 operators in (\ref{oxu}),
built from $B_{\mu\nu}$, $W_{\mu\nu}$ and $U$, further operators of similar 
type can be written down. These may also be used to describe modified
gauge-boson vertices, but they can always be eliminated by 
appropriate field redefinitions (or, equivalently, by using the equations 
of motion) in favour of the terms in (\ref{oxu})
\cite{Buchalla:2012qq,Nyffeler:1999ap,Grojean:2006nn}.
In this section we discuss how these redundant operators would enter
the anomalous couplings. We also show explicitly how their effect
can be absorbed into the coefficients of the operators 
already present in our basis. This exercise facilitates the transformation
to a different set of independent operators that one might want to
consider. It also provides a useful consistency check of the
expressions in (\ref{gvkap1}) -- (\ref{kapc12}).
\n\nThere are 6 redundant operators that have been considered in the\nliterature, ${\\cal O}_{XUi}$, $i=7,\\ldots, 12$, in the notation of\n\\cite{Buchalla:2012qq}. The 3 CP-violating operators $i=10,\\, 11,\\, 12$\nare trivially related to ${\\cal O}_{XUi}$, $i=4,\\, 5,\\, 6$, in (\\ref{oxu})\nand we will not discuss them further here.\nThe first of the remaining operators is \n\\begin{equation}\\label{oxu7}\n{\\cal O}_{XU7}=-2ig' B_{\\mu\\nu} \\langle D^\\mu U^\\dagger D^\\nu UT_3\\rangle\n=-i g' g^2 B^{\\mu\\nu} W^+_\\mu W^-_\\nu\n\\end{equation}\nIt is related to the other operators, up to a total derivative, as\n\\begin{equation}\\label{oxu7rel}\n{\\cal O}_{XU7}=\\frac{g'^2}{2}B_{\\mu\\nu}B^{\\mu\\nu} + g'^2 {\\cal O}_{\\beta_1}\n-{\\cal O}_{XU1} - g'^2 {\\cal O}_{\\psi V7} -2 g'^2 {\\cal O}_{\\psi V10}\n\\end{equation}\nIn writing (\\ref{oxu7rel}) we have omitted operators similar to \n${\\cal O}_{\\psi Vi}$ that involve quark fields. The first term\non the r.h.s. only renormalizes the $B$-field kinetic term\nand has no effect on the anomalous couplings (see the discussion\nin section~\\ref{sec:nlophi} below).\nAdding a term $C_{X7}{\\cal O}_{XU7}$ to the NLO Lagrangian results in\nthe following shift in the anomalous couplings\n\\begin{equation}\\label{kapxu7}\n\\Delta\\kappa_Z = -\\frac{e^2}{c^2_Z} C_{X7},\\qquad\n\\Delta\\kappa_A = \\frac{e^2}{s^2_Z} C_{X7}\n\\end{equation}\nAll other couplings in (\\ref{gvkap1}) -- (\\ref{kapc12}) remain\nunchanged. 
According to (\\ref{oxu7rel}), an inclusion\nof $C_{X7}{\\cal O}_{XU7}$ in the Lagrangian is equivalent to shifting\nthe other coefficients by\n\\begin{equation}\\label{delcxu7}\n(\\Delta\\beta_1,\\Delta C_{X1},\\Delta C_{V7},\\Delta C_{V10}) =\nC_{X7}\\, (g'^2,-1,-g'^2,-2 g'^2)\n\\end{equation} \nThis reflects the redundancy of ${\\cal O}_{XU7}$ and\ncan be checked explicitly with (\\ref{gvkap1}) -- (\\ref{kapc12}).\n\nSimilar considerations apply to the operator\n\\begin{equation}\\label{oxu8}\n{\\cal O}_{XU8}=-2ig \\langle W_{\\mu\\nu} D^\\mu U D^\\nu U^\\dagger \\rangle\n\\end{equation}\nwhich is related to the other operators as\n\\begin{equation}\\label{oxu8rel}\n{\\cal O}_{XU8}=g^2 \\langle W_{\\mu\\nu}W^{\\mu\\nu}\\rangle \n-\\frac{g^2}{2}v^2\\, \\langle D_\\mu U^\\dagger D^\\mu U\\rangle \n-{\\cal O}_{XU1} - 2g^2 {\\cal O}_{\\psi V8} \n- g^2({\\cal O}_{\\psi V9} + {\\cal O}_{\\psi V9}^\\dagger)\n\\end{equation}\nup to total derivatives and contributions with quarks.\nThe first two terms can be absorbed into the\nleading-order Lagrangian and have no effect on the\nanomalous couplings. 
\nA term $C_{X8}{\\cal O}_{XU8}$ in the Lagrangian would shift the couplings by \n\\begin{equation}\\label{kapxu8}\n\\Delta\\kappa_Z = \\Delta\\kappa_A = g^2 C_{X8}\\, ,\\qquad\n\\Delta g^Z_1 =\\frac{g^2}{c^2_Z} C_{X8}\n\\end{equation}\nwith the remaining couplings in (\\ref{gvkap1}) -- (\\ref{kapc12}) unchanged.\nAccording to (\\ref{oxu8rel}), an inclusion\nof $C_{X8}{\\cal O}_{XU8}$ in the Lagrangian is equivalent to shifting\nthe other coefficients by\n\\begin{equation}\\label{delcxu8}\n(\\Delta C_{X1},\\Delta C_{V8},\\Delta C_{V9},\\Delta\\delta_G) =\n-C_{X8}\\, (1,2g^2,g^2,g^2)\n\\end{equation} \nas can be checked with (\\ref{gvkap1}) -- (\\ref{kapc12}).\n\nFinally, \n\\begin{equation}\\label{oxu9}\n{\\cal O}_{XU9}=-2ig \\langle U^\\dagger W_{\\mu\\nu} U T_3\\rangle\n\\, \\langle D^\\mu U^\\dagger D^\\nu U T_3 \\rangle\n\\end{equation}\nobeys the relation \n\\begin{equation}\\label{oxu9rel}\n{\\cal O}_{XU9}=\\frac{g^2}{4} \\langle W_{\\mu\\nu}W^{\\mu\\nu}\\rangle \n-\\frac{g^2}{8}v^2\\, \\langle D_\\mu U^\\dagger D^\\mu U\\rangle \n-\\frac{g^2}{4}{\\cal O}_{\\beta_1} - \\frac{1}{2}{\\cal O}_{XU2} \n- \\frac{g^2}{4} ({\\cal O}_{\\psi V9} + {\\cal O}_{\\psi V9}^\\dagger)\n\\end{equation}\nThe direct contribution from $C_{X9}{\\cal O}_{XU9}$ reads \n\\begin{equation}\\label{kapxu9}\n\\Delta\\kappa_Z = \\Delta\\kappa_A = \\frac{g^2}{2} C_{X9}\n\\end{equation}\nwhich, using (\\ref{oxu9rel}), is equivalent to shifting the other \ncoefficients by\n\\begin{equation}\\label{delcxu9}\n(\\Delta\\beta_1,\\Delta C_{X2},\\Delta C_{V9},\\Delta\\delta_G) =\n-\\frac{C_{X9}}{4}\\, (g^2,2,g^2,g^2)\n\\end{equation} \nThis is again consistent with (\\ref{gvkap1}) -- (\\ref{kapc12}).\n\nWe conclude this section with a discussion of an alternative\noperator basis, which includes the triple-gauge operators\n${\\cal O}_{XU7}$, ${\\cal O}_{XU8}$ and ${\\cal O}_{XU9}$, while eliminating \nthree of the original operators in (\\ref{ob1}), (\\ref{oxu}) and (\\ref{opsiv}).\nThe choice of 
these three is in principle arbitrary. We emphasize, however,\nthat it is not possible in general to eliminate all the gauge-fermion \noperators simultaneously since there are more than three (ten without counting \nflavour structure \\cite{Buchalla:2012qq}). Because only three gauge-fermion\noperators \n(${\\cal O}_{\\psi V-}$, ${\\cal O}_{\\psi V9}+ h.c.$, ${\\cal O}_{\\psi V10}$) happen to\ncontribute to $e^+e^-\\to W^+W^-$, those may indeed be removed altogether from \nthe basis\nin this case. Additional gauge-fermion terms will be required when other \nprocesses are considered, such as $W^+W^-$ production from hadronic\ninitial states (see section \\ref{sec:wwlhc}). Restricting our attention\nto $e^+e^-\\to W^+W^-$ we may write\n\\begin{align}\\label{leffbasis}\n&{\\cal L}_{eff, NLO} =\n\\tilde\\beta_1 {\\cal O}_{\\beta_1}+\\tilde C_{X1} {\\cal O}_{XU1}\n+\\tilde C_{X2} {\\cal O}_{XU2} +\\tilde C_{X7} {\\cal O}_{XU7}\n+\\tilde C_{X8} {\\cal O}_{XU8} +\\tilde C_{X9} {\\cal O}_{XU9}+\\ldots\\nonumber\\\\\n&=\\beta_1 {\\cal O}_{\\beta_1} + C_{X1} {\\cal O}_{XU1}\n+C_{X2} {\\cal O}_{XU2} + C_{V-} {\\cal O}_{\\psi V-}\n+C_{V9} ({\\cal O}_{\\psi V9} + h.c.) + \nC_{V10} {\\cal O}_{\\psi V10}+\\ldots\n\\end{align}\nwhere we disregard gauge-fermion operators other than ${\\cal O}_{\\psi V-}$,\n${\\cal O}_{\\psi V9}$, ${\\cal O}_{\\psi V10}$. Further operators that are not\naffected by the change of basis are understood to be included but are not\nwritten explicitly. 
In terms of the coefficients, the transformation from \none to the other basis in (\\ref{leffbasis}) is given by\n\\begin{align}\\label{coeffbasis}\n\\beta_1 &=\\tilde\\beta_1 +g'^2 \\tilde C_{X7} -\\frac{g^2}{4}\\tilde C_{X9},\\quad\nC_{X1} =\\tilde C_{X1} - \\tilde C_{X7} - \\tilde C_{X8},\\quad\nC_{X2} =\\tilde C_{X2} - \\frac{1}{2}\\tilde C_{X9}\\nonumber\\\\\nC_{V-} &=-g'^2 \\tilde C_{X7} + g^2 \\tilde C_{X8},\\quad\nC_{V9} =-g^2 \\tilde C_{X8} -\\frac{g^2}{4} \\tilde C_{X9},\\quad\nC_{V10} =-2g'^2 \\tilde C_{X7} \n\\end{align}\n\n\n\n\\section{High-energy limit and the Goldstone boson\\\\ \nequivalence theorem}\n\\label{sec:helequiv}\n\nThe results of section~\\ref{sec:crosec}\nshow that, despite the sizeable number of operators that pa\\-ra\\-me\\-trize \nnew physics effects in $e^+e^-\\to W^+W^-$, only 3 of them \nappear in the large-energy limit with a relative enhancement factor $s\/v^2$, \nthus introducing potential violations of \nunitarity in the $W^+W^-$ cross-section\\footnote{Obviously, such divergences \nare actually cut off at the scale of new physics, where new degrees of \nfreedom regulate them. Therefore, such divergences never violate unitarity, \nbut rather signal the point where the EFT ceases to be valid.}. \nThese unitarity violations are associated with the longitudinal modes of the \n$W$ bosons as can be seen by inspection of our results or, more generally, \nby a straightforward application of the equivalence \ntheorem~\\cite{Cornwall:1974km,Chanowitz:1985hj}. A general discussion\nof the equivalence theorem in the context of chiral Lagrangians can be found \nin \\cite{GrosseKnetter:1994yp,Dobado:1994vr}. In this section we will \nrederive the large-$s$ limit of the $e^+e^-\\to W^+W^-$ cross-section in a \nmore transparent way by working in the Landau gauge, \nwhere the Goldstone modes $\\varphi^{\\pm}$ appear explicitly. 
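It is useful to recall the content of the equivalence theorem in this context: 
for $s\gg M_W^2$, amplitudes with longitudinally polarized $W$ bosons coincide 
with the corresponding Goldstone-boson amplitudes up to mass-suppressed terms,
\begin{equation}
{\cal M}\left(e^+e^-\to W^+_L W^-_L\right)=
{\cal M}\left(e^+e^-\to \varphi^+\varphi^-\right)
+{\cal O}\!\left(\frac{M_W}{\sqrt{s}}\right)
\end{equation}
so that the $s$-enhanced corrections can be extracted directly from the 
Goldstone-boson amplitudes.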
\n\nThe relevant topologies for $e^+e^-\\to \\varphi^+\\varphi^-$ are collected in \nthe second and third dia\\-gram of Fig.~\\ref{fig:4}. The leftmost diagram is \nthe standard model contribution. The $(\\gamma,Z)\\varphi^+\\varphi^-$ vertices \nare obtained from the Goldstone kinetic term\n\\begin{align}\\label{zphism}\n\\frac{v^2}{4}\\langle D^{\\mu}U^{\\dagger}D_{\\mu}U\\rangle = \ne\\left(\\varphi^+i\\stackrel{\\leftrightarrow}{\\partial_{\\mu}}\\varphi^-\\right)\n\\left(\\frac{c^2_Z-s^2_Z}{2 c_Z s_Z}Z^{\\mu}+A^{\\mu}\\right)+\\ldots\n\\end{align}\nIn the large-$s$ limit, the leading new physics contributions to \n$e^+e^-\\to \\varphi^+\\varphi^-$ can be shown to come only from \nthe gauge-fermion operators ${\\cal O}_{\\psi Vi}$: \nThe operator ${\\cal O}_{\\beta_1}$ contains a $Z\\varphi^+\\varphi^-$ coupling\nproportional to the standard model expression in (\\ref{zphism}). This \ncontribution is not enhanced in the large-$s$ limit and therefore subleading.\nNo $(\\gamma,Z)\\varphi^+\\varphi^-$ coupling arises from\n${\\cal O}_{XU1}$, ${\\cal O}_{XU2}$, ${\\cal O}_{XU4}$ and ${\\cal O}_{XU5}$,\nwhich are bilinear in the gauge fields. Finally, ${\\cal O}_{XU3}$ and\n${\\cal O}_{XU6}$ produce $(\\gamma,Z)\\varphi^+\\varphi^-$ only together with\nat least one additional Goldstone particle and therefore do not contribute to\nthe process of interest here.\n\nThe gauge-fermion operators give rise to the \ncentral diagram in Fig.~\\ref{fig:4}. 
They read explicitly\n\\begin{align}\\label{fermionbasis}\n{\\cal{O}}_{\\psi V7}&=-2{\\cal{O}}_{\\psi V8}=\n\\frac{1}{2}({\\cal{O}}_{\\psi V9} + {\\cal{O}}^\\dagger_{\\psi V9})=\n({\\bar{e}}_L\\gamma^{\\mu}e_L)\n\\frac{1}{v^2}\\left(\\varphi^+i\\stackrel{\\leftrightarrow}{\\partial_{\\mu}}\n\\varphi^-\\right)+\\ldots \\nonumber\\\\\n{\\cal{O}}_{\\psi V10}&=({\\bar{e}}_R\\gamma^{\\mu}e_R)\\frac{1}{v^2}\n\\left(\\varphi^+i\\stackrel{\\leftrightarrow}{\\partial_{\\mu}}\\varphi^-\\right)\n+\\ldots\n\\end{align}\nNotice the difference between the unitary and Landau gauge: the gauge-fermion \noperators, which in unitary gauge corrected the $s$ and $t$-channel vertices, \nnow take the form of $e^+e^-\\varphi^+\\varphi^-$ local terms. \n\nThe interference between the standard model and the new physics ($NP$)\ncontribution can be easily computed and results in\n\\begin{align}\\label{sigcv}\n\\frac{d\\sigma(e^-_R e^+_L\\to W^-W^+)_{NP}}{d\\cos\\theta}\n&=\\frac{\\pi \\alpha^2 \\sin^2\\theta}{8 s_Z^2c_Z^2 M_W^2}C_{V10}\\nonumber\\\\\n\\frac{d\\sigma(e^-_L e^+_R\\to W^-W^+)_{NP}}{d\\cos\\theta}\n&=\\frac{\\pi \\alpha^2 \\sin^2\\theta}{16 s_Z^4 c_Z^2 M_W^2}\n\\left(C_{V-}+2C_{V9}\\right)\n\\end{align}\nwhich agrees with the results in section~\\ref{sec:crosec}\n(assuming $C_{V9}$ to be real).\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=4.7cm]{plotGM.eps}\\hskip 0.5cm\n\\includegraphics[width=3.0cm]{plotGM2.eps}\\hskip 0.5cm\n\\includegraphics[width=4.7cm]{plotGM1.eps}\n\\end{center}\n\\caption{\\small{\\it{Different contributions to $e^+e^-\\to \\varphi^+\\varphi^-$. \nThe left-hand diagram is the standard model piece while the central and \nright-hand diagrams are the same contribution from new physics, expressed in \nterms of gauge-fermion (central) or triple-gauge operators (right). 
$C_{V}$ \nand $C_{XU}$ are short-hand notations for $C_{V7-10}$ and $C_{X7-9}$, \nrespectively.}}}\\label{fig:4}\n\\end{figure} \n\nAs discussed in section~\\ref{sec:redop}, the equations of motion imply \nrelations between gauge-fermion, oblique and triple-gauge operators. We have \nalready discussed the convenience of working with gauge-fermion operators \nwhile eliminating triple-gauge operators. However, it is still instructive to \nrederive the large-energy limit in the basis where gauge-fermion operators \nare absent. In this basis, the central diagram in Fig.~\\ref{fig:4} gets \nreplaced by the rightmost one, where the $(\\gamma,Z)\\varphi^+\\varphi^-$ \nvertices come from the triple-gauge operators\n\\begin{align}\\label{triplebasis}\n{\\cal{O}}_{XU7}&=-\\frac{4ig^{\\prime}}{v^2}B_{\\mu\\nu}\n\\partial^{\\mu}\\varphi^+\\partial^{\\nu}\\varphi^-\\nonumber\\\\\n{\\cal{O}}_{XU8}&=2 {\\cal{O}}_{XU9} = -\\frac{4ig}{v^2}W^3_{\\mu\\nu}\n\\partial^{\\mu}\\varphi^+\\partial^{\\nu}\\varphi^-\n\\end{align} \nThe results for the cross-sections now take the form\n\\begin{align}\\label{sigcx}\n\\frac{d\\sigma(e^-_R e^+_L\\to W^-W^+)_{NP}}{d\\cos\\theta}\n&=-\\frac{\\pi^2 \\alpha^3 \\sin^2\\theta}{ s_Z^2c_Z^4 M_W^2}C_{X7}\n\\nonumber\\\\\n\\frac{d\\sigma(e^-_L e^+_R\\to W^-W^+)_{NP}}{d\\cos\\theta}\n&=-\\frac{\\pi^2 \\alpha^3 \\sin^2\\theta}{4 s_Z^6 c_Z^4 M_W^2}\n\\left(s_Z^2C_{X7}+c_Z^2\\left(C_{X8}+\\frac{1}{2}C_{X9}\\right)\\right)\n\\end{align}\n\nThe equivalence of (\\ref{sigcv}) and (\\ref{sigcx}) can be checked\nusing the high-energy version of the equations relating \n${\\cal{O}}_{XU7,8,9}$ and ${\\cal{O}}_{V7,8,9,10}$ given in \nsection~\\ref{sec:redop}. 
In terms of the corresponding coefficients these\nrelations read\n\\begin{align}\nC_{X7}&=-\\frac{c_Z^2}{8\\pi\\alpha}C_{V10}\\nonumber\\\\\nC_{X8}&=\\frac{s_Z^2}{4\\pi\\alpha}\\left(C_{V-}-\\frac{1}{2}C_{V10}\\right)\\nonumber\\\\\nC_{X9}&=-\\frac{s_Z^2}{\\pi\\alpha}\\left(C_{V-}+C_{V9}-\\frac{1}{2}C_{V10}\\right)\n\\end{align}\n\nIn the ${\\cal O}_{XUi}$ basis (\\ref{triplebasis}), the\nenhancement $\\sim s$ of the relative corrections is obvious since the\n$(\\gamma,Z)\\varphi^+\\varphi^-$ vertices carry three derivatives,\ninstead of one in the standard model case (\\ref{zphism}).\nThe same enhancement comes about differently in the\n${\\cal O}_{\\psi Vi}$ basis (\\ref{fermionbasis}). These operators give\nlocal $e^+e^-\\varphi^+\\varphi^-$ vertices, which are similar to the \nstandard model amplitudes, but without the gauge-boson propagator $\\sim 1\/s$.\nThis then leads to the relative enhancement $\\sim s$ of the corrections\nwhen they are computed from the ${\\cal O}_{\\psi Vi}$. 
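As a cross-check, the equality of (\ref{sigcv}) and (\ref{sigcx}) under these 
coefficient relations can also be verified numerically. The following sketch 
uses arbitrary illustrative values for $C_{V-}$, $C_{V9}$, $C_{V10}$ (not fits) 
together with the inputs $\alpha=1/129$, $s_Z^2=0.231$, $M_W=80.4\,$GeV quoted 
earlier, and evaluates the new physics corrections in both bases:

```python
import math

# Verify numerically that the gauge-fermion basis and the triple-gauge
# basis give identical cross-section corrections once the coefficient
# relations above are imposed.  Illustrative inputs only.
alpha = 1/129.0
sz2 = 0.231                  # sin^2 of the weak mixing angle, as in the text
cz2 = 1.0 - sz2
MW = 80.4                    # GeV
CVm, CV9, CV10 = 1e-3, 2e-3, 3e-3   # arbitrary test values
costh = 0.5
sin2th = 1.0 - costh**2

# relations between the two sets of coefficients
CX7 = -cz2/(8*math.pi*alpha)*CV10
CX8 = sz2/(4*math.pi*alpha)*(CVm - CV10/2)
CX9 = -sz2/(math.pi*alpha)*(CVm + CV9 - CV10/2)

# right-handed electrons: both forms of d(sigma_NP)/d(cos theta)
rh_V = math.pi*alpha**2*sin2th/(8*sz2*cz2*MW**2)*CV10
rh_X = -math.pi**2*alpha**3*sin2th/(sz2*cz2**2*MW**2)*CX7

# left-handed electrons
lh_V = math.pi*alpha**2*sin2th/(16*sz2**2*cz2*MW**2)*(CVm + 2*CV9)
lh_X = -math.pi**2*alpha**3*sin2th/(4*sz2**3*cz2**2*MW**2)*(
    sz2*CX7 + cz2*(CX8 + CX9/2))
```

Both evaluations agree up to rounding, reflecting the exact algebraic 
equivalence of the two bases.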
\n\n\\section{NLO Lagrangian for linearly transforming Higgs}\n\\label{sec:nlophi}\n\nIn the case of a linearly transforming Higgs field, the next-to-leading\norder Lagrangian consists of the operators of dimension 5 and 6\nlisted in \\cite{Buchmuller:1985jz,Grzadkowski:2010es}.\nThe terms of the NLO Lagrangian relevant for $e^+e^-\\to W^+W^-$\ncan be written as \n\\begin{equation}\\label{lnlozq}\n{\\cal L}_{\\rm NLO} =\\frac{1}{\\Lambda^2} \\sum_{i=1}^{9} z_i Q_i\n\\end{equation}\nwith real dimensionless coefficients $z_i$ and the dimension-6\noperators\n\\begin{eqnarray}\\label{qq16}\nQ_1 &=& (D_\\mu \\phi^\\dagger \\phi)(\\phi^\\dagger D^\\mu\\phi) \\nonumber\\\\\nQ_2 &=& g g' B_{\\mu\\nu}\\, \\phi^\\dagger W^{\\mu\\nu} \\phi \\nonumber\\\\\nQ_3 &=& g g' \\varepsilon^{\\mu\\nu\\lambda\\rho} \n B_{\\mu\\nu}\\, \\phi^\\dagger W_{\\lambda\\rho} \\phi \\nonumber\\\\\nQ_4 &=& \\bar l\\gamma^\\mu l\\, \n (\\phi^\\dagger i D_\\mu\\phi -iD_\\mu\\phi^\\dagger\\phi) \\nonumber\\\\\nQ_5 &=& \\bar l\\gamma^\\mu T^a l\\, \n (\\phi^\\dagger T^a i D_\\mu\\phi -iD_\\mu\\phi^\\dagger T^a \\phi) \\nonumber\\\\\nQ_6 &=& \\bar e\\gamma^\\mu e\\, \n (\\phi^\\dagger i D_\\mu\\phi -iD_\\mu\\phi^\\dagger\\phi) \\nonumber\\\\\nQ_7 &=& {\\cal O}_{4f}\\ ,\\quad Q_8 = {\\cal O}_{W1}\\ ,\\quad Q_9 = {\\cal O}_{W2}\n\\end{eqnarray}\nWe take the Higgs doublet $\\phi$ to be normalized such that\nits vev is $\\langle\\phi\\rangle =(0,v)^T$ with $v=246\\,{\\rm GeV}$.\n\nThe operators\n\\begin{equation}\\label{qq912}\n\\begin{array}{ll}\nQ_{10} = \\phi^\\dagger\\phi\\, B_{\\mu\\nu} B^{\\mu\\nu}\\ ,\\qquad &\nQ_{11} = \\phi^\\dagger\\phi\\, W^a_{\\mu\\nu} W^{a\\mu\\nu} \\\\\nQ_{12} = \\phi^\\dagger\\phi\\, \\varepsilon^{\\mu\\nu\\lambda\\rho} \n B_{\\mu\\nu} B_{\\lambda\\rho}\\ , \\qquad &\nQ_{13} = \\phi^\\dagger\\phi\\, \\varepsilon^{\\mu\\nu\\lambda\\rho} \n W^a_{\\mu\\nu} W^a_{\\lambda\\rho}\n\\end{array}\n\\end{equation}\nhave been omitted from (\\ref{lnlozq}) since they have no impact on the 
\n$e^+e^-\\to W^+W^-$ amplitude. This becomes clear in the unitary gauge and \nafter dropping contributions with the physical Higgs field $h$, which are of \nno interest in the present case. We may thus replace $\\phi^\\dagger\\phi\\to v^2$. \nThe operators $Q_{12}$ and $Q_{13}$ then reduce to total derivatives, whereas\n$Q_{10}$ and $Q_{11}$ take the form of the usual gauge kinetic terms.\nThe impact of $Q_{10}$ and $Q_{11}$ \ncan be eliminated by a simultaneous rescaling of the gauge field \nand the corresponding gauge coupling \\cite{De Rujula:1991se}. \nEx\\-pli\\-cit\\-ly, the contribution \nfrom $Q_{11}$ is eliminated, to first order, through the transformations\n$W^a_\\mu\\to (1+\\delta_W) W^a_\\mu$ and $g\\to (1-\\delta_W) g$ with\n$\\delta_W=2 z_{11} v^2\/\\Lambda^2$ in the leading-order Lagrangian. \nThis holds because the field $W^a_\\mu$ enters interaction terms in this\nLagrangian only in the combination $gW^a_\\mu$.\nIn particular, the above transformation leaves $gW^a_\\mu$ invariant and the \nnon-abelian field strength transforms homogeneously \nas $W^a_{\\mu\\nu}\\to (1+\\delta_W) W^a_{\\mu\\nu}$.\nA similar transformation removes the impact of $Q_{10}$. 
\n\nComparing with the NLO Lagrangian in the nonlinear realization of the Higgs\nsector, in unitary gauge and for $h\\to 0$, one finds that the\ncoefficients in (\\ref{lnloco}) are related to $z_1,\\, \\ldots, z_9$ as \n\\begin{equation}\\label{relcizi}\n\\begin{array}{lll}\n\\beta_1=-z_1\\, v^2\/\\Lambda^2 \\qquad & C_{V7}=-2 z_4\\, v^2\/\\Lambda^2 \n\\qquad & C_{4f}=z_7 \\\\\nC_{X1}=-z_2\\, v^2\/\\Lambda^2 \\qquad & C_{V8}=z_5\\, v^2\/\\Lambda^2 \n\\qquad & C_{W1}=z_8 \\\\\nC_{X4}=-z_3\\, v^2\/\\Lambda^2 \\qquad & \n C_{V9}=\\frac{1}{2}z_5\\, v^2\/\\Lambda^2=C^*_{V9}\\qquad & C_{W2}=z_9 \\\\\n & C_{V10}=-2 z_6\\, v^2\/\\Lambda^2 &\n\\end{array}\n\\end{equation}\nIn addition, since the operators ${\\cal O}_{XU2}$, ${\\cal O}_{XU3}$,\n${\\cal O}_{XU5}$, ${\\cal O}_{XU6}$ correspond to operators of dimension~8\nin the linear-Higgs basis \\cite{Buchalla:2012qq}, \nat NLO in this basis we may put \n\\begin{equation}\\label{cx2356}\nC_{X2}=C_{X3}=C_{X5}=C_{X6}=0\n\\end{equation}\nThe 15 real parameters $\\beta_1$. $C_{X1}$, $\\ldots$, $C_{X6}$,\n$C_{V7}$, $C_{V8}$, ${\\rm Re}~C_{V9}$, ${\\rm Im}~C_{V9}$, $C_{V10}$,\n$C_{4f}$, $C_{W1}$ and $C_{W2}$ from the nonlinear\nLagrangian thus reduce to the nine real coefficients $z_1$, $\\ldots$, $z_9$\nin the linear-Higgs basis.\n \n\n\\section{Examples of new physics scenarios} \n\\label{sec:models}\n\nIn previous sections we already commented on the fact that a global \nelectroweak fit of the effective theory coefficients does not seem very \ninformative, given the strong correlations between them~\\cite{Han:2004az}. \nIn order to obtain \nan estimate of the size of the coefficients beyond naive dimensional analysis,\nit is then useful to resort to different UV completions. In this section we \nwill discuss two such scenarios, which affect $e^+e^-\\to W^+W^-$ in a \ncomplementary way, namely UV completions with heavy fermions (constituent \ntechnicolor) or with heavy vectors ($Z^{\\prime}$ models). 
Models with heavy scalars can be shown to affect 
$e^+e^-\to W^+W^-$ only at the loop level and will therefore not be considered. 

\subsection{Constituent technicolor}
\label{subsec:contc}

Constituent technicolor is a very simple model of strongly coupled dynamics, 
first introduced in \cite{Appelquist:1993ka}. The model consists of a flavour 
doublet of chiral heavy fermions ${\cal Q}=({\cal U},{\cal D})^T$ with electric 
charges $\pm1/2$ to preserve anomaly cancellation. Since the strong interaction 
between the techniquarks is neglected, apart from generating their dynamical 
masses, it can be regarded as a model of a fourth quark generation. The full 
Lagrangian can then be written as
\begin{align}
{\cal{L}}&={\cal{L}}_{SM}+i{\bar{{\cal{Q}}}}_LD\!\!\!\!\slash~{\cal{Q}}_L+
i{\bar{{\cal{U}}}}_R D\!\!\!\!\slash~{\cal{U}}_R+
i{\bar{{\cal{D}}}}_R D\!\!\!\!\slash~{\cal{D}}_R-
(m_U{\bar{\cal{Q}}}_LUP_+{\cal{U}}_R+m_D{\bar{\cal{Q}}}_LUP_-{\cal{D}}_R+h.c.)
\end{align}
Integrating out the heavy fermions at one loop induces direct corrections 
to the $ZWW$ and $\gamma WW$ vertices, as well as to the gauge boson bilinears.
\nOne finds~\\cite{Appelquist:1993ka}\n\\renewcommand\\arraystretch{2}\n\\begin{equation}\n\\begin{array}{ll}\n\\tilde\\beta_1=\\displaystyle\\frac{4}{v^2}(m_U+m_D)^2\\delta^2\\xi\\qquad & \\\\\n\\tilde C_{X1}=-\\xi;\\qquad & \\tilde C_{X7}=-\\xi\\\\\n\\tilde C_{X2}=-\\displaystyle\\frac{16}{5}\\delta^2\\xi;\\qquad & \\tilde C_{X8}=\n-\\left(1-\\displaystyle\\frac{2}{5}\\delta^2\\right)\\xi\\\\\n\\tilde C_{X3}=-2\\delta \\xi;\\qquad & \n\\tilde C_{X9}=-\\displaystyle\\frac{28}{5}\\delta^2\\xi\n\\end{array}\n\\end{equation}\nwhere\n\\begin{align}\n\\xi&=\\frac{N_{TC}}{96\\pi^2};\\qquad\\qquad\\qquad \\delta=\\frac{m_U-m_D}{m_U+m_D}\n\\end{align} \nChoosing for illustration $N_{TC}=4$, $\\delta=1\/60$ and $m_U+m_D=3\\,{\\rm TeV}$, \none finds that\n$\\tilde\\beta_1\\approx 7\\cdot 10^{-4}$, \n$\\tilde C_{X1}\\approx \\tilde C_{X7}\\approx \\tilde C_{X8}\\approx -4\\cdot 10^{-3}$, \n$\\tilde C_{X3}\\approx -1\\cdot 10^{-4}$, and\n$2 \\tilde C_{X2}\\approx \\tilde C_{X9}\\approx -7\\cdot 10^{-6}$, \nwhich comply with the naive dimensional estimate $C_i\\sim 1\/(16\\pi^2)$. \nUsing (\\ref{coeffbasis}) one can trade the triple-gauge \noperators for gauge-fermion vertices. 
In the basis we have been using 
in this paper we find
\begin{equation}
\begin{array}{ll}
\beta_1=\displaystyle\left[\frac{4}{v^2}(m_U+m_D)^2\delta^2+
e^2\left(\frac{7\delta^2}{5s_Z^2}-\frac{1}{c_Z^2}\right)\right]\xi\qquad & \\
C_{X1}=\displaystyle\left(1-\frac{2}{5}\delta^2\right)\xi; \qquad & 
C_{V-}=\displaystyle e^2\left[\frac{1}{c_Z^2}-\frac{1}{s_Z^2}
\left(1-\frac{2}{5}\delta^2\right)\right]\xi\\
C_{X2}=-\displaystyle\frac{2}{5}\delta^2\xi;\qquad & 
C_{V9}=\displaystyle\frac{e^2}{s_Z^2}(1+\delta^2)\xi\\
C_{X3}=-2\delta \xi;\qquad & C_{V10}=\displaystyle2\frac{e^2}{c_Z^2}\xi
\end{array}
\end{equation}
Repeating the numerical exercise, one finds
$\beta_1\approx 1.6\cdot 10^{-4}$, 
$C_{X1}\approx 4\cdot 10^{-3}$, $C_{X2}\approx -5\cdot 10^{-7}$, 
$C_{X3}\approx -1\cdot 10^{-4}$, and
$C_{V9} \approx -1.4 C_{V-}\approx 1.7 C_{V10}\approx 1.7\cdot 10^{-3}$. 
Two things are worth noticing: (i) the size of the triple-gauge operators is 
large enough to invert the sign of $C_{X1}$ in this change of basis,
while $|C_{X1}|$ remains essentially the same; 
(ii) $C_{X4}=C_{X5}=C_{X6}=0$ because constituent technicolor is CP-conserving. 

\subsection{$Z^{\prime}$ models}

We next consider models with a 
$Z^\prime$ \cite{Galison:1983pa,Langacker:2008yv,Langacker:2009su}, 
following the approach developed 
in~\cite{Babu:1997st}. The $Z^{\prime}$ is the gauge boson of a
local $U(1)^\prime$ symmetry and will be assumed to have a mass 
generated through a dynamical mechanism not necessarily related to 
electroweak symmetry breaking. 
Since we are interested in an EFT approach, we will not be concerned with the 
dynamical details. Within these assumptions, we will set to zero a bare 
$Z-Z^{\prime}$ mass-mixing term, implying that the Higgs sector of the standard 
model is charged under $U(1)_Y$, but not under $U(1)^\prime$, and 
{\it vice versa} for the Higgs sector of the $Z^\prime$.
In contrast, a kinetic 
mixing is in general allowed and will be included.

In formulating the $Z^\prime$ model we will use the chiral Lagrangian
description of the standard-model part, as given in (\ref{lsmlo}).
The results can then be interpreted in two different ways.
Either electroweak symmetry is dynamically broken and the nonlinear
chiral Lagrangian is non-renormalizable with a cutoff $\Lambda$ at
about a few TeV. In this case the $Z^\prime$ mass should be below that scale. 
The limit of interest is $v\ll M_{Z^\prime} < \Lambda$, in which case $Z^\prime$ 
is a light degree of freedom in the chiral Lagrangian, but still heavy enough
to be integrated out at the weak scale $v$. Alternatively, we may
consider the conventional renormalizable standard model with the Higgs field
written in polar coordinates, $H\equiv (\tilde\phi, \phi)=(v+h)U$,
and with the physical Higgs scalar $h$ disregarded, since it does not
enter in the applications of interest here. In this case the $Z^\prime$ mass
could be taken to be (much) larger than a few TeV.

The Lagrangian for the $Z^\prime$ model then reads 
\begin{equation}
{\cal{L}}={\cal L}_{SM,U}(\hat B)-
\frac{1}{4}\hat Z_{\mu\nu}^{\prime} \hat Z^{\prime\mu\nu}-
\frac{\sin\chi}{2} \hat Z_{\mu\nu}^{\prime} \hat B^{\mu\nu}
+\frac{\cos^2\chi}{2} M^2_{Z^\prime} \hat Z_{\mu}^{\prime} \hat Z^{\prime \mu}
-{\hat g}\sum_{j} \hat Y_j {\bar{f}}_j\gamma_{\mu}f_j \hat Z^{\prime\mu}
\end{equation}
Here ${\cal L}_{SM,U}(\hat B)$ is the lowest-order standard model
Lagrangian (\ref{lsmlo}) with the hypercharge gauge field identified 
with $\hat B$.
It is convenient to eliminate the kinetic mixing using\n\begin{align}\n\left(\n\begin{array}{c}\n{\hat{B}}_{\mu}\\{\hat{Z}}_{\mu}^{\prime}\n\end{array}\n\right)=\n\left(\n\begin{array}{cc}\n1 & -\tan\chi\\\n0 & 1/\cos\chi\n\end{array}\n\right)\n\left(\n\begin{array}{c}\nB_{\mu}\\Z_{\mu}^{\prime}\n\end{array}\n\right)\n\end{align}\nThis field redefinition modifies the $Z^{\prime}$ coupling to fermions and \ngenerates a coupling between $Z^{\prime}$ and the Goldstone fields. \nThe Lagrangian becomes\n\begin{align}\n{\cal{L}}={\cal L}_{SM,U}(B)\n&-\frac{1}{4}Z_{\mu\nu}^{\prime}Z^{\prime\mu\nu}+\n\frac{M_{Z^{\prime}}^2}{2}Z_{\mu}^{\prime}Z^{\prime \mu}\n+\frac{v^2}{8} g^{\prime 2}\tan^2\chi Z_{\mu}^{\prime}Z^{\prime \mu}\nonumber\\\n&-\bigg[\frac{v^2}{2}g^{\prime}\tan\chi\langle U^\dagger iD_\mu U T_3\rangle +\n\sum_j{\tilde{g}}_{j}{\bar{f}}_j\gamma_{\mu}f_j\bigg] Z^{\prime\mu}\n\end{align}\nwhere\n\begin{equation}\n{\tilde{g}}_{j}={\hat{g}}\frac{{\hat{Y}}_{j}}{\cos\chi}-\ng^{\prime}Y_{j}\tan\chi\n\end{equation}\nIntegrating out the $Z^{\prime}$ at tree level, and expanding to first order\nin $1/M^2_{Z^\prime}$, gives the effective Lagrangian\n\begin{align}\n{\cal{L}}_{eff}&={\cal{L}}_{SM}+\frac{v^4}{8M_{Z^{\prime}}^2}g^{\prime 2}\tan^2\chi\n\langle U^\dagger D_\mu U T_3\rangle^2\n-\sum_{i,j}\frac{{\tilde g}_i {\tilde g}_j}{2M_{Z^{\prime}}^2}\n({\bar{f}}_i\gamma_{\mu}f_i)({\bar{f}}_j\gamma^{\mu}f_j)\nonumber\\\n&-\frac{g^{\prime}v^2\tan\chi}{2M_{Z^{\prime}}^2}\sum_{j}{\tilde g}_j\n{\bar{f}}_j\gamma_{\mu}f_j \langle U^\dagger i D^\mu U T_3\rangle\n\end{align}\nFor $e^+e^-\to W^+W^-$ the only relevant operators that receive contributions \nare ${\cal{O}}_{\beta_1}$, ${\cal{O}}_{\psi V7}$ and ${\cal{O}}_{\psi V10}$.
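This diagonalization is easy to verify numerically: writing the kinetic quadratic form in the $(\hat B,\hat Z^\prime)$ basis as a matrix $K$ with off-diagonal entries $\sin\chi$, the transformation matrix $T$ of the field redefinition must satisfy $T^\top K T = \mathbb{1}$. A minimal sketch of this check (the test value of $\chi$ is an arbitrary assumption):

```python
import numpy as np

# Kinetic quadratic form with sin(chi) mixing in the (B-hat, Z'-hat) basis,
# and the field-redefinition matrix from the text; any chi works here.
chi = 0.3
s, c = np.sin(chi), np.cos(chi)
K = np.array([[1.0, s], [s, 1.0]])            # mixed kinetic form
T = np.array([[1.0, -s / c], [0.0, 1.0 / c]]) # (B-hat, Z'-hat) = T (B, Z')

# After the redefinition the kinetic terms are canonically normalized:
assert np.allclose(T.T @ K @ T, np.eye(2))
```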
\n(Here we will not discuss further the renormalization of $G_F$ due to \nthe 4-fermion operators, which is a subleading effect at large $s$.)\nThe coefficients read \n\begin{align}\label{coeffzprime}\n\beta_1&=\frac{v^2}{8M_{Z^{\prime}}^2}g^{\prime 2}\tan^2\chi\nonumber\\\nC_{V7}&=-\frac{g^{\prime}v^2\tan\chi}{2M_{Z^{\prime}}^2}{\tilde{g}}_{l}\nonumber\\\nC_{V10}&=-\frac{g^{\prime}v^2\tan\chi}{2M_{Z^{\prime}}^2}{\tilde{g}}_e\n\end{align}\nFor illustration we choose $M_{Z^\prime}=1$ TeV, $\sin\chi=0.3$, \n$\hat{g}=g^{\prime}$ and $\hat{Y}_l=\hat{Y}_e=-1$. With this choice of \nparameters one finds that \n$\beta_1\approx 0.9\cdot 10^{-4}$, $C_{V7}\approx 1.1\cdot 10^{-3}$\nand $C_{V10}\approx 0.9\cdot 10^{-3}$.\n\nThe numerical values in this example are similar to those of\nsec.~\ref{subsec:contc}. However, whereas the signs of the relevant couplings\nin sec.~\ref{subsec:contc} are essentially fixed, the signs could be flipped\nin the $Z^\prime$ scenario. This would lead to a clear discrimination between\nthe two models.\n\nFor completeness we will also comment on the linear case. Within the same \nassumptions, one can proceed in an analogous way, replacing the kinetic $U$ \nfield term by the corresponding term for the linear Higgs model.
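The quoted values for this illustration can be reproduced directly from (\ref{coeffzprime}). The short sketch below does so under assumed standard-model inputs that are not fixed in the text, namely $v=246$ GeV, a hypercharge coupling $g^\prime\approx 0.357$, and hypercharges $Y_l=-1/2$, $Y_e=-1$ in the $Q=T_3+Y$ convention:

```python
import math

# Assumed inputs not fixed in the text: v = 246 GeV, hypercharge coupling
# g' ~ 0.357, and standard hypercharges in the Q = T3 + Y convention
# (Y_l = -1/2 for the lepton doublet, Y_e = -1 for the right-handed electron).
v, gp, MZp = 246.0, 0.357, 1000.0
sin_chi = 0.3
cos_chi = math.sqrt(1.0 - sin_chi ** 2)
tan_chi = sin_chi / cos_chi
ghat, Yhat_l, Yhat_e = gp, -1.0, -1.0      # the parameter choice of the text
Y_l, Y_e = -0.5, -1.0

def g_tilde(Yhat, Y):
    """Effective Z' coupling to fermions after removing the kinetic mixing."""
    return ghat * Yhat / cos_chi - gp * Y * tan_chi

beta1 = v ** 2 / (8.0 * MZp ** 2) * gp ** 2 * tan_chi ** 2
pref = -gp * v ** 2 * tan_chi / (2.0 * MZp ** 2)
C_V7 = pref * g_tilde(Yhat_l, Y_l)
C_V10 = pref * g_tilde(Yhat_e, Y_e)
print(f"{beta1:.1e} {C_V7:.1e} {C_V10:.1e}")  # ~0.9e-4, ~1.1e-3, ~0.9e-3
```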
\nThe Lagrangian now takes the form\n\\begin{align}\n{\\cal{L}}={\\cal L}_{SM,\\phi}(B)\n&-\\frac{1}{4}Z_{\\mu\\nu}^{\\prime}Z^{\\prime\\mu\\nu}+\n\\frac{M_{Z^{\\prime}}^2}{2}Z_{\\mu}^{\\prime}Z^{\\prime \\mu}\n+\\frac{g^{\\prime 2}}{8}\\tan^2\\chi\\,\\phi^\\dagger\\phi\\, Z_{\\mu}^{\\prime}Z^{\\prime \\mu}\n\\nonumber\\\\\n&+\\bigg[\\frac{g^{\\prime}}{4}\\tan\\chi \n(\\phi^{\\dagger}i\\stackrel{\\leftrightarrow}{D}_{\\mu}\\phi)\n-\\sum_j{\\tilde{g}}_{j}{\\bar{f}}_j\\gamma_{\\mu}f_j\\bigg] Z^{\\prime\\mu}\n\\end{align}\nUpon integrating out the $Z^\\prime$ boson and matching to the linear basis of \n\\cite{Grzadkowski:2010es} one obtains the coefficients $z_1$, $z_4$, $z_6$, \nin the notation of section \\ref{sec:nlophi}. Their expression in \nterms of (\\ref{coeffzprime}) can be inferred from (\\ref{relcizi}).\n\n\n\\section{Comments on $W^+W^-$ production at the LHC}\n\\label{sec:wwlhc}\n\nIt is interesting at this point to discuss how the conclusions we have \nreached in our analysis for linear colliders extend to hadron colliders. \nAfter LEP~\\cite{Alcaraz:2006mx}, both \nTevatron~\\cite{Aaltonen:2009aa,Abazov:2009tr,Abazov:2009hk} and \nLHC~\\cite{ATLAS:2012mec,CMS:2013} have also studied $W^+W^-$ production and, \nmore generally, bounds on triple gauge couplings. The main advantage of a \nhadron collider over a linear one is that one can disentangle the anomalous \n$WWZ$ and $WW\\gamma$ contributions by looking at $W\\gamma$ \nproduction~\\cite{Dobbs1} and $WZ$ production~\\cite{Dobbs2}. $W^+W^-$ is \nafflicted with a larger background and, at least in principle, bounds are \nexpected to be less stringent.\n\nA full-fledged analysis of $W^+W^-$ production at the LHC deserves a separate \npaper. \n\nHere we will content ourselves with commenting on the \nqualitative features one would expect when an effective field theory point of \nview is adopted. For the qualitative approach we are pursuing it will suffice \nto work at the partonic level. 
The inclusion of parton distribution functions \n(PDFs), which are required in a complete analysis, will not affect our \nconclusions. A recent analysis of $W^+W^-$ production at the LHC, based on a \nsubset of the NLO operators in the linear-Higgs scenario, \ncan be found in \cite{Degrande:2012wf,Degrande:2013mh}.\n\nAt the operator level the only difference between $W^+W^-$ at linear and \nhadron colliders arises in the initial state vertex \n(both in $s$ and $t$ channels), where the hadronic initial \nstate has twice as many operators as the leptonic one. \nTo be more precise, while in $e^+e^-$ colliders one finds the 3 combinations\n\begin{align}\n\frac{1}{2} {\cal{O}}_{\psi V7} - {\cal{O}}_{\psi V8}, \quad\n{\cal{O}}_{\psi V9}+{\cal{O}}_{\psi V9}^{\dagger},\quad\n{\cal{O}}_{\psi V10},\n\end{align} \nin a $pp$ collider 6 operators contribute, namely\n\begin{align}\n\frac{1}{2} {\cal{O}}_{\psi V1} \pm {\cal{O}}_{\psi V2},\quad\n{\cal{O}}_{\psi V3}+{\cal{O}}_{\psi V3}^{\dagger},\quad\n{\cal{O}}_{\psi V6}+{\cal{O}}_{\psi V6}^{\dagger},\quad\n{\cal{O}}_{\psi V4},\quad\n{\cal{O}}_{\psi V5}\n\end{align}\nThe first thing to notice is that while in $e^+e^-\to W^+W^-$ one can trade the \ngauge-fermion operators for triple gauge operators, thereby eliminating \nthem altogether, in $pp\to W^+W^-$ this is no longer possible: gauge-fermion \noperators cannot be omitted in general. Obviously \none can still work in a basis where 3 of the gauge-fermion operators are \nremoved. This is however a rather arbitrary choice, which might be sensible \nfor a specific process but not for a global electroweak fit. When one is \ninterested in fitting more than one process, given the larger number of \nfermions compared to gauge bosons, it seems more natural to remove the triple \ngauge operators instead. \n\nEven without a detailed analysis one can anticipate the structure of the \ndominant new physics contribution to $pp\to W^+W^-$.
Since at $\\sqrt{s}=7$~GeV \nthe invariant mass of the W pair $\\hat{s}$ satisfies \n$M_W^2\\ll {\\hat{s}}\\ll \\Lambda^2$, a large-${\\hat{s}}$ expansion is warranted. \nUsing the equivalence theorem as in section~\\ref{sec:helequiv}, \none can easily conclude \nthat 5 out of the 6 gauge-fermion operators contribute at leading-${\\hat{s}}$, \nwhose precise coefficients can be determined once PDFs are included. \nTherefore, $W^+W^-$ production, somewhat against the common lore, can actually \nbe used both at linear and hadron colliders as an excellent probe of new \nphysics in the gauge-fermion sector. \n\n\n\\section{Conclusions}\n\\label{sec:concl}\n\nIn this paper we have analyzed new physics contributions to\nthe process $e^+e^-\\to W^+W^-$, consistently using an\neffective field theory treatment. The essential aspects\nand results can be summarized as follows:\n\n\\begin{itemize}\n\\item\nThe analysis employs the most general basis of next-to-leading order \noperators in the electroweak chiral Lagrangian.\n\\item\nComplete relations between the anomalous couplings and\nthe NLO coefficients in the effective Lagrangian have been derived.\nThe anomalous couplings include those that modify gauge-fermion interactions.\n\\item\nEquations-of-motion constraints have been discussed and used to eliminate\nredundant operators in order to work with a minimal basis of NLO terms.\nThe redundancy relations imply consistency checks of the relations \ndescribed in the previous item.\n\\item\nPolarized cross sections have been computed for $e^+e^-\\to W^+W^-$ with \nboth $W$'s on-shell, and with an emphasis on relative corrections \nto first order in the new-physics coefficients.\nSpecifically, both right- and left-handed electrons, and $W$'s \nwith longitudinal ($L$) or transverse ($T$) polarization ($LL$, $LT$, $TT$)\nhave been considered, as well as the case of an unpolarized $W$ pair.\n\\item\nCP-odd operators do not contribute to the considered observables.\n\\item\nOf 
particular interest for colliders in the TeV range is the high-energy,\nor large-$s$ limit, $M^2_W\ll s\ll\Lambda^2$. The relative corrections \nto the cross sections were quoted explicitly through ${\cal O}(s/M^2_W)$ and \n${\cal O}(1)$ in an $M^2_W/s$ expansion, emphasizing the terms\nthat grow with $s$.\n\item\nThe relative corrections growing with $s$ have been discussed and explained\nwith the help of the Goldstone-boson equivalence theorem.\n\item\nThe choice of a basis for the NLO operators is arbitrary in principle\nand cannot affect the physics. For illustration we have discussed two \npossible bases and the relation between them. The basis without\nredundant triple-gauge boson operators but with all gauge-fermion terms\nappears as a convenient choice.\n\item\nOur results, obtained within the chiral Lagrangian framework,\nhave also been expressed in terms of the basis of dimension-six operators \nin the standard model with a linearly realized Higgs sector.\nThe translation is straightforward in the case of $e^+e^-\to W^+W^-$.\n\item\nThe potential size of the new physics coefficients has been estimated using\nnaive dimensional counting ($C_i\sim 1/16\pi^2$) and \nexplicit models (constituent technicolor, $Z'$).\n\end{itemize}\n\nThe framework discussed here should be useful to identify\nand to interpret new physics effects from the dynamics\nof electroweak symmetry breaking in studies of $e^+e^-\to W^+W^-$ \nat a TeV-scale linear collider in a systematic way. \nA similar approach can be pursued for many other collider\nobservables with other final states as well.\nOf interest will also be the application to $W$ pair production\nat the LHC. Recent measurements \cite{ATLAS:2012mec,CMS:2013}\nshow somewhat enhanced cross sections for this process.
Although the deviation \nfrom the standard model is not significant at present, such effects\ncould well be the signature of new physics as described by NLO terms in \nthe electroweak effective Lagrangian. The rise with energy of these\neffects provides an exciting opportunity, both for the future\nrunning of the LHC at $14\,{\rm TeV}$ and for\n$e^+e^-\to W^+W^-$ at a linear collider. \n\n\n\\section{Introduction}\n\\label{sec:Introduction}\n Recently, interest has increased in developing scalable data assimilation (DA) and uncertainty quantification methodologies for solving large-scale inverse problems.\n An inverse problem refers to the retrieval of a quantity of interest (QoI) associated with or stemming \n from a physical phenomenon underlying partial noisy experimental or observational data of that physical \n system~\cite{aster2018parameter,vogel2002computational,stuart2010inverse}.\n The QoI could be, for example, the model state, initial condition, or other physics quantity. \n Inverse problems are prominent in a wide spectrum of applications including \n power grids and atmospheric numerical weather prediction~\cite{ghil1991data,smith2013uncertainty}.\n In these problems, the prediction of the physical phenomena is often formulated as an initial value problem, \n while the initial condition of the simulator is corrected by fusing all available information. \n %\n Algorithmic approaches for solving inverse problems seek either a single-point estimate of the target QoI \n or a full probabilistic description of the knowledge about the QoI given all available information.\n The underlying principle of these methods is that information collected from observational systems \n is fused into computational models, along with associated uncertainties, to produce an accurate estimate \n of the ground truth of the physical phenomena of interest.
\n %\n In the former approach seeking a single QoI estimate, the solution of an inverse problem is obtained \n by solving an optimization problem with an objective to minimize the mismatch between observational data \n and model simulations, possibly regularized by prior knowledge and uncertainty models.\n The latter approach, commonly known as Bayesian inversion, seeks to characterize the \n probability distribution of the QoI through the posterior formulated by applying Bayes' rule, \n that is, the probability distribution of the QoI conditioned by all available information. \n \n DA methods~\\cite{bannister2017review,daley1993atmospheric,navon2009data,smith2013uncertainty,ghil1991data,attia2015hmcfilter,attia2015hmcsampling,attia2015hmcsmoother,attia2017reduced,attia2019dates} \n aim to solve large- to extreme-scale inverse problems. They work by fusing \n information obtained from multiple sources, such as the dynamical model, prior knowledge, \n noisy and incomplete measurements, and error models, in order to better estimate the state and parameters \n of the physical system.\n This estimate improves the predictability of the simulation systems developed to make future predictions \n about the physical phenomena of interest.\n \n The quality of DA systems, and hence the accuracy of their predictions, is heavily influenced by the extent to \n which the mathematical assumptions reflect reality and depends on the quality of the collected measurements. \n Optimal data acquisition is the problem of determining the optimal observational strategy, \n for example, from a large set of candidate observational schemes. \n This problem is widely formulated as an optimal experimental design (OED) \n problem~\\cite{fedorov2000design,pukelsheim2006optimal}, where the design parameterizes and thus determines \n the observational configuration. 
\n In an OED problem, a design is defined to characterize a candidate configuration or a control, and the \n quality of the design is quantified by using a utility function.\n The optimal design is then defined as the one that maximizes this utility function or, equivalently, \n minimizes some OED criterion~\\cite{attia2022optimal}.\n Since the aim of Bayesian inference is to estimate the QoI posterior, Bayesian OED seeks an observational \n configuration that, when combined with the underlying dynamics, would maximize information gain from the \n data or minimize the posterior uncertainty. \n Thus, an optimal design is found by solving an optimization problem with the objective to maximize a \n utility function that quantifies the quality of the design and its influence on the solution \n of the inverse problem. \n OED for inverse problems has experienced a recent surge in interest by the scientific computing community; \n see, for example,~\\cite{alexanderian2021optimal} and references therein.\n\n Numerical testing and experiments are critical for developing efficient \n OED formulations and algorithms. This process is elementary for successful scientific research in general. \n Although statisticians have developed a plethora of mathematical formulations and algorithmic approaches \n for general-purpose OED algorithms, most of the available and publicly accessible OED software tools are \n limited to idealized formulations and specific applications such as finding optimal collocation points for \n regression problems or designing clinical experiments. 
\n In addition, they are written in the \textrm{R} programming language or MATLAB, thus limiting code reutilization \n and accessibility by a wider audience; \n see, for example,~\cite{foracchia2004poped,wheeler2019package,rasch2011optimal,tian2021autooed,rasch2009software}.\n \n Unfortunately, these tools do not align well with the interests of the increasingly large computational \n science community for developing new OED formulations and algorithmic approaches for inverse problems, \n DA, and model-constrained OED~\cite{attia2018goal,alexanderian2016fast,huan2013simulation,bui2013computational,flath2011fast,haber2008numerical,haber2009numerical,pukelsheim2006optimal,pronzato2013design}. \n As a first step in alleviating this limitation, we present and describe {PyOED}\xspace, \n a highly extensible open-source Python package written mainly to enable computational scientists to formulate and rapidly test OED---as well as DA---formulations and algorithmic approaches.\n\n \n {PyOED}\xspace is unique in several ways. \n %\n First, to the best of our knowledge, it is the first open-source package for scientific computing that allows implementing and testing the individual components of DA as well as OED systems in a unified and streamlined environment. \n %\n Second, it is written in Python, which is arguably the most popular and widely adopted programming language for recent algorithmic developments in the computational science disciplines. \n It has a huge user-support community, and the learning curve is smoother than that of lower-level programming languages such as C/C++. \n %\n Third, {PyOED}\xspace is designed in an object-oriented programming (OOP) fashion, which enables practitioners to reconfigure and reuse the individual building blocks.
\n Moreover, it is easy to combine {PyOED}\\xspace with other user-defined routines, such as numerical integration of simulation models and optimization routines, which makes {PyOED}\\xspace highly extensible and adaptable to a wide range of applications. \n %\n Fourth, {PyOED}\\xspace is not limited to a specific inverse problem formulation. Thus new DA and OED methods can be implemented and even interface with other inversion packages, for example, hIPPYlib~\\cite{villa2018hippylib}.\n %\n Fifth, {PyOED}\\xspace leverages best practices in software development, including detailed documentation with hands-on examples and robust unit-testing techniques.\n \n The rest of this paper is organized as follows. \n Section~\\ref{sec:Background} provides the mathematical formalism of inverse problems, DA, and OED.\n Section~\\ref{sec:PyOED} describes the structure and the philosophy of the {PyOED}\\xspace package.\n In Section~\\ref{sec:Test_cases} we provide a list of numerical experiments to demonstrate \n the general workflow and usage of {PyOED}\\xspace.\n Concluding remarks are given in Section~\\ref{sec:Conclusions}.\n\n\n\\section{Mathematical Background}\n\\label{sec:Background}\n In this section we provide a brief overview of the mathematical background of the {PyOED}\\xspace core, \n which is important for approaching the OED problem for Bayesian inversion. \n In this presentation we focus on sensor placement for Bayesian inversion as a modal OED formulation. \n We start by discussing the forward problem in~\\ref{subsec:forward_problem}; then we introduce the \n inverse problem in~\\ref{subsec:inverse_problem} and \n the OED formalism in~\\ref{subsec:OED}.\n\n \n \\subsection{The forward problem}\n \\label{subsec:forward_problem}\n \n The forward problem maps the model parameters (e.g., the initial condition) onto the observation space. 
\n Consider the forward problem described by\n \n \\begin{subequations}\\label{eqn:forward_problem}\n \\begin{equation}\\label{eqn:forward_problem_base}\n \\obs = \\Fcont(\\param) + \\obsnoise \\,,\n \\end{equation}\n %\n where $\\param$ is the model parameter of interest, \n $\\obs \\in \\Rnum^{\\Nobs}$ is the observation, and\n $\\obsnoise \\in \\Rnum^{\\Nobs}$ is a noise term that accounts for the inaccuracy of the observational system.\n %\n The forward operator $\\Fcont$ is occasionally referred to as the ``parameter-to-observable'' map \n and generally represents a composition of a simulation\/solution model $\\Sol$ and an observation \n operator $\\mat{O}$.\n The simulation model $\\Sol$ describes the evolution of the physical phenomena, for example, \n space-time advection and diffusion of a contaminant simulated over a predefined model grid.\n %\n The observation operator $\\mat{O}$ projects the simulated state onto the observational grid, \n for example, by interpolation or restriction to the observation grid.\n Thus, the forward problem~\\eqref{eqn:forward_problem_base} can be rewritten as\n %\n \\begin{equation}\\label{eqn:forward_problem_decomposed}\n \\obs = \\mat{O} \\circ \\Sol(\\param) + \\obsnoise \\,,\n \\end{equation}\n %\n \\end{subequations}\n %\n where $\\circ$ is the composition operator, that is, $\\mat{O} \\circ \\Sol(\\param)\\equiv \\mat{O} \\left( \\Sol(\\param)\\right)$.\n \n\n \n \n \n It is impossible to find a simple unique formalism of the simulation model that accurately represents \n all possible dynamical systems. 
\n In this work and in the implementation of {PyOED}\xspace, we distinguish between two types of simulation models: \n time-independent and time-dependent simulations.\n %\n A wide range of time-dependent simulation models for dynamical systems governed by \n partial differential equations (PDEs) can be described as \n %\n \begin{equation}\label{eqn:dudt}\n \dfrac{\partial \boldsymbol{u}}{\partial t} \n = \boldsymbol{f}(\boldsymbol{u}(\boldsymbol{x}, t, \mu)),\n \end{equation}\n %\n where $\boldsymbol{u}$ represents the prognostic variable(s) (e.g., physics), \n $\boldsymbol{x}$ denotes the spatial coordinates, \n and $\mu$ defines the physics parameters of the model.\n %\n To solve the simulation model $\Sol$ numerically, we employ spatial discretization and temporal integration \n routines. \n If we follow a \emph{discretize-then-optimize} approach, we can rewrite~\eqref{eqn:dudt} in the form \n %\n \begin{equation}\label{eqn:dudt_discretized}\n \x_n := \x(t_{n}) = \Sol_{t_{n-1}\rightarrow{t_n}}(\x_{n-1}, \mu),\n \end{equation}\n %\n where $\x_n := \x(t_{n}) \equiv \x(t_{n},\boldsymbol{x}, \mu) $ is the \n model state at time instance $t_n$ and $\Sol_{t\rightarrow t+\Delta t}$ is a one-time step \n transition mapping that results from the application of a standard spatial discretization \n method (e.g., finite difference, finite volume, or finite element) and \n time integration scheme (e.g., Runge--Kutta routine) with a step size $\Delta t$.
\n %\n Thus, the prediction at time $t_n$ can be related to the initial condition $\x(t_0)$ and \n the model parameters using the recursive application of the mapping $\Sol$ \n over a time interval $[t_0, t_n]$ as\n %\n \begin{equation}\label{eqn:umap}\n \x_n = \Sol_{t_{n-1}\rightarrow {t_n}} \circ \dots \circ \Sol_{t_0 \rightarrow {t_1}}(\x_0,\mu) \,.\n \end{equation}\n %\n \n In space-time formulations, one can stack the model state in one long vector \n $\n \x := \left( \x_n\tran, \ldots,\x_0\tran\right)\tran \,\n $ \n and define the solution operator $\Sol$ as a block operator (e.g., a block matrix) that operates \n recursively on the components of $\x$. \n This would enable unifying the formulation---to some extent---of the forward problem into \n the form~\eqref{eqn:forward_problem_base}.\n\n Since the observational measurements (e.g., sensory data) are not necessarily the same as the model \n state, we define the state-to-observable mapping $\mat{O}_n(\cdot)$ to map the \n state $\x_n$ onto the observation space, for example, by restricting that state \n onto the observational grid points, as \n %\n \begin{equation}\label{eqn:ymap}\n \obs(t_n) = \mat{O}_n(\x(t_{n})) \,,\n \end{equation}\n %\n which then can be used to construct a general observational vector \n $\obs \in \Rnum^{\Nobs}$ representing spatiotemporal data, for example, by stacking observations \n at multiple time points.\n\n Note that while we focused the discussion above on time-dependent problems, the case of \n time-independent simulations can be thought of as a special case of~\eqref{eqn:umap} where \n $\Sol$ maps, for example, the physics parameter $\mu$ to a model state $\x$ and \n the time index is dropped.\n %\n In most inverse problem formulations, based on the application of interest, the inversion parameter \n $\iparam$ stated in~\eqref{eqn:forward_problem_base} stands for the model physics $\mu$, the model initial\n
condition $\\x_0$, or both.\n %\n Note also that we have omitted details including adaptive time stepping where $\\Delta t$ is adaptively \n adjusted to guarantee stability and accuracy of the time integration methodology.\n We intentionally remove such details from the discussion here to simplify the presentation and \n focus more on OED. \n In fact, the OED routines in {PyOED}\\xspace are designed to solve OED problems where the utility function \n is regarded as a black box, thus abstracting the OED capabilities from the inverse problem definition. \n For these reasons, we take~\\eqref{eqn:forward_problem} to be an acceptable simplification that \n describes a forward problem setup that is general enough for our purposes in this paper. \n\n Both the simulation model $\\Sol$ and the observation operator $\\mat{O}$ are imperfect and generally \n include sensory noise and representativeness errors characterizing imperfection of the map between the \n model space and observation space.\n The fact that model observations $\\Fcont(\\iparam)$ are not perfectly aligned with observational data \n ($\\obs$) is modeled---assuming additive noise---by adding the noise term $\\delta$ to the \n simulated observations $\\Fcont(\\iparam)$. 
\n %\n In most applications, the observational noise follows a Gaussian distribution\n $\\obsnoise \\sim \\GM{\\vec{0}}{\\Cobsnoise}$, where $\\Cobsnoise$ is the observation error covariance \n matrix that captures uncertainty stemming from sensory noise and representativeness errors.\n In this case, the data likelihood is\n %\n \\begin{equation} \\label{eqn:Gaussian_likelihood}\n \\CondProb{\\obs}{ \\param } \\propto\n \\exp{\\left( - \\frac{1}{2}\n \\sqwnorm{ \\Fcont(\\param) - \\obs }{ \\Cobsnoise\\inv } \\right) } \\,,\n \\end{equation}\n %\n where the matrix-weighted norm in~\\eqref{eqn:Gaussian_likelihood} is defined as\n $\\sqwnorm{\\vec{x}}{\\mat{A}} = \\vec{x}\\tran \\mat{A} \\vec{x} $ for a vector $\\vec{x}$ and \n a square symmetric matrix $\\mat{A}$ of conformable sizes.\n\n\n \n \\subsection{The inverse problem}\n \\label{subsec:inverse_problem}\n \n An inverse problem refers to the retrieval of the model parameter $\\param$ from noisy \n observation $\\obs$, conditioned by the model dynamics.\n %\n This can be achieved by finding a point estimate or by building a complete probabilistic \n description as discussed in Section~\\ref{sec:Introduction}. \n In the former, an optimization problem is solved to minimize the mismatch between the expected \n observations (through simulation models) and real data. 
\n This is typically employed in variational DA methods where the estimate of the true \n parameter is obtained by minimizing a regularized log-likelihood objective, with the regularization \n enforcing smoothness or background information on the parameter.\n In this case, a point estimate of the true $\param$ is obtained by solving\n %\n \begin{equation}\label{eqn:fdvar}\n \argmin_{\param} \obj(\param) \n := \frac{1}{2} \sqwnorm{ \Fcont(\param) - \obs }{ \Cobsnoise\inv } \n + \frac{1}{2} \sqwnorm{ \param - \paramprior }{ \Cparamprior\inv } \,, \n \end{equation}\n %\n where $\paramprior$ is an initial guess of the unknown true value of $\param$.\n In general, the second term is added to enforce regularization or prior knowledge on the solution, for example,\n if the solution is assumed a priori to follow a Gaussian distribution $\param\sim\GM{\paramprior}{\Cparamprior}$.\n \n %\n Uncertainty envelopes around the single-point estimate obtained by solving~\eqref{eqn:fdvar} can be \n developed, for example, by using the Laplace approximation~\cite{tierney1986accurate} where the posterior is approximated \n by a Gaussian distribution. \n This approach has been successfully employed in infinite-dimensional Bayesian inversion \n problems~\cite{stuart2010inverse}.\n %\n A fully Bayesian approach, on the other hand, aims to provide a consistent probabilistic description \n of the unknown parameter along with the associated uncertainties and is not limited to Gaussian \n distributions.
\n This is achieved by describing the posterior, that is, the probability distribution of \n the model parameter $\param$ conditioned by the available simulations and noisy data $\obs$, and is obtained by applying a form of Bayes' theorem\n \begin{equation}\label{eqn:Bayes}\n \CondProb{\param}{\obs} \propto \CondProb{\obs}{\param} \Prob(\param) \,, \n \end{equation}\n %\n where $\Prob(\param)$ is the prior, $\CondProb{\obs}{\param}$ is the data likelihood, and \n $\propto$ indicates removal of a normalizing constant in the right-hand side of~\eqref{eqn:Bayes}.\n For further details on Bayesian inversion see, for example,~\cite{smith2013uncertainty,stuart2010inverse}.\n Given the posterior~\eqref{eqn:Bayes}, one can use the maximum a posteriori (MAP) point as an estimate \n of the true unknown QoI or follow a Monte Carlo approach to sample the posterior, thus building a complete probabilistic picture; \n see, for example,~\cite{attia2017reduced}.\n %\n Both the variational and the Bayesian inference approaches provide a plethora of techniques for \n statistical data analysis in general, and specifically for solving inverse problems.\n\n The Bayesian perspective provides a formal mathematical ground for estimating the physical QoI, \n for example, the model parameter $\iparam$, along with the associated uncertainties given \n the available sources of information. \n In many cases, however, this inversion is an intermediate step, and the goal QoI is a function \n of the model parameter, that is, $\pred :=\PredOper(\param)$.\n A goal-oriented approach is followed in this case where one aims to inspect the posterior \n of the QoI conditioned by the available data~\cite{LiebermanWillcox13,LiebermanWillcox14}.
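Both routes---the variational point estimate of~\eqref{eqn:fdvar} and Monte Carlo sampling of the posterior~\eqref{eqn:Bayes}---can be illustrated on a one-dimensional toy problem where the exact (conjugate Gaussian) posterior is known in closed form. All numerical values in this sketch are assumptions chosen only so the answer can be verified:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)

# Toy 1-D setup (all numbers are assumptions): prior m ~ N(0, prior_var),
# linear forward model F, Gaussian noise with variance noise_var, datum y.
F, noise_var, prior_var, y = 2.0, 0.25, 1.0, 1.0

def neg_log_posterior(m):
    # Regularized misfit of eqn. (fdvar) = negative log posterior + const.
    return 0.5 * (F * m - y) ** 2 / noise_var + 0.5 * m ** 2 / prior_var

# Variational route: a point (MAP) estimate by numerical optimization.
map_est = minimize_scalar(neg_log_posterior).x

# Fully Bayesian route: random-walk Metropolis sampling of the posterior.
samples, m_cur = [], 0.0
for it in range(30000):
    m_prop = m_cur + 0.5 * rng.standard_normal()
    if np.log(rng.random()) < neg_log_posterior(m_cur) - neg_log_posterior(m_prop):
        m_cur = m_prop
    if it >= 5000:                   # discard burn-in
        samples.append(m_cur)

# Conjugate Gaussian analysis gives the exact posterior mean (= MAP here):
exact = (F * y / noise_var) / (F ** 2 / noise_var + 1.0 / prior_var)  # 8/17
assert abs(map_est - exact) < 1e-6
assert abs(np.mean(samples) - exact) < 0.05
```

For this linear Gaussian toy problem the MAP point coincides with the posterior mean, so both routes must agree with the conjugate formula.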
\n \n\n \n \n \\paragraph{The ideal case: linear Gaussian problems}\n \n If the forward operator $\\Fcont$ is linear (or linearized), and assuming Gaussian observational noise \n and a Gaussian prior\n $\\GM{\\iparb}{\\Cparampriormat}$, then the posterior is Gaussian $\\GM{\\ipara}{\\Cparampostmat}$ with\n %\n \\begin{equation}\\label{eqn:Gaussian_Posterior_Params}\n \\Cparampostmat = \\left(\\F \\adj \\Cobsnoise\\inv \\F\n + \\Cparampriormat\\inv \\right)\\inv \\,, \\quad\n %\n \\ipara = \\Cparampostmat \\left( \\Cparampriormat\\inv \\iparb\n + \\F\\adj \\Cobsnoise\\inv \\, \\obs \\right) \\,,\n \\end{equation}\n %\n where $\\F \\equiv \\Fcont$ is the forward model and $\\F\\adj$ is the associated adjoint.\n %\n Despite being simple, this setup~\\eqref{eqn:Gaussian_Posterior_Params} is of utmost importance in \n the Bayesian inversion and OED literature and is elementary for testing implementations of \n new DA and OED approaches, mainly because the posterior can be formulated exactly.\n %\n Moreover, in many large-scale applications, the posterior can be approximated, \n to an acceptable degree, by a Gaussian distribution obtained by \n linearizing the nonlinear operator $\\Fcont$ around the MAP estimate. The linearized model is also known as the tangent linear model (TLM) obtained by differentiating $\\Fcont$.\n \n \n As mentioned earlier, inversion for the parameter $\\param$ is often an intermediate stage, and \n the end-goal is to describe the posterior of a general QoI that is not the model parameter $\\iparam$ \n but rather a goal quantity $\\pred$ that depends on the inversion parameter $\\iparam$. 
\n %\n Specifically, goal-oriented inversion seeks the posterior\n $\n \\CondProb{\\pred}{\\obs} \\propto \\Like{\\obs}{\\pred, \\iparam} \\Prob(\\pred)\\,,\n $\n %\n where $\\Prob(\\pred)$ is a prior on the goal QoI and \n $\n \\Like{\\obs}{\\iparam} = \\Like{\\obs}{\\pred,\\iparam} \n $ \n is the data-likelihood~\\eqref{eqn:Gaussian_likelihood}, where $\\pred$ is determined completely \n by $\\iparam$.\n %\n We focus the discussion here on the case of linear prediction operators $\\Predmat$. \n That is, we consider prediction quantities of the form\n %\n \\begin{equation}\\label{eqn:GOOED_linear_QoI}\n \\pred = \\Predmat \\iparam,\n \\end{equation}\n %\n where $\\Predmat$ is a linear prediction operator. \n Within the Gaussian linear setting, the prior of the goal QoI $\\pred$ is \n $\\GM{\\predb}{\\Cpredpriormat}$ with\n %\n \\begin{equation}\\label{eqn:prior_prediction_PDF}\n \\predb = \\Predmat \\iparamb,\n \\qquad\n \\Cpredpriormat = \\Predmat \\Cparampriormat \\Predmat^*,\n \\end{equation}\n %\n where $\\Predmat^*$ is the adjoint of the prediction operator $\\Predmat$.\n %\n The posterior distribution of the prediction $\\pred$, conditioned by the observations $\\obs$, \n is also Gaussian and is given by $\\GM{\\preda}{\\Cpredpostmat}$, where\n \n \\begin{equation}\\label{eqn:posterior_prediction}\n \t\\begin{aligned}\n \\preda = \\Predmat \\iparama\\,, \\qquad \\Cpredpostmat = \\Predmat \\Cparampostmat \\Predmat^*\n %\n = \\Predmat\\, \\left( \\F \\adj \\Cobsnoise\\inv \\F + \\Cparampriormat\\inv \\right)\\inv \\, \\Predmat^*.\n \\end{aligned}\n \\end{equation}\n %\n \n Note that goal-oriented Bayesian inversion reduces to the standard formulation of a Bayesian \n inverse problem if the prediction operator $\\PredOper$ is an identity operator. \n\n\n \n \\subsection{Optimal experimental design}\n \\label{subsec:OED}\n \n Here we outline the basics of an OED problem for Bayesian inversion. 
\n An excellent review of recent advances on model-constrained OED can be found \n in~\\cite{alexanderian2020optimal}. An OED optimization problem takes the general form\n %\n \\begin{equation}\\label{eqn:OED_optimization}\n \\design\\opt\n = \\argmax_{\\design} \\,\n \\mathcal{U}(\\design) \\,,\n \\end{equation}\n %\n where $\\mathcal{U}$ is a predefined utility function that quantifies the quality of the design $\\design$.\n %\n The nature of $\\design$ depends on the application at hand, \n and the utility function $\\mathcal{U}$ is chosen to define what an ``optimal'' design means \n for the problem under consideration. \n %\n The optimization problem~\\eqref{eqn:OED_optimization} is often associated with \n an auxiliary sparsity-enforcing term $ - \\regpenalty \\penaltyfunction{\\design}$ to prevent dense designs \n and to reduce the cost associated with deploying observational sensors. \n The utility function can then be written as\n %\n \\begin{equation}\n \\mathcal{U}(\\design) = \\Psi(\\design) - \\regpenalty \\penaltyfunction{\\design} \\,,\n \\end{equation}\n %\n where $\\Psi(\\cdot)$ is an OED optimality objective, referred to hereafter as the ``optimality criterion,'' which is defined based \n on the inverse problem at hand and on a chosen criterion (e.g., from the well-known OED alphabetic criteria). \n The function $\\penaltyfunction{\\design}$ enforces regularization or sparsity of the design. \n For example, it could encode a resource constraint such as\n $\\sum_{i=1}^{\\Nsens}\\design_i = \\wnorm{\\design}{} \\leq k $\\,, \\,\\, or\n $\\sum_{i=1}^{\\Nsens}\\design_i = \\wnorm{\\design}{} = k \\,; k\\in \\mathbb{Z}_{+}\\,,$\n that is, an upper bound (or an exact budget) on the number of active sensors.\n It could also be a sparsifying (possibly nondifferentiable) function, \n for example, $\\wnorm{\\design}{0}$ or $\\wnorm{\\design}{1}$.\n %\n \n\n Generally speaking, we seek a design that maximizes the utility function $\\mathcal{U}$. 
Other \n formulations, however, involve minimization of an OED optimality criterion; see, for example,~\\cite{attia2018goal}. \n Both formulations are adopted in the OED literature and in {PyOED}\\xspace and are often equivalent, as explained below.\n %\n In inverse problems, a design can be associated with the observational configuration \n and thus can be used to select an optimal observational policy. \n For example, a design can be defined to select sensor locations or the temporal observation frequency \n that provide accurate predictions with minimum uncertainty, or it can be defined to select \n an observational configuration that guarantees maximum information gain from the data.\n\n \n \\paragraph{OED for sensor placement}\n \n In sensor placement we associate a binary design variable $\\design_i$ with the $i$th \n candidate sensor location, with $1$ indicating that the sensor is activated and $0$ that it is deactivated. \n This defines the design as a binary vector $\\design\\equiv\\binarydesign\\in\\{0,1\\}^{\\Nsens}$ \n whose entries collectively define the observational configuration.\n In this case, the OED problem~\\eqref{eqn:OED_optimization} takes the form\n %\n \\begin{equation}\\label{eqn:binary_OED_optimization}\n \\design\\opt\n = \\argmax_{\\design \\in \\{0, 1\\}^{\\Nsens}} \\mathcal{U}(\\design) := \n \\Psi(\\design) - \\regpenalty \\penaltyfunction{\\design} \\,.\n \\end{equation}\n %\n\n\n In Bayesian OED for sensor placement, the observation covariance $\\Cobsnoise$ is replaced with a weighted version\n $\\wdesignmat(\\design)$, resulting in the weighted data-likelihood\n \n \\begin{subequations}\n %\n \\begin{equation}\\label{eqn:weighted_joint_likelihood}\n \\Like{\\obs}{ \\param; \\design}\n \\propto \\exp{\\left( - \\frac{1}{2} \\sqwnorm{ \\F(\\param) - \\obs}{\\wdesignmat(\\design)} \\right) } \\,,\n \\end{equation}\n %\n where the weighted observation precision matrix takes the form \n %\n 
\\begin{equation}\\label{eqn:Shur_weighted_Precision}\n %\n \\wdesignmat(\\design)\n := \\pseudoinverse{ \\designmat(\\design) \\Cobsnoise \\designmat(\\design) }\n = \\proj\\tran(\\design) \\left(\n \\proj(\\design) \\Bigl(\\designmat(\\design) \\Cobsnoise \\designmat(\\design) \\Bigr) \\proj\\tran(\\design)\n \\right)\\inv \\proj(\\design) \\,,\n \\end{equation}\n %\n \\end{subequations}\n \n where $\\dagger$ denotes the Moore--Penrose (pseudo) inverse \n and $\\designmat(\\design) :=\\diag{\\design}$ is a diagonal matrix with the binary design\n $\\design\\in\\{0, 1\\}^{\\Nsens}$ on its diagonal.\n %\n $\\proj(\\design)$ is a sparse restriction matrix that extracts from $\\designmat(\\design) \\Cobsnoise \\designmat(\\design)$ the \n rows\/columns corresponding to nonzero (active) design entries; see~\\cite{attia2022optimal} for further details.\n\n\n \n \n \\paragraph{The utility function }\n \n \n In the case of linear Bayesian inversion, the posterior is Gaussian with the covariance being \n independent of the actual realizations of the data, as shown by~\\eqref{eqn:Gaussian_Posterior_Params} \n and~\\eqref{eqn:posterior_prediction}. 
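For a small dense problem, the equivalence between the pseudo-inverse form and the restriction form of the weighted precision matrix above can be checked numerically; all matrices below are arbitrary toy data, not PyOED code:

```python
import numpy as np

rng = np.random.default_rng(seed=2)
ns = 5                                         # number of candidate sensors
A = rng.standard_normal((ns, ns))
C_noise = A @ A.T + ns * np.eye(ns)            # SPD observation-noise covariance
design = np.array([1.0, 0.0, 1.0, 1.0, 0.0])   # binary design: sensors 0, 2, 3 active

D = np.diag(design)
W = np.linalg.pinv(D @ C_noise @ D)            # Moore-Penrose pseudo-inverse form

# Equivalent restriction form: invert only the active block and pad with zeros.
active = np.flatnonzero(design)
W_restricted = np.zeros_like(C_noise)
W_restricted[np.ix_(active, active)] = np.linalg.inv(C_noise[np.ix_(active, active)])
```

The restriction form avoids forming and pseudo-inverting the full (rank-deficient) matrix, which is why it is preferred in practice.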
\n This fact enables designing observational policies before actually deploying the observational sensors.\n Specifically, in linear Bayesian OED, we set the objective to minimize a scalar summary of the posterior uncertainty, \n that is, of the posterior covariance matrix.\n %\n This is the underlying principle of the alphabetical criteria~\\cite{Pukelsheim93}.\n For example, an A-optimal design is one that minimizes the trace of the posterior covariance matrix, \n and a D-optimal design is one that minimizes its determinant (or equivalently its log-determinant).\n Note that in the case of a linear model $\\F$, the Fisher information matrix $\\mathsf{FIM}$ is equal to\n the inverse of the posterior covariance matrix, that is,\n %\n $\n \\mathsf{FIM} = \\Cparampostmat\\inv(\\design)\n = \\F\\adj \\wdesignmat(\\design) \\F + \\Cparampriormat\\inv \\,.\n $\n \n %\n Thus, in this case the utility function---discarding the penalty term---is set to\n $\\mathcal{U}(\\design):=-\\Trace{\\mathsf{FIM}\\inv(\\design)}$ for\n A-optimal designs and $\\mathcal{U}(\\design):=\\logdet{\\mathsf{FIM}(\\design)}$ for\n D-optimal designs, and the utility function is then maximized.\n\n When the model $\\Fcont$ is nonlinear, the $\\mathsf{FIM}$ requires evaluating the\n TLM at the true parameter, that is,\n $\\F=\\partial\\Fcont|_{\\iparam=\\iparam_{\\rm true}}$. Thus, to obtain an optimal design,\n one can iterate over finding the MAP estimate of $\\iparam$ and solving an OED\n problem with a Gaussian approximation around that estimate.\n Other utility functions employed in nonlinear OED problems or with non-Gaussian distributions include \n the Kullback--Leibler divergence between the posterior and the prior~\\cite{huan2014gradient}.\n\n\n \n \\paragraph{Popular solution approaches}\n \n The OED problem~\\eqref{eqn:binary_OED_optimization} can be viewed as a mixed-integer program \n and can be solved by using branch-and-bound~\\cite{Gerdts05,Leyffer01}. 
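The alphabetic criteria above can be prototyped in a few lines. The sketch below (toy data, with a diagonal observation precision assumed for simplicity) evaluates A- and D-optimality utilities from the Fisher information matrix; the A-criterion is written as the negated trace of the posterior covariance, that is, of the inverse FIM:

```python
import numpy as np

rng = np.random.default_rng(seed=3)
n, ns = 4, 7
F = rng.standard_normal((ns, n))    # linearized forward model; row i ~ sensor i
C_prior_inv = np.eye(n)             # prior precision (identity, illustrative)
noise_prec = 2.0 * np.eye(ns)       # observation precision for all candidate sensors

def fim(design):
    """Fisher information matrix for a binary sensor-placement design."""
    D = np.diag(np.asarray(design, dtype=float))
    return F.T @ (D @ noise_prec @ D) @ F + C_prior_inv

def a_opt_utility(design):
    # Negated trace of the posterior covariance (inverse FIM): maximize this.
    return -np.trace(np.linalg.inv(fim(design)))

def d_opt_utility(design):
    # Log-determinant of the FIM: maximize this.
    return np.linalg.slogdet(fim(design))[1]

empty, full = np.zeros(ns), np.ones(ns)
```

Activating sensors adds positive semidefinite information, so both utilities increase as sensors are added; the sparsity penalty (omitted here) counteracts that growth.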
However, this type of research has not yet been applied to model-constrained OED.\n A common approach to solving~\\eqref{eqn:binary_OED_optimization} is to replace the binary optimization \n problem with the following relaxation:\n %\n \\begin{equation}\\label{eqn:relaxed_oed}\n \\design\\opt\n = \\argmax_{\\design \\in [0, 1]^{\\Nsens}} \\,\n \\mathcal{U}(\\design) \\,,\n \\end{equation}\n %\n where the design variables are relaxed to take any values in the interval $[0, 1]$ rather than \n only the binary values $\\{0, 1\\}$. This approach, if carried out properly, has the effect of generating \n a continuous relaxation surface that connects the values of the objective evaluated at the \n binary designs; see~\\cite{attia2022optimal} for further details. \n %\n A gradient-based optimization approach is generally used to solve~\\eqref{eqn:relaxed_oed}, which requires \n deriving the gradient of both the optimality criterion $\\Psi$ and the penalty $\\penaltyfunc$ \n with respect to the design $\\design$.\n \n As mentioned earlier, the penalty term is generally chosen to promote sparsity of the design. 
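A minimal sketch of the relaxation approach follows, using SciPy's bound-constrained L-BFGS-B solver with finite-difference gradients on a toy D-optimality utility. The simple diagonal weighting used here is a stand-in for the weighted precision matrices defined in the text, and all problem data are arbitrary:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(seed=4)
n, ns = 3, 6
F = rng.standard_normal((ns, n))            # toy linearized forward model
C_prior_inv = np.eye(n)
gamma = 0.3                                 # penalty weight (arbitrary)

def neg_utility(design):
    # Simplified weighting: scale each sensor's precision by its relaxed weight.
    fim = F.T @ np.diag(design) @ F + C_prior_inv
    log_det = np.linalg.slogdet(fim)[1]     # D-optimality criterion
    penalty = gamma * np.sum(design)        # smooth l1-type penalty on [0, 1]^ns
    return -(log_det - penalty)             # minimize the negative utility

res = minimize(neg_utility, x0=np.full(ns, 0.5),
               bounds=[(0.0, 1.0)] * ns, method="L-BFGS-B")
relaxed_design = res.x
```

In a production setting the finite-difference gradients would be replaced by adjoint-based gradients, and the fractional solution would be rounded or thresholded to obtain a binary design.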
\n A popular penalty function is based on the $\\ell_0$ norm to promote design sparsity \n and is thus nonsmooth and hence nondifferentiable.\n This difficulty can be alleviated, for example, by approximating the effect of $\\ell_0$ \n with a sequence of differentiable functions that converge in effect to the $\\ell_0$ norm; \n see, for example,~\\cite{AlexanderianPetraStadlerEtAl14}.\n %\n In order to guarantee continuity of the relaxation surface, the weighted precision matrix is defined in\n the general form\n %\n \\begin{equation}\\label{eqn:pointwise_weighted_precision}\n \\begin{aligned}\n \\wdesignmat(\\design)&:=\\pseudoinverse{\\designmat(\\design) \\odot \\Cobsnoise}\\,, \\\\\n \\designmat_{i,j}(\\design)\n &:=\n \\begin{cases}\n \\weightfunc_i\\, \\weightfunc_j &;\\, i \\neq j \\\\\n \\begin{cases}\n 0 &; \\, \\weightfunc_i =0 \\\\\n \\frac{1}{\\weightfunc_i^2} &; \\weightfunc_i \\neq 0\n \\end{cases} &;\\, i=j \\\\\n \\end{cases} \\,;\\,\\,\n i,j = 1,2,\\ldots,\\Nsens \\,,\n \\end{aligned}\n \\end{equation}\n %\n where $\\odot$ is the Hadamard (Schur) product of matrices and $\\weightfunc_i\\in[0, 1]$ is a weight\n calculated by using $\\design_i$, \n for example, $\\weightfunc_i:=\\design_i$; see~\\cite{attia2022optimal} for additional details.\n %\n The formulation~\\eqref{eqn:pointwise_weighted_precision} guarantees that for $\\design\\in[0, 1]^{\\Nsens}$ it holds \n that\n $\n \\lim_{\\design \\rightarrow \\binarydesign} \\wdesignmat(\\design) \n = \\wdesignmat(\\binarydesign)\n $\n for a binary design $\\binarydesign\\in\\{0, 1\\}^{\\Nsens}$ and thus guarantees continuity of the relaxation surface.\n %\n Thus, the solution of the relaxed OED optimization problem~\\eqref{eqn:relaxed_oed} is guaranteed to match \n the solution of the original binary OED optimization problem.\n\n The A- and D-optimal design relaxed optimization 
problems~\\eqref{eqn:relaxed_oed} take the following \n respective forms:\n %\n \\begin{subequations}\\label{eqn:A_and_D_optimality_Schur}\n \\begin{align}\n \\design^{\\rm A\\!-\\!opt}\n &= \\argmax_{\\design\\in [0, 1]^{\\Nsens}}\n - \\Trace{\\Predmat \\left( \\F\\adj\n \\pseudoinverse{ \\Cobsnoise \\odot \\designmat(\\design) }\n \\F\n + \\Cparampriormat\\inv \\right)\\inv \\Predmat\\adj\n }\n - \\regpenalty \\penaltyfunction{\\design}\n \\,, \\label{eqn:A_optimality_Schur}\\\\\n %\n \\design^{\\rm D\\!-\\!opt}\n &= \\argmax_{\\design\\in [0, 1]^{\\Nsens}}\n - \\logdet{\\Predmat \\left( \\F\\adj\n \\pseudoinverse{ \\Cobsnoise \\odot \\designmat(\\design) }\n \\F\n + \\Cparampriormat\\inv \\right)\\inv \\Predmat\\adj\n }\n - \\regpenalty \\penaltyfunction{\\design}\n \\,.\\label{eqn:D_optimality_Schur}\n \\end{align}\n \\end{subequations}\n %\n\n \n The most important ingredient for solving the relaxation~\\eqref{eqn:relaxed_oed} \n is the gradient of the utility function; see, for example,~\\cite{attia2022optimal} for a detailed derivation of\n the gradients of the objective functions in~\\eqref{eqn:A_and_D_optimality_Schur}.\n Gradient formulation, however, is mathematically involved and can be extremely computationally demanding \n because it requires numerous evaluations of the forward operator, the goal operator, and the corresponding adjoints.\n %\n Moreover, the penalty function $\\penaltyfunction{\\cdot}$ is required to be differentiable. \n \n \n A stochastic learning approach to binary OED has recently been presented in~\\cite{attia2022stochastic} \n to solve the binary optimization problem~\\eqref{eqn:binary_OED_optimization} without the need for relaxation.\n This approach does not require differentiability of the utility function $\\mathcal{U}$. 
\n \n In this approach the optimal design is defined as\n %\n \\begin{equation}\\label{eqn:stochastic_OED_optimization}\n \\hyperparam\\opt\n = \\argmax_{\\hyperparam\\in[0,1]^{\\Nsens}}\n \\Expect{\\design\\sim\\CondProb{\\design}{\\hyperparam}}{\\mathcal{U}(\\design) - b} \\,, \n \\end{equation}\n %\n where $\\CondProb{\\design}{\\hyperparam}$ is a multivariate Bernoulli distribution with parameter\n $\\hyperparam$ specifying the probabilities of success\/activation of each entry of $\\design$, \n that is, $\\hyperparam_i\\in[0, 1]$.\n Here $b$ is a constant ``baseline'' used to reduce the variance of the stochastic estimate \n of the gradient; see~\\cite{attia2022stochastic} for further details.\n Algorithm~\\ref{alg:REINFORCE_baseline} summarizes the procedure followed to \n solve~\\eqref{eqn:stochastic_OED_optimization}; it is stated in terms of minimizing the \n equivalent objective (loss) $\\obj := -\\mathcal{U}$.\n \n \n \\begin{algorithm}[htbp!]\n \\caption{Stochastic optimization for binary OED with the optimal baseline.}\n \\label{alg:REINFORCE_baseline}\n \\begin{algorithmic}[1]\n\n \\Require{Initial distribution parameter $\\hyperparam^{(0)}$,\n step size schedule $\\eta^{(n)}$,\n sample sizes $\\Nens,\\, m$,\n baseline batch size $b_m$}\n \\Ensure{$\\design\\opt$}\n\n \\State{initialize $n = 0$}\n\n \\While{Not Converged}\n \\State {\n Update $n\\leftarrow n+1$\n }\n\n \\State{\n Sample $\\{\\design[j]; j=1,2,\\ldots,\\Nens \\}\\sim\n \\CondProb{\\design}{\\hyperparam^{(n)}}$\n }\n\n \\State{\n Calculate $b$ = \\Call{OptimalBaseline}{$\\hyperparam^{(n)}$,\n $\\Nens$, $b_m$}\n }\n\n \\State\\label{algstep:REINFORCE_baseline:grad}{\n Calculate $ \\vec{g}^{(n)}=\\frac{1}{\\Nens} \\sum_{j=1}^{\\Nens}\n \\left(\\obj(\\design[j])-b\\right) \\sum_{i=1}^{\\Nsens}\n \\left(\n \\frac{\\design_i[j]}{\\hyperparam_i}\n + \\frac{\\design[j]_i-1}{1-\\hyperparam_i}\n \\right) \\,\\vec{e}_i $\n }\n\n \\State \\label{algstep:REINFORCE_baseline:proj}{\n Update $\\hyperparam^{(n+1)}\n = \\Proj{}{\\hyperparam^{(n)} - \\eta^{(n)} \\vec{g}^{(n)} } $\n }\n\n \\EndWhile\n\n 
\\State{\n Set $\\hyperparam\\opt = \\hyperparam^{(n)}$\n }\n\n \\State\\label{algstep:REINFORCE_baseline:sampling} {\n Sample $\\{\\design[j];j=1,2,\\ldots,m \\} \\sim\n \\CondProb{\\design}{\\hyperparam\\opt}$,\n and calculate $\\obj(\\design[j])$\n }\n\n \\State \\Return{\n $\\design\\opt $: the design $\\design$ with smallest\n value of $\\obj$ in the sample.\n }\n\n \n\n \\Function{OptimalBaseline}{$\\theta$, $\\Nens$, $b_m$}\n\n \\State{Initialize $b \\gets 0$}\n\n \\For {$e$ $\\gets 1$ to $b_m$}\n\n \\For {$j$ $\\gets 1$ to $\\Nens$}\n \\State{\n Sample $\\design[j] \\sim \\CondProb{\\design}{\\hyperparam}$\n }\n \\State\\label{algstep:REINFORCE_baseline:logprob}{\n Calculate $\\vec{r}[j] = \\sum_{i=1}^{\\Nsens}\n \\left(\n \\frac{\\design_i[j]}{\\hyperparam_i}\n + \\frac{\\design[j]_i-1}{1-\\hyperparam_i}\n \\right) \\,\\vec{e}_i $\n }\n \\EndFor\n\n \\State{\n Calculate $\\vec{d}[e] = \\frac{1}{\\Nens} \\sum_{j=1}^{\\Nens}\n \\vec{r}[j] \\,,\\,$\n and\n \n $\\, \\vec{g}[e] = \\frac{1}{\\Nens} \\sum_{j=1}^{\\Nens}\n \\obj(\\design[j]) \\, \\vec{r}[j] $\n }\n \\State{\n Update $b \\gets b + \\left(\\vec{g}[e] \\right)\\tran \\vec{d}[e]$\n }\n \\EndFor\n\n \\State{\n Update\n $b \\gets\n \n \n b \\, \\Nens \\, \/ \\left( b_m \\,\n \\sum_{i=1}^{\\Nsens} \\frac{1}{\\hyperparam_i- \\hyperparam_i^2} \\right)\n $\n %\n \n \n \n \n \n }\n\n \\State \\Return $b$\n \\EndFunction\n\n \\end{algorithmic}\n \\end{algorithm}\n \n\n\n\n %\n Note that we do not provide an exclusive set of formulations or solution approaches in this study. We provide here only an exemplary set of formulations and algorithms used to inspire the development of {PyOED}\\xspace, which itself can be used to test further formulations and algorithmic approaches. 
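A compact, self-contained sketch of this stochastic approach is given below. It differs from Algorithm~\ref{alg:REINFORCE_baseline} in two simplifications: it maximizes a toy utility directly by projected gradient ascent (rather than minimizing a loss), and it uses a plain sample-mean baseline instead of the optimal baseline. All problem data are arbitrary and not part of PyOED:

```python
import numpy as np

rng = np.random.default_rng(seed=6)
n, ns = 3, 8
F = rng.standard_normal((ns, n))     # toy linear forward model (rows = candidate sensors)
gamma = 0.2                          # sparsity penalty weight (illustrative)

def utility(d):
    """Toy D-optimality utility with an l1 penalty on the binary design."""
    fim = F.T @ np.diag(d) @ F + np.eye(n)
    return np.linalg.slogdet(fim)[1] - gamma * d.sum()

def score(d, theta):
    """Gradient of the Bernoulli log-probability of a sampled design."""
    return d / theta + (d - 1.0) / (1.0 - theta)

theta = np.full(ns, 0.5)             # initial activation probabilities
eta, n_ens, eps = 0.05, 64, 1e-3
for _ in range(200):
    designs = (rng.random((n_ens, ns)) < theta).astype(float)
    utils = np.array([utility(d) for d in designs])
    baseline = utils.mean()          # simple mean baseline (not the optimal one)
    scores = np.stack([score(d, theta) for d in designs])
    grad = np.mean((utils - baseline)[:, None] * scores, axis=0)
    theta = np.clip(theta + eta * grad, eps, 1.0 - eps)   # projected gradient ascent

best = (theta > 0.5).astype(float)   # round the learned activation probabilities
```

Because only utility evaluations at sampled binary designs are needed, no differentiability of the utility with respect to the design is required.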
\n\n\\section{{PyOED}\\xspace: Structure and Philosophy}\n\\label{sec:PyOED}\n {PyOED}\\xspace aims to provide a unified platform for implementing and testing model-constrained OED algorithmic approaches \n including formalisms~\\eqref{eqn:binary_OED_optimization},~\\eqref{eqn:relaxed_oed},\n ~\\eqref{eqn:A_and_D_optimality_Schur}, and~\\eqref{eqn:stochastic_OED_optimization}.\n %\n Solving model-constrained OED and inverse problems requires proper understanding and formulation of \n the underlying dynamical system, observational configuration, uncertainty models, \n DA and inversion algorithms, and the OED objective~\\cite{ucinski2000optimal} and the selected utility function. \n %\n {PyOED}\\xspace is a stand-alone, yet extensible, Python package that provides users and researchers in \n the computational science and engineering disciplines with a testing suite that effectively \n glues these components in an OOP fashion. \n %\n For example, {PyOED}\\xspace provides a variety of time-dependent and time-independent simulation models.\n These include systems governed by linear algebraic equations, ordinary differential equations, \n and PDEs. \n {PyOED}\\xspace is also equipped with a set of classes implementing various observational operators, \n probabilistic uncertainty models, and DA and OED methods. 
\n A high-level overview of the {PyOED}\\xspace major components and their coupling for solving DA and OED problems\n is provided in Figure~\\ref{fig:DA_and_OED}.\n In Section~\\ref{subsec:code_structure} we briefly describe the main components of {PyOED}\\xspace \n and outline the functionality they provide in correspondence with the diagram~\\ref{fig:DA_and_OED}.\n %\n \\begin{figure}[!htbp]\n \\centering\n \\includegraphics[width=0.75\\textwidth]{Plots\/DA_and_OED}\n \\caption{High-level overview of the main components of {PyOED}\\xspace.}\n \\label{fig:DA_and_OED}\n \\end{figure}\n %\n\n \n \n \\subsection{Code structure}\n \\label{subsec:code_structure}\n \n \n Figure~\\ref{fig:PyOED_Subpackages} shows the main subpackages (ordered alphabetically) \n shipped with the current version of {PyOED}\\xspace (v1.0).\n %\n The rest of this section~\\ref{subsec:code_structure} provides a high-level description of the packages\/subpackages \n of~{PyOED}\\xspace as displayed in Figure~\\ref{fig:PyOED_Subpackages}. \n %\n \\begin{figure}[!htbp]\n \\centering\n \\includegraphics[width=0.45\\textwidth]{Plots\/PyOED_Subpackages}\n \\caption{Main subpackages (ordered alphabetically) available in the current version of {PyOED}\\xspace (v1.0).}\n \\label{fig:PyOED_Subpackages}\n \\end{figure}\n %\n\n\n \n \\paragraph{\\code{pyoed.models}}\n \n Following the convention in the {DATeS}\\xspace package~\\cite{attia2019dates}, we use the word ``model'' to refer \n to three entities: \\emph{the simulation model}, \\emph{the observation model (or operator)}, and \\emph{the error models}.\n \n The simulation model provides a prediction about the behavioral pattern of the physical phenomena of concern.\n In the current version of {PyOED}\\xspace (v1.0) we provide various simulation models under the \n \\code{pyoed.models.simulation\\_models}, including several versions of the Lorenz system~\\cite{lorenz1996predictability} and advection-diffusion models. 
The structure of these prototypical simulation models should provide clear guidelines to practitioners willing to adopt {PyOED}\\xspace for their particular applications. \n \n The observation operator maps the model state onto the observation grid, thus providing a functional mapping \n between the model state and observational data. \n Two of the most prominent observation operators in experimental settings are the identity operator and an interpolator.\n {PyOED}\\xspace provides implementations of several observation operators including these two, with an observational design properly incorporated to enable\n altering observational configurations at any point in the DA or OED solution process. \n Observation operators are provided in the \\code{pyoed.models.observation\\_operators} subpackage. \n\n The error models quantify the uncertainty associated with the model parameter, model state, and observational data.\n An experimental design can be associated with any of these pieces. \n For example, in sensor placement, an experimental design is associated with the observational grid; thus, modifying the\n observational design affects the observational error model. 
\n For example, in the relaxation approach~\\eqref{eqn:A_and_D_optimality_Schur}, the design weights scale the entries of the covariance matrix, \n and the stochastic approach~\\eqref{eqn:stochastic_OED_optimization} works by removing rows\/columns of the observation\n error covariance matrix corresponding to zero design variables.\n %\n {PyOED}\\xspace provides various implementations of error models suitable for modeling priors, as well as observational errors in \n Bayesian inversion, where a design variable is consistently implemented to enable modifying the experimental design during any \n step of the DA and\/or OED solution process.\n The current version of {PyOED}\\xspace (v1.0) provides various error model implementations through\n the subpackage \\code{pyoed.models.error\\_models}, including a Gaussian model and Laplacian model. \n\n\n \n \\paragraph{\\code{pyoed.assimilation}}\n \n {PyOED}\\xspace provides a set of DA tools that include algorithms for ``filtering'' \n and ``smoothing.'' These two terms are widely used in the DA literature. \n The former algorithm solves inverse problems that involve time-independent or time-dependent \n simulation models, while the latter algorithm is restricted to time-dependent models.\n Filtering involves prediction (of observation) using the parameter-to-observable map, \n followed by a correction procedure to correct knowledge of the QoI given the observational data.\n In filtering for time-dependent simulations, the observational data is assimilated sequentially, with one observation time per assimilation window\/cycle.\n Examples of filtering DA methods include three-dimensional variational DA, and Kalman filtering~\\cite{evensen2009data,asch2016data}. 
\n %\n Smoothing, on the other hand, is concerned with history matching: these algorithms seek \n the QoI that best matches multiple spatiotemporal observations (a trajectory), \n where the underlying problem is usually defined as an initial value problem.\n \n Examples include space-time Bayesian inversion~\\cite{Stuart10}\n and four-dimensional variational DA~\\cite{asch2016data}, \n for which vanilla implementations are provided in~{PyOED}\\xspace.\n Implementations of filtering DA algorithms are provided through \\code{pyoed.assimilation.filtering}, \n and smoothing algorithms are provided in \\code{pyoed.assimilation.smoothing}.\n\n\n \n \\paragraph{\\code{pyoed.optimization}}\n \n Numerical optimization routines are essential for solving OED optimization problems, as well as for the \n variational approaches to solving DA problems.\n A variety of optimization software packages can be used for solving numerical optimization problems, \n including those described in this work.\n {PyOED}\\xspace enables using external optimization packages, including Python's \\code{Scipy} package, \n to solve DA and OED optimization problems.\n {PyOED}\\xspace, however, provides specific implementations of optimization procedures not available in popular optimization \n packages, such as the stochastic algorithm described by Algorithm~\\ref{alg:REINFORCE_baseline}, \n various versions of the sample average approximation (SAA) algorithm, and robust optimization~\\cite{attia2023robust}. 
\n\n\n \n \\paragraph{\\code{pyoed.ml}}\n \n This subpackage is intended to provide implementations of machine learning algorithms useful for DA and OED applications.\n For example, the stochastic learning approach to OED~\\eqref{eqn:stochastic_OED_optimization} can be seen as \n a reinforcement learning (RL) approach to solving the OED problem.\n %\n The module \\code{pyoed.ml.reinforcement\\_learning} under this package provides implementations of RL components, \n including an agent, a policy, transition probabilities, actions, and utility functions.\n\n\n \n \\paragraph{\\code{pyoed.stats}}\n \n This subpackage aims to collect statistical procedures used by other parts of the package, such as \n sampling routines and implementations of random variables along with their probabilistic utility functions, including \n density evaluation and log-probabilities.\n This version of {PyOED}\\xspace (v1.0) provides an exemplary implementation of the multivariate Bernoulli distribution required by\n the RL algorithm~\\ref{alg:REINFORCE_baseline}.\n Since statistical tools are crucial for various DA and OED algorithms, we chose to keep the dedicated subpackage \\code{pyoed.stats} rather \n than scatter these implementations across other parts of the package. 
\n \n This approach is advantageous because we continuously extend the package with various statistical tools, for example, \n for randomized approximation methods for Bayesian inversion.\n\n \n \n \\paragraph{\\code{pyoed.oed}}\n \n OED is the main component of {PyOED}\\xspace that provides implementations of various \n algorithmic approaches for solving OED problems, including relaxation~\\eqref{eqn:A_and_D_optimality_Schur} and \n stochastic learning~\\eqref{eqn:stochastic_OED_optimization}, \n as well as recent developments including robust OED~\\cite{attia2023robust}.\n Most implementations in this package take an inverse problem (DA object) as input and use it to access all\n the underlying components, thus gaining access to the simulation model, error models, and observation operator as well\n as the experimental design.\n This approach enables the user to modify an experimental design, solve the DA problem if needed, and solve the underlying OED optimization problem.\n The core OED functionalities in most {PyOED}\\xspace routines, however, can be used with black-box utility functions, waiving the need for an inverse problem if needed.\n \n\n\n \n \\paragraph{\\code{pyoed.utility}}\n \n This subpackage aims to collect implementations of general-purpose functionality, such as file I\/O and visualization, as well as\n general mathematical and statistical procedures. \n The subpackage includes matrix-free implementations of expensive operations such as evaluating the trace and log-determinant of a matrix. \n It also provides routines to approximate matrix trace using statistical randomization~\\cite{AlexanderianSaibaba17,saibaba2016randomized}.\n\n\n \n \\paragraph{\\code{pyoed.examples}}\n \n This subpackage provides various example scripts that users can follow to learn how to effectively use various pieces of the package. 
\n The modules in this subpackage explain how to load all pieces of the subpackage independently \n and explain how to properly coordinate these components to design a consistent DA and\/or OED experiment. \n\n\n \n \\paragraph{\\code{pyoed.tutorials}}\n \n Given the popularity of Jupyter Notebooks in the computational science community, \n we converted some of the examples in the subpackage \\code{pyoed.examples} to Jupyter Notebooks \n and provided them in this subpackage \\code{pyoed.tutorials}. We employ them in the test cases presented in Section~\\ref{sec:Test_cases}. \n We can add more tutorials on reasonable demand.\n\n\n \n \\subsection{{PyOED}\\xspace utilization workflow}\n \\label{subsec:workflow}\n \n While the components of DA and OED problems can be used independently of each other, some level \n of ordering is mandatory for proper utilization.\n For example, an inverse problem (DA) object cannot be instantiated before a simulation model, \n an observation operator, and error model objects. \n Similarly, an OED problem for Bayesian inversion cannot be solved before creating an \n inverse problem.\n The general workflow for utilizing {PyOED}\\xspace components is displayed in \n Figure~\\ref{fig:PyOED_workflow}. \n A practical guide that illustrates how to follow this simple workflow is described in \n Section~\\ref{sec:Test_cases}.\n %\n \\begin{figure}[!htbp]\n \\centering\n \\includegraphics[width=0.45\\textwidth]{Plots\/PyOED_workflow}\n \\caption{Workflow describing initialization order and access level of {PyOED}\\xspace components.}\n \\label{fig:PyOED_workflow}\n \\end{figure}\n %\n\n\n \n \\subsection{Extending and contributing to {PyOED}\\xspace}\n \\label{subsec:extending_PyOED}\n \n {PyOED}\\xspace is meant to be extensible. 
Thus, we continuously add interfaces to other software tools that \n provide efficient implementations of the components of DA and OED problems.\n For example, hIPPYlib~\\cite{villa2018hippylib} is a software package for solving \n high-dimensional inverse problems following an optimize-then-discretize approach.\n It has recently been employed to empirically verify several inversion and \n OED algorithmic developments.\n Instead of rebuilding the functionality of hIPPYlib and similar packages, \n we have interfaced with some of its components to show how \n easily and efficiently {PyOED}\\xspace can extend other successful packages.\n For example, {PyOED}\\xspace interfaces with finite-element (FE) implementations of advection-diffusion \n and Poisson models from hIPPYlib as well as point observation operators.\n \\commentout{\n Some modifications to the interfaced hIPPYlib components, however, had to be \n introduced to make the code more general rather than \n being tied to a specific inverse problem formulation.\n }\n Such extensions, however, do not hinder the functionality or limit the extensibility of {PyOED}\\xspace.\n Specifically, interfacing with such external packages is optional and is not provided in the core of {PyOED}\\xspace, \n mainly because these external packages may become outdated or unmaintained.\n Thus, these extensions (e.g., interfacing with hIPPYlib) are made optional during the import of {PyOED}\\xspace subpackages, \n and the dependent functionality is used only when the corresponding package is properly installed and available.\n\n\n\n \n \\subsection{Code availability}\n \\label{subsec:code_availability}\n \n The development version of {PyOED}\\xspace is available from \\url{https:\/\/gitlab.com\/ahmedattia\/pyoed}.\n \\commentout{\n As soon as license is granted by ANL, update this section. 
\n Software location, license, download from repository, \n and documentation webpage, contact, support and collaboration.\n }\n\n\n\\section{Test Cases}\n\\label{sec:Test_cases}\n {PyOED}\\xspace comes with a set of prototypical test problems with increasing complexity for both DA and OED.\n An ideal case typically used in scientific publications is the linear Gaussian setup, \n where the simulation model and the observation operator are both linear \n and the error models (observation noise and the prior) are both Gaussian; see Section~\\ref{sec:Background}.\n In this case, the posterior is also a Gaussian; the solution of the inverse \n problem is unique---the posterior mean and mode (MAP) are identical;\n and posterior moments (mean and covariance) both have closed forms that can be obtained by applying the Kalman filter theory.\n Such a simplified setup can be used for testing new formulations in both DA and OED, and thus it is provided in {PyOED}\\xspace.\n We discuss this formulation and in Section~\\ref{subsec:ToyLinear} show how it can be utilized.\n In Section~\\ref{subsec:results:standard_oed_examples} we discuss in further detail a standard experiment\n widely used in OED scientific research and offered by {PyOED}\\xspace.\n\n\n \n \\subsection{An ideal setup: linear Gaussian toy problem}\n \\label{subsec:ToyLinear}\n \n Consider a time-dependent forward problem defined \n at time instances $t_0+ i \\Delta t, i=0, 1, \\ldots, \\nobstimes$, \n for a fixed step size $\\Delta t$, as follows:\n %\n \\begin{align}\n \\x_n = \\mat{A}\\, \\x_{n-1}\\,, \\qquad\n \\obs_n = \\mat{I} \\x_n + \\obsnoise\\,,\\quad n=1, 2, \\ldots ,\n \\end{align}\n %\n where $\\x_{n}\\in\\Rnum^{\\Nstate}$ is the discrete model state at time instance $t_n$,\n $\\mat{A}\\in \\Rnum^{\\Nstate \\times \\Nstate}$ is a matrix representing model evolution over \n time interval $[t_{n-1}, t_{n}]$, and $\\mat{I}$ is the identity observation operator\/matrix. 
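Before turning to the {PyOED}\xspace classes, this forward setup can be simulated directly with NumPy. The sketch below is purely illustrative (the sizes, the random model matrix, and the noise covariance are hypothetical choices, not {PyOED}\xspace defaults); {PyOED}\xspace's own simulation-model and error-model classes encapsulate analogous logic internally.

```python
import numpy as np

rng = np.random.default_rng(42)          # fixed seed for reproducibility
n_state, n_obs_times = 5, 3

A = 0.3 * rng.standard_normal((n_state, n_state))  # hypothetical model matrix
R = 0.1 * np.eye(n_state)                          # observation noise covariance

x = rng.standard_normal(n_state)         # initial state x_0
observations = []
for _ in range(n_obs_times):
    x = A @ x                            # x_n = A x_{n-1}
    noise = rng.multivariate_normal(np.zeros(n_state), R)
    observations.append(x + noise)       # obs_n = I x_n + noise
```

With the observations in hand, the closed-form Gaussian posterior discussed next can be assembled from the same matrices.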
\n %\n If we assume $\\obsnoise\\sim \\GM{\\vec{0}}{\\mat{R}}$ and \n a Gaussian prior $\\x_{0}\\sim\\GM{\\x_{0}^{\\rm pr}}{\\mat{\\Gamma}_{\\rm pr}}$, \n then the posterior is Gaussian $\\GM{\\x_{0}^{\\rm post}}{\\mat{\\Gamma}_{\\rm post}}$ with\n %\n \\begin{equation}\\label{eqn:Gaussian_Posterior_Toy}\n \\mat{\\Gamma}_{\\rm post} = \\left( \\sum_{n=1}^{\\nobstimes}{ \\left(\\mat{A}^{n}\\right)\\tran \\mat{R}\\inv \\mat{A}^{n} } \n + {\\mat{\\Gamma}_{\\rm pr}}\\inv \\right)\\inv \\,,\n \\qquad\n %\n \\x_{0}^{\\rm post} = \\mat{\\Gamma}_{\\rm post} \n \\left( {\\mat{\\Gamma}_{\\rm pr}}\\inv \\x_{0}^{\\rm pr}\n + \\sum_{n=1}^{\\nobstimes}{ \\left(\\mat{A}^{n}\\right)\\tran \\mat{R}\\inv \\, \\obs_{n} }\n \\right) \\,.\n \\end{equation}\n %\n\n Since~\\eqref{eqn:Gaussian_Posterior_Toy} gives the posterior in closed form, \n we can use it to test and debug new DA and OED implementations.\n This fact is heavily exploited in the unit tests developed in {PyOED}\\xspace. \n %\n To create a proper experiment, we will follow the workflow described by\n Figure~\\ref{fig:PyOED_workflow}.\n %\n In the rest of this section (\\ref{subsec:ToyLinear}) we describe how \n to initialize an inverse problem in {PyOED}\\xspace with the\n settings~\\eqref{eqn:Gaussian_Posterior_Toy}, and we provide a simple scheme \n that can be followed to initialize other experiments.\n The code summarized here is provided in the \\code{pyoed.examples.fourDVar\\_driver} module with additional comments, \n details, and capabilities that can help the user understand the workflow for creating and solving an inverse problem.\n A Jupyter Notebook \\code{pyoed.tutorials.toy\\_linear} is also available and can be used to regenerate the numerical \n results presented in this section (\\ref{subsec:ToyLinear}).\n \n \n \n \n \\paragraph{Creating the models}\n \n \n Assuming {PyOED}\\xspace is already in the Python path, the first step is to import\/load \n the simulation model (that describes $\\mat{A}$), the observation operator \n (here an identity operator), and the error models 
to create the prior and \n the observation error model. \n This can be done as described in the code snippet~\\ref{snipp:toy_linear_models_import}.\n \n \\lstinputlisting[language=Python,\n label={snipp:toy_linear_models_import},\n firstline=2,lastline=4, \n caption={Import essential modules to create the simulation model object, \n the observation operator, the prior, and the observation error model.}]{.\/Code_Snippets\/toy_linear.py}\n\n Note that we have imported only the classes we need in this example. However, {PyOED}\\xspace provides several\n other implementations of the simulation models, observation operators, and error models.\n In order to create a simulation model, an object of \\code{ToyLinearTimeDepndent} \n is instantiated as described by snippet~\\ref{snipp:toy_linear_simulation_model}.\n %\n This generates an internal two-dimensional array of size $5 \\times 5$ that represents the forward model \n $\\mat{A}$, which integrates the model state forward by a timestep $dt=0.1$.\n That internal array can be reproduced by setting the \\code{random\\_seed} parameter\n in the passed configurations dictionary.\n \n \\lstinputlisting[language=Python,\n label={snipp:toy_linear_simulation_model},\n firstline=8,lastline=8, \n caption={Instantiate the simulation model object.}]{.\/Code_Snippets\/toy_linear.py}\n \n Since each simulation model has its own configurations, many of which are \n assigned default values, we follow the strategy of {DATeS}\\xspace~\\cite{attia2019dates} \n and use dictionaries to pass model arguments. 
{PyOED}\\xspace aggregates and validates \n the passed dictionary against the default values and initiates the model accordingly.\n For example, in snippet~\\ref{snipp:toy_linear_simulation_model} we \n specify a \\code{random\\_seed} argument that guarantees reproducibility of \n any randomly generated data inside the model object.\n This is done by keeping an internal random state inside the model object that \n is independent from other objects and is initialized to the passed random seed.\n Thus, if no random seed is passed, each time the same model object is instantiated, \n completely random sequences will be generated if requested. \n Implementations of simulations model must provide an implementation of a class-level\n method \\code{get\\_default\\_configs} that returns a dictionary with all default\n values used if not passed upon instantiation. \n In order to ensure that, all {PyOED}\\xspace simulation models inherit the class \\code{pyoed.models.simulation\\_models.SimulationModel} that guarantees enforcing the implementation of mandatory methods required for the seamless integration of various components in {PyOED}\\xspace.\n The final configurations of a simulation model is a combination of those in \n the passed configurations dictionary and the default values, with precedence \n given to the passed configurations. \n The simulation model's configurations (a copy of it, in fact) can be accessed through \n the attribute \\code{configurations}. We generally choose to return a copy to guarantee that all settings are validated before modification. 
\n For example, one cannot modify the time step $dt$ without first verifying whether the \n timestepping implementation is tied to that time step and then updating any dependencies accordingly.\n\n\n \n \\paragraph{The observation operator}\n \n As with simulation models, an observation operator is created by passing the \n settings in the configurations dictionary to the observation operator class constructor as shown in \n snippet~\\ref{snipp:toy_linear_obs_oper}. Thus, the observation operator\n has access to the model grid and other useful attributes to create and manipulate \n data structures (such as the model state) without having to provide any new \n implementations. This is mainly because observations in this case are the same as \n the corresponding model states (discarding observation noise).\n \n \\lstinputlisting[language=Python,\n label={snipp:toy_linear_obs_oper},\n firstline=11,lastline=11, \n caption={Create an identity observation operator.}]{.\/Code_Snippets\/toy_linear.py}\n\n \n \\paragraph{The prior and the data noise models}\n \n The error models (prior and observation noise) are created in this example as shown in \n snippet~\\ref{snipp:toy_linear_error_models}. \n Note that the \\code{random\\_seed} configurations variable is used to set the random number generator for reproducibility. \n This enables us to regenerate a set of experiments and to build proper benchmarks for a fair comparison \n between various implementations. 
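The effect of the \code{random\_seed} setting can be sketched with a toy class that keeps its own random state, mirroring the per-object behavior described above (the class below is an illustrative stand-in, not a {PyOED}\xspace error model):

```python
import numpy as np

class ErrorModelSketch:
    """Toy stand-in for an error model keeping its own random state."""
    def __init__(self, size, random_seed=None):
        self.size = size
        # Per-object generator: independent of every other object's state.
        self._rng = np.random.default_rng(random_seed)

    def sample(self):
        return self._rng.standard_normal(self.size)

seeded_a = ErrorModelSketch(4, random_seed=123)
seeded_b = ErrorModelSketch(4, random_seed=123)
unseeded = ErrorModelSketch(4)  # no seed: its own unrelated sequence

same = np.allclose(seeded_a.sample(), seeded_b.sample())  # identical realizations
```

Two objects built with the same seed reproduce the same sequence of realizations, which is exactly what makes benchmark experiments repeatable.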
As mentioned earlier, if no random seed is passed, each instance of \n the error model is assigned a randomly generated state that guarantees \n that each instance has its own distinct sequence of random number\/vector realizations.\n \n \\lstinputlisting[language=Python,\n label={snipp:toy_linear_error_models},\n firstline=14,lastline=15, \n caption={Create the prior and the observation error model.}]{.\/Code_Snippets\/toy_linear.py}\n\n\n \n \\paragraph{The inverse problem (DA) object}\n \n The next step is to put these models together and use them to create an \n inverse problem.\n We illustrate the utilization of a DA object to solve the inverse problem\n following a 4DVar formulation.\n The literature provides a plethora of variants of the general 4DVar scheme.\n {PyOED}\\xspace provides a few implementations; however, the most basic (vanilla) \n implementation is used here for illustration.\n Two approaches are followed in {PyOED}\\xspace for instantiating a DA object. The first is to pass\n all configurations (upon initialization) in the configurations dictionary \n \\code{configs}, similar to the case of simulation models, error models, and \n observation operators. The second approach is to use the proper registration methods \n associated with the created object after instantiation. \n The latter can also be used to update components of the DA \n object after initialization. 
For example, one might want to change the settings of the \n assimilation time window, register new observations or remove the\n old ones, or modify or even replace the prior.\n Since the first approach has already been explained with the simulation and \n error models, we demonstrate the second approach here.\n Specifically, a 4DVar assimilation object is created as in \n snippet~\\ref{snipp:toy_linear_4dvar}.\n \n \\lstinputlisting[language=Python,\n label={snipp:toy_linear_4dvar},\n firstline=18,lastline=22, \n caption={Create the inverse problem object with default settings, \n and then add (register) all the pieces created above, that is, the simulation model, \n the observation operator, the prior, and the observation error model.}]{.\/Code_Snippets\/toy_linear.py}\n\n The next step is to register observational data (along with observation times).\n A standard strategy in experimentation is to create synthetic data from \n a ground truth (known as a twin experiment). This is illustrated in snippet~\\ref{snipp:toy_linear_synthethize}, \n where we define the assimilation timespan (window) to be the interval $[0, 0.3]$,\n and the observations are taken at the $3$ time instances $0.1, 0.2, 0.3$.\n The observational data are synthesized by adding random noise (using the observation error model) to the observed ground truth at the corresponding observation time instance.\n \n \\lstinputlisting[language=Python,\n label={snipp:toy_linear_synthethize},\n firstline=24,lastline=36, \n caption={Create synthetic noisy observations, and register all observation time points \n and observational data to the inverse problem object.}]{.\/Code_Snippets\/toy_linear.py}\n\n The final step in the DA procedure is to solve the inverse problem and assess the quality of the solution.\n For this setup, we know the ground truth, and thus one can evaluate the root mean squared error (RMSE), \n which is a standard error metric in statistics and in the DA literature.\n In order to solve the inverse 
problem, the \\code{solve\\_inverse\\_problem} method of the 4DVar \n DA object is called (snippet~\\ref{snipp:toy_linear_solve_inversion}). \n This method will raise an instructive error if any of the essential elements, for example, the simulation model, are not registered properly. Note that this function is flexible and allows the posterior covariance to be constructed if needed. It also allows waiving finding the MAP estimate, which can be advantageous in OED applications due to associated computational savings. For example, one might want to estimate the posterior covariance in the linear Gaussian case without evaluating the MAP.\n \n \\lstinputlisting[language=Python,\n label={snipp:toy_linear_solve_inversion},\n firstline=39,lastline=39, \n caption={Solve the inverse problem. \n The optimization initial point is set by default to the prior mean; however, it can be modified if a better initial guess is known.\n Here, the posterior covariance is evaluated, and consequently the posterior \n is updated with both the mean and the covariance matrix.}]{.\/Code_Snippets\/toy_linear.py}\n\n\n {PyOED}\\xspace provides several utility functions to evaluate statistics, such as the RMSE, which can be used to quantify the accuracy of the inverse problem solution.\n Snippet~\\ref{snipp:toy_linear_rmse} shows how to call the utility function \n \\code{calculate\\_rmse} and use it to evaluate the prior and the analysis (posterior) \n RMSE, which are then printed. 
\n \n \\lstinputlisting[language=Python,\n label={snipp:toy_linear_rmse},\n firstline=42,lastline=46,\n caption={Calculate and print the RMSE values associated with the prior mean (initial guess here) and the posterior mean.}]{.\/Code_Snippets\/toy_linear.py}\n\n The same procedure can be easily followed to inspect the RMSE results over the whole\n assimilation timespan as described by Snippet~\\ref{snipp:toy_linear_rmse_traject} with \n results plotted in Figure~\\ref{fig:toy_linear_rmse_traject}.\n \n \\lstinputlisting[language=Python,\n label={snipp:toy_linear_rmse_traject},\n firstline=49,lastline=53,\n caption={Generate RMSE over the whole assimilation window.}]{.\/Code_Snippets\/toy_linear.py}\n\n %\n \\begin{figure}[!htbp]\n \\centering\n \\includegraphics[width=0.45\\textwidth]{Plots\/ToyLinear4DVar_RMSE}\n \\caption{RMSE results of the solution of the inverse problem presented in Section~\\ref{subsec:ToyLinear}. \n The RMSE results of both the prior and the posterior trajectories plotted here are \n obtained by running the code in snippet~\\ref{snipp:toy_linear_rmse_traject}.\n }\n \\label{fig:toy_linear_rmse_traject}\n \\end{figure}\n %\n\n One can also analyze the posterior covariances, for example, by generating and plotting the posterior covariance matrix. \n Given the linear Gaussian settings in the present setup, one can validate the generated posterior covariance matrix against \n the exact formula~\\eqref{eqn:Gaussian_Posterior_Toy}.\n One way to construct the posterior covariance matrix is to invoke the posterior model \n as shown in snippet~\\ref{snipp:toy_linear_posterior_covariance}. \n \n \\lstinputlisting[language=Python,\n label={snipp:toy_linear_posterior_covariance},\n firstline=74,lastline=74,\n caption={Construct and retrieve the posterior covariance matrix}]{.\/Code_Snippets\/toy_linear.py}\n \n Note, however, that one should avoid constructing the covariance matrix for high-dimensional \n error models. 
\n Alternatively, matrix-free implementations of covariance (and precision) \n matrix-vector products should be used. \n For example, to multiply the prior covariance by \\code{state}, \n one should call \\code{prior.covariance\\_matvec(state)}. The error models provide many \n attributes to efficiently access the statistics of the model, such as the covariance diagonal and trace.\n The posterior covariance matrix, constructed by employing the posterior functionality \n as in snippet~\\ref{snipp:toy_linear_posterior_covariance},\n and the covariance matrix evaluated by applying~\\eqref{eqn:Gaussian_Posterior_Toy}, \n along with the mismatch errors, \n are plotted in Figure~\\ref{fig:toy_linear_posterior_covariance}.\n %\n \\begin{figure}[!htbp]\n \\centering\n \\includegraphics[width=0.65\\textwidth]{Plots\/ToyLinear4DVar_PostCov}\n \\caption{Entries of the posterior covariance matrix and the associated errors.\n Left: the posterior covariance matrix generated by snippet~\\ref{snipp:toy_linear_posterior_covariance}.\n Middle: the closed-form posterior covariance matrix given by~\\eqref{eqn:Gaussian_Posterior_Toy}.\n Right: RMSE obtained by pointwise comparison of the covariance matrices obtained by solving \n the inverse problem (left) and by using the closed form (middle).}\n \\label{fig:toy_linear_posterior_covariance}\n \\end{figure}\n %\n \n\n \n \\paragraph{An OED experiment}\n \n The simulation model is instantiated with a model grid of size \\code{nx=5}, and the \n observation operator copies the model state.\n In fact, one can inspect the model array representation $\\mat{A}$ for this toy linear model \n by calling \\code{model.get\\_model\\_array()}. \n Both the model state and the observation vector sizes here are $5$. 
Thus, there are actually $5$ candidate sensor locations \n (observation gridpoints), and one can try to find the optimal subset of sensors by using an OED implementation.\n \n \n Here we briefly illustrate utilizing an OED object to find the A-optimal design for the toy linear example discussed above; see \n snippet~\\ref{snipp:toy_linear_A_oed}. \n First, the proper OED module (here following~\\cite{attia2022optimal}) is imported and is used to create the \n \\code{oed\\_problem} instance.\n The A-optimality criterion is registered (which can be changed later by registering a proper OED criterion), and the OED problem is then solved.\n %\n The results of solving the OED problem, for example, the optimal observational design, are then stored in \n \\code{oed\\_result}, which is an instance of (or derived from) the \\code{pyoed.oed.OEDResults} class. This class \n provides access to various attributes of the OED problem and the solution process (such as the optimization trajectory and \n brute force solution if requested).\n This gives a taste of how simple it is to create and test DA and OED problems in {PyOED}\\xspace.\n Further details on OED implementations in {PyOED}\\xspace are discussed in the following \n section (\\ref{subsec:results:standard_oed_examples}).\n \n \\lstinputlisting[language=Python,\n label={snipp:toy_linear_A_oed},\n firstline=103,lastline=106, \n caption={Create an OED object, and solve the OED optimization problem for the toy linear model.}]{.\/Code_Snippets\/toy_linear.py}\n\n\n\n \n \\subsection{A standard model-constrained OED experiment}\n \\label{subsec:results:standard_oed_examples}\n \n Parameter identification for an advection-diffusion (AD) model is the foundation of \n an experiment widely used in the model-constrained OED literature for validating \n theoretical developments; see, for example,~\\cite{huan2013simulation,bui2013computational,alexanderian2016fast,attia2018goal}.\n Comparing independent scientific OED developments 
is admittedly hard, mainly because of the scarcity of \n open software packages developed for OED. Filling this gap is one of the main goals and features of {PyOED}\\xspace. \n %\n Specifically, {PyOED}\\xspace will enable OED researchers to compare the performance of new OED algorithmic approaches with other methods. \n Moreover, it enables comparison with solutions obtained by brute-force search for small- to moderate-dimensional problems. \n %\n In this section (\\ref{subsec:results:standard_oed_examples}) we describe in detail the steps required to construct and solve \n an OED problem in {PyOED}\\xspace with an AD simulation model.\n This problem has been utilized independently in several OED developments; \n see, for example,~\\cite{bui2013computational,alexanderian2016fast,attia2018goal,attia2022optimal,attia2022stochastic}.\n Here, we show how {PyOED}\\xspace can be used to solve and benchmark this OED problem, thus providing a starting point for utilizing and developing \n multiple approaches for solving OED problems in general in {PyOED}\\xspace.\n Following the same approach as in Section~\\ref{subsec:ToyLinear}, \n we start by describing the components of the inverse problem and briefly show how they are initialized in {PyOED}\\xspace;\n then we initialize and solve the OED problem using the efficient stochastic approach summarized by Algorithm~\\ref{alg:REINFORCE_baseline}.\n Additionally, we discuss the steps that should be modified to utilize other solution formulations \n and methods such as the relaxation approach~\\eqref{eqn:relaxed_oed}.\n \n The code summarized here is provided in the \\code{pyoed.examples.OED\\_AD\\_FE} module with additional comments, \n details, and capabilities. 
\n A Jupyter Notebook \\code{pyoed.tutorials.OED\\_AD\\_FE} is also available and can be used to regenerate the numerical \n results presented in this section (\\ref{subsec:results:standard_oed_examples}).\n \n\n \n \\paragraph{The simulation model}\n \n The evolution of the contaminant field\n $u = \\xcont(\\mathbf{x}, t)$ is modeled by the following AD equations\n with the associated initial and boundary conditions:\n %\n \\begin{equation}\\label{eqn:advection_diffusion}\n \\begin{aligned}\n \\xcont_t - \\kappa \\Delta \\xcont + \\vec{v} \\cdot \\nabla \\xcont &= 0\n \\quad \\text{in } \\domain \\times [0,T], \\\\\n \\xcont(x,\\,0) &= \\theta \\quad \\text{in } \\domain, \\\\\n \\kappa \\nabla \\xcont \\cdot \\vec{n} &= 0\n \\quad \\text{on } \\partial \\domain \\times [0,T],\n \\end{aligned}\n \\end{equation}\n %\n where $\\kappa>0$ is the diffusivity, $T$ is the final simulation time, and\n $\\vec{v}$ is the velocity field.\n \n The domain is $\\domain:=(0, 1) \\times (0, 1)$ with two rectangular regions\n modeling two buildings inside the domain.\n The velocity field $\\vec{v}$ is known exactly and is obtained by solving a steady Navier--Stokes\n equation, with the side walls driving the flow, as detailed in~\\cite{PetraStadler11,attia2022optimal}.\n\n To create a simulation model object implementing~\\eqref{eqn:advection_diffusion} together with the ground truth of the initial condition, \n and to plot the domain (with its finite element discretization) as well as the velocity field, one can use snippet~\\ref{snipp:AD_FE_Model}; \n the output is shown in Figure~\\ref{fig:AD_domain_velocity}.\n \n \\lstinputlisting[language=Python,\n label={snipp:AD_FE_Model},\n firstline=9,lastline=19, \n caption={Create an object representing the simulation model~\\eqref{eqn:advection_diffusion}.}]{.\/Code_Snippets\/OED_AD_FE.py}\n\n %\n \\begin{figure}[!htbp]\n \\centering\n \\includegraphics[width=0.30\\textwidth]{Plots\/AD_Domain}\n \\qquad\n \\includegraphics[width=0.30\\textwidth]{Plots\/AD_VelocityField}\n 
\\caption{\n Left: finite element discretization of the domain $\\domain$ of the AD problem~\\eqref{eqn:advection_diffusion}.\n Right: the velocity field $\\vec{v}$.\n }\n \\label{fig:AD_domain_velocity}\n \\end{figure}\n %\n \n\n \n \\paragraph{The prior}\n \n In this setup, following~\\cite{attia2018goal,PetraStadler11,VillaPetraGhattas2016}, \n we choose a Laplacian prior for the parameter $\\iparam$, namely $\\GM{\\iparb}{\\Cparampriormat}$,\n with $\\Cparampriormat$ being a discretization of $\\mathcal{A}^{-2}$,\n where $\\mathcal{A}$ is a Laplacian operator.\n In {PyOED}\\xspace, a Laplacian prior can be created as described in snippet~\\ref{snipp:Laplacian_prior}.\n \n \\lstinputlisting[language=Python,\n label={snipp:Laplacian_prior},\n firstline=22,lastline=28, \n caption={Create a Laplacian prior.}]{.\/Code_Snippets\/OED_AD_FE.py}\n \n\n \n \\paragraph{The observation operator}\n \n A common observational configuration is to consider uniformly distributed candidate sensor locations \n and solve an OED problem to choose the optimal subset of these locations.\n A uniform observation operator can be created and incorporated in this problem as described in \n snippet~\\ref{snipp:AD_uniform_observations}. 
\n \n \\lstinputlisting[language=Python,\n label={snipp:AD_uniform_observations},\n firstline=32,lastline=36, \n caption={Create a uniform observation operator with $10$ candidate locations.}]{.\/Code_Snippets\/OED_AD_FE.py}\n \n Assuming Gaussian observational noise model, a Gaussian observation error model is created as described\n in~\\ref{snipp:AD_Gaussian_Noise} \n \n \\lstinputlisting[language=Python,\n label={snipp:AD_Gaussian_Noise},\n firstline=40,lastline=43, \n caption={Create a Gaussian noise model.}]{.\/Code_Snippets\/OED_AD_FE.py}\n\n\n \n \\paragraph{The inverse problem: 4DVar}\n \n As with the case of the toy linear problem described above in Section~\\ref{subsec:ToyLinear}, the elements of the inverse problem here can be created \n as described by snippet~\\ref{snipp:AD_inverse_problem}. \n Similarly, synthetic observations (data) can be created as in snippet~\\ref{snipp:AD_synthetic_data}.\n Note that all steps followed so far in this example are similar to those followed in the case \n of the toy linear model discussed in Section~\\ref{subsec:ToyLinear}.\n \n \\lstinputlisting[language=Python,\n label={snipp:AD_inverse_problem},\n firstline=46,lastline=54, \n caption={Create the DA object.}]{.\/Code_Snippets\/OED_AD_FE.py}\n\n \n \\lstinputlisting[language=Python,\n label={snipp:AD_synthetic_data},\n firstline=57,lastline=66, \n caption={Create synthetic observations, and associate them to the inverse problem object.}]{.\/Code_Snippets\/OED_AD_FE.py}\n\n\n \n \\paragraph{The OED problem}\n \n The discussion above is valid for all model-constrained OED approaches in {PyOED}\\xspace. In what follows, we describe the steps needed \n to create an OED object that follows the stochastic approach~\\eqref{eqn:stochastic_OED_optimization}. 
\n An OED object is created in {PyOED}\\xspace by following the steps in snippet~\\ref{snipp:AD_OED_object}.\n Here, we seek to activate only $4$ sensors out of the candidate $10$, and we use the trace of the Fisher information matrix ($\\mathsf{FIM}$) \n as the utility function. To enforce the budget, we use an $\\ell_0$ penalty term as detailed in~\\cite{attia2022stochastic}.\n \n \\lstinputlisting[language=Python,\n label={snipp:AD_OED_object},\n firstline=69,lastline=81, \n caption={Create an OED object implementing the stochastic approach~\\ref{alg:REINFORCE_baseline}.}]{.\/Code_Snippets\/OED_AD_FE.py}\n \n The OED problem can be solved as described by snippet~\\ref{snipp:AD_OED_solve}. \n \n \\lstinputlisting[language=Python,\n label={snipp:AD_OED_solve},\n firstline=85,lastline=90, \n caption={Solve the OED problem.}]{.\/Code_Snippets\/OED_AD_FE.py}\n \n The resulting \\code{oed\\_results} object can be used to generate several analysis plots as \n described by the code in snippet~\\ref{snipp:AD_OED_plot}.\n This generates multiple standard plots, including the\n performance of the optimization algorithm over consecutive iterations in Figure~\\ref{fig:AD_FE_Plots}(left),\n the optimal sensor locations generated by the optimization algorithm in Figure~\\ref{fig:AD_FE_Plots}(middle),\n and comparison of the quality of the solution with respect to brute force search shown in Figure~\\ref{fig:AD_FE_Plots}(right).\n\n \n \\lstinputlisting[language=Python,\n label={snipp:AD_OED_plot},\n firstline=92,lastline=93, \n caption={Create standard plots for assessing the performance of the optimization routine \n and the quality of the generated design.}]{.\/Code_Snippets\/OED_AD_FE.py}\n\n %\n \\begin{figure}[!htbp]\n \\centering\n \\includegraphics[width=0.35\\textwidth]{Plots\/AD_OED_optimization_iterations}\n \\quad \n \\includegraphics[width=0.22\\textwidth]{Plots\/AD_OED_optimal_sensors}\n \\quad\n \\includegraphics[width=0.35\\textwidth]{Plots\/AD_OED_bruteforce}\n 
\\caption{Subset of the plots generated by running the code in snippet~\\ref{snipp:AD_OED_plot}.\n Left: value of the utility (objective) function, i.e., the penalized OED criterion, over consecutive iterations of the optimization algorithm.\n Middle: optimal solution, showing optimal sensor locations in the domain.\n Right: value of the objective of the optimal solution (red star) returned by Algorithm~\\ref{alg:REINFORCE_baseline}, \n compared with the global optimum solution (black $x$ mark), \n and all possible solutions marked as blue circles; the x-axis shows the indices of all possible binary designs from $1$ to $2^{\\Nsens}=2^{10}=1024$, \n and the y-axis shows the corresponding values of the optimization objective.\n }\n \\label{fig:AD_FE_Plots}\n \\end{figure}\n %\n\n\n To use other OED formulations to solve the same problem, the user only needs to update the code in snippet~\\ref{snipp:AD_OED_object} \n with the proper OED implementation.\n For example, the relaxation approach~\\eqref{eqn:relaxed_oed} can be used as illustrated in the case of the linear toy model above \n in Section~\\ref{subsec:ToyLinear}; see snippet~\\ref{snipp:toy_linear_A_oed}.\n %\n Specifically, the relaxation approach~\\eqref{eqn:relaxed_oed} can be used to solve the present optimal sensor placement problem by replacing the code in \n snippet~\\ref{snipp:AD_OED_object} with the code in snippet~\\ref{snipp:AD_OED_relaxed_object}, which demonstrates the simplicity of the {PyOED}\\xspace interface.\n Results of snippet~\\ref{snipp:AD_OED_relaxed_object} are omitted here because the main goal is to discuss \n the usage of the approaches in {PyOED}\\xspace rather than to assess the quality of the solution approach, which is left for interested users of the package and for future benchmarking research.\n \n \\lstinputlisting[language=Python,\n label={snipp:AD_OED_relaxed_object},\n firstline=96,lastline=110, \n caption={Create an OED object 
implementing the relaxation approach~\\eqref{eqn:relaxed_oed}.}]{.\/Code_Snippets\/OED_AD_FE.py}\n\n Note, however, that we had to change the penalty function in snippet~\\ref{snipp:AD_OED_relaxed_object} because \n the $\\ell_0$ penalty function used in snippet~\\ref{snipp:AD_OED_object} \n is not differentiable, while the relaxation approach requires the OED objective function to be differentiable. \n For details, see, for example,~\\cite{attia2022stochastic}. \n \n\n\\section{Concluding Remarks}\n\\label{sec:Conclusions}\n This work describes {PyOED}\\xspace, a highly extensible high-level software package for OED in inverse problems and DA.\n {PyOED}\\xspace aims to be a comprehensive Python toolkit for model-constrained OED.\n The package targets scientists and researchers interested in understanding the details of OED formulations and approaches. \n It is also meant to enable researchers to experiment with standard and innovative OED technologies within external test problems (e.g., simulations).\n %\n The mathematical formulations of OED, inverse problems, and DA overlap significantly, and thus we plan to extend {PyOED}\\xspace with a plethora of Bayesian inversion, DA, and OED implementations as well as new scientific simulation models, \n observation error models, and observation operators.\n %\n \n While we focused the discussions in this paper on specific OED approaches, the current version of {PyOED}\\xspace (v1.0) provides several other implementations and emphasizes \n implementing the essential infrastructure that enables combining DA and OED elements with other parts of the package.\n The main limitation of the initial version of {PyOED}\\xspace is scalability. 
\n Specifically, the concept is developed without parallelization capability.\n In future versions of {PyOED}\\xspace, scalability will be achieved by adding message passing interface (MPI) support, for example using the \\code{mpi4py} package, \n and by supporting \\code{PETSc}~\\cite{balay2022petsc}.\n Performance will also be enhanced by converting or rewriting suitable parts of the package in \\code{Cython}.\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{8sec1}\n\nThe modern theory of general parabolic initial-boundary problems has been\ndeveloped for the classical scales of H\\\"older--Zygmund and Sobolev function\nspaces \\cite{AgranovichVishik64, Friedman64, LadyzhenskajaSolonnikovUraltzeva67, LionsMagenes72ii, Zhitarashu85, Ivasyshen90, Eidelman94, Lunardi1995, ZhitarashuEidelman98}. The central result of this theory are the theorems on\nwell-posedness by Hadamard of these problems on appropriate pairs of these\nspaces. For applications, especially to the spectral theory\nof differential operators, inner product Sobolev spaces play a special role.\n\nIn 1963 H\\\"ormander \\cite{Hermander63} proposed a broad and meaningful generalization\nof the Sobolev spaces in the framework of Hilbert spaces. He introduced the spaces\n$$\n\\mathcal{B}_{2,\\mu}:=\\bigl\\{ w\\in\\mathcal{S}'(\\mathbb{R}^{k})\n:\\mu(\\xi)\\widehat{w}(\\xi)\\in L_2(\\mathbb{R}^{k},\\,d\\xi)\\bigr\\},\n$$\nfor which a general Borel measurable weight function $\\mu:\\mathbb{R}^{k}\\rightarrow(0,\\infty)$ serves as an index of regularity of a distribution $w$. (Here, $\\widehat{w}$ denotes the Fourier transform\nof $w$.) 
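For orientation, note that the classical Sobolev spaces arise as the special case of a power weight; this routine observation (stated here for the reader's convenience, not taken from the cited works) can be written as:

```latex
% The power weight recovers the classical Sobolev space H^{s}(\mathbb{R}^{k}):
\[
\mu(\xi)=(1+|\xi|^{2})^{s/2}
\quad\Longrightarrow\quad
\mathcal{B}_{2,\mu}=H^{s}(\mathbb{R}^{k}),
\qquad
\|w\|_{\mathcal{B}_{2,\mu}}
  =\bigl\|(1+|\xi|^{2})^{s/2}\,\widehat{w}(\xi)\bigr\|_{L_2(\mathbb{R}^{k},\,d\xi)},
\]
% i.e., the weighted L_2 norm of \widehat{w} is the classical Sobolev norm.
```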
These spaces and their versions within the category of normed spaces (so-called\nspaces of generalized smoothness) have found various applications to analysis and partial differential equations \\cite{VolevichPaneah65, Lizorkin86, Paneah00, Jacob010205, Triebel01, FarkasLeopold06, NicolaRodino10, MikhailetsMurach14, MikhailetsMurach15}.\n\nRecently Mikhailets and Murach \\cite{MikhailetsMurach06UMJ2, MikhailetsMurach06UMJ3, MikhailetsMurach07UMJ5, Murach07UMJ6, MikhailetsMurach08UMJ4} have built a theory of solvability of general elliptic systems and elliptic boundary-value problems on Hilbert scales of spaces $H^{s;\\varphi}:=\\mathcal{B}_{2,\\mu}$ for which the index of regularity is of the form\n$$\n\\mu(\\xi):=(1+|\\xi|^{2})^{s\/2}\\varphi((1+|\\xi|^{2})^{1\/2}).\n$$\nHere, $s$ is a real number, and $\\varphi$ is a function varying slowly at infinity\nin the sense of Karamata \\cite{Karamata30a}. This theory is based on the method\nof interpolation with a function parameter between Hilbert spaces, specifically\nbetween Sobolev spaces. This allows Mikhailets and Murach to deduce theorems about solvability of elliptic systems and elliptic problems from the known results on the solvability of elliptic equations in Sobolev spaces. This theory is set forth in \\cite{MikhailetsMurach14, MikhailetsMurach12BJMA2}.\n\nGenerally, the method of interpolation between normed spaces proved to be very useful in the theory of elliptic \\cite{Berezansky65, LionsMagenes72i, Triebel95} and parabolic \\cite{LionsMagenes72ii, Lunardi1995} partial differential equations. Specifically, Lions and Magenes \\cite{LionsMagenes72ii} systematically used the interpolation with a number (power) parameter between Hilbert spaces in their theory of solvability of parabolic initial-boundary value problems on a complete scale of anisotropic Sobolev spaces.
Using the more flexible method of interpolation with a function parameter between Hilbert spaces, Los, Mikhailets, and Murach \\cite{LosMurach13MFAT2, LosMikhailetsMurach17CPAA} proved theorems on solvability of semi-homogeneous parabolic problems in $2b$-anisotropic H\\"ormander spaces $H^{s,s\/(2b);\\varphi}$, where $2b$ is a parabolic weight and where the parameters $s$ and $\\varphi$ are the same as those in the above-mentioned elliptic theory. These problems were considered in the case of homogeneous initial conditions (Cauchy data).\n\nThe purpose of this paper is to establish the well-posedness of inhomogeneous parabolic problems on appropriate pairs of the H\\"ormander spaces, i.e. to prove new isomorphism theorems for these problems. We consider the problems that consist of a general second order parabolic partial differential equation, the Dirichlet boundary condition or a general first order boundary condition, and the Cauchy datum. We deduce these isomorphism theorems from Lions and Magenes' result \\cite{LionsMagenes72ii} with the help of the interpolation with a function parameter between anisotropic Sobolev spaces. The use of this method in the case of inhomogeneous parabolic problems meets additional difficulties connected with the necessity to take into account quite complex compatibility conditions imposed on the right-hand sides of the problem. The model case of initial-boundary value problems for the heat equation is investigated in \\cite{Los15UMG5}.\n\n\n\n\n\n\\section{Statement of the problem}\\label{8sec2}\n\nWe arbitrarily choose an integer $n\\geq2$ and a real number $\\tau>0$. Let $G$ be a bounded domain in $\\mathbb{R}^{n}$ with an infinitely smooth boundary $\\Gamma:=\\partial G$. We put\n$\\Omega:=G\\times(0,\\tau)$ and $S:=\\Gamma\\times(0,\\tau)$; so, $\\Omega$ is an open cylinder in $\\mathbb{R}^{n+1}$, and $S$ is its lateral boundary.
Then $\\overline{\\Omega}:=\\overline{G}\\times[0,\\tau]$ and\n$\\overline{S}:=\\Gamma\\times[0,\\tau]$ are the closures of $\\Omega$ and $S$ respectively.\n\nIn $\\Omega$, we consider a parabolic second order partial differential equation\n\\begin{equation}\\label{8f1}\n\\begin{gathered}\nAu(x,t)\\equiv\\partial_{t}u(x,t)+\n\\sum_{|\\alpha|\\leq2}a_{\\alpha}(x,t)\\,D^\\alpha_x\nu(x,t)=f(x,t)\\\\\n\\mbox{for all}\\;\\;x\\in G\\;\\;\\mbox{and}\\;\\;t\\in(0,\\tau).\n\\end{gathered}\n\\end{equation}\nHere and below, we use the following notation for partial derivatives: $\\partial_t:=\\partial\/\\partial t$ and $D^\\alpha_x:=D^{\\alpha_1}_{1}\\dots D^{\\alpha_n}_{n}$, where $D_{j}:=i\\,\\partial\/\\partial{x_j}$, $x=(x_1,\\ldots,x_n)\\in\\mathbb{R}^{n}$, and $\\alpha:=(\\alpha_1,\\ldots,\\alpha_n)$ with $0\\leq\\alpha_1,...,\\alpha_n\\in\\mathbb{Z}$ and $|\\alpha|:=\\alpha_1+\\cdots+\\alpha_n$. We suppose that all the coefficients $a_{\\alpha}$ of $A$ belong to the space $C^{\\infty}(\\overline{\\Omega})$. In the paper, all functions and distributions are supposed to be complex-valued, so we consider complex function spaces.\n\nWe suppose that the partial differential operator $A$ is Petrovskii parabolic on $\\overline{\\Omega}$, i.e. it satisfies the following condition (see, e.g. 
\\cite[Section~9, Subsection~1]{AgranovichVishik64}):\n\n\\begin{condition}\\label{8cond1}\nFor arbitrary $x\\in\\overline{G}$, $t\\in[0,\\tau]$, $\\xi=(\\xi_{1},\\ldots,\\xi_{n})\\in\\mathbb{R}^{n}$, and $p\\in\\mathbb{C}$ with $\\mathrm{Re}\\,p\\geq0$, the inequality\n\\begin{equation*}\np+\\sum_{|\\alpha|=2} a_{\\alpha}(x,t)\\,\\xi_{1}^{\\alpha_{1}}\\cdots\\xi_{n}^{\\alpha_{n}}\n\\neq0\\quad\\mbox{holds whenever}\\quad|\\xi|+|p|\\neq0.\n\\end{equation*}\n\\end{condition}\n\nIn the paper, we investigate the initial-boundary value problem that consists of the parabolic equation~\\eqref{8f1}, the initial condition\n\\begin{equation}\\label{8f3}\nu(x,0)=h(x)\\quad\\mbox{for all}\\;\\;x\\in G,\n\\end{equation}\nand the zero-order (Dirichlet) boundary condition\n\\begin{equation}\\label{8f2}\nu(x,t)=g(x,t)\n\\quad\\mbox{for all}\\;\\;x\\in\\Gamma\\;\\;\\mbox{and}\\;\\;t\\in(0,\\tau)\n\\end{equation}\nor the first order boundary condition\n\\begin{equation}\\label{8f2n}\n\\begin{gathered}\nBu(x,t)\\equiv\\sum_{j=1}^{n}b_j(x,t)D_ju(x,t)+b_0(x,t)u(x,t)=g(x,t)\\\\\n\\mbox{for all}\\;\\;x\\in\\Gamma\\;\\;\\mbox{and}\\;\\;t\\in(0,\\tau).\n\\end{gathered}\n\\end{equation}\n\nAs to \\eqref{8f2n}, we assume that all the coefficients $b_0$, $b_1$, ..., $b_n$ of $B$ belong to $C^{\\infty}(\\overline{S})$ and that\n$B$ covers $A$ on $\\overline{S}$ \\cite[Section~9, Subsection~1]{AgranovichVishik64}. The latter assumption means the fulfilment of the following:\n\n\\begin{condition}\\label{8cond2}\nChoose arbitrarily $x\\in\\Gamma$, $t\\in[0,\\tau]$, vector\n$\\eta=(\\eta_1,\\dots,\\eta_n)\\in\\mathbb{R}^{n}$ tangent to the boundary $\\Gamma$ at the point $x$, and number $p\\in\\mathbb{C}$ with $\\mathrm{Re}\\,p\\geq0$ so that $|\\eta|+|p|\\neq0$. Let $\\nu(x)=(\\nu_1(x),\\dots,\\nu_n(x))$ be the unit vector of the inward normal to $\\Gamma$ at $x$. 
Then:\n\\begin{itemize}\n\\item[a)] the inequality\n$\n\\sum_{j=1}^{n}b_j(x,t)\\nu_j(x)\\neq0\n$\nholds true;\n\\item[b)] the number\n$$\n\\zeta=-\\biggl(\\,\\sum\\limits_{j=1}^{n}b_j(x,t)\\eta_j\\biggr)\n\\biggl(\\,\\sum\\limits_{j=1}^{n}b_j(x,t)\\nu_j(x)\\biggr)^{-1}\n$$\nis not a root of the polynomial\n$$\np+\\sum_{|\\alpha|=2} a_{\\alpha}(x,t)\\,(\\eta_{1}+\\zeta\\nu_{1}(x))^{\\alpha_{1}}\\cdots\n(\\eta_{n}+\\zeta\\nu_{n}(x))^{\\alpha_{n}}\\quad\\mbox{of}\\;\\;\n\\zeta\\in\\mathbb{C}.\n$$\n\\end{itemize}\n\\end{condition}\n\nIt is useful to note that if all the coefficients $b_1$,..., $b_n$ are real-valued, then part~b) of Condition~\\ref{8cond2} is satisfied. This follows directly from Condition~\\ref{8cond1}.\n\nThus, we examine both the parabolic problem \\eqref{8f1}, \\eqref{8f3}, \\eqref{8f2} and the parabolic problem \\eqref{8f1}, \\eqref{8f3}, \\eqref{8f2n}. We investigate them in appropriate H\\\"ormander inner product spaces considered in the next section.\n\n\n\n\n\n\n\n\\section{H\\\"ormander spaces}\\label{8sec3}\n\nAmong the normed function spaces $\\mathcal{B}_{p,\\mu}$ introduced by H\\\"ormander in \\cite[Section~2.2]{Hermander63}, we use the inner product spaces $H^{\\mu}(\\mathbb{R}^{k}):=\\mathcal{B}_{2,\\mu}$ defined over $\\mathbb{R}^{k}$, with $1\\leq k\\in\\mathbb{Z}$. 
Here, $\\nobreak{\\mu:\\mathbb{R}^{k}\\rightarrow(0,\\infty)}$ is an arbitrary Borel measurable function that satisfies the following condition: there exist positive numbers $c$ and $l$ such that\n$$\n\\frac{\\mu(\\xi)}{\\mu(\\eta)}\\leq\nc\\,(1+|\\xi-\\eta|)^{l}\\quad\\mbox{for all}\\quad \\xi,\\eta\\in\\mathbb{R}^{k}.\n$$\n\nBy definition, the (complex) linear space $H^{\\mu}(\\mathbb{R}^{k})$ consists of all tempered distributions $w\\in\\mathcal{S}'(\\mathbb{R}^{k})$ whose Fourier transform $\\widehat{w}$ is a locally Lebesgue integrable function subject to the condition\n\\begin{equation*}\n\\int\\limits_{\\mathbb{R}^{k}}\\mu^{2}(\\xi)\\,|\\widehat{w}(\\xi)|^{2}\\,d\\xi\n<\\infty.\n\\end{equation*}\nThe inner product in $H^{\\mu}(\\mathbb{R}^{k})$ is defined by the formula\n\\begin{equation*}\n(w_1,w_2)_{H^{\\mu}(\\mathbb{R}^{k})}=\n\\int\\limits_{\\mathbb{R}^{k}}\\mu^{2}(\\xi)\\,\\widehat{w_1}(\\xi)\\,\n\\overline{\\widehat{w_2}(\\xi)}\\,d\\xi,\n\\end{equation*}\nwhere $w_1,w_2\\in H^{\\mu}(\\mathbb{R}^{k})$. This inner product induces the norm\n$$\n\\|w\\|_{H^{\\mu}(\\mathbb{R}^{k})}:=(w,w)^{1\/2}_\n{H^{\\mu}(\\mathbb{R}^{k})}.\n$$\nAccording to \\cite[Section~2.2]{Hermander63}, the space $H^{\\mu}(\\mathbb{R}^{k})$ is Hilbert and separable with respect to this inner product. Besides that, this space is continuously embedded in the linear topological space $\\mathcal{S}'(\\mathbb{R}^{k})$ of tempered distributions on $\\mathbb{R}^{k}$, and the set $C^{\\infty}_{0}(\\mathbb{R}^{k})$ of test functions on $\\mathbb{R}^{k}$ is dense in $H^{\\mu}(\\mathbb{R}^{k})$ (see also H\\\"ormander's monograph \\cite[Section~10.1]{Hermander83}). We will say that the function parameter $\\mu$ is the regularity index for the space $H^{\\mu}(\\mathbb{R}^{k})$ and its versions $H^{\\mu}(\\cdot)$.\n\nA version of $H^{\\mu}(\\mathbb{R}^{k})$ for an\narbitrary nonempty open set $V\\subset\\mathbb{R}^{k}$ is introduced in the standard way. 
Namely,\n\\begin{gather}\\notag\nH^{\\mu}(V):=\\bigl\\{w\\!\\upharpoonright\\!V:\\,\nw\\in H^{\\mu}(\\mathbb{R}^{k})\\bigr\\},\\\\\n\\|u\\|_{H^{\\mu}(V)}:= \\inf\\bigl\\{\\|w\\|_{H^{\\mu}(\\mathbb{R}^{k})}:\\,w\\in\nH^{\\mu}(\\mathbb{R}^{k}),\\;u=w\\!\\upharpoonright\\!V\\bigr\\}, \\label{8f40}\n\\end{gather}\nwhere $u\\in H^{\\mu}(V)$. Here, as usual, $w\\!\\upharpoonright\\!V$ stands for the restriction of the distribution $w\\in H^{\\mu}(\\mathbb{R}^{k})$ to the open set~$V$. In other words, $H^{\\mu}(V)$ is the factor space of the space $H^{\\mu}(\\mathbb{R}^{k})$ by its subspace\n\\begin{equation}\\label{8f41}\nH^{\\mu}_{Q}(\\mathbb{R}^{k}):=\\bigl\\{w\\in\nH^{\\mu}(\\mathbb{R}^{k}):\\, \\mathrm{supp}\\,w\\subseteq Q\\bigr\\} \\quad\\mbox{with}\\;\\;Q:=\\mathbb{R}^{k}\\backslash V.\n\\end{equation}\nThus, $H^{\\mu}(V)$ is a separable Hilbert space.\nThe norm \\eqref{8f40} is induced by the inner product\n$$\n(u_{1},u_{2})_{H^{\\mu}(V)}:= (w_{1}-\\Upsilon\nw_{1},w_{2}-\\Upsilon w_{2})_{H^{\\mu}(\\mathbb{R}^{k})},\n$$\nwhere $w_{j}\\in H^{\\mu}(\\mathbb{R}^{k})$, $w_{j}=u_{j}$ in $V$\nfor each $j\\in\\{1,\\,2\\}$, and $\\Upsilon$ is the orthogonal projector of the space $H^{\\mu}(\\mathbb{R}^{k})$ onto its subspace \\eqref{8f41}. The spaces $H^{\\mu}(V)$ and $H^{\\mu}_{Q}(\\mathbb{R}^{k})$ were introduced and investigated by Volevich and Paneah\n\\cite[Section~3]{VolevichPaneah65}.\n\nIt follows directly from the definition of $H^{\\mu}(V)$ and properties of $H^{\\mu}(\\mathbb{R}^{k})$ that the space $H^{\\mu}(V)$ is continuously embedded in the linear topological space $\\mathcal{D}'(V)$ of all distributions on $V$ and that the set\n$$\nC^{\\infty}_{0}(\\overline{V}):=\\bigl\\{w\\!\\upharpoonright\\!\\overline{V}:\\, w\\in C^{\\infty}_{0}(\\mathbb{R}^{k})\\bigr\\}\n$$\nis dense in $H^{\\mu}(V)$.\n\nSuppose that the integer $k\\geq2$. 
Dealing with the above-stated parabolic problems, we need the H\\"ormander spaces $H^{\\mu}(\\mathbb{R}^{k})$ and their versions in the case where the regularity index $\\mu$ takes the form\n\\begin{equation}\\label{8f4}\n\\begin{gathered}\n\\mu(\\xi',\\xi_{k})=\\bigl(1+|\\xi'|^2+|\\xi_{k}|\\bigr)^{s\/2}\n\\varphi\\bigl((1+|\\xi'|^2+|\\xi_{k}|)^{1\/2}\\bigr)\\\\\n\\mbox{for all}\\;\\;\\xi'\\in\\mathbb{R}^{k-1}\\;\\;\\mbox{and}\\;\\;\n\\xi_{k}\\in\\mathbb{R}.\n\\end{gathered}\n\\end{equation}\nHere, the number parameter $s$ is real, whereas the function parameter $\\varphi$ runs over a certain class~$\\mathcal{M}$.\n\nBy definition, the class $\\mathcal{M}$ consists of all Borel measurable functions $\\varphi:[1,\\infty)\\rightarrow(0,\\infty)$ such that\n\\begin{itemize}\n \\item [a)] both the functions $\\varphi$ and $1\/\\varphi$ are bounded on each compact interval $[1,b]$, with $1<b<\\infty$;\n \\item [b)] the function $\\varphi$ varies slowly at infinity in the sense of Karamata, i.e. $\\varphi(\\lambda r)\/\\varphi(r)\\rightarrow1$ as $r\\rightarrow\\infty$ for each $\\lambda>0$.\n\\end{itemize}\n\nThe theory of slowly varying functions (at infinity) is expounded, e.g., in \\cite{BinghamGoldieTeugels89, Seneta76}. Their standard examples are the functions\n\\begin{equation*}\n\\varphi(r):=(\\log r)^{\\theta_{1}}\\,(\\log\\log r)^{\\theta_{2}} \\ldots\n(\\,\\underbrace{\\log\\ldots\\log}_{k\\;\\mbox{\\tiny{times}}}r\\,)^{\\theta_{k}}\n\\quad\\mbox{of}\\;\\;r\\gg1,\n\\end{equation*}\nwhere the parameters $k\\in\\mathbb{N}$ and\n$\\theta_{1},\\theta_{2},\\ldots,\\theta_{k}\\in\\mathbb{R}$ are arbitrary.\n\nLet $s\\in\\mathbb{R}$ and $\\varphi\\in\\mathcal{M}$. We put $H^{s,s\/2;\\varphi}(\\mathbb{R}^{k}):=H^{\\mu}(\\mathbb{R}^{k})$ in the case where $\\mu$ is of the form~\\eqref{8f4}. Specifically, if $\\varphi(r)\\equiv1$, then $H^{s,s\/2;\\varphi}(\\mathbb{R}^{k})$ becomes the anisotropic Sobolev inner product space $H^{s,s\/2}(\\mathbb{R}^{k})$ of order $(s,s\/2)$.
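To illustrate the Karamata condition of slow variation, take the simplest of these functions, $\\varphi(r)=\\log r$; its slow variation is verified directly:\n$$\n\\frac{\\varphi(\\lambda r)}{\\varphi(r)}=\n\\frac{\\log\\lambda+\\log r}{\\log r}\\rightarrow1\n\\quad\\mbox{as}\\;\\;r\\rightarrow\\infty\\quad\\mbox{for each}\\;\\;\\lambda>0.\n$$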
Generally, if $\\varphi\\in\\mathcal{M}$ is arbitrary, then the following continuous and dense embeddings hold:\n\\begin{equation}\\label{8f5}\nH^{s_{1},s_{1}\/2}(\\mathbb{R}^{k})\\hookrightarrow\nH^{s,s\/2;\\varphi}(\\mathbb{R}^{k})\\hookrightarrow\nH^{s_{0},s_{0}\/2}(\\mathbb{R}^{k})\\quad\\mbox{whenever}\\quad s_{0}<s<s_{1}.\n\\end{equation}\n\nLet $s>0$ and $\\varphi\\in\\mathcal{M}$. We put $H^{s,s\/2;\\varphi}(\\Pi):=H^{\\mu}(\\Pi)$ for the strip $\\Pi:=\\mathbb{R}^{n-1}\\times(0,\\tau)$ in the case where $\\mu$ is defined by formula \\eqref{8f4} with $k:=n$. Recall that, according to our assumption, $\\Gamma=\\partial G$ is an infinitely smooth closed manifold of dimension $n-1$, the $C^{\\infty}$-structure on $\\Gamma$ being induced by $\\mathbb{R}^{n}$. From this structure we arbitrarily choose a finite atlas formed by local charts $\\nobreak{\\theta_{j}:\\mathbb{R}^{n-1}\\leftrightarrow \\Gamma_{j}}$ with $j=1,\\ldots,\\lambda$. Here, the open sets $\\Gamma_{1},\\ldots,\\Gamma_{\\lambda}$\nmake up a covering of~$\\Gamma$. We also arbitrarily choose functions $\\chi_{j}\\in C^{\\infty}(\\Gamma)$, with $j=1,\\ldots,\\lambda$, so that $\\mathrm{supp}\\,\\chi_{j}\\subset\\Gamma_{j}$ and $\\chi_{1}+\\cdots+\\chi_{\\lambda}=1$ on $\\Gamma$.\n\nBy definition, the linear space $H^{s,s\/2;\\varphi}(S)$ consists of all\nsquare integrable functions $\\nobreak{g:S\\to\\mathbb{C}}$ such that the function\n$$\ng_{j}(x,t):=\\chi_{j}(\\theta_{j}(x))\\,g(\\theta_{j}(x),t)\n\\quad\\mbox{of}\\;\\;x\\in\\mathbb{R}^{n-1}\\;\\;\\mbox{and}\\;\\;t\\in(0,\\tau)\n$$\nbelongs to $H^{s,s\/2;\\varphi}(\\Pi)$ for each number\n$j\\in\\{1,\\ldots,\\lambda\\}$. The inner product in $H^{s,s\/2;\\varphi}(S)$ is defined by the formula\n\\begin{equation*}\n(g,g')_{H^{s,s\/2;\\varphi}(S)}:=\\sum_{j=1}^{\\lambda}\\,\n(g_{j},g'_{j})_{H^{s,s\/2;\\varphi}(\\Pi)},\n\\end{equation*}\nwhere $g,g'\\in H^{s,s\/2;\\varphi}(S)$.
This inner product naturally induces the norm\n$$\n\\|g\\|_{H^{s,s\/2;\\varphi}(S)}:=(g,g)^{1\/2}_{H^{s,s\/2;\\varphi}(S)}.\n$$\nThe space $H^{s,s\/2;\\varphi}(S)$ is complete (i.~e. Hilbert) and does not depend up to equivalence of norms on the choice of local charts and partition of unity on $\\Gamma$ \\cite[Theorem~1]{Los16JMathSci}. Note that this space is actually defined with the help of the following special local charts on $S$:\n\\begin{equation}\\label{8f-local}\n\\theta_{j}^*:\\Pi=\\mathbb{R}^{n-1}\\times(0,\\tau)\\leftrightarrow\n\\Gamma_{j}\\times(0,\\tau),\\quad j=1,\\ldots,\\lambda,\n\\end{equation}\nwhere $\\theta_{j}^*(x,t):=(\\theta_{j}(x),t)$ for all\n$x\\in\\mathbb{R}^{n-1}$ and $t\\in(0,\\tau)$.\n\nWe also need isotropic H\\"ormander spaces $H^{s;\\varphi}(V)$ over an arbitrary open nonempty set $V\\subseteq\\mathbb{R}^{k}$ with $k\\geq1$. Let $s\\in\\mathbb{R}$ and $\\varphi\\in\\mathcal{M}$. We put $H^{s;\\varphi}(V):=H^{\\mu}(V)$ in the case where the regularity index $\\mu$ takes the form\n\\begin{equation}\\label{8f50}\n\\mu(\\xi)=\\bigl(1+|\\xi|^2\\bigr)^{s\/2}\\varphi\\bigl((1+|\\xi|^2)^{1\/2}\\bigr)\n\\quad\\mbox{for arbitrary}\\;\\;\\xi\\in\\mathbb{R}^{k}.\n\\end{equation}\nSince the function \\eqref{8f50} is radial (i.e., depends only on\n$|\\xi|$), the space $H^{s;\\varphi}(V)$ is isotropic.\nWe will use the spaces $H^{s;\\varphi}(V)$ given over the whole Euclidean space $V:=\\mathbb{R}^{k}$ or over the domain $V:=G$ in $\\mathbb{R}^{n}$.\n\nBesides, we will use H\\"ormander spaces $H^{s;\\varphi}(\\Gamma)$ over $\\Gamma=\\partial G$. They are defined with the help of the above-mentioned collection of local charts $\\{\\theta_{j}\\}$ and partition of unity $\\{\\chi_{j}\\}$ on $\\Gamma$ similarly to the spaces over $S$. Let $s\\in\\mathbb{R}$ and $\\varphi\\in\\mathcal{M}$.
By definition, the linear space $H^{s;\\varphi}(\\Gamma)$ consists of all distributions $\\omega\\in\\mathcal{D}'(\\Gamma)$ on $\\Gamma$ such that, for each number $j\\in\\{1,\\ldots,\\lambda\\}$, the distribution\n$\\omega_{j}(x):=\\chi_{j}(\\theta_{j}(x))\\,\\omega(\\theta_{j}(x))$ of\n$x\\in\\mathbb{R}^{n-1}$ belongs to $H^{s;\\varphi}(\\mathbb{R}^{n-1})$.\nThe inner product in $H^{s;\\varphi}(\\Gamma)$ is defined by the formula\n\\begin{equation*}\n(\\omega,\\omega')_{H^{s;\\varphi}(\\Gamma)}:=\n\\sum_{j=1}^{\\lambda}\\,\n(\\omega_{j},\\omega'_{j})_{H^{s;\\varphi}(\\mathbb{R}^{n-1})},\n\\end{equation*}\nwhere $\\omega,\\omega'\\in H^{s;\\varphi}(\\Gamma)$. It induces the norm\n$$\n\\|\\omega\\|_{H^{s;\\varphi}(\\Gamma)}:=\n(\\omega,\\omega)^{1\/2}_{H^{s;\\varphi}(\\Gamma)}.\n$$\nThe space $H^{s;\\varphi}(\\Gamma)$ is a separable Hilbert space and does not depend up to equivalence of norms on our choice of local charts and partition of unity on $\\Gamma$ \\cite[Theorem 3.6(i)]{MikhailetsMurach08MFAT1}.\n\nNote that the classes of isotropic inner product spaces\n\\begin{equation*}\n\\bigl\\{H^{s;\\varphi}(V):s\\in\\mathbb{R},\\;\\varphi\\in\\mathcal{M}\\bigr\\}\n\\quad\\mbox{and}\\quad\n\\bigl\\{H^{s;\\varphi}(\\Gamma):s\\in\\mathbb{R},\\;\\varphi\\in\\mathcal{M}\\bigr\\}\n\\end{equation*}\nwere selected, investigated, and systematically applied to elliptic differential operators and elliptic boundary-value problems by Mikhailets and Murach \\cite{MikhailetsMurach14, MikhailetsMurach12BJMA2}.\n\nIf $\\varphi\\equiv1$, then the considered spaces $H^{s,s\/2;\\varphi}(\\cdot)$ and $H^{s;\\varphi}(\\cdot)$ become the Sobolev spaces $H^{s,s\/2}(\\cdot)$ and $H^{s}(\\cdot)$ respectively.
It follows directly from \\eqref{8f5} that\n\\begin{equation}\\label{8f5a}\nH^{s_{1},s_{1}\/2}(\\cdot)\\hookrightarrow\nH^{s,s\/2;\\varphi}(\\cdot)\\hookrightarrow\nH^{s_{0},s_{0}\/2}(\\cdot)\\quad\\mbox{whenever}\\quad s_{0}<s<s_{1}\n\\end{equation}\nand, similarly, that\n\\begin{equation}\\label{8f5b}\nH^{s_{1}}(\\cdot)\\hookrightarrow\nH^{s;\\varphi}(\\cdot)\\hookrightarrow\nH^{s_{0}}(\\cdot)\\quad\\mbox{whenever}\\quad s_{0}<s<s_{1}.\n\\end{equation}\nThese embeddings are continuous and dense.\n\n\n\n\n\n\\section{Main results}\\label{8sec4}\n\nWith the parabolic problem \\eqref{8f1}--\\eqref{8f2} we associate the linear mapping\n\\begin{gather}\\label{8f7}\n\\Lambda_0:\\,u\\mapsto\\bigl(Au,u\\!\\upharpoonright\\!S,u(\\cdot,0)\\bigr),\\quad\n\\mbox{where}\\quad u\\in C^{\\infty}(\\overline{\\Omega}).\n\\end{gather}\nFor arbitrary real $s\\geq2$, this mapping extends uniquely (by continuity) to a bounded linear operator\n\\begin{equation}\\label{8f7a}\n\\Lambda_0:\\,H^{s,s\/2}(\\Omega)\\rightarrow\nH^{s-2,s\/2-1}(\\Omega)\\oplus\nH^{s-1\/2,s\/2-1\/4}(S)\\oplus H^{s-1}(G)=:\\mathcal{H}_{0}^{s-2,s\/2-1}.\n\\end{equation}\nChoosing any function $u(x,t)$ from $H^{s,s\/2}(\\Omega)$, we define the right-hand sides\n\\begin{equation}\\label{8f7b}\nf\\in H^{s-2,s\/2-1}(\\Omega),\\quad g\\in H^{s-1\/2,s\/2-1\/4}(S),\\quad\n\\mbox{and}\\quad h\\in H^{s-1}(G)\n\\end{equation}\nof the problem by the formula $(f,g,h):=\\Lambda_0u$ with the help of this bounded operator. The traces $\\partial^{k}_t g(\\cdot,0)\\in H^{s-3\/2-2k}(\\Gamma)$ are then defined by closure for all $k\\in\\mathbb{Z}$ such that $0\\leq k<s\/2-3\/4$. Besides, every function $u\\in H^{s,s\/2}(\\Omega)$ satisfies the relations\n\\begin{equation}\\label{8f68}\n\\partial^{k}_t g(\\cdot,0)=\\bigl(\\partial^{k}_t u\\bigr)(\\cdot,0)\\!\\upharpoonright\\!\\Gamma,\n\\quad\\mbox{with}\\;\\;0\\leq k<s\/2-3\/4,\n\\end{equation}\nand the initial data $\\partial^{k}_t u(\\cdot,0)=v_{k}$ are expressed via $f$ and $h$ by means of the parabolic equation \\eqref{8f1}; namely,\n\\begin{equation}\\label{8f9}\n\\begin{gathered}\nv_{0}:=h\\quad\\mbox{and}\\\\\nv_{k}:=\\partial^{k-1}_{t}f(\\cdot,0)-\\sum_{j=0}^{k-1}\\binom{k-1}{j}\n\\sum_{|\\alpha|\\leq2}\\bigl(\\partial^{k-1-j}_{t}a_{\\alpha}\\bigr)(\\cdot,0)\\,\nD^{\\alpha}_{x}v_{j},\\quad\\mbox{with}\\;\\;1\\leq k<s\/2-3\/4.\n\\end{gathered}\n\\end{equation}\n\nNow, substituting \\eqref{8f9} into \\eqref{8f68}, we obtain the compatibility conditions\n\\begin{equation}\\label{8f10}\n\\partial^{k}_t g\\!\\upharpoonright\\!\\Gamma=v_k\\!\\upharpoonright\\!\\Gamma,\n\\quad\\mbox{with}\\;\\;\\;k\\in\\mathbb{Z}\\;\\;\\;\\mbox{and}\\;\\;\\;\n0\\leq k<s\/2-3\/4.\n\\end{equation}\nThese conditions are well defined for every $\\bigl(f,g,h\\bigr)\\in\\mathcal{H}_{0}^{s-2-\\varepsilon,s\/2-1-\\varepsilon\/2}$ whenever $\\varepsilon>0$ is sufficiently small. Thus, the compatibility conditions \\eqref{8f10} are well posed.\n\nFor instance, if $2<s\\leq7\/2$, then \\eqref{8f10} consists of the sole condition $g\\!\\upharpoonright\\!\\Gamma=h\\!\\upharpoonright\\!\\Gamma$ corresponding to $k=0$. We set $E_{0}:=\\{2r+3\/2:1\\leq r\\in\\mathbb{Z}\\}$; this is the set of all discontinuities of the function that assigns the number of compatibility conditions \\eqref{8f10} to $s>2$.\n\nLet us introduce the pair of Hilbert spaces between which the problem \\eqref{8f1}--\\eqref{8f2} induces the required isomorphism for arbitrarily chosen number parameter $s>2$ and function parameter $\\varphi\\in\\mathcal{M}$. We take $H^{s,s\/2;\\varphi}(\\Omega)$ as the source space of this isomorphism; in other words, $H^{s,s\/2;\\varphi}(\\Omega)$ serves as a space of solutions $u$ to the problem. To introduce the target space of the isomorphism, consider the Hilbert space\n\\begin{gather*}\n\\mathcal{H}_{0}^{s-2,s\/2-1;\\varphi}:=\nH^{s-2,s\/2-1;\\varphi}(\\Omega)\\oplus\nH^{s-1\/2,s\/2-1\/4;\\varphi}(S)\\oplus H^{s-1;\\varphi}(G).\n\\end{gather*}\nIn the Sobolev case of $\\varphi\\equiv1$ this space coincides with the target space of the bounded operator \\eqref{8f7a}. The target space of the isomorphism is embedded in $\\mathcal{H}_{0}^{s-2,s\/2-1;\\varphi}$ and is denoted by $\\mathcal{Q}_{0}^{s-2,s\/2-1;\\varphi}$. We define this space separately in the $s\\notin E_{0}$ case and the $s\\in E_{0}$ case.\n\nSuppose first that $s\\notin E_{0}$. By definition, the linear space $\\mathcal{Q}_{0}^{s-2,s\/2-1;\\varphi}$ consists of all vectors $\\bigl(f,g,h\\bigr)\\in\\mathcal{H}_{0}^{s-2,s\/2-1;\\varphi}$ that satisfy the compatibility conditions \\eqref{8f10}. As we have noted, these conditions are well defined for every $\\bigl(f,g,h\\bigr)\\in \\mathcal{H}_{0}^{s-2-\\varepsilon,s\/2-1-\\varepsilon\/2}$ for sufficiently small $\\varepsilon>0$.
Hence, they are also well defined for every $\\bigl(f,g,h\\bigr)\\in\\mathcal{H}_{0}^{s-2,s\/2-1;\\varphi}$ due to the continuous embedding\n\\begin{equation}\\label{8f69a}\n\\mathcal{H}_{0}^{s-2,s\/2-1;\\varphi}\n\\hookrightarrow \\mathcal{H}_{0}^{s-2-\\varepsilon,s\/2-1-\\varepsilon\/2}.\n\\end{equation}\nThe latter follows directly from \\eqref{8f5a} and \\eqref{8f5b}. Thus, our definition is reasonable.\n\nWe endow the linear space $\\mathcal{Q}_{0}^{s-2,s\/2-1;\\varphi}$ with the inner product and norm in the Hilbert space\n$\\mathcal{H}_{0}^{s-2,s\/2-1;\\varphi}$. The space $\\mathcal{Q}_{0}^{s-2,s\/2-1;\\varphi}$\nis complete, i.e. a Hilbert one. Indeed, if the number $\\varepsilon>0$ is sufficiently small, then\n$$\n\\mathcal{Q}_{0}^{s-2,s\/2-1;\\varphi}=\n\\mathcal{H}_{0}^{s-2,s\/2-1;\\varphi}\\cap\n\\mathcal{Q}_{0}^{s-2-\\varepsilon,s\/2-1-\\varepsilon\/2}.\n$$\nHere, the space $\\mathcal{Q}_{0}^{s-2-\\varepsilon,s\/2-1-\\varepsilon\/2}$ is complete because the differential operators and trace operators used in the compatibility conditions are bounded on the corresponding pairs of Sobolev spaces. Therefore the right-hand side of this equality is complete with respect to the sum of the norms in the components of the intersection, this sum being equivalent to the norm in $\\mathcal{H}_{0}^{s-2,s\/2-1;\\varphi}$ due to \\eqref{8f69a}. Thus, the space $\\mathcal{Q}_{0}^{s-2,s\/2-1;\\varphi}$ is complete (with respect to the latter norm).\n\nIf $s\\in E_{0}$, then we define the Hilbert space $\\mathcal{Q}_{0}^{s-2,s\/2-1;\\varphi}$\nby means of the interpolation between its analogs just introduced.
Namely, we put\n\\begin{equation}\\label{8f71}\n\\mathcal{Q}_{0}^{s-2,s\/2-1;\\varphi}:=\\bigl[\n\\mathcal{Q}_{0}^{s-2-\\varepsilon,s\/2-1-\\varepsilon\/2;\\varphi},\n\\mathcal{Q}_{0}^{s-2+\\varepsilon,s\/2-1+\\varepsilon\/2;\\varphi}\n\\bigr]_{1\/2}.\n\\end{equation}\nHere, the number $\\varepsilon\\in(0,1\/2)$ is arbitrarily chosen, and the right-hand side of the equality is the result of the interpolation\nof the written pair of Hilbert spaces with the parameter~$1\/2$. We will recall the definition of the interpolation between Hilbert spaces in Section~\\ref{8sec5}. The Hilbert space $\\mathcal{Q}_{0}^{s-2,s\/2-1;\\varphi}$ defined by formula \\eqref{8f71} does not depend on the choice of $\\varepsilon$ up to equivalence of norms and is continuously embedded in $\\mathcal{H}_{0}^{s-2,s\/2-1;\\varphi}$. This will be shown in Remark~\\ref{8rem1} at the end of Section~\\ref{8sec6}.\n\nNow we can formulate our main result concerning the parabolic initial-boundary value problem \\eqref{8f1}--\\eqref{8f2}.\n\n\\begin{theorem}\\label{8th1}\nFor arbitrary $s>2$ and $\\varphi\\in\\nobreak\\mathcal{M}$ the mapping \\eqref{8f7} extends uniquely (by continuity) to an isomorphism\n\\begin{equation}\\label{8f8}\n\\Lambda_{0}:H^{s,s\/2;\\varphi}(\\Omega)\\leftrightarrow\n\\mathcal{Q}_{0}^{s-2,s\/2-1;\\varphi}.\n\\end{equation}\n\\end{theorem}\n\nIn other words, the parabolic problem \\eqref{8f1}--\\eqref{8f2} is well posed (in the sense of Hadamard) on the pair of Hilbert spaces $H^{s,s\/2;\\varphi}(\\Omega)$ and $\\mathcal{Q}_{0}^{s-2,s\/2-1;\\varphi}$ whenever $s>2$ and $\\varphi\\in\\nobreak\\mathcal{M}$, the right-hand side $\\bigl(f,g,h\\bigr)\\in\\mathcal{Q}_{0}^{s-2,s\/2-1;\\varphi}$ of the problem being defined by closure for an arbitrary function $u\\in H^{s,s\/2;\\varphi}(\\Omega)$.\n\nNote that the necessity to define the target space $\\mathcal{Q}_{0}^{s-2,s\/2-1;\\varphi}$ separately in the $s\\in E_{0}$ case is caused by the following: if we defined this
space for $s\\in E_{0}$ in the way used in the $s\\notin E_{0}$ case, then the isomorphism \\eqref{8f8} would not hold at least for $\\varphi\\equiv1$. This follows from a result by Solonnikov \\cite[Section~6]{Solonnikov64}.\n\nConsider now the parabolic problem \\eqref{8f1}, \\eqref{8f3}, \\eqref{8f2n}, which corresponds to the first order boundary condition on $S$. Let us write the compatibility conditions for the right-hand sides of this problem.\n\nWe associate the linear mapping\n\\begin{gather}\\label{8f11}\n\\Lambda_1:\\,u\\mapsto\\bigl(Au,Bu,u(\\cdot,0)\\bigr),\\quad\n\\mbox{where}\\quad u\\in C^{\\infty}(\\overline{\\Omega}),\n\\end{gather}\nwith the problem \\eqref{8f1}, \\eqref{8f3}, \\eqref{8f2n}. For arbitrary real $s\\geq2$, this mapping extends uniquely (by continuity) to a bounded linear operator\n\\begin{equation}\\label{8f7N}\n\\Lambda_1:\\,H^{s,s\/2}(\\Omega)\\rightarrow\nH^{s-2,s\/2-1}(\\Omega)\\oplus\nH^{s-3\/2,s\/2-3\/4}(S)\\oplus H^{s-1}(G).\n\\end{equation}\nChoosing any function $u(x,t)$ from $H^{s,s\/2}(\\Omega)$, we define the right-hand sides\n\\begin{equation*}\nf\\in H^{s-2,s\/2-1}(\\Omega),\\quad g\\in H^{s-3\/2,s\/2-3\/4}(S),\\quad\n\\mbox{and}\\quad h\\in H^{s-1}(G)\n\\end{equation*}\nof the problem by the formula $(f,g,h):=\\Lambda_1u$ with the help of this bounded operator. Here, unlike \\eqref{8f7b}, the inclusion $u\\in H^{s,s\/2}(\\Omega)$ implies $g=Bu\\in H^{s-3\/2,s\/2-3\/4}(S)$ due to \\cite[Chapter~II, Theorem 7]{Slobodetskii58}. According to this theorem, the traces $\\partial^{k}_t g(\\cdot,0)\\in H^{s-5\/2-2k}(\\Gamma)$ are defined by closure for all $k\\in\\mathbb{Z}$ such that $0\\leq k<s\/2-5\/4$. Substituting \\eqref{8f9} into the traces of both sides of the boundary condition \\eqref{8f2n} taken at $t=0$, we obtain the compatibility conditions\n\\begin{equation}\\label{8f14}\n\\partial^{k}_{t}g\\!\\upharpoonright\\!\\Gamma=\\sum_{j=0}^{k}\\binom{k}{j}\\,\n\\bigl(\\partial^{k-j}_{t}B\\bigr)(\\cdot,0)\\,v_{j}\\!\\upharpoonright\\!\\Gamma,\n\\quad\\mbox{with}\\;\\;k\\in\\mathbb{Z}\\;\\;\\mbox{and}\\;\\;0\\leq k<s\/2-5\/4.\n\\end{equation}\nHere, $\\bigl(\\partial^{k-j}_{t}B\\bigr)(\\cdot,0)$ denotes the first order boundary operator whose coefficients are the derivatives $\\partial^{k-j}_{t}b_{l}(\\cdot,0)$ of the coefficients of $B$, with $l\\in\\{0,1,\\ldots,n\\}$. These conditions are well defined whenever $s\/2-5\/4>0$. Note that if $s\\leq5\/2$, then there are no compatibility conditions.\n\nWe set $E_{1}:=\\{2r+1\/2:1\\leq r\\in\\mathbb{Z}\\}$.
Observe that $E_{1}$ is the set of all discontinuities of the function that assigns the number of compatibility conditions \\eqref{8f14} to $s\\geq2$.\n\nTo formulate our isomorphism theorem for the parabolic problem \\eqref{8f1}, \\eqref{8f3}, \\eqref{8f2n}, we introduce the source and target spaces of this isomorphism. Let $s>2$ and $\\varphi\\in\\mathcal{M}$. As above, we take $H^{s,s\/2;\\varphi}(\\Omega)$ as the source space. The target space denoted by $\\mathcal{Q}_{1}^{s-2,s\/2-1;\\varphi}$ is embedded in the Hilbert space\n\\begin{gather*}\n\\mathcal{H}_{1}^{s-2,s\/2-1;\\varphi}:=\nH^{s-2,s\/2-1;\\varphi}(\\Omega)\\oplus\nH^{s-3\/2,s\/2-3\/4;\\varphi}(S)\\oplus H^{s-1;\\varphi}(G).\n\\end{gather*}\nIn the Sobolev case of $\\varphi\\equiv1$ this space coincides with the target space of the bounded operator \\eqref{8f7N}.\n\nIf $s\\notin E_{1}$, then the linear space $\\mathcal{Q}_{1}^{s-2,s\/2-1;\\varphi}$ is defined to consist of all vectors $\\bigl(f,g,h\\bigr)\\in \\mathcal{H}_{1}^{s-2,s\/2-1;\\varphi}$ that satisfy the compatibility conditions \\eqref{8f14}. The definition is reasonable because these conditions are well defined for every $\\bigl(f,g,h\\bigr)\\in \\mathcal{H}_{1}^{s-2-\\varepsilon,s\/2-1-\\varepsilon\/2}$ for sufficiently small $\\varepsilon>0$ and because\n\\begin{equation}\\label{8f69N}\n\\mathcal{H}_{1}^{s-2,s\/2-1;\\varphi}\n\\hookrightarrow \\mathcal{H}_{1}^{s-2-\\varepsilon,s\/2-1-\\varepsilon\/2}.\n\\end{equation}\nThis continuous embedding follows immediately from \\eqref{8f5a} and \\eqref{8f5b}. The linear space $\\mathcal{Q}_{1}^{s-2,s\/2-1;\\varphi}$ is\nendowed with the inner product and the norm in the Hilbert space\n$\\mathcal{H}_{1}^{s-2,s\/2-1;\\varphi}$. The space $\\mathcal{Q}_{1}^{s-2,s\/2-1;\\varphi}$ is\ncomplete, i.e. a Hilbert one. This is justified by the same reasoning as we have used to prove the completeness of $\\mathcal{Q}_{0}^{s-2,s\/2-1;\\varphi}$. 
Note that if $2<s\\leq5\/2$, then the space $\\mathcal{Q}_{1}^{s-2,s\/2-1;\\varphi}$ coincides with the whole space $\\mathcal{H}_{1}^{s-2,s\/2-1;\\varphi}$ since there are no compatibility conditions in this case.\n\nIf $s\\in E_{1}$, then we define the Hilbert space $\\mathcal{Q}_{1}^{s-2,s\/2-1;\\varphi}$ by means of the interpolation between its analogs just introduced, similarly to formula \\eqref{8f71}.\n\nNow we can formulate our main result concerning the parabolic problem \\eqref{8f1}, \\eqref{8f3}, \\eqref{8f2n}.\n\n\\begin{theorem}\\label{8th2}\nFor arbitrary $s>2$ and $\\varphi\\in\\nobreak\\mathcal{M}$ the mapping \\eqref{8f11} extends uniquely (by continuity) to an isomorphism\n\\begin{equation}\\label{8f12}\n\\Lambda_{1}:H^{s,s\/2;\\varphi}(\\Omega)\\leftrightarrow\n\\mathcal{Q}_{1}^{s-2,s\/2-1;\\varphi}.\n\\end{equation}\n\\end{theorem}\n\nIn other words, the parabolic problem \\eqref{8f1}, \\eqref{8f3}, \\eqref{8f2n} is well posed on the pair of Hilbert spaces $H^{s,s\/2;\\varphi}(\\Omega)$ and $\\mathcal{Q}_{1}^{s-2,s\/2-1;\\varphi}$ whenever $s>2$ and $\\varphi\\in\\nobreak\\mathcal{M}$, the right-hand side $\\bigl(f,g,h\\bigr)\\in\\mathcal{Q}_{1}^{s-2,s\/2-1;\\varphi}$ of this problem being defined by closure for an arbitrary function $u\\in H^{s,s\/2;\\varphi}(\\Omega)$.\n\nNote that the necessity to define the target space $\\mathcal{Q}_{1}^{s-2,s\/2-1;\\varphi}$ separately in the $s\\in E_{1}$ case is stipulated by a cause similar to that indicated for the space $\\mathcal{Q}_{0}^{s-2,s\/2-1;\\varphi}$. Namely, if we defined this space for $s\\in E_{1}$ in the way used in the $s\\notin E_{1}$ case, then the isomorphism \\eqref{8f12} would not hold at least when $\\varphi\\equiv1$ and \\eqref{8f2n} is the Neumann boundary condition (see \\cite[Section~6]{Solonnikov64}).\n\nTheorems \\ref{8th1} and \\ref{8th2} are known in the Sobolev case where $\\varphi\\equiv1$ and neither $s$ nor $s\/2$ is half-integer.\nNamely, they are contained in Agranovich and Vishik's result\n\\cite[Theorem~12.1]{AgranovichVishik64} in the case of $s,s\/2\\in\\mathbb{Z}$ and are covered by Lions and Magenes' result \\cite[Theorem~6.2]{LionsMagenes72ii}. Solonnikov \\cite[Theorem~17]{Solonnikov64} proved the corresponding a priori estimates for anisotropic Sobolev norms of solutions to the problem \\eqref{8f1}--\\eqref{8f2} and to the problem \\eqref{8f1}, \\eqref{8f3}, \\eqref{8f2n} provided that \\eqref{8f2n} is the Neumann boundary condition.
Note that these results include the limiting case of $s=2$.\n\nIn Section \\ref{8sec6} we will deduce Theorems \\ref{8th1} and \\ref{8th2} from the above-mentioned results with the help of the method of interpolation with a function parameter between Hilbert spaces, specifically between Sobolev inner product spaces. Therefore we devote the next section to this method and its applications to Sobolev and H\\\"ormander spaces.\n\n\n\n\n\\section{Interpolation with a function parameter between Hilbert spaces}\\label{8sec5}\n\nThis method of interpolation is a natural generalization of the classical interpolation method by S.~Krein and J.-L.~Lions to the case when a general enough function is used instead of a number as an interpolation parameter; see, e.g., monographs \\cite[Chapter~IV, Section~1, Subsection~10]{KreinPetuninSemenov82} and \\cite[Chapter~1, Sections 2 and 5]{LionsMagenes72i}. For our purposes, it is sufficient to restrict the discussion of the interpolation with a function parameter to the case of separable complex Hilbert spaces. We mainly follow the monograph \\cite[Section~1.1]{MikhailetsMurach14}, which systematically expounds this interpolation (see also \\cite[Section~2]{MikhailetsMurach08MFAT1}).\n\nLet $X:=[X_{0},X_{1}]$ be an ordered pair of separable complex Hilbert spaces such that $X_{1}\\subseteq X_{0}$ and this embedding is continuous and dense. This pair is said to be admissible. For $X$, there is a positive-definite self-adjoint operator $J$ on $X_{0}$ with the domain $X_{1}$ such that $\\|Jv\\|_{X_{0}}=\\|v\\|_{X_{1}}$ for every $v\\in X_{1}$. This operator is uniquely determined by the pair $X$ and is called a generating operator for~$X$; see, e.g., \\cite[Chapter~IV, Theorem~1.12]{KreinPetuninSemenov82}. 
This operator defines an isometric isomorphism $J:X_{1}\\leftrightarrow X_{0}$.\n\nLet $\\mathcal{B}$ denote the set of all Borel measurable functions $\\psi:(0,\\infty)\\rightarrow(0,\\infty)$ such that $\\psi$ is bounded on each compact interval $[a,b]$, with $0<a<b<\\infty$, and that $1\/\\psi$ is bounded on every set $[r,\\infty)$, with $r>0$.\n\nChoosing a function $\\psi\\in\\mathcal{B}$ arbitrarily, we consider the (generally, unbounded) operator $\\psi(J)$ defined on $X_{0}$ as the Borel function $\\psi$ of $J$. This operator is built with the help of the Spectral Theorem applied to the self-adjoint operator $J$. Let $[X_{0},X_{1}]_{\\psi}$ or, simply, $X_{\\psi}$ denote the domain of $\\psi(J)$ endowed with the inner product $(v_{1},v_{2})_{X_{\\psi}}:=(\\psi(J)v_{1},\\psi(J)v_{2})_{X_{0}}$\nand the corresponding norm $\\|v\\|_{X_{\\psi}}:=\\|\\psi(J)v\\|_{X_{0}}$. The linear space $X_{\\psi}$ is Hilbert and separable with respect to this norm.\n\nA function $\\psi\\in\\mathcal{B}$ is called an interpolation parameter if the following condition is satisfied for all admissible pairs $X=[X_{0},X_{1}]$ and $Y=[Y_{0},Y_{1}]$ of Hilbert spaces and for an arbitrary linear mapping $T$ given on $X_{0}$: if the restriction of $T$ to $X_{j}$ is a bounded operator $T:X_{j}\\rightarrow Y_{j}$ for each $j\\in\\{0,1\\}$, then the restriction of $T$ to\n$X_{\\psi}$ is also a bounded operator $T:X_{\\psi}\\rightarrow Y_{\\psi}$.\n\nIf $\\psi$ is an interpolation parameter, then we say that the Hilbert space $X_{\\psi}$ is obtained by the interpolation with the function parameter $\\psi$ of the pair $X=\\nobreak[X_{0},X_{1}]$ or, otherwise speaking, between the spaces $X_{0}$ and $X_{1}$. In this case, the dense and continuous embeddings $X_{1}\\hookrightarrow\nX_{\\psi}\\hookrightarrow X_{0}$ hold.\n\nThe class of all interpolation parameters (in the sense of the given definition) admits a constructive description. Namely, a function $\\psi\\in\\mathcal{B}$ is an interpolation parameter if and only if $\\psi$ is pseudoconcave in a neighbourhood of infinity. 
The latter property means that there exists a concave positive function $\\psi_{1}(r)$ of $r\\gg1$ such that both the functions $\\psi\/\\psi_{1}$ and $\\psi_{1}\/\\psi$ are bounded in some neighbourhood of infinity. This criterion follows from Peetre's description of all interpolation functions for the weighted Lebesgue spaces \\cite{Peetre66, Peetre68} (this result of Peetre is set forth in the monograph \\cite[Theorem 5.4.4]{BerghLefstrem76}). The proof of the criterion is given in \\cite[Section 1.1.9]{MikhailetsMurach14}.\n\nAn application of this criterion to power functions gives the classical result by Lions and S.~Krein. Namely, the function $\\psi(r)\\equiv r^{\\theta}$ is an interpolation parameter whenever $\\nobreak{0\\leq\\theta\\leq1}$. In this case, the exponent $\\theta$ serves as a numerical parameter of the interpolation, and the interpolation space $X_{\\psi}$ is also denoted by $X_{\\theta}$. This interpolation was used in formulas \\eqref{8f71} and \\eqref{8f72} in the special case of $\\theta=1\/2$.\n\nLet us formulate some general properties of interpolation with a function parameter; they will be used in our proofs. The first of these properties enables us to reduce the interpolation of subspaces to the interpolation of the whole spaces (see \\cite[Theorem~1.6]{MikhailetsMurach14} or \\cite[Section~1.17.1, Theorem~1]{Triebel95}). As usual, subspaces of normed spaces are assumed to be closed. Generally, we consider nonorthogonal projectors onto subspaces of a Hilbert space.\n\n\\begin{proposition}\\label{8prop1}\nLet $X=[X_{0},X_{1}]$ be an admissible pair of Hilbert spaces, and let $Y_{0}$ be a subspace of $X_{0}$. Then $Y_{1}:=X_{1}\\cap Y_{0}$ is a subspace of $X_{1}$. Suppose that there exists a linear mapping $P:X_{0}\\rightarrow X_{0}$ such that $P$ is a projector of the space $X_{j}$ onto its subspace $Y_{j}$ for each $j\\in\\{0,\\,1\\}$. 
Then the pair $[Y_{0},Y_{1}]$ is admissible, and $[Y_{0},Y_{1}]_{\\psi}=X_{\\psi}\\cap Y_{0}$ with equivalence of norms for an arbitrary interpolation parameter~$\\psi\\in\\mathcal{B}$. Here, $X_{\\psi}\\cap Y_{0}$ is a subspace of $X_{\\psi}$.\n\\end{proposition}\n\nThe second property reduces the interpolation of orthogonal sums of Hilbert spaces to the interpolation of their summands (see \\cite[Theorem~1.8]{MikhailetsMurach14}).\n\n\\begin{proposition}\\label{8prop2}\nLet $[X_{0}^{(j)},X_{1}^{(j)}]$, with $j=1,\\ldots,q$, be a finite collection of admissible pairs of Hilbert spaces. Then\n$$\n\\biggl[\\,\\bigoplus_{j=1}^{q}X_{0}^{(j)},\\,\n\\bigoplus_{j=1}^{q}X_{1}^{(j)}\\biggr]_{\\psi}=\\,\n\\bigoplus_{j=1}^{q}\\bigl[X_{0}^{(j)},\\,X_{1}^{(j)}\\bigr]_{\\psi}\n$$\nwith equality of norms for every function $\\psi\\in\\mathcal{B}$.\n\\end{proposition}\n\nThe third property shows that the interpolation with a function parameter is stable with respect to repeated application \\cite[Theorem~1.3]{MikhailetsMurach14}.\n\n\\begin{proposition}\\label{8prop3}\nLet $\\alpha,\\beta,\\psi\\in\\mathcal{B}$, and suppose that the function $\\alpha\/\\beta$ is bounded in a neighbourhood of infinity. Define the function $\\omega\\in\\mathcal{B}$ by the formula\n$\\omega(r):=\\alpha(r)\\psi(\\beta(r)\/\\alpha(r))$ for $r>0$. Then $\\omega\\in\\mathcal{B}$, and $[X_{\\alpha},X_{\\beta}]_{\\psi}=X_{\\omega}$ with equality of norms for every admissible pair $X$ of Hilbert spaces. Besides, if $\\alpha,\\beta,\\psi$ are interpolation parameters, then\n$\\omega$ is also an interpolation parameter.\n\\end{proposition}\n\nOur proof of Theorems \\ref{8th1} and \\ref{8th2} is based on the key fact that the interpolation with an appropriate function parameter between the marginal Sobolev spaces in \\eqref{8f5a} and \\eqref{8f5b} gives the intermediate H\\\"ormander spaces $H^{s,s\/2;\\varphi}(\\cdot)$ and $H^{s;\\varphi}(\\cdot)$ respectively. 
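To illustrate Proposition \\ref{8prop3}, consider, for instance, the power parameters $\\alpha(r):=r^{a}$, $\\beta(r):=r^{b}$, and $\\psi(r):=r^{\\theta}$, with $0\\leq a\\leq b\\leq1$ and $0\\leq\\theta\\leq1$ (this elementary example is given only for illustration and is not used below). The function $\\alpha\/\\beta$ is bounded in a neighbourhood of infinity, and\n\\begin{equation*}\n\\omega(r)=\\alpha(r)\\,\\psi\\bigl(\\beta(r)\/\\alpha(r)\\bigr)=\nr^{a}\\,r^{\\theta(b-a)}=r^{(1-\\theta)a+\\theta b}\n\\quad\\mbox{for every}\\quad r>0.\n\\end{equation*}\nThus, the equality $[X_{\\alpha},X_{\\beta}]_{\\psi}=X_{\\omega}$ becomes the classical reiteration property $[X_{a},X_{b}]_{\\theta}=X_{(1-\\theta)a+\\theta b}$ of the interpolation with power parameters. We now return to the key interpolation property of the H\\\"ormander spaces mentioned above.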
Let us formulate this property separately for isotropic and for anisotropic spaces.\n\n\\begin{proposition}\\label{8prop4}\nLet real numbers $s_{0}$, $s$, and $s_{1}$ satisfy the inequalities $s_{0}<s<s_{1}$, and let $\\varphi\\in\\mathcal{M}$. Define the function $\\psi\\in\\mathcal{B}$ by the formulas\n\\begin{equation}\\label{8f16}\n\\psi(r):=r^{(s-s_{0})\/(s_{1}-s_{0})}\\,\n\\varphi\\bigl(r^{1\/(s_{1}-s_{0})}\\bigr)\\;\\;\\mbox{if}\\;\\;r\\geq1\n\\quad\\mbox{and}\\quad\\psi(r):=1\\;\\;\\mbox{if}\\;\\;0<r<1.\n\\end{equation}\nThen $\\psi$ is an interpolation parameter, and\n\\begin{equation*}\n\\bigl[H^{s_{0}}(W),H^{s_{1}}(W)\\bigr]_{\\psi}=H^{s;\\varphi}(W)\n\\end{equation*}\nwith equivalence of norms. Here, $W$ stands for $\\mathbb{R}^{k}$, with $k\\geq1$, or for $\\Gamma$.\n\\end{proposition}\n\n\\begin{proposition}\\label{8prop5}\nLet real numbers $s_{0}$, $s$, and $s_{1}$ satisfy the inequalities $0\\leq s_{0}<s<s_{1}$, let $\\varphi\\in\\mathcal{M}$, and let $\\psi$ be the interpolation parameter \\eqref{8f16}. Then\n\\begin{equation*}\n\\bigl[H^{s_{0},s_{0}\/2}(W),H^{s_{1},s_{1}\/2}(W)\\bigr]_{\\psi}=\nH^{s,s\/2;\\varphi}(W)\n\\end{equation*}\nwith equivalence of norms. Here, $W$ stands for $\\mathbb{R}^{n}$, $\\Omega$, or $S$.\n\\end{proposition}\n\n\n\n\n\\section{Proofs of the main results}\\label{8sec6}\n\n\\begin{lemma}\\label{8lem1}\nThe mapping\n\\begin{equation}\\label{8f25}\nR:g\\mapsto\\bigl(g\\!\\upharpoonright\\!\\Gamma,\\,\n(\\partial_{t}g)\\!\\upharpoonright\\!\\Gamma,\\ldots,\n(\\partial^{r-1}_{t}g)\\!\\upharpoonright\\!\\Gamma\\bigr),\n\\quad\\mbox{with}\\quad g\\in C^{\\infty}(\\overline{S}),\n\\end{equation}\nextends uniquely (by continuity) to a bounded linear operator\n\\begin{equation}\\label{8f65}\nR:H^{s,s\/2;\\varphi}(S)\\to\\bigoplus_{k=0}^{r-1}\nH^{s-2k-1;\\varphi}(\\Gamma)=:\\mathbb{H}^{s;\\varphi}(\\Gamma)\n\\end{equation}\nfor every $s>2r-1$ and $\\varphi\\in\\mathcal{M}$. This operator is right invertible; moreover, there exists a linear mapping $T:(L_2(\\Gamma))^r\\to L_2(S)$ such that\nfor arbitrary $s>2r-1$ and $\\varphi\\in\\mathcal{M}$ the restriction of $T$ to the space $\\mathbb{H}^{s;\\varphi}(\\Gamma)$\nis a bounded linear operator\n\\begin{equation}\\label{8f43}\nT:\\mathbb{H}^{s;\\varphi}(\\Gamma)\\to H^{s,s\/2;\\varphi}(S)\n\\end{equation}\nand that $RTv=v$ for every $v\\in\\mathbb{H}^{s;\\varphi}(\\Gamma)$.\n\\end{lemma}\n\n\\begin{proof}\nWe first prove an analog of this lemma for H\\\"ormander spaces\ndefined on $\\mathbb{R}^{n}$ and $\\mathbb{R}^{n-1}$ instead of $S$ and $\\Gamma$. Then we deduce the lemma with the help of the special local charts on~$S$.\n\nConsider the linear mapping\n\\begin{equation}\\label{8f60}\nR_{0}:w\\mapsto\\bigl(w\\!\\mid_{t=0},\\,\n\\partial_{t}w\\!\\mid_{t=0},\\dots,\n\\partial^{r-1}_{t}w\\!\\mid_{t=0}\\bigr),\\quad\\mbox{with}\n\\quad w\\in C_{0}^{\\infty}(\\mathbb{R}^{n}).\n\\end{equation}\nHere, we interpret $w$ as a function $w(x,t)$ of $x\\in\\mathbb{R}^{n-1}$ and $t\\in\\mathbb{R}$ so that $R_{0}w\\in(C_{0}^{\\infty}(\\mathbb{R}^{n-1}))^{r}$.\nChoose $s>2r-1$ and $\\varphi\\in\\mathcal{M}$ arbitrarily, and prove that the mapping \\eqref{8f60} extends uniquely (by continuity) to a bounded linear operator\n\\begin{equation}\\label{8f63}\nR_{0}:H^{s,s\/2;\\varphi}(\\mathbb{R}^{n})\\rightarrow \\bigoplus_{k=0}^{r-1}\nH^{s-2k-1;\\varphi}(\\mathbb{R}^{n-1})=:\n\\mathbb{H}^{s;\\varphi}(\\mathbb{R}^{n-1}).\n\\end{equation}\nThis fact is known in the Sobolev case of $\\varphi\\equiv1$ due to \\cite[Chapter~II, Theorem 7]{Slobodetskii58}. 
Using the interpolation with a function parameter between Sobolev spaces, we can deduce this fact in the general situation of arbitrary $\\varphi\\in\\mathcal{M}$.\n\nNamely, choose $s_0,s_1\\in\\mathbb{R}$ such that $2r-1<s_{0}<s<s_{1}$, and let $\\psi$ be the interpolation parameter \\eqref{8f16}. In the Sobolev case just mentioned, the mapping \\eqref{8f60} extends uniquely (by continuity) to a bounded linear operator\n\\begin{equation*}\nR_{0}:H^{s_{j},s_{j}\/2}(\\mathbb{R}^{n})\\rightarrow\n\\mathbb{H}^{s_{j}}(\\mathbb{R}^{n-1}),\\quad\\mbox{with}\\quad j\\in\\{0,\\,1\\}.\n\\end{equation*}\nHere and below, $\\mathbb{H}^{\\sigma}(\\mathbb{R}^{n-1}):=\\mathbb{H}^{\\sigma;1}(\\mathbb{R}^{n-1})$. According to Propositions \\ref{8prop2}, \\ref{8prop4}, and \\ref{8prop5}, we have the equalities\n\\begin{equation}\\label{8f-intH}\n\\bigl[\\mathbb{H}^{s_{0}}(\\mathbb{R}^{n-1}),\n\\mathbb{H}^{s_{1}}(\\mathbb{R}^{n-1})\\bigr]_{\\psi}=\n\\mathbb{H}^{s;\\varphi}(\\mathbb{R}^{n-1})\n\\end{equation}\nand\n\\begin{equation}\\label{8f-intHH}\n\\bigl[H^{s_{0},s_{0}\/2}(\\mathbb{R}^{n}),\nH^{s_{1},s_{1}\/2}(\\mathbb{R}^{n})\\bigr]_{\\psi}=\nH^{s,s\/2;\\varphi}(\\mathbb{R}^{n})\n\\end{equation}\nup to equivalence of norms. Hence, the restriction of the operator $R_{0}$ to the space $H^{s,s\/2;\\varphi}(\\mathbb{R}^{n})$ is the required bounded operator \\eqref{8f63}.\n\nLet us now build a linear mapping\n\\begin{equation}\\label{8f58}\nT_{0}:\\bigl(L_{2}(\\mathbb{R}^{n-1})\\bigr)^{r}\\to L_{2}(\\mathbb{R}^{n})\n\\end{equation}\nsuch that the restriction of $T_{0}$ to the space $\\mathbb{H}^{s;\\varphi}(\\mathbb{R}^{n-1})$ for arbitrary $s>2r-1$ and $\\varphi\\in\\mathcal{M}$ is a bounded operator between the spaces $\\mathbb{H}^{s;\\varphi}(\\mathbb{R}^{n-1})$ and $H^{s,s\/2;\\varphi}(\\mathbb{R}^{n})$ and that this operator is right inverse to \\eqref{8f63}.\n\nSimilarly to H\\\"ormander \\cite[Proof of Theorem~2.5.7]{Hermander63} we define the linear mapping\n\\begin{equation}\\label{8f58-def}\nT_0:v\\mapsto F_{\\xi\\mapsto x}^{-1}\\biggl[\n\\beta\\bigl(\\langle\\xi\\rangle^{2}t\\bigr)\\,\\sum_{k=0}^{r-1}\n\\frac{1}{k!}\\,\\widehat{v_k}(\\xi)\\,t^k\\biggr](x,t)\n\\end{equation}\non the linear topological space of vectors\n$$\nv:=(v_0,\\dots,v_{r-1})\\in\\bigl(\\mathcal{S}'(\\mathbb{R}^{n-1})\\bigr)^r.\n$$\nWe consider $T_{0}v$ as a distribution on the Euclidean space $\\mathbb{R}^{n}$ of points $(x,t)$, with $x=(x_{1},\\ldots,x_{n-1})\\in\\mathbb{R}^{n-1}$ and $t\\in\\mathbb{R}$. In \\eqref{8f58-def}, the function $\\beta\\in C^{\\infty}_{0}(\\mathbb{R})$ is chosen so that $\\beta=1$ in a certain neighbourhood of zero. As usual, $F_{\\xi\\mapsto x}^{-1}$ denotes the inverse Fourier transform with respect to $\\xi=(\\xi_{1},\\ldots,\\xi_{n-1})\\in\\mathbb{R}^{n-1}$, and $\\langle\\xi\\rangle:=(1+|\\xi|^2)^{1\/2}$. The variable $\\xi$ is dual to $x$ relative to the direct Fourier transform $\\widehat{w}(\\xi)=(Fw)(\\xi)$ of a function $w(x)$.\n\nObviously, the mapping \\eqref{8f58-def} is well defined and acts continuously between $\\bigl(\\mathcal{S}'(\\mathbb{R}^{n-1})\\bigr)^r$ and $\\mathcal{S}'(\\mathbb{R}^{n})$. 
It is also evident that the restriction of this mapping to the space $(L_{2}(\\mathbb{R}^{n-1}))^r$ is a bounded operator between $(L_{2}(\\mathbb{R}^{n-1}))^r$ and $L_{2}(\\mathbb{R}^{n})$.\n\nWe assert that\n\\begin{equation}\\label{8f57}\nR_{0}T_{0}v=v \\quad\\mbox{for every}\\quad\nv\\in\\bigl(\\mathcal{S}(\\mathbb{R}^{n-1})\\bigr)^r.\n\\end{equation}\nHere, as usual, $\\mathcal{S}(\\mathbb{R}^{n-1})$ denotes the linear topological space of all rapidly decreasing infinitely smooth functions on $\\mathbb{R}^{n-1}$. Since $v\\in(\\mathcal{S}(\\mathbb{R}^{n-1}))^r$ implies $T_{0}v\\in\\mathcal{S}(\\mathbb{R}^{n})$, the left-hand side of the equality \\eqref{8f57} is well defined. Let us prove this equality.\n\nChoosing $j\\in\\{0,\\dots,r-1\\}$ and\n$v=(v_0,\\dots,v_{r-1})\\in(\\mathcal{S}(\\mathbb{R}^{n-1}))^r$\narbitrarily, we get\n\\begin{align*}\nF\\bigl[\\partial^j_tT_0v\\!\\mid_{t=0}\\bigr](\\xi)&=\n\\partial^j_{t}F_{x\\mapsto\\xi}[T_0v](\\xi,t)\\big|_{t=0}=\n\\partial^j_t\\biggl(\n\\beta\\bigl(\\langle\\xi\\rangle^{2}t\\bigr)\\,\\sum_{k=0}^{r-1}\n\\frac{1}{k!}\\,\\widehat{v_k}(\\xi)\\,t^k\\biggr)\\bigg|_{t=0}\\\\\n&=\\beta(0)\\biggl(\\partial^j_t\\sum_{k=0}^{r-1}\n\\frac{1}{k!}\\,\\widehat{v_k}(\\xi)\\,t^k\\biggr)\\!\n\\bigg|_{t=0}=\\beta(0)\\,j!\\,\\frac{1}{j!}\\,\\widehat{v_j}(\\xi)=\n\\widehat{v_j}(\\xi)\n\\end{align*}\nfor every $\\xi\\in\\mathbb{R}^{n-1}$. 
In the third and the last equalities, we have used the fact that $\\beta=1$ in a neighbourhood of zero (in particular, $\\beta(0)=1$).\nThus, the Fourier transforms of all components of the vectors $R_{0}T_{0}v$ and $v$ coincide, which is equivalent to \\eqref{8f57}.\n\nLet us now prove that the restriction of the mapping \\eqref{8f58-def} to each space\n\\begin{equation}\\label{8f-doubleSobolev}\n\\mathbb{H}^{2m}(\\mathbb{R}^{n-1})=\n\\bigoplus_{k=0}^{r-1}H^{2m-2k-1}(\\mathbb{R}^{n-1})\n\\end{equation}\nwith $0\\leq m\\in\\mathbb{Z}$ is a bounded operator between $\\mathbb{H}^{2m}(\\mathbb{R}^{n-1})$ and $H^{2m,m}(\\mathbb{R}^{n})$.\nNote that the integers $2m-2k-1$ may be negative in \\eqref{8f-doubleSobolev}.\n\nLet $m\\geq0$ be an integer. We make use of the fact that the norm in the space $H^{2m,m}(\\mathbb{R}^{n})$ is equivalent to the norm\n\\begin{equation*}\n\\|w\\|_{2m,m}:=\\biggl(\\|w\\|^2+\n\\sum_{j=1}^{n-1}\\|\\partial_{x_j}^{2m}w\\|^2+\n\\|\\partial_{t}^{m}w\\|^2\\biggr)^{1\/2}\n\\end{equation*}\n(see, e.g., \\cite[Section~9.1]{BesovIlinNikolskii75}). Here and below in this proof, $\\|\\cdot\\|$ stands for the norm in the Hilbert space $L_2(\\mathbb{R}^{n})$. Of course, $\\partial_{x_j}$ and $\\partial_{t}$ denote the operators of taking the generalized partial derivatives with respect to $x_j$ and $t$ respectively. 
Choosing $v=(v_0,\\dots,v_{r-1})\\in(\\mathcal{S}(\\mathbb{R}^{n-1}))^r$ arbitrarily and using the Parseval equality together with the Cauchy--Bunyakovsky inequality, we obtain the following:\n\\begin{align*}\n\\|T_0v\\|^2_{2m,m}&=\\|T_0v\\|^2+\n\\sum_{j=1}^{n-1}\\|\\partial_{x_j}^{2m}\\,T_0v\\|^2+\n\\|\\partial_{t}^{m}\\,T_0v\\|^2\\\\\n&=\\|\\widehat{T_0v}\\|^2+\n\\sum_{j=1}^{n-1}\\|\\xi_j^{2m}\\,\\widehat{T_0v}\\|^2+\n\\|\\partial_{t}^{m}\\,\\widehat{T_0v}\\|^2\\\\\n&\\leq c_{0}\\sum_{k=0}^{r-1}\\frac{1}{k!}\\,\n\\int\\limits_{\\mathbb{R}^{n}}\n\\bigl|\\beta(\\langle\\xi\\rangle^{2}t)\\,\\widehat{v_k}(\\xi)\\,t^k\\bigr|^2\nd\\xi dt\\\\\n&+c_{0}\\sum_{j=1}^{n-1}\\sum_{k=0}^{r-1}\\frac{1}{k!}\\,\n\\int\\limits_{\\mathbb{R}^{n}}\n\\bigl|\\xi_j^{2m}\\,\\beta(\\langle\\xi\\rangle^{2}t)\\,\\widehat{v_k}(\\xi)\\,\nt^k\\bigr|^2d\\xi dt\\\\\n&+c_{0}\\sum_{k=0}^{r-1}\\frac{1}{k!}\\,\n\\int\\limits_{\\mathbb{R}^{n}}\n\\bigl|\\partial_{t}^{m}\\bigl(\\beta(\\langle\\xi\\rangle^{2}t)\\,t^k\\bigr)\\,\n\\widehat{v_k}(\\xi)\\bigr|^2d\\xi dt,\n\\end{align*}\nwhere $c_{0}:=\\sum_{k=0}^{r-1}1\/k!$.\n\nLet us estimate each of these three integrals separately. We begin with the third integral. 
Making the change of variable $\\tau=\\langle\\xi\\rangle^{2}t$ in the inner integral with respect to $t$, we get the equalities\n\\begin{align*}\n\\int\\limits_{\\mathbb{R}^{n}}\n\\bigl|\\partial_{t}^{m}\\bigl(\\beta(\\langle\\xi\\rangle^{2}t)\\,t^k\\bigr)\\,\n\\widehat{v_k}(\\xi)\\bigr|^2d\\xi dt\n&=\\int\\limits_{\\mathbb{R}^{n-1}}\n|\\widehat{v_k}(\\xi)|^2d\\xi \\int\\limits_{\\mathbb{R}}\n|\\partial_{t}^{m}(\\beta(\\langle\\xi\\rangle^{2}t)t^k)|^2 dt\\\\\n&=\\int\\limits_{\\mathbb{R}^{n-1}}\n\\langle\\xi\\rangle^{4m-4k-2}\\,|\\widehat{v_k}(\\xi)|^2 d\\xi \\int\\limits_{\\mathbb{R}}\n|\\partial_{\\tau}^{m}(\\beta(\\tau)\\tau^{k})|^2 d\\tau.\n\\end{align*}\nHence,\n$$\n\\int\\limits_{\\mathbb{R}^{n}}\n\\bigl|\\partial_{t}^{m}\\bigl(\\beta(\\langle\\xi\\rangle^{2}t)\\,t^k\\bigr)\\,\n\\widehat{v_k}(\\xi)\\bigr|^2d\\xi dt=\nc_1\\,\\|v_k\\|^2_{H^{2m-2k-1}(\\mathbb{R}^{n-1})},\n$$\nwith\n$$\nc_1:=\\int\\limits_{\\mathbb{R}}\n|\\partial_{\\tau}^{m}(\\beta(\\tau)\\tau^{k})|^2 d\\tau<\\infty.\n$$\n\nUsing the same change of variable in the second integral, we obtain the following:\n\\begin{align*}\n\\int\\limits_{\\mathbb{R}^{n}}\n\\bigl|\\xi_j^{2m}\\,\\beta(\\langle\\xi\\rangle^{2}t)\\,\\widehat{v_k}(\\xi)\\,\nt^k\\bigr|^2d\\xi dt\n&=\\int\\limits_{\\mathbb{R}^{n-1}}\n|\\xi_j|^{4m}|\\widehat{v_k}(\\xi)|^2d\\xi \\int\\limits_{\\mathbb{R}}\n|t^{k}\\,\\beta(\\langle\\xi\\rangle^{2}t)|^2dt\\\\\n&=\\int\\limits_{\\mathbb{R}^{n-1}}\n|\\xi_j|^{4m}\\langle\\xi\\rangle^{-4k-2}\\,|\\widehat{v_k}(\\xi)|^2d\\xi \\int\\limits_{\\mathbb{R}}|\\tau^{k}\\beta(\\tau)|^2d\\tau\\\\\n&\\leq\\int\\limits_{\\mathbb{R}^{n-1}}\n\\langle\\xi\\rangle^{4m-4k-2}\\,|\\widehat{v_k}(\\xi)|^2d\\xi\n\\int\\limits_{\\mathbb{R}}|\\tau^{k}\\beta(\\tau)|^2d\\tau.\n\\end{align*}\nHence,\n$$\n\\int\\limits_{\\mathbb{R}^{n}}\n\\bigl|\\xi_j^{2m}\\,\\beta(\\langle\\xi\\rangle^{2}t)\\,\\widehat{v_k}(\\xi)\\,\nt^k\\bigr|^2d\\xi dt\n\\leq 
c_2\\,\\|v_k\\|^2_{H^{2m-2k-1}(\\mathbb{R}^{n-1})},\n$$\nwith\n$$\nc_2:=\\int\\limits_{\\mathbb{R}}|\\tau^{k}\\beta(\\tau)|^2d\\tau<\\infty.\n$$\n\nFinally, replacing the symbol $\\xi_j$ with $1$ in the previous reasoning,\nwe obtain the following estimate for the first integral:\n$$\n\\int\\limits_{\\mathbb{R}^{n}}\n\\bigl|\\beta(\\langle\\xi\\rangle^{2}t)\\,\\widehat{v_k}(\\xi)\\,\nt^k\\bigr|^2d\\xi dt\n\\leq c_2\\,\\|v_k\\|^2_{H^{-2k-1}(\\mathbb{R}^{n-1})}\n\\leq c_2\\,\\|v_k\\|^2_{H^{2m-2k-1}(\\mathbb{R}^{n-1})}.\n$$\n\nThus, we conclude that\n\\begin{equation*}\n\\|T_0v\\|_{H^{2m,m}(\\mathbb{R}^{n})}^{2}\\leq c\\,\\sum_{k=0}^{r-1}\n\\|v_k\\|^2_{H^{2m-2k-1}(\\mathbb{R}^{n-1})}=\nc\\,\\|v\\|_{\\mathbb{H}^{2m}(\\mathbb{R}^{n-1})}^{2}\n\\end{equation*}\nfor any $v\\in(\\mathcal{S}(\\mathbb{R}^{n-1}))^r$, with the number $c>0$ being independent of $v$. Since the set $\\bigl(\\mathcal{S}(\\mathbb{R}^{n-1})\\bigr)^r$ is dense in $\\mathbb{H}^{2m}(\\mathbb{R}^{n-1})$, it follows from the latter estimate that the mapping \\eqref{8f58-def} defines a bounded linear operator\n\\begin{equation*}\nT_0:\\mathbb{H}^{2m}(\\mathbb{R}^{n-1})\\to H^{2m,m}(\\mathbb{R}^{n})\n\\quad\\mbox{whenever}\\quad 0\\leq m\\in\\mathbb{Z}.\n\\end{equation*}\n\nLet us deduce from this fact that the mapping \\eqref{8f58-def} acts continuously between the spaces $\\mathbb{H}^{s;\\varphi}(\\mathbb{R}^{n-1})$ and $H^{s,s\/2;\\varphi}(\\mathbb{R}^{n})$ for every $s>2r-1$ and $\\varphi\\in\\mathcal{M}$. Put $s_0=0$, choose an even integer $s_1>s$, and consider the bounded linear operators\n\\begin{equation}\\label{8f66}\nT_{0}:\\mathbb{H}^{s_j}(\\mathbb{R}^{n-1})\\to H^{s_j,s_j\/2}(\\mathbb{R}^{n}),\n\\quad\\mbox{with}\\quad j\\in\\{0,1\\}.\n\\end{equation}\nLet, as above, $\\psi$ be the interpolation parameter \\eqref{8f16}. 
Then the restriction of the mapping \\eqref{8f66} with $j=0$ to the space\n\\begin{equation*}\n\\bigl[\\mathbb{H}^{s_0}(\\mathbb{R}^{n-1}),\n\\mathbb{H}^{s_1}(\\mathbb{R}^{n-1})\\bigr]_{\\psi}=\n\\mathbb{H}^{s;\\varphi}(\\mathbb{R}^{n-1})\n\\end{equation*}\nis a bounded operator\n\\begin{equation}\\label{8f48}\nT_0:\\mathbb{H}^{s;\\varphi}(\\mathbb{R}^{n-1})\\to\nH^{s,s\/2;\\varphi}(\\mathbb{R}^{n}).\n\\end{equation}\nHere, we have used formulas \\eqref{8f-intH} and \\eqref{8f-intHH}, which remain true for the considered $s_{0}$ and $s_{1}$.\n\nNow the equality \\eqref{8f57} extends by continuity to all vectors\n$v\\in\\mathbb{H}^{s;\\varphi}(\\mathbb{R}^{n-1})$. Hence, the operator \\eqref{8f48} is right inverse to \\eqref{8f63}. Thus, the required mapping \\eqref{8f58} is built.\n\nWe need to introduce analogs of the operators \\eqref{8f63} and \\eqref{8f48} for the strip\n$$\n\\Pi:=\\bigl\\{(x,t):x\\in\\mathbb{R}^{n-1},\\;0<t<\\tau\\bigr\\}.\n$$\nLet $s>2r-1$ and $\\varphi\\in\\mathcal{M}$. Given $u\\in H^{s,s\/2;\\varphi}(\\Pi)$, we put\n$R_{1}u:=R_{0}w$, where a function\n$w\\in H^{s,s\/2;\\varphi}(\\mathbb{R}^{n})$ satisfies the condition\n$w\\!\\upharpoonright\\!\\Pi=u$. Evidently, this definition does not depend on the choice of $w$. The linear mapping $u\\mapsto R_{1}u$ is a bounded operator\n\\begin{equation}\\label{8f67}\nR_{1}:H^{s,s\/2;\\varphi}(\\Pi)\\to\\mathbb{H}^{s;\\varphi}(\\mathbb{R}^{n-1}).\n\\end{equation}\nThis follows immediately from the boundedness of the operator \\eqref{8f63} and from the definition of the norm in $H^{s,s\/2;\\varphi}(\\Pi)$.\n\nLet us introduce a right inverse of \\eqref{8f67} on the basis of the mapping \\eqref{8f58-def}. We put $T_{1}v:=(T_0v)\\!\\upharpoonright\\!\\Pi$ for arbitrary $v\\in(L_{2}(\\mathbb{R}^{n-1}))^{r}$. 
The restriction of the linear mapping $v\\mapsto T_{1}v$ to the space $\\mathbb{H}^{s;\\varphi}(\\mathbb{R}^{n-1})$ is a bounded operator\n\\begin{equation}\\label{8f51}\nT_1:\\mathbb{H}^{s;\\varphi}(\\mathbb{R}^{n-1})\\to H^{s,s\/2;\\varphi}(\\Pi).\n\\end{equation}\nThis follows directly from the boundedness of the operator \\eqref{8f48}. Observe that\n$$\nR_1T_1v=R_1\\bigl((T_0v)\\!\\upharpoonright\\!\\Pi\\bigr)=R_0T_0v=v\n\\quad\\mbox{for every}\\quad v\\in\\mathbb{H}^{s;\\varphi}(\\mathbb{R}^{n-1}).\n$$\nThus, the operator \\eqref{8f51} is right inverse to \\eqref{8f67}.\n\nUsing operators \\eqref{8f67} and \\eqref{8f51}, we can now prove our lemma with the help of the special local charts \\eqref{8f-local} on $S$. As above, let $s>2r-1$ and $\\varphi\\in\\mathcal{M}$. Choosing $k\\in\\{0,\\dots,r-1\\}$ and $g\\in C^{\\infty}(\\overline{S})$ arbitrarily, we get the following:\n\\begin{align*}\n\\|\\partial^{k}_{t}g\\!\\upharpoonright\\!\\Gamma\\|_\n{H^{s-2k-1;\\varphi}(\\Gamma)}^{2}&=\n\\sum_{j=1}^{\\lambda}\n\\|\\bigl(\\chi_{j}(\\partial^{k}_{t}g\\!\\upharpoonright\\!\\Gamma)\\bigr)\n\\circ\\theta_{j}\\|_{H^{s-2k-1;\\varphi}(\\mathbb{R}^{n-1})}^{2}\\\\\n&=\\sum_{j=1}^{\\lambda}\n\\|\\bigl(\\partial^{k}_{t}\\bigl((\\chi_{j}\\,g)\\circ\\theta^{\\ast}_{j}\\bigr)\\bigr)\n\\!\\upharpoonright\\!\\mathbb{R}^{n-1}\\|_\n{H^{s-2k-1;\\varphi}(\\mathbb{R}^{n-1})}^{2}\\\\\n&\\leq c^{2}\\,\\sum_{j=1}^{\\lambda}\n\\|(\\chi_{j}\\,g)\\circ\\theta^{\\ast}_{j}\\|_{H^{s,s\/2;\\varphi}(\\Pi)}^{2}=\nc^{2}\\,\\|g\\|_{H^{s,s\/2;\\varphi}(S)}^{2}.\n\\end{align*}\nHere, $c$ denotes the norm of the bounded operator \\eqref{8f67}, and, as usual, the symbol ``$\\circ$'' designates a composition of functions. Recall that $\\{\\theta_{j}\\}$ is a collection of local charts on $\\Gamma$ and that $\\{\\chi_{j}\\}$ is an infinitely smooth partition of unity on $\\Gamma$. 
Thus,\n\\begin{equation*}\n\\|Rg\\|_{\\mathbb{H}^{s;\\varphi}(\\Gamma)}\\leq c\\,\\sqrt{r}\\,\\|g\\|_{H^{s,s\/2;\\varphi}(S)}\n\\quad\\mbox{for every}\\quad g\\in C^{\\infty}(\\overline{S}).\n\\end{equation*}\nThis implies that the mapping \\eqref{8f25} extends by continuity to the bounded linear operator \\eqref{8f65}.\n\nLet us build the linear mapping $T:(L_2(\\Gamma))^r\\to L_2(S)$ whose restriction to $\\mathbb{H}^{s;\\varphi}(\\Gamma)$ is a right inverse of \\eqref{8f65}. Consider the linear flattening mapping for $\\Gamma$\n\\begin{equation*}\nL:v\\mapsto\\bigl((\\chi_{1}v)\\circ\\theta_{1},\\ldots,\n(\\chi_{\\lambda}v)\\circ\\theta_{\\lambda}\\bigr),\n\\quad\\mbox{with}\\quad v\\in L_2(\\Gamma).\n\\end{equation*}\nIts restriction to $H^{\\sigma;\\varphi}(\\Gamma)$ is an isometric operator\n\\begin{equation}\\label{8f52}\nL:H^{\\sigma;\\varphi}(\\Gamma)\\rightarrow\n\\bigl(H^{\\sigma;\\varphi}(\\mathbb{R}^{n-1})\\bigr)^{\\lambda}\n\\quad\\mbox{whenever}\\quad\\sigma>0.\n\\end{equation}\nBesides, consider the linear sewing mapping for $\\Gamma$\n\\begin{equation*}\nK:(h_{1},\\ldots,h_{\\lambda})\\mapsto\\sum_{j=1}^{\\lambda}\\,\nO_{j}\\bigl((\\eta_{j}h_{j})\\circ\\theta_{j}^{-1}\\bigr),\n\\quad\\mbox{with}\\quad h_{1},\\ldots,h_{\\lambda}\\in L_2(\\mathbb{R}^{n-1}).\n\\end{equation*}\nHere, each function $\\eta_{j}\\in\nC_{0}^{\\infty}(\\mathbb{R}^{n-1})$ is chosen so that $\\eta_{j}=1$ on the\nset $\\theta^{-1}_{j}(\\mathrm{supp}\\,\\chi_{j})$, whereas $O_{j}$\ndenotes the operator of the extension by zero to $\\Gamma$ of a function given on $\\Gamma_j$. 
The restriction of this mapping to $(H^{\\sigma;\\varphi}(\\mathbb{R}^{n-1}))^{\\lambda}$ is a bounded operator\n\\begin{equation*}\nK:\\bigl(H^{\\sigma;\\varphi}(\\mathbb{R}^{n-1})\\bigr)^{\\lambda}\\to\nH^{\\sigma;\\varphi}(\\Gamma)\\quad\\mbox{whenever}\\quad\\sigma>0,\n\\end{equation*}\nand this operator is left inverse to \\eqref{8f52} (see \\cite[the proof of Theorem~2.2]{MikhailetsMurach14}).\n\nThe mapping $K$ induces the operator $K_{1}$ of the sewing of the manifold $S=\\Gamma\\times(0,\\tau)$ by the formula\n\\begin{equation*}\n\\bigl(K_1(g_1,\\dots,g_\\lambda)\\bigr)(x,t):=\n\\bigl(K(g_1(\\cdot,t),\\ldots,g_\\lambda(\\cdot,t))\\bigr)(x)\n\\end{equation*}\nfor arbitrary functions $g_1,\\dots,g_\\lambda\\in L_2(\\Pi)$ and almost all $x\\in\\Gamma$ and $t\\in(0,\\tau)$. The restriction of the mapping $K_{1}$ to $(H^{\\sigma,\\sigma\/2;\\varphi}(\\Pi))^{\\lambda}$ is a bounded operator\n\\begin{equation}\\label{8f53}\nK_{1}:(H^{\\sigma,\\sigma\/2;\\varphi}(\\Pi))^{\\lambda}\\to\nH^{\\sigma,\\sigma\/2;\\varphi}(S)\\quad\\mbox{whenever}\\quad\\sigma>0\n\\end{equation}\n(see \\cite[the proof of Theorem~2]{Los16JMathSci}).\n\nGiven $v:=(v_0,v_1,\\dots,v_{r-1})\\in(L_{2}(\\Gamma))^{r}$, we set\n\\begin{equation*}\nTv:=K_1\\bigl(T_1(v_{0,1},\\ldots,v_{r-1,1}),\\ldots,\nT_1(v_{0,\\lambda},\\ldots,v_{r-1,\\lambda})\\bigr),\n\\end{equation*}\nwhere\n$$\n(v_{k,1},\\ldots,v_{k,\\lambda}):=\nLv_{k}\\in(L_{2}(\\mathbb{R}^{n-1}))^{\\lambda}\n$$\nfor each integer $k\\in\\{0,\\ldots,r-1\\}$. The linear mapping $v\\mapsto Tv$ acts continuously between $(L_{2}(\\Gamma))^{r}$ and $L_{2}(S)$, which follows directly from the definitions of $L$, $T_1$, and~$K_1$.\nThe restriction of this mapping to $\\mathbb{H}^{s;\\varphi}(\\Gamma)$ is the bounded operator \\eqref{8f43}. This follows immediately from the boundedness of the operators \\eqref{8f51}, \\eqref{8f52}, and \\eqref{8f53}. The operator \\eqref{8f43} is right inverse to \\eqref{8f65}. 
Indeed, choosing a vector $v=(v_0,v_1,\\dots,v_{r-1})\\in\\mathbb{H}^{s;\\varphi}(\\Gamma)$ arbitrarily, we obtain the following equalities:\n\\begin{align*}\n(RTv)_k&=\\bigl(RK_1\\bigl(T_1(v_{0,1},\\ldots,v_{r-1,1}),\\ldots,\nT_1(v_{0,\\lambda},\\ldots,v_{r-1,\\lambda})\\bigr)\\bigr)_k\\\\\n&=K\\bigl(\\bigl(R_1T_1(v_{0,1},\\ldots,v_{r-1,1})\\bigr)_k,\\ldots,\n\\bigl(R_1T_1(v_{0,\\lambda},\\ldots,v_{r-1,\\lambda})\\bigr)_k\\bigr)\\\\\n&=K(v_{k,1},\\dots,v_{k,\\lambda})=KLv_k=v_k.\n\\end{align*}\nHere, the index $k$ runs over the set $\\{0,\\dots,r-1\\}$ and denotes the $k$-th component of a vector. Hence, $RTv=v$.\n\\end{proof}\n\nUsing this lemma, we will now prove a version of Proposition \\ref{8prop5} for the target spaces of the isomorphisms \\eqref{8f8} and \\eqref{8f12}. Note that the numbers of the compatibility conditions \\eqref{8f10} and \\eqref{8f14} are constant on the intervals\n$$\nJ_{0,1}:=(2,\\,7\/2),\\quad J_{0,r}:=(2r-1\/2,\\,2r+3\/2),\\;\\;\\mbox{with}\\;\\; 2\\leq r\\in\\mathbb{Z},\n$$\nand\n$$\nJ_{1,0}:=(2,5\/2),\\quad J_{1,r}:=(2r+1\/2,\\,2r+5\/2),\\;\\;\\mbox{with}\\;\\; 1\\leq r\\in\\mathbb{Z},\n$$\nof the variation of $s$, respectively. Namely, if $s$ ranges over some $J_{l,r}$, then this number equals $r$.\n\n\\begin{lemma}\\label{8lem2}\nLet $l\\in\\{0,\\,1\\}$ and $1\\leq r\\in\\mathbb{Z}$. Suppose that real numbers $s_0,s,s_1\\in J_{l,r}$ satisfy the inequality $s_0<s<s_1$, let $\\varphi\\in\\mathcal{M}$, and let $\\psi$ be the interpolation parameter \\eqref{8f16}. Then\n\\begin{equation*}\n\\bigl[\\mathcal{Q}_{l}^{s_{0}-2,s_{0}\/2-1},\\,\n\\mathcal{Q}_{l}^{s_{1}-2,s_{1}\/2-1}\\bigr]_{\\psi}=\n\\mathcal{Q}_{l}^{s-2,s\/2-1;\\varphi}\n\\end{equation*}\nwith equivalence of norms.\n\\end{lemma}\n\nWe are now in a position to deduce Theorems \\ref{8th1} and \\ref{8th2} from their Sobolev case. Let $s>2$, $\\varphi\\in\\mathcal{M}$, and $l\\in\\{0,\\,1\\}$. If $l=0$ [or $l=1$], then our reasoning relates to Theorem \\ref{8th1} [or Theorem \\ref{8th2}]. We first consider the case where $s\\notin E_{l}$. Then $s\\in J_{l,r}$ for a certain integer $r$. 
Choose numbers $s_0,s_1\\in J_{l,r}$ such that $s_0<s<s_1$ and that neither $s_{j}$ nor $s_{j}\/2$ is half-integer for each $j\\in\\{0,\\,1\\}$, and let $\\psi$ be the interpolation parameter \\eqref{8f16}. Owing to the Sobolev case of Theorems \\ref{8th1} and \\ref{8th2}, the mapping \\eqref{8fmap-smooth} extends uniquely (by continuity) to an isomorphism\n\\begin{equation*}\n\\Lambda_l:H^{s_j,s_j\/2}(\\Omega)\\leftrightarrow\n\\mathcal{Q}_l^{s_j-2,s_j\/2-1}\\quad\\mbox{for each}\\quad j\\in\\{0,\\,1\\}.\n\\end{equation*}\nApplying the interpolation with the function parameter $\\psi$ to this pair of isomorphisms and using Proposition~\\ref{8prop5} and Lemma~\\ref{8lem2}, we conclude that the mapping \\eqref{8fmap-smooth} extends uniquely (by continuity) to an isomorphism\n\\begin{equation*}\n\\Lambda_l:H^{s,s\/2;\\varphi}(\\Omega)\\leftrightarrow\n\\mathcal{Q}_l^{s-2,s\/2-1;\\varphi},\n\\end{equation*}\ni.e. to the isomorphism \\eqref{8f8} in the $l=0$ case and to the isomorphism \\eqref{8f12} in the $l=1$ case.\n\nLet us now consider the case where $s\\in E_{l}$. Choose a sufficiently small number $\\varepsilon>0$ such that $s\\pm\\varepsilon\\notin E_{l}$ and $s-\\varepsilon>2$. Owing to the case considered above, we have the isomorphisms\n\\begin{equation*}\n\\Lambda_l:H^{s\\pm\\varepsilon,(s\\pm\\varepsilon)\/2;\\varphi}(\\Omega)\\leftrightarrow\n\\mathcal{Q}_l^{s\\pm\\varepsilon-2,(s\\pm\\varepsilon)\/2-1;\\varphi}.\n\\end{equation*}\nThey imply that the mapping \\eqref{8fmap-smooth} extends uniquely (by continuity) to an isomorphism\n\\begin{equation*}\n\\begin{aligned}\n\\Lambda_l:&\n\\bigl[H^{s-\\varepsilon,(s-\\varepsilon)\/2;\\varphi}(\\Omega),\nH^{s+\\varepsilon,(s+\\varepsilon)\/2;\\varphi}(\\Omega)\\bigr]_{1\/2}\\\\\n&\\leftrightarrow\n\\bigl[\\mathcal{Q}_l^{s-\\varepsilon-2,(s-\\varepsilon)\/2-1;\\varphi},\n\\mathcal{Q}_l^{s+\\varepsilon-2,(s+\\varepsilon)\/2-1;\\varphi}\\bigr]_{1\/2}=\n\\mathcal{Q}_l^{s-2,s\/2-1;\\varphi}.\n\\end{aligned}\n\\end{equation*}\nRecall that the last equality is the definition of the space $\\mathcal{Q}_l^{s-2,s\/2-1;\\varphi}$.\n\nIt remains to prove that\n\\begin{equation}\\label{8f73}\nH^{s,s\/2;\\varphi}(\\Omega)=\n\\bigl[H^{s-\\varepsilon,(s-\\varepsilon)\/2;\\varphi}(\\Omega),\nH^{s+\\varepsilon,(s+\\varepsilon)\/2;\\varphi}(\\Omega)\\bigr]_{1\/2}\n\\end{equation}\nup to equivalence of norms. We reduce the interpolation of H\\\"ormander spaces to an interpolation of Sobolev spaces with the help of Proposition~\\ref{8prop3}. 
Let us choose real $\\delta>0$ such that $s-\\varepsilon-\\delta>0$.\nAccording to Proposition~\\ref{8prop5} we have the equalities\n\\begin{equation*}\nH^{s-\\varepsilon,(s-\\varepsilon)\/2;\\varphi}(\\Omega)=\n\\bigl[H^{s-\\varepsilon-\\delta,(s-\\varepsilon-\\delta)\/2}(\\Omega),\nH^{s+\\varepsilon+\\delta,(s+\\varepsilon+\\delta)\/2}(\\Omega)\\bigr]_{\\alpha}\n\\end{equation*}\nand\n\\begin{equation*}\nH^{s+\\varepsilon,(s+\\varepsilon)\/2;\\varphi}(\\Omega)=\n\\bigl[H^{s-\\varepsilon-\\delta,(s-\\varepsilon-\\delta)\/2}(\\Omega),\nH^{s+\\varepsilon+\\delta,(s+\\varepsilon+\\delta)\/2}(\\Omega)\\bigr]_{\\beta}.\n\\end{equation*}\nHere, the interpolation parameters $\\alpha$ and $\\beta$\nare defined by the formulas\n\\begin{equation*}\n\\alpha(r):=r^{\\delta\/(2\\varepsilon+2\\delta)}\\varphi(r^{1\/(2\\varepsilon+2\\delta)}),\n\\quad\n\\beta(r):=r^{(2\\varepsilon+\\delta)\/(2\\varepsilon+2\\delta)}\\varphi(r^{1\/(2\\varepsilon+2\\delta)})\n\\quad\\mbox{if}\\quad r\\geq1\n\\end{equation*}\nand $\\alpha(r)=\\beta(r):=1$ if $0<r<1$.
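The computation that finishes the proof of \\eqref{8f73} can be sketched as follows. Applying Proposition \\ref{8prop3} with $\\psi(r):=r^{1\/2}$ to the parameters $\\alpha$ and $\\beta$, we obtain $[X_{\\alpha},X_{\\beta}]_{1\/2}=X_{\\omega}$ for every admissible pair $X$ of Hilbert spaces, where\n\\begin{equation*}\n\\omega(r)=\\alpha(r)\\,\\bigl(\\beta(r)\/\\alpha(r)\\bigr)^{1\/2}=\nr^{(\\varepsilon+\\delta)\/(2\\varepsilon+2\\delta)}\\,\n\\varphi\\bigl(r^{1\/(2\\varepsilon+2\\delta)}\\bigr)=\nr^{1\/2}\\,\\varphi\\bigl(r^{1\/(2\\varepsilon+2\\delta)}\\bigr)\n\\quad\\mbox{if}\\quad r\\geq1\n\\end{equation*}\nand $\\omega(r)=1$ if $0<r<1$. According to Proposition \\ref{8prop5}, this $\\omega$ is precisely the interpolation parameter that corresponds to the numbers $s-\\varepsilon-\\delta<s<s+\\varepsilon+\\delta$ and to the function $\\varphi$. Hence, taking $X:=[H^{s-\\varepsilon-\\delta,(s-\\varepsilon-\\delta)\/2}(\\Omega),\\,H^{s+\\varepsilon+\\delta,(s+\\varepsilon+\\delta)\/2}(\\Omega)]$, we arrive at\n\\begin{equation*}\n\\bigl[H^{s-\\varepsilon,(s-\\varepsilon)\/2;\\varphi}(\\Omega),\nH^{s+\\varepsilon,(s+\\varepsilon)\/2;\\varphi}(\\Omega)\\bigr]_{1\/2}=\n\\bigl[X_{\\alpha},X_{\\beta}\\bigr]_{1\/2}=X_{\\omega}=\nH^{s,s\/2;\\varphi}(\\Omega),\n\\end{equation*}\nwhich is the required equality \\eqref{8f73}.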