diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzoqrz" "b/data_all_eng_slimpj/shuffled/split2/finalzzoqrz" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzoqrz" @@ -0,0 +1,5 @@ +{"text":"\\section{Some recent development}\n\\subsection{Background}\n\nThe object of kinetic theory is the modeling of particles by a distribution function in the phase space, which is denoted by $F(t,x,v)$ for $(t,x,v) \\in [0, \\infty) \\times {\\Omega} \\times \\mathbb{R}^{3}$ where $\\Omega$ is an open bounded subset of $\\mathbb{R}^{3}$. Dynamics and collision processes of dilute charged particles with an electric field $E$ can be modeled by the (two-species) Vlasov-Poisson-Boltzmann equation\n\\begin{equation} \\label{2FVPB}\n\\begin{split}\n\\partial_t F_+ + v \\cdot \\nabla_x F_+ +E \\cdot \\nabla_v F_+ = Q(F_+,F_+) + Q(F_+,F_- ),\n\\\\ \\partial_t F_- + v \\cdot \\nabla_x F_- -E \\cdot \\nabla_v F_- = Q(F_-,F_+) + Q(F_-,F_- ).\n\\end{split} \\end{equation}\nHere $F_\\pm(t,x,v) \\ge 0 $ are the density functions for the ions $(+)$ and electrons $(-)$ respectively.\n\n\nThe collision operator measures ``the change rate'' in binary hard sphere collisions and takes the form of\n\\begin{equation} \\begin{split}\\label{Q}\nQ(F_{1},F_{2}) (v)&: = Q_\\mathrm{gain}(F_1,F_2)-Q_\\mathrm{loss}(F_1,F_2)\\\\\n&: =\\int_{\\mathbb{R}^3} \\int_{\\S^2} \n|(v-u) \\cdot \\omega| [F_1 (u^\\prime) F_2 (v^\\prime) - F_1 (u) F_2 (v)]\n \\mathrm{d} \\omega \\mathrm{d} u,\n\\end{split}\\end{equation} \nwhere $u^\\prime = u - [(u-v) \\cdot \\omega] \\omega$ and $v^\\prime = v + [(u-v) \\cdot \\omega] \\omega$. The collision operator enjoys a collision invariance: for any measurable $G$, $\n\\int_{\\mathbb{R}^{3}} \\begin{bmatrix}1 & v & \\frac{|v|^{2}-3}{2}\\end{bmatrix} Q(G,G) \\mathrm{d} v = \\begin{bmatrix}0 & 0 & 0 \\end{bmatrix}.$ It is well-known that a global Maxwellian $\\mu$\nsatisfies $Q(\\cdot,\\cdot)=0$, where\n\\begin{equation} \\label{Maxwellian}\n\\mu(v):= \\frac{1}{(2\\pi)^{3\/2}} \\exp\\bigg(\n - \\frac{|v |^{2}}{2 }\n \\bigg).\n\\end{equation}\n\n\nThe electric field $E$ is given by\n\\begin{equation} \\label{Field}\nE(t,x): = - \\nabla_{x} \\phi(t,x)\n\\end{equation}\nwhere an electrostatic potential is determined by the Poisson equation: \n\\begin{equation} \\label{Poisson2}\n-\\Delta_x \\phi(t,x) = \\int_{\\mathbb R^3 } (F_+(t,x,v) - F_-(t,x,v)) \\, dv\n\\ \\ \\text{in} \\ \\Omega .\n\\end{equation}\n\nA simplified one-species Vlasov-Poisson-Boltzmann equation is often considered to reduce the complexity. Where we let $F(t,x,v)$ takes the role of $F_+(t,x,v)$, and assume $F_- = \\rho_0 \\mu $ where the constant $\\rho_0 = \\int_{\\Omega \\times \\mathbb R^3} F_+(0,x,v) \\,dv dx$. Then we get the system\n\\begin{equation} \\label{Boltzmann_E}\n\\partial_{t} F + v\\cdot \\nabla_{x} F + E\\cdot \\nabla_{v} F = Q(F,F),\n\\end{equation} \n\\begin{equation} \\label{Poisson}\n-\\Delta_x \\phi(t,x) = \\int_{\\mathbb R^3 } F(t,x,v) \\, dv -\\rho_0\n\\ \\ \\text{in} \\ \\Omega .\n\\end{equation}\nHere the background charge density $\\rho_0$ is assumed to be a constant.\n\nThroughout this paper, we use the notation\n\\begin{equation} \\label{iota} \\begin{split}\n \\iota = + \\text{ or } -, \\text{ and }\n -\\iota = \\begin{cases}\n- &, \\text{ if } \\iota = + \\\\ +&, \\text{ if } \\iota = -.\n\\end{cases}\n\\end{split} \\end{equation}\nAnd for the one-species case, $F_\\iota = F$.\n\n\n\n\n\n\n\nIn many physical applications, e.g. 
semiconductor and tokamak, the charged dilute\ngas is confined within a container, and its interaction with the boundary, which can be described by suitable boundary conditions, often plays a crucial role in global dynamics. In this paper we consider one of the physical conditions, a so-called diffuse boundary condition:\n\\begin{equation} \\label{diffuse_BC}\nF_\\iota(t,x,v)= \\sqrt{2\\pi} \\mu(v) \\int_{n(x ) \\cdot u>0} F_\\iota(t,x,u) \\{n(x) \\cdot u\\}\\mathrm{d} u \\ \\ \\text{for} \\ (x,v) \\in \\gamma_-.\n\\end{equation}\nHere $\\gamma_-: = \\{(x,v) \\in \\partial\\Omega \\times \\mathbb{R}^3: n(x) \\cdot v<0\\}$, and $n(x)$ is the outward unit normal at a boundary point $x$. \n\n\nDue to its importance, there have been many research activities in the mathematical study of the Boltzmann equation. In \\cite{Guo_P}, a global strong solution of the Boltzmann equation coupled with the Poisson equation has been established through the nonlinear energy method, when the initial data are close to the Maxwellian $\\mu$. In the large-amplitude regime, an almost exponential decay for Boltzmann solutions is established in \\cite{DV}, provided certain a priori strong Sobolev estimates can be verified. Such high regularity ensures an $L^\\infty$-control of solutions which is crucial to handle the quadratic nonlinearity. Even though these estimates can be verified in periodic domains, their validity in general bounded domains has been doubted. \n\nDespite its importance, the mathematical theory on boundary problems of VPB, especially for strong solutions, has not been developed to a satisfactory level (cf. renormalized solutions of VPB were constructed in \\cite{Michler}). One of the fundamental difficulties for the system in bounded domains is the lack of higher regularity, which originates from the characteristic nature of boundary conditions in the kinetic theory, and the nonlocal property of the collision term $Q$. This nonlocal term indicates that the local behavior of the solution could be affected globally in $x$ and $v$, and thus prevents the localization of the solution. From this, a seemingly inevitable singularity of the spatial normal derivative arises at the boundary $x \\in \\partial \\Omega$:\n$\\partial_n F_\\iota(t,x,v) \\sim \\frac{1 }{ n(x) \\cdot v } \\notin L^1_{loc}.$\nSuch a singularity towards the grazing set $\\gamma_0 : = \\{(x,v) \\in \\partial\\Omega \\times \\mathbb{R}^3: n(x) \\cdot v=0\\}$ has been studied thoroughly in \\cite{GKTT1} for the Boltzmann equation in convex domains. Here we clarify that a $C^{\\alpha}$ domain means that for any ${p} \\in \\partial{\\Omega}$, there exist sufficiently small $\\delta_{1}>0, \\delta_{2}>0$, and a one-to-one and onto $C^{\\alpha}$-map, $\n\t\\eta_p: \\{( x_{\\|,1}, x_{\\|,2} , x_n ) \\in \\mathbb R^3 : x_n > 0 \\} \\cap B(0; \\delta_1 ) \\to \\Omega \\cap B(p;\\delta_2 )$ with $\\eta_p ( x_{\\|,1}, x_{\\|,2} , x_n ) = \\eta_p( x_{\\|,1}, x_{\\|,2} , 0 ) + x_n [-n (\\eta_p( x_{\\|,1}, x_{\\|,2} , 0))],$\n\tsuch that $\\eta_p (\\cdot, \\cdot, 0) \\in \\partial\\Omega$. 
\n\tA \\textit{convex} domain means that there exists $C_\\Omega>0$ such that for all $p \\in \\partial\\Omega$ and $\\eta_p$ and for all $x_\\parallel$,\n\\begin{equation}\\label{convexity_eta}\n\\begin{split}\n\\sum_{i,j=1}^{2} \\zeta_{i} \\zeta_{j}\\partial_{i} \\partial_{j} \\eta _{{p}} ( x_{\\parallel } )\\cdot \nn ( x_{\\parallel } )\n \\leq - C_{\\Omega} |\\zeta|^{2} \\ \n \\text{ for all} \\ \\zeta \\in \\mathbb{R}^{2}.\n\\end{split}\n\\end{equation}\n\n\n\n\nConstruction of a unique global solution and proving its asymptotic stability of VPB in general domains has been a challenging open problem for any boundary condition. In \\cite{VPB} the authors give the \\textit{first} construction of a unique global \\textit{strong} solution of the one-species VPB system with the diffuse boundary condition when the domain is $C^3$ and \\textit{convex.} Moreover an asymptotic stability of the global Maxwellian $\\mu$ is studied. The result was then extended to the two-species case in \\cite{2SVPB}.\n\n\n\n\n\n\n\n\\subsection{Global strong solution of VPB}\nIn \\cite{VPB, {2SVPB}}, the authors take the first step toward comprehensive understanding of VPB in bounded domains. They consider the zero Neumann boundary condition for the potential $\\phi$: $n \\cdot E \\vert_{\\partial \\Omega }= \\frac{ \\partial \\phi }{\\partial n } \\vert_{\\partial \\Omega } = 0$, which corresponds to a so-called insulator boundary condition. In such setting $(F_\\iota, E) = (\\mu, 0)$ is a stationary solution. \n\n\n\nThe characteristics (trajectory) is determined by the Hamilton ODEs for $f_+$ and $f_-$ separately\n\\begin{equation} \\label{hamilton_ODE1}\n\\frac{d}{ds} \\left[ \\begin{matrix}X_\\iota^f(s;t,x,v)\\\\ V_\\iota^f(s;t,x,v)\\end{matrix} \\right] = \\left[ \\begin{matrix}V_\\iota^f(s;t,x,v)\\\\ \n{-\\iota} \\nabla_x \\phi_f\n(s, X_\\iota^f(s;t,x,v))\\end{matrix} \\right] \\ \\ \\text{for} - \\infty< s , t < \\infty ,\n\\end{equation}\nwith $(X_\\iota^f(t;t,x,v), V_\\iota^f(t;t,x,v)) = (x,v)$. Where the potential is extended to negative time as $\\phi_f(t,x)= e^{-|t|} \\phi_{f_0}(x)$ for $t\\leq 0$.\n\n\nFor $(t,x,v) \\in \\mathbb{R} \\times \\Omega \\times \\mathbb{R}^3$, define \\textit{the backward exit time} $t_{\\mathbf{b},\\iota}^f(t,x,v)$ as \n\\begin{equation} \\label{tb}\nt_{\\mathbf{b},\\iota}^f (t,x,v) := \\sup \\{s \\geq 0 : X_\\iota^f(\\tau;t,x,v) \\in \\Omega \\ \\ \\text{for all } \\tau \\in (t-s,t) \\}.\n\\end{equation}\nFurthermore, define $x_{\\mathbf{b},\\iota}^f (t,x,v) := X_\\iota^f(t-t_{\\mathbf{b},\\iota}(t,x,v);t,x,v)$ and $v_{\\mathbf{b},\\iota}^f (t,x,v) := V_\\iota^f(t-t_{\\mathbf{b},\\iota}(t,x,v);t,x,v)$.\n\n\n\n\n\n\nIn order to handle the boundary singularity, they introduce the following notion\n\\begin{definition}[Kinetic Weight] \\label{kweight} For $\\e>0$\n\\begin{equation} \\label{alphaweight}\\begin{split}\n\\alpha_{f,\\e,\\iota}(t,x,v) : =& \\ \n\\chi \\Big(\\frac{t-t_{\\mathbf{b},\\iota}^{f}(t,x,v)+\\e}{\\e}\\Big)\n|n(x_{\\mathbf{b},\\iota}^{f}(t,x,v)) \\cdot v_{\\mathbf{b},\\iota}^{f}(t,x,v)| \\\\\n&+ \\Big[1- \\chi \\Big(\\frac{t-t_{\\mathbf{b},\\iota}^{f}(t,x,v) +\\e}{\\e}\\Big)\\Big].\n\\end{split}\\end{equation}\nHere they use a smooth function $\\chi: \\mathbb{R} \\rightarrow [0,1]$ satisfying\n\\begin{equation} \\label{chi}\n\\begin{split}\n\\chi(\\tau) =0, \\ \\tau\\leq 0, \\ \\text{and} \\ \\ \n\\chi(\\tau) = 1 , \\ \\tau\\geq 1. 
\n \\ \\ \\frac{d}{d\\tau}\\chi(\\tau) \\in [0,4] \\ \\ \\text{for all } \\tau \\in \\mathbb{R}.\n\\end{split}\n\\end{equation}\n\\end{definition}\nAlso, denote\n\\begin{equation} \\label{matrixalpha}\n\\alpha_{f,\\e}(t,x,v) := \\begin{bmatrix} \\alpha_{f, \\e, +}(t,x,v) & 0 \\\\ 0 & \\alpha_{f, \\e, -}(t,x,v) \\end{bmatrix}.\n\\end{equation}\n\nNote that $\\alpha_{f,\\e,\\iota}(0,x,v)\\equiv \\alpha_{{f_0},\\e,\\iota}(0,x,v)$ is determined by $f_0$. For the sake of simplicity, the superscription $^f$ in $X_\\iota^f, V_\\iota^f, t_{\\mathbf{b},\\iota}^f, x_{\\mathbf{b},\\iota}^f, v_{\\mathbf{b},\\iota}^f$ is dropped unless they could cause any confusion.\n\n\nOne of the crucial properties of the kinetic weight in (\\ref{alphaweight}) is an invariance under the Vlasov operator: $\n\\big[\\partial_t + v\\cdot \\nabla_x - \\nabla_x \\phi_f \\cdot \\nabla_v \\big] \\alpha_{f,\\e,\\iota}(t,x,v) =0.$ This is due to the fact that the characteristics solves a deterministic system (\\ref{hamilton_ODE1}). This crucial invariant property under the Vlasov operator is one of the key points in their approach in \\cite{VPB, 2SVPB}. \n\n\nDenote $\n\\label{weight}\nw_\\vartheta(v) = e^{\\vartheta|v|^2}.$\n\n\\begin{theorem}[\\cite{VPB, 2SVPB}] \n\\label{main_existence}\nAssume a bounded open $C^3$ domain $\\Omega \\subset\\mathbb{R}^3$ is convex (\\ref{convexity_eta}). Let $0< \\tilde{\\vartheta}< \\vartheta\\ll1$. Assume the compatibility condition: (\\ref{diffuse_BC}) holds at $t=0$. \nThere exists a small constant $0< \\e_0 \\ll 1$ such that for all $0< \\e \\leq \\e_0$ if an initial datum $F_{0,\\iota} =\\mu + \\sqrt \\mu f_{0,\\iota}$ satisfies\n\\begin{equation} \\label{small_initial_stronger}\n \\|w_\\vartheta f_{0,\\iota} \\|_{L^\\infty(\\bar{\\Omega} \\times \\mathbb{R}^3)}< \\e, \\| w_{\\tilde{\\vartheta}} \\nabla_{v } f_{0,\\iota} \\|_{ {L}^{3 } ( {\\Omega} \\times \\mathbb{R}^3)}< \\infty,\n \\end{equation}\n\\begin{equation} \\label{W1p_initial}\n\\begin{split}\n \\| w_{\\tilde{\\vartheta}} \\alpha_{f_{0,\\iota}, \\e }^\\beta \\nabla_{x,v } f_{0,\\iota} \\|_{ {L}^{p } ( {\\Omega} \\times \\mathbb{R}^3)}\n <\\e\n\\ \\\n\\text{for} \\ \\ 3< p < 6, \\ \\ \n1-\\frac{2}{p }\n < \\beta<\n\\frac{2}{3}\n \n,\\end{split}\n\\end{equation}\nthen there exists a unique global-in-time solution $F_\\iota(t)= \\mu+ \\sqrt{\\mu} f_\\iota(t) \\geq 0$ to (\\ref{2FVPB}), (\\ref{Field}), (\\ref{Poisson2}), \\eqref{diffuse_BC}. 
Moreover there exists $\\lambda_{\\infty} > 0$ such that \n\\begin{equation} \\begin{split}\\label{main_Linfty}\n \\sup_{ t \\geq0}e^{\\lambda_{\\infty} t} \\| w_\\vartheta f_\\iota(t)\\|_{L^\\infty(\\bar{\\Omega} \\times \\mathbb{R}^3)}+ \n \\sup_{ t \\geq0}e^{\\lambda_{\\infty} t} \\| \\phi_f(t) \\|_{C^{2}(\\Omega)} \\lesssim 1,\n\\end{split}\\end{equation}\nand, for some $C>0$ and $0< \\delta= \\delta(p,\\beta) $,\n\\begin{equation} \\label{W1p_main}\n \\| w_{\\tilde{\\vartheta}} \\alpha_{f, \\e ,\\iota }^{\\beta } \\nabla_{x,v} f_\\iota(t) \\|_{L^{ p} ( {\\Omega} \\times \\mathbb{R}^3)} \n \\lesssim e^{Ct} \\ \\ \\text{for all } t \\geq 0\n,\n\\end{equation}\n\n\\begin{equation} \\label{nabla_v f_31}\n\\| \\nabla_v f_\\iota (t) \\|_{L^3_x (\\Omega) L^{1+\\delta }_v (\\mathbb{R}^3)} \\lesssim_t 1 \\ \\ \\text{for all } \\ t\\geq 0.\n\\end{equation}\n \n\n\nFurthermore, if $F_\\iota$ and $G_\\iota$ are both solutions to (\\ref{2FVPB}), (\\ref{Field}), (\\ref{Poisson2}), \\eqref{diffuse_BC},\nthen \n\\begin{equation} \\label{stability_1+}\n\\| f_\\iota(t) - g_\\iota(t) \\|_{L^{1+\\delta} (\\Omega \\times \\mathbb{R}^3)} \\lesssim_t \\| f_\\iota(0) - g_\\iota(0) \\|_{L^{1+\\delta} (\\Omega \\times \\mathbb{R}^3)} \\ \\ \\text{for all } \\ t\\geq 0.\n\\end{equation} \n\n\\end{theorem}\n\\begin{remark}The second author and his collaborators construct a local-in-time solution for a given general large datum in \\cite{CKL} for the generalized diffuse reflection boundary condition. By introducing a scattering kernel $R(u \\rightarrow v;x,t)$, representing the probability that a molecule striking the boundary at $x\\in\\partial\\Omega$ with velocity $u$ is bounced back into the domain with velocity $v$, they consider \\begin{equation}\\begin{split}\\label{eqn:BC}\n&F(t,x,v) |n(x) \\cdot v|= \\int_{\\gamma_+(x)}\nR(u \\rightarrow v;x,t) F(t,x,u)\n\\{n(x) \\cdot u\\} d u, \\quad \\text{ on }\\gamma_-\n.\n\\end{split}\n\\end{equation}\n In \\cite{CKL} they study a model proposed by Cercignani and Lampis in~\\cite{CIP,CL}. With two accommodation coefficients $ 00$,\n\t\t\\begin{equation} \\begin{split}\\label{phi_interpolation}\n\t\t\t\\|\\nabla^2_x \\phi(t )\\|_{L^\\infty (\\Omega)}\n\t\t\t\\lesssim_{\\Omega, D_1, D_2}\n\t\t\te^{D_1 \\Lambda_0t}\\| \\phi(t)\\|_{C^{1,1-D_1}(\\Omega)}\n\t\t\t+ e^{- D_2 \\Lambda_0t}\\| \\phi(t)\\|_{C^{2, D_2}(\\Omega)}.\n\t\t\\end{split}\\end{equation}\n\t\\end{lemma} \n While an exponential decay of the weaker $C^{1,1-}$ norm can be derived from the exponential decay of $f_\\iota$ in $L^\\infty$, the $C^{2,0+}$ norm is controlled by Morrey's inequality\n\\begin{equation} \\label{phic2pbd}\n\\| \\phi \\|_{C_x^{2,0 +}} \\lesssim \\sum_{\\iota = \\pm} \\| \\int_{\\mathbb R^3 } f_\\iota \\sqrt \\mu dv \\|_{C_x^{0,0+} } \\lesssim \\sum_{\\iota = \\pm} \\| \\int_{\\mathbb R^3} \\nabla_x f_\\iota \\sqrt \\mu dv \\|_{L^p_x} , \\text{ for } p>3.\n\\end{equation}\n\nNow the spatial derivative of $f_\\iota$ needs to be controlled.\nThey develop an $ \\alpha_\\iota$-weighted $W^{1,p}$ estimate via an energy-type estimate of $ \\alpha_\\iota \\nabla_{x,v} f_\\iota$, where the $ \\alpha_\\iota$-multiplication compensates the boundary singularity. 
This allows them to bound \\eqref{phic2pbd} for $\\frac{p-2}{p} < \\beta < \\frac{p-1}{p}$,\n\\[\n \\| \\int_{\\mathbb R^3} \\nabla_x f_\\iota \\sqrt \\mu dv \\|_{L^p_x} \\lesssim \\| \\alpha_\\iota^{-\\beta} \\|_{L^{\\frac{p}{p-1}}} \\| \\alpha_\\iota^\\beta \\nabla_x f_\\iota \\sqrt \\mu \\|_{L^p_{x,v} } \\lesssim \\| \\alpha_\\iota^\\beta \\nabla_x f_\\iota \\sqrt \\mu \\|_{L^p_{x,v} },\n\\]\nas long as\n\\begin{equation} \\label{int1overalpha}\n \\alpha_\\iota^{-\\frac{\\beta p}{p-1} } \\sim \\frac{1}{ \\alpha_\\iota(t,x,v)^{1-}} \\in L^1_v \\text{ uniformly for all } x.\n\\end{equation}\n\nA difficulty in the proof of \\eqref{int1overalpha} arises from the lack of a local representation of $ \\alpha_\\iota (t, x, v)$. $\\alpha_\\iota$ is only defined at some boundary point along (possibly very complicated) characteristics. They employ a geometric change of variables $v \\mapsto (x_{\\mathbf{b},\\iota}(t,x,v),t_{\\mathbf{b},\\iota}(t,x,v) )$ to examine \\eqref{int1overalpha}. By computing the Jacobian, there is an extra $ \\alpha$-factor from $dv \\sim \\frac{ \\alpha_\\iota}{|t_{\\mathbf{b},\\iota}|^3 } dt_{\\mathbf{b},\\iota} d x_{\\mathbf{b},\\iota}$, which cancels the singularity of \\eqref{int1overalpha}. Then they use a lower bound of $t_{\\mathbf{b},\\iota} \\gtrsim \\frac{|x_{\\mathbf{b},\\iota}^f-x|}{\\max |V|}$ and a bound $ \\alpha \\lesssim \\frac{|(x-x_{\\mathbf{b},\\iota}^f) \\cdot n(x_{\\mathbf{b},\\iota}^f)|}{t_{\\mathbf{b},\\iota}^f}$ to have\n\\begin{equation} \\label{alpha_bounded_intro}\n\\int_{|v| \\lesssim 1} { \\alpha_\\iota}^{- \\frac{\\beta p}{p-1}} \\mathrm{d} v \\lesssim \\int_{\\text{boundary}} \\frac{|(x- x_{\\mathbf{b},\\iota}) \\cdot n(x_{\\mathbf{b},\\iota})|^{1- \\frac{\\beta p}{p-1}}}{|x-x_{\\mathbf{b},\\iota}|^{3- \\frac{\\beta p}{p-1}}} \\mathrm{d} x_{\\mathbf{b},\\iota}\n+ \\text{good terms}< \\infty, \n\\end{equation}\nwhich turns out to be bounded as long as $ \\frac{\\beta p}{p-1}<1$. \n\n\n\nFrom the above estimates and the interpolation, they derive \\textit{an exponential decay} of $\\phi(t)$ in $C_x^2$ as long as $\\| \\alpha_\\iota^\\beta \\nabla_x f(t) \\|_{L_{x,v}^p} $ grows at most exponentially. With the $C_x^2$-bound of $\\phi$ in hand, they control $\\| \\alpha_\\iota^\\beta \\nabla_x f(t) \\|_{L_{x,v}^p}$ via Gronwall's inequality and close the estimate by proving its (at most) exponential growth.\n\n\nFor the uniqueness and the stability of the approximating sequence, they prove $L^1$-stability. The key observation is that the $v$-derivative of the diffuse BC \\eqref{diffuse_BC} has no boundary singularity and is thus bounded. 
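\nTo make this observation concrete, here is a short computation (only a sketch, written in the perturbation $f_\\iota$ with $F_\\iota = \\mu + \\sqrt \\mu f_\\iota$, and not part of the original argument). Since $\\sqrt{2\\pi} \\int_{n(x) \\cdot u>0} \\mu(u) \\{n(x) \\cdot u\\} \\mathrm{d} u = 1$, the diffuse BC \\eqref{diffuse_BC} is equivalent to\n\\[\nf_\\iota(t,x,v) = \\sqrt{2\\pi} \\sqrt{\\mu(v)} \\int_{n(x) \\cdot u>0} f_\\iota(t,x,u) \\sqrt{\\mu(u)} \\{n(x) \\cdot u\\} \\mathrm{d} u \\ \\ \\text{on } \\gamma_-.\n\\]\nFor fixed $x \\in \\partial\\Omega$ the set $\\{ v : n(x) \\cdot v<0 \\}$ is open in $v$, so one may differentiate the right hand side in $v$; only the Gaussian prefactor depends on $v$, and thus, at least formally,\n\\[\n\\nabla_v f_\\iota(t,x,v) = -\\frac{v}{2} f_\\iota(t,x,v) \\ \\ \\text{on } \\gamma_-, \\qquad |\\nabla_v f_\\iota| \\lesssim |v| \\sqrt{\\mu(v)} \\int_{n(x) \\cdot u>0} |f_\\iota| \\sqrt{\\mu(u)} \\{n(x) \\cdot u\\} \\mathrm{d} u,\n\\]\nwhich stays bounded as long as $f_\\iota$ does, in contrast to the spatial normal derivative.\n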
The equation of $\\nabla_v f_\\iota$ has a singular forcing term $\\nabla_x f_\\iota$.\nFor which they control $\\| \\nabla_x f_\\iota \\|_{L_x^3 L_v^1 } $ as $\\| \\alpha_\\iota^{-\\beta} \\|_{L_v^{\\frac{p}{p-1}} } \\| \\alpha_\\iota^\\beta \\nabla_x f_\\iota\\|_{L^p_{x,v} } $, and this term is bounded from \\eqref{int1overalpha}.\n\n\\hide\nFor the two-species VPB system in \\cite{2SVPB}, when performing the expansion for $\\begin{bmatrix} F_+ \\\\ F_- \\end{bmatrix} = \\begin{bmatrix} \\mu + \\sqrt \\mu f_+ \\\\ \\mu + \\sqrt \\mu f_- \\end{bmatrix}$, the vector linearized Boltzmann operator $L$ is defined as\n\\begin{equation} \\label{L_decomposition}\nL \\begin{bmatrix} g_1 \\\\ g_2 \\end{bmatrix} := -\\frac{1}{\\sqrt \\mu } \\begin{bmatrix} 2 Q(\\sqrt \\mu g_1, \\mu ) + Q (\\mu, \\sqrt \\mu ( g_1 + g_2 ) )\n\\\\ 2 Q(\\sqrt \\mu g_2, \\mu ) + Q (\\mu, \\sqrt \\mu ( g_1 + g_2 ) )\n \\end{bmatrix}.\n \\end{equation}\nThe null space of $L$ is a six-dimensional subspace of $L^2_v(\\mathbb R^3; \\mathbb R^2 )$ spanned by orthonormal vectors\n\\begin{equation} \n\\left\\{ \\begin{bmatrix} \\sqrt \\mu \\\\ 0 \\end{bmatrix}, \\begin{bmatrix} 0 \\\\ \\sqrt \\mu \\end{bmatrix}, \\begin{bmatrix} \\frac{v_i}{\\sqrt 2 } \\sqrt \\mu \\\\ \\frac{v_i}{\\sqrt 2 } \\sqrt \\mu \\end{bmatrix}, \\begin{bmatrix} \\frac{|v|^2 - 3}{2\\sqrt 2} \\sqrt \\mu \\\\ \\frac{|v|^2 - 3}{2\\sqrt 2} \\sqrt \\mu \\end{bmatrix}\n \\right\\}, \\, i = 1,2,3.\n\\end{equation}\nAnd the projection of $\\mathbf f = \\begin{bmatrix} f_+ \\\\ f_- \\end{bmatrix}$ onto the null space $N(L)$ can be denoted by\n\\begin{equation} \\begin{split}\n& \\mathbf P \\mathbf f (t,x,v)\n\\\\ & := \\left\\{ a_+(t,x) \\begin{bmatrix} \\sqrt \\mu \\\\ 0 \\end{bmatrix} + a_-(t,x) \\begin{bmatrix} 0 \\\\ \\sqrt \\mu \\end{bmatrix} + b(t,x) \\cdot \\frac{v}{\\sqrt 2 } \\begin{bmatrix} \\sqrt \\mu \\\\ \\sqrt \\mu \\end{bmatrix} + c(t,x) \\frac{|v|^2 - 3}{2\\sqrt 2}\\begin{bmatrix} \\sqrt \\mu \\\\ \\sqrt \\mu \\end{bmatrix}\n\\right\\}.\n\\end{split} \\end{equation}\nUsing the standard $L^2$ energy estimate of the equation, it is well-known that $L$ is degenerate: $\\left \\langle L \\mathbf f , \\mathbf f \\right \\rangle \\gtrsim \\left \\| (I - \\mathbf P) \\mathbf f \\right \\|_{L^2_{x,v }}$. Thus it's clear that in order to control the $L^2$ norm of $f_\\iota(t)$, a way to bound the missing $ \\left \\| \\mathbf P\\mathbf f (t) \\right \\|_{L^2} $ term is needed. Fortunately, the technique of test function method developed in \\cite{EGKM} can be applied to the two-species settings. By the weak formulation of the equation and a set of properly choosing test functions, the $a_\\pm(t,x), b(t,x), c(t,x)$ can be controlled. And the estimate of $\\mathbf P\\mathbf f $ is achieved as\n\\[\n\\left \\| \\mathbf P\\mathbf f (s) \\right \\|_{L_{x,v}^2 } \\lesssim \\left \\| ( I - \\mathbf P ) \\mathbf f \\right \\|_{L_{x,v}^2 } + \\text{ \"good terms\"}.\n\\]\n\\unhide\n\n\n\n\\subsection{Improved regularity under the sign condition}\nOne interesting question is to improve the regularity estimate beyond a weighted $W^{1,p}$ for $p< 6$ of $f_\\iota$ in \\cite{VPB, 2SVPB}. Some work in this direction has been done in \\cite{VPBEP}.\n\nIn \\cite{VPBEP} the author consider the one-species VPB system \\eqref{Field}, \\eqref{Boltzmann_E}, where the potential consists of a self-generated electrostatic potential and an external potential. 
That is $E = \\nabla \\phi$, where\n\\begin{equation} \\label{VPB2}\n\\phi(t,x) = \\phi_F(t,x) + \\phi_E(t,x), \\text{ with } \\frac{\\partial \\phi_E}{\\partial n } > C_E > 0 \\text{ on } \\partial \\Omega,\n\\end{equation}\nand $\\phi_F$ satisfies \\eqref{Poisson} and the zero Neumann boundary condition $\\frac{ \\partial \\phi_F}{\\partial n } = 0$ on $\\partial \\Omega$. In this setting, the field $E$ satisfies a crucial sign condition on the boundary\n\\begin{equation} \\label{signEonbdry}\nE(t,x) \\cdot n(x) > C_E > 0 \\text{ for all } t \\text{ and all } x \\in \\partial \\Omega.\n\\end{equation}\n\n\nWith the help of the external potential $\\phi_E$ with the crucial sign condition \\eqref{VPB2}, they construct a short-time weighted $W^{1,\\infty}$ solution to the VPB system, which improves the regularity estimate for this system in Theorem \\ref{main_existence}. The key idea of the result is to incorporate a different distance function $\\tilde \\alpha$:\n\\begin{equation} \\label{alphatilde}\n\\tilde \\alpha \\sim \\bigg[ |v \\cdot \\nabla \\xi (x)| ^2 + \\xi (x)^2 - 2 (v \\cdot \\nabla^2 \\xi(x) \\cdot v ) \\xi(x) - 2(E(t,\\overline x ) \\cdot \\nabla \\xi (\\overline x ) )\\xi(x) \\bigg]^{1\/2},\n\\end{equation}\nwhere $\\xi:\\mathbb R^3 \\to \\mathbb R$ is a smooth function such that $ \\Omega = \\{ x \\in \\mathbb R^3: \\xi(x) < 0 \\}$, and the closest boundary point $\\overline x := \\{ \\bar x \\in \\partial \\Omega : d(x,\\bar x ) = d(x, \\partial \\Omega) \\}$ is uniquely defined for $x$ close to the boundary. Note that $\\tilde \\alpha \\vert_{\\gamma_-} \\sim | n(x) \\cdot v |$. A version of such a distance function without the potential was used in \\cite{GKTT1}. One of the key contributions in \\cite{VPBEP} is to incorporate this different distance function \\eqref{alphatilde} in the presence of an external field.\n\n\\begin{theorem}[\\cite{VPBEP}]\\label{WlinftyVPBthm} Let $\\phi_E (t,x)$ be a given external potential with $\\nabla_x \\phi_E$ satisfying (\\ref{signEonbdry}), and\n$ \\| \\nabla_x \\phi_E(t,x) \\|_{C^1_{t,x}(\\mathbb{R}_+ \\times\\bar \\Omega )}\n < \\infty.$\n Assume that, for some $0< \\vartheta < \\frac{1}{4}$,\n$\\| w_\\vartheta \\tilde{\\alpha} \\nabla_{x,v} f_0 \\|_{L^\\infty(\\bar \\Omega \\times \\mathbb R^3)} + \\| w_\\vartheta f_0 \\|_{L^\\infty(\\bar \\Omega \\times \\mathbb R^3)}\n< \\infty.$\nThen there exists a unique solution $F(t,x,v) = \\sqrt \\mu f(t,x,v) $ to (\\ref{Boltzmann_E}), \\eqref{Field}, (\\ref{diffuse_BC}), (\\ref{VPB2}) for $t \\in [0,T]$ with $0 < T \\ll 1$, such that for some $0< \\vartheta' < \\vartheta $, $\\varpi \\gg 1$,\n$\\sup_{0\\le t \\le T} \\| w_{\\vartheta'} f(t) \\|_{L^\\infty(\\bar \\Omega \\times \\mathbb R^3)} < \\infty,$ and\n\\begin{equation}\n\\sup_{0 \\le t \\le T} \\| w_{\\vartheta'} e^{-\\varpi \\langle v \\rangle t } \\tilde{\\alpha} \\nabla_{x,v} f^{} (t,x,v) \\|_{L^\\infty(\\bar \\Omega \\times \\mathbb R^3)} < \\infty. \n\\end{equation}\n\\end{theorem}\n\n\nOne of the crucial properties $\\tilde \\alpha$ enjoys, under the assumption of the sign condition \\eqref{signEonbdry}, is the invariance along the characteristics:\n\n\\begin{lemma}[Velocity lemma near boundary] \\label{velocitylemma} \nSuppose $E(t,x)$ satisfies the sign condition (\\ref{signEonbdry}).\nThen for any $0 \\le s0$ is defined on $x\\in \\partial \\Omega$ and smooth. 
Assume that two accommodation coefficients satisfy\n\n\\begin{equation}\\label{eqn: Constrain on T}\n\\frac{\\min_{x\\in \\partial \\Omega}\\{T_w(x)\\}}{\\max_{x\\in \\partial \\Omega}\\{T_w(x)\\}}>\\max\\Big(\\frac{1-r_\\parallel}{2-r_\\parallel},\\frac{\\sqrt{1-r_\\perp}-(1-r_\\perp)}{r_\\perp}\\Big)\\,.\n\\end{equation}\nLet $0< \\tilde{\\theta}< \\theta <\\frac{1}{4\\max_{x\\in \\partial \\Omega}\\{T_w(x)\\}}.$\nAssume\n\\begin{equation}\n\\| w_\\theta f_0 \\|_\\infty < \\infty, \\, \\label{eqn: w f_0} \\| w_{\\tilde{\\theta}} \\nabla_v f_0 \\|_{L^{3}_{x,v}}<\\infty,\n\\end{equation}\n\\begin{equation}\n\\| w_{\\tilde{\\theta}} \\alpha_{f_0, \\epsilon }^\\beta \\nabla_{x,v } f_0 \\|_{ {L}^{p } ( {\\Omega} \\times \\mathbb{R}^3)} <\\infty\\quad \\text{for} \\quad 3< p < 6\\,,\\, 1-\\frac{2}{p }< \\beta< \\frac{2}{3}.\\label{eqn: alpha f0}\n\\end{equation}\nThen there is a unique solution $F(t,x,v) = \\sqrt \\mu f(t,x,v)$ to~\\eqref{Boltzmann_E}, \\eqref{eqn:BC}, \\eqref{eqn: Formula for R} in a time interval of $t \\in [0,\\bar{t}]$.\nMoreover, there are $\\mathfrak{C}>0$ and $\\lambda>0$, so that $f$ satisfies\n\\begin{equation}\\label{infty_local_bound}\n\\sup_{0 \\leq t \\leq \\bar{t}}\\| w_{\\theta}e^{-\\mathfrak{C} \\langle v\\rangle^2 t} f (t) \\|_{\\infty}\\lesssim \\| w_\\theta f_0 \\|_\\infty , \\ \\ \\sup_{0 \\leq t \\leq \\bar{t}}\\| \\nabla_v f (t) \\|_{L^3_xL^{1+ \\delta}_v}< \\infty,\n\\end{equation}\n\\begin{equation}\\label{W1p_local_bound}\n\\sup_{0 \\leq t \\leq \\bar{t}}\\Big\\{ \\| w_{\\tilde{\\theta}}e^{-\\lambda t\\langle v\\rangle }\\alpha_{f,\\epsilon }^\\beta \\nabla_{x,v} f (t) \\|_{p} ^p+ \\int^t_0 |w_{\\tilde{\\theta}} e^{-\\lambda s\\langle v\\rangle } \\alpha_{f,\\epsilon}^\\beta \\nabla_{x,v} f (t) |_{p,+}^p\\Big\\}< \\infty .\n\\end{equation}\n\\end{theorem*}\n\n\n\\unhide\n \n\n\\section{On the Vlasov-Poisson-Boltzmann system surrounded by Conductor boundary}\n\nIn the second part of the paper, we consider the one-species VPB system surrounded by conductor boundary. More specifically, we consider the system \\eqref{Boltzmann_E}, \\eqref{Field}, where the electrostatic potential $\\phi$ is obtained by\n\\begin{equation} \\label{phiC}\n-\\Delta_x \\phi(t,x) = \\int_{\\mathbb R^3} F(t,x,v) dv, \\, x \\in \\Omega, \\ \\ \\ \n\\phi = 0, \\, x \\in \\partial \\Omega.\n\\end{equation}\nAn important benefit in the conductor boundary setting \\eqref{phiC} is that $E = - \\nabla_x \\phi$ enjoys the sign condition \\eqref{signEonbdry} from a quantitative Hopf lemma, without the need of an external potential.\n\\begin{lemma}[Lemma $3.2$ in \\cite{BC}]\\label{Hopf}\nSuppose $ h \\ge 0 $, and $h \\in L^\\infty(\\Omega)$. Let $v$ be the solution of\n\\begin{equation} \\begin{split}\n- \\Delta v = h \\text{ in } \\Omega,\n \\ \\ v = 0 \\text{ on } \\partial \\Omega.\n\\end{split} \\end{equation}\nThen for any $ x \\in \\partial \\Omega$,\n\\begin{equation} \\label{hopfquan}\n\\frac{ \\partial v(x) }{ \\partial n} \\ge c \\int_{\\Omega} h(x) d (x, \\partial \\Omega)dx,\n\\end{equation}\nfor some $c > 0$ depending only on $\\Omega$. Here $d (x, \\partial \\Omega)$ is the distance from $x$ to the boundary $\\partial \\Omega$.\n\\end{lemma}\n\\hide\n\\begin{proof}\nSee . \n\n\n\n\\end{proof}\\unhide\n\nOur goal is to prove a local existence and regularity theorem for the system \\eqref{Boltzmann_E}, \\eqref{Field}, \\eqref{diffuse_BC}, \\eqref{phiC}. 
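\nBefore doing so, let us record an elementary illustration of the quantitative lower bound in Lemma \\ref{Hopf}; it is only a sanity check of the scaling and is not used in the proofs. Take $\\Omega = B(0,R) \\subset \\mathbb R^3$ and $h \\equiv 1$. Then $v(x) = \\frac{R^2 - |x|^2}{6}$ solves $-\\Delta v = 1$ in $\\Omega$ with $v = 0$ on $\\partial \\Omega$, and\n\\[\n\\Big| \\frac{\\partial v}{\\partial n} \\Big| = \\frac{R}{3} \\ \\text{ on } \\partial \\Omega, \\qquad \\int_{\\Omega} h(x) \\, d(x, \\partial \\Omega) \\, dx = 4\\pi \\int_0^R (R - r) r^2 \\, dr = \\frac{\\pi R^4}{3},\n\\]\nso the bound $| \\partial v \/ \\partial n | \\ge c \\int_{\\Omega} h(x) \\, d(x, \\partial \\Omega) \\, dx$ holds, in fact with equality, for $c = \\frac{1}{\\pi R^3}$, a constant depending only on $\\Omega$.\n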
Let's first define our distance function $\\tilde \\alpha$.\n\n\nLet $d(x,\\partial \\Omega) := \\inf_{y \\in \\partial \\Omega} \\| x - y \\| $. For any $\\delta > 0$, let $ \\Omega ^\\delta : = \\{ x \\in \\Omega : d(x, \\partial \\Omega ) < \\delta \\}$. For $\\delta \\ll 1$ is small enough, we have for any $x \\in \\Omega ^\\delta$ there exists a unique $\\bar x \\in \\partial \\Omega$ such that $d(x,\\bar x ) = d(x, \\partial \\Omega)$ (cf. (2.44) in \\cite{VPBEP}). \n\n\n\\begin{definition}\nFirst we define for all $(x, v ) \\in \\Omega ^{\\delta} \\times \\mathbb R^3 $,\n\\[\n\\beta(t,x,v) = \\bigg[ |v \\cdot \\nabla \\xi (x)| ^2 + \\xi (x)^2 - 2 (v \\cdot \\nabla^2 \\xi(x) \\cdot v ) \\xi(x) +2(\\nabla \\phi(t,\\overline x ) \\cdot \\nabla \\xi (\\overline x ) )\\xi(x) \\bigg]^{1\/2}.\n\\]\nFor any $\\epsilon >0$, let $\\chi_\\epsilon: [0,\\infty) \\to [0,\\infty) $ be a smooth function satisfying $ \\chi_\\epsilon(x) =x$ for $0 \\le x \\le \\frac{\\epsilon}{4}$, $\\chi_\\epsilon(x) = C_\\epsilon$ for $x \\ge \\frac{\\epsilon}{2}$, $\\chi_\\epsilon(x)$ is increasing for $ \\frac{\\epsilon}{4} < x < \\frac{\\epsilon}{2}$, and $ \\chi_\\epsilon'(x) \\le 1$. Let $\\delta': = \\min \\{| \\xi (x)| : x\\in \\Omega, d(x, \\partial \\Omega) = \\delta \\} $, then we define our weight function to be:\n\n\\begin{equation} \\label{alphadef}\n\\tilde \\alpha(t,x,v) : = \n\\begin{cases}\n (\\chi_{\\delta'} ( \\beta(t,x,v) ) )^{} & x \\in \\Omega^\\delta, \\\\\n C_{\\delta'} ^{} & x \\in \\Omega \\setminus \\Omega^\\delta.\n\\end{cases} \\end{equation}\n\\end{definition}\n\n\n\n\\begin{theorem}[Weighted $W^{1,\\infty}$ estimate for the VPB surrounded by conductor] \\label{WlinftyVPBthm} \nAssume $F_0 = \\sqrt \\mu f_0$ satisfies\n\\begin{equation} \\label{VPBf0assumption}\n\\| w_\\vartheta \\tilde \\alpha \\nabla_{x,v} f_0 \\|_{L^\\infty(\\bar \\Omega \\times \\mathbb R^3)} + \\| w_\\vartheta f_0 \\|_{L^\\infty(\\bar \\Omega \\times \\mathbb R^3)} + \\| w_\\vartheta \\nabla_v f_0 \\|_{L^3(\\bar \\Omega \\times \\mathbb R^3)} < \\infty,\n\\end{equation}\nfor some $0< \\vartheta < \\frac{1}{4}$.Then there exists a unique solution $F(t,x,v) = \\sqrt \\mu f(t,x,v) $ to (\\ref{Boltzmann_E}), \\eqref{Field}, \\eqref{diffuse_BC}, (\\ref{phiC}) for $t \\in [0,T]$ with $0 < T \\ll 1$, such that for some $0< \\vartheta' < \\vartheta $, $\\varpi \\gg 1$,\n\\begin{equation} \\label{linfinitybddsolution}\n\\sup_{0\\le t \\le T} \\| w_{\\vartheta'} f(t) \\|_{L^\\infty(\\bar \\Omega \\times \\mathbb R^3)} < \\infty,\n\\end{equation} \n\\begin{equation}\\label{weightedC1bddsolution}\n\\sup_{0 \\le t \\le T} \\| w_{\\vartheta'} e^{-\\varpi \\langle v \\rangle t } \\tilde \\alpha \\nabla_{x,v} f^{} (t,x,v) \\|_{L^\\infty(\\bar \\Omega \\times \\mathbb R^3)} < \\infty, \n\\end{equation}\n\\begin{equation} \\label{L3L1plusbddsolution}\n \\sup_{0 \\le t \\le T}\\| e^{-\\varpi \\langle v \\rangle t } \\nabla_v f(t) \\|_{L^3_x(\\Omega)L_v^{1+\\delta}(\\mathbb R^3 ) } < \\infty \\text{ for } 0< \\delta \\ll 1.\n\\end{equation}\n\\end{theorem}\n\n\n\n\nThe corresponding equation for $f = \\frac{F}{\\sqrt \\mu } $ is\n\\begin{equation} \\label{VPBsq1}\n(\\partial_t + v \\cdot \\nabla_x - \\nabla \\phi \\cdot \\nabla_v + \\frac{v}{2} \\cdot \\nabla \\phi + \\nu( \\sqrt \\mu f) ) f^{} = \\Gamma_{\\text{gain}} (f,f),\n\\end{equation}\n\\begin{equation} \\label{VPBsq3}\n-\\Delta_x \\phi(t,x) = \\int_{\\mathbb R^3 } \\sqrt \\mu f dv, \\,\\, \\phi = 0 \\text{ on } \\partial \\Omega,\n\\end{equation}\n\\begin{equation} 
\\label{fbdry}\nf^{}(t,x,v) = c_\\mu \\sqrt{\\mu(v)} \\int_{n\\cdot u >0 } f(t,x,u) \\sqrt{\\mu(u)} (n(x) \\cdot u ) du.\n\\end{equation}\nHere $\n \\nu (\\sqrt \\mu f )(v) : = \\int_{\\mathbb R^3 } \\int_{\\mathbb S^2} |v - u|^{\\kappa} q_0 (\\frac{ v -u}{ |v -u| } \\cdot \\omega ) \\sqrt{\\mu(u) } f(u) d\\omega du$, and $\n \\Gamma_{\\text{gain}} (f_1,f_2) (v): = \\int_{\\mathbb R^3 } \\int_{\\mathbb S^2} |v - u|^{\\kappa} q_0 (\\frac{ v -u}{ |v -u| } \\cdot \\omega ) \\sqrt{\\mu(u) } f_1(u')f_2(v') d\\omega du$.\n\n\n\nLet $\\partial \\in \\{ \\nabla_x, \\nabla_v \\}$. Let $E = - \\nabla_x \\phi $. Denote \n\\begin{equation} \\label{nuvarpi}\n\\nu_{\\varpi} = \\nu(\\sqrt \\mu f )+ \\frac{v}{2} \\cdot E + \\varpi \\langle v \\rangle + t \\varpi \\frac{v}{\\langle v \\rangle }\\cdot E - {\\tilde \\alpha}^{-1} ( \\partial_t {\\tilde \\alpha} + v \\cdot \\nabla_x {\\tilde \\alpha} + E \\cdot \\nabla_v {\\tilde \\alpha} ).\n\\end{equation}\nThen by direct computation we get \n \\begin{equation} \\label{seqforc1fixedpotential} \\begin{split}\n \\bigg \\{ & \\partial_t + v\\cdot \\nabla_x + E \\cdot \\nabla_v + \\nu_{\\varpi} \\bigg\\} ( e^{-\\varpi \\langle v \\rangle t } {\\tilde \\alpha} \\partial f^{})\n\\\\ = & e^{-\\varpi \\langle v \\rangle t } {\\tilde \\alpha} \\left( \\partial \\Gamma_{\\text{gain}} (f,f) - \\partial v \\cdot \\nabla_x f^{} - \\partial E \\cdot \\nabla_v f^{} - \\partial ( \\frac{v}{2} \\cdot E ) f^{} - \\partial (\\nu (\\sqrt \\mu f ) ) f^{} \\right)\n\\\\ := & \\mathcal{N}(t,x,v).\n\\end{split} \\end{equation}\n\n\n\nIn order to deal with the diffuse boundary condition \\eqref{diffuse_BC}, we define the stochastic (diffuse) cycles as $(t^0,x^0,v^0) = (t,x,v)$,\n\\begin{equation} \\label{diffusecycles} \\begin{split}\n& t^1 = t - t_{\\mathbf{b}}(t,x,v), \\, x^1 = x_{\\mathbf{b}}(t,x,v) = X(t - t_{\\mathbf{b}}(t,x,v);t,x,v), \n\\\\ & v_b^0 = V(t - t_{\\mathbf{b}}(t,x,v);t,x,v) = v_{\\mathbf{b}}(t,x,v),\n\\end{split} \\end{equation}\n and $v^1 \\in \\mathbb R^3$ with $n(x^1) \\cdot v^1 > 0$. \nFor $l\\ge1$, define\n\\[ \\begin{split}\n& t^{l+1} = t^l - t_{\\mathbf{b}}(t^l,x^l,v^l), x^{l + 1 } = x_{\\mathbf{b}}(t^l,x^l,v^l), \n\\\\ & v_b^l = v_{\\mathbf{b}}(t^l,x^l,v^l), \n\\end{split} \\]\nand $v^{l+1} \\in \\mathbb R^3 \\text{ with } n(x^{l+1}) \\cdot v^{l+1} > 0$.\nAlso, define \n\\[\nX^l(s) = X(s;t^l,x^l,v^l), \\, V^l(s) = V(s;t^l,x^l,v^l),\n\\]\n so $X(s) = X^0(s), V(s) = V^0(s)$. 
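\nFor later reference we note that the normalization constant $c_\\mu$ in \\eqref{fbdry} (it also enters the measure $d\\Sigma_i^{l-1}$ in the lemma below) can be computed explicitly by a standard calculation; it coincides with the factor $\\sqrt{2\\pi}$ in \\eqref{diffuse_BC}. Taking $n(x) = e_3$ after a rotation,\n\\[\n\\int_{n(x) \\cdot u>0} \\mu(u) \\{ n(x) \\cdot u \\} \\mathrm{d} u = \\frac{1}{(2\\pi)^{3\/2}} \\int_{\\mathbb R^2} e^{- \\frac{|u_\\parallel|^2}{2}} \\mathrm{d} u_\\parallel \\int_0^\\infty u_3 e^{- \\frac{u_3^2}{2}} \\mathrm{d} u_3 = \\frac{2\\pi}{(2\\pi)^{3\/2}} = \\frac{1}{\\sqrt{2\\pi}},\n\\]\nso $c_\\mu = \\sqrt{2\\pi}$; in particular $\\mu$ itself satisfies the diffuse boundary condition.\n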
We have the following lemma.\n\n\n\n\\begin{lemma}[Lemma $12$ in \\cite{VPBEP}] \\label{expandtrajl}\n\n\n\nIf $t^1 < 0$, then\n\\begin{equation} \\label{C1trajectoryt1less0}\n\\begin{split}\n& e^{-\\varpi \\langle v \\rangle t } \\tilde \\alpha | \\partial f (t,x,v) |\n\\\\ & \\lesssim \\tilde \\alpha(0,X^0(0), V^0(0) ) \\partial f (0, X^0(0) , V^0(0) ) + \\int_0^t \\mathcal N(s,X^0(s), V^0(s) ) ds.\n\\end{split} \\end{equation}\n\n\nIf $t^1 > 0$, then\n\\begin{equation} \\label{C1trajectory}\n\\begin{split}\n& e^{-\\varpi \\langle v \\rangle t } \\tilde \\alpha | \\partial f (t,x,v) |\n\\\\ \\lesssim & e^{-\\frac{\\vartheta}{2} |v_{\\mathbf{b}}^0| ^2} P(\\| w_\\vartheta f_0 \\|_\\infty) + \\int_{t^1}^t \\mathcal N(s, X^0(s), V^0(s) ) ds\n\\\\ & + \\sqrt {\\mu (v_{\\mathbf{b}}^0) } \\langle v_{\\mathbf{b}}^0 \\rangle^2 \\int_{\\prod_{j=1}^{l-1} \\mathcal V_j} \\sum_{i=1}^{l-1} \\textbf{1}_{\\{t^{i+1} < 0 < t^i \\}} |\\tilde \\alpha \\partial f (0,X^{i}(0), V^{i}(0)) | \\, d \\Sigma_{i}^{l-1}\n\\\\ & + \\sqrt {\\mu (v_{\\mathbf{b}}^0) } \\langle v_{\\mathbf{b}}^0 \\rangle^2 \\int_{\\prod_{j=1}^{l-1} \\mathcal V_j} \\sum_{i=1}^{l-1} \\textbf{1}_{\\{t^{i+1} < 0 < t^i \\}} \\int_0^{t^i} \\mathcal N(s, X^i(s), V^i(s) ) ds \\, d \\Sigma_{i}^{l-1}\n\\\\ & + \\sqrt {\\mu (v_{\\mathbf{b}}^0) } \\langle v_{\\mathbf{b}}^0 \\rangle^2 \\int_{\\prod_{j=1}^{l-1} \\mathcal V_j} \\sum_{i=1}^{l-1} \\textbf{1}_{\\{t^{i+1} > 0 \\}} \\int_{t^{i+1}}^{t^i} \\mathcal N(s, X^i(s), V^i(s) ) ds \\, d \\Sigma_{i}^{l-1}\n\\\\ & + \\sqrt {\\mu (v_{\\mathbf{b}}^0) } \\langle v_{\\mathbf{b}}^0 \\rangle^2 \\int_{\\prod_{j=1}^{l-1} \\mathcal V_j} \\sum_{i=2}^{l-1} \\textbf{1}_{\\{t^{i} > 0 \\}} e^{-\\frac{\\vartheta}{2} |v_{\\mathbf{b}}^{i-1}| ^2} P(\\| w_\\vartheta f_0 \\|_\\infty) \\, d \\Sigma_{i-1}^{l-1}\n\\\\ & + \\sqrt {\\mu (v_{\\mathbf{b}}^0) } \\langle v_{\\mathbf{b}}^0 \\rangle^2 \\int_{\\prod_{j=1}^{l-1} \\mathcal V_j} \\textbf{1}_{\\{t^{l} > 0 \\}} e^{-\\varpi \\langle v_{\\mathbf{b}}^{l-1} \\rangle t^l } \\tilde \\alpha (t^l,x^l, v_{\\mathbf{b}}^{l-1}) | \\partial f (t^l,x^l,v_{\\mathbf{b}}^{l-1}) | d \\Sigma_{l-1}^{l-1},\n\\end{split} \\end{equation}\nwhere $\\mathcal V_j = \\{ v^j \\in \\mathbb R^3: n(x^j ) \\cdot v^j > 0 \\}$,\nand \n\\[ \\begin{split}\nd \\Sigma_i^{l-1} = & \\{\\prod_{j=i+1}^{l-1} \\mu(v^j) c_\\mu |n(x^j ) \\cdot v^j | dv^j \\} \n\\{ e^{\\varpi \\langle v^i \\rangle t^i } \\mu^{1\/4}(v^i) \\langle v^i \\rangle d v^i \\}\n\\\\ & \\{\\prod_{j=1}^{i-1} \\sqrt{\\mu(v_{\\mathbf{b}}^j ) } \\langle v_{\\mathbf{b}}^j \\rangle \\mu^{1\/4}(v^j ) \\langle v^j \\rangle e^{\\varpi \\langle v^j \\rangle t^j } d v^j\\},\n\\end{split} \\]\nwhere $c_\\mu$ is the constant that $ \\int_{\\mathbb R^3 } \\mu(v^j) c_\\mu |n(x^j ) \\cdot v^j | dv^j = 1$.\n\\end{lemma} \n\nThe following lemma is necessary for us to establish Theorem \\ref{WlinftyVPBthm}.\n\n\\begin{lemma}\nIf $(F, \\phi)$ solves (\\ref{phiC}), write $f = \\frac{F}{\\sqrt \\mu}$, then\n\\begin{equation} \\label{linfinitybdofpotential}\n\\| \\phi_F(t) \\|_{C^{1,1-\\delta} (\\Omega) } \\lesssim_{\\delta,\\Omega} \\| w_\\vartheta f(t) \\|_{L^\\infty(\\bar \\Omega \\times \\mathbb R^3)}, \\text{ for any } 0< \\delta <1,\n\\end{equation}\nand\n\\begin{equation} \\label{c2bdofpotential}\n\\| \\nabla ^2 \\phi_F(t) \\|_{L^\\infty(\\Omega)} \\lesssim \\| w_\\vartheta f(t) \\|_{L^\\infty(\\bar \\Omega \\times \\mathbb R^3)} + \\| e^{-\\varpi \\langle v \\rangle t } \\tilde \\alpha \\nabla_{x} f (t) \\|_{L^\\infty(\\bar \\Omega \\times \\mathbb 
R^3)}.\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\nEstimate (\\ref{linfinitybdofpotential}) follows directly from the elliptic estimate and Morrey's inequality.\nNext we show (\\ref{c2bdofpotential}).\nBy the Schauder estimate, we have, for $p >3$ and $\\Omega \\subset \\mathbb R^3$,\n\\[\n\\| \\nabla^2 \\phi_F(t) \\|_{L^\\infty(\\Omega)} \\le \\| \\phi_F \\|_{C^{2, 1 - \\frac{3}{p}}(\\Omega)} \\lesssim_{p,\\Omega} \\| \\int_{\\mathbb R^3} f (t) \\sqrt \\mu dv \\|_{C^{0, 1 - \\frac{3}{p}}(\\Omega)}.\n\\]\nThen, by Morrey's inequality $W^{1,p} \\subset C^{0, 1 -\\frac{3}{p}} $ with $p >3$ for a domain $\\Omega \\subset \\mathbb R^3$ with a smooth boundary $\\partial \\Omega$, we derive\n\\[ \\begin{split}\n \\| & \\int_{\\mathbb R^3} f (t) \\sqrt \\mu dv \\|_{C^{0, 1 - \\frac{3}{p}}} \\lesssim \\| \\int_{\\mathbb R^3} f (t) \\sqrt \\mu dv \\|_{W^{1,p} }\n \\\\ \n & \\lesssim \\| w_\\vartheta f(t) \\|_\\infty + \\| e^{-\\varpi \\langle v \\rangle t } \\tilde \\alpha \\nabla_{x} f (t) \\|_\\infty \\| \\int_{\\mathbb R^3 } e^{ \\varpi \\langle v \\rangle t } \\sqrt \\mu \\frac{1}{\\tilde \\alpha} dv \\|_{L^p(\\Omega ) }.\n\\end{split} \\]\n\n\nIt suffices to show that for some $\\beta > 1$, \n\\begin{equation} \\label{int1alphaLpbdd}\n\\| \\int_{\\mathbb R^3 } e^{-\\frac{1}{8} |v|^2 } \\frac{1}{\\tilde \\alpha^\\beta} dv \\|_{L^p(\\Omega ) } < \\infty.\n\\end{equation}\n Since $\\tilde \\alpha$ is bounded from below when $x$ is away from the boundary of $\\Omega$, it suffices to consider only the case when $x$ is close enough to $\\partial \\Omega$. From a direct computation (see \\cite{VPBEP}), we get\n\\begin{equation} \\label{int1overalphadv}\n \\int_{\\mathbb R^3 } e^{-\\frac{1}{8} |v|^2 } \\frac{1}{\\tilde \\alpha^\\beta} dv \\lesssim \\frac{1}{(\\xi(x)^2 - 2 E(t,\\bar x ) \\cdot \\nabla \\xi(\\bar x) \\xi(x) )^{\\frac{\\beta -1 }{2}}} \\lesssim \\frac{1}{ |\\xi(x) | ^{\\frac{\\beta -1 }{2}}}.\n\\end{equation}\nSince $\\xi$ is $C^2$, we have \n\\[\n\\int_{d(x,\\partial \\Omega) \\ll 1 } \\frac{1}{ |\\xi(x) | ^{\\frac{(\\beta -1 )p}{2}}} dx \\lesssim \\int_{d(x,\\partial \\Omega) \\ll 1 } \\frac{1}{ |x - \\bar x | ^{\\frac{(\\beta -1 )p}{2}}} dx.\n\\]\nNow from (\\ref{convexity_eta}), \n\\[\n\\int_{\\Omega \\cap B(p;\\delta_2 ) } \\frac{1}{|x - \\bar x | ^{\\frac{(\\beta -1 )p}{2}}} dx \\lesssim \\int_{ |x_n | < \\delta_1 } \\frac{1}{|x_n|^{\\frac{(\\beta -1 )p}{2}}} d x_n < \\infty,\n\\]\nif we pick $\\beta < \\frac{2}{p}+1 $. Since $\\partial \\Omega$ is compact, we can cover $\\partial \\Omega$ with finitely many such balls, and therefore we get (\\ref{int1alphaLpbdd}).\n\n\n\\end{proof}\n\n\n\\begin{proof} [Proof of Theorem \\ref{WlinftyVPBthm}]\nFor the sake of simplicity we only show the a priori estimate. See \\cite{CKL} for the construction of the sequence of solutions and the passage to the limit.\n\n\n\n\nThe proof of \\eqref{linfinitybddsolution} for $f$ satisfying \\eqref{VPBsq1}, \\eqref{VPBsq3}, and \\eqref{fbdry} is standard. We refer to Theorem $4$ in \\cite{VPBEP}.\n\n\n\nFirst, from \\eqref{VPBsq3} and the fact that $ \\int_{\\mathbb R^3 } \\sqrt \\mu f dv \\ge 0$, we apply Lemma \\ref{Hopf} to get\n\\begin{equation} \\label{hopfm}\n - \\frac{ \\partial \\phi (t,x)}{\\partial n} \\ge c \\iint_{\\Omega \\times \\mathbb R^3} \\sqrt \\mu f(t,x,v) \\delta(x) dv dx,\n\\end{equation}\nfor some $c$ depending only on $\\Omega$, where $\\delta(x) := d(x, \\partial \\Omega)$. 
\n\nDenote\n\\[\n \\iint_{\\Omega \\times \\mathbb R^3} F_0(x,v) \\delta(x) dv dx = c_{E_0}.\n\\]\nThen $\\int_0^T \\iint_{\\Omega \\times \\mathbb R^3 }\\delta(x) \\times \\eqref{Boltzmann_E} \\ dv dx dt$ gives\n\\[ \\begin{split}\n& \\iint_{\\Omega \\times \\mathbb R^3}F(T,x,v) \\delta(x) dv dx \n \\\\ & = \\iint_{\\Omega \\times \\mathbb R^3} F_0(x,v) \\delta(x) dv dx + \\int_0^T \\iint_{\\Omega \\times \\mathbb R^3 } F v \\cdot \\nabla_x \\delta(x) dv dx dt.\n\\end{split} \\]\nTogether with \\eqref{linfinitybddsolution} and \\eqref{hopfm} we deduce\n\\begin{equation} \\label{signphim}\n - \\frac{ \\partial \\phi (t,x)}{\\partial n} \\ge c \\iint_{\\Omega \\times \\mathbb R^3}F(t,x,v) \\delta(x) dv dx > \\frac{c c_{E_0}}{2},\n\\end{equation}\nas long as $ T \\lesssim \\frac{ c_{E_0}}{2 M}$.\n\n\n\n\n\nNext, we investigate \\eqref{seqforc1fixedpotential}. Since\n\\[ \\begin{split}\n w_{\\vartheta } \\Gamma_{gain}(\\partial f,f ) \n \\lesssim \\| e^{2\\vartheta' |v|^2 } f \\|_\\infty \\int_{\\mathbb R^3 } \\frac{ e^{-C_{\\vartheta'} |u -v | ^2 } } {|u -v |^{2 -\\kappa} } |e^{\\vartheta ' |u|^2 } \\partial f(t,x,u ) | du,\n\\end{split} \\]\nand\n\\[ \\begin{split}\nw_{\\vartheta } \\nu(\\sqrt{\\mu} \\partial f ) f \n\\lesssim \\| e^{2 \\vartheta' |v|^2 } f \\|_\\infty \\int_{\\mathbb R^3 } \\frac{ e^{-C_{\\vartheta'} |u -v | ^2 } } {|u -v |^{2 -\\kappa} } | \\partial f(t,x,u ) | du.\n\\end{split} \\]\nThus from (\\ref{linfinitybddsolution}) we have the following bound for $\\mathcal N$:\n\n\\begin{equation} \\label{Nbdd}\\begin{split}\n| \\mathcal N(t,x,v) | \n \\lesssim & (1 + \\| \\nabla^2 \\phi \\|_\\infty) [ P(\\| w_\\vartheta f_0 \\|_\\infty ) + |w_{\\vartheta'} e^{-\\varpi \\langle v \\rangle t } \\tilde \\alpha \\partial f(t,x,v) | ]\n\\\\ & + \\| w_\\vartheta f_0 \\|_\\infty e^{-\\varpi \\langle v \\rangle t } \\tilde \\alpha(t,x,v) \\int_{\\mathbb R^3 } \\frac{ e^{-C_\\vartheta |u -v | ^2 } } {|u -v |^{2 -\\kappa} } | e^{\\vartheta ' |u|^2 }\\partial f(t,x,u ) | du.\n\\end{split} \\end{equation}\n\nRecall the definition of $\\nu_\\varpi$ in \\eqref{nuvarpi}, note that from the velocity lemma (\\ref{vlemma}), and (\\ref{signphim}) we have\n\\[ \\begin{split}\n& \\tilde \\alpha^{-1} ( \\partial_t \\tilde \\alpha + v \\cdot \\nabla_x \\tilde \\alpha -\\nabla \\phi \\cdot \\nabla_v \\tilde \\alpha ) \n \\\\ \\lesssim & ( \\| \\nabla \\phi \\|_\\infty + \\| \\nabla^2 \\phi \\|_\\infty ) \\langle v \\rangle\n \\\\ \\lesssim & (\\| w_{\\vartheta'} f(t) \\|_\\infty + \\| e^{-\\varpi \\langle v \\rangle t } \\tilde \\alpha \\nabla_{x} f (t) \\|_\\infty)\\langle v \\rangle\n\\\\ \\lesssim & (P(\\| w_{\\vartheta} f_0\\|_\\infty ) + \\| \\tilde \\alpha \\partial f_0 \\|_\\infty)\\langle v \\rangle.\n \\end{split} \\]\nTherefore we have\n\\begin{equation} \\label{largeomegabar}\n\\nu_\\varpi \\ge \\frac{\\varpi}{2} \\langle v \\rangle,\n\\end{equation}\nonce we choose $\\varpi \\gg 1$ large enough.\n \nFor $t^1 < 0$, using the Duhamel's formulation we have from (\\ref{seqforc1fixedpotential})\n\n\\begin{equation} \\label{}\n\\begin{split}\n w_{\\vartheta'} & e ^{-\\varpi \\langle v \\rangle t } \\tilde \\alpha | \\partial f (t,x,v) |\n\\\\ \\le & e^{ -\\int_s^t \\nu_{\\varpi} (\\tau, X(\\tau), V(\\tau) d\\tau} e^{\\vartheta' |V(0)|^2 } \\tilde \\alpha \\partial f (0, X(0) , V(0) ) \n\\\\ & + \\int_0^t e^{ -\\int_s^t \\nu_{\\varpi} (\\tau, X(\\tau), V(\\tau) d\\tau } \\mathcal N(s,X(s), V(s) ) ds.\n\\end{split} \n\\end{equation}\nThus by (\\ref{Nbdd}) we have\n\\[ \\begin{split}\n \\sup_{0 \\le 
t \\le T} & \\| \\textbf{1}_{ \\{ t^1 < 0 \\}} e^{-\\varpi \\langle v \\rangle t } w_{\\vartheta'} \\tilde \\alpha \\partial f (t,x,v) \\|_\\infty\n\\\\ \\le & \\sup_{0 \\le t \\le T} \\| e^{ -\\int_0^t \\nu_{\\varpi} (\\tau, X(\\tau), V(\\tau) d\\tau} e^{\\vartheta' |V(0)|^2 } \\tilde \\alpha \\partial f (0, X(0) , V(0) ) \n\\\\ & + \\int_0^t e^{ -\\int_s^t \\nu_{\\varpi} (\\tau, X(\\tau), V(\\tau) d\\tau } \\mathcal N(s,X(s), V(s) ) ds \\|_\\infty\n \\\\ \\le & \\| w_{\\vartheta'} \\tilde \\alpha \\partial f_0 \\|_\\infty + P(\\| w_{\\vartheta} f_0 \\|_\\infty) \\sup_{0 \\le t \\le T} \\| w_{\\vartheta'} e^{-\\varpi \\langle v \\rangle t } \\tilde \\alpha \\partial f (t,x,v) \\|_\\infty\n \\\\ & +T (1 + \\| \\nabla^2 \\phi \\|_\\infty) [ P(\\| w_{\\vartheta} f_0 \\|_\\infty) + \\sup_{0 \\le t \\le T} \\| w_{\\vartheta'} e^{-\\varpi \\langle v \\rangle t } \\tilde \\alpha \\partial f (t,x,v) \\|_\\infty] \n \\\\ & \\times \\int_0^t \\int_{\\mathbb R^3} e^{ -\\int_s^t\\frac{\\varpi}{2} \\langle V(\\tau;t,x,v) \\rangle d\\tau } \\frac{e^{-\\varpi \\langle (s;t,x,v) \\rangle s }}{e^{-\\varpi \\langle u \\rangle s}} \\frac{e^{-C_\\vartheta |V(s)-u|^2 }}{|V(s) -u |^{2 - \\kappa} } \\frac{\\tilde \\alpha(s,X(s) ,V(s))}{\\tilde \\alpha(s,X(s) ,u)} du ds.\n\\end{split} \\]\nNow since\n$\n \\langle u \\rangle - \\langle V(s;t,x,v) \\rangle \\le 2 \\langle u - V(s;t,x,v) \\rangle,\n$\nwe have\n$\n \\frac{e^{-\\varpi \\langle (s;t,x,v) \\rangle s }}{e^{-\\varpi \\langle u \\rangle s}} e^{-C_\\vartheta |V(s)-u|^2 }\n\\lesssim e^{ -\\frac{ C_\\vartheta |V(s) - u | ^2}{2}}.\n$\nThus\n\\begin{equation} \\label{genlemma1toget}\n\\begin{split}\n\\int_0^t & \\int_{\\mathbb R^3} e^{ -\\int_s^t\\frac{\\varpi}{2} \\langle V(\\tau;t,x,v) \\rangle d\\tau } \\frac{e^{-\\varpi \\langle (s;t,x,v) \\rangle s }}{e^{-\\varpi \\langle u \\rangle s}} \\frac{e^{-C_\\vartheta |V(s)-u|^2 }}{|V(s) -u |^{2 - \\kappa} } \\frac{\\tilde \\alpha(s,X(s) ,V(s))}{\\tilde \\alpha(s,X(s) ,u)} du ds\n\\\\ \\lesssim & \\int_0^t \\int_{\\mathbb R^3} e^{ -\\int_s^t\\frac{\\varpi}{2} \\langle V(\\tau;t,x,v) \\rangle d\\tau } \\frac{e^{-\\frac{C_\\vartheta}{2} |V(s)-u|^2 }}{|V(s) -u |^{2 - \\kappa} } \\frac{\\tilde \\alpha(s,X(s) ,V(s))}{\\tilde \\alpha(s,X(s) ,u)} du ds.\n\\end{split} \n\\end{equation}\nNote that, for any $\\beta > 1$,\n$\n\\frac{1}{\\tilde \\alpha(x, X(s) ,u ) } \\lesssim \\frac{1}{ (\\tilde \\alpha (x, X(s) , u ) )^\\beta } + 1.\n$\nSo from (\\ref{signphim}) we can let $1 < \\beta \\le 2$, and apply the nonlocal-to-local estimate (\\ref{nonlocaltolocal}) to (\\ref{genlemma1toget}) to have\n\\begin{equation}\n\\begin{split}\n \\int_0^t & \\int_{\\mathbb R^3} e^{ -\\int_s^t\\frac{\\varpi}{2} \\langle V(\\tau;t,x,v) \\rangle d\\tau } \\frac{e^{-\\varpi \\langle (s;t,x,v) \\rangle s }}{e^{-\\varpi \\langle u \\rangle s}} \\frac{e^{-C_\\vartheta |V(s)-u|^2 }}{|V(s) -u |^{2 - \\kappa} } \\frac{\\tilde \\alpha(s,X(s) ,V(s))}{\\tilde \\alpha(s,X(s) ,u)} du ds\n\\\\ \\lesssim & e^{ C ( \\| \\nabla \\phi \\|_\\infty^2 + \\| \\nabla^2 \\phi \\|_\\infty )} \\left( \\frac{ \\delta^{\\frac{3 - \\beta}{2}} (\\tilde \\alpha(t,x,v) )^{3 - \\beta } }{ (|v|^2 + 1 )^{\\frac{3 -\\beta}{2}} } + \\frac{ (|v| + 1 )^{\\beta - 1} (\\tilde \\alpha(t,x,v) )^{2 - \\beta }}{ \\delta ^{\\beta - 1} \\varpi \\langle v \\rangle } \\right)\n\\\\ \\lesssim & e^{ C ( \\| \\nabla \\phi \\|_\\infty^2 + \\| \\nabla^2 \\phi \\|_\\infty ) } \\left( \\delta^{\\frac{3 - \\beta}{2}} + \\frac{1}{\\delta ^{\\beta - 1} \\varpi} 
\\right),\n\\end{split}\n\\end{equation}\nwhere we used $\\tilde \\alpha(s,X(s) ,V(s)) \\lesssim e^{ C ( \\| \\nabla \\phi \\|_\\infty^2 + \\| \\nabla^2 \\phi \\|_\\infty ) } \\tilde \\alpha(t,x,v)$.\n\nSimilarly, for $t^1(t,x,v) \\ge 0 $, we again apply the nonlocal-to-local estimate (\\ref{nonlocaltolocal}) to get\n\\[\n\\begin{split}\n| & \\textbf{1}_{ \\{ t^1 > 0 \\}} w_{\\vartheta'} e^{-\\varpi \\langle v \\rangle t } \\tilde \\alpha \\partial f (t,x,v) |\n\\\\ \\lesssim & C_l e^{Cl t^2 } \\left( \\delta^{\\frac{3 - \\beta}{2}} + \\frac{1}{\\delta ^{\\beta - 1} \\varpi} \\right) P(\\| w_{\\vartheta} f_0 \\|_\\infty) \\max_{ 0 \\le i \\le l-1 } e^{ C ( \\| \\nabla \\phi \\|_\\infty^2 + \\| \\nabla^2 \\phi \\|_\\infty )} \n\\\\ & \\times \\sup_{0 \\le t \\le T} \\| w_{\\vartheta'} e^{-\\varpi \\langle v \\rangle t } \\tilde \\alpha \\partial f (t,x,v) \\|_\\infty\n\\\\ & + T(1 + \\|\\nabla^2 \\phi \\|_\\infty) \\sup_{0 \\le t \\le T} \\| w_{\\vartheta'} e^{-\\varpi \\langle v \\rangle t } \\tilde \\alpha \\partial f (t,x,v) \\|_\\infty \n\\\\ + & T l (Ce^{Ct^2 } )^l (1 + \\|\\nabla^2 \\phi \\|_\\infty) \\sup_{0 \\le t \\le T} \\| w_{\\vartheta'} e^{-\\varpi \\langle v \\rangle t } \\tilde \\alpha \\partial f (t,x,v) \\|_\\infty \n\\\\ + & Tl (Ce^{Ct^2 } )^l (1 + \\|\\nabla^2 \\phi \\|_\\infty )P(\\| w_{\\vartheta} f_0 \\|_\\infty) \n+ l (Ce^{Ct^2 } )^l \\| \\tilde \\alpha \\partial f_0 \\|_\\infty + P(\\| w_{\\vartheta} f_0 \\|_\\infty) \n\\\\ & + C \\left( \\frac{1}{2} \\right) ^l \\sup_{0 \\le t \\le T} \\|w_{\\vartheta'} e^{-\\varpi \\langle v \\rangle t} \\tilde \\alpha \\partial f (t,x,v) \\|_\\infty.\n\\end{split}\n\\]\n\n\nFinally from \\eqref{c2bdofpotential}, we can choose a large $l$ then large $C$ then small $\\delta$ then large $\\varpi$ and finally small $T$ to conclude \n\\[ \\begin{split}\n \\sup_{0 \\le t \\le T} \\| e^{-\\varpi \\langle v \\rangle t } \\tilde \\alpha \\partial f (t,x,v) \\|_\\infty\n\\le \\frac{C_1}{2} \\left( \\| w_{\\vartheta } \\tilde \\alpha \\partial f_0 \\|_\\infty + P ( \\| w_\\vartheta f_0 \\|_\\infty ) \\right)\n\\end{split} \\]\nThis proves (\\ref{weightedC1bddsolution}).\n\nNext we prove \\eqref{L3L1plusbddsolution}. 
Consider taking $\\nabla_v$ derivative of (\\ref{VPBsq1}) and adding the weight function $e^{-\\varpi \\langle v \\rangle t}$, we get\n\\begin{equation} \\begin{split}\\label{eqtn_g_v}\n\t\t\t&[\\partial_t + v\\cdot \\nabla_x - \\nabla_x \\phi \\cdot \\nabla_v + \\frac{v}{2} \\cdot \\nabla_x \\phi + \\varpi \\langle v \\rangle - \\frac{v}{ \\langle v \\rangle } \\varpi t \\cdot \\nabla_x \\phi + \\nu(\\sqrt \\mu f ) ] (e^{-\\varpi \\langle v \\rangle t} \\nabla_v f )\n\t\t\t\\\\ =& e^{-\\varpi \\langle v \\rangle t} \\left( - \\nabla_v \\nu(\\sqrt \\mu f ) f - \\nabla_x f - \\frac{1}{2} \\nabla_x\\phi f + \\nabla_v \\Gamma_{\\text{gain}}(f,f) \\right),\n\t\t\\end{split}\\end{equation}\nwith the boundary bound for $(x,v) \\in\\gamma_-$\n\t\t\\begin{equation} \\label{bdry_g_v}\n\t\t\\big|\\nabla_v f \\big| \\lesssim |v| \\sqrt{\\mu} \\int_{n \\cdot u>0} |f| \\sqrt{\\mu} \\{n \\cdot u \\} \\mathrm{d} u \\ \\ \\text{on } \\ \\gamma_-.\n\t\t\\end{equation}\nAnd \n\\[\n \\frac{v}{2} \\cdot \\nabla_x \\phi + \\varpi \\langle v \\rangle - \\frac{v}{ \\langle v \\rangle } \\varpi t \\cdot \\nabla_x \\phi + \\nu(\\sqrt \\mu f ) > \\frac{\\varpi }{2} \\langle v \\rangle,\n \\]\n for $\\varpi \\gg 1$.\n\n\t\tUsing the Duhamel's formulation, from (\\ref{eqtn_g_v}) we obtain the following bound along the characteristics\n\t\t\\begin{eqnarray}\n\t\t& \\,\\,\\, & |e^{-\\varpi \\langle v \\rangle t } \\nabla_v f(t,x,v)| \\nonumber\n\t\t\\\\\n\t\t& \\le & \\mathbf{1}_{ \\{ t_{\\mathbf{b}}(t,x,v)> t \\}} \n\t\t e^{ -\\int_0^t - \\frac{C}{2}\\langle V(\\tau) \\rangle d\\tau } |\\nabla_v f(0,X(0;t,x,v), V(0;t,x,v))|\\label{g_initial}\\\\\n\t\t& + & \\ \\mathbf{1}_{ \\{ t_{\\mathbf{b}}(t,x,v)0} \n\t\t| f(t-t_{\\mathbf{b}}, x_{\\mathbf{b}}, u) |\\sqrt{\\mu} \\{n(x_{\\mathbf{b}}) \\cdot u\\} \\mathrm{d} u\\label{g_bdry}\\\\\n\t\t& +& \\int^t_{\\max\\{t-t_{\\mathbf{b}}, 0\\}} \n\t\t e^{ -\\int_s^t - \\frac{\\varpi}{2}\\langle V(\\tau) \\rangle d\\tau } e^{-\\varpi \\langle V(s) \\rangle s } \n\t\t |\\nabla_x f(s, X(s),V(s))|\n\t\t\\mathrm{d} s\\label{g_x}\\\\\n\t\t& + &\\int^t_{\\max\\{t-t_{\\mathbf{b}}, 0\\}} \\label{g_Gamma}\n\t\t(1+ \\| w_{\\vartheta'} f \\|_\\infty \n\t\n\t\n\t\te^{ -\\int_s^t - \\frac{\\varpi}{2}\\langle V(\\tau) \\rangle d\\tau } e^{-\\varpi \\langle V(s) \\rangle s } \n\t\t\\\\ \\notag && \\,\\,\\, \\times \\int_{\\mathbb{R}^3} \\frac{e^{-C_{\\vartheta'} |V(s) - u |^2 }}{ |V(s) - u | ^{2 -\\kappa } } \\nabla_v f(s,X(s),u)| \\mathrm{d} u \n\t\t\\mathrm{d} s\\label{g_K}\\\\\n\t\t& + & \\label{nablaphigint}\n\t\t\\| w_{\\vartheta'} f\\|_\\infty \\int^t_{\\max\\{t-t_{\\mathbf{b}}, 0\\}} \n\t\t e^{ -\\int_s^t - \\frac{\\varpi}{2}\\langle V(\\tau) \\rangle d\\tau } e^{-\\varpi \\langle V(s) \\rangle s } e^{-\\vartheta' |V(s) |^2 }\n\t\t \\\\&& \\notag \\times |\\nabla_x \\phi (s, X(s;t,x,v))| \n\t\t\\mathrm{d} s. 
\\label{g_phi}\n\t\t\\end{eqnarray}\nWe first have\n\t\t\\begin{equation} \\begin{split}\\label{est_g_initial}\n\t\t\t&\\| (\\ref{g_initial})\\|_{L^3_x L^{1+ \\delta}_v}\\\\\n\t\t\t\\lesssim & \\left(\n\t\t\t\\int_{\\Omega}\n\t\t\t\\left(\\int_{\\mathbb{R}^3} |e^{\\vartheta' |V(0) | ^2 } \\nabla_v f(0,X(0 ), V(0 ))|^3 \n\t\t\t\\right)\n\t\t\t\\left(\n\t\t\t\\int_{\\mathbb{R}^3} e^{-(1+ \\delta) \\frac{3}{2-\\delta} \\vartheta' |V(0) | ^2 } \\mathrm{d} v \\right)^{\\frac{2-\\delta}{1+ \\delta}}\n\t\t\t\\right)^{1\/3} \\\\\n\t\t\t\\lesssim & \\\n\t\t\t\\left(\\iint_{\\Omega \\times \\mathbb{R}^3} |e^{\\vartheta' |V(0) | ^2 } \\nabla_v f(0,X(0;t,x,v), V(0;t,x,v))|^3 \\mathrm{d} v \\mathrm{d} x\\right)^{1\/3}\n\t\t\t\\\\\n\t\t \\lesssim & \\ \\| w_{\\vartheta'} \\nabla_v f (0) \\|_{L^3_{x,v}},\n\t\t\\end{split}\n\t\t\\end{equation}\n\t\twhere we have used a change of variables $(x,v) \\mapsto (X(0;t,x,v), V(0;t,x,v))$.\n\t\t\n\t\t\n\t\t\n\t\t\\hide\n\t\tAlso we use $|V(0;t,x,v)| \\gtrsim |v|$ for $|v|\\gg 1$, from (\\ref{decay_phi}), and hence $\\tilde{w}(V(0;t,x,v))^{- (1+ \\delta) \\frac{3}{2-\\delta}} \\in L_v^1 (\\mathbb{R}^3)$.\\unhide\n\t\t\n\t\tClearly \n\t\t\\begin{equation} \\label{est_g_bdry}\n\t\t\\|(\\ref{g_bdry})\\|_{L^3_x L^{1+ \\delta}_v} \\lesssim \\sup_{0 \\leq s \\leq t} \\| w_{\\vartheta'} f (s) \\|_\\infty.\n\t\t\\end{equation}\n\t\t\n\t\tFrom $ \\| \\nabla_x \\phi \\|_{L^3} \\lesssim \\| \\phi \\|_{W^{2,2}_x}$ for a bounded $\\Omega \\subset \\mathbb{R}^3$, and the change of variables $(x,v) \\mapsto (X(s;t,x,v), V(s;t,x,v))$ for fixed $s\\in(\\max\\{t-t_{\\mathbf{b}},0\\},t)$,\n\t\t\\begin{equation} \n\t\t\\begin{split}\\label{est_g_phi}\n\t\t\t\\| (\\ref{nablaphigint})\\|_{L^3_x L^{1+ \\delta}_v} \n\t\t\t \\lesssim & \\| w_{\\vartheta'} f\\|_\\infty \\int^t_{\\max\\{t-t_{\\mathbf{b}},0\\}} \\| \\phi (s) \\|_{W^{2,2}_{x} } \n\t\t\t\\\\\n\t\t\t\\lesssim & \\ \\| w_{\\vartheta'} f\\|_\\infty \\int^t_{\\max\\{t-t_{\\mathbf{b}},0\\}} \\| \\int_{\\mathbb R^3} \\sqrt \\mu f(s) dv \\|_{2}.\n\t\t\t \\lesssim t \\| w_{\\vartheta'} f\\|_\\infty \\| w_{\\vartheta'} f \\|_\\infty .\n\t\t\\end{split}\n\t\t\\end{equation}\n\n\t\t\n\n\n\n\n\n\n\t\t\n\t\n\t\tNext we have from (\\ref{int1overalphadv}), for $\\frac{3\\delta}{ 2 (1+\\delta) } < 1$, equivalently $0 < \\delta < 2 $,\n\t\t{\n\t\t\t\t\\begin{equation} \\label{init_p_xf}\n\t\t\\begin{split}\n\t\t\t \\|(\\ref{g_x})\\|_{L^3_x L^{1+ \\delta}_v} \\le &\\left\\|\\left\\| \\int^t_{\\max\\{t-t_{\\mathbf{b}}, 0\\}\n\t\t\t\\nabla_x f(s,X(s),V(s)) \\mathrm{d} s\n\t\t\t\\right\\|_{L_{v}^{1+ \\delta}( \\mathbb{R}^3)}\\right\\|_{L^{3}_x}\\\\\n\t\t\t= & \\ \\left\\|\\left\\| \\int^t_{\\max\\{t-t_{\\mathbf{b}}, 0\\}} \\frac{ e^{\\vartheta' |V(s) |^2 } e^{-\\varpi \\langle V(s) \\rangle s} \\tilde \\alpha \\nabla_x f(s,X(s),V(s))}{e^{\\vartheta' |V(s) |^2 } e^{-\\varpi \\langle V(s) \\rangle s} \\tilde \\alpha}\n\t\t\t\\mathrm{d} s\n\t\t\t\\right\\|_{L_{v}^{1+ \\delta}( \\mathbb{R}^3)}\\right\\|_{L^{3}_x}\n\t\t\t\\\\\n\t\t\t\\le & \\sup_{0 \\le t \\le T} \\ \\left\\| w_{\\vartheta'} e^{-\\varpi \\langle v \\rangle t} \\tilde \\alpha \\nabla_x f \\right\\|_\\infty \n\t\t\t\\\\\n\t\t\t& \\times \\left\\|\n\t\t\t \\left\\| \\int^t_{\\max\\{t-t_{\\mathbf{b}}, 0\\}} \\frac{e^{- \\vartheta' |V(s) |^2 } e^{\\varpi \\langle V(s) \\rangle s }}{\\tilde \\alpha(s, X(s), V(s) )}\t\t\\mathrm{d} s\n\\right\\|_{L_{v}^{1+\\delta}( \\mathbb{R}^3)}\\right\\|_{L^{3}_x}\\\\\n\t\t\t\\lesssim & e^{C ( \\| \\nabla \\phi \\|_\\infty + \\| \\nabla \\phi \\|_\\infty^2 
+\\| \\nabla ^2 \\phi \\|_\\infty) }\n \\sup_{0 \\le t \\le T} \\ \\left\\| w_{\\vartheta'} e^{-\\varpi \\langle v \\rangle t } \\tilde \\alpha \\nabla_x f \\right\\|_\\infty \n\t\t\t\t\t\\\\ & \\times t \\int_{\\Omega } \\left( \\int_{\\mathbb R^3 } \\frac{e^{- \\frac{\\vartheta'}{2} |v |^2 }}{(\\tilde \\alpha (t, x, v ))^{1+\\delta}} \\mathrm{d} v \\right)^{\\frac{3}{1+\\delta}} \\mathrm{d} x\t\t\t\n\t\t\\\\ \\lesssim & t e^{C ( \\| \\nabla \\phi \\|_\\infty^2 +\\| \\nabla ^2 \\phi \\|_\\infty) } \\sup_{0 \\le t \\le T} \\ \\left\\| w_{\\vartheta'} e^{-\\varpi \\langle v \\rangle t } \\tilde \\alpha \\nabla_x f \\right\\|_\\infty.\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\end{split}\\end{equation}\n\t\t}\n\t\t \t\t\nNext, we consider (\\ref{g_Gamma}). From the computations in (\\ref{int1overalphadv}), and using the fact that $\\frac{1}{\\tilde \\alpha} \\lesssim \\frac{1}{\\tilde \\alpha^\\beta}$, we have\n{\n\\begin{equation} \\label{nonlocall3l1est} \\begin{split}\n & \\|(\\ref{g_Gamma})\\|_{L^3_x L^{1+ \\delta}_v} \n \\\\ \\le & \\left\\|\\left\\| \\int^t_{\\max\\{t-t_{\\mathbf{b}}, 0\\}}e^{ -\\int_s^t - \\frac{\\varpi}{2}\\langle V(\\tau) \\rangle d\\tau } e^{-\\varpi \\langle V(s) \\rangle s } \\right. \\right. \n\\\\ & \\left. \\left. \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\times \\int_{\\mathbb{R}^3} \\frac{e^{-C_{\\vartheta'} |V(s) - u |^2 }}{ |V(s) - u | ^{2 -\\kappa } } \\nabla_v f(s,X(s),u)| \\mathrm{d} u \\mathrm{d} s\n\t\t\t\\right\\|_{L_{v}^{1+ \\delta}( \\mathbb{R}^3)}\\right\\|_{L^{3}_x}\n\\\\ \\lesssim &e^{C \\| \\nabla \\phi \\|_\\infty } \\sup_{0 \\le t \\le T} \\ \\left\\| w_{\\vartheta'} e^{-\\varpi \\langle v \\rangle t } \\tilde \\alpha \\nabla_x f \\right\\|_\\infty \n\\\\ & \\times \\left\\|\\left\\| \\int^t_{\\max\\{t-t_{\\mathbf{b}}, 0\\}}e^{ -\\int_s^t - \\frac{\\varpi}{2}\\langle V(\\tau) \\rangle d\\tau } \n \\int_{\\mathbb{R}^3} \\frac{e^{-C_{\\vartheta'} |V(s) - u |^2 }}{ |(s) - u | ^{2 -\\kappa } } \\frac{e^{- \\frac{\\vartheta'}{2} |u|^2 } }{(\\tilde \\alpha(s,X(s),u))^\\beta} \\mathrm{d} u \\mathrm{d} s\n\t\t\t\\right\\|_{L_{v}^{1+ \\delta}( \\mathbb{R}^3)}\\right\\|_{L^{3}_x}.\n\\end{split}\n\\end{equation}\nAnd then applying the nonlocal-to-local estimate \\eqref{nonlocaltolocal} to \\eqref{nonlocall3l1est} , we conclude\n\n\\begin{equation} \n\\begin{split}\n & \\|(\\ref{g_Gamma})\\|_{L^3_x L^{1+ \\delta}_v} \n \\\\ \\lesssim &e^{C (\\| \\nabla \\phi \\|_\\infty + \\| \\nabla^2 \\phi \\|_\\infty)} \\sup_{0 \\le t \\le T} \\ \\left\\| w_{\\vartheta'} e^{- \\varpi \\langle v \\rangle t } \\tilde \\alpha \\nabla_x f \\right\\|_\\infty \n\\\\ & \\times \\left\\|\\left\\|\n\\frac{ \\delta ^{\\frac{3 -\\beta}{2} }}{(\\tilde \\alpha(t,x,v))^{\\beta -2 } (|v|^2 + 1 )^{\\frac{3 -\\beta}{2}}} + \\frac{ (|v| +1)^{\\beta -1} } { \\delta ^{\\beta -1 } \\varpi \\langle v \\rangle ( \\tilde \\alpha ( t,x,v) )^{\\beta - 1 } }\t\t\t\\right\\|_{L_{v}^{1+ \\delta}( \\mathbb{R}^3)}\\right\\|_{L^{3}_x}\n\\\\ \\lesssim &e^{C (\\| \\nabla \\phi \\|_\\infty + \\| \\nabla^2 \\phi \\|_\\infty)} \\sup_{0 \\le t \\le T} \\ \\left\\| w_{\\vartheta'} e^{-\\varpi \\langle v \\rangle t } \\tilde \\alpha \\nabla_x f \\right\\|_\\infty \n\\\\ & \\times \\left( O(\\delta^{\\frac{3-\\beta}{2}} ) + \\frac{1}{ \\delta ^{\\beta -1 } \\varpi} \\left\\|\\left\\|\n \\frac{ 1} { \\langle v \\rangle^{2 -\\beta} ( \\tilde \\alpha ( t,x,v) )^{\\beta - 1 } }\t\t\t\\right\\|_{L_{v}^{1+ \\delta}( \\mathbb{R}^3)}\\right\\|_{L^{3}_x} \\right)\n\\\\ \\lesssim & 
C(\\delta^{\\frac{3-\\beta}{2}} + \\frac{1}{ \\delta ^{\\beta -1 } \\varpi} )e^{C (\\| \\nabla \\phi \\|_\\infty + \\| \\nabla^2 \\phi \\|_\\infty)} \\sup_{0 \\le t \\le T} \\ \\left\\| w_{\\vartheta'} e^{-\\varpi \\langle v \\rangle t } \\tilde \\alpha \\nabla_x f \\right\\|_\\infty,\n\\end{split} \\end{equation}\n}\nfor $\\beta$ satisfies $\\frac{ (\\beta-1)(1+\\delta) -1}{2} \\frac{3}{1+\\delta} < 1$, which is equivalent to $\\beta< \\frac{5}{3} + \\frac{1}{1+\\delta}$. Therefore any $1 < \\beta < \\frac{5}{3}$ would work.\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\tCollecting terms from (\\ref{g_initial})-(\\ref{nablaphigint}), and (\\ref{est_g_initial}), (\\ref{est_g_bdry}), (\\ref{est_g_phi}), (\\ref{init_p_xf}), (\\ref{nonlocall3l1est}), we derive\n\t\t\\begin{equation} \\begin{split}\\label{bound_nabla_v_g}\n\t\t\t& \\sup_{0 \\leq s \\leq t}\\| e^{-\\varpi \\langle v \\rangle t } \\nabla_vf(s) \\|_{L^3_xL^{1+ \\delta}_v} \\\\\n\t\\lesssim & \\| w_{\\vartheta'} \\nabla_v f (0) \\|_{L^3_{x,v}} + \\| w_{\\vartheta'} f \\|_\\infty )^2 + \\| w_{\\vartheta'} f \\|_\\infty\n\t\\\\ < & \\infty.\n\t\t\t\\end{split} \\end{equation}\nThis proves (\\ref{L3L1plusbddsolution}) and conclude Theorem \\ref{WlinftyVPBthm}.\n\n\n\n\n\n\\end{proof}\n\n\\section{Acknowledgements}This work was supported in part by National Science Foundation under Grant No. 1501031, Grant No. 1900923, and the Wisconsin Alumni Research Foundation.\n\n\n\\input{references}\n\\end{document}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn \\cite{[Euler]} (see \\cite{[Heine]} for the translation of \\cite{[Euler]} in\nEnglish) Euler proved that there does not exist a perfect map from the sphere\n$S^{2}$ or from a part of $S^{2}$ to the Euclidean plane $E^{2}.$ Recall that\na smooth map $f$ from $S^{2}$ (or from a part of $S^{2})$ to $E^{2}$ is called\n\\textit{perfect} if for each $p\\in S^{2}$\\ there is a neighborhood $U(p)$\\ of\n$p$ in $S^{2}$ such that the restriction of $f$\\ on $U(p)$\\ preserves\ndistances infinitesimally along the meridians and the parallels of $S^{2}$ and\n$f$ preserves also angles between meridians and parallels \\cite{[CP]}. In\nmodern geometric language a perfect map is a local isometry from $S^{2}$ to\n$E^{2}$ and thus Euler's theorem results as a corollary of the Gauss Egregium\nTheorem which was proven many years later. However, Euler's method of proof is\nvery fruitful and can be applied in similar problems, see for instance\nProposition 5 in \\cite{[CP]}. Very briefly, Euler's basic idea for the\nnon-existence of a perfect map from $S^{2}$ to $E^{2},$ is to translate\ngeometrical conditions to a system of differential equations and prove that\nthis system does not have a solution. 
Using Euler's method, the non-existence\nof a smooth map from a neighbourhood $U$ of $S^{2}$ to $E^{2}$ which preserves\ndistances infinitesimally along the meridians and the parallels of $S^{2}$ and\nwhich sends the meridional arcs of $U\\cap S^{2}$ to straight lines of $E^{2},$\ncan also be proven \\cite{[CP]}.\n\nThe origin of all these problems lies in the ancient problem of cartography,\nthat is, the problem of constructing geographical maps from $S^{2}$ (or from a\nsubset of $S^{2})$ to $E^{2}$ which satisfy certain specific requirements.\nThis problem can also be considered as part of a more general subject\nwhich explores the existence of coordinate transformations that preserves some\ngeometrical properties from one coordinate system to another.\nSeveral prominent mathematicians have been studying this problem from\nantiquity to our days and in the course of this study, $S^{2}$ was replaced\ngradually by surfaces of revolution or by surfaces in $E^{3}$ generally, see\n\\cite{[Papadop]} for an excellent historical recursion on this subject.\n\nThe goal of this work is to determine the class of surfaces of revolution $S$\nfor which there exists a smooth map $\\Phi$ from an open neighbourhood $U$ of\n$S$ to $E^{2}$ preserving distances infinitesimally along the meridians and\nthe parallels of $S$ and sending meridional arcs of $U\\cap S$ to straight\nlines of $E^{2}.$ Furthermore the map\n$\\Phi$ is computed explicitly. For the computation of $\\Phi$ we follow Euler's\nideas, that is, we convert geometrical conditions to differential equations\nwhose solutions allow us to find $\\Phi.$\n\nAs a corollary of the above result we deduce that if $p$ is a point of $S$ and\nif the Gaussian curvature at $p$ is positive, then a map $\\Phi$ as above does\nnot exist in a neighbourhood $U$ of $p.$ We also deduce that if $S_{0}$ is an\nabstract surface of constant negative curvature, such maps $\\Phi$ do not exist\nfrom an open subset $U$ of $S_{0}$ to $E^{2}.$\n\n\\section{Statement of Results}\n\nLet $S$ be a surface of revolution in $E^{3}$. \nIn the following we assume that all maps are of class $C^s$, $s\\geq2$. \n We consider a\nparametrization of $S$ given by\n\\begin{equation}\nr(t,u)=(f(u)\\cos t,f(u)\\sin t,g(u)), \\tag{r}\\label{r}%\n\\end{equation}\nwhere $f(u)>0,$ $a0$ and $c>0$.\nFurthermore, assuming that $\\Phi(t,u)=(x(t,u),y(t,u))$, we have that\n\\begin{align*}\nx(t,u) & =u\\cos(b(t))+\\int \\sqrt{k}\\cos(\\theta_{0}-b(t))dt,\\\\\ny(t,u) & =-u\\sin(b(t))+\\int \\sqrt{k}\\sin(\\theta_{0}-b(t))dt,\n\\end{align*}%\nwhere\n\\[\nb(t)=-\\sqrt{c}t+c_{0}.\n\\]\n\\end{theorem}\n\n\n\\begin{figure}[h] \n\\centering{\\includegraphics[scale = 0.3]{SURFACE-PETR2.eps}\n\\caption{\\small Surface with $f^{2}=u^{2}+1$}}\n\\end{figure}\n\n\nIn Figure 1, a surface of revolution $S$ is drawn which satisfies all\nhypothesis of Theorem \\ref{main theorem}. 
In this example, taking $f^{2}\n=u^{2}+1$ it results that $g(u)=\\ln(\\sqrt{u^{2}+1}+u),$ $u>0.$ The picture\nconfirms that the Gaussian curvature of each point of $S$ is negative.\n\n\n\\begin{corollary}\n(1) If $p\\in S$ and the Gaussian curvature at $p$ is positive then for each\nneighborhood $U$ of $p$ in $S$ there does not exist a map $\\Phi:U\\rightarrow\nE^{2}$ satisfying the conditions $(C1)$ and $(C2).$\n\n(2) If $S_{0}$ is a Riemannian surface of constant negative curvature then for\neach open neighbourhood $U\\subset S_{0}$ there does not exist a map\n$\\Phi:U\\rightarrow E^{2}$ satisfying the conditions $(C1)$ and $(C2).$\n\\end{corollary}\n\n\n\nCondition $(C1)$ is a natural requirement since meridians are geodesics of $S$\nand thus it is required to be sent to geodesics of $E^{2}$ via $\\Phi.$\n\nCondition $(C2)$ appears in Euler's writings and means that the elementary\nlength between two points $p,$ $q$ on a meridian (resp. two points $p,r$ on a\nparallel) of $S$ is equal to the elementary length of points $P=\\Phi(p),$\n$Q=\\Phi(q)$ (resp. of points $P,$ $R=\\Phi(r)).$ In other words, the `elements'\n$pq$ and $pr$ are equal to the `elements' $PQ$ and $PR$ respectively\n(following the terminology of \\cite{[Euler]}). In order to express $(C2)$\nrigorously, let $p=(t,u),$ $q=(t,u+du),$ $r=(t+dt,u),$ $P=\\Phi(t,u),$\n$Q=\\Phi(t,u+du),$ $R=\\Phi(t+du,u).$ If we denote by $|p_{1}-p_{2}|$ the\ndistance between the points $p_{1},$ $p_{2}\\in S$ and by $||P_{1}-P_{2}||$ the\nEuclidean distance between the points $P_{1},$ $P_{2}\\in E^{2},$ then\ncondition $(C2)$ means that:%\n\\begin{equation}\n\\underset{du\\rightarrow0}{\\lim}\\frac{|q-p|}{|du|}=\\underset{du\\rightarrow\n0}{\\lim}\\frac{||P-Q||}{|du|} \\tag{i}\\label{i}%\n\\end{equation}\n\n\\begin{equation}\n\\underset{dt\\rightarrow0}{\\lim}\\frac{|r-p|}{|du|}=\\underset{dt\\rightarrow\n0}{\\lim}\\frac{||R-Q||}{|du|}. \\tag{ii}\\label{ii}%\n\\end{equation}\n\n\nUsing the coordinate functions $x(t,u)$ and $y(t,u)$ the equalities (\\ref{i})\nand (\\ref{ii}) take respectively the following form:%\n\n\\begin{equation}\n\\sqrt{(\\frac{\\partial x}{\\partial u})^{2}+(\\frac{\\partial y}{\\partial u})^{2}%\n}=1, \\tag{iii}\\label{iii}%\n\\end{equation}\n\n\\begin{equation}\n\\text{\\qquad}\\sqrt{(\\frac{\\partial x}{\\partial t})^{2}+(\\frac{\\partial\ny}{\\partial t})^{2}}=f(u). \\tag{iv}\\label{iv}%\n\\end{equation}\nIndeed, relations (\\ref{iii}) and (\\ref{iv}) correspond to the relations (I)\nand (II) of (\\cite{[Heine]}, p. 5). In \\cite{[CP]}, these relations are\nreproved using a modern mathematical language and they are labelled as\nrelations (6) and (7) respectively. In the present work, relations (\\ref{iii})\nand (\\ref{iv}) are obtained by replacing $\\cos u$ in the parametrization\n$(\\cos u\\cos t,\\cos u\\sin t,\\sin u)$ of $S^{2}$ by the function $f(u)$ in the\nparametrization $(f(u)\\cos t,f(u)\\sin t,g(u))$ of $S$ and then repeating the\nsame steps. 
As a result, using Euler's method, condition $(C2)$ is translated\nin a system of differential equations consisting of the relations (\\ref{iii})\nand (\\ref{iv}).\n\nCombining $(C1)$ and $(C2)$ we deduce that $\\Phi$ restricted to a meridian of\n$S$ is an isometry onto its image.\n\n\\begin{remark} {\\rm\nIf $f^{\\prime}(u)=0$ for each $u\\in(a,b)$, then, the curve $\\gamma\n(u)=(f(u),g(u))$ is a straight line in the $(x,z)$-plane and so, the surface\n$S$ obtained by revolving $\\gamma$ about the $z$-axis is Euclidean i.e.\nlocally isometric to the Euclidean plane $E^{2}.$\nFurthermore, if $f^{\\prime\n\\prime}(u)=0$ for each $u,$ we deduce that $f^{\\prime}$ and $g^{\\prime}$ are\nconstant functions since by assumption the curve $\\gamma(u)=(f(u),g(u))$ is\nparametrized by arc-length. Therefore $\\gamma(u)$ is a line segment and thus\n$S$ is a locally isometric to $E^{2}.$ }\n\\end{remark}\n\n\n\n\n\\section{Auxiliary Lemmas}\n\nIn this section we give some results that we will use for the proof of Theorem 1.\n\n\\begin{lemma}\n Assume that $f^{\\prime}(u)\\neq 0$ and\n$f^{\\prime \\prime}(u)\\neq 0$, for each $u\\in(a,b)$. Let\n$U$ be an open connected\nsubset of $S$ and a map $\\Phi:U\\rightarrow\nE^2$ having the properties $(C_1)$ and $(C_2)$. Then, \n there are\nvariables $\\phi$ and $\\omega$ which are functions of $t,$ $u$ satisfying\n\\begin{equation*}\n\\left(\\frac{\\partial x}{\\partial u},\\frac{\\partial y}{\\partial u}\\right)=\n(\\cos \\phi\n,\\sin \\phi), \n\\ \\ \\ \\\n\\left(\\frac{\\partial x}{\\partial t},\\frac{\\partial y}{\\partial t}\\right)=(f(u)\\cos\n\\omega,f(u)\\sin \\omega)\n\\end{equation*}\nand\n\\[\nf^{\\prime \\prime}(u)=\\left( \\frac{\\partial \\omega}{\\partial u}(t,u)\\right)\n^{2}f(u). \n\\]\n\\end{lemma}\n\\begin{proof}\nProceeding as in the proof of Proposition 5 of \\cite{[CP]} \nwe have that there are\nvariables $\\phi$ and $\\omega$ which are functions of $t,$ $u$ such that\n\\begin{equation*}\n(\\frac{\\partial x}{\\partial u},\\frac{\\partial y}{\\partial u})=(\\cos \\phi\n,\\sin \\phi), \n\\end{equation*}\n\\begin{equation*}\n(\\frac{\\partial x}{\\partial t},\\frac{\\partial y}{\\partial t})=(f(u)\\cos\n\\omega,f(u)\\sin \\omega). \n\\end{equation*}\n\nSince\n\\[\n\\frac{\\partial^{2}}{\\partial u\\partial t}=\\frac{\\partial^{2}}{\\partial\nt\\partial u},\n\\]\nwe have%\n\\begin{equation*}\n-\\sin \\phi \\cdot \\frac{\\partial \\phi}{\\partial t}=f^{\\prime}\\cdot \\cos \\omega\n-f\\cdot \\sin \\omega \\cdot \\frac{\\partial \\omega}{\\partial u}. \n\\end{equation*}\n\n\\begin{equation*}\n\\cos \\phi \\cdot \\frac{\\partial \\phi}{\\partial t}=f^{\\prime}\\cdot \\sin \\omega\n+f\\cdot \\cos \\omega \\cdot \\frac{\\partial \\omega}{\\partial u}. 
\n\\end{equation*}\nMultiplying the first of the above equality by \n $\\cos \\omega$, the second by $\\sin \\omega$ and\nadding, we deduce that\n\\begin{eqnarray*}\n\\lefteqn{\n(-\\sin \\phi \\cdot \\cos \\omega+\\cos \\phi \\cdot \\sin \\omega)\\frac{\\partial \\phi}{\\partial\nt}=}\\\\\n& & f^{\\prime}\\cdot \\cos^{2}\\omega-f\\cdot \\sin \\omega \\cdot \\cos \\omega \\cdot\n\\frac{\\partial \\omega}{\\partial u}+f^{\\prime}\\cdot \\sin^{2}\\omega+f\\cdot\n\\cos \\omega \\cdot \\sin \\omega \\cdot \\frac{\\partial \\omega}{\\partial u}%\n\\end{eqnarray*}\nif and only if \n\\begin{equation*}\n\\sin(\\phi-\\omega)\\cdot \\frac{\\partial \\phi}{\\partial t}=f^{\\prime}.\n\\end{equation*}\nSimilarly, multiplying the first equality by\n $\\cos \\phi$, the second by $\\sin \\phi$\nand adding, we obtain%\n\\begin{eqnarray*}\n\\lefteqn{\\sin u\\cdot \\cos \\omega \\cdot \\cos \\phi-}\\\\\n& & \\cos u\\cdot \\sin \\omega \\cdot \\frac{\\partial \\omega}{\\partial u}\\cdot \\cos \\phi-\\sin u\\cdot \\sin \\omega \\cdot \\sin\\phi+\n\\cos u\\cdot \\cos \\omega \\cdot \\frac{\\partial \\omega}{\\partial u}\\cdot \\sin \\phi = 0\n\\end{eqnarray*}\nwhich implies that%\n\\begin{equation*}\nf\\cdot \\sin(\\phi-\\omega)\\frac{\\partial \\omega}{\\partial u}=-f^{\\prime}\\cdot\n\\cos(\\phi-\\omega). \n\\end{equation*}\n\nOn the other hand, the condition $(C1)$ implies that the meridians are\n mapped to straight lines, and so, we have \n\\begin{equation*}\n\\frac{\\partial \\phi}{\\partial u}=0.\n\\end{equation*}\nThus, \ndifferentiating \n\\begin{equation*}\n\\sin(\\phi-\\omega)\\cdot \\frac{\\partial \\phi}{\\partial t}=f^{\\prime}\n\\end{equation*}\n with respect to $u$ and using the previous equality, we obtain%\n\\begin{equation*}\n-\\cos(\\phi-\\omega)\\cdot \\frac{\\partial \\omega}{\\partial u}\\cdot \\frac\n{\\partial \\phi}{\\partial t}=f^{\\prime \\prime}.\n\\end{equation*}\n\n\nMultiplying \n\\begin{equation*}\n\\sin(\\phi-\\omega)\\cdot \\frac{\\partial \\phi}{\\partial t}=f^{\\prime}\n\\end{equation*}\nby%\n\\[\n\\frac{\\partial \\omega}{\\partial u}\\cdot \\frac{\\partial \\phi}{\\partial t}%\n\\]\nwe have%\n\\[\nf\\cdot \\sin(\\phi-\\omega)\\cdot\\left(\\frac{\\partial \\omega}{\\partial u}\\right)^{2}\\cdot\n\\frac{\\partial \\phi}{\\partial t}=-f^{\\prime}\\cdot \\cos(\\phi-\\omega)\\cdot\n\\frac{\\partial \\omega}{\\partial u}\\cdot \\frac{\\partial \\phi}{\\partial t}.\n\\]\nHence, combining the above equalities, we deduce%\n\\[\nf\\cdot f^{\\prime}\\cdot(\\frac{\\partial \\omega}{\\partial u})^{2}=f^{\\prime}\\cdot\nf^{\\prime \\prime}%\n\\]\nwhich implies that%\n\\begin{equation*}\nf^{\\prime}(u)(f^{\\prime \\prime}(u)-\\left( \\frac{\\partial \\omega}{\\partial\nu}\\right) ^{2}f(u))=0.\n\\end{equation*}\nSince we have $f^{\\prime}(u)\\neq 0$, we obtain the result.\n\\end{proof}\n\n\n\\begin{lemma}\nAssume that $f^{\\prime}(u)\\neq 0$ and\n$f^{\\prime \\prime}(u)\\neq 0$, for each $u\\in(a,b)$. 
Assume that\n\\begin{equation*}\n2f^{\\prime}a^{\\prime}+fa^{\\prime \\prime}=0.\n\\end{equation*}\nThen, we have $f^{2}=cu^{2}+du+k$ and\n\\[\na=a(u)=\\arctan \\left( \\frac{2c}{\\sqrt{-\\Delta}}\\left( u+\\frac{d}{2c}\\right)\n\\right),\n\\]\nwhere $\\Delta=d^{2}-4ck$.\n\\end{lemma}\n\\begin{proof} Putting $y=a^{\\prime}$, we have the differential equation\n\\[\ny^{\\prime}+\\frac{2f^{\\prime}}{f}y = 0.\n\\]\nIts solution is\n\\begin{equation*}\ny=a^{\\prime}=Ce^{-2\\int \\frac{f^{\\prime}}{f}du}.\n\\end{equation*}\nIt follows that\n$$(a^{\\prime})^{2} = C^2 e^{-4\\int \\frac{f^{\\prime}}{f}du},$$\nwhence\n $$\\frac{f^{\\prime \\prime}}{f}=C^2 e^{-4\\int \\frac{f^{\\prime}}{f}du},$$\nand so, we obtain\n$$\\ln f^{\\prime \\prime}-\\ln f =K-4\\int \\frac{f^{\\prime}}{f}du.$$\nDifferentiating the above equality, we get\n$$\\frac{f^{(3)}}{f^{\\prime \\prime}}-\\frac{f^{\\prime}}{f} =-4\\frac{f^{\\prime}%\n}{f},$$\nand therefore we deduce\n\\[\nf^{(3)}f+3f^{\\prime \\prime}f^{\\prime}=0.\n\\]\n\nOn the other hand, we have\n\\[\n(ff^{\\prime})^{\\prime \\prime}=((ff^{\\prime})^{\\prime})^{\\prime}=(f^{\\prime\n}f^{\\prime}+ff^{\\prime \\prime})^{\\prime}=2f^{\\prime \\prime}f+f^{\\prime \\prime\n}f+ff^{(3)}=f^{(3)}f+3f^{\\prime \\prime}f^{\\prime},\n\\]\nwhence we get\n\\begin{equation*}\n(ff^{\\prime})^{\\prime \\prime}=0.\n\\end{equation*}\nIt follows that $(ff^{\\prime})^{\\prime}=c$, whence we have\n$ff^{\\prime}=c_{1}u+d_{1}$, and so, we get $(f^{2})^{\\prime}=cu+d$. \nThus, we obtain\n\\[\nf^{2}=cu^{2}+du+k.\n\\]\n\n\nTaking the first and the second derivative, we have\n\\[\nf^{\\prime}=\\frac{1}{2}\\frac{2cu+d}{\\sqrt{cu^{2}+du+k}}\\ \\ \\ \\text{and}%\n\\ \\ \\ f^{\\prime \\prime}=\\frac{4ck-d^{2}}{4(cu^{2}+du+k)^{3\/2}}.\n\\]\nThus\n\\[\n\\frac{f^{\\prime \\prime}}{f}=\\frac{4ck-d^{2}}{4(cu^{2}+du+k)^{2}}%\n\\ \\ \\ \\text{and}\\ \\ \\ \\frac{f^{\\prime}}{f}=\\frac{2cu+d}{2(cu^{2}+du+k)}.\n\\]\nSince\n\\[\n\\frac{f^{\\prime \\prime}}{f}=(a^{\\prime})^{2}= C^2 \ne^{-4\\int \\frac{f^{\\prime}}%\n{f}du},\n\\]\nwe have\n\\[\n\\frac{f^{\\prime \\prime}}{f}=\\frac{C^2}{f^{4}}.\n\\]\nThus, we obtain\n\\[\n\\frac{4ck-d^{2}}{4(cu^{2}+du+k)^{2}}=\\frac{C^2}{(cu^{2}+du+k)^{2}},\n\\]\nand therefore\n\\[\nC^2 =\\frac{4ck-d^{2}}{4}.\n\\]\n\n\nLet $\\Delta=d^{2}-4ck$ be the discriminant of $cu^{2}+du+k$. Thus, we\nget \n\\[\nC=\\frac{\\sqrt{-\\Delta}}{2}.\n\\]\nFurthermore, we have\n\\[\na^{\\prime}=Ce^{-2\\int \\frac{f^{\\prime}}{f}du}=\\frac{\\sqrt{-\\Delta}\/2}{f^{2}%\n}=\\frac{\\sqrt{-\\Delta}\/2}{cu^{2}+du+k}%\n\\]\nand thus\n\\[\na=\\int a^{\\prime}du=\\int \\frac{\\sqrt{-\\Delta}\/2}{cu^{2}+du+k}du=\\int \\frac\n{\\sqrt{-\\Delta}\/2}{c(u+\\frac{d}{2c})^{2}+(\\frac{\\sqrt{-\\Delta}}{2c})^{2}}du.\n\\]\nHence, we obtain \n\\begin{equation*}\na(u)=\\arctan \\left( \\frac{2c}{\\sqrt{-\\Delta}}\\left( u+\\frac{d}{2c}\\right)\n\\right). \n\\end{equation*} \\end{proof}\n\n\n\n\\section{Proof of Theorem 1}\nSuppose that there exists a map $\\Phi:U\\rightarrow\nE^{2}$ having the properties $(C1)$ and $(C2)$. \nBy Lemma 1, there are\nvariables $\\phi$ and $\\omega$ which are functions of $t,$ $u$ satisfying\n\\begin{equation}\n\\left(\\frac{\\partial x}{\\partial u},\\frac{\\partial y}{\\partial u}\\right)=(\\cos \\phi\n,\\sin \\phi), \\tag{1}\\label{1}%\n\\end{equation}\n\\begin{equation}\n\\left(\\frac{\\partial x}{\\partial t},\\frac{\\partial y}{\\partial t}\\right)=(f(u)\\cos\n\\omega,f(u)\\sin \\omega). 
\\tag{2}\\label{2}%\n\\end{equation}\nand\n\\[\nf^{\\prime \\prime}(u)=\\left( \\frac{\\partial \\omega}{\\partial u}(t,u)\\right)\n^{2}f(u). \\tag{3}\\label{3}%\n\\]\nBy $(C1)$, the meridians are mapped to straight lines, and so, we have \n\\begin{equation*}\n\\frac{\\partial \\phi}{\\partial u}=0. \n\\end{equation*}\nThus (\\ref{1}) yields\n\\begin{eqnarray*}\n\\lefteqn{\\frac{\\partial}{\\partial u}\\left( \\frac{\\partial x}{\\partial u}%\n,\\frac{\\partial y}{\\partial u}\\right) =}\\\\\n& & \\left( \\frac{\\partial^{2}x}{\\partial\nu^{2}},\\frac{\\partial^{2}y}{\\partial u^{2}}\\right)= \\frac{\\partial}{\\partial\nu}(\\cos \\phi,\\sin \\phi)=\n\\left(-\\sin \\phi \\frac{\\partial \\phi}{\\partial u},\\cos \\phi\n\\frac{\\partial \\phi}{\\partial u}\\right)=(0,0).\n\\end{eqnarray*}\nIt follows\n\\begin{equation}\nx(t,u)=ug_{1}(t)+g_{2}(t),\\ \\ \\ \\ y(t,u)=uh_{1}(t)+h_{2}(t). \\tag{4}\\label{4}%\n\\end{equation}\nTherefore, the function $(\\partial \\omega\/\\partial u)$ is a function depending\nonly on the variable $u$, and hence there exist functions $a(u)$ and $b(t)$\nsuch that\n\\begin{equation}\n\\omega(t,u)=a(u)+b(t). \\tag{5}\\label{5}%\n\\end{equation}\nCombining (\\ref{2}) and (\\ref{5}), we deduce\n\\begin{equation}\n\\left( \\frac{\\partial x}{\\partial t},\\frac{\\partial y}{\\partial t}\\right)\n=(f\\cos(a(u)+b(t)),f\\sin(a(u)+b(t))), \\tag{6}\\label{6}%\n\\end{equation}\nand using that\n\\[\n\\frac{\\partial^{2}x}{\\partial u\\partial t}=\\frac{\\partial^{2}x}{\\partial\nt\\partial u},\n\\]\n(\\ref{4}) and (\\ref{6}) implies that\n\\[\n\\frac{\\partial}{\\partial u}f\\cos(a(u)+b(t))=\\frac{\\partial}{\\partial t}%\ng_{1}(t).\n\\]\nTherefore, for each $t$ and $u$, we deduce\n\\begin{equation}\nf^{\\prime}(u)\\cos((a(u)+b(t))-f(u)\\sin(a(u)+b(t))a^{\\prime}(u)=g_{1}^{\\prime\n}(t). \\tag{7}\\label{7}%\n\\end{equation}\nSimilarly, from\n\\[\n\\frac{\\partial^{2}y}{\\partial u\\partial t}=\\frac{\\partial^{2}y}{\\partial\nt\\partial u}%\n\\]\nwe get\n\\begin{equation}\nf^{\\prime}(u)\\sin((a(u)+b(t))+f(u)\\cos(a(u)+b(t))a^{\\prime}(u)=h_{1}^{\\prime\n}(t), \\tag{8}\\label{8}%\n\\end{equation}\nfor each $t$ and $u$.\n\nBy taking the derivative of (\\ref{7}) with respect to $u$ we have\n\\[\nf^{\\prime \\prime}\\cos \\omega-2f^{\\prime}a^{\\prime}\\sin \\omega-f(a^{\\prime}%\n)^{2}\\cos \\omega-fa^{\\prime \\prime}\\sin \\omega=0\n\\]\nand so, we get\n\\begin{equation*}\n\\sin \\omega(2f^{\\prime}a^{\\prime}+fa^{\\prime \\prime})-(f^{\\prime \\prime\n}-f(a^{\\prime})^{2})\\cos \\omega=0 \n\\end{equation*}\nAssuming that $\\sin \\omega \\neq0$, we obtain \n\\begin{equation}\n2f^{\\prime}a^{\\prime}+fa^{\\prime \\prime}=0. \\tag{9}\\label{9}%\n\\end{equation}\nIf $\\sin \\omega=0,$ then $\\cos \\omega \\neq0$.\nThus, by taking the derivative of (\\ref{8}) we can derive the same\ndifferential equation (\\ref{9}), restricting if necessary the domain where\nthe functions $f$ and $a$ are defined.\nLemma 2 implies that\n$$f^{2}=cu^{2}+du+k$$\nand\n\\[\na=a(u)=\\arctan \\left(\\frac{2c}{\\sqrt{-\\Delta}}\\left(u+\\frac{d}{2c}\\right)\n\\right), \\tag{10}\\label{10}\n\\]\nwhere $\\Delta=d^{2}-4ck$.\n\n\n Using (\\ref{9}) and (\\ref{10}) we get\n\\[\n\\frac{f^{\\prime}}{f}=-\\frac{a^{\\prime \\prime}}{2a^{\\prime}}=a^{\\prime}\\tan a,\n\\]\nwhich is equivalent to\n \\begin{equation}\n f^{\\prime}\\cos a-fa^{\\prime}\\sin a=0.\n \\tag{11}\\label{11}\n \\end{equation}\n\nIf $f \\cos a =0$, then the above equality implies that \n$fa^{\\prime}\\sin a=0$. 
Since $f(u) a^{\\prime}(u) \\neq 0$, for every $u$,\nwe have $\\sin a=0$ which is a contradiction. Thus, dividing the above\nequality by $f \\cos a$, we obtain $\\frac{f^{\\prime}}{f}\\tan\na+a^{\\prime}=0$. Substituting $f^{\\prime}\/f$ by $a^{\\prime}\\tan a$ we deduce\n $a^{\\prime}(\\tan a)^2+a^{\\prime}=0$, whence $(\\tan a)^2 = -1$\n which is a contradiction. Hence we have \n$$f^{\\prime}\\sin a+fa^{\\prime}\\cos a\\neq 0.$$ \n\n\nOn the other hand, by taking the derivative of\n$f^{\\prime}\\sin a+fa^{\\prime}\\cos a$, we have\n\\[\n(f^{\\prime}\\sin a+fa^{\\prime}\\cos a)^{\\prime}=f^{\\prime \\prime}\\sin\na+f^{\\prime}a^{\\prime}\\cos a+f^{\\prime}a^{\\prime}\\cos a+fa^{\\prime \\prime}\\cos\na-f(a^{\\prime})^{2}\\sin a.\n\\]\nIn order to prove that this expression is zero, it suffices to show that\n\\[\n\\frac{f^{\\prime \\prime}}{f}\\tan a+2\\frac{f^{\\prime}}{f}a^{\\prime}+a^{\\prime \\prime\n}-(a^{\\prime})^{2}\\tan a=0\n\\]\nand one can verify, by a simple replacement, that this relation holds. Furthermore, we have\n\\[\na^{\\prime}(0)=\\frac{\\sqrt{-\\Delta}}{2k},\\text{ }f^{\\prime}(0)=\\frac{d}%\n{2\\sqrt{k}},\\text{ }f(0)=\\sqrt{k},\\text{ }\\tan a(0)=\\frac{d}{\\sqrt{-\\Delta}}.\n\\]\nThen, we obtain\n\\begin{equation}\nf^{\\prime}\\sin a+fa^{\\prime}\\cos a=\\sqrt{c}. \\tag{12}\\label{12}\n\\end{equation}\n\n\n By expanding relation (\\ref{7}), we obtain\n\\begin{eqnarray*}\n\\lefteqn{f^{\\prime}(\\cos a(u)\\cos b(t)-\\sin a(u)\\sin b(t))-}\\\\\n& & fa^{\\prime}(\\sin a(u)\\cos\nb(t)+\\sin b(t)\\cos a(u))=g_1^{\\prime}(t),\n\\end{eqnarray*}\nand from (\\ref{11}), (\\ref{12}) the relation $g_{1}^{\\prime}(t)=\\sqrt{c}\\sin b(t)$ follows.\n\nSimilarly, from (\\ref{9}) we have:\n\\[\nf^{\\prime}(\\sin a\\cos b+\\sin b\\cos a)+fa^{\\prime}(\\cos a\\cos b-\\sin a\\sin\nb)=h_{1}^{\\prime}(t).\n\\]\nwhence\n\\[\n\\cos b(f^{\\prime}\\sin a+fa^{\\prime}\\cos a)+\\sin b(f^{\\prime}\\cos a-fa^{\\prime\n}\\sin a)=h_{1}^{\\prime}(t)\n\\]\nand so, we obtain $h_{1}^{\\prime}(t)=\\sqrt{c}\\, \\cos b(t)$.\n Therefore, we get\n$$g_{1}^{\\prime}(t)=\\sqrt{c}\\sin b(t) \\ \\ \\ \\text{and}\\ \\ \\ h_{1}^{\\prime\n}(t)=\\sqrt{c}\\, \\cos b(t).$$\n\nWe will proceed now with the computation of the projection $\\Phi.\\medskip$\nBy hypothesis, we have, that $(\\partial \\phi\/\\partial u)=0.$ \n Hence $\\phi$ is a\nfunction only of $t.$ From (\\ref{1}), (\\ref{2}) and (\\ref{4}) we have that\n\\begin{equation}\n\\left( \\frac{\\partial x}{\\partial u},\\frac{\\partial y}{\\partial u}\\right)\n=(\\cos \\phi(t),\\sin \\phi(t))=(g_{1},h_{1})\\text{ }\\tag{13}\\label{13}%\n\\end{equation}\nand%\n\\begin{equation}\n\\left( \\frac{\\partial x}{\\partial t},\\frac{\\partial y}{\\partial t}\\right)\n=(f(u)\\cos(a(u)+b(t)),f(u)\\sin(a(u)+b(t)))=(ug_{1}^{\\prime}+g_{2}^{\\prime\n},uh_{1}^{\\prime}+h_{2}^{\\prime}).\\tag{14}\\label{14}%\n\\end{equation}\nConsequently, (\\ref{13}) and (\\ref{14}) we get respectively that\n\\[\n(g_{1})^{2}+(h_{1})^{2}=1\n\\]\nand\n\\[\nu^{2}((g_{1}^{\\prime})^{2}+(h_{1}^{\\prime})^{2})+2u(g_{1}^{\\prime}%\ng_{2}^{\\prime}+h_{1}^{\\prime}h_{2}^{\\prime})+(g_{2}^{\\prime})^{2}%\n+(h_{2}^{\\prime})^{2}=cu^{2}+du+k.\n\\]\nTherefore, we have:\n\\begin{align*}\n(g_{2}^{\\prime})^{2}+(h_{2}^{\\prime})^{2} & =k\\\\\n2(g_{1}^{\\prime}g_{2}^{\\prime}+h_{1}^{\\prime}h_{2}^{\\prime}) & =d.\n\\end{align*}\nFrom the first of the previous relations we deduce that there exists a\nfunction $r(t)$ such that\n\\begin{equation}\n(g_{2}^{\\prime},h_{2}^{\\prime})=(\\sqrt{k}\\cos 
r(t),\\sqrt{k}\\sin r(t))\\tag{15}%\n\\label{15}%\n\\end{equation}\nwhile from the second, in combination with (\\ref{13}) and (\\ref{12}), we deduce\nthat $2\\sqrt{ck}\\sin(b+r)=d$ and thus\n\\[\n\\sin(b+r)=\\frac{d}{2\\sqrt{ck}}.\n\\]\nTherefore, there exists real number $\\theta_{0}$ such that\n$$r(t)=\\theta_{0}-b(t). $$\nFurthermore, from (\\ref{13}) we have that\n\\[\n(g_{1}^{\\prime},h_{1}^{\\prime})=(-\\phi^{\\prime}\\sin \\phi,\\phi^{\\prime}\\cos\n\\phi)=(\\sqrt{c}\\, \\sin b,\\text{ }\\sqrt{c}\\, \\cos b),\n\\]\nand so, we have the following two cases:%\n\na) $\\phi^{\\prime}=\\sqrt{c}$ and $\\phi(t)=-b(t)$. \nThus, we have \n$$b^{\\prime}(t)=-\\phi^{\\prime}=-\\sqrt{c},$$\nwhence \n$$b(t)=-\\sqrt{c}t+c_{0}. $$\nThen, we get\n\\begin{equation}\n(g_{1},h_{1})=(\\cos(-b(t)),\\sin(-b(t)))=(\\cos b(t),-\\sin b(t)).\\tag{16}%\n\\label{16}%\n\\end{equation}\nThus, combining (\\ref{4}), (\\ref{15}) and (\\ref{16}) we\ndeduce\n\\begin{align*}\nx(t,u) & =u\\cos(b(t))+\\int \\sqrt{k}\\cos(\\theta_{0}-b(t))dt\\\\\ny(t,u) & =-u\\sin(b(t))+\\int \\sqrt{k}\\sin(\\theta_{0}-b(t))dt.\n\\end{align*}\n\n\nb) $\\phi^{\\prime}=-\\sqrt{c}$ and $\\phi(t)=\\pi-b(t)$. Proceeding\nas above, we deduce the result.\n \nFurthermore, substituting $b(t)$\nin the integrals above we may calculate them and thus we may find explicit\nformulas for the map $\\Phi.$\n\n\nConversely, by substituting the above expressions of $x(t,u)$ and $y(t,u)$ into (iii) and (iv), and supposing that $f^{2}=cu^{2}+du+k$, we see that condition $(C1)$ is easily verified. Also, condition $(C2)$ is satisfied, since \n$\\frac{\\partial x}{\\partial u}=\\cos \\phi$\nimplies that $$\\phi=\\arccos(\\frac{\\partial x}{\\partial u})$$ and so, by taking the derivative with respect to $u$, we obtain that \n$\\frac{\\partial \\phi}{\\partial u}=0.$\nHence, Theorem \\ref{main theorem} is proven.\n\n\n\n\\section{Proof of Corollary 1}\n\\\n(1) The Gaussian curvature of each point of $S$ is given by the formula\n\\[\nK=-\\frac{f^{\\prime \\prime}}{f}%\n\\]\n(see Formula (9), p. 162, in the Example 4 of \\cite{[DoCarmo]}). On the other\nhand, in the proof of Lemma 2\n we have shown that $f^{\\prime \\prime\n}\/f>0.$ Therefore, $K<0$ at every point of $S$ and thus statement (1) is proven.\n\n(2) The surfaces of revolution of constant negative curvature are well known.\nA description of them can be found for example in (\\cite{[Gray]}, Theorem\n15.22, p. 477). Obviously these surfaces of revolution $R$ does not have the\nform of the surface $S$ given in Theorem \\ref{main theorem}. Therefore, for\nany point $p\\in R$ and for any neighborhood $U\\subset R$ of $p$ there does not\nexist a map $\\Phi:U\\rightarrow E^{2}$ satisfying the conditions $(C1)$ and\n$(C2).$ On the other hand, if $S_{0}$ is a Riemannian surface of constant\nnegative curvature $k<0,$ it is well known that $S_{0}$ is locally isometric\nto surface of revolution $R$ of constant curvature $k.$ Therefore our\nstatement follows.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction and Preliminaries}\n\n\n\nThe pointwise a.e. convergence of sequences of operators is of paramount importance and has been widely studied in several areas of analysis, such as harmonic analysis, PDE, and ergodic theory. This area boasts challenging problems, \nindicatively see \\cite{CRV, C66, DGL, JMZ}, \nand is intimately connected with the boundedness of the associated maximal operators; on this see \\cite{Stein}. \nMoreover, techniques and tools employed to study a.e. 
convergence have led to important developments in harmonic analysis. \n\nMultilinear harmonic analysis has made significant advances in recent years. The founders of this area are Coifman and Meyer \\cite{CM2} who realized the \napplicability of multilinear operators and introduced their study in analysis in the \nmid 1970s. \nFocusing on operators that commute with translations, a\nfundamental difference between the multilinear and the linear theory is the existence of a straightforward characterization of boundedness at an initial point, usually $L^2\\to L^2$. The lack of an easy characterization of boundedness at an initial point in the multilinear theory creates difficulties in their study. \nCriteria that get very close to characterization of boundedness have \nrecently been obtained by the first two authors and Slav\\'ikov\\'a \\cite{Gr_He_Sl}\nand also by Kato, Miyachi, and Tomita \\cite{Katoetal} in the bilinear case. \nThese criteria were extended to the general $m$-linear case for $m\\ge 2$ by the authors of this article in \\cite{Paper1}. This reference also contains initial \n$L^2\\times\\cdots\\times L^2\\to L^{2\/m}$ estimates for \nrough homogeneous multilinear singular integrals associated with \n$L^q$ functions on the sphere and multilinear multipliers of H\\\"ormander type. \n\n The purpose of this work is to obtain the pointwise a.e. convergence of truncated multilinear homogeneous singular integrals and lacunary multilinear multipliers by establishing boundedness for their associated maximal operators.\n\n\n\n\nWe first introduce multilinear truncated singular integral operators.\nLet $\\Omega$ be a integrable function, defined on the sphere $\\mathbb{S}^{mn-1}$, satisfying the mean value zero property \n\\begin{equation}\\label{vanishingmtcondition}\n\\int_{\\mathbb{S}^{mn-1}}\\Omega~ d\\sigma_{mn-1}=0.\n\\end{equation} Then we define\n\\begin{equation*}\nK(\\vec{{y}}\\,):=\\frac{\\Omega(\\vec{{y}}\\,')}{|\\vec{{y}}\\,|^{mn}}, \\qquad \\vec y \\neq 0, \n\\end{equation*} \nwhere $\\vec{{y}}\\,':=\\vec{{y}}\\,\/|\\vec{{y}}\\,|\\in \\mathbb{S}^{mn-1}$, and the \ncorresponding truncated multilinear operator $\\mathcal{L}_{\\Omega}^{(\\epsilon)}$ by\n$$\n\\mathcal L^{(\\epsilon)}_{\\Omega}\\big(f_1,\\dots,f_m\\big)(x):=\\int_{(\\mathbb R^n)^m\\setminus B(0,\\epsilon)}{K(\\vec{{y}}\\,)\\prod_{j=1}^{m}f_j(x-y_j)}~d\\,\\vec{{y}}\\,\n$$\n acting on Schwartz functions $f_1,\\dots,f_m$ on $\\mathbb R^n$, where $x\\in\\mathbb R^n$ and $\\vec{{y}}\\,:=(y_1,\\dots,y_m)\\in (\\mathbb R^n)^m$.\nMoreover, by taking $\\epsilon\\searrow 0$, we obtain the multilinear homogeneous Calder\\'on-Zygmund singular integral operator \n\\begin{align}\n\\mathcal L_{\\Omega}\\big(f_1,\\dots,f_m\\big)(x)&:=\\lim_{\\epsilon\\searrow 0}\\mathcal L^{(\\epsilon)}_{\\Omega}\\big(f_1,\\dots,f_m\\big)(x) \\label{epsilonto0}\\\\\n&=p.v. \\int_{(\\mathbb R^n)^m}{K(\\vec{{y}}\\,)\\prod_{j=1}^{m}f_j(x-y_j)}~d\\,\\vec{{y}}\\, . \\nonumber\n\\end{align}\nThis is still well-defined for any Schwartz functions $f_1, \\dots,f_m$ on $\\mathbb R^n$. Here, $B(0,\\epsilon)$ is the ball centered at zero with radius $\\epsilon>0$ in $(\\mathbb R^n)^m$. \n In \\cite{Paper1} we showed that \n if $\\Omega$ lies in $ L^q\n(\\mathbb{S}^{mn-1}) $ with $q>\\frac{2m}{m+1}$, then the multilinear singular integral operator $\\mathcal{L}_{\\Omega}$ admits a bounded \nextension from $L^2(\\mathbb R^n)\\times \\cdots \\times L^2(\\mathbb R^n)$ to $L^{2\/m}(\\mathbb R^n)$. 
In order words, given $f_j\\in L^2(\\mathbb R^n)$, $\\mathcal{L}_{\\Omega}(f_1,\\dots,f_m)$ is well-defined and is in $L^{2\/m}(\\mathbb R^n)$. It is natural to expect that, similar to the linear case, the truncated operator $\\mathcal{L}_{\\Omega}^{(\\epsilon)}(f_1,\\dots,f_m)$ converges a.e. to $\\mathcal{L}_{\\Omega}(f_1,\\dots,f_m)$ as $\\epsilon\\to 0$.\n\n\n\n\n\n\n\n\n\n\n\nOur first main result is as follows.\n\\begin{thm}\\label{CCC1'}\nLet $m\\ge 2$, $\\frac{2m}{m+1}0}$ be a \nfamily of $m$-linear operators while $T_\\ast$ is the associated maximal operator, defined by\n$$T_{\\ast}(f_1,\\dots,f_m):=\\sup_{t>0} \\big|T_t(f_1,\\dots,f_m)\\big| $$\nfor $f_j\\in D_j$, $1\\le j\\le m$. Suppose that there is a constant $B$ such that \n\\begin{equation}\\label{100'}\n\\|T_{\\ast} (f_1,\\dots,f_m)\\|_{L^{q,\\infty}(\\mathbb R^n)} \\leq B \\prod_{j=1}^m \\|f_j\\|_{L^{p_j}(\\mathbb R^n)}\n \\end{equation}\n for all $f_j\\in D_j(\\mathbb{R}^n)$. Also suppose that for all $f_j $ in $D_j$ we have \n\\begin{equation}\\label{tto1'}\n \\lim_{t\\to 0} T_{t} (f_1,\\dots, f_m) = T(f_1,\\dots,f_m)\n \\end{equation}\nexists and is finite. \nThen for all functions $f_j\\in L^{p_j}(\\mathbb{R}^n) $ the limit in \\eqref{tto1'} exists and is finite a.e., and defines an $m$-linear operator which uniquely extends $T$ defined on $D_1\\times \\cdots \\times D_m$ and\nwhich is bounded from $L^{p_1}(\\mathbb R^n)\\times \\cdots \\times L^{p_m}(\\mathbb R^n)$ to $L^{q,\\infty}(\\mathbb{R}^n)$. \n\\end{prop}\n\n\n\n\n\n\n \n\n \n \n With the help of this result, we reduce Theorem~\\ref{CCC1'} to the boundedness of \nthe associated maximal singular integral operator \n\\begin{equation*}\n\\mathcal{L}_{\\Omega}^*\\big(f_1,\\dots,f_m\\big)(x):=\\sup_{\\epsilon>0}\\Big| \\mathcal{L}_{\\Omega}^{(\\epsilon)}\\big(f_1,\\dots,f_m\\big)(x)\\Big|.\n\\end{equation*}\n\\begin{thm}\\label{MAXSINGINT}\nLet $m\\ge 2$, $\\frac{2m}{m+1}0$ such that\n\\begin{equation}\\label{maxinequal}\n\\big\\Vert \\mathcal{L}_{\\Omega}^*(f_1,\\dots,f_m)\\big\\Vert_{L^{2\/m}(\\mathbb R^n)}\\le C\\Vert \\Omega\\Vert_{L^q(\\mathbb{S}^{mn-1})}\\prod_{j=1}^{m}\\Vert f_j\\Vert_{L^2(\\mathbb R^n)}\n\\end{equation}\nfor Schwartz functions $f_1,\\dots,f_m$ on $\\mathbb R^n$.\n\\end{thm}\nThis extends and improves a result obtained in \\cite{BH} when $m=2$ and $q=\\infty$.\\\\ \n\n\n The essential contribution of this article is to suitably combine \nLittlewood-Paley techniques and wavelet decompositions to reduce the boundedness of $\\mathcal{L}_\\Omega^*$ to decaying estimates for norms of maximal operators associated with lattice bumps; see \\eqref{2mmaingoal} for the exact formulation. This result is actually proved in terms of Plancherel type inequalities, developed in \\cite{Paper1} and stated in Proposition~\\ref{keyapplication1}. \n\n\n \n\n\n\n\n\\hfill\n\n\n\n\n\n\nThe tools used to establish Theorem~\\ref{CCC1'} turn out to be useful in the study of pointwise convergence problem of several related operators. \nAs an example\nlet us take multilinear multipliers with limited decay to demonstrate our idea. 
\n\nFor a smooth function $\\sigma\\in\\mathscr{C}^{\\infty}((\\mathbb R^n)^m)$ and $\\nu\\in\\mathbb{Z}$\nlet\n\\begin{equation}\\label{defSsinu}\nS_{\\sigma}^{\\nu}\\big(f_1,\\dots,f_m\\big)(x):=\\int_{(\\mathbb R^n)^m}{\\sigma(2^{\\nu}\\vec{{\\xi}}\\,)\\Big( \\prod_{j=1}^{m}\\widehat{f_j}(\\xi_j)\\Big)e^{2\\pi i\\langle x,\\sum_{j=1}^{m}\\xi_j \\rangle}} ~d\\,\\vec{{\\xi}}\\,\n\\end{equation}\nfor Schwartz functions $f_1,\\dots,f_m$ on $\\mathbb R^n$, where $\\vec{{\\xi}}\\,:=(\\xi_1,\\dots,\\xi_m) \\in (\\mathbb R^n)^m$.\nWe are interested in the poinwise convergence of $S^\\nu_\\sigma$ when $\\nu\\to-\\infty$.\nWe pay particular attention to $\\sigma$ satisfying the limited \ndecay property (for some fixed $a$)\n$$\n\\big|\\partial^{\\beta}\\sigma(\\vec{{\\xi}}\\,) \\big|\\lesssim_{\\beta}|\\vec{{\\xi}}\\,|^{-a}\n$$\nfor sufficiently many $\\beta$. Examples of multipliers of this type include $\\widehat \\mu$, the Fourier transform of the spherical measure $\\mu$; see \\cite{Ca79, CW78, Ru} for the corresponding linear results. \n\n\nThe second contribution of this work is the following result.\n\\begin{thm}\\label{CCC2'}\nLet $m\\ge 2$ and $a>\\frac{(m-1)n}{2}$.\nLet \n$\\sigma\\in\\mathscr{C}^{\\infty}((\\mathbb R^n)^m)$ satisfy\n\\begin{equation}\\label{givenassumption}\n\\big|\\partial^{\\beta}\\sigma(\\vec{{\\xi}}\\,) \\big|\\lesssim_{\\beta}|\\vec{{\\xi}}\\,|^{-a}\n\\end{equation}\nfor all $|\\beta|\\le \\big[ \\frac{(m-1)n}{2} \\big]+1$, where $ \\left[ r\\right]$ denotes the integer part of $r$.\nThen for \n$f_j$ in $L^2(\\mathbb{R}^n)$, $j=1,\\dots , m$, the functions \n$\nS_\\sigma^\\nu(f_1,\\dots, f_m) \n$\nconverge a.e. to $\\sigma(0) f_1 \\cdots f_m$ as $\\nu \\to -\\infty$. Additionally, if $\\lim_{y\\to \\infty} \n\\sigma(y)$ exists and and equals $L$, then the functions \n$S_\\sigma^\\nu(f_1,\\dots, f_m) $ converge a.e. to \n$L f_1 \\cdots f_m$ as $\\nu \\to \\infty$. \n\\end{thm}\n\n\n\n\n\n\n\n\n\nThis problem is also reduced to the boundedness of the associated\n $m$-(sub)linear lacunary maximal multiplier operator defined by: \n\\begin{equation*}\n\\mathscr{M}_{\\sigma}\\big(f_1,\\dots,f_m\\big)(x):=\\sup_{\\nu \\in\\mathbb{Z}}{\\big|S_{\\sigma}^{\\nu}\\big(f_1,\\dots,f_m\\big)(x)\\big|}.\n\\end{equation*}\n$\\mathscr{M}_{\\sigma}$ is the so-called multilinear spherical maximal function when $\\sigma=\\widehat\\mu$, which was studied extensively recently by \\cite{AP, Barrionuevo2017, DG, HHY, JL}. 
In particular a bilinear version of the following theorem was previously obtained in \\cite{Gr_He_Ho2020}.\n\n\n\n\n\n\n\n\n\\begin{thm}\\label{application4}\nLet $m\\ge 2$ and $a>\\frac{(m-1)n}{2}$.\nLet \n $\\sigma\\in\\mathscr{C}^{\\infty}((\\mathbb R^n)^m)$\nbe as in Theorem~\\ref{CCC2'} \nThen there exists a constant $C>0$ such that\n\\begin{equation*}\n\\big\\Vert \\mathscr{M}_{\\sigma}(f_1,\\dots,f_m)\\big\\Vert_{L^{2\/m}(\\mathbb R^n)}\\le C \\prod_{j=1}^{m}\\Vert f_j\\Vert_{L^2(\\mathbb R^n)}\n\\end{equation*}\nfor Schwartz functions $f_1,\\dots,f_m$ on $\\mathbb R^n$.\n\\end{thm}\n\n\n\n\n\nIt follows from Theorems~\\ref{MAXSINGINT} and ~\\ref{application4} that\n$\\mathcal{L}_{\\Omega}^*$ and $\\mathscr{M}_{\\sigma}$ have unique bounded extensions from $L^2\\times \\cdots \\times L^2$ to $L^{2\/m}$ by density.\n\n\n\nLet us now sketch the proof of Theorem~\\ref{CCC2'}, taking Theorem \\ref{application4} temporarily for granted.\nWe notice that the claimed \nconvergence holds pointwise everywhere for smooth functions $f_j$ with compact support\nby the Lebesgue dominated convergence theorem.\nThen the assertions are immediate consequences of Proposition~\\ref{prop00'}.\\\\\n\n\\begin{comment}\n\\begin{proof}[Proof of Theorem~\\ref{CCC1'} and Theorem~\\ref{CCC2'}]\nTo verify these results, we notice that the claimed \nconvergence holds pointwise everywhere for smooth functions $f_j$ with compact support\nby the Lebesgue dominated convergence theorem. In the case of the Theorem~\\ref{CCC2'} this assertion is straightforward, while in the case of Theorem~\\ref{CCC1'} it is a consequence \nof the smoothness of the function $f_1(x)\\cdots f_m(x)$ at the origin combined with the cancellation of $\\Omega$. The assertions in both theorems then follow by \napplying Proposition~\\ref{prop00'}.\n\\end{proof}\n\\end{comment}\n\n As Theorems \\ref{CCC1'} and \\ref{CCC2'} follow from Theorems~\\ref{MAXSINGINT} and ~\\ref{application4}, respectively, \nwe actually focus on the proof of Theorems~\\ref{MAXSINGINT} and ~\\ref{application4} in the remaining sections.\n\n\n\n\n\n\\section{Preliminary material}\\label{11171}\n\n\n We adapt some notations and key estimates from \\cite{Paper1}.\n For the sake of independent reading we review the main tools and notation. \nWe begin with certain orthonormal bases of $L^2$ due to Triebel \\cite{Tr2010}, that will be of great use in our work. \nThe idea is as follows. 
\nFor any fixed $L\\in\\mathbb{N}$ one can construct real-valued compactly supported functions $\\psi_F, \\psi_M$ in $\\mathscr{C}^L(\\mathbb{R})$ satisfying the following properties:\n $\\Vert \\psi_F\\Vert_{L^2(\\mathbb{R})}=\\Vert \\psi_M\\Vert_{L^2(\\mathbb{R})}=1$, \n $\\int_{\\mathbb{R}}{x^{\\alpha}\\psi_M(x)}dx=0$ for all $0\\le \\alpha \\le L$, and moreover, \nif $\\Psi_{\\vec{{G}}}$ is a function on $\\mathbb{R}^{mn}$, defined by\n$$\\Psi_{\\vec{{G}}}(\\vec{{x}}\\,):=\\psi_{g_1}(x_1)\\cdots \\psi_{g_{mn}}(x_{mn})$$\nfor $\\vec{{x}}\\,:=(x_1,\\dots,x_{mn})\\in \\mathbb{R}^{mn}$ and $\\vec{{G}}:=(g_1,\\dots,g_{mn})$ in the set \n $$\\mathcal{I}:=\\big\\{\\vec{{G}}:=(g_1,\\dots,g_{mn}):g_i\\in\\{F,M\\} \\big\\},$$\nthen the family of functions\n\\begin{equation*}\n\\bigcup_{\\lambda\\in\\mathbb{N}_0}\\bigcup_{\\vec{{k}}\\,\\in \\mathbb{Z}^{mn}}\\big\\{ 2^{\\lambda{mn\/2}}\\Psi_{\\vec{{G}}}(2^{\\lambda}\\vec{{x}}\\,-\\vec{{k}}\\,):\\vec{{G}}\\in \\mathcal{I}^{\\lambda}\\big\\}\n\\end{equation*}\nforms an orthonormal basis of $L^2(\\mathbb{R}^{mn})$,\nwhere $\\mathcal{I}^0:=\\mathcal{I}$ and for $\\lambda \\ge 1$, we set $\\mathcal{I}^{\\lambda}:=\\mathcal{I}\\setminus \\{(F,\\dots,F)\\}$.\n \n\n\nWe consistently use the notation $\\vec{{\\xi}}\\,:=(\\xi_1,\\dots,\\xi_m) $ for elements of $ (\\mathbb R^n)^m$, $\\vec{{G}}:=(G_1,\\dots,G_m)\\in (\\{F,M\\}^n)^m$, and \n$\n\\Psi_{\\vec{{G}}}(\\vec{{\\xi}}\\,)=\\Psi_{G_1}(\\xi_1)\\cdots \\Psi_{G_m}(\\xi_m).\n$\nFor each $\\vec{{k}}\\,:=(k_1,\\dots,k_m)\\in (\\mathbb Z^n)^m$ and $\\lambda\\in \\mathbb{N}_0$, let \n$$\n\\Psi_{G_i,k_i}^{\\lambda}(\\xi_i):=2^{\\lambda n\/2}\\Psi_{G_i}(2^{\\lambda}\\xi_i-k_i), \\qquad 1\\le i\\le m\n$$\nand\n$$\n\\Psi_{\\vec{{G}},\\vec{{k}}\\,}^{\\lambda}(\\vec{{\\xi}}\\,\\,):=\\Psi_{G_1,k_1}^{\\lambda}(\\xi_1)\\cdots \\Psi_{G_m,k_m}^{\\lambda}(\\xi_m).\n$$\nWe also assume that the support of $\\psi_{g_i}$ is contained in $\\{\\xi\\in \\mathbb{R}: |\\xi|\\le C_0 \\}$ for some $C_0>1$,\nwhich implies that\n\\begin{equation*}\n\\textup{Supp} (\\Psi_{G_i,k_i}^\\lambda)\\subset \\big\\{\\xi_i\\in\\mathbb R^n: |2^{\\lambda}\\xi_i-k_i|\\le C_0\\sqrt{n}\\big\\}.\n\\end{equation*}\nIn other words, the support of $\\Psi_{G_i,k_i}^\\lambda$ is contained in the ball centered at $2^{-\\lambda}k_i$ and radius $C_0\\sqrt{n}2^{-\\lambda}$.\nThen we note that for a fixed $\\lambda\\in\\mathbb{N}_0$, elements of $\\big\\{\\Psi_{\\vec{{G}},\\vec{{k}}\\,}^{\\lambda}\\big\\}_{\\vec{{k}}\\,}$ have (almost) disjoint compact supports.\n\n\n\n\n\n\n\n\n\n\n \n\n\nIt is also known in \\cite{Tr2006} that\nif $L$ is sufficiently large, then every tempered distribution $H$ on $\\mathbb{R}^{mn}$ can be represented as\n\\begin{equation}\\label{daubechewavelet}\nH(\\vec{{x}}\\,)=\\sum_{\\lambda\\in\\mathbb{N}_0}\\sum_{\\vec{{G}}\\in\\mathcal{I}^{\\lambda}}\\sum_{\\vec{{k}}\\,\\in \\mathbb{Z}^{mn}}b_{\\vec{{G}},\\vec{{k}}\\,}^{\\lambda}2^{\\lambda mn\/2}\\Psi_{\\vec{{G}}}(2^{\\lambda} \\vec{{x}}\\, -\\vec{{k}}\\,)\n\\end{equation}\nand for $1|k_2|\\ge\\cdots \\ge |k_m| \\big\\}\\\\\n\\mathcal{U}_2^{{\\mu}}&:=\\big\\{\\vec{{k}}\\,\\in \\mathcal{U}^{{\\mu}}:|k_1|\\ge |k_2|\\ge 2C_0\\sqrt{n}>|k_3|\\ge\\cdots \\ge |k_m| \\big\\}\\\\\n&\\vdots\\\\\n\\mathcal{U}_m^{{\\mu}}&:=\\big\\{\\vec{{k}}\\,\\in \\mathcal{U}^{{\\mu}}:|k_1|\\ge \\cdots\\ge |k_m|\\ge 2C_0\\sqrt{n} \\big\\}.\n\\end{align*}\nThen we have the following two observations that appear in \\cite{Paper1}.\n\\begin{itemize}\n\\item For 
$\\vec{{k}}\\,\\in\\mathcal{U}_l^{\\lambda+\\mu}$,\n\\begin{equation}\\label{lgkest}\nL_{G_j,k_j}^{\\lambda,\\gamma}f=L_{G_j,k_j}^{\\lambda,\\gamma}f^{\\lambda,\\gamma,{\\mu}} \\quad \\text{ for }~ 1\\le j\\le l \n\\end{equation}\ndue to the support of $\\Psi_{G_j,k_j}^{\\lambda}$, where $\\widehat{f^{\\lambda,\\gamma,\\mu}}(\\xi):=\\widehat{f}(\\xi)\\chi_{C_0\\sqrt{n}2^{\\gamma-\\lambda}\\le |\\xi|\\le 2^{\\gamma+\\mu+3}}$.\n\\item For $\\mu\\ge 1$ and $\\lambda\\in\\mathbb{N}_0$, \n\\begin{equation}\\label{L2}\n\\Big( \\sum_{\\gamma\\in\\mathbb{Z}}{\\big\\Vert f^{\\lambda,\\gamma,{\\mu}}\\big\\Vert_{L^2}^2}\\Big)^{1\/2}\\lesssim ({\\mu}+\\lambda)^{1\/2}\\Vert f\\Vert_{L^2} \\lesssim \\mu^{1\/2}(\\lambda+1)^{1\/2}\\Vert f\\Vert_{L^2} \n\\end{equation} \nwhere Plancherel's identity is applied in the first inequality.\n\\end{itemize}\n\n \n\\begin{prop}[{\\cite[Proposition 2.4]{Paper1}}]\\label{keyapplication1}\nLet $m$ be a positive integer with $m\\ge 2$ and $00$, independent of $,\\vec{{G}}, \\lambda, \\mu$, such that\n\\begin{align*}\n&\\Big\\Vert \\Big(\\sum_{\\gamma\\in\\mathbb{Z}}\\Big| \\sum_{\\vec{{k}}\\,\\in \\mathcal{U}_1^{\\lambda+\\mu}} b_{\\vec{{G}},\\vec{{k}}\\,}^{\\lambda,\\gamma,\\mu}L_{G_1,k_1}^{\\lambda,\\gamma}f_1^{\\lambda,\\gamma,\\mu}\\prod\n_{j=2}^{m}L_{G_j,k_j}^{\\lambda,\\gamma}f_j\\Big|^r\\Big)^{1\/r}\\Big\\Vert_{L^{2\/m}}\\\\\n&\\le C A_{\\vec{{G}},\\lambda,\\mu} 2^{\\lambda mn\/2}\\Big(\\sum_{\\gamma\\in\\mathbb{Z}}{\\Vert f_1^{\\lambda,\\gamma,\\mu}\\Vert_{L^2}^r} \\Big)^{1\/r} \\prod_{j=2}^{m}\\Vert f_{j}\\Vert_{L^2}\n\\end{align*} \nfor Schwartz functions $f_1,\\dots,f_m$ on $\\mathbb R^n$.\n \\item For $2\\le l\\le m$ there exists a constant $C>0$, independent of $\\vec{{G}}, \\lambda, \\mu$, such that\n\\begin{align*}\n&\\Big\\Vert \\sum_{\\gamma\\in\\mathbb{Z}} \\Big| \\sum_{\\vec{{k}}\\,\\in \\mathcal{U}_l^{\\lambda+\\mu}} b_{\\vec{{G}},\\vec{{k}}\\,}^{\\lambda,\\gamma,\\mu}\\Big( \\prod_{j=1}^{l}L_{G_j,k_j}^{\\lambda,\\gamma}f_j^{\\lambda,\\gamma,\\mu}\\Big)\\Big( \\prod_{j=l+1}^{m}L_{G_j,k_{j}}^{\\lambda,\\gamma}f_{j}\\Big) \\Big| \\Big\\Vert_{L^{2\/m}}\\\\\n&\\le C A_{\\vec{{G}},\\lambda,\\mu}^{1-\\frac{(l-1)q}{2l}}B_{\\vec{{G}},\\lambda,\\mu,q}^{\\frac{(l-1)q}{2l}} 2^{\\lambda mn\/2}\\Big[ \\prod_{j=1}^{l}\\Big(\\sum_{\\gamma\\in\\mathbb{Z}}{\\Vert f_j^{\\lambda,\\gamma,\\mu}\\Vert_{L^2}^2} \\Big)^{1\/2}\\Big] \\Big[\\prod_{j=l+1}^{m}\\Vert f_{j}\\Vert_{L^2}\\Big]\n\\end{align*} \nfor Schwartz functions $f_1,\\dots,f_m$ on $\\mathbb R^n$,\n where $\\prod_{m+1}^m$ is understood as empty.\n \\end{enumerate}\n\\end{prop}\n\nIn view of (\\ref{lgkest}), (\\ref{L2}) and Proposition \\ref{keyapplication1}, we actually obtain\n\\begin{align}\\label{1mainprop}\n&\\Big\\Vert \\Big( \\sum_{\\gamma\\in\\mathbb{Z}}\\Big| \\sum_{\\vec{{k}}\\,\\in\\mathcal{U}_{1}^{\\lambda+\\mu}}b_{\\vec{{G}},\\vec{{k}}\\,}^{\\lambda,\\gamma,\\mu}\\prod_{j=1}^{m}L_{G_j,k_j}^{\\lambda,\\gamma}f_j\\Big|^2\\Big)^{1\/2}\\Big\\Vert_{L^{2\/m}}\\nonumber \\\\&\\lesssim A_{\\vec{{G}},\\lambda,\\mu}\\mu^{1\/2}2^{\\lambda mn\/2}(\\lambda+1)^{1\/2}\\prod_{j=1}^{m}\\Vert f_j\\Vert_{L^2}\n\\end{align} \nand for $2\\le l\\le m$\n\\begin{align}\\label{2mainprop}\n&\\Big\\Vert \\sum_{\\gamma\\in\\mathbb{Z}}\\Big| \\sum_{\\vec{{k}}\\,\\in\\mathcal{U}_{l}^{\\lambda+\\mu}}b_{\\vec{{G}},\\vec{{k}}\\,}^{\\lambda,\\gamma,\\mu}\\prod_{j=1}^{m}L_{G_j,k_j}^{\\lambda,\\gamma}f_j\\Big| \\Big\\Vert_{L^{2\/m}}\\nonumber\\\\\n&\\lesssim 
A_{\\vec{{G}},\\lambda,\\mu}^{1-\\frac{(l-1)q}{2l}}B_{\\vec{{G}},\\lambda,\\mu,q}^{\\frac{(l-1)q}{2l}} \\mu^{l\/2}2^{\\lambda mn\/2}(\\lambda+1)^{l\/2} \\prod_{j=1}^{m}\\Vert f_{j}\\Vert_{L^2}.\n\\end{align} \n \n \n\n\n\n\\section{An auxiliary lemma} \n\n\nWe have the following extension of Lemma~5 in \\cite{BH}.\n\n\\begin{lm}\\label{AUX}\nLet $10} \\frac{1}{R^{mn} } \\idotsint\\limits_{|\\vec y|\\le R} |\\Omega (\\vec{{y}}\\,' ) | \n\\prod_{j=1}^m \\big| f_j(x-y_j) \\big| ~ d\\vec{{y}}\\,\n\\]\nmaps $L^{p_1}(\\mathbb R^n) \\times \\cdots \\times L^{p_m}(\\mathbb R^n) $ to \n$L^{p }(\\mathbb R^n) $ with norm bounded by a constant multiple of $\\|\\Omega\\|_{L^q(\\mathbb{S}^{mn-1})}$.\n\\end{lm}\n\n\\begin{proof} \nSince $\\Vert \\Omega\\Vert_{L^r(\\mathbb{S}^{mn-1})}\\lesssim \\Vert \\Omega\\Vert_{L^{\\infty}(\\mathbb{S}^{mn-1})}$ for all $10}\\Big( \\frac{1}{R }\\int_0^R \\big| f (x-t\\theta_j) \\big|^{\\mu_j} ~dt\\Big)^{1\/\\mu_j}. \n\\]\nIt follows from this that \n\\[\n \\big\\| \\mathcal M_{\\Omega_l}(f_1,\\dots, f_m)\\big\\|_{L^{r}} \\le \\int_{\\mathbb S^{mn-1}} |\\Omega (\\vec \\theta\\, ) | \\prod_{j=1}^m \n\\big\\| \\mathcal M_{\\mu_j}^{\\theta_j}f_j \\big\\|_{L^{r_j}} ~ d\\vec \\theta,\n\\]\nwhere Minkowski's inequality and H\\\"older's inequality are applied.\nUsing the $L^{r_j}$ boundedness of $\\mathcal M_{\\mu_j}^{\\theta_j} $ for $0<\\mu_j1$ (for which $q>1$ implies the assumption (\\ref{AUXassume})) in the assertion follows from summing the estimates (\\ref{664455-1}) over $l\\ge 0$.\n\n\nThe other case $1\/m1$ such that \n\\begin{equation*}\n\\frac{1}{p}<\\frac{1}{rq}+\\frac{m}{q'} \\,\\,\\,\\Big( \\!\\!<\\frac{1}{q}+\\frac{m}{q'} \\Big),\n\\end{equation*}\nor, equivalently,\n$$\\frac{q(m-1\/p)}{q'(m-1\/r)}-\\frac{1\/p-1\/r}{m-1\/r}>0.$$\nThen the interpolation between (\\ref{664455}) and (\\ref{664455-1}) with appropriate $(r_1,\\dots, r_m)$ satisfying $1\/r=1\/r_1+\\dots+1\/r_m$ (using Theorem 7.2.2 in \\cite{MFA}) yields\n\\begin{equation*}\n\\big\\|\\mathcal M_{\\Omega_l} \\big\\|_{L^{p_1}\\times \\cdots \\times L^{p_m}\\to L^{p }} \\lesssim 2^{l \\frac{1\/p-1\/r}{m-1\/r}} 2^{-l \\frac{q}{q'} \\frac{m-1\/p}{m-1\/r} }=2^{-l(\\frac{q(m-1\/p)}{q'(m-1\/r)}-\\frac{1\/p-1\/r}{m-1\/r})}\n\\end{equation*}\nFinally, the exponential decay in $l$ together \nwith the fact that $\\|\\cdot \\|_{L^p}^p$ is a subadditive quantity for $00$ choose $\\rho\\in \\mathbb Z$ such that $2^{\\rho}\\le \\epsilon<2^{\\rho+1} $. 
\nThen we write\n\\begin{align}\n&\\bigg| \\int_{(\\mathbb R^n)^m\\setminus B(0,\\epsilon)}{K(\\vec{{y}}\\,)\\prod_{j=1}^{m}f_j(x-y_j)}~d\\,\\vec{{y}}\\, \\bigg| \n\\notag\\\\\n&\\qquad\\qquad \\le \n\\bigg| \\int_{(\\mathbb R^n)^m }{\\big( K^{(\\epsilon)}(\\vec{{y}}\\,) - \\widetilde{K}^{(2^{\\rho})}(\\vec{{y}}\\,) \\big) \\prod_{j=1}^{m}f_j(x-y_j)}~d\\,\\vec{{y}}\\, \\bigg| \\label{651} \\\\\n&\\qquad \\qquad\\qquad \\qquad+ \\bigg| \\int_{(\\mathbb R^n)^m} \\widetilde{K}^{(2^{\\rho})}(\\vec{{y}}\\,) \\prod_{j=1}^{m}f_j(x-y_j) ~d\\,\\vec{{y}}\\,\\bigg| \\label{652}.\n\\end{align}\nTerm \\eqref{652} is clearly less than\n\\begin{equation*}\n\\bigg|\\sum_{\\gamma\\in\\mathbb{Z}: \\gamma<-\\rho}\\int_{(\\mathbb R^n)^m}K^{\\gamma}(\\vec{{y}}\\,) \\prod_{j=1}^{m}f_j(x-y_j) ~d\\,\\vec{{y}}\\, \\bigg|\\le \\mathcal{L}_{\\Omega}^{\\sharp}\\big(f_1,\\dots,f_m\\big)(x),\n\\end{equation*}\nwhile \\eqref{651} is controlled by\n$\n \\mathcal{M}_{\\Omega}\\big(f_1,\\dots,f_m\\big)(x)\n$\nas\n\\begin{equation*}\n\\big|K^{(\\epsilon)}(\\vec{{y}}\\,) - \\widetilde K^{(2^{\\rho})}(\\vec{{y}}\\,) \\big|\\lesssim |K(\\vec{{y}}\\,)| \\chi_{|\\vec{{y}}\\,|\\approx 2^{\\rho}}\\lesssim \\frac{|\\Omega(\\vec{{y}}\\,')|}{2^{\\rho mn}}\\chi_{|\\vec{{y}}\\,|\\lesssim 2^{\\rho}}.\n\\end{equation*}\nThus \\eqref{Est77} follows after taking the supremum over all $\\epsilon>0$. \n\nSince the boundedness of $\\mathcal{M}_{\\Omega}$ follows from Lemma~\\ref{AUX} with the fact that $q>\\frac{2m}{m+1}$ implies $\\frac{m}{2} <\\frac{1}{q}+\\frac{m}{q'}$, \nmatters reduce to the boundedness of $\\mathcal{L}_{\\Omega}^{\\sharp}$. \n\nFor each $\\gamma\\in\\mathbb{Z}$ let $$K_{\\mu}:=\\sum_{\\gamma\\in\\mathbb{Z}}K_{\\mu}^{\\gamma}.$$\nIn the study of multilinear rough singular integral operators $\\mathcal{L}_{\\Omega}$ in \\cite{Paper1} whose kernel is $\\sum_{\\gamma\\in\\mathbb{Z}}K^{\\gamma}=\\sum_{\\mu\\in\\mathbb{Z}}\\sum_{\\gamma\\in\\mathbb{Z}}K_{\\mu}^{\\gamma}=\\sum_{\\mu\\in\\mathbb{Z}}K_{\\mu}$,\nthe part where $\\mu$ is less than a constant is relatively simple because the Fourier transform of $K_{\\mu}$ satisfies the estimate \n\\begin{equation}\\label{symbolest}\n\\big|\\partial^{\\alpha}\\widehat{K_{\\mu}}(\\vec{{\\xi}}\\,)\\big|\\lesssim \\Vert \\Omega\\Vert_{L^q(\\mathbb{S}^{mn-1})}|\\vec{{\\xi}}\\,|^{-|\\alpha|}Q(\\mu),\\qquad 1C_0\\sqrt{mn}} \n \\mathcal{L}_{\\Omega,\\mu}^{\\sharp}\\big(f_1,\\dots,f_m\\big) ,\n \\end{equation*}\nwhere we set\n$$\n \\widetilde \\mathcal{L}_{\\Omega}^{\\sharp}\\big(f_1,\\dots,f_m\\big) (x) \n :=\n \\sup_{\\tau \\in \\mathbb Z} \\bigg| \n \\int_{(\\mathbb R^n)^m }{ \\sum_{ \\gamma<\\tau} \\sum_{\\mu\\in\\mathbb{Z}:2^{\\mu-10}\\le C_0\\sqrt{mn}} K^{\\gamma}_{\\mu}(\\vec{{y}}\\,) \\prod_{j=1}^{m}f_j(x-y_j)}~d\\,\\vec{{y}}\\, \n \\bigg| \n$$\nand\n$$\n \\mathcal{L}_{\\Omega,\\mu}^{\\sharp}\\big(f_1,\\dots,f_m\\big)(x) \n := \\sup_{\\tau\\in \\mathbb Z} \\bigg| \n \\sum_{\\gamma<\\tau} \n \\int_{(\\mathbb R^n)^m }{ K^{\\gamma}_{\\mu}(\\vec{{y}}\\,) \\prod_{j=1}^{m}f_j(x-y_j)}~d\\,\\vec{{y}}\\, \n \\bigg| .\n $$\n Then Theorem \\ref{MAXSINGINT} follows from the following two propositions:\n \\begin{prop}\\label{propo1}\n Let $10$ such that\n \\begin{equation*}\n \\big\\Vert \\widetilde{\\mathcal{L}}_{\\Omega}^{\\sharp}(f_1,\\dots,f_m)\\big\\Vert_{L^p}\\le C\\Vert \\Omega\\Vert_{L^q(\\mathbb{S}^{mn-1})}\\prod_{j=1}^{m}\\Vert f_j\\Vert_{L^{p_j}}\n \\end{equation*}\n for Schwartz functions $f_1,\\dots,f_m$ on $\\mathbb R^n$.\n \\end{prop}\n \n \n \\begin{prop}\\label{propo2}\n Let 
$\\frac{2m}{m+1}C_0\\sqrt{mn}$. \n Then there exist $C, \\epsilon_0>0$ such that \n\\begin{equation*}\n\\big\\| \\mathcal{L}_{\\Omega,\\mu}^{\\sharp}(f_1,\\dots,f_m)\\big\\|_{L^{2\/m} } \\lesssim 2^{-\\epsilon_0 \\mu} \\| \\Omega\\|_{L^q(\\mathbb{S}^{mn-1})}\\prod_{j=1}^{m}\\Vert f_j\\Vert_{L^2}\n\\end{equation*}\n for Schwartz functions $f_1,\\dots,f_m$ on $\\mathbb R^n$.\n \n \\end{prop}\n\n\n\n\n\n\n\n\n\\subsection{Proof of Proposition \\ref{propo1}}\n\nWe decompose $\\widetilde{\\mathcal{L}}_{\\Omega}^{\\sharp}$ further so that the Coifman-Meyer multiplier theorem is involved:\nSetting\n$$\\widetilde{K}(\\vec{{y}}\\,):=\\sum_{\\mu\\in\\mathbb{Z}: 2^{\\mu-10}\\le C_0\\sqrt{mn}}K_{\\mu}(\\vec{{y}}\\,)=\\sum_{\\mu\\in\\mathbb{Z}: 2^{\\mu-10}\\le C_0\\sqrt{mn}}\\;\\sum_{\\gamma\\in\\mathbb{Z}}K_{\\mu}^{\\gamma}(\\vec{{y}}\\,),$$\n$\\widetilde{\\mathcal{L}}^{\\sharp}_{\\Omega}\\big(f_1,\\dots,f_m\\big)(x)$ is controlled by the sum of \n\\begin{equation*}\nT_{\\widetilde{K}}^*\\big(f_1,\\dots,f_m\\big)(x):= \\sup_{\\tau\\in\\mathbb{Z}}\\Big|\\int_{|y|> 2^{-\\tau}}\\widetilde{K}(\\vec{{y}}\\,)\\prod_{j=1}^{m}f_j(x-y_j)~d\\,\\vec{{y}}\\, \\Big|\n\\end{equation*}\nand\n\\begin{equation*}\n\\mathfrak{T}_{K}^{**}\\big(f_1,\\dots,f_m\\big)(x):= \\sup_{\\tau\\in\\mathbb{Z}}\\Big| \\int_{(\\mathbb R^n)^m} {K}^{**}_{\\tau}(\\vec{{y}}\\,) \\prod_{j=1}^{m}f_j(x-y_j) ~ d\\,\\vec{{y}}\\, \\Big|, \n\\end{equation*}\nwhere \n\\begin{equation*}\n{K}^{**}_{\\tau}(\\vec{{y}}\\,):=\\Big(\\sum_{\\mu\\in\\mathbb{Z}: 2^{\\mu-10}\\le C_0\\sqrt{mn}}\\; \\sum_{\\gamma<\\tau}K_{\\mu}^{\\gamma}(\\vec{{y}}\\,)\\Big) -\\widetilde{K}(\\vec{{y}}\\,)\\chi_{|\\vec{{y}}\\,|> 2^{-\\tau}}. \n\\end{equation*}\n\n\nTo obtain the boundedness of $T_{\\widetilde{K}}^*$, \nwe claim that $\\widetilde{K}$ is an $m$-linear Calder\\'on-Zygmund kernel with constant $C\\Vert \\Omega\\Vert_{L^q(\\mathbb{S}^{mn-1})}$ for $12^{-\\tau}}\\Big)\n\\end{equation*}\nand thus \n\\begin{equation*}\n\\mathfrak{T}_{K}^{**}\\big(f_1,\\dots,f_m\\big)(x) \\le \\sup_{\\tau\\in\\mathbb{Z}}\\; \\sum_{\\mu\\in\\mathbb{Z}:2^{\\mu-10}\\le C_0\\sqrt{mn} } \\mathcal{I}_{\\mu,\\tau}(x)+\\mathcal{J}_{\\mu,\\tau}(x)\n\\end{equation*} where\n\\begin{equation*}\n\\mathcal{I}_{\\mu,\\tau}(x):= \\sum_{\\gamma<\\tau}\\Big|\\int_{|\\vec{{y}}\\,|<2^{-\\tau}} K_{\\mu}^{\\gamma}(\\vec{{y}}\\,) \\prod_{j=1}^{m}f_j(x-y_j) \\;d\\,\\vec{{y}}\\, \\Big|,\n\\end{equation*}\n\\begin{equation*}\n\\mathcal{J}_{\\mu,\\tau}(x):= \\sum_{\\gamma \\ge \\tau}\\Big|\\int_{|\\vec{{y}}\\,|\\ge 2^{-\\tau}} K_{\\mu}^{\\gamma}(\\vec{{y}}\\,) \\prod_{j=1}^{m}f_j(x-y_j) \\;d\\,\\vec{{y}}\\, \\Big|.\n\\end{equation*}\nWe claim that there exists $\\epsilon>0$ such that\n\\begin{equation}\\label{IJest}\n\\mathcal{I}_{\\mu,\\tau}+ \\mathcal{J}_{\\mu,\\tau}\\lesssim_{C_0,m,n} 2^{\\epsilon \\mu}\\Vert \\Omega\\Vert_{L^1(\\mathbb{S}^{mn-1})}\\prod_{j=1}^{m}\\mathcal{M}f_j ~\\text{ uniformly in ~} \\tau\\in\\mathbb{Z}\n\\end{equation}\nfor $\\mu$ satisfying $2^{\\mu-10}\\le C_0\\sqrt{mn}$, where we recall $\\mathcal M$ is the Hardy-Littlewood maximal operator.\nThen, using H\\\"older's inequality and the boundedness of $\\mathcal{M}$, we obtain\n$$\n\\big\\Vert \\mathfrak{T}_{K}^{**}\\big(f_1,\\dots,f_m\\big) \\big\\Vert_{L^p}\\lesssim \\Vert \\Omega\\Vert_{L^1(\\mathbb{S}^{mn-1})} \\Big\\Vert \\prod_{j=1}^{m}\\mathcal{M}f_j \\Big\\Vert_{L^p}\\lesssim \\Vert \\Omega\\Vert_{L^1(\\mathbb{S}^{mn-1})}\\prod_{j=1}^{m}\\Vert f_j\\Vert_{L^{p_j}}\n$$\n for $1mn$, the sum over $\\gamma\\ge \\tau$ converges to 
$2^{-\\tau(M-mn)}$ and the integral over $|\\vec{{y}}\\,|\\ge 2^{-\\tau}$ is estimated by\n\\begin{align*}\n&\\sum_{l=0}^{\\infty}\\int_{2^{-\\tau+l}\\le |\\vec{{y}}\\,|<2^{-\\tau+l+1}}{\\frac{1}{|\\vec{{y}}\\,|^M}\\prod_{j=1}^{m}|f_j(x-y_j)|}~ d\\,\\vec{{y}}\\,\\\\\n&\\lesssim 2^{\\tau(M-mn)}\\sum_{l=0}^{\\infty}2^{-l(M-mn)} \\Big(\\frac{1}{2^{(-\\tau+l+1)mn}}\\int_{|\\vec{{y}}\\,|\\le 2^{-\\tau+l+1}}\\prod_{j=1}^{m}|f_j(x-y_j)| ~ d\\,\\vec{{y}}\\,\\Big)\\\\\n&\\lesssim 2^{\\tau(M-mn)}\\prod_{j=1}^{m}\\mathcal{M}f_j(x).\n\\end{align*}\nFinally, we have\n\\begin{equation*}\n \\mathcal{J}_{\\mu,\\tau} \\lesssim 2^{\\mu(mn+1-M)}\\Vert \\Omega\\Vert_{L^1(\\mathbb{S}^{mn-1})}\\prod_{j=1}^{m}\\mathcal{M}f_j ,\n\\end{equation*} which completes the proof of (\\ref{IJest}).\n\n \n\\subsection{Proof of Proposition \\ref{propo2}}\nThe proof is based on the wavelet decomposition and the recent developments in \\cite{Paper1}.\nRecalling that $\\widehat{K_{\\mu}^0}\\in L^{q'}$, we apply the wavelet decomposition (\\ref{daubechewavelet}) to write \n\\begin{equation*}\n\\widehat{K_{\\mu}^0}(\\vec{{\\xi}}\\,)=\\sum_{\\lambda\\in\\mathbb{N}_0}\\sum_{\\vec{{G}}\\in\\mathcal{I}^{\\lambda}}\\sum_{\\vec{{k}}\\,\\in (\\mathbb Z^n)^m}b_{\\vec{{G}},\\vec{{k}}\\,}^{\\lambda,\\mu}\\Psi_{G_1,k_1}^{\\lambda}(\\xi_1)\\cdots \\Psi_{G_m,k_m}^{\\lambda}(\\xi_m)\n\\end{equation*}\nwhere \n\\begin{equation*}\nb_{\\vec{{G}},\\vec{{k}}\\,}^{\\lambda,\\mu}:=\\int_{(\\mathbb R^n)^m}{\\widehat{K_{\\mu}^0}(\\vec{{\\xi}}\\,)\\Psi_{\\vec{{G}},\\vec{{k}}\\,}^{\\lambda}(\\vec{{\\xi}}\\,)}~ d\\,\\vec{{\\xi}}\\,.\n\\end{equation*}\nIt is known in \\cite{Paper1} that for any $0<\\delta<1\/q'$,\n\\begin{equation}\\label{maininftyest}\n\\big\\Vert \\{b_{\\vec{{G}},\\vec{{k}}\\,}^{\\lambda,\\mu}\\}_{\\vec{{k}}\\,}\\big\\Vert_{\\ell^{\\infty}}\\lesssim 2^{-\\delta {\\mu}}2^{-\\lambda (M+1+mn)} \\Vert \\Omega\\Vert_{L^q(\\mathbb{S}^{mn-1})}\n\\end{equation} where $M$ is the number of vanishing moments of $\\Psi_{\\vec{{G}}}$. \nMoreover, it follows from the inequality (\\ref{lqestimate}), the Hausdorff-Young inequality, and Young's inequality that\n\\begin{align}\\label{mainlqest}\n\\big\\Vert \\{b_{\\vec{{G}},\\vec{{k}}\\,}^{\\lambda,\\mu}\\}_{\\vec{{k}}\\,}\\big\\Vert_{\\ell^{q'}}&\\lesssim 2^{-\\lambda mn (1\/2-1\/q')}\\Vert \\widehat{K_{\\mu}^0}\\Vert_{L^{q'}}\\lesssim 2^{-\\lambda mn(1\/q-1\/2)}\\Vert \\Omega \\Vert_{L^q(\\mathbb{S}^{mn-1})}.\n\\end{align} \n\n\n\n\nNow we may assume that $2^{\\lambda+\\mu-2}\\le |\\vec k|\\le 2^{\\lambda+\\mu+2}$ due to the compact supports of $\\widehat{K_{\\mu}^0}$ and $\\Psi_{\\vec{{G}},\\vec{{k}}\\,}^{\\lambda}$. 
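Indeed, if $b_{\\vec{{G}},\\vec{{k}}\\,}^{\\lambda,\\mu}\\neq 0$, then the supports of $\\widehat{K_{\\mu}^0}$ and $\\Psi_{\\vec{{G}},\\vec{{k}}\\,}^{\\lambda}$ must intersect; granting that, as in the construction of (\\ref{daubechewavelet}) in \\cite{Paper1}, the former is contained in an annulus $|\\vec{{\\xi}}\\,|\\approx 2^{\\mu}$ and the latter in a ball of radius $O(2^{-\\lambda})$ centered at $2^{-\\lambda}\\vec{{k}}\\,$, any point $\\vec{{\\xi}}\\,$ in the intersection satisfies\n\\begin{equation*}\n2^{-\\lambda}|\\vec{{k}}\\,|\\le |\\vec{{\\xi}}\\,|+\\big|\\vec{{\\xi}}\\,-2^{-\\lambda}\\vec{{k}}\\,\\big|\\lesssim 2^{\\mu}\n\\qquad\\text{and}\\qquad\n2^{\\mu}\\approx |\\vec{{\\xi}}\\,|\\le 2^{-\\lambda}|\\vec{{k}}\\,|+O(2^{-\\lambda}),\n\\end{equation*}\nso that $|\\vec{{k}}\\,|\\approx 2^{\\lambda+\\mu}$; the precise constants depend on the wavelet construction in \\cite{Paper1}. 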
\nIn addition, by symmetry, it suffices to focus on the case $|k_1|\\ge\\cdots\\ge |k_m|$.\nSince $\\widehat{K_{\\mu}^\\gamma}(\\vec{{\\xi}}\\,)=\\widehat{K_{\\mu}^0}(\\vec{{\\xi}}\\,\/2^\\gamma)$, the boundedness of $ \\mathcal{L}_{\\Omega,\\mu}^{\\sharp}$ is reduced to the inequality\n\\begin{align}\\label{2mmainest}\n&\\bigg\\Vert \\sup_{\\tau\\in\\mathbb{Z}}\\Big| \\sum_{\\lambda\\in\\mathbb{N}_0}\\sum_{\\vec{{G}}\\in\\mathcal{I}^{\\lambda}} \\sum_{\\gamma\\in\\mathbb{Z}: \\gamma<\\tau}\\sum_{\\vec{{k}}\\,\\in\\mathcal{U}^{\\lambda+\\mu}} b_{\\vec{{G}},\\vec{{k}}\\,}^{\\lambda,\\mu}\\prod_{j=1}^{m} L_{G_j,k_j}^{\\lambda,\\gamma}f_j\\Big|\\bigg\\Vert_{L^{2\/m}}\\nonumber\\\\\n&\\lesssim 2^{-\\epsilon_0 \\mu}\\Vert \\Omega\\Vert_{L^q(\\mathbb{S}^{mn-1})}\\prod_{j=1}^{m}\\Vert f_j\\Vert_{L^2}\n\\end{align} \nwhere the operators $L_{G_j,k_j}^{\\lambda,\\gamma}$ and the set $\\mathcal{U}^{\\lambda+\\mu}$ are defined as in (\\ref{lgkest}) and (\\ref{uset}). \nWe split $\\mathcal{U}^{\\lambda+{\\mu}}$ into $m$ disjoint subsets $\\mathcal{U}_l^{\\lambda+{\\mu}}$ ($1\\le l\\le m$) as before such that\nfor $k\\in \\mathcal{U}^{\\lambda+\\mu}_l$ we have \n$$|k_1|\\ge\\cdots \\ge |k_l|\\ge 2C_0\\sqrt n\\ge |k_{l+1}|\\ge\\cdots\\ge |k_m|.$$\nThen the left-hand side of (\\ref{2mmainest}) is estimated by\n\\begin{equation*}\n\\bigg( \\sum_{l=1}^{m}\\sum_{\\lambda\\in\\mathbb{N}_0}\\sum_{\\vec{{G}}\\in\\mathcal{I}^{\\lambda}} \\bigg\\Vert \\sup_{\\tau\\in\\mathbb{Z}}\\Big| \\sum_{\\gamma\\in\\mathbb{Z}:\\gamma<\\tau} \\mathcal{T}_{\\vec{{G}},l}^{\\lambda,\\gamma,\\mu}(f_1,\\dots,f_m) \\Big| \\bigg\\Vert_{L^{2\/m}}^{2\/m}\\bigg)^{m\/2}\n\\end{equation*} \nwhere $\\mathcal{T}_{\\vec{{G}},l}^{\\lambda,\\gamma,\\mu}$ is defined by\n\\begin{equation*}\n\\mathcal{T}_{\\vec{{G}},l}^{\\lambda,\\gamma,\\mu}\\big(f_1,\\dots,f_m\\big):=\\sum_{\\vec{{k}}\\,\\in\\mathcal{U}_l^{\\lambda+{\\mu}}}b_{\\vec{{G}},\\vec{{k}}\\,}^{\\lambda,\\mu} \\Big(\\prod_{j=1}^{m}L_{G_j,k_j}^{\\lambda,\\gamma}f_j \\Big).\n\\end{equation*}\n\nWe claim that for each $1\\le l\\le m$ there exists $\\epsilon_0, M_0>0$ such that\n\\begin{align}\\begin{split}\\label{2mmaingoal}\n \\bigg\\Vert \\sup_{\\tau\\in\\mathbb{Z}}\\Big| &\\sum_{\\gamma\\in\\mathbb{Z}:\\gamma<\\tau} \\mathcal{T}_{\\vec{{G}},l}^{\\lambda,\\gamma,\\mu}(f_1,\\dots,f_m) \\Big| \\bigg\\Vert_{L^{2\/m}} \\\\\n&\\lesssim 2^{-\\epsilon_0 \\mu_0}2^{-\\lambda M_0}\\Vert \\Omega\\Vert_{L^q(\\mathbb{S}^{mn-1})}\\prod_{j=1}^{m}\\Vert f_j\\Vert_{L^2},\n\\end{split}\\end{align} \n which concludes (\\ref{2mmainest}). Therefore it remains to prove \\eqref{2mmaingoal}. \n\n\n\\subsubsection*{Proof of (\\ref{2mmaingoal})}\nWhen $2\\le l\\le m$, we apply (\\ref{2mainprop}) with $20$ since $1-\\frac{(m-1)q'}{2m}>0$. 
\n \n\nNow let us prove (\\ref{2mmaingoal}) for $l=1$.\nIn this case, we first see the estimate\n\\begin{equation}\\label{keykeyest}\n\\Big\\Vert \\Big( \\sum_{\\gamma\\in\\mathbb{Z}}\\big| \\mathcal{T}_{\\vec{{G}},1}^{\\lambda,\\gamma,\\mu}(f_1,\\dots,f_m) \\big|^2\\Big)^{1\/2}\\Big\\Vert_{L^{2\/m}} \\lesssim 2^{-\\epsilon_0\\mu} 2^{-M_0\\lambda} \\Vert \\Omega\\Vert_{L^q(\\mathbb{S}^{mn-1})}\\prod_{j=1}^{m}\\Vert f_j\\Vert_{L^2}\n\\end{equation}\nfor some $\\epsilon_0, M_0>0$, which can be proved, as in \\cite[Section 6]{Paper1}, by using (\\ref{1mainprop}) and (\\ref{maininftyest}).\n\nChoose a Schwartz function $\\Gamma$ on $\\mathbb R^n$ whose Fourier transform is supported in the ball $ \\{\\xi\\in\\mathbb R^n: |\\xi|\\le 2\\}$ and is equal to $1$ for $|\\xi|\\le 1$, and define $\\Gamma_k:=2^{kn}\\Gamma(2^k\\cdot)$ so that\n$\\textup{Supp}(\\widehat{\\Gamma_k})\\subset \\{\\xi\\in\\mathbb R^n: |\\xi|\\le 2^{k+1}\\}$ and $\\widehat{\\Gamma_k}(\\xi)=1$ for $|\\xi|\\le 2^k$.\n\n\nSince the Fourier transform of $\\mathcal{T}_{\\vec{{G}},1}^{\\lambda,\\gamma,\\mu}(f_1,\\dots,f_m)$ is supported in the set \n$\\big\\{ \\xi\\in\\mathbb R^n: 2^{\\gamma+\\mu-5}\\le |\\xi|\\le 2^{\\gamma+\\mu+4} \\big\\}$, we can write\n\\begin{equation*}\n\\sum_{\\gamma\\in\\mathbb{Z}: \\gamma<\\tau}\\mathcal{T}_{\\vec{{G}},1}^{\\lambda,\\gamma,\\mu}(f_1,\\dots,f_m)=\\Gamma_{\\tau+\\mu+3}\\ast \\Big( \\sum_{\\gamma\\in\\mathbb{Z}: \\gamma<\\tau}\\mathcal{T}_{\\vec{{G}},1}^{\\lambda,\\gamma,\\mu}(f_1,\\dots,f_m)\\Big)\n\\end{equation*}\nand then split the right-hand side into\n \\begin{align*}\n& \\Gamma_{\\tau+\\mu+3}\\ast \\Big( \\sum_{\\gamma\\in\\mathbb{Z}}\\mathcal{T}_{\\vec{{G}},1}^{\\lambda,\\gamma,\\mu}(f_1,\\dots,f_m)\\Big)-\\Gamma_{\\tau+\\mu+3}\\ast \\Big( \\sum_{ \\gamma\\in\\mathbb{Z} :\\gamma\\ge \\tau}\\mathcal{T}_{\\vec{{G}},1}^{\\lambda,\\gamma,\\mu}(f_1,\\dots,f_m)\\Big).\n\\end{align*}\nDue to the Fourier support conditions of $\\Gamma_{\\tau+\\mu+3}$ and $\\mathcal{T}_{\\vec{{G}},1}^{\\lambda,\\gamma,\\mu}(f_1,\\dots,f_m)$, the sum in the second term can be actually taken over $\\tau\\le \\gamma\\le \\tau+9$.\nTherefore, the left-hand side of (\\ref{2mmaingoal}) is controlled by the sum of\n\\begin{equation}\\label{DefI}\nI:=\\bigg\\Vert \\sup_{\\nu\\in\\mathbb{Z}}\\Big| \\Gamma_{\\nu}\\ast \\Big(\\sum_{\\gamma\\in\\mathbb{Z}}{ \\mathcal{T}_{\\vec{{G}},1}^{\\lambda,\\gamma,\\mu}(f_1,\\dots,f_m) } \\Big)\\Big| \\bigg\\Vert_{L^{2\/m}}\n\\end{equation}\nand\n\\begin{equation}\\label{DefII}\nII:=\\sum_{\\gamma=0}^{9}\\Big\\Vert \\sup_{\\tau\\in\\mathbb{Z}}\\big| \\Gamma_{\\tau+\\mu+3}\\ast T_{\\vec{{G}},1}^{\\lambda,\\tau+\\gamma,\\mu}(f_1,\\dots,f_m)\\big|\\Big\\Vert_{L^{2\/m}}.\n\\end{equation}\n\n\nFirst of all, when $0\\le \\gamma\\le 9$,\nthe Fourier supports of both $\\Gamma_{\\tau+\\mu+3}$ and $T_{\\vec{{G}},1}^{\\lambda,\\tau+\\gamma,\\mu}(f_!,\\dots,f_m)$ are $\\{\\xi\\in \\mathbb R^n: |\\xi|\\sim 2^{\\tau+\\mu}\\}$. This implies that\nfor any $01$ and $\\mu\\in \\mathbb{Z}$, then\n\\begin{equation}\\label{marshallest}\n\\Big\\Vert \\Big\\{ \\Phi^{(1)}_j\\!\\ast\\! \\Big(\\sum_{\\gamma\\in\\mathbb{Z}}{g_{\\gamma}}\\!\\Big)\\!\\Big\\}_{\\! 
j\\in\\mathbb{Z}}\\Big\\Vert_{L^p(\\ell^q)}\\lesssim_{C} \\big\\Vert \\big\\{ g_j\\big\\}_{j\\in\\mathbb{Z}}\\big\\Vert_{L^p(\\ell^q)} \\quad \\text{uniformly in }~\\mu\n\\end{equation} for $0C_0\\sqrt{mn}$ and\n $$\\widehat{\\Theta^{(m)}_{{\\mu}_0-1}}(\\vec{{\\xi}}\\,):=1-\\sum_{{\\mu}={\\mu}_0}^{\\infty}{\\widehat{\\Phi_{\\mu}^{(m)}}(\\vec{{\\xi}}\\,)}.$$\nClearly, \n\\begin{equation*}\n\\widehat{\\Theta^{(m)}_{{\\mu}_0-1}}(\\vec{{\\xi}}\\,)+\\sum_{{\\mu}={\\mu}_0}^{\\infty}{\\widehat{\\Phi_{\\mu}^{(m)}}(\\vec{{\\xi}}\\,)}=1\n\\end{equation*}\nand thus we can write\n\\begin{equation*}\n\\sigma(\\vec{{\\xi}}\\,)=\\widehat{\\Theta_{{\\mu}_0-1}^{(m)}}(\\vec{{\\xi}}\\,)\\sigma(\\vec{{\\xi}}\\,)+\\sum_{{\\mu}={\\mu}_0}^{\\infty}{\\widehat{\\Phi_{\\mu}^{(m)}}(\\vec{{\\xi}}\\,)\\sigma(\\vec{{\\xi}}\\,)}=:\\sigma_{{\\mu}_0-1}(\\vec{{\\xi}}\\,)+\\sum_{{\\mu}={\\mu}_0}^{\\infty}{\\sigma_{\\mu}(\\vec{{\\xi}}\\,)}.\n\\end{equation*}\nNote that $\\sigma_{{\\mu}_0-1}$ is a compactly supported smooth function and thus the corresponding maximal multiplier operator $\\mathscr{M}_{\\sigma_{{\\mu}_0-1}}$, defined by\n\\begin{align*}\n&\\mathscr{M}_{\\sigma_{{\\mu}_0-1}}\\big(f_1,\\dots,f_m\\big)(x)\\\\\n&:=\\sup_{\\nu \\in\\mathbb{Z}}\\Big|\\int_{(\\mathbb R^n)^m}{\\sigma_{{\\mu}_0-1}(2^{\\nu} \\vec{{\\xi}}\\,)\\Big( \\prod_{j=1}^{m}\\widehat{f_j}(\\xi_j)\\Big) e^{2\\pi i\\langle x,\\sum_{j=1}^{m}\\xi_j \\rangle}}d\\vec{{\\xi}}\\, \\Big|,\n\\end{align*}\nis bounded by a constant multiple of \n$\n\\mathcal{M}f_1(x)\\cdots\\mathcal{M}f_m(x)\n $ \n where $\\mathcal{M}$ is the Hardy-Littlewood maximal operator on $\\mathbb R^n$ as before.\nUsing H\\\"older's inequality and the $L^2$-boundedness of $\\mathcal{M}$, we can prove\n\\begin{equation*}\n\\big\\Vert \\mathscr{M}_{\\sigma_{{\\mu}_0-1}}(f_1,\\dots,f_m) \\big\\Vert_{L^{2\/m}}\\lesssim \\prod_{j=1}^{m}\\Vert f_j\\Vert_{L^2}.\n\\end{equation*}\n\nIt remains to show that\n\\begin{equation}\\label{lefttoshow}\n\\Big\\Vert \\sum_{{\\mu}={\\mu}_0}^{\\infty}\\mathscr{M}_{\\sigma_{\\mu}}(f_1,\\dots,f_m)\\Big\\Vert_{L^{2\/m}}\\lesssim \\prod_{j=1}^{m}\\Vert f_j\\Vert_{L^2}.\n\\end{equation}\nUsing the decomposition (\\ref{daubechewavelet}), write\n\\begin{equation}\\label{sigmajdef}\n\\sigma_{\\mu}(\\vec{{\\xi}}\\,)=\\sum_{\\lambda\\in\\mathbb{N}_0}\\sum_{\\vec{{G}}\\in\\mathcal{I}^{\\lambda}}\\sum_{\\vec{{k}}\\,\\in (\\mathbb Z^n)^m}{b_{\\vec{{G}},\\vec{{k}}\\,}^{\\lambda,\\mu}\\Psi_{G_1,k_1}^{\\lambda}(\\xi_1)\\cdots\\Psi_{G_m,k_m}^{\\lambda}(\\xi_m)}\n\\end{equation}\n where \n \\begin{equation*}\n b_{\\vec{{G}},\\vec{{k}}\\,}^{\\lambda,\\mu}:=\\int_{(\\mathbb R^n)^m}{\\sigma_{\\mu}(\\vec{{\\xi}}\\,)\\Psi_{\\vec{{G}},\\vec{{k}}\\,}^{\\lambda}(\\vec{{\\xi}}\\,)}d\\vec{{\\xi}}\\,.\n \\end{equation*} \n \n Let $M:=\\Big[ \\frac{(m-1)n}{2}\\Big]+1$ and choose $10$ be a \nfamily of $m$-linear operators while $T_\\ast=\\sup_{t>0} |T_t| $ is the associated maximal operator. Suppose that there is a constant $B$ such that \n\\begin{equation}\\label{100}\n\\|T_{\\ast} (f_1,\\dots,f_m)\\|_{L^{q,\\infty}} \\leq B \\prod_{j=1}^m \\|f_j\\|_{L^{p_j}}\n \\end{equation}\n for all $f_j\\in L^{p_j}(\\mathbb{R}^n)$. Also suppose that for all $\\varphi_j $ in a dense subclass $D_j $ of \n $L^{p_j}(\\mathbb{R}^n)$ we have \n\\begin{equation}\\label{tto1}\n \\lim_{t\\to 0} T_{t} (\\varphi_1,\\dots, \\varphi_m) = T(\\varphi_1,\\dots,\\varphi_m)\n \\end{equation}\nexists and is finite. 
Then for all functions $f_j\\in L^{p_j}(\\mathbb{R}^n) $ the limit in \\eqref{tto1} exists and is finite a.e., and defines an $m$-linear operator which uniquely extends $T$ defined on $D_1\\times \\cdots \\times D_m$ and\nwhich is bounded from $L^{p_1}\\times \\cdots \\times L^{p_m}$ to $L^{q,\\infty}(\\mathbb{R}^n)$. \n\\end{prop}\n\nWe now state the two main corollaries of our work. \n\n\\begin{cor}\\label{CCC1}\nLet $\\Omega$ be as in Theorem~\\ref{MAXSINGINT} and let\n\\begin{equation*}\n\\mathcal{L}_{\\Omega}^*\\big(f_1,\\dots,f_m\\big)(x):=\\sup_{\\epsilon>0} \n\\big| \\mathcal{L}_{\\Omega}^\\epsilon(f_1,\\dots, f_m)(x) \\big|, \n\\end{equation*}\nwhere \n\\[\n\\mathcal{L}_{\\Omega}^\\epsilon(f_1,\\dots, f_m)(x)=\n \\int_{(\\mathbb R^n)^m\\setminus B(0,\\epsilon)}{K(\\vec{{y}}\\,)\\prod_{j=1}^{m}f_j(x-y_j)}~d\\,\\vec{{y}}\\,. \n\\]\nThen for $f_j\\in L^2(\\mathbb{R}^n)$, $j=1,\\dots , m$, the truncated singular integrals \n$\\mathcal{L}_{\\Omega}^\\epsilon(f_1,\\dots, f_m)$ converge a.e. as $\\epsilon\\to 0$ to \n$ {\\mathcal{L}_{\\Omega}}(f_1,\\dots, f_m)$, where $ {\\mathcal{L}_{\\Omega}}$ denotes here the bounded extension of $\\mathcal{L}_{\\Omega}$ on $L^2\\times \\cdots\\times L^2.$\n\\end{cor}\n\n\n\n\\begin{cor}\\label{CCC2}\nLet $\\sigma$ be as in Theorem~\\ref{application4} \nand for $\\nu\\in \\mathbb Z$ let $S_\\sigma^\\nu$ be as in \\eqref{defSsinu}. Then for \n$f_j\\in L^2(\\mathbb{R}^n)$, $j=1,\\dots , m$, the $L^{2\/m}(\\mathbb{R}^n)$ functions \n$\nS_\\sigma^\\nu(f_1,\\dots, f_m) \n$\nconverge a.e. to $\\sigma(0) f_1 \\cdots f_m$ as $\\nu \\to -\\infty$. Additionally, if $\\lim_{y\\to \\infty} \n\\sigma(y)$ exists and and equals $L$, then the functions \n$S_\\sigma^\\nu(f_1,\\dots, f_m) $ converge a.e. to \n$L f_1 \\cdots f_m$ as $\\nu \\to \\infty$. \n\\end{cor}\n\nTo verify these corollaries, we notice that the claimed \nconvergence holds pointwise everywhere for smooth funcitons with compact support $f_j$ \nby the Lebesgue dominated convergence theorem. In the case of the Corollary~\\ref{CCC2} this assertion is straightforward, while in the case of Corollary~\\ref{CCC1} it is a consequence \nof the smoothness of the function $f_1(x)\\cdots f_m(x)$ at the origin combined with the cancellation of $\\Omega$. The assertions in both corollaries then follow by \napplying Proposition~\\ref{prop00}.\n}\n\\end{comment}\n\nAs of this writing, we are uncertain how to extend Theorem~\\ref{application4} \nin the non-lacunary case. A new ingredient may be necessary to accomplish this. \\\\\n\nWe have addressed the boundedness of several multilinear and maximal multilinear\noperators at the initial point $L^2\\times \\cdots \\times L^2\\to L^{2\/m}$. Our future \ninvestigation related to this project has two main directions: \n(a) to extend this initial estimate to many other operators, such as the \ngeneral maximal multipliers considered in \\cite{Gr_He_Ho2020, Ru}, and (b) to obtain \n $L^{p_1}\\times \\cdots \\times L^{p_m}\\to L^{p}$ bounds for all of these operators in the \n largest possible range of exponents possible. Additionally, one could consider the \n study of related endpoint estimates. We hope to achieve this goal in future \n publications. 
\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{INTRODUCTION}\n\\label{sec:intro} \n\nAccurately counting the number of cells in microscopic images is important for medical diagnoses and biological applications~\\cite{venkatalakshmi2013automatic}.\nHowever, manual cell counting is a very time-consuming, tedious, and error-prone task. An automatic and efficient solution to improve the counting accuracy is highly desirable, but automatic cell counting is challenged by the low image contrast and significant inter-cell occlusions in 2D microscopic images~\\cite{matas2004robust,barinova2012,arteta2012,xing2014automatic}.\n\nRecently, density regression-based counting methods\nhave been employed to address these challenges~\\cite{lempitsky2010,xie2018}.\nThese methods employ machine-learning tools to learn a density regression model (DRM) that estimates the cell density distribution from the characteristics\/features of a given image.\nThe number of cells can be subsequently counted by integrating the estimated density map.\nHowever, in these methods, a large amount of annotated data are required, which can be difficult to obtain in practice.\nUsing annotated synthetic images to train a DRM can mitigate this problem~\\cite{xie2018}.\nHowever, due to the intrinsic domain shift between the experimental and synthetic cell images, a DRM trained with synthetic images may generalize poorly to experimental images~\\cite{lempitsky2010,xie2018}.\nCurrently available DRM methods~\\cite{lempitsky2010,xie2018} cannot address this critical issue.\n\nDomain adaptation methods based on unsupervised adversarial learning have been proposed \nto mitigate the harmful effects of domain shift in computer vision tasks, such as classification~\\cite{tzeng2017adversarial} and segmentation~\\cite{dou2018unsupervised}. \nThese methods aim at learning transformations that map two shifted image domains to a common feature space, so that the learned model that works for one domain can be applied to another.\nThe methods have demonstrated very promising performance for the target tasks~\\cite{tzeng2017adversarial,dou2018unsupervised}.\n\nCombining the advantages of both supervised learning-based density regression methods and unsupervised learning-based domain adaptation methods, \nThis study proposes a novel, manual-annotating-free automatic cell counting method. The proposed method is evaluated on experimental immunofluorescent microscopic images of human embryonic stem cells (hESC).\n\n\\section{Methodology}\n\\label{sec:methodology}\n\n\\subsection{Background: Density Regression-Based Automatic Cell Counting}\n\\label{ssec:density}\n\nThe goal of density regression-based cell counting methods is to learn a density function $F$ \nthat can be employed to estimate the cell density map for a given image~\\cite{lempitsky2010,xie2018}.\nFor a given image $X\\in \\mathbb{R}^{M\\times N}$ that includes $N_c$ cells, \nthe corresponding density map $Y\\in \\mathbb{R}^{M\\times N}$ can be considered as the superposition of a set of $N_c$ normalized 2D discrete Gaussian kernels that are placed at the centroids of the $N_c$ cells. \nLet $S = \\{ ({s_{k_x}},{s_{k_y}})\\in \\mathbb{N}^2: k = 1,2, ..., N_c\\}$ represent the cell centroid positions in $X$. 
\nEach pixel $Y_{i,j}$ on the density map $Y$ can be expressed as\n\\begin{equation}\nY_{i,j} = \\sum_{k=1}^{N_c} G_\\sigma (i-{s_{k_x}},j-{s_{k_y}}), \\quad \\forall \\quad i \\in M, \\quad j \\in N, \\\\\n\\label{eq:map}\n\\end{equation}\n\n\\noindent where $G_\\sigma(n_x,n_y) = C \\cdot e^{-\\frac{n_x^2+n_y^2}{2\\sigma^2}}\\in \\mathbb{R}^{(2K_G+1)\\times (2K_G+1)}, n_x, n_y = -K_G, -K_G+1,..., K_G$, is a normalized 2D Gaussian kernel that satisfies $\\sum_{n_x=-K_G}^{K_G}\\sum_{n_y=-K_G}^{K_G} G_\\sigma(n_x,n_y) =1$. Here, $\\sigma^2$ is the isotropic covariance, $(2K_G+1)\\times (2K_G+1)$ is the kernel size, and C is a normalization constant\n\nThe density regression-based cell counting process includes three steps: \n(1) map an image into a feature map,\n(2) estimate a cell density map from the feature map, and \n(3) integrate the estimated density map for cell counting.\nIn the first step, each pixel $X_{i,j}$ can be assumed to be associated with a real-valued feature vector $\\phi(X_{i,j})\\in \\mathbb{R}^Z$. \nThe feature map $P\\in \\mathbb{R}^{M\\times N \\times Z}$ of $X$ can be generated using specific feature extraction methods, such as the dense scale invariant feature transform (SIFT) descriptor~\\cite{vedaldi2010vlfeat}, ordinary filter banks~\\cite{fiaschi2012}, \nor codebook learning~\\cite{sommer2011ilastik}.\nIn the second step, the estimated density $\\hat{Y}_{i,j}$ of each pixel $X_{i,j}$ can be obtained by applying a pre-trained density regression function $F$ on the given $\\phi(X_{i,j})$:\n\\begin{equation}\n\t\\hat{Y}_{i,j} = F(\\phi(X_{i,j});\\Theta),\n\t\\label{eq:estimatemap}\n\\end{equation}\n\n\\noindent where $\\Theta$ is a parameter vector that determines the function $F$.\nFinally, in the third step, the number of cells in $X$, $N_c$, can be counted by integrating the estimated density map $\\hat{Y}$ over the image region:\n\\begin{equation}\n\tN_c \\approx \\hat{N_c} = \\sum_{i=1}^{M}\\sum_{j=1}^{N} \\hat{Y}_{i,j}.\n\t\\label{eq:counting}\n\\end{equation}\n\nA key task in density regression-based cell counting methods is learning the function $F$ by use of training datasets.\nThe learning of $F$ and the related cell counting method proposed in this study are described below.\n\n\\subsection{The Proposed Automatic Cell Counting Framework}\n\\label{ssec:method}\n\nThe proposed density regression-based automatic cell counting method is implemented by use of both supervised and unsupervised learning, and employs both annotated synthetic images and unannotated experimental images.\nCombining the advantages of both supervised learning-based density regression methods and unsupervised learning-based domain adaptation methods, \nthe proposed method can learn a DRM by use of annotated synthetic images without the need of manually-annotated experimental images. A domain adaptation model (DAM) will be learned by use of both unannotated synthetic and experimental images.\n\n\\subsubsection{Overview of the Proposed Method}\n\\label{sssec:overview}\n\nThe proposed method, shown in Figure~\\ref{fig:framework}, has three phases: 1) Source DRM training (Section~\\ref{sssec:drm}), 2) DAM training (Section~\\ref{sssec:dam}), and 3) Density map estimation for cell counting based on the target DRM (Section~\\ref{sssec:counting}).\nHere, a source DRM represents the DRM trained with synthetic images (the source domain), while a target DRM is the domain-adapted DRM that can be employed for estimating the density map of a given experimental image (the target domain). 
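Before detailing these three phases, we illustrate the density-map construction in \\eqref{eq:map} and the counting step in \\eqref{eq:counting} with a minimal NumPy sketch. The variable names (\\texttt{centroids}, \\texttt{sigma}, \\texttt{K\\_G}) and the image-indexing convention are hypothetical and serve only to make the notation concrete; boundary clipping of the kernels is ignored.\n\\begin{verbatim}\nimport numpy as np\n\ndef gaussian_kernel(sigma=3, K_G=10):\n    # normalized 2D Gaussian kernel G_sigma of size (2*K_G+1) x (2*K_G+1)\n    ax = np.arange(-K_G, K_G + 1)\n    xx, yy = np.meshgrid(ax, ax)\n    g = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))\n    return g / g.sum()  # normalization constant C chosen so the kernel sums to one\n\ndef density_map(centroids, shape=(256, 256), sigma=3, K_G=10):\n    # superpose one normalized kernel at each annotated cell centroid\n    y = np.zeros(shape)\n    g = gaussian_kernel(sigma, K_G)\n    for (sx, sy) in centroids:  # assumes centroids lie at least K_G pixels from the border\n        y[sx - K_G:sx + K_G + 1, sy - K_G:sy + K_G + 1] += g\n    return y\n\n# the cell count is obtained by integrating the (estimated) density map:\n# n_cells = density_map(centroids).sum()\n\\end{verbatim}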
\nIn the source domain training phase, a source DRM consisting of an encoder CNN (ECNN) and decoder CNN (DCNN) is trained by use of supervised learning with a set of annotated images in the source domain. The trained ECNN maps the images in the source domain to a feature space of the source domain, while the DCNN maps the feature space to the corresponding density maps. The trained ECNN will be employed for training a DAM, while the trained DCNN is employed as part of the target DRM for density estimation and cell counting. In the DAM training phase, a DAM is trained in an unsupervised manner by use of the trained ECNN and a set of unannotated synthetic images and experimental images. The trained DAM will become part of the target DRM and will be employed to map the images in the target domain to a feature space that has minimum domain shift with that of the source domain. In the cell counting phase, the combination of the trained DAM and DCNN forms the target DRM to estimate the density map for a given experimental image.\n\n\\begin{figure}\n\t\\begin{center}\n\t \\includegraphics[width=\\textwidth]{framework.pdf}\n\t\\end{center}\n\\caption{Overview of the proposed automatic cell counting framework.}\n\\label{fig:framework}\n\\end{figure}\n\n\\subsubsection{Source DRM Training}\n\\label{sssec:drm}\n\nThe first phase of the proposed method is to train a source DRM with annotated synthetic images, as shown in the left block of Figure~\\ref{fig:drm}.\nThe trained source DRM then determines a density function $F$.\nThe source DRM is designed as a fully convolutional neural network (FCNN) that includes an encoder CNN (ECNN) and a decoder CNN (DCNN). This design is motivated by a network architecture described in the literature~\\cite{xie2018}.\nThe ECNN encodes synthetic images to a low-dimensional and highly-representative feature space, while the DCNN decodes the feature space to estimate the corresponding density map. \nThe architectures of the ECNN and DCNN are shown in Figure~\\ref{fig:drm}.\nEach block in the ECNN or DCNN includes a chain of layers.\nHere, CONV represents a convolutional layer, Pool represents a max-pooling layer, and \\lq\\lq{}Up\\rq\\rq{} represents an up-sampling layer. \nThere are a total of eight CONV layers in the ECNN and DCNN. \nThe numbers of kernels in these eight CONV layers are set to 32, 64, 128, 512, 128, 64, 32, and 1, respectively.\nThe size of each kernel in the first seven CONV layers and that of the kernel in the last CONV layer are set to $3\\times 3$ and $1\\times 1$, respectively.\n\n\nLet the source DRM be denoted as $F(X^s; \\Theta)$, \nwhere $X^s$ represents a synthetic image in the source domain, \nand $\\Theta = (\\Theta^e, \\Theta^d)$ represents the parameter vectors in the ECNN and the DCNN.\nTraining the source DRM is equivalent to learning a function $F$ with a parameter vectors $(\\Theta^e, \\Theta^d)$ that maps $X^s$ to an estimated density map, $\\hat{Y}$, such that\n\\begin{equation}\nY \\approx \\hat{Y} = F(X^s; \\Theta^e, \\Theta^d).\n\\end{equation}\n\nLet the source ECNN and DCNN be denoted as $\\mathcal{A}^s(X^s;\\Theta^{e})$ and $\\mathcal{D}^s(\\mathcal{A}^s(X^s;\\Theta^{e});\\Theta^{d})$, respectively.\nTherefore, $F(X^s; \\Theta) = \\mathcal{D}^s(\\mathcal{A}^s(X^s; \\Theta^{e}); \\Theta^{d})$.\nThe ECNN and the DCNN are jointly trained with a given set of $B$ training data \n$G^s = \\{(X^s_i,Y^s_i)\\}_{i = 1, ..., B}$, \nwhere $X^s_i$ is the $i$-th synthetic image and $Y^s_i$ is its ground truth density map. 
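For illustration only, one plausible Keras-style realization of the eight CONV layers listed above is sketched below; the placement of the pooling and up-sampling layers, the ReLU activations and the input size are assumptions of the sketch, and the actual arrangement follows Figure~\\ref{fig:drm}.\n\\begin{verbatim}\nfrom keras.models import Model\nfrom keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D\n\ndef build_source_drm(input_shape=(256, 256, 1)):\n    x_in = Input(shape=input_shape)\n    # ECNN: encodes a synthetic image into a low-dimensional feature map\n    h = Conv2D(32, (3, 3), activation='relu', padding='same')(x_in)\n    h = MaxPooling2D((2, 2))(h)\n    h = Conv2D(64, (3, 3), activation='relu', padding='same')(h)\n    h = MaxPooling2D((2, 2))(h)\n    h = Conv2D(128, (3, 3), activation='relu', padding='same')(h)\n    h = MaxPooling2D((2, 2))(h)\n    feat = Conv2D(512, (3, 3), activation='relu', padding='same')(h)\n    # DCNN: decodes the feature map into the estimated density map\n    h = UpSampling2D((2, 2))(feat)\n    h = Conv2D(128, (3, 3), activation='relu', padding='same')(h)\n    h = UpSampling2D((2, 2))(h)\n    h = Conv2D(64, (3, 3), activation='relu', padding='same')(h)\n    h = UpSampling2D((2, 2))(h)\n    h = Conv2D(32, (3, 3), activation='relu', padding='same')(h)\n    y_hat = Conv2D(1, (1, 1), activation='linear')(h)  # estimated density map\n    return Model(inputs=x_in, outputs=y_hat)\n\\end{verbatim}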
\nThe training process is implemented through the minimization of a loss function $L(\\Theta)$ that is defined as\n\\begin{equation}\nL(\\Theta) = \\frac{1}{B}\\sum_{i=1}^{B}\\left\\lVert Y_i^s - F(X^s_i;\\Theta)\\right\\rVert ^2.\n\\label{eq:loss1}\n\\end{equation}\n\nThe numerical minimization of $L(\\Theta)$ is performed via a momentum stochastic gradient descent (SGD) method~\\cite{bottou2010large}.\nA trained $F$ with the optimized parameters $\\Theta^*= (\\Theta^{e*}, \\Theta^{d*})$ is obtained in this step.\nThe trained ECNN, $\\mathcal{A}^s(X^s;\\Theta^{e*})$, will be employed for training the DAM as described next.\n\n\\subsubsection{DAM Training}\n\\label{sssec:dam}\n\nThe second phase of the proposed method is to train a DAM by use of images from both the source and target domains and the trained ECNN, $\\mathcal{A}^s(X^s;\\Theta^{e*})$.\nThe trained DAM is employed to map images in the target domain to a feature space that has minimum domain shift with the feature space of images in the source domain.\n\n\\begin{figure} [!ht]\n\\begin{center}\n\t\\begin{subfigure}{0.7\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{architecture.pdf}\n\t\t\\caption{The architecture of the source DRM}\n\t\t\\label{fig:drm}\n\t\\end{subfigure}\n \\break\n\t\\begin{subfigure}{0.4\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{DAM.pdf}\n \\caption{The architecture of the DAM}\n \\label{fig:dam}\n \\end{subfigure}\n\t\\begin{subfigure}{0.55\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{DCM.pdf}\n \\caption{The architecture of the DCM}\n \\label{fig:dcm}\n \\end{subfigure}\n\\end{center}\n\\caption{\nThe network architectures of the source DRM (ECNN+DCNN), the DAM, and the DCM.\n}\n\\label{fig:dam_training}\n\\end{figure}\n\nThe DAM training process is shown in the middle block of Figure~\\ref{fig:framework}.\nThe network architecture of the DAM is shown in Figure~\\ref{fig:dam}.\nLet the DAM be denoted as $\\mathcal{A}^t(X^t,\\Theta^t)$.\nThe DAM is trained with unannotated source and target images through the minimization of domain shifts between the feature spaces of both the source and target domains. \nThe Wasserstein distance is identified as a metric measuring this domain shift in this study. \nDue to the difficulty of directly computing the Wasserstein distance, as discussed in the literature~\\cite{arjovsky2017wasserstein},\na domain critic model (DCM) is set up as a surrogate to estimate the Wasserstein distance for differentiating the feature distributions of the two domains.\nThe architecture of the DCM employed in this study is shown in Figure~\\ref{fig:dcm}.\nThe DCM includes two CONV layers.\nThe numbers of kernels in these two CONV layers are set to 128 and 256, respectively. \nThe size of each kernel is set to $3\\times 3$. \nAVE represents an average pooling layer, and FC represents a fully connected layer. \nDropout rate is set to 0.5, and the number of neurons in the FC layer is set to 256. \nThe DAM and DCM are optimized iteratively via an adversarial loss function~\\cite{arjovsky2017wasserstein} in an unsupervised manner by use of unannotated synthetic and experimental images.\n\n\n\\subsubsection{Density Estimation for Cell Counting}\n\\label{sssec:counting}\nThe final phase of the proposed method is density estimation for automatic cell counting on a given image\nas shown in the right block of Figure~\\ref{fig:framework}. 
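In code, this phase amounts to a two-line sketch of the form below, where the hypothetical names \\texttt{dam} and \\texttt{dcnn} denote the trained DAM and the trained DCNN (assumed to be Keras models) and \\texttt{x\\_target} is a given experimental image.\n\\begin{verbatim}\ndensity = dcnn.predict(dam.predict(x_target))  # estimated density map\nn_cells = density.sum()                        # count by integrating the density map\n\\end{verbatim}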
\nThe combination of the trained DAM and the trained DCNN forms a target DRM, \n$\\mathcal{D}^s(\\mathcal{A}^t(X^t;\\Theta^{t*});\\Theta^{d*})$.\nThe trained DAM maps an experimental image to a feature map in the feature space of the source domain,\nand the trained DCNN estimates the density map of the image with this feature map. \nFinally, the number of cells is subsequently counted by integrating the estimated density map.\n\n\\section{Experimental Results}\n\\label{sec:experiment}\n\n\\subsection{Datasets}\n\\label{ssec:dataset}\n\nThe datasets used in this study are described in Table~\\ref{tab:dataset}.\nIn this experiment, $200$ synthetic bacterial fluorescent microscopic cell images of $256\\times 256$ pixels each were generated by use of methods described in the literature~\\cite{lehmussola2007}.\nThe synthetic images were annotated automatically when generated. The ground truth density map of each synthetic image was generated by placing normalized Gaussian kernels at each annotated cell centroid in the image, according to the methods introduced in Section~\\ref{ssec:density}. The values of $\\sigma$ and $K_G$ were set to $3$ pixels and $10$ pixels, respectively.\n\nIn addition, $122$ experimental immunofluorescent microscopic hESC images of $512\\times 512$ pixels each were employed.\nIn 10 of these $122$ images, the centroids of each cell were manually annotated.\nThe density map for each of the $10$ images was generated with $\\sigma$ and $K_G$ being $3$ pixels and $10$ pixels, respectively.\nThese density maps were employed as the ground truth to evaluate the cell counting performance.\n\\begin{table}[ht]\n\\caption{Datasets employed in this study}\n\\begin{center}\n\\begin{tabular}{c c c c}\n\\hline\\hline\n\\textbf{Images}\t\t& \\textbf{Annotated synthetic}\t& \\textbf{Unannotated hESC}& \\textbf{Annotated hESC} \\\\\n\\textbf{Dataset size}\t& 200 \t\t\t\t\t\t& 112 \t\t\t\t\t\t& 10 \\\\\n\\textbf{Image size (pixels)} & $256\\times 256 $ \t\t& $512 \\times 512$ \t\t\t& $512\\times 512$\\\\\n\\textbf{Purpose}\t\t& DRM, DAM training\t\t\t& DAM training \t\t\t& Cell counting evaluation \\\\\n\\hline\\hline \n\\end{tabular}\n\\end{center}\n\\label{tab:dataset}\n\\end{table}\n\nEach pair of a synthetic image and a ground truth density map was employed for the supervised learning of the source DRM.\nOut of the $200$ samples, $160$ were employed for training the source DRM, and the remaining $40$ were employed as the validation set.\nDifferent from the source DRM training, unannotated synthetic and experimental data were employed in the DAM training phase.\nThe same $200$ synthetic cell images (without annotation) and $112$ unannotated experimental hESC images\nwere employed as the source and target image sets respectively, \nand were used for the adversarial learning of the DAM and DCM. \nThe DAM learning process was described in Section~\\ref{sssec:dam} above. Ten manually-annotated images and their density maps were employed as the ground truths to evaluate the cell counting performance of the proposed method.\n\n\\subsection{Method Implementation}\n\\label{ssec:implement}\n\nThe training and evaluation of the proposed method were performed on a NVIDIA Titan X GPU with 12GB of VRAM. \nSoftware packages employed in our experiments included Python 3.5, Keras 2.0, and Tensorflow 1.0.\nIn the source DRM training, the learning rate was set to $0.001$ and the batch size was set to $100$. 
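For concreteness, this configuration corresponds roughly to a Keras call of the following form; the momentum value, the variable names and the omission of the validation-based model selection described below are assumptions of the sketch.\n\\begin{verbatim}\nfrom keras.optimizers import SGD\n\nmodel = build_source_drm()  # cf. the architecture sketch above\nmodel.compile(optimizer=SGD(lr=0.001, momentum=0.9), loss='mean_squared_error')\nmodel.fit(x_train, y_train, batch_size=100, epochs=3000,\n          validation_data=(x_val, y_val))\n\\end{verbatim}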
The learning rate is a hyper-parameter that controls the stride of updating the values of the parameters in a to-be-trained model in each iteration.\nA mean square error (MSE) loss function was employed for model training.\nAfter $3000$ epochs, the model that resulted in the lowest MSE in the validation set was stored as the to-be-employed source DRM.\n\nIn the DAM training, the DAM was initialized with the weights of the trained ECNN.\nThe learning rates for training the DAM and DCM were both set to $10^{-8}$. \nIn each iteration, $100$ images were randomly selected from the source image set, \nand another $100$ images were randomly selected from the target image set. \nEach selected target image was randomly cropped into an image of $256 \\times 256$ pixels,\nso that the inputs of the ECNN and those of the DAM had the same sizes.\nThe DAM and DCM were iteratively optimized via an adversarial loss~\\cite{arjovsky2017wasserstein}.\n\n\n\\subsection{Results}\n\\label{ssec:result}\n\nWe compared the results obtained from the proposed method (denoted as Adaptation) with those from two other methods.\nThe first alternative method applied the source DRM directly to the experimental cell images (denoted as Source-only).\nThe second method was a fully convolutional regression network (FCRN)-based DRM~\\cite{xie2018} that was trained with annotated experimental images (denoted as Annotated-train).\nFor the Annotated-train method, $5$ out of the $10$ annotated images were employed \nto train the FCRN-based DRM, and the remaining $5$ were employed for the model validation.\n\nThe performances of the three cell counting methods were measured by the mean of absolute errors (MAE) and standard deviation of absolute errors (SAE).\nMAE measures the mean of absolute errors between the estimated cell counts and their ground truths for all 10 annotated hESC images, while SAE measures the standard deviation of the absolute errors.\nThe results of evaluating the three methods on all 10 images are shown in Table~\\ref{tab:mae}.\nIn terms of MAE and SAE, the proposed method demonstrates superior cell counting performance to the Source-only and Annotated-train methods.\n\\begin{table}[ht]\n\\caption{Performance of the proposed cell counting method} \n\\begin{center} \n\\begin{tabular}{c c c c} \n\\hline\\hline\nPerformance \t& Adaptation \t\t& Source-only \t\t\t& Annotated-train\\\\\nMAE $\\pm$ SAE \t& \\textbf{35.82$\\pm$24.98}\t& 170.06 $\\pm$ 50.58 \t& 39.84 $\\pm$ 25.24\\\\\n\\hline\\hline\n\\end{tabular}\n\\end{center}\n\\label{tab:mae}\n\\end{table}\n\\begin{figure}\n\t\\begin{center}\n\t \\includegraphics[width=\\textwidth]{prediction_examples.png}\n\t\\end{center}\n\\caption{Density map estimation for 3 out of the 10 annotated hESC image examples.}\n\\label{fig:density_results}\n\\end{figure}\n\nAdditionally, the density maps of three testing hESC images estimated by use of the three methods are shown in Figure~\\ref{fig:density_results}.\nThe figures in each row display the results corresponding to different images.\nIn each row, the sub-figures from left to right show the experimental image, the ground truth density map, \nand the density maps estimated with the proposed Adaptation method, the Source-only method, \nand the Annotated-train method, respectively.\nThe ground truth number of cells and the number counted from the estimated density maps are indicated at the bottom of each sub-figure.\nAs shown in Figure~\\ref{fig:density_results}, the proposed method can estimate a density map that is 
visually similar to the ground truth density map. The density maps estimated by use of the Source-only method is much denser than the ground truth, while those estimated by the Annotated-train method are blurred.\n\n\n\n\\section{Discussion}\n\\label{sec:discussion}\nIn this work, a domain adaptation-based density regression framework is proposed for automatic microscopic cell counting.\nInstead of training a DRM using the experimental images that require manual annotating, in this study, we propose to 1) train a source DRM with annotated synthetic images and 2) train a DAM \nwith both unannotated synthetic and experimental images to minimize the domain shift between the source and target domains.\nThe trained source DRM and DAM are jointly employed as a target DRM for automatic cell counting in experimental images. \nThe proposed method reduces or even eliminates the need to manually annotate experimental images and can greatly improve the generalization of the DRM trained with annotated synthetic images. \nTo the best of our knowledge, this is the first study in which a domain adaptation method is employed to support the task of automatic cell counting in microscopic images.\n\nIn addition, the proposed framework allows many flexible modifications. \nFor example, the FCNN network employed in the training of the source DRM can be \nreplaced with other FCNN networks when different tasks need to be addressed.\nThe choice of the FCNN in the current study is motivated by the success of deep neural networks in other computer vision tasks, including image classification~\\cite{he2016}, segmentation~\\cite{ronneberger2015, he2018}, and object detection~\\cite{ren2015}.\nAlthough the network architecture of the FCNN is fixed in this study, tuning of the network architecture is still an open question in deep learning-based researches, but it is not the focus of this work.\n\n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\nA novel, manual-annotating-free density regression framework is proposed for automatic microscopic cell counting. \nThe proposed method integrates a supervised learned DRM and an unsupervised learned DAM for automatic cell counting, and achieves promising cell counting performance.\n\n\\acknowledgments\n \nThis work was supported in part by award NIH R01EB020604, R01EB023045, R01NS102213, and R21CA223799. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec0}\n\n\\par\n\nIn the paper, we consider pseudo-differential operators, where\nthe symbols are of infinite orders and possess suitable Gevrey regularities\nand which are allowed to grow sub-exponentially together with all\ntheir derivatives. 
Our main purpose is to extend boundedness results,\nin \\cite{To14}, of the\npseudo-differential operators when acting on modulation spaces.\n\n\\par\n\nMore specific, the symbols should satisfy conditions of the form\n\\begin{equation}\\label{Eq:SymbCondIntr}\n|\\partial _x^\\alpha \\partial _\\xi ^\\beta a(x,\\xi )|\n\\lesssim h^{|\\alpha +\\beta |}\\alpha !^\\sigma \\beta !^s\\omega _0(x,\\xi ),\n\\end{equation}\nwhere $\\omega _0$ should be a moderate weight on $\\rr {2d}$\nand satisfy boundedness conditions like\n\\begin{equation}\\label{Eq:WeightCondIntr}\n\\omega _0(x,\\xi )\\lesssim e^{r(|x|^{\\frac 1s}+|\\xi |^{\\frac 1\\sigma})}.\n\\end{equation}\nFor such symbols $a$ we prove that corresponding pseudo-differential\noperators $\\operatorname{Op} (a)$ is continuous from the modulation space\n$M(\\omega _0\\omega ,\\mathscr B )$ to $M(\\omega ,\\mathscr B )$.\n(See Section \\ref{sec1} for notations.)\n\n\\par\n\nSimilar investigations were performed in \\cite{To25} in the case\n$s=\\sigma$ (i.{\\,}e. the isotropic case).\nTherefore, the results in the current paper are more general\nin the sense of the anisotropicity of the considered symbol classes.\nMoreover, we use different techniques compared to \\cite{To25}.\n\n\\par\n\nWe also remark that several ideas arise in \\cite{To14}, where similar\ninvestigations were performed after the conditions \\eqref{Eq:SymbCondIntr}\nand \\eqref{Eq:WeightCondIntr} are replaced by\n$$\n|\\partial _x^\\alpha \\partial _\\xi ^\\beta a(x,\\xi )| \\lesssim \\omega _0(x,\\xi )\n$$\nand\n$$\n\\omega _0(x,\\xi )\\lesssim (1+|x|+|\\xi |)^N,\n$$\nrespectively, for some $N\\ge 0$.\n\n\\par\n\nIn \\cite{Fe4}, H. Feichtinger introduced the modulation spaces\nto measure the\ntime-frequency concentration of a function or distribution\non the time-frequency\nspace or the phase space $\\rr {2d}$.\nNowadays they become popular among mathematicians\nand engineers since their numerous applications in\nsignal processing \\cite{Fe8,Fe9}, pseudo-differential and Fourier integral operators\n\\cite{CoGaTo,CoTo,CoGrNiRo1,PT2,PT3,Ta,Te1,Te2,To14,To18,To20,To22,To24,To25}\nand quantum mechanics \\cite{CoGrNiRo,dGo}.\n\n\\par\n\n\n\\par\n\nThe paper is organized as follows. In Section \\ref{sec1}\nwe give the main definition and properties of Gelfand-Shilov and\nmodulation spaces and we recall some essential results.\nIn Section \\ref{sec2} we state our main results on\nthe continuity with anisotropic settings.\n\n\\par\n\n\\section{Preliminaries}\\label{sec1}\n\n\\par\n\nIn the current section we review basic properties for modulation\nspaces and other related spaces. More details and proofs can be found in\n\\cite{Fe2,Fe3,Fe4,FG1,FG2,FG4,GaSa,Gc2,To20} .\n\n\\par\n\n\\subsection{Weight functions}\\label{subsec1.1}\n\n\\par\n\nA function $\\omega$ on $\\rr d$ is called a \\emph{weight} or\n\\emph{weight function},\nif $\\omega ,1\/\\omega \\in L^\\infty _{\\operatorname{loc}} (\\rr d)$\nare positive everywhere.\nThe weight $\\omega$ on $\\rr d$ is called $v$-moderate\nfor some weight $v$ on $\\rr d$, if\n\\begin{equation}\\label{vModerate}\n\\omega (x+y)\\lesssim \\omega (x)v(y),\\quad x,y\\in \\rr d.\n\\end{equation}\nIf $v$ is even and satisfies \\eqref{vModerate} with $\\omega =v$,\nthen $v$ is called submultiplicative. \n\n\\par\n\nLet $s,\\sigma>0$. 
Then we let $\\mathscr P_E(\\rr d)$ be the\nset of all moderate weights on $\\rr d$, $\\mathscr P_s(\\rr d)$\n($\\mathscr P_s^0(\\rr d)$) be the set of all $\\omega\\in\n\\mathscr P_E(\\rr d)$ such that \n$$\n\\omega (x+y)\\lesssim \\omega (x)e^{r|y|^{\\frac 1s}}, \\quad x,y\\in \\rr d,\n$$\nfor some $r>0$ (for every $r>0$), and $\\mathscr P_{s,\\sigma}(\\rr {2d})$\n($\\mathscr P_{s,\\sigma}^0(\\rr {2d})$) be the set of all\n$\\omega\\in \\mathscr P_E(\\rr {2d})$ such that\n\\begin{equation} \\label{estomega}\n\\omega (x+y,\\xi+\\eta)\\lesssim \\omega (x,\\xi)\ne^{r(|y|^{\\frac 1s}+|\\eta|^{\\frac 1\\sigma})}, \\quad x,y,\\xi,\\eta\\in \\rr d,\n\\end{equation}\nfor some $r>0$ (for every $r>0$).\n\n\\par\n\n\nThe following result shows that for any weight in $\\mathscr P _E$,\nthere are equivalent weights that \nsatisfy strong Gevrey regularity.\n\n\\par\n\n\\begin{prop}\\label{Prop:EquivWeights}\nLet $\\omega \\in \\mathscr P _E(\\rr {2d})$ and $s,\\sigma> 0$.\nThen there exists a weight\n$\\omega _0\\in \\mathscr P _E(\\rr {2d})\\cap C^\\infty (\\rr {2d})$\nsuch that the following is true:\t\n\\begin{enumerate}\t\n\\item $\\omega _0\\asymp \\omega $;\n\n\\vspace{0.1cm}\n\n\\item for every $h>0$,\n\\begin{equation*}\n|\\partial _x ^{\\alpha}\\partial _\\xi ^\\beta\n\\omega _0(x, \\xi)|\n\\lesssim \nh^{|\\alpha +\\beta|}\\alpha !^\\sigma\n\\beta!^s \\omega _0(x,\\xi) \n\\asymp \nh^{|\\alpha +\\beta|}\\alpha !^\\sigma\n\\beta!^s\\omega (x,\\xi).\n\\end{equation*} \n\\end{enumerate}\n\\end{prop}\n\n\\par\n\nProposition \\ref{Prop:EquivWeights} is equivalent to \\cite[Proposition 1.6]{AbCoTo}.\nIn fact, by Proposition \\cite[Proposition 1.6]{AbCoTo} we have that\nProposition \\ref{Prop:EquivWeights} holds with $s=\\sigma$.\nHence, Proposition \\ref{Prop:EquivWeights} implies \\cite[Proposition 1.6]{AbCoTo}.\nOn the other hand, let $s_0=\\min (s,\\sigma)$. Then\n\\cite[Proposition 1.6]{AbCoTo} implies that there is a weight function\n$\\omega_0\\asymp \\omega$ satisfying\n\\begin{align*}\n|\\partial _x ^{\\alpha}\\partial _\\xi ^\\beta\n\\omega _0(x, \\xi)|\n&\\lesssim \nh^{|\\alpha +\\beta|}(\\alpha !\n\\beta!)^{s_0} \\omega _0(x,\\xi) \n\\\\[1ex]\n&\\lesssim\nh^{|\\alpha +\\beta|}\\alpha !^\\sigma\n\\beta!^s \\omega _0(x,\\xi),\n\\end{align*}\ngiving Proposition \\ref{Prop:EquivWeights}.\n\n\n\\par\n\n\n\\subsection{Gelfand-Shilov spaces}\\label{subsec1.2}\n\n\\par\n\nLet $00}\\mathcal S _{s;h}^{\\sigma}(\\rr d)\n\\quad \\text{and}\\quad \\Sigma _{s}^{\\sigma}(\\rr d) =\\bigcap _{h>0}\n\\mathcal S _{s;h}^{\\sigma}(\\rr d),\n\\end{equation}\nand that the topology for $\\mathcal S _{s}^{\\sigma}(\\rr d)$ is the strongest\npossible one such that the inclusion map from $\\mathcal S _{s;h}^{\\sigma}\n(\\rr d)$ to $\\mathcal S _{s}^{\\sigma}(\\rr d)$ is continuous, for every choice \nof $h>0$. The space $\\Sigma _{s}^{\\sigma}(\\rr d)$ is a Fr{\\'e}chet space\nwith seminorms $\\nm \\, \\cdot \\, {\\mathcal S _{s;h}^{\\sigma}}$, $h>0$. 
Moreover,\n$\\Sigma _{s}^{\\sigma}(\\rr d)\\neq \\{ 0\\}$, if and only\nif $s+\\sigma \\ge 1$ and $(s,\\sigma )\\neq\n(\\frac 12,\\frac 12)$, and\n$\\mathcal S _{s}^{\\sigma}(\\rr d)\\neq \\{ 0\\}$, if and only\nif $s+\\sigma \\ge 1$.\n\n\n\\medspace\n\nThe \\emph{Gelfand-Shilov distribution spaces} $(\\mathcal S _{s}^{\\sigma})'(\\rr d)$\nand $(\\Sigma _{s}^{\\sigma})'(\\rr d)$\nare the projective and inductive limit\nrespectively of $(\\mathcal S _{s;h}^{\\sigma})'(\\rr d)$.\nIn \\cite{Pi2} it is proved that $(\\mathcal S _{s}^{\\sigma})'(\\rr d)$\nis the dual of $\\mathcal S _s^\\sigma(\\rr d)$, and $(\\Sigma _{s}^{\\sigma})'(\\rr d)$\nis the dual of $\\Sigma _{s}^{\\sigma}(\\rr d)$ (also in topological sense).\n\n\n\\par\n\nThe Fourier transform $\\mathscr F$ is the linear and continuous\nmap on $\\mathscr S (\\rr d)$,\ngiven by the formula\n$$\n(\\mathscr Ff)(\\xi )= \\widehat f(\\xi ) \\equiv (2\\pi )^{-\\frac d2}\\int _{\\rr\n\t{d}} f(x)e^{-i\\scal x\\xi }\\, dx\n$$\nwhen $f\\in \\mathscr S (\\rr d)$. Here $\\scal \\, \\cdot \\, \\, \\cdot \\, $ denotes the usual\nscalar product on $\\rr d$. \nThe Fourier transform extends uniquely to homeomorphisms\nfrom $(\\mathcal S _{s}^{\\sigma})'(\\rr d)$ to $(\\mathcal S _{\\sigma}^{s})'(\\rr d)$,\nand from $(\\Sigma _{s}^{\\sigma})'(\\rr d)$ to $(\\Sigma _{\\sigma}^{s})'(\\rr d)$.\nFurthermore, it restricts to homeomorphisms from\n$\\mathcal S _{s}^{\\sigma}(\\rr d)$ to $\\mathcal S _{\\sigma}^{s}(\\rr d)$,\nand from $\\Sigma _{s}^{\\sigma}(\\rr d)$ to $\\Sigma _{\\sigma}^{s}(\\rr d)$.\n\n\\par\n\n\\medspace\n\nSome considerations later on involve a broader family of\nGelfand-Shilov spaces. More precisely, for $s_j,\\sigma _j\\in \\mathbf R_+$,\n$j=1,2$, the Gelfand-Shilov spaces $\\mathcal S _{s _1,s_2}^{\\sigma _1,\\sigma _2}(\\rr {d_1+d_2})$ and\n$\\Sigma _{s _1,s_2}^{\\sigma _1,\\sigma _2}(\\rr {d_1+d_2})$ consist of all functions\n$F\\in C^\\infty (\\rr {d_1+d_2})$ such that\n\\begin{equation}\\label{GSExtCond}\n|x_1^{\\alpha _1}x_2^{\\alpha _2}\\partial _{x_1}^{\\beta _1}\n\\partial _{x_2}^{\\beta _2}F(x_1,x_2)| \\lesssim\nh^{|\\alpha _1+\\alpha _2+\\beta _1+\\beta _2|}\n\\alpha _1!^{s_1}\\alpha _2!^{s_2}\\beta _1!^{\\sigma _1}\\beta _2!^{\\sigma _2}\n\\end{equation}\nfor some $h>0$ respective for every $h>0$. The topologies, and the duals\n\\begin{alignat*}{3}\n&(\\mathcal S _{s _1,s_2}^{\\sigma _1,\\sigma _2})'(\\rr {d_1+d_2}) &\n&\\quad \\text{and} \\quad &\n&(\\Sigma _{s _1,s_2}^{\\sigma _1,\\sigma _2})'(\\rr {d_1+d_2})\n\\intertext{of}\n&\\mathcal S _{s _1,s_2}^{\\sigma _1,\\sigma _2}(\\rr {d_1+d_2}) &\n&\\quad \\text{and} \\quad &\n&\\Sigma _{s _1,s_2}^{\\sigma _1,\\sigma _2}(\\rr {d_1+d_2}),\n\\end{alignat*}\nrespectively, and their topologies\nare defined in analogous ways as for the spaces $\\mathcal S _s^\\sigma (\\rr d)$\nand $\\Sigma _s^\\sigma (\\rr d)$ above.\n\n\\par\n\nThe following proposition explains mapping properties of partial\nFourier transforms on Gelfand-Shilov spaces, and follows by similar\narguments as in analogous situations in\n\\cite{GS}. The proof is therefore omitted. 
Here, $\\mathscr F _1F$\nand $\\mathscr F _2F$ are the partial\nFourier transforms of $F(x_1,x_2)$ with respect to\n$x_1\\in \\rr {d_1}$ and $x_2\\in \\rr {d_2}$,\nrespectively.\n\n\\par\n\n\\begin{prop}\\label{propBroadGSSpaceChar}\nLet $s_j,\\sigma _j >0$, $j=1,2$.\nThen the following is true:\n\\begin{enumerate}\t\n\\item the mappings $\\mathscr F _1$ and $\\mathscr F _2$ on $\\mathscr S (\\rr {d_1+d_2})$\nrestrict to homeomorphisms\n\\begin{align*}\n\\mathscr F _1 \\, &: \\, \\mathcal S _{s _1,s_2}^{\\sigma _1,\\sigma _2}(\\rr {d_1+d_2}) \\to\n\\mathcal S _{\\sigma _1,s_2}^{s_1,\\sigma _2}(\\rr {d_1+d_2})\n\\intertext{and}\n\\mathscr F _2 \\, &: \\, \\mathcal S _{s _1,s_2}^{\\sigma _1,\\sigma _2}(\\rr {d_1+d_2}) \\to\n\\mathcal S _{s _1,\\sigma _2}^{\\sigma _1,s_2}(\\rr {d_1+d_2})\n\\text ;\n\\end{align*}\n\n\\vspace{0.1cm}\n\n\\item the mappings $\\mathscr F _1$ and $\\mathscr F _2$ on\n$\\mathscr S (\\rr {d_1+d_2})$ are uniquely extendable to\nhomeomorphisms\n\\begin{align*}\n\\mathscr F _1 \\, &: \\, (\\mathcal S _{s _1,s_2}^{\\sigma _1,\\sigma _2})'(\\rr {d_1+d_2}) \\to\n(\\mathcal S _{\\sigma _1,s_2}^{s_1,\\sigma _2})'(\\rr {d_1+d_2})\n\\intertext{and}\n\\mathscr F _2 \\, &: \\, (\\mathcal S _{s _1,s_2}^{\\sigma _1,\\sigma _2})'(\\rr {d_1+d_2}) \\to\n(\\mathcal S _{s _1,\\sigma _2}^{\\sigma _1,s_2})'(\\rr {d_1+d_2}).\n\\end{align*}\n\\end{enumerate}\n\t\n\\par\n\t\nThe same holds true if the $\\mathcal S _{s _1,s_2}^{\\sigma _1,\\sigma _2}$-spaces and\ntheir duals are replaced by\ncorresponding $\\Sigma _{s _1,s_2}^{\\sigma _1,\\sigma _2}$-spaces and their duals.\n\\end{prop}\n\n\\par\n\nThe next two results follow from \\cite{ChuChuKim}. The proofs are therefore omitted.\n\n\\begin{prop}\nLet $s_j,\\sigma _j> 0$, $j=1,2$. Then the following\nconditions are equivalent.\n\\begin{enumerate}\n\\item $F\\in \\mathcal S _{s _1,s_2}^{\\sigma _1,\\sigma _2}(\\rr {d_1+d_2})$\\quad\n($F\\in \\Sigma _{s _1,s_2}^{\\sigma _1,\\sigma _2}(\\rr {d_1+d_2})$);\n\n\\vspace{0.1cm}\n\n\\item for some $h>0$ (for every $h>0$) it holds\n\\begin{equation*}\n\\displaystyle{|F(x_1,x_2)|\\lesssim e^{-h(|x_1|^{\\frac 1{s_1}} + |x_2|^{\\frac 1{s_2}} )}}\n\\quad \\text{and}\\quad \n\\displaystyle{|\\widehat F(\\xi _1,\\xi _2)|\\lesssim\n\te^{-h(|\\xi _1|^{\\frac 1{\\sigma _1}} + |\\xi _2|^{\\frac 1{\\sigma _2}} )}}.\n\\end{equation*}\n\\end{enumerate}\n\\end{prop}\n\n\\par\n\nWe notice that if\n$s_j+\\sigma _j<1$ for some $j=1,2$, then\n$\\mathcal S _{s _1,s_2}^{\\sigma _1,\\sigma _2}(\\rr {d_1+d_2})$\nand $\\Sigma _{s _1,s_2}^{\\sigma _1,\\sigma _2}(\\rr {d_1+d_2})$\nare equal to the trivial space $\\{ 0\\}$.\nLikewise, if $s_j=\\sigma _j=\\frac 12$ for some $j=1,2$, then\n$\\Sigma _{s _1,s_2}^{\\sigma _1,\\sigma _2}(\\rr {d_1+d_2}) = \\{ 0\\}$.\n\n\\par\n\n\\subsection{Short time Fourier transform and Gelfand-Shilov spaces}\n\n\\par\n\nWe recall here some basic facts about\nthe short-time Fourier transform and weights.\n\n\\par\n\nLet $\\phi \\in \\mathcal S _s^\\sigma (\\rr d)\\setminus 0$ be fixed. Then the short-time\nFourier transform of $f\\in (\\mathcal S _s^\\sigma )'(\\rr d)$ is given by\n$$\n(V_\\phi f)(x,\\xi ) = (2\\pi )^{-\\frac d2}(f,\\phi (\\, \\cdot \\, -x)\ne^{i\\scal \\, \\cdot \\, \\xi})_{L^2}.\n$$\nHere $(\\, \\cdot \\, ,\\, \\cdot \\, )_{L^2}$ is the unique extension of the $L^2$-form on\n$\\mathcal S _s^\\sigma (\\rr d)$ to a continuous sesqui-linear form on $(\\mathcal S\n_s^\\sigma )'(\\rr d)\\times \\mathcal S _s^\\sigma (\\rr d)$. 
In the case\n$f\\in L^p(\\rr d)$, for some $p\\in [1,\\infty]$, then $V_\\phi f$ is given by\n$$\nV_\\phi f(x,\\xi ) \\equiv (2\\pi )^{-\\frac d2}\\int _{\\rr d}f(y)\\overline{\\phi (y-x)}\ne^{-i\\scal y\\xi}\\, dy .\n$$\n\n\\par\n\nThe following characterizations of the\n$\\mathcal S _{s_1,s_2}^{\\sigma _1,\\sigma _2}(\\rr {d_1+d_2})$,\n$\\Sigma _{s_1,s_2}^{\\sigma _1,\\sigma _2}(\\rr {d_1+d_2})$\nand their duals\nfollow by similar arguments as in the proofs of\nPropositions 2.1 and 2.2 in \\cite{To22}. The details are left\nfor the reader.\n\n\\par\n\n\\begin{prop}\\label{Prop:STFTGelfand2}\nLet $s_j,\\sigma _j>0$ be such that $s_j+\\sigma _j\\ge 1$, $j=1,2$, \n$s_0\\le s$ and $\\sigma_0\\le \\sigma$. Also let\n$\\phi \\in \\mathcal S_{s_1,s_2}^{\\sigma _1,\\sigma _2}(\\rr {d_1+d_2}\n\\setminus 0$)\n($\\phi \\in \\Sigma _{s_1,s_2}^{\\sigma _1,\\sigma _2}(\\rr {d_1+d_2}\n\\setminus 0$) and let $f$ be a Gelfand-Shilov distribution on\n$\\rr {d_1+d_2}$.\nThen $f\\in \\mathcal S _{s_1,s_2}^{\\sigma _1,\\sigma _2}(\\rr {d_1+d_2})$\n($f\\in \\Sigma _{s_1,s_2}^{\\sigma _1,\\sigma _2}(\\rr {d_1+d_2})$),\nif and only if\n\\begin{equation}\\label{stftexpest2}\n|V_\\phi f(x_1,x_2,\\xi _1,\\xi _2)|\n\\lesssim\ne^{-r (|x_1|^{\\frac 1{s_1}} + |x_2|^{\\frac 1{s_2}}\n\t+|\\xi _1|^{\\frac 1{\\sigma _1}} +|\\xi _2|^{\\frac 1{\\sigma _2}} )},\n\\end{equation}\nholds for some $r > 0$ (holds for every $r > 0$).\n\\end{prop}\n\n\\par\n\nA proof of Proposition \\ref{Prop:STFTGelfand2} can be found in\ne.{\\,}g. \\cite{GZ} (cf. \\cite[Theorem 2.7]{GZ}). The\ncorresponding result for Gelfand-Shilov distributions\nis the following improvement of \\cite[Theorem 2.5]{To18}.\n\n\\par\n\n\\begin{prop}\\label{Prop:STFTGelfand2Dist}\nLet $s_j,\\sigma _j>0$ be such that $s_j+\\sigma _j\\ge 1$, $j=1,2$, \n$s_0\\le s$ and $t_0\\le t$. 
Also let\n$\\phi \\in \\mathcal S_{s_1,s_2}^{\\sigma _1,\\sigma _2}(\\rr {d_1+d_2})\n\\setminus 0$ and let $f$ be a Gelfand-Shilov distribution on\n$\\rr {d_1+d_2}$.\nThen the following is true:\n\t\\begin{enumerate}\n\\item $f\\in (\\mathcal S _{s_1,s_2}^{\\sigma _1,\\sigma _2})'(\\rr {d_1+d_2})$,\nif and only if\n\\begin{equation}\\label{stftexpest2Dist}\n|V_\\phi f(x_1,x_2,\\xi _1,\\xi _2)|\n\\lesssim\ne^{r (|x_1|^{\\frac 1{s_1}} + |x_2|^{\\frac 1{s_2}}\n\t+|\\xi _1|^{\\frac 1{\\sigma _1}} +|\\xi _2|^{\\frac 1{\\sigma _2}} )}\n\\end{equation}\nholds for every $r > 0$;\n\n\\vspace{0.1cm}\n\n\\item if in addition\n$\\phi \\in \\Sigma _{s_1,s_2}^{\\sigma _1,\\sigma _2}(\\rr {d_1+d_2})\n\\setminus 0$, then \n$f\\in (\\Sigma _{s_1,s_2}^{\\sigma _1,\\sigma _2})'(\\rr {d_1+d_2})$,\nif and only if\n\\begin{equation}\\label{stftexpest2DistA}\n|V_\\phi f(x_1,x_2,\\xi _1,\\xi _2)|\n\\lesssim\ne^{r (|x_1|^{\\frac 1{s_1}} + |x_2|^{\\frac 1{s_2}}\n\t+|\\xi _1|^{\\frac 1{\\sigma _1}} +|\\xi _2|^{\\frac 1{\\sigma _2}} )}\n\\end{equation}\nholds for some $r > 0$.\n\\end{enumerate}\n\\end{prop}\n\n\\par\n\n\\subsection{Broader family of modulation spaces}\\label{subsec1.3}\n\n\\par\n\n\\par\n\n\n\\begin{defn}\\label{bfspaces1}\nLet $\\mathscr B $ be a Banach space of measurable functions on $\\rr d$,\nand let $v \\in\\mathscr P _E(\\rr d)$.\nThen $\\mathscr B$ is called a \\emph{translation invariant\nBanach Function space on $\\rr d$} (with respect to $v$), or \\emph{invariant\nBF space on $\\rr d$}, if there is a constant $C$ such\nthat the following conditions are fulfilled:\n\\begin{enumerate}\n\\item if $x\\in \\rr d$ and $f\\in \\mathscr B$, then $f(\\, \\cdot \\, -x)\\in\n\\mathscr B$, and \n\\begin{equation}\\label{translmultprop1}\n\\nm {f(\\, \\cdot \\, -x)}{\\mathscr B}\\le Cv(x)\\nm {f}{\\mathscr B}\\text ;\n\\end{equation}\n\n\\vspace{0.1cm}\n\n\\item if $f,g\\in L^1_{loc}(\\rr d)$ satisfy $g\\in \\mathscr B$ and $|f|\n\\le |g|$, then $f\\in \\mathscr B$ and\n$$\n\\nm f{\\mathscr B}\\le C\\nm g{\\mathscr B}\\text ;\n$$\n\\vspace{0.1cm}\n\n\\item Minkowski's inequality holds true, i.{\\,}e.\n\\begin{equation}\\label{Eq:MinkIneq}\n\\nm {f*\\varphi}{\\mathscr B}\\lesssim \\nm {f}{\\mathscr B}\\nm \\varphi{L^1_{(v)}},\n\\qquad f\\in \\mathscr B ,\\ \\varphi \\in L^1_{(v)}(\\rr d).\n\\end{equation}\n\\end{enumerate}\n\\end{defn}\n\n\\par\n\nIf $v$ belongs to $\\mathscr P _{E,s}(\\rr d)$\n($\\mathscr P _{E,s}^0(\\rr d)$), then $\\mathscr B$ in Definition \\ref{bfspaces1}\nis called an invariant BF-space of Roumieu type (Beurling type) of order $s$.\n\n\\par\n\nIt follows from (2) in Definition \\ref{bfspaces1} that if $f\\in\n\\mathscr B$ and $h\\in L^\\infty$, then $f\\cdot h\\in \\mathscr B$, and\n\\begin{equation}\\label{multprop}\n\\nm {f\\cdot h}{\\mathscr B}\\le C\\nm f{\\mathscr B}\\nm h{L^\\infty}.\n\\end{equation}\nIn Definition \\ref{bfspaces1}, condition (2) means that a\ntranslation invariant BF-space is a solid BF-space in the sense of\n(A.3) in \\cite{Fe6}. 
\n\n\\par\n\n\\begin{example}\\label{Lpqbfspaces}\nAssume that $p,q\\in [1,\\infty ]$, and let $L^{p,q}_1(\\rr {2d})$ be the\nset of all $f\\in L^1_{loc}(\\rr {2d})$ such that\n$$\n\\nm f{L^{p,q}_1} \\equiv \\Big ( \\int \\Big ( \\int |f(x,\\xi )|^p\\, dx\\Big\n)^{q\/p}\\, d\\xi \\Big )^{1\/q}\n$$\nif finite.\nThen it follows that $L^{p,q}_1$\nis translation invariant BF-spaces with respect to $v=1$.\n\\end{example}\n\n\\par\n\nWe refer to \\cite {Fe4,FG1,FG2,FG4,GaSa,Gc2,RSTT,To20}\nfor more facts about modulation spaces.\nNext we consider the extended class of modulation spaces which we are interested\nin. \n\n\n\\par\n\n\\begin{defn}\\label{bfspaces2}\nAssume that $\\mathscr B$ is a translation\ninvariant QBF-space on $\\rr {2d}$, $\\omega \\in\\mathscr P _E(\\rr {2d})$,\nand that $\\phi \\in\n\\Sigma _1(\\rr d)\\setminus 0$. Then the set $M(\\omega ,\\mathscr B )$\nconsists of all $f\\in \\Sigma _1'(\\rr d)$ such that\n$$\n\\nm f{M(\\omega ,\\mathscr B )}\n\\equiv \\nm {V_\\phi f\\, \\omega }{\\mathscr B}\n$$\nis finite.\n\\end{defn}\n\n\\par\n\nObviously, we have\n$\nM^{p,q}_{(\\omega )}(\\rr d)=M(\\omega ,\\mathscr B )$\nwhen $\\mathscr B =L^{p,q}_1(\\rr {2d})$ (cf. Example \\ref{Lpqbfspaces}).\nIt follows that many properties which are valid for the classical modulation\nspaces also hold for the spaces of the form $M(\\omega ,\\mathscr B )$.\n\n\\par\n\nWe notice that $M(\\omega ,\\mathscr B )$ is independent of the choice\nof $\\phi$ in Definition \\ref{bfspaces2} cf.\\, \\cite{To25}.\nFurthermore, $M(\\omega ,\\mathscr B )$ is a Banach space in view of \\cite{PfTo}.\n\n\n\\par\n\n\\subsection{Pseudo-differential operators}\n\n\\par\n\nNext we recall some facts on pseudo-differential operators. Let\n$A\\in \\mathbf{M} (d,\\mathbf R)$ be fixed and\nlet $a\\in \\Sigma _1(\\rr {2d})$. Then the pseudo-differential\noperator $\\operatorname{Op} _A(a)$ is the linear and continuous operator on $\\Sigma _1(\\rr d)$,\ndefined by the formula\n\\begin{multline}\\label{e0.5}\n(\\operatorname{Op} _A(a)f)(x)\n\\\\[1ex]\n=\n(2\\pi ) ^{-d}\\iint a(x-A(x-y),\\xi )f(y)e^{i\\scal {x-y}\\xi }\\,\ndyd\\xi .\n\\end{multline}\nThe definition of $\\operatorname{Op} _A(a)$ extends to\nany $a\\in \\Sigma _1'(\\rr {2d})$, and then $\\operatorname{Op} _A(a)$ is continuous from\n$\\Sigma _1(\\rr d)$ to $\\Sigma _1'(\\rr d)$. Moreover, for every fixed\n$A\\in \\mathbf{M} (d,\\mathbf R)$, it follows that there is a one to\none correspondence between such operators and pseudo-differential\noperators of the form $\\operatorname{Op} _A(a)$. (See e.{\\,}g. \\cite {Ho1}.)\nIf $A=2^{-1}I$, where $I\\in \\mathbf{M} (d,\\mathbf R)$ is the identity matrix, then\n$\\operatorname{Op} _A(a)$ is equal to the Weyl operator $\\operatorname{Op} ^w(a)$\nof $a$. If instead $A=0$, then the standard (Kohn-Nirenberg)\nrepresentation $\\operatorname{Op} (a)$ is obtained.\n\n\\par\n\nIf $a_1,a_2\\in \\Sigma _1'(\\rr {2d})$ and $A_1,A_2\\in\n \\mathbf{M} (d,\\mathbf R)$, then\n\\begin{equation}\\label{pseudorelation}\n\\operatorname{Op} _{A_1}(a_1)=\\operatorname{Op} _{A_2}(a_2) \\quad \\Leftrightarrow \\quad a_2(x,\\xi\n)=e^{i\\scal {(A_1-A_2)D_\\xi}{D_x}}a(x,\\xi ).\n\\end{equation}\n(Cf. \\cite{Ho1}.)\n\n\\par\n\n\n\\par\n\n\\subsection{Symbol classes}\n\n\\par\n\nNext we introduce function spaces related to symbol classes\nof the pseudo-differential operators. 
\par

\subsection{Symbol classes}

\par

Next we introduce function spaces related to symbol classes
of the pseudo-differential operators. These functions should obey various
conditions of the form
\begin{align}
|\partial _x^\alpha \partial _\xi ^\beta a(x,\xi )|
&\lesssim
h ^{|\alpha +\beta |}\alpha !^\sigma \beta !^s \omega (x,\xi ),
\label{Eq:symbols2}
\end{align}
for functions on $\rr {d_1+d_2}$. For this reason we consider
semi-norms of the form
\begin{equation}\label{Eq:GammaomegaNorm}
\nm a{\Gamma _{(\omega )}^{\sigma ,s;h}}
\equiv \sup _{(\alpha ,\beta) \in \nn {d_1+d_2}}
\left (
\sup _{(x,\xi) \in \rr {d_1+d_2}} \left (
\frac {|\partial _x^\alpha \partial _\xi ^\beta a(x,\xi )|}{
	h ^{|\alpha +\beta |}\alpha !^\sigma \beta !^s \omega (x,\xi )}
\right ) \right ) ,
\end{equation}
indexed by $h>0$.

\par

\begin{defn}\label{Def:GammaSymb2}
Let $s$, $\sigma$ and $h$ be positive constants,
let $\omega$ be a weight on $\rr {d_1+d_2}$, and let
$$
\omega _r(x,\xi )\equiv e^{r(|x|^{\frac 1s} + |\xi |^{\frac 1\sigma })}.
$$
\begin{enumerate}
\item The set $\Gamma _{(\omega )}^{\sigma ,s;h} (\rr {d_1+d_2})$
consists of
all $a \in C^\infty(\rr {d_1+d_2})$ such that
$\nm a{\Gamma _{(\omega )}^{\sigma ,s;h}}$ in
\eqref{Eq:GammaomegaNorm} is finite.
The set $\Gamma _0^{\sigma ,s;h} (\rr {d_1+d_2})$ consists of
all $a \in C^\infty(\rr {d_1+d_2})$ such that
$\nm a{\Gamma _{(\omega_r )}^{\sigma ,s;h}}$ is finite
for every $r>0$, and the topology is the projective limit topology of
$\Gamma _{(\omega _r)}^{\sigma ,s;h} (\rr {d_1+d_2})$ with respect to $r>0$;

\vspace{0.1cm}

\item The sets $\Gamma _{(\omega )}^{\sigma ,s} (\rr {d_1+d_2})$ and
$\Gamma _{(\omega )}^{\sigma ,s;0} (\rr {d_1+d_2})$ are given by
\begin{align*}
\Gamma _{(\omega )}^{\sigma ,s} (\rr {d_1+d_2})
&\equiv
\bigcup _{h>0}\Gamma _{(\omega )}^{\sigma ,s;h} (\rr {d_1+d_2})
\intertext{and}
\Gamma _{(\omega )}^{\sigma ,s;0} (\rr {d_1+d_2})
&\equiv
\bigcap _{h>0}\Gamma _{(\omega )}^{\sigma ,s;h} (\rr {d_1+d_2}),
\end{align*}
and their topologies are the inductive limit and projective limit topologies,
respectively, of $\Gamma _{(\omega )}^{\sigma ,s;h} (\rr {d_1+d_2})$ with respect
to $h>0$.
\end{enumerate}
\end{defn}

\par

The following result is a straight-forward consequence of
\cite[Proposition 2.4]{AbCaTo} and the definitions.

\par

\begin{prop}\label{SymbClassModSpace}
Let $R>0$, $q\in (0,\infty ]$ and $s,\sigma >0$ be such that $s+\sigma \ge 1$
and $(s,\sigma )\neq (\frac 12,\frac 12)$, let $\phi \in \Sigma _{s,\sigma}^{\sigma ,s}
(\rr {2d})\setminus 0$ and
$\omega \in \mathscr P _{s,\sigma}(\rr {2d})$, and let
$$
\omega _R(x,\xi, \eta, y) = \omega(x,\xi) e^{-R(|y|^{\frac 1s} + |\eta|^{\frac 1\sigma})}.
$$
Then
\begin{align}\label{iden}
\begin{split}
\Gamma^{\sigma ,s}_{(\omega)}(\rr {2d})
&=
\bigcup _{R>0}\sets {a\in (\Sigma _{s,\sigma}^{\sigma ,s})'
	(\rr {2d})}{\nm {\omega
		_R^{-1}V_\phi a}{L^{\infty ,q}} <\infty },
\\[1ex]
\Gamma^{\sigma ,s;0}_{(\omega)}(\rr {2d})
&=
\bigcap _{R>0}\sets {a\in (\Sigma _{s,\sigma}^{\sigma ,s})'
	(\rr {2d})}{\nm {\omega
		_R^{-1}V_\phi a}{L^{\infty ,q}} <\infty }.
\end{split}
\end{align}
\end{prop}

\par

The following lemma is a consequence of \cite[Theorem 3.6]{AbCaTo}.

\par

\begin{lemma}\label{Somega}
Let $s,\sigma>0$ be such that $s+\sigma\ge 1$, $\omega \in \mathscr P _{s,\sigma}(\rr {2d})$,
$A_1,A_2\in
\mathbf{M} (d,\mathbf R )$, and
that $a_1,a_2\in (\Sigma _{s,\sigma}^{\sigma,s})'(\rr {2d})$ are such that
$\operatorname{Op} _{A_1}(a_1)=\operatorname{Op} _{A_2}(a_2)$. Then
\begin{alignat*}{3}
a_1 &\in \Gamma _{(\omega )}^{\sigma,s;0}(\rr {2d})&\qquad &\Leftrightarrow &\qquad a_2
&\in \Gamma_{(\omega )}^{\sigma,s;0}(\rr {2d})
\end{alignat*}
and similarly for $\Gamma _{(\omega )}^{\sigma,s}(\rr {2d})$ in
place of $\Gamma _{(\omega )}^{\sigma,s;0}(\rr {2d})$.
\end{lemma}

\par
\section{Continuity for pseudo-differential operators
with symbols of infinite order}\label{sec2}

\par

In this section we discuss continuity for operators in
$\operatorname{Op} (\Gamma ^{\sigma,s}_{(\omega_0)})$ and
$\operatorname{Op} (\Gamma ^{\sigma,s;0}_{(\omega _0)})$
when acting on a general class of modulation spaces.
In Theorem \ref{p3.2} continuity is treated for symbols in
$\Gamma ^{\sigma,s}_{(\omega _0)}$, and in Theorem \ref{p3.2B}
continuity is treated for symbols in
$\Gamma ^{\sigma,s;0}_{(\omega _0)}$.
This gives an analogue of \cite[Theorem 3.2]{To14}
in the framework of operator theory and Gelfand-Shilov classes.

\par

Our main result is stated as follows.

\par

\begin{thm}\label{p3.2}
Let $A\in \mathbf{M} (d,\mathbf R)$, $s,\sigma\ge 1$,
$\omega ,\omega _0\in\mathscr P _{s,\sigma}^0(\rr {2d})$ and
$a\in \Gamma _{(\omega _0)}^{\sigma,s}(\rr {2d})$, and let $\mathscr B$
be an invariant BF-space on $\rr {2d}$. Then
$\operatorname{Op} _A(a)$ is continuous from $M(\omega _0\omega ,\mathscr B )$
to $M(\omega ,\mathscr B )$.
\end{thm}

\par

We need some preparations for the proof, and start with the following remark.

\par

\begin{rem}\label{rem:adjUniq}
Let $s,\sigma>0$ be such that $s+\sigma \geq 1$.
If $a\in (\Sigma _{s,\sigma}^{\sigma,s})'(\rr {2d})$,
then there is a unique $b\in (\Sigma _{s,\sigma}^{\sigma,s})'(\rr {2d})$ such that
$\operatorname{Op} (a)^*=\operatorname{Op} (b)$, where $b(x,\xi )= e^{i\scal {D_\xi}{D_x}}\overline {a(x,\xi )}$
in view of \cite[Theorem 18.1.7]{Ho1}. Furthermore, by the latter equality and
\cite[Theorem 4.1]{CaTo} it follows that
$$
a\in \Gamma _{(\omega )}^{\sigma,s}(\rr {2d})
\quad \Leftrightarrow \quad
b\in \Gamma _{(\omega )}^{\sigma,s}(\rr {2d}).
$$
\end{rem}

\par

\begin{lemma}\label{lem:equivfun}
Suppose that $s,\sigma\geq 1$,
$\omega \in \mathscr P _E(\rr {d_0})$ and that $f\in C^\infty
(\rr {d+d_0})$ satisfies
\begin{equation}\label{eq:anGeShiEst}
|\partial ^\alpha f(x,y)|\lesssim h^{|\alpha |}\alpha !^\sigma
e^{-r|x|^{\frac 1s}}\omega (y),\qquad \alpha \in \nn {d+d_0}
\end{equation}
for some $h>0$ and $r>0$.
Then there are $f_0\in C^\infty (\rr {d+d_0})$
and $\psi \in \mathcal S _s^\sigma(\rr d)$ such that \eqref{eq:anGeShiEst} holds
with $f_0$ in place of $f$ for some $h>0$ and $r>0$, and
$f(x,y)= f_0(x,y)\psi (x)$.
\end{lemma}

\par

\begin{proof}
By Proposition \ref{Prop:EquivWeights}, there is a submultiplicative weight
$v_0\in \mathscr P _{E,s}(\rr d)\cap C^\infty (\rr d)$ such that
\begin{align}
v_0(x)&\asymp e^{\frac r2|x |^{\frac 1s}}\label{eq:v0Est}
\intertext{and}
|\partial ^\alpha v_0(x)| &\lesssim h^{|\alpha |}\alpha !^\sigma
v_0(x),\qquad \alpha \in \nn d
\label{eq:vEst}
\intertext{for some $h,r>0$.
Since $s,\sigma\ge 1$,
a straight-forward application of Fa{\`a} di Bruno's formula,
for the composed function $\psi(x)=g(v_0(x))$, where $g(t)=\frac 1t$,
on \eqref{eq:vEst} gives}
\left | \partial ^\alpha \left (\frac 1{v_0(x)}\right ) \right | &\lesssim
h^{|\alpha |}\alpha !^\sigma\cdot \frac 1{v_0(x)},\qquad \alpha \in \nn d
\tag*{(\ref{eq:vEst})$'$}
\end{align}
for some $h>0$. It follows from \eqref{eq:v0Est} and \eqref{eq:vEst}$'$
that if $\psi =1/{v_0}$, then $\psi \in \mathcal S _s^\sigma(\rr d)$.
Furthermore, if $f_0(x,y)=f(x,y)v_0(x)$,
then an application of the Leibniz formula gives
\begin{multline*}
|\partial ^{\alpha}_x\partial ^{\alpha _0}_yf_0(x,y)|\lesssim
\sum _{\gamma \le \alpha }\binom{\alpha}{\gamma} |\partial ^\gamma _x
\partial ^{\alpha _0}_yf(x,y)|
\, |\partial ^{\alpha -\gamma }v_0(x)|
\\[1ex]
\lesssim
h^{|\alpha |+|\alpha _0|} \sum _{\gamma \le \alpha }
\binom{\alpha}{\gamma}
(\gamma !\alpha _0!)^\sigma e^{-r|x|^{\frac 1s}}
\omega (y)(\alpha -\gamma )!^\sigma v_0(x)
\\[1ex]
\lesssim
(2h)^{|\alpha |+|\alpha _0|}(\alpha !\alpha _0!)^\sigma
e^{-r|x|^{\frac 1s}}v_0(x)\omega (y)
\\[1ex]
\lesssim
(2h)^{|\alpha |+|\alpha _0|}(\alpha !\alpha _0!)^\sigma
e^{-\frac r2|x|^{\frac 1s}}\omega (y)
\end{multline*}
for some $h>0$, which gives the desired estimate on $f_0$,
since it is clear that $f(x,y)= f_0(x,y)\psi (x)$.
\end{proof}

\par

\begin{lemma}\label{Lemma:PrepReThm3.2}
Let $s,\sigma \ge 1$, $\omega \in
\mathscr P _{s,\sigma}^0(\rr {2d})$, $v_1\in
\mathscr P _{s}^0(\rr {d})$ and $v_2\in \mathscr P _{\sigma}^0(\rr d)$ be such that
$v_1$ and $v_2$ are submultiplicative, and such that $\omega \in \Gamma _{(\omega )}
^{\sigma,s}(\rr {2d})$ is $v_1\otimes v_2$-moderate.
Also let $a\in \Gamma _{(\omega )}^{\sigma,s}(\rr {2d})$,
$f\in \mathcal S _s^\sigma(\rr d)$, $\phi \in \Sigma _s^\sigma(\rr d)$
and $\phi_2=\phi v_1$, and let
\begin{align}
\Phi (x,\xi ,z,\zeta ) &= \frac {a(x+z,\xi +\zeta )}
{\omega (x,\xi)v_1(z)v_2(\zeta )}\label{eq:Phidef}
\intertext{and}
H(x,\xi ,y) &= \iint \Phi (x,\xi ,z,\zeta )\phi _2(z)v_2(\zeta )
e^{i\scal {y-x-z}{\zeta}}\, dzd\zeta .\label{eq:Hidentity}
\end{align}
Then
\begin{equation}\label{eq:stftpseudoform}
V_\phi (\operatorname{Op} (a)f)(x,\xi ) = (2\pi )^{-d} (f,e^{i\scal \, \cdot \, \xi
}H(x,\xi ,\, \cdot \, ))\omega (x,\xi
).
\end{equation}
Furthermore, the following is true:
\begin{enumerate}
\item $H\in C^\infty (\rr {3d})$ and satisfies
\begin{equation}\label{eq:DerHEst}
|\partial _y^\alpha H(x,\xi ,y)| \lesssim h_0^{|\alpha |}
\alpha!^\sigma e^{-r_0|x-y|^{\frac 1s}},
\end{equation}
for every $\alpha \in \nn d$ and some
$h_0,r_0>0$;
\vspace{0.1cm}

\item there are functions $H_0\in C^\infty (\rr {3d})$
and $\phi _0\in \mathcal S _s^\sigma(\rr d)$
such that
\begin{equation}\label{eq:HProd}
H(x,\xi ,y) = H_0(x,\xi ,y)\phi _0(y-x),
\end{equation}
and such that \eqref{eq:DerHEst} holds for some $h_0,r_0>0$,
with $H_0$ in place of $H$.
\end{enumerate}
\end{lemma}

\par

Lemma \ref{Lemma:PrepReThm3.2} follows by arguments similar to those in
\cite{To25}. In order to be self-contained we give a different proof.

\par

\begin{proof}
By straight-forward computations we get
\begin{equation}\label{eq:storformel}
V_\phi (\operatorname{Op} (a)f)(x,\xi )
= (2\pi )^{-d} (f,e^{i\scal \, \cdot \, \xi }H_1(x,\xi ,\, \cdot \,
))\omega (x,\xi ),
\end{equation}
where
\begin{multline*}
H_1(x,\xi ,y) = (2\pi )^{d}e^{-i\scal y\xi}(\operatorname{Op} (a)^*(\phi (\, \cdot \, -x)
\, e^{i\scal \cdot \xi}))(y)/\omega (x,\xi )
\\[1ex]
= \iint \frac {a(z,\zeta )}{\omega (x,\xi )}\phi (z-x)
e^{i\scal {y-z}{\zeta -\xi}}\, dzd\zeta
\\[1ex]
= \iint \Phi (x,\xi ,z-x,\zeta -\xi )\phi _2(z-x)v_2(\zeta
-\xi )e^{i\scal {y-z}{\zeta -\xi}}\, dzd\zeta .
\end{multline*}
If $z-x$ and $\zeta -\xi$ are taken as new variables of integration,
it follows that the right-hand side is the same as \eqref{eq:Hidentity}.
Hence \eqref{eq:stftpseudoform} holds. This
gives the first part of the lemma.

\par

The smoothness of $H$ is a consequence of the uniqueness of the adjoint
(cf. Remark \ref{rem:adjUniq}) and \cite[Lemma 2.7]{To25}.

\par

To show that \eqref{eq:DerHEst} holds, let
$$
\Phi _0(x,\xi ,z,\zeta ) = \Phi (x,\xi ,z,\zeta )\phi _2(z),
$$
where $\Phi$ is defined as in \eqref{eq:Phidef},
and let $\Psi =\mathscr F _3\Phi_0$, where $\mathscr F _3\Phi_0$ is the partial
Fourier transform of $\Phi _0(x,\xi ,z,\zeta )$ with respect to the $z$ variable.
Then it follows from the assumptions and \eqref{eq:vEst}$'$ that
\begin{multline*}
|\partial _z^\alpha \Phi _0(x,\xi ,z,\zeta )|
\lesssim \sum_{\gamma\leq \alpha}\binom{\alpha}{\gamma}
\sum _{\lambda\leq \gamma}\binom{\gamma}{\lambda}
\frac{\left|\partial_z^{\gamma-\lambda}a(x+z,\xi+\zeta)\right|}{\omega(x,\xi)v_2(\zeta)}
\\
\times \left |\partial^{\lambda}\prn{\frac{1}{v_1(z)}}\right |
h^{|\alpha-\gamma|}(\alpha-\gamma)!^\sigma e^{-r|z|^{\frac 1s}}
\\[1ex]
\lesssim \sum_{\gamma\leq \alpha}\binom{\alpha}{\gamma}
\sum _{\lambda\leq \gamma}\binom{\gamma}{\lambda}
h^{|\alpha|}(\alpha-\gamma)!^\sigma(\gamma-\lambda)!^\sigma \lambda!^\sigma
e^{-r_0|z|^{\frac 1s}}
\\[1ex]
\lesssim h^{|\alpha|} \alpha!^\sigma
\sum_{\gamma\leq \alpha}\binom{\alpha}{\gamma}
\sum _{\lambda\leq \gamma}\binom{\gamma}{\lambda}
\prn{\frac{(\alpha-\gamma)!\gamma!}{\alpha!}}^\sigma
\prn{\frac{(\gamma-\lambda)!\lambda!}{\gamma!}}^\sigma
e^{-r_0|z|^{\frac 1s}}
\\[1ex]
\lesssim (4h)^{|\alpha|} \alpha!^\sigma e^{-r_0|z|^{\frac 1s}}
\sum_{\gamma\leq \alpha}1\, \cdot \, \sum _{\lambda\leq \gamma}1.
\end{multline*}
Since $\sum _{\lambda\leq \gamma}1 \lesssim 2^{|\gamma|}$,
we get
\begin{equation}\label{eq:Phi_0Est}
|\partial _z^\alpha \Phi _0(x,\xi ,z,\zeta )|
\leq C (16h)^{|\alpha|}\alpha!^\sigma e^{-r_0|z|^{\frac 1s}}
\leq C h_0^{|\alpha|}\alpha!^\sigma e^{-r_0|z|^{\frac 1s}}
\end{equation}
for some $C,h_0,r_0>0$.
Then $z\\mapsto \\Phi _0(x,\\xi ,z,\\zeta )$\nis an element in $\\mathcal S _s^\\sigma(\\rr d)$.\nMoreover, $\\set{\\Phi _0(x,\\xi ,z,\\zeta )}_{z\\in\\rr d}$ is a bounded set in\n$\\Gamma _{(1)}^{\\sigma,s}(\\rr d\\times\\rr {2d})$.\nIndeed, for a fixed $z_0\\in \\rr{d}$, then an application of Leibnitz formula,\nFa{\\`a} di Bruno's formula, Proposition \\ref{Prop:EquivWeights} and \\eqref{eq:vEst}$'$, give\n\\begin{multline*}\n\\abs{\\partial_x^\\alpha\\partial_\\xi^\\beta\\partial_\\zeta^\\gamma\\Phi _0(x,\\xi ,z_0,\\zeta )}\n\\leq \\sum\n\\binom \\alpha{\\alpha_1}\\binom \\beta{\\beta_1}\\binom \\gamma{\\gamma_1}\n\\partial_x^{\\alpha_1}\\partial_\\xi^{\\beta_1}\\prn{\\frac 1{\\omega(x,\\xi)}}\n\\\\%[1ex]\n\\times\\partial_\\zeta^{\\gamma_1}\\prn{\\frac 1{v_2(\\zeta)}}\\abs{\\partial_x^{\\alpha-\\alpha_1}\n\t\\partial_\\xi^{\\beta-\\beta_1}\\partial_\\zeta^{\\gamma-\\gamma_1}\n\ta(x+z_0,\\xi+\\zeta)}\\, \\cdot \\, \\frac{|\\phi(z_0)|}{v_1(z_0)}\n\\\\[1ex]\n\\lesssim\n\\sum\n\\binom \\alpha{\\alpha_1}\\binom \\beta{\\beta_1}\\binom \\gamma{\\gamma_1}\nh^{|\\alpha_1+\\beta_1+\\gamma_1|}\\alpha_1!^\\sigma(\\beta_1!\\gamma_1!)^s\n\\\\%[1ex]\n\\times\\prn{\\frac 1{\\omega(x,\\xi)v_1(z_0)v_2(\\zeta)}}\n\\abs{\\partial_x^{\\alpha-\\alpha_1}\n\t\\partial_\\xi^{\\beta-\\beta_1}\\partial_\\zeta^{\\gamma-\\gamma_1}\n\ta(x+z_0,\\xi+\\zeta)}\n\\\\[1ex]\n\\lesssim\nh^{|\\alpha+\\beta+\\gamma|}\\sum\n\\binom \\alpha{\\alpha_1}\\binom \\beta{\\beta_1}\\binom \\gamma{\\gamma_1}\n\\prn{(\\alpha-\\alpha_1)!\\alpha_1!}^\\sigma\n\\prn{(\\beta-\\beta_1)!\\beta_1!}^s\\prn{(\\gamma-\\gamma_1)!\\gamma_1!}^s\n\\\\[1ex]\n\\lesssim\n(4h)^{|\\alpha+\\beta+\\gamma|}\\alpha!^\\sigma(\\beta!\\gamma!)^s, \t\n\\end{multline*}\nwhere all the summations above are taken over all \n$\\alpha_1\\leq \\alpha, \\beta_1\\leq\\beta$ and $\\gamma_1\\leq\\gamma$.\nIn view of Proposition \\ref{propBroadGSSpaceChar}\nand \\eqref{eq:Phi_0Est} we have \n$$\n|\\partial _\\eta^\\alpha \\Psi (x,\\xi ,\\eta ,\\zeta )|\n\\lesssim h_0^{|\\alpha |}\\alpha !^se^{-r_0|\\eta |^{\\frac 1\\sigma}},\n$$\nfor some $h_0,r_0>0$. Hence\n$$\n|\\partial _\\eta^\\alpha (\\Psi (x,\\xi ,\\zeta ,\\zeta )v_2(\\zeta ))|\n\\lesssim h_0^{|\\alpha |}\\alpha !^se^{-r_0|\\zeta |^{\\frac 1\\sigma}}\n$$\nfor some $h_0,r_0>0$.\n\n\\par\n\nBy letting $H_2(x,\\xi ,\\, \\cdot \\, )$ be the inverse partial Fourier transform of\n$\\Psi (x,\\xi ,\\zeta ,\\zeta )v_2(\\zeta )$ with respect to the $\\zeta$ variable,\nit follows that\n\\begin{equation}\\label{eq:H2Est}\n|\\partial _y^\\alpha H_2(x,\\xi ,y)|\n\\lesssim h_0^{|\\alpha |}\\alpha !^\\sigma e^{-r_0|y|^{\\frac 1s}}\n\\end{equation}\nfor some $h_0,r_0>0$. The assertion (1) now follows from the latter estimate\nand the fact that $H(x,\\xi ,y)= H_2(x,\\xi ,x-y)$.\n\n\\par\n\nIn order to prove (2) we notice that \\eqref{eq:H2Est} shows that\n$y\\mapsto H_2(x,\\xi ,y)$ is an element in $\\mathcal S _s^\\sigma(\\rr d)$ with values\nin $\\Gamma ^{(1)}_{s,\\sigma}(\\rr {2d})$. It follows by Lemma \\ref{lem:equivfun}\nthat there exist $H_3\\in C^\\infty (\\rr {3d})$ and $\\phi _0\\in \\mathcal S _s^\\sigma(\\rr d)$\nsuch that \\eqref{eq:H2Est} holds for some $h_0,r_0>0$ with $H_3$ in place of $H_2$,\nand\n$$\nH_2(x,\\xi ,y)= H_3(x,\\xi ,y)\\phi _0(-y).\n$$\nThis is the same as (2), and the result follows.\t\n\\end{proof}\n\n\\par\n\n\\begin{proof}[Proof of Theorem \\ref{p3.2}]\nThere is no restriction if we assume that $A=0$.\nLet $G=\\operatorname{Op} (a)f$. 
In view of Lemma \ref{Lemma:PrepReThm3.2} we have
\begin{multline*}
V_\phi G(x,\xi )
=
(2\pi )^{-\frac d2}\mathscr F ((f\cdot \overline {\phi _0(\, \cdot \, -x)}) \cdot H_0(x,\xi ,\, \cdot \, ))(\xi )
\omega _0(x,\xi )
\\[1ex]
=
(2\pi )^{-d} (V_{\phi _0}f)(x,\, \cdot \, ) * (\mathscr F (H_0(x,\xi ,\, \cdot \, )))(\xi )
\omega _0(x,\xi ).
\end{multline*}
Since $\omega$ and $\omega _0$ belong to $\mathscr P _{s,\sigma}^0(\rr {2d})$,
for every $r_0>0$ and $x,\xi,\eta\in \rr d$ we have
$$
\omega (x,\xi )\omega _0(x,\xi )
\lesssim
\omega (x,\eta )\omega _0(x,\eta ) e^{\frac {r_0}2|\xi-\eta|^{\frac 1\sigma}}.
$$
This inequality and (2) in Lemma \ref{Lemma:PrepReThm3.2} give
\begin{equation*}
|V_\phi G(x,\xi )\omega (x,\xi )|
\lesssim
\left(
|(V_{\phi _0}f)(x,\, \cdot \, )\omega (x,\, \cdot \, )
\omega _0(x,\, \cdot \, )| * e^{-\frac {r_0}2|\, \cdot \, |^{\frac 1\sigma}}
\right)
(\xi).
\end{equation*}

\par

In view of Definition \ref{bfspaces1},
we get for some $v\in \mathscr P _{\sigma}^0(\rr d)$,
\begin{multline*}
\nm G{M(\omega ,\mathscr B )}\lesssim \nm {|(V_{\phi _0}f)
\cdot \omega \cdot \omega _0|
* \delta _0 \otimes e^{-\frac {r_0}2|\, \cdot \, |^{\frac 1\sigma}}}{\mathscr B}
\\[1ex]
\le
\nm {(V_{\phi _0}f) \cdot \omega \cdot \omega _0}{\mathscr B}
\nm {e^{-\frac {r_0}2|\, \cdot \, |^{\frac 1\sigma}}v}{L^1}
\asymp \nm f{M(\omega \cdot \omega _0,\mathscr B )}.
\end{multline*}
This gives the result.
\end{proof}

\par

By arguments similar to those in the proofs of Theorem \ref{p3.2}
and Lemma \ref{Lemma:PrepReThm3.2} we get the following.
The details are left to the reader.

\par

\begin{thm}\label{p3.2B}
Let $A\in \mathbf{M} (d,\mathbf R)$, $s,\sigma\ge 1$,
$\omega ,\omega _0\in\mathscr P _{s,\sigma}(\rr {2d})$ and
$a\in \Gamma _{(\omega _0)}^{\sigma,s;0}(\rr {2d})$, and let $\mathscr B$
be an invariant BF-space on $\rr {2d}$. Then
$\operatorname{Op} _A(a)$ is continuous from $M(\omega _0\omega ,\mathscr B )$
to $M(\omega ,\mathscr B )$.
\end{thm}

\par

\begin{lemma}\label{Lemma:PrepReThm3.2B}
Let $s,\sigma \ge 1$, $\omega \in
\mathscr P _{s,\sigma}(\rr {2d})$, $v_1\in
\mathscr P _{s}(\rr {d})$ and $v_2\in \mathscr P _{\sigma}(\rr d)$ be such that
$v_1$ and $v_2$ are submultiplicative, and such that $\omega \in \Gamma _{(\omega )}
^{\sigma,s;0}(\rr {2d})$ is $v_1\otimes v_2$-moderate.
Also let $a\in \Gamma _{(\omega )}^{\sigma,s;0}(\rr {2d})$,
$f, \phi \in \Sigma _s^\sigma(\rr d)$,
$\phi_2=\phi v_1$, and let $\Phi$ and $H$ be as in Lemma
\ref{Lemma:PrepReThm3.2}.
Then \eqref{eq:stftpseudoform} and the following hold true:
\begin{enumerate}
\item $H\in C^\infty (\rr {3d})$ and satisfies \eqref{eq:DerHEst}
for every $h_0,r_0>0$;

\vspace{0.1cm}

\item there are functions $H_0\in C^\infty (\rr {3d})$
and $\phi _0\in \Sigma _s(\rr d)$
such that \eqref{eq:HProd} holds,
and such that \eqref{eq:DerHEst} holds for every $h_0,r_0>0$,
with $H_0$ in place of $H$.
\end{enumerate}
\end{lemma}

\par
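\begin{rem}
As a simple illustration of Theorem \ref{p3.2} we mention the following standard
example, included only for orientation. Let $t\in \mathbf R$,
$\omega _0(x,\xi )=(1+|\xi |^2)^{t/2}$ and $a=\omega _0$. By Cauchy's estimates,
$$
|\partial _x^\alpha \partial _\xi ^\beta a(x,\xi )|
\lesssim C^{|\alpha +\beta |}\beta !\, (1+|\xi |^2)^{t/2},
\qquad \alpha ,\beta \in \nn d,
$$
for some constant $C>0$, which shows that $a\in \Gamma _{(\omega _0)}^{\sigma ,s}(\rr {2d})$
for every $s,\sigma \ge 1$. Since $\omega _0\in \mathscr P _{s,\sigma}^0(\rr {2d})$,
Theorem \ref{p3.2} shows that the Fourier multiplier $\operatorname{Op} (a)$ is continuous from
$M(\omega _0\omega ,\mathscr B )$ to $M(\omega ,\mathscr B )$ for every
$\omega \in \mathscr P _{s,\sigma}^0(\rr {2d})$ and every invariant BF-space
$\mathscr B$ on $\rr {2d}$. For $\mathscr B =L^{p,q}_1(\rr {2d})$ this recovers the
mapping property in the classical lifting theorem for modulation spaces,
$M^{p,q}_{(\omega _0\omega )}(\rr d)\to M^{p,q}_{(\omega )}(\rr d)$.
\end{rem}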