diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzncal" "b/data_all_eng_slimpj/shuffled/split2/finalzzncal" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzncal" @@ -0,0 +1,5 @@ +{"text":"\n\n\n\n\n\n\\section{\\@startsection {section}{1}{\\z@}{+3.0ex plus +1ex minus\n +.2ex}{2.3ex plus .2ex}{\\normalsize\\bf}}\n\\def\\subsection{\\@startsection{subsection}{2}{\\z@}{+2.5ex plus +1ex\nminus +.2ex}{1.5ex plus .2ex}{\\normalsize\\bf}}\n\\def\\subsubsection{\\@startsection{subsubsection}{3}{\\z@}{+3.25ex plus\n +1ex minus +.2ex}{1.5ex plus .2ex}{\\normalsize\\bf}}\n \n\\expandafter\\ifx\\csname mathrm\\endcsname\\relax\\def\\mathrm#1{{\\rm #1}}\\fi\n \n\\makeatletter\n\n\\newcount\\@tempcntc\n\\def\\@citex[#1]#2{\\if@filesw\\immediate\\write\\@auxout{\\string\\citation{#2}}\\fi\n \\@tempcnta\\z@\\@tempcntb\\m@ne\\def\\@citea{}\\@cite{\\@for\\@citeb:=#2\\do\n {\\@ifundefined\n {b@\\@citeb}{\\@citeo\\@tempcntb\\m@ne\\@citea\n \\def\\@citea{,\\penalty\\@m\\ }{\\bf ?}\\@warning\n {Citation `\\@citeb' on page \\thepage \\space undefined}}%\n {\\setbox\\z@\\hbox{\\global\\@tempcntc0\\csname\nb@\\@citeb\\endcsname\\relax}%\n \\ifnum\\@tempcntc=\\z@ \\@citeo\\@tempcntb\\m@ne\n \\@citea\\def\\@citea{,\\penalty\\@m}\n \\hbox{\\csname b@\\@citeb\\endcsname}%\n \\else\n \\advance\\@tempcntb\\@ne\n \\ifnum\\@tempcntb=\\@tempcntc\n \\else\\advance\\@tempcntb\\m@ne\\@citeo\n \\@tempcnta\\@tempcntc\\@tempcntb\\@tempcntc\\fi\\fi}}\\@citeo}{#1}}\n\n\\def\\@citeo{\\ifnum\\@tempcnta>\\@tempcntb\\else\\@citea\n \\def\\@citea{,\\penalty\\@m}%\n \\ifnum\\@tempcnta=\\@tempcntb\\the\\@tempcnta\\else\n {\\advance\\@tempcnta\\@ne\\ifnum\\@tempcnta=\\@tempcntb \\else\n\\def\\@citea{--}\\fi\n \\advance\\@tempcnta\\m@ne\\the\\@tempcnta\\@citea\\the\\@tempcntb}\\fi\\fi}\n\n\\def,{\\relax}\n\\def,{,}\n\\def\\nonumber\\\\{\\nonumber\\\\}\n\\def\\co\\nonumber\\\\{,\\nonumber\\\\}\n\\def\\\\*[-1ex]{\\\\*[-1ex]}\n\\def\\vspace{-\\abovedisplayskip{\\vspace{-\\abovedisplayskip}\n \\vspace{\\abovedisplayshortskip}}\n\\newcommand{\\lsim}\n{\\mathrel{\\raisebox{-.3em}{$\\stackrel{\\displaystyle <}{\\sim}$}}}\n\\newcommand{\\gsim}\n{\\mathrel{\\raisebox{-.3em}{$\\stackrel{\\displaystyle >}{\\sim}$}}}\n\\def\\asymp#1%\n{\\mathrel{\\raisebox{-.4em}{$\\widetilde{\\scriptstyle #1}$}}}\n\\def\\Nlim#1{\\mathrel{\\raisebox{-.4em}\n{$\\stackrel{\\displaystyle\\longrightarrow}{\\scriptstyle#1}$}}}\n\\def\\Nequal#1%\n{\\mathrel{\\raisebox{-.5em}{$\\stackrel{=}{\\scriptstyle\\rm#1}$}}}\n\n\\def\\begin{equation}{\\begin{equation}}\n\\def\\end{equation}{\\end{equation}}\n\\def\\begin{eqnarray}{\\begin{eqnarray}}\n\\def\\end{eqnarray}{\\end{eqnarray}}\n\\def\\barr#1{\\begin{array}{#1}}\n\\def\\end{array}{\\end{array}}\n\\def\\begin{figure}{\\begin{figure}}\n\\def\\end{figure}{\\end{figure}}\n\\def\\begin{table}{\\begin{table}}\n\\def\\end{table}{\\end{table}}\n\\def\\begin{center}{\\begin{center}}\n\\def\\end{center}{\\end{center}}\n\\def\\nonumber{\\nonumber}\n\\def\\displaystyle{\\displaystyle}\n\\def\\textstyle{\\textstyle}\n\\def\\footnotesize{\\footnotesize}\n\\def1.2{1.2}\n \n\\def\\alpha{\\alpha}\n\\def\\beta{\\beta}\n\\def\\Gamma{\\Gamma}\n\\def\\gamma{\\gamma}\n\\def\\delta{\\delta}\n\\def\\Delta{\\Delta}\n\\def\\epsilon{\\epsilon}\n\\def\\varepsilon{\\varepsilon}\n\\def\\lambda{\\lambda}\n\\def\\omega{\\omega}\n\\def\\sigma{\\sigma}\n\\def\\Sigma{\\Sigma}\n\\def\\vartheta{\\vartheta}\n 
\n\\def\\refeq#1{\\mbox{(\\ref{#1})}}\n\\def\\refeqs#1{\\mbox{(\\ref{#1})}}\n\\def\\refeqf#1{\\mbox{(\\ref{#1})}}\n\\def\\reffi#1{\\mbox{Fig.~\\ref{#1}}}\n\\def\\reffis#1{\\mbox{Figs.~\\ref{#1}}}\n\\def\\refta#1{\\mbox{Table~\\ref{#1}}}\n\\def\\reftas#1{\\mbox{Tables~\\ref{#1}}}\n\\def\\refse#1{\\mbox{Section~\\ref{#1}}}\n\\def\\citere#1{\\mbox{Ref.~\\cite{#1}}}\n\\def\\citeres#1{\\mbox{Refs.~\\cite{#1}}}\n \n\\newcommand{\\unskip\\,\\mathrm{TeV}}{\\unskip\\,\\mathrm{TeV}}\n\\newcommand{\\unskip\\,\\mathrm{GeV}}{\\unskip\\,\\mathrm{GeV}}\n\\newcommand{\\unskip\\,\\mathrm{MeV}}{\\unskip\\,\\mathrm{MeV}}\n\\newcommand{\\unskip\\,\\mathrm{pb}}{\\unskip\\,\\mathrm{pb}}\n\\newcommand{\\unskip\\,\\mathrm{fb}}{\\unskip\\,\\mathrm{fb}}\n\n\\newcommand{{\\mathrm{d}}}{{\\mathrm{d}}}\n\\newcommand{{\\mathrm{U}}}{{\\mathrm{U}}}\n\\newcommand{{\\mathrm{L}}}{{\\mathrm{L}}}\n\\newcommand{{\\mathrm{T}}}{{\\mathrm{T}}}\n\n\\newcommand{\\mathswitch{{\\cal{O}}(\\alpha)}}{\\mathswitch{{\\cal{O}}(\\alpha)}}\n\\newcommand{\\mathswitch{{\\cal{O}}(\\alpha^2)}}{\\mathswitch{{\\cal{O}}(\\alpha^2)}}\n\\newcommand{{\\cal{M}}}{{\\cal{M}}}\n\\newcommand{{\\cal{V}}}{{\\cal{V}}}\n\\newcommand{{\\cal{B}}}{{\\cal{B}}}\n\\renewcommand{\\L}{{\\cal{L}}}\n \n\\def\\mathswitchr#1{\\relax\\ifmmode{\\mathrm{#1}}\\else$\\mathrm{#1}$\\fi}\n\\newcommand{\\mathswitch f}{\\mathswitch f}\n\\newcommand{\\mathswitch{\\bar f}}{\\mathswitch{\\bar f}}\n\\newcommand{\\mathswitchr W}{\\mathswitchr W}\n\\newcommand{\\mathswitchr Z}{\\mathswitchr Z}\n\\newcommand{\\mathswitchr A}{\\mathswitchr A}\n\\newcommand{\\mathswitchr g}{\\mathswitchr g}\n\\newcommand{\\mathswitchr H}{\\mathswitchr H}\n\\newcommand{\\mathswitchr e}{\\mathswitchr e}\n\\newcommand{\\mathswitch \\nu_{\\mathrm{e}}}{\\mathswitch \\nu_{\\mathrm{e}}}\n\\newcommand{\\mathswitchr d}{\\mathswitchr d}\n\\newcommand{\\mathswitchr u}{\\mathswitchr u}\n\\newcommand{\\mathswitchr s}{\\mathswitchr s}\n\\newcommand{\\mathswitchr c}{\\mathswitchr c}\n\\newcommand{\\mathswitchr b}{\\mathswitchr b}\n\\newcommand{\\mathswitchr t}{\\mathswitchr t}\n\\newcommand{\\mathswitchr{\\bar t}}{\\mathswitchr{\\bar t}}\n\\newcommand{\\mathswitchr {e^+}}{\\mathswitchr {e^+}}\n\\newcommand{\\mathswitchr {e^-}}{\\mathswitchr {e^-}}\n\\newcommand{\\mathswitchr {W^+}}{\\mathswitchr {W^+}}\n\\newcommand{\\mathswitchr {W^-}}{\\mathswitchr {W^-}}\n\\newcommand{\\mathswitchr {W^\\pm}}{\\mathswitchr {W^\\pm}}\n \n\\def\\mathswitch#1{\\relax\\ifmmode#1\\else$#1$\\fi}\n\\newcommand{\\mathswitch {m_\\Pf}}{\\mathswitch {m_\\mathswitch f}}\n\\newcommand{\\mathswitch {m_\\Pl}}{\\mathswitch {m_\\Pl}}\n\\newcommand{\\mathswitch {m_\\Pq}}{\\mathswitch {m_\\Pq}}\n\\newcommand{\\mathswitch {M_\\PV}}{\\mathswitch {M_\\PV}}\n\\newcommand{\\mathswitch {M_\\PW}}{\\mathswitch {M_\\mathswitchr W}}\n\\newcommand{\\mathswitch {\\lambda}}{\\mathswitch {\\lambda}}\n\\newcommand{\\mathswitch {M_\\PZ}}{\\mathswitch {M_\\mathswitchr Z}}\n\\newcommand{\\mathswitch {M_\\PH}}{\\mathswitch {M_\\mathswitchr H}}\n\\newcommand{\\mathswitch {m_\\Pe}}{\\mathswitch {m_\\mathswitchr e}}\n\\newcommand{\\mathswitch {m_\\mu}}{\\mathswitch {m_\\mu}}\n\\newcommand{\\mathswitch {m_\\tau}}{\\mathswitch {m_\\tau}}\n\\newcommand{\\mathswitch {m_\\Pd}}{\\mathswitch {m_\\mathswitchr d}}\n\\newcommand{\\mathswitch {m_\\Pu}}{\\mathswitch {m_\\mathswitchr u}}\n\\newcommand{\\mathswitch {m_\\Ps}}{\\mathswitch {m_\\mathswitchr s}}\n\\newcommand{\\mathswitch {m_\\Pc}}{\\mathswitch {m_\\mathswitchr c}}\n\\newcommand{\\mathswitch {m_\\Pb}}{\\mathswitch 
{m_\\mathswitchr b}}\n\\newcommand{\\mathswitch {m_\\Pt}}{\\mathswitch {m_\\mathswitchr t}}\n \n\\newcommand{\\mathswitch {s_\\PW}}{\\mathswitch {s_\\mathswitchr W}}\n\\newcommand{\\mathswitch {c_\\PW}}{\\mathswitch {c_\\mathswitchr W}}\n\\newcommand{\\mathswitch {\\bar s_\\PW}}{\\mathswitch {\\bar s_\\mathswitchr W}}\n\\newcommand{\\mathswitch {\\bar c_\\PW}}{\\mathswitch {\\bar c_\\mathswitchr W}}\n\\newcommand{\\mathswitch {Q_\\Pf}}{\\mathswitch {Q_\\mathswitch f}}\n\\newcommand{\\mathswitch {Q_\\Pl}}{\\mathswitch {Q_\\Pl}}\n\\newcommand{\\mathswitch {Q_\\Pq}}{\\mathswitch {Q_\\Pq}}\n\\newcommand{\\mathswitch {v_\\Pf}}{\\mathswitch {v_\\mathswitch f}}\n\\newcommand{\\mathswitch {a_\\Pf}}{\\mathswitch {a_\\mathswitch f}}\n\\newcommand{\\mathswitch {g_\\Pe}^{\\sigma}}{\\mathswitch {g_\\mathswitchr e}^{\\sigma}}\n\\newcommand{\\mathswitch {g_\\Pe}^-}{\\mathswitch {g_\\mathswitchr e}^-}\n\\newcommand{\\mathswitch {g_\\Pe}^+}{\\mathswitch {g_\\mathswitchr e}^+}\n\\newcommand{\\mathswitch {G_\\mu}}{\\mathswitch {G_\\mu}}\n \n\\def\\mathop{\\mathrm{Li}}\\nolimits{\\mathop{\\mathrm{Li}}\\nolimits}\n\\def\\mathop{\\mathrm{Re}}\\nolimits{\\mathop{\\mathrm{Re}}\\nolimits}\n\\defi.e.\\ {i.e.\\ }\n\\defe.g.\\ {e.g.\\ }\n\\defcf.\\ {cf.\\ }\n\\newcommand{self-energy}{self-energy}\n\\newcommand{self-energies}{self-energies}\n\\newcommand{counterterm}{counterterm}\n\\newcommand{counterterms}{counterterms}\n\\newcommand{cross-section}{cross-section}\n\\newcommand{cross-sections}{cross-sections}\n\n\\hyphenation{brems-strah-lung}\n \n\\newcommand{\\mpar}[1]{{\\marginpar{\\hbadness10000%\n \\sloppy\\hfuzz10pt\\boldmath\\bf#1}}%\n \\typeout{marginpar: #1}\\ignorespaces}\n\\marginparwidth 1.2cm\n\\marginparsep 0.2cm\n\n\\renewcommand{\\topfraction}{1.0}\n\\renewcommand{\\bottomfraction}{1.0}\n\\renewcommand{\\textfraction}{0.2}\n\\renewcommand{\\floatpagefraction}{0.7}\n\n\\newcommand{\\gamma\\gamma\\to\\PWp\\PWm}{\\gamma\\gamma\\to\\mathswitchr {W^+}\\mathswitchr {W^-}}\n\\newcommand{\\Pep\\Pem\\to f\\bar{f}}{\\mathswitchr {e^+}\\mathswitchr {e^-}\\to f\\bar{f}}\n\\newcommand{\\Pep\\Pem\\to\\PWp\\PWm}{\\mathswitchr {e^+}\\mathswitchr {e^-}\\to\\mathswitchr {W^+}\\mathswitchr {W^-}}\n\\newcommand{\\Pem\\gamma\\to\\PWm\\nu}{\\mathswitchr {e^-}\\gamma\\to\\mathswitchr {W^-}\\nu}\n\\newcommand{\\mathrm{Born}}{\\mathrm{Born}}\n\\newcommand{\\mathrm{SM}}{\\mathrm{SM}}\n\\newcommand{\\mathrm{GNLSM}}{\\mathrm{GNLSM}}\n\\newcommand{E_{\\mathrm{CMS}}}{E_{\\mathrm{CMS}}}\n\n\\newcommand{\\mathrm{Born}}{\\mathrm{Born}}\n\\newcommand{\\mathrm{self}}{\\mathrm{self}}\n\\newcommand{\\mathrm{corr}}{\\mathrm{corr}}\n\\newcommand{\\mathrm{weak}}{\\mathrm{weak}}\n\\newcommand{\\mathrm{bos}}{\\mathrm{bos}}\n\\newcommand{\\mathrm{ferm}}{\\mathrm{ferm}}\n\\newcommand{\\mathrm{Coul.}}{\\mathrm{Coul.}}\n\\newcommand{\\mathrm{counter}}{\\mathrm{counter}}\n\\newcommand{\\mathrm{cut}}{\\mathrm{cut}}\n\\newcommand{\\mathrm{SB}}{\\mathrm{SB}}\n\\newcommand{\\mathrm{unpol}}{\\mathrm{unpol}}\n\\newcommand{\\mathrm{gauge}}{\\mathrm{gauge}}\n\\newcommand{\\mathrm{CMS}}{\\mathrm{CMS}}\n\\newcommand{\\mathrm{NL}}{\\mathrm{NL}}\n\\newcommand{\\mathrm{tHF}}{\\mathrm{tHF}}\n\\renewcommand{\\min}{\\mathrm{min}}\n\\renewcommand{\\max}{\\mathrm{max}}\n\n\\begin{document}\n\n\\thispagestyle{empty}\n\\def\\arabic{footnote}{\\fnsymbol{footnote}}\n\\setcounter{footnote}{1}\n\\null\n\\strut\\hfill BI-TP 95\/04 \\\\\n\\strut\\hfill WUE-ITP-95-002\\\\\n\\strut\\hfill hep-ph\/9503442\n\\vskip 0cm\n\\vfill\n\\begin{center}\n{\\Large \\bf 
\n\\boldmath{Radiative Corrections to $\\gamma\\gamma\\to\\mathswitchr {W^+}\\mathswitchr {W^-}$ \\\\\n in the Electroweak Standard Model}\n\\par} \\vskip 2.5em\n{\\large\n{\\sc A.~Denner%\n\\footnote{On leave of absence from \nInstitut f\\\"ur Theoretische Physik, Universit\\\"at W\\\"urzburg,\nAm Hubland,\\\\ \\hspace*{.5cm} D-97074 W\\\"urzburg, Germany.}\n}\\\\[1ex]\n{\\normalsize \\it Institut f\\\"ur Theoretische Physik, Universit\\\"at Leipzig\\\\\nAugustusplatz 10, D-04109 Leipzig, Germany}\n\\\\[2ex]\n{\\sc S.~Dittmaier%\n\\footnote{Supported by the Bundesministerium f\\\"ur Forschung und\nTechnologie, Bonn, Germany.} }\\\\[1ex]\n{\\normalsize \\it Theoretische Physik, Universit\\\"at Bielefeld\\\\ \nPostfach 100131, D-33501 Bielefeld, Germany}\n\\\\[2ex]\n{\\sc R.~Schuster%\n\\footnote{Supported by the Deutsche Forschungsgemeinschaft.}\n} \\\\[1ex]\n{\\normalsize \\it Institut f\\\"ur Theoretische Physik, Universit\\\"at W\\\"urzburg\\\\\nAm Hubland, D-97074 W\\\"urzburg, Germany}\n}\n\\par \\vskip 1em\n\\end{center} \\par\n\\vskip 1cm \n\\vfill\n{\\bf Abstract:} \\par\nThe cross-section\\ for $\\gamma\\gamma\\to\\mathswitchr W^+\\mathswitchr W^-$ with arbitrary\npolarized photons and W bosons is calculated within the electroweak\nStandard Model including the complete virtual and soft-photonic \n${\\cal O}(\\alpha)$ corrections. We present a detailed numerical discussion\nof the complete radiative corrections and an analytical investigation of\nthe leading corrections. It turns out that in the on-shell\nrenormalization scheme for fixed $\\mathswitch {M_\\PW}$ no leading corrections \nassociated with the \nrunning of $\\alpha$ or heavy top-quark \nand Higgs-boson masses occur.\nThe corrections are typically of the order of 10\\%. \nThey reach, however, larger values where the lowest-order cross-sections\\ are suppressed.\n\\par\n\\vskip 1cm \n\\noindent BI-TP 95\/04 \\\\\nWUE-ITP-95-002\\par\n\\vskip .15mm\n\\noindent March 1995 \\par\n\\null\n\\setcounter{page}{0}\n\\clearpage\n\\def\\arabic{footnote}{\\arabic{footnote}}\n\\setcounter{footnote}{0}\n\n\\section{Introduction}\n\nThe $\\mathrm{SU}(2)\\times\\mathrm{U}(1)$ standard electroweak theory \nhas passed many\nprecision tests during the last years. In particular, measurements of\nthe muon decay constant $G_\\mu$, the gauge-boson masses \\mathswitch {M_\\PW}\\ and \\mathswitch {M_\\PZ},\nand the decay widths and asymmetries of the \\mathswitchr Z\\ boson at LEP have \nprovided stringent constraints which are successfully fulfilled by \nthe Standard Model (SM) evaluated at one-loop level. \nThe experimental data favor\na value for the top-quark mass which is in accordance with the\ndirect measurements \\cite{mt} of CDF, $\\mathswitch {m_\\Pt}=176\\pm16\\unskip\\,\\mathrm{GeV}$, and D\\O,\n$\\mathswitch {m_\\Pt}=199\\pm30\\unskip\\,\\mathrm{GeV}$. Nevertheless, further precision tests of the SM\nare required. Up to now, only weak direct experimental information exists on\nthe non-Abelian self-interaction of the gauge bosons \\cite{WWZ}.\nMoreover, no experimental evidence on the mechanism of spontaneous \nsymmetry breaking, which is responsible for mass generation and\npostulates the existence of the scalar Higgs boson, has been found yet. \nFor such\ninvestigations, energies of several hundred GeV or even few TeV are\nneeded, since the sensitivity to deviations from the SM gauge-boson\nself-interaction grows strongly with energy, and the existence of the\nHiggs particle can be proven only by direct production. 
To this end, \na ``Next Linear Collider'' (NLC) for $\\mathswitchr e\\Pe$,\n$\\mathswitchr e\\gamma$, and $\\gamma\\ga$ collisions was proposed \\cite{nlc}, which offers\na unique environment for such precision experiments owing to the \ncomparatively small background.\n\nA particularly interesting process in $\\gamma\\ga$ collisions is $\\gamma\\gamma\\to\\PWp\\PWm$.\nIts total cross-section\\ approaches a constant of about\n$80\\unskip\\,\\mathrm{pb}$ at high energies, \ncorresponding to $8\\times10^6$ \\mathswitchr W~pairs for an integrated luminosity of $10\\unskip\\,\\mathrm{fb}^{-1}$.\nThis large cross-section\\ is due to the massive $t$-channel exchange and is\ndrastically reduced by angular cuts. But even for $|\\!\\cos\\theta|< 0.8$\nthe cross-section\\ is still 15 and $4\\unskip\\,\\mathrm{pb}$ at center-of-mass energies of 500 and\n$1000\\unskip\\,\\mathrm{GeV}$, respectively, and thus much larger than e.g.\\ the one for $\\Pep\\Pem\\to\\PWp\\PWm$.\nHence $\\gamma\\gamma\\to\\PWp\\PWm$ is very well suited for precision investigations of the SM.\n\nSeveral features of $\\gamma\\gamma\\to\\PWp\\PWm$ have already been discussed in the literature.\nMost of the existing works concentrated on tree-level predictions\n\\cite{Pe73}, in particular on the influence of anomalous\nnon-Abelian gauge couplings \\cite{Ki73,Ye91,Be92}.\nThe process $\\gamma\\gamma\\to\\PWp\\PWm$ depends at tree level on both the triple\n$\\gamma\\mathswitchr W\\PW$ and the quartic $\\gamma\\ga\\mathswitchr W\\PW$ coupling, and\nno other vertices are involved in the unitary gauge at lowest order.\nThe sensitivity to the $\\gamma\\mathswitchr W\\PW$ coupling is comparable and \ncomplementary to that of the reactions $\\Pep\\Pem\\to\\PWp\\PWm$ and $\\Pem\\gamma\\to\\PWm\\nu$: the first\ninvolves a mixture of the $\\gamma\\mathswitchr W\\PW$ and the $\\mathswitchr Z\\mathswitchr W\\PW$ coupling, whereas the\nsecond involves the $\\gamma\\mathswitchr W\\PW$ coupling alone but is less sensitive \\cite{Ye91}.\nBecause the sensitivity to the $\\gamma\\ga\\mathswitchr W\\PW$ coupling is much larger \nthan the one in $\\mathswitchr {e^+}\\mathswitchr {e^-}$ processes, $\\gamma\\gamma\\to\\PWp\\PWm$ is the ideal process to study \nthis coupling \\cite{Be92}. \n\nThe one-loop diagrams involving a resonant Higgs boson have been\ncalculated in order to \nstudy the prospects for investigating the Higgs boson via\n$\\gamma\\ga\\to\\mathswitchr H^*\\to\\mathswitchr {W^+}\\mathswitchr {W^-}$ \\cite{Va79,Bo92,Mo94,Ve94}.\nBased on our complete one-loop calculation, we have\nsupplemented these investigations by a discussion of the heavy-Higgs\neffects in \\citere{HH}. As a matter of fact, only the (suppressed)\nchannels of longitudinal \\mathswitchr W-boson production are sensitive to the Higgs\nmechanism, whereas the (dominant) channels of purely transverse \\mathswitchr W-boson \nproduction are extremely insensitive. \nThis insensitivity to the Higgs sector renders $\\gamma\\gamma\\to\\PWp\\PWm$ even more\nsuitable for the investigation of the non-Abelian self-couplings.\n\nIn this paper, we focus on the complete SM one-loop corrections to $\\gamma\\gamma\\to\\PWp\\PWm$.\nOne reason why these have not\nbeen calculated so far is certainly their analytical\ncomplexity. We have calculated the numerous Feynman graphs (roughly\n300--550 depending on the gauge fixing) by using {\\it Mathematica\\\/}\n\\cite{math}. 
More precisely, we have generated and drawn the Feynman\ngraphs by {\\it FeynArts} \\cite{fa} and performed three different\ncalculations, one in 't~Hooft--Feynman gauge using {\\it FeynCalc}\n\\cite{fc} and two in a non-linear gauge with and without using\n{\\it FeynCalc}.\nAs the final result exhibits a very complicated and\nuntransparent analytical form, we refrain from writing\nit down in full detail. Instead, we indicate its general structure and\npresent a detailed discussion of\nthe numerical results for the $\\mathswitch{{\\cal{O}}(\\alpha)}$ (virtual and real\nsoft-photonic) corrections to the polarized as well as unpolarized\ncross-sections. We restrict the presentation of the analytical results\nto the lowest-order cross-sections\\ and the most important $\\mathswitch{{\\cal{O}}(\\alpha)}$ corrections.\nIn particular, we discuss the Higgs resonance, the heavy-Higgs effects,\nthe Coulomb singularity, and the leading\neffects from light fermions and a heavy top quark.\n\nThe paper is organized as follows: After fixing our notation and\nconventions in \\refse{se:notcon}, we consider the lowest-order cross-sections\\ for\nvarious polarizations in \\refse{se:born}. In \\refse{se:RC} we discuss \nthe evaluation and general features of the radiative corrections and in\n\\refse{se:num} the numerical results. \n\n\\section{Notation and conventions}\n\\label{se:notcon}\n\nWe consider the reaction\n$$ \\gamma(k_1,\\lambda_1) + \\gamma(k_2,\\lambda_2) \\rightarrow\n {\\mathswitchr {W^+}}(k_3,\\lambda_3) + {\\mathswitchr {W^-}}(k_4,\\lambda_4) \\; ,\n$$\nwhere $\\lambda_{1,2} = \\pm 1$ and $\\lambda_{3,4} = 0,\\pm 1$\ndenote the helicities of the incoming photons and outgoing W bosons,\nrespectively.\n\nIn the center-of-mass system (CMS)\nthe momenta read \nin terms of the beam energy $E$ of the incoming photons and \nthe scattering angle $\\theta$\n\\begin{eqnarray}\nk^\\mu_1 &=& E(1, 0, 0, -1) \\; ,\n\\nonumber \\\\\nk^\\mu_2 &=& E(1, 0, 0, 1) \\; ,\n\\nonumber \\\\\nk^\\mu_3 &=& E(1, -\\beta \\sin\\theta,\n 0, -\\beta \\cos\\theta) \\; ,\n\\nonumber \\\\\nk^\\mu_4 &=& E(1, \\beta \\sin\\theta,\n 0, \\beta \\cos\\theta) \\; ,\n\\end{eqnarray}\nwhere $\\beta = \\sqrt{1 - \\mathswitch {M_\\PW}^2 \/ E^2}$\nis the velocity of the \\mathswitchr W\\ bosons in the CMS.\nWe define the Mandelstam variables\n\\begin{eqnarray}\ns &=& (k_1 + k_2)^2 = (k_3 + k_4)^2 = 4 E^2 \\; ,\n\\nonumber \\\\\nt &=& (k_1 - k_3)^2 = (k_2 - k_4)^2 = \\mathswitch {M_\\PW}^2 - \\frac{s}{2}\n (1 - \\beta \\cos \\theta) \\; ,\n\\nonumber \\\\\nu &=& (k_1 - k_4)^2 = (k_2 - k_3)^2 = \\mathswitch {M_\\PW}^2 - \\frac{s}{2}\n (1 + \\beta \\cos \\theta) \\; .\n\\end{eqnarray}\n\nIn order to calculate polarized cross-sections, we introduce explicit polarization\nvectors for the photons and W bosons as follows\n\\begin{eqnarray}\n\\varepsilon_1^\\mu(k_1, \\lambda_1 =\\pm 1) &=& \\frac{-1}{\\sqrt{2}} (0, 1, \\mp i, 0) \\; ,\n\\nonumber \\\\\n\\varepsilon_2^\\mu(k_2, \\lambda_2 =\\pm 1) &=& \\frac{1}{\\sqrt{2}} (0, 1, \\pm i, 0) \\; ,\n\\nonumber \\\\\n{\\varepsilon_3^*}^\\mu(k_3, \\lambda_3 = \\pm 1) &=& \\frac{-1}{\\sqrt{2}} \n (0, \\cos\\theta, \\pm i, -\\sin\\theta) \\; ,\n\\nonumber \\\\\n{\\varepsilon_4^*}^\\mu(k_4, \\lambda_4 =\\pm 1) &=& \\frac{1}{\\sqrt{2}} \n (0, \\cos\\theta, \\mp i, -\\sin\\theta) \\; ,\n\\nonumber \\\\\n{\\varepsilon_3^*}^\\mu(k_3, \\lambda_3 =0)\\;\\;\\; &=& \\frac{E}{\\mathswitch {M_\\PW}} \n (\\beta, -\\sin\\theta, 0, -\\cos\\theta) \\; ,\n\\nonumber \\\\\n{\\varepsilon_4^*}^\\mu(k_4, \\lambda_4 
=0)\\;\\;\\; &=& \\frac{E}{\\mathswitch {M_\\PW}} \n (\\beta, \\sin\\theta, 0, \\cos\\theta) \\; .\n\\end{eqnarray}\n\nWe decompose the amplitude ${\\cal M}$ into invariant functions $F_{ijkl}$\nand standard matrix elements (SME) ${\\cal M}_{ijkl}$, which\ncontain the whole information about the boson polarizations.\nUsing the transversality condition for the polarization vectors\nand Schouten's identity,\nthe amplitude ${\\cal M}$ can be reduced to\n\\begin{eqnarray}\n{\\cal M}(\\lambda_1,\\lambda_2,\\lambda_3,\\lambda_4,s,t) &=& \n \\sum_{i,j = \\{0,3,4\\}}\\sum_{k,l = \\{0,1,2\\}}\n F_{ijkl}(s,t) \n {\\cal M}_{ijkl}(\\lambda_1,\\lambda_2,\\lambda_3,\\lambda_4,s,t) \n\\nonumber \\\\\n & &{}+ F^{(t)}_{0000}(s,t) \n {\\cal M}_{0000}^{(t)}(\\lambda_1,\\lambda_2,\\lambda_3,\\lambda_4,s,t) \n\\nonumber \\\\\n & &{}+ F^{(u)}_{0000}(s,t) \n {\\cal M}_{0000}^{(u)}(\\lambda_1,\\lambda_2,\\lambda_3,\\lambda_4,s,t)\n\\label{decom}\n\\end{eqnarray}\nwith\n($i,j = \\{3,4\\}$, $k,l = \\{1,2\\}$)\n\\begin{eqnarray}\n{\\cal M}_{ijkl} &=& (\\varepsilon_1 \\cdot k_i) (\\varepsilon_2 \\cdot k_j) \n (\\varepsilon_3^* \\cdot k_k) (\\varepsilon_4^* \\cdot k_l) \/ s^2,\n\\label{5}\n\\\\[.3em]\n{\\cal M}_{0jkl} &=& (\\varepsilon_1 \\cdot A) (\\varepsilon_2 \\cdot k_j) \n (\\varepsilon_3^* \\cdot k_k) (\\varepsilon_4^* \\cdot k_l) \/ s,\n\\nonumber \\\\\n{\\cal M}_{i0kl} &=& (\\varepsilon_1 \\cdot k_i) (\\varepsilon_2 \\cdot A) \n (\\varepsilon_3^* \\cdot k_k) (\\varepsilon_4^* \\cdot k_l) \/ s,\n\\nonumber \\\\\n{\\cal M}_{ij0l} &=& (\\varepsilon_1 \\cdot k_i) (\\varepsilon_2 \\cdot k_j) \n (\\varepsilon_3^* \\cdot A) (\\varepsilon_4^* \\cdot k_l) \/ s,\n\\nonumber \\\\\n{\\cal M}_{ijk0} &=& (\\varepsilon_1 \\cdot k_i) (\\varepsilon_2 \\cdot k_j) \n (\\varepsilon_3^* \\cdot k_k) (\\varepsilon_4^* \\cdot A) \/ s,\n\\\\[.3em]\n{\\cal M}_{00kl} &=& (\\varepsilon_1 \\cdot \\varepsilon_2) \n (\\varepsilon_3^* \\cdot k_k) (\\varepsilon_4^* \\cdot k_l) \/ s,\n\\nonumber \\\\\n{\\cal M}_{0j0l} &=& (\\varepsilon_1 \\cdot \\varepsilon_3^*) \n (\\varepsilon_2 \\cdot k_j) (\\varepsilon_4^* \\cdot k_l) \/ s,\n\\nonumber \\\\\n{\\cal M}_{0jk0} &=& (\\varepsilon_1 \\cdot \\varepsilon_4^*) \n (\\varepsilon_2 \\cdot k_j) (\\varepsilon_3^* \\cdot k_k) \/ s,\n\\nonumber \\\\\n{\\cal M}_{i00l} &=& (\\varepsilon_2 \\cdot \\varepsilon_3^*) \n (\\varepsilon_1 \\cdot k_i) (\\varepsilon_4^* \\cdot k_l) \/ s,\n\\nonumber \\\\\n{\\cal M}_{i0k0} &=& (\\varepsilon_2 \\cdot \\varepsilon_4^*) \n (\\varepsilon_1 \\cdot k_i) (\\varepsilon_3^* \\cdot k_k) \/ s,\n\\nonumber \\\\\n{\\cal M}_{ij00} &=& (\\varepsilon_3^* \\cdot \\varepsilon_4^*) \n (\\varepsilon_1 \\cdot k_i) (\\varepsilon_2 \\cdot k_j) \/ s,\n\\\\[.3em]\n{\\cal M}_{000l} &=& (\\varepsilon_1 \\cdot \\varepsilon_2) \n (\\varepsilon_3^* \\cdot A) (\\varepsilon_4^* \\cdot k_l),\n\\nonumber \\\\\n{\\cal M}_{00k0} &=& (\\varepsilon_1 \\cdot \\varepsilon_2) \n (\\varepsilon_3^* \\cdot k_k) (\\varepsilon_4^* \\cdot A),\n\\nonumber \\\\\n{\\cal M}_{0j00} &=& (\\varepsilon_3^* \\cdot \\varepsilon_4^*) \n (\\varepsilon_1 \\cdot A) (\\varepsilon_2 \\cdot k_j),\n\\nonumber \\\\\n{\\cal M}_{i000} &=& (\\varepsilon_3^* \\cdot \\varepsilon_4^*) \n (\\varepsilon_1 \\cdot k_i) (\\varepsilon_2 \\cdot A),\n\\\\[.3em]\n{\\cal M}_{0000} &=& (\\varepsilon_1 \\cdot \\varepsilon_2) \n (\\varepsilon_3^* \\cdot \\varepsilon_4^*) ,\n\\nonumber \\\\\n{\\cal M}^{(t)}_{0000} &=& (\\varepsilon_1 \\cdot \\varepsilon_3^*) \n (\\varepsilon_2 \\cdot \\varepsilon_4^*) ,\n\\nonumber \\\\\n{\\cal M}^{(u)}_{0000} &=& 
(\\varepsilon_1 \\cdot \\varepsilon_4^*) \n (\\varepsilon_2 \\cdot \\varepsilon_3^*) , \\label{9}\n\\end{eqnarray}\nand \n\\begin{equation}\nA_\\mu = \\frac{i}{ut - \\mathswitch {M_\\PW}^4} \n \\varepsilon_{\\mu\\nu\\rho\\sigma} k_1^\\nu k_2^\\rho k_3^\\sigma,\n\\qquad \\varepsilon_{0123} = -1 \\;.\n\\end{equation}\nOur choice of polarization vectors for the photons implies\n\\begin{eqnarray}\n\\varepsilon_i k_j = 0, \\qquad i,j = 1,2 \\;,\n\\end{eqnarray}\nand thus by virtue of momentum conservation\n\\begin{eqnarray}\n\\varepsilon_i k_3 = - \\varepsilon_i k_4,\n \\qquad i = 1,2 \\;.\n\\end{eqnarray}\nWe use this relation to eliminate all SME\ninvolving $\\varepsilon_1 k_4$ and $\\varepsilon_2 k_3$.\nThis reduces the 83 SME defined in (\\ref{5}) -- \n(\\ref{9}) to 38 for the process under consideration. \n\nAs a consequence of CP invariance and Bose symmetry\nonly the sum of each SME and the one \nwith $(\\varepsilon_1,k_1,\\varepsilon_3,k_3)$ and\n$(\\varepsilon_2,k_2,\\varepsilon_4,k_4)$ interchanged occurs.\nFor instance, ${\\cal{M}}_{0401}$ only appears in the\ncombination ${\\cal{M}}_{0401}+{\\cal{M}}_{3020}$ in the expansion of ${\\cal{M}}$ in\n\\refeq{decom}. This leaves 22 independent SME. \n\nIn four dimensions, the matrix elements ${\\cal M}^{(t)}_{0000}$ and\n${\\cal M}^{(u)}_{0000}$ are not linearly independent from the set of all\n${\\cal{M}}_{ijkl}$ and \ncan be reduced to linear combinations of the other matrix\nelements using the identities \n\\begin{eqnarray}\n\\delta^{\\varepsilon_1\\varepsilon_3^*k_1k_2k_3}\n _{\\varepsilon_2\\varepsilon_4^*k_1k_2k_3} =\n\\delta^{\\varepsilon_1\\varepsilon_4^*k_1k_2k_3}\n _{\\varepsilon_2\\varepsilon_3^*k_1k_2k_3} = 0\n\\end{eqnarray}\ninvolving the Gram determinant\n\\begin{eqnarray}\n\\delta^{p_1 \\ldots p_n}_{q_1 \\ldots q_n} =\n\\left| \\matrix{p_1\\cdot q_1&\\ldots&p_1\\cdot q_n\\cr\n \\vdots&\\ddots&\\vdots\\cr\n p_n\\cdot q_1&\\ldots&p_n\\cdot q_n\\cr}\\right| \\;.\n\\end{eqnarray}\nNevertheless, we keep ${\\cal M}^{(t)}_{0000}$ and\n${\\cal M}^{(u)}_{0000}$ for convenience.\n\nBose symmetry implies that the amplitude ${\\cal{M}}$ is invariant under \nthe interchange $(k_1,\\varepsilon_1) \\leftrightarrow (k_2,\\varepsilon_2)$.\nSince many diagrams can be related to others by this transformation,\nit is useful to introduce a second set of SME which is obtained\nfrom \\refeqs{5}--\\refeqf{9} by this interchange.\nOf course, this second set of SME can be expressed by the original set.\n\nBesides Bose symmetry also CP is an exact symmetry,\nsince we use a unit quark-mixing \nmatrix.%\n\\footnote{For a non-trivial quark-mixing matrix, CP would be violated\nin the considered process first at two-loop level.} \nThe helicity amplitudes for fixed \npolarization configurations are related as follows\n\\begin{equation}\n\\begin{array}[b]{ll}\n{\\cal M}_{\\lambda_1 \\lambda_2 \\lambda_3 \\lambda_4}(s,t,u) = \n{\\cal M}_{\\lambda_2 \\lambda_1 \\lambda_3 \\lambda_4}(s,u,t) \n& \\hbox{(Bose)} \\\\\n{\\cal M}_{\\lambda_1 \\lambda_2 \\lambda_3 \\lambda_4}(s,t,u) = \n{\\cal M}_{-\\lambda_1 -\\lambda_2 -\\lambda_4 -\\lambda_3}(s,u,t) \n\\qquad & \\hbox{(CP)} \\\\\n{\\cal M}_{\\lambda_1 \\lambda_2 \\lambda_3 \\lambda_4}(s,t,u) =\n{\\cal M}_{-\\lambda_2 -\\lambda_1 -\\lambda_4 -\\lambda_3}(s,t,u)\n\\qquad & \\hbox{(Bose+CP)}.\n\\end{array}\n\\end{equation}\n\nIn the following, we only consider the sum of the two transverse \n\\mathswitchr W~polarizations.\nTherefore we indicate the polarizations of the external particles \nby four labels, the first 
pair corresponding to the photons, and the \nsecond pair to\nthe \\mathswitchr W~bosons. The labels $+$,$-$ represent right-handed and left-handed\nphotons, respectively, ${\\mathrm{L}}$ stands for longitudinal, and\n${\\mathrm{T}}$ for the sum of the two transverse \\mathswitchr W~polarizations.\n\nThe combination of Bose and CP symmetry leads to the following relations\nbetween the differential cross-sections\\ with equal photon helicities\n\\begin{eqnarray}\n\\left(\\frac{{\\mathrm{d}}\\sigma}{{\\mathrm{d}}\\Omega}\\right)_{--{\\rm TT}} &=&\n\\left(\\frac{{\\mathrm{d}}\\sigma}{{\\mathrm{d}}\\Omega}\\right)_{++{\\rm TT}}\\; ,\n\\nonumber \\\\\n\\left(\\frac{{\\mathrm{d}}\\sigma}{{\\mathrm{d}}\\Omega}\\right)_{--{\\rm LL}} &=&\n\\left(\\frac{{\\mathrm{d}}\\sigma}{{\\mathrm{d}}\\Omega}\\right)_{++{\\rm LL}}\\; ,\n\\nonumber \\\\\n\\left(\\frac{{\\mathrm{d}}\\sigma}{{\\mathrm{d}}\\Omega}\\right)_{--{\\rm (LT+TL)}} &=&\n\\left(\\frac{{\\mathrm{d}}\\sigma}{{\\mathrm{d}}\\Omega}\\right)_{++{\\rm (LT+TL)}} \\; .\n\\label{eq:CPBds}\n\\end{eqnarray}\nMoreover, Bose symmetry implies that all cross-sections\\ in (\\ref{eq:CPBds})\nare forward--backward symmetric. \nFor different photon helicities Bose symmetry leads to\n\\begin{eqnarray}\n\\left(\\frac{{\\mathrm{d}}\\sigma}{{\\mathrm{d}}\\Omega}\\right)_{-+{\\rm TT}}(s,t,u) &=&\n\\left(\\frac{{\\mathrm{d}}\\sigma}{{\\mathrm{d}}\\Omega}\\right)_{+-{\\rm TT}}(s,u,t)\\; ,\n\\nonumber \\\\\n\\left(\\frac{{\\mathrm{d}}\\sigma}{{\\mathrm{d}}\\Omega}\\right)_{-+{\\rm LL}}(s,t,u) &=&\n\\left(\\frac{{\\mathrm{d}}\\sigma}{{\\mathrm{d}}\\Omega}\\right)_{+-{\\rm LL}}(s,u,t)\\; ,\n\\nonumber \\\\\n\\left(\\frac{{\\mathrm{d}}\\sigma}{{\\mathrm{d}}\\Omega}\\right)_{-+{\\rm (LT+TL)}}(s,t,u) &=&\n\\left(\\frac{{\\mathrm{d}}\\sigma}{{\\mathrm{d}}\\Omega}\\right)_{+-{\\rm (LT+TL)}}(s,u,t) \\; ,\n\\label{eq:Bds}\n\\end{eqnarray}\nwhereas Bose+CP does not yield further relations.\n\nC and P symmetry are only violated by the fermionic loop corrections,\nbut hold in lowest order and for the bosonic loop corrections. 
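These symmetry relations and the kinematical identities given above provide convenient numerical consistency checks. As a minimal illustration (not part of the actual calculation), the following Python lines construct the momenta and polarization vectors of this section for an arbitrary choice of $E$, $\\theta$, and $\\mathswitch {M_\\PW}$ and verify the relations $\\varepsilon_i k_j = 0$ and $\\varepsilon_i k_3 = -\\varepsilon_i k_4$ used in the reduction of the SME, as well as the transversality and normalization of the \\mathswitchr W~polarization vectors:
\\begin{verbatim}
import numpy as np

# Metric (+,-,-,-); Minkowski product without complex conjugation.
g = np.diag([1.0, -1.0, -1.0, -1.0])
dot = lambda a, b: a @ g @ b

E, MW, theta = 250.0, 80.22, 0.7   # illustrative values only
beta = np.sqrt(1.0 - MW**2/E**2)
st, ct = np.sin(theta), np.cos(theta)

k1 = E*np.array([1.0, 0.0, 0.0, -1.0])
k2 = E*np.array([1.0, 0.0, 0.0,  1.0])
k3 = E*np.array([1.0, -beta*st, 0.0, -beta*ct])
k4 = E*np.array([1.0,  beta*st, 0.0,  beta*ct])

eps1  = -np.array([0.0, 1.0, -1.0j, 0.0])/np.sqrt(2.0)  # lambda_1 = +1
eps2  =  np.array([0.0, 1.0,  1.0j, 0.0])/np.sqrt(2.0)  # lambda_2 = +1
eps3  = -np.array([0.0,  ct,  1.0j, -st])/np.sqrt(2.0)  # eps_3^*, lambda_3 = +1
eps4  =  np.array([0.0,  ct, -1.0j, -st])/np.sqrt(2.0)  # eps_4^*, lambda_4 = +1
eps3L = (E/MW)*np.array([beta, -st, 0.0, -ct])          # lambda_3 = 0
eps4L = (E/MW)*np.array([beta,  st, 0.0,  ct])          # lambda_4 = 0

for eps in (eps1, eps2):                      # photon polarizations
    assert all(abs(dot(eps, k)) < 1e-9 for k in (k1, k2))
    assert abs(dot(eps, k3) + dot(eps, k4)) < 1e-9
for eps, k in ((eps3, k3), (eps4, k4), (eps3L, k3), (eps4L, k4)):
    assert abs(dot(eps, k)) < 1e-9            # transversality
    assert abs(dot(eps, np.conj(eps)) + 1.0) < 1e-9  # normalization
\\end{verbatim}
In the same way, the Bose and CP relations quoted above, and the restricted C and P symmetries of the bosonic corrections, can serve as checks on a numerical implementation of the helicity amplitudes.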
We\nindicate these restricted symmetries by a modified equality sign\n\\begin{eqnarray}\n{\\cal M}_{\\lambda_1 \\lambda_2 \\lambda_3 \\lambda_4}(s,t,u) &\\Nequal{P}&\n{\\cal M}_{-\\lambda_1 -\\lambda_2 -\\lambda_3 -\\lambda_4}(s,t,u) \\; ,\n\\nonumber\\\\\n{\\cal M}_{\\lambda_1 \\lambda_2 \\lambda_3 \\lambda_4}(s,t,u) &\\Nequal{C}&\n{\\cal M}_{\\lambda_1 \\lambda_2 \\lambda_4 \\lambda_3}(s,u,t) \\; .\n\\end{eqnarray}\nP invariance then implies for the differential cross-sections\\\n\\begin{eqnarray}\n\\left(\\frac{{\\mathrm{d}}\\sigma}{{\\mathrm{d}}\\Omega}\\right)_{-+{\\rm TT}} &\\Nequal{P}&\n\\left(\\frac{{\\mathrm{d}}\\sigma}{{\\mathrm{d}}\\Omega}\\right)_{+-{\\rm TT}}\\; ,\n\\nonumber \\\\\n\\left(\\frac{{\\mathrm{d}}\\sigma}{{\\mathrm{d}}\\Omega}\\right)_{-+{\\rm LL}} &\\Nequal{P}&\n\\left(\\frac{{\\mathrm{d}}\\sigma}{{\\mathrm{d}}\\Omega}\\right)_{+-{\\rm LL}}\\; ,\n\\nonumber \\\\\n\\left(\\frac{{\\mathrm{d}}\\sigma}{{\\mathrm{d}}\\Omega}\\right)_{-+{\\rm (LT+TL)}} &\\Nequal{P}&\n\\left(\\frac{{\\mathrm{d}}\\sigma}{{\\mathrm{d}}\\Omega}\\right)_{+-{\\rm (LT+TL)}}\\; .\n\\label{eq:Pds}\n\\end{eqnarray}\nIn combination with (\\ref{eq:Bds}) this means that the forward-backward\nasymmetries of the differential cross-sections\\ for unequal photon helicities \nare entirely due to fermionic corrections.\n\nWe perform the calculation both in 't~Hooft--Feynman (tHF) gauge and in a\ngauge with the following non-linear (NL) gauge-fixing term \\cite{gavela}\n\\begin{eqnarray}\n{\\cal L}_{\\rm GF} &=& -\\left| \\partial^\\mu W^+_\\mu + i e\n (A^\\mu - \\frac{\\mathswitch {c_\\PW}}{\\mathswitch {s_\\PW}} Z^\\mu) W^+_\\mu \n -i \\mathswitch {M_\\PW} \\phi^+ \\right|^2\n\\nonumber \\\\\n & &{}- \\frac{1}{2} (\\partial^\\mu Z_\\mu - \\mathswitch {M_\\PZ} \\chi)^2\n - \\frac{1}{2} (\\partial^\\mu A_\\mu)^2 \\;,\n\\label{nl}\n\\end{eqnarray}\nwith the conventions of \\citere{ad&mex} for the fields.\nIn particular, $\\phi^\\pm$ and $\\chi$ denote the charged and neutral \nwould-be-Goldstone fields, respectively.\nIn this NL gauge the $\\phi^\\pm W^\\mp A$ vertices\nvanish. This reduces the number of Feynman graphs in comparison to the \ntHF gauge considerably.\n\n\\section{Lowest-order cross-section}\n\\label{se:born}\nIn NL gauge, only the three diagrams of \\reffi{fi:borndia} \ncontribute to the lowest-order amplitude. 
\n\\begin{figure}\n\\setlength{\\unitlength}{1mm}\n\\begin{picture}(160,35)\n\\put(-13,-154){\\special{psfile=aawwborn.nlgauge.ps hscale=90 vscale=90}}\n\\end{picture}\n\\caption{Lowest-order diagrams for $\\gamma\\gamma\\to\\PWp\\PWm$ in NL gauge}\n\\label{fi:borndia}\n\\end{figure}\nIn tHF gauge two additional diagrams exist which \ninvolve internal $\\phi$ fields.\nEvaluation of the tree diagrams in either gauge yields the Born amplitude\n\\begin{equation}\n{\\cal{M}}_{\\mathrm{Born}} = 8\\pi\\alpha\\Biggl\\{\n\\frac{s}{\\mathswitch {M_\\PW}^2-t}{\\cal{M}}_{0,t} + \\frac{s}{\\mathswitch {M_\\PW}^2-u}{\\cal{M}}_{0,u} - {\\cal{M}}_{0000} \n\\Biggr\\},\n\\end{equation}\nwhere\n\\begin{eqnarray}\n{\\cal{M}}_{0,t} &=& \n2 {\\cal{M}}_{0012} + 2 {\\cal{M}}_{3400} - 2 {\\cal{M}}_{0401} - 2 {\\cal{M}}_{3020} + 2 {\\cal{M}}_{0410} \n+ 2 {\\cal{M}}_{3002} + {\\cal{M}}_{0000}^{(t)}, \\nonumber\\\\\n{\\cal{M}}_{0,u} &=& \n2 {\\cal{M}}_{0021} + 2 {\\cal{M}}_{4300} - 2 {\\cal{M}}_{0310} - 2 {\\cal{M}}_{4002} + 2 {\\cal{M}}_{0301} \n+ 2 {\\cal{M}}_{4020} + {\\cal{M}}_{0000}^{(u)}.\n\\end{eqnarray}\nThe lowest-order matrix element vanishes for the helicities \n$(\\lambda_1,\\lambda_2,\\lambda_3,\\lambda_4) = \n(\\pm,\\pm,0,\\pm)$, $(\\pm,\\pm,\\pm,0)$, $(\\pm,\\pm,0,\\mp)$, \n$(\\pm,\\pm,\\mp,0)$, $(\\pm,\\pm,\\pm,\\mp)$, $(\\pm,\\pm,\\mp,\\pm)$.\n\nThe differential Born cross-section\\ \\cite{Ye91,Be92} is obtained as\n\\begin{eqnarray}\n\\left(\\frac{{\\mathrm{d}}\\sigma}{{\\mathrm{d}}\\Omega}\\right)^{\\mathrm{Born}} &=&\n\\frac{\\beta}{64 \\pi^2 s}\n\\sum_{\\lambda_1 \\lambda_2 \\lambda_3 \\lambda_4}\n\\frac{1}{4} \\left(1+ \\lambda_1 P^\\gamma_1\\right)\n \\left(1+ \\lambda_2 P^\\gamma_2\\right)\n\\left| {\\cal M}_{{\\mathrm{Born}}} \\right|^2 \\; ,\n\\end{eqnarray} \nwhere $P^\\gamma_{1,2}$ denote the degrees of photon-beam polarization and\nthe sum over $\\lambda_3$, $\\lambda_4$ include the desired W polarizations.\n\nWe list the differential cross-sections\\ for several helicity configurations:\n\\begin{eqnarray}\n\\left(\\frac{{\\mathrm{d}}\\sigma}{{\\mathrm{d}}\\Omega}\\right)^{\\mathrm{Born}}_{\\pm\\pm{\\mathrm{T}}\\rT} &=& \n\\frac{\\alpha^2 \\beta s (2 \\mathswitch {M_\\PW}^4 -4 \\mathswitch {M_\\PW}^2 s +s^2)}{(\\mathswitch {M_\\PW}^2 -t)^2(\\mathswitch {M_\\PW}^2 -u)^2} \\; ,\n\\nonumber \\\\\n\\left(\\frac{{\\mathrm{d}}\\sigma}{{\\mathrm{d}}\\Omega}\\right)^{\\mathrm{Born}}_{\\pm\\pm{\\mathrm{L}}\\rL} &=& \n \\frac{\\alpha^2 \\beta \\mathswitch {M_\\PW}^4 s}{(\\mathswitch {M_\\PW}^2 -t)^2(\\mathswitch {M_\\PW}^2 -u)^2} \\; ,\n\\nonumber \\\\\n\\left(\\frac{{\\mathrm{d}}\\sigma}{{\\mathrm{d}}\\Omega}\\right)^{\\mathrm{Born}}_{\\pm\\pm({\\mathrm{L}}{\\mathrm{T}}+{\\mathrm{T}}{\\mathrm{L}})} \n&=& 0 \\; , \\nonumber\\\\\n\\left(\\frac{{\\mathrm{d}}\\sigma}{{\\mathrm{d}}\\Omega}\\right)^{\\mathrm{Born}}_{\\pm\\mp{\\mathrm{T}}\\rT} &=& \n \\frac{\\alpha^2 \\beta s^3}{(\\mathswitch {M_\\PW}^2 -t)^2(\\mathswitch {M_\\PW}^2 -u)^2} \n \\biggl\\{2 \\frac{(16 \\mathswitch {M_\\PW}^4 +s^2) (ut-\\mathswitch {M_\\PW}^4)^2}{s^6\\beta^4}\n +\\frac{(t-u)^2}{s^2\\beta^2} \\biggr\\} \\; , \\nonumber\\\\ \n\\left(\\frac{{\\mathrm{d}}\\sigma}{{\\mathrm{d}}\\Omega}\\right)^{\\mathrm{Born}}_{\\pm\\mp{\\mathrm{L}}\\rL} &=& \n \\frac{\\alpha^2 (4 \\mathswitch {M_\\PW}^2+s)^2 (\\mathswitch {M_\\PW}^4 - t u)^2}\n {\\beta^3 s^3 (\\mathswitch {M_\\PW}^2 -t)^2(\\mathswitch {M_\\PW}^2 -u)^2} \\; ,\n\\nonumber 
\\\\\n\\left(\\frac{{\\mathrm{d}}\\sigma}{{\\mathrm{d}}\\Omega}\\right)^{\\mathrm{Born}}_{\\pm\\mp({\\mathrm{L}}{\\mathrm{T}}+{\\mathrm{T}}{\\mathrm{L}})} &=& \n \\frac{16 \\alpha^2 \\mathswitch {M_\\PW}^2 (2 \\mathswitch {M_\\PW}^4 -t^2-u^2) (\\mathswitch {M_\\PW}^4 - t u)}{\n \\beta^3 s^2 (\\mathswitch {M_\\PW}^2 -t)^2(\\mathswitch {M_\\PW}^2 -u)^2} \\; .\n\\end{eqnarray}\nThese results can be reconstructed from equation (5) in \\citere{Be92}\nor from equation (4.5) in \\citere{Ye91}.\n\nAdding up the single contributions,\nwe get for the unpolarized differential cross-section\n\\footnote{The second term in equation (6) of \\citere{Be92} should be \nmultiplied by 2.}\n\\begin{eqnarray}\n\\left(\\frac{{\\mathrm{d}}\\sigma}{{\\mathrm{d}}\\Omega}\\right)^{\\mathrm{Born}}_{\\mathrm{unpol}} &=&\n \\frac{3 \\alpha^2 \\beta}{2s} \n\\Biggl\\{1 - \\frac{2s^2}{(\\mathswitch {M_\\PW}^2 -t)(\\mathswitch {M_\\PW}^2 -u)} \\left(\n \\frac{2}{3} + \\frac{\\mathswitch {M_\\PW}^2}{s} \\right) \n\\nonumber \\\\ \n& &{}+ \\frac{2s^4}{(\\mathswitch {M_\\PW}^2 -t)^2(\\mathswitch {M_\\PW}^2 -u)^2} \\left(\n \\frac{1}{3} + \\frac{\\mathswitch {M_\\PW}^4}{s^2} \\right)\n \\Biggr\\} \\; .\n\\label{unpol}\n\\end{eqnarray}\n\nIntegration over $\\theta_{\\mathrm{cut}} \\leq \\theta \\leq \\pi - \\theta_{\\mathrm{cut}}$\nyields:\n\\begin{eqnarray}\n\\sigma^{\\mathrm{Born}}_{\\pm\\pm{\\mathrm{T}}\\rT} &=& \\frac{16 \\pi \\alpha^2}{s}\\, \n\\frac{s^2 - 4 \\mathswitch {M_\\PW}^2s + 2\\mathswitch {M_\\PW}^4}{s^2} \\left\\{\n\\log\\left(\\frac{1 + \\beta \\cos\\theta_{\\mathrm{cut}}}{1 - \\beta\\cos\\theta_{\\mathrm{cut}}}\\right)\n + \\frac{2\\beta \\cos\\theta_{\\mathrm{cut}}}{1 - \\beta^2 \\cos^2\\theta_{\\mathrm{cut}}}\n \\right\\} \\; ,\n\\nonumber \\\\\n\\sigma^{\\mathrm{Born}}_{\\pm\\pm{\\mathrm{L}}\\rL} &=& \\frac{16 \\pi \\alpha^2}{s}\\frac{\\mathswitch {M_\\PW}^4}{s^2} \n\\left\\{\n\\log\\left(\\frac{1 + \\beta\\cos\\theta_{\\mathrm{cut}}}{1 - \\beta\\cos\\theta_{\\mathrm{cut}}}\\right)\n + \\frac{2\\beta \\cos\\theta_{\\mathrm{cut}}}{1 - \\beta^2 \\cos^2\\theta_{\\mathrm{cut}}}\n \\right\\} \\; ,\n\\nonumber \\\\\n\\sigma^{\\mathrm{Born}}_{\\pm\\mp{\\mathrm{T}}\\rT} &=& \\frac{8 \\pi \\alpha^2 }{s \\beta^4} \\Biggl\\{\n \\frac{s^2 + 16\\mathswitch {M_\\PW}^4}{s^2} \\beta \\cos\\theta_{\\mathrm{cut}}\\nonumber\\\\\n& & {} -2\\frac{s^4 - 2 \\mathswitch {M_\\PW}^2 s^3 - 2 \\mathswitch {M_\\PW}^4 s^2 + 32 \\mathswitch {M_\\PW}^6 s - 32 \\mathswitch {M_\\PW}^8} {s^4}\n\\log\\left(\\frac{1 + \\beta \\cos\\theta_{\\mathrm{cut}}}{1 - \\beta\\cos\\theta_{\\mathrm{cut}}}\\right) \n\\nonumber\\\\\n& &{}+ 4 \\frac{s^4 - 4 \\mathswitch {M_\\PW}^2 s^3 + 2 \\mathswitch {M_\\PW}^4 s^2 + 32 \\mathswitch {M_\\PW}^8}{s^4}\\,\n \\frac{\\beta \\cos\\theta_{{\\mathrm{cut}}}} {1 - \\beta^2 \\cos^2\\theta_{{\\mathrm{cut}}}} \n\\Biggr\\} \\; , \\nonumber\\\\\n\\sigma^{\\mathrm{Born}}_{\\pm\\mp{\\mathrm{L}}\\rL} &=& \\frac{4 \\pi \\alpha^2}{s\\beta^4}\n\\frac{ (4 \\mathswitch {M_\\PW}^2 + s)^2}{s^2}\n\\Biggl\\{\\beta \\cos\\theta_{{\\mathrm{cut}}}\n- 4 \\frac{\\mathswitch {M_\\PW}^2 (s-\\mathswitch {M_\\PW}^2)}{s^2}\n\\log\\left(\\frac{1 + \\beta\\cos\\theta_{\\mathrm{cut}}}{1 - \\beta\\cos\\theta_{\\mathrm{cut}}}\\right) \n\\nonumber\\\\\n& &{} + \\frac{8\\mathswitch {M_\\PW}^4}{s^2}\n\\frac{\\beta \\cos\\theta_{{\\mathrm{cut}}}}{1 - \\beta^2 \\cos^2\\theta_{{\\mathrm{cut}}}} \n \\Biggr\\} \\; ,\n\\nonumber \\\\\n\\sigma^{\\mathrm{Born}}_{\\pm\\mp({\\mathrm{L}}{\\mathrm{T}}+{\\mathrm{T}}{\\mathrm{L}})} &=& \\frac{128 \\pi 
\\alpha^2}{s\\beta^4}\n\\frac{\\mathswitch {M_\\PW}^2}{s}\n\\Biggl\\{\n- \\beta \\cos\\theta_{{\\mathrm{cut}}}\n+ \\frac{s^2 - 2 \\mathswitch {M_\\PW}^2s + 4\\mathswitch {M_\\PW}^4}{s^2} \n\\log\\left(\\frac{1 + \\beta\\cos\\theta_{{\\mathrm{cut}}}} {1 - \\beta\\cos\\theta_{{\\mathrm{cut}}}}\n\\right) \\nonumber\\\\\n& &{}\n- \\frac{4 \\mathswitch {M_\\PW}^2 (s - 2\\mathswitch {M_\\PW}^2)}{s^2} \\,\n\\frac{\\beta \\cos\\theta_{{\\mathrm{cut}}}}{1 - \\beta^2 \\cos^2\\theta_{\\mathrm{cut}}} \n \\Biggr\\} \\; ,\n\\end{eqnarray}\nand for the unpolarized cross-section\\\n\\begin{eqnarray}\n\\sigma^{\\mathrm{Born}}_{{\\mathrm{unpol}}} &=& \\frac{6\\pi \\alpha^2 }{s}\n\\Biggl\\{\n\\beta \\cos\\theta_{{\\mathrm{cut}}} - \n4 \\frac{\\mathswitch {M_\\PW}^2}{s}\\left(1 - \\frac{2\\mathswitch {M_\\PW}^2}{s}\\right)\n\\log\\left(\\frac{1 + \\beta \\cos\\theta_{{\\mathrm{cut}}}}\n{1 - \\beta \\cos\\theta_{{\\mathrm{cut}}}}\\right)\n\\nonumber \\\\\n& &{}+ \\left(\\frac{1}{3} + \\frac{\\mathswitch {M_\\PW}^4}{s^2} \\right)\n\\frac{16 \\beta \\cos\\theta_{\\mathrm{cut}}}{1 - \\beta^2\\cos^2\\theta_{\\mathrm{cut}}} \\Biggr\\} \\;.\n\\end{eqnarray}\n\nIn \\reffis{fi:intcs10} and \\ref{fi:intcs20} \nwe show the lowest-order cross-sections\\ for various polarizations\nand two different angular cuts $\\theta_{\\mathrm{cut}} = 10^\\circ, 20^\\circ$.\nFor $\\theta_{{\\mathrm{cut}}} = 0$, the cross-sections\\ for\ntransverse W~bosons approach a constant at high energies, $s \\gg \\mathswitch {M_\\PW}^2$, \nowing to the massive $t$-channel exchange \n\\begin{equation}\n\\sigma^{\\mathrm{Born}}_{\\pm\\pm{\\mathrm{T}}\\rT}, \\sigma^{\\mathrm{Born}}_{\\pm\\mp{\\mathrm{T}}\\rT}\n\\;\\Nlim{s\\rightarrow\\infty}\\;\n\\frac{8 \\pi \\alpha^2}{\\mathswitch {M_\\PW}^2} = 80.8 \\unskip\\,\\mathrm{pb}.\n\\end{equation}\nFor a finite cut, $\\sigma^{\\mathrm{Born}}_{\\pm\\pm{\\mathrm{T}}\\rT}$ and\n$\\sigma^{\\mathrm{Born}}_{\\pm\\mp{\\mathrm{T}}\\rT}$ behave as $1\/s$ for large $s$.\nThe cross-sections\\ $\\sigma^{\\mathrm{Born}}_{\\pm\\mp{\\mathrm{L}}\\rL}$ and\n$\\sigma^{\\mathrm{Born}}_{\\pm\\mp({\\mathrm{T}}{\\mathrm{L}}+{\\mathrm{L}}{\\mathrm{T}})}$ are\nproportional to $1\/s$ and $1\/s^2$, respectively, independently\n of the cut-off.\nThe cross-section\\ $\\sigma^{\\mathrm{Born}}_{\\pm\\pm{\\mathrm{L}}\\rL}$ goes \nlike $1\/s^2$ at high energies for $\\theta_{\\mathrm{cut}} = 0$\nand like $1\/s^3$ for a finite cut-off. 
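The closed-form expressions above are also convenient for quick numerical checks. As an illustration, the following Python lines evaluate $\\sigma^{\\mathrm{Born}}_{\\mathrm{unpol}}$ for the assumed inputs $\\mathswitch {M_\\PW} = 80.22\\unskip\\,\\mathrm{GeV}$ and $\\alpha = 1\/137.036$ (the precise input values are not specified in this section) and reproduce the unpolarized entries of \\refta{table_born} to better than one per cent:
\\begin{verbatim}
import numpy as np

GEV2_TO_PB = 3.894e8             # (hbar c)^2 in pb GeV^2
ALPHA, MW  = 1/137.036, 80.22    # assumed inputs

def sigma_born_unpol(sqrt_s, theta_cut_deg):
    """Unpolarized lowest-order cross-section in pb for an angular cut."""
    s = sqrt_s**2
    beta = np.sqrt(1.0 - 4.0*MW**2/s)
    c = beta*np.cos(np.radians(theta_cut_deg))   # c = beta cos(theta_cut)
    log_term = np.log((1.0 + c)/(1.0 - c))
    curly = (c - 4.0*MW**2/s*(1.0 - 2.0*MW**2/s)*log_term
             + (1.0/3.0 + MW**4/s**2)*16.0*c/(1.0 - c**2))
    return 6.0*np.pi*ALPHA**2/s*curly*GEV2_TO_PB

for sqrt_s in (500.0, 1000.0, 2000.0):
    print(sqrt_s, sigma_born_unpol(sqrt_s, 0.0), sigma_born_unpol(sqrt_s, 20.0))
\\end{verbatim}
The smallest entry by far in \\refta{table_born} is $\\sigma^{\\mathrm{Born}}_{\\pm\\pm{\\mathrm{L}}\\rL}$.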
\nIt is suppressed by about a factor of $10^3$ at $E_{\\mathrm{CMS}} = 500$ GeV.\nNote that the latter cross-section\\ can be\nenhanced drastically by non-standard physics \\cite{HH}.\nAt high energies, the unpolarized cross-section\\ $\\sigma^{\\mathrm{Born}}_{\\mathrm{unpol}}$\nis dominated by transverse \\mathswitchr W~bosons,\nand all polarized cross-sections\\ involving two transverse \\mathswitchr W~bosons are of the\nsame order-of-magnitude.\nClose to threshold the differential and integrated \ncross-sections\\ for all polarization configurations vanish like $\\beta$.\nNumerical values for the lowest-order cross-sections\\ can be found in \n\\refta{table_born}.\n\\begin{table}\n\\footnotesize\n\\begin{center}\n\\arraycolsep 6pt\n$$\\begin{array}{|c|c||c|c|c|c|c|c|}\n\\hline\n\\sqrt{s}\/\\mathrm{GeV}\n & \\theta &\n{\\mathrm{unpol}} & {{\\pm\\pm}\\mathrm{TT}} & {{\\pm\\pm}\\mathrm{LL}} &\n{{\\pm\\mp}\\mathrm{TT}} & {{\\pm\\mp}\\mathrm{LL}} & {{\\pm\\mp}\\mathrm{(LT+TL)}} \\\\\n\\hline\\hline\n\\phantom{0}500 & \\phantom{0}0^\\circ<\\theta<180^\\circ & \n77.6 & 82.2 & 6.10\\times 10^{-2} & 70.2 & 9.99 \\times 10^{-1} & \n1.69 \\phantom{{}\\times 10^{-1}} \\\\\n\\cline{2-8}\n& 20^\\circ<\\theta<160^\\circ & \n36.7 & 42.7 & 3.17\\times 10^{-2}& 28.2 & 9.89\\times 10^{-1}& \n1.49\\phantom{{}\\times 10^{-1}} \\\\\n\\hline\\hline\n1000 & \\phantom{0}0^\\circ<\\theta<180^\\circ & \n80.1 & 82.8 & 3.54 \\times 10^{-3} & 76.9 & 2.52\\times 10^{-1} & \n1.70\\times 10^{-1}\\\\\n\\cline{2-8}\n& 20^\\circ<\\theta<160^\\circ & \n14.2 & 16.8 &7.18 \\times 10^{-4}& 11.2 & 2.44\\times 10^{-1} & \n1.21\\times 10^{-1}\\\\\n\\hline\\hline\n2000 & \\phantom{0}0^\\circ<\\theta<180^\\circ & \n80.6 & 81.6 & 2.14\\times 10^{-4}& 79.5 & 6.41\\times 10^{-2}& \n1.50\\times 10^{-2}\\\\\n\\cline{2-8}\n& 20^\\circ<\\theta<160^\\circ & \n4.07& 4.84 & 1.27\\times 10^{-5}& 3.23 & 6.11\\times 10^{-2}& \n8.26\\times 10^{-3} \\\\\n\\hline\n\\end{array}$$\n\\caption{Lowest-order integrated cross-sections\\ in pb for several polarizations}\n\\label{table_born}\n\\end{center}\n\\end{table}\n\nFigures \\ref{fi:difcspp} and \\ref{fi:difcsmm} show \nthe angular distributions of the differential\n lowest-order cross-sections\\ for various polarizations at \n $E_{\\mathrm{CMS}} = 500$, $1000$ and $2000$ GeV.\nThe cross-sections\\ involving transverse W bosons are characterized by the \n$t$- and $u$-channel poles in the forward and backward directions,\nrespectively. With increasing energy they increase in the very forward and\nbackward direction proportional to $s$ but decrease in the central angular\nregion proportional to $1\/s$.\nThe respective behavior of \n$({{{\\mathrm{d}}}\\sigma}\/{{{\\mathrm{d}}}\\Omega})^{\\mathrm{Born}}_{\\pm\\pm{\\mathrm{L}}\\rL}$\nis $1\/s$ and $1\/s^3$.\nThe cross-sections\\ $({{{\\mathrm{d}}}\\sigma}\/{{{\\mathrm{d}}}\\Omega})^{\\mathrm{Born}}_{\\pm\\mp{{\\mathrm{L}}\\rL}}$\nand $({{{\\mathrm{d}}}\\sigma}\/{{{\\mathrm{d}}}\\Omega})^{\\mathrm{Born}}_{\\pm\\mp{({\\mathrm{L}}{\\mathrm{T}}+{\\mathrm{T}}{\\mathrm{L}})}}$\nvanish in the forward and backward direction. 
\nWhile $({{{\\mathrm{d}}}\\sigma}\/{{{\\mathrm{d}}}\\Omega})^{\\mathrm{Born}}_{\\pm\\mp{{\\mathrm{L}}\\rL}}$\nreach their maxima at $90^\\circ$ and decrease proportional to $1\/s$\nfor all angles,\n$({{{\\mathrm{d}}}\\sigma}\/{{{\\mathrm{d}}}\\Omega})^{\\mathrm{Born}}_{\\pm\\mp{({\\mathrm{L}}{\\mathrm{T}}+{\\mathrm{T}}{\\mathrm{L}})}}$\npossess maxima at $|\\!\\cos\\theta| = \\beta$ decreasing\nproportional to $1\/s$ and relative minima at\n$\\theta = 90^\\circ$ decreasing proportional to $1\/s^2$.\n \n\\section{Radiative corrections}\n\\label{se:RC}\n\n\\subsection{Non-linear gauge fixing}\n\nWe have performed the calculation of the radiative corrections\nin tHF gauge\nand a NL gauge with the gauge-fixing term given in equation\n\\refeq{nl} applying the complete on-shell renormalization\nscheme in both cases \\cite{ad&mex}.\nAs pointed out in Sect.~2 the $\\phi^\\pm W^\\mp A$ vertices\nvanish in NL gauge. As a consequence \nthe $\\phi$ self-energy\\ and the $\\phi W$ mixing energy do not contribute, and \nthe number of vertex and box diagrams is reduced from 441 in \ntHF gauge to 268 in NL gauge (for one fermion generation).\n\nFurthermore, the analytical expressions for the\n$W^\\pm \\overline{u}^\\pm u^{A,Z}$\nvertices (with $\\bar u$, $u$ denoting the Fadeev--Popov ghost fields)\nare proportional to the W-boson momentum\nin NL gauge and thus vanish for on-shell W~bosons.\nFor this reason most of the box\nand vertex diagrams with internal ghost fields vanish.\nAs the corrections to the $AAA$ and $AAZ$ vertices vanish in both gauges,\nthe number of non-vanishing vertex and box diagrams reduces to 365\nin tHF gauge and to 168 in NL gauge.\nMoreover, many diagrams have a simpler structure in NL gauge.\n\nIn order to determine the counterterms\\ necessary for renormalization\none has to calculate the self-energies. 
Here we list the differences\n$\\Delta\\Sigma = \\Sigma^{\\mathrm{NL}} -\\Sigma^{\\mathrm{tHF}}$ between the self-energies\\ in\nNL gauge and the ones in tHF gauge.\nThe transverse parts of the latter can be found e.g.~in Ref.~\\cite{ad&mex}.\nFor the transverse part of the W self-energy\\ we find\n\\begin{equation}\n\\Delta\\Sigma^{WW}_{{\\mathrm{T}}} = \n \\frac{\\alpha}{2 \\pi} \\bigl(k^2 - \\mathswitch {M_\\PW}^2\\bigr)\n \\biggl[B_0\\bigl(k^2,0,\\mathswitch {M_\\PW}\\bigr) + \n \\frac{\\mathswitch {c_\\PW}^2}{\\mathswitch {s_\\PW}^2} B_0\\bigl(k^2,\\mathswitch {M_\\PZ},\\mathswitch {M_\\PW}\\bigr)\n \\biggr] \\; ,\n\\end{equation}\nand for its longitudinal part\n\\begin{eqnarray}\n\\Delta\\Sigma^{WW}_{{\\mathrm{L}}} &=& \n- \\frac{\\alpha}{4 \\pi} \\biggl\\{\n \\frac{\\mathswitch {c_\\PW}^2}{\\mathswitch {s_\\PW}^2} \\bigl(5 k^2 + 5 \\mathswitch {M_\\PZ}^2 - 3 \\mathswitch {M_\\PW}^2\\bigr) \n B_0\\bigl(k^2,\\mathswitch {M_\\PW},\\mathswitch {M_\\PZ}\\bigr) +\n \\bigl(5 k^2 - 3 \\mathswitch {M_\\PW}^2\\bigr) B_0\\bigl(k^2,\\mathswitch {M_\\PW},0\\bigr) \n\\nonumber \\\\\n&&\\phantom{\\frac{- \\alpha}{4 \\pi} \\biggl\\{}\n+ \\frac{5}{\\mathswitch {s_\\PW}^2}\\mathswitch {M_\\PW}^2 \\bigl[\n B_0\\bigl(0,0,\\mathswitch {M_\\PW}\\bigr) - B_0\\bigl(0,0,\\mathswitch {M_\\PZ}\\bigr)\\bigr]\n - \\frac{2}{\\mathswitch {s_\\PW}^2}k^2\n \\biggr\\}\\;,\n\\end{eqnarray}\nwhere $B_0$ is the scalar one-loop two-point function \\cite{ad&mex,scalar}.\nThe differences for the self-energies\\ involving neutral gauge bosons can be given \nin a compact way\n\\begin{eqnarray}\n\\Delta\\Sigma^{BB'}_{{\\mathrm{T}}} &=& \n- \\frac{\\alpha}{2 \\pi} f^{ B B'} \n \\bigl(2 k^2 - M^2_{ B} - M^2_{ B'}\\bigr)\n B_0\\bigl(k^2,\\mathswitch {M_\\PW},\\mathswitch {M_\\PW}\\bigr) \\; ,\n\\\\\n\\Delta\\Sigma^{BB'}_{{\\mathrm{L}}} &=&\n\\frac{\\alpha}{2 \\pi} f^{B B'}\n \\bigl(M^2_{B} + M^2_{B'}\\bigr)\n B_0\\bigl(k^2,\\mathswitch {M_\\PW},\\mathswitch {M_\\PW}\\bigr) \\; ,\n\\end{eqnarray}\nwith ${B}^{(\\prime)} = A,Z$ and\n\\begin{equation}\nf^{A A} = 1, \\qquad\nf^{A Z} = -\\frac{\\mathswitch {c_\\PW}}{\\mathswitch {s_\\PW}}, \\qquad\nf^{ZZ} = \\frac{\\mathswitch {c_\\PW}^2}{\\mathswitch {s_\\PW}^2} \\;.\n\\end{equation}\n\nNote that the differences for the transverse parts of the W, Z, and $A$ \nself-energies\\ are proportional to $(k^2 - M_{\\mathswitchr W,\\mathswitchr Z}^2)$ and $k^2$,\nrespectively.\n\n\\subsection{Inventory of ${\\cal O}(\\alpha)$ corrections}\n\nIn the following we list the virtual corrections,\ni.e.\\ the contributions to $\\delta\\cal M$, in NL gauge.\nWe adopt the conventions of\nRef.~\\cite{ad&mex},\nwhere the necessary explicit results for the transverse parts of the \nself-energies\\ and the renormalization constants can be found.\nBecause of the length of our results we do not explicitly write down the\nanalytic expressions.\n\nOwing to our renormalization scheme,\nwe have to deal with self-energy\\ insertions only into the internal\nlines of the tree diagrams of Fig.{} 1.\nThese result in the following contribution to the invariant matrix\nelement\n\\begin{eqnarray}\n\\delta {{\\cal{M}}}_{\\mathrm{self}} &=& 4\\pi\\alpha\\Biggl\\{\n\\frac{2s}{(\\mathswitch {M_\\PW}^2-t)^2} \\Sigma^{WW}_{{\\mathrm{T}}}(t) {\\cal{M}}_{0,t} \n+ \\frac{2s}{(\\mathswitch {M_\\PW}^2-u)^2} \\Sigma^{WW}_{{\\mathrm{T}}}(u){\\cal{M}}_{0,u} \\\\\n&& \\phantom{4\\pi\\alpha\\biggl(}\n{}+\\frac{1}{t} \n\\Bigl(\\Sigma^{WW}_{{\\mathrm{T}}}(t) - \\Sigma^{WW}_{{\\mathrm{L}}}(t)\\Bigr){\\cal{M}}_{0000}^{(t)} \n+\\frac{1}{u} 
\n\\Bigl(\\Sigma^{WW}_{{\\mathrm{T}}}(u) - \\Sigma^{WW}_{{\\mathrm{L}}}(u)\\Bigr){\\cal{M}}_{0000}^{(u)} \n\\Biggr\\} . \\nonumber\n\\end{eqnarray}\n\n\\begin{figure}\n\\setlength{\\unitlength}{1mm}\n\\begin{picture}(160,168)\n\\put(-2,-55){\\special{psfile=aaww.aww.ps voffset=0 hoffset=0}}\n\\end{picture}\n\\caption{The $t$-channel diagrams for the upper $AWW^*$ vertex }\n\\label{fi:AWW}\n\\end{figure}\n\nFigure \\ref{fi:AWW} shows the $t$-channel graphs for the upper\n$A W W^*$ vertex (asterisks denote off-shell fields);\nthe diagrams (h), (i), (k), (l), and (n) vanish for on-shell \nexternal photons and \\mathswitchr W~bosons.\nThe diagrams for the lower $A W W^*$ vertex can be\nconstructed in an analogous way, and the $u$-channel diagrams are obtained\nvia crossing, i.e.\\\nthe interchange of the two external photons.\n\nThe $AAA^*$ and $AAZ^*$ vertex corrections vanish according\nto Yang's theorem \\cite{ya49} and because the virtual $A$ and $Z$ are \ncoupled to a conserved current.\nThus, the only $s$-channel vertex corrections that contribute to \n$\\delta{\\cal M}$ are the Higgs-resonant \n$AAH^*$-vertex graphs shown in \\reffi{fi:AAH}. For the graphs (a)--(d),\ncrossed diagrams exist as well. The $AAH^*$ corrections are discussed in the\nnext subsection.\n\nThe box diagrams are shown in \\reffis{fi:box1} and \\ref{fi:box2}.\nWhile each diagram in \\reffi{fi:box1} has a crossed partner diagram,\nthose in \\reffi{fi:box2} are symmetric under crossing.\nFor on-shell external bosons the graphs (h) and (i) of \\reffi{fi:box1}\nand the graphs (e) and (f) of \\reffi{fi:box2} vanish.\n\\begin{figure}\n\\setlength{\\unitlength}{1mm}\n\\begin{picture}(160,165)(0,0)\n\\put(-34,-113){\\special{psfile=aaww.box1.ps voffset=0 hoffset=0}}\n\\end{picture}\n\\caption{Non-crossing-symmetric box diagrams}\n\\label{fi:box1}\n\\end{figure}\n\\begin{figure}\n\\setlength{\\unitlength}{1mm}\n\\begin{picture}(160,98)(0,0)\n\\put(+3,-179){\\special{psfile=aaww.box2.ps voffset=0 hoffset=0}}\n\\end{picture}\n\\caption{Crossing-symmetric box diagrams}\n\\label{fi:box2}\n\\end{figure}\n\nThe renormalization is performed in the on-shell renormalization\nscheme. Evaluation of the counterterm\\ diagrams yields\n\\begin{eqnarray}\n\\delta{\\cal M}_{\\mathrm{counter}} &=&\n{\\cal M}_{\\mathrm{Born}} \\biggl(\n2 \\delta { Z}_{ e} + \\delta { Z}_{ W} +\n\\delta { Z}_{A A} - \\frac{\\mathswitch {c_\\PW}}{\\mathswitch {s_\\PW}} \\delta Z_{ZA} \\biggr)\n\\nonumber \\\\ && {}- 8\\pi\\alpha \n\\left(\\frac{s\\delta \\mathswitch {M_\\PW}^2}{(\\mathswitch {M_\\PW}^2-t)^2}{\\cal{M}}_{0,t} \n+ \\frac{s\\delta \\mathswitch {M_\\PW}^2}{(\\mathswitch {M_\\PW}^2-u)^2}{\\cal{M}}_{0,u} \\right) .\n\\end{eqnarray}\nIn this context we mention that the massive gauge-boson sector does not\nbreak electromagnetic gauge invariance if the NL gauge fixing\n\\refeq{nl} is applied. 
As a consequence on-shell photons do not mix with\n\\mathswitchr Z\\ bosons rendering the counterterm\\ $\\delta Z_{ZA}$ zero,\n\\begin{equation}\n\\delta Z_{ZA}=2\\frac{\\Sigma^{AZ}_{\\rm T}(0)}{\\mathswitch {M_\\PZ}^2}=0.\n\\end{equation}\nThe charge renormalization constant $\\delta Z_e$ is then given by\n\\cite{ad&mex}\n\\begin{equation}\n\\delta Z_e = -\\frac{1}{2}\\delta Z_{AA} =\n\\left.\\frac{1}{2}\\frac{\\partial\\Sigma^{AA}_{\\rm T}(k^2)}{\\partial k^2}\n\\right|_{k^2=0},\n\\label{eq:dze}\n\\end{equation}\nso that the complete counterterm\\ contribution to the matrix element ${\\cal{M}}$\nreduces to \n\\begin{eqnarray}\n\\delta{\\cal M}^{\\mathrm{NL}}_{\\mathrm{counter}} &=&\n{\\cal{M}}_{\\mathrm{Born}} \\delta { Z}_{ W} \n- 8\\pi\\alpha \n\\left(\\frac{s\\delta \\mathswitch {M_\\PW}^2}{(\\mathswitch {M_\\PW}^2-t)^2}{\\cal{M}}_{0,t} \n+ \\frac{s\\delta \\mathswitch {M_\\PW}^2}{(\\mathswitch {M_\\PW}^2-u)^2}{\\cal{M}}_{0,u} \\right) .\n\\end{eqnarray}\n\n\\subsection{Higgs resonance}\nThe Higgs resonance in $\\gamma\\gamma\\to\\PWp\\PWm$\nwas discussed extensively in the literature;\nsee e.g.\\ \\citeres{Mo94,Ve94}.\nSo we restrict ourselves to the listing of our results for the \nHiggs-resonant graphs.\n\nThe Higgs-resonant part of the process is caused by the graphs\nof \\reffi{fi:AAH}.\n\\begin{figure}\n\\setlength{\\unitlength}{1mm}\n\\begin{picture}(160,63)(0,0)\n\\put(16,-133){\\special{psfile=aaww.aah.ps voffset=00 hoffset=0}}\n\\end{picture}\n\\caption{The $AAH^*$ vertex diagrams}\n\\label{fi:AAH}\n\\end{figure}\nThese yield a contribution of the form (compare \\citere{Ve94})\n\\begin{eqnarray}\n\\delta {\\cal{M}}_{AAH^*} = \\frac{F^H(s)}{s-\\mathswitch {M_\\PH}^2}\n{\\cal{M}}_{0000} \n\\label{eq_higgs}\n\\end{eqnarray}\nwith\n\\strut\n\\begin{eqnarray}\nF^H(s) &=& -\\frac{\\alpha^2}{\\mathswitch {s_\\PW}^2} \\biggl\\{\n 6 \\mathswitch {M_\\PW}^2 + \\mathswitch {M_\\PH}^2 + \\mathswitch {M_\\PW}^2 C^{(\\mathrm{gauge})}\nC_0\\bigl(s,0,0,\\mathswitch {M_\\PW}^2,\\mathswitch {M_\\PW}^2,\\mathswitch {M_\\PW}^2\\bigr)\n\\nonumber \\\\\n&&\\phantom{\\frac{e^2}{8 \\pi \\mathswitch {s_\\PW}^2} \\biggl\\{}\n-2 \\sum_{f} N_f^c Q_{f}^2m_f^2 \\left[ 2 + (4m_f^2 -s)\n C_0\\bigl(s,0,0,m_{f}^2,m_{f}^2,m_{f}^2\\bigr) \\right] \n\\biggr\\}.\n\\label{eq:FH}\n\\end{eqnarray}\n\\strut\nThe sum in (\\ref{eq:FH}) extends over all massive fermions with charge\n\\pagebreak[2]\n$Q_f$ and color factor $N_f^c$. \nThe coefficient $C^{(\\mathrm{gauge})}$ is gauge-dependent and\nreads\n\\begin{eqnarray}\nC^{(\\mathrm{gauge})} = 12\\mathswitch {M_\\PW}^2 + 2\\mathswitch {M_\\PH}^2 - 8s\n\\end{eqnarray}\nin NL gauge and\n\\begin{eqnarray}\nC^{(\\mathrm{gauge})} = 12\\mathswitch {M_\\PW}^2 + \\mathswitch {M_\\PH}^2 - 7s\n\\end{eqnarray}\nin tHF gauge.\nNote that $\\delta {\\cal{M}}_{AAH^*}$ vanishes for opposite helicities of the\nincoming photons or outgoing \\mathswitchr W~bosons together with ${\\cal{M}}_{0000}$. \nHence the Higgs resonance is only present for \nphotons and \\mathswitchr W~bosons with equal helicities.\n\nIn the literature \\cite{Bo92,Mo94,Ve94}, the Higgs-boson width has been\nintroduced na{\\\"\\i}vely by the replacement \n\\begin{equation}\n\\frac{F^H(s)}{s-\\mathswitch {M_\\PH}^2} \\rightarrow \\frac{F^H(s)}{s-\\mathswitch {M_\\PH}^2 + i \\mathswitch {M_\\PH}\\Gamma_{\\mathswitchr H}}\n\\end{equation}\nin \\refeq{eq_higgs}.\nOwing to the gauge dependence of $F^H(s)$, this treatment \ndestroys gauge invariance. 
The violation of gauge invariance occurs at\nthe level of the non-resonant $\\mathswitch{{\\cal{O}}(\\alpha)}$ corrections, which were neglected in\n\\citeres{Bo92,Mo94,Ve94}. Since our main concern is exactly these\ncorrections, we have to preserve gauge invariance.\nTo this end we \ndecompose (\\ref{eq_higgs}) into a gauge-invariant resonant part and a \ngauge-dependent non-invariant part and introduce $\\Gamma_\\mathswitchr H$ only \nin the former. This results in the following replacement in (\\ref{eq_higgs})\n\\begin{equation}\n\\frac{F^H(s)}{s-\\mathswitch {M_\\PH}^2} \\rightarrow\n\\frac{F^H(\\mathswitch {M_\\PH}^2)}{s-\\mathswitch {M_\\PH}^2 + i \\mathswitch {M_\\PH}\\Gamma_{\\mathswitchr H}} \n+ \\frac{F^H(s) - F^H(\\mathswitch {M_\\PH}^2)}{s-\\mathswitch {M_\\PH}^2} \\;.\n\\label{eq_higgs2}\n\\end{equation}\nEquations (\\ref{eq_higgs}) and (\\ref{eq_higgs2}) yield a gauge-invariant\namplitude including the finite width in the resonant Higgs contributions.\n\nSince the resonant Higgs contributions are large for $s\\approx\\mathswitch {M_\\PH}^2$,\nwe also take the square of the resonant part of the\nmatrix element into account in the numerical analysis [compare\n\\refeq{eq:dsido}].\n\nFor a calculation with $\\mathswitch{{\\cal{O}}(\\alpha)}$ accuracy also near $s=\\mathswitch {M_\\PH}^2$, \none should take into account the $\\mathswitch{{\\cal{O}}(\\alpha)}$ corrections to the \nHiggs-boson width \\cite{Fl81} and to $F^H(\\mathswitch {M_\\PH}^2)$ in the resonant\ncontribution. Since the\nHiggs resonance is not our main concern, we only take into account the\nlowest-order decay width determined from the imaginary part of the\none-loop Higgs-boson self-energy\\ and \\refeq{eq:FH} for $F^H(\\mathswitch {M_\\PH}^2)$. \n\n\\subsection{Leading corrections}\n\\label{leadrcs}\n\nThe electroweak radiative corrections typically involve leading\ncontributions of universal origin such as the leading-logarithmic QED\ncorrections, corrections arising from the running of $\\alpha$,\ncorrections associated with large top-quark or Higgs-boson \nmasses, and the Coulomb\nsingularity at threshold for the production of a pair of charged\nparticles.\n\nWe first discuss the leading weak corrections:\n\\begin{itemize}\n\\item\nFor $\\gamma\\gamma\\to\\PWp\\PWm$, the running of $\\alpha$ is not relevant, as the {\\it two} \nexternal\nphotons are on mass shell, i.e.\\ the relevant effective coupling is the\none at zero-momentum transfer. Technically, the large logarithms present\nin the renormalization constant $\\delta Z_e$ of the electron charge are \ncanceled by the corresponding logarithms in the wave-function \nrenormalization constant $\\delta Z_{AA}$ of the external photons, as can\nbe explicitly seen in \\refeq{eq:dze}.\n\\item\nThe Higgs-mass-dependent corrections have been discussed in detail in\n\\citere{HH}. \nIn the heavy-Higgs limit, $\\mathswitch {M_\\PH}\\gg\\sqrt{s}$, no corrections involving \n$\\log(\\mathswitch {M_\\PH}\/\\mathswitch {M_\\PW})$ or $\\mathswitch {M_\\PH}^2\/\\mathswitch {M_\\PW}^2$ arise. Consequently, the Higgs-mass\ndependence is small. However, for $\\sqrt{s}\\gg\\mathswitch {M_\\PH}\\gg\\mathswitch {M_\\PW}$ corrections\nproportional to $\\mathswitch {M_\\PH}^2\/\\mathswitch {M_\\PW}^2$ appear for the cross-sections\\ involving\nlongitudinal gauge bosons as a remnant of the unitarity cancellations\n(compare \\citere{eeWWhe}). 
These give rise to large effects in\nparticular for $\\sigma_{\\pm\\pm{\\mathrm{L}}\\rL}$.\n\\item\nThe situation is similar for the top-dependent corrections.\nAs the lowest-order matrix element is independent of the weak mixing \nangle, no universal corrections proportional to $\\mathswitch {m_\\Pt}^2\/\\mathswitch {M_\\PW}^2$ arise from\nrenormalization. It can be easily derived by power counting that such\nterms do also not result from loop diagrams in the heavy top limit \n$\\mathswitch {m_\\Pt}\\gg\\sqrt{s}$. A more accurate analysis\nreveals that even no terms involving $\\log(\\mathswitch {m_\\Pt}\/\\mathswitch {M_\\PW})$ occur in this limit. \nOn the other hand, for $\\sqrt{s}\\gg\\mathswitch {m_\\Pt}\\gg\\mathswitch {M_\\PW}$ corrections\nproportional to $\\mathswitch {m_\\Pt}^2\/\\mathswitch {M_\\PW}^2$ appear for longitudinal gauge bosons \n(compare \\citere{eeWWhe}).\n\\end{itemize}\nAll these statements hold in the on-shell renormalization scheme with\n$\\alpha$, \\mathswitch {M_\\PW}\\ and \\mathswitch {M_\\PZ}\\ as input parameters. \nIf the \\mathswitch {M_\\PW}\\ mass is determined from \\mathswitch {G_\\mu}, \ncorrections involving logarithms of\nthe light fermion masses, \\mathswitch {m_\\Pt}, and \\mathswitch {M_\\PH}\\ occur together with the\nuniversal corrections proportional to $\\mathswitch {m_\\Pt}^2\/\\mathswitch {M_\\PW}^2$ associated with the\n$\\rho$ parameter.\n\nThe leading corrections of electromagnetic origin are independent of \nthe renormalization scheme and the input parameters:\n\\begin{itemize}\n\\item\nAs $\\gamma\\gamma\\to\\PWp\\PWm$ involves no light charged external particles,\nno large logarithmic corrections associated with collinear\nphotons show up apart from the region of very high energies, $s\\gg\\mathswitch {M_\\PW}^2$.\nAs a consequence, the photonic corrections are not enhanced with respect\nto the weak corrections.\n\\item\nClose to threshold, the Coulomb singularity gives rise to large effects \nas in any pair-production process of charged particles. \nThese effects can be extracted on general grounds or directly\nfrom the Feynman diagrams. To this end one has to consider all diagrams\nresulting from the lowest-order diagrams with an additional photon\nexchanged between the final state \\mathswitchr W~bosons (\\reffi{fi:coul}). \nIn the limit $\\beta\\ll 1$ one obtains:\n\\begin{equation}\n\\delta\\sigma^{\\mathrm{Coul.}} = \\frac{\\alpha\\pi}{2\\beta} \\sigma^{\\mathrm{Born}}.\n\\label{eq:coul}\n\\end{equation}\nThe $\\beta^{-1}$ correction factor in \\refeq{eq:coul} to the Born\ncross-section\\ near threshold is typical for the pair production of\nstable (on-shell) particles. 
The generalization to unstable (off-shell)\nparticles can be found in the literature \\cite{coul}.\n\\end{itemize}\n\n\\begin{figure}\n\\setlength{\\unitlength}{1mm}\n\\begin{picture}(160,30)(0,0)\n\\put(14,-188){\\special{psfile=aaww.coulsing.ps voffset=0 hoffset=0}}\n\\end{picture}\n\\caption{The diagrams that contribute to the Coulomb singularity in\nNL gauge}\n\\label{fi:coul}\n\\end{figure}\n\nAt high energies, $s\\gg\\mathswitch {M_\\PW}^2$, the radiative corrections are dominated \nby terms like\n$(\\alpha\/\\pi)\\log^2(s\/\\mathswitch {M_\\PW}^2)$, which arise from vertex and box diagrams\n(compare \\citere{eeWWhe}).\nAt $1\\unskip\\,\\mathrm{TeV}$ these are about 10\\%, setting the scale for the (weak)\nradiative corrections at this energy.\n\n\\subsection{Structure of the final result}\n\nFor a consistent treatment of the virtual one-loop radiative corrections,\nthe squared transition matrix element has to be expanded as a power series\nin the coupling constant $\\alpha$,\n\\begin{equation}\n|{\\cal M}|^2 = |{\\cal M}_{\\mathrm{Born}}|^2 \n+ 2\\mathop{\\mathrm{Re}}\\nolimits\\{\\delta {\\cal M} {\\cal M}^*_{\\mathrm{Born}}\\} + \\hbox{higher orders}.\n\\end{equation}\nThe ${\\cal {O}}(\\alpha)$ correction $\\delta {\\cal M}$ to the matrix element\n${\\cal M}$ is decomposed as in (\\ref{decom}).\nWe do not consider those polarization configurations for which the\nlowest-order matrix element vanishes.\n\nThe invariant functions $F_{ijkl}$ are calculated in terms\nof standard tensor integrals, which are reduced to scalar integrals by the\nprocedure proposed in \\citere{pas79}. The scalar one-loop integrals are\nevaluated using the methods and general results of \\citere{scalar}. Whereas\nUV divergences are regularized dimensionally, we treat IR divergences\nby introducing an infinitesimal photon mass $\\lambda$. The\nartificial $\\lambda$ dependence drops out when soft-photon\nbremsstrahlung is added. 
\n\nThe cross-section\\ including full ${\\cal {O}}(\\alpha)$ corrections and the \nsquared Higgs-resonant $\\mathswitch{{\\cal{O}}(\\alpha)}$ contributions read\n\\begin{eqnarray}\n\\left(\\frac{{\\mathrm{d}}\\sigma}{{\\mathrm{d}}\\Omega}\\right) &=&\n\\frac{\\beta}{64\\pi^2s} \\sum_{\\lambda_1 \\lambda_2 \\lambda_3 \\lambda_4}\n\\biggl[\n\\left| {\\cal M}_{\\mathrm{Born}} \\right|^2 (1 + \\delta_{\\mathrm{SB}}) \n+ 2\\mathop{\\mathrm{Re}}\\nolimits\\{\\delta {\\cal M} {\\cal{M}}_{{\\mathrm{Born}}}^*\\}\n+ \\frac{|F^H(\\mathswitch {M_\\PH}^2){\\cal{M}}_{0000}|^2}{(s-\\mathswitch {M_\\PH}^2)^2 + \\Gamma_H^2 \\mathswitch {M_\\PH}^2}\\biggr]\n\\nonumber \\\\*\n&=& \\left(\\frac{{\\mathrm{d}}\\sigma}{{\\mathrm{d}}\\Omega}\\right)^{\\mathrm{Born}}(1+\\delta),\n\\label{eq:dsido}\n\\end{eqnarray}\nwhere \n\\begin{eqnarray}\n\\delta_{\\mathrm{SB}} &=& -\\frac{\\alpha}{\\pi}\n\\biggl\\{2 \\log \\frac{2 \\Delta E}{\\lambda} +\n\\frac{1}{\\beta} \\log\\biggl(\\frac{1-\\beta}{1+\\beta}\\biggr)\n + \\frac{s - 2 \\mathswitch {M_\\PW}^2}{s \\beta} \\biggl[2 \\log \\frac{2 \\Delta E}{\\lambda}\n \\log\\biggl(\\frac{1-\\beta}{1+\\beta}\\biggr)\n\\nonumber \\\\\n\\phantom{-\\frac{\\alpha}{\\pi}\\biggl\\{}\n&& \n - 2 {\\mathop{\\mathrm{Li}}\\nolimits}_2 \\biggl(\\frac{1-\\beta}{1+\\beta}\\biggr) + \\frac{1}{2}\n \\log^2\\biggl(\\frac{1-\\beta}{1+\\beta}\\biggr) + \\frac{\\pi^2}{3}\n - 2 \\log\\biggl(\\frac{1-\\beta}{1+\\beta}\\biggr)\n \\log\\biggl(\\frac{2 \\beta}{1+\\beta}\\biggr)\\biggr]\\biggr\\}\n\\end{eqnarray}\ndenotes the soft-photon correction factor, \n$\\Delta E$ is the maximal energy of the emitted photon, \n$F^H$ is given in \\refeq{eq:FH},\nand $\\delta$ is the relative correction.\n\nFor the integrated cross-section\\ $\\sigma$, the relative correction is defined\nanalogously\n\\begin{equation}\n\\sigma = \\int_{\\theta_{\\min}}^{\\theta_{\\max}} {\\mathrm{d}}\\!\\cos\\theta \\int_0^{2\\pi}\n{\\mathrm{d}}\\phi\\,\\left(\\frac{{\\mathrm{d}}\\sigma}{{\\mathrm{d}}\\Omega}\\right) = \\sigma^{\\mathrm{Born}}(1+\\delta).\n\\end{equation}\n\nIn order to ensure the correctness of our results we have performed \nthree different calculations. The corrections were calculated with \n{\\it FeynCalc\\\/} \\cite{fc} both in tHF gauge and NL\ngauge \\refeq{nl}. A further calculation was performed independently with \n{\\it Mathematica\\\/} without using {\\it FeynCalc\\\/} in NL gauge.\nThe results of these various calculations agree numerically within\n8--9 digits for the corrected cross-section.\nMoreover, we have checked that all \nUV and IR singularities cancel, and that the symmetries discussed in\n\\refse{se:notcon}\nhold. 
Finally, the leading corrections discussed in \\refse{leadrcs} have\nbeen deduced analytically and checked numerically.\n \n\\section{Numerical results}\n\\label{se:num}\n\nFor the numerical evaluation we used the following set of parameters\n\\cite{PDG94}\n\\begin{equation}\n\\begin{array}[b]{lcllcllcl}\n\\alpha &=& 1\/137.0359895 &\n\\mathswitch {G_\\mu} & = & 1.166390 \\times 10^{-5} \\unskip\\,\\mathrm{GeV}^{-2} \\\\[.3em]\n\\mathswitch {M_\\PZ} & = & 91.187\\unskip\\,\\mathrm{GeV}, &\n\\mathswitch {M_\\PH} & = & 250\\unskip\\,\\mathrm{GeV}, &&& \\\\[.3em]\n\\mathswitch {m_\\Pe} & = & 0.51099906\\unskip\\,\\mathrm{MeV}, \\hspace{1.5em} &\nm_{\\mu} & = & 105.65839\\unskip\\,\\mathrm{MeV}, \\hspace{1.5em} &\nm_{\\tau} & = & 1.777\\;\\unskip\\,\\mathrm{GeV}, \\\\[.3em]\n\\mathswitch {m_\\Pu} & = & 46.0\\;\\unskip\\,\\mathrm{MeV}, &\n\\mathswitch {m_\\Pc} & = & 1.50\\;\\unskip\\,\\mathrm{GeV}, &\n\\mathswitch {m_\\Pt} & = & 170\\;\\unskip\\,\\mathrm{GeV}, \\\\[.3em]\n\\mathswitch {m_\\Pd} & = & 46.0\\;\\unskip\\,\\mathrm{MeV}, &\n\\mathswitch {m_\\Ps} & = & 150\\;\\unskip\\,\\mathrm{MeV}, &\n\\mathswitch {m_\\Pb} & = & 4.50\\;\\unskip\\,\\mathrm{GeV}.\n\\end{array}\n\\label{eq:par}\n\\end{equation}\nThe masses of the light quarks are adjusted such that the\nexperimentally measured hadronic vacuum polarization is reproduced\n\\cite{Ei95}. As discussed in the previous section, \nno large logarithms associated with fermion masses enter the\n$\\mathswitch{{\\cal{O}}(\\alpha)}$ corrections for $\\gamma\\gamma\\to\\PWp\\PWm$ in the on-shell renormalization scheme,\nand the fermion mass contributions are only of the order $\\alpha\nm_f^2\/\\mathswitch {M_\\PW}^2$.\nHowever, as the Fermi-constant $\\mathswitch {G_\\mu}$ is empirically much better\nknown than the \\mathswitchr W~mass, \\mathswitch {M_\\PW}\\ is usually calculated from all the other \nparameters using the muon decay width including radiative corrections. \nIn this calculation of \\mathswitch {M_\\PW}\\ all parameters given above enter sensibly.\nIf not stated otherwise, $\\mathswitch {M_\\PW}$ is determined in the following using\nformulae (2.56) and (2.57) of \\citere{wwrev}. The above set of\nparameters yields\n$$\\mathswitch {M_\\PW}=80.333\\unskip\\,\\mathrm{GeV}.$$\n\nAs discussed above, no leading collinear logarithms occur in \n$\\gamma\\gamma\\to\\PWp\\PWm$. Thus, the only source of enhanced photonic corrections are\nthe soft-photon-cut-off-dependent terms which yield the following\nrelative correction\n\\begin{equation}\n\\delta_{\\mathrm{cut}} = -\\frac{2\\alpha}{\\pi} \\log\\frac{\\Delta E}{E} \n\\left(1 - \\frac{s-2\\mathswitch {M_\\PW}^2}{s\\beta}\\log\\frac{1+\\beta}{1-\\beta}\\right).\n\\label{eq:cut}\n\\end{equation}\nWhile these cut-off-dependent terms are \ndefinitely of electromagnetic origin, the complete electroweak \\mathswitch{{\\cal{O}}(\\alpha)}\\ \ncorrections cannot be\nseparated on the basis of Feynman diagrams in a gauge-invariant way.\nSince we are mainly interested in the weak corrections we discard the \ncut-off-dependent terms \\refeq{eq:cut} \nand consider the rest as a suitable measure of\nthe weak corrections for the process at hand. The elimination of the \ncut-off-dependent terms can be achieved simply by setting the soft-photon\ncut-off energy equal to the beam energy. 
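For orientation, inserting $\\sqrt{s}=500\\unskip\\,\\mathrm{GeV}$, $\\mathswitch {M_\\PW}=80.333\\unskip\\,\\mathrm{GeV}$, and $\\Delta E=0.1E$ into \\refeq{eq:cut} yields $\\beta\\approx 0.947$, $(s-2\\mathswitch {M_\\PW}^2)\/(s\\beta)\\approx 1.0015$, and $\\log[(1+\\beta)\/(1-\\beta)]\\approx 3.603$, so that\n\\begin{equation}\n\\delta_{\\mathrm{cut}} \\approx -\\frac{2\\alpha}{\\pi}\\log(0.1)\\,(1-1.0015\\times 3.603) \\approx -2.8\\%,\n\\end{equation}\nwhich reproduces the value $\\delta_{\\mathrm{cut}}=-2.79\\%$ quoted for $\\sqrt{s}=500\\unskip\\,\\mathrm{GeV}$ in \\refta{ta:born} below. 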
If not stated otherwise, the\ncorrection $\\delta$ stands in the following for\nthe complete soft-photonic and virtual electroweak corrections\nas defined in \\refeq{eq:dsido} for $\\Delta E = E$.\n\nFigure \\ref{fi:intcs10} shows the corrections to the total cross-sections\\ integrated \nover $10^\\circ \\leq \\theta \\leq 170^\\circ$ for different boson polarizations.\n\\begin{figure}\n\\setlength{\\unitlength}{1mm}\n\\begin{picture}(160,210)(0,0)\n\\put(0,110){\\input{aaww0.tot10.tex}}\n\\put(0,05){\\input{aaww.tot10.tex}}\n\\end{picture}\n\\caption{Integrated lowest-order cross-sections\\ and corresponding relative \ncorrections for several polarizations with an angular cut \n $10^\\circ \\leq \\theta \\leq 170^\\circ$}\n\\label{fi:intcs10}\n\\end{figure}\nThe dominating channels involving transverse \\mathswitchr W~bosons get \ncorrections which almost coincide with each other as well as\nthe unpolarized case and reach roughly $-20\\%$ at $\\sqrt{s}=2\\unskip\\,\\mathrm{TeV}$.\nFor $\\theta_{\\mathrm{cut}} = 10^\\circ$ the corrections to $\\sigma_{\\pm\\mp{\\mathrm{L}}\\rL}$ are \nsimilar, and those to $\\sigma_{\\pm\\mp({\\mathrm{L}}{\\mathrm{T}}+{\\mathrm{T}}{\\mathrm{L}})}$ are only slightly larger.\nThe corrections to $\\sigma_{\\pm\\pm{\\mathrm{L}}\\rL}$ are \ncompletely different. At low energies they are dominated by the Higgs\nresonance, at high energies \nby corrections proportional to $\\mathswitch {M_\\PH}^2\/\\mathswitch {M_\\PW}^2$ which are additionally\nenhanced owing to the suppression of the corresponding lowest-order cross-section.\nThis cross-section, which is also most sensitive to a very heavy Higgs boson, has been\ndiscussed in detail in \\citere{HH}. \nNote that owing to helicity conservation only the cross-sections\\ with\nequal photon and \\mathswitchr W~boson helicities are affected by the Higgs resonance.\n\nImposing a more stringent angular cut $20^\\circ <\\theta <160^\\circ$ \nto the phase-space integration, the corrections become\nlarger at high energies for all polarizations \ninvolving $t$- and $u$-channel poles\nand reach about $-35\\%$ at $\\sqrt{s}=2\\unskip\\,\\mathrm{TeV}$ (\\reffi{fi:intcs20}).\n\\begin{figure}\n\\setlength{\\unitlength}{1mm}\n\\begin{picture}(160,210)(0,0)\n\\put(0,110){\\input{aaww0.tot20.tex}}\n\\put(0,05){\\input{aaww.tot20.tex}}\n\\end{picture}\n\\caption{Same as in \\protect\\reffi{fi:intcs10} but with an angular cut \n $20^\\circ \\leq \\theta \\leq 160^\\circ$}\n\\label{fi:intcs20}\n\\end{figure}\nThis is due to the fact that after cutting off the dominant forward \nand backward peaks we are left\nwith a region in phase space where the influence of the radiative\ncorrections becomes more important.\nThe corrections to the other cross-sections, in particular to $\\sigma_{\\pm\\mp{\\mathrm{L}}\\rL}$,\nare hardly affected.\n \nIn \\reffis{fi:difcspp} and \\ref{fi:difcsmm} we\nshow the corrections to the differential cross-sections\\ for $\\sqrt{s} =\n0.5$, 1 and $2\\unskip\\,\\mathrm{TeV}$.\n\\begin{figure}\n\\setlength{\\unitlength}{1mm}\n\\begin{picture}(160,210)(-1,0)\n\\put(26,168){\\makebox(0,0)[l]{{\\footnotesize unpol}}}\n\\put(26,78){\\makebox(0,0)[l]{{\\footnotesize unpol}}}\n\\put(77,168){\\makebox(0,0)[l]{{\\footnotesize $\\pm\\pm$TT}}}\n\\put(77,78){\\makebox(0,0)[l]{{\\footnotesize $\\pm\\pm$TT}}}\n\\put(130,168){\\makebox(0,0)[l]{{\\footnotesize $\\pm\\pm$LL}}}\n\\put(130,28){\\makebox(0,0)[l]{{\\footnotesize 
$\\pm\\pm$LL}}}\n\\put(-10,110){\\input{aaww0.unpol}\n\\hspace*{-1.7cm}\\input{aaww0.22TT}\n\\hspace*{-1.7cm}\\input{aaww0.22LL}}\n\\put(-10,0){\n\\input{aaww.unpol}\n\\hspace*{-1.7cm}\\input{aaww.22TT}\n\\hspace*{-1.7cm}\\input{aaww.22LL}}\n\\end{picture}\n\\caption{Differential lowest-order cross-sections\\ and relative \ncorrections for the unpolarized cross-section\\ and the cross-sections\\ with equal photon helicities}\n\\label{fi:difcspp}\n\\end{figure}%\n\\begin{figure}\n\\setlength{\\unitlength}{1mm}\n\\begin{picture}(160,210)(-1,0)\n\\put(21,164){\\makebox(0,0)[l]{{\\footnotesize $\\pm$$\\mp$(LT+TL)}}}\n\\put(21,78){\\makebox(0,0)[l]{{\\footnotesize $+$$-$(LT+TL)}}}\n\\put(77,164){\\makebox(0,0)[l]{{\\footnotesize $\\pm\\mp$TT}}}\n\\put(77,78){\\makebox(0,0)[l]{{\\footnotesize $+$$-$TT}}}\n\\put(130,164){\\makebox(0,0)[l]{{\\footnotesize $\\pm\\mp$LL}}}\n\\put(130,78){\\makebox(0,0)[l]{{\\footnotesize $+$$-$LL}}}\n\\put(-10,110){\\input{aaww0.21_LT_TL}\n\\hspace*{-1.7cm}\\input{aaww0.21TT}\n\\hspace*{-1.7cm}\\input{aaww0.21LL}}\n\\put(-10,0){\n\\input{aaww.21_LT_TL}\n\\hspace*{-1.7cm}\\input{aaww.21TT}\n\\hspace*{-1.7cm}\\input{aaww.21LL}}\n\\end{picture}\n\\caption{Differential lowest-order cross-sections\\ and relative \ncorrections for the cross-sections\\ with opposite photon helicities}\n\\label{fi:difcsmm}\n\\end{figure}\nWhenever the differential cross-section\\ is sizable, $\\delta$ is of the\norder of $10\\%$. The corrections are in particular small in the forward\nand backward direction for all cross-sections\\ that involve $t$- and $u$-channel\npoles in lowest order. On the other hand, the corrections get very large\nwhen the lowest-order cross-section\\ is suppressed or tends to zero, in particular\nfor ${\\mathrm{d}}\\sigma_{\\pm\\pm{\\mathrm{L}}\\rL}\/{\\mathrm{d}}\\Omega$ \nat high energies and intermediate scattering angles.\nThe maximal corrections are usually reached for central values of the\nscattering angle.\nIn accordance with the discussion in \\refse{se:notcon}, the corrections are\nforward-backward symmetric for equal photon helicities.\nFor opposite photon helicities they\ninclude an asymmetric contribution originating from box diagrams and\n$AWW$ vertex corrections involving fermion loops.\nThe corrections for two negative helicity photons are equal to those for\ntwo positive helicity photons. As a consequence of Bose symmetry, the\ncorrections to ${\\mathrm{d}}\\sigma_{+-}\/{\\mathrm{d}}\\Omega$ are obtained from those to \n${\\mathrm{d}}\\sigma_{-+}\/{\\mathrm{d}}\\Omega$ upon \nexchanging $u$ and $t$, i.e.\\ $\\theta\\leftrightarrow 180^\\circ-\\theta$. 
\nThus, the unpolarized cross-section\\ is forward--backward-symmetric.\n \n\\begin{table}\n\\footnotesize\n\\newdimen\\digitwidth\n\\setbox0=\\hbox{0}\n\\digitwidth=\\wd0\n\\catcode`!=\\active \n\\def!{\\kern\\digitwidth}\n\\newdimen\\minuswidth\n\\setbox0=\\hbox{$-$}\n\\minuswidth=\\wd0\n\\catcode`?=\\active \n\\def?{\\kern\\minuswidth}\n\\begin{center}\n\\arraycolsep 6pt\n$$\\begin{array}{|c|c||c|c|c|c|c|c|}\n\\hline\n\\sqrt{s}\/\\mathrm{GeV} & \\theta & \\sigma^{\\mathrm{Born}}\/\\mathrm{pb} &\n\\delta_{\\Delta E = 0.1E} \/\\% & \n\\delta_{\\mathrm{cut}} \/\\% & \n\\delta_{\\Delta E = E} \/\\% & \n\\delta_{\\mathrm{bos}}\/\\% &\n\\delta_{\\mathrm{ferm}}\/\\% \\\\\n\\hline\\hline\n & !5^\\circ & 98.13 & !?0.02 & -2.79 & ?!2.81 & ?!1.49 & ?1.32 \\\\\n \\cline{2-8}\n & 20^\\circ & 26.04 & !-2.68 & -2.79 & ?!0.11 & !-0.08 & ?0.19 \\\\\n\\cline{2-8}\n !500 & 90^\\circ & 0.724 & -10.79 & -2.79 & !-8.00 & !-5.62 & -2.38 \\\\\n\\cline{2-8}\n & !0^\\circ< \\theta <180^\\circ & 77.55 & !-3.38 & -2.79 \n& !-0.59 & !-0.65 & ?0.06 \\\\\n\\cline{2-8}\n & 10^\\circ< \\theta <170^\\circ & 60.74 & !-4.27 & -2.79 &\n !-1.48 & !-1.21 & -0.27 \\\\\n\\cline{2-8}\n & 20^\\circ< \\theta <160^\\circ & 36.67 & !-6.06 & -2.79 \n& !-3.27 & !-2.39 & -0.89 \\\\\n\\hline\\hline\n & !5^\\circ & 291.9 & !-2.06 & -4.31 & ?!2.25 & ?!1.04 & ?1.21 \\\\\n\\cline{2-8}\n & 20^\\circ & 15.61 & -11.90 & -4.31 & !-7.59 & !-6.37 & -1.22 \\\\\n\\cline{2-8}\n 1000 & 90^\\circ & 0.193 & -31.64 & -4.31 & -27.33 & -21.93 & -5.40 \\\\\n\\cline{2-8}\n & !0^\\circ< \\theta <180^\\circ & 80.05 & !-7.08 & -4.31 \n& !-2.77 & !-2.71 & -0.06 \\\\ \n\\cline{2-8}\n & 10^\\circ< \\theta <170^\\circ & 37.06 & -12.26 & -4.31 \n& !-7.95 & !-6.65 & -1.30 \\\\\n\\cline{2-8}\n & 20^\\circ< \\theta <160^\\circ & 14.16 & -19.29 & -4.31 \n& -14.98 & -12.20 & -2.78 \\\\\n\\hline\\hline\n & !5^\\circ & 418.8 & !-7.14 & -5.80 & !-1.33 & !-1.59 & ?0.25 \\\\\n\\cline{2-8}\n & 20^\\circ & 5.163 & -30.31 & -5.80 & -24.51 & -20.96 & -3.55 \\\\\n\\cline{2-8}\n 2000 & 90^\\circ & 0.049 & -59.59 & -5.80 & -53.78 & -45.47 & -8.32 \\\\ \n\\cline{2-8}\n & !0^\\circ< \\theta <180^\\circ & 80.59 & !-9.85 & -5.80 \n& !-4.04 & !-3.95 & -0.09 \\\\\n\\cline{2-8}\n & 10^\\circ< \\theta <170^\\circ & 14.14 & -27.15 & -5.80 \n& -21.35 & -18.34 & -3.01 \\\\\n\\cline{2-8}\n & 20^\\circ< \\theta <160^\\circ & 4.068 & -41.22 & -5.80 \n& -35.41 & -30.12 & -5.29 \\\\\n\\hline\n\\end{array}$$\n\\caption{Lowest-order cross-sections\\ and relative corrections for unpolarized\nparticles}\n\\label{ta:born}\n\\end{center}\n\\end{table}\nIn \\refta{ta:born} we list the unpolarized cross-section\\ and the corresponding \ncorrections for several energies and scattering angles. We include the\ncorrections for a soft-photon-energy cut-off $\\Delta E = 0.1E$, the \ncut-off-dependent corrections $\\delta_{\\mathrm{cut}}$ from \\refeq{eq:cut}, and the \nindividual (gauge-invariant) fermionic $\\delta_{\\mathrm{ferm}}$ \nand bosonic corrections $\\delta_{\\mathrm{bos}}$.\nThe fermionic corrections consist of all loop diagrams and counterterm\\ \ncontributions involving fermion loops, all other contributions form the \nbosonic corrections.\nThe fermionic corrections stay below 5--10\\% even for high energies. \nOn the other hand, the bosonic contributions are responsible for the\nlarge corrections at high energies, in\nparticular in the central angular region. 
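\n\nNote that, up to rounding, the entries of \\refta{ta:born} satisfy\n\\begin{equation}\n\\delta_{\\Delta E = 0.1E} = \\delta_{\\Delta E = E} + \\delta_{\\mathrm{cut}}, \\qquad\n\\delta_{\\Delta E = E} = \\delta_{\\mathrm{bos}} + \\delta_{\\mathrm{ferm}},\n\\end{equation}\ne.g.\\ $-8.00\\%-2.79\\%=-10.79\\%$ and $-5.62\\%-2.38\\%=-8.00\\%$ for\n$\\sqrt{s}=500\\unskip\\,\\mathrm{GeV}$ and $\\theta=90^\\circ$, which provides a simple\nconsistency check of the decomposition into cut-off-dependent, bosonic, and\nfermionic parts.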
\n \nIn \\citere{Ye91} various observables have been investigated in view of\ntheir sensitivity to anomalous couplings, \ninvolving the total cross-section\\ and the following ratios%\n\\footnote{Note that we do not perform a convolution with a realistic\n\\label{fo:3}%\nphoton spectrum but consider the incoming photons as monochromatic.}\n\\begin{eqnarray}\nR_{\\mathrm{IO}}&=& \\frac{\\sigma(|\\!\\cos\\theta|<0.4)}{\\sigma(|\\!\\cos\\theta|<0.8)}, \\\\\nR_{\\mathrm{LT}}&=& \\frac{\\sigma_{{\\mathrm{L}}\\rL}}{\\sigma_{{\\mathrm{T}}\\rT}}, \\\\\nR_{\\mathrm{02}}&=& \\frac{\\sigma_{++}}{\\sigma_{+-}} .\n\\end{eqnarray}\nWe list the lowest-order predictions together with the $\\mathswitch{{\\cal{O}}(\\alpha)}$-corrected\nones and the relative corrections for these observables in\n\\refta{ta:obs} using $|\\!\\cos\\theta_{\\mathrm{cut}}|=0.8$.\n\\begin{table}\n\\newdimen\\digitwidth\n\\setbox0=\\hbox{0}\n\\digitwidth=\\wd0\n\\catcode`!=\\active \n\\def!{\\kern\\digitwidth}\n\\newdimen\\minuswidth\n\\setbox0=\\hbox{$-$}\n\\minuswidth=\\wd0\n\\catcode`?=\\active \n\\def?{\\kern\\minuswidth}\n\\begin{center}\n\\arraycolsep 6pt\n$$\\begin{array}{|c|c||c|c|c|c|}\n\\hline\n\\sqrt{s}\/\\mathrm{GeV} & & \\sigma\/\\mathrm{pb} &\nR_{\\mathrm{IO}} & R_{\\mathrm{LT}} & R_{02} \\\\\n\\hline\\hline\n & \\mathrm{Born~level} & ?15.74 & ?0.265 & 0.0308 & ?1.934 \\\\\n\\cline{2-6}\n !500 & \\mathrm{corrected} & ?14.82 & ?0.259 & 0.0325 & ?1.950 \\\\\n\\cline{2-6}\n & \\mathrm{corrections\/\\%} & !-5.83 & !-2.02 & !5.43! & ?0.78! \\\\\n\\hline\\hline\n & \\mathrm{Born~level} & ?4.659 & ?0.241 & 0.0235 & ?2.229 \\\\\n\\cline{2-6}\n 1000 & \\mathrm{corrected} & ?3.617 & ?0.227 & 0.0276 & ?2.184 \\\\\n\\cline{2-6}\n & \\mathrm{corrections\/\\%} & -22.36 & !-5.64 & 17.08! & -2.05! \\\\\n\\hline\\hline\n & \\mathrm{Born~level} & ?1.218 & ?0.234 & 0.0220 & ?2.307 \\\\\n\\cline{2-6}\n 2000 & \\mathrm{corrected} & ?0.647 & ?0.207 & 0.0321 & ?2.168 \\\\\n\\cline{2-6}\n & \\mathrm{corrections\/\\%} & -46.86 & -11.53 & 46.11! & -6.02!\\\\\n\\hline\n\\end{array}$$\n\\caption{Lowest-order predictions and corresponding corrections for various\nobservables and $|\\!\\cos\\theta_{\\mathrm{cut}}|=0.8$}\n\\label{ta:obs}\n\\end{center}\n\\end{table}\n\nIn \\reftas{ta:MTvar.mw} -- \\ref{ta:MHvar.gf} we show the variation of the \nSM corrections with the top-quark and Higgs-boson masses at\n$\\sqrt{s}=500\\unskip\\,\\mathrm{GeV}$ \nin per cent of the cross-section\\ for our standard set of parameters \\refeq{eq:par}.\nWe have determined this variation by searching the maximum and minimum\ncross-sections\\ in the range $130\\unskip\\,\\mathrm{GeV} < \\mathswitch {m_\\Pt} < 210\\unskip\\,\\mathrm{GeV}$ for the variation with\n$\\mathswitch {m_\\Pt}$ and in the ranges $60\\unskip\\,\\mathrm{GeV} < \\mathswitch {M_\\PH} < 400\\unskip\\,\\mathrm{GeV}$ and $600\\unskip\\,\\mathrm{GeV} < \\mathswitch {M_\\PH} <\n1000\\unskip\\,\\mathrm{GeV}$ for the variation with $\\mathswitch {M_\\PH}$. The range $400\\unskip\\,\\mathrm{GeV} < \\mathswitch {M_\\PH} <\n600\\unskip\\,\\mathrm{GeV}$ has been left out as there the Higgs-mass dependence is\ndominated by the Higgs resonance. 
Because the resonance dominates\n$\\sigma_{\\pm\\pm{\\mathrm{L}}\\rL}$ in an even wider range, we have omitted this\ncross-section\\ in the tables for the Higgs dependence.\n\n\\begin{table}\n$$ \\begin{array}{|c||*6{c|}}\n\\hline\n$\\mathswitch {M_\\PW}$ \\mathrm{~fixed}\n&{\\mathrm{U}}\\rU{\\mathrm{U}}\\rU & +{+}{\\mathrm{T}}\\rT & +{+}{\\mathrm{L}}\\rL & +{-}{\\mathrm{T}}\\rT & \n+{-}{\\mathrm{L}}\\rL & +{-}({\\mathrm{L}}{\\mathrm{T}}+{\\mathrm{T}}{\\mathrm{L}}) \\\\\n\\hline\n\\theta=20^\\circ & 0.15\\% & 0.18\\% & 0.46\\% & 0.14\\% & 0.53\\% & 0.06\\% \\\\\n\\theta=90^\\circ & 0.62\\% & 0.58\\% & 3.52\\% & 0.44\\% & 1.50\\% & 0.29\\% \\\\\n\\mbox{integrated over} & & & & & & \\\\\n10^\\circ<\\theta<170^\\circ\n& 0.22\\% & 0.26\\% & 1.06\\% & 0.16\\% & 1.23\\% & 0.25\\% \\\\\n20^\\circ<\\theta<160^\\circ \n& 0.29\\% & 0.33\\% & 1.48\\% & 0.21\\% & 1.23\\% & 0.26\\% \\\\\n\\hline\n\\end{array} $$\n\\caption{Variation of various polarized cross-sections\\ at $E_{\\mathrm{CMS}} = 500\\unskip\\,\\mathrm{GeV}$\nfor fixed $\\mathswitch {M_\\PW}$\nwith the top-quark mass in the range $130\\unskip\\,\\mathrm{GeV} < \\mathswitch {m_\\Pt} < 210\\unskip\\,\\mathrm{GeV}$ in per cent \nof the cross-section\\ for $\\mathswitch {m_\\Pt} = 174\\unskip\\,\\mathrm{GeV}$}\n\\label{ta:MTvar.mw}\n\\end{table}\n\\begin{table}\n$$ \\begin{array}{|c||*5{c|}}\n\\hline\n$\\mathswitch {M_\\PW}$ \\mathrm{~fixed}\n&{\\mathrm{U}}\\rU{\\mathrm{U}}\\rU & +{+}{\\mathrm{T}}\\rT & +{-}{\\mathrm{T}}\\rT & \n+{-}{\\mathrm{L}}\\rL & +{-}({\\mathrm{L}}{\\mathrm{T}}+{\\mathrm{T}}{\\mathrm{L}}) \\\\\n\\hline\n\\theta=20^\\circ & 0.16\\% & 0.18\\% & 0.13\\% & 0.37\\% & 0.81\\% \\\\\n\\theta=90^\\circ & 0.44\\% & 0.21\\% & 0.34\\% & 2.62\\% & 1.51\\% \\\\\n\\mbox{integrated over} & & & & & \\\\\n10^\\circ<\\theta<170^\\circ\n& 0.09\\% & 0.12\\% & 0.10\\% & 2.04\\% & 0.37\\% \\\\\n20^\\circ<\\theta<160^\\circ \n& 0.07\\% & 0.09\\% & 0.06\\% & 2.06\\% & 0.52\\% \\\\\n\\hline\n\\end{array} $$\n\\caption{Variation of various polarized cross-sections\\ at $E_{\\mathrm{CMS}} = 500\\unskip\\,\\mathrm{GeV}$\nfor fixed $\\mathswitch {M_\\PW}$\nwith the Higgs-boson mass\nin the ranges $60\\unskip\\,\\mathrm{GeV} < \\mathswitch {M_\\PH} < 400\\unskip\\,\\mathrm{GeV}$ and $600\\unskip\\,\\mathrm{GeV} < \\mathswitch {M_\\PH} <\n1000\\unskip\\,\\mathrm{GeV}$ in per cent\nof the cross-section\\ for $\\mathswitch {M_\\PH} = 250\\unskip\\,\\mathrm{GeV}$}\n\\label{ta:MHvar.mw}\n\\end{table}\nIn \\reftas{ta:MTvar.mw} and \\ref{ta:MHvar.mw} the $\\mathswitchr W$-boson mass is\nkept fixed at $\\mathswitch {M_\\PW}=80.22\\unskip\\,\\mathrm{GeV}$. Then,\nas argued in the previous section, the variation is \nsmall owing to the absence of large top- and Higgs-mass-dependent\ncorrections. The larger variation of the cross-sections\\ \ninvolving longitudinal \\mathswitchr W~bosons\nis due to terms proportional to $\\mathswitch {m_\\Pt}^2\/\\mathswitch {M_\\PW}^2$ or $\\mathswitch {M_\\PH}^2\/\\mathswitch {M_\\PW}^2$ \narising as a remnant of the unitarity cancellations for\n$\\sqrt{s}\\gg\\mathswitch {m_\\Pt},\\mathswitch {M_\\PH}$. 
\nThese terms induce a sizable variation of these cross-sections\\\nwith $\\mathswitch {m_\\Pt}$ and $\\mathswitch {M_\\PW}$ at higher energies.\n\n\\begin{table}\n$$ \\begin{array}{|c||*6{c|}}\n\\hline\n$\\mathswitch {G_\\mu}$ \\mathrm{~fixed}\n&{\\mathrm{U}}\\rU{\\mathrm{U}}\\rU & +{+}{\\mathrm{T}}\\rT & +{+}{\\mathrm{L}}\\rL & +{-}{\\mathrm{T}}\\rT & \n+{-}{\\mathrm{L}}\\rL & +{-}({\\mathrm{L}}{\\mathrm{T}}+{\\mathrm{T}}{\\mathrm{L}}) \\\\\n\\hline\n\\theta=20^\\circ & 1.20\\% & 1.24\\% & 1.39\\% & 1.17\\% & 0.45\\% & 0.05\\% \\\\\n\\theta=90^\\circ & 0.18\\% & 0.06\\% & 3.23\\% & 0.20\\% & 1.56\\% & 1.22\\% \\\\\n\\mbox{integrated over} & & & & & & \\\\\n10^\\circ<\\theta<170^\\circ\n & 0.98\\% & 0.94\\% & 1.62\\% & 1.12\\% & 1.17\\% & 0.84\\% \\\\\n20^\\circ<\\theta<160^\\circ \n & 0.52\\% & 0.51\\% & 2.03\\% & 0.67\\% & 1.19\\% & 0.95\\% \\\\\n\\hline\n\\end{array} $$\n\\caption{Same as in \\protect\\refta{ta:MTvar.mw} but now for fixed $\\mathswitch {G_\\mu}$}\n\\label{ta:MTvar.gf}\n\\end{table}\n\\begin{table}\n$$ \\begin{array}{|c||*5{c|}}\n\\hline\n$\\mathswitch {G_\\mu}$ \\mathrm{~fixed}\n&{\\mathrm{U}}\\rU{\\mathrm{U}}\\rU & +{+}{\\mathrm{T}}\\rT & +{-}{\\mathrm{T}}\\rT & \n+{-}{\\mathrm{L}}\\rL & +{-}({\\mathrm{L}}{\\mathrm{T}}+{\\mathrm{T}}{\\mathrm{L}}) \\\\\n\\hline\n\\theta=20^\\circ & 0.33\\% & 0.33\\% & 0.35\\% & 0.73\\% & 0.81\\% \\\\\n\\theta=90^\\circ & 0.56\\% & 0.36\\% & 0.55\\% & 2.63\\% & 1.26\\% \\\\\n\\mbox{integrated over} & & & & & \\\\\n10^\\circ<\\theta<170^\\circ\n& 0.36\\% & 0.34\\% & 0.38\\% & 2.09\\% & 0.25\\% \\\\\n20^\\circ<\\theta<160^\\circ \n& 0.31\\% & 0.27\\% & 0.32\\% & 2.10\\% & 0.37\\% \\\\\n\\hline\n\\end{array} $$\n\\caption{Same as in \\protect\\refta{ta:MHvar.mw} but now for fixed $\\mathswitch {G_\\mu}$}\n\\label{ta:MHvar.gf}\n\\end{table}\nThe variations of the corrections for fixed $\\mathswitch {G_\\mu}$ are shown in\n\\reftas{ta:MTvar.gf} and \\ref{ta:MHvar.gf}. \nIt is larger in particular for the cross-sections\\ for purely transverse\n\\mathswitchr W~bosons. This fact results\nfrom the dependence of $\\mathswitch {M_\\PW}$ on $\\mathswitch {m_\\Pt}$ and $\\mathswitch {M_\\PH}$ that involves\nlogarithmic top- and Higgs-mass-dependent terms and terms proportional\nto $\\mathswitch {m_\\Pt}^2\/\\mathswitch {M_\\PW}^2$. \n\nTo visualize the Higgs resonance, we plot in \\reffi{fi:intcshiggs} the cross-section\\ \nincluding \\mathswitch{{\\cal{O}}(\\alpha)}\\ corrections\nintegrated over $20^\\circ <\\theta <160^\\circ$ for various values of the\nHiggs-boson mass. \n\\begin{figure}\n\\setlength{\\unitlength}{1mm}\n\\begin{picture}(160,110)(0,0)\n\\put(0,10){\\input{aaww.higgs.tex}}\n\\end{picture}\n\\caption{Integrated unpolarized cross-section\\ including $\\mathswitch{{\\cal{O}}(\\alpha)}$ corrections\nfor various Higgs-boson masses ($20^\\circ<\\theta<160^\\circ$)}\n\\label{fi:intcshiggs}\n\\end{figure}\nOur results agree qualitatively%\n\\footnote{See footnote \\ref{fo:3}.}\nwell with those of \\citere{Mo94}.\nWhile the resonance\nis comparably sharp at small energies, it is washed out by the large width \nof the Higgs boson at high energies. 
Already for $\\mathswitch {M_\\PH}=400\\unskip\\,\\mathrm{GeV}$ \nthe Higgs resonance is hardly visible in $\\gamma\\gamma\\to\\PWp\\PWm$.\n\n\\section{Summary}\n \nThe process $\\gamma\\gamma\\to\\PWp\\PWm$ will be one of the most interesting reactions at future\n$\\gamma\\ga$ colliders.\nIn particular, it is very useful to study triple and quartic \nnon-Abelian gauge couplings.\n \nWe have calculated the one-loop radiative corrections to $\\gamma\\gamma\\to\\PWp\\PWm$ within the\nelectroweak Standard Model in the soft-photon approximation for arbitrary\npolarizations of the photons and \\mathswitchr W~bosons. \nBy using a non-linear gauge-fixing term the number of contributing\ndiagrams can be reduced by roughly a factor of two.\nAn interesting peculiarity of $\\gamma\\gamma\\to\\PWp\\PWm$\nis the absence of most (universal) leading corrections,\nsuch as leading logarithms of light quark masses associated with the\nrunning of $\\alpha$ and leading logarithms associated with collinear\nbremsstrahlung. \nTherefore, the theoretical predictions are very clean.\n \nIn the heavy mass limit no leading $\\mathswitch {m_\\Pt}^2$- and \n$\\log\\mathswitch {m_\\Pt}$-terms and $\\log\\mathswitch {M_\\PH}$-terms exist.\nConsequently, \nthe variation of the cross-sections\\ with the top-quark and Higgs-boson masses\nis small if $\\mathswitch {M_\\PW}$ is kept fixed with the exception of the cross-sections\\\ninvolving longitudinal \\mathswitchr W~bosons at high energies. For fixed $\\mathswitch {G_\\mu}$\nthe variation arises mainly from the variation of $\\mathswitch {M_\\PW}$ with \n$\\mathswitch {m_\\Pt}$ and $\\mathswitch {M_\\PH}$ and is thus of similar origin like \nthe one of $\\Pep\\Pem\\to f\\bar{f}$ or $\\Pep\\Pem\\to\\PWp\\PWm$.\n\nWe have presented a detailed numerical discussion of the lowest-order\ncross-sections\\ and the virtual and soft-photonic corrections to $\\gamma\\gamma\\to\\PWp\\PWm$.\nThe soft-photon-cut-off-independent radiative corrections to the\ntotal cross-section\\ are of the order of 10\\%. They are \nincreased at high energies if the forward and backward regions are \nexcluded by an angular cut.\nThis is due to the fact that at high energies the radiative corrections\nreach several\n10\\% for intermediate scattering angles whereas they\nare at the level of several per cent in the forward and backward direction \nwhich dominate the total cross-section. \nThe large corrections are caused by bosonic loop diagrams whereas the\neffects of the fermionic diagrams are of the order of 5--10\\%.\n\n\\section*{Acknowledgement}\nWe are grateful to M.~B\\\"ohm for useful discussions.\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nX-ray ptychography\nhas a resurgence of interest in late 2000s due to the increase of both brilliance and coherence in modern synchrotron light sources \\cite{PfeifferNatPho17}. Inheriting the sophisticated techniques and advantages from both coherent diffraction imaging (CDI) and scanning transmission X-ray microscopy (STXM),\nit has become an essential tool in X-ray imaging at nanometer scale, and has the ability to produce high-resolution sample images while mitigating requirements for sample preparation, data analysis, optics, \\emph{a priori} knowledge of probe, etc \\cite{PfeifferNatPho17}. By scanning an X-ray probe over an extended specimen, one could measure the intensity of far-field transmission diffraction for each point, sometimes referred to as a ``view'', on a 2D scanning grid. 
However, phase information of the diffraction cannot be directly measured. In order to reconstruct both the amplitude and the phase for the light profile (probe) and specimen (object), \noverlapped scanning spots are necessary for providing sufficient information so that iterative algorithms can be used to retrieve the diffraction phase. \n\nThere are several phase-retrieval algorithms in X-ray ptychography, including the ptychographical iterative engine (PIE) \\cite{FaulknerPRL04,RodenburgAPL04} and its extension (ePIE) \\cite{MaidenUlt09}, the difference map (DM) algorithm \\cite{ThibaultSci08,ThibaultUlt09}, the non-linear optimization approach \\cite{Guizar-SicairosOE08}, the maximum-likelihood optimization \\cite{ThibaultNJP12}, etc.\nIn the present paper we focus on the DM algorithm, and implement it as part of the state-of-the-art multi-mode high-resolution tool suite at the Hard X-ray Nanoprobe (HXN) beamline at National Synchrotron Light Source II (NSLS-II) \\cite{YanNF18}.\n\n\nWithout any parallelization, a typical ptychographic reconstruction of an image can take hours to process on a single CPU core of a standard workstation. Furthermore, in order to reduce the data acquisition time, most of the measurements are conducted in the ``on-the-fly'' scan mode \\cite{PelzAPL14,DengOE15,HuangSR15} --- the sample is continuously moving relative to the probe. For accommodating the blurriness caused by continuous motion in the recorded diffraction data, multiple illumination modes have to be included in the reconstruction, which is introduced in \\cite{ThibaultNat13}. This multi-mode approach increases both memory footprint and computation time for the ptychographic reconstruction, making it crucial to have a high-performance ptychography reconstruction software. \n\nIn this paper, we report the steps we took to port the HXN ptychography reconstruction software to distributed GPUs and the resulting performance improvements. The new GPU version reduces the computation time significantly down to only tens of seconds for reconstructing a single image with one or more probe\/object modes, thereby fundamentally changing the workflow in the HXN beamline and providing real-time feedback to the facility users for arranging or adjusting the experimental setup.\n\n\n\n\\section{X-ray ptychography: difference map algorithm}\nThe DM algorithm is one of the most widely used and powerful phase-retrieval algorithms, which allows simultaneous reconstruction of the probe and the object, and our implementation follows closely the original work \\cite{ElserACSA03,ThibaultSci08,ThibaultNat13}. \nAs far as we know this is the first reported GPU acceleration effort in the DM algorithm. 
Similar efforts utilizing GPUs include \\cite{NashedOE14,MandulaJAC16,MarchesiniJAC16,NashedPCS17}.\n\nFor the fly scans,\nthe key insight is that the resulting blurriness can be equivalently seen as caused by an incoherent illumination beam.\nAs a result, the imperfection can be compensated by allowing the presence of multiple probe and object modes in the reconstruction.\nFirst, in the far-field limit the recorded transmission intensity is written as an incoherent sum of multiple modes, labeled by $k$ for probe and $l$ for object, respectively:\n\\begin{equation}\n\tI_j(\\mathbf{q}) = \\sum_{k,l} \\left| \\mathcal{F} \\left[\\psi_j^{(k,l)}(\\mathbf{r})\\right]\\right|^2,\n\t\\label{eq: intensity mode}\n\\end{equation}\nwhere $\\mathbf{r}$ is the position vector in the specimen plane and $\\mathbf{q}$ the corresponding reciprocal vector, $\\psi_j$ is the complex exit wave produced at the $j$-th scanning point and $\\mathcal{F}$ stands for Fourier transform. In the following, it is assumed $j\\in[1, N]$, $k\\in [1, M_P]$, and $l\\in[1, M_O]$. \nNext, we require $\\psi$ to be expressed as a product of the $k$-th probe mode and $l$-th object mode,\n\\begin{equation}\n\\psi^{(k, l)}_j(\\mathbf{r}) = P^{(k)}(\\mathbf{r}-\\mathbf{r}_j) O^{(l)}(\\mathbf{r}), \n\\label{eq: field product mode}\n\\end{equation}\nwhere \n$\\mathbf{r}_j$ is the scan position. \nThe underlying assumption\nis that the probe variation along the propagation direction is negligible within the sample thickness.\nThe goal of ptychography is to reconstruct $P$ and $O$ iteratively subject to \\eqref{eq: intensity mode} and \\eqref{eq: field product mode}, \nwhich can be regarded as a constrained search in the hyperspace of complex pixels.\n\n\nIn the first step, one provides an initial guess of $P$ and $O$ so that the initial fields can be constructed for each view $j$ based on \\eqref{eq: field product mode}.\nNext, the fields $\\{\\psi_j\\}$ are iteratively updated according to\n\\begin{equation}\n\\begin{aligned}\n \\psi_j^{(k,l)}(\\mathbf{r}) &\\leftarrow \\psi_j^{(k,l)}(\\mathbf{r}) + \\beta \\left\\{ \\mathcal{F}^{-1}\\circ\\mathcal{F}_c \\left[ 2P^{(k)}(\\mathbf{r}-\\mathbf{r}_j)O^{(l)}(\\mathbf{r})\\right.\\right.\\\\\n &\\left.\\left.\\quad- \\psi_j^{(k,l)}(\\mathbf{r}) \\right] - P^{(k)}(\\mathbf{r}-\\mathbf{r}_j)O^{(l)}(\\mathbf{r}) \\right\\},\n \\label{eq: update field}\n \\end{aligned}\n\\end{equation}\n where $\\beta\\in[0, 1]$ is a free parameter empirically adjusted to speed up convergence (usually we set $\\beta=0.8$), and $\\mathcal{F}_c$ refers to ``constrained'' Fourier transform, in which the computed amplitude is replaced by the measured one while the computed phase is kept, \n \\begin{equation}\n \\mathcal{F}_c \\left[\\psi_j^{(k,l)}(\\mathbf{r})\\right] \\equiv \\sqrt{\\frac{I_j(\\mathbf{q})}{\\sum_{k,l}\\left|\\tilde{\\psi}_j^{(k,l)}(\\mathbf{q})\\right|^2}}\n \\tilde{\\psi}_j^{(k,l)}(\\mathbf{q})\n \\end{equation}\n with $\\tilde{\\psi} = \\mathcal{F} \\left[\\psi\\right]$.\n The probe and object are also iteratively updated as follows:\n \\begin{align}\n P^{(k)}(\\mathbf{r}) &\\leftarrow \\frac{\\sum_l\\sum_j \\left[ O^{(l)}(\\mathbf{r}+\\mathbf{r}_j) \\right]^*\\psi_j^{(k,l)}(\\mathbf{r}+\\mathbf{r}_j)} {\\sum_l\\sum_j |O^{(l)}(\\mathbf{r}+\\mathbf{r}_j)|^2}, \\label{eq: update probe}\\\\\n O^{(l)}(\\mathbf{r}) &\\leftarrow \\frac{\\sum_k\\sum_j \\left[ P^{(k)}(\\mathbf{r}-\\mathbf{r}_j) \\right]^*\\psi_j^{(k,l)}(\\mathbf{r})} {\\sum_k\\sum_j |P^{(k)}(\\mathbf{r}-\\mathbf{r}_j)|^2}. 
\\label{eq: update object}\n \\end{align}\n\n The convergence of all quantities can be monitored by calculating the relative 2-norm (L2-norm) distance between iterations, for example,\n \\begin{equation}\n \\varepsilon_P^{(k)} = \\sqrt{\\frac{\\int d^2\\mathbf{r}\\left| P^{(k)}_\\text{new}(\\mathbf{r}) - P^{(k)}_\\text{old}(\\mathbf{r}) \\right|^2}{\\int d^2\\mathbf{r}\\left|P^{(k)}_\\text{new}(\\mathbf{r})\\right|^2}}.\n \\end{equation}\n In our experience, the DM algorithm does not need many iterations to converge and to reach satisfying resolution, although the number of required iterations is sample-dependent. \n\nIn practice, to determine the best choice for the number of probe modes ($M_P$), we increase $M_P$ incrementally in a few trial runs and examine whether the contribution of each mode to the total intensity is larger than at least 1\\% (if not, then stop).\nSo far we are unaware of any situation in which $M_O>1$ is useful, but we keep this flexibility in the code for future extensions.\n\n\n\n\\section{Porting to GPUs: strategy and implementation details}\n\nWe implemented the multi-mode DM algorithm on both CPU and GPU.\nThe original CPU code was written in Python for easy integration with other data acquisition, analysis, and visualization tools provided at NSLS-II. Therefore, in our port to NVIDIA GPUs we use PyCUDA (wrapper for CUDA API) \\cite{PyCUDA}, scikit-cuda (for cuFFT and other CUDA libraries) \\cite{scikit-cuda} and MPI for Python (mpi4py) \\cite{mpi4py1,mpi4py2,mpi4py3} to accelerate the existing Python code.\nIn addition, most of the computation are rewritten as CUDA C kernels, which are called through the PyCUDA binding for the best performance. An example of such binding is shown below:\n\\begin{lstlisting}\nimport pycuda.driver as cuda\nimport pycuda.gpuarray as gpuarray\nimport pycuda.autoinit\nimport numpy as np\n\n# assuming kernel \"fx1\" is implemented in sample.cu:\n# extern \"C\" { \/\/ avoid C++ name mangling\n# __global__ void fx1(float* input) { \n# \/* code *\/ \n# } \n# \/* other kernels *\/\n# }\n\n# load the CUDA binary (.cubin) generated by\n# nvcc -cubin -o sample.cubin sample.cu\ngpu_func_mod = cuda.module_from_file(\"sample.cubin\")\n\n# obtain a handle to the CUDA kernel\nkernel_fx1 = gpu_func_mod.get_function(\"fx1\")\n\n# copy an array to GPU (a bit slower than htod)\na = gpuarray.to_gpu(np.arange(100, dtype=float))\n\n# launch the kernel as if it were a Python function\n# block and grid mean the same as in CUDA C\nkernel_fx1(a, block=(100,1,1), grid=(1,1,1))\n\\end{lstlisting}\n\n\nWe first identified the hotspots in the code by coarse-grained performance profiling of the different function calls in the algorithm. We found that in the serial implementation, the update of the views $\\{\\psi_j\\}$ consumes the most CPU time and was our first target for GPU acceleration. Once this part has been ported to the GPUs, we redid the profiling and identified subsequently the error estimation for $\\{\\psi_j\\}$, the update of the probe and object, and so on, as the next target for acceleration.\n\nBecause GPU memory is scarce (ranging from a few to 32 GB depending on the device model), the full computation with realistic data size is usually difficult to be carried out using one GPU alone, especially for the multi-mode case as discussed earlier. 
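A large part of this memory demand comes from the $N$ complex views $\\{\\psi_j\\}$ themselves, which are updated independently according to \\eqref{eq: update field}; a minimal single-mode NumPy sketch of this per-view update (with illustrative variable names, not those of our production code) reads:\n\\begin{lstlisting}\nimport numpy as np\n\ndef update_views(psi, prb, obj, ampl, points, beta=0.8):\n    # psi: (N, ny, nx) complex views, prb: (ny, nx) probe,\n    # obj: (My, Mx) object, ampl: (N, ny, nx) measured sqrt(I_j),\n    # points: per-view slices of the object array\n    for j, (x0, x1, y0, y1) in enumerate(points):\n        prod = prb * obj[x0:x1, y0:y1]          # P*O at position r_j\n        tmp = np.fft.fft2(2.0 * prod - psi[j])\n        # constrained FT: keep the phase, impose the measured amplitude\n        tmp = ampl[j] * np.exp(1j * np.angle(tmp))\n        psi[j] += beta * (np.fft.ifft2(tmp) - prod)\n    return psi\n\\end{lstlisting}\nKeeping all $N$ such views resident in device memory already makes the footprint grow quickly with the dataset size and the number of modes. 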
It is therefore necessary to distribute the computation to multiple GPUs, either on the same compute node or across multiple nodes.\nFurthermore, in order to carry out most of the computation in the device and avoid large data transfer, \nwe need to allocate extra GPU memory as buffer for FFT, product of $P$ and $O$, \nparallel reduction, etc.\nAs a result, assuming a single GPU can take a reconstruction of $N=40000$, probe size $200\\times200$, and object size $10^3\\times10^3$, the memory footprint is about $3$ times larger than the raw data. Another factor of roughly $3.7$ is needed if we have five probe modes ($M_P=5$). \nIn total, over $10$ times of GPU memory compared to the raw data size is required for multi-mode ptychographic reconstruction\\footnote{We note that cuFFT allocates additional memory that is not taken into account in our estimates.}, which \ncan certainly be alleviated with multiple GPUs.\n\n\n\\begin{figure}[b]\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{ptycho_workflow}\n\t\\caption{The workflow of our implementation. The diffraction data associated with $N$ scanning points are distributed (almost) evenly to available GPUs, along with initial guess of probe $P$ and object $O$, so that most of the computation is done in the device. At each iteration, \n\n\ta few MPI collective communications (reduce and broadcast) are necessary to keep updated $P$ and $O$ visible across all GPUs.\n\tOther actions done in each iteration, such as error estimation, are not shown for clarity.\n\t}\n\t\\label{fig: workflow}\n\\end{figure}\n\n\n\nThe workflow of our Python code is shown in Fig.~\\ref{fig: workflow}. When the code starts, we divide (almost) equally the array indices of the dataset ($N$ intensity measurements)\nby the number of available GPUs, possibly on different physical machines as long as reachable by MPI, let each GPU grab its assigned portion from the disk, send initial guesses for the probe and object to all GPUs, and start the iteration loop. \nSince each view $j$ is updated independently [cf.~\\eqref{eq: update field}], this is a natural way to parallelize the computation,\nand the consequence of such workload distribution is that the summation over $j$ in \\eqref{eq: update probe} and \\eqref{eq: update object} is partitioned into each GPU.\nTherefore,\nwhile each view $\\psi_j$ is only updated in the GPU that it resides and no MPI collective communications are needed,\nin each iteration we need to collect the (partially summed) probe and object from each GPU, perform an MPI reduce followed by an MPI broadcast to keep the updated $P$ and $O$ visible to all GPUs.\nWe stress that because all views are needed to compute the probe and the object at each iteration, such synchronization is necessary, and we do not find this synchronization to be a significant burden to the entire calculation. Moreover, our approach avoids the complication of image stitching, such as those reported in \\cite{NashedOE14}. \n\nOne remark on the object reconstruction: Typically the probe dimension is much smaller than that of object. If we consider the functions of interest as matrices (since they live on a 2D plane labeled by pixel position $\\mathbf{r}$), then effectively the object update according to \\eqref{eq: update object} is an \\emph{embedding} procedure: the Hadamard product of two smaller matrices ($P$ and $\\psi$) is embedded into a larger matrix ($O$). 
Such embedding is sketched as follows:\n\\begin{lstlisting}\n# For each point j, the array indices (x_start and\n# others) are calculated during initialization. Note \n# that the 1st dimension of prb and psi[j] is equal\n# to x_end-x_start, and similarly for the 2nd dim\nfor j, (x_start, x_end, y_start, y_end) in \\\n enumerate(point_info):\n obj_update[x_start:x_end, y_start:y_end] \\\n += np.conjugate(prb) * psi[j]\n # perform other computations\n\\end{lstlisting}\nCurrently we perform this embedding in series (\\textit{i.e.,} loop over scanning points $\\mathbf{r}_j$) because such parallelization requires determining and grouping non-overlapping points in runtime, which is not trivial, and we are exploring possibilities of parallelizing it. As a workaround for mitigating this issue, we find that if the points on each GPU are batch-processed, then swapping the order of the $P$ and $O$ updates and then overlapping the $\\psi$ update for batch $i+1$ with the $O$ update for batch $i$ can speed up by about 20\\%. On the other hand, the probe counterpart \\eqref{eq: update probe} does not have this problem as\nall pixels in $P$ are updated at once. \n\nFinally, we note that the initial guesses of $P$ and $O$ can be completely random or supplied by pre-existing results, and that initializing different probe modes with different degrees of blurriness can help converge faster.\n\n\\section{Performance benchmark}\n\\begin{figure}[t\n\t\\centering\n\t\\includegraphics[width=1.0\\linewidth]{strong_scaling_rev2}\n\t\\caption{Strong scaling in the number of GPUs for completing 50 iterations for single-mode reconstruction, plotted in log-log scale. Real measurements of different $N$ and dimensions are used, and the raw data sizes are indicated. For some datasets (e.g.\\ \\#6) the data size is too large to fit in one GPU. Dashed lines are perfect scalings to guide the eyes. Tested on the HPC1 cluster in CSI, BNL. Each K20 compute node has Intel Xeon CPU E5-2670 @2.60GHz, 128GB physical memory, and one NVIDIA Tesla K20 GPU.\n\t}\n\t\\label{fig: strong scaling}\n\\end{figure}\n\n\nTo benchmark the performance of our implementation, we first show in Fig.~\\ref{fig: strong scaling} time-to-solution scaling as a function of the number of GPUs for realistic datasets with various sizes (different $N$ and small variation in probe dimension), a strong scaling in other words.\nThe speedup is most apparent for large datasets. In particular, for datasets of gigabytes order, the reconstruction can still be done on the order of a minute when using multiple GPUs. In fact, it can be seen that some datasets are too large to be fit in a single GPU (thus no data point on the plot), showing the necessity of using distributed GPUs. \n\n\n\nNext, we simultaneously increase both the total problem size (the number of views $N$) and the number of GPUs while keeping the workload distributed to each GPU fixed, which is known as a weak scaling.\nIn this case, we use instead synthesized data similar to the standard practice in the literature \\cite{MaidenUlt09,HuangOE14,NashedOE14}:\nwe take two arbitrary images to be the object's amplitude and phase and generate the hypothetical diffraction pattern through a zone plate, with variable number of scanning points such that roughly $5000$ points are assigned to each compute node. The result is shown in Fig.~\\ref{fig: weak scaling}, which shows a nearly constant scaling. 
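For reference, the synthetic intensities used in this test follow the single-mode forward model \\eqref{eq: intensity mode} and \\eqref{eq: field product mode}; a minimal sketch (with an arbitrary complex probe standing in for the zone-plate illumination used in the actual test) is:\n\\begin{lstlisting}\nimport numpy as np\n\ndef simulate_intensities(prb, obj, points):\n    # far-field intensities |F[P(r - r_j) O(r)]|^2 for one mode\n    data = []\n    for (x0, x1, y0, y1) in points:\n        exit_wave = prb * obj[x0:x1, y0:y1]\n        data.append(np.abs(np.fft.fft2(exit_wave)) ** 2)\n    return np.asarray(data)\n\\end{lstlisting}\nThe square roots of these intensities are then fed to the reconstruction as the measured amplitudes.\n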
We note that in this case due to the increasing communication overhead with more GPUs, the network infrastructure is critical to the scaling behavior. The measurement was done using the InfiniBand connection; with the TCP connection, we found that the scaling is less ideal but in general the elapsed time is still below 2 minutes for the highest $N$ (not shown).\n\n\\begin{figure}[b\n\t\\centering\n\t\\includegraphics[width=0.85\\linewidth]{weak_scaling_rev}\n\t\\caption{Weak scaling in the number of GPUs for completing 50 iterations for single-mode reconstruction, plotted in log-linear scale. Synthesized data of dimension $200\\times200$ are used, and each node is assigned about 5000 points. Tested on the Institutional Cluster in BNL. Each compute node has Intel Xeon CPU E5-2695 v4 @2.10GHz, 256GB physical memory, Mellanox EDR InfiniBand connection, and either two NVIDIA Tesla K80 or two Pascal P100 GPUs.\n\t}\n\t\\label{fig: weak scaling}\n\\end{figure}\n\n\n\nFinally, as a representative example, we present the results of a multi-mode reconstruction in Fig.~\\ref{fig: reconstruction}. The raw measurement data is about 2.63GB ($N=10000$, image size $188\\times188$, double precision), but since some scratch space is allocated by our code,\nfor this particular case more than 32 GB of memory is required if only one GPU were used, which is not possible even on the latest NVIDIA Tesla V100 product. Therefore, we use two and four V100 GPUs, and the corresponding timings for completing 50 iterations are 49.82s and 25.69s, respectively. Compared with the single-core CPU performance \n(8.8hr) on the same test machine,\nthe speedup is about 1235x with 4 GPUs\\footnote{The beamline machines have no job scheduler and can be accessed by all internal users, so a certain degree of performance degradation due to resource competition is expected\\label{HXNnote}.}. \n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{ptycho_34802}\n\t\\caption{Multi-mode ptychographical reconstruction. (a) The first four dominant probe modes. (b) Reconstructed object phase, in which the field of view is $1\\times 1 \\mu$m. The sample is gold nano-crystals, prepared by annealing 20 nm thick gold film at 800$^{\\circ}$C for 8 hours. Test machine: xf03id-srv5 in HXN, which has Intel Xeon CPU E5-2630 v4 @2.20GHz, 256GB physical memory, and four NVIDIA Tesla V100 GPUs\\textsuperscript{\\ref{HXNnote}}.\n\t\n\t}\n\t\\label{fig: reconstruction}\n\\end{figure}\n\n\n\\section{Conclusion and outlook}\n\n\nIn summary, we present a GPU implementation of single- and multi- mode DM algorithm for X-ray ptychography, which is already deployed at the HXN beamline of NSLS-II. The significant reduction of computation time from hours to tens of seconds is a game changer for the beamline scientists and users, allowing real-time feedback and shortening the analysis workflow. We emphasize that the GPU runtime is much shorter than the data acquisition time in diffraction measurements, therefore the reconstruction can be done effectively in real-time as the measurement progresses.\n\nWe are continuing tuning the performance to further reduce the computation time and\/or the memory footprint. 
For example, our preliminary tests indicate a slight improvement (about 10\\% to 30\\% speedup depending on the dataset size) on top of the achieved speedup by using page-locked host memory for data transfer, and we expect that this will benefit near-future experiments in which the object size is one order of magnitude larger in each dimension. Other possible routes include using single-precision floating point arithmetic, adapting CUDA-aware MPI, overlapping kernel execution and data transfer, and parallelizing the object update as discussed earlier. \nWe are also porting other ptychographic algorithms to GPU as part of a high-performance ptychographic toolbox,\nall of which will soon be open-sourced on \\url{https:\/\/github.com\/NSLS-II\/}. \n\n\n\n\n\\bibliographystyle{IEEEtran.bst}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nWe consider the \\emph{single-channel source separation} problem, in\nwhich we wish to separate a single aggregate signal into mixture of\nunobserved component signals. Traditionally, this problem has been\napproached in two ways: the \\emph{supervised} setting\n\\cite{kolter2010energy,roweis2001one,schmidt2006single},\nwhere we have access to training data with the true signal\nseparations and the \\emph{unsupervised} (or ``blind'')\nsetting \\cite{blumensath2005shift, davies2007source, lewicki2000learning,\n schmidt2006nonnegative}, where we have only the\naggregate signal. However,\nboth settings have potential drawbacks: for many problems, including\nenergy disaggregation---which looks to separate individual energy uses\nfrom a whole-home power signal \\cite{hart1992nonintrusive}---it can\nbe difficult to obtain training data with the true separated signals\nneeded for the supervised setting; in contrast, the unsupervised\nsetting is an ill-defined problem with arbitrarily many solutions, and\nthus algorithms are highly\ntask-dependent.\n\nIn this work, we propose an alternative approach that lies between\nthese two extremes. We propose a framework of \\emph{contextual\n supervision}, whereby along with the input signal to be separated, we\nprovide contextual features correlated with the\nunobserved component signals. In practice, we find that this is often\nmuch easier than providing a fully supervised training set; yet it\nalso allows for a well-defined problem, unlike the unsupervised\nsetting. The approach is a natural fit for energy disaggregation, since\nwe have strong correlations between energy usage and easily observed\ncontext---air conditioning spikes in hot summer months,\nlighting increases when there is a lack of sunlight, etc. We\nformulate our model directly as an optimization problem in which we jointly\nestimate these correlations along with the most likely source\nseparation. Theoretically, we show that when the contextual features are\nrelatively uncorrelated between different groups, we can recover the\ncorrect separation with high probability. We demonstrate that our\nmodel recovers the correct separation on synthetic examples resembling\nthe energy disaggregation task and apply the\nmethod to the task of separating sources of energy usage for thousands of homes\nover 4 years, a scale much larger than previously published efforts. 
The main\ncontributions of this\npaper are 1) the proposed contextually supervised setting and the\noptimization formulation; 2) the theoretical analysis showing that\naccurate separation only requires linear independence between features\nfor different signals; and 3) the application of this approach to the\nproblem of energy disaggregation, a task with significant potential to\nhelp inform consumers about their energy behavior, thereby increasing\nefficiency.\n\n\n\\section{Related work}\n\nAs mentioned above, work in single-channel source separation\nhas been separated along the lines of supervised and\nunsupervised algorithms, although several algorithms can be\napplied to either setting. A common strategy is to\nseparate the observed aggregate signal into a linear combination\nof several \\emph{bases}, where different bases correspond to different\ncomponents of the signal; algorithms such as Probabilistic Latent\nComponent Analysis (PLCA) \\cite{smaragdis2006probabilistic}, sparse coding\n\\cite{olshausen1997sparse}, and factorial hidden Markov models (FHMMs)\n\\cite{ghahramani1997factorial} all fall within this category, with\nthe differences concerning 1) how bases are represented and assigned\nto different signal components and 2) how the algorithm infers the\nactivation of the different bases given the aggregate signal. For\nexample, PLCA typically uses pre-defined basis functions (commonly\nFourier or Wavelet bases), with a probabilistic model for how sources\ngenerate different bases; sparse coding learns bases tuned to data while\nencouraging sparse activations;\nand FHMMs use hidden Markov models to represent each source. In the\nsupervised setting, one typically uses the individual signals to learn\nparameters for each set of bases (e.g., PLCA will learn which bases\nare typical for each signal), whereas unsupervised methods learn\nthrough an EM-like procedure or by maximizing some separation\ncriteria for the learned bases. The method we propose here is\nconceptually similar, but the nature of these bases is rather different: instead\nof fixed bases with changing activations, we require features that\neffectively generate time-varying bases and learn\nactivations that are constant over time.\n\nOrthogonal to this research, there has also been a great deal of work in\n\\emph{multi-channel} blind source separation problems, where we observe multiple\nmixings of the same sources (typically, as many mixings as there are signals)\nrather than in isolation.\nThese methods can exploit significantly more structure and algorithms like Independent Component Analysis\n\\cite{comon1994independent,bell1995information}\ncan separate signals with virtually no supervised information.\nHowever, when applied to the single-channel problem (when this is even\npossible), they typically perform substantially worse than\nmethods which exploit structure in the problem, such as those described above.\n\nFrom the applied point of view, algorithms for energy disaggregation\nhave received growing interest in recently years\n\\cite{kolter2010energy,zeifman.11,kolter2012approximate,parson2012non}.\nThis is an important task since\nmany studies have shown that consumers naturally adopt energy\nconserving behaviors when presented with a breakdown of their energy\nusage \\cite{darby.06,neenan.09,ehrhardt2010advanced}. 
Algorithmic\napproaches to disaggregation are appealing as they allow for these\ntypes of breakdowns, but existing disaggregation approaches virtually all use\nhigh-frequency sampling of the whole-building power signal (e.g. per second)\nrequiring the installation of custom monitoring hardware for data collection. In contrast,\nthis work focuses on disaggregation using data from ``smart meters'',\ncommunication-enabled power meters that are currently installed in\nmore than 32 million homes \\cite{iee.12}, but are limited to recording\nusage at low frequencies (every 15 minutes or hour), leading to a substantially\ndifferent set of challenges. Smart meters are relatively new and deployment is\nongoing, but due to the large amount of data available now and in the near\nfuture, successful disaggregation has the potential to have a profound impact on\nenergy efficiency.\n\n\\section{Optimization Formulation}\n\nWe begin by formulating the optimization problem for contextual source\nseparation. Formally, we assume there is some unknown matrix of $k$\ncomponent signals\n\\begin{equation}\nY \\in \\mathbb{R}^{T \\times k} = \\left [ \\begin{array}{cccc}\n\\mid & \\mid & & \\mid \\\\ y_1 & y_2 & \\cdots & y_k \\\\ \\mid & \\mid & &\n\\mid \\end{array} \\right ]\n\\end{equation}\nfrom which we observe the sum $\\bar{y} = \\sum_{i=1}^k y_i$. For\nexample, in our disaggregation setting, $y_i \\in \\mathbb{R}^T$ could denote a\npower trace (with $T$ total readings) for a single type of\nappliance, such as the air conditioning, lighting, or electronics, and\n$\\bar{y}$ denotes the sum of all these power signals, which we observe\nfrom a home's power meter.\n\nIn our proposed model, we represent each individual component signal\n$y_i$ as a linear function of some component-specific bases $X_i \\in\n\\mathbb{R}^{T \\times n_i}$\n\\begin{equation}\ny_i \\approx X_i \\theta_i\n\\end{equation}\nwhere $\\theta_i \\in \\mathbb{R}^{n_i}$ are the signal's\ncoefficients. The formal objective of our algorithm is: given the\naggregate signal $\\bar{y}$ and the component features $X_i$,\n$i=1,\\ldots,k$, estimate both the parameters $\\theta_i$ and the\nunknown source components $y_i$. We cast this as an optimization problem\n\\begin{equation}\n\\label{eq-opt}\n\\begin{split}\n\\minimize_{Y,\\theta} \\;\\; & \\sum_{i=1}^k \\left \\{ \\ell_i(y_i, X_i\\theta_i) +\ng_i(y_i) + h_i(\\theta_i) \\right \\} \\\\\n\\subjectto \\;\\; & \\sum_{i=1}^k y_i = \\bar y \\\\\n\\end{split}\n\\end{equation}\nwhere $\\ell_i : \\mathbb{R}^{T} \\times \\mathbb{R}^{T} \\rightarrow\n\\mathbb{R}$ is a loss function penalizing differences between the\n$i$th reconstructed signal and its linear representation; $g_i$ is a\nregularization term encoding the ``likely'' form of the signal $y_i$,\nindependent of the features; and $h_i$ is a regularization penalty on\n$\\theta_i$. Choosing $\\ell_i$, $g_i$ and $h_i$ to be convex functions\nresults in a convex optimization problem.\n\nA natural choice of loss function $\\ell_i$ is a norm\npenalizing the difference between the reconstructed signal and its features\n$\\|y_i - X_i \\theta_i\\|$, but since our formulation enables loss functions that\ndepend simultaneously on all $T$ values of the signal, we allow for more complex\nchoices as well. 
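For concreteness, \\eqref{eq-opt} maps directly onto an off-the-shelf convex solver. The sketch below is only an illustration of the formulation, not the implementation used in our experiments; it assumes the \\texttt{cvxpy} package, squared-$\\ell_2$ losses, an $\\ell_1$ difference penalty for $g_i$ and a small ridge term for $h_i$, with illustrative weights.
\\begin{verbatim}
# Illustrative prototype of the contextual-separation problem: k sources,
# squared-l2 data-fit terms, an l1 difference penalty as g_i, ridge as h_i.
# (A sketch for small T; long signals would need a specialized solver.)
import cvxpy as cp
import numpy as np

def contextual_separation(y_bar, X_list, lam=1.0, mu=1e-3):
    T = y_bar.shape[0]
    Y = [cp.Variable(T) for _ in X_list]
    theta = [cp.Variable(X.shape[1]) for X in X_list]
    D = np.diff(np.eye(T), axis=0)               # first-difference operator
    obj = 0
    for y_i, th_i, X_i in zip(Y, theta, X_list):
        obj += cp.sum_squares(y_i - X_i @ th_i)  # loss term l_i
        obj += lam * cp.norm1(D @ y_i)           # g_i: piecewise-constant prior
        obj += mu * cp.sum_squares(th_i)         # h_i: mild regularization
    problem = cp.Problem(cp.Minimize(obj), [sum(Y) == y_bar])
    problem.solve()
    return [y_i.value for y_i in Y], [th_i.value for th_i in theta]
\\end{verbatim}
The domain-specific choices of $\\ell_i$ and $g_i$ that we actually use are described next.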
For example\nin the energy disaggregation problem, air conditioning is correlated with\nhigh temperature but does not respond to outside temperature changes instantaneously;\nthermal mass and the varying occupancy in buildings often results in air\nconditioning usage that correlates with high temperature over some\nwindow (for instance, if no one is in a room during a period of high\ntemperature, we may not use electricity then, but need to ``make up'' for this\nlater when someone does enter the room). Thus, the loss function\n\\begin{equation}\n\\ell_i(y_i, X_i \\theta_i) = \\|(y_i - X_i \\theta_i)(I \\otimes\n1^T)\\|^2_2\n\\end{equation}\nwhich penalizes the aggregate difference of $y_i$ and $X_i\\theta_i$ over a\nsliding window, can be used to capture such dynamics. In many settings, it\nmay also make sense to use $\\ell_1$ or $\\ell_\\infty$ rather than $\\ell_2$ loss,\ndepending on the nature of the source signal.\n\nLikewise, since the objective term $g_i$ depends on all $T$ values of $y_i$, we\ncan use it to encode the likely dynamics of the source signal independent of\n$X_i\\theta_i$. For air conditioning and other single appliance types, we expect\ntransitions between on\/off\nstates which we can encode by penalizing the $\\ell_1$ norm of $Dy_i$ where\n$D$ is the linear difference operator subtracting $(y_i)_{t-1} - (y_i)_t$. For\nother types of energy consumption, for example groups of many electronic\nappliances, we expect the signal to have smoother\ndynamics and thus $\\ell_2$ loss is more appropriate. Finally, we also include\n$h_i$ for statistical regularization purposes---but for problems where $T \\gg\nn_i$, such as the ones we consider in our experiments, the choice of $h_i$ is\nless important.\n\n\\begin{comment}\nwe find it most natural not\nto assume a particular distribution for $w_i$ but to instead capture\nthis correspondence through a\nloss function, $f_i(y_i, X_i\\theta_i)$; common choices\nbeing a norm $\\|y_i - X_i\\theta_i\\|$, or more complex alternatives such as\n$\\|(x - y)(I \\otimes 1^T)\\|$ which allows some amount of ``shift'' in the\ncorrespondence between the signal and the input features over time. We\ncombine these functions in an optimization problem that jointly estimates both\nthe model parameters and the source signals\n where the additional term in the objective, $g_i(y_i)$, encodes the model of\n how the source evolves over time. For example, the $\\ell_2$ norm of difference\noperator, $\\|Dy\\|_2$, smoothes the signal over time while the difference operator\nwith the $\\ell_1$ norm tends to encourage a piecewise constant signal. The\noptimization framework makes it straightforward to include additional\nconstraints such as the $y_i \\ge 0$ which satisfies the requirement\nthat energy usage be\nnonnegative. It would similarly be straightforward to add additional\npenalty terms on $\\theta_i$ for statistical regularization, but the energy\ndisaggregation task that we consider in this work exploits the fact that $T \\gg\nn$ and thus for this application, regularization is unnecessary.\n\\end{comment}\n\n\\section{Theoretical analysis}\n\nNext, we consider the ability of our model to recover the true source\nsignals as $T$ grows while $k$ and $n_i$ remain fixed. For the purposes\nof this section only, we restrict our attention to the choice of\n$\\ell_2$ loss, no $g_i$ or $h_i$ terms, and\nGaussian noise (the extension to the sub-Gaussian case is\nstraightforward). 
We show that under this specialization of\nthe model, the optimization problem recovers the underlying signals\nat a rate dependent on the linear independence between blocks of input features\n$X_i$. In practice, the choice of $\\ell_i$, $g_i$ and\n$h_i$ is problem-specific, but $\\ell_2$ loss is a reasonable default\nand while simplifying the theoretical analysis dramatically, captures\nthe essential behavior of the model in the large $T$ regime.\n\nFormally, for this section we assume the source signals have Gaussian noise\n\\begin{equation}\n\\label{eq-model}\ny_i = X_i\\theta^\\star_i + w_i\n\\end{equation}\nfor some $\\theta^\\star_i \\in \\mathbb{R}^{n_i}$ and $w_i \\sim\n\\mathcal{N}(0, \\sigma_i^2 I)$. Under the choice of $\\ell_2$ loss, our\noptimization problem simplifies to\n\\begin{equation}\n\\begin{split}\n\\minimize_{Y,\\theta} \\;\\; &\\|Y1 - X\\theta\\|_2^2 \\\\\n\\subjectto \\;\\; & Y1 = \\bar y\n\\end{split}\n\\end{equation}\nwhere $1 \\in \\mathbb{R}^k$ is the all-ones vector, $\\theta \\in\n\\mathbb{R}^{n}$ is a concatenation of all the $\\theta_i$'s, $X \\in\n\\mathbb{R}^{T \\times n}$ is a concatenation of all the\n$X_i$'s and $n = \\sum_{i=1}^k n_i$ is the total number of features. In\nparticular, estimation of $\\theta$ is equivalent to\nleast-squares\n\\begin{equation}\n\\label{eq-theta-estimate}\n\\hat{\\theta} \\in \\mathbb{R}^{n} = \\arg \\min_{\\theta} \\|\\bar y - X\n\\theta\\|_2^2 = (X^T X)^{-1} X^T{\\bar y}.\n\\end{equation}\nSince each $y_i$ has it's own noise term, we can never expect to recover $y_i$\nexactly, but we can recover the true $\\theta^\\star$ with analysis that is the same\nas for standard linear regression. However, in the context of source separation,\nwe are interested in the recovery of the ``noiseless'' $y_i$, $X_i\n\\theta^\\star_i$, and thus in our analysis we consider how the root mean\nsquared error\n\\begin{equation}\n\\label{eq-err}\n\\mathrm{RMSE}(X_i\\hat{\\theta}_i) = \\sqrt{\\frac{1}{T} \\left\\|X_i\n \\hat{\\theta}_i - X_i \\theta_i^\\star \\right \\|_2^2}\n\\end{equation}\nvanishes for large $T$; indeed, a key feature of our analysis is to\nshow that we may recover the underlying signals faster than $\\theta^\\star_i$.\n\n\\begin{theorem}\n\\label{theorem-bounds}\nGiven data generated by the model \\eqref{eq-model}, and estimating\n$\\hat{\\theta}$ via \\eqref{eq-theta-estimate}, we have that\n\\begin{equation}\n\\label{eq-expected-value}\n\\mathbf{E}\\left[\\|X_i \\hat{\\theta}_i - X_i \\theta^\\star\\|_2^2 \\right ]\n= \\sigma^2 \\tr X_i^T X_i (X^T X)^{-1}_{ii} \\leq \\sigma^2 n_i\n\\rho_i\n\\end{equation}\nwhere $\\sigma^2 = \\sum_{i=1}^k \\sigma_i^2$ and $\\rho_i =\n\\lambda_{\\max}(X_i^T X_i (X^T X)^{-1}_{ii})$. Furthermore, for $\\delta\n\\leq 0.1$, with probability greater than $1 - \\delta$\n\\begin{equation}\n\\label{eq-sample-complexity}\n\\mathrm{RMSE}(X_i\\hat{\\theta}_i) \\leq \\sqrt{\\frac{4 \\sigma^2 n_i\n \\rho_i \\log (1\/\\delta)}{T}}.\n\\end{equation}\n\\end{theorem}\n\nA key quantity in this theorem is the matrix $X_i^T X_i (X^T\nX)^{-1}_{ii} \\in \\mathbb{R}^{n_i \\times n_i}$; $(X^T X)^{-1}_{ii}$\ndenotes the $i,i$ block of the full inverse $(X^TX)^{-1}$ (i.e., first\ninverting the joint covariance matrix of all the features, and then\ntaking the $i,i$ block), and this term provides a measure of the\nlinear independence between features corresponding to \\emph{different}\nsignals. 
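As a numerical aside, both $\\rho_i$ and the right-hand side of \\eqref{eq-sample-complexity} can be computed directly from the blocks $X_i$; the sketch below (an illustration, not part of our analysis code) does exactly this for a given total noise variance $\\sigma^2$ and confidence level $\\delta$.
\\begin{verbatim}
# Evaluate rho_i = lambda_max(X_i^T X_i (X^T X)^{-1}_{ii}) and the RMSE bound
# of Theorem 1 for each block; sigma2 and delta are assumed to be given.
import numpy as np

def recovery_bounds(X_blocks, sigma2, delta):
    X = np.hstack(X_blocks)
    T = X.shape[0]
    XtX_inv = np.linalg.inv(X.T @ X)
    out, start = [], 0
    for X_i in X_blocks:
        n_i = X_i.shape[1]
        blk = slice(start, start + n_i)
        M = X_i.T @ X_i @ XtX_inv[blk, blk]       # X_i^T X_i (X^T X)^{-1}_{ii}
        rho_i = np.max(np.linalg.eigvals(M).real) # largest eigenvalue
        rmse_bound = np.sqrt(4 * sigma2 * n_i * rho_i * np.log(1 / delta) / T)
        out.append((rho_i, rmse_bound))
        start += n_i
    return out
\\end{verbatim}
The quantity $\\rho_i$ itself has a natural interpretation in terms of correlation between features of \\emph{different} signals, as we now discuss.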
To see this, note that if features across different signals\nare orthogonal, $X^T X$ is block diagonal, and thus $X_i^T X_i (X^T\nX)^{-1}_{ii} = X_i^T X_i (X_i^T X_i)^{-1} = I$, so $\\rho_i = 1$.\nAlternatively, if two features provided for different signals\nare highly correlated, entries of $(X^TX)_{ii}^{-1}$ will have large\nmagnitude that is not canceled by $X_i^T X_i$ and $\\rho_i$ will be large. This\nformalizes an intuitive notion for\ncontextually supervised source separation: for recovering the\nunderlying signals, it does not matter if two features for the\n\\emph{same} signal are highly correlated (this contrasts to the case\nof recovering $\\theta^\\star$ itself which depends on all correlations), but two\ncorrelated signals for different features make estimation difficult;\nintuitively, if two very similar features are provided for two different source\nsignals, attribution becomes difficult. A particularly useful property of these\nbounds is that all terms can be computed using just $X_i$, so we can estimate\nrecovery rates when choosing our design matrix.\n\n\\subsection{Proofs}\n\nThe proof of Theorem \\ref{theorem-bounds} proceeds in two steps.\nFirst, using rules for linear transformations of Gaussian random\nvariables, we show that the quantity $X_i(\\hat{\\theta}_i -\n\\theta^\\star_i)$ is also (zero-mean) Gaussian, which immediately leads\nto \\eqref{eq-expected-value}. Second, we derive a tail bound on the\nprobability that $X_i(\\hat{\\theta}_i - \\theta^\\star_i)$ exceeds some\nthreshold, which leads to the sample complexity bound\n\\eqref{eq-sample-complexity}; because this quantity has a singular\ncovariance matrix, this requires a slightly specialized probability\nbound, given by the following lemma.\n\n\\begin{lemma}\n\\label{lemma-tail}\nSuppose $x \\in \\mathbb{R}^p \\sim \\mathcal{N}(0,\\Sigma)$ with\n$\\mathrm{rank}(\\Sigma) = n$. Then\n\\begin{equation}\nP\\left(\\|x\\|_2^2 \\geq t\\right) \\leq \\left (\\frac{t}{n \\lambda} \\right )^{n\/2}\n\\exp\\left \\{-\\frac{1}{2}(t\/\\lambda - n) \\right \\}\n\\label{eq-syn}\n\\end{equation}\nwhere $\\lambda$ is the largest eigenvalue of $\\Sigma$.\n\\end{lemma}\n\n\\begin{proof}\nBy Chernoff's bound\n\\begin{equation}\nP\\left(\\|x\\|_2^2 \\geq t\\right) \\leq\n\\frac{\\mathbf{E}\\left[e^{\\alpha \\|x\\|_2^2}\\right ]}{e^{\\alpha t}}.\n\\end{equation}\nfor any $\\alpha \\geq 0$. For any $\\epsilon > 0$, $z \\sim\n\\mathcal{N}(0, \\Sigma + \\epsilon I)$,\n\\begin{equation}\n\\begin{split}\n\\mathbf{E}\\left[e^{\\alpha \\|z\\|_2^2}\\right] & = (2\\pi)^{-p\/2}|\\Sigma + \\epsilon\n I|^{-1\/2} \\int \\exp\\left \\{ - \\frac{1}{2} z^T (\\Sigma + \\epsilon I)^{-1}\n z + \\alpha z^T z \\right \\} dz \\\\\n& = (2\\pi)^{-p\/2}|\\Sigma + \\epsilon\n I|^{-1\/2} \\int \\exp\\left \\{ - \\frac{1}{2} z^T (\\Sigma + \\epsilon\n I)^{-1}(I - 2 \\alpha (\\Sigma + \\epsilon I)^{-1})\n z \\right \\} dz \\\\\n& = (2\\pi)^{-p\/2}|\\Sigma + \\epsilon I|^{-1\/2} (2\\pi)^{p\/2}|\\Sigma +\n \\epsilon I|^{1\/2} |I - 2 \\alpha (\\Sigma + \\epsilon I)|^{-1\/2}\n\\end{split}\n\\end{equation}\nso taking the limit $\\epsilon \\rightarrow 0$, we have that\n$\\mathbf{E}[e^{\\alpha \\|x\\|_2^2}] = |I - 2 \\alpha\n\\Sigma|^{-1\/2}$. 
Since $\\Sigma$ has only $n$\nnonzero eigenvalues,\n\\begin{equation}\n|I - 2 \\alpha \\Sigma| = \\prod_{i=1}^n (1 - 2 \\alpha \\lambda_i)\n\\geq (1 - 2 \\alpha \\lambda)^n\n\\end{equation}\nand so\n\\begin{equation}\nP\\left(\\|x\\|_2^2 \\geq t\\right) \\leq \\frac{1}{(1 - 2 \\alpha\n \\lambda)^{n\/2}e^{\\alpha t}}.\n\\end{equation}\nMinimizing this expression over $\\alpha$ gives $\\alpha = \\frac{t - n\n \\lambda}{2t\\lambda}$ and substituting this into the equation\nabove gives the desired bound.\n\\end{proof}\n\n\nTo write the problem more compactly, we define the block matrix $W \\in\n\\mathbb{R}^{T \\times k}$ with columns $w_i$, and define the\n``block-diagonalization'' operator $B : \\mathbb{R}^{n} \\rightarrow\n\\mathbb{R}^{n \\times k}$ as\n\\begin{equation}\nB(\\theta) = \\left [ \\begin{array}{cccc}\n\\theta_1 & 0 & \\cdots & 0 \\\\\n0 & \\theta_2 & \\cdots & 0 \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\n0 & 0 & \\cdots & \\theta_k \\end{array} \\right ].\n\\end{equation}\n\n\\begin{proof}\n(of Theorem \\ref{theorem-bounds}) Since $Y = X B(\\theta^\\star) + W$,\n\\begin{equation}\n\\begin{split}\nX B(\\hat{\\theta}) & = X B\\left((X^T X)^{-1} X^T (X B(\\theta^\\star) +\nW)1 \\right) \\\\ & = X B(B(\\theta^\\star)) 1 + X B\\left((X^T X)^{-1} X^T W 1\n\\right) \\\\ & = X B(\\theta^\\star) + X B\\left((X^T X)^{-1} X^T W 1\n\\right)\n\\end{split}\n\\end{equation}\nFor simplicity of notation we also denote\n\\begin{equation}\nu \\in \\mathbb{R}^{n} \\equiv (X^T X)^{-1} X^T W 1.\n\\end{equation}\nThus\n\\begin{equation}\nX B(\\theta^\\star) - X B(\\hat{\\theta}) = X B(u).\n\\end{equation}\nNow, using rules for Gaussian random variables under linear transformations\n$W 1 \\sim \\mathcal{N}(0, \\sigma^2 I_T)$ and so $u \\sim \\mathcal{N}(0,\n\\sigma^2(X^T X)^{-1})$. Finally, partioning $u_1, u_2, \\ldots,\nu_k$ conformally with $\\theta$,\n\\begin{equation}\nX_i \\theta^\\star - X_i \\hat{\\theta}_i = X_i u_i \\sim \\mathcal{N}(0,\n\\sigma^2 X_i(X^T X)^{-1}_{ii} X_i^T)\n\\end{equation}\nso\n\\begin{equation}\n\\mathbf{E} \\left [\\left \\|X_i \\theta^\\star - X_i \\hat{\\theta}_i \\right\n \\|_2^2 \\right ] = \\sigma^2 \\tr X_i^T X_i(X^T X)^{-1}_{ii}.\n\\end{equation}\n\nSince $\\sigma^2 X_i(X^T X)^{-1}_{ii}X_i^T$ is a rank $n_i$ matrix\nwith maximum eigenvalue equal to $\\sigma^2 \\rho_i$, applying\nLemma \\ref{lemma-tail} above gives\n\\begin{equation}\nP\\left ( \\mathrm{RMSE}(X_i\\hat{\\theta}_i) \\geq \\epsilon \\right) =\nP\\left ( \\|X_iu_i\\|^2_2 \\geq T \\epsilon^2 \\right) \\leq \\left\n (\\frac{T \\epsilon^2}{n_i \\sigma^2 \\rho_i} \\right )^{n_i\/2} \\exp\\left\n \\{-\\frac{1}{2}\\left (\\frac{T \\epsilon^2}{\\sigma^2 \\rho_i} -\n n_i\\right) \\right \\}.\n\\end{equation}\nSetting the right hand side equal to $\\delta$ and solving for\n$\\epsilon$ gives\n\\begin{equation}\n\\epsilon = \\sqrt{\\frac{-W(-\\delta^{2\/n}\/e) n_i \\rho_i \\sigma^2}{T}}\n\\end{equation}\nwhere $W$ denotes the Lambert $W$ function (the inverse of $f(x) = x\ne^x$). 
The theorem follows by noting that $-W(-\\delta^{2\/n}\/e)\n\\leq 4 \\log \\frac{1}{\\delta}$ for all $n \\geq 1$ when $\\delta \\leq\n0.1$, with both quantities always positive in this range (note that\nleaving the $W$ term in the bound can be substantially tighter in some\ncases).\n\\end{proof}\n\n\n\\section{Experimental results}\n\n\\begin{figure}\n\\includegraphics{recovery_T}\n\\includegraphics{recovery_rho}\n\\includegraphics{recovery_comparison}\n\\caption{Comparing 100 random experiments with the theoretical\n mean and 90\\% upper bound for increasing $T$ (left); fixing $T = 500$ and\n varying $\\rho_i$ (center); and compared to the recovery of\n $\\theta_1$ directly (right).}\n\\label{fig-syn1}\n\\end{figure}\n\nIn this section we evaluate contextual supervision on synthetic\ndata and apply in to disaggregate smart meter data collected from thousands of\nhomes. Since labeled data is unavailable, we design a synthetic\ndisaggregation task similar to energy disaggregation from smart meters in order\nto evaluate our performance on this task quantitatively. Here we explore the\nchoice of loss functions and demonstrate that contextual supervision\ndramatically outperforms the unsupervised approach.\n\n\\textbf{Rates for signal recovery}. We begin with a set of experiments examining\nthe ability of the model to recover the\nunderlying source signals as predicted by our theoretical analysis. In these\nexperiments, we consider the problem of separating two source signals with $X_i\n\\in \\mathbb{R}^{T \\times 16}$\nsampled independently from a zero-mean Normal with covariance $I + (1-\\mu)11^T$; we sample $\\theta_i^\\star$ uniformly from $[-1,1]$ and $Y \\sim\n\\mathcal{N}(X\\theta^\\star, I)$. Setting $\\mu = 0.01$ causes the\nfeatures for each signal to be highly correlated with each other but since $X_i$\nare sampled independently, not highly correlated across signals. In Figure \\ref{fig-syn1},\nwe see that MSE vanishes as $T$ grows; when comparing these experimental results to\nthe theoretical mean and 90\\% upper bound, we see that at least in these\nexperiments, the bound is somewhat loose for large values of $\\rho_i$. We are\nalso able to recover $\\theta^\\star$ (which is expected since the least-squares\nestimator $\\hat \\theta$ is consistent), but the rate is much slower due to\nhigh correlations in $X_i$.\n\n\\textbf{Disaggregation of synthetic data}. The next set of experiments considers\na synthetic generation process that more\nclosely mimics signals that we encounter in energy disaggregation. The process\ndescribed visually in Figure\n\\ref{fig-syn2} (top) begins with two signals, the first is smoothly varying over\ntime while the other is a repeating step function\n\\begin{equation}\nX_1(t) = \\sin(2\\pi t \/ \\tau_1) + 1, \\quad X_2(t) = I(t \\bmod \\tau_2 <\n\\tau_2\/2)\n\\end{equation}\nwhere $I(\\cdot)$ is the indicator function and $\\tau_1$, $\\tau_2$ are the\nperiod of each signal. We also use\ntwo different noise models: for the smooth signal we sample Gaussian noise from\n$\\mathcal{N}(0, \\sigma^2)$ while for the step function,\nwe sample a distribution with a point mass at zero, uniform probability\nover $[1, 0) \\cup (0, 1]$ and correlate it across time by summing over a\nwindow of size $\\beta$. 
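For completeness, the full generation process just described (including the nonnegativity constraint and the summation into a single input, described in the next sentence) can be sketched as follows; the concrete parameter values and the weight of the point mass at zero are illustrative assumptions rather than necessarily the settings used in our experiments.
\\begin{verbatim}
# Illustrative regeneration of the synthetic disaggregation data; parameter
# values and the point-mass weight are assumptions made for this sketch.
import numpy as np

T, tau1, tau2, sigma, beta = 50000, 200, 400, 0.3, 10
t = np.arange(T)
X1 = np.sin(2 * np.pi * t / tau1) + 1            # smoothly varying signal
X2 = ((t % tau2) < tau2 / 2).astype(float)       # repeating step function

rng = np.random.default_rng(0)
w1 = rng.normal(0.0, sigma, T)                   # Gaussian noise for X1
raw = np.where(rng.random(T) < 0.5, 0.0,         # point mass at zero ...
               rng.uniform(-1.0, 1.0, T))        # ... otherwise uniform
w2 = np.convolve(raw, np.ones(beta), mode="same")  # correlate over a window

y1 = np.maximum(X1 + w1, 0.0)                    # constrain to be nonnegative
y2 = np.maximum(X2 + w2, 0.0)
y_bar = y1 + y2                                  # observed aggregate input
\\end{verbatim}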
Finally, we constrain both noisy signals to be\nnonnegative and sum them to generate our input.\n\n\\begin{figure}\n\\includegraphics{synthetic_x}\n\\includegraphics{synthetic_noise}\n\\includegraphics{synthetic_y}\n\\includegraphics{synthetic_y1_comparison}\n\\includegraphics{synthetic_y2_comparison}\n\\caption{(top) Synthetic data generation process starting with two underlying\n signals (left), corrupted by different noise models (center), and constrained\n to be nonnegative\n and sum'd to give the observed input (right); (bottom) comparison of the true noisy\n signal and estimates.}\n\\label{fig-syn2}\n\\end{figure}\n\nWe generate data under this model for $T = 50000$ time points and consider\nincreasingly specialized optimization objectives while measuring the error in\nrecovering $Y^\\star = XD(\\theta^\\star) + W$, the underlying source signals\ncorrupted by noise. As can be seen in Table \\ref{tab-syn2-mse}, by using $\\ell_1$\nloss for $y_2$ and adding $g_i(y_i)$ terms\npenalizing $\\|Dy_1\\|_2^2$ and $\\|Dy_2\\|_1$, error decreases by 25\\% over\njust $\\ell_2$ loss alone; in Figure \\ref{fig-syn2}, we observe that our\nestimations recovers the true source signals closely with the $g_i$ terms\nhelping to capture the dynamics of the noise model for $w_2$.\n\nAs a baseline for this result, we compare to an unsupervised method, nonnegative\nsparse coding \\cite{hoyer2002non}. We apply sparse coding\nby segmenting the input signal into $1000$ examples of $50$ time points\n(1\/4 the period of the sine wave, $X_1(t)$) and fit a sparse\nmodel of 200 basis functions. We report the best possible source separation by\nassigning each basis function according to an oracle measuring\ncorrelation with the true source signal and using the best value over a grid of\nhyperparameters; however, performance\nis still significantly worse than the contextually supervised method which makes\nexplicit use of additional information.\n\\begin{table}\n\\label{tab-syn2-mse}\n\\centering\n\\caption{Comparison of models for source separation on synthetic data.}\n\\begin{tabular}{|l|r|}\n\\hline\n\\textbf{Model} & \\textbf{RMSE} \\\\\n\\hline\nNonnegative sparse coding & 0.4035 \\\\\n$\\ell_2$ loss for $y_1$ & 0.1640 \\\\\n$\\ell_2$ loss for $y_1$, $\\ell_1$ loss for $y_2$ & 0.1520 \\\\\n$\\ell_2$ loss for $y_1$, $\\ell_1$ loss for $y_2$ and $g_i$ penalizing $\\|Dy_i\\|$ & \\textbf{0.1217} \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\textbf{Energy disaggregation on smart meter data}. Next, we turn to the motivating problem for our model: disaggregating\nlarge-scale low resolution smart meter data into its component sources of\nconsumption. Our dataset comes from PG\\&E and was collected by the utility from\ncustomers in Northern California who had smart meters between 1\/2\/2008\nand 12\/31\/2011. According to estimations based on survey data, heating and cooling (air\nconditioning and refrigerators) comprise over 39\\% of total consumer\nelectricity usage \\cite{recs} and thus are dominant uses for\nconsumers. Clearly, we expect temperature to have a strong correlation with\nthese uses and thus we provide contextual supervision in the form of\ntemperature information. 
The PG\\&E data is anonymized, but the\nlocation of individual customers is identified at the census block\nlevel; we use this information to construct a parallel temperature\ndataset using data from Weather Underground\n(\\url{http:\/\/www.wunderground.com\/}).\n\nThe exact specification of our energy disaggregation model is given in Table\n\\ref{tab-dis-features}---we capture the non-linear dependence on temperature\nwith radial-basis functions (RBFs), include a ``Base'' category which models\nenergy used as a function of time of day, and featureless ``Other'' category\nrepresenting end-uses not explicitly modeled. For simplicity, we penalize each\ncategory's deviations from the model using $\\ell_1$ loss; but for heating and\ncooling we first multiply by a smoothing matrix $S_n$ ($1$'s on\nthe diagonal and $n$ super diagonals) capturing the thermal mass\ninherent in heating and cooling: we expect energy usage to correlate\nwith temperature over a window of time, not immediately. Finally, we use $g_i(y_i)$ and the\ndifference operator to encode our intuition of how energy consumption in each\ncategory evolves over time. The ``Base'' category represents an aggregation of\nmany sources of consumption and which we expect to evolve smoothly over time,\nwhile the on\/off behavior in other categories is best represented by the\n$\\ell_1$ penalty.\n\nWe present the result of our model at two time scales, starting with Figure 3\n(top), where we show aggregate energy consumption across all homes at the\nweek level to demonstrate basic trends in usage. Quantitatively, our model\nassigns 15.6\\% of energy consumption to ``Cooling'' and 7.7\\% to\n``Heating'', which is reasonably close to estimations based on survey\ndata \\cite{recs} (10.4\\% for air conditioning and 5.4\\% for space heating). We\nhave deliberately kept the model simple and thus our higher estimations are\nlikely due to conflating other temperature-related energy usages, such as refrigerators and\nwater heating. 
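Returning briefly to the model specification: the temperature features and the smoothing matrix $S_n$ are simple to construct. The sketch below shows one possible construction; the RBF centers, width and thresholding are illustrative assumptions, not necessarily the exact choices used in our experiments.
\\begin{verbatim}
# One possible construction of the RBF temperature features (Cooling) and of
# the smoothing matrix S_n; centers, width and threshold are assumptions.
import numpy as np

def rbf_features(temps, centers, width):
    # temps: length-T vector of outside temperatures (degrees F)
    return np.exp(-((temps[:, None] - centers[None, :]) ** 2)
                  / (2 * width ** 2))

def smoothing_matrix(T, n):
    # ones on the diagonal and on the n superdiagonals
    S = np.zeros((T, T))
    for k in range(n + 1):
        S += np.eye(T, k=k)
    return S

rng = np.random.default_rng(0)
temps = rng.uniform(40.0, 100.0, 24 * 7)          # hourly temperatures, one week
X_cool = rbf_features(temps, np.arange(75, 101, 5), 5.0)
X_cool[temps <= 70.0] = 0.0                       # active only above 70 F
S2 = smoothing_matrix(len(temps), 2)              # S_2, used for cooling/heating
\\end{verbatim}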
In Figure \\ref{fig-dis} (bottom), we present the\nresults of the model in disaggregating energy usage for a single hot summer\nweek where the majority of energy usage is estimated to be cooling due\nto the context provided by high temperature.\n\n\\begin{table}\n\\label{tab-dis-features}\n\\centering\n\\caption{Model specification for contextually supervised energy disaggregation.}\n\\begin{tabular}{|l|l|l|l|}\n\\hline\n\\textbf{Category} & \\textbf{Features} & \\textbf{$\\ell_i$} & \\textbf{$g_i$} \\\\\n\\hline\nBase & Hour of day & $\\|y_1 - X_1\\theta_1 \\|_1 $ & $\\|Dy_1 \\|_2^2$ \\\\\nCooling & RBFs over temperatures $>70^\\circ\\mathrm{F}$ & $\\|S_2(y_2 - X_2\\theta_2) \\|_1$ &\n$0.1 \\times \\| Dy_2\\|_1$ \\\\\nHeating & RBFs over temperatures $<50^\\circ\\mathrm{F}$ & $\\|S_2(y_3 - X_3\\theta_3) \\|_1$ &\n$0.1 \\times \\| Dy_3 \\|_1$ \\\\\nOther & None & $\\| y_4 \\|_1 $ & $0.05 \\times \\| Dy_4 \\|_1$ \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{figure}\n\\includegraphics{homes}\n\\includegraphics{single_home_model}\n\\includegraphics{single_home}\n\\caption{Disaggregated energy usage for 1276 homes over nearly four years shown weekly\n (top); and for a single home near Fresno, California over the week starting 6\/28\/2010 (bottom) with\n estimated correlations $X_i\\hat{\\theta}_i$ (left) and estimated energy uses $\\hat{y}_i$ (right).}\n\\label{fig-dis}\n\\end{figure}\n\n\\section{Conclusion and discussion}\n\nWe believe the advances in this work, formalizing contextually supervised source\nseparation and theoretically analyzing the requirements for accurate source\nsignal recovery, will enable new applications of single-channel source separation\nin domains with large amounts of data but no access to explicit supervision. In\nenergy disaggregation, this approach has allowed us to reasonably separate\nsources of consumption from extremely low-frequency smart meter data. This a\nsignificant advancement with the potential to drive increases in energy\nefficiency through programs that expose this information to consumers and\nautomated systems for demand response. Developing algorithms that use this\ninformation to achieve these goals is an interesting direction for\nfuture work.\n\nAnother interesting direction is the explicit connection of our large-scale\nlow-resolution methods with the more sophisticated appliance models developed on\nsmaller supervised datasets with high-frequency measurements. As a few examples, a\nsmall amount of supervised information would enable us to calibrate\nour models automatically, while a semi-supervised approach would enable\nspreading some of the benefit of high-resolution load monitoring to the vast\nnumber of homes where only smart meter data is available.\n\n\\bibliographystyle{abbrv}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nTesting the predictions of the Cabibbo-Kobayashi-Maskawa (CKM) mechanism for violation of the combined charge-parity ($CP$) symmetry~\\cite{C,KM} is one of the major precision tests~\\cite{JPsiKs_theo,ccK_Belle,BigiSander2} of the flavor sector of the Standard-Model (SM). The Belle experiment at KEK significantly contributed to the validation of the CKM scheme and to constraining the unitarity triangle for $B$ decays to its current precision. Up-to-date, the SM describes the data impressively well, but there is still room for a deviation from unitarity, which clearly would point towards physics beyond the SM. 
\nThese proceedings give a summary of the experimental status of measurements related to two of the CKM angles defined from the CKM matrix elements as: $\\ensuremath{\\phi_{2}} \\equiv \\arg(V_{td}V^{*}_{tb})\/(-V_{ud}V^{*}_{ub})$ and $\\phi_3 \\equiv \\arg(V_{ud}V^*_{ub})\/(-V_{cd}V^*_{cb})$. All measurements presented here are based on Belle's final data set of $772 \\times 10^{6}$ $B\\bar{B}$ pairs. \n\n\n\\section{The CKM Angle \\ensuremath{\\phi_{2}}}\n\nThe CKM angle $\\phi_2$ can be determined by measuring the time-dependent asymmetry between \\ensuremath{B^{0}}\\ and \\ensuremath{\\bar{B}^{0}}\\ decays into a common $CP$ eigenstate~\\cite{CP} made out of unflavored quarks ($\\bar{b} \\rightarrow \\bar{u}u\\bar{d}$ quark transitions). Examples are the decays $\\ensuremath{B^{0}} \\rightarrow \\pi\\pi$, $\\rho\\pi$, $\\rho\\rho$ and $\\ensuremath{a_{1}(1260)}\\pi$~\\cite{pipi_Belle,pipi_BaBar,pipi_LHCb,rhoprhom_BaBar, rhoprhom_Belle1, rhoprhom_Belle2,belle_r0r0,r0r0Babar}. In the decay sequence, $\\ensuremath{\\Upsilon(4S)} \\rightarrow B_{CP}B_{\\rm Tag} \\rightarrow f_{CP}f_{\\rm Tag}$, where one of the $B$ mesons decays into a $CP$ eigenstate $f_{CP}$ at a time $t_{CP}$ and the other decays into a flavor specific final state $f_{\\rm Tag}$ at a time $t_{\\rm Tag}$, the time-dependent decay rate is given by\n\\begin{equation}\n {P}(\\Delta t, q) = \\frac{e^{-|\\ensuremath{\\Delta t}|\/\\tau_{B^0}}}{4\\tau_{B^0}} \\bigg[ 1+q(\\ensuremath{{A}_{CP}}\\cos\\Delta m_d \\ensuremath{\\Delta t} + \\ensuremath{{S}_{CP}}\\sin\\Delta m_d \\ensuremath{\\Delta t}) \\bigg],\n\\label{eq1}\n\\end{equation}\nwhere $\\ensuremath{\\Delta t} \\equiv t_{CP}- t_{\\rm Tag}$ is the lifetime difference between the two $B$ mesons, $\\Delta m_d$ is the mass difference between the mass eigenstates $B_{H}$ and $B_{L}$ and $q = +1 (-1)$ for $B_{\\rm Tag} = \\ensuremath{B^{0}} (\\ensuremath{\\bar{B}^{0}})$. The $CP$ asymmetry is given by \n\\begin{equation}\n\\frac{N(\\bar{B}\\to f_{CP}) - N(B\\to f_{CP})}{N(\\bar{B}\\to f_{CP}) + N(B\\to f_{CP})},\n\\label{eq_asym}\n\\end{equation}\nwhere $ N(B(\\bar{B})\\to f_{CP})$ is the number of events of a $B(\\bar{B})$ decaying to $f_{CP}$, the asymmetry can be time-dependent.\n The parameters \\ensuremath{{A}_{CP}}\\ and \\ensuremath{{S}_{CP}}\\ describe direct and mixing-induced $CP$ violation, respectively~\\footnote{There exists an alternate notation where $C_{CP} = -\\ensuremath{{A}_{CP}}$.}. \\\\\n At tree level one expects $\\ensuremath{{A}_{CP}}=0$ and $\\ensuremath{{S}_{CP}}=\\sin2\\ensuremath{\\phi_{2}}$ for the above mentioned decays sensitive to $\\phi_2$. 
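As a purely numerical illustration of Equ.~\\ref{eq1} (not part of the measurement itself), the decay-rate distributions for the two tag flavors and the resulting time-dependent asymmetry can be evaluated as follows; the lifetime and mass difference are approximate world averages, and the values of \\ensuremath{{S}_{CP}}\\ and \\ensuremath{{A}_{CP}}\\ are merely illustrative.
\\begin{verbatim}
# Numerical illustration of the decay-rate formula above: PDFs for q = +1 and
# q = -1 and the resulting time-dependent asymmetry.  tau and dm are rounded
# world averages; S and A are illustrative (at tree level A = 0, S = sin 2phi2).
import numpy as np

tau, dm = 1.52, 0.507     # B0 lifetime [ps], mass difference [1/ps] (approx.)
S, A = -0.13, 0.00        # illustrative CP-violation parameters

def rate(dt, q):
    return (np.exp(-np.abs(dt) / tau) / (4.0 * tau)
            * (1.0 + q * (A * np.cos(dm * dt) + S * np.sin(dm * dt))))

dt = np.linspace(-10.0, 10.0, 401)
asym = (rate(dt, +1) - rate(dt, -1)) / (rate(dt, +1) + rate(dt, -1))
# asym equals A*cos(dm*dt) + S*sin(dm*dt); with A = 0 it is S*sin(dm*dt)
\\end{verbatim}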
Possible penguin contributions can give rise of direct $CP$ violation, $\\ensuremath{{A}_{CP}}\\neq 0$ and also pollute the measurement of \\ensuremath{\\phi_{2}}, $\\ensuremath{{S}_{CP}}=\\sqrt{1-\\ensuremath{{A}_{CP}}^{2}}\\sin(2\\ensuremath{\\phi_{2}}^{eff})$ where the observed $\\ensuremath{\\phi_{2}}^{eff} \\equiv \\ensuremath{\\phi_{2}} - \\Delta \\ensuremath{\\phi_{2}}$ is shifted by $\\Delta \\phi_2$ due to different weak and strong phases from additional non-leading contributions.\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=.45\\textwidth]{pipi_feyn.eps}\n \\includegraphics[width=.45\\textwidth]{pippim_peng.eps}\n \\put(-380,1){\\scriptsize a)}\n \\put(-180,1){\\scriptsize b)}\\\\\n \\caption{a) leading order and b) penguin feynman diagrams for color-allowed $b \\rightarrow u \\bar{u} d$ transitions.}\n \\label{fig_pipi_feyn}\n\\end{figure}\nDespite this, it is possible to determine $\\Delta \\ensuremath{\\phi_{2}}$ in $\\ensuremath{B^{0}} \\rightarrow h^{+} h^{-}$ with an $SU(2)$ isospin analysis by considering the set of three $B \\rightarrow hh$ decays where the $hh$s are either two pions or two longitudinally polarized $\\rho s$, related via isospin symmetry~\\cite{iso}. The $B \\rightarrow h^{i} h^{j}$ amplitudes $A_{ij}$ obey the triangle relations,\n\\begin{equation}\n A_{+0} = \\frac{1}{\\sqrt{2}}A_{+-} + A_{00}, \\;\\;\\;\\; \\bar{A}_{-0} = \\frac{1}{\\sqrt{2}}\\bar{A}_{+-} + \\bar{A}_{00}.\n \\label{eq_iso}\n\\end{equation}\nIsospin arguments demonstrate that $\\ensuremath{B^{+}} \\rightarrow h^{+} h^{0}$ is a pure first-order mode in the limit of neglecting electroweak penguins, thus these triangles share the same base, $A_{+0}=\\bar{A}_{-0}$, see Fig.~\\ref{fig_iso} for an illustration. $\\Delta \\ensuremath{\\phi_{2}}$ can then be determined from the difference between the two triangles. This method has an inherent four-fold discrete ambiguity in the determination of $\\sin(2\\ensuremath{\\phi_{2}})$.\n\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=.5\\columnwidth]{iso_anal2.eps}\n \\caption{A sketch of the isospin triangle.}\n \\label{fig_iso}\n\\end{figure}\n\n\\subsection{The Decay $B^0\\to \\rho^+\\rho^-$}\n Having a decay into two vector particles, an angular analysis is performed to separate the $CP$-even states from the $CP$-odd states for the isopsin analysis. Longitudinal polarized states correspond to pure $CP$-even states and their fraction, $f_L$, is obtained from a fit to the cosine of helicity angles, $\\Theta_{\\rm H}^{\\pm}$, which are defined as sketched in Fig.~\\ref{fig_hel}. Previous measurements show that $f_L$ is consistent with one~\\cite{rhoprhom_BaBar, rhoprhom_Belle1, rhoprhom_Belle2}, consequently the isospin analysis is performed for longitudinal polarization only.\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[height=!,width=0.6\\columnwidth]{helicity.eps}\n \\caption{The helicity angles $\\cos\\Theta_{\\rm H}^{\\pm}$; each one is defined in its $\\rho$ rest frame.}\n \\label{fig_hel}\n\\end{figure}\nIn addition to combinatorial background, the presence of multiple, largely unknown backgrounds with the same four-pion final state as $B^0\\to \\rho^+\\rho^-$ make this decay quite difficult to isolate and interference between the various $4\\pi$ modes needs to be considered. Besides updating to the full data set, the branching fraction, the fraction of longitudinally polarized $\\rho$ mesons on this decay and the $CP$ violating parameters are obtained simultaneously from the fit to the data. 
In the fit, Belle uses the missing energy $(\\Delta E \\equiv E_{B}^{*}-E_{\\rm beam}^*)$, the beam-constraint $B$ mass ($M_{\\rm bc}=\\sqrt{E_{\\rm beam}^{*2} - p_{B}^{*2}})$, the masses and helicity angles of the two reconstructed $\\rho^\\pm$ mesons, a fisher discriminant to separate the jet-like $e^+e^-\\to q\\bar{q},\\; (q=u,\\;d,\\;s,\\;c)$ background from the spherical $B\\bar{B}$ decays and the $\\Delta t$ distribution for the two flavors of $B_{\\rm Tag}$. \nThey obtain\n\\begin{itemize}\n\\item[]${\\cal B}(B^0\\to\\rho^+\\rho^-)=(28.3\\pm 1.5\\;(\\rm stat) \\pm 1.4\\;(\\rm syst))\\times 10^{-6},$\n\\item[]$f_L = 0.988\\pm\\; 0.012\\;(\\rm stat ) \\pm\\;0.023(\\rm syst),$\n\\item[]${S}_{CP} =-0.13\\pm0.15(\\rm stat ) \\pm0.05\\;(\\rm syst)$ and \n\\item[]${A}_{CP} =0.00\\pm0.10(\\rm stat ) \\pm0.06\\;(\\rm syst)$.\n\\end{itemize}\nThis is currently the most precise measurement of the branching fraction and polarization of $B\\to \\rho^+\\rho^-$ decays as well as the tightest constraint on $CP$ violation in this mode.\n Fig.~\\ref{p_r0r0} shows the projections onto $\\Delta E$, $\\cos\\Theta_{\\rm H}^{+}$ and onto $\\Delta t$ for the two flavor of $B_{\\rm Tag}$, each with the fit result on top.\n\\begin{figure}[htb]\n \\centering\n\\includegraphics[height=!,width=0.32\\columnwidth]{sigReg_dE.eps} \n\\includegraphics[height=!,width=0.32\\columnwidth]{sigReg_H1.eps} \n\\includegraphics[height=!,width=0.32\\columnwidth]{sigReg_dt_asym.eps} \n\\put(-50,105){\\textcolor{blue}{\\footnotesize $q=+1$}}\\put(-50,95){\\textcolor{red}{\\footnotesize $q=-1$}} \n\\put(-195,110){\\footnotesize \\textcolor{red}{signal}} \\put(-195,102){\\footnotesize \\textcolor{cyan}{$B\\to 4\\pi$}}\\put(-195,94){\\footnotesize \\textcolor{mygreen}{$B\\bar{B}$}}\\\\\n \\caption{ Signal enhanced distributions of (a) $\\Delta E$, (b) $\\cos\\Theta_H$, and (c) $\\Delta t$ for the two flavors of $B_{\\rm Tag}$ ($B_{\\rm Tag} = B^0$ for $q=+1$) with the fit result on top. The shaded red area is the $B^0\\to\\rho^0\\rho^0$ contribution. Furthermore, all $B$ decays with a four pion final state are shown in cyan, the entire ($B\\bar{B}$) background in dashed (dash-dotted dark) green and the full PDF in blue.}\n \\label{p_r0r0}\n\\end{figure}\nThe results from this measurement are used in an isospin analysis together with other Belle measurements~\\cite{belle_r0r0, rpr0_Belle} (longitudinal polarization only). Fig.~\\ref{fig_phi2}~b) shows the \\ensuremath{\\phi_{2}}\\ scan from the isospin analysis, the constraint most consistent with other measurements of the CKM triangle is $\\ensuremath{\\phi_{2}} = (93.7 \\pm 10.6)^{\\circ}$ and the penguin pollution is consistent with zero: $\\Delta\\phi_2 = (0.0\\pm9.6)^{\\circ}$. In the $B\\to\\rho\\rho$ system, the relatively small amplitude of $B^0\\to\\rho^0\\rho^0$ makes the isospin triangles flat and therefore the isospin analysis has no ambiguity.\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[height=!,width=0.65\\columnwidth]{rho_phi2_CL_rprm2.eps}\n \\put(-15, 70){\\small \\textcolor{black}{ $68\\%$}} \n \\put(-68, 120){\\small \\textcolor{black}{\\large $\\rho\\rho_{LP}$}}\\\\ \n \\caption{Probability scan of $\\phi_2$ in the $B\\to\\rho\\rho$ system.}\n \\label{fig_phi2}\n\\end{figure}\n\n\\subsection{The Decay $B^0\\to\\ensuremath{\\pi^{0}}\\piz$}\nThis decay is an important input for the isospin analysis in the $B\\to\\pi\\pi$ system. Being reconstructed from four $\\gamma$s makes this measurement experimentally quite challenging. 
A fit to \\ensuremath{\\Delta E}, \\ensuremath{M_{\\rm bc}}\\ and a fisher discriminant $T_C$ is performed and a preliminary branching fraction of \n\\begin{itemize}\n\\item[] ${\\cal B}(B\\to\\ensuremath{\\pi^{0}}\\piz) = (0.9\\pm 0.12\\;{\\rm stat}\\pm 0.10\\;{\\rm syst})\\times 10^{-6}$\n\\end{itemize}\n is obtained. Signal enhanced projections are shown in Fig.~\\ref{p_p0p0}. It is planned to supersede this currently most precise measurement of the branching fraction with one including the determination of the direct $CP$ violation in this mode.\\\\\nThe upcoming Belle 2 experiment will make a time-dependent analysis possible, as the accumulated data will provides enough data to use converted photons to determine the $B$ vertex. Including the mixing-induced $CP$-violation parameter of $B\\to\\ensuremath{\\pi^{0}}\\piz$ in the isospin analysis might remove the four-fold ambiguity from the isospin analysis being currently present in the $B\\to\\pi\\pi$ system.\n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[height=!,width=0.9\\columnwidth]{pi0pi0_B.eps}\n \\caption{ Signal enhanced distributions of (a) $\\Delta E$, (b) $\\cos\\Theta_H$ and (c) $\\Delta t$ with the fit result on top. Contributions from signal, continuum, $\\rho\\pi$ and other rare B decays are shown by blue, green, red and cyan respectively.}\n \\label{p_p0p0}\n\\end{figure}\n\n\\section{The Decay \\ensuremath{B^0 \\to D^0[K_S^0\\pip\\pim]K^{*0}}\\ and the CKM Angle $\\phi_3$}\n\nA model independent dalitz plot analysis~\\cite{phi3_theo, B2DK_Belle} of the decay \\ensuremath{B^0 \\to D^0[K_S^0\\pip\\pim]K^{*0}}\\ has been performed for the first time. The flavor-specific decay, $K^{*0}\\to K^+\\ensuremath{\\pi^{-}}$, allows to determine the flavor of the $B$ meson. The number of events in different bins of the dalitz plot of the $D$ meson coming from either a $B^0$ ($N_i^+$) or a $\\bar{B}^0$ ($N_i^-$) is given by\n\\begin{equation}\nN_i^\\pm = h_B[K_{\\pm i} + r_S^2K_{\\mp i} + 2k\\sqrt{K_iK_{-i}}(x_\\pm c_i \\pm y_\\pm s_i))],\n\\label{e_BDK}\n\\end{equation}\nwhere $h_B$ is a normalization constant, $K_{\\pm i}$ are the entries in the $D$ (+) or $\\bar{D}$ (-) dalitz plot, $k=0.95 \\pm 0.03$~\\cite{B2DK_Babar} accounts for interference effects in the $D$ decay, and $c_i$ and $s_i$ include information on the average of the phase variation within a dalitz plot bin. All information on the $D$ dalitz plot is provided by measurements from the CLEO collaboration~\\cite{Cleo_D}. The observables $x_\\pm\\equiv r_{s}\\cos(\\delta_S \\pm \\phi_3)$ and $y_\\pm \\equiv r_{s}\\sin(\\delta_S \\pm \\phi_3)$ from the interference term in Equ.~\\ref{e_BDK} allow an extraction of the CKM angle $\\phi_3$ in general, where the ratio between the cabbibo-allowed and double-cabbibo-suppressed amplitudes, $r_S\\equiv\\frac{\\bar{A}}{A}=\\frac{A(\\bar{B}\\to \\bar{D}^0\\bar{K}^{*0})}{A(\\bar{B}\\to D^0\\bar{K}^{*0})}$, indicates the sensitivity to $\\phi_3$. Belle obtains\n\\begin{eqnarray}\nx_+=+0.1^{+0.7+0.0}_{-0.4-0.1}\\pm0.1, &\\;\\;\\;\\;\\;\\;\\;y_+=+0.3^{+0.5+0.0}_{-0.8-0.1}\\pm0.1,\\nonumber \\\\\nx_-=+0.4^{+0.5+0.0}_{-0.8-0.1}\\pm0.0, &\\;\\;\\;\\;\\;\\;\\;y_-=-0.6^{+0.8+0.1}_{-1.0-0.0}\\pm0.1,\\nonumber\n\\end{eqnarray}\nand uses the result to obtain an upper limit of \n\\begin{eqnarray}\nr_S<0.87 \\nonumber \n\\end{eqnarray}\nat the one $\\sigma$ level. The (dalitz plot integrated) fit results are shown in Fig.~\\ref{p_DK} and the $r_S$ scan is shown in Fig.~\\ref{p_DK2}. 
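To make the role of these observables concrete, the expected yields of Equ.~\\ref{e_BDK} can be evaluated for any assumed $(r_S, \\delta_S, \\phi_3)$ and set of binned quantities; the sketch below uses toy values for $K_{\\pm i}$, $c_i$ and $s_i$, which in a real analysis are taken from the CLEO measurements.
\\begin{verbatim}
# Toy evaluation of the binned-yield formula above for assumed values of
# (r_S, delta_S, phi3); the K, c, s arrays are made up for illustration and
# would in practice come from the CLEO measurements.
import numpy as np

r_S, delta_S, phi3 = 0.3, np.radians(30.0), np.radians(70.0)
k, h_B = 0.95, 1000.0                        # coherence factor, normalization

K_pos = np.array([0.20, 0.15, 0.10, 0.05])   # K_{+i} (toy values)
K_neg = np.array([0.05, 0.10, 0.15, 0.20])   # K_{-i}
c = np.array([0.9, 0.4, -0.2, -0.8])         # average cosine of phase variation
s = np.array([0.1, 0.6, 0.7, 0.3])           # average sine of phase variation

x_p, y_p = r_S * np.cos(delta_S + phi3), r_S * np.sin(delta_S + phi3)
x_m, y_m = r_S * np.cos(delta_S - phi3), r_S * np.sin(delta_S - phi3)

interf = 2.0 * k * np.sqrt(K_pos * K_neg)
N_plus  = h_B * (K_pos + r_S**2 * K_neg + interf * (x_p * c + y_p * s))
N_minus = h_B * (K_neg + r_S**2 * K_pos + interf * (x_m * c - y_m * s))
\\end{verbatim}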
This mode is still statistically limited but will give additional insights on $\\phi_3$ when the Belle 2 data will be available.\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[height=!,width=0.99\\columnwidth, bb=2 500 1300 830]{B2DKs_result.eps}\n \\caption{ Signal enhanced distributions of (a) $\\Delta E$, (b) $\\cos\\Theta_H$ and (c) $\\Delta t$ with the fit result on top. The signal contribution is shown in red.}\n \\label{p_DK}\n\\end{figure}\n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[height=!,width=0.7\\columnwidth, bb = 20 330 1150 900]{B2DKs_rS.eps}\n \\caption{Probability scan of $r_S$.}\n \\label{p_DK2}\n\\end{figure}\n\n\\section{Summary}\nWe have presented three recent and preliminary measurements from Belle sensitive to the CKM phases \\ensuremath{\\phi_{2}}\\ and $\\phi_3$ using the full data set of $772$ million $B\\bar{B}$ pairs .\nMeasurements of the branching fraction, the polarization and the $CP$ asymmetries in $B \\rightarrow \\rho^+\\rho^-$ were used to update the $\\phi_2$ isospin constraint from Belle. The branching fraction measurement of of $B\\to\\pi^0\\pi^0$ has been presented and the importance for this mode the isospin analysis in the $B\\to \\pi\\pi$ system has been discussed. The current world averages of \\ensuremath{\\phi_{2}}\\ as computed by the CKMfitter~\\cite{CKMfitter} and UTfit~\\cite{UTfit} collaborations are $\\ensuremath{\\phi_{2}} = (87.6^{+3.5}_{-3.3})^{\\circ}$ and $\\ensuremath{\\phi_{2}} = (88\/6 \\pm 3.3 )^{\\circ}$, respectively. Furthermore we presented a preliminary measurement of \\ensuremath{B^0 \\to D^0[K_S^0\\pip\\pim]K^{*0}}\\ decays and discussed the sensitivity to $\\phi_3$. All shown results are in good agreement with other SM based constraints on the CKM triangle. With Belle 2 being built~\\cite{Belle2} and the LHCb operating, the next generation of $B$ physics experiments are expected to further reduce the uncertainty of the CKM observables and might reveal new phenomena.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\subsection{\\ensuremath{\\mathsf{LRSyn}}\\xspace: The Solution Outline}\n\nOur solution data-extraction system is parametrized by a number of components\nthat need to be instantiated for each domain.\n\\begin{compactitem}\n \\item \\emph{Region extraction program synthesizer.}\n The region extraction DSL $\\mathcal{L}_{rx}$ is equipped with a synthesizer that\n takes as input examples of the form $(\\ensuremath{\\mathsf{doc}}, \\ensuremath{\\ell}) \\mapsto \\ensuremath{\\mathsf{R}}$, and\n produces programs from $\\mathcal{L}_{rx}$.\n %\n Here, the example maps a document $\\ensuremath{\\mathsf{doc}}$ and a location $\\ensuremath{\\ell}$ within\n $\\ensuremath{\\mathsf{doc}}$ to a region $\\ensuremath{\\mathsf{R}}$ of the document.\n %\n For instance, an example in the HTML domain will have an input HTML\n document, an input location (DOM node), and an output region (set of\n contiguous DOM nodes).\n %\n \\item \\emph{Value extraction program synthesizer.}\n The value extraction DSL $\\mathcal{L}_{vx}$ is equipped with a synthesizer that\n takes as input examples of the form $\\ensuremath{\\mathsf{R}} \\mapsto \\ensuremath{v}$, and produces\n a program from $\\mathcal{L}_{vx}$.\n %\n Here, $\\ensuremath{\\mathsf{R}}$ is a region in a document and $\\ensuremath{v}$ is the field-value\n for that document.\n %\n For instance, an example might have an input region given by the blue\n rectangle in Figure~\\ref{fig:formed_documents}(a), and an output value\n ``Friday, Apr 3 8:18 
PM''.\n %\n \\item \\emph{Blueprinting and Locating functions.}\n The blueprinting function $\\ensuremath{\\mathsf{BP}}$, locating function $\\ensuremath{\\mathsf{Locate}}$, and the\n blueprint distance function $\\ensuremath{\\delta}$ (as described in\n Section~\\ref{sec:problem}) need to be specified per domain.\n\\end{compactitem}\n\n\\begin{figure*}[t]\n \\scalebox{0.7}{\n\\begin{tikzpicture}[scale=0.5, every node\/.style={inner sep=5}]\n \\node[draw, rectangle] (trainset)\n {\\tabular{c}Training Subset\\\\with Annotations $\\ensuremath{\\mathcal{A}}$ \\endtabular} ;\n \\node[rectangle, below of=trainset, yshift=-5mm] (dataset) {Documents} ;\n \\node[draw, rectangle, thick, fit=(trainset)(dataset)] (fulldataset) {};\n\n \\node[draw, rectangle, right of=fulldataset, xshift=5cm, yshift=1cm, minimum width=3.7cm] (cl1)\n {\\tabular{c}Cluster \\& Landmark\\\\ $(\\ensuremath{C}_1, \\ensuremath{\\mathsf{m}}_1)$ \\endtabular};\n \\node[draw, rectangle, dashed, below of=cl1, yshift=-2mm, minimum width=3.7cm] (cl2)\n {$(\\ensuremath{C}_2, \\ensuremath{\\mathsf{m}}_2)$};\n \\node[rectangle, below of=cl2, minimum width=3.7cm, yshift=0.5cm] (cl3)\n {$\\ldots$};\n \\node[rectangle, below of=cl3, minimum width=3.7cm, yshift=0.5cm] (cl4)\n {$\\ldots$};\n\n \\draw[->] (fulldataset.east) -- ++(1.5cm, 0cm) |- node[above] {} (cl1);\n \\draw[->, dashed] (fulldataset.east) ++(1.5cm, 0cm) |- node[above] {} (cl2);\n \\draw[->, dashed] (fulldataset.east) ++(1.5cm, 0cm) |- node[above] {} (cl3);\n \\draw[->, dashed] (fulldataset.east) ++(1.5cm, 0cm) |- node[above] {} (cl4);\n\n \\node[draw, rectangle, right of=cl1, xshift=5cm, yshift=1cm, minimum width=5cm] (inferroi)\n {\\tabular{c}For each $\\ensuremath{\\mathsf{doc}}_i \\in \\ensuremath{C}_1$ \\\\ Infer ROI $\\ensuremath{\\mathsf{R}}_i$ \\endtabular};\n\n \\node[draw, rectangle, below of=inferroi, yshift=-0.5cm, minimum width=5cm] (regionsynth)\n {\\tabular{c}Synthesize Region Program\\\\mathsf{Ex}: $(\\ensuremath{\\mathsf{doc}}_i, \\ensuremath{\\mathsf{m}}_i) \\mapsto \\ensuremath{\\mathsf{R}}_i$ \\endtabular} ;\n\n \\node[draw, rectangle, below of=regionsynth, yshift=-0.5cm, minimum width=5cm] (valuesynth)\n {\\tabular{c}Synthesize Value Program\\\\mathsf{Ex}: $\\ensuremath{\\mathsf{R}}_i \\mapsto \\ensuremath{\\mathcal{A}}(\\ensuremath{\\mathsf{doc}}_i)$ \\endtabular} ;\n\n \\node[draw, rectangle, below of=valuesynth, yshift=-0.5cm, minimum width=5cm] (inferbp)\n {\\tabular{c}Compute average \\\\ ROI blueprint \\endtabular} ;\n\n \\node[draw, thick, fit=(inferroi)(regionsynth)(valuesynth)(inferbp)] (synth) {};\n \\draw[->] (cl1.east) -- (cl1.east-|synth.west) ;\n\n \\node[draw, rectangle, right of=regionsynth, xshift=6cm, yshift=1cm, minimum width=4cm] (rprog)\n {Region Program $\\ensuremath{\\mathsf{RProg}}_1$} ;\n \\node[draw, rectangle, below of=rprog, minimum width=4cm] (vprog)\n {Value Program $\\ensuremath{\\mathsf{EProg}}_1$} ;\n \\node[draw, rectangle, below of=vprog, minimum width=4cm] (bpvalue)\n {ROI Blueprint $\\ensuremath{\\mathsf{b}}_1$} ;\n \\node[rectangle, above of=rprog, minimum width=4cm,yshift=-0.3cm] (fullprog1)\n {Extraction Program for $\\ensuremath{C}_1$} ;\n \\node[draw, thick, fit=(rprog)(vprog)(bpvalue)(fullprog1)] {};\n\n \\draw[->] (regionsynth.east) -- ++(1.00cm,0cm) |- (rprog.west) ;\n \\draw[->] (valuesynth.east) -- ++(1.25cm,0cm) |- (vprog.west) ;\n \\draw[->] (inferbp.east) -- ++(1.50cm,0cm) |- (bpvalue.west) ;\n\n \\node[draw, dashed, below of=fullprog1, yshift=-3cm, inner sep=0.4cm] (fullprog2)\n {Extaction Program for 
$C_2$};\n \\node[below of=fullprog2] (fullprog3) {...};\n\n \\draw [decorate, thick, decoration = {calligraphic brace}]\n ($(cl1.east|-synth.south)+(0cm,-0.5cm)$)\n -- node[below] {\\large Joint Cluster and Infer Landmarks}\n ($(fulldataset.west|-synth.south)+(0cm,-0.5cm)$) ;\n\n \\draw [decorate, thick, decoration = {calligraphic brace}]\n ($(fullprog1.east|-synth.south)+(0cm,-0.5cm)$)\n -- node[below] {\\large Synthesize Extraction Programs}\n ($(synth.west|-synth.south)+(0cm,-0.5cm)$) ;\n\n\\end{tikzpicture}\n }\n \\vspace{-2ex}\n \\caption{Outline of landmark-based robust synthesis \\ensuremath{\\mathsf{LRSyn}}\\xspace}\n \\label{fig:lrsynblocks}\n \\vspace{-3ex}\n\\end{figure*}\n\n\\begin{algorithm}\n\\small\n\\caption{Landmark-based robust synthesis $\\mathsf{LRSyn}$}\n\\label{algo:lrsyn}\n\\begin{algorithmic}[1]\n \\Require Training set $\\ensuremath{\\dataset_\\train} \\subseteq \\ensuremath{D}$\n \\Require Annotation $\\ensuremath{\\mathcal{A}}(\\ensuremath{\\mathsf{doc}})$ for each document $\\ensuremath{\\mathsf{doc}} \\in \\ensuremath{\\dataset_\\train}$\n \n \\Require Region extraction DSL $\\mathcal{L}_{rx}$\n \\Require Value extraction DSL $\\mathcal{L}_{vx}$\n %\n \\State $[ (\\ensuremath{C}, \\ensuremath{\\mathsf{m}}) ] \\gets \\text{\\textsc{InferLandmarksAndCluster}}(\\ensuremath{\\dataset_\\train}, \\ensuremath{\\mathcal{A}})$\n \\label{line:joint-cluster-and-landmark}\n %\n \\For{Cluster and landmark $(\\ensuremath{C}_i, \\ensuremath{\\mathsf{m}}_i) \\ \\in [(\\ensuremath{C}, \\ensuremath{\\mathsf{m}})]$:} \n %\n \\State $(\\ensuremath{\\mathsf{RProg}}_i,\\ensuremath{\\mathsf{b}}_i,\\ensuremath{\\mathsf{EProg}}_i) \\gets$\n \\Statex\\hspace{\\algorithmicindent}$\\hspace*{3mm}\\text{\\textsc{SynthesizeExtractionProgram}}(\\ensuremath{C}_i, \\ensuremath{\\mathsf{m}}_i, \\ensuremath{\\mathcal{A}}, \\mathcal{L}_{rx}, \\mathcal{L}_{vx})$\n \\EndFor\n %\n \\State \\Return $\\mathsf{Extract}(\\{ (\\ensuremath{\\mathsf{m}}_i, \\ensuremath{\\mathsf{RProg}}_i, \\ensuremath{\\mathsf{b}}_i, \\ensuremath{\\mathsf{EProg}}_i) \\})$\n\\end{algorithmic}\n\\end{algorithm}\n\nAlgorithm~\\ref{algo:lrsyn} presents an outline of our landmark-based robust\nsynthesis algorithm.\nThe high-level components are illustrated in Figure~\\ref{fig:lrsynblocks}.\nGiven an annotated training set $\\ensuremath{\\dataset_\\train}$, the first task is to infer the clustering of\n$\\ensuremath{\\dataset_\\train}$ into $\\ensuremath{C}_0, \\ldots, \\ensuremath{C}_n$.\nIn Algorithm~\\ref{algo:lrsyn}, this step is combined with the inference of\nlandmarks.\nThe procedure $\\text{\\textsc{InferLandmarksAndCluster}}$\n(line~\\ref{line:joint-cluster-and-landmark}) produces a set of clusters $C_i$\neach associated with a landmark $\\ensuremath{\\mathsf{m}}_i$.\nThen, for each cluster $\\ensuremath{C}_i$, the algorithm calls the subroutine\n$\\text{\\textsc{SynthesizeExtractionProgram}}$ to synthesize a region extraction program $\\ensuremath{\\mathsf{RProg}}_i$, a\nregion blueprint $\\ensuremath{\\mathsf{b}}_i$, and a value extraction program $\\ensuremath{\\mathsf{EProg}}_i$.\n\\begin{comment}\n\\begin{compactitem}\n \\item The region extraction program $\\ensuremath{\\mathsf{RProg}}_i$ takes $\\ensuremath{\\mathsf{doc}}$ and $\\ensuremath{\\mathsf{m}}$ as inputs and\n produces the corresponding ROI $\\ensuremath{\\mathsf{R}}$ as the output.\n %\n \\item The region blueprint $\\ensuremath{\\mathsf{b}}_i$ represents the typical blueprint\n of an ROI for a document in $\\ensuremath{C}_i$.\n \\item The value 
extraction program $\\ensuremath{\\mathsf{EProg}}_i$ produces the output field value given\n the ROI.\n\\end{compactitem}\n\\end{comment}\nThe algorithm combines these with the landmark to output an extraction program in the Landmark-based DSL $\\mathcal{L}_{ld}$, which can be executed using semantics shown in Algorithm 1.\n\n\n\\subsection {Clustering Documents and Inferring Landmarks}\n\\label{subsec:clustering-and-landmark-inference}\n\n\\leaveout{\nIn Algorithm~\\ref{algo:lrsyn}, the first step is to cluster the set of documents\n$\\ensuremath{D}$ into clusters $\\{ \\ensuremath{C}_0, \\ldots, \\ensuremath{C}_n \\}$ and identify a\nlandmark $\\ensuremath{\\mathsf{m}}_i$ for each cluster.\nIdeally, we want to cluster the documents not based on the global structure, but\nthe local structure of the ROI around the landmark and field values.\nHowever, the appropriate ROI cannot be identified without the\ncorresponding landmark, and in turn, landmarks can only be identified given the\nclusters as they are defined in terms of values common to all documents in a cluster.\nTo avoid this circular dependence, we first generate fine-grained clusters based\non the global structure, compute landmarks and regions of interest for these\nclusters, and then merge these fine-grained clusters based on the blueprints\nof the regions of interest.\n}\n\n\\begin{algorithm}\n\\small\n\\caption{Joint clustering and landmark inference \\\\\n \\textbf{Procedure} $\\text{\\textsc{InferLandmarksAndCluster}}(\\ensuremath{\\dataset_\\train},\\ensuremath{\\mathcal{A}})$ }\n\\begin{algorithmic}[1]\n\\Require Training dataset $\\ensuremath{\\dataset_\\train}$ along with annotations $\\ensuremath{\\mathcal{A}}$.\n\\Require Blueprint function $\\ensuremath{\\mathsf{BP}}$.\n\\Require Blueprint distance metric dataset $\\ensuremath{\\delta}$.\n\\State \\Comment{\\textbf{Initial clustering using whole document blueprints}}\n\\State $\\Delta_{\\mathsf{fine}}(\\ensuremath{\\mathsf{doc}}, \\ensuremath{\\mathsf{doc}}') \\gets\n \\ensuremath{\\delta}(\\ensuremath{\\mathsf{BP}}(\\ensuremath{\\mathsf{doc}}), \\ensuremath{\\mathsf{BP}}(\\ensuremath{\\mathsf{doc}}')), \\forall \\ensuremath{\\mathsf{doc}}, \\ensuremath{\\mathsf{doc}}' \\in \\ensuremath{\\dataset_\\train}$\n \\label{line:fine-grained-distance}\n\\State $ [ \\ensuremath{C} ] \\gets \\mathsf{Cluster}(\\ensuremath{\\dataset_\\train}, \\Delta_\\mathsf{fine})$\n \\label{line:initial-clustering}\n\\State \\Comment{\\textbf{Compute landmark and blueprint candidates}}\n\\For{$\\ensuremath{C}_i \\in [\\ensuremath{C}]$} \n\\State $\\ensuremath{\\mathsf{M}}_i \\gets \\ensuremath{\\mathsf{LandmarkCandidates}}(\\ensuremath{C}_i, \\ensuremath{\\mathcal{A}})$\n \\label{line:landmark-candidates}\n\\For{$\\ensuremath{\\mathsf{doc}} \\in \\ensuremath{\\dataset_\\train}$} \\label{line:compute-blueprints-start}\n\\State $\\ensuremath{\\mathsf{R}}_{\\ensuremath{\\mathsf{doc}},\\ensuremath{\\mathsf{m}}} \\gets \\ensuremath{\\mathsf{EncRgn}}(\\ensuremath{\\mathcal{A}}(\\ensuremath{\\mathsf{doc}}) \\cup\n \\ensuremath{\\mathsf{Locate}}(\\ensuremath{\\mathsf{m}}, \\ensuremath{\\mathsf{doc}}))$\n\\Statex\\hspace{20em}$, \\forall \\ensuremath{\\mathsf{m}} \\in \\ensuremath{\\mathsf{M}}_i$\n\\State $\\mathsf{roi}[\\ensuremath{\\mathsf{doc}}] \\gets \\{ (\\ensuremath{\\mathsf{m}}, \\ensuremath{\\mathsf{BP}}(\\ensuremath{\\mathsf{R}}_{\\ensuremath{\\mathsf{doc}},\\ensuremath{\\mathsf{m}}})) \n \\mid \\ensuremath{\\mathsf{m}} \\in \\ensuremath{\\mathsf{M}}_i \\}$\n 
\\label{line:compute-blueprints-end}\n\\EndFor\n\\EndFor\n\\State \\Comment{\\textbf{Merge clusters}}\n\\State Define $\\Delta_{\\mathsf{c}}(\\ensuremath{\\mathsf{doc}}_1, \\ensuremath{\\mathsf{doc}}_2) \\gets$ \\par\n $\\min(\\{ \\ensuremath{\\delta}(\\ensuremath{\\mathsf{b}}_1, \\ensuremath{\\mathsf{b}}_2)\n \\mid (\\ensuremath{\\mathsf{m}}_{1,2}, \\ensuremath{\\mathsf{b}}_{1,2}) \\in \\mathsf{roi}[\\ensuremath{\\mathsf{doc}}_{1,2}] \\land\n \n \\ensuremath{\\mathsf{m}}_1 = \\ensuremath{\\mathsf{m}}_2 \\}$\n \\label{line:coarse-grained-distance}\n\n\\While{No change in $[\\ensuremath{C}]$} \\label{line:merge-clusters-start}\n\\State Let $\\Delta(\\ensuremath{C}_1, \\ensuremath{C}_2) = \n \\mathsf{Avg}(\\{ \\Delta_\\mathsf{c}(\\ensuremath{\\mathsf{doc}}_1, \\ensuremath{\\mathsf{doc}}_2) \\mid \\ensuremath{\\mathsf{doc}}_i \\in \\ensuremath{C}_i \\})$\n\\If{$\\exists \\ensuremath{C}_1, \\ensuremath{C}_2 \\in [\\ensuremath{C}]$ such that $\\Delta(\\ensuremath{C}_1, \\ensuremath{C}_2) \\leq \\mathsf{threshold}$}\n \\State $[\\ensuremath{C}] \\gets ([\\ensuremath{C}] \\setminus \\{ \\ensuremath{C}_1, \\ensuremath{C}_2 \\})\n \\cup \\{ \\ensuremath{C}_1 \\cup \\ensuremath{C}_2 \\}$\n \\label{line:merge-clusters-end}\n\\EndIf\n\\EndWhile\n\n\\State \\Return $[\\ensuremath{C}, \\mathsf{TopLandmarkCandidate}(\\ensuremath{C}) ]$\n\\end{algorithmic}\n\\label{algo:joint-clustering-landmarks}\n\\end{algorithm}\n\nThe \\text{\\textsc{InferLandmarksAndCluster}}\\ procedure (Algorithm~\\ref{algo:joint-clustering-landmarks}) outlines how we jointly perform clustering and landmark detection, using the approach described in Section~\\ref{subsec:joint-infer-cluster}.\n\n\\subparagraph{Initial clustering}\nLines~\\ref{line:fine-grained-distance}-\\ref{line:initial-clustering} perform the initial clustering to obtain the initial fine-grained clusters.\nHere, the clustering is by the blueprint of the whole document, and\nhence, two documents will be in the same cluster only if they have more or less\nexactly the same format with little or no variations.\n\n\\subparagraph{Landmark and blueprint identification}\nThe procedure $\\ensuremath{\\mathsf{LandmarkCandidates}}$ identifies common values in the\ndocuments of $\\ensuremath{C}_i$ as landmark candidates and orders\nthem by a \\emph{scoring function} (line~\\ref{line:landmark-candidates}).\nThe scoring function is based on two features:\n\\begin{inparaenum}[(a)]\n \\item the distance between the landmark candidate and the annotated values in\n the document, and\n \\item the size of the region that encloses the landmark candidates and\n annotated values.\n\\end{inparaenum}\nThese features were determined after initial experiments on a small fraction of\nour evaluation datasets.\nThe procedure $\\ensuremath{\\mathsf{LandmarkCandidates}}$ only return candidates with a score\nover a certain threshold.\nThen, for each landmark candidate and document, we compute and store the\nblueprint of the ROI in lines 8-9.\n\n\\subparagraph{Coarse-grained clustering}\nNow, for each document, we have a number of landmark candidates along with their\nassociated ROIs.\nWith the ROIs, we can now define a coarse-grained distance over\ndocuments that is based only on the blueprints of the local structure of ROIs\n(line~\\ref{line:coarse-grained-distance}).\nWith this coarse-grained distance, we now repeatedly merge clusters based on\ntheir average document distance\n(line~\\ref{line:merge-clusters-start}-\\ref{line:merge-clusters-end}).\nSince the coarse-grained distances are based 
\n\subsection{Synthesizing Extraction Programs}\n\label{subsec:synth}\n\n\n\begin{algorithm}\n\small\n\caption{Synthesize Extraction Program\\\n \textbf{Proc.} $\text{\textsc{SynthesizeExtractionProgram}}(\n \ensuremath{C},\n \ensuremath{\mathsf{m}},\n \ensuremath{\mathcal{A}},\n \mathcal{L}_{rx},\n \mathcal{L}_{vx})$}\n \label{algo:synthesize}\n\begin{algorithmic}[1]\n\Require Cluster $\ensuremath{C}$ with annotations $\ensuremath{\mathcal{A}}$\n\Require Landmark value $\ensuremath{\mathsf{m}}$\n\Require Region and value extraction DSLs: $\mathcal{L}_{rx}$ and $\mathcal{L}_{vx}$\n\State \textbf{for all}~$\ensuremath{\mathsf{doc}}_i \in \ensuremath{C}$~\textbf{define}:\n\label{line:region-compute-begin}\n\State \hspace{\algorithmicindent}\n $\ensuremath{\ell}_i \gets \ensuremath{\mathsf{Locate}}(\ensuremath{\mathsf{m}}, \ensuremath{\mathsf{doc}}_i)$\n\State \hspace{\algorithmicindent}\n $(\ensuremath{\mathsf{Locs}}_i, \ensuremath{\mathsf{Agg}}_i) \gets \ensuremath{\mathcal{A}}(\ensuremath{\mathsf{doc}}_i)$\n\State \hspace{\algorithmicindent}\n $\ensuremath{\mathsf{R}}_i \gets \ensuremath{\mathsf{EncRgn}}(\{ \ensuremath{\ell}_i \} \cup \ensuremath{\mathsf{Locs}}_i, \ensuremath{\mathsf{doc}}_i)$\n\label{line:region-compute-end}\n\State \Comment{\textbf{Synthesize region program}}\n\State $\ensuremath{\mathsf{RegionSpec}} \gets \{ (\ensuremath{\mathsf{doc}}_i, \ensuremath{\ell}_i) \mapsto \ensuremath{\mathsf{R}}_i\n \mid \ensuremath{\mathsf{doc}}_i \in \ensuremath{C}\n \}$\n \label{line:region-spec}\n\State $\ensuremath{\mathsf{RProg}} \gets \mathsf{Synthesize}(\ensuremath{\mathsf{RegionSpec}}, \mathcal{L}_{rx})$\n \label{line:region-synth}\n\State \Comment{\textbf{Compute region blueprint}}\n\State $\ensuremath{\mathsf{b}} \gets\n \mathsf{Average}(\{ \ensuremath{\mathsf{BP}}(\ensuremath{\mathsf{RegionSpec}}(\ensuremath{\mathsf{doc}})) \mid \ensuremath{\mathsf{doc}} \in \ensuremath{C} \})$\n \label{line:fp-compute}\n\State \Comment{\textbf{Synthesize extraction program}}\n\State $\ensuremath{\mathsf{ValueSpec}} \gets \{ \ensuremath{\mathsf{R}}_i \mapsto\n \ensuremath{\mathsf{Agg}}_i(\ensuremath{\mathsf{Locs}}_i) \mid \ensuremath{\mathsf{doc}}_i \in \ensuremath{C} \}$\n \label{line:extraction-spec}\n\State $\ensuremath{\mathsf{EProg}} \gets \mathsf{Synthesize}(\ensuremath{\mathsf{ValueSpec}}, \mathcal{L}_{vx})$\n \label{line:extraction-synth}\n\State \Return $(\ensuremath{\mathsf{RProg}}, \ensuremath{\mathsf{b}}, \ensuremath{\mathsf{EProg}})$\n\end{algorithmic}\n\end{algorithm}\n\n\nThe $\text{\textsc{SynthesizeExtractionProgram}}$ procedure (Algorithm~\ref{algo:synthesize}) outlines\nhow we process a cluster with a given landmark to compute a region extraction program, a blueprint, and a value extraction program.\nThe algorithm takes as input:\n\begin{inparaenum}[(a)]\n \item a cluster $\ensuremath{C}$ and its corresponding landmark $\ensuremath{\mathsf{m}}$,\n \item the annotations $\ensuremath{\mathcal{A}}$ for the documents in $\ensuremath{C}$, and\n \item the DSLs for region programs and extraction programs.\n\end{inparaenum}\n\nIn the first step, the algorithm computes the ROI $\ensuremath{\mathsf{R}}_i$\nfor each document $\ensuremath{\mathsf{doc}}_i$ from the landmark and the annotations\n(lines~\ref{line:region-compute-begin}--\ref{line:region-compute-end}).\nThen, we synthesize the region program $\ensuremath{\mathsf{RProg}}$ using a set of examples of the\nform $(\ensuremath{\mathsf{doc}}_i, \ensuremath{\ell}_i) \mapsto \ensuremath{\mathsf{R}}_i$\n(lines~\ref{line:region-spec} and~\ref{line:region-synth}).\nWe also compute the average or typical blueprint $\ensuremath{\mathsf{b}}$ for all the\nROIs in the cluster (line~\ref{line:fp-compute}).\nThe region extraction program $\ensuremath{\mathsf{RProg}}$ and filtering based on the blueprint $\ensuremath{\mathsf{b}}$ (used in the execution semantics in Algorithm 1) together act as a robust mechanism for detecting the\nROIs.\nNext, we synthesize a value extraction program $\ensuremath{\mathsf{EProg}}$ using examples where the\ninputs are the ROIs in the documents, and the outputs are the\nexpected field values (lines~\ref{line:extraction-spec}\nand~\ref{line:extraction-synth}).\nThe region and value extraction programs work not only on the documents in $\ensuremath{C}$, but typically also on unseen documents and formats where the global structure changes\nwithout changes in the ROIs.\n\nThe algorithm finally returns $(\ensuremath{\mathsf{RProg}}, \ensuremath{\mathsf{b}}, \ensuremath{\mathsf{EProg}})$, which is combined\nwith the landmark value $\ensuremath{\mathsf{m}}$ to produce a complete extraction program in the landmark DSL $\mathcal{L}_{ld}$ in\nAlgorithm~\ref{algo:lrsyn}.\n
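\nAs an illustration of how the two specifications are assembled, the following Python sketch builds the input-output examples that drive the two synthesis calls; the helpers \texttt{locate} and \texttt{enclosing\_region} stand in for $\ensuremath{\mathsf{Locate}}$ and $\ensuremath{\mathsf{EncRgn}}$, and the representation of annotations as (locations, aggregator) pairs mirrors the algorithm but is otherwise an assumption.\n\begin{verbatim}\ndef build_specs(cluster, landmark, annotations, locate, enclosing_region):\n    # region_spec: (doc, landmark location) -> ROI examples\n    # value_spec:  ROI -> expected field value examples\n    region_spec, value_spec = [], []\n    for doc in cluster:\n        lm_loc = locate(landmark, doc)       # landmark location in doc\n        locs, agg = annotations[doc]         # annotated locations + aggregator\n        roi = enclosing_region({lm_loc, *locs}, doc)\n        region_spec.append(((doc, lm_loc), roi))\n        value_spec.append((roi, agg(locs)))\n    return region_spec, value_spec\n\n# The two specs then drive independent synthesis calls, e.g.\n#   region_prog = synthesize(region_spec, region_dsl)\n#   value_prog  = synthesize(value_spec, value_dsl)\n# and the cluster blueprint is the average blueprint of the ROIs above.\n\end{verbatim}\n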
\n\n\section{Region program for HTML}\n\begin{algorithm}\n\small\n\caption{$\mathsf{LearnRegionProgram}$ for an HTML document}\n\label{algo:regionhtml}\n\begin{algorithmic}[1]\n\Require Annotated HTML documents $D_{tr}$ from a single cluster\n\Require Field values $f_T(d)$ for each document $d \in D_{tr}$\n\Require Landmark $\ell^{f_T}_d$ for each document $d \in D_{tr}$\n\State $\mathsf{BluePrint} \gets \emptyset$\n\State $\mathsf{Hops} \gets \langle\rangle$\n\For {$Document \; d \in D_{tr}$}\n\State $\mathsf{ParentHops}, \mathsf{SiblingHops} \gets 0, 0$\n\State $\mathsf{landmarkRegion} \gets \ensuremath{\mathsf{Locate}}({\ell^{f_T}_d})$\n\State $\mathsf{fieldRegions} \gets f_T(d)$\n\State $\mathsf{region} \gets \mathsf{landmarkRegion}$\n\Do\n \If{$\forall r \in \mathsf{fieldRegions}: r \subseteq \mathsf{region}$}\n \State break\n \EndIf\n \If{$\exists k$ such that all $r \in \mathsf{fieldRegions}$ are within $k$ siblings of $\mathsf{region}$}\n \State $\mathsf{SiblingHops} \gets k$\n \State break\n \EndIf\n \State $\mathsf{ParentHops} \gets \mathsf{ParentHops} + 1$\n \State Expand $\mathsf{region}$ by one parent\n\doWhile {$\mathsf{True}$}\n\State $\mathsf{Hops} \gets \mathsf{Hops} + \langle \mathsf{ParentHops}, \mathsf{SiblingHops} \rangle$\n\EndFor\n\State\n\State $\mathsf{ParentHops} \gets \max \{ p \mid \langle p, s \rangle \in \mathsf{Hops} \}$\n\State $\mathsf{SiblingHops} \gets \max \{ s \mid \langle p, s \rangle \in \mathsf{Hops} \land p = \mathsf{ParentHops} \}$\n\State\n\State $\mathsf{BluePrint} \gets \{ \ensuremath{\mathsf{BPrint}}(P_{rg}(d)) \mid d \in D_{tr} \}$ \Comment{in practice this set has size $\leq 3$}\n\State\n\State $P_{rg} \gets \langle \mathsf{ParentHops}, \mathsf{SiblingHops}, \mathsf{BluePrint} \rangle$\n\State \Return $P_{rg}$\n\end{algorithmic}\n\end{algorithm}\n\nAlgorithm~\ref{algo:regionhtml}~describes the region growing algorithm for an HTML document. The algorithm loops through the training documents $D_{tr}$ and, for each document, finds the $\mathsf{ParentHops}$ and $\mathsf{SiblingHops}$ required to grow the region. The first step in the algorithm (Line 5) is to use the $\ensuremath{\mathsf{Locate}}$ function to find the region associated with the landmark value. In this case, the $\ensuremath{\mathsf{Locate}}$ function translates to a simple \emph{grep} over the document. We initialize the $\mathsf{region}$ variable to the landmark region (Line 7).\n\nThe algorithm proceeds as follows: we check whether all the field regions are contained within $\mathsf{region}$ itself, in which case we are done. Otherwise, we check whether all the field regions are within $k$ siblings of the region. We call this the \emph{sibling consistency} check. If this check passes, we set $\mathsf{SiblingHops}$ to $k$ and break. Otherwise, we need a larger region to encompass the landmark and field regions: we increment $\mathsf{ParentHops}$ by $1$, expand the region by one parent (Lines 14 and 15), and continue with the iteration from Line 9.\n\nAt the end of the loop, we have a $(\mathsf{ParentHops}, \mathsf{SiblingHops})$ pair for each training document in $D_{tr}$. Since we require a common parameter pair that works across documents at inference time, we reconcile these pairs as follows: we take the maximum over all $\mathsf{ParentHops}$ values; let us call this $M$. Then, over all pairs with $\mathsf{ParentHops} = M$, we take the maximum over $\mathsf{SiblingHops}$. The theorem below shows that this operation always computes a region that is correct for all the input documents.\n\nIn addition to the hops, we also output a blueprint characterizing the region. To do so, we again grow the region for each document using the final $\mathsf{ParentHops}$ and $\mathsf{SiblingHops}$, add the resulting region blueprints to a set, and return that set.\n
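\nThe following Python sketch summarizes the per-document growth loop and the reconciliation step; the node operations (\texttt{contains}, \texttt{sibling\_hops\_covering}, \texttt{parent}) are assumed helpers over the DOM and are not part of our implementation.\n\begin{verbatim}\ndef grow_region(landmark_node, field_nodes):\n    # Return (parent_hops, sibling_hops) so that the region grown from\n    # the landmark node covers all field nodes.\n    parent_hops, region = 0, landmark_node\n    while True:\n        if all(region.contains(f) for f in field_nodes):\n            return parent_hops, 0\n        k = region.sibling_hops_covering(field_nodes)  # None if no k works\n        if k is not None:\n            return parent_hops, k                      # sibling consistency\n        region = region.parent()                       # expand by one parent\n        parent_hops += 1\n\ndef reconcile(hops):\n    # Pick a single (ParentHops, SiblingHops) pair that covers\n    # every training document.\n    max_parent = max(p for p, _ in hops)\n    max_sibling = max(s for p, s in hops if p == max_parent)\n    return max_parent, max_sibling\n\end{verbatim}\n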
\n\begin{theorem}\nLet $(\mathsf{ParentHops}, \mathsf{SiblingHops})$ be the pair obtained by the reconciliation described above. For every input document $d \in D_{tr}$, the region obtained by growing the landmark region of $d$ by $\mathsf{ParentHops}$ parents and $\mathsf{SiblingHops}$ siblings is a correct region for $d$, i.e., it contains the landmark region and all field regions of $d$.\n\end{theorem}\n\nFor example, if the per-document pairs are $(3, 0)$ and $(2, 1)$, the reconciliation yields $(3, 0)$, and the region given by $(3, 0)$ is a superset of the region given by $(2, 1)$.\nLikewise, if the pairs are $(3, 0)$ and $(3, 1)$, the reconciliation yields $(3, 1)$, and the region given by $(3, 1)$ also contains the region given by $(3, 0)$.\n\n{\it Note that this is a soundness guarantee: many regions may be correct in this sense, and the reconciled pair need not yield the smallest such region for each individual document.
}\n\n\n\n\n\\begin{figure}\n\\small\n\\begin{tabular}{r c l}\nProg & := & \\textsf{map}(\\(\\lambda\\) node .\\; \\textsf{FFProg}(node), \\textsf{Nodes}) \\\\\nNodes & := & \\textsf{AllNodes}(\\textsf{input}) | \\textsf{Descendants}(\\textsf{Nodes}) \\\\\n & & | \\textsf{filter}(\\textsf{Selector}, \\textsf{Nodes}) | \\textsf{Children}(\\textsf{Nodes}) \\\\\nSelector & := & \\textbf{tag} = c | \\textbf{class} = c | \\textbf{id} = c | \\textbf{nth-child}(n) | \\ldots \\\\\nFFProg & := & \\textsf{Substring} | \\textsf{Concat}(\\textsf{SubString}, \\textsf{FFProg}) \\\\\nSubString & := & \\textsf{node.TextValue} \\\\\n & & | \\textsf{Extract}(RegexPos, RegexPos, SubString) \\\\\nRegexPos & := & \\textsf{RegexSearch}(\\textit{regex}, k)\n\\end{tabular}\n\\vspace{-2ex}\n\\caption{Syntax of the HTML extraction language $\\mathcal{L}_{ex}$}\n\\vspace{-3ex}\n\\end{figure}\n\n\\begin{table*}[ht]\n\\small\n\\label{table:m2hexpmt1}\n\\resizebox{8cm}{!}{\n\\begin{tabular}{|c|l|c|c|c|c|c|c|c|c|}\n\\hline\n{Domain} & Fields & \\multicolumn{4}{|c|}{$\\ensuremath{\\mathsf{NDSyn}}\\xspace$} & \\multicolumn{4}{|c|}{$\\ensuremath{\\mathsf{LRSyn}}\\xspace$}\\\\\n\\cline{3-10}& & Prgs & Pre. & Rec. & F1 & Prgs & Pre. & Rec. & F1\\\\\n\\hline\n\\hline\\multirow{8}{*}{ifly.alaskaair.com}\n\n& AIata & 2 & 0.99 & 0.47 & 0.64 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& ATime & 2 & 0.97 & 0.46 & 0.62 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& DIata & 2 & 0.89 & 0.42 & 0.57 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& DDate & 1 & 1.00 & 1.00 & 1.00 & 2 & 1.00 & 1.00 & 1.00 \\\\\n& DTime & 2 & 0.89 & 0.42 & 0.58 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& FNum & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& Name & 1 & 1.00 & 0.99 & 0.99 & 1 & 1.00 & 0.99 & 0.99 \\\\\n& RId & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n\n\\hline\\multirow{9}{*}{delta.com}\n\n& AIata & 2 & 1.00 & 1.00 & 1.00 & 2 & 1.00 & 1.00 & 1.00 \\\\\n& ATime & 2 & 1.00 & 1.00 & 1.00 & 2 & 1.00 & 1.00 & 1.00 \\\\\n& DIata & 2 & 1.00 & 1.00 & 1.00 & 2 & 1.00 & 1.00 & 1.00 \\\\\n& DDate & 1 & 0.94 & 0.97 & 0.95 & 2 & 1.00 & 1.00 & 1.00 \\\\\n& DTime & 2 & 1.00 & 1.00 & 1.00 & 2 & 1.00 & 1.00 & 1.00 \\\\\n& FNo & 2 & 1.00 & 1.00 & 1.00 & 2 & 1.00 & 1.00 & 1.00 \\\\\n& Name & 2 & 0.95 & 0.88 & 0.91 & 2 & 0.95 & 1.00 & 0.97 \\\\\n& Pvdr & 2 & 1.00 & 1.00 & 1.00 & 2 & 1.00 & 1.00 & 1.00 \\\\\n& RId & 2 & 1.00 & 0.99 & 0.99 & 2 & 1.00 & 0.99 & 0.99 \\\\\n\n\\hline\\multirow{9}{*}{booking.airasia.com}\n\n& AIata & 1 & 0.67 & 1.00 & 0.67 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& ATime & 0 & 0.00 & 0.00 & 0.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& DIata & 1 & 0.67 & 1.00 & 0.67 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& DDate & 1 & 0.67 & 1.00 & 0.67 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& DTime & 0 & 0.00 & 0.00 & 0.00 & 1 & 1.00 & 1.00 & 1.00\\\\\n& FNo & 1 & 1.00 & 0.92 & 0.92 & 1 & 1.00 & 0.92 & 0.92 \\\\\n& Name & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& Pvdr & 1 & 1.00 & 0.92 & 0.92 & 1 & 1.00 & 0.92 & 0.92 \\\\\n& RId & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n\n\\hline\\multirow{9}{*}{getthere.com}\n\n& AIata & 3 & 0.73 & 0.84 & 0.78 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& ATime & 2 & 0.94 & 0.91 & 0.92 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& DIata & 3 & 0.94 & 0.95 & 0.94 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& DDate & 3 & 0.98 & 0.94 & 0.96 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& DTime & 2 & 0.86 & 0.85 & 0.85 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& FNo & 1 & 0.94 & 0.96 & 0.95 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& Name & 1 & 0.83 & 0.96 & 0.89 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& Pvdr & 1 & 0.98 & 0.95 & 0.97 & 1 & 
1.00 & 1.00 & 1.00 \\\\\n& RId & 1 & 1.00 & 0.88 & 0.94 & 1 & 1.00 & 1.00 & 1.00 \\\\\n\n\\hline\\multirow{9}{*}{t.delta.com}\n\n& AIata & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& ATime & 1 & 0.99 & 1.00 & 0.99 & 1 & 0.99 & 1.00 & 0.99 \\\\\n& DIata & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& DDate & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& DTime & 1 & 0.99 & 1.00 & 0.99 & 1 & 0.99 & 1.00 & 0.99 \\\\\n& FNo & 2 & 1.00 & 1.00 & 1.00 & 2 & 1.00 & 1.00 & 1.00 \\\\\n& Name & 1 & 0.99 & 0.99 & 0.99 & 1 & 0.99 & 0.99 & 0.99 \\\\\n& Pvdr & 2 & 0.99 & 0.97 & 0.98 & 2 & 1.00 & 0.97 & 0.98 \\\\\n& RId & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n\n\\hline\\multirow{9}{*}{philippineairlines.com}\n\n& AIata & 3 & 0.91 & 1.00 & 0.96 & 3 & 1.00 & 1.00 & 1.00 \\\\\n& ATime & 2 & 1.00 & 1.00 & 1.00 & 3 & 1.00 & 0.98 & 0.99 \\\\\n& DIata & 2 & 0.99 & 0.99 & 0.99 & 2 & 0.99 & 0.99 & 0.99 \\\\\n& DeDate & 2 & 0.98 & 0.94 & 0.96 & 2 & 0.98 & 0.94 & 0.96 \\\\\n& DTime & 2 & 1.00 & 1.00 & 1.00 & 3 & 1.00 & 0.98 & 0.99 \\\\\n& FNo & 3 & 0.74 & 0.68 & 0.70 & 3 & 1.00 & 0.93 & 0.97 \\\\\n& Name & 3 & 1.00 & 0.97 & 0.99 & 2 & 1.00 & 0.99 & 0.99 \\\\\n& Pvdr & 2 & 1.00 & 1.00 & 1.00 & 2 & 1.00 & 1.00 & 1.00 \\\\\n& RId & 2 & 1.00 & 1.00 & 1.00 & 2 & 1.00 & 1.00 & 1.00 \\\\\n\n\\hline\\multirow{9}{*}{itinerary.westjet.com}\n\n& AIata & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& ATime & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& DIata & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& DDate & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& DTime & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& FNo & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& Name & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& Pvdr & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& RId & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n\n\\hline\\multirow{9}{*}{aeromexico.com}\n\n& AIata & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& ATime & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& DIata & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& DDate & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& DTime & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& FNo & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& Name & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& Pvdr & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& RId & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n\n\\hline\\multirow{9}{*}{mytrips.amexgbt.com}\n& AIata & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& ATime & 1 & 0.99 & 0.99 & 0.99 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& DIata & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& DDate & 1 & 0.99 & 1.00 & 0.99 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& DTime & 1 & 0.99 & 1.00 & 0.99 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& FNo & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& Name & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& Pvdr & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& RId & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n\n\\hline\\multirow{9}{*}{qatarairways.com.qa}\n\n& AIata & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& ATime & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& DIata & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& DDate & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& DTime & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& FNo 
& 1 & 0.99 & 1.00 & 0.99 & 1 & 0.99 & 1.00 & 0.99 \\\\\n& Name & 1 & 0.99 & 1.00 & 0.99 & 1 & 0.99 & 1.00 & 0.99 \\\\\n& Pvdr & 1 & 1.00 & 1.00 & 1.00 & 1 & 1.00 & 1.00 & 1.00 \\\\\n& RId & 2 & 1.00 & 0.99 & 0.99 & 2 & 1.00 & 1.00 & 1.00 \\\\\n\n\\hline\n\\end{tabular}\n}\n\\caption{No. Program, Precision, Recall and F1 numbers on HTML extraction scenario (M2H dataset) with NDSyn and ReSyn}\n\\end{table*}\n\n\\begin{table*}[t]\n\\small\n\\label{table:m2hexpmt2appendix}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}\n\\hline\n{Domain} & Fields & \\multicolumn{4}{|c|}{HDEF} & \\multicolumn{5}{|c|}{$\\ensuremath{\\mathsf{LRSyn}}\\xspace$}\\\\\n\\cline{3-11}& & Programs & Pre. & Rec. & F1 & Clusters & Programs & Pre. & Rec. & F1\\\\\n\\hline\n\\hline\\multirow{9}{*}{delta.com}\n& ArrivalAirportIata&2&0.9967&0.9967&0.9967&2&2&0.9967&0.9967&0.9967\\\\\n& ArrivalTime&2&0.9962&0.9975&0.9968&2&3&0.9971&0.9975&0.9973\\\\\n& DepartureAirportIata&2&0.9934&0.9967&0.9950&2&2&0.9934&0.9967&0.9950\\\\\n& DepartureDate&2&0.932&0.9659&0.9486&2&2&0.9986&0.9977&0.9981\\\\\n& DepartureTime&2&1&0.9975&0.9987&2&2&1&0.9975&0.9987\\\\\n& FlightNumber&2&1&1&1.00&2&2&1&1&1.00\\\\\n& Name&2&0.9447&0.8795&0.9109&3&3&0.951&1&0.9749\\\\\n& Provider&2&1&1&1.00&2&2&1&1&1.00\\\\\n& ReservationId&2&1&0.9989&0.9994&3&3&1&0.9989&0.9994\\\\\n\\hline\\multirow{9}{*}{philippineairlines.com}\n& ArrivalAirportIata&3&1&1&1.00&2&3&1&1&1.00\\\\\n& ArrivalTime&2&1&1&1.00&2&3&1&1&1.00\\\\\n& DepartureAirportIata&2&0.9937&0.9937&0.9937&2&2&0.9935&0.9935&0.9935\\\\\n& DepartureDate&2&0.9815&0.9463&0.9636&2&2&0.9809&0.9449&0.9626\\\\\n& DepartureTime&2&1&1&1.00&2&3&1&1&1.00\\\\\n& FlightNumber&2&0.8983&1&0.9464&2&2&0.9012&1&0.9480\\\\\n& Name&3&1&0.9745&0.9871&2&2&1&0.9953&0.9976\\\\\n& Provider&2&1&1&1.00&2&2&1&1&1.00\\\\\n& ReservationId&2&1&1&1.00&2&2&1&1&1.00\\\\\n\\hline\\multirow{9}{*}{itinerary.westjet.com}\n& ArrivalAirportIata&1&1&1&1.00&1&1&1&1&1.00\\\\\n& ArrivalTime&1&1&1&1.00&1&1&1&1&1.00\\\\\n& DepartureAirportIata&1&1&1&1.00&1&1&1&1&1.00\\\\\n& DepartureDate&1&1&1&1.00&1&1&1&1&1.00\\\\\n& DepartureTime&1&1&1&1.00&1&1&1&1&1.00\\\\\n& FlightNumber&1&1&1&1.00&1&1&1&1&1.00\\\\\n& Name&1&0.9931&1&0.9965&1&2&1&0.9942&0.9971\\\\\n& Provider&1&1&1&1.00&1&1&1&1&1.00\\\\\n& ReservationId&1&1&1&1.00&1&1&1&1&1.00\\\\\n\\hline\\multirow{9}{*}{qatarairways.com.qa}\n& ArrivalAirportIata&1&1&1&1.00&1&1&1&1&1.00\\\\\n& ArrivalTime&1&1&1&1.00&1&1&1&1&1.00\\\\\n& DepartureAirportIata&1&1&1&1.00&1&1&1&1&1.00\\\\\n& DepartureDate&1&1&1&1.00&1&1&1&1&1.00\\\\\n& DepartureTime&1&1&1&1.00&1&1&1&1&1.00\\\\\n& FlightNumber&1&0.9982&1&0.9991&1&1&0.9982&1&0.9991\\\\\n& Name&1&0.9988&1&0.9994&1&1&0.9988&1&0.9994\\\\\n& Provider&1&1&1&1.00&1&1&1&1&1.00\\\\\n& ReservationId&2&1&1&1.00&2&2&1&1&1.00\\\\\n\\hline\n\\end{tabular}\n\\caption{Results after adding new equivalence classes}\n\\end{table*}\n\\begin{table*}[t]\n\\small\n\\label{table:m2hexpmt2-Add}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|}\n\\hline\n{Domain} & \\multicolumn{2}{c}{Original} &\\multicolumn{2}{|c|}{Added} & \\multicolumn{2}{|c|}{Final} & {New Equi Classes}\\\\\n\\cline{2-7}& Train & Test & Train & Test & Train & Test& \\\\\n\\hline\ndelta&102&927&8&8&110&935&1\\\\\nphilippines&74&90&5&4&79&94&4\\\\\nqatar&55&581&10&20&65&601&9\\\\\nwestjet&50&427&5&73&55&500&0\\\\\n\\hline\n\\end{tabular}\n\\caption{Additions made to the dataset to account for out of cluster 
entities}\n\\end{table*}\n\n\n\n\\begin{table*}\n\\small\n\\label{table:m2hexpmt2}\n\\resizebox{17cm}{!}{\n\\begin{tabular}{|l|c|l|}\n\\hline\nDomain : Field & & Program\\\\\n\n\\hline\\multirow{3}{*}{ifly.alaskaair.com:ArrivalAirportIata}\n&HDEF&$TR:nth-child(1):nth-last-child(1) > [style*=\"width\\:48\\%\"]:nth-child(3) > TABLE[cellpadding=\"0\"][cellspacing=\"0\"][border=\"0\"][style*=\"width\\:100\\%\"]:nth-child(1):nth-last-child(1) > TBODY:nth-child(1):nth-last-child(1) > :nth-child(1)$\\\\\n& &$TR:nth-child(1):nth-last-child(1) > [style*=\"padding-bottom\\:25px\"] > TABLE[cellpadding=\"0\"][cellspacing=\"0\"][border=\"0\"][style*=\"width\\:100\\%\"]:nth-child(1):nth-last-child(1) > TBODY:nth-child(1):nth-last-child(1) > :nth-last-child(4)$\\\\\n& HiSyn &$TR:nth-child(1)$ \\\\\n\n\\hline\\multirow{4}{*}{ifly.alaskaair.com:DepartureTime}\n&HDEF&$TR:nth-child(1):nth-last-child(1) > [style*=\"width\\:48\\%\"]:nth-child(1) > TABLE[cellpadding=\"0\"][cellspacing=\"0\"][border=\"0\"][style*=\"width\\:100\\%\"]:nth-child(1):nth-last-child(1) > TBODY:nth-child(1):nth-last-child(1) > :nth-child(2)$\\\\\n& &$TR:nth-child(1):nth-last-child(1) > [style*=\"padding-bottom\\:25px\"] > TABLE[cellpadding=\"0\"][cellspacing=\"0\"][border=\"0\"][style*=\"width\\:100\\%\"]:nth-child(1):nth-last-child(1) > TBODY:nth-child(1):nth-last-child(1) > :nth-last-child(3)$ \\\\ \n& HiSyn &$:nth-child(2)$ \\\\\n\\hline\\multirow{2}{*}{ifly.alaskaair.com:ReservationID}\n&HDEF&$TR:nth-child(1):nth-last-child(1) > [style*=\"padding-bottom\\:20px\"] > [align=\"center\"] > TBODY:nth-child(1):nth-last-child(1) > :nth-child(1)$\\\\\n&HiSyn&$TR:nth-child(1)$\\\\\n\n\\hline\\multirow{2}{*}{delta.com:DepartureAirportIata}\n&HDEF&$TABLE:nth-child(1) > TBODY:nth-child(1):nth-last-child(1) > TR > [style*=\"font-family\\:Lucida Grande\\, Lucida Sans\\, Lucida Sans Unicode\\, Trebuchet MS\\, Verdana\\, Tahoma\\, sans-serif\"]:nth-last-child(4) > SPAN[style*=\"color\\:rgb(112, 112, 112)\"]:nth-child(1):nth-last-child(1)$\\\\\n&HiSyn&$TBODY > TR > :nth-child(3)$\\\\\n\n\\hline\\multirow{2}{*}{delta.com:DepartureAirportIata}\n&HDEF&$TABLE:nth-child(1) > TBODY:nth-child(1):nth-last-child(1) > TR > [style*=\"font-family\\:Lucida Grande\\, Lucida Sans\\, Lucida Sans Unicode\\, Trebuchet MS\\, Verdana\\, Tahoma\\, sans-serif\"]:nth-last-child(4) > SPAN[style*=\"color\\:rgb(112, 112, 112)\"]:nth-child(1):nth-last-child(1)$\\\\\n&HiSyn&$TBODY > TR > :nth-child(3)$ \\\\\n\n\\hline\\multirow{2}{*}{booking.airasia.com:DepartureTime} \n&HDEF &$ [style*=\"border-collapse\\:collapse\"][style*=\"font-family\\:Roboto\\, Arial\\, sans-serif\"] > TBODY:nth-child(1):nth-last-child(1) > :nth-last-child(1):nth-child(1) > [valign=\"middle\"]:nth-child(3) > :nth-child(3)$\\\\\n&HiSyn&$TD:nth-child(3) > :nth-child(3)$ \\\\\n\n\\hline\\multirow{2}{*}{getthere.com:Provider} & HDEF&\n$DIV:nth-last-child(13) > TABLE > TBODY:nth-child(1):nth-last-child(1) > :nth-child(1) > :nth-child(2)$\\\\&HiSyn&$:nth-child(2)$\\\\\n\\hline\\multirow{2}{*}{getthere.com:ReservationID} & HDEF &\n$:nth-last-child(17) > TABLE > TBODY:nth-child(1):nth-last-child(1) > :nth-child(2) > :nth-child(2)$\\\\&HiSyn&$:nth-child(2)$\\\\\n\\hline\\multirow{2}{*}{t.delta.com:DepartureAirportIata}\n&HDEF&$.mj-column-per-100 > TABLE[border=\"0\"][cellpadding=\"0\"][cellspacing=\"0\"][role=\"presentation\"]:nth-child(1):nth-last-child(1) > TBODY:nth-child(1):nth-last-child(1) > TR > :nth-child(2)$\\\\&HiSyn&$TBODY:nth-child(1):nth-last-child(1) > TR > 
:nth-child(2)$\\\\\n\\hline\\multirow{2}{*}{t.delta.com:DepartureTime}\n&HDEF&$.mj-column-per-100 > :nth-child(1):nth-last-child(1) > TBODY:nth-child(1):nth-last-child(1) > TR > :nth-child(2)$\\\\&HiSyn&$TBODY:nth-child(1):nth-last-child(1) > TR > :nth-child(2)$\\\\\n\n\\hline\\multirow{2}{*}{philippineairlines.com:Name}\n&HDEF &$.content > [cellpadding=\"2\"] > TBODY:nth-child(1):nth-last-child(1) > [valign=\"top\"] > :nth-last-child(4):nth-child(1)$\n\\\\&HiSyn&$TR:nth-last-child(29)$\\\\\n\n\\hline\\multirow{2}{*}{itinerary.westjet.com:DepartureAirportIata} \n&HDEF&$[style*=\"font-size\\:13px\"][width=\"100\\%\"] > TBODY:nth-child(1):nth-last-child(1) > :nth-last-child(3):nth-child(1) > .accent > [id*=\"airSegment-departure-city-\"]$\\\\&HiSyn&$.accent > SPAN[id*=\"airSegment-departure-city-\"][style*=\"text-transform\\:uppercase\"]:nth-child(1)$\\\\\n\n\\hline\\multirow{2}{*}{aeromexico.com:ArrivalTime} &HDEF &\n$[id*=\"itinerary-container\"] > TABLE[cellspacing=\"0\"][width=\"100\\%\"]:nth-child(1):nth-last-child(1) > TBODY:nth-child(1):nth-last-child(1) > [style*=\"vertical-align\\:top\"] > [id*=\"-arrival-time\"]$\\\\&HiSyn&$TBODY:nth-child(1):nth-last-child(1) > [style*=\"vertical-align\\:top\"] > [id*=\"-arrival-time\"]$\\\\\n\n\\hline\\multirow{2}{*}{mytrips.amexgbt.com:FlightNumber} \n&HDEF&$TBODY > :nth-child(1):nth-last-child(1) > :nth-child(2) > DIV.right_section:nth-child(1):nth-last-child(1) > :nth-child(7):nth-last-child(12)$\\\\&HiSyn&$P:nth-child(7):nth-last-child(12)$ \\\\\n\n\\hline\\multirow{2}{*}{mytrips.amexgbt.com:FlightNumber}&HDEF&\n$TBODY > :nth-child(1):nth-last-child(1) > :nth-child(2) > DIV.right_section:nth-child(1):nth-last-child(1) > :nth-child(7):nth-last-child(12)$ \\\\&HiSyn& $P:nth-child(7):nth-last-child(12)$ \\\\\n\n\\hline\\multirow{2}{*}{qatarairways.com.qa:DepartureDate} \n&HDEF&$[style*=\"margin-bottom\\:10px\"]:nth-child(2) > TBODY:nth-child(1):nth-last-child(1) > TR:nth-child(1):nth-last-child(1) > :nth-child(4)$\\\\&HiSyn&$TABLE > TBODY:nth-child(1):nth-last-child(1) > TR:nth-child(1):nth-last-child(1) > :nth-child(4)$\\\\\n\n\\hline\n\\end{tabular}\n}\n\\caption{HDEF vs ReSyn Programs}\n\\end{table*}\n\n\n\\subsection{Hierarchical Landmarks}\n\nConsider again the email in Figure~\\ref{fig:formed_documents}b) and a variant where the term\n\\emph{Pick-up} has been replaced by \\emph{Depart}.\nNow, using the landmark \\emph{Depart} for extraction will unintentionally also extract the car trip departure time.\nIn Section~\\ref{sec:overview}, the human used a hierarchy of landmarks (i.e.,\n\\emph{AIR} followed by \\emph{Depart}) to obtain the correct results.\nAlgorithm~\\ref{algo:lrsyn} can similarly be extended to hierarchical\nextractions.\n\nSay we first synthesized a program $\\ensuremath{\\mathsf{Prog}}_0$ using\nAlgorithm~\\ref{algo:lrsyn} that uses the landmark \\emph{Depart}.\nWe run $\\ensuremath{\\mathsf{Prog}}_0$ on the training document and realize that a spurious\nlandmark location (car departure) is identified.\nIn this case, we use the correct landmark locations (i.e., the first and last\noccurrence of \\emph{Depart}) as a new annotation.\nRunning Algorithm~\\ref{algo:lrsyn} with this annotation will produce a program\n$\\ensuremath{\\mathsf{Prog}}_1$ that uses \\emph{AIR} as a landmark and extracts precisely the\nrelevant occurrences of \\emph{Depart}.\nAt inference time, we run $\\ensuremath{\\mathsf{Prog}}_1$ to identify only the correct occurrences\nof \\emph{Depart} and then run $\\ensuremath{\\mathsf{Prog}}_0$ starting with 
only those\noccurrences of \\emph{Depart}.\nIn our implementation in Section~\\ref{sec:evaluation}, we have implemented the\nfull hierarchical extraction algorithm for the HTML domain.\n\n\\begin{comment}\nThat is, if \\emph{AIR} was also ambiguous, we create a new annotation with only\nthe relevant occurrences of \\emph{AIR} and iterate.\n\nAlgorithm~\\ref{algo:hierarchy} depicts this iterative process in an informal\nway.\nFor now, we are assuming only one document in the training set to simplify\npresentation.\nIn the first step, the algorithm runs the standard $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ algorithm to obtain\n$\\ensuremath{\\mathsf{Prog}}$ and compares the results of running $\\ensuremath{\\mathsf{Prog}}$ against the\nannotations.\nIf they are equal, we return the learned program.\nIf not, we identify the landmark locations that lead to ``good'' extractions.\nThat is, we run $\\ensuremath{\\mathsf{Prog}}$ on $\\ensuremath{\\mathsf{doc}}$, but restrict ourselves to a single\nlandmark location after finding the landmark locations in $\\ensuremath{\\mathsf{Locate}}$.\nWe continue running the rest of the programs with this single location, and if\nthe results we obtain are a subset of the annotations, it is considered good.\nWe then use these good landmark locations as a new annotation and recursively\ncall the algorithm, and finally return a hierarchical program which combines\n$\\ensuremath{\\mathsf{Prog}}$ with the results of the recursive call.\n\n\\begin{algorithm}\n\\small\n \\caption{Hierarchical Landmark based Synthesis}\n \\label{algo:hierarchy}\n\\begin{algorithmic}[1]\n\\Require Training set $\\ensuremath{\\dataset_\\train} = \\{ \\ensuremath{\\mathsf{doc}} \\}$\n\\Require Annotation $\\ensuremath{\\mathcal{A}}$ \n\\Require Region program DSL $\\mathcal{L}_{rg}$\n\\Require Extraction program DSL $\\mathcal{L}_{ex}$\n\\State $\\ensuremath{\\mathsf{Prog}} \\gets \\ensuremath{\\mathsf{LRSyn}}\\xspace(\\ensuremath{\\dataset_\\train}, \\ensuremath{\\mathcal{A}}, \\mathcal{L}_{rg}, \\mathcal{L}_{ex})$\n\\State $\\mathsf{Results} \\gets \\ensuremath{\\mathsf{Prog}}(\\ensuremath{\\mathsf{doc}})$\n\\If{$\\mathsf{Results} = \\ensuremath{\\mathcal{A}}(\\ensuremath{\\mathsf{doc}})$}\n\\Return $\\langle \\ensuremath{\\mathsf{Prog}} \\rangle$\n\\EndIf\n\\State $\\ensuremath{\\mathsf{Locs}} \\gets \\ensuremath{\\mathsf{Locate}}(\\ensuremath{\\mathsf{Prog}}.\\mathsf{Landmark}, \\ensuremath{\\mathsf{doc}})$\n\\State $\\ensuremath{\\mathsf{Locs}}_\\mathsf{good} \\gets\n \\{ \\ensuremath{\\ell} \\mid \\ensuremath{\\ell} \\in \\ensuremath{\\mathsf{Locs}} \\land \\ensuremath{\\mathsf{Prog}}(\\ensuremath{\\mathsf{doc}}, \\ensuremath{\\ell})\n \\subseteq \\ensuremath{\\mathcal{A}}(\\ensuremath{\\mathsf{doc}}) \\}$\n\\State $\\ensuremath{\\mathcal{A}}' \\gets \\{ \\ensuremath{\\mathsf{doc}} \\mapsto \\ensuremath{\\mathsf{Locs}}_\\mathsf{good} \\}$\n\\State \\Return $\\langle \\ensuremath{\\mathsf{Prog}} \\rangle +\n \\mathsf{HierarchicalLRSyn}(\\ensuremath{\\mathsf{doc}}, \\ensuremath{\\mathcal{A}}', \\mathcal{L}_{rg}, \\mathcal{L}_{ex})$\n\\end{algorithmic}\n\\end{algorithm}\n \n\\end{comment}\n\n\\subsection{Robustness of \\ensuremath{\\mathsf{LRSyn}}\\xspace\\xspace}\n\n\\begin{comment}\nManually examining our datasets, we have seen $4$ common ways in which formats\nchange.\nExpressing these in terms of ROIs, we see changes to:\n\\begin{inparaenum}[(1)]\n \\item the format outside the ROIs,\n \\item the format inside the ROIs,\n \\item the position of the ROIs inside the document, and\n 
 \item the number of ROIs in the document.\n\end{inparaenum}\nOf these, \ensuremath{\mathsf{LRSyn}}\xspace\xspace handles $3$ kinds of variations (1, 3, and 4) to a greater\nor lesser extent.\nFor changes inside the ROIs (2), we discuss how \ensuremath{\mathsf{LRSyn}}\xspace\xspace can be leveraged to\nease the task of maintaining and updating the system to handle the new formats.\n\nThe major advantage \ensuremath{\mathsf{LRSyn}}\xspace\xspace has over related techniques is the use of the\n$\ensuremath{\mathsf{Locate}}$ function to narrow down on the ROIs using data content rather than the\ndocument structure, making structural format changes outside the ROIs \nand changes to the position of ROIs inside the document mostly irrelevant.\nOne special case we need to discuss is when the format change adds a new\nlandmark location outside the ROIs.\nFor example, in Figure~\ref{fig:xxx}, suppose the format changes adding the word\n\emph{Depart}.\nWe have two separate cases:\n\begin{compactitem}\n \item If the word \emph{Depart} is added in a position where the local\n structure is different, the region blueprint learned by \ensuremath{\mathsf{LRSyn}}\xspace\xspace will\n filter out this new landmark location.\n %\n For example, if \emph{Depart} is part of an advertisement \emph{Depart today\n for your dream destination!}, the local structure around the new occurrence\n of the landmark is significantly different.\n %\n \item On the other hand, the word may be added in a context which is similar\n to the ROIs.\n %\n For example, as a part of a new \emph{TRAIN} section that is similar to the\n \emph{AIR} section.\n %\n In this case, \ensuremath{\mathsf{LRSyn}}\xspace\xspace does not naturally handle the change in the\n variation, unless appropriate hierarchical landmarks are present in the program.\n %\n In general, this kind of landmark addition is problematic for \ensuremath{\mathsf{LRSyn}}\xspace\xspace.\n\end{compactitem}\n\end{comment}\n\nBy design, \ensuremath{\mathsf{LRSyn}}\xspace\xspace is robust to format variations that change\n\begin{inparaenum}[(a)]\n \item document structure outside the ROIs,\n \item position of ROIs, and\n \item the number of ROIs.\n\end{inparaenum}\nHowever, there are two clear limitations to the robustness of \ensuremath{\mathsf{LRSyn}}\xspace.\n%\nFirst, if a format changes by adding a new part (e.g., a car departure time) that\ncontains the landmark used by the \ensuremath{\mathsf{LRSyn}}\xspace-generated program, the program may\nproduce a spurious output.\n%\nThis is only a problem if the newly added part also has a blueprint similar to\nthose of the existing ROIs.\n%\nFor example, the program will not be misled by a new banner advertisement\nsaying \emph{Depart today for your dream destination!}\n%\nThe second limitation arises when the format inside the ROI changes.\n%\nIn this case, the underlying assumption of invariant local structure\nis violated, and \ensuremath{\mathsf{LRSyn}}\xspace is unlikely to cope with this variation.\n%\nOne possible solution, which we discuss in Section~\ref{sec:conclusion}, is to use a trained ML model to automatically re-synthesize, as in~\cite{ij2019}.\n\n\begin{comment}\nThe other variation that \ensuremath{\mathsf{LRSyn}}\xspace\xspace is not naturally robust to (2 in the\nlist above), sometimes the robustness of the underlying program synthesis\ntechnique is sufficient.\nFor example, if the text in the ROI in Figure~\ref{fig:xxx} changed to 
\\emph{To\nDenver (DEN) at 8:18 PM on Friday, April 3}, the underlying text extraction\ntechnique (FlashFill) will naturally handle the variation.\nHowever, non-robustness to variations within ROIs is a natural limitation of\n\\ensuremath{\\mathsf{LRSyn}}\\xspace\\xspace.\nOne potential solution is to use an ML model to account for this variation as\nin~\\cite{hdef}.\nHowever, \\ensuremath{\\mathsf{LRSyn}}\\xspace\\xspace does provide a significant advantage here: the model can be\ntrained on significantly smaller inputs (i.e., the ROIs) compared to the\n\\cite{hdef} (which uses the whole document).\n\\end{comment}\n\n\\begin{comment}\n \n\\subsection{Practical advantages}\n\nIn addition to robustness discussed above, \\ensuremath{\\mathsf{LRSyn}}\\xspace\\xspace provides several practical\nadvantages compared to similar techniques.\nFor one, we do not need an ML model to achieve similar results to other\ntechniques.\nThis greatly expands the kind of devices on which the technique can be deployed,\ni.e., on low power portable devices like inexpensive phones.\n\nSecondly, compared to~\\cite{hdef}, the number and size of programs that are\ngenerated by \\ensuremath{\\mathsf{LRSyn}}\\xspace\\xspace are significantly lower.\nThis makes manual examination of the programs generated feasible, allowing an\nextraction system developer to gain confidence in the program generated.\nFor example, in the program in Figure~\\ref{fig:yyy}, it is significantly easier\nto examine the correctness of the \\ensuremath{\\mathsf{LRSyn}}\\xspace program---{\\color{red} write one short\nsentence after the figure if finalized}\n\nAnother significant advantage is the extraction performance.\nThis is very relevant in industrial settings where an extraction system is\nexpected to process billions of documents each day ({\\color{red} put numbers\nfrom Amit?}).\nThe first step of executing an \\ensuremath{\\mathsf{LRSyn}}\\xspace program is using the $\\ensuremath{\\mathsf{Locate}}$ function to\nnarrow down on a small part of the input document.\nIn most cases, the $\\ensuremath{\\mathsf{Locate}}$ function is extremely inexpensive, involving just\nstring searches.\nHence, the rest of the program runs on a small sub-document and hence, is quite\nefficient as well.\nIn our experiments, programs learned by \\ensuremath{\\mathsf{LRSyn}}\\xspace outperformed those\nfrom~\\cite{hdef} by a factor of {\\color{red}$XXX$}.\n\n\\arsays{Training data required. 
This is not as easy to explain.}\n\\end{comment}\n\\subsection{HTML extraction}\n\\label{sec:html-eval}\nOur HTML document dataset, called the {\\em machine-to-human (M2H)} email\ndataset, consists of anonymized flight reservation emails.\nIt consists of $3503$ emails from $6$ different flight providers and is divided\ninto training and test sets of size $362$ and $3141$, respectively.\nFor each provider, we synthesize and test programs using Algorithm~\\ref{algo:lrsyn},\nas instantiated in Section~\\ref{sec:html}.\nWe compare against $2$ state-of-the-art techniques, namely\n$\\ensuremath{\\mathsf{NDSyn}}\\xspace$~\\cite{ij2019} and $\\textsf{ForgivingXPaths}$~\\cite{omari2017synthesis}.\n\n\\paragraph{Overall results}\nTable~\\ref{table:m2hoverall}~shows the average precision, recall and F1 scores\nacross various extraction tasks for $\\ensuremath{\\mathsf{ForgivingXPaths}}$, $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ and $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ for both\ncontemporary and longitudinal setting.\nAs seen in the table, $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ has near perfect precision and recall, with\n$\\ensuremath{\\mathsf{NDSyn}}\\xspace$ performing quite well with numbers $>0.9$.\nFurther, as expected, the gap between $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ and $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ is higher in the longitudinal dataset indicating that $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ can cope with format variations better than $\\ensuremath{\\mathsf{NDSyn}}\\xspace$.\n\nUnlike $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ or $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ which use a combination of structure and text\nprograms for extraction, $\\textsf{ForgivingXPaths}$ only outputs XPaths which\ncorrespond to the entire node, rather than the sub-text contained within that\nnode.\nConsequently, it has high recall and poor precision when the field value is a substring of the entire DOM node text.\nWe therefore omit it from the more detailed results below.\n\n\\begin{table}\n\\scriptsize\n\\begin{tabular}{|l|c|c|c|c|}\n\\hline\n\\multicolumn{4}{|c|}{Contemporary} \\\\\n\\hline\nMetric & $\\ensuremath{\\mathsf{ForgivingXPaths}}$ & $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ & $\\ensuremath{\\mathsf{LRSyn}}\\xspace$\\\\\n\\hline\nAvg. Precision & 0.17 & 0.96 & 1.00\\\\ \\hline\nAvg. Recall & 0.99 & 0.91 & 1.00\\\\ \\hline\nAvg. F1 & 0.22 & 0.93 & 1.00 \\\\ \\hline\n\\end{tabular}\n\\begin{tabular}{|l|c|c|c|c|}\n\\hline\n\\multicolumn{4}{|c|}{Longitudinal} \\\\\n\\hline\nMetric & $\\ensuremath{\\mathsf{ForgivingXPaths}}$ & $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ & $\\ensuremath{\\mathsf{LRSyn}}\\xspace$\\\\\n\\hline\nAvg. Precision & 0.15 & 0.99 & 1.00\\\\ \\hline\nAvg. Recall & 0.98 & 0.89 & 1.00\\\\ \\hline\nAvg. 
F1 & 0.20 & 0.92 & 1.00 \\ \hline\n\end{tabular}\n\caption{Overall scores of $\ensuremath{\mathsf{LRSyn}}\xspace$ and $\ensuremath{\mathsf{NDSyn}}\xspace$ on the M2H Contemporary and Longitudinal datasets}\n\label{table:m2hoverall}\n\end{table}\n\n\n\paragraph{Detailed comparison}\nTable~\ref{tab:m2hexpmt1} shows a more detailed drill-down of the F1 scores for\n$\ensuremath{\mathsf{NDSyn}}\xspace$ and $\ensuremath{\mathsf{LRSyn}}\xspace$ in the two settings.\nIn summary, in all cases, the hit to the F1 scores comes from lower precision\nrather than lower recall.\n\n\highlightbox{\n$\ensuremath{\mathsf{LRSyn}}\xspace$ is very robust to variations in the longitudinal\nsetting, achieving a $>95\%$ F1 score in all $53$ fields, with a perfect F1 score of $1.00$ in $49$ cases.\nIn comparison, $\ensuremath{\mathsf{NDSyn}}\xspace$ achieves $>95\%$ and perfect scores in $40$ and $33$ cases, respectively.\n}\nIn many applications, having a score of $1.00$ is crucial, i.e., even a system\nwith $0.99$ precision cannot be deployed in practice.\nFor example, even a tiny imprecision in a system that automatically adds\ncalendar entries based on flight reservation emails is disastrous for millions\nof users.\nComparing the numbers precisely, $\ensuremath{\mathsf{LRSyn}}\xspace$ outperforms $\ensuremath{\mathsf{NDSyn}}\xspace$ in 19 and 20 out\nof the 53 fields in the contemporary and longitudinal settings, respectively.\nIn the remaining fields, the two approaches have comparable F1 scores.\n\nWe examined the domains \emph{aeromexico} and \emph{mytrips.amexgbt}, where both\n$\ensuremath{\mathsf{LRSyn}}\xspace$ and $\ensuremath{\mathsf{NDSyn}}\xspace$ achieved perfect scores. \nIn the \emph{aeromexico} domain, each field has a unique, dedicated ID attribute\nin the HTML; these attributes act as \emph{implicit landmarks}, and both $\ensuremath{\mathsf{NDSyn}}\xspace$ and\n$\ensuremath{\mathsf{LRSyn}}\xspace$ are able to latch on to them.\nFor example, the arrival and departure city DOM nodes have the IDs\n\emph{arrival-city} and \emph{departure-city}, and $\ensuremath{\mathsf{NDSyn}}\xspace$ produces a program\nthat searches for this ID across the whole document, emulating the landmark\nlocation step of an $\ensuremath{\mathsf{LRSyn}}\xspace$ program.\n\nIn the \emph{mytrips.amexgbt} domain, the $\ensuremath{\mathsf{NDSyn}}\xspace$ program, while perfectly\naccurate on all the variations in our dataset, is very fragile.\nIn the final CSS selector step of the web extraction component, it looks for the $10^{th}$ child of a DOM element corresponding to a flight details section.\nIncidentally, all new sections (car reservations, hotel reservations, etc.) added\nin the new variations have at most $5$ children, and hence, they are\nautomatically ignored by the $\ensuremath{\mathsf{NDSyn}}\xspace$ program.\nAny new variation that adds a long enough section will break this program.\nIn contrast, the $\ensuremath{\mathsf{LRSyn}}\xspace$ program narrows down on the right region with a\nlandmark and is resistant to such variations.\n\n\begin{table*}\n\scriptsize\n\begin{tabular}{|l|c|c|c||c|c|}\n\hline\n & & \multicolumn{2}{|c||}{ Contemporary} & \multicolumn{2}{|c|}{Longitudinal} \\\n {Fields} & Domain & $\ensuremath{\mathsf{NDSyn}}\xspace$ & $\ensuremath{\mathsf{LRSyn}}\xspace$ & $\ensuremath{\mathsf{NDSyn}}\xspace$ & $\ensuremath{\mathsf{LRSyn}}\xspace$ \\\n\hline\n\hline\n AIata & 
\\multirow[b]{4}{*}{ifly} & 0.81 & \\textbf{1.00} & 0.64 & \\textbf{1.00} \\\\\n ATime & & 0.76 & \\textbf{1.00} & 0.62 & \\textbf{1.00} \\\\\n DIata & & 0.73 & \\textbf{1.00} & 0.55 & \\textbf{1.00} \\\\\n DDate & & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n DTime & alaska & 0.73 & \\textbf{1.00} & 0.55 & \\textbf{1.00} \\\\\n FNum & \\multirow[t]{4}{*}{air} & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n Name & & 1.00 & 1.00 & 0.99 & 0.99 \\\\\n Pvdr & & -- & -- & -- & -- \\\\\n RId & & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n\n\\hline\n AIata & \\multirow{9}{*}{airasia} & 0.67 & \\textbf{1.00} & 0.67 & \\textbf{1.00} \\\\\n ATime & & NaN & \\textbf{1.00} & NaN & \\textbf{1.00} \\\\\n DIata & & 0.67 & \\textbf{1.00} & 0.67 & \\textbf{1.00} \\\\\n DDate & & 0.67 & \\textbf{1.00} & 0.67 & \\textbf{1.00} \\\\\n DTime & & NaN & \\textbf{1.00} & NaN & \\textbf{1.00} \\\\\n FNum & & 1.00 & 1.00 & 0.96 & 0.96 \\\\\n Name & & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n Pvdr & & 1.00 & 1.00 & 0.96 & 0.96 \\\\\n RId & & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n\n\\hline \n\\end{tabular}\n\\begin{tabular}{|c|c|c||c|c|}\n\\hline\n & \\multicolumn{2}{|c||}{ Contemporary} & \\multicolumn{2}{|c|}{Longitudinal} \\\\\n {Domain} & $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ & $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ & $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ & $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ \\\\\n\\hline\n\\hline\\multirow{9}{*}{getthere}\n\n & 0.75 & \\textbf{1.00} & 0.74 & \\textbf{1.00} \\\\\n & 0.94 & \\textbf{1.00} & 0.91 & \\textbf{1.00} \\\\\n & 0.94 & \\textbf{1.00} & 0.95 & \\textbf{1.00} \\\\\n & 0.98 & \\textbf{1.00} & 0.95 & \\textbf{1.00} \\\\\n & 0.76 & \\textbf{1.00} & 0.78 & \\textbf{1.00} \\\\\n & 0.98 & \\textbf{1.00} & 0.98 & \\textbf{1.00} \\\\\n & 1.00 & 1.00 & 0.89 & \\textbf{1.00} \\\\\n & 0.98 & \\textbf{1.00} & 0.97 & \\textbf{1.00} \\\\\n & 0.93 & \\textbf{1.00} & 0.94 & \\textbf{1.00} \\\\\n\n\\hline\\multirow{9}{*}{delta}\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 0.94 & \\textbf{1.00} & 0.95 & \\textbf{1.00} \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 0.85 & \\textbf{0.97} & 0.91 & \\textbf{0.97} \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n\n\\hline \n\\end{tabular}\n\\begin{tabular}{|c|c|c||c|c|}\n\\hline\n & \\multicolumn{2}{|c||}{ Contemporary} & \\multicolumn{2}{|c|}{Longitudinal} \\\\\n {Domain} & $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ & $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ & $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ & $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ \\\\\n\\hline\n\\hline\\multirow[b]{4}{*}{aero} & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n\\multirow[t]{4}{*}{mexico} & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n\n\\hline\n\\multirow[b]{4}{*}{mytrips} & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n amex & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n \\multirow[t]{3}{*}{gbt} & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n\n\\hline \n\\end{tabular}\n\\caption{F1 scores of \\ensuremath{\\mathsf{NDSyn}}\\xspace\\ and \\ensuremath{\\mathsf{LRSyn}}\\xspace\\ for M2H HTML dataset. 
The Pvdr field is not relevant for iflyalaskaair}\n\\label{tab:m2hexpmt1}\n\\vspace{-5ex}\n\\end{table*}\n\n\n\\begin{table}\n\\scriptsize\n\\begin{tabular}{|c|l|c|c|c|}\n\\hline\n{Domain} & Fields & $\\ensuremath{\\mathsf{AFR}}$ & $\\ensuremath{\\mathsf{LRSyn}}\\xspace$\\\\\n\\hline\\multirow{9}{*}{AccountsInvoice}\n & Amount & 0.99 & \\textbf{1.00} \\\\\n & Chassis & 0.82 & \\textbf{0.99} \\\\\n & CustAddr & 0.98 & \\emph{0.96} \\\\\n & Date & 0.93 & \\textbf{0.98} \\\\\n & Dnum & 0.96 & \\textbf{0.97} \\\\\n & Engine & 0.82 & \\textbf{1.00} \\\\\n & InvoiceAddress & 0.90 & \\textbf{0.95} \\\\\n & Model & 0.75 & \\textbf{1.00} \\\\\n\n\\hline\\multirow{8}{*}{CashInvoice}\n & Amount & 1.00 & 1.00 \\\\\n & Chassis & 0.99 & 0.99 \\\\\n & CustAddr & 0.99 & \\emph{0.97} \\\\\n & Date & 0.99 & 0.99 \\\\\n & Dnum & 0.96 & 0.96 \\\\\n & Engine & 0.93 & \\textbf{0.95} \\\\\n & InvoiceAddress & 0.99 & 0.99 \\\\\n & Model & 0.99 & \\textbf{1.00} \\\\\n\n\\hline\\multirow{5}{*}{CreditNote}\n & Amount & 1.00 & 1.00 \\\\\n & CreditNoteAddress & 0.99 & \\textbf{1.00} \\\\\n & CreditNoteNo & 0.94 & \\emph{0.93} \\\\\n & CustRefNo & 1.00 & 1.00 \\\\\n & Date & 1.00 & 1.00 \\\\\n & RefNo & 1.00 & 1.00 \\\\\n\n\\hline\\multirow{6}{*}{SalesInvoice}\n & Amount & 1.00 & 1.00 \\\\\n & CustomerReferenceNo & 1.00 & 1.00 \\\\\n & Date & 1.00 & 1.00 \\\\\n & InvoiceAddress & 0.94 & \\textbf{0.99} \\\\\n & RefNo & 0.99 & 0.99 \\\\\n & SalesInvoiceNo & 0.99 & 0.99 \\\\\n\n\\hline\\multirow{6}{*}{SelfBilledCreditNote}\n & Amount & 1.00 & 1.00 \\\\\n & CustomerAddress & 1.00 & \\emph{0.99} \\\\\n & CustomerReferenceNo & 0.99 & 0.99 \\\\\n & Date & 1.00 & 1.00 \\\\\n & DocumentNumber & 1.00 & 1.00 \\\\\n & VatRegNo & 1.00 & 1.00 \\\\\n\n\\hline\n\\end{tabular}\n\\caption{F1 scores for Finance dataset}\n\\label{tab:financedataset}\n\\vspace{-3ex}\n\\end{table}\n\n\\begin{table}\n\\scriptsize\n\\begin{tabular}{|l|p{1.5cm}|c|c|}\n\\hline\nFields & {Domain} & $\\ensuremath{\\mathsf{AFR}}$ & $\\ensuremath{\\mathsf{LRSyn}}\\xspace$\\\\\n \\hline\nAIata & \\multirow{9}{*}{aeromexico} & 0.62 & \\textbf{0.65} \\\\\nATime & & 0.69 & \\textbf{0.99} \\\\\nDIata & & 0.36 & \\textbf{0.66} \\\\\nDDate & & 0.71 & \\textbf{0.89} \\\\\nDTime & & 0.65 & \\textbf{0.97} \\\\\nFNum & & 0.66 & \\textbf{0.83} \\\\\nName & & 0.96 & \\textbf{0.98} \\\\\nPvdr & & 0.69 & \\textbf{0.78} \\\\\nRId & & 1.00 & 1.00 \\\\\n \n\\hline\n\\end{tabular}\n\\begin{tabular}{|p{1.5cm}|c|c|}\n\\hline\n{Domain} & $\\ensuremath{\\mathsf{AFR}}$ & $\\ensuremath{\\mathsf{LRSyn}}\\xspace$\\\\\n\\hline\n\\multirow{9}{*}{getthere} & 0.94 & \\textbf{1.00} \\\\\n & 0.87 & \\textbf{1.00} \\\\\n & 0.93 & \\textbf{1.00} \\\\\n & 0.96 & \\textbf{0.99} \\\\\n & 0.88 & \\textbf{1.00} \\\\\n & 0.94 & \\textbf{1.00} \\\\\n & 0.99 & 0.99 \\\\\n & 0.75 & \\textbf{1.00} \\\\\n & 0.89 & \\textbf{0.95} \\\\\n \n\\hline\n\\end{tabular}\n\n\\begin{tabular}{|l|p{1.5cm}|c|c|}\n\\hline\n{Fields} & Domain & $\\ensuremath{\\mathsf{AFR}}$ & $\\ensuremath{\\mathsf{LRSyn}}\\xspace$\\\\\n\\hline\nAIata & \\multirow[b]{4}{*}{ifly.} & 0.99 & \\textbf{1.00} \\\\\nATime & & 0.95 & \\textbf{1.00} \\\\\nDIata & & 0.98 & \\textbf{1.00} \\\\\nDDate & & 0.98 & \\emph{-} \\\\\nDTime & \\multirow[t]{4}{*}{alaskaair} & 0.95 & \\textbf{0.98} \\\\\nFNum & & 0.97 & \\textbf{1.00} \\\\\nName & & 0.98 & 0.98 \\\\\nPvdr & & 0.93 & \\textbf{0.99} \\\\\nRId & & 1.00 & \\emph{0.86} \\\\\n \n\\hline\n\\end{tabular}\n\\begin{tabular}{|p{1.5cm}|c|c|}\n\\hline\n{Domain} & $\\ensuremath{\\mathsf{AFR}}$ & 
$\\ensuremath{\\mathsf{LRSyn}}\\xspace$\\\\\n \\hline\n\\multirow[b]{4}{*}{mytrips} & 0.85 & \\textbf{0.98} \\\\\n & 0.97 & \\textbf{1.00} \\\\\n & 0.96 & \\textbf{0.99} \\\\\n & 0.93 & \\textbf{1.00} \\\\\n\\multirow[t]{4}{*}{amexgbt} & 0.99 & \\textbf{1.00} \\\\\n & 0.98 & \\textbf{1.00} \\\\\n & 0.98 & \\textbf{1.00} \\\\\n & 0.91 & \\textbf{0.99} \\\\\n & 0.61 & \\textbf{0.96} \\\\\n\n\\hline\n\\end{tabular}\n\\caption{F1 Score for M2H-Images Dataset}\n\\label{tab:m2himages}\n\\vspace{-2ex}\n\\end{table}\n\n\n\n\\subsection{Form Image Extraction}\n\\label{sec:forms-eval}\n\nWe consider two datasets of form images:\n\\begin{compactitem}\n\\item {\\em Finance} dataset:\nThis consists of ~850 images of receipts, purchase\norders, credit notes, sales invoice and similar such documents.\nHere, the training and test data are from the same time period and we only\nevaluate the contemporary setting.\n\\item {\\em M2H-Images} dataset:\nWe convert M2H emails from $4$ domains to images and extract the same fields as\nbefore.\nThis represents common scenarios in practice where HTML documents such\nas booking mails or receipts may be printed and then scanned again, say when\nexpense reports are filed.\nThe OCR service we use produced extremely poor\nresults on $2$ of the $6$ domains from the HTML experiments, and hence, we used only $4$ domains in this dataset (The same OCR service is used by our baseline as well, see below).\n\\end{compactitem}\nFor both datasets, we use a training data size of $10$ for each field, and compare\nagainst Azure Form Recognizer (\\ensuremath{\\mathsf{AFR}})~\\cite{AFR}, a cloud-based form data extraction service.\n\n\\paragraph{Overall Results}\nTable~\\ref{tab:imagesoverall}~shows the average precision, recall, F1 and\naccuracy scores for \\ensuremath{\\mathsf{AFR}}\\ and \\ensuremath{\\mathsf{NDSyn}}\\xspace\\, for both the Finance and M2H-image\ndatasets.\nAs we can see from the table, both $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ and $\\ensuremath{\\mathsf{AFR}}$ perform very well on the\nFinance dataset, with $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ performing marginally better.\nIn this dataset, the image formats do not vary much, resulting in these\nhigh-quality results.\n\\highlightbox{\n$\\ensuremath{\\mathsf{LRSyn}}\\xspace$ outperforms $\\ensuremath{\\mathsf{AFR}}$, a state-of-the-art industrial neural form extraction\nsystem with just $10$ training images per\nfield, having a precision of $0.97$ vs $0.90$ on the M2H-Images dataset.\n}\n\n\\begin{table}\n\\scriptsize\n\\parbox{.45\\linewidth}{\n\\centering\n\\begin{tabular}{|l|c|c|c|}\n\\hline\nMetric & $\\ensuremath{\\mathsf{AFR}} $ & $\\ensuremath{\\mathsf{LRSyn}}\\xspace$\\\\\n\\hline\nAvg. Pre. & 0.98\t& 0.99 \\\\ \\hline\nAvg. Rec. & 0.96\t& 0.99 \\\\ \\hline\nAvg. F1 & 0.97\t& 0.99 \\\\ \\hline\n\\end{tabular}\n\\caption*{Finance dataset}\n}\n\\hfill\n\\parbox{.45\\linewidth}{\n\\centering\n\\begin{tabular}{|l|c|c|c|}\n\\hline\nMetric & $\\ensuremath{\\mathsf{AFR}} $ & $\\ensuremath{\\mathsf{LRSyn}}\\xspace$\\\\\n\\hline\nAvg. Prec. & 0.90\t& 0.97 \\\\ \\hline\nAvg. Rec. & 0.93\t& 0.97 \\\\ \\hline\nAvg. 
\paragraph{Detailed comparison}\nTable~\ref{tab:financedataset}~shows the results of $\ensuremath{\mathsf{AFR}}$ and $\ensuremath{\mathsf{LRSyn}}\xspace$ on the Finance dataset across $34$ extraction tasks.\n$\ensuremath{\mathsf{LRSyn}}\xspace$ performs better than $\ensuremath{\mathsf{AFR}}$ in $12$ out of the $34$ cases and is on par with it on the rest, with significant gains in some domains such as ``AccountsInvoice''.\n\nThough the neural model in $\ensuremath{\mathsf{AFR}}$ is trained with thousands of invoices and\nreceipts, and further fine-tuned with our training data, we observe that it is\nsensitive to the region coordinates in a given document.\nIf these regions are translated, or if the document scan is tilted, $\ensuremath{\mathsf{AFR}}$\nproduces erroneous results.\nOn the other hand, $\ensuremath{\mathsf{LRSyn}}\xspace$ is partially robust to such changes as we use a text\nlandmark.\nIn some extraction tasks, $\ensuremath{\mathsf{AFR}}$ is marginally better than $\ensuremath{\mathsf{LRSyn}}\xspace$.\nThese are cases where there is no clear bounding text pattern around the field values;\n$\ensuremath{\mathsf{AFR}}$'s semantic understanding of the data is not\naffected by the absence of such patterns.\n\nTable~\ref{tab:m2himages}~shows the results of $\ensuremath{\mathsf{AFR}}$ and $\ensuremath{\mathsf{LRSyn}}\xspace$ on the M2H-Images\ndataset across $45$ extraction tasks.\nThis dataset exhibits more variation at the visual level than the\nFinance dataset, and hence, $\ensuremath{\mathsf{LRSyn}}\xspace$ performs better than $\ensuremath{\mathsf{AFR}}$ in $35$ out\nof the $45$ tasks and is on par on most of the remaining extraction tasks.\nThere is one specific case where $\ensuremath{\mathsf{LRSyn}}\xspace$ fails altogether, producing no\nprograms: when there is no local textual landmark geometrically near the\nfield value.\nHowever, the region around the field value may still be similar across documents.\nWe discuss the possibility of using visual landmarks as opposed to textual\nones in Section~\ref{sec:conclusion}.\n\n\paragraph{Summary of results}\nIn the HTML domain, the prior work \ensuremath{\mathsf{NDSyn}}\xspace\ is a high-performing system with F1 scores in the range of $0.9$. \ensuremath{\mathsf{LRSyn}}\xspace\ is able to push the F1 scores to a perfect $1.0$ in most cases. In longitudinal scenarios, \ensuremath{\mathsf{LRSyn}}\xspace\ improves on \ensuremath{\mathsf{NDSyn}}\xspace\ in 20 out of 53 fields, with a significant lift in F1 scores in many cases.\nIn the images domain, even with very little training data, \ensuremath{\mathsf{LRSyn}}\xspace\ matches AFR, which is a released product, on contemporary settings, and outperforms AFR on longitudinal settings. In addition, \ensuremath{\mathsf{LRSyn}}\xspace\ produces simpler, interpretable programs that match human intuition for extraction and are much easier to maintain. 
Hence, we see a lot of promise in this approach.\n\n
\\subsection{Nature of Synthesized Programs}\n\\label{sec:programs-eval}\n\nAdditionally, we performed a secondary analysis to understand the features of the $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ programs.\n\n
\\paragraph{Program size}\nIn the HTML domain, we compared the sizes of the $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ and $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ programs.\nSince the programs are naturally of different shapes, we only compare the web extraction part of the programs.\nThe final text extraction program is generally the same across both algorithms.\nNote that these numbers need to be taken in the context that $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ programs additionally have a landmark and a blueprint.\n
\\highlightbox{\n For the M2H dataset, the web extraction part of $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ programs has $2.95$ CSS selector components on average, as compared to $8.51$ for $\\ensuremath{\\mathsf{NDSyn}}\\xspace$.\n}\n\n
\\ignore{\nHere is an example region and text program synthesized for the \"Date\" field in \\emph{SalesInvoice} document, with respect to a landmark \"Date\". \\\\\nRegion program: $Extend(\"Date\", Relative(Right, EOL, false)$ \\\\\nText program: Extract a substring between \":\" and EOL\\\\\n\nThe region program starts from the landmark \"Date\" and extends right to collect all boxes until the end of line (EOL). And the corresponding text program extracts the text between a colon and EOL. We refer the readers to Table~\\ref{}~ in the Appendix for more examples. \\spsays{looks ugly, needs more work!!}\n\n
\\paragraph{Synthesized programs}\nFigures~\\ref{lst:ndsynprogram}~and~\\ref{lst:lrsynprogram} show an example extraction program synthesized by $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ and $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ respectively. \nAs seen from these, extraction programs synthesized by $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ programs are way more complex as they process the entire HTML page, when compared to the programs synthesized by $\\ensuremath{\\mathsf{LRSyn}}\\xspace$, which only process the local region of interest.\nConsequently, if the formats change, the programs synthesized by $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ continue to function correctly.\nFurther, the shape and size of the $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ programs makes them easy to \nexamine by hand and gain confidence in their correctness.\n\\arsays{Do we have any numbers here?}\n\\highlightbox{\nThe $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ programs are significantly smaller compared to $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ programs.\nFor the web extraction component, $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ programs have $XXX$ components\nin their CSS selectors as compared to $YYY$ for $\\ensuremath{\\mathsf{NDSyn}}\\xspace$.\n}\n}\n\n\n
\\paragraph{Quality of Inferred Landmarks}\nWe infer landmarks automatically using the techniques and scoring functions from Sections~\\ref{sec:algorithms} and~\\ref{sec:instantiations}.\nTo check the quality of landmark inference, we also asked data annotators to tag landmarks manually.\n
\\highlightbox{\nIn $57$ out of $63$ clusters across all fields, the inferred landmarks are the same as the manually provided landmarks.\nIn $5$ of the remaining $6$ cases, the human annotator agreed that the inferred landmark was of equal quality.\n}\n
In the remaining $1$ case, the algorithm chose the human-annotated landmark as well, but in addition chose a disambiguating hierarchical landmark of low quality.\nIn particular, the term \\emph{Name} occurred in reference to both the name of the passenger and the name in the billing address.\nHere, the algorithm chose to disambiguate using the term \\emph{Meal}, i.e., passengers have a meal preference while the billed person does not.\n\n
\\subsection{Robustness of Experimental Results}\n\\label{sec:results-robustness}\n\n\n
\\paragraph{Training set choice}\nIn our experiments, the training set is small compared to the full dataset, leading to a possibility of over-fitting, with different training sets potentially producing significantly differing results.\nHowever, our techniques are robust even with small training sets:\n\\begin{inparaenum}[(a)]\n \\item Landmark identification can leverage the full dataset of both labeled and unlabeled documents.\n \\item $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ does not need to see all format variations that differ only outside the ROIs, and a small set covering only the variations within the ROIs is sufficient.\n\\end{inparaenum}\n
To confirm this, we reran all our experiments on the M2H-HTML dataset with $4$ different randomly chosen training datasets.\nIn all runs, the F1 scores of the generated programs for each field and domain varied by no more than $0.01$ from the results presented in Table~\\ref{tab:m2hexpmt1}, confirming our hypothesis that the results are robust to training set choice.\n\n
\\paragraph{Landmark identification threshold}\nFor the landmark candidate score threshold, we picked threshold values that resulted in \\textasciitilde$10$ candidates for each case.\nTo study the robustness of the results to this choice, we reran all experiments with a threshold value that returned $2\\times$ as many candidates.\nThe obtained results were identical to the results presented in the previous
sections. \nThis is expected as ``bad'' landmark candidates are eliminated in subsequent\nsteps, i.e., there is usually no program that extracts the required field value\nstarting from the landmark.\nHence, as long as the threshold is high enough to allow for some good landmark\ncandidates, it does not matter how many bad landmark candidates are included.\n\n\n\\ignore{\n\\section{Ablation studies}\nThough the training set is small, landmark identification uses both labeled\nand unlabeled documents. Hence, signal from landmarks is robust.\nAlso, extraction is unaffected if changes happen outside ROIs, helping with\nrobustness. Table~\\ref{tab:ablation_trainingsize}~shows F1 scores for M2H-HTML dataset in the longitudinal for $4$ different training sets. As we can see, $LRSyn$ is robust even with small training set sizes.\n\nThe procedure LandmarkCandidates only return candidates with a score>threshold.\nWe chose this threshold based on subset of training data so that we obtained\n~10 candidates. We believe that the results are not very sensitive to this\nthreshold as \"bad landmarks\" will be eliminated in subsequent steps. We are\nwilling to add an experiment to show this.\n\n\\begin{table}\n\\scriptsize\n\\begin{tabular}{|l|p{1.5cm}|c|c|c|c|}\n\\hline\nFields & {Domain} & Seed1 & Seed2 & Seed3 & Seed4\\\\\n \\hline\nAIata & \\multirow{9}{*}{aeromexico} & 1.00 & 1.00 & 1.00 & 1.00\\\\\nATime & & 1.00 & 1.00 & 1.00 & 1.00\\\\\nDIata & & 1.00 & 1.00 & 1.00 & 1.00\\\\\nDDate & & 1.00 & 1.00 & 1.00 & 1.00\\\\\nDTime & & 1.00 & 1.00 & 1.00 & 1.00\\\\\nFNum & & 1.00 & 1.00 & 1.00 & 1.00\\\\\nName & & 1.00 & 1.00 & 1.00 & 1.00\\\\\nPvdr & & 1.00 & 1.00 & 1.00 & 1.00\\\\\nRId & & 1.00 & 1.00 & 1.00 & 1.00\\\\\n\\hline\n\\end{tabular}\n\\begin{tabular}{|l|p{1.5cm}|c|c|c|c|}\n\\hline\nFields & {Domain} & Seed1 & Seed2 & Seed3 & Seed4\\\\\n\\hline\nAIata & \\multirow{9}{*}{getthere} & 1.00 & 1.00 & 1.00 & 1.00\\\\\nATime & & 1.00 & 1.00 & 1.00 & 1.00\\\\\nDIata & & 1.00 & 1.00 & 1.00 & 1.00\\\\\nDDate & & 1.00 & 1.00 & 1.00 & 1.00\\\\\nDTime & & 1.00 & 1.00 & 1.00 & 1.00\\\\\nFNum & & 1.00 & 1.00 & 1.00 & 1.00\\\\\nName & & 1.00 & 1.00 & 1.00 & 1.00\\\\\nPvdr & & 1.00 & 1.00 & 1.00 & 1.00\\\\\nRId & & 1.00 & 1.00 & 1.00 & 1.00\\\\\n\\hline\n\\end{tabular}\n\\begin{tabular}{|l|p{1.5cm}|c|c|c|c|}\n\\hline\nFields & {Domain} & Seed1 & Seed2 & Seed3 & Seed4\\\\\n \\hline\nAIata & \\multirow[b]{4}{*}{ifly.} & 1.00 & 1.00 & 1.00 & 1.00\\\\\nATime & & 1.00 & 1.00 & 1.00 & 1.00\\\\\nDIata & & 1.00 & 1.00 & 1.00 & 1.00\\\\\nDDate & & 1.00 & 1.00 & 1.00 & 1.00\\\\\nDTime & \\multirow[t]{4}{*}{alaskaair} & 1.00 & 1.00 & 1.00 & 1.00\\\\\nFNum & & 1.00 & 1.00 & 1.00 & 1.00\\\\\nName & & 0.99 & 0.99 & 0.98 & 0.99\\\\\nRId & & 1.00 & 1.00 & 1.00 & 1.00\\\\\n\\hline\n\\end{tabular}\n\\begin{tabular}{|l|p{1.5cm}|c|c|c|c|}\n\\hline\nFields & {Domain} & Seed1 & Seed2 & Seed3 & Seed4\\\\\n\\hline\nAIata & \\multirow[b]{4}{*}{mytrips} & 1.00 & 1.00 & 1.00 & 1.00\\\\\nATime & & 1.00 & 1.00 & 1.00 & 1.00\\\\\nDIata & & 1.00 & 1.00 & 1.00 & 1.00\\\\\nDDate & & 1.00 & 1.00 & 1.00 & 1.00\\\\\nDTime & \\multirow[t]{4}{*}{amexgbt} & 1.00 & 1.00 & 1.00 & 1.00\\\\\nFNum & & 1.00 & 0.98 & 1.00 & 0.98\\\\\nName & & 1.00 & 1.00 & 1.00 & 1.00\\\\\nPvdr & & 1.00 & 0.98 & 1.00 & 0.98\\\\\nRId & & 0.98 & 1.00 & 1.00 & 1.00\\\\\n\\hline\n\\end{tabular}\n\n\\caption{F1 Score for M2H-HTML Dataset in Longitudinal setting for $4$ different training 
sets}\n\\label{tab:ablation_trainingsize}\n\\vspace{-2ex}\n\\end{table}\n}\n\n\\subsection{HTML documents}\n\\label{sec:html}\n\n\n\\subparagraph{Landmarks and Landmark Candidates}\nWe use \\emph{$n$-grams} as landmarks ($n \\leq 5$), and the $\\ensuremath{\\mathsf{Locate}}$ function\nlists all the document DOM nodes and finds those containing the landmark\n$n$-gram.\nThe $\\ensuremath{\\mathsf{LandmarkCandidates}}$ procedure for identifying the top landmark\ncandidates lists all $n$-grams in the document, filters out those containing\nstop words, retains those $n$-grams common to all documents in the cluster, and\nthen scores them according to the criteria from Section~\\ref{sec:algorithms}.\nIn particular, the score for a landmark candidate $\\ensuremath{\\mathsf{m}}$ is given by a\nweighted sum of:\n\\begin{inparaenum}[(a)]\n \\item the number of nodes in the path from the DOM nodes corresponding to\n $\\ensuremath{\\mathsf{m}}$ and field value $\\ensuremath{v}$, \n \\item the number of nodes in the smallest region enclosing both $\\ensuremath{\\mathsf{m}}$\n and $\\ensuremath{v}$, and\n \\item the Euclidean distance between $\\ensuremath{\\mathsf{m}}$ and $\\ensuremath{v}$ in the rendered\n document.\n\\end{inparaenum}\n\n\\subparagraph{Blueprints}\nWe define the blueprint of a region to be the set of XPaths to the \\emph{common\nvalue} DOM nodes in the region, ignoring the DOM node order.\nFor example, the XPath \\texttt{body[1]\/table[4]\/tr[3]\/td[2]} is\nsimplified to \\texttt{body\/table\/tr\/td} before adding it to the blueprint set. \n\n\\subparagraph{Region Extraction DSL}\nA program in the region extraction DSL $\\mathcal{L}_{rx}$ is a pair of integers $(\\ensuremath{\\mathsf{parentHops}}, \\ensuremath{\\mathsf{siblingHops}})$\nof \\emph{parent hops} and \\emph{sibling hops}.\nGiven a landmark location $\\ensuremath{\\ell}$, the semantics of the $(\\ensuremath{\\mathsf{parentHops}},\n\\ensuremath{\\mathsf{siblingHops}})$ is as follows:\n\\begin{inparaenum}[(a)]\n \\item from $\\ensuremath{\\ell}$ go up the DOM tree $\\ensuremath{\\mathsf{parentHops}}$ steps to get node $n_1$,\n \\item from $n_1$ go $\\ensuremath{\\mathsf{siblingHops}}$ right to obtain node $n_2$, and\n \\item the result is the set of all descendants of all sibling nodes between\n $n_1$ and $n_2$ (inclusive).\n\\end{inparaenum} \nFor synthesizing program in $\\mathcal{L}_{rg}$ given the landmark location $\\ensuremath{\\ell}$\nand the annotated location $\\ensuremath{\\mathsf{Locs}}$, we first take the lowest common\nancestor (LCA) $n$ of $\\ensuremath{\\ell}$ and all nodes in $\\ensuremath{\\mathsf{Locs}}$.\nThe $\\ensuremath{\\mathsf{parentHops}}$ is given by the difference in depths of $n$ and $\\ensuremath{\\ell}$\nminus $1$, and $\\ensuremath{\\mathsf{siblingHops}}$ is given by the difference in index of the\nleft-most and right-most child of $n$ that have one of $\\ensuremath{\\ell}$ or\n$\\ensuremath{\\mathsf{Locs}}$ as a descendant.\n\n\\subparagraph{Value Extraction DSL}\nFor the value extraction DSL $\\mathcal{L}_{vx}$, we build upon the synthesis techniques\nfrom~\\cite{raza2017automated} and~\\cite{flashfill}, as in~\\cite{ij2019}.\nWe do not discuss the DSL and synthesis techniques in detail, but refer the\nreader to~\\cite{ij2019}.\nFrom a bird's eye view, a program in $\\mathcal{L}_{vx}$ consists of two parts: a web\nextraction program which extracts the particular DOM node which contains the\nfield value, and a text extraction program which extracts the field value from\nthe text present in the 
extracted DOM node.\nGiven an example $\\ensuremath{\\mathsf{R}} \\mapsto \\ensuremath{v}$, the synthesis procedure first finds\nthe DOM node $n$ which contains the text $\\ensuremath{v}$.\nThen, we use $\\ensuremath{\\mathsf{R}} \\mapsto n$ as the example to synthesize the web extraction\nprogram using techniques from~\\cite{raza2017automated}, and $\\mathsf{text}(n) \\mapsto\n\\ensuremath{v}$ as the example to synthesize the text extraction program using\ntechniques from~\\cite{flashfill}.\n\n\n\\begin{example}\n Consider the task of extracting departure time from the email in Figure~\\ref{fig:formed_documents} (a). The synthesized region extraction, value extraction programs and blueprint are shown in Figure~\\ref{lst:lrsynprogram}.\n\\end{example}\n\n\n\\subsection{Form Images}\n\nThis domain concerns images that are obtained by scanning or photographing of\nphysical paper documents.\nThese images are first processed by an Optical Character Recognition\n(OCR) technique to obtain a list of text boxes along with their coordinates.\nThe form images domain is significantly more complex than the HTML domain as:\n\\begin{inparaenum}[(a)]\n \\item The OCR output is generally very noisy, sometimes splitting up field\n values into a varying number of different text boxes.\n %\n \n \n \n %\n \\item These documents do not come equipped with a hierarchical structure\n that defines natural regions.\n %\n \n \n\\end{inparaenum}\n\n\\subparagraph{Landmarks and Landmark Candidates}\nAs in the HTML case, we use $n$-grams as landmarks.\nThe $\\ensuremath{\\mathsf{Locate}}$ and $\\ensuremath{\\mathsf{LandmarkCandidates}}$ functions work similarly to the HTML\ncase with OCR output text boxes replacing DOM nodes.\nThe scoring function for $\\ensuremath{\\mathsf{LandmarkCandidates}}$ computes the score for a landmark\ncandidate $\\ensuremath{\\mathsf{m}}$ as a weighted sum of:\n\\begin{inparaenum}[(a)]\n \\item the Euclidean distance between $\\ensuremath{\\mathsf{m}}$ and field value $\\ensuremath{v}$, and\n \\item the area of the smallest rectangle that encloses both $\\ensuremath{\\mathsf{m}}$ and $\\ensuremath{v}$.\n\\end{inparaenum}\n\n\\subparagraph{Blueprints}\nRather than considering all common boxes for blueprinting as in the HTML case,\nwe instead use only the boxes containing the top $50\\%$ most frequent $n$-grams.\nThe blueprint of a region is defined to be the $\\ensuremath{\\mathsf{BoxSummary}}$ of each such box\ntaken in document order.\nThe $\\ensuremath{\\mathsf{BoxSummary}}$ of $\\ensuremath{\\mathsf{box}}$ consists of $2$ parts:\n\\begin{inparaenum}[(a)]\n \\item The frequent $n$-gram that is present in the box, and\n \\item For each of the directions top, left, right, and bottom, the content\n type in the text box that immediately neighbors $\\ensuremath{\\mathsf{box}}$ in the direction.\n %\n The content type of a box is either:\n %\n \\begin{inparaenum}[(1)]\n \\item $\\bot$ if the box does not exist,\n \\item the frequent $n$-gram in the text of the box if one exists, and\n \\item $\\top$ if the box exists, but does not contain a frequent $n$-gram.\n \\end{inparaenum}\n\\end{inparaenum}\n\n\\begin{example}\n Consider the text box enclosing \\emph{Engine number} in the Accounts Invoice image in\n Figure~\\ref{fig:formed_documents}(c).\n %\n The $n$-gram \\emph{Engine number} is frequent and hence, is included in the\n blueprint.\n %\n The $\\ensuremath{\\mathsf{BoxSummary}}$ of the box is given by:\n %\n $\\langle\n \\mathsf{ngram} \\mapsto \\text{\\emph{Engine number}},\n 
\\mathsf{Top} \\mapsto \\bot,\n \\mathsf{Left} \\mapsto \\text{\\emph{Chassis number}},$\n $\\mathsf{Right} \\mapsto \\text{\\emph{Reg Date}}, \n \\mathsf{Bot} \\mapsto \\top\n \\rangle$\n %\n Here, \\emph{Engine number} and \\emph{Reg Date} are also a frequent $n$-grams, while the value of\n the Engine number \\emph{4713872198212} is not.\n\\end{example}\n\n\\begin{figure}\n\\begin{alltt}\nRProg := Disjunct(path, path, ...)\npath := input | Expand(path, motion)\nmotion := Absolute(dir, k)\n | Relative(dir, pattern, inclusive)\ndir := Top | Left | Right | Bottom\n\\end{alltt}\n\\vspace{-2ex}\n\\caption{The Form Images Region extraction DSL $\\mathcal{L}_{rx}$}\n\\label{fig:region_dsl}\n\\end{figure}\n\n\\subparagraph{Region Extraction DSL}\nFigure~\\ref{fig:region_dsl} depicts a novel region extraction DSL $\\mathcal{L}_{rx}$ for this domain.\n$\\mathcal{L}_{rx}$ has the following components:\n The top operator is a disjunction of \\emph{path programs}:\n operationally, these programs are executed in sequence and the first\n non-null result is returned.\n %\n Due to the OCR noise and variations in form images, often a single\n non-disjunctive path program is not sufficient.\n %\n \n \n \n %\n Each path program starts at the input landmark and repeatedly extends the\n path in steps till the path's bounding box covers all the annotated\n values.\n %\n Each extension step is specified by a direction and a motion.\n %\n The motion may be \\emph{absolute} (e.g., move right by $4$ boxes) or\n \\emph{relative} (e.g., move down till you hit a text box that matches\n the regex \\texttt{[0-9]\\{5\\}}).\n %\n The additional $\\mathtt{inclusive}$ parameter indicates whether the box\n that matches the pattern should be included or excluded in the path.\n \n \n \n\n\n\\begin{example}\n \\label{ex:pathexample}\n In Figure~\\ref{fig:formed_documents}(c), let us consider the landmark \\emph{Chassis number} and the\n annotated value \\emph{WDX 28298 2L SHX 3 }.\n %\n The field value here is a variable-length string and the OCR splits the\n value into $1-4$ separate boxes.\n \n \n Consider the two region programs given below:\\\\\n $\\bullet~\\mathsf{Ext(Ext(input, Abs(down, 1)), Rel(Right, [0-9]\\{13\\}, false))}$\\\\\n $\\bullet~\\mathsf{Ext(Ext(input, Abs(down, 1)), Rel(Right, DATE, false))}$\\\\\n \n Both programs first move one step down from the landmark \\emph{Chassis\n number}.\n %\n However, the first moves to the right till it hits a $13$ digit engine\n number, while the other till it hits a date.\n %\n In case the engine number is present in a given form, the first program\n produces a path which ends with the annotated value, while the second one\n does so if the engine number is absent.\n %\n When combined disjunctively in the right order, they together cover both\n cases.\n %\n \n \n \n \n \n \n \n %\n \n \n \n\\end{example}\n\nThe synthesis algorithm for $\\mathcal{L}_{rx}$ is split into two parts: generating path\nprograms and selecting path programs to construct a disjunction.\nFix a set of input documents with annotations.\nWe first synthesize path programs for small subsets (size $\\leq 3$) of input\ndocuments.\nFor synthesizing path programs, we use \\emph{enumerative\nsynthesis}~\\cite{transit,sygus} to generate numerous candidate programs and then\nfilter them by whether they cover the annotated values when starting from the\nlandmark.\nWe enumerate paths of up to $4$ motions, bounding $\\mathtt{k}$ to positive\nintegers $< 5$.\nFor $\\mathtt{pattern}$, we enumerate a finite set of 
regular expression patterns\ngenerated using a string profiling technique \\cite{flashprofile,confminer}\nover all the common and field text values present in the cluster.\nFor example, when given a cluster of documents of similar to\nFigure~\\ref{fig:formed_documents}c), one of the patterns returned is $\\mathtt{[0-9]\\{13\\}}$ as\nthe cluster contains many engine numbers of that form. \n\n\nAfter the enumeration step, we have a collection $\\{ P_1, \\ldots, P_n \\}$ of\npath programs that are each synthesized from a small subset of input examples.\nNow, we use the $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ algorithm from~\\cite{ij2019} to select a subset of these\nprograms to construct the disjunctive program. \nNow, for each program $P_i$, we define the set $\\mathsf{Ex}_i$ to be the subset of\n$\\mathsf{Examples}$ that $P_i$ is correct on.\nThe $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ algorithm selects a subset of $\\mathcal{P}$ of programs such that\n$\\bigcup_{P_i \\in \\mathcal{P}} \\mathsf{Ex}_i = \\mathsf{Examples}$, optimizing\nfor F1 score and program size~\\cite{ij2019}.\n\n\\begin{example}\n Consider the two path programs from Example~\\ref{ex:pathexample}, along with the\n additional program $\\mathsf{Ext(Ext(input, Abs(Down, 1)), Abs(Right, 2))}$.\n %\n Given a collection of such path programs, $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ builds the disjunctive\n program using the two from Example~\\ref{ex:pathexample} as they cover a large\n fraction of the documents in the cluster.\n %\n The additional program above will be ignored as it is only correct when the\n chassis number field value is split into $2$ boxes by the OCR.\n\\end{example}\n\n\\subparagraph{Value Extraction DSL}\nFor the value extraction DSL, we use FlashFill~\\cite{flashfill}.\nThe input to the value extraction program is the concatenation of all the\ntext values in the boxes returned by the path program.\n\n\n\\begin{comment}\n\\spsays{Lincy removed the last k parameter, we just aggregate the text content and pass it to TText. But I think last k will be good for generality? }\nA program $(k, P_{FF})$ in the extraction DSL $\\mathcal{L}_{ex}$ is primarily a\nFlashFill~\\cite{flashfill} program $P_{FF}$ along with one auxiliary integer\nvalue $k$.\nGiven a path program, a program $(k, P_{FF})$ in $\\mathcal{L}_{ex}$ chooses the last $k$\nboxes that appear in the path, and returns the result of running $P_{FF}$ on the\nconcatenation of the text content of the last $k$ boxes appearing in the path.\nDuring synthesis, we first determine $k$ to be the smallest integer such that\nthe last $k$ boxes in the path of each example contains the corresponding\nannotated values in the document.\nFixing this $k$, w\n\\end{comment}\n\n\n\n\\section{Introduction}\n\\input{intro}\n\n\\section{Overview}\n\\label{sec:overview}\n\\input{overview}\n\n\\section{Formed Document Extraction}\n\\label{sec:problem}\n\\input{problem}\n\n\\section{Landmark and Region based Synthesis}\n\\label{sec:algorithms}\n\\input{algorithms}\n\n\n\\section{Instantiating \\ensuremath{\\mathsf{LRSyn}}\\xspace\\xspace}\n\\label{sec:instantiations}\n\\input{instantiations}\n\n\\section{Discussion}\n\\input{discussion}\n\n\\section{Evaluation}\n\\input{evaluation}\n\n\\section{Related work}\n\\input{related}\n\n\\vspace{-1ex}\n\\section{Conclusion}\n\\label{sec:conclusion}\n\\input{conclusion}\n\\balance\n\n\n\\section{Synthesis}\n\\begin{itemize}\n \\item Provide dsls here\n\\end{itemize}\n\\spsays{TODO. 
Copied this segment from HDEF paper, revise it appropriately}\nMention cover loop\n\n\\paragraph{The HTML Extraction DSL}\nThe DSL $\\mathcal{L}_{ex}$ we use for the M2H email data-set is a combination of the\nHTML extraction DSL from~\\cite{} and the \\textsc{FlashFill}\ntext-transformation DSL~\\cite{}.\nWe do not explain the semantics of the DSL in detail, but instead\nrefer to the corresponding papers.\nInformally, we use the HTML extraction DSL to select a set of DOM nodes\nfrom each email, and then use the \\textsc{FlashFill} DSL to transform the\ntext inside the DOM nodes to the field values.\nThe HTML extraction DSL is inspired by CSS selectors and each program is a \ncomposition of atomic selectors that act as filters on the set of DOM nodes\nin the email.\nThe \\textsf{FlashFill} DSL extracts sub-strings based on regular expression\nmatches within the text content of each DOM node.\n\n\\begin{comment}\n\\subsection{Evaluation on FX DS3 dataset}\nWe also evaluate $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ and $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ on the DS3 dataset published by $\\textsf{ForgivingXPaths}$ authors. The dataset primarily consists of product web pages across $k$ different providers. For our experiments, we sample $15$ web pages uniformaly at random from this dataset as our train data and evaluate on the remaining data, for each provider. We do $5$ runs of this experiment with different train samples and report the average numbers.\n\nTable~\\ref{} shows the F1 scores for $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ and $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ on this dataset. As seen from this experiment, \\spsays{TODO}\n\n\\end{comment}\n\\begin{comment}\nTo that effect, we further split the test data into two parts: one comprising of emails which are consistent with the landmarks and blueprints, and other consisting of emails which are not consistent. Table~\\ref{}~indicates the proportion of this split. As we can see from the table, the number of emails which are not consistent is fairly less, even though the test data was gathered from a different time period. This also points that narrowing down to small regions helps, as the core regions do not change often in datasets. In this experiment, we perform our comparison only with test emails which are consistent with the training data for that domain. 
We discuss more about the remaining emails in the next section.\n\\end{comment}\n\n\n\n\\begin{comment}\n\\begin{table}\n\\small\n\\label{table:m2hexpmt_1}\n\\begin{tabular}{|c|l|c|c|c|}\n\\hline\n{Domain} & Fields & $\\textsf{FX}$ & $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ & $\\ensuremath{\\mathsf{LRSyn}}\\xspace$\\\\\n\\hline\n\\hline\\multirow{9}{*}{itinerary.westjet}\n\n& AIata & & 1.00 & 1.00 \\\\\n& ATime & & 1.00 & 1.00 \\\\\n& DIata & & 1.00 & 1.00 \\\\\n& DDate & & 1.00 & 1.00 \\\\\n& DTime & & 1.00 & 1.00 \\\\\n& FNo & & 1.00 & 1.00 \\\\\n& Name & & 1.00 & 1.00 \\\\\n& Pvdr & & 1.00 & 1.00 \\\\\n& RId & & 1.00 & 1.00 \\\\\n\n\\hline\\multirow{9}{*}{aeromexico}\n\n& AIata & & 1.00 & 1.00 \\\\\n& ATime & & 1.00 & 1.00 \\\\\n& DIata & & 1.00 & 1.00 \\\\\n& DDate & & 1.00 & 1.00 \\\\\n& DTime & & 1.00 & 1.00 \\\\\n& FNo & & 1.00 & 1.00 \\\\\n& Name & & 1.00 & 1.00 \\\\\n& Pvdr & & 1.00 & 1.00 \\\\\n& RId & & 1.00 & 1.00 \\\\\n\n\\hline\\multirow{9}{*}{mytrips.amexgbt}\n& AIata & & 1.00 & 1.00 \\\\\n& ATime & & 0.99 & 1.00 \\\\\n& DIata & & 1.00 & 1.00 \\\\\n& DDate & & 0.99 & 1.00 \\\\\n& DTime & & 1.00 & 1.00 \\\\\n& FNo & & 1.00 & 1.00 \\\\\n& Name & & 1.00 & 1.00 \\\\\n& Pvdr & & 1.00 & 1.00 \\\\\n& RId & & 1.00 & 1.00 \\\\\n\n\\hline\\multirow{9}{*}{qatarairways.com.qa}\n\n& AIata & & 1.00 & 1.00 \\\\\n& ATime & & 0.99 & 1.00 \\\\\n& DIata & & 1.00 & 1.00 \\\\\n& DDate & & 0.99 & 1.00 \\\\\n& DTime & & 1.00 & 1.00 \\\\\n& FNo & & 1.00 & 1.00 \\\\\n& Name & & 1.00 & 1.00 \\\\\n& Pvdr & & 1.00 & 1.00 \\\\\n& RId & & 0.99 & 1.00 \\\\\n\n\\hline\n\\end{tabular}\n\\end{table}\n\\end{comment}\n\n\\begin{comment}\n\\subsubsection{Inconsistent test emails}\nIn this experiment, we evaluate on test emails which were not consistent with train emails with respect to landmarks and blueprints. In the M2H dataset, there were only $4$ domains which had inconsistent emails. All these emails were marked inconsistent because the blueprints did not match owing to a different structure, though the landmark remained the same. If we group these outlier emails based on blueprints inferred using ground truth, delta.com had $k$ groups, Philippines had $k2$ groups, qatar had $k3$ and alaksa had $k4$ groups.\n \nThere are various ways we can deal with this issue. We can use NDSyn programs or a ML model to predict on these emails and add the labelled set to the training data. In this work, we adopt a simpler method: We sample some emails from this set and add it to the training data. Table~\\ref{}~shows the performance once more emails are added to the training data.\n\nInteresting example where landmark improved on addition of new data.\n\\end{comment}\n\nWe treat the \\emph{n-grams} appearing across all the HTML documents as landmark\ncandidates. We filter candidates which appear in our stop word file, like\npunctuation and pronouns. We also filter n-grams which are very lengthy. Given\na landmark candidate and a document, field values (which are obtained from the\nannotations) We use two types of scores:\n\\begin{enumerate}\n \\item We render the HTML document in a browser and find the sum of Euclidean distance between the landmark location and the field value locations. The intuition here is that a \\emph{good} landmark candidate will appear very close to the field values in all the documents.\n \\item Given the landmark location and the field regions, we use the $\\mathsf{LearnRegionProgram}$ routine to compute the enclosing region. And we measure the region size by the number of nodes in this region. 
The main reasoning here is that a smaller region will have lesser number of nodes in practice.\n\\end{enumerate}\nThe final score is a sum of these two scores weighted appropriately for scale.\nIn our ablation experiments, we also experiment with other scoring measures like\nManhattan distance and left-aligned scores. \n\n\\subsubsection{$\\mathsf{LearnRegionProgram}$ and Blueprints for HTML documents}\nThe goal of $\\mathsf{LearnRegionProgram}$ function is to find a \\emph{small}\nenclosing region given a pair of landmark and field values $\\langle \\ell^m_d,\n\\ell^{f_T}_d \\rangle$. For simplicity, let us assume that there is $1:n$ between\nthe landmark and field values. We will describe the generalization to the $m:n$\ncase later.\n\nAn HTML document provides an explicit tree structure which we leverage to find\nthe smallest enclosing region given a landmark and field value. Intuitively, the\nencompassing smallest region will be a sibling region or the lowest common\nancestor (LCA) of landmark and the field values. We represent $P_{rg}$ using\n$\\mathsf{parentHops}, \\mathsf{siblingHops}$ such that they cover all the train\ndocuments. We omit the detailed procedure computing these hops and refer the\nreaders to the Appendix.\n\n\\paragraph{$\\mathsf{Blueprint}$ for a HTML web region} HTML document provides\nus with an explicit tree structure. For computing a blueprint for an input web\nregion, we gather a sequence of tags in terms of their XPaths from a\n$\\mathsf{PreordeTraversal}$ starting from the begin node and ending at the last\nnode of the region. We ignore the indexes in the \\spsays{Add more nuances here,\nwe filter out similar tag structures}\n\n\nWe first process structured images using Optical Character Recognition (OCR)\npackages~\\cite{ocr1,ocr2}, which returns a list of bounding boxes and their\nassociated texts. Like in HTML, we treat bounding boxes which appear very close\nto each other as \\emph{n-grams}. We gather the \\emph{n-grams} appearing across\nall the structured images as landmark candidates. For scoring the landmark, we\nagain employ two scores as below. Unlike HTML which requires rendering, the\nbounding boxes here directly give us the visual coordinates.\n\n\\begin{enumerate}\n \\item We find the sum of Euclidean distance between the landmark region and the field regions. We use the center of these regions to compute the distances.\n \\item We also measure the size of the rectangle induced by the landmark region and the field region.\n\\end{enumerate}\nThe final score is a sum of these two scores weighted appropriately for scale.\n\n\n\\begin{comment}\n\\begin{algorithm}\n\\small\n\\caption{$\\mathsf{Blueprint}$ for a HTML web region}\n\\label{algo:blueprint}\n\\begin{algorithmic}[1]\n\\Require HTML web region $R$\n\\State $\\mathsf{BluePrint} \\gets \\mathsf{TagSequence}$ of $\\mathsf{PreordeTraversal}$ from the start of web region to end of web region\n\\State \\Return $\\mathsf{BluePrint}$\n\\end{algorithmic}\n\\end{algorithm}\n\\end{comment}\n\n\\spsays{Rough first draft}\nAs a first step, we do an OCR on structured images which returns bounding boxes and associated text. As we do not have explicit tag structure like HTML, the region program on top of OCR boxes is more complex. Recall that the goal of $\\mathsf{LearnRegionProgram}$ is to find\/enumerate region(s) satisfying the $\\ensuremath{\\mathsf{EncRgn}}$ property. Additionally, we want the outputted region to be as small as possible. 
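To make the $\\ensuremath{\\mathsf{EncRgn}}$ requirement concrete, the following is a minimal illustrative sketch (in Python, not our implementation) of checking whether a candidate region of OCR boxes encloses all annotated field boxes, and of preferring the smallest such region; the $(text, x, y, w, h)$ box representation and the helper names are assumptions made only for this illustration.
\\begin{verbatim}
# Illustrative sketch only: a "box" is assumed to be a
# (text, x, y, w, h) tuple as produced by the OCR step.

def bounding_rect(boxes):
    # Smallest axis-aligned rectangle covering all boxes.
    x1 = min(x for _, x, y, w, h in boxes)
    y1 = min(y for _, x, y, w, h in boxes)
    x2 = max(x + w for _, x, y, w, h in boxes)
    y2 = max(y + h for _, x, y, w, h in boxes)
    return x1, y1, x2, y2

def encloses(region_boxes, field_boxes):
    # EncRgn-style check: every annotated field box lies inside
    # the bounding rectangle of the candidate region.
    rx1, ry1, rx2, ry2 = bounding_rect(region_boxes)
    return all(rx1 <= x and ry1 <= y and x + w <= rx2 and y + h <= ry2
               for _, x, y, w, h in field_boxes)

def area(boxes):
    x1, y1, x2, y2 = bounding_rect(boxes)
    return (x2 - x1) * (y2 - y1)

def smallest_enclosing(candidate_regions, field_boxes):
    # Among candidates passing the enclosure check, prefer the smallest.
    ok = [r for r in candidate_regions if encloses(r, field_boxes)]
    return min(ok, key=area) if ok else None
\\end{verbatim}
In the actual system, the candidate regions are not enumerated explicitly but are produced by the region-growing programs described next.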
\n\nFigure~\\ref{}~depicts the overall idea behind the region growing algorithm for images. Given an initial $\\mathsf{region}$, we can expand it in all directions so as to find an enclosing region. Specifically, we start with the landmark region as in HTML, and we would like to expand this region to encompass all the field regions. We expand in any direction only if required. To control the expansion, we leverage the following properties:\n\\begin{enumerate}\n \\item If we know the field pattern, then we can continue expanding till we find this pattern. $\\mathsf{Expand(Bottom, fieldpattern)}$\n \\item We can continue expanding till we hit a boundary pattern, which we can choose to include or exclude. $\\mathsf{Expand(Right, pattern, include\/exclude)}$\n \\item We can expand in an absolute manner. For instance, $\\mathsf{Expand(Left, 1)}$\n\\end{enumerate}\n\n\\begin{example}\nFigure~\\ref{}~depicts the region growing procedure for an accounts invoice, for various fields.\n\\begin{itemize}\n \\item If we want to extract the document date given $\\mathsf{Date}$ as a landmark, we can Expand one region to the right and extract the value. We can also keep expanding the region till a Date type is found.\n \\item Likewise, if we want to extract account number, given $\\mathsf{Account No.}$ as landmark, we can expand one region to the bottom and extract the value.\n \\item Extracting the chassis value is more complex, as it can contain varied number of tokens and might not abide to a strict field pattern. In this case, we traverse one region down and keep expanding to the right until we hit the engine value pattern, which we do exclude from this region.\n \\item Extracting customer address or invoice address requires us to keep traversing in the downwards direction and collecting all the lines until we hit a pattern like zip code or a stopping criterion like the constant \\emph{Raised by} or \\emph{Invoiced by}. 
\n\\end{itemize}\n\\end{example}\n\n\\begin{verbatim}\nlanguage Visual;\nfeature double Score = Microsoft.ProgramSynthesis.Extraction.Forms.Learning.RankingScore;\n\n@start List roi := landmarks | bothDirections;\nList bothDirections := ExtendLineItems(roi, pattern)\n\t\t\t\t\t\t\t\t\t\t| ExtendAbsAbs(roi, axis, k, k)\n\t\t\t\t\t\t\t\t\t\t| ExtendAbsRel(roi, axis, k, pattern, isIncl)\n\t\t\t\t\t\t\t\t\t\t| ExtendRelAbs(roi, axis, pattern, isIncl, k)\n\t\t\t\t\t\t\t\t\t\t| ExtendRelRel(roi, axis, pattern, isIncl, pattern, isIncl);\n\nint k;\nAxis axis;\nRegexToken pattern;\nbool isIncl;\n\n@input List landmarks;\n\\end{verbatim}\n\n\\begin{comment}\n\\begin{figure}\n\\small\n\\begin{tabular}{r c l}\nregionProg & := & $\\;\\;$\\textsf{landmarks} \\\\\n & & | \\textsf{ExtendAbsAbs}(\\textit{regionProg}, \\textit{axis}, \\textit{k}, \\\\\n & & \\textit{k}) \\\\\n & & | \\textsf{ExtendAbsRel}(\\textit{regionProg}, \\textit{axis}, \\textit{k}\\\\\n & & \\textit{pattern}, \\textit{isIncl}) \\\\\n & & | \\textsf{ExtendRelAbs}(\\textit{regionProg}, \\textit{axis}, \\\\\n & & \\textit{pattern}, \\textit{incl}, \\textit{k}) \\\\\n & & | \\textsf{ExtendRelRel}(\\textit{regionProg}, \\textit{axis}, \\\\\n & & \\textit{pattern}, \\textit{isIncl}, \\textit{pattern}, \\textit{isIncl}) \\\\\n\\end{tabular}\n\\textit{axis} $\\in$ $\\langle Up, Down, Left, Right \\rangle$; Set[Pattern] \\textit{pattern};\n\\vspace{-2ex}\n\\caption{Syntax of the visual region-growing language $\\mathcal{L}_{rg}$}\n\\label{fig:visual-dsl}\n\\end{figure}\n\\end{comment}\n\n\\paragraph{Talk about enumerative synthesis}\nFigure~\\ref{fig:visual-dsl} describes the region DSL $\\L_{rg}$ for images. Starting from an initial landmark region, we have two broad cases:\n\\begin{compactitem}\n\\item The landmark region itself encompasses all the fields, in which case the $\\textsf{regionProg}$ just returns the landmark region\n\\item We expand the landmark region in such a way that we enclose the field regions. The expansion can be done along all four cardinal directions. We group the left-right as one group and top-down as another group. In each of this group, we can either expand in an \\emph{absolute} sense or in a \\emph{relative} way. \n\\end{compactitem}\n\n\\paragraph{Cover loop in region growing}\nIn HTML, we could find the enclosing region with parent and sibling hops. In images, we grow the region using Enumerative synthesis with the DSL shown above. To account for the variability in the images and the resulting OCR, we employ a cover loop to find a disjunction of programs to cover the training data, instead of a single program.\n\n\\spsays{Give example}\n\\spsays{Give cover loop pseudo code}\n\\spsays{Move all blueprinting logic to clustering?}\n\n\\subsubsection{Blueprints for images} Images do not provide an explicit tag structure which we can use for blueprints, unlike HTML. As first step, we compute a frequency histogram of n-grams occurring in the given input images. Recall that, we first perform an OCR on the input image which provides us with bounding boxes and their corresponding text. We retain the top $k\\%$ of the frequently occurring n-grams as the tags for images. Our goal is to describe a given input document with respect to these tags.\n\nFor doing so, we compute the neighborhood information for each tag in the $4$ cardinal directions. The neighbors for a given tag are close-by elements, and can either be another frequently occurring n-gram or a variable text. 
If it turns out to be a variable text, we denote the existence of the box, rather than a specific n-gram. So the given image in Figure~\\ref{fig:formsextraction}~can be described in terms of these tags as $\\langle NIL<\/Top>NIL><\/Right>Exists<\/Bottom>Nil><\/Right> <\/VatRegNo>...$ and so on. \\spsays{Need a better forms example}. Section~\\ref{}~provides more details on how we use BluePrints during clustering and inference.\n\n\\begin{theorem}\nGiven $\\mathsf{LearnRegionProgram}$ for an input domain, the synthesized program $\\P$ is robust to any changes occurring outside of the enclosed region.\n\\end{theorem}\n\n\n\\begin{comment}\n\\begin{table*}[t]\n\\small\n\\label{table:m2hexpmt2}\n\\resizebox{15cm}{!}{\n\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}\n\\hline\n{Domain} & \\multicolumn{2}{|c|}{ArrivalAirportIata} &\\multicolumn{2}{|c|}{ArrivalTime} & \\multicolumn{2}{|c|}{DepartureAirportIata} &\\multicolumn{2}{|c|}{DepartureDate} &\\multicolumn{2}{|c|}{DepartureTime} \\\\\n\\cline{2-11} & RobustLmk & UserLmk & RobustLmk & UserLmk & RobustLmk & UserLmk & RobustLmk & UserLmk & RobustLmk & UserLmk \\\\\n\n\\hline\\multirow{2}{*}{ifly.alaskaair.com}\n&Arrives&Arrives&Arrives&Arrives&Departs&Departs&Departs&Departs&Departs&Departs\\\\\n&&&&&&&Confirmation code: $->$Term\/Gate&Term Gate&&\\\\\n\\hline\\multirow{2}{*}{delta.com}\n&Departure Time&To&Arrival Time&Arrival Time&Flight&From&Departure Time&Departure Time&Departure Time&Departure Time\\\\\n&Flight Number:&Arrives&Arrives&Arrives&Departs&Departs&Departs&Departs&Departs&Departs\\\\\n\\hline\\multirow{1}{*}{booking.airasia.com}\nArrive&Arrive&Arrive&Arrive&Depart&Depart&Depart&Depart&Depart&Depart&Depart\\\\\n\\hline\\multirow{1}{*}{getthere.com}\n&Arrive:&Arrive:&Arrive:&Arrive:&Depart:&Depart:&Depart:&Depart:&Depart:&Depart: \\\\\n\\hline\\multirow{1}{*}{t.delta.com}\n&ARRIVE&ARRIVE&ARRIVE&ARRIVE&DEPART&DEPART&DEPART&DEPART&DEPART&DEPART\\\\\n\\hline\\multirow{2}{*}{philippineairlines.com}\n&Departure&To&Arrival&Arrival&New&From&Departure&Departure&Departure&Departure\\\\\n&Philippine Airlines&Outbound&Philippine Airlines&Outbound&Outbound&Outbound&Family:->Fare&Outbound&Philippine Airlines&Outbound\\\\\n\\hline\\multirow{1}{*}{itinerary.westjet.com}\n&Arrival:&Arrival:&Arrival:&Arrival:&Departure:&Departure:&Flight Number&Flight Number&Departure:&Departure:\\\\\n\\hline\\multirow{1}{*}{aeromexico.com}\n&Route&Route&Arrives&Arrives&Route&Route&Date&Date&Departs&Departs\\\\\n\\hline\\multirow{1}{*}{mytrips.amexgbt.com}\n&Destination:&Destination:&Arriving:&Arriving:&Origin:&Origin:&Departing:&Departing:&Departing:&Departing:\\\\\n\\hline\\multirow{1}{*}{qatarairways.com.qa}\n&Arrival&Arrival&Arrival&Arrival&Departure&Departure&Arrival&Arrival&Departure&Departure\\\\\n\\hline\n\\end{tabular}\n}\n\\caption{Landmarks detected vs User Selected Landmarks}\n\\end{table*}\n\n\\begin{table*}\n\\small\n\\label{table:m2hexpmt2}\n\\resizebox{15cm}{!}{\n\\begin{tabular}{|c|c|c|c|c|c|c|c|c|}\n\\hline\n{Domain} & \\multicolumn{2}{|c|}{FlightNumber} &\\multicolumn{2}{|c|}{Name} & \\multicolumn{2}{|c|}{Provider} &\\multicolumn{2}{|c|}{ReservationId} \\\\\n\\cline{2-9} & RobustLmk & UserLmk & RobustLmk & UserLmk & RobustLmk & UserLmk & RobustLmk & UserLmk \\\\\n\n\\hline\\multirow{1}{*}{ifly.alaskaair.com}\n&Departs&Flight&Travelers:&Travellers&&&Travelers:->Confirmation code:&ConfirmationCode\\\\\n\\hline\\multirow{2}{*}{delta.com}\n&Flight&Flight&Name&Name&Carrier&Carrier&Flight Information (Record Locator :&Flight Information (Record Locator : 
\\\\\n&Flight Number:&Flight Number:&Who's Coming Along&Who's Coming Along&Flight Number:&Flight Number:&Confirmation Number :&Confirmation Number : \\\\\n\\hline\\multirow{1}{*}{booking.airasia.com}\n&Depart&Depart&Guests&Guests&Depart&Depart&Booking number&Booking number\\\\\n\n\\hline\\multirow{1}{*}{getthere.com}\n&Flight\/Equip&Flight\/Equip&Meal:->Name:&Name&Flight\/Equip&Flight\/Equip&Airline Record Locator&Airline Record Locator\\\\\n\\hline\\multirow{2}{*}{t.delta.com}\n&DEPART&DEPART&Name:&Name:&DEPART&DEPART&Confirmation \\#:&Confirmation \\#: \\\\\n&&&&&&&Your Trip Confirmation \\#:&Your Trip Confirmation \\#: \\\\\n\\hline\\multirow{2}{*}{philippineairlines.com}\n&Flight&Flight&Dear&Dear&Flight&Flight&Booking reference:&Booking reference:\\\\\n&Philippine Airlines&Philippine Airlines&Passengers&Passengers&Philippine Airlines&Philippine Airlines&Reference:&Reference:\\\\\n\\hline\\multirow{1}{*}{itinerary.westjet.com}\n&Flight Number->WESTJET&Flight Number&Seat(s):&Seat(s):&Flight Number->WESTJET&WESTJET&WestJet&WestJet\\\\\n\\hline\\multirow{1}{*}{aeromexico.com}\n&Flt \\#&Flt \\#&Passengers&Passengers&Flt \\#&Flt \\#&Confirmation Code:&Confirmation Code:\\\\\n\\hline\\multirow{1}{*}{mytrips.amexgbt.com}\n&Flight:&Flight:&Booking Reference:&Traveler&Flight:&Flight:&Airline Booking Ref:&Airline Booking Ref:\\\\\n\\hline\\multirow{2}{*}{qatarairways.com.qa}\n&Class&Class&Passenger name\/ E&Passenger name&Class&Class&Booking reference (PNR)&Booking reference (PNR)\\\\\n&Class&Class&&&Class&Class&Booking reference (PNR)&Booking reference (PNR)\\\\\n\\hline\n\\end{tabular}\n}\n\\caption{Landmarks detected vs User Selected Landmarks}\n\\end{table*}\n\\end{comment}\n\n\n\\begin{comment}\n\\begin{table*}[t]\n\\small\n\\label{table:m2hexpmt1}\n\\begin{tabular}{|c|c|c|c||c|c|c|c|c|c|c|c|c|}\n\\hline\n \\multirow{\\shortstack[c]{Domain}} & \\multirow{Fields} & \\multicolumn{4}{|c|}{NDSyn} & \\multicolumn{4}{|c|}{$\\hisyn$}\\\\\n\\cline{3-19}& & #Programs & Pre. & Rec. & F1 & #Programs & Pre. & Rec. 
& F1\\\\\n\\hline\n\\hline\\multiline{ifly.alaskaair.com}\n& ArrivalAirportIata & 2 & 0.9989 & 0.4718 & 0.6409 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n& ArrivalTime & 2 & 0.9697 & 0.4576 & 0.6218 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n& DepartureAirportIata&2&0.8888&0.4198&0.5703&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00 \\\\\n& DepartureDate&1&1.00&1.00&1.00&2&2&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00 \\\\\n& DepartureTime&2&0.8989&0.4241&0.5763&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00 \\\\\n& FlightNumber&1&1.00&1.00&1.00&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\ \n& Name&1&1.00&0.9751&0.9874&1&1&1.00&0.9751&0.9874&0& & & & &1.00&0.9751&0.9874\\\\ \n& ReservationId&1&1.00&1.00&1.00&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n\\hline\\multiline{delta.com}\n& ArrivalAirportIata&2&0.9967&0.9967&0.9967&2&2&1.00&1.00&1.00&7&2&0.0000&0.0000&0.0000&0.9967&0.9967&0.9967\\\\\n& ArrivalTime&2&0.9962&0.9975&0.9968&2&2&1.00&1.00&1.00&7&2&0.0000&0.0000&0.0000&0.9962&0.9975&0.9968\\\\\n& DepartureAirportIata&2&0.9934&0.9967&0.9950&2&2&0.9934&0.9967&0.9950&0& & & & &0.9934&0.9967&0.9950\\\\\n& DepartureDate&1&0.9320&0.9659&0.9486&2&2&1.00&1.00&1.00&7&1&0.1429&0.3636&0.2052&0.9951&0.9986&0.9968\\\\\n& DepartureTime&2&1.00&0.9975&0.9987&2&2&1.00&1.00&1.00&7&2&0.0000&0.0000&0.0000&1.00&0.9975&0.9987\\\\\n& FlightNumber&2&1.00&1.00&1.00&2&2&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& Name&2&0.9447&0.8795&0.9109&2&2&0.9508&1.00&0.9748&7&2&0.0000&0.0000&0.0000&0.9508&0.9951&0.9724\\\\\n& Provider&2&1.00&1.00&1.00&2&2&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& ReservationId&2&1.00&0.9915&0.9957&2&2&1.00&0.9989&0.9994&8&2&1.00&0.2222&0.3636&1.00&0.9915&0.9957\\\\\n\\hline\\multiline{booking.airasia.com}\n& ArrivalAirportIata&1&0.5000&1.00&0.6667&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& ArrivalTime&0& NaN&0.0000&0.0000&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& DepartureAirportIata&1&0.5000&1.00&0.6667&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& DepartureDate&1&0.5000&1.00&0.6667&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& DepartureTime&0& NaN&0.0000&0.0000&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& FlightNumber&1&1.00&0.8974&0.9459&1&1&1.00&0.9189&0.9577&1&1&1.00&0.5000&0.6667&1.00&0.8974&0.9459\\\\\n& Name&1&1.00&1.00&1.00&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& Provider&1&1.00&0.8974&0.9459&1&1&1.00&0.9189&0.9577&1&1&1.00&0.5000&0.6667&1.00&0.8974&0.9459\\\\\n& ReservationId&1&1.00&1.00&1.00&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n\\hline\\multiline{getthere.com}\n& ArrivalAirportIata&3&0.7356&0.8384&0.7836&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& ArrivalTime&2&0.9384&0.9114&0.9247&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& DepartureAirportIata&3&0.9426&0.9452&0.9439&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& DepartureDate&3&0.9834&0.9378&0.9601&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& DepartureTime&2&0.8659&0.8619&0.8639&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& FlightNumber&1&0.9409&0.9589&0.9498&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& Name&1&0.8285&0.9641&0.8912&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& Provider&1&0.9804&0.9530&0.9665&1&1&1.00&1.00&1.00&1&1&0.0000&0.0000&0.0000&1.00&1.00&1.00\\\\\n& ReservationId&1&1.00&0.8757&0.9337&1&1&1.00&1.00&1.00&1&1&0.0000&0.0000&0.0000&1.00&0.9941&0.9970\\\\\n\\hline\\multiline{t.delta.com}\n& ArrivalAirportIata&1&0.9996&0.9979&0.9987&1&1&1.00&0.9982&0.9991&2&1&0.9231&0.9231&0.9231&0.9996&0.9979&0.9987\\\\\n& 
ArrivalTime&1&0.9978&1.00&0.9989&1&1&0.9982&1.00&0.9991&2&1&0.9231&1.00&0.9600&0.9978&1.00&0.9989\\\\\n& DepartureAirportIata&1&1.00&0.9986&0.9993&1&1&1.00&0.9986&0.9993&2&1&1.00&1.00&1.00&1.00&0.9986&0.9993\\\\\n& DepartureDate&1&1.00&1.00&1.00&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& DepartureTime&1&0.9975&1.00&0.9987&1&1&0.9975&1.00&0.9987&2&1&1.00&1.00&1.00&0.9975&1.00&0.9987\\\\\n& FlightNumber&2&1.00&0.9634&0.9814&1&2&1.00&0.9687&0.9841&2&2&1.00&1.00&1.00&1.00&0.9688&0.9842\\\\\n& Name&1&0.9976&0.9976&0.9976&1&1&0.9976&0.9976&0.9976&63&1&0.0000&0.0000&0.0000&0.9976&0.9976&0.9976\\\\\n& Provider&2&0.9928&0.9400&0.9657&1&2&1.00&0.9472&0.9729&2&2&1.00&0.6250&0.7692&1.00&0.9453&0.9719\\\\\n& ReservationId&1&1.00&1.00&1.00&2&2&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n\\hline\\multiline{alaskaair.com}\n& ArrivalAirportIata&2&1.00&1.00&1.00&6&5&1.00&0.9990&0.9995&27&2&1.00&1.00&1.00&1.00&0.9990&0.9995\\\\\n& ArrivalTime&2&1.00&1.00&1.00&6&5&1.00&0.9990&0.9995&27&2&1.00&1.00&1.00&1.00&0.9990&0.9995\\\\\n& DepartureAirportIata&2&1.00&1.00&1.00&6&5&1.00&0.9990&0.9995&27&2&1.00&1.00&1.00&1.00&0.9990&0.9995\\\\\n& DepartureDate&2&1.00&0.9997&0.9998&6&3&1.00&0.9948&0.9974&38&2&1.00&0.9932&0.9966&1.00&0.9948&0.9974\\\\\n& DepartureTime&2&1.00&1.00&1.00&6&5&1.00&0.9990&0.9995&27&2&1.00&1.00&1.00&1.00&0.9990&0.9995\\\\\n& FlightNumber&1&0.9603&0.9995&0.9795&2&2&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& Name&2&1.00&1.00&1.00&2&2&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& Provider&2&1.00&0.9644&0.9819&2&2&1.00&0.9767&0.9882&5&2&1.00&0.5333&0.6956&1.00&0.9735&0.9866\\\\\n& ReservationId&2&1.00&1.00&1.00&2&2&1.00&1.00&1.00&165&2&1.00&1.00&1.00&1.00&1.00&1.00\\\\\n\\hline\\multiline{philippineairlines.com}\n& ArrivalAirportIata&3&0.9191&1.00&0.9578&2&3&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& ArrivalTime&2&1.00&1.00&1.00&2&3&1.00&0.9811&0.9905&0& & & & &1.00&0.9811&0.9905\\\\\n& DepartureAirportIata&2&0.9937&0.9937&0.9937&2&2&0.9932&0.9932&0.9932&5&2&1.00&1.00&1.00&0.9937&0.9937&0.9937\\\\\n& DepartureDate&2&0.9815&0.9463&0.9636&2&2&0.9815&0.9463&0.9636&0& & & & &0.9815&0.9463&0.9636\\\\\n& DepartureTime&2&1.00&1.00&1.00&2&3&1.00&0.9811&0.9905&0& & & & &1.00&0.9811&0.9905\\\\\n& FlightNumber&3&0.7397&0.6792&0.7082&2&3&1.00&0.9308&0.9642&0& & & & &1.00&0.9308&0.9642\\\\\n& Name&3&1.00&0.9745&0.9871&2&2&1.00&0.9954&0.9977&0& & & & &1.00&0.9954&0.9977\\\\\n& Provider&2&1.00&1.00&1.00&2&2&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& ReservationId&2&1.00&1.00&1.00&2&2&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n\\hline\\multiline{itinerary.westjet.com}\n& ArrivalAirportIata&1&1.00&1.00&1.00&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& ArrivalTime&1&1.00&1.00&1.00&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& DepartureAirportIata&1&1.00&1.00&1.00&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& DepartureDate&1&1.00&0.9956&0.9978&1&1&1.00&1.00&1.00&73&1&1.00&0.9757&0.9877&1.00&0.9956&0.9978\\\\\n& DepartureTime&1&1.00&1.00&1.00&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& FlightNumber&1&1.00&1.00&1.00&1&1&1.00&0.9965&0.9982&0& & & & &1.00&0.9965&0.9982\\\\\n& Name&1&1.00&1.00&1.00&1&1&1.00&1.00&1.00&73&1&1.00&1.00&1.00&1.00&1.00&1.00\\\\\n& Provider&1&1.00&1.00&1.00&1&1&1.00&0.9965&0.9982&0& & & & &1.00&0.9965&0.9982\\\\\n& ReservationId&1&1.00&1.00&1.00&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n\\hline\\multiline{aeromexico.com}\n& ArrivalAirportIata&1&1.00&1.00&1.00&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& 
ArrivalTime&1&1.00&1.00&1.00&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& DepartureAirportIata&1&1.00&1.00&1.00&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& DepartureDate&1&1.00&1.00&1.00&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& DepartureTime&1&1.00&1.00&1.00&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& FlightNumber&1&1.00&1.00&1.00&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& Name&1&1.00&1.00&1.00&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& Provider&1&1.00&1.00&1.00&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n\\hline\\multiline{mytrips.amexgbt.com}\n& ReservationId&1&1.00&1.00&1.00&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& ArrivalAirportIata&1&1.00&1.00&1.00&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& ArrivalTime&1&0.9948&1.00&0.9974&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& DepartureAirportIata&1&1.00&1.00&1.00&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& DepartureDate&1&0.9955&1.00&0.9977&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& DepartureTime&1&0.9955&1.00&0.9977&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& FlightNumber&1&1.00&1.00&1.00&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& Name&1&1.00&1.00&1.00&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& Provider&1&1.00&1.00&1.00&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n& ReservationId&1&1.00&1.00&1.00&1&1&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n\\hline\\multiline{qatarairways.com.qa}\n& ArrivalAirportIata&1&1.00&1.00&1.00&1&1&1.00&1.00&1.00&20&1&1.00&1.00&1.00&1.00&1.00&1.00\\\\\n& ArrivalTime&1&1.00&1.00&1.00&1&1&1.00&1.00&1.00&20&1&1.00&1.00&1.00&1.00&1.00&1.00\\\\\n& DepartureAirportIata&1&1.00&1.00&1.00&1&1&1.00&1.00&1.00&20&1&1.00&1.00&1.00&1.00&1.00&1.00\\\\\n& DepartureDate&1&1.00&1.00&1.00&1&1&1.00&1.00&1.00&20&1&1.00&1.00&1.00&1.00&1.00&1.00\\\\\n& DepartureTime&1&1.00&1.00&1.00&1&1&1.00&1.00&1.00&20&1&1.00&1.00&1.00&1.00&1.00&1.00\\\\\n& FlightNumber&1&0.9982&1.00&0.9991&2&1&0.9981&1.00&0.9990&20&1&1.00&1.00&1.00&0.9982&1.00&0.9991\\\\\n& Name&1&0.9988&1.00&0.9994&1&1&0.9988&1.00&0.9994&0& & & & &0.9988&1.00&0.9994\\\\\n& Provider&1&1.00&1.00&1.00&2&1&1.00&1.00&1.00&20&1&1.00&1.00&1.00&1.00&1.00&1.00\\\\\n& ReservationId&2&1.00&0.9967&0.9983&2&2&1.00&1.00&1.00&0& & & & &1.00&1.00&1.00\\\\\n\\hline\n\\end{tabular}\n\\caption{#Program, Precision, Recall and F1 numbers on HTML extraction scenario (M2H dataset) with NDSyn and $\\hisyn$}\n\\end{table*}\n\n\\end{comment}\n\n\\begin{comment}\n\\item FX Dataset\n \\begin{itemize}\n \\item Description: E-commerce website product pages from 4 domains(bestbuy, currys, ebuyer and pricerunner).\n \\item Methodology:\n Out of the 1000 emails provided for each domain, we first cleaned the dataset to remove pages where there were annotations missing for a given field. After this, 150 pages were sampled randomly from the now cleaned dataset to create training sets. We further considered 5 seeds to randomly sample our trainsets. The trainsets sampled were of sizes 5, 10, 15 and 20 pages. The remaining pages were used as the test set to calculate the following metrics. \n \\item Metrics: Precision, Recall, F1 score, \\#Programs, Method of landmark scoring, \\#NotInCluster\n \\end{itemize}\n\\end{comment}\n\n\\spsays{TODO}\n\\spsays{Write this formally}\n\\spsays{Give example? Should we add code for this, as we might run out of\nspace?}\nLandmark values can occur at multiple places in a given document. Our aim is to\nnarrow down the region precisely. 
We proceed as follows:\n\\begin{enumerate}\n \\item We grow the region around the landmark and field values. And compute\n the blueprint of this small region. We also use this region program to\n grow the region around other occurrences of this same landmark value. And\n compute the blueprints of these regions. If the blueprints differ, then\n we are done. We already have a disambiguating pair.\n \\item If the is ambiguous, we learn one more level\n of hierarchy. For this, we treat the current landmark value as a field, and\n learn a landmark for this value.\n \\item We repeat this procedure till we find a disambiguating pair.\n\\end{enumerate}\n\nAt inference time, we locate the topmost landmark value in the hierarchy, in the\ngiven document. Then we locate the next nearest landmark value and so on till we\nlocate the last value of the landmark. Then we use the region program to grow\nthe regions, match the blueprints and run extraction program on each of these\nregions. \n\n\\begin{figure*}[th]\n\\centering\n\\begin{minipage}{5cm}\n\\includegraphics[width=5cm]{images\/forms_regiongrowing.png}\n\\label{fig:formsregion}\n\\end{minipage}\n\\qquad\n\\begin{minipage}{7cm}\n\\frame{\\includegraphics[width=9cm]{images\/forms_image2_annotated.png}}\n\\label{fig:formsextraction2}\n\\end{minipage}\n\\caption{Improve these images.Anonymize these images}\n\\end{figure*}\n\n\n\nOur travel emails dataset is called the {\\em machine-to-human (M2H)} dataset.\nWe perform two experiments with the M2H dataset: contemporary and longitudinal.\nIn the contemporary experiment, the training and test datasets contain\nanonymized emails from the same time period.\nIn the longitudinal experiment, we evaluate our method with training and test\ndata across different time periods.\nThe M2H dataset consists of anonymized emails from $6$ different flight\nproviders. Each domain has 2 to 10 structural variations. We perform our\nexperiments one provider at a time. 
We synthesize programs for a given provider\nwith the training data and predict on the corresponding test data.\nTable~\\ref{table:m2hdataset}~shows the training and test data sizes for each\nflight domain.\n\nWe compare the performance of \\ensuremath{\\mathsf{LRSyn}}\\xspace~with two other approaches: (1) the \\ensuremath{\\mathsf{NDSyn}}\\xspace~algorithm from prior work on heterogeneous data extraction~\\cite{ij2019}, and (2) the \\ensuremath{\\mathsf{ForgivingXPaths}}\\ algorithm~\\cite{omari2017synthesis}.\nOur goal is to extract various fields from these varied data formats with an intention to answer the following questions:\n\\begin{compactitem}\n\\item \\emph{Q1: How does $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ compare with previous approaches, namely $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ and $\\textsf{ForgivingXPaths}$ in contemporary and longitudinal settings?}\n\\item \\emph{Q2: How do the automatically learnt landmarks by $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ compare to user provided landmarks?} \n\\item \\emph{Q3: How do the programs synthesized by $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ and $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ compare in terms of simplicity and robustness?}\n\\end{compactitem}\n\n\\begin{table}\n\\scriptsize\n\\begin{tabular}{|l|c|c|c|}\n\\hline\nDomain & \\shortstack[c]{Train \\\\ Size} & \\shortstack[c]{Test \\\\ Size}\\\\\n\\hline\niflyalaskaair & 100 & 1092\\\\ \\hline\ndelta & 102 & 928 \\\\ \\hline\ngetthere & 50 & 154 \\\\ \\hline\n\\end{tabular}\n\\hspace{3ex}\n\\begin{tabular}{|l|c|c|c|}\n\\hline\nDomain & \\shortstack[c]{Train \\\\ Size} & \\shortstack[c]{Test \\\\ Size}\\\\\n\\hline\nairasia & 25 & 24 \\\\ \\hline\naeromexico & 35 & 303 \\\\ \\hline\namex & 50 & 640 \\\\ \\hline\n\\end{tabular}\n\\caption{M2H dataset}\n\\label{table:m2hdataset}\n\\vspace{-2ex}\n\\end{table}\n\\paragraph{Overall results}\nTable~\\ref{table:m2hoverall}~shows the average precision, recall and F1 scores across various extraction tasks for $\\ensuremath{\\mathsf{ForgivingXPaths}}$, $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ and $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ for both contemporary and longitudinal setting. Unlike $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ or $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ which use a combination of structure and text programs for extraction, $\\textsf{ForgivingXPaths}$ only outputs XPaths which correspond to the entire node, rather than the sub-text contained within that node. Consequently, $\\textsf{ForgivingXPaths}$ has high recall and poor precision. We therefore omit $\\textsf{ForgivingXPaths}$ from more detailed results tables below. We also observe that the gap between $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ and $\\ensuremath{\\mathsf{NDSyn}}\\xspace$, as measured by precision, recall (and thereby F1 score) is larger in the longitudinal evaluation as compared to the contemporary one. 
This gap arises because programs synthesized by $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ generalize better and handle format changes more gracefully than those synthesized by $\\ensuremath{\\mathsf{NDSyn}}\\xspace$.\n\n\\highlightbox{\n$\\ensuremath{\\mathsf{LRSyn}}\\xspace$ is very robust to variations in the longitudinal\nsetting, achieving an F1 score above $95\\%$ on all $53$ fields, with a perfect F1 score of $1.00$ in $49$ cases, as compared to $40$ and $33$ such cases for $\\ensuremath{\\mathsf{NDSyn}}\\xspace$.\n}\nIn many applications, a score of $1.00$ is crucial, i.e., even a system with $0.99$ \nprecision cannot be deployed in practice.\nFor example, even a tiny imprecision in a system that automatically adds calendar entries\nbased on flight reservation emails is disastrous at the scale of millions of users.\n\n\\begin{table}\n\\scriptsize\n\\begin{tabular}{|l|c|c|c|}\n\\hline\n\\multicolumn{4}{|c|}{Contemporary} \\\\\n\\hline\nMetric & $\\ensuremath{\\mathsf{ForgivingXPaths}}$ & $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ & $\\ensuremath{\\mathsf{LRSyn}}\\xspace$\\\\\n\\hline\nAvg. Precision & 0.17 & 0.96 & 1.00\\\\ \\hline\nAvg. Recall & 0.99 & 0.91 & 1.00\\\\ \\hline\nAvg. F1 & 0.22 & 0.93 & 1.00 \\\\ \\hline\n\\end{tabular}\n\\begin{tabular}{|l|c|c|c|}\n\\hline\n\\multicolumn{4}{|c|}{Longitudinal} \\\\\n\\hline\nMetric & $\\ensuremath{\\mathsf{ForgivingXPaths}}$ & $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ & $\\ensuremath{\\mathsf{LRSyn}}\\xspace$\\\\\n\\hline\nAvg. Precision & 0.15 & 0.99 & 1.00\\\\ \\hline\nAvg. Recall & 0.98 & 0.89 & 1.00\\\\ \\hline\nAvg. F1 & 0.20 & 0.92 & 1.00 \\\\ \\hline\n\\end{tabular}\n\\caption{Average precision, recall and F1 scores for $\\ensuremath{\\mathsf{ForgivingXPaths}}$, $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ and $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ on the M2H contemporary and longitudinal datasets}\n\\label{table:m2hoverall}\n\\end{table}\n\n\n\\paragraph{Detailed comparison}\nTable~\\ref{tab:m2hexpmt1} shows a more detailed drill-down of the F1 scores for $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ and $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ in the two settings. For a further drill-down of the F1 scores into precision and recall, please see the supplementary material.\nThe rows of the table correspond to 53 extraction tasks, each corresponding to a field value, over the 6 airline domains in the M2H dataset. \nAs seen from the table, $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ performs much better than $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ on 23 out of the 53 fields in the contemporary setting, and on 24 out of 53 fields in the longitudinal setting. On the remaining fields, the two approaches have comparable F1 scores.\n\nFor the fields \\emph{AIata}, \\emph{ATime}, \\emph{DIata} and \\emph{DTime} in the \\emph{iflyalaskaair} domain, we find that $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ performs much better than $\\ensuremath{\\mathsf{NDSyn}}\\xspace$, and the gap between the two is wider in the longitudinal setting.\nIn the case of the fields \\emph{ATime} and \\emph{DTime} in the domains \\emph{delta} and \\emph{airasia}, we find that $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ is not able to synthesize any extraction program (which is why the F1 score is \"NaN\"), whereas \\ensuremath{\\mathsf{LRSyn}}\\xspace\\ is able to synthesize programs that produce a perfect F1 score of 1.\n\nIn the domain \\emph{getthere.com}, $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ achieves perfect extraction on all fields, unlike $\\ensuremath{\\mathsf{NDSyn}}\\xspace$. 
This is primarily because the document formats in \\emph{getthere.com} permute the constituent regions as shown in Figure~\\ref{fig:formed_documents}, and \\ensuremath{\\mathsf{LRSyn}}\\xspace\\ handles such permutations naturally: it relies on landmarks and local regions, which are invariant to permutations of the regions in the document.\n\n\nIn the domains \\emph{aeromexico} and \\emph{mytrips.amexgbt}, we find that both $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ and $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ achieve perfect F1 scores. However, this is due to some incidental aspects described below. In the \\emph{aeromexico} domain, each field has dedicated \\emph{id} attributes in the DOM, such as \\emph{-arrival-city-, -arrival-time, -departure-city}. Both $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ and $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ are able to latch on to these attributes for extraction. These attributes remain the same in both our contemporary and longitudinal datasets, and they play the role of landmarks, even for $\\ensuremath{\\mathsf{NDSyn}}\\xspace$. Moreover, even the initial fine-grained whole-document blueprints in this domain exhibit no change across the contemporary and longitudinal datasets, indicating that the formats have not varied over time in this domain.\nIn the \\emph{mytrips.amexgbt} domain, there are variations in the fine-grained whole-document blueprints across the two datasets, which suggests that $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ should perform better than $\\ensuremath{\\mathsf{NDSyn}}\\xspace$. On closer inspection, these variations are due to \\emph{hotel} and \\emph{cab} booking blocks being added to the documents, just as in \"getthere\".\nThe $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ structure program synthesized for \\emph{departure time} is shown below:\n\n\\begin{figure}[H]\n\\vspace{-4ex}\n\\small\n\\begin{alltt}\nTBODY > :nth-child(1):nth-last-child(1)\n> :nth-child(2) > DIV.right_section:nth-child(1)\n:nth-last-child(1) > :nth-child(10):nth-last-child(9)\n\\end{alltt}\n\\vspace{-4ex}\n\\end{figure}\nThis program iterates over the various blocks of a table, and for each block it tries to extract a region that is the $10^{\\mathrm{th}}$ child from the start and the $9^{\\mathrm{th}}$ child from the end. It turns out that, in our dataset, the \\emph{hotel} and \\emph{cab} blocks always have fewer than 5 lines and never satisfy this predicate. However, this program will break if more lines are added to these blocks, so $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ merely gets \"lucky\" in these cases. In comparison, $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ zooms into the region anchored at the \"Departing\" landmark and synthesizes a simpler extraction program, which is robust to such changes in the format, as the sketch below illustrates. 
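\n\nTo make this brittleness concrete, the following is a small, self-contained illustration of our own (using the BeautifulSoup library purely for demonstration; it is not implied to be part of either system). The selector fragment \\texttt{:nth-child(10):nth-last-child(9)} can only match inside a block with exactly 18 children, so adding a single line to a block makes the extraction fail silently:\n\\begin{verbatim}\nfrom bs4 import BeautifulSoup   # third-party HTML parser, illustration only\n\ndef block(n_rows):\n    # A stand-in for one booking block with n_rows lines.\n    rows = ''.join('<tr><td>row %d<\/td><\/tr>' % i for i in range(n_rows))\n    return '<table><tbody>%s<\/tbody><\/table>' % rows\n\nbrittle = ':nth-child(10):nth-last-child(9)'  # fragment of the NDSyn selector\n\nprint(len(BeautifulSoup(block(18), 'html.parser').select(brittle)))  # 1: works\nprint(len(BeautifulSoup(block(19), 'html.parser').select(brittle)))  # 0: fails\n\\end{verbatim}\nA landmark-anchored program, in contrast, is unaffected by how many lines the surrounding blocks contain.\n\n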
It so happens that such format changes are not present in our test set.\n\n\\begin{table*}\n\\scriptsize\n\\begin{tabular}{|l|c|c|c||c|c|}\n\\hline\n & & \\multicolumn{2}{|c||}{ Contemporary} & \\multicolumn{2}{|c|}{Longitudinal} \\\\\n {Fields} & Domain & $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ & $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ & $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ & $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ \\\\\n\\hline\n\\hline\n AIata & \\multirow[b]{4}{*}{ifly} & 0.81 & \\textbf{1.00} & 0.64 & \\textbf{1.00} \\\\\n ATime & & 0.76 & \\textbf{1.00} & 0.62 & \\textbf{1.00} \\\\\n DIata & & 0.73 & \\textbf{1.00} & 0.55 & \\textbf{1.00} \\\\\n DDate & & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n DTime & alaska & 0.73 & \\textbf{1.00} & 0.55 & \\textbf{1.00} \\\\\n FNum & \\multirow[t]{4}{*}{air} & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n Name & & 1.00 & 1.00 & 0.99 & 0.99 \\\\\n Pvdr & & -- & -- & -- & -- \\\\\n RId & & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n\n\\hline\n AIata & \\multirow{9}{*}{airasia} & 0.67 & \\textbf{1.00} & 0.67 & \\textbf{1.00} \\\\\n ATime & & NaN & \\textbf{1.00} & NaN & \\textbf{1.00} \\\\\n DIata & & 0.67 & \\textbf{1.00} & 0.67 & \\textbf{1.00} \\\\\n DDate & & 0.67 & \\textbf{1.00} & 0.67 & \\textbf{1.00} \\\\\n DTime & & NaN & \\textbf{1.00} & NaN & \\textbf{1.00} \\\\\n FNum & & 1.00 & 1.00 & 0.96 & 0.96 \\\\\n Name & & 1.00 & 1.00 & NaN & \\textbf{1.00} \\\\\n Pvdr & & 1.00 & 1.00 & 0.96 & 0.96 \\\\\n RId & & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n\n\\hline \n\\end{tabular}\n\\begin{tabular}{|c|c|c||c|c|}\n\\hline\n & \\multicolumn{2}{|c||}{ Contemporary} & \\multicolumn{2}{|c|}{Longitudinal} \\\\\n {Domain} & $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ & $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ & $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ & $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ \\\\\n\\hline\n\\hline\\multirow{9}{*}{getthere}\n\n & 0.75 & \\textbf{1.00} & 0.74 & \\textbf{1.00} \\\\\n & 0.94 & \\textbf{1.00} & 0.91 & \\textbf{1.00} \\\\\n & 0.94 & \\textbf{1.00} & 0.95 & \\textbf{1.00} \\\\\n & 0.98 & \\textbf{1.00} & 0.95 & \\textbf{1.00} \\\\\n & 0.76 & \\textbf{1.00} & 0.78 & \\textbf{1.00} \\\\\n & 0.98 & \\textbf{1.00} & 0.98 & \\textbf{1.00} \\\\\n & 1.00 & 1.00 & 0.89 & \\textbf{1.00} \\\\\n & 0.98 & \\textbf{1.00} & 0.97 & \\textbf{1.00} \\\\\n & 0.93 & \\textbf{1.00} & 0.94 & \\textbf{1.00} \\\\\n\n\\hline\\multirow{9}{*}{delta}\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 0.94 & \\textbf{1.00} & 0.95 & \\textbf{1.00} \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 0.85 & \\textbf{0.97} & 0.91 & \\textbf{0.97} \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n\n\\hline \n\\end{tabular}\n\\begin{tabular}{|c|c|c||c|c|}\n\\hline\n & \\multicolumn{2}{|c||}{ Contemporary} & \\multicolumn{2}{|c|}{Longitudinal} \\\\\n {Domain} & $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ & $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ & $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ & $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ \\\\\n\\hline\n\\hline\\multirow[b]{4}{*}{aero} & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n\\multirow[t]{4}{*}{mexico} & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n\n\\hline\n\\multirow[b]{4}{*}{mytrips} & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 1.00 & 1.00 & 
1.00 & 1.00 \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n amex & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n \\multirow[t]{3}{*}{gbt} & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n & 1.00 & 1.00 & 1.00 & 1.00 \\\\\n\n\\hline \n\\end{tabular}\n\\caption{M2H F1 score details. The Pvdr field is not relevant for iflyalaskaair}\n\\label{tab:m2hexpmt1}\n\\vspace{-5ex}\n\\end{table*}\n\n\\begin{comment}\n\\begin{table*}\n\\small\n\\begin{tabular}{|l|}\n\\hline\nifly.alaskaair.com : ArrivalAirportIata \\\\\n\\\\\n$\\ensuremath{\\mathsf{NDSyn}}\\xspace$: \\\\\n$L_{ex}$: $TR:nth-child(1):nth-last-child(1) > [style*=\"width:48\\%\"]:nth-child(3) > TABLE[cellpadding=\"0\"]$ \n\\\\\n$[cellspacing=\"0\"][border=0][style*=\"width:100\\%\"]:nth-child(1):nth-last-child(1) >$\n\\\\\n$TBODY:nth-child(1):nth-last-child(1) > :nth-child(1)$\\\\\n$Text Program$ : $Extract(str, RegexPair(\"\", \"Alphanumeric\"), RegexPair(\"Alphanumeric\", \"\\epsilon\"), -1)$\\\\\n\\smallskip\\\\\n$\\ensuremath{\\mathsf{LRSyn}}\\xspace$:\\\\\n$Landmark$ : $Arrives$~,~$L_{rg}$: $\\langle \\mathsf{parenthops}: 0, \\mathsf{siblinghops}$: $1$, $\\mathsf{blueprint}$ : \/TR\/TD $\\rangle$\\\\\n$L_{ex}$: $TR:nth-child(1)$ \\\\\n$Text Program$ : $Extract(str, RegexPair(\"\\epsilon\", \"Alphanumeric\"), RegexPair(\"Alphanumeric\", \"\\epsilon\")$, \\\\\n\\hline\n\\end{tabular}\n\\caption{Synthesized extraction programs}\n\\label{tab:m2hprograms}\n\\end{table*}\n\\end{comment}\n\n\\begin{comment}\ngetthere.com:Provider \\\\\n\\\\\n$\\ensuremath{\\mathsf{NDSyn}}\\xspace$: \\\\\n$L_{ex}$: $DIV:nth-last-child(13) > TABLE > TBODY:nth-child(1):nth-last-child(1) > :nth-child(1) > :nth-child(2)$ \\\\\n$Text Program$ : $Extract(str, RegexPair(\\epsilon, \"Alphanumeric\"), RegexPair(\\epsilon, \"WhiteSpaceNumber\"), 1)$\\\\\n\\\\ \n$\\ensuremath{\\mathsf{LRSyn}}\\xspace$:\\\\\n$Landmark$ : $Flight\/Equip$\\\\\n $L_{rg}:\\langle \\mathsf{parenthops}: 0, \\mathsf{siblinghops}: 1, \\mathsf{blueprint}:\/TD \\rangle$\\\\\n$L_{ex}:nth-child(2)$ \\\\\n$Text Program : Extract(str, RegexPair(\\epsilon, \"Alphanumeric\"), RegexPair($\\epsilon$, \"WhiteSpaceNumber\"), 1)$\\\\\n\n\\hline\n\\end{comment}\n\n\n\n\\paragraph{Synthesized programs}\nFigures~\\ref{lst:ndsynprogram}~and~\\ref{lst:lrsynprogram} show an example extraction program synthesized by $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ and $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ respectively. 
More examples can be found in the supplementary material.\nAs seen from these, the extraction programs synthesized by $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ are considerably more complex, as they process the entire HTML page, whereas the programs synthesized by $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ only process the local region of interest.\nConsequently, when formats change outside the region of interest, the programs synthesized by $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ continue to function correctly.\nFurther, the shape and size of the $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ programs make them easy to \nexamine by hand, which helps in gaining confidence in their correctness.\n\\arsays{Do we have any numbers here?}\n\\highlightbox{\nThe $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ programs are significantly smaller than the $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ programs.\nFor the web extraction component, $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ programs have $XXX$ components\nin their CSS selectors as compared to $YYY$ for $\\ensuremath{\\mathsf{NDSyn}}\\xspace$.\n}\n\n\n\\begin{comment}\n\\begin{table}\n\\small\n\\label{table:m2hnumberofprograms}\n\\begin{tabular}{|l|c|c|}\n\\hline\nField & \\#$\\ensuremath{\\mathsf{NDSyn}}\\xspace$ & \\#$\\ensuremath{\\mathsf{LRSyn}}\\xspace$\\\\\nAIata & 3 & 1 \\\\\nATime & 2 & 1 \\\\\nDIata & 3 & 1 \\\\\nDDate & 3 & 1 \\\\\nDTime & 2 & 1 \\\\\nFNum & 1 & 1 \\\\\nName & 1 & 1 \\\\\nPvdr & 1 & 1 \\\\\nRId & 1 & 1 \\\\\n\\hline\n\\end{tabular}\n\\caption{Getthere.com: Number of $\\ensuremath{\\mathsf{NDSyn}}\\xspace$ and $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ programs}\n\\label{tab:m2h-numprograms}\n\\end{table}\n\\end{comment}\n\n\\paragraph{Number of programs} Since \\ensuremath{\\mathsf{LRSyn}}\\xspace\\ merges clusters based on ROIs rather than entire documents, it synthesizes fewer programs than \\ensuremath{\\mathsf{NDSyn}}\\xspace.\nFor \"getthere\", $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ synthesizes only a single program for all fields: \\emph{:nth-child(2)}. \\ensuremath{\\mathsf{NDSyn}}\\xspace, on the other hand, synthesizes $3$ programs each for \\emph{AIata}, \\emph{DIata} and \\emph{DDate}, $2$ programs each for \\emph{ATime} and \\emph{DTime}, and $1$ program for each of the remaining $4$ fields. \nWe see this trend in all other domains as well, confirming our initial hypothesis that\nthe local structure of regions of interest tends to vary less than the global document\nstructure.\n\n\\highlightbox{\nGet numbers from Anirudh}\n\n\\begin{table}\n\\scriptsize\n\\centering\n\\begin{tabular}{|l|c|}\n\\hline\nDomain & Test \\\\\n\\hline\nAccountInvoice & 157\\\\ \\hline\nCashInvoice & 40 \\\\ \\hline\nCreditNote & 30 \\\\ \\hline\nSalesInvoice & 401 \\\\ \\hline\nSelfBilledCreditNote & 146 \\\\ \\hline\n\\end{tabular}\n\\quad\n\\begin{tabular}{|l|c|}\n\\hline\nDomain & Test \\\\\n\\hline\naeromexico & 303\\\\ \\hline\ngetthere & 155 \\\\ \\hline\nwestjet & 500 \\\\ \\hline\namex & 619 \\\\ \\hline\n\\end{tabular}\n\\caption{Test set sizes for the Finance and M2H-Images datasets. 
Train set size is $10$ for all the domains}\n\\label{tab:imagesdata}\n\\end{table}\n\n\n\n\\begin{table}\n\\scriptsize\n\\begin{tabular}{|c|l|c|c|c|}\n\\hline\n{Domain} & Fields & $\\ensuremath{\\mathsf{AFR}}$ & $\\ensuremath{\\mathsf{LRSyn}}\\xspace$\\\\\n\\hline\\multirow{9}{*}{AccountsInvoice}\n & Amount & 0.99 & \\textbf{1.00} \\\\\n & Chassis & 0.82 & \\textbf{0.99} \\\\\n & CustAddr & 0.98 & \\emph{0.96} \\\\\n & Date & 0.93 & \\textbf{0.98} \\\\\n & Dnum & 0.96 & \\textbf{0.97} \\\\\n & Engine & 0.82 & \\textbf{1.00} \\\\\n & InvoiceAddress & 0.90 & \\textbf{0.95} \\\\\n & Model & 0.75 & \\textbf{1.00} \\\\\n\n\\hline\\multirow{8}{*}{CashInvoice}\n & Amount & 1.00 & 1.00 \\\\\n & Chassis & 0.99 & 0.99 \\\\\n & CustAddr & 0.99 & \\emph{0.97} \\\\\n & Date & 0.99 & 0.99 \\\\\n & Dnum & 0.96 & 0.96 \\\\\n & Engine & 0.93 & \\textbf{0.95} \\\\\n & InvoiceAddress & 0.99 & 0.99 \\\\\n & Model & 0.99 & \\textbf{1.00} \\\\\n\n\\hline\\multirow{5}{*}{CreditNote}\n & Amount & 1.00 & 1.00 \\\\\n & CreditNoteAddress & 0.99 & \\textbf{1.00} \\\\\n & CreditNoteNo & 0.94 & \\emph{0.93} \\\\\n & CustRefNo & 1.00 & 1.00 \\\\\n & Date & 1.00 & 1.00 \\\\\n & RefNo & 1.00 & 1.00 \\\\\n\n\\hline\\multirow{6}{*}{SalesInvoice}\n & Amount & 1.00 & 1.00 \\\\\n & CustomerReferenceNo & 1.00 & 1.00 \\\\\n & Date & 1.00 & 1.00 \\\\\n & InvoiceAddress & 0.94 & \\textbf{0.99} \\\\\n & RefNo & 0.99 & 0.99 \\\\\n & SalesInvoiceNo & 0.99 & 0.99 \\\\\n\n\\hline\\multirow{6}{*}{SelfBilledCreditNote}\n & Amount & 1.00 & 1.00 \\\\\n & CustomerAddress & 1.00 & \\emph{0.99} \\\\\n & CustomerReferenceNo & 0.99 & 0.99 \\\\\n & Date & 1.00 & 1.00 \\\\\n & DocumentNumber & 1.00 & 1.00 \\\\\n & VatRegNo & 1.00 & 1.00 \\\\\n\n\\hline\n\\end{tabular}\n\\caption{F1 scores for Finance dataset}\n\\label{tab:financedataset}\n\\end{table}\n\n\\begin{table}\n\\scriptsize\n\\begin{tabular}{|l|p{1.5cm}|c|c|}\n\\hline\nFields & {Domain} & $\\ensuremath{\\mathsf{AFR}}$ & $\\ensuremath{\\mathsf{LRSyn}}\\xspace$\\\\\n \\hline\nAIata & \\multirow{9}{*}{aeromexico} & 0.62 & \\textbf{0.65} \\\\\nATime & & 0.69 & \\textbf{0.99} \\\\\nDIata & & 0.36 & \\textbf{0.66} \\\\\nDDate & & 0.71 & \\textbf{0.89} \\\\\nDTime & & 0.65 & \\textbf{0.97} \\\\\nFNum & & 0.66 & \\textbf{0.83} \\\\\nName & & 0.96 & \\textbf{0.98} \\\\\nPvdr & & 0.69 & \\textbf{0.78} \\\\\nRId & & 1.00 & 1.00 \\\\\n \n\\hline\n\\end{tabular}\n\\begin{tabular}{|p{1.5cm}|c|c|}\n\\hline\n{Domain} & $\\ensuremath{\\mathsf{AFR}}$ & $\\ensuremath{\\mathsf{LRSyn}}\\xspace$\\\\\n\\hline\n\\multirow{9}{*}{getthere} & 0.94 & \\textbf{1.00} \\\\\n & 0.87 & \\textbf{1.00} \\\\\n & 0.93 & \\textbf{1.00} \\\\\n & 0.96 & \\textbf{0.99} \\\\\n & 0.88 & \\textbf{1.00} \\\\\n & 0.94 & \\textbf{1.00} \\\\\n & 0.99 & 0.99 \\\\\n & 0.75 & \\textbf{1.00} \\\\\n & 0.89 & \\textbf{0.95} \\\\\n \n\\hline\n\\end{tabular}\n\n\\begin{tabular}{|l|p{1.5cm}|c|c|}\n\\hline\n{Fields} & Domain & $\\ensuremath{\\mathsf{AFR}}$ & $\\ensuremath{\\mathsf{LRSyn}}\\xspace$\\\\\n\\hline\nAIata & \\multirow[b]{4}{*}{itinerary.} & 0.94 & \\textbf{0.99} \\\\\nATime & & 0.90 & \\emph{0.88} \\\\\nDIata & & 0.93 & \\textbf{0.98} \\\\\nDDate & & 0.93 & \\textbf{1.00} \\\\\nDTime & \\multirow[t]{4}{*}{westjet} & 0.94 & \\textbf{0.95} \\\\\nFNum & & 0.96 & \\textbf{1.00} \\\\\nName & & 0.60 & \\textbf{0.61} \\\\\nPvdr & & 0.95 & \\textbf{1.00} \\\\\nRId & & 1.00 & 1.00 \\\\\n \n\\hline\n\\end{tabular}\n\\begin{tabular}{|p{1.5cm}|c|c|}\n\\hline\n{Domain} & $\\ensuremath{\\mathsf{AFR}}$ & 
$\\ensuremath{\\mathsf{LRSyn}}\\xspace$\\\\\n \\hline\n\\multirow[b]{4}{*}{mytrips} & 0.85 & \\textbf{0.98} \\\\\n & 0.97 & \\textbf{1.00} \\\\\n & 0.96 & \\textbf{0.99} \\\\\n & 0.93 & \\textbf{1.00} \\\\\n\\multirow[t]{4}{*}{amexgbt} & 0.99 & \\textbf{1.00} \\\\\n & 0.98 & \\textbf{1.00} \\\\\n & 0.98 & \\textbf{1.00} \\\\\n & 0.91 & \\textbf{0.99} \\\\\n & 0.61 & \\textbf{0.96} \\\\\n\n\\hline\n\\end{tabular}\n\\caption{F1 scores for the M2H-Images dataset}\n\\label{tab:m2himages}\n\\vspace{-5ex}\n\\end{table}\n\n\n\\begin{comment} \n\\hline\\multirow{9}{*}{ifly.alaskaair} \n & AIata & 0.98 & \\textbf{1.00} \\\\\n & ATime & 0.95 & \\textbf{0.99} \\\\\n & DIata & 0.98 & \\textbf{1.00} \\\\\n & DDate & 0.98 & \\emph{NaN} \\\\\n & DTime & 0.95 & \\textbf{0.98} \\\\\n & FNum & 0.97 & \\textbf{1.00} \\\\\n & Name & 0.98 & 0.98 \\\\\n & Pvdr & 0.93 & \\textbf{0.99} \\\\\n & RId & 1.00 & \\emph{0.86} \\\\\n\\end{comment}\n\n\\begin{comment}\n\\begin{table*}\n\\small\n\\begin{tabular}{|l|l|l|}\n\\hline\n{Domain} & Manually provided landmarks & Learnt landmarks \\\\\n\\hline\nifly.alaskaair & \n Departs, Arrives, Term\/Gate, & Departs, Arrives, \\textbf{Confirmation code: >> Term\/Gate,} \\\\\n & Travellers, Confirmation code & Travellers, \\textbf{Travelers: >> Confirmation code} \\\\\n\\hline\n\\end{tabular}\n\\caption{Landmarks for M2H dataset}\n\\label{tab:m2hlandmarks}\n\\end{table*}\n\\end{comment}\n\n\\paragraph{Quality of Inferred Landmarks}\nWe infer landmarks automatically using the techniques and scoring functions described in Sections~\\ref{sec:algorithms} and~\\ref{sec:instantiations}.\nTo check the quality of landmark inference, we also asked data annotators to tag the landmarks manually.\n\\highlightbox{\nIn $57$ out of $63$ clusters across all fields, the inferred landmarks are exactly the same as the manually provided landmarks.\nIn $5$ of the remaining $6$ cases, the human annotator agreed that the inferred landmark \nwas of equal quality.\n}\nIn the remaining case, the algorithm chose the human-annotated landmark as well, but \nin addition chose a disambiguating hierarchical landmark of low quality.\nIn particular, the term \\emph{Name} occurred in reference to both\nthe name of the passenger and the name in the billing address.\nHere, the algorithm chose to disambiguate using the term \\emph{Meal}, i.e.,\npassengers have a meal preference while the billed person does not.\nWe refer the readers to the supplementary material for the list of all inferred landmarks.\n\n\n\\subsection{Form Image Extraction}\nWe consider two datasets of form images:\n(1) {\\em Finance} dataset: This consists of about $850$ images of receipts, purchase orders, credit notes, sales invoices and similar documents. Figure~\\ref{fig:formed_documents}(c)~shows one such image of a sales invoice. The training and test data are from the same time period; hence, this dataset is used for the contemporary evaluation.\n(2) {\\em M2H-Images} dataset: We convert M2H emails from $4$ domains to images and extract the same fields as before. This represents a common scenario in practice where HTML documents such as booking emails or receipts are printed and then scanned again for data extraction, losing the original HTML structure. Table~\\ref{tab:imagesdata} shows the test data sizes for each domain in both these datasets. We use a training data size of $10$ in all our image extraction experiments. In the M2H-Images dataset, the test data contains emails from a different time period than the ones in the training set. 
Hence, this dataset is used for the longitudinal evaluation.\n\\arsays{Somebody will definitely ask why not all 6 domains}\n\n\nWe first pre-process these images to make them sharper and perform optical character recognition (OCR) on them. This provides us with bounding boxes and text for these images, and we operate on top of this input. For our experiments, we use the OCR offering from Microsoft Azure~\\cite{AzureOCR}. We compare $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ with a custom neural model fine-tuned using Azure Form Recognizer (\\ensuremath{\\mathsf{AFR}})~\\cite{AFR}, which is a cloud-based Azure Applied AI Service. We use a sample of $10$ documents as training data for each domain in our experiments. In the current work, we do not handle OCR issues such as spelling errors in a principled manner; we employ simple heuristics to correct them.\n\n\\paragraph{Overall Results}\nTable~\\ref{tab:imagesoverall}~shows the average precision, recall and F1 scores for \\ensuremath{\\mathsf{AFR}}\\ and \\ensuremath{\\mathsf{LRSyn}}\\xspace, for both the Finance and M2H-Images datasets. The scores are weighted by the test sizes across the various extraction tasks. Unlike $\\ensuremath{\\mathsf{AFR}}$, which is a neural model, $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ synthesizes programs on top of OCR boxes. As we can see from the table, $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ performs marginally better than $\\ensuremath{\\mathsf{AFR}}$ on the Finance dataset. This is mainly because the test data does not have variations when compared to the training set. $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ outperforms $\\ensuremath{\\mathsf{AFR}}$ on the M2H-Images dataset with an F1 score of $0.95$ as compared to $0.87$, since the test data in this dataset contains images from a different time period, and these images have many variations when compared to the training set.\n\\highlightbox{\n$\\ensuremath{\\mathsf{LRSyn}}\\xspace$ outperforms a state-of-the-art industrial form extraction system trained on\n\\arsays{$10^x$s} of images, with just $10$ training images per domain, achieving an F1 score\nof $0.95$ vs $0.87$ on the M2H-Images dataset.\n}\n\n\\begin{table}\n\\scriptsize\n\\parbox{.45\\linewidth}{\n\\centering\n\\begin{tabular}{|l|c|c|}\n\\hline\nMetric & $\\ensuremath{\\mathsf{AFR}} $ & $\\ensuremath{\\mathsf{LRSyn}}\\xspace$\\\\\n\\hline\nAvg. Prec. & 0.98\t& 0.99 \\\\ \\hline\nAvg. Rec. & 0.96\t& 0.99 \\\\ \\hline\nAvg. F1 & 0.97\t& 0.99 \\\\ \\hline\n\\end{tabular}\n\\caption*{Finance dataset}\n}\n\\hfill\n\\parbox{.45\\linewidth}{\n\\centering\n\\begin{tabular}{|l|c|c|}\n\\hline\nMetric & $\\ensuremath{\\mathsf{AFR}} $ & $\\ensuremath{\\mathsf{LRSyn}}\\xspace$\\\\\n\\hline\nAvg. Prec. & 0.85\t& 0.95 \\\\ \\hline\nAvg. Rec. & 0.91\t& 0.95 \\\\ \\hline\nAvg. F1 & 0.87\t& 0.95 \\\\ \\hline\n\\end{tabular}\n\\caption*{M2H-Images dataset}\n}\n\\vspace{-3ex}\n\\caption{Average precision, recall and F1 scores on the Finance and M2H-Images datasets}\n\\label{tab:imagesoverall}\n\\vspace{-3ex}\n\\end{table}\n\n\\paragraph{Detailed comparison}\nTable~\\ref{tab:financedataset}~shows the results of $\\ensuremath{\\mathsf{AFR}}$ and $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ on the Finance dataset with respect to $34$ extraction tasks. $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ performs better than $\\ensuremath{\\mathsf{AFR}}$ in $12$ out of the $34$ cases and is on par on the rest, with significant gains in some domains like \"AccountsInvoice\". 
\n\nEven though the neural model in $\\ensuremath{\\mathsf{AFR}}$ is trained with thousands of invoices and receipts, and further fine-tuned with our training data, we observe that it is sensitive to the region coordinates in a given document. If these regions translate slightly, or if the document scan is tilted, $\\ensuremath{\\mathsf{AFR}}$ extraction is erroneous. $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ is partially robust to such changes, as its region programs start from a landmark value and are agnostic to changes in the other regions. \n\nWe also note from the table that $\\ensuremath{\\mathsf{AFR}}$ is marginally better than $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ on some of the extraction tasks. This can be attributed to the absence of the boundary pattern in some of the test documents, which results in incorrect $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ extractions. $\\ensuremath{\\mathsf{AFR}}$, on the other hand, in all likelihood understands concepts like \"address\" and can extract the value irrespective of the presence of a boundary pattern. $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ can be improved further by incorporating such a semantic layer in its processing. \n\nTable~\\ref{tab:m2himages}~shows the results of $\\ensuremath{\\mathsf{AFR}}$ and $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ on the M2H-Images dataset with respect to $45$ extraction tasks. This dataset exhibits more variation at the visual level as compared to the Finance dataset. $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ performs better than $\\ensuremath{\\mathsf{AFR}}$ in $35$ out of the $45$ tasks and is on par on most of the remaining extraction tasks. This is mainly because $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ first locates the regions of interest and then extracts values from these regions. \n\n\\paragraph{Synthesized programs}\nHere is an example region and text program synthesized for the \"Date\" field in a \\emph{SalesInvoice} document, with respect to the landmark \"Date\". \\\\\nRegion program: $Extend(\"Date\", Relative(Right, EOL, false))$ \\\\\nText program: Extract a substring between \":\" and EOL\\\\\n\nThe region program starts from the landmark \"Date\" and extends right to collect all boxes until the end of the line (EOL), and the corresponding text program extracts the text between a colon and the EOL. We refer the readers to Table~\\ref{}~ in the Appendix for more examples. \\spsays{looks ugly, needs more work!!}\n\n\\paragraph{Summary of results}\nIn the HTML domain, the prior work \\ensuremath{\\mathsf{NDSyn}}\\xspace\\ is a high-performing system with F1 scores around 0.9, and \\ensuremath{\\mathsf{LRSyn}}\\xspace\\ is able to push the F1 scores to a perfect 1.0 in most cases. In longitudinal scenarios, \\ensuremath{\\mathsf{LRSyn}}\\xspace\\ improves over \\ensuremath{\\mathsf{NDSyn}}\\xspace\\ in 24 out of 53 fields, with a significant lift in F1 scores in many cases.\nIn the images domain, even with very little training data, \\ensuremath{\\mathsf{LRSyn}}\\xspace\\ matches AFR, a released product, in contemporary settings, and outperforms it in longitudinal settings. In addition, \\ensuremath{\\mathsf{LRSyn}}\\xspace\\ produces simpler, interpretable programs that match human intuition for extraction and are much easier to maintain, as the sketch below illustrates. 
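\n\nAs an illustration of this simplicity, the following is a minimal Python sketch of our own rendering of the region and text programs for the \"Date\" field above; the \\texttt{Box} type and helper names are assumptions made for the illustration, not the system's actual representation:\n\\begin{verbatim}\nfrom dataclasses import dataclass\n\n@dataclass\nclass Box:            # one OCR output box\n    text: str\n    x: float\n    y: float\n    w: float\n    h: float\n\ndef same_line(a, b, tol=0.5):\n    # Two boxes lie on the same line if their vertical centers are close.\n    return abs((a.y + a.h \/ 2) - (b.y + b.h \/ 2)) <= tol * max(a.h, b.h)\n\ndef region_program(boxes, landmark='Date'):\n    # Extend('Date', Relative(Right, EOL, false)): start at the landmark\n    # box and collect all boxes to its right on the same line.\n    anchor = next(b for b in boxes if landmark in b.text)\n    right = [b for b in boxes if same_line(anchor, b) and b.x > anchor.x]\n    return [anchor] + sorted(right, key=lambda b: b.x)\n\ndef text_program(region):\n    # Take the substring between ':' and the end of the line.\n    line = ' '.join(b.text for b in region)\n    return line.split(':', 1)[1].strip()\n\\end{verbatim}\nThe only global step here is locating the landmark box; every other step is local to the landmark's line, which is what makes such programs easy to read and to maintain.\n\n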
Overall, we see a lot of promise in this approach.\n\n\n\n\\begin{comment}\n\\begin{table*}\n\\small\n\\begin{tabular}{|l|}\n\\hline\nSale Invoice : Date \\\\\nLandmark: \"Date:\" \\\\\nRegion program: Extend(\"Date\", Relative(Right, EOL, false) \\\\\nText program: Extract a substring between \":\" and EOL \\\\\n\\hline \nAccounts Invoice: Chassis Number \\\\\nLandmark: \"Chassis Number\" \\\\\nRegion program: Extend(Extend(input, Absolute(down, 1)), Relative(Right, DATE, false)) \\\\\nText Program: Take all the strings in this path \\\\\n\\hline\nAccounts Invoice : InvoiceAddress \\\\\nRegion program: ExtendLineItems(\"Invoice Name \\& Address\", Relative(down, \"Model SKU\")) \\\\\nText Program: Take all the strings in this path \\\\\n\\hline\n\\end{tabular}\n\\caption{Synthesized extraction programs \\spsays{Improve presentation}}\n\\label{tab:imageprograms}\n\\end{table*}\n\\end{comment}\n\n\\paragraph{Efficiency of Data Extraction}\nFormed document data extraction programs are sometimes run on extremely large\ndatasets, and hence, their efficiency becomes a concern.\nFor example, an industrial system that was the source of the M2H data processes\nover $3 \\times 10^8$ emails each day.\n\\highlightbox{\n The $\\ensuremath{\\mathsf{LRSyn}}\\xspace$ programs for the M2H dataset outperform their $\\ensuremath{\\mathsf{NDSyn}}\\xspace$\n counterparts by \\arsays{AAAX}, with an average processing time of $XXX$~ms vs. $YYY$~ms.\n}\nWe believe this is mainly due to the extremely cheap landmark location step,\nwith the more expensive parts of the program only running on the small ROI.\n\\subsection{Jointly inferring landmarks and clusters}\n\\label{subsec:joint-infer-cluster}\n\n\\paragraph{Initial Clustering}\nThe first step in our technique is to separate the dataset into clusters that\ncorrespond to different formats of emails.\nFor this, we use the notion of {\\em blueprints}. \nFor the HTML domain, the blueprint of a document (or a document fragment)\ncontains the tag structure and values that are common across documents. \nWe compute an initial clustering over whole documents, using the closeness of their blueprints as the distance metric between documents. The initial clustering produces a large number of very fine-grained clusters.\nOn the M2H dataset, our initial clustering produces about $20$ \\emph{head\nclusters} along with many small \\emph{tail clusters}, for a total of about $70$\nclusters. \n\n\\paragraph{Identifying Landmark Candidates}\nFor each cluster, we identify landmark candidates, which are n-grams that appear in all documents in the cluster.\nFor each such landmark candidate, we assign a score based\non two metrics:\n\\begin{inparaenum}[(a)]\n \\item the distance between the landmark candidate and the field value to be\n extracted, and\n %\n \\item the number of nodes in the document region that encloses both the\n landmark candidate and the field value.\n\\end{inparaenum}\nThese two metrics capture the intuition that landmarks should be close to the\nvalues being extracted.\nFor the email in Figure~\\ref{fig:formed_documents}(a), the top-scoring landmark candidates for extracting \\emph{Departure Time} are \"Depart\" and \"Arrive\".\n\n\\paragraph{Regions of Interest and Re-clustering}\nA contiguous region in the document that encloses the landmark candidate and the field value is called a \\emph{region of interest}. 
We are interested in small regions of interest.\nIn Figure~\\ref{fig:formed_documents}, the landmark candidates are shown in orange ellipses, and the corresponding regions of interest are shown using blue rectangles.\nWe now re-define the distance metric between two documents as the minimum distance (over all\nlandmark candidates) between the blueprints of the corresponding regions of interest.\nUsing this revised distance metric, we iteratively merge clusters if the average distance between documents in the clusters is less than a threshold, until no further merging is possible.\nAs a result of such merging, formats with an advertisement section added, or with existing sections rearranged, are all placed in the same cluster, since the blueprint of the region of interest is invariant to such changes occurring outside the region of interest. In the M2H dataset, iterative merging based on the closeness of regions of interest results in fewer than $5$ clusters per field.\n\n\\subsection{Synthesizing Extraction Programs}\nWith the new coarse-grained clusters, we synthesize two sub-programs that,\ntogether with the blueprint of the region of interest, form the complete\nextraction program.\nThe first sub-program, called the {\\em region extraction program}, takes the whole document $d_i$ and a landmark location $\\ell$ as inputs, and produces a region of interest $R$ as output. \n %\n In the case of HTML documents, the synthesized region extraction program starts at the landmark location, traverses the tree structure of the HTML document to grow a region around that location, and ensures that all locations of field values are included.\n %\n As an example, for the airline itinerary shown in Figure~\\ref{fig:formed_documents}(a), in order to extract the value of the departure time with \"Depart:\" as the landmark, the synthesized region extraction program is given by \\emph{0 parent hops, 1 sibling hop}. The single sibling hop here implies that, across all the training data, the field value lies one sibling away from the landmark within the same parent node in the DOM. The \\emph{parent hops} and \\emph{sibling hops} are computed based on all the annotated training documents in the given cluster, and hence produce an ROI large enough to include the locations of all the field values to be extracted. We also compute the blueprint for this region, and use this blueprint during execution of the region extraction program (see below).\n The second sub-program, called the {\\em value extraction program}, takes the region produced by the region extraction program as input, and produces the desired field value as output. For HTML documents, we use the DSL and the corresponding synthesis algorithm from~\\cite{raza2020web}. For the example in Figure~\\ref{fig:formed_documents}(a), the synthesized region and value extraction programs, and the blueprint of the region of interest, are shown in Figure~\\ref{lst:lrsynprogram}. \n \n\n\\begin{figure}\n\\small\n\\lrsynprogram{Depart}{0}{1}{\/TD}{nth-child(2)}{Extract TIME sub-string}\n\\caption{\\ensuremath{\\mathsf{LRSyn}}\\xspace\\ extraction program for Depart time}\n\\label{lst:lrsynprogram}\n\\end{figure}\n\n\n\n\n\n\n\nDuring execution of the synthesized extraction program, we compute the blueprint of the ROI calculated by the region extraction program and compare it with the blueprint of the ROI generated during synthesis. 
If the distance between these two blueprints is below a threshold value $t$, the regions from synthesis and inference times are \"roughly similar\", and we use this extracted program. Otherwise, we look for other extraction programs synthesized from other clusters, which better match the current document.\n\n\n\\subsection{Preliminaries and Problem Statement}\n\n\\paragraph{Documents and Locations}\nWe use the term \\emph{document} to represent a single record of a dataset from which we are interested in extracting data.\nWe use the symbol $\\ensuremath{\\mathsf{doc}}$ to represent documents.\nA document $\\ensuremath{\\mathsf{doc}}$ has a set of \\emph{locations} $\\ensuremath{\\mathsf{Locs}}(\\ensuremath{\\mathsf{doc}})$ which can be\nused to index into the document and look up values.\nFor a location $\\ensuremath{\\ell} \\in \\ensuremath{\\mathsf{Locs}}(\\ensuremath{\\mathsf{doc}})$, the function $\\ensuremath{\\mathsf{Data}}[\\ensuremath{\\ell}]$\nreturns the value of the data present in the location $\\ensuremath{\\ell}$ of $\\ensuremath{\\mathsf{doc}}$.\n\\begin{example}\n \\label{ex:locs}\n When dealing with HTML documents, the location are XPaths that retrieve\n elements in the HTML DOM document tree structure.\n %\n The data value of a location is the concatenation of all the text elements\n in the DOM element.\n\\end{example}\n\n\\paragraph{Data-sets and Fields}\nWe model a heterogeneous dataset $\\ensuremath{\\mathcal{D}}$ as a tuple $(\\ensuremath{D}, \\{\n\\ensuremath{C}_0, \\ldots, \\ensuremath{C}_n \\})$ where:\n\\begin{inparaenum}[(a)]\n\\item $\\ensuremath{D}$ is a finite set of \\emph{input documents} (or just \\emph\n {documents} for short) and\n\\item $\\{ \\ensuremath{C}_0, \\ldots, \\ensuremath{C}_n \\}$ is a partition of $\\ensuremath{D}$ into\n \\emph{clusters}, i.e., $\\bigcup \\ensuremath{C}_i = D$ and $\\forall i \\neq j .\\;\n \\ensuremath{C}_i \\cap \\ensuremath{C}_j = \\emptyset$.\n\\end{inparaenum}\nEach partition represents a \\textit{similar} set of documents in terms of format.\nThe extraction framework has access to the inputs $\\ensuremath{D}$, but not the\npartitioning.\nHenceforth, we write ``dataset $\\ensuremath{D}$'' instead of ``heterogeneous\ndataset $\\ensuremath{\\mathcal{D}}$'' to denote that the exact partition of $\\ensuremath{D}$ into\nclusters is not provided to us as input.\n\nFor a given dataset $\\ensuremath{D}$, a \\emph{field} $\\ensuremath{\\mathsf{F}}$ of type $\\ensuremath{\\mathsf{T}}$\nis a partial function $\\ensuremath{\\mathsf{F}} : D \\not\\to \\ensuremath{\\mathsf{T}}$ that maps documents to\nvalues of type $\\ensuremath{\\mathsf{T}}$.\nWe implicitly assume that a field is either defined for all documents in a\ncluster or is undefined for all documents in the cluster, i.e.,\n$\\forall \\ensuremath{C}_i . \\forall \\ensuremath{\\mathsf{doc}}, \\ensuremath{\\mathsf{doc}}' \\in \\ensuremath{C}_i. 
\n \\ensuremath{\\mathsf{F}}(\\ensuremath{\\mathsf{doc}}) = \\bot \\Leftrightarrow \\ensuremath{\\mathsf{F}}(\\ensuremath{\\mathsf{doc}}') = \\bot$.\nWe say that $\\ensuremath{\\mathsf{F}}(\\ensuremath{\\mathsf{doc}})$ is the {\\em value of the field} $\\ensuremath{\\mathsf{F}}$\nin $\\ensuremath{\\mathsf{doc}}$.\nThe type of a field can either be a primitive type, such as an integer or a string, or a\ncomposite type, such as a list of strings or a set of integers.\nThough we are interested in extracting multiple fields from each document\nin a dataset, for simplicity of presentation, our formal treatment considers\nextracting the value of a single field.\n\n\\begin{comment}\n\\begin{example}\n \\label{ex:fields}\n %\n The M2H dataset introduced in Section~\\ref{sec:overview} consists of flight\n reservation emails from multiple different airlines. In this dataset, a field like reservation number has a primitive type, i.e., string, while the others like departure time are lists of strings.\n %\n In Figure~\\ref{fig:formed_documents} a), the value of reservation number is ``SAGDQU'', while the \n values of departure time and traveler names are\n [``23.34'', ``21.44''] and [``Fx Engelberg Katrena Troi''], respectively.\n\\end{example}\n\\end{comment}\n\n\n\n\\paragraph{Annotations}\nGiven a field $\\ensuremath{\\mathsf{F}}$ of a dataset $\\ensuremath{D}$, an \\emph{annotation}\n$\\ensuremath{\\mathcal{A}}(\\ensuremath{\\mathsf{doc}})$ of $\\ensuremath{\\mathsf{doc}} \\in \\ensuremath{D}$ is a list of locations\n$[ \\ensuremath{\\ell}_1, \\dots, \\ensuremath{\\ell}_n ]$ and an aggregation function $\\ensuremath{\\mathsf{Agg}}$\nsuch that $\\ensuremath{\\mathsf{F}}(\\ensuremath{\\mathsf{doc}}) =\n\\ensuremath{\\mathsf{Agg}}(\\ensuremath{\\mathsf{Data}}[\\ensuremath{\\ell}_1],\\ldots, \\ensuremath{\\mathsf{Data}}[\\ensuremath{\\ell}_n])$.\nAnnotations are user-provided ``labels'' in ML parlance, and are used as\ntraining data.\nFor our experiments, we built a visual user interface where annotators could\nclick on individual HTML and image documents to select annotation locations.\nIn the background, the tool converts these clicks into locations, i.e., XPaths\nin HTML documents and x-y coordinates in PDF documents.\n\n\\begin{example}\n \\label{ex:annotations}\n For the departure time field in Figure~\\ref{fig:formed_documents}(a), the annotation contains the two locations with text elements ``Friday, Apr 3 8:18 PM'' and ``Thursday, Apr 9 2:02 PM'', and the\n aggregation function collects these values into a list.\n %\n\\end{example}\n\n\\paragraph{The formed document extraction problem}\nFix a dataset $\\ensuremath{D}$ and a field $\\ensuremath{\\mathsf{F}}$.\nThe input to the formed document extraction problem is a set of annotations on a \\emph{training set} $\\ensuremath{D}_\\ensuremath{\\mathsf{tr}} \\subseteq\n\\ensuremath{D}$.\nThe ideal expected output for such a problem is an \\emph{extraction function}\n$\\ensuremath{\\mathsf{Extract}}$ such that $\\forall \\ensuremath{\\mathsf{doc}} \\in \\ensuremath{D}. 
\\ensuremath{\\mathsf{Extract}}(\\ensuremath{\\mathsf{doc}}) = \\ensuremath{\\mathsf{F}}(\\ensuremath{\\mathsf{doc}})$.\nHowever, it is hard to produce ideal extractions, and we instead use the\nstandard metrics of \\emph{precision}, \\emph{recall} and \\emph{F1 score} to\nmeasure the quality of an extraction function (see, for example,~\\cite{PR}).\nIn practice, we are usually interested in extracting multiple fields from a\ndocument at once, and in fact, our implementation can do so.\nHowever, for simplicity of discussion, we present our techniques and conduct our\nexperiments for one field at a time. \n\n\\subsection{Landmarks and Regions}\n\\paragraph{Landmarks}\nA landmark is a value that we can use to identify a location in a\ndocument, such that the field value is present in a ``nearby'' location (or in\n``nearby'' locations if the field value is aggregated from multiple data values).\nFormally, a \\emph{landmark} is given by a data value $\\ensuremath{\\mathsf{m}}$.\nA given landmark $\\ensuremath{\\mathsf{m}}$ identifies a unique location $\\ensuremath{\\ell}$\nin a document $\\ensuremath{\\mathsf{doc}}$ such that $\\ensuremath{\\mathsf{Data}}[\\ensuremath{\\ell}] \\supseteq \\ensuremath{\\mathsf{m}}$, i.e., the\nlandmark value is a sub-string of the data at $\\ensuremath{\\ell}$.\nIn order for a landmark to be useful for our purposes, we require the existence\nof an inexpensive ``locator'' function, which can locate the occurrences of a\nlandmark in a document.\nMore precisely, we assume a computationally inexpensive function $\\ensuremath{\\mathsf{Locate}}$ such\nthat $\\ensuremath{\\mathsf{Locate}}(\\ensuremath{\\mathsf{doc}},\\ensuremath{\\mathsf{m}}) = \\ensuremath{\\ell} \\implies \\ensuremath{\\mathsf{Data}}[\\ensuremath{\\ell}] \\supseteq \\ensuremath{\\mathsf{m}}$.\n\n\\begin{example}\n \\label{ex:landmark}\n Consider the travel itinerary document in Figure~\\ref{fig:formed_documents}(a).\n %\n In order to extract departure times from this document, a possible landmark\n to use is the phrase ``Depart:''.\n\\end{example}\n\n\\begin{remark}\n For ease of presentation, the definition assumes\n that landmarks occur in one location per document\n (contrast against \\emph{Depart} in Figure~\\ref{fig:formed_documents}(a)).\n %\n We discuss handling multiple, ambiguous landmark locations in Section 6.\n\\end{remark}\n\n\n\n\\begin{comment}\nWe assume the existence of an ordering \\ensuremath{\\preccurlyeq}\\ between locations in a document.\nThe definition of the ordering varies depending on the document type. For HTML\ntrees, the relationship can be defined on the order in which locations are\ntraversed during a traversal of the HTML tree (such as pre-order traversal). 
For\ninvoice documents that are scanned images, we first process them with OCR\npackages, which convert the images into a set of text boxes with x-y\nco-ordinates.\nEach of the text boxes can be thought of as a location, and the ordering between\nlocations can be defined based on conjunctions of left-to-right and\ntop-to-bottom relationships between the x and y co-ordinates of the top-left\ncorners of the text boxes.\n\\end{comment}\n\n\\paragraph{Regions}\nA {\\em region} $\\ensuremath{\\mathsf{R}}$ of a document $\\ensuremath{\\mathsf{doc}}$ is a set of contiguous locations.\nA region can be thought of as a ``sub-document''.\n\\begin{comment}\n\\begin{definition}[\\textbf{Enclosing region}]\nGiven a document $d$, and a set of locations $R$ in a document $d$, the {\\em\nenclosing region} of $L$, denoted $\\ensuremath{\\mathsf{EncRgn}}(R,d)$ is a sub-document of $d$ that\ncontains the smallest super-set of $R$ that is closed with respect to $\\ensuremath{\\preccurlyeq}$.\nThat is, $\\ensuremath{\\mathsf{EncRgn}}(L,d) \\supseteq R$, and for any pair of locations $\\ell_1$ and\n$\\ell_2 \\in \\ensuremath{\\mathsf{EncRgn}}(R,d)$, if $\\hat \\ell \\in L_d$ is such that $\\ell_1 \\ensuremath{\\preccurlyeq}\n\\hat \\ell \\ensuremath{\\preccurlyeq} \\ell_2$, then $\\hat \\ell \\in \\ensuremath{\\mathsf{EncRgn}}(R,d)$\n\\end{definition}\n\\end{comment}\nGiven a set of locations $\\ensuremath{L}$ of a document $\\ensuremath{\\mathsf{doc}}$, the \\emph{enclosing\nregion} $\\ensuremath{\\mathsf{EncRgn}}(\\ensuremath{L}, \\ensuremath{\\mathsf{doc}})$ is the smallest region that contains all\nlocations in $\\ensuremath{L}$.\nWe are particularly interested in regions that enclose a landmark and the\ncorresponding field values as our approach is based on narrowing down the\ndocument to such regions. \nWe call such regions as {\\em regions of interest} or {\\em ROIs} for short.\n \n\\begin{example}\n The bottom two blue rectangles in Figure~\\ref{fig:formed_documents}(a) highlight the relevant ROIs that contain both the landmark ``Depart:'' and the\n associated field values.\n\\end{example}\n\n\n\\paragraph{Blueprints}\nIntuitively, a blueprint of a region\nis a ``hash'' of all the parts that are ``common'' to\nall such regions in the cluster.\nFor example, the strings \"Airline Record Locator\", \"AIR\", \"Meal\", \"Depart:\" in Figure~\\ref{fig:formed_documents}(a) are common values, since they will occur in all\ndocuments that follow this format.\n \nLet the \\emph{layout} of a region $\\ensuremath{\\mathsf{R}}$ in a document\n$\\ensuremath{\\mathsf{doc}}$ be the subset of all locations $\\ensuremath{\\ell} \\in \\ensuremath{\\mathsf{R}}$ such that\n$\\ensuremath{\\mathsf{Data}}[\\ensuremath{\\ell}]$ is a common value. The \\emph{blueprint} $\\ensuremath{\\mathsf{BP}}(\\ensuremath{\\mathsf{R}})$ of a region $\\ensuremath{\\mathsf{R}}$ is then defined as a hash\nof values in the layout of $\\ensuremath{\\mathsf{R}}$.\n\nIf two regions are similar, we want to define their blueprints such that they are close to each other.\nGiven two blueprints $\\ensuremath{\\mathsf{b}}_1$ and $\\ensuremath{\\mathsf{b}}_2$, we use the notation\n$\\ensuremath{\\delta}(\\ensuremath{\\mathsf{b}}_1, \\ensuremath{\\mathsf{b}}_2)$ to denote the distance between $\\ensuremath{\\mathsf{b}}_1$ and\n$\\ensuremath{\\mathsf{b}}_2$. 
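\n\nTo make the notions of $\\ensuremath{\\mathsf{BP}}$ and $\\ensuremath{\\delta}$ concrete, the following is a simple instantiation for HTML regions, written as a Python sketch of our own (the node interface with \\texttt{tag} and \\texttt{parent} attributes is assumed for illustration; the instantiation actually used in this paper is the XPath-based one described in the example below). It hashes the region into its set of tag paths and measures distance with the Jaccard distance:\n\\begin{verbatim}\ndef blueprint(region):\n    # Region = set of DOM nodes.  The blueprint is the set of tag paths\n    # within the region, ignoring node order and everything outside it.\n    nodes = set(region)\n    paths = set()\n    for node in region:\n        path, cur = [], node\n        while cur is not None and cur in nodes:\n            path.append(cur.tag)\n            cur = cur.parent\n        paths.add(tuple(reversed(path)))\n    return frozenset(paths)\n\ndef delta(bp1, bp2):\n    # Jaccard distance: 0 for identical layouts, 1 for disjoint ones.\n    union = bp1 | bp2\n    if not union:\n        return 0.0\n    return 1.0 - len(bp1 & bp2) \/ len(union)\n\\end{verbatim}\nUnder this instantiation, two regions that differ only in where they sit in the document, or in the order of their children, are at distance $0$, while structural changes inside the region increase the distance.\n\n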
\nIf $\\ensuremath{\\mathsf{R}}_1$ and $\\ensuremath{\\mathsf{R}}_2$ are similar in structure, we want\n$\\ensuremath{\\delta}(\\ensuremath{\\mathsf{BP}}(\\ensuremath{\\mathsf{R}}_1), \\ensuremath{\\mathsf{BP}}(\\ensuremath{\\mathsf{R}}_2))$ to be small value.\n\n\\begin{example}\n Our notion of blueprint of an HTML region is based on the XPaths\n to its DOM nodes, but ignoring node order.\n %\n For example, the blueprint of a region stores the path starting from\n a \\texttt{div} node, descending through \\texttt{table}, \\texttt{tr}, \\texttt{td}, and \\texttt{span} nodes, but without storing where this path is in relationship to other paths.\n\\end{example}\n\n\n\\subsection{Landmark-based DSLs}\nTo formalize the notions of extraction using landmarks, we introduce a special class of DSLs called landmark-based DSLs. This is a generic design of languages that formally captures our reasoning using landmarks and regions for extraction tasks in a domain-agnostic fashion. Figure \\ref{fig:dsllandmark} shows the structure of a landmark-based DSL. In such DSLs, the input is always assumed to be a document and a complete program returns a field value extracted from the document. Such DSLs consist of four notions: landmarks $m$, blueprints $b$, region extraction programs $p_{rx}$ and value extraction programs $p_{vx}$.\nThe region and value extraction programs can be instantiated arbitrarily for a particular domain by defining the language fragments $\\mathcal{L}_{rx}$ and $\\mathcal{L}_{vx}$, and we shall illustrate such instantiations of these fragments for the web and image extraction domains. \n\n These four notions are brought together in the single top-level $\\mathsf{Extract}$ operator, the semantics of which is defined in Algorithm ~\\ref{algo:extract}. This operator takes a list $Q$ of 4-tuples, where each tuple consists of a landmark, a region program, a blueprint for the region, and an extraction program. Each tuple represents an extraction strategy for a particular region format. The region program uses a landmark to identify a region of the document, and if this region matches the given blueprint, then the extraction program can be applied on the region to extract the field value. The top-level operator acts as a switch statement that applies the first tuple that successfully extracts a value from the document. Formally, for each tuple, we first use the $\\ensuremath{\\mathsf{Locate}}$ function to identify the location corresponding to the landmark $\\ensuremath{\\mathsf{m}}$. This location is then input into the region program $\\ensuremath{\\mathsf{RProg}}_i$ to produce the\nregion of interest $\\ensuremath{\\mathsf{R}}$. Now, we proceed with this cluster only if the blueprint of the region is\nwithin a certain tunable threshold of similarity. If the blueprint is close enough, we return the output of the extraction\nprogram $\\ensuremath{\\mathsf{EProg}}_i$ on the region. Otherwise, we continue with the remaining tuples in $Q$.\n\n\n\\begin{figure}\n\\small{\n\\begin{tabular}{rcl}\n\\!\\!\\!\\!$\\boldsymbol{\\mathsf{@start}}$ \\; $\\mathsf{T}$ \\; $\\mathit{t}$ \\!\\! \\!\\!\\!\\!&$:=$&\\!\\! \\!\\!\\!\\!\\! $\\mathsf{Extract}(\\mathit{q,\\!...,q})$ where $q = (\\mathit{m}, \\mathit{p}_{rx}, \\mathit{b}, \\mathit{p}_{vx}) $ \\;\\; \\\\ [0.75ex]\n\\!\\!\\!\\!$\\mathsf{R \\!\\rightarrow\\! T}$ \\; $\\mathit{p}_{vx}$ \\!\\! \\!\\!\\!\\!&$:=$&\\!\\! \\!\\!\\!\\!\\! 
$\\ldots \\;\\; \\mathcal{L}_{vx} \\;\\; \\ldots$ \\;\\; \\\\ [0.75ex]\n\\!\\!\\!\\!$\\mathsf{(doc, str) \\!\\rightarrow\\! R}$ \\; $\\mathit{p}_{rx}$ \\!\\! \\!\\!\\!\\!&$:=$&\\!\\! \\!\\!\\!\\!\\! $\\ldots \\;\\; \\mathcal{L}_{rx} \\;\\; \\ldots$ \\;\\; \\\\ [0.75ex]\n\\!\\!\\!\\!$\\mathsf{str}$ $\\mathit{m}$ \\!\\! \\!\\!\\!\\!& &\\!\\! \\!\\!\\!\\!\\! \\;\\; \/\/ landmark \\\\ [0.75ex]\n\\!\\!\\!\\!$\\mathsf{obj}$ $\\mathit{b}$ \\!\\! \\!\\!\\!\\!& &\\!\\! \\!\\!\\!\\!\\! \\;\\; \/\/ blueprint \\\\ [0.75ex]\n\\!\\!\\!\\!$\\boldsymbol{\\mathsf{@input}}$ \\; $\\mathsf{doc}$ $\\mathit{d}$ \\!\\! \\!\\!\\!\\!& &\\!\\! \\!\\!\\!\\!\\! \\;\\; \/\/ input document\\\\ [0.75ex]\n\n\n\n\n\\end{tabular}\n\n}\n\\vspace{-2ex}\n\\caption{Structure of a Landmark-based DSL $\\mathcal{L}_{ld}$}\n\\label{fig:dsllandmark} \n\\end{figure}\n\n\\begin{algorithm}\n\\small\n\\caption{Semantics of the $\\mathsf{Extract}$ operator in a landmark-based DSL. The blueprint threshold $t$ is a tunable parameter of the semantics.}\n\\label{algo:extract}\n\\begin{algorithmic}[1]\n \\Require Input document $d$ of type $\\ensuremath{\\mathsf{doc}}$\n \\Require List $Q = [q_1,...,q_k]$, where each $q_i$ has the form $(m, p_{rx}, b, p_{vx})$ for $ 1 \\leq i \\leq k$\n %\n \\For{$(m, p_{rx}, b, p_{vx}) \\in\n Q$}\n \\State $\\ensuremath{\\ell} \\gets \\ensuremath{\\mathsf{Locate}}(d, m)$\n \\State $\\ensuremath{\\mathsf{R}} \\gets p_{rx}(d, \\ensuremath{\\ell})$\n \\If{$\\ensuremath{\\mathsf{R}} \\neq \\bot \\land \\ensuremath{\\delta}(\\ensuremath{\\mathsf{BP}}(\\ensuremath{\\mathsf{R}}), b) \\leq t$}\n \\State \\Return $\\ensuremath{\\mathsf{Agg}}(p_{vx}(\\ensuremath{\\mathsf{R}}))$\n \\EndIf\n \\EndFor\n \\State \\Return $\\bot$\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\n\n\\section{HTML extraction and $\\ensuremath{\\mathsf{LRSyn}}\\xspace$}\n\n\n\\section{Structured image extraction and $\\ensuremath{\\mathsf{LRSyn}}\\xspace$}\n\n\\begin{comment}\n\\begin{algorithm}\n\\small\n\\caption{Landmark Detection}\n\\label{lst:cover}\n\\flushleft $\\mathsf{Landmark Detection}(D,O_R,F)$\n\\begin{algorithmic}[1]\n\\Require Documents $D = \\{D_1,D_2,...,D_n\\}$\n\\Require Output Regions $O_R = \\{O_1,O_2,O_3,...,O_n\\}$ for $D_i$\n\\Require Field $F$\n\\State $L_c=\\phi$ where $L_c$ is set of Lmk candidate \n\\State \\textbf{for each}$D_i \\in D$ \\textbf{do}\n\\State \\hspace{\\algorithmicindent} \\textbf{if }$L_c = \\phi$\n\\State \\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}$L_c \\gets FindNGrams(D_i)$\n\\State \\hspace{\\algorithmicindent}\\textbf{else}\n\\State\\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}$L_c \\gets L_c \\cap FindNGrams(D_i)$\n\\State \\textbf{for each} $D_i \\in D$ \\textbf{do}\n\\State \\hspace{\\algorithmicindent}\\textbf{for each} $L_i \\in L_c$ \\textbf{do}\n\\State \\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}$l_r \\gets ComputeSmallestRegion(Ls,D)$\n\\State \\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}$R_{L_i} \\gets RegionGrwoingLearn(L_i,O_R,F)$\n\\State \\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}$R_{hash} \\gets ComputeBlueprint(R_i)$ for each $R_i \\in R_{L_i}$ \n\\State \\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}Steps $S \\gets Max(ComputeSteps(R_i)$ for each $R_i \\in R_{L_i}$ \n\\State \\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}Find $l_{r'} = l_r \\notin R_{L_i}$ \n\\State \\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}$R'\\gets RegionGrowingPredict(L_i,S,D)$\n\\State 
\\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}$R_{hash}' \\gets ComputeBlueprint(R_i')$ for each $R_i' \\in R'$\n\\State \\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}$R_{noisy} \\gets R_{hash}' \\cap R_{hash}$\n\\State \\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}Store $R_{hash}' \\setminus R_{hash}$ \n\\State \\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}$s_{noisy}=1\/(|R_{noisy}|+1)$\n\\State \\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}$s_{region}=1\/(Max(Width(R_{L_i})))$\n\\State \\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}$D_{visual} = ComputeVisualDistance(l_i,o_i)$\n\\State \\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}$s_{visual}= 1\/(Max(D_{visual}))$\n\\State Compute the average of each score for $l_i$ to obtain $(S_{n_{l_i}},S_{r_{l_i}},S_{v_{l_i}})$\n\\State Previous noisy score $S_{N_{prev}} = 0$ \n\\State \\textbf{do}\n\\State \\hspace{\\algorithmicindent} $L_s \\gets \\{ l_i \\mid Argmax(ComputeScore(S_{r_{l_i}},S_{v_{l_i}})) \\land S_{n_{l_i}} > S_{N_{prev}} \\}$\n\\State \\hspace{\\algorithmicindent} $S_{N_{prev}} = S_{n_{l_i}}$\n\\State \\textbf{while} $S_{n_{l_i}} \\neq 1$\n\\State \\Return $L_s$\n\\State\n\\Function{FindNGrams}{$D_i$}\n \\State $N \\gets GenerateNGram(D_i)$\n\\State \\Return $N$\n\\EndFunction\n\\State\n\\Function{ComputeVisualDistance}{$l_i,o_i$}\n \\State Let $Box_{l_i} \\gets\\{x_{l_{left}},x_{l_{right}},y_{l_{top}},y_{l_{bottom}}\\} $\n \\State Let $Box_{o_i} \\gets\\{x_{o_{left}},x_{o_{right}},y_{o_{top}},y_{o_{bottom}}\\} $\n \\State $x_1 = (x_{l_{left}}+x_{l_{right}})\/2$\n \\State $x_2 = (x_{o_{left}}+x_{o_{right}})\/2$\n \\State $y_1 = (y_{l_{top}}+y_{l_{bottom}})\/2$\n \\State $y_2 = (y_{o_{top}}+y_{o_{bottom}})\/2$\n \\State $D=\\sqrt{(x_{1}-x_{2})^2+(y_{1}-y_{2})^2}$ \n\\State \\Return $D$\n\\EndFunction\n\\State\n\\Function{ComputeScore}{$S_{r},S_{v}$}\n \\State $S =\\log_{10}(S_{r})+\\log_{10}(S_{v})$ \n\\State \\Return $S$\n\\EndFunction\n\n\\end{algorithmic} \n\\end{algorithm}\n\n\\begin{algorithm}\n\\small\n\\caption{Clustering And Landmark Detection}\n\\label{lst:cluster}\n\\flushleft $\\mathsf{ClusteringAndLmkDetection}(D,T,K,F)$\n\\begin{algorithmic}[1]\n\\Require Documents $D = \\{D_1,D_2,...,D_n\\}$\n\\Require Threshold for big clusters $T$\n\\Require Number of landmark candidates $K$\n\\Require Field $F$\n\\State \\textbf{for each} $D_i \\in D$ \\textbf{do}\n\\State \\hspace{\\algorithmicindent} $F_i \\gets ComputeBlueprint(D_i)$\n\\State Let big clusters $B = \\{B_1,B_2,...,B_n\\}$ \n\\State $L_0 \\gets$ Merge based on the blueprints $F_i$\n\\State Let $L_0=\\{\\{D_1,D_x,...\\},\\{D_3,D_y,...\\},\\{D_n,D_z,...\\}\\}$\n\\State \\textbf{for each} $C_i \\in L_0$ \\textbf{do}\n\\State \\hspace{\\algorithmicindent}\\textbf{if} $|C_i| \\ge T$\n\\State \\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}$B_i \\gets C_i$\n\\State \\hspace{\\algorithmicindent}\\textbf{else}\n\\State \\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}$S_i \\gets C_i$\n\n\\State \\textbf{if} $|B|=0$\n\\State \\hspace{\\algorithmicindent}$B_1 \\gets \\bigcup_{C_i \\in L_0} C_i$\n\\State \\hspace{\\algorithmicindent}$S \\gets \\phi$\n\\State \\textbf{for each} $B_i \\in B$ \\textbf{do}\n\\State \\hspace{\\algorithmicindent} $L \\gets LandmarkDetection(B_i,O,F)$\n\\State \\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}\\textbf{for each} $l_i \\in L$ \\textbf{do}\n\\State \\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}$R_{B_{l_i}}\\gets RegionGrowingLearn(l_i,O,F)$\n\\State \\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}$H_{B_{l_i}}\\gets ComputeBlueprint(R_{B_{l_i}})$, where $H$ denotes the ROI hash \n\\State $H_{score}=\\{(H_{B_{l_1}},0),(H_{B_{l_2}},0),...,(H_{B_{l_i}},0)\\}$ where $H_{B_{l_i}} \\in H_{B_{l}}$ \n\\State $C=\\{B_1\\}$ where $B_1$ is the biggest cluster\n\\State $P = (B \\setminus \\{B_1\\}) \\cup S$\n\\State \\textbf{for each} $P_i \\in P$ \\textbf{do}\n\\State \\hspace{\\algorithmicindent} \\textbf{for each} $C_j \\in C$ \\textbf{do}\n\\State \\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}$L' \\gets L_{B}$ where $B = Argmax_{B \\in C}(|B|)$\n\\State \\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}$R_{p_i} \\gets RegionGrowingLearn(L_{k}',O_{p_{i}},F)\\ \\forall L_{k}' \\in L'$\n\\State \\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}$H_{p_i}' \\gets ComputeBlueprint(R_{p_k})\\ \\forall R_{p_k} \\in R_{p_i}$\n\\State \\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}\\textbf{if} $H_{p_i}' \\cap H_{C_j} = \\phi$\n\\State \\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}Add $P_i$ to $C$ as a new cluster\n\\State \\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}\\textbf{else}\n\\State \\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}\\textbf{for each} $H_x \\in H_{p_i}' \\cap H_{C_j}$ \\textbf{do}\n\\State \\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}$H_{score}[H_x] \\gets H_{score}[H_x]+1$\n\\State \\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}\\hspace{\\algorithmicindent}$C_j \\gets C_j \\cup P_i$\n\n\\State \\Return $C$\n\\end{algorithmic} \n\\end{algorithm}\n\\end{comment}\n
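\\begin{comment}\nAs a reading aid for Algorithm~\\ref{algo:extract}, the following Python sketch mirrors the semantics of the $\\mathsf{Extract}$ operator. It is only an illustration: the helpers locate, blueprint, distance and aggregate are hypothetical stand-ins for $\\mathsf{Locate}$, $\\mathsf{BP}$, the blueprint distance $\\delta$ and $\\mathsf{Agg}$, and None plays the role of $\\bot$.\n\\begin{verbatim}\ndef extract(doc, queries, locate, blueprint, distance, aggregate, t):\n    # queries is a list of tuples (m, p_rx, b, p_vx): landmark string,\n    # region-extraction program, expected blueprint, value-extraction program\n    for m, p_rx, b, p_vx in queries:\n        loc = locate(doc, m)             # locate the landmark in the document\n        region = p_rx(doc, loc)          # grow a region around the landmark\n        if region is not None and distance(blueprint(region), b) <= t:\n            return aggregate(p_vx(region))   # extract values from the region\n    return None                          # no query matched\n\\end{verbatim}\n\\end{comment}\n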
\\begin{comment}\n\\begin{example}\n  One possible approach to extracting the departure times proceeds in\n  $3$ steps:\n  %\n  \\begin{inparaenum}[(a)]\n  \\item First, extract the itinerary table from the email;\n  \\item Second, from the itinerary table, extract the row corresponding to\n    each flight; and\n  \\item Finally, extract the cell corresponding to the departure time from\n    the row.\n  \\end{inparaenum}\n  %\n  There are several advantages to doing this extraction in steps.\n  %\n  In Figure~\\ref{fig:roi}, we illustrate one hierarchy of regions of interest\n  that would be produced during this extraction.\n\n  During a one-shot extraction, an extractor needs to consider multiple\n  issues at once:\n  %\n  \\emph{\n  Which of the tables in the email are itinerary tables?\n  %\n  Which of the rows in the itinerary tables correspond to flights and which\n  correspond to advertisements?\n  %\n  In the rows, which cell contains the departure time?\n  }\n  %\n  In practice, a program synthesizer or ML model handling all these issues at\n  once is more likely to produce incorrect output, both due to the inherent\n  complexity of the problem and due to the limitations of the underlying technique.\n  %\n  On the other hand, by separating concerns, we can narrow down these regions progressively, and the synthesizer has to take only the final region into account when synthesizing an extraction program. This results in simpler programs as well.\n  %\n  \\qed\n\\end{example}\n\nFurther, by proceeding one step at a time, it is possible to use different\nextractors in different steps. In this work, we use simple traversal techniques to narrow down the regions, but one can also imagine employing ML models for this purpose.\n\\end{comment}\n\\begin{comment}\nIt is frequently the case that different extractors are suited to different\nsteps.\nFor example, ML models are often more suited to the task of narrowing down\nfrom a whole document to a smaller region that contains all the output\nregions, while programs produced by a program synthesizer are more adept at\nextracting the intended output from the smaller region.\n\\end{comment}\n\n\\begin{comment}\nA \\emph{sequence extractor} $\\mathsf{SE}$ is given by a sequence of\nextractors $\\mathsf{E}_0 \\cdot \\ldots \\cdot \\mathsf{E}_k$.\nA sequence of extractors behaves as a single extractor as follows:\n\\begin{itemize}\n  \\item For a sequence extractor $\\mathsf{SE} = \\mathsf{E}_0$ consisting of\n    a single extractor, we define $\\mathsf{SE}(\\ensuremath{\\mathsf{doc}}) = \\mathsf{E}_0(\\ensuremath{\\mathsf{doc}})$;\n    and\n  \\item For a sequence extractor $\\mathsf{SE} = \\mathsf{E}_0 \\cdot \\mathsf{SE}'$,\n    we define $\\mathsf{SE}(\\ensuremath{\\mathsf{doc}}) =\n    \\mathsf{SE}'(\\ensuremath{\\mathsf{R}}_0)\\ldots\\mathsf{SE}'(\\ensuremath{\\mathsf{R}}_n)$, where\n    $\\mathsf{E}_0(\\ensuremath{\\mathsf{doc}}) = \\ensuremath{\\mathsf{R}}_0\\ldots\\ensuremath{\\mathsf{R}}_n$.\n\\end{itemize}\nNote that we are generalizing the definition of an extractor to act both on\ndocuments and on regions.\nHenceforth, we use the term extractor to also mean sequence extractor,\nassuming that the intended meaning is clear from the context.\n\nDuring a sequence extraction, each extraction phase produces intermediate\noutput regions.\nWe dub the regions at each intermediate step the \\emph{regions of interest}\n(ROIs) at that step.\n\\end{comment}\n
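\\begin{comment}\nAs a reading aid, the following Python sketch mirrors the recursive definition of sequence extractors given above. It is only an illustration under assumed types (Region is a placeholder for a document region and Extractor for a single-step extractor); it is not part of the implementation.\n\\begin{verbatim}\nfrom typing import Callable, List\n\nRegion = str                                  # stand-in for a document region\nExtractor = Callable[[Region], List[Region]]  # one extraction step\n\ndef sequence_extract(extractors: List[Extractor], doc: Region) -> List[Region]:\n    # SE = E_0: a single extractor applied to the document itself\n    if len(extractors) == 1:\n        return extractors[0](doc)\n    # SE = E_0 . SE_rest: run E_0, then run the remaining suffix on each\n    # region of interest that E_0 produces, concatenating the results\n    head, tail = extractors[0], extractors[1:]\n    results: List[Region] = []\n    for roi in head(doc):\n        results.extend(sequence_extract(tail, roi))\n    return results\n\\end{verbatim}\n\\end{comment}\n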
\n\\begin{comment}\n\\section{The \\ensuremath{\\mathsf{LRSyn}}\\xspace Algorithm}\nAlgorithm~\\ref{algo:hierarchical-extraction} depicts the high-level procedure\nof the \\ensuremath{\\mathsf{LRSyn}}\\xspace algorithm.\nThe algorithm takes as input a set of examples $\\mathsf{Ex}^*$ and a set of\nlearners $\\mathsf{Learners}$.\nFirst, it chooses non-deterministically whether to do a one-shot extraction\nor a hierarchical extraction.\nIf a one-shot extraction is chosen, the procedure \\textsc{Extract} picks a\nlearner from $\\mathsf{Learners}$ and returns the result of that learner on\nthe given examples.\n\nOn the other hand, if a hierarchical extraction is chosen, for each example\n$\\ensuremath{\\mathsf{doc}} \\mapsto \\ensuremath{\\mathsf{R}}_0\\ldots\\ensuremath{\\mathsf{R}}_n$, we first identify the regions of\ninterest $\\ensuremath{\\mathsf{R}}_0^*\\ldots\\ensuremath{\\mathsf{R}}_m^*$ for the current step.\nThe first extraction task is to hierarchically extract the regions of\ninterest from each document.\nWe do this using a recursive call to \\textsc{HierarchicalExtract} on examples\nof the form $\\ensuremath{\\mathsf{doc}} \\mapsto \\ensuremath{\\mathsf{R}}_0^*\\ldots\\ensuremath{\\mathsf{R}}_m^*$ for each document.\nNow, the second extraction task is to extract the intended outputs from the\nregions of interest.\nFor each region of interest $\\ensuremath{\\mathsf{R}}^*_i$, we associate the corresponding outputs\n$\\ensuremath{\\mathsf{R}}_{i_1}\\ldots\\ensuremath{\\mathsf{R}}_{i_k}$, where $\\ensuremath{\\mathsf{R}}_{i_*}$ consists of all the\noutput regions that are contained in the region of interest $\\ensuremath{\\mathsf{R}}^*_i$.\nNow, the examples for the second recursive call are given by the union of such\nexamples over all documents.\nThe final returned extractor is the sequential composition of the results of\nthe individual recursive calls.\n\n\\begin{algorithm}\n\\small\n\\caption{Hierarchical Data Extraction}\n\\label{algo:hierarchical-extraction}\n\\arsays{No one will understand the indexing notation}\n\\begin{algorithmic}[1]\n\\Require Extraction examples $\\mathsf{Ex}^*$\n\\Require Learners $\\mathsf{Learners} = \\mathcal{L}_0, \\ldots, \\mathcal{L}_n$\n\\State \\Return \\Call{HierarchicalExtract}{$\\mathsf{Ex}^*, \\mathsf{Learners}$}\n\\State\n\\Function{HierarchicalExtract}{$\\mathsf{Ex}, \\mathsf{Learners}$}\n\\State Let $\\mathsf{Ex}$ be $\\{ \\ensuremath{\\mathsf{doc}} \\mapsto \\ensuremath{\\mathsf{R}}_0\\ldots\\ensuremath{\\mathsf{R}}_n \\}_i$\n\\If{*} \\Comment{One-shot vs. Hierarchical}\n\\State \\Return \\Call{Extract}{$\\mathsf{Ex}, \\mathsf{Learners}$}\n\\Else\n\\State $\\{ \\ensuremath{\\mathsf{doc}} \\mapsto \\ensuremath{\\mathsf{R}}_0^*\\ldots\\ensuremath{\\mathsf{R}}_m^* \\}_i \\gets \\Call{IdentifyROI}{\\mathsf{Ex}}$\n\\State $\\mathsf{Ex}' \\gets \\{ \\ensuremath{\\mathsf{doc}} \\mapsto \\ensuremath{\\mathsf{R}}_0^*\\ldots\\ensuremath{\\mathsf{R}}_m^*\\}_i$\n\\State $\\mathsf{Ex}'' \\gets \\Call{Flatten}{\\{\n    \\{\n      \\ensuremath{\\mathsf{R}}^* \\mapsto \\ensuremath{\\mathsf{R}}_{p_0}\\ldots\\ensuremath{\\mathsf{R}}_{p_k} \\mid \\ensuremath{\\mathsf{R}}_{p_k} \\sqsubseteq \\ensuremath{\\mathsf{R}}^*\n    \\}_j\n\\}_i}$ \n\\State $\\mathsf{SE}_0 \\gets \\Call{HierarchicalExtract}{\\mathsf{Ex}', \\mathsf{Learners}}$\n\\State $\\mathsf{SE}_1 \\gets \\Call{HierarchicalExtract}{\\mathsf{Ex}'', \\mathsf{Learners}}$\n\\EndIf \n\\State \\Return $\\mathsf{SE}_0 \\cdot \\mathsf{SE}_1$\n\\EndFunction\n\\State\n\\Function{Extract}{$\\mathsf{Ex}, \\mathsf{Learners}$}\n\\State $\\mathcal{L} \\gets \\text{Pick a learner from $\\mathsf{Learners}$}$\n\\State \\Return $\\mathcal{L}(\\mathsf{Ex})$\n\\EndFunction\n\\end{algorithmic}\n\\end{algorithm}\n\n\\paragraph{Identifying Regions of Interest}\n\n\\begin{itemize}\n  \\item Landmarks using delimiters\n  \\item Unsupervised clustering + limited DSL for predicates\n  \\item Common segments across multiple inputs and input formats\n\\end{itemize}\n\n\\paragraph{Learner Agnostic Robustness}\nTo ensure the robustness of the full sequence extractor produced by\nAlgorithm~\\ref{algo:hierarchical-extraction}, we ensure the robustness of the\nextractor produced at each step.\nRecall that we defined robustness using the notion of admissible inputs\n$\\ensuremath{\\mathbb{D}_A} \\subseteq \\ensuremath{\\mathsf{docs}}$, the set of input documents similar enough to the\nexample inputs, on which the sequence extractor is expected to operate\ncorrectly.\nHowever, $\\ensuremath{\\mathbb{D}_A}$ is not provided as an input to the procedure.\nInstead, we attempt to make the extractor robust using \\emph{perturbations}.\n\nGiven an example $\\mathsf{ex} = \\ensuremath{\\mathsf{doc}} \\mapsto \\ensuremath{\\mathsf{R}}_0\\ldots\\ensuremath{\\mathsf{R}}_n$, a\n\\emph{perturbation strategy} produces additional examples of the form\n$\\ensuremath{\\mathsf{doc}}' \\mapsto \\ensuremath{\\mathsf{R}}_0'\\ldots\\ensuremath{\\mathsf{R}}_n'$ derived from $\\mathsf{ex}$.\nInformally, given an example, a perturbation strategy generates multiple new\nexamples similar to the original.\nOur perturbation strategies make minor changes to the input $\\ensuremath{\\mathsf{doc}}$ which are\nlikely to change the expected output in a predictable way; one such strategy is\nsketched after the example below.\n\n\\begin{example}\n  Changing formats, changing entities, adding advertisements in emails, etc.\n\\end{example}\n
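\nThe following Python sketch illustrates one such perturbation strategy at the string level: replace one entity mention by another of the same kind and shift the expected output spans so that the perturbed example stays consistent. The span-based representation of output regions and the helper name perturb_entity are assumptions made only for this illustration.\n\\begin{verbatim}\nfrom typing import List, Tuple\n\nSpan = Tuple[int, int]   # (start, end) offsets of an output region\n\ndef perturb_entity(doc: str, outputs: List[Span], old: str, new: str):\n    pos = doc.find(old)\n    if pos < 0:\n        return doc, outputs              # entity not present, nothing to do\n    delta = len(new) - len(old)\n    new_doc = doc[:pos] + new + doc[pos + len(old):]\n    new_outputs = []\n    for start, end in outputs:\n        if start >= pos + len(old):      # region after the edit: shift it\n            new_outputs.append((start + delta, end + delta))\n        elif end <= pos:                 # region before the edit: unchanged\n            new_outputs.append((start, end))\n        else:                            # region overlaps the edited entity\n            new_outputs.append((pos, pos + len(new)))\n    return new_doc, new_outputs\n\\end{verbatim}\n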
\n\\section{Introduction}\nPrograms synthesized from the current benchmark examples are not robust enough: they do not give correct outputs on similar unseen inputs. The following set of techniques was therefore used to achieve robustness.\n\n\\section{Techniques}\nEach of these ideas covers one or more of the following aspects of robustness:\n\\begin{itemize}\n    \\item A robust program is based on a global property of the input. For example, for TText this can be some character or string such as a delimiter. \n    \\item The program tokens should be generic enough to cover unseen inputs. For example, the TText regex token "Words\/dots\/hyphens" is more generic than "Alphanumeric".\n\\end{itemize}\n \n\\subsection{Perturbation}\nThis idea focuses on the second point of robustness: adding varied input examples should guide the DSL towards learning a robust program. Introducing variety into the constraints also helps in synthesizing a program based on a global property, since the different input representations fall outside the scope of overly specific DSL tokens. A sketch of the resulting perturb-and-synthesize loop is given after the Perturbation and Ranking subsection below.\\\\ \\\\\n\\begin{algorithm}[H]\n\\caption{Generate robust program: Perturbation}\n\\begin{algorithmic}[1]\n    \\Require Original benchmark constraints $C_1$, DSL ranker $R$\n    \\State Generate new constraints $C_2$ using entity and character replacement in $C_1$: $C_2 = PERTURB(C_1)$\n    \\State Synthesize program $P = SYNTHESIS(C_2,R)$\n    \n    \\Comment{the degree of robustness is checked manually}\n    \\If{$P$ is robust enough} \n    \\State {End} \n    \\Else { $C_2 = PERTURB(C_2)$ to break the overfitted program tokens in $P$; go back to the synthesis step}\n    \\EndIf\n\\end{algorithmic}\n\\end{algorithm}\n\n\\subsection{Improved Ranking}\nThe DSL ranking is tweaked to give preference to certain characters, so that the relevant programs get a boost. In TText, these turn out to be the common delimiters, as pointed out under robustness earlier. For example, prefer Comma over Words\/dots\/hyphens $\\circ$ Comma, i.e., boost programs having only Comma over programs of the form xyz $\\circ$ Comma.\\\\ \\\\\n\\begin{algorithm}[H]\n\\caption{Generate robust program: Tweaked Ranking}\n    Given: original benchmark constraints $C_1$, DSL ranker $R$ \\\\\n    Create a new ranker $R_{mod}^1$ in which the delimiter is taken from user input.\\\\\n    Synthesize a robust program $P = SYNTHESIS(C_1,R_{mod}^1)$\n\\end{algorithm}\n\n\n\\subsection{Perturbation and Ranking}\nCombining the DSL ranking tweak with the perturbed examples overcomes the shortcomings of both techniques and reduces the amount of user input required.\\\\ \\\\\n\\begin{algorithm}[H]\n\\caption{Generate robust program: Perturbation and Ranking}\n    Given: original benchmark constraints $C_1$, DSL ranker $R$ \\\\\n    Generate new constraints $C_2$ using entity and character replacement in $C_1$: $C_2 = PERTURB(C_1)$\\\\\n    $C= C_1\\cup C_2$\\\\\n    Create a new ranker $R_{mod}^2$ by fixing the delimiters such that each program in the benchmarks sees the same set of delimiters\\\\\n    Synthesize a robust program $P = SYNTHESIS(C,R_{mod}^2)$ \n\\end{algorithm}\n
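\nAs a companion to the Perturbation algorithm above, the following Python sketch shows one hypothetical way to drive its synthesize-check-perturb loop; synthesize, perturb and is_robust are stand-ins for $SYNTHESIS$, $PERTURB$ and the (currently manual) robustness check.\n\\begin{verbatim}\ndef synthesize_robust(constraints, synthesize, perturb, is_robust, max_rounds=5):\n    # constraints is a list of input-output examples; perturb returns new ones\n    program = synthesize(constraints)\n    for _ in range(max_rounds):\n        if is_robust(program):\n            return program\n        constraints = constraints + perturb(constraints)  # widen the examples\n        program = synthesize(constraints)\n    return program     # last candidate if no robust program was found\n\\end{verbatim}\n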
\n\\subsection{Notes}\nThe revised algorithm for noisy landmark identification with respect to landmark inference is summarized below.\n\n\\paragraph{Previously}\n\\begin{itemize}\n    \\item In the landmark hierarchy:\n    \\begin{itemize}\n        \\item $NoisyScore = 1\/(\\text{number of noisy landmarks} + 1)$.\n        \\item Noisy landmarks are defined as the landmark regions that contain the landmark string but lie outside of any ROI computed using the output labels.\n        \\item The NoisyScore is used as the stopping condition for deciding whether to do more rounds of landmark inference.\n    \\end{itemize}\n    \\item The ROI given as input to synthesis is always with respect to the base landmark; the outer and middle landmarks in the hierarchy are used as anchors to reach the correct base landmark using the nearest-landmark logic.\n\\end{itemize}\n\n\\paragraph{Currently}\n\\begin{itemize}\n    \\item In the landmark hierarchy:\n    \\begin{itemize}\n        \\item The definition of noisy landmarks is changed slightly, but only for the base landmark. A base landmark string $S$ is considered noisy if, in addition to the condition above, the following holds: $S$ also occurs outside of the ROIs $(R_1, R_2, \\ldots, R_n)$, and when the noisy base $L_B'$ is grown to a region $R'$ (using the steps computed from $L_B$ and $R_1$), the structure of $R'$ is the same as one of the structures in $(R_1, R_2, \\ldots, R_n)$. Since we have ROI-structure filtering at inference time, regions grown from incorrect base landmarks will be filtered out.\n        \\item If the structure of $R'$ is the same, as in the Getthere Name field, one more round of hierarchy inference is done. If $R'$ is different, as in Qatar, then we stop at the base landmark only; we therefore have only Departure as the landmark now, and no mobile number is learnt as part of the hierarchy.\n        \\item With this change we no longer need to compute all base landmarks at inference time; we revert to the original logic of picking the nearest landmark in the hierarchy. (We used multiple base landmarks for Qatar because there was one mobile number but two Departure occurrences; now we have only Departure as the landmark.)\n        \\item The rest of the landmarks in the hierarchy obey the previous definition of noisy landmarks, since they are not concerned with the ROI.\n    \\end{itemize}\n\\end{itemize}\n\n\\paragraph{Important note}\nWith this reversion to the nearest-landmark logic at inference, the code assumes that the hierarchy only follows the pattern $(L_1 \\rightarrow L_2 \\rightarrow L_3 \\rightarrow \\ldots)^*$, i.e., each ROI is associated with a unique $L_1, L_2, \\ldots$\n\n\\end{comment}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}