diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzmbpa" "b/data_all_eng_slimpj/shuffled/split2/finalzzmbpa" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzmbpa" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\n\nLarge-scale electronic structure calculation, with a thousand atoms or more, \nis of great importance both in science and technology.\nRecently, \ncalculations with ten million atoms \nwere realized on the K computer \n\\cite{HOSHI-mArnoldi, HOSHI-3013-KEI-BENCH}\nby our code ELSES (http:\/\/www.elses.jp\/).\nThe computational cost is order-$N$, i.e., proportional to the number of atoms $N$, \nas shown in Fig. 3(a) of Ref.~\\cite{HOSHI-mArnoldi}.\nThe present paper reports more recent calculations \nfor larger systems with one hundred million atoms;\nsuch calculations are called \n\\lq 100-nm-scale calculations',\nbecause one hundred million atoms correspond to \na silicon single crystal with a volume of $V=$(126\\,nm)$^3$. \nThe present paper also presents \na calculation method for specific eigen states\nwith a polymer example.\nThe calculations were carried out with a modeled \n(tight-binding) electronic structure theory based on {\\it ab initio} calculations. \nThe detailed theory is described in Ref.~\\cite{HOSHI-mArnoldi}.\n\n\n\\section{Method and parallel efficiency}\n\nThe linear algebraic method used is\ncalled the multiple Arnoldi (mArnoldi) method. \\cite{HOSHI-mArnoldi}\nThe mathematical foundation is \nthe \\lq generalized shifted linear equation', \nor the set of linear equations \n\\begin{eqnarray}\n ( z S -H ) \\bm{x} = \\bm{b},\n \\label{EQ-SHIFT-EQ}\n\\end{eqnarray}\nwhere $z$ is a (complex) energy value and \n$H$ and $S$ denote\nthe Hamiltonian and overlap matrices \nin the linear-combination-of-atomic-orbital (LCAO) representation, respectively.\nThe matrices are sparse real-symmetric $M \\times M$ matrices, \nand $S$ is positive definite. 
\nThe vector $\\bm{b}$ is the input and the vector $\\bm{x}$ is the solution vector. \nEquation (\\ref{EQ-SHIFT-EQ}) is solved \ninstead of the original generalized eigen-value equation \n\\begin{eqnarray}\nH \\bm{\\phi}_k = \\varepsilon_k S \\bm{\\phi}_k.\n \\label{EQ-GEV-EQ}\n\\end{eqnarray}\nThe method is purely mathematical and \nmay be applicable to other problems. \n\nThe mArnoldi method \\cite{HOSHI-mArnoldi} \nreduces the problem to a set of many small $(\\nu \\times \\nu)$ standard eigen-value equations \ndefined within the Krylov sub(Hilbert)spaces $\\{ {\\cal L}_\\nu^{(j)} \\}$,\nwhere $j$ is the basis index $(j=1,2,\\dots,M)$\nand $\\nu$ is the subspace dimension, typically $\\nu = 30$--$300$. \nThe $j$-th subspace (${\\cal L}_\\nu^{(j)}$) contains \nthe $j$-th unit vector $\\bm{e}_j \\equiv (0,\\dots,0,1_j,0,\\dots,0)^{\\rm t}$\n($\\bm{e}_j \\in {\\cal L}_\\nu^{(j)}$).\nFor each subspace eigen-value equation, \neigen levels $\\{ \\varepsilon^{(j)}_\\alpha \\}$ and eigen vectors $\\{ \\bm{v}^{(j)}_\\alpha \\}$ \n($\\alpha = 1,2,\\dots, \\nu$) are obtained and are called \nsubspace eigen levels and subspace eigen vectors, respectively. \nThe Green's function is determined by the subspace eigen levels and vectors \nand gives the total energy and force. \n\nFigure \\ref{fig-BENCH}(a) shows \nthe parallel efficiency on the K computer\nwith one hundred million atoms ($N=$103,219,200).\nThe elapsed time $T$ is measured as a function of \nthe number of processor cores used, $P$ $(T=T(P))$,\nwhere \n$P = P_{\\rm min} \\equiv 32,768$, $P_{a} \\equiv$98,304, $P_{b} \\equiv$294,912, $P_{\\rm all} \\equiv 663,552$ (all cores). \nThe resultant benchmark is called \n\\lq strong scaling' in the high-performance computing community.\nThe calculated material is \nan sp$^2$-sp$^3$ nano-composite carbon solid. 
\\cite{HOSHI-3013-KEI-BENCH}\nThe time of the total energy calculation was measured for a given atomic structure.\nThe measured parallel efficiency, $\\alpha(P) \\equiv P_{\\rm min} T(P_{\\rm min})\/(P\\,T(P))$, was\n$\\alpha(P_{a})=0.98$, \n$\\alpha(P_{b})=0.90$ and \n$\\alpha(P_{\\rm all})=0.73$. \n\n\n\n\\begin{figure}[htbp] \n\\begin{center}\n \\includegraphics[width=14cm]{fig-APPC-bench-MD-lt.eps}\n\n\\end{center}\n\\caption{\\label{fig-BENCH} \n(a) Parallel efficiency on the K computer with $N=$103,219,200 atoms.\nThe calculations were carried out with\nthe total number of processor cores \n$P$ = 32,768, 98,304, 294,912, 663,552 (all cores). \nThe calculated material is \nan sp$^2$-sp$^3$ nano-composite carbon solid. \n(b) Structure of poly-(9,9 dioctyl-fluorene). \nHere $R \\equiv$ C$_8$H$_{17}$. \n(c) A molecular dynamics simulation of a poly-(9,9 dioctyl-fluorene) system.\n}\n\\end{figure}\n\n\n\\section{Method for calculating eigen states}\n\n\nThis section is devoted to a calculation method for individual eigen states in the mArnoldi method, \nsince such a method is missing in our previous papers. \\cite{HOSHI-mArnoldi,HOSHI-3013-KEI-BENCH}\nA conjugated polymer system depicted in Figs.~\\ref{fig-BENCH}(b) and (c),\npoly-(9,9 dioctyl-fluorene), was chosen as a test system.\nPoly-fluorene and its family are well known \nas promising candidates for industrial lighting applications. \nA previous theoretical paper \\cite{ZEMPO-JPCM-2008} investigated \nrelated small molecules \nand motivated our calculations of polymer materials with non-ideal (amorphous-like) structures.\nThe present calculations are intended as a methodological test, in particular on $\\pi$ states.\nAs reference data, \nthe monomer and dimer were calculated \nboth by the present method and \nby the {\\it ab initio} code Gaussian$^{\\rm (TM)}$,\nand the results of the two methods agree reasonably. 
\\cite{HOSHI-mArnoldi} \nThe calculated highest-occupied (HO) and lowest-unoccupied (LU) states of the dimer\narise only from the $\\pi$ states of the benzene rings, as in benzene, \nand are illustrated in Fig.~\\ref{fig-PF2076atoms-HOMO}(a) and (b), respectively. \nA molecular dynamics simulation with 2076 atoms was carried out \nas shown in Fig.~\\ref{fig-BENCH}(c).\nAn artificially packed structure was thermally relaxed \nover a period of 80 ps, \nso as to obtain non-ideal polymer structures. \\cite{HOSHI-mArnoldi}\nThe calculated DOS is shown in\nFig.~2 of Ref.~\\cite{HOSHI-mArnoldi}. \nHereafter, the same poly-fluorene system is discussed\nunder the same calculation conditions as for the DOS. \n\nThe method for calculating the $k$-th eigen level ($\\varepsilon_k$) and vector ($\\bm{\\phi}_k$) is explained as follows: \nThe Green's function gives \nthe integrated density of states $n=n(\\eta)$ and its inverse function $\\eta = \\eta(n)$.\nThe $k$-th eigen level is determined as\n$\\varepsilon_k := \\eta(k-1\/2)$\n(see Footnote 9 of Ref.~\\cite{HOSHI-mArnoldi}).\nThen, the $k$-th eigen vector ($\\bm{\\phi}_k$) is calculated from\nthe subspace eigen vectors whose energy levels lie \nin the range $\\eta(k-1) < \\varepsilon < \\eta(k)$, so\nthat the $j$-th component $(\\bm{e}_j^{\\rm t} \\bm{\\phi}_k)$ is determined as\n\\begin{eqnarray}\n\\bm{e}_j^{\\rm t} \\bm{\\phi}_k := \n\\sum_{\\alpha} \\bm{e}_j^{\\rm t} \\bm{v}_{\\alpha}^{(j)} \\int_{\\eta(k-1)}^{\\eta(k)} \n\\delta (\\varepsilon - \\varepsilon_{\\alpha}^{(j)})\\, d \\varepsilon.\n\\label{EQ-EIGEN-KRY}\n\\end{eqnarray}\nHere the delta function in the above equation is a \\lq smoothed' one. \nAdditional techniques for calculating eigen vectors are given \nin the Appendix.\n\nFigure \\ref{fig-PF2076atoms-HOMO}(c) shows a part of the polymer system\nat which the calculated HO and LU wavefunctions are localized, \nas shown in Figs.~\\ref{fig-PF2076atoms-HOMO}(d) and (e), respectively. 
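For illustration, the eigen-level selection $\varepsilon_k := \eta(k-1/2)$ and the component formula of Eq.~(\ref{EQ-EIGEN-KRY}) can be sketched in a few lines of Python. This is a minimal sketch that uses a sharp rather than \lq smoothed' delta function, so the energy integral reduces to a window test; all names below are ours, not those of the ELSES code.

```python
def eigvec_component(j, k, eta, eps_sub, v_sub):
    """Sketch of Eq. (EQ-EIGEN-KRY): j-th component of the k-th eigenvector.

    eta     -- inverse integrated density of states, eta(n)
    eps_sub -- eps_sub[j][alpha]: subspace eigen levels of subspace j
    v_sub   -- v_sub[j][alpha]:  components e_j^t v_alpha^(j)

    With a sharp delta, the integral over (eta(k-1), eta(k)) simply
    selects the subspace eigen pairs whose levels fall in that window.
    """
    lo, hi = eta(k - 1), eta(k)
    return sum(v for e, v in zip(eps_sub[j], v_sub[j]) if lo < e <= hi)


def eigen_level(k, eta):
    # k-th eigen level from the inverse integrated density of states
    return eta(k - 0.5)
```

For example, with the identity inverse DOS $\eta(n) = n$, subspace levels 0.5, 1.5, 2.5 and components 0.2, 0.3, 0.4 in subspace $j = 0$, the $k = 1$ window $(0, 1]$ selects only the level 0.5, giving the component 0.2.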
\nThey are $\\pi$ states, and two features are found: \n(I) The characteristic node structures are observed and\nare the same as in the dimer case (Fig.~\\ref{fig-PF2076atoms-HOMO}(a) and (b)). \nIn short, \nthe HO and LU wavefunctions of the polymer are connected HO and LU wavefunctions of the dimer, respectively. \n(II) The wavefunctions are localized on a chain of four monomers and terminate \nat one inter-monomer boundary, indicated by an arrow.\nAt that boundary, the structure is largely tilted (or twisted) and the $\\pi$ states are disconnected. \nThese features are confirmed in the wavefunctions given by the exact diagonalization method.\n\n\\begin{figure*}[htbp] \n\\begin{center}\n \\includegraphics[width=15cm]{fig-APPC-aPF-only-polymer-lt.eps}\n\\end{center}\n\\caption{\\label{fig-PF2076atoms-HOMO} \n(a)(b) Schematic figures of\nthe HO and LU ($\\pi$) states of the fluorene dimer, respectively. \n(c)-(e) A part of the poly-fluorene system is shown in (c) \nand the calculated HO and LU wavefunctions are shown in (d) and (e), respectively.\nOnly carbon atoms and carbon-carbon bonds are drawn.\n}\n\\end{figure*}\n\n\n\\section{Summary and future outlook}\n\nThe present paper shows methods and results of \none-hundred-million-atom electronic structure calculations\nbased on our novel linear algebraic algorithm. \nAs a future outlook, \nmethods for the automatic determination of \nmodel (tight-binding) parameters \nare being developed, \\cite{NISHINO-2013} so as to enhance studies of various materials.\nMatrix data $(H,S)$ for materials \n\\cite{ELSES-MATRIX-LIBRARY} and\na small application for matrix solvers \\cite{EIGEN-TEST} are \nprepared\nfor further interdisciplinary study between physics and applied mathematics.\n\n\\vspace{3mm}\n\n{\\bf Acknowledgments} \\hspace{3mm}\nThis research is partially supported by Grant-in-Aid \nfor Scientific Research\n(Nos. 23540370 and 25104718)\nfrom the MEXT of Japan. 
\nThe K computer was used \nunder the research proposals hp120170 and hp120280.\nSupercomputers were also used \nat the Institute for Solid State Physics, University of Tokyo, \nat the Research Center for \nComputational Science, Okazaki,\nand at the Information Technology Center, University of Tokyo.\nThe authors thank Y. Zempo (Hosei University) and M. Ishida (Sumitomo Chemical Co., Ltd) \nfor providing the structure model of the poly-fluorene system.\nThe wavefunctions in Figs.~\\ref{fig-PF2076atoms-HOMO}(d)(e) are drawn \nwith VisBAR\\_wave\\_batch, an original Python-based visualization tool and \na part of the VisBAR package. \\cite{HOSHI-3013-KEI-BENCH}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{intro}\n\n\\subsection{Ontology-based data access}\n\nOntology-based data access (OBDA) via query rewriting was proposed by Poggi et al.\\ \\cite{PLCD*08} with the aim of facilitating query answering over complex, possibly incomplete and heterogeneous data sources. In an OBDA system (see Fig.~\\ref{fig:obda}), the user does not have to be aware of the structure of data sources, which can be relational databases, spreadsheets, RDF triplestores, etc. Instead, the system provides the user with an ontology that serves as a high-level conceptual view of the data, gives a convenient vocabulary for user queries, and enriches incomplete data with background knowledge. A snippet, $\\mathcal{T}$, of such an ontology is shown below in the syntax of first-order (FO) logic:\n\\begin{align*}\n& \\forall x \\, \\big(\\textit{ProjectManager}(x) \\to \\exists y\\, (\\textit{isAssistedBy}(x,y) \\land \\textit{PA}(y))\\big),\\\\\n& \\forall x\\, \\big(\\exists y\\, \\textit{managesProject}(x,y) \\to \\textit{ProjectManager}(x)\\big),\\\\\n& \\forall x \\, \\big(\\textit{ProjectManager}(x) \\to \\textit{Staff}(x)\\big),\\\\ \n& \\forall x \\, \\big(\\textit{PA}(x) \\to \\textit{Secretary}(x)\\big). 
\n\\end{align*}\nUser queries are formulated in the signature of the ontology. For example, the conjunctive query (CQ)\n\\begin{align*}\n{\\boldsymbol q}(x) \\ = \\ \\exists y \\, (\\textit{Staff}(x) \\land \\textit{isAssistedBy}(x,y) \\land \\textit{Secretary}(y))\n\\end{align*}\n\\begin{figure}[t]%\n\\centering%\n\\begin{tikzpicture}[xscale=1.25]\n\\draw[thick,fill=gray!30] (-4,-1.7) rectangle +(11,4.4);\n\\begin{scope}[->,black!70,line width=2mm]\n\\draw[out=0,in=120] (0,4.21) to (4,3.4);\n\\draw[out=0,in=180] (0.3,2.25) to (1.4,2.25);\n\\draw[out=180,in=-90,looseness=1.5] (-0.5,-1.1) to (-2.3,0.7);\n\\draw[out=180,in=0] (3.5,-0.45) to (0.3,0.8);\n\\end{scope}\n\\draw[thick,fill=gray!30] (-4,3.1) rectangle +(4,1.2);\n\\node at (-2,3.7) {\\itshape\\tiny\\begin{tabular}{l}SELECT ?s \\{\\\\[-1pt]\\hspace*{1em}?s a :Staff . \\\\[-1pt]\\hspace*{1em}?s a $[$ a owl:restriction\\textup{;} \\\\[-1pt]\\hspace*{3.5em} owl:onProperty :assistedBy\\textup{;}\\\\[-1pt]\\hspace*{3.5em} owl:someValuesFrom :Secretary$]$ . 
\\} \\end{tabular}};\n\\node at (-0.75,3.95) {\\bf query};\n\\draw[fill=gray!5] (-3.7,0.7) rectangle +(4,1.7);\n\\node at (-1.7,1.6) {\\tiny\\ttfamily\\begin{tabular}{l}[] rdf:type rr:TriplesMap ;\\\\[-1pt]\n\\hspace*{1em}rr:logicalTable \"SELECT * FROM PROJECT\";\\\\[-1pt]\n\\hspace*{1em}rr:subjectMap [ a rr:BlankNodeMap ;\\\\[-1pt]\\hspace*{8em} rr:column \"PRJ\\_ID\" ; ] ;\\\\[-1pt]\n\\hspace*{1em}rr:propertyObjectMap [ rr:property a:name;\\\\[-1pt]\\hspace*{8em} rr:column \"PRJ\\_NAME\" ] ;\\\\[-1pt]\n\\hspace*{1em}\\dots\\end{tabular}};\n\\node at (-0.8,0.95) {\\small\\bf mappings};\n\\draw[thick,fill=white] (1.4,1.1) rectangle +(5.3,2.3);\n\\node at (5.7,1.5) {\\bf ontology};\n\\begin{scope}\\tiny\\itshape\n\\node[draw=black,fill=gray!5,rectangle] (staff) at (2.4, 3.1) {Staff};\n\\node[draw=black,fill=gray!5,rectangle] (projman) at (2.4, 2.5) {ProjectManager};\n\\node[draw=black,fill=gray!5,rectangle] (proj) at (2.4, 1.4) {\\hspace*{1em}Project\\hspace*{1em}};\n\\node[draw=black,fill=gray!5,diamond,aspect=2.5,inner ysep=0pt] (man) at (2.4, 1.95) {manages};\n\\node[draw=black,fill=gray!5,rectangle] (pa) at (5.7, 2.5) {\\hspace*{2em}PA\\hspace*{2em}};\n\\node[draw=black,fill=gray!5,diamond,aspect=2.5,inner ysep=0pt] (assisted) at (4.15, 2.5) {isAssistedBy};\n\\node[draw=black,fill=gray!5,rectangle] (sec) at (5.7, 3.1) {\\hspace*{1em}Secretary\\hspace*{1em}};\n\\draw(man) -- (projman);\n\\draw(man) -- (proj);\n\\draw(assisted) -- (pa);\n\\draw(assisted) -- (projman);\n\\draw[] (pa) -- (sec) node[sloped,midway,rotate=-90] {$\\cup$};\n\\draw[] (projman) -- (staff) node[sloped,midway,rotate=-90] {$\\cup$};\n\\end{scope}\n\\begin{scope}[shift={(1,0)}]\n\\draw[fill=gray!5] (0,-1.3) ellipse (1.5 and 0.2);\n\\fill[gray!5] (-1.5,-1.3) rectangle +(3,1.3);\n\\draw(-1.5,-1.3) -- +(0,1.3);\n\\draw(1.5,-1.3) -- +(0,1.3);\n\\draw[fill=gray!5] (0,0) ellipse (1.5 and 0.2);\n\\node at (0,-0.8) {\\texttt{\\tiny\\begin{tabular}{l}CREATE TABLE PROJECT 
(\\\\[-1pt]\\hspace*{1em}PRJ\\_ID INT NOT NULL,\\\\[-1pt]\\hspace*{1em}PRJ\\_NAME VARCHAR(60) NOT NULL,\\\\[-1pt]\\hspace*{1em}PRJ\\_MANAGER\\_ID INT NOT NULL\\\\[-1pt]\\hspace*{1em}\\dots\\\\[-1pt])\\end{tabular}}};\n\\end{scope}\n\\begin{scope}[shift={(1,0.8)}]\n\\draw[fill=gray!5] (2.5,-1.6) rectangle +(3,1.6);\n\\draw (2.5,-0.2) -- +(3,0);\n\\draw (2.7,0) -- +(0,-1.6);\n\\foreach \\x in {3.4,4.1,4.8} {\\draw[ultra thin,gray] (\\x,0) -- +(0,-1.6); }\n\\foreach \\y in {-0.4,-0.6,-0.8,-1,-1.2,-1.4} {\\draw[ultra thin,gray] (2.5,\\y) -- +(3,0); }\n\\node at (3.05,-0.1) {\\tiny\\ttfamily\\bfseries A};\n\\node at (3.75,-0.1) {\\tiny\\ttfamily\\bfseries B};\n\\node at (4.45,-0.1) {\\tiny\\ttfamily\\bfseries C};\n\\node at (5.15,-0.1) {\\tiny\\ttfamily\\bfseries D};\n\\node at (2.6,-0.3) {\\tiny\\ttfamily\\bfseries 1};\n\\node at (2.6,-0.5) {\\tiny\\ttfamily\\bfseries 2};\n\\node at (2.6,-0.7) {\\tiny\\ttfamily\\bfseries 3};\n\\node at (2.6,-0.9) {\\tiny\\ttfamily\\bfseries 4};\n\\node at (2.6,-1.1) {\\tiny\\ttfamily\\bfseries 5};\n\\node at (2.6,-1.3) {\\tiny\\ttfamily\\bfseries 6};\n\\node at (2.6,-1.5) {\\tiny\\ttfamily\\bfseries 7};\n\\end{scope}\n\\node at (4.2,-1.2) {\\small\\bf data sources};\n\\end{tikzpicture}%\n\\caption{Ontology-based data access.}\\label{fig:obda}\n\\end{figure}%\nis supposed to find the staff assisted by secretaries. The ontology signature and data schemas are related by mappings designed by the ontology engineer and invisible to the user. The mappings allow the system to view the data sources as a single RDF graph (a finite set of unary and binary atoms), $\\mathcal{A}$, in the signature of the ontology. 
For example, the global-as-view (GAV) mappings \n\\begin{align*}\n& \\forall x,y,z\\,\\big( {\\small\\texttt{PROJECT}}(x,y,z) \\to \\textit{managesProject}(x,z) \\big),\\\\\n& \\forall x,y\\, \\big( {\\small\\texttt{STAFF}}(x,y) \\land (y = 2) \\to \\textit{ProjectManager}(x)\\big)\n\\end{align*}\npopulate the ontology predicates $\\textit{managesProject}$ and $\\textit{ProjectManager}$ with values from the database relations ${\\small\\texttt{PROJECT}}$ and ${\\small\\texttt{STAFF}}$. \nIn the query rewriting approach of Poggi et al.\\ \\cite{PLCD*08}, the OBDA system employs the ontology and mappings in order to transform the user query into a query over the data sources, and then delegates the actual query evaluation to the underlying database engines and triplestores. \n\nFor example, the first-order query \n\\begin{multline*}\n{\\boldsymbol q}'(x) \\ = \\ \\exists y \\, \\big[\\textit{Staff}(x) \\land \\textit{isAssistedBy}(x,y) \\land (\\textit{Secretary}(y) \\lor \\textit{PA}(y))\\big] \\lor{}\\\\\n \\textit{ProjectManager}(x) \\lor \\exists z\\, \\textit{managesProject}(x,z) \n\\end{multline*}\nis an \\emph{FO-rewriting} of the \\emph{ontology-mediated query} (OMQ) ${\\ensuremath{\\boldsymbol{Q}}} = (\\mathcal{T},{\\boldsymbol q})$ over any RDF graph $\\mathcal{A}$ in the sense that $a$ is an answer to ${\\boldsymbol q}'(x)$ over $\\mathcal{A}$ iff ${\\boldsymbol q}(a)$ is a logical consequence of $\\mathcal{T}$ and $\\mathcal{A}$. As the system is not supposed to materialise $\\mathcal{A}$, it uses the mappings to unfold the rewriting ${\\boldsymbol q}'$ into an SQL (or SPARQL) query over the data sources.\n\nOntology languages suitable for OBDA via query rewriting have been identified by the Description Logic, Semantic Web, and Database\/Datalog communities. 
The \\textsl{DL-Lite}{} family of description logics, first proposed by Calvanese et al.~\\cite{CDLLR07} and later extended by Artale et al.~\\cite{ACKZ09}, was specifically designed to ensure the existence of FO-rewritings for all conjunctive queries (CQs). Based on this family,\nthe W3C defined a profile \\textsl{OWL\\,2\\,QL}\\footnote{\\url{http:\/\/www.w3.org\/TR\/owl2-overview\/\\#Profiles}}\\ of the Web Ontology Language \\textsl{OWL\\,2}{}\n`so that data [\\ldots]\nstored in a standard relational database system can be queried through an ontology via a simple rewriting mechanism.\\!' \nVarious dialects of tuple-generating dependencies (tgds) that admit FO-rewritings of CQs and extend \\textsl{OWL\\,2\\,QL}{} have also been identified~\\cite{DBLP:journals\/ai\/BagetLMS11,DBLP:journals\/ai\/CaliGP12,DBLP:conf\/datalog\/CiviliR12}. \nWe note in passing that while most work on OBDA (including the present paper) assumes that the user query is given as a CQ, \nother query languages, allowing limited forms of recursion and\/or negation, have also been investigated \\cite{DBLP:conf\/icdt\/Rosati07,DBLP:journals\/ws\/Gutierrez-Basulto15,DBLP:journals\/jair\/BienvenuOS15,DBLP:conf\/aaai\/KostylevRV15}.\nSPARQL~1.1, the standard query language for RDF graphs, contains negation, aggregation and other features beyond first-order logic. The entailment regimes of SPARQL~1.1\\footnote{\\url{http:\/\/www.w3.org\/TR\/2013\/REC-sparql11-entailment-20130321}} also bring inferencing capabilities in the setting, which are, however, necessarily limited for efficient implementations. 
\n\n\n\nBy reducing OMQ answering to standard database query evaluation, which is generally regarded to be very efficient, OBDA via query rewriting has quickly become a hot topic in both theory and practice.\nA number of rewriting techniques have been proposed and implemented for \\textsl{OWL\\,2\\,QL}{} (PerfectRef~\\cite{PLCD*08}, Presto\/Prexto~\\cite{DBLP:conf\/kr\/RosatiA10,DBLP:conf\/esws\/Rosati12}, tree witness rewriting~\\cite{DBLP:conf\/kr\/KikotKZ12}), sets of tuple-generating dependencies (Nyaya~\\cite{DBLP:conf\/icde\/GottlobOP11}, PURE~\\cite{DBLP:journals\/semweb\/KonigLMT15}), and more expressive ontology languages that require recursive datalog rewritings (Requiem~\\cite{DBLP:conf\/dlog\/Perez-UrbinaMH09}, Rapid~\\cite{DBLP:conf\/cade\/ChortarasTS11}, Clipper~\\cite{DBLP:conf\/aaai\/EiterOSTX12} and Kyrie~\\cite{kyrie2}). \nA few mature OBDA systems have also recently emerged: pioneering MASTRO~\\cite{DBLP:journals\/semweb\/CalvaneseGLLPRRRS11}, commercial Stardog~\\cite{Perez-Urbina12} and Ultrawrap~\\cite{DBLP:conf\/semweb\/SequedaAM14}, and the Optique platform~\\cite{optique} based on the query answering engine Ontop~\\cite{ISWC13,DBLP:conf\/semweb\/KontchakovRRXZ14}.\nBy providing a semantic end-to-end connection between users and multiple distributed data sources (and thus making the IT expert middleman redundant), OBDA has attracted the attention of industry, with companies such as Siemens~\\cite{DBLP:conf\/semweb\/KharlamovSOZHLRSW14} and Statoil~\\cite{DBLP:conf\/semweb\/KharlamovHJLLPR15} experimenting with OBDA technologies to streamline the process of data access for their engineers.\\!\\footnote{See, e.g., \\url{http:\/\/optique-project.eu}.} \n\n\\subsection{Succinctness and complexity}\n\nIn this paper, our concern is two fundamental theoretical problems whose solutions \nwill elucidate the computational costs required for answering OMQs with \\textsl{OWL\\,2\\,QL}{} ontologies. 
The succinctness problem for FO-rewritings is to understand how difficult it is to construct FO-rewritings for OMQs in a given class and, in particular, to determine whether OMQs in the class have polynomial-size FO rewritings or not. In other words, the succinctness problem clarifies the computational costs of the \\emph{reduction} of OMQ answering to database query evaluation. On the other hand, it is also important to measure the resources required to answer OMQs by a \\emph{best possible algorithm}, not necessarily a reduction to database query evaluation. Thus, we are interested in the combined complexity of the OMQ answering problem: given an OMQ ${\\ensuremath{\\boldsymbol{Q}}} = (\\mathcal{T},{\\boldsymbol q}(\\avec{x}))$ from a certain class, a data instance $\\mathcal{A}$ and a tuple $\\avec{a}$ of constants from $\\mathcal{A}$, decide whether $\\mathcal{T},\\mathcal{A} \\models {\\boldsymbol q}(\\avec{a})$. The combined complexity of CQ evaluation has been thoroughly investigated in database theory; cf.~\\cite{DBLP:conf\/stoc\/GroheSS01,Libkin} and references therein. To slightly simplify the setting for our problems, we assume that data is given in the form of an RDF graph and leave mappings out of the picture (in fact, GAV mappings only polynomially increase the size of FO-rewritings over RDF graphs).\n\nWe suggest a `two-dimensional' classification of OMQs. One dimension takes account of the shape of the CQs in OMQs by quantifying their treewidth (as in classical database theory) and the number of leaves in tree-shaped CQs. Note that, in SPARQL~1.1, the sub-queries that require rewriting under the \\textsl{OWL\\,2\\,QL}{} entailment regime are always tree-shaped (they are, in essence, complex class expressions). \nThe second dimension is the existential depth of ontologies, that is, the length of the longest chain of labelled nulls in the chase on any data. 
\nThus, the NPD FactPages ontology,\\!\\footnote{http:\/\/sws.ifi.uio.no\/project\/npd-v2\/} which was designed to facilitate querying the datasets of the Norwegian Petroleum Directorate,\\!\\footnote{http:\/\/factpages.npd.no\/factpages\/} is of depth 5. A typical example of an ontology axiom causing infinite depth is \\mbox{$\\forall x\\, \\bigl(\\textit{Person}(x) \\to \\exists y\\, (\\textit{ancestor}(y,x) \\land \\textit{Person}(y))\\bigr)$}. \n\n\\begin{figure}[t]%\n\\tikzset{cmplx\/.style={draw,thick,rounded corners,inner sep=0mm}}%\n\\mbox{}\\hspace*{-0.3em}\\begin{tikzpicture}[x=8mm,y=7mm]\n\\draw[thick] (0.4,-0.4) rectangle +(8.2,-6.2);\n\\node[rotate=90] at (-0.4,-4.0) {\\scriptsize ontology depth};\n\\begin{scope}[ultra thin]\n\\draw (0.4,-1) -- +(8.2,0); \\node at (0.1,-1) {\\scriptsize 1};\n\\draw (0.4,-2) -- +(8.2,0); \\node at (0.1,-2) {\\scriptsize 2};\n\\draw (0.4,-3) -- +(8.2,0); \\node at (0.1,-3) {\\scriptsize 3};\n\\draw (0.4,-4) -- +(8.2,0); \\node at (0,-4) {\\scriptsize \\dots};\n\\draw (0.4,-5) -- +(8.2,0); \\node at (0.1,-5) {\\scriptsize $d$};\n\\draw (0.4,-6) -- +(8.2,0); \\node at (0,-6) {\\scriptsize arb.};\n\\draw (1,-0.4) -- +(0,-6.2); \\node at (1,-6.8) {\\scriptsize 2}; \n\\draw (2,-0.4) -- +(0,-6.2); \\node at (2,-6.8) {\\scriptsize \\dots}; \n\\draw (3,-0.4) -- +(0,-6.2); \\node at (3,-6.8) {\\scriptsize $\\ell$}; \n\\draw (4,-0.4) -- +(0,-6.2); \\node at (4,-6.8) {\\scriptsize trees}; \n\\draw (5,-0.4) -- +(0,-6.2); \\node at (5,-6.8) {\\scriptsize 2}; \n\\draw (6,-0.4) -- +(0,-6.2); \\node at (6,-6.8) {\\scriptsize \\dots}; \n\\draw (7,-0.4) -- +(0,-6.2); \\node at (7,-6.8) {\\scriptsize bound.}; \n\\draw (8,-0.4) -- +(0,-6.2); \\node at (8,-6.8) {\\scriptsize arb.}; \n\\end{scope}\n\\node at (2,-7.2) {\\scriptsize number of leaves};\n\\node at (6.5,-7.2) {\\scriptsize treewidth};\n\\node [fill=gray!25,cmplx,fill opacity=0.9,fit={(0.6,-1.6) (3.4,-6.4)}] \n{\\raisebox{-5ex}{\\begin{tabular}{c}poly NDL\\\\[4pt] no poly 
PE\\\\[4pt] \n\\footnotesize poly FO\\\\[-2pt]\\scriptsize iff\\\\[-2pt]\\scriptsize NL\/poly $\\,\\subseteq\\,$ $\\mathsf{NC}^1$\\end{tabular}}};\n\\node [fill=gray!50,cmplx,fill opacity=0.9,fit={(3.6,-1.6) (7.4,-5.4)}] \n{\\raisebox{-6ex}{\\begin{tabular}{c}poly NDL\\\\[4pt] no poly PE\\\\[4pt] \n\\footnotesize poly FO\\\\[-2pt] \\scriptsize iff\\\\[-2pt] \\scriptsize LOGCFL\/poly $\\,\\subseteq\\,$ $\\mathsf{NC}^1$\\end{tabular}}};\n\\node [fill=black,cmplx,fit={(3.6,-5.6) (8.4,-6.4)}] \n{\\hspace*{-1em}\\raisebox{-9pt}{\\textcolor{white}{\\begin{tabular}{c}\\small no poly NDL \\& PE\n\\end{tabular}}}}; \n\\node [fill=black,cmplx,fit={(7.6,-1.6) (8.4,-6.4)}] \n{\\rotatebox{90}{\\hspace*{-4.9em}\\textcolor{white}{\\begin{tabular}{c}\n\\scriptsize poly FO \\ iff \\ NP\/poly $\\subseteq$ NC$^1$\n\\end{tabular}}}};\n\\node [fill=gray!5,cmplx,fill opacity=0.9,fit={(0.6,-0.6) (4.4,-1.4)}] {\\raisebox{-1.5ex}{poly $\\mathsf{\\Pi}_4$-PE}};\n\\node [fill=gray!5,cmplx,fill opacity=0.9,fit={(4.6,-0.6) (7.4,-1.4)}] {\\raisebox{-1.5ex}{poly PE}};\n\\node [fill=gray!25,cmplx,fill opacity=0.9,fit={(7.6,-0.6) (8.4,-1.4)}] { };\n\\node[inner sep=0pt] (test) at (5.5,0.3) {\\begin{tabular}{c}poly NDL, {\\scriptsize but} no poly PE\\\\[-3pt]\\footnotesize poly FO \\scriptsize iff NL\/poly $\\,\\subseteq\\,$ $\\mathsf{NC}^1$\\end{tabular}};\n\\draw (8,-1) -- (test.-15);\n\\node at (4.5,-7.7) {\\footnotesize (a)};\n\\end{tikzpicture}\n\\hspace*{0.2em}\n\\begin{tikzpicture}[x=7mm,y=7mm]\n\\draw[thick] (0.4,-0.4) rectangle +(8.2,-6.2);\n\\begin{scope}[ultra thin]\n\\draw (0.4,-1) -- +(8.2,0); \\node at (0.1,-1) {\\scriptsize 1};\n\\draw (0.4,-2) -- +(8.2,0); \\node at (0.1,-2) {\\scriptsize 2};\n\\draw (0.4,-3) -- +(8.2,0); \\node at (0.1,-3) {\\scriptsize 3};\n\\draw (0.4,-4) -- +(8.2,0); \\node at (0,-4) {\\scriptsize \\dots};\n\\draw (0.4,-5) -- +(8.2,0); \\node at (0.1,-5) {\\scriptsize $d$};\n\\draw (0.4,-6) -- +(8.2,0); \\node at (0,-6) {\\scriptsize arb.};\n\\draw (1,-0.4) -- 
+(0,-6.2); \\node at (1,-6.8) {\\scriptsize 2}; \n\\draw (2,-0.4) -- +(0,-6.2); \\node at (2,-6.8) {\\scriptsize \\dots}; \n\\draw (3,-0.4) -- +(0,-6.2); \\node at (3,-6.8) {\\scriptsize $\\ell$}; \n\\draw (4,-0.4) -- +(0,-6.2); \\node at (4,-6.8) {\\scriptsize trees}; \n\\draw (5,-0.4) -- +(0,-6.2); \\node at (5,-6.8) {\\scriptsize 2}; \n\\draw (6,-0.4) -- +(0,-6.2); \\node at (6,-6.8) {\\scriptsize \\dots}; \n\\draw (7,-0.4) -- +(0,-6.2); \\node at (7,-6.8) {\\scriptsize bound.}; \n\\draw (8,-0.4) -- +(0,-6.2); \\node at (8,-6.8) {\\scriptsize arb.}; \n\\end{scope}\n\\node at (2,-7.2) {\\scriptsize number of leaves};\n\\node at (6.5,-7.2) {\\scriptsize treewidth};\n\\node [fill=gray!5,cmplx,fill opacity=0.9,fit={(3.4,-5.4) (0.6,-0.6)}] {$\\ensuremath{\\mathsf{NL}}$};\n\\node [fill=gray!40,cmplx,fill opacity=0.9,fit={(3.6,-0.6) (7.4,-5.4)}] {$\\ensuremath{\\mathsf{LOGCFL}}$};\n\\node [fill=black,cmplx,fit={(8.4,-6.4) (7.6,-0.6)}] {};\n\\node [fill=black,cmplx,fit={(3.6,-5.6) (8.4,-6.4)}] {\\hspace*{7em}\\raisebox{-1ex}{\\textcolor{white}{\\ensuremath{\\mathsf{NP}}}}};\n\\node [fill=gray!40,cmplx,fill opacity=0.9,fit={(0.6,-5.6) (3.4,-6.4)}] {\\raisebox{-1ex}{\\ensuremath{\\mathsf{LOGCFL}}}};\n\\node at (4.5,-7.7) {\\footnotesize (b)};\n\\end{tikzpicture}%\n\\caption{(a) Succinctness of OMQ rewritings, and (b) combined complexity of OMQ answering (tight bounds).}\n\\label{pic:results}\n\\end{figure}\n\n\\subsection{Results}\\label{sec:results}\n\nThe results of our investigation are summarised in the succinctness and complexity landscapes of Fig.~\\ref{pic:results}. In what follows, we discuss these results in more detail. \n\nThe \\emph{succinctness problem} we consider can be formalised as follows: given a sequence ${\\ensuremath{\\boldsymbol{Q}}}_n$ ($n<\\omega$) of OMQs whose size is polynomial in $n$, determine whether the size of minimal rewritings of ${\\ensuremath{\\boldsymbol{Q}}}_n$ can be bounded by a polynomial function in $n$. 
We distinguish between three types of rewritings: arbitrary FO-rewritings, positive existential (PE-) rewritings (in which only $\\land$, $\\lor$ and $\\exists$ are allowed), and non-recursive datalog (NDL-) rewritings.\\!\\footnote{Domain-independent FO-rewritings correspond to SQL queries, PE-rewritings to \\textsc{Select}-\\textsc{Project}-\\textsc{Join}-\\textsc{Union} (or SPJU) queries, and NDL-rewritings to SPJU queries with views; see also Remark~\\ref{dom-ind}.} \nThis succinctness problem was first considered by Kikot et al.~\\cite{DBLP:conf\/icalp\/KikotKPZ12} and Gottlob and Schwentick~\\cite{DBLP:conf\/kr\/GottlobS12}. The former constructed a sequence ${\\ensuremath{\\boldsymbol{Q}}}_n$ of OMQs (with tree-shaped CQs) whose PE- and NDL-rewritings are of exponential size, while FO-rewritings are superpolynomial unless $\\ensuremath{\\mathsf{NP}} \\subseteq \\P\/\\mathsf{poly}$. Gottlob and Schwentick~\\cite{DBLP:conf\/kr\/GottlobS12} and Gottlob et al.~\\cite{DBLP:journals\/ai\/GottlobKKPSZ14} showed that PE- (and so all other) `rewritings' can be made polynomial under the condition that all relevant data instances contain two special constants. The `succinctification' trick involves polynomially many extra existential quantifiers over these constants to guess a derivation of the given CQ in the chase, which makes such rewritings impractical (cf.~NFAs vs DFAs, and~\\cite{DBLP:journals\/tocl\/Avigad03}). In this paper, we stay within the classical OBDA setting that does not impose any extra conditions on the data and does not allow any special constants in rewritings. 
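As a concrete reference point, the certain-answer semantics $\mathcal{T},\mathcal{A} \models {\boldsymbol q}(\avec{a})$ that all of these rewritings capture can be sketched by a toy chase over the example ontology of the introduction. The atom encoding and function names below are purely illustrative, and the chase terminates here because that ontology has depth 1.

```python
def chase(atoms):
    """Toy chase for the introduction's example axioms (a depth-1 ontology).

    Atoms are tuples such as ('ProjectManager', 'ann') or
    ('managesProject', 'ann', 'p1'); labelled nulls get fresh '_n...' names.
    """
    atoms, fresh, changed = set(atoms), 0, True
    while changed:
        changed = False
        for a in list(atoms):
            derived = set()
            if a[0] == 'managesProject':          # Ex.y managesProject(x,y) -> ProjectManager(x)
                derived.add(('ProjectManager', a[1]))
            if a[0] == 'ProjectManager':
                derived.add(('Staff', a[1]))      # ProjectManager(x) -> Staff(x)
                # ProjectManager(x) -> Ex.y (isAssistedBy(x,y) & PA(y))
                if not any(b[0] == 'isAssistedBy' and b[1] == a[1] for b in atoms):
                    null = '_n%d' % fresh
                    fresh += 1
                    derived |= {('isAssistedBy', a[1], null), ('PA', null)}
            if a[0] == 'PA':
                derived.add(('Secretary', a[1]))  # PA(x) -> Secretary(x)
            if not derived <= atoms:
                atoms |= derived
                changed = True
    return atoms


def certain_answers(atoms):
    # q(x) = Ex.y (Staff(x) & isAssistedBy(x,y) & Secretary(y))
    chased = chase(atoms)
    return {b[1] for b in chased if b[0] == 'isAssistedBy'
            and ('Staff', b[1]) in chased and ('Secretary', b[2]) in chased}
```

On $\mathcal{A} = \{\textit{managesProject}(\textit{ann}, p_1)\}$, the chase derives $\textit{ProjectManager}$, $\textit{Staff}$ and a labelled-null assistant that is a $\textit{PA}$ and hence a $\textit{Secretary}$, so $\textit{ann}$ is a certain answer, which is exactly what the rewriting ${\boldsymbol q}'$ of the introduction returns when evaluated directly over $\mathcal{A}$.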
\n\nFigure~\\ref{pic:results}~(a) gives a summary of the succinctness results obtained in this paper.\nIt turns out that polynomial-size PE-rewritings are guaranteed to exist---in fact, can be constructed in polynomial time---only for the class of OMQs with ontologies of depth~1 and CQs of bounded treewidth; moreover, tree-shaped OMQs have polynomial-size $\\mathsf{\\Pi}_4$-PE-rewritings (with matrices of the form ${\\land}{\\lor}{\\land}{\\lor}$). Polynomial-size NDL-rewritings can be efficiently constructed for all tree-shaped OMQs with a bounded number of leaves, all OMQs with ontologies of bounded depth and CQs of bounded treewidth, and all OMQs with ontologies of depth 1. For OMQs with ontologies of depth 2 and arbitrary CQs, and OMQs with arbitrary ontologies and tree-shaped CQs, we have an exponential lower bound on the size of NDL- (and so PE-) rewritings. The existence of polynomial-size FO rewritings for all OMQs in each of these classes (save the first one) turns out to be equivalent to one of the major open problems in computational complexity such as $\\mathsf{NC}^1 = \\ensuremath{\\mathsf{NP}}\/\\mathsf{poly}$.\\!\\footnote{$\\mathsf{C}\/\\mathsf{poly}$ is the non-uniform analogue of a complexity class $\\mathsf{C}$.}\n\nWe obtain these results by establishing a connection between succinctness of rewritings and circuit complexity, a branch of computational complexity theory that classifies Boolean functions according to the size of circuits computing them. 
Our starting point is the observation that the tree-witness PE-rewriting of an OMQ ${\ensuremath{\boldsymbol{Q}}} = (\mathcal{T},{\boldsymbol q})$ introduced by Kikot et al.~\cite{DBLP:conf/kr/KikotKZ12} defines a hypergraph whose vertices are the atoms in ${\boldsymbol q}$ and whose hyperedges correspond to connected sub-queries of ${\boldsymbol q}$ that can be homomorphically mapped to labelled nulls of some chases for $\mathcal{T}$.
Based on this observation, we introduce a new computational model for Boolean functions by treating any hypergraph~$H$, whose vertices are labelled by (possibly negated) Boolean variables or constants~0 and~1, as a program computing a Boolean function $f_H$ that returns 1 on a valuation for the variables iff there is an independent subset of hyperedges covering all vertices labelled by 0 (under the valuation). We show that constructing short FO- (respectively, PE- and NDL-) rewritings of ${\ensuremath{\boldsymbol{Q}}}$ is (nearly) equivalent to finding short Boolean formulas (respectively, monotone formulas and monotone circuits) computing the hypergraph function for~${\ensuremath{\boldsymbol{Q}}}$.

For each of the OMQ classes in Fig.~\ref{pic:results}~(a), we characterise the computational power of the corresponding hypergraph programs and employ results from circuit complexity to identify the size of rewritings. For example, we show that OMQs with ontologies of depth 1 correspond to hypergraph programs of degree $\le 2$ (in which every vertex belongs to at most two hyperedges), and that the latter are polynomially equivalent to nondeterministic branching programs (NBPs). Since NBPs compute the Boolean functions in the class $\ensuremath{\mathsf{NL}}/\mathsf{poly} \subseteq \P/\mathsf{poly}$, the tree-witness rewritings for OMQs with ontologies of depth 1 can be equivalently transformed into polynomial-size NDL-rewritings.
On the other hand, there exist monotone Boolean functions computable by polynomial-size NBPs but not by polynomial-size monotone Boolean formulas, which establishes a superpolynomial lower bound for PE-rewritings. It also follows that all such OMQs have polynomial-size FO-rewritings iff \mbox{$\mathsf{NC}^1 = \ensuremath{\mathsf{NL}}/\mathsf{poly}$}.

The succinctness results in Fig.~\ref{pic:results}~(a), characterising the complexity of the reduction to plain database query evaluation, are complemented by the combined complexity results in Fig.~\ref{pic:results}~(b).
\emph{Combined complexity} measures the time and space required by the best possible algorithm to answer an OMQ ${\ensuremath{\boldsymbol{Q}}} = (\mathcal{T},{\boldsymbol q})$ from the given class over a data instance $\mathcal{A}$, as a function of the size of ${\ensuremath{\boldsymbol{Q}}}$ and $\mathcal{A}$.
It is known~\cite{CDLLR07,ACKZ09} that the general OMQ answering problem is \ensuremath{\mathsf{NP}}-complete for combined complexity---that is, of the same complexity as standard CQ evaluation in databases. However, answering tree-shaped OMQs turns out to be \ensuremath{\mathsf{NP}}-hard~\cite{DBLP:conf/dlog/KikotKZ11}, in contrast to the well-known tractability of evaluating tree-shaped and bounded-treewidth CQs~\cite{DBLP:conf/vldb/Yannakakis81,DBLP:journals/tcs/ChekuriR00,DBLP:conf/icalp/GottlobLS99}. Here, we prove that, surprisingly, answering OMQs with ontologies of bounded depth and CQs of bounded treewidth is no harder than evaluating CQs of bounded treewidth, that is, \ensuremath{\mathsf{LOGCFL}}-complete. By further restricting the class of CQs to trees with a bounded number of leaves, we obtain an even better \ensuremath{\mathsf{NL}}-completeness result, which matches the complexity of evaluating the underlying CQs.
If we consider bounded-leaf tree-shaped CQs coupled with arbitrary \textsl{OWL\,2\,QL}{} ontologies, then the OMQ answering problem remains tractable, \ensuremath{\mathsf{LOGCFL}}-complete to be more precise.

The plan of the paper is as follows. Section~\ref{sec:definitions} gives formal definitions of \textsl{OWL\,2\,QL}{}, OMQs and rewritings. Section~\ref{sec:TW} defines the tree-witness rewriting. Section~\ref{sec:Bfunctions} reduces the succinctness problem for OMQ rewritings to the succinctness problem for hypergraph Boolean functions associated with tree-witness rewritings, and introduces hypergraph programs for computing these functions. Section~\ref{sec:OMQs&hypergraphs} establishes a correspondence between the OMQ classes in Fig.~\ref{pic:results} and the structure of the corresponding hypergraph functions and programs. Section~\ref{sec:circuit_complexity} characterises the computational power of hypergraph programs in these classes by relating them to standard models of computation for Boolean functions. Section~\ref{sec:7} uses the results of the previous three sections and some known facts from circuit complexity to obtain the upper and lower bounds on the size of PE-, NDL- and FO-rewritings in Fig.~\ref{pic:results}~(a). Section~\ref{sec:complexity} establishes the combined complexity results in Fig.~\ref{pic:results}~(b).
We conclude in Section~\ref{sec:conclusions} by discussing the obtained succinctness and complexity results and formulating a few open problems.
All omitted proofs can be found in the appendix.


\section{\textsl{OWL\,2\,QL}{} ontology-mediated queries and first-order rewritability}\label{sec:definitions}

In first-order logic, any \textsl{OWL\,2\,QL}{} \emph{ontology} (or \emph{TBox} in description logic parlance), $\mathcal{T}$, can be given as a finite set of sentences (often called \emph{axioms}) of the following forms
\begin{align*}
& \forall x\,\big(\tau(x) \to \tau'(x)\big), &
& \forall x\, \big(\tau(x) \land \tau'(x) \to \bot \big),\\
& \forall x,y\,\big(\varrho(x,y) \to \varrho'(x,y)\big), &
& \forall x,y\,\big(\varrho(x,y) \land \varrho'(x,y) \to \bot\big),\\
& \forall x\, \varrho(x,x) , &
& \forall x\,\big(\varrho(x,x) \to \bot\big),
\end{align*}
where the formulas $\tau(x)$ (called \emph{classes} or \emph{concepts}) and $\varrho(x,y)$ (called \emph{properties} or \emph{roles}) are defined, using unary predicates $A$ and binary predicates $P$, by the grammars
\begin{equation}\label{syntax}
\tau(x) \ ::= \ \top \ \mid \ A(x) \ \mid \ \exists y\,\varrho(x,y) \qquad \text{and} \qquad \varrho(x,y) \ ::= \ \top \ \mid \ P(x,y) \ \mid \ P(y,x).
\end{equation}
(Strictly speaking, \textsl{OWL\,2\,QL}{} ontologies can also contain inequalities $a \ne b$, for constants $a$ and $b$. However, they do not have any impact on the problems considered in this paper, and so will be ignored.)
\begin{example}\label{ex:NPDontology}
To illustrate, we show a snippet of the NPD FactPages ontology:
\begin{align*}
& \forall x \, (\textit{GasPipeline}(x) \to \textit{Pipeline}(x)),\\
& \forall x \, (\textit{FieldOwner}(x) \leftrightarrow \exists y \, \textit{ownerForField}(x,y)),\\
& \forall y \, (\exists x \, \textit{ownerForField}(x,y) \to \textit{Field}(y)),\\
& \forall x,y \, (\textit{shallowWellboreForField}(x,y) \to \textit{wellboreForField}(x,y)),\\
& \forall x,y \, (\textit{isGeometryOfFeature}(x,y) \leftrightarrow \textit{hasGeometry}(y,x)).
\end{align*}
\end{example}

To simplify presentation, in our ontologies we also use sentences of the form
\begin{equation}\label{eq:sugar}
\forall x\, \big(\tau(x) \to \zeta(x)\big),
\end{equation}
where
\begin{equation*}
\zeta(x) \ ::= \ \tau(x) \ \mid \ \zeta_1(x) \land \zeta_2(x) \ \mid \ \exists y\, \big(\varrho_1(x,y) \land \dots \land \varrho_k(x,y) \land \zeta(y)\big).
\end{equation*}
It is readily seen that such sentences are just syntactic sugar and can be eliminated by means of polynomially many fresh predicates. Indeed, any axiom of the form~\eqref{eq:sugar} with
\begin{equation*}
\zeta(x) = \exists y\, \big(\varrho_1(x,y) \land \dots \land \varrho_k(x,y) \land \zeta'(y)\big)
\end{equation*}
can be (recursively) replaced by the following axioms, for a fresh $P_\zeta$ and $i=1,\dots,k$:
\begin{equation}\label{eq:replacement}
\forall x \, \bigl(\tau(x) \to \exists y\, P_\zeta(x,y)\bigr),\quad
\forall x,y\,\bigl(P_\zeta(x,y) \to \varrho_i (x,y)\bigr), \quad
\forall y\, \bigl(\exists x\, P_\zeta(x,y) \to \zeta'(y)\bigr)
\end{equation}
because any first-order structure is a model of~\eqref{eq:sugar} iff it is a restriction of some model of~\eqref{eq:replacement} to the signature of~\eqref{eq:sugar}.
The result of eliminating the syntactic sugar from an ontology $\mathcal{T}$ is called the \emph{normalisation} of $\mathcal{T}$. We always assume that all of our ontologies are normalised even though this is not done explicitly; however, we stipulate (without loss of generality) that the normalisation predicates $P_\zeta$ never occur in the data.

When writing ontology axioms, we usually omit the universal quantifiers. We typically use the characters $P$, $R$ to denote binary predicates, $A$, $B$, $C$ for unary predicates, and $S$ for either of them. For a binary predicate $P$, we write $P^-$ to denote its inverse; that is, $P(x,y) = P^-(y,x)$, for any $x$ and $y$, and $P^{--}=P$.

A \emph{conjunctive query} (CQ) ${\boldsymbol q}(\avec{x})$ is a formula of the form $\exists \avec{y}\, \varphi(\avec{x}, \avec{y})$, where $\varphi$ is a conjunction of atoms $S(\avec{z})$ all of whose variables are among $\avec{x}$, $\avec{y}$.

\begin{example}
Here is a (fragment of a) typical CQ from the NPD FactPages:
\begin{multline*}
{\boldsymbol q}(x_1,x_2,x_3) ~=~ \exists y,z \, \big[\textit{ProductionLicence}(x_1) \land
\textit{ProductionLicenceOperator}(y) \land{} \\
\textit{dateOperatorValidFrom}(y, x_2) \land
\textit{licenceOperatorCompany}(y, z) \land{}\\
\textit{name}(z, x_3) \land \textit{operatorForLicence}(y, x_1) \big].
\end{multline*}
\end{example}

To simplify presentation and without loss of generality, we assume that CQs do not contain constants. Where convenient, we regard a CQ as the \emph{set} of its atoms; in particular, $|{\boldsymbol q}|$ is the \emph{size of ${\boldsymbol q}$}.
The variables in $\avec{x}$ are called the \emph{answer variables} of a CQ ${\boldsymbol q}(\avec{x})$.
A CQ without answer variables is called \emph{Boolean}.
With every CQ ${\boldsymbol q}$, we associate its \emph{Gaifman graph} $G_{{\boldsymbol q}}$ whose vertices are the variables of ${\boldsymbol q}$ and whose edges are the pairs $\{u,v\}$ such that $P(u,v)\in{\boldsymbol q}$, for some $P$. A CQ ${\boldsymbol q}$ is \emph{connected} if the graph $G_{\boldsymbol q}$ is connected.
We call ${\boldsymbol q}$ \emph{tree-shaped} if $G_{{\boldsymbol q}}$ is a tree\footnote{Tree-shaped CQs also go by the name of \emph{acyclic queries}~\cite{DBLP:conf/vldb/Yannakakis81,DBLP:conf/ijcai/BienvenuOSX13}.}\!\!, and if $G_{{\boldsymbol q}}$ is a tree with at most two leaves, then ${\boldsymbol q}$ is said to be \emph{linear}.

An \textsl{OWL\,2\,QL}{} \emph{ontology-mediated query} (OMQ) is a pair ${\ensuremath{\boldsymbol{Q}}}(\avec{x}) = (\mathcal{T},{\boldsymbol q}(\avec{x}))$, where $\mathcal{T}$ is an \textsl{OWL\,2\,QL}{} ontology and ${\boldsymbol q}(\avec{x})$ a CQ. The \emph{size of ${\ensuremath{\boldsymbol{Q}}}$} is defined as $|{\ensuremath{\boldsymbol{Q}}}| = |\mathcal{T}| + |{\boldsymbol q}|$, where $|\mathcal{T}|$ is the number of symbols in $\mathcal{T}$.
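To put these definitions together, here is a minimal toy OMQ built from the NPD FactPages vocabulary of Example~\ref{ex:NPDontology} (the query itself is ours and serves illustration only):
\begin{align*}
{\ensuremath{\boldsymbol{Q}}}(x) \ &= \ (\mathcal{T}, {\boldsymbol q}(x)), \qquad \text{where}\\
\mathcal{T} \ &= \ \{\, \textit{GasPipeline}(x) \to \textit{Pipeline}(x) \,\},\\
{\boldsymbol q}(x) \ &= \ \exists y\, \bigl(\textit{Pipeline}(x) \land \textit{hasGeometry}(x,y)\bigr).
\end{align*}
The Gaifman graph $G_{\boldsymbol q}$ has vertices $x$, $y$ and the single edge $\{x,y\}$, so ${\boldsymbol q}$ is tree-shaped and, in fact, linear; its only answer variable is $x$.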

A \emph{data instance}, $\mathcal{A}$, is a finite set of unary or binary ground atoms (called an \emph{ABox} in description logic).
We denote by $\mathsf{ind}(\mathcal{A})$ the set of individual constants in $\mathcal{A}$.
Given an OMQ ${\ensuremath{\boldsymbol{Q}}}(\avec{x})$ and a data instance $\mathcal{A}$, a tuple $\avec{a}$ of constants from $\mathsf{ind}(\mathcal{A})$ of length $|\avec{x}|$ is called a \emph{certain answer to ${\ensuremath{\boldsymbol{Q}}}(\avec{x})$ over} $\mathcal{A}$ if $\mathcal{I} \models {\boldsymbol q}(\avec{a})$ for all models $\mathcal{I}$ of $\mathcal{T}\cup\mathcal{A}$; in this case we write \mbox{$\mathcal{T},\mathcal{A} \models {\boldsymbol q}(\avec{a})$}. If ${\boldsymbol q}(\avec{x})$ is Boolean, a certain answer to ${\ensuremath{\boldsymbol{Q}}}$ over $\mathcal{A}$ is `yes' if $\mathcal{T},\mathcal{A} \models {\boldsymbol q}$, and `no' otherwise. We remind the reader~\cite{Libkin} that, for any CQ ${\boldsymbol q}(\avec{x}) = \exists \avec{y}\, \varphi(\avec{x}, \avec{y})$, any first-order structure $\mathcal{I}$ and any tuple $\avec{a}$ from its domain $\Delta$, we have $\mathcal{I} \models {\boldsymbol q}(\avec{a})$ iff there is a map $h \colon \avec{x} \cup \avec{y} \to \Delta$ such that
(\emph{i}) if $S(\avec{z}) \in {\boldsymbol q}$ then $\mathcal{I} \models S(h(\avec{z}))$, and (\emph{ii}) $h(\avec{x}) = \avec{a}$.
If (\emph{i}) is satisfied then $h$ is called a \emph{homomorphism} from ${\boldsymbol q}$ to $\mathcal{I}$, and we write $h \colon {\boldsymbol q} \to \mathcal{I}$; if (\emph{ii}) also holds, we write $h \colon {\boldsymbol q}(\avec{a}) \to \mathcal{I}$.

Central to OBDA is the notion of OMQ rewriting that reduces the problem of finding certain answers to standard query evaluation.
More precisely, an FO-formula ${\boldsymbol q}'(\avec{x})$, possibly with equality, $=$, is an \emph{FO-rewriting of an OMQ
${\ensuremath{\boldsymbol{Q}}}(\avec{x}) = (\mathcal{T},{\boldsymbol q}(\avec{x}))$} if, for \emph{any} data instance $\mathcal{A}$ (without the normalisation predicates for $\mathcal{T}$) and any tuple $\avec{a}$ in $\mathsf{ind}(\mathcal{A})$,
\begin{equation}\label{def:rewriting}
\mathcal{T}, \mathcal{A} \models {\boldsymbol q}(\avec{a}) \qquad \text{iff} \qquad \mathcal{I}_\mathcal{A} \models {\boldsymbol q}'(\avec{a}),
\end{equation}
where $\mathcal{I}_\mathcal{A}$ is the first-order structure over the domain $\mathsf{ind}(\mathcal{A})$ such that $\mathcal{I}_\mathcal{A} \models S(\avec{a})$ iff $S(\avec{a}) \in \mathcal{A}$, for any ground atom $S(\avec{a})$.
As $\mathcal{A}$ is arbitrary, this definition implies, in particular, that the rewriting must be \emph{constant-free}.
If ${\boldsymbol q}'(\avec{x})$ is a positive existential formula---that is, ${\boldsymbol q}'(\avec{x}) = \exists \avec{y}\, \varphi(\avec{x}, \avec{y})$ with $\varphi$ constructed from atoms (possibly with equality) using $\land$ and $\lor$ only---we call it a \emph{PE-rewriting of ${\ensuremath{\boldsymbol{Q}}}(\avec{x})$}. A PE-rewriting whose matrix $\varphi$ is a disjunction of conjunctions is known as a \emph{UCQ-rewriting}; if $\varphi$ takes the form ${\land}{\lor}{\land}{\lor}$, we call it a $\mathsf{\Pi}_4$-\emph{rewriting}.
The size $|{\boldsymbol q}'|$ of ${\boldsymbol q}'$ is the number of symbols in it.

We also consider rewritings in the form of nonrecursive datalog queries.
Recall~\cite{Abitebouletal95} that a \emph{datalog program}, $\Pi$, is a finite set of Horn clauses
$\forall \avec{x}\, (\gamma_1 \land \dots \land \gamma_m \to \gamma_0)$,
where each $\gamma_i$ is an atom $P(x_1,\dots,x_l)$ with $x_1,\dots,x_l \in \avec{x}$. The atom $\gamma_0$ is the \emph{head} of the clause, and $\gamma_1,\dots,\gamma_m$ its (possibly empty) \emph{body}.
A predicate $S$ \emph{depends} on $S'$ in $\Pi$ if $\Pi$ has a clause with $S$ in the head and $S'$ in the body; $\Pi$ is \emph{nonrecursive} if this dependence relation is acyclic.

Let ${\ensuremath{\boldsymbol{Q}}} = (\mathcal{T},{\boldsymbol q}(\avec{x}))$ be an OMQ, $\Pi$ a constant-free nonrecursive program, and $G(\avec{x})$ a predicate. The pair ${\boldsymbol q}'(\avec{x}) = (\Pi,G(\avec{x}))$ is an \emph{\text{NDL}-rewriting of ${\ensuremath{\boldsymbol{Q}}}$} if, for any data instance $\mathcal{A}$ and any tuple $\avec{a}$ in $\mathsf{ind}(\mathcal{A})$, we have $\mathcal{T},\mathcal{A}\models {\boldsymbol q}(\avec{a})$ iff $\Pi(\mathcal{I}_\mathcal{A}) \models G(\avec{a})$, where
$\Pi(\mathcal{I}_\mathcal{A})$ is the structure with domain $\mathsf{ind}(\mathcal{A})$ obtained by closing $\mathcal{I}_\mathcal{A}$ under the clauses in~$\Pi$. Every \text{PE}-rewriting can clearly be represented as an \text{NDL}-rewriting of linear size.

\begin{remark}\label{dom-ind}
As defined, \text{FO}- and \text{PE}-rewritings are not necessarily domain-independent queries, while \text{NDL}-rewritings are not necessarily safe~\cite{Abitebouletal95}. For example, $(x=x)$ is a PE-rewriting of the OMQ $(\{\forall x \, P(x,x)\},P(x,x))$, and the program $(\{\top \to A(x)\}, A(x))$ is an \text{NDL}-rewriting of the OMQ $(\{\top\to A(x)\}, A(x))$. Rewritings can easily be made domain-independent and safe by relativising their variables to the predicates in the data signature (relational schema). For instance, if this signature is $\{A, P\}$, then a domain-independent relativisation of $(x=x)$ is the \text{PE}-rewriting $\bigl(A(x) \lor \exists y\,P(x,y) \lor \exists y\, P(y,x)\bigr) \land (x = x)$.
Note that if we exclude from \textsl{OWL\,2\,QL}{} reflexivity statements and axioms with $\top$ on the left-hand side, then rewritings are guaranteed to be domain-independent, and no relativisation is required. In any case, rewritings are always interpreted under the \emph{active domain semantics} adopted in databases; see~\eqref{def:rewriting}.
\end{remark}

As mentioned in the introduction, the \textsl{OWL\,2\,QL}{} profile of \textsl{OWL\,2}{} was designed to ensure FO-rewritability of all OMQs with ontologies in the profile or, equivalently, OMQ answering in {\ensuremath{\mathsf{AC}^0}}{} for data complexity.
It should be clear, however, that for the OBDA approach to work in practice, the rewritings of OMQs must be of `reasonable shape and size'\!. Indeed, it was observed experimentally~\cite{DBLP:journals/semweb/CalvaneseGLLPRRRS11} and also established theoretically~\cite{DBLP:conf/icalp/KikotKPZ12} that the rewritings are sometimes prohibitively large---exponentially large in the size of the original CQ, to be more precise.
These facts imply that, in the context of OBDA, we should actually be interested not in arbitrary but in \emph{polynomial-size} rewritings. In complexity-theoretic terms, the focus should not only be on the data complexity of OMQ answering, which is an appropriate measure for database query evaluation (where queries are indeed usually small)~\cite{Vardi82}, but also on the combined complexity that takes into account the contribution of ontologies and queries.


\section{Tree-Witness Rewriting}\label{sec:TW}

Now we define one particular rewriting of \textsl{OWL\,2\,QL}{} OMQs that will play a key role in the succinctness and complexity analysis later on in the paper.
This rewriting is a modification of the tree-witness PE-rewriting originally introduced by Kikot et al.~\cite{DBLP:conf/kr/KikotKZ12}
(cf.~\cite{Lutz-IJCAR08,KR10our,DBLP:journals/semweb/KonigLMT15} for rewritings based on similar ideas).

We begin with two simple observations that will help us remove unneeded clutter from the definitions. Every \textsl{OWL\,2\,QL}{} ontology $\mathcal{T}$ consists of two parts: $\mathcal{T}^-$, which contains all the sentences with $\bot$, and the remainder, $\mathcal{T}^+$, which is consistent with every data instance. For any $\psi(\avec{z}) \to \bot$ in $\mathcal{T}^-$, consider the Boolean CQ $\exists \avec{z} \, \psi(\avec{z})$. It is not hard to see that, for any OMQ $(\mathcal{T},{\boldsymbol q}(\avec{x}))$ and data instance $\mathcal{A}$, a tuple $\avec{a}$ is a certain answer to $(\mathcal{T},{\boldsymbol q}(\avec{x}))$ over $\mathcal{A}$ iff either $\mathcal{T}^+,\mathcal{A} \models {\boldsymbol q}(\avec{a})$ or $\mathcal{T}^+,\mathcal{A} \models \exists \avec{z} \, \psi(\avec{z})$, for some $\psi(\avec{z}) \to \bot$ in $\mathcal{T}^-$; see~\cite{DBLP:journals/ws/CaliGL12} for more details. Thus, from now on we will assume that, in all our ontologies~$\mathcal{T}$, the `negative' part $\mathcal{T}^-$ is empty, and so they are \emph{consistent} with all data instances.

The second observation will allow us to restrict the class of data instances we need to consider when rewriting OMQs.
In general, if we only require condition \eqref{def:rewriting} to hold for any data instance $\mathcal{A}$ from some class $\mathfrak A$, then we call ${\boldsymbol q}'(\avec{x})$ a \emph{rewriting of ${\ensuremath{\boldsymbol{Q}}}(\avec{x})$ over~$\mathfrak A$}.
Such classes of data instances can be defined, for example, by the integrity constraints in the database schema and the mapping~\cite{ISWC13}.
We say that a data instance $\mathcal{A}$ is \emph{complete}\footnote{Rodriguez-Muro et al.~\cite{ISWC13} used the term `H-completeness'; see also~\cite{DBLP:conf/ijcai/KonigLM15}.} for an ontology $\mathcal{T}$ if $\mathcal{T},\mathcal{A} \models S(\avec{a})$ implies $S(\avec{a}) \in \mathcal{A}$, for any ground atom $S(\avec{a})$ with $\avec{a}$ from $\mathsf{ind}(\mathcal{A})$. By the following proposition, from now on we need only consider rewritings over complete data instances.

\begin{proposition}\label{complete}
If ${\boldsymbol q}'(\avec{x})$ is an \text{NDL}-rewriting of ${\ensuremath{\boldsymbol{Q}}}(\avec{x}) = (\mathcal{T},{\boldsymbol q}(\avec{x}))$ over complete data instances, then there is an \text{NDL}-rewriting ${\boldsymbol q}''(\avec{x})$ of ${\ensuremath{\boldsymbol{Q}}}(\avec{x})$ over arbitrary data instances with $|{\boldsymbol q}''| \leq |{\boldsymbol q}'| \cdot |\mathcal{T}|$. A similar result holds for \text{PE}- and \text{FO}-rewritings.
\end{proposition}
\begin{proof}
Let $(\Pi, G(\avec{x}))$ be an \text{NDL}-rewriting of ${\ensuremath{\boldsymbol{Q}}}(\avec{x})$ over complete data instances.
Denote by $\Pi^*$ the result of replacing each predicate $S$ in $\Pi$ with a fresh predicate $S^*$.
Define $\Pi'$ to be the union of $\Pi^*$ and the following clauses for predicates in $\Pi$:
\begin{align*}
\tau(x) & \to A^*(x),\qquad \text{ if } \ \mathcal{T} \models \tau(x) \to A(x),\\
\varrho(x,y) & \to P^*(x,y), \quad\text{ if } \ \mathcal{T} \models \varrho(x,y) \to P(x,y),\\
\top & \to P^*(x,x), \quad\text{ if } \ \mathcal{T}\models P(x,x)
\end{align*}
(the empty body is denoted by $\top$).
It is readily seen that $(\Pi',G^*(\avec{x}))$ is an \text{NDL}-rewriting of ${\ensuremath{\boldsymbol{Q}}}(\avec{x})$ over arbitrary data instances.
The cases of \text{PE}- and \text{FO}-rewritings are similar except that we replace $A(x)$ and $P(x,y)$ with
\begin{equation*}
\bigvee_{\mathcal{T}\models \tau(x) \to A(x)} \hspace*{-2em}\tau(x) \qquad\text{ and }\qquad \bigvee_{\mathcal{T}\models \varrho(x,y) \to P(x,y)} \hspace*{-2em}\varrho(x,y) \quad \vee \bigvee_{\mathcal{T}\models P(x,x)} \hspace*{-1em}(x = y),
\end{equation*}
respectively (the empty disjunction is, by definition, $\bot$).
\end{proof}

As is well-known~\cite{Abitebouletal95}, every pair $(\mathcal{T},\mathcal{A})$ of an ontology $\mathcal{T}$ and data instance $\mathcal{A}$ possesses a \emph{canonical model} (or \emph{chase}) $\mathcal{C}_{\mathcal{T},\mathcal{A}}$ such that \mbox{$\mathcal{T},\mathcal{A} \models {\boldsymbol q}(\avec{a})$} iff $\mathcal{C}_{\mathcal{T},\mathcal{A}} \models {\boldsymbol q}(\avec{a})$, for all CQs ${\boldsymbol q}(\avec{x})$ and $\avec{a}$ in $\mathsf{ind}(\mathcal{A})$. In our proofs, we use the following definition of $\mathcal{C}_{\mathcal{T},\mathcal{A}}$, where without loss of generality we assume that $\mathcal{T}$ does not contain binary predicates $P$ such that $\mathcal{T} \models \forall x,y \, P(x,y)$.
Indeed, occurrences of such $P$ in $\mathcal{T}$ can be replaced by $\top$ and occurrences of $P(x,y)$ in CQs can simply be removed without changing certain answers over any data instance (provided that $x$ and $y$ occur in the remainder of the query).

The domain $\Delta^{\mathcal{C}_{\mathcal{T},\mathcal{A}}}$ of the canonical model $\mathcal{C}_{\mathcal{T},\mathcal{A}}$ consists of $\mathsf{ind}(\mathcal{A})$ and the \emph{witnesses}, or \emph{labelled nulls}, introduced by the existential quantifiers in (the normalisation of) $\mathcal{T}$. More precisely, the labelled nulls in $\mathcal{C}_{\mathcal{T},\mathcal{A}}$ are finite words of the form $w = a \varrho_1 \dots \varrho_n$ ($n \geq 1$) such that
\begin{nitemize}
\item $a \in \mathsf{ind}(\mathcal{A})$ and $\mathcal{T}, \mathcal{A} \models \exists y\, \varrho_1 (a,y)$, but $\mathcal{T}, \mathcal{A} \not\models \varrho_1(a,b)$ for any $b \in \mathsf{ind}(\mathcal{A})$;
\item $\mathcal{T}\not\models \varrho_i(x,x)$ for $1 \le i \le n$;
\item $\mathcal{T} \models \exists x\, \varrho_i(x,y) \to \exists z\, \varrho_{i+1}(y,z)$ and $\mathcal{T} \not \models \varrho_i(y,x) \to \varrho_{i+1}(x,y)$ for $1 \leq i < n$.
\end{nitemize}
Every individual name $a \in \mathsf{ind}(\mathcal{A})$ is interpreted in $\mathcal{C}_{\mathcal{T},\mathcal{A}}$ by itself, and unary and binary predicates are interpreted as follows: for any $u,v \in \Delta^{\mathcal{C}_{\mathcal{T},\mathcal{A}}}$,
\begin{nitemize}
\item $\mathcal{C}_{\mathcal{T},\mathcal{A}} \models A(u)$ iff either $u \in \mathsf{ind}(\mathcal{A})$ and $\mathcal{T},\mathcal{A} \models A(u)$, or $u = w\varrho$, for some $w$ and $\varrho$ with $\mathcal{T} \models \exists y\,\varrho(y,x) \to A(x)$;
\item $\mathcal{C}_{\mathcal{T},\mathcal{A}} \models P(u,v)$ iff one of the following holds: (\emph{i}) $u,v \in \mathsf{ind}(\mathcal{A})$ and
$\mathcal{T},\mathcal{A} \models P(u,v)$; (\emph{ii}) $u=v$ and $\mathcal{T}\models P(x,x)$; (\emph{iii}) $v = u\varrho$ and $\mathcal{T} \models \varrho(x,y) \to P(x,y)$; (\emph{iv}) $u = v\varrho^-$ and $\mathcal{T} \models \varrho(x,y) \to P(x,y)$.
\end{nitemize}

\begin{example}\label{ex:canon}
Consider the following ontologies:
\begin{align*}
& \mathcal{T}_1 \ = \ \{ \ A(x) \to \exists y\, \bigl(R(x,y) \land Q(y,x)\bigr)\ \},\\
& \mathcal{T}_2 \ = \ \{\ A(x) \to \exists y\, R(x,y), \ \ \exists x\,R(x,y) \to \exists z\, Q(z,y)\ \},\\
& \mathcal{T}_3 \ = \ \{\ A(x) \to \exists y\, R(x,y), \ \ \exists x\,R(x,y) \to \exists z\, R(y,z)\ \}.
\end{align*}
The canonical models of $(\mathcal{T}_i,\mathcal{A})$, for $\mathcal{A} = \{A(a)\}$, $i=1,2,3$, are shown in Fig.~\ref{fig:canonical}, where $\zeta(x) = \exists y\, (R(x,y) \land Q(y,x))$ and $P_\zeta$ is the corresponding normalisation predicate.
When depicting canonical models, we use \begin{tikzpicture}\node[bpoint] at (0,0) {};\end{tikzpicture} for constants and \begin{tikzpicture}\node[point] at (0,0) {};\end{tikzpicture} for labelled nulls.
\begin{figure}[t]%
\centering%
\begin{tikzpicture}\footnotesize
\node[bpoint, label=right:{$A$}, label=left:{\scriptsize $a$}] (a0) at (0,0) {};
\node at (-1.3,0.1) {\normalsize $\mathcal{C}_{\mathcal{T}_1,\mathcal{A}}$};
\node[point, label=left:{\scriptsize $aP_\zeta$}] (a1) at (0,1) {};
\draw[can,->] (a0) to node[label=left:{\textcolor{gray}{$P_\zeta$}}, label=right:{$R,Q^-$}]{} (a1);
\node[bpoint, label=right:{$A$}, label=left:{\scriptsize $a$}] (b0) at (5,0) {};
\node at (3.8,0.1) {\normalsize $\mathcal{C}_{\mathcal{T}_2,\mathcal{A}}$};
\node[point, label=left:{\scriptsize $aR$}] (b1) at (5,1) {};
\node[point, label=left:{\scriptsize $aRQ^-$}] (b2) at (5,2) {};
\draw[can,->] (b0) to node[label=right:{$R$}]{} (b1);
\draw[can,->] (b1) to node[label=right:{$Q^-$}]{} (b2);
\node[bpoint, label=right:{$A$}, label=left:{\scriptsize $a$}] (c0) at (10,0) {};
\node at (8.8,0.1) {\normalsize $\mathcal{C}_{\mathcal{T}_3,\mathcal{A}}$};
\node[point, label=left:{\scriptsize $aR$}] (c1) at (10,1) {};
\node [point, label=left:{\scriptsize $aRR$}] (c2) at (10,2) {};
\draw[can,->] (c0) to node[label=right:{$R$}]{} (c1);
\draw[can,->] (c1) to node[label=right:{$R$}]{} (c2);
\draw[dotted,thick] (c2) -- ++(0,0.4);
\end{tikzpicture}%
\caption{Canonical models in Example~\ref{ex:canon}.}\label{fig:canonical}
\end{figure}%
\end{example}

For any ontology $\mathcal{T}$ and any formula $\tau(x)$ given by~\eqref{syntax}, we denote by $\mathcal{C}_\mathcal{T}^{\smash{\tau(a)}}$ the canonical model of $(\mathcal{T} \cup \{ A(x) \to \tau(x)\} , \{ A(a) \})$, for a fresh unary predicate $A$.
We say that $\mathcal{T}$ is \emph{of depth $k$}, $1 \le k < \omega$, if (\emph{i}) there is no $\varrho$ with $\mathcal{T} \models \varrho(x,x)$, (\emph{ii}) at least one of the $\mathcal{C}_\mathcal{T}^{\smash{\tau(a)}}$ contains a word $a \varrho_1 \dots \varrho_k$, but (\emph{iii}) none of the $\mathcal{C}_\mathcal{T}^{\smash{\tau(a)}}$ has such a word of greater length.
Thus, $\mathcal{T}_1$ in Example~\ref{ex:canon} is of depth 1, $\mathcal{T}_2$ of depth 2, while $\mathcal{T}_3$ is not of any finite depth.

Ontologies of infinite depth generate infinite canonical models. However, \textsl{OWL\,2\,QL}{} has the \emph{polynomial derivation depth property} (PDDP) in the sense that there is a polynomial $p$ such that, for any OMQ ${\ensuremath{\boldsymbol{Q}}}(\avec{x}) = (\mathcal{T}, {\boldsymbol q}(\avec{x}))$, data instance $\mathcal{A}$ and $\avec{a}$ in $\mathsf{ind}(\mathcal{A})$, we have $\mathcal{T},\mathcal{A} \models {\boldsymbol q}(\avec{a})$ iff ${\boldsymbol q}(\avec{a})$ holds in the sub-model of $\mathcal{C}_{\mathcal{T},\mathcal{A}}$ whose domain consists of words of the form $a \varrho_1 \dots \varrho_n$ with $n \le p(|{\ensuremath{\boldsymbol{Q}}}|)$~\cite{DBLP:conf/pods/JohnsonK82,DBLP:journals/ws/CaliGL12}.
(In general, the bounded derivation depth property of an ontology language is a necessary and sufficient condition for FO-rewritability~\cite{DBLP:journals/ai/GottlobKKPSZ14}.)

We call a set $\Omega_{{\ensuremath{\boldsymbol{Q}}}}$ of words of the form $w=\varrho_1 \dots \varrho_n$ \emph{fundamental for ${\ensuremath{\boldsymbol{Q}}}$} if, for any $\mathcal{A}$ and $\avec{a}$ in $\mathsf{ind}(\mathcal{A})$, we have $\mathcal{T},\mathcal{A} \models {\boldsymbol q}(\avec{a})$ iff ${\boldsymbol q}(\avec{a})$ holds in the sub-model of $\mathcal{C}_{\mathcal{T},\mathcal{A}}$ with the domain $\{aw \mid a \in \mathsf{ind}(\mathcal{A}),\ w \in \Omega_{{\ensuremath{\boldsymbol{Q}}}}\}$.
We say that a class $\mathcal{Q}$ of OMQs has the \emph{polynomial fundamental set property} (PFSP) if there is a polynomial $p$ such that every ${\ensuremath{\boldsymbol{Q}}} \in \mathcal{Q}$ has a fundamental set~$\Omega_{{\ensuremath{\boldsymbol{Q}}}}$ with $|\Omega_{{\ensuremath{\boldsymbol{Q}}}}| \le p(|{\ensuremath{\boldsymbol{Q}}}|)$.
The class of \emph{all} OMQs (even with ontologies of finite depth and tree-shaped CQs) does not have the PFSP~\cite{DBLP:conf/icalp/KikotKPZ12}. On the other hand, it should be clear that the class of OMQs with ontologies of bounded depth does enjoy the PFSP.
A less trivial example is given by the following theorem, which is an immediate consequence of Theorem~\ref{thm:noroles} below:

\begin{theorem}\label{role-inc}
The class of OMQs whose ontologies do not contain axioms of the form $\varrho(x,y) \to \varrho'(x,y)$ \textup{(}and syntactic sugar~\eqref{eq:sugar}\textup{)} enjoys the PFSP.
\end{theorem}

We are now in a position to define the tree-witness PE-rewriting of \textsl{OWL\,2\,QL}{} OMQs.
Suppose we are given an OMQ ${\ensuremath{\boldsymbol{Q}}}(\avec{x}) = (\mathcal{T},{\boldsymbol q}(\avec{x}))$ with ${\boldsymbol q}(\avec{x}) = \exists \avec{y}\, \varphi(\avec{x}, \avec{y})$. For a pair $\t = (\mathfrak{t}_\mathsf{r}, \mathfrak{t}_\mathsf{i})$ of disjoint sets of variables in ${\boldsymbol q}$, with $\mathfrak{t}_\mathsf{i}\subseteq \avec{y}$\footnote{We (ab)use set-theoretic notation for lists and, for example, write $\mathfrak{t}_\mathsf{i}\subseteq \avec{y}$ to say that every element of $\mathfrak{t}_\mathsf{i}$ is an element of $\avec{y}$.} and $\mathfrak{t}_\mathsf{i} \ne\emptyset$ ($\mathfrak{t}_\mathsf{r}$ can be empty), set
\begin{equation*}
{\boldsymbol q}_\t \ = \ \bigl\{\, S(\avec{z}) \in {\boldsymbol q} \mid \avec{z} \subseteq \mathfrak{t}_\mathsf{r}\cup \mathfrak{t}_\mathsf{i} \text{ and } \avec{z}\not\subseteq \mathfrak{t}_\mathsf{r}\,\bigr\}.
\end{equation*}
If ${\boldsymbol q}_\t$ is a minimal subset of ${\boldsymbol q}$ for which there is a homomorphism $h \colon {\boldsymbol q}_\t \to \mathcal{C}_\mathcal{T}^{\smash{\tau(a)}}$ such that $\mathfrak{t}_\mathsf{r} = h^{-1}(a)$ and ${\boldsymbol q}_\t$ contains every atom of ${\boldsymbol q}$ with at least one variable from $\mathfrak{t}_\mathsf{i}$, then we call $\t = (\mathfrak{t}_\mathsf{r}, \mathfrak{t}_\mathsf{i})$ a \emph{tree witness for ${\ensuremath{\boldsymbol{Q}}}$ generated by $\tau$} (and \emph{induced by
$h$}). \nObserve that if $\\mathfrak{t}_\\mathsf{r} = \\emptyset$ then ${\\boldsymbol q}_\\t$ is a connected component of ${\\boldsymbol q}$; in this case we call $\\t$ \\emph{detached}.\nNote also that the same tree witness $\\t = (\\mathfrak{t}_\\mathsf{r}, \\mathfrak{t}_\\mathsf{i})$ can be generated by different $\\tau$.\nNow, we set \n\\begin{equation}\\label{tw-formula}\n\\mathsf{tw}_{\\t}(\\mathfrak{t}_\\mathsf{r}) ~=~ \\exists z\\,\\bigl(\\bigwedge_{x \\in \\mathfrak{t}_\\mathsf{r}} (x=z) \\ \\ \\land \\hspace*{-1mm}\\bigvee_{\\t \\text{ generated by } \\tau} \\hspace*{-1.5em}\\tau(z)\\bigr).\n\\end{equation}\nThe variables in $\\mathfrak{t}_\\mathsf{i}$ do not occur in $\\mathsf{tw}_{\\t}$ and are called \\emph{internal}. The variables in $\\mathfrak{t}_\\mathsf{r}$, if any, are called \\emph{root variables}. Note that no answer variable in ${\\boldsymbol q}(\\avec{x})$ can be internal. The length $|\\mathsf{tw}_\\t|$ of $\\mathsf{tw}_\\t$ is $O(|{\\ensuremath{\\boldsymbol{Q}}}|)$. Tree witnesses $\\t$ and $\\t'$ are \\emph{conflicting} if ${\\boldsymbol q}_\\t \\cap {\\boldsymbol q}_{\\t'} \\ne \\emptyset$. Denote by $\\twset$ the set of tree witnesses for ${\\ensuremath{\\boldsymbol{Q}}}(\\avec{x})$. A subset $\\Theta\\subseteq \\twset$ is \\emph{independent} if no pair of distinct tree witnesses in it is conflicting. 
Let ${\\boldsymbol q}_\\Theta = \\bigcup_{\\t\\in\\Theta} {\\boldsymbol q}_\\t$.\nThe following PE-formula is called the \\emph{tree-witness rewriting of ${\\ensuremath{\\boldsymbol{Q}}}(\\avec{x})$ over complete data instances}: \n\\begin{equation}\\label{rewriting0}\n\\q_{\\mathsf{tw}}(\\avec{x}) \\ \\ = \\hspace*{-0.3em} \\bigvee_{\\Theta \\subseteq \\twset \\text{ independent}} \\hspace*{-0.2em} \\exists\\avec{y}\\ \\bigl(\\hspace*{-1mm}\n\\bigwedge_{S(\\avec{z}) \\in {\\boldsymbol q} \\setminus {\\boldsymbol q}_\\Theta}\\hspace*{-1mm} S(\\avec{z})\n \\ \\land \\ \\bigwedge_{\\t\\in\\Theta} \\mathsf{tw}_\\t(\\mathfrak{t}_\\mathsf{r}) \\,\\bigr).\n\\end{equation}\n\n\\begin{remark}\\label{ignored}\nAs the normalisation predicates $P_\\zeta$ cannot occur in data instances, we can omit from~\\eqref{tw-formula} all the disjuncts with $P_\\zeta$. For the same reason, the tree witnesses generated only by concepts with normalisation predicates will be ignored in the sequel. \n\\end{remark}\n\n\n\\begin{example}\\label{ex:conf}\nConsider the OMQ ${\\ensuremath{\\boldsymbol{Q}}}(x_1,x_2) = (\\mathcal{T},{\\boldsymbol q}(x_1,x_2))$ with\n\\begin{align*}\n\\mathcal{T} & \\ = \\ \\bigl\\{A_1(x) \\to \\underbrace{\\exists y\\, \\bigl(R_1(x,y) \\land Q(x,y)\\bigr)}_{\\zeta_1(x)},\\\nA_2(x) \\to \\underbrace{\\exists y\\, \\bigl(R_2(x,y) \\land Q(y,x)\\bigr)}_{\\zeta_2(x)}\\bigr\\},\\\\\n{\\boldsymbol q}(x_1, x_2) & \\ = \\ \\exists y_1,y_2\\, \\bigl(R_1(x_1,y_1)\\land Q(y_2,y_1)\\land R_2(x_2,y_2)\\bigr).\n\\end{align*}\nThe CQ ${\\boldsymbol q}$ is shown in Fig.~\\ref{fig:conflicting-tws} alongside $\\mathcal{C}_{\\mathcal{T}}^{\\smash{A_1(a)}}$ and $\\mathcal{C}_{\\mathcal{T}}^{\\smash{A_2(a)}}$. 
When depicting CQs, we use \\begin{tikzpicture}\\node[bpoint] at (0,0) {};\\end{tikzpicture} for answer variables and \\begin{tikzpicture}\\node[point] at (0,0) {};\\end{tikzpicture} for existentially quantified variables.\n\\begin{figure}[t]%\n\\centering%\n\\begin{tikzpicture}\\footnotesize\n\\coordinate (c1) at (0,0);\n\\coordinate (c2) at (1.5,1);\n\\coordinate (c3) at (3,0);\n\\coordinate (c4) at (4.5,1);\n\\node [fill=gray!5,thin,rounded corners,inner xsep=0mm,inner ysep=7mm,fit=(c2) (c3) (c4)]{};\n\\node [draw,fill=gray!70,thin,fill opacity=0.5,rounded corners,inner xsep=0mm,inner ysep=5mm,fit=(c1) (c2) (c3)]{};\n\\node [draw,thin,rounded corners,inner xsep=0mm,inner ysep=7mm,fit=(c2) (c3) (c4)]{};\n\\draw[hom] (-2,1) -- (6.5,1);\n\\draw[hom] (-2,0) -- (6.5,0);\n\\node at (0.5,-0.25) {\\large $\\t^1$};\n\\node at (4,-0.3) {\\large $\\t^2$};\n\\node[bpoint, label=below left:{$x_1$}] (t1) at (c1) {};\n\\node[point, label=above right:{$y_1$}] (t2) at (c2) {};\n\\node[point, label=below left:{$y_2$}] (t3) at (c3) {};\n\\node[bpoint, label=above right:{$x_2$}] (t4) at (c4) {};\n\\draw[->,query] (t1) to node [above, sloped]{$R_1$} (t2);\n\\draw[->,query] (t3) to node [above, sloped]{$Q$} (t2);\n\\draw[->,query] (t4) to node [pos=0.4, below, sloped]{$R_2$} (t3);\n\\node[bpoint, label=above:{$A_2$}, label=right:{$a$}] (cd1) at (6.5,1) {};\n\\node[point,label=right:{\\scriptsize $aP_{\\zeta_2}$}] (cd2) at (6.5,0) {};\n\\draw[can,->] (cd1) to node [left]{$R_2,Q^-$} node [right]{\\scriptsize\\textcolor{gray}{$P_{\\zeta_2}$}} (cd2);\n\\node at (8,0.5) {\\normalsize $\\mathcal{C}_{\\mathcal{T}}^{\\smash{A_2(a)}}$};\n\\node[bpoint, label=below:{$A_1$}, label=left:{$a$}] (cc1) at (-2,0) {};\n\\node[point, label=left:{\\scriptsize $aP_{\\zeta_1}$}] (cc2) at (-2,1) {};\n\\draw[can,->] (cc1) to node [right]{$R_1,Q$} node [left]{\\scriptsize\\textcolor{gray}{$P_{\\zeta_1}$}} (cc2);\n\\node at (-3.5,0.5) {\\normalsize 
$\\mathcal{C}_{\\mathcal{T}}^{\\smash{A_1(a)}}$};\n\\end{tikzpicture}%\n\\caption{Tree witnesses in Example~\\ref{ex:conf}.}\\label{fig:conflicting-tws}\n\\end{figure}\nThere are two tree witnesses, $\\t^1$ and $\\t^2$, for ${\\ensuremath{\\boldsymbol{Q}}}$ with\n\\begin{equation*}\n{\\boldsymbol q}_{\\t^1} = \\bigl\\{\\,R_1(x_1,y_1), Q(y_2,y_1)\\,\\bigr\\} \\quad\\text{ and }\\quad {\\boldsymbol q}_{\\t^2} = \\bigl\\{\\, Q(y_2,y_1), R_2(x_2,y_2)\\,\\bigr\\}\n\\end{equation*}\nshown in Fig.~\\ref{fig:conflicting-tws} by the dark and light shading, respectively. \nThe tree witness $\\t^1 = (\\mathfrak{t}_\\mathsf{r}^1, \\mathfrak{t}_\\mathsf{i}^1)$\nwith $\\mathfrak{t}_\\mathsf{r}^1 = \\{x_1,y_2\\}$ and $\\mathfrak{t}_\\mathsf{i}^1 =\\{y_1\\}$ is generated by $A_1(x)$, which gives\n\\begin{equation*}\n\\mathsf{tw}_{\\t^1}(x_1,y_2) = \\exists z \\, \\bigl(A_1(z)\\land(x_1 = z) \\land (y_2 =z)\\bigr).\n\\end{equation*}\n(Recall that although $\\t^1$ is also generated by $\\exists y\\,P_{\\zeta_1}(z,y)$, we do not include it in the disjunction in $\\mathsf{tw}_{\\t^1}$ because $P_{\\zeta_1}$ cannot occur in data instances.)\nSymmetrically, the tree witness $\\t^2$ gives\n\\begin{equation*}\n\\mathsf{tw}_{\\t^2}(x_2,y_1) = \\exists z \\, \\bigl(A_2(z)\\land(x_2 = z) \\land (y_1 =z)\\bigr).\n\\end{equation*}\nAs $\\t^1$ and $\\t^2$ are conflicting, $\\twset$ contains three independent subsets: $\\emptyset$, $\\{\\t^1\\}$ and $\\{\\t^2\\}$. 
Thus, we obtain the following tree-witness rewriting ${\\boldsymbol q}_\\mathsf{tw}(x_1,x_2)$ of ${\\ensuremath{\\boldsymbol{Q}}}$ over complete data instances:\n\\begin{equation*}\n\\exists y_1,y_2 \\, \\big[\\big(R_1(x_1,y_1) \\land Q(y_2,y_1) \\land R_2(x_2,y_2)\\big) \\lor{}\n\\big(\\mathsf{tw}_{\\t^1}(x_1,y_2) \\land R_2(x_2,y_2)\\big) \\lor \\big(R_1(x_1,y_1) \\land \\mathsf{tw}_{\\t^2}(x_2,y_1) \\big) \\big].\n\\end{equation*}\n\\end{example}\n\n\n\\begin{theorem}[\\cite{DBLP:conf\/kr\/KikotKZ12}]\nFor any OMQ ${\\ensuremath{\\boldsymbol{Q}}}(\\avec{x}) = (\\mathcal{T},{\\boldsymbol q}(\\avec{x}))$, any data instance $\\mathcal{A}$ that is complete for $\\mathcal{T}$, and any tuple $\\avec{a}$ from $\\mathsf{ind}(\\mathcal{A})$, we have $\\mathcal{T},\\mathcal{A} \\models {\\boldsymbol q}(\\avec{a})$ iff $\\mathcal{I}_\\mathcal{A} \\models \\q_{\\mathsf{tw}}(\\avec{a})$.\nIn other words, ${\\boldsymbol q}_\\mathsf{tw}$ is a rewriting of ${\\ensuremath{\\boldsymbol{Q}}}(\\avec{x})$ over complete data instances.\n\\end{theorem}\n\nIntuitively, for every homomorphism $h \\colon {\\boldsymbol q}(\\avec{a}) \\to \\mathcal{C}_{\\mathcal{T},\\mathcal{A}}$, the sub-CQs of ${\\boldsymbol q}$ mapped by~$h$ to sub-models of the form $\\mathcal{C}^{\\smash{\\tau(a)}}_\\mathcal{T}$ define an independent set $\\Theta$ of tree witnesses; see Fig.~\\ref{fig:twr}. 
Conversely, if $\\Theta$ is such a set, then the homomorphisms corresponding to the tree witnesses in $\\Theta$ can be pieced together into a homomorphism from ${\\boldsymbol q}(\\avec{a})$ to $\\mathcal{C}_{\\mathcal{T},\\mathcal{A}}$---provided that the $S(\\avec{z})$ from ${\\boldsymbol q} \\setminus {\\boldsymbol q}_\\Theta$ and the $\\mathsf{tw}_\\t(\\mathfrak{t}_\\mathsf{r})$ for $\\t \\in \\Theta$ hold in $\\mathcal{I}_\\mathcal{A}$.\n\n\\begin{figure}[b]%\n\\centering%\n\\begin{tikzpicture}[yscale=0.9]\n\\draw (1.3,1.3) ellipse (2.5 and 2.8);\n\\draw[thin,fill=gray!40] (0,0) circle (0.7);\n\\node[point] (a0) at (0,0) {};\n\\node[bpoint] (a1) at (0:0.7) {};\n\\node[point] (a2) at (120:0.7) {};\n\\node[bpoint] (a3) at (-120:0.7) {};\n\\draw[query,->] (a1) -- (a0);\n\\draw[query,->] (a2) -- (a0);\n\\draw[query,<-] (a3) -- (a0);\n\\draw[thin,fill=gray!10] (2.2,2.25) ellipse (0.7 and 1.4);\n\\node[point] (a4) at (2.2,2) {};\n\\node[point] (a5) at (1.9,1) {};\n\\node[bpoint] (a6) at (2.5,1) {};\n\\node[point] (a7) at (2.5,3) {};\n\\node[point] (a8) at (1.9,3) {};\n\\draw[query,->] (a5) -- (a4);\n\\draw[query,<-] (a6) -- (a4);\n\\draw[query,->] (a7) -- (a4);\n\\draw[query,<-] (a8) -- (a4);\n\\node[bpoint,gray!40] (a9) at (0,1.5) {};\n\\node[point,gray!40,fill=white] (a10) at (0.9,1.5) {};\n\\draw[query,->,gray!40] (a9) -- (a10);\n\\draw[query,->,gray!40,out=-45,in=180] (a10) to (a5);\n\\draw[query,<-,gray!40] (a9) -- (a2);\n\\draw[query,<-,gray!40,out=0,in=-135] (a1) to (a6);\n\\node at (0.25,2.5) {\\large ${\\boldsymbol q}$};\n\\node at (0.5,-1) {\\small ${\\boldsymbol q}_{\\t^1}$};\n\\node at (1.3,3.5) {\\small ${\\boldsymbol q}_{\\t^2}$};\n\\begin{scope}[shift={(-1,0)}]\n\\draw (10,0.7) ellipse (3.5 and 1.2);\n\\draw[fill=gray!40,rounded corners=2mm,ultra thin] (8,-1.3) -- (8.7,0) -- (9.3,0) -- (10,-1.3); \n\\node[bpoint] (b0) at (9,0) {};\n\\node[point] (b1) at (9,-1) {};\n\\draw[can,->] (b0) -- (b1);\n\\draw[fill=gray!10,rounded corners=2mm,ultra thin] 
(8.5,3.8) -- (9.7,1.5) -- (10.3,1.5) -- (11.5,3.8); \n\\node[bpoint] (b2) at (10,1.5) {};\n\\node[point] (b3) at (10,2.5) {};\n\\draw[can,->] (b2) -- (b3);\n\\node[point] (b4) at (9.7,3.5) {};\n\\node[point] (b5) at (10.3,3.5) {};\n\\draw[can,->] (b3) -- (b4);\n\\draw[can,->] (b3) -- (b5);\n\\draw[hom,out=-30,in=180] (a0) to (b1);\n\\draw[hom] (a1) to (b0);\n\\draw[hom,out=0,in=175] (a2) to (b0);\n\\draw[hom,out=0,in=-175] (a3) to (b0);\n\\draw[hom] (a4) to (b3);\n\\draw[hom] (a7) to (b4);\n\\draw[hom,out=30,in=150,looseness=0.3] (a8) to (b5);\n\\draw[hom,out=-30,in=180,looseness=0.3] (a5) to (b2);\n\\draw[hom] (a6) to (b2);\n\\draw[can,gray!40] (b0) to (b2);\n\\node[bpoint,gray!40] (b6) at (11,1) {};\n\\node[bpoint,gray!40] (b7) at (12,0.5) {};\n\\node[bpoint,gray!40] (b8) at (10.5,0) {};\n\\node[bpoint,gray!40] (b9) at (8,0.8) {};\n\\draw[can,gray!40] (b6) to (b7);\n\\draw[can,gray!40] (b7) to (b8);\n\\draw[can,gray!40] (b2) to (b8);\n\\draw[can,gray!40] (b8) to (b6);\n\\draw[can,gray!40] (b0) to (b9);\n\\draw[can,gray!40] (b0) to (b8);\n\\node at (12.5,-1) {\\large $\\mathcal{C}_{\\mathcal{T},\\mathcal{A}}$}; \n\\node at (10.5,-1) {\\small $\\mathcal{C}_\\mathcal{T}^{\\tau_1(a_1)}$}; \n\\node at (12,3.5) {\\small $\\mathcal{C}_\\mathcal{T}^{\\tau_2(a_2)}$}; \n\\node at (6,2.5) {\\normalsize $h$};\n\\node at (6,-0.75) {\\normalsize $h$};\n\\end{scope}\n\\end{tikzpicture}%\n\\caption{Tree-witness rewriting.}\\label{fig:twr}\n\\end{figure}\n\n\nThe \\emph{size} of the tree-witness \\text{PE}-rewriting $\\q_{\\mathsf{tw}}$ depends on the number of tree witnesses in the given OMQ ${\\ensuremath{\\boldsymbol{Q}}} = (\\mathcal{T},{\\boldsymbol q})$ and, more importantly, on the cardinality of $\\twset$ as we have $|\\q_{\\mathsf{tw}}| = O(2^{|\\twset|} \\cdot |{\\ensuremath{\\boldsymbol{Q}}}|^2)$ with $|\\twset| \\le 3^{|{\\boldsymbol q}|}$. 
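The exponential factor $2^{|\\twset|}$ comes from the disjunction over independent subsets in~\\eqref{rewriting0}. As an illustrative aside (a brute-force Python sketch, not part of the formal development; the atom strings and the names \texttt{t1}, \texttt{t2} are made up), the enumeration of independent subsets can be spelled out directly, here for the two conflicting tree witnesses of Example~\\ref{ex:conf}:

```python
from itertools import combinations

def independent_subsets(hyperedges):
    """Enumerate the independent subsets of a family of hyperedges,
    i.e. the subfamilies whose atom sets are pairwise disjoint.
    `hyperedges` maps a tree-witness name to its set q_t of atoms."""
    names = list(hyperedges)
    found = []
    for r in range(len(names) + 1):
        for subset in combinations(names, r):
            if all(hyperedges[a].isdisjoint(hyperedges[b])
                   for a, b in combinations(subset, 2)):
                found.append(set(subset))
    return found

# The two conflicting tree witnesses of Example ex:conf share Q(y2,y1),
# so only the empty set and the two singletons are independent.
tws = {
    "t1": {"R1(x1,y1)", "Q(y2,y1)"},
    "t2": {"Q(y2,y1)", "R2(x2,y2)"},
}
print(independent_subsets(tws))  # [set(), {'t1'}, {'t2'}]
```

Each independent subset contributes one disjunct to $\\q_{\\mathsf{tw}}$, so when no two tree witnesses conflict the number of disjuncts can reach $2^{|\\twset|}$.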
\n\n\\begin{theorem}\\label{thm:noroles}\nOMQs ${\\ensuremath{\\boldsymbol{Q}}} = (\\mathcal{T},{\\boldsymbol q})$, in which $\\mathcal{T}$ does not contain axioms of the form $\\varrho(x,y) \\to \\varrho'(x,y)$ \\textup{(}and syntactic sugar~\\eqref{eq:sugar}\\textup{)}, have at most $3|{\\boldsymbol q}|$ tree witnesses.\n\\end{theorem}\n\\begin{proof}\nAs observed above, there can be only one detached tree witness for each connected component of ${\\boldsymbol q}$. As $\\mathcal{T}$ has no axioms of the form $\\varrho(x,y) \\to \\varrho'(x,y)$, any two points in $\\mathcal{C}_\\mathcal{T}^{\\smash{\\tau(a)}}$ can be $R$-related for at most one $R$, and so no point can have more than one $R$-successor, for any $R$. It follows that, for every atom $P(x,y)$ in ${\\boldsymbol q}$, there can be at most one tree witness $\\t = (\\mathfrak{t}_\\mathsf{r}, \\mathfrak{t}_\\mathsf{i})$ with $P(x,y)\\in{\\boldsymbol q}_\\t$, $x\\in \\mathfrak{t}_\\mathsf{r}$ and $y \\in \\mathfrak{t}_\\mathsf{i}$ ($P^-(y,x)$ may give another tree witness). Hence there are at most $2|{\\boldsymbol q}|$ tree witnesses with a non-empty root part and at most $|{\\boldsymbol q}|$ detached ones, which gives the bound $3|{\\boldsymbol q}|$. 
\n\\end{proof}\n\nOMQs with arbitrary axioms can have \\emph{exponentially many} tree witnesses:\n\n\\begin{example}\\label{ex:exp-tws}\nConsider the OMQ ${\\ensuremath{\\boldsymbol{Q}}}_n = (\\mathcal{T},{\\boldsymbol q}_n(\\avec{x}^0))$, where \n\\begin{align*}\n\\mathcal{T} \\ & = \\ \\big\\{ A(x) \\to \\exists y\\, \\big( R(y,x) \\land \\exists z\\, (R(y,z) \\land B(z))\\big)\\big\\},\\\\\n{\\boldsymbol q}_n(\\avec{x}^0) \\ & = \\ \\exists y,\\avec{y}^1,\\avec{x}^1,\\avec{y}^2 \\, \\bigl( B(y) \\ \\ \\land \\bigwedge_{1\\leq k\\leq n} \\hspace*{-0.5em}\\big(R(y_k^1,y) \\land R(y_k^1, x_k^1) \\land R(y_k^2,x_k^1) \\land R(y_k^2,x^0_k)\\big)\\bigr)\n\\end{align*}\nand $\\avec{x}^i$ and $\\avec{y}^i$ denote vectors of $n$ variables $x^i_k$ and $y^i_k$, for $1\\leq k \\leq n$, respectively.\nThe CQ is shown in Fig.~\\ref{fig:exp-tws} alongside the canonical model $\\mathcal{C}_\\mathcal{T}^{\\smash{A(a)}}$.\n\\begin{figure}[t]%\n\\centering%\n\\begin{tikzpicture}\\footnotesize\n\\node at (-1,0) {\\normalsize ${\\boldsymbol q}_n(\\avec{x}^0)$};\n\\node[point, label=right:{$B$}, label=below:{$y$}] (a0) at (2,2) {};\n\\node[point] (a1) at (-1,1) {};\n\\node[point, label=below:{\\normalsize $x^1_1$}] (a2) at (0,1) {};\n\\node[point] (a3) at (1,1) {};\n\\node[bpoint, label=below:{\\normalsize $x^0_1$}] (a4) at (1,0) {};\n\\node[point] (b1) at (3,1) {};\n\\node[point, label=below:{\\normalsize $x^1_n$}] (b2) at (4,1) {};\n\\node[point] (b3) at (5,1) {};\n\\node[bpoint, label=below:{\\normalsize $x^0_n$}] (b4) at (5,0) {};\n\\draw[query,->] (a1) to (a0);\n\\draw[query,->] (a1) to (a2);\n\\draw[query,->] (a3) to (a2);\n\\draw[query,->] (a3) to (a4);\n\\draw[query,->] (b1) to (a0);\n\\draw[query,->] (b1) to (b2);\n\\draw[query,->] (b3) to (b2);\n\\draw[query,->] (b3) to (b4);\n\\node at (2,1) {\\Large $\\dots$};\n\\node[bpoint, label=left:{$A$}, label=below:{\\scriptsize $a$}] (c0) at (8,0) {};\n\\node at (7,1.8) {\\normalsize 
$\\mathcal{C}_{\\mathcal{T}}^{\\smash{A(a)}}$};\n\\node[point] (c1) at (8,1) {};\n\\node[point, label=left:{$B$}] (c2) at (8,2) {};\n\\draw[can,->] (c0) to node[label=left:{$R^-$}]{} (c1);\n\\draw[can,->] (c1) to node[label=left:{$R$}]{} (c2);\n\\draw[hom] (11.5,0) -- (c0);\n\\draw[hom] (11.5,1) -- (c1);\n\\draw[hom] (11.5,2) -- (c2);\n\\node[point,label=left:{$B$},label=above:{$y$}] (d0) at (9.5,2) {};\n\\node[point] (d1) at (9.5,1) {};\n\\node[point] (d2) at (10,2) {};\n\\node[point] (d3) at (10,1) {};\n\\node[bpoint,label=below:{\\normalsize $x_i^0$}] (d4) at (10,0) {};\n\\draw[query,->] (d1) to (d0);\n\\draw[query,->] (d1) to (d2);\n\\draw[query,->] (d3) to (d2);\n\\draw[query,->] (d3) to (d4);\n\\node[point,label=right:{$B$},label=above:{$y$}] (e0) at (11.5,2) {};\n\\node[point] (e1) at (11.5,1) {};\n\\node[point,label=below:{\\normalsize $x_i^1$}] (e2) at (11.5,0) {};\n\\draw[query,->] (e1) to (e0);\n\\draw[query,->] (e1) to (e2);\n\\end{tikzpicture}%\n\\caption{The query ${\\boldsymbol q}_n(\\avec{x}^0)$ (all edges are labelled by $R$), the canonical model $\\mathcal{C}_\\mathcal{T}^{\\smash{A(a)}}$ (the normalisation predicates are not shown) and two ways of mapping a branch of the query to the canonical model in Example~\\ref{ex:exp-tws}.}\\label{fig:exp-tws}\n\\end{figure}\nOMQ ${\\ensuremath{\\boldsymbol{Q}}}_n$ has at least $2^n$ tree witnesses: for any $\\avec{\\alpha} = (\\alpha_1, \\dots, \\alpha_n) \\in \\{0,1\\}^n$, there is a tree witness $(\\mathfrak{t}_\\mathsf{r}^{\\avec{\\alpha}},\\mathfrak{t}_\\mathsf{i}^{\\avec{\\alpha}})$ with $\\mathfrak{t}_\\mathsf{r}^{\\avec{\\alpha}} = \\{x_k^{\\alpha_k} \\mid 1 \\le k \\le n\\}$. 
\nNote, however, that the tree-witness rewriting of ${\\ensuremath{\\boldsymbol{Q}}}_n$ can be equivalently transformed into the following \\emph{polynomial-size} PE-rewriting: \n\\begin{equation*}\n{\\boldsymbol q}_n(\\avec{x}^0) \\ \\ \\lor \\ \\ \\exists z \\, \\big[A(z) \\land \\bigwedge_{1 \\leq i\\leq n} \\big((x^0_i=z) \\lor \\exists y \\, (R(y,x^0_i) \\land R(y,z))\\big)\\big].\n\\end{equation*}\n\\end{example}\n\nIf any two tree witnesses for an OMQ ${\\ensuremath{\\boldsymbol{Q}}}$ are compatible in the sense that either they are non-conflicting or one is included in the other, then $\\q_{\\mathsf{tw}}$ can be equivalently transformed to the \\text{PE}-rewriting\n\\begin{equation*}\n\\exists \\avec{y}\\,\\bigwedge_{S(\\avec{z})\\in {\\boldsymbol q}} \\bigl(\\, S(\\avec{z}) \\ \\ \\lor \\bigvee_{\\t\\in\\twset \\text{ with } S(\\avec{z})\\in{\\boldsymbol q}_{\\t}} \\hspace*{-2em}\\mathsf{tw}_\\t(\\mathfrak{t}_\\mathsf{r})\\,\\bigr) \\qquad \\text{of size $O(|\\twset| \\cdot |{\\ensuremath{\\boldsymbol{Q}}}|^2 )$.}\n\\end{equation*}\nWe now analyse transformations of this kind in the setting of Boolean functions.\n\n\n\n\\section{OMQ Rewritings as Boolean Functions}\\label{sec:Bfunctions}\n\nFor any OMQ ${\\ensuremath{\\boldsymbol{Q}}}(\\avec{x})=(\\mathcal{T},{\\boldsymbol q}(\\avec{x}))$, we define Boolean functions $f^\\triangledown_{\\omq}$ and $f^\\vartriangle_{\\omq}$ such that:\n\\begin{nitemize}\\itemsep=6pt\n\\item[--] if $f^\\triangledown_{\\omq}$ is computed by a Boolean formula (monotone formula or monotone circuit) $\\Phi$, then ${\\ensuremath{\\boldsymbol{Q}}}$ has an FO- (respectively, PE- or NDL-) rewriting of size $O(|\\Phi| \\cdot |{\\ensuremath{\\boldsymbol{Q}}}|)$;\n\n\\item[--] if ${\\boldsymbol q}'$ is an FO- (PE- or NDL-) rewriting\nof ${\\ensuremath{\\boldsymbol{Q}}}$, then $f^\\vartriangle_{\\omq}$ is computed by a Boolean formula (respectively, monotone formula or monotone circuit) of size $O(|{\\boldsymbol 
q}'|)$.\n\\end{nitemize}\nWe remind the reader (for details see, e.g.,~\\cite{Arora&Barak09,Jukna12}) that an \\emph{$n$-ary Boolean function}, for $n\\ge 1$, is any function from $\\{0,1\\}^n$ to $\\{0,1\\}$. A Boolean function $f$ is \\emph{monotone} if $f(\\avec{\\alpha}) \\leq f(\\avec{\\beta})$ for all $\\avec{\\alpha}\\leq \\avec{\\beta}$, where $\\leq$ is the component-wise $\\leq$ on vectors of $\\{0,1\\}$.\nA \\emph{Boolean circuit}, $\\boldsymbol{C}$, is a directed acyclic graph whose vertices are called \\emph{gates}. Each gate is labelled with a propositional variable, a constant $0$ or~$1$, or with $\\textsc{not}$, $\\textsc{and}$ or $\\textsc{or}$. Gates labelled with variables and constants have in-degree~$0$ and are called \\emph{inputs}; $\\textsc{not}$-gates have in-degree~$1$, while $\\textsc{and}$- and $\\textsc{or}$-gates have in-degree~$2$ (unless otherwise specified).\nOne of the gates in $\\boldsymbol{C}$ is distinguished as the \\emph{output gate}. Given an assignment $\\avec{\\alpha} \\in \\{0,1\\}^n$ to the variables, we compute the value of each gate in $\\boldsymbol{C}$ under $\\avec{\\alpha}$ as usual in Boolean logic. The \\emph{output $\\boldsymbol{C}(\\avec{\\alpha})$ of $\\boldsymbol{C}$ on} $\\avec{\\alpha} \\in \\{0,1\\}^n$ is the value of the output gate. We usually assume that the gates $g_1,\\dots,g_m$ of $\\boldsymbol{C}$ are ordered in such a way that $g_1,\\dots,g_n$ are input gates; each gate $g_i$, for $i \\geq n$, gets inputs from gates $g_{j_1},\\dots,g_{j_k}$ with $j_1,\\dots,j_k < i$, and $g_m$ is the output gate. We say that $\\boldsymbol{C}$ \\emph{computes} an $n$-ary Boolean function $f$ if \\mbox{$ \\boldsymbol{C}(\\avec{\\alpha})=f(\\avec{\\alpha})$} for all $\\avec{\\alpha} \\in \\{0,1\\}^n$. The \\emph{size} $|\\boldsymbol{C}|$ of $\\boldsymbol{C}$ is the number of gates in $\\boldsymbol{C}$. A circuit is \\emph{monotone} if it contains only inputs, $\\textsc{and}$- and $\\textsc{or}$-gates. 
\\emph{Boolean formulas} can be thought of as circuits in which every logic gate has at most one outgoing edge.\nAny monotone circuit computes a monotone function, and any monotone Boolean function can be computed by a monotone circuit.\n\n\n\n\n\\subsection{Hypergraph Functions}\\label{hyper-functions}\n\nLet $H = (V,E)$ be a hypergraph with \\emph{vertices} $v \\in V$ and \\emph{hyperedges} $e \\in E\\subseteq 2^V$.\nA subset $E' \\subseteq E$ is said to be \\emph{independent} if $e \\cap e' = \\emptyset$, for any distinct $e,e' \\in E'$. The set of vertices that occur in the hyperedges of $E'$ is denoted by $V_{E'}$.\nFor each vertex $v \\in V$ and each hyperedge $e \\in E$, we take propositional variables $p_v$ and $p_e$, respectively. The \\emph{hypergraph function $f_H$ for $H$} is given by the monotone Boolean formula\n\\begin{equation}\\label{hyper}\nf_H \\ \\ \\ = \\ \\bigvee_{\nE' \\text{ independent}}\n\\Big( \\bigwedge_{v \\in V \\setminus V_{E'}}\n\\hspace*{-0.5em} p_v \\ \\land \\ \\bigwedge_{e \\in E'} p_e\n\\Big).\n\\end{equation}\n\n\nThe tree-witness \\text{PE}-rewriting $\\q_{\\mathsf{tw}}$ of any OMQ ${\\ensuremath{\\boldsymbol{Q}}}(\\avec{x})=(\\mathcal{T},{\\boldsymbol q}(\\avec{x}))$ defines a hypergraph whose vertices are the atoms of ${\\boldsymbol q}$ and hyperedges are the sets ${\\boldsymbol q}_\\t$, where $\\t$ is a tree witness for ${\\ensuremath{\\boldsymbol{Q}}}$. We denote this hypergraph by $\\HG{{\\ensuremath{\\boldsymbol{Q}}}}$ and call $f_{\\HG{{\\ensuremath{\\boldsymbol{Q}}}}}$ the \\emph{tree-witness hypergraph function for ${\\ensuremath{\\boldsymbol{Q}}}$}. To simplify notation, we write $\\twfn$ instead of $f_{\\HG{{\\ensuremath{\\boldsymbol{Q}}}}}$. \nNote that formula~\\eqref{hyper} defining $\\twfn$ is obtained from rewriting~\\eqref{rewriting0} by regarding the atoms $S(\\avec{z})$ in ${\\boldsymbol q}$ and tree-witness formulas $\\mathsf{tw}_\\t$ as propositional variables. 
We denote these variables by $p_{S(\\avec{z})}$ and $p_{\\t}$ (rather than $p_v$ and $p_e$), respectively.\n\n\\begin{example}\\label{ex:simple hyper}\nFor the OMQ ${\\ensuremath{\\boldsymbol{Q}}}$ in Example~\\ref{ex:conf}, the hypergraph $\\HG{{\\ensuremath{\\boldsymbol{Q}}}}$ has 3~vertices (one for each atom in the query) and 2~hyperedges (one for each tree witness) shown in Fig.~\\ref{fig:hyper-example}.\n\\begin{figure}[t]%\n\\centering%\n\\begin{tikzpicture}[label distance=-2pt,yscale=1.2,xscale=1.5]\n\\coordinate (c1) at (0,0);\n\\coordinate (c2) at (1,0.7);\n\\coordinate (c3) at (2,0);\n\\draw[fill=gray!5,rounded corners=10] (4.1,-0.2) -- (1.5,1.2) -- (-0.7,1.2) -- (1.9,-0.2) -- cycle;\n\\draw[fill=gray!30,rounded corners=10] (-2.1,-0.2) -- (0.5,1.2) -- (2.7,1.2) -- (0.1,-0.2) -- cycle;\n\\draw[rounded corners=10] (4.1,-0.2) -- (1.5,1.2) -- (-0.7,1.2) -- (1.9,-0.2) -- cycle;\n\\node at (c1) [point,fill=white,label=left:{\\small $R_1(x_1,y_1)$}] {};\n\\node at (c2) [point,fill=white,label=above:{\\small $Q(y_2,y_1)$}] {};\n\\node at (c3) [point,fill=white,label=right:{\\small $R_2(x_2,y_2)$}] {};\n\\node at (-0.1,0.5) {\\large $\\t^1$};\n\\node at (2.1,0.5) {\\large $\\t^2$};\n\\end{tikzpicture}%\n\\caption{The hypergraph $\\HG{{\\ensuremath{\\boldsymbol{Q}}}}$ for ${\\ensuremath{\\boldsymbol{Q}}}$ from Example~\\ref{ex:conf}.}\\label{fig:hyper-example}\n\\end{figure}\nThe tree-witness hypergraph function for ${\\ensuremath{\\boldsymbol{Q}}}$ is as follows: \n\\begin{equation*}\n\\twfn \\ \\ = \\ \\ \\bigl(p_{R_1(x_1,y_1)} \\land p_{Q(y_2,y_1)} \\land p_{R_2(x_2,y_2)}\\bigr) \\lor\n\\bigl(p_{\\t^1} \\land p_{R_2(x_2,y_2)} \\bigr) \\lor \\bigl(p_{R_1(x_1,y_1)} \\land p_{\\t^2}\\bigr).\n\\end{equation*}\n\\end{example}\n\nSuppose the function $\\twfn$ for an OMQ ${\\ensuremath{\\boldsymbol{Q}}}(\\avec{x})$ is computed by a Boolean formula $\\Phi$. 
Consider the first-order formula $\\Phi^*(\\avec{x})$ obtained by replacing each $p_{S(\\avec{z})}$ in $\\Phi$ with $S(\\avec{z})$, each $p_{\\t}$ with $\\mathsf{tw}_\\t$, and adding the appropriate prefix $\\exists \\avec{y}$. By comparing~\\eqref{hyper} and~\\eqref{rewriting0}, we see that $\\Phi^*(\\avec{x})$ is an \\text{FO}-rewriting of~${\\ensuremath{\\boldsymbol{Q}}}(\\avec{x})$ over data instances that are complete for $\\mathcal{T}$. This gives claim (\\emph{i}) of the following theorem:\n\n\\begin{theorem}\\label{TW2rew}\n\\textup{(}i\\textup{)} If $\\twfn$ is computed by a \\textup{(}monotone\\textup{)} Boolean formula $\\Phi$, then\nthere is a \\textup{(}\\text{PE}-\\textup{)} \\text{FO}-rewriting of ${\\ensuremath{\\boldsymbol{Q}}}(\\avec{x})$ of size $O(|\\Phi| \\cdot |{\\ensuremath{\\boldsymbol{Q}}}|)$.\n\n\\textup{(}ii\\textup{)} If $\\twfn$ is computed by a monotone Boolean circuit $\\boldsymbol{C}$, then there is an \\text{NDL}-rewriting of\n${\\ensuremath{\\boldsymbol{Q}}}(\\avec{x})$ of size $O(|\\boldsymbol{C}|\\cdot |{\\ensuremath{\\boldsymbol{Q}}}|)$.\n\\end{theorem}\n\\begin{proof}\n(\\emph{ii}) Let $\\t^1,\\dots,\\t^{l}$ be tree witnesses for ${\\ensuremath{\\boldsymbol{Q}}}(\\avec{x}) = (\\mathcal{T},{\\boldsymbol q}(\\avec{x}))$, where ${\\boldsymbol q}(\\avec{x}) = \\exists \\avec{y}\\, \\bigwedge_{i = 1}^n S_i(\\avec{z}_i)$.\nWe assume that the gates $g_1, \\dots, g_n$ of $\\boldsymbol{C}$ are the inputs $p_{S_1(\\avec{z}_1)}, \\dots, p_{S_n(\\avec{z}_n)}$ for the atoms, the gates $g_{n+1},\\dots,g_{n+l}$ are the inputs $p_{\\t^1},\\dots,p_{\\t^{l}}$ for the tree witnesses and $g_{n+l+1},\\dots,g_m$ are $\\textsc{and}$- and $\\textsc{or}$-gates. 
Denote by $\\Pi$ the following NDL-program, where $\\avec{z} = \\avec{x} \\cup\\avec{y}$:\n\\begin{nitemize}\n\\item[--] $S_i(\\avec{z}_i) \\to G_i(\\avec{z})$, for $0 < i \\le n$;\n\n\\item[--] $\\tau(u) \\to G_{n+j}(\\avec{z}[\\mathfrak{t}_\\mathsf{r}^j\/u])$, for $0 < j \\le l$ and $\\tau$ generating $\\t^j$, where $\\avec{z}[\\mathfrak{t}_\\mathsf{r}^j\/u]$ is the result of replacing each $z \\in \\mathfrak{t}_\\mathsf{r}^j$ in $\\avec{z}$ with $u$;\n\n\\item[--] $\\begin{cases}\nG_j(\\avec{z}) \\land G_k(\\avec{z}) \\to G_i(\\avec{z}), &\\text{if } g_i = g_j \\land g_k,\\\\\nG_j(\\avec{z}) \\to G_i(\\avec{z})\\text{ and } G_k(\\avec{z}) \\to G_i(\\avec{z}), &\\text{if } g_i = g_j \\lor g_k,\n\\end{cases}$\\quad for $n+l < i \\leq m$;\n\\item[--] $G_m(\\avec{z}) \\to G(\\avec{x})$.\n\\end{nitemize}\nIt is not hard to see that $(\\Pi, G(\\avec{x}))$ is an NDL-rewriting of ${\\ensuremath{\\boldsymbol{Q}}}(\\avec{x})$.\n\\end{proof}\n\nThus, the problem of constructing polynomial-size rewritings of OMQs reduces to finding polynomial-size (monotone) formulas or monotone circuits for the corresponding functions $\\twfn$.\nNote, however, that $\\twfn$ contains a variable $p_\\t$ for every tree witness~$\\t$, which makes this reduction useless for OMQs with exponentially many tree witnesses. To be able to deal with such OMQs, we slightly modify the tree-witness rewriting. \n \nSuppose $\\t = (\\mathfrak{t}_\\mathsf{r},\\mathfrak{t}_\\mathsf{i})$ is a tree witness for ${\\ensuremath{\\boldsymbol{Q}}} = (\\mathcal{T},{\\boldsymbol q})$ induced by a homomorphism $h \\colon {\\boldsymbol q}_\\t \\to \\smash{\\mathcal{C}_\\mathcal{T}^{\\tau(a)}}$. We say that $\\t$ is $\\varrho$-\\emph{initiated} if $h(z)$ is of the form $a \\varrho w$, for every (equivalently, some) variable $z\\in \\mathfrak{t}_\\mathsf{i}$. 
For such $\\varrho$, we define a formula $\\varrho^*(x)$ by taking the disjunction of $\\tau(x)$ with $\\mathcal{T}\\models \\tau(x) \\to \\exists y\\,\\varrho(x,y)$. Again, the disjunction includes only those $\\tau(x)$ that do not contain normalisation predicates (even though $\\varrho$ itself can be one).\n\n\\begin{example}\\label{ex:initated}\nConsider the OMQ ${\\ensuremath{\\boldsymbol{Q}}}(x) = (\\mathcal{T},{\\boldsymbol q}(x))$ with \n\\begin{equation*}\n\\mathcal{T} = \\bigl\\{\\,\\exists y\\, Q(x,y) \\to \\exists y\\, P(x,y), \\ \\ P(x,y) \\to R(x,y)\\, \\bigr\\}\\quad \\text{and} \\quad\n{\\boldsymbol q}(x) = \\exists y \\, R(x,y).\n\\end{equation*}\nAs shown in Fig.~\\ref{fig:ex:initiated}, the tree witness $\\t = (\\{x\\},\\{y\\})$ for ${\\ensuremath{\\boldsymbol{Q}}}(x)$ is generated by $\\exists y\\, Q(x,y)$, $\\exists y\\, P(x,y)$ and $\\exists y\\, R(x,y)$; it is also $P$- and $R$-initiated, but not $Q$-initiated. We have:\n\\begin{equation*}\nP^*(x) = \\exists y\\,Q(x,y) \\lor \\exists y\\,P(x,y) \\text{ and } R^*(x) = \\exists y\\,Q(x,y) \\lor \\exists y\\,P(x,y) \\lor \\exists y\\, R(x,y).\n\\end{equation*}\n\\begin{figure}[t]%\n\\centering%\n\\begin{tikzpicture}\\footnotesize\n\\node (a0) at (-0.5,0) [bpoint, label=below:{$\\exists y\\, Q(a,y)$}, label=left:{$a$}]{};\n\\node (a1) at (-1.25,1) [point, label=left:{$aQ$}]{};\n\\node (a2) at (0.25,1) [point, label=right:{$aP$}]{};\n\\draw[can,->] (a0) to node[label=left:{$Q$}]{} (a1);\n\\draw[can,->] (a0) to node[label=right:{$P,R$}]{} (a2);\n\\node (b0) at (3,0) [bpoint, label=below:{$\\exists y\\, P(a,y)$}, label=left:{$a$}]{};\n\\node (b1) at (3,1) [point, label=left:{$aP$}]{};\n\\draw[can,->] (b0) to node[label=right:{$P,R$}]{} (b1);\n\\node (c0) at (6,0) [bpoint, fill=black, label=below:{$\\exists y\\, R(a,y)$}, label=left:{$a$}]{};\n\\node (c1) at (6,1) [point, label=left:{$aR$}]{};\n\\draw[can,->] (c0) to node[label=right:{$R$}]{} (c1);\n\\end{tikzpicture}%\n\\caption{Canonical models in 
Example~\\ref{ex:initated}.}\\label{fig:ex:initiated}\n\\end{figure}%\n\\end{example}\n\nThe modified tree-witness rewriting for ${\\ensuremath{\\boldsymbol{Q}}}(\\avec{x}) = (\\mathcal{T},{\\boldsymbol q}(\\avec{x}))$, denoted $\\q_{\\mathsf{tw}}'(\\avec{x})$, is obtained by replacing~\\eqref{tw-formula} in~\\eqref{rewriting0} with the formula\n\\begin{equation*}\\label{tw-formula'}\\tag{\\ref{tw-formula}$'$}\n\\mathsf{tw}'_\\t(\\mathfrak{t}_\\mathsf{r},\\mathfrak{t}_\\mathsf{i}) ~=~ \\bigwedge_{R(z,z')\\in {\\boldsymbol q}_\\t}\\hspace*{-0.5em} (z=z') \\quad \\land \\bigvee_{\\t \\text{ is $\\varrho$-initiated}} \\ \\bigwedge_{z \\in \\mathfrak{t}_\\mathsf{r} \\cup \\mathfrak{t}_\\mathsf{i}} \\varrho^*(z).\n\\end{equation*}\nNote that unlike~\\eqref{tw-formula}, this formula contains the variables in both $\\mathfrak{t}_\\mathsf{i}$ and $\\mathfrak{t}_\\mathsf{r}$, which must be equal under every satisfying assignment. \nWe associate with $\\q_{\\mathsf{tw}}'(\\avec{x})$ the monotone Boolean function~$f^\\blacktriangledown_{{\\ensuremath{\\boldsymbol{Q}}}}$ \ngiven by the formula obtained from~\\eqref{hyper} by replacing each variable $p_v$ with the respective $p_{S(\\avec{z})}$, for $S(\\avec{z})\\in {\\boldsymbol q}$, and each variable $p_e$ with the formula\n\\begin{equation}\\label{subst}\n\\bigwedge_{R(z,z')\\in {\\boldsymbol q}_\\t} \\hspace*{-0.5em} p_{z=z'} \\ \\ \\ \\wedge \\bigvee_{\\t \\text{ is $\\varrho$-initiated}} \\,\\, \\bigwedge_{z \\in \\mathfrak{t}_\\mathsf{r}\\cup\\mathfrak{t}_\\mathsf{i}} p_{\\varrho^*(z)},\n\\end{equation}\nfor the respective tree witness $\\t=(\\mathfrak{t}_\\mathsf{r},\\mathfrak{t}_\\mathsf{i})$ for ${\\ensuremath{\\boldsymbol{Q}}}(\\avec{x})$, where $p_{z=z'}$ and $p_{\\varrho^*(z)}$ are propositional variables. \nClearly, the number of variables in $\\homfn$ is \\emph{polynomial} in $|{\\ensuremath{\\boldsymbol{Q}}}|$. 
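To spell out how $\\homfn$ can be evaluated directly from its defining formula, here is a brute-force Python sketch (the dictionary encodings, tuple-keyed assignment and all names are our own illustrative assumptions, and the search is exponential in the number of tree witnesses): it looks for an independent set of tree witnesses whose substituted subformulas~\\eqref{subst} are true and whose uncovered atoms are all true.

```python
from itertools import combinations

def subst_value(tw, val):
    """Value of the substituted subformula (subst) for one tree witness:
    every equality variable p_{z=z'} is true, and for some initiating
    role rho, p_{rho*(z)} is true for all root/internal variables z."""
    eqs = all(val["eq", z1, z2] for (z1, z2) in tw["eqs"])
    init = any(all(val["init", rho, z] for z in tw["vars"])
               for rho in tw["roles"])
    return eqs and init

def eval_homfn(atoms, tws, val):
    """Evaluate the hypergraph function with each hyperedge variable
    replaced by subst_value, under the assignment `val`."""
    names = list(tws)
    for r in range(len(names) + 1):
        for chosen in combinations(names, r):
            # independence: the atom sets q_t must be pairwise disjoint
            if any(not tws[a]["atoms"].isdisjoint(tws[b]["atoms"])
                   for a, b in combinations(chosen, 2)):
                continue
            if not all(subst_value(tws[t], val) for t in chosen):
                continue
            covered = set().union(*(tws[t]["atoms"] for t in chosen))
            if all(val["atom", s] for s in atoms - covered):
                return True
    return False
```

For the OMQ of Example~\\ref{ex:conf}, the tree witness $\\t^1$ would be encoded by its atoms $\\{R_1, Q\\}$, the equalities $x_1 = y_1$ and $y_2 = y_1$, and the initiating role $P_{\\zeta_1}$.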
\n\n\\begin{example}\nFor the OMQ ${\\ensuremath{\\boldsymbol{Q}}}(x_1,x_2)$ in Example~\\ref{ex:conf}, we have: \n\\begin{multline*}\n\\homfn \\ \\ = \\ \\ \\bigl(p_{R_1(x_1,y_1)} \\land p_{Q(y_2,y_1)} \\land p_{R_2(x_2,y_2)}\\bigr)\\ \\ \\lor {} \\\\ \n\\bigl(\n\\bigl(p_{x_1 = y_1}\\! \\land p_{y_2 = y_1} \\ \\ \\wedge \\hspace*{-0.5em}\\bigwedge_{z \\in \\{x_1, y_1, y_2\\}} \\hspace*{-1.5em} p_{P^*_{\\zeta_1}(z)}\\bigr) \\ \\ \\land \\ \\ p_{R_2(x_2,y_2)} \\bigr) \\ \\ \\lor {} \\\\\n\\bigl(p_{R_1(x_1,y_1)} \\ \\ \\land \\ \\\n\\bigl(p_{y_2 = y_1} \\!\\land p_{x_2 = y_2}\n\\ \\ \\wedge \\hspace*{-0.5em}\\bigwedge_{z \\in\\{y_1, y_2, x_2\\}} \\hspace*{-1.5em} p_{P^*_{\\zeta_2}(z)}\\bigr)\\bigr).\n\\end{multline*}\n\\end{example}\n\nThe proof of the following theorem is given in Appendix~\\ref{app:proof:Th4.5}:\n\n\\begin{theorem}\\label{Hom2rew}\n\\textup{(}i\\textup{)} For any OMQ ${\\ensuremath{\\boldsymbol{Q}}}(\\avec{x})$, the formulas $\\q_{\\mathsf{tw}}(\\avec{x})$ and $\\q_{\\mathsf{tw}}'(\\avec{x})$ are equivalent, and so $\\q_{\\mathsf{tw}}'(\\avec{x})$ is a PE-rewriting of ${\\ensuremath{\\boldsymbol{Q}}}(\\avec{x})$ over complete data instances.\n\n\\textup{(}ii\\textup{)} Theorem~\\ref{TW2rew} continues to hold for $\\twfn$ replaced by $\\homfn$.\n\\end{theorem}\n\nFinally, we observe that although $\\twfn$ and $\\homfn$ are defined by exponential-size formulas, each of these functions can be computed by a nondeterministic polynomial algorithm (in the number of propositional variables). Indeed, given truth-values for the $p_{S(\\avec{z})}$ and~$p_{\\t}$ in $\\twfn$, guess a set $\\Theta$ of at most $|{\\boldsymbol q}|$ tree witnesses and check whether (\\emph{i}) $\\Theta$ is independent, (\\emph{ii}) $p_{\\t} = 1$ for all $\\t \\in \\Theta$, and (\\emph{iii}) every $S(\\avec{z})$ with $p_{S(\\avec{z})} = 0$ belongs to ${\\boldsymbol q}_\\t$ for some $\\t \\in \\Theta$. 
The function $\\homfn$ is computed similarly except that, in (\\emph{ii}), we check whether the polynomial-size formula~\\eqref{subst} is true under the given truth-values for every $\\t\\in\\Theta$.\n\n\n\n\\subsection{Primitive Evaluation Functions}\n\nTo obtain lower bounds on the size of rewritings, we associate with every OMQ ${\\ensuremath{\\boldsymbol{Q}}}$ a third Boolean function $f^\\vartriangle_{\\omq}$ that describes the result of evaluating ${\\ensuremath{\\boldsymbol{Q}}}$ on data instances with a single individual constant. Let $\\avec{\\gamma}\\in \\{0,1\\}^n$ be a vector assigning the truth-value $\\avec{\\gamma}(S_i)$ to each unary or binary predicate $S_i$ in ${\\ensuremath{\\boldsymbol{Q}}}$. We associate with $\\avec{\\gamma}$ the data instance\n\\begin{equation*}\n\\mathcal{A}(\\avec{\\gamma}) \\ \\ = \\ \\ \\bigl\\{\\, A_i(a) \\mid \\avec{\\gamma}(A_i) = 1\\,\\bigr\\} \\ \\ \\cup \\ \\ \\bigl\\{\\, P_i(a,a) \\mid \\avec{\\gamma}(P_i) = 1\\,\\bigr\\}\n\\end{equation*}\nand set $f^\\vartriangle_{\\omq}(\\avec{\\gamma}) = 1$ iff $\\mathcal{T}, \\mathcal{A}(\\avec{\\gamma}) \\models {\\boldsymbol q}(\\avec{a})$, where $\\avec{a}$ is the $|\\avec{x}|$-tuple of $a$s. We call $f^\\vartriangle_{\\omq}$ the \\emph{primitive evaluation function} for ${\\ensuremath{\\boldsymbol{Q}}}$.\n\\begin{theorem}\\label{rew2prim}\n\\textup{(}i\\textup{)} If ${\\boldsymbol q}'$ is a \\textup{(}\\text{PE}-\\textup{)} \\text{FO}-rewriting of ${\\ensuremath{\\boldsymbol{Q}}}$, then $f^\\vartriangle_{\\omq}$ can be computed by a \\textup{(}monotone\\textup{)} Boolean formula of size $O(|{\\boldsymbol q}'|)$.\n\n\\textup{(}ii\\textup{)} If ${\\boldsymbol q}'$ is an \\text{NDL}-rewriting of ${\\ensuremath{\\boldsymbol{Q}}}$, then $f^\\vartriangle_{\\omq}$ can be computed by a monotone Boolean circuit of size $O(|{\\boldsymbol q}'|)$.\n\\end{theorem}\n\\begin{proof}\n(\\emph{i}) Let ${\\boldsymbol q}'$ be an \\text{FO}-rewriting of ${\\ensuremath{\\boldsymbol{Q}}}$. 
We eliminate the quantifiers in ${\\boldsymbol q}'$ by replacing each subformula of the form $\\exists x\\, \\psi(x)$ and $\\forall x\\, \\psi(x)$ with $\\psi(a)$. We then\nreplace each $a=a$ with $\\top$ and each atom of the form $A(a)$ and $P(a,a)$ with the corresponding propositional variable. The resulting Boolean formula clearly computes $f^\\vartriangle_{\\omq}$.\nIf ${\\boldsymbol q}'$ is a \\text{PE}-rewriting of ${\\ensuremath{\\boldsymbol{Q}}}$, then the result is a monotone Boolean formula computing~$f^\\vartriangle_{\\omq}$.\n\n\n(\\emph{ii}) If $(\\Pi, G)$ is an \\text{NDL}-rewriting of ${\\ensuremath{\\boldsymbol{Q}}}$, then\nwe replace all variables in $\\Pi$ with $a$ and then perform the replacement described in (\\emph{i}). We now turn the resulting propositional \\text{NDL}-program $\\Pi'$ into a monotone circuit computing $f^\\vartriangle_{\\omq}$. For every (propositional) variable $p$ occurring in the head of a rule in $\\smash{\\Pi'}$, we take an appropriate number of $\\textsc{or}$-gates whose output is $p$ and whose inputs are the bodies of the rules with head $p$; for every such body, we introduce an appropriate number of $\\textsc{and}$-gates whose inputs are the propositional variables in the body, or, if the body is empty, we take the gate for constant~$1$.\n\\end{proof}\n\n\n\n\n\\subsection{Hypergraph Programs}\n\nWe introduced hypergraph functions as Boolean abstractions of the tree-witness rewritings. Our next aim is to define a model of computation for these functions.\n\nA \\emph{hypergraph program} (HGP) $P$ is a hypergraph $H = (V,E)$ each of whose vertices is labelled by $0$, $1$ or a literal over a list $p_1,\\dots,p_n$ of propositional variables. (As usual, a \\emph{literal}, $\\boldsymbol l$, is a propositional variable or its negation.) An \\emph{input} for $P$ is a tuple \\mbox{$\\avec{\\alpha} \\in \\{0,1\\}^n$}, which is regarded as a valuation for $p_1,\\dots,p_n$. 
The output $P(\\avec{\\alpha})$ of $P$ on~$\\avec{\\alpha}$ is 1 iff there is an independent subset of $E$ that \\emph{covers all zeros}---that is, contains every vertex in $V$ whose label evaluates to $0$ under $\\avec{\\alpha}$. We say that $P$ \\emph{computes} an $n$-ary Boolean function~$f$ if $f(\\avec{\\alpha})=P(\\avec{\\alpha})$, for all $\\avec{\\alpha}\\in\\{0,1\\}^n$. An HGP is \\emph{monotone} if its vertex labels do not have negated variables. The \\emph{size} $|P|$ of an HGP $P$ is the size $|H|$ of the underlying hypergraph $H = (V,E)$, which is $|V| + |E|$.\n\n\n\n\nThe following observation shows that monotone HGPs capture the computational power of hypergraph functions. We remind the reader that a \\emph{subfunction} of a Boolean function $f$ is obtained from $f$ using two operations: (1) fixing some of its variables to $0$ or $1$, and (2) renaming (in particular, identifying) some of the variables in $f$. A hypergraph $H$ is said to be of \\emph{degree at most} $d$ if every vertex in it belongs to at most $d$ hyperedges; $H$ is of \\emph{degree} $d$ if every vertex in it belongs to exactly $d$ hyperedges.\n\n\\begin{proposition}\\label{hyper:program}\n\\textup{(}i\\textup{)} Any monotone HGP based on a hypergraph $H$ computes a subfunction of the hypergraph function $f_H$.\n\n\\textup{(}ii\\textup{)} For any hypergraph $H$ of degree at most $d$, there is a monotone HGP of size $O(|H|)$ that computes $f_H$ and such that its hypergraph is of degree at most $\\max(2,d)$.\n\\end{proposition}\n\\begin{proof}\nTo show (\\emph{i}), it is enough to replace the vertex variables $p_v$ in $f_H$ by the corresponding vertex labels of the given HGP and fix all the edge variables $p_e$ to $1$.\n\nFor (\\emph{ii}), given a hypergraph $H = (V,E)$, we label each $v \\in V$ by the variable $p_v$. 
For each $e \\in E$, we add a fresh vertex $a_e$ labelled by $1$ and a fresh vertex $b_e$ labelled by~$p_e$; then we create a new hyperedge $e' = \\{a_e, b_e\\}$ and add $a_e$ to the hyperedge $e$. We claim that the resulting HGP $P$\ncomputes $f_H$. Indeed, for any input $\\avec{\\alpha}$ with $\\avec{\\alpha}(p_e) = 0$, we have to include the edge $e'$ in the cover, and so cannot include the edge $e$ itself. Thus, $P(\\avec{\\alpha})=1$ iff there is an independent set $\\Theta \\subseteq E$ of hyperedges with $\\avec{\\alpha}(p_e)=1$, for all $e\\in \\Theta$, covering all zeros of the variables $p_v$. \n\\end{proof}\n\nIn some cases, it will be convenient to use \\emph{generalised HGPs} that allow hypergraph vertices to be labelled by conjunctions $\\bigwedge_{i} \\boldsymbol l_i$ of literals $\\boldsymbol l_i$. \nThe following proposition shows that this generalisation does not increase the computational power of HGPs.\n\n\\begin{proposition}\\label{lem:genhgps}\nFor every generalised HGP $P$ over $n$ variables, there is an HGP $P^\\prime$ computing the same function and such that $|P'| \\le n \\cdot |P|$.\n\\end{proposition}\n\\begin{proof}\nTo construct $P^\\prime$, we split every vertex $v$ of $P$ labelled with $\\bigwedge_{i=1}^{k} \\boldsymbol l_i$ into $k$ new vertices $v_1, \\dots, v_k$ and label $v_i$ with $\\boldsymbol l_i$, for $1 \\leq i \\leq k$ (without loss of generality, we can assume that $\\boldsymbol l_i$ and $\\boldsymbol l_j$ have distinct variables for $i \\ne j$); each hyperedge containing $v$ will now contain all the $v_i$. It is easy to see that $P(\\avec{\\alpha}) = {P^\\prime}(\\avec{\\alpha})$, for any input $\\avec{\\alpha}$. 
Since $k \\leq n$, we have $|P'| \\le n \\cdot |P|$.\n\\end{proof}\n\n\n\n\\section{OMQs, hypergraphs and monotone hypergraph programs}\\label{sec:OMQs&hypergraphs}\n\nWe now establish a correspondence between the structure of OMQs and hypergraphs.\n\n\\subsection{OMQs with Ontologies of Depth 2}\\label{sec5.1}\n\nTo begin with, we show that every hypergraph $H = (V, E)$ can be represented by a polynomial-size OMQ ${\\ensuremath{\\boldsymbol{Q}}}_H = (\\mathcal{T},{\\boldsymbol q})$ with $\\mathcal{T}$ of depth~$2$. With every vertex $v\\in V$ we associate a unary predicate $A_v$, and with every hyperedge $e \\in E$ a unary predicate $B_e$ and a binary predicate $R_e$. We define $\\mathcal{T}$ to be the set of the following axioms, for $e \\in E$: \n\\begin{equation*}\nB_e(x) \\ \\ \\to \\ \\ \\exists y \\,\\bigl[ \\hspace*{-0.5em}\\bigwedge_{e \\cap e' \\neq \\emptyset,\\ e \\ne e'} \\hspace*{-1.5em}R_{e'}(x,y) \\ \\land \\\n\\bigwedge_{v \\in e} A_v(y)\n\\ \\land \\ \\exists z\\, R_{e}(z,y)\\bigr].\n\\end{equation*}\nClearly, $\\mathcal{T}$ is of depth 2. We also take the Boolean CQ ${\\boldsymbol q}$ with variables $y_v$, for $v\\in V$, and $z_e$, for $e\\in E$:\n\\begin{equation*}\n{\\boldsymbol q} \\ \\ = \\ \\ \\bigl\\{\\, A_v(y_v) \\mid v \\in V \\,\\bigr\\} \\ \\ \\cup \\ \\ \\bigl\\{\\,\nR_e(z_e, y_v) \\mid v \\in e, \\text{ for } v \\in V \\text{ and } e \\in E \\,\\bigr\\}.\n\\end{equation*}\n\\begin{example}\nConsider again the hypergraph from Example~\\ref{ex:simple hyper}, which we now denote by $H = (V,E)$ with $V = \\{v_1,v_2,v_3\\}$, $E = \\{e_1,e_2\\}$, $e_1 = \\{v_1,v_2\\}$ and $e_2 = \\{v_2,v_3\\}$. 
The CQ ${\\boldsymbol q}$ and the canonical models $\\mathcal{C}_{\\mathcal{T}}^{B_{\\smash{e_i}}(a)}$, for $i=1,2$, are shown in Fig.~\\ref{fig:QH:simple hyper} along with four tree witnesses for ${\\ensuremath{\\boldsymbol{Q}}}_H$ (as explained in Remark~\\ref{ignored}, we ignore the two extra tree witnesses generated only by normalisation predicates).\n\\begin{figure}[t]%\n\\centering%\n\\begin{tikzpicture}\\footnotesize\n\\node[bpoint, label=left:{$B_{e_1}$}, label=right:{$a$}] (a) at (7,2) {};\n\\node[point, label=left:{$A_{v_2},A_{v_3}$}] (d1) at (7,1) {};\n\\node[point, fill=white] (d2) at (7,0) {};\n\\draw[can,->] (a) to node [right]{\\scriptsize $R_{e_2}$} (d1);\n\\draw[can,->] (d1) to node [right]{\\scriptsize $R_{e_1}^-$} (d2);\n\\node at (8.25,2) {\\normalsize $\\mathcal{C}_{\\mathcal{T}}^{B_{e_1}(a)}$};\n\\node[bpoint, label=right:{$B_{e_2}$}, label=left:{$a$}] (a) at (-3,0) {};\n\\node[point, label=right:{$A_{v_1},A_{v_2}$}] (d1) at (-3,1) {};\n\\node[point, fill=white] (d2) at (-3,2) {};\n\\draw[can,->] (a) to node [left]{\\scriptsize $R_{e_1}$} (d1);\n\\draw[can,->] (d1) to node [left]{\\scriptsize $R_{e_2}^-$} (d2);\n\\node at (-4.25,0) {\\normalsize $\\mathcal{C}_{\\mathcal{T}}^{B_{e_2}(a)}$};\n\\coordinate (czv1) at (0,1);\n\\coordinate (czv1p) at (-0.7,1);\n\\coordinate (czv1pp) at (-0.8,1);\n\\coordinate (cze1) at (1,0);\n\\coordinate (czv2) at (2,1);\n\\coordinate (cze2) at (3,2);\n\\coordinate (czv3) at (4,1);\n\\coordinate (czv3p) at (4.7,1);\n\\coordinate (czv3pp) at (4.8,1);\n\\node[draw,thin,fill=gray!10,rounded corners,inner xsep=0mm,inner ysep=7mm,fit=(cze2) (cze1) (czv1pp)] {};\n\\node[draw,thin,fill=gray!50,fill opacity=0.5,rounded corners,inner xsep=0mm,inner ysep=5mm,fit=(cze1) (cze2) (czv3pp)] {};\n\\node[draw,thin,fill=gray!40,rounded corners,inner xsep=0mm,inner ysep=3mm,fit=(cze1) (czv1p)] {};\n\\node[draw,thin,fill=gray!15,rounded corners,inner xsep=0mm,inner ysep=3mm,fit=(cze2) (czv3p)] {};\n\\node at (-0.2,0) {\\normalsize 
$\\t_{v_1}$};\n\\node at (-0.3,2.3) {\\normalsize $\\t_{e_1}$};\n\\node at (4.4,2) {\\normalsize $\\t_{v_3}$};\n\\node at (4.5,-0.1) {\\normalsize $\\t_{e_2}$};\n\\node[point,label=left:{$y_{v_1}$},label=right:$A_{v_1}$] (zv1) at (czv1) {};\n\\node[point,label=left:{$z_{e_1}$}] (ze1) at (cze1) {};\n\\node[point,label=left:{$y_{v_2}$},label=below right:{$A_{v_2}$}] (zv2) at (czv2) {};\n\\node[point,label=right:$z_{e_2}$] (ze2) at (cze2) {};\n\\node[point,label=right:{$y_{v_3}$},label=left:{$A_{v_3}$}] (zv3) at (czv3) {};\n\\draw[<-,query] (zv1) to node[left] {\\scriptsize$R_{e_1}$} (ze1);\n\\draw[<-,query] (zv2) to node[right,near end] {\\scriptsize$R_{e_1}$} (ze1);\n\\draw[->,query] (ze2) to node[left] {\\scriptsize$R_{e_2}$} (zv2);\n\\draw[->,query] (ze2) to node[right] {\\scriptsize$R_{e_2}$} (zv3);\n\\end{tikzpicture}%\n\\caption{The OMQ ${\\ensuremath{\\boldsymbol{Q}}}_H$ for $H$ from Example~\\ref{ex:simple hyper} and its tree witnesses.}\\label{fig:QH:simple hyper}\n\\end{figure}\n\\end{example}\n\nIt is not hard to see that the number of tree witnesses for ${\\ensuremath{\\boldsymbol{Q}}}_H$ does not exceed $|H|$. 
\nIndeed, all the tree witnesses for ${\\ensuremath{\\boldsymbol{Q}}}_H$ fall into two types:\n\\begin{align*}\n& \\t_v=(\\mathfrak{t}_\\mathsf{r}, \\mathfrak{t}_\\mathsf{i}) \\text{ with } \\mathfrak{t}_\\mathsf{r} =\\{z_e \\mid v \\in e\\} \\text{ and } \\mathfrak{t}_\\mathsf{i} = \\{y_v\\}, \\ \\ \\text{ for each } v\\in V \\text{ that belongs to a single } e\\in E;\\\\\n& \\t_e=(\\mathfrak{t}_\\mathsf{r}, \\mathfrak{t}_\\mathsf{i}) \\text{ with } \\mathfrak{t}_\\mathsf{r} = \\{z_{e'} \\mid e \\cap e' \\neq \\emptyset, e \\ne e' \\} \\text{ and } \\mathfrak{t}_\\mathsf{i} = \\{z_e\\} \\cup\\{y_v \\mid v \\in e\\},\\quad \\text{ for } e\\in E.\n\\end{align*}\n\nWe call a hypergraph $H'$ a \\emph{subgraph} of a hypergraph $H = (V,E)$ if $H'$ can be obtained from $H$ by (\\emph{i}) removing some of its hyperedges and (\\emph{ii}) removing some of its vertices from both $V$ and the hyperedges in $E$.\n\n\\begin{theorem}\\label{hg-to-query}\n\\textup{(}i\\textup{)} Any hypergraph $H$ is isomorphic to a subgraph of $\\HG{{\\ensuremath{\\boldsymbol{Q}}}_H}$.\n\n\\textup{(}ii\\textup{)} Any monotone HGP $P$ based on a hypergraph $H$ computes a subfunction of the primitive evaluation function $f^\\vartriangle_{{\\ensuremath{\\boldsymbol{Q}}}_H}$.\n\\end{theorem}\n\\begin{proof}\n(\\emph{i}) An isomorphism between $H$ and a subgraph of $\\HG{{\\ensuremath{\\boldsymbol{Q}}}_H}$ can be established by the map $v \\mapsto A_v(y_v)$, for $v \\in V$, and $e \\mapsto {\\boldsymbol q}_{\\t_e}$, for $e \\in E$. \n\n(\\emph{ii}) Suppose that $P$ is based on a hypergraph $H= (V, E)$. 
Given an input $\\avec{\\alpha}$ for $P$, we define an assignment $\\avec{\\gamma}$ for the predicates in ${\\ensuremath{\\boldsymbol{Q}}}_H = (\\mathcal{T},{\\boldsymbol q})$ by taking $\\avec{\\gamma}(A_v)$ to be the value of the label of $v$ under $\\avec{\\alpha}$, $\\avec{\\gamma}(B_e) = 1$ and\n$\\avec{\\gamma}(R_e)=1$, for all $e \\in E$ (and of course $\\avec{\\gamma}(P_\\zeta) = 0$, for all normalisation predicates $P_\\zeta$).\nBy the definition of $\\mathcal{T}$, for each $e\\in E$, the canonical model $\\mathcal{C}_{\\mathcal{T},\\mathcal{A}(\\avec{\\gamma})}$ contains labelled nulls $w_e$ and $w'_{e}$ such that\n\\begin{equation*}\n\\mathcal{C}_{\\mathcal{T},\\mathcal{A}(\\avec{\\gamma})} \\models \\bigwedge_{e \\cap e' \\neq \\emptyset, \\ e \\ne e'} \\hspace*{-1em}R_{e'}(a, w_e) \\ \\land \\ \\bigwedge_{v \\in e} A_v(w_e) \\ \\land \\ R_e(w'_e, w_e).\n\\end{equation*}\nWe now show that $P(\\avec{\\alpha}) = 1$ iff $f^\\vartriangle_{{\\ensuremath{\\boldsymbol{Q}}}_H}(\\avec{\\gamma})= 1$ (iff $\\mathcal{T},\\mathcal{A}(\\avec{\\gamma}) \\models {\\boldsymbol q}$).\nSuppose $P(\\avec{\\alpha}) = 1$, that is, there is an independent subset $E' \\subseteq E$ such that the label of each \n$v \\notin \\bigcup E'$ evaluates to $1$ under $\\avec{\\alpha}$. 
Then the map $h \\colon {\\boldsymbol q} \\to \\mathcal{C}_{\\mathcal{T},\\mathcal{A}(\\avec{\\gamma})}$ defined by taking\n\\begin{equation*}\nh(z_e)=\\begin{cases} w'_e,& \\mbox{if } e \\in E',\\\\\na,& \\mbox{ otherwise,}\n\\end{cases} \n\\qquad\nh(y_v)=\\begin{cases} w_e,& \\mbox{if } v \\in e \\in E',\\\\\na,& \\mbox{ otherwise}\n\\end{cases}\n\\end{equation*}\nis a homomorphism witnessing $\\mathcal{C}_{\\mathcal{T},\\mathcal{A}(\\avec{\\gamma})} \\models {\\boldsymbol q}$, whence $f^\\vartriangle_{{\\ensuremath{\\boldsymbol{Q}}}_H}(\\avec{\\gamma})= 1$.\n\nConversely, if $f^\\vartriangle_{{\\ensuremath{\\boldsymbol{Q}}}_H}(\\avec{\\gamma})= 1$ then there is a homomorphism\n$h\\colon {\\boldsymbol q} \\to \\mathcal{C}_{\\mathcal{T},\\mathcal{A}(\\avec{\\gamma})}$. For any hyperedge $e\\in E$, there are only two options for $h(z_e)$: either $a$ or $w'_e$. It follows that the set $E' = \\{e \\in E \\mid h(z_e) = w'_e\\}$ is independent and covers all zeros. Indeed, if distinct $e, e' \\in E'$ shared a vertex $v$, then the atoms $R_e(z_e, y_v)$ and $R_{e'}(z_{e'}, y_v)$ would force $h(y_v)$ to be both $w_e$ and $w_{e'}$, which is impossible. Moreover, if $v\\notin\\bigcup E'$\nthen $h(y_v) = a$, and so the label of $v$ evaluates to $1$ under $\\avec{\\alpha}$ because $A_v(y_v) \\in {\\boldsymbol q}$.\n\\end{proof}\n\nNext, we establish a tight correspondence between hypergraphs of degree at most~2 and OMQs with ontologies of depth 1.\n\n\n\\subsection{Hypergraphs of Degree 2 and OMQs with Ontologies of Depth 1}\\label{sec:depth1}\n\n\\begin{theorem}\\label{depth1}\nFor any OMQ ${\\ensuremath{\\boldsymbol{Q}}} = (\\mathcal{T},{\\boldsymbol q})$ with $\\mathcal{T}$ of depth~$1$, the hypergraph $\\HG{{\\ensuremath{\\boldsymbol{Q}}}}$ is of degree at most~2 and $|\\HG{{\\ensuremath{\\boldsymbol{Q}}}}|\\leq 2|{\\boldsymbol q}|$.\n\\end{theorem}\n\\begin{proof}\nWe have to show that every atom in ${\\boldsymbol q}$ belongs to at most two ${\\boldsymbol q}_\\t$, $\\t\\in\\twset$.\nSuppose $\\t = (\\mathfrak{t}_\\mathsf{r},\\mathfrak{t}_\\mathsf{i})$ is a tree witness for ${\\ensuremath{\\boldsymbol{Q}}}$ and $y \\in \\mathfrak{t}_\\mathsf{i}$. 
Since $\\mathcal{T}$ is of depth~1, $\\mathfrak{t}_\\mathsf{i} = \\{y\\}$ and $\\mathfrak{t}_\\mathsf{r}$ consists of all the variables in ${\\boldsymbol q}$ adjacent to $y$ in the Gaifman graph $G_{\\boldsymbol q}$ of ${\\boldsymbol q}$.\nThus, different tree witnesses have different internal variables $y$. An atom of the form $A(u)\\in{\\boldsymbol q}$ is in~${\\boldsymbol q}_\\t$ iff $u = y$. An atom of the form $P(u,v)\\in{\\boldsymbol q}$ is in ${\\boldsymbol q}_\\t$ iff either $u = y$ or $v=y$. Therefore, $P(u,v)\\in{\\boldsymbol q}$ can only be covered by the tree witness with internal variable $u$ and by the tree witness with internal variable $v$.\n\\end{proof}\n\nConversely, we now show that any hypergraph $H$ of degree 2 is isomorphic to $\\HG{\\OMQI{H}}$, for some OMQ $\\OMQI{H} = (\\mathcal{T},{\\boldsymbol q})$ with $\\mathcal{T}$ of depth~1.\nWe can assume that $H = (V, E)$ comes with two fixed maps $i_1,i_2\\colon V \\to E$ such that for every $v\\in V$, we have $i_1(v) \\ne i_2(v)$, $v \\in i_1(v)$ and $v \\in i_2(v)$.\nFor any $v \\in V$, we fix \na binary predicate $R_v$, and let the ontology $\\mathcal{T}$ in $\\OMQI{H}$ contain the following axioms, for $e \\in E$:\n\\begin{equation*}\nA_e(x) \\ \\to \\ \\exists y\\, \\bigl[ \\bigwedge_{\\begin{subarray}{c}v\\in V\\\\i_1(v) = e\\end{subarray}} R_v(y,x) \\ \\ \\land \\bigwedge_{\\begin{subarray}{c}v\\in V\\\\i_2(v) = e\\end{subarray}} R_v(x,y)\\bigr].\n\\end{equation*}\nClearly, $\\mathcal{T}$ is of depth 1. The Boolean CQ ${\\boldsymbol q}$ contains variables $z_e$, for $e\\in E$, and is defined by taking \n\\begin{equation*}\n{\\boldsymbol q} ~=~ \\bigl\\{\\, R_v(z_{i_1(v)}, z_{i_2(v)}) \\mid v\\in V\\,\\bigr\\}.\n\\end{equation*}\n \n\\begin{example}\\label{example1}\nSuppose that $H = (V, E)$, where $V = \\{v_1, v_2, v_3, v_4\\}$, $E = \\{e_1, e_2, e_3\\}$ and $e_1 = \\{v_1, v_2, v_3\\}$, $e_2 = \\{v_3, v_4\\}$, $e_3 = \\{v_1, v_2, v_4\\}$. 
Let\n\\begin{align*}\n& i_1\\colon v_1\\mapsto e_1, \\quad v_2\\mapsto e_3, \\quad v_3 \\mapsto e_1, \\quad v_4 \\mapsto e_2,\\\\\n& i_2\\colon v_1\\mapsto e_3, \\quad v_2 \\mapsto e_1, \\quad v_3 \\mapsto e_2, \\quad v_4 \\mapsto e_3.\n\\end{align*}\nThe hypergraph $H$ and the query ${\\boldsymbol q}$ are shown in Fig.~\\ref{fig:depth1}: each $R_{v_k}$ is represented by an edge, $i_1(v_k)$ is indicated by the circle-shaped end of the edge and $i_2(v_k)$ by the diamond-shaped end of the edge; the $e_j$ are shown as large grey squares.\n\\begin{figure}[t]%\n\\centering%\n\\begin{tikzpicture}[scale=1.2]\n\\coordinate (c2) at (-1,0.75);\n\\coordinate (c1) at (-1.5,1.5);\n\\coordinate (c3) at (0.5,1.5);\n\\coordinate (c4) at (-0.25,0) {};\n\\node[draw=black,fill=gray!20,ultra thin,rounded corners=12,fit=(c1) (c2) (c4),inner sep=20] {};\n\\node[draw=black,ultra thin,rounded corners=12,fit=(c1) (c2) (c4),inner sep=20] {};\n\\node[draw=black,fill=gray!60,fill opacity=0.5,ultra thin,rounded corners=12,fit=(c3) (c4),inner sep=15] {};\n\\node[draw=black,ultra thin,rounded corners=12,fit=(c3) (c4),inner sep=15] {};\n\\node[draw=black,fill=gray!5,fill opacity=0.5,ultra thin,rounded corners=12,fit=(c1) (c2) (c3),inner sep=10] {};\n\\node[draw=black,ultra thin,rounded corners=12,fit=(c1) (c2) (c3),inner sep=10] {};\n\\node[point,fill=white,label=left:{$v_2$}] (v2) at (c2) {};\n\\node[point,fill=white,label=right:{$v_1$}] (v1) at (c1) {};\n\\node[point,fill=white,label=left:{$v_3$}] (v3) at (c3) {};\n\\node[point,fill=white,label=right:{$v_4$}] (v4) at (c4) {};\n\\node at (-1.5,0) {\\large $e_3$};\n\\node at (0.6,0) {\\large $e_2$};\n\\node at (-0.25,1) {\\large $e_1$};\n\\coordinate (v12) at (2,0.3);\n\\coordinate (v22) at (3.3,1.3);\n\\coordinate (v42) at (2.6,-0.1);\n\\coordinate (v11) at (3,1.7);\n\\coordinate (v21) at (2.6,0.3);\n\\coordinate (v31) at (3.6,1.7);\n\\coordinate (v32) at (4.5,0.3);\n\\coordinate (v41) at (4.1,-0.1);\n\\coordinate (v61) at 
(4.6,-0.1);\n\\node[rectangle,draw=black,fill=gray!30,ultra thin,rounded corners=12,fit=(v41) (v61) (v32),inner sep=15] (e2) {};\n\\node at ($(e2)+(0.2,-0.3)$) {\\large $z_{e_2}$};\n\\node[rectangle,draw=black,fill=gray!5,ultra thin,rounded corners=12,fit=(v11) (v22) (v31),inner sep=15] (e1) {\\large $z_{e_1}$};\n\\node[rectangle,draw=black,fill=gray!20,ultra thin,rounded corners=12,fit=(v12) (v21) (v42),inner sep=15] (e3) {};\n\\node at ($(e3)+(-0.1,-0.15)$) {\\large $z_{e_3}$};\n\\begin{scope}[open diamond-*,line width=0.4mm,fill=gray,draw=black!80]\n\\draw (v12) to node[left, midway] {\\small $R_{v_1}$} (v11) ;\n\\draw (v22) to node[right, pos=0.6] {\\small $R_{v_2}$} (v21);\n\\draw (v42) to node[below, midway] {\\small $R_{v_4}$} (v41);\n\\draw (v32) to node[right, midway] {\\small $R_{v_3}$} (v31);\n\\end{scope}\n\\node at (1.2,2) {\\large $H$};\n\\node at (2,1.9) {\\large ${\\boldsymbol q}$};\n\\node at (6.2,-0.25) {\\normalsize $\\t^{e_1}$};\n\\node at (8.5,-0.25) {\\normalsize $\\mathcal{C}_\\mathcal{T}^{A_{e_1}(a)}$};\n\\coordinate (v31c) at (6.05,1.75);\n\\coordinate (v22c) at (6.2,1.75);\n\\coordinate (v11c) at (6.35,1.75);\n\\coordinate (w1c) at (6.2,1.68);\n\\coordinate (w1c0) at (6.2,1.68);\n\\coordinate (v32c) at (6.05,0.25);\n\\coordinate (v21c) at (6.2,0.25);\n\\coordinate (v12c) at (6.35,0.25);\n\\coordinate (a1c) at (6.2,0.35);\n\\coordinate (a1c0) at (6.2,0.35);\n\\draw[line width=2.5mm,-triangle 60,postaction={draw, line width=6mm, shorten >=4mm, -},draw=gray!20] (6.2,0.25) -- (6.2,2);\n\\node[rectangle,draw=black,line width=0.3mm,rounded corners=5,fit=(w1c0) (w1c),inner xsep=20, inner ysep=10,label=right:{\\small $z_{e_1}$}] (w1) {};\n\\node[rectangle,draw=black,line width=0.3mm,rounded corners=5,fit=(a1c0) (a1c),inner xsep=20, inner ysep=10,label=right:{\\hspace*{-0.5em}\\small\\begin{tabular}{c}$z_{e_2}$\\\\$z_{e_3}$\\end{tabular}}] (a1) {};\n\\begin{scope}[line width=0.25mm,draw=black]\n\\draw[open diamond-*] (v32c) to 
(v31c);\n\\draw[*-open diamond] (v21c) to (v22c);\n\\draw[open diamond-*] (v12c) to (v11c);\n\\end{scope}\n\\draw[densely dotted,bend left,looseness=0.8] (v31) to (v31c);\n\\draw[densely dotted,bend left,looseness=0.6] (v22) to (v22c);\n\\draw[densely dotted,bend left,looseness=0.8] (v11) to (v11c);\n\\draw[densely dotted,bend right,looseness=0.2] (v32) to (v32c); %\n\\draw[densely dotted,bend right,looseness=0.2] (v21) to (v21c);\n\\draw[densely dotted,bend right,looseness=0.3] (v12) to (v12c);\n\\node[bpoint, label=right:{$A_{e_1}$}, label=left:{\\footnotesize $a$}] (aa) at (9,0.25) {};\n\\node[point] (ab) at (9,1.75) {};\n\\draw[can,->] (aa) to node[right] {\\footnotesize\\textcolor{gray}{$P_\\zeta$}} node[left] {\\footnotesize$R_{v_3}^-, R_{v_2}, R_{v_1}^-$} (ab);\n\\end{tikzpicture}%\n\\caption{Hypergraph $H$ in Example~\\ref{example1}, its CQ ${\\boldsymbol q}$, tree witness $\\t^{e_1}$ for $\\OMQI{H}$ and canonical model $\\mathcal{C}_\\mathcal{T}^{A_{e_1}(a)}$.}\\label{fig:depth1}\n\\end{figure}\n\\noindent In this case,\n\\begin{equation*}\n{\\boldsymbol q} \\ \\ = \\ \\ \\exists z_{e_1},z_{e_2},z_{e_3} \\,\\bigl(R_{v_1}(z_{e_1},z_{e_3}) \\land R_{v_2}(z_{e_3},z_{e_1}) \\land\nR_{v_3}(z_{e_1},z_{e_2})\\land R_{v_4}(z_{e_2},z_{e_3}) \\bigr)\n\\end{equation*}\nand $\\mathcal{T}$ consists of the following axioms:\n\\begin{align*}\nA_{e_1}(x) &\\to\n\\exists y\\, \\bigl[R_{v_1}(y,x)\\land R_{v_2}(x,y)\\land R_{v_3}(y,x)\\bigr],\\\\%}_{\\zeta(x)},\\\\\nA_{e_2}(x) &\\to \\exists y\\, \\bigl[ R_{v_3}(x,y) \\land R_{v_4}(y,x)\\bigr], \\\\\nA_{e_3}(x) &\\to \\exists y\\, \\bigl[ R_{v_1}(x,y) \\land R_{v_2}(y,x)\\land R_{v_4}(x,y)\\bigr].\n\\end{align*}\nThe canonical model $\\mathcal{C}^{A_{\\smash{e_1}}(a)}_{\\mathcal{T}}$ is shown on the right-hand side of Fig.~\\ref{fig:depth1}. 
Note that each $z_{e}$ determines the tree witness $\\t^{e}$ with ${\\boldsymbol q}_{\\t^e} = \\{R_v(z_{i_1(v)}, z_{i_2(v)}) \\mid v \\in e\\}$; distinct $\\t^e$ and $\\t^{\\smash{e'}}$ are conflicting iff $e \\cap e' \\ne \\emptyset$. It follows that $H$ is isomorphic to $\\HG{\\OMQI{H}}$.\n\\end{example}\n\\begin{theorem}\\label{representable}\n\\textup{(}i\\textup{)} Any hypergraph $H$ of degree~2 is isomorphic to $\\HG{\\OMQI{H}}$.\n\n\\textup{(}ii\\textup{)} \nAny monotone HGP $P$ based on a hypergraph $H$ of degree~2 computes a subfunction of the primitive evaluation function $f^\\vartriangle_{\\OMQI{H}}$.\n\\end{theorem}\n\\begin{proof}\n(\\emph{i}) We show that the map $g\\colon v\\mapsto R_v(z_{i_1(v)}, z_{i_2(v)})$ is an isomorphism between $H$ and $\\HG{\\OMQI{H}}$. By the definition of $\\OMQI{H}$, $g$ is a bijection between $V$ and the atoms of ${\\boldsymbol q}$. For any $e\\in E$, there is a tree witness\n$\\t^e = (\\mathfrak{t}_\\mathsf{r}^e, \\mathfrak{t}_\\mathsf{i}^e)$ generated by $A_e(x)$ with\n$\\mathfrak{t}_\\mathsf{i}^e = \\{z_e\\}$ and $\\mathfrak{t}_\\mathsf{r}^e = \\{z_{e'} \\mid e \\cap e' \\neq\\emptyset, e \\ne e'\\}$,\nand ${\\boldsymbol q}_{\\t^e}$ consists of the $g(v)$, for $v\\in e$. Conversely, every tree witness $\\t$ for $\\OMQI{H}$ contains $z_e \\in \\mathfrak{t}_\\mathsf{i}$, for some $e\\in E$, and so\n${\\boldsymbol q}_\\t = \\{ g(v) \\mid v \\in e\\}$.\n\n\\smallskip\n\n(\\emph{ii}) By Proposition~\\ref{hyper:program}~(\\emph{i}), $P$ computes a subfunction of $f_H$. Thus, it suffices to show that $f_H$ is a subfunction of $f^\\vartriangle_{\\OMQI{H}}$. Let $H = (V,E)$ be a hypergraph of degree~$2$. For any $\\avec{\\alpha}\\in \\{0,1\\}^{\\smash{|H|}}$, we define~$\\avec{\\gamma}$ by taking $\\avec{\\gamma}(R_v) = \\avec{\\alpha}(p_v)$ for $v \\in V$, $\\avec{\\gamma}(A_e) = \\avec{\\alpha}(p_e)$ for $e \\in E$ (and $\\avec{\\gamma}(P_\\zeta) = 0$ for all normalisation predicates $P_\\zeta$). 
We prove that $f_H(\\avec{\\alpha}) = 1$ iff \\mbox{$\\mathcal{T}, \\mathcal{A}(\\avec{\\gamma}) \\models {\\boldsymbol q}$}. By the definition of $\\mathcal{T}$, for each $e\\in E$ with $A_e(a) \\in\\mathcal{A}(\\avec{\\gamma})$ or, equivalently, $\\avec{\\alpha}(p_e) = 1$, the canonical model $\\mathcal{C}_{\\mathcal{T},\\mathcal{A}(\\avec{\\gamma})}$ contains a labelled null $w_e$ such that \n\\begin{equation*}\n\\mathcal{C}_{\\mathcal{T},\\mathcal{A}(\\avec{\\gamma})} \\models \\bigwedge_{\\begin{subarray}{c}v\\in V\\\\i_1(v) = e\\end{subarray}} R_v(w_e,a) \\ \\ \\land \\bigwedge_{\\begin{subarray}{c}v\\in V\\\\i_2(v) = e\\end{subarray}} R_v(a,w_e).\n\\end{equation*}\n\n$(\\Rightarrow)$ Let $E'$ be an independent subset of $E$ such that $\\bigwedge_{v \\in V \\setminus V_{E'}} p_v \\land \\bigwedge_{e \\in E'} p_e$ is true on~$\\avec{\\alpha}$.\nDefine $h \\colon {\\boldsymbol q} \\to \\mathcal{C}_{\\mathcal{T}, \\mathcal{A}(\\avec{\\gamma})}$\nby taking $h(z_e) = a$ if $e\\notin E'$ and $h(z_e) = w_e$ otherwise.\nOne can check that $h$ is a homomorphism, and so $\\mathcal{T}, \\mathcal{A}({\\avec{\\gamma}}) \\models {\\boldsymbol q}$.\n\n\n$(\\Leftarrow)$ Given a homomorphism $h \\colon {\\boldsymbol q} \\to \\mathcal{C}_{\\mathcal{T}, \\mathcal{A}(\\avec{\\gamma})}$, we show that $E' = \\{e \\in E \\mid h(z_e) \\neq a\\}$ is independent. Indeed, if $e, e' \\in E'$ and $v \\in e \\cap e'$, then $h$ sends one variable of the $R_v$-atom to the labelled null $w_e$ and the other to $w_{e'}$, which is impossible. We claim that $f_H(\\avec{\\alpha}) = 1$. Indeed, for each $v \\in V\\setminus V_{E'}$, $h$ sends both variables of the $R_v$-atom to $a$, and so $\\avec{\\alpha}(p_v) = 1$. For each $e \\in E'$, we must have $h(z_e) = w_{e}$ because $h(z_e) \\neq a$, and so $\\avec{\\alpha}(p_e) = 1$. 
It follows that $f_H(\\avec{\\alpha}) = 1$.\n\\end{proof}\n\n\n\n\\subsection{Tree-Shaped OMQs and Tree Hypergraphs}\\label{sec:5.3}\n\nWe call an OMQ ${\\ensuremath{\\boldsymbol{Q}}} = (\\mathcal{T},{\\boldsymbol q})$ \\emph{tree-shaped} if the CQ ${\\boldsymbol q}$ is tree-shaped. We now establish a close correspondence between tree-shaped OMQs and tree hypergraphs that are defined as follows.\\!\\footnote{Our definition of tree hypergraph is a minor variant of the notion of (sub)tree hypergraph (aka hypertree) from graph theory~\\cite{Flament1978223,Brandstadt:1999:GCS:302970,Bretto:2013:HTI:2500991}.}\n\nSuppose $T=(V_T,E_T)$ is an (undirected) tree. A leaf is a vertex of degree~1. A subtree $T' = (V'_T,E'_T)$ of $T$ is said to be \\emph{convex} if, for any non-leaf vertex $u$ in the subtree~$T'$, we have $\\{u,v\\} \\in E'_T$ whenever $\\{u,v\\} \\in E_T$. \nA hypergraph $H = (V,E)$ is called a \\emph{tree hypergraph} if there is a tree $T=(V_T,E_T)$ such that $V = E_T$ and every hyperedge $e\\in E$ induces a convex subtree $T_e$ of $T$. In this case, we call $T$ the \\emph{underlying tree} of~$H$.\nThe \\emph{boundary} of a hyperedge $e$ consists of all leaves of $T_e$; the interior of $e$ is the set of non-leaves of~$T_e$.\nA \\emph{tree hypergraph program} (THGP) is an HGP based on a tree hypergraph. \n\n\n\\begin{example}\\label{ex:hypertree}\nLet $T$ be the tree shown in Fig.~\\ref{fig:tree}. 
Any tree hypergraph with underlying tree $T$ has the set of vertices\n$\\{\\{1,2\\}, \\{2,3\\}, \\{2,6\\}, \\{3,4\\}, \\{4,5\\}\\}$ (each vertex is an edge of $T$),\nand its hyperedges may include \n$\\{\\{1,2\\}, \\{2,3\\}, \\{2,6\\}\\}$, \nas the subtree of $T$ induced by these edges is convex, but not $\\{\\{1,2\\},\\{2,3\\}\\}$.\n\\begin{figure}[t]%\n\\centering%\n\\begin{tikzpicture}[xscale=2,yscale=0.7]\n\\coordinate (n1) at (0,0);\n\\coordinate (n3) at (2,0);\n\\coordinate (n6) at (1,1);\n\\node[draw,ultra thin,fill=gray!20,rounded corners,inner xsep=5mm,inner ysep=7mm,fit=(n1) (n3) (n6)] {};\n\\node[draw,ultra thin,fill=gray!5,rounded corners,inner xsep=3mm,inner ysep=5mm,fit=(n1) (n3)] {};\n\\node (1) at (0,0) [point,label=below:{\\footnotesize $1$}]{};\n\\node (2) at (1,0) [point,label=below:{\\footnotesize $2$}]{};\n\\node (3) at (2,0) [point,label=below:{\\footnotesize $3$}]{};\n\\node (4) at (3,0) [point,label=below:{\\footnotesize $4$}]{};\n\\node (5) at (4,0) [point,label=below:{\\footnotesize $5$}]{};\n\\node (6) at (1,1) [point,label=above:{\\footnotesize $6$}]{};\n\\draw[query] (1) to (2);\n\\draw[query] (2) to (3);\n\\draw[query] (3) to (4);\n\\draw[query] (4) to (5);\n\\draw[query] (2) to (6);\n\\node (c) at (3.5,1.4) {\\small\\begin{tabular}{c}non-convex\\\\[-1pt]subtree\\end{tabular}};\n\\node (d) at (-1.2,1.4) {\\small\\begin{tabular}{c}convex\\\\[-1pt]subtree\\end{tabular}};\n\\draw[thin] (0,0.9) -- (d);\n\\draw[thin] (2,0.3) -- (c);\n\\end{tikzpicture}%\n\\caption{Tree $T$ in Example~\\ref{ex:hypertree}.}\\label{fig:tree}\n\\end{figure}\n\\end{example}\n\n\n\n\\begin{theorem}\\label{prop:tree-shaped}\nIf an OMQ ${\\ensuremath{\\boldsymbol{Q}}} = (\\mathcal{T},{\\boldsymbol q})$ is tree-shaped, then $\\HG{{\\ensuremath{\\boldsymbol{Q}}}}$ is isomorphic to a tree hypergraph. 
Furthermore, if ${\\boldsymbol q}$ has at least one binary atom, then the number of leaves in the tree underlying $\\HG{{\\ensuremath{\\boldsymbol{Q}}}}$ is the same as the number of leaves in ${\\boldsymbol q}$.\n\\end{theorem}\n\\begin{proof}\nThe case when ${\\boldsymbol q}$ has no binary atoms is trivial. Otherwise, let $G_{\\boldsymbol q}$ be the Gaifman graph of ${\\boldsymbol q}$ whose vertices $u$ are labelled with the unary atoms $\\xi(u)$ in ${\\boldsymbol q}$ of the form $A(u)$ and $P(u,u)$, and whose edges $\\{u,v\\}$ are labelled with the atoms of the form $P(u,v)$ and $P'(v,u)$ in ${\\boldsymbol q}$.\nWe replace every edge $\\{u,v\\}$ labelled with $P_1(u_1',v_1'), \\dots, P_n(u_n',v_n')$, for $n \\ge 2$, by a sequence of $n$ edges forming a path from $u$ to $v$ and label them with $P_1(u_1',v_1'), \\dots, P_n(u_n',v_n')$, respectively. In the resulting tree, for every vertex $u$ labelled with $n$ unary atoms $\\xi_1(u),\\dots, \\xi_n(u)$, for $n\\ge 1$, we pick an edge $\\{u,v\\}$ labelled with some $P(u',v')$ and replace it by a sequence of $n+1$ edges forming a path from $u$ to $v$ and label them with $\\xi_1(u),\\dots, \\xi_n(u), P(u',v')$, respectively. The resulting tree $T$ has the same number of leaves as ${\\boldsymbol q}$. It is readily checked that, for any tree witness $\\t$ for ${\\ensuremath{\\boldsymbol{Q}}}$, the set of edges in $T$ labelled with atoms in ${\\boldsymbol q}_\\t$ forms a convex subtree of $T$, which gives a tree hypergraph isomorphic to~$\\HG{{\\ensuremath{\\boldsymbol{Q}}}}$.\n\\end{proof}\n\n\n\nSuppose $H=(V, E)$ is a tree hypergraph whose underlying tree $T = (V_T,E_T)$ has vertices $V_T = \\{1,\\dots, n\\}$, for $n > 1$, and $1$ is a leaf of $T$. Let $T^1 = (V_T,E^1_T)$ be the \\emph{directed} tree obtained from $T$ by fixing $1$ as the root and orienting the edges away from~$1$. 
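The convexity condition on hyperedges is easy to check mechanically. The following Python sketch is ours (the name `is_convex` is illustrative); it assumes the hyperedge already induces a connected subgraph of $T$ and verifies only the convexity condition:

```python
from collections import defaultdict

def is_convex(tree_edges, hyperedge):
    """Check the convexity condition on a hyperedge of a tree hypergraph:
    every non-leaf vertex of the induced subtree must keep ALL of its
    T-edges inside the hyperedge.  (Connectedness of the induced
    subgraph is assumed here, not checked.)

    tree_edges -- the edges of T, as a set of frozensets {u, v}
    hyperedge  -- a subset of tree_edges
    """
    degree = defaultdict(int)            # degrees within the induced subtree
    for edge in hyperedge:
        for u in edge:
            degree[u] += 1
    for u, d in degree.items():
        if d > 1:                        # u is a non-leaf of the subtree
            incident = {e for e in tree_edges if u in e}
            if not incident <= set(hyperedge):
                return False
    return True
```

On the tree of Example~\ref{ex:hypertree}, this accepts $\{\{1,2\},\{2,3\},\{2,6\}\}$ and rejects $\{\{1,2\},\{2,3\}\}$, since vertex $2$ is a non-leaf of the induced subtree but its edge $\{2,6\}$ is missing.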
We associate with $H$ a tree-shaped OMQ $\\OMQT{H} = (\\mathcal{T},{\\boldsymbol q})$, in which ${\\boldsymbol q}$ is the Boolean CQ\n\\begin{equation*}\n{\\boldsymbol q} \\ \\ = \\ \\ \\bigl\\{\\, R_{ij}(z_i, y_{ij}), \\ \\ S_{ij}(y_{ij}, z_{j}) \\mid (i,j) \\in E^1_{T} \\,\\bigr\\},\n\\end{equation*}\nwhere the $z_i$, for $i \\in V_T$, are the variables for vertices of the tree and the $y_{ij}$, for $(i,j)\\in E_T^1$, are the variables for the edges of the tree.\nTo define $\\mathcal{T}$, \nsuppose a hyperedge $e \\in E$ induces a convex directed subtree $T_e = (V_e,E_e)$ of $T^1$ with root $r^e\\in V_e$ and leaves $L_e \\subseteq V_e$. Denote by $\\mathcal{T}$ the ontology that contains the following axiom, for each $e\\in E$:\n\\begin{multline*}\nA_e(x) \\ \\ \\to \\ \\ \\exists y\\,\\bigl[\\bigwedge_{(i,j)\\in E_e, \\ i = r^e}\\hspace*{-1em} R_{r^ej}(x,y) \\ \\ \\land \\bigwedge_{(i,j)\\in E_e, \\ j\\in L_e} \\hspace*{-1.5em}S_{ij}(y,x) \\ \\ \\ \\land\\\\ \n\\exists z\\,\\bigl( \\bigwedge_{(i,j)\\in E_e, \\ i \\ne r^e} \\hspace*{-1.5em}R_{ij}(z,y) \\ \\ \\ \\land \\bigwedge_{(i,j)\\in E_e, \\ j\\notin L_e} \\hspace*{-2em} S_{ij}(y,z) \\bigr)\\bigr].\n\\end{multline*}\nSince $T_e$ is convex, its root, $r^e$, has only one outgoing edge, $(r^e,j)$, for some $j$, and so the first conjunct above contains a single atom, $R_{r^ej}(x,y)$. These axioms (together with convexity of hyperedges) \nensure that $\\OMQT{H}$ has a tree witness $\\t^e = (\\mathfrak{t}_\\mathsf{r}^e,\\mathfrak{t}_\\mathsf{i}^e)$, for $e\\in E$, with\n\\begin{align*}\n& \\mathfrak{t}_\\mathsf{r}^e \\ \\ = \\ \\ \\{\\, z_i \\mid i \\text{ is on the boundary of } e \\,\\},\\\\ \n& \\mathfrak{t}_\\mathsf{i}^e \\ \\ = \\ \\\n\\{\\,z_i \\mid i \\text{ is in the interior of } e\\,\\} \\ \\ \\cup \\ \\ \\{\\,y_{ij} \\mid (i,j) \\in e\\,\\}.\n\\end{align*}\nNote that $\\mathcal{T}$ is of depth 2, and $\\OMQT{H}$ is of polynomial size in $|H|$. 
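For concreteness, the construction of ${\boldsymbol q}$ from the directed tree $T^1$ can be sketched as follows (a hypothetical Python rendering for small inputs; the function and the concrete predicate/variable name strings are ours, not part of the formal development):

```python
def omq_cq_atoms(directed_edges):
    """Atoms of the Boolean CQ of OMQ_H built from the rooted tree T^1.

    Every directed edge (i, j) contributes R_ij(z_i, y_ij) and
    S_ij(y_ij, z_j), with z-variables for vertices and y-variables for
    edges.  Atoms are returned as (predicate, arg1, arg2) triples.
    """
    atoms = []
    for i, j in directed_edges:
        y = f"y_{i}{j}"
        atoms.append((f"R_{i}{j}", f"z_{i}", y))
        atoms.append((f"S_{i}{j}", y, f"z_{j}"))
    return atoms

# The tree of the running example, rooted at 1 and oriented away from it:
atoms = omq_cq_atoms([(1, 2), (2, 3), (2, 6), (3, 4), (4, 5)])
```

The five edges yield ten atoms, in line with the claim that $\OMQT{H}$ is of size polynomial (here, linear) in $|H|$.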
\n\n\n\\begin{example}\\label{ex:tree-hypergraph:char}\nLet $H$ be the tree hypergraph whose underlying tree is as in Example~\\ref{ex:hypertree} with fixed root $1$ and whose only hyperedge is $e = \\{\\{1,2\\},\\{2,3\\},\\{3,4\\},\\{2,6\\}\\}$. The CQ ${\\boldsymbol q}$ and the canonical model $\\mathcal{C}^{\\smash{A_e(a)}}_{\\mathcal{T}}$ for this $H$ are shown in Fig.~\\ref{fig:hypertree-query}.\n\\begin{figure}[t]%\n\\centering%\n\\begin{tikzpicture}\\footnotesize\n\\coordinate (z1) at (-1.5,2);\n\\coordinate (z4) at (5,2);\n\\coordinate (z2) at (1,-0.5);\n\\node[draw,thin,fill=gray!20,rounded corners,inner xsep=3mm,inner ysep=0mm,fit=(z1) (z2) (z4)] {};\n\\begin{scope}[draw=gray!5,fill=gray!5,line width=4mm]\n\\draw[->] (-1.5,2) -- (1,0);\n\\draw[->] (1,0) -- (3,0);\n\\draw[->] (1,0) -- (1,2);\n\\draw[->] (3,0) -- (5,2);\n\\end{scope}\n\\node[point,label=above:{$z_1$}] (y1) at (-1.5,2) {};\n\\node[point] (y12) at (-0.25,1) {};\n\\node[point,label=below:{$z_2$}] (y2) at (1,0) {};\n\\node[point] (y26) at (1,1) {};\n\\node[point,label=above:{$z_{6}$}] (y6) at (1,2) {};\n\\node[point] (y23) at (2,1) {};\n\\node[point,label=below:{$z_3$}] (y3) at (3,0) {};\n\\node[point] (y34) at (4,1) {};\n\\node[point,label=above:{$z_{4}$}] (y4) at (5,2) {};\n\\node[point] (y45) at (6,2) {};\n\\node[point,label=above:{$z_{5}$}] (y5) at (7,2) {};\n\\node at (-1.2, -0.1){\\normalsize ${\\boldsymbol q}_{\\t^e}$};\n\\node at (7, 0){\\large ${\\boldsymbol q}$};\n\\draw[->,query] (y1) to node[left] {\\footnotesize$R_{12}$} (y12);\n\\draw[->,query] (y12) to node[left] {\\footnotesize$S_{12}$} (y2);\n\\draw[->,query] (y2) to node[left,near end] {\\footnotesize$R_{26}$} (y26);\n\\draw[->,query] (y2) to node[right] {\\footnotesize$R_{23}$} (y23);\n\\draw[->,query] (y26) to node[left] {\\footnotesize$S_{26}$} (y6);\n\\draw[->,query] (y23) to node[right] {\\footnotesize$S_{23}$} (y3);\n\\draw[->,query] (y3) to node[right] {\\footnotesize$R_{34}$} (y34);\n\\draw[->,query] (y34) to 
node[left] {\\footnotesize$S_{34}$} (y4);\n\\draw[->,query] (y4) to node[below,near end] {\\footnotesize$R_{45}$} (y45);\n\\draw[->,query] (y45) to node[below] {\\footnotesize$S_{45}$} (y5);\n\\node (a) at (-4,2) [bpoint, label=right:{$A_e$}, label=left:{$a$}]{};\n\\node (d1) at (-4,1) [point, fill=white]{};\n\\node (d2) at (-4,0) [point, fill=white]{};\n\\node at (-5.2, 2){\\large $\\mathcal{C}^{A_e(a)}_{\\mathcal{T}}$};\n\\draw[can,->] (a) to node [right]{\\footnotesize $S_{34}^-,S_{26}^-$} node[left]{$R_{12}$} (d1);\n\\draw[can,->] (d1) to node [left]{\\footnotesize $S_{12},S_{23}$} node[right]{$R_{23}^-,R_{34}^-, R_{26}^-$} (d2);\n\\end{tikzpicture}%\n\\caption{The canonical model $\\mathcal{C}_\\mathcal{T}^{\\smash{A_e(a)}}$ and the query ${\\boldsymbol q}$ ($y_{ij}$ is the half-way point between $z_i$ and $z_j$) for the tree hypergraph $H$ in Example~\\ref{ex:tree-hypergraph:char}.}\\label{fig:hypertree-query}\n\\end{figure}\nNote the homomorphism from ${\\boldsymbol q}_{\\t^e}$ into $\\mathcal{C}^{\\smash{A_e(a)}}_{\\mathcal{T}}$.\n\\end{example}\n\n\nThe proofs of the following results (which are THGP analogues of Theorem~\\ref{hg-to-query} and Propositions~\\ref{hyper:program}~(\\emph{ii}) and~\\ref{lem:genhgps}, respectively) are given in Appendices~\\ref{app:proof:Th5.9} and~\\ref{app:proof:Prop5.10}:\n\n\\begin{theorem}\\label{tree-hg-to-query}\n\\textup{(}i\\textup{)} Any tree hypergraph $H$ is isomorphic to a subgraph of $\\HG{\\OMQT{H}}$.\n\n\\textup{(}ii\\textup{)} Any monotone THGP based on a tree hypergraph $H$ computes a subfunction of the primitive evaluation function $f^\\vartriangle_{\\OMQT{H}}$.\n\\end{theorem}\n\n\\begin{proposition}\\label{hyper:thgp}\n\\textup{(}i\\textup{)} For any tree hypergraph $H$ of degree at most $d$, there is a monotone THGP of size $O(|H|)$ that computes $f_H$ and such that its hypergraph is of degree at most $\\max(2,d)$.\n\n\\textup{(}ii\\textup{)} For every generalised THGP $P$ over $n$ variables, there is a 
THGP $P'$ such that $|P'| \\le n \\cdot |P|$ and $P'$ has the same degree and number of leaves as $P$ and computes the same function.\n\\end{proposition}\n\n\n\n\n\\subsection{OMQs with Bounded Treewidth CQs and Bounded Depth Ontologies}\\label{sec:boundedtw}\n\nRecall (see, e.g.,~\\cite{DBLP:series\/txtcs\/FlumG06}) that a \\emph{tree decomposition} of an undirected graph $G=(V,E)$ is a pair $(T,\\lambda)$, where $T$ is an (undirected) tree and $\\lambda$ a function from the set of nodes of $T$ to $2^V$ such that \n\\begin{nitemize}\n\\item[--] for every $v \\in V$, there exists a node $N$ with $v \\in \\lambda(N)$;\n\n\\item[--] for every $e \\in E$, there exists a node $N$ with $e \\subseteq \\lambda(N)$;\n\n\\item[--] for every $v \\in V$, the nodes $\\{N\\mid v \\in \\lambda(N)\\} $ induce a (connected) subtree of~$T$.\n\\end{nitemize}\nWe call the set $\\lambda(N) \\subseteq V$ a \\emph{bag for} $N$. The \\emph{width of a tree decomposition} $(T, \\lambda)$ is the size of its largest bag minus one. The \\emph{treewidth} of $G$ is the minimum width over all tree decompositions of $G$. The \\emph{treewidth of a CQ} ${\\boldsymbol q}$ is the treewidth of its Gaifman graph~$G_{\\boldsymbol q}$. 
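The three conditions are directly checkable; the following is a brute-force sketch (hypothetical Python for small inputs; it assumes the candidate $T$ is indeed a tree and verifies only the bag conditions):

```python
def is_tree_decomposition(vertices, edges, tree_edges, bags):
    """Verify the three conditions above for a candidate pair (T, lambda).

    `bags` maps each node N of T to its bag lambda(N) (a set of vertices
    of G); `tree_edges` are the undirected edges of T.
    """
    # (1) every vertex of G occurs in some bag
    if not all(any(v in bag for bag in bags.values()) for v in vertices):
        return False
    # (2) every edge of G is contained in some bag
    if not all(any(set(e) <= bag for bag in bags.values()) for e in edges):
        return False

    # (3) for each vertex v, the nodes whose bags contain v must induce
    #     a connected subtree of T
    def connected(nodes):
        seen, todo = set(), [next(iter(nodes))]
        while todo:
            n = todo.pop()
            if n not in seen:
                seen.add(n)
                todo += [m for a, b in tree_edges if n in (a, b)
                         for m in (a, b) if m in nodes]
        return seen == nodes

    return all(connected({n for n, bag in bags.items() if v in bag})
               for v in vertices)


def width(bags):
    return max(len(bag) for bag in bags.values()) - 1  # largest bag minus one
```

On a two-node tree with bags $\{y_1,y_2,y_4\}$ and $\{y_2,y_3,y_4\}$ over the diamond-shaped Gaifman graph drawn in the next example, the check succeeds and `width` reports $2$.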
\n\n\n\\begin{example}\\label{ex:boundedtree} \nThe Boolean CQ\n${\\boldsymbol q} = \\bigl\\{ R(y_2, y_1),\\ R(y_4, y_1), \\ S_1(y_4, y_3), \\ S_2(y_2, y_3)\\bigr\\}$\nand its tree decomposition $(T, \\lambda)$ of width 2 are shown in Fig.~\\ref{fig:tree-decomposition}, where $T$ has two nodes, $N_1$ and $N_2$, connected by an edge, with bags $\\lambda(N_1)= \\{y_1, y_2, y_4\\}$ and \\mbox{$\\lambda(N_2) = \\{y_2, y_3, y_4\\}$}.%\n\\begin{figure}[t]%\n\\centering%\n\\begin{tikzpicture}[bag\/.style={draw,rectangle,rounded corners=8,inner sep=8pt},yscale=0.8]\n\\draw[ultra thin,fill=gray!30,rounded corners=8] (-1,0.7) rectangle +(4,1.6);\n\\draw[ultra thin,fill=gray!20,fill opacity=0.5,rounded corners=8] (-0.8,1.7) rectangle +(3.6,1.6);\n\\draw[ultra thin,rounded corners=8] (-0.8,1.7) rectangle +(3.6,1.6);\n\\node[point,label=left:{${y_1}$}] (y1) at (1,3) {};\n\\node[point,label=right:{${y_2}$}] (y2) at (2,2) {};\n\\node[point,label=left:{${y_3}$}] (y3) at (1,1) {};\n\\node[point,label=left:{${y_4}$}] (y4) at (0,2) {};\n\\draw[->,query] (y4) to node[above left] {\\scriptsize$R$} (y1);\n\\draw[->,query] (y2) to node[above right] {\\scriptsize$R$} (y1);\n\\draw[->,query] (y4) to node[below left] {\\scriptsize$S_{1}$} (y3);\n\\draw[->,query] (y2) to node[below right] {\\scriptsize$S_{2}$} (y3);\n\\node[bag,fill=gray!10,label=right:{$N_1$}](N1) at (6,2.8){$y_1, y_2, y_4$};\n\\node[bag,fill=gray!30,label=right:{$N_2$}](N2) at (6,1.2){$y_2, y_3, y_4$};\n\\draw[-,query] (N1) to (N2);\n\\end{tikzpicture}\n\\caption{Tree decomposition in Example~\\ref{ex:boundedtree}.}\\label{fig:tree-decomposition}\n\\end{figure}%\n\\end{example}\n\nOur aim in this section is to show that, for any OMQ ${\\ensuremath{\\boldsymbol{Q}}}(\\avec{x}) = (\\mathcal{T},{\\boldsymbol q}(\\avec{x}))$ with ${\\boldsymbol q}$ of bounded treewidth and a finite fundamental set $\\Omega_{{\\ensuremath{\\boldsymbol{Q}}}}$, the modified tree-witness hypergraph function $\\smash{\\homfn}$ can be computed using 
a monotone THGP of size bounded by a polynomial in $|{\\boldsymbol q}|$ and $|\\Omega_{{\\ensuremath{\\boldsymbol{Q}}}}|$.\n\nLet $(T, \\lambda)$ be a tree decomposition of $G_{\\boldsymbol q}$ of width~\\mbox{$m - 1$}. \nIn order to refer to the variables of ${\\boldsymbol q}$, for each bag $\\lambda(N)$, we fix an order of variables in the bag and define an injection $\\nu_N\\colon \\lambda(N) \\to \\{1,\\dots, m\\}$ that gives the index of each $z$ in $\\lambda(N)$. \nA \\emph{\\textup{(}bag\\textup{)} type} is an $m$-tuple of the form $\\avec{w} = (\\avec{w}[1], \\dots, \\avec{w}[m])$, where each $\\avec{w}[i] \\in \\Omega_{{\\ensuremath{\\boldsymbol{Q}}}}$. Intuitively, the $i$th component $\\avec{w}[i]$ of $\\avec{w}$ indicates that the $i$th variable in the bag is mapped to a domain element of the form $a\\avec{w}[i]$ in the canonical model $\\mathcal{C}_{\\mathcal{T},\\mathcal{A}}$. \nWe say that a type $\\avec{w}$ is \\emph{compatible with a node} $N$ of $T$ if the following conditions hold, for all $z,z'\\in\\lambda(N)$:\n\\begin{enumerate}[(1)]\n\\item if $A(z) \\in {\\boldsymbol q}$ and $\\avec{w}[\\nu_N(z)] \\neq \\varepsilon$, then $\\avec{w}[\\nu_N(z)]= w\\varrho$ and $\\mathcal{T} \\models \\exists y\\, \\varrho(y,x) \\to A(x)$;\n\n\\item if $P(z,z') \\in {\\boldsymbol q}$ and either $\\avec{w}[\\nu_N(z)] \\ne\\varepsilon$ or $\\avec{w}[\\nu_N(z')] \\ne \\varepsilon$, then \n\\begin{nitemize}\n\\item[--] $\\avec{w}[\\nu_N(z)] = \\avec{w}[\\nu_N(z')]$ and $\\mathcal{T} \\models P(x,x)$, or \n\n\\item[--] $\\avec{w}[\\nu_N(z')]= \\avec{w}[\\nu_N(z)] \\varrho$ and $\\mathcal{T} \\models \\varrho(x,y) \\rightarrow P(x,y)$, or\n\n\\item[--] $\\avec{w}[\\nu_N(z)] = \\avec{w}[\\nu_N(z')] \\varrho^-$ and $\\mathcal{T} \\models \\varrho(x,y) \\rightarrow P(y,x)$.\n\\end{nitemize}\n\\end{enumerate}\nClearly, the type with all components equal to $\\varepsilon$ is compatible with any node $N$ and corresponds to mapping all variables in $\\lambda(N)$ to 
individuals in $\\mathsf{ind}(\\mathcal{A})$.\n\n\\begin{example}\\label{ex:5.11}\nSuppose $\\mathcal{T} = \\{\\,A(x) \\to \\exists y\\, R(x, y)\\,\\}$ and ${\\boldsymbol q}$ is the same as in Example~\\ref{ex:boundedtree}. Let $\\nu_{N_1}$ and $\\nu_{N_2}$ respect the order of the variables in the bags shown in Fig.~\\ref{fig:tree-decomposition}. The only types compatible with $N_1$ are $(\\varepsilon,\\varepsilon,\\varepsilon)$ and $(R, \\varepsilon,\\varepsilon)$, whereas the only type compatible with $N_2$ is $(\\varepsilon,\\varepsilon,\\varepsilon)$.\n\\end{example}\n\nLet $\\avec{w}_1, \\dots, \\avec{w}_M$ be all the bag types for $\\Omega_{\\ensuremath{\\boldsymbol{Q}}}$ ($M = |\\Omega_{\\ensuremath{\\boldsymbol{Q}}}|^m$). Denote by $T'$\nthe tree obtained from $T$ by replacing every edge $\\{N_i, N_j\\}$ with the following sequence of edges:\n\\begin{multline*}\n\\{N_i, u^1_{ij}\\},\\qquad \\{u^k_{ij}, v^k_{ij}\\} \\text{ and } \\{ v^k_{ij}, u^{k+1}_{ij} \\}, \\text{ for } 1 \\leq k < M, \\qquad \\{ u^M_{ij}, v^M_{ij}\\},\\qquad \\{ v^M_{ij}, v^M_{ji}\\},\\\\\n\\{v^M_{ji}, u^M_{ji}\\},\\qquad \\{ u^{k+1}_{ji}, v^k_{ji} \\} \\text{ and } \\{ v^k_{ji}, u^k_{ji}\\}, \\text{ for } 1 \\leq k < M,\\qquad \\{u^1_{ji}, N_j\\}, \n\\end{multline*}\nfor some fresh nodes $u^k_{ij}$, $v^k_{ij}$, $u^k_{ji}$ and $v^k_{ji}$.\nWe now define a generalised monotone THGP $P_{{\\ensuremath{\\boldsymbol{Q}}}}$ based on a hypergraph with the underlying tree $T'$. \nDenote by $[L]$ the set of nodes of the minimal convex subtree of $T'$ containing all nodes of $L$. 
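The operator $[\cdot]$ is simply path closure in a tree; a brute-force sketch (hypothetical Python, adequate for the small trees used here):

```python
def hull(tree_edges, nodes):
    """[L]: the nodes of the minimal convex subtree containing all of L.

    In a tree, this is the union of the unique simple paths between the
    members of L, found here by depth-first search.
    """
    adj = {}
    for a, b in tree_edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)

    def path(s, t):  # the unique simple path from s to t
        stack = [(s, [s])]
        while stack:
            n, p = stack.pop()
            if n == t:
                return p
            stack += [(m, p + [m]) for m in adj[n] if m not in p]

    closure = set(nodes)
    for s in nodes:
        for t in nodes:
            closure |= set(path(s, t))
    return closure
```

For instance, on the tree of Example~\ref{ex:hypertree}, the hull of $\{1,4\}$ is $\{1,2,3,4\}$.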
The hypergraph has the following hyperedges:\n\\begin{nitemize}\n\\item[--] $E_i^k = [N_i,u_{ij_1}^k, \\dots, u_{i j_n}^k]$ if $N_{j_1}, \\dots, N_{j_n}$ are the neighbours of $N_i$ in $T$ and $\\avec{w}_k$ is compatible with $N_i$;\n \n\\item[--] $E_{ij}^{k \\ell} = [v_{ij}^k, v_{ji}^\\ell]$ if $\\{N_i, N_j\\}$ is an edge in $T$ and $(\\avec{w}_k, \\avec{w}_\\ell)$ is compatible with $(N_i, N_j)$ in the sense that $\\avec{w}_k[\\nu_{N_i}(z)] = \\avec{w}_\\ell[\\nu_{N_j}(z)]$, for all $z\\in \\lambda(N_i)\\cap \\lambda(N_j)$. \n\\end{nitemize}\nWe label the vertices of the hypergraph---that is, the edges of $T'$---in the following way. The edges $\\{N_i, u_{ij}^1\\}$, $\\{v_{ij}^k, u_{ij}^{k +1}\\}$ and $\\{v_{ij}^M, v_{ji}^M\\}$ are labelled with $0$, and every edge $\\{u_{ij}^k, v_{ij}^k\\}$ is labelled with the conjunction of the following variables:\n\\begin{nitemize}\n\\item[--] $p_\\atom$, whenever $\\atom \\in {\\boldsymbol q}$, $\\avec{z}\\subseteq \\lambda(N_i)$ and \n$\\avec{w}_k[\\nu_{N_i}(z)]= \\varepsilon$, for all \n$z\\in \\avec{z}$;\n\n\\item[--] $p_{\\varrho^*(z)}$, whenever $A(z) \\in {\\boldsymbol q}$, $z\\in \\lambda(N_i)$ and \n$\\avec{w}_k[\\nu_{N_i}(z)] = \\varrho w$;\n\n\\item[--] $p_{\\varrho^*(z)}$, $p_{\\varrho^*(z')}$ and $p_{z=z'}$, whenever $R(z,z') \\in {\\boldsymbol q}$ (possibly with $z=z'$), $z,z'\\in \\lambda(N_i)$,\nand either $\\avec{w}_k[\\nu_{N_i}(z)]= \\varrho w$ or $\\avec{w}_k[\\nu_{N_i}(z')]= \\varrho w$.\n\\end{nitemize}\nThe following result is proved in Appendix~\\ref{app:proof:5.13}: \n\n\\begin{theorem}\\label{DL2THP}\nFor every OMQ ${\\ensuremath{\\boldsymbol{Q}}} = (\\mathcal{T},{\\boldsymbol q})$ with a fundamental set $\\Omega_{{\\ensuremath{\\boldsymbol{Q}}}}$ and with ${\\boldsymbol q}$ of treewidth~$t$,\nthe generalised monotone THGP $P_{{\\ensuremath{\\boldsymbol{Q}}}}$ computes $\\homfn$ and is of size polynomial in $|{\\boldsymbol q}|$ and $|\\Omega_{{\\ensuremath{\\boldsymbol{Q}}}}|^t$. 
\n\\end{theorem}\n\n\n\\begin{example}\\label{ex:thgp}\nLet ${\\ensuremath{\\boldsymbol{Q}}} = (\\mathcal{T},{\\boldsymbol q})$ be the OMQ from Example~\\ref{ex:5.11}. As we have seen, there are only two types compatible with nodes in $T$: $\\avec{w}_1=(\\varepsilon,\\varepsilon,\\varepsilon)$ and $\\avec{w}_2 = (R, \\varepsilon, \\varepsilon)$.\nThis gives us the generalised THGP $P_{{\\ensuremath{\\boldsymbol{Q}}}}$ shown in Fig.~\\ref{fig:thgp}, where the omitted labels are all~0. \n\\begin{figure}[t]%\n\\centering%\n\\begin{tikzpicture}[xscale=0.97]\n\\draw[ultra thin,fill=gray!10,rounded corners=8] (3,-1.1) rectangle +(7,2.2);\n\\node at (6.7,-0.9) {\\small $E_{12}^{11}$};\n\\draw[ultra thin,fill=gray!40,fill opacity=0.5,rounded corners=8] (0,-0.9) rectangle +(4,1.8);\n\\draw[ultra thin,rounded corners=8] (0,-0.9) rectangle +(4,1.8);\n\\node at (2,-0.7) {\\small $E_1^2$};\n\\draw[ultra thin,fill=gray!40,rounded corners=8] (0,-0.7) rectangle +(1,1.4);\n\\node at (0.5,-0.5) {\\small $E_1^1$};\n\\draw[ultra thin,fill=gray!40,rounded corners=8] (12,-0.7) rectangle +(1,1.4);\n\\node at (12.5,-0.5) {\\small $E_2^1$};\n\\draw[ultra thin,fill=gray!40,rounded corners=8] (7,-0.7) rectangle +(3,1.4);\n\\node at (8.9,-0.5) {\\small $E^{21}_{12}$};\n\\node[point,label=left:{$N_1$}] (N1) at (0,0) {};\n\\node[point,label=right:{$N_2$}] (N2) at (13,0) {};\n\\draw[query] (N1) -- (N2);\n\\node[point,label=above:{\\small $u_{12}^1$\\hspace*{1.8em}}] (u121) at (1,0) {};\n\\node[point,label=below:{\\small\\hspace*{1.8em}$v_{12}^1$}] (v121) at (3,0) {};\n\\node[point,label=above:{\\small$u_{12}^2$\\hspace*{1.8em}}] (u122) at (4,0) {};\n\\node[point,label=below:{\\small\\hspace*{1.8em}$v_{12}^2$}] (v122) at (7,0) {};\n\\node[point,label=below:{\\small$v_{21}^2$}] (v212) at (8,0) {};\n\\node[point,label=above:{\\small$u_{21}^2$}] (u212) at (9,0) {};\n\\node[point,label=below:{\\small$v_{21}^1$\\hspace*{1.8em}}] (v211) at (10,0) 
{};\n\\node[point,label=above:{\\small\\hspace*{1.8em}$u_{21}^1$}] (u211) at (12,0) {};\n\\node at (2, 0.2) {\\small $p_{R(y_4,y_1)}$};\n\\node at (2,-0.2) {\\small $p_{R(y_2, y_1)}$};\n\\node at (5.5, 0.2) {\\small$p_{y_4 = y_1} \\ \\ p_{y_2=y_1}$};\n\\node at (5.5,-0.4) {\\small\\begin{tabular}{c}$p_{R^*(y_1)} \\ \\ p_{R^*(y_2)}$\\\\$p_{R^*(y_4)}$\\end{tabular}};\n\\node at (11, 0.2) {\\small $p_{S_1(y_4,y_3)}$};\n\\node at (11,-0.2) {\\small $p_{S_2(y_2, y_3)}$};\n\\end{tikzpicture}%\n\\caption{THGP $P_{\\ensuremath{\\boldsymbol{Q}}}$ in Example~\\ref{ex:thgp}: non-zero labels of vertices in $P_{\\ensuremath{\\boldsymbol{Q}}}$ are given on the edges of the tree.}\\label{fig:thgp}\n\\end{figure}\nTo explain the meaning of $P_{{\\ensuremath{\\boldsymbol{Q}}}}$, suppose $\\mathcal{T},\\mathcal{A} \\models {\\boldsymbol q}$, for some data instance $\\mathcal{A}$. Then there is a homomorphism \\mbox{$h \\colon {\\boldsymbol q} \\to \\mathcal{C}_{\\mathcal{T},\\mathcal{A}}$}. This homomorphism defines the type of bag $N_1$, which can be either $\\avec{w}_1$ (if $h(z) \\in \\mathsf{ind}(\\mathcal{A})$ for all $z \\in \\lambda(N_1)$) or $\\avec{w}_2$ (if \\mbox{$h(y_1) = aR$} for some $a \\in \\mathsf{ind}(\\mathcal{A})$). These two cases are represented by the hyperedges $E^1_1 = [N_1, u^1_{12}]$ and $E^2_1 = [N_1, u^2_{12}]$. Since\n$\\{N_1, u^1_{12}\\}$ is labelled with 0, exactly one of them must be chosen to construct an independent subset of hyperedges covering all zeros. In contrast to that, there is no hyperedge $E^2_2$ because $\\avec{w}_2$ is not compatible with $N_2$, and so \\mbox{$E_2^1 = [u_{21}^1, N_2]$} must be present in every covering of all zeros. Both $(\\avec{w}_1, \\avec{w}_1)$ and $(\\avec{w}_2, \\avec{w}_1)$ are compatible with $(N_1, N_2)$, which gives $E^{11}_{12}= [v^1_{12}, v^1_{21}]$ and $E^{21}_{12}= [v^2_{12}, v^1_{21}]$. 
Thus, if $N_1$ is of type $\\avec{w}_1$, then we include $E^1_1$ and $E^{11}_{12}$ in the covering of all zeros, and so $p_{R(y_4,y_1)}\\land p_{R(y_2, y_1)}$ should hold. If $N_1$ is of type $\\avec{w}_2$, then instead of $E^{11}_{12}$, we take $E^{21}_{12}$, and so\n$p_{y_4 = y_1}\\land p_{y_2=y_1}\\land p_{R^*(y_1)} \\land p_{R^*(y_2)} \\land p_{R^*(y_4)}$ should be true. Finally, since $\\{v_{21}^1, u_{21}^1\\}$ does not belong to any hyperedge, $p_{S_1(y_4,y_3)}\\land p_{S_2(y_2, y_3)}$ must hold in either case.\n\\end{example}\n\n\n\\subsection{Summary}\n\nIn Tables~\\ref{table:OMQ-to-HGP} and~\\ref{table:HG-HGf}, we summarise the results of Section~\\ref{sec:OMQs&hypergraphs} that will be used in Section~\\ref{sec:7} to obtain lower and upper bounds for the size of OMQ rewritings. \nTable~\\ref{table:OMQ-to-HGP} shows how Theorems~\\ref{depth1} and~\\ref{prop:tree-shaped} (on the shape of tree-witness hypergraphs) combined with Proposition~\\ref{hyper:program}~(\\emph{ii}), as well as Theorem~\\ref{DL2THP} provide us with hypergraph programs computing tree-witness hypergraph functions for OMQs.\n\\begin{table}[t]\n\\caption{HGPs computing tree-witness hypergraph functions for OMQs.}\n{\\centerline{\\renewcommand{\\arraystretch}{1.5}\\begin{tabular}{cccc}\\toprule\nOMQ ${\\ensuremath{\\boldsymbol{Q}}}= (\\mathcal{T},{\\boldsymbol q})$ & $P_{\\ensuremath{\\boldsymbol{Q}}}$ & of size & computes \\\\\\midrule\n$\\mathcal{T}$ of depth~1 & $\\mathsf{mHGP}^2$ & $O(|{\\boldsymbol q}|)$ & $\\twfn$\\\\\ntree-shaped ${\\boldsymbol q}$ with $\\ell$ leaves & $\\mathsf{mTHGP}(\\ell)$ & $|{\\boldsymbol q}|^{O(\\ell)}$ & $\\twfn$\\\\\n{\\renewcommand{\\arraystretch}{1}\\begin{tabular}{c}${\\boldsymbol q}$ of treewidth $t$,\\\\$\\Omega_{{\\ensuremath{\\boldsymbol{Q}}}}$ a fundamental set\\end{tabular}} & $\\mathsf{mTHGP}$ & $|{\\boldsymbol q}|^{O(1)} \\cdot |\\Omega_{{\\ensuremath{\\boldsymbol{Q}}}}|^{O(t)}$ & 
$\\homfn$\\\\\\bottomrule\n\\end{tabular}}}%\n\\label{table:OMQ-to-HGP}\n\\end{table}\nTable~\\ref{table:HG-HGf} contains the representation results of Theorems~\\ref{hg-to-query},~\\ref{representable} and~\\ref{tree-hg-to-query} that show how abstract hypergraphs can be embedded into tree-witness hypergraphs of OMQs.\n\\begin{table}[t]\n\\caption{Representation results for classes of hypergraphs.}{\\centerline{\\renewcommand{\\arraystretch}{1.6}\\begin{tabular}{ccc}\\toprule\nhypergraph $H$ & is isomorphic to & \\rule[-12pt]{0pt}{30pt}\\parbox{43mm}{\\centering any mHGP based on $H$\\\\ computes a subfunction of} \\\\\\midrule\nany & a subgraph of $\\HG{{\\ensuremath{\\boldsymbol{Q}}}_H}$ & $f^\\vartriangle_{{\\ensuremath{\\boldsymbol{Q}}}_H}$ \\\\\nof degree 2 & $\\HG{\\OMQI{H}}$ & $f^\\vartriangle_{\\OMQI{H}}$ \\\\\ntree hypergraph & a subgraph of $\\HG{\\OMQT{H}}$ & $f^\\vartriangle_{\\OMQT{H}}$\\\\\\bottomrule\n\\end{tabular}}}\n\\label{table:HG-HGf}\n\\end{table}\n\n\n\n\n\\section{Hypergraph Programs and Circuit Complexity}\\label{sec:circuit_complexity}\n\nIn the previous section, we saw how different classes of OMQs gave rise to different classes of monotone HGPs. Here we characterise the computational power of HGPs in these classes by relating them to standard models of computation for Boolean functions. Table~\\ref{table:comp:classes} shows some of the obtained results. 
For example, its first row says that any Boolean function computable by a polynomial-size nondeterministic circuit can also be computed by a polynomial-size HGP of degree at most 3, and the other way round.\n\\begin{table}\n\\caption{Complexity classes, models of computation and corresponding classes of HGPs.}{{%\n\\renewcommand{\\arraystretch}{1.6}\\tabcolsep=10pt%\n\\begin{tabular}{lll}\\toprule\n{\\renewcommand{\\arraystretch}{0.8}\\begin{tabular}{c}complexity\\\\class\\end{tabular}} & \\mbox{}\\hfil model of computation & \\mbox{}\\hfil class of HGPs \\\\\\midrule\n$\\ensuremath{\\mathsf{NP}}\/\\mathsf{poly}$ & nondeterministic Boolean circuits & $\\mathsf{HGP} = \\mathsf{HGP}^d$, $d\\geq 3$\\\\\n$\\P\/\\mathsf{poly}$ & Boolean circuits & ---\\\\\n{\\tabcolsep=0pt\\renewcommand{\\arraystretch}{1}\\begin{tabular}{l}$\\ensuremath{\\mathsf{LOGCFL}}\/\\mathsf{poly}$\\\\($\\ensuremath{\\mathsf{SAC}}^1$)\\end{tabular}} & {\\tabcolsep=0pt\\renewcommand{\\arraystretch}{0.8}\\begin{tabular}{l}logarithmic-depth circuits with\\\\\\hspace*{1em}unbounded fan-in \\textsc{and}-gates and\\\\\\hspace*{1em}\\textsc{not}-gates only on inputs\\end{tabular}} & $\\mathsf{THGP}$ \\\\\n$\\ensuremath{\\mathsf{NL}}\/\\mathsf{poly}$ & nondeterministic branching programs & $\\mathsf{HGP}^2 = \\mathsf{THGP}(\\ell)$, $\\ell \\ge 2$\\\\\n$\\mathsf{NC}^1$ & Boolean formulas & $\\mathsf{THGP}^d$, $d \\ge 3$\\\\\n${\\ensuremath{\\mathsf{AC}^0}}$ & {\\tabcolsep=0pt\\renewcommand{\\arraystretch}{0.8}\\begin{tabular}{l}constant-depth circuits with\\\\\\hspace*{1em}unbounded fan-in \\textsc{and}- and \\textsc{or}-gates, and\\\\\\hspace*{1em}\\textsc{not}-gates only on inputs\\end{tabular}} & --- \\\\\n${\\ensuremath{\\mathsf{\\Pi}_3}}$ & ${\\ensuremath{\\mathsf{AC}^0}}$ circuits of depth 3 with output \\textsc{and}-gate & $\\mathsf{THGP}^2 = \\mathsf{THGP}^2(2)$\\\\\\bottomrule\n\\end{tabular}}%\n}\\label{table:comp:classes}\n\\end{table}\n\nWe remind the reader that the complexity classes in the 
table form the chain\n\\begin{equation} \\label{eq:inclusions}\n{\\ensuremath{\\mathsf{\\Pi}_3}} ~\\subsetneqq~ {\\ensuremath{\\mathsf{AC}^0}} ~\\subsetneqq~ \\mathsf{NC}^1 ~\\subseteq~ \\ensuremath{\\mathsf{NL}}\/\\mathsf{poly} ~\\subseteq~ \\ensuremath{\\mathsf{LOGCFL}}\/\\mathsf{poly} ~\\subseteq~ \\P\/\\mathsf{poly} ~\\subseteq~ \\ensuremath{\\mathsf{NP}}\/\\mathsf{poly}\n\\end{equation}\nand that whether any of the non-strict inclusions is actually strict remains a major open problem in complexity theory; see, e.g.,~\\cite{Arora&Barak09,Jukna12}. \nAll these classes are non-uniform in the sense that they are defined in terms\nof polynomial-size non-uniform sequences of Boolean circuits of certain shape and depth. The suffix `$\/\\mathsf{poly}$' comes from an alternative definition of $\\mathsf{C}\/\\mathsf{poly}$ in terms of Turing machines for the class $\\mathsf{C}$ with an additional advice input of polynomial size. \n\nWhen talking about complexity classes, instead of individual Boolean functions, we consider \\emph{sequences} of functions $f = \\{f_n\\}_{n < \\omega}$ with $f_n \\colon \\{0,1\\}^n \\to \\{0,1\\}$. \nThe same concerns circuits, HGPs and the other models of computation we deal with.\nFor example, we say that a circuit $\\boldsymbol{C} = \\{\\boldsymbol{C}_n\\}_{n < \\omega}$ \\emph{computes} a function $f = \\{f_n\\}_{n < \\omega}$ if $\\boldsymbol{C}_n$ computes $f_n$ for every $n < \\omega$. (It will always be clear from context whether $f$, $\\boldsymbol{C}$, etc.\\ denote an individual function, circuit, etc.\\ or a sequence thereof.) \nA circuit $\\boldsymbol{C}$ is said to be \\emph{polynomial} if there is a polynomial $p \\colon \\mathbb N \\to \\mathbb N$ such that $|\\boldsymbol{C}_n| \\le p(n)$, for every $n < \\omega$. The \\emph{depth} of~$\\boldsymbol{C}_n$ is the length of the longest directed path from an input to the output of~$\\boldsymbol{C}_n$. 
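The depth measure just defined is easy to make concrete on a circuit given as a gate-to-inputs map; the following is a hypothetical Python sketch (the representation and names are ours):

```python
from functools import lru_cache

def circuit_depth(gates):
    """Length of the longest directed path from an input to any gate.

    `gates` maps each gate name to the list of its input wires; the
    circuit inputs map to the empty list.  Acyclicity is assumed.
    """
    @lru_cache(maxsize=None)
    def depth(g):
        return 0 if not gates[g] else 1 + max(depth(h) for h in gates[g])

    return max(depth(g) for g in gates)

# (p AND q) fed into a NOT-gate: two inputs and two gates, depth 2
C = {"p": [], "q": [], "g1": ["p", "q"], "g2": ["g1"]}
```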
\n\nThe complexity class $\\P\/\\mathsf{poly}$ can be defined as comprising those Boolean functions that are computed by polynomial circuits, and $\\mathsf{NC}^1$ consists of functions computed by polynomial formulas (that is, circuits in which every logic gate has at most one output).\nAlternatively, a Boolean function is in $\\mathsf{NC}^1$ iff it can be computed by a polynomial-size circuit of logarithmic depth, whose \\textsc{and}- and \\textsc{or}-gates have two inputs.\n\n\n$\\ensuremath{\\mathsf{LOGCFL}}\/\\mathsf{poly}$ (also known as $\\ensuremath{\\mathsf{SAC}}^1$) is the class of Boolean functions computable by polynomial-size and logarithmic-depth circuits in which \\textsc{and}-gates have two inputs but \\textsc{or}-gates can have arbitrarily many inputs (\\emph{unbounded fan-in}) and \\textsc{not}-gates can only be applied to inputs of the circuit~\\cite{circuit}. \n${\\ensuremath{\\mathsf{AC}^0}}$ is the class of functions computable by polynomial-size circuits of constant depth with \\textsc{and}- and \\textsc{or}-gates of unbounded fan-in and \\textsc{not}-gates only at the inputs; ${\\ensuremath{\\mathsf{\\Pi}_3}}$ is the subclass of ${\\ensuremath{\\mathsf{AC}^0}}$ that only allows circuits of depth 3 (not counting the \\textsc{not}-gates) with an output \\textsc{and}-gate. 
\n\nFinally, a Boolean function $f = \\{f_n\\}_{n < \\omega}$ is in the class $\\ensuremath{\\mathsf{NP}}\/\\mathsf{poly}$ if there is a polynomial $p$ and a polynomial circuit $\\boldsymbol{C} = \\{\\boldsymbol{C}_{n+p(n)}\\}_{n < \\omega}$ such that, for any $n$ and $\\avec{\\alpha} \\in \\{0,1\\}^n$,\n\\begin{equation}\\label{eq:NP_poly}\nf_n(\\avec{\\alpha}) = 1 \\quad \\text{ iff } \\quad \\text{there is } \\avec{\\beta} \\in \\{0,1\\}^{p(n)} \\text{ such that } \\boldsymbol{C}_{n+p(n)}(\\avec{\\alpha},\\avec{\\beta}) = 1\n\\end{equation}\n(the $\\avec{\\beta}$-inputs are sometimes called \\emph{certificate inputs}).\n\nBy allowing only \\emph{monotone} circuits or formulas in the definitions of the complexity classes, we obtain their monotone variants: for example, the monotone variant of $\\ensuremath{\\mathsf{NP}}\/\\mathsf{poly}$ is denoted by $\\mathsf{mNP}\/\\mathsf{poly}$\nand defined by restricting the use of \\textsc{not}-gates in the circuits to the certificate inputs only. We note in passing that the monotone variants of the classes in~\\eqref{eq:inclusions} also form a chain~\\cite{Razborov85,AlonB87,KarchmerW88}:\n\\begin{equation} \\label{eq:inclusions_monotone}\n{\\ensuremath{\\mathsf{m\\Pi}_3}} \\subsetneqq {\\ensuremath{\\mathsf{mAC}^0}} ~\\subsetneqq~ \\mathsf{mNC}^1 ~\\subsetneqq~ \\mathsf{mNL}\/\\mathsf{poly} ~\\subseteq~ \\mathsf{mLOGCFL}\/\\mathsf{poly} ~\\subsetneqq~ \\mathsf{mP}\/\\mathsf{poly} ~\\subsetneqq~ \\mathsf{mNP}\/\\mathsf{poly}.\n\\end{equation}\nWhether the inclusion $\\mathsf{mNL}\/\\mathsf{poly} \\subseteq \\mathsf{mLOGCFL}\/\\mathsf{poly}$ is proper remains an open problem. 
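The certificate semantics of $\ensuremath{\mathsf{NP}}/\mathsf{poly}$ above can be sketched by brute force over the $\avec{\beta}$-inputs (hypothetical Python; the verifier `witness_zero` is an illustrative example of ours, not taken from the text):

```python
from itertools import product

def exists_certificate(circuit, alpha, p_n):
    """f(alpha) = 1 iff C(alpha, beta) = 1 for some beta in {0,1}^{p(n)},
    mirroring the displayed equivalence; exponential in p(n)."""
    return int(any(circuit(alpha, beta)
                   for beta in product((0, 1), repeat=p_n)))

# An illustrative verifier: beta certifies a position where alpha is 0,
# so the guessed-and-checked function is "alpha contains a zero".
def witness_zero(alpha, beta):
    return any(b and a == 0 for a, b in zip(alpha, beta))
```

For example, `exists_certificate(witness_zero, (1, 0, 1), 3)` is $1$, while on the all-ones input no certificate succeeds and the value is $0$.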
\n\nWe use these facts in the next section to show lower bounds on the size of OMQ rewritings.\n\n\n\n\n\\subsection{NP\/poly and HGP$^\\textbf{3}$}\n\n\nOur first result shows that $\\ensuremath{\\mathsf{NP}}\/\\mathsf{poly}$ and $\\mathsf{mNP}\/\\mathsf{poly}$ coincide with the classes $\\mathsf{HGP}^3$ and $\\mathsf{mHGP}^3$ of Boolean functions computable by polynomial-size (sequences of) HGPs and monotone HGPs of degree at most~3, respectively. \n\n\n\n\\begin{theorem}\\label{NBC}\n$\\ensuremath{\\mathsf{NP}}\/\\mathsf{poly} = \\mathsf{HGP} = \\mathsf{HGP}^3$ and $\\mathsf{mNP}\/\\mathsf{poly} = \\mathsf{mHGP} = \\mathsf{mHGP}^3$.\n\\end{theorem}\n\\begin{proof}\nSuppose $P$ is a (monotone) HGP. We construct a non-deterministic \ncircuit~$\\boldsymbol{C}$ of size polynomial in $|P|$, whose input variables are the same as the variables in $P$, certificate inputs correspond to the hyperedges of $P$, and such that $\\boldsymbol{C}(\\avec{\\alpha},\\avec{\\beta}) = 1$ iff $\\{e_i \\mid \\avec{\\beta}(e_i) = 1\\}$ is an independent set of hyperedges covering all zeros under $\\avec{\\alpha}$. It will then follow that\n\\begin{equation}\\label{eq:prop:NBC}\nP(\\avec{\\alpha}) = 1 \\quad \\text{ iff } \\quad \\text{there is } \\avec{\\beta} \\text{ such that } \\boldsymbol{C}(\\avec{\\alpha}, \\avec{\\beta}) = 1.\n\\end{equation}\nFirst, for each pair of intersecting hyperedges $e_i, e_j$ in $P$, we take the disjunction \\mbox{$\\neg e_i \\vee \\neg e_j$}, and, for each vertex in $P$ labelled with a literal $\\boldsymbol l$ (that is, $p$ or $\\neg p$) and the hyperedges $e_{i_1}, \\dots, e_{i_k}$ incident to it, we take the disjunction $\\boldsymbol l \\vee e_{i_1} \\lor \\dots \\lor e_{i_k}$. The circuit $\\boldsymbol{C}$ is then a conjunction of all such disjunctions. 
Note that if $P$ is monotone, then~$\\neg$ is only applied to the certificate inputs, $\\avec{e}$, in $\\boldsymbol{C}$.\n\n\\begin{figure}[t]%\n\\centering%\n\\begin{tikzpicture}\\footnotesize\n\\draw[ultra thin,fill=gray!60,fill opacity=0.5,rounded corners=8] (-1.5,-0.5) rectangle +(2,1);\n\\draw[ultra thin,fill=gray!20,fill opacity=0.5,rounded corners=8] (-0.5,-0.6) rectangle +(2,1.2);\n\\draw[ultra thin,rounded corners=8] (-1.5,-0.5) rectangle +(2,1);\n\\begin{scope}\\small\n\\node at (-1,0.7) {$\\bar{e}_i$};\n\\node at (1,0.8) {$e_i$};\n\\end{scope}\n\\node[point,label=above:{$g_i$}] (gxi) at (0,0) {};\n\\node[point,fill=gray,label=below:{$p$}] (xi) at (-1,0) {};\n\\node[point,fill=gray,label=below:{$\\neg p$}] (xi) at (1,0) {};\n\\node at (0,-1) {\\small $g_i = p$};\n\\begin{scope}[xshift=40mm]\n\\draw[ultra thin,fill=gray!60,fill opacity=0.5,rounded corners=8] (-1.5,-0.5) -- ++(0,1.7) -- ++(1,0) -- ++(0,-0.7) -- ++(1,0) -- ++ (0,-1) -- cycle;\n\\draw[ultra thin,fill=gray!20,fill opacity=0.5,rounded corners=8] (-0.5,-0.6) -- ++(2.1,0) -- ++ (0,1.9)-- ++ (-1.2,0) -- ++(0,-0.7) -- ++(-0.9,0) --cycle;\n\\draw[ultra thin,fill=gray!60,fill opacity=0.5,rounded corners=8] (-0.5,1) -- ++(0,1) -- ++(2,0) -- ++(0,-1.7) -- ++(-1,0) -- ++(0,0.7) -- cycle;\n\\draw[ultra thin,fill=gray!20,fill opacity=0.5,rounded corners=8] (-1.6,0.2) -- ++(0,1.9) -- ++(2,0) -- ++(0,-1) -- ++ (-1,0) -- ++(0,-0.9) -- cycle;\n\\draw[ultra thin,rounded corners=8] (-1.5,-0.5) -- ++(0,1.7) -- ++(1,0) -- ++(0,-0.7) -- ++(1,0) -- ++ (0,-1) -- cycle;\n\\draw[ultra thin,rounded corners=8] (-0.5,1) -- ++(0,1) -- ++(2,0) -- ++(0,-1.7) -- ++(-1,0) -- ++(0,0.7) -- cycle;\n\\begin{scope}\\small\n\\node at (-1.8,0) {$\\bar{e}_j$};\n\\node at (2,0) {$e_j$};\n\\node at (-1.9,1.5) {$e_i$};\n\\node at (1.9,1.5) {$\\bar{e}_i$};\n\\end{scope}\n\\node[point,label=above:{$g_i$}] (gxi) at (0,1.5) {};\n\\node[point,label=below:{$g_j$}] (gxj) at (0,0) {};\n\\node[point,fill=black] (xi) at (-1,0.75) 
{};\n\\node[point,fill=black] (xi) at (1,0.75) {};\n\\node at (0,-1) {\\small $g_i = \\neg g_j$};\n\\end{scope}\n\\begin{scope}[xshift=95mm]\n\\draw[ultra thin,fill=gray!60,fill opacity=0.5,rounded corners=8] (-2.5,-0.5) rectangle +(1,1.6);\n\\draw[ultra thin,fill=gray!60,fill opacity=0.5,rounded corners=8] (1.5,-0.5) rectangle +(1,1.6);\n\\draw[ultra thin,fill=gray!60,fill opacity=0.5,rounded corners=8] (-0.8,-0.5) rectangle +(1.6,2.6);\n\\draw[ultra thin,fill=gray!20,fill opacity=0.5,rounded corners=8] (-2.6,-0.6) rectangle +(2.45,0.8);\n\\draw[ultra thin,fill=gray!20,fill opacity=0.5,rounded corners=8] (0.15,-0.6) rectangle +(2.45,0.8);\n\\draw[ultra thin,fill=gray!20,fill opacity=0.5,rounded corners=8] (-2.6,0.4) rectangle +(3,0.8);\n\\draw[ultra thin,fill=gray!20,fill opacity=0.5,rounded corners=8] (-0.4,0.4) rectangle +(3,0.8);\n\\draw[ultra thin,fill=gray!20,fill opacity=0.5,rounded corners=8] (-0.4,1.4) rectangle +(2,0.8);\n\\draw[ultra thin,rounded corners=8] (-2.5,-0.5) rectangle +(1,1.6);\n\\draw[ultra thin,rounded corners=8] (1.5,-0.5) rectangle +(1,1.6);\n\\draw[ultra thin,rounded corners=8] (-0.8,-0.5) rectangle +(1.6,2.6);\n\\node[point,label=right:{$g_i$}] (gxi) at (0,1.7) {};\n\\node[point,label=above:{$u_i$}] (ui) at (0,0.7) {};\n\\node[point,label=right:{$g_k$}] (gxjp) at (1.9,-0.2) {};\n\\node[point,label=left:{$g_j$}] (gxj) at (-1.9,-0.2) {};\n\\node[point,fill=black] (xi) at (-0.5,-0.2) {};\n\\node[point,fill=black] (xip) at (0.5,-0.2) {};\n\\node[point,fill=black,label=left:{$v_j$}] (yi) at (-1.9,0.75) {};\n\\node[point,fill=black,label=right:{$v_k$}] (yio) at (1.9,0.75) {};\n\\begin{scope}\\small\n\\node at (-1.75,-0.9) {$e_j$};\n\\node at (1.75,-0.9) {$e_k$};\n\\node at (-2.75,0.4) {$\\bar{e}_j$};\n\\node at (2.85,0.4) {$\\bar{e}_k$};\n\\node at (1.9,1.8) {$e_i$};\n\\node at (-1.1,1.8) {$\\bar{e}_i$};\n\\end{scope}\n\\node at (0,-1) {\\small $g_i = g_j \\lor g_k$};\n\\end{scope}\n\\end{tikzpicture}\n\\caption{HGP in the proof of 
Theorem~\\ref{NBC}: black vertices are labelled with 1 and white vertices with 0.}\\label{fig:NBC}\n\\end{figure}\n\n\\smallskip\n\nConversely, let $\\boldsymbol{C}$ be a circuit with certificate inputs. We construct an HGP $P$ of degree at most 3 satisfying~\\eqref{eq:prop:NBC} as follows.\nFor each gate $g_i$ in $\\boldsymbol{C}$, the HGP contains a vertex $g_i$ labelled with $0$ and a pair of hyperedges $\\bar{e}_i$ and $e_i$, both containing $g_i$. No other hyperedge contains $g_i$, and so either $\\bar{e}_i$ or $e_i$ must be present in any cover of zeros. To ensure this property, for each gate $g_i$, we add the following vertices and hyperedges to $P$ (see Fig.~\\ref{fig:NBC}):\n\\begin{nitemize}\n\\item[--] if $g_i$ is an input $p$, then we add a vertex labelled with $\\neg p$ to $e_i$ and a vertex labelled with $p$ to $\\bar{e}_i$;\n\n\\item[--] if $g_i$ is a certificate input, then no additional vertices or hyperedges are added;\n\n\\item[--] if $g_i = \\neg g_j$, then we add a vertex labelled with $1$ to hyperedges $e_i$ and $\\bar{e}_j$, and a vertex labelled with $1$ to hyperedges $\\bar{e}_i$ and $e_j$;\n\n\\item[--] if $g_i = g_j \\lor g_k$, then we add a vertex labelled with $1$ to hyperedges $e_j$ and $\\bar{e}_i$,\nand a vertex labelled with $1$ to $e_k$ and $\\bar{e}_i$;\nthen, we add vertices $v_j$ and $v_k$ labelled with~$1$ to $\\bar{e}_j$ and $\\bar{e}_k$, respectively, and a vertex $u_{i}$ labelled with $0$ to $\\bar{e}_i$; finally, we add hyperedges $\\{v_j, u_i\\}$ and $\\{v_k, u_i\\}$ to $P$;\n\n\\item[--] if $g_i = g_j \\land g_k$, then we add the pattern dual to the case of $g_i = g_j \\lor g_k$:\nwe add a vertex labelled with $1$ to $\\bar{e}_j$ and $e_i$, and a vertex labelled with $1$ to $\\bar{e}_k$ and $e_i$;\nthen, we add vertices $v_j$ and $v_k$ labelled with $1$ to $e_j$ and $e_k$, respectively, and a vertex $u_{i}$ labelled with $0$ to $e_i$; finally, we add hyperedges $\\{v_j, u_i\\}$ and $\\{v_k, u_i\\}$ to 
$P$.\n\\end{nitemize}\nFinally, we add one more vertex labelled with $0$ to $e_m$ for the output gate $g_m$ of $\\boldsymbol{C}$, which ensures that $e_m$ must be included in the cover. \nIt is easily verified that the constructed HGP is of degree at most 3. \nOne can establish~\\eqref{eq:prop:NBC} by induction on the structure of $\\boldsymbol{C}$.\nWe illustrate the proof of the inductive step for the case of \\mbox{$g_i = g_j \\lor g_k$}: we show that $e_i$ is in the cover iff the cover contains $e_j$ or $e_k$.\nSuppose the cover contains~$e_j$. Then it cannot contain $\\bar{e}_i$, and so it contains $e_i$. The vertex $u_i$ in this case can be covered by $\\{v_j, u_i\\}$ since $\\bar{e}_j$ is not in the cover.\nConversely, if neither $e_j$ nor $e_k$ is in the cover, then it must contain both $\\bar{e}_j$ and\n$\\bar{e}_k$, and so neither $\\{v_j, u_i\\}$ nor $\\{v_k, u_i\\}$ can belong to the cover,\nand thus we will have to include $\\bar{e}_i$ in the cover.\n\nIf $\\boldsymbol{C}$ is monotone, then we remove from $P$ all vertices labelled with $\\neg p$, for an input~$p$, and denote the resulting program by $P'$. We claim that, for any $\\avec{\\alpha}$, we have $P'(\\avec{\\alpha})=1$ iff there is $\\avec{\\beta}$ such that $\\boldsymbol{C}(\\avec{\\alpha}, \\avec{\\beta})=1$. The implication $(\\Leftarrow)$ is trivial: if \\mbox{$\\boldsymbol{C}(\\avec{\\alpha}, \\avec{\\beta})=1$} then, by the argument above, $P(\\avec{\\alpha})=1$ and, clearly, $P'(\\avec{\\alpha}) = 1$. Conversely, suppose $P'(\\avec{\\alpha})=1$. Each of the vertices $g_i$ in $P'$ corresponding to the inputs is covered by one of the hyperedges $e_i$ or $\\bar{e}_i$. Let $\\avec{\\alpha}'$ be the vector corresponding to these hyperedges; clearly, $\\avec{\\alpha}' \\leq \\avec{\\alpha}$. This cover of vertices of $P'$ gives us $P(\\avec{\\alpha}')=1$.\nThus, by the argument above, there is $\\avec{\\beta}$ such that $\\boldsymbol{C}(\\avec{\\alpha}',\\avec{\\beta})=1$. 
Since $\\boldsymbol{C}$ is monotone, $\\boldsymbol{C}(\\avec{\\alpha},\\avec{\\beta})=1$.\n\\end{proof}\n\n\n\n\n\\subsection{NL\/poly and HGP$^\\textbf{2}$}\n\nA Boolean function belongs to the class $\\ensuremath{\\mathsf{NL}}\/\\mathsf{poly}$ iff it can be computed by a polynomial-size \\emph{nondeterministic branching program} (NBP). We remind the reader (consult~\\cite{Jukna12} for more details) that an NBP $B$ is a directed graph $G=(V,E)$, whose arcs are labelled with the Boolean constants 0 and 1, propositional variables $p_1, \\dots, p_n$ or their negations, and which distinguishes two vertices $s,t\\in V$. Given an assignment $\\avec{\\alpha}$ to variables $p_1,\\dots,p_n$, we write $s \\to_{\\avec{\\alpha}}t$ if there is a path in $G$ from~$s$ to~$t$ all of whose labels evaluate to $1$ under $\\avec{\\alpha}$. We say that an NBP $B$ \\emph{computes} a Boolean function $f$ if $f(\\avec{\\alpha}) = 1$ iff $s \\to_{\\avec{\\alpha}}t$, for any $\\avec{\\alpha} \\in \\{0,1\\}^n$.\nThe \\emph{size} $|B|$ of $B$ is the size of the underlying graph, $|V| + |E|$.\nAn NBP is \\emph{monotone} if there are no negated variables among its labels. The class of Boolean functions computable by polynomial-size monotone NBPs is denoted by $\\mathsf{mNL}\/\\mathsf{poly}$; the class of functions $f$ whose \\emph{duals} $f^*(p_1,\\dots,p_n) = \\neg f(\\neg p_1,\\dots,\\neg p_n)$ are in $\\mathsf{mNL}\/\\mathsf{poly}$ is denoted by $\\mathsf{co}\\text{-}\\mathsf{mNL}\/\\mathsf{poly}$.\n\n\n\n\n\\begin{theorem}\\label{thm:deg_2}\n$\\ensuremath{\\mathsf{NL}}\/\\mathsf{poly} = \\mathsf{HGP}^2$ and $\\mathsf{co}\\text{-}\\mathsf{mNL}\/\\mathsf{poly} = \\mathsf{mHGP}^2$.\n\\end{theorem}\n\\begin{proof}\nAs follows from~\\cite{szelepcsenyi88,immerman88}, if a function $f$ is computable by a polynomial-size NBP, then $\\neg f$ is also computable by a polynomial-size NBP. So suppose $\\neg f$ is computed by an NBP $B$. 
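The acceptance condition $s \to_{\avec{\alpha}} t$ defined above is ordinary graph reachability once the arcs whose labels evaluate to 0 under $\avec{\alpha}$ are discarded. A Python sketch (the label encoding and all names are ours), which can be tested on the three-arc NBP of Fig.~\ref{fig:deg_2}:

```python
from collections import deque

def nbp_accepts(arcs, s, t, alpha):
    """Evaluate a nondeterministic branching program.

    arcs: list of (u, v, label) with label 0, 1, 'p' or '~p'
    alpha: dict variable -> bool
    Returns True iff some s-t path uses only arcs whose labels
    evaluate to 1 under alpha.
    """
    def live(label):
        if label in (0, 1):
            return bool(label)
        return not alpha[label[1:]] if label.startswith('~') else alpha[label]

    # keep only the arcs that are "live" under the assignment
    adj = {}
    for u, v, lab in arcs:
        if live(lab):
            adj.setdefault(u, []).append(v)
    # breadth-first search from s
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False
```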
We construct an HGP $P$ computing~$f$ of degree at most~2 and polynomial size in $|B|$ as follows (see Fig.~\\ref{fig:deg_2}). \n\\begin{figure}[t]%\n\\centering%\n\\begin{tikzpicture}\\footnotesize\n\\node[point,label=left:{$v_0$}] (v0) at (-1.5,0) {};\n\\node[point,label=above:{$v_1$}] (v1) at (0,0) {};\n\\node[point,label=above:{$v_2$}] (v2) at (1.5,0.5) {};\n\\node[point,label=below:{$v_3$}] (v3) at (1.5,-0.5) {};\n\\draw[->,thick] (v0) -- (v1) node[below,midway,sloped] {\\small${\\scriptstyle e_0\\colon} q$};\n\\draw[->,thick] (v1) -- (v2) node[above,midway,sloped] {\\small${\\scriptstyle e_1\\colon} \\neg q$};\n\\draw[->,thick] (v1) -- (v3) node[below,midway,sloped] {\\small${\\scriptstyle e_2\\colon} p$};\n\\begin{scope}[xshift=30mm]\n\\draw[ultra thin,fill=gray!60,fill opacity=0.5,rounded corners=8] (2.1,-1) -- ++(0,2) -- ++(2.4,0) -- ++(0,-2) -- cycle;\n\\draw[ultra thin,rounded corners=8] (2.1,-1) -- ++(0,2) -- ++(2.4,0) -- ++(0,-2) -- cycle;\n\\draw[ultra thin,fill=gray!20,fill opacity=0.5,rounded corners=8] (0.3,-0.4) -- ++(0,0.8) -- ++(2.9,0) -- ++(0,-0.8) -- cycle;\n\\draw[ultra thin,rounded corners=8] (0.3,-0.4) -- ++(0,0.8) -- ++(2.9,0) -- ++(0,-0.8) -- cycle;\n\\draw[ultra thin,fill=gray!20,fill opacity=0.5,rounded corners=8] (3.3,0) -- ++(0,0.7) -- ++(2.9,0.25) -- ++(0,-0.7) -- cycle;\n\\draw[ultra thin,rounded corners=8] (3.3,0) -- ++(0,0.7) -- ++(2.9,0.25) -- ++(0,-0.7) -- cycle;\n\\draw[ultra thin,fill=gray!20,fill opacity=0.5,rounded corners=8] (3.3,0) -- ++(0,-0.7) -- ++(2.9,-0.25) -- ++(0,0.7) -- cycle;\n\\draw[ultra thin,rounded corners=8] (3.3,0) -- ++(0,-0.7) -- ++(2.9,-0.25) -- ++(0,0.7) -- cycle;\n\\draw[ultra thin,rounded corners=8] (2.1,-1) -- ++(0,2) -- ++(2.4,0) -- ++(0,-2) -- cycle;\n\\node at (3.3,1.3) {$v_1$-hyperedge};\n\\node at (1,0.7) {$e_0$-hyperedge};\n\\node[rotate=5] at (5.5,1.2) {$e_1$-hyperedge};\n\\node[rotate=-5] at (5.5,-1.2) 
{$e_2$-hyperedge};\n\\node[point,fill=gray,label=left:{\\scriptsize$e_0^0$},label=below:{\\small $\\neg q$}] (e00) at (1,0) {};\n\\node[point,fill=gray,label=left:{\\scriptsize$e_1^0$},label=above:{\\small $q$}] (e10) at (4,0.25) {};\n\\node[point,fill=gray,label=left:{\\scriptsize$e_2^0$},label=below:{\\small $\\neg p$}] (e20) at (4,-0.25) {};\n\\node[point,fill=black,label=right:{\\scriptsize$e_0^1$}] (e01) at (2.5,0) {};\n\\node[point,fill=black,label=right:{\\scriptsize$e_1^1$}] (e11) at (5.5,0.5) {};\n\\node[point,fill=black,label=right:{\\scriptsize$e_2^1$}] (e21) at (5.5,-0.5) {};\n\\end{scope}\n\\end{tikzpicture}\n\\caption{HGP in the proof of Theorem~\\ref{thm:deg_2}: black vertices are labelled with 1.}\\label{fig:deg_2}\n\\end{figure}\nFor each arc $e$ in $B$, the HGP $P$ has two vertices $e^0$ and $e^1$, which represent the beginning and the end of $e$, respectively. The vertex $e^0$ is labelled with the \\emph{negated} label of $e$ in $B$ and $e^1$ with $1$. For each arc $e$ in $B$, the HGP $P$ has an $e$-hyperedge $\\{e^0, e^1\\}$.\nFor each vertex $v$ in $B$ but $s$ and $t$, the HGP $P$ has a $v$-hyperedge comprising all vertices $e^1$ for the arcs $e$ leading to $v$, and all vertices $e^0$ for the arcs $e$ leaving $v$. We also add to the HGP $P$ a vertex $w$ labelled with~$0$ and\na hyperedge, $\\bar e_w$, that consists of $w$ and all vertices $e^1$ for the arcs $e$ in $B$ leading to $t$. \nWe claim that the constructed HGP $P$ computes $f$.\nIndeed, if $s \\not\\to_{\\avec{\\alpha}} t$ then the following subset of hyperedges is independent and covers all zeros: all $e$-hyperedges, for the arcs~$e$ reachable from $s$ and labelled with 1 under $\\avec{\\alpha}$, and all $v$-hyperedges with $s\\not\\to_{\\avec{\\alpha}} v$ (including $\\bar e_w$).\nConversely, if $s\\to_{\\avec{\\alpha}} t$ then one can show by induction that, for each arc $e$ of the path, the $e$-hyperedge must be in the cover of all zeros. 
Thus, no independent set can cover $w$, which is labelled with 0.\n\n\nConversely, suppose $f$ is computed by an HGP $P$ of degree 2 with hyperedges $e_1, \\dots, e_k$. We first provide a graph-theoretic characterisation of independent sets covering all zeros based on the implication graph~\\cite{AspvallPlassTarjan79}. \nWith every hyperedge $e_i$ we associate a propositional variable $u_i$ and with every assignment $\\avec{\\alpha}$ we associate the following set $\\Phi_{\\avec{\\alpha}}$ of propositional binary clauses:\n\\begin{align*}\n& \\neg u_i \\lor \\neg u_j, && \\text{if } \\ e_i\\cap e_j \\ne \\emptyset,\\\\\n& u_i \\lor u_j, && \\text{if there is } \\ v\\in e_i\\cap e_j \\text{ with } \\avec{\\alpha}(v) = 0.\n\\end{align*}\nInformally, the former means that intersecting hyperedges cannot be chosen at the same time, and the latter that all zeros must be covered; note that every vertex has at most two incident hyperedges because $P$ is of degree~2.\nBy definition, $X$ is an independent set covering all zeros iff $X = \\{ e_i \\mid \\avec{\\gamma}(u_i) = 1\\}$, for some assignment $\\avec{\\gamma}$ satisfying $\\Phi_{\\avec{\\alpha}}$. Let $C_{\\avec{\\alpha}} = (V, E_{\\avec{\\alpha}})$ be the implication graph of $\\Phi_{\\avec{\\alpha}}$, that is, a directed graph with\n\\begin{align*}\nV & ~=~ \\bigl\\{ u_i, \\bar{u}_i \\mid 1\\leq i \\leq k\\bigr\\},\\\\\nE_{\\avec{\\alpha}} & ~=~ \\bigl\\{ (u_i, \\bar{u}_j) \\mid e_i \\cap e_j \\ne \\emptyset \\bigr\\} \\ \\cup\n\\bigl\\{ (\\bar{u}_i,u_j) \\mid \\text{there is } v\\in e_i\\cap e_j \\text{ with } \\avec{\\alpha}(v) = 0 \\bigr\\}.\n\\end{align*}\n($V$ is the set of all `literals' for the variables of $\\Phi_{\\avec{\\alpha}}$ and $E_{\\avec{\\alpha}}$ is the set of arcs for the implicational form of the clauses of $\\Phi_{\\avec{\\alpha}}$.) 
Note that $\\neg u_i \\lor \\neg u_j$ gives rise to two implications, $u_i \\to \\neg u_j$ and $u_j \\to \\neg u_i$, and so to two arcs in the graph; similarly, for $u_i \\lor u_j$.\nBy~\\cite[Theorem~1]{AspvallPlassTarjan79},\n$\\Phi_{\\avec{\\alpha}}$ is satisfiable iff there is no $u_i$ with a (directed) cycle going through $u_i$ and $\\bar{u}_i$.\nIt will be convenient for us to regard the $C_{\\avec{\\alpha}}$, for assignments~$\\avec{\\alpha}$, as a single labelled directed graph $C$ with arcs of the form $(u_i, \\bar{u}_j)$ labelled with $1$ and arcs of the form $(\\bar{u}_i, u_j)$ labelled with the negation of the literal labelling the uniquely defined $v\\in e_i\\cap e_j$ (recall that the hypergraph of $P$ is of degree~2). It should be clear that $C_{\\avec{\\alpha}}$ has a cycle going through $u_i$ and $\\bar{u}_i$ iff we have both $\\bar{u}_i \\to_{\\avec{\\alpha}} u_i$ and $u_i \\to_{\\avec{\\alpha}} \\bar{u}_i$ in $C$.\nThe required NBP $B$ contains distinguished vertices $s$ and $t$, and, for each hyperedge $e_i$ in $P$, two copies, $C_i^0$ and $C_i^1$, of $C$ with additional arcs from $s$ to the $\\bar{u}_i$ vertex of $C_i^0$, from the $u_i$ vertex of $C_i^0$ to the $u_i$ vertex of $C_i^1$, and from the $\\bar{u}_i$ vertex of $C_i^1$ to $t$; see Fig.~\\ref{fig:deg_2:NBP}. 
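The criterion of~\cite{AspvallPlassTarjan79} invoked above is easy to test directly: build the implication graph and look for a variable whose two literals lie on a common directed cycle. A Python sketch (the integer encoding of literals is ours; a linear-time implementation would use strongly connected components rather than repeated searches):

```python
def two_sat_satisfiable(n, clauses):
    """Aspvall-Plass-Tarjan style test for a 2-CNF over variables 1..n:
    the formula is satisfiable iff no variable has a directed cycle
    through both of its literals in the implication graph.

    clauses: list of pairs of nonzero ints; k stands for u_k and -k for
    its negation.  A clause (a, b) contributes the arcs ~a -> b, ~b -> a.
    """
    adj = {lit: [] for v in range(1, n + 1) for lit in (v, -v)}
    for a, b in clauses:
        adj[-a].append(b)
        adj[-b].append(a)

    def reaches(src, dst):
        # depth-first search over the implication graph
        seen, stack = {src}, [src]
        while stack:
            u = stack.pop()
            if u == dst:
                return True
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return False

    return not any(reaches(v, -v) and reaches(-v, v)
                   for v in range(1, n + 1))
```

For example, the clauses $u_1 \lor u_2$ and $\neg u_1 \lor \neg u_2$ (choose exactly one hyperedge) are satisfiable, whereas forcing both $u_1$ and $\neg u_1$ is not.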
By construction, $s\\to_{\\avec{\\alpha}} t$ iff there is a hyperedge $e_i$ in $P$ such that $C_{\\avec{\\alpha}}$ contains a cycle going through $u_i$ and $\\bar{u}_i$.\nWe have thus constructed a polynomial-size NBP $B$ computing $\\neg f$, and thus $f$ must also be computable by a polynomial-size NBP.\n\n\\begin{figure}[t]%\n\\centering%\n\\begin{tikzpicture}\\footnotesize\n\\node[point,label=left:{$s$}] at (0,0) (s) {}; \n\\filldraw[fill=gray!20] (1,-0.5) rectangle +(3,1);\n\\filldraw[fill=gray!20] (5,-0.5) rectangle +(3,1);\n\\node[point,label=right:{$t$}] at (9,0) (t) {}; \n\\node[point,label=above:{$\\bar{u}_i$}] at (1.5,-0.25) (bui) {}; \n\\node at (2.5,0) {\\normalsize $C_i^0$};\n\\node[point,label=below:{$u_i$}] at (3.5,0.25) (ui) {}; \n\\draw[thick,->] (s) -- (bui);\n\\node[point,label=below:{$u_i$}] at (5.5,0.25) (bui2) {}; \n\\node at (6.5,0) {\\normalsize $C_i^1$};\n\\node[point,label=above:{$\\bar{u}_i$}] at (7.5,-0.25) (ui2) {}; \n\\draw[thick,->] (ui2) -- (t);\n\\draw[thick,->] (ui) -- (bui2);\n\\end{tikzpicture}\n\\caption{The NBP in the proof of Theorem~\\ref{thm:deg_2}.}\\label{fig:deg_2:NBP}\n\\end{figure}\n\n\nAs to $\\mathsf{co}\\text{-}\\mathsf{mNL}\/\\mathsf{poly} = \\mathsf{mHGP}^2$, observe that the first construction, if applied to a monotone NBP for $f^*$, produces a polynomial-size HGP of degree~2 computing $\\neg f^*$, all of whose labels are negative. By removing negations from labels, we obtain a monotone HGP computing~$f$.\nThe second construction allows us to transform a monotone HGP of degree~2 for $f$ into an NBP with only negative literals that computes $\\neg f$. By changing the polarity of the literals in the labels, we obtain a monotone NBP computing $f^*$. 
\n\\end{proof}\n\n\\subsection{NL\/poly and THGP($\\ell$)}\\label{sec:NLpoly-THGP}\n\nFor any natural $\\ell \\ge 2$, we denote by $\\mathsf{THGP}(\\ell)$ and $\\mathsf{mTHGP}(\\ell)$ the classes of Boolean functions computable by (sequences of) polynomial-size THGPs and, respectively, monotone THGPs whose underlying trees have at most $\\ell$ leaves. \n\n\n\n\\begin{theorem}\\label{thm:linear_hgp}\n$\\ensuremath{\\mathsf{NL}}\/\\mathsf{poly} = \\mathsf{THGP}(\\ell)$ and $\\mathsf{mNL}\/\\mathsf{poly} = \\mathsf{mTHGP}(\\ell)$, for any $\\ell \\ge 2$.\n\\end{theorem}\n\\begin{proof}\nSuppose a polynomial-size THGP $P$ computes a Boolean function $f$. \nFor simplicity, we consider only $\\ell =2$ here and prove the general case in Appendix~\\ref{app:circuit_complexity}. Thus, we can assume that the vertices $v_1, \\dots, v_n$ of $P$ are consecutive edges of the path graph underlying $P$, and therefore, every hyperedge in $P$ is of the form \\mbox{$[v_i,v_{i+m}]=\\{v_i,\\dots,v_{i+m}\\}$}, for some $m \\ge 0$. We add to $P$ two extra vertices, $v_0$ and $v_{n+1}$ (thereby extending the underlying 2-leaf tree to $v_0,v_1, \\dots, v_n,v_{n+1}$) and label them with 0; we also add two hyperedges $s = \\{v_0\\}$ and $t = \\{v_{n+1}\\}$ to $P$. Clearly, the resulting THGP $P'$ computes the same $f$.\nTo construct a polynomial-size NBP $B$ computing $f$, we take a directed graph whose vertices are hyperedges of $P'$ and which contains an arc from $e_i = [v_{i_1}, v_{i_2}]$ to $e_j = [v_{j_1}, v_{j_2}]$ iff \n$i_2 < j_1$; we label this arc with $\\bigwedge_{i_2 < k < j_1} \\boldsymbol l_k$, where $\\boldsymbol l_k$ is the label of $v_k$ in $P$.\nIt is not hard to see that a path from $s$ to $t$ whose labels evaluate to~1 under a given assignment~$\\avec{\\alpha}$ corresponds to a cover of zeros in $P'$ under $\\avec{\\alpha}$. 
Finally, to get rid of conjunctive labels on edges, we replace every arc with a label $\\boldsymbol l_{i_1}\\land \\dots\\land \\boldsymbol l_{i_k}$ by a sequence of $k$ arcs consequently labelled with $\\boldsymbol l_{i_1}, \\dots, \\boldsymbol l_{i_k}$.\n\nConversely, suppose a Boolean function $f$ is computed by an NBP $B$ based on a directed graph with vertices $V = \\{v_1, \\dots, v_n\\}$, edges $E = \\{e_1, \\dots, e_m\\}$, $s=v_1$ and $t=v_n$. Without loss of generality, we assume that $e_m$ is a loop from $t$ to $t$ labelled with~1. Thus, if there is a path from $s$ to $t$ whose labels evaluate to 1, then there is such a path of length $n-1$. \nWe now construct a polynomial-size THGP computing $f$ whose underlying tree $T$ has two leaves. The vertices of the tree $T$ are arranged into $n$ vertex blocks and $n-1$ edge blocks, which alternate. The $k$th vertex (edge) block contains two copies $v_i^k, \\bar v_i^k$ (respectively, $e_i^k, \\bar e_i^k$) of every $v_i \\in V$\n(respectively, $e_i \\in E$):\n\\begin{multline*}\n\\underbrace{\\textcolor{gray!50}{v_1^1}, \\bar{v}_1^1, v_2^1,\\bar{v}_2^1,\\dots,v_n^1,\\bar{v}_n^1,}_{\\text{1st vertex block}} \\ \\ \n\\underbrace{e_1^1, \\bar{e}_1^1, e_2^1,\\bar{e}_2^1,\\dots,e_m^1,\\bar{e}_m^1,}_{\\text{1st edge block}} \\ \\ \n\\underbrace{v_1^2, \\bar{v}_1^2, v_2^2,\\bar{v}_2^2,\\dots,v_n^2,\\bar{v}_n^2,}_{\\text{2nd vertex block}} \\ \\ \\dots\\\\\n\\underbrace{e_1^{n-1}, \\bar{e}_1^{n-1}, e_2^{n-1},\\bar{e}_2^{n-1},\\dots,e_m^{n-1},\\bar{e}_m^{n-1},}_{\\text{$(n-1)$th edge block}} \\ \\ \n\\underbrace{v_1^n, \\bar{v}_1^n, v_2^n,\\bar{v}_2^n,\\dots,v_n^n,\\textcolor{gray!50}{\\bar{v}_n^n}}_{\\text{$n$th vertex block}}.\n\\end{multline*}\nWe remove the first, $v^1_1$, and last vertex, $\\bar v_n^n$ (shown in grey in the formula above), and connect the adjacent vertices by edges to construct the undirected tree $T$. 
\nConsider now a hypergraph~$H$ whose vertices are the edges of $T$ and whose hyperedges are of the form $h^k_i = [\\bar v^k_j, e^k_i]$ and $g^k_i = [{\\bar e}^k_i, v^{k+1}_{j'}]$, for $e_i =(v_j,v_{j'}) \\in E$ and $1 \\leq k < n$. The vertices of $H$ of the form $\\{e^k_i,\\bar e^k_i\\}$, which separate hyperedges $h^k_i$ and $g^k_i$, are labelled with the label of $e_i$ in the given NBP $B$, and all other vertices of $H$ with $0$. \nWe show now that the constructed THGP $P$ computes $f$. Indeed, if $f(\\avec{\\alpha})=1$,\nthen there is a path $e_{i_1}, \\dots, e_{i_{n-1}}$ from $v_1$ to $v_n$ whose labels evaluate to 1 under $\\avec{\\alpha}$. It follows that $\\{h^k_{i_k}, g^k_{i_k} \\mid 1 \\leq k< n\\}$ is an independent set in $H$ covering all zeros. \nConversely, if $E'$ is an independent set in~$H$ and covers all zeros under $\\avec{\\alpha}$, then it must contain exactly one pair of hyperedges $h^k_{i_k}$ and $g^k_{i_k}$ for every $k$ with $1 \\leq k < n$.\n\\draw[->] (g1) to (g4);\n\\draw[->] (g2) to (g4);\n\\draw[->] (g4) to (g6);\n\\draw[->] (g3) to (g6);\n\\draw[->] (g4) to (g7);\n\\draw[->] (g5) to (g7);\n\\draw[->] (g6) to (g8);\n\\draw[->] (g7) to (g8);\n\\begin{scope}[yshift=5mm,xshift=-5mm]\\footnotesize\n\\begin{scope}[rounded corners=2mm]\n\\filldraw[fill=gray!60,ultra thin] (6,1.9) -- ++(-3,0) -- ++(0,-0.3) -- ++(1.35,-0.9) -- ++(2.2,0) -- ++(0.9,-1) -- ++(1.2,0) -- ++(0,0.9) -- cycle; \n\\filldraw[fill=gray!40,ultra thin,fill opacity=0.5] (8,2.5) -- ++(-4,0) -- ++(0,-0.9) -- ++(1.35,-0.9) -- ++(5.6,0) -- ++(0,0.9) -- cycle; \n\\end{scope}\n\\node[rotate=-15] at (8.8,1.8) {$g_7 = g_5 \\land g_4$};\n\\node[rotate=-35] at (4,1.3) {$g_6 = g_3 \\land g_4$};\n\\node[point,draw=gray!40,label=above:{\\scriptsize\\textcolor{gray!40}{$w_8$}}] (w8) at (9,3.4) {}; \n\\node[point,fill=gray,label=right:{\\scriptsize$v_8$}] (v8) at (9,3.1) {}; \n\\node[point,label=right:{\\scriptsize$u_8$},label=below:{$8$}] (u8) at (9,2.8) {}; \n\\node[point] (w7) at (7,2.8) 
{};\n\\node[point,fill=gray] (v7) at (7,2.5) {};\n\\node[point,label=below:{$7$}] (u7) at (7,2.2) {}; \n\\node[point] (w6) at (5,2.2) {}; \n\\node[point,fill=gray] (v6) at (5,1.9) {}; \n\\node[point,label=below:{$6$}] (u6) at (5,1.6) {}; \n\\node[point] (w5) at (6,1) {}; \n\\node[point,fill=gray] (v5) at (6,0.7) {}; \n\\node[point,label=below:{$5$}] (u5) at (6,0.4) {}; \n\\node[point,label=above:{\\scriptsize$w_2$}] (w2) at (4,0.4) {};\n\\node[point,fill=gray,label=left:{\\scriptsize$v_2$}] (v2) at (4,0.1) {}; \n\\node[point,label=right:{\\scriptsize$u_2$},label=below:{$2$}] (u2) at (4,-0.6) {}; \n\\node[point,label=above:{\\scriptsize$w_1$}] (w1) at (2,-0.6) {};\n\\node[point,fill=gray,label=left:{\\scriptsize$v_1$}] (v1) at (2,-0.9) {};\n\\node[point,label=left:{\\scriptsize$u_1$},label=below:{$1$}] (u1) at (2,-1.6) {}; \n\\node[point] (w4) at (10,1) {};\n\\node[point,fill=gray] (v4) at (10,0.7) {}; \n\\node[point,label=below:{$4$}] (u4) at (10,0) {}; \n\\node[point] (w3) at (8,0) {};\n\\node[point,fill=gray] (v3) at (8,-0.3) {};\n\\node[point,label=below:{$3$}] (u3) at (8,-1) {}; \n\\begin{scope}[thick]\\tiny\n\\draw (v8) -- (u8);\n\\draw (u8) -- node[midway,fill=black,rectangle,draw,inner sep=1pt] {\\textcolor{white}{$1$}} (w7);\n\\draw (w7) -- (v7);\n\\draw (v7) -- (u7);\n\\draw (u7) -- node[midway,fill=black,rectangle,draw,inner sep=1pt] {\\textcolor{white}{$1$}} (w6);\n\\draw (w6) -- (v6);\n\\draw (v6) -- (u6);\n\\draw (u6) -- node[midway,fill=black,rectangle,draw,inner sep=1pt,sloped] {\\textcolor{white}{$1$}} (w5);\n\\draw (u6) -- node[midway,fill=black,rectangle,draw,inner sep=1pt,sloped] {\\textcolor{white}{$1$}} (w4);\n\\draw (w5) -- (v5);\n\\draw (v5) -- (u5);\n\\draw (u5) -- node[midway,fill=black,rectangle,draw,inner sep=1pt] {\\textcolor{white}{$1$}} (w2);\n\\draw (w2) -- (v2);\n\\draw (v2) -- node[fill=black,rectangle,draw,inner sep=2pt] {\\small\\textcolor{white}{$x_2$}} (u2);\n\\draw (u2) -- node[midway,fill=black,rectangle,draw,inner sep=1pt] 
{\\textcolor{white}{$1$}} (w1);\n\\draw (w1) -- (v1);\n\\draw (v1) -- node[fill=black,rectangle,draw,inner sep=2pt] {\\small\\textcolor{white}{$x_1$}} (u1);\n\\draw (w4) -- (v4);\n\\draw (v4) -- node[fill=black,rectangle,draw,inner sep=2pt] {\\small\\textcolor{white}{$x_4$}} (u4);\n\\draw (u4) -- node[midway,fill=black,rectangle,draw,inner sep=1pt] {\\textcolor{white}{$1$}} (w3);\n\\draw (w3) -- (v3);\n\\draw (v3) -- node[fill=black,rectangle,draw,inner sep=2pt] {\\textcolor{white}{\\small $\\neg x_3$}} (u3);\n\\end{scope}\n\\end{scope}\n\\end{tikzpicture}\n\\caption{(a) A circuit $\\boldsymbol{C}$. (b) The labelled tree $T$ for $\\boldsymbol{C}$: the vertices in the $i$th triple are $u_i,v_i,w_i$ and the omitted edge labels are 0s. The vertices of THGP are the edges of $T$ (with the same labels) and the hyperedges are sets of edges of $T$ (two of them are shown).}\n\\label{fig:7}\n\\end{figure}\nTo show $\\ensuremath{\\mathsf{LOGCFL}}\/\\mathsf{poly} \\subseteq \\mathsf{THGP}$, consider a $\\ensuremath{\\mathsf{SAC}}^1$-circuit $\\boldsymbol{C}$ of depth \\mbox{$d \\le \\log |\\boldsymbol{C}|$}.\nIt will be convenient to think of $\\boldsymbol{C}$ as containing no $\\textsc{not}$-gates but having \\emph{literals} as inputs.\nBy the $\\textsc{and}$-\\emph{depth} of a gate $g$ in $\\boldsymbol{C}$ we mean the maximal number of $\\textsc{and}$-gates in a path from an input of $\\boldsymbol{C}$ to $g$ (it does not exceed $d$). Let $S_n$ be the set of $\\textsc{and}$-gates in $\\boldsymbol{C}$ of $\\textsc{and}$-depth $n$.\nWe denote by $\\mathsf{{left}}(g)$ and $\\mathsf{{right}}(g)$ the sub-circuits of $\\boldsymbol{C}$ computing the left and right inputs of an $\\textsc{and}$-gate $g$, respectively. 
Without loss of generality (see Lemma~\\ref{l:6.5} in Appendix~\\ref{app:thp_vs_sac}) we can assume that, for any $n \\le d$, \n\\begin{equation*}\n\\bigcup\\nolimits_{g \\in S_n} \\mathsf{{left}}(g) \\quad \\cap \\quad \\bigcup\\nolimits_{g \\in S_n} \\mathsf{{right}}(g) \\ \\ = \\ \\ \\emptyset.\n\\end{equation*}\nOur aim is to transform $\\boldsymbol{C}$ into a polynomial-size THGP $P$. We construct its underlying tree $T$ by associating with each gate $g_i$ three vertices $u_i, v_i, w_i$ and arranging them into a tree as shown in Fig.~\\ref{fig:7}. More precisely, we first arrange the vertices associated with the gates of maximal $\\textsc{and}$-depth, $n$, into a path following the order of the gates in~$\\boldsymbol{C}$ and the alphabetic order for $u_i, v_i, w_i$. Then we fork the path into two branches one of which is associated with the sub-circuit $\\bigcup_{g \\in S_n} \\mathsf{{left}}(g)$ and the other with $\\bigcup_{g \\in S_n} \\mathsf{{right}}(g)$, and so forth. We obtain the tree $T$ by removing the vertex $w_{m}$ from the result, where $m = |\\boldsymbol{C}|$ and $g_m$ is the output gate of $\\boldsymbol{C}$; it has $v_{m}$ as its root and contains $3|\\boldsymbol{C}| -1$ vertices.\nThe THGP $P$ is based on the hypergraph whose vertices are the edges of $T$ and whose hyperedges \ncomprise the following (see Fig.~\\ref{fig:7}):\n\\begin{nitemize}\n\\item $[w_i, u_i]$, for each $i < m$ (pairs of edges in each triple of vertices in Fig.~\\ref{fig:7});\n\n\\item $[v_j,v_k,v_i]$, for each $g_i = g_j \\land g_k$ (shown in Fig.~\\ref{fig:7} by shading);\n\n\\item $[v_{j_1}, v_i], \\dots, [v_{j_k}, v_i]$, for each $g_i = g_{j_1} \\lor \\cdots \\lor g_{j_k}$,\n\\end{nitemize}\nwhere $[L]$ is the minimal convex subtree of $T$ containing the vertices in $L$.\nFinally, if an input gate $g_i$ is a literal $\\boldsymbol l$, we label the edge $\\{u_i, v_i\\}$ with $\\boldsymbol l$; we label all other $\\{u_j,v_j\\}$- and $\\{w_j,v_j\\}$-edges with 0, and the 
remaining ones with 1. Clearly, the size of $P$ is polynomial in $|\\boldsymbol{C}|$. By Lemma~\\ref{D2}, for any input $\\avec{\\alpha}$, the output of $g_i$ is 1 iff the subtree with root $v_i$ can be covered, i.e., there is an independent set of hyperedges wholly inside and covering all zeros. Thus, $P$ computes the same function as $\\boldsymbol{C}$.\n\n\n\n\nTo show $\\mathsf{THGP} \\subseteq \\ensuremath{\\mathsf{LOGCFL}}\/\\mathsf{poly}$, suppose a THGP $P$ is based on a hypergraph~$H$ with an underlying tree $T$. \nBy a \\emph{subtree} of $T$ we understand a (possibly empty) connected subset of edges in $T$.\nGiven an input $\\avec{\\alpha}$ for $P$ and a nonempty subtree $D$ of $T$, we set $\\mathsf{{cover}}_D$ true iff there exists an independent subset of hyperedges in $H$ that lie in $D$ and cover all zeros in $D$. We also set $\\mathsf{{cover}}_{\\emptyset}$ true. Note that, for any edge $e$ of $T$, $\\mathsf{{cover}}_{\\{e\\}}$ is true if $\\{e\\}$ is a hyperedge of $H$; otherwise $\\mathsf{{cover}}_{\\{e\\}}$ is the value of $e$'s label in $P$ under~$\\avec{\\alpha}$. \n\nOur aim is to construct recursively a polynomial-size $\\ensuremath{\\mathsf{SAC}}^1$-circuit $\\boldsymbol{C}$ computing the function $\\mathsf{{cover}}_T$. \nObserve that, if $D$ is a subtree of $T$ and a vertex $v$ splits $D$ into subtrees $D_1, \\dots, D_k$, then \n\\begin{equation} \\label{eq:sac_construction}\n\\mathsf{{cover}}_D \\ \\ = \\ \\ \\bigwedge_{1 \\leq j \\leq k} \\mathsf{{cover}}_{D_j} \\ \\ \\lor \\ \\ \n\\bigvee_{v \\in h \\subseteq D} \\hspace{3mm} \\bigwedge_{1 \\leq j \\leq k_h} \\mathsf{{cover}}_{D_j^h},\n\\end{equation}\nwhere $h$ ranges over the hyperedges in $H$, and $D_1^h,\\dots,D_{k_h}^h$ are the maximal convex subtrees of $T$ that lie in $D \\setminus h$.\nWe call a vertex $v$ of $D$ \\emph{boundary} if $T$ has an edge $\\{v,u\\}$ with $u$ not in $D$, and define the \\emph{degree} $\\mathsf{deg}(D)$ of $D$ to be the number of its boundary vertices. 
Note that $T$ itself is the only subtree of $T$ of degree $0$. \nThe following lemma shows that to compute $\\mathsf{{cover}}_T$ we only need subtrees of degree 1 and 2 and the depth of recursion $O(\\log |P|)$.\n\n\\begin{lemma}\\label{l:6.8}\nLet $D$ be a subtree of $T$ with $m$ vertices and $\\mathsf{deg}(D) \\leq 2$. If $\\mathsf{deg}(D) \\leq 1$, then there is a vertex $v$ splitting $D$ into subtrees of size at most $m\/2 + 1$ and degree at most $2$. If $\\mathsf{deg}(D) =2$, then there is $v$ splitting $D$ into subtrees of size at most $m\/2+1$ and degree at most $2$ and, possibly, one subtree of size less than $m$ and degree~$1$.\n\\end{lemma}\n\\begin{proof}\nLet $\\mathsf{deg}(D) \\leq 1$. Suppose some vertex $v_1$ splits $D$ into subtrees one of which, say $D_1$, is larger than $m\/2+1$. Let $v_2$ be the (unique) vertex in $D_1$ adjacent to $v_1$. The splitting of $D$ by $v_2$ consists of the subtree $D_2 = (D \\setminus D_1) \\cup \\{v_1,v_2\\}$ of size at most $m\/2$ and some other subtrees lying inside $D_1$; all of them are of degree at most 2. We repeat this process until the size of the largest subtree becomes at most $m\/2+1$.\n\nLet $\\mathsf{deg}(D) =2$, with $b_1$ and $b_2$ being the boundary vertices. We proceed as above starting from $v_1=b_1$, but stop when either the largest subtree has $\\le m\/2+1$ vertices or $v_{i+1}$ leaves the path between $b_1$ and $b_2$, in which case $v_i$ splits $D$ into subtrees of degree at most $2$ and one subtree of degree $1$ with more than $m\/2 + 1$ vertices. \n\\end{proof}\n\nBy applying~\\eqref{eq:sac_construction} to $T$ recursively and choosing the splitting vertices $v$ as prescribed by Lemma~\\ref{l:6.8}, we obtain a circuit $\\boldsymbol{C}$ whose inputs are the labels of some vertices of~$H$. Since any tree has polynomially many subtrees of degree $1$ or $2$, the size of $\\boldsymbol{C}$ is polynomial in $|P|$. We now show how to make the depth of $\\boldsymbol{C}$ logarithmic in $|P|$. 
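The splitting vertex promised by Lemma~\ref{l:6.8} can also be found by the standard centroid argument: repeatedly move towards the largest remaining component. A Python sketch for vertex counts (all names are ours; removing the returned vertex leaves components of at most $\lfloor m/2 \rfloor$ vertices, so re-attaching it to each component gives the bound $m/2+1$ of the lemma):

```python
def centroid(adj):
    """Find a splitting vertex of a tree: its removal leaves connected
    components with at most m // 2 vertices each.

    adj: dict mapping each vertex to the list of its neighbours.
    """
    m = len(adj)
    root = next(iter(adj))
    parent, order, stack = {root: None}, [], [root]
    while stack:                      # iterative DFS fixing a root order
        u = stack.pop()
        order.append(u)
        for w in adj[u]:
            if w != parent[u]:
                parent[w] = u
                stack.append(w)
    size = {u: 1 for u in adj}
    for u in reversed(order):         # subtree sizes, leaves first
        if parent[u] is not None:
            size[parent[u]] += size[u]
    for u in adj:
        # components after deleting u: each child subtree, plus the part
        # "above" u, which has m - size[u] vertices
        heaviest = max([size[w] for w in adj[u] if w != parent[u]]
                       + [m - size[u]])
        if heaviest <= m // 2:
            return u
```

On a path with five vertices the splitting vertex is the middle one; on a star it is the hub.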
\n\nSuppose $D$ is a subtree with $m$ edges constructed on the recursion step $i$. To compute $\\mathsf{{cover}}_D$ using~\\eqref{eq:sac_construction}, we need one $\\textsc{or}$-gate of unbounded fan-in and a number of $\\textsc{and}$-gates of fan-in 2. We show by induction that we can make the $\\textsc{and}$-depth of these $\\textsc{and}$-gates at most $\\log m + i$. Suppose $D_j$ in~\\eqref{eq:sac_construction} has $m_j$ edges, and so $m = m_1 + \\dots + m_k$.\nBy the induction hypothesis, we can compute each $\\mathsf{{cover}}_{D_j}$ within the $\\textsc{and}$-depth at most $\\log m_j + i-1$. Assign the probability $m_j\/m$ to $D_j$. As shown by Huffman~\\cite{huf52}, there is a prefix binary code such that each $D_j$ is encoded by a word of length $\\lceil \\log(m\/m_j)\\rceil$. This encoding can be represented as a binary tree whose leaves are labelled with the $D_j$ so that the length of the branch ending at $D_j$ is $\\lceil \\log(m\/m_j)\\rceil$. By replacing each non-leaf vertex of the tree with an $\\textsc{and}$-gate, we obtain a circuit for the first conjunction in~\\eqref{eq:sac_construction} whose depth does not exceed\n\\begin{equation*}\n\\max_j\\{\\log m_j + (i-1) + \\log(m\/m_j) + 1\\} \\ \\ \\ = \\ \\ \\log m + i.\n\\end{equation*}\nThe second conjunction is considered analogously.\\qed\n\\end{proof}\n\n\n\\subsection{NC$^{\\boldsymbol{1}}$, $\\boldsymbol{\\mathsf{\\Pi}}_{\\boldsymbol{3}}$ and THGP$^d$}\n\n\n\nThe proof of the following theorem, given in Appendix~\\ref{app:thp_vs_sac}, is a simplified version of the proof of Theorem~\\ref{thm:thp_vs_sac}:\n\n\\begin{theorem}\\label{thm:nc1-thgp3}\n$\\mathsf{NC}^1 = \\mathsf{THGP}^d$ and $\\mathsf{mNC}^1 = \\mathsf{mTHGP}^d$, for any $d \\geq 3$.\n\\end{theorem}\n\nTHGPs of degree 2 turn out to be less expressive:\n\n\\begin{theorem}\\label{thm:pi3-thgp2}\n${\\ensuremath{\\mathsf{\\Pi}_3}} = \\mathsf{THGP}^2 = \\mathsf{THGP}^2(2)$ and ${\\ensuremath{\\mathsf{m\\Pi}_3}} = \\mathsf{mTHGP}^2 = 
\\mathsf{mTHGP}^2(2)$.\n\\end{theorem}\n\\begin{proof}\nTo show $\\mathsf{THGP}^2 \\subseteq {\\ensuremath{\\mathsf{\\Pi}_3}}$, take a THGP $P$ of degree 2. Without loss of generality we can assume that it contains no hyperedges $e, e^\\prime$ with $e \\subseteq e^\\prime$, for otherwise the vertices in $e$ would not be covered by any other hyperedges, and so could be removed from $P$ together with $e$.\n\nConsider the graph $D$ whose vertices are the hyperedges of $P$, with two vertices being connected if the corresponding hyperedges intersect. Clearly, $D$ is a forest. We label an edge $\\{e_1, e_2\\}$ with the conjunction of the labels of the vertices in $e_1\\cap e_2$, and label a vertex $e$ with the conjunction of the labels of the vertices in $P$ contained exclusively in $e$.\nIt is easy to see that, for any given input, an independent cover of zeros in $P$ corresponds to an independent set in $D$ covering all zeros in the vertices and such that each edge labelled with 0 has precisely one endpoint in that independent set.\n\nWe claim that there is no such independent set $I$ in $D$ iff there is a path $e_0, e_1, \\ldots, e_k$ in $D$ with odd $k$ (in particular, $k=1$) such that $e_0$ and $e_k$ are labelled with $0$ and `even' edges $\\{e_{i-1},e_{i}\\}$ with even $i$ are labelled with $0$.\nTo see $(\\Leftarrow)$, observe that we have to include $e_0$ and $e_k$ in $I$. Then the edge $\\{e_1, e_2\\}$ labelled with $0$ forces us to include $e_2$ in $I$ ($e_1$~is adjacent to $e_0$ and so cannot be included in $I$). Next, the edge $\\{e_3,e_4\\}$ forces us to include~$e_4$ in $I$, and so on. In the end we will have to include $e_{k-1}$ in $I$ and, since $e_k$ is also in $I$, this contradicts the independence of $I$.\n\nTo show $(\\Rightarrow)$, suppose there is no such path. Then we can construct the desired independent set $I$. Add to $I$ all vertices labelled with $0$. 
If there is a triple of consecutive vertices $e, e_1, e_2$ in $D$ such that $e$ is already in $I$ and the edge $\\{e_1,e_2\\}$ is labelled with $0$, then we add $e_2$ to $I$. Note that, if we have added some vertex $e^\\prime$ to $I$ in this process, then there is a path $e=e_0, e_1, \\ldots, e_k=e^\\prime$ with even $k$ such that the vertex $e$ is labelled with $0$ and every edge $\\{e_{i-1},e_i\\}$ for even $i$ in this path is labelled with $0$.\n\nIn this process we never add two connected vertices $e$ and $e^\\prime$ of $D$ to $I$, for otherwise the union of the paths described above for these two vertices would result in a path of odd length with endpoints labelled with $0$ and with every second edge labelled with $0$. This directly contradicts our assumption.\n\nIf there are still edges labelled with $0$ in $D$ with no endpoints in $I$, then add any endpoint of such an edge to $I$ and repeat the process above. This will not lead to a pair of connected vertices in $I$ either. Indeed, if as a result we add to $I$ a vertex $e_1$ connected to a vertex $e$ which was added to $I$ previously, then there is an edge $\\{e_2, e_1\\}$ labelled with $0$ (that was the reason for adding $e_1$ to $I$), and so we should have added $e_2$ to $I$ before.\nBy repeating this process, we obtain an independent set $I$ covering all vertices and edges labelled with $0$.\n\nThe established claim means that an independent set $I$ in $D$ exists iff, for any simple path $e_0, e_1, \\dots, e_k$ with an odd $k$, the label of $e_0$ or $e_k$ evaluates to $1$, or the label of at least one $\\{e_{i-1},e_{i}\\}$, for even $i$, evaluates to $1$.\nThis property is computed by a ${\\ensuremath{\\mathsf{\\Pi}_3}}$-circuit where, for each simple path $e_0, e_1, \\dots, e_k$ with an odd $k$, we take $(k+3)\/2$-many \n$\\textsc{and}$-gates whose inputs are the literals in the labels of $e_0$, $e_k$ and the $\\{e_{i-1},e_{i}\\}$ for even $i$; then we send the outputs of those $\\textsc{and}$-gates to an 
$\\textsc{or}$-gate; and, finally, we collect the outputs of all the $\\textsc{or}$-gates as inputs to an $\\textsc{and}$-gate.\n\n\\smallskip\n\nTo show ${\\ensuremath{\\mathsf{\\Pi}_3}} \\subseteq \\mathsf{THGP}^2(2)$, suppose we are given a ${\\ensuremath{\\mathsf{\\Pi}_3}}$-circuit $\\boldsymbol{C}$. We can assume $\\boldsymbol{C}$ to be a conjunction of DNFs. So, we first construct a generalised HGP $P$ from $ \\mathsf{THGP}^2(2)$ computing the same function as $\\boldsymbol{C}$. \nDenote the $\\textsc{or}$-gates of $\\boldsymbol{C}$ by $g^1, \\dots, g^k$ and the inputs of $g^i$ by $h^i_{1}, \\dots, h^i_{l_i}$, where each $h^i_{j}$ is an $\\textsc{and}$-gate. \nNow, we define a tree hypergraph whose underlying path graph has the following edges (in the given order)\n\\begin{equation*}\nv^1_{0},\\dots, v^1_{2l_1-2}, \\ \\ \\ v^2_{0},\\dots, v^2_{2l_2-2},\\ \\ \\ \\dots, \\ \\ \\ v^k_{0},\\dots, v^k_{2l_k-2}\n\\end{equation*}\nand whose hyperedges are of the form $\\{v^i_{j}, v^i_{j+1}\\}$. We label $v^i_{2m}$ with a conjunction of the inputs of $h^i_{m+1}$ and the remaining vertices with $0$.\nBy the previous analysis for a given~$i$ and an input for $\\boldsymbol{C}$, we can cover all zeros among $v^i_{0},\\dots, v^i_{2l_i-2}$ with an independent set of hyperedges iff at least one of the gates $h^i_{1},\\dots, h^i_{l_i}$ outputs 1. For different $i$, the corresponding $v^i_{0},\\dots, v^i_{2l_i-2}$ are covered independently. Thus, $P$ computes the same function as $\\boldsymbol{C}$. 
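As a brute-force sanity check of this path construction (not part of the proof), one can verify on a single block $v^i_{0},\\dots, v^i_{2l_i-2}$ that the zeros are coverable by an independent, i.e., pairwise disjoint, set of hyperedges $\\{v^i_{j}, v^i_{j+1}\\}$ exactly when at least one $h^i_{m}$ outputs 1. A small Python sketch (the function name is ours):

```python
# Sketch (ours): brute-force check that the zeros of one block of the path
# construction are coverable by pairwise disjoint hyperedges {v_j, v_{j+1}}.
from itertools import combinations

def segment_coverable(h_values):
    l = len(h_values)
    labels = []                      # labels of v_0, ..., v_{2l-2}
    for m in range(l):
        labels.append(h_values[m])   # v_{2m} carries the AND-gate value
        if m < l - 1:
            labels.append(0)         # the remaining positions are labelled 0
    n = len(labels)
    hyperedges = [(j, j + 1) for j in range(n - 1)]
    zeros = {j for j, b in enumerate(labels) if b == 0}
    for k in range(len(hyperedges) + 1):
        for chosen in combinations(hyperedges, k):
            used = [v for e in chosen for v in e]
            # independent = pairwise disjoint; the set must cover all zeros
            if len(used) == len(set(used)) and zeros <= set(used):
                return True
    return False
```

The all-zero block has an odd number of positions ($2l-1$) and so admits no perfect matching by disjoint adjacent pairs; this parity argument is the reason a block is coverable iff some $h^i_{m}$ outputs 1.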
We convert $P$ to a THGP from $\\smash{\\mathsf{THGP}^2(2)}$ using Proposition~\\ref{hyper:thgp}~(\\emph{ii}).\n\\end{proof}\n\n\n\n\\section{The Size of OMQ Rewritings}\\label{sec:7}\n\nIn this section, by an OMQ ${\\ensuremath{\\boldsymbol{Q}}} = (\\mathcal{T},{\\boldsymbol q})$ we mean a sequence $\\{{\\ensuremath{\\boldsymbol{Q}}}_n = (\\mathcal{T}_n,{\\boldsymbol q}_n)\\}_{n < \\omega}$ of OMQs whose size is polynomial in $n$; by a rewriting ${\\boldsymbol q}'$ of ${\\ensuremath{\\boldsymbol{Q}}}$ we mean a sequence $\\{{\\boldsymbol q}'_n\\}_{n<\\omega}$, where each ${\\boldsymbol q}'_n$ is a rewriting of ${\\ensuremath{\\boldsymbol{Q}}}_n$, for $n < \\omega$. \n\nBy putting together the results of the previous three sections and some known facts from circuit complexity, we obtain the upper and lower bounds on the size of PE-, NDL- and FO-rewritings for various OMQ classes that are collected in Table~\\ref{table:rewritings},\n\\begin{table}\n\\caption{The size of OMQ rewritings.}{%\n\\renewcommand{\\tabcolsep}{7pt}%\n\\begin{tabular}{lccc}\\toprule\nOMQ ${\\ensuremath{\\boldsymbol{Q}}} = (\\mathcal{T},{\\boldsymbol q})$ & PE & NDL & FO \\\\\\midrule\n$\\mathcal{T}$ of depth 2 & \\textcolor{gray}{$\\mathsf{exp}$ {\\footnotesize (Th.~\\ref{Depth2:Clique})}} & $\\mathsf{exp}$ {\\footnotesize (Th.~\\ref{Depth2:Clique})} & \n\\begin{tabular}{c}$>\\mathsf{poly} ~\\text{ if }~ \\ensuremath{\\mathsf{NP}} \\not\\subseteq \\P\/\\mathsf{poly}$ {\\footnotesize (Th.~\\ref{Depth2:Clique})}\\\\\n$\\mathsf{poly} ~\\text{ iff }~ \\ensuremath{\\mathsf{NP}}\/\\mathsf{poly} \\subseteq \\mathsf{NC}^1$ {\\footnotesize (Th.~\\ref{nppoly})}\\end{tabular}\\\\\\midrule\n$\\mathcal{T}$ of depth 1 & $> \\mathsf{poly}${\\footnotesize (Th.~\\ref{Depth1:PE})} & $\\mathsf{poly}${\\footnotesize (Th.~\\ref{polyNDL})} & $\\mathsf{poly} ~\\text{ iff }~ \\ensuremath{\\mathsf{NL}}\/\\mathsf{poly} \\subseteq \\mathsf{NC}^1${\\footnotesize (Th.~\\ref{thm:nc1cond})}\\\\\n\\ \\& ${\\boldsymbol q}$ of 
treewidth $t$ & $\\mathsf{poly}${\\footnotesize (Th.~\\ref{depth-one-btw})} & \\textcolor{gray}{$\\mathsf{poly}${\\footnotesize (Th.~\\ref{depth-one-btw})}} & \\textcolor{gray}{$\\mathsf{poly}${\\footnotesize (Th.~\\ref{depth-one-btw})}} \\\\\n\\ \\& ${\\boldsymbol q}$ tree & $\\mathsf{poly}\\text{-}\\mathsf{\\Pi}_4${\\footnotesize (Th.~\\ref{depth-one-tree})} & \\textcolor{gray}{$\\mathsf{poly}${\\footnotesize (Th.~\\ref{depth-one-tree})}} & \\textcolor{gray}{$\\mathsf{poly}\\text{-}\\mathsf{\\Pi}_4${\\footnotesize (Th.~\\ref{depth-one-tree})} } \\\\\\midrule\n${\\boldsymbol q}$ tree with $\\ell$ leaves & \\begin{tabular}{c}$> \\mathsf{poly}${\\footnotesize (Th.~\\ref{linear-lower})}\\\\\\footnotesize ($\\mathcal{T}$ is of depth 2)\\end{tabular} & $\\mathsf{poly}${\\footnotesize (Th.~\\ref{bbcq-ndl})} & $\\mathsf{poly} ~\\text{ iff }~ \\ensuremath{\\mathsf{NL}}\/\\mathsf{poly} \\subseteq \\mathsf{NC}^1${\\footnotesize (Th.~\\ref{nbps-conditional})}\\\\\\midrule\n\\begin{tabular}{l}${\\ensuremath{\\boldsymbol{Q}}}$ with PFSP\\\\\n${\\boldsymbol q}$ of treewidth $t$\\end{tabular} & & $\\mathsf{poly}${\\footnotesize (Th.~\\ref{btw-ndl})} & \\begin{tabular}{c}$\\mathsf{poly} ~\\text{ iff }~ \\ensuremath{\\mathsf{LOGCFL}}\/\\mathsf{poly} \\subseteq \\mathsf{NC}^1$\\\\\\mbox{}\\hfill\\footnotesize{(Th.~\\ref{btw-fo})}\\end{tabular}\\\\\\bottomrule\n\\end{tabular}}\n\\label{table:rewritings}\n\\end{table}\nwhere $\\mathsf{exp}$ means an exponential lower bound, $> \\mathsf{poly}$ a superpolynomial lower bound, $\\mathsf{poly}$ a polynomial upper bound, $\\mathsf{poly}\\text{-}\\mathsf{\\Pi}_4$ a polynomial-size $\\mathsf{\\Pi}_4$-rewriting (that is, a PE-rewriting with the matrix of the form ${\\land}{\\lor}{\\land}{\\lor}$), and $\\ell$ and $t$ are any fixed constants. 
Note that, in the case of polynomial upper bounds, we actually provide polynomial-time algorithms for constructing the rewritings.\n\n\n\n\n\\subsection{Rewritings for OMQs with ontologies of depth 2}\n\nBy Theorem~\\ref{NBC}, OMQs with ontologies of depth~2 can compute any $\\ensuremath{\\mathsf{NP}}$-complete monotone Boolean function, in particular, the function $\\textsc{Clique}$ with $n(n-1)\/2$ variables $e_{jj'}$, $1 \\leq j < j'\\le n$, that returns 1 iff the graph with vertices $\\{1,\\dots,n\\}$ and edges $\\{ \\{j,j'\\} \\mid e_{jj'}=1\\}$ contains a $k$-clique, for some fixed $k$.\nA series of papers, starting with Razborov~\\cite{Razborov85}, gave an exponential lower bound for the size of monotone circuits computing $\\textsc{Clique}$, namely, $2^{\\Omega(\\sqrt{k})}$ for $k \\leq \\smash{\\frac{1}{4}} (n\/ \\log n)^{2\/3}$~\\cite{AlonB87}. For monotone formulas, an even better lower bound is known: $2^{\\Omega(k)}$ for $k = 2n\/3$~\\cite{RazW92}. Thus, we obtain:\n\n\\begin{theorem}\\label{Depth2:Clique}\nThere is an OMQ with ontologies of depth $2$, any \\text{PE}- and \\text{NDL}-rewritings of which are of exponential size, while any \\text{FO}-rewriting is of superpolynomial size unless $\\ensuremath{\\mathsf{NP}} \\subseteq \\P\/\\mathsf{poly}$.\n\\end{theorem}\n\\begin{proof}\nIn view of $\\textsc{Clique} \\in \\ensuremath{\\mathsf{NP}} \\subseteq \\ensuremath{\\mathsf{NP}}\/\\mathsf{poly}$ and Theorem~\\ref{NBC}, there is a polynomial-size monotone HGP $P$ computing $\\textsc{Clique}$. Suppose $P$ is based on a hypergraph $H$ and ${\\ensuremath{\\boldsymbol{Q}}}_H$ is the OMQ for $H$ constructed in Section~\\ref{sec5.1}. By Theorem~\\ref{hg-to-query}~(\\emph{ii}), $\\textsc{Clique}$ is a subfunction of the primitive evaluation function $f^\\vartriangle_{{\\ensuremath{\\boldsymbol{Q}}}_H}$. 
By Theorem~\\ref{rew2prim}, if ${\\boldsymbol q}'$ is a \\text{PE}- or \\text{NDL}-rewriting of ${\\ensuremath{\\boldsymbol{Q}}}_H$, then $f^\\vartriangle_{{\\ensuremath{\\boldsymbol{Q}}}_H}$---and so $\\textsc{Clique}$---can be computed by a monotone formula or, respectively, circuit of size $O(|{\\boldsymbol q}'|)$. Thus, ${\\boldsymbol q}'$ must be of exponential size. If ${\\boldsymbol q}'$ is an \\text{FO}-rewriting of ${\\ensuremath{\\boldsymbol{Q}}}_H$ then, by Theorem~\\ref{rew2prim}, $\\textsc{Clique}$ is computable by a Boolean formula of size $O(|{\\boldsymbol q}'|)$. If $\\ensuremath{\\mathsf{NP}} \\not\\subseteq \\P\/\\mathsf{poly}$ then $\\textsc{Clique}$ cannot be computed by a polynomial circuit, and so ${\\boldsymbol q}'$ must be of superpolynomial size.\n\\end{proof}\n\nOur next theorem gives a complexity-theoretic characterisation of the existence of FO-rewritings for OMQs with ontologies of depth $2$.\n\n\\begin{theorem}\\label{nppoly}\nThe following conditions are equivalent\\textup{:}\n\\begin{enumerate}[(1)]\n\\item all OMQs with ontologies of depth $2$ have polynomial-size \\text{FO}-rewritings\\textup{;}\n\n\\item all OMQs with ontologies of depth $2$ and polynomially many tree witnesses have polynomial-size \\text{FO}-rewritings\\textup{;}\n\n\\item $\\ensuremath{\\mathsf{NP}}\/\\mathsf{poly} \\subseteq \\mathsf{NC}^1$.\n\\end{enumerate}\n\\end{theorem}\n\\begin{proof}\nThe implication (1) $\\Rightarrow$ (2) is obvious. To show that (2) $\\Rightarrow$ (3), suppose there is a polynomial-size \\text{FO}-rewriting for the OMQ ${\\ensuremath{\\boldsymbol{Q}}}_H$ from the proof of Theorem~\\ref{Depth2:Clique}, which has polynomially many tree witnesses. Then $\\textsc{Clique}$ is computed by a polynomial-size Boolean formula. Since $\\textsc{Clique}$ is $\\ensuremath{\\mathsf{NP}}\/\\mathsf{poly}$-complete under $\\mathsf{NC}^1$-reductions, we have $\\ensuremath{\\mathsf{NP}}\/\\mathsf{poly} \\subseteq \\mathsf{NC}^1$. 
Finally, to prove (3) $\\Rightarrow$ (1), assume $\\ensuremath{\\mathsf{NP}}\/\\mathsf{poly} \\subseteq \\mathsf{NC}^1$. Let ${\\ensuremath{\\boldsymbol{Q}}}$ be an arbitrary OMQ with ontologies of depth 2. As observed in Section~\\ref{hyper-functions}, the function $\\homfn$ is in $\\ensuremath{\\mathsf{NP}} \\subseteq \\ensuremath{\\mathsf{NP}}\/\\mathsf{poly}$. \nTherefore, by our assumption, $\\homfn$ can be computed by a polynomial-size formula, and so, by Theorem~\\ref{Hom2rew}, ${\\ensuremath{\\boldsymbol{Q}}}$ has a polynomial-size \\text{FO}-rewriting.\n\\end{proof}\n\n\n\n\n\n\\subsection{Rewritings for OMQs with ontologies of depth 1}\n\n\n\\begin{theorem}\\label{polyNDL}\nAny OMQ ${\\ensuremath{\\boldsymbol{Q}}}$ with ontologies of depth $1$ has a polynomial-size $\\text{NDL}$-rewriting.\n\\end{theorem}\n\\begin{proof}\nBy Theorem~\\ref{depth1}, the hypergraph $\\HG{{\\ensuremath{\\boldsymbol{Q}}}}$ is of degree at most~2, and so, by Proposition~\\ref{hyper:program}~(\\emph{ii}), there is a polynomial-size monotone HGP of degree at most~2 computing $\\twfn$. By Theorem~\\ref{thm:deg_2}, $\\mathsf{co}\\text{-}\\mathsf{mNL}\/\\mathsf{poly} = \\mathsf{mHGP}^2$, and so we have a polynomial-size monotone NBP computing the dual ${\\twfn}^*$ of $\\twfn$. \nSince $\\mathsf{mNL}\/\\mathsf{poly} \\subseteq \\mathsf{mP}\/\\mathsf{poly}$, \nwe also have a polynomial-size monotone Boolean circuit that computes ${\\twfn}^*$. \nBy swapping $\\textsc{and}$- and $\\textsc{or}$-gates in that circuit, we obtain a polynomial-size monotone circuit computing $\\twfn$. 
It remains to apply Theorem~\\ref{TW2rew}~(\\emph{ii}).\n\\end{proof}\n\nHowever, this upper bound cannot be extended to \\text{PE}-rewritings:\n\n\\begin{theorem}\\label{Depth1:PE}\nThere is an OMQ ${\\ensuremath{\\boldsymbol{Q}}}$ with ontologies of depth $1$, any \\text{PE}-rewriting of which is of superpolynomial size \\textup{(}$n^{\\Omega(\\log n)}$, to be more precise\\textup{)}.\n\\end{theorem}\n\\begin{proof}\nConsider the monotone function $\\textsc{Reachability}$ that takes the adjacency matrix of a directed graph $G$ with two distinguished vertices $s$ and $t$ and returns~1 iff the graph $G$ contains a directed path from $s$ to $t$. It is known~\\cite{KarchmerW88,Jukna12} that $\\textsc{Reachability}$ is computable by a polynomial-size monotone NBP (that is, belongs to $\\mathsf{mNL}\/\\mathsf{poly}$), but any monotone Boolean formula for $\\textsc{Reachability}$ is of size $n^{\\Omega(\\log n)}$. \nLet $f = \\textsc{Reachability}$. By Theorem~\\ref{thm:deg_2}, there is a polynomial-size monotone HGP that is based on a hypergraph $H$ of degree~$2$ and computes the dual $f^*$ of $f$. Consider now the OMQ $\\OMQI{H}$ for $H$ defined in Section~\\ref{sec:depth1}. By Theorem~\\ref{representable}~(\\emph{ii}),\n$f^*$ is a subfunction of $f^\\vartriangle_{\\OMQI{H}}$. By Theorem~\\ref{rew2prim}~(\\emph{i}), no PE-rewriting of the OMQ $\\OMQI{H}$ can be shorter than $\\smash{n^{\\Omega(\\log n)}}$.\n\\end{proof}\n\n\n\n\\begin{theorem}\\label{thm:nc1cond}\nAll OMQs with ontologies of depth $1$ have polynomial-size \\text{FO}-rewritings iff $\\ensuremath{\\mathsf{NL}}\/\\mathsf{poly} \\subseteq \\mathsf{NC}^1$.\n\\end{theorem}\n\\begin{proof}\nSuppose $\\ensuremath{\\mathsf{NL}}\/\\mathsf{poly} \\subseteq \\mathsf{NC}^1$. Let ${\\ensuremath{\\boldsymbol{Q}}}$ be an OMQ with ontologies of depth $1$. By Theorem~\\ref{depth1}, its hypergraph $\\HG{{\\ensuremath{\\boldsymbol{Q}}}}$ is of degree~$2$ and polynomial size. 
By Proposition~\\ref{hyper:program}~(\\emph{ii}), there is a polynomial-size HGP of degree~2 that computes $\\twfn$. By Theorem~\\ref{thm:deg_2}, \n$\\twfn\\in\\ensuremath{\\mathsf{NL}}\/\\mathsf{poly}$. Therefore, by our assumption, $\\twfn$ can be computed by a polynomial-size Boolean formula.\nFinally, Theorem~\\ref{TW2rew}~(\\emph{i}) gives a polynomial-size \\text{FO}-rewriting of ${\\ensuremath{\\boldsymbol{Q}}}$.\n\nConversely, suppose there is a polynomial-size \\text{FO}-rewriting for any OMQ with ontologies of depth~$1$. Let $f = \\textsc{Reachability}$. Since $f \\in \\ensuremath{\\mathsf{NL}} \\subseteq \\ensuremath{\\mathsf{NL}}\/\\mathsf{poly}$, \nby Theorem~\\ref{thm:deg_2}, \nwe obtain a polynomial-size HGP computing $f$ and based on a hypergraph $H$ of degree~2. Consider the OMQ $\\OMQI{H}$ with ontologies of depth~1 defined in Section~\\ref{sec:depth1}. By Theorem~\\ref{representable}~(\\emph{ii}),\n$f$ is a subfunction of $f^{\\vartriangle}_{\\OMQI{H}}$. \nBy our assumption, $\\OMQI{H}$ has a polynomial-size \\text{FO}-rewriting; hence, by Theorem~\\ref{rew2prim}~(\\emph{i}), $f^{\\vartriangle}_{\\OMQI{H}}$ (and so $f$) are computed by polynomial-size Boolean formulas. 
Since $f$ is $\\ensuremath{\\mathsf{NL}}\/\\mathsf{poly}$-complete under $\\smash{\\mathsf{NC}^1}$-reductions~\\cite{Razborov91}, we obtain $\\ensuremath{\\mathsf{NL}}\/\\mathsf{poly} \\subseteq \\smash{\\mathsf{NC}^1}$.\n\\end{proof}\n\n\n\n\n\n\\subsection{Rewritings for tree-shaped OMQs with a bounded number of leaves}\n\nSince, by Theorem~\\ref{thm:linear_hgp1}, the hypergraph function of a leaf-bounded OMQ can be computed by a polynomial-size NBP, we have:\n\n\\begin{theorem}\\label{bbcq-ndl}\nFor any fixed $\\ell \\ge 2$, all tree-shaped OMQs with at most $\\ell$ leaves have polynomial-size NDL-rewritings.\n\\end{theorem}\n\nThe superpolynomial lower bound below is proved in exactly the same way as Theorem~\\ref{Depth1:PE} using Theorems~\\ref{thm:linear_hgp} and~\\ref{tree-hg-to-query} instead of Theorems~\\ref{thm:deg_2} and~\\ref{representable}.\n\n\\begin{theorem}\\label{linear-lower}\nThere is an OMQ with ontologies of depth 2 and linear CQs any \\text{PE}-rewriting of which is of superpolynomial size \\textup{(}$n^{\\Omega(\\log n)}$, to be more precise\\textup{)}.\n\\end{theorem}\n\nOur next result is similar to Theorem~\\ref{thm:nc1cond}:\n\n\\begin{theorem}\\label{nbps-conditional}\nThe following are equivalent\\textup{:}\n\\begin{enumerate}[(1)]\n\\item there exist polynomial-size \\text{FO}-rewritings for all OMQs with linear CQs and ontologies of depth $2$\\textup{;}\n\n\\item for any fixed $\\ell$, there exist polynomial-size \\text{FO}-rewritings for all tree-shaped OMQs with at most $\\ell$ leaves\\textup{;}\n\n\\item $\\ensuremath{\\mathsf{NL}}\/\\mathsf{poly} \\subseteq \\mathsf{NC}^1$.\n\\end{enumerate}\n\\end{theorem}\n\\begin{proof} $(1) \\Rightarrow (3)$ Suppose every OMQ ${\\ensuremath{\\boldsymbol{Q}}} = (\\mathcal{T},{\\boldsymbol q})$ with linear ${\\boldsymbol q}$ and $\\mathcal{T}$ of depth~2 has an FO-rewriting of size $p(|{\\ensuremath{\\boldsymbol{Q}}}|)$, for some fixed polynomial $p$. Consider $f = \\textsc{Reachability}$. 
As $f$ is monotone and $f\\in\\ensuremath{\\mathsf{NL}}$, we have $f\\in \\mathsf{mNL}\/\\mathsf{poly}$. Thus, Theorem~\\ref{thm:linear_hgp} gives us an HGP $P$ from $\\mathsf{mTHGP}(2)$ that computes $f$. Let $P$ be based on a hypergraph~$H$, and let $\\OMQT{H}$ be the OMQ with a linear CQ and an ontology of depth~2 constructed in Section~\\ref{sec:5.3}. By Theorem~\\ref{tree-hg-to-query}~(\\emph{ii}), $f$ is a subfunction of $f^\\vartriangle_{\\OMQT{H}}$.\nBy our assumption, however, $\\OMQT{H}$ has a polynomial-size FO-rewriting, and so, by Theorem~\\ref{rew2prim}~(\\emph{i}), it is computed by a polynomial-size Boolean formula. Since $f$ is $\\ensuremath{\\mathsf{NL}}\/\\mathsf{poly}$-complete under $\\mathsf{NC}^1$-reductions~\\cite{Razborov91}, we obtain $\\ensuremath{\\mathsf{NL}}\/\\mathsf{poly} \\subseteq \\mathsf{NC}^1$.\nThe implication $(3) \\Rightarrow (2)$ follows from Theorems~\\ref{thm:linear_hgp1} and~\\ref{TW2rew}~(\\emph{i}), and $(2) \\Rightarrow (1)$ is trivial.\n\\end{proof}\n\n\n\n\\subsection{Rewritings for OMQs with PFSP and bounded treewidth}\\label{sec:7.4}\n\nSince OMQs with the polynomial fundamental set property (PFSP, see Section~\\ref{sec:TW}) and CQs of bounded treewidth can be polynomially translated into monotone THGPs and $\\mathsf{mTHGP} = \\mathsf{mLOGCFL}\/\\mathsf{poly} \\subseteq \\mathsf{mP}\/\\mathsf{poly}$, we obtain:\n\\begin{theorem}\\label{btw-ndl}\nFor any fixed $t >0$, all OMQs with the PFSP and CQs of treewidth at most $t$ have polynomial-size NDL-rewritings.\n\\end{theorem}\n\n\nUsing Theorem~\\ref{role-inc} and the fact that OMQs with ontologies of bounded depth enjoy the PFSP, we obtain:\n\\begin{corollary}\nThe following OMQs have polynomial-size \\text{NDL}-rewritings\\textup{:}\n\\begin{nitemize}\n\\item[--] OMQs with ontologies of bounded depth and CQs of bounded treewidth\\textup{;}\n\n\\item[--] OMQs with ontologies not containing axioms of the form $\\varrho(x,y) \\to \\varrho'(x,y)$ 
\\textup{(}and~\\eqref{eq:sugar}\\textup{)} and CQs of bounded treewidth.\n\\end{nitemize}\n\\end{corollary}\n\nWhether all OMQs without axioms of the form $\\varrho(x,y) \\to \\varrho'(x,y)$ have polynomial-size rewritings remains open.\\!\\footnote{A positive answer to this question given by Kikot et al.~\\cite{DBLP:conf\/dlog\/KikotKZ11} is based on a flawed proof.}\n\n\\begin{theorem}\\label{btw-fo}\nThe following are equivalent\\textup{:}\n\\begin{enumerate}[(1)]\n\\item there exist polynomial-size FO-rewritings for all tree-shaped OMQs with ontologies of depth 2\\textup{;}\n\n\\item there exist polynomial-size FO-rewritings for all OMQs with the PFSP and CQs of treewidth at most $t$ \\textup{(}for any fixed $t$\\textup{)}\\textup{;} \n\n\\item $\\ensuremath{\\mathsf{LOGCFL}}\/\\mathsf{poly} \\subseteq \\mathsf{NC}^1$.\n\\end{enumerate}\n\\end{theorem}\n\\begin{proof} The implication $(2) \\Rightarrow (1)$ is trivial, and $(1) \\Rightarrow (3)$ is proved similarly to the corresponding case of Theorem~\\ref{nbps-conditional} using Theorems~\\ref{thm:thp_vs_sac},~\\ref{tree-hg-to-query} and~\\ref{rew2prim}.\n$(3) \\Rightarrow (2)$ follows from Theorems~\\ref{DL2THP},~\\ref{thm:thp_vs_sac} and~\\ref{Hom2rew}.\n\\end{proof}\n\n\n\n\\subsection{Rewritings for OMQs with ontologies of depth 1 and CQs of bounded treewidth}\\label{sec:7.5}\n\nWe show finally that polynomial PE-rewritings are guaranteed to exist for OMQs with ontologies of depth 1 and CQs of bounded treewidth. By Theorem~\\ref{thm:nc1-thgp3}, it suffices to show that $\\homfn$ is computable by a THGP of bounded degree. However, since tree witnesses can be initiated by multiple roles, the THGPs constructed in Section \\ref{sec:boundedtw} do not enjoy this property and require a minor modification. \n\nLet ${\\ensuremath{\\boldsymbol{Q}}} = (\\mathcal{T},{\\boldsymbol q})$ be an OMQ with $\\mathcal{T}$ of depth 1. 
For every tree witness $\\t = (\\mathfrak{t}_\\mathsf{r},\\mathfrak{t}_\\mathsf{i})$, we take a fresh binary predicate $P_\\t$ \n(which cannot occur in any data instance) \nand extend~$\\mathcal{T}$ with the following axioms:\n\\begin{align*}\n\\tau(x) \\to \\exists y\\, P_\\t(x,y), & \\qquad \\text{ if } \\tau \\text{ generates } \\t,\\\\\nP_\\t(x,y) \\to \\varrho(x,y), & \\qquad \\text{ if } \\varrho(u,v) \\in {\\boldsymbol q}_\\t, u \\in \\mathfrak{t}_\\mathsf{r} \\text{ and } v \\in \\mathfrak{t}_\\mathsf{i}. \n\\end{align*}\nDenote the resulting ontology by $\\mathcal{T}'$ and set ${\\ensuremath{\\boldsymbol{Q}}}' = (\\mathcal{T}',{\\boldsymbol q})$. \nBy Theorem~\\ref{depth1}, the number of tree witnesses for ${\\ensuremath{\\boldsymbol{Q}}}$ does not exceed $|{\\boldsymbol q}|$, and so \nthe size of ${\\ensuremath{\\boldsymbol{Q}}}'$ is polynomial in $|{\\ensuremath{\\boldsymbol{Q}}}|$. It is easy to see that any rewriting of ${\\ensuremath{\\boldsymbol{Q}}}'$ (with $P_\\t$ replaced by $\\bot$) is also a rewriting for~${\\ensuremath{\\boldsymbol{Q}}}$. \nThus, it suffices to consider OMQs of the form ${\\ensuremath{\\boldsymbol{Q}}}'$, which will be called \\emph{explicit}.\n\nGiven an explicit OMQ ${\\ensuremath{\\boldsymbol{Q}}} = (\\mathcal{T},{\\boldsymbol q})$, we construct a THGP $P'_{{\\ensuremath{\\boldsymbol{Q}}}}$ in the same way as $P_{{\\ensuremath{\\boldsymbol{Q}}}}$ in Section~\\ref{sec:boundedtw} except that in the definition of $E^k_i$, instead of considering all types $\\avec{w}_k$ of $N_i$, we only use $\\avec{w}_k=(\\avec{w}[1],\\dots,\\avec{w}[m])$ in which $\\avec{w}[j]$ is either $\\varepsilon$ or $P_\\t$ for the unique tree witness $\\t = (\\mathfrak{t}_\\mathsf{r},\\mathfrak{t}_\\mathsf{i})$ with $\\mathfrak{t}_\\mathsf{i} = \\{ \\lambda_j(N_i)\\}$. 
(Since $\\mathcal{T}$ is of depth 1, every tree witness~$\\t$ has $\\mathfrak{t}_\\mathsf{i}=\\{z\\}$, for some variable $z$, and $\\mathfrak{t}_\\mathsf{i} \\neq \\mathfrak{t}_\\mathsf{i}'$ whenever $\\t \\neq \\t'$.)\nThis modification guarantees that, for every $i$, the number of distinct $E^k_i$ is bounded by $2^m$. It follows that the hypergraph of $\\smash{P_{{\\ensuremath{\\boldsymbol{Q}}}}'}$ is of bounded degree, $2^m + 2^{2m}$ to be more precise. \nTo establish the correctness of the modified construction, we can prove an analogue of Theorem~\\ref{DL2THP}, in which the original THGP $P_{{\\ensuremath{\\boldsymbol{Q}}}}$ is replaced by $P_{{\\ensuremath{\\boldsymbol{Q}}}}'$, and the function $\\homfn$ is replaced by \n\\begin{equation*}\\label{hyper-func''}\n\\homfnprime \\ \\ = \\ \\ \\hspace*{-1em}\\bigvee_{\\substack{\\Theta \\subseteq \\twset\n\\\\ \\text{ independent}}}\\hspace*{-2mm}\n\\Big(\\bigwedge_{\\atom \\in {\\boldsymbol q} \\setminus {\\boldsymbol q}_\\Theta} \\hspace*{-1em} p_\\atom \\hspace*{1em}\n \\wedge \\hspace*{0.5em}\\bigwedge_{\\t \\in \\Theta} \\big(\\bigwedge_{R(z,z')\\in {\\boldsymbol q}_\\t} p_{z=z'} \\hspace*{0.5em} \\land \\hspace*{0.5em} \\bigwedge_{z \\in \\mathfrak{t}_\\mathsf{r}\\cup\\mathfrak{t}_\\mathsf{i}} p_{\\exists y P_\\t(z,y)}\\big)\\Big) \n\\end{equation*}\n(which is obtained from $\\homfn$ by always choosing $P_\\t$ as the predicate that initiates $\\t$). It is easy to see that Theorem~\\ref{TW2rew} holds also for $\\homfnprime$ (with explicit ${\\ensuremath{\\boldsymbol{Q}}}$), which gives us:\n\\begin{theorem} \\label{depth-one-btw}\nFor any fixed $t>0$, all OMQs with ontologies of depth 1 and CQs of treewidth at most $t$ have polynomial-size PE-rewritings.\n\\end{theorem}\n\nFor tree-shaped OMQs, we obtain an even better result. 
Indeed, by Theorem~\\ref{prop:tree-shaped}, $\\HG{{\\ensuremath{\\boldsymbol{Q}}}}$ is a tree hypergraph; by Theorem~\\ref{depth1}, it is of degree at most~2, and so, by Theorem~\\ref{thm:pi3-thgp2}, $\\twfn$ is computed by a polynomial-size ${\\ensuremath{\\mathsf{\\Pi}_3}}$-circuit (which is monotone by definition). Thus, Theorem~\\ref{TW2rew}~(\\emph{i}) gives us the following (${\\ensuremath{\\mathsf{\\Pi}_3}}$ turns into $\\mathsf{\\Pi}_4$ because of the disjunction in the formula $\\mathsf{tw}_\\t$):\n\n\\begin{theorem} \\label{depth-one-tree}\nAll tree-shaped OMQs with ontologies of depth 1 have polynomial-size $\\mathsf{\\Pi}_4$-rewritings. \n\\end{theorem}\n\n\n\n\n\\section{Combined Complexity of OMQ answering}\\label{sec:complexity}\n\nThe size of OMQ rewritings we investigated so far is crucial for classical OBDA, which relies upon a reduction to standard database query evaluation (under the assumption that it is efficient in real-world applications). However, this way of answering OMQs may not be optimal, and so understanding the size of OMQ rewritings does not shed much light on how hard OMQ answering actually is. For example, \nanswering the OMQs from the proof of Theorem~\\ref{Depth1:PE} via PE-rewriting requires superpolynomial time, while the graph reachability problem encoded by those OMQs is \\ensuremath{\\mathsf{NL}}-complete. On the other hand, the existence of a short rewriting does not obviously imply tractability.\n\nIn this section, we analyse the \\emph{combined} complexity of answering OMQs classified according to the depth of ontologies and the shape of CQs. More precisely, our concern is the following decision problem: given an OMQ ${\\ensuremath{\\boldsymbol{Q}}}(\\avec{x}) = (\\mathcal{T},{\\boldsymbol q}(\\avec{x}))$, a data instance $\\mathcal{A}$ and a tuple $\\avec{a}$ from $\\mathsf{ind}(\\mathcal{A})$ (of the same length as $\\avec{x}$), decide whether $\\mathcal{T},\\mathcal{A} \\models {\\boldsymbol q}(\\avec{a})$. 
Recall from Section~\\ref{sec:TW} that $\\mathcal{T},\\mathcal{A} \\models {\\boldsymbol q}(\\avec{a})$ iff $\\canmod \\models {\\boldsymbol q}(\\avec{a})$ iff there exists a homomorphism from ${\\boldsymbol q}(\\avec{a})$ to $\\canmod$.\n\nThe combined complexity of CQ evaluation has been thoroughly investigated in relational database theory. In general, evaluating CQs is \\ensuremath{\\mathsf{NP}}-complete~\\cite{Chandra&Merlin77}, but becomes tractable for tree-shaped CQs~\\cite{DBLP:conf\/vldb\/Yannakakis81} and bounded treewidth CQs~\\cite{DBLP:journals\/tcs\/ChekuriR00,DBLP:conf\/stoc\/GroheSS01}---\\ensuremath{\\mathsf{LOGCFL}}-complete, to be more precise~\\cite{DBLP:journals\/jacm\/GottlobLS01}.\n\nThe emerging combined complexity landscape for OMQ answering is summarised in Fig.~\\ref{pic:results}~(b) in Section~\\ref{sec:results}. The \\ensuremath{\\mathsf{NP}}{} and \\ensuremath{\\mathsf{LOGCFL}}{} lower bounds for arbitrary OMQs and tree-shaped OMQs with ontologies of bounded depth are inherited from the corresponding CQ evaluation problems. The \\ensuremath{\\mathsf{NP}}{} upper bound for all OMQs was shown by~\\cite{CDLLR07} and \\cite{ACKZ09}, while the matching lower bound for tree-shaped OMQs by~\\cite{DBLP:conf\/dlog\/KikotKZ11} and \\cite{DBLP:journals\/ai\/GottlobKKPSZ14}. 
By reduction of the reachability problem for directed graphs, one can easily show that evaluation of tree-shaped CQs with a bounded number of leaves (as well as answering OMQs with unary predicates only) is \\ensuremath{\\mathsf{NL}}-hard.\nWe now establish the remaining results.\n\n\n\n\n\\subsection{OMQs with bounded-depth ontologies}\n\nWe begin by showing that the \\ensuremath{\\mathsf{LOGCFL}}{} upper bound for CQs of bounded treewidth~\\cite{DBLP:journals\/jacm\/GottlobLS01} is preserved even in the presence of ontologies of bounded depth.\n\n\\begin{theorem}\\label{logcfl-btw}\nFor any fixed $d\\geq 0$ and $t>0$, answering OMQs with ontologies of depth at most $d$ and CQs of treewidth at most $t$ is \\ensuremath{\\mathsf{LOGCFL}}-complete. \n\\end{theorem}\n\\begin{proof}\nLet ${\\ensuremath{\\boldsymbol{Q}}}(\\avec{x}) = (\\mathcal{T},{\\boldsymbol q}(\\avec{x}))$ be an OMQ with $\\mathcal{T}$ of depth at most~$d$ and ${\\boldsymbol q}$ of treewidth at most~$t$. As $\\mathcal{T}$ is of finite depth, $\\canmod$ is finite for any $\\mathcal{A}$.\nAs \\ensuremath{\\mathsf{LOGCFL}}{} is closed under \\ensuremath{\\lspace^{\\LOGCFL}}\\ reductions~\\cite{DBLP:conf\/icalp\/GottlobLS99} and evaluation of CQs of bounded treewidth is \\ensuremath{\\mathsf{LOGCFL}}-complete, it suffices to show that $\\canmod$ can be computed by an \\ensuremath{\\lspace^{\\LOGCFL}}-transducer (a deterministic logspace Turing machine with a \\ensuremath{\\mathsf{LOGCFL}}{} oracle). \nClearly, we need only logarithmic space to represent any predicate name or individual constant from $\\mathcal{T}$ and $\\mathcal{A}$, as well as any word $aw \\in \\Delta^{\\canmod}$ (since $|w| \\leq d$ and $d$ is fixed). 
Finally, as entailment in \\textsl{OWL\\,2\\,QL}{} is in \\ensuremath{\\mathsf{NL}}~\\cite{ACKZ09}, each of the following problems can be decided by making a call to an \\ensuremath{\\mathsf{NL}}\\ (hence \\ensuremath{\\mathsf{LOGCFL}}) oracle:\n\\begin{nitemize}\n\\item decide whether $a\\varrho_1 \\dots \\varrho_n \\in \\Delta^{\\canmod}$, for any $n \\le d$ and roles $\\varrho_i$ from $\\mathcal{T}$;\n\n\\item decide whether $u \\in \\Delta^{\\canmod}$ belongs to $A^{\\canmod}$, for a unary $A$ from $\\mathcal{T}$ and $\\mathcal{A}$;\n\n\\item decide whether $(u_1,u_2) \\in \\Delta^{\\canmod} \\times \\Delta^{\\canmod}$ is in $P^{\\canmod}$, for a binary $P$ from $\\mathcal{T}$ and $\\mathcal{A}$. \\qed\n\\end{nitemize}\n\\end{proof}\n\nIf we restrict the number of leaves in tree-shaped OMQs, then the \\ensuremath{\\mathsf{LOGCFL}}{} upper bound can be reduced to \\ensuremath{\\mathsf{NL}}:\n\n\n\n\n\n\\begin{theorem}\\label{nl-bb}\nFor any fixed $d\\geq 0$ and $\\ell\\geq 2$, answering OMQs with ontologies of depth at most $d$ and tree-shaped CQs with at most $\\ell$ leaves is \\ensuremath{\\mathsf{NL}}-complete.\n\\end{theorem}\n\\begin{proof}\nAlgorithm~\\ref{algo:tree-entail} defines a non-deterministic procedure \\ensuremath{\\mathsf{TreeQuery}}{} for deciding whether a tuple $\\avec{a}$ is a certain answer to a tree-shaped OMQ $(\\mathcal{T},{\\boldsymbol q}(\\avec{x}))$ over $\\mathcal{A}$. The procedure views ${\\boldsymbol q}$ as a directed tree (we pick one of its variables $z_0$ as a root) and constructs a homomorphism from ${\\boldsymbol q}(\\avec{x})$ to~$\\canmod$ on-the-fly by traversing the tree from root to leaves. 
\nThe set $\\mathsf{frontier}$ is initialised with a pair $z_0\\mapsto u_0$ representing the choice of where to map $z_0$.\nThe possible choices for $z_0$ include the individuals in $\\mathsf{ind}(\\mathcal{A})$ and the elements $aw \\in \\Delta^{\\canmod}$ such that \\mbox{$|w| \\leq 2|\\mathcal{T}|+|{\\boldsymbol q}|$}, which are enough to find a homomorphism if it exists~\\cite{ACKZ09}. This set of possible choices is denoted by $U$ in Algorithm~\\ref{algo:tree-entail}. Note that $U$ occurs only in statements of the form `\\Guess $u \\in U$' and need not be materialised. Instead, we assume that the sequence $u$ is guessed element-by-element and the condition $u\\in U$ is verified along the sequence of guesses. We use the subroutine call \\canMap{$z_0$, $u_0$} to check whether the guessed $u_0$ is compatible with $z_0$.\\!\\footnote{The operator \\Check{} immediately returns $\\ensuremath{\\mathsf{false}}$ if the condition is not satisfied.}\\ It first ensures that, if $z_0$ is an answer variable of ${\\boldsymbol q}(\\avec{x})$, then $u_0$ is the individual constant corresponding to $z_0$ in $\\avec{a}$. Next, if $u_0 \\in \\mathsf{ind}(\\mathcal{A})$, then it verifies that $u_0$ satisfies all atoms in ${\\boldsymbol q}(\\avec{x})$ that involve only~$z_0$. If $u_0 \\not \\in \\mathsf{ind}(\\mathcal{A})$, then $u_0$ must take the form $a w \\varrho$ and the subroutine checks whether $\\mathcal{T} \\models \\exists y\\, \\varrho(y,x) \\rightarrow A(x)$ (equivalently, $a w \\varrho \\in A^{\\canmod}$) for every $A(z_0) \\in {\\boldsymbol q}$ and whether $\\mathcal{T}\\models P(x,x)$ for every $P(z_0,z_0)\\in {\\boldsymbol q}$. The remainder of the algorithm consists of a while loop, in which we remove $z\\mapsto u$ from $\\mathsf{frontier}$, and if $z$ is not a leaf node, guess where to map its children. 
We must then check that the guessed element $u'$ for child $z'$ is compatible with (\\emph{i}) the binary atoms linking $z$ to $z'$ and (\\emph{ii}) the atoms that involve only $z'$; the latter is done by \\canMap{$z'$, $u'$}. If the check succeeds, we add $z' \\mapsto u'$ to $\\mathsf{frontier}$, for each child $z'$ of $z$; otherwise, $\\ensuremath{\\mathsf{false}}$ is returned. We exit the while loop when $\\mathsf{frontier}$ is empty, i.e., when an element of $\\canmod$ has been assigned to each variable in~${\\boldsymbol q}(\\avec{x})$. \n\n\n\\begin{algorithm}[t]\\SetAlgoVlined\n\\KwData{a tree-shaped OMQ $(\\mathcal{T},{\\boldsymbol q}(\\avec{x}))$, a data instance $\\mathcal{A}$ and \na tuple $\\avec{a}$ from $\\mathsf{ind}(\\mathcal{A})$}\n\\KwResult{\\ensuremath{\\mathsf{true}}{} if $\\mathcal{T},\\mathcal{A}\\models{\\boldsymbol q}(\\avec{a})$ and \\ensuremath{\\mathsf{false}}{} otherwise}\n\\BlankLine\nfix a directed tree $T$ compatible with the Gaifman graph of ${\\boldsymbol q}$ and let $z_0$ be its root\\; \nlet $U = \\bigl\\{aw \\in\\Delta^{\\canmod} \\mid a\\in \\mathsf{ind}(\\mathcal{A}) \\text{ and } |w| \\leq 2|\\mathcal{T}|+|{\\boldsymbol q}|\\bigr\\}$\\tcc*[r]{not computed}\n\\Guess{$u_0 \\in U$}\\tcc*[r]{use the definition of $U$ to check whether the guess is allowed}\n\\Check{\\canMap{$z_0$,$u_0$}}\\;\n$\\mathsf{frontier} \\longleftarrow \\{z_0 \\mapsto u_0 \\}$\\;\n\\While{$\\mathsf{frontier} \\ne \\emptyset$}{\nremove some $z \\mapsto u$ from $\\mathsf{frontier}$\\;\n\\ForEach{child $z'$ of $z$ in $T$}{%\n\\Guess{$u' \\in U$}\\tcc*[r]{use the def. 
of $U$ to check whether the guess is allowed}\n\\Check{$(u,u')\\in P^{\\mathcal{C}_{\\mathcal{T},\\mathcal{A}}}$, for all $P(z,z')\\in{\\boldsymbol q}$, \\KwAnd \\canMap{$z'$,$u'$}}\\;\n$\\mathsf{frontier} \\longleftarrow \\mathsf{frontier} \\cup \\{ z' \\mapsto u' \\}$\n}\n}\n\\Return \\ensuremath{\\mathsf{true}}\\;\n\\BlankLine\n\\func{\\canMap{$z$, $u$}}{\n\\lIf{$z$ is the $i$th answer variable \\KwAnd $u\\ne a_i$}{\\Return \\ensuremath{\\mathsf{false}}}\n\\uIf(\\tcc*[f]{the element $u$ is in the tree part of the canonical model}){$u = aw\\varrho$}{%\n\\Check{$\\mathcal{T}\\models \\exists y\\,\\varrho(y,x)\\to A(x)$, for all $A(z)\\in {\\boldsymbol q}$, \\KwAnd $\\mathcal{T}\\models P(x,x)$, for all $P(z,z)\\in {\\boldsymbol q}$}}\n\\Else(\\tcc*[f]{otherwise, $u\\in \\mathsf{ind}(\\mathcal{A})$}){%\n\\Check{$u\\in A^{\\canmod}$, for all $A(z)\\in {\\boldsymbol q}$, \\KwAnd $(u,u)\\in P^{\\canmod}$, for all $P(z,z)\\in {\\boldsymbol q}$}\n}\n\\Return \\ensuremath{\\mathsf{true}};\n}\n\\caption{Non-deterministic procedure \\ensuremath{\\mathsf{TreeQuery}}{}\nfor answering tree-shaped OMQs}\\label{algo:tree-entail}\n\\end{algorithm}\n\n\nCorrectness and termination of the algorithm are straightforward and hold for tree-shaped OMQs with arbitrary ontologies. Membership in \\ensuremath{\\mathsf{NL}}{} for bounded-depth ontologies and bounded-leaf queries follows from the fact that the number of leaves of ${\\boldsymbol q}$ does not exceed $\\ell$, in which case the cardinality of $\\mathsf{frontier}$ is bounded by $\\ell$, and the fact that the depth of $\\mathcal{T}$ does not exceed~$d$, in which case every element of $U$ requires only a fixed amount of space to store. So, since each variable $z$ can be stored in logarithmic space, the set $\\mathsf{frontier}$ can also be stored in logarithmic space. 
Finally, it should be clear that the subroutine \\canMap{$z$, $u$} can also be implemented in \\ensuremath{\\mathsf{NL}}~\\cite{ACKZ09}.\n\\end{proof}\n\n\n\\subsection{OMQs with bounded-leaf CQs}\n\nIt remains to settle the complexity of answering OMQs with arbitrary ontologies and bounded-leaf CQs, for which neither the upper bounds from the preceding subsection nor the \\ensuremath{\\mathsf{NP}}{} lower bound by~\\cite{DBLP:conf\/dlog\/KikotKZ11} are applicable. \n\n\\begin{theorem}\\label{logcfl-c-arb}\nFor any fixed $\\ell\\geq 2$, answering OMQs with tree-shaped CQs having at most $\\ell$ leaves is \\ensuremath{\\mathsf{LOGCFL}}-complete.\n\\end{theorem} \n\\begin{proof} \nFirst, we establish the upper bound using a characterisation of the class \\ensuremath{\\mathsf{LOGCFL}}{} in \nterms of non-deterministic auxiliary pushdown automata (NAuxPDAs). \nAn NAuxPDA~\\cite{DBLP:journals\/jacm\/Cook71} is a non-deterministic Turing machine with an additional work tape constrained\nto operate as a pushdown store. \\cite{sudborough78} showed that \\ensuremath{\\mathsf{LOGCFL}}{} coincides with the class of problems that can be solved by NAuxPDAs running in logarithmic space and polynomial time (note that the space on the pushdown tape is not subject to the logarithmic space bound). Algorithms~\\ref{algo:bbqueries} and~\\ref{algo:bbqueries:2} give a procedure \\ensuremath{\\mathsf{BLQuery}}{} for answering OMQs with bounded-leaf CQs that can be implemented by an NAuxPDA. 
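For intuition, the frontier-based traversal that both \ensuremath{\mathsf{TreeQuery}}{} and \ensuremath{\mathsf{BLQuery}}{} refine can be rendered as a deterministic sketch over a finite, fully materialised model, with every \Guess{} replaced by exhaustive backtracking. The encoding below (the dictionary layout and the names \texttt{maps\_into}, \texttt{tree\_query\_holds}) is our own and carries none of the space guarantees of the actual procedures:

```python
# Deterministic sketch (our own encoding; none of the NL/LOGCFL space bounds
# apply): the frontier-based search of TreeQuery with each Guess replaced by
# exhaustive backtracking over a finite, fully materialised model.

def maps_into(z, u, model, children, unary, binary):
    """Can query variable z be sent to element u, extending to z's subtree?"""
    # analogue of canMap(z, u): u must satisfy every unary atom on z
    if not all(u in model['unary'].get(A, set()) for A in unary.get(z, ())):
        return False
    for z2 in children.get(z, ()):
        # pick an image u2 for the child z2 (exhaustive search in place of
        # Guess), check the binary atoms linking z to z2, then recurse
        if not any(all((u, u2) in model['binary'].get(P, set())
                       for P in binary.get((z, z2), ()))
                   and maps_into(z2, u2, model, children, unary, binary)
                   for u2 in model['dom']):
            return False
    return True

def tree_query_holds(model, children, root, unary, binary):
    """True iff the tree-shaped query embeds into the (finite) model."""
    return any(maps_into(root, u, model, children, unary, binary)
               for u in model['dom'])
```

For instance, over the model with domain $\{a,b,c\}$ and facts $R(a,b)$, $S(b,c)$, the path query $R(z_0,z_1) \land S(z_1,z_2)$ embeds by mapping $z_0,z_1,z_2$ to $a,b,c$, whereas additionally requiring $A(z_2)$ with $A$ interpreted as $\{a\}$ makes the search fail.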
\n\n\n\n\\begin{algorithm}[t]\\SetAlgoVlined%\n\\KwData{a bounded-leaf OMQ $(\\mathcal{T},{\\boldsymbol q}(\\avec{x}))$, a data instance $\\mathcal{A}$ and\na tuple $\\avec{a}$ from $\\mathsf{ind}(\\mathcal{A})$}\n\\KwResult{\\ensuremath{\\mathsf{true}}{} if $\\mathcal{T},\\mathcal{A}\\models{\\boldsymbol q}(\\avec{a})$ and \\ensuremath{\\mathsf{false}}{} otherwise}\n\\BlankLine\nfix a directed tree $T$ compatible with the Gaifman graph of ${\\boldsymbol q}$ and let $z_0$ be its root\\; \n\\Guess{$a_0 \\in\\mathsf{ind}(\\mathcal{A})$}\\tcc*[r]{guess the ABox element}\n\\Guess{$n_0 < 2|\\mathcal{T}|+|{\\boldsymbol q}|$}\\tcc*[r]{maximum distance from ABox of relevant elements}\n\\ForEach(\\tcc*[f]{guess the initial element in a step-by-step fashion} ){$n$ in $1,\\dots,n_0$}{\n\\Guess{a role $\\varrho$ in $\\mathcal{T}$ \\KwSuchThat \\isGenerated{$\\varrho$, $a_0$, $\\mathsf{top}(\\mathsf{stack})$}}\\;\npush $\\varrho$ on $\\mathsf{stack}$}\n\\Check{\\canMapTail{$z_0$, $a_0$, $\\mathsf{top}(\\mathsf{stack})$}}\\;\n$\\mathsf{frontier}\\longleftarrow \\bigl\\{(z_0 \\mapsto (a_0, |\\mathsf{stack}|), z_i) \\mid z_i \\text{ is a child of } z_0 \\text{ in } T \\bigr\\}$\\;\n\\While{$\\mathsf{frontier} \\ne \\emptyset$}{\n\\Guess{one of the 4 options}\\;\n\\uIf(\\tcc*[f]{take a step in $\\mathsf{ind}(\\mathcal{A})$}){Option 1}{\nremove some $(z \\mapsto (a,0), z')$ from $\\mathsf{frontier}$\\;\n\\Guess{$a' \\in \\mathsf{ind}(\\mathcal{A})$}\\; \n\\Check{$(a,a')\\in P^{\\mathcal{C}_{\\mathcal{T},\\mathcal{A}}}$, for all $P(z,z')\\in{\\boldsymbol q}$, \\KwAnd \\canMapTail{$z'$, $a'$, $\\varepsilon$}}\\;\n$\\mathsf{frontier} \\longleftarrow \\mathsf{frontier} \\cup \\{ (z' \\mapsto (a',0),z_i')\\mid z_i' \\text{ is a child of } z' \\text{ in } T \\}$\n}\n\\uElseIf(\\tcc*[f]{a step `forward' in the tree part}){Option 2 \\KwAnd $|\\mathsf{stack}| < 2|\\mathcal{T}|+|{\\boldsymbol q}|$}{\nremove some $(z \\mapsto (a,|\\mathsf{stack}|), z')$ from 
$\\mathsf{frontier}$\\;\n\\Guess{a role $\\varrho$ in $\\mathcal{T}$ \\KwSuchThat \\isGenerated{$\\varrho$, $a$, $\\mathsf{top}(\\mathsf{stack})$}}\\;\npush $\\varrho$ on $\\mathsf{stack}$\\;\n\\Check{$\\mathcal{T}\\models \\varrho(x,y)\\to P(x,y)$, for all $P(z,z')\\in {\\boldsymbol q}$, \\KwAnd \\canMapTail{$z'$, $a$, $\\mathsf{top}(\\mathsf{stack})$}}\\;\n$\\mathsf{frontier} \\longleftarrow \\mathsf{frontier} \\cup \\{ (z' \\mapsto (a,|\\mathsf{stack}|),z_i')\\mid z_i' \\text{ is a child of } z' \\text{ in } T \\}$\n}\n\\uElseIf(\\tcc*[f]{take a step `backward' in the tree part}){Option 3 \\KwAnd $|\\mathsf{stack}| > 0$}{\nlet $\\mathsf{deepest} = \\{(z \\mapsto (a,n), z') \\in \\mathsf{frontier} \\mid n = |\\mathsf{stack}| \\}$\\tcc*[r]{may be empty}\nremove all $\\mathsf{deepest}$ from $\\mathsf{frontier}$\\; \npop $\\varrho$ from $\\mathsf{stack}$\\;\n\\ForEach{$(z \\mapsto (a,n), z') \\in \\mathsf{deepest}$}{\n\\Check{$\\mathcal{T}\\models \\varrho(x,y) \\to P(x,y)$, for all $P(z',z)\\in{\\boldsymbol q}$, \\KwAnd \\canMapTail{$z'$, $a$, $\\mathsf{top}(\\mathsf{stack})$}}\\;\n$\\mathsf{frontier} \\longleftarrow \\mathsf{frontier} \\cup \\{ (z' \\mapsto (a,|\\mathsf{stack}|),z_i')\\mid z_i' \\text{ is a child of } z' \\text{ in } T \\}$\n}\n}\n\\uElseIf(\\tcc*[f]{take a `loop'-step in the tree part of $\\canmod$}){Option 4}{ \nremove some $(z \\mapsto (a,|\\mathsf{stack}|), z')$ from $\\mathsf{frontier}$\\;\n\\Check{$\\mathcal{T}\\models P(x,x)$, for all $P(z,z')\\in{\\boldsymbol q}$, \\KwAnd \\canMapTail{$z'$, $a$, $\\mathsf{top}(\\mathsf{stack})$}}\\;\n$\\mathsf{frontier} \\longleftarrow \\mathsf{frontier} \\cup \\{ (z' \\mapsto (a,|\\mathsf{stack}|),z_i')\\mid z_i' \\text{ is a child of } z' \\text{ in } T \\}$\n}\n\\lElse{\\Return \\ensuremath{\\mathsf{false}}}\n}\n\\Return \\ensuremath{\\mathsf{true}}\\;\n\\caption{Non-deterministic procedure \\ensuremath{\\mathsf{BLQuery}}{} \nfor answering bounded-leaf 
OMQs.}\n\\label{algo:bbqueries}\n\\end{algorithm}\n\n\n\\begin{algorithm}[t]\\SetAlgoVlined%\n\\func{\\canMapTail{$z$, $a$, $\\sigma$}}{\n\\lIf{$z$ is the $i^{\\text{th}}$ answer variable \\KwAnd either $a\\ne a_i$ or $\\sigma\\ne\\varepsilon$}{\\Return \\ensuremath{\\mathsf{false}}}\n\\uIf(\\tcc*[f]{an element of the form $a\\ldots\\sigma$ in the tree part}){$\\sigma \\ne\\varepsilon$}{%\n\\Check{$\\mathcal{T}\\models \\exists y\\,\\sigma(y,x)\\to A(x)$, for all $A(z)\\in {\\boldsymbol q}$, \\KwAnd $\\mathcal{T}\\models P(x,x)$, for all $P(z,z)\\in{\\boldsymbol q}$}\n}\n\\Else(\\tcc*[f]{otherwise, in $\\mathsf{ind}(\\mathcal{A})$}){%\n\\Check{$a\\in A^{\\canmod}$, for all $A(z)\\in {\\boldsymbol q}$, \\KwAnd $(a,a)\\in P^{\\canmod}$, for all $P(z,z)\\in{\\boldsymbol q}$}\n}\n\\Return \\ensuremath{\\mathsf{true}};\n}\n\\BlankLine\n\\func{\\isGenerated{$\\varrho$, $a$, $\\sigma$}}{\n\\uIf(\\tcc*[f]{an element of the form $a\\ldots\\sigma$ in the tree part}){$\\sigma \\ne\\varepsilon$}{%\n\\Check{$\\mathcal{T}\\models \\exists y\\,\\sigma(y,x)\\to \\exists y\\,\\varrho(x,y)$}\n}\n\\Else(\\tcc*[f]{otherwise, in $\\mathsf{ind}(\\mathcal{A})$}){%\n\\Check{$(a,b)\\in \\varrho(x,y)^{\\canmod}$, for some $b\\in\\Delta^{\\canmod}\\setminus\\mathsf{ind}(\\mathcal{A})$}\n}\n\\Return \\ensuremath{\\mathsf{true}};\n}\n\\caption{Subroutines for \\ensuremath{\\mathsf{BLQuery}}{}.}\n\\label{algo:bbqueries:2}\n\\end{algorithm}\n\n\n\nSimilarly to \\ensuremath{\\mathsf{TreeQuery}}, the idea is to view the input CQ ${\\boldsymbol q}(\\avec{x})$ as a tree and iteratively construct a homomorphism from ${\\boldsymbol q}(\\avec{x})$ to $\\canmod$, working from root to leaves. We begin by guessing an element $a_0w$ to which the root variable $z_0$ is mapped and checking that $a_0w$ is compatible with $z_0$. However, instead of storing directly $a_0w$ in $\\mathsf{frontier}$, we guess it element-by-element and push the word $w$ onto the stack, $\\mathsf{stack}$. 
We assume that we have access to the top of the $\\mathsf{stack}$, denoted by $\\mathsf{top}(\\mathsf{stack})$, and the call $\\mathsf{top}(\\mathsf{stack})$ on empty $\\mathsf{stack}$ returns $\\varepsilon$. During execution of \\ensuremath{\\mathsf{BLQuery}}, the height of the stack will never exceed $2|\\mathcal{T}| + |{\\boldsymbol q}|$, and so we assume that the height of the stack, denoted by $|\\mathsf{stack}|$, is also available as, for example, a variable whose value is updated by the push and pop operations on $\\mathsf{stack}$. \n\nAfter having guessed $a_0w$, we check that $z_0$ can be mapped to it, which is done by calling \\canMapTail{$z_0$, $a_0$, $\\mathsf{top}(\\mathsf{stack})$}. If the check succeeds, we initialise $\\mathsf{frontier}$ to the set of $4$-tuples of the form $(z_0 \\mapsto (a_0, |\\mathsf{stack}|), z_i)$, for all children $z_i$ of $z_0$ in $T$. \nIntuitively, a tuple $(z \\mapsto (a,n),z')$ records that the variable $z$ is mapped to the element $a \\,\\mathsf{stack}_{\\leq n}$ and that the child $z'$ of $z$ remains to be mapped (in the explanations we use $\\mathsf{stack}_{\\leq n}$ to refer to the word comprising the first $n$ symbols of $\\mathsf{stack}$; the algorithm, however, cannot make use of it). \n\nIn the main loop, we remove one or more tuples from $\\mathsf{frontier}$, choose where to map the variables and update $\\mathsf{frontier}$ and $\\mathsf{stack}$ accordingly. There are four options. Option~1 is used for tuples $(z \\mapsto (a,0),z')$ where both $z$ and $z'$ are mapped to individual constants, Option~2 (Option~3) for tuples $(z\\mapsto(a,n),z')$ in which we map $z'$ to a child (respectively, parent) of the image of $z$ in $\\canmod$, while Option~4 applies when $z$ and $z'$ are mapped to the same element (which is possible if $P(z,z')\\in {\\boldsymbol q}$, for some $P$ that is reflexive according to~$\\mathcal{T}$). 
Crucially, however, the order in which tuples are treated matters due to the fact that several tuples `share' the single stack. Indeed, when applying Option~3, we pop a symbol from $\\mathsf{stack}$, and may therefore lose some information that is needed for processing other tuples. To avoid this, \nOption~3 may only be applied to tuples \\mbox{$(z \\mapsto (a,n),z')$} with maximal $n$, and it must be applied to \\emph{all} such tuples at the same time. For Option~2, we require that the selected tuple $(z\\mapsto(a,n), z')$ is such that $n=|\\mathsf{stack}|$:\nsince $z'$ is being mapped to an element $a \\, \\mathsf{stack}_{\\leq n} \\,\\varrho$, we need to access the $n$th symbol in $\\mathsf{stack}$ to determine the possible choices for $\\varrho$ and to record the symbol chosen by pushing it onto $\\mathsf{stack}$. \n\nThe procedure terminates and returns \\ensuremath{\\mathsf{true}}{} when $\\mathsf{frontier}$ is empty, meaning that we have successfully constructed a homomorphism witnessing that the input tuple is an answer. Conversely, given a homomorphism from ${\\boldsymbol q}(\\avec{a})$ to $\\canmod$, we can define a successful execution of \\ensuremath{\\mathsf{BLQuery}}. We prove in Appendix~\\ref{app:complexity} that \\ensuremath{\\mathsf{BLQuery}}{} terminates (Proposition~\\ref{logcfl-upper-prop:termination}), is correct (Proposition~\\ref{logcfl-upper-prop:correctness}) \nand can be implemented by an NAuxPDA (Proposition \\ref{nauxpda}).\nThe following example illustrates the construction. 
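The shared-stack bookkeeping can be made concrete in a small sketch (the class name and data layout below are ours, not part of the formal construction): a frontier entry stores only an anchor $a$ and a depth $n$, the element it denotes is recovered as $a\,\mathsf{stack}_{\leq n}$, Option~2 may only extend an entry with $n = |\mathsf{stack}|$, and Option~3 pops only after retiring all deepest entries.

```python
class SharedStack:
    """Toy model of the single pushdown store shared by all frontier entries."""
    def __init__(self):
        self.stack = []

    def element(self, a, n):
        # the canonical-model element a.stack[:n] denoted by the entry (a, n)
        assert n <= len(self.stack)
        return (a, tuple(self.stack[:n]))

    def step_forward(self, entry, role):
        # Option 2: extend a deepest entry by one role, pushing it
        a, n = entry
        assert n == len(self.stack), 'only a deepest entry may step forward'
        self.stack.append(role)
        return (a, len(self.stack))

    def step_backward(self, frontier):
        # Option 3: retire *all* deepest entries, then pop; the children of
        # the retired entries are mapped one level up, to a.stack[:n-1]
        deepest = [e for e in frontier if e[1] == len(self.stack)]
        self.stack.pop()
        return [(a, len(self.stack)) for (a, _) in deepest]

# replaying the first iterations of the worked example that follows:
s = SharedStack()
y1 = s.step_forward(('a', 0), 'P')    # y1 mapped to aP,   stack = P
y3 = s.step_forward(y1, 'S')          # y3 mapped to aPS,  stack = PS
y5 = s.step_forward(y3, 'T-')         # y5 mapped to aPST^-, a leaf
s.step_backward([])                   # pop T^-: no deepest entries remain
(y4,) = s.step_backward([y3])         # retire y3, pop S: y4 mapped to aP
```

Losing a symbol prematurely is impossible here: the assertion in `step_forward` and the all-at-once retirement in `step_backward` are exactly the two ordering constraints described above.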
\n \n\\begin{example}\\label{ex-second-algo}\nSuppose $\\mathcal{T}$ has the following axioms:\n\\begin{align*}\nA(x) & \\rightarrow \\exists y\\, P(x,y), & \nP(x,y) & \\rightarrow U(y,x), \\\\ \n\\exists y \\, P(y,x) & \\rightarrow \\exists y \\,S(x,y), &\n\\exists y\\, S(y,x) & \\rightarrow \\exists y\\, T(y,x), &\n\\exists y\\, P(y,x) & \\rightarrow B(x),\n \\end{align*}\nthe query is\n\\begin{multline*}\n{\\boldsymbol q}(x_1,x_2) \\ \\ = \\ \\ \\exists y_1y_2y_3y_4y_5\\, \\bigl(R(y_2,x_1) \\ \\land \\ P(y_2,y_1) \\ \\land \\ S(y_1,y_3) \\ \\land {}\\\\ T(y_5,y_3) \\ \\land \\ S(y_4,y_3) \\ \\land \\ U(y_4,x_2) \\bigr)\n\\end{multline*}\nand $\\mathcal{A} = \\{A(a), R(a,c)\\}$.\nObserve that $\\mathcal{C}_{\\mathcal{T}, \\mathcal{A}} \\models {\\boldsymbol q}(c,a)$. We show how to define an execution of \\ensuremath{\\mathsf{BLQuery}}{} that returns \\ensuremath{\\mathsf{true}}{} on $((\\mathcal{T},{\\boldsymbol q}), \\mathcal{A}, (c,a))$ and the homomorphism it induces.\nWe fix some variable, say $y_1$, as the root of the query tree. We then guess the constant~$a$ and the word $P$, push $P$ onto $\\mathsf{stack}$ and check using \\mbox{\\canMapTail{$y_1$, $a$, $P$}} that our choice is compatible with $y_1$. \nAt the start of the while loop, we have\n\\begin{equation*}\\tag{\\textsf{w-1}}\n\\mathsf{frontier} = \\{(y_1\\mapsto (a,1),y_2),(y_1\\mapsto(a,1), y_3)\\} \\ \\ \\ \\text{ and } \\ \\ \\ \\mathsf{stack} = P,\n\\end{equation*}\nwhere the first tuple, for example, records that $y_1$ has been mapped to $a\\, \\mathsf{stack}_{\\leq 1} = aP$ and\n$y_2$ remains to be mapped. \nWe are going to use Option~3 for $(y_1\\mapsto(a,1),y_2)$ and Option~2 for $(y_1\\mapsto(a,1), y_3)$. \nWe (have to) start with Option~2 though: we remove $(y_1\\mapsto (a,1), y_3)$ from $\\mathsf{frontier}$, guess $S$, push it onto $\\mathsf{stack}$, and add $(y_3\\mapsto(a,2), y_4)$ and $(y_3\\mapsto(a,2),y_5)$ to $\\mathsf{frontier}$. 
Note that the tuples in $\\mathsf{frontier}$ allow us to read off the elements $a\\, \\mathsf{stack}_{\\leq 1}$ and $a\\, \\mathsf{stack}_{\\leq 2}$ to which $y_1$ and $y_3$ are mapped. Thus, \n\\begin{equation*}\\tag{\\textsf{w-2}}\n\\mathsf{frontier} = \\{(y_1\\mapsto(a,1),y_2),(y_3\\mapsto(a,2),y_4),(y_3\\mapsto(a,2),y_5)\\} \\ \\ \\ \\text{ and } \\ \\ \\ \\mathsf{stack}= PS\n\\end{equation*}\nat the start of the second iteration of the while loop. We are going to use Option~3 for $(y_3\\mapsto(a,2),y_4)$ and Option~2 for $(y_3\\mapsto(a,2),y_5)$. Again, we have to start with Option~2: we remove $(y_3\\mapsto(a,2),y_5)$ from $\\mathsf{frontier}$, and guess $T^-$ and push it onto $\\mathsf{stack}$. As $y_5$ has no children, we leave $\\mathsf{frontier}$ unchanged. At the start of the third iteration, \n\\begin{equation*}\\tag{\\textsf{w-3}}\n\\mathsf{frontier} = \\{(y_1\\mapsto(a,1),y_2),(y_3\\mapsto(a,2),y_4)\\} \\ \\ \\ \\text{ and } \\ \\ \\ \\mathsf{stack}= PST^-;\n\\end{equation*}\nsee Fig.~\\ref{ex-fig}~(a). We apply Option~3 and, since $\\mathsf{deepest} = \\emptyset$, we pop $T^-$ from $\\mathsf{stack}$ but make no other changes. In the fourth iteration, we again apply Option~3. Since \\mbox{$\\mathsf{deepest} = \\{(y_3\\mapsto(a,2),y_4)\\}$}, we remove this tuple from\n$\\mathsf{frontier}$ and pop $S$ from $\\mathsf{stack}$. As the checks succeed for $S$, we add $(y_4\\mapsto(a,1), x_2)$ to $\\mathsf{frontier}$. \nBefore the fifth iteration, \n\\begin{equation*}\\tag{\\textsf{w-5}}\n\\mathsf{frontier} = \\{(y_1\\mapsto(a,1),y_2),(y_4\\mapsto(a,1),x_2)\\} \\ \\ \\ \\text{ and } \\ \\ \\ \\ \\mathsf{stack} = P;\n\\end{equation*}\nsee Fig.~\\ref{ex-fig}~(b). We apply Option 3 with $\\mathsf{deepest} = \\{(y_1\\mapsto(a,1),y_2),(y_4\\mapsto(a,1),x_2)\\}$. This leads to both tuples being removed\nfrom $\\mathsf{frontier}$ and $P$ popped from $\\mathsf{stack}$. 
We next perform the required checks and, in particular, verify that the choice of where to map the answer variable $x_2$ agrees with the input vector $(c,a)$ (which is indeed the case). Then, we add $(y_2\\mapsto(a,0), x_1)$ to $\\mathsf{frontier}$. The final, sixth, iteration begins with \n\\begin{equation*}\\tag{\\textsf{w-6}}\n\\mathsf{frontier} = \\{(y_2\\mapsto(a,0),x_1)\\} \\ \\ \\ \\text{ and } \\ \\ \\ \\ \\mathsf{stack}= \\varepsilon;\n\\end{equation*}\nsee Fig.~\\ref{ex-fig}~(c). We choose Option 1, remove $(y_2\\mapsto (a,0),x_1)$ from $\\mathsf{frontier}$, guess $c$, and perform the required compatibility checks. As $x_1$ is a leaf, no new tuples are added to $\\mathsf{frontier}$; see Fig.~\\ref{ex-fig}~(d). We are thus left with $\\mathsf{frontier}= \\emptyset$, and return \\ensuremath{\\mathsf{true}}{}. \n\\begin{figure}[t]%\n\\centering\n\\begin{tikzpicture}[yscale=0.9,xscale=1]\\footnotesize\n\\node at (-0.5,3.8) {\\small (a)};\n\\node[point,label=left:{$y_1$}, label=above:$B$] (y1) at (1,3) {};\n\\node at (1.25,3.8) {${\\boldsymbol q}(x_1,x_2)$};\n\\node[lrgpoint,label=left:{$y_2$}] (y2) at (0,2) {};\n\\node[point,label=right:{$y_3$}] (y3) at (2,2) {};\n\\node[bpoint,label=left:{$x_1$}] (x1) at (0,1) {};\n\\node[lrgpoint,label=left:{$y_4$}] (y4) at (1.5,1) {};\n\\node[point,label=below:{$y_5$}] (y5) at (2.5,1) {};\n\\node[bpoint,label=left:{$x_2$}] (x2) at (1.5,0) {};\n\\begin{scope}\\footnotesize\n\\draw[->,query] (y2) to node[above,sloped,pos=0.4] {$P$} (y1);\n\\draw[->,query] (y1) to node[below,sloped] {$S$} (y3);\n\\draw[->,query] (y2) to node[left] {$R$} (x1);\n\\draw[->,query] (y4) to node[pos=0.3,above,sloped] {$S$} (y3);\n\\draw[->,query] (y5) to node[pos=0.3,above,sloped] {$T$} (y3);\n\\draw[->,query] (y4) to node[left] {$U$} (x2);\n\\end{scope}\n\\begin{scope}[xshift=25mm]\n\\node[bpoint, label=left:{$A$}, label=above:{$a$}] (a) at (1,3) {};\n\\node at (1.75,3.8) {$\\canmod$};\n\\node[point, label=above:{$c$}] (c) at (2.5,3) 
{};\n\\node[point, label=right:{\\scriptsize $aP$}, label=left:{$B$}] (d1) at (1,2) {};\n\\node[point, label=right:{\\scriptsize $aPS$}] (d2) at (1,1) {};\n\\node[point, label=right:{\\scriptsize $aPST^-$}] (d3) at (1,0) {};\n\\draw[->,can] (a) to node[above] {$R$} (c);\n\\draw[->,can] (a) to node [right]{$P, U^-$} (d1);\n\\draw[->,can] (d1) to node [right]{$S$} (d2);\n\\draw[->,can] (d2) to node [right]{$T^-$} (d3);\n\\draw [line width=1mm] (2,0.7) -- ++(1,0);\n\\node[rectangle,fill=black,minimum width=7mm] at (2.5,1) {\\textcolor{white}{$\\boldsymbol{P}$}}; \n\\node[rectangle,fill=black,minimum width=7mm] at (2.5,1.5) {\\textcolor{white}{$\\boldsymbol{S}$}}; \n\\node[rectangle,fill=black,minimum width=7mm] at (2.5,2) {\\textcolor{white}{$\\boldsymbol{T^-}$}}; \n\\end{scope}\n\\draw[hom] (y5) to node[above,midway,sloped,circle,fill=black,inner sep=1pt] {\\sffamily\\bfseries\\scriptsize\\textcolor{white}{2}} (d3);\n\\draw[hom] (y3) to node[above,midway,sloped,circle,fill=black,inner sep=1pt] {\\sffamily\\bfseries\\scriptsize\\textcolor{white}{1}} (d2);\n\\draw[hom] (y1) to node[above,midway,sloped,circle,fill=black,inner sep=1pt] {\\sffamily\\bfseries\\scriptsize\\textcolor{white}{0}} (d1);\n\\begin{scope}[xshift=70mm]\n\\node at (-0.5,3.8) {\\small (b)};\n\\node[point,label=left:{$y_1$}, label=above:$B$] (y1) at (1,3) {};\n\\node at (1.25,3.8) {${\\boldsymbol q}(x_1,x_2)$};\n\\node[lrgpoint,label=left:{$y_2$}] (y2) at (0,2) {};\n\\node[point,label=right:{$y_3$}] (y3) at (2,2) {};\n\\node[bpoint,label=left:{$x_1$}] (x1) at (0,1) {};\n\\node[point,label=left:{$y_4$}] (y4) at (1.5,1) {};\n\\node[point,label=below:{$y_5$}] (y5) at (2.5,1) {};\n\\node[lrgbpoint,label=left:{$x_2$}] (x2) at (1.5,0) {};\n\\begin{scope}\\footnotesize\n\\draw[->,query] (y2) to node[above,sloped,pos=0.4] {$P$} (y1);\n\\draw[->,query] (y1) to node[below,sloped] {$S$} (y3);\n\\draw[->,query] (y2) to node[left] {$R$} (x1);\n\\draw[->,query] (y4) to node[pos=0.3,above,sloped] {$S$} 
(y3);\n\\draw[->,query] (y5) to node[pos=0.25,below,sloped] {$T$} (y3);\n\\draw[->,query] (y4) to node[left] {$U$} (x2);\n\\end{scope}\n\\begin{scope}[xshift=25mm]\n\\node[bpoint, label=left:{$A$}, label=above:{$a$}] (a) at (1,3) {};\n\\node at (1.75,3.8) {$\\canmod$};\n\\node[point, label=above:{$c$}] (c) at (2.5,3) {};\n\\node[point, label=right:{\\scriptsize $aP$}, label=left:{$B$}] (d1) at (1,2) {};\n\\node[point, label=right:{\\scriptsize $aPS$}] (d2) at (1,1) {};\n\\node[point, label=right:{\\scriptsize $aPST^-$}] (d3) at (1,0) {};\n\\draw[->,can] (a) to node[above] {$R$} (c);\n\\draw[->,can] (a) to node [right]{$P, U^-$} (d1);\n\\draw[->,can] (d1) to node [right]{$S$} (d2);\n\\draw[->,can] (d2) to node [right]{$T^-$} (d3);\n\\draw [line width=1mm] (2,0.7) -- ++(1,0);\n\\node[rectangle,fill=black,minimum width=7mm] at (2.5,1) {\\textcolor{white}{$\\boldsymbol{P}$}}; \n\\end{scope}\n\\draw[hom] (y3) to node[below,pos=0.7,sloped,circle,fill=gray,inner sep=1pt] {\\sffamily\\bfseries\\scriptsize\\textcolor{white}{1}} (d2);\n\\draw[hom] (y4) to node[above,pos=0.7,sloped,circle,fill=black,inner sep=1pt] {\\sffamily\\bfseries\\scriptsize\\textcolor{white}{4}} (d1);\n\\end{scope}\n\\begin{scope}[xshift=0mm,yshift=-42mm]\n\\node at (-0.5,3.3) {\\small (c)};\n\\node[point,label=left:{$y_1$}, label=above:$B$] (y1) at (1,3) {};\n\\node[point,label=left:{$y_2$}] (y2) at (0,2) {};\n\\node[point,label=right:{$y_3$}] (y3) at (2,2) {};\n\\node[lrgbpoint,label=left:{$x_1$}] (x1) at (0,1) {};\n\\node[point,label=left:{$y_4$}] (y4) at (1.5,1) {};\n\\node[point,label=below:{$y_5$}] (y5) at (2.5,1) {};\n\\node[bpoint,label=left:{$x_2$}] (x2) at (1.5,0) {};\n\\begin{scope}\\footnotesize\n\\draw[->,query] (y2) to node[above,sloped,pos=0.4] {$P$} (y1);\n\\draw[->,query] (y1) to node[below,sloped] {$S$} (y3);\n\\draw[->,query] (y2) to node[left] {$R$} (x1);\n\\draw[->,query] (y4) to node[pos=0.3,above,sloped] {$S$} (y3);\n\\draw[->,query] (y5) to node[pos=0.3,above,sloped] {$T$} 
(y3);\n\\draw[->,query] (y4) to node[left] {$U$} (x2);\n\\end{scope}\n\\begin{scope}[xshift=25mm]\n\\node[bpoint, label=left:{$A$}, label=above:{$a$}] (a) at (1,3) {};\n\\node[point, label=above:{$c$}] (c) at (2.5,3) {};\n\\node[point, label=right:{\\scriptsize $aP$}, label=left:{$B$}] (d1) at (1,2) {};\n\\node[point, label=right:{\\scriptsize $aPS$}] (d2) at (1,1) {};\n\\node[point, label=right:{\\scriptsize $aPST^-$}] (d3) at (1,0) {};\n\\draw[->,can] (a) to node[above] {$R$} (c);\n\\draw[->,can] (a) to node [right]{$P, U^-$} (d1);\n\\draw[->,can] (d1) to node [right]{$S$} (d2);\n\\draw[->,can] (d2) to node [right]{$T^-$} (d3);\n\\draw [line width=1mm] (2,0.7) -- ++(1,0);\n\\end{scope}\n\\draw[hom] (y2) to node[above,pos=0.75,sloped,circle,fill=black,inner sep=1pt] {\\sffamily\\bfseries\\scriptsize\\textcolor{white}{5}} (a);\n\\draw[hom] (y1) to node[above,pos=0.25,sloped,circle,fill=gray,inner sep=1pt] {\\sffamily\\bfseries\\scriptsize\\textcolor{white}{0}} (d1);\n\\draw[hom] (y4) to node[below,pos=0.75,sloped,circle,fill=gray,inner sep=1pt] {\\sffamily\\bfseries\\scriptsize\\textcolor{white}{4}} (d1);\n\\draw[hom] (x2) to node[above,pos=0.85,sloped,circle,fill=black,inner sep=1pt] {\\sffamily\\bfseries\\scriptsize\\textcolor{white}{5}} (a);\n\\end{scope}\n\\begin{scope}[xshift=70mm,yshift=-42mm]\n\\node at (-0.5,3.3) {\\small (d)};\n\\node[point,label=left:{$y_1$}, label=above:$B$] (y1) at (1,3) {};\n\\node[point,label=left:{$y_2$}] (y2) at (0,2) {};\n\\node[point,label=right:{$y_3$}] (y3) at (2,2) {};\n\\node[bpoint,label=left:{$x_1$}] (x1) at (0,1) {};\n\\node[point,label=left:{$y_4$}] (y4) at (1.5,1) {};\n\\node[point,label=below:{$y_5$}] (y5) at (2.5,1) {};\n\\node[bpoint,label=left:{$x_2$}] (x2) at (1.5,0) {};\n\\begin{scope}\\footnotesize\n\\draw[->,query] (y2) to node[above,sloped,pos=0.4] {$P$} (y1);\n\\draw[->,query] (y1) to node[below,sloped] {$S$} (y3);\n\\draw[->,query] (y2) to node[left] {$R$} (x1);\n\\draw[->,query] (y4) to 
node[pos=0.3,above,sloped] {$S$} (y3);\n\\draw[->,query] (y5) to node[pos=0.3,above,sloped] {$T$} (y3);\n\\draw[->,query] (y4) to node[left] {$U$} (x2);\n\\end{scope}\n\\begin{scope}[xshift=25mm]\n\\node[bpoint, label=left:{$A$}, label=above:{$a$}] (a) at (1,3) {};\n\\node[point, label=above:{$c$}] (c) at (2.5,3) {};\n\\node[point, label=right:{\\scriptsize $aP$}, label=left:{$B$}] (d1) at (1,2) {};\n\\node[point, label=right:{\\scriptsize $aPS$}] (d2) at (1,1) {};\n\\node[point, label=right:{\\scriptsize $aPST^-$}] (d3) at (1,0) {};\n\\draw[->,can] (a) to node[above] {$R$} (c);\n\\draw[->,can] (a) to node [right]{$P, U^-$} (d1);\n\\draw[->,can] (d1) to node [right]{$S$} (d2);\n\\draw[->,can] (d2) to node [right]{$T^-$} (d3);\n\\draw [line width=1mm] (2,0.7) -- ++(1,0);\n\\end{scope}\n\\draw[hom] (y2) to node[above,pos=0.7,sloped,circle,fill=gray,inner sep=1pt] {\\sffamily\\bfseries\\scriptsize\\textcolor{white}{5}} (a);\n\\draw[hom] (x1) to node[above,pos=0.6,sloped,circle,fill=black,inner sep=1pt] {\\sffamily\\bfseries\\scriptsize\\textcolor{white}{6}} (c);\n\\end{scope}\n\\end{tikzpicture}%\n\\caption{Partial homomorphisms from a tree-shaped CQ ${\\boldsymbol q}(x_1,x_2)$ to the canonical model \n$\\mathcal{C}_{\\mathcal{T}, \\mathcal{A}}$ and the contents of $\\mathsf{stack}$ in Example~\\ref{ex-second-algo}: (a) before the third iteration, (b) before the fifth iteration, (c) before and (d) after the final (sixth) iteration. Large nodes indicate the last component of the tuples in $\\mathsf{frontier}$.}\n\\label{ex-fig}\n\\end{figure}%\n\\end{example}\n\n\n\nThe proof of \\ensuremath{\\mathsf{LOGCFL}}-hardness is by reduction of the following problem: decide whether an input of length $n$ is accepted by the $n$th circuit of a \\emph{logspace-uniform} family of $\\ensuremath{\\mathsf{SAC}}^1$ circuits, which is known to be \\ensuremath{\\mathsf{LOGCFL}}-hard~\\cite{DBLP:journals\/jcss\/Venkateswaran91}. 
This problem was used by \\cite{DBLP:journals\/jacm\/GottlobLS01} to show \\ensuremath{\\mathsf{LOGCFL}}-hardness of evaluating tree-shaped CQs. We follow a similar approach, but with one crucial difference: using an ontology, we `unravel' the circuit into a tree, which allows us to replace tree-shaped CQs by linear ones. \nFollowing~\\cite{DBLP:journals\/jacm\/GottlobLS01}, we assume without loss of generality that the considered $\\ensuremath{\\mathsf{SAC}}^1$ circuits adhere to the following \\emph{normal form}: \n\\begin{nitemize}\n\\item fan-in of all $\\textsc{and}$-gates is 2;\n\n\\item nodes are assigned to levels, with gates on level $i$ only receiving inputs from gates on level $i-1$, the input gates on level 1 and the output gate on the greatest level;\n\n\\item the number of levels is odd, all even-level gates are $\\textsc{or}$-gates, and all odd-level non-input gates are $\\textsc{and}$-gates.\n\\end{nitemize}\nIt is well known~\\cite{DBLP:journals\/jacm\/GottlobLS01,DBLP:journals\/jcss\/Venkateswaran91} that a circuit in normal form \naccepts an input $\\avec{\\alpha}$ iff there is a labelled rooted tree (called a \\emph{proof tree}) such that\n\\begin{nitemize}\n\\item the root node is labelled with the output $\\textsc{and}$-gate;\n\n\\item if a node is labelled with an $\\textsc{and}$-gate $g_i$ and $g_i = g_j \\land g_k$,\nthen it has two children labelled with $g_j$ and $g_k$, respectively;\n\n\\item if a node is labelled with an $\\textsc{or}$-gate $g_i$ and $g_i = g_{j_1} \\lor \\dots \\lor g_{j_k}$,\nthen it has a unique child that is labelled with one of $g_{j_1},\\dots,g_{j_k}$;\n\n\\item every leaf node is labelled with an input gate whose literal evaluates to 1 under $\\avec{\\alpha}$.\n\\end{nitemize}\nFor example, the circuit in Fig.~\\ref{fig:5a}~(a) accepts $(1,0,0,0,1)$, as witnessed by the proof tree in Fig.~\\ref{fig:5a}~(b). \nWhile a circuit-input pair may admit multiple proof trees, they are all isomorphic modulo the labelling. 
Thus, with every circuit $\\boldsymbol{C}$, we can associate a \\emph{skeleton proof tree} $T$ such that $\\boldsymbol{C}$ accepts $\\avec{\\alpha}$ iff some labelling of $T$ is a proof tree for $\\boldsymbol{C}$ and~$\\avec{\\alpha}$. Note that $T$ depends only on the number of levels in $\\boldsymbol{C}$. The reduction~\\cite{DBLP:journals\/jacm\/GottlobLS01}, which we reproduce here with minor modifications for presentation purposes, encodes $\\boldsymbol{C}$ and $\\avec{\\alpha}$ in the database and uses a Boolean tree-shaped CQ based on the skeleton proof tree. Specifically, the database $D(\\avec{\\alpha})$ uses the gates of $\\boldsymbol{C}$ as constants and consists of the following facts:\n\\begin{align*}\n& L(g_j, g_i) \\text{ and } R(g_k,g_i), && \\text{ for every $\\textsc{and}$-gate $g_i$ with } g_i = g_j \\land g_k;\\\\\n& U(g_{j_1},g_i),\\dots,U(g_{j_k},g_i), && \\text{ for every $\\textsc{or}$-gate $g_i$ with } g_i = g_{j_1} \\lor \\cdots \\lor g_{j_k};\\\\\n& A(g_i), && \\text{ for every input gate $g_i$ whose value is $1$ under $\\avec{\\alpha}$}. \n\\end{align*}\nThe CQ ${\\boldsymbol q}$ uses the nodes of $T$ as variables, \nhas an atom $U(z_j,z_i)$ ($L(z_j,z_i)$, $R(z_j,z_i)$) for every node $z_i$ with a unique (left, right) child $z_j$,\nand has an atom $A(z_i)$ for every leaf node $z_i$.\nThese definitions guarantee that $D(\\avec{\\alpha})\\models {\\boldsymbol q}$ iff $\\boldsymbol{C}$ accepts $\\avec{\\alpha}$; moreover, both ${\\boldsymbol q}$ and $D(\\avec{\\alpha})$ can be constructed by logspace transducers. 
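The construction of $D(\avec{\alpha})$ above translates directly into code. The following is a minimal sketch (not from the paper): the dictionary-based circuit encoding and the fact-tuple representation are assumptions made purely for illustration.

```python
# Sketch: building the database D(alpha) from a circuit in normal form.
# and_gates:     {g_i: (g_j, g_k)}      with g_i = g_j AND g_k
# or_gates:      {g_i: [g_j1, ..., g_jk]} with g_i = g_j1 OR ... OR g_jk
# input_values:  {g_i: 0 or 1}, the value of g_i's literal under alpha
def build_database(and_gates, or_gates, input_values):
    facts = set()
    for gi, (gj, gk) in and_gates.items():
        facts.add(("L", gj, gi))   # left argument of the AND-gate
        facts.add(("R", gk, gi))   # right argument of the AND-gate
    for gi, inputs in or_gates.items():
        for gj in inputs:
            facts.add(("U", gj, gi))
    for gi, val in input_values.items():
        if val == 1:
            facts.add(("A", gi))
    return facts
```

Each step of this loop runs in logarithmic space over the circuit description, which is consistent with the logspace-transducer claim above.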
\n\n\n\\begin{figure}[t]%\n\\begin{tikzpicture}[xscale=1.1,yscale=0.8]\\small%\n\\node at (-3.2,4) {{\\small (a)}};\n\\node[fill=gray!40,input,label=left:{\\scriptsize $1$}] (in1) at (-2.7,0) {$x_1$}; \n\\node[input,label=left:{\\scriptsize $2$}] (in2) at (-1.6,0) {$x_2$}; \n\\node[fill=gray!40,input,label=left:{\\scriptsize $3$}] (in3) at (-.5,0) {$\\!\\!\\neg x_3$}; \n\\node[input,label=left:{\\scriptsize $4$}] (in4) at (.6,0) {$x_4$}; \n\\node[input,label=left:{\\scriptsize $5$}] (in5) at (1.7,0) {$x_5$}; \n\\node[input,label=left:{\\scriptsize $6$}] (in6) at (2.8,0) {$\\!\\!\\neg x_1$}; \n\\node[fill=gray!40,or-gate,label=left:{\\scriptsize $7$}] (or1-1) at (-2.2,1) {$\\textsc{or}$}; \n\\node[fill=gray!40,or-gate,label=left:{\\scriptsize $8$}] (or1-2) at (-.75,1) {$\\textsc{or}$}; \n\\node[fill=gray!40,or-gate,label=left:{\\scriptsize $9$}] (or1-4) at (.75,1) {$\\textsc{or}$}; \n\\node[or-gate,label=left:{\\scriptsize $10$}] (or1-5) at (2.2,1) {$\\textsc{or}$}; \n\\node[fill=gray!40,and-gate,label=left:{\\scriptsize $11$}] (and1-1) at (-1.5,2) {$\\textsc{and}$};\n\\node[fill=gray!40,and-gate,label=left:{\\scriptsize $12$}] (and1-2) at (0,2) {$\\textsc{and}$};\n\\node[and-gate,label=left:{\\scriptsize $13$}] (and1-3) at (1.5,2) {$\\textsc{and}$};\n\\node[fill=gray!40,or-gate,label=left:{\\scriptsize $14$}] (or2-1) at (-.8,3) {$\\textsc{or}$}; \n\\node[fill=gray!40,or-gate,label=left:{\\scriptsize $15$}] (or2-2) at (.8,3) {$\\textsc{or}$}; \n\\node[fill=gray!40,and-gate,label=left:{\\scriptsize $16$}] (and2) at (0,4) {$\\textsc{and}$}; \n\\begin{scope}[semithick]\n\\draw[->] (in1) to (or1-1);\n\\draw[->] (in1) to (or1-2);\n\\draw[->] (in2) to (or1-1);\n\\draw[->] (in2) to (or1-4);\n\\draw[->] (in3) to (or1-2);\n\\draw[->] (in3) to (or1-4);\n\\draw[->] (in4) to (or1-4);\n\\draw[->] (in4) to (or1-5);\n\\draw[->] (in5) to (or1-5);\n\\draw[->] (in6) to (or1-5);\n\\draw[->] (or1-1) to (and1-1);\n\\draw[->] (or1-2) to (and1-1);\n\\draw[->] (or1-2) to 
(and1-3);\n\\draw[->] (or1-2) to (and1-2);\n\\draw[->] (or1-4) to (and1-2);\n\\draw[->] (or1-5) to (and1-3);\n\\draw[->] (and1-1) to (or2-1);\n\\draw[->] (and1-2) to (or2-1);\n\\draw[->] (and1-2) to (or2-2);\n\\draw[->] (and1-3) to (or2-2);\n\\draw[->] (or2-1) to (and2);\n\\draw[->] (or2-2) to (and2);\n\\end{scope}\n\\begin{scope}[xshift=-20mm,xscale=1.3]\n\\node at (4.7,4) {{\\small (b)}};\n\\node[input,fill=gray!40,label=left:{\\scriptsize $1$}] (v10) at (5,0) {$x_1$};\n\\node[or-gate,fill=gray!40,label=left:{\\scriptsize $7$}] (v6) at (5,1) {$\\textsc{or}$};\n\\node[input,fill=gray!40,label=left:{\\scriptsize $3$}] (v11) at (6,0) {$\\!\\!\\neg x_3$};\n\\node[or-gate,fill=gray!40,label=left:{\\scriptsize $8$}] (v7) at (6,1) {$\\textsc{or}$};\n\\node[and-gate,fill=gray!40,label=left:{\\scriptsize $11$}] (v4) at (5.5,2) {$\\textsc{and}$};\n\\node[or-gate,fill=gray!40,label=left:{\\scriptsize $14$}] (v2) at (5.5,3) {$\\textsc{or}$};\n\\node[and-gate,fill=gray!40,label=left:{\\scriptsize $16$}] (v1) at (6.5,4) {$\\textsc{and}$};\n\\node[or-gate,fill=gray!40,label=left:{\\scriptsize $15$}] (v3) at (7.5,3) {$\\textsc{or}$};\n\\node[and-gate,fill=gray!40,label=left:{\\scriptsize $12$}] (v5) at (7.5,2) {$\\textsc{and}$};\n\\node[or-gate,fill=gray!40,label=left:{\\scriptsize $8$}] (v8) at (7,1) {$\\textsc{or}$};\n\\node[or-gate,fill=gray!40,label=left:{\\scriptsize $9$}] (v9) at (8,1) {$\\textsc{or}$};\n\\node[input,fill=gray!40,label=left:{\\scriptsize $3$}] (v12) at (7,0) {$\\!\\!\\neg x_3$};\n\\node[input,fill=gray!40,label=left:{\\scriptsize $3$}] (v13) at (8,0) {$\\!\\!\\neg x_3$};\n\\begin{scope}[semithick]\n\\draw[-] (v10) to (v6);\n\\draw[-] (v11) to (v7);\n\\draw[-] (v6) to (v4);\n\\draw[-] (v7) to (v4);\n\\draw[-] (v4) to (v2);\n\\draw[-] (v2) to (v1);\n\\draw[-] (v12) to (v8);\n\\draw[-] (v13) to (v9);\n\\draw[-] (v8) to (v5);\n\\draw[-] (v9) to (v5);\n\\draw[-] (v5) to (v3);\n\\draw[-] (v3) to 
(v1);\n\\end{scope}\n\\end{scope}\n\\end{tikzpicture}\\\\[6pt]\n\\begin{tikzpicture}[yscale=0.9,\npoint\/.style={circle,draw=black,thick,minimum size=3.5mm,inner sep=0pt,fill=white},\nipoint\/.style={circle,draw=black,thick,minimum size=2mm,inner sep=0pt,fill=white}]\n\\footnotesize\n\\begin{scope}[xscale=0.7]\n\\node at (-4,4) {{\\small (d)}};\n\\begin{scope}\\scriptsize\n\\node[point,label=below:{\\footnotesize $A$}] (in1) at (-4,0) {1}; \n\\node[point] (in2) at (-3.3,0) {2}; \n\\node[point,label=below:{\\footnotesize $A$}] (in3c2) at (-2.6,0) {1}; \n\\node[point,label=below:{\\footnotesize $A$}] (in4c2) at (-1.9,0) {3}; \n\\node[point,label=below:{\\footnotesize $A$}] (in3) at (-0.6,0) {1}; \n\\node[point] (in4) at (0.1,0) {4}; \n\\node[point] (in2d) at (0.8,0) {2}; \n\\node[point,label=below:{\\footnotesize $A$}] (in3d) at (1.5,0) {3}; \n\\node[point] (in4d) at (2.2,0) {4}; \n\\node[point,label=below:{\\footnotesize $A$}] (in3c) at (2.9,0) {1}; \n\\node[point,label=below:{\\footnotesize $A$}] (in4c) at (3.6,0) {3}; \n\\node[point] (in5) at (4.3,0) {4}; \n\\node[point] (in6) at (5,0) {5}; \n\\node[point] (in7) at (5.7,0) {6}; \n\\node[point] (or1-1) at (-3.65,1) {7}; \n\\node[point] (or1-2) at (-2.25,1) {8};\n\\node[point] (or1-3) at (-0.25,1) {8}; \n\\node[point] (or1-4) at (1.5,1) {9}; \n\\node[point] (or1-5) at (3.25,1) {8}; \n\\node[point] (or1-6) at (5,1) {10}; \n\\node[point] (and1-1) at (-2.25-0.7,2) {11};\n\\node[point] (and1-2) at (-1,2) {12};\n\\node at (-1, 1.5) {$\\vdots$};\n\\node[point] (and1-3) at (.75,2) {12};\n\\node[point] (and1-4) at (3.25+0.875,2) {13};\n\\node[point] (g2) at (-1-1.95\/2,3) {14}; \n\\node[point] (g3) at (0.75+3.3\/2,3) {15}; \n\\node[point,fill=black,label=right:{\\footnotesize $a$}] (a) at (0,4) {\\textcolor{white}{16}}; \n\\end{scope}\n\\begin{scope}\\scriptsize\n\\draw[->,can] (g2) to node [above,sloped]{$L$} (a);\n\\draw[->,can] (g3) to node [above,sloped]{$R$} (a);\n\\draw[->,can] (and1-1) to node [above,sloped]{$U$} 
(g2);\n\\draw[->,can] (and1-2) to node [above,sloped]{$U$} (g2);\n\\draw[->,can] (and1-3) to node [above,sloped]{$U$} (g3);\n\\draw[->,can] (and1-4) to node [above,sloped]{$U$} (g3);\n\\draw[->,can] (or1-1) to node [above,sloped,pos=0.4]{$L$} (and1-1);\n\\draw[->,can] (or1-2) to node [above,sloped,pos=0.4]{$R$} (and1-1);\n\\draw[->,can] (or1-3) to node [above,sloped,pos=0.4]{$L$} (and1-3);\n\\draw[->,can] (or1-4) to node [above,sloped,pos=0.4]{$R$} (and1-3);\n\\draw[->,can] (or1-5) to node [above,sloped,pos=0.4]{$L$} (and1-4);\n\\draw[->,can] (or1-6) to node [above,sloped,pos=0.4]{$R$} (and1-4);\n\\begin{scope}\\tiny\n\\draw[->,can] (in1) to node [above,sloped,pos=0.4]{$U$} (or1-1);\n\\draw[->,can] (in2) to node [above,sloped,pos=0.4]{$U$} (or1-1);\n\\draw[->,can] (in3c2) to node [above,sloped,pos=0.4]{$U$} (or1-2);\n\\draw[->,can] (in4c2) to node [above,sloped,pos=0.4]{$U$} (or1-2);\n\\draw[->,can] (in3c) to node [above,sloped,pos=0.4]{$U$} (or1-5);\n\\draw[->,can] (in4c) to node [above,sloped,pos=0.4]{$U$} (or1-5);\n\\draw[->,can] (in2d) to node [above,sloped,pos=0.4]{$U$} (or1-4);\n\\draw[->,can] (in3d) to node [above,sloped,pos=0.4]{$U$} (or1-4);\n\\draw[->,can] (in4d) to node [above,sloped,pos=0.4]{$U$} (or1-4);\n\\draw[->,can] (in3) to node [above,sloped,pos=0.4]{$U$} (or1-3);\n\\draw[->,can] (in4) to node [above,sloped,pos=0.4]{$U$} (or1-3);\n\\draw[->,can] (in5) to node [above,sloped,pos=0.4]{$U$} (or1-6);\n\\draw[->,can] (in6) to node [above,sloped,pos=0.4]{$U$} (or1-6);\n\\draw[->,can] (in7) to node [above,sloped,pos=0.4]{$U$} (or1-6);\n\\end{scope}\n\\end{scope}\n\\end{scope}\n\\begin{scope}[xshift=-7mm,xscale=1.3]\n\\node at (4.5,4) {{\\small (c)}};\n\\begin{scope}[line width=1.8mm,gray!30] \n\\draw[->] (5,0) to node[pos=0.6] {\\color{black}$U}%{\\mathsf{or}$} (5,1.1);\n\\draw[->] (6,0) to node[pos=0.6] {\\color{black}$U}%{\\mathsf{or}$} (6,1.1);\n\\draw[->] (5,1) to node[midway,sloped] {\\normalsize\\color{black}$L}%{\\mathsf{la}$} (5.4,2);\n\\draw[->] 
(6,1) to node[midway,sloped] {\\normalsize\\color{black}$R}%{\\mathsf{ra}$} (5.6,2);\n\\draw[->] (7,0) to node[pos=0.6] {\\color{black}$U}%{\\mathsf{or}$} (7,1.1);\n\\draw[->] (8,0) to node[pos=0.6] {\\color{black}$U}%{\\mathsf{or}$} (8,1.1);\n\\draw[->] (7,1) to node[midway,sloped] {\\normalsize\\color{black}$L}%{\\mathsf{la}$} (7.4,2);\n\\draw[->] (8,1) to node[midway,sloped] {\\normalsize\\color{black}$R}%{\\mathsf{ra}$} (7.6,2);\n\\draw[->] (5.5,2) to node[pos=0.5] {\\color{black}$U}%{\\mathsf{or}$} (5.5,3);\n\\draw[->] (7.5,2) to node[pos=0.5] {\\color{black}$U}%{\\mathsf{or}$} (7.5,3);\n\\draw[->] (5.5,3) to node[midway,sloped] {\\normalsize\\color{black}$L}%{\\mathsf{la}$} (6.35,4);\n\\draw[->] (7.5,3) to node[midway,sloped] {\\normalsize\\color{black}$R}%{\\mathsf{ra}$} (6.65,4);\n\\end{scope}\n\\begin{scope}\\footnotesize\n\\node[ipoint,fill=black] (u1) at (6,4) {};\n\\node[ipoint] (u2) at (5.25,3) {};\n\\node[ipoint] (u3) at (5,2) {};\n\\node[ipoint] (u4) at (4.75,1) {};\n\\node[ipoint,label=below:{$A$}] (u5) at (5,0) {};\n\\node[ipoint] (u6) at (5.25,1) {};\n\\node[ipoint] (u7) at (5.5,2) {};\n\\node[ipoint] (u8) at (5.75,1) {};\n\\node[ipoint,label=below:{$A$}] (u9) at (6,0) {};\n\\node[ipoint] (u10) at (6.25,1) {};\n\\node[ipoint] (u11) at (6,2) {};\n\\node[ipoint] (u12) at (5.75,3) {};\n\\node[ipoint] (u13) at (6.5,4) {};\n\\node[ipoint] (u14) at (7.25,3) {};\n\\node[ipoint] (u15) at (7,2) {};\n\\node[ipoint] (u16) at (6.75,1) {};\n\\node[ipoint,label=below:{$A$}] (u17) at (7,0) {};\n\\node[ipoint] (u18) at (7.25,1) {};\n\\node[ipoint] (u19) at (7.5,2) {};\n\\node[ipoint] (u20) at (7.75,1) {};\n\\node[ipoint,label=below:{$A$}] (u21) at (8,0) {};\n\\node[ipoint] (u22) at (8.25,1) {};\n\\node[ipoint] (u23) at (8,2) {};\n\\node[ipoint] (u24) at (7.75,3) {};\n\\node[ipoint] (u25) at (7,4) {};\n\\end{scope}\n\\begin{scope}[thick]\\normalsize\n\\draw[<-] (u1) to (u2);\n\\draw[<-] (u2) to (u3);\n\\draw[<-] (u3) to node[left] {} (u4);\n\\draw[<-] (u4) to 
(u5);\n\\draw[->] (u5) to (u6);\n\\draw[->] (u6) to (u7);\n\\draw[<-] (u7) to (u8);\n\\draw[<-] (u8) to (u9);\n\\draw[->] (u9) to (u10);\n\\draw[->] (u10) to (u11);\n\\draw[->] (u11) to (u12);\n\\draw[->] (u12) to (u13);\n\\draw[<-] (u13) to (u14);\n\\draw[<-] (u14) to (u15);\n\\draw[<-] (u15) to (u16);\n\\draw[<-] (u16) to (u17);\n\\draw[->] (u17) to (u18);\n\\draw[->] (u18) to (u19);\n\\draw[<-] (u19) to (u20);\n\\draw[->] (u21) to (u20);\n\\draw[->] (u21) to (u22);\n\\draw[->] (u22) to (u23);\n\\draw[->] (u23) to (u24);\n\\draw[->] (u24) to (u25);\n\\end{scope}\n\\end{scope}\n\\end{tikzpicture}\n\\caption{\n(a) A circuit $\\boldsymbol{C}$ of 5 levels with input $\\avec{\\alpha}\\colon x_1\\mapsto 1, \\ x_2\\mapsto 0, \\ x_3\\mapsto 0,\\ x_4\\mapsto 0, \\ x_5\\mapsto 0$ (the gate number is indicated on the left and gates with value $1$ under $\\avec{\\alpha}$ are shaded); (b) a proof tree for $\\boldsymbol{C}$ and $\\avec{\\alpha}$; (c) CQs ${\\boldsymbol q}$ (thick gray arrows) and $\\q'$ (black arrows);\n(d) canonical model of $(\\Tmc_{\\avec{\\alpha}},\\Amc)$ with the subscript of $G_i$ inside the nodes.}\n\\label{fig:5a}\n\\end{figure}\n\n\nTo adapt this reduction to our setting, we replace ${\\boldsymbol q}$ by a linear CQ $\\q'$, which is obtained by a depth-first traversal of ${\\boldsymbol q}$. When evaluated on $D(\\avec{\\alpha})$, the CQs $\\q'$ and ${\\boldsymbol q}$ may give different answers, but the answers coincide if the CQs are evaluated on the \\emph{unravelling of $D(\\avec{\\alpha})$ into a tree}. Thus, we define $(\\Tmc_{\\avec{\\alpha}},\\Amc)$ whose canonical model \ninduces a tree isomorphic to the unravelling of $D(\\avec{\\alpha})$. 
\nTo formally introduce $\\q'$,\nconsider the sequence of words defined inductively as follows:\n\\begin{equation*}\nw_0 = \\varepsilon\\quad\\text{ and }\\quad w_{j+1} = L^-\\,U^-\\,w_j\\,U\\,L\\,R^-\\,U^-\\,w_j\\,U\\,R, \\text{ for } j \\geq 0.\n\\end{equation*}\nSuppose $\\boldsymbol{C}$ has $2d+1$ levels, $d \\geq 0$. Consider the $d$th word $w_d = \\varrho_1 \\varrho_2 \\dots \\varrho_k$ and take\n\\begin{equation*}\n\\q'(y_0) \\ \\ \\ = \\ \\ \\ \\exists y_1, \\dots, y_k\\,\\Bigl[\\ \\ \\bigwedge_{i=1}^{k} \\varrho_i(y_{i-1}, y_i) \\ \\ \\ \\land\\bigwedge_{\\varrho_i\\varrho_{i+1} = U^-\\,U} \\hspace*{-1.5em}A(y_i)\\ \\Bigr];\n\\end{equation*}\nsee Fig.~\\ref{fig:5a}~(c). \nWe now define $(\\Tmc_{\\avec{\\alpha}},\\Amc)$.\nSuppose $\\boldsymbol{C}$ has gates $g_1, \\dots, g_m$, with $g_m$ \nthe output gate. In addition to the predicates $U$, $L$, $R$, $A$, we introduce \na unary predicate $G_i$ for each gate $g_i$.\nWe set $\\mathcal{A} = \\{G_m(a)\\}$ and include the following axioms in $\\mathcal{T}_{\\avec{\\alpha}}$:\n\\begin{align*}\n& G_{i}(x) \\rightarrow \\exists y\\, \\bigl(S(x,y)\\land G_j(y)\\bigr), && \\text{ for every } S(g_j,g_i) \\in D(\\avec{\\alpha}), \\ S\\in \\{U, L, R\\},\\\\\n& G_i(x) \\rightarrow A(x), && \\text{ for every } A(g_i) \\in D(\\avec{\\alpha});\n\\end{align*}\nsee Fig.~\\ref{fig:5a}~(d) for an illustration. 
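The words $w_j$ can be generated mechanically from the recurrence. A small sketch follows, assuming a list-of-strings encoding in which a trailing `-` marks an inverse role; this encoding is an illustrative choice, not part of the construction itself.

```python
# Sketch: the word w_j from the recurrence
#   w_0 = epsilon,  w_{j+1} = L- U- w_j U L R- U- w_j U R.
# Reading w_d along its symbols spells out the linear CQ q', obtained by a
# depth-first traversal of the skeleton proof tree of a (2d+1)-level circuit.
def word(j):
    if j == 0:
        return []
    w = word(j - 1)
    return (["L-", "U-"] + w + ["U", "L"] +
            ["R-", "U-"] + w + ["U", "R"])
```

Note that $|w_{j+1}| = 2|w_j| + 8$, so $w_d$ has length linear in the size of the skeleton proof tree, and the adjacent pairs $U^-\,U$ (where the $A$-atoms are placed) correspond exactly to the leaves of the tree.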
\nWhen restricted to the predicates $U$, $L$, $R$, $A$, the canonical model of $(\\Tmc_{\\avec{\\alpha}},\\Amc)$ is isomorphic to the unravelling of $D(\\avec{\\alpha})$ starting from $g_m$.\n\nWe show in Appendix~\\ref{app:logcfl-hardness} that $\\q'$ and $(\\Tmc_{\\avec{\\alpha}},\\Amc)$ can be constructed by logspace transducers (Proposition~\\ref{prop:f.2}), and that $\\boldsymbol{C}$ accepts $\\avec{\\alpha}$ iff $\\mathcal{T}_{\\avec{\\alpha}}, \\mathcal{A} \\models \\q'(a)$ (Proposition~\\ref{prop:f.3}). \n\\end{proof}\n\n\\section{Conclusions and open problems}\\label{sec:conclusions}\n\nOur aim in this work was to understand how the size of OMQ rewritings and the combined complexity of OMQ answering depend on (\\emph{i}) the existential depth of \\textsl{OWL\\,2\\,QL}{} ontologies, (\\emph{ii}) the treewidth of CQs or the number of leaves in tree-shaped CQs, and (\\emph{iii}) the type of rewriting: PE, NDL or arbitrary FO. \n\nWe tackled the succinctness problem by representing OMQ rewritings as (Boolean) hypergraph functions and establishing an unexpectedly tight correspondence between the size of OMQ rewritings and the size of various computational models for computing these functions.\nIt turned out that polynomial-size \\emph{PE-rewritings} can only be constructed for OMQs with ontologies of depth 1 and CQs of bounded treewidth. Ontologies of larger depth require, in general, PE-rewritings of super-polynomial size. \nThe good and surprising news, however, is that, for classes of OMQs with ontologies of bounded depth and CQs of bounded treewidth, we can always (efficiently) construct polynomial-size \\emph{NDL-rewritings}. The same holds if we consider OMQs obtained by pairing ontologies of depth 1 with arbitrary CQs or coupling arbitrary ontologies with bounded-leaf queries; see Fig.~\\ref{pic:results} for details. 
\nThe existence of polynomial-size \\emph{FO-rewritings} for different classes of OMQs was shown to be equivalent to major open problems in computational and circuit complexity such as `$\\ensuremath{\\mathsf{NL}}\/\\mathsf{poly} \\subseteq \\smash{\\mathsf{NC}^1}?$'\\!, `$\\ensuremath{\\mathsf{LOGCFL}}\/\\mathsf{poly} \\subseteq \\smash{\\mathsf{NC}^1}?$' and `$\\ensuremath{\\mathsf{NP}}\/\\mathsf{poly} \\subseteq \\smash{\\mathsf{NC}^1}?$'\n\nWe also determined the combined complexity of answering OMQs from the considered classes. In particular, we showed that OMQ answering is tractable---either \\ensuremath{\\mathsf{NL}}- or \\ensuremath{\\mathsf{LOGCFL}}-complete---for bounded-depth ontologies coupled with bounded treewidth CQs, as well as for arbitrary ontologies paired with tree-shaped queries with a bounded number of leaves. We point out that membership in \\ensuremath{\\mathsf{LOGCFL}}{} implies that answering OMQs from the identified tractable classes can be `profitably parallelised' (for details, consult~\\cite{DBLP:journals\/jacm\/GottlobLS01}).\n\nComparing the two sides of Fig.~\\ref{pic:results}, we remark that the class of tractable OMQs nearly coincides with the OMQs admitting polynomial-size \\text{NDL}-rewritings (the only exception being OMQs with ontologies of depth 1 and arbitrary CQs). However, the \\ensuremath{\\mathsf{LOGCFL}}{} and \\ensuremath{\\mathsf{NL}}{} membership results cannot be immediately inferred from the existence of polynomial-size \\text{NDL}-rewritings, since evaluating polynomial-size NDL-queries is a \\textsc{PSpace}-complete problem in general. In fact, \nmuch more work is required to construct NDL-rewritings that can be evaluated in \\ensuremath{\\mathsf{LOGCFL}}{} and \\ensuremath{\\mathsf{NL}}{}, which will be done in a follow-up publication; see technical report~\\cite{MeghynDL16}. 
\n\n\n\n\nAlthough the present work gives comprehensive solutions to the succinctness and combined complexity problems formulated in Section~\\ref{intro}, it also raises some interesting and challenging questions: \n\\begin{enumerate}[(1)]\n\\item What is the size of rewritings of OMQs with a \\emph{fixed ontology}?\n\n\\item What is the size of rewritings of OMQs with ontologies in a \\emph{fixed signature}?\n\n\\item Is answering OMQs with CQs of bounded treewidth and ontologies of finite depth fixed-parameter tractable if the \\emph{ontology depth} is the parameter? \n\n\\item What is the size of rewritings for OMQs whose ontologies do not contain role inclusions, that is, axioms of the form $\\varrho(x,y) \\to \\varrho'(x,y)$?\n\\end{enumerate}\nAnswering these questions would provide further insight into the difficulty of OBDA and could lead to the identification of new classes of well-behaved OMQs. \n\nAs far as practical OBDA is concerned, our experience with the query answering engine Ontop~\\cite{ISWC13,DBLP:conf\/semweb\/KontchakovRRXZ14}, which employs the tree-witness rewriting, shows that mappings and database constraints together with semantic query optimisation techniques can drastically reduce the size of rewritings and produce efficient SQL queries over the data. The role of mappings and data constraints in OBDA \nis yet to be fully investigated~\\cite{DBLP:conf\/kr\/Rodriguez-MuroC12,DBLP:conf\/esws\/Rosati12,DBLP:conf\/semweb\/LemboMRST15,DBLP:conf\/dlog\/BienvenuR15}\nand constitutes another promising avenue for future work. \n\n\n\n\nFinally, the focus of this paper was on the ontology language \\textsl{OWL\\,2\\,QL}{} that has been designed specifically for OBDA via query rewriting. However, in practice ontology designers often require constructs that are not available in \\textsl{OWL\\,2\\,QL}. Typical examples are axioms such as $A(x) \\to B(x) \\lor C(x)$ and $P(x,y) \\land A(y) \\to B(x)$. 
The former is a standard covering constraint in conceptual modelling, while the latter occurs in ontologies such as SNOMED CT. \nThere are at least two ways of extending the applicability of rewriting techniques to a wider class of ontology languages.\nA first approach relies upon the observation that although many ontology languages do not guarantee the existence of rewritings for all ontology-query pairs,\nit may still be the case that the queries and ontologies typically encountered in practice do admit rewritings.\nThis has motivated the development of diverse methods for identifying particular ontologies and OMQs for which (first-order or Datalog) rewritings exist \\cite{DBLP:conf\/ijcai\/LutzPW11,DBLP:conf\/ijcai\/BienvenuLW13,DBLP:journals\/tods\/BienvenuCLW14,DBLP:conf\/aaai\/KaminskiNG14,DBLP:conf\/ijcai\/LutzHSW15}. \nA second approach consists in replacing an ontology formulated in a complex ontology language (which lacks efficient query answering algorithms)\nby an ontology written in a simpler language, for which query rewriting methods can be employed.\nIdeally, one would show that the simpler ontology is equivalent to the original with regards to query answering \\cite{BotoevaCSSSX16}, and thus provides the exact set of answers.\nAlternatively, one can use a simpler ontology to approximate the answers for the full one \\cite{DBLP:conf\/semweb\/ConsoleMRSS14,BotoevaCSSSX16}\n(possibly employing a more costly complete algorithm to decide the status of the remaining candidate answers \\cite{DBLP:journals\/jair\/ZhouGNKH15}).\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nRoth \\cite{Roth1984} proposed the study of many-to-many matching markets in the context of job markets, where each worker can have multiple jobs, and each firm can employ multiple workers, but at most one contract can be signed in between any worker and firm. 
The agents of such a market have choice functions over the possible contracts involving them, which select a subset of any given set of contracts. The most well-studied solution concept is \\emph{stability}. A solution is setwise stable if there are no alternative contracts outside of the solution set that would be selected by all parties in a blocking coalition (possibly rejecting some existing contracts). Pairwise stability means the lack of a single blocking contract. Roth showed that setwise and pairwise stable solutions coincide and exist for specific substitutable choice functions, and a number of extensions and structural results have been obtained in the follow-up literature \\cite{Roth1985}, \\cite{Blair1988}, \\cite{Fleiner2003}, \\cite{KlausWalzl2009}, \\cite{KlijnYazici2014}. \n\nIn this paper we focus on the concepts of the strong core and Pareto-optimality under lexicographic preferences. The strong core is a classical solution concept in cooperative game theory: a solution is in the strong core if there is no weakly blocking coalition, that is, a coalition with a matching (using no outside contracts) that is at least as good for all of its members and strictly better for at least one. A solution is Pareto-optimal if the grand coalition is not weakly blocking. It was already observed by Blair \\cite{Blair1988} that the (strong) core and the set of pairwise stable solutions can be independent for many-to-many matching problems under substitutable preferences. Further examples of this kind were provided in \\cite{Sotomayor1999} and \\cite{KonishiUnver2006} for more restricted responsive preferences. In this paper we provide new examples for lexicographic preferences.\n\nWhat is the relevance of strong core solutions in practice? Bilateral contracts between agents can create strong bonds: even if two agents are not transacting directly, if they are connected through third parties then they may care about each other's well-being. 
In particular, an agent would not seek a new transaction with another agent if this new transaction would result in a worse or terminated deal for an agent in their connected network. \n\nLet us consider a simple example to make this point clear. Suppose that we have four players, $a$, $b$, $c$, and $d$, transacting with each other through bilateral contracts $ab$, $bc$ and $cd$. Now, $a$ and $d$ have a new potential collaboration that would be beneficial for both of them; however, if it happens then $a$ would cancel her partnership with $b$, making $b$ worse off. Since $b$ and $d$ are connected through $c$, $d$ may decide not to engage in this blocking deal with $a$. \n\nAre such situations realistic in real-world markets? Let us just substitute $a$ with Russia, $b$ with Ukraine, $c$ with the USA, and $d$ with Germany. Russia is trading gas with Ukraine, but would prefer to trade with Germany directly through a new channel (Nord Stream 2) and then terminate its deals with Ukraine. The USA, which has a strong partnership with both Germany and Ukraine, is opposing this new deal out of concern for Ukraine.\\footnote{The current situation with Ukraine is obviously more complex; a careful game-theoretical analysis of the case of Nord Stream 2 can be found in \\cite{Sziklai_etal2020}.} \n\nWhy do we study lexicographic preferences? From a theoretical point of view this is the simplest case of preferences over bundles. When the agents provide their strict rankings over their potential partners, lexicographic preferences over the bundles are generated in a unique, straightforward way. The responsive and the even more general substitutable preferences have a large spectrum, and a central coordinator of such a market cannot expect the agents to express their preferences over the bundles, since these can be very complex and also exponential in size. 
Studying the concept of (pairwise) stability can still be tractable based on the preferences over the individual partners, but for studying the (strong) core or Pareto-optimality one would need to make certain assumptions to deal with the ambiguity of possible preference extensions for bundles.\\footnote{As an example, we can mention the concept of possible and necessary Pareto-optimality for responsive preferences, which was studied for allocation problems in \\cite{Aziz_etal2019}. For given linear orders by the agents over individual partners, a solution is \\emph{possibly Pareto-optimal} if it is Pareto-optimal for one possible responsive extension of the individual preferences, and it is \\emph{necessarily Pareto-optimal} if it is Pareto-optimal for all possible responsive extensions of the individual preferences.} We shall also note that our counter-examples and hardness results for lexicographic preferences are naturally valid for all the above-mentioned domains, namely for additive and responsive preferences as well. \n\n\n\\subsection{Related literature}\n\nMany-to-many matching markets were first studied by Roth in \\cite{Roth1984} and \\cite{Roth1985}. He considered a model with multiple possible contract terms between any worker-firm pair, from which they may select at most one. The agents on both sides select the best contracts from a possible set according to their choice functions. Roth showed that if these choice functions are substitutable then a (pairwise) stable matching always exists, and can be obtained by a deferred-acceptance algorithm. The lattice property of (pairwise) stable solutions was proved in \\cite{Blair1988}, and even more general results on the existence and lattice structure were obtained by Fleiner for substitutable choice functions by using Tarski's fixpoint theorem \\cite{Fleiner2003}. 
Klaus and Walzl studied special versions of setwise stability under different domain restrictions on substitutable preferences \\cite{KlausWalzl2009}. Klijn and Yazici proved that the rural hospitals theorem holds for substitutable and weakly separable\npreferences in many-to-many markets \\cite{KlijnYazici2014}. \n\nThe efficient computation of pairwise stable solutions for many-to-many stable matching problems was demonstrated in \\cite{BaiouBalinski2000}, and the problem of computing an optimal solution with respect to the overall rank of the matching was studied in \\cite{Bansal_etal2003}. For the nonbipartite stable fixtures problem, Irving and Scott \\cite{IrvingScott2007} provided a linear-time algorithm for finding a pairwise stable solution, if one exists. Finally, Fleiner and Cechl\\'arov\\'a \\cite{CechlarovaFleiner2005} extended these tractability results to the case of multiple contracts for the fixtures problem.\n\nRegarding the concept of the (strong) core, for many-to-one stable matching markets under responsive preferences the strong core coincides with the set of pairwise stable solutions, as shown e.g. in \\cite{RothSotomayor1990}. However, for many-to-many stable matchings Sotomayor provided examples to show that the strong core and the set of pairwise stable solutions can be disjoint \\cite{Sotomayor1999}. Konishi and \\\"Unver \\cite{KonishiUnver2006} gave an example of a many-to-many stable matching problem under responsive preferences where the core is empty. (We remark, however, that their example allowed preferences where one agent finds another agent unacceptable alone, but acceptable when bundled with a third agent.) 
In this paper we strengthen these results by giving an example showing the emptiness of the core under the restricted domain of lexicographic preferences (where, by definition, an unacceptable agent can never be part of an acceptable bundle).\n\n\nLexicographic preferences for many-to-many assignment problems with one-sided preferences have been studied in \\cite{Aziz_etal2019}, \\cite{Cechlarova_etal2014}, and \\cite{HosseiniLarson2019}.\nHowever, we are not aware of any paper on lexicographic preferences for multiple-partner matching problems.\n\n\\subsection{Our contribution}\n\nFirst we provide an example showing that the strong core of many-to-many stable matching problems can be empty even for lexicographic preferences in Section \\ref{sec:empty}. In Section \\ref{sec:hardness} we prove hardness results. We show that deciding whether a many-to-many stable matching problem has a non-empty core is NP-hard. We also prove that it is co-NP-complete to decide whether a given matching for a many-to-many stable matching problem is Pareto-optimal or whether it is in the strong core. We also show that finding a maximum-size Pareto-optimal matching for the fixtures problem is NP-hard. On the positive side, in Section \\ref{sec:easiness} we give efficient algorithms for finding a strong core solution for slightly adjusted capacities, and also for finding a half-matching that is in the strong core of fractional matchings for the stable fixtures problem. Finally, we show that finding a maximum-size matching that is Pareto-optimal can be done efficiently for many-to-many problems. \n\n\\section{Preliminaries}\n\nFirst we define the \\emph{two-sided many-to-many stable matching problem} and the one-sided \\emph{stable fixtures problem}.\nLet $G=(N,E)$ denote the underlying graph, where the node set $N$ represents the agents and we have an undirected edge $ab\\in E(G)$ if the two corresponding agents find each other mutually acceptable. 
Let $k(a)$ denote the (integer) capacity of agent $a$. We assume that every agent $a$ has linear preferences $>_a$ over the agents acceptable for her, where $b>_ac$ means that $a$ prefers $b$ to $c$. The solution of our problem is a \\emph{matching}, that is, a set of edges $M\\subset E$ such that no quota is violated. If $M(a)$ denotes the set of edges incident to node $a$ in $M$ (that is, the set of pairs in which agent $a$ is involved in the solution), then the feasibility of the matching can be described with the condition $|M(a)|\\leq k(a)$ for every agent $a\\in N$. If, for a matching $M$, the above condition is satisfied with equality then we say that the agent is \\emph{saturated}, otherwise she is \\emph{unsaturated}. When $G$ is non-bipartite then we get the stable fixtures problem \\cite{IrvingScott2007}, and when $G$ is bipartite then we get the many-to-many stable matching problem, see e.g. \\cite{BaiouBalinski2000}. \n\nThe classical solution concept for these problems is (pairwise) stability. A matching $M$ is \\emph{stable} if there is no blocking pair. A pair $ab\\notin M$ is \\emph{blocking}, if $a$ is either unsaturated, or there is $ac\\in M$ such that $b>_a c$, and likewise, $b$ is either unsaturated or there is $bd\\in M$ such that $a>_b d$. \n\nWhen all the capacities are one then for two-sided problems we get the \\emph{stable marriage problem}, and the one-sided case is called the \\emph{stable roommates problem}, as defined by Gale and Shapley \\cite{GaleShapley1962}. Gale and Shapley gave an efficient algorithm for finding a stable matching in the marriage case, and demonstrated with an example that a stable matching may not exist in the roommates case. Irving \\cite{Irving1985} gave a linear time algorithm that can find a stable solution for the roommates problem, if one exists. 
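To make the blocking condition above concrete, here is a minimal Python sketch (the encoding of agents as strings, preference lists and capacities is ours, not from the paper): a pair blocks a matching if each endpoint is either unsaturated or matched to some strictly worse partner.

```python
# Agents are strings; pref[a] lists a's acceptable partners from best to
# worst; k[a] is a's capacity; a matching is a set of frozenset({a, b}).

def prefers(pref, a, b, c):
    """True if agent a ranks b strictly above c."""
    return pref[a].index(b) < pref[a].index(c)

def partners(M, a):
    """The partners of agent a in matching M."""
    return [next(iter(e - {a})) for e in M if a in e]

def is_blocking(pref, k, M, a, b):
    """Check whether the pair ab (not in M) blocks matching M."""
    if frozenset({a, b}) in M:
        return False
    def willing(x, y):
        # x is unsaturated, or x would drop some partner worse than y
        Mx = partners(M, x)
        return len(Mx) < k[x] or any(prefers(pref, x, y, c) for c in Mx)
    return willing(a, b) and willing(b, a)

def is_pairwise_stable(pref, k, M):
    return not any(is_blocking(pref, k, M, a, b)
                   for a in pref for b in pref[a])
```

The same check works for both the bipartite many-to-many case and the non-bipartite fixtures case, since it only uses mutual acceptability.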
The results are similar for the capacitated case: a stable solution always exists for two-sided problems and can be computed in linear time by a generalised Gale-Shapley type algorithm, see e.g. \\cite{BaiouBalinski2000}. For the stable fixtures problem, Irving and Scott \\cite{IrvingScott2007} provided a linear time algorithm to find a stable solution, if one exists. \n\nIn this paper we focus on the (strong) core and Pareto-optimality of the solutions, so we need to extend the preferences of the agents over sets of partners. Let $\\succ_a$ denote the linear preferences of agent $a$ over the possible sets of partners. We will assume that the preferences of the agents are lexicographic in the sense that they mostly care about their best partner, then about their second best partner, and so on. \nFormally, we define the preference relation $\\succ_a$ for agent $a$ over the sets $S,T\\subset N$ in the following way. Consider the characteristic vectors of $S$ and $T$, denoted by $\\chi_S$ and $\\chi_T$, where the order of the coordinates is the same as the preference order of $a$ over the elements in $N$. Then $S\\succ_a T$ if and only if $\\chi_S$ is lexicographically greater than $\\chi_T$. This definition can be easily extended to the fractional case, to be defined in Section \\ref{sec:easiness}. Note that lexicographic preferences are strict by definition, so if $S\\succeq_a T$ then either $S\\succ_a T$ or $S=T$. \n\nA matching $M$ is in the \\emph{core} if there is no \\emph{blocking coalition} $S$ with an alternative matching $M'$ on $S$ that is strictly preferred by all the members of $S$, that is, $M'(a)\\succ_a M(a)$ for every $a\\in S$. 
A matching $M$ is in the \\emph{strong core} if there is no \\emph{weakly blocking coalition} $S$ with an alternative matching $M'$ on $S$ that is weakly preferred by all the members and strictly preferred by at least one member of $S$\\footnote{Note that this means that in a weakly blocking coalition for lexicographic preferences everybody either gets strictly better partners or gets the same partners as before.}. Finally, a matching is \\emph{Pareto-optimal} if it is not weakly blocked by $N$.\n\n\\subsection*{Examples for stability versus core property}\\label{examples}\n\nHere we provide examples to demonstrate the differences between stable matchings, strong core solutions and Pareto-optimal matchings.\n\n\\subsubsection*{Example 1}\nWe have four agents on each side, $A=\\{a,b,c,d\\}$ and $B=\\{x,y,z,w\\}$, having capacity two each, and with the following linear preferences on their potential partners:\\\\\n\n\\begin{center}\n\\begin{tabular}{rl|rl}\n$a:$ & $x > z > w > y$ & $x:$ & $b > c > d > a$\\\\\n$b:$ & $y > z > w > x$ & $y:$ & $a > c > d > b$\\\\\n$c:$ & $x > y$ & $z:$ & $a > b$\\\\\n$d:$ & $x > y$ & $w:$ & $a > b$\\\\\n\\end{tabular}\n\\end{center}\n\nHere, the unique pairwise stable solution is $M=\\{az,aw,bz,bw,cx,cy,dx,dy\\}$, and the unique strong core solution is $M'=\\{ax,ay,bx,by\\}$ when we assume that agents have lexicographic preferences. 
Note that both of these solutions are Pareto-optimal.\n\nThe next, extended example shows that the unique stable solution may not even be Pareto-optimal.\n\n\\subsubsection*{Example 2}\nWe have five agents on both sides, $A=\\{a,b,c,d,p\\}$ and $B=\\{x,y,z,w,q\\}$ having capacity two each, and with the following linear preferences on their potential partners:\\\\\n\n\\begin{center}\n\\begin{tabular}{rl|rl}\n$a:$ & $x > y > z > q > w$ & $x$: & $d > c > b > p > a$\\\\\n$b:$ & $y > x > w > q > z$ & $y:$ & $c > d > a > p > b$\\\\\n$c:$ & $z > w > x > q > y$ & $z:$ & $b > a > d > p > c$\\\\\n$d:$ & $w > z > y > q > x$ & $w:$ & $a > b > c > p > d$\\\\\n$p:$ & $x > y > z > w > q$ & $q:$ & $a > b > c > d > p$\\\\\n\\end{tabular}\n\\end{center}\n\nHere, the unique pairwise stable solution is $M=\\{ay, az, bx, bw, cw, cx, dz, dy, pq\\}$, and the unique strong core solution is $M'=\\{ax, aw, by, bz, cz, cy, dw, dx, pq\\}$. Note that $M'$ also Pareto-dominates $M$, so no stable matching is Pareto-optimal for this example.\\\\\n\n\\subsubsection*{Example 3}\n\nWe have a stable fixtures problem with ten agents and the following preferences:\n\\begin{center}\n\\begin{tabular}{rl}\n $x_1:$ & $x_2>x_4>x_3$ \\\\\n $x_2:$ & $x_1>x_5>x_6$ \\\\\n $x_3:$ & $x_7>x_1$ \\\\\n $x_4:$ & $x_8>x_1$ \\\\\n $x_5:$ & $x_9>x_2$ \\\\\n $x_6:$ & $x_{10}>x_2$ \\\\\n $x_7:$ & $x_3>x_8$ \\\\\n $x_8:$ & $x_4>x_7$ \\\\\n $x_9:$ & $x_5>x_{10}$ \\\\\n $x_{10}:$ & $x_6>x_9$\\\\\n\\end{tabular}\n\\end{center}\nThe capacities of agents $x_1$ and $x_2$ are 2, the capacities of the others are 1. Here the only complete matching is $M=\\{ x_1x_3,x_1x_4,x_2x_5,x_2x_6, x_7x_8, x_9x_{10} \\}$, but the matching $M'=\\{ x_1x_2, x_3x_7, x_4x_8, x_5x_9, x_6x_{10} \\}$ Pareto-dominates it, so there is no complete Pareto-optimal matching in this instance. 
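The domination claim in Example 3 can be verified mechanically with the characteristic-vector comparison from the Preliminaries. In the Python sketch below (the dictionary encoding is ours; `M1` and `M2` stand for $M$ and $M'$), Python tuples compare lexicographically, so the characteristic vectors can be compared directly:

```python
# Example 3: every agent strictly prefers her partner set in M2 to M1.
pref = {
    'x1': ['x2', 'x4', 'x3'], 'x2': ['x1', 'x5', 'x6'],
    'x3': ['x7', 'x1'], 'x4': ['x8', 'x1'],
    'x5': ['x9', 'x2'], 'x6': ['x10', 'x2'],
    'x7': ['x3', 'x8'], 'x8': ['x4', 'x7'],
    'x9': ['x5', 'x10'], 'x10': ['x6', 'x9'],
}

def chi(a, S):
    """Characteristic vector of partner set S, ordered by a's preferences."""
    return tuple(1 if b in S else 0 for b in pref[a])

def partners(M, a):
    """The partners of agent a in matching M."""
    return {b for e in M for b in e if a in e and b != a}

M1 = {frozenset(e) for e in [('x1','x3'), ('x1','x4'), ('x2','x5'),
                             ('x2','x6'), ('x7','x8'), ('x9','x10')]}
M2 = {frozenset(e) for e in [('x1','x2'), ('x3','x7'), ('x4','x8'),
                             ('x5','x9'), ('x6','x10')]}

# tuple comparison is lexicographic, matching the definition above
assert all(chi(a, partners(M2, a)) > chi(a, partners(M1, a)) for a in pref)
```

Note that here every agent improves strictly, even though Pareto-domination would only require weak improvement for all and strict improvement for one.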
We will see in Section \\ref{sec:easiness} that for a many-to-many stable matching problem a maximum size Pareto-optimal matching always exists and one can be found in polynomial time, but the same problem is NP-hard for the fixtures problem. \n\\begin{figure}\n \\centering\n \\includegraphics[height=0.15\\textheight]{pareto_ellen.pdf}\n \\caption{Example 3}\n \\label{pareto}\n\\end{figure}\n\n\\section{Emptiness of the strong core}\\label{sec:empty}\n\nIn this section we show that the strong core of a many-to-many stable matching problem may be empty, even under lexicographic preferences. \n\n\n\\begin{Th}\n\\label{ell}\nThe strong core of many-to-many matching markets may be empty, even if the preferences are lexicographic.\n\\end{Th}\n\\begin{proof}\n\n\nWe construct an instance where the strong core is empty. The construction is the following: there are 12 agents; on one side we have agents $a$ and $b$ with capacity $2$ and $c,d,x',y'$ with capacity one, and on the other side we have agents $x,y$ with capacity $2$ and $u,v,a',b'$ with capacity one. The preferences of the agents are as follows:\n\\begin{center}\n\\begin{tabular}{rl|rl}\n$a:$ & $u > y > v > a'>x$ & $x:$ & $d > a > c > x'>b$\\\\\n$b:$ & $v > x > u > b'>y$ & $y:$ & $c > b > d > y'>a$\\\\\n$c:$ & $x > y$ & $u:$ & $b>a $\\\\\n$d:$ & $y > x$ & $v:$ & $a> b$\\\\\n$x':$ & $x$ & $a':$ & $a$\\\\\n$y':$ & $y$ & $b':$ & $b$\\\\\n\\end{tabular}\n\\end{center}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=6cm]{ellenpelda.pdf}\n\\caption{An illustrative image of the counterexample used in Theorem \\ref{ell}}\n\\label{fig1}\n\\end{figure}\nLet us suppose that there is a matching $M$ in the strong core. Clearly, if any one of $\\{ c,d,u,v \\}$ is unmatched, then they form a blocking coalition with their second choice, since they are their second choice's best option. 
This also means that the middle four-cycle $C=\\{ ax,xb,by,ya\\}$ cannot be included either, since $a,b,x,y$ are the only possible partners of $\\{ c,d,u,v\\}$. Hence any possible matching $M$ in the strong core has to be acyclic.\n\nObserve that $a,b,x,y$ must be saturated (otherwise they would block with their dummy partner $a',b',x'$ or $y'$ and the rest of the acyclic component containing them).\n\nSuppose that there is an edge of the four-cycle $C$ that is included in $M$; by symmetry, suppose it is $ax$. Then $aa'$ would block it with the rest of $a$'s component in $M$, because $a$ and $x$ cannot be connected with a path. So no edge of $C$ can be in $M$.\n\nIf any of $\\{au, bv, xd, yc\\}$ is included in the matching, then $bu$, $av$, $yc$ or $xc$ would block with the rest of the given component, respectively. \nThis means that the only possible choice left for $M$ is $\\{ av,aa',bu,bb',yd,yy',xc,xx'\\}$, but then the four-cycle $C$ in the middle would block, a contradiction.\nSo the strong core of the instance is indeed empty.\n\n\\end{proof}\n\n\\section{Hardness results}\\label{sec:hardness}\n\nIn this section we prove NP-hardness results.\n\n\\subsection{Deciding the non-emptiness of the strong core}\n\n\\begin{Th}\n\\label{np-core}\nDeciding whether the strong core of a many-to-many stable matching problem is non-empty is NP-hard under lexicographic preferences, even if each capacity is at most two. \n\\end{Th}\n\\begin{proof}\nWe reduce from an NP-complete special version of the \\textsc{com-smti} problem, which was shown to be NP-hard by Manlove et al.~\\cite{Manlove_etal2002}. 
This problem is a special instance of the stable marriage problem with ties and incomplete preference lists such that there are no ties in the preferences of the men $U=\\{ u_1,..,u_n\\}$, and the set of women can be partitioned into two parts $W=W^s\\cup W^t=\\{ w_1,..,w_k\\} \\cup \\{ w_{k+1},...,w_n\\}$ such that the preference lists of the women in $W^s$ have no ties and the preference lists of the women in $W^t$ consist of a single tie. The task is to find a complete weakly stable matching, that is, a matching $M$ that pairs every man and woman with someone such that there is no pair $(m,w)\\notin M$ where both members strictly prefer each other to their partner in $M$.\n\n\\begin{figure}\n \\centering\n \\includegraphics[height=0.2\\textheight]{gadget.pdf}\n \\caption{The gadget $G_i$ for Theorem \\ref{np-core} if the tie in $w_i$'s list was $u_j\\sim u_l$ }\n \\label{woman}\n\\end{figure}\nSuppose that we have an instance $I$ of the above problem. We construct an instance $I'$ of the multiple partner stable matching problem such that the strong core is nonempty if and only if there is a complete weakly stable matching in $I$.\n\nFirst of all, for every man $u_i\\in U$ we create a vertex $u'_i$ with capacity one, and for every woman $w_i\\in W^s$ we create a vertex $w_i'$ with capacity one. Denote these two sets by $U'$ and $W^{s'}$. Then, for every woman $w_i\\in W^t$ we create a gadget $G_i$ illustrated in Figure \\ref{woman}. We create four vertices: $w_i',w_i''$ and $c_i$ having capacity two, and $d_i$ having capacity one. $w_i'$ is connected to one man in $w_i$'s preference list and $w_i''$ is connected to the other. 
The preferences are the following:\n\n\\begin{center}\n\\begin{tabular}{rl}\n$c_i:$ & $w_i'>w_i''$ \\\\\n$d_i:$ & $w_i''>w_i'$ \\\\\n$w_i':$ & $c_i>d_i>u_j'$ \\\\\n$w_i'':$ & $c_i>d_i>u_l'$ \\\\\n\\end{tabular}\n\\end{center}\nFinally, we add a gadget $G$, which is just a copy of the counterexample used in Theorem \\ref{ell}, and a special agent $g$ with capacity one.\n\nThe preferences of the agents in $U'$ are the same, just over the agents $w_i'$ instead of $w_i$, except that if there was a woman $w_i\\in W^t$ in their preference list, then we substitute $w_i$ with the appropriate copy from $\\{ w_i',w_i''\\}$. Finally, we add the special agent $g$ to the end of all of their preference lists.\n\nSimilarly, for each $w_i'\\in W^{s'}$, the preference lists are the same with $u_i'$-s instead of $u_i$-s.\nThe agents in the gadget $G$ have the same preferences, except that we add $g$ to the beginning of $a\\in G$'s list.\nFinally, the preference list of $g$ has the agents in $U'$ in arbitrary order, followed by $a\\in G$ at the end.\n\nNow let us suppose that we have a complete weakly stable matching $M$ in $I$. We create a matching $M'$ in $I'$ by adding an edge $u_i'w_j'$ or $u_i'w_j''$ (the one which exists) for each $u_iw_j\\in M$. Also, for each gadget $G_i$ we add the edges $w_i'c_i$ and $w_i''c_i$ to $M'$. If the partner of $w_i\\in W^t$ was $u_j$, then we add $w_i''d_i$; if it was $u_l$, then we add $w_i'd_i$. Finally, we add the edges $ag$ and $\\{ av,bu,yd,yy',xc,xx', bb'\\}$ to $M'$. \n\nWe show that $M'$ is in the strong core. Let us suppose there is a blocking coalition $C$ for $M'$. If there is a vertex $v_i\\in \\{ w_i',w_i'',c_i,d_i\\}$ from a gadget $G_i$ in $C$, then all of them are in $C$, since if $v_i=d_i$, then $w_i'$ or $w_i''\\in C$, so their favorite partner $c_i$ is also in $C$ and so is the other copy of $w_i$. 
Similarly, if $w_i'$ or $w_i''$ or $c_i\\in C$, then all of them are in $C$ and the two copies of $w_i$ get the same partners, so none of them can achieve a strictly better situation. So no agent from $G_i$ can improve her situation.\n\nIf a man $u_i'$ is strictly better off in $C$, then he has to get a better partner, and he also has to be at least as good a partner for her. He cannot be strictly better for her, since $M$ was weakly stable. So the partner $w_i$ has to be from $W^t$. But then, the corresponding copy of $w_i$ was matched to $c_i$ and $d_i$ in $M'$, both of which she prefers to $u_i'$, so they cannot be paired in a blocking coalition, a contradiction. \n\nIf an agent $w_i'\\in W^{s'}$ gets a strictly better partner in $C$, then she and her partner would form a blocking pair to $M$, a contradiction.\n\nThe special agent $g$ cannot get a better partner in $C$, because she is the worst choice for every possible partner other than $a$, and every one of them is at full capacity, since $M$ was a complete matching. \n\nFinally, it is easy to check that there is no blocking coalition in the gadget $G$ to $M'$ either, so $M'$ is in the strong core.\n\nFor the other direction suppose that $M'$ is in the strong core of $I'$. This implies that $ga\\in M'$, since there is no strong core solution among the agents in $G$. Therefore every agent in $U'$ must be matched to the copy of some woman, because otherwise they would block with $g$. \n\nNow, we create $M$ the following way: for each $u_i'\\in U'$ we assign $u_i$ the woman corresponding to the partner of $u_i'$ in $M'$. To see that no two men get the same woman, suppose that $u_j$ and $u_l$ do. Then, $u_j'w_i'$ and $u_l'w_i''$ are both in $M'$ for some woman $w_i\\in W^t$. 
But this means that $c_i$ and $d_i$ cannot both be saturated, so one of them blocks with $g$ and the rest of its (acyclic) component in $M'$.\n\nSince every man $u_i$ is matched, and to a different partner, it follows that $M$ is a complete matching.\n\nNow suppose there is a strictly blocking pair $(u_i,w_i)$. Then $w_i\\in W^s$ and $\\{ u_i',w_i'\\}$ would form a blocking coalition for $M'$, a contradiction.\n \nSo $M$ is a complete and weakly stable matching.\n\\end{proof}\n\n\\subsection{Checking Pareto-optimality of a matching}\n\nFirst we show that checking Pareto-optimality is co-NP-complete, and then we prove that this implies a similar hardness result for the problem of checking the strong core property. We say that a matching is \\emph{complete} if every vertex is saturated.\n\n\\begin{Th}\n\\label{pareto1}\nDeciding whether a given matching is Pareto-optimal for the many-to-many stable matching problem under lexicographic preferences is co-NP-complete, even for complete matchings.\n\\end{Th}\n\n\\begin{proof}\nThe problem is in co-NP, since checking that an alternative matching $M'$ Pareto-dominates $M$ can be done efficiently. We reduce from {\\sc Exact-3-Cover}, where we are given a set of $3n$ items $X=\\{x_1, x_2, \\dots , x_{3n}\\}$ and a set of $m$ 3-sets, $\\mathcal{Y}=\\{Y_1, Y_2, \\dots ,Y_m\\}$, where each $Y_j$ contains 3 items from $X$, and the decision question is whether there exists a subset $\\mathcal{Y'}\\subset\\mathcal{Y}$ of size $n$ that covers every element of $X$ exactly once. Given an instance $I$ of {\\sc Exact-3-Cover}, as described above, we create an instance $I'$ of the many-to-many stable matching problem as follows. We will have five gadgets, each with two sets of agents, $A\\cup B$, $C\\cup D$, $P\\cup Q$, $S\\cup T$ and $U\\cup V$. 
More specifically, the sets of agents are as follows.\n\n\\begin{center}\n\\begin{tabular}{l|l}\n$A=\\{a_1, a_2, \\dots , a_{3n}\\}$ & $B=\\{b_1, b_2, \\dots , b_{3n}\\}$\\\\\n$C=\\{c_1, c_2, \\dots , c_{n}\\}$ & $D=\\{d_1, d_2, \\dots , d_{n}\\}$\\\\\n$P=\\{p_1, p_2, \\dots , p_{3n}\\}$ & $Q=\\{q_1, q_2, \\dots , q_{3n}\\}$\\\\\n$S=\\{s_1, s_2, \\dots , s_{m}\\}$ & $T=\\{t_1, t_2, \\dots , t_{m}\\}$\\\\\n$U=\\{u_1, u_2, \\dots , u_{4m}\\}$ & $V=\\{v_1, v_2, \\dots , v_{4m}\\}$\\\\\n\\end{tabular}\n\\end{center}\n\nLet the capacity of every agent be 2 in $A\\cup B$, 3 in $C\\cup D$, 1 in $P\\cup Q$, 4 in $S\\cup T$ and 1 in $U\\cup V$. Finally, we describe the linear orders of the agents on their acceptable partners. Here, for any set $F$ of agents, $[F]$ denotes the elements of $F$ in the order of the elements' indices. Furthermore, we define $Q_j=\\{q_i:x_i\\in Y_j\\}$ and $S_i=\\{s_j:x_i\\in Y_j\\}$, and similarly $P_j=\\{p_i:x_i\\in Y_j\\}$ and $T_i=\\{t_j:x_i\\in Y_j\\}$.\n\n\\footnotesize\n\\begin{center}\n\\begin{tabular}{rl|rl}\n$a_1:$ & $b_1> q_1> d_{1}> b_{3n}$ & $b_1:$ & $a_2> p_1> c_{1}> a_{1}$ \\\\\n & $\\dots $ & $\\dots $ & \\\\\n$a_i:$ & $b_i> q_i> d_{\\lfloor (i+2)\/3\\rfloor}> b_{i-1}$ & $b_i:$ & $a_{i+1}> p_i> c_{\\lfloor (i+2)\/3\\rfloor}> a_{i}$ \\\\\n & $\\dots $ & $\\dots $ & \\\\\n$a_{3n}:$ & $b_{3n}> q_{3n}> d_{n}> b_{3n-1}$ & $b_{3n}:$ & $a_1> p_{3n}> c_{n}> a_{3n}$\\\\\n\\hline\n$c_1:$ & $d_1> b_1> b_2> b_3> d_n> [T]$ & $d_1:$ & $c_2> a_1> a_2> a_3> c_1> [S]$ \\\\\n & $\\dots $ & $\\dots $ & \\\\\n$c_i:$ & $d_i> b_{3i-2}> b_{3i-1}> b_{3i}> d_{i-1}> [T]$ & $d_i:$ & $c_{i+1}> a_{3i-2}> a_{3i-1}> a_{3i}> c_i> [S]$ \\\\\n & $\\dots $ & $\\dots $ & \\\\\n$c_n:$ & $d_n> b_{3n-2}> b_{3n-1}> b_{3n}> d_{n-1}> [T]$ & $d_n:$ & $c_{1}> a_{3n-2}> a_{3n-1}> a_{3n}> c_n> [S]$ \\\\\n\\hline\n$p_1:$ & $[T_1]> b_1$ & $q_1:$ & $[S_1]> a_1$ \\\\\n & $\\dots $ & $\\dots $ & \\\\\n$p_i:$ & $[T_i]> b_i$ & $q_i:$ & $[S_i]> a_i$ \\\\\n & $\\dots $ & 
$\\dots $ & \\\\\n$p_{3n}:$ & $[T_{3n}]> b_{3n}$ & $q_{3n}:$ & $[S_{3n}]> a_{3n}$ \\\\\n\\hline\n$s_1:$ & $[D]> v_1> v_2> v_3> v_4> [Q_1]$ & $t_1:$ & $[C]> u_1> u_2> u_3> u_4> [P_1]$ \\\\\n & $\\dots $ & $\\dots $ & \\\\\n$s_j:$ & $[D]> v_{4j-3}> v_{4j-2}> v_{4j-1}> v_{4j}> [Q_j]$ & $t_j:$ & $[C]> u_{4j-3}> u_{4j-2}> u_{4j-1}> u_{4j}> [P_j]$ \\\\\n & $\\dots $ & $\\dots $ & \\\\\n$s_m:$ & $[D]> v_{4m-3}> v_{4m-2}> v_{4m-1}> v_{4m}> [Q_m]$ & $t_m:$ & $[C]> u_{4m-3}> u_{4m-2}> u_{4m-1}> u_{4m}> [P_m]$ \\\\\n\\hline\n $u_1:$ & $v_1> t_1$ & $v_1:$ & $u_1> s_1$ \\\\\n& $\\dots $ & $\\dots $ & \\\\\n $u_j:$ & $v_j> t_{\\lfloor (j+3)\/4\\rfloor}$ & $v_j:$ & $u_j> s_{\\lfloor (j+3)\/4\\rfloor}$ \\\\\n& $\\dots $ & $\\dots $ & \\\\\n $u_{4m}:$ & $v_{4m}> t_{m}$ & $v_{4m}:$ & $u_{4m}> s_m$ \\\\\n\\end{tabular}\n\\end{center}\n\\normalsize\n\nWe create a matching $M$ in $I'$ as follows. Let each agent in $A$ and $B$ be matched with their acceptable partners in $D\\cup Q$ and in $C\\cup P$, respectively. Furthermore, each agent in $S$ is matched with all of her four acceptable agents in $V$, and similarly, each agent in $T$ is matched with all of his four acceptable agents in $U$. We depict the acceptability graph of $I'$ in Figure \\ref{lex} with regard to the main sets of agents, where the solid edges mark that all of the mutually acceptable pairs between the two corresponding sets belong to $M$ and the dashed edges denote when no edge between the corresponding sets belongs to $M$.\n\n\\begin{figure}\n\\begin{center}\n\\scalebox{0.4}{\\includegraphics{lex.pdf}}\n\\caption{The mutually acceptable pairs between the main sets of agents, where the solid edges denote the projection of $M$}\\label{lex}\n\\end{center}\n\\end{figure}\n\nNow we shall prove that $I$ has an exact 3-cover if and only if matching $M$ is not Pareto-optimal in $I'$.\n\nFirst, let us suppose that we have an exact 3-cover $\\mathcal{Y'}$ in $I$. We create a matching $M'$ in $I'$ that Pareto-dominates $M$ in the following way. 
In $M'$ we match each agent in $A$ to her two acceptable partners in $B$, which implies that we also match each agent in $B$ to his two acceptable partners in $A$. Likewise, we match each agent in $C$ to her two acceptable partners in $D$, which implies that we also match each agent in $D$ to his two acceptable partners in $C$. For the rest of the agents we create $M'$ according to the 3-cover $\\mathcal{Y'}$ as follows. If $Y_j\\in \\mathcal{Y'}$ then we match $s_j$ to the three agents in $Q_j$ and also to an arbitrary agent in $D$, and similarly, we match $t_j$ to the three agents in $P_j$ and also to an arbitrary agent in $C$, and finally we also match $u_{4(j-1)+k}$ to $v_{4(j-1)+k}$ for every $k\\in\\{1, 2, 3, 4\\}$. For those agents $s_j\\in S$ and $t_j\\in T$ where $Y_j\\notin \\mathcal{Y'}$, we keep the edges of $M$. It is easy to see that all the agents that changed partners in $I'$ improved according to their lexicographic preferences, since all of them became matched to their best potential partner in $M'$.\n\nIn the other direction, let us suppose that $M$ is not Pareto-optimal, so there is an alternative matching $M^*$ that Pareto-dominates it. We shall prove that $I$ has an exact 3-cover. First we note that if any agent in $A\\cup B\\cup C\\cup D$ has a different partner in $M^*$ than in $M$ (thus necessarily improves) then the matching between sets $A$ and $B$ and also between $C$ and $D$ must be complete, as we had in $M'$. This also implies that all the agents in both $P$ and $Q$ must get new partners in $M^*$ from the sets $T$ and $S$, respectively. However, this is only possible if at least $n$ agents from both $S$ and $T$ also get new partners from the sets $D$ and $C$, respectively. 
But the agents in $C$ and $D$ have remaining capacity one each, so they must become matched with exactly $n$ agents from each of $T$ and $S$, respectively, so these are the only $2n$ agents from these sets that can change partners and help those in $P$ and $Q$ improve. To summarise, if these agents all improve then we must be able to choose an exact 3-cover by adding $Y_j$ to $\\mathcal{Y'}$ if $s_j$ has improved in $M^*$. What remains is to show that the improvement of any other agent outside the set $A\\cup B\\cup C\\cup D$ would also lead to the same effect. Indeed, if any agent in $U\\cup V$ improves in $M^*$ then her\/his partner in $M$ from $S\\cup T$ must also improve, and this is only possible if the latter agent gets matched with someone from $A\\cup B\\cup C\\cup D$. The same applies if any agent in $P\\cup Q$ would improve. Thus we can conclude that the improvement of any agent in $I'$ implies that all agents in $A\\cup B\\cup C\\cup D$ must improve, and thus we are able to find an exact 3-cover in $I$.\n\\end{proof}\n\n\\subsection{Finding a complete Pareto-optimal matching}\n\nIn this section we show that in the stable fixtures case, the problem of finding a maximum size Pareto-optimal matching is NP-hard. We prove this for complete matchings.\n\n\\begin{Th}\nDeciding whether there exists a complete Pareto-optimal matching for the stable fixtures problem under lexicographic preferences is NP-hard, even if each capacity is at most 4.\n\n\\end{Th}\n\\begin{proof}\nAgain we reduce from the problem \\textsc{exact-3-cover}. We use almost the same construction as in Theorem \\ref{pareto1}, with the only difference that here we substitute each $a_ib_i$ and $a_ib_{i-1}$ edge with a gadget $G_i$ and $H_i$, respectively. Each gadget $G_i$ and $H_i$ is essentially just a copy of Example 3 in Section 2, illustrated in Figure \\ref{pareto}, except that we add a special agent $g_i$ or $h_i$, respectively. 
We will denote the agents corresponding to $x_1,...,x_{10}$ in $G_i$ by $x_1^i,...,x_{10}^i$ and in $H_i$ by $y_1^i,...,y_{10}^i$. An agent $g_i$ has preference $b_i>x_7^i>x_8^i>a_i$ and is added to the end of the preference lists of both $x_7^i$ and $x_8^i$. An agent $h_i$ has preference $a_i>y_7^i>y_8^i>b_{i-1}$ and is added to the end of the preference lists of both $y_7^i$ and $y_8^i$. Finally, we substitute $b_i$ and $b_{i-1}$ in the preference list of $a_i$ by $g_i$ and $h_i$ respectively, and similarly substitute $a_i$ and $a_{i+1}$ in $b_i$'s preference list by $g_i$ and $h_{i+1}$ respectively for each $i=1,...,3n.$\n\nSuppose that there is a complete Pareto-optimal matching $M$ in this instance and suppose that there is an index $i$ such that $a_ig_i$ and $g_ib_i$ are in $M$. Then the matching $M$ restricted to the set of vertices $\\{x_1^i,...,x_{10}^i\\}$ has to be $\\{ x_1^ix_3^i, x_1^ix_4^i, x_2^ix_5^i, x_2^ix_6^i, x_7^ix_8^i, x_9^ix_{10}^i \\}$ by the completeness of $M$, but then if we give each agent apart from $x_1^i,..,x_{10}^i$ the same partners and match $x_1^i,...,x_{10}^i$ such that each gets only their favourite, then we obtain a matching $M'$ that Pareto-dominates $M$, a contradiction. \n\nSimilarly, if $a_ih_i$ and $h_ib_{i-1}$ are in $M$ for some $i$, then $M$ cannot be Pareto-optimal either, a contradiction.\n\nSince a gadget $G_i$ or $H_i$ can be saturated only if $g_i$ or $h_i$ is matched to 0 or 2 agents in them, we obtain that there can be no edge in $M$ that connects an agent $a_i$ or $b_i$ to an agent $g_j$ or $h_j$. But $M$ is a complete matching, hence each agent in $A\\cup B$ is saturated, so all edges between $A$ and $Q\\cup D$ have to be included, and also all edges between $B$ and $C\\cup P$. Then, since every agent in $T\\cup S$ is saturated too, all edges between $U$ and $T$ and all edges between $S$ and $V$ are also included in $M$, so $M$ is basically the same matching that we constructed in Theorem \\ref{pareto1}. 
\n\nNow if there were an exact 3-cover in $I$, then we could construct a matching $M'$ that Pareto-dominates $M$, implying, in the same way as before, that there can be no complete Pareto-optimal matching, with the addition that the agents in $A\\cup B$ obtain their partners in $\\cup_ig_i \\cup \\cup_ih_i$ instead of each other. This way each agent $g_j$ and $h_j$ obtains their best partner, so they are strictly better off, too. Finally, we also match each agent in the remaining parts of $\\cup_iG_i \\cup \\cup_iH_i$ to their best choices. So the existence of an exact 3-cover implies that no complete Pareto-optimal matching exists. \n\nIn the other direction, if there is no complete Pareto-optimal matching, then the matching $M$ constructed above is not Pareto-optimal, so it is dominated by a matching $M'$. Again, the same proof works to show that there has to be a 3-cover of the original instance. The only additional thing we have to check in this case is that if any agent from a gadget $G_i$ or $H_i$ improves their position, then so does every agent in $A\\cup B\\cup C\\cup D$. But this is only possible if it gets its first choice in $M'$, which implies that every agent in the gadget obtains its best partner, so $g_i$ or $h_i$ improves their position too, which leads to every edge between $A\\cup B$ and $\\cup_ig_i\\cup \\cup_ih_i$ being included in $M'$, so the proof is completed.\n\\end{proof}\n\n\\subsection{Checking the strong core property of a matching}\n\n\\begin{Th}\nDeciding whether a given matching is in the strong core of a many-to-many stable matching problem under lexicographic preferences is co-NP-complete.\n\\end{Th}\n\n\\begin{proof}\nThe problem is in co-NP, since checking that $M$ can be blocked by a coalition $S$ with an alternative matching $M_S$ can be done efficiently. 
We reduce from the problem of checking Pareto-optimality for the many-to-many stable matching problem under lexicographic preferences, which we showed to be co-NP-complete in Theorem \\ref{pareto1}. Suppose that we have such an instance $I$, thus a many-to-many market with linear orders of the agents, and a matching $M$ whose Pareto-optimality is to be checked. We add two new agents $a^*$ and $b^*$ to the market, one to each side, such that they have unbounded capacities, they find everyone acceptable, and, most importantly, everyone in $I$ puts either $a^*$ or $b^*$ as her top choice in $I'$. Let us also increase the capacity of all the agents in $I$ by one, and then we create $M'$ as an extension of $M$ in the following way: we add $a^*b^*$ and we match all the agents in $I$ to either $a^*$ or $b^*$ (according to the side they belong to). Now, it is easy to see that $M$ is Pareto-optimal in $I$ if and only if $M'$ is in the strong core in $I'$, since a blocking coalition in $I'$ must involve every agent in $I'$ and the alternative matching of the blocking coalition must keep all the pairs in $M'\\setminus M$.\n\\end{proof}\n\n\\section{Tractable cases}\\label{sec:easiness}\n\nIn this section we show that although many natural problems related to the strong core and Pareto-optimality are NP-hard, we can still find some reasonable solutions efficiently.\n\n\\subsection{Near feasible and fractional matchings with TTC}\n\nWe give two algorithms that are heavily inspired by the Top Trading Cycle (TTC) algorithm of Gale \\cite{ShapleyScarf1974}. One computes a matching $M$ that violates the original capacity constraints by at most one, but is guaranteed to be a strong core solution for this slightly modified instance. We call such a solution a \\textit{near-feasible strong core solution}.\nThe other computes a half-matching $M$ that is guaranteed to be in the strong core of fractional solutions of the original instance. 
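Before the formal description, here is a minimal Python sketch of the first, near-feasible procedure; the encoding (agents as strings, `pref` mapping each agent to her preference list, `cap` to her capacity) and the helper names are ours, and the sketch is an illustration of the cycle-peeling idea rather than the authors' implementation.

```python
def best_unmatched(pref, M, alive, v):
    """p_U^M(v): v's favourite agent among `alive` not yet matched to v."""
    for u in pref[v]:
        if u in alive and u != v and frozenset({u, v}) not in M:
            return u
    return None

def near_feasible_core(pref, cap):
    M, k = set(), dict(cap)
    while True:
        alive = {v for v in k if k[v] >= 1}
        succ = {v: best_unmatched(pref, M, alive, v) for v in alive}
        succ = {v: u for v, u in succ.items() if u is not None}
        # keep only vertices that can lie on a cycle: repeatedly discard
        # vertices whose pointer leads outside the remaining set
        core = set(succ)
        changed = True
        while changed:
            changed = False
            for v in list(core):
                if succ[v] not in core:
                    core.discard(v)
                    changed = True
        if not core:
            return M
        # follow the pointers from any remaining vertex until a vertex
        # repeats: this closes a directed cycle C_i
        v, seen = next(iter(core)), []
        while v not in seen:
            seen.append(v)
            v = succ[v]
        cycle = seen[seen.index(v):]
        for a, b in zip(cycle, cycle[1:] + cycle[:1]):
            M.add(frozenset({a, b}))
        # capacities drop by 1 on a 2-cycle (one new edge per agent) and
        # by 2 on longer cycles (two new edges per agent); k may reach -1,
        # which is exactly the "violate by at most one" slack
        dec = 1 if len(cycle) == 2 else 2
        for a in cycle:
            k[a] -= dec
```

The half-integral variant sketched by Algorithm 2 below differs only in how 2-cycles, and longer cycles through capacity-one agents, update the matching.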
\n\nWe define fractional matchings as $M:E\\to [0,1]$ functions, such that $\\sum_{u:uv\\in E}M(uv)\\le k(v)$ for every $v\\in V$. Now, if $S$ and $T$ are fractional matchings, then $S\\succ_a T$ if and only if the fractional characteristic vectors (so $\\chi_S$ has $M(av)$ at its $v$ coordinate) satisfy $\\chi_S>\\chi_T$ lexicographically.\n\nThe algorithms described here not only work for the many-to-many stable matching case, but also for the non-bipartite stable fixtures problem. Moreover, both algorithms run in linear time in the number of edges.\n\nThe main idea of the algorithms is very simple: in each step, we create a directed graph $D_i=(V_i,A_i)$, such that the vertices of $D_i$ are the agents who have remaining capacities at the $i$-th iteration, and there is a directed edge from $a$ to $b$ if $b$ is $a$'s best choice from the vertices of $D_i$ who are not yet matched to $a$. Then we search for a directed cycle $C_i$ in $D_i$ and add the edges of $C_i$ to the matching. \n\nNow we describe the algorithms formally. Let $p_U^M(v)$ denote the best agent in $v$'s preference list among the agents in $U$ who are not matched to $v$ in $M$. Let $k(v)$ denote the capacity of $v$. Also we use $E(C_i)$ as the edges corresponding the the directed edges of $A(C_i) $ in the original graph $G$.\nWhen we use the notation $\\frac{1}{2}e$, we mean that we only add the edge $e$ with $\\frac{1}{2}$ weight to $M$.\n\n\\begin{algorithm}\\caption{Near-feasible strong core}\n\\begin{algorithmic}\n\\State Set $M=\\emptyset$\n\\State $V_0=N$, $A_0=\\{ vp_V^M(v) :\\; v\\in V_0\\}$,\n\\While{$A_i\\ne \\emptyset$}\n\\State Find a directed cycle $C_i$ in $D_i$. 
\n\\State For each $e\\in E(C_i)$: $M:=M\\cup e$.\n\\If{$|C_i|=2$}\n\\State For each $v\\in V(C_i):$ $k(v)=k(v)-1$\n\\Else\n\\State For each $v\\in V(C_i):$ $k(v)=k(v)-2$\n\\EndIf\n\\State $V_{i+1}=\\{ v\\in V:\\; k(v)\\ge 1\\}$\n\\State $A_{i+1}=\\{ vp_{V_{i+1}}^M(v):\\; v\\in V_{i+1}\\}$\n\\State $i=i+1$\n\\EndWhile\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{algorithm}\\caption{Half-integer strong core}\n\\begin{algorithmic}\n\\State Set $M=\\emptyset$, $i=0$\n\\State $V_0=N$, $A_0=\\{ vp_V^M(v) :\\; v\\in V_0\\}$\n\\While{$A_i\\ne \\emptyset$}\n\\State Find a directed cycle $C_i$ in $D_i$.\n\\If{$|C_i|=2$}\n\\State Let $e$ be the unique edge in $E(C_i)$: $M:=M\\cup e$.\n\\State For each $v\\in V(C_i):$ $k(v)=k(v)-1$\n\\Else\n\\If{$\\exists v\\in V(C_i): k(v)=1$}\n\\State For each $e\\in E(C_i)$: $M:=M\\cup \\frac{1}{2}e$.\n\\State For each $v\\in V(C_i):$ $k(v)=k(v)-1$\n\\Else\n\\State For each $e\\in E(C_i)$: $M:=M\\cup e$.\n\\State For each $v\\in V(C_i):$ $k(v)=k(v)-2$\n\\EndIf\n\\EndIf\n\\State $V_{i+1}=\\{ v\\in V:\\; k(v)\\ge 1\\}$\n\\State $A_{i+1}=\\{ vp^M_{V_{i+1}}(v):\\; v\\in V_{i+1}\\}$\n\\State $i=i+1$\n\\EndWhile\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{Th}\n\\label{nearf}\nAlgorithm 1 produces, in $\\mathcal{O}(|E|)$ time, a matching $M$ for the stable fixtures problem that is in the strong core of the instance with modified capacities $k'(v)=\\max \\{ k(v),|M(v)|\\} \\le k(v)+ 1$.\n\\end{Th}\n\\begin{proof}\nIn each iteration we add at least one edge to $M$, so the algorithm terminates in at most $|E|$ iterations.\n\nAlso, we add at most two edges containing a given vertex $v$ in one step, and only at vertices with $k(v)\\ge 1$, so $|M(v)|\\le k(v)+1$.\n\nFinally, we show that $M$ is in the strong core of this new instance.
First of all, it is easy to see that if we run the algorithm with these new capacities we get the same output $M$, so we can suppose that the algorithm never violates the capacity constraints during its execution.\n\nAssume that there is a blocking coalition $U$ for $M$ and let $M(U)$ be the matching for the vertices in $U$ that blocks $M$. Let $C_i$ be the first cycle that contains a vertex of $U$, but also an edge that is not in $M(U)$, and let that vertex be $u$. Since $M(U)\\not\\subset M$, such a cycle exists. Then, by the fact that $u$ must have an at least as good partner set in $M(U)$, we get that the edge corresponding to the arc $uw$ starting from $u$ in $C_i$ is in $M(U)$. ($u$ cannot get a better partner than $w=p^{M_i}_{V_i}(u)$ that she does not already have in $M$, because then there would be a cycle $C_j$ before $C_i$ that contains a vertex in $U$ but not every edge of $E(C_j)$ is in $M(U)$, a contradiction.) This also means that $w\\in U$, and by similar reasoning, the edge corresponding to the arc starting from $w$ is in $M(U)$, too; continuing this argument we get that $E(C_i)\\subset M(U)$, a contradiction.\n\\end{proof}\n\\begin{Th}\nFor the stable fixtures problem, Algorithm 2 produces in $\\mathcal{O}(|E|)$ time a half-matching $M$ that is in the strong core of fractional matchings.\n\\end{Th}\n\\begin{proof}\nThe running time is $\\mathcal{O}(|E|)$, because in each iteration we add at least one edge fully to $M$, or two half edges.\n\nThe capacity constraints are obviously satisfied during the algorithm.\n\nTo show that the half-matching $M$ produced by the algorithm is in the strong fractional core, take the first cycle $C_i$ that has an agent from a blocking coalition $U$, but also an edge that has less weight in $M(U)$ than in $M$. Here $M(U)$ is the fractional matching that the agents of $U$ obtain among themselves to get a better solution.\nAgain, since $M$ restricted to $U$ cannot be $M(U)$, such a cycle exists.
Let $u\\in C_i\\cap U$. The situation of $u$ has improved in the fractional matching $M(U)$, which means that the arc $uw$, $w=p_U^{M_i}(u)$, leaving $u$ must be in $M(U)$ with as much weight as possible (by the choice of $C_i$), and so must every edge of $C_i$. But this yields that all edges of $E(C_i)$ have the same weight in $M$ and $M(U)$, a contradiction.\n\\end{proof}\n\n\\subsection{Maximum size Pareto-optimal matchings}\n\nFinally, we give an efficient algorithm for computing a maximum cardinality Pareto-optimal matching for the many-to-many stable matching problem. The techniques we use here are very similar to the ones in \\cite{cechlarova2014pareto}, where Pareto-optimal matchings were investigated in the case when only one side of the agents has preferences.\n\nNow we state our algorithm. We will call the two classes of agents men and women, respectively. Denote the set of men as $U=\\{ u_1,...,u_n\\}$ and the set of women as $W=\\{ w_1,...,w_m\\}$.\n\n\\begin{algorithm}\\caption{Maximum size Pareto-optimal matching}\n\\begin{algorithmic}\n\\State Let $M:=\\emptyset$\n\\For{$i=1,..,n$}\n\\State $l=1$\n\\While{$|M(u_i)|0$ are called the frame bounds, and $H$ denotes the underlying Hilbert space. If $A=B$, then the frame is called a tight frame.\nIn \\eqref{eq:Xc}, another family of functions $\\{\\tilde{\\blmath b}_i\\}_{i\\in \\mathbb{N}}$ forms the {\\em dual} frame\nsatisfying the equality:\n$$\\sum_{i\\in \\mathbb{N}} \\langle{\\blmath b}_i, \\tilde{\\blmath b}_i \\rangle = 1.
$$\nIf the frame basis is chosen in a multiresolution manner, it is called a {\\em framelet}.\nIn compressed sensing, \nthe frames are usually chosen from wavelet transforms, learned dictionaries, or other redundant bases,\nwhere a sparse combination of a subset of the frame elements can represent the unknown signal with high accuracy.\n }\n\n\\add{Even for recent approaches such as low-rank structured Hankel matrix completion, the same frame interpretation exists.\nSpecifically, for a given low-rank Hankel matrix $\\boldsymbol{\\mathbb{H}}_{[d]}^{[n]}(\\blmath{x})$,\nlet ${\\boldsymbol {\\Phi}} =[{\\boldsymbol{\\phi}}_1,\\cdots,{\\boldsymbol{\\phi}}_n] \\in {\\mathbb R}^{n\\times n}$ and ${\\boldsymbol {\\Psi}}=[{\\boldsymbol{\\psi}}_1,\\cdots, {\\boldsymbol{\\psi}}_d] \\in {\\mathbb R}^{d\\times d}$ denote arbitrary basis matrices\nthat are multiplied to the left and right\nof the Hankel matrix, respectively.\nYin et al.\nderived the following signal expansion,\nwhich they called the {\\em convolution framelet} expansion%\n~\\cite{yin2017tale}:\n\\begin{eqnarray}\\label{eq:frame0}\n\\blmath{x} \n&=& \\frac{1}{d} \\sum_{i=1}^{n}\\sum_{j=1}^d\\langle \\blmath{x}, {\\boldsymbol{\\phi}}_i \\circledast {\\boldsymbol{\\psi}}_j \\rangle {\\boldsymbol{\\phi}}_i \\circledast {\\boldsymbol{\\psi}}_j,\n\\end{eqnarray}\nimplying that\n$\\{{\\boldsymbol{\\phi}}_i \\circledast {\\boldsymbol{\\psi}}_j\\}_{i,j=1}^{n,d}$\nis a tight frame.}\n\n\\add{These observations imply that an efficient and concise signal representation is important for the success of\nmodern image reconstruction approaches, and specific algorithms may differ in their choice of the frame basis\nand the specific method used to identify the sparse subset that concisely represents the signal.}\n\n\\add{From this perspective, classical approaches such as\nCS, structured low-rank Hankel matrix completion, etc.
have two fundamental limitations.\nFirst, the choice of the underlying frame (and its dual)\nis based on top-down design principles.\nFor example, most of wavelet theory has been developed\naround the edge-adaptive basis representations\nsuch as curvelet \\cite{candes2006fast}, contourlet \\cite{do2005contourlet}, etc.,\nwhose design principle is based on top-down mathematical modeling.\nMoreover, the search for the sparse index set $I$\nfor the case of compressed sensing\nis usually done using a computationally expensive optimization framework.\nThe following section shows that these limitations\nof classical representation learning approaches\ncan be largely overcome by deep learning approaches.\n}\n\n\n\\begin{figure*}[!t]\n\t\\center\n\n\t{\\includegraphics[width =0.8\\textwidth]{ED-CNN.pdf}}~~\n\n\t\\caption{An example of encoder-decoder CNN. \\add{The encoder is composed of the first $\\kappa$ layers, while the latter $\\kappa$ layers form a decoder network.} }\n\t\\label{fig:EDCNN}\n\n\\end{figure*}\n\n\n\\subsubsection{\\add{Deep Neural Networks as Combinatorial Representation Learning}}\n\n\nThe recent theory of deep convolutional framelets\nclaims that a deep neural network\ncan be interpreted\nas a framelet representation,\nwhose frame basis is learned from the training data~\\cite{ye:18:dcf}.\n\\add{Moreover, a recent follow-up study~\\cite{ye2019understanding} showed how\nthis frame representation can be automatically adapted to various input signals in a real-time manner.}\n\nTo understand these findings, \nconsider \\add{the} symmetric encoder-decoder CNN in Fig.~\\ref{fig:EDCNN},\nwhich \\add{has been} used \nfor image reconstruction problems~\\cite{jin2017deep,han2017framing}.\nSpecifically, the encoder network maps a given input signal\n$\\blmath{x}\\in{\\boldsymbol{\\mathcal X}}\\subset {\\mathbb R}^{d_0}$\nto a feature space $\\blmath{z} \\in {\\boldsymbol{\\mathcal Z}}\\subset {\\mathbb R}^{d_\\kappa}$,\nwhereas the decoder takes this feature 
map as an input,\nprocesses it,\nand produces an output \n$\\blmath{y} \\in{\\boldsymbol{\\mathcal Y}}\\subset {\\mathbb R}^{d_0}$.\nAt the $l$th layer, $m_l$, $q_l$, and $d_l:=m_lq_l$ denote the dimension of the signal,\nthe number of filter channels,\nand the total feature vector dimension, respectively.\nWe consider a symmetric configuration,\nwhere both the encoder and decoder have the same number of layers, say $\\kappa$;\nand the encoder layer ${{\\mathcal E}}^l$ and the decoder layer ${{\\mathcal D}}^l$ are symmetric:\n\\begin{eqnarray*}\n{{\\mathcal E}}^l:{\\mathbb R}^{d_{l-1}} & \\mapsto& {\\mathbb R}^{d_l}, \\\\\n{{\\mathcal D}}^l:{\\mathbb R}^{d_{l}} &\\mapsto& {\\mathbb R}^{d_{l-1}}.\n\\end{eqnarray*}\nThe\n$j$th channel output\nfrom the $l$th layer encoder\ncan be represented by a multi-channel convolution operation~\\cite{ye2019understanding}:\n \\begin{eqnarray}\\label{eq:encConv}\n\\blmath{x}_j^l = \\sigma\\left({\\boldsymbol {\\Phi}}^{l\\top} \\sum_{k=1}^{q_{l-1}}\\left(\\blmath{x}_k^{l-1}\\circledast \\overline {\\boldsymbol{\\psi}}_{j,k}^l\\right)\\right) ,\n\\end{eqnarray}\nwhere\n$\\blmath{x}_k^{l-1}$ denotes the $k$th input channel signal, \n$\\overline{\\boldsymbol{\\psi}}_{j,k}^l\\in {\\mathbb R}^r$ denotes the $r$-tap convolutional kernel\nthat is convolved with the $k$th input channel\nto contribute to the $j$th channel output,\nand ${\\boldsymbol {\\Phi}}^{l\\top}$ is the pooling operator.\nHere, $\\overline {\\blmath v}$ is the flipped version of the vector $\\blmath v $ such that\n$\\overline v[n]= v[-n]$ with the periodic boundary condition, and $\\circledast$ is the circular convolution.
\n(Using periodic boundary conditions\nsimplifies the mathematical treatments.)\nSimilarly,\nthe $j$th channel decoder layer convolution output is given by \\cite{ye2019understanding}\n \\begin{eqnarray}\\label{eq:decConv}\n\\tilde\\blmath{x}_j^{l-1} = \\sigma\\left(\\sum_{k=1}^{q_{l}}\\left(\\tilde{\\boldsymbol {\\Phi}}^l\\tilde\\blmath{x}^{l}_k\\circledast {\\tilde{\\boldsymbol{\\psi}}_{j,k}^l}\\right)\\right) , \n\\end{eqnarray}\nwhere $\\tilde{\\boldsymbol {\\Phi}}^l$ denotes the unpooling operator,\n \\add{$\\tilde\\blmath{x}_k^{l}$ denotes the $k$th input channel signal for the decoder, and \n$\\tilde{\\boldsymbol{\\psi}}_{j,k}^l\\in {\\mathbb R}^r$ denotes the $r$-tap convolutional kernel\nthat is convolved with the $k$th input channel\nto contribute to the $j$th channel output.}\n\nBy concatenating the multi-channel signal in column direction as\n$$\n\\blmath{x}^l:=\\begin{bmatrix} \\blmath{x}^{l\\top}_1 & \\cdots & \\blmath{x}^{l\\top}_{q_{l}} \\end{bmatrix}^\\top,\n$$\nthe encoder and decoder convolution in \\eqref{eq:encConv} and \\eqref{eq:decConv}\ncan be represented using matrix notation:\n\\begin{eqnarray}\\label{eq:ED}\n \\blmath{x}^l=\\sigma({\\blmath E}^{l\\top} \\blmath{x}^{l-1}),& \\tilde \\blmath{x}^{l-1}=\\sigma({\\blmath D}^l \\tilde\\blmath{x}^{l})\n\n\\end{eqnarray}\nwhere $\\sigma(\\cdot)$ denotes the element-wise rectified linear unit (ReLU) and \n \\begin{eqnarray}\\label{eq:El}\n\\blmath{E}^l= \\begin{bmatrix} \n{\\boldsymbol {\\Phi}}^l\\circledast {\\boldsymbol{\\psi}}^l_{1,1} & \\cdots & {\\boldsymbol {\\Phi}}^l\\circledast {\\boldsymbol{\\psi}}^l_{q_l,1} \\\\\n \\vdots & \\ddots & \\vdots \\\\\n{\\boldsymbol {\\Phi}}^l\\circledast {\\boldsymbol{\\psi}}^l_{1,q_{l-1}} & \\cdots & {\\boldsymbol {\\Phi}}^l\\circledast {\\boldsymbol{\\psi}}^l_{q_{l},q_{l-1}}\n \\end{bmatrix}\n \\end{eqnarray}\n \\begin{eqnarray}\\label{eq:Dl}\n {\\blmath D}^l= \\begin{bmatrix} \n\\tilde{\\boldsymbol {\\Phi}}^l\\circledast 
\\tilde{\\boldsymbol{\\psi}}^l_{1,1} & \\cdots & \\tilde{\\boldsymbol {\\Phi}}^l\\circledast \\tilde{\\boldsymbol{\\psi}}^l_{1,q_l} \\\\\n \\vdots & \\ddots & \\vdots \\\\\n\\tilde{\\boldsymbol {\\Phi}}^l\\circledast \\tilde{\\boldsymbol{\\psi}}^l_{q_{l-1},1} & \\cdots & \\tilde{\\boldsymbol {\\Phi}}^l\\circledast \\tilde{\\boldsymbol{\\psi}}^l_{q_{l-1},q_{l}}\n \\end{bmatrix}\n \\end{eqnarray}\nand\n$${\\boldsymbol {\\Phi}}^l=\\begin{bmatrix} {\\boldsymbol{\\phi}}^l_1 & \\cdots & {\\boldsymbol{\\phi}}^l_{m_l} \\end{bmatrix},$$\n\\begin{eqnarray*\n\\begin{bmatrix} {\\boldsymbol {\\Phi}}^l \\circledast {\\boldsymbol{\\psi}}_{i,j}^l \\end{bmatrix} :=\\begin{bmatrix} {\\boldsymbol{\\phi}}^l_1 \\circledast {\\boldsymbol{\\psi}}_{i,j}^l & \\cdots & {\\boldsymbol{\\phi}}^l_{m_l} \\circledast {\\boldsymbol{\\psi}}_{i,j}^l\\end{bmatrix} \\label{eq:defconv}\n.\\end{eqnarray*}\n\\add{Then, one of the most important observations is that \nthe output of the encoder-decoder CNN} can be represented\nas follows \\cite{ye2019understanding}:\n\\begin{eqnarray}\\label{eq:basis}\n\\blmath{y}\n&=& \\sum_{i} \\langle {\\blmath b}_i(\\blmath{x}), \\blmath{x} \\rangle \\tilde {\\blmath b}_i(\\blmath{x}),\n\\end{eqnarray}\nwhere $ {\\blmath b}_i(\\blmath{x})$ and $\\tilde {\\blmath b}_i(\\blmath{x})$\ndenote the $i$th columns of the following frame basis and its dual:\n\\begin{eqnarray}\n\\blmath{B}(\\blmath{x})&=& {\\blmath E}^1{\\boldsymbol {\\Sigma}}^1(\\blmath{x}){\\blmath E}^2 \\cdots {\\boldsymbol {\\Sigma}}^{\\kappa-1}(\\blmath{x}){\\blmath E}^{\\kappa},~\\quad \\label{eq:Bc}\\\\\n\\tilde \\blmath{B}(\\blmath{x}) &=& {\\blmath D}^1\\tilde{\\boldsymbol {\\Sigma}}^1(\\blmath{x}){\\blmath D}^2 \\cdots \\tilde{\\boldsymbol {\\Sigma}}^{\\kappa-1}(\\blmath{x}){\\blmath D}^{\\kappa}, \\label{eq:tBc}\n\\end{eqnarray}\nand\n${\\boldsymbol {\\Sigma}}^l(\\blmath{x})$ and $\\tilde{\\boldsymbol {\\Sigma}}^l(\\blmath{x})$ denote diagonal matrices with 0 and 1 values that are determined by the 
ReLU output\nin the previous convolution steps.\nA similar basis representation holds for encoder-decoder CNNs with skip connections. For more details, see~\\cite{ye2019understanding}.\n\n\n\n\\add{In the absence of ReLU nonlinearities, the authors in \\cite{ye:18:dcf,ye2019understanding}\nshowed that,\nassuming that the pooling and unpooling operators\nand the filter matrices satisfy appropriate frame conditions for each $l$,\nthe representation \\eqref{eq:basis} is indeed a frame representation of $\\blmath{x}$\nas in \\eqref{eq:Xc},\nensuring perfect signal reconstruction.\nHowever,\nin neural networks the input and output should differ,\nso the perfect reconstruction condition is not of practical interest.\nFurthermore, the signal representation in~\\eqref{eq:basis}\nshould generalize well\nfor various inputs\nrather than only for the specific inputs seen at the training phase.}\n\n\\add{\nIndeed,\n\\cite{ye2019understanding} shows that\nthe explicit dependence on the input $\\blmath{x}$ in \\eqref{eq:Bc} and \\eqref{eq:tBc}\ndue to the ReLU nonlinearity solves the riddle, and\nCNN generalizability\ncomes from the combinatorial nature of the expansion in \\eqref{eq:basis}\ndue to the ReLU.\n}\n\n\n\\begin{figure}[ht!]\n\t\\center\n\t{\\includegraphics[width =0.4\\textwidth]{partition.pdf}}~~\n\t\\caption{\\add{A high-level illustration of input space partitioning for a three-layer neural network with two filter channels in each layer. Input images in each partition share the same linear representation, but not across different partitions.}}\n\t\\label{fig:partition}\n\n\\end{figure}\n\n\\add{Specifically, since the nonlinearity is applied after the convolution operation, the on-and-off activation pattern of each ReLU\ndetermines a binary partition of the feature space at each layer across the hyperplane that is determined by the convolution.
Accordingly,\n in deep neural networks, \n the input space ${\\boldsymbol{\\mathcal X}}$ is partitioned into multiple non-overlapping\nregions, as shown in Fig.~\\ref{fig:partition}, so that input images in each region share the same linear representation, but not across\npartitions.\nThis implies that\ntwo different input images in Fig.~\\ref{fig:partition} are automatically switched to\ntwo distinct linear representations.}\n\n\\add{ This input adaptivity\nprovides an important computational advantage over the classical representation learning approaches that rely on computationally expensive optimization techniques.\nMoreover, the representations are entirely\ndetermined by the filter sets\nthat are learned from the training data set, unlike\nthe classical representation learning approaches, which are designed from\nmathematical principles.\nFurthermore,\nthe number of input space partitions and the associated distinct linear representations\n increases exponentially\nwith the network depth, width, and skip connections, thanks to the combinatorial\nnature of ReLU nonlinearities~\\cite{ye2019understanding}.
\nThis exponentially large {\\em expressivity} of the neural network\nis another important advantage, which, in combination with the aforementioned {\\em adaptivity}, may explain the origin\nof the success of deep neural networks for image reconstruction.}\n\n\n\n\n\n\\subsection{Partially Data-Adaptive Sparsity-Based Methods}\n\nWhile early reconstruction methods such as in CS MRI \nused sparsity in known transform domains\nsuch as wavelets~\\cite{lustig2007sparse},\nthe total variation domain,\ncontourlets~\\cite{qu2010iterative}, etc.,\nlater works proposed partially data-adaptive sparsity models by incorporating directional information of patches, block matching, etc., during reconstruction.\n\n\\addnew{A patch-based directional wavelets (PBDW) scheme was proposed for MRI in~\\cite{qu2012undersampled},}\nwherein the regularizer was based on analysis sparsity and was the sum of the $\\ell_1$ norms\nof each optimally (adaptively) rearranged and transformed (by fixed 1D Haar wavelets) image patch.\nThe patch rearrangement or permutation involved rearranging pixels parallel to a certain geometric direction, approximating patch rotation.\nThe best permutation for each patch\nfrom among a set of pre-defined permutations\nwas pre-computed based on \\addnew{initial reconstructions}\nto minimize the residual between the transformed permuted patch and its \\addnew{thresholded\nversion.}\n\\addnew{An improved\nreconstruction method was proposed in~\\cite{ning2013magnetic},}\nwhere the optimal permutations were computed for patches extracted\nfrom the subbands in the 2D wavelet domain\n\\addnew{(a shift-invariant discrete wavelet transform is used)} of the image.\n\\addnew{A recent work~\\cite{zhan2016fast} proposed a different \\add{effective modification} of the PBDW scheme,}\nwherein a unitary matrix is\n\\addnew{adapted to sparsify the patches\ngrouped with a common (optimal) permutation.\nIn this case, the analysis sparsity penalty during reconstruction used the
$\\ell_1$ norms of patches\ntransformed by the adapted unitary matrices (one per group of patches).}\n\n\nA different \\add{fast and effective \n\\addnew{method} (\\addnew{patch-based} nonlocal operator or PANO)} was proposed in~\\cite{qu2014magnetic},\nwherein for each patch, a small group of patches most similar to it was pre-estimated\n(called \\emph{block matching}),\nand the regularizer during reconstruction\npenalized the sparsity of the groups of \n\\addnew{patches in a known transform domain.}\n\\add{Another reconstruction scheme based on adaptive clustering of patches\nwas applied to MRI in~\\cite{akcakaya:11:lds}.}\nAll of the aforementioned methods\n\\add{are also} \\addnew{quite} related to the recent transform learning-based methods\ndescribed in Section~\\ref{sec:datadriven:transformlearning},\nwhere the sparsifying operators are fully adapted in an optimization framework.\n\n\\subsection{Synthesis Dictionary Learning-Based Approaches for Reconstruction} \\label{sec:datadriven:dictionarylearning}\n\n\nAmong the learning-based approaches that have shown promise for medical image reconstruction, one popular class of methods exploits \\emph{synthesis dictionary learning}.\n\n\\subsubsection{Synthesis Dictionary Model}\n\nAs briefly discussed in Section~\\ref{sec:sparsity},\nthe synthesis model suggests that a signal can be approximated\nby a sparse linear combination of atoms or columns of a dictionary,\ni.e., the signal lives approximately in a subspace spanned by a few dictionary atoms.\nBecause different signals may be approximated with different subsets of dictionary columns,\nthe model is viewed as a union of subspaces model~\\cite{vidal11}.\n\nIn imaging, the synthesis model is often applied to image patches (see Fig.~\\ref{fig:synthesisdictionarymodel}) or image blocks $\\P_j \\blmath{x}$ as $\\P_j \\blmath{x} \\approx \\blmath{D} \\mathbf{z}_j$, with $\\P_j$ denoting the operator that extracts a vectorized patch (with $n$ pixels) of $\\blmath{x}$,
$\\blmath{D} \\in \\mathbb{C}^{n \\times K}$ denoting a synthesis dictionary (in general complex-valued), and $\\mathbf{z}_j \\in \\mathbb{C}^{K}$ being the sparse representation or code (with many zeros) for the patch $\\P_j \\blmath{x}$.\nWhile dictionaries based on the discrete cosine transform (DCT), etc., can be used to model image patches, much better representations can be obtained by adapting the dictionaries to data.\nThe learning of synthesis dictionaries has been explored in many works~\\cite{ols96,Aharon06,Mai10} and shown to be promising in inverse problem settings~\\cite{elad06,Mai08,sai2011dlmri}.\n\n\\subsubsection{Dictionary Learning for MRI}\n\n\\addnew{A dictionary learning-based method for MRI (DL-MRI) was proposed in~\\cite{sai2011dlmri},}\nwhere the image and the dictionary for its patches are simultaneously estimated\nfrom limited measurements.\nThe approach, also known as blind compressed sensing (BCS)~\\cite{eldar2011bcs},\ndoes not require training data\nand learns a dictionary that is highly adaptive to the underlying image content.\nHowever, the optimization problem is highly nonconvex; it is formulated as follows:\n\\begin{align}\n\\nonumber & \\min_{\\blmath{x},\\blmath{D},\\blmath{Z}} \\frac{1}{2} \\left \\| \\blmath{A} \\blmath{x} - \\blmath{y} \\right \\|_{2}^{2} + \\beta \\sum_{j=1}^{N}\\left \\| \\P_{j} \\blmath{x} - \\blmath{D} \\mathbf{z}_{j} \\right \\|_{2}^{2} \\\\\n & \\;\\;\\;\\; \\text{s.t.} \\;\\; \\begin{Vmatrix}\n\\mathbf{z}_j\n\\end{Vmatrix}_{0} \\leq s, \\; \\begin{Vmatrix}\n\\mathbf{d}_i\n\\end{Vmatrix}_{2} = 1, \\; \\forall \\, i,j.
\\label{dlmri}\n\\end{align}\nThis corresponds to using a dictionary learning regularizer (weighted by $\\beta>0$) of the following form:\n\\begin{align}\n\\nonumber R(\\blmath{x}) = & \\min_{\\blmath{D},\\blmath{Z}} \\sum_{j=1}^{N}\\left \\| \\P_{j} \\blmath{x} - \\blmath{D} \\mathbf{z}_{j} \\right \\|_{2}^{2} \\\\\n& \\;\\; \\text{s.t.} \\;\\; \\begin{Vmatrix}\n\\mathbf{z}_j\n\\end{Vmatrix}_{0} \\leq s, \\; \\begin{Vmatrix}\n\\mathbf{d}_i\n\\end{Vmatrix}_{2} = 1, \\; \\forall \\, i,j, \\label{dlmrireg}\n\\end{align}\nwhere $\\blmath{Z}$ is a matrix whose columns are the sparse codes $\\mathbf{z}_j$ that each have at most $s$ non-zeros, and the $\\ell_0$ ``norm'' counts the total number of nonzeros in a vector or matrix.\nThe columns $\\mathbf{d}_i$ of $\\blmath{D}$ are constrained to have unit norm, as otherwise $\\mathbf{d}_i$ could be scaled arbitrarily along with a corresponding inverse scaling of the $i$th row of $\\blmath{Z}$; the objective is invariant to this \\emph{scaling ambiguity}.\n\n\\begin{figure}[!t]\n\\hspace{-0.0in}\\includegraphics[width=0.47\\textwidth]{dictionarymodel.pdf}\n\\caption{The synthesis dictionary model for image patches: overlapping patches $\\P_j \\blmath{x}$ of the image $\\blmath{x}$ are assumed to be approximated by sparse linear combinations of columns of the dictionary $\\blmath{D}$, i.e., $\\P_j \\blmath{x} \\approx \\blmath{D} \\mathbf{z}_j$, where $\\mathbf{z}_j$ has several zeros (denoted with white blocks above). }\n\\label{fig:synthesisdictionarymodel}\n\\end{figure}\n\n\nProblem~\\eqref{dlmri} was optimized in~\\cite{sai2011dlmri} by alternating between solving for the image $\\blmath{x}$ (image update step) and optimizing the dictionary and sparse coefficients (dictionary learning step).\n\\add{In} specific cases such as single-coil \\add{Cartesian MRI,} \n\\add{the image update step is solved} in closed-form using FFTs.
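To make the closed-form image update concrete, it can be written out explicitly. The following is a sketch under stated assumptions (a unitary DFT $\\blmath{F}$, a single-coil Cartesian system $\\blmath{A}=\\blmath{M}\\blmath{F}$ with binary sampling mask $\\blmath{M}$, and periodically positioned overlapping patches so that $\\sum_j \\P_j^H \\P_j = n \\blmath{I}$); normalization conventions vary, and the exact form used in~\\cite{sai2011dlmri} may differ in constant factors. Setting the gradient of the image update cost in \\eqref{dlmri} to zero gives the normal equation
$$\\Big(\\blmath{A}^H\\blmath{A} + 2\\beta \\sum_{j=1}^{N} \\P_j^H \\P_j\\Big)\\blmath{x} = \\blmath{A}^H\\blmath{y} + 2\\beta \\sum_{j=1}^{N} \\P_j^H \\blmath{D}\\mathbf{z}_j,$$
whose left-hand side is diagonalized by $\\blmath{F}$ under the above assumptions, so each Fourier coefficient of the solution is obtained by a pointwise division:
$$(\\blmath{F}\\blmath{x})[k] = \\frac{(\\blmath{F}\\blmath{r})[k]}{m[k] + 2\\beta n}, \\qquad \\blmath{r} := \\blmath{A}^H\\blmath{y} + 2\\beta \\sum_{j=1}^{N} \\P_j^H \\blmath{D}\\mathbf{z}_j,$$
where $m[k]\\in\\{0,1\\}$ indicates whether the $k$th k-space location was sampled.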
\nHowever, the dictionary learning \\add{step} \n\\add{involves} a nonconvex and NP-hard\n\\add{optimization} problem~\\cite{ambruck09}.\nVarious dictionary learning algorithms exist for this problem and its variants~\\cite{Aharon06,Rubinstein10,Mai10} that \\add{often} alternate\n\\add{between updating the sparse coefficients (sparse coding)}\n\\add{and the dictionary.}\n\\add{The DL-MRI method for~\\eqref{dlmri} used the K-SVD dictionary learning algorithm~\\cite{Aharon06}} \n\\add{and} showed significant image quality improvements\nover previous CS MRI methods that used nonadaptive wavelets and total variation~\\cite{lustig2007sparse}.\nHowever, it is slow due to expensive and repeated sparse coding steps,\nand lacks convergence guarantees.\nIn practice, variable rather than common sparsity levels across patches\ncan be allowed in DL-MRI\nby using an error-threshold-based stopping criterion when \\add{sparse coding with OMP~\\cite{pati93}.}\n\n\n\\subsubsection{Other Applications and Variations}\n\nLater works applied dictionary learning to dynamic MRI~\\cite{lingaljacob13,wangying14,josecab14}, parallel MRI~\\cite{weller16}, and PET reconstruction~\\cite{Chen2015}.\nAn alternative Bayesian nonparametric dictionary learning approach\nwas used for MRI reconstruction in~\\cite{huang14}.\n\\addnew{Dictionary learning was studied for CT image reconstruction in~\\cite{xu:12:ldx}, which compared the BCS approach to pre-learning} the dictionary from a dataset\nand fixing it during reconstruction.\nThe former was found to be more promising when sufficient views (in sparse-view CT) were measured,\nwhereas with very few views\n(or with very little measured information),\npre-learning performed better.\nTensor-structured (patch-based) dictionary learning has also been exploited recently\nfor dynamic CT~\\cite{Tan2015} and spectral CT~\\cite{zhang17} reconstructions.\n\n\n\\subsubsection{Recent Efficient Dictionary Learning-Based Methods}\n\nRecent work proposed efficient
dictionary learning-based reconstruction algorithms, dubbed \\add{SOUP-DIL image reconstruction algorithms}~\\cite{ravrajfes17} that used the following regularizer:\n\\begin{align}\n & \\min_{\\blmath{D},\\blmath{Z}} \\sum_{j=1}^{N}\n\\begin{Bmatrix}\n\\left \\| \\P_{j} \\blmath{x} - \\blmath{D} \\mathbf{z}_{j} \\right \\|_{2}^{2} + \\lambda^2 \\begin{Vmatrix}\n\\mathbf{z}_j\n\\end{Vmatrix}_{0}\n\\end{Bmatrix}\n\\;\\; \\text{s.t.} \\;\\; \\begin{Vmatrix}\n\\mathbf{d}_i\n\\end{Vmatrix}_{2} = 1, \\; \\forall \\, i. \\label{dlmrireg2}\n\\end{align}\nHere, the \\emph{aggregate} sparsity penalty $\\sum_{j=1}^{N} \\left \\| \\mathbf{z}_{j} \\right \\|_{0}$\nwith weight $\\lambda^2$ \nautomatically enables variable sparsity levels across patches.\nThe dictionary learning step of \\add{the SOUP-DIL reconstruction algorithm efficiently} optimized \\eqref{dlmrireg2} using an \\emph{exact} block coordinate descent scheme by decomposing $\\blmath{D} \\blmath{Z}$ as a sum of outer products (SOUP) of dictionary columns and rows of $\\blmath{Z}$, and solving for $\\mathbf{d}_i$ \\add{and then the $i$th row of $\\blmath{Z}$ (by thresholding) in closed-form, and cycling over all such pairs ($1 \\leq i \\leq K$).}\n\n\n\n\n\\begin{figure}[!t]\n\\begin{center}\n\\begin{tabular}{cc}\n\\includegraphics[height=1.5in]{SOUPDILLOMRIrecon}&\n\\includegraphics[height=1.5in]{samplingmask}\\\\\n(a) & (b) \\\\\n\\includegraphics[height=1.5in]{dictionarylearnedreal}&\n\\includegraphics[height=1.5in]{dictionarylearnedimag}\\\\\n(c) & (d)\\\\\n\\end{tabular}\n\\caption{Dictionary Learning for MRI (images from~\\cite{ravrajfes17}): (a) SOUP-DILLO MRI~\\cite{ravrajfes17} reconstruction \\add{(with $\\ell_0$ penalty)} of the water phantom~\\cite{ning2013magnetic}; (b) sampling mask in k-space with 2.5x undersampling; and (c) real and (d) imaginary parts of the dictionary learned during reconstruction, with atoms shown as $6 \\times 6$ 
patches.}\n\\label{fig:soupdilmriexample}\n\\end{center}\n\\end{figure}\n\nWhile the earlier DL-MRI used inexact (greedy) and expensive sparse code updates and lacked convergence analysis, \\add{the SOUP-DIL scheme} used efficient, exact updates, was proved to converge to the critical points (generalized stationary points) of the underlying problems, and improved image quality over several competing\nschemes~\\cite{ravrajfes17}.\nFig.~\\ref{fig:soupdilmriexample} shows an example reconstruction with this BCS method along with the learned dictionaries.\nAnother recent work~\\cite{ravmoorerajfes17} extended the L+S model for dynamic \n\\add{image}\nreconstruction in \\eqref{L+S} to a low-rank and adaptive sparse \\addnew{signal\nmodel} that incorporated a dictionary learning regularizer similar to \\eqref{dlmrireg2} for the $\\blmath{x}_{S}$ component.\n\n\\subsubsection{Alternative Convolutional Dictionary Model}\n\nOne can replace the patch-based dictionary model\nwith a convolutional model,\n$\\blmath{x} \\approx \\sum_{i=1}^{K} \\mathbf{d}_{i} \\otimes \\mathbf{c}_{i}$,\nthat directly represents the image\nas a sum of (possibly circular) convolutions of dictionary filters $\\mathbf{d}_{i}$\nand sparse coefficient maps $\\mathbf{c}_{i}$%\n~\\cite{garcbrendt18,wohlberg16}.\nThe convolutional synthesis dictionary model is distinct from the patch-based model.\nHowever, its main drawback is the inability to represent \\addnew{very} low-frequency content in images,\nnecessitating\npre-processing of images\nto remove \\addnew{very} low-frequency content prior to convolutional dictionary learning.\nThe utility of convolutional synthesis dictionary learning for biomedical image reconstruction\nis an open and interesting area for future research;\nsee~\\cite{chun:18:cdl}\nfor a denoising formulation\nthat could be extended to inverse problems.\n\n\\subsection{Connections between Transform Learning Approaches and Convolutional Network Models}\n\\label{secfilterbanks}\n\nThe
sparsifying transform models in Section~\\ref{sec:datadriven:transformlearning}\nhave close connections with convolutional filterbanks.\nThis subsection and the next review some of these connections and their implications for reconstruction.\n\n\\subsubsection{Connections to Filterbanks}\n\nTransform learning and its application\nto regularly spaced image\n\\addnew{patches~\\cite{pfisbres19}}\ncan be equivalently performed using convolutional operations.\nFor example, applying an atom of the transform\nto all the overlapping patches of a (2D) image via inner products\nis equivalent to convolving the image\nwith a \\emph{transform filter} that is the (2D) flipped version of the atom.\nThus, sparse coding in the transform model\ncan be viewed as convolving the image with a set of transform filters\n(obtained from the transform atoms)\nand thresholding the resulting filter coefficient maps,\nand transform learning can be viewed as equivalently\nlearning convolutional sparsifying filters\n\\cite{chun:18:cao-asilomar,chun:18:cao-arxiv}.\nWhen using only a regularly spaced subset of patches,\nthe above interpretation of transform sparse coding becomes convolving the image with the transform filters,\ndownsampling the results, and then thresholding~\\cite{pfisbres19}.\nTransform models\nbased on clustering~\\cite{sravTCI1}\nadd\nnon-trivial complexities to this process.
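The patch/convolution equivalence just described can be checked numerically. Below is a minimal 1D NumPy sketch (the signal, atom, and threshold are arbitrary illustrative choices, not taken from the cited works): applying one transform atom to every overlapping patch via inner products matches convolution with the flipped atom, and transform-domain sparse coding is then filtering followed by hard-thresholding.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16)       # 1D stand-in for an image
atom = rng.standard_normal(4)     # one row (atom) of a sparsifying transform

# patch route: inner product of the atom with every overlapping patch
patches = np.stack([x[i:i + 4] for i in range(len(x) - 3)])
coeffs_patch = patches @ atom

# filter route: convolve the signal with the flipped atom
coeffs_conv = np.convolve(x, atom[::-1], mode="valid")
assert np.allclose(coeffs_patch, coeffs_conv)

# transform sparse coding = filtering + hard-thresholding the coefficient map
tau = 0.5
sparse_map = coeffs_conv * (np.abs(coeffs_conv) >= tau)
```

Downsampling the filter outputs before thresholding gives the regularly spaced subset-of-patches variant mentioned above.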
\n\nApplying the matrix $\\blmath{W}^{H}$\nto the sparse codes of all overlapping patches\nand spatially aggregating the results,\nan operation used in iterative transform-based reconstruction algorithms~\\cite{sravTCI1},\nis equivalent to filtering the thresholded filter coefficient maps\nwith corresponding matched filters\n(complex conjugate of transform atoms)\nand summing the results over the channels.\nThese equivalences between patch-based and convolutional operations\nfor the transform model\ncontrast with the case for the synthesis dictionary model\nin Section~\\ref{sec:datadriven:dictionarylearning},\nwhere the patch-based and convolutional versions of the model are not equivalent in general.\nWhen a disparate set of (e.g., randomly chosen) image patches or operations\nsuch as block matching~\\cite{wen2018power}, etc., are used with the transform model,\nthe underlying operations do not correspond to convolutions\n(thus, the\ntransform learning frameworks can be viewed as more general).\nTypically the convolutional implementation of transforms\nis more computationally efficient than the patch-based version\nfor \\addnew{large filter sizes~\\cite{pfisbres19}.}\n\n\nRecent works have exploited the filterbank interpretation\nof the transform model~\\cite{pfisbres19,ravacfess18,saibrenmulti}.\n\\addnew{For example, \\cite{pfisbres15}\nlearned filterbanks for MRI reconstruction.\nIn\n~\\cite{pfisbres19},\nthe authors studied} alternative properties and regularizers for transform learning.\n\n\n\\subsubsection{Multi-layer Transform Learning}\n\n\\addnew{A recent work~\\cite{saibrenmulti} proposed}\nlearning multi-layer extensions of the transform model\n(dubbed deep residual transforms (DeepResT))\n\\add{that}\nmimic convolutional neural networks (CNNs)\nby incorporating components such as\nfiltering, nonlinearities, pooling,\nand stacking;\nhowever, the learning was done using unsupervised\n\\emph{model-based transform learning-type cost functions}. 
\n\n\\begin{figure*}[!t]\n\\hspace{-0.0in}\\includegraphics[width=1.0\\textwidth]{layers.pdf}\n\\caption{The reconstruction model (see~\\cite{ravacfess18}) derived from the image update step of \\add{the square transform learning-based image reconstruction algorithm in~\\cite{sravTCI1}.} \nThe model here has $K$ layers corresponding to $K$ iterations. Each layer first has a decorruption step that computes the second term in \\eqref{imupd} using filtering and thresholding operations, assuming a transform model with $L$ filters. This is followed by a system model block that adds the fixed bias term $\\nu \\blmath{A}^{H} \\blmath{y}$ to the output of the decorruption step and performs a least-squares type image update (e.g., using CG) to enforce the imaging forward model.}\n\\label{fig:physicsdrivenlearning}\n\\vspace{-0.04in}\n\\end{figure*}\n\n\nIn the conventional transform model,\nthe image is passed through a set of transform filters and thresholded (the non-linearity)\nto generate the sparse coefficient maps.\n\\add{In} the DeepResT model,\nthe residual (difference) between the filter \\add{outputs}\nand their \\add{sparse}\nversions is computed\nand \\add{these}\nresidual maps for different filters \nare stacked together to form a residual volume\n\\add{that is} jointly sparsified in the next layer. \n\\add{To} prevent dimensionality explosion,\neach filtering of the residual volume in the second and subsequent layers\nproduces a 2D output (for a 2D initial image).\nThe multi-layer model \\add{thus} consists of successive joint sparsification of residual maps several \\add{times (cf. Fig. 1 in \\cite{saibrenmulti} and Fig. 
9 in \\cite{wensailukebres19}).}\n\\add{The filters and sparse maps}\n\\add{in all layers of the (\\emph{encoder}) network are\njointly and efficiently learned in~\\cite{saibrenmulti} from images \nto provide the smallest sparsification\nresiduals in the final (output) layer,\na transform learning-type cost.}\nThe learned model and multi-layer sparse coefficient maps can then be backpropagated\nin a linear fashion (\\emph{decoder})\nto generate\n\\add{image approximations.}\nThe DeepResT model also downsampled (pooled) the residual maps \\add{(along the filter channel dimension)} in each encoder layer\nbefore further filtering them,\nproviding robustness to noise and data corruptions.\n\\add{The learned models were shown~\\cite{saibrenmulti} to provide promising performance\nfor denoising images when learning directly from noisy data,}\nand moreover learning stacked multi-layer encoder-decoder modules was shown to improve performance,\nespecially at high noise levels.\nApplication of such deep transform models to medical image reconstruction\nis an \\add{ongoing area~\\cite{zhengsaietalmulti2019}} of active research.\n\n\n\n\\subsection{Physics-Driven Deep Training of Transform-Based Reconstruction Models} \\label{secphysicsdrivendeep}\n\nThere has been growing recent interest in supervised learning approaches\nfor image reconstruction~\\cite{schlemper18}.\nThese methods learn the parameters of reconstruction algorithms from training datasets\n(typically consisting of pairs of ground truth images and initial reconstructions from measurements)\nto minimize the error in reconstructing the training images\nfrom their typically limited or corrupted measurements.\nFor example, the reconstruction model can be a deep CNN\n(typically consisting of encoder and decoder parts)\nthat can be trained (as a denoiser)\nto produce a reconstruction from an initial corrupted version~\\cite{leeye18}.\nSection~\\ref{sec:deepmodel} discusses such approaches in more detail.\nThese\n
methods often require large training sets\nto learn billions of parameters (e.g., filters, etc.).\nMoreover, learned CNNs (\\emph{deep learning})\ntypically do not rigorously incorporate the imaging measurement model\nor information about the physics of the imaging process,\nwhich are a key part\nof solving inverse problems.\nHence, there has been recent interest\nin learning the parameters of iterative algorithms\nthat solve regularized inverse problems~\\cite{sun2016deep,ravchfess17}\n(cf. Section~\\ref{sec:deepmodel} for more such methods).\nThese methods also typically have fewer free parameters to train.\n\n\\add{Recent works have interpreted early transform-based BCS algorithms\nas deep physics-driven convolutional networks learned on-the-fly,\ni.e., in a blind manner, from measurements~\\cite{ravchfess17,ravacfess18}.}\nFor example, the image update step in the \\add{square} transform BCS \\add{(that learns a unitary transform)} algorithm \\add{in~\\cite{sravTCI1}} \ninvolves a least squares-type optimization with the following normal equation:\n\\begin{align}\n\\blmath{G} \\blmath{x}^{k} = \\nu \\blmath{A}^{H}\\blmath{y} + \\sum_{j=1}^{N} \\P_{j}^{T}\\blmath{D}^{k}\\H_{\\lambda}(\\blmath{W}^{k}\\P_j \\blmath{x}^{k-1}), \\label{imupd}\n\\end{align}\nwhere $\\nu=1\/\\beta$\n(for $\\beta$ in Section~\\ref{sec:datadriven:dictionarylearning}\nor \\ref{sec:datadriven:transformlearning})\nand $k$ denotes the iteration number in the block coordinate descent \\add{reconstruction algorithm}.\nMatrix $\\blmath{D}^{k}\\triangleq\\begin{pmatrix}\n\\blmath{W}^{k}\n\\end{pmatrix}^H$ is a (matched) synthesis operator,\nand $\\blmath{G} \\triangleq \\sum_{j=1}^{N} \\P_{j}^{T}\\P_{j} + \\nu \\blmath{A}^{H}\\blmath{A}$\nis a fixed matrix.\nThe hard-thresholding in~\\eqref{imupd}\ncorresponds to the solution of the sparse coding step \\add{of the BCD algorithm~\\cite{sravTCI1}.}\n\n\n\nFig.~\\ref{fig:physicsdrivenlearning} shows an \\emph{unrolling} of $K$\n
iterations (\\emph{layers})\nof \\eqref{imupd},\nwith fresh filters in each iteration.\nEach layer has a system model block that solves \\eqref{imupd} (e.g., with FFTs or CG),\nwhose inputs are the two terms on the right hand side of \\eqref{imupd}:\nthe first term is a \\emph{fixed bias} term;\nand the second term (denoting the decorruption step) is computed via convolutions by first applying the transform filters\n(denoted by $h_{l}^{k}$, $1\\leq l \\leq L$ in Fig.~\\ref{fig:physicsdrivenlearning})\nfollowed by thresholding (the non-linearity)\nand then matched synthesis filters (denoted by $g_{l}^{k}$, $1\\leq l \\leq L$),\nand summing the outputs over the filters. %\nThis is clear from writing the second term in~\\eqref{imupd} as $\\sum_{l=1}^{L}\\sum_{j=1}^{N} \\P_{j}^{T}\\mathbf{d}_{l}^{k}\\H_{\\lambda}(\\mathbf{r}_{l}^{k^T}\\P_j\\blmath{x}^{k-1})$, with $\\mathbf{d}_{l}$ and $\\mathbf{r}_{l}$ denoting the $l$th columns of $\\blmath{D}$ and $\\mathbf{R}=\\blmath{W}^{T}$, respectively.\nEach of the $L$ terms forming the outer summation here\ncorresponds to the output of an arm (of transform filtering, thresholding, and synthesis filtering) in the decorruption module of Fig.~\\ref{fig:physicsdrivenlearning}.\n\\add{Since the BCS scheme in~\\cite{sravTCI1} does not use training data, but rather learns the transform filters as part of the iterative BCD algorithm (hence, the transform could change from iteration to iteration or layer to layer), it can be interpreted as learning the model in Fig.~\\ref{fig:physicsdrivenlearning} in an\non-the-fly sense from measurements.}\n\n\n\nRecent works~\\cite{ravchfess17,ravacfess18} learned the filters in this multi-layer model\n(a block coordinate descent or BCD Net \\cite{chfess18})\nwith soft-thresholding ($\\ell_1$ norm-based) nonlinearities and trainable thresholds\nusing a greedy scheme to minimize the error in reconstructing a training set\nfrom limited measurements.\nThese and similar approaches\n(including the\n
transform-based ADMM-Net~\\cite{sun2016deep})\ninvolving unrolling of \\add{typical\nimage reconstruction algorithms}\nare\n\\emph{physics-driven deep training} methods\ndue to the systematic inclusion of the imaging forward model in the convolutional network.\nOnce learned, the reconstruction model can be efficiently applied to test data\nusing convolutions, thresholding, and least squares-type updates.\nWhile \\cite{ravchfess17,ravacfess18} did not\nenforce the corresponding synthesis and transform filters\n(in each arm of the decorruption module) to be matched in each layer,\nrecent work \\cite{chfess18} learned matched filters,\nimproving image quality.\nThe learning of such \\add{physics-driven}\n\\add{networks}\nis an active area of research,\nwith interesting possibilities for new innovation\nin the convolutional models in the architecture\nmotivated by more recent transform and dictionary learning based \\addnew{(or other)} reconstruction methods.\nIn \\addnew{such} methods,\nthe thresholding operation\nis the key to exploiting sparsity.\n\n\n\n\\subsection{\\add{Types of Image Reconstruction Methods}}\n\nImage reconstruction methods\nhave undergone significant advances\nover the past few decades,\n\\add{\nwith different paths for\nvarious modalities.\n}\nThese advances can be broadly grouped in four\n\\add{categories of methods}.\n\\add{The} first\n\\add{category consists of}\nanalytical and algebraic methods.\nThese methods include the classical filtered back-projection (FBP) methods for X-ray CT\n(e.g., Feldkamp-Davis-Kress or FDK method \\cite{feldkamp:84:pcb})\nand the inverse Fast Fourier transform\nand extensions such as the Nonuniform Fast Fourier Transform (NUFFT)%\n~\\cite{fessler:03:nff,defrancesco:04:enb}\nfor MRI and CT.\nThese methods are based on relatively simple mathematical models of the imaging systems,\nand although they have efficient \\add{and} fast implementations,\nthey suffer from suboptimal properties such as poor resolution-noise 
trade-off for CT.\n\nA second \\add{category} of reconstruction methods\ninvolves iterative reconstruction algorithms\nthat are based on more sophisticated models\nfor the imaging system's physics and models for sensor and noise statistics.\nOften called model-based image reconstruction (MBIR) methods\nor statistical image reconstruction (SIR) methods,\nthese schemes iteratively estimate the unknown image\nbased on the system (physical or forward) model, measurement statistical model,\nand assumed prior information about the underlying object\n\\cite{sauer:93:alu,thibault:06:arf}.\nFor example, minimizing penalized weighted-least squares (PWLS) cost functions\nhas been popular in many modalities including PET and X-ray CT,\nand these costs include a statistically weighted quadratic data-fidelity term\n(capturing the imaging forward model and noise \\term{variance})\nand a penalty term called a regularizer\nthat models the prior information about the object\n\\add{\\cite{depierro:93:otr}}.\nThese iterative reconstruction methods improve image quality\nby reducing noise and artifacts. \nIn MRI, parallel data acquisition methods (P-MRI)\nexploit the diversity of multiple receiver coils\nto acquire fewer Fourier or k-space samples \\cite{pMRI-Survey}.\nToday, P-MRI acquisition is used widely\nin commercial systems,\nand MBIR-type methods in this case\ninclude those based on coil sensitivity encoding (SENSE) \\cite{pMRI-Survey}, etc.\nThe iterative medical image reconstruction methods approved\nby the U.S.\n
Food and Drug Administration (FDA)\nfor SPECT, PET, and X-ray CT\nhave been based on relatively simple regularization models.\n\nA third \\add{category} of reconstruction methods\naccommodates modified data acquisition methods\nsuch as reduced sampling in MRI and CT\nto significantly reduce scan time and\/or radiation dose.\nCompressed sensing (CS) techniques\n\\cite{feng96a,BreFen-C96c,donoho2006compressed,emmanuel2004robust,ye2002self}\nhave been particularly popular among this class of methods\n(leading to journal special issues\n\\add{\\cite{baraniuk:10:aos}}\n\\cite{wangbresntzi11}).\nThese methods have been so beneficial for MRI~\\cite{lustig2007sparse,CSMRIreview}\nthat they recently received FDA approval~\\cite{fda:17:ge,fda:17:siemens,fda:18:philips}.\nCS theory predicts the recovery of images from far fewer measurements\nthan the number of unknowns,\nprovided that the image is sparse in a transform domain or dictionary,\nand the acquisition or sampling procedure is appropriately incoherent with the transform.\nSince MR acquisition in Fourier or k-space occurs sequentially over time,\nmaking it a relatively slow modality,\nCS for MRI can enable quicker acquisition by collecting fewer k-space samples.\nHowever, the reduced sampling time comes at the cost\nof slower, nonlinear, iterative reconstruction.\nThe methods for reconstruction from limited data\ntypically exploit mathematical image models\nbased on \\emph{sparsity} or \\emph{low-rank}, etc.\nIn particular, CS-based MRI methods often use variable density random sampling techniques\nto acquire the data\nand use sparsifying transforms such as wavelets, finite difference operators\n(via total variation (TV) penalty), contourlets, etc.,\nfor reconstruction~\\cite{lustig2007sparse,qu2010iterative}.\nResearch \\add{about such methods}\nalso focused on developing new theory and guarantees\nfor sampling and reconstruction from limited data~\\cite{adchanpoonrom13},\nand on new optimization algorithms\nfor\n
reconstruction with good convergence rates~\\cite{kim:15:cos}.\n\nA fourth \\add{category} of image reconstruction methods\nreplaces mathematically designed models of images and processes\nwith \\emph{data-driven} or \\emph{adaptive} models\ninspired by the field of \\emph{machine learning}.\nSuch models\n(e.g., synthesis dictionaries~\\cite{Aharon2006},\nsparsifying transforms~\\cite{sai2013tl}, tensor models, etc.)\ncan be learned in various ways\nsuch as by\nusing training\ndatasets~\\cite{xu:12:ldx,zheng:18:pua},\nor even learned jointly with the reconstruction%\n~\\cite{sai2011dlmri,xu:12:ldx,lingaljacob13,sravTCI1},\na setting called \nmodel-blind reconstruction or blind compressed sensing (BCS)~\\cite{eldar2011bcs}. \nWhile most of these methods perform \\emph{offline} reconstruction\n(where the reconstruction is performed once all the measurements are collected),\nrecent works show that the models can also be learned\nin a time-sequential or \\emph{online} manner\nfrom streaming measurements to reconstruct dynamic objects%\n~\\cite{mardanimateosgiannakis15,briansairajjeff18}.\nThe learning can be done in an unsupervised manner\nemploying model-based and surrogate cost functions,\nor the reconstruction algorithms\n(such as deep convolutional neural networks (CNNs))\ncan be trained in a supervised manner\nto minimize the error in reconstructing training datasets\nthat typically consist of pairs of ground truth and \\addnew{undersampled data%\n~\\cite{schlemper18,leeye18,ravchfess17,wang:16:apo,shan:19:cpo}.}\nThese\nlearning-based reconstruction methods\nform a very active field of research\nwith numerous conference special sessions and special journal issues\ndevoted to the topic~\\cite{wang:18:iri}.\n\n\\add{The categories above\nare not a strict chronology;\nfor example,\nNN methods were investigated for image reconstruction\nas early as 1991\n\\cite{floyd:91:aan},\n\\addnew{and for MR spectroscopy soon thereafter\n\\cite{venkataraman:94:ann}},\nand\n
some of the earliest methods for X-ray CT were iterative.\n}\n\n\n\\subsection{Focus and Outline of This Paper}\n\nThis paper reviews the progress in medical image reconstruction,\nfocusing on the two most recent trends:\nmethods based on sparsity using analytical models,\nand low-rank models and extensions that combine sparsity and low-rank, etc.;\nand data-driven models and approaches exploiting machine learning.\nSome of the mathematical underpinnings\nand connections between different models and their pros and cons are also discussed.\n\nThe paper is organized as follows.\nSection~\\ref{sec:approaches} describes early image reconstruction approaches,\nespecially those used in current clinical systems.\nSections~\\ref{sec:sparsity} and \\ref{sec:lowrank} describe\nsparsity and low-rank based approaches\nfor image reconstruction.\nSection~\\ref{sec:datadriven} surveys \nthe advances in data-driven image models and related machine learning approaches for image reconstruction.\nAmong the learning-based methods,\ntechniques that learn image models using model-based cost functions\nfrom training data,\nor on-the-fly from measurements are discussed,\nfollowed by recent methods relying on supervised learning of models for reconstruction,\ntypically from datasets of high quality images and their corrupted versions.\nSection~\\ref{sec:deepmodel} reviews the very recent works using learned convolutional neural networks (a.k.a. 
deep learning) for image reconstruction.\nSection~\\ref{sec:open} discusses some of the current challenges and open questions\nin image reconstruction and outlines future directions for the field.\nSection~\\ref{sec:conclusion} concludes this review paper.\n\n\n\\subsection{Low-Rank Models and Extensions} \\label{sec:lowrankpart1}\n\nLow-rank models have been exploited in many imaging applications such as \\add{dynamic MRI~\\cite{zliang07}, functional MRI~\\cite{singh15,chiew2016}, diffusion-weighted MRI~\\cite{yuxinhu19}, and MR fingerprinting (MRF)~\\cite{mazoreldar18,zhao18,cruz19}.}\n\n\\add{Low-rank} assumptions are especially useful when processing dynamic or time-series data, and have been popular in dynamic MRI, where the underlying image sequence tends to be quite correlated over time.\nIn dynamic MRI, the measurements are inherently undersampled because the object changes as the samples are collected.\nReconstruction methods therefore typically pool the k-t space data in time to form sets of k-space data that appear to have sufficient samples; here, the underlying dynamic object is written as a Casorati matrix~\\cite{zliang07}, whose rows represent voxels and whose columns denote temporal \\emph{frames}, and each set of pooled k-space data corresponds to measurements of such frames.\nHowever, these methods can have poor temporal resolution and artifacts due to pooling.\nCareful model-based (CS-type) techniques can help achieve improved temporal or spatial resolution in such undersampled settings.\n\nSeveral works have exploited low-rankness of the underlying Casorati (space-time) matrix for dynamic MRI reconstruction~\\cite{zliang07,hald10,bzhao10,peder09}.\nLow-rank modeling of local space-time image patches has also been investigated in~\\cite{Trzaskolocallow11}.\nLater works combined low-rank (L) and sparsity (S) models for improved reconstruction.\n
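As a toy illustration of the Casorati-matrix viewpoint (sizes and rank here are arbitrary choices for the example, not from the cited works), the sketch below builds a synthetic, temporally correlated image series that is exactly rank 3, forms its Casorati matrix with voxels in rows and frames in columns, and recovers the series from a truncated SVD:

```python
import numpy as np

rng = np.random.default_rng(0)
nx, ny, nt, r = 8, 8, 20, 3

# Synthesize a rank-r dynamic object: r spatial maps with r time courses.
spatial = rng.standard_normal((nx * ny, r))
temporal = rng.standard_normal((r, nt))
frames = (spatial @ temporal).reshape(nx, ny, nt)

# Casorati matrix: rows are voxels, one column per temporal frame.
C = frames.reshape(nx * ny, nt)

# Rank-r approximation via truncated SVD recovers the correlated series.
U, s, Vh = np.linalg.svd(C, full_matrices=False)
C_r = (U[:, :r] * s[:r]) @ Vh[:r]
err = np.linalg.norm(C - C_r) / np.linalg.norm(C)
```

With real data the series is only approximately low rank, and the singular value spectrum (rather than an exact rank) governs how aggressively it can be truncated.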
Some of these works model the dynamic image sequence as both low-rank and sparse (L \\& S)~\\cite{lingala11,bzhao12}.\nThere has also been growing interest in models that decompose the dynamic image sequence into the sum of a low-rank and sparse component (a.k.a. robust principal component analysis (RPCA))~\\cite{Can11,guo14}. In this L+S model, the low-rank component can capture the background or slowly changing parts of the dynamic object, whereas the sparse component can capture the dynamics in the foreground such as local motion or contrast changes, etc.\n\nRecent works have applied the L+S model to dynamic MRI reconstruction~\\cite{otazo15,ben14}, with the S component modeled as sparse by itself or in a known transform domain.\nAccurate reconstructions can be obtained~\\cite{otazo15} when the underlying L and S components are incoherent (distinguishable) and the k-t space acquisition is appropriately incoherent with these components.\nThe L+S reconstruction problem can be formulated as follows:\n\\begin{align}\n\\nonumber \\min_{\\blmath{x}_{L},\\, \\blmath{x}_{S}} & \\frac{1}{2} \\left \\| \\blmath{A}(\\blmath{x}_{L}+ \\blmath{x}_{S})-\\blmath{y} \\right \\|_{2}^{2} + \\lambda_{L}\\left \\| \\blmath{R}_{1}(\\blmath{x}_{L}) \\right \\|_{*} \\\\\n &\\; \\; \\; \\; + \\lambda_{S} \\left \\| \\blmath{T} \\blmath{x}_{S} \\right \\|_{1}. \\label{L+S}\n\\end{align}\nHere, the underlying vectorized object satisfies the L+S decomposition $\\blmath{x} = \\blmath{x}_{L} + \\blmath{x}_{S}$.\nThe sensing operator $\\blmath{A}$ acting on it can take various forms. For example, in parallel imaging of a dynamic object, $\\blmath{A}$ performs frame-by-frame multiplication by coil sensitivities (in the SENSE approach) followed by undersampled Fourier encoding.\nThe low-rank regularization penalizes the nuclear norm of $\\blmath{R}_{1}(\\blmath{x}_{L})$, where $\\blmath{R}_{1}(\\cdot)$ reshapes its input into a space-time matrix. 
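Proximal gradient methods for problems like \eqref{L+S} reduce to two simple proximal maps, sketched below on arbitrary random test data (an illustrative sketch, not the cited implementation): singular value thresholding (SVT) for the nuclear-norm term and elementwise soft thresholding for the $\ell_1$ term.

```python
import numpy as np

def svt(M, tau):
    """Prox of tau*||.||_*: soft-threshold the singular values of M."""
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vh

def soft(z, tau):
    """Prox of tau*||.||_1: elementwise soft thresholding."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

rng = np.random.default_rng(0)
M = rng.standard_normal((10, 6))
L_part = svt(M, tau=2.0)   # singular values shrunk toward zero (lowers rank)
S_part = soft(M, tau=0.5)  # small entries zeroed out (promotes sparsity)
```

In an L+S iteration, each gradient step on the data-fidelity term is followed by applying these maps to the L and S variables, respectively.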
The nuclear norm serves as a convex surrogate or envelope for the nonconvex matrix rank.\nThe sparsity penalty on $\\blmath{x}_{S}$ has a similar form as in CS approaches, and $\\lambda_{L}$ and $\\lambda_{S}$ are non-negative weights above.\nProblem~\\eqref{L+S} is convex and can be solved using various iterative techniques. Otazo et al.~\\cite{otazo15} used the proximal gradient method, wherein the updates involved simple singular value thresholding (SVT) for the L component and soft thresholding for the S component.\nLater, we mention a data-driven version of the L+S model in Section~\\ref{sec:datadriven:dictionarylearning}.\nWhile the above works used low-rank models of matrices (e.g., obtained by reshaping the underlying multi-dimensional dynamic object into a space-time matrix), \\add{other} recent works also used low-rank \\add{tensor models of the underlying object (a tensor) in reconstruction~\\cite{banco16,yaman17,he16,Christodoulou18}.}\n\n\n\n\\subsection{Online Learning for Reconstruction} \\label{seconlinelearning}\n\nRecent works have proposed online learning of sophisticated models for reconstruction particularly of dynamic data from time-series\n\\addnew{measurements~\\cite{saibrianrajjeff17,briansairajjeff18,mardani16}.}\nIn this setting, the reconstructions are produced in a time-sequential manner from the incoming measurement sequence, with the models also adapted simultaneously and sequentially\nover time to track the underlying object's dynamics and aid reconstruction.\nSuch methods allow greater adaptivity to temporal dynamics\nand can enable dynamic reconstruction with less latency, memory use, and computation\nthan conventional methods.\nPotential applications include real-time medical imaging, interventional imaging, etc.,\nor they could be used even for more efficient and \\addnew{(spatially, temporally)} adaptive \\addnew{\\emph{offline} reconstruction\nof large-scale (\\emph{big}) data.}\n\n\\begin{figure}[!t] \n\\begin{center} 
\n\\begin{tabular}{cc}\n\\includegraphics[width=1.4in]{STROLLRMRIfig\/original.pdf} &\n\\includegraphics[width=1.4in]{STROLLRMRIfig\/SparseMRI.pdf} \\\\\n{\\footnotesize Ground Truth} & \n{\\footnotesize Sparse MRI} \\\\\n\\includegraphics[width=1.4in]{STROLLRMRIfig\/ADMMNet.pdf} &\n\\includegraphics[width=1.4in]{STROLLRMRIfig\/STROLLRMRI.pdf} \\\\\n{\\footnotesize ADMM-Net} & \n{\\footnotesize STROLLR-MRI } \\\\\n\\end{tabular} \n\\caption{MRI reconstructions (images from~\\cite{wensailukebres19}) with pseudo-radial sampling and 5x undersampling using Sparse MRI~\\cite{lustig2007sparse} (PSNR = $27.92$ dB), ADMM-Net~\\cite{sun2016deep} (PSNR = $30.67$ dB), and STROLLR-MRI~\\cite{wen2018power} (PSNR = $31.98$ dB), along with the original image from~\\cite{sun2016deep}. STROLLR-MRI clearly outperforms the nonadaptive Sparse MRI, while ADMM-Net also produces undesirable artifacts.} \\label{fig:tlmri} \n\\end{center} \n\\vspace{-0.2in} \n\\end{figure}\n\n\n\n\n\\addnew{A recent work\nefficiently adapted} low-rank tensor models in an online manner for dynamic MRI~\\cite{mardani16}.\nOnline learning for dynamic\n\\add{image} reconstruction was shown to be promising in~\\cite{saibrianrajjeff17,briansairajjeff18},\nwhich adapted synthesis dictionaries to spatio-temporal patches.\n\\addnew{In this setup~\\cite{briansairajjeff18},}\nmeasurements corresponding to a group\n(called \\emph{mini-batch}) of frames are processed at a time using a sliding window strategy.\nThe objective function for reconstruction is a weighted %\ntime average of instantaneous cost functions, each corresponding to a group of processed frames.\nAn exponential weighting (forgetting) factor for the instantaneous cost functions\ncontrols the past memory in the objective.\nThe instantaneous cost functions include both a data-fidelity and a regularizer\n(corresponding to patches in the group of frames) term.\nThe objective function thus changes over time and is optimized at each time point\nwith respect 
to the most recent mini-batch of frames and corresponding sparse coefficients (with older frames and coefficients fixed),\nbut the dictionary is itself adapted therein to all the data. \nEach frame can be reconstructed from multiple overlapping temporal windows\nand a weighted average of those used as the final estimate.\n\nThe online learning algorithms in~\\cite{briansairajjeff18} achieved\n\\add{computational efficiency}\nby using warm start initializations (that improve over time)\nfor variables and frames based on estimates in previous windows,\nand thus running only a few iterations of optimization for each new window.\nThey stored past information in small (cumulatively updated) matrices for the dictionary update \\add{(low memory usage)}. \nThe \\addnew{methods were}\nsignificantly more efficient and more effective\nthan batch learning-based techniques for dynamic MRI\nthat iteratively learn and reconstruct from \\emph{all} k-t space measurements.\nGiven the potential of online learning methods to transform dynamic and large-scale imaging,\nwe expect to see growing interest and research in this domain.\n\n\n\\subsection{Sparsifying Transform Learning-Based Methods}\n\\label{sec:datadriven:transformlearning}\n\n\nSeveral recent works have studied the learning of the\nefficient \\emph{sparsifying transform model}\nfor biomedical image reconstruction~\\cite{saibressiam15,sravTCI1,zheng:18:pua}.\nThis subsection reviews these advances (see~\\cite{wensailukebres19} for an MRI focused review).\n\n\n\\subsubsection{Transform Model}\n\nThe sparsifying transform model is a generalization~\\cite{sai2013tl} of the analysis dictionary model. 
The latter assumes that applying an operator $\\blmath{W}$ to a signal $\\mathbf{f}$ produces several zeros in the output, i.e., the signal lies in the null space of a subset of rows of the operator.\nThe sparsifying transform model allows for a sparse approximation as $\\blmath{W} \\mathbf{f} = \\mathbf{z} + e$, where $\\mathbf{z}$ has several zeros and $e$ is a\n\\add{transform} domain modeling error.\nNatural images are well-known to be approximately sparse in transform domains such as the DCT and wavelets, a property that has been exploited for image compression~\\cite{marcellin00}, denoising, and inverse problems. \nA key advantage of the sparsifying transform model compared to the synthesis dictionary model is that the transform domain sparse approximation can be computed exactly and cheaply by thresholding $\\blmath{W} \\mathbf{f}$~\\cite{sai2013tl}.\n\n\\subsubsection{Early Efficient Transform Learning-Based Methods}\n\nRecent works~\\cite{saibressiam15,sravTCI1} proposed \\addnew{transform learning based}\n\\add{image reconstruction}\nmethods that involved computationally cheap, closed-form updates in the iterative algorithms. 
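The `exact and cheap' sparse coding property noted above can be checked directly: for the $\ell_0$-penalized problem $\min_{\mathbf{z}} \|\blmath{W}\mathbf{f}-\mathbf{z}\|_2^2 + \lambda^2\|\mathbf{z}\|_0$, the minimizer is hard thresholding of $\blmath{W}\mathbf{f}$ at $\lambda$, since the coordinates decouple (keeping an entry costs $\lambda^2$, zeroing it costs its squared value). The toy verification below uses a random matrix as a stand-in for a learned transform and brute-forces all support patterns:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
W = rng.standard_normal((n, n))   # stand-in for a learned square transform
f = rng.standard_normal(n)
lam = 1.0

b = W @ f
z_hat = b * (np.abs(b) > lam)     # hard thresholding: exact, closed-form

def cost(z):
    return np.sum((b - z) ** 2) + lam ** 2 * np.count_nonzero(z)

# Brute force over all 2^n support patterns (optimal z on a support equals b
# on that support) confirms that hard thresholding attains the minimum cost.
best = min(cost(b * np.array(mask)) for mask in np.ndindex(*(2,) * n))
```

By contrast, the analogous synthesis sparse coding problem generally requires iterative (e.g., greedy) algorithms.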
\n\\add{The} following square transform \\add{learning~\\cite{sai2013tl}} regularizer was used for reconstruction in~\\cite{saibressiam15}:\n\\begin{align} \nR(\\blmath{x}) = & \\min_{\\blmath{W},\\blmath{Z}} \\sum_{j=1}^{N}\\left \\| \\blmath{W} \\P_{j} \\blmath{x} - \\mathbf{z}_{j} \\right \\|_{2}^{2} + \\gamma Q(\\blmath{W}) \\;\\; \\text{s.t.} \\;\\; \\begin{Vmatrix}\n\\blmath{Z}\n\\end{Vmatrix}_{0} \\leq s, \\label{stlmrireg}\n\\end{align}\nwhere $\\blmath{W} \\in \\mathbb{C}^{n \\times n}$\nis a square matrix and the transform learning regularizer\n$Q(\\blmath{W}) = - \\log \\left | \\det \\blmath{W} \\right | + 0.5\\left \\| \\blmath{W} \\right \\|_{F}^{2}$ with weight $\\gamma>0$\nprevents trivial solutions in\nlearning \nsuch as the zero matrix\nor matrices with repeated rows.\nMoreover, it also helps control the condition number of the transform~\\cite{sai2013tl}\\add{.}\nThe term\n$\\sum_{j=1}^{N}\\left \\| \\blmath{W} \\P_{j} \\blmath{x} - \\mathbf{z}_{j} \\right \\|_{2}^{2}$\ndenotes the transform domain modeling error or \\emph{sparsification error},\nwhich is minimized to learn a good sparsifying transform. %\nThe constraint in~\\eqref{stlmrireg} on the $\\ell_0$ ``norm\" of the matrix $\\blmath{Z}$\ncontrols the net or aggregate sparsity of all patches' sparse coefficients. 
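The aggregate sparsity constraint $\|\blmath{Z}\|_0 \leq s$ in \eqref{stlmrireg} admits a simple closed-form sparse coding update: stack the transformed patches $\blmath{W}\P_j\blmath{x}$ as columns of a matrix and keep only the $s$ largest-magnitude entries across \emph{all} patches (not $s$ per patch), zeroing the rest. A minimal sketch with random data (sizes and the budget $s$ are arbitrary illustrative choices):

```python
import numpy as np

def sparse_code_aggregate(B, s):
    """Keep the s largest-magnitude entries of B (globally), zero the rest."""
    flat = B.ravel()
    keep = np.argsort(np.abs(flat))[-s:]   # indices of s largest magnitudes
    Z = np.zeros_like(flat)
    Z[keep] = flat[keep]
    return Z.reshape(B.shape)

rng = np.random.default_rng(0)
n, N, s = 16, 100, 50                  # patch size n, N patches, budget s
W = rng.standard_normal((n, n))        # stand-in for a learned transform
patches = rng.standard_normal((n, N))  # columns are vectorized patches P_j x
B = W @ patches                        # transformed patches
Z = sparse_code_aggregate(B, s)
```

The global budget lets well-sparsifiable patches use few coefficients while detail-rich patches use more, unlike a fixed per-patch sparsity level.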
\n\n\n\n\nThe image reconstruction problem with regularizer~\\eqref{stlmrireg}\nwas solved in~\\cite{saibressiam15}\nusing a highly efficient block coordinate descent (BCD) approach\nthat alternates between \\add{minimizing with respect to}\n$\\blmath{Z}$ (transform sparse coding step),\n$\\blmath{W}$ (transform update step), and $\\blmath{x}$ (image update step).\nImportantly, the transform sparse coding step has a closed-form solution,\nwhere the matrix $\\blmath{B}$, whose columns are $\\blmath{W} \\P_{j} \\blmath{x}$,\nis thresholded to its $s$ largest magnitude elements,\nwith other entries set to zero.\n\\add{When the sparsity constraint is replaced with alternative sparsity promoting functions such as the $\\ell_0$ sparsity penalty \nor $\\ell_1$ penalty, the sparse coding solution is obtained in closed-form by hard \nor soft thresholding.}\n\\add{The transform update step} has a \\add{simple solution}\ninvolving the singular value decomposition (SVD) of a small matrix~\\cite{saibressiam15} \\add{and} the image update step involves \\add{a least squares problem (e.g., in the case of single coil Cartesian MRI, it is solved in closed-form using FFTs~\\cite{saibressiam15}).}\nThis efficient BCD scheme was proven to converge in general to the critical points of the nonconvex reconstruction problem~\\cite{saibressiam15}.\n\n\n\n\nIn practice, the sparsity controlling parameter can be varied over algorithm iterations\n(a continuation strategy),\nallowing for faster artifact removal initially\nand then reduced bias over the iterations~\\cite{sravTCI1}.\n\\add{The scheme in~\\cite{saibressiam15} was shown}\nto be much faster than the previous DL-MRI scheme.\nTanc and Eksioglu~\\cite{tanc16} further combined \\add{transform learning}\nwith global sparsity regularization\nin known transform domains for CS MRI.\n\n\\addnew{Square transform learning has also been applied to CT reconstruction~\\cite{lukebresler14}.}\nAnother recent work used \\add{square transform 
learning} for low-dose CT image reconstruction~\cite{yeravyongfes17} with a shifted-Poisson likelihood penalty for the data-fidelity term in the cost (instead of the conventional weighted least squares penalty), but pre-learned the transform from a dataset and fixed it during reconstruction to save computation.\n\n\n\n\n\addnew{Other works have explored alternative formulations for transform learning (e.g., \novercomplete or tall~\cite{ravbres13} transforms) that could potentially be used for image reconstruction.}\n\n\subsubsection{Learning Rich Unions of Transforms for Reconstruction}\n\nSince images typically contain a diversity of textures, features, and edge information,\nrecent works~\cite{wensaibres15,sravTCI1,zheng:18:pua}\nlearned a union of transforms (a rich model) for image reconstruction.\nIn this setting, a collection of $K$ transforms is learned and the image patches are grouped or \emph{clustered} into $K$ classes, with each class of (\emph{similar}) patches best matched to and using a particular transform.\nThe \add{UNITE (UNIon of Transforms lEarning) image reconstruction}\nformulation in~\cite{sravTCI1} uses the following \add{regularizer:}\n\begin{align} \n\nonumber \hspace{-0.05in} R(\blmath{x}) = & \min_{\begin{Bmatrix}\n\blmath{W}_{k},C_{k},\mathbf{z}_{j}\n\end{Bmatrix}} \sum_{k=1}^{K} \sum_{j \in C_{k}}\n\begin{Bmatrix}\n\left \| \blmath{W}_{k} \P_{j} \blmath{x} - \mathbf{z}_{j} \right \|_{2}^{2} + \lambda^{2} \begin{Vmatrix}\n\mathbf{z}_j\n\end{Vmatrix}_{0}\n\end{Bmatrix} \\\n& \;\;\;\;\;\;\;\;\; \text{s.t.} \;\; \blmath{W}_{k}^{H}\blmath{W}_{k}= \mathbf{I} \; \forall \, k, \; \begin{Bmatrix}\nC_{k}\n\end{Bmatrix} \in G.\n\label{unitemrireg}\n\end{align}\nHere, $C_{k}$ is a set containing the indices of all patches matched to the transform $\blmath{W}_{k}$, and $G$ denotes the set of all partitions of $[1 : N ]$ into $K$ disjoint subsets, where $N$ is the total number of 
overlapping patches.\nNote that when indexed variables are enclosed in braces (in \\eqref{unitemrireg} and later equations), we mean the set of all variables over the range of the indices.\n\n\n\n\nThe \\add{UNITE}\nreconstruction formulation jointly learns a collection of transforms, clusters and sparse codes patches, and reconstructs the image $\\blmath{x}$ from measurements.\n\\addnew{An} \\add{efficient BCD algorithm with convergence\nguarantees was proposed for optimizing the problem\nin~\\cite{sravTCI1}.} \n\\add{The $K$ transforms in~\\eqref{unitemrireg} are} \n\\add{unitary, which simplifies the BCD updates.}\n\\add{For MRI, UNITE-MRI achieved improved image quality over \nthe square transform learning-based scheme}\nwhen reconstructing from undersampled k-space measurements~\\cite{sravTCI1}.\n\n\n\\begin{figure}[!t]\n\t\\centering \t\n \n \\begin{tikzpicture}\n\t[spy using outlines={rectangle,green,magnification=2,size=10mm, connect spies}]\n\t\\node {\\includegraphics[width=0.21\\textwidth]{PWLSULTRAfig\/xtrue3d}\t};\n\t\\spy on (0.87,0.12) in node [left] at (2.1,-1.2);\t\n\t\\spy on (-0.25,0.4) in node [left] at (2.1,1.65);\t\t\n\t\\end{tikzpicture}\n\t\\begin{tikzpicture}\n\t[spy using outlines={rectangle,green,magnification=2,size=10mm, connect spies}]\n\t\\node {\\includegraphics[width=0.21\\textwidth]{PWLSULTRAfig\/5e3xfdk_new}\t};\n\t\\spy on (0.87,0.12) in node [left] at (2.1,-1.2);\t\n\t\\spy on (-0.25,0.4) in node [left] at (2.1,1.65);\t\t\n\t\\end{tikzpicture}\\\\\n\t\\begin{tikzpicture}\n\t[spy using outlines={rectangle,green,magnification=2,size=10mm, connect spies}]\n\t\\node {\\includegraphics[width=0.21\\textwidth]{PWLSULTRAfig\/5e3_l2b14dot5_os24_iter50}\t};\n\t\\spy on (0.87,0.12) in node [left] at (2.1,-1.2);\t\t\n\t\\spy on (-0.25,0.4) in node [left] at (2.1,1.65);\t\t\n\t\\end{tikzpicture}\n\t\\begin{tikzpicture}\n\t[spy using outlines={rectangle,green,magnification=2,size=10mm, connect spies}]\n\t\\node 
{\\includegraphics[width=0.21\\textwidth]{PWLSULTRAfig\/5e3kap1_block15_beta1dot2e4_gam20_SldDist2_clu20_iter2_os4_slice101_154_SldDist2_learn75}\t};\n\t\\spy on (0.87,0.12) in node [left] at (2.1,-1.2);\t\t\n\t\\spy on (-0.25,0.4) in node [left] at (2.1,1.65);\t\t\n\t\\end{tikzpicture}\n\n\t\\caption{Cone-beam CT reconstructions (images from~\\cite{zheng:18:pua}) of the XCAT phantom~\\cite{segars:08:rcs} using the FDK, PWLS-EP~\\cite{cho:15:rdf} (with edge-preserving regularizer), and PWLS-ULTRA~\\cite{zheng:18:pua} ($K=15$) methods at dose $I_0 = 5 \\times 10^3$ incident photons per ray, shown along with the ground truth (top left). The central axial, sagittal, and coronal planes of the 3D reconstruction are shown. The learning-based PWLS-ULTRA removes noise and preserve edges much better than the other schemes.}\n\t\\label{fig:PWLSULTRA}\n\t\\vspace{-0.15in}\n\\end{figure}\n\n\nRecent works applied learned unions of transforms to other applications.\nFor example, the union of transforms model was pre-learned (from a dataset) and used in a clustering-based low-dose 3D CT reconstruction scheme~\\cite{zheng:18:pua}.\nFig.~\\ref{fig:PWLSULTRA} shows an example of high quality reconstructions obtained with this \\add{scheme.}\nWhile the work used a PWLS-type reconstruction cost, a more recent \\add{method~\\cite{yeravyongfes17}} \n\\add{replaced} the weighted least squares data-fidelity term with\n\\add{the}\nshifted-Poisson likelihood penalty, which further improved image quality and reduced bias in the reconstruction in ultra low-dose \\add{settings.}\n\\add{Other}\nrecent \\add{works} combined learned union of transforms models\nwith \\add{material image models}\nand applied it to image-domain material decomposition in dual-energy CT\nwith\nhigh quality \\add{results~\\cite{liravyongfes19,zlisaiyong19}.}\n\n\n\n\\subsubsection{Learning Structured Transform Models}\n\nIt is often useful to incorporate various structures and invariances in learning to better model 
natural data, and to prevent learning spurious features in the presence of noise and corruptions.\n\add{Flipping and rotation invariant sparsifying transform learning} was recently\nproposed and applied to image reconstruction in~\cite{wen2017frist}.\nThe regularization is similar to \eqref{unitemrireg}, but uses $\blmath{W}_k = \blmath{W} \mathbf{\Phi}_k$ with a common \emph{parent transform} $\blmath{W}$ and $\begin{Bmatrix}\n\mathbf{\Phi}_k\n\end{Bmatrix}$ denoting a set of known \addnew{flipping and rotation\noperators} that apply to each (row) atom of $\blmath{W}$ and approximate \addnew{flips and rotations}\nby permutations (similar to \cite{qu2012undersampled}, which used fixed 1D Haar wavelets as the parent).\nThis enables learning a much more structured but flexible\n(depending on the number of \addnew{operators $\mathbf{\Phi}_k$}) model\nthan in \eqref{unitemrireg},\nwith clustering based more on shared directional properties.\nImages with more directional features are better modeled by \n\add{such learned transforms~\cite{wen2017frist}.}\n\n\n\subsubsection{Learning Complementary Models -- Low-rank and Transform Sparsity}\n\n\n\addnew{A recent work~\cite{wen2018power} proposed an approach called STROLLR (Sparsifying TRansfOrm Learning and Low-Rank)} that combines two complementary regularizers:\none \addnew{exploiting (\emph{non-local}) self-similarity between regions,\nand another exploiting transform learning}\n\add{that} is based on \emph{local} patch sparsity.\nNon-local similarity and block matching models are well-known to have excellent performance in image processing tasks such as image denoising (with BM3D~\cite{dbov07}).\n\add{The STROLLR} regularizer has the form $R(\blmath{x}) = R_{1}(\blmath{x}) + R_{2}(\blmath{x})$,\nwhere the low-rank regularizer is as follows:\n\begin{equation}\nR_{1}(\blmath{x}) = \min_{\begin{Bmatrix}\n\mathbf{U}_j\n\end{Bmatrix}} 
\\sum_{j=1}^{N}\\begin{Bmatrix}\n\\begin{Vmatrix}\n\\add{\\mathcal{M}_{j} (\\blmath{x})} - \\mathbf{U}_{j}\n\\end{Vmatrix}_{F}^{2} + \\eta^2 \\, \\text{rank}(\\mathbf{U}_j)\n\\end{Bmatrix}, \\label{strollrmrilrreg}\n\\end{equation}\nand the transform learning regularizer is\n\\begin{align}\n\\nonumber R_{2}(\\blmath{x}) = & \\min_{\\blmath{W}, \\begin{Bmatrix}\n\\mathbf{z}_j\n\\end{Bmatrix}} \\sum_{j=1}^{N}\\begin{Bmatrix}\n\\begin{Vmatrix}\n\\blmath{W} \\mathbf{H}_{j} \\blmath{x} - \\mathbf{z}_{j}\n\\end{Vmatrix}_{2}^{2} + \\lambda^2 \\, \\begin{Vmatrix}\n\\mathbf{z}_{j}\n\\end{Vmatrix}_{0}\n\\end{Bmatrix} \\\\\n& \\;\\;\\;\\;\\;\\; \\text{s.t.} \\;\\; \\blmath{W}^{H}\\blmath{W} = \\mathbf{I}. \\label{strollrmrisreg}\n\\end{align}\nHere, the \\add{operator $\\mathcal{M}_{j}$} is a block matching operator that extracts the $j$th patch $\\P_{j} \\blmath{x}$ and the $L-1$ patches most similar to it and forms a matrix, whose columns are the $j$th patch and its matched siblings, ordered by degree of match. This matrix is approximated by a low-rank matrix $\\mathbf{U}_j$ in \\eqref{strollrmrilrreg}, with $\\eta>0$.\nThe vector $\\mathbf{H}_j \\blmath{x}$ is a vectorization of the submatrix that is the first $P$ columns \\add{of $\\mathcal{M}_{j} (\\blmath{x})$}.\nThus the regularizer in \\eqref{strollrmrisreg} learns a higher-dimensional\n\\addnew{transform} (e.g., 3D transform for 2D patches), and jointly sparsifies non-local but similar patches.\n\n\\add{STROLLR-MRI~\\cite{wen2018power} was shown to achieve better CS MRI image quality over several methods including the supervised (deep) learning based ADMM-Net~\\cite{sun2016deep}.}\nFig.~\\ref{fig:tlmri} shows example MRI reconstructions and comparisons.\nSimilar to \\add{UNITE-MRI,}\nthere is an underlying grouping of patches,\nbut STROLLR-MRI exploits block matching and sparsity to implicitly \\add{perform grouping.}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}