diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzelbd" "b/data_all_eng_slimpj/shuffled/split2/finalzzelbd" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzelbd" @@ -0,0 +1,5 @@ +{"text":"\\section*{Abstract.}%\n \\else\n \\small\n \\quotation\n\t\\noindent{\\bfseriesAbstract.}%\n \\fi}\n {\\if@twocolumn\\else\\endquotation\\fi}\n\\makeatother\n\\setcounter{secnumdepth}{2}\n\n\\title{\\Large \\bf Placing Green Bridges Optimally, with a Multivariate~Analysis}\n\\author{Till Fluschnik\\footnote{Supported by DFG, project TORE (NI\/369-18).} \\and Leon Kellerhals}\n\\date{\\small Technische Universit\u00e4t Berlin, Faculty~IV,\\\\ Algorithmics and Computational Complexity, Germany.\\\\\\texttt{\\{till.fluschnik,leon.kellerhals\\}@tu-berlin.de}}\n\n\\begin{document}\n\n\\maketitle\n\n\\begin{abstract}\nWe study the problem of placing wildlife crossings, \nsuch as green bridges,\nover human-made obstacles to challenge habitat fragmentation.\nThe main task\nherein is,\ngiven\na graph describing habitats or routes of wildlife animals and possibilities of building green bridges,\nto find a low-cost placement of green bridges that connects the habitats.\nWe develop different problem models for this task\nand study them from a computational complexity and parameterized algorithmics perspective.\n\n\\medskip\n\\noindent\n\\emph{Keywords.}\nwildlife crossings,\ncomputational complexity,\nNP-hardness,\nparameterized algorithmics,\nconnected subgraphs.\n\\end{abstract}\n\n\\section{Introduction}\n\\label{sec:intro}\n\nSustainability \nis\nan enormous concern \nimpacting today's\npolitics,\neconomy,\nand industry.\nAccordingly,\nsustainability sciences are well-established by now.\nYet,\nthe \\emph{interdisciplinary} scientific field ``computational sustainability''~\\cite{Gomes09a},\nwhich combines practical and theoretical computer science with sustainability sciences,\nis quite young.\nFor instance,\nthe Institute for Computational Sustainability \nat Cornell University was founded in 2008,\nthe 1st International Conference on Computational Sustainability \n(CompSust'09) \ntook place in 2009,\nand\nspecial tracks on\ncomputational sustainability and AI\nwere established\nin 2011 (AAAI) and 2013 (IJCAI).\nThis work\ncontributes to\ncomputational sustainability:\nWe model problems of elaborately placing wildlife crossings\nand give complexity-theoretical and algorithmic analysis for each.\nWildlife crossings are constructions (mostly bridges or tunnels) that allow wildlife animals to safely cross human-made transportation lines (mostly roads). \nWe will refer to wildlife crossings as~\\emph{green bridges}.\n\nHuijser~\\textsl{et~al.}~\\cite{HuijserMHKCSA08}\ngive an extensive report on wildlife-vehicle collisions.\nThey identify several endangered animal species suffering from high road mortality\nand estimate the annual cost associated with wildlife-vehicle collisions with about 8~billion US dollars.\nWildlife fencing with wildlife crossings\ncan reduce collisions by over 80\\%~\\cite{HuijserMHKCSA08},\nenables populations to sustain~\\cite{SawayaKC14},\nand are thereby among the most cost-effective~\\cite{HuijserDCAM09}. 
\nThe implementation,\nthough,\nis an important problem:\n\\begin{quote}\nThe location, type, and dimensions of wildlife\ncrossing structures must be carefully planned with regard to the species and surrounding\nlandscape.\\ \n[...]\\ \nIn addition, different species use different habitats, influencing their\nmovements and where they want to cross the road.\n~\\cite[p.\\;\\!16]{HuijserMHKCSA08}\n\\end{quote}\nIn addition,\nit is pointed out that data about wildlife habitats \nis basic for mitigation plans,\nyet challenging to obtain~\\cite{ZhengLY19}.\nIn this work,\nour main problem is placing green bridges\nat low cost and under several variants of habitat-connectivity requirements,\nthereby inherently modeling different availabilities of data on habitats.\nThe problem is hence the following:\n\\textsl{\nGiven a graph describing habitats of wildlife animals and possibilities of building green bridges,\nfind a low-cost placement of green bridges that sufficiently connects habitats.\n}\nIn particular,\nwe comparatively \nstudy in terms of computational complexity and parameterized algorithmics\nthe following three different (families of) decision problems.%\n\\footnote{The ($d$-th power) graph $G^d$\nof~$G$ \ncontains edge~$\\{v,w\\}$ if and only if~$\\dist_G(v,w)\\leq d$.}\n\\decprobX{$\\Pi$ Green Bridges Placement{} ($\\Pi$ \\prob{GBP}{})}\n{An undirected graph~$G=(V,E)$,\na set~$\\mathcal{H}=\\{V_1,\\dots,V_r\\}$ of habitats where~$V_i\\subseteq V$ for all~$i\\in\\set{r}$, \nand~$k\\in\\mathbb{N}_0$.}\n{Is there an edge set~$F\\subseteq E$ with~$|F|\\leq k$ such that for every~$i\\in\\set{r}$, \nit holds that\n\n\\smallskip\n\\noindent\n{\n\\setlength{\\tabcolsep}{7pt}\n\\begin{tabular}{@{\\hspace{1em}}rlll@{}}\n $\\Pi\\equiv{}$\\prob{$d$-Reach}: & $G[F]^d[V_i]$ is connected? & (\\cref{prob:rgbp}) & (\\cref{sec:rgbp}) \\\\\n $\\Pi\\equiv{}$\\prob{$d$-Closed}: & $G[F]^d[V_i]$ is a clique? & (\\cref{prob:cgbp}) & (\\cref{sec:cgbp})\\\\\n $\\Pi\\equiv{}$\\prob{$d$-Diam(eter)}: & $\\diam(G[F][V_i])\\leq d$? 
& (\\cref{prob:dgbp}) & (\\cref{sec:dgbp})\n\\end{tabular}\n}}\n\n\\noindent\nIn words:\n\\RgbpAcr{} seeks to connect each habitat \nsuch\nthat \nevery patch has some other patch at short distance.\n\\CgbpAcr{} seeks to connect each habitat such that any two habitat's patches are\nat short distance.\nFinally,\n\\DgbpAcr{} seeks to connect each habitat such that the habitat forms a connected component of low diameter.\n\\cref{fig:relationship}\ndepicts a relationship between the problems in terms of \nKarp reductions.\n\\begin{figure}[t]\\centering\n \\begin{tikzpicture}\n \\def1.1{1}\n \\def0.9{1}\n \\node (con) at (0,-1.25*0.9)[]{\\hyperref[prob:congbp]{\\prob{Connect \\GBP}}};\n \\node (r) at (-3*1.1,-1*0.9)[]{\\hyperref[prob:rgbp]{\\prob{Reach \\prob{GBP}{}}}};\n \\node (c) at (3*1.1,-1*0.9)[]{\\hyperref[prob:cgbp]{\\prob{Closed \\prob{GBP}{}}}};\n \n \\newcommandx{\\AredB}[3][1=$\\geq_P$]{\n \\draw[draw=none] (#2) to node[midway,sloped]{#1}(#3);\n }\n \\AredB{con}{r};\n \\AredB[$\\leq_P$]{con}{c};\n \n \\node (r1) at (-4*1.1,-2*0.9)[]{\\RgbpAcr[1]{}};\n \\node (c1) at (4*1.1,-2*0.9)[]{\\CgbpAcr[1]{}};\n \n \\AredB[$\\leq_P$]{r1}{r};\n \\AredB{c1}{c};\n \n \\node (d) at (-1.25*1.1,-1.75*0.9)[]{\\hyperref[prob:dgbp]{\\prob{Diam \\prob{GBP}{}}}};\n \\node (d1) at (1.25*1.1,-2*0.9)[]{\\DgbpAcr[1]{}};\n \n \\AredB[$\\leq_P$]{r1}{d}\n \\AredB{d1}{d}\n \\AredB[$\\equiv_P$]{d1}{c1}\n \\end{tikzpicture} \n \\caption{Polynomial-time many-one reducibility directly derived from problem definitions.}\n \\label{fig:relationship}\n\\end{figure}\n\n\\paragraph{Our contributions.}\nOur results are summarized in \\cref{tab:results}.\n\\begin{table}[t]\n \\caption{Overview of our results. \n \\cocl{NP}-c., P, and K \n stand for \n \\cocl{NP}-complete,\n ``polynomial-size'',\n and ``problem kernel'',\n respectively.\n \\tss{a}(even on planar graphs or if~$\\Delta=4$)\n \\tss{b}(even on bipartite graphs with~$\\Delta=4$ or graphs of diameter four.)\n \\tss{c}(even if~$r=1$ or if~$r=2$ and~$\\Delta=4$)\n \\tss{d}(even on bipartite graphs of diameter three and~$r=1$, \n \\emph{but} linear-time solvable when~$r+\\Delta$ is constant)\n \\tss{e}(admits a linear-size problem kernel if~$\\Delta$ is constant)\n \\tss{f}(linear-time solvable when~$r+\\Delta$ is constant) \n \\tss{$\\dagger$}(no polynomial problem kernel unless~\\ensuremath{\\NP\\subseteq \\coNP\/\\poly}, but an~$\\O(k^3)$-vertex kernel on planar graphs)\n }\n \\label{tab:results}\n \\centering\n \\setlength{\\tabcolsep}{3.25pt}\n \\begin{tabular}{@{}p{0.12\\textwidth}l|p{0.13\\textwidth}|p{0.17\\textwidth}p{0.11\\textwidth}p{0.19\\textwidth}|p{0.1\\textwidth}@{}}\\toprule\n Problem & & Comput. & \\multicolumn{3}{p{0.46\\textwidth}|}{\n Parameterized Algorithmics} & Ref.\\\\\n ($\\Pi$~\\prob{GBP}) & & Complex. & $k$ & $r$ & $k+r$ &\n \\\\ \\midrule\\midrule\n \\multirow{3}{*}{\\makecell{\\prob{$d$-Reach} \\\\ \\tref{sec:rgbp}}} & $d=1$ & \\cocl{NP}-c.~\\tss{a} & $O(k4^{k})$ K & \\emph{open} & $O(rk+k^2)$ PK & \\tref{ssec:1rgbp}\\\\\n & $d=2$ & \\cocl{NP}-c.\\tss{b} & $O(2^{4k})$ K\\tss{$\\dagger$} & p-\\cocl{NP}-h. \\tss{c} & FPT\\tss{$\\dagger$} & \\tref{ssec:2rgbp} \\\\\n & $d\\geq 3$ & \\cocl{NP}-c. & \\cocl{XP}, \\W{1}-h. & p-\\cocl{NP}-h. & \\cocl{XP}, \\W{1}-h. & \\tref{ssec:3rgbp}\\\\\n \\midrule\n \\multirow{3}{*}{\\makecell{\\prob{$d$-Closed} \\\\ \\tref{sec:cgbp}}} & $d=1$ & Lin.~time & --- & --- & --- & \\tref{sec:cgbp} \\\\\n & $d=2$ & \\cocl{NP}-c. 
\\tss{d} & $O(2^{4k})$ K\\tss{$\\dagger$} & p-\\cocl{NP}-h.\\tss{e} & FPT\\tss{$\\dagger$} & \\tref{ssec:2cgbp}\\\\\n & $d\\geq 3$ & \\cocl{NP}-c. & \\cocl{XP}, \\W{1}-h. & p-\\cocl{NP}-h.\\tss{e} & \\cocl{XP}, \\W{1}-h. & \\tref{ssec:3cgbp}\\\\\n \\midrule\n \\multirow{3}{*}{\\makecell{\\prob{$d$-Diam} \\\\ \\tref{sec:dgbp}}} & $d=1$ & Lin.~time & --- & --- & --- & \\tref{sec:dgbp}\\\\\n & $d=2$ & \\cocl{NP}-c. \\tss{f} & $2k$-vertex K & p-\\cocl{NP}-h. & $O(rk+k^2)$ PK & \\tref{ssec:2dgbp}\\\\\n & $d\\geq 3$ & \\cocl{NP}-c. & $2k$-vertex K & p-\\cocl{NP}-h. & $O(rk+k^2)$ PK & \\tref{ssec:3dgbp}\n \\\\\\arrayrulecolor{black} \\bottomrule\n \\end{tabular}\n\\end{table}\nWe settle the classic complexity and parameterized complexity (regarding the number~$k$ of green bridges and the number~$r$ of habitats)\nof the three problems.\nWhile~\\RgbpAcr{} is \n(surprisingly) \nalready \\cocl{NP}-hard for~$d=1$ on planar or maximum degree~$\\Delta=4$ graphs,\n\\CgbpAcr{} and \\DgbpAcr{} become~\\cocl{NP}-hard for~$d\\geq 2$,\nbut admit an $(r+\\Delta)^{\\O(1)}$-sized problem kernel \nand thus are linear time solvable if~$r+\\Delta$ is constant.\nExcept for~\\RgbpAcr[1]{},\nwe proved all variants to be para-\\cocl{NP}-hard regarding~$r$.\n\\RgbpAcr{} and \\CgbpAcr{}\nare fixed-parameter tractable regarding~$k$ when~$d\\leq 2$,\nbut become~\\W{1}-hard (yet~\\cocl{XP}) regarding~$k$ and~$k+r$ when~$d>2$.\nAdditionally,\nwe prove that\n\\RgbpAcr{} admits an $rd$-approximation in~$\\O(mn+rnd)$ time.\n\n\\paragraph{Further related work.}\nOur problems deal with\nfinding (small) spanning connected subgraphs\nobeying some (connectivity) constraints,\nand thus can be seen as \nnetwork design problems~\\cite{KerivinM05}.\nMost related to our problems are \nSteiner multigraph problems~\\cite{RicheyP86}\nwith an algorithmic study~\\cite{Gassner10,LaiGSMCM11}\n(also in the context of wildlife corridor construction).\nRequiring small diameter \nappears also in the context of spanning trees~\\cite{RaviSMRR96}\nand Steiner forests~\\cite{DingQ20}.\nA weighted version of\n\\DgbpAcr[4]{}\nis proven to be \\cocl{NP}-hard with two different weights~\\cite{Plesnik81}.\nAs to wildlife crossing placement,\nmodels and approaches different to ours are studied~\\cite{LoraammD16,DownsHLAKO14}.\n\n\\decprob{\\prob{Connected \\gbp}{} (\\prob{Connect \\GBP}{})}{congbp}\n{An undirected graph~$G=(V,E)$,\na set~$\\mathcal{H}=\\{V_1,\\dots,V_r\\}$ of habitats \nwhere~$V_i\\subseteq V$ for all~$i\\in\\set{r}$, \nand an integer~$k\\in\\mathbb{N}_0$.}\n{Is there a subset~$F\\subseteq E$ with~$|F|\\leq k$ such that\nfor every~$i\\in\\set{r}$\nit holds that in~$G[F]$ exists a connected component containing~$V_i$?\n}\n\n\\noindent\n\\prob{Connect \\GBP}{}\nwith edge costs\nis also known as \\prob{Steiner Forest}~\\cite{Gassner10}\nand generalizes the well-known \\cocl{NP}-hard \\prob{Steiner Tree} problem. 
\nGassner~\\cite{Gassner10} proved \\prob{Steiner Forest} to be \\cocl{NP}-hard \neven if every so-called terminal net contains two vertices,\nif the graph is planar and has treewidth three,\nand if there are two different edge costs,\neach being upper-bounded linearly in the instance size.\nIt follows that \\prob{Connect \\GBP}{} is also \\cocl{NP}-hard in this case.\nBateni~\\textsl{et~al.}~\\cite{BateniHM11} proved that \\prob{Steiner Forest} is polynomial-time solvable on treewidth-two graphs \nand admits approximation schemes on planar and bounded-treewidth graphs.\n\nFrom a modeling perspective,\n\\prob{Connect \\GBP}{} allows solutions in which a habitat\nis scattered and \none of its patches is far away from all the others;\nthus\nanimals may need to take long walks through areas outside of their habitats.\nWith our models we avoid solutions with this property.\n\n\\section{Preliminaries}\n\\label{sec:prelims}\n\nLet~$\\mathbb{N}$ and~$\\mathbb{N}_0$ be the natural numbers with and without zero,\nrespectively.\nWe use basic definitions from graph theory~\\cite{Diestel} and\nparameterized algorithmics~\\cite{cygan2015parameterized}.\n\n\\paragraph*{Graph Theory.}\n\nLet~$G=(V,E)$ be an undirected graph with vertex set~$V$ and edge set~$E\\subseteq \\binom{V}{2}$.\nWe also denote by~$V(G)$ and~$E(G)$ the vertices and edges of~$G$,\nrespectively.\nFor~$F\\subseteq E$ let~$V(F)\\ensuremath{\\coloneqq} \\{v\\in V\\mid \\exists e\\in F :\\: v\\in e\\}$ and\n$G[F]\\ensuremath{\\coloneqq} (V(F),F)$.\nA path~$P$ is a graph with~$V(P) \\ensuremath{\\coloneqq} \\{v_1, \\ldots, v_n\\}$ and~$E(P) \\ensuremath{\\coloneqq} \\{\\{v_i,v_{i+1}\\} \\mid 1 \\le i < n \\}$.\nThe length of the path~$P$ is $|E(P)|$.\nThe distance~$\\dist_G(v,w)$ between vertices~$v,w \\in V(G)$ is the length of a shortest path between~$v$ and~$w$ in~$G$.\nThe diameter~$\\diam(G)$ is the length of a longest shortest path over all vertex pairs.\nFor~$p\\in \\mathbb{N}$,\nthe graph~$G^p$ is the $p$-th power of~$G$ \ncontaining the vertex set~$V$ and edge set~$\\{\\{v,w\\}\\in \\binom{V}{2}\\mid \\dist_G(v,w)\\leq p\\}$.\nLet~$N_G(v)\\ensuremath{\\coloneqq} \\{w\\in V\\mid \\{v,w\\}\\in E\\}$ be the (open) neighborhood of~$v$,\nand $N_G[v]\\ensuremath{\\coloneqq} N_G(v)\\cup\\{v\\}$ be the closed neighborhood of~$v$.\nFor~$p\\in\\mathbb{N}$,\nlet~$N_G^p(v)\\ensuremath{\\coloneqq} \\{w\\in V\\mid \\{v,w\\}\\in E(G^p)\\}$ be the (open) $p$-neighborhood of~$v$,\nand $N_G^p[v]\\ensuremath{\\coloneqq} N_G^p(v)\\cup\\{v\\}$ be the closed $p$-neighborhood of~$v$.\nTwo vertices~$v,w\\in V$ are called twins if~$N_G(v)=N_G(w)$.\nThe (vertex) degree~$\\deg_G(v)\\ensuremath{\\coloneqq} |N_G(v)|$ of~$v$ is the number of its neighbors.\nThe maximum degree~$\\Delta(G)\\ensuremath{\\coloneqq} \\max_{v\\in V}\\deg_G(v)$ is the maximum over all (vertex) degrees.\n\n\\section{Connecting Habitats with a Patch at Short Reach}\n\\label{sec:rgbp}\n\nThe following problem ensures that any habitat patch can reach the other patches via patches of the same habitat and short strolls over ``foreign'' ground.\n\n\\decprob{\\RgbpTsc{} (\\RgbpAcr{})}{rgbp}\n{An undirected graph~$G=(V,E)$,\na set~$\\mathcal{H}=\\{V_1,\\dots,V_r\\}$ of habitats where~$V_i\\subseteq V$ for all~$i\\in\\set{r}$, \nand an integer~$k\\in\\mathbb{N}_0$.}\n{Is there a subset~$F\\subseteq E$ with~$|F|\\leq k$ such that\nfor every~$i\\in\\set{r}$\nit holds that~$G[F]^d[V_i]$ is connected?\n}\n\n\\noindent\nNote that if~$d$ is part of the input,\nthen~$\\prob{Connect \\GBP}{} \\leq \\prob{Reach \\prob{GBP}{}}$.
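\n\nTo make this connectivity requirement concrete,\nthe following Python sketch checks whether a given set~$F$ of green bridges is feasible for \\RgbpAcr{},\nthat is,\nwhether~$G[F]^d[V_i]$ is connected for every habitat~$V_i$.\nIt is meant as an illustration only and is not part of our formal results;\nthe adjacency-set graph representation and all identifiers are ours,\nand we follow the reading that every patch of a habitat with at least two patches must be incident to some chosen green bridge.\n\\begin{verbatim}\nfrom collections import deque\n\ndef distances_up_to(adj, source, limit):\n    # breadth-first search from source in the graph given by adj,\n    # truncated at distance limit\n    dist = {source: 0}\n    queue = deque([source])\n    while queue:\n        v = queue.popleft()\n        if dist[v] == limit:\n            continue\n        for w in adj.get(v, ()):\n            if w not in dist:\n                dist[w] = dist[v] + 1\n                queue.append(w)\n    return dist\n\ndef is_feasible_reach(F, habitats, d):\n    # F: iterable of green bridges, each a pair (u, v) of vertices\n    # habitats: list of vertex sets V_1, ..., V_r\n    adj = {}\n    for u, v in F:                      # build G[F] = (V(F), F)\n        adj.setdefault(u, set()).add(v)\n        adj.setdefault(v, set()).add(u)\n    for habitat in habitats:\n        patches = set(habitat)\n        if len(patches) <= 1:\n            continue                    # singleton habitats are trivially satisfied\n        if not patches.issubset(adj):\n            return False                # some patch touches no chosen green bridge\n        # edges of the d-th power of G[F], restricted to the habitat\n        power = {}\n        for v in patches:\n            reached = distances_up_to(adj, v, d)\n            power[v] = {w for w in patches if w != v and w in reached}\n        # check connectivity of G[F]^d[V_i] by a depth-first search\n        seen, stack = set(), [next(iter(patches))]\n        while stack:\n            v = stack.pop()\n            if v not in seen:\n                seen.add(v)\n                stack.extend(power[v] - seen)\n        if seen != patches:\n            return False\n    return True\n\\end{verbatim}\nThe same skeleton,\nwith the final connectivity check replaced by a completeness or a diameter check,\nwould verify feasibility for \\CgbpAcr{} and \\DgbpAcr{},\nrespectively.\n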
\nWe will prove the following.\n\n\\begin{theorem}\n\\label{thm:rgbp}\n \\RgbpAcr{} is\n \\begin{compactenum}[(i)]\n \\item if~$d=1$, \\cocl{NP}-hard even on planar graphs or graphs with maximum degree four;\n \\item if~$d=2$, \\cocl{NP}-hard even on graphs with maximum degree four and~$r=2$ or graphs with diameter four and~$r=1$, and in~\\cocl{FPT}{} regarding~$k$;\n \\item if~$d\\geq 3$, \\cocl{NP}-hard and~\\W{1}-hard regarding~$k+r$.\n \\end{compactenum}\n Moreover, \\RgbpAcr{} admits an~$rd$-approximation of the minimum number of green bridges in~$\\O(mn+rnd)$ time.\n\\end{theorem}\n\n\\subsection{Approximation}\nThe approximation algorithm computes for every habitat~$V_i$ a spanning tree in~$G^d[V_i]$,\nand adds the edges of the corresponding paths to the solution~$F$.\nEach of the spanning trees then is a~$d$-approximation for just the one habitat,\nhence the union of the spanning trees is an~$rd$-approximation for all habitats.\n\n\\begin{lemma}\n \\label{lem:d-approx}\n For~$r=1$,\n \\RgbpAcr{} admits a~$d$-approximation \n of the minimum number of green bridges\n in~$\\O(mn)$ time.\n\\end{lemma}\n\n\\begin{proof}\n We start off by computing in~$\\O(mn)$ time the graph~$H \\ensuremath{\\coloneqq} G^d$ as well as for every edge~$e = \\{u, v\\} \\in E(H)$ the corresponding path~$P_e$ from~$u$ to~$v$ of length at most~$d$ in~$G$.\n If~$H[V_1]$ is not connected, then return~\\textnormal{\\texttt{no}}{}.\n Otherwise,\n compute a minimum spanning tree~$T \\subseteq H[V_1]$ in~$\\O(n \\log n)$ time.\n For each edge~$e = \\{u, v\\} \\in E(T)$ compute in~$\\O(m)$ time the corresponding path~$P_e \\subseteq G$ from~$u$ to~$v$ of length at most~$d$.\n Finally, return the set~$F \\ensuremath{\\coloneqq} \\bigcup_{e \\in E(T)} E(P_e)$, computable in~$\\O(m)$ time.\n Clearly, $G[F]^d[V_1]$ is connected.\n As a minimum solution~$F^*$ has at least~$|V_1|-1$ edges,\n and each of the paths~$P_e$ consists of at most~$d$ edges,\n \\[\n |F| = |\\bigcup_{e\\in E(T)} E(P_e)| \\le \\sum_{e \\in E(T)} |E(P_e)| \\le (|V_1|-1)\\cdot d \\le d|F^*|.\\qedhere\n \\]\n\\end{proof}\n\nWith the~$d$-approximation algorithm for a habitat at hand, we present our~$rd$-approximation algorithm.\n\n\\begin{proposition}\n\t\\RgbpAcr{} admits an~$rd$-approximation \n\tof the minimum number of green bridges \n\tin $\\O(mn + rnd)$ time.\n\\end{proposition}\n\\begin{proof}\n\tWe initially compute the shortest paths between all vertex pairs in~$G$ in~$\\O(mn)$ time.\n\tWe obtain the graph~$H \\ensuremath{\\coloneqq} G^d$ as a byproduct.\n\tIf for some~$i \\in \\set{r}$, $H[V_i]$ is not connected,\n\tthen return~\\textnormal{\\texttt{no}}{}.\n\tOtherwise,\n\tcompute for each~$i \\in \\set{r}$ a spanning tree~$T_i$ of~$H[V_i]$.\n\tLet~$F_i \\subseteq E(G)$ be the edge set corresponding to~$T_i$ as in the proof of \\cref{lem:d-approx}.\n\tAs~$G[F_i]^d[V_i]$ is connected,\n\t$F \\ensuremath{\\coloneqq} \\bigcup_{i=1}^r F_i$ is a solution.\n\n\tNote that each of the~$r$ spanning trees~$T_i$ contains at most~$n$ edges,\n\tand for each of these edges~$e \\in F_i$ we can determine the corresponding path~$P_e \\subseteq G$\n\tof length at most~$d$ in~$\\O(d)$ time.\n\tWe obtain an overall running time of~$\\O(mn + rnd)$.\n\n\tAs for the approximation ratio, let~$F^*$ be a minimum solution, and for every~$i \\in \\set{r}$ let~$F^*_i \\subseteq E(G)$ be a minimum-size edge set such that~$G[F^*_i]^d[V_i]$ is connected.\n\tAs~$|F^*| \\ge \\max_{i\\in\\set{r}} 
|F^*_i|$,\n\twe have\n\t\\[\n\t\t|F| \\le \\sum_{i=1}^r |F_i| \\le \\sum_{i=1}^r d|F^*_i| \\le r \\cdot d|F^*|. \\qedhere\n\t\\]\n\\end{proof}\n\n\\subsection{When a next habitat is directly reachable (\\texorpdfstring{\\boldmath{$d=1$}}{d=1})}\n\\label{ssec:1rgbp}\n\nSetting~$d=1$ may reflect perfect knowledge about the habitats.\nIn this case,\nwe want that in~$G[F]$, \neach habitat~$V_i$ forms a connected component.\n\n\\begin{proposition}\n \\label{prop:1rgbp}\n \\RgbpAcr[1]{} is \\cocl{NP}-hard even on graphs of maximum degree four\n and on series-parallel graphs.\n\\end{proposition}\nWe remark that series-parallel graphs are also planar.\nWe first present the construction for showing hardness for graphs of degree at most four.\n\n\\begin{construction}\n \\label{constr:1rgbp}\n For an instance~$\\mathcal{I}=(G, k)$ of \\prob{3-Regular Vertex Cover} with~$G=(V, E)$,\n $V=\\set{n}$,\n and~$E=\\{e_1,\\dots,e_m\\}$,\n construct an instance~$\\mathcal{I}'\\ensuremath{\\coloneqq} (G',\\mathcal{H},k')$\n where\n $\\mathcal{H}\\ensuremath{\\coloneqq} \\mathcal{V}\\cup\\mathcal{W}\\cup\\mathcal{Z}$, \n $\\mathcal{V}\\ensuremath{\\coloneqq} \\{V_1,\\dots,V_n\\}$,\n $\\mathcal{W}\\ensuremath{\\coloneqq} \\{W_1,\\dots,W_n\\}$,\n $\\mathcal{Z}\\ensuremath{\\coloneqq} \\{Z_1,\\dots,Z_m\\}$,\n and~$k'\\ensuremath{\\coloneqq} 4m+k$, as follows\n (see~\\cref{fig:1rgbp} for an illustration).\n \\begin{figure}[t]\n \\centering\n \\begin{tikzpicture}\n \\contourlength{0.09em}\n\n \\def1.1{1}\n \\def0.9{1}\n \\def0.75{0.75}\n \\def1.5*\\yr{1}\n \\def25{15}\n \\def155{165}\n \\tikzstyle{xnode}=[circle,fill,scale=0.6,draw,color=cyan!50!green]\n \\tikzstyle{xnodex}=[label={[xshift=-0.4175*1.1 cm]0:$\\cdots$},color=cyan!50!green]\n \\tikzstyle{xnodef}=[circle,scale=0.6,draw,color=cyan!50!green,fill=white]\n \\tikzstyle{xedge}=[thick,-,color=cyan!50!green]\n \\tikzstyle{xedgem}=[,-,color=cyan!50!green]\n \\tikzstyle{xedgef}=[very thick,-,color=cyan!50!green]\n \n \\foreach \\x\/\\y in {1\/xnode,2\/xnodex,3\/xnode,4\/xnodex,5\/xnode,6\/xnodex,7\/xnode}{\n \\node (a\\x) at (\\x*0.75*1.1-5*1.1,0)[\\y]{};\n \\node (b\\x) at (\\x*0.75*1.1-5*1.1,-0.9*1.5*\\yr)[\\y]{};\n }\n \\draw[xedge,white] (a1) to node[midway]{\\color{cyan!50!green}$e_1$}(b1);\n \\draw[xedge,white] (a3) to node[midway]{\\color{cyan!50!green}$e_s$}(b3);\n \\draw[xedge,white] (a5) to node[midway]{\\color{cyan!50!green}$e_t$}(b5);\n \\draw[xedge,white] (a7) to node[midway]{\\color{cyan!50!green}$e_m$}(b7);\n\n \\foreach \\x\/\\y in {1\/xnodef,2\/xnodex,3\/xnodef,4\/xnodex,5\/xnodef,6\/xnodex,7\/xnodef}{\n \\node (x\\x) at (9*0.75*1.1+\\x*0.75*1.1-5*1.1,0)[\\y]{};\n \\node (y\\x) at (9*0.75*1.1+\\x*0.75*1.1-5*1.1,-0.9*1.5*\\yr)[\\y]{};\n \\draw[xedge] (x\\x) to (y\\x);\n }\n \\draw[xedge] (x1) to node[midway,left]{$1$}(y1);\n \\draw[xedge] (x3) to node[midway,left]{$i$}(y3);\n \\draw[xedge] (x5) to node[midway,left]{$j$}(y5);\n \\draw[xedge] (x7) to node[midway,left]{$n$}(y7);\n\n \\foreach \\x in {2,4,6}{\n \\node at (a\\x.north east)[anchor=south,rotate=25]{$\\cdots$};\n \\node at (b\\x.south east)[anchor=north,rotate=-25]{$\\cdots$};\n }\n \\foreach \\x in {4}{\n \\node at (x\\x.north west)[anchor=south,rotate=-25]{$\\cdots$};\n \\node at (y\\x.south west)[anchor=north,rotate=25]{$\\cdots$};\n }\n \\foreach \\x\/\\y in {1\/7,1\/6,7\/2,7\/4}{\n \\draw[xedgem,color=red!50!black] (a\\x) to [out=25,in=155](x\\y);\n \\draw[xedgem,color=red!50!black] (b\\x) to [out=-25,in=-155](y\\y);\n }\n \\foreach \\x\/\\y in {3\/3,3\/5,5\/1,5\/3}{\n 
\\draw[xedgef,color=red!50!black] (a\\x) to [out=25,in=155](x\\y);\n \\draw[xedgef,color=red!50!black] (b\\x) to [out=-25,in=-155](y\\y);\n }\n \n \\newcommand{\\ltb}[3]{\n \\contourlength{0.09em}\n \\node at (#1)[label={[align=center,font=\\scriptsize,color=black]\\contour*{white}{#2}},label={[align=center,font=\\scriptsize,color=black]-90:\\contour*{white}{#3}}]{};\n }\n \\ltb{a5}{~~~$\\in V_1,V_i,Z_t$}{};\n \\ltb{b5}{}{~~~$\\in W_1,W_i,Z_t$}{};\n \n \\ltb{a3}{$\\in V_i,V_j,Z_s$~~~}{};\n \\ltb{b3}{}{$\\in W_i,W_j,Z_s$~~~}{};\n \n \\ltb{x3}{$\\in V_i,Z_s,Z_t$~~~}{};\n \\ltb{y3}{}{$\\in W_i,Z_s,Z_t$~~~}{};\n \\ltb{x5}{$\\in V_j,Z_s$}{};\n \\ltb{y5}{}{$\\in W_j,Z_s$}{};\n \n \\end{tikzpicture}\n \\caption{Illustration to~\\cref{constr:1rgbp} for~\\RgbpAcr[1]{}. \n Here,\n e.g.,~$e_s=\\{i,j\\}$ and~$e_t=\\{1,i\\}$.\n Every solution (if any exists) contains all red-colored edges~(\\Cref{obs:1rgbp}).}\n \\label{fig:1rgbp}\n \\end{figure}\n Construct vertex sets~$V_E\\ensuremath{\\coloneqq} \\{x_i,y_i\\mid e_i\\in E\\}$ \n and~$V_G\\ensuremath{\\coloneqq} \\{v_i,w_i\\mid i\\in V\\}$.\n Next,\n construct edge sets~$E^*\\ensuremath{\\coloneqq} \\bigcup_{i\\in V} \\{\\{v_i,x_j\\},\\{w_i,y_j\\}\\mid i\\in e_j\\}$\n and~$E'\\ensuremath{\\coloneqq} \\{\\{v_i,w_i\\}\\mid i\\in V\\}\\cup E^*$.\n Finally, \n construct habitats~$V_i\\ensuremath{\\coloneqq} \\{v_i\\}\\cup\\bigcup_{i\\in e_j} \\{x_j\\}$ and\n $W_i\\ensuremath{\\coloneqq} \\{w_i\\}\\cup\\bigcup_{i\\in e_j} \\{y_j\\}$ \n for every~$i\\in\\set{n}$,\n and~$Z_j\\ensuremath{\\coloneqq} \\{x_j,y_j\\}\\cup \\bigcup_{i\\in e_j} \\{v_i,w_i\\}$ for every~$j\\in\\set{m}$.\n\\end{construction}\n\n\\begin{observation}\n \\label{obs:1rgbp}\n Let~$\\mathcal{I}'$ be a \\textnormal{\\texttt{yes}}-instance.\n Then\n every solution~$F$ contains all edges in~$E^*$.\n\\end{observation}\n\n\\begin{proof}\n Observe that by construction,\n for every~$S\\in\\mathcal{V}\\cup\\mathcal{W}$,\n $G'[S]$ is a star with its center in~$V_G$.\n Hence,\n all edges in~$G'[S]$ must be contained in every solution.\n Since~$E^* = \\bigcup_{S\\in\\mathcal{V}\\cup\\mathcal{W}} E(G'[S])$,\n the claim follows.\n\\end{proof}\n\n\\begin{lemma}\n \\label{lem:1rgbp}\n Let~$\\mathcal{I}'$ be the instance obtained from an instance~$\\mathcal{I}$ using~\\cref{constr:1rgbp}.\n Then,\n $\\mathcal{I}$ is a \\textnormal{\\texttt{yes}}-instance if and only if~$\\mathcal{I}'$ is a \\textnormal{\\texttt{yes}}-instance.\n\\end{lemma}\n\n\\begin{proof}\n $(\\Rightarrow)\\quad${}\n Let~$S\\subseteq V$ be a vertex cover of~$G$ of size~$k$.\n We claim that~$F\\ensuremath{\\coloneqq} E^*\\cup \\bigcup_{i\\in S}\\{\\{v_i,w_i\\}\\}$\n is a solution to~$\\mathcal{I}'$.\n Note that~$|F|=4m+k$.\n Observe that~$G'[F][T]$ is connected for every~$T\\in\\mathcal{V}\\cup\\mathcal{W}$.\n Suppose that there is~$Z_\\ell$ such that~$G'[F][Z_\\ell]$\n is not connected.\n Let~$e_\\ell=\\{i,j\\}$.\n Since~$E^*\\subseteq F$,\n neither~$\\{v_i,w_i\\}$ nor~$\\{v_j,w_j\\}$ is contained in~$F$.\n It follows that~$\\{i,j\\}\\cap S=\\emptyset$,\n contradicting the fact that~$S$ is a vertex cover.\n \n $(\\Leftarrow)\\quad${}\n Let~$F$ be a solution to~$\\mathcal{I}'$.\n We know that~$E^*\\subseteq F$ due to~\\cref{obs:1rgbp}.\n We claim that~$S\\ensuremath{\\coloneqq} \\{i\\in V\\mid \\{v_i,w_i\\}\\in F\\}$\n is a vertex cover of~$G$.\n Note that~$|S|\\leq k$ since~$|E^*|=4m$ and~$|F|\\leq 4m+k$.\n Suppose not,\n that is,\n there is an~$e_\\ell=\\{i,j\\}$ with~$\\{i,j\\}\\cap S=\\emptyset$.\n Then,\n $G'[F][Z_\\ell]$ is not connected,\n a contradiction.\n\\end{proof}\n\nThe construction for showing 
hardness for series-parallel graphs is very similar:\nWe replace the edges in~$E^*$ by two stars.\n\n\\begin{construction}\n \\label{constr:1rgbp-planar}\n For an instance~$\\mathcal{I}=(G,k)$ of~\\prob{3-Regular Vertex Cover}\n with~$G=(V,E)$, and~$V=\\set{n}$ and~$E=\\{e_1,\\dots,e_m\\}$,\n construct an instance~$\\mathcal{I}'\\ensuremath{\\coloneqq} (G',\\mathcal{H},k')$ with\n $\\mathcal{H}=\\{S, T, Z_1,\\dots,Z_m\\}$\n and~$k'\\ensuremath{\\coloneqq} 2m+2n+k$ as follows\n (see~\\cref{fig:1rgbp} for an illustration).\n\\begin{figure}[t]\n \\centering\n \\begin{tikzpicture}\n \\contourlength{0.09em}\n\n \\def1.1{1}\n \\def0.9{1}\n \\def0.75{0.75}\n \\def1.5*\\yr{1}\n \\def25{25}\n \\def155{155}\n \\tikzstyle{xnode}=[circle,fill,scale=0.6,draw,color=cyan!50!green]\n \\tikzstyle{xnodex}=[label={[xshift=-0.55*1.1 cm]0:$\\cdots$},color=cyan!50!green]\n \\tikzstyle{xnodef}=[circle,scale=0.6,draw,color=cyan!50!green,fill=white]\n \\tikzstyle{xedge}=[thick,-,color=cyan!50!green]\n\n \\foreach \\x\/\\y in {1\/xnode,2\/xnodex,3\/xnode,4\/xnodex,5\/xnode,6\/xnodex,7\/xnode}{\n \\node (a\\x) at (\\x*0.75*1.1,0)[\\y]{};\n \\node (b\\x) at (\\x*0.75*1.1,-0.9*1.5*\\yr)[\\y]{};\n }\n \\draw[xedge,white] (a1) to node[midway]{\\color{cyan!50!green}$e_1$}(b1);\n \\draw[xedge,white] (a3) to node[midway]{\\color{cyan!50!green}$e_p$}(b3);\n \\draw[xedge,white] (a5) to node[midway]{\\color{cyan!50!green}$e_q$}(b5);\n \\draw[xedge,white] (a7) to node[midway]{\\color{cyan!50!green}$e_m$}(b7);\n\n \\foreach \\x\/\\y in {1\/xnodef,2\/xnodex,3\/xnodef,4\/xnodex,5\/xnodef,6\/xnodex,7\/xnodef}{\n \\node (x\\x) at (9*0.75*1.1+\\x*0.75*1.1,0)[\\y]{};\n \\node (y\\x) at (9*0.75*1.1+\\x*0.75*1.1,-0.9*1.5*\\yr)[\\y]{};\n \\draw[xedge] (x\\x) to (y\\x);\n }\n \\draw[xedge] (x1) to node[midway,left]{$1$}(y1);\n \\draw[xedge] (x3) to node[midway,left]{$i$}(y3);\n \\draw[xedge] (x5) to node[midway,left]{$j$}(y5);\n \\draw[xedge] (x7) to node[midway,left]{$n$}(y7);\n\n \n \\foreach \\x in {2,4,6}{\n \\node at (a\\x.north east)[anchor=south,rotate=25]{$\\cdots$};\n \\node at (b\\x.south east)[anchor=north,rotate=-25]{$\\cdots$};\n }\n \\foreach \\x in {4}{\n \\node at (x\\x.north west)[anchor=south,rotate=-25]{$\\cdots$};\n \\node at (y\\x.south west)[anchor=north,rotate=25]{$\\cdots$};\n }\n \\node (a) at (8.5*0.75*1.1,1*0.9)[xnode]{};\n \\node (b) at (8.5*0.75*1.1,-2*0.9)[xnode]{};\n \\foreach \\x\/\\y in {1,...,7}{\n \\draw[xedge,color=red!50!black] (a\\x) to (a);\n \\draw[xedge,color=red!50!black] (x\\x) to (a);\n \\draw[xedge,color=red!50!black] (b\\x) to (b);\n \\draw[xedge,color=red!50!black] (y\\x) to (b);\n }\n \n \\newcommand{\\ltb}[3]{\n \\node at (#1)[label={[align=center,font=\\scriptsize,color=black]#2},label={[align=center,font=\\scriptsize,color=black]-90:#3}]{};\n }\n \\ltb{a3}{\\contour*{white}{$\\in Z_p,S$~~~}}{};\n \\ltb{b3}{}{\\contour*{white}{$\\in Z_p,T$~~~}}{};\n \n \\ltb{a5}{\\contour*{white}{~~~$\\in Z_q,S$}}{};\n \\ltb{b5}{}{\\contour*{white}{~~~$\\in Z_q,T$}}{};\n \n \\ltb{x3}{\\contour*{white}{$\\in Z_p,Z_q,S$}}{};\n \\ltb{y3}{}{\\contour*{white}{$\\in Z_p,Z_q,T$}}{};\n \\ltb{x5}{\\contour*{white}{$\\in Z_q,S$}}{};\n \\ltb{y5}{}{\\contour*{white}{$\\in Z_q,T$}}{};\n \n \\ltb{a}{\\contour*{white}{$\\in S,Z_1,\\dots,Z_m$}}{$s$};\n \\ltb{b}{$t$}{\\contour*{white}{$\\in T,Z_1,\\dots,Z_m$}}{};\n \n \\end{tikzpicture}\n \\label{fig:1rgbp-planar}\n \\caption{Illustration to~\\cref{constr:1rgbp-planar} for \\RgbpAcr[1]{} on planar series-parallel graphs. 
\n    In this example,\n    there are e.g.~$e_p=\\{1,i\\}$ and~$e_q=\\{i,j\\}$.\n    In case of a \\textnormal{\\texttt{yes}}-instance,\n    the red-colored edges are in every solution~(\\cref{obs:1rgbp-planar}).\n    ($k'=2n+2m+k$)}\n \\end{figure}\n \n Add to~$G'$ the vertex sets~$V_E\\ensuremath{\\coloneqq} \\{x_j,y_j\\mid e_j\\in E\\}$,\n $V_G\\ensuremath{\\coloneqq} \\{v_i,w_i\\mid i\\in V\\}$,\n and~$V_C\\ensuremath{\\coloneqq} \\{s, t\\}$,\n and the edge sets\n $E_V\\ensuremath{\\coloneqq} \\{\\{v_i,w_i\\}\\mid i\\in V\\}$\n and~$E^*\\ensuremath{\\coloneqq} \\{\\{s, v_i\\}, \\{t, w_i\\} \\mid i \\in V\\} \\cup \\{\\{s, x_j\\}, \\{t, y_j\\} \\mid e_j \\in E\\}$.\n Finally,\n let~$S\\ensuremath{\\coloneqq} \\{s\\}\\cup\\bigcup_{i \\in V} \\{v_i\\} \\cup \\bigcup_{e_j\\in E} \\{x_j\\}$,\n let~$T\\ensuremath{\\coloneqq} \\{t\\}\\cup\\bigcup_{i \\in V} \\{w_i\\} \\cup \\bigcup_{e_j\\in E} \\{y_j\\}$,\n and for~$e_j \\in E$ let~$Z_j\\ensuremath{\\coloneqq}\\{x_j,y_j\\}\\cup\\bigcup_{i\\in e_j}\\{v_i, w_i\\}\\cup\\{s,t\\}$.\n\\end{construction}\n\n\\begin{observation}\n\tThe graph~$G'$ constructed in \\cref{constr:1rgbp-planar} is planar and series-parallel.\n\\end{observation}\n\n\\begin{observation}\n \\label{obs:1rgbp-planar}\n Let~$\\mathcal{I}'$ be a \\textnormal{\\texttt{yes}}-instance.\n Then\n every solution~$F$ contains all edges in~$E^*$.\n\\end{observation}\n\n\\begin{proof}\n Observe that,\n by construction,\n $G'[S]$ is a star with center~$s$\n and~$G'[T]$ is a star with center~$t$.\n Hence,\n all edges in~$G'[S]$ and in~$G'[T]$ are contained in every solution.\n Since~$E^* = E(G'[S]) \\cup E(G'[T])$,\n the claim follows.\n\\end{proof}\n\n\n\\begin{lemma}\n \\label{lem:1rgbp-planar}\n Let~$\\mathcal{I}'$ be the instance obtained from an instance~$\\mathcal{I}$ using~\\cref{constr:1rgbp-planar}.\n Then,\n $\\mathcal{I}$ is a \\textnormal{\\texttt{yes}}-instance if and only if~$\\mathcal{I}'$ is a \\textnormal{\\texttt{yes}}-instance.\n\\end{lemma}\n\n\\begin{proof}\n $(\\Rightarrow)\\quad${}\n Let~$C\\subseteq V$ be a vertex cover of~$G$ of size~$k$.\n We claim that~$F\\ensuremath{\\coloneqq} E^*\\cup \\bigcup_{i\\in C}\\{\\{v_i,w_i\\}\\}$\n is a solution to~$\\mathcal{I}'$.\n Note that~$|F|=2m+2n+k$.\n Observe that~$G'[F][S]$ and~$G'[F][T]$ are connected.\n Suppose that there is~$Z_\\ell$ such that~$G'[F][Z_\\ell]$\n is not connected.\n Let~$e_\\ell=\\{i,j\\}$.\n Since~$E^*\\subseteq F$,\n neither~$\\{v_i,w_i\\}$ nor~$\\{v_j,w_j\\}$ is contained in~$F$.\n It follows that~$\\{i,j\\}\\cap C=\\emptyset$,\n contradicting the fact that~$C$ is a vertex cover.\n \n $(\\Leftarrow)\\quad${}\n Let~$F$ be a solution to~$\\mathcal{I}'$.\n We know that~$E^*\\subseteq F$ due to~\\cref{obs:1rgbp-planar}.\n We claim that~$C\\ensuremath{\\coloneqq} \\{i\\in V\\mid \\{v_i,w_i\\}\\in F\\}$\n is a vertex cover of~$G$.\n Note that~$|C|\\leq k$ since~$|E^*|=2n+2m$ and~$|F|\\leq 2n+2m+k$.\n Suppose not,\n that is,\n there is an~$e_\\ell=\\{i,j\\}$ with~$\\{i,j\\}\\cap C=\\emptyset$.\n Then,\n $G'[F][Z_\\ell]$ is not connected,\n a contradiction.\n\\end{proof}\n\n\\subsection{One hop between habitat patches (\\texorpdfstring{\\boldmath{$d=2$}}{d=2})}\n\\label{ssec:2rgbp}\n\nWe prove that~\\RgbpAcr[2]{}\nis already~\\cocl{NP}-complete even if there are two habitats and the graph has maximum degree four,\nor if there is only one habitat.\n\n\\begin{proposition}\n \\label{prop:2rgbp}\n \\RgbpAcr{} with~$d\\geq 2$ is \\cocl{NP}-complete \n even if\n (i) $r=2$ and~$\\Delta\\leq 4$\n or\n (ii) $r=1$ and the input graph has diameter~$2d$.\n\\end{proposition}\n\n\\noindent\nFor the sake of presentation,\nwe 
prove~\\cref{prop:2rgbp}(i) for~$d=2$.\nAfterwards,\nwe briefly explain how to adapt the proof for~$d>2$ and for \\cref{prop:2rgbp}(ii).\n\n\\begin{construction}\n \\label{constr:2rgbp}\n For an instance~$\\mathcal{I}=(G, k)$ of \\prob{3-Regular Vertex Cover} with~$G=(V, E)$ and~$V=\\set{n}$\n construct an instance of~\\RgbpAcr[2]{}\n with graph~$G'=(V',E')$, \n habitat sets~$V_1$ and~$V_2$,\n and integer~$k'\\ensuremath{\\coloneqq} |E|+(n-1)+k$ as follows \n (see~\\cref{fig:2rgbp}(a) for an illustration).\n \\begin{figure}[t]\n \\centering\n \\begin{tikzpicture}\n\n \\def1.1{0.67}\n \\def0.9{0.7}\n \\def1.5*\\yr{1.5*0.9}\n \\def0.125{0.125}\n \\tikzstyle{xnode}=[circle,fill,scale=0.6,draw,color=cyan!50!green]\n \\tikzstyle{xnodef}=[circle,scale=0.6,draw,color=cyan!50!green,fill=white]\n \\tikzstyle{xnodex}=[label={[xshift=-0.64*1.1 cm]0:$\\cdots$},color=cyan!50!green]\n \\tikzstyle{xedge}=[thick,-,color=cyan!50!green]\n\n \\node at (-4.25*1.1,1*0.9)[]{(a)};\n \n \\foreach \\x\/\\y in {1\/xnode,2\/xnodex,3\/xnode,4\/xnodex,5\/xnode,6\/xnodex,7\/xnode,8\/xnodex,9\/xnode}{\n \\node (e\\x) at (\\x*1.1-5*1.1,0)[\\y]{};\n }\n \\draw[dotted,rounded corners,draw] ($(e1.south west)-(0.125,0.125)$) rectangle ($(e9.north east)+(0.125,0.125)$);\n \\foreach \\x\/\\y in {1\/xnodef,2\/xnodex,3\/xnodef,4\/xnodex,5\/xnodef,6\/xnodex,7\/xnodef}{\n \\node (v\\x) at (\\x*1.1-4*1.1,-1*1.5*\\yr)[\\y]{};\n }\n \\draw[dotted,rounded corners,draw] ($(v1.south west)-(0.125,0.125)$) rectangle ($(v7.north east)+(0.125,0.125)$);\n \\foreach \\x\/\\y in {1\/xnode,2\/xnodex,3\/xnode,4\/xnodex,5\/xnode,6\/xnodex,7\/xnode}{\n \\node (z\\x) at (\\x*1.1-4*1.1,-2*1.5*\\yr)[\\y]{};\n }\n \\draw[dotted,rounded corners,draw] ($(z1.south west)-(0.125,0.125)$) rectangle ($(z7.north east)+(0.125,0.125)$);\n \\foreach \\x\/\\y in {3\/3,3\/5,5\/3,5\/2,7\/5,7\/6}{\n \\draw[xedge] (e\\x) to (v\\y);\n }\n \\foreach \\x\/\\y in {2\/1,8\/7}{\n \\draw[draw=none] (e\\x) to node[midway]{$\\vdots$}(v\\y);\n }\n \\foreach \\x in {1,...,7}{\n \\draw[xedge] (z\\x) to (v\\x);\n }\n \\foreach \\x\/\\y in {1\/2,2\/3,3\/4,4\/5,5\/6,6\/7}{\n \\draw[xedge] (z\\x) to (z\\y);\n }\n \n \\contourlength{0.09em}\n \\newcommand{\\ltb}[3]{\n \\node at (#1)[label={[align=center,font=\\footnotesize,color=black]#2},label={[align=center,font=\\scriptsize,color=black]-90:\\contour*{white}{#3}}]{};\n }\n \n \\ltb{e3}{\\footnotesize$e=$\\\\\\footnotesize$\\{i,j\\}$}{$\\in V_1$};\n \\ltb{e5}{\\footnotesize$e'=$\\\\\\footnotesize$\\{i,j'\\}$}{$\\in V_1$};\n \\ltb{e7}{\\footnotesize$e''=$\\\\\\footnotesize$\\{i',j\\}$}{$\\in V_1$};\n \n \\ltb{z3}{\\contour*{white}{$x_i$}}{$\\in V_1,V_2$};\n \\ltb{z5}{\\contour*{white}{$x_j$}}{$\\in V_1,V_2$};\n \n \\ltb{v3}{$v_i$}{};\n \\ltb{v5}{$v_j$}{};\n \\end{tikzpicture}\n \\hfill\n \\begin{tikzpicture}\n\n \\def1.1{0.67}\n \\def0.9{0.7}\n \\def1.5*\\yr{1.5*0.9}\n \\def0.125{0.125}\n \\tikzstyle{xnode}=[circle,fill,scale=0.6,draw,color=cyan!50!green]\n \\tikzstyle{xnodef}=[circle,scale=0.6,draw,color=cyan!50!green,fill=white]\n \\tikzstyle{xnodex}=[label={[xshift=-0.64*1.1 cm]0:$\\cdots$},color=cyan!50!green]\n \\tikzstyle{xedge}=[thick,-,color=cyan!50!green]\n\n \\node at (-4.25*1.1,1*0.9)[]{(b)};\n \n \\foreach \\x\/\\y in {1\/xnode,2\/xnodex,3\/xnode,4\/xnodex,5\/xnode,6\/xnodex,7\/xnode,8\/xnodex,9\/xnode}{\n \\node (e\\x) at (\\x*1.1-5*1.1,0)[\\y]{};\n }\n \\draw[dotted,rounded corners,draw] ($(e1.south west)-(0.125,0.125)$) rectangle ($(e9.north east)+(0.125,0.125)$);\n \\foreach \\x\/\\y in 
{1\/xnodef,2\/xnodex,3\/xnodef,4\/xnodex,5\/xnodef,6\/xnodex,7\/xnodef}{\n \\node (v\\x) at (\\x*1.1-4*1.1,-1*1.5*\\yr)[\\y]{};\n }\n \\draw[dotted,rounded corners,draw] ($(v1.south west)-(0.125,0.125)$) rectangle ($(v7.north east)+(0.125,0.125)$);\n \\node (x) at (0*1.1,-2*1.5*\\yr)[xnodef]{};\n\n\n \\foreach \\x\/\\y in {3\/3,3\/5,5\/3,5\/2,7\/5,7\/6}{\n \\draw[xedge] (e\\x) to (v\\y);\n }\n \\foreach \\x\/\\y in {2\/1,8\/7}{\n \\draw[draw=none] (e\\x) to node[midway]{$\\vdots$}(v\\y);\n }\n \\foreach \\x in {1,...,7}{\n \\draw[xedge] (x) to (v\\x);\n }\n \n \\contourlength{0.09em}\n \\newcommand{\\ltb}[3]{\n \\node at (#1)[label={[align=center,font=\\footnotesize,color=black]#2},label={[align=center,font=\\scriptsize,color=black]-90:\\contour*{white}{#3}}]{};\n }\n \n \\ltb{e3}{\\footnotesize$e=$\\\\\\footnotesize$\\{i,j\\}$}{$\\in V_1$};\n \\ltb{e5}{\\footnotesize$e'=$\\\\\\footnotesize$\\{i,j'\\}$}{$\\in V_1$};\n \\ltb{e7}{\\footnotesize$e''=$\\\\\\footnotesize$\\{i',j\\}$}{$\\in V_1$};\n \n \\ltb{x}{\\contour*{white}{$x$}}{$\\in V_1$};\n \n \\ltb{v3}{$v_i$}{};\n \\ltb{v5}{$v_j$}{};\n \\end{tikzpicture}\n \\caption{Illustration for~\\RgbpAcr[2]{} with \n (a) $r=2$ and~$\\Delta=4$ ($k'=m+(n-1)+k$) and\n (b) $r=1$ ($k'=m+k$).}\n \\label{fig:2rgbp}\n\\end{figure}\n\n Add the vertex set~$V_E\\ensuremath{\\coloneqq} \\{v_e\\mid e\\in E\\}$\n and add~$v_e$ with $e=\\{i,j\\}\\in E$ to habitat~$V_1$.\n Next,\n add the vertex sets~$V_G=\\{v_i\\mid i\\in V\\}$,\n and connect each~$v_i$ with all edge-vertices corresponding to an edge incident with~$i$,\n i.e.,\n add the edge set~$E_G\\ensuremath{\\coloneqq} \\bigcup_{i\\in V}\\{\\{v_i,v_e\\}\\mid i\\in e\\}$.\n Next,\n add the vertex set~$V_X\\ensuremath{\\coloneqq} \\{x_i\\mid i\\in V\\}$,\n connect each~$x_i$ with~$v_i$,\n and add~$x_i$ to~$V_1,V_2$.\n Finally,\n add the edge set~$\\{\\{x_i,x_{i+1}\\}\\mid i\\in\\set{n-1}\\}$.\n\\end{construction}\n\n\\begin{observation}\n \\label{obs:2rgbp}\n Let~$\\mathcal{I}=(G,k)$ be an instance of \\prob{3-Regular Vertex Cover} and let~$\\mathcal{I}'=(G',\\{V_1,V_2\\},k')$ be the instance obtained from~$\\mathcal{I}$ using~\\cref{constr:2rgbp}.\n If~$\\mathcal{I}'$ is a \\textnormal{\\texttt{yes}}-instance,\n then every solution contains all edges in~$G[V_X]$.\n\\end{observation}\n\n\\begin{proof}\n Suppose not,\n and let~$F$ be a solution without some edge~$\\{x_i,x_{i+1}\\}$.\n Note that in~$G-\\{\\{x_i, x_{i+1}\\}\\}$, the distance between~$x_i$ and~$x_{i+1}$ is at least four;\n thus~$G[F]^2[V_X] = G[F]^2[V_2]$ is not be connected.\n A contradiction.\n\\end{proof}\n\n\\begin{lemma}\n \\label{lem:2rgbp:edinc}\n Let~$\\mathcal{I}=(G,k)$ be an instance of \\prob{3-Regular Vertex Cover} and let~$\\mathcal{I}'=(G',f,k')$ be the instance obtained from~$\\mathcal{I}$ using~\\cref{constr:2rgbp}.\n If~$\\mathcal{I}'$ is a \\textnormal{\\texttt{yes}}-instance,\n then there is a solution~$F\\subseteq E(G')$ such that~$\\deg_{G'[F]}(v_e)=1$ for all~$e\\in E(G)$.\n\\end{lemma}\n\n\\begin{proof}\n Clearly, in every solution,\n we have~$\\deg_{G'[F]}(v_e)\\geq 1$.\n Let~$F$ be a minimum solution with a minimum number of edges incident to vertices in~$\\{v_e \\mid e\\in E\\}$.\n Suppose that there is at least one~$e=\\{i,j\\}\\in E$ such that~$\\deg_{G'[F]}(v_e)=2$, that is, $\\{v_e, v_i\\}, \\{v_e, v_j\\} \\in F$.\n Since~$F$ is a solution, there is a path~$P$ in~$G'[F]$ from~$v_e$ to some~$x_i$.\n Let~$\\{v_e,v_i\\}$ be the first edge on this path.\n Let~$F'\\ensuremath{\\coloneqq} (F 
\\setminus\\{v_e,v_j\\})\\cup\\{v_j,x_j\\}$.\n We claim that~$F'$ is a solution,\n yielding a contradiction to the fact that~$F$ is a solution with a minimum number of edges incident with vertices in~$V_E$.\n\n Only a vertex~$v_{e'}$ can be disconnected from any~$V_X$ by removing~$\\{v_e,v_j\\}$ from~$F$.\n This vertex cannot be on the path~$P$,\n and hence is connected to~$v_e$ via edge~$\\{v_e,v_j\\}$.\n Since now edge~$\\{v_j,x_j\\}$ is present,\n $v_{e'}$ is again connected to~$V_X$.\n\\end{proof}\n\n\\begin{lemma}\n \\label{lem:2rgbp:cor}\n Let~$\\mathcal{I}=(G,k)$ be an instance of \\prob{3-Regular Vertex Cover} and let~$\\mathcal{I}'=(G',\\{V_1,V_2\\},k')$ be the instance obtained from~$\\mathcal{I}$ using~\\cref{constr:2rgbp}.\n Then~$\\mathcal{I}$ is a \\textnormal{\\texttt{yes}}-instance if and only if~$\\mathcal{I}'$ is a \\textnormal{\\texttt{yes}}-instance.\n\\end{lemma}\n\n\\begin{proof}\n $(\\Rightarrow)\\quad${}\n Let~$S\\subseteq V$ be a vertex cover of size~$k$ in~$G$.\n We construct a solution~$F\\subseteq E'$ as follows.\n Let~$F_X=\\bigcup_{i=1}^{n-1} \\{\\{x_i,x_{i+1}\\}\\}$\n and~$F_V\\ensuremath{\\coloneqq} \\{\\{v_i,x_i\\}\\mid i\\in S\\}$.\n We define the auxiliary function~$g\\colon E\\to V'$ with~$g(\\{i,j\\})=v_{\\min(\\{i,j\\}\\cap S)}$.\n Let~$F_E\\ensuremath{\\coloneqq} \\bigcup_{e=\\{i,j\\}\\in E} \\{v_e,g(e)\\}$.\n Let~$F\\ensuremath{\\coloneqq} F_X\\cup F_V\\cup F_E$. \n Note that~$|F|=|F_X|+|F_V|+|F_E|\\leq |E|+(n-1)+k = k'$.\n Moreover, \n every~$v_e\\in V_E$ is connected to~$x_i$ via a path~$(v_e,v_i,x_i)$, \n where~$i\\in (e\\cap S)$.\n Finally,\n observe that~$G'[F][V_X]$ is connected.\n \n $(\\Leftarrow)\\quad${}\n Let~$\\mathcal{I}'$ be a \\textnormal{\\texttt{yes}}-instance.\n Due to \\cref{lem:2rgbp:edinc} there\n is a solution~$F\\subseteq E'$ such that $\\deg_{G'[F]}(v_e)=1$ for all~$e\\in E$.\n Due to the observation,\n we know that~$\\bigcup_{i=1}^{n-1} \\{\\{x_i,x_{i+1}\\}\\}\\subseteq F$.\n Let~$S\\ensuremath{\\coloneqq} \\{i\\in V\\mid \\{v_i,x_i\\}\\in F\\}$.\n We claim that~$S$ is a vertex cover.\n Suppose not,\n that is,\n there is an edge~$e\\in E$ such that~$e\\cap S=\\emptyset$.\n That means that the unique neighbor of~$v_e$,\n say~$v_i$,\n is not adjacent with~$x_i$ in~$G'[F]$.\n Since $\\deg_{G'[F]}(v_e)=1$ for all~$e\\in E$,\n $N_{G'[F]}[v_i]\\neq G'[F]$ forms a connected component in~$G'[F]^2$.\n Hence,\n $F$ is not a solution.\n A contradiction.\n\\end{proof}\n\n\\begin{remark}\n \\begin{inparaenum}[(i)]\n \\item To make the reduction work for~$d\\geq 3$,\n it is enough to subdivide each edge~$\\{v_e,v_i\\}$\n $(d-2)$ times and set~$k'\\ensuremath{\\coloneqq} (d-1)m+(n-1)+k$.\n \\item If we contract all~$x_i$,\n set~$V_2=\\emptyset$ \n (i.e., only one habitat remains),\n and set~$k'\\ensuremath{\\coloneqq} (d-1)m+k$,\n then the reduction is still valid\n (see~\\cref{fig:2rgbp}(b) for an illustration).\n \\end{inparaenum}\n Thus,\n \\cref{prop:2rgbp}(ii) follows.\n\\end{remark}\n\n\\noindent\n\\cref{prop:2rgbp}\nleaves $k$ unbounded.\nThis leads to the following.\n\n\\subsubsection{Parameterizing with~\\texorpdfstring{\\boldmath{$k$}}{k}.}\n\nWe show that \\RgbpAcr[2]{} admits a problem kernel of size exponential in~$k$.\n\\begin{proposition}\n \\label{prop:rgbp:kernel}\n \\RgbpAcr[2]{} admits a problem kernel with at most $2k+\\binom{2k}{k}$ vertices,\n at most $\\binom{2k}{2}+k\\binom{2k}{k}$ edges,\n and at most~$2^{2k}$ habitats.\n\\end{proposition}\n\n\\noindent\nLet~$\\bar{V}\\ensuremath{\\coloneqq} V\\setminus 
\\bigcup_{V'\\in\\mathcal{H}} V'$ for graph~$G=(V,E)$ and habitat set~$\\mathcal{H}=\\{V_1,\\dots,V_r\\}$.\nThe following reduction rules are immediate.\n\n\\begin{rrule}\n \\label{rr:immediate}\n \\begin{inparaenum}[(i)]\n \\item If~$|V_i|=1$ some~$i$,\n delete~$V_i$.\n \\item If a vertex in~$\\bar{V}$ is of degree at most one,\n delete it.\n \\item If there is an~$i\\in\\set{r}$ with~$|V_i|>1$ and an~$v\\in V_i$ of degree zero,\n return a trivial \\textnormal{\\texttt{no}}-instance.\n \\item If there is~$v\\in V\\setminus\\bar{V}$ of degree at most one,\n delete it (also from~$V_1,\\dots,V_r$),\n and set~$k\\ensuremath{\\coloneqq} k-1$.\n \\end{inparaenum}\n\\end{rrule}\n\n\\noindent\nClearly, \n$k$ edges can connect at most~$2k$ vertices;\nthus we obtain the following.\n\n\\begin{rrule}\n \\label{rr:few-habitat-vertices}\n If~$|V\\setminus\\bar{V}|>2k$,\n then return a trivial \\textnormal{\\texttt{no}}{}-instance.\n\\end{rrule}\n\n\\noindent\nSo we have at most~$2k$ vertices in habitats.\nNext, we upper-bound the number of non-habitat vertices.\nNo minimal solution has edges between two such vertices.\n\n\\begin{rrule}\n \\label{rr:no-bar-edges}\n If there is an edge~$e \\in E$ with $e\\subseteq \\bar{V}$,\n then delete~$e$.\n\\end{rrule}\n\n\\noindent\nMoreover,\nno minimum solution connects through non-habitat twins.\n\n\\begin{rrule}\n \\label{rr:no-twins}\n If~$N(v)\\subseteq N(w)$ for distinct~$v,w\\in \\bar{V}$,\n then delete~$v$.\n\\end{rrule}\n\n\\noindent\nWe still need to bound the number of vertices in~$\\bar V$.\nFor an $n$-element set~$S$ let~$\\mathcal F \\subseteq 2^{S}$ be a family of subsets such that\nfor every~$A, B \\in \\mathcal F$ we have~$A \\not\\subseteq B$.\nThen~$|\\mathcal F| \\le \\binom{n}{\\lfloor n\/2 \\rfloor}$ by Sperner's Theorem.\nHence,\nafter \napplying the reduction rules,\nwe get an instance with at most~$2k+\\binom{2k}{k}$ vertices \nand~$\\binom{2k}{2}+2k\\binom{2k}{k}$ edges.\n\n\\begin{rrule}\n\\label{rr:habitats}\nIf $V_i=V_j$ for distinct~$i,j\\in\\set{r}$,\nthen delete~$V_j$.\n\\end{rrule}\n\n\\noindent\nIt follows that we can safely assume that~$r\\leq 2^{2k}$.\nThus,\n\\cref{prop:rgbp:kernel} follows.\nUnfortunately,\nimproving the problem kernel \nto polynomial-size appears unlikely.\n\n\\begin{proposition}\n \\label{prop:rgbp:nopk}\n Unless~\\ensuremath{\\NP\\subseteq \\coNP\/\\poly},\n \\RgbpAcr{} for~$d\\geq 2$ admits no problem kernel of size~$k^{\\O(1)}$,\n even if~$r\\geq 1$ is constant.\n\\end{proposition}\n\n\\noindent\nWe will give a linear parameteric transformation from the following problem:\n\n\\decprob{Set Cover (SC)}{setcover}\n{A universe~$U$, a set~$\\mathcal{F}\\subseteq 2^U$ of subsets of~$U$, and an integer~$k$.}\n{Is there~$\\mathcal{F}'\\subset\\mathcal{F}$ with~$|\\mathcal{F}'|\\leq k$ such that~$\\bigcup_{F\\in\\mathcal{F}'} F=U$?}\n\n\\noindent\nThe construction is basically the same as for \\cref{prop:2rgbp}(ii).\nNote that \\prob{Set Cover} admits no problem kernel of size polynomial in~$|U|+k$,\nunless~$\\ensuremath{\\NP\\subseteq \\coNP\/\\poly}$~\\cite{DomLS14}.\n\n\n\\begin{proof}\n Let~$\\mathcal{I}=(U,\\mathcal{F},k)$ be an instance of~\\prob{Set Cover},\n with~$U=\\{u_1,\\dots,u_n\\}$.\n Construct an instance~$\\mathcal{I}'\\ensuremath{\\coloneqq} (G,V_1,k')$ of \\RgbpAcr[2]{} with~$k'=|U|+k$ as follows \n (see~\\cref{fig:2rgbp:nopk}).\n \\begin{figure}[t]\n \\centering\n \\begin{tikzpicture}\n\n \\def1.1{1.1}\n \\def0.9{1}\n \\def1.5*\\yr{1.5*0.9}\n \\def0.125{0.125}\n 
\\tikzstyle{xnode}=[circle,fill,scale=0.6,draw,color=cyan!50!green]\n \\tikzstyle{xnodef}=[circle,scale=0.6,draw,color=cyan!50!green,fill=white]\n \\tikzstyle{xnodex}=[label={[xshift=-0.4*1.1 cm]0:$\\cdots$},color=cyan!50!green]\n \\tikzstyle{xedge}=[thick,-,color=cyan!50!green]\n\n \\foreach \\x\/\\y in {1\/xnode,2\/xnodex,3\/xnode,4\/xnodex,5\/xnode,6\/xnodex,7\/xnode,8\/xnodex,9\/xnode}{\n \\node (e\\x) at (\\x*1.1-5*1.1,-1*1.5*\\yr)[\\y]{};\n }\n \\draw[dotted,rounded corners,draw] ($(e1.south west)-(0.125,0.125)$) rectangle ($(e9.north east)+(0.125,0.125)$);\n \\foreach \\x\/\\y in {1\/xnodef,2\/xnodex,3\/xnodef,4\/xnodex,5\/xnodef,6\/xnodex,7\/xnodef}{\n \\node (v\\x) at (\\x*1.1-4*1.1,0*1.5*\\yr)[\\y]{};\n }\n \\draw[dotted,rounded corners,draw] ($(v1.south west)-(0.125,0.125)$) rectangle ($(v7.north east)+(0.125,0.125)$);\n \\node (x) at (0*1.1,-2*1.5*\\yr)[xnodef]{};\n\n\n \\foreach \\x\/\\y in {3\/2,3\/3,3\/5,5\/1,5\/3,5\/5,5\/7,7\/4,7\/5,7\/6}{\n \\draw[xedge] (e\\x) to (v\\y);\n }\n \\foreach \\x\/\\y in {2\/1,8\/7}{\n \\draw[draw=none] (e\\x) to node[midway]{$\\vdots$}(v\\y);\n }\n \\foreach \\x in {1,...,9}{\n \\draw[xedge] (x) to (e\\x);\n }\n \n \\newcommand{\\ltb}[3]{\n \\contourlength{0.09em}\n \\node at (#1)[label={[align=center,font=\\footnotesize,color=black]\\contour*{white}{#2}},label={[align=center,font=\\scriptsize,color=black]-90:\\contour*{white}{#3}}]{};\n }\n \n \\ltb{e3}{$v_{F'}$}{};\n \\ltb{e5}{$v_F$}{};\n \\ltb{e7}{$v_{F''}$}{};\n \n \\ltb{x}{$x$}{$\\in V_1$};\n \n \\ltb{v1}{$u_1$}{$\\in V_1$};\n \\ltb{v7}{$u_n$}{$\\in V_1$};\n \\ltb{v3}{$u_i$}{$\\in V_1$};\n \\ltb{v5}{$u_j$}{$\\in V_1$};\n \n \\node[right =of v7,xshift=-1.1*0.5 cm]{$V_U$};\n \\node[right =of e9,xshift=-1.1*0.5 cm]{$V_\\mathcal{F}$};\n \\end{tikzpicture}\n \\caption{Illustration for the construction in the proof of~\\cref{prop:rgbp:nopk} for~\\RgbpAcr[2]{} with~$r=1$. 
\n In this example, \n $U=\\{u_1,\\dots,u_n\\}$ and\n we have $\\{u_1,u_i,u_j,u_n\\}= F\\in\\mathcal{F}$.}\n \\label{fig:2rgbp:nopk}\n \\end{figure}\n Let~$G$ be initially empty.\n Add the vertex set~$V_U\\ensuremath{\\coloneqq} U$,\n the vertex set~$V_\\mathcal{F}\\ensuremath{\\coloneqq} \\{v_F\\mid F\\in\\mathcal{F}\\}$,\n and the vertex~$x$.\n Set~$V_1\\ensuremath{\\coloneqq} V_U\\cup\\{x\\}$.\n Make each vertex in~$V_\\mathcal{F}$ adjacent with~$x$.\n Finally,\n for each~$F\\in\\mathcal{F}$,\n add the edge set~$\\{\\{v_i,v_F\\}\\mid u_i\\in F\\}$.\n \n The proof that~$\\mathcal{I}$ is a \\textnormal{\\texttt{yes}}-instance if and only if~$\\mathcal{I}'$ is a \\textnormal{\\texttt{yes}}-instance \n is analogous with the correctness proof for \\cref{prop:2rgbp}(ii).\n \n Since \\prob{Set Cover} admits no problem kernel of size polynomial in~$|U|+k$,\n unless~$\\ensuremath{\\NP\\subseteq \\coNP\/\\poly}$~\\cite{DomLS14},\n neither does~\\RgbpAcr[2]{} when parameterized by~$k'=|U|+k$.\n\\end{proof}\n\n\\noindent\n\\cref{prop:rgbp:nopk} holds for general graphs.\nIn fact, \nfor planar graphs,\nthe above reduction rules allow for \nan~$\\O(k^3)$-vertex kernel.\n\n\\begin{proposition}\n\t\\label{prop:rgbp:planar}\t\n\t\\RgbpAcr[2]{} on planar graphs admits a problem kernel with~$\\O(k^3)$ vertices and edges and at most~$2^{2k}$ habitats.\n\\end{proposition}\n\n\\begin{observation}\n \\label{obs:2rgbpplanar}\n\tSuppose all reduction rules were applied exhaustively.\n\tThen\n\t\\begin{inparaenum}[(i)]\n\t\\item there are at most~$k$ vertices of degree two in~$\\bar V$, and\n\t\\item there are at most~$3\\binom{2k}{3}$ vertices of dergee at least three in~$\\bar V$.\n\t\\end{inparaenum}\n\\end{observation}\n\n\\begin{proof}\n\t\\begin{inparaenum}[\\itshape (i)]\n\t\\item By \\cref{rr:no-bar-edges,rr:few-habitat-vertices,rr:no-twins},\n\t\tevery degree-two vertex in~$\\bar V$ has a pairwise different pair of neighbors in~$V \\setminus \\bar V$.\n\t\tIf there are more than~$2\\binom{2k}{2}$ degree-two vertices in~$\\bar V$,\n\t\tthen one of the reduction rules was not applied exhaustively.\n\n\t\\item Any three vertices~$u,v,w\\in V$ of a planar graph share at most two neighbors,\n\t\tthat is,\n\t\t$|N(u)\\cap N(v)\\cap N(w)| \\le 2$.\n\t\tSuppose there are more than~$3\\binom{2k}{3}$ vertices in~$\\bar V$ of degree at least three.\n\t\tThen,\n\t\tby \\cref{rr:no-bar-edges,rr:few-habitat-vertices,rr:no-twins},\n\t\tthere are three vertices~$u,v,w\\in\\bar V$ such that~$|N(u)\\cap N(v)\\cap N(w)| \\ge 3$,\n\t\ta contradiction to~$G$ being planar.\n\t\\end{inparaenum}\n\\end{proof}\n\n\\noindent\nAs~$|V\\setminus \\bar V| \\le 2k$ and we deleted all degree-one vertices,\n\\cref{prop:rgbp:planar} follows.\n\n\\subsection{At least two hops between habitat patches (\\texorpdfstring{\\boldmath{$d\\ge 3$}}{d\u2a7e3})}\n\\label{ssec:3rgbp}\n\nIf the data is more sparse,\nthat is,\nthe observed habitats to connect are rather scattered,\nthen the problem becomes significantly harder to solve from the parameterized complexity point of view.\n\n\\begin{proposition}\n \\label{prop:3rgbp}\n \\RgbpAcr{} with~$d\\geq 3$ is \\cocl{NP}-complete and \n \\W{1}-hard when parameterized by~$k+r$.\n\\end{proposition}\n\n\\noindent\nWe give the construction for~$d$ being odd.\nAfterwards,\nwe explain how to adapt the reduction to~$d$ being even.\n\n\\begin{construction}\n \\label{constr:3rgbp}\n Let~$(G)$ with~$G=(U^1,\\dots,U^k,E)$ be an instance of \\prob{Multicolored Clique}\n where~$G[U^i]$ forms an independent set 
for every~$i\\in\\set{k}$.\n Assume without loss of generality that~$U^i=\\{u^i_1,\\dots,u^i_{|V^i|}\\}$.\n Construct the instance~$(G',V_1,\\dots,V_{\\binom{k}{2}},k')$ with~$k\\ensuremath{\\coloneqq} \\frac{(d-1)}{2}k+\\binom{k}{2}$ as follows\n (see~\\cref{fig:3rgbp} for an illustration).\n \\begin{figure}[t]\n \\centering\n \\begin{tikzpicture}\n\n \\usetikzlibrary{calc}\n \\def1.1{1}\n \\def0.9{0.775}\n \\def0.125{0.175}\n \\tikzstyle{xnode}=[circle,fill,scale=0.6,draw,color=cyan!50!green]\n \\tikzstyle{xedge}=[thick,-,color=cyan!50!green]\n \\tikzstyle{xnodef}=[circle,scale=0.6,draw,color=cyan!50!green,fill=white]\n\n \\newcommand{\\xpart}[6]{\n \\begin{scope}[xshift=#5*1.1 cm,yshift=#6*0.9 cm,rotate around={#4:(0,0)}]\n \\node (u#1) at (0,2*0.9)[xnode]{};\n \\node (v#11) at (-1.5*1.1,0*0.9)[xnodef]{};\n \\node (v#12) at (-0.75*1.1,0*0.9)[rotate=#4]{$\\cdots$};\n \\node (v#13) at (0*1.1,0*0.9)[xnodef]{};\n \\node (v#14) at (0.75*1.1,0*0.9)[rotate=#4]{$\\cdots$};\n \\node (v#15) at (1.5*1.1,0*0.9)[xnodef]{};\n \n\n \\foreach \\x in {1,...,5}{\n \\draw[xedge] (u#1) to (v#1\\x);\n \\draw[dash pattern=on \\pgflinewidth off 8pt, fill=white,thick, line width=3pt,color=cyan!50!green] (u#1) to (v#1\\x);\n }\n \\draw[densely dotted,rounded corners] ($(v#11)-(0.125,0.125)$) rectangle ($(v#15)+(0.125,0.125)$);\n \\end{scope}\n \\node at (v#15)[label={[]#3}]{};\n \\node at (u#1)[label={[]#2}]{};\n }\n \n \\xpart{1}{90:$\\in V_\\ell$ if~$i\\in g^{-1}(\\ell)$}{90:$U^i$}{45}{-3}{2};\n \\xpart{2}{-90:$\\in\\bigcap_{\\ell: j\\in g^{-1}(\\ell)} V_\\ell$}{180:$U^j$}{135}{-3}{-2};\n \\xpart{3}{-90:$\\in\\bigcap_{\\ell: j'\\in g^{-1}(\\ell)} V_\\ell$}{-90:$U^{j'}$}{225}{3}{-2};\n \\xpart{4}{90:$\\in\\bigcap_{\\ell: i'\\in g^{-1}(\\ell)} V_\\ell$}{0:$U^{i'}$}{315}{3}{2};\n\n \\node (top) at (0,3.2*0.9)[scale=2]{$\\cdots$};\n \\node (bot) at (0,-3.2*0.9)[scale=2]{$\\cdots$};\n \\draw[xedge] (v13) to (v25) to (v33) to (v45) to (v13) to (v33);\n \\draw[xedge] (v25) to (v45);\n \\foreach \\x\/\\y in {1\/3,2\/5,3\/3,4\/5}{\n \\draw[xedge] (v\\x\\y) to (top);\n \\draw[xedge] (v\\x\\y) to (bot);\n }\n \\draw[draw=none] (v11) to node[midway,below,sloped,color=red]{$(d-1)\/2$ edges}(u1);\n\n \\end{tikzpicture}\n \\caption{Illustration to~\\cref{constr:3rgbp} for~\\RgbpAcr{} for~$d\\geq 3$.}\n \\label{fig:3rgbp}\n \\end{figure}\n\n Let~$g\\colon \\binom{\\set{k}}{2}\\to \\set{\\binom{k}{2}}$ be a bijective function.\n Let~$G'$ be initially~$G$.\n For each~$i\\in\\set{k}$,\n add a vertex~$v_i$ to~$G'$,\n add~$v_i$ to each habitat~$V_\\ell$ with~$i\\in g^{-1}(\\ell)$,\n and connect~$v_i$ with~$u^i_j$ for each~$j\\in\\set{|u^i_{|U^i|}}$ via a path with~$\\frac{d-1}{2}$ edges,\n where~$v_i$ and~$u_i^j$ are the endpoints of the path.\n\\end{construction}\n\n\\begin{remark}\n For every even~$d\\geq 4$,\n we can adapt the reduction for~$d-1$:\n at the end of the construction,\n subdivide each edge between two vertices that are in the original graph~$G$.\n\\end{remark}\n\n\\begin{observation}\n \\label{obs:3rgbp:smallhabs}\n In the obtained instance,\n for every~$\\ell\\in\\set{\\binom{k}{2}}$,\n it holds that, \n $V_\\ell=\\{v_i,v_j\\}$ where~$\\{i,j\\}=g^{-1}(\\ell)$,\n and for every~$i,j\\in\\set{k}$,\n $i\\neq j$,\n it holds that~$\\{\\ell'\\mid \\{v_i,v_j\\}\\subseteq V_{\\ell'}\\}=\\{\\ell\\}$ with~$\\ell=g(\\{i,j\\})$.\n\\end{observation}\n\n\\begin{observation}\n \\label{obs:3rgbp:exone}\n If the obtained instance is a \\textnormal{\\texttt{yes}}-instance,\n then in every minimal solution~$F$,\n for 
every~$i\\in\\set{k}$ there is exactly one~$u^i_j$ in~$G'[F]$.\n\\end{observation}\n\n\\begin{proof}\n Note that each~$v_i$ must be connected with at least one vertex from~$U^i$ in~$G'[F]$.\n Thus,\n $|V(G'[F])\\cap U^i|\\geq 1$.\n Moreover,\n for all~$i,j\\in\\set{k}$,\n $i\\neq j$,\n $F$~must contain an edge between~$U^i$ and~$U^j$,\n since~$\\dist_{G'}(v_i,u)+\\dist_{G'}(v_j,u')\\geq d-1$ for every~$u\\in U^i$, \n $u'\\in U^j$.\n Since additionally~$k'= \\frac{(d-1)}{2}k+\\binom{k}{2}$,\n it follows that~$v_i$ cannot be connected with two vertices from~$U^i$ in~$G'[F][U^i\\cup \\{v_i\\}]$.\n Hence,\n if there are two vertices~$u,u'\\in U^i\\cap V(G'[F])$,\n with~$u$ being connected to~$v_i$ in~$G'[F][U^i\\cup \\{v_i\\}]$,\n then~$u'$ is not part of a $v_a$-$v_b$~path in~$G'[F]$ of length at most~$d$\n for any~$a,b\\in\\set{k}$.\n It follows that~$F$ is not minimal.\n\\end{proof}\n\n\n\\begin{lemma}\n \\label{lem:3rgbp}\n Let~$\\mathcal{I}=(G)$ with~$G=(U^1,\\dots,U^k,E)$ be an instance of \\prob{Multicolored Clique} and let~$\\mathcal{I}'=(G',\\mathcal{H},k')$ be the instance obtained from~$\\mathcal{I}$ using~\\cref{constr:3rgbp}.\n Then~$\\mathcal{I}$ is a \\textnormal{\\texttt{yes}}-instance if and only if~$\\mathcal{I}'$ is a \\textnormal{\\texttt{yes}}-instance.\n\\end{lemma}\n\n\\begin{proof}\n $(\\Rightarrow)\\quad${} \n Let~$W\\subseteq V(G)$ be a multicolored clique.\n Let~$F$ contain the edge set~$\\binom{W}{2}$ and,\n for every~$i\\in\\set{k}$,\n all edges of the path from~$v_i$ to the vertex in~$U^i\\cap W$.\n We claim that~$F$ is a solution.\n Note that~$|F|=\\binom{k}{2}+k\\frac{d-1}{2}$.\n Since~$V_\\ell$ is of size two for all~$\\ell\\in\\set{\\binom{k}{2}}$ (\\cref{obs:3rgbp:smallhabs}),\n we only need to show that~$v_i,v_j$ with~$\\{i,j\\}=g^{-1}(\\ell)$\n are connected by a path of length at most~$d$.\n We know that~$v_i$ is connected to some~$u^i_x\\in W$ by a path of length~$(d-1)\/2$,\n which is adjacent to some~$u^j_y\\in W$, \n which is connected to~$v_j$ by a path of length~$(d-1)\/2$.\n Thus, \n $v_i$ and~$v_j$ are at distance at most~$d$ in~$G'[F]$.\n \n $(\\Leftarrow)\\quad${}\n Let~$F$ be a minimal solution.\n Note that~$|F|\\leq k'=\\binom{k}{2}+k\\frac{d-1}{2}$.\n We claim that~$W\\ensuremath{\\coloneqq} V(G'[F])\\cap V(G)$ is a multicolored clique.\n First, \n observe that~$|W|=k$ since for every~$v_i$ there is exactly one~$u^i_{\\ell_i}$ in~$G'[F]$ (\\cref{obs:3rgbp:exone}).\n Suppose that~$W$ is not a multicolored clique,\n that is,\n there are~$U^i$ and~$U^j$ such that there is no edge in~$F$ between them.\n Then~$v_i$ and~$v_j$ are at distance larger than~$d$ in~$G'[F]$,\n contradicting that~$F$ is a solution.\n\\end{proof}\n\n\\section{Connecting Habitats at Short Pairwise Distance}\n\\label{sec:cgbp}\n\nIn the next problem, \nwe require each habitat's patches to be at short pairwise distance.\n\n\\decprob{\\CgbpTsc{} (\\CgbpAcr{})}{cgbp}\n{An undirected graph~$G=(V,E)$,\na set~$\\mathcal H = \\{V_1, \\dots, V_r\\}$ of habitats where~$V_i\\subseteq V$ for all~$i\\in\\set{r}$,\nand~$k \\in \\mathbb{N}_0$.}\n{Is there a subset~$F\\subseteq E$ with~$|F|\\leq k$ such that\nfor every~$i\\in\\set{r}$\nit holds that~$G[F]^d[V_i]$ is a clique?\n}\n\n\\noindent\nNote that if $G[F]^d[V_i]$ is a clique, \nthen~$\\dist_{G[F]}(v,w) \\leq d$ for all~$v,w\\in V_i$.\nNote that~\\CgbpAcr[2]{} is an unweighted variant of the 2NET problem~\\cite{DahlJ04}.\n\n\\begin{theorem}\n \\CgbpAcr{} is\n \\begin{compactenum}[(i)]\n \\item if~$d=1$, linear-time solvable;\n \\item if~$d=2$, \\cocl{NP}-hard even on bipartite graphs of diameter three and~$r=1$, \n and in~\\cocl{FPT}{} regarding~$k$;\n \\item if~$d\\geq 3$, 
\\cocl{NP}-hard and~\\W{1}-hard regarding~$k$ even if~$r=1$.\n \\end{compactenum}\n\\end{theorem}\n\n\n\\noindent\nFor~$d=1$,\nthe problem is solvable in linear time:\nCheck whether each habitat induces a clique.\nIf so,\ncheck if the union of the cliques is small enough.\n\n\\begin{observation}\n \\label{obs:1cgbp}\n \\CgbpAcr[1]{} is solvable in linear time.\n\\end{observation}\n\n\\begin{proof}\n We employ the following algorithm:\n For each~$i\\in\\set{r}$,\n let~$G_i := G[V_i]$ and return~\\textnormal{\\texttt{no}}{} if~$G_i$ is not a clique.\n Finally,\n return~\\textnormal{\\texttt{yes}}{} if~$|\\bigcup_{i=1}^r E(G_i)|\\leq k$,\n and~\\textnormal{\\texttt{no}}{} otherwise.\n Clearly,\n if the algorithm returns \\textnormal{\\texttt{yes}},\n then~$\\mathcal{I}$ is \\textnormal{\\texttt{yes}}-instance.\n Conversely,\n let~$\\mathcal{I}$ be a \\textnormal{\\texttt{yes}}-instance\n and let~$F'$ be a solution to~$\\mathcal{I}$.\n We know that for every~$i\\in\\set{r}$,\n and any two vertices~$v,w\\in V_i$,\n edge~$\\{v,w\\}$ must be in~$F'$.\n It follows that\n $\\bigcup_{i=1}^r E(G_i)\\subseteq F'$.\n Thus,\n $|\\bigcup_{i=1}^r E(G_i)|\\leq |F'|\\leq k$ \n and the algorithm correctly returns~\\textnormal{\\texttt{yes}}.\n\\end{proof}\n\n\\subsection{When each part is just two steps away (\\texorpdfstring{\\boldmath{$d=2$}}{d=2})}\n\\label{ssec:2cgbp}\n\nFor~$d=2$,\n\\CgbpAcr{}\nbecomes \\cocl{NP}-hard already on quite restrictive inputs.\n\n\\begin{proposition}\n \\label{prop:2cgbp}\n \\CgbpAcr[2]{} is \\cocl{NP}-complete,\n even\n if~$r=1$ and\n the input graph is bipartite and of diameter three.\n\\end{proposition}\n\n\\begin{construction}\n \\label{constr:2cgbp}\n Let~$\\mathcal{I}=(G,k)$ with~$G=(V,E)$ be an instance of \\prob{Vertex Cover},\n and assume without loss of generality that~$V=\\set{n}$.\n Construct an instance of~\\CgbpAcr[2]{}\n with graph~$G'=(V',E')$, \n habitat~$V_1$,\n and integer~$k'\\ensuremath{\\coloneqq} 2|E|+k+3$ as follows\n (see~\\cref{fig:2cgbp} for an illustration).\n \\begin{figure}[t]\n \\centering\n \\begin{tikzpicture}\n \\def1.1{1.1}\n \\def0.9{0.7}\n \\def1.5*\\yr{1.5*0.9}\n \\def0.125{0.125}\n \\tikzstyle{xnode}=[circle,fill,scale=0.6,draw,color=cyan!50!green]\n \\tikzstyle{xnodex}=[label={[xshift=-0.4*1.1 cm]0:$\\cdots$},color=cyan!50!green]\n \\tikzstyle{xedge}=[thick,-,color=cyan!50!green]\n \\tikzstyle{xnodef}=[circle,scale=0.6,draw,color=cyan!50!green,fill=white]\n \n \\node (xp) at (0,1*1.5*\\yr)[xnodef]{};\n \\node (x) at (0*1.1,2*1.5*\\yr)[xnode]{};\n \n \\foreach \\x\/\\y in {1\/xnode,2\/xnodex,3\/xnode,4\/xnodex,5\/xnode,6\/xnodex,7\/xnode,8\/xnodex,9\/xnode}{\n \\node (e\\x) at (\\x*1.1-5*1.1,0)[\\y]{};\n }\n \\draw[dotted,rounded corners,draw] ($(e1.south west)-(0.125,0.125)$) rectangle ($(e9.north east)+(0.125,0.125)$);\n \\foreach \\x\/\\y in {1\/xnodef,2\/xnodex,3\/xnodef,4\/xnodex,5\/xnodef,6\/xnodex,7\/xnodef}{\n \\node (v\\x) at (\\x*1.1-4*1.1,-1*1.5*\\yr)[\\y]{};\n }\n \\draw[dotted,rounded corners,draw] ($(v1.south west)-(0.125,0.125)$) rectangle ($(v7.north east)+(0.125,0.125)$);\n \\node (z) at (0,-2*1.5*\\yr)[xnode]{};\n\n\n \\foreach \\x\/\\y in {3\/3,3\/5,5\/3,5\/2,7\/5,7\/6}{\n \\draw[xedge] (e\\x) to (v\\y);\n }\n \\foreach \\x\/\\y in {2\/1,8\/7}{\n \\draw[draw=none] (e\\x) to node[midway]{$\\vdots$}(v\\y);\n }\n\n \\foreach \\x in {1,...,7}{\n \\draw[xedge] (z) to (v\\x);\n }\n \n \\foreach \\x in {1,...,9}{\n \\draw[xedge] (e\\x) to (xp);\n }\n \\draw[xedge] (xp) to (x);\n \n \\node[right =of e9](y)[xnodef]{};\n 
\\draw[xedge] (z) to [out=0,in=-90](y);\n \\draw[xedge] (y) to [out=90,in=0](x);\n \n \\newcommand{\\dlabel}[3]{\n \\node at (#1)[label={[align=center,font=\\footnotesize,color=black]#2},\n label={[align=center,font=\\scriptsize,color=black]-90:#3}]{};\n }\n \\newcommand{\\ltb}[3]{\n \\contourlength{0.09em}\n \\node at (#1)[label={[align=center,font=\\footnotesize,color=black]\\contour*{white}{#2}},label={[align=center,font=\\scriptsize,color=black]-90:\\contour*{white}{#3}}]{};\n }\n\n\n \n \\ltb{xp}{$y'$}{}\n \\ltb{x}{$y$}{$\\in V_1$}\n \n \\ltb{e3}{$e=\\{i,j\\}$}{$\\in V_1$};\n \\ltb{e5}{$e'=\\{i,j'\\}$}{$\\in V_1$};\n \\ltb{e7}{$e''=\\{i',j\\}$}{$\\in V_1$};\n\n \\ltb{v3}{$v_i$}{};\n \\ltb{v5}{$v_j$}{};\n \n \\ltb{z}{$x$}{$\\in V_1$};\n \\dlabel{y}{180:$z$}{};\n \n \\end{tikzpicture}\n \\caption{Illustration to~\\cref{constr:2cgbp} for~\\CgbpAcr[2]{}.}\n \\label{fig:2cgbp}\n \\end{figure}\n\n To construct~$G'$ and~$V_1$,\n add the vertex set~$V_E\\ensuremath{\\coloneqq} \\{v_e\\mid e\\in E\\}$ and add~$V_E$ to~$V_1$.\n Add two designated vertices~$y'$ and~$y$,\n add~$y$ to~$V_1$,\n and make~$y'$ adjacent with~$y$ and all vertices in~$V_E$.\n Add a designated vertex~$x$,\n add~$x$ to~$V_1$,\n and introduce a path of length two from~$x$ to~$y$ (call the inner vertex~$z$).\n Add the vertex set~$V_G\\ensuremath{\\coloneqq} \\{v_i\\mid i\\in V\\}$,\n and make each~$v_i$ adjacent with~$x$ \n and all edge-vertices corresponding to an edge incident with~$i$,\n i.e.,\n add the edge set~$E_G\\ensuremath{\\coloneqq} \\bigcup_{i\\in V}\\{\\{v_i,v_e\\}\\mid i\\in e\\}$.\n\\end{construction}\n\n\\begin{observation}\n \\label{obs:2cgbp}\n Let~$\\mathcal{I}=(G,k)$ be an instance of \\prob{Vertex Cover} and let~$\\mathcal{I}'=(G',\\{V_1\\},k')$ be the instance obtained from~$\\mathcal{I}$ using~\\cref{constr:2cgbp}.\n If~$\\mathcal{I}'$ is a \\textnormal{\\texttt{yes}}-instance,\n then for every solution~$F\\subseteq E(G')$ \n it holds that~$\\{\\{y,y'\\},\\{y,z\\},\\{z,x\\}\\}\\cup \\{\\{y',v_e\\}\\mid e\\in E(G)\\}\\subseteq F$.\n\\end{observation}\n\n\\begin{lemma}\n \\label{lem:2cgbp:edinc}\n Let~$\\mathcal{I}=(G,k)$ be an instance of \\prob{Vertex Cover} and let~$\\mathcal{I}'=(G',\\{V_1\\},k')$ be the instance obtained from~$\\mathcal{I}$ using~\\cref{constr:2cgbp}.\n If~$\\mathcal{I}'$ is a \\textnormal{\\texttt{yes}}-instance,\n then there is a solution~$F\\subseteq E(G')$ such that~$|N_{G'[F]}(v_e)\\cap V_G|=1$ for all~$e\\in E(G)$.\n\\end{lemma}\n\n\\begin{proof}\n Note that in every solution,\n clearly we have~$|N_{G'[F]}(v_e)\\cap V_G|\\geq 1$.\n Among all solutions,\n let~$F$ be one with a minimum number of edges incident to vertices in~$V_E$.\n Suppose that there is some~$e=\\{i,j\\}\\in E$ such that~$|N_{G'[F]}(v_e)\\cap V_G|=2$,\n that is,\n $\\{v_e,v_i\\},\\{v_e,v_j\\}\\in F$.\n \n Since~$\\dist_{G'[F]}(v_e,x)= 2$,\n at least one of the edges~$\\{v_i,x\\}$ or~$\\{v_j,x\\}$ is in~$F$.\n If both are present,\n then we can remove one of the edges~$\\{v_e,v_i\\}$ or~$\\{v_e,v_j\\}$ from~$F$ to obtain a solution with fewer edges incident to vertices in~$V_E$.\n This yields a contradiction.\n \n Otherwise,\n assume that exactly one of them, say~$\\{v_i,x\\}$, \n is contained in~$F$.\n Then exchanging~$\\{v_e,v_j\\}$ with~$\\{v_j,x\\}$ yields a solution with a lower number of edges incident to vertices in~$V_E$.\n A contradiction.\n\\end{proof}\n\n\n\\begin{lemma}\n \\label{lem:2cgbp:cor}\n Let~$\\mathcal{I}=(G,k)$ be an instance of \\prob{Vertex Cover} and let~$\\mathcal{I}'=(G',\\{V_1\\},k')$ be the instance obtained from~$\\mathcal{I}$ 
using~\\cref{constr:2cgbp}.\n Then~$\\mathcal{I}$ is a \\textnormal{\\texttt{yes}}-instance if and only if~$\\mathcal{I}'$ is a \\textnormal{\\texttt{yes}}-instance.\n\\end{lemma}\n\n\\begin{proof}\n $(\\Rightarrow)\\quad${}\n Let~$W\\subseteq V$ be a vertex cover of size at most~$k$ in~$G$.\n We construct a solution~$F\\subseteq E'$ as follows.\n Let~$F'$ denote the set of all edges required due to~\\cref{obs:2cgbp}.\n Let~$F_V\\ensuremath{\\coloneqq} \\{\\{v_i,x\\}\\mid i\\in W\\}$.\n We define the auxiliary function~$g\\colon E\\to V'$ with~$g(\\{i,j\\})=v_{\\min(\\{i,j\\}\\cap W)}$.\n Let~$F_E\\ensuremath{\\coloneqq} \\{\\{v_e,g(e)\\}\\mid e\\in E\\}$.\n Let~$F\\ensuremath{\\coloneqq} F'\\cup F_V\\cup F_E$. \n Note that~$|F|=|F'|+|F_V|+|F_E|\\leq |E|+3+|E|+k = k'$.\n Moreover, \n every~$v_e\\in V_E$ is connected to~$x$ via a path~$(v_e,v_i,x)$, \n for some~$i\\in (e\\cap W)$,\n of length two,\n and all remaining vertex pairs in~$V_1$ are at distance at most two via~$y'$ or~$z$.\n Thus all vertex pairs in~$V_1$ are at distance at most two.\n \n $(\\Leftarrow)\\quad${}\n Let~$\\mathcal{I}'$ be a \\textnormal{\\texttt{yes}}-instance.\n Due to~\\cref{lem:2cgbp:edinc},\n there\n is a solution~$F\\subseteq E'$ such that~$|N_{G'[F]}(v_e)\\cap V_G|=1$ for all~$e\\in E$.\n Let~$W\\ensuremath{\\coloneqq} \\{i\\in V\\mid \\{v_i,x\\}\\in F\\}$.\n We claim that~$W$ is a vertex cover of size at most~$k$.\n Suppose that~$W$ is not a vertex cover,\n that is,\n there is an edge~$e\\in E$ such that~$e\\cap W=\\emptyset$.\n That means that the unique neighbor of~$v_e$ in~$V_G$,\n say~$v_i$,\n is not adjacent with~$x$ in~$G'[F]$.\n Then~$\\dist_{G'[F]}(v_e,x)>2$,\n and hence~$F$ is not a solution,\n a contradiction.\n Finally,\n since~$F'\\subseteq F$ with~$|F'|=|E|+3$ and~$F$ contains at least~$|E|$ edges between~$V_E$ and~$V_G$,\n we have~$|W|\\leq |F|-(|E|+3)-|E|\\leq k'-2|E|-3=k$.\n\\end{proof}\n\n\\subsubsection{Graphs of constant maximum degree.}\n\n\\RgbpAcr[2]{} is \\cocl{NP}-hard if the number~$r$ of habitats and the maximum degree~$\\Delta$ are constant \n(\\cref{prop:2rgbp}).\n\\CgbpAcr[2]{} is linear-time solvable in this~case:\n\n\\begin{proposition}\n \\label{prop:cgbpdelta}\n \\CgbpAcr{} admits an~$\\O(r\\Delta(\\Delta-1)^{\\lceil 3d\/2\\rceil})$-sized problem kernel computable in~$\\O(r(n+m))$ time.\n\\end{proposition}\n\n\\begin{proof}\n\tLet~$\\mathcal{I} = (G, \\mathcal{H}, k)$ be an instance of \\CgbpAcr{}.\n\tFor every~$i\\in\\set{r}$, fix a vertex~$u_i\\in V_i$.\n\tWe assume that we have $V_i \\subseteq N_G^d[u_i]$ for all~$i\\in\\set{r}$, otherwise~$\\mathcal{I}$ is a \\textnormal{\\texttt{no}}-instance.\n\tNow let~$W_i = N_G^{\\lceil 3d\/2\\rceil}[u_i]$ and let~$G' \\ensuremath{\\coloneqq} G[\\bigcup_{i=1}^r W_i]$.\n\tNote that~$G'$ contains at most~$r\\Delta(\\Delta - 1)^{\\lceil 3d\/2\\rceil}$ vertices and can be computed by~$r$ breadth-first searches.\n\tWe claim that~$G'$ contains every path of length at most~$d$ between every two vertices~$v, w\\in V_i$, for every~$i \\in \\set{r}$.\n\tRecall that an edge set~$F \\subseteq E$ is a solution if and only if for every~$i\\in\\set{r}$ and for every~$v, w\\in V_i$,\n\tthe graph $G[F]$ contains a path of length at most~$d$ from~$v$ to~$w$.\n\tAs by our claim~$G'$ contains any such path,\n\tthis implies that~$\\mathcal{I}$ is a \\textnormal{\\texttt{yes}}-instance if and only if~$\\mathcal{I}'\\ensuremath{\\coloneqq}(G',\\mathcal{H},k)$ is a \\textnormal{\\texttt{yes}}-instance (note that~$V_i \\subseteq V(G')$ for every~$i \\in \\set{r}$).\n\n\tAssuming that~$V_i \\subseteq N_G^d[u_i]$,\n\t$G[W_i]$ contains all paths of length at most~$d$ between~$u_i$ and any~$v\\in V_i$.\n\tSo let~$v, w \\in V_i$ be two vertices, both distinct from~$u_i$.\n\tAs $v, w \\in N^d_G[u_i]$ and~$W_i=N^{\\lceil 3d\/2\\rceil}_G[u_i]$,\n\tthe subgraph $G[W_i]$ 
contains all vertices in~$N^{\\lceil d\/2\\rceil}_G[v]$ and~$N^{\\lceil d\/2\\rceil}_G[w]$.\n\tConsider now a path of length at most~$d$ between~$v$ and~$w$.\n\tSuppose it contains a vertex~$x \\in V(G) \\setminus (N^{\\lceil d\/2\\rceil}_G[v]\\cup N^{\\lceil d\/2\\rceil}_G[w])$.\n\tThen~$\\dist_G(v, x) + \\dist_G(w, x) > 2 \\lceil d\/2 \\rceil \\ge d$, a contradiction to~$x$ being on a path from~$v$ to~$w$ of length at most~$d$.\n\tThe claim follows.\n\\end{proof}\n\n\\subsubsection{Parameterizing with~\\texorpdfstring{\\boldmath{$k$}}{k}.}\n\nAll the reduction rules that worked for~\\RgbpAcr[2]{}\nalso work for~\\CgbpAcr[2]{}.\nIt thus follows that~\\CgbpAcr[2]{} admits a problem kernel of size exponential in~$k$,\nand hence,\n\\CgbpAcr[2]{} is~\\cocl{FPT}.\nAs with~\\RgbpAcr[2]{},\nthe problem kernel presumably cannot be much improved.\nFor this, we combine the constructions for~\\cref{prop:rgbp:nopk,prop:2cgbp}\n(see \\cref{fig:2rgbp:nopk,fig:2cgbp}).\n\n\\begin{corollary}\n \\CgbpAcr[2]{} admits a problem kernel of size exponential in~$k$\n and,\n unless~\\ensuremath{\\NP\\subseteq \\coNP\/\\poly},\n none of size polynomial in~$k$,\n even if~$r=1$.\n\\end{corollary}\n\n\\subsection{When reaching each part is a voyage (\\texorpdfstring{\\boldmath{$d\\geq 3$}}{d\u2a7e3})}\n\\label{ssec:3cgbp}\n\nFor~$d\\geq 3$,\nthe problem\nis\n\\W{1}-hard regarding the number~$k$ of green bridges,\neven for one habitat.\nThe reduction is similar to the one for \\cref{prop:3rgbp}.\n\n\\begin{proposition}\n \\label{prop:3cgbp}\n \\CgbpAcr{} with~$d\\geq 3$ is \\cocl{NP}-complete and~\\W{1}-hard when parameterized by the number~$k$,\n even if $r=1$.\n\\end{proposition}\n\n\\begin{proof}\n Let~$\\mathcal{I}=(G)$ with~$G=(U^1,\\dots,U^k,E)$ be an instance of \\prob{Multicolored Clique}.\n Apply\n \\cref{constr:3rgbp}\n to obtain instance~$\\mathcal{I}''=(G',\\{V_1,\\dots,V_{\\binom{k}{2}}\\},k')$\n (recall that~$k'=\\frac{d-1}{2}k+\\binom{k}{2}$).\n Let~$\\mathcal{I}'=(G',\\{V_1'\\},k')$ with~$V_1'\\ensuremath{\\coloneqq} \\bigcup_{i=1}^{\\binom{k}{2}} V_i=\\{v_1,\\dots,v_k\\}$ be the finally obtained instance of~\\CgbpAcr{}.\n We claim that~$\\mathcal{I}$ is a \\textnormal{\\texttt{yes}}-instance if and only if~$\\mathcal{I}'$ is a \\textnormal{\\texttt{yes}}-instance.\n \n $(\\Rightarrow)\\quad${}\n Let~$C$ be a multicolored clique in~$G$.\n For each~$i\\in\\set{k}$,\n let~$z_i$ be the unique vertex in~$V(C)\\cap U^i$.\n We claim that~$F$,\n consisting of the edges of the path from~$v_i$ to~$z_i$ for each~$i\\in\\set{k}$ and the edge set~$E(C)$,\n is a solution to~$\\mathcal{I}'$.\n Note that~$|F|=k'$.\n Moreover,\n for any two~$i,j\\in\\set{k}$,\n we have that~$v_i$ and~$v_j$ are at distance~$2\\frac{d-1}{2}+1=d$.\n Hence,\n $F$ is a solution.\n \n $(\\Leftarrow)\\quad${}\n Let~$F$ be a solution to~$\\mathcal{I}'$.\n Since~$F$ must contain a path from~$v_i$ to some~$z_i\\in U^i$\n for every~$i\\in\\set{k}$,\n at most~$\\binom{k}{2}$ edges of~$F$ remain for connecting these paths.\n Let~$Z\\ensuremath{\\coloneqq} \\{z_1,\\dots,z_k\\}$,\n where~$z_i\\in U^i$ is the vertex such that~$F$ contains the path attached to~$v_i$ that ends in~$z_i$.\n Since~$d\\geq \\dist_{G'[F]}(v_i,v_j) = \\dist_{G'[F]}(v_i,z_i)+\\dist_{G'[F]}(z_i,z_j)+\\dist_{G'[F]}(z_j,v_j)$ and~$d-1=\\dist_{G'[F]}(v_i,z_i)+\\dist_{G'[F]}(z_j,v_j)$,\n it follows that~$\\dist_{G'[F]}(z_i,z_j)=1$.\n Thus,\n $G[Z]$ forms a multicolored clique.\n\\end{proof}\n\n\\section{Connecting Habitats at Small Diameter}\n\\label{sec:dgbp}\n\nLastly,\nwe require short pairwise distances within each habitat,\nthereby strengthening~\\RgbpAcr[1]{}.\n\n\\decprob{\\DgbpTsc{} 
(\\DgbpAcr{})}{dgbp}\n{An undirected graph~$G=(V,E)$,\na set~$\\mathcal{H}=\\{V_1,\\dots,V_r\\}$ of habitats where~$V_i\\subseteq V$ for all~$i\\in\\set{r}$, \nand an integer~$k\\in\\mathbb{N}_0$.}\n{Is there a subset~$F\\subseteq E$ with~$|F|\\leq k$ such that\nfor every~$i\\in\\set{r}$\nit holds that~$G[F][V_i]$ has diameter~$d$?\n}\n\n\\noindent\nIn particular,\n$G[F][V_i]$ is connected.\nNote that\n\\RgbpAcr[1]{} reduces to~\\prob{Diam \\prob{GBP}} \n(where $d$ is part of the input and then set to the number of vertices in the input instance's graph).\nWe have the following.\n\n\\begin{theorem}\n \\label{thm:dgbp}\n \\DgbpAcr{} is\n \\begin{compactenum}[(i)]\n \\item if~$d=1$, solvable in linear time;\\label{thm:dgbp:i}\n \\item if~$d=2$, \\cocl{NP}-hard even if~$r=3$;\n \\item if~$d=3$, \\cocl{NP}-hard even if~$r=2$.\n \\end{compactenum}\n Moreover,\n \\DgbpAcr{} admits a problem kernel with at most~$2k$ vertices and at most~$2^{2k}$ habitats.\n\\end{theorem}\n\n\\noindent\n\\DgbpAcr[1]{} is equivalent to \\CgbpAcr[1]{},\nwhich is linear-time solvable by \\cref{obs:1cgbp}.\nThus, \n\\cref{thm:dgbp}\\eqref{thm:dgbp:i} follows.\nApplying \\cref{rr:few-habitat-vertices,rr:habitats} and deleting all non-habitat vertices yields the problem kernel.\n\n\\subsection{Over at most one patch to every other (\\texorpdfstring{\\boldmath{$d=2$}}{d=2})}\n\\label{ssec:2dgbp}\n\n\\DgbpAcr[2]{} turns out to be \\cocl{NP}-hard even for three habitats.\n\n\n\n\\begin{proposition}\n \\label{prop:2dgbp}\n \\DgbpAcr[2]{} is \\cocl{NP}-hard even if~$r=3$.\n\\end{proposition}\n\n\\begin{construction}\n \\label{constr:2dgbp}\n Let~$\\mathcal{I}=(G,k)$ with~$G=(V,E)$ be an instance of~\\prob{Vertex Cover}\n and assume without loss of generality that~$V=\\{1,\\dots,n\\}$ and~$E=\\{e_1,\\dots,e_m\\}$.\n Construct an instance~$\\mathcal{I}'\\ensuremath{\\coloneqq} (G',\\{V_1,V_2,V_3\\},k')$\n with~$k'=2m+2n+k+4$ as follows\n (see~\\cref{fig:2dgbp} for an illustration).\n\\begin{figure}[t]\n \\centering\n \\begin{tikzpicture}\n\n \\def1.1{1.1}\n \\def0.9{0.9}\n \\def1.5*\\yr{1.5*0.9}\n \\def0.125{0.125}\n \\tikzstyle{xnode}=[circle,fill,scale=0.6,draw,color=cyan!50!green]\n \\tikzstyle{xnodef}=[circle,scale=0.6,draw,color=cyan!50!green,fill=white]\n \\tikzstyle{xnodex}=[label={[xshift=-0.4*1.1 cm]0:$\\cdots$},color=cyan!50!green]\n \\tikzstyle{xedge}=[thick,-,color=cyan!50!green]\n\n \\foreach \\x\/\\y in {1\/xnode,2\/xnodex,3\/xnode,4\/xnodex,5\/xnode,6\/xnodex,7\/xnode,8\/xnodex,9\/xnode}{\n \\node (e\\x) at (\\x*1.1-5*1.1,0)[\\y]{};\n }\n \\draw[dotted,rounded corners,draw] ($(e1.south west)-(0.125,0.125)$) rectangle ($(e9.north east)+(0.125,0.125)$);\n \\foreach \\x\/\\y in {1\/xnodef,2\/xnodex,3\/xnodef,4\/xnodex,5\/xnodef,6\/xnodex,7\/xnodef}{\n \\node (v\\x) at (\\x*1.1-4*1.1,-1*1.5*\\yr)[\\y]{};\n }\n \\draw[dotted,rounded corners,draw] ($(v1.south west)-(0.125,0.125)$) rectangle ($(v7.north east)+(0.125,0.125)$);\n \\node (x) at (0*1.1,-2*1.5*\\yr)[xnodef]{};\n \\node (z) at (5*1.1,-0.5*1.5*\\yr)[xnodef]{};\n \\node (zp) at (6*1.1,-0.5*1.5*\\yr)[xnodef]{};\n \\node (y) at (5*1.1,-1.5*1.5*\\yr)[xnodef]{};\n \\node (yp) at (6*1.1,-1.5*1.5*\\yr)[xnodef]{};\n \n\n \\foreach \\x\/\\y in {3\/3,3\/5,5\/3,5\/2,7\/5,7\/6}{\n \\draw[xedge] (e\\x) to (v\\y);\n }\n \\foreach \\x in {1,...,9}{\n \\draw[xedge] (e\\x) to [out=-15,in=175](z);\n }\n \\foreach \\x\/\\y in {2\/1,8\/7}{\n \\draw[draw=none] (e\\x) to node[midway]{$\\vdots$}(v\\y);\n }\n \\foreach \\x in {1,...,7}{\n \\draw[xedge] (v\\x) to 
[out=15,in=185](z);\n \\draw[xedge] (v\\x) to [out=-15,in=180](y);\n }\n \\foreach \\x in {1,...,7}{\n \\draw[xedge] (x) to (v\\x);\n }\n \\draw[xedge] (x) to (y);\n \\draw[xedge] (y) to (z);\n \\draw[xedge] (y) to (yp);\n \\draw[xedge] (z) to (zp);\n\n \\newcommand{\\ltb}[3]{\n \\contourlength{0.09em}\n \\node at (#1)[label={[align=center,font=\\footnotesize,color=black]\\contour*{white}{#2}},label={[align=center,font=\\scriptsize,color=black]-90:\\contour*{white}{#3}}]{};\n }\n \n \\ltb{e3}{$e=\\{i,j\\}$}{$\\in V_1,V_2$};\n \\ltb{e5}{$e'=\\{i,j'\\}$}{$\\in V_1,V_2$};\n \\ltb{e7}{$e''=\\{i',j\\}$}{$\\in V_1,V_2$};\n \n \\ltb{x}{$x$}{$\\in V_1, V_3$};\n \\ltb{y}{$y$}{$\\in V_1,V_3$};\n \\ltb{z}{$z$}{$\\in V_1,V_2,V_3$};\n \\ltb{yp}{$y'$}{$\\in V_3$};\n \\ltb{zp}{$z'$}{$\\in V_2$};\n \n \\ltb{v3}{$v_i$}{$\\in V_1,V_2,V_3$};\n \\ltb{v5}{$v_j$}{$\\in V_1,V_2,V_3$};\n \\end{tikzpicture}\n \\caption{Illustration for~\\DgbpAcr[2]{} with~$r=3$. ($k'=2m+2n+k+4$)}\n \\label{fig:2dgbp}\n\\end{figure}\n Add the vertex sets~$V_E\\ensuremath{\\coloneqq} \\{v_e\\mid e\\in E\\}$ and~$V_G=\\{v_i\\mid i\\in V\\}$,\n as well as the vertices~$x,y,y',z,z'$.\n Add all vertices except for~$y'$ and~$z'$ to~$V_1$.\n Let~$V_2\\ensuremath{\\coloneqq} V_E\\cup V_G\\cup \\{z,z'\\}$\n and\n $V_3\\ensuremath{\\coloneqq} V_G\\cup \\{x,z,y,y'\\}$.\n Next,\n for each~$e=\\{i,j\\}\\in E$,\n connect~$v_e$ with~$v_i$, \n $v_j$, \n and~$z$.\n For each~$i\\in V$,\n connect~$v_i$ with~$x$, \n $y$, \n and~$z$.\n Lastly, \n add the edge set~$E^*\\ensuremath{\\coloneqq}\\{\\{x,y\\},\\{y,y'\\},\\{z,z'\\},\\{z,y\\}\\}$ to~$E'$.\n Let~$E_y \\ensuremath{\\coloneqq} \\{\\{y,v_i\\} \\mid i \\in V\\}$,\n $E_{E}\\ensuremath{\\coloneqq}\\{\\{v_e,z\\} \\mid e \\in E\\}$,\n and~$E_{V}\\ensuremath{\\coloneqq}\\{\\{v_i,z\\}\\mid i \\in V\\}$.\n\\end{construction}\n\n\\begin{observation}\n \\label{obs:2dgbp}\n Let~$\\mathcal{I}'$ be the instance obtained from some instance~$\\mathcal{I}$ using~\\cref{constr:2dgbp}.\n If~$\\mathcal{I}'$ is a \\textnormal{\\texttt{yes}}-instance,\n then every solution~$F$ for~$\\mathcal{I}'$ contains\n (i) $\\{\\{v,z\\}\\mid v\\in N_{G'}(z)\\}$ and \n (ii) $\\{\\{v,y\\}\\mid v\\in N_{G'}(y)\\}$.\n\\end{observation}\n\n\\begin{proof}\n Clearly,\n $\\{z,z'\\},\\{y,y'\\}\\in F$.\n Every path of length at most two from~$z'$ to any vertex in~$V_E\\cup V_G$ passes~$z$.\n Hence,\n each edge in~$\\{\\{v,z\\}\\mid v\\in V_G\\cup V_E\\}$ is contained in~$F$.\n This proves~(i).\n The proof for~(ii) is analogous.\n\\end{proof}\n\n\\noindent\nNote that~$|\\{\\{v,z\\}\\mid v\\in N_{G'}(z)\\}\\cup\\{\\{v,y\\}\\mid v\\in N_{G'}(y)\\}|=m+2n+4$.\n\n\\begin{lemma}\n \\label{lem:2dgbp}\n Let~$\\mathcal{I}'$ be the instance obtained from some instance~$\\mathcal{I}$ using~\\cref{constr:2dgbp}.\n Then,\n $\\mathcal{I}$ is a \\textnormal{\\texttt{yes}}-instance if and only if~$\\mathcal{I}'$ is a \\textnormal{\\texttt{yes}}-instance.\n\\end{lemma}\n\n\\begin{proof}\n $(\\Rightarrow)\\quad${}\n Let~$S\\subseteq V$ be a vertex cover of size~$k$.\n Let~$F'$ denote the set of all edges required to be in a solution due to \\cref{obs:2dgbp}.\n Let~$F_V\\ensuremath{\\coloneqq} \\{\\{v_i,x\\}\\mid i\\in S\\}$.\n We define the auxiliary function~$g\\colon E\\to V'$ with~$g(\\{i,j\\})=v_{\\min(\\{i,j\\}\\cap S)}$.\n Let~$F_E\\ensuremath{\\coloneqq} \\bigcup_{e\\in E} \\{\\{v_e,g(e)\\}\\}$.\n Let~$F\\ensuremath{\\coloneqq} F'\\cup F_V\\cup F_E$. 
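\n (As a purely illustrative example with hypothetical values: if~$S=\\{2,5\\}$ and~$e=\\{2,7\\}$, then~$g(e)=v_{\\min(\\{2,7\\}\\cap S)}=v_2$, so~$F_E$ contains the edge~$\\{v_e,v_2\\}$; in general, every edge of~$G$ contributes exactly one edge to~$F_E$, attached to one of its covering vertices.)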
\n Note that~$|F|=|F'|+|F_V|+|F_E|\\leq (m+2n+4)+k+m = k'$.\n Observe that~$G'[F][V_2]$ and~$G'[F][V_3]$ are connected and of diameter two.\n Next consider~$G'[F][V_1]$.\n We claim that for all~$e \\in E$, $\\dist_{G'[F][V_1]}(x,v_e)=2$.\n By construction,\n $\\dist_{G'[F][V_1]}(x,v_e)>1$.\n Suppose that there is~$v_e$ with~$e=\\{i,j\\}$ and~$\\dist_{G'[F][V_1]}(x,v_e)>2$.\n Then there is no path~$(x,v,v_e)$ with~$v\\in\\{v_i,v_j\\}$.\n Then~$\\{i,j\\}\\cap S=\\emptyset$,\n contradicting the fact that~$S$ is a vertex cover.\n \n $(\\Leftarrow)\\quad${} \n Let~$F$ be a solution to~$\\mathcal{I}'$.\n Let~$F'$ be the set of edges mentioned in \\cref{obs:2dgbp};\n so~$F' \\subseteq F$.\n Observe that in~$G'-V_G$,\n the distance of~$x$ to any~$v_e\\in V_E$ is larger than two.\n Hence,\n for each~$v_e$,\n there is a path~$(v_e,v,x)$ in~$G'[F][V_1]$ with~$v\\in V_G$.\n We claim that~$S\\ensuremath{\\coloneqq} \\{i\\in V\\mid \\{v_i,x\\}\\in F\\}$ is a vertex cover for~$G$ of size at most~$k$.\n Suppose not,\n that is,\n there is an edge~$e=\\{i,j\\}$ with~$e\\cap S=\\emptyset$.\n This contradict the fact that there is a path~$(v_e,v,x)$ in~$G'[F][V_1]$ with~$v\\in V_G$.\n It remains to show that~$|S|\\leq k$.\n As~$F$ contains an edge~$\\{v_e,v\\}$ with~$v \\in V_G$ for every~$e \\in E$,\n $|S| = |F\\cap \\{\\{v_i,x\\} \\mid i\\in V\\}| \\le k'-(|F'|+m) = k$,\n and the claim follows.\n\\end{proof}\n\n\\subsection{Over at most two patches to every other (\\texorpdfstring{\\boldmath{$d\\ge 3$}}{d\u2a7e3})}\n\\label{ssec:3dgbp}\n\n\\DgbpAcr[3]{} turns out to be \\cocl{NP}-hard even for two habitats.\n\n\\begin{proposition}\n \\label{prop:3dgbp}\n \\DgbpAcr[3]{} is \\cocl{NP}-hard even if~$r=2$.\n\\end{proposition}\n\n\\begin{construction}\n \\label{constr:3dgbp}\n Let~$\\mathcal{I}=(G,k)$ with~$G=(V,E)$ be an instance of~\\prob{Vertex Cover}.\n Construct an instance~$\\mathcal{I}'\\ensuremath{\\coloneqq} (G',\\{V_1,V_2\\},k')$\n with~$k'=2m+n+k+4$ as follows\n (see~\\cref{fig:3dgbp} for an illustration).\n \\begin{figure}[t]\n \\centering\n \\begin{tikzpicture}\n\n \\def1.1{1.1}\n \\def0.9{0.9}\n \\def1.5*\\yr{1.5*0.9}\n \\def0.125{0.125}\n \\tikzstyle{xnode}=[circle,fill,scale=0.6,draw,color=cyan!50!green]\n \\tikzstyle{xnodef}=[circle,scale=0.6,draw,color=cyan!50!green,fill=white]\n \\tikzstyle{xnodex}=[label={[xshift=-0.4*1.1 cm]0:$\\cdots$},color=cyan!50!green]\n \\tikzstyle{xedge}=[thick,-,color=cyan!50!green]\n\n \\foreach \\x\/\\y in {1\/xnode,2\/xnodex,3\/xnode,4\/xnodex,5\/xnode,6\/xnodex,7\/xnode,8\/xnodex,9\/xnode}{\n \\node (e\\x) at (\\x*1.1-5*1.1,0)[\\y]{};\n }\n \\draw[dotted,rounded corners,draw] ($(e1.south west)-(0.125,0.125)$) rectangle ($(e9.north east)+(0.125,0.125)$);\n \\foreach \\x\/\\y in {1\/xnodef,2\/xnodex,3\/xnodef,4\/xnodex,5\/xnodef,6\/xnodex,7\/xnodef}{\n \\node (v\\x) at (\\x*1.1-4*1.1,-1*1.5*\\yr)[\\y]{};\n }\n \\draw[dotted,rounded corners,draw] ($(v1.south west)-(0.125,0.125)$) rectangle ($(v7.north east)+(0.125,0.125)$);\n \n \\node (x) at (0*1.1,-2*1.5*\\yr)[xnodef]{};\n \\node (xp) at (6*1.1,-2*1.5*\\yr)[xnodef]{};\n\n \\node (y) at (5*1.1,-1.5*1.5*\\yr)[xnodef]{};\n \\node (yp) at (6*1.1,-1*1.5*\\yr)[xnodef]{};\n \\node (z) at (5*1.1,-0.5*1.5*\\yr)[xnodef]{};\n\n \\foreach \\x\/\\y in {3\/3,3\/5,5\/3,5\/2,7\/5,7\/6}{\n \\draw[xedge] (e\\x) to (v\\y);\n }\n \\foreach \\x in {1,...,9}{\n \\draw[xedge] (e\\x) to [out=-15,in=180](z);\n }\n \\draw[xedge] (z) to (yp);\n \\foreach \\x\/\\y in {2\/1,8\/7}{\n \\draw[draw=none] (e\\x) to node[midway]{$\\vdots$}(v\\y);\n 
}\n \\foreach \\x in {1,...,7}{\n \\draw[xedge] (v\\x) to [out=-15,in=180](y);\n }\n \\draw[xedge] (yp) to (y);\n \\foreach \\x in {1,...,7}{\n \\draw[xedge] (x) to (v\\x);\n }\n \\draw[xedge] (x) \n to (xp) to (yp);\n \n \\contourlength{0.09em}\n \\newcommand{\\ltb}[3]{\n \\node at (#1)[label={[align=center,font=\\footnotesize,color=black]\\contour*{white}{#2}},label={[align=center,font=\\scriptsize,color=black]-90:\\contour*{white}{#3}}]{};\n }\n \n \\ltb{e3}{$e=\\{i,j\\}$}{$\\in V_1$};\n \\ltb{e5}{$e'=\\{i,j'\\}$}{$\\in V_1$};\n \\ltb{e7}{$e''=\\{i',j\\}$}{$\\in V_1$};\n \n \\ltb{x}{$x$}{$\\in V_1$};\n \\ltb{xp}{$x'$}{$\\in V_1$};\n \\ltb{z}{$z$}{$\\in V_1$};\n \\ltb{y}{$y$}{$\\in V_1,V_2$};\n \\ltb{yp}{$y'$}{$\\in V_1,V_2$};\n \n \\ltb{v3}{$v_i$}{$\\in V_1,V_2$};\n \\ltb{v5}{$v_j$}{$\\in V_1,V_2$};\n \\end{tikzpicture}\n \\caption{Illustration for~\\DgbpAcr[3]{} with~$r=2$. ($k'=2m+n+k+4$)}\n \\label{fig:3dgbp}\n \\end{figure}\n Add the vertex sets~$V_E\\ensuremath{\\coloneqq} \\{v_e\\mid e\\in E\\}$,\n $V_G\\ensuremath{\\coloneqq}\\{v_i\\mid i\\in V\\}$,\n and~$V^*\\ensuremath{\\coloneqq} \\{x,x',y,y',z\\}$.\n Add all vertices to~$V_1$,\n and all vertices in~$V_G$ as well as~$y$ and~$y'$ to~$V_2$.\n Next,\n for each~$e=\\{i,j\\}\\in E$,\n connect~$v_e$ with~$v_i$, $v_j$, and~$z$.\n Denote by~$E_z$ the set of edges incident with~$z$.\n For each~$i\\in V$,\n connect~$v_i$ with~$x$ and~$y$.\n Denote by~$E_y$ the set of edges incident with~$y$.\n Finally,\n add the edge set~$E^*\\ensuremath{\\coloneqq}\\{\\{x,x'\\},\\{y,y'\\},\\{z,y'\\},\\{y',x'\\}\\}$.\n\\end{construction}\n\n\\begin{observation}\n \\label{obs:3dgbp-aux}\n Let~$\\mathcal{I}'$ be the instance obtained from some instance~$\\mathcal{I}$ using~\\cref{constr:3dgbp}.\n Then every solution~$F$ for~$\\mathcal{I}'$ (if there is one) contains the set of edges~$F'\\ensuremath{\\coloneqq} E_y \\cup E_z \\cup E^*$.\n Further, $\\dist_{G[F']}(u, v) \\le 3$ for every~$u, v \\in V^*$ and for every~$u \\in V_G \\cup V_E$ and~$v \\in \\{y, y', z, x'\\}$.\n\\end{observation}\n\n\\begin{proof}\n Let~$\\mathcal{I}'$ be a \\textnormal{\\texttt{yes}}-instance and\n let~$F$ be a solution.\n Since~$G[V_2]$ is a tree and~$E_y = E(G[V_2])$,\n we have~$E_y \\subseteq F$.\n If for some~$e \\in E$, $\\{v_e, z\\} \\notin F$, \n then~$\\dist_{G[F]}(v_e, z) > 3$.\n If~$\\{z,y'\\}\\not\\in F$, \n then~$\\dist_{G[F]}(z, y') > 3$.\n Thus,\n $E_z \\subseteq F$.\n If~$\\{y',x'\\}\\not\\in F$, \n then~$\\dist_{G[F]}(z, x') > 3$.\n If~$\\{x,x'\\}\\not\\in F$, \n then~$\\dist_{G[F]}(x, x') > 3$.\n Thus,\n $E^* \\subseteq F$.\n The distances claimed for~$G[F']$ are immediate.\n\\end{proof}\n\n\\begin{lemma}\n Let~$\\mathcal{I}'$ be the instance obtained from an instance~$\\mathcal{I}$ using \\cref{constr:3dgbp}.\n Then~$\\mathcal{I}$ is a \\textnormal{\\texttt{yes}}-instance if and only if~$\\mathcal{I}'$ is a \\textnormal{\\texttt{yes}}-instance.\n\\end{lemma}\n\\begin{proof}\n $(\\Rightarrow)\\quad${}\n Let~$S \\subseteq V$ be a vertex cover of~$G$ of size~$k$.\n We construct a solution~$F \\subseteq E'$ as follows.\n Let~$g \\colon E \\to V'$ be an auxiliary function with~$g(\\{i, j\\}) = v_{\\min(\\{i, j\\}\\cap S)}$.\n Let~$F_V\\ensuremath{\\coloneqq}\\{\\{v_i, x\\} \\mid i \\in S\\}$\n and let~$F_E\\ensuremath{\\coloneqq}\\{\\{v_e, g(e)\\} \\mid e \\in E\\}$.\n Then we set~$F\\ensuremath{\\coloneqq} F' \\cup F_V \\cup F_E$, \n where~$F'$ is as defined in \\cref{obs:3dgbp-aux}.\n Note that~$|F| = n + m + 4 + k + m = k'$. 
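\n In more detail,\n the sets~$F'$, $F_V$, and~$F_E$ are pairwise disjoint,\n and\n \\[ |F'| = |E_y| + |E_z| + |E^*| = n + m + 4, \\qquad |F_V| = |S| = k, \\qquad |F_E| = |E| = m. \\]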
\n Since~$S$ is a vertex cover, \n every vertex~$v_e \\in V_E$ is adjacent to a vertex~$v_i$, $i \\in S$, \n which in turn is adjacent to~$x$ in~$G[F]$ due to~$F_V$.\n Further, as every~$v_i$, $i \\in V \\setminus S$, \n has distance two to every vertex~$v_j$, $j \\in S$, \n we have~$\\dist_{G[F]}(v_i, v_e) \\le 3$ and~$\\dist_{G[F]}(v_i, x) \\le 3$ for every~$v_e \\in V_E$.\n Hence, $F$ is a solution for~$\\mathcal{I}'$.\n \n $(\\Leftarrow)\\quad${}\n Let~$F \\subseteq E'$ be a solution.\n Observe that in~$G$ the only path of length at most three from a vertex in~$V_E$ to~$x$ is via a vertex in~$V_G$ (and thus of length exactly two).\n Hence, in~$G[F]$, for every~$v_e \\in V_E$ there exists a~$v_i \\in V_G$ such that~$\\{v_e, v_i\\}, \\{v_i, x\\} \\in F$.\n It follows that\n $G[F]$ contains at least~$m$ edges between~$V_E$ and~$V_G$ and hence at most~$k$ edges between~$V_G$ and~$x$.\n We claim that~$S\\ensuremath{\\coloneqq}\\{i \\in V \\mid \\{v_i, x\\} \\in F\\}$ is a vertex cover in~$G$.\n First note that~$|S|\\leq |F|-(|F'|+m)\\leq k$,\n where~$F' \\subseteq F$ is the set of edges from~\\cref{obs:3dgbp-aux}.\n Suppose not,\n that is,\n $S$ is not a vertex cover and hence there is an edge~$e\\in E$ with~$e\\cap S=\\emptyset$.\n This contradicts the fact that~$\\dist_{G[F][V_1]}(v_e,x)\\leq 3$.\n\\end{proof}\n\n\\section{Conclusion, Discussion, and Outlook}\n\nWe modeled the problem of placing wildlife crossings\nwith three different (families of) problems:\n\\RgbpAcr{},\n\\CgbpAcr{},\nand~\\DgbpAcr{}.\nWe studied the practically desired cases~$d=1$ and $d=2$, \nas well as the cases~$d\\ge3$.\nFor all three problems,\nwe settled the classical as well as the parameterized complexity \n(regarding the number~$k$ of wildlife crossings and the number~$r$ of habitats),\nexcept for the parameterized complexity of \\RgbpAcr{} regarding~$r$.\n\n\\paragraph{Discussion.}\nWe derived an intriguing interrelation of\nconnection requirements,\ndata quality, \nand computational and parameterized complexity.\nWhile each problem admits its individual complexity fingerprint,\neach of them depends highly on the value of~$d$,\nthe level of the respective connectivity constraint.\nThis value can reflect the quality of the given data,\nsince naturally we assume that habitats are connected.\nThe worse the data, \nthe stronger is the relaxation of the habitats' connectivity requirement,\nand thus the larger is the value of~$d$.\nOur results show that having very small ($d=2$) data gaps already leads to the problems becoming \\cocl{NP}-hard,\nand that even larger gaps ($d\\geq 3$) yield \\W{1}-hardness (when parameterized by~$k$).\nHence, knowledge about habitats, connections, and data quality\ndecides which problem models can be applied,\nthus influencing the computation power required to determine an optimal placement of wildlife crossings.\nFor instance,\nfor larger networks,\nwe recommend ensuring data quality such that one of our proposed problems for~$d\\leq 2$ becomes applicable.\nThis\nin turn\nemphasizes the importance of careful habitat recognition.\n\nIn our models,\nwe neglected that different positions possibly lead to \\emph{different} costs of building bridges \n(i.e.,\\ edge~costs).\nThis neglect is justified when differentiating between types of bridges (and thus their costs)\nis not necessary \n(e.g., if the habitats' species share preferred types of green bridges,\nand the underlying human-made transportation lines are homogeneous).\nIn other scenarios,\nadditionally considering these costs may be beneficial for 
decision-making.\n\n\n\\paragraph{Outlook and open problems.}\nFor a final version,\nwe plan to continue our study with \napproximation and (refined) data reduction for our three problems,\nas well as planar input graphs,\nand to settle \\RgbpAcr[1]{}'s complexity regarding~$r$. \nNote that we obtained an~$\\O(rd)$-approximation for~\\RgbpAcr{},\nwhich does not directly transfer to the other two problems.\nFPT approximations may be lucrative.\nFor small~$d\\geq 2$,\nall problems allow for problems kernels where the number of vertices only depends on~$k$.\nIf more effective preprocessing is possible,\nthen data reduction on the habitats is required.\nIf the underlying street network is planar,\nthen the input graphs to our problems can be seen as their planar dual.\nThus, \ninput graphs may be planar in the applications.\n\nMoreover,\ninteresting directions for future work are,\nfor instance,\ndistinguishing types of green bridges to place,\ntaking into account possible movement directions within habitats (connectivity in directed graphs),\nidentifying real-world driven problem parameters leading to tractability,\nor the problem of maintaining and servicing green bridges over time under a possible seasonal change of wildlife habitats \n(temporal graph modeling could fit well).\n\n\\bigskip \n\n{\n \\let\\clearpage\\relax\n \\renewcommand{\\url}[1]{\\href{#1}{$\\ExternalLink$}}\n ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe study of special knots in contact three manifolds provided great insight\ninto the geometry and topology of three manifolds. In particular, the study\nof Legendrian knots (ones tangent to the contact planes) has been useful\nin distinguishing homotopic contact structures on $T^3$ \\cite{k} and\nhomology spheres \\cite{am}. Moreover, Rudolph \\cite{r} has shown\nthat invariants of Legendrian knots can be useful in understanding\nslicing properties of knots. The first example of the use of knot \ntheory in contact topology was in the work of Bennequin. In \\cite{be}\nBennequin used transversal knots (ones transversal to the contact\nplanes) to show that $\\hbox{$\\mathbb R$} ^3$ has exotic contact structures. This was the\ngenesis of Eliashberg's insightful tight versus overtwisted dichotomy in three dimensional\ncontact geometry.\n\nIn addition to its importance in the understanding of contact geometry, the study\nof transversal and Legendrian knots is quite interesting in its own right.\nQuestions concerning transversal and Legendrian knots have most prominently appeared\nin \\cite{el:knots} and Kirby's problem list \\cite{kirby}.\nCurrently there are very few general theorems concerning the classification\nof these knots. In \\cite{el:knots}, Eliashberg classified transversal unknots\nin terms of their self-linking number. In \\cite{ef}, Legendrian unknots\nwere similarly classified. In this paper we will extend this classification \nto positive transversal torus knots\\footnote{By ``positive transversal torus knot'' we mean a positive (right handed) \n\ttorus\n\tknot that is transversal to a contact structure.}. In particular we prove:\n\\begin{ttm}\n\tPositive transversal \n\ttorus knots\n\tare transversely isotopic if and only if they have the same topological knot\n\ttype and the same self-linking number.\n\\end{ttm}\nIn the process of proving this result we will examine {\\em transversal stabilization}.\nThis is a simple method for creating one transversal knot from another. 
By \nshowing that all positive transversal torus knots whose self-linking number is less than \nmaximal come from this stabilization process we are able to reduce the above theorem\nto the classification of positive transversal torus knots with maximal self-linking number.\nStabilization also provides a general way to approach the classification problem for \nother knot types. For example, we can reprove Eliashberg's classification\nof transversal unknots using stabilization ideas and basic contact topology.\n\nIt is widely believed that the\nself-linking number is not a complete invariant for transversal knots. However,\nas of the writing of this paper, there is no known knot type whose transversal realizations\nare not determined by their self-linking number. For Legendrian knots, in contrast,\nEliashberg and Hofer (currently unpublished)\nand Chekanov \\cite{chek} have produced examples of Legendrian knots that are not determined\nby their corresponding invariants.\n\nIn Section~\\ref{basic} we review some standard facts concerning contact geometry\non three manifolds. In Section~\\ref{sec:main} we prove our main theorem modulo some\ndetails concerning the characteristic foliations on tori which are proved in\nSection~\\ref{tori} and some results on stabilizations proved in Section~\\ref{stab}.\nIn the last section we discuss some open questions.\n\n\\rk{Acknowledgments}\nThe author gratefully acknowledges the support of an NSF \nPost-Doctoral Fellowship (DMS--9705949) and Stanford University.\nConversations with Y Eliashberg and E Giroux were helpful\nin preparing this paper.\n\n\n\\section{Contact structures in three dimensions}\\label{basic}\n\nWe begin by recalling some basic facts from contact topology. For a more detailed\nintroduction, see \\cite{a:etall, gi:survey}.\nRecall an orientable plane field $\\xi$ is a \\dfn{contact structure} on a three manifold\nif $\\xi=\\hbox{ker } \\alpha$ where $\\alpha$ is a nondegenerate 1--form for which\n$\\alpha\\wedge d\\alpha\\not=0$. Note $d\\alpha$ induces an orientation on $\\xi$.\nTwo contact structures are called \\dfn{contactomorphic} if there is a diffeomorphism\ntaking one of the plane fields to the other.\nA contact structure $\\xi$ induces a singular\nfoliation on a surface $\\Sigma$ by integrating the singular line field\n$\\xi\\cap T\\Sigma$. This is called the \\dfn{characteristic foliation} and is denoted\n$\\Sigma_\\xi$. Generically, the singularities are elliptic (if local degree is 1)\nor hyperbolic (if the local degree is $-1$). If $\\Sigma$ is oriented then the singularities\nalso have a sign. A singularity is positive (respectively negative) if the orientations\non $\\xi$ and $T\\Sigma$ agree (respectively disagree) at the singularity. \n\n\\begin{lem}[Elimination Lemma \\cite{gi:convex}]\\label{lem:elimination}\n\tLet $\\Sigma$ be a surface in a contact $3$--manifold $(M,\\xi)$. 
Assume\n\tthat $p$ is an elliptic and $q$ is a hyperbolic singular point in \n\t$\\Sigma_\\xi$, they both have the same sign and there is a leaf $\\gamma$\n\tin the characteristic foliation $\\Sigma_\\xi$ that connects $p$ to $q$.\n\tThen there is a $C^0$--small isotopy $\\phi\\colon\\thinspace \\Sigma\\times[0,1]\\to M$ such that\n\t$\\phi_0$ is the inclusion map, $\\phi_t$ is fixed on $\\gamma$ and outside any \n\t(arbitrarily small) pre-assigned\n\tneighborhood $U$ of $\\gamma$ and $\\Sigma'=\\phi_1(\\Sigma)$ has no singularities\n\tinside $U$.\n\\end{lem}\n\nIt is important to note that after the above cancellation there is a curve in the \ncharacteristic foliation on which the singularities had previously sat. \nIn the case of positive\nsingularities this curve will consist of the (closure of the) stable manifolds of the hyperbolic point\nand {\\em any} arc leaving the elliptic point (see \\cite{ef, e:dis}), and similarly for the negative singularity case.\nOne may also reverse this process and add a canceling pair of singularities along \na leaf in the characteristic foliation. It is also important to note:\n\n\\begin{lem}\n\tThe germ of the contact structure $\\xi$ along a surface $\\Sigma$ is determined \n\tby $\\Sigma_\\xi$.\n\\end{lem}\n\nNow recall that a contact structure $\\xi$ on $M$ is called \\dfn{tight} if no disk embedded in \n$M$ contains a limit cycle in its characteristic foliation, otherwise it is called \n\\dfn{overtwisted}. The standard contact structure on $S^3$,\ninduced from the complex tangencies to $S^3=\\partial B^4$ where $B^4$ is the\nunit 4--ball in $\\hbox{$\\mathbb C$} ^2$, is tight.\n\nA closed curve $\\gamma\\colon\\thinspace S^{1}\\to M$ in a contact manifold $(M,\\xi)$ is called \n\\dfn{transversal} if $\\gamma'(t)$ is transverse to $\\xi_{\\gamma(t)}$ \nfor all $t\\in S^{1}$. Notice a transversal curve can be \n\\dfn{positive} or \\dfn{negative} according as $\\gamma'(t)$ agrees with \nthe co-orientation of $\\xi$ or not. We will restrict our attention to \npositive transversal knots (thus in this paper ``transversal'' means \n``positive transversal''). It can be shown that any curve can be made \ntransversal by a $C^{0}$ small isotopy. It will be useful to note:\n\n\\begin{lem}[See \\cite{el:knots}]\\label{extend}\n\tIf $\\psi_t\\colon\\thinspace S^1\\to M$ is a transversal isotopy, then there\n\tis a contact isotopy $f_t\\colon\\thinspace M\\to M$ such that $f_t\\circ \\psi_0=\\psi_t$.\n\\end{lem}\n\nGiven a transverse knot $\\gamma$ in $(M,\\xi)$ that bounds a surface $\\Sigma$\nwe define the \\dfn{self-linking number}, $l(\\gamma)$, of $\\gamma$ as follows: \ntake a nonvanishing vector field $v$ in $\\xi\\vert_{\\gamma}$ that \nextends to a nonvanishing vector field in $\\xi\\vert_{\\Sigma}$ and let $\\gamma'$\nbe $\\gamma$ slightly pushed along $v$. Define\n$$l(\\gamma,\\Sigma)=I(\\gamma',\\Sigma),$$\nwhere $I(\\,\\cdot\\,,\\,\\cdot\\,)$ is the oriented intersection number.\nThere is a nice relationship between $l(\\gamma,\\Sigma)$ and the \nsingularities of the characteristic foliation of $\\Sigma$. Let \n$d_{\\pm}=e_{\\pm}-h_{\\pm}$ where $e_{\\pm}$ and $h_{\\pm}$ are the \nnumber of $\\pm$ elliptic and hyperbolic points in the characteristic \nfoliation $\\Sigma_{\\xi}$ of $\\Sigma$, respectively. 
In \\cite{be} it \nwas shown that\n\\begin{equation}\n\tl=d_{-}-d_{+}.\n\\end{equation}\nWhen $\\xi$ is a {\\em tight} contact structure and $\\Sigma$ is a \n{\\em disk},\nEliashberg \\cite{el:twenty} has shown, using the elimination lemma, how to \neliminate all the positive hyperbolic and negative elliptic points \nfrom $\\Sigma_{\\xi}$. Thus in a \ntight contact structure when $\\gamma$ is an unknot $l(\\gamma,\\Sigma)$ is always negative.\nMore generally one can show (see \\cite{be, el:twenty}) that\n\\begin{equation}\\label{lbound}\n\tl(\\gamma)\\leq -\\chi(\\Sigma),\n\\end{equation}\nwhere $\\Sigma$ is a Seifert surface for $\\gamma$ and $\\chi(\\Sigma)$ is its Euler number.\n\nAny odd negative integer can be realized as the self-linking number for some \ntransversal unknot.\nThe first general result concerning the classification of transversal knots\nwas the following:\n\n\\begin{thm}[Eliashberg \\cite{el:knots}]\\label{unknots}\n\tTwo transversal unknots are transversely isotopic if and only if \n\tthey have the same self-linking number.\n\\end{thm}\n\nLet $\\mathcal{T}$ be the transversal isotopy classes of transversal knots in $S^3$ with its unique\ntight contact structure. Let $\\mathcal{K}$ be the isotopy classes of knots in $S^3$.\nGiven a transversal knot $\\gamma\\in \\mathcal{T}$ we have two pieces of information: its knot type \n$[\\gamma]\\in\\mathcal{K}$ and its self-linking number $l(\\gamma)\\in\\hbox{$\\mathbb Z$} $. Define \n\\begin{equation}\\label{tmap}\n\t\\phi\\colon\\thinspace \\mathcal{T}\\to\\mathcal{K}\\times \\hbox{$\\mathbb Z$} : \\gamma\\mapsto ([\\gamma],l(\\gamma)).\n\\end{equation}\nThe main questions concerning transversal knots can be phrased in terms of the image\nof this map and preimages of points. In particular the above results say that $\\phi$\nis onto \n$$U=[\\hbox{unknot}]\\times\\{\\hbox{negative odd integers}\\}$$ and $\\phi$ is one-to-one\non $\\phi^{-1}(U)$.\n\nWe will also need to consider Legendrian knots. A knot $\\gamma$ is a \\dfn{Legendrian knot} if\nit is tangent to $\\xi$. The contact structure $\\xi$ defines a canonical framing on a Legendrian\nknot $\\gamma$. If $\\gamma$ is null homologous we may associate a number to this framing which\nwe call the \\dfn{Thurston--Bennequin invariant} of $\\gamma$ and denote it $\\hbox{tb}(\\gamma)$.\nIf we let $\\Sigma$ be the surface exhibiting the null homology of $\\gamma$ then we may trivialize\n$\\xi$ over $\\Sigma$ and use this trivialization to measure the rotation of $\\gamma'(t)$ around\n$\\gamma$. This number $r(\\gamma)$ is called the \\dfn{rotation number} of $\\gamma$. Note that\nthe rotation number depends on an orientation on $\\gamma$. From an oriented \nLegendrian knot $\\gamma$ one can obtain canonical positive and negative transversal knots\n$\\gamma_\\pm$ by pushing $\\gamma$ by vector fields tangent to $\\xi$ but transverse to\n$\\gamma'(t)$. One may compute\n\\begin{equation}\n\tl(\\gamma_\\pm)=\\hbox{tb}(\\gamma)\\mp r(\\gamma).\n\\end{equation}\nThis observation combined with \\eqn{lbound} implies\n\\begin{equation}\\label{tb-bound}\n\t\\hbox{tb}(\\gamma)+|r(\\gamma)|\\leq -\\chi(\\Sigma).\n\\end{equation}\nConsider an oriented (nonsingular) foliation $\\mathcal{F}$ on a torus $T$. The foliation is \nsaid to have a \\dfn{Reeb component} if two oppositely oriented periodic orbits cobound an\nannulus containing no other periodic orbits. \n\n\\begin{lem}\\label{curveinT}\n\tConsider a torus $T$ in a contact three manifold $(M,\\xi)$. 
If the characteristic foliation\n\ton $T$ is nonsingular and contains no Reeb components then \n\tany closed curve on $T$ may be isotoped to be transversal to $T_\\xi$ or into a leaf\n\tof $T_\\xi$. Moreover there is at most one homology class in $H_1(T)$ that can\n\tbe realized by a leaf of $T_\\xi$. \n\\end{lem}\n\nNow let $\\xi$ be a tight contact structure on a solid torus $S$ with nonsingular characteristic\nfoliation on it boundary $T=\\partial S$. It is easy to arrange for $T_\\xi$ to have no Reeb \ncomponents \\cite{ml}.\nSince $\\xi$ is tight the lemma above implies the meridian $\\mu$\ncan be made transversal to $T_\\xi$. We say $S$ has self-linking number $l$ if $l=l(\\mu)$\n(ie, the self-linking number of $S$ is the self-linking number of its meridian).\n\n\\begin{thm}[Makar--Limanov \\cite{ml}]\\label{solid_tori}\n\tAny two tight contact structures on $S$ which induce the same nonsingular\n\tfoliation on the boundary and have self-linking number $-1$ are contactomorphic.\n\\end{thm}\n\n\n\\section{Positive transversal torus knots}\\label{sec:main}\n\nLet $U$ be an unknot in a 3--manifold $M$, $D$ an embedded disk that it bounds\nand $V$ a tubular neighborhood of $U$. The boundary $T$ of $V$ is an embedded torus in $M$,\nwe call such a torus a \\dfn{standardly embedded torus}. \nLet $\\mu$ be the unique curve on $T$ that bounds a disk in $V$ and\n$\\lambda=D\\cap V$. Orient $\\mu$ arbitrarily and \nthen orient $\\lambda$ so that $\\mu, \\lambda$ form a positive basis for $H_1(T)$ where \n$T$ is oriented as the boundary of $V$. Up to homotopy any curve in \n$T$ can be written as $p\\mu + q\\lambda$, we shall denote this curve by $K_{(p,q)}$. \nIf $p$ and $q$ are relatively prime\nthen $K_{(p,q)}$ is called a {\\em $(p,q)$--torus knot.} If $pq>0$ we say $K(p,q)$ is a \n\\dfn{positive} torus knot otherwise we call it \\dfn{negative}. One\nmay easily compute that the Seifert surface of minimal genus for $K_{(p,q)}$\nhas Euler number $|p|+|q|-|pq|$. Thus for a transversal torus knot Equation~\\ref{lbound} implies \n\\begin{equation}\n\tl(K_{(p,q)})\\leq -|p|-|q|+|pq|.\n\\end{equation}\nIn fact, if $\\overline{l}_{(p,q)}$ denotes the maximal self-linking number for a \ntransversal $K_{(p,q)}$ then one may easily check that\n\\begin{equation}\n\t\\overline{l}_{(p,q)}=-p-q+pq,\n\\end{equation}\nif $p,q>0$, ie, for a positive torus knot. (Note: for a positive transversal torus knot \nLemma~\\ref{basis}\nsays we have $p,q>0$ not just $pq>0$.) From the symmetries involved in the definition of\na torus knot we may assume that $p>q$, which we do throughout the rest of the paper.\nWe now state our main theorem.\n\n\\begin{thm}\\label{main}\n\tPositive transversal torus knots in a tight contact structure\n\tare determined up to transversal isotopy by \n\ttheir knot type and their self-linking number.\n\\end{thm}\n\n\\begin{rem}\n\t{\\em We may restate this theorem by saying\n\tthe map $\\phi$ defined in equation \\ref{tmap} is one-to-one when restricted to\n\t$$(\\hbox{pr}\\circ \\phi)^{-1}(\\hbox{positive torus knots})$$ \n\t(here $\\hbox{pr}\\colon\\thinspace \\mathcal{K}\\times \\hbox{$\\mathbb Z$} \\to \\mathcal{K}$ is projection). 
\n\tMoreover, the image of $\\phi$ restricted to the above set is \n\t$G=\\cup_{(p,q)} K_{(p,q)}\\times N(p,q)$ where the union\n\tis taken over relatively prime positive $p$ and $q$, and \n\t$N(p,q)$ is the set of odd integers less than or equal to $-p-q+pq$.}\n\\end{rem}\n\nWe first prove the following auxiliary result:\n\n\\begin{prop}\\label{aux}\n\tTwo positive transversal $(p,q)$--torus knots $K$ and $K'$ in a tight contact\n\tstructure with maximal self-linking number (ie, $l(K)=l(K')=\\overline{l}_{(p,q)}$) \n\tare transversally isotopic.\n\\end{prop}\n\n\\proof\nLet $T$ and $T'$ be tori standardly embedded in $M$ on which $K$ and $K'$, \nrespectively, sit.\n\n\\begin{lem}\\label{nonsingular}\n\tIf the self-linking number of $K$ is maximal then $T$ may be\n\tisotoped relative to $K$ so that the characteristic\n\tfoliation on $T$ is nonsingular.\n\\end{lem}\n\nThis lemma and the next are proved in the following section.\n\n\\begin{lem}\\label{isotopic}\n\tTwo transversal knots on a torus $T$ with nonsingular characteristic foliation\n\tthat are homologous are transversally isotopic, except possibly when there is a closed leaf\n\tin the foliation isotopic to the transversal knots.\n\\end{lem}\n\nOur strategy is to isotop $T$ onto $T'$, keeping $K$ and $K'$ transverse to $\\xi$, \nso that $K$ and $K'$ are\nhomologous, and thus transversally isotopic. We now show that $T$\ncan be isotoped into a standard form keeping $K$ transverse (and similarly for\n$K'$ and $T'$ without further mention). Let $V$ be \nthe solid torus that $T$ bounds (recall we are choosing $V$ so that $p>q$). \nLet $D_\\mu$ and $D_\\lambda$ be the disk that $\\mu$ and $\\lambda$\nrespective bound. Now observe:\n\n\\begin{lem}\\label{basis}\n\tWe may take $\\mu$ and $\\lambda$ to be positive transversal curves and with this\n\torientation $\\mu,\\lambda$ form a positive basis for $T=\\partial V$.\n\\end{lem}\n\n\\proof\nClearly we may take $\\mu$ and $\\lambda$ to be positive transversal knots, for if we could\nnot then \\lm{curveinT} implies that we may isotop one of them to a closed leaf in $T_\\xi$ contradicting\nthe tightness of $\\xi$. Thus we are left to see that \n$\\mu, \\lambda$ is a positive basis. Assume this is not the case.\nBy isotoping $T$ slightly we may assume that $T_\\xi$ has closed leaf\n(indeed if $T_\\xi$ does not already have a closed leaf then the isotopy will give an \nintervals worth of rotation numbers, and hence\nsome rational rotation numbers, for the return map\ninduced on $\\mu$ by $T_\\xi$). Let $C$ be one of these closed leaves and let $n=\\lambda\\cdot C$\nand $m=\\mu\\cdot C$. Note $n$ and $m$ are both positive since $\\mu$ and $\\lambda$ are\npositive transversal knots. Since $\\mu, \\lambda$ is not a positive basis $C$ is an $(n,m)$--torus knot. \nIn particular $C$ is a positive torus knot. Moreover, the framing on $C$ induced by $\\xi$ is\nthe same as the framing induced by $T$. Thus $\\hbox{tb}(C)=mn$ contradicting \\eqn{tb-bound}.\nSo $\\mu,\\lambda$ must be a positive basis for $T$.\n\\endproof\n\nNow let $m=l(\\mu)$ and $l=l(\\lambda)$ and recall $m,l\\leq -1$.\n\n\\begin{lem}\\label{l-formula}\n\tIf $\\gamma$ is a transversal $(p,q)$ knot on $T$ (with nonsingular \n\tcharacteristic foliation) then \n\t\\begin{equation}\\label{eqn:l-formula}\n\t\tl(\\gamma)=pm+ql+pq.\n\t\\end{equation}\n\\end{lem} \n\n\\proof\nLet $v$ be a section of $\\xi$ over an open 3--ball containing $T$ and its meridional and longitudinal\ndisks. 
If $C$ is a curve on $T$ then define\n$f(C)$ to be the framing of $\\xi$ over $C$ induced by $v$ relative to the framing of $\\xi$ \nover $C$ induced by $T$. Note $f$ descends to a map on $H_1(T)$ and $f(A+B)=f(A)+f(B)$\nwhere $A,B\\in H_1(T)$. One easily computes $f(\\mu)=m$ and $f(\\lambda)=l$. Thus \n$f(p\\mu+q\\lambda)=pm+ql$. Now for a transversal curve $C$ on $T$ the normal bundle to $C$ can be\nidentified with $\\xi$ thus $f(C)$ differs\nfrom $l(C)$ by the framing induced on $C$ by $T$ relative to the framing induced\non $C$ by its Seifert surface. So $l(C)=f(C)+pq=pm+ql+pq$. \n\\endproof\n\nThus since $K$ has maximal self-linking number we must have $m=l=-1$. \nNow by \\tm{solid_tori} we may find\na contactomorphism from $V$ to $S_f=\\{(r,\\theta,\\phi)\\in \\hbox{$\\mathbb R$} ^2\\times S^1|\nr\\leq f(\\theta,\\phi)\\}$ for some positive function $f\\colon\\thinspace T^2\\to \\hbox{$\\mathbb R$} $, with the \nstandard tight contact structure $\\hbox{ker}(d\\phi+r^2\\, d\\theta)$. \n\nClearly $T=\\partial S_f$ may be isotoped to $S_\\epsilon=\\{(r,\\theta,\\phi)\\in \\hbox{$\\mathbb R$} ^2\\times S^1|\nr<\\epsilon\\}$ for arbitrarily small $\\epsilon>0$. We now show this isotopy may be done\nkeeping our knot $K$ transverse to the characteristic foliation. To a foliation on $\\partial S_f$ we may\nassociate a real valued rotation number $r(S_f)$ for the return map on $\\mu$ induced\nby $(\\partial S_f)_\\xi$ (see \\cite{ml}). \nFor a standardly embedded torus this number must be negative since if not then some\nnearby torus would have a positive $(r,s)$ torus knot as a closed leaf in its characteristic\nfoliation violating the Bennequin inequality (as in the proof of Lemma~\\ref{basis}). \nSo as we isotop $\\partial S_f$ to\n$\\partial S_\\epsilon$ we may keep our positive torus knot transverse to the characteristic foliation\nby Lemma~\\ref{curveinT} (since closed leaves in $(\\partial S_f)_\\xi$ have slope $r(S_f)$ and\n$K$ has positive slope). \nThus we assume that the solid torus $V$ is contactomorphic to $S_\\epsilon$.\nIf $C$ is the core of $V (=S_\\epsilon)$ then it is a transversal unknot with self-linking \n$l(\\lambda)=-1$.\n\nFinally, let $V$ and $V'$ be the solid tori associated to the torus knots $K$ and $K'$ and\nlet $C$ and $C'$ be the cores of $V$ and $V'$. Now since $C$ and $C'$ are unknots with the\nsame self-linking number they are transversely isotopic. Thus we may think of $V$ and $V'$ as\nneighborhoods of the same transverse curve $C=C'$. From above, $V$ and $V'$ may both be shrunk \nto be arbitrarily small neighborhoods of $C$ keeping $K$ and $K'$ transverse to $\\xi$. Hence\nwe may assume that $V$ and $V'$ both sit in a neighborhood of $C$ which is contactomorphic\nto, say, $S_c$ (using the notation from the previous paragraph). By shrinking $V$ and $V'$ \nfurther we may assume they are the tori $S_\\epsilon$ and $S_{\\epsilon'}$ inside $S_c$\nfor some $\\epsilon$ and $\\epsilon'$. Note that this is not immediately obvious but follows\nfrom the fact that a contactomorphism from the standard model $S_f$ for, say, $V$ to \n$V\\subset S_c$ may be constructed to take a neighborhood of the core of $S_f$ to a\nneighborhood of the core of $S_c$.\nThis allows us to finally conclude that we may isotop\n$V$ so that $V=V'$. 
Now since $K$ and $K'$ represent the same homology class on $\\partial V$ and\nthey are both transverse to the foliation we may use Lemma~\\ref{isotopic} to transversely \nisotop $K$ to $K'$.\n\\endproof\n\nA transversal knot $K$ is called a \\dfn{stabilization} of a transversal knot $C$ if\n$K=\\alpha\\cup A$, $C=\\alpha\\cup A'$ and $A\\cup A'$ cobound a disk\nwith only positive elliptic and negative hyperbolic singularities (eg Figure~\\ref{fig:stab}). \n\\begin{figure}[ht]\n\t{\n\\epsfysize=2in\\centerline{\\relabelbox\\small\n\\epsfbox{stab.eps}\n\\relabel {A}{$A$}\n\\relabel {A'}{$A'$}\n\\adjustrelabel <-4pt,-1pt> {e}{$e_+$}\n\\relabel {h}{$h_-$}\n\\endrelabelbox}}\n\t\\caption{Stabilization disk}\n\t\\label{fig:stab}\n\\end{figure}\nWe say $K$ is obtained from $C$ by a \\dfn{single stabilization} if $K$ is a stabilization\nof $C$ and $l(K)=l(C)-2$ (ie, the disk that $A\\cup A'$ cobound is the one shown in \nFigure~\\ref{fig:stab}).\nThe key observation concerning stabilizations is the following:\n\n\\begin{thm}\\label{tm-stab}\n\tIf the transversal knots $K$ and $K'$ are single stabilizations of transversal knots\n\t$C$ and $C'$ then $K$ is transversely isotopic to $K'$ if $C$ is transversely \n\tisotopic to $C'$.\n\\end{thm}\n\nThis theorem will be proved in Section~\\ref{stab}. The proof of \\tm{main} is completed by an inductive\nargument using the following observation.\n\n\\begin{lem}\\label{canstab}\n\tIf $K$ is a positive transversal $(p,q)$--torus knot and $l(K)<\\overline{l}_{(p,q)}$ then $K$\n\tis a single stabilization of a $(p,q)$--torus knot with larger self-linking number.\n\\end{lem}\n\nThe proof of this lemma will be given in the next section following the proof of \\lm{nonsingular}.\n\n\\section{Characteristic foliations on tori}\\label{tori}\n\nIn this section we prove various results stated in Section~\\ref{sec:main} related to \nfoliations on tori.\nLet $T$ be a standardly embedded torus in $M^3$ and $K$ a positive $(p,q)$--torus\nknot on $T$ that is transverse to a tight contact structure $\\xi$. We are now ready to prove:\n\n\\proof[\\bf Lemma~\\ref{nonsingular}]\n{\\sl\n\tIf the self-linking number of $K$ is maximal then $T$ may be\n\tisotoped relative to $K$ so that the characteristic\n\tfoliation on $T$ is nonsingular.\n}\n\n\\proof\nBegin by isotoping $T$ relative to $K$ so that the number of singularities \nin $T_\\xi$ is minimal. Any singularities that are left must occur in pairs:\na positive (negative) hyperbolic $h$ and elliptic $e$ point connected by a stable\n(unstable) manifold $c$. Moreover, since $h$ and $e$ cannot be canceled without\nmoving $K$ we must have $c\\cap K\\not=\\emptyset$.\n\nNow $T\\setminus K$ is an annulus $A$ with the characteristic foliation flowing\nout of one boundary component and flowing in the other. Let $c'$ be the component\nof $c$ connected to $h$ in $A$. \nWe can have no periodic orbits in $A$ since such an orbit\nwould be a Legendrian $(p,q)$--torus knot with Thurston--Bennequin invariant\n$pq$ contradicting \\eqn{tb-bound}.\nThus the other stable (unstable) manifold $c''$ of $h$ will have to enter (exit) $A$ through the same\nboundary component. The manifolds $c'$ and $c''$ separate off a disk $D$ from $A$.\nWe may use $D\\subset T$ to push the arc $K\\cap D$ across $D$ to obtain another\ntransverse $(p,q)$--torus knot $K'$. It is not hard to show that\n$K$ is a stabilization of $K'$. In particular $l(K')>l(K)$, contradicting\nthe maximality of $l(K)$. 
Thus we could not have had any singularities \nleft after our initial isotopy.\n\\endproof\n\nThe above proof provides some insight into \\lm{canstab}. Recall:\n\n\\proof[{\\bf \\lm{canstab}}]\n{\\sl\n\tIf $K$ is a positive transversal $(p,q)$--torus knot and $l(K)<\\overline{l}_{(p,q)}$ then $K$\n\tis a single stabilization of a $(p,q)$--torus knot with larger self-linking number.\n}\n\n\\proof\nWe begin by noting that if $K$ is a stabilization of another transversal knot then it\nis also a single stabilization of some transversal knot. Thus we just demonstrate that \n$K$ is a stabilization of some transversal knot.\n\nFrom the above proof it is clear that if we cannot eliminate all the singularities in\nthe characteristic foliation of the torus $T$ on which $K$ sits then there is a disk on \nthe torus which exhibits $K$ as a stabilization. \n\nIf we can remove all the singularities from $T$ then by \\lm{l-formula} we know\nthat the self-linking number of, say, the meridian $\\mu$ is less than $-1$. Thus $\\mu$ bounds a \ndisk $D_\\mu$ containing only positive elliptic and at least one negative hyperbolic singularity. \nTo form a positive transversal torus knot $K''$ we can take $p$ copies of the meridian $\\mu$ and \n$q$ copies of the longitude $\\lambda$\nand ``add'' them (ie, resolve all the intersection points keeping the curve \ntransverse to the characteristic foliation). \nThis will produce a transversal knot on $T$ isotopic to $K$ and thus transversely\nisotopic to it. Moreover, we may use the graph of singularities on $D_\\mu$ to show that $K''$, and hence $K$, \nis a stabilization. \n\\endproof\n\nWe end this section by establishing (a more general version of)\nLemma~\\ref{isotopic}.\n\\begin{lem}\n\tSuppose that $\\mathcal{F}$ is a nonsingular foliation on a torus\n\t$T$ and $\\gamma$ and $\\gamma'$ are two simple closed curves on $T.$\n\tIf $\\gamma$ and $\\gamma'$ are homologous and transverse to $\\mathcal{F}$\n\tthen they are isotopic through simple closed curves transverse to \n\t$\\mathcal{F},$ except possibly if $\\mathcal{F}$ has a closed leaf \n\tisotopic to $\\gamma.$\n\\end{lem}\n\n\\proof\nWe first note that if $\\gamma$ and $\\gamma'$ are disjoint and there are no closed\nleaves isotopic to them then the annulus that they cobound will provide the desired transverse\nisotopy. Thus we are left to show that we can make $\\gamma$ and $\\gamma'$ disjoint.\nWe begin by isotoping them so they intersect transversely. Now assume we have transversely \nisotoped them so that the number of their intersection points is minimal. We wish to show \nthis number is zero. Suppose not; then there is an even number of intersection points\n(since homologically their intersection is zero). \n\nUsing a standard innermost arc argument we may find a disk $D\\subset T$ such that\n$\\partial D$ consists of two arcs, one a subarc of $\\gamma$ and the other a subarc of $\\gamma'$.\nWe can use the disk $D$ to guide a transverse isotopy of $\\gamma'$ that will decrease the\nnumber of intersections of $\\gamma$ and $\\gamma'$, contradicting our assumption of minimality.\nTo see this, note that the local orientability of the foliation implies that we can define\na winding number of $\\mathcal{F}$ around $\\partial D$. Moreover since $\\partial D$ is contractible\nand the foliation is nonsingular this winding number must be zero. 
Thus the foliation on \n$D$ must be diffeomorphic to the one shown in Figure~\\ref{fig:diskfol} where the desired isotopy is apparent.\n\\begin{figure}[ht]\n\t{\\epsfysize=2in\\centerline{\\epsfbox{diskfol.eps}}}\n\t\\caption{Foliation on $D$}\n\t\\label{fig:diskfol}\n\\end{figure}\n\\endproof\n\n\n\\section{Stabilizations of transversal knots}\\label{stab}\n\nThe main goal of this section is to prove \\tm{tm-stab}:\n\n\\proof[{\\bf \\tm{tm-stab}}]\n{\\sl\n\tIf the transversal knots $K$ and $K'$ are single stabilizations of transversal knots\n\t$C$ and $C'$ then $K$ is transversely isotopic to $K'$ if $C$ is transversely \n\tisotopic to $C'$.\n}\n\n\\proof\nSince $C$ and $C'$ are transversely isotopic we can assume that $C=C'$. \nLet $D$ and $D'$ be the disks that exhibit $K$ and $K'$ as stabilizations of\n$C$. Let $e,h$ and $e',h'$ be the elliptic\/hyperbolic pairs on $D$ and $D'$.\nFinally, let $\\alpha$ and $\\alpha'$ be the Legendrian arcs formed by the (closure of the) \nunion of stable manifolds of $h$ and $h'$. Using the characteristic foliation \non $D$ we may transversely isotop $K\\setminus C$ to lie arbitrarily close to $\\alpha$\n(and similarly for $K'$ and $\\alpha'$). We are thus done by the following simple lemmas.\n\n\\begin{lem}\n\tThere is a contact isotopy preserving $C$ taking $\\alpha\\cap C$ to \n\t$\\alpha'\\cap C$.\n\\end{lem} \n\nWorking in a standard model for a transverse curve\nthis lemma is quite simple to establish.\nThus we may assume that $\\alpha$ and $\\alpha'$ both touch $C$ at the same point.\n\\begin{lem}\n\tThere is a contact isotopy preserving $C$ taking $\\alpha$ to $\\alpha'$.\n\\end{lem}\nOnce again one can use a Darboux chart to check this lemma (for some details see \\cite{ef}).\n\\begin{lem}\n\tAny two single stabilizations of $C$ along a fixed Legendrian arc are transversely \n\tisotopic.\n\\end{lem}\nWith this lemma our proof of Theorem~\\ref{tm-stab} is complete.\n\\endproof\n\nWe now observe that using \\tm{tm-stab} we may reprove Eliashberg's result concerning \ntransversal unknots. The reader should note that this ``new proof'' is largely just\na reordering\/rewording of Eliashberg's proof.\n\n\\begin{thm}\n\tTwo transversal unknots are transversally isotopic if and only if they\n\thave the same self-linking number.\n\\end{thm}\n\n\\proof\nUsing \\tm{tm-stab} we only need to prove that two transversal unknots with self-linking\nnumber $-1$ are transversally isotopic, since by looking at the characteristic foliation\non a Seifert disk it is clear that a transversal unknot with self-linking number less\nthan $-1$ is a single stabilization of another unknot. But given a transversal unknot with self-linking\nnumber $-1$ we may find a disk that it bounds with precisely one positive elliptic \nsingularity in its characteristic foliation. Using the characteristic foliation on the disk \nthe unknot may be transversely isotoped into an arbitrarily small neighborhood of the elliptic \npoint. Thus given two such knots we may now find a contact isotopy of taking the elliptic point on\none of the Seifert disks to the elliptic point on the other. Since the Seifert disks are tangent\nat their respective elliptic points we may arrange that they agree in a neighborhood of the \nelliptic points. Now by shrinking the Seifert disks more we may assume that both unknots sit on the\nsame disk. 
It is now a simple matter to transversely isotop one unknot to the other.\n\\qed\n\n\\section{Concluding remarks and questions}\n\nWe would like to note that many of the techniques in this paper work for negative torus knots\nas well (though the proofs above do not always indicate this). There are two places where we cannot\nmake the above proofs work for negative torus knots, they are:\n\\begin{itemize}\n\\item From Equation~\\ref{eqn:l-formula} we cannot conclude that the self-linking numbers of\t\n\t$\\mu$ and $\\lambda$ are $-1$ when $l(K_{(p,q)})$ is maximal as we could for positive torus\n\tknots.\n\\item We cannot always conclude that a negative torus knot with self-linking less than maximal\n\tis a stabilization.\n\\end{itemize}\nDespite these difficulties we conjecture that negative torus knots are also determined by their\nself-linking number.\n\nLet $S=S^1\\times D^2$ and let $K$ be a $(p,q)$--curve on\nthe boundary of $S$. Now if $C$ is a null homologous knot in a three manifold $M$ then \nlet $f\\colon\\thinspace S\\to N$ be a diffeomorphism from $S$ to a neighborhood $N$ of $C$ in $M$ taking $S^1\\times\\{\\hbox{point}\\}$\nto a longitude for $C$. We now define the \\dfn{$(p,q)$--cable of $C$} to be the knot $f(K)$. \n\\begin{quest}\\label{conj}\n\tIf $\\mathcal{C}$ is the class of topological knots whose transversal realizations\n\tare determined up to transversal isotopy by their self-linking number, then is\n\t$\\mathcal{C}$ closed under cablings?\n\\end{quest}\nEliashberg's \nTheorem~\\ref{unknots} says that the unknot $U$ is in $\\mathcal{C}$. Our main Theorem~\\ref{main} says\nthat any positive cable of the unknot is in $\\mathcal{C}$. \nThis provides the first bit of evidence that the answer to the question might be YES, at least for\n``suitably positive'' cablings.\n\nGiven a knot type one might hope,\nusing the observation on stabilizations in this paper, to prove that transversal knots in this\nknot type are determined by their self-linking number as follows: First establishing that there is a \nunique transversal knot in this knot type with maximal self-linking number. Then showing that\nany transversal knot in this knot type that does not have maximal self-linking number is a stabilization.\nThe second part of this program is of independent interest so we ask the\nfollowing question:\n\\begin{quest}\n\tAre all transversal knots not realizing the maximal self-linking number of their knot type\n\tstabilizations of other transversal knots?\n\\end{quest}\nIt would be somewhat surprising if the answer to this question is YES in complete generality\nbut understanding when the answer is YES and when and why it is NO should provide insight\ninto the structure of transversal knots.\n\nWe end by mentioning that the techniques in this paper also seem to shed light on Legendrian\ntorus knots. It seems quite likely that their isotopy class may be determined by their \nThurston--Bennequin invariant\nand rotation number. We hope to return to this question in a future paper.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Interfacial Structures of Different Classical Water Force Fields}\nTo compare the interfacial structures of classical water force fields, we simulated the liquid-vapor interfaces of SPC\/E \\cite{Berendsen:1987gc}, TIP4P\/2005 \\cite{abascal2005general}, and TIP5P \\cite{Mahoney2000_SM} waters in NVT ensembles. Each system has 1944 water molecules in a slab geometry with dimensions of 5.0 nm $\\times$ 5.0 nm $\\times$ 3.0 nm. 
The system was equilibrated at $T=298 \\,\\text{K}$ and Particle Mesh Ewald was used to handle the long-range part of electrostatic interactions. The SHAKE and SETTLE algorithms were used to constrain the geometry of water. The simulations were performed using the LAMMPS package \\cite{Plimpton:1995fc} for SPC\/E and TIP4P and the GROMACS package \\cite{pronk2013gromacs} for TIP5P. Following the same procedure in Ref. \\onlinecite{Willard:2010da}, instantaneous liquid interface was constructed for each configuration of the generated statistics. Density profile, $\\rho(a)$, and orientational distribution, $P(\\cos\\theta_1,\\cos\\theta_2 | a)$, were computed according to Eqs. (7) and (11) in Ref. \\onlinecite{Willard:2010da}. Figure \\ref{fig:S1} shows the density profiles along with the reduced orientational distributions as defined in Eq. (5) of the main text. Both $\\rho(a)$ and $P(\\cos\\theta_\\text{OH}|a)$ exhibit the qualitatively same structural characteristics for three different force fields of water. \n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width = 4.0 in]{FigS1}\n\\caption{(a) Density profiles computed from the atomistic simulations of SPC\/E, TIP4P, and TIP5P. They are normalized by the bulk density, $\\rho_\\mathrm{b}$. (b) Orientational distributions, $P(\\cos\\theta_\\text{OH}|a)$, for SPC\/E (top), TIP4P (middle), and TIP5P (bottom). Color shading indicates the probability density.}\n\\label{fig:S1}\n\\end{figure}\n\nThe orientational polarization, $\\langle \\mu_{\\mathbf{\\hat{n}}}(a) \\rangle$, and polarizability, $\\langle (\\delta \\mu_{\\mathbf{\\hat{n}}}(a) )^2 \\rangle$, were computed according to Eqs.~(6) and (7) of the main text, and their interfacial profiles are plotted in Fig. \\ref{fig:S2}. The plots show that these interfacial properties also share the qualitatively same feature among three different force fields of water. It is notable that while there exists some quantitative difference in the scale of the polarization, the polarizability shows the almost identical trend across the force fields.\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width = 6.4 in]{FigS2_v2}\n\\caption{Interfacial polarization, $\\langle \\mu_{\\mathbf{\\hat{n}}}(a) \\rangle$, and polarizability, $\\langle (\\delta \\mu_{\\mathbf{\\hat{n}}}(a) )^2 \\rangle$, computed from the atomistic simulations of SPC\/E, TIP4P, and TIP5P.}\n\\label{fig:S2}\n\\end{figure}\n\n\\section{Optimization for the Hydrogen Bond Energy Parameter}\nTo determine the optimal model parameter, $\\epsilon_\\text{w}$, we compute the Kullback-Leibler divergence \\cite{Kullback1951},\n\\begin{equation}\n\\Gamma(\\epsilon_\\text{w}) = \\int d a \\int d (\\cos\\theta_\\text{OH}) \\, P_\\text{ref} (\\cos\\theta_\\text{OH} | a) \\ln \\left[ \\frac{ P_\\text{ref} (\\cos\\theta_\\text{OH} | a)}{P (\\cos\\theta_\\text{OH} | a,\\epsilon_\\text{w})} \\right],\n\\tag{S.1}\n\\label{gamma}\n\\end{equation}\nwhere $P_\\text{ref}(\\cos\\theta_\\text{OH} | a)$ and $P (\\cos\\theta_\\text{OH} | a,\\epsilon_\\text{w})$ are the reduced orientational distributions obtained from atomistic simulation and our mean-field model, respectively. This quantity measures how far the probability distribution of our model deviates from the reference given $\\epsilon_\\text{w}$. 
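As an illustration of how Eq.~(S.1) can be evaluated in practice, the double integral may be approximated on the same binned histograms used to estimate the two distributions, and the subsequent minimization over $\epsilon_\text{w}$ may be carried out by a simple grid scan. The sketch below (Python with NumPy) only illustrates this discretization; the array shapes, bin widths, and the stand-in model histogram are hypothetical and are not part of the analysis code used to generate the results reported here.
\begin{verbatim}
import numpy as np

def discrete_gamma(p_ref, p_model, da, dcos):
    """Discretized Eq. (S.1): KL divergence between binned distributions."""
    mask = (p_ref > 0) & (p_model > 0)          # skip empty bins to avoid log(0)
    terms = p_ref[mask] * np.log(p_ref[mask] / p_model[mask])
    return terms.sum() * da * dcos

# Toy usage with placeholder histograms of shape (a-bins, cos(theta_OH)-bins).
rng = np.random.default_rng(1)
p_ref = rng.random((60, 40))
p_ref /= p_ref.sum(axis=1, keepdims=True) * 0.05    # rough conditional normalization

def model_histogram(eps_w):                  # stand-in for P(cos(theta_OH)|a, eps_w)
    h = rng.random((60, 40)) + abs(eps_w)
    return h / (h.sum(axis=1, keepdims=True) * 0.05)

eps_grid = np.linspace(-8.0, -1.0, 15)       # candidate energies in units of k_B T
eps_opt = min(eps_grid,
              key=lambda e: discrete_gamma(p_ref, model_histogram(e), 0.1, 0.05))
\end{verbatim}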
As indicated above, it becomes a function of $\\epsilon_\\text{w}$ and thus we choose the parameter that minimizes the \\emph{fitness} function, \n\\begin{equation}\n\\epsilon_\\text{w}^* = \\argmin_{\\epsilon_\\text{w}}\\{\\Gamma(\\epsilon_\\text{w})\\},\n\\tag{S.2}\n\\end{equation}\nas the effective hydrogen bond energy.\n\n\n\\section{Fluctuations in hydrogen bond geometry}\nWe quantify the distortions in hydrogen bond geometry and the associated energetics by analyzing the atomistic simulation results as follows.\nNotably, here relatively simple algorithms for quantifying various aspects of molecular geometry translate into complicated mathematical expressions. \nLet $\\mathbf{v}_{k}^{(i)} = \\vec{r}_k^{(i)} - \\vec{r}_\\text{O}^{(i)}$, where $\\vec{r}_\\text{O}^{(i)}$ is the position of the oxygen of the $i$th water molecule and $\\vec{r}_k^{(i)}$ is the position of the $k$th bonding site on it ($k=1,2$ indicate the hydrogens and $k=3,4$ indicate the lone pairs for TIP5P water). \nThen $\\mathbf{v}_{k}^{(i)}$ represents the direction of ideal hydrogen bonding coordination through the $k$th site of the $i$th molecule.\nFor the $j$th molecule neighboring the $i$th, its deviation angle from the ideal coordination to the $i$th molecule is given by\n\\begin{equation}\n\\phi_j^{(i)} = \\min_k \\left\\{\\cos^{-1}\\!\\left( \\frac{\\mathbf{v}_{k}^{(i)} \\cdot \\mathbf{b}_j^{(i)} }{\\left|\\mathbf{v}_{k}^{(i)}\\right| \\left|\\mathbf{b}_j^{(i)} \\right|}\\right) \\right\\},\n\\label{angle_phi}\n\\tag{S.3}\n\\end{equation}\nwhere $\\mathbf{b}_j^{(i)} = \\vec{r}_\\text{O}^{(j)}- \\vec{r}_\\text{O}^{(i)}$ represents the hydrogen bond vector of the $i$th molecule to the $j$th. \nThe corresponding index,\n\\begin{equation}\ny_j^{(i)} = \\argmin_k \\left\\{\\cos^{-1}\\!\\left( \\frac{\\mathbf{v}_{k}^{(i)} \\cdot \\mathbf{b}_j^{(i)} }{\\left|\\mathbf{v}_{k}^{(i)}\\right| \\left|\\mathbf{b}_j^{(i)} \\right|} \\right) \\right\\},\n\\tag{S.4}\n\\label{b_label}\n\\end{equation}\nindicates the ideal bonding direction from which $\\mathbf{b}_j^{(i)}$ is distorted, and thus whether it belongs to a donor or an acceptor hydrogen bond. 
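For concreteness, the assignment in Eqs.~(S.3) and (S.4) reduces to a short geometric computation per neighboring pair. A minimal sketch is given below (Python with NumPy); the coordinate arrays are hypothetical inputs, and the donor or acceptor classification simply follows the site ordering defined above ($k=1,2$ hydrogens, $k=3,4$ lone pairs).
\begin{verbatim}
import numpy as np

def deviation_angle_and_site(r_O_i, site_positions_i, r_O_j):
    """Return (phi_j^(i) in radians, y_j^(i) in {1,2,3,4}), cf. Eqs. (S.3)-(S.4)."""
    b = r_O_j - r_O_i                          # hydrogen bond vector b_j^(i)
    v = site_positions_i - r_O_i               # ideal directions v_k^(i), shape (4, 3)
    cosines = (v @ b) / (np.linalg.norm(v, axis=1) * np.linalg.norm(b))
    angles = np.arccos(np.clip(cosines, -1.0, 1.0))
    k = int(np.argmin(angles))
    return angles[k], k + 1                    # k+1 because sites are indexed 1..4

# illustrative call with made-up coordinates (any consistent length unit)
r_O_i = np.zeros(3)
sites_i = np.array([[0.1, 0.0, 0.0], [0.0, 0.1, 0.0],     # hydrogens (k = 1, 2)
                    [-0.1, 0.0, 0.1], [0.0, -0.1, 0.1]])  # lone pairs (k = 3, 4)
r_O_j = np.array([0.28, 0.05, 0.0])
phi, y = deviation_angle_and_site(r_O_i, sites_i, r_O_j)
is_donor = y <= 2                              # cf. the selector Phi_alpha
\end{verbatim}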
\nThe inter-bond angle between the $j$th and $k$th molecules with respect to the $i$th is given by\n\\begin{equation}\n\\psi_{jk}^{(i)} = \\cos^{-1}\\!\\left( \\frac{\\mathbf{b}_j^{(i)} \\cdot \\mathbf{b}_k^{(i)} }{\\left|\\mathbf{b}_j^{(i)}\\right| \\left|\\mathbf{b}_k^{(i)}\\right|} \\right).\n\\tag{S.5}\n\\label{angle_psi}\n\\end{equation}\nThen the probability distribution of inter-bond angle at given distance $a$ is computed as,\n\\begin{equation}\nP_{\\alpha\\gamma}(\\psi|a) = \\frac{\\displaystyle \\left< \\sum_i \\sum_{j \\neq i} \\sum_{k \\neq i > j} \\delta(\\psi_{jk}^{(i)} - \\psi)\\delta(a^{(i)} - a) \\Theta_\\text{hyd}\\!\\left(\\mathbf{b}_j^{(i)} \\right) \\Theta_\\text{hyd}\\!\\left(\\mathbf{b}_k^{(i)} \\right) \\Phi_\\alpha \\!\\left(y_j^{(i)} \\right) \\Phi_\\gamma \\!\\left(y_k^{(i)} \\right) \\right>}{\\sin\\psi\\displaystyle\\left< \\sum_i \\sum_{j \\neq i} \\sum_{k \\neq i > j} \\delta(a^{(i)} - a)\\Theta_\\text{hyd}\\!\\left(\\mathbf{b}_j^{(i)} \\right) \\Theta_\\text{hyd}\\!\\left(\\mathbf{b}_k^{(i)} \\right) \\Phi_\\alpha \\!\\left(y_j^{(i)} \\right) \\Phi_\\gamma \\!\\left(y_k^{(i)} \\right)\\right>}~,\n\\tag{S.6}\n\\end{equation}\nwhere $a^{(i)}$ is the interfacial depth of the $i$th molecule,\n\\begin{equation}\n\\Theta_\\text{hyd}(\\vec{r}) = H(|\\vec{r}| - 2.4 \\,\\mathrm{\\AA})H(3.2 \\,\\mathrm{\\AA} - |\\vec{r}|)\n\\tag{S.7}\n\\end{equation} \nselects the molecules only in the first hydration shell of the $i$th molecule using the Heaviside step function, $H(x)$, and \n\\begin{equation}\n\\Phi_\\alpha \\left(x \\right) = \\left\\{ \\begin{array}{ll}\n\t\tH\\left(2.5 - x \\right), & \\quad \\textrm{if $\\alpha$ is Donor}, \\\\\n \tH\\left(x - 2.5 \\right), & \\quad \\textrm{if $\\alpha$ is Acceptor}, \\end{array} \\right.\n\\tag{S.8}\t\n\\end{equation}\nselects the neighboring molecule of specific bond type, $\\alpha$. Here the geometric factor, $\\sin\\psi$, corrects the bias coming from the variation of solid angle. \nFig. 3(b) of the main text shows the plots of $P_{\\alpha\\gamma}(\\psi|a)$ with $a = 10 \\,\\mathrm{\\AA}$ and $a = 0 \\,\\mathrm{\\AA}$ for the bulk and interface respectively ($0.1 \\,\\mathrm{\\AA}$ was used for the binning width of histogram).\nFor the plots in Fig. 4(a) of the main text, the distribution is integrated such that $P_{\\alpha\\gamma}(\\psi < 60^{\\circ}|a) = \\int_0^{60^{\\circ}} P_{\\alpha\\gamma}(\\psi|a)\\sin\\psi d\\psi$.\n\nComputing $P_\\text{sqz}(a)$ needs to specify the certain type of defect among the configurations of $\\psi < 60 \n\\,\\text{deg}$. There is the other type of defect than the squeezed triangular one, which is known as the intermediate of the water reorientation \\cite{Laage2006_SM}. \nThis type of defect has bifurcated hydrogen bonds through one site of the molecule such that $y_j^{(i)} = y_k^{(i)}$. 
\nBy excluding such cases, we can compute the probability to observe a squeezed configuration of two donor bonds as,\n\\begin{equation}\nP_\\text{sqz,DD}(a) = \\frac{\\displaystyle \\left< \\sum_i \\sum_{j \\neq i} \\sum_{k \\neq i > j} H\\!\\left(60^{\\circ} - \\psi_{jk}^{(i)} \\right)\\delta(a^{(i)} - a) \\Theta_\\text{hyd}\\!\\left(\\mathbf{b}_j^{(i)} \\right) \\Theta_\\text{hyd}\\!\\left(\\mathbf{b}_k^{(i)} \\right) \\delta \\!\\left(y_j^{(i)}y_k^{(i)} - 2 \\right) \\right>}{\\displaystyle\\left< \\sum_i \\sum_{j \\neq i} \\sum_{k \\neq i > j} \\delta(a^{(i)} - a)\\Theta_\\text{hyd}\\!\\left(\\mathbf{b}_j^{(i)} \\right) \\Theta_\\text{hyd}\\!\\left(\\mathbf{b}_k^{(i)} \\right) \\Phi_\\text{D} \\!\\left(y_j^{(i)} \\right) \\Phi_\\text{D} \\!\\left(y_k^{(i)} \\right)\\right>}~.\n\\tag{S.9}\n\\end{equation}\n\nThe average direct interaction energy for a hydrogen bond pair is computed in the bulk phase as a function of the deviation angle, $\\phi$, such that\n\\begin{equation}\nu_\\alpha (\\phi) = \\frac{\\displaystyle \\left< \\sum_i \\sum_{j \\neq i} U_{ij} \\,\\delta(\\phi_j^{(i)} - \\phi ) H\\!\\left(a^{(i)} - a_b\\right)\\Theta_\\text{hyd}\\!\\left(\\mathbf{b}_j^{(i)} \\right) \\Phi_\\alpha \\!\\left(y_j^{(i)} \\right) \\right>}{\\displaystyle \\left< \\sum_i \\sum_{j \\neq i} \\delta(\\phi_j^{(i)} - \\phi ) H\\!\\left(a^{(i)} - a_b\\right) \\Theta_\\text{hyd}\\!\\left(\\mathbf{b}_j^{(i)} \\right) \\Phi_\\alpha \\!\\left(y_j^{(i)} \\right) \\right>}~,\n\\tag{S.10}\n\\end{equation}\nwhere $U_{ij}$ is the pair potential energy between the $i$th and $j$th molecules and $a_b = 10 \\,\\mathrm{\\AA}$. The plots are given in Fig. 3(c) of the main text, normalized by the average bulk hydrogen bond energy where we used $E_\\text{HB} = 9.0 \\,k_B T$ (see the next section below for more details on this).\nSimilarly, the average direct interaction energy between two neighbors of a tagged molecule is computed as,\n\\begin{equation}\nv_{\\alpha\\gamma} (\\psi) = \\frac{\\displaystyle \\left< \\sum_i \\sum_{j \\neq i} \\sum_{k \\neq i > j} U_{jk} \\,\\delta( \\psi_{jk}^{(i)} - \\psi ) H\\!\\left(a^{(i)} - a_b\\right) \\Theta_\\text{hyd}\\!\\left(\\mathbf{b}_j^{(i)} \\right) \\Theta_\\text{hyd}\\!\\left(\\mathbf{b}_k^{(i)} \\right) \\Phi_\\alpha \\!\\left(y_j^{(i)} \\right) \\Phi_\\gamma \\!\\left(y_k^{(i)} \\right) \\right>}{\\displaystyle \\left< \\sum_i \\sum_{j \\neq i}\\sum_{k \\neq i > j} \\delta (\\psi_{jk}^{(i)} - \\psi ) H\\!\\left(a^{(i)} - a_b\\right) \\Theta_\\text{hyd}\\!\\left(\\mathbf{b}_j^{(i)} \\right) \\Theta_\\text{hyd}\\!\\left(\\mathbf{b}_k^{(i)} \\right) \\Phi_\\alpha \\!\\left(y_j^{(i)} \\right) \\Phi_\\gamma \\!\\left(y_k^{(i)} \\right) \\right>}~.\n\\tag{S.11}\n\\end{equation}\n\n\\section{Implementation of the Three-body Fluctuation model} \nFollowing the notations used in the previous section, let $\\{\\mathbf{\\hat{v}}_1, \\mathbf{\\hat{v}}_2, \\mathbf{\\hat{v}}_3, \\mathbf{\\hat{v}}_4\\}$ be the unit vectors of ideal hydrogen bonding directions through the hydrogens and lone pairs of a probe water molecule of given orientation $\\vec{\\kappa}$.\nWe sample the hydrogen bond vectors, $\\{\\mathbf{b}_1, \\mathbf{b}_2, \\mathbf{b}_3, \\mathbf{b}_4 \\}$, each of which is within a certain solid angle around $\\mathbf{\\hat{v}}_i$ such that $\\mathbf{b}_i \\cdot \\mathbf{\\hat{v}}_i = |\\mathbf{b}_i| \\cos\\phi_i$ where the value for $\\cos\\phi_i$ is drawn from the uniform random distribution of [$\\cos 70^{\\circ}$, 1]. 
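One convenient way to realize this sampling is to draw $\cos\phi_i$ uniformly on $[\cos 70^{\circ},1]$ and an azimuthal angle uniformly on $[0,2\pi)$, and then rotate the resulting direction onto the axis $\mathbf{\hat{v}}_i$. The sketch below (Python with NumPy) illustrates the construction; it is a schematic re-implementation rather than the code used to generate our results, and the bond length value is only illustrative.
\begin{verbatim}
import numpy as np

def sample_direction_in_cone(axis,
                             cos_min=np.cos(np.deg2rad(70.0)),
                             rng=np.random.default_rng()):
    """Unit vector within the cone of half-angle 70 deg around `axis`."""
    cos_phi = rng.uniform(cos_min, 1.0)      # b_i . v_i / |b_i| = cos(phi_i)
    sin_phi = np.sqrt(1.0 - cos_phi**2)
    alpha = rng.uniform(0.0, 2.0 * np.pi)    # azimuth about the cone axis
    local = np.array([sin_phi * np.cos(alpha), sin_phi * np.sin(alpha), cos_phi])
    # build an orthonormal frame whose third vector is the normalized axis
    w = axis / np.linalg.norm(axis)
    t = np.array([1.0, 0.0, 0.0]) if abs(w[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(w, t); u /= np.linalg.norm(u)
    v = np.cross(w, u)
    return local[0] * u + local[1] * v + local[2] * w

# e.g. a hydrogen bond vector of fixed length d_HB along a sampled direction
d_HB = 2.8   # Angstrom, illustrative value only
b = d_HB * sample_direction_in_cone(np.array([0.0, 0.0, 1.0]))
\end{verbatim}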
\nEach hydrogen bond vector is assigned a directionality of either donor or acceptor based on the proximity to the bonding sites (see Eq.~(\\ref{b_label})). \nThe six inter-bond angles, $\\psi_{ij}$, are calculated based on Eq.~(\\ref{angle_psi}). \nIf the bond vectors are too close with one another, \\emph{i.e.} $\\psi_{ij} < 44^{\\circ }$, they are not taken into account for computing $P(\\vec{\\kappa}|a)$ since there is almost no statistics below that in the atomistic simulation. \nHence $P(\\vec{\\kappa}|a)$ is computed as,\n\\begin{equation}\nP(\\vec{\\kappa} \\vert a) = \\int \\prod_{i=1}^{4} \\left[d\\mathbf{b}_i \\delta (|\\mathbf{b}_i| - d_\\text{HB})H\\!\\left(\\frac{\\mathbf{b}_i}{|\\mathbf{b}_i|} \\cdot \\mathbf{\\hat{v}}_i - \\cos 70^{\\circ} \\right) \\prod_{j > i} H\\!\\left(\\cos44^{\\circ} - \\frac{\\mathbf{b}_i \\cdot \\mathbf{b}_j}{|\\mathbf{b}_i||\\mathbf{b}_j|} \\right)\\right] \\frac{\\left \\langle e^{-\\beta E(\\vec{\\kappa},a,\\{n_k\\})} \\right\\rangle_\\mathrm{b}}{Z(a)},\n\\tag{S.12}\n\\label{eq:dist1}\n\\end{equation}\nwhere $E(\\vec{\\kappa},a,\\{n_k\\})$ follows the Eq.~(8) of the main text. \nHere we impose the same constraint on the length of hydrogen bond vectors as that in the rigid tetrahedral model. \nImplementing the fluctuations in $|\\mathbf{b}_i|$ provokes more details about the energetics, $\\tilde{u}_\\alpha$ and $\\tilde{v}_{\\alpha\\gamma}$, such as their dependence on both lengths and angles of the hydrogen bond vectors, which we have not detailed so far in this model. \n\nThe energy functions, $\\tilde{u}_\\alpha(\\phi)$ and $\\tilde{v}_{\\alpha\\gamma}(\\psi)$, are rescaled from the atomistic simulation data of $u_\\alpha(\\phi)$ and $v_{\\alpha\\gamma}(\\psi)$. \nAdditionally, we parametrize $\\tilde{u}_\\alpha(\\phi)$ by tuning the maximum value of $u_\\alpha(\\phi)$ such that\n\\begin{equation}\n\\tilde{u}_\\alpha(\\phi) = \\left\\{ \\begin{array}{ll}\n\t\t\\displaystyle\\left\\{ \\big[u_\\alpha(\\phi) - u_\\alpha(0) \\big]\\frac{u_\\alpha^* - u_\\alpha(0)}{u_{\\alpha,\\text{max}} - u_\\alpha(0)} + u_\\alpha(0) \\right\\}\\frac{|\\epsilon_\\text{w}|}{E_\\text{HB}}, & \\quad\\quad \\textrm{if $\\phi \\le \\phi_c$},\\;\\;\\quad\\;\\;\\;\\; \\\\\\\\\n \t\\displaystyle0, & \\quad\\quad \\textrm{if $\\phi > \\phi_c$}, \\end{array} \\right.\\\\\n\\tag{S.13}\n\\label{eq:u_alpha}\n\\end{equation}\nwhere $\\phi_c = 72^{\\circ}$, $u_{\\alpha,\\text{max}} = \\max_{\\phi < \\phi_c}\\left\\{u_\\alpha(\\phi)\\right\\}$, and $u_\\alpha^*$ is the parameter that sets the new maximum (in the original scale of $u_\\alpha$). \nHere the factor of $|\\epsilon_\\text{w}|\/E_\\text{HB}$ rescales the functions in units of the effective hydrogen bond energy of our model.\nWe observed that the behavior of $\\langle \\mu_{\\mathbf{\\hat{n}}}(a) \\rangle$ is largely sensitive to $\\tilde{u}_\\alpha(\\phi)$, and thus we optimized the parameters, $u_\\alpha^*$ and $\\epsilon_\\text{w}$, for the result expected from atomistic simulations.\nFor the results presented in Fig. 2 of the main text, we used $\\epsilon_\\text{w} = -5.0 \\,k_B T$, $u_\\text{D}^* = +0.2 \\,k_B T$, and $u_\\text{A}^* = -3.7 \\,k_B T$. \nWe found that the optimized $\\tilde{u}_\\text{A}(\\phi)$ is quite different from $u_\\text{A}(\\phi)$ but more like the corresponding energy computed near the interface (see Fig. \\ref{fig:S3}). \n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width = 3.5 in]{FigS3_v2}\n\\caption{$\\tilde{u}_\\alpha(\\phi)$ compared with $u_\\alpha(\\phi)$ in the same scale. 
Solid lines are the simulation data of $u_\\alpha(\\phi)\/E_\\text{HB}$, as originally shown in Fig.~3(c) of the main text, and circles indicate the rescaled energy functions that are optimized for the accurate trend of $\\langle \\mu_{\\mathbf{\\hat{n}}}(a) \\rangle$. Gray and blue color correspond to $\\alpha = \\mathrm{D}$ and $\\alpha = \\mathrm{A}$, respectively. The blue dashed line corresponds to $u_\\text{A}(\\phi|a=1\\,\\mathrm{\\AA})$, \\emph{i.e.}, the average direct interaction energy for a hydrogen bond pair at $a = 1\\,\\mathrm{\\AA}$.}\n\\label{fig:S3}\n\\end{figure}\nFor $\\tilde{v}_{\\alpha\\gamma}(\\psi)$, we simply take the values from the atomistic simulation data and rescale them in units of $\\epsilon_\\text{w}$, that is,\n\\begin{equation}\n\\tilde{v}_{\\alpha\\gamma} (\\psi) = v_{\\alpha\\gamma}(\\psi)\\frac{|\\epsilon_\\text{w}|}{E_\\text{HB}}~.\n\\tag{S.14}\n\\end{equation}\n\nAs given in Eq.~(8) of the main text, $\\tilde{v}_{\\alpha\\gamma}(\\psi)$ is combined with the auxiliary function, $\\lambda(a_i,a_j,\\psi_{ij})$, in order to accounts for the interface-specific stability of hydrogen bond defects.\nThis auxiliary function represents the energetic cost for the defects to pay based on the hydrogen bonding status of the $i$th or $j$th hydrogen bond partner. \nAssuming that this penalty is imposed on the one that donates hydrogen, we describe this function in terms of the average number and energy of hydrogen bonds through donor sites, denoted by $N_\\text{D} (a)$ and $E_\\text{D} (a)$ respectively. \nSpecifically it is given by,\n\\begin{equation}\n\\lambda_{\\alpha\\gamma} (a_i,a_j,\\psi) = \\left\\{ \\begin{array}{ll}\n\t\t\\displaystyle \\frac{N_\\text{D} (\\bar{a}_{\\alpha\\gamma})}{2}E_\\text{D} (\\bar{a}_{\\alpha\\gamma})\\frac{|\\epsilon_\\text{w}|}{E_\\text{HB}}, & \\quad \\textrm{if $\\psi \\le \\psi_{\\alpha\\gamma}$}, \\\\\\\\\n\t\t\\displaystyle \\left[ 1 - \\frac{ v_{\\alpha\\gamma}(\\psi) - v_{\\alpha\\gamma}( \\psi_{\\alpha\\gamma})}{v_{\\alpha\\gamma}(\\psi_c) - v_{\\alpha\\gamma}(\\psi_{\\alpha\\gamma}) } \\right] \\lambda_{\\alpha\\gamma}(a_i,a_j,\\psi_{\\alpha\\gamma}), & \\quad \\textrm{if $\\psi_{\\alpha\\gamma} < \\psi < \\psi_c$}, \\\\\\\\\n \t\\displaystyle 0, & \\quad \\textrm{if $\\psi \\ge \\psi_c$}, \\end{array} \\right.\n\\tag{S.15}\n\\end{equation}\nwhere $\\psi_{\\alpha\\gamma} = \\argmin_{\\psi}\\left\\{v_{\\alpha\\gamma}(\\psi)\\right\\}$, $\\psi_c = \\argmax_{\\psi}\\left\\{v_\\text{AA}(\\psi)\\right\\}$, and\n\\begin{equation}\n\\bar{a}_{\\alpha\\gamma} (a_i,a_j) = \\left\\{ \\begin{array}{ll}\n\t\t\\displaystyle \\min \\{a_i, a_j \\}, & \\quad \\textrm{if $\\alpha = \\gamma$},\\\\\\\\\n \t\\displaystyle \\Phi_\\text{D}\\!\\left(y_i \\right)a_i + \\Phi_\\text{D}\\!\\left(y_j \\right)a_j, & \\quad \\textrm{if $\\alpha \\ne \\gamma$}. 
\\end{array} \\right.\n\\tag{S.16}\n\\end{equation}\nHere we let the penalty taken by the hydrogen bond partner located closer to the interface, but we make the exception for donor-acceptor bond pairs based on the typical structure of cyclic water trimer \\cite{keutsch2003water}.\n$N_\\text{D}(a)$ and $E_\\text{D}(a)$ are computed from atomistic simulation as, \n\\begin{equation}\nN_\\text{D} (a) = \\frac{\\displaystyle \\left< \\sum_i \\sum_{j \\neq i} \\delta(a^{(i)} - a) H\\!\\left(30^{\\circ} - \\phi_j^{(i)} \\right) H\\!\\left(3.5 \\,\\mathrm{\\AA} - \\left\\vert \\mathbf{b}_j^{(i)}\\right\\vert \\right) \\Phi_\\text{D}\\!\\left(y_j^{(i)} \\right) \\right>}{\\displaystyle \\left< \\sum_i \\delta\\!\\left(a^{(i)} - a \\right) \\right>}~,\n\\tag{S.17}\n\\end{equation}\nand\n\\begin{equation}\nE_\\text{D} (a) = \\frac{\\displaystyle \\left< \\sum_i \\sum_{j \\neq i} U_{ij} \\, \\delta(a^{(i)} - a) H\\!\\left(30^{\\circ} - \\phi_j^{(i)} \\right) H\\!\\left(3.5 \\,\\mathrm{\\AA} - \\left\\vert \\mathbf{b}_j^{(i)}\\right\\vert \\right) \\Phi_\\text{D}\\!\\left(y_j^{(i)} \\right) \\right>}{\\displaystyle \\left< \\sum_i \\sum_{j \\neq i} \\delta(a^{(i)} - a) H\\!\\left(30^{\\circ} - \\phi_j^{(i)} \\right) H\\!\\left(3.5 \\,\\mathrm{\\AA} - \\left\\vert \\mathbf{b}_j^{(i)}\\right\\vert \\right) \\Phi_\\text{D}\\!\\left(y_j^{(i)} \\right) \\right>}~,\n\\tag{S.18}\n\\end{equation}\nwhere the definition of a good hydrogen bond follows the one by Luzar and Chandler \\cite{luzar1996effect}.\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width = 6.4 in]{FigS4_v2}\n\\caption{(a) Interfacial profiles of average number and energy of hydrogen bonds through donor sites, rendered in orange and blue lines respectively. (b) Illustration of a three-body interaction term implemented in our model. Solid lines show the different energetic preferences for highly distorted configurations depending on the interfacial depth of hydrogen bond partner. Dashed line corresponds to $\\tilde{v}_\\text{AA}$ without the attenuation by $\\lambda_\\text{AA}$.}\n\\label{fig:S4}\n\\end{figure}\nAs illustrated in Fig.~\\ref{fig:S4}, they change dramatically within the first $2\\,\\mathrm{\\AA}$ of the interfacial region such that the effect of three-body interaction also becomes significant in that region. \nHere we take the value of $E_\\text{HB}$ from $E_\\text{D}(a)$ by setting $E_\\text{HB} = |E_\\text{D}(a_b)| = 8.45 \\,k_B T = 20.9 \\,\\mathrm{kJ\/mol}$ at $T = 298 \\text{ K}$.\n\n\nIn order to evaluate $P(\\vec{\\kappa}|a)$, we obtain an approximate analytic expression for $\\left< e^{-\\beta E(\\vec{\\kappa},a,\\{n_k\\})} \\right>_\\text{b}$ in Eq.~(\\ref{eq:dist1}).\nHere we made the same assumption as that of the rigid tetrahedral model, such that $\\langle n_i n_j \\rangle \\approx \\langle n_i \\rangle \\langle n_j \\rangle $ for $i \\ne j$\n\\footnote{\nThis is definitely a rough approximation since it neglects the density correlation between hydrogen bond partners even in the squeezed configurations. However, more accurate treatment for the density correlation provokes again the distance dependence of the energy functions which we have not detailed so far in this model. \n}. 
\nWithin this approximation, we can write\n\\begin{align*}\n\\left< e^{-\\beta E(\\vec{\\kappa},a,\\{n_k\\})}\\right>_\\text{b} &= \\prod_{i=1}^4 \\langle n_i \\rangle_\\text{b} e^{-\\beta \\tilde{u}_\\alpha(\\phi_i)} \\prod_{j > i}^{4} e^{-\\beta \\tilde{v}_{\\alpha\\gamma}(\\psi_{ij})} + \\sum_{k=1}^4 \\left[ 1 - \\langle n_k \\rangle_\\text{b}\\right] \\prod_{i \\neq k}^4 \\langle n_i \\rangle_\\text{b} e^{-\\beta \\tilde{u}_\\alpha(\\phi_i)} \\prod_{j\\neq k > i}^4 e^{-\\beta \\tilde{v}_{\\alpha\\gamma}(\\psi_{ij})} \\nonumber\\\\ &\\quad+ \\sum_{i = 1}^4\\sum_{j > i}^4 \\langle n_i \\rangle_\\text{b}\\langle n_j \\rangle_\\text{b} \\left[1 - \\langle n_k \\rangle_\\text{b} \\right] \\left[1 - \\langle n_l \\rangle_\\text{b} \\right] e^{-\\beta \\left[\\tilde{u}_\\alpha(\\phi_i) + \\tilde{u}_\\alpha(\\phi_j) + \\tilde{v}_{\\alpha\\gamma}(\\psi_{ij}) \\right]} \\nonumber\\\\ &\\quad\\quad+ \\sum_{k=1}^4 \\langle n_k \\rangle_\\text{b} e^{-\\beta \\tilde{u}_\\alpha(\\phi_k)} \\prod_{i \\neq k}^4 \\left[1 - \\langle n_i \\rangle_\\text{b}\\right] + \\prod_{i=1}^4 \\left[1 - \\langle n_i \\rangle_\\text{b}\\right]~,\\nonumber\n\\tag{S.19}\n\\end{align*}\nwhere $\\langle n_i \\rangle_\\text{b} = P_\\text{HB} (a_i) = \\rho(a_i)\/2\\rho_b$ and the dummy indices, $k$ and $l$, in the third term are the numbers among $\\{1,2,3,4\\}$ such that $i \\ne j \\ne k \\ne l$. \nThe resulting reduced probability distribution, $P(\\cos\\theta_\\text{OH}|a)$, is given in Fig.~\\ref{fig:S5}.\nAlthough its qualitative features are still the same as those of the rigid tetrahedral model, its details are closer to those from the atomistic simulation of TIP5P water (especially at $a < 3 \\,\\mathrm{\\AA}$).\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width = 3.4 in]{FigS5_v2}\n\\caption{Orientational distributions, $P(\\cos\\theta_\\text{OH}|a)$, computed from (a) the three-body fluctuation model and (b) the rigid tetrahedral model for the TIP5P force field. Color shading indicates the probability density.}\n\\label{fig:S5}\n\\end{figure}\n\n\\section{Application of mean-field models to SPC\/E force field}\nDespite sharing similar bulk hydrogen bonding structures, different classical water models can yield non-trivial differences in interfacial structure. This is highlighted to some extent in Figs.~\\ref{fig:S1} and \\ref{fig:S2}. As Fig.~\\ref{fig:S2} illustrates, different water models exhibit similar trends in $\\langle \\mu_{\\mathbf{\\hat{n}}}(a) \\rangle$ and $\\langle (\\delta \\mu_{\\mathbf{\\hat{n}}}(a) )^2 \\rangle$, but they differ in their quantitative characteristics. To evaluate the ability of our mean field model to capture these differences, we have also applied our model to the SPC\/E force field. To do this, we have followed the same procedure described herein but using the data from molecular dynamics simulations of SPC\/E water rather than TIP5P. For the SPC\/E-parameterized mean field model we have found similarly good agreement in reproducing $P(\\cos\\theta_\\text{OH}|a)$, as illustrated in Fig.~\\ref{fig:S6}, but as Fig.~\\ref{fig:S7} illustrates, the agreement with $\\langle \\mu_{\\mathbf{\\hat{n}}}(a) \\rangle$ and $\\langle (\\delta \\mu_{\\mathbf{\\hat{n}}}(a) )^2 \\rangle$ is not as strong for SPC\/E as it is for TIP5P. 
\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width = 3.0 in]{FigS6}\n\\caption{$P(\\cos\\theta_\\text{OH}|a)$ computed from (a) the atomistic simulation with SPC\/E water and (b) the rigid tetrahedral model optimized for the SPC\/E force field ($\\epsilon_\\text{w}^* = -1.8\\,k_B T = -4.46 \\text{ kJ\/mol}$).}\n\\label{fig:S6}\n\\end{figure}\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width = 5.0 in]{FigS7}\n\\caption{Interfacial polarization, $\\langle \\mu_{\\mathbf{\\hat{n}}}(a) \\rangle$, and polarizability, $\\langle (\\delta \\mu_{\\mathbf{\\hat{n}}}(a) )^2 \\rangle$, computed from the three-body fluctuation model parametrized for the SPC\/E force field, in comparison to the molecular dynamics simulation results.}\n\\label{fig:S7}\n\\end{figure}\n\nWe understand that the difference in the ability of our model to reproduce dipolar polarization\/polarizability between SPC\/E and TIP5P arises due to the differences in the geometric tendencies inherent to these force fields. The TIP5P force field is built upon a tetrahedral charge scaffold, so non-ideal hydrogen bond structures are more naturally described in terms of their deviations from this scaffold. On the other hand, SPC\/E is built upon a triangular charge scaffold (albeit with a tetrahedral bond angle) so non-ideal hydrogen bond structures are less well represented in term of deviation from a tetrahedral scaffold. We speculate that we could improve quantitative accuracy of our model in reproducing SPC\/E results by modifying the details of the underlying geometry of our mean field model.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n\\label{sec:introduction}\n\nHeat exchanger network synthesis (HENS) \nminimizes cost and improves energy recovery in chemical processes \\citep{biegleretal:1997, smith:2000, eliaetal:2010, balibanetal:2012}. \nHENS exploits excess heat by integrating process hot and cold streams and improves energy efficiency by reducing utility usage \\citep{floudas:1987, naess:1988, furman:2002, escobar-et-trierweiler:2013}. \n\\citet{floudas:2012} review the critical role of heat integration for energy systems producing liquid transportation fuels \\citep{niziolek:2015}.\nOther important applications of HENS include: refrigeration systems \\citep{shelton:1986}, batch semi-continuous processes \\citep{zhao:1998,castro:2015} and water utilization systems \\citep{bagakewicz:2002}.\n\n\n\nHeat exchanger network design is a mixed-integer nonlinear optimization (MINLP) problem \\citep{yee:1990, Ciric:1991, papalexandri-pistikopoulos:1994,hasan-etal:2010}. \\cite{mistry:2016} recently showed that expressions incorporating logarithmic mean temperature difference, i.e.\\ the nonlinear nature of heat exchange, may be reformulated to decrease the number of nonconvex nonlinear terms in the optimization problem. But HENS remains a difficult MINLP with many nonconvex nonlinearities. One way to generate good HENS solutions is to use the so-called \\emph{sequential method} \\citep{furman:2002}.\nThe sequential method decomposes the original HENS MINLP into three tasks: (i) minimizing utility cost, (ii) minimizing the number of matches, and (iii) minimizing the investment cost. 
\nThe method optimizes the three mathematical models sequentially with: (i) a linear program (LP) \\citep{cerdaetwesterberg:1983, papoulias:1983}, (ii) a mixed-integer linear program (MILP) \\citep{cerda:1983, papoulias:1983}, and (iii) a nonlinear program (NLP) \\citep{floudas:1986}.\nThe sequential method may not return the global solution of the original MINLP, but solutions generated with the sequential method are practically useful.\n\nThis paper investigates the \\emph{minimum number of matches} problem \\citep{floudas}, the computational bottleneck of the sequential method. \nThe minimum number of matches problem is a strongly $\\mathcal{NP}$-hard MILP \\citep{furman:2001}.\nMathematical symmetry in the problem structure combinatorially increases the possible stream configurations and deteriorates the performance of exact, tree-based algorithms \\citep{kouyialis:2016}. \n\nBecause state-of-the-art approaches cannot solve the minimum number of matches problem to global optimality for moderately-sized instances \\citep{chen:2015}, engineers develop experience-motivated heuristics \\citep{hindmarsh:1983,cerdaetwesterberg:1983}. {\\cite{hindmarsh:1983} highlight the importance of generating good solutions quickly: a design engineer may want to actively interact with a good minimum number of matches solution and consider changing the utility usage as a result of the MILP outcome.}\n\\citet{furman:2004} propose a collection of approximation algorithms, i.e.\\ heuristics with performance guarantees, for the minimum number of matches problem by exploiting the LP relaxation of an MILP formulation.\n\\citet{furman:2004} present a unified worst-case analysis of their algorithms' performance guarantees and show a non-constant approximation ratio scaling with the number of temperature intervals.\nThey also prove a constant performance guarantee for the single temperature interval problem.\n\nThe standard MILP formulations for the minimum number of matches contain big-M constraints, i.e.\\ the on\/off switches associated with weak continuous relaxations of MILP.\nBoth optimization-based heuristics and exact state-of-the-art methods for solving minimum number of matches problem are highly affected by the big-M parameter.\nTrivial methods for computing the big-M parameters are typically adopted, but \\citet{gundersen:1997} propose a tighter way of computing the big-M parameters.\n\nThis manuscript develops new heuristics and provably efficient approximation algorithms for the minimum number of matches problem. These methods have guaranteed solution quality and efficient run-time bounds. \nIn the sequential method, many possible stream configurations are required to evaluate the minimum overall cost \\citep{floudas}, so a complementary contribution of this work is a heuristic methodology for producing multiple solutions efficiently.\nWe classify the heuristics based on their algorithmic nature into three categories: (i) relaxation rounding, (ii) water filling, and (iii) greedy packing.\n\n{\nThe relaxation rounding heuristics we consider are (i) Fractional LP Rounding (FLPR), (ii) Lagrangian Relaxation Rounding (LRR), and (iii) Covering Relaxation Rounding (CRR).\nThe water-filling heuristics are (i) Water-Filling Greedy (WFG), and (ii) Water-Filling MILP (WFM). 
\nFinally, the greedy packing heuristics are (i) Largest Heat Match LP-based (LHM-LP), (ii) Largest Heat Match Greedy (LHM), (iii) Largest Fraction Match (LFM), and (iv) Shortest Stream (SS).\nMajor ingredients of these heuristics are adaptations of single temperature interval algorithms and maximum heat computations with match restrictions.\nWe propose (i) a novel MILP formulation, and (ii) an improved greedy approximation algorithm for the single temperature interval problem.\nFurthermore, we present (i) a greedy algorithm computing maximum heat between two streams and their corresponding big-M parameter, (ii) an LP computing the maximum heat in a single temperature interval using a subset of matches, and (iii) an extended maximum heat LP using a subset of matches on multiple temperature intervals.\n}\n\nThe manuscript proceeds as follows: \nSection \\ref{sec:preliminaries} formally defines the minimum number of matches problem and discusses mathematical models.\nSection \\ref{sec:heuristics_performance_guarantees} discusses computational complexity and introduces a new $\\mathcal{NP}$-hardness reduction of the minimum number of matches problem from bin packing. \nSection \\ref{sec:single_temperature_interval} \nfocusses on the single temperature interval problem.\nSection \\ref{sec:max_heat} explores computing the maximum heat exchanged between the streams with match restrictions.\nSections \\ref{sec:relaxation_rounding} - \\ref{sec:greedy_packing} present our heuristics for the minimum number of matches problem based on: (i) relaxation rounding, (ii) water filling, and (iii) greedy packing, respectively, as well as new theoretical performance guarantees.\nSection \\ref{sec:results} evaluates experimentally the heuristics and discusses numerical results.\nSections \\ref{sec:discussion} and \\ref{sec:conclusion} discuss the manuscript contributions and conclude the paper.\n\n\n\n\\begin{comment}\n\\begin{table}[h]\n\n\\scriptsize\n\\begin{center}\n\\begin{adjustbox}{center}\n\\begin{tabular}{ | c | c c c | c c | c c c c| } \n\\hline\n& \\multicolumn{3}{|c|}{\\textbf{Relaxation Rounding}} & \\multicolumn{2}{c}{\\textbf{Water Filling}} & \\multicolumn{4}{|c|}{\\textbf{Greedy Packing}} \\\\\n& FLPR & LRR & CRR & WFM & WFG & LHM & LFM & LHM-LP & SS \\\\\n& (\\ref{Subsection:FLPR}) & (\\ref{Subsection:LRR}) & (\\ref{Subsection:CRR}) & (\\ref{sec:water_filling}) & (\\ref{sec:water_filling}) & (\\ref{Subsection:Largest_Heat_Match}) & (\\ref{Subsection:LFM}) & (\\ref{Subsection:Largest_Heat_Match}) & (\\ref{Subsection:SS}) \\\\\n\\hline\n\\textbf{Single Temperature Interval Problem} & & & & & & & & & \\\\ \nMILP Model (\\ref{Sec:SingleTemperatureIntervalProblem-MILP}) & & & & \\checkmark & & & & & \\\\ \nApproximation Algorithm (\\ref{Sec:SingleTemperatureIntervalProblem-Approximation}) & & & & & \\checkmark & & & & \\\\ \n\\hline\n\\textbf{Maximum Heat Computations} & & & & & & & & & \\\\\nTwo Streams, Big-M Parameter (\\ref{Sec:MaxHeat_2Streams}) & \\checkmark & \\checkmark & \\checkmark & & & \\checkmark & \\checkmark & & \\checkmark \\\\ \nSingle Temperature Interval (\\ref{Sec:MaxHeat_SingleInterval}) & & & & \\checkmark & \\checkmark & & & & \\\\ \nMultiple Temperature Intervals (\\ref{Sec:MaxHeat_MultipleIntervals}) & & & \\checkmark & & & & & \\checkmark & \\\\ \n\\hline\n\\end{tabular}\n\\end{adjustbox}\n\\end{center}\n\\caption{Table indicating the single temperature interval problem and maximum heat with match restrictions components used by each heuristic. 
Each element is associated with a section number. This table may be used as a roadmap to the paper.}\n\\label{Table:Heuristic_Components}\n\\end{table}\t\n\\end{comment}\n\n\n \n\n\\section{Minimum Number of Matches for Heat Exchanger Network Synthesis}\n\\label{sec:preliminaries}\nThis section defines the minimum number of matches problem and presents the standard transportation and transshipment MILP models. \nTable \\ref{tbl:notation} contains the notation.\n\n\\singlespacing\n\\begin{longtable}{l l l}\n\\caption{Nomenclature}\\\\\n\\toprule\nName & Description \\\\\n\\midrule\n{\\bf Cardinalities} & \\\\\n$n$ & Number of hot streams \\\\\n$m$ & Number of cold streams \\\\\n$k$ & Number of temperature intervals\\\\\n$v$ & Number of matches (objective value) \\\\\n\\midrule\n{\\bf Indices} \\\\\n$i\\in H$ & Hot stream\\\\\n$j\\in C$ & Cold stream\\\\\n$s,t,u \\in T$ & Temperature interval\\\\\n$b\\in B$ & Bin (single temperature interval problem) \\\\\n\\midrule\n{\\bf Sets} & & \\\\\n$H$, $C$ & Hot, cold streams \\\\\n$T$ & Temperature intervals \\\\\n$M$ & Set of matches (subset of $H\\times C$) \\\\\n$C_i(M), H_j(M)$ & Cold, hot streams matched with $i\\in H$, $j\\in C$ in $M$ \\\\\n$B$ & Bins (single temperature interval problem) \\\\\n$A(M)$ & Set of valid quadruples $(i,s,j,t)$ with respect to a set $M$ of matches \\\\\n$A_u(M)$ & Set of quadruples $(i,s,j,t)\\in A(M)$ with $s\\leq uT_{\\text{in},j}^{CS}$).\nEvery hot stream $i$ and cold stream $j$ are associated flow rate heat capacities $FCp_i$ and $FCp_j$, respectively.\nMinimum heat recovery approach temperature $\\Delta T_{\\min}$ relates the hot and cold stream temperature axes.\nA hot utility $i$ in a set $HU$ and a cold utility $j$ in a set $CU$ may be purchased at a cost, e.g.\\ with unitary costs $\\kappa_i^{HU}$ and $\\kappa_j^{CU}$.\nLike the streams, the utilities have inlet and outlet temperatures $T_{\\text{in},i}^{HU}$, $T_{\\text{out},i}^{HU},T_{\\text{in},j}^{CU}$ and $T_{\\text{out},j}^{CU}$.\nThe first step in a sequential approach to HENS minimizes the utility cost and thereby specifies the heat each utility introduces in the network.\nThe next step minimizes the number of matches.\n\\ref{App:Minimum_Utility_Cost} discusses the transition from the minimizing utility cost to minimizing the number of matches.\nAfter this transition, each utility may, without loss of generality, be treated as a stream.\n}\n\nThe minimum number of matches problem posits a set of \\emph{hot process streams} to be cooled and a set of \\emph{cold process streams} to be heated.\nEach stream is associated with an initial and a target temperature. \nThis set of temperatures defines a collection of \\emph{temperature intervals}. \nEach hot stream exports (or supplies) heat in each temperature interval between its initial and target temperatures.\nSimilarly, each cold stream receives (or demands) heat in each temperature interval between its initial and target temperatures.\n\\ref{App:Minimum_Utility_Cost} formally defines the temperature range partitioning.\nHeat may flow from a hot to a cold stream in the same or a lower temperature interval, but not in a higher one.\nIn each temperature interval, the \\emph{residual heat} descends to lower temperature intervals.\nA zero heat residual is a \\emph{pinch point}.\nA pinch point restricts the maximum energy integration and divides the network into subnetworks. 
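As a small illustrative example (constructed here for exposition only, not one of the benchmark instances considered later), suppose there is a single temperature interval with two hot streams supplying $5$ and $3$ units of heat and two cold streams demanding $6$ and $2$ units, so that heat conservation holds ($5+3=6+2$). Neither hot stream alone can cover the demand of $6$, so that cold stream requires at least two matches, and the other cold stream requires at least one more; hence at least three matches are necessary. Transferring $5$ and $1$ units to the first cold stream and the remaining $2$ units to the second attains exactly three matches.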
\n\nA problem instance consists of a set $H=\\{1,2,\\ldots,n\\}$ of hot streams, a set $C=\\{1,2,\\ldots,m\\}$ of cold streams, and a set $T=\\{1,2,\\ldots,k\\}$ of temperature intervals. \nHot stream $i\\in H$ has heat supply $\\sigma_{i,s}$ in temperature interval $s\\in T$ and cold stream $j\\in C$ has heat demand $\\delta_{j,t}$ in temperature interval $t\\in T$.\nHeat conservation is satisfied, i.e.\\ $\\sum_{i\\in H}\\sum_{s\\in T}\\sigma_{i,s} = \\sum_{j\\in C}\\sum_{t\\in T}\\delta_{j,t}$.\nWe denote by $h_i=\\sum_{s\\in T}\\sigma_{i,s}$ the total heat supply of hot stream $i\\in H$ and by $c_j=\\sum_{t\\in T}\\delta_{j,t}$ the total heat demand of cold stream $j\\in C$.\n\nA feasible solution specifies a way to transfer the hot streams' heat supply to the cold streams, i.e.\\ an amount $q_{i,s,j,t}$ of heat exchanged between hot stream $i\\in H$ in temperature interval $s\\in T$ and cold stream $j\\in C$ in temperature interval $t\\in T$.\nHeat may only flow to the same or a lower temperature interval, i.e.\\ $q_{i,s,j,t}=0$, for each $i\\in H$, $j\\in C$ and $s,t\\in T$ such that $s>t$.\nA hot stream $i\\in H$ and a cold stream $j\\in C$ are \\emph{matched}, if there is a positive amount of heat exchanged between them, i.e.\\ $\\sum_{s,t\\in T}q_{i,s,j,t}>0$.\nThe objective is to find a feasible solution minimizing the number of matches $(i,j)$.\n\n\n\n\\subsection{Mathematical Models}\nThe transportation and transshipment models formulate the minimum number of matches as a mixed-integer linear program (MILP).\n\n\\paragraph{Transportation Model \\citep{cerda:1983}} \nAs illustrated in Figure \\ref{Fig:transportation}, the transportation model represents heat as a commodity transported from supply nodes to destination nodes.\nFor each hot stream $i\\in H$, there is a set of supply nodes, one for each temperature interval $s\\in T$ with $\\sigma_{i,s}>0$.\nFor each cold stream $j\\in C$, there is a set of demand nodes, one for each temperature interval $t\\in T$ with $\\delta_{j,t}>0$.\nThere is an arc between the supply node $(i,s)$ and the destination node $(j,t)$ if $s\\leq t$, for each $i\\in H$, $j\\in C$ and $s,t\\in T$.\n\nIn the MILP formulation, variable $q_{i,s,j,t}$ specifies the heat transferred from hot stream $i\\in H$ in temperature interval $s\\in T$ to cold stream $j\\in C$ in temperature interval $t\\in T$. \nBinary variable $y_{i,j}$ if whether streams $i\\in H$ and $j\\in C$ are matched or not. 
\nParameter $U_{i,j}$ is a big-M parameter bounding the amount of heat exchanged between every pair of hot stream $i\\in H$ and cold stream $j\\in C$, e.g.\\ $U_{i,j}=\\min\\{h_i,c_j\\}$.\nThe problem is formulated:\n{\\allowdisplaybreaks\n\\begin{align}\n\\text{min} & \\sum_{i \\in H}\\sum_{j \\in C} y_{i,j} \\label{TransportationMIP_Eq:ObjMinMatches} \\\\ \n& \\sum_{j\\in C}\\sum_{t\\in T} q_{i,s,j,t} = \\sigma_{i,s} & i\\in H, s\\in T \\label{TransportationMIP_Eq:HotStreamConservation}\\\\\n& \\sum_{i\\in H}\\sum_{s\\in T} q_{i,s,j,t} = \\delta_{j,t} & j\\in C, t\\in T \\label{TransportationMIP_Eq:ColdStreamConservation}\\\\\n& \\sum_{s,t\\in T} q_{i,s,j,t}\\leq U_{i,j}\\cdot y_{i,j} & i\\in H, j\\in C \\label{TransportationMIP_Eq:BigM_Constraint}\\\\\n& q_{i,s,j,t} = 0 & i\\in H, j\\in C, s,t\\in T: s> t \\label{TransportationMIP_Eq:ThermoConstraint} \\\\\n& y_{i,j} \\in \\{0,\\,1\\},q_{i,s,j,t}\\geq 0 & i\\in H, j\\in C,\\; s,t\\in T \\label{TransportationMIP_Eq:IntegralityConstraints}\n\\end{align}\n}\nExpression (\\ref{TransportationMIP_Eq:ObjMinMatches}), the objective function, minimizes the number of matches.\nEquations (\\ref{TransportationMIP_Eq:HotStreamConservation}) and (\\ref{TransportationMIP_Eq:ColdStreamConservation}) ensure heat conservation.\nEquations (\\ref{TransportationMIP_Eq:BigM_Constraint}) enforce a match between a hot and a cold stream if they exchange a positive amount of heat.\nEquations (\\ref{TransportationMIP_Eq:BigM_Constraint}) are \\emph{big-M constraints}.\nEquations (\\ref{TransportationMIP_Eq:ThermoConstraint}) ensure that no heat flows to a hotter temperature.\n\n{\n\nThe transportation model may be reduced by removing redundant variables and constraints.\nSpecifically, a mathematically-equivalent \\emph{reduced transportation MILP model} removes: (i) all variables $q_{i,s,j,t}$ with $s> t$ and (ii) Equations (\\ref{TransportationMIP_Eq:ThermoConstraint}).\nBut modern commercial MILP solvers may detect redundant variables constrained to a fixed value and exploit this information to their benefit. Table \\ref{Table:Transportation_Models_Comparison} shows that the aggregate performance of CPLEX and Gurobi is unaffected by the redundant constraints and variables.\n\n}\n\n\\begin{figure*}[t!]\n\\centering\n\n\\begin{subfigure}[t]{0.45\\textwidth}\n\\centering\n\\includegraphics{transportation_graph.eps}\n\\vspace*{-1cm}\n\\caption{ Transportation Model}\n\\label{Fig:transportation}\n\\end{subfigure}\n\\begin{subfigure}[t]{0.45\\textwidth}\n\\centering\n\\includegraphics{transshipment_graph.eps}\n\\vspace*{-1cm}\n\\caption{Transshipment Model}\n\\label{Fig:transshipment}\n\\end{subfigure}\n\\vspace*{-0.5cm}\n\\caption{\nIn the transportation model \\citep{cerda:1983}, each hot stream $i$ supplies $\\sigma_{i,t}$ units of heat in temperature interval $t$ which can be received, in the same or a lower temperature interval, by a cold stream $j$ which demands $\\delta_{j,t}$ units of heat in $t$. 
\nIn the transshipment model \\citep{papoulias:1983}, there are also intermediate nodes transferring residual heat to a lower temperature interval.\nThis figure is adapted from \\citet{furman:2004}.\n}\n\\end{figure*}\n\n\n\\paragraph{Transshipment Model \\citep{papoulias:1983}} \nAs illustrated in Figure \\ref{Fig:transshipment}, the transshipment formulation transfers heat from hot streams to cold streams via intermediate transshipment nodes.\nIn each temperature interval, the heat entering a transshipment node either transfers to a cold stream in the same temperature interval or it descends to the transshipment node of the subsequent temperature interval as residual heat.\n\nBinary variable $y_{i,j}$ is 1 if hot stream $i\\in H$ is matched with cold stream $j\\in C$ and 0 otherwise.\nVariable $q_{i,j,t}$ represents the heat received by cold stream $j\\in C$ in temperature interval $t\\in T$ originally exported by hot stream $i\\in H$.\nVariable $r_{i,s}$ represents the residual heat of hot stream $i\\in H$ that descends from temperature interval $s$ to temperature interval $s+1$. \nParameter $U_{i,j}$ is a big-M parameter bounding the heat exchanged between hot stream $i\\in H$ and cold stream $j\\in C$, e.g.\\ $U_{i,j}=\\min\\{h_i,c_j\\}$.\nThe problem is formulated:\n{\\allowdisplaybreaks\n\\begin{align}\n\\text{min} & \\sum_{i\\in H}\\sum_{j \\in C} y_{i,j} \\label{TransshipmentMIP_Eq:ObjMinMatches} \\\\ \n& \\sum_{j \\in C} q_{i,j,s} + r_{i,s} = \\sigma_{i,s} + r_{i,s-1} & i\\in H, s\\in T \\label{TransshipmentMIP_Eq:HotStreamConservation} \\\\\n& r_{i,k} = 0 & i\\in H \\label{TransshipmentMIP_Eq:HeatConservation} \\\\\n& \\sum_{i\\in H} q_{i,j,t} = \\delta_{j,t} & j\\in C, t \\in T \\label{TransshipmentMIP_Eq:ColdStreamConservation} \\\\ \n& \\sum_{t\\in T} q_{i,j,t}\\leq U_{i,j}\\cdot y_{i,j} & i\\in H, j\\in C \\label{TransshipmentMIP_Eq:BigM_Constraint} \\\\\n& y_{i,j}\\in \\{0,1\\},\\; q_{i,j,t}, r_{i,s}\\geq 0 & i\\in H, j\\in C, s,t\\in T\n\\end{align}\n}\nExpression (\\ref{TransshipmentMIP_Eq:ObjMinMatches}) minimizes the number of matches. 
\nEquations (\\ref{TransshipmentMIP_Eq:HotStreamConservation})-(\\ref{TransshipmentMIP_Eq:ColdStreamConservation}) enforce heat conservation.\nEquation (\\ref{TransshipmentMIP_Eq:BigM_Constraint}) allows positive heat exchange between hot stream $i\\in H$ and cold stream $j\\in C$ only if $(i,j)$ are matched.\n\n\n\\section{Heuristics with Performance Guarantees}\n\\label{sec:heuristics_performance_guarantees}\n\n\\subsection{Computational Complexity}\n\\label{sec:computational_complexity}\n\nWe briefly introduce $\\mathcal{NP}$-completeness and basic computational complexity classes \\citep{arora:2009,papadimitriou:1994}.\nA \\emph{polynomial algorithm} produces a solution for a computational problem with a running time polynomial to the size of the problem instance.\nThere exist problems which admit a polynomial-time algorithm and others which do not.\nThere is also the class of \\emph{$\\mathcal{NP}$-complete problems} for which we do not know whether they admit a polynomial algorithm or not.\nThe question of whether $\\mathcal{NP}$-complete problems admit a polynomial algorithm is known as the $\\mathcal{P}=\\mathcal{NP}$ question.\nIn general, it is conjectured that $\\mathcal{P}\\neq \\mathcal{NP}$, i.e.\\ $\\mathcal{NP}$-complete problems are not solvable in polynomial time.\nAn optimization problem is \\emph{$\\mathcal{NP}$-hard} if its decision version is $\\mathcal{NP}$-complete.\nA computational problem is \\emph{strongly $\\mathcal{NP}$-hard} if it remains $\\mathcal{NP}$-hard when all parameters are bounded by a polynomial to the size of the instance.\n\n\nThe minimum number of matches problem is known to be strongly $\\mathcal{NP}$-hard, even in the special case of a single temperature interval.\n\\citet{furman:2004} propose an $\\mathcal{NP}$-hardness reduction from the well-known 3-Partition problem, i.e.\\ they show that the minimum number of matches problem has difficulty equivalent to the 3-Partition problem.\n\\ref{App:NP_harndess} presents an alternative $\\mathcal{NP}$-hardness reduction from the bin packing problem. 
This alternative reduction gives new insight into the packing nature of the minimum number of matches problem.
A major contribution of this paper is to design efficient, greedy heuristics motivated by this packing structure.

\begin{theorem}
\label{thm:NP_hardness}
There exists an $\mathcal{NP}$-hardness reduction from bin packing to the minimum number of matches problem with a single temperature interval.
\end{theorem}
\begin{omitted_proof}
See \ref{App:NP_harndess}.
\end{omitted_proof}


\subsection{Approximation Algorithms}
\label{sec:approximation_algorithms}

A heuristic with a performance guarantee is usually called an \emph{approximation algorithm} \citep{vazirani:2001,williamson:2011}.
Unless $\mathcal{P}=\mathcal{NP}$, there is no polynomial algorithm solving an $\mathcal{NP}$-hard problem.
An approximation algorithm is a polynomial algorithm producing a near-optimal solution to an optimization problem.
Formally, consider an optimization problem, without loss of generality a minimization problem, and a polynomial Algorithm $A$ for solving it (not necessarily to global optimality).
For each problem instance $I$, let $C_A(I)$ and $C_{OPT}(I)$ be the algorithm's objective value and the optimal objective value, respectively.
Algorithm $A$ is $\rho$-approximate if, for every problem instance $I$, it holds that:
\begin{equation*}
C_A(I)\leq \rho\cdot C_{OPT}(I).
\end{equation*}
That is, a $\rho$-approximation algorithm computes, in polynomial time, a solution with an objective value at most $\rho$ times the optimal objective value.
The value $\rho$ is the \emph{approximation ratio} of Algorithm $A$.
To prove a $\rho$-approximation ratio, we proceed as depicted in Figure \ref{Fig:ApproximationAlgorithm}.
For each problem instance, we compute analytically a lower bound $C_{LB}(I)$ of the optimal objective value, i.e.\ $C_{LB}(I) \leq C_{OPT}(I)$, and we show that the algorithm's objective value is at most $\rho$ times the lower bound, i.e.\ $C_A(I)\leq \rho \cdot C_{LB}(I)$.
The ratio of a $\rho$-approximation algorithm is \emph{tight} if the algorithm is not $(\rho-\epsilon)$-approximate for any $\epsilon>0$.
An algorithm is $O(f(n))$-approximate if its approximation ratio is asymptotically at most $f(n)$, and $\Omega(f(n))$-approximate if its approximation ratio is asymptotically at least $f(n)$, where $f(n)$ is a function of an input parameter $n$.


\begin{figure}
\begin{center}
\includegraphics{approximation_algorithm.eps}
\caption{Analysis of an Approximation Algorithm}
\label{Fig:ApproximationAlgorithm}
\end{center}
\end{figure}

Approximation algorithms have been developed for two problem classes relevant to process systems engineering: heat exchanger networks \citep{furman:2004} and pooling \citep{dey2015analysis}.
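As a small, illustrative aside on the analysis depicted in Figure \ref{Fig:ApproximationAlgorithm} (the function names below are ours and not part of our implementation), the instance-specific ratio computed from any analytic lower bound upper bounds the ratio actually achieved, because $C_{LB}(I)\leq C_{OPT}(I)$ implies $C_A(I)/C_{OPT}(I)\leq C_A(I)/C_{LB}(I)$:
\begin{verbatim}
# Since C_LB(I) <= C_OPT(I), the computable ratio C_A(I) / C_LB(I) upper
# bounds the ratio C_A(I) / C_OPT(I) achieved on instance I.
def certified_ratio(alg_value, lower_bound):
    assert 0 < lower_bound <= alg_value
    return alg_value / lower_bound

def is_rho_approximate_on(alg_value, lower_bound, rho):
    # sufficient (not necessary) check that the instance respects ratio rho
    return alg_value <= rho * lower_bound
\end{verbatim}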
Table \\ref{Table:Heuristics_Performance_Guarantees} lists performance guarantees for the minimum number of matches problem; most are new to this manuscript.\n\n\\begin{table}[t]\n\\small\n\\begin{adjustbox}{center} \n\\begin{tabular}{ | l l c c c | }\n\\hline\n\\textbf{Heuristic} & \\textbf{Abbrev.} & \\textbf{Section} & \\textbf{Performance Guarantee} & \\textbf{Running Time} \\\\ \n\\hline\n\\multicolumn{5}{|l|} {\\textbf{Single Temperature Interval Problem}} \\\\ \nSimple Greedy & SG & \\ref{Sec:SingleTemperatureIntervalProblem-Approximation} & 2$^{\\dagger}$ (tight) & $O(nm)$ \\\\\nImproved Greedy & IG & \\ref{Sec:SingleTemperatureIntervalProblem-Approximation} & 1.5 (tight) & $O(nm)$ \\\\\n\\hline\n\\multicolumn{5}{|l|} {\\textbf{Relaxation Rounding Heuristics}} \\\\ \nFractional LP Rounding & FLPR & \\ref{Subsection:FLPR} & $O(k)^{\\dagger}$, $O(U_{\\max})$, $\\Omega(n)$ & 1 LP \\\\\nLagrangian Relaxation Rounding & LRR & \\ref{Subsection:LRR} & & 2 LPs \\\\\nCovering Relaxation Rounding & CRR & \\ref{Subsection:CRR} & & $O(nm)$ ILPs \\\\\n\\hline\n\\multicolumn{5}{|l|} {\\textbf{Water Filling Heuristics}} \\\\ \nWater Filling MILP & WFM & \\ref{sec:water_filling} \\& \\ref{Sec:SingleTemperatureIntervalProblem-MILP} & $O(k)^{\\dagger}$, $\\Omega(k)$ & $O(k)$ MILPs \\\\ \nWater Filling Greedy & WFG & \\ref{sec:water_filling} \\& \\ref{Sec:SingleTemperatureIntervalProblem-Approximation} & $O(k)^{\\dagger}$, $\\Omega(k)$ & $O(n m k)$ \\\\\n\\hline\n\\multicolumn{5}{|l|} {\\textbf{Greedy Packing Heuristics}} \\\\ \nLargest Heat Match LP-based & LHM-LP & \\ref{Subsection:Largest_Heat_Match} & $O(\\log n + \\log (h_{\\max}\/\\epsilon))$ & $O(n^2m^2)$ LPs \\\\\nLargest Heat Match Greedy & LHM & \\ref{Subsection:Largest_Heat_Match} & & $O(n^2 m^2 k)$ \\\\\nLargest Fraction Match & LFM & \\ref{Subsection:LFM} & & $O(n^2 m^2 k)$ \\\\\nShortest Stream & SS & \\ref{Subsection:SS} & & $O(n m k)$ \\\\\n\\hline\n\\end{tabular}\n\\end{adjustbox} \n\\caption{Performance guarantees for the minimum number of matches problem. The performance guarantees marked ${\\dagger}$ are from \\cite{furman:2004}; all others are new to this manuscript.}\n\\label{Table:Heuristics_Performance_Guarantees}\n\\end{table}\n\n\n\n\\section{Single Temperature Interval Problem}\n\\label{sec:single_temperature_interval}\n\nThis section proposes efficient algorithms for the single temperature interval problem.\nUsing graph theoretic properties, we obtain: (i) a novel, efficiently solvable MILP formulation without big-M constraints and (ii) an improved 3\/2-approximation algorithm. \n{Of course, the single temperature interval problem is not immediately applicable to the minimum number of matches problem with multiple temperature intervals. But designing efficient approximation algorithms for the single temperature interval is the first, essential step before considering multiple temperature intervals. Additionally, the water filling heuristics introduced in Section \\ref{sec:water_filling} repeatedly solve the single temperature interval problem.}\n\nIn the single temperature interval problem, a feasible solution can be represented as a bipartite graph $G=(H\\cup C, M)$ in which there is a node for each hot stream $i\\in H$, a node for each cold stream $j\\in C$ and the set $M\\subseteq H\\times C$ specifies the matches. \n\\ref{App:Single_Temperature_Interval} shows the existence of an optimal solution whose graph $G$ does not contain any cycle. 
A connected graph without cycles is a \emph{tree}, so $G$ is a forest consisting of trees.
\ref{App:Single_Temperature_Interval} also shows that the number $v$ of edges in $G$, i.e.\ the number of matches, is related to the number $\ell$ of trees with the equality $v=n+m-\ell$.
Since $n$ and $m$ are input parameters, minimizing the number of matches in a single temperature interval is equivalent to finding a solution whose graph consists of a maximum number $\ell$ of trees.

\subsection{Novel MILP Formulation}
\label{Sec:SingleTemperatureIntervalProblem-MILP}

We propose a novel MILP formulation for the single temperature interval problem.
In an optimal solution without cycles, there can be at most $\min\{n,m\}$ trees.
From a packing perspective, we assume that there are $\min\{n,m\}$ available bins and each stream is placed into exactly one bin.
If a bin is non-empty, then its content corresponds to a tree of the graph.
The objective is to find a feasible solution with a maximum number of non-empty bins.

To formulate the problem as an MILP, we define the set $B=\{1,2,\ldots,\min\{n,m\}\}$ of available bins.
Binary variable $x_b$ is 0 if bin $b\in B$ is empty and 1 otherwise.
A binary variable $w_{i,b}$ indicates whether hot stream $i\in H$ is placed into bin $b\in B$.
Similarly, a binary variable $z_{j,b}$ specifies whether cold stream $j\in C$ is placed into bin $b\in B$.
Then, the minimum number of matches problem can be formulated:
{\allowdisplaybreaks
\begin{align}
\text{max} & \sum_{b\in B} x_b \label{Eq:SingleMIP_MaxBins} \\
& x_b \leq \sum_{i\in H} w_{i,b} & b\in B \label{Eq:SingleMIP_HotBinUsage} \\
& x_b \leq \sum_{j\in C} z_{j,b} & b\in B \label{Eq:SingleMIP_ColdBinUsage} \\
& \sum_{b\in B} w_{i,b} = 1 & i\in H \label{Eq:SingleMIP_HotAssignment} \\
& \sum_{b\in B} z_{j,b} = 1 & j\in C \label{Eq:SingleMIP_ColdAssignment} \\
& \sum_{i\in H} w_{i,b} \cdot h_i = \sum_{j\in C} z_{j,b} \cdot c_j & b\in B \label{Eq:SingleMIP_BinHeatConservation} \\
& x_b, w_{i,b}, z_{j,b}\in\{0,1\} & b\in B, i\in H, j\in C \label{Eq:SingleMIP_Integrality}
\end{align}
}
Expression (\ref{Eq:SingleMIP_MaxBins}), the objective function, maximizes the number of non-empty bins.
Inequalities (\ref{Eq:SingleMIP_HotBinUsage}) and (\ref{Eq:SingleMIP_ColdBinUsage}) ensure that a bin is counted as non-empty only if it contains at least one hot stream and at least one cold stream.
Equations (\ref{Eq:SingleMIP_HotAssignment}) and (\ref{Eq:SingleMIP_ColdAssignment}) enforce that each stream is assigned to exactly one bin.
Finally, Eqs.\ (\ref{Eq:SingleMIP_BinHeatConservation}) ensure heat conservation within each bin.
Note that, unlike the transportation and transshipment models, Eqs.\ (\ref{Eq:SingleMIP_MaxBins})-(\ref{Eq:SingleMIP_BinHeatConservation}) do not use a big-M parameter. {
\ref{App:Water_Filling} formulates the single temperature interval problem \emph{without} heat conservation.
Eqs.\\ (\\ref{Eq:SingleMIPwithoutConservation_MaxBins})-(\\ref{Eq:SingleMIPwithoutConservation_Integrality}) are similar to Eqs.\\ (\\ref{Eq:SingleMIP_MaxBins})-(\\ref{Eq:SingleMIP_Integrality}) except (i) they drop constraints (\\ref{Eq:SingleMIP_HotBinUsage}) and (ii) equalities (\\ref{Eq:SingleMIP_HotAssignment}) \\& (\\ref{Eq:SingleMIP_BinHeatConservation}) become inequalities (\\ref{Eq:SingleMIPwithoutConservation_HotAssignment}) \\& (\\ref{Eq:SingleMIPwithoutConservation_BinHeatConservation}).\n}\n\n\\subsection{Improved Approximation Algorithm}\n\\label{Sec:SingleTemperatureIntervalProblem-Approximation}\n\n\\citet{furman:2004} propose a greedy 2-approxi\\-mation algorithm for the minimum number of matches problem in a single temperature interval.\nWe show that their analysis is tight. We also propose an improved, tight $1.5$-approximation algorithm by prioritizing matches with equal heat loads and exploiting graph theoretic properties.\n\nThe simple greedy (SG) algorithm considers the hot and the cold streams in non-increasing heat load order \\citep{furman:2004}.\nInitially, the first hot stream is matched to the first cold stream and an amount $\\min\\{h_1, c_1\\}$ of heat is transferred between them.\nWithout loss of generality $h_1 > c_1$, which implies that an amount $h_1 - c_1$ of heat load remains to be transferred from $h_1$ to the remaining cold streams.\nSubsequently, the algorithm matches $h_1$ to $c_2$, by transferring $\\min\\{h_1 - c_1, c_2\\}$ heat. \nThe same procedure repeats with the other streams until all remaining heat load is transferred.\n\n\n\n\\begin{algorithm}[t] \\nonumber\n\\caption[Simple Greedy (SG)]{Simple Greedy (SG), developed by \\cite{furman:2004}, is applicable to one temperature interval only.}\n\\begin{algorithmic}[1]\n\\State Sort the streams so that $h_1\\geq h_2\\geq \\ldots\\geq h_n$ and $c_1\\geq c_2\\geq \\ldots \\geq c_m$.\n\\State Set $i = 1$ and $j = 1$.\n\\While {there is remaining heat load to be transferred}\n\\State Transfer $q_{i,j}=\\min\\{h_i, c_j\\}$ \n\\State Set $h_i = h_i - q_{i,j}$ and $c_j = c_j - q_{i,j}$\n\\State \\textbf{if} $h_i = 0$, \\textbf{then} set $i = i+1$\n\\State \\textbf{if} $c_j = 0$, \\textbf{then} set $j = j+1$\n\\EndWhile\n\\end{algorithmic}\n\\label{Alg:SimpleGreedy}\n\\end{algorithm}\n\n\n\n\\citet{furman:2004} show that Algorithm SG is 2-approximate for one temperature interval.\nOur new result in Theorem \\ref{thm:greedy} shows that this ratio is tight.\n\n\\begin{theorem}\n\\label{thm:greedy}\nAlgorithm SG achieves an approximation ratio of 2 for the single temperature interval problem and it is tight.\n\\end{theorem}\n\\begin{omitted_proof}\nSee \\ref{App:Single_Temperature_Interval}.\n\\end{omitted_proof}\n\n\\medskip\n\nAlgorithm IG improves Algorithm SG by: (i) matching the pairs of hot and cold streams with equal heat loads and (ii) using the acyclic property in the graph representation of an optimal solution. {\nIn practice, hot and cold process streams are unlikely to have equal supplies and demands of heat, so discussing equal heat loads is largely a thought experiment. But the updated analysis allows us to claim an improved performance bound on Algorithm SG. 
Additionally, the notion of matching roughly equivalent supplies and demands inspires the Section \ref{Subsection:LFM} \emph{Largest Fraction Match First} heuristic.
}


\begin{algorithm}[t] \nonumber
\caption[Improved Greedy (IG)]{Improved Greedy (IG) is applicable to one temperature interval only.}
\label{alg:impgreedy}
\begin{algorithmic}[1]
\For {each pair of hot stream $i$ and cold stream $j$ s.t. $h_i=c_j$}
\State Transfer $h_i$ amount of heat load (also equal to $c_j$) between them and remove them.
\EndFor
\State Run Algorithm SG with respect to the remaining streams.
\end{algorithmic}
\label{Alg:ImprovedGreedy}
\end{algorithm}



\medskip

\begin{theorem}\label{thm:impgreedy}
Algorithm IG achieves an approximation ratio of 1.5 for the single temperature interval problem and it is tight.
\end{theorem}
\begin{omitted_proof}
See \ref{App:Single_Temperature_Interval}.
\end{omitted_proof}

\medskip



\section{Maximum Heat Computations with Match Restrictions}
\label{sec:max_heat}
This section discusses computing the maximum heat that can be feasibly exchanged in a minimum number of matches instance. Section \ref{Sec:MaxHeat_2Streams} discusses the special case of two streams and thereby reduces the value of the big-M parameter $U_{i,j}$.
Sections \ref{Sec:MaxHeat_SingleInterval} \& \ref{Sec:MaxHeat_MultipleIntervals} generalize Section \ref{Sec:MaxHeat_2Streams} from two streams to an arbitrary set of candidate matches. Section \ref{Sec:MaxHeat_SingleInterval} considers a restricted subset of matches in a single temperature interval. Section \ref{Sec:MaxHeat_MultipleIntervals} calculates the maximum heat that can be feasibly exchanged in the most general case of multiple temperature intervals.
These maximum heat computations are an essential ingredient of our heuristic methods and aim at using each match in the most profitable way. They also settle the feasibility of a minimum number of matches instance.

\subsection{Two Streams and Big-M Parameter Computation}
\label{Sec:MaxHeat_2Streams}

A common way of computing the big-M parameters is setting $U_{i,j}=\min\{h_i,c_j\}$ for each $i\in H$ and $j\in C$.
\citet{gundersen:1997} propose a better method for calculating the big-M parameter.
Our novel Greedy Algorithm MHG (Maximum Heat Greedy) obtains tighter $U_{i,j}$ bounds than either the trivial bounds or the \citet{gundersen:1997} bounds by exploiting the transshipment model structure.

Given hot stream $i$ and cold stream $j$, Algorithm MHG computes the maximum amount of heat that can be exchanged between $i$ and $j$ in any feasible solution.
Algorithm MHG is tight in the sense that there is always a feasible solution where streams $i$ and $j$ exchange exactly $U_{i,j}$ units of heat.
Note that, in addition to $U_{i,j}$, the algorithm computes a value $q_{i,s,j,t}$ of the heat exchanged between each hot stream $i\in H$ in temperature interval $s\in T$ and each cold stream $j\in C$ in temperature interval $t\in T$, so that $\sum_{s,t\in T}q_{i,s,j,t}=U_{i,j}$.
These $q_{i,s,j,t}$ values are required by the greedy packing heuristics in Section \ref{sec:greedy_packing}.

Algorithm \ref{Alg:MaximumHeat} is a pseudocode of Algorithm MHG.
The correctness, i.e.\ the maximality of the heat exchanged between $i$ and $j$, is a corollary of the well-known max-flow min-cut theorem.
Initially, the procedure transfers the maximum amount of heat within the same temperature interval, i.e.\ $q_{i,u,j,u}=\min\{\sigma_{i,u},\delta_{j,u}\}$ for each $u\in T$.
The remaining heat is transferred greedily in a top down manner, with respect to the temperature intervals, by accounting for the heat residual capacities.
For each temperature interval $u\in T$, the heat residual capacity $R_u=\sum_{i=1}^n\sum_{s=1}^u\sigma_{i,s} - \sum_{j=1}^m\sum_{t=1}^u\delta_{j,t}$ imposes an upper bound on the amount of heat that may descend from temperature intervals $1,2,\ldots,u$ to temperature intervals $u+1,u+2,\ldots,k$.

\begin{algorithm}[t] \nonumber
\caption{Maximum Heat Greedy (MHG)}
\textbf{Input:} Hot stream $i\in H$ and cold stream $j\in C$
\begin{algorithmic}[1]
\State $\vec{q} \leftarrow \vec{0}$
\For {$u=1,2,\ldots,k-1$}
\State $R_u\leftarrow\sum_{i=1}^n\sum_{s=1}^u\sigma_{i,s} - \sum_{j=1}^m\sum_{t=1}^u\delta_{j,t}$
\EndFor
\For {$u=1,2,\ldots,k$}
\State $q_{i,u,j,u}\leftarrow \min\{\sigma_{i,u},\delta_{j,u}\}$
\State $\sigma_{i,u}\leftarrow \sigma_{i,u}-q_{i,u,j,u}$
\State $\delta_{j,u}\leftarrow \delta_{j,u}-q_{i,u,j,u}$
\EndFor
\For {$s=1,2,\ldots,k-1$}
\For {$t=s+1,s+2,\ldots,k$}
\State $q_{i,s,j,t}\leftarrow\min\{\sigma_{i,s},\delta_{j,t},\min_{s\leq u\leq t-1}\{R_u\}\}$
\State $\sigma_{i,s}\leftarrow \sigma_{i,s}-q_{i,s,j,t}$
\State $\delta_{j,t}\leftarrow \delta_{j,t}-q_{i,s,j,t}$
\For {$u=s,s+1,s+2,\ldots,t-1$}
\State $R_u\leftarrow R_u-q_{i,s,j,t}$
\EndFor
\EndFor
\EndFor
\State Return $\vec{q}$
\end{algorithmic}
\label{Alg:MaximumHeat}
\end{algorithm}

\subsection{Single Temperature Interval}
\label{Sec:MaxHeat_SingleInterval}

Given an instance of the single temperature interval problem and a subset $M$ of matches, the maximum amount of heat that can be feasibly exchanged between the streams using only the matches in $M$ can be computed by solving \ref{EquationSet:SingleMaxHeatLP_initial_stattement}.
{Like the single temperature interval algorithms of Section \ref{sec:single_temperature_interval}, \ref{EquationSet:SingleMaxHeatLP_initial_stattement} is not directly applicable to a minimum number of matches problem with multiple temperature intervals.
But \\ref{EquationSet:SingleMaxHeatLP_initial_stattement} is an important part of our water filling heuristics.}\nFor simplicity, \\ref{EquationSet:SingleMaxHeatLP_initial_stattement} drops\ntemperature interval indices for variables $q_{i,j}$.\n{\\allowdisplaybreaks\n\\begin{align}\n\\tag{MaxHeatLP}\n\\label{EquationSet:SingleMaxHeatLP_initial_stattement}\n\\begin{aligned}\n\\text{max} & \\sum_{(i,j)\\in M} q_{i,j} \\\\\n& \\sum_{j\\in C} q_{i,j} \\leq h_i & i\\in H \\\\\n& \\sum_{i\\in H} q_{i,j} \\leq c_j & j\\in C \\\\\n& q_{i,j} \\geq 0 & i\\in H, j\\in C\n\\end{aligned}\n\\end{align}\n}\n\n\\subsection{Multiple Temperature Intervals}\n\\label{Sec:MaxHeat_MultipleIntervals}\n\nMaximizing the heat exchanged through a subset of matches across multiple temperature intervals can solved with an LP that generalizes \\ref{EquationSet:SingleMaxHeatLP_initial_stattement}. \nThe generalized LP must satisfy the additional requirement that, after removing a maximum heat exchange, the remaining instance is feasible.\nFeasibility is achieved using residual capacity constraints which are essential for the efficiency of greedy packing heuristics (see Section \\ref{Subsection:MonotonicGreedyHeuristics}).\n\nGiven a set $M$ of matches, let $A(M)$ be the set of quadruples $(i,s,j,t)$ such that a positive amount of heat can be feasibly transferred via the transportation arc with endpoints the nodes $(i,s)$ and $(j,t)$.\nThe set $A(M)$ does not contain any quadruple $(i,s,j,t)$ with: (i) $s>t$, (ii) $\\sigma_{i,s}=0$, (iii) $\\delta_{j,t}=0$, or (iv) $(i,j)\\not\\in M$. \nLet $V^H(M)$ and $V^C(M)$ be the set of transportation vertices $(i,s)$ and $(j,t)$, respectively, that appear in $A(M)$.\nSimilarly, given two fixed vertices $(i,s)\\in V^H(M)$ and $(j,t)\\in V^C(M)$, we define the sets $V_{i,s}^C(M)$ and $V_{j,t}^H(M)$ of their respective neighbors in $A(M)$.\n\nConsider a temperature interval $u\\in T$.\nWe define by $A_u(M)\\subseteq A(M)$ the subset of quadruples with $s\\leq u< t$, for $u\\in T$. \nThe total heat transferred via the arcs in $A_u(M)$ must be upper bounded by $R_u=\\sum_{i=1}^n\\sum_{s=1}^u\\sigma_{i,s} - \\sum_{j=1}^m\\sum_{t=1}^u\\delta_{j,t}$.\nFurthermore, $A(M)$ eliminates any quadruple $(i,s,j,t)$ with $R_u=0$, for some $s\\leq u0$}\n\\State $y_{i,j}\\leftarrow 1$\n\\Else\n\\State $y_{i,j}\\leftarrow 0$\n\\EndIf\n\\EndFor\n\\State Return $(\\vec{y},\\vec{q})$\n\\end{algorithmic}\n\\label{Alg:FLPR}\n\\end{algorithm}\n\n\nAn inherent drawback of the \\citet{furman:2004} approach is the existence of optimal fractional solutions with unnecessary matches.\nTheorem \\ref{Thm:FLPR_negative} shows that Algorithm FLPR performance is bad in the worst case, even for instances with a single temperature interval. 
\nThe proof, given in \\ref{App:Relaxation_Rounding}, can be extended so that unnecessary matches occur across multiple temperature intervals.\n\\begin{theorem}\n\\label{Thm:FLPR_negative}\nAlgorithm FLPR is $\\Omega(n)$-approximate.\n\\end{theorem}\n\\begin{omitted_proof}\nSee \\ref{App:Relaxation_Rounding}.\n\\end{omitted_proof}\n\n\\medskip\n\nConsider an optimal fractional solution to \\ref{EquationSet:FracLP} and suppose that $M\\subseteq H\\times C$ is the set of pairs of streams exchanging a positive amount of heat.\nFor each $(i,j)\\in M$, denote by $L_{i,j}$ the heat exchanged between hot stream $i$ and cold stream $j$.\nWe define: \n\\begin{equation*}\n\\phi(M) = \\min_{(i,j)\\in M} \\left\\{ \\frac{L_{i,j}}{U_{i,j}} \\right\\}\n\\end{equation*}\nas the \\emph{filling ratio}, which corresponds to the minimum portion of an upper bound $U_{i,j}$ filled with the heat $L_{i,j}$, for some match $(i,j)$.\nGiven an optimal fractional solution with filling ratio $\\phi(M)$, Theorem \\ref{Thm:FLPR_positive} obtains a $1\/\\phi(M)$-approximation ratio for FLPR.\n\n\\begin{theorem}\n\\label{Thm:FLPR_positive}\nGiven an optimal fractional solution with a set $M$ of matches and filling ratio $\\phi(M)$, FLPR produces a $\\left(1\/\\phi(M) \\right)$-approximate integral solution.\n\\end{theorem}\n\\begin{omitted_proof}\nSee \\ref{App:Relaxation_Rounding}.\n\\end{omitted_proof}\n\n\\medskip\n\nIn the case where all heat supplies and demands are integers, the integrality of the minimum cost flow polytope and Theorem \\ref{Thm:FLPR_positive} imply that FLPR is $U_{\\max}$-approximate, where $U_{\\max}=\\max_{(i,j)\\in H\\times C}\\{U_{i,j}\\}$ is the biggest big-M parameter. \n{A corollary of the $L_{i,j} \/ U_{i,j}$ ratio is that a fractional solution transferring heat $L_{i,j}$ close to capacity $U_{i,j}$ corresponds to a good integral solution. For example, if the optimal fractional solution satisfies $L_{i,j}>0.5 \\cdot U_{i,j}$, for every used match $(i,j)$ such that $L_{i,j} \\neq 0$, then FLPR gives a 2-approximate integral solution. 
\nFinally, branch-and-cut repeatedly solves the fractional problem, so our new bound proves the big-M parameter's relevance for exact methods.}\nBecause performance guarantee of FLPR scales with the big-M parameters $U_{i,j}$, we improve the heuristic performance by computing a small big-M parameter $U_{i,j}$ using Algorithm MHG in Section \\ref{Sec:MaxHeat_2Streams}.\n\n\\subsection{Lagrangian Relaxation Rounding}\n\\label{Subsection:LRR}\n\n\\cite{furman:2004} design efficient heuristics for the minimum number of matches problem by applying the method of Lagrangian relaxation and relaxing the big-M constraints.\nThis approach generalizes Algorithm FLPR by approximating the fractional cost of every possible match $(i,j)\\in H\\times C$ and solving an appropriate LP using these costs.\nWe present the LP and revisit different ways of approximating the fractional match costs.\n\nIn a feasible solution, the fractional cost $\\lambda_{i,j}$ of a match $(i,j)$ is the cost incurred per unit of heat transferred via $(i,j)$.\nIn particular, \n\\begin{equation*}\n\\lambda_{i,j}=\n\\left\\{\n\t\\begin{array}{ll}\n\t\t1\/L_{i,j}, & \\mbox{if $L_{i,j}>0$, and} \\\\\n\t\t0, & \\mbox{if $L_{i,j}=0$}\n\t\\end{array}\n\\right.\n\\end{equation*}\nwhere $L_{i,j}$ is the heat exchanged via $(i,j)$.\nThen, the number of matches can be expressed as $\\sum_{i,s,j,t}\\lambda_{i,j}\\cdot q_{i,s,j,t}$.\n\\citet{furman:2004} propose a collection of heuristics computing a single cost value for each match $(i,j)$ and constructing a minimum cost solution.\nThis solution is rounded to a feasible integral solution equivalently to FLPR.\n\nGiven a cost vector $\\vec{\\lambda}$ of the matches, a minimum cost solution is obtained by solving:\n\\begin{align}\n\\tag{CostLP}\n\\label{EquationSet:CostLP}\n\\begin{aligned}\n\\text{min} & \\sum_{i \\in H} \\sum_{j \\in C} \\sum_{s,t\\in T} \\lambda_{i,j} \\cdot q_{i,s,j,t} \\\\ \n& \\sum_{j\\in C}\\sum_{t\\in T} q_{i,s,j,t} = \\sigma_{i,s} & i\\in H, s\\in T \\\\\n& \\sum_{i\\in H}\\sum_{s\\in T} q_{i,s,j,t} = \\delta_{j,t} & j\\in C, t\\in T \\\\\n& q_{i,s,j,t}\\geq 0 & i\\in H, j\\in C,\\; s,t\\in T \n\\end{aligned}\n\\end{align}\n\nA challenge in Lagrangian relaxation rounding is computing a cost $\\lambda_{i,j}$ for each hot stream $i\\in H$ and cold stream $j\\in C$.\nWe revisit and generalize policies for selecting costs.\n\n\\paragraph{Cost Policy 1 (Maximum Heat)} \nMatches that exchange large amounts of heat incur low fractional cost.\nThis observation motivates selecting $\\lambda_{i,j}= 1\/U_{i,j}$, for each $(i,j)\\in H\\times C$, where $U_{i,j}$ is an upper bound on the heat that can be feasibly exchanged between $i$ and $j$.\nIn this case, Lagrangian relaxation rounding is equivalent to FLPR (Algorithm \\ref{Alg:FLPR}).\n\n\\paragraph{Cost Policy 2 (Bounds on the Number of Matches)}\nThis cost selection policy uses lower bounds $\\alpha_i$ and $\\beta_j$ on the number of matches of hot stream $i\\in H$ and cold stream $j\\in C$, respectively, in an optimal solution.\nGiven such lower bounds, at least $\\alpha_i$ cost is incurred for the $h_i$ heat units of $i$ and at least $\\beta_j$ cost is incurred for the $c_j$ units of $j$.\nOn average, each heat unit of $i$ is exchanged with cost at least ${\\alpha_i}\/{h_i}$ and each heat unit of $j$ is exchanged with cost at least ${\\beta_j}\/{c_j}$.\nSo, the fractional cost of each match $(i,j)\\in H\\times C$ can be approximated by setting $\\lambda_{i,j}={\\alpha_i}\/{h_i}$, $\\lambda_{i,j}={\\beta_j}\/{c_j}$ or 
$\\lambda_{i,j}=\\frac{1}{2}(\\frac{\\alpha_i}{h_i} + \\frac{\\beta_j}{c_j})$.\n\n\\citet{furman:2004} use lower bounds $\\alpha_i=1$ and $\\beta_j=1$, for each $i\\in H$ and $j\\in C$.\nWe show that, for any choice of lower bounds $\\alpha_i$ and $\\beta_j$, this cost policy for selecting $\\lambda_{i,j}$ is not effective. Even when $\\alpha_i$ and $\\beta_j$ are tighter than 1, all feasible solutions of \\ref{EquationSet:CostLP} attain the same cost. \nConsider any feasible solution $(\\vec{y},\\vec{q})$ and the fractional cost $\\lambda_{i,j}=\\alpha_i \/ h_i$ for each $(i,j)\\in H\\times C$.\nThen the cost of $(\\vec{y},\\vec{q})$ in \\ref{EquationSet:CostLP} is:\n\\begin{equation*}\n\\sum_{i\\in H} \\sum_{j\\in C} \\sum_{s,t\\in T} \\lambda_{i,j} \\cdot q_{i,s,j,t} = \n\\sum_{i\\in H} \\sum_{j\\in C} \\sum_{s,t\\in T} \\frac{\\alpha_i}{h_i} \\cdot q_{i,s,j,t} = \n\\sum_{i\\in H} \\alpha_i.\n\\end{equation*}\nSince every feasible solution in (\\ref{EquationSet:CostLP}) has cost $\\sum_{i\\in H} \\alpha_i$, Lagrangian relaxation rounding returns an arbitrary solution.\nSimilarly, if $\\lambda_{i,j}={\\beta_j}\/{c_j}$ for $(i,j)\\in H\\times C$, every feasible solution has cost $\\sum_{j\\in C}\\beta_j$.\nIf $\\lambda_{i,j}=\\frac{1}{2}(\\frac{\\alpha_i}{h_i} + \\frac{\\beta_j}{c_j})$, all feasible solutions have the same cost $1\/2 \\cdot (\\sum_{i\\in H} \\alpha_i + \\sum_{j\\in C}\\beta_j)$.\n\n\n\\paragraph{Cost Policy 3 (Existing Solution)}\nThis method of computing costs uses an existing solution.\nThe main idea is to use the actual fractional costs for the solution's matches and a non-zero cost for every unmatched streams pair. \nA minimum cost solution with respect to these costs may improve the initial solution.\nSuppose that $M$ is the set of matches in the initial solution and let $L_{i,j}$ be the heat exchanged via $(i,j) \\in M$.\nFurthermore, let $U_{i,j}$ be an upper bound on the heat exchanged between $i$ and $j$ in any feasible solution.\nThen, a possible selection of costs is $\\lambda_{i,j}= 1\/L_{i,j}$ if $(i,j)\\in M$, and $\\lambda_{i,j}=1\/U_{i,j}$ otherwise.\n\n\n\\subsection{Covering Relaxation Rounding}\n\\label{Subsection:CRR}\n\nThis section proposes a novel covering relaxation rounding heuristic for the minimum number of matches problem.\nThe efficiency of Algorithm FLPR depends on lower bounding the unitary cost of the heat transferred via each match. \nThe goal of the covering relaxation is to use these costs and lower bound the number of matches in a stream-to-stream to basis by relaxing heat conservation.\nThe heuristic constructs a feasible integral solution by solving successively instances of the covering relaxation.\n\nConsider a feasible MILP solution and suppose that $M$ is the set of matches.\nFor each hot stream $i\\in H$ and cold stream $j\\in C$, denote by $C_i(M)$ and $H_j(M)$ the subsets of cold and hot streams matched with $i$ and $j$, respectively, in $M$.\nMoreover, let $U_{i,j}$ be an upper bound on the heat that can be feasibly exchanged between $i\\in H$ and $j\\in C$.\nSince the solution is feasible, it must be true that $\\sum_{j\\in C_i(M)}U_{i,j}\\geq h_i$ and $\\sum_{i\\in H_j(M)}U_{i,j}\\geq c_j$.\nThese inequalities are necessary, though not sufficient, feasibility conditions. 
\nBy minimizing the number of matches while ensuring these conditions, we obtain a covering relaxation:\n\\begin{align}\n\\tag{CoverMILP}\n\\label{EquationSet:CoverMILP}\n\\begin{aligned}\n\\text{min} & \\sum_{i \\in H}\\sum_{j \\in C} y_{i,j} \\\\ \n& \\sum_{j\\in C} y_{i,j}\\cdot U_{i,j} \\geq h_i & i\\in H \\\\\n& \\sum_{i\\in H} y_{i,j}\\cdot U_{i,j} \\geq c_j & j\\in C \\\\\n& y_{i,j}\\in\\{0,1\\} & i\\in H, j\\in C\n\\end{aligned}\n\\end{align}\nIn certain cases, the matches of an optimal solution to \\ref{EquationSet:CoverMILP} overlap well with the matches in a near-optimal solution for the original problem. \nOur new Covering Relaxation Rounding (CRR) heuristic for the minimum number of matches problem successively solves instances of the covering relaxation \\ref{EquationSet:CoverMILP}.\nThe heuristic chooses new matches iteratively until it terminates with a feasible set $M$ of matches. \nIn the first iteration, Algorithm CRR constructs a feasible solution for the covering relaxation and adds the chosen matches in $M$.\nThen, Algorithm CRR computes the maximum heat that can be feasibly exchanged using the matches in $M$ and stores the computed heat exchanges in $\\vec{q}$.\nIn the second iteration, the heuristic performs same steps with respect to the smaller updated instance $(\\vec{\\sigma}',\\vec{\\delta}')$, where $\\sigma_{i,s}'=\\sigma_{i,s}-\\sum_{j,t}q_{i,s,j,t}$ and $\\delta_{j,t}'=\\delta_{j,t}-\\sum_{i,s}q_{i,s,j,t}$.\nThe heuristic terminates when all heat is exchanged.\n\nAlgorithm \\ref{Alg:CRR} is a pseudocode of heuristic CRR.\nProcedure $CoveringRelaxation(\\vec{\\sigma},\\vec{\\delta})$ produces an optimal subset of matches for the instance of the covering relaxation in which the heat supplies and demands are specified by the vectors $\\vec{\\sigma}$ and $\\vec{\\delta}$, respectively.\nProcedure $MHLP(\\vec{\\sigma},\\vec{\\delta},M)$ (LP-based Maximum Heat) computes the maximum amount of heat that can be feasibly exchanged by using only the matches in $M$ and is based on solving the LP in Section \\ref{Sec:MaxHeat_MultipleIntervals}.\n\n\\begin{algorithm}[t] \\nonumber\n\\caption{Covering Relaxation Rounding (CRR)}\n\\begin{algorithmic}[1]\n\\State $M\\leftarrow \\emptyset$\n\\State $\\vec{q} \\leftarrow \\vec{0}$\n\\State $r\\leftarrow \\sum_{i\\in H}h_i$\n\\While {$r>0$}\n\\State For each $i\\in H$ and $s\\in T$, set $\\sigma_{i,s}' \\leftarrow \\sigma_{i,s} - \\sum_{j\\in C} \\sum_{t\\in T} q_{i,s,j,t}$\n\\State For each $j\\in C$ and $t\\in T$, set $\\delta_{j,t}' \\leftarrow \\delta_{j,t} - \\sum_{i\\in H} \\sum_{s\\in T} q_{i,s,j,t}$\n\\State $M'\\leftarrow CoveringRelaxation(\\vec{\\sigma}',\\vec{\\delta}')$\n\\Comment{{(\\ref{EquationSet:CoverMILP}) solving, Section \\ref{Subsection:CRR}}}\n\\State $M\\leftarrow M\\cup M'$\n\\State $\\vec{q}\\leftarrow MHLP(\\vec{\\sigma},\\vec{\\delta},M')$\n{\\Comment{Equations (\\ref{Eq:MaxHeatLP_Objective}) - (\\ref{Eq:MaxHeatLP_Positiveness}) LP solving, Section \\ref{Sec:MaxHeat_MultipleIntervals}}}\n\\State $r\\leftarrow \\sum_{i\\in H}h_i-\\sum_{i\\in H}\\sum_{j\\in C}\\sum_{s,t\\in T} q_{i,s,j,t}$\n\\EndWhile\n\\end{algorithmic}\n\\label{Alg:CRR}\n\\end{algorithm}\n\n\n\\section{Water Filling Heuristics}\n\\label{sec:water_filling}\n\nThis section introduces \\emph{water filling heuristics} for the minimum number of matches problem.\nThese heuristics produce a solution iteratively by exchanging the heat in each temperature interval, in a top down manner. 
\nThe water filling heuristics use, in each iteration,\nan efficient algorithm for the single temperature interval problem \n(see Section \\ref{sec:single_temperature_interval}).\n\n\nFigure \\ref{Fig:Water_Filling} shows the main idea of a \\emph{water filling heuristic} for the minimum number of matches problem with multiple temperature intervals.\nThe problem is solved iteratively in a top-down manner, from the highest to the lowest temperature interval.\nEach iteration produces a solution for one temperature interval.\nThe main components of a water filling heuristic are: (i) a maximum heat procedure which reuses matches from previous iterations and (ii) an efficient single temperature interval algorithm. \n\n\\begin{figure*}[t!]\n \\centering\n\n \\begin{subfigure}[t]{0.5\\textwidth}\n \\centering\n\t\\includegraphics{water_filling_all.eps}\n\t\\caption{Top Down Temperature Interval Structure}\n\t\\label{Fig:Water_Filling_Temperature_Intervals}\n\t\\end{subfigure}\n\t\\begin{subfigure}[t]{0.45\\textwidth}\n\t\\centering\n\n\t\\includegraphics{water_filling_one.eps}\n\n\t\\caption{Excess Heat Descending}\n\t\\label{Fig:Water_Filling_Excess_Heat}\n\t\\end{subfigure}\n\\caption{A water filling heuristic computes a solution by exploiting the top down temperature interval structure and moving from the higher to the lower temperature interval.\nIn each temperature interval $t$, the heuristic isolates the streams with positive heat at $t$, it matches them and descends the excess heat to the next interval which is sequentially solved. \n\\label{Fig:Water_Filling}}\n\n\\end{figure*}\n\n\nGiven a set $M$ of matches and an instance $(\\vec{\\sigma_t},\\vec{\\delta}_t)$ of the problem in the single temperature interval $t$, the procedure $MHS(\\vec{\\sigma}_t,\\vec{\\delta}_t,M)$ (Maximum Heat for Single temperature interval) computes the maximum heat that can be exchanged between the streams in $t$ using only the matches in $M$.\nAt a given temperature interval $t$, the $MHS$ procedure solves the LP in Section \\ref{Sec:MaxHeat_SingleInterval}.\nThe procedure $SingleTemperatureInterval(\\vec{\\sigma}_t,\\vec{\\delta}_t)$ produces an efficient solution for the single temperature interval problem with a minimum number of matches and total heat to satisfy one cold stream. $SingleTemperatureInterval(\\vec{\\sigma}_t,\\vec{\\delta}_t)$ either: (i) solves the MILP exactly (Water Filling MILP-based or WFM) or (ii) applies the improved greedy approximation Algorithm IG in Section \\ref{sec:single_temperature_interval} (Water Filling Greedy or WFG). \nBoth water filling heuristics solve instances of the single temperature interval problem in which there is no heat conservation, i.e.\\ the heat supplied by the hot streams is greater or equal than the heat demanded by the cold streams. 
\nThe exact WFM uses the MILP proposed in Eqs.\\ (\\ref{Eq:SingleMIPwithoutConservation_MaxBins}) - (\\ref{Eq:SingleMIPwithoutConservation_Integrality}) of \\ref{App:Water_Filling}.\nThe greedy heuristic WFG adapts Algorithm IG by terminating when the entire heat demanded by the cold streams has been transferred.\nAfter addressing the single temperature interval, the excess heat descends to the next temperature interval.\nAlgorithm \\ref{Alg:Water_Filling} represents our water filling approach in pseudocode.\n{Figure \\ref{Figure:Water_Filling_Ingredients} shows the main components of water filling heuristics.}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics{water_filling_ingredients.eps}\n\\caption{\nWater filling heuristics solve the temperature intervals serially in a top-down manner and keep composition feasible. \nThe main components are (i) a maximum heat computation re-using higher temperature interval matches, (ii) a single temperature interval problem algorithm, and (iii) excess heat descending between consecutive temperature intervals. \nHeuristic WFM uses the \\ref{App:Water_Filling} MILP formulation for solving the single temperature interval problem, while heuristic WFG uses the Section \\ref{Sec:SingleTemperatureIntervalProblem-Approximation} Algorithm IG.}\n\\label{Figure:Water_Filling_Ingredients}\n\\end{center}\n\\end{figure}\n\n\\begin{algorithm}[t] \\nonumber\n\\caption{Water Filling (WF)}\\label{alg:water_filling}\n\\begin{algorithmic}[1]\n\\State $M \\leftarrow \\emptyset$\n\\State $\\vec{q} \\leftarrow \\vec{0}$\n\\For {$t=1,2,\\ldots,k$}\n\\If {$t\\neq 1$}\n\\State $\\vec{q}\\;' \\leftarrow MHS(\\vec{\\sigma}_t,\\vec{\\delta}_t,M)$\n{\\Comment{(\\ref{EquationSet:SingleMaxHeatLP_initial_stattement}) solve, Section \\ref{Sec:MaxHeat_SingleInterval}}}\n\\State $\\vec{q} \\leftarrow \\vec{q} + \\vec{q}\\;'$\n\\State For each $i\\in H$, set $\\sigma_{i,t} \\leftarrow \\sigma_{i,t} - \\sum_{j\\in C} \\sum_{t\\in T} q_{i,j,t}'$\n\\State For each $j\\in C$, set $\\delta_{j,t} \\leftarrow \\delta_{j,t} - \\sum_{i\\in H} \\sum_{s\\in T} q_{i,j,t}'$\n\\EndIf\n\\State $(M',\\vec{q}\\;') \\leftarrow SingleTemperatureInterval(\\vec{\\sigma}_t,\\vec{\\delta}_t)$\n{\\Comment{Eqs (\\ref{Eq:SingleMIPwithoutConservation_MaxBins} - \\ref{Eq:SingleMIPwithoutConservation_Integrality}) or Alg IG, Sec \\ref{Sec:SingleTemperatureIntervalProblem-Approximation}}}\n\\State $M \\leftarrow M\\cup M'$\n\\State $\\vec{q} \\leftarrow \\vec{q} + \\vec{q}\\;'$\n\\If {$t\\neq k$}\n\\For {$i\\in H$}\n\\State $\\vec{\\sigma}_{i,t+1} \\leftarrow \\vec{\\sigma}_{i,t+1} + (\\vec{\\sigma}_{i,t} - \\sum_jq_{i,j,t})$ (excess heat descending)\n\\EndFor\n\\EndIf\n\\EndFor\n\\end{algorithmic}\n\\label{Alg:Water_Filling}\n\\end{algorithm}\n\n\n\n\\begin{theorem}\n\\label{Thm:WaterFillingRatio}\nAlgorithms WFG and WFM are $\\Omega(k)$-approximate.\n\\end{theorem}\n\\begin{omitted_proof}\nSee \\ref{App:Water_Filling}.\n\\end{omitted_proof}\n\n\n\n\\section{Greedy Packing Heuristics}\n\\label{sec:greedy_packing}\n\nThis section proposes greedy heuristics motivated by the packing nature of the minimum number of matches problem.\nEach greedy packing heuristic starts from an infeasible solution with zero heat transferred between the streams and iterates towards feasibility by greedily selecting matches.\nThe two main ingredients of such a heuristic are: (i) a match selection policy and (ii) a heat exchange policy for transferring heat via the matches.\nSection \\ref{Subsection:MonotonicGreedyHeuristics} observes 
that a greedy heuristic has a poor worst-case performance if heat residual capacities are not considered.\nSections \\ref{Subsection:Largest_Heat_Match} - \\ref{Subsection:SS} define formally the greedy heuristics: (i) Largest Heat Match First, (ii) Largest Fraction Match First, and (iii) Smallest Stream First. \n{Figure \\ref{Figure:Greedy_Packing_Ingredients} shows the main components of greedy packing heuristics.}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics{greedy_packing_ingredients.eps}\n\\caption{\nGreedy packing heuristics select matches iteratively one by one. \nThe main components of greedy packing heuristics are (i) a heat exchange policy, and (ii) a match selection policy.\nGreedy packing heuristics apply these policies with respect to all unmatched stream pairs, in each iteration.\nOptions for the heat exchange policy include \\emph{dynamic heat exchange}, which solves the Section \\ref{Sec:MaxHeat_MultipleIntervals} maximum heat LP, and \\emph{static heat exchange}, which uses the Section \\ref{Sec:MaxHeat_2Streams} greedy algorithm.\nOnce the heat exchange policy has been applied for every unmatched pair of streams, a match selection policy chooses the new match, e.g.\\\n(i) with the largest heat (LHM), (ii) with the largest fraction (LFM), or (iii) of the shortest stream (SS).}\n\\label{Figure:Greedy_Packing_Ingredients}\n\\end{center}\n\\end{figure}\n\n\\subsection{A Pathological Example and Heat Residual Capacities}\n\\label{Subsection:MonotonicGreedyHeuristics}\n\nA greedy match selection heuristic is efficient if it performs a small number of iterations and chooses matches exchanging large heat load in each iteration.\nOur greedy heuristics perform large moves towards feasibility by choosing good matches in terms of: (i) heat and (ii) stream fraction. \nAn efficient greedy heuristic should also be monotonic in the sense that every chosen match achieves a strictly positive increase on the covered instance size.\n\nThe Figure \\ref{Fig:non_monotonic_heuristic} example shows a pathological behavior of greedy non-monotonic heuristics.\nThe instance consists of 3 hot streams, 3 cold streams and 3 temperature intervals.\nHot stream $i\\in H$ has heat supply $\\sigma_{i,s}=1$ for $s=i$ and no supply in any other temperature interval.\nCold stream $j\\in C$ has heat demand $\\delta_{j,t}=1$ for $t=j$ and no demand in any other temperature interval.\nConsider the heuristic which selects a match that may exchange the maximum amount of heat in each iteration.\nThe matches $(h_1,c_2)$ and $(h_2,c_3)$ consist the initial selections.\nIn the subsequent iteration, no match increases the heat that can be feasibly exchanged between the streams and the heuristic chooses unnecessary matches.\n\n\\begin{figure}[t]\n\n\\begin{center}\n\\includegraphics{non_monotonic_heuristic.eps}\n\\caption{A bad example of a non monotonic heuristic. 
If a heuristic begins by matching $h_1$ with $c_2$ and $h_2$ with $c_3$, then many unnecessary matches might be required to end up with a feasible solution.}\n\\label{Fig:non_monotonic_heuristic}\n\\end{center}\n\n\\end{figure}\n\nA sufficient condition enforcing strictly monotonic behavior and avoiding the above pathology, is for each algorithm iteration to satisfy the heat residual capacities.\nAs depicted in Figure \\ref{Fig:greedy_decomposition}, a greedy heuristic maintains a set $M$ of selected matches together with a decomposition of the original instance $I$ into two instances $I^A$ and $I^B$.\nIf $I=(H,C,T,\\vec{\\sigma},\\vec{\\delta})$, then it holds that $I^A=(H,C,T,\\vec{\\sigma}^A,\\vec{\\delta}^A)$ and $I^B=(H,C,T,\\vec{\\sigma}^B,\\vec{\\delta}^B)$, where $\\mathbf{\\sigma}=\\vec{\\sigma}^A+\\vec{\\sigma}^B$ and $\\vec{\\delta}=\\vec{\\delta}^A+\\vec{\\delta}^B$. \nThe set $M$ corresponds to a feasible solution for $I^A$ and the instance $I^B$ remains to be solved.\nIn particular, $I^A$ is obtained by computing a maximal amount of heat exchanged by using the matches in $M$ and $I^B$ is the remaining part of $I$.\nInitially, $I^A$ is empty and $I^B$ is exactly the original instance $I$.\nA selection of a match increases the total heat exchanged in $I^A$ and reduces it in $I^B$.\n\\ref{App:Greedy_Packing} observes that a greedy heuristic is monotonic if $I^B$ is feasible in each iteration. \nFurthermore, $I^B$ is feasible if and only if $I^A$ satisfies the heat residual capacities $R_u = \\sum_{i\\in H}\\sum_{s=1}^u\\sigma_{i,s} - \\sum_{j\\in C}\\sum_{t=1}^u\\delta_{j,t}$, for $u\\in T$.\n\n\\begin{figure}[t]\n\n\\begin{center}\n\\includegraphics{greedy_packing_decomposition.eps}\n\\vspace{-20pt}\n\\caption{Decomposition of a greedy packing heuristic. The problem instance $I$ is the union of the instance $I_A$ already solved by the heuristic and the instance $I_B$ that remains to be solved.}\n\\label{Fig:greedy_decomposition}\n\\end{center}\n\n\\end{figure}\n\n\\subsection{Largest Heat Match First}\n\\label{Subsection:Largest_Heat_Match}\n\nOur Largest Heat Match First heuristics arise from the idea that the matches should individually carry large amounts of heat in a near optimal solution.\nSuppose that $Q_v$ is the maximum heat that may be transferred between the streams using only a number $v$ of matches.\nThen, minimizing the number of matches is expressed as $\\min\\{v:Q_v\\geq \\sum_{i=1}^nh_i\\}$.\nThis observation motivates the greedy packing heuristic which selects matches iteratively until it ends up with a feasible set $M$ of matches exchanging $\\sum_{i=1}^nh_i$ units of heat.\nIn each iteration, the heuristic chooses a match maximizing the additional heat exchanged. 
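The selection rule just described can be prototyped around any procedure returning the maximum heat that is feasibly exchanged by a given set of matches, e.g.\ the LP of Section \ref{Sec:MaxHeat_MultipleIntervals}; the Python fragment below is an illustrative sketch (the oracle \texttt{max\_heat} and all other names are ours), and Algorithm \ref{Alg:LHM-LP} below instantiates this loop with the LP-based procedure $MHLP$.
\begin{verbatim}
# Sketch of the greedy packing loop: repeatedly add the candidate match
# whose inclusion maximizes the heat that can be feasibly exchanged.
# max_heat(matches) is assumed to return that maximum heat, e.g. by
# solving the maximum heat LP restricted to the given matches.
def greedy_packing(hot, cold, total_heat, max_heat, tol=1e-6):
    matches = set()
    while max_heat(matches) < total_heat - tol:
        candidates = [(i, j) for i in hot for j in cold
                      if (i, j) not in matches]
        best = max(candidates, key=lambda ij: max_heat(matches | {ij}))
        matches.add(best)
    return matches
\end{verbatim}
Each oracle call solves one LP, so the loop performs $O(n^2m^2)$ LP solves in the worst case, consistent with the running time reported in Table \ref{Table:Heuristics_Performance_Guarantees}.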
\nOur two variants of largest heat matches heuristics are: (i) LP-based Largest Heat Match (LHM-LP) and (ii) Greedy Largest Heat Match (LHM).\n\n\nHeuristic LHM-LP uses the $MHLP(M)$ (LP-based Maximum Heat) procedure to compute the maximum heat that can be transferred between the streams using only the matches in the set $M$.\nThis procedure is repeated $O(nm)$ times in each iteration, once for every candidate match, and solves an LP incorporating the proposed heat residual capacities.\nAlgorithm \\ref{Alg:LHM-LP} is an LHM-LP heuristic using the LP in Section \\ref{Sec:MaxHeat_MultipleIntervals}.\nThe algorithm maintains a set $M$ of chosen matches and selects a new match $(i',j')$ to maximize $MHLP(M\\cup(i',j'))$.\n\n\\begin{algorithm}[t] \\nonumber\n\\caption{Largest Heat Match First LP-based (LHM-LP)}\n\\begin{algorithmic}[1]\n\\State $M\\leftarrow\\emptyset$\n\\State $r\\leftarrow \\sum_{i\\in H}h_i$\n\\While {$r>0$}\n\\State $(i',j')\\leftarrow\\arg\\max_{(i,j)\\in H\\times C\\setminus M} \\{MHLP(M\\cup\\{(i,j)\\})\\}$\n{\\Comment{Eqs (\\ref{Eq:MaxHeatLP_Objective} - \\ref{Eq:MaxHeatLP_Positiveness}), Sec \\ref{Sec:MaxHeat_MultipleIntervals}}}\n\\State $M\\leftarrow M\\cup \\{(i',j')\\}$\n\\State $r\\leftarrow\\sum_{i\\in H}h_i-MHLP(M)$\n\\EndWhile\n\\State Return $M$\n\\end{algorithmic}\n\\label{Alg:LHM-LP}\n\\end{algorithm}\n\n\\begin{theorem}\n\\label{Thm:GreedyPackingRatio}\nAlgorithm LHM-LP is $O(\\log n + \\log \\frac{h_{\\max}}{\\epsilon})$-approximate, where $\\epsilon$ is the required precision.\n\\end{theorem}\n\\begin{omitted_proof}\nSee \\ref{App:Greedy_Packing}.\n\\end{omitted_proof}\n\n\\medskip\n\nLHM-LP heuristic is polynomial-time in the worst case.\nThe $i$-th iteration solves $nm-i+1$ LP instances which sums to solving a total of $\\sum_{i=1}^{nm}(nm-i+1)=O(n^2m^2)$ LP instances in the worst case.\nHowever, for large instances, the algorithm is time consuming because of this iterative LP solving.\nSo, we also propose an alternative, time-efficient greedy approach. \nThe new heuristic version builds a solution by selecting matches and deciding the heat exchanges, without modifying them in subsequent iterations.\n\nThe new approach for implementing the heuristic, that we call LHM, requires the $MHG(\\vec{\\sigma},\\vec{\\delta},i,j)$ procedure. 
\nGiven an instance $(\\vec{\\sigma},\\vec{\\delta})$ of the problem, it computes the maximum heat that can be feasibly exchanged between hot stream $i\\in H$ and cold stream $j\\in C$, as defined in Section \\ref{Sec:MaxHeat_2Streams}.\nThe procedure also computes a corresponding value $q_{i,s,j,t}$ of heat exchanged between $i\\in H$ in temperature interval $s\\in T$ and $j\\in C$ in temperature interval $t\\in T$.\nLHM maintains a set $M$ of currently chosen matches together with their respective vector $\\vec{q}$ of heat exchanges.\nIn each iteration, it selects the match $(i',j')$ and heat exchanges $q'$ between $i'$ and $j'$ so that the value $MHG(\\vec{\\sigma},\\vec{\\delta},i',j')$ is maximum.\nAlgorithm \\ref{Alg:LHM} is a pseudocode of this heuristic.\n\n\n\\begin{algorithm}[t] \\nonumber\n\\caption{Largest Heat Match First Greedy (LHM)}\n\\begin{algorithmic}[1]\n\\State $M \\leftarrow \\emptyset$\n\\State $\\vec{q} \\leftarrow \\vec{0}$\n\\State $r \\leftarrow \\sum_{i\\in H}h_i$\n\\While {$r > 0$}\n\\State $(i', j', \\vec{q}\\;') \\leftarrow \\arg \\max_{(i,j)\\in H\\times C\\setminus M} \\{MHG(\\vec{\\sigma},\\vec{\\delta},i,j)\\}$\n{\\Comment{Algorithm MHG, Section \\ref{Sec:MaxHeat_2Streams}}}\n\\State $M \\leftarrow M \\cup \\{(i',j')\\}$\n\\State $\\vec{q} \\leftarrow \\vec{q} + \\vec{q}\\;'$\n\\State For each $s\\in T$, set $\\sigma_{i',s} \\leftarrow \\sigma_{i',s} - \\sum_{t\\in T} q_{i',s,j',t}'$\n\\State For each $t\\in T$, set $\\delta_{j',t} \\leftarrow \\delta_{j',t} - \\sum_{s\\in T} q_{i',s,j',t}'$\n\\State $r \\leftarrow r - \\sum_{s,t\\in T}q_{i',s,j',t}'$\n\\EndWhile\n\\State Return $M$\n\\end{algorithmic}\n\\label{Alg:LHM}\n\\end{algorithm}\n\n\n\\subsection{Largest Fraction Match First}\n\\label{Subsection:LFM}\n\nThe heuristic \\emph{Largest Fraction Match First} (LFM) exploits the bipartite nature of the problem by employing matches which exchange large fractions of the stream heats.\nConsider a feasible solution with a set $M$ of matches.\nEvery match $(i,j)\\in M$ covers a fraction $\\sum_{s,t\\in T}\\frac{q_{i,s,j,t}}{h_i}$ of hot stream $i\\in H$ and a fraction $\\sum_{s,t\\in T}\\frac{q_{i,s,j,t}}{c_j}$ of cold stream $j\\in C$.\nThe total covered fraction of all streams is equal to $\\sum_{(i,j)\\in M} \\sum_{s,t\\in T} \\left( \\frac{q_{i,s,j,t}}{h_i} + \\frac{q_{i,s,j,t}}{c_j}\\right)=n+m$.\nSuppose that $F_v$ is the maximum amount of total stream fraction that can be covered using no more than $v$ matches.\nThen, minimizing the number of matches is expressed as $\\min\\{v:F_v\\geq n+m\\}$.\nBased on this observation, the main idea of LFM heuristic is to construct iteratively a feasible set of matches, by selecting the match covering the largest fraction of streams, in each iteration.\nThat is, LFM prioritizes proportional matches in a way that high heat hot streams are matched with high heat cold streams and low heat hot streams with low heat cold streams. 
\nIn this sense, it generalizes the idea of Algorithm IG for the single temperature interval problem (see Section \\ref{sec:single_temperature_interval}), according to which it is beneficial to match streams of (roughly) equal heat.\n\nAn alternative that would be similar to LHM-LP is an\nLFM heuristic \nwith an $MFLP(M)$ (LP-based Maximum Fraction) procedure computing the maximum fraction of streams that can be covered using only a given set $M$ of matches.\nLike the LHM-LP heuristic, this procedure would be based on solving an LP (see \\ref{App:Greedy_Packing}), except that the objective function maximizes the total stream fraction.\nThe LFM heuristic can be also modified to attain more efficient running times using Algorithm $MHG$, as defined in Section \\ref{Sec:MaxHeat_2Streams}. \nIn each iteration, the heuristic selects the match $(i,j)$ with the highest value $\\frac{U_{i,j}'}{h_i}+\\frac{U_{i,j}'}{c_j}$, where $U_{i,j}'$ is the maximum heat that can be feasibly exchanged between $i$ and $j$ in the remaining instance.\n\n\n\n\\subsection{Smallest Stream Heuristic}\n\\label{Subsection:SS}\n\nSubsequently, we propose \\emph{Smallest Stream First} (SS) heuristic based on greedy match selection, which also incorporates stream priorities so that a stream is involved in a small number of matches.\nLet $\\alpha_i$ and $\\beta_j$ be the number of matches of hot stream $i\\in H$ and cold stream $j\\in C$, respectively.\nMinimizing the number of matches problem is expressed as $\\min\\{\\sum_{i\\in H}\\alpha_i\\}$, or equivalently $\\min\\{\\sum_{j\\in C}\\beta_j\\}$.\nBased on this observation, we investigate heuristics that specify a certain order of the hot streams and match them one by one, using individually a small number of matches.\nSuch a heuristic requires: (i) a stream ordering strategy and (ii) a match selection strategy.\nTo reduce the number of matches of small hot streams, heuristic SS uses the order $h_1\\leq h_2\\leq \\ldots\\leq h_n$. \n\nIn each iteration, the next stream is matched with a low number of cold streams using a greedy match selection strategy; we use greedy LHM heuristic.\nObserve that SS heuristic is more efficient in terms of running time compared to the other greedy packing heuristics, because it solves a subproblem with only one hot stream in each iteration. \nAlgorithm \\ref{Alg:ShortestStream} is a pseudocode of SS heuristic. 
Note that other variants of ordered stream heuristics may be obtained in a similar way.
The heuristic uses the $MHG$ algorithm in Section \ref{Sec:MaxHeat_2Streams}.

\begin{algorithm}[t] \nonumber
\caption{Smallest Stream First (SS)}
\begin{algorithmic}[1]
\State Sort the hot streams in non-decreasing order of their heat loads, i.e.\ $h_1\leq h_2\leq\ldots\leq h_n$.
\State $M \leftarrow \emptyset$
\State $\vec{q}\leftarrow \vec{0}$
\For {$i\in H$}
\State $r\leftarrow h_i$
\While {$r>0$}
\State $(j', \vec{q}\;') \leftarrow \arg \max_{j\in C} \{MHG(\vec{\sigma},\vec{\delta},i,j)\}$
{\Comment{Algorithm MHG, Section \ref{Sec:MaxHeat_2Streams}}}
\State $M\leftarrow M\cup \{(i,j')\}$
\State $\vec{q} \leftarrow \vec{q} + \vec{q}\;'$
\State For each $s\in T$, set $\sigma_{i,s} \leftarrow \sigma_{i,s} - \sum_{t\in T} q_{i,s,j',t}'$
\State For each $t\in T$, set $\delta_{j',t} \leftarrow \delta_{j',t} - \sum_{s\in T} q_{i,s,j',t}'$
\State $r \leftarrow r - \sum_{s,t\in T}q_{i,s,j',t}'$
\EndWhile
\EndFor
\State Return $M$
\end{algorithmic}
\label{Alg:ShortestStream}
\end{algorithm}



\section{Numerical Results}
\label{sec:results}

This section evaluates the proposed heuristics on four test sets.
Section \ref{Section:Benchmark_Instances} provides information on system specifications and benchmark instances.
Section \ref{Section:Exact_Experiments} presents computational results of exact methods and shows that commercial, state-of-the-art approaches have difficulty solving moderately-sized instances to global optimality.
Section \ref{Section:Heuristic_Experiments} evaluates the heuristic methods experimentally and compares the obtained results with those reported by \citet{furman:2004}.
All result tables are provided in \ref{App:Experimental_Results}.
{\cite{source_code} provide test cases and source code for the paper's computational experiments.}

\subsection{System Specification and Benchmark Instances}
\label{Section:Benchmark_Instances}

All computations are run on an Intel Core i7-4790 CPU 3.60GHz with 15.6 GB RAM running 64-bit Ubuntu 14.04.
CPLEX 12.6.3 and Gurobi 6.5.2 solve the minimum number of matches problem exactly.
The mathematical optimization models and heuristics are implemented in Python 2.7.6 and Pyomo 4.4.1 \citep{hart:2011, hart:2012}.

We use problem instances from two existing test sets \citep{furman:2004, chen:2015}. {We also generate two collections of larger test cases. The smaller of the two sets uses work of \citet{grossman:2017}. The larger of the two sets was created using our own random generation method.}
An instance of general heat exchanger network design consists of streams and utilities with inlet, outlet temperatures, flow rate heat capacities and other parameters.
\ref{App:Minimum_Utility_Cost} shows how a minimum number of matches instance arises from the original instance of general heat exchanger network design.

The \emph{\cite{furman:2000} test set} consists of test cases from the engineering literature.
Table \ref{Table:Problem_Sizes} reports bibliographic information on the origin of these test cases.
We manually digitize this data set and make it publicly available for the first time \citep{source_code}.
Table \ref{Table:Problem_Sizes} lists the 26 problem instance names and information on their sizes.
The total number of streams varies from 6 to 38 and the number of temperature intervals from 5 to 32.
Table \ref{Table:Problem_Sizes} also lists the number of binary and continuous variables as well as the number of constraints in the transshipment MILP formulation.

The \emph{\cite{minlp,chen:2015} test set} consists of 10 problem instances.
These instances are classified into two categories depending on whether they consist of balanced or unbalanced streams.
Test cases with balanced streams have flowrate heat capacities in the same order of magnitude,
while test cases with unbalanced streams have dissimilar flowrate heat capacities spanning several orders of magnitude.
The sizes of these instances range from 10 to 42 streams and from 12 to 35 temperature intervals.
Table \ref{Table:Problem_Sizes} reports more information on the size of each test case.

\emph{The \cite{grossman:2017} test set} is generated randomly.
The inlet, outlet temperatures of these instances are fixed while the values of flowrate heat capacities are generated randomly with fixed seeds.
This test set contains 12 moderately challenging problems (see Table \ref{Table:Problem_Sizes}) with a classification into balanced and unbalanced instances, similarly to the \cite{minlp,chen:2015} test set.
The smallest problem involves 27 streams and 23 temperature intervals while the largest one consists of 43 streams and 37 temperature intervals.

{\emph{The Large Scale test set} is generated randomly. These instances have 80 hot streams, 80 cold streams, 1 hot utility and 1 cold utility.
For each hot stream $i\in HS$, the inlet temperature $T_{\text{in},i}^{HS}$ is chosen uniformly at random in the interval $(30,400]$.
Then, the outlet temperature $T_{\text{out},i}^{HS}$ is selected uniformly at random in the interval $[30, T_{\text{in},i}^{HS})$.
Analogously, for each cold stream $j\in CS$, the outlet temperature $T_{\text{out},j}^{CS}$ is chosen uniformly at random in the interval $(20,400]$.
Next, the inlet temperature $T_{\text{in},j}^{CS}$ is chosen uniformly at random in the interval $[20,T_{\text{out},j}^{CS})$.
The flow rate heat capacities $FCp_i$ and $FCp_j$ of hot stream $i$ and cold stream $j$ are chosen as floating point numbers with two decimal digits in the interval $[0,15]$.
The hot utility has inlet temperature $T_{\text{in}}^{HU}=500$, outlet temperature $T_{\text{out}}^{HU}=499$, and cost $\kappa^{HU}=80$.
The cold utility has inlet temperature $T_{\text{in}}^{CU}=20$, outlet temperature $T_{\text{out}}^{CU}=21$, and cost $\kappa^{CU}=20$.
The minimum heat recovery approach temperature is $\Delta T_{\min}=10$.}


\subsection{Exact Methods}
\label{Section:Exact_Experiments}

We evaluate exact methods using state-of-the-art commercial approaches.
For each problem instance, CPLEX and Gurobi solve the Section \ref{sec:preliminaries} transportation and transshipment models.
Based on the difficulty of each test set, we set a time limit for each solver run as follows: (i) 1800 seconds for the \cite{furman:2000} test set, (ii) 7200 seconds for the \cite{minlp,chen:2015} test set, and (iii) 14400 seconds for the \cite{grossman:2017} {and large scale} test sets.
\nIn each solver run, we set absolute gap 0.99, relative gap $4\\%$, and maximum number of threads 1.\n\nTable \\ref{Table:Exact_Methods} reports the best found objective value, CPU time and relative gap, for each solver run.\nObserve that state-of-the-art approaches cannot, in general, solve moderately-sized problems with 30-40 streams to global optimality.\nFor example, none of the test cases in the \\cite{grossman:2017} {or large scale} test sets is solved to global optimality within the specified time limit. \nTable \\ref{Table:Comparisons} contains the results reported by \\citet{furman:2004} using CPLEX 7.0 with 7 hour time limit. \nCPLEX 7.0 fails to solve 4 instances to global optimality.\nInterestingly, CPLEX 12.6.3 still cannot solve 3 of these 4 instances with a 1.5 hour timeout.\n\nTheoretically, the transshipment MILP is better than the transportation MILP because the former has asymptotically fewer variables.\nThis observation is validated experimentally with the exception of very few instances, e.g.\\ \\texttt{balanced10}, in which the transportation model computes a better solution within the time limit.\nCPLEX and Gurobi are comparable and neither dominates the other.\nInstances with balanced streams are harder to solve, which highlights the difficulty introduced by symmetry, see \\cite{kouyialis:2016}. \n{\nThe preceding numerical analysis refers to the extended transportation MILP.\nTable \\ref{Table:Transportation_Models_Comparison} compares solver performance to the reduced transportation MILP, i.e.\\ a formulation removing redundant variables $q_{i,s,j,t}$ with $s> t$ and Equations (\\ref{TransportationMIP_Eq:ThermoConstraint}). Note that modern versions of CPLEX and Gurobi show effectively no difference between the two formulations.\n}\n \n\\subsection{Heuristic Methods} \n\\label{Section:Heuristic_Experiments}\n\n{\nWe implement the proposed heuristics using Python and develop the LP models with Pyomo \\citep{hart:2011,hart:2012}.\nWe use CPLEX 12.6.3 with default settings to solve all LP models within the heuristic methods. \\cite{source_code} make the source code available. The following discussion covers the 48 problems with 43 streams or fewer. Section \\ref{Section:Large_Scale} discusses the 3 examples with 160 streams each.}\n\nThe difficulty of solving the minimum number of matches problem to global optimality motivates the design of heuristic methods and approximation algorithms with proven performance guarantees.\nTables \\ref{Table:Heuristic_Upper_Bounds} and \\ref{Table:Heuristic_CPU_Times} contain the computed objective value and CPU times, respectively, of the heuristics for all test cases. \nFor the challenging \\cite{minlp,chen:2015} and \\cite{grossman:2017} test sets, heuristic LHM-LP always produces the best solution.\nThe LHM-LP running time is significantly higher compared to all heuristics due to the iterative LP solving, despite the fact that it is guaranteed to be polynomial in the worst case.\nAlternatively, heuristic SS produces the second best heuristic result with very efficient running times in the \\cite{minlp,chen:2015} and \\cite{grossman:2017} test sets. 
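\n\nFigures \\ref{Fig:Boxplot_Performance_Ratios} and \\ref{Fig:Boxplot_CPU_Times} below aggregate these results per test case. As a minimal illustration of the normalization used there, consider the following Python fragment; it is a sketch only, not the paper's evaluation code, and the dictionaries hold hypothetical placeholder values rather than the entries of Tables \\ref{Table:Heuristic_Upper_Bounds} and \\ref{Table:Heuristic_CPU_Times}.\n\\begin{verbatim}\nimport math\n\n# Hypothetical placeholder values for one test case (not the paper's data):\n# objective[h] = number of matches, seconds[h] = CPU time of heuristic h.\nobjective = {'FLPR': 16, 'WFG': 15, 'SS': 14, 'LHM-LP': 13}\nseconds = {'FLPR': 0.4, 'WFG': 0.6, 'SS': 2.0, 'LHM-LP': 40.0}\nbest_known = 13                  # e.g. the CPLEX transshipment MILP value\nfastest = min(seconds.values())  # minimum CPU time on this test case\n\nfor h in sorted(objective):\n    ratio = objective[h] / float(best_known)     # performance ratio >= 1\n    log_time = math.log10(seconds[h] / fastest)  # CPU time, log10 scale\n    print(h, ratio, log_time)\n\\end{verbatim}\n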
\nFigure \\ref{Fig:Boxplot_Performance_Ratios} depicts the performance ratio of the proposed heuristics using a box and whisker plot, where the computed objective value is normalized with the one found by CPLEX for the transshipment MILP.\nFigure \\ref{Fig:Boxplot_CPU_Times} shows a box and whisker plot of the CPU times of all heuristics in $\\log_{10}$ scale normalized by the minimum CPU time for each test case.\nFigure \\ref{Fig:Line_Chart} shows a line chart verifying that our greedy packing approach produces better solutions than the relaxation rounding and water filling ones. \n\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics[scale=0.6]{performance_ratios_boxplot.eps}\n\\end{center}\n\\vspace{-20pt}\n\\caption{Box and whisker diagram of {48} heuristic performance ratios, i.e.\\ computed solution \/ best known solution {for the problems with 43 streams or fewer.}}\n\\label{Fig:Boxplot_Performance_Ratios}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics[scale=0.6]{elapsed_times_boxplot.eps}\n\\end{center}\n\\vspace{-20pt}\n\\caption{Box and whisker diagram of {48} CPU times ($\\log_{10}$ scale) normalized by the minimum CPU time for each test case.}\n\\label{Fig:Boxplot_CPU_Times}\n\\end{figure}\n\n\\begin{figure}[!ht] \n\\begin{center}\n\\includegraphics[scale=0.6]{heuristic_comparison_line_chart.eps}\n\\end{center}\n\\vspace{-20pt}\n\\caption{Line chart comparing the performance ratio, i.e.\\ computed solution \/ best known solution, of the best computed result by heuristic methods: relaxation rounding, water filling, and greedy packing. {This graph applies to the 48 problems with 43 streams or fewer.}}\n\\label{Fig:Line_Chart}\n\\end{figure}\n\nTable \\ref{Table:Comparisons} contains the heuristic results reported by \\citet{furman:2004} and the ones obtained with our improved version of the FLPR, LRR, and WFG heuristics of \\cite{furman:2004}.\nOur versions of FLPR, LRR, and WFG perform better for the \\cite{furman:2004} test set because of our new Algorithm MHG for tightening the big-M parameters.\nFor example, out of the 26 instances, our version of FLPR performs strictly better than the \\cite{furman:2004} version 20 times and worse only once (\\texttt{10sp1}). To further explore the effect of the big-M parameter, Table \\ref{Table:BigM_Comparisons} shows how different computations for the big-M parameter change the FLPR and LRR performance. Table \\ref{Table:BigM_Comparisons} also demonstrates the importance of the big-M parameter on the transportation MILP fractional relaxation quality. \n\nIn particular, Table \\ref{Table:BigM_Comparisons} compares the three big-M computation methods discussed in Section \\ref{Sec:MaxHeat_2Streams}: (i) the trivial bounds, (ii) the \\citet{gundersen:1997} method, and (iii) our greedy Algorithm MHG.\nOur greedy maximum heat algorithm dominates the other approaches for computing the big-M parameters. Algorithm MHG also outperforms the other two big-M computation methods by finding smaller feasible solutions via both Fractional LP Rounding and Lagrangian Relaxation Rounding. In the 48 test cases, Algorithm MHG produces the best FLPR and LRR feasible solutions in 46 and 43 test cases, respectively. Algorithm MHG is strictly best for 33 FLPR and 32 LRR test cases. 
Finally, Algorithm MHG achieves the tightest fractional MILP relaxation for all test instances.\n\nFigure \\ref{Fig:Boxplot_Performance_Ratios} and Table \\ref{Table:Heuristic_Upper_Bounds} show that our\nnew CRR heuristic is competitive with the other relaxation rounding heuristics, performing as well or better than FLPR or LRR in 19 of the 48 test cases and strictly outperforming both FLPR and LRR in 8 test cases.\nAlthough CRR solves a sequence of MILPs, Figure \\ref{Fig:Boxplot_CPU_Times} and Table \\ref{Table:Heuristic_CPU_Times} show that its running time is efficient compared to the other relaxation rounding heuristics.\n\nOur water filling heuristics are equivalent to or better than \\citet{furman:2004} for 25 of their 26 test set instances (all except \\texttt{7sp2}). \nIn particular, our Algorithm WFG is strictly better than their WFG in 18 of 26 instances and is worse in just one. This improvement stems from the new 1.5-approximation algorithm for the single temperature interval problem (see Section \\ref{Sec:SingleTemperatureIntervalProblem-Approximation}).\nThe novel Algorithm WFM is competitive with Algorithm WFG and produces equivalent or better feasible solutions for 37 of the 48 test cases. \nIn particular, WFM has a better performance ratio than WFG (see Figure \\ref{Fig:Boxplot_Performance_Ratios}) and WFM is strictly better than WFG in all but 1 of the \\citet{grossman:2017} instances.\nThe strength of WFM highlights the importance of our new MILP formulation in Eqs.\\ (\\ref{Eq:SingleMIP_MaxBins})-(\\ref{Eq:SingleMIP_Integrality}).\nAt each iteration, WFM solves an MILP without big-M constraints and therefore has a running time in the same order of magnitude as its greedy counterpart WFG (see Figure \\ref{Fig:Boxplot_CPU_Times}).\n\nIn summary, our heuristics obtained via the relaxation rounding and water filling methods improve the corresponding ones proposed by \\citet{furman:2004}.\nFurthermore, greedy packing heuristics achieve even better results in more than $90\\%$ of the test cases. \n\n{\n\n\\subsection{Larger Scale Instances}\n\\label{Section:Large_Scale}\n\nAlthough CPLEX and Gurobi do not converge to global optimality for many of the \n\\cite{furman:2000}, \\cite{minlp,chen:2015}, and \\cite{grossman:2017} instances,\nthe solvers produce the best heuristic solutions in all test cases.\nBut the literature instances are only moderately sized. We expect that the heuristic performance improves relative to the exact approaches as the problem sizes increase.\nTowards a more complete numerical analysis, we randomly generate 3 larger scale instances with 160 streams each.\n\nFor larger problems, the running time may be important to a design engineer \\citep{hindmarsh:1983}.\nWe apply the least time consuming heuristic of each type for solving the larger scale instances, i.e.\\ apply relaxation rounding heuristic FLPR, water filling heuristic WFG, and greedy packing heuristic SS.\nWe also solve the transshipment model using CPLEX 12.6.3 with a 4h timeout. The results are in Table \\ref{Table:Large_Scale_Results}.\n\nFor these instances, greedy packing SS computes a better solution than the relaxation rounding FLPR heuristic or the water filling WFG heuristic, but SS has larger running time.\nIn instance \\texttt{large-scale1}, greedy packing SS computes 218, a better solution than the CPLEX value 219. 
Moreover, the CPLEX heuristic spent the first 1hr of computation time at solution 257 (18\\% worse than the solution SS obtains in 10 minutes) and the next 2hr of computation time at solution 235 (8\\% worse than the solution SS obtains in 10 minutes). Any design engineer wishing to interact with the results would be frustrated by these times.\n\nIn instance \\texttt{large-scale2}, CPLEX computes a slightly better solution (239) than the SS heuristic (242).\nBut the good CPLEX solution is computed slightly before the 4h timeout. For more than 3.5hr, the best CPLEX heuristic is 273 (13\\% worse than the solution SS obtains in 10 minutes).\nFinally, in instance \\texttt{large-scale0}, CPLEX computes a significantly better solution (175) than the SS heuristic (233).\nBut CPLEX computes the good solution after 2h and the incumbent is similar to the greedy packing SS solution for the first 2 hours.\nThese findings demonstrate that greedy packing approaches are particularly useful when transitioning to larger scale instances.\n\nNote that we could additionally study approaches to improve the heuristic performance of CPLEX, e.g.\\ by changing CPLEX parameters or using a parallel version of CPLEX. But the point of this paper is to develop a deep understanding of a very important problem that consistently arises in process systems engineering \\citep{floudas:2012}.\n}\n\n\\section{Discussion of Manuscript Contributions}\n\\label{sec:discussion}\n\nThis section reflects on this paper's contributions and situates the work with respect to existing literature. We begin in Section \\ref{sec:single_temperature_interval} by designing efficient heuristics for the minimum number of matches problem with the special case of a single temperature interval.\nInitially, we show that the 2 performance guarantee by \\citet{furman:2004} is tight.\nUsing graph theoretic properties, we propose a new MILP formulation for the single temperature interval problem which does not contain any big-M constraints. We also develop an improved, tight, greedy 1.5-approximation algorithm which prioritizes stream matches with equal heat loads. Apart from the its independent interest, solving the single temperature interval problem is a major ingredient of water filling heuristics.\n\n\nThe multiple temperature interval problem requires big-M parameters. We reduce these parameters in Section \\ref{sec:max_heat} by\ncomputing the maximum amount of heat transfer with match restrictions.\nInitially, we present a greedy algorithm for exchanging the maximum amount of heat between two streams.\nThis algorithm computes tighter big-M parameters than \\citet{gundersen:1997}.\nWe also propose LP-based ways for computing the maximum exchanged heat using only a subset of the available matches.\nMaximum heat computations are fundamental ingredients of our heuristic methods and detect the overall problem feasibility. This paper emphasizes how tighter big-M parameters improve heuristics with performance guarantees, but notice that improving the big-M parameters will also tend to improve exact methods.\n\nSection \\ref{sec:relaxation_rounding} further investigates the \\emph{relaxation rounding heuristics} of \\cite{furman:2004}. 
\n\\citet{furman:2004} propose a heuristic for the minimum number of matches problem based on rounding the LP relaxation of the transportation MILP formulation\n(\\emph{Fractional LP Rounding (FLPR)}).\nInitially, we formulate the LP relaxation as a minimum cost flow problem showing that it can be solved with network flow techniques which are more efficient than generic linear programming.\nWe derive a negative performance guarantee showing that FLPR has poor performance in the worst case.\nWe also prove a new positive performance guarantee for FLPR indicating that its worst-case performance may be improved with tighter big-M parameters.\nExperimental evaluation shows that the performance of FLPR improves with our tighter algorithm for computing big-M parameters.\nMotivated by the method of Lagrangian Relaxation, \\citet{furman:2004} proposed an approach generalizing FLPR by approximating the cost of the heat transferred via each match.\nWe revisit possible policies for approximating the cost of each match.\nInterestingly, we show that this approach can be used as a generic method for potentially improving a solution of the minimum number of matches problem.\nHeuristic \\emph{Lagrangian Relaxation Rounding (LRR)} aims to improve the solution of FLPR in this way.\nFinally, we propose a new heuristic, namely \\emph{Covering Relaxation Rounding (CRR)}, that successively solves instances of a new covering relaxation which also requires big-M parameters.\n\nSection \\ref{sec:water_filling} defines \\emph{water filling heuristics} as a class of heuristics solving the minimum number of matches problem in a top-down manner, i.e.\\ from highest to lowest temperature interval.\n\\citet{cerdaetwesterberg:1983} and \\citet{furman:2004} have solution methods based on water filling.\nWe improve these heuristics by developing novel, efficient ways for solving the single temperature interval problem.\nFor example, heuristics \\emph{MILP-based Water Filling (WFM)} and \\emph{Greedy Water Filling (WFG)} incorporate the new MILP formulation (Eqs.\\ \\ref{Eq:SingleMIP_MaxBins}-\\ref{Eq:SingleMIP_Integrality}) and greedy Algorithm IG, respectively.\nWith appropriate LP, we further improve water filling heuristics by reusing in each iteration matches selected in previous iterations.\n\\citet{furman:2004} showed a performance guarantee scaling with the number of temperature intervals. \nWe show that this performance guarantee is asymptotically tight for water filling heuristics. 
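\n\nAs an illustration of this top-down template, the following Python fragment is a minimal sketch only and is not taken from the paper. The subroutine \\texttt{solve\\_interval} stands for any single temperature interval solver (e.g.\\ greedy Algorithm IG or the MILP of Eqs.\\ (\\ref{Eq:SingleMIP_MaxBins})-(\\ref{Eq:SingleMIP_Integrality})), and we assume the usual transshipment-style bookkeeping in which hot stream heat left unmatched in one interval cascades down to the next, colder interval, while matches opened in earlier intervals may be reused at no additional cost.\n\\begin{verbatim}\ndef water_filling(intervals, hot_heat, cold_demand, solve_interval):\n    # intervals are ordered from the highest to the lowest temperature;\n    # hot_heat[i][t] and cold_demand[j][t] give the heat of hot stream i\n    # and the demand of cold stream j in temperature interval t.\n    # solve_interval returns (new matches, heat used per hot stream).\n    matches = set()                        # reused across intervals\n    residual = {i: 0.0 for i in hot_heat}  # heat cascading downward\n    for t in intervals:\n        supply = {i: hot_heat[i].get(t, 0.0) + residual[i]\n                  for i in hot_heat}\n        demand = {j: cold_demand[j].get(t, 0.0) for j in cold_demand}\n        new_matches, used = solve_interval(supply, demand, matches)\n        matches |= new_matches\n        for i in supply:                   # leftover hot heat moves down\n            residual[i] = supply[i] - used.get(i, 0.0)\n    return matches\n\\end{verbatim}\n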
\n\nSection \\ref{sec:greedy_packing} develops a new \\emph{greedy packing approach} for designing efficient heuristics for the minimum number of matches problem, motivated by the packing nature of the problem.\nGreedy packing requires feasibility conditions which may be interpreted as a decomposition method analogous to pinch point decomposition, see\n\\cite{hindmarsh:1983}.\nSimilarly to \\cite{cerdaetwesterberg:1983}, stream ordering affects the efficiency of greedy packing heuristics.\nBased on the feasibility conditions, the LP in Eqs.\\ (\\ref{Eq:MaxHeatLP_Objective})-(\\ref{Eq:MaxHeatLP_Positiveness}) selects matches carrying a large amount of heat and incurring low unitary cost for exchanging heat.\nHeuristic \\emph{LP-based Largest Heat Match (LHM-LP)} selects matches greedily by solving instances of this LP.\nUsing a standard packing argument, we obtain a new logarithmic performance guarantee.\nLHM-LP has a polynomial worst-case running time but is experimentally time-consuming due to the repeated LP solving.\nWe propose three other greedy packing heuristic variants which improve the running time at the price of solution quality.\nThese other variants are based on different time-efficient strategies for selecting good matches.\nHeuristic \\emph{Largest Heat Match (LHM)} selects matches exchanging high heat in a pairwise manner.\nHeuristic \\emph{Largest Fraction Match (LFM)} is inspired by the idea of our greedy approximation algorithm for the single temperature interval problem which prioritizes roughly equal matches.\nHeuristic \\emph{Smallest Stream First (SS)} is inspired by the idea of the tick-off heuristic \\citep{hindmarsh:1983} and produces matches on a stream-to-stream basis, where a hot stream is ticked off by being matched with a small number of cold streams. \n\nFinally, Section \\ref{sec:results} \nshows numerically that our new way of computing the big-M parameters, our improved algorithms for the single temperature interval problem, and the other enhancements improve the performance of relaxation rounding and water-filling heuristics.\nThe numerical results also show that our novel greedy packing heuristics {typically find better feasible solutions than} relaxation rounding and water-filling ones. {But the tradeoff is that the relaxation rounding and water filling algorithms achieve very efficient run times.}\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\nIn his PhD thesis, Professor Floudas showed that, given a solution to the minimum number of matches problem, he could solve a nonlinear optimization problem designing effective heat recovery networks. But the sequential HENS method cannot guarantee that promising minimum number of matches solutions will be optimal (or even feasible!) for Professor Floudas' nonlinear optimization problem. Since the nonlinear optimization problem is relatively easy to solve, we propose generating many good candidate solutions to the minimum number of matches problem. This manuscript develops nine heuristics with performance guarantees for the minimum number of matches problem. Each of the nine heuristics is either novel or provably the best in its class. Beyond approximation algorithms, our work has interesting implications for solving the minimum number of matches problem exactly, e.g.\\ the analysis of reducing big-M parameters {or the possibility of quickly generating good primal feasible solutions}. 
\\\\[-4pt]\n\n\\noindent\n\\textbf{Acknowledgments} \\\\[2pt]\n\\noindent\nWe gratefully acknowledge support from EPSRC EP\/P008739\/1, an EPSRC DTP to G.K., and a Royal Academy of Engineering Research Fellowship to R.M.\n\n\n\\bibliographystyle{ijocv081}\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe success of the manufacturing industry based on the revolutionary innovations in various technologies in hardware and software \\cite{8666558}. The conception of the Internet of Things (IoT) has already become a part of modern factories for automatizing of large-scale processes~\\cite{Microsoft2019}. \n\nManufacturing sector brings many different stakeholders together. This industry has mostly common fraud problems comparing to any sector. The significant rate (56\\%) of respondents reported the issues related to vendor or\nprocurement fraud~\\cite{kroll}. For some companies developing know-how products, intellectual property infringement could be fatal for the whole business. Not surprisingly, these issues make companies look for new ways to avoid them~\\cite{deloitte2014}.\n\nBlockchain technology, which stays behind Bitcoin, is nowadays a hype technology\\cite{8704309}. Its development could be revolutionized like the appearance of the Internet at the beginning of 90th's. Blockchain provides a secure mechanism for achieving an independent trusted between business partners, excluding intermediaries. \nThe rest of the paper is organized as follows. Section \\ref{sec:research:method} discusses information about research methods. In Section \\ref{sec:results} illustrates the results of searching and screening for relevant papers for this research, while section \\ref{sec:analysis} answers the research questions using these results. Section \\ref{sec:conlcusion} concludes the paper.\n\n\n\n\n\\subsection{Overview of Blockchain}\\label{sec:background}\nBlockchain is a new and successful combination of existing technologies. The technical concept of Blockchain describes how data are distributed saved across the user systems using cryptography algorithm.\nBlockchain organized logically centralized and organizationally decentralized. As a result, the Blockchain represents a distributed database that maintains an ever-expanding list of decentralized transaction, events, or records in hash form. The data is maintained in a distributed register (Distributed Ledger Technology) and all participants have a copy of the entire register. In this distributed approach, the data is grouped into individual blocks that are linked together to ensure the chronological order and immutable data integrity of the entire data set~\\cite{8715005}. \n\nThe innovation of Blockchain technology is that existing approaches have been successfully put together. Following approaches are the main components of Blockchain:\n\\begin{itemize}\n\\item \\textbf{Peer-to-peer network:} In this peer-to-peer network (P2P network), communication runs without a central point. All participants or nodes are connected to each other and communicate with each other at the same level. Since the nodes are equal to each other, or can use services and make them available at the same time, there is no classic client-server structure\\cite{8704309}.\n\\item \\textbf{Cryptography:} With the help of methods from cryptography, the distributed register is protected against manipulation and abuse. 
This enables traceability, data integrity and authentication of the data source\\cite{LI2018133}.\n\\item \\textbf{Consensus mechanism:} The consensus mechanism defines the criteria that provide evidence of permission to create new blocks (mining). To reach a consensus, various consensus algorithms have been developed\\cite{Innerbichler:2018:FBA:3211933.3211953}.\n\\end{itemize}\n\nDue to the different uses of Blockchain technology, there are different variations on how the Blockchain is constructed.\nIn a \\textit{public Blockchain}, there are no restrictions on who can see the public data and validate the transactions\\cite{LI2018133}. Furthermore, the Blockchain data may be encrypted and understandable only to the authorized user. In the case of a \\textit{private Blockchain}, the completed consortium of those who access the Blockchain and are allowed to validate transactions is predefined~\\cite{8560153}. In a \\textit{permissionless Blockchain}, there are no restrictions on the identity of the participants who are allowed to conduct the transactions. In a \\textit{permissioned Blockchain}, the user group that can execute the transactions and generate new blocks is predefined and known~\\cite{8678753}.\n\\subsection{Smart Contracts}\nIn addition to a decentralized database system for transactions, Blockchain technology is also a platform for the automation of processes, regulations and organizational principles.\nSmart contracts are new, intelligent forms of contracts. These are to be understood as peer-to-peer applications, which are distributed with the underlying Blockchain technology, such as Bitcoin, Ethereum or Hyperledger. Hyperledger also uses the term \"chaincode\"~\\cite{Wang:2019:SSB:3302505.3310086}.\n\nThe smart contracts enable the determination of the conditions that lead to certain decisions by the data provided; the automatic processing of contracts; the permanent and real-time monitoring of contract terms and the automatic enforcement of the rights of contractors. The smart contracts provide not only the information about the Blockchain network and the distribution of data, but also the business logic~\\cite{Baumung2018135}.\n\nThese P2P applications can be programmed, stored in the Blockchain and executed in there. Therefore, they have the same advantages as the Blockchain itself. In Bitcoin, the smart contracts are created in the form of scripts. In order to simplify the development of smart contracts for Ethereum platform, a new specific programming language \"Solidity\" has been developed~\\cite{Baumung2019456}. Compared to the scripting language in Bitcoin, where many program constructs, such as loops, are missing, Solidity is a higher-level, more abstract language that resembles the JavaScript language. The chain codes are written in various high-level languages, such as Java or Go, and during execution, access is made to the data stored in the Blockchain, or to read out the existing information and store new ones. These scripts are stored in the Blockchain at a particular address. This address is determined when the integration of smart contract into the Blockchain is decided. When an event prescribed in the contract has occurred, a transaction is sent to this address. The distributed virtual machine executes the code of the script using the data sent with the transaction.\n\\subsection{State of Research on Blockchain}\nYli-Huumo et. 
al~\\cite{Yli_Huumo_2016} aim to understand the current research state of Blockchain technology, its technical challenges and limitations. This systematic review illustrates an sharply increasing number of publications each year beginning from 2012. It shows a growing interest in Blockchain technology. \nSwan~\\cite{swan2015blockchain} identified seven technical challenges of Blockchain for the future. Modern Blockchain implementations have to ensure security, throughput, size and bandwidth, performance, usability, data integrity and scalability. Being public Blockchains the \\textit{throughput} in the Bitcoin and Ethereum networks is from 10tps to 100tps (transactions per second). For example, VISA Payment System proceeds 2,000tps. But a permissioned Blockchain Hyperledger Fabric overcomes these challenges~\\cite{Alharby2017}. In order to achieve adequate security in the Bitcoin network validation of transaction takes roughly 10 minutes \\textit{(latency)}. In February 2016 the size of Bitcoin register was 50,000 MB. Current \\textit{size} of Bitcoin is 1 MB. This is the serious limitation of \\textit{bandwidths} for Blockchain, which should be solved to increase amount of transaction handled by register. The 51\\% attach on Blockchain network is still significant security issue. If the majority of the network will be controlled by hackers, it will be possible to manipulate Blockchain. Issue of \\textit{waster resources} is caused by Proof-of-Work effort in the mining process mainly in Bitcoin, which required huge amounts of energy. But there are other consensus algorithms, like Proof-Of-Stake, which are energy friendly. \\textit{Usability} problems resulting from difficulty of using Bitcoin API~\\cite{Yli_Huumo_2016}. \\textit{Versioning, hard forks, multiple chains} refers to a small chain with a small number of nodes, where a possibility of 51\\% attach is higher. Another issue become possible when chains are split for administrative or versioning purposes.\n\\begin{figure*}[htbp]\n\\centerline{\\includegraphics[width=\\textwidth,height=4.5cm]{Mapping-Process.png}}\n\\caption{The Systematic Mapping Process~\\cite{Petersen:2008:SMS:2227115.2227123}.}\n\\label{fig:001}\n\\end{figure*}\n\\section{Research method}\\label{sec:research:method}\nA systematic mapping study was selected to identify and classify primary studies to provide a systematic overview on the topics of industrial manufacturing and Blockchain. Petersen et al.~\\cite{Petersen:2008:SMS:2227115.2227123} presented the guidelines for systematic mapping study, which we followed to conduct this study.\n\nThe process for the systematic mapping study falls into a five-phase process as depicted in Figure \\ref{fig:001}: (1) Define research questions; (2) Search for primary studies; (3) Identify inclusion and exclusion criteria and screen primary studies based on these criteria; (4) Classify primary studies; (5) Mapping the data.\n\\subsection{Research questions}\nThe first step in systematic mapping study is the definition of the research questions. The purpose of this research is to classify current research and identify pertinent themes which relate directly to Blockchain technologies in manufacturing. This leads to the following research questions (RQs): \\\\\n\n\\textbf{RQ1:} \\textit{What are the problems between stakeholders in the manufacturing\nindustry? 
} \\\\\nRationale: The intention of this question is to identify current gaps in relationships of interested parties in manufacturing industry.\\\\\n\n\\textbf{RQ2:} \\textit{What are the data to secure in manufacturing process?} \\\\\nRationale: This question aims to identify the data, which should be insured during the whole process.\\\\\n\n\\textbf{RQ3:} \\textit{What are the use cases of Blockchain technology for manufacturing industry?} \\\\\nRationale: The intention of this question is to figure out the possible usage of Blockchain in manufacturing area.\\\\\n\n\\textbf{RQ4:} \\textit{What Blockchain frameworks are suitable for the scenario \"Assignment of production orders to an external manufacturer\"?} \\\\\nRationale: With this question, we investigate the existing Blockchain solutions to find appropriate framework. \\\\\n\\subsection{Search Strategy}\nThe search strategy is key to ensure a good starting point for the identification of studies and ultimately for the actual outcome of the study. An extensive and broad set of primary studies was needed to answer the research questions. The most popular academic databases in the domain of software engineering were selected to be used in this systematic mapping to search for potentially relevant papers:\n\\begin{itemize}\n\\item ACM Digital Library\\footnote{http:\/\/dl.acm.org}\n\\item IEEE Xplore Digital Library\\footnote{http:\/\/ieeexplore.ieee.org}\n\\item Scopus\\footnote{https:\/\/www.scopus.com}\n\\item Science Direct\\footnote{https:\/\/www.sciencedirect.com}\n\\end{itemize}\n\nFinding possibly relevant publications to answer the research questions requires creating an appropriate search clause. We chose the terms \"Blockchain\" and \"Manufacturing industry\" for this study as the main search keyword core, it focuses on Blockchain technology, manufacturing, production processes.\nThe final search strings were extended with alternative synonyms for main keywords. The term \"distributed ledger\" is a basic technology for \"blockchain\". We considered papers mentioning distributed manufacturing, manufacturing execution, programmable logic controller\" and included them into the search clause.\nRegarding the keywords for the search, after some exploratory searches using different combination of keywords, the researchers jointly established the final string to be used in the search for papers in the databases. Search terms with similar meanings were grouped in the same group and combined using the OR logical operator. 
To perform automatic searches in the selected digital libraries, the AND logical operator were used between combined terms of different groups, depicted in Table \\ref{tab:01}:\n\\begin{table}[htbp]\n\\centering\n\\label{tab:01}\n\\caption{Searches in databases.}\n\\begin{tabular}{|l|l|}\n\\hline\n\\rowcolor[HTML]{EFEFEF} \n\\textbf{Database} & \\textbf{Search}\\\\ \n\\hline\nACM & \\begin{tabular}[c]{@{}l@{}}(+(\"Blockchain\" \"Distributed Ledger\") +\\\\(\"Manufacturing Execution\" \"Programmable Logic \\\\ Controller\" \"Manufacturing\" \"Distributed \\\\manufacturing\")\\end{tabular} \\\\ \\hline\nIEEE & \\begin{tabular}[c]{@{}l@{}}('Blockchain' OR 'Distributed Ledger') AND \\\\ ('Industrial Control' OR 'Manufacturing Execution' \\\\ OR 'Programmable Logic Controller' OR \\\\'Manufacturing industry' OR 'Distributed \\\\manufacturing')\\end{tabular} \\\\ \\hline\nScopus & \\begin{tabular}[c]{@{}l@{}}ALL ((\"Blockchain\" OR \"Distributed Ledger\") AND\\\\ (\"Industrial Control\" OR \"Manufacturing Execution\" \\\\ OR \"Programmable Logic Controller\") OR \\\\ (\"Manufacturing industry\" OR \"Distributed\\\\ manufacturing\"))\\end{tabular} \\\\ \\hline\nScience Direct & \\begin{tabular}[c]{@{}l@{}}(\"Blockchain\" OR \"Distributed Ledger\") AND \\\\ (\"Industrial Control\" OR \"Manufacturing Execution\" \\\\ OR \"Programmable Logic Controller\" OR \\\\ \"Manufacturing\" OR \"Distributed manufacturing\")\\end{tabular} \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\nThe search string was applied to title, abstract, full-text and keywords, and limited to journal papers written in English. The search was performed at the beginning of 2016. A total of 258 papers were retrieved from the different databases, which are focusing on research regarding information technologies, as on 21st May 2019, 10.30 am (CEST) and displayed in Table \\ref{tab:studies}. \n\\begin{table}[htbp]\n\\label{tab:studies}\n\\caption{Number of studies per database.}\n\\centering\n\\begin{tabular}{|l|l|l|l|l|}\n\\hline\n\\rowcolor[HTML]{EFEFEF} \n\\textbf{Database} & \\textbf{\\begin{tabular}[c]{@{}l@{}}Search\\\\ results\\end{tabular}} & \\textbf{\\begin{tabular}[c]{@{}l@{}}Final\\\\ results\\end{tabular}} & \\textbf{\\begin{tabular}[c]{@{}l@{}}\\% final papers\\\\ from search results\\end{tabular}} \\\\ \\hline\nACM & 24 & 4 & 1.6 \\% \\\\ \\hline\nIEEE & 61 & 12 & 4.7 \\% \\\\ \\hline\nScopus & 159 & 6 & 2.3 \\% \\\\ \\hline\nScience Direct & 14 & 6 & 2.3 \\% \\\\ \\hline\n\\textbf{Total} & \\textbf{258} & \\textbf{28} & \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\n\\subsection{Screening for relevant papers}\nThis step is to exclude all research papers that are irrelevant to the research questions. To accomplish this step, We followed the screening approach described by Yli-Huumo et al.~\\cite{Yli_Huumo_2016} and defined inclusion\/exclusion criteria.\\\\\n\n\\textbf{Inclusion criteria} \n\nTo be considered for inclusion in the study, the research being evaluated had to originate from an academic source, such as a journal or conference, and clearly show its contribution was focused on applying Blockchain in manufacturing. Studies are accessible electronically. The paper title includes \"Blockchain\".\\\\\n\n\\textbf{Exclusion criteria}\n\nFor those publications that passed the inclusion criteria, two filters were applied to reduce the publications to only those that were deemed to be directly aligned with the focus of the study. 
These filters are described as follows:\n\\begin{itemize}\n\\item Study focuses on financial sector\n\\item Study contains \"Bitcoin\" or \"Cryptocurrency\"\n\\end{itemize}\nIt was decided to apply exclusion criteria on titles, keywords, abstracts and full-text. The final inclusion or exclusion could be decided based on the reading of publication's full text. Thus, this phase only eliminates publications clearly not within this study's scope and publications failing on formal requirements (such as duplicated papers). In detail, we identified 230 publications (about 90\\% of all papers) outside this study's scope: 20 duplicates, 156 papers accordingly with inclusion\/exclusion criteria and other based on abstract reading.\nThe search process is summarized in Figure \\ref{fig:sum}.\\\\\n\n\\begin{tikzpicture}\n[node distance=.6cm,\nstart chain=going below,]\n \\node (step0) [punktchain ] {Applying search on databases};\n \\begin{scope}[start branch=venstre,\n every join\/.style={->, thick, shorten <=1pt}, ]\n \\node[punktchain, on chain=going right, join=by {->}]\n (result0){Results = 258};\n \\end{scope}\n \\node (step1) [punktchain ] {Remove duplicates};\n \\begin{scope}[start branch=venstre,\n every join\/.style={->, thick, shorten <=1pt}, ]\n \\node[punktchain, on chain=going right, join=by {->}]\n (result1){Results = 238};\n \\end{scope}\n \\node[punktchain] (step2) {Apply inclusion\/exclusion};\n \\begin{scope}[start branch=hoejre1,]\n every join\/.style={->, thick, shorten <=1pt}, ]\n \\node [punktchain, on chain=going right, join=by {->}] \n (result2) {Results = 94};\n \\end{scope}\n \\node[punktchain] (step3) {Exclusion based on abstract};\n \\begin{scope}[start branch=hoejre1,]\n every join\/.style={->, thick, shorten <=1pt}, ]\n \\node [punktchain, on chain=going right, join=by {->}] \n (result3) {Results = 36};\n \\end{scope}\n \\node[punktchain] (step4) {Exclusion based on full reading};\n \\begin{scope}[start branch=hoejre1,]\n every join\/.style={->, thick, shorten <=1pt}, ]\n \\node [punktchain, on chain=going right, join=by {->}]\n (result4) {Results = 32};\n \\end{scope}\n \\node[punktchain] (step5) {Final primary papers};\n \\begin{scope}[start branch=hoejre1,]\n every join\/.style={->, thick, shorten <=1pt}, ]\n \\node [punktchain, on chain=going right, join=by {->}]\n (result5) {Results = 28};\n \\end{scope}\n \\draw[|-,-|,->, thick,] (result0.south) |-+(0,-1em)-| (step1.north);\n \\draw[|-,-|,->, thick,] (result1.south) |-+(0,-1em)-| (step2.north);\n \\draw[|-,-|,->, thick,] (result2.south) |-+(0,-1em)-| (step3.north);\n \\draw[|-,-|,->, thick,] (result3.south) |-+(0,-1em)-| (step4.north);\n \\draw[|-,-|,->, thick,] (result4.south) |-+(0,-1em)-| (step5.north);\n \\end{tikzpicture}\n \\begin{figure}[htbp]\n\\caption{Search and Selection Process of the Papers.}\n\\label{fig:sum}\n\\end{figure}\n\\subsection{Key-wording using Abstracts}\nYli-Huumo et al.~\\cite{Yli_Huumo_2016} proposed the key-wording technique to classify all relevant research papers. We first went thought the abstract of each paper to identify the most important keywords and the key contribution. This purpose used to classify papers under different categories. In some cases where it was difficult to classify a paper using\nits abstract, we looked through its introduction and conclusion. 
After classifying all papers, we read the papers and made changes to the classification when necessary.\n\\subsection{Data extraction and mapping process}\nThe bubble plot was designed to collect all the information needed to address the research questions of this mapping study. During the process of data extraction, we recorded major terms to Excel, which helped me to generate categories and proceed quality analysis. These data items embrace the main goals of papers. We used both qualitative and quantitative synthesis methods.\n\\\\\n\nThe final search process results 28 papers performed at the beginning of 2017. Their distribution over databases and percentage from all search results are illustrated in Table \\ref{tab:studies}.\\\\\n\\section{Study Results}\\label{sec:results}\nThis section demonstrates the findings from those the data was extracted regarding use cases of Blockchain in manufacturing industry, research type and attributes of Blockchain technology.\n\\begin{figure}[htbp]\n\\centerline{\\includegraphics[width=4.25cm,height=4.25cm]{Countries.png}}\n\\caption{Continent wise distribution of studies.}\n\\label{fig:country}\n\\end{figure} \n\n\\textbf{Top Countries and continents}\n\nGeographic distribution of the selected primary papers is shown in Fig. \\ref{fig:country}. The top continent was Europe with 14 studies being conducted there. Asia was second with 9 studies followed by America with 5 studies. China and Germany contributed towards 6 and 3 studies respectively. The rest of the countries had two or less papers published. It shows, that Blockchain technology has attracted attention worldwide. \\\\\n\n\\textbf{Year Wise Distribution}\n\nFigure \\ref{year} depicts the distribution of the papers from 2017 to 2019 year. Literature related to Blockchain in manufacturing industry has increased enormously in the last 2 years. Due to increasing papers of applying Blockchain in industrial context it expected to save this trend for next years. \\\\\n\\begin{figure}[htbp]\n\\centerline{\\includegraphics[width=6cm,height=4cm]{Year.png}}\n\\caption{Publication year of the selected primary papers.}\n\\label{year}\n\\end{figure} \n\n\\textbf{Classification of research.}\n\nAll of the publications in the study were classified by considering the following\ncriteria: (1) Use case of Blockchain, (2) Research facet and (3) Blockchain facet.\n\\subsection{Use case of Blockchain:}\nIn order to answer RQ3, we classified the publications in the study under five dimensions. These dimensions describe different use cases of Blockchain in manufacturing industry on the current state of research in the area. The use cases of Blockchain in industrial manufacturing are follows: \\\\\n\n\\textbf{Secure transfer of order data.} This use case describes how the production orders can be assigned to an external manufacturer and securely transmitted between different systems. It enables mutual interaction between the producer and the customer~\\cite{8250199}.\\\\\n\n\\textbf{Product data storing.} The data can be intercepted during transmission from the user's computer to the cloud systems. In these use cases the focus is on secured storing of products data in Blockchain.\\\\\n\n\\textbf{Supply chain, Process traceability.} Creating and distributing of goods can span over multiple locations, hundreds of stages etc. 
The use case aims to provide the ability to trace process in supply chain from procurement of raw materials to production~\\cite{8711819}.\\\\\n\n\\textbf{Prevention of fraud, Protection of Intellectual Property (IP).} The main point of this use case is to prove of products origin and intend for prevention of manipulation providing an indelible and traceable record of changes.\\\\\n\n\\textbf{Industrial IoT (IoT), Automation.} This use case illustrates how Blockchain can be used for integration with industrial IoT in automation context. \\\\\n\nFigure \\ref{trend} shows the amount of publications where each of use case above is described year wise. This figure illustrates, that the most researched topic is supply chain and process traceability (19 papers). Less researched (5 papers) is the scenario of how could the product data be stored in Blockchain. All use cases, except \"Secure transfer of order data\" show the trend of increasing interests. \n\\begin{figure}[htbp]\n\\centerline{\\includegraphics[width=9cm,height=5cm]{Trend.png}}\n\\caption{Publication year of the identified use cases.}\n\\label{trend}\n\\end{figure} \n\n\\subsection{Research Facet:}\nThis facet is inspired by~\\cite{Petersen:2008:SMS:2227115.2227123} classify publications according to the type of research they contribute. (1) Review paper summarizes the current state of understanding on a topic. (2) Conceptual paper addresses a question that cannot be answered simply by getting more factual information. (3) Solution proposal includes an illustration or example of a solution to a particular problem. (4) Implementation research provides a prototypical development of a solution. (5) Case Study provides an up-close, in-depth, and detailed examination of a subject of study.\n\n\\begin{figure*}[htbp]\n\\centerline{\\includegraphics[width=\\textwidth,height=11cm]{SM.png}}\n\\caption{Visualisation of a Systematic Map in the Form of a Bubble Plot}\n\\label{fig:007}\n\\end{figure*}\n\\subsection{Blockchain Facet:}\nTo answer the RQ4, this facet classified the publications along the Blockchain attributes like framework (Ethereum, Hyperledger or other frameworks), type of Blockchain (public. private, permissioned) and focus of research paper on consensus algorithm.\n\nThe results of mapping process are summarized in Figure \\ref{fig:007} in form of the bubble plot to show the frequencies. This visualization gives a quick overview of publications for each category.\n\n\\section{Analysis}\\label{sec:analysis}\nThis section discusses the study results and answers the research questions that we defined in Section \\ref{sec:background}.\\\\ \n\n\\textbf{RQ1: What are the problems between stakeholders in manufacturing industry?}\\\\\\\\\nManufacturing industry is facing security incidents due to the competition~\\cite{Wang:2019:SSB:3302505.3310086}, data sharing with third companies~\\cite{8678753}, operational inefficiencies, losses and costs~\\cite{8626103}. Issue of limited trust is one of the complications in the industry~\\cite{Geiger:2019:PTD:3297280.3297546,Innerbichler:2018:FBA:3211933.3211953,8704309}. In~\\cite{Pinheiro2019331} authors illustrate the dependency of industrial companies on Trusted-Third-Party (TTP). It caused by the closed source code of programs, used in manufacturing industry~\\cite{8678753}. Moreover it is necessary to differentiating between original part or counterfeit products~\\cite{Holland2018,BANERJEE201869}. 
The outsourcing of production orders~\\cite{Baumung2019456} leads to limited flexibility and considerable organizational effort.\n\\\\ \\\\\n\\textbf{RQ2: What are the data to secure in manufacturing process?} \\\\\\\\\nFor some businesses the data exchange is a key success factor, so we found several type of data that should be protected between stakeholders: \n\\begin{itemize}\n\\item Computer-Aided Design (CAD) file for design and technical documentation~\\cite{Holland2018},~\\cite{Papakostas2019}\n\\item Material specification~\\cite{Mondragon20181300,Papakostas2019}\n\\item Order details~\\cite{Baumung2019456} and product recipe~\\cite{WESTERKAMP2019}\n\\item Machine data~\\cite{Geiger:2019:PTD:3297280.3297546,ANGRISH20181180} (measurement data~\\cite{Wang:2019:SSB:3302505.3310086} and configuration\\cite{Mondragon20181300})\n\\item Process values~\\cite{Geiger:2019:PTD:3297280.3297546,MANDOLLA2019134} and process state~\\cite{LI2018133,8704309,8621042}\n\\end{itemize}\n\nSelection of data, which should be protected, depends on specific scenario and application area of Blockchain. \\\\ \n\\textbf{RQ3: What are the use cases of Blockchain technology for manufacturing industry?} \\\\ \\\\\nThere are various use cases of Blockchain technology in the industrial manufacturing. In this study we identified 5 papers (17\\% of all papers) describing use case \"Secure transfer of order data\" in form of solution proposal. All of this papers provide prototypical implementation. Only 4 papers (14\\% ) are relevant for storing of product data in distributed ledger. According to our findings, significant part of papers (around 43\\%) illustrate solution proposal for supply chain and process traceability and 9 of them (32\\%) demonstrate implementation examples. Fraud prevention and IP Protection are considered in 2 papers (around 7\\%) and both of these papers implement this use case. The last use case \"IoT, Automation\" are described in 5 papers and covers equally all types of research.\nThe application of Blockchain is not limited to the use cases above. \\\\ \\\\\n\\textbf{RQ4: What Blockchain frameworks are suitable for the scenario \"Assignment of production orders to an external manufacturer\"?} \\\\ \\\\\nThis scenario is implemented mostly on Ethereum Blockchain (in 4 of all papers)~\\cite{Baumung2018135,Baumung2019456,Papakostas2019,ANGRISH20181180}. In~\\cite{Baumung2019456} was illustrated an example of using Hyperledger Fabric and in~\\cite{LI2018133} was used Multichain. Public Blockchain was used in 2 papers~\\cite{Baumung2018135},~\\cite{Baumung2019456}, in 1 research work the authors used the private Blockchain~\\cite{Papakostas2019} and in 4 papers was chosen consortium or permissioned Blockchain~\\cite{LI2018133,Baumung2018135,Baumung2019456,ANGRISH20181180}. Some papers describe several types of Blockchain in the case use case. It means, that this use case is possible to implement based on different Blockchain networks and doesn't require a specific framework. \n\\section{Conclusion}\\label{sec:conlcusion}\nIn the coming years it is expected, that the manufacturing sector will benefit from the use of Blockchain technology. In order to identify opportunities for integration of Blockchain in industrial processes, this research was made in form of systematic mapping study. 
After conducting the SMS and analysing the literature, a total of 28 primary papers were extracted from 4 different scientific databases, published mainly in journals and conference proceedings and classified into different facets. We have covered the time period of 2017-2019 and have classified the papers under different dimensions. We grouped the identified application scenarios into five use cases, namely, secure transfer of order data, product data storing, supply chain and process traceability, fraud prevention and IP protection, and IoT and automation. \n\nIn this study we found that the majority of papers describe the case \"supply chain and process traceability\" as a solution proposal. There are considerably fewer findings regarding the assignment of production orders to an external manufacturer. This demonstrates the relative lack of research on this scenario, which calls for considerably more research effort. As our analysis shows, this use case can be implemented using different frameworks. For example, it could be implemented based on a permissioned Blockchain using Hyperledger Fabric\\footnote{\\url{www.hyperledger.org}}. However, the candidate frameworks still need to be evaluated in order to select the most suitable solution for this use case. \n\nThe results of this mapping study apply only to the selected research databases; they may help researchers to get an overview of the status of Blockchain in the manufacturing industry and to highlight the research gaps. \n\nOur future research aims to implement our own prototype of the use case \"Assignment of production orders to an external manufacturer\" to demonstrate the benefits of using Blockchain in factory automation. Additionally, we are going to extend the literature survey to include other databases such as SpringerLink, and we will apply snowballing to ensure that the search is as comprehensive as possible. \n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}