\\section{Introduction}\n\\label{sec1}\nAccording to Moore's law \\cite{ref31}, traditional computer architectures will reach their physical limits in the near future. Quantum computers \\cite{ref1, ref2, ref3, ref4, ref5, ref6, ref7,ref8, ref9, ref10, ref11, ref12, ref13, ref14, ref15, ref16, ref17, ref18, ref19, ref20, ref21, ref22,ref23} provide a tool to solve problems more efficiently than would ever be possible with traditional computers \\cite{ref1, ref2, ref3, ref4, ref5, ref6, ref7,ref8, ref9, ref10, ref11}. The power of quantum computing is based on the fundamentals of quantum mechanics. In a quantum computer, information is represented by quantum information, and information processing is achieved by quantum gates that realize quantum operations \\cite{ref1, ref2, ref3, ref4, ref5, ref6, ref7,ref8, ref9, ref10, ref11,p1,p2,p3}. These quantum operations are performed on the quantum states, which are then outputted and measured in a measurement phase. The measurement process is applied to each quantum state, where the quantum information conveyed by the quantum states is converted into classical bits. Quantum computers have been demonstrated in practice \\cite{ref1, ref2, ref3, ref4, ref5, ref6, ref8, ref9}, and several implementations are currently in progress \\cite{ref1, ref2, ref3, ref4, ref5, ref6, ref7,ref8, ref9, ref10, ref11, ref16, ref17, ref18, ref19}. 
\n\nIn the physical layer of a gate-model quantum computer, the device contains quantum gates, quantum ports (of quantum gates), and quantum wires for the quantum circuit\\footnote{The term ``quantum circuit'', in general, refers to software, not hardware; it is a description or prescription for what quantum operations should be applied when and does not refer to a physically implemented circuit analogous to a printed electronic circuit. In our setting, it refers to the hardware layer.}. In contrast to traditional automated circuit design \\cite{ref24, ref25, ref26, ref27, ref28, ref29, ref30}, a quantum system cannot participate in more than one quantum gate simultaneously. As a corollary, the quantum gates of a quantum circuit are applied in several rounds in the physical layer of the quantum circuit \\cite{ref1, ref2, ref3, ref4, ref5, ref6, ref7,ref8, ref9, ref10, ref11, ref16, ref17, ref18, ref19}.\n\nThe physical layout design and optimization of quantum circuits have different requirements with several open questions and currently represent an active area of study \\cite{ref1, ref2, ref3, ref4, ref5, ref6, ref7,ref8, ref9, ref10, ref11, ref16, ref17, ref18, ref19}. Assuming that the goal is to construct a reduced quantum circuit that can simulate the original system, the reduction process should cover the number of input quantum states, the gate operations of the quantum circuit, and the number of output measurements. Another important question is the maximization of the objective function associated with an arbitrary computational problem that is fed into the quantum computer. These parallel requirements must be satisfied simultaneously, which makes the optimization procedure difficult and an emerging issue in present and future quantum computer developments. 
\n\nIn the proposed QTAM method, the goal is to determine a topology for the quantum circuits of quantum computer architectures that can solve arbitrary computational problems such that the quantum circuit is minimized in the physical layer, and the objective function of an arbitrarily selected computational problem is maximized. The physical layer minimization covers the simultaneous minimization of the quantum circuit area (quantum circuit height and depth of the quantum gate structure, where the depth refers to the number of time steps required for the quantum operations making up the circuit to be run on quantum hardware), the total area of the quantum wires of the quantum circuit, and the required number of input quantum systems and output measurements, together with the maximization of the objective function. An important aim of the physical layout minimization is that the resulting quantum circuit should be identical to a high complexity reference quantum circuit (i.e., the reduced quantum circuit should be able to simulate a nonreduced quantum circuit). \n\nThe minimization of the total quantum wire length in the physical layout is also an objective in QTAM. It serves to improve the processing in the topology of the quantum circuit. However, besides the minimization of the physical layout of the quantum circuit, the quantum computer also has to solve difficult computational problems very efficiently (such as the maximization of an arbitrary combinatorial optimization objective function \\cite{ref16, ref17, ref18, ref19}). To achieve this goal in the QTAM method, we also define an objective function that provides the maximization of the objective functions of arbitrary computational problems. 
The optimization method can be further tuned by specific input constraints on the topology of the quantum circuit (paths in the quantum circuit, organization of quantum gates, required number of rounds of quantum gates, required number of measurement operators, Hamiltonian minimization, entanglement between quantum states, etc.) or other hardware restrictions of quantum computers, such as the well-known \\textit{no-cloning theorem} \\cite{ref22}. The various restrictions on quantum hardware, such as the number of rounds required to be integrated into the quantum gate structure, or entanglement generation between the quantum states, are included in the scheme. These constraints and design attributes can be handled in the scheme through the definition of arbitrary constraints on the topology of the quantum circuit, or by constraints on the computational paths. \n\nThe combinatorial objective function is measured in the computational basis, and an objective function value is determined from the measurement result to quantify the current state of the quantum computer. Quantum computers can be used for combinatorial optimization problems. These procedures aim to use the quantum computer to produce a quantum system that is dominated by computational basis states such that a particular objective function is maximized. \n\nRecent experimental realizations of quantum computers are qubit architectures \\cite{ref1, ref2, ref3, ref4, ref5, ref6, ref7,ref8, ref9, ref10, ref11, ref12, ref13, ref14, ref15, ref16, ref17, ref18, ref19}, and the current quantum hardware approaches focus on qubit systems (i.e., the dimension $d$ of the quantum system is two, $d=2$). However, while the qubit layout is directly inspired by ongoing experiments, the method is developed for arbitrary dimensions to make it applicable to future implementations. 
Motivated by these assumptions, we avoid the term `qubit' in our scheme when addressing the quantum states and instead use the general term `quantum state' throughout, which refers to an arbitrary dimensional quantum system. We also illustrate the results through superconducting quantum circuits \\cite{ref1, ref2, ref3, ref4, ref5}; however, the framework is general and flexible, allowing a realization for near-term gate-model quantum computer implementations.\n\nThe novel contributions of this paper are as follows:\n\\begin{itemize}\n\\item \\textit{We define a method for designing quantum circuits for gate-model quantum computers.} \n\\item \\textit{We conceive the QTAM algorithm, which provides a quantum circuit minimization on the physical layout (circuit depth and area), quantum wire length minimization, objective function maximization, and input size and measurement size minimization for quantum circuits.} \n\\item \\textit{We define a multilayer structure for quantum computations using the hardware restrictions on the topology of gate-model quantum computers.} \n\\end{itemize}\n\nThis paper is organized as follows. In \\sref{relw}, the related works are summarized. \\sref{sec2} proposes the system model. In \\sref{sec4}, the details of the optimization method are discussed, while \\sref{sec5} studies the performance of the model. Finally, \\sref{sec6} concludes the paper. Supplemental information is included in the Appendix.\n\n\\section{Related Works}\n\\label{relw}\nThe related works are summarized as follows. \n\nA strong theoretical background on the logical model of gate-model quantum computers can be found in \\cite{ref17,ref16,ref18}. In \\cite{ref7}, the model of a gate-model quantum neural network is defined.\n\nIn \\cite{refa1}, the authors defined a hierarchical approach to computer-aided design of quantum circuits. The proposed model was designed for the synthesis of the permutation class of quantum logic circuits. 
The method integrates evolutionary and genetic approaches to evolve an arbitrary quantum circuit specified by a target unitary matrix. Instead of circuit optimization, the work focuses on circuit synthesis.\n\nIn \\cite{refa2}, the authors propose a simulation of quantum circuits by low-rank stabilizer decompositions. The work focuses on the problem of simulating quantum circuits containing a few non-Clifford gates, and provides a theoretical description of the stabilizer rank. The authors also derived the simulation cost.\n\nA method for designing a T-count optimized quantum circuit for integer multiplication with $4n+1$ qubits was defined in \\cite{int}. The T-count \\cite{tc} measures the number of T-gates, and is relevant because the implementation cost of a T gate is high. The aim of T-count optimization is to reduce the number of T-gates without substantially increasing the number of qubits. The method was also applied to quantum circuit designs for integer division \\cite{int2}. The optimization takes into consideration both the T-count and the T-depth, since the T-depth is also an important performance measure for reducing the implementation costs. Another method, for the design of reversible floating-point divider units, was proposed in \\cite{div}.\n\nIn \\cite{logic}, a methodology for quantum logic gate construction was defined. The main purpose of the scheme was to construct fault-tolerant quantum logic gates with a simple technique. The method is based on quantum teleportation \\cite{tel}.\n\nA method for the synthesis of depth-optimal quantum circuits was defined in \\cite{depth}. The aim of the proposed algorithm is to compute the depth-optimal decompositions of logical operations via an application of the so-called meet-in-the-middle technique. 
The authors also applied their scheme to the factorization of some quantum logical operations into elementary gates in the Clifford+T set.\n\nA framework for the study of the compilation and description of fault-tolerant, high-level quantum circuits is proposed in \\cite{ft}. The authors defined a method to convert high-level quantum circuits consisting of commonly used gates into a form employing all decompositions and ancillary protocols needed for fault-tolerant error correction. The method also represents a useful tool for quantum hardware architectures with topological quantum codes.\n\nThe Quantum Approximate Optimization Algorithm (QAOA) is defined in \\cite{ref16}. The QAOA has been defined to evaluate approximate solutions for combinatorial optimization problems fed into the quantum computer. \n\nRelevant attributes of the QAOA algorithm are studied in \\cite{refa3}.\n\nIn \\cite{refa4}, the authors analyzed the performance of the QAOA algorithm on near-term gate-model quantum devices. \n\nThe implementation of QAOA with parallelizable gates is studied in \\cite{refa5}.\n\nIn \\cite{refa6}, the performance of QAOA is studied on different problems. The analysis covers the MaxCut combinatorial optimization problem, and the problem of quantum circuit optimization on a classical computer using automatic differentiation and stochastic gradient descent. The work also revealed that QAOA can exceed the performance of a classical polynomial-time algorithm (the Goemans-Williamson algorithm \\cite{refgw}) with modest circuit depth, and concluded that the performance of QAOA with fixed circuit depth is insensitive to problem size.\n\nIn \\cite{refa7}, the authors studied the problem of ultrafast state preparation via the QAOA with long-range interactions. The work provides an application for the QAOA in near-term gate-model quantum devices. 
As the authors concluded, the QAOA-based approach leads to an extremely efficient state preparation; for example, the method allows us to prepare Greenberger-Horne-Zeilinger (GHZ) states with $\\mathcal{O}\\left( 1 \\right)$ circuit depth. The results were also demonstrated by several other examples. \n\nAnother experimental approach for the implementation of qubit entanglement and parallel logic operations with a superconducting circuit was presented in \\cite{song}. In this work, the authors generated entangled GHZ states with up to 10 qubits connected to a bus resonator in a superconducting circuit. In the proposed implementation, the resonator-mediated qubit-qubit interactions are used to control the entanglement between the qubits and to operate on different pairs in parallel.\n\nA review of the noisy intermediate-scale quantum (NISQ) era can be found in \\cite{refpr}. \n\nThe subject of quantum computational supremacy is discussed in \\cite{refha, aar}.\n\nFor a survey on the attributes of quantum channels, see \\cite{ref11}; a survey on quantum computing technology is included in \\cite{refsur}.\n\n\\section{System Model}\n\\label{sec2}\nThe simultaneous physical-layer minimization and the maximization of the objective function are achieved by the Quantum Triple Annealing Minimization (QTAM) algorithm. The QTAM algorithm utilizes the framework of simulated annealing (SA) \\cite{ref24, ref25, ref26, ref27, ref28, ref29, ref30}, which is a stochastic point-to-point search method. \n\nThe procedure of the QTAM algorithm with the objective functions is depicted in \\fref{fig1}. The detailed descriptions of the methods and procedures are included in the next sections. \n\n \\begin{center}\n\\begin{figure}[h!]\n\\begin{center}\n\\includegraphics[angle = 0,width=1\\linewidth]{fig1.pdf}\n\\caption{The QTAM method for quantum computers. 
The quantum gate ($QG$) circuit computation model consists of an input array of $n$ quantum states (depicted by the green box), layers of quantum gates integrated into a quantum circuit (depicted by the purple box), and a measurement phase (depicted by the orange box). The quantum gates that act on the quantum states form a quantum circuit with a given circuit height and depth. The area of the quantum circuit is minimized by objective function $F_{{\\rm 1}} $, while the total quantum wire area of the quantum circuit is minimized by $F_{{\\rm 2}} $ ($F_{{\\rm 1}} \\wedge F_{{\\rm 2}} $ is referred to as the quantum circuit minimization). The result of the minimization is a quantum circuit of quantum gates with minimized quantum circuit area, minimized total quantum wire length, and a minimized total Hamiltonian operator. The maximization of a corresponding objective function of an arbitrarily selected computational problem for the quantum computer is achieved by $F_{{\\rm 3}} $ (referred to as the objective function maximization). 
Objective functions $F_{{\\rm 4}} $ and $F_{{\\rm 5}} $ are defined for the minimization of the number of quantum states (minimization of input size), and the total number of measurements (minimization of measurements).} \n \\label{fig1}\n \\end{center}\n\\end{figure}\n\\end{center}\n\n \n\\subsection{Computational Model}\n In an SA-based procedure, a current solution $s_{A} $ is moved to a neighbor $s_{B} $, which yields an acceptance probability \\cite{ref24, ref25, ref26, ref27, ref28, ref29, ref30} \n\\begin{equation} \\label{eq1} \n{\\Pr }\\left(f\\left(s_{A} \\right),f\\left(s_{B} \\right)\\right)=\\frac{{\\rm 1}}{{\\rm 1}+e^{\\left(\\frac{f\\left(s_{A} \\right)-f\\left(s_{B} \\right)}{Tf\\left(s_{A} \\right)} \\right)} } , \n\\end{equation} \n where $f\\left(s_{A} \\right)$ and $f\\left(s_{B} \\right)$ represent the relative performances of the current and neighbor solutions, while $T$ is a control parameter, $T\\left(t\\right)=T_{\\max } {\\rm exp}\\left(-R\\left(t\/k\\right)\\right)$, where $R$ is the temperature decreasing rate, $t$ is the iteration counter, $k$ is a scaling factor, and $T_{\\max } $ is an initial temperature.\n\nSince SA is a probabilistic procedure, it is important to minimize the acceptance probability of unfavorable solutions and to avoid getting stuck in a local minimum.\n\nWithout loss of generality, if $T$ is low, \\eqref{eq1} can be rewritten as a function of $f\\left(s_{A} \\right)$ and $f\\left(s_{B} \\right)$ as \n\\begin{equation} \\label{eq2} \n{\\Pr }\\left(f\\left(s_{A} \\right),f\\left(s_{B} \\right)\\right)=\\left\\{\\begin{split} {{\\rm 1,if\\text{ }}f\\left(s_{A} \\right)>f\\left(s_{B} \\right)} \\\\ {{\\rm 0,if\\text{ }}f\\left(s_{A} \\right)\\le f\\left(s_{B} \\right)} \\end{split}\\right. . 
\n\\end{equation} \n In the QTAM algorithm, we take into consideration that the objectives, constraints, and other functions of the method are, in general, characterized by different magnitude ranges \\cite{ref24, ref25, ref26, ref27, ref28, ref29, ref30}. To avoid issues arising from these differences, in the QTAM algorithm we define three annealing temperatures: $T_{f} \\left(t\\right)$ for objectives, $T_{g} \\left(t\\right)$ for constraints, and $T_{c} \\left(t\\right)$ for the probability distribution closeness (distance of the output distributions of the reference quantum circuit and the reduced quantum circuit).\n\nIn the QTAM algorithm, the acceptance probability of a new solution $s_{B} $ at a current solution $s_{A} $ is \n\\begin{equation} \\label{eq3} \n{\\Pr }\\left(s_{A} ,s_{B} \\right)=\\frac{{\\rm 1}}{{\\rm 1}+e^{\\tilde{d}\\left(f\\right)T_{f} \\left(t\\right)} e^{\\tilde{d}\\left(g\\right)T_{g} \\left(t\\right)} e^{\\tilde{d}\\left(c\\right)T_{c} \\left(t\\right)} } , \n\\end{equation} \n where $\\tilde{d}\\left(f\\right)$, $\\tilde{d}\\left(g\\right)$ and $\\tilde{d}\\left(c\\right)$ are the average values of objective, constraint, and distribution closeness domination, see Algorithm 1.\n\nThe aim of the QTAM algorithm is to minimize the cost function \n\\begin{equation} \\label{eq4} \n\\min f\\left({\\rm x}\\right)=\\alpha _{{\\rm 1}} F_{{\\rm 1}} \\left({\\rm x}\\right)+\\ldots +\\alpha _{N_{obj} } F_{N_{obj} } \\left({\\rm x}\\right)+F_{s} , \n\\end{equation} \n where ${\\rm x}$ is the vector of design variables, $\\alpha $ is the vector of weights, and $N_{obj} $ is the number of primary objectives. Other secondary objectives $F_{i} $ (aspect ratio of the quantum circuit, overlaps, total net length, etc.) are minimized simultaneously via the single-objective function $F_{s} $ in \\eqref{eq4} as \n\\begin{equation} \\label{eq5} \nF_{s} =\\sum _{i} \\alpha _{i} F_{i} \\left(x\\right). 
\n\\end{equation} \n\n\\subsection{Objective Functions}\nWe defined $N_{obj} =5$ objective functions for the QTAM algorithm. Objective functions $F_{{\\rm 1}} $ and $F_{{\\rm 2}} $ are defined for the minimization of the $QG$ quantum circuit in the physical layer. The aim of objective function $F_{{\\rm 1}} $ is the minimization of the $A_{QG} $ quantum circuit area of the $QG$ quantum gate structure, \n\\begin{equation} \\label{eq6} \nF_{{\\rm 1}} :\\min \\left(A_{QG} \\right)=\\min \\left(H'_{QG} \\cdot D'_{QG} \\right), \n\\end{equation} \n where $H'_{QG} $ is the optimal circuit height of $QG$, while $D'_{QG} $ is the optimal depth of $QG$.\n\nFocusing on superconducting quantum circuits \\cite{ref1, ref2, ref3, ref4, ref5}, the aim of $F_{{\\rm 2}} $ is the physical layout minimization of the $w_{QG} $ total quantum wire area of $QG$, as \n\\begin{equation} \\label{eq7} \nF_{{\\rm 2}} :w_{QG} =\\min \\sum _{k=1}^{h} \\left(\\sum _{i=1}^{p} \\sum _{j=1}^{q} \\ell _{ij} \\cdot \\delta _{ij} \\left(\\psi _{ij} \\right)\\right), \n\\end{equation} \n where $h$ is the number of nets of the $QG$ circuit, $p$ is the number of quantum ports of the $QG$ quantum circuit considered as sources of a condensate wave function amplitude \\cite{ref1, ref2, ref3, ref4, ref5}, $q$ is the number of quantum ports considered as sinks of a condensate wave function amplitude, $\\ell _{ij} $ is the length of the quantum wire $ij$, $\\delta _{ij} $ is the effective width of the quantum wire $ij$, and $\\psi _{ij} $ is the (root mean square) condensate wave function amplitude \\cite{ref1, ref2, ref3, ref4, ref5} associated with the quantum wire $ij$. 
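As a plain-Python illustration of the physical-layer objectives $F_{1}$ (the circuit area $A_{QG}=H'_{QG}\cdot D'_{QG}$) and $F_{2}$ (the total wire area), the sketch below evaluates both quantities for a toy layout; the gate height, depth, wire lengths, and effective widths used here are illustrative assumptions, not values from the scheme.

```python
# Toy evaluation of the physical-layer objectives F1 and F2.
# All numeric values below are illustrative assumptions.

def circuit_area(height, depth):
    """F1: quantum circuit area A_QG = H'_QG * D'_QG."""
    return height * depth

def wire_area(nets):
    """F2: total quantum-wire area, summed over nets.

    `nets` is a list of nets; each net is a list of
    (length l_ij, effective width delta_ij) pairs, where the
    effective width may depend on the wave-function amplitude psi_ij.
    """
    return sum(l * w for net in nets for (l, w) in net)

# A hypothetical circuit of height 4 and depth 3:
A = circuit_area(height=4, depth=3)                   # -> 12
nets = [[(2.0, 0.5), (1.0, 0.5)], [(3.0, 0.25)]]      # two toy nets
W = wire_area(nets)                                   # -> 2.25
```

In the QTAM setting these two values are the quantities that the annealing procedure drives down simultaneously; here they are simply computed for fixed inputs.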
\n\nObjective function $F_{{\\rm 3}} $ is defined for the maximization of the expected value of an objective function $C_{L} (\\vec{\\Phi })$ as \n\\begin{equation} \\label{eq8} \nF_{3} :\\max {C_{L} (\\vec{\\Phi })}=\\max {\\langle \\vec{\\Phi }|C|\\vec{\\Phi }\\rangle }, \n\\end{equation} \n where $C$ is an objective function, and $\\vec{\\Phi }$ is a collection of $L$ parameters \n\\begin{equation} \\label{eq9} \n\\vec{\\Phi }=\\Phi _{{\\rm 1}} ,\\ldots ,\\Phi _{L} \n\\end{equation} \n such that with $L$ unitary operations, state $|\\vec{\\Phi }\\rangle $ is evaluated as \n\\begin{equation} \\label{eq10} \n|\\vec{\\Phi }\\rangle =U_{L} \\left(\\Phi _{L} \\right)\\cdots U_{{\\rm 1}} \\left(\\Phi _{{\\rm 1}} \\right)\\left| \\varphi \\right\\rangle , \n\\end{equation} \n where $U_{i} $ is an $i$-th unitary that depends on a set of parameters $\\Phi _{i} $, while $\\left| \\varphi \\right\\rangle $ is an initial state. Thus, the goal of $F_{{\\rm 3}} $ is to determine the $L$ parameters of $\\vec{\\Phi }$ (see \\eqref{eq9}) such that $\\langle \\vec{\\Phi }|C|\\vec{\\Phi }\\rangle $ is maximized.\n\nObjective functions $F_{{\\rm 4}} $ and $F_{{\\rm 5}} $ are defined for the minimization of the number of input quantum states and the number of required measurements. The aim of objective function $F_{{\\rm 4}} $ is the minimization of the number of quantum systems on the input of the $QG$ circuit, \n\\begin{equation} \\label{eq11} \nF_{{\\rm 4}} :\\min \\left(n\\right). \n\\end{equation} \n The aim of objective function $F_{{\\rm 5}} $ is the minimization of the total number of measurements in the $M$ measurement block, \n\\begin{equation} \\label{eq12} \nF_{{\\rm 5}} :\\min \\left(m\\right)=\\min {\\left(N_{M} \\left|M\\right|\\right)}, \n\\end{equation} \n where $m=N_{M} \\left|M\\right|$, $N_{M} $ is the number of measurement rounds, and $\\left|M\\right|$ is the number of measurement gates in the $M$ measurement block. 
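For intuition on $F_{3}$: when the objective $C$ is diagonal in the computational basis, the expectation $\langle\vec{\Phi}|C|\vec{\Phi}\rangle$ reduces to the probability-weighted sum of the basis-state objective values. A minimal sketch, where the two-qubit amplitudes and the objective values assigned to the basis states are hypothetical examples:

```python
# <Phi|C|Phi> for an objective C that is diagonal in the computational
# basis: the expectation equals sum_z |amp(z)|^2 * C(z).
# The amplitudes and objective values below are illustrative assumptions.

def expected_objective(amplitudes, objective):
    """amplitudes: dict basis-string -> complex amplitude;
    objective:  dict basis-string -> objective value C(z)."""
    return sum(abs(a) ** 2 * objective[z] for z, a in amplitudes.items())

# Equal superposition over two of the four 2-qubit basis states:
amps = {"00": 2 ** -0.5, "11": 2 ** -0.5}
C = {"00": 0.0, "01": 1.0, "10": 1.0, "11": 2.0}
value = expected_objective(amps, C)  # -> 1.0
```

Maximizing $F_{3}$ then amounts to choosing the $L$ parameters of $\vec{\Phi}$ so that the state concentrates its probability on basis states with large $C(z)$.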
\n\n\\subsection{Constraint Violations}\n Optimization over several different objective functions results in different Pareto fronts \\cite{ref24, ref25, ref26, ref27} of placements of quantum gates in the physical layout. These Pareto fronts allow us to find feasible tradeoffs between the optimization objectives of the QTAM method. The optimization process includes diverse objective functions, constraints, and optimization criteria to improve the performance of the quantum circuit and to take into consideration the hardware restrictions of quantum computers. In the proposed QTAM algorithm, the constraints are enforced by the modification of the Pareto dominance \\cite{ref24, ref25, ref26, ref27} values by the different sums of constraint violation values. We defined three different constraint violation values.\n \n\\subsubsection{Distribution Closeness Dominance}\n In the QTAM algorithm, the Pareto dominance is first modified with the sum of distribution closeness violation values, denoted by $c_{s} \\left(\\cdot \\right)$. The aim of this iteration is to support the closeness of the output distribution of the reduced quantum circuit $QG$ to the output distribution of the reference quantum circuit $QG_{R} $.\n\nLet $P_{QG_{R} } $ be the output distribution after the $M$ measurement phase of the reference (original) quantum circuit $QG_{R} $ to be simulated by $QG$, and let $Q_{QG} $ be the output distribution of the actual, reduced quantum circuit $QG$. The distance between the quantum circuit output distributions $P_{QG_{R} } $ and $Q_{QG} $ (distribution closeness) is straightforwardly yielded by the relative entropy function, as \n\\begin{equation} \\label{eq13} \nD\\left(\\left. P_{QG_{R} } \\right\\| Q_{QG} \\right)=\\sum _{i} P_{QG_{R} } \\left(i\\right)\\log _{2} \\frac{P_{QG_{R} } \\left(i\\right)}{Q_{QG} \\left(i\\right)} . 
\n\\end{equation} \n For two solutions $x$ and $y$, the $d_{x,y} \\left(c\\right)$ distribution closeness dominance function is defined as \n\\begin{equation} \\label{eq14} \nd_{x,y} \\left(c\\right)=c_{s} \\left(x\\right)-c_{s} \\left(y\\right), \n\\end{equation} \n where $c_{s} \\left(\\cdot \\right)$ is evaluated for a given solution $z$ as \n\\begin{equation} \\label{eq15} \nc_{s} \\left(z\\right)=\\sum _{i=1}^{N_{v} } v_{i}^{c} , \n\\end{equation} \n where $v_{i}^{c} $ is an $i$-th distribution closeness violation value, and $N_{v} $ is the number of distribution closeness violation values for a solution $z$.\n\nIn terms of distribution closeness dominance, $x$ dominates $y$ if the following relation holds: \n\\begin{equation} \\label{eq16} \n\\begin{split} {\\left(\\left(c_{s} \\left(x\\right)<0\\right)\\wedge \\left(c_{s} \\left(y\\right)<0\\right)\\wedge \\left(c_{s} \\left(x\\right)>c_{s} \\left(y\\right)\\right)\\right)} \\\\ \\vee{\\left(\\left(c_{s} \\left(x\\right)=0\\right)\\wedge \\left(c_{s} \\left(y\\right)<0\\right)\\right),} \\end{split} \n\\end{equation} \n thus \\eqref{eq16} states that $x$ dominates $y$ if both $x$ and $y$ are unfeasible and $x$ is closer to feasibility than $y$, or $x$ is feasible and $y$ is unfeasible.\n\nBy similar assumptions, $y$ dominates $x$ if \n\\begin{equation} \\label{eq17} \n\\begin{split} {\\left(\\left(c_{s} \\left(x\\right)<0\\right)\\wedge \\left(c_{s} \\left(y\\right)<0\\right)\\wedge \\left(c_{s} \\left(x\\right)<c_{s} \\left(y\\right)\\right)\\right)} \\\\ \\vee{\\left(\\left(c_{s} \\left(x\\right)<0\\right)\\wedge \\left(c_{s} \\left(y\\right)=0\\right)\\right).} \\end{split} \n\\end{equation} \n\n\\subsubsection{Constraint Dominance}\n The Pareto dominance is also modified with the $g_{s} \\left(\\cdot \\right)$ sum of constraint violation values, evaluated for a solution $z$ analogously to \\eqref{eq15}. In terms of constraint dominance, $x$ dominates $y$ if \n\\begin{equation} \\label{eq20} \n\\begin{split} {\\left(\\left(g_{s} \\left(x\\right)<0\\right)\\wedge \\left(g_{s} \\left(y\\right)<0\\right)\\wedge \\left(g_{s} \\left(x\\right)>g_{s} \\left(y\\right)\\right)\\right)} \\\\ \\vee{\\left(\\left(g_{s} \\left(x\\right)=0\\right)\\wedge \\left(g_{s} \\left(y\\right)<0\\right)\\right),} \\end{split} \n\\end{equation} \n thus \\eqref{eq20} states that $x$ dominates $y$ if both $x$ and $y$ are unfeasible and $x$ is closer to feasibility than $y$, or $x$ is feasible and $y$ is unfeasible.\n\nBy similar assumptions, $y$ dominates $x$ with respect to $g_{s} \\left(\\cdot \\right)$ if \n\\begin{equation} \\label{eq21} \n\\begin{split} {\\left(\\left(g_{s} \\left(x\\right)<0\\right)\\wedge \\left(g_{s} \\left(y\\right)<0\\right)\\wedge \\left(g_{s} \\left(x\\right)<g_{s} \\left(y\\right)\\right)\\right)} \\\\ \\vee{\\left(\\left(g_{s} \\left(x\\right)<0\\right)\\wedge \\left(g_{s} \\left(y\\right)=0\\right)\\right).} \\end{split} \n\\end{equation} \n\n\\begin{proof}\n\\begin{subproc}\n \\DontPrintSemicolon\n\\caption{\\textit{}} \n\\textbf{Step 2}. If $\\left|{\\rm {\\mathcal{A}}}\\right|>A_{s} $, where $\\left|{\\rm {\\mathcal{A}}}\\right|$ is the number of elements in ${\\rm {\\mathcal{A}}}$ and $A_{s} $ is the maximal archive size, then assign $\\Delta _{cr} \\left({\\rm {\\mathcal{A}}}\\right)$ to ${\\rm {\\mathcal{A}}}$, where $\\Delta _{cr} \\left(\\cdot \\right)$ is the crowding distance.\n\n\\textbf{Step 3}. Select the best $A_{s} $ elements. \n\\end{subproc}\n\n\\begin{subproc}\n \\DontPrintSemicolon\n\\caption{\\textit{}} \n\\textbf{Step 1}. Set $\\xi =\\nu $, and add $\\nu $ to ${\\rm {\\mathcal{A}}}$.\n\n\\textbf{Step 2}. Remove all the $k$ dominated points from ${\\rm {\\mathcal{A}}}$. \n\\end{subproc}\n\n\\begin{subproc}\n \\DontPrintSemicolon\n\\caption{\\textit{}} \n\\textbf{Step 1}. Set $\\xi =k_{\\tilde{d}\\left(\\min \\right)} $, where $k_{\\tilde{d}\\left(\\min \\right)} $ is a point of ${\\rm {\\mathcal{A}}}$ that corresponds to $\\tilde{d}\\left(\\min \\right)$ (see \\eqref{eq35}) with probability ${\\Pr }\\left(\\left. \\xi =\\nu \\right|\\xi \\angle \\nu ,\\nu \\angle \\left({\\rm {\\mathcal{A}}}\\right)_{k} \\right)$ (see \\eqref{eq34}).\n\n\\textbf{Step 2}. Otherwise set $\\xi =\\nu $. \n\\end{subproc}\n\n\\begin{subproc}\n \\DontPrintSemicolon\n\\caption{\\textit{}} \n\\textbf{Step 1}. If ${\\rm {\\mathcal{D}}}_{P} \\left(\\nu ,{\\rm {\\mathcal{A}}}\\right)={\\rm {\\mathcal{A}}}\\neg \\angle \\nu $, i.e., $\\nu $ is non-dominating with respect to ${\\rm {\\mathcal{A}}}$, then set $\\xi =\\nu $, and add $\\nu $ to ${\\rm {\\mathcal{A}}}$. If $\\left|{\\rm {\\mathcal{A}}}\\right|>A_{s} $, then assign $\\Delta _{cr} \\left({\\rm {\\mathcal{A}}}\\right)$ to ${\\rm {\\mathcal{A}}}$, and select the best $A_{s} $ elements.\n\n\\textbf{Step 2}. 
If ${\\rm {\\mathcal{D}}}_{P} \\left(\\nu ,\\left({\\rm {\\mathcal{A}}}\\right)_{k} \\right)=\\left({\\rm {\\mathcal{A}}}\\right)_{k} \\angle \\nu $, i.e., $\\nu $ dominates $k$ points in ${\\rm {\\mathcal{A}}}$, then set $\\xi =\\nu $, and add $\\nu $ to ${\\rm {\\mathcal{A}}}$. Remove the $k$ points from ${\\rm {\\mathcal{A}}}$. \n\\end{subproc}\n\n\\end{proof} \n\n\\subsubsection{Computational Complexity of QTAM}\nFollowing the complexity analysis of \\cite{ref24, ref25, ref26, ref27}, the computational complexity of QTAM is evaluated as \n\\begin{equation} \\label{eq36} \n{\\rm \\mathcal{O}}\\left(N_{d} N_{it} \\left|{\\rm {\\mathcal{P}}}\\right|\\left(N_{obj} +\\log _{2} \\left(\\left|{\\rm {\\mathcal{P}}}\\right|\\right)\\right)\\right), \n\\end{equation} \n where $N_{d} $ is the number of dominance measures, $N_{it} $ is the number of total iterations, $\\left|{\\rm {\\mathcal{P}}}\\right|$ is the population size, and $N_{obj} $ is the number of objectives.\n\n\\section{Wiring Optimization and Objective Function Maximization}\n\\label{sec4}\n\\subsection{Multilayer Quantum Circuit Grid}\n An $i$-th quantum gate of $QG$ is denoted by $g_{i} $, and a $k$-th port of the quantum gate $g_{i} $ is referred to as $g_{i,k} $. Due to the hardware restrictions of gate-model quantum computer implementations \\cite{ref16, ref17, ref18, ref19}, the quantum gates are applied in several rounds. Thus, a multilayer, $k$-dimensional (for simplicity, we assume $k=2$), $n$-sized finite square-lattice grid $G_{QG}^{k,r} $ can be constructed for $QG$, where $r$ is the number of layers $l_{z} $, $z=1,\\ldots ,r$. 
A quantum gate $g_{i} $ in the $z$-th layer $l_{z} $ is referred to as $g_{i}^{l_{z} } $, while a $k$-th port of $g_{i}^{l_{z} } $ is referred to as $g_{i,k}^{l_{z} } $.\n\n\\subsection{Method}\n\n\\begin{theorem}\nThere exists a method for the parallel optimization of quantum wiring in the physical layout of the quantum circuit and for the maximization of an objective function $C_{\\alpha } \\left(z\\right)$.\n\\end{theorem}\n\\begin{proof}\nThe aim of this procedure (Method 1) is to provide a simultaneous physical-layer optimization and Hamiltonian minimization via the minimization of the wiring lengths in the multilayer structure of $QG$ and the maximization of the objective function (see also \\sref{A1}). Formally, the aim of Method 1 is the $F_{{\\rm 2}} \\wedge F_{{\\rm 3}} $ simultaneous realization of the objective functions $F_{{\\rm 2}} $ and $F_{{\\rm 3}} $.\n\nUsing the $G_{QG}^{k,r} $ multilayer grid of the $QG$ quantum circuit determined via $F_{{\\rm 1}} $ and $F_{{\\rm 2}} $, the aim of $F_{{\\rm 3}} $ is the maximization of the objective function $C\\left(z\\right)$, where $z=z_{{\\rm 1}} \\ldots z_{n} $ is an $n$-length input string, and each $z_{i} $ is associated with an edge of $G_{QG}^{k,r} $ connecting two quantum ports. The objective function $C\\left(z\\right)$ associated with an arbitrary computational problem is defined as \n\\begin{equation} \\label{eq38} \nC\\left(z\\right)=\\sum _{\\left\\langle i,j\\right\\rangle \\in G_{QG}^{k,r} } C_{\\left\\langle i,j\\right\\rangle } \\left(z\\right), \n\\end{equation} \n where $C_{\\left\\langle i,j\\right\\rangle } $ is the objective function for an edge of $G_{QG}^{k,r} $ that connects quantum ports $i$ and $j$.\n\nThe $C^{{\\rm *}} \\left(z\\right)$ maximization of objective function \\eqref{eq38} yields a system state $\\Psi $ for the quantum computer \\cite{ref16, ref17, ref18, ref19} as \n\\begin{equation} \\label{eq39} \n\\Psi =\\left\\langle \\left. 
\\gamma ,\\mu ,C^{{\\rm *}} \\left(z\\right)\\right|\\right. C^{{\\rm *}} \\left(z\\right)\\left| \\gamma ,\\mu ,C^{{\\rm *}} \\left(z\\right)\\right\\rangle , \n\\end{equation} \n where \n\\begin{equation} \\label{eq40} \n{\\left| \\gamma ,\\mu ,C^{*} \\left(z\\right) \\right\\rangle} =U\\left(B,\\mu \\right)U\\left(C^{*} \\left(z\\right),\\gamma \\right){\\left| s \\right\\rangle} , \n\\end{equation} \nwhile \n\\begin{equation} \\label{eq43} \nU\\left(C^{{\\rm *}} \\left(z\\right),\\gamma \\right)\\left| z\\right\\rangle =e^{-i\\gamma C^{{\\rm *}} \\left(z\\right)} \\left| z\\right\\rangle , \n\\end{equation} \n where $\\gamma $ is a single parameter \\cite{ref16, ref17, ref18, ref19}.\n\nThe objective function \\eqref{eq38} can, without loss of generality, be rewritten as \n\\begin{equation} \\label{eq44} \nC\\left(z\\right)=\\sum _{\\alpha } C_{\\alpha } \\left(z\\right), \n\\end{equation} \n where each $C_{\\alpha } $ acts on a subset of bits, such that $C_{\\alpha } \\in \\left\\{{\\rm 0,1}\\right\\}$. Therefore, there exists a selection of parameters of $\\vec{\\Phi }$ in \\eqref{eq9} such that \\eqref{eq44} picks up a maximized value $C^{{\\rm *}} \\left(z\\right)$, which yields system state $\\Upsilon $ as \n\\begin{equation} \\label{eq45} \n\\Upsilon =\\langle \\vec{\\Phi }|C^{{\\rm *}} (z)|\\vec{\\Phi }\\rangle . \n\\end{equation} \n Therefore, the resulting Hamiltonian $H$ associated to the system state \\eqref{eq45} is minimized via $F_{{\\rm 2}} $ (see \\eqref{eq57}) as \n\\begin{equation} \\label{eq46} \nE_{L} (\\vec{\\Phi })=\\min {\\langle \\vec{\\Phi }|H|\\vec{\\Phi }\\rangle }, \n\\end{equation} \n since the physical-layer optimization minimizes the $\\ell _{ij} $ physical distance between the quantum ports; therefore, the energy $E_{L} (\\vec{\\Phi })$ of the Hamiltonian associated to $\\vec{\\Phi }$ is reduced to a minimum. \n\nThe steps of the method $F_{{\\rm 2}} \\wedge F_{{\\rm 3}} $ are given in Method 1. 
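Evaluating an objective of the form \\eqref{eq38} is straightforward; the following minimal Python sketch uses the MaxCut-type edge term $C_{\\left\\langle i,j\\right\\rangle } \\left(z\\right)=\\frac{{\\rm 1}}{{\\rm 2}} \\left({\\rm 1}-z_{i} z_{j} \\right)$ that appears in Method 1 (the toy edge set below is an illustrative assumption, not a circuit grid):

```python
def edge_objective(z_i: int, z_j: int) -> float:
    # C_<i,j>(z) = (1 - z_i z_j)/2 with z_i = +/-1: contributes 1 iff the
    # endpoint values differ, 0 otherwise.
    return 0.5 * (1 - z_i * z_j)

def total_objective(edges, z) -> float:
    # C(z) = sum over edges <i,j> of C_<i,j>(z)
    return sum(edge_objective(z[i], z[j]) for i, j in edges)

# Toy 4-cycle: the alternating assignment below "cuts" all four edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(total_objective(edges, z={0: 1, 1: -1, 2: 1, 3: -1}))  # 4.0
```

Maximizing this sum over all assignments $z$ is what the quantum computer is used for; the sketch only shows the classical evaluation of a single candidate $z$.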
The method minimizes the number of quantum wires in the physical layout of $QG$, and also achieves the desired system state $\\Psi $ of \\eqref{eq39}. \n\n\\setcounter{algocf}{0}\n\\begin{proced}\n \\DontPrintSemicolon\n\\caption{\\textit{Quantum Wiring Optimization and Objective Function Maximization}}\n\\textbf{Step 1}. Construct the $G_{QG}^{k,r} $ multilayer grid of the $QG$ quantum circuit, with $r$ layers $l_{{\\rm 1}} ,\\ldots ,l_{r} $. Determine the \n\\[C\\left(z\\right)=\\sum _{\\left\\langle i,j\\right\\rangle \\in G_{QG}^{k,r} } C_{\\left\\langle i,j\\right\\rangle } \\left(z\\right)\\] \nobjective function, where each $C_{\\left\\langle i,j\\right\\rangle } $ refers to the objective function for an edge in $G_{QG}^{k,r} $ connecting quantum ports $i$ and $j$, defined as \n\\[C_{\\left\\langle i,j\\right\\rangle } \\left(z\\right)=\\frac{{\\rm 1}}{{\\rm 2}} \\left({\\rm 1}-z_{i} z_{j} \\right),\\] \nwhere $z_{i} =\\pm 1$.\n\n\\textbf{Step 2}. Find the optimal assignment of separation point $\\Delta $ in $G_{QG}^{k,r} =\\left(V,E,f\\right)$ at a physical-layer blockage $\\beta $ via a minimum-cost tree in $G_{QG}^{k,r} $ containing at least one port from each quantum gate $g_{i} $, $i=1,\\ldots ,\\left|V\\right|$. For all pairs of quantum gates $g_{i} $, $g_{j} $, minimize the $f_{p,c} $ path cost (${\\rm L1}$ distance) between a source quantum gate $g_{i} $ and destination quantum gate $g_{j} $ and then maximize the overlapped ${\\rm L1}$ distance between $g_{i} $ and $\\Delta $.\n\n\\textbf{Step 3}. For the $s$ found assignments of $\\Delta $ in Step 2, evaluate the objective functions $C_{\\alpha _{k} } $, $k=1,\\ldots ,s$, where $C_{\\alpha _{0} } $ is the initial value. 
Let the two paths ${\\rm {\\mathcal{P}}}_{{\\rm 1}} $ and ${\\rm {\\mathcal{P}}}_{{\\rm 2}} $ between quantum ports $g_{i{\\rm ,1}} $, $g_{j{\\rm ,1}} $, $g_{j{\\rm ,2}} $ be given as ${\\rm {\\mathcal{P}}}_{{\\rm 1}} :g_{i{\\rm ,1}} \\to \\Delta \\to g_{j{\\rm ,1}} $, and ${\\rm {\\mathcal{P}}}_{{\\rm 2}} :g_{i{\\rm ,1}} \\to \\Delta \\to g_{j{\\rm ,2}} $. Evaluate objective functions $C_{\\left\\langle g_{i{\\rm ,1}} ,\\Delta \\right\\rangle } \\left(z\\right)$, $C_{\\left\\langle \\Delta ,g_{j{\\rm ,1}} \\right\\rangle } \\left(z\\right)$ and $C_{\\left\\langle \\Delta ,g_{j{\\rm ,2}} \\right\\rangle } \\left(z\\right)$.\n\n\\textbf{Step 4}. Select the $k$-th solution for which \n\\[{{C}_{{{\\alpha }_{k}}}}\\left( z \\right)=C_{\\left\\langle {{g}_{i,1}},\\Delta \\right\\rangle }^{\\left( k \\right)}\\left( z \\right)+C_{\\left\\langle \\Delta ,{{g}_{j,1}} \\right\\rangle }^{\\left( k \\right)}\\left( z \\right)+C_{\\left\\langle \\Delta ,{{g}_{j,2}} \\right\\rangle }^{\\left( k \\right)}\\left( z \\right)\\]\nis maximal, where $C_{\\left\\langle i,j\\right\\rangle }^{\\left(k\\right)} $ is the objective function associated to a $k$-th solution between quantum ports $g_{i{\\rm ,1}} $, $g_{j{\\rm ,1}} $, and $g_{i{\\rm ,1}} $, $g_{j{\\rm ,2}} $ in $G_{QG}^{k,r} $. The resulting $C_{\\alpha }^{{\\rm *}} \\left(z\\right)$ for ${\\rm {\\mathcal{P}}}_{{\\rm 1}} $ and ${\\rm {\\mathcal{P}}}_{{\\rm 2}} $ is \n\\[C_{\\alpha }^{*} \\left(z\\right)=\\mathop{\\mathop{\\max }}\\limits_{k} {\\kern 1pt} \\left(C_{\\alpha _{k} } \\left(z\\right)\\right).\\] \n\n\\textbf{Step 5}. 
Repeat steps 2-4 for all paths of $G_{QG}^{k,r} $.\n\\end{proced}\n\n\\end{proof}\n\nThe steps of Method 1 are illustrated in \\fref{fig2}, using the $G_{QG}^{k,r} $ multilayer topology of the $QG$ quantum gate structure, where $l_{i} $ refers to the $i$-th layer of $G_{QG}^{k,r} $.\n\n \\begin{center}\n\\begin{figure}[h!]\n\\begin{center}\n\\includegraphics[angle = 0,width=0.8\\linewidth]{fig2.pdf}\n\\caption{The aim is to find the optimal wiring in $G_{QG}^{k,r} $ for the $QG$ quantum circuit (minimal path length with maximal overlapped path between $g_{i{\\rm ,1}} $ and $g_{j{\\rm ,1}} $,$g_{j{\\rm ,2}} $) such that the $C_{\\alpha } $ objective function associated to the paths ${\\rm {\\mathcal{P}}}_{{\\rm 1}} :g_{i{\\rm ,1}} \\to g_{j{\\rm ,1}} $, and ${\\rm {\\mathcal{P}}}_{{\\rm 2}} :g_{i{\\rm ,1}} \\to g_{j{\\rm ,2}} $ is maximal. (a): The initial objective function value is $C_{\\alpha _{0} } $. A physical-layer blockage $\\beta $ in the quantum circuit does not allow the use of paths ${\\rm {\\mathcal{P}}}_{{\\rm 1}} $ and ${\\rm {\\mathcal{P}}}_{{\\rm 2}} $. (b): The wire length is optimized via the separation point $\\Delta $. The path cost is $f_{p,c} =11+3f_{l} $, where $f_{l} $ is the cost function of the path between the layers $l_{{\\rm 1}} $ and $l_{{\\rm 2}} $ (depicted by the blue vertical line), the path overlap from $g_{i{\\rm ,1}} $ to $\\Delta $ is $\\tau _{o} =5+f_{l} $. The objective function value is $C_{\\alpha _{{\\rm 1}} } $. (c): The path cost is $f_{p,c} =10$, the path overlap from $g_{i{\\rm ,1}} $ to $\\Delta $ is $\\tau _{o} =4$. The objective function value is $C_{\\alpha _{{\\rm 2}} } $. (d): The path cost is $f_{p,c} =12$, the path overlap from $g_{i{\\rm ,1}} $ to $\\Delta $ is $\\tau _{o} =6$. The objective function value is $C_{\\alpha _{{\\rm 3}} } $. 
The selected connection topology from (b), (c), and (d) is that which yields the maximized objective function $C_{\\alpha }^{{\\rm *}} $.} \n \\label{fig2}\n \\end{center}\n\\end{figure}\n\\end{center}\n\n\\subsection{Quantum Circuit Minimization}\n For objective function $F_{{\\rm 1}} $, the area minimization of the $QG$ quantum circuit requires the following constraints. Let $S_{v} \\left(P_{i} \\right)$ be the vertical symmetry axis of a proximity group $P_{i} $ \\cite{ref24, ref25, ref26} on $QG$, and let $x_{S_{v} \\left(P_{i} \\right)} $ refer to the $x$-coordinate of $S_{v} \\left(P_{i} \\right)$. Then, by some symmetry considerations for $x_{S_{v} \\left(P_{i} \\right)} $, \n\\begin{equation} \\label{eq47} \nx_{S_{v} \\left(P_{i} \\right)} =\\frac{{\\rm 1}}{{\\rm 2}} \\left(x_{i}^{{\\rm 1}} +x_{i}^{{\\rm 2}} +\\kappa _{i} \\right), \n\\end{equation} \n where $x_{i} $ is the bottom-left $x$ coordinate of a cell $\\sigma _{i} $, $\\kappa _{i} $ is the width of $\\sigma _{i} $, and \n\\begin{equation} \\label{eq48} \ny_{i}^{{\\rm 1}} +\\frac{h_{i} }{{\\rm 2}} =y_{i}^{{\\rm 2}} +\\frac{h_{i} }{{\\rm 2}} , \n\\end{equation} \n where $y_{i} $ is the bottom-left $y$ coordinate of a cell $\\sigma _{i} $, $h_{i} $ is the height of $\\sigma _{i} $.\n\nLet $\\left(\\sigma ^{{\\rm 1}} ,\\sigma ^{{\\rm 2}} \\right)$ be a symmetry pair \\cite{ref24, ref25, ref26} that refers to two matched cells placed symmetrically in relation to $S_{v} \\left(P_{i} \\right)$, with bottom-left coordinates $\\left(\\sigma ^{{\\rm 1}} ,\\sigma ^{{\\rm 2}} \\right)=\\left(\\left(x_{i}^{{\\rm 1}} ,y_{i}^{{\\rm 1}} \\right),\\left(x_{i}^{{\\rm 2}} ,y_{i}^{{\\rm 2}} \\right)\\right)$. 
Then, $x_{S_{v} \\left(P_{i} \\right)} $ can be rewritten as \n\\begin{equation} \\label{eq49} \nx_{S_{v} \\left(P_{i} \\right)} =x_{i}^{{\\rm 1}} -x_{i} =x_{i}^{{\\rm 2}} +x_{i} +\\kappa _{i} , \n\\end{equation} \n with the relation $y_{i}^{{\\rm 1}} =y_{i}^{{\\rm 2}} =y_{i} $.\n\nLet $\\sigma ^{S} =\\left(x_{i}^{S} ,y_{i}^{S} \\right)$ be a cell which is placed centered \\cite{ref24, ref25, ref26} with respect to $S_{v} \\left(P_{i} \\right)$. Then, $x_{S_{v} \\left(P_{i} \\right)} $ can be evaluated as \n\\begin{equation} \\label{eq50} \nx_{S_{v} \\left(P_{i} \\right)} =x_{i}^{S} +\\frac{\\kappa _{i} }{{\\rm 2}} , \n\\end{equation} \n along with $y_{i}^{S} =y_{i} $. Note that it is also possible that for some cells in $QG$ there are no symmetry requirements; these cells are denoted by $\\sigma ^{0} $.\n\nAs can be concluded, using objective function $F_{{\\rm 1}} $ for the physical-layer minimization of $QG$, a $d$-dimensional constraint vector ${\\rm \\mathbf{x}}_{F_{{\\rm 1}} }^{d} $ can be formulated with the symmetry considerations as follows: \n\\begin{equation} \\label{eq51} \n{\\rm \\mathbf{x}}_{F_{{\\rm 1}} }^{d} =\\sum _{N_{\\left(\\sigma ^{{\\rm 1}} ,\\sigma ^{{\\rm 2}} \\right)} } \\left(x_{i} ,y_{i} ,r_{i} \\right)+\\sum _{N_{\\sigma ^{S} } } \\left(y_{i} ,r_{i} \\right)+\\sum _{N_{\\sigma ^{0} } } \\left(x_{i} ,y_{i} ,r_{i} \\right), \n\\end{equation} \n where $N_{\\left(\\sigma ^{{\\rm 1}} ,\\sigma ^{{\\rm 2}} \\right)} $ is the number of $\\left(\\sigma ^{{\\rm 1}} ,\\sigma ^{{\\rm 2}} \\right)$ symmetry pairs, $N_{\\sigma ^{S} } $ is the number of $\\sigma ^{S} $-type cells, $N_{\\sigma ^{0} } $ is the number of $\\sigma ^{0} $-type cells, and $r_{i} $ is the rotation angle of an $i$-th cell $\\sigma _{i} $, respectively.\n\n\\subsubsection{Quantum Wire Area Minimization}\nObjective function $F_{{\\rm 2}} $ provides a minimization of the total quantum wire length of the $QG$ circuit. 
To achieve this, we define a procedure that yields the minimized total quantum wire area, $w_{QG} $, of $QG$ as given by \\eqref{eq7}. Let $\\delta _{ij} $ be the effective width of the quantum wire $ij$ in the $QG$ circuit, defined as \n\\begin{equation} \\label{eq52} \n\\delta _{ij} =\\frac{\\psi _{ij} }{J_{\\max } \\left(T_{ref} \\right)h_{nom} } , \n\\end{equation} \n where $\\psi _{ij} $ is the (root mean square) condensate wave function amplitude, $J_{\\max } \\left(T_{ref} \\right)$ is the maximum allowed current density at a given reference temperature $T_{ref} $, while $h_{nom} $ is the nominal layer height. Since drops in the condensate wave function phase $\\varphi _{ij} $ can also be present in the $QG$ circuit environment, the $\\delta '_{ij} $ effective width of the quantum wire $ij$ can be rewritten as \n\\begin{equation} \\label{eq53} \n\\delta '_{ij} =\\frac{\\psi _{ij} \\ell _{eff} r_{0} \\left(T_{ref} \\right)}{\\chi _{\\varphi _{ij} } } , \n\\end{equation} \n where $\\chi _{\\varphi _{ij} } $ is a maximally allowed value for the phase drops, $\\ell _{eff} $ is the effective length of the quantum wire, $\\ell _{eff} \\le \\left(\\chi _{\\varphi _{ij} } \\delta _{ij} \\right)\/\\psi _{ij} r_{0} \\left(T_{ref} \\right),$ while $r_{0} \\left(T_{ref} \\right)$ is a conductor sheet resistance \\cite{ref1, ref2, ref3, ref4, ref5}.\n\nIn a $G_{QG}^{k,r} $ multilayer topological representation of $QG$, the $\\ell _{ij} $ distance between the quantum ports is \n\\begin{equation} \\label{eq54} \n\\ell _{ij} =\\left|x_{i} -x_{j} \\right|+\\left|y_{i} -y_{j} \\right|+\\left|z_{i} -z_{j} \\right|f_{l} , \n\\end{equation} \n where $f_{l} $ is a cost function between the layers of the multilayer structure of $QG$.\n\nDuring the evaluation, let $w_{QG} \\left(k\\right)$ be the total quantum wire area of a particular net $k$ of the $QG$ circuit, \n\\begin{equation} \\label{eq55} \nw_{QG} \\left(k\\right)=\\sum _{i=1}^{p} \\sum _{j=1}^{q} \\ell _{ij} \\cdot 
\\delta _{ij} \\left(\\psi _{ij} \\right), \n\\end{equation} \n where the $q$ quantum ports are considered as sources of condensate wave function amplitudes, while the $p$ quantum ports of $QG$ are sinks; thus \\eqref{eq7} can be rewritten as \n\\begin{equation} \\label{eq56} \nF_{{\\rm 2}} :w_{QG} =\\min {\\sum _{k=1}^{h} w_{QG} \\left(k\\right)}. \n\\end{equation} \n Since $\\psi _{ij} $ is proportional to $\\delta _{ij} \\left(\\psi _{ij} \\right)$, \\eqref{eq56} can be simplified as \n\\begin{equation} \\label{eq57} \nF_{{\\rm 2}} :w'_{QG} =\\min {\\sum _{k=1}^{h} w'_{QG} \\left(k\\right)}, \n\\end{equation} \n where \n\\begin{equation} \\label{eq58} \nw'_{QG} \\left(k\\right)=\\sum _{i=1}^{p} \\sum _{j=1}^{q} \\ell _{ij} \\cdot \\psi _{ij} , \n\\end{equation} \n where $\\ell _{ij} $ is given in \\eqref{eq54}.\n\nAmong the quantum ports of a particular net $k$ of $QG$, the source quantum ports are denoted by a positive sign \\cite{ref24, ref25, ref26} in the condensate wave function amplitude, $\\psi _{ij} $, assigned to quantum wire $ij$ between quantum ports $i$ and $j$, while the sink ports are denoted by a negative sign in the condensate wave function amplitude, $-\\psi _{ij} $, with respect to a quantum wire $ij$ between quantum ports $i$ and $j$.\n\nThus, the aim of $w_{QG} \\left(k\\right)$ in \\eqref{eq55} is to determine a set of port-to-port connections in the $QG$ quantum circuit such that the number of long connections in a particular net $k$ of $QG$ is reduced as much as possible. The result in \\eqref{eq56} therefore extends these requirements to all nets of $QG$.\n\n\\paragraph{Wave Function Amplitudes}\nWith respect to a particular quantum wire $ij$ between quantum ports $i$ and $j$ of $QG$, let $\\psi _{i\\to j} $ refer to the condensate wave function amplitude in direction $i\\to j$, and let $\\psi _{j\\to i} $ refer to the condensate wave function amplitude in direction $j\\to i$ in the quantum circuit. 
Then, let $\\phi _{ij} $ be defined for the condensate wave function amplitudes of quantum wire $ij$ as \n\\begin{equation} \\label{eq59} \n\\phi _{ij} =\\min {\\left(\\left|\\psi _{i\\to j} \\right|,\\left|\\psi _{j\\to i} \\right|\\right)}, \n\\end{equation} \n with a residual condensate wave function amplitude \n\\begin{equation} \\label{eq60} \n\\xi _{i\\to j} =\\phi _{ij} -\\psi _{i\\to j} , \n\\end{equation} \n where $\\psi _{i\\to j} $ is an actual amplitude in the forward direction $i\\to j$. Thus, the maximum amount of condensate wave function amplitude injectable into quantum wire $ij$ in the forward direction $i\\to j$ in the presence of $\\psi _{i\\to j} $ is $\\xi _{i\\to j} $ (see \\eqref{eq60}). The following relation holds for the backward direction, $j\\to i$, for the decrement of a current wave function amplitude $\\psi _{i\\to j} $, as \n\\begin{equation} \\label{eq61} \n\\bar{\\xi }_{j\\to i} =-\\psi _{i\\to j} , \n\\end{equation} \n with residual quantum wire length \n\\begin{equation} \\label{eq62} \n\\Gamma _{j\\to i} =-\\delta _{ij} , \n\\end{equation} \n where $\\delta _{ij} $ is given in \\eqref{eq52}.\n\nBy some fundamental assumptions, the ${\\rm {\\mathcal{N}}}_{R} $ residual network of $QG$ is therefore a network of the quantum circuit with forward edges for the increment of the wave function amplitude $\\psi $, and backward edges for the decrement of $\\psi $. To avoid the problem of negative wire lengths, the Bellman-Ford algorithm \\cite{ref24, ref25, ref26} can be utilized in an iterative manner in the residual directed graph of the $QG$ topology.\n\nTo find a path between all pairs of quantum gates in the directed graph of the $QG$ quantum circuit, the directed graph has to be strongly connected. 
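The Bellman-Ford negative-cycle detection used on the residual network can be sketched as follows (a minimal Python illustration with a virtual source connected to all vertices; the example graphs and weights are assumptions, not derived from a concrete $QG$):

```python
def has_negative_cycle(n: int, edges) -> bool:
    """Bellman-Ford negative-cycle detection on a directed graph.

    `edges` is a list of (u, v, w) triples over vertices 0..n-1; w may be
    negative, as for the backward residual edges above.
    """
    dist = [0.0] * n  # equivalent to a virtual source with 0-weight edges
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # If any edge can still be relaxed, a negative cycle is reachable.
    return any(dist[u] + w < dist[v] for u, v, w in edges)

# Cycle 0 -> 1 -> 2 -> 0 with total weight 1 - 2 + 0.5 = -0.5 (negative):
print(has_negative_cycle(3, [(0, 1, 1.0), (1, 2, -2.0), (2, 0, 0.5)]))  # True
# Same cycle with total weight 1 - 2 + 2.5 = 1.5 (non-negative):
print(has_negative_cycle(3, [(0, 1, 1.0), (1, 2, -2.0), (2, 0, 2.5)]))  # False
```

In the iterative use described above, a detected negative cycle would trigger an update of the $\\psi _{ij} $ amplitudes before re-running the check.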
The strong connectivity of the $h$ nets with the parallel minimization of the connections of the $QG$ topology can be achieved by a minimum spanning tree method such as Kruskal's algorithm \\cite{ref24, ref25, ref26}.\n \n\\begin{lemma}\nThe objective function $F_{{\\rm 2}} $ is feasible in a multilayer $QG$ quantum circuit structure.\n\\end{lemma}\n\\begin{proof}\n The procedure defined for the realization of objective function $F_{{\\rm 2}} $ on a $QG$ quantum circuit is summarized in Method 2. The proof assumes a superconducting architecture.\n \n\\setcounter{algocf}{1}\n\\begin{proced}\n \\DontPrintSemicolon\n\\caption{\\textit{Implementation of Objective Function $F_2$}}\n\\textbf{Step 1}. Assign the $\\psi _{ij} $ condensate wave function amplitudes for all $ij$ quantum wires of $QG$ via Sub-method 2.1.\n\n\\textbf{Step 2}. Determine the residual network of $QG$ via Sub-method 2.2.\n\n\\textbf{Step 3}. Achieve the strong connectivity of $QG$ via Sub-method 2.3.\n\n\\textbf{Step 4}. Output the $QG$ quantum circuit topology such that $w_{QG} $ \\eqref{eq7} is minimized. \n\\end{proced}\n\n\nThe sub-procedures of Method 2 are detailed in Sub-methods 2.1, 2.2 and 2.3. \n\n\\setcounter{algocf}{0}\n\\begin{subproc2}\n \\DontPrintSemicolon\n\\caption{}\n\\textbf{Step 1}. Create a ${\\rm M}_{QG} $ multilayer topological map of the network ${\\rm {\\mathcal{N}}}$ of $QG$ with the quantum gates and ports.\n\n\\textbf{Step 2}. From ${\\rm M}_{QG} $ determine the $L_{c} $ connection list of ${\\rm {\\mathcal{N}}}$ in $QG$.\n\n\\textbf{Step 3}. Determine the $\\delta _{ij} $ effective width of the quantum wire $ij$ via \\eqref{eq52}, for $\\forall ij$ wires.\n\n\\textbf{Step 4}. Determine $\\phi _{ij} $ via \\eqref{eq59} for all quantum wires $ij$ of the $QG$ circuit.\n\n\\textbf{Step 5}. 
For a $k$-th net of $QG$, assign the wave function amplitude values $\\psi _{ij} $ to $\\forall ij$ quantum wires such that $w_{QG} \\left(k\\right)$ in \\eqref{eq55} is minimized, with quantum wire length $\\ell _{ij} $ \\eqref{eq54}. \n\\end{subproc2}\n\n\\begin{subproc2}\n \\DontPrintSemicolon\n\\caption{}\n\\textbf{Step 1}. Create a $\\bar{{\\rm M}}_{QG} $ multilayer topological map of the ${\\rm {\\mathcal{N}}}_{R} $ residual network of $QG$.\n\n\\textbf{Step 2}. From $\\bar{{\\rm M}}_{QG} $ determine the $\\bar{L}_{c} $ connection list of the ${\\rm {\\mathcal{N}}}_{R} $ residual network of $QG$.\n\n\\textbf{Step 3}. For $\\forall i\\to j$ forward edges of $\\bar{{\\rm M}}_{QG} $ of ${\\rm {\\mathcal{N}}}_{R} $, compute the $\\xi _{i\\to j} $ residual condensate wave function amplitude \\eqref{eq60}, and for $\\forall j\\to i$ backward edges of $\\bar{{\\rm M}}_{QG} $, compute the quantity $\\bar{\\xi }_{j\\to i} $ via \\eqref{eq61}.\n\n\\textbf{Step 4}. Compute the residual negative quantum wire length $\\Gamma _{j\\to i} $ via \\eqref{eq62}, using $\\delta _{ij} $ from \\eqref{eq52}.\n\n\\textbf{Step 5}. Determine the $\\bar{C}$ negative cycles in the ${\\bar{{\\rm M}}_{QG}} $ map of the ${\\rm {\\mathcal{N}}}_{R} $ residual network of $QG$ via the ${\\rm {\\mathcal{A}}}_{BF} $ Bellman-Ford algorithm \\cite{ref24, ref25, ref26}.\n\n\\textbf{Step 6}. If $N_{\\bar{C}} >0$, where $N_{\\bar{C}} $ is the number of $\\bar{C}$ negative cycles in $\\bar{{\\rm M}}_{QG} $, then update the $\\psi _{ij} $ wave function amplitudes of the quantum wires $ij$ to cancel out the negative cycles.\n\n\\textbf{Step 7}. Re-calculate the values of \\eqref{eq60}, \\eqref{eq61} and \\eqref{eq62} for the residual edges of ${\\rm {\\mathcal{N}}}_{R} $.\n\n\\textbf{Step 8}. Repeat steps 5-7 until $N_{\\bar{C}} =0$. \n\\end{subproc2}\n\n\\begin{subproc2}\n \\DontPrintSemicolon\n\\caption{}\n\\textbf{Step 1}. 
For an $i$-th $sn_{k,i} $ subnet of a net $k$ of the $QG$ quantum circuit, set the quantum wire length to zero, $\\delta _{ij} =0$, between quantum ports $i$ and $j$, for all $i$.\n\n\\textbf{Step 2}. Determine the $L{\\rm 2}$ (Euclidean) distance between the quantum ports of the subnets $sn_{k,i} $ (from each quantum port of a subnet to each other quantum port of all remaining subnets \\cite{ref24}).\n\n\\textbf{Step 3}. Weight the $\\delta _{ij} >0$ non-zero quantum wire lengths by the calculated $L{\\rm 2}$ distance between the connections of the subnets of the $QG$ quantum circuit \\cite{ref1, ref2, ref3, ref4, ref5}, \\cite{ref24, ref25, ref26}.\n\n\\textbf{Step 4}. Determine the minimum spanning tree ${\\rm {\\mathcal{T}}}_{QG} $ via the ${\\rm {\\mathcal{A}}}_{K} $ Kruskal algorithm \\cite{ref24}.\n\n\\textbf{Step 5}. Determine the set $S_{{\\rm {\\mathcal{T}}}_{QG} } $ of quantum wires with $\\delta _{ij} >0$ from ${\\rm {\\mathcal{T}}}_{QG} $. Calculate $\\delta _{S_{{\\rm {\\mathcal{T}}}_{QG} } } =\\max \\left(\\delta _{ij} ,\\delta '_{ij} ,\\delta _{0} \\right)$, where $\\delta _{0} $ is the minimum width that can be manufactured, while $\\delta _{ij} $ and $\\delta '_{ij} $ are given in \\eqref{eq52} and \\eqref{eq53}.\n\n\\textbf{Step 6}. Add the quantum wires of $S_{{\\rm {\\mathcal{T}}}_{QG} } $ to the ${\\rm M}_{QG} $ multilayer topological map of the network ${\\rm {\\mathcal{N}}}$ of $QG$.\n\n\\textbf{Step 7}. Repeat steps 4-6 for $\\forall k$ nets of the $QG$ quantum circuit, until ${\\rm M}_{QG} $ is strongly connected. 
\n\\end{subproc2}\n\nThis concludes the proof.\n\\end{proof}\n\n\\subsubsection{Processing in the Multilayer Structure}\n\nThe $G_{QG}^{k,z} $ grid consists of all $g_{i} $ quantum gates of $QG$ in a multilayer structure, such that the $g_{i,k}^{l_{z} } $ appropriate ports of the quantum gates are associated via a directed graph ${\\rm {\\rm G}}=\\left(V,E,f_{c} \\right)$, where $V$ is the set of ports, $g_{i,k}^{l_{z} } \\subseteq V$, $E$ is the set of edges, and $f_{c} $ is a cost function, to achieve the gate-to-gate connectivity.\n\nAs a hardware restriction, we use a constraint on the quantum gate structure: it is assumed in the model that a given quantum system cannot participate in more than one quantum gate at a particular time.\n\nThe distance in the rectilinear grid $G_{QG}^{k,z} $ of $QG$ is measured by the $d_{{\\rm L1}} \\left(\\cdot \\right)$ ${\\rm L1}$-distance function. Between two network ports $x,y\\in V$, $x=\\left(j,k\\right)$, $y=\\left(m,o\\right)$, $d_{{\\rm L1}} \\left(\\cdot \\right)$ is \n\\begin{equation} \\label{eq63} \nd_{{\\rm L1}} \\left(x,y\\right)=d_{{\\rm L1}} \\left(\\left(j,k\\right),\\left(m,o\\right)\\right)=\\left|m-j\\right|+\\left|o-k\\right|. \n\\end{equation} \n The quantum port selection in the $G_{QG}^{k,r} $ multilayer structure of $QG$, with $r$ layers $l_{z} $, $z=1,\\ldots ,r$, and $k=2$ dimensions in each layer, is illustrated in \\fref{figA1}.\n\n\\begin{center}\n\\begin{figure*}[htbp]\n\\begin{center}\n\\includegraphics[angle = 0,width=1\\linewidth]{figA1.pdf}\n\\caption{The method of port allocation of the quantum gates in the $G_{QG}^{k,r} $ multilayer structure, with $r$ layers $l_{z} $, $z=1,\\ldots ,r$, and $k=2$ dimensions in each layer. The aim of the multiport selection is to find the shortest path between ports of quantum gates $g_{i} $ (blue rectangle) and $g_{j} $ (green rectangle) in the $G_{QG}^{{\\rm 2,}r} $ multilayer structure. 
(a): The quantum ports to be connected in $QG$ are port $g_{i{\\rm ,1}} $ in quantum gate $g_{i} $ in layer $l_{{\\rm 1}} $, and ports $g_{j{\\rm ,1}} $ and $g_{j{\\rm ,2}} $ of quantum gate $g_{j} $ in layer $l_{{\\rm 3}} $. (b): Due to a hardware restriction on quantum computers, the quantum gates are applied in several rounds in the different layers of the quantum circuit $QG$. Quantum gate $g_{j} $ is applied in two rounds in two different layers, depicted as $g_{j}^{l_{{\\rm 3}} } $ and $g_{j}^{l_{{\\rm 2}} } $. For the layer-$l_{{\\rm 3}} $ quantum gate $g_{j}^{l_{{\\rm 3}} } $, the active port is $g_{j{\\rm ,1}}^{l_{{\\rm 3}} } $ (red), while the other port is not accessible (gray) in $l_{{\\rm 3}} $. Due to a physical-layer blockage $\\beta $ in the quantum circuit of the layer $l_{{\\rm 2}} $ above, the path cost between ports $g_{i{\\rm ,1}} $ and $g_{j{\\rm ,1}}^{l_{{\\rm 3}} } $ cannot be minimized via the $g_{j{\\rm ,1}}^{l_{{\\rm 3}} } $ port. The target port $g_{j{\\rm ,1}}^{l_{{\\rm 3}} } $ is therefore referred to as a blocked port (depicted by pink), and a new port of $g_{j}^{l_{{\\rm 3}} } $ is selected instead of $g_{j{\\rm ,1}}^{l_{{\\rm 3}} } $ (new port depicted by red). (c): For the layer-$l_{{\\rm 2}} $ quantum gate $g_{j}^{l_{{\\rm 2}} } $, the active port is $g_{j{\\rm ,2}}^{l_{{\\rm 2}} } $ (red), while the remaining port is not available (gray) in $l_{{\\rm 2}} $. The white dots (vertices) represent auxiliary ports in the grid structure of the quantum circuit. 
In $G_{QG}^{{\\rm 2,}r} $, each vertex can have a maximum of 8 neighbors; thus, for a given port $g_{j,k} $ of a quantum gate $g_{j} $, ${\\rm deg}\\left(g_{j,k} \\right)\\le {\\rm 8}$.} \n \\label{figA1}\n \\end{center}\n\\end{figure*}\n\\end{center}\n\n\\paragraph{Algorithm}\n\\begin{theorem}\nThe Quantum Shortest Path Algorithm finds shortest paths in a multilayer $QG$ quantum circuit structure.\n\\end{theorem}\n\\begin{proof}\nThe steps of the shortest path determination between the ports of the quantum gates in a multilayer structure are included in Algorithm 2. \n\n\\setcounter{algocf}{1}\n\\begin{algo}\n \\DontPrintSemicolon\n\\caption{\\textit{Quantum Shortest Path Algorithm (QSPA)}}\n\\textbf{Step 1}. Create the $G_{QG}^{k,r} $ multilayer structure of $QG$, with $r$ layers $l_{z} $, $z=1,\\ldots ,r$, and $k$ dimensions in each layer. From $G_{QG}^{k,r} $ generate a list $L_{{\\rm {\\mathcal{P}}}\\in {\\rm {\\rm Q}{\\rm G}}} $ of the paths from each start quantum gate port to each end quantum gate port in the $G_{QG}^{k,r} $ structure of the $QG$ quantum circuit.\n\n\\textbf{Step 2}. Due to the hardware restrictions of quantum computers, add the decomposed quantum gate port information and its layer information to $L_{{\\rm {\\mathcal{P}}}\\in {\\rm {\\rm Q}{\\rm G}}} $. Add the $\\beta $ physical-layer blockage information to $L_{{\\rm {\\mathcal{P}}}\\in {\\rm {\\rm Q}{\\rm G}}} $.\n\n\\textbf{Step 3}. For a quantum port pair $\\left(x,y\\right)\\in G_{QG}^{k,r} $ define the $f_{c} \\left(x,y\\right)$ cost function as \n\\[f_{c} \\left(x,y\\right)=\\gamma \\left(x,y\\right)+d_{{\\rm L1}} \\left(x,y\\right),\\] \nwhere $\\gamma \\left(x,y\\right)$ is the real path length from $x$ to $y$ in the multilayer grid structure $G_{QG}^{k,r} $ of $QG$, while $d_{{\\rm L1}} \\left(x,y\\right)$ is the ${\\rm L1}$ distance in the grid structure as given by \\eqref{eq63}.\n\n\\textbf{Step 4}. 
Using $L_{{\\rm {\\mathcal{P}}}\\in {\\rm {\\rm Q}{\\rm G}}} $ and cost function $f_{c} \\left(x,y\\right)$, apply the $A^{{\\rm *}} $ parallel search \\cite{ref24, ref25, ref26} to determine the lowest cost path ${\\rm {\\mathcal{P}}}^{{\\rm *}} \\left(x,y\\right)$. \n\\end{algo}\n\n\\end{proof}\n\n\\paragraph{Complexity Analysis}\nThe complexity analysis of Algorithm 2 is as follows. Since the QSPA algorithm (Algorithm 2) is based on the $A^{{\\rm *}} $ search method \\cite{ref24, ref25, ref26}, the complexity is directly yielded by the complexity of the $A^{{\\rm *}} $ search algorithm.\n\n\\section{Performance Evaluation}\n\\label{sec5}\nIn this section, we compare the performance of the proposed QTAM method with a multiobjective evolutionary algorithm called NSGA-II \\cite{com1}. We selected this multiobjective evolutionary algorithm for the comparison, since the method can be adjusted for circuit design. \n\nThe computational complexity of NSGA-II is proven to be ${\\rm {\\mathcal O}}\\left(N_{it} N_{obj} \\left|{\\rm {\\mathcal P}}\\right|^{2} \\right)$ in general, while with an optimized nondominated sorting procedure, the complexity can be reduced to ${\\rm {\\mathcal O}}\\left(N_{it} N_{obj} \\left|{\\rm {\\mathcal P}}\\right|\\log _{2} \\left|{\\rm {\\mathcal P}}\\right|\\right)$. We take both situations into consideration for the comparison. The complexity of QTAM is given in \\eqref{eq36}.\n\nThe complexity of the methods in terms of the number of operations, $N_{O}$, is compared in \\fref{figA2}. The performance of QTAM is depicted in \\fref{figA2}(a), while \\fref{figA2}(b) and \\fref{figA2}(c) illustrate the performances of the NSGA-II and optimized NSGA-II, respectively.\n\nFor the comparison, the $N_{obj} $ parameter is set to $N_{obj} =5$, while for the QTAM method, $N_{d} $ is set to $N_{d} =3$. 
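With these parameter values, the three complexity bounds can be evaluated directly; the short Python sketch below reproduces the approximate maxima of the analyzed ranges at $N_{it} =100$ and $\\left|{\\rm {\\mathcal P}}\\right|=500$ (the variable names are ours, not from the cited works):

```python
import math

# Parameter values used in the comparison, at the upper ends of the
# analyzed ranges N_it in [1,100] and |P| in [1,500].
N_it, P, N_obj, N_d = 100, 500, 5, 3

qtam = N_d * N_it * P * (N_obj + math.log2(P))   # QTAM bound, eq. (36)
nsga2 = N_it * N_obj * P ** 2                    # NSGA-II, general bound
nsga2_opt = N_it * N_obj * P * math.log2(P)      # NSGA-II, optimized sorting

print(f"{qtam:.3g} {nsga2:.3g} {nsga2_opt:.3g}")
```

The printed values are roughly $2.09\\cdot 10^{6} $, $1.25\\cdot 10^{8} $ and $2.24\\cdot 10^{6} $, in line with the maxima reported below for \\fref{figA2}.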
\n\n\\begin{center}\n\\begin{figure*}[htbp]\n\\begin{center}\n\\includegraphics[angle = 0,width=1\\linewidth]{figP.pdf}\n\\caption{(a): The computational complexity ($N_{O} $: number of operations) of QTAM as a function of $N_{it} $ and $\\left|{\\rm {\\mathcal P}}\\right|$, $N_{it} \\in \\left[1,100\\right]$, $\\left|{\\rm {\\mathcal P}}\\right|\\in \\left[1,500\\right]$. (b): The computational complexity of the NSGA-II method as a function of $N_{it} $ and $\\left|{\\rm {\\mathcal P}}\\right|$, $N_{it} \\in \\left[1,100\\right]$, $\\left|{\\rm {\\mathcal P}}\\right|\\in \\left[1,500\\right]$. (c): The computational complexity of the optimized NSGA-II as a function of $N_{it} $ and $\\left|{\\rm {\\mathcal P}}\\right|$, $N_{it} \\in \\left[1,100\\right]$, $\\left|{\\rm {\\mathcal P}}\\right|\\in \\left[1,500\\right]$.} \n \\label{figA2}\n \\end{center}\n\\end{figure*}\n\\end{center}\n\nIn the analyzed range, the maximized values of $N_{O} $ are $N_{O} \\left({\\rm QTAM}\\right)\\approx 2\\cdot 10^{6} $, $N_{O} (\\text{NSGA-II})\\approx 1.25\\cdot 10^{8} $, and for the optimized NSGA-II scenario, $N'_{O} \\left({\\text{NSGA-II}}\\right)\\approx 2.25\\cdot 10^{6} $, respectively. In comparison to NSGA-II, the complexity of QTAM is significantly lower. Note that while the performances of QTAM and the optimized NSGA-II are closer, QTAM requires no optimization of the complexity of the nondominated procedure. \n\n\n\\section{Conclusions}\n\\label{sec6}\nThe algorithms and methods presented here provide a framework for quantum circuit designs for near term gate-model quantum computers. Since our aim was to define a scheme for present and future quantum computers, the developed algorithms and methods were tailored for arbitrary-dimensional quantum systems and arbitrary quantum hardware restrictions. 
We demonstrated the results through gate-model quantum computer architectures; however, due to the flexibility of the scheme, arbitrary implementations and input constraints can be integrated into the quantum circuit minimization. The objective function that is the subject of the maximization in the method can also be selected arbitrarily. This allows a flexible implementation to solve any computational problem for experimental quantum computers with arbitrary hardware restrictions and development constraints. \n\n\n\\section*{Acknowledgements}\nThis work was partially supported by the European Research Council through the Advanced Fellow Grant, in part by the Royal Society's Wolfson Research Merit Award, in part by the Engineering and Physical Sciences Research Council under Grant EP\/L018659\/1, and in part by the Hungarian Scientific Research Fund - OTKA K-112125.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThis paper combines four lines of research: (a) studying variations of domination problems, here the Roman domination~\\cite{Cocetal2004,Dre2000a,HHS98}; (b) input-sensitive enumeration of minimal solutions, a topic that has drawn attention in particular from people also interested in domination problems \\cite{AbuHeg2016,CouHHK2013,CouLetLie2015,GolHKKV2016,GolHegKra2016}; (c) related to (and motivated by) enumeration, extension problems have been introduced and studied in particular in the context of domination problems\\footnote{Historically, a logical extension problem~\\cite{BorGurHam98} should be mentioned, as it has led to \\cite[Th\\'eor\\`eme 2.16]{Mar2013a}, dealing with an extension variant of 3-\\textsc{Hitting Set}; also see \\cite[Proposition 3.39]{Mar2013a} concerning implications for \\textsc{Extension Dominating Set}.} in \\cite{Bazetal2018,BonDHR2019,CasFKMS2019a,CasFGMS2021,KanLMNU2015,KanLMNU1516,Mar2013a}: 
is a given set a subset of any minimal dominating set?; \n(d) the \\textsc{Hitting Set Transversal Problem} asks whether all minimal hitting sets of a hypergraph can be enumerated with polynomial delay (or even in output-polynomial time): this question has been open for four decades and is equivalent to several enumeration problems in logic and database theory, and also to enumerating minimal dominating sets in graphs, see \\cite{CreKPSV2019,EitGot95,GaiVer2017,KanLMN2014}. By way of contrast, we show that enumerating all minimal Roman domination functions is possible with polynomial delay, a result which is quite surprising in view of the general similarities between the complexities of domination and Roman domination problems.\n\n\\begin{figure}[bth]\n\\begin{center}\n\\includegraphics[width=.65\\textwidth]{Roman-mymap.pdf}\n\\end{center}\n\\caption{\\label{fig-Roman-map}The Roman Empire in the times of Constantine}\n\\end{figure}\n\n\\textsc{Roman Domination}\\ comes with a nice (hi)story: namely, it should reflect the idea of how to secure the Roman Empire by positioning the armies (legions) on the various parts of the Empire in a way that either (1) a specific region $r$ is also the location of at least one army or (2) one region $r'$ neighboring $r$ has two armies, so that $r'$ can afford sending off one army to the region $r$ (in case of an attack) without diminishing its self-defense capabilities. More specifically, Emperor Constantine had a look at a map of his empire (as discussed in~\\cite{Ste99}, also see Fig.~\\ref{fig-Roman-map}).\\footnote{The historical background is also nicely described in the online Johns Hopkins Magazine; visit \\url{http:\/\/www.jhu.edu\/~jhumag\/0497web\/locate3.html} to pre-view~\\cite{ReVRos2000}.} Related is the island-hopping strategy pursued by General MacArthur in World War II in the Pacific theater to gradually increase the US-secured areas. 
\n\n\\textsc{Roman Domination}\\ has received a lot of attention from the algorithmic community in the past 15 years~\\cite{Ben2004,ChaCCKLP2013,Dre2000a,Fer08,Lie2007,Lieetal2008,LiuCha2013,Pagetal2002,PenTsa2007,ShaWanHu2010}.\nRelevant to our paper is the development of exact algorithms for \\textsc{Roman Domination}: combining ideas from \\cite{Lie2007,Roo2011}, an $\\mathcal{O}(1.5014^n)$ exponential-time and -space algorithm (making use of known \\textsc{Set Cover} algorithms via a transformation to \\textsc{Partial Dominating Set}) was presented in~\\cite{ShiKoh2014}. \n In \\cite{Chaetal2009,CheHHHM2016,Favetal2009,HedRSW2013,KraPavTap2012,LiuCha2012,LiuCha2012a,MobShe2008,XinCheChe2006,XueYuaBao2009,YerRod2013a}, more combinatorial studies can be found. This culminated in a chapter on Roman domination, stretching over nearly 50 pages in the monograph~\\cite{HayHedHen2020}. There is also an interesting link to the notion of a \\emph{differential} of a graph, introduced in~\\cite{Masetal2006}, see \\cite{BerFerSig2014}, also adding further algorithmic thoughts, as expressed in \\cite{AbuBCF2016,BerFer2014,BerFer2015}. For instance, in~\\cite{BerFer2014} an exponential-time algorithm was published, based on a direct Measure-and-Conquer approach.\n \nOne of the ideas leading to the development of the area of \\emph{extension problems} (as described in~\\cite{CasFGMS2021}) was to cut branches of search trees as early as possible, in the following sense:\nto each node of the search tree, a so-called pre-solution~$U$ can be associated, and it is asked if it is possible\nto extend $U$ to a meaningful solution~$S$. In the case of \\textsc{Dominating Set}, this means that $U$ is a set of vertices and a `meaningful solution' is an inclusion-wise minimal dominating set. 
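The extension question just described can be made concrete with a small brute-force sketch in Python (our own illustration with hypothetical helper names, not code from the cited works; since \textsc{Extension Dominating Set} is computationally hard, exhaustive search is only feasible for tiny graphs): given a pre-solution $U$, we test whether some inclusion-wise minimal dominating set contains $U$.

```python
from itertools import combinations

def closed_nh(adj, S):
    # N[S]: union of {v} with its neighbors, over all v in S
    return set().union(*({v} | adj[v] for v in S))

def is_minimal_ds(adj, D):
    V = set(adj)
    if closed_nh(adj, D) != V:
        return False                                        # not dominating
    return all(closed_nh(adj, D - {v}) != V for v in D)     # no vertex is redundant

def extends_to_minimal_ds(adj, U):
    # Brute force over all supersets of U -- exponential, for illustration only.
    rest = set(adj) - U
    return any(is_minimal_ds(adj, U | set(extra))
               for k in range(len(rest) + 1)
               for extra in combinations(rest, k))

# Star K_{1,3}: center 0 with leaves 1, 2, 3.
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(extends_to_minimal_ds(adj, {0}))      # True: {0} itself is a minimal ds
print(extends_to_minimal_ds(adj, {0, 1}))   # False: 0 makes leaf 1 redundant in any superset
```

The second call illustrates why the problem is subtle: although $\{0,1\}$ is contained in dominating sets, none of them is minimal, so the pre-solution cannot be extended.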
Notice that such a strategy would work not only for computing smallest dominating sets, but also for computing largest minimal dominating sets, for counting minimal solutions, or for enumerating them. \nAlas, as has been shown by many examples, extension problems turn out to be quite hard.\nEven for combinatorial problems whose standard decision version is solvable in polynomial time (for instance, \\textsc{Edge Cover}), the extension variant is \\textsf{NP}-hard. In such a case, the approach might still be viable, as parameterized algorithms possibly exist with respect to the parameter `pre-solution size'.\nThis would be interesting, as this parameter is small when a big gain can be expected in terms of an early abort of a search tree branch.\nIn particular for \\textsc{Extension Dominating Set}, this hope is not fulfilled. On the contrary, with the parameterization by $|U|$, \\textsc{Extension Dominating Set}\\ is one of the few problems known to be complete for the parameterized complexity class \\textsf{W}[3], as shown in~\\cite{BlaFLMS2019}.\n\nWith an appropriate definition of the notion of minimality, \\textsc{Roman Domination}\\ becomes one of the few examples where the hope of seeing extension variants being efficiently solvable turns out to be justified, as we will show in this paper. This is quite a surprising result, as in nearly every other way, \\textsc{Roman Domination}\\ behaves most similarly to \\textsc{Dominating Set}.\nTogether with its combinatorial foundations (a characterization of minimal Roman domination functions), this constitutes the first main result of this paper. \nThe main algorithmic exploit of this result is a non-trivial polynomial-space enumeration algorithm for minimal Roman domination functions that guarantees polynomial delay, which is the second main result of the paper. 
As mentioned above, the corresponding question for enumerating minimal dominating sets has been open for decades, and we are not aware of any other modification of the concept of domination that preserves the other difficulties of \\textsc{Dominating Set}, like its classical, parameterized, and approximation complexities, while differing in the complexity of extension and enumeration.\nOur enumeration algorithm is a branching algorithm that we analyzed with a simple Measure \\& Conquer approach, yielding a running time of $\\mathcal{O}(1.9332^n)$, which also gives an upper bound on the number of minimal Roman dominating functions of an $n$-vertex graph. This result is complemented by a simple example that proves a lower bound of $\\Omega(1.7441^n)$ for the number of minimal Roman dominating functions on graphs of order~$n$.\n\n\n\n\\section{Definitions}\n\nLet $\\mathbb{N}=\\{1,2,3,\\dots\\}$ be the set of positive integers. For $n\\in\\mathbb{N}$, let $[n]=\\{m\\in\\mathbb{N}\\mid m\\leq n\\}$.\nWe only consider undirected simple graphs. \nLet $G=\\left(V,E\\right)$ be a graph. For $U\\subseteq V$, $G[U]$ denotes the graph induced by~$U$. \nFor $v\\in V$, $N_G(v)\\coloneqq\\{u\\in V\\mid \\lbrace u,v\\rbrace\\in E\\}$ denotes the \\emph{open neighborhood} of~$v$, while $N_G[v]\\coloneqq N_G(v)\\cup\\{v\\}$ is the \\emph{closed neighborhood} of~$v$. We extend such set-valued functions $X:V\\to 2^V$ to $X:2^V\\to 2^V$ by setting $X(U)=\\bigcup_{u\\in U}X(u)$. A subset $D\\subseteq V$ is a \\emph{dominating set}, or ds for short, if $N_G[D]=V$. 
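These notions translate directly into code. The following Python sketch (our own helper names, assuming an adjacency-dict representation of the graph) implements $N_G(v)$, $N_G[v]$, the set-valued extension $N_G[U]$, and the dominating-set test, illustrated on the path $P_3$:

```python
def open_nh(adj, v):
    # N_G(v): the neighbors of v
    return set(adj[v])

def closed_nh(adj, v):
    # N_G[v] = N_G(v) ∪ {v}
    return {v} | set(adj[v])

def closed_nh_of_set(adj, U):
    # N_G[U] = union of N_G[u] over all u in U
    return set().union(*(closed_nh(adj, u) for u in U))

def is_ds(adj, D):
    # D is a dominating set iff N_G[D] = V
    return closed_nh_of_set(adj, D) == set(adj)

# Path P_3 on vertices 0 - 1 - 2: the middle vertex alone dominates.
adj = {0: {1}, 1: {0, 2}, 2: {1}}
print(is_ds(adj, {1}))  # True
print(is_ds(adj, {0}))  # False
```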
\nFor $D\\subseteq V$ and $v\\in D$, define the \\emph{private neighborhood} of $v\\in V$ with respect to~$D$ as $P_{G,D}\\left( v\\right)\\coloneqq N_G\\left[ v\\right] \\setminus N_G\\left[D\\setminus \\lbrace v\\rbrace\\right]$.\nA function $f\\colon V \\to \\lbrace 0,1,2 \\rbrace$ is called a \\emph{Roman dominating function}, or rdf for short, if for each $v\\in V$ with $f\\left(v\\right) = 0$, there exists a $u\\in N_G\\left( v \\right)$ with $f\\left(u\\right)=2$. \nTo simplify the notation, we define $V_i\\left(f\\right)\\coloneqq \\lbrace v\\in V\\mid f\\left( v\\right)=i\\rbrace$ for $i\\in\\lbrace0,1,2\\rbrace$. The \\emph{weight} $w_f$ of a function $f\\colon V \\to \\lbrace 0,1,2 \\rbrace$ equals $|V_1\\left(f\\right)|+2|V_2\\left(f\\right)|$. The classical \\textsc{Roman Domination}\\ problem asks, given $G$ and an integer $k$, if there exists an rdf for~$G$ of weight at most~$k$. Connecting to the original motivation, $G$ models a map of regions, and if the region vertex~$v$ belongs to~$V_i$, then we place $i$ armies on~$v$.\n\nFor the definition of the problem \\textsc{Extension Roman Domination}, we need to define the order $\\leq$ on $\\lbrace 0,1,2\\rbrace^{V}$ first:\nfor $f,g \\in \\lbrace 0,1,2\\rbrace^{V}$, let $f\\leq g$ if and only if $f\\left(v\\right)\\leq g\\left(v\\right)$ for all $v\\in V$. In other words, we extend the usual linear ordering $\\leq$ on $\\{0,1,2\\}$ to functions mapping to $\\{0,1,2\\}$ in a pointwise manner. \n We call a function $f\\in \\lbrace 0,1,2\\rbrace^{V}$ a \\emph{minimal Roman dominating function} if and only if $f$ is a rdf and there exists no rdf $g$, $g\\neq f$, with $g\\leq f$.\\footnote{According to \\cite{HayHedHen2020}, this notion of minimality for rdf was coined by Cockayne but then dismissed, as it does not give a proper notion of \\emph{upper Roman domination} number. 
However, in our context, this definition seems to be the most natural one, as it also perfectly fits the extension framework proposed in \\cite{CasFGMS2022}. We will propose in \\autoref{sec:alternative-notion} yet another notion of minimal rdf that also fits the mentioned extension framework.} The weights of minimal rdf can vary considerably. Consider for example a star $K_{1,n}$ with center~$c$. Then, $f_1(c)=2$, $f_1(v)=0$ otherwise; $f_2(v)=1$ for all vertices~$v$; $f_3(c)=0$, $f_3(u)=2$ for one $u\\neq c$, $f_3(v)=1$ otherwise, define three minimal rdf with weights $w_{f_1}=2$, and $w_{f_2}=w_{f_3}=n+1$. \n\n\\vspace{5pt}\n\n\\noindent\n\\centerline{\\fbox{\\begin{minipage}{.99\\textwidth}\n\\textbf{Problem name: }\\textsc{Extension Roman Domination}, or \\textsc{ExtRD} for short\\\\\n\\textbf{Given: } A graph $G=\\left( V,E\\right)$ and a function $f\\in \\lbrace 0,1,2 \\rbrace^V.$\\\\\n\\textbf{Question: } Is there a minimal rdf $\\widetilde{f} \\in \\lbrace 0,1,2\\rbrace^V$ with $f\\leq \\widetilde{f}$?\n\\end{minipage}\n}}\n\n\\vspace{5pt}\n\nAs our first main result, we are going to show that \\textsc{ExtRD} can be solved in polynomial time in \\autoref{sec:poly-time-ExtRD}.\nTo this end, we need some understanding of the combinatorial nature of this problem, which we provide in \\autoref{sec:properties-minimal-rdf}.\n\nThe second problem that we consider is that of enumeration, both from an output-sensitive and from an input-sensitive perspective.\n\n\\vspace{5pt}\n\n\\noindent\n\\centerline{\\fbox{\\begin{minipage}{.99\\textwidth}\n\\textbf{Problem name: }\\textsc{Roman Domination Enumeration}, or \\textsc{RDEnum} for short\\\\\n\\textbf{Given: } A graph $G=\\left( V,E\\right)$.\\\\\n\\textbf{Task: } Enumerate all minimal rdf ${f} \\in \\lbrace 0,1,2\\rbrace^V$ of~$G$!\n\\end{minipage}\n}}\n\nFrom an output-sensitive perspective, it is interesting to perform this enumeration without repetitions and with polynomial delay, which means that there is a 
polynomial $p$ such that between the consecutive outputs of any two minimal rdf of a graph of order~$n$ that are enumerated, no more than $p(n)$ time elapses, including the corner-cases at the beginning and at the end of the algorithm. From an input-sensitive perspective, we want to upper-bound the running time of the algorithm, measured against the order of the input graph. The obtained run-time bound should not be too different from known lower bounds, given by graph families where one can prove that a certain number of minimal rdf must exist. \nOur algorithm will be analyzed from both perspectives and achieves both goals.\nThis is explained in \\autoref{sec:enum-minimal-rdf-simple} and in \\autoref{sec:enum-minimal-rdf-refined}.\n\n\n\\section{Properties of Minimal Roman Dominating Functions}\n\\label{sec:properties-minimal-rdf}\n\n\\begin{theorem}\\label{t_1_2_neigborhood}\nLet $G=\\left(V,E\\right)$ be a graph and $f: \\: V \\to \\lbrace 0,1,2\\rbrace$ be a minimal rdf. Then $N_G\\left[V_2\\left(f\\right)\\right]\\cap V_1\\left(f\\right)=\\emptyset$ holds.\n\\end{theorem}\n\\begin{pf}\nAssume that there exists a $\\lbrace u,v\\rbrace\\in E$ with $f\\left(v\\right) = 2$ and $f\\left(u\\right)=1$.\nLet\n$$ \\widetilde{f}:V\\to \\lbrace 0,1,2\\rbrace,\\: w\\mapsto\\begin{cases}\nf\\left(w\\right), &w\\neq u\\\\\n0,& w=u\n\\end{cases}$$\nWe show that $\\widetilde{f}$ is a rdf, which contradicts the minimality of $f$, as $\\widetilde{f}\\leq f$ and $\\widetilde{f}\\left( u \\right) < f\\left( u\\right)$ are given by construction.\nConsider $w \\in V_0\\left(\\widetilde{f}\\right)$. If $w= u$, $w$ is dominated by $v$, as $\\lbrace u,v\\rbrace\\in E$. Consider $w\\neq u$. Since $f$ is a rdf and $V_0\\left(f\\right)\\cup\\lbrace u\\rbrace = V_0\\left(\\widetilde{f}\\right)$, there exists a $t\\in N_G\\left[ w\\right]\\cap V_2\\left(f\\right)$. By construction of $\\widetilde{f}$, $V_2\\left(f\\right)=V_2\\left(\\widetilde{f}\\right)$ holds. 
This implies $N_G\\left[ w\\right]\\cap V_2\\left(\\widetilde{f}\\right)\\neq \\emptyset$. Hence, $\\widetilde{f}$ is a rdf. \n\\end{pf}\n\n\n\n\n\\begin{theorem}\\label{t_private_neighborhood}\nLet $G=\\left(V,E\\right)$ be a graph and $f: \\: V \\to \\lbrace 0,1,2\\rbrace$ be a minimal rdf. Then for all $v\\in V_2\\left( f \\right)$, $P_{G\\left[V_0\\left(f\\right) \\cup V_2\\left(f\\right)\\right], V_2\\left(f\\right)}\\left(v\\right) \\nsubseteq \\lbrace v \\rbrace$ holds.\n\\end{theorem}\n\\begin{pf}\nDefine $G'\\coloneqq G\\left[V\\setminus V_1\\left( f \\right)\\right] = G\\left[V_0\\left(f\\right) \\cup V_2\\left(f\\right)\\right]$. In contrast to the claim, assume that there exists a $v\\in V_2\\left( f \\right)$ with $P_{G', V_2\\left(f\\right)}(v) \\subseteq \\lbrace v \\rbrace$. Define \n$$ \\widetilde{f}:V\\to \\lbrace 0,1,2\\rbrace,\\: w\\mapsto\\begin{cases}\nf\\left(w\\right), &w\\neq v\\\\\n1,& w=v\n\\end{cases} $$\nWe show that $\\widetilde{f}$ is a rdf, which contradicts the minimality of $f$, as $\\widetilde{f}\\leq f$ and $\\widetilde{f}\\left( v \\right) < f\\left( v\\right)$ are given by construction.\nLet $u\\in V_0\\left(\\widetilde{f}\\right)=V_0\\left({f}\\right)$. We must show that some neighbor of $u$ belongs to $V_2\\left(\\widetilde{f}\\right)=V_2\\left(f\\right) \\setminus \\lbrace v\\rbrace$.\nThen, $\\widetilde f$ is a rdf.\n\nFirst, assume that $u$ is a neighbor of~$v$. By the choice of~$v$, $u$ is not a private neighbor of~$v$. Hence, there exists a $w\\in N_{G}\\left[u\\right] \\cap\\left( V_2\\left(f\\right) \\setminus \\lbrace v\\rbrace\\right) = N_{G}\\left[u\\right]\\cap V_2\\left(\\widetilde{f}\\right)$. 
Secondly, \nif $u\\in V_0\\left(\\widetilde{f}\\right)$ is not a neighbor of $v$, then there exists a $w\\in V_2\\left(f\\right)\\setminus\\{v\\}$ that dominates~$u$, i.e., $w\\in N_{G}\\left[u\\right]\\cap \\left(V_2\\left(f\\right)\\setminus \\lbrace v\\rbrace\\right) = N_{G}\\left[u\\right]\\cap V_2\\left(\\widetilde{f}\\right)$.\n\\end{pf}\n\n\n\\noindent\nAs each $v\\in V_0\\left(f\\right)$ has to be dominated by a $w\\in V_2\\left(f\\right)$, the next claim follows.\n\n\\begin{corollary}\\label{c_min_dom}\nLet $G=\\left(V,E\\right)$ be a graph and $f\\in \\lbrace 0,1,2\\rbrace^V$ be a minimal rdf. Then, $V_2\\coloneqq V_2\\left(f\\right)$ is a minimal ds of $G\\left[ N_G[V_2]\\right]$, with \n$N_G[V_2]=V_0\\left(f\\right)\\cup V_2$.\n\\end{corollary}\n\n\\begin{remark}\nWe can generalize the last statement as follows: Let $G=\\left(V,E\\right)$ be a graph and $f: \\: V \\to \\lbrace 0,1,2\\rbrace$ be a minimal rdf. Let $I\\subseteq V_1(f)$ be an independent set in $G$. Then, $V_2\\left(f\\right)\\cup I$ is a minimal ds of $G\\left[ V_0\\left(f\\right)\\cup V_2\\left(f\\right)\\cup I\\right]$. If $I$ is a maximal independent set in $G[V_1(f)]$, then $V_2\\left(f\\right)\\cup I$ is a minimal ds of $G\\left[ V_0\\left(f\\right)\\cup V_2\\left(f\\right)\\cup V_1(f)\\right]$.\n\\end{remark}\n\n\\noindent\nThis allows us to deduce the following characterization result.\n\n\\begin{theorem}\\label{t_porperty_min_rdf}\nLet $G=\\left(V,E\\right)$ be a graph, $f: \\: V \\to \\lbrace 0,1,2\\rbrace$ and abbreviate\n$G'\\coloneqq G\\left[ V_0\\left(f\\right)\\cup V_2\\left(f\\right)\\right]$. 
Then, $f$ is a minimal rdf if and only if the following conditions hold:\n\\begin{enumerate}\n\\item$N_G\\left[V_2\\left(f\\right)\\right]\\cap V_1\\left(f\\right)=\\emptyset$,\\label{con_1_2}\n\\item $\\forall v\\in V_2\\left(f\\right) :\\: P_{G',V_2\\left(f\\right)}\\left( v \\right) \\nsubseteq \\lbrace v\\rbrace$, also called \\emph{privacy condition}, and \\label{con_private}\n\\item $V_2\\left(f\\right)$ is a minimal dominating set of $G'$.\\label{con_min_dom}\n\\end{enumerate}\n\\end{theorem}\n\\begin{pf}\nThe ``only if'' follows by \\autoref{t_1_2_neigborhood}, \\autoref{t_private_neighborhood} and \\autoref{c_min_dom}. \n\nLet $f$ be a function that fulfills the three conditions. Since $V_2\\left(f\\right)$ is a dominating set on $G'$, for each $u\\in V_0\\left( f\\right)$, there exists a $v\\in V_2\\left(f\\right)\\cap N_{G}\\left[u\\right]$. Therefore, $f$ is a rdf. \nLet $\\widetilde{f}:V \\to \\lbrace 0,1,2 \\rbrace$ be a minimal rdf with $\\widetilde{f}\\leq f$. Therefore, $\\widetilde{f}$ (also) satisfies the three conditions by \\autoref{t_1_2_neigborhood}, \\autoref{t_private_neighborhood} and \\autoref{c_min_dom}. \nAssume that there exists a $v\\in V$ with $\\widetilde{f}\\left( v\\right) < f\\left( v \\right)$. Hence, $V_2\\left(\\widetilde{f}\\right)\\subseteq V_2\\left(f\\right)\\setminus \\lbrace v\\rbrace$. \\\\\n\\textbf{Case 1:} $\\widetilde{f}\\left( v\\right)=0, f\\left( v \\right) =1$. Therefore, there exists a $u\\in N_G\\left(v\\right)$ with $f\\left(u\\right)\\geq \\widetilde{f}\\left(u\\right)=2$. This contradicts Condition~\\ref{con_1_2}.\\\\\n\\textbf{Case 2:} $\\widetilde{f}\\left( v\\right)\\in\\lbrace 0, 1\\rbrace, f\\left( v \\right) =2$. Let $u\\in N_G\\left(v\\right)$ with $f(u)=0$. This implies $\\widetilde{f}(u)=0$ and\n$$\\emptyset \\neq N_G\\left[u\\right] \\cap V_2\\left(\\widetilde{f}\\right)\\subseteq N_G\\left[u\\right] \\cap V_2\\left(f\\right)\\setminus \\lbrace v \\rbrace$$ holds. 
Therefore, $N_G\\left( v \\right) \\subseteq N_G\\left[ V_2\\left(f\\right) \\setminus \\lbrace v\\rbrace\\right]$. This contradicts Condition~\\ref{con_private}.\n\nThus, $\\widetilde{f}=f$ holds and $f$ is minimal. \n\\end{pf}\n\n\\noindent\nWe conclude this section with an upper bound on the size of $V_2(f)$.\n\n\\begin{lemma}\\label{lem:V2-bound}\nLet $G=\\left(V,E\\right)$ be a graph and $f: V \\to \\lbrace0,1,2\\rbrace$ be a minimal rdf. Then $2 \\: \\vert V_2\\left(f\\right)\\vert \\leq \\vert V \\vert $ holds.\n\\end{lemma}\n\n\\begin{pf}\nConsider a graph $G=\\left(V,E\\right)$ and a minimal rdf $f:V\\to\\lbrace 0,1,2\\rbrace$. For each $v\\in V_2\\left(f\\right)$, let $P_f\\left(v\\right)= P_{G\\left[V_0\\left(f\\right) \\cup V_2\\left(f\\right)\\right], V_2\\left(f\\right)}\\left(v\\right) \\setminus \\lbrace v \\rbrace\\subseteq V\\setminus V_2\\left(f\\right) $. By \\autoref{t_private_neighborhood}, these sets are not empty and, by definition, they do not intersect. Hence, we get:\n$$ \\vert V \\vert = \\vert V_2\\left( f\\right) \\vert +\\vert V\\setminus V_2\\left( f\\right) \\vert \\geq \\vert V_2\\left( f\\right) \\vert +\\left\\vert\\bigcup_{v\\in V_2\\left(f\\right)}P_f\\left(v\\right) \\right\\vert\\geq 2\\: \\vert V_2\\left( f\\right) \\vert\\,. $$ \nTherefore, the claim is true. \n\\end{pf}\n\n\\section{A Polynomial-time Algorithm for \\textsc{ExtRD}}\n\\label{sec:poly-time-ExtRD}\n\nWith \\autoref{t_porperty_min_rdf}, we can construct an algorithm that solves the problem \\textsc{Extension Roman domination} in polynomial time. \n\\begin{algorithm}\n\\caption{Solving instances of \\textsc{ExtRD}}\\label{alg}\n\\begin{algorithmic}[1]\n\\Procedure{ExtRD Solver}{$G,f$}\\newline\n \\textbf{Input:} A graph $G=\\left(V,E\\right)$ and a function $f\\colon V\\to \\lbrace0,1,2\\rbrace$.\\newline\n \\textbf{Output:} Is there a minimal Roman dominating function $\\widetilde{f}$ with $f\\leq \\widetilde{f}$?\n\\State $\\widetilde{f}\\coloneqq f$. 
\\label{alg_init}\n\\State $M_2\\coloneqq V_2\\left(f\\right)$. \\{ Invariant: $M_2=V_2(\\widetilde{f})$ \\} \\label{alg_invariant}\n\\State $ M \\coloneqq M_2$. \\{ All $v\\in V_2(\\widetilde{f})$ are considered below; invariant: $M\\subseteq M_2$. \\}\n\\label{alg_before_while}\n\\While{$M\\neq \\emptyset$}\n\\State Choose $v\\in M$. \\{ Hence, $\\widetilde{f}(v)=2$. \\}\n\\For {$u\\in N\\left(v\\right)$}\n\\If {$\\widetilde{f}\\left(u\\right)=1$}\\label{alg_if_no}\n\\State $\\widetilde{f}\\left(u\\right)\\coloneqq 2$.\n\\State Add $u$ to $M$ and to $M_2$.\n\\EndIf\n\\EndFor\n\\State Delete $v$ from $M$. \n\\EndWhile\n\\For{$v\\in M_2$}\\label{alg_for_no}\n\\If{$N_G\\left(v\\right)\\subseteq N_G\\left[M_2\\setminus \\lbrace v\\rbrace\\right]$}\\label{alg_private_test}\n\\State \\textbf{Return No}. \\label{alg_no}\n\\EndIf\n\\EndFor\n\\For{$v\\in V\\setminus N_G\\left[ M_2\\right]$}\\label{alg_fil_for}\n\\State $\\widetilde{f}\\left(v\\right)\\coloneqq 1$.\n\\EndFor \n\\State\\textbf{Return Yes}.\n\\EndProcedure\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{theorem}\\label{theorem:correctness_alg}\nLet $G=\\left(V,E\\right)$ be a graph and $f\\colon V\\to\\lbrace0,1,2\\rbrace$. \nFor the inputs $G,f$, Algorithm~\\ref{alg} returns yes if and only if $\\left( G,f\\right)$ is a yes-instance of \\textsc{ExtRD}. In this case, the function~$\\widetilde f$ computed by Algorithm~\\ref{alg} is a minimal rdf.\n\\end{theorem}\n\\begin{pf}First observe that the invariants stated in Lines~\\ref{alg_invariant} and~\\ref{alg_before_while} of Algorithm~\\ref{alg} are true whenever entering or leaving the while-loop.\n\nLet the answer of the algorithm be \\emph{yes} and $\\widetilde{f}$ be the function computed by the algorithm. 
We will show that~$\\widetilde{f}$ satisfies the conditions formulated in \\autoref{t_porperty_min_rdf}.\n\nObserving the if-condition in Line~\\ref{alg_if_no}, clearly after the while-loop, no neighbor~$u$ of $v\\in V_2\\left(\\widetilde{f}\\right)$ fulfills $\\widetilde{f}\\left( u\\right)=1$. Hence, $\\widetilde{f}$ satisfies Condition~\\ref{con_1_2}.\nIf the function $\\widetilde{f}$ violated Condition~\\ref{con_private} of \\autoref{t_porperty_min_rdf}, then we would get to Line~\\ref{alg_no} and the algorithm would answer \\emph{no}. As we are considering a \\emph{yes}-answer of our algorithm, we can assume that this privacy condition holds after the for-loop of Line~\\ref{alg_for_no}. \nWe can also assume that $M_2=V_2\\left(\\widetilde{f}\\right)$ is a minimal ds of the graph $G\\left[N_G\\left[M_2\\right]\\right]$: otherwise, there would exist a $v\\in M_2$ such that for each $u\\in N_G\\left(v\\right)$, there is a $w\\in N_G\\left[u\\right]\\cap\\left( M_2\\setminus\\lbrace v\\rbrace\\right)$; in this case, the algorithm would return \\emph{no} in Line~\\ref{alg_no}.\nIn the for-loop of Line~\\ref{alg_fil_for}, we update for all $v\\in V\\setminus N_G\\left[ M_2\\right]$ the value $\\widetilde{f} \\left( v\\right)$ to~$1$. With the while-loop, this implies $N_G\\left[M_2\\right] = V_0\\left(\\widetilde{f}\\right)\\cup V_2\\left(\\widetilde{f}\\right)$. Therefore, $V_2\\left(\\widetilde{f}\\right)$ is a minimal ds of $G\\left[V_0\\left(\\widetilde{f}\\right)\\cup V_2\\left(\\widetilde{f}\\right)\\right]$. \nSince we do not update the values of $\\widetilde{f}$ to two in this last for-loop, Condition~\\ref{con_private} from \\autoref{t_porperty_min_rdf} holds. By the while-loop and the for-loop starting in Line~\\ref{alg_fil_for}, it is trivial to see that Condition~\\ref{con_1_2} also holds for the final $\\widetilde{f}$.\nWe can now use \\autoref{t_porperty_min_rdf} to see that $\\widetilde{f}$ is a minimal rdf. 
\n\nSince we never decrease $\\widetilde{f}$ in this algorithm, starting with $\\widetilde f=f$ in Line~\\ref{alg_init}, we get $f\\leq \\widetilde{f}$. Therefore, $\\left(G,f\\right)$ is a \\emph{yes}-instance of \\textsc{ExtRD}.\n\n\nNow we assume that $\\left(G,f\\right)$ is a \\emph{yes}-instance, but the algorithm returns \\emph{no}. Therefore, there exists a minimal rdf $\\overline{f}$ with $f\\leq \\overline{f}$. Since $N_G\\left[V_2\\left(\\overline{f}\\right)\\right]\\cap V_1\\left(\\overline{f}\\right)=\\emptyset$, $\\widetilde{f}\\leq \\overline{f}$ holds for the function~$\\widetilde{f}$ in Line~\\ref{alg_for_no}. This implies $M_2=V_2\\left( \\widetilde{f}\\right)\\subseteq V_2\\left(\\overline{f}\\right)$. \nThe algorithm returns \\emph{no} if and only if there exists a $v\\in M_2$ with \n$$ N_G\\left(v\\right)\\subseteq N_G\\left[ M_2\\setminus \\lbrace v \\rbrace\\right]\\subseteq N_G\\left[V_2\\left(\\overline{f}\\right)\\setminus \\lbrace v\\rbrace\\right].$$\nApplying again Theorem~\\ref{t_porperty_min_rdf}, we see that $\\overline{f}$ cannot be a minimal rdf, contradicting our assumption.\n\\end{pf}\n\n\\noindent\nIn \\autoref{propos:runtime}, we prove that our algorithm needs polynomial time only.\n\n\\begin{proposition}\\label{propos:runtime}\nAlgorithm~\\ref{alg} runs in time cubic in the order of the input graph.\n\\end{proposition}\n\\begin{pf} Let $G=(V,E)$ be the input graph. \nDefine $n=\\vert V\\vert$. Up to Line~\\ref{alg_before_while}, the algorithm can run in linear time. As each vertex can only be once in $M$ and we look at the neighbors of each element in $M$, the while-loop runs in time $\\mathcal{O}\\left(n^2\\right)$. In the for-loop starting in Line~\\ref{alg_for_no}, we build for all $v\\in M_2$ the set $N_G\\left[M_2\\setminus \\lbrace v\\rbrace\\right]$. This needs $\\mathcal{O}\\left( n^3\\right)$ time. The other steps of this loop run in time $\\mathcal{O}\\left(n^2\\right)$.\nThe last for-loop requires linear time. 
Hence, the algorithm runs in time $\\mathcal{O}\\left(n^3\\right)$. \n\\end{pf}\n\n\\section{Enumerating Minimal RDF for General Graphs}\n\\label{sec:enum-minimal-rdf-simple}\n \nFor general graphs, our combinatorial observations allow us to strengthen the (trivial) $\\mathcal{O}^*(3^n)$-algorithm for enumerating all minimal rdf for graphs of order~$n$ down to $\\mathcal{O}^*(2^n)$, as displayed in Algorithm~\\ref{alg:enum}.\nTo understand the correctness of this enumeration algorithm, the following lemma is crucial.\n\n\\begin{algorithm}\n\\caption{A simple enumeration algorithm for minimal rdf}\\label{alg:enum}\n\\begin{algorithmic}[1]\n\\Procedure{RD Enumeration}{$G$}\\newline\n \\textbf{Input:} A graph $G=\\left(V,E\\right)$.\\newline\n \\textbf{Output:} Enumeration of all minimal rdf $f:V\\to\\{0,1,2\\}$.\n\\For {all functions $f:V\\to\\{1,2\\}$}\n\\For {all $v\\in V$ with $f(v)=1$}\n\\If {$\\exists u\\in N_G(v): f(u)=2$}\n\\State $f(v)\\coloneqq 0$.\n\\EndIf\n\\EndFor\n\\State Build graph $G'$ induced by $f^{-1}(\\{0,2\\})=V_0(f)\\cup V_2(f)$.\n\\State $\\text{private-test}\\coloneqq 1$.\n\\For {all $v\\in V$ with $f(v)=2$}\n\\If {$P_{G',V_2(f)}(v)\\subseteq\\{v\\}$}\n\\State $\\text{private-test}\\coloneqq 0$.\n\\EndIf\n\\EndFor\n\\If{$\\text{private-test}=1$ and $f^{-1}(2)=V_2(f)$ is a minimal ds of $G'$}\n\\State Output the current function $f:V\\to\\{0,1,2\\}$.\n\\EndIf\n\\EndFor\n\\EndProcedure\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{lemma}\\label{lem:extend2}\nLet $G=(V,E)$ be a graph with $V_2\\subseteq V$ such that $P_{G,V_2}\\left( v \\right) \\nsubseteq \\lbrace v\\rbrace$ holds for each $v\\in V_2$. Then there exists exactly one minimal rdf $f\\in \\lbrace 0,1,2 \\rbrace ^ V$ with $V_2=V_2\\left(f\\right)$. 
Algorithm~\\ref{alg} can calculate $f$.\n\\end{lemma}\n\\begin{pf}\nDefine \\\\[-4.5ex]\n$$f:V\\to \\lbrace 0, 1, 2\\rbrace, v\\mapsto \\begin{cases}\n2, & v \\in V_2\\\\\n1, & v \\notin N\\left[ V_2\\right]\\\\\n0, & \\text{otherwise}\n\\end{cases}$$\nHence, $N_G\\left[ V_2 \\right]=V_2\\cup V_0\\left(f\\right)$. With the assumption $P_{G,V_2}\\left( v \\right) \\nsubseteq \\lbrace v\\rbrace$, $V_2$ is a minimal ds of $G[V_2\\cup V_0\\left(f\\right)]$. Furthermore, $N_G\\left[V_2\\right]\\cap V_1\\left(f\\right)=\\emptyset$. As $V_2=V_2\\left(f\\right)$, all conditions of \\autoref{t_porperty_min_rdf} hold and $f$ is a minimal rdf.\n\nLet $\\widetilde{f}\\in\\lbrace 0,1,2\\rbrace^V$ be a minimal rdf with $V_2= V_2\\left(\\widetilde{f}\\right)$. If there exists some $v\\in V_0\\left( f \\right) \\cap V_1\\left(\\widetilde{f}\\right)$, this contradicts Condition~\\ref{con_1_2}, as $v\\in N_G\\left[V_2\\right] = N_G\\left[V_2\\left(\\widetilde{f}\\right)\\right]$. Therefore, $ V_0\\left( f\\right)\\subseteq V_0\\left( \\widetilde{f}\\right) $ holds. By the assumption that $\\widetilde{f}$ is a rdf, for each $v\\in V_0\\left( \\widetilde{f} \\right)$ there exists a $u\\in V_2\\left(\\widetilde{f}\\right)\\cap N\\left[v\\right] = V_2\\cap N\\left[v\\right]$. This implies $v\\in N_G\\left[V_2\\right]\\setminus V_2= V_0\\left( f\\right)$. Therefore, $ V_0\\left( f\\right) = V_0\\left( \\widetilde{f}\\right)$ holds. This implies $f=\\widetilde{f}$.\n\n\\vspace{5pt}\nDefine: \n$$\\widehat{f}:V\\to \\lbrace 0, 1, 2\\rbrace, v\\mapsto \\begin{cases}\n2, & v\\in V_2\\\\\n0, & v\\notin V_2\n\\end{cases}.$$\nIt is trivial to see that $\\widehat{f}\\leq f$. By \\autoref{theorem:correctness_alg}, Algorithm~\\ref{alg} returns \\emph{yes} for the input $\\widehat{f}$. Let $\\overline{f}$ be the minimal rdf produced by Algorithm~\\ref{alg}, given~$\\widehat{f}$. We want to show that $V_2=V_2\\left( \\overline{f} \\right)$. 
We do this by looking at the steps of the algorithm.\nSince $V_1\\left( \\widehat{f} \\right) = \\emptyset$, the algorithm never gets into the If-clause in Line~\\ref{alg_if_no}. This is the only way to update a vertex to the value 2. Therefore, $V_2= V_2\\left(\\overline{f}\\right)$. \n\\end{pf}\n\n\\begin{proposition} Let $G=\\left(V,E\\right)$ be a graph. For minimal rdf $f,g \\in \\lbrace0,1,2\\rbrace ^ V$ with $V_2\\left(f\\right)=V_2\\left(g\\right)$, it holds that $f=g$.\n\\end{proposition}\n\n\\begin{pf}\nBy \\autoref{t_private_neighborhood}, $V_2\\left(f\\right)$ fulfills the conditions of \\autoref{lem:extend2}. Therefore, there exists a unique minimal rdf $h\\in\\lbrace0,1,2\\rbrace ^ V$ with $V_2\\left( h \\right)=V_2\\left( f \\right) = V_2\\left( g \\right)$. Thus, $f=g=h$ holds.\n\\end{pf}\n\nHence, there is a bijection between the minimal rdf of a graph $G=(V,E)$ and the subsets $V_2\\subseteq V$ that satisfy the condition of \\autoref{lem:extend2}.\n\n\\begin{proposition}\\label{prop:RomanEnum}\nAll minimal rdf of a graph of order~$n$ can be enumerated in time\n$\\mathcal{O}^*(2^n)$.\n\\end{proposition}\n\n\\begin{pf}\n Consider Algorithm~\\ref{alg:enum}. The running time claim is obvious. The correctness of the algorithm is clear due to \\autoref{t_porperty_min_rdf} and \\autoref{lem:extend2}. \n\\end{pf}\n\n\nThe presented algorithm clearly needs polynomial space only, but it is less clear if it has polynomial delay.\nBelow, we will present a branching algorithm that has both of these desirable properties; moreover, its running time is below $2^n$. How good such an enumeration bound is clearly also depends on examples that provide a lower bound on the number of objects that are enumerated. 
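The bijection just established immediately suggests a brute-force counting procedure: enumerate all subsets $V_2\subseteq V$ and keep those in which every vertex of $V_2$ has a private neighbor other than itself. The following Python sketch (our own helper names; exponential and for illustration only) does this for the cycle $C_5$:

```python
from itertools import combinations

def has_private_neighbors(adj, v2):
    # Check P_{G,V2}(v) ⊄ {v} for every v in V2,
    # where P_{G,V2}(v) = N[v] \ N[V2 \ {v}].
    for v in v2:
        dominated_by_others = set().union(*[{w} | adj[w] for w in v2 if w != v])
        private = ({v} | adj[v]) - dominated_by_others
        if not (private - {v}):
            return False
    return True

n = 5  # C_5: vertices 0..4 arranged in a cycle
adj = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
count = sum(1 for k in range(n + 1)
              for v2 in combinations(range(n), k)
              if has_private_neighbors(adj, set(v2)))
print(count)  # 16: one minimal rdf per admissible subset V_2
```

The empty set qualifies vacuously (it corresponds to $f\equiv 1$), all five singletons and all ten pairs qualify, and no larger subset does, giving $1+5+10=16$.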
The next lemma explains why the upper bounds for enumerating minimal rdf must be bigger than those for enumerating minimal dominating sets.\n\n\\begin{lemma}\nA disjoint collection of $c$ cycles on five vertices yields a graph of order $n=5c$ that has $(16)^c$ many minimal rdf. \n\\end{lemma}\n\n\\begin{pf}\nLet $C_5$ be a cycle of length 5 with $V\\left(C_5\\right)=\\lbrace v_1,\\ldots,v_5 \\rbrace$ and $E\\left( C_5 \\right) = \\lbrace\\lbrace v_i,v_{i+1}\\rbrace\\mid i\\in [4] \\rbrace \\cup \\lbrace \\lbrace v_1,v_5 \\rbrace \\rbrace$. For a $f\\in\\lbrace0,1,2\\rbrace^{V\\left( C_5\\right)}$ there are at least the following sixteen possibilities for $\\left( f\\left( v_1\\right),\\ldots,f\\left( v_5\\right)\\right)$:\n\\begin{itemize}\n \\item zero occurrences of 2: $(1,1,1,1,1)$;\n \\item one occurrence of 2: $(2,0,1,1,0)$ and four more cyclic shifts;\n \\item two adjacent occurrences of 2: $(2,2,0,1,0)$ and four more cyclic shifts;\n \\item two non-adjacent occurrences of 2: $(2,0,2,0,0)$ and four more cyclic shifts.\n\\end{itemize}\n\nTherefore, there are at least 16 minimal rdf on $C_5$. To prove that these are all the minimal rdf, we use Lemma~\\ref{lem:V2-bound}, which implies $\\vert V_2\\left(f\\right)\\vert\\leq \\frac{\\vert V\\left(C_5\\right)\\vert}{2} <3$. Hence, the number of minimal rdf on $C_5$ is at most $\\binom{5}{0}+\\binom{5}{1}+\\binom{5}{2}=16$. 
\n\\end{pf}\n\n\n\\begin{corollary}\nThere are graphs of order $n$ that have at least ${\\sqrt[5]{16}\\,}^n\\in\\Omega(1.7411^n)$ many minimal rdf.\n\\end{corollary}\n\nWe checked with the help of a computer program that there are no other connected graphs of order at most eight that yield (by taking disjoint unions) a bigger lower bound.\n\n\\section{A Refined Enumeration Algorithm}\n \\label{sec:enum-minimal-rdf-refined}\n\nIn this section, we are going to prove the following result, which can be considered the second main result of this paper.\n\\begin{theorem}\\label{thm:minimal-rdf-enumeration}\nThere is a polynomial-space algorithm that enumerates all minimal rdf of a given graph of order $n$ with polynomial delay and in time $\\mathcal{O}^*(1.9332^n)$.\n\\end{theorem}\n\nNotice that this is in stark contrast to what is known about the enumeration of minimal dominating sets, or, equivalently, of minimal hitting sets in hypergraphs. Here, it is a long-standing open problem whether \nminimal hitting sets in hypergraphs can be enumerated with polynomial delay.\n\nThe remainder of this section is dedicated to describing the proof of this theorem.\n\n\\subsection{A bird's eye view on the algorithm}\n\nAs, all along the search tree, we branch at inner nodes into the two cases of whether or not a certain vertex is assigned~$2$, it is clear that (with some care concerning the final processing in leaf nodes) no minimal rdf is output twice. Hence, there is no need for the branching algorithm to store intermediate results to test (in a final step) whether any solution was generated twice. Therefore, our algorithm needs only polynomial space, as detailed in \\autoref{prop:poly-space} and \\autoref{cor:poly-space}.\n\nBecause we have a polynomial-time procedure that can test whether a given pre-solution can be extended to a minimal rdf, we can build (a slightly modified version of) this test into an enumeration procedure, hence avoiding unnecessary branchings.
\nTherefore, whenever we start with our binary branching, we know that at least one of the search tree branches will return at least one new minimal rdf. Hence, we will not move to more than $N$ nodes in the search tree before outputting a new minimal rdf, where $N$ is upper-bounded by twice the order of the input graph. This is the basic explanation for the claimed polynomial delay, as detailed in \\autoref{prop:poly-delay}.\n\nLet $G=(V,E)$ be a graph.\nLet us call a(ny partial) function \n\n\\vspace{-15pt}\n\\begin{equation*}\n\\begin{split}\n f: V \\longrightarrow \\{0,1,2,\\overline{1}, \\overline{2}\\}\n\\end{split}\n\\end{equation*}\n a \\emph{generalized Roman domination function}, or grdf for short. \nExtending previously introduced notation, let $\\overline{V_1}(f) = \\{x\\in V\\mid f(x) = \\overline{1}\\}$, and $\\overline{V_2}(f) = \\{x\\in V\\mid f(x) = \\overline{2}\\}$. A vertex is said to be \\emph{active} if it has not been assigned a value (yet) under~$f$; these vertices are collected in the set $A(f)$. Hence, for any grdf $f$, we have the partition $V=A(f)\\cup V_0(f)\\cup V_1(f)\\cup V_2(f)\\cup \\overline{V_1}(f)\\cup \\overline{V_2}(f)$. 
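To make this bookkeeping concrete, a grdf can be stored as a partial assignment; the following Python sketch (representation and names are ours, purely for illustration) recovers the sets $A(f)$, $V_0(f)$, $V_1(f)$, $V_2(f)$, $\overline{V_1}(f)$, $\overline{V_2}(f)$ of the above partition:

```python
# A grdf is stored as a dictionary mapping a vertex to one of
# 0, 1, 2, "1bar", "2bar"; active (unassigned) vertices are absent.

def partition(f, vertices):
    """Split the vertex set into A, V0, V1, V2, V1bar, V2bar under the grdf f."""
    classes = {0: set(), 1: set(), 2: set(), "1bar": set(), "2bar": set()}
    active = set()
    for v in vertices:
        if v in f:
            classes[f[v]].add(v)
        else:
            active.add(v)
    return (active, classes[0], classes[1], classes[2],
            classes["1bar"], classes["2bar"])

# the nowhere defined grdf f_bot: every vertex is active
A, V0, V1, V2, V1bar, V2bar = partition({}, range(5))
print(sorted(A))  # -> [0, 1, 2, 3, 4]
```

The six returned sets are pairwise disjoint and cover the vertex set, mirroring the partition $V=A(f)\cup V_0(f)\cup V_1(f)\cup V_2(f)\cup \overline{V_1}(f)\cup \overline{V_2}(f)$.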
\n\nAfter performing a branching step, followed by an exhaustive application of the reduction rules, any grdf~$f$ considered in our algorithm always satisfies the following \\textbf{(grdf) invariants}:\n\\begin{enumerate}\n \\item $\\forall x\\in \\overline{V_1}(f)\\cup V_0(f)\\,\\exists y\\in N_G(x):y\\in V_2(f)$,\n \\item $\\forall x\\in V_2(f):N_G(x)\\subseteq \\overline{V_1}(f) \\cup V_0(f) \\cup V_2(f)$, \n \\item $\\forall x\\in V_1(f):N_G(x)\\subseteq \\overline{V_2}(f)\\cup V_0(f)\\cup V_1(f)$,\n \\item if $\\overline{V_2}(f)\\neq\\emptyset$, then $A(f)\\cup \\overline{V_1}(f)\\neq \\emptyset$.\\footnote{This condition assumes that our graphs have non-empty vertex sets.}\n\\end{enumerate}\n\nFor the extension test, we will therefore consider the function $\\hat f:V\\to\\{0,1,2\\}$ that is derived from a grdf~$f$ as follows: \n$$\\hat f(v)=\\begin{cases}0, & \\text{if }v\\in A(f)\\cup V_0(f)\\cup \\overline{V_1}(f)\\cup \\overline{V_2}(f)\\\\\n1, & \\text{if }v\\in V_1(f)\\\\\n2, & \\text{if }v\\in V_2(f)\n\\end{cases}$$\n\n\nThe enumeration algorithm uses a combination of reduction and branching rules, starting with the nowhere defined function $f_\\bot$, so that $A(f_\\bot)=V$. A schematic of the algorithm is shown in Algorithm~\\ref{alg:refined-enum}. \nTo understand the algorithm, call an rdf $g$ \\emph{consistent} with a grdf $f$ if $g(v)=2$ implies $v\\in A(f)\\cup V_2(f)\\cup \\overline{V_1}(f)$ and $g(v)=1$ implies $v\\in A(f)\\cup V_1(f)\\cup \\overline{V_2}(f)$ and $g(v)=0$ implies $v\\in A(f)\\cup V_0(f)\\cup \\overline{V_1}(f)\\cup \\overline{V_2}(f)$.\nBelow, we start by presenting some reduction rules, which also serve as (automatically applied) actions at each branching step, whenever applicable. The branching itself always considers a most attractive vertex $v$, which either gets assigned~2 or not. \nThe running time analysis will be performed with a measure-and-conquer approach.
Our simple measure is defined by $\\mu(G,f)=|A(f)|+\\omega_1 |\\overline{V_1}(f)| + \\omega_2 |\\overline{V_2}(f)|\\leq |V| $\nfor some constants $\\omega_1$ and $\\omega_2$ that have to be specified later.\n\nThe measure never increases when applying a reduction rule.\n\n\n\\begin{algorithm}\n\\caption{A refined enumeration algorithm for minimal rdf}\\label{alg:refined-enum}\n\\begin{algorithmic}[1]\n\\Procedure{Refined RD Enumeration}{$G,f$}\\newline\n \\textbf{Input:} A graph $G=\\left(V,E\\right)$, a grdf $f:V\\to\\{0,1,2,\\overline{1},\\overline{2}\\}$.\\newline\n \\emph{Assumption:} There exists at least one minimal rdf consistent with $f$.\\newline\n \\textbf{Output:} Enumeration of all minimal rdf consistent with $f$.\n\\If {$f$ is everywhere defined and $f(V)\\subseteq \\{0,1,2\\}$}\n\\State Output $f$ and return.\n\\EndIf\n\\State \\{\\,We know that $A(f)\\cup \\overline{V_1}(f)\\neq\\emptyset$.\\,\\}\n\\State Pick a vertex $v\\in A(f)\\cup \\overline{V_1}(f)$ of highest priority for branching.\n\\State $f_2\\coloneqq f $; $f_2(v)\\coloneqq 2$.\n\\State Exhaustively apply reduction rules to $f_2$. \\{\\,Invariants are valid for $f_2$.\\,\\}\n\\If{$\\textsc{GenExtRD Solver}\\left(G,\\widehat{f_2},\\overline{V_2}(f_2)\\right)$}\n\\State $\\textsc{Refined RD Enumeration}\\left(G,{f_2}\\right)$.\n\\EndIf\n\\State $f_{\\overline{2}}\\coloneqq f $; \\textbf{if} $v\\in A(f)$ \\textbf{then} $f_{\\overline{2}}(v)\\coloneqq {\\overline{2}}$ \\textbf{else} $f_{\\overline{2}}(v)\\coloneqq 0$.\n\\State Exhaustively apply reduction rules to $f_{\\overline{2}}$. 
\\{\\,Invariants are valid for $f_{\\overline{2}}$.\\,\\}\n\\If{$\\textsc{GenExtRD Solver}\\left(G,\\widehat{f_{\\overline{2}}},\\overline{V_2}(f_{\\overline{2}})\\right)$}\n\\State $\\textsc{Refined RD Enumeration}\\left(G,{f_{\\overline{2}}}\\right)$.\n\\EndIf\n\\EndProcedure\n\\end{algorithmic}\n\\end{algorithm}\n\n\\noindent\nWe now present the details of the algorithm and its analysis.\n\n\\subsection{How to achieve polynomial delay and polynomial space}\nIn this section, we need a slight modification of the problem \\textsc{ExtRD} in order to cope with pre-solutions. In this version, we add to an instance, usually specified by $G=\\left( V, E\\right)$ and $f:V\\to \\lbrace 0, 1, 2\\rbrace$, a set $\\overline{V_2}\\subseteq V$ with $V_2\\left(f\\right)\\cap \\overline{V_2}=\\emptyset$. The question is whether there exists a minimal rdf $\\widetilde{f}$ with $f \\leq \\widetilde{f}$ and $V_2\\left(\\widetilde{f}\\right)\\cap \\overline{V_2}=\\emptyset$. We call this problem a \\emph{generalized} rdf extension problem, or \n\\textsc{GenExtRD} for short.\nIn order to solve this problem, we modify Algorithm \\ref{alg} to cope with \\textsc{GenExtRD} by adding an if-clause after Line~\\ref{alg_if_no} that asks whether $u\\in\\overline{V_2}$. If this is true, then the algorithm returns \\emph{no}, because it is prohibited that $\\widetilde{f}(u)$ is set to~2, while this would be necessary in any minimal rdf extending $f$, as there is a vertex $v$ in the neighborhood of $u$ such that $\\widetilde{f}(v)$ has been set to~1. We call this algorithm \\textsc{GenExtRD Solver}.\n\n\\begin{lemma}\\label{lem:GenExtRD}\nLet $G=\\left(V,E\\right)$ be a graph, $f : V \\to \\lbrace0,1,2\\rbrace$ be a function and $\\overline{V_2} \\subseteq V$ be a set with $V_2\\left({f}\\right)\\cap \\overline{V_2}=\\emptyset$.
\\textsc{GenExtRD Solver} gives the correct answer when given the \\textsc{GenExtRD} instance $(G,f,\\overline{V_2})$.\n\\end{lemma}\n\n\n\\begin{pf}\nIn Algorithm~\\ref{alg}, the only statement where we give a vertex the value $2$ is in the if-clause of Line~\\ref{alg_if_no}. The modified version would first check whether the vertex is in $\\overline{V_2}$. If this is true, there will be no minimal rdf solving this problem. Namely, if we give the vertex the value~$2$, this would contradict $V_2 \\left( \\widetilde{f} \\right) \\cap \\overline{V_2}=\\emptyset$. If the value stays $1$, this would contradict Condition~\\ref{con_1_2}. By \\autoref{theorem:correctness_alg}, $\\widetilde{f}$ will be a minimal rdf with $V_2\\left(\\widetilde{f}\\right) \\cap \\overline{V_2}=\\emptyset$ if the algorithm returns \\emph{yes}.\n\nAssume there exists a minimal rdf $\\overline{f}$ with $V_2\\left(\\overline{f}\\right) \\cap \\overline{V_2}=\\emptyset$ but the algorithm returns \\emph{no}. First, we assume that \\emph{no} is returned by the new if-clause. This implies that a vertex $u\\in V_1\\left( f \\right)$ is in the neighborhood of a vertex $v\\in V$ that has to have the value~2 in any minimal rdf that is bigger than $f$ (because of \\autoref{theorem:correctness_alg}). But this leads to a contradiction similar to the one above.\n\nTherefore, the answer \\emph{no} would have to be returned in Line \\ref{alg_no}. But that would contradict Condition~\\ref{con_private} or Condition~\\ref{con_min_dom}. Thus, the algorithm correctly returns \\emph{yes}.\n\\end{pf}\n\nLet $f$ be a generalized rdf at any moment of the branching algorithm. The next goal is to show that \\textsc{GenExtRD Solver} can tell us in polynomial time whether there exists a minimal rdf that could be enumerated by the branching algorithm from this point on. \n\n\\begin{proposition}\nLet $G=\\left(V,E\\right)$ be a graph and $f : V \\to \\lbrace0,1,2,\\overline{1},\\overline{2}\\rbrace$ be a partial function.
Then, \\textsc{GenExtRD Solver} correctly answers whether there exists some minimal rdf $g: V \\to \\lbrace0,1,2\\rbrace$ that is consistent with $f$ when \\textsc{GenExtRD Solver} is given the instance $(G,\\hat f,\\overline{V_2}(f))$.\n\\end{proposition}\n\nThe following proof makes use of the grdf invariants presented above, which are only formally proved to hold in the next subsection, in \\autoref{prop:invariants}.\n\n\\begin{pf} We have to show two assertions: (1) If \\textsc{GenExtRD Solver} answers \\emph{yes} on the instance $(G,\\hat f,\\overline{V_2}(f))$, then there exists a minimal rdf $g: V \\to \\lbrace0,1,2\\rbrace$ that is consistent with $f$. (2) If there exists a minimal rdf $g: V \\to \\lbrace0,1,2\\rbrace$ that is consistent with $f$, then \\textsc{GenExtRD Solver} answers \\emph{yes} on the instance $(G,\\hat f,\\overline{V_2}(f))$. \n\n\\smallskip\\noindent\n\\underline{ad (1)}: Assume that \\textsc{GenExtRD Solver} has found a minimal rdf $g$ such that $\\hat f\\leq g$ and $V_2(g)\\cap \\overline{V_2}(f)=\\emptyset$. Let $v\\notin A(f)$. First, assume that $g(v)=2$. Clearly, vertices in $V_2(f)=V_2(\\hat f)$ do not get changed, as they cannot be made bigger. Hence, assume that some $v\\notin V_2(f)$ with $g(v)=2$ exists. As \\textsc{GenExtRD Solver} will only explicitly set the value~2 for vertices originally set to~1 (by their $f$-assignment) that are in the neighborhood of vertices already set to value~2 and that do not belong to $\\overline{V_2}(f)$, we have to reason about a possible $v\\in V_1(f)=V_1(\\hat f)$. \nBy the third grdf invariant, the neighborhood of $v$ contains no vertex from $V_2(f)=V_2(\\hat f)$, so that the case of some $v\\notin V_2(f)$ with $g(v)=2$ can be excluded.\n\nSecondly, assume that $g(v)=1$. The case $v\\in V_1(f)$ is not critical, and $v\\in V_2(f)$ is not possible, as reasoned above. Notice that $g(v)=1$ was set in the last lines of the algorithm. In particular, $N_G(v)\\cap V_2(g)=\\emptyset$.
As $V_2(f)\\subseteq V_2(g)$, also $N_G(v)\\cap V_2(f)=\\emptyset$. By the first grdf invariant, \n$v\\notin \\overline{V_1}(f)\\cup V_0(f)$. Hence, only $v\\in \\overline{V_2}(f)$ remains as a possibility.\n\nThirdly, assume that $g(v)=0$. As $f(v)\\in\\{1,2\\}$ is clearly impossible, $v\\in V_0(f)\\cup \\overline{V_1}(f)\\cup \\overline{V_2}(f)$ must follow.\nHence, $g$ is consistent with $f$.\n\n\\smallskip\\noindent\n\\underline{ad (2)}: Assume that there exists a minimal rdf $g: V \\to \\lbrace0,1,2\\rbrace$ that is consistent with $f$. We have to prove that $\\hat f\\leq g$ and that $V_2(g)\\cap \\overline{V_2}(f)=\\emptyset$, because then \\textsc{GenExtRD Solver} will correctly answer \\emph{yes} by \\autoref{lem:GenExtRD}.\nIf $g(v)=2$, then consistency implies $f(v)\\neq \\overline{2}$, and trivially $\\hat f(v)\\leq g(v)$. \nIf $g(v)=1$, then $v\\in A(f)\\cup V_1(f)\\cup \\overline{V_2}(f)$, and hence $\\hat f(v)\\in\\{0,1\\}$, so that $\\hat f(v)\\leq g(v)$. \nIf $g(v)=0$, then $v\\in A(f)\\cup V_0(f)\\cup \\overline{V_1}(f)\\cup \\overline{V_2}(f)=V_0(\\hat f)$, so that again $\\hat f(v)\\leq g(v)$. \n\\end{pf}\n\n\nAn important consequence of the previous proposition is stated next.\nNotice that our algorithm behaves quite differently from what is known about algorithms that enumerate minimal dominating sets.\n\n\\begin{proposition}\\label{prop:poly-delay}\nProcedure \\textsc{Refined RD Enumeration}, on input $G=(V,E)$, outputs functions $f:V\\to\\{0,1,2\\}$ with polynomial delay.\n\\end{proposition}\n\n\\begin{pf} Although the reduction rules are only stated in the next subsection, it is not hard to see by quickly browsing through them that they can be implemented to run in polynomial time.
Moreover, \\textsc{GenExtRD Solver} runs in polynomial time.\nHence, all work done in an inner node of the search tree needs polynomial time only.\nBy the properties of \\textsc{GenExtRD Solver}, the search tree will never continue branching if no outputs are to be expected that are consistent with the current grdf (that is associated with that inner node). Hence, a run of the procedure \\textsc{Refined RD Enumeration} dives straight through setting more and more values of a grdf, until it is everywhere defined with values from $\\{0,1,2\\}$, and then it returns from the recursion and dives down the next promising branch. Clearly, the length of any search tree branch is bounded by $|V|$, so that at most $2|V|$ many inner nodes are visited between any two outputs. This also holds at the very beginning (i.e., only polynomial time will elapse until the first function is output) and at the very end (i.e., only polynomial time will be spent after outputting the last function). \nThis proves the claimed polynomial delay.\n\\end{pf}\n\n\\begin{proposition}\nProcedure \\textsc{Refined RD Enumeration} correctly enumerates all minimal rdf that are consistent with the input grdf, assuming that at least one consistent rdf exists.\n\\end{proposition}\n\n\\begin{pf}\nAs there exists a consistent rdf, outputting the input function is correct if the input function is already an rdf, which is checked, as we test whether the given grdf is everywhere defined and has only images in $\\{0,1,2\\}$. Also, before \\textsc{Refined RD Enumeration} is called recursively, we explicitly check whether at least one consistent rdf exists.\n\nIf the input grdf~$f$ is not everywhere defined or if $\\overline{V_2}(f)\\cup \\overline{V_1}(f)\\neq\\emptyset$, then $A(f)\\cup\\overline{V_1}(f)\\neq\\emptyset$ by the fourth grdf invariant.
Hence, whenever \\textsc{Refined RD Enumeration} is called recursively, $A(f)\\cup\\overline{V_1}(f)\\neq\\emptyset$ holds, as these calls happen immediately after applying all reduction rules exhaustively.\n\nHence, by induction and based on the previous propositions, \\textsc{Refined RD Enumeration} correctly enumerates all minimal rdf that are consistent with the input grdf.\n\\end{pf}\n\n\\begin{corollary}\\label{cor:correct-enumeration}\nProcedure \\textsc{Refined RD Enumeration} correctly enumerates all minimal rdf of a given graph $G=(V,E)$ when provided with the nowhere defined grdf $f_\\bot$.\n\\end{corollary}\n\n\\begin{pf}\nDue to the previous proposition, it is sufficient to notice that all minimal rdf are consistent with $f_\\bot$ and that the function that is constant~1 is a minimal rdf consistent with $f_\\bot$.\n\\end{pf}\n\n\\begin{proposition}\\label{prop:poly-space}\nProcedure \\textsc{Refined RD Enumeration} never enumerates any minimal rdf consistent with the given grdf on the input graph $G=(V,E)$ twice.\n\\end{proposition}\n\n\\begin{pf}\nNotice that the enumeration algorithm always branches by deciding, for a vertex~$v$ from $A(f)\\cup \\overline{V_1}(f)$, where $f$ is the current grdf, whether $f(v)$ is updated to~$2$ or not; in the latter case, either $v\\in A(f)$ is set to $\\overline{2}$, or $v\\in \\overline{V_1}(f)$ is set to~$0$. Then, reduction rules may apply, but they never change the decision whether, in a certain branch of the search tree, $f(v)=2$ holds.\nMoreover, they never set any vertex to~$2$.
\nAs any minimal rdf that is ever output in a certain branch will be consistent with the grdf~$f$ associated with an inner node of the search tree, Procedure \\textsc{Refined RD Enumeration} never enumerates any minimal rdf twice.\n\\end{pf}\n\nAn important consequence of the last claim is that there is no need to store all output \nfunctions in order to check, in a final step, that each of them is enumerated only once.\n\n\\begin{corollary}\\label{cor:poly-space}\nAlgorithm \\textsc{Refined RD Enumeration} lists all minimal rdf consistent with the given grdf on the input graph $G=(V,E)$ without repetitions and in polynomial space.\n\\end{corollary}\n\n\\subsection{Details on reductions and branchings}\n\nFor the presentation of the following rules, we assume that $G=(V,E)$ and a grdf $f$ are given. We also assume that the rules are executed exhaustively in the given order.\n\n\\vspace{5pt}\n\\noindent\n{\\bf Reduction Rule LPN (Last Potential Private Neighbor).} If $v\\in V_2(f)$ satisfies $|N_G(v)\\cap (\\overline{V_2}(f)\\cup A(f))|=1$, then set $f(x) = 0$ for $\\{x\\}=N_G(v)\\cap (\\overline{V_2}(f)\\cup A(f))$.\n\n\n\\vspace{5pt}\n\\noindent\n{\\bf Reduction Rule $V_0$.} Let $v \\in V_0(f)$. Assume there exists a unique $u\\in V_2(f)\\cap N_G(v)$. \nMoreover, assume that for all $x\\in N_G(u)\\cap (V_0(f)\\cup \\overline{V_1}(f)\\cup \\overline{V_2}(f))$, $|N_G(x)\\cap V_2(f)|\\geq 2$ if $x\\neq v$. Then, for any $w \\in N_G(v)\\cap A(f)$, set $f(w) = \\overline{2}$ and for any $w \\in N_G(v)\\cap \\overline{V_1}(f)$, set $f(w) = 0$.\n\n\\vspace{5pt}\n\\noindent\n{\\bf Reduction Rule $V_1$.} Let $v \\in V_1(f)$. For any $w \\in N_G(v)\\cap A(f)$, set $f(w) = \\overline{2}$. For any $w \\in N_G(v)\\cap \\overline{V_1}(f)$, set $f(w) = 0$.\n\n\\vspace{5pt}\n\\noindent\n{\\bf Reduction Rule $V_2$.} Let $v \\in V_2(f)$. For any $w \\in N_G(v)\\cap A(f)$, set $f(w) = \\overline{1}$.
For any $w \\in N_G(v)\\cap \\overline{V_2}(f)$, set $f(w) = 0$.\n\n\\vspace{5pt}\n\\noindent\n{\\bf Reduction Rule NPD (No Potential Domination).} If $v\\in \\overline{V_2}(f)$ satisfies $N_G(v)\\subseteq \\overline{V_2}(f)\\cup V_0(f)\\cup V_1(f)$, then set $f(v) = 1$ (this also applies to isolated vertices in $\\overline{V_2}(f)$).\n\n\\vspace{5pt}\n\\noindent\n{\\bf Reduction Rule NPN (No Private Neighbor).} If $v\\in A(f)$ satisfies $N_G(v)\\subseteq V_0(f) \\cup\\overline{V_1}(f)$, then set $f(v) = \\overline{2}$ (this also applies to isolated vertices in $A(f)$).\n\n\\vspace{5pt}\n\\noindent\n{\\bf Reduction Rule Isolate.} If $A(f)=\\emptyset$ and if $v\\in \\overline{V_1}(f)$ satisfies $N_G(v)\\cap\\overline{V_2}(f)= \\emptyset$, then set $f(v) = 0$.\n\n\\vspace{5pt}\n\\noindent\n{\\bf Reduction Rule Edges.} If $u,v\\in \\overline{V_2}(f)\\cup V_0(f)\\cup V_1(f)$ and $e=uv\\in E$, then remove the edge~$e$ from $G$.\n\n\\vspace{5pt}\n\\noindent\nIn the following, we first take care of the claimed grdf invariants. \n\n\\begin{proposition}\\label{prop:invariants}\nAfter exhaustively executing the proposed reduction rules, as indicated in Algorithm~\\ref{alg:refined-enum}, the claimed grdf invariants are maintained.\n\\end{proposition}\n\n\\begin{pf}We argue for the correctness of the grdf invariants one by one, by induction. Notice that (trivially) all invariants hold if we start the algorithm with the nowhere defined grdf.\n\\begin{enumerate}\n \\item $\\forall x\\in \\overline{V_1}(f)\\cup V_0(f)\\,\\exists y\\in N_G(x):y\\in V_2(f)$.\n \n We need to show that $N_G(x) \\cap V_2(f) \\neq \\emptyset$ holds for each $x\\in V_0(f) \\cup \\overline{V_1}(f)$. For the inductive step, we only have to look at the reduction rules, since the branching rules only change the value to $0$ if the vertex was already in $\\overline{V_1}(f)$.
For each reduction rule where we set a value of a vertex to~$0$ or to~$\\overline{1}$, there exists a vertex in the neighborhood with value~$2$, which is seen as follows.\n\n\\begin{tabular}{rl}\n LPN:& Only a vertex $x\\in N_G(V_2(f))$ is ever set to $f(x)=0$ by this rule.\\\\\n $V_0$\\,\\&\\,$V_1$:& We only set $w$ to $0$ if it has been in $\\overline{V_1}(f)$. By the induction hypothesis, \\\\\n &$w$ has a neighbor in $V_2(f)$.\\\\\n $V_2$:& Only vertices $w\\in N_G(V_2(f))$ are set to~$0$ or to~$\\overline{1}$.\\\\\n Isolate:& Only vertices from $\\overline{V_1}(f)$ are set to~0; apply the induction hypothesis. \n\\end{tabular}\n \n \\item $\\forall x\\in V_2(f):N_G(x)\\subseteq \\overline{V_1}(f)\\cup V_0(f) \\cup V_2(f)$.\n \n This property can only be invalidated if new vertices get the value~2, if vertices from $\\overline{V_1}(f)\\cup V_0(f) \\cup V_2(f)$ are changed to a value outside this set, or if edges are deleted (as vertices are never deleted). The only way in which a vertex~$v$ gets the value~2 is by branching. Immediately afterwards, the reduction rules are executed: LPN and $V_2$ will install the invariant for the neighborhood of~$v$. No reduction rule ever changes the value of a vertex from $V_0(f) \\cup V_2(f)$, while vertices from $\\overline{V_1}(f)$ might be set to~$0$ or~$2$. The Reduction Rule Edges deletes no edges incident to vertices from $V_2(f)$. \n \\item $\\forall x\\in V_1(f):N_G(x)\\subseteq \\overline{V_2}(f)\\cup V_0(f)\\cup V_1(f)$.\n \n The invariant is equivalent to the following three conditions: (a) $N(V_1(f))\\cap A(f)=\\emptyset$, (b) $N(V_1(f))\\cap \\overline{V_1}(f)=\\emptyset$, and (c) $N(V_1(f))\\cap V_2(f) =\\emptyset$. Conditions (a) and (b) are taken care of by Reduction Rule $V_1$.\n Condition (c) immediately follows from the already proven second invariant.\n \\item If $\\overline{V_2}(f)\\neq\\emptyset$, then $A(f)\\cup \\overline{V_1}(f)\\neq \\emptyset$.\n \n Consider some $x\\in \\overline{V_2}(f)$.
By the second invariant, $N_G(x)\\cap V_2(f)=\\emptyset$. By the Reduction Rule Edges, $N_G(x)\\cap \\left(\\overline{V_2}(f)\\cup V_0(f)\\cup V_1(f)\\right)=\\emptyset$. As the Reduction Rule NPD did not apply, the only possible neighbors of $x$ are in $A(f)\\cup \\overline{V_1}(f)$.\\qed\n\\end{enumerate}\n\\renewcommand{\\qed}{}\n\\end{pf}\n\nWe now have to show the \\emph{soundness} of the proposed reduction rules. In the context of enumerating minimal rdf, this means the following: if $f$ is a grdf of $G=(V,E)$ and $f'$ is obtained from $f$ by applying any of the reduction rules, then $g$ is a minimal rdf consistent with $f$ if and only if it is consistent with $f'$.\n\n\\begin{proposition}\\label{prop:rdf-reductionrules-soundness}\nAll proposed reduction rules are sound.\n\\end{proposition}\n\n\\begin{pf}\nFor the soundness of the reduction rules, we also need the invariants proven to be correct in \\autoref{prop:invariants}.\nWe now prove the soundness of each reduction rule, one at a time.\n\nIf possible, we apply Reduction Rule LPN first. Consider $v\\in V_2(f)$ with $\\{x\\}=N_G(v)\\cap (\\overline{V_2}(f)\\cup A(f))$. Before the branching step, due to the second invariant, neighbors of $V_2(f)$-vertices are either in $V_0(f)$, $V_2(f)$ or in $\\overline{V_1}(f)$. As no reduction rule adds a vertex to $V_2(f)$, $v$ must have been put into $V_2(f)$ by the last branching step.\nBy the first invariant, we know that all $y\\in N_G(v)\\cap\\left( \\overline{V_1}(f)\\cup V_0(f)\\right)$ are dominated by vertices different from~$v$. As $v\\in V_2(f)$, it still needs a private neighbor to dominate. As $N_G(v)\\cap (\\overline{V_2}(f)\\cup A(f))$ contains one element~$x$ only, setting $f(x)=0$ is enforced for any minimal rdf (see \\autoref{t_private_neighborhood}).\n\nNext, we prove the soundness of Reduction Rule $V_0$. We consider $v\\in V_0(f)$ and $u\\in V$ with $\\{u\\}=V_2(f)\\cap N_G(v)$.
We can use the rule, since $u$ needs a private neighbor, which can only be $v$, by the assumption that every other neighbor of~$u$ is dominated at least twice. To maintain the property that $v$ is a private neighbor of~$u$, each $A(f)$-neighbor of~$v$ is set to~$\\overline{2}$ and each $\\overline{V_1}(f)$-neighbor of $v$ is set to~$0$. This records the fact that any minimal rdf~$g$ compatible with~$f$ will satisfy $(g(x)=2\\implies x=u)$ for each $x\\in N_G(v)$.\n\nThe soundness of Reduction Rule $V_1$ and Reduction Rule $V_2$ mainly follows from \\autoref{t_1_2_neigborhood}. \n\nComing to the Reduction Rule NPD, notice that setting $f(v)=0$ would necessitate $f(u)=2$ for some neighbor $u$ of $v$, which is impossible.\n\nFor the Reduction Rule NPN, we use the fact that $N_G(v) \\cap V_2(f) \\neq \\emptyset$ holds for each $v\\in V_0(f) \\cup \\overline{V_1}(f)$, which is the first invariant.\\footnote{More precisely, we also have to check that the vertices possibly newly introduced into $V_0(f)$ or $\\overline{V_1}(f)$ by the branching or by the reduction rules up to this point maintain the invariant, but this is nothing other than re-checking the induction step of the correctness proof of this invariant, see the proof of \\autoref{prop:invariants}.} This implies that there is no element left for $v$ to dominate (therefore it has no private neighbor except itself). Thus, if $v$ had the value $2$, this would contradict \\autoref{t_private_neighborhood}.\n\nFor the soundness of Reduction Rule Isolate, we note that, since $A(f)$ is empty, $v\\in \\overline{V_1}(f)$ can only have neighbors in $V_1(f)\\cup V_2(f)\\cup \\overline{V_1}(f)$, as $\\overline{V_2}(f)$-neighbors are prohibited. As Reduction Rule $V_1$ was (if possible) executed before, $N_G(v)\\cap V_1(f)= \\emptyset$ holds. Therefore, $v\\in \\overline{V_1}(f)$ would not have a private neighbor if $f(v)=2$, cf.
the first invariant.\\footnote{Again, one has to partially follow the induction step of the proof of \\autoref{prop:invariants}.}\n\nFinally, the soundness of Reduction Rule Edges follows trivially from the fact that an element of $\\overline{V_2}(f) \\cup V_0(f) \\cup V_1(f)$ cannot dominate any vertex in $\\overline{V_2}(f) \\cup V_0(f) \\cup V_1(f)$. Hence, a minimal rdf $g$ is consistent with $f$ if and only if it is consistent with $f'$, obtained by applying the Reduction Rule Edges to~$f$.\n\\end{pf}\n\n\nIn order to fully understand Algorithm~\\ref{alg:refined-enum}, we need to describe the priorities for branching.\nWe list these priorities in the following, in decreasing order, for a vertex $v\\in A(f)\\cup \\overline{V_1}(f)$.\n\\begin{enumerate}\n \\item $v\\in A(f)$ and $|N_{G}(v)\\cap (A(f)\\cup \\overline{V_2}(f))|\\geq 2$;\n \\item any $v\\in A(f)$;\n \\item any $v\\in \\overline{V_1}(f)$, preferably if $|N_{G}(v)\\cap \\overline{V_2}(f)|\\neq 2$.\n\\end{enumerate}\n\nThese priorities also split the run of our algorithm into phases: as soon as the algorithm is once forced to pick a vertex of some lower priority, there will never again be a chance to pick a vertex of higher priority thereafter.\nIt is useful to collect some \\textbf{phase properties} that instances must satisfy after leaving Phase~$i$, determined by applying the $i^\\text{th}$ branching priority.\n\n\\begin{itemize}\n\\item Before entering any phase, there are no edges between vertices $u,v$ if $u,v\\in V_0(f)\\cup V_1(f)\\cup \\overline{V_2}(f)$ or if $u\\in V_2(f)$ and $v\\in \\overline{V_2}(f)\\cup A(f)$ or if $u\\in V_1(f)$ and $v\\in \\overline{V_1}(f)\\cup A(f)$, as we can assume that the reduction rules have been exhaustively applied.\n \\item After leaving the first phase, any active vertex with an active neighbor is either pendant or has only further neighbors from $\\overline{V_1}(f)\\cup V_0(f)$.\n \n \\item After leaving the second phase, $A(f)=\\emptyset$ and
$N_G(\\overline{V_2}(f))\\subseteq \\overline{V_1}(f)$. Moreover, any vertex $x\\in \\overline{V_2}(f)$ has neighbors in $\\overline{V_1}(f)$.\n \\item After leaving the third phase, $A(f)=\\overline{V_2}(f)=\\overline{V_1}(f)=\\emptyset$, so that $f$ is a Roman dominating function. \n\\end{itemize}\n\n\\begin{proposition}\nThe phase properties hold.\n\\end{proposition}\n\n\\begin{proof} We consider the items of the list separately.\n\\begin{itemize}\\item Reduction Rule Edges shows the first claim. Reduction Rules $V_2$ and $V_1$ show the other two claims.\n\\item \nBy the branching condition, we know that after leaving the first phase, $|N_{G}(v)\\cap (A(f)\\cup \\overline{V_2}(f))|< 2$ for any active vertex~$v$. Since $v$ has a neighbor in $A(f)$ (say $u$), this implies that there cannot be any other neighbor in $A(f)\\cup \\overline{V_2}(f)$. Moreover, by the Reduction Rule $V_1$, $N_G(v)\\cap V_1(f)=\\emptyset$, and by the Reduction Rule $V_2$, $N_G(v)\\cap V_2(f)=\\emptyset$. Hence, $N_G(v) \\setminus \\lbrace u\\rbrace \\subseteq \\overline{V_1}(f)\\cup V_0(f)$.\n\\item The second phase branches on each $v\\in A(f)$. Therefore, it ends when $A(f)=\\emptyset$. Let $v\\in \\overline{V_2}(f)$. By Reduction Rule Edges, we get $N_G(v) \\cap \\left( \\overline{V_2}(f) \\cup V_0(f) \\cup V_1(f) \\right)=\\emptyset$. Reduction Rule $V_2$ implies that $v$ does not have a neighbor in $V_2(f)$. Therefore, we get $N_G(v) \\subseteq \\overline{V_1}(f)$. If $N_G(v)$ were empty, Reduction Rule NPD would have been triggered. Therefore, $N_G(v)$ has at least one element.\n\\item The third phase runs on the vertices in $\\overline{V_1}(f)$. Thus, $\\overline{V_1}(f)=\\emptyset$ holds at the end of this phase. Since we never put a vertex into $A(f)$ again, $A(f)$ is empty.
To get $\\overline{V_2}(f) = \\emptyset$, we can use the same argument as for the previous property, since a vertex can enter $\\overline{V_2}(f)$ only from $A(f)$.\\qed \n\\end{itemize}\n\\end{proof}\n\n\\subsection{A Measure \\& Conquer Approach}\n\nWe now present the branching analysis, classified by the described branching priorities.\nWe summarize a list of all resulting branching vectors in \\autoref{tab:branching-vectors}.\n\n\\begin{table}[tb]\n \\centering\n \\begin{tabular}{|l|l|}\\hline\n Phase \\#& Branching vector\\\\\\hline \n 1.1 & $(1-\\omega_2,3-2\\omega_1)$\\\\\n 1.2 & $(1-\\omega_2,1+2\\omega_2)$\\\\\n 1.3 & $(1-\\omega_2,2+\\omega_2-\\omega_1)$\\\\\\hline\n 2.1 \\& 2.2.b & $(1-\\omega_2,2)$\\\\\n 2.2.a & $(1-\\omega_2,1+\\omega_2+\\omega_1)$\\\\\n 2.2.c & $(1+\\omega_2,1)$\\\\\\hline\n 3.1 & $(\\omega_1,\\omega_1+3\\omega_2)$\\\\\n 3.2.a & $(\\omega_1,2\\omega_1+\\omega_2)$\\\\\n 3.2.b \\& 3.3.a & $(\\omega_1+\\omega_2,\\omega_1+\\omega_2)$\\\\\n 3.3.b & $(2\\omega_1+2\\omega_2,2\\omega_1+2\\omega_2,2\\omega_1+2\\omega_2,2\\omega_1+2\\omega_2)$\\\\\\hline\n \\end{tabular}\n \\caption{The branching vectors of different branching scenarios of the enumeration algorithm for listing all minimal Roman dominating functions of a given graph}\n \\label{tab:branching-vectors}\n\\end{table}\n\n\\subsubsection{Branching in Phase~1.}\nWe are always branching on an active vertex $v$. In the first branch, we set $f(v) = 2$.\nIn the second branch, we set $f(v) = \\overline{2}$.\nIn the first branch, Reduction Rule $V_2$ additionally triggers at least twice. In order to determine a lower bound on the branching vector, we describe three worst-case scenarios; all other reductions of the measure can only be better.\n\\begin{enumerate}\n \\item $|N_G(v)\\cap A(f)|=2$, i.e., $v$ has two active neighbors $x$ and $y$.
The corresponding recurrence is: $T(\\mu) = T(\\mu-(1-\\omega_2))+T(\\mu-(1+2(1-\\omega_1)))$, as either $v$ moves from $A(f)$ to $V_2(f)$ and $x,y$ move from $A(f)$ to $\\overline{V_1}(f)$, or $v$ itself moves from $A(f)$ to $\\overline{V_2}(f)$. The branching vector is hence: $(1-\\omega_2,3-2\\omega_1)$, as noted in the first row of \\autoref{tab:branching-vectors}.\n \\item $|N_G(v)\\cap \\overline{V_2}(f)|=2$. The corresponding recurrence is: $T(\\mu) = T(\\mu-(1-\\omega_2))+T(\\mu-(1+2\\omega_2))$, see the second row of \\autoref{tab:branching-vectors}.\n \\item $|N_G(v)\\cap A(f)|=1$ and $|N_G(v)\\cap \\overline{V_2}(f)|=1$, leading to $T(\\mu) = T(\\mu-(1-\\omega_2))+T(\\mu-(1+(1-\\omega_1)+\\omega_2))$, see \\autoref{tab:branching-vectors}, third row.\n\\end{enumerate}\n\n\\subsubsection{Branching in Phase~2.}\nWe are again branching on an active vertex $v$. By Reduction Rule NPN, we can assume that $N_G(v)\\neq\\emptyset$. In the first branch, we set $f(v) = 2$.\nIn the second branch, we set $f(v) = \\overline{2}$.\n\n\\begin{enumerate}\n \\item\nIf $N_G(v)\\cap A(f)=\\{x\\}$, then $N_G(v)\\cap \\overline{V_2}(f)=\\emptyset$ in this phase. Therefore, in the first branch, $f(x)=0$ is enforced by Reduction Rule LPN. Notice that this might further trigger Reduction Rule $V_0$ if $N_G(x)\\cap (A(f)\\cup \\overline{V_1}(f))$ contains vertices other than~$v$. The corresponding worst-case recurrence is: $T(\\mu) = T(\\mu-(1-\\omega_2))+T(\\mu-(1+1))$, see \\autoref{tab:branching-vectors}, fourth row.\n \\item\nIf $N_G(v)\\cap \\overline{V_2}(f)=\\{x\\}$, then $N_G(v)\\cap A(f)=\\emptyset$ in this phase. Therefore, in the first branch, $f(x)=0$ is enforced by Reduction Rule LPN. We consider several sub-cases now.\n\\begin{enumerate}\n \\item $N_G(x)\\cap \\overline{V_1}(f)\\neq\\emptyset$. Reduction Rule $V_0$ will put all these vertices into $V_0(f)$. 
The corresponding worst-case recurrence is: $T(\\mu) = T(\\mu-(1-\\omega_2))+T(\\mu-(1+\\omega_2+\\omega_1))$, see \\autoref{tab:branching-vectors}, fifth row.\n \\item $|N_G(x)\\cap A(f)|\\geq 2$. Reduction Rule $V_0$ will put all these vertices into $\\overline{V_2}(f)$ (except for $v$). The corresponding worst-case recurrence is: $T(\\mu) = T(\\mu-(1-\\omega_2))+T(\\mu-(1+\\omega_2+(1-\\omega_2)))$, see \\autoref{tab:branching-vectors}, fourth row.\n \\item Recall that by Reduction Rule Edges, $N_G(x)\\cap (\\overline{V_2}(f)\\cup V_0(f)\\cup V_1(f))=\\emptyset$, so that (if the first two cases do not apply) now we have $N_G(x)\\setminus\\{v\\}\\subseteq V_2(f)$. By the properties listed above, also $N_G(x)\\cap V_2(f)=\\emptyset$ is clear, so that now $|N_G(x)|=1$, i.e., $x$ is a pendant vertex. In this situation, we do not gain anymore from the first branch, but when $f(v)=\\overline{2}$ is set, Reduction Rule NPD triggers and sets $f(x)=1$. The corresponding worst-case recurrence is: $T(\\mu) = T(\\mu-(1+\\omega_2))+T(\\mu-(1-\\omega_2+\\omega_2))$, see \\autoref{tab:branching-vectors}, sixth row.\n\\end{enumerate}\n\\end{enumerate}\n\n\n\\subsubsection{Branching in Phase~3.}\nAs $A(f)=\\emptyset$, we are now branching on a vertex $v\\in \\overline{V_1}(f)$. Due to Reduction Rule Isolate, we know that $N_G(v)\\cap\\overline{V_2}(f)\\neq \\emptyset$.\nIn the first branch, we consider setting $f(v)=2$, while in the second branch, we set $f(v)=0$.\nAgain, we discuss several scenarios in the following.\n\n\\begin{enumerate}\n \\item Assume that $|N_G(v)\\cap \\overline{V_2}(f)|\\geq 3$. If we set $f(v)=2$, then Reduction Rule $V_2$ triggers at least thrice. The corresponding worst-case recurrence is: $T(\\mu) = T(\\mu-\\omega_1)+T(\\mu-(\\omega_1+3\\omega_2))$, with a branching vector of $(\\omega_1,\\omega_1+3\\omega_2)$, see \\autoref{tab:branching-vectors}, seventh row. 
\n \\item Assume that $|N_G(v)\\cap \\overline{V_2}(f)|=1$, i.e., there is some (unique) $u\\in \\overline{V_2}(f)$ such that $N_G(v)\\cap \\overline{V_2}(f)=\\{u\\}$. We consider two sub-cases:\n \\begin{enumerate}\n \\item If $|N_G(u)\\cap \\overline{V_1}(f)|\\geq 2$ and we set $f(v)=2$, then Reduction Rule LPN first triggers $f(u)=0$, which in turn sets $f(w)=0$ for all $w\\in N_G(u)\\cap \\overline{V_1}(f)$, $w\\neq v$, by Reduction Rule $V_0$. The corresponding worst-case recurrence is: $T(\\mu) = T(\\mu-\\omega_1)+T(\\mu-(\\omega_2+2\\omega_1))$, see \\autoref{tab:branching-vectors}, eighth row. \n \\item If $|N_G(u)\\cap \\overline{V_1}(f)|=1$, then $u$ is a pendant vertex. Hence, in the first branch, we have (as above) $f(v)=2$ and $f(u)=0$, while in the second branch, we have $f(v)=0$ and $f(u)=1$ by Reduction Rule NPD. This decreases the measure by $\\omega_1+\\omega_2$ in both branches, see \\autoref{tab:branching-vectors}, ninth row. This scenario happens in particular if the graph $G'$ induced by $\\overline{V_1}(f)\\cup \\overline{V_2}(f)$ contains a connected component which is a $P_2$. Therefore, we refer to this (also) as a \\emph{$P_2$-branching} below.\n \\end{enumerate}\n \\item Assume that $|N_G(v)\\cap \\overline{V_2}(f)|=2$, i.e., there are some $u_1,u_2\\in \\overline{V_2}(f)$ such that $N_G(v)\\cap \\overline{V_2}(f)=\\{u_1,u_2\\}$. Notice that in the first branch, when $f(v)=2$, Reduction Rule $V_2$ triggers twice, already reducing the measure by $\\omega_1+2\\omega_2$.\n We consider further sub-cases:\n \\begin{enumerate}\n \\item If $|N_G(u_1)\\cap \\overline{V_1}(f)|=1$, then $u_1$ is a pendant vertex. As in the previous sub-case, this helps us reduce the measure in the second branch by $\\omega_1+\\omega_2$ due to Reduction Rule NPD, which obviously puts us in a better branching than \\autoref{tab:branching-vectors}, ninth row. 
Similarly, we can discuss the case $|N_G(u_2)\\cap \\overline{V_1}(f)|=1$.\n \n \\item If $|N_G(u_1)\\cap \\overline{V_1}(f)|=2$, then we now know that the graph $G'$ induced by $\\overline{V_1}(f)\\cup \\overline{V_2}(f)$ becomes bipartite after removing the edges between vertices from $\\overline{V_1}(f)$; in this bipartite graph, vertices from $\\overline{V_1}(f)$ all have degree two and vertices from $\\overline{V_2}(f)$ all have degree at least two. The worst case for the following branching is hence given by a $K_{2,2}$ as a connected component in $G'$: Testing now all possibilities of setting the $\\overline{V_2}(f)$-vertices to $0$ or to $1$ will determine all values of the $\\overline{V_1}(f)$-vertices by reduction rules. Hence, we have in particular for the $K_{2,2}$ a scenario with four branches, and in each branch, the measure is reduced by $2\\omega_2+2\\omega_1$ \\emph{($K_{2,2}$-branching)}.\n \n \\end{enumerate}\n\\end{enumerate}\n\n\n\\begin{proposition}\\label{prop:run-time}On input graphs of order~$n$, \nAlgorithm \\textsc{Refined RD Enumeration} runs in time $\\mathcal{O}^*(1.9332^n)$.\n\\end{proposition}\n\n\\begin{pf}\nWe follow the run-time analysis that led us to the branching vectors listed in \\autoref{tab:branching-vectors}. The claim follows by choosing as weights $\\omega_1= \\frac{2}{3}$ and $\\omega_2=0.38488$.\n\\end{pf}\n\nThe worst-case branchings (with the chosen weights $\\omega_1= \\frac{2}{3}$ and $\\omega_2=0.38488$) are 1.1, 3.2.b and 3.3.\nIf we want to further improve on our figures, we would have to work on a deeper analysis of these cases.\nFor the $P_2$-branching, one idea might be to combine it with the branchings from which it can originate. Notice that adjacent $\\overline{V_1}$-$\\overline{V_2}$-vertices can only be produced in the first branching phase. 
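The constant in Proposition \\ref{prop:run-time} can be double-checked numerically: for a branching vector $(t_1,\\dots,t_k)$, the branching number is the unique root $x>1$ of $\\sum_i x^{-t_i}=1$. The following sketch (illustration only, not part of the algorithm) evaluates the worst-case vectors from \\autoref{tab:branching-vectors} with the chosen weights.

```python
# Branching numbers of the worst-case branching vectors of the analysis,
# with the weights w1 = 2/3 and w2 = 0.38488 chosen in the proof.

def branching_number(vector, lo=1.0, hi=4.0, iters=200):
    """Unique root x > 1 of sum_i x**(-t_i) == 1, found by bisection."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if sum(mid ** (-t) for t in vector) > 1:
            lo = mid  # sum still too large: the root lies further right
        else:
            hi = mid
    return (lo + hi) / 2

w1, w2 = 2 / 3, 0.38488
worst_cases = {
    "1.1":   (1 - w2, 3 - 2 * w1),
    "3.2.b": (w1 + w2, w1 + w2),
    "3.3.b": (2 * w1 + 2 * w2,) * 4,
}
for name, vec in worst_cases.items():
    print(name, branching_number(vec))  # all values stay below 1.9332
```

All three values lie just below $1.9332$, in accordance with the claimed running time bound; note that the $K_{2,2}$-branching $(2\\omega_1+2\\omega_2)^4$ yields exactly the same branching number as the $P_2$-branching $(\\omega_1+\\omega_2)^2$.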
But we would then have to improve also on Phase 3.3, the worst case being a $K_{2,2}$-branching in Case 3.3 (b).\n\n\\smallskip\\noindent\nLet us finally summarize the cornerstones of our reasoning.\n\\begin{proof}[Proof of \\autoref{thm:minimal-rdf-enumeration}]\nSeveral important properties of Algorithm~\\ref{alg:refined-enum} have been claimed and proved that together show the claim of our second main theorem.\n\\begin{itemize}\n \\item The algorithm correctly enumerates all minimal rdf; see \\autoref{cor:correct-enumeration}.\n \\item The algorithm needs polynomial space only; see \\autoref{cor:poly-space}.\n \\item The algorithm achieves polynomial delay; see \\autoref{prop:poly-delay}.\n \\item The algorithm runs in time $\\mathcal{O}^*(1.9332^n)$ on input graphs of order~$n$; see \\autoref{prop:run-time}.\\qed \n\\end{itemize}\n\\end{proof}\n\n\\section{An Alternative Notion of Minimal RDF}\n\\label{sec:alternative-notion}\n\nSo far, we focused on an ordering of the functions $V\\to\\{0,1,2\\}$ that was derived from the linear ordering $0<1<2$. Due to the different functionalities, it may not be that clear whether 2 should be considered bigger than 1.\nIf we rather choose as a basic partial ordering $0<1,2$, with $1,2$ being incomparable, this yields another ordering for the functions $V\\to\\{0,1,2\\}$, again lifted pointwise. As this is reminiscent of partial orderings, let us call the resulting notion of minimality PO-minimal rdf.\nRecall that the notion of minimality for Roman dominating functions that we considered so far, and that we also view as the most natural interpretation of this notion, has been refuted in the literature: it leads to a trivial notion of \\textsc{Upper Roman Domination}, as the largest sum $\\sum_{v\\in V}f(v)$ over all minimal rdf $f:V\\to\\{0,1,2\\}$ is attained by the constant function $f=1$. This is no longer true for the (new) problem \\textsc{Upper PO-Roman Domination}. 
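In the lifted order just described, $g$ lies below $f$ exactly when $g(v)\\in\\{0,f(v)\\}$ for every vertex $v$, since only $0$ lies below the incomparable values $1$ and $2$. PO-minimality can therefore be tested by brute force on small graphs; the following sketch (illustration only, not part of the algorithms of this paper) counts PO-minimal rdf on stars, and the counts $2^n+1$ for $K_{1,n}$ reappear in the discussion at the end of this section.

```python
from itertools import product

def is_rdf(f, adj):
    # Roman dominating function: every vertex with value 0 has a neighbor of value 2
    return all(any(f[u] == 2 for u in adj[v]) for v in adj if f[v] == 0)

def po_below(f):
    # all g strictly below f in the PO order: g(v) in {0, f(v)}, g != f
    options = [(0,) if x == 0 else (0, x) for x in f]
    return (g for g in product(*options) if g != f)

def count_po_minimal_rdf(adj):
    functions = product((0, 1, 2), repeat=len(adj))
    return sum(1 for f in functions
               if is_rdf(f, adj) and not any(is_rdf(g, adj) for g in po_below(f)))

def star(n):
    # K_{1,n}: vertex 0 is the center, vertices 1..n are the rays
    return {0: list(range(1, n + 1)), **{i: [0] for i in range(1, n + 1)}}

print([count_po_minimal_rdf(star(n)) for n in (1, 2, 3)])  # -> [3, 5, 9], i.e., 2^n + 1
```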
\n\nAlso, this can be seen as a natural pointwise lifting of the inclusion ordering, keeping in mind that $f\\leq_{PO}g$ iff $V_1(f)\\subseteq V_1(g)$ and $V_2(f)\\subseteq V_2(g)$.\n\nMore interesting for the storyline of this paper are the following results:\n\n\\begin{theorem}\\label{t_porperty_min_rdf}\nLet $G=\\left(V,E\\right)$ be a graph, $f: \\: V \\to \\lbrace 0,1,2\\rbrace$ \nand abbreviate\n$G'\\coloneqq G\\left[ V_0\\left(f\\right)\\cup V_2\\left(f\\right)\\right]$. Then, $f$ is a PO-minimal rdf if and only if the following conditions hold:\n\\begin{enumerate}\n\\item$N_G\\left[V_2\\left(f\\right)\\right]\\cap V_1\\left(f\\right)=\\emptyset$,\\label{con_1_2_PO}\n\\item $V_2\\left(f\\right)$ is a minimal dominating set of $G'$.\\label{con_min_dom_PO}\n\\end{enumerate}\n\\end{theorem}\n\n\\begin{pf}\nFirst we look into the ``only if''-part. The first condition follows analogously from \\autoref{t_1_2_neigborhood}. For the other condition, we assume that there exists a graph $G=\\left(V,E\\right)$ and a PO-minimal-rdf $f: V \\to \\lbrace 0,1,2\\rbrace$ such that $V_2(f)$ is not a minimal dominating set in $G'$. Since $f$ is a rdf, $V_2(f)$ is a dominating set in~$G'$. Thus, $V_2(f)$ is not irredundant in $G'$. Hence, there exists a $v\\in V_2(f)$ such that $N[v] \\subseteq N_G[V_2(f)\\setminus \\lbrace v\\rbrace]$. Define \n$$ \\widetilde{f}:V\\to \\lbrace 0,1,2\\rbrace,\\: w\\mapsto\\begin{cases}\nf\\left(w\\right), &w\\neq v\\\\\n0,& w=v\n\\end{cases}. $$\nClearly, vertices $w\\in \\left( V_0(f) \\cup V_1(f)\\right)\\setminus N_G[v]$ are dominated by $\\widetilde{f}$. But $N_G[v]$ is also dominated, since $N_G[v] \\subseteq N_G[V_2(f)\\setminus \\lbrace v\\rbrace]$ holds. This would contradict the PO-minimality of $f$. \n\nLet $f$ be a function that fulfills the two conditions. Since $V_2\\left(f\\right)$ is a dominating set in $G'$, for each $u\\in V_0\\left( f\\right)$, there exists a $v\\in V_2\\left(f\\right)\\cap N_{G}\\left[u\\right]$. 
Therefore, $f$ is a rdf. \nLet $\\widetilde{f}:V \\to \\lbrace 0,1,2 \\rbrace$ be a PO-minimal rdf such that $\\widetilde{f}$ is smaller than or equal to $f$ with respect to the partial ordering. Therefore, $\\widetilde{f}$ (also) satisfies the two conditions. \nAssume that there exists a $v\\in V$ with $\\widetilde{f}\\left( v\\right) < f\\left( v \\right)$. Hence, $V_2\\left(\\widetilde{f}\\right)\\subseteq V_2\\left(f\\right)\\setminus \\lbrace v\\rbrace$. \n\n\\noindent\n\\textbf{Case 1:} $\\widetilde{f}\\left( v\\right)=0$ and $f\\left( v \\right) =1$. Therefore, there exists a vertex $u\\in N_G\\left(v\\right)$ with $f\\left(u\\right)\\geq \\widetilde{f}\\left(u\\right)=2$. This contradicts Condition~\\ref{con_1_2_PO}.\n\n\\noindent\n\\textbf{Case 2:} $\\widetilde{f}\\left( v\\right)=0$ and $f\\left( v \\right) =2$. Thus, for each $u\\in V_0(f) \\cap N_G[v]\\subseteq V_0(\\widetilde{f})\\cap N_G[v]$ there exists a $w\\in V_2(\\widetilde{f})\\cap N_G(u)\\subseteq V_2(f)\\cap N_G(u)$. This implies that $V_2(f)$ is not irredundant in $G'$, which contradicts the second condition.\n\nTherefore, $\\widetilde{f} = f$ holds and $f$ is PO-minimal.\n\\end{pf}\n\nBased on this characterization of PO-minimality, we can again derive a positive algorithmic result for the corresponding extension problem.\n\n\\begin{theorem}\\label{theorem:P_ExtPoRDF}\nThe extension problem \\textsc{ExtPO-RDF} can be solved in polynomial time.\n\\end{theorem}\n\n\\begin{pf}\nFor this problem, we have to modify Algorithm \\ref{alg} again. This time, we have to modify Line \\ref{alg_private_test} to \\textbf{if} $N_G[v]\\subseteq N[M_2\\setminus \\lbrace v\\rbrace]$ \\textbf{do}. 
The rest of the proof is analogous to the proof of \\autoref{theorem:correctness_alg}.\n\\end{pf}\n\nFurthermore, we can show that for PO-minimal rdf, the simple enumeration algorithm is already provably optimal.\n\n\\begin{theorem}\nThere is a polynomial-space algorithm that enumerates all PO-minimal rdf of a given graph of order $n$ in time $\\mathcal{O}^*(2^n)$ with polynomial delay. Moreover, there is a family of graphs $G_n$, with $G_n$ being of order $n$, such that $G_n$ has $2^n$ many PO-minimal rdf.\n\\end{theorem}\n\n\\begin{pf}\nThe algorithm itself works similarly to Algorithm~\\ref{alg:enum}, but we have to integrate the extension tests as in Algorithm~\\ref{alg:refined-enum}. Therefore, we need to combine our two modifications for Algorithm~\\ref{alg}. This new version would solve the problem \\textsc{GenExtPO-RDF}, where a graph $G=(V,E)$, a function $f:V\\to\\lbrace0,1,2\\rbrace$ and a set $\\overline{V_2}$ are given and we need to find a PO-minimal rdf $\\widetilde{f}$ with $f\\leq_{PO} \\widetilde{f}$ and $\\overline{V_2}\\cap V_2\\left( \\widetilde{f}\\right)=\\emptyset$ (to prove this, combine the proofs of \\autoref{lem:GenExtRD} and \\autoref{theorem:P_ExtPoRDF}). \nTo see optimality of the enumeration algorithm, notice that the null graph (edge-less graph) of order~$n$ has any mapping $f:V\\to\\{1,2\\}$ as a PO-minimal rdf. 
\n\\end{pf}\n\nIt follows that the (relatively simple) enumeration algorithm is optimal for PO-minimal rdf.\nIf one dislikes the fact that our graph family is disconnected, consider the star $K_{1,n}$ that has $2^{n}+1$ many different PO-minimal rdf: If $V(K_{1,n})=\\{0,1,\\dots,n\\}$, with $0$ being the center and $i\\in\\{1,\\dots,n\\}$ being the `ray vertices' of this star, then either put $f(0)=2$ and $f(i)=0$ for $i\\in\\{1,\\dots,n\\}$, or $f(j)=1$ for $j\\in \\{0,1,\\dots,n\\}$, or $f(0)=0$ and $f(i)\\in\\{1,2\\}$ is arbitrary for $i\\in\\{1,\\dots,n\\}$ (except for the case $f(j)=1$ for all $j\\in \\{1,\\dots,n\\}$, as then the center would not be dominated).\nThis example proves that there cannot be any general enumeration algorithm running in time $\\mathcal{O}((2-\\varepsilon)^n)$ for any $\\varepsilon>0$, even for connected graphs of order~$n$.\n\n\\section{Conclusions}\n\nWhile the combinatorial concept of Roman domination leads to a number of complexity results that are completely analogous to what is known about the combinatorial concept of domination, the two concepts lead to distinctly different results when it comes to enumeration and extension problems. These are the main messages and results of the present paper.\n\nWe are currently working on improved enumeration and also on counting of minimal rdf in special graph classes.\nOur first results are very promising; for instance, there are good chances to completely close the gap between lower and upper bounds for enumerating minimal rdf for some graph classes.\n\nAnother line of research is looking into problems that are similar to Roman domination, in order to better understand the specialties of Roman domination in contrast to the classical domination problem. 
What makes Roman domination behave different from classical domination when it comes to finding extensions or to enumeration?\n\nFinally, let us mention that our main branching algorithm also gives an input-sensitive enumeration algorithm for minimal Roman dominating functions in the sense of Chellali \\emph{et al.}~\\cite{CheHHHM2016}. However, we do not know of a polynomial-delay enumeration algorithm in that case. This is another interesting line of research.\nHere, the best lower bound we could find was a repetition of a $C_4$, leading to $\\sqrt[4]{8}\\geq 1.68179$ as the basis.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nWe study the entropy dissipation for the Boltzmann collision operator without cutoff for a wide range of power law potentials.\nThe main result of the present work gives a bound on the entropy dissipation from below by a weighted Lebesgue-Norm. \n\nThe Boltzmann equation is a nonlinear integro-differential equation which describes the dynamics of a diluted gas. \nIt is of the form\n\\begin{equation}\\label{eq:inhomogboltzmann}\n \\partial_t f(t,x,v) + v\\cdot\\nabla_x f(t,x,v) = Q(f,f)(t,x,v), \\quad f(0,x,v) = f_0(x,v),\n \\end{equation}\nwhere $t\\geq 0$ and $x,v\\in\\mathds{R}^d$ for $d\\geq 2$. \nThe solution $f$ describes the density of particles at time $t\\geq 0$ with position $x\\in\\mathds{R}^d$ having velocity $v\\in\\mathds{R}^d$. \nWhile the left-hand side of \\eqref{eq:inhomogboltzmann} describes the transport of particles, the right-hand side takes interactions between particles into account. 
\n\nIn the special case of spatial homogeneity, the Boltzmann equation simplifies to\n\\begin{equation}\\label{eq:homogboltzmann}\n \\partial_t f(t,v) = Q(f,f)(t,v), \\quad f(0,v) = f_0(v).\n \\end{equation}\n\nThe operator $Q$ on the right-hand side of the Boltzmann equation denotes the so-called Boltzmann collision operator, which acts on the function $f(t,x,\\cdot)$ for fixed values of $t,x$ and is given by\\footnote{Throughout the paper, we will drop the dependence on $t$ and $x$ of a function, whenever they are not directly involved.}\n\\[ Q(g,f)(v) = \\int_{\\mathds{R}^{d}}\\int_{\\mathds{S}^{d-1}} (g(v_{\\ast}')f(v') - g(v_{\\ast})f(v))B(|v-v_{\\ast}|,\\cos\\Theta)\\, \\d\\sigma \\d v_{\\ast}, \\]\nwhere for $v,v_{\\ast}\\in\\mathds{R}^d$, $\\sigma\\in\\mathds{S}^{d-1}$\n\\begin{center}\n\\begin{tabular}{ l l }\n$\\displaystyle v' = \\frac{v+v_{\\ast}}{2} + \\frac{|v_{\\ast}-v|}{2}\\sigma$ \\hspace*{2.5cm} & $\\displaystyle\\cos\\Theta= \\sigma\\cdot\\frac{v_{\\ast}-v}{|v_{\\ast}-v|}$, \\\\\n$\\displaystyle v_{\\ast}'= \\frac{v+v_{\\ast}}{2} - \\frac{|v_{\\ast}-v|}{2}\\sigma.$ & \\, \n\\end{tabular}\n\\end{center}\nThe type of interactions of the particles is determined by the so-called collision kernel $B(|v-v_{\\ast}|,\\cos\\Theta)$. A classical assumption on $B$ is the integrability of the collision kernel, referred to as Grad's cutoff assumption. 
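These formulas encode elastic collisions: $v'+v_{\\ast}'=v+v_{\\ast}$ and $|v'|^2+|v_{\\ast}'|^2=|v|^2+|v_{\\ast}|^2$ for every $\\sigma\\in\\mathds{S}^{d-1}$, by the parallelogram law. A short numerical sanity check (a sketch with random sample data, not taken from the paper):

```python
import random

def post_collisional(v, v_star, sigma):
    # sigma-representation: v' = (v+v*)/2 + |v*-v|/2 * sigma, v*' with a minus sign
    m = [(a + b) / 2 for a, b in zip(v, v_star)]
    r = sum((a - b) ** 2 for a, b in zip(v, v_star)) ** 0.5 / 2
    v_prime = [mi + r * si for mi, si in zip(m, sigma)]
    v_star_prime = [mi - r * si for mi, si in zip(m, sigma)]
    return v_prime, v_star_prime

random.seed(1)
d = 3
v = [random.gauss(0, 1) for _ in range(d)]
v_star = [random.gauss(0, 1) for _ in range(d)]
u = [random.gauss(0, 1) for _ in range(d)]
norm = sum(x * x for x in u) ** 0.5
sigma = [x / norm for x in u]  # random unit vector on S^{d-1}

vp, vsp = post_collisional(v, v_star, sigma)
energy = lambda w: sum(x * x for x in w)
# momentum conservation: v' + v*' = v + v*
assert all(abs(a + b - c - e) < 1e-12 for a, b, c, e in zip(vp, vsp, v, v_star))
# energy conservation: |v'|^2 + |v*'|^2 = |v|^2 + |v*|^2
assert abs(energy(vp) + energy(vsp) - energy(v) - energy(v_star)) < 1e-12
```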
In this paper, we study collision kernels that do not satisfy Grad's cutoff assumption.\nTo be more precise, we consider collision kernels of the form $B(|v-v_{\\ast}|,\\cos\\Theta) = \\Phi(|v-v_{\\ast}|)b(\\cos\\Theta)$.\nWe assume\n\\begin{equation}\\label{def:Phi}\n\\Phi(|v-v_{\\ast}|) = c_{\\Phi}|v-v_{\\ast}|^{\\gamma}\n\\end{equation} \nfor some $c_{\\Phi}>0$ and $b$ satisfies\n\\begin{equation}\\label{def:b}\n c_b^{-1} \\Theta^{-1-2s} \\leq \\sin(\\Theta)^{d-2}b(\\cos\\Theta) \\leq c_b \\Theta^{-1-2s} \\quad \\text{for all } \\Theta\\in\\left(0,\\frac{\\pi}{2}\\right]\n \\end{equation}\nfor some $c_b>0$, where $\\gamma>-d$ and $s\\in(0,1)$. The case $\\gamma \\geq 0$ is referred to as the hard potential case and $\\gamma < 0$ as soft potential case. In particular, the sub-case $\\gamma+2s<0$ is known as the very-soft potential case. \n\nMany questions about the regularity of solutions are open in the very soft potential range. For a recent review and open problems, see \\cite{ImbSilReg20}. In that case, the reaction term in the collision operator (we write it $Q_2$ in \\eqref{def:Q1Q2}) is more singular. It is difficult to control it with the diffusion part of the operator. In particular, there is no known method that leads to $L^{\\infty}$ estimates in the very soft potential case, even for space homogeneous solutions. A similar difficulty arises for the Landau equation in the very soft potential range, and in particular for Coulomb potentials. Our main objective in this paper is to derive an entropy dissipation estimate that applies in the very soft potential range, similar to the well known result by L. Desvillettes \\cite{DesvillettesColoumb} for the Landau equation with Coulomb potentials. 
Our main estimate is in a weighted $L^p$ space, with hopefully sharp asymptotics for large velocities.\n\nThe entropy dissipation for the Boltzmann collision operator is given by the following formula:\n\\begin{equation}\\label{eq:genentropydiss}\n\\begin{aligned}\nD(f) &=-\\left\\langle Q(f,f),\\ln f\\right\\rangle_{L^2(\\mathds{R}^d)}\\\\\n& = \\int_{\\mathds{R}^d}\\int_{\\mathds{R}^d}\\int_{\\mathds{S}^{d-1}} f(v_{\\ast})f(v) \\left[\\ln(f(v))-\\ln(f(v'))\\right] B(|v-v_{\\ast}|,\\cos\\Theta)\\d\\sigma \\d v_{\\ast} \\d v.\n\\end{aligned}\n\\end{equation}\n\nThe expression for $D(f)$ applies to $f$ as a function of $v$, for \\emph{frozen} values of $t$ and $x$. The following entropy dissipation formula applies to solutions of the Boltzmann equation.\n\\begin{equation} \\label{eqn:dissipationintegratedintime}\n\\partial_t \\iint_{\\mathds{R}^d \\times \\mathds{R}^d} f \\log f \\d v \\d x = -\\int_{\\mathds{R}^d} D(f) \\d x.\n\\end{equation}\nIn the case of space-homogeneous solutions, the formula is simpler (and more powerful) since it does not involve integration with respect to $x$.\n\nThe expression for $D(f)$ is nonnegative. Due to the difficulty of obtaining any other coercive quantities associated with the Boltzmann equation, it is interesting to study lower bounds for $D(f)$ that lead to a priori estimates in standard function spaces.\n\nWith the aim of studying estimates for $D(f)$, we consider a nonnegative function $f = f(v) : \\mathds{R}^d \\to [0,\\infty)$. There is no point in keeping track of the dependence of $f$ with respect to $t$ and $x$ since $D(f)$ applies to $f$ as a function of $v$ only. Our estimate will depend on the mass, energy and entropy of $f$. 
To be more precise, it depends on upper bounds for the energy and entropy, and upper and lower bounds for the mass of $f$ of the following form.\n\\begin{align}\n& 0 < m_0 \\leq \\int_{\\mathds{R}^d} f(v) \\d v \\leq M_0, \\label{ass:mass} \\\\\n& \\int_{\\mathds{R}^d} f(v) |v|^2 \\d v \\leq E_0, \\label{ass:energy} \\\\\n& \\int_{\\mathds{R}^d} f(v) \\log f(v) \\d v \\leq H_0. \\label{ass:entropy}\n\\end{align}\nFor the spatially homogeneous Boltzmann equation, due to the conservation of mass and energy, and the monotonicity of the entropy, it suffices that these inequalities hold initially for them to hold for positive time. For the space-inhomogeneous Boltzmann equation, the estimates in this paper would apply provided that the inequalities above hold for every value of $t$ and $x$.\n\nBefore we formulate the main result of the present paper, let us recall the weighted Lebesgue norm. For $p\\geq 1$ and $\\ell\\in\\mathds{R}$, we define\n\\[ \\|g\\|_{L^p_{\\ell}} = \\left(\\int_{\\mathds{R}^d} \\langle v \\rangle^{\\ell p}|g(v)|^p\\, \\d v \\right)^{1\/p}, \\]\nwhere $\\langle v \\rangle = (1+|v|^2)^{1\/2}$.\n The main result of the present paper is the following entropy dissipation estimate:\n\\begin{theorem}\\label{thm:entropydissipation}\nLet $-d<\\gamma \\leq 2$ and $s\\in(0,1)$. Let $f$ be a non-negative function satisfying \\eqref{ass:mass}, \\eqref{ass:energy} and \\eqref{ass:entropy}. 
\nAssuming $\\gamma \\leq 0$, there is a finite constant $c>0$, depending on $d$, $b$, $\\gamma$, $s$ and the macroscopic bounds $m_0,M_0,E_0$ and $H_0$, and $C>0$ depending on $d$, $b$, $\\gamma$ and $s$ only, such that\n\\begin{equation}\\label{eq:mainresult}\n D(f) \\geq c\\|f\\|_{L^p_{-q}} - C M_0^2,\n\\end{equation} \nwhere $1\/p = 1-2s\/d$ and $q=2s\/d - \\gamma-2s$.\n\nWhen $\\gamma > 0$, a similar estimate follows but depending on a higher moment instead of $M_0$.\n\\begin{equation}\\label{eq:mainresult-hardpotential}\n D(f) \\geq c\\|f\\|_{L^p_{-q}} - C \\left( \\int_{\\mathds{R}^d} \\langle v \\rangle^\\gamma f \\d v \\right)^2.\n\\end{equation} \n\\end{theorem}\nThe proof of \\autoref{thm:entropydissipation} takes advantage of the simple idea of using the non-negativity of the integrand in the entropy dissipation and replace the kinetic factor $\\Phi$ by a smaller bounded function $\\psi$ without the singularity on $v=v_{\\ast}$ for $\\gamma<0$. The estimate in a weighted $L^p$ space, with a precise exponent, follows from an explicit formula for integral quadratic forms. This estimate, given in Proposition \\ref{prop:quadratictoLebesgue}, is one of the main novelties of this paper.\n\nAs a corollary of \\autoref{thm:entropydissipation}, we see that H-solutions to the Cauchy problem \\eqref{eq:homogboltzmann} are in a weighted Lebesgue space, that is \n\\[ f\\in L^{1}\\left([0,T], L^p_{-q}(\\mathds{R}^d)\\right),\\]\n where $p$ and $q$ are as in \\autoref{thm:entropydissipation}. This implies in particular that H-solutions to \\eqref{eq:homogboltzmann} are weak solutions in the usual sense. For details, see Section \\ref{sec:weaksol}.\n\nThe Landau equation can be derived as the grazing collision limit of the Boltzmann equation, see e.g. \\cite{desvillettes1992asymptotics, goudon1997boltzmann, Villaninewclass, AlexandreVillani2002boltzmann, alexandre2004landau} and the references therein. 
Precisely, the Boltzmann collision operator $Q(f,f)$, properly normalized, converges to the Landau operator as $s \\to 1$. In \\cite{DesvillettesColoumb}, Desvillettes proves an entropy dissipation estimate for the Landau equation with Coulomb interaction and presents applications to weak solutions. The estimate we obtain in \\autoref{thm:entropydissipation} is an analogous result but for the Boltzmann collision operator. If we take $d=3$ and $\\gamma=-3$, and take $s \\to 1$ in Theorem \\ref{thm:entropydissipation}, we see that $p\\to 3$ and $q\\to 5\/3$. While, the exponent $p$ coincides with the one in \\cite{DesvillettesColoumb}, our exponent in the weight $q$ is improved ($L^3_{-5\/3}$ as opposed to $L^3_{-3}$), suggesting that the weight exponent in \\cite{DesvillettesColoumb} may not be optimal. It is worth noting that the proof given in \\cite{DesvillettesColoumb} cannot be applied to the Boltzmann collision operator. The method in this paper is related to a simpler proof presented in \\cite{golse2019partial}.\n\nThe entropy dissipation is an important quantity in the analysis of the Boltzmann equation. It has various applications such as the construction of renormalized solutions to the Cauchy problem for the Boltzmann equation, see \\cite{DiPernaLionsCauchy}, or the introduction of H-solutions for the Boltzmann equation and Landau equations, see \\cite{Villaninewclass}. There are various lower bounds for the entropy dissipation in the literature. However, they have serious limitations when $\\gamma<0$ that we seek to overcome with the present result.\n\nOne of the first and best known entropy dissipation estimates appeared in \\cite{AlexDesvVilWenn}. Their analysis applies when $\\Phi$ is bounded below which, strictly speaking, is only the case when $\\gamma = 0$. For $\\gamma < 0$, some further analysis in \\cite{AlexDesvVilWenn} leads to \\emph{local} estimates restricted to bounded values of $v$. 
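For reference, the limiting exponents quoted in this comparison follow directly from the formulas $1/p = 1-2s/d$ and $q = 2s/d-\\gamma-2s$ of our main theorem; a quick check (a sketch):

```python
# Exponents from the main entropy dissipation estimate:
# 1/p = 1 - 2s/d  and  q = 2s/d - gamma - 2s.
def exponents(d, gamma, s):
    p = 1 / (1 - 2 * s / d)
    q = 2 * s / d - gamma - 2 * s
    return p, q

# Landau (Coulomb-type) limit: d = 3, gamma = -3, s -> 1
p, q = exponents(3, -3.0, 1.0)
assert abs(p - 3) < 1e-9 and abs(q - 5 / 3) < 1e-9  # the weighted space L^3_{-5/3}
```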
For other works on entropy dissipation estimates and their applications, see for instance \\cite{VillaniReg, AlexandreVillani2002boltzmann, Des03Fourier, desvillettes2005trend, mouhot2006explicit} and the references therein. \nIn \\cite{Gressmann-Strain-Global2010, GressmanStrainGlobal}, Gressmann and Strain introduce a metric which captures the anisotropic structure of the Boltzmann operator. Using this metric and the associated spaces, the same authors obtain entropy dissipation estimates in \\cite{Gressmann-Strain-2011} with sharp asymptotics for large velocities. The estimates in \\cite{Gressmann-Strain-2011} depend on a quantity (that the authors call $C_g$) that is only controlled by moments of $f$ when $\\gamma \\geq 0$ (the hard potentials case). \n\nIn addition of our main result, we present a refinement of the entropy dissipation result of \\cite[Theorem 3]{Gressmann-Strain-2011} so that it applies to the soft potential range without resorting to higher integrability assumptions on $f$. We recall the anisotropic fractional Sobolev norm of Gressman and Strain:\n\\begin{equation}\\label{def:weightedanisoSobolev}\n |f|_{{\\dot{N}^{s,\\gamma}}}^2 = \\int_{\\mathds{R}^d}\\int_{\\mathds{R}^d} \\frac{(f(v')-f(v))^2}{d_{GS}(v,v')^{d+2s}} \\left( \\langle v \\rangle \\langle v' \\rangle \\right)^{(\\gamma+2s+1)\/2} \\mathds{1}_{\\{d_{GS}(v,v')\\leq 1\\}} \\d v' \\d v,\n\\end{equation}\nwhere \n\\begin{equation}\\label{eq:GSdist}\n d_{GS}(v,v') = \\sqrt{|v-v'|^2+\\frac{1}{4}(|v|^2-|v'|^2)^2}\n \\end{equation}\nmeasures the distance in the lifted paraboloid $\\{v\\in\\mathds{R}^{d+1}\\colon v_{d+1} = \\frac12|(v_1,\\dots,v_d)|^2 \\}$.\n\nWe derive the following entropy dissipation estimate for soft potentials.\n\n\\begin{proposition}\\label{prop:entropydissipation}\nLet $-d<\\gamma \\leq 0$ and $s\\in(0,1)$. 
Let $f$ be a non-negative function satisfying \\eqref{ass:mass}, \\eqref{ass:energy} and \\eqref{ass:entropy}.\nThere is a constant $c>0$ depending on $d$, $s$, $\\gamma$ and the macroscopic bounds $m_0,M_0,E_0$ and $H_0$, and a constant $C$ depending on $d$, $s$ and $\\gamma$ only, such that\n\\[ D(f) \\geq c |\\sqrt{f}|_{\\dot{N}^{s,\\gamma}}^2 - C M_0^2. \\]\n\\end{proposition}\n\nThe novelty of Proposition \\ref{prop:entropydissipation} compared with \\cite[Theorem 3]{Gressmann-Strain-2011} is that our negative error term is in terms of the mass of $f$ only. The result in the cited paper has a negative error term depending on higher integrability assumptions on the function. Roughly, they require $f \\ast |v|^\\gamma$ to be locally bounded in \\cite[Assumption U]{Gressmann-Strain-2011}. There is no apparent upper bound in terms of the hydrodynamic quantities for their parameter $C_g$ when $\\gamma<0$. Proposition \\ref{prop:entropydissipation} implies that weak solutions to the space-homogeneous Boltzmann equation belong to $L^1([0,T], N^{s,\\gamma})$, which was not available from earlier results in the literature.\n\nWhile it is conceivable that one could potentially derive our main result in Theorem \\ref{thm:entropydissipation} from Proposition \\ref{prop:entropydissipation} combined with some sharp form of a weighted fractional Sobolev inequality (not readily available in the literature), we chose to prove Theorem \\ref{thm:entropydissipation} directly, and then present an independent proof of Proposition \\ref{prop:entropydissipation}. The direct proof of Theorem \\ref{thm:entropydissipation} is relatively short and elegant. So, we think it is worth presenting Theorem \\ref{thm:entropydissipation} as an independent result.\n\n\\subsection*{Notation}We write $a\\lesssim b$ if there is a universal constant $c>0$ such that $a\\leq cb$. 
The notation $a\\gtrsim b$ means that $b\\lesssim a$ and $a \\approx b$ that $a\\lesssim b$ and $a\\gtrsim b$.\n\\section{Entropy dissipation estimates}\nThe Boltzmann collision operator clearly plays a central role in our analysis, since the entropy dissipation is defined through it. In the following, we will briefly discuss the decomposition of the operator and some selected properties. \n\nThe Boltzmann collision operator $Q(f,g)$ can be decomposed into the sum of an integro-differential operator $Q_1(f,g)$ and a lower order term $Q_2(f,g)$ (see \\cite{SilvestreBoltzmann}):\n\\begin{equation}\\label{def:Q1Q2}\nQ_1(f,g)(v) := (\\mathcal{L}_{K_f}g)(v) \\quad \\text{and} \\quad Q_2(f,g)(v) = g(v)\\int_{\\mathds{R}^d}f(v-w)\\widetilde{B}(|w|)\\, \\d w.\n\\end{equation}\nThe integro-differential operator $\\mathcal{L}_{K_f}$ is defined by\n\\[ \\mathcal{L}_{K_f}g(t,v) = pv\\int_{\\mathds{R}^d} (g(t,v')-g(t,v))K_{f}(t,v,v')\\, \\d v',\\]\nwhere pv denotes the Cauchy principal value around $v\\in\\mathds{R}^d$. The kernel $K_f(t,v,v')$ depends on the \nfunction $f$ and is given by the formula\n\\begin{equation}\\label{def:kernel}\n\\begin{aligned}\n K_f(t,v,v') &= \\frac{2^{d-1}}{|v'-v|}\\int_{w\\perp (v'-v)} f(t,v+w) \\, B(r,\\cos\\Theta) r^{-d+2}\\, \\d w,\n \\end{aligned}\n \\end{equation}\nwhere\n \\begin{equation}\\label{def:rvwreps}\n\\begin{aligned}\nr&=\\sqrt{|v'-v|^2+|w|^2}, \\qquad & \\cos(\\Theta\/2)= \\frac{|w|}{r}, \\\\\nv_{\\ast}'&= v+w, & v_{\\ast}= v'+w.\n\\end{aligned}\n\\end{equation}\nWhile $Q_1$ represents the singular part of the collision operator, the part $Q_2$ is of lower order. 
The lower order term can be handled using the cancellation lemma \\cite[Lemma 1]{AlexDesvVilWenn}.\nThe function $\\widetilde{B}$ in \\eqref{def:Q1Q2} is given by\n\\begin{align*}\n\\widetilde{B}(z)& =\\int_{\\mathds{S}^{d-1}}\\left(2^{d\/2}(1-\\sigma\\cdot e)^{-d\/2}B\\left(\\sqrt{2} z\/(1-\\sigma\\cdot e),\\cos\\Theta\\right) -B(z,\\cos\\Theta)\\right) \\, \\d \\sigma\\\\\n& =C_b \\Phi(z)=C_bc_{\\Phi}|z|^{\\gamma},\n\\end{align*} \nwhere $C_b$ is a positive constant depending on the angular function $b$. For details on the decomposition of the Boltzmann collision operator, see \\cite{SilvestreBoltzmann}.\n\nThe idea of controlling the singularity of the operator near $v=v_{\\ast}$ for $\\gamma<0$ is to introduce an auxiliary collision kernel, where we replace the kinetic factor $\\Phi$ by a smaller, bounded function $\\psi$ that cuts out this singularity. \nSince the integrand of the entropy dissipation is non-negative, it can be bounded from below by the same expression with $B$ replaced by a smaller collision kernel.\nFor a given function $\\psi$ on $\\mathds{R}^d$, we define the generalized collision kernel $B^{\\psi}$ by\n\\[ B^{\\psi}(|v-v_{\\ast}|,\\cos\\Theta) = \\psi(|v-v_{\\ast}|)b(\\cos\\Theta). \\]\nThe auxiliary collision kernel $B^{\\psi}$ and the collision kernel $B$ differ only in the fact that we replace the kinetic factor $\\Phi$ with the function $\\psi$. 
\n\nBy simply replacing $\\Phi$ by $\\psi$, we can define the generalized kernel $K_f^{\\psi}$ by\n\\begin{equation}\\label{def:genkernel}\n K_f^{\\psi}(t,v,v') = \\frac{2^{d-1}}{|v'-v|}\\int_{w\\perp (v'-v)} f(t,v'+w) \\, \\psi(r) b(\\cos\\Theta) r^{-d+2}\\, \\d w,\n \\end{equation}\nwhere $r$ and $\\cos\\Theta$ are defined in \\eqref{def:rvwreps} and $\\widetilde{B}^{\\psi}$ by\n\\begin{equation}\\label{def:genB}\n\\widetilde{B}^{\\psi}(z) = C_b \\psi(z).\n \\end{equation}\nThis leads to the decomposition of the auxiliary collision operator $Q^{\\psi}(f,g)$ (with the collision kernel $B$ replaced by $B^{\\psi}$) into the sum of an integro-differential operator $Q^{\\psi}_1(f,g)$ and a lower order term $Q^{\\psi}_2(f,g)$, where the operators $Q_1^\\psi$ and $Q_2^\\psi$ \nare defined as in \\eqref{def:Q1Q2} with $K_f$ and $\\widetilde{B}$ replaced by $K_f^{\\psi}$ resp. $\\widetilde{B}^{\\psi}$.\n\nWe define the auxiliary function $\\psi:\\mathds{R}^d\\to\\mathds{R}$ to be a non-negative function satisfying $\\psi\\leq \\Phi$ and:\n\\begin{equation}\\label{def:psi}\n\\begin{aligned}\n\\text{if } \\gamma<0: \\quad & \\begin{cases}1\\leq \\psi(|z|) \\leq 2 \\quad &\\text{if } |z|\\leq 1, \\\\ \\psi(|z|)=\\Phi(|z|) & \\text{if } |z|> 1,\n\\end{cases} \\\\\n\\text{if } \\gamma \\geq 0: \\quad & \\psi(|z|) =\\Phi(|z|) \\quad \\text{for all } z\\in\\mathds{R}^d.\n\\end{aligned}\n\\end{equation}\nNote that by this choice, for any value of $\\gamma$,\n\\[ \\int_{\\mathds{R}^d} \\int_{\\mathds{R}^d} f(v) f(v_{\\ast}) \\psi(|v-v_{\\ast}|) \\, \\d v_{\\ast}\\, \\d v \\lesssim \\begin{cases} \nM_0^2 & \\text{ if } \\gamma \\leq 0, \\\\\n\\left( \\int f \\langle v \\rangle^\\gamma \\d v \\right)^2 & \\text{ if } \\gamma > 0.\n\\end{cases} \\]\nThe entropy dissipation is naturally connected to a quadratic form, coming from the singular part of the collision operator and a lower order term.\n\\begin{lemma}\\label{lem:entropuquadratic}\nLet $-d<\\gamma\\leq 2$ and 
$s\\in(0,1)$. Let $\\psi$ be a non-negative function satisfying $\\psi\\leq \\Phi$ and \\eqref{def:psi} and let $f$ be a non-negative function satisfying \\eqref{ass:mass}, \\eqref{ass:energy} and \\eqref{ass:entropy}. \nThere is an universal constant $C>0$ such that\n\\begin{equation}\nD(f) \\geq \\int_{\\mathds{R}^d}\\int_{\\mathds{R}^d} \\left(\\sqrt{f(v')} - \\sqrt{f(v)}\\right)^2 K^{\\psi}_f(v,v') \\, \\d v' \\, \\d v - C \\left( \\int_{\\mathds{R}^d} f(v) \\langle v \\rangle^{\\gamma_+} \\d v \\right)^2.\n\\end{equation}\n\\end{lemma}\n\\begin{proof} \nUsing the non-negativity of the entropy dissipation $D(f)$ and $\\psi\\leq \\Phi$, we get\n\\begin{align*}\nD(f) & = \\frac 12 \\int_{\\mathds{R}^d}\\int_{\\mathds{R}^d}\\int_{\\mathds{S}^{d-1}} (ff_\\ast - f' f'_\\ast) \\ln\\left(\\frac {ff_\\ast}{f'f'_\\ast} \\right) B(|v-v_{\\ast}|,\\cos\\Theta)\\d\\sigma \\d v_{\\ast} \\d v \\\\\n& \\geq \\frac 12 \\int_{\\mathds{R}^d}\\int_{\\mathds{R}^d}\\int_{\\mathds{S}^{d-1}} (ff_\\ast - f' f'_\\ast) \\ln\\left(\\frac {ff_\\ast}{f'f'_\\ast} \\right) \\psi(|v-v_{\\ast}|) b(\\cos\\Theta)\\d\\sigma \\d v_{\\ast} \\d v \\\\\n& = \\int_{\\mathds{R}^d}\\int_{\\mathds{R}^d}\\int_{\\mathds{S}^{d-1}} f(v_{\\ast})f(v) \\left[\\ln(f(v))-\\ln(f(v'))\\right] \\psi(|v-v_{\\ast}|) b(\\cos\\Theta)\\d\\sigma \\d v_{\\ast} \\d v \\\\\n& \\geq \\int_{\\mathds{R}^d}\\int_{\\mathds{R}^d}\\int_{\\mathds{S}^{d-1}} f(v_{\\ast}) \\left(\\sqrt{f(v')} - \\sqrt{f(v)}\\right)^2\\psi(|v-v_{\\ast}|) b(\\cos\\Theta)\\d\\sigma \\d v_{\\ast} \\d v \\\\\n& \\quad -\\int_{\\mathds{R}^d}\\int_{\\mathds{R}^d}\\int_{\\mathds{S}^{d-1}} f(v_{\\ast}) \\left(f(v') - f(v)\\right)\\psi(|v-v_{\\ast}|) b(\\cos\\Theta)\\d\\sigma \\d v_{\\ast} \\d v \\\\\n& = I_1 - I_2.\n\\end{align*}\nIn the second estimate, we used the inequality $x(\\ln x - \\ln y) \\geq \\left(\\sqrt{y} - \\sqrt{x}\\right)^2 - (y-x)$ for all $x,y\\geq 0$ (See \\cite{AlexDesvVilWenn}).\nBy the definition of the kernel $K_f^{\\psi}(v,v')$, the term 
$I_1$ can be written as\n\\[ I_1 = \\int_{\\mathds{R}^d}\\int_{\\mathds{R}^d} \\left(\\sqrt{f(v')} - \\sqrt{f(v)}\\right)^2 K_f^{\\psi}(v,v') \\, \\d v' \\, \\d v. \\]\nFor the term $I_2$ we use the cancellation lemma (see \\cite{AlexDesvVilWenn} or \\cite{SilvestreBoltzmann} for details), which leads to\n\\begin{align*}\nI_2 &= C \\int_{\\mathds{R}^d}\\int_{\\mathds{R}^d} f(v)f(v_{\\ast}) \\psi(|v-v_{\\ast}|) \\d v_{\\ast} \\d v \\\\ \n&\\leq C \\int_{\\mathds{R}^d}\\int_{\\mathds{R}^d} f(v)f(v_{\\ast}) \\langle v-v_{\\ast} \\rangle^\\gamma \\d v_{\\ast} \\d v \\\\ \n &\\leq C \\begin{cases} \nM_0^2 & \\text{ if } \\gamma \\leq 0, \\\\\n\\left( \\int f \\langle v \\rangle^\\gamma \\d v \\right)^2 & \\text{ if } \\gamma > 0.\n\\end{cases}\n\\end{align*}\nHere $C>0$ is a finite universal constant.\n\\end{proof}\n\nOur main result \\autoref{thm:entropydissipation} follows immediately from \\autoref{lem:entropuquadratic} and an estimate of the quadratic form from below by a weighted Lebesgue norm.\n\n\\begin{proposition}\\label{prop:quadratictoLebesgue}\nLet $\\gamma>-d$ and $s\\in(0,1)$. Let $\\psi$ be a non-negative function satisfying \\eqref{def:psi}, let $f$ be a non-negative function satisfying \\eqref{ass:mass}, \\eqref{ass:energy} and \\eqref{ass:entropy} and $g \\in L^1_2(\\mathds{R}^d)$. \nThere is a constant $c>0$, depending on the dimension $d$ and the macroscopic bounds $m_0,M_0,E_0$ and $H_0$, such that\n\\[ \\int_{\\mathds{R}^d}\\int_{\\mathds{R}^d} \\left(\\sqrt{g(v')} - \\sqrt{g(v)}\\right)^2 K^{\\psi}_f(v,v') \\, \\d v' \\, \\d v \\geq c \\|g\\|_{L^p_{-q}}, \\]\nwhere $1\/p = 1-2s\/d$ and $q=2s\/d - \\gamma-2s$. 
In particular, $g \\in L^p_{-q}(\\mathds{R}^d)$ when the left hand side is bounded.\n\\end{proposition}\nThe proof of this proposition needs some auxiliary results and is given at the end of this section.\n\nSince the estimate in \\autoref{prop:quadratictoLebesgue} has no restrictions on $\\gamma>-d$ and $s\\in(0,1)$, \nit covers soft as well as hard potentials for the Boltzmann collision operator. It is perhaps most interesting that it works in the case of very-soft potentials $\\gamma+2s<0$. Note that outside of that range, if $\\gamma+2s>2s\/d$, the exponent $q$ changes its sign.\n\nAn essential tool for the proof of \\autoref{prop:quadratictoLebesgue} are cones of nondegeneracy introduced in \\cite{SilvestreBoltzmann}.\nBefore we recall the cones of nondegeneracy and some important properties, we first give a lower bound on the generalized kernel.\n\\begin{lemma}\\label{lemma:kernelpsi}\nLet $-d<\\gamma<0$ and $s\\in(0,1)$. Let $\\psi$ be a non-negative function satisfying \\eqref{def:psi}.\nThen \n\\begin{equation}\\label{Kernelpsi}\n\\begin{aligned}\nK^{\\psi}_f(v,v') \\gtrsim |v-v'|^{-d-2s}\\int_{w\\perp (v'-v)} f(v'+w) \\min\\Big( |w|^{\\gamma+2s+1}, |w|^{2s+1}\\Big) \\, \\d w.\n \\end{aligned}\n\\end{equation}\n\\end{lemma}\nNote that in the hard potential case $\\gamma\\geq 0$, the auxiliary kernel $K_f^{\\psi}$ coincides with the kernel $K_f$ and therefore, there is nothing to do in this case. The respective result is given in \\cite[Corollary 4.2]{SilvestreBoltzmann}, namely\n\\begin{equation}\\label{eq:lowerboundKf}\nK_f(v,v') \\gtrsim |v-v'|^{-d-2s}\\int_{w\\perp (v'-v)} f(v'+w) |w|^{\\gamma+2s+1} \\, \\d w,\n\\end{equation} \nwhich provides a better lower bound. 
Nevertheless, the estimate \\eqref{Kernelpsi} is sufficient for our applications in the case of soft potentials.\n\\begin{proof}[Proof of \\autoref{lemma:kernelpsi}]\nAs in the proof of \\cite[Corollary 4.2]{SilvestreBoltzmann}, we study the two cases $\\cos\\Theta\\geq 0$ and $\\cos\\Theta< 0$. \n\\begin{enumerate}\n\\item[(i)] If $\\cos\\Theta\\geq 0$, then $|w|\\approx r$ and $b(\\cos\\Theta)\\approx |v-v'|^{-d+1-2s}r^{d-1+2s}$. Hence, \n\\begin{align*} \n\\psi(r) b(\\cos\\Theta) r^{-d+2 } &\\approx |v-v'|^{-d-2s+1}\\left( |w|^{1+2s}\\mathds{1}_{\\{r\\leq 1\\}}(r) + |w|^{1+2s+\\gamma}\\mathds{1}_{\\{r> 1\\}}(r)\\right) \\\\\n&\\geq |v-v'|^{-d-2s+1}\\min\\Big( |w|^{\\gamma+2s+1}, |w|^{2s+1}\\Big).\n\\end{align*}\n\\item[(ii)] In the case $\\cos\\Theta<0$, we have $|v'-v|\\approx r$ and $|w|=r\\cos(\\Theta\/2)$. \nTherefore, $b(\\cos\\Theta) = \\cos(\\Theta\/2)^{\\gamma+2s+1}$. If $r\\leq 1$, then \n\\begin{align*}\n\\psi(r) b(\\cos\\Theta) r^{-d+2} &\\approx r^{-d-2s+1} |w|^{\\gamma+2s+1} \\\\\n&\\approx |v'-v|^{-d-2s+1} |w|^{\\gamma+2s+1} \\\\\n&\\geq |v-v'|^{-d-2s+1}\\min\\Big( |w|^{\\gamma+2s+1}, |w|^{2s+1}\\Big).\n\\end{align*}\nOn the other hand, if $r\\geq 1$, we have\n\\begin{align*}\n\\psi(r) b(\\cos\\Theta) r^{-d+2} &\\approx r^{-d+2} \\cos(\\Theta\/2)^{\\gamma+2s+1}r^{\\gamma} \\\\\n&\\approx |v-v'|^{-d-2s+1}|w|^{2s+1}\\cos(\\Theta\/2)^{\\gamma}r^{\\gamma} \\\\\n&\\approx |v-v'|^{-d-2s+1}|w|^{\\gamma+2s+1} \\\\\n& \\geq |v-v'|^{-d-2s+1}\\min\\Big( |w|^{\\gamma+2s+1}, |w|^{2s+1}\\Big).\n\\end{align*}\n\\end{enumerate}\nThis finishes the proof of \\autoref{lemma:kernelpsi}.\n\\end{proof}\nNote that by the bound on the mass and energy, a certain amount of the mass of the function $f$ lies inside a ball centered around zero with radius depending on $m_0$ and $E_0$. 
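This mass localization is the standard Chebyshev-type argument: as a sketch, keeping the normalization of \\eqref{ass:mass} and \\eqref{ass:energy} in mind (a lower bound $m_0$ on the mass and an upper bound $E_0$ on the energy), the choice $R^2 = 2E_0\/m_0$ gives\n\\[ \\int_{B_R} f(v)\\, \\d v \\geq \\int_{\\mathds{R}^d} f(v)\\, \\d v - \\frac{1}{R^2}\\int_{\\mathds{R}^d} f(v)|v|^2\\, \\d v \\geq m_0 - \\frac{E_0}{R^2} = \\frac{m_0}{2}. \\]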
The bound on the entropy $H_0$ ensures that this mass is not concentrated on a set of measure zero.\nThese observations lead to cones of nondegeneracy constructed by sets of the form $\\{f\\geq \\ell\\}$. To be more precise, for any point $v\\in\\mathds{R}^d$, there is a symmetric cone of directions $A(v)$ such that its perpendicular planes intersect the set $\\{f\\geq \\ell\\}$ on a set with positive $\\mathcal{H}^{d-1}$-Hausdorff measure. \nAs a consequence of \\autoref{lemma:kernelpsi} resp. \\eqref{eq:lowerboundKf}, \nwe get the existence of a cone of non-degeneracy for the kernel $K^{\\psi}_f$. \nHere, one can simply follow the lines of the proof of \\cite[Lemma 4.8 and Lemma 7.1]{SilvestreBoltzmann} and use the bound on the kernel $K_f^{\\psi}$ given in Lemma \\ref{lemma:kernelpsi}.\n\\begin{lemma}{\\cite[Lemma 7.1]{SilvestreBoltzmann}}\\label{lemma:kernelphi}\nLet $\\gamma>-d$ and $s\\in(0,1)$. Let $\\psi$ be a non-negative function satisfying \\eqref{def:psi}\nand let $f$ be a non-negative function satisfying \\eqref{ass:mass}, \\eqref{ass:energy} and \\eqref{ass:entropy}.\nFor any $v\\in\\mathds{R}^d$, there exists a symmetric subset $A(v)\\subset\\mathds{S}^{d-1}$ such that \n\\begin{enumerate}\n\\item[(i)] $\\displaystyle |A(v)|>\\mu\\langle v \\rangle^{-1}$, where $|A(v)|$ denotes the $(d-1)$-dimensional Hausdorff measure of $A(v)$,\n\\item[(ii)] $\\displaystyle K^{\\psi}_f(v,v')\\geq \\lambda \\langle v \\rangle^{1+2s+\\gamma}|v-v'|^{-d-2s}$ whenever $(v'-v)\/(|v'-v|)\\in A(v)$.\n\\item[(iii)] For every $\\sigma\\in A(v)$, $|\\sigma\\cdot v| \\leq C$.\n\\end{enumerate}\nThe constants $\\mu$, $\\lambda$ and $C$ depend on $d$ and on the hydrodynamic bounds $m_0, M_0, E_0$ and $H_0$.\n\\end{lemma}\nThe set $A(v)$ in the previous lemma describes the set of directions along which the kernel $K^{\\psi}_f$ has the lower bound given in property (ii). 
We denote the corresponding cone of nondegeneracy by $\\Xi(v)$, that is,\n\\[ \\Xi(v) := \\left\\{ v' \\in\\mathds{R}^d\\colon \\frac{(v'-v)}{|v'-v|}\\in A(v)\\right\\}. \\]\nFurthermore, the cone of nondegeneracy degenerates as $|v|\\to\\infty$ and satisfies\n\\[ |B_R(v) \\cap \\Xi(v)| \\approx R^d \\langle v \\rangle^{-1} . \\]\n\nUsing these cones of nondegeneracy, we can finally prove \\autoref{prop:quadratictoLebesgue}.\n\\begin{proof}[Proof of \\autoref{prop:quadratictoLebesgue}]\nWe initially assume that $g \\in L^p_{-q}(\\mathds{R}^d)$ and prove the estimate. A posteriori, we prove that the quadratic form must be infinite if $g \\notin L^p_{-q}(\\mathds{R}^d)$ by an approximation procedure using that $g \\in L^1_2(\\mathds{R}^d)$.\n\nRecall that by \\autoref{lemma:kernelphi} for every $v\\in\\mathds{R}^d$, we know that $K^{\\psi}_f(v,v') \\geq \\lambda \\langle v \\rangle^{1+2s+\\gamma}|v-v'|^{-d-2s}$, whenever $v'\\in \\Xi(v)$. \nFurthermore, for every $v\\in\\mathds{R}^d$, let us choose $R=R(v)>0$ so that for some large $C>0$,\n\\[ g(v)^p R^d \\langle v\\rangle^{-qp-1} \\approx C\\|g\\|_{L^p_{-q}}^p. \\]\nWith this choice,\n\\[ |\\{v'\\in (B_R(v)\\cap \\Xi(v))\\colon g(v') \\leq g(v)\/2\\}| \\gtrsim |B_R(v)\\cap \\Xi(v)| \\approx R^d \\langle v \\rangle^{-1}. \\]\nHence, using this information, we get\n\\begin{align*}\n\\int_{\\mathds{R}^d} \\left( \\sqrt{g(v')} - \\sqrt{g(v)} \\right)^2K_f^{\\psi}(v,v')\\, \\d v' & \\gtrsim R^d \\langle v \\rangle^{-1} g(v) \\left(\\lambda \\langle v\\rangle^{1+2s+\\gamma}R^{-d-2s} \\right) \\\\\n& \\geq c_1 R^{-2s} g(v) \\langle v \\rangle^{\\gamma+2s} \\\\\n& = c_2 \\|g\\|_{L^p_{-q}}^{-2sp\/d} g(v)^{1+2sp\/d}\\langle v \\rangle^{\\gamma+2s-2s(qp+1)\/d},\n\\end{align*}\nwhere $c_1,c_2$ are constants depending on $m_0, M_0, E_0, H_0$.\nOur choice of $p$ and $q$ was made so that $p=1+2sp\/d$ and $-qp =\\gamma+2s-2s(qp+1)\/d$. 
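As a sanity check of this exponent bookkeeping: the identity $1\/p = 1-2s\/d$ is equivalent to $p = 1+2sp\/d$, and the second identity rearranges to\n\\[ -qp = \\gamma+2s-\\frac{2s(qp+1)}{d} \\quad \\Longleftrightarrow \\quad qp\\left(\\frac{2s}{d}-1\\right) = \\gamma+2s-\\frac{2s}{d} \\quad \\Longleftrightarrow \\quad q = \\frac{2s}{d}-\\gamma-2s, \\]\nwhere the last step uses $\\frac{2s}{d}-1 = -\\frac{1}{p}$.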
\nIntegrating over $v\\in\\mathds{R}^d$ finally gives us\n\\[ \\int_{\\mathds{R}^d}\\int_{\\mathds{R}^d} \\left( \\sqrt{g(v')} - \\sqrt{g(v)} \\right)^2K_f^{\\psi}(v,v')\\, \\d v' \\d v \\geq c\\|g\\|_{L^p_{-q}}^{-2sp\/d} \\|g\\|_{L^p_{-q}}^p = c\\|g\\|_{L^p_{-q}}. \n\\]\nThis concludes the proof provided that $g \\in L^p_{-q}$. For a function $g \\notin L^p_{-q}$, we consider $g_m(v) := \\max(\\min(g(v),m),-m)$. Since we assumed that $g \\in L^1_2(\\mathds{R}^d)$, we have $g_m \\in L^p_{-q}$ and the inequality holds. Moreover, $\\|g_m\\|_{L^p_{-q}} \\to \\infty$ and therefore, applying the monotone convergence theorem,\n\\begin{align*}\n\\int_{\\mathds{R}^d}\\int_{\\mathds{R}^d} &\\left( \\sqrt{g(v')} - \\sqrt{g(v)} \\right)^2K_f^{\\psi}(v,v')\\, \\d v' \\d v \\\\\n& \\geq \\lim_{m \\to \\infty} \\int_{\\mathds{R}^d}\\int_{\\mathds{R}^d} \\left( \\sqrt{g_m(v')} - \\sqrt{g_m(v)} \\right)^2K_f^{\\psi}(v,v')\\, \\d v' \\d v \\\\\n& \\geq \\lim_{m \\to \\infty} c \\|g_m\\|_{L^p_{-q}} = +\\infty. \n\\end{align*}\n\\end{proof}\nAs already mentioned, the main result \\autoref{thm:entropydissipation} now follows from a combination of \\autoref{lem:entropuquadratic} and \\autoref{prop:quadratictoLebesgue}.\n\n\\begin{remark}\nEven though we assume $g \\in L^1_2$ in Proposition \\ref{prop:quadratictoLebesgue}, the estimate does not depend on $\\|g\\|_{L^1_2}$. There are various possible ways to relax this assumption, but it cannot be removed altogether. It is there to rule out the obvious counterexample of $g$ being a nonzero constant function for which the quadratic form vanishes. It can be replaced by any condition that allows some form of the final approximation procedure for the case $g \\notin L^p_{-q}$.\n\\end{remark}\n\n\\subsection{Entropy dissipation estimate involving an anisotropic fractional Sobolev space}\nIn this subsection, we present a second entropy dissipation estimate for the Boltzmann collision operator. 
\nThis estimate involves the anisotropic distance by Gressman and Strain \\cite{Gressmann-Strain-2011}.\n\nLet us first briefly recall the sharp anisotropic coercivity estimate for the Boltzmann collision operator from \\cite{Gressmann-Strain-2011}.\nNote that $\\left\\langle Q(g,f),f\\right\\rangle$ can be rewritten as\n\\begin{equation}\\label{eq:NgKg}\n\\begin{aligned}\n\\left\\langle Q(g,f),f\\right\\rangle &= \\int_{\\mathds{R}^d}\\int_{\\mathds{R}^d}\\int_{\\mathds{S}^{d-1}} g(v_{\\ast})f(v) \\left[f(v')-f(v)\\right] B(r,\\cos\\Theta)\\d\\sigma \\d v_{\\ast} \\d v \\\\\n& = \\frac12 \\int_{\\mathds{R}^d}\\int_{\\mathds{R}^d}\\int_{\\mathds{S}^{d-1}} g(v_{\\ast})\\left[f(v')^2 -f(v)^2\\right] B(r,\\cos\\Theta)\\d\\sigma \\d v_{\\ast} \\d v \\\\\n& \\quad \\qquad -\\frac12 \\int_{\\mathds{R}^d}\\int_{\\mathds{R}^d}\\int_{\\mathds{S}^{d-1}} g(v_{\\ast})\\left[f(v')-f(v)\\right]^2 B(r,\\cos\\Theta)\\d\\sigma \\d v_{\\ast} \\d v\\\\\n& =:K_g(f) - N_g(f).\n\\end{aligned}\n\\end{equation}\n\nIn \\cite[Theorem 1]{Gressmann-Strain-2011}, the authors prove that under mild assumptions on the function $g$, the term $N_g(f)$ can be bounded from below by the weighted anisotropic Sobolev semi-norm\n$|f|_{{\\dot{N}^{s,\\gamma}}}^2$ defined in \\eqref{def:weightedanisoSobolev}. This provides an entropy dissipation estimate in terms of the anisotropic fractional Sobolev space. However, the result depends on a certain parameter $C_g$ that would be difficult to bound when $\\gamma<0$. In this section we prove Proposition \\ref{prop:entropydissipation}, which is effectively a refinement of \\cite[Theorem 1]{Gressmann-Strain-2011} in the soft potential range.\n\n\nIn \\cite{imbertsilvestreglobal2022}, Imbert and Silvestre introduce a change of variables, which is used to turn local H\u00f6lder and Schauder estimates into global ones. 
For $v_0\\in\\mathds{R}^d$ let \n\\[ \\overline{v} := \\begin{cases} v_0 + v \\quad & \\text{if } |v_0|<2, \\\\ v_0+T_0v & \\text{if } |v_0|\\geq 2,\\end{cases} \\]\n where\n \\begin{equation}\\label{def:T0}\n T_0(av_0+w) = \\frac{a}{|v_0|}v_0 + w, \\qquad \\text{for all } w\\perp v_0, a\\in\\mathds{R}. \n \\end{equation}\n The function $T_0:\\mathds{R}^d\\to\\mathds{R}^d$, introduced in \\cite{imbertsilvestreglobal2022}, has a strong connection to the anisotropic distance \n\\eqref{eq:GSdist}.\nIn \\cite[Lemma A.1]{imbertsilvestreglobal2022} it is shown that for any given $v_0\\in\\mathds{R}^d$ with $|v_0|\\geq 2$, we have \n\\begin{equation}\\label{eq:comparabilitymetric}\nd_{GS}(v_1,v_2) \\asymp |T_0^{-1}(v_1-v_2)|\n\\end{equation}\nfor all $v_1,v_2\\in E_1(v_0) := v_0+T_0(B_1)$. \nWe define\n\\begin{equation}\\label{def:Kbar}\n\\overline{K}^{\\psi}_f(v,v') = |v_0|^{-1-\\gamma-2s}K^{\\psi}_f(\\overline{v}, v_0+T_0v'),\n\\end{equation}\nwhere $\\psi$ is the auxiliary function $\\psi:\\mathds{R}^d\\to\\mathds{R}$ defined in\n\\eqref{def:psi}. \nIn \\cite{imbertsilvestreglobal2022}, the authors derive the global coercivity estimate by Gressmann and Strain by using the above mentioned transformation and a local coercivity estimate. \nBy using similar methods, we are able to prove \\autoref{prop:entropydissipation}.\nBefore we draw our attention to the proof of \\autoref{prop:entropydissipation}, we need some auxiliary results.\nLet us first state a local coercivity estimate.\n\\begin{lemma}\\label{lemma:localcoercivity}\nLet $\\gamma>-d$ and $s\\in(0,1)$. Let $\\psi$ be a non-negative function satisfying $\\psi\\leq \\Phi$ and \\eqref{def:psi} and let $f$ be a non-negative function satisfying \\eqref{ass:mass}, \\eqref{ass:energy} and \\eqref{ass:entropy}. 
\nThere is a constant $\\lambda>0$, depending on the macroscopic bounds $m_0, M_0, E_0$ and $H_0$, such that for every $g:\\mathds{R}^d\\to\\mathds{R}$\n\\[ \\int_{B_1}\\int_{B_1} \\left(g(v') - g(v)\\right)^2 \\overline{K}^{\\psi}_f(v,v') \\d v' \\d v \\geq \\lambda \\int_{B_{1\/2}}\\int_{B_{1\/2}} \\frac{\\left(g(v') - g(v)\\right)^2}{|v-v'|^{d+2s}} \\d v' \\d v. \\]\n\\end{lemma}\n\\begin{proof} \nBy \\autoref{lemma:kernelphi}, there is a cone of non-degeneracy for the kernel $K^{\\psi}_f$. Hence, following the lines of the proof of \\cite[Lemma 5.6]{imbertsilvestreglobal2022}, there is also a cone of non-degeneracy for $\\overline{K}^{\\psi}_f$. \nNow the result follows from the coercivity condition \\cite[Theorem 1.3]{chakersilvestrecoerc}.\n\\end{proof}\nLet $v_0\\in\\mathds{R}^d\\setminus B_2$ be given.\nFor $v\\in\\mathds{R}^d$, let $\\overline{v}$ be such that $v=v_0+T_0(\\overline{v})$, where $T_0$ is defined in \\eqref{def:T0}. Furthermore, let $\\overline{g}(v)= g(\\overline{v})$.\n\\begin{lemma}\\label{lemma:coerc1}\nLet $\\gamma>-d$ and $s\\in(0,1)$. Let $\\psi$ be a non-negative function satisfying $\\psi\\leq \\Phi$ and \\eqref{def:psi} and let $f$ be a non-negative function satisfying \\eqref{ass:mass}, \\eqref{ass:energy} and \\eqref{ass:entropy}. 
\nThere are $c>0$, $R\\in (2,\\infty)$ and $\\rho\\in (0,1]$, depending on the macroscopic bounds $m_0, M_0, E_0$ and $H_0$, such that for all $g:\\mathds{R}^d\\to\\mathds{R}$\n\\begin{align*} \nc\\iint_{d_{GS}(v,v')<\\rho} (g(v)- g(v'))^2\\, \\frac{\\left( \\langle v \\rangle \\langle v' \\rangle \\right)^{(\\gamma+2s+1)\/2}}{d_{GS}(v,v')^{d+2s}}\\, \\d v' \\, \\d v \\leq \\int_{\\mathds{R}^d}\\int_{\\mathds{R}^d} \\left(g(v') - g(v)\\right)^2 K^{\\psi}_f(v,v') \\, \\d v' \\, \\d v.\n\\end{align*}\n\\end{lemma}\n\n\\begin{lemma}\\label{lem:rho32rho}\nLet $g : \\mathds{R}^d \\to \\mathds{R}$ be any measurable function, $q \\in \\mathds{R}$ and $\\rho \\leq 1$. The following inequality holds for some $c>0$ depending only on dimension, $s$ and $q$.\n\\begin{align*}\n\\iint_{d_{GS}(v,v')<\\rho} (g(v)- g(v'))^2\\, \\frac{\\left( \\langle v \\rangle \\langle v' \\rangle \\right)^q}{d_{GS}(v,v')^{d+2s}}\\, \\d v' \\, \\d v &\\geq \\\\\nc \\iint_{d_{GS}(v,v')< \\frac 32 \\rho} (g(v)- g(v'))^2\\, &\\frac{\\left( \\langle v \\rangle \\langle v' \\rangle \\right)^q}{d_{GS}(v,v')^{d+2s}}\\, \\d v' \\, \\d v\n\\end{align*}\n\\end{lemma}\n\n\\begin{proof}\nGiven $v,v' \\in \\mathds{R}^d$ so that $d_{GS}(v,v') < \\frac 32 \\rho$, define\n\\[ N(v,v') := \\left\\{ w \\in \\mathds{R}^d : d_{GS}(v,w) < \\frac 23 d_{GS}(v,v') \\text{ and } d_{GS}(v',w) < \\frac 23 d_{GS}(v,v') \\right\\}.\\]\n\nFrom the triangle inequality, we observe that $d_{GS}(v,w) > \\frac 13 d_{GS}(v,v')$ for all $w \\in N(v,v')$. Moreover, since $d_{GS}(v,v') < \\frac 32 \\rho < 3$, we also see that $\\langle v \\rangle \\approx \\langle v' \\rangle \\approx \\langle w \\rangle$ for all $w \\in N(v,v')$.\n\nLet us also define $M(v,w) := \\{ v' \\in \\mathds{R}^d: w \\in N(v,v')\\}$. The sets $N(v,v')$ and $M(v,w)$ are substantial portions of a ball with respect to the distance $d_{GS}$. 
It is not hard to estimate their volumes $|N(v,v')| \\approx |M(v,w)| \\approx \\langle v \\rangle^{-1} d_{GS}(v,v')^d$.\n\nWith this notation, we proceed with the computation\n\\begin{align*}\n\\iint_{d_{GS}(v,v')< \\frac 32 \\rho} &(g(v)- g(v'))^2\\, \\frac{\\left( \\langle v \\rangle \\langle v' \\rangle \\right)^q}{d_{GS}(v,v')^{d+2s}}\\, \\d v' \\, \\d v \\\\\n&= \\iint_{d_{GS}(v,v')< \\frac 32 \\rho} (g(v)- g(v'))^2\\, \\frac{\\left( \\langle v \\rangle \\langle v' \\rangle \\right)^q}{d_{GS}(v,v')^{d+2s}}\\, |N(v,v')|^{-1} \\int_{N(v,v')} \\d w \\d v' \\, \\d v \\\\\n\\intertext{Using that $(g(v)- g(v'))^2 \\leq 2(g(v)- g(w))^2 + 2(g(w)- g(v'))^2$ and $|N(v,v')| \\approx \\langle v \\rangle^{-1} d_{GS}(v,v')^d$,}\n&\\lesssim \\iint_{d_{GS}(v,v')< \\frac 32 \\rho}\\int_{N(v,v')} \\left( (g(v)- g(w))^2 + (g(w)- g(v'))^2 \\right) \\, \\frac{ (\\langle v \\rangle \\langle v' \\rangle)^{q+1\/2}}{d_{GS}(v,v')^{2d+2s}}\\, \\d w \\d v' \\, \\d v \\\\\n\\intertext{Note the symmetry with respect to $v$ and $v'$ and that $\\langle v \\rangle \\approx \\langle v' \\rangle \\approx \\langle w \\rangle$.}\n& \\approx \\iint_{d_{GS}(v,w)<\\rho} (g(v)- g(w))^2 \\frac{ \\langle v \\rangle^{2q+1}}{d_{GS}(v,w)^{2d+2s}} \\left( \\int_{v' \\in M(v,w)} \\d v' \\right) \\d w \\d v \\\\\n& \\approx \\iint_{d_{GS}(v,w)<\\rho} (g(v)- g(w))^2 \\frac{ \\langle v \\rangle^{q} \\langle w \\rangle^{q}}{d_{GS}(v,w)^{d+2s}} \\d w \\d v.\n\\end{align*}\n\\end{proof}\n\n\\begin{corollary} \\label{cor:rhoisirrelevant}\nLet $g : \\mathds{R}^d \\to \\mathds{R}$ be any measurable function, $q \\in \\mathds{R}$ and $\\rho < 1$. 
The following inequality holds for some $c>0$ depending only on dimension, $s$, $q$ and $\\rho$.\n\\begin{align*}\n\\iint_{d_{GS}(v,v')<\\rho} (g(v)- g(v'))^2\\, \\frac{\\left( \\langle v \\rangle \\langle v' \\rangle \\right)^q}{d_{GS}(v,v')^{d+2s}}\\, \\d v' \\, \\d v &\\geq \\\\\nc \\iint_{d_{GS}(v,v')< 1} (g(v)- g(v'))^2\\, &\\frac{\\left( \\langle v \\rangle \\langle v' \\rangle \\right)^q}{d_{GS}(v,v')^{d+2s}}\\, \\d v' \\, \\d v\n\\end{align*}\n\\end{corollary}\n\n\\begin{proof} We iterate Lemma \\ref{lem:rho32rho} $m$ times so that $(3\/2)^m \\rho \\geq 1$.\n\\end{proof}\n\nWe finally have all the tools to prove the main result of this section, \\autoref{prop:entropydissipation}.\n\\begin{proof}[Proof of \\autoref{prop:entropydissipation}]\nLet $\\psi$ be a non-negative function satisfying $\\psi\\leq \\Phi$ and \\eqref{def:psi}. \nProceeding as in the proof of \\autoref{lem:entropuquadratic},\n\\[ D(f) \\geq N_f^{\\psi}(\\sqrt{f}) - C M_0^2,\\]\nwhere $C>0$ is a finite universal constant and\n\\[ N_f^{\\psi}(\\sqrt{f}) = \\int_{\\mathds{R}^d}\\int_{\\mathds{R}^d} \\left(\\sqrt{f(v')} - \\sqrt{f(v)}\\right)^2 K_f^{\\psi}(v,v') \\d v' \\d v.\\]\n\nBy \\autoref{lemma:coerc1}, there are $c_1>0$ and $\\rho\\in(0,1)$, depending on $m_0, M_0, E_0$ and $H_0$, such that\n\\[ N_f^{\\psi}(\\sqrt{f}) \\geq c_1 \\iint_{d_{GS}(v,v')<\\rho} (\\sqrt{f(v)}- \\sqrt{f(v')})^2\\, \\frac{\\left( \\langle v \\rangle \\langle v' \\rangle \\right)^{(\\gamma+2s+1)\/2}}{d_{GS}(v,v')^{d+2s}}\\, \\d v' \\, \\d v.\\]\n\nBecause of Corollary \\ref{cor:rhoisirrelevant}, we can replace $\\rho$ in the formula above by $1$ by adjusting the constant $c_1$. We get\n\\[ N_f^{\\psi}(\\sqrt{f}) \\geq c_2 \\iint_{d_{GS}(v,v')<1} (\\sqrt{f(v)}- \\sqrt{f(v')})^2\\, \\frac{\\left( \\langle v \\rangle \\langle v' \\rangle \\right)^{(\\gamma+2s+1)\/2}}{d_{GS}(v,v')^{d+2s}}\\, \\d v' \\, \\d v.\\]\nTherefore, we conclude that\n\\[ D(f) \\geq c |\\sqrt{f}|_{\\dot{N}^{s,\\gamma}}^2 - C M_0^2. 
\\]\n\n\\end{proof}\n\\section{Applications to weak solutions}\\label{sec:weaksol}\nIn this section, we discuss an application of the main result \\autoref{thm:entropydissipation} for weak solutions to the spatially homogeneous Boltzmann equation. Let us have a look at the weak formulation of the Boltzmann collision operator. \n In the following, let $f\\in L^\\infty([0,\\infty); L_2^1(\\mathds{R}^d)) \\cap C([0,\\infty); \\mathcal{D}'(\\mathds{R}^d))$ be a nonnegative function satisfying \\eqref{ass:mass}, \\eqref{ass:energy} and \\eqref{ass:entropy}.\n\nIn \\cite{Villaninewclass}, Villani introduces a class of weak solutions, called H-solutions, to the spatially homogeneous Boltzmann equation with bounded entropy dissipation. A solution in this class might not be a weak solution in the usual sense. As explained in \\cite[Section 7, Application 2]{AlexDesvVilWenn} and \\cite{Villaninewclass}, this problem appears because of the lack of an a priori estimate in the very soft potential case of the form\n\\begin{equation}\\label{eq:apropri}\n \\int_{0}^T \\int_{B_R} \\int_{B_R} f(t,v) f(t,v_{\\ast})|v-v_{\\ast}|^{\\gamma+2} \\, \\d v \\, \\d v_{\\ast} \\, \\d t < \\infty. \n\\end{equation}\nIt is mentioned in \\cite{AlexDesvVilWenn} that the (local) entropy dissipation estimate shows that H-solutions are weak solutions in the usual sense when $\\gamma+2s \\geq d-2$. The computation is sketched without explicit details. Using our estimate in Theorem \\ref{thm:entropydissipation}, we show that \\eqref{eq:apropri} holds whenever $\\gamma+2s > -2$, covering the whole physical range of exponents. There is no fundamental difference between the computation presented here and the one proposed in \\cite{DesvillettesColoumb}. It is not clear why they stated a suboptimal range in \\cite{AlexDesvVilWenn}, suggesting that it is possibly a typo in the paper. 
We explain the computation explicitly below.\n\nOne important ingredient in the proof of \\eqref{eq:apropri} is that H-solutions are in a weighted Lebesgue space. This follows immediately from \\autoref{thm:entropydissipation} combined with the entropy dissipation formula \\eqref{eqn:dissipationintegratedintime} (without integrating in space).\n\n\\begin{corollary}\\label{corollary:weightedlebesgue}\nLet $T>0$, $-d<\\gamma\\leq 2$ and $s\\in(0,1)$. \nLet $f$ be a non-negative H-solution to the Cauchy problem \\eqref{eq:homogboltzmann} with initial datum $f_0$. Assume $f_0$ satisfies \\eqref{ass:entropy}. \\\\\nThen $f\\in L^{1}\\left([0,T], L^p_{-q}(\\mathds{R}^d)\\right)$, where $1\/p = 1-2s\/d$ and $q=2s\/d - \\gamma-2s$.\n\\end{corollary}\n\\begin{proof}\nBy definition, H-solutions satisfy the space-homogeneous form of \\eqref{eqn:dissipationintegratedintime}. We get that $\\int_0^T D(t) \\d t \\leq \\int f_0 \\log f_0 \\d v$. The corollary follows by applying Theorem \\ref{thm:entropydissipation}.\n\\end{proof}\n\nWe use \\autoref{corollary:weightedlebesgue} to show that H-solutions are weak solutions in the usual sense by proving \\eqref{eq:apropri}.\n\n\\begin{corollary}\\label{cor:aprioriestimate}\nLet $T>0$, $-d<\\gamma\\leq 0$ and $s\\in(0,1)$, so that $\\gamma+2s > -2$. \nLet $f$ be a non-negative H-solution to the Cauchy problem \\eqref{eq:homogboltzmann} with initial datum $f_0$. Assume $f_0$ satisfies \\eqref{ass:entropy}. 
Then $f$ satisfies \\eqref{eq:apropri}.\n\\end{corollary}\n\n\\begin{proof}\n\\begin{align*}\n \\int_{0}^T & \\int_{B_R} \\int_{B_R} f(t,v) f(t,v_{\\ast}) |v-v_{\\ast}|^{\\gamma+2} \\, \\d v \\, \\d v_{\\ast} \\, \\d t \\\\\n& \\leq \\int_{0}^T \\|f\\|_{L^1} \\sup_{v \\in B_R} \\int_{B_R} f(t,v_{\\ast}) |v-v_{\\ast}|^{\\gamma+2} \\, \\d v_{\\ast} \\, \\d t \\\\\n\\intertext{We apply H\\\"older's inequality with $1\/p = 1 - 2s\/d$ as in Corollary \\ref{corollary:weightedlebesgue}.}\n& \\leq \\int_{0}^T \\|f(t,\\cdot)\\|_{L^1} \\|f(t,\\cdot)\\|_{L^p(B_R)} \\sup_{v \\in B_R} \\||v-\\cdot|^{\\gamma+2} \\|_{L^{p'}(B_R)} \\d t \\\\\n&\\leq \\|f\\|_{L^\\infty([0,T],L^1(B_R))} \\|f\\|_{L^1([0,T],L^p(B_R))} \\sup_{v \\in B_R} \\||v-\\cdot|^{\\gamma+2} \\|_{L^{p'}(B_R)}.\n\\end{align*}\nIt only remains to check whether the last factor is finite. We have\n\\begin{align*}\n\\||v-\\cdot|^{\\gamma+2} \\|_{L^{p'}(B_R)} =\\left( \\int_{B_R} |v-v_\\ast|^{(\\gamma+2)\\frac{d}{2s}} \\, \\d v_\\ast \\right)^{\\frac{2s}d}. \n\\end{align*}\nThe integral is finite provided that $(\\gamma+2)d\/(2s) > -d$. 
This is clearly the case when $\\gamma+2s > -2$.\n\\end{proof}\n\n\\bibliographystyle{abbrv}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n \\begin{table}[t]\n \\begin{center}\n \\begin{tabular}{|c|c|c| }\n \\hline\n Additive Stretch & Size & Time \\\\\n \\hline \\hline\n +2 & $O(n^{3\/2})$ & $O(mn^{1\/2})$ \\cite{aingworth1999fast} \\\\\n \\hline\n +4 & $\\widetilde{O}(n^{7\/5})$ & $O(mn)$ \\cite{chechik2013new} \\\\\n \\hline\n \\textbf{+4} & $\\bm{\\widetilde{O}(n^{7\/5})}$ & $\\bm{\\widetilde{O}(mn^{3\/5})}$ \\textbf{(this paper)}\\\\\n \\hline\n +6 & $\\widetilde{O}(n^{4\/3})$ & $\\widetilde{O}(n^2)$ (\\cite{Baswana2010} and \\cite{woodruff2010additive})\\\\\n \\hline\n +8 & $O(n^{4\/3})$ & $O(n^2)$ \\cite{knudsen2017additive}\\\\\n \\hline\n \\end{tabular}\n \\caption{Notable Additive Spanner Constructions}\n \\end{center}\n \\end{table}\n\nA graph on $n$ nodes can have on the order of $m = O(n^2)$ edges. For very large values of $n$, this number of edges can be prohibitively expensive, both to store in space and to run graph algorithms on. Thus it may be prudent to operate instead on a smaller approximation of the graph. A \\textit{spanner} is a type of subgraph which preserves distances between nodes up to some error, which we call the stretch. Spanners were introduced in \\cite{peleg1989graph} and \\textit{additive} spanners were first studied in \\cite{liestman1993additive}.\n\n\\begin{definition}[Additive Spanners]\n\nA $k$-additive spanner (or ``$+k$ spanner\") of a graph $G$ is a subgraph $H$ that satisfies $\\texttt{dist}_H(s,t) \\leq \\texttt{dist}_G(s,t) + k$ for each pair of nodes $s,t \\in V(G)$. $k$ is called the (additive) \\textit{stretch} of the spanner.\n\n\\end{definition}\n\n\nNote that since $H$ is a subgraph of $G$, the lower bound $\\texttt{dist}_G(s,t) \\leq \\texttt{dist}_H(s,t)$ is immediate (the error is one-sided). 
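As an illustration of the definition (this sketch is our own and not part of any construction discussed below), the additive stretch of a candidate subgraph $H$ of an unweighted graph $G$ can be measured directly by comparing BFS distances from every source; the function names and the toy graphs are hypothetical.

```python
from collections import deque

def bfs_dist(adj, src):
    # Standard BFS distances from src in an unweighted graph
    # given as an adjacency dict {node: set of neighbors}.
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def additive_stretch(adj_g, adj_h):
    # Largest value of dist_H(s, t) - dist_G(s, t) over all pairs;
    # H is assumed to be a spanning subgraph of G on the same vertex set.
    worst = 0
    for s in adj_g:
        dist_g, dist_h = bfs_dist(adj_g, s), bfs_dist(adj_h, s)
        worst = max(worst, max(dist_h[t] - d for t, d in dist_g.items()))
    return worst

# Tiny example: G is a 4-cycle plus the chord {0, 2}; H drops the chord.
G = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
H = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
```

Here removing the chord forces the pair $(0,2)$ to take a detour of length 2 in $H$, so the measured stretch is $1$; a $+k$ spanner is exactly a subgraph for which this quantity is at most $k$.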
Spanners have found applications in distance oracles \\cite{baswana2006faster}, parallel and distributed algorithms for computing almost shortest paths \\cite{cohen1998fast,elkin2005approximating}, synchronizers \\cite{peleg1987optimal}, and more. \n\nSpanners are a tradeoff between the size of the subgraph and the stretch; spanner size can be decreased at the cost of a greater stretch and vice versa. The +2 spanner construction due to Aingworth et al. produces spanners of size $O(n^{3\/2})$, and this size is optimal \\cite{aingworth1999fast}. The +4 spanner construction is due to Chechik, which produces smaller spanners of size $\\widetilde{O}(n^{7\/5})$ (though this bound is not known to be tight; it is conceivable that further improvements may reduce size up to $O(n^{4\/3})$)\\cite{chechik2013new}. The +6 construction due to Baswana et al. \\cite{Baswana2010} and +8 construction due to Knudsen \\cite{knudsen2017additive} achieve an $O(n^{4\/3})$ spanner. It is known that any $+k$ spanner construction has a lower bound of $n^{4\/3 - o(1)}$ edges on the spanners it produces, so error values greater than +6 are not existentially of interest; the +8 spanner exchanges error for a polylog improvement in construction speed. \n\nIn addition to finding spanner constructions that produce the smallest spanners possible, it is also in our interest that these construction algorithms be fast. Because spanners are meant to make graphs more compact, they are mainly of interest for very large graphs. Thus, for very large $n$, a polynomial time speedup to an algorithm for producing $+k$ spanners is highly desirable. There is a long line of work done in the interest of speeding up spanner constructions, including \\cite{ roditty2004dynamic, baswana2007simple,woodruff2010additive,knudsen2017additive, alstrup2019constructing}.\nFor a comprehensive survey, see \\cite{ahmed2020graph}. \n\nSome additive spanner size and efficiency results are summarized in Table 1 above. 
As mentioned above, the $+8$ spanner due to Knudsen \\cite{knudsen2017additive} exchanges error over the $+6$ spanner for a polylog improvement in construction time. In this paper, we present a polynomial speedup to Shiri Chechik's 4-additive spanner construction presented in \\cite{chechik2013new}, at no cost in error.\n\n\n\n\n\n\n\n\n\n\n\\begin{theorem} [Main Result]\nThere is an algorithm that constructs (with high probability) a $+4$ spanner on $\\widetilde{O}(n^{7\/5})$ edges in $\\widetilde{O}(mn^{3\/5})$ time.\n\\end{theorem}\n\nFor comparison, the bottleneck to Chechik's original construction is solving the All-Pairs-Shortest-Paths (APSP) problem; with combinatorial methods, this has an $O(mn)$ runtime, and with matrix multiplication methods, $O(n^\\omega)$ \\cite{seidel1995all} \\footnote{We note that when $m> n^{\\omega-0.4}$, the bottleneck in the algebraic case is instead the second stage of the algorithm described in section 1.2 ($\\widetilde{O}(mn^{2\/5})$).}. Currently, $\\omega <2.373$ \\cite{alman2021refined}. See Section 1.2 for a full runtime analysis of the original construction.\n\nOur speedup relies on avoiding the APSP problem. We do this by realizing that we can weaken the path finding methods in Chechik's original construction without compromising error. In particular, we introduce a new problem in Section 2.2, which we call the ``Weak Constrained Single Source Shortest Paths\" (weak CSSSP) problem. 
We give a Dijkstra-time solution to this problem and apply it to create our new +4 spanner construction in Section 2.3.\n\n\n\nWith matrix multiplication methods for APSP, Chechik's algorithm has an $\\widetilde{O}(n^\\omega)$ implementation \\cite{seidel1995all}.\nHowever, we note that there is a range of $m$ values where our combinatorial algorithm potentially outperforms algebraic methods; for example, if $\\omega \\geq 2.3$, then when $m < n^{1.699}$, the complexity of our algorithm is polynomially faster.\n\n\nFinally, we extend our weak CSSSP method to the construction of spanners in the weighted setting. While the unweighted setting is predominant in the study of additive spanners, the generalization was made to the weighted setting by Elkin et al. in \\cite{elkin2019almost}, and further work was done in \\cite{ahmed2020weighted,elkin2020improved,ahmed2021additive}. There are two different weighted generalizations of $+k$ spanners in the literature; the weaker generalization (introduced in \\cite{ahmed2020weighted}) requires the spanner $H$ to satisfy $\\texttt{dist}_H(s,t) \\leq \\texttt{dist}_G(s,t) + kW$ for each $s,t \\in V(G)$, where $W = \\max_{e \\in E(G)} w(e)$ is the maximum edge weight of $G$. These are called $+kW$ spanners. The stronger generalization (studied in \\cite{elkin2020improved, ahmed2021additive}), defined below, restricts the weight factor of the stretch to the maximal edge weight over shortest $s \\leadsto t$ paths, denoted $W(s,t)$.
For this reason, this generalization is known as ``local weighted error\".\n\n\n\\begin{definition}[Weighted Additive Spanner (Global Error)]\nA $+kW$ spanner of a graph $G$ is a subgraph $H$ that satisfies $\\texttt{dist}_H(s,t) \\leq \\texttt{dist}_G(s,t)+kW$ for all $s,t \\in V(G)$, where $W = \\max_{e \\in E}w(e)$.\n\\end{definition}\n\n\\begin{definition}[Weighted Additive Spanner (Local Error)]\nA $+kW(\\cdot,\\cdot)$ spanner of a graph $G$ is a subgraph $H$ that satisfies $\\texttt{dist}_H(s,t) \\leq \\texttt{dist}_G(s,t)+kW(s,t)$ for all $s,t \\in V(G)$, where $W(s,t)$ is the maximum edge weight along a shortest path $\\pi_G(s,t)$ in $G$.\n\\end{definition}\n\n\nIn \\cite{ahmed2021additive}, Ahmed et al. generalized the 4-additive spanner construction presented in \\cite{chechik2013new} to the (strong) weighted setting. Their algorithm constructs a $+4W(\\cdot,\\cdot)$ spanner on $\\widetilde{O}(n^{7\/5})$ edges, and can be implemented in $\\widetilde{O}(mn^{4\/5})$ time by using AliAbdi et al.'s Bi-SPP algorithm \\cite{AliAbdi2019} for constrained shortest path finding. Our contribution in Section 3 will be a $+4W(\\cdot,\\cdot) + \\epsilon W$ spanner in $\\widetilde{O}(mn^{3\/5})$ time, on $\\widetilde{O}_\\epsilon(n^{7\/5} \\epsilon^{-1})$ edges with high probability, for any error $1>\\epsilon >0$.\n\n\n\\begin{theorem}\nFor any weighted graph $G = (V,E)$ and $\\epsilon >0$, there is a $+4W(\\cdot,\\cdot)+\\epsilon W$ spanner on $\\widetilde{O}_\\epsilon(n^{7\/5})$ edges and computable in $\\widetilde{O}(mn^{3\/5})$ time, with high probability.\n\\end{theorem}\n\\noindent\nWe note that while Ahmed et al.'s construction in \\cite{ahmed2021additive} doesn't have the extra $+\\epsilon W$ stretch, our construction comes with a polynomial speedup. \n\n\\subsection{Notations}\n\nWe will use $\\pi_G(s,t)$ to refer to a canonical shortest path between two nodes $s,t \\in V(G)$.
$P(s,t)$ is a variable we use in some of our algorithms that describes some computed $s \\leadsto t$ path. For a node $v \\in V(G)$, $\\Gamma_G(v)$ denotes the neighborhood of $v$ (the set containing $v$ and its neighbors) in $G$. When $S$ is a set, $\\Gamma_G(S):= \\bigcup_{v \\in S} \\Gamma_G(v)$. $\\mathcal{P}(u,v)$ denotes the set of all paths between nodes $u,v$. If $A,B$ are subsets of $V$, then $\\mathcal{P}(A,B) = \\bigcup_{u\\in A, v\\in B}\\mathcal{P}(u,v)$\n\n\\subsection{Current Runtime of the +4 Spanner Construction}\n\n\nIn \\cite{chechik2013new}, Shiri Chechik presents a spanner construction that produces a +4 spanner on $\\widetilde{O}(n^{7\/5})$ edges on average with probability $>1-1\/n$. The runtime complexity, however, was not analyzed. In this section, we will describe Chechik's algorithm and then give a runtime analysis. Chechik's construction of a +4 spanner $H$ of an input graph $G$ can be split into three stages:\n\n\\begin{enumerate}[(i)]\n \\item All edges adjacent to ``light nodes\" (nodes with degree $< \\mu = \\lceil n^{2\/5}\\log^{1\/5}n\\rceil$) are added to $H$.\n \n \\item Nodes are sampled for inclusion into a set $S_1$ with probability $9\\mu\/n$. BFS (Breadth-First Search) trees for these nodes are computed, and the edges for these trees are added to $H$.\n \n \\item Nodes are sampled for inclusion into a set $S_2$ with probability $1\/\\mu$. For each ``heavy\" node (nodes with degree $\\geq \\mu$ in the original graph) $v$ that is not in $S_2$, but is adjacent to some node of $S_2$, we arbitrarily choose a neighbor $x \\in S_2$ and add the edge $(v,x)$ to $H$. These choices also define the ``clusters\" of the graph: for each $x \\in S_2$, $C(x)$ is the set containing $x$ and its adjacent heavy nodes that were paired with $x$ in the previous step. 
We now find, for each pair $x_1,x_2 \\in S_2$, the shortest path $P(s,t)$ subject to the constraint that $s \\in C(x_1)$, $t \\in C(x_2)$, and $P(s,t)$ has $\\leq \\mu^3\/n$ heavy nodes. We use $\\texttt{heavy\\_dist}_G(P(s,t))$ to refer to the number of heavy nodes on $P(s,t)$ in $G$.\n\n\\end{enumerate}\n\nAlgorithm \\ref{alg:chechik} gives the full details. Stage (i) takes $O(n)$ time and stage (ii) takes $\\widetilde{O}(mn^{2\/5})$ time with high probability. The computationally dominant step of this algorithm is the task of finding these shortest paths between the clusters in (iii) (unless algebraic methods are used, in which case stage (ii) dominates). For worst-case inputs, the expected number of clustered nodes (nodes in some cluster) is $\\Omega(n)$. Thus, this algorithm's runtime will be bottlenecked by the all-pairs-shortest-paths problem. We now show that the heavy-node constraint on the paths does not increase the runtime.\nTo see this, we note that it's enough to search over paths of the form\\footnote{Paths of this form will be important in our own construction (see Definition 5).}\n$$P(x_1, x_2) = (x_1,s)\\circ \\pi_G(s,t) \\circ (t,x_2)$$\n($\\circ$ denotes path concatenation), where $x_1,x_2$ range over $S_2$ and $s \\in C(x_1)$, $t \\in C(x_2)$. Specifically, we want the shortest path of this form for each pair of clusters $C(x_1),C(x_2)$, where $\\pi_G(s,t)$ is a constraint-satisfying ($\\leq\\mu^3\/n$ heavy nodes) path that is also a shortest path in $G$.\n\nTo find such paths, we first solve APSP ($O(mn)$ time combinatorially, $O(n^{\\omega})$ time algebraically) to get shortest paths $\\pi_G(s,t)$ for each $s,t \\in V$.
Then for all pairs of clustered nodes $s,t$, with cluster centers $x_1,x_2$ respectively: if $\\texttt{heavy\\_dist}_G(\\pi_G(s,t)) \\leq \\mu^3\/n$, set $P(x_1,x_2) = (x_1,s) \\circ \\pi_G(s,t) \\circ (t,x_2)$ as the current best path for the cluster pair $(C(x_1),C(x_2))$ if one hasn't yet been selected; otherwise, replace the current path iff $P(x_1,x_2)$ is shorter. This is an APSP-time process for finding the shortest valid canonical shortest path connecting each cluster pair; at the end of the process, we add the edges of these best paths. We note that because we're not searching over \\textit{all} paths, but only one set of canonical shortest paths, it's possible we fail to find valid (constraint-satisfying) paths between some cluster pairs. This does not impede correctness, as we only require these paths in the cases that they exist. \n\n\n\\begin{figure}[h]\n\\begin{algorithm}[H]\n \\label{alg:chechik}\\caption{Chechik's 4-Additive Spanner Construction \\cite{chechik2013new}}\n \\KwIn{$n$-node graph $G=(V,E)$}\\vspace{1em}\n $E'=$ All edges incident to light nodes\\\\\n Sample a set of nodes $S_1$ at random, every node with probability $9\\mu\/n$\\\\\n \\ForEach{node $x\\in S_1$}{\n Construct a BFS tree $T(x)$ rooted at $x$ spanning all vertices in $V$\\\\\n $E'=E'\\cup E(T(x))$\\\\\n }\\vspace{1em}\n %\n Sample a set of nodes $S_2$ at random, every node with probability $1\/\\mu$\\\\\n \\ForEach{heavy node $x$ so that $(\\{x\\}\\cup\\Gamma_G(x))\\cap S_2 = \\varnothing$}{\n Add all incident edges of $x$ to $E'$\\\\\n }\n \\ForEach{node $x\\in S_2$}{\n $C(x) = \\{x\\}$\n }\n \\ForEach{heavy node $v$ so that $v\\not\\in S_2$ and $\\Gamma_G(v)\\cap S_2\\neq\\varnothing$}{\n Arbitrarily choose one node $x\\in\\Gamma_G(v)\\cap S_2$\\\\\n $C(x) = C(x) \\cup \\{v\\}$\\\\\n $E' = E' \\cup \\{(x,v)\\}$\\\\\n }\n \\ForEach{pair of nodes $(x_1, x_2)$ in $S_2$}{\n Let $\\hat{\\mathcal{P}}=\\{P\\in\\mathcal{P}(C(x_1),C(x_2))\\ |\\ 
\\texttt{heavy\\_dist}_G(P)\\leq\\mu^3\/n\\}$\\\\\n Let $P(\\hat y_1, \\hat y_2)$ be the path in $\\hat{\\mathcal{P}}$ with minimal $\\left|P(\\hat y_1, \\hat y_2)\\right|$\n $E'=E'\\cup E(P(\\hat y_1, \\hat y_2))$\n }\n \\Return $H=(V,E')$\n\\end{algorithm}\n\\end{figure}\n\n\\FloatBarrier \n\n\n\n\n\\section{Fast Construction of the +4 Spanner}\n\n\nIn this section, we present our main result; a modification of Chechik's +4 spanner construction that has $\\widetilde{O}(mn^{3\/5})$ runtime with high probability, with no compromise to size or error.\n\n\\subsection{Constrained Shortest Paths}\n\nChechik's original algorithm required the computation of shortest paths subject to a constraint on the number of heavy nodes in the paths. This was a proxy for constraining the number of edges that had not yet been added to the spanner at that point in the construction. We will call CSSSP the ``Constrained Single Source Shortest Path Problem\". This is similar to the GB-SPP (``Gray-Vertices Bounded Shortest Path Problem\") presented by AliAbdi et al. in \\cite{AliAbdi2019}, but our constraint is on the edges instead of on the nodes.\n\n\\begin{definition}[CSSSP] The constrained single-source shortest paths problem is defined by the following algorithm contract:\n\\begin{itemize}\n \\item \\textbf{Input:} An (unweighted, undirected) graph $G=(V,E)$, a set of ``gray\" edges $E_g \\subset E$, a source vertex $s \\in V$, and a positive integer $g$.\n \n \\item \\textbf{Output:} For every $t \\in V$, a path $P(s,t)$ on $\\leq g$ gray edges, where $|P(s,t)| \\leq |P'(s,t)|$ for all $s \\leadsto t$ paths $P'(s,t)$ on $\\leq g$ gray edges.\n\\end{itemize}\n\\end{definition}\n\nOur modification to Chechik's construction will also make use of constrained shortest path finding, but the CSSSP problem is stronger than necessary for our purposes, and we can get away with a better runtime by solving a weaker problem. 
In this section, we define and give an efficient algorithm for a weaker variation on CSSSP, which we'll call weak CSSSP. In particular, we will only need to find constrained shortest paths from $s$ to $t$ in situations where a certain type of $s \\leadsto t$ constrained path already exists. We define these paths and call them \\textbf{g-short paths}\n\n\n\\begin{definition}[g-short path]\nFor two nodes $s,t$, an $s \\leadsto t$ path is called ``g-short\" if it has $5g$ gray edges. Note by construction that the weight of a path is its length plus $g^{-1}$ times the number of gray edges. Thus $w(P(s,t)) > |P(s,t)| + g^{-1}\\cdot 5g = |P(s,t)| + 5$. Furthermore, $w(P'(s,t)) < |P'(s,t)| + g^{-1} \\cdot g = |P'(s,t)| + 1$. But we also have, by the fact that $P(s,t)$ is the lowest-weight $s \\leadsto t$ path, that $w(P(s,t)) \\leq w(P'(s,t))$, and thus \n\n\\begin{align*}\n |P(s,t)|+5 &< |P'(s,t)| + 1\\\\\n &\\leq |\\pi_G(s',t')| + 2 + 1\\\\\n & \\leq |\\pi_G(s,t)| +2+ 2 + 1\\\\\n &= \\texttt{dist}_G(s,t)+5\n\\end{align*}\n\\noindent\nThis implies that $|P(s,t)| < \\texttt{dist}_G(s,t)$, which is a contradiction. Thus the computed path $P(s,t)$ has $\\leq 5g$ gray edges. We now show the g-optimality condition to complete the proof: let $P_g(s,t)$ be an arbitrary $s\\leadsto t$ path on $\\mu^3\/n$ heavy nodes, we have $\\texttt{dist}_H(s,t) \\leq \\texttt{dist}_G(s,t)+4$ with probability $\\geq 1-\\frac{1}{n^3}$\n\\end{lemma}\n\\begin{proof}\nIn this case, we claim that there is a $\\geq 1-1\/n^3$ probability that $\\pi_G(s,t)$ is adjacent to a BFS tree in $H$. $\\pi_G(s,t)$ has $>\\mu^3\/n$ heavy nodes, each of degree $\\geq \\mu$. Thus the sum of the degrees of nodes on $\\pi_G(s,t)$ is $> \\mu^4\/n$. By Lemma 5, this implies there are at least $\\mu^4\/3n$ nodes adjacent to $\\pi_G(s,t)$. Each node $v$ has probability $9\\mu\/n$ of being included in $S_1$, and thus having a shortest-path tree rooted at $v$ in $H$. 
Therefore the probability that \\textit{none} of these nodes adjacent to $\\pi_G(s,t)$ have such a tree rooted at them is\n \\begin{align*}\n &\\leq (1-9\\mu\/n)^{\\mu^4\/3n}\\\\\n &=(1-9\\log^{1\/5}n\/n^{3\/5})^{n^{3\/5}\\log^{4\/5}n\/3}\\\\\n &=(1-9\\log^{1\/5}n\/n^{3\/5})^{(n^{3\/5}\/9\\log^{1\/5}n)\\cdot 3\\log n}\\\\\n &\\leq \\left(\\frac{1}{e}\\right)^{3 \\log n}\\\\\n &\\leq 1\/n^3\n \\end{align*}\n where we used the fact that $(1-\\frac{1}{x})^x < 1\/e$ for $x\\geq 1$. Thus, we have a $> 1-1\/n^3$ probability of the existence of a node $r$ neighboring some $u \\in \\pi_G(s,t)$ such that a BFS tree rooted at $r$ is in $H$. When this is the case, we can simply take the $s\\leadsto r$ followed by the $r \\leadsto t$ shortest paths provided by the BFS tree, which has a stretch factor of 2 as shown below:\n \\begin{align*}\n \\texttt{dist}_H(s,t) &\\leq \\texttt{dist}_H(s,r) + \\texttt{dist}_H(r,t)\\\\\n &= \\texttt{dist}_G(s,r) + \\texttt{dist}_G (r,t)\\\\\n &\\leq \\texttt{dist}_G(s,u) + 1 + \\texttt{dist}_G (u,t)+1\\\\\n &= \\texttt{dist}_G(s,t)+2\n \\end{align*}\n\n\\end{proof}\n\n\\begin{lemma}\nFor any two uncovered nodes $s,t\\in V(G)$ such that the canonical shortest path $\\pi_G(s,t)$ has $\\leq \\mu^3\/n$ heavy nodes, we have $\\texttt{dist}_H(s,t) \\leq \\texttt{dist}_G(s,t)+4$ with probability $\\geq 1-\\frac{1}{n^3}$.\n\\end{lemma}\n\\begin{proof}\nBoth $s,t$ are uncovered and thus in $\\Gamma_G(S_2)$. Let $x_1,x_2 \\in S_2$ such that $s \\in \\Gamma_H(x_1)$ and $t \\in \\Gamma_H(x_2)$. We assume $x_1 \\neq x_2$ as this case is trivial.\\\\\n\nCall an edge $(u,v)$ ``heavy\" if both $u$ and $v$ are heavy nodes. 
Since $\\pi_G(s,t)$ is a (shortest) path on $\\mu^3\/n$ heavy nodes, it has $<\\mu^3\/n = g-2$ heavy edges, meaning $P'(x_1,x_2) := (x_1,s) \\circ \\pi_G(s,t) \\circ \\pi_G(t,x_2)$ has $1-1\/n$, $\\texttt{dist}_H(s,t) \\leq \\texttt{dist}_G(s,t) + 4$ holds for all $s,t \\in V(G)$.\n\n\\begin{lemma}\nThe subgraph $H$ produced by Algorithm \\ref{alg:best} has $O(n\\mu) = \\widetilde{O}(n^{7\/5})$ edges with high probability.\n\\end{lemma}\n\\begin{proof}\nWe can separate the addition of edges to $H$ into 4 types:\n\n\\begin{enumerate}\n \\item The edges incident to light nodes are added. Each light node is incident to $\\leq \\mu$ edges by definition, so $O(n\\mu) = \\widetilde{O}(n^{7\/5})$ edges are added.\n \n \\item The BFS tree of each node in $S_1$ is added. Each such tree contributes $O(n)$ edges. The probability of a node being added to $S_1$ is $9\\mu\/n$, so $|S_1|=\\Theta(\\mu)$ with high probability, and thus $O(\\mu \\cdot n) = \\widetilde{O}(n^{7\/5})$ edges are added with high probability\n \n \\item Edges adjacent to heavy nodes $v$ that are $\\notin \\Gamma_G(S_2)$ are added. Nodes are added to $S_2$ with probability $1\/\\mu$, thus the probability of $v$ being neither in $S_2$ nor adjacent to a node in $S_2$ is $\\leq (1-1\/\\mu)^{\\deg(v)+1}$. If $\\deg(v) = \\Omega(\\mu \\log n)$, then it is adjacent to a node in $S_2$ with high probability. Thus the number of edges added for $v$ is at most $1+\\deg(v)(1-1\/\\mu)^{\\deg(v)} < \\mu$ with high probability. Unioning over all $v$, this adds $O(n\\mu)=\\widetilde{O}(n^{7\/5})$ edges with high probability.\n \n \\item Edges on paths between $S_2$ nodes with $\\leq 5\\mu^3\/n$ heavy edges are added. $|S_2| = \\Theta(n\/\\mu)$ with high probability, yielding $\\Theta(n^2\/\\mu^2)$ pairs of $S_2$. All the light edges (edges adjacent to light nodes) have already been added to $H$, so each path between these $S_2$ pairs adds at most $\\Theta(\\mu^3\/n)$ edges. 
Unioning over the number of pairs, this adds $O(\\mu n)=\\widetilde{O}(n^{7\/5})$ edges with high probability. \\qedhere\n\\end{enumerate}\n\n\n\n\n\n\n\n\n\n\n\\end{proof}\n\n\n\\begin{lemma}\nOn an $n-$node $m-$edge input graph, Algorithm \\ref{alg:best} runs in $\\widetilde{O}(mn^{3\/5})$ with high probability.\n\\end{lemma}\n\\begin{proof}\n\nThe only two superlinear stages of the algorithm are (a) the generation of the $S_1$ Breadth-First Search trees, and (b) solving weak CSSSP for each node of $S_2$. For (a): nodes are sampled to be in $S_1$ with probability $9\\mu\/n$, so $|S_1| = O(\\mu) = \\widetilde{O}(n^{2\/5})$ with high probability. BFS has worst-case runtime $O(m)$. Thus this stage is $O(m\\mu)=\\widetilde{O}(mn^{2\/5})$ time. For (b): we showed in section 2.1 an algorithm that solves weak CSSSP in $\\widetilde{O}(m)$ time, which we run for each node of $S_2$. Multiplying this over the size of $S_2$ (which has size $\\widetilde{O}(n^{3\/5})$ with high probability), we get $\\widetilde{O}(mn^{3\/5})$ time with high probability.\n\n\n\\end{proof}\n\n\n\\noindent\nTheorem 1 now follows from Lemmas 6-9.\n\n\n\n\\section{Weighted +4 Spanner}\n\n\nIn this section, we prove Theorem 2 by first synthesizing a weighted analogue of the weak CSSSP problem from section 2.1, then we apply this in a similar fashion to create our $+4W(s,t)+\\epsilon W$ construction. We note that a $+4W$, $\\widetilde{O}(n^{7\/5})$ edge, $\\widetilde{O}(mn^{3\/5})$ time spanner construction is a very straightforward generalization of the methods presented in the unweighted part of this paper. However, $W(s,t)$ error is preferable (except in the case when all the edge weights are equal). \n\n\n\n\n\\subsection{Weighted Constrained Shortest Paths}\n\nIn our first step towards our weighted spanner, we will generalize weak CSSSP from section 2.1 to the weighted setting, and give a Dijkstra-time algorithm for solving it. 
This is also where the $\\epsilon$ error of the construction will be incurred, so our generalization will incorporate an error parameter. There are many ways to generalize weak CSSSP to the weighted setting, but we choose this one as it is what we've been able to find a use for in the weighted spanner construction.\n\n\\begin{definition}[Weighted Weak CSSSP (with error)] The weighted weak constrained single-source shortest paths problem with error is defined by the following algorithm contract:\n\n\\begin{itemize}\n \\item \\textbf{Input:} An (undirected) graph $G=(V,E)$, a set of ``gray\" edges $E_g \\subset E$, a source vertex $s \\in V$, a weight function $w:E\\to \\mathbb{R}^+$, an error parameter $1>\\epsilon>0$, and a positive integer $g$.\n \n \\item \\textbf{Output:} For every $t \\in V$, a path $P(s,t)$ on $\\leq 5g\/\\epsilon$ gray edges, satisfying the following:\n \\begin{itemize}\n \\item If an $s\\leadsto t$ g-short\\footnote{We reuse the same definition from Section 2.1.} path exists, then the \\textbf{near g-optimality condition} holds: $w(P(s,t)) < w(P_g(s,t)) + \\epsilon W$ for any $s\\leadsto t$ path $P_g(s,t)$ on $\\epsilon>0$\\\\\n Positive integer $g$\n }\\vspace{1em}\n \n Let $W = \\max_{e \\in E} w(e)$\n \n \n \\ForEach{edge $(u,v) \\in E$}{\n $w'(u,v)\\leftarrow w(u,v)+\\epsilon W g^{-1}$ if $(u,v)$ is gray, and $w'(u,v)\\leftarrow w(u,v)$ otherwise.\n } \n \n \n Run Dijkstra's algorithm on $G$ with source node $s$ and weight function $w'$ to get paths $P(s,t)$ for each other $t \\in V$\\\\\n \\Return these $P(s,t)$ paths \n\n\\end{algorithm}\n\\end{figure}\n\n\n\\begin{theorem}\nAlgorithm \\ref{alg:deltaweighted} solves Weighted Weak CSSSP with error in $O(m + n \\log n) = \\widetilde{O}(m)$ time.\n\\end{theorem}\n\\begin{proof}\n\nThe time complexity follows immediately from the complexity of Dijkstra's algorithm, which is the dominant stage of the algorithm.
We now prove correctness.\n\nLet $t \\in V$ and suppose a g-short $s\\leadsto t$ path $P'(s,t)=(s,s') \\circ \\pi_G(s',t') \\circ (t',t)$ exists. We will first show that $P(s,t)$ has $\\leq 5g\/\\epsilon$ gray edges. Suppose, for contradiction, that it has $>5g\/\\epsilon$ gray edges. Thus\n\n\n\\begin{align*}\n w(P(s,t)) + 5W &< w'(P(s,t))\\\\\n &\\leq w'(P'(s,t))\\\\\n &< w(P'(s,t)) + g\\cdot (\\epsilon W g^{-1}) = w(P'(s,t)) + \\epsilon W\\\\\n &\\leq w(P'(s,t)) + W\\\\\n &\\leq w(\\pi_G(s',t'))+2W+W\n\\end{align*}\n\nTherefore $w(P(s,t)) + 4W < w(\\pi_G(s',t')) + 2W$. Since $\\pi_G(s',t')$ is a shortest path, $w(\\pi_G(s',t')) \\leq w(P(s,t)) + 2W$. This implies $w(P(s,t)) +4W < w(P(s,t)) + 4W$, a contradiction.\\\\\n\n\nNow we prove near g-optimality. Let $P_g(s,t)$ be an arbitrary $s\\leadsto t$ path on $ 0$, on $\\widetilde{O}_\\epsilon(n^{7\/5}\\epsilon^{-1})$ edges in $\\widetilde{O}(mn^{3\/5})$ time. This will make use of our weak CSSSP generalization to the weighted setting with error discussed in the previous subsection, which is where the $+\\epsilon W$ stretch is incurred.\n\nThe construction differs from Algorithm \\ref{alg:best} in three important ways: (i): instead of adding all the edges of light nodes to the spanner $H$, we perform a \\textbf{$\\mu$-lightweight initialization}; that is, we add the $\\mu$ lightest edges of each node to $H$ (breaking ties arbitrarily), a technique introduced in \\cite{ahmed2021additive}.\n\n\\begin{definition}[$d-$lightweight initialization \\cite{ahmed2021additive}]\nA $d-$lightweight initialization $H = (V,E')$ of a weighted graph $G = (V,E)$ is a subgraph created by selecting the $d$ lightest edges of every node of $G$, breaking ties arbitrarily.\n\\end{definition}\n\n(ii): Instead of computing standard weak CSSSP (as seen in Section 2.2) on the $S_2$ nodes, we compute our new weighted version with error. (iii): We now omit the step of connecting ``heavy nodes\" with nodes of $S_2$.
Instead, we rely on these connections happening ``naturally\" due to our $\\mu$-lightweight initialization, whose properties we will exploit in the proofs below.\n\n\\begin{figure}[h]\n\\begin{algorithm}[H]\n \\label{alg:weightedspan}\\caption{$+4W(\\cdot,\\cdot) + \\epsilon W$ Spanner}\n \\KwIn{$n$-node graph $G=(V,E)$\\\\\n\tWeight function $w:E \\mapsto \\mathbb{R}^+$\\\\\n\tError parameter $1>\\epsilon>0$\\\\\n\t}\\vspace{1em}\n Let $H_0=(V,E')$ be a $\\mu$-lightweight initialization of $G$.\\\\\n \n Sample a set of nodes $S_2$ at random, every node with probability $1\/\\mu$\\\\\n \\ForEach{node $x$ so that $(\\{x\\}\\cup\\Gamma_{H_0}(x))\\cap S_2 = \\varnothing$}{\n Add all incident edges of $x$ to $E'$\\\\\n }\n Sample a set of nodes $S_1$ at random, every node with probability $9\\mu\/n$\\\\\n \\ForEach{node $x\\in S_1$}{\n Construct a shortest-path tree $T(x)$ rooted at $x$ spanning all vertices in $V$\\\\\n $E'=E'\\cup E(T(x))$\\\\\n }\\vspace{1em}\n %\n \\ForEach{node $x_1 \\in S_2$}{\n Compute weighted weak CSSSP with error on $G$, with $g = \\mu^3\/n+2$, $x_1$ as the source vertex, $E_g = E \\setminus E'$, and $\\epsilon$ as the error parameter, to get paths $P(x_1,x_2)$ for each $x_2 \\in V$.\\\\\n Add $E(P(x_1,x_2))$ to $E'$ for each $x_2 \\in S_2$\n }\n \n\n \\Return $H=(V,E')$\n\\end{algorithm}\n\\end{figure}\n\n\n\\noindent\nNote that $H_0$ in Algorithm \\ref{alg:weightedspan} denotes the $\\mu-$lightweight initialization of $G$, before any additional edges are added to $E'$. We will refer to $H_0$ in our proofs as it will allow us to make use of the fact that its edges are only the ones from the lightweight initialization. We now prove correctness by the following series of lemmas. We will make use of the following theorem due to Ahmed et al.\n\n\n\\begin{theorem}[\\cite{ahmed2020weighted}]\nLet $H$ be a $d-$lightweight initialization of an undirected, weighted graph $G$.
Then if a shortest path $\\pi_G(s,t)$ is missing $l$ edges in $H$, there are $\\Omega(dl)$ different nodes adjacent to $\\pi_G(s,t)$ in $H$.\n\\end{theorem}\n\n\n\\begin{lemma}\nFor any two nodes $s,t\\in V(G)$ such that the canonical shortest path $\\pi_G(s,t)$ is missing $>\\mu^3\/n$ edges in $H_0$, we have $\\texttt{dist}_H(s,t) \\leq \\texttt{dist}_G(s,t)+4W(s,t)$ with probability $\\geq 1-\\frac{1}{n^3}$.\n\\end{lemma}\n\n\\begin{proof}\n\nThe lemma's hypothesis implies that there are $>\\mu^3\/n$ \\textit{nodes} along $\\pi_G(s,t)$ with missing incident edges in $H_0$. Call the set of these nodes $S$. Since $H_0$ is a $\\mu-$lightweight initialization, it follows that nodes missing an edge in $H_0$ must have degree $>\\mu$ in $G$. We utilize the above theorem from \\cite{ahmed2020weighted}, which allows us to conclude that there are $\\Omega(\\mu^4\/n)$ different nodes adjacent to $\\pi_G(s,t)$ in $H_0$. \n\n\n\nAs shown in Lemma 4 (the details of which we won't repeat), we have a $> 1-1\/n^3$ probability of one of these neighbors being a member of $S_1$. In this case, let $r\\in S_1$ be adjacent in $H_0$ to a node $q$ of $\\pi_G(s,t)$ that is missing an incident path edge. Since the edge $(q,r)$ was kept in $H_0$ while an edge of $\\pi_G(s,t)$ incident to $q$ was not, it follows from the fact that $H_0$ is a $\\mu-$lightweight initialization that $w(q,r)$ is at most the weight of the missing edge incident to $q$ on this path, i.e., $w(q,r) \\leq W(s,t)$.\n\nSince $r \\in S_1$, it is the root of a shortest-path tree of $G$ included in $H$. Thus, since $\\pi_G(s,r)$ is a shortest path, $w(\\pi_G(s,r)) \\leq w(\\pi_G(s,q)) + w(q,r) \\leq w(\\pi_G(s,q))+W(s,t)$. Likewise, $w(\\pi_G(r,t)) \\leq w(\\pi_G(q,t)) + W(s,t)$.
Thus, the path $\\pi_G(s,r) \\circ \\pi_G(r,t)$, which belongs to $H$, witnesses that\n\n\\begin{align*}\n\\texttt{dist}_H(s,t) &\\leq w(\\pi_G(s,q))+W(s,t) + w(\\pi_G(q,t)) + W(s,t)\\\\\n&= w(\\pi_G(s,t)) + 2W(s,t)\\\\\n&= \\texttt{dist}_G(s,t) + 2W(s,t)\n\\end{align*} \n\nThus with probability $>1-1\/n^3$, we achieve the required stretch.\n\n\\end{proof}\n\n\\begin{lemma}\nFor any two nodes $s,t\\in V(G)$ such that the canonical shortest path $\\pi_G(s,t)$ is missing $< \\mu^3\/n$ edges in $H_0$, we have $\\texttt{dist}_H(s,t) \\leq \\texttt{dist}_G(s,t)+4W(s,t) + \\epsilon W$.\n\\end{lemma}\n\\begin{proof}\nJust as in the unweighted case, we can assume WLOG that $s$ and $t$ are uncovered, and thus are each in the neighborhood of an $S_2$ node (all nodes $\\notin \\Gamma(S_2)$ are covered by the algorithm). Let $x_1,x_2 \\in S_2$ such that $s \\in \\Gamma(x_1)$ and $t \\in \\Gamma(x_2)$. We can furthermore assume that the first and last edges of $\\pi_G(s,t)$ are missing from $H_0$; otherwise, we could simply push our analysis to the first\/last nodes on $\\pi_G(s,t)$ that are severed from the path in $H_0$.\n\nSince $\\pi_G(s,t)$ is a shortest path with $< g-2 = \\mu^3\/n$ missing (gray) edges, $P'(x_1,x_2):=(x_1,s')\\circ \\pi_G(s',t') \\circ (t',x_2)$ is a g-short path. Thus by weighted weak CSSSP with error, our construction yields a path $P(x_1,x_2)$ with $w(P(x_1,x_2)) \\leq w(P'(x_1,x_2)) + \\epsilon W$. Furthermore, by the fact that $H_0$ is a $\\mu$-lightweight initialization, the edge $(x_1,s)$ must be lighter than the first edge of $\\pi_G(s,t)$, and $(t,x_2)$ must be lighter than the last edge of $\\pi_G(s,t)$. Thus $w(x_1,s),w(t,x_2) \\leq W(s,t)$.\\\\\n\nTherefore, $w(P(x_1,x_2)) \\leq w(\\pi_G(s,t))+ 2W(s,t) + \\epsilon W$.
Thus, the path $(s,x_1) \\circ P(x_1,x_2) \\circ (x_2,t)$ in $H$ witnesses that\n\n\\begin{align*}\n\\texttt{dist}_H(s,t) &\\leq w(s,x_1) + w(P(x_1,x_2)) + w(x_2,t)\\\\\n&\\leq 2W(s,t) + w(P(x_1,x_2))\\\\\n&\\leq 2W(s,t) + 2W(s,t) + \\epsilon W + w(\\pi_G(s,t))\\\\\n&= \\texttt{dist}_G(s,t) + 4W(s,t) + \\epsilon W\n\\end{align*}\n\\noindent\nThus we deterministically have the required stretch for such node pairs.\n\n\\end{proof}\n\n\\noindent\nCorrectness now follows by the above two lemmas and the union bound.\n\n\\noindent\nWe now show that $H$ has the desired edge bound. We note that the details of this proof are mostly the same as the corresponding proof for our unweighted construction.\n\n\\begin{lemma}\n$H$ has (with high probability) $\\widetilde{O}_\\epsilon(n^{7\/5}\\epsilon^{-1})$ edges.\n\\end{lemma}\n\\begin{proof}\nAlgorithm \\ref{alg:weightedspan} adds edges to $H$ in 4 stages:\n\\begin{enumerate}[(i)]\n \\item We begin with a $\\mu-$lightweight initialization of $G$, which adds $\\widetilde{O}(n^{2\/5})$ edges per node, giving a total of $\\widetilde{O}(n^{7\/5})$ edges. This covers all light nodes (of degree $<\\mu$).\n \n \n \n \\item We add the edges of a shortest path tree for each node of $S_1$. The probability of a node's inclusion into $S_1$ is $9\\mu\/n$, thus $|S_1| = \\widetilde{O}(n^{2\/5})$ with high probability. Adding $O(n)$ edges for each of the $S_1$ nodes thus yields $\\widetilde{O}(n^{7\/5})$ edges added with high probability.\n \n \n \n \\item We add all the edges of heavy nodes $v$ (of degree $>\\mu$) not in the neighborhood of any $S_2$ node. Nodes are added to $S_2$ with probability $1\/\\mu$, thus the probability of $v$ being neither in $S_2$ nor adjacent to a node in $S_2$ is $\\leq (1-1\/\\mu)^{\\deg(v)+1}$. If $\\deg(v) = \\Omega(\\mu \\log n)$, then it is adjacent to a node in $S_2$ with high probability. Thus the number of edges added for $v$ is at most $1+\\deg(v)(1-1\/\\mu)^{\\deg(v)} < \\mu$ with high probability. 
Summing over all $v$, this adds $O(n\\mu)=\\widetilde{O}(n^{7\/5})$ edges with high probability.\n \n \\item Edges on paths between $S_2$ nodes with $\\leq 5\\epsilon^{-1}\\mu^3\/n$ additional edges are added to $H$. $|S_2| = \\Theta(n\/\\mu)$ with high probability, yielding $\\Theta(n^2\/\\mu^2)$ pairs of $S_2$. Since each path between these pairs incurs $\\leq 5\\epsilon^{-1}\\mu^3\/n$ extra edges, this adds $O_\\epsilon(\\mu n \\epsilon^{-1})=\\widetilde{O}_\\epsilon(n^{7\/5}\\epsilon^{-1})$ edges with high probability.\n\\end{enumerate}\n\\end{proof}\n\n\nFinally, we show that Algorithm \\ref{alg:weightedspan} has the desired runtime.\n\n\\begin{lemma}\nOn an $n$-node, $m$-edge input graph, Algorithm \\ref{alg:weightedspan} runs in $\\widetilde{O}(mn^{3\/5})$ time with high probability.\n\\end{lemma}\n\\begin{proof}\n\nThe only two superlinear stages of the algorithm are (a) the generation of the $S_1$ shortest-path trees, and (b) solving weak weighted CSSSP with error for each node of $S_2$. For (a): nodes are sampled to be in $S_1$ with probability $9\\mu\/n$, so $|S_1| = O(\\mu) = \\widetilde{O}(n^{2\/5})$ with high probability. Dijkstra has worst-case runtime $\\widetilde{O}(m)$. Thus this stage takes $\\widetilde{O}(mn^{2\/5})$ time. For (b): we showed in Section 3.1 an algorithm that solves weak CSSSP in $\\widetilde{O}(m)$ time, which we run for each node of $S_2$. Multiplying this over the size of $S_2$ (which is $\\widetilde{O}(n^{3\/5})$ with high probability), we get $\\widetilde{O}(mn^{3\/5})$ time.\n\n\n\\end{proof}\n\n\\noindent\nBy Lemmas 12--15, we have now proven the main result of this chapter (Theorem 2).\n\n\n\n\n\\section{Conclusion}\n\nIn this paper we have presented a new state-of-the-art $\\widetilde{O}(mn^{3\/5})$ complexity result for constructing the +4 spanner, doing so by solving a novel pathfinding problem (weak CSSSP).
This fills in a literature gap that has existed between the +2, +6, and +8 spanners, as this is the first paper studying the efficiency of the +4 spanner construction. We also extended our methods to the weighted setting, where we were able to derive a construction for $+4W(s,t)+\\epsilon W$ spanners, with the same runtime and an extra $\\epsilon^{-1}$ factor in the spanner size.\n\nWe believe that further polynomial-time improvements to our construction would require a polynomial reduction in the number of $S_2$ nodes on which shortest path trees are computed. The next bottleneck in the algorithm is the time needed to build the BFS trees rooted at the $S_1$ nodes, which is $\\widetilde{O}(mn^{2\/5})$ with high probability. For the weighted spanner construction, we hope to see a reduction in error to $(4+\\epsilon)W(s,t)$ or $+4W(s,t)$ without a compromise to runtime, eliminating global error entirely.\n\n\n\n\n\n\n\n\\section*{Acknowledgements}\n\nMany thanks to Greg Bodwin, without whose guidance this paper would not have been possible. I also thank fellow students Eric Chen and Cheng Jiang, with whom I discussed an earlier version of the paper.\n\n\\bibliographystyle{alpha}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe concept of fuzzy transform ($F$-transform) was first introduced by Perfilieva \\cite{per}, a theory that attracted the interest of many researchers. It has since been greatly expanded upon, and a new chapter in the theory of semi-linear spaces has been opened. The main idea of the $F$-transform is to factorize (or fuzzify) the precise values of independent variables by using a proximity relation, and to average the precise values of dependent variables to an approximation value.
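The factorize-and-average idea just described can be made concrete with a small numerical sketch of the classical (discrete) direct and inverse $F$-transform over a uniform triangular fuzzy partition. The partition size, the sample function $f(x)=x^2$ and the sampling grid below are our own illustrative choices, not taken from the text:

```python
def triangular_partition(a, b, n):
    """Uniform triangular fuzzy partition of [a, b]: nodes u_0..u_{n-1} and
    basic functions A_i satisfying the Ruspini condition sum_i A_i(x) = 1."""
    h = (b - a) / (n - 1)
    nodes = [a + i * h for i in range(n)]
    def A(i, x):
        return max(0.0, 1.0 - abs(x - nodes[i]) / h)
    return nodes, A

def direct_ft(f, xs, A, n):
    """Discrete direct F-transform: F_i = sum_j f(x_j)A_i(x_j) / sum_j A_i(x_j),
    i.e. each component averages f over the fuzzy neighborhood of node i."""
    return [sum(f(x) * A(i, x) for x in xs) / sum(A(i, x) for x in xs)
            for i in range(n)]

def inverse_ft(F, x, A, n):
    """Inverse F-transform: blend the components back with the basic functions."""
    return sum(F[i] * A(i, x) for i in range(n))

n = 7
xs = [j * 3.0 / 300 for j in range(301)]           # dense samples of [0, 3]
nodes, A = triangular_partition(0.0, 3.0, n)
F = direct_ft(lambda x: x * x, xs, A, n)           # components of f(x) = x^2
err = max(abs(x * x - inverse_ft(F, x, A, n)) for x in xs)
print([round(c, 3) for c in F], round(err, 3))
```

The components $F_i$ increase along the nodes of $x^2$, and the reconstruction error shrinks as the partition is refined (it is largest near the endpoints of the interval, where the averaging pulls the components inward).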
The theory of $F$-transform has already been extended from real-valued to lattice-valued functions (cf., \\cite{{per},{irin1}}), from fuzzy sets to parametrized fuzzy sets \\cite{st} and from a single variable to two (or more) variables (cf., \\cite{{mar2},{mar}, {mar1}, {step}}). Recently, several studies have begun to look into $F$-transforms based on any $L$-fuzzy partition of an arbitrary universe (cf., \\cite{kh1,jir,mo, mocko, anan,spt1,rus,spt}), where $L$ is a complete residuated lattice. Among these studies, the relationships between $F$-transforms and semimodule homomorphisms were introduced in \\cite{jir}; a categorical approach to $L$-fuzzy partitions was studied in \\cite{mo}; while the relationships between $F$-transforms and similarity relations were discussed in \\cite{mocko}. Further, in \\cite{anan}, an interesting link among $F$-transforms, $L$-fuzzy topologies\/co-topologies and $L$-fuzzy approximation operators (which are concepts used in the study of an operator-oriented view of fuzzy rough set theory) was established, while in \\cite{spt1}, the relationship between fuzzy pretopological spaces and spaces with $L$-fuzzy partition was shown. Also, in a different direction, a generalization of $F$-transforms was presented in \\cite{rus} by considering the so-called $Q$-module transforms, where $Q$ stands for a unital quantale, while $F$-transforms based on a generalized residuated lattice were studied in \\cite{spt}. Further, classes of $F$-transforms taking into account the well-known classes of implicators, namely $R$-, $S$- and $QL$-implicators, were discussed in \\cite{tri}.
Several applications of $F$-transforms have been carried out, e.g., in trend-cycle estimation \\cite{holc}, data compression \\cite{hut}, numerical solution of partial differential equations \\cite{kh}, scheduling \\cite{li}, time series \\cite{vil}, data analysis \\cite{no}, denoising \\cite{Ir}, face recognition \\cite{roh}, neural network approaches \\cite{ste} and trading \\cite{to}.\\\\\\\\\n{It is easy to see how the dynamics of the species would be impacted when they interact. Many studies have been conducted on the problem of the food chain. If the growth rate of one species increases while that of the other decreases during their interaction, we say they are in a predator-prey situation. Such a situation arises when one species (predator) feeds on another species (prey). The Lotka-Volterra model is the first fundamental system representing the interaction between prey and predator species. During the First World War, the Lotka-Volterra model was developed to explain the oscillatory levels of certain fish in the Adriatic Sea \\cite{elet,murray}. Several features of predator-prey models have been the subject of many mathematical and ecological studies. There are many factors, such as functional response \\cite{liu}, competition \\cite{cush,gak} and cooperation \\cite{elet}, affecting the dynamics of predator-prey models. The stability and other dynamical behavior of predator-prey models can be found in \\cite{aziz,bell,liou,hsu,gakk,brau,oat,past,jp}. The parameters in all the above-cited models are crisp in nature. However, in real-world ecosystems, many parameters may oscillate simultaneously with periodically varying environments. They also change due to natural and human-caused events, such as fires, earthquakes, climate warming, financial crises, etc. As a result, environmental variables significantly impact the interaction process between the species and their dynamics.
Thus a fuzzy mathematical model is more effective than the crisp model. Therefore we have considered fuzzy set theory to create the prey-predator model. Specifically, the imprecise parameters are replaced by fuzzy numbers in the fuzzy approach. The analysis of the behavior of most phenomena is often based on mathematical models in the form of differential equations. Fuzzy differential equations are equations in which uncertainties are modeled by fuzzy sets (possibility). In recent years, the dynamical behavior of prey-predator systems whose mathematical models are fuzzy differential equations has been analyzed. Obtaining the solution of fuzzy differential equations has been investigated under the concepts of the H-derivative, the SGH-derivative \\cite{bede}, the gH-derivative and g-derivative \\cite{bede1}, H$_2$-differentiability \\cite{maz}, and the gr-derivative \\cite{mazan}. Also, in \\cite{hoa,hoa1,long,long1,long2}, fractional calculus has been used to discuss fuzzy differential equations. In \\cite{sun1}, the stability of fuzzy differential equations based on the second kind of Hukuhara derivative has been investigated in applications.\\\\\\\\\nAs long as the underlying fuzzy functions are highly generalized Hukuhara differentiable, the approach described in \\cite{sun} could be used for stability analysis. Thus the limitation of the above-mentioned method relates to the existence of a highly generalized Hukuhara derivative. As the method presented in \\cite{sun} is based on what is known as Fuzzy Standard Interval Arithmetic (FSIA), it has a flaw known as the UBM phenomenon (see \\cite{mazan} for more details). A novel fuzzy derivative idea, termed the granular derivative, in terms of relative-distance-measure fuzzy interval arithmetic (RDM-FIA), was presented in \\cite{mazan} to overcome the drawbacks of the FSIA-based approach.
A new representation of conventional membership functions, called horizontal membership functions (HMFs), proposed in \\cite{piegat}, was used to construct the RDM-FIA. Based on the findings in \\cite{land,piegat1,piegat2,son}, it has been established that the RDM-FIA is a more useful tool in applications than the FSIA.} \\\\\\\\ \nDifferential equations cannot always be solved analytically, requiring numerical methods. Numerical methods for solving differential equations have therefore been developed extensively in scientific research. In this connection, differential equations are successfully solved using fuzzy techniques. The fuzzy transform ($F$-transform) introduced by Perfilieva \\cite{per} is one such fuzzy technique. An approximation method based on the $F$-transform for second-order differential equations was introduced in \\cite{chen}. In \\cite{ali,kh,stev,p}, numerical methods based on the $F$-transform to solve initial value problems and boundary value problems were introduced. Also, a numerical method based on the $F$-transform to solve a class of delay differential equations was introduced in \\cite{tom}.\\\\\\\\\nIt is to be pointed out here that the numerical solution of a differential equation or a fuzzy differential equation based on the $F$-transform (by using the concept of level sets) has been studied, but the numerical solution, based on the granular $F$-transform, of a fuzzy mathematical model represented by fuzzy differential equations under granular differentiability, in which all parameters and initial conditions can be uncertain, is yet to be developed.
Specifically,\n\\begin{itemize}\n\\item[$\\bullet$] we introduce the concepts of granular $F$-transform and granular inverse $F$-transform associated with a fuzzy function and discuss some basic results by using the concept of granular metric;\n\\item[$\\bullet$] we formulate a fuzzy prey-predator model and investigate the equilibrium points and their stability in terms of fuzzy numbers;\n\\item[$\\bullet$] we establish a numerical method based on the granular $F$-transform for the fuzzy prey-predator model; and\n\\item[$\\bullet$] we present a comparison of two numerical solutions with the exact solution.\n\\end{itemize}\n\\section{Preliminaries} Herein, the ideas associated with fuzzy numbers, horizontal membership functions, granular differentiability and fuzzy partitions are recalled (cf., \\cite{maz,mazan,naj,naja,per,kh}, for details). Throughout this chapter, $E^1$ denotes the collection of fuzzy numbers defined on the real line $\\mathbb{R}$ and $E^m=E^1\\times E^1\\times...\\times E^1$. The $\\alpha$-level set of $\\widehat{a}\\in E^1$ is $\\widehat{a}^\\alpha=[\\underline{a}^\\alpha,\\overline{a}^\\alpha]$, where $\\underline{a}^\\alpha $ and $ \\overline{a}^\\alpha$ are the left and right end points of $\\widehat{a}$.\n\\begin{def1}\\label{grhor}\nFor a fuzzy number $\\widehat{a}: [a_1,a_2] \\subseteq \\mathbb{R} \\rightarrow [0, 1]$ and $\\widehat{a}^\\alpha=[\\underline{a}^\\alpha,\\overline{a}^\\alpha]$, the\n{\\bf horizontal membership function} is a function $ a^{gr} : [0, 1] \\times [0, 1] \\rightarrow [a_1, a_2]$ such that $a^{gr}(\\alpha, \\mu_a)=\\underline{a}^\\alpha+(\\overline{a}^\\alpha-\n\\underline{a}^\\alpha)\\mu_a$, where $\\mu_a\\in [0, 1]$ is called the relative-distance-measure (RDM) variable.\n\\end{def1}\n\\begin{rem}\\label{grr1} (i) The horizontal membership function of $\\widehat{a}\\in E^1$, i.e., $a^{gr}(\\alpha, \\mu_{a})$, is also denoted by $\\mathbb{K}(\\widehat{a})$.
Also, the $\\alpha$-level set of $\\widehat{a}$ can be given by \\[\\mathbb{K}^{-1}(a^{gr}(\\alpha, \\mu_{a}))=\\widehat{a}^\\alpha=[\\inf\\limits_{\\beta\\geq\\alpha}\\min\\limits_{\\mu_{a}}a^{gr}(\\beta, \\mu_{a}),\\sup\\limits_{\\beta\\geq\\alpha}\\max\\limits_{\\mu_{a}}a^{gr}(\\beta, \\mu_{a})].\\]\n\\begin{itemize}\n \\item[(ii)] For the fuzzy numbers $\\widehat{a_1},\\widehat{a_2}\\in E^1$, $\\widehat{a_1}=\\widehat{a_2}$ iff $\\mathbb{K}(\\widehat{a_1})=\n \\mathbb{K}(\\widehat{a_2})$ and $\\widehat{a_1}\\geq \\widehat{a_2}$ if $\\mathbb{K}(\\widehat{a_1})\\geq\n \\mathbb{K}(\\widehat{a_2}),\\,\n \\forall\\,\\mu_{a_1}=\\mu_{a_2}\\in [0,1]$. \n \\item[(iii)] Let each of addition, subtraction, multiplication and division operations between fuzzy numbers $\\widehat{a_1},\\widehat{a_2}\\in E^1$ be represented by $\\odot$. Therefore $\\widehat{a_1}\\odot\\widehat{a_2}=\\widehat{m}\\in E^1$ iff $\\mathbb{K}(\\widehat{m})=\n \\mathbb{K}(\\widehat{a_1})\\odot\\mathbb{K}(\\widehat{a_2})$.\n \\item[(iv)] Let $\\widehat{a_1},\\widehat{a_2},\\widehat{a_3}\\in E^1$. 
Then we have\n \\item[$\\bullet$] $-\\widehat{a_1}-\\widehat{a_2}=-(\\widehat{a_1}+\\widehat{a_2})$,\n \\item[$\\bullet$] $\\widehat{a_1}-\\widehat{a_1}=0$,\n \\item[$\\bullet$] $\\widehat{a_1}\\div\\widehat{a_1}=1$, and\n \\item[$\\bullet$] $(\\widehat{a_1}+\\widehat{a_2})\\widehat{a_3}=\\widehat{a_1}\\widehat{a_3}+\\widehat{a_2}\\widehat{a_3}$.\n \\item[(v)] A fuzzy function is a generalization of a classical function in which the domain or range, or both, is a subset of the set of fuzzy numbers.\n \\end{itemize}\n \\end{rem}\n\\begin{def1}\n The horizontal membership function of $\\widehat{g}(\\widehat{p_{1}}(u),\\widehat{p_{2}}(u),...,\\widehat{p}_m(u))$ is defined by $\\mathbb{K}(\\widehat{g}(\\mathbb{K}(\\widehat{p}_1(u)),\\mathbb{K}(\\widehat{p_{2}}(u)),...,\\mathbb{K}(\\widehat{p}_m(u))))$, where $\\widehat{g}:E^m\\rightarrow E^1$ and $\\widehat{p_i}:[a_1,a_2]\\subseteq\\mathbb{R}\\rightarrow E^1,\\,i=1,2,...,m$, are fuzzy functions.\n\\end{def1}\n\\begin{def1}\\label{grmetric}\nA function $d^{gr}:E^1\\times E^1\\rightarrow \\mathbb{R}^+\\cup \\{0\\}$ \nis called the {\\bf granular metric} if \\[d^{gr}(\\widehat{a_1},\\widehat{a_2})=\\sup\\limits_{\\alpha\\in[0,1]}\\max\\limits_{\\mu_{a_1},\\mu_{a_2}\\in[0,1]}|a_1^{gr}(\\alpha,\\mu_{a_1})-a_2^{gr}(\\alpha,\\mu_{a_2})|,\\,\\forall\\,\\widehat{a_1},\\widehat{a_2}\\in E^1.\\]\n\\end{def1}\n\\begin{def1}\\label{grcont}\nA fuzzy function $\\widehat{g}: [a_1,a_2] \\subseteq \\mathbb{R} \\rightarrow E^1$ is called {\\bf granular continuous} (gr-continuous) if for all $u_0\\in[a_1,a_2],\\epsilon>0$ there is a $\\delta>0$ such that $$d^{gr}(\\widehat{g}(u),\\widehat{g}(u_0))<\\epsilon, \\,\\,\\mbox{whenever}\\, |u-u_0|<\\delta.$$\n\\end{def1}\n\\begin{def1} \\label{grgrdif}\nA fuzzy function $\\widehat{g}: [a_1,a_2] \\subseteq \\mathbb{R} \\rightarrow E^1$ is called {\\bf granular differentiable} (gr-differentiable) at the point\n$u\\in [a_1,a_2]$ if there is a fuzzy number $\\mathcal{D}^{gr} \\widehat{g} 
(u)\\in E^1$ such that the following limit exists:\n\\[\\lim_{h\\to 0}\\frac{\\widehat{g}(u + h)-\\widehat{g}(u)}{h}=\\mathcal{D}^{gr} \\widehat{g} (u).\\]\n\\end{def1}\n\\begin{pro}\\label{grgrdif1}\nAt any point $u\\in [a_1,a_2]$, a fuzzy function $\\widehat{g} : [a_1,a_2] \\subseteq \\mathbb{R} \\rightarrow E^1$ is gr-differentiable iff $\\mathbb{K}(\\widehat{g})$ is differentiable w.r.t. $u$ at that point. In addition, \n$$\\mathbb{K}(\\mathcal{D}^{gr} \\widehat{g} (u))=\\frac{\\partial}{\\partial u}\\mathbb{K}(\\widehat{g} (u)).$$\n\\end{pro}\n\\begin{pro}\\label{grgrpdif}\nThe fuzzy function $\\widehat{g}(\\widehat{p}_1(u),\\widehat{p_{2}}(u),...,\\widehat{p}_m(u))$ is gr-partially differentiable w.r.t. $\\widehat{p}_i(u)$ iff its horizontal membership function is differentiable w.r.t. the horizontal membership function of $\\widehat{p}_i(u)$, where $\\widehat{g} : E^m \\rightarrow E^1$ and $\\widehat{p}_i : [a_1,a_2] \\subseteq \\mathbb{R} \\rightarrow E^1,\\,i = 1, 2, . . . ,m$, are fuzzy functions. Moreover,\n\\[\\mathbb{K}(\\frac{\\partial^{gr}}{\\partial \\widehat{p}_i} \\widehat{g}(\\widehat{p}_1(u),\\widehat{p_{2}}(u),...,\\widehat{p}_m(u)))=\\frac{\\partial}{\n\\partial \\mathbb{K}(\\widehat{p}_i)}\\mathbb{K}(\\widehat{g}(\\mathbb{K}(\\widehat{p}_1(u)),\\mathbb{K}(\\widehat{p_{2}}(u)),...,\\mathbb{K}(\\widehat{p}_m(u))))\n.\\]\n\\end{pro}\n\\begin{def1}\\label{grgrint}\nLet $g^{gr}(u, \\alpha, \\mu_g )$ be integrable on $u\\in [a_1,a_2]$, where $g^{gr}(u, \\alpha, \\mu_g )$ is the horizontal membership\nfunction of a gr-continuous fuzzy function $\\widehat{g} : [a_1,a_2]\\subseteq \\mathbb{R}\\rightarrow E^1$. In addition, let the integral of $\\widehat{g} $ on $[a_1,a_2]$ be represented by $\\oint^{a_2}_{a_1} \\widehat{g} (u)du$.
If there exists a fuzzy number $\\widehat{m} = \\oint^{a_2}_{a_1} \\widehat{g} (u)du$ such that $\\mathbb{K}(\\widehat{m})=\\int^{a_2}_{a_1} \\mathbb{K}(\\widehat{g} (u))du$, then the fuzzy function $\\widehat{g}$ is called {\\bf granular fuzzy integrable} on $[a_1,a_2]$.\n\\end{def1}\n\\begin{def1}\\label{grgrpol}\nA {\\bf granular fuzzy polynomial} is an expression consisting of fuzzy variables and fuzzy coefficients that involves only the granular operations of addition, subtraction, multiplication and non-negative integer exponents of fuzzy variables. \n\\end{def1}\nFor example, $\\widehat{g}(\\widehat{p})=\\widehat{a}_m\\widehat{p}^m+\\widehat{a}_{m-1}\\widehat{p}^{m-1}+...+\\widehat{a}_1\\widehat{p}+\\widehat{a}_0,\\,\\forall\\,\\widehat{a}_i\\in E^1,\\,i=0,1,...,m$.\n\\begin{def1}\\label{grroot}\nA {\\bf fuzzy root} of a granular fuzzy polynomial $\\widehat{g}(\\widehat{p})$ is a fuzzy number $\\widehat{p}_i$ such that $\\widehat{g}(\\widehat{p}_i)=0$.\n\\end{def1}\n\\begin{rem}\\label{grr2}\nIt is easy to check that if $\\widehat{p}_i$ is a fuzzy root of $\\widehat{g}(\\widehat{p})$, then $\\mathbb{K}(\\widehat{p}_i)$ is a root of $\\mathbb{K}(\\widehat{g}(\\mathbb{K}(\\widehat{p})))$, i.e., $\\widehat{g}(\\widehat{p}_i)=0\\Rightarrow \\mathbb{K}(\\widehat{g}(\\mathbb{K}(\\widehat{p_i}) ))=0$.\n\\end{rem}\nNext, the concepts of fuzzy partition and $F$-transform introduced by \\cite{per} are recalled. \n\\begin{def1}\\label{grFP} Let $u_1 <... 0$ there is a $\\delta>0$ such that for all $u,u'\\in[a_1,a_2]$, $d^{gr}(\\widehat{g}(u),\\widehat{g}(u'))<\\epsilon$, whenever $|u-u'|<\\delta$. As $d^{gr}(\\widehat{g}(u),\\widehat{g}(u'))=\\sup\\limits_{\\alpha\\in [0,1]}\\max\\limits_{\\mu_g\\in [0,1]}|g^{gr}(u,\\alpha,\\mu_g)-g^{gr}(u',\\alpha,\\mu_g)|$.
So, by using the fact that a function continuous on $[a_1,a_2]$ is uniformly continuous, we have $$\\sup\\limits_{\\alpha\\in [0,1]}\\max\\limits_{\\mu_g\\in [0,1]}|g^{gr}(u,\\alpha,\\mu_g)-g^{gr}(u',\\alpha,\\mu_g)|<\\epsilon.$$\n(ii) \\vspace{-1mm}{By using (i), we can show that for all $u\\in [a_1,a_2]$ and $i=1,...,m-1,$ $$d^{gr}(\\widehat{g}(u),F_i^{gr}[\\widehat{g}])\\leq\\epsilon,\\,d^{gr}(\\widehat{g}(u),F_{i+1}^{gr}[\\widehat{g}])\\leq\\epsilon.$$\nNow, from Proposition \\ref{grp3}, we have}\n\\begin{eqnarray}\n\\nonumber d^{gr}(\\widehat{g}(u),F_i^{gr}[\\widehat{g}])&\\leq&\\sup\\limits_{\\alpha\\in [0,1]}\\max\\limits_{\\mu_g\\in [0,1]}\\frac{1}{h}{\\int^{u_{i+1}}_{u_{i-1}}|g^{gr}(u,\\alpha,\\mu_g)-g^{gr}(u',\\alpha,\\mu_g)|P_i(u')du'}\\\\\n\\nonumber&<&\\frac{\\epsilon}{h}{\\int^{u_{i+1}}_{u_{i-1}}P_i(u')du'}\\\\\n\\nonumber&=&\\epsilon.\n\\end{eqnarray}\n{Similarly, we can show that $d^{gr}(\\widehat{g}(u),F_{i+1}^{gr}[\\widehat{g}])\\leq\\epsilon$.}\n\\end{rem}\n\\begin{pro}\\label{grp6}\nLet $\\widehat{g}:[a_1,a_2]\\rightarrow E^1$ be a gr-continuous fuzzy function. Then for all $\\epsilon > 0$ there exist $m_\\epsilon$ and a fuzzy partition\n$P_1, . . . ,P_{m_\\epsilon}$ of $[a_1,a_2]$ such that for all $u\\in[a_1,a_2]$,\n\\[d^{gr}(\\widehat{{g}}(u) , \\widehat{g}^{gr}_{m_\\epsilon}(u))\\leq\\epsilon,\\]\nwhere $\\widehat{g}^{gr}_{m_\\epsilon}$ is the granular inverse $F$-transform of $\\widehat{g}$.\n\\end{pro}\n\\textbf{Proof:} Let $u\\in[a_1,a_2]$ and $\\widehat{g}:[a_1,a_2]\\rightarrow E^1$ be a gr-continuous fuzzy function.
Then from Definition \\ref{grmetric} and Remark \\ref{grremark} (ii),\n\\begin{eqnarray*}\n\\nonumber d^{gr}(\\widehat{{g}}(u), \\widehat{g}^{gr}_m(u))&=&\\sup\\limits_{\\alpha\\in [0,1]}\\max\\limits_{\\mu_g\\in [0,1]}|g^{gr}(u,\\alpha,\\mu_g)-{g}^{gr}_m(u,\\alpha,\\mu_g)|\\\\\n\\nonumber&=&\\sup\\limits_{\\alpha\\in [0,1]}\\max\\limits_{\\mu_g\\in [0,1]}|g^{gr}(u,\\alpha,\\mu_g)-\\sum_{i=1}^{m}F_i[\\mathbb{K}(\\widehat{g})]P_i(u)|\\\\\n\\nonumber&=&\\sup\\limits_{\\alpha\\in [0,1]}\\max\\limits_{\\mu_g\\in [0,1]}|g^{gr}(u,\\alpha,\\mu_g)\\sum_{i=1}^{m}P_i(u)-\\sum_{i=1}^{m}F_i[\\mathbb{K}(\\widehat{g})]P_i(u)|\\\\\n\\nonumber&=&\\sup\\limits_{\\alpha\\in [0,1]}\\max\\limits_{\\mu_g\\in [0,1]}|\\sum_{i=1}^{m}(g^{gr}(u,\\alpha,\\mu_g)-F_i[\\mathbb{K}(\\widehat{g})])P_i(u)|\\\\\n\\nonumber&\\leq&\\sup\\limits_{\\alpha\\in [0,1]}\\max\\limits_{\\mu_g\\in [0,1]}\\sum_{i=1}^{m}|g^{gr}(u,\\alpha,\\mu_g)-F_i[\\mathbb{K}(\\widehat{g})]|P_i(u)\\\\\n\\nonumber&\\leq&\\epsilon\\sum_{i=1}^{m}P_i(u)\\\\\n\\nonumber&=&\\epsilon.\n\\end{eqnarray*}\nProposition \\ref{grp6} can be formulated for the uniform fuzzy partitions of $[a_1,a_2]$.\n\\begin{cor}\\label{grco1}\nLet $\\widehat{g}:[a_1,a_2]\\rightarrow E^1$ be a gr-continuous fuzzy function and $\\{(P^{(m)}_1, . . . ,P^{(m)}_{m})_m\\}$ be a sequence of uniform fuzzy partitions of $[a_1,a_2]$ for all $m$. In addition, let $\\{\\widehat{g}^{gr}_m(u)\\}$ be the sequence of granular inverse $F$-transforms w.r.t. $\\{(P^{(m)}_1, . . . ,P^{(m)}_{m})_m\\}$, respectively. Then for all $\\epsilon > 0$ there exists $m_\\epsilon$ such that for all $m>m_\\epsilon$,\n\\[d^{gr}(\\widehat{{g}}(u) , \\widehat{g}^{gr}_{m}(u))\\leq\\epsilon,\\,\\forall\\,u\\in [a_1,a_2].\\]\n\\end{cor}\n\\textbf{Proof:} Follows from Proposition \\ref{grp6}.\n\\begin{cor}\nLet the assumptions of Corollary \\ref{grco1} hold.
Then the sequence of granular inverse $F$-transforms $\\{\\widehat{g}^{gr}_m(u)\\}$ uniformly\nconverges to the fuzzy function $\\widehat{{g}}$.\n\\end{cor}\n\\textbf{Proof:} Follows from Corollary \\ref{grco1}.\\\\\\\\\nThe following result describes how to estimate the difference between any two approximations of a given fuzzy function by the granular inverse $F$-transforms based on different sets of basic functions. As can be observed, it depends on the original fuzzy function's smoothness behavior, as described by its modulus of continuity.\n\\begin{pro}\\label{grp7} \nLet $P_1,...,P_m$, $P'_1,...,P'_m,\\,m\\geq 3$, be basic functions that form different uniform fuzzy partitions of $[a_1,a_2]$ and $\\widehat{g}:[a_1,a_2]\\rightarrow E^1$ be a gr-continuous fuzzy function. Further, let $\\widehat{g}^{gr}_m,\\widehat{g}^{gr'}_m$ be the granular inverse $F$-transforms of\n$\\widehat{g}$ w.r.t. these different sets of basic functions. Then\n\\[d^{gr}(\\widehat{g}^{gr}_m(u),\\widehat{g}^{gr'}_m(u))\\leq 2 \\omega(2h,\\widehat{g}), \\,\\forall\\,u\\in [a_1,a_2],\\]\nwhere $h=\\frac{a_2-a_1}{m-1}$ and $\\omega(2h,\\widehat{g})$ is the modulus of continuity of $\\widehat{g}$ on $[a_1,a_2]$.\n\\end{pro} \n\\textbf{Proof:} Follows from Proposition \\ref{grp3} and Definition \\ref{grgrift}.\\\\\\\\\nNext, the graphical representation of the horizontal membership functions and level sets of the granular $F$-transform and the granular inverse $F$-transform associated with a fuzzy function is presented.\n\\begin{exa}\\label{grexa}\n\\end{exa}\nLet $P_1 = (0, 0, 1), P_2= (0, 1, 2), P_3 = (1, 2, 3), P_4 = (2, 3, 3)$ be the triangular fuzzy numbers that form a fuzzy partition of $[0,3]$.
To calculate the components of the granular $F$-transform and the granular inverse $F$-transform, we assume a fuzzy function $\\widehat{g}(u)=(\\frac{u^3}{3},\\frac{u^3}{3}+u+3,\\frac{2u^3}{3}+4)$ and its horizontal membership function is given by \n\\begin{eqnarray*}\n\\mathbb{K}(\\widehat{g}(u))&=&g^{gr}(u,\\alpha,\\mu_g)\\\\\n&=&\\frac{u^3}{3}+\\alpha(u+3)+\\mu_g(1-\\alpha)(\\frac{u^3}{3}+4),\\,u\\in[0,3],\\alpha,\\mu_g\\in\\{0,0.5,1\\}.\n\\end{eqnarray*}\nNext, the horizontal membership functions of the components of the granular $F$-transform and the granular inverse $F$-transform associated with the fuzzy function $\\widehat{g}(u)$ corresponding to different values of $\\alpha,\\mu_g$ are presented in Figure \\ref{grfig:0}. Here, at $\\alpha=1$, the fuzzy function $\\widehat{g}(u)$ shows crisp behavior and the granular $F$-transform and the granular inverse $F$-transform become the $F$-transform and inverse $F$-transform as given in \\cite{per}, respectively. Also, the $\\alpha(=0)$-level sets of the fuzzy function $\\widehat{g}(u)$, granular $F$-transform and granular inverse $F$-transform are given in Figure \\ref{grfig:00}.\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.5]{figure\/1fex.eps} \n \\includegraphics[scale=0.5]{figure\/4fex.eps}\n \\includegraphics[scale=0.5]{figure\/2fex.eps}\n \\includegraphics[scale=0.5]{figure\/5fex.eps}\n \\includegraphics[scale=0.5]{figure\/3fex.eps}\n \\includegraphics[scale=0.5]{figure\/6fex.eps}\n \\includegraphics[scale=0.5]{figure\/7fex.eps}\n \\caption{Red, blue, green curves show the horizontal membership function (HMF) of the fuzzy function $\\widehat{g}(u)$, granular $F$-transform and granular inverse $F$-transform presented in Example \\ref{grexa} corresponding to different values of $\\alpha,\\mu_g$, respectively.}\n \\label{grfig:0}\n\\end{figure}\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.55]{figure\/level3.eps} \n \n \\caption{Red, blue, green curves show $\\alpha(=0)$-level sets of fuzzy
functions $\\widehat{g}(u)$, granular $F$-transform and granular inverse $F$-transform presented in Example \\ref{grexa}, respectively. }\n \\label{grfig:00}\n\\end{figure}\n\\section{Fuzzy prey-predator model}\nIn this section, we formulate a fuzzy prey-predator model and study its dynamical behavior. This section is divided into two subsections; the first presents the formulation of the model, and the second its dynamical behavior.\n\\subsection{Formulation of fuzzy prey-predator model}\nIn this subsection, we present a fuzzy prey-predator model in which one predator team interacts with two teams of prey. In the presence of a predator, the prey groups assist one another, but in the absence of a predator, they compete. We consider two teams of prey with densities $\\widehat{p}(u)$ and $\\widehat{q}(u)$, respectively, interacting with one team of predators with density $\\widehat{r}(u)$ in a fuzzy environment; the model is chiefly motivated by a crisp model given in \\cite{elet}.
The proposed fuzzy prey-predator model is as follows:\n\\begin{eqnarray}\\label{gr1}\n\\begin{array}{ll}\n{\\mathcal{D}^{gr}\\widehat{p}(u)}=\\widehat{a_1}\\widehat{p}(u)(1-\\widehat{p}(u))-\\widehat{p}(u)\\widehat{r}(u)+\\widehat{p}(u)\\widehat{q}(u)\\widehat{r}(u)=\\widehat{g_1}(u,\\widehat{p},\\widehat{q},\\widehat{r}),\\\\ \n{\\mathcal{D}^{gr}\\widehat{q}(u)}=\\widehat{a_2}\\widehat{q}(u)(1-\\widehat{q}(u))-\\widehat{q}(u)\\widehat{r}(u)+\\widehat{p}(u)\\widehat{q}(u)\\widehat{r}(u)=\\widehat{g_2}(u,\\widehat{p},\\widehat{q},\\widehat{r}),\\\\ \n{\\mathcal{D}^{gr}\\widehat{r}(u)}= -\\widehat{a_3}\\widehat{r}^2(u)+\\widehat{a_4}\\widehat{p}(u)\\widehat{r}(u)+\\widehat{a_5}\\widehat{q}(u)\\widehat{r}(u)=\\widehat{g_3}(u,\\widehat{p},\\widehat{q},\\widehat{r}),\\\\\n\\widehat{p}(0)=\\widehat{p}_0,\\,\\widehat{q}(0)=\\widehat{q}_0 ,\\,\\widehat{r}(0)=\\widehat{r}_0,\n\\end{array}\n\\end{eqnarray}\nwhere $\\widehat{p},\\widehat{q},\\widehat{r}:[0,1]\\subseteq \\mathbb{R}\\rightarrow E^1$ are gr-differentiable fuzzy functions and $\\widehat{a_1},\\widehat{a_2},\\widehat{a_3},\\widehat{a_4},\\widehat{a_5}$, $\\widehat{p}_0,\\widehat{q}_0,\\widehat{r}_0$ are positive fuzzy numbers. 
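Before turning to the granular analysis, the crisp core of the system above can be checked numerically: with real-valued (crisp) parameters, which corresponds to the $\\alpha=1$ level, the model is an ordinary ODE system. The following forward-Euler sketch uses parameter values $a_1,\\dots,a_5$, initial densities, step size and horizon that are our own illustrative assumptions, not values from the paper:

```python
def rhs(p, q, r, a1, a2, a3, a4, a5):
    """Crisp core of the prey-predator system: two prey teams that help each
    other in the presence of the predator, and one predator team."""
    dp = a1 * p * (1.0 - p) - p * r + p * q * r
    dq = a2 * q * (1.0 - q) - q * r + p * q * r
    dr = -a3 * r * r + a4 * p * r + a5 * q * r
    return dp, dq, dr

def simulate(p, q, r, params, t_end=200.0, dt=0.01):
    """Forward-Euler integration of the crisp system; returns the final state."""
    for _ in range(int(t_end / dt)):
        dp, dq, dr = rhs(p, q, r, *params)
        p, q, r = p + dt * dp, q + dt * dq, r + dt * dr
    return p, q, r

params = (1.0, 0.25, 1.0, 0.5, 0.5)   # assumed a1..a5 (not from the paper)
p, q, r = simulate(0.4, 0.6, 0.2, params)
print(round(p, 3), round(q, 3), round(r, 3))
```

With these assumed values the trajectory is expected to settle near $(2\/3, 1\/3, 1\/2)$, agreeing with the coexistence equilibrium $\\widehat{E}_7$ derived in the next subsection evaluated at the crisp level (in particular $r_{e_7}=\\sqrt{a_1a_2}=0.5$).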
Based on\nRemark \\ref{grr1} and Proposition \\ref{grgrdif1}, the system (\\ref{gr1}) can be written as\n\\begin{eqnarray}\\label{gr2}\n\\nonumber\\mathbb{K}(\\mathcal{D}^{gr}\\widehat{p}(u))&=&{\\frac{\\partial}{\\partial t}\\mathbb{K}(\\widehat{p}(u))}=\\mathbb{K}(\\widehat{a_1})\\mathbb{K}(\\widehat{p}(u))({1}-\\mathbb{K}(\\widehat{p}))-\\mathbb{K}(\\widehat{p}(u))\\mathbb{K}(\\widehat{r}(u))+\\\\\n\\nonumber&&\\mathbb{K}(\\widehat{p}(u))\\mathbb{K}(\\widehat{q}(u))\\mathbb{K}(\\widehat{r}(u))=\\mathbb{K}(\\widehat{g_1}(u,\\mathbb{K}(\\widehat{p}),\\mathbb{K}(\\widehat{q}),\\mathbb{K}(\\widehat{r}))),\\\\ \n\\nonumber\\mathbb{K}(\\mathcal{D}^{gr}\\widehat{q}(u))&=&{\\frac{\\partial}{\\partial t}\\mathbb{K}(\\widehat{q}(u))}=\\mathbb{K}(\\widehat{a_2})\\mathbb{K}(\\widehat{q}(u))({1}-\\mathbb{K}(\\widehat{q}(u)))-\\mathbb{K}(\\widehat{q}(u))\\mathbb{K}(\\widehat{r}(u))+\\\\\n\\nonumber&&\\mathbb{K}(\\widehat{p}(u))\\mathbb{K}(\\widehat{q}(u))\\mathbb{K}(\\widehat{r}(u))=\\mathbb{K}(\\widehat{g_2}(\\mathbb{K}(u,\\widehat{p}),\\mathbb{K}(\\widehat{q}),\\mathbb{K}(\\widehat{r}))),\\\\ \n\\nonumber\\mathbb{K}(\\mathcal{D}^{gr}\\widehat{r}(u))&=&{\\frac{\\partial}{\\partial t}\\mathbb{K}(\\widehat{r}(u))}= -\\mathbb{K}(\\widehat{a_3})\\mathbb{K}(\\widehat{r}(u))^2+\\mathbb{K}(\\widehat{a_4})\\mathbb{K}(\\widehat{p}(u))\\mathbb{K}(\\widehat{r}(u))+\\\\\n\\nonumber&&\\mathbb{K}(\\widehat{a_5})\\mathbb{K}(\\widehat{q}(u))\\mathbb{K}(\\widehat{r}(u))=\\mathbb{K}(\\widehat{g_3}(u,\\mathbb{K}(\\widehat{p}),\\mathbb{K}(\\widehat{q}),\\mathbb{K}(\\widehat{r}))),\\\\\n\\mathbb{K}(\\widehat{p}(0))&=&\\mathbb{K}(\\widehat{p}_0),\\,\\mathbb{K}(\\widehat{q}(0))=\\mathbb{K}(\\widehat{q}_0)\n ,\\,\\mathbb{K}(\\widehat{r}(0))=\\mathbb{K}(\\widehat{r}_0).\n\\end{eqnarray}\nThe system (\\ref{gr2}) can be given as\n\\begin{eqnarray}\\label{gr2a}\n\\nonumber\\frac{\\partial}{\\partial 
t}p^{gr}(u,\\alpha,\\mu_p)&=&a_1^{gr}(\\alpha,\\mu_{a_1})p^{gr}(u,\\alpha,\\mu_p)({1}-p^{gr}(u,\\alpha,\\mu_p))-p^{gr}(u,\\alpha,\\mu_p)r^{gr}(u,\\alpha,\\mu_r)\\\\\n\\nonumber&&+p^{gr}(u,\\alpha,\\mu_p)q^{gr}(u,\\alpha,\\mu_q)r^{gr}(u,\\alpha,\\mu_r),\\\\\n\\nonumber\\frac{\\partial}{\\partial t}q^{gr}(u,\\alpha,\\mu_q)&=&a_2^{gr}(\\alpha,\\mu_{a_2})q^{gr}(u,\\alpha,\\mu_q)({1}-q^{gr}(u,\\alpha,\\mu_q))-q^{gr}(u,\\alpha,\\mu_q)r^{gr}(u,\\alpha,\\mu_r)\\\\\n\\nonumber&&+p^{gr}(u,\\alpha,\\mu_p)q^{gr}(u,\\alpha,\\mu_q)r^{gr}(u,\\alpha,\\mu_r),\\\\\n\\nonumber\\frac{\\partial}{\\partial t}r^{gr}(u,\\alpha,\\mu_r)&=& -a_3^{gr}(\\alpha,\\mu_{a_3})r^{2,gr}(u,\\alpha,\\mu_r)+a_4^{gr}(\\alpha,\\mu_{a_4})p^{gr}(u,\\alpha,\\mu_p)r^{gr}(u,\\alpha,\\mu_r)+\\\\\n&&a_5^{gr}(\\alpha,\\mu_{a_5})q^{gr}(u,\\alpha,\\mu_q)r^{gr}(u,\\alpha,\\mu_r),\n\\end{eqnarray}\n$p^{gr}(0,\\alpha,\\mu_p)=p_0^{gr}(\\alpha,\\mu_{p_0}),q^{gr}(0,\\alpha,\\mu_q)=q_0^{gr}(\\alpha,\\mu_{q_0}),r^{gr}(0,\\alpha,\\mu_r)=r_0^{gr}(\\alpha,\\mu_{r_0}).$\n\\subsection{Dynamical behavior of the fuzzy prey-predator model}\nIn this subsection, we investigate the equilibrium points of the proposed fuzzy prey-predator model and their stability.
Now, we initiate with the following.\n\\begin{def1}\\label{greq}\n A point $(\\widehat{p_e},\\widehat{q}_e,\\widehat{r}_e)$ is called the {\\bf fuzzy equilibrium point} of the system (\\ref{gr1}) if\n \\begin{eqnarray*}\n \\mathcal{D}^{gr}\\widehat{p}(u)&=&\\widehat{g_1}(\\widehat{p_e},\\widehat{q}_e,\\widehat{r}_e)=0,\\\\\n \\mathcal{D}^{gr}\\widehat{q}(u)&=&\\widehat{g_2}(\\widehat{p_e},\\widehat{q}_e,\\widehat{r}_e)=0,\\\\\n \\mathcal{D}^{gr}\\widehat{r}(u)&=&\\widehat{g_3}(\\widehat{p_e},\\widehat{q}_e,\\widehat{r}_e)=0.\n \\end{eqnarray*}\n\\end{def1}\n\\begin{rem}\\label{grremeq}\nIt can be easily seen that $(\\widehat{p_e},\\widehat{q}_e,\\widehat{r}_e)$ is a fuzzy equilibrium point of the system (\\ref{gr1}) iff $(\\mathbb{K}(\\widehat{p_e}),\\mathbb{K}(\\widehat{q}_e),\\mathbb{K}(\\widehat{r}_e))$ is an equilibrium point of the system (\\ref{gr2}).\n\\end{rem}\n After algebraic calculation for the system (\\ref{gr1}), we get several fuzzy equilibrium points $(\\widehat{p},\\widehat{q},\\widehat{r})$, e.g.\n\\begin{itemize}\n\\item[(i)] $\\widehat{E}_0({0},{0},{0})$,\n\\item[(ii)] $\\widehat{E}_1({1},{0},{0})$,\n\\item[(iii)] $\\widehat{E}_2({0},{1},{0})$,\n\\item[(iv)] $\\widehat{E}_3({1},{1},{0})$,\n\\item[(v)] $\\widehat{E}_4({0},\\widehat{q_{e_4}},\\widehat{r_{e_4}})$, where\n\\begin{eqnarray*}\\widehat{q_{e_4}}&=&\\frac{\\widehat{a_2}\\widehat{a_3}}{\\widehat{a_2}\\widehat{a_3}+\\widehat{a_5}},\\\\\n\\widehat{r_{e_4}}&=&\\frac{\\widehat{a_2}\\widehat{a_5}}{\\widehat{a_2}\\widehat{a_3}+\\widehat{a_5}},\n\\end{eqnarray*}\n\\item[(vi)] $\\widehat{E}_5(\\widehat{p_{e_5}},{0},\\widehat{r_{e_5}})$, where \n\\begin{eqnarray*}\n\\widehat{p_{e_5}}&=&\\frac{\\widehat{a_1}\\widehat{a_3}}{\\widehat{a_1}\\widehat{a_3}+\\widehat{a_4}},\\\\\n\\widehat{r_{e_5}}&=&\\frac{\\widehat{a_1}\\widehat{a_4}}{\\widehat{a_1}\\widehat{a_3}+\\widehat{a_4}},\n\\end{eqnarray*}\n\\item[(vii)] $\\widehat{E}_6({1},{1},\\widehat{r_{e_6}})$, where 
\n\\[\\widehat{r_{e_6}}=\\frac{\\widehat{a_4}+\\widehat{a_5}}{\\widehat{a_3}},\\,\\mbox{and}\\]\n\\item[(viii)] $\\widehat{E}_7(\\widehat{p_{e_7}},\\widehat{q_{e_7}},\\widehat{r_{e_7}})$, where\n\\begin{eqnarray*}\n\\widehat{p_{e_7}}&=&\\frac{\\widehat{a_2}\\widehat{a_3}+\\widehat{a_5}\\left({1}-\\sqrt{\\frac{\\widehat{a_2}}{\\widehat{a_1}}}\\right)}{\\widehat{a_5}+\\widehat{a_4}\\sqrt{\\frac{\\widehat{a_2}}{\\widehat{a_1}}}},\\\\\n\\widehat{q_{e_7}}&=&\\frac{\\widehat{a_3}\\sqrt{\\widehat{a_1}\\widehat{a_2}}-\\widehat{a_4}\\left({1}-\\sqrt{\\frac{\\widehat{a_2}}{\\widehat{a_1}}}\\right)}{\\widehat{a_5}+\\widehat{a_4}\\sqrt{\\frac{\\widehat{a_2}}{\\widehat{a_1}}}},\\\\\n\\widehat{r_{e_7}}&=&\\sqrt{\\widehat{a_1}\\widehat{a_2}}.\n\\end{eqnarray*}\n\\end{itemize}\nThe existence of $\\widehat{E}_0,\\widehat{E}_1,\\widehat{E}_2,\\widehat{E}_3,\\widehat{E}_4,\\widehat{E}_5$ and $\\widehat{E}_6$ is obvious, so we concentrate on the existence of the fuzzy equilibrium point $\\widehat{E}_7$, which exists if\n\\begin{eqnarray*}\n&&\\widehat{a_3}\\sqrt{\\widehat{a_1}\\widehat{a_2}}\\leq\\widehat{a_4}+\\widehat{a_5},\\\\\n&&\\widehat{a_1}\\widehat{a_3}+\\widehat{a_4}>\\widehat{a_4}\\sqrt{\\frac{\\widehat{a_1}}{\\widehat{a_2}}},\\\\\n&&\\widehat{a_2}\\widehat{a_3}+\\widehat{a_5}>\\widehat{a_5}\\sqrt{\\frac{\\widehat{a_2}}{\\widehat{a_1}}}.\n\\end{eqnarray*}\nFrom Remark \\ref{grremeq}, the corresponding points
\n\\begin{itemize}\n\\item[(i)] $E_0({0},{0},{0})$,\n\\item[(ii)] $E_1({1},{0},{0})$,\n\\item[(iii)] $E_2({0},{1},{0})$,\n\\item[(iv)] $E_3({1},{1},{0})$,\n\\item[(v)] $E_4({0},\\mathbb{K}(\\widehat{q_{e_4}}),\\mathbb{K}(\\widehat{r_{e_4}}))$, where\n\\begin{eqnarray*}\n\\mathbb{K}(\\widehat{q_{e_4}})&=&\\frac{\\mathbb{K}(\\widehat{a_2})\\mathbb{K}(\\widehat{a_3})}{\\mathbb{K}(\\widehat{a_2})\\mathbb{K}(\\widehat{a_3})+\\mathbb{K}(\\widehat{a_5})},\\\\\n\\mathbb{K}(\\widehat{r_{e_4}})&=&\\frac{\\mathbb{K}(\\widehat{a_2})\\mathbb{K}(\\widehat{a_5})}{\\mathbb{K}(\\widehat{a_2})\\mathbb{K}(\\widehat{a_3})+\\mathbb{K}(\\widehat{a_5})},\n\\end{eqnarray*}\n\\item[(vi)] $E_5(\\mathbb{K}(\\widehat{p_{e_5}}),{0},\\mathbb{K}(\\widehat{r_{e_5}}))$, where\n\\begin{eqnarray*}\n\\mathbb{K}(\\widehat{p_{e_5}})&=&\\frac{\\mathbb{K}(\\widehat{a_1})\\mathbb{K}(\\widehat{a_3})}{\\mathbb{K}(\\widehat{a_1})\\mathbb{K}(\\widehat{a_3})+\\mathbb{K}(\\widehat{a_4})},\\\\\n\\mathbb{K}(\\widehat{r_{e_5}})&=&\\frac{\\mathbb{K}(\\widehat{a_1})\\mathbb{K}(\\widehat{a_4})}{\\mathbb{K}(\\widehat{a_1})\\mathbb{K}(\\widehat{a_3})+\\mathbb{K}(\\widehat{a_4})},\n\\end{eqnarray*}\n\\item[(vii)] $E_6({1},{1},\\mathbb{K}(\\widehat{r_{e_6}}))$, where \n\\[\\mathbb{K}(\\widehat{r_{e_6}})=\\frac{\\mathbb{K}(\\widehat{a_4})+\\mathbb{K}(\\widehat{a_5})}{\\mathbb{K}(\\widehat{a_3})},\\,\\mbox{and}\\]\n\\item[(viii)] $E_7(\\mathbb{K}(\\widehat{p_{e_7}}),\\mathbb{K}(\\widehat{q_{e_7}}),\\mathbb{K}(\\widehat{r_{e_7}}))$, where 
\n\\begin{eqnarray*}\n\\mathbb{K}(\\widehat{p_{e_7}})&=&\\frac{\\mathbb{K}(\\widehat{a_2})\\mathbb{K}(\\widehat{a_3})+\\mathbb{K}(\\widehat{a_5})\\left({1}-\\sqrt{\\frac{\\mathbb{K}(\\widehat{a_2})}{\\mathbb{K}(\\widehat{a_1})}}\\right)}{\\mathbb{K}(\\widehat{a_5})+\\mathbb{K}(\\widehat{a_4})\\sqrt{\\frac{\\mathbb{K}(\\widehat{a_2})}{\\mathbb{K}(\\widehat{a_1})}}},\\\\\n\\mathbb{K}(\\widehat{q_{e_7}})&=&\\frac{\\mathbb{K}(\\widehat{a_3})\\sqrt{\\mathbb{K}(\\widehat{a_1})\\mathbb{K}(\\widehat{a_2})}-\\mathbb{K}(\\widehat{a_4})\\left({1}-\\sqrt{\\frac{\\mathbb{K}(\\widehat{a_2})}{\\mathbb{K}(\\widehat{a_1})}}\\right)}{\\mathbb{K}(\\widehat{a_5})+\\mathbb{K}(\\widehat{a_4})\\sqrt{\\frac{\\mathbb{K}(\\widehat{a_2})}{\\mathbb{K}(\\widehat{a_1})}}},\\\\\n\\mathbb{K}(\\widehat{r_{e_7}})&=&\\sqrt{\\mathbb{K}(\\widehat{a_1})\\mathbb{K}(\\widehat{a_2})},\n\\end{eqnarray*}\n\\end{itemize}\nare the equilibrium points of the system (\\ref{gr2}).\\\\\\\\ \nBelow, we discuss the local and global fuzzy stability of the fuzzy equilibrium points in terms of the corresponding fuzzy eigenvalues.
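As a sanity check, the closed-form equilibria listed above can be verified numerically to be fixed points of the crisp system. A sketch (assuming numpy; the crisp parameter values are a hypothetical choice for which all eight points are well defined):

```python
import numpy as np

# Crisp right-hand side of the model (g1, g2, g3) from the text.
def rhs(state, a1, a2, a3, a4, a5):
    p, q, r = state
    return np.array([
        a1 * p * (1 - p) - p * r + p * q * r,
        a2 * q * (1 - q) - q * r + p * q * r,
        -a3 * r**2 + a4 * p * r + a5 * q * r,
    ])

def equilibria(a1, a2, a3, a4, a5):
    """E0..E7 as listed in the text, for one crisp parameter set."""
    s = np.sqrt(a2 / a1)                 # shorthand sqrt(a2/a1)
    den = a5 + a4 * s
    E4 = (0.0, a2 * a3 / (a2 * a3 + a5), a2 * a5 / (a2 * a3 + a5))
    E5 = (a1 * a3 / (a1 * a3 + a4), 0.0, a1 * a4 / (a1 * a3 + a4))
    E6 = (1.0, 1.0, (a4 + a5) / a3)
    E7 = ((a2 * a3 + a5 * (1 - s)) / den,
          (a3 * np.sqrt(a1 * a2) - a4 * (1 - s)) / den,
          np.sqrt(a1 * a2))
    return [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), E4, E5, E6, E7]

params = (2.0, 4.0, 2.0, 4.0, 2.0)       # hypothetical crisp a1..a5
for E in equilibria(*params):
    # Each listed point must make all three right-hand sides vanish.
    assert np.allclose(rhs(np.array(E, dtype=float), *params), 0, atol=1e-9)
```

The same check can be repeated for any (alpha, mu) pair by first evaluating the horizontal membership functions of the fuzzy parameters.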
Before proceeding, we introduce the following definitions.\n\\begin{def1}\\label{grstable} Let $\\widehat{\\lambda}_i,\\,i=1,2,...,m$, be the fuzzy eigenvalues of the fuzzy matrix $\\widehat{M}=[\\widehat{a}_{ij}]_{m\\times m}$.\nThen a fuzzy equilibrium point of the system $\\mathcal{D}^{gr}\\widehat{p}(u)=\\widehat{M}\\widehat{p}(u)$ is called {\\bf fuzzy stable} if $Re(\\widehat{\\lambda}_i)<0$ for all $i$, and {\\bf fuzzy unstable} if $Re(\\widehat{\\lambda}_i)>0$ for some $i$.\n\\end{def1}\n\\begin{def1}\\label{grlocal}\nA fuzzy equilibrium point $\\widehat{p_e}$ of $\\mathcal{D}^{gr}\\widehat{p}(u)=\\widehat{M}\\widehat{p}(u)$ is called {\\bf locally asymptotically fuzzy stable} if all eigenvalues of the corresponding variational fuzzy matrix have negative real parts.\n\\end{def1}\n\\begin{def1}\\label{grglobal} A fuzzy equilibrium point $\\widehat{p_e}$ of $\\mathcal{D}^{gr}\\widehat{p}(u)=\\widehat{M}\\widehat{p}(u)$ is called {\\bf globally asymptotically fuzzy stable} if there exists a $gr$-continuous and differentiable function $\\widehat{U}:D\\subseteq E^m\\rightarrow E^1$ such that \n\\begin{itemize}\n \\item[(i)] $\\widehat{U}(\\widehat{p_e})=0$;\n \\item[(ii)] $\\widehat{U}(\\widehat{p})>0,\\,\\forall\\,\\widehat{p}\\in D-\\{\\widehat{p_e}\\}$;\n \\item[(iii)] $\\widehat{U}(\\widehat{p})$ is radially unbounded; and\n \\item[(iv)] $\\mathcal{D}^{gr}\\widehat{U}(\\widehat{p})<0,\\,\\forall\\,\\widehat{p}\\in D-\\{\\widehat{p_e}\\}$.\n\\end{itemize}\nAlso, the fuzzy function $\\widehat{U}$ is called a {\\bf fuzzy Lyapunov function}.\n\\end{def1}\n\\begin{thm}\\label{grlogo}\nA fuzzy equilibrium point $\\widehat{p_e}$ of $\\mathcal{D}^{gr}\\widehat{p}(u)=\\widehat{M}\\widehat{p}(u)$ is locally or globally asymptotically fuzzy stable iff the equilibrium point $\\mathbb{K}(\\widehat{p_e})$ of $\\mathbb{K}(\\mathcal{D}^{gr}\\widehat{p}(u))=\\mathbb{K}(\\widehat{M})\\mathbb{K}(\\widehat{p}(u))$ is locally or globally asymptotically stable, 
respectively.\n\\end{thm}\n\\textbf{Proof:} Let $\\widehat{p_e}$ be a fuzzy equilibrium point of $\\mathcal{D}^{gr}\\widehat{p}(u)=\\widehat{M}\\widehat{p}(u)$. Then from Remark \\ref{grremeq}, $\\mathbb{K}(\\widehat{p_e})$ is an equilibrium point of $\\mathbb{K}(\\mathcal{D}^{gr}\\widehat{p}(u))=\\mathbb{K}(\\widehat{M})\\mathbb{K}(\\widehat{p}(u))$. Also, let $\\mathbb{K}(\\widehat{\\lambda}_i),\\,i=1,2,...,m$, be the eigenvalues of the corresponding variational matrix. Then the equilibrium point $\\mathbb{K}(\\widehat{p_e})$ is locally asymptotically stable iff $Re(\\mathbb{K}(\\widehat{\\lambda}_i))<0$, i.e., $\\mathbb{K}(Re(\\widehat{\\lambda}_i))<0$, for all $i$. Therefore from Remark \\ref{grr1}, we have $Re(\\widehat{\\lambda}_i)<0$. Thus the fuzzy equilibrium point $\\widehat{p_e}$ is locally asymptotically fuzzy stable. Also, the equilibrium point $\\mathbb{K}(\\widehat{p_e})$ is globally asymptotically stable if there exists a suitable Lyapunov function $\\mathbb{K}({\\widehat{U}})$ such that $\\mathbb{K}({\\widehat{U}}(\\mathbb{K}(\\widehat{p_e})))=0$, $\\mathbb{K}({\\widehat{U}}(\\mathbb{K}(\\widehat{p})))>0$ and $\\frac{\\partial\\mathbb{K}({\\widehat{U}})}{\\partial t}<0$ for all $\\mathbb{K}(\\widehat{p})$ (except at the equilibrium point $\\mathbb{K}(\\widehat{p_e})$), and $\\mathbb{K}({\\widehat{U}}(\\mathbb{K}(\\widehat{p})))$ is radially unbounded. Therefore from Remark \\ref{grr1}, the fuzzy function $\\widehat{U}$ satisfies all conditions given in Definition \\ref{grglobal}, i.e., it is a fuzzy Lyapunov function. Thus the fuzzy equilibrium point $\\widehat{p_e}$ is globally asymptotically fuzzy stable. The converse follows by reversing the argument. 
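The theorem above reduces fuzzy stability questions to eigenvalue computations for the crisp system. A numerical sketch that approximates the variational matrix by central finite differences and checks the eigenvalue condition at the interior equilibrium (assuming numpy; parameter values hypothetical):

```python
import numpy as np

# Crisp right-hand side of the model.
def rhs(state, a1, a2, a3, a4, a5):
    p, q, r = state
    return np.array([
        a1 * p * (1 - p) - p * r + p * q * r,
        a2 * q * (1 - q) - q * r + p * q * r,
        -a3 * r**2 + a4 * p * r + a5 * q * r,
    ])

def jacobian(state, params, eps=1e-6):
    """Central finite-difference variational matrix of the crisp system."""
    state = np.asarray(state, dtype=float)
    J = np.zeros((3, 3))
    for j in range(3):
        e = np.zeros(3)
        e[j] = eps
        J[:, j] = (rhs(state + e, *params) - rhs(state - e, *params)) / (2 * eps)
    return J

params = (2.0, 4.0, 2.0, 4.0, 2.0)          # hypothetical crisp a1..a5
a1, a2, a3, a4, a5 = params
s = np.sqrt(a2 / a1)
den = a5 + a4 * s
E7 = np.array([(a2 * a3 + a5 * (1 - s)) / den,
               (a3 * np.sqrt(a1 * a2) - a4 * (1 - s)) / den,
               np.sqrt(a1 * a2)])
eigs = np.linalg.eigvals(jacobian(E7, params))
print(eigs)  # negative real parts indicate local asymptotic stability
```

The finite-difference entries can also be compared directly with the analytic variational matrix given below.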
\\\\\\\\\nThe variational fuzzy matrix of the system (\\ref{gr1}) is defined as\n\\begin{equation}\\label{grm}\n\\widehat{M_1} = \n\\begin{bmatrix}\n\\widehat{a_1}-2\\widehat{a_1}\\widehat{p}-\\widehat{r}+\\widehat{q}\\widehat{r} & \\widehat{p}\\widehat{r} & -\\widehat{p}+\\widehat{p}\\widehat{q} \\\\\n\\widehat{q}\\widehat{r} & \\widehat{a_2}-2\\widehat{a_2}\\widehat{q}-\\widehat{r}+\\widehat{p}\\widehat{r} & -\\widehat{q}+\\widehat{p}\\widehat{q}\\\\\n\\widehat{a_4}\\widehat{r} & \\widehat{a_5}\\widehat{r} & -2\\widehat{a_3}\\widehat{r}+\\widehat{a_4}\\widehat{p}+\\widehat{a_5}\\widehat{q} \n\\end{bmatrix}.\n\\end{equation}\nNow, the variational matrix of the system (\\ref{gr2}) is given by\n\\begin{equation}\\label{grm1}\n\\mathbb{K}(\\widehat{M_1}) = \n\\begin{bmatrix}\n\\mathbb{K}(\\widehat{a}_{11})& \\mathbb{K}(\\widehat{p})\\mathbb{K}(\\widehat{r}) & -\\mathbb{K}(\\widehat{p})+\\mathbb{K}(\\widehat{p})\\mathbb{K}(\\widehat{q}) \\\\\n\\mathbb{K}(\\widehat{q})\\mathbb{K}(\\widehat{r})& \\mathbb{K}(\\widehat{a}_{22}) & -\\mathbb{K}(\\widehat{q})+\\mathbb{K}(\\widehat{p})\\mathbb{K}(\\widehat{q}) \\\\\n\\mathbb{K}(\\widehat{a_4})\\mathbb{K}(\\widehat{r}) & \\mathbb{K}(\\widehat{a_5})\\mathbb{K}(\\widehat{r}) & \\mathbb{K}(\\widehat{a}_{33})\n\\end{bmatrix},\n\\end{equation}\nwhere\n\\[\\mathbb{K}(\\widehat{a}_{11})=\\mathbb{K}(\\widehat{a_1})-2\\mathbb{K}(\\widehat{a_1})\\mathbb{K}(\\widehat{p})-\\mathbb{K}(\\widehat{r})+\\mathbb{K}(\\widehat{q})\\mathbb{K}(\\widehat{r}), \\]\n\\[\\mathbb{K}(\\widehat{a}_{22})=\\mathbb{K}(\\widehat{a_2})-2\\mathbb{K}(\\widehat{a_2})\\mathbb{K}(\\widehat{q})-\\mathbb{K}(\\widehat{r})+\\mathbb{K}(\\widehat{p})\\mathbb{K}(\\widehat{r}),\\]\n\\[\\mathbb{K}(\\widehat{a}_{33})=-2\\mathbb{K}(\\widehat{a_3})\\mathbb{K}(\\widehat{r})+\\mathbb{K}(\\widehat{a_4})\\mathbb{K}(\\widehat{p})+\\mathbb{K}(\\widehat{a_5})\\mathbb{K}(\\widehat{q}).\n\\] \nTo check the stability of the fuzzy equilibrium points of the system (\\ref{gr1}), we 
define the characteristic equation of the matrix (\\ref{grm1}) by $\\det(\\mathbb{K}(\\widehat{\\lambda})I-\\mathbb{K}(\\widehat{M_1}))=0$.\n\\begin{itemize}\n\\item[(i)] The variational matrix (\\ref{grm1}) corresponding to the equilibrium point $E_0(0,0,0)$ has eigenvalues\n$\\mathbb{K}(\\widehat{\\lambda})=0,\\mathbb{K}(\\widehat{a_1}),\\mathbb{K}(\\widehat{a_2})$. Since two of these eigenvalues are positive, $E_0(0,0,0)$ is an unstable equilibrium point. From Remark \\ref{grr1}, the variational fuzzy matrix (\\ref{grm}) corresponding to the fuzzy equilibrium point $\\widehat{E}_0(0,0,0)$ has fuzzy eigenvalues $\\widehat{\\lambda}=\\widehat{0},\\widehat{a_1},\\widehat{a_2}$ and $\\widehat{E}_0(0,0,0)$ is an unstable fuzzy equilibrium point.\n\\item[(ii)] The variational matrix (\\ref{grm1}) corresponding to the equilibrium point $E_1(1,0,0)$ has eigenvalues\n$\\mathbb{K}(\\widehat{\\lambda})=-\\mathbb{K}(\\widehat{a_1}),\\mathbb{K}(\\widehat{a_2}),\\mathbb{K}(\\widehat{a_4})$. Since two of these eigenvalues are positive, $E_1(1,0,0)$ is an unstable equilibrium point. From Remark \\ref{grr1}, the variational fuzzy matrix (\\ref{grm}) corresponding to the fuzzy equilibrium point $\\widehat{E}_1(1,0,0)$ has fuzzy eigenvalues $\\widehat{\\lambda}=-\\widehat{a_1},\\widehat{a_2},\\widehat{a_4}$ and $\\widehat{E}_1(1,0,0)$ is an unstable fuzzy equilibrium point.\n\\item[(iii)] The variational matrix (\\ref{grm1}) corresponding to the equilibrium point $E_2(0,1,0)$ has eigenvalues\n$\\mathbb{K}(\\widehat{\\lambda})=\\mathbb{K}(\\widehat{a_1}),-\\mathbb{K}(\\widehat{a_2}),\\mathbb{K}(\\widehat{a_5})$. Since two of these eigenvalues are positive, $E_2(0,1,0)$ is an unstable equilibrium point. 
From Remark \\ref{grr1}, the variational fuzzy matrix (\\ref{grm}) corresponding to the fuzzy equilibrium point $\\widehat{E}_2({0},{1},{0})$ has fuzzy eigenvalues $\\widehat{\\lambda}=\\widehat{a_1},-\\widehat{a_2},\\widehat{a_5}$ and $\\widehat{E}_2({0},{1},{0})$ is an unstable fuzzy equilibrium point.\n\\item[(iv)] The variational matrix (\\ref{grm1}) corresponding to the equilibrium point $E_3(1,1,0)$ has eigenvalues\n$\\mathbb{K}(\\widehat{\\lambda})=-\\mathbb{K}(\\widehat{a_1}),-\\mathbb{K}(\\widehat{a_2}),\\mathbb{K}(\\widehat{a_4})+\\mathbb{K}(\\widehat{a_5})$. Since one eigenvalue is positive, $E_3(1,1,0)$ is an unstable equilibrium point. From Remark \\ref{grr1}, the variational fuzzy matrix (\\ref{grm}) corresponding to the fuzzy equilibrium point $\\widehat{E}_3(1,1,0)$ has fuzzy eigenvalues $\\widehat{\\lambda}=-\\widehat{a_1},-\\widehat{a_2},\\widehat{a_4}+\\widehat{a_5}$ and $\\widehat{E}_3(1,1,0)$ is an unstable fuzzy equilibrium point.\n\\item[(v)] The equilibrium point $E_4(0,\\mathbb{K}(\\widehat{q_{e_4}}),\\mathbb{K}(\\widehat{r_{e_4}}))$ is locally asymptotically stable if \n\\[\\mathbb{K}(\\widehat{a_1})<\\frac{\\mathbb{K}(\\widehat{a_2})\\mathbb{K}(\\widehat{a_5})^2}{(\\mathbb{K}(\\widehat{a_2})\\mathbb{K}(\\widehat{a_3})+\\mathbb{K}(\\widehat{a_5}))^2}.\\]\nTherefore $\\widehat{E}_4({0},\\widehat{q_{e_4}},\\widehat{r_{e_4}})$ is locally asymptotically fuzzy stable if \\[\\widehat{a_1}<\\frac{\\widehat{a_2}\\widehat{a_5}^2}{(\\widehat{a_2}\\widehat{a_3}+\\widehat{a_5})^2}.\\]\n\\begin{exa}\\label{grex1}\n\\vspace{-5mm}\n\\end{exa} Let $\\widehat{a_1}=(0.01,0.02,0.03),\\widehat{a_2}=(2,4,6),\\widehat{a_3}=(0.1,0.2,0.3),\\widehat{a_4}=(3,4,5),\\widehat{a_5}=(1,2,3),\\widehat{p}_0=0.1,\\widehat{q}_0=0.2,\\widehat{r}_0=0.3$, where $\\widehat{a_1},\\widehat{a_2},\\widehat{a_3},\\widehat{a_4},\\widehat{a_5}$ are triangular fuzzy numbers. 
Now, the horizontal membership functions of the given triangular fuzzy numbers are given by\n\\begin{eqnarray*}\n\\mathbb{K}(\\widehat{a_1})&=&a_1^{gr}(\\alpha,\\mu_{a_1})=0.01+0.01\\alpha+\\mu_{a_1}(0.02-0.02\\alpha),\\\\\n\\mathbb{K}(\\widehat{a_2})&=&a_2^{gr}(\\alpha,\\mu_{a_2})=2+2\\alpha+\\mu_{a_2}(4-4\\alpha),\\\\\n\\mathbb{K}(\\widehat{a_3})&=&a_3^{gr}(\\alpha,\\mu_{a_3})=0.1+0.1\\alpha+\\mu_{a_3}(0.2-0.2\\alpha),\\\\\n\\mathbb{K}(\\widehat{a_4})&=&a_4^{gr}(\\alpha,\\mu_{a_4})=3+\\alpha+\\mu_{a_4}(2-2\\alpha),\\\\\n\\mathbb{K}(\\widehat{a_5})&=&a_5^{gr}(\\alpha,\\mu_{a_5})=1+\\alpha+\\mu_{a_5}(2-2\\alpha),\\\\\n\\mathbb{K}(\\widehat{p}_0)&=&0.1,\\mathbb{K}(\\widehat{q}_0)=0.2,\\mathbb{K}(\\widehat{r}_0)=0.3.\n\\end{eqnarray*}\nFurther, we assume $\\mu_p,\\mu_q,\\mu_r,\\mu_{a_1},\\mu_{a_2},\\mu_{a_3},\\mu_{a_4},\\mu_{a_5}=\\mu\\in\\{0,0.4,0.6,1\\}$ and $\\alpha\\in\\{0,0.5,1\\}$.\n\\begin{table}[ht!]\n \\begin{center}\n \\begin{adjustbox}{max width=0.9\\linewidth}\n \\begin{tabular}{|p{.4cm}|p{3.1cm}| p{3.1cm}|p{3.1cm}|}\n \\hline\n $\\mu$ & $\\alpha=0$& $\\alpha=0.5$& $\\alpha=1$\\\\\n \\hline\n 0&(0,0.1667,1.6667)&(0,0.2308,2.3077)&(0,0.2857,2.8571) \\\\\n \\hline\n 0.4&(0,0.2647,2.6471)&(0,0.2754,2.7536)&(0,0.2857,2.8571) \\\\\n \\hline\n 0.6&(0,0.3056,3.0556)&(0,0.2958,2.9578)&(0,0.2857,2.8571)\\\\\n \\hline\n 1&(0,0.375,3.75)&(0,0.3333,3.3333)&(0,0.2857,2.8571)\\\\\n \\hline\n \\end{tabular}\n \\end{adjustbox}\n \\caption{Equilibrium point $E_4(0,\\mathbb{K}(\\widehat{q_{e_4}}),\\mathbb{K}(\\widehat{r_{e_4}}))$ for different values of $\\alpha,\\mu$}\n \\label{grtab:table1}\n \\end{center}\n\\end{table}\nFor the given data set, we observe that the stability condition of $E_4$ is satisfied and the system (\\ref{gr2a}) has different equilibrium points corresponding to different values of $\\alpha,\\mu$, as shown in Table \\ref{grtab:table1}. 
From Table \\ref{grtab:table1}, we observe that, for a fixed value of $\\alpha$, the equilibrium point increases with increasing $\\mu$, while at $\\alpha=1$ it remains the same because the left and right branches of the triangular fuzzy numbers coincide. For $\\mu=0,0.4$ the equilibrium point increases, while for $\\mu=0.6,1$ it decreases, with increasing $\\alpha$. Figure \\ref{grfig:1} shows that, for the given data in Example \\ref{grex1}, the population density $\\mathbb{K}(\\widehat{p}(u))$ of the first prey eventually goes extinct, while the population densities $\\mathbb{K}(\\widehat{q}(u))$ and $\\mathbb{K}(\\widehat{r}(u))$ of the second prey and the predator persist and eventually attain the steady states given in Table \\ref{grtab:table1} for different values of $\\alpha,\\mu$; the equilibrium is asymptotically stable. At $\\alpha=1$, the system shows crisp behavior.\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.75]{figure\/1.eps} \n \\includegraphics[scale=0.75]{figure\/5.eps}\n \\includegraphics[scale= 0.75]{figure\/9.eps}\n \\includegraphics[scale=0.75]{figure\/2.eps}\n \\includegraphics[scale=0.75]{figure\/6.eps}\n \\includegraphics[scale=0.75]{figure\/10.eps}\n \\includegraphics[scale=0.75]{figure\/3.eps}\n \\includegraphics[scale=0.75]{figure\/7.eps}\n \\end{figure}\n \\begin{figure}\n \\centering\n \\includegraphics[scale=0.75]{figure\/11.eps} \n \\includegraphics[scale=0.75]{figure\/4.eps}\n \\includegraphics[scale=0.75]{figure\/8.eps}\n \\includegraphics[scale=0.75]{figure\/12.eps} \\caption{Variation of prey and predator populations against time for the system (\\ref{gr2a}) for different values of $\\alpha,\\mu$. 
Blue, green and red curves show the horizontal membership function of the population densities $\\mathbb{K}(\\widehat{p}(u)), \\mathbb{K}(\\widehat{q}(u))$ and $\\mathbb{K}(\\widehat{r}(u))$, respectively.}\n \\label{grfig:1}\n\\end{figure}\n\\item[(vi)] The equilibrium point $E_5(\\mathbb{K}(\\widehat{p_{e_5}}),0,\\mathbb{K}(\\widehat{r_{e_5}}))$ is locally asymptotically stable if \n\\[\\mathbb{K}(\\widehat{a_2})<\\frac{\\mathbb{K}(\\widehat{a_1})\\mathbb{K}(\\widehat{a_4})^2}{(\\mathbb{K}(\\widehat{a_1})\\mathbb{K}(\\widehat{a_3})+\\mathbb{K}(\\widehat{a_4}))^2}.\\]\nTherefore the fuzzy equilibrium point $\\widehat{E}_5(\\widehat{p_{e_5}},{0},\\widehat{r_{e_5}})$ is locally asymptotically fuzzy stable if \n\\[\\widehat{a_2}<\\frac{\\widehat{a_1}\\widehat{a_4}^2}{(\\widehat{a_1}\\widehat{a_3}+\\widehat{a_4})^2}.\\]\n\\begin{exa}\\label{grex2}\n\\end{exa}\n Let $\\widehat{a_1}=(2,4,6),\\widehat{a_2}=(0.01,0.02,0.03),\\widehat{a_3}=(0.1,0.2,0.3),\\widehat{a_4}=(1,2,3),\\widehat{a_5}=(3,4,5),\\widehat{p}_0=0.1,\\widehat{q}_0=0.2,\\widehat{r}_0=0.3$, where $\\widehat{a_1},\\widehat{a_2},\\widehat{a_3},\\widehat{a_4},\\widehat{a_5}$ are triangular fuzzy numbers. 
Now, the horizontal membership functions of the given triangular fuzzy numbers are given by\n\\begin{eqnarray*}\n\\mathbb{K}(\\widehat{a_1})&=&a_1^{gr}(\\alpha,\\mu_{a_1})=2+2\\alpha+\\mu_{a_1}(4-4\\alpha),\\\\\n\\mathbb{K}(\\widehat{a_2})&=&a_2^{gr}(\\alpha,\\mu_{a_2})=0.01+0.01\\alpha+\\mu_{a_2}(0.02-0.02\\alpha),\\\\\n\\mathbb{K}(\\widehat{a_3})&=&a_3^{gr}(\\alpha,\\mu_{a_3})=0.1+0.1\\alpha+\\mu_{a_3}(0.2-0.2\\alpha),\\\\\n\\mathbb{K}(\\widehat{a_4})&=&a_4^{gr}(\\alpha,\\mu_{a_4})=1+\\alpha+\\mu_{a_4}(2-2\\alpha),\\\\\n\\mathbb{K}(\\widehat{a_5})&=&a_5^{gr}(\\alpha,\\mu_{a_5})=3+\\alpha+\\mu_{a_5}(2-2\\alpha),\\\\\n\\mathbb{K}(\\widehat{p}_0)&=&0.1,\\mathbb{K}(\\widehat{q}_0)=0.2,\\mathbb{K}(\\widehat{r}_0)=0.3.\n\\end{eqnarray*}\nFurther, we assume $\\mu_p,\\mu_q,\\mu_r,\\mu_{a_1},\\mu_{a_2},\\mu_{a_3},\\mu_{a_4},\\mu_{a_5}=\\mu\\in\\{0,0.4,0.6,1\\}$ and $\\alpha\\in\\{0,0.5,1\\}$.\n\\begin{table}[ht!]\n \\begin{center}\n \\begin{adjustbox}{max width=0.9\\linewidth}\n \\begin{tabular}{|p{.4cm}|p{3.1cm}| p{3.1cm}|p{3.1cm}|}\n \\hline\n $\\mu$ & $\\alpha=0$& $\\alpha=0.5$& $\\alpha=1$\\\\\n \\hline\n 0&(0.1667,0,1.6667)&(0.2308,0,2.3077)&(0.2857,0,2.8571) \\\\\n \\hline\n 0.4&(0.2647,0,2.6471)&(0.2754,0,2.7536)&(0.2857,0,2.8571) \\\\\n \\hline\n 0.6&(0.3056,0,3.0556)&(0.2958,0,2.9577)&(0.2857,0,2.8571)\\\\\n \\hline\n 1&(0.375,0,3.75)&(0.3333,0,3.3333)&(0.2857,0,2.8571)\\\\\n \\hline\n \\end{tabular}\n \\end{adjustbox}\n \\caption{Equilibrium point $E_5(\\mathbb{K}(\\widehat{p_{e_5}}),0,\\mathbb{K}(\\widehat{r_{e_5}}))$ for different values of $\\alpha,\\mu$}\n \\label{grtab:table2}\n \\end{center}\n\\end{table}\nFor the given data set, we observe that the stability condition of $E_5$ is satisfied and the system (\\ref{gr2a}) has different equilibrium points corresponding to different values of $\\alpha,\\mu$, as shown in Table \\ref{grtab:table2}. 
From Table \\ref{grtab:table2}, we observe that, for a fixed value of $\\alpha$, the equilibrium point increases with increasing $\\mu$, while at $\\alpha=1$ it remains the same because the left and right branches of the triangular fuzzy numbers coincide. For $\\mu=0,0.4$ the equilibrium point increases, while for $\\mu=0.6,1$ it decreases, with increasing $\\alpha$. Figure \\ref{grfig:2} shows that, for the given data in Example \\ref{grex2}, the population density $\\mathbb{K}(\\widehat{q}(u))$ of the second prey eventually goes extinct, while the population densities $\\mathbb{K}(\\widehat{p}(u))$ and $\\mathbb{K}(\\widehat{r}(u))$ of the first prey and the predator persist and eventually attain the steady states given in Table \\ref{grtab:table2} for different values of $\\alpha,\\mu$; the equilibrium is asymptotically stable. At $\\alpha=1$, the system shows crisp behavior.\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.75]{figure\/13.eps} \n \\includegraphics[scale=0.75]{figure\/17.eps}\n\\includegraphics[scale= 0.75]{figure\/21.eps}\n \\includegraphics[scale=0.75]{figure\/14.eps} \n \\includegraphics[scale=0.75]{figure\/18.eps}\n \\includegraphics[scale=0.75]{figure\/22.eps}\n \\includegraphics[scale=0.75]{figure\/15.eps}\n \\includegraphics[scale=0.75]{figure\/19.eps}\n \\end{figure}\n \\begin{figure}\n \\centering\n \\includegraphics[scale=0.75]{figure\/23.eps}\n \\includegraphics[scale=0.75]{figure\/16.eps}\n \\includegraphics[scale=0.75]{figure\/20.eps}\n \\includegraphics[scale=.75]{figure\/24.eps}\n \\caption{Variation of prey and predator populations against time for the system (\\ref{gr2a}) for different values of $\\alpha,\\mu$. 
Blue, green and red curves show the horizontal membership function of the population densities $\\mathbb{K}(\\widehat{p}(u)), \\mathbb{K}(\\widehat{q}(u))$ and $\\mathbb{K}(\\widehat{r}(u))$, respectively.}\n \\label{grfig:2}\n\\end{figure}\n\\item[(vii)] The equilibrium point $E_6(1,1,\\mathbb{K}(\\widehat{r_{e_6}}))$ is locally asymptotically stable if \n\\[\\mathbb{K}(\\widehat{a_1})\\mathbb{K}(\\widehat{a_2})>\\frac{(\\mathbb{K}(\\widehat{a_4})+\\mathbb{K}(\\widehat{a_5}))^2}{\\mathbb{K}(\\widehat{a_3})^2}.\\]\nTherefore the fuzzy equilibrium point $\\widehat{E}_6({1},{1},\\widehat{r_{e_6}})$ is locally asymptotically fuzzy stable if \n\\[\\widehat{a_1}\\widehat{a_2}>\\frac{(\\widehat{a_4}+\\widehat{a_5})^2}{\\widehat{a_3}^2}.\\]\n\\begin{exa}\\label{grex3}\n\\end{exa} Let $\\widehat{a_1}=(3,4,5),\\widehat{a_2}=(2,4,6),\\widehat{a_3}=(3,4,5),\\widehat{a_4}=(1,2,3),\\widehat{a_5}=(0.01,0.02,0.03),\\widehat{p}_0=0.1,\\widehat{q}_0=0.2,\\widehat{r}_0=0.3$, where $\\widehat{a_1},\\widehat{a_2},\\widehat{a_3},\\widehat{a_4},\\widehat{a_5}$ are triangular fuzzy numbers. 
Now, the horizontal membership functions of the given triangular fuzzy numbers are given by\n\\begin{eqnarray*}\n\\mathbb{K}(\\widehat{a_1})&=&a_1^{gr}(\\alpha,\\mu_{a_1})=3+\\alpha+\\mu_{a_1}(2-2\\alpha),\\\\\n\\mathbb{K}(\\widehat{a_2})&=&a_2^{gr}(\\alpha,\\mu_{a_2})=2+2\\alpha+\\mu_{a_2}(4-4\\alpha),\\\\\n\\mathbb{K}(\\widehat{a_3})&=&a_3^{gr}(\\alpha,\\mu_{a_3})=3+\\alpha+\\mu_{a_3}(2-2\\alpha),\\\\\n\\mathbb{K}(\\widehat{a_4})&=&a_4^{gr}(\\alpha,\\mu_{a_4})=1+\\alpha+\\mu_{a_4}(2-2\\alpha),\\\\\n\\mathbb{K}(\\widehat{a_5})&=&a_5^{gr}(\\alpha,\\mu_{a_5})=0.01+0.01\\alpha+\\mu_{a_5}(0.02-0.02\\alpha),\\\\\n\\mathbb{K}(\\widehat{p}_0)&=&0.1,\\mathbb{K}(\\widehat{q}_0)=0.2,\\mathbb{K}(\\widehat{r}_0)=0.3.\n\\end{eqnarray*}\nFurther, we assume $\\mu_p,\\mu_q,\\mu_r,\\mu_{a_1},\\mu_{a_2},\\mu_{a_3},\\mu_{a_4},\\mu_{a_5}=\\mu\\in\\{0,0.4,0.6,1\\}$ and $\\alpha\\in\\{0,0.5,1\\}$.\n\\begin{table}[ht!]\n \\begin{center}\n \\begin{adjustbox}{max width=0.9\\linewidth}\n \\begin{tabular}{|p{.4cm}|p{2.0cm}| p{2.0cm}|p{2.0cm}|}\n \\hline\n $\\mu$ & $\\alpha=0$& $\\alpha=0.5$& $\\alpha=1$\\\\\n \\hline\n 0&(1,1,0.3367)&(1,1,0.4329)&(1,1,0.5050) \\\\\n \\hline\n 0.4&(1,1,0.4784)&(1,1,0.4921)&(1,1,0.5050) \\\\\n \\hline\n 0.6&(1,1,0.5290)&(1,1,0.5173)&(1,1,0.5050)\\\\\n \\hline\n 1&(1,1,0.6060)&(1,1,0.5611)&(1,1,0.5050)\\\\\n \\hline\n \\end{tabular}\n \\end{adjustbox}\n \\caption{Equilibrium point $E_6({1},{1},\\mathbb{K}(\\widehat{r_{e_6}}))$ for different values of $\\alpha,\\mu$}\n \\label{grtab:table3}\n \\end{center}\n \\end{table}\nFor the given data set, we observe that the stability condition of $E_6$ is satisfied and the system (\\ref{gr2a}) has different equilibrium points corresponding to different values of $\\alpha,\\mu$, as shown in Table \\ref{grtab:table3}. 
From Table \\ref{grtab:table3}, we observe that, for a fixed value of $\\alpha$, the equilibrium point increases with increasing $\\mu$, while at $\\alpha=1$ it remains the same because the left and right branches of the triangular fuzzy numbers coincide. For $\\mu=0,0.4$ the equilibrium point increases, while for $\\mu=0.6,1$ it decreases, with increasing $\\alpha$. Figure \\ref{grfig:3} shows that, for the given data in Example \\ref{grex3}, initially the population density $\\mathbb{K}(\\widehat{r}(u))$ of the predator decreases while the population densities $\\mathbb{K}(\\widehat{p}(u))$ and $\\mathbb{K}(\\widehat{q}(u))$ of the preys increase, and all eventually attain the steady states given in Table \\ref{grtab:table3} for different values of $\\alpha,\\mu$; the equilibrium is asymptotically stable. At $\\alpha=1$, the system shows crisp behavior.\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.75]{figure\/25.eps}\n \\includegraphics[scale=0.75]{figure\/29.eps}\n \\includegraphics[scale= 0.75]{figure\/33.eps}\n \\includegraphics[scale=0.75]{figure\/26.eps}\n \\includegraphics[scale=0.75]{figure\/30.eps}\n \\includegraphics[scale= 0.65]{figure\/34.eps}\n\\includegraphics[scale=0.75]{figure\/27.eps}\n\\includegraphics[scale=0.75]{figure\/31.eps}\n\\end{figure}\n\\begin{figure}\n \\centering\n\\includegraphics[scale= 0.65]{figure\/35.eps}\n\\includegraphics[scale=0.8]{figure\/28.eps}\n\\includegraphics[scale=0.8]{figure\/32.eps}\n\\includegraphics[scale= 0.65]{figure\/36.eps} \\caption{Variation of prey and predator populations against time for the system (\\ref{gr2a}) for different values of $\\alpha,\\mu$. 
Blue, green and red curves show the horizontal membership function of the population densities $\\mathbb{K}(\\widehat{p}(u)), \\mathbb{K}(\\widehat{q}(u))$ and $\\mathbb{K}(\\widehat{r}(u))$, respectively.}\n \\label{grfig:3}\n\\end{figure}\n\\item[(viii)] The equilibrium point $E_7(\\mathbb{K}(\\widehat{p_{e_7}}),\\mathbb{K}(\\widehat{q_{e_7}}),\\mathbb{K}(\\widehat{r_{e_7}}))$ is locally asymptotically stable if \n\\begin{eqnarray*}\n\\mathbb{K}(\\widehat{a_4})+\\mathbb{K}(\\widehat{a_5})&>&\\mathbb{K}(\\widehat{a_3})\\sqrt{\\mathbb{K}(\\widehat{a_1})\\mathbb{K}(\\widehat{a_2})},\\\\\n\\mathbb{K}(\\widehat{a_2})&\\geq& \\mathbb{K}(\\widehat{a_1}),\\\\\n\\mathbb{K}(\\widehat{a_1})\\mathbb{K}(\\widehat{a_3})&\\geq& 1.\n\\end{eqnarray*}\nTherefore the fuzzy equilibrium point $\\widehat{E}_7(\\widehat{p_{e_7}},\\widehat{q_{e_7}},\\widehat{r_{e_7}})$ is locally asymptotically fuzzy stable if \n\\begin{eqnarray*}\n\\widehat{a_4}+\\widehat{a_5}&>&\\widehat{a_3}\\sqrt{\\widehat{a_1}\\widehat{a_2}},\\\\\n\\widehat{a_2}&\\geq&\\widehat{a_1},\\\\\n\\widehat{a_1}\\widehat{a_3}&\\geq& 1.\n\\end{eqnarray*}\n\\begin{exa}\\label{grex4}\n\\end{exa} Let $\\widehat{a_1}=(1,2,3),\\widehat{a_2}=(2,4,6),\\widehat{a_3}=(1,2,3),\\widehat{a_4}=(3,4,5),\\widehat{a_5}=(1,2,3),\\widehat{p}_0=0.1,\\widehat{q}_0=0.2,\\widehat{r}_0=0.3$, where $\\widehat{a_1},\\widehat{a_2},\\widehat{a_3},\\widehat{a_4},\\widehat{a_5}$ are triangular fuzzy numbers. 
Now, the horizontal membership functions of the given triangular fuzzy numbers are given by\n\\begin{eqnarray*}\n\\mathbb{K}(\\widehat{a_1})&=&a_1^{gr}(\\alpha,\\mu_{a_1})=1+\\alpha+\\mu_{a_1}(2-2\\alpha),\\\\\n\\mathbb{K}(\\widehat{a_2})&=&a_2^{gr}(\\alpha,\\mu_{a_2})=2+2\\alpha+\\mu_{a_2}(4-4\\alpha),\\\\\n\\mathbb{K}(\\widehat{a_3})&=&a_3^{gr}(\\alpha,\\mu_{a_3})=1+\\alpha+\\mu_{a_3}(2-2\\alpha),\\\\\n\\mathbb{K}(\\widehat{a_4})&=&a_4^{gr}(\\alpha,\\mu_{a_4})=3+\\alpha+\\mu_{a_4}(2-2\\alpha),\\\\\n\\mathbb{K}(\\widehat{a_5})&=&a_5^{gr}(\\alpha,\\mu_{a_5})=1+\\alpha+\\mu_{a_5}(2-2\\alpha),\\\\\n\\mathbb{K}(\\widehat{p}_0)&=&0.1,\\mathbb{K}(\\widehat{q}_0)=0.2,\\mathbb{K}(\\widehat{r}_0)=0.3.\n\\end{eqnarray*}\nFurther, we assume $\\mu_p,\\mu_q,\\mu_r,\\mu_{a_1},\\mu_{a_2},\\mu_{a_3},\\mu_{a_4},\\mu_{a_5}=\\mu\\in\\{0,0.4,0.6,1\\}$ and $\\alpha\\in\\{0,0.5,1\\}$.\n\\begin{table}[ht!]\n \\begin{center}\n \\begin{adjustbox}{max width=0.9\\linewidth}\n \\begin{tabular}{|p{0.5cm}|p{4cm}| p{4cm}|p{4cm}|}\n \\hline\n $\\mu$ & $\\alpha=0$& $\\alpha=0.5$& $\\alpha=1$\\\\\n \\hline\n 0&(0.3025,0.5068,1.414)&(0.6014,0.7181,2.1213)&(0.9366,0.9552,2.8284) \\\\\n \\hline\n 0.4&(0.7993,0.8581,2.5456)&(0.8675,0.9063,2.6870)&\n (0.9366,0.9552,2.8284) \\\\\n \\hline\n 0.6&(1.0773,1.0546,3.1113)&(1.0066,1.0046,2.9698)&\n (0.9366,0.9552,2.8284)\\\\\n \\hline\n 1&(1.6639,1.4695,4.2426)&\n (1.2934,1.2075,3.5355)&\n (0.9366,0.9552,2.8284)\\\\\n \\hline\n \\end{tabular}\n \\end{adjustbox}\n \\caption{Equilibrium point $E_7(\\mathbb{K}(\\widehat{p_{e_7}}),\\mathbb{K}(\\widehat{q_{e_7}}),\\mathbb{K}(\\widehat{r_{e_7}}))$ for different values of $\\alpha,\\mu$}\n \\label{grtab:table4}\n \\end{center}\n \\end{table}\nFor the given data set, we observe that the stability condition of $E_7$ is satisfied and the system (\\ref{gr2a}) has different equilibrium points corresponding to different values of $\\alpha,\\mu$, as shown in 
Table \\ref{grtab:table4}. From Table \\ref{grtab:table4}, we observe that, for a fixed value of $\\alpha$, the equilibrium point increases with increasing $\\mu$, while at $\\alpha=1$ it remains the same because the left and right branches of the triangular fuzzy numbers coincide. For $\\mu=0,0.4$ the equilibrium point increases, while for $\\mu=0.6,1$ it decreases, with increasing $\\alpha$. Figure \\ref{grfig:4} shows that, for the given data in Example \\ref{grex4}, the population densities $\\mathbb{K}(\\widehat{p}(u)),\\mathbb{K}(\\widehat{q}(u))$ and $\\mathbb{K}(\\widehat{r}(u))$ of the preys and the predator initially increase and eventually attain the steady states given in Table \\ref{grtab:table4} for different values of $\\alpha,\\mu$; the equilibrium is asymptotically stable. At $\\alpha=1$, the system shows crisp behavior.\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.75]{figure\/37.eps}\n \\includegraphics[scale=0.75]{figure\/41.eps}\n \\includegraphics[scale= 0.75]{figure\/45.eps}\n \\includegraphics[scale=0.75]{figure\/38.eps}\n \\includegraphics[scale=0.75]{figure\/42.eps}\n \\includegraphics[scale= 0.65]{figure\/46.eps}\n \\includegraphics[scale=0.75]{figure\/39.eps} \n \\includegraphics[scale=0.75]{figure\/43.eps}\n \\end{figure}\n \\begin{figure}\n \\centering\n \\includegraphics[scale= 0.65]{figure\/47.eps}\n \\includegraphics[scale=0.75]{figure\/40.eps} \n \\includegraphics[scale=0.75]{figure\/44.eps} \\includegraphics[scale=0.65]{figure\/48.eps}\n \\caption{Variation of prey and predator populations against time for the system (\\ref{gr2a}) for different values of $\\alpha,\\mu$. 
Blue, green and red curves show the horizontal membership function of the population densities $\\widehat{p}(u), \\widehat{q}(u)$ and $\\widehat{r}(u)$, respectively.}\n \\label{grfig:4}\n\\end{figure}\n\\end{itemize}\n\\begin{thm}\nThe nontrivial fuzzy equilibrium point $\\widehat{E}_7({\\widehat{p_{e_7}}},{\\widehat{q_{e_7}}},{\\widehat{r_{e_7}}})$ is globally asymptotically fuzzy stable.\n\\end{thm}\n\\textbf{Proof:} To study the global fuzzy stability of the system (\\ref{gr1}), it is enough to check the global stability of the system (\\ref{gr2}). To this end, we construct a suitable Lyapunov function $\\mathbb{K}(\\widehat{U})=\\left(\\mathbb{K}(\\widehat{p})-\\mathbb{K}(\\widehat{p_{e_7}})-\\mathbb{K}(\\widehat{p_{e_7}})\\log\\left(\\frac{\\mathbb{K}(\\widehat{p})}{\\mathbb{K}(\\widehat{p_{e_7}})}\\right)\\right)+\\left(\\mathbb{K}(\\widehat{q})-\\mathbb{K}(\\widehat{q_{e_7}})-\\mathbb{K}(\\widehat{q_{e_7}})\\log\\left(\\frac{\\mathbb{K}(\\widehat{q})}{\\mathbb{K}(\\widehat{q_{e_7}})}\\right)\\right)+\\linebreak \\left(\\mathbb{K}(\\widehat{r})-\\mathbb{K}(\\widehat{r_{e_7}})-\\mathbb{K}(\\widehat{r_{e_7}})\\log\\left(\\frac{\\mathbb{K}(\\widehat{r})}{\\mathbb{K}(\\widehat{r_{e_7}})}\\right)\\right)$. It is easy to verify that the function $\\mathbb{K}(\\widehat{U})$ vanishes at the equilibrium point $E_7(\\mathbb{K}(\\widehat{p_{e_7}}),\\mathbb{K}(\\widehat{q_{e_7}}),\\mathbb{K}(\\widehat{r_{e_7}}))$ and is positive for all other values of\\linebreak $(\\mathbb{K}(\\widehat{p}),\\mathbb{K}(\\widehat{q}),\\mathbb{K}(\\widehat{r}))$. 
Then\n\\begin{eqnarray}\n \\begin{array}{ll}\\label{grlya}\n \\frac{\\partial \\mathbb{K}(\\widehat{U})}{\\partial t}=\\frac{(\\mathbb{K}(\\widehat{p})-\\mathbb{K}(\\widehat{p_{e_7}}))}{\\mathbb{K}(\\widehat{p})}\\frac{\\partial \\mathbb{K}(\\widehat{p})}{\\partial t}+\\frac{(\\mathbb{K}(\\widehat{q})-\\mathbb{K}(\\widehat{q_{e_7}}))}{\\mathbb{K}(\\widehat{q})}\\frac{\\partial \\mathbb{K}(\\widehat{q})}{\\partial t}+\\frac{(\\mathbb{K}(\\widehat{r})-\\mathbb{K}(\\widehat{r_{e_7}}))}{\\mathbb{K}(\\widehat{r})}\\frac{\\partial \\mathbb{K}(\\widehat{r})}{\\partial t}\\\\\n \\hspace{1cm}= (\\mathbb{K}(\\widehat{p})-\\mathbb{K}(\\widehat{p_{e_7}}))(\\mathbb{K}(\\widehat{a_1})(1-\\mathbb{K}(\\widehat{p}))-\n \\mathbb{K}(\\widehat{r})+ \\mathbb{K}(\\widehat{q}) \\mathbb{K}(\\widehat{r}))\\\\\n \\hspace{1cm}+ (\\mathbb{K}(\\widehat{q})-\\mathbb{K}(\\widehat{q_{e_7}}))(\\mathbb{K}(\\widehat{a_2})(1-\\mathbb{K}(\\widehat{q}))-\n \\mathbb{K}(\\widehat{r})+ \\mathbb{K}(\\widehat{p}) \\mathbb{K}(\\widehat{r}))\\\\\n \\hspace{1cm}+ (\\mathbb{K}(\\widehat{r})-\\mathbb{K}(\\widehat{r_{e_7}}))(-\\mathbb{K}(\\widehat{a_3})\\mathbb{K}(\\widehat{r})+\n \\mathbb{K}(\\widehat{a_4})\\mathbb{K}(\\widehat{p})+ \\mathbb{K}(\\widehat{a_5})\\mathbb{K}(\\widehat{q})).\n \\end{array}\n\\end{eqnarray}\nFor the equilibrium point $E_7(\\mathbb{K}(\\widehat{p_{e_7}}),\\mathbb{K}(\\widehat{q_{e_7}}),\\mathbb{K}(\\widehat{r_{e_7}}))$, we have the set of equilibrium equations\n\\begin{eqnarray}\\label{grseteq}\n \\begin{array}{ll}\n \\mathbb{K}(\\widehat{a_1})(1-\\mathbb{K}(\\widehat{p_{e_7}}))-\n \\mathbb{K}(\\widehat{r_{e_7}})+ \\mathbb{K}(\\widehat{q_{e_7}}) \\mathbb{K}(\\widehat{r_{e_7}})=0\\\\\n \\mathbb{K}(\\widehat{a_2})(1-\\mathbb{K}(\\widehat{q_{e_7}}))-\n \\mathbb{K}(\\widehat{r_{e_7}})+ \\mathbb{K}(\\widehat{p_{e_7}}) \\mathbb{K}(\\widehat{r_{e_7}})=0\\\\\n-\\mathbb{K}(\\widehat{a_3})\\mathbb{K}(\\widehat{r_{e_7}})+\n \\mathbb{K}(\\widehat{a_4})\\mathbb{K}(\\widehat{p_{e_7}})+ 
\\mathbb{K}(\\widehat{a_5})\\mathbb{K}(\\widehat{q_{e_7}})=0.\n \\end{array}\n\\end{eqnarray} \nNow, simplifying Equation (\\ref{grlya}) by using the equilibrium equations (\\ref{grseteq}), we obtain\n\\begin{eqnarray}\n \\begin{array}{ll}\n \\frac{\\partial \\mathbb{K}(\\widehat{U})}{\\partial t}= -\\mathbb{K}(\\widehat{a_1})(\\mathbb{K}(\\widehat{p})-\\mathbb{K}(\\widehat{p_{e_7}}))^2-\\mathbb{K}(\\widehat{a_2})(\\mathbb{K}(\\widehat{q})-\\mathbb{K}(\\widehat{q_{e_7}}))^2\n \\\\\n \\hspace{1cm} -\\mathbb{K}(\\widehat{a_3})(\\mathbb{K}(\\widehat{r})-\\mathbb{K}(\\widehat{r_{e_7}}))^2+(\\mathbb{K}(\\widehat{a_4})-1)(\\mathbb{K}(\\widehat{p})-\\mathbb{K}(\\widehat{p_{e_7}}))(\\mathbb{K}(\\widehat{r})-\\mathbb{K}(\\widehat{r_{e_7}}))\\\\\n \\hspace{1cm} +\\left(\\frac{\\mathbb{K}(\\widehat{a_3})\\mathbb{K}(\\widehat{r_{e_7}})}{\\mathbb{K}(\\widehat{q_{e_7}})}-\\frac{\\mathbb{K}(\\widehat{a_4})\\mathbb{K}(\\widehat{p_{e_7}})}{\\mathbb{K}(\\widehat{q_{e_7}})}-1\\right)(\\mathbb{K}(\\widehat{q})-\\mathbb{K}(\\widehat{q_{e_7}}))(\\mathbb{K}(\\widehat{r})-\\mathbb{K}(\\widehat{r_{e_7}})).\n \\end{array}\n\\end{eqnarray}\nNext, we assume that $\\mathbb{K}(\\widehat{a_4})<1,\\,\\frac{\\mathbb{K}(\\widehat{a_3})\\mathbb{K}(\\widehat{r_{e_7}})}{\\mathbb{K}(\\widehat{q_{e_7}})}<\\frac{\\mathbb{K}(\\widehat{a_4})\\mathbb{K}(\\widehat{p_{e_7}})}{\\mathbb{K}(\\widehat{q_{e_7}})}+1$ and that $(\\mathbb{K}(\\widehat{p})-\\mathbb{K}(\\widehat{p_{e_7}})),\\,(\\mathbb{K}(\\widehat{q})-\\mathbb{K}(\\widehat{q_{e_7}})),\\,(\\mathbb{K}(\\widehat{r})-\\mathbb{K}(\\widehat{r_{e_7}}))$ have the same sign. Then $ \\frac{\\partial \\mathbb{K}(\\widehat{U})}{\\partial t}<0$ everywhere except at the equilibrium point $E_7(\\mathbb{K}(\\widehat{p_{e_7}}),\\mathbb{K}(\\widehat{q_{e_7}}),\\mathbb{K}(\\widehat{r_{e_7}}))$. Thus the equilibrium point $E_7(\\mathbb{K}(\\widehat{p_{e_7}}),\\mathbb{K}(\\widehat{q_{e_7}}),\\mathbb{K}(\\widehat{r_{e_7}}))$ is globally asymptotically stable. 
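As a quick numerical sanity check of this sign argument, one can evaluate the simplified derivative at sample crisp values. The parameter and equilibrium values below are hypothetical, chosen only so that the two assumptions hold; they are not data from the model.

```python
# Hypothetical crisp parameters (a4 < 1 assumed) and a hypothetical
# equilibrium point; this is a sketch, not data from the model.
a1, a2, a3, a4 = 0.02, 4.0, 0.2, 0.5
pe, qe, re = 0.5, 0.6, 0.7

# second assumption of the theorem: a3*re/qe < a4*pe/qe + 1
assert a3 * re / qe < a4 * pe / qe + 1

def dU_dt(p, q, r):
    """Simplified Lyapunov derivative from the display above (crisp form)."""
    return (-a1 * (p - pe) ** 2 - a2 * (q - qe) ** 2 - a3 * (r - re) ** 2
            + (a4 - 1) * (p - pe) * (r - re)
            + (a3 * re / qe - a4 * pe / qe - 1) * (q - qe) * (r - re))

# deviations of the same sign give dU/dt < 0 away from the equilibrium
for s in (+1, -1):
    for d in (0.01, 0.05, 0.1, 0.2):
        assert dU_dt(pe + s * d, qe + s * d, re + s * d) < 0
```

Every term is non-positive under the stated assumptions: the quadratic terms carry negative coefficients, and both cross terms have negative coefficients multiplied by a positive product of same-sign deviations.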
Now, from Remark \\ref{grr1} and Theorem \\ref{grlogo}, we have $\\mathcal{D}^{gr}\\widehat{U}<0$, except at the fuzzy equilibrium point $\\widehat{E}_7(\\widehat{p_{e_7}},\\widehat{q_{e_7}},\\widehat{r_{e_7}})$. Thus the equilibrium point $\\widehat{E}_7(\\widehat{p_{e_7}},\\widehat{q_{e_7}},\\widehat{r_{e_7}})$ is globally asymptotically fuzzy stable.\n\n\\section{Numerical solution of the fuzzy prey-predator model by using granular $F$-transform}\nThis section presents a numerical method based on the concept of the granular $F$-transform for solving the system (\\ref{gr1}). To solve the system (\\ref{gr1}), we first solve the system (\\ref{gr2}) by using the $F$-transform. The method consists of applying the $F$-transform to both sides of the system (\\ref{gr2}); this leads to a system of algebraic equations whose solution is a discrete representation of an analytical solution of the system (\\ref{gr2}). Accordingly, we apply the granular $F$-transform to both sides of the system (\\ref{gr1}), whereby we get the following equations:\n\\begin{eqnarray}\\label{gr5}\n\\nonumber{F^{gr}[\\mathcal{D}^{gr}\\widehat{p}}]&=&F^{gr}[\\widehat{g_1}(u,\\widehat{p},\\widehat{q},\\widehat{r})],\\\\ \n\\nonumber{G^{gr}[\\mathcal{D}^{gr}\\widehat{q}}]&=&G^{gr}[\\widehat{g_2}(u,\\widehat{p},\\widehat{q},\\widehat{r})],\\\\ \n{H^{gr}[\\mathcal{D}^{gr}\\widehat{r}}]&=& H^{gr}[\\widehat{g_3}(u,\\widehat{p},\\widehat{q},\\widehat{r})].\n\\end{eqnarray}\nFrom Remark \\ref{grr1}, the system (\\ref{gr5}) can be written as\n\\begin{eqnarray}\\label{gr6}\n\\nonumber{\\mathbb{K}(F^{gr}[\\mathcal{D}^{gr}\\widehat{p}}])&=&\\mathbb{K}(F^{gr}[\\widehat{g_1}(u,\\widehat{p},\\widehat{q},\\widehat{r})]),\\\\ \n\\nonumber{\\mathbb{K}(G^{gr}[\\mathcal{D}^{gr}\\widehat{q}])}&=&\\mathbb{K}(G^{gr}[\\widehat{g_2}(u,\\widehat{p},\\widehat{q},\\widehat{r})]),\\\\ \n{\\mathbb{K}(H^{gr}[\\mathcal{D}^{gr}\\widehat{r}]})&=& 
\\mathbb{K}(H^{gr}[\\widehat{g_3}(u,\\widehat{p},\\widehat{q},\\widehat{r})]).\n\\end{eqnarray}\nFrom Proposition \\ref{grgrdif1}, the above system can be rewritten as\n\\begin{eqnarray}\\label{gr7}\n\\nonumber{F[\\frac{\\partial }{\\partial t}\\mathbb{K}(\\widehat{p})]}&=&F[\\mathbb{K}(\\widehat{g_1}(u,\\mathbb{K}(\\widehat{p}),\\mathbb{K}(\\widehat{q}),\\mathbb{K}(\\widehat{r})))]=F[g_1],\\\\\n\\nonumber{G[\\frac{\\partial }{\\partial t}\\mathbb{K}(\\widehat{q})]}&=&G[\\mathbb{K}(\\widehat{g_2}(u,\\mathbb{K}(\\widehat{p}),\\mathbb{K}(\\widehat{q}),\\mathbb{K}(\\widehat{r})))]=G[g_2],\\\\ \n{H[\\frac{\\partial }{\\partial t}\\mathbb{K}(\\widehat{r})]}&=&H[\\mathbb{K}(\\widehat{g_3}(u,\\mathbb{K}(\\widehat{p}),\\mathbb{K}(\\widehat{q}),\\mathbb{K}(\\widehat{r})))]=H[g_3],\n\\end{eqnarray}\nwhere \n\\[g_1=\\mathbb{K}(\\widehat{g_1}(u,\\mathbb{K}(\\widehat{p}),\\mathbb{K}(\\widehat{q}),\\mathbb{K}(\\widehat{r}))),\\]\n\\[g_2=\\mathbb{K}(\\widehat{g_2}(u,\\mathbb{K}(\\widehat{p}),\\mathbb{K}(\\widehat{q}),\\mathbb{K}(\\widehat{r}))),\\]\n\\[g_3=\\mathbb{K}(\\widehat{g_3}(u,\\mathbb{K}(\\widehat{p}),\\mathbb{K}(\\widehat{q}),\\mathbb{K}(\\widehat{r}))).\\]\n Let us choose $m > 2$ and create an $h$-uniform fuzzy partition $P_1,...,
P_m$ of $[a_1,a_2]$, where $h =\\dfrac{a_2-a_1}{m-1}$.\n We denote the $F$-transforms of the functions on the left- and right-hand sides of the system (\\ref{gr7}) by \n \\[F[\\mathbb{K}(\\widehat{p})] = (X_1,...,X_m)^T,\\]\n \\[G[\\mathbb{K}(\\widehat{q})] = (Y_1,...,Y_m)^T ,\\]\n \\[H[\\mathbb{K}(\\widehat{r})] = (Z_1,...,Z_m)^T,\\]\n \\[F[\\frac{\\partial }{\\partial t}\\mathbb{K}(\\widehat{p})] = (X'_1,...,X'_m)^T,\\]\n \\[G[\\frac{\\partial }{\\partial t}\\mathbb{K}(\\widehat{q})] = (Y'_1,...,Y'_m)^T ,\\]\n \\[H[\\frac{\\partial }{\\partial t}\\mathbb{K}(\\widehat{r})] = (Z'_1,...,Z'_m)^T ,\\]\n \\[F[g_1] = (F_1,...,F_m)^T,\\]\n \\[G[g_2] = (G_1,...,G_m)^T,\\]\n \\[H[g_3] = (H_1,...,H_m)^T.\\]\n Also, we set \\[X_1=\\mathbb{K}(\\widehat{p}_a),\\,Y_1=\\mathbb{K}(\\widehat{q}_a),\\,Z_1=\\mathbb{K}(\\widehat{r}_a).\\] Thus from the system (\\ref{gr7}), we obtain the following system of algebraic equations:\n\\begin{eqnarray}\\label{gr8}\n\\nonumber&&X'_i=F_i,\\,Y'_i=G_i,\\,Z'_i=H_i,\\,i=2,...,m-1,\\\\\n&&X_1=\\mathbb{K}(\\widehat{p}_a),\\,Y_1=\\mathbb{K}(\\widehat{q}_a),Z_1=\\mathbb{K}(\\widehat{r}_a),\n\\end{eqnarray}\nwhere \\[X'_i=F_i[\\frac{\\partial}{\\partial t}\\mathbb{K}(\\widehat{p})],\\,Y'_i=G_i[\\frac{\\partial }{\\partial t}\\mathbb{K}(\\widehat{q})],\\,Z'_i=H_i[\\frac{\\partial }{\\partial t}\\mathbb{K}(\\widehat{r})],\\]\n\\[X_i=F_i[\\mathbb{K}(\\widehat{p})],\\,Y_i=G_i[\\mathbb{K}(\\widehat{q})],\\,Z_i=H_i[\\mathbb{K}(\\widehat{r})],\\] \\[F_i=F_i[g_1],\\,G_i=G_i[g_2],\\,H_i=H_i[g_3]\\]\nare $F$-transform components. Also, using the central-difference scheme, we replace \n\\begin{eqnarray*}\nX'_i&=&\\dfrac{X_{i+1}-X_{i-1}}{2h},\\\\\nY'_i&=&\\dfrac{Y_{i+1}-Y_{i-1}}{2h},\\\\\nZ'_i&=&\\dfrac{Z_{i+1}-Z_{i-1}}{2h},\\,i=2,...,m-1\n\\end{eqnarray*}\nand assume $X_2=X_1+hF_1,Y_2=Y_1+hG_1,Z_2=Z_1+hH_1$. 
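The $h$-uniform triangular partition and the component computation described above can be sketched numerically as follows. The helper names (`basic_function`, `ft_component`) and the midpoint-rule quadrature are our own choices for this sketch, not part of the text; for a linear function, the interior components reproduce the nodal values, a standard property of the direct $F$-transform.

```python
def basic_function(i, m, a, b):
    """Triangular basic function P_i centred at the node u_i = a + (i-1)h."""
    h = (b - a) / (m - 1)
    u_i = a + (i - 1) * h
    return lambda u: max(0.0, 1.0 - abs(u - u_i) / h)

def ft_component(f, i, m, a, b, n=4000):
    """i-th direct F-transform component F_i[f] = (int f P_i du)/(int P_i du),
    approximated here by a midpoint Riemann sum with n points."""
    P = basic_function(i, m, a, b)
    num = den = 0.0
    for k in range(n):
        u = a + (k + 0.5) * (b - a) / n
        w = P(u)
        num += f(u) * w
        den += w
    return num / den

# sanity check: for f(u) = u the interior components equal the nodes u_i
m, a, b = 11, 0.0, 1.0
h = (b - a) / (m - 1)
for i in range(2, m):                     # interior indices i = 2,...,m-1
    assert abs(ft_component(lambda u: u, i, m, a, b) - (a + (i - 1) * h)) < 1e-3
```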
Thus we can introduce the $(m-2)\\times m$ matrix\n\\begin{equation}\\label{grm2}\n{D} = \\frac{1}{2h}\n\\begin{bmatrix}\n-1 & 0 &1&0&0&0&...&0\\\\\n0 & -1 &0&1&0&0&...&0\\\\\n0 & 0&-1&0&1&0&...&0\\\\\n .& &&&.&& &.\\\\[-.2cm]\n . & &&&&.& &.\\\\[-.2cm]\n . & &&&&&. &.\\\\\n 0 & 0&0&0&...&-1&0 &1\\\\\n\\end{bmatrix}.\n\\end{equation}\nNow, we can rewrite the system (\\ref{gr8}) as the following system of linear equations:\n\\[D F[\\mathbb{K}(\\widehat{p})]=F[g_1],\\,D G[\\mathbb{K}(\\widehat{q})]=G[g_2],\\,D H[\\mathbb{K}(\\widehat{r})]=H[g_3],\\]\nwhere \\[\nF[\\mathbb{K}(\\widehat{p})] = (X_1,...,X_m)^T,\\]\n\\[G[\\mathbb{K}(\\widehat{q})] = (Y_1,...,Y_m)^T,\\] \\[H[\\mathbb{K}(\\widehat{r})] = (Z_1,...,Z_m)^T,\\]\n\\[F[\\frac{\\partial }{\\partial t}\\mathbb{K}(\\widehat{p})] = (X'_2,...,X'_{m-1})^T,\\]\\[G[\\frac{\\partial }{\\partial t}\\mathbb{K}(\\widehat{q})] = (Y'_2,...,Y'_{m-1})^T,\\]\\[H[\\frac{\\partial }{\\partial t}\\mathbb{K}(\\widehat{r})] = (Z'_2,...,Z'_{m-1})^T,\\]\n\\[F[g_1] = (F_2,...,F_{m-1})^T,\\]\\[ G[g_2] = (G_2,...,G_{m-1})^T,\\]\\[H[g_3] = (H_2,...,H_{m-1})^T.\\] Next, the matrix $D$ is completed by adding the first and second rows, which carry the initial values, as seen below:\n\\begin{equation*}\\label{grm3}\n{D'} = \\frac{1}{2h}\n\\begin{bmatrix}\n1 & 0 &0&0&0&0&...&0\\\\\n0& 1 &0&0&0&0&...&0\\\\\n-1 & 0 &1&0&0&0&...&0\\\\\n0 & -1 &0&1&0&0&...&0\\\\\n0 & 0&-1&0&1&0&...&0\\\\\n .& &&&.&& &.\\\\[-.2cm]\n . & &&&&.& &.\\\\[-.2cm]\n . & &&&&&. &.\\\\\n 0 & 0&0&0&...&-1&0 &1\\\\\n\\end{bmatrix},\n\\end{equation*}\nso that the matrix $D'$ is non-singular. 
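A minimal sketch, assuming the structure of $D'$ just described, that builds the matrix numerically and confirms both its non-singularity and the recurrence encoded by its rows. NumPy is used for the linear algebra; the right-hand side values are hypothetical.

```python
import numpy as np

def build_D_prime(m, h):
    """Completed matrix D': two initial-value rows followed by
    central-difference rows, all scaled by 1/(2h)."""
    D = np.zeros((m, m))
    D[0, 0] = D[1, 1] = 1.0
    for k in range(2, m):
        D[k, k - 2], D[k, k] = -1.0, 1.0
    return D / (2.0 * h)

m, h = 7, 0.1
Dp = build_D_prime(m, h)

# hypothetical right-hand side: initial values X_1, X_2 (divided by 2h),
# then the components F_2,...,F_{m-1}
X1, X2 = 0.5, 0.55
F = np.array([0.3, 0.2, 0.1, 0.05, 0.02])
y = np.concatenate(([X1 / (2 * h), X2 / (2 * h)], F))
X = np.linalg.solve(Dp, y)

# the inverse has the alternating pattern (D'^{-1})_{jk} = 2h
# when k <= j and j - k is even, and 0 otherwise
inv = np.linalg.inv(Dp)
expected = np.array([[2 * h if k <= j and (j - k) % 2 == 0 else 0.0
                      for k in range(m)] for j in range(m)])
assert np.allclose(inv, expected)

# the solution obeys X_{i+1} = X_{i-1} + 2h F_i with the given start
assert np.allclose(X[:2], [X1, X2])
for k in range(2, m):
    assert abs(X[k] - (X[k - 2] + 2 * h * F[k - 2])) < 1e-9
```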
Now, we can expand $F[g_1],G[g_2]$ and $H[g_3]$ by adding the first and second elements using the initial conditions and the matrix $D'$, as follows:\n\\[F[g_1] = \\left(\\frac{\\mathbb{K}(\\widehat{p}_a)}{2h},\\frac{\\mathbb{K}(\\widehat{p_{2}})}{2h},F_2,...,F_{m-1}\\right)^T,\\]\n\\[G[g_2] = \\left(\\frac{\\mathbb{K}(\\widehat{q}_a)}{2h},\\frac{\\mathbb{K}(\\widehat{q}_2)}{2h},G_2,...,G_{m-1}\\right)^T, \\] \n\\[H[g_3] = \\left(\\frac{\\mathbb{K}(\\widehat{r}_a)}{2h},\\frac{\\mathbb{K}(\\widehat{r}_2)}{2h},H_2,...,H_{m-1}\\right)^T,\\]\nwhere $\\mathbb{K}(\\widehat{p_{2}})=X_2$, $\\mathbb{K}(\\widehat{q}_2)=Y_2$ and $\\mathbb{K}(\\widehat{r}_2)=Z_2$. Thus we have \n\\begin{eqnarray}\\label{grmat}\nD' F[\\mathbb{K}(\\widehat{p})]=F[g_1],\\,D' G[\\mathbb{K}(\\widehat{q})]=G[g_2],\\,D' H[\\mathbb{K}(\\widehat{r})]=H[g_3].\n\\end{eqnarray}\nThe system (\\ref{grmat}) can be written as\n\\begin{eqnarray}\\label{grmat1}\n F[\\mathbb{K}(\\widehat{p})]=(D')^{-1}F[g_1],\\, G[\\mathbb{K}(\\widehat{q})]=(D')^{-1}G[g_2],\\, H[\\mathbb{K}(\\widehat{r})]=(D')^{-1}H[g_3].\n\\end{eqnarray}\nNow, to solve the system (\\ref{grmat1}), we compute the inverse matrix $(D')^{-1}$. If $m$ is even, then the inverse matrix is given as\n\\begin{equation*}\\label{grm4}\n({D'})^{-1} = {2h}\n\\begin{bmatrix}\n1 & 0 &0&0&0&0&...&0\\\\\n0& 1 &0&0&0&0&...&0\\\\\n1 & 0 &1&0&0&0&...&0\\\\\n0 & 1 &0&1&0&0&...&0\\\\\n1 & 0&1&0&1&0&...&0\\\\\n .& &&&.&& &.\\\\[-.2cm]\n . & &&&&.& &.\\\\[-.2cm]\n . & &&&&&. &.\\\\\n 0 & ..&0&1&0&1&0 &1\\\\\n\\end{bmatrix},\n\\end{equation*}\nand when $m$ is odd, we obtain\n\\begin{equation*}\\label{grm5}\n({D'})^{-1} = {2h}\n\\begin{bmatrix}\n1 & 0 &0&0&0&0&...&0\\\\\n0& 1 &0&0&0&0&...&0\\\\\n1 & 0 &1&0&0&0&...&0\\\\\n0 & 1 &0&1&0&0&...&0\\\\\n .& &&&.&& &.\\\\[-.2cm]\n . & &&&&.& &.\\\\[-.2cm]\n . & &&&&&. 
&.\\\\\n 0 & ..&0&1&0&1&0 &1\\\\\n 1 & 0&1&0&...&1&0 &1\\\\\n\\end{bmatrix}.\n\\end{equation*}\nThus from the system (\\ref{grmat1}), we obtain \\begin{eqnarray}\\label{gr9}\n\\nonumber X_{i+1}&=&X_{i-1}+2hF_i,\\\\\n\\nonumber Y_{i+1}&=&Y_{i-1}+2hG_i,\\\\\n\\nonumber Z_{i+1}&=&Z_{i-1}+2hH_i,\\\\\n\\nonumber X_2&=&\\mathbb{K}(\\widehat{p_{2}}),\\,Y_2=\\mathbb{K}(\\widehat{q}_2),\\,\nZ_2=\\mathbb{K}(\\widehat{r}_2),\\\\\nX_1&=&\\mathbb{K}(\\widehat{p}_a),\\,Y_1=\\mathbb{K}(\\widehat{q}_a),Z_1=\\mathbb{K}(\\widehat{r}_a),\\,i=2,...,m-1.\n\\end{eqnarray}\nThe system (\\ref{gr9}) can be used for the computation of $X_3,...,X_m$, $Y_3,...,Y_m$ and $Z_3,...,Z_m$. However, it cannot be used directly, because the functions $\\mathbb{K}(\\widehat{g_1}(u,\\mathbb{K}(\\widehat{p}),\\mathbb{K}(\\widehat{q}),\\mathbb{K}(\\widehat{r}))),$ $\\mathbb{K}(\\widehat{g_2}(u,\\mathbb{K}(\\widehat{p}),\\mathbb{K}(\\widehat{q}),\\mathbb{K}(\\widehat{r})))$ and $\\mathbb{K}(\\widehat{g_3}(u,\\mathbb{K}(\\widehat{p}),\\mathbb{K}(\\widehat{q}),\\mathbb{K}(\\widehat{r})))$ involve the unknown functions $\\mathbb{K}(\\widehat{p}), \\mathbb{K}(\\widehat{q})$ and $\\mathbb{K}(\\widehat{r})$. 
Therefore, we can use the same technique as in \\cite{p} and substitute the functions by their $F$-transform components:\n\\begin{eqnarray}\n\\nonumber\\hat{F}_i[g_1]&=&\\dfrac{\\int_{a}^{b} \\mathbb{K}(\\widehat{g_1}(u,X_i,Y_i,Z_i))P_i(u)du}{\\int_{a}^{b}P_i(u)du },\\\\\n\\nonumber\\hat{G}_i[g_2]&=&\\dfrac{\\int_{a}^{b} \\mathbb{K}(\\widehat{g_2}(u,X_i,Y_i,Z_i))P_i(u)du}{\\int_{a}^{b}P_i(u)du },\\\\\n\\hat{H}_i[g_3]&=&\\dfrac{\\int_{a}^{b} \\mathbb{K}(\\widehat{g_3}(u,X_i,Y_i,Z_i))P_i(u)du}{\\int_{a}^{b}P_i(u)du },i=1,...,m-1.\n\\end{eqnarray}\nThus the system (\\ref{gr9}) can be written as\n\\begin{eqnarray}\\label{gr10}\n\\nonumber X_{i+1}&=&X_{i-1}+2h\\hat{F}_i[g_1],\\\\\n\\nonumber Y_{i+1}&=&Y_{i-1}+2h\\hat{G}_i[g_2],\\\\\n\\nonumber Z_{i+1}&=&Z_{i-1}+2h\\hat{H}_i[g_3],\\\\\n\\nonumber X_{2}&=&X_{1}+h\\hat{F}_1[g_1],\\\\\n\\nonumber Y_{2}&=&Y_{1}+h\\hat{G}_1[g_2],\\\\\n\\nonumber Z_{2}&=&Z_{1}+h\\hat{H}_1[g_3],\\,i=2,...,m-1,\\\\\nX_1&=&\\mathbb{K}(\\widehat{p}_a),\\,Y_1=\\mathbb{K}(\\widehat{q}_a),Z_1=\\mathbb{K}(\\widehat{r}_a).\n\\end{eqnarray}\nThe proposed method is based on the same assumptions as the well-known Euler mid-point method and has the same degree of accuracy. 
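The recurrences of this scheme are those of a leapfrog (explicit mid-point) method with an Euler starter step. A minimal scalar sketch, in which the $F$-transform components are approximated by nodal values of the right-hand side (an assumption of this sketch only, not of the method itself), can be tested on $x'=x$ with the known solution $e^u$:

```python
import math

def ft_euler_midpoint(f, x0, a, b, m):
    """Leapfrog recurrences X_{i+1} = X_{i-1} + 2h*F_i with the Euler
    starter X_2 = X_1 + h*F_1; F_i is approximated here by f(u_i, X_i)."""
    h = (b - a) / (m - 1)
    u = [a + i * h for i in range(m)]
    X = [0.0] * m
    X[0] = x0                            # X_1: initial value
    X[1] = X[0] + h * f(u[0], X[0])      # X_2: Euler starter step
    for i in range(1, m - 1):
        X[i + 1] = X[i - 1] + 2 * h * f(u[i], X[i])
    return u, X

# test problem with a known exact solution: x' = x, x(0) = 1, so x(1) = e
_, X = ft_euler_midpoint(lambda u, x: x, 1.0, 0.0, 1.0, 101)
```

With $h=0.01$ the second-order accuracy of the mid-point rule puts the endpoint value within about $10^{-4}$ of $e$; the same recurrences apply componentwise to the three equations of the model.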
The solution vector $(F_m[\\mathbb{K}(\\widehat{p})],G_m[\\mathbb{K}(\\widehat{q})],H_m[\\mathbb{K}(\\widehat{r})])$ approximates the (unique) solution $(\\mathbb{K}(\\widehat{p}),\\mathbb{K}(\\widehat{q}),\\mathbb{K}(\\widehat{r}))$ of the system (\\ref{gr2}) in the sense that $(\\mathbb{K}(\\widehat{p}_i),\\mathbb{K}(\\widehat{q}_i),\\mathbb{K}(\\widehat{r}_i))\\approx (X_i,Y_i,Z_i), i= 1,...,m$, where the nodes\n$u_1,...,u_m$ are determined by the fuzzy partition $P_1,...,P_m$.\\\\\\\\\nThe system (\\ref{gr10}) can be written in terms of the granular $F$-transform as\n\\begin{eqnarray}\\label{gr11}\n\\nonumber\\mathbb{K}(F^{gr}_{i+1}[\\widehat{p}])&=&\\mathbb{K}(F^{gr}_{i-1}[\\widehat{p}])+2h\\mathbb{K}(\\hat{F}^{gr}_i[\\widehat{g_1}]),\\\\\n\\nonumber\\mathbb{K}(G^{gr}_{i+1}[\\widehat{q}])&=&\\mathbb{K}(G^{gr}_{i-1}[\\widehat{q}])+2h\\mathbb{K}(\\hat{G}^{gr}_i[\\widehat{g_2}]),\\\\\n\\nonumber\\mathbb{K}(H^{gr}_{i+1}[\\widehat{r}])&=&\\mathbb{K}(H^{gr}_{i-1}[\\widehat{r}])+2h\\mathbb{K}(\\hat{H}^{gr}_i[\\widehat{g_3}]),\\\\\n\\nonumber\\mathbb{K}(F^{gr}_{2}[\\widehat{p}])&=&\\mathbb{K}(F^{gr}_{1}[\\widehat{p}])+h\\mathbb{K}(\\hat{F}^{gr}_{1}[{\\widehat{g_1}}]),\\\\\n\\nonumber\\mathbb{K}(G^{gr}_{2}[\\widehat{q}])&=&\\mathbb{K}(G^{gr}_{1}[\\widehat{q}])+h\\mathbb{K}(\\hat{G}^{gr}_{1}[{\\widehat{g_2}}]),\\\\\n\\nonumber\\mathbb{K}(H^{gr}_{2}[\\widehat{r}])&=&\\mathbb{K}(H^{gr}_{1}[\\widehat{r}])+h\\mathbb{K}(\\hat{H}^{gr}_{1}[{\\widehat{g_3}}]),\\\\\n\\mathbb{K}(F^{gr}_{1}[\\widehat{p}])&=&\\mathbb{K}(\\widehat{p}_a),\\mathbb{K}(G^{gr}_{1}[\\widehat{q}])=\\mathbb{K}(\\widehat{q}_a),\\mathbb{K}(H^{gr}_{1}[\\widehat{r}])=\\mathbb{K}(\\widehat{r}_a),\n\\end{eqnarray}\nwhere \n\\begin{eqnarray*}\n\\widehat{g_1}&=&\\widehat{g_1}(F^{gr}_i[\\widehat{p}],G^{gr}_i[\\widehat{q}],H^{gr}_i[\\widehat{r}]),\\\\ \\widehat{g_2}&=&\\widehat{g_2}(F^{gr}_i[\\widehat{p}],G^{gr}_i[\\widehat{q}],H^{gr}_i[\\widehat{r}]),\\\\ 
\\widehat{g_3}&=&\\widehat{g_3}(F^{gr}_i[\\widehat{p}],G^{gr}_i[\\widehat{q}],H^{gr}_i[\\widehat{r}]),\\,i=2,...,m-1. \n\\end{eqnarray*}\nFrom Remark \\ref{grr1}, the system (\\ref{gr11}) becomes\n\\begin{eqnarray}\\label{gr12}\n\\nonumber F^{gr}_{i+1}[\\widehat{p}]&=&F^{gr}_{i-1}[\\widehat{p}] +2h \\hat{F}^{gr}_i[\\widehat{g_1}],\\\\\n\\nonumber G^{gr}_{i+1}[\\widehat{q}]&=&G^{gr}_{i-1}[\\widehat{q}] +2h \\hat{G}^{gr}_i[\\widehat{g_2}],\\\\\n\\nonumber H^{gr}_{i+1}[\\widehat{r}]&=&H^{gr}_{i-1}[\\widehat{r}] +2h \\hat{H}^{gr}_i[\\widehat{g_3}],\\\\\n\\nonumber F^{gr}_{2}[\\widehat{p}]&=&F^{gr}_{1}[\\widehat{p}]+h\\hat{F}^{gr}_{1}[{\\widehat{g_1}}],\\\\\n\\nonumber G^{gr}_{2}[\\widehat{q}]&=&G^{gr}_{1}[\\widehat{q}]+h\\hat{G}^{gr}_{1}[{\\widehat{g_2}}],\\\\\n H^{gr}_{2}[\\widehat{r}]&=&H^{gr}_{1}[\\widehat{r}]+h\\hat{H}^{gr}_{1}[{\\widehat{g_3}}],\\,i=2,...,m-1\n\\end{eqnarray}\nwhere $F^{gr}_{1}[\\widehat{p}]=\\widehat{p}_a,G^{gr}_{1}[\\widehat{q}]=\\widehat{q}_a,H^{gr}_{1}[\\widehat{r}]=\\widehat{r}_a$. The solution vector $(F^{gr}[\\widehat{p}],G^{gr}[\\widehat{q}],H^{gr}[\\widehat{r}])$ approximates the (unique) solution $(\\widehat{p},\\widehat{q},\\widehat{r})$ of the system (\\ref{gr1}) in the sense that $(\\widehat{p}_i,\\widehat{q}_i,\\widehat{r}_i)\\approx (F^{gr}_i[\\widehat{p}],G^{gr}_i[\\widehat{q}],H^{gr}_i[\\widehat{r}]), i= 1,...,m$, where the nodes\n$u_1,...,u_m$ are determined by the fuzzy partition $P_1,...,P_m$. Applying the granular inverse $F$-transform to the extended vector $(F^{gr}[\\widehat{p}],G^{gr}[\\widehat{q}],H^{gr}[\\widehat{r}])$,\nwe obtain an approximate solution $(\\widehat{p},\\widehat{q},\\widehat{r})$ of the system (\\ref{gr1}) in the form of fuzzy functions:\n\\begin{eqnarray*}\n\\widehat{{p}}^{gr}_m(u)&=&\\sum_{i=1}^{m} F^{gr}_{i}[\\widehat{p}]P_i(u),\\\\\n\\widehat{{q}}^{gr}_m(u)&=&\\sum_{i=1}^{m} G^{gr}_{i}[\\widehat{q}]P_i(u),\\\\ \n\\widehat{{r}}^{gr}_m(u)&=&\\sum_{i=1}^{m} H^{gr}_{i}[\\widehat{r}]P_i(u),\n\\end{eqnarray*}\nwhere $P_i(u), i= 1,...
,m$ are given basic functions. \n\\begin{exa}\\label{grex5}\n\\end{exa} Let $\\widehat{a_1}=(0.01,0.02,0.03),\\widehat{a_2}=(2,4,6),\\widehat{a_3}=(0.1,0.2,0.3),\\widehat{a_4}=(3,4,5),\\widehat{a_5}=(1,2,3),\\widehat{p}_0=0.1,\\widehat{q}_0=0.2,\\widehat{r}_0=0.3$, where $\\widehat{a_1},\\widehat{a_2},\\widehat{a_3},\\widehat{a_4},\\widehat{a_5}$ are triangular fuzzy numbers, and assume that the fuzzy partition is triangular. Now, the horizontal membership functions of the given triangular fuzzy numbers are given by\n\\begin{eqnarray*}\n\\mathbb{K}(\\widehat{a_1})&=&a_1^{gr}(\\alpha,\\mu_{a_1})=0.01+0.01\\alpha+\\mu_{a_1}(0.02-0.02\\alpha),\\\\\n\\mathbb{K}(\\widehat{a_2})&=&a_2^{gr}(\\alpha,\\mu_{a_2})=2+2\\alpha+\\mu_{a_2}(4-4\\alpha),\\\\\n\\mathbb{K}(\\widehat{a_3})&=&a_3^{gr}(\\alpha,\\mu_{a_3})=0.1+0.1\\alpha+\\mu_{a_3}(0.2-0.2\\alpha),\\\\\n\\mathbb{K}(\\widehat{a_4})&=&a_4^{gr}(\\alpha,\\mu_{a_4})=3+\\alpha+\\mu_{a_4}(2-2\\alpha),\\\\\n\\mathbb{K}(\\widehat{a_5})&=&a_5^{gr}(\\alpha,\\mu_{a_5})=1+\\alpha+\\mu_{a_5}(2-2\\alpha),\\\\\n\\mathbb{K}(\\widehat{p}_0)&=&0.1,\\mathbb{K}(\\widehat{q}_0)=0.2,\\mathbb{K}(\\widehat{r}_0)=0.3.\n\\end{eqnarray*}\nFurther, we assume $\\mu_p,\\mu_q,\\mu_r,\\mu_{a_1},\\mu_{a_2},\\mu_{a_3},\\mu_{a_4},\\mu_{a_5}=\\mu\\in\\{0,0.4,0.6,1\\}$, and $\\alpha\\in\\{0,0.5,1\\}$. We compare the numerical solutions obtained by the FT-Euler mid-point method (based on the $F$-transform) and by the Euler method with the exact solution of the system (\\ref{gr2}) for different values of $\\alpha,\\mu$ and $h=0.01$. To measure the accuracy of the numerical solutions against the exact solution, the root mean square error (RMS error) is used, defined by the formula\n\\[\\mbox{RMS error}=\\sqrt{\\frac{1}{m}\\sum_{k=1}^{m}({\\Delta}y_k)^2},\\] where ${\\Delta}y_k=y_k^{exact}-y_k^{numerical}$ and $y_k^{exact},y_k^{numerical}$ are the exact and numerical solutions, respectively. 
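The horizontal membership functions and the RMS error above can be sketched as follows. The function names are our own; `horizontal` reproduces, for example, $\mathbb{K}(\widehat{a_2})=2+2\alpha+\mu(4-4\alpha)$ for the triangular fuzzy number $(2,4,6)$.

```python
import math

def horizontal(tri, alpha, mu):
    """Horizontal membership function of a triangular fuzzy number
    (a_l, a_c, a_r): sweep the alpha-cut [left, right] with mu in [0, 1]."""
    a_l, a_c, a_r = tri
    left = a_l + (a_c - a_l) * alpha       # left endpoint of the alpha-cut
    right = a_r - (a_r - a_c) * alpha      # right endpoint of the alpha-cut
    return left + mu * (right - left)

def rms_error(exact, numerical):
    """Root mean square error between two sample sequences."""
    return math.sqrt(sum((e - n) ** 2 for e, n in zip(exact, numerical))
                     / len(exact))

# at alpha = 1 the endpoints coincide, so mu no longer matters
assert horizontal((2, 4, 6), 1.0, 0.0) == horizontal((2, 4, 6), 1.0, 1.0) == 4.0
```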
Tables \\ref{grtab:5}, \\ref{grtab:6}, \\ref{grtab:7} and \\ref{grtab:8} show a comparison among the exact solution, the Euler method and the FT-Euler mid-point method for the system (\\ref{gr2}). The graphical representation shows that the exact solution and the FT-Euler mid-point method are much closer to each other than to the Euler method. Therefore, from the above comparison, we can say that the FT-Euler mid-point method is a trustworthy numerical method. Moreover, the figures show that the population density $\\mathbb{K}(\\widehat{p}(u))$ of the first prey decreases, whereas the population densities $\\mathbb{K}(\\widehat{q}(u)),\\mathbb{K}(\\widehat{r}(u))$ of the second prey and the predator, respectively, increase for different values of $\\alpha,\\mu$.\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.5]{figure\/eumid1.eps} \\includegraphics[scale=0.5]{figure\/eumid2.eps}\n \\includegraphics[scale=0.5]{figure\/eumid3.eps}\n \\includegraphics[scale=0.5]{figure\/eumid4.eps}\n \\includegraphics[scale=0.5]{figure\/eumid5.eps}\n \\includegraphics[scale=0.5]{figure\/eumid6.eps}\n \\includegraphics[scale=0.5]{figure\/eumid7.eps}\n\\includegraphics[scale=0.5]{figure\/eumid8.eps}\n\\end{figure}\n\\begin{figure}\n \\centering\n\\includegraphics[scale=0.5]{figure\/eumid9.eps}\n \\caption{Variation of the preys and predator populations against time for the system (\\ref{gr2a}) for different $\\alpha,\\mu$. 
Blue, green and red curves show the population densities corresponding to the exact solution, the FT-Euler mid-point method and the Euler method, respectively.}\n \\label{grfig:1}\n\\end{figure}\n\\begin{table}[ht]\n\\centering\n \\begin{adjustbox}{max width=0.90\\linewidth}\n\\begin{tabular}{|c|c |ccc|ccc|ccc|}\n\\hline\n $\\mu$ &$u_i$ &\\multicolumn{3}{|c|}{Exact} &\\multicolumn{3}{|c|}{ Euler}\n &\\multicolumn{3}{|c|}{FT-Euler mid-point}\n\\\\ \n & &$\\mathbb{K}(\\widehat{p}(u))$&$\\mathbb{K}(\\widehat{q}(u))$&$\\mathbb{K}(\\widehat{r}(u))$ &$\\mathbb{K}(\\widehat{p}(u))$& $\\mathbb{K}(\\widehat{q}(u))$ & $\\mathbb{K}(\\widehat{r}(u))$ & $\\mathbb{K}(\\widehat{p}(u))$ & $\\mathbb{K}(\\widehat{q}(u))$ & $\\mathbb{K}(\\widehat{r}(u))$ \\\\\n\\hline\n & 0&0.1000&0.2000&0.3000&0.1000&0.2000&0.3000 & 0.1000&0.2000&0.3000 \\\\\n & 0.2&0.0954&\t0.2573&\t0.3309 &0.0954&\t0.2571&\t0.3307& 0.0954&\t0.2573&\t0.3309\\\\\n{0} & 0.4& 0.0910&\t0.3210&\t0.3681&0.0910&\t0.3207&\t0.3677 & 0.0910&\t0.3210&\t0.3681 \\\\\n &0.6&0.0867&\t0.3871&0.4135 &0.0867&0.3868&0.4128 &0.0867&\t0.3872&0.4135 \\\\\n &0.8 &0.0825&\t0.4505&\t0.4689 &0.0825&0.4504&\t0.4679 & 0.0825&\t0.4506&\t0.4690\\\\\n & 1&0.0785&0.5060&0.5362&0.0785&\t0.5062&\t0.5347&0.0784&0.5060&0.5363\\\\\n \\hline\n & 0&0.1000&0.2000&0.3000&0.1000&\t0.2000&\t0.3000 & 0.1000&0.2000&0.3000 \\\\\n & 0.2&0.0956&0.3218&0.3505 &0.0956&0.3209&0.3499& 0.0956&0.3218&0.3505 \\\\\n{0.4} & 0.4& 0.0916\t&0.4646&0.4275&0.0916\t&0.4635&0.4257 &0.0916\t&0.4646&0.4275 \\\\\n & 0.6&0.0878&0.5958&0.5449&0.0878&0.5958&0.5412&0.0878&0.5958&0.5449 \\\\\n & 0.8 &0.0843&\t0.6861&\t0.7174& 0.0844&\t0.6876&\t0.7111&0.0843&\t0.6861&\t0.7174\\\\\n & 1&0.0806 &0.7258&0.9572 &0.0807\t&0.7285&0.9478 & 0.0806 &0.7258&0.9572 \\\\\n \\hline\n & 0&0.1000&\t0.2000&\t0.3000 &0.1000&\t0.2000&\t0.3000 & 0.1000&0.2000&0.3000 \\\\\n & 0.2&0.0957&\t0.3569\t&0.3636 & 0.0957&\t0.3555\t&0.3612& 0.0957&\t0.3570\t&0.3622\\\\\n{0.6} & 
0.4&0.0919&0.5383&\t0.4723&\t0.0919&0.5373&\t0.4656&\t0.0919&0.5385&\t0.4687\\\\\n &0.6 &0.0884&\t0.6823&\t0.6541 &0.0884&\t0.6823&\t0.6404 & 0.0885&\t0.6830&\t0.6472\\\\\n &0.8&0.0850&\t0.7534&0.9379 &0.0852&\t0.7577&0.9138 & 0.0851&\t0.7548&0.9256\\\\\n & 1&0.0809&\t0.7565&\t1.3412 & 0.0813&\t0.7630&\t1.3042 & 0.0811&\t0.7590&\t1.3214\\\\\n \\hline\n & 0&0.1000&\t0.2000&\t0.3000 &0.1000&\t0.2000&\t0.3000& 0.1000&0.2000&0.3000 \\\\\n & 0.2&0.0960&0.4313&0.3901 &0.0960&0.4289&0.3879 & 0.0960&0.4313&0.3901\\\\\n{1} & 0.4& 0.0926&\t0.6726&\t0.5823 &0.0926&\t0.6726&\t0.5744& 0.0926&\t0.6727&\t0.5822 \\\\\n & 0.6&0.0897&\t0.7961&\t0.9541&0.0898&\t0.7997&\t0.9361 & 0.0897&\t0.7963&\t0.9538 \\\\\n & 0.8&0.0860&\t0.8010&\t1.5690 &0.0862&\t0.8062&\t1.5379&0.0860&\t0.8012&\t1.5685\\\\\n & 1&0.0789&\t0.7335&\t2.4055 &0.0796&\t0.7395&\t2.3667& 0.0789&\t0.7342&\t2.4046 \\\\\n\\hline\n\\end{tabular}\n\\end{adjustbox}\n\\caption{Comparison of numerical results of Example \\ref{grex5} for $\\alpha=0$}\n\\label{grtab:5}\n\\end{table}\n\\begin{table}[ht]\n\\centering\n \\begin{adjustbox}{max width=0.90\\linewidth}\n\\begin{tabular}{|c|c |ccc|ccc| ccc|}\n\\hline\n $\\mu$ &$u_i$ &\\multicolumn{3}{|c|}{Exact} &\\multicolumn{3}{|c|}{ Euler}\n &\\multicolumn{3}{|c|}{FT-Euler mid-point}\n\\\\ \n & &$\\mathbb{K}(\\widehat{p}(u))$&$\\mathbb{K}(\\widehat{q}(u))$&$\\mathbb{K}(\\widehat{r}(u))$ &$\\mathbb{K}(\\widehat{p}(u))$& $\\mathbb{K}(\\widehat{q}(u))$ & $\\mathbb{K}(\\widehat{r}(u))$ & $\\mathbb{K}(\\widehat{p}(u))$ & $\\mathbb{K}(\\widehat{q}(u))$ & $\\mathbb{K}(\\widehat{r}(u))$ \\\\\n\\hline\n & 0&0.1000&0.2000&0.3000&0.1000&\t0.2000&\t0.3000 & 0.1000&0.2000&0.3000 \\\\\n & 0.2&0.0956&\t0.2967&\t0.3426&0.0956&\t0.2961&\t0.3422 & 0.0956&\t0.2967&\t0.3426 \\\\\n{0} & 0.4& 0.0913&\t0.4092&\t0.4020&0.0913&\t0.4083&\t0.4009&0.0913&\t0.4092&\t0.4020\\\\\n & 0.6&0.0874&\t0.5202&\t0.4857&0.0874&\t0.5198&\t0.4835 & 0.0874&\t0.5203&\t0.4857 \\\\\n & 
0.8&0.0836&\t0.6107&\t0.6015&0.0836&\t0.6113&\t0.5979 & 0.0836&\t0.6108&\t0.6015 \\\\\n & 1&0.0799&\t0.6685&\t0.7569&\t0.0800&\t0.6702&\t0.7515\t&0.0799&\t0.6685&\t0.7569 \\\\\n \\hline\n & 0&0.1000&0.2000&0.3000&0.1000&\t0.2000&\t0.3000&0.1000&0.2000&0.3000 \\\\\n & 0.2&0.0957&\t0.3305&0.3533 &0.0957&0.3294&\t0.3526 & 0.0957&\t0.3304&0.3533\\\\\n{0.4} & 0.4& 0.0916&\t0.4832&\t0.4370&0.0916&\t0.4821&\t0.4349& 0.0916&\t0.4832&\t0.4370 \\\\\n & 0.6&0.0880\t&0.6193&\t0.5678 &0.0880&\t0.6194&\t0.5634&0.0880\t&0.6193&\t0.5677\\\\\n & 0.8&0.0845&\t0.7067&\t0.7633&0.0846&\t0.7086&\t0.7559& 0.0845&\t0.7068&\t0.7632\\\\\n & 1&0.0808&\t0.7382&\t1.0374 &0.0809&\t0.7414&\t1.02627& 0.0808&\t0.7384&\t1.0372 \\\\\n \\hline\n & 0&0.1000&\t0.2000&\t0.3000 &0.1000&\t0.2000&\t0.3000 & 0.1000&0.2000&0.3000 \\\\\n & 0.2&0.0957&\t0.3480&\t0.3592&0.0957&\t0.3467&\t0.3583& 0.0957&\t0.3480&\t0.3591\\\\\n{0.6} &0.4&0.0918&\t0.5202&\t0.4576&\t0.0918&\t0.5190&\t0.4549& 0.0918&\t0.5202&\t0.4576 \\\\\n & 0.6&0.0883&\t0.6629&\t0.6189&0.0883&\t0.6636&\t0.6130 & 0.0883&\t0.6630&\t0.6188\\\\\n & 0.8&0.0850&\t0.7409&\t0.8672&0.0850&\t0.7435&\t0.8570 & 0.0850&\t0.7410&\t0.8672\\\\\n & 1&0.0810&\t0.7545&\t1.2193&0.0812&\t0.7583&\t1.2042 & 0.0810&\t0.7546&\t1.2192\\\\\n \\hline\n & 0&0.1000&\t0.2000&\t0.3000 &0.1000&\t0.2000&\t0.3000 & 0.1000&0.2000&0.3000 \\\\\n & 0.2&0.0958&\t0.3843&\t0.3719&0.0958&\t0.3825&\t0.3706& 0.0958&\t0.3843&\t0.3719 \\\\\n{1} & 0.4&0.0922&\t0.5918&\t0.5061 &0.0921\t&0.5908&\t0.5015& 0.0922&\t0.5918&\t0.5060 \\\\\n & 0.6&0.0890&\t0.7351&\t0.7447&0.0890&\t0.7371&\t0.7346& 0.0890&\t0.7352&\t0.7446 \\\\\n & 0.8&0.0856&\t0.7841&\t1.1288 &0.0858&0.7880&\t1.1111 & 0.0856&\t0.7842&\t1.1286 \\\\\n & 1&0.0808&\t0.7598&\t1.6740 &0.0812&\t0.7647&\t1.6489& 0.0808&\t0.7600&\t1.6738 \\\\\n\\hline\n\\end{tabular}\n\\end{adjustbox}\n\\caption{Comparison of numerical results of Example \\ref{grex5} for $\\alpha=0.5$} 
\n\\label{grtab:6}\n\\end{table}\n\\begin{landscape}\n\n\\begin{table}[ht]\n\n\\centering\n \\begin{adjustbox}{max width=0.65\\linewidth}\n \\begin{tabular}{|c |rrr|rrr| rrr|}\n\\hline\n $u_i$ &\\multicolumn{3}{|c|}{Exact} &\\multicolumn{3}{|c|}{ Euler}\n &\\multicolumn{3}{|c|}{FT-Euler mid-point}\n\\\\ \n&$\\mathbb{K}(\\widehat{p}(u))$&$\\mathbb{K}(\\widehat{q}(u))$&$\\mathbb{K}(\\widehat{r}(u))$ &$\\mathbb{K}(\\widehat{p}(u))$& $\\mathbb{K}(\\widehat{q}(u))$ & $\\mathbb{K}(\\widehat{r}(u))$ & $\\mathbb{K}(\\widehat{p}(u))$ & $\\mathbb{K}(\\widehat{q}(u))$ & $\\mathbb{K}(\\widehat{r}(u))$ \\\\\n\\hline\n 0&0.1000&0.2000&0.3000&0.1000&0.2000&0.3000 & 0.1000&0.2000&0.3000 \\\\\n 0.2&0.0957&0.3392&\t0.3562 &0.0957&\t0.3380&\t0.3554& 0.0957&\t0.3380&\t0.3554 \\\\\n 0.4& 0.0917&\t0.5018&\t0.4470&0.0917&\t0.5006&\t0.4446& 0.0917&\t0.5018&\t0.4470\\\\\n 0.6&0.0882&\t0.6417&\t0.5924 &0.0882&\t0.6421&\t0.5873 & 0.0882&\t0.6418&\t0.5924\\\\\n 0.8&0.0848&\t0.7250&\t0.8132&0.0848&0.7272&\t0.8044 & 0.0848&\t0.7250&\t0.8132\\\\\n 1&0.0809&\t0.7477&\t1.1247&0.0811&\t0.7512&\t1.1117 & 0.0809&\t0.7478&\t1.1246\\\\\n \n\\hline\n\\end{tabular}\n\\end{adjustbox}\n\\caption{Comparison of numerical results of Example \\ref{grex5} for $\\alpha=1,\\mu=0,0.4,0.6,1$}\n\\label{grtab:7}\n\\end{table}\n\\begin{table}[ht]\n\\centering\n \\begin{adjustbox}{max width=1.0\\linewidth}\n\\begin{tabular}{|c|ccc|ccc|ccc|ccc|ccc|ccc|}\n\\hline\n $\\mu$&\\multicolumn{9}{|c|}{ Euler}\n &\\multicolumn{9}{|c|}{FT-Euler Mid-point}\n\\\\\n& & $\\alpha=0$& && $\\alpha=0.5$ & & & $\\alpha=1$ & &&$\\alpha=0$& && $\\alpha=0.5$ & & & $\\alpha=1$ & \\\\\n\\hline\n& &\t& & & & & & &\t& & & & & & & & & \\\\\n{0} & 4.0000e-3&\t2.1213e-4&\t8.1035e-4&0.0000&5.7773e-5&5.7773e-5&4.0000e-3&8.7369e-4&\t2.8381e-3& 0.0000& 5.7732e-5& 0.0000&8.1650e-5 &1.8317e-3 &6.7928e-3 & 0.0000&4.9497e-4 & 3.2914e-4\\\\\n & &\t& & & & & & &\t& & & & & & & & & \\\\\n \\hline\n \n & &\t& & & & & & &\t& & & & & & & & & \\\\\n {0.4} & 
5.7732e-5 & 1.3880e-3&4.7097e-3 & 0.0000& 0.0000&0.0000 &5.7732e-05&\t1.6467e-3&5.8375e-3&0.0000&\t1.0000e-4&\t1.0000e-4& 8.1650e-5 &1.8317e-3 &6.7928e-3 & 0.0000&4.9497e-4 & 3.2914e-4 \\\\\n & &\t& & & & & & &\t& & & & & & & & & \\\\\n \\hline\n \n & &\t& & & & & & &\t& & & & & & & & & \\\\\n{0.6} &1.8257e-4&\t3.2583e-3&1.6367e-2&1.0000e-4&\t1.2076e-3&\t3.1054e-3 & 8.1652e-5\t&2.0339e-3 &7.9053e-3\t& 0.0000&1.0000e-4\t&1.2910e-4\t & 8.1650e-5 &1.8317e-3 &6.7928e-3 & 0.0000&4.9497e-4 & 3.2914e-4 \\\\\n& &\t& & & & & & &\t& & & & & & & & & \\\\\n \\hline\n \n & &\t& & & & & & &\t& & & & & & & & & \\\\\n{1} & 3.0000e-4&3.6914e-03&\t2.1848e-2&\t0.0000 &3.2145e-4&\t4.3970e-4&1.8708e-4\t & 2.7404e-3&1.3332e-2\t& 0.0000&1.2247e-4 &1.2247e-4\t&\t8.1650e-5 &1.8317e-3 &6.7928e-3 & 0.0000&4.9497e-4 & 3.2914e-4 \\\\\n& &\t& & & & & & &\t& & & & & & & & & \\\\\n\\hline\n\\end{tabular}\n\\end{adjustbox}\n\\caption{RMS error for Example \\ref{grex5} for different values of $\\alpha,\\mu$} \n\\label{grtab:8}\n\\end{table}\n\\end{landscape}\n\\section{Conclusion}\nIn this contribution, we have introduced and studied the concept of the granular $F$-transform to enrich the theory of $F$-transforms and to explore new applications. Accordingly, we have developed this theory, formulated a fuzzy prey-predator model consisting of a team of two preys and one predator, and discussed the equilibrium points and their stability for this model. Moreover, we have established a numerical method based on the granular $F$-transform to find the numerical solution of the proposed model. Finally, a comparison of the two numerical solutions with the exact solution has been discussed. In the future, it will be interesting to use the proposed numerical method based on granular $F$-transforms to solve fuzzy fractional differential equations and to analyze error estimates. \n