\\section{Introduction}\n\nSets are widely used in a variety of control theory applications, including reachability analysis for system verification \\cite{Asarin2006,Girard2005,Girard2006,Kurzhanskiy2007}, robust Model Predictive Control (MPC) \\cite{Mayne2005,Langson2004,Bravo2006}, and state estimation \\cite{Chisci1996,Alamo2005,Le2013}. However, the sets used in control theory are not always practical to compute in application. For example, the minimal Robust Positively Invariant (mRPI) set \\cite{Rakovic2005} is widely used in robust MPC \\cite{Mayne2014,Richards2006,Limon2010}. However, in general, mRPI sets are not finitely represented and must be approximated. Furthermore, existing techniques for determining finite approximations of the mRPI set do not scale well with the dimension of the state space. Such scalability issues are found in many set computations \\cite{Tiwary2008}, motivating the need for alternative set representations and efficient approximation algorithms. \n\nWhen computing a set, there is often a trade-off between accuracy, complexity, and computation time. The desired balance of these three aspects varies depending on whether the set computations are performed off-line prior to controller execution or on-line in real-time. Certain applications permit iterative set computation methods while others require one-step methods that allow set computations to be embedded within an existing optimization problem \\cite{Trodden2016_OneStep}.\n\nAdditionally, this trade-off is highly dependent on the specific representation of the set. 
Widely used set representations include the halfspace representation (H-Rep) based on the intersection of a finite number of halfspace inequalities and the vertex representation (V-Rep) based on the convex hull of a finite number of vertices. As an alternative, zonotopes (G-Rep) \\cite{McMullen1971} and, more recently, constrained zonotopes (CG-Rep) \\cite{Scott2016} have enabled significant reductions in the cost and complexity associated with commonly used set computations in dynamic systems and control.\n\nA \\emph{zonotope} is the Minkowski sum of a finite set of line segments or, equivalently, the image of a hypercube under an affine transformation \\cite{Fukuda2004,Maler2008}. Due to their computational efficiency, zonotopes have been widely used in reach set calculations for hybrid system verification, estimation, and MPC \\cite{Maler2008,Althoff2010,Scott2016,Bravo2006}. As with the iterative algorithm in \\cite{Scibilia2011}, computing these reach sets utilizes linear transformation and Minkowski sum operations. Zonotopes are closed under these operations (i.e. the Minkowski sum of two zonotopes is a zonotope) and the number of generators grows linearly with the number of Minkowski sum operations, compared to the potential exponential growth of the number of halfspaces in H-Rep. Unfortunately, zonotopes in general are not closed under intersection and the conversion from G-Rep to H-Rep for intersection operations is inefficient. \n\n\\emph{Constrained zonotopes} were developed in \\cite{Scott2016} to overcome the limitations caused by the inherent symmetry of zonotopes. Constrained zonotopes are closed under linear transformation, Minkowski sum, and generalized intersection and can be used to represent any convex polytope. Constrained zonotopes provide the computational advantages of zonotopes while enabling exact computations of a much wider class of sets. 
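As a minimal numeric illustration of G-Rep (with hypothetical generator data, not tied to any example in this paper), the sketch below enumerates the images of the hypercube's vertices under $ \mathbf{x} = \mathbf{G}\boldsymbol{\xi} + \mathbf{c} $ and checks the central symmetry that constrained zonotopes are designed to remove:

```python
# Minimal sketch (hypothetical data): a zonotope in G-Rep is the image of
# the hypercube [-1,1]^{n_g} under x = G*xi + c, so candidate vertices are
# the images of the hypercube's vertices, and the set is centrally
# symmetric about the center c.
from itertools import product

G = [[1, 1], [0, 2]]   # generator matrix (columns are the generators)
c = [0, 0]             # center

points = {tuple(c[i] + sum(G[i][j] * xi[j] for j in range(2))
                for i in range(2))
          for xi in product([-1, 1], repeat=2)}

# central symmetry: the reflection of every candidate vertex through c
# is also a candidate vertex
assert all(tuple(2 * c[i] - v[i] for i in range(2)) in points
           for v in points)
print(sorted(points))  # [(-2, -2), (0, -2), (0, 2), (2, 2)]
```

Breaking this symmetry is exactly what the equality constraints of CG-Rep provide.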
In \\cite{Koeln2019ACC}, reach set computations using constrained zonotopes were shown to be several orders-of-magnitude faster than the same set computations using H-Rep, enabling the on-line computation of these reach sets for use in a hierarchical MPC formulation.\n\nWhile zonotopes and constrained zonotopes provide a significant computational advantage, various set operations can increase the complexity of the resultant sets beyond a desired upper limit. Thus, there is a need for techniques that provide reduced-complexity approximations of the desired set. Currently there exist reduced-order outer-approximation techniques for zonotopes \\cite{Kopetzki2017a,Yang2018} and constrained zonotopes \\cite{Scott2016}. Outer-approximations are widely used in the field of reachability analysis for system verification to determine if a system will always operate in a desired region of the state space \\cite{Girard2005,Girard2006}. \n\nHowever, in many applications there is a need for computing reduced-order inner-approximations. In general computing inner-approximations of sets is considered a more difficult problem \\cite{Kurzhanski2000}. Inner-approximations are particularly important when computing backward reachable sets that define a set of initial states for which a system will enter a specified target region after some allotted time \\cite{Xue2017}. While there are existing techniques for zonotopes \\cite{Girard2006,Han2016a}, inner-approximation techniques for constrained zonotopes are lacking.\n\nThe goal of this paper is to further increase the practicality of applying set-based control techniques through the use of zonotopes and constrained zonotopes. Specifically, this paper provides improved methods for i) representing set intersections with halfspaces, ii) removing redundancy from set representations, and iii) computing reduced-order inner-approximations, convex hulls, RPI sets, and Pontryagin differences. 
Approaches for both zonotopes and constrained zonotopes are provided along with numerical examples that demonstrate the features and applicability of each approach.\\footnote{The source code for all of the constrained zonotope operations and numerical examples is provided at https:\/\/github.com\/ESCL-at-UTD\/ConZono.}\n\nThe remainder of the paper is organized as follows. Section~\\ref{Notation} provides some initial notation and preliminary background on set operations, zonotopes, and constrained zonotopes. Methods for checking and computing halfspace intersections for zonotopes and constrained zonotopes are presented in Section~\\ref{halfspaceIntersections}. Section~\\ref{Sec_Redundancy} addresses the issue of redundancy in set representations along with methods for redundancy removal. Techniques for computing reduced-complexity inner-approximations of zonotopes and constrained zonotopes are provided in Section~\\ref{Sec_InnerApprox}. Zonotope and constrained-zonotope based methods for computing the convex hull of two sets, the outer-approximation of the mRPI set, and the Pontryagin difference of two sets are presented in Sections \\ref{Sec_ConvexHull}, \\ref{Sec_RPI}, and \\ref{Sec_Pontryagin}, respectively. Section~\\ref{Sec_Hier} provides a practical application of these techniques for computing and approximating a backward reachable set within the context of hierarchical control. 
Finally, Section~\\ref{Conclusions} summarizes the conclusions of the paper.\n\\section{Notation and Preliminaries} \\label{Notation}\n\nFor sets $ Z, W \\subset \\mathbb{R}^n $, $ Y \\subset \\mathbb{R}^m $, and matrix $ \\mathbf{R} \\in \\mathbb{R}^{m \\times n} $, the linear transformation of $ Z $ under $ \\mathbf{R} $ is $ \\mathbf{R} Z = \\left\\{\\mathbf{R} \\mathbf{z} \\mid \\mathbf{z} \\in Z \\right\\} $, the Minkowski sum of $ Z $ and $ W $ is $ Z \\oplus W = \\left\\{\\mathbf{z}+\\mathbf{w} \\mid \\mathbf{z} \\in Z, \\mathbf{w} \\in W \\right\\} $, and the generalized intersection of $ Z $ and $ Y $ under $ \\mathbf{R} $ is $ Z \\cap_{\\mathbf{R}} Y = \\left\\{ \\mathbf{z} \\in Z \\mid \\mathbf{R}\\mathbf{z} \\in Y \\right\\} $. The standard intersection, corresponding to the identity matrix $ \\mathbf{R} = \\mathbf{I}_n $, is simply denoted as $ Z \\cap Y$.\n\nThe convex polytope $ H \\subset \\mathbb{R}^n $ in H-Rep is defined as $ H = \\{\\mathbf{x} \\in \\mathbb{R}^n \\mid \\mathbf{H} \\mathbf{x} \\leq \\mathbf{f} \\} $ where $ \\mathbf{H} \\in \\mathbb{R}^{n_h \\times n} $, $ \\mathbf{f} \\in \\mathbb{R}^{n_h} $, and $ n_h $ is the number of halfspaces. A centrally symmetric set $ Z \\subset \\mathbb{R}^n $ can be represented as a zonotope in G-Rep where $ Z = \\left\\{ \\mathbf{G} \\boldsymbol{\\xi} + \\mathbf{c} \\mid \\lVert \\boldsymbol{\\xi} \\rVert_\\infty \\leq 1 \\right\\} $. The vector $ \\mathbf{c} \\in \\mathbb{R}^n $ is the center and the $ n_g $ generators, denoted $ \\mathbf{g}_i $, form the columns of the generator matrix $ \\mathbf{G} \\in \\mathbb{R}^{n \\times n_g } $. Similarly, a constrained zonotope $ Z_c \\subset \\mathbb{R}^n $ is defined in CG-Rep as $ Z_c = \\left\\{ \\mathbf{G}\\boldsymbol{\\xi} + \\mathbf{c} \\mid \\lVert \\boldsymbol{\\xi} \\rVert_\\infty \\leq 1, \\mathbf{A} \\boldsymbol{\\xi} = \\mathbf{b} \\right\\} $. 
With $ \\mathbf{A} \\in \\mathbb{R}^{n_c \\times n_g} $ and $ \\mathbf{b} \\in \\mathbb{R}^{n_c} $, constrained zonotopes include $ n_c $ equality constraints that break the symmetry of zonotopes and allow any convex polytope to be written in CG-Rep. The complexity of a zonotope is captured by its order, $ o = \\frac{n_g}{n} $ while the complexity of a constrained zonotope is captured by the degrees-of-freedom order, $o_d = \\frac{n_g-n_c}{n} $. Zonotopes and constrained zonotopes are denoted as $ Z = \\left\\{\\mathbf{G},\\mathbf{c}\\right\\} $ and $ Z_c = \\left\\{\\mathbf{G},\\mathbf{c},\\mathbf{A},\\mathbf{b}\\right\\} $, respectively. \n\nAs shown in \\cite{Scott2016}, constrained zonotopes are closed under linear transformation, Minkowski sum, and generalized intersection where\n\n\\begin{equation} \\label{affineMap}\n\\mathbf{R} Z = \\left\\{\\mathbf{R} \\mathbf{G}_z, \\mathbf{R} \\mathbf{c}_z, \\mathbf{A}_z, \\mathbf{b}_z\\right\\},\n\\end{equation}\n\\begin{equation} \\label{MinkowskiSum}\nZ \\oplus W = \\left\\{\\left[\\mathbf{G}_z \\; \\mathbf{G}_w\\right], \\mathbf{c}_z+\\mathbf{c}_w, \\begin{bmatrix} \\mathbf{A}_z & \\mathbf{0} \\\\ \\mathbf{0} & \\mathbf{A}_w \\end{bmatrix}, \\begin{bmatrix} \\mathbf{b}_z \\\\ \\textbf{b}_w \\end{bmatrix} \\right\\},\n\\end{equation}\n\\begin{equation} \\label{generalized_Intersection}\nZ \\cap_{\\mathbf{R}} Y = \\left\\{\\left[\\mathbf{G}_z \\; \\mathbf{0}\\right], \\mathbf{c}_z, \\begin{bmatrix} \\mathbf{A}_z & \\mathbf{0} \\\\ \\mathbf{0} & \\mathbf{A}_y \\\\ {\\scriptstyle \\mathbf{R} \\mathbf{G}_z} & {\\scriptstyle -\\mathbf{G}_y} \\end{bmatrix}, \\begin{bmatrix} \\mathbf{b}_z \\\\ \\mathbf{b}_y \\\\ {\\scriptstyle \\mathbf{c}_y - \\mathbf{R} \\mathbf{c}_z }\\end{bmatrix} \\right\\}.\n\\end{equation}\nAdditional notation is defined as follows. The set of non-negative real numbers is denoted as $ \\mathbb{R}_+ $. 
The matrix $ \\mathbf{T} \\in \\mathbb{R}^{n \\times m} $ with values $ t_{i,j} $ in the $i^{th}$ row and $ j^{th} $ column is denoted as $ \\mathbf{T} = [t_{i,j}] $. An $ n \\times m $ matrix of zeros is denoted as $ \\mathbf{0}_{n \\times m} $ or simply $ \\mathbf{0} $ if the dimension can be readily determined from context. Similarly, a vector of ones is denoted as $ \\mathbf{1} $. For a matrix $ \\mathbf{A} $, the null space is denoted $ \\mathcal{N}(\\mathbf{A}) $ and the pseudoinverse is denoted $ \\mathbf{A}^\\dagger $. Parallel vectors $ \\mathbf{v}_1 $ and $ \\mathbf{v}_2 $ are denoted as $ \\mathbf{v}_1 \\parallel \\mathbf{v}_2 $. The unit hypercube in $ \\mathbb{R}^n $ is defined as $ B_\\infty = \\left\\{ \\boldsymbol{\\xi} \\mid \\| \\boldsymbol{\\xi} \\|_\\infty \\leq 1 \\right\\}$ while $ B_\\infty(\\mathbf{A},\\mathbf{b}) = \\left\\{ \\boldsymbol{\\xi} \\in B_\\infty \\mid \\mathbf{A} \\boldsymbol{\\xi} = \\mathbf{b} \\right\\} $. With the volume of a set $ X $ denoted as $ V(X) $, the volume ratio for sets $ X, Y \\subset \\mathbb{R}^n $ is defined as $ V_r = \\left( \\frac{V(X)}{V(Y)} \\right)^{1\/n} $. All numerical examples were generated using MATLAB on a desktop computer with a 3.6 GHz i7 processor and 16 GB of RAM.\nAll optimization problems were formulated and solved with YALMIP \\cite{Lofberg2004} and Gurobi \\cite{Gurobi2019}.\n\n\\section{Halfspace Intersections} \\label{halfspaceIntersections}\n\nThis section presents methods for determining if a zonotope or constrained zonotope intersects a given halfspace along with the exact representation of this intersection in CG-Rep. The need for computing this intersection arises in reachability analysis \\cite{Althoff2012} and in MPC when determining the set of feasible initial conditions \\cite{Scibilia2011}. 
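Before turning to halfspace intersections, the closure results \eqref{affineMap}--\eqref{generalized_Intersection} can be sketched numerically. The following Python sketch manipulates $ \{\mathbf{G},\mathbf{c},\mathbf{A},\mathbf{b}\} $ tuples directly as nested lists; the unit-box data at the bottom is hypothetical and chosen only to make the block structure visible:

```python
# Sketch of the closure operations (1)-(3) on constrained zonotopes
# stored as (G, c, A, b) tuples of nested lists. A zonotope is the
# special case A = [], b = []. Hypothetical unit-box data below.
def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

def matvec(M, v):
    return [sum(r[k] * v[k] for k in range(len(v))) for r in M]

def linear_map(R, Z):                      # eq. (1): R*Z
    G, c, A, b = Z
    return matmul(R, G), matvec(R, c), A, b

def minkowski_sum(Z, W):                   # eq. (2): Z + W
    (Gz, cz, Az, bz), (Gw, cw, Aw, bw) = Z, W
    ngz, ngw = len(Gz[0]), len(Gw[0])
    G = [gz + gw for gz, gw in zip(Gz, Gw)]            # [Gz Gw]
    c = [x + y for x, y in zip(cz, cw)]
    A = [r + [0] * ngw for r in Az] + [[0] * ngz + r for r in Aw]
    return G, c, A, bz + bw                # blkdiag(Az, Aw), [bz; bw]

def gen_intersection(Z, Y, R):             # eq. (3): Z cap_R Y
    (Gz, cz, Az, bz), (Gy, cy, Ay, by) = Z, Y
    ngz, ngy = len(Gz[0]), len(Gy[0])
    G = [r + [0] * ngy for r in Gz]                    # [Gz 0]
    A = ([r + [0] * ngy for r in Az] + [[0] * ngz + r for r in Ay]
         + [rg + [-g for g in gy]                      # [R Gz  -Gy]
            for rg, gy in zip(matmul(R, Gz), Gy)])
    b = bz + by + [u - v for u, v in zip(cy, matvec(R, cz))]  # cy - R cz
    return G, cz, A, b

box = ([[1, 0], [0, 1]], [0, 0], [], [])   # unit box as a zonotope
S = minkowski_sum(box, box)                # generators merely concatenate
X = gen_intersection(box, box, [[1, 0], [0, 1]])
print(S[0])         # [[1, 0, 1, 0], [0, 1, 0, 1]]
print(X[2], X[3])   # [[1, 0, -1, 0], [0, 1, 0, -1]] [0, 0]
```

Note how the generator count of the Minkowski sum grows additively, as stated in the introduction, while the generalized intersection adds only constraints, not volume-enlarging generators.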
The use of CG-Rep enables exact representations unlike existing techniques that rely on zonotopic approximations of the intersection \\cite{Girard2008}.\n\n\\subsection{Zonotope-Halfspace Intersection}\nFor a zonotope in $ \\mathbb{R}^n $ with $ n_g $ generators, the intersection between a zonotope and a hyperplane can be tested algebraically with complexity $O(nn_g)$.\n\\begin{lem} (Section 5.1 of \\emph{\\cite{Girard2005}}) \\label{Zono_halfspace_int_check}\n\tThe zonotope $ Z = \\{\\mathbf{G},\\mathbf{c}\\} \\subset \\mathbb{R}^n $ intersects the hyperplane $ H = \\{\\mathbf{x} \\in \\mathbb{R}^n \\mid \\mathbf{h}^T \\mathbf{x} = f \\} $ if and only if \n\t\\begin{equation} \\label{hyperplane_int}\n\t| f - \\mathbf{h}^T \\mathbf{c} | \\leq \\sum_{i = 1}^{n_g} | \\mathbf{h}^T \\mathbf{g}_i |.\n\t\\end{equation} \n\\end{lem}\nIf a zonotope intersects a hyperplane, the intersection between the zonotope and the corresponding halfspace can be represented in CG-Rep by the addition of exactly one generator and one equality constraint.\n\n\\begin{thm} \\label{zono_halfspace_int}\n\tIf the zonotope $ Z = \\{\\mathbf{G},\\mathbf{c}\\} \\subset \\mathbb{R}^n $ intersects the hyperplane $ H = \\{\\mathbf{x} \\in \\mathbb{R}^n \\mid \\mathbf{h}^T \\mathbf{x} = f \\} $ corresponding to the halfspace $ H_- = \\{\\mathbf{x} \\in \\mathbb{R}^n \\mid \\mathbf{h}^T \\mathbf{x} \\leq f \\} $, then the intersection $ Z_h = Z \\cap H_- $ is a constrained zonotope where\n\t\\begin{equation}\\label{zono_halfspace_int_def}\n\tZ_h = \\{[\\mathbf{G} \\; \\mathbf{0}], \\mathbf{c}, \\begin{bmatrix} \\mathbf{h}^T\\mathbf{G} \\; \\frac{d_m}{2}\t\\end{bmatrix}, \\begin{matrix} f - \\mathbf{h}^T\\mathbf{c}-\\frac{d_m}{2}\\end{matrix}\\}, \n\t\\end{equation} \n\tand $ d_m = f - \\mathbf{h}^T\\mathbf{c} + \\sum_{i = 1}^{n_g} | \\mathbf{h}^T \\mathbf{g}_i | $.\n\\end{thm}\n\\begin{pf}\nConsidering any element $\\mathbf{x} \\in Z_h$, it is to be proven that $\\mathbf{x} \\in Z \\cap H_{-}$. 
From the definition of $Z_h$ in \\eqref{zono_halfspace_int_def}, $\\exists \\; \\boldsymbol{\\xi} \\in \\mathbb{R}^{n_g}$ and $\\xi_{n_g + 1} \\in \\mathbb{R}$ such that\n\\begin{equation*}\n \\mathbf{x} = \\mathbf{G}\\boldsymbol{\\xi} + \\mathbf{0}\\xi_{n_g + 1} + \\mathbf{c}, \\quad ||\\boldsymbol{\\xi}||_{\\infty} \\leq 1, \\quad |\\xi_{n_g + 1}| \\leq 1,\n\\end{equation*}\n\\begin{equation}\\label{conzono_const}\n \\mathbf{h}^T\\mathbf{G}\\boldsymbol{\\xi} + \\frac{d_m}{2}\\xi_{n_g+1} = f - \\mathbf{h}^T\\mathbf{c} - \\frac{d_m}{2}.\n\\end{equation}\n\n\nBy the assumption that $Z \\cap H \\neq \\emptyset$, the definition of $d_m$ and \\eqref{hyperplane_int} ensure $d_m \\geq 0$. If $d_m = 0$, then \\eqref{conzono_const} results in $\\mathbf{h}^T\\mathbf{G}\\boldsymbol{\\xi} = f- \\mathbf{h}^T\\mathbf{c}$, which can be rewritten as $\\mathbf{h}^T(\\mathbf{G}\\boldsymbol{\\xi} + \\mathbf{c}) = f$. Therefore, $\\mathbf{x} \\in Z_h \\subset Z$ and $\\mathbf{x} \\in H \\subset H_{-}$. If $d_m > 0$, \\eqref{conzono_const} can be solved for $\\xi_{n_g + 1}$ as\n\\begin{equation} \\label{eq_xi_ngplus1}\n \\xi_{n_g + 1} = \\frac{2}{d_m}(f - \\mathbf{h}^T\\mathbf{c}- \\frac{d_m}{2} - \\mathbf{h}^T\\mathbf{G}\\boldsymbol{\\xi}).\n\\end{equation}\nCombining \\eqref{eq_xi_ngplus1} and the inequality constraint $-1 \\leq \\xi_{n_g + 1}$ results in \\begin{subequations}\n \\begin{align*}\n -1 \\leq & \\; \\xi_{n_g + 1} = \\frac{2}{d_m}(f - \\mathbf{h}^T\\mathbf{c}- \\frac{d_m}{2} - \\mathbf{h}^T\\mathbf{G}\\boldsymbol{\\xi}), \\\\ \n -\\frac{d_m}{2} \\leq & \\; f - \\mathbf{h}^T\\mathbf{c} - \\frac{d_m}{2} - \\mathbf{h}^T\\mathbf{G}\\boldsymbol{\\xi}, \\\\ \n \\mathbf{h}^T(\\mathbf{c} + \\mathbf{G}\\boldsymbol{\\xi}) \\leq & \\; f. 
\n \\end{align*}\n\\end{subequations}\n\\setcounter{equation}{\\value{equation}-1}\nTherefore, $\\mathbf{x} \\in Z$ and $\\mathbf{x} \\in H_{-}$.\nNext, considering any $\\mathbf{x} \\in Z \\cap H_{-}$, it is to be proven that $\\mathbf{x} \\in Z_h$. For all $\\mathbf{x} \\in Z \\cap H_{-}$, $\\exists \\; \\boldsymbol{\\xi} \\in \\mathbb{R}^{n_g}$ such that \n\\begin{equation}\\label{eq_x_Z_intsct_H}\n\\mathbf{x} = \\mathbf{G}\\boldsymbol{\\xi} + \\mathbf{c}, \\; ||\\boldsymbol{\\xi}||_{\\infty} \\leq 1, \\; \\mathbf{h}^T\\mathbf{x} \\leq f\n\\end{equation}\nTo show that $\\mathbf{x} \\in Z_h$ requires proving the existence of $\\xi_{n_g + 1} \\in \\mathbb{R}$ such that\n\\begin{equation*}\n\\mathbf{x} = \\mathbf{G}\\boldsymbol{\\xi} + \\mathbf{0}\\xi_{n_g + 1} + \\mathbf{c}, \\quad |\\xi_{n_g + 1}| \\leq 1,\n\\end{equation*}\nand \\eqref{conzono_const} holds for all $\\mathbf{x}$ satisfying \\eqref{eq_x_Z_intsct_H}. If $d_m = 0$, then \\eqref{conzono_const} is independent of $\\xi_{n_g + 1}$ and holds $\\forall \\; \\mathbf{x} \\in Z \\cap H_{-}$. Thus, $\\xi_{n_g + 1}$ can be arbitrarily chosen such that $|\\xi_{n_g + 1}| \\leq 1$. 
If $d_m > 0$, let $\\xi_{n_g+1}$ be chosen as in \\eqref{eq_xi_ngplus1}, which satisfies \\eqref{conzono_const}.\nTo prove $|\\xi_{n_g + 1}| \\leq 1$, consider $\\mathbf{x}$ as in \\eqref{eq_x_Z_intsct_H}.\nSince $f - \\mathbf{h}^T\\mathbf{x} \\geq 0$, $\\xi_{n_g +1}$ satisfies\n\\begin{equation}\\label{xi_ngplus1_exp2}\n \\xi_{n_g + 1} = \\frac{2}{d_m}(f - \\mathbf{h}^T\\mathbf{x} - \\frac{d_m}{2}) \\geq -1.\n\\end{equation}\nFinally, using \\eqref{eq_xi_ngplus1}, the fact that $-\\mathbf{h}^T\\mathbf{G}\\boldsymbol{\\xi} \\leq \\sum \\limits_{i = 1}^{n_g} |\\mathbf{h}^T \\mathbf{g}_i |$, and the definition of $d_m$ results in\n\\begin{subequations}\n\\begin{align*}\n & \\xi_{n_g + 1} \\leq \\frac{2}{d_m}(f - \\mathbf{h}^T\\mathbf{c} + \\sum\\limits_{i = 1}^{n_g}|\\mathbf{h}^T \\mathbf{g}_i | - \\frac{d_m}{2}), \\nonumber \\\\ \n & \\xi_{n_g + 1} \\leq \\frac{2}{d_m}(d_m - \\frac{d_m}{2}) = 1. \\nonumber\n\\end{align*} \n\\end{subequations}\n\\setcounter{equation}{\\value{equation}-1}\nThus, $\\forall \\; \\mathbf{x} \\in Z \\cap H_{-}$, $\\mathbf{x} \\in Z_h$. \\hfill \\hfill \\qed\n \\end{pf}\n\n\\begin{exmp}\\label{exmp_1}\n\tThe left subplot in Fig. \\ref{Fig_Halfspace_Intersection} shows the zonotope $ Z $ and halfspace $ H_- $ where\n\t\\begin{equation*}\n\t\\setlength\\arraycolsep{2pt}\n\tZ = \\left\\{ \\begin{bmatrix} 1 & 1 \\\\ 0 & 2 \\end{bmatrix}, \\begin{bmatrix} 0 \\\\ 0 \\end{bmatrix} \\right\\}, \\; H_- = \\{\\mathbf{x} \\in \\mathbb{R}^2 \\mid \\left[ 3 \\enspace 1 \\right] \\mathbf{x} \\leq 3 \\}.\n\t\\end{equation*}\n\tFrom \\emph{\\textbf{Lemma \\ref{Zono_halfspace_int_check}}}, $ Z $ intersects the associated hyperplane $ H $ since \\eqref{hyperplane_int} evaluates to $ 3 \\leq 8 $. 
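This check, and the quantities appearing in \eqref{zono_halfspace_int_def}, can be reproduced numerically; a short pure-Python sketch using the data above:

```python
# Numeric check of Lemma 1 and the Theorem 1 construction for this
# example's data (h, f, G, c taken from the example above).
h, f = [3, 1], 3
G = [[1, 1], [0, 2]]   # columns are the generators g_1, g_2
c = [0, 0]

hTc = sum(hi * ci for hi, ci in zip(h, c))
hTG = [sum(h[i] * G[i][j] for i in range(2)) for j in range(2)]

# Lemma 1: |f - h'c| <= sum_i |h'g_i|
assert abs(f - hTc) <= sum(abs(v) for v in hTG)       # 3 <= 8

# Theorem 1: added constraint row [h'G, d_m/2] and offset f - h'c - d_m/2
d_m = f - hTc + sum(abs(v) for v in hTG)              # d_m = 11
print(hTG + [d_m / 2], f - hTc - d_m / 2)             # [3, 5, 5.5] -2.5
```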
From \\emph{\\textbf{Theorem \\ref{zono_halfspace_int}}}, the intersection $ Z \\cap H_- $ is a constrained zonotope and \\eqref{zono_halfspace_int_def} evaluates to\n\t\\begin{equation*}\n\tZ_h = \\left\\{\\begin{bmatrix} 1 & 1 & 0 \\\\ 0 & 2 & 0 \\end{bmatrix}, \\begin{bmatrix} 0 \\\\ 0 \\end{bmatrix}, \\left[ 3 \\enspace 5 \\enspace 5.5\t\\right], -2.5\\right\\}.\n\t\\end{equation*}\n\n\tThe left subplot in Fig. \\ref{Fig_Halfspace_Intersection} also shows the physical interpretation of $ d_m $ where $ d_m = d_1 + d_2 $. With $ d_1 = f - \\mathbf{h}^T\\mathbf{c} $, $ d_1 $ captures the orthogonal distance from the hyperplane $ H $ to the center, $ \\mathbf{c} $, of the zonotope. With $ d_2 = \\sum_{i = 1}^{n_g} | \\mathbf{h}^T \\mathbf{g}_i | $, $ d_2 $ captures the orthogonal distance from center of the zonotope to the point in $ Z $ farthest from $ H $.\n\\end{exmp}\n\n\\begin{figure}\n\t\\begin{center}\n\t\\includegraphics[width=8.6cm]{Halfspace_Int.pdf}\n\t\t\\vspace{-10pt}\n\t\t\\caption{Left: The intersection of the zonotope $ Z $ and the halfspace $ H_- $ corresponding to the hyperplane $ H $ results in the constrained zonotope $ Z_h $. The distances $ d_1 $ and $ d_2 $, measured orthogonally to $ H $, are shown to provide a geometric interpretation of the equality constraints in \\eqref{zono_halfspace_int_def}. 
Right: An example where the unconstrained zonotope $ Z = \\{\\mathbf{G},\\mathbf{c}\\} $ corresponding to the constrained zonotope $ Z_c = \\{\\mathbf{G},\\mathbf{c},\\mathbf{A},\\mathbf{b}\\} $ intersects the hyperplane $ H $ but $ Z_c $ does not.}\n\t\t\\label{Fig_Halfspace_Intersection} \n\t\\end{center} \n\\end{figure}\n\n\\subsection{Constrained Zonotope-Halfspace Intersection} \\label{Sec_conZonoHalfspace}\n\nFor the intersection $ Z_h = Z_c \\cap H_- $ of a constrained zonotope $ Z_c = \\{\\mathbf{G},\\mathbf{c},\\mathbf{A},\\mathbf{b}\\} $ and a halfspace $ H_- $, \\textbf{Theorem~\\ref{zono_halfspace_int}} is readily modified where \n\\begin{equation} \\label{conzono_halfspace_int}\n \\scriptsize{Z_h} = \\left\\{[\\mathbf{G} \\; \\mathbf{0}], \\mathbf{c}, \\begin{bmatrix} \\mathbf{A} & \\mathbf{0} \\\\ \\mathbf{h}^T\\mathbf{G} & \\frac{d_m}{2}\t\\end{bmatrix}, \\begin{bmatrix} \\mathbf{b} \\\\ f - \\mathbf{h}^T\\mathbf{c}-\\frac{d_m}{2}\\end{bmatrix}\\right\\}.\n\\end{equation} \n\nHowever, if the constrained zonotope is completely contained in the halfspace, $ Z_c \\subset H_- $, and does not intersect the corresponding hyperplane $ H $, then $ Z_h = Z_c $ and the addition of the $ (n_g + 1)^{th} $ generator and $ (n_c + 1)^{th} $ constraint is redundant and increases the order of $ Z_h $ unnecessarily.\n\nUnlike the zonotope case, when determining if a constrained zonotope $ Z_c $ intersects a hyperplane $ H $, the inequality \\eqref{hyperplane_int} is necessary but not sufficient. The equality constraints $ \\mathbf{A} \\boldsymbol{\\xi} = \\mathbf{b} $ impose restrictions such that $ Z_c \\subset Z = \\{\\mathbf{G},\\mathbf{c}\\} $. Thus, the parent zonotope $ Z $ may intersect $ H $ while $ Z_c $ does not (as shown in the right subplot of Fig.~\\ref{Fig_Halfspace_Intersection}). 
The intersection of a constrained zonotope with a hyperplane can be checked by solving two Linear Programs (LPs), each with $n_g$ decision variables.\n\t\n\\begin{lem}\\label{conzono_hyplane_intersection}\n\tThe constrained zonotope $ Z_c = \\{\\mathbf{G},\\mathbf{c},\\mathbf{A},\\mathbf{b}\\} \\subset \\mathbb{R}^n $ intersects the hyperplane $ H = \\{\\mathbf{x} \\in \\mathbb{R}^n \\mid \\mathbf{h}^T \\mathbf{x} = f \\} $ if $ f_{min} \\leq f \\leq f_{max} $, where\n\t\\begin{subequations}\n\t\t\\begin{align*}\n\t\tf_{min} \\triangleq \\text{\\emph{min}} \\{\\mathbf{h}^T (\\mathbf{c} + \\mathbf{G} \\boldsymbol{\\xi}) \\mid \\| \\boldsymbol{\\xi} \\|_\\infty \\leq 1, \\mathbf{A} \\boldsymbol{\\xi} = \\mathbf{b} \\}, \\\\\n\t\tf_{max} \\triangleq \\text{\\emph{max}} \\{\\mathbf{h}^T (\\mathbf{c} + \\mathbf{G} \\boldsymbol{\\xi}) \\mid \\| \\boldsymbol{\\xi} \\|_\\infty \\leq 1, \\mathbf{A} \\boldsymbol{\\xi} = \\mathbf{b} \\}.\n\t\t\\end{align*}\n\t\\end{subequations}\n\\end{lem}\n\\setcounter{equation}{\\value{equation}-1}\n\\begin{pf}\n\tFrom the definition of $ f_{min} $ and $ f_{max} $, if $ f_{min} \\leq f \\leq f_{max} $, then there exists $ \\mathbf{x}_{min}, \\mathbf{x}_{max} \\in Z_c $ such that $ \\mathbf{h}^T \\mathbf{x}_{min} \\leq f \\leq \\mathbf{h}^T \\mathbf{x}_{max} $.\n\tBy the convexity of constrained zonotopes \\cite{Scott2016}, there exists $ \\mathbf{x}_\\lambda \\in Z_c $ such that $ \\mathbf{x}_\\lambda = \\lambda \\mathbf{x}_{min} + (1-\\lambda) \\mathbf{x}_{max} $, $ \\lambda \\in \\left[0,1\\right] $. For the case where $f_{min} = f_{max} = f $, any choice of $\\lambda \\in [0,1]$ results in $\\mathbf{h}^T\\mathbf{x}_{\\lambda} = f$. Otherwise, if $f_{min} \\neq f_{max}$, choosing $ \\lambda = \\frac{f-f_{max}}{f_{min}-f_{max}} \\in \\left[0,1\\right] $ results in $ \\mathbf{h}^T \\mathbf{x}_\\lambda = f $. Thus $ \\mathbf{x}_\\lambda \\in H $ and $ \\mathbf{x}_\\lambda \\in Z_c $, proving $ Z_c \\cap H \\neq \\emptyset $. 
\\hfill \\hfill \\qed\n\\end{pf}\nNote that $f_{min}$ and $f_{max}$ obtained using \\textbf{Lemma \\ref{conzono_hyplane_intersection}} are the extreme values of $\\mathbf{h}^T\\mathbf{x}$ over $Z_c$ and thus quantify how far the set extends to either side of the hyperplane, providing additional insight into the location of the constrained zonotope with respect to the hyperplane.\n\n\n\\begin{rem}\\label{rem_conzono_hyp_intersection}\n\tWhile the knowledge of $f_{min}$ and $f_{max}$ can be useful, checking for the non-empty intersection of a constrained zonotope and a hyperplane can be achieved by assessing the feasibility of a single LP with constraints\n\t\\begin{equation*}\n\t \\mathbf{h}^{T}(\\mathbf{c} + \\mathbf{G}\\boldsymbol{\\xi}) = f, \\quad \\mathbf{A}\\boldsymbol{\\xi}= \\mathbf{b}, \\quad ||\\boldsymbol{\\xi}||_{\\infty} \\leq 1.\n\t\\end{equation*}\n\\end{rem}\nWhen solving these LPs is undesirable, an iterative method based on interval arithmetic from \\cite{Scott2016} provides an approach for checking constrained zonotope-halfspace intersection with complexity $O(n_cn_g^2)$. Reproduced from \\cite{Scott2016}, \\textbf{Algorithm \\ref{gen_Bounds}} computes the interval set $ E = [\\boldsymbol{\\xi}^L, \\boldsymbol{\\xi}^U] $ such that $ B_\\infty(\\mathbf{A},\\mathbf{b}) \\subset E \\subset [-\\mathbf{1},\\mathbf{1}] $ and $ R = [\\boldsymbol{\\rho}^L, \\boldsymbol{\\rho}^U] \\subset \\mathbb{R}^{n_g} $ where\n\\begin{equation*}\nR_j \\supset \\{\\xi_j \\mid \\mathbf{A} \\boldsymbol{\\xi} = \\mathbf{b}, |\\xi_i| \\leq 1, \\forall i \\neq j \\}, \\quad \\forall j \\in [1,n_g].\n\\end{equation*}\nAs discussed in \\cite{Scott2016}, this iterative method has the potential to detect empty constrained zonotopes without solving an LP. Specifically, if $ E \\cap R = \\emptyset $, then $ Z_c = \\emptyset $. 
Since $ E, R $ are intervals, $ E \\cap R = \\emptyset $ if $ \\xi_j^U < \\rho_j^L $ or $ \\xi_j^L > \\rho_j^U $ for any $ j \\in [1,n_g] $.\n\n\\IncMargin{1.5em}\n\\begin{algorithm2e}\n\t\\SetAlgoLined\n\t\\SetKwInOut{Input}{Input}\\SetKwInOut{Output}{Output}\n\t\\Input{$ Z_c = \\{\\mathbf{G},\\mathbf{c},\\mathbf{A},\\mathbf{b}\\} $}\n\t\\Output{$ E_j, R_j, \\forall j \\in [1,n_g] $}\n\t\\BlankLine\n\tInitialize $ E_j \\leftarrow [-1, 1], R_j \\leftarrow [-\\infty, \\infty], \\; i,j \\leftarrow 1 $\n\t\\While{$ i \\leq n_c $}{\n\t\t\\While{$ j \\leq n_g $}{\n\t\t\t\\If{$ a_{ij} \\neq 0 $}{\n\t\t\t\t$ R_j \\leftarrow R_j \\cap ( a_{ij}^{-1}b_i - \\sum_{k \\neq j} a_{ij}^{-1} a_{ik} E_{k} ) $\\;\n\t\t\t\t$ E_j \\leftarrow E_j \\cap R_j $\\;}\n\t\t\t$ j \\leftarrow j+1 $\\;}\n\t\t$ i \\leftarrow i + 1, j \\leftarrow 1 $\\;}\n\t\\caption{\\cite{Scott2016} Constrained zonotope intervals.}\n\t\\label{gen_Bounds}\n\\end{algorithm2e}\n\\DecMargin{1.5em}\nThe goal is to detect if $ Z_c \\subset H_- $, resulting in $ Z_h = Z_c $ and thus avoiding the unnecessary addition of generators and constraints from the application of \\eqref{conzono_halfspace_int}. The proposed approach uses the fact that $ Z_c \\subset H_- $ if and only if $ Z_c \\cap H_+ = \\emptyset $, where $ H_+ = \\{\\mathbf{x} \\in \\mathbb{R}^n \\mid \\mathbf{h}^T \\mathbf{x} \\geq f \\} $ is the complement of $ H_- $. By modifying \\eqref{conzono_halfspace_int} such that $ Z_{h^{+}} = Z_c \\cap H_+ $, \\textbf{Algorithm \\ref{gen_Bounds}} can then be applied to $ Z_{h^{+}} $ to check if $ Z_{h^{+}} = \\emptyset $. Specifically, if $ E \\cap R = \\emptyset $, then $ Z_{h^{+}} = \\emptyset $ and $ Z_c \\subset H_- $. Note that applying \\textbf{Algorithm \\ref{gen_Bounds}} does not guarantee the detection of $ Z_{h^{+}} = \\emptyset $.\nAs discussed in \\cite{Scott2016}, \\textbf{Algorithm~\\ref{gen_Bounds}} can be applied iteratively to refine the interval set $ E $. 
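A single pass of \textbf{Algorithm \ref{gen_Bounds}} can be sketched in a few lines of Python; the constraint data below is hypothetical and chosen so that the intervals actually tighten:

```python
# Pure-Python sketch of Algorithm 1: each equality constraint row is
# solved for xi_j and evaluated in interval arithmetic to tighten
# E_j = [-1, 1] and build R_j. Hypothetical data: a single constraint
# xi_1 + 0.5*xi_2 = 1 over n_g = 2 generators.
def scale(s, I):                     # scalar times interval [lo, hi]
    a, b = s * I[0], s * I[1]
    return [min(a, b), max(a, b)]

def alg1(A, b, ng):
    E = [[-1.0, 1.0] for _ in range(ng)]
    R = [[float('-inf'), float('inf')] for _ in range(ng)]
    for i, row in enumerate(A):
        for j in range(ng):
            if row[j] != 0:
                lo = hi = b[i] / row[j]
                for k in range(ng):
                    if k != j:
                        I = scale(-row[k] / row[j], E[k])
                        lo, hi = lo + I[0], hi + I[1]
                # intersect R_j with the propagated interval, then E_j
                R[j] = [max(R[j][0], lo), min(R[j][1], hi)]
                E[j] = [max(E[j][0], R[j][0]), min(E[j][1], R[j][1])]
    return E, R

E, R = alg1([[1.0, 0.5]], [1.0], 2)
print(E)   # [[0.5, 1.0], [0.0, 1.0]]
print(R)   # [[0.5, 1.5], [0.0, 1.0]]
```

An empty intersection $ E_j \cap R_j $ at any index would certify $ Z_c = \emptyset $, which is the emptiness test used above.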
In fact, two iterations of \\textbf{Algorithm~\\ref{gen_Bounds}} were required to detect that $ Z_c \\subset H_- $ for the example shown on the right subplot of Fig. \\ref{Fig_Halfspace_Intersection}.\n\n\\begin{rem}\nTo provide an unbiased evaluation of constrained-zonotope hyperplane intersection using \\textbf{\\emph{Algorithm \\ref{gen_Bounds}}}, the intersection of $Z_h$ (from \\textbf{\\emph{Example \\ref{exmp_1}}})\nwith 100 randomly chosen hyperplanes is checked. Note that for all instances, the parent zonotope $Z$ satisfying $Z \\supset Z_h$ intersected the random hyperplanes. The constrained zonotope $Z_h$ intersected these random hyperplanes $61$ times and did not intersect for the remaining $39$ times. In all cases, \\textbf{\\emph{Algorithm \\ref{gen_Bounds}}} accurately detected the intersection\/non-intersection of the constrained zonotope and randomly generated hyperplanes. Iteration of \\textbf{\\emph{Algorithm \\ref{gen_Bounds}}} to further refine $E$ was only required in $13$ of these $100$ cases.\n\\end{rem}\n\n\\section{Redundancy Removal} \\label{Sec_Redundancy}\n\nIt is important to recognize that certain set operations can create redundancy in the set representation. For example, the Minkowski sum can create redundancy in the resultant zonotope if the two operands have parallel generators. Additionally, the generalized intersection can create redundancy within the generators and constraints of a constrained zonotope. Detecting and removing this redundancy can provide order reduction without reducing the volume of the set.\nFirst, if a zonotope $ Z = \\{\\mathbf{G},\\mathbf{c}\\} $ has parallel generators, $ \\mathbf{g}_i \\parallel \\mathbf{g}_j $, then the same set can be represented using one less generator by simply combining parallel generators through addition $\\mathbf{g}_i + \\mathbf{g}_j$. 
For a zonotope in $ \\mathbb{R}^n $ with $ n_g $ generators, parallel generators can be detected and combined using a typical sorting algorithm with complexity $O(n n_g^2)$. To set a desired numerical precision, two generators are considered parallel if $ \\frac{|\\mathbf{g}_i^T \\mathbf{g}_j|}{\\|\\mathbf{g}_i\\|_2 \\|\\mathbf{g}_j\\|_2} \\geq 1 - \\epsilon $, where $ \\epsilon > 0 $ is a small number.\n\nThe same is true for a constrained zonotope $ Z_c = \\{\\mathbf{G},\\mathbf{c},\\mathbf{A},\\mathbf{b}\\} $ if the lifted zonotope \\cite{Scott2016}\n\\begin{equation*}\nZ^{+} = \\left\\{ \\begin{bmatrix} \\mathbf{G} \\\\ \\mathbf{A} \\end{bmatrix}, \\begin{bmatrix} \\phantom{-}\\mathbf{c} \\\\ -\\mathbf{b} \\end{bmatrix} \\right\\} = \\{\\mathbf{G}^+,\\mathbf{c}^+\\},\n\\end{equation*}\nhas parallel generators, $ \\mathbf{g}_i^+ \\parallel \\mathbf{g}_j^+ $. In this case, the parallel generators can be similarly reduced but with higher complexity $O((n+n_c)n_g^2)$ due to the $ n_c $ constraints added to the rows of the lifted zonotope structure. Once the reduced lifted zonotope is obtained, it is transformed back to a reduced constrained zonotope with fewer generators.\n\nFor constrained zonotopes, redundancy can also come from the combination of constraints $ \\mathbf{A} \\boldsymbol{\\xi} = \\mathbf{b} $ and $ \\lVert \\boldsymbol{\\xi} \\rVert_\\infty \\leq~1 $. 
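The parallelism test and combination step can be sketched as follows (Python; the generator data is hypothetical, and anti-parallel generators are sign-flipped before combining, which leaves the set unchanged since $ \boldsymbol{\xi} $ ranges over a symmetric box):

```python
# Sketch of parallel-generator reduction: generators (columns of G,
# stored here as a list of vectors) are combined whenever
# |g_i' g_j| >= (1 - eps) * ||g_i|| * ||g_j||. Hypothetical data.
from math import sqrt

def reduce_parallel(gens, eps=1e-9):
    out = []
    for g in gens:
        for h in out:
            dot = sum(a * b for a, b in zip(g, h))
            ng = sqrt(sum(a * a for a in g))
            nh = sqrt(sum(a * a for a in h))
            if ng * nh > 0 and abs(dot) >= (1 - eps) * ng * nh:
                s = 1 if dot >= 0 else -1   # align sign, then add
                for k in range(len(h)):
                    h[k] += s * g[k]
                break
        else:                               # no parallel match found
            out.append(list(g))
    return out

print(reduce_parallel([[1, 0], [2, 0], [0, 1], [0, -3]]))
# [[3, 0], [0, 4]]
```

Applied to the rows-augmented generators $ \mathbf{g}_i^+ $ of the lifted zonotope, the same routine performs the constrained-zonotope reduction described above.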
By representing these constraints as\n\\begin{equation} \\label{redund_cons}\n\\mathbf{A} \\boldsymbol{\\xi} = \\mathbf{b} \\Longleftrightarrow \\mkern -20mu \\sum_{j\\in \\{ 1, \\cdots, n_g \\}} \\mkern -20mu a_{i,j} \\xi_j = b_i, \\forall i \\in \\{ 1, \\cdots, n_c\\},\n\\end{equation} \nand $ \\lVert \\boldsymbol{\\xi} \\rVert_\\infty \\leq 1 \\Leftrightarrow |\\xi_j | \\leq 1, \\forall j \\in \\{ 1, \\cdots, n_g \\}$, the following theorem provides a condition for detecting redundancy and a method for removing one generator and one constraint with complexity $O(n_cn_g^2)$.\n\n\\begin{thm} \\label{Redundant_ConZono}\n\tFor $ Z_c = \\{ \\mathbf{G},\\mathbf{c},\\mathbf{A},\\mathbf{b} \\} \\subset \\mathbb{R}^n $ with $ n_g $ generators and $ n_c $ constraints, if there exist indices $ r \\in \\{ 1, \\cdots, n_c \\} $ and $ c \\in \\{1, \\cdots, n_g \\} $ such that $ a_{r,c} \\neq 0 $ and \n\t\\begin{equation} \\label{Ranges_indexed}\n\tR_{r,c} \\triangleq a_{r,c}^{-1} b_r - a_{r,c}^{-1} \\sum_{k \\neq c} a_{r,k} E_k \\subseteq [-1,1],\n\t\\end{equation}\n\twith $E_k$ computed using \\textbf{\\emph{Algorithm 1}}, then $ Z_c $ can be exactly represented by a constrained zonotope $ Z_r $ with $ n_g -1 $ generators and $ n_c -1 $ constraints.\n\\end{thm}\n\\begin{pf}\n\tFollowing the procedure in \\cite{Scott2016}, let \n\t\\begin{equation*}\n\tZ_r = \\{\\mathbf{G}-\\mathbf{\\Lambda}_G\\mathbf{A}, \\mathbf{c}+\\mathbf{\\Lambda}_G\\mathbf{b}, \\mathbf{A}-\\mathbf{\\Lambda}_A\\mathbf{A}, \\mathbf{b}-\\mathbf{\\Lambda}_A\\mathbf{b}\\},\n\t\\end{equation*}\n\twhere $ \\mathbf{\\Lambda}_G = \\mathbf{G} \\mathbf{E}_{c,r} a_{r,c}^{-1} \\in \\mathbb{R}^{n \\times n_c} $, $ \\mathbf{\\Lambda}_A = \\mathbf{A} \\mathbf{E}_{c,r} a_{r,c}^{-1} \\in \\mathbb{R}^{n_c \\times n_c} $, and $ \\mathbf{E}_{c,r} \\in \\mathbb{R}^{n_g \\times n_c} $ is zero except for a one in the $ (c,r) $ position. 
With $ Z_r = \\{\\mathbf{G}_r,\\mathbf{c}_r,\\mathbf{A}_r,\\mathbf{b}_r\\} $, this transformation uses the $ r^{th} $ row of \\eqref{redund_cons} to solve for $ \\xi_c $ in terms of $ \\xi_k, k\\neq c $. This results in the $ c^{th} $ column of $ \\mathbf{G}_r $ and $ \\mathbf{A}_r $ and the $ r^{th} $ row of $ \\mathbf{A}_r $ being zero. Removing these columns and rows of zeros results in a constrained zonotope with $ n_g -1 $ generators and $ n_c -1 $ constraints. Through this transformation, the $ r^{th} $ constraint is still imposed in $ Z_r $ but the ability to constrain $ | \\xi_c | \\leq 1 $ is lost. However, since $ R_{r,c} \\subseteq [-1,1] $, this constraint is imposed by the remaining equality and norm constraints, and thus $ Z_r = Z_c $. \\hfill \\hfill \\qed\n\\end{pf}\n\nAs in \\cite{Scott2016}, Gauss-Jordan elimination with full pivoting should be applied to $ Z_c $ prior to applying \\textbf{Algorithm \\ref{gen_Bounds}} to determine the intervals $ E_k $ required to compute \\eqref{Ranges_indexed}.\nThe procedure discussed in the proof of \\textbf{Theorem \\ref{Redundant_ConZono}} can be applied iteratively until $ R_{r,c} \\nsubseteq [-1,1] $ for all index pairs $ (r,c) $. However, there is no guarantee that the resulting constrained zonotope will be without redundancy since \\textbf{Theorem \\ref{Redundant_ConZono}} only provides a sufficient condition. \n\\begin{exmp}\n\tConsider the two zonotopes shown in Fig. \\emph{\\ref{Fig_Redundancy}}\n\t\\begin{equation*}\n\tZ_1 = \\left\\{ \\begin{bmatrix} 1 & \\phantom{-}1 \\\\ 1 & -1 \\end{bmatrix}, \\begin{bmatrix} 0 \\\\ 0 \\end{bmatrix} \\right\\}, \\quad\n\tZ_2 = \\left\\{ \\begin{bmatrix} 1 & 0 \\\\ 0 & 1 \\end{bmatrix}, \\begin{bmatrix} 0 \\\\ 0 \\end{bmatrix} \\right\\},\n\t\\end{equation*}\n and the constrained zonotope $ Z_c = Z_1 \\cap Z_2 $. 
Applying \\eqref{generalized_Intersection} results in \n\t\\begin{equation} \\label{Redund_gen_int_example}\n\t\\setlength\\arraycolsep{1.7pt}\n\tZ_c = \\left\\{ \\begin{bmatrix} 1 & \\phantom{-}1 & 0 & 0 \\\\ 1 & -1 & 0 & 0 \\end{bmatrix}, \\begin{bmatrix} 0 \\\\ 0 \\end{bmatrix}, \\begin{bmatrix} 1 & \\phantom{-}1 & -1 & \\phantom{-}0 \\\\ 1 & -1 & \\phantom{-}0 & -1 \\end{bmatrix}, \\begin{bmatrix} 0 \\\\ 0 \\end{bmatrix} \\right\\},\n\t\\end{equation}\n\twith $ n_g = 4 $ generators and $ n_c = 2 $ constraints. However, since $ Z_2 \\subset Z_1 $, the intersection is also represented exactly by $ Z_2 $. By applying Gauss-Jordan elimination with full pivoting and two iterations of the procedure from \\emph{\\textbf{Theorem \\ref{Redundant_ConZono}}}, two constraints and two generators are removed to reduce $ Z_c $ from \\eqref{Redund_gen_int_example} to $ Z_c = Z_2 $ with $ n_g = 2 $ and $ n_c = 0 $. To provide an unbiased evaluation of \\emph{\\textbf{Theorem \\ref{Redundant_ConZono}}}, the axis-aligned generators of $Z_2$ above were replaced by randomly chosen generators. In each of the $45$ out of $100$ cases where $Z_2 \\subseteq Z_1$, $Z_c$ was successfully reduced to $Z_c = Z_2$ with $n_g = 2$ and $n_c = 0$. \n\t\\end{exmp}\n\\begin{figure}\n\t\\begin{center}\n\t\t\\includegraphics[width=4cm]{Redundancy_Removal.pdf}\n\t\t\\caption{Zonotopes $ Z_1 $ and $ Z_2 $, where $ Z_1 \\cap Z_2 = Z_2 $, used to demonstrate the ability to remove redundancy from constrained zonotopes that can arise from operations like the generalized intersection.}\n\t\t\\label{Fig_Redundancy} \t\\end{center} \n\\end{figure}\n\n\\begin{rem}\nFor a constrained zonotope $Z_c$ with $n_c$ constraints and $n_g$ generators and a set $H$ in H-Rep with $n_h$ halfspaces, \\textbf{\\emph{Algorithm \\ref{gen_Bounds}}} can be applied in two different ways to either prevent or remove redundancy in the set representation of $ Z_c \\cap H $. 
The approach from Section \\ref{Sec_conZonoHalfspace} based on preventing the addition of unnecessary generators and constraints has a best-case complexity of $O(n_h n_c n_g^2)$ if $ Z_c \\subset H $ and a worst-case complexity of $O(n_h (n_c+n_h) (n_g+n_h)^2)$ if $ Z_c $ intersects each of the $ n_h $ halfspaces. Alternatively, $ n_h $ constraints and $n_h$ generators can be directly added to $ Z_c $ using \\eqref{conzono_halfspace_int} and then \\textbf{\\emph{Theorem \\ref{Redundant_ConZono}}} can be applied to reduce set complexity. This approach has a best-case complexity of $O((n_c+n_h)(n_g+n_h)^2)$ when no generators\/constraints can be removed and a worst-case complexity of $O(n_h(n_c+n_h)(n_g+n_h)^2)$ when all of the added $ n_h $ constraints and $n_h$ generators can be removed. Thus, both approaches have the same worst-case complexity, but the preventative approach has the potential to require fewer computations in practice.\n\\end{rem}\n\n\\section{Inner-Approximations} \\label{Sec_InnerApprox}\n\nOnce attempts have been made to remove redundancy from the representation of a zonotope or constrained zonotope, further complexity reduction may be required. As discussed in the Introduction, the majority of order reduction techniques have focused on outer-approximations. 
This section establishes inner-approximation order reduction for zonotopes and constrained zonotopes.\n\n\\subsection{Zonotopes}\n\nThe proposed reduced-order inner-approximation of a zonotope requires the following zonotope containment conditions.\n\\begin{lem} (Theorem 3 of\n\t\\emph{\\cite{Sadraddini2019}}) \\label{zonoContainment}\n\tGiven two zonotopes $ X = \\{\\mathbf{G}_x,\\mathbf{c}_x\\} \\subset \\mathbb{R}^n $ and $ Y = \\{\\mathbf{G}_y,\\mathbf{c}_y\\} \\subset \\mathbb{R}^n $, $ X \\subseteq Y $ if there exists $ \\mathbf{\\Gamma} \\in \\mathbb{R}^{n_y \\times n_x} $ and $ \\boldsymbol{\\beta} \\in \\mathbb{R}^{n_y} $ such that\n\t\\begin{equation} \\label{zonoContainment_Conditions}\n\t\\mathbf{G}_x = \\mathbf{G}_y \\mathbf{\\Gamma}, \\quad \\mathbf{c}_y - \\mathbf{c}_x = \\mathbf{G}_y \\boldsymbol{\\beta}, \\quad |\\mathbf{\\Gamma}|\\mathbf{1} + |\\boldsymbol{\\beta}| \\leq \\mathbf{1}.\n\t\\end{equation}\n\\end{lem}\n\\begin{thm} \\label{ZonoInnerApprox}\n\tThe zonotope $ Z_r = \\{\\mathbf{G}_r,\\mathbf{c}\\} \\subset \\mathbb{R}^n $ is a reduced-order inner-approximation of $ Z = \\{\\mathbf{G},\\mathbf{c}\\} \\subset \\mathbb{R}^n $ such that $ Z_r \\subseteq Z $ with $ \\mathbf{G}_r \\in \\mathbb{R}^{n \\times n_r} $, $ \\mathbf{G} \\in \\mathbb{R}^{n \\times n_g} $, and $ n_r < n_g $ if $ \\mathbf{G}_r = \\mathbf{G} \\mathbf{T} $ where $ \\mathbf{T} = [t_{i,j}] \\in \\mathbb{R}^{n_g \\times n_r} $, $ t_{i,j} \\in \\{-1,0,1\\} $, and $ \\sum_{j=1}^{n_r} |t_{i,j}| = 1 $, $\\forall i \\in \\{ 1, \\cdots, n_g\\}$.\n\\end{thm}\n\\begin{pf}\n\tFrom \\textbf{Lemma \\ref{zonoContainment}}, $ Z_r \\subseteq Z $ if there exist $ \\mathbf{\\Gamma} \\in \\mathbb{R}^{n_g \\times n_r} $ and $ \\boldsymbol{\\beta} \\in \\mathbb{R}^{n_g} $ such that\n\t\\begin{equation*}\n\t\\mathbf{G}\\mathbf{T} = \\mathbf{G} \\mathbf{\\Gamma}, \\quad \\mathbf{c} - \\mathbf{c} = \\mathbf{G} \\boldsymbol{\\beta}, \\quad |\\mathbf{\\Gamma}|\\mathbf{1} + |\\boldsymbol{\\beta}| 
\\leq \\mathbf{1}.\n\t\\end{equation*}\n\tThe first two conditions hold by setting $ \\mathbf{\\Gamma} = \\mathbf{T} $ and $ \\boldsymbol{\\beta} = \\mathbf{0} $. The third condition holds since $ \\sum_{j=1}^{n_r} |t_{i,j}| = 1, \\; \\forall i \\in \\{1, \\cdots, n_g\\} $, is equivalent to $ |\\mathbf{T}|\\mathbf{1} = \\mathbf{1} $. \\hfill \\hfill \\qed\n\\end{pf}\nThe specific definition of $ \\mathbf{T} $ in \\textbf{Theorem \\ref{ZonoInnerApprox}} produces an inner-approximation of $ Z $ by forming each generator of $ Z_r $ as a signed sum of generators of $ Z $. Typically, the largest inner-approximation of $ Z $ is desired. The proposed method for determining $ \\mathbf{T} $ is inspired by the methods for determining outer-approximations of zonotopes presented in \\cite{Kopetzki2017a}. First, let the generators $ \\mathbf{g}_i $ of $ Z $ be arranged such that $ \\| \\mathbf{g}_i \\|_2 \\geq \\| \\mathbf{g}_{i+1} \\|_2, \\; \\forall i \\in \\{1, \\cdots, n_g -1 \\} $. Then partition the generator matrix such that $ \\mathbf{G} = [\\mathbf{G}_1 \\; \\mathbf{G}_2] $ where $ \\mathbf{G}_1 \\in \\mathbb{R}^{n \\times n_r} $ and $ \\mathbf{G}_2 \\in \\mathbb{R}^{n \\times (n_g-n_r)} $. For each generator $ \\mathbf{g}_{2,j} $ in $ \\mathbf{G}_2 $, compute the magnitude of the dot product $ \\alpha_{i,j} = | \\mathbf{g}_{1,i}^T \\mathbf{g}_{2,j}| $ with all generators $ \\mathbf{g}_{1,i} $ in $ \\mathbf{G}_1 $. The goal is to add each generator $ \\mathbf{g}_{2,j} $ to the most aligned generator $ \\mathbf{g}_{1,i} $. 
Thus, let $ \\mathbf{T} = [t_{i,j}] $ where\n\\begin{equation} \\label{T_zonoapprox}\nt_{i,j} = \\begin{cases} 1 & \\text{if } i=j\\leq n_r, \\\\\n\\frac{1}{\\alpha_{j,i-n_r}}\\mathbf{g}_{1,j}^T \\mathbf{g}_{2,i-n_r} & \\text{if } i > n_r \\text{ and } \\alpha_{j,i-n_r} > \\alpha_{k,i-n_r}, \\forall k \\neq j, \\\\\n0 & \\text{otherwise}.\n\\end{cases}\n\\end{equation}\nNote that computing $Z_r$ using \\textbf{Theorem \\ref{ZonoInnerApprox}} and \\eqref{T_zonoapprox} has an overall complexity of $O(nn_g^2 + nn_gn_r)$, where the first term is associated with sorting the generators based on the $2$-norm and the second term is associated with computing the product $\\mathbf{G}_r = \\mathbf{G}\\mathbf{T}$ in \\textbf{Theorem \\ref{ZonoInnerApprox}}.\n\\begin{exmp}\\label{exmp_zon_innerapprox}\n\tConsider the zonotope \n\t\\begin{equation*}\n\tZ = \\left\\{ \\begin{bmatrix} 4 & 3 & -2 & 0.2 & 0.5 \\\\ 0 & 2 & 3 & 0.6 & -0.3 \\end{bmatrix}, \\boldsymbol{0} \\right\\} \\subset \\mathbb{R}^2. \n\t\\end{equation*}\n\tNote that the generators are already arranged in order of decreasing 2-norm. With $ n_g = 5 $, the goal is to determine $ Z_r \\subseteq Z $ such that $ n_r = 3 $. From \\textbf{\\emph{Theorem \\ref{ZonoInnerApprox}}} and \\eqref{T_zonoapprox}, the matrix $\\mathbf{T}$ and the reduced-order zonotope $Z_r$ are \n\t\\begin{equation*}\n\t \\mathbf{T} = \\begin{bmatrix} 1 & 0 & 0 \\\\ 0 & 1 & 0 \\\\ 0 & 0 & 1 \\\\ 0 & 1 & 0 \\\\ 1 & 0 & 0 \\end{bmatrix}, \\; Z_r = \\left\\{ \\begin{bmatrix} \\phantom{-}4.5 & 3.2 & -2 \\\\ -0.3 & 2.6 & \\phantom{-}3 \\end{bmatrix}, \\boldsymbol{0} \\right\\}.\n\t\\end{equation*}\n\n\tFig. \\emph{\\ref{Fig_Zono_Inapprox_Tmat}} confirms $ Z_r \\subseteq Z $ with volume ratio $V_r = 0.97$.\n\tWhile this numerical example resulted in a relatively large volume ratio, the reduction in volume is highly dependent on the distribution of generator lengths and the number of generators removed. 
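As a sanity check, the construction of $ \\mathbf{T} $ and $ \\mathbf{G}_r = \\mathbf{G}\\mathbf{T} $ for this example can be verified numerically. The following sketch is illustrative only (it assumes NumPy is available and is not part of the original development): it assigns each discarded generator, with matching sign, to the retained generator it is most aligned with.

```python
import numpy as np

# Zonotope Z from the example; generators already sorted by decreasing 2-norm
G = np.array([[4.0, 3.0, -2.0, 0.2, 0.5],
              [0.0, 2.0,  3.0, 0.6, -0.3]])
n_g, n_r = G.shape[1], 3
G1, G2 = G[:, :n_r], G[:, n_r:]

# Keep the n_r largest generators, then add each discarded generator
# (with matching sign) to the most aligned retained generator
T = np.zeros((n_g, n_r))
T[:n_r, :n_r] = np.eye(n_r)
for j in range(n_g - n_r):
    dots = G1.T @ G2[:, j]           # alignment with each retained generator
    i = int(np.argmax(np.abs(dots))) # most aligned retained generator
    T[n_r + j, i] = np.sign(dots[i])

Gr = G @ T  # generator matrix of the inner-approximation Z_r
```

Running this reproduces the matrix $ \\mathbf{T} $ and the generator matrix of $ Z_r $ reported above, and the row sums of $ |\\mathbf{T}| $ equal one as required by the containment condition.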
For 100 randomly generated zonotopes in $ \\mathbb{R}^2 $ with $n_g = 5$, applying \\textbf{\\emph{Theorem \\ref{ZonoInnerApprox}}} and \\eqref{T_zonoapprox} resulted in all reduced zonotopes satisfying $Z_r \\subseteq Z$ with $n_r = 3$ and mean volume ratio $V_r = 0.84$.\n\\end{exmp}\n\n\\begin{figure}\n\t\\begin{center}\n\t\t\\includegraphics[width=4cm]{Zono_Inner_Approx_Tmat.pdf}\n\t\t\\caption{The inner-approximation of $ Z $ with $ n_g = 5 $ by the reduced-order zonotope $ Z_r $ with $ n_r = 3 $.}\n\t\t\\label{Fig_Zono_Inapprox_Tmat} \n\t\\end{center} \n\\end{figure}\n\n\\subsection{Constrained Zonotopes}\\label{subsec_conzono_innerapprox}\n\nFor constrained zonotopes, a reduced-order inner-approximation $ Z_r $ of $ Z_c $ can be computed based on the set containment criteria for the affine transformation of polytopes in H-Rep (AH-polytopes) developed in \\cite{Sadraddini2019}, since AH-polytopes and constrained zonotopes are equivalent.\n\n\\begin{defn} \\label{AHPolytope_def} \\emph{\\cite{Sadraddini2019}}\n\tAn AH-polytope $ X \\subset \\mathbb{R}^n$ is an affine transformation of an H-Rep polytope $ P \\subset \\mathbb{R}^m$ where\n\t\\begin{equation} \\label{AH_def}\n\tX = \\bar{\\mathbf{x}} + \\mathbf{X} P, \\quad \\mathbf{X} \\in \\mathbb{R}^{n \\times m}, \\quad \\bar{\\mathbf{x}} \\in \\mathbb{R}^n.\n\t\\end{equation}\n\\end{defn}\n\nThe following theorem proves the equivalence between constrained zonotopes and AH-polytopes in addition to providing a method to convert constrained zonotopes to AH-polytopes with complexity $O(nn_g^2+n_c^2n_g)$,\nwhere the first term is associated with computing an affine transformation and the second term is associated with computing the basis of $\\mathcal{N}(\\mathbf{A})$ for $Z_c = \\{\\mathbf{G}, \\mathbf{c}, \\mathbf{A}, \\mathbf{b} \\}$.\n\t\\begin{thm} \\label{ConZono_AHpolytope}\n\tA non-empty set $ Z_c \\subset \\mathbb{R}^n $ is a constrained zonotope if and only if it is an 
AH-polytope.\n\\end{thm}\n\\begin{pf}\n\tTo prove that every AH-polytope is a constrained zonotope, let $ P = \\{ \\mathbf{z} \\in \\mathbb{R}^m \\mid \\mathbf{H} \\mathbf{z} \\leq \\mathbf{k} \\} $. Per Theorem 1 in \\cite{Scott2016}, the set $ P $ can always be represented as a constrained zonotope $ P = \\{\\mathbf{G}_p, \\mathbf{c}_p, \\mathbf{A}_p, \\mathbf{b}_p\\} $. Thus, from \\eqref{AH_def} and the properties of constrained zonotopes \\eqref{affineMap} and \\eqref{MinkowskiSum}, $ X $ is a constrained zonotope where $ X = \\{\\mathbf{X}\\mathbf{G}_p, \\bar{\\mathbf{x}}+\\mathbf{X}\\mathbf{c}_p, \\mathbf{A}_p, \\mathbf{b}_p\\} $. \n\tTo prove that every constrained zonotope is an AH-polytope, consider $ Z_c = \\{\\mathbf{G}, \\mathbf{c}, \\mathbf{A}, \\mathbf{b}\\} $ with $ n_g $ generators and $ n_c $ constraints. If $ n_c = 0 $, $ Z_c = Z = \\{\\mathbf{G},\\mathbf{c}\\} $ is a zonotope and can be represented in the AH-polytope form of \\eqref{AH_def} with $ \\bar{\\mathbf{x}} = \\mathbf{c} $, $ \\mathbf{X} = \\mathbf{G} $, and $ P = B_\\infty $. For $ n_c > 0 $, assume that any rank deficiency in $ \\mathbf{A} $ has been detected as a row of zeros in the reduced row echelon form achieved through Gauss-Jordan elimination with full pivoting (see \\cite{Scott2016} for details). Thus, the rank of $ \\mathbf{A} $ is $ n_c $ and there exist $\\mathbf{s} = \\mathbf{A}^{\\dagger}\\mathbf{b} \\in \\mathbb{R}^{n_g}$ and a matrix $ \\mathbf{T} \\in \\mathbb{R}^{n_g \\times (n_g-n_c)} $ with columns that form a basis for $\\mathcal{N}(\\mathbf{A})$. 
Using the change of variables $ \\boldsymbol{\\xi} = \\mathbf{T} \\bar{\\boldsymbol{\\xi}} + \\mathbf{s} $, the equality constraint $\\mathbf{A}\\boldsymbol{\\xi} = \\mathbf{b}$ is satisfied for all $ \\bar{\\boldsymbol{\\xi}} \\in \\mathbb{R}^{n_g-n_c} $.\n\n Hence, $Z_c$ can be expressed as \n\t\\begin{equation*}\n\tZ_c = \\left\\{ \\mathbf{c}+\\mathbf{G}\\mathbf{s} + \\mathbf{G}\\mathbf{T}\\bar{\\boldsymbol{\\xi}} \\mid \\| \\mathbf{T} \\bar{\\boldsymbol{\\xi}} + \\mathbf{s} \\|_\\infty \\leq 1 \\right\\}.\n\t\\end{equation*}\n\n\n\tFurthermore, the norm constraint $ \\| \\mathbf{T} \\bar{\\boldsymbol{\\xi}} + \\mathbf{s} \\|_\\infty \\leq 1 $ can be represented in H-Rep as $P = \\{ \\bar{\\boldsymbol{\\xi}} \\mid \\mathbf{H} \\bar{\\boldsymbol{\\xi}} \\leq \\mathbf{k} \\}$, where \n\t\\begin{equation*}\n\t\\mathbf{H} = \\begin{bmatrix} \\phantom{-}\\mathbf{T} \\\\ -\\mathbf{T} \\end{bmatrix}, \\quad \n\t\\mathbf{k} = \\begin{bmatrix} \\mathbf{1} - \\mathbf{s} \\\\ \\mathbf{1} + \\mathbf{s} \\end{bmatrix}.\n\t\\end{equation*}\n\tThus, with $ \\bar{\\mathbf{x}} = \\mathbf{c}+\\mathbf{G}\\mathbf{s} $ and $ \\mathbf{X} = \\mathbf{G}\\mathbf{T} $,\n$ Z_c $ is an AH-polytope of the form \\eqref{AH_def}. \t\\hfill \\hfill \\qed\n\\end{pf}\n\n\n\\begin{rem}\n\t\tThe convexity of the constrained zonotope $Z_c = \\{\\mathbf{G}, \\mathbf{c}, \\mathbf{A}, \\mathbf{b}\\}$ also facilitates representation as a polynomial zonotope $Z_p = (\\mathbf{c}, \\mathbf{G}, \\mathbf{E})$ in Z-Rep \\cite{Kochdumper2019}. 
However, the reverse is not true.\n\\end{rem}\n\\begin{lem}(Theorem 1 of \\emph{\\cite{Sadraddini2019}}) \\label{AHContainment}\n\tGiven AH-polytopes $ X,Y \\subset \\mathbb{R}^n $ where $ X = \\bar{\\mathbf{x}} + \\mathbf{X} P_x $, $ Y = \\bar{\\mathbf{y}} + \\mathbf{Y} P_y $, $ P_x = \\left\\{ \\mathbf{x} \\in \\mathbb{R}^{n_x} \\mid \\mathbf{H}_x \\mathbf{x} \\leq \\mathbf{f}_x \\right\\} $, and $ P_y = \\left\\{ \\mathbf{y} \\in \\mathbb{R}^{n_y} \\mid \\mathbf{H}_y \\mathbf{y} \\leq \\mathbf{f}_y \\right\\}$, $ X \\subseteq Y $ if there exist $\\mathbf{\\Gamma} \\in \\mathbb{R}^{n_y \\times n_x}$, $\\boldsymbol{\\beta} \\in \\mathbb{R}^{n_y}$, and $\\mathbf{\\Lambda} \\in \\mathbb{R}^{n_{hy} \\times n_{hx}}_{+}$ such that \n\t\\begin{subequations} \\label{AHContainment_Cons}\n\t\t\\begin{align}\n\t\t\\mathbf{X} &= \\mathbf{Y} \\mathbf{\\Gamma}, & \\bar{\\mathbf{y}} - \\bar{\\mathbf{x}} &= \\mathbf{Y} \\boldsymbol{\\beta}, \\\\ \n\t\t\\mathbf{\\Lambda} \\mathbf{H}_x &= \\mathbf{H}_y \\mathbf{\\Gamma}, & \\mathbf{\\Lambda} \\mathbf{f}_x &\\leq \\mathbf{f}_y + \\mathbf{H}_y \\boldsymbol{\\beta}. \n\t\t\\end{align}\n\t\\end{subequations}\n\\end{lem}\n\nTo achieve a reduced-order inner-approximation $ Z_r $ of a constrained zonotope $ Z_c $, \\textbf{Theorem \\ref{ConZono_AHpolytope}} can be used to convert both $ Z_r $ and $ Z_c $ into AH-polytopes while \\textbf{Lemma \\ref{AHContainment}} can be used to ensure $ Z_r \\subseteq Z_c $. Assuming $ Z_c $ is known, consider $ Z_r = \\{\\mathbf{G}_r \\mathbf{\\Phi}, \\mathbf{c}_r, \\mathbf{A}_r, \\mathbf{b}_r\\} $ where $ \\mathbf{\\Phi} = \\text{diag}(\\boldsymbol{\\phi}) $ is a scaling matrix with $ \\phi_i > 0, \\forall i \\in \\{1,\\cdots, n_{gr} \\} $. 
Assuming $ \\mathbf{G}_r $, $ \\mathbf{A}_r $, and $ \\mathbf{b}_r $ are known, the following optimization problem, with $4n_{gr}^2 + n_{gr} + (n_g - n_c)(1 + n_{gr} - n_{cr}) + n$ decision variables, maximizes the $p = 1$, $2$, or $\\infty$ norm of the vector $\\boldsymbol{\\phi}$ of diagonal elements of the scaling matrix $\\mathbf{\\Phi}$:\n\\begin{subequations}\\label{Conzono_containment}\n\t\\begin{align}\n\t& \\underset{\\mathbf{\\Phi}, \\mathbf{\\Gamma}, \\boldsymbol{\\beta}, \\mathbf{\\Lambda}, \\mathbf{c}_r}{\\text{max}} \\mkern 20mu ||\n\t\\boldsymbol{\\phi}\n\t||_p, \\\\\n\t& \\text{s.t.} \\nonumber \\\\\n\t& (\\mathbf{c} + \\mathbf{G}\\mathbf{s}) - (\\mathbf{c}_r + \\mathbf{G}_r\\mathbf{\\Phi}\\mathbf{s}_r) = \\mathbf{G}\\mathbf{T}\\boldsymbol{\\beta}, \\\\\n\t& \\mathbf{G}_r\\mathbf{\\Phi}\\mathbf{T}_r = \\mathbf{G}\\mathbf{T}\\mathbf{\\Gamma},\\quad \\mathbf{\\Lambda}\\begin{bmatrix}\\phantom{-}\\mathbf{T}_r \\\\ \\mathbf{-T}_r\\end{bmatrix} = \\begin{bmatrix} \\phantom{-}\\mathbf{T} \\\\ -\\mathbf{T} \\end{bmatrix}\\mathbf{\\Gamma}, \\\\\n\t& \\mathbf{\\Lambda}\\begin{bmatrix} \\mathbf{1} - \\mathbf{s}_r \\\\ \\mathbf{1} + \\mathbf{s}_r \\end{bmatrix} \\leq \\begin{bmatrix} \\mathbf{1} - \\mathbf{s} \\\\ \\mathbf{1} + \\mathbf{s} \\end{bmatrix} + \\begin{bmatrix} \\phantom{-}\\mathbf{T} \\\\ -\\mathbf{T}\\end{bmatrix}\\boldsymbol{\\beta},\n\t\\end{align}\n\\end{subequations}\nwith parameters $\\mathbf{s} = \\mathbf{A}^{\\dagger}\\mathbf{b} \\in \\mathbb{R}^{n_g} $, $ \\mathbf{s}_r = \\mathbf{A}_r^{\\dagger}\\mathbf{b}_r \\in \\mathbb{R}^{n_{gr}} $, and matrices $ \\mathbf{T} \\in \\mathbb{R}^{n_g \\times (n_g - n_c)}, \\mathbf{T}_r \\in \\mathbb{R}^{n_{gr} \\times (n_{gr} - n_{cr})} $ with columns that form bases for $ \\mathcal{N}(\\mathbf{A}) $ and $ \\mathcal{N}(\\mathbf{A}_r) $, respectively.\nNote that the majority of the decision variables in \\eqref{Conzono_containment} come from the matrices $\\mathbf{\\Gamma} \\in \\mathbb{R}^{(n_g - n_c) 
\\times (n_{gr} - n_{cr})}$ and $\\mathbf{\\Lambda} \\in \\mathbb{R}^{2n_{gr} \\times 2n_{gr}}_{+}$.\nWhile this procedure applies to any $ Z_r $, the process discussed in Section \\ref{Sec_Redundancy} can be used to compute $ Z_r $ by removing exactly one constraint and one generator from $ Z_c $. For the case where $ Z_c $ satisfies the conditions in \\textbf{Theorem~\\ref{Redundant_ConZono}}, the $ r^{th} $ constraint and the $ c^{th} $ generator were chosen such that $ R_{r,c} \\subseteq [-1,1] $ and thus an exact reduced-order representation was achieved with $ Z_r = Z_c $. To achieve further reduction through the inner-approximation of $ Z_c $, the same procedure from Section \\ref{Sec_Redundancy} can be applied by choosing appropriate indices and scaling $ Z_r $ via optimization while enforcing $ Z_r \\subseteq Z_c $ using the constraints from \\eqref{Conzono_containment}.\nSince $ R_j = [\\rho_j^L, \\rho_j^U] $ represents the range of $ \\xi_j $ if the constraint $ | \\xi_j | \\leq 1 $ were omitted \\cite{Scott2016}, the generator $ c $ chosen for removal should be the one that minimizes $ \\max(|\\rho_j^L|, |\\rho_j^U|) $. Once $ c $ is chosen, $ r $ should be chosen such that the entry in the $ (r,c) $ position of $ \\mathbf{A} $ has the largest absolute value of all entries in the $ c^{th} $ column. \n\n\\begin{exmp} \\label{example_innerApprox}\n\tConsider the constrained zonotope $ Z_c $ shown in Fig. 
\\ref{Fig_ConZono_Inapprox} where\n\t\\begin{subequations}\n\t\t\\setlength\\arraycolsep{2pt}\n\t\t\\begin{align*}\n\t\tZ_c = \\Bigg \\{ &\\begin{bmatrix} -1 & \\phantom{-}3 & \\phantom{-}4 & 0 & 0 \\\\ \\phantom{-}4 & -2 & -5 & 0 & 0 \\end{bmatrix}, \\begin{bmatrix} 0 \\\\ 0 \\end{bmatrix}, \\\\\n\t\t&\\begin{bmatrix} -1 & \\phantom{-}3 & \\phantom{-}4 & 6.5 & 0 \\\\ \\phantom{-}4 & -2 & -5 & 0 & 8 \\end{bmatrix}, \\begin{bmatrix} -1.5 \\\\ -3 \\end{bmatrix} \\Bigg \\}.\n\t\t\\end{align*}\n\t\\end{subequations}\n\t\\setcounter{equation}{\\value{equation}-1}\n\tFirst, Gauss-Jordan elimination with full pivoting was applied to $ Z_c $, followed by the transformation in \\emph{\\textbf{Theorem \\ref{Redundant_ConZono}}}, picking the $ c^{th} $ generator that minimizes $ \\max(|\\rho_j^L|, |\\rho_j^U|) $ and the $ r^{th} $ row with the largest entry in the $ c^{th} $ column of $ \\mathbf{A} $. Then an LP was formulated and solved using the constraints from \\eqref{AHContainment_Cons} and a cost function that maximized $ \\| \\boldsymbol{\\phi} \\|_\\infty $. The resulting reduced-order constrained zonotope $ Z_r $ is shown in Fig. 
\\ref{Fig_ConZono_Inapprox} where \n\t\\begin{subequations}\n\t\t\\setlength\\arraycolsep{2pt}\n\t\t\\begin{align*}\n\t\tZ_r = \\Bigg \\{ &\\begin{bmatrix} 0 & \\phantom{-}3.17 & \\phantom{-}2.38 & -0.79 \\\\ 0 & -3.97 & -1.59 & \\phantom{-}3.17 \\end{bmatrix}, \\begin{bmatrix} -1.34 \\\\ \\phantom{-}1.03 \\end{bmatrix}, \\\\\n\t\t&\\begin{bmatrix} 1 & -0.63 & -0.25 & \\phantom{-}0.50 \\end{bmatrix}, \\begin{bmatrix} -0.38 \\end{bmatrix} \\Bigg \\}.\n\t\t\\end{align*}\n\t\\end{subequations}\n\t\\setcounter{equation}{\\value{equation}-1}\nUsing a similar approach, Fig. \\ref{Fig_ConZono_Inapprox} also shows the inner-approximations of $ Z_c $ by the zonotope $ Z $ and the interval set $ B $ where\n\t\\begin{subequations}\n\t\t\\setlength\\arraycolsep{2pt}\n\t\t\\begin{align*}\n\t\tZ &= \\left\\{ \\begin{bmatrix} \\phantom{-}2.31 & \\phantom{-}1.93 & -0.27 \\\\ -2.84 & -0.94 & \\phantom{-} 2.57\\end{bmatrix}, \\begin{bmatrix} \\phantom{-}0.49 \\\\ -1.35 \\end{bmatrix} \\right\\}, \\\\\n\t\tB &= \\left\\{ \\begin{bmatrix} 2 & 0 \\\\ 0 & 2 \\end{bmatrix}, \\begin{bmatrix} \\phantom{-}2.55 \\\\ -3.18 \\end{bmatrix} \\right\\}.\n\t\t\\end{align*}\n\t\\end{subequations}\n\t\\setcounter{equation}{\\value{equation}-1}\n\tTo compute $ Z $, the equality constraints from $ Z_c $ were removed via the same change of variables used in the proof of \\emph{\\textbf{Theorem~\\ref{ConZono_AHpolytope}}}.\n\tTypically, this would result in an outer-approximation of $ Z_c $; however, the scaling matrix $ \\mathbf{\\Phi} $ is used to reduce the length of each generator such that $ Z \\subseteq Z_c $. For the interval set $ B $, the generator matrix is initialized as the identity matrix and then scaled by $ \\mathbf{\\Phi} $. The resulting volume ratios with respect to $ Z_c $ are $ V_r = 0.86 $, $ V_r = 0.83 $, and $ V_r = 0.46 $ for $ Z_r $, $ Z $, and $ B $, respectively. 
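The change of variables used here to strip the equality constraints of $ Z_c $ (from the proof of \\textbf{Theorem \\ref{ConZono_AHpolytope}}) is straightforward to implement. The sketch below is an illustration only, assuming NumPy and SciPy are available; it is not code from the original development:

```python
import numpy as np
from scipy.linalg import null_space

# Z_c = {G, c, A, b} from this example
G = np.array([[-1.0, 3.0, 4.0, 0.0, 0.0], [4.0, -2.0, -5.0, 0.0, 0.0]])
c = np.zeros(2)
A = np.array([[-1.0, 3.0, 4.0, 6.5, 0.0], [4.0, -2.0, -5.0, 0.0, 8.0]])
b = np.array([-1.5, -3.0])

# AH-polytope data x_bar + X*P: s is a particular solution of A*xi = b,
# and the columns of T form a basis for the null space of A
s = np.linalg.pinv(A) @ b
T = null_space(A)
x_bar = c + G @ s   # center after eliminating the equality constraints
X = G @ T           # generator directions in the reduced parameterization

# H-Rep of the remaining box constraints ||T*xi_bar + s||_inf <= 1
H = np.vstack((T, -T))
k = np.concatenate((1 - s, 1 + s))
```

Any feasible point of $ \\{ \\bar{\\boldsymbol{\\xi}} \\mid \\mathbf{H}\\bar{\\boldsymbol{\\xi}} \\leq \\mathbf{k} \\} $ mapped through $ \\bar{\\mathbf{x}} + \\mathbf{X}\\bar{\\boldsymbol{\\xi}} $ then lies in $ Z_c $, which is the starting point for scaling the generators to obtain $ Z \\subseteq Z_c $.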
\n\tRepeating this process for 100 randomly generated constrained zonotopes with $ 4 \\leq n_g \\leq 20 $ and $ 1 \\leq n_c \\leq \\frac{1}{2}n_g $, Fig. \\ref{Fig_ConZono_Inapprox_Rand} shows the volume ratios for constrained zonotope, zonotope, and interval set inner-approximations. Both constrained zonotopes and zonotopes provide better approximations compared to interval sets while constrained zonotopes provide only a slightly higher mean volume ratio.\n\\end{exmp}\n\\begin{figure}\n\t\\begin{center}\n\t\t\\includegraphics[width=8.6cm]{ConZono_Inner_Approx.pdf}\n\t\t\\caption{Left: The inner-approximation of $ Z_c $ by a constrained zonotope $ Z_r $ with one less generator and constraint. Right: The inner-approximation of $ Z_c $ by a zonotope $ Z $ and an interval set $ B $.}\n\t\t\\label{Fig_ConZono_Inapprox} \n\t\\end{center} \n\\end{figure}\n\n\\begin{figure}\n\t\\begin{center}\n\t\t\\includegraphics[width=5cm]{ConZono_Inner_Approx_Rand.pdf}\n\t\t\\caption{The volume ratios for the inner-approximation of 100 randomly generated constrained zonotopes by a constrained zonotope $ Z_r $ with one less generator and constraint, a zonotope $ Z $, and an interval set $ B $. The red crosses denote outliers that do not fit the box plot distribution.}\n\t\t\\label{Fig_ConZono_Inapprox_Rand} \t\\end{center} \n\\end{figure}\n\n\\section{Convex Hulls} \\label{Sec_ConvexHull}\nThis section computes the CG-Rep of the convex hull of two constrained zonotopes $Z_1, Z_2 \\subset \\mathbb{R}^n$ with complexity $O(n+n_{c1}+n_{c2})$ where $n_{c1}$ and $n_{c2}$ are the number of constraints in $Z_1$ and $Z_2$, respectively. 
Since zonotopes are a subset of constrained zonotopes with $ n_c = 0 $, the following result also applies to zonotopes.\n\n\\begin{defn} \\label{CH_def} \\emph{\\cite{Tiwary2008}} \n\tThe convex hull of the union of two polytopes $ P_1, P_2 \\subset \\mathbb{R}^n $ is defined as\n\t\\begin{equation*}\n\tCH(P_1 \\cup P_2) \\triangleq \\left\\{\\mathbf{x}_1 \\lambda + \\mathbf{x}_2 (1-\\lambda) \\mid \\begin{matrix} \\mathbf{x}_1 \\in P_1,\\\\ \\mathbf{x}_2 \\in P_2, \\\\ 0 \\leq \\lambda \\leq 1 \\end{matrix}\\right\\}.\n\t\\end{equation*}\n\\end{defn}\n\\begin{thm} \\label{convexHull}\n\tThe convex hull of the union of two constrained zonotopes $ Z_1 = \\{\\mathbf{G}_1,\\mathbf{c}_1,\\mathbf{A}_1,\\mathbf{b}_1\\} \\subset \\mathbb{R}^n $ and $ Z_2 = \\{\\mathbf{G}_2,\\mathbf{c}_2,\\mathbf{A}_2,\\mathbf{b}_2\\} \\subset \\mathbb{R}^n $ is a constrained zonotope $ Z_h = \\{\\mathbf{G}_h,\\mathbf{c}_h,\\mathbf{A}_h,\\mathbf{b}_h\\} $ where\n\t\\begin{equation*}\n\t\\begin{matrix*}[l]\n\t\\mathbf{G}_h = \\begin{bmatrix}\n\t\\mathbf{G}_1 & \\mathbf{G}_2 & \\frac{\\mathbf{c}_1-\\mathbf{c}_2}{2} & \\mathbf{0}\n\t\\end{bmatrix}, & \n\t\n\t\\mathbf{c}_h = \\frac{\\mathbf{c}_1+\\mathbf{c}_2}{2}, \\\\\n\t\n\t\\mathbf{A}_h = \\begin{bmatrix}\n\t\\mathbf{A}_1 & \\mathbf{0} & -\\frac{\\mathbf{b}_1}{2} & \\mathbf{0} \\\\\n\t\\mathbf{0} & \\mathbf{A}_2 & \\phantom{-}\\frac{\\mathbf{b}_2}{2} & \\mathbf{0} \\\\\n\t\\mathbf{A}_{3,1} & \\mathbf{A}_{3,2} & \\mathbf{A}_{3,0} & \\mathbf{I}\n\t\\end{bmatrix}, &\n\t\n\t\\mathbf{b}_h = \\begin{bmatrix} \\phantom{-}\\frac{1}{2} \\mathbf{b}_1 \\\\ \\phantom{-}\\frac{1}{2} \\mathbf{b}_2 \\\\ -\\frac{1}{2} \\mathbf{1}\t\\end{bmatrix}, \\\\\n\t\n\t\\mathbf{A}_{3,1} = \\begin{bmatrix} \\phantom{-} \\mathbf{I} \\\\ -\\mathbf{I} \\\\ \\phantom{-} \\mathbf{0} \\\\ \\phantom{-} \\mathbf{0} \\end{bmatrix},\n\t\n\t\\mathbf{A}_{3,2} = \\begin{bmatrix} \\phantom{-} \\mathbf{0} \\\\ \\phantom{-}\\mathbf{0} \\\\ \\phantom{-}\\mathbf{I} \\\\ -\\mathbf{I} 
\\end{bmatrix}, &\n\t\n\t\\mathbf{A}_{3,0} = \\begin{bmatrix} -\\frac{1}{2}\\mathbf{1} \\\\ -\\frac{1}{2}\\mathbf{1} \\\\ \\phantom{-}\\frac{1}{2}\\mathbf{1} \\\\ \\phantom{-}\\frac{1}{2}\\mathbf{1} \\end{bmatrix}.\n\t\\end{matrix*}\n\t\\end{equation*}\n\\end{thm}\n\\begin{pf}\n Considering any element $\\mathbf{x} \\in Z_h$, it is to be proven that $\\mathbf{x} \\in CH(Z_1 \\cup Z_2)$. By the definition of $Z_h$, $\\exists \\; \\boldsymbol{\\xi}_1 \\in \\mathbb{R}^{n_{g1}}, \\boldsymbol{\\xi}_2 \\in \\mathbb{R}^{n_{g2}}, \\xi_0 \\in \\mathbb{R}$, and $\\boldsymbol{\\xi}_s \\in \\mathbb{R}^{2(n_{g1} + n_{g2})}$ such that\n \\begin{subequations}\\label{eq_Zh_exist_all}\n \\begin{align}\n & \\scriptsize{\\mathbf{x} = \\mathbf{G}_1\\boldsymbol{\\xi}_1+\\mathbf{G}_2\\boldsymbol{\\xi}_2+\\frac{\\mathbf{c}_1-\\mathbf{c}_2}{2}\\xi_0+\\mathbf{0}\\boldsymbol{\\xi}_s +\\frac{\\mathbf{c}_1+\\mathbf{c}_2}{2},} \\label{eq_Zh_exist_element} \\\\\n & ||\\boldsymbol{\\xi}_1||_{\\infty} \\leq 1, \\; ||\\boldsymbol{\\xi}_2||_{\\infty} \\leq 1, \\; |\\xi_0| \\leq 1, \\; ||\\boldsymbol{\\xi}_s||_{\\infty} \\leq 1, \\label{eq_Zh_inf_norm_const} \\\\\n & \\mathbf{A}_h [\n \\boldsymbol{\\xi}_1^T \\; \\boldsymbol{\\xi}_2^T \\; \\xi_0 \\; \\boldsymbol{\\xi}_s^T\n ]^T = \\mathbf{b}_h. 
\\label{eq_Zh_exist_cons}\n \\end{align}\n \\end{subequations}\n \n To prove $\\mathbf{x} \\in CH(Z_1 \\cup Z_2)$ requires the existence of elements $\\mathbf{z}_1$, $\\mathbf{z}_2 \\in \\mathbb{R}^n$, $\\lambda \\in \\mathbb{R}, \\boldsymbol{\\xi}_1^{'} \\in \\mathbb{R}^{n_{g1}}$, and $\\boldsymbol{\\xi}_2^{'} \\in \\mathbb{R}^{n_{g2}}$ such that\n\\begin{subequations}\\label{eq_CH_Z1_Z2_exist_all}\n \\begin{align}\n & \\mathbf{x} = \\mathbf{z}_1\\lambda + \\mathbf{z}_2(1 - \\lambda), \\quad 0 \\leq \\lambda \\leq 1, \\label{eq_CH_Z1_Z2_exp_base} \\\\ \n & \\mathbf{z}_1 = \\mathbf{c}_1 + \\mathbf{G}_1\\boldsymbol{\\xi}_1^{'}, \\quad ||\\boldsymbol{\\xi}_1^{'}||_{\\infty} \\leq 1, \\quad \\mathbf{A}_1\\boldsymbol{\\xi}_1^{'} = \\mathbf{b}_1, \\label{eq_Zh_exist_element_Z1}\\\\\n & \\mathbf{z}_2 = \\mathbf{c}_2 + \\mathbf{G}_2\\boldsymbol{\\xi}_2^{'}, \\quad ||\\boldsymbol{\\xi}_2^{'}||_{\\infty} \\leq 1, \\quad \\mathbf{A}_2\\boldsymbol{\\xi}_2^{'} = \\mathbf{b}_2. \\label{eq_Zh_exist_element_Z2}\n \\end{align}\n\\end{subequations}\nThis is shown by defining $\\lambda$, $\\boldsymbol{\\xi}_1^{'}$, and $\\boldsymbol{\\xi}_2^{'}$ as\n\\begin{equation}\\label{eq_cvxhull_lambda_xi_varsubs}\n \\lambda = \\frac{1}{2}(1 + \\xi_0), \\quad \\boldsymbol{\\xi}_1 = \\boldsymbol{\\xi}_1^{'}\\lambda, \\quad \\boldsymbol{\\xi}_2 = \\boldsymbol{\\xi}_2^{'} (1 - \\lambda).\n\\end{equation}\nBy rearranging \\eqref{eq_Zh_exist_element}, substituting using the variable definitions in \\eqref{eq_cvxhull_lambda_xi_varsubs}, and then rearranging to simplify using the definitions for $\\mathbf{z}_1$ and $\\mathbf{z}_2$ from \\eqref{eq_Zh_exist_element_Z1} and \\eqref{eq_Zh_exist_element_Z2}, the expression for $\\mathbf{x}$ from \\eqref{eq_CH_Z1_Z2_exp_base} can be established as\n\\begin{subequations}\n \\begin{align}\n \\mathbf{x} &= \\frac{\\mathbf{c}_1}{2}(1 + \\xi_0) + \\mathbf{G}_1\\boldsymbol{\\xi}_1 + \\frac{\\mathbf{c}_2}{2}(1 - \\xi_0) + \\mathbf{G}_2\\boldsymbol{\\xi}_2, 
\\label{eq_Zh_exist_element_exp1}\\\\ \n &= \\mathbf{c}_1\\lambda + \\mathbf{G}_1\\boldsymbol{\\xi}_1^{'}\\lambda + \\mathbf{c}_2(1 - \\lambda) + \\mathbf{G}_2\\boldsymbol{\\xi}_2^{'}(1 - \\lambda), \\label{eq_Zh_exist_element_exp2}\\\\ \n &= \\mathbf{z}_1\\lambda + \\mathbf{z}_2(1 - \\lambda).\n \\end{align}\n\\end{subequations}\nSince $|\\xi_0| \\leq 1$, the definition for $\\lambda$ in \\eqref{eq_cvxhull_lambda_xi_varsubs} results in $0 \\leq \\lambda \\leq 1$.\nFrom the definition of $\\mathbf{A}_h$ and $\\mathbf{b}_h$, the first two sets of equality constraints are\n\\begin{equation}\\label{eq_cvxhull_Zh_cons}\n \\mathbf{A}_1\\boldsymbol{\\xi}_1 - \\frac{\\mathbf{b}_1}{2}\\xi_0 = \\frac{1}{2}\\mathbf{b}_1, \\quad \\mathbf{A}_2\\boldsymbol{\\xi}_2 - \\frac{\\mathbf{b}_2}{2}\\xi_0 = \\frac{1}{2}\\mathbf{b}_2.\n\\end{equation}\nUsing \\eqref{eq_cvxhull_lambda_xi_varsubs}, \\eqref{eq_cvxhull_Zh_cons} simplifies to\n\\begin{equation*}\n \\mathbf{A}_1\\boldsymbol{\\xi}_1^{'} = \\mathbf{b}_1, \\; \\forall \\; \\lambda \\in (0, 1], \\quad \\mathbf{A}_2\\boldsymbol{\\xi}_2^{'} = \\mathbf{b}_2, \\; \\forall \\; \\lambda \\in [0, 1).\n\\end{equation*}\nNote that if $\\lambda = 0$, then $\\mathbf{z}_1$ does not affect $\\mathbf{x}$ or $\\boldsymbol{\\xi}_1$ and an arbitrary value of $\\boldsymbol{\\xi}_1^{'}$ can be chosen satisfying the infinity norm and equality constraints from \\eqref{eq_Zh_exist_element_Z1}. Similarly, if $\\lambda = 1$, an arbitrary value of $\\boldsymbol{\\xi}_2^{'}$ can be chosen satisfying constraints from \\eqref{eq_Zh_exist_element_Z2}. Otherwise, the norm constraints $||\\boldsymbol{\\xi}_1^{'}||_{\\infty} \\leq 1$ and $||\\boldsymbol{\\xi}_2^{'}||_{\\infty} \\leq 1$ are guaranteed since $||\\boldsymbol{\\xi}_1||_{\\infty} \\leq 1$, $||\\boldsymbol{\\xi}_2||_{\\infty} \\leq 1$, and $0 \\leq \\lambda \\leq 1$. 
Thus, $\\mathbf{x} \\in CH(Z_1 \\cup Z_2)$.\n\nNext, considering any $\\mathbf{x} \\in CH(Z_1 \\cup Z_2)$, it is to be proven that $\\mathbf{x} \\in Z_h$.\nBy \\textbf{Definition \\ref{CH_def}}, there exist elements $\\mathbf{z}_1$, $\\mathbf{z}_2 \\in \\mathbb{R}^n$, $\\lambda \\in \\mathbb{R}, \\boldsymbol{\\xi}_1^{'} \\in \\mathbb{R}^{n_{g1}}$, and $\\boldsymbol{\\xi}_2^{'} \\in \\mathbb{R}^{n_{g2}}$ such that \\eqref{eq_CH_Z1_Z2_exp_base}-\\eqref{eq_Zh_exist_element_Z2} hold.\nTo prove $\\mathbf{x} \\in Z_h$ requires the existence of variables $\\boldsymbol{\\xi}_1 \\in \\mathbb{R}^{n_{g1}}$, $\\boldsymbol{\\xi}_2 \\in \\mathbb{R}^{n_{g2}}$, $\\xi_0 \\in \\mathbb{R}$, $\\boldsymbol{\\xi}_s \\in \\mathbb{R}^{2(n_{g1} + n_{g2})}$ such that \\eqref{eq_Zh_exist_element}-\\eqref{eq_Zh_exist_cons} hold.\nConsider the following definitions for the variables $\\boldsymbol{\\xi}_1$, $\\boldsymbol{\\xi}_2$, $\\xi_0$, and $\\boldsymbol{\\xi}_s$ with\n\\begin{subequations}\\label{eq_xi1_xi2_xi_0_xi_s_assum_all}\n \\begin{align}\n & \\boldsymbol{\\xi}_1 = \\boldsymbol{\\xi}_1^{'}\\lambda, \\quad \\boldsymbol{\\xi}_2 = \\boldsymbol{\\xi}_2^{'}(1-\\lambda), \\quad \\xi_0 = 2\\lambda -1, \\label{eq_xi1_xi2_xi_0_assum}\\\\ \n & \\boldsymbol{\\xi}_s = -\\frac{1}{2}\\mathbf{1} - (\\mathbf{A}_{31}\\boldsymbol{\\xi}_1 + \\mathbf{A}_{32}\\boldsymbol{\\xi}_2 + \\mathbf{A}_{30}\\xi_0). \\label{eq_xi_s_assum}\n \\end{align}\n\\end{subequations} \nUsing \\eqref{eq_xi1_xi2_xi_0_assum} and \\eqref{eq_xi_s_assum}, it can be readily shown that the equality constraints in \\eqref{eq_CH_Z1_Z2_exp_base}-\\eqref{eq_Zh_exist_element_Z2} can be rewritten to achieve \\eqref{eq_Zh_exist_element} and \\eqref{eq_Zh_exist_cons}. Thus, all that remains is to show $\\big|\\big| \\; [\n\\boldsymbol{\\xi}_1^T \\; \\boldsymbol{\\xi}_2^T \\; \\xi_0 \\; \\boldsymbol{\\xi}_s^T ]^T\\; \\big|\\big|_{\\infty} \\leq 1$. 
Since $0 \\leq \\lambda \\leq 1$ holds, \n\\begin{equation*}\n||\\boldsymbol{\\xi}_1^{'}||_{\\infty} \\geq ||\\boldsymbol{\\xi}_1^{'}\\lambda||_{\\infty} = ||\\boldsymbol{\\xi}_1||_{\\infty}\n\\end{equation*}\nis satisfied. By \\eqref{eq_Zh_exist_element_Z1}, $||\\boldsymbol{\\xi}_1^{'}||_{\\infty} \\leq 1$ implies $||\\boldsymbol{\\xi}_1||_{\\infty} \\leq 1$. Similarly, it can be shown that $||\\boldsymbol{\\xi}_2||_{\\infty} \\leq 1$. Using the definition of $\\xi_0$ from \\eqref{eq_xi1_xi2_xi_0_assum} and $0 \\leq \\lambda \\leq 1$ proves that $|\\xi_0| \\leq 1$. Finally, using the definition of $\\boldsymbol{\\xi}_s$ from \\eqref{eq_xi_s_assum} and interval arithmetic, it can be shown that \n\\begin{equation*}\n||\\boldsymbol{\\xi}_1||_{\\infty} \\leq 1, \\; ||\\boldsymbol{\\xi}_2||_{\\infty} \\leq 1, \\; |\\xi_0|\\leq 1 \\implies ||\\boldsymbol{\\xi}_s||_{\\infty} \\leq 1.\n\\end{equation*} Thus, $\\forall \\; \\mathbf{x} \\in CH(Z_1 \\cup Z_2)$, $\\mathbf{x} \\in Z_h$. \\hfill \\hfill \\qed\n\\end{pf}\n\nThe resulting constrained zonotope $Z_h$ obtained using \\textbf{Theorem \\ref{convexHull}} has $n_{gh} = 3(n_{g1}+n_{g2})+1$ generators and $n_{ch} = n_{c1}+n_{c2}+2(n_{g1}+n_{g2})$ constraints.\n\n\n\\begin{exmp}\n\tFor the zonotopes\n\t\\begin{subequations} \n\t\t\\begin{align*}\n\t\tZ_1 &= \\left\\{ \\begin{bmatrix} 0 & 1 & 0 \\\\ 1 & 1 & 2 \\end{bmatrix}, \\begin{bmatrix} 0 \\\\ 0\t\\end{bmatrix} \\right\\}, \\\\ \n\t\tZ_2 &= \\left\\{ \\begin{bmatrix} -0.5 & 1 & -2\\\\ \\phantom{-}0.5 & 0.5 & 1.5\\end{bmatrix}, \\begin{bmatrix} -5 \\\\ \\phantom{-}0\t\\end{bmatrix} \\right\\},\n\t\t\\end{align*}\n\t\\end{subequations}\n\t\\setcounter{equation}{\\value{equation}-1}\n\tFig. \\ref{Fig_CVX_Hull} shows the convex hull $ Z_h = CH(Z_1 \\cup Z_2) $ with $n_g = 19 $ generators and $ n_c = 12 $ constraints, as computed using \\textbf{\\emph{Theorem \\ref{convexHull}}}. Fig. 
\\ref{Fig_CVX_Hull} also shows the convex hull $ Z_{ch} = CH(Z_{c1} \\cup Z_{c2}) $ with $n_g = 25 $ generators and $ n_c =~18 $ constraints, where $ Z_{c1} = Z_1 \\cap H_{1-} $, $ Z_{c2} = Z_2 \\cap H_{2-} $, $ H_{1-} = \\{\\mathbf{z} \\mid [1 \\; 1] \\mathbf{z} \\leq 0 \\} $, and $ H_{2-} = \\{\\mathbf{z}\\mid [-2.5 \\; 1] \\mathbf{z} \\leq 9.5 \\} $.\n\\end{exmp}\n\n\\begin{figure}\n\t\\begin{center}\n\t\t\\includegraphics[width=8.6cm]{Convex_Hull.pdf}\n\t\t\\caption{Left: The convex hull $ Z_h $ of zonotopes $ Z_1 $ and $ Z_2 $. \n\t\t\tRight: The convex hull $ Z_{ch} $ of constrained zonotopes $ Z_{c1} $ and $ Z_{c2} $, where each constrained zonotope is a zonotope-halfspace intersection corresponding to the shown hyperplanes.} \n\t\t\\label{Fig_CVX_Hull} \n\t\\end{center} \n\\end{figure}\n\n\n\\section{Robust Positively Invariant (RPI) Sets} \\label{Sec_RPI}\nThis section provides both iterative and one-step optimization based methods for computing approximations of the minimal robust positively invariant set using zonotopes. Consider the autonomous discrete-time linear time-invariant system\n\\begin{equation} \\label{autoSys}\n\\mathbf{x}_{k+1} = \\mathbf{A} \\mathbf{x}_k + \\mathbf{w}_k,\n\\end{equation}\nwhere $ \\mathbf{x}_k \\in \\mathbb{R}^n $, $ \\mathbf{A} \\in \\mathbb{R}^{n \\times n} $ is a strictly stable matrix, and $ \\mathbf{w}_k \\in W \\subset \\mathbb{R}^n $, where $ W $ is a convex and compact set containing the origin. 
\n\n\\begin{defn} \\emph{\\cite{Blanchini1999}}\n\tThe set $ \\Omega \\subset \\mathbb{R}^n $ is a robust positively invariant (RPI) set of \\eqref{autoSys} if and only if $ \\mathbf{A} \\Omega \\oplus W \\subseteq \\Omega $.\n\\end{defn}\n\n\\begin{defn} \\emph{\\cite{Rakovic2005}}\n\tThe minimal RPI (mRPI) set $ F_\\infty $ of \\eqref{autoSys} is the RPI set that is contained in every closed RPI set of \\eqref{autoSys} and is given by\n\t\\begin{equation} \\label{RPI_infsum}\n\tF_\\infty = \\bigoplus_{i=0}^{\\infty} \\mathbf{A}^i W.\n\t\\end{equation}\n\\end{defn}\n\n\\subsection{Iterative Method}\nUnless specific conditions are met, such as $ \\mathbf{A} $ being nilpotent, the infinite sequence of Minkowski sums in \\eqref{RPI_infsum} makes it impossible to compute $ F_\\infty $ exactly. Thus, outer-approximations of the mRPI set are typically used. An iterative approach is developed in \\cite{Rakovic2005} that computes the RPI set $ F(\\alpha,s) $ such that $ F_\\infty \\subseteq F(\\alpha,s) \\subseteq F_\\infty \\oplus \\epsilon B_\\infty $, where $ \\epsilon $ is a user-defined bound on the approximation error, with $ s \\in \\mathbb{N}_+ $ and $ \\alpha \\in [0,1) $ such that $ \\mathbf{A}^s W \\subseteq \\alpha W $. Starting at $ s = 0 $, the approach increments $ s $ until the approximation error is less than $ \\epsilon $, at which point $ F_s $ is computed as \n\\begin{equation} \\label{RPI_finitesum}\nF_s = \\bigoplus_{i=0}^{s} \\mathbf{A}^i W,\n\\end{equation}\nand $ F(\\alpha,s) = (1-\\alpha)^{-1} F_s $. The iterative algorithm in \\cite{Rakovic2005} requires the evaluation of multiple support functions at each iteration. When $ W $ is expressed in H-Rep, an LP must be solved for each support function calculation. As discussed in \\cite{Trodden2016_OneStep}, computing $ F(\\alpha,s) $ using this method may require the solution of thousands of LPs, even for a system with only two states. 
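When $ W $ is a zonotope, each term $ \mathbf{A}^i W $ in \eqref{RPI_finitesum} is a linear map of a zonotope and each Minkowski sum is a generator concatenation, so $ F_s $ is obtained without solving any LPs. A minimal numpy sketch; the strictly stable $ \mathbf{A} $ and the box disturbance set below are illustrative assumptions, not the example system used later:

```python
import numpy as np

def mink_sum(Z1, Z2):
    # Minkowski sum in G-Rep: concatenate generators, add centers
    (G1, c1), (G2, c2) = Z1, Z2
    return np.hstack([G1, G2]), c1 + c2

def F_s(A, W, s):
    # F_s = W + A W + ... + A^s W from (RPI_finitesum), entirely in G-Rep
    Gw, cw = W
    Z, Ai = (Gw, cw), np.eye(A.shape[0])
    for _ in range(s):
        Ai = A @ Ai
        Z = mink_sum(Z, (Ai @ Gw, Ai @ cw))
    return Z

# Assumed illustrative data: a strictly stable A and the box disturbance
# set W = {w : ||w||_inf <= 0.1}, i.e., the zonotope {0.1*I, 0}
A = np.array([[0.9, 0.1], [0.0, 0.8]])
W = (0.1 * np.eye(2), np.zeros(2))
G, c = F_s(A, W, s=10)
print(G.shape)  # (2, 22): (s + 1) * n_w generators, no LPs solved
```

The support function of the result is then the algebraic expression $ h(\mathbf{l}) = \mathbf{l}^T\mathbf{c} + \|\mathbf{G}^T\mathbf{l}\|_1 $, which is what eliminates the LPs from the iterative algorithm.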
As briefly mentioned in Remark 3 in \\cite{Rakovic2005}, if $ W $ is expressed in G-Rep, then the support function can be evaluated algebraically without the use of an LP, significantly reducing the computational cost. Thus, the use of zonotopes for RPI set calculations improves scalability and reduces computational cost, both through the efficient Minkowski sums in \\eqref{RPI_finitesum} and by removing the need to solve LPs.\n\n\\subsection{One-step Optimization Method}\n\nAs an alternative to the iterative method in \\cite{Rakovic2005}, a one-step method for computing an outer-approximation of the mRPI set is presented in \\cite{Trodden2016_OneStep}. By expressing the RPI set in H-Rep, this method requires solving a single LP, assuming both the number and normal vectors of the hyperplanes associated with each halfspace inequality are provided \\emph{a priori}. Inspired by this approach, the following presents a similar one-step method\nfor computing an outer-approximation of the mRPI set using G-Rep, where the generator vectors are predetermined.\n\\begin{thm} \\label{RPI_OneStep}\n\tThe zonotope $ Z = \\{\\mathbf{G}\\mathbf{\\Phi},\\mathbf{c}\\} \\subset \\mathbb{R}^n $, with $ \\mathbf{\\Phi} = diag(\\boldsymbol{\\phi}), \\phi_i > 0, \\forall i \\in \\{1, \\cdots, n_g \\} $, is an RPI set of \\eqref{autoSys} if $ W = \\{\\mathbf{G}_w,\\mathbf{c}_w\\} $ and there exist $ \\mathbf{\\Gamma}_1 \\in \\mathbb{R}^{n_g \\times n_g} $, $ \\mathbf{\\Gamma}_2 \\in \\mathbb{R}^{n_g \\times n_w} $, and $ \\boldsymbol{\\beta} \\in \\mathbb{R}^{n_g} $ such that\n\t\\begin{subequations} \\label{RPI_conditions}\n\t\t\\begin{align} \n\t\t\\mathbf{A}\\mathbf{G}\\mathbf{\\Phi} &= \\mathbf{G}\\mathbf{\\Gamma}_1, \\label{RPI_condition1}\\\\\n\t\t\\mathbf{G}_w &= \\mathbf{G}\\mathbf{\\Gamma}_2, \\label{RPI_condition2}\\\\\n\t\t(\\mathbf{I} - \\mathbf{A})\\mathbf{c} - \\mathbf{c}_w &= \\mathbf{G}\\boldsymbol{\\beta}, \\label{RPI_condition3} \\\\\n\t\t|\\mathbf{\\Gamma}_1|\\mathbf{1} + 
|\\mathbf{\\Gamma}_2|\\mathbf{1} + |\\boldsymbol{\\beta}| &\\leq \\mathbf{\\Phi}\\mathbf{1}. \\label{RPI_condition4}\n\t\t\\end{align}\n\t\\end{subequations}\n\\end{thm}\n\\begin{pf}\n\tThe proof requires showing that \\eqref{RPI_conditions} enforces the zonotope containment conditions from \\textbf{Lemma \\ref{zonoContainment}} such that $ X \\subseteq Y $, where $ X = \\mathbf{A} Z \\oplus W $ and $ Y = Z $. Consider the change of variables $ \\mathbf{\\Gamma}_1 = \\mathbf{\\Phi}\\tilde{\\mathbf{\\Gamma}}_1 $, $ \\mathbf{\\Gamma}_2 = \\mathbf{\\Phi}\\tilde{\\mathbf{\\Gamma}}_2 $, $ \\boldsymbol{\\beta} = \\mathbf{\\Phi}\\tilde{\\boldsymbol{\\beta}} $ and define $ \\tilde{\\mathbf{\\Gamma}} = [\\tilde{\\mathbf{\\Gamma}}_1 \\; \\tilde{\\mathbf{\\Gamma}}_2] $. Then the zonotope containment conditions from \\eqref{zonoContainment_Conditions} are satisfied by 1) rearranging and combining \\eqref{RPI_condition1} and \\eqref{RPI_condition2} to get $ [\\mathbf{A}\\mathbf{G}\\mathbf{\\Phi} \\; \\mathbf{G}_w] = \\mathbf{G}\\mathbf{\\Phi} \\tilde{\\mathbf{\\Gamma}} $, 2) rearranging \\eqref{RPI_condition3} to get $ \\mathbf{c} - (\\mathbf{A}\\mathbf{c} + \\mathbf{c}_w) = \\mathbf{G}\\mathbf{\\Phi}\\tilde{\\boldsymbol{\\beta}} $, and 3) multiplying \\eqref{RPI_condition4} by $ \\mathbf{\\Phi}^{-1} $, since $ \\phi_i > 0 $, to get $ |\\tilde{\\mathbf{\\Gamma}}|\\mathbf{1} + |\\tilde{\\boldsymbol{\\beta}}| \\leq \\mathbf{1} $. \\hfill \\hfill \\qed\n\\end{pf}\n\nWhen using \\textbf{Theorem \\ref{RPI_OneStep}} to determine the RPI set $ Z $ in G-Rep, the generator matrix $ \\mathbf{G} $ is assumed to be known \\emph{a~priori} in the same way that the normal vectors are chosen \\emph{a~priori} in \\cite{Trodden2016_OneStep} for the one-step RPI set computation in H-Rep. Given a desired order of $ Z $, $ \\mathbf{G} $ can be computed using \\eqref{RPI_finitesum} where $ \\mathbf{G} = [\\mathbf{G}_w \\; \\mathbf{A}\\mathbf{G}_w \\; ... 
\\; \\mathbf{A}^s\\mathbf{G}_w] $, for some $ s \\in \\mathbb{N}_+ $ that provides the desired order. \nOnce $ \\mathbf{G} $ is determined, the diagonal matrix $ \\mathbf{\\Phi} $ provides the ability to scale the size of $ Z $ such that $ Z $ is an RPI set. Since the minimal RPI set is typically desired, an optimization problem can be formulated with the constraints from \\eqref{RPI_conditions} and an objective function that minimizes the scaling variables in $ \\mathbf{\\Phi} $. With $ \\mathbf{c} $, $ \\mathbf{\\Phi} $, $ \\mathbf{\\Gamma}_1 $, $ \\mathbf{\\Gamma}_2 $, and $ \\boldsymbol{\\beta} $ as decision variables in this optimization problem, \\eqref{RPI_conditions} consists of only linear constraints and thus an LP or QP can be formulated based on the norm used to minimize the vector $ \\boldsymbol{\\phi} $, where $ \\mathbf{\\Phi} = diag(\\boldsymbol{\\phi}) $. In the following example, an LP is formulated by minimizing $ \\| \\boldsymbol{\\phi} \\|_\\infty $ subject to \\eqref{RPI_conditions}. Computing the RPI set $Z$ using \\textbf{Theorem \\ref{RPI_OneStep}} requires solving an LP with $n_g^2 + n_g(n_w + 2) + n$ decision variables.\n\\begin{exmp}\n\n\n\tConsider the system from \\emph{\\cite{Trodden2016_OneStep}}\n\t\\begin{equation}\\label{RPISet_Trodden_Sys}\n\t\\mathbf{x}_{k+1} = \\begin{bmatrix}\t1 & 1 \\\\ 0 & 1\t\\end{bmatrix}\\mathbf{x}_k + \\begin{bmatrix} 0.5 \\\\ 1\n\t\\end{bmatrix} u_k + \\mathbf{w}_k, \n\t\\end{equation} \n\twith $ \\mathbf{w}_k \\in W = \\{ \\mathbf{w} \\in \\mathbb{R}^2 \\mid \\| \\mathbf{w} \\|_{\\infty} \\leq 0.1\\} $. As in \\emph{\\cite{Trodden2016_OneStep}}, the state feedback control law $ u_k = \\mathbf{K} \\mathbf{x}_k $, where $\\mathbf{K} $ corresponds to the LQR solution with $ \\mathbf{Q} = \\mathbf{I} $ and $ \\mathbf{R} = 1 $, converts \\eqref{RPISet_Trodden_Sys} to an autonomous system of the form \\eqref{autoSys}. 
For this system, four methods for computing outer-approximations of the mRPI set are compared in Fig. \\ref{Fig_RPI_Set} with respect to volume ratio $ V_r $ and computation time $ \\Delta t_{calc} $ as a function of set complexity ($ n_g $~for zonotopes in G-Rep, $ \\frac{1}{2}n_h $ for polytopes in H-Rep). The seminal work from \\emph{\\cite{Rakovic2005}}, denoted as $\\epsilon$-mRPI (H-Rep), is the most computationally expensive since evaluating support functions for polytopes in H-Rep requires the solution of an LP. Using zonotopes in G-Rep, the computational cost of this $\\epsilon$-mRPI approach can be reduced by an order of magnitude since evaluating support functions for zonotopes is algebraic, as mentioned in Remark 3 of \\emph{\\cite{Rakovic2005}}. Alternatively, the 1-step approaches from \\emph{\\cite{Trodden2016_OneStep}} and \\emph{\\textbf{Theorem \\ref{RPI_OneStep}}} provide similar computational advantages. However, the 1-step approach from \\emph{\\cite{Trodden2016_OneStep}} is sensitive to the choice of hyperplanes. Using the same choice of hyperplanes from \\emph{\\cite{Trodden2016_OneStep}}, Fig. \\ref{Fig_RPI_Set} shows that the volume ratio does not decrease with increasing set complexity as quickly as it does for the zonotope-based approach. Note that the volume ratio is defined with respect to an approximation of the true mRPI set volume computed using the $\\epsilon$-mRPI method with $\\epsilon = 10^{-9}$. \n\t\n\tTo assess the scalability of these methods with respect to system order, Fig. \\ref{Fig_RPI_Set_Scalability} shows a comparison of these methods based on set complexity and computation time as a function of system order $ n $. Note that the $\\epsilon$-mRPI (H-Rep) method became impractical for higher system orders and is not included in Fig. \\ref{Fig_RPI_Set_Scalability}. Similarly, the 1-step (H-Rep) method became impractical for $ n > 6 $. 
These results are generated using an $n^{th}$-order integrator system similar to that of \\eqref{RPISet_Trodden_Sys}. While the $\\epsilon$-mRPI method in G-Rep provides the lowest computational cost, the complexity of the resulting set is roughly ten times larger than the set used for the 1-step approach. While scaling better than the 1-step H-Rep approach, the 1-step G-Rep approach requires solving a linear program with the constraints from \\eqref{RPI_conditions}, which include the large decision variable $ \\mathbf{\\Gamma}_1 \\in \\mathbb{R}^{n_g \\times n_g} $. To manage this computational cost for higher order systems, the number of steps $ s \\in \\mathbb{N}_+ $ in \\eqref{RPI_finitesum} can be chosen to balance set complexity and accuracy.\n\\end{exmp}\n\n\\begin{figure}\n\t\\begin{center}\n\t\t\\includegraphics[width=8.6cm]{RPI_Sets_K1_ave10.pdf}\n\t\t\\caption{Comparison of volume ratio and computation time as a function of set complexity for outer-approximations of the mRPI set using iterative and 1-step approaches based on H-Rep or G-Rep.}\n\t\t\\label{Fig_RPI_Set} \n\t\\end{center} \n\\end{figure}\n\\begin{figure}\n\t\\begin{center}\n\t\t\\includegraphics[width=8.6cm]{RPI_Sets_Scalability_K1_ave10.pdf}\n\t\t\\caption{Comparison of set complexity and computation time as a function of system order for outer-approximations of the mRPI set using iterative and 1-step approaches based on H-Rep or G-Rep.}\n\t\\label{Fig_RPI_Set_Scalability} \n\t\\end{center}\n\\end{figure}\n\n\\section{Pontryagin Difference} \\label{Sec_Pontryagin}\n\nThis section provides an iterative method for computing the constrained zonotope representation of the Pontryagin difference of two zonotopes and a one-step optimization method for computing a zonotopic inner-approximation of the Pontryagin difference.\n\n\\begin{defn} \\emph{\\cite{Althoff2015a}}\n\tGiven two sets $ Z_1, Z_2 \\subset \\mathbb{R}^n $, the Pontryagin difference $ Z_d = Z_1 \\ominus Z_2 $ is defined 
as\n\t\\begin{equation} \\label{Pontryagin_Diff_Def}\n\tZ_d = \\{\\mathbf{z} \\in \\mathbb{R}^n \\mid \\mathbf{z} \\oplus Z_2 \\subseteq Z_1\\}.\n\t\\end{equation}\n\\end{defn}\nThe Pontryagin difference is also referred to as the Minkowski difference or the erosion of set $ Z_1 $ by $ Z_2 $.\n\n\\subsection{Iterative Method}\nIf $ Z_1 $ and $ Z_2 $ are zonotopes, then \\cite{Althoff2015a} provides the following iterative method for computing $Z_d$.\n\n\\begin{lem} (Theorem 1 of \\emph{\\cite{Althoff2015a}}) \\label{Pontryagin_Diff}\n\tIf $ Z_1 = \\{\\mathbf{G}_1,\\mathbf{c}_1\\} $ and $ Z_2 = \\{\\mathbf{G}_2,\\mathbf{c}_2\\} $, then the Pontryagin difference $ Z_d = Z_1 \\ominus Z_2 $ is computed using the $ n_{g2} $ generators $ \\mathbf{g}_{2,i} $ of $ Z_2 $ by applying the following recursion:\n\t\\begin{subequations} \\label{Pontryagin_Diff_calc}\n\t\t\\begin{align}\n\t\tZ_{int}^{(0)} &= Z_1 - \\mathbf{c}_2, \\\\\n\t\tZ_{int}^{(i)} &= (Z_{int}^{(i-1)} + \\mathbf{g}_{2,i}) \\cap (Z_{int}^{(i-1)} - \\mathbf{g}_{2,i}), \\label{Pontryagin_intersection}\\\\\n\t\tZ_d &= Z_{int}^{(n_{g2})}.\n\t\t\\end{align}\n\t\\end{subequations}\n\\end{lem}\n\nAs shown in \\cite{Althoff2015a}, zonotopes are not closed under the Pontryagin difference. Thus, the methods in \\cite{Althoff2015a} require the use of a combination of G-Rep and H-Rep to compute approximations of $ Z_d $ in G-Rep. While this combination results in faster calculations than methods that solely use H-Rep, the majority of computation time comes from the conversion from G-Rep to H-Rep, which scales exponentially with the number of generators. \n\nHowever, since $ Z_d $ is computed via the intersection of zonotopes, $ Z_d $ can be exactly represented as a constrained zonotope. Thus, \\eqref{Pontryagin_intersection} can be directly computed using the generalized intersection from \\eqref{generalized_Intersection} without the need for H-Rep. 
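Because each step of \eqref{Pontryagin_Diff_calc} is just two shifts and one intersection, the recursion takes only a few lines of numpy once the generalized intersection of \cite{Scott2016} is available. A minimal sketch (constrained zonotopes stored as tuples $(\mathbf{G}, \mathbf{c}, \mathbf{A}, \mathbf{b})$ and $\mathbf{R} = \mathbf{I}$); the data are the two zonotopes from the example of \cite{Althoff2015a} considered later in this section:

```python
import numpy as np

def intersect(Z1, Z2):
    # Generalized intersection (R = I) of constrained zonotopes (Scott et al.):
    # Z1 cap Z2 = {[G1 0], c1, [[A1 0],[0 A2],[G1 -G2]], [b1; b2; c2 - c1]}
    G1, c1, A1, b1 = Z1
    G2, c2, A2, b2 = Z2
    n, ng1 = G1.shape
    ng2 = G2.shape[1]
    G = np.hstack([G1, np.zeros((n, ng2))])
    A = np.vstack([np.hstack([A1, np.zeros((A1.shape[0], ng2))]),
                   np.hstack([np.zeros((A2.shape[0], ng1)), A2]),
                   np.hstack([G1, -G2])])
    return G, c1, A, np.concatenate([b1, b2, c2 - c1])

def shift(Z, v):
    # Minkowski sum with a single point v
    G, c, A, b = Z
    return G, c + v, A, b

def pontryagin_diff(Z1, G2, c2):
    # Recursion of the lemma: erode Z1 by {G2, c2} one generator at a time
    Z = shift(Z1, -c2)
    for g in G2.T:
        Z = intersect(shift(Z, g), shift(Z, -g))
    return Z

G1 = np.array([[1., 1, 0, 0], [1, 0, 1, 0], [1, 0, 0, 1]])
G2 = np.array([[-1., 1, 0, 0], [1, 0, 1, 0], [1, 0, 0, 1]]) / 3
Z1 = (G1, np.zeros(3), np.zeros((0, 4)), np.zeros(0))
Zd = pontryagin_diff(Z1, G2, np.zeros(3))
print(Zd[0].shape[1], Zd[2].shape[0])  # 64 45
```

The printed counts (64 generators, 45 constraints) follow from doubling the generators and adding $n$ constraints per step, i.e., $ n_{gd} = 2^{n_{g2}}n_{g1} $ and $ n_{cd} = n(2^{n_{g2}} - 1) $ when $ n_{c1} = 0 $.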
Note that the iterative method from \\textbf{Lemma \\ref{Pontryagin_Diff}} is also applicable if $ Z_1 = \\{\\mathbf{G}_1,\\mathbf{c}_1,\\mathbf{A}_1,\\mathbf{b}_1\\} $ is a constrained zonotope, since \\eqref{Pontryagin_Diff_calc} only requires $ Z_2 $ to be the Minkowski sum of generators $ \\mathbf{g}_{2,i} $. For a constrained zonotope $Z_1$ in $\\mathbb{R}^n$ with $n_{c1}$ constraints and $n_{g1}$ generators and a zonotope $Z_2$ in $\\mathbb{R}^n$ with $n_{g2}$ generators, $Z_d = Z_1 \\ominus Z_2$ is a constrained zonotope with $n_{gd} = 2^{n_{g2}}n_{g1}$ generators and $n_{cd} = 2^{n_{g2}}n_{c1} + n(2^{n_{g2}} - 1)$ constraints.\n\n\n\\subsection{One-step Optimization Inner-Approximation Method}\n\nAs an alternative to the iterative method from \\textbf{Lemma~\\ref{Pontryagin_Diff}}, the following theorem presents a one-step method for computing a zonotopic inner-approximation of the Pontryagin difference $ \\tilde{Z}_d \\subseteq Z_d = Z_1 \\ominus Z_2$ using a single LP.\n\n\\begin{thm} \\label{Pontryagin_OneStep}\n\tGiven $ Z_1 = \\{\\mathbf{G}_1,\\mathbf{c}_1\\} $ and $ Z_2 = \\{\\mathbf{G}_2,\\mathbf{c}_2\\} $, then $ \\tilde{Z}_d = \\{[\\mathbf{G}_1 \\; \\mathbf{G}_2]\\mathbf{\\Phi},\\mathbf{c}_d\\} $, with $ \\mathbf{\\Phi} = diag(\\boldsymbol{\\phi}), \\phi_i > 0, \\forall i \\in \\{1, \\cdots, n_{g1} + n_{g2}\\}$, is an inner-approximation of the Pontryagin difference such that $ \\tilde{Z}_d \\subseteq Z_1 \\ominus Z_2 $ if there exist $ \\mathbf{\\Gamma} \\in \\mathbb{R}^{n_{g1} \\times (n_{g1} + 2 n_{g2})} $ and $ \\boldsymbol{\\beta} \\in \\mathbb{R}^{n_{g1}} $ such that\n\t\\begin{subequations} \\label{Pontryagin_conditions}\n\t\t\\begin{align} \n\t\t\\begin{bmatrix}\n\t\t[\\mathbf{G}_1 \\; \\mathbf{G}_2]\\mathbf{\\Phi} & \\mathbf{G}_2\n\t\t\\end{bmatrix} &= \\mathbf{G}_1 \\mathbf{\\Gamma}, \\label{Pontryagin_condition1}\\\\\n\t\t\\mathbf{c}_1 - (\\mathbf{c}_d + \\mathbf{c}_2) &= \\mathbf{G}_1 \\boldsymbol{\\beta}, 
\\label{Pontryagin_condition2}\\\\\n\t\t|\\mathbf{\\Gamma}|\\mathbf{1} + |\\boldsymbol{\\beta}| &\\leq \\mathbf{1}. \\label{Pontryagin_condition3}\n\t\t\\end{align}\n\t\\end{subequations}\n\\end{thm}\n\\begin{pf}\n\tBy viewing \\eqref{Pontryagin_conditions} in the context of the zonotope containment conditions from \\textbf{Lemma \\ref{zonoContainment}}, it is clear that \\eqref{Pontryagin_conditions} enforces the Pontryagin difference condition $ \\tilde{Z}_d \\oplus Z_2 \\subseteq Z_1 $ from \\eqref{Pontryagin_Diff_Def}. \\hfill \\hfill \\qed\n\\end{pf}\n\nWhen using \\textbf{Theorem \\ref{Pontryagin_OneStep}} to compute $ \\tilde{Z}_d \\subset Z_d $ in G-Rep, the generator matrix $ [\\mathbf{G}_1 \\; \\mathbf{G}_2]\\mathbf{\\Phi} $ is composed of the generators from both $ Z_1 $ and $ Z_2 $ scaled by the diagonal matrix $ \\mathbf{\\Phi} $. Since maximizing the size of $ \\tilde{Z}_d $ is typically desired, an optimization problem can be formulated with the constraints from \\eqref{Pontryagin_conditions} and an objective function that maximizes the scaling variables in $ \\mathbf{\\Phi} $. With $ \\mathbf{c}_d $, $ \\mathbf{\\Phi} $, $ \\mathbf{\\Gamma} $, and $ \\boldsymbol{\\beta} $ as decision variables in this optimization problem, \\eqref{Pontryagin_conditions} consists of only linear constraints and thus an LP or QP can be formulated based on the norm used to maximize the vector $ \\boldsymbol{\\phi} $, where $ \\mathbf{\\Phi} = diag(\\boldsymbol{\\phi})$. 
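A sketch of this one-step LP using scipy's `linprog`; absolute values are handled by the standard split into nonnegative parts, and the objective maximizes $ \mathbf{1}^T\boldsymbol{\phi} $ as a convenient linear surrogate (an assumption; any linear objective in $ \boldsymbol{\phi} $ fits the same template):

```python
import numpy as np
from scipy.optimize import linprog

def pontryagin_inner(G1, c1, G2, c2):
    """Find c_d and Phi = diag(phi) >= 0 with {[G1 G2]Phi, c_d} + Z2 inside
    Z1, via the linear conditions of the theorem, maximizing sum(phi)."""
    n, ng1 = G1.shape
    ng2 = G2.shape[1]
    m = ng1 + ng2                  # entries of phi (scaled generators)
    K = m + ng2                    # Gamma has ng1 x (ng1 + 2*ng2) entries
    Gfull = np.hstack([G1, G2])
    # decision vector: [c_d | phi | vec(Gamma+) | vec(Gamma-) | beta+ | beta-]
    off = np.cumsum([0, n, m, ng1 * K, ng1 * K, ng1, ng1])
    Aeq = np.zeros((n * K + n, off[-1]))
    beq = np.zeros(n * K + n)
    for j in range(K):             # G1*Gamma[:,j] = Gfull[:,j]*phi_j or g_{2,j}
        r = slice(j * n, (j + 1) * n)
        Aeq[r, off[2] + j * ng1: off[2] + (j + 1) * ng1] = G1
        Aeq[r, off[3] + j * ng1: off[3] + (j + 1) * ng1] = -G1
        if j < m:
            Aeq[r, off[1] + j] = -Gfull[:, j]
        else:
            beq[r] = G2[:, j - m]
    r = slice(n * K, n * K + n)    # c_d + G1*beta = c1 - c2
    Aeq[r, :n] = np.eye(n)
    Aeq[r, off[4]:off[5]] = G1
    Aeq[r, off[5]:off[6]] = -G1
    beq[r] = c1 - c2
    Aub = np.zeros((ng1, off[-1]))  # |Gamma| 1 + |beta| <= 1, row-wise
    for i in range(ng1):
        Aub[i, off[2] + i: off[4]: ng1] = 1.0
        Aub[i, off[4] + i] = 1.0
        Aub[i, off[5] + i] = 1.0
    cost = np.zeros(off[-1])
    cost[off[1]:off[2]] = -1.0      # maximize sum(phi)
    bounds = [(None, None)] * n + [(0, None)] * (off[-1] - n)
    res = linprog(cost, Aub, np.ones(ng1), Aeq, beq, bounds=bounds)
    return res.x[:n], res.x[off[1]:off[2]], res

# Zonotopes from the example of Althoff (2015) used in this section
G1 = np.array([[1., 1, 0, 0], [1, 0, 1, 0], [1, 0, 0, 1]])
G2 = np.array([[-1., 1, 0, 0], [1, 0, 1, 0], [1, 0, 0, 1]]) / 3
cd, phi, res = pontryagin_inner(G1, np.zeros(3), G2, np.zeros(3))
print(res.status, phi.shape)
```

The decision-variable count of the split formulation is larger than the $n_{g1}^2 + 2n_{g1}n_{g2} + 2n_{g1} + n_{g2} + n$ of the original formulation, since each entry of $\mathbf{\Gamma}$ and $\boldsymbol{\beta}$ is doubled to encode its absolute value.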
Computing $\\tilde{Z}_d$ using \\textbf{Theorem \\ref{Pontryagin_OneStep}} requires solving an LP with $n_{g1}^2 +2n_{g1}n_{g2} + 2n_{g1} + n_{g2} + n$ decision variables.\n\n\\begin{exmp}\n\tConsider the zonotopes from \\emph{\\cite{Althoff2015a}}\n\t\\begin{equation*}\n\t\\setlength\\arraycolsep{2pt}\n\tZ_1 = \\left\\{ \\begin{bmatrix} 1 & 1 & 0 & 0 \\\\ 1 & 0 & 1 & 0 \\\\ 1 & 0 & 0 & 1 \\end{bmatrix}, \\boldsymbol{0} \\right\\}, \\quad Z_2 = \\left\\{ \\frac{1}{3} \\begin{bmatrix} -1 & 1 & 0 & 0 \\\\ \\phantom{-}1 & 0 & 1 & 0 \\\\ \\phantom{-}1 & 0 & 0 & 1 \\end{bmatrix}, \\boldsymbol{0} \\right\\}.\n\t\\end{equation*}\n\tFig. \\ref{Fig_Pontryagin_Diff_3D} shows the Pontryagin difference $ Z_d = Z_1 \\ominus Z_2 $ with $ n_g = 64 $ and $ n_c = 45 $ computed using \\emph{\\textbf{Lemma~\\ref{Pontryagin_Diff}}}. As discussed in \\emph{\\cite{Althoff2015a}}, zonotopes are not closed under the Pontryagin difference, which can be seen in Fig. \\ref{Fig_Pontryagin_Diff_3D} by the asymmetric facets of $ Z_d $. Using \\emph{\\textbf{Theorem \\ref{Pontryagin_OneStep}}}, the inner-approximation of the Pontryagin difference $ \\tilde{Z}_d $ is also shown in Fig. \\ref{Fig_Pontryagin_Diff_3D}. Choosing to maximize $ \\| [\\mathbf{G}_1 \\; \\mathbf{G}_2]\\mathbf{\\Phi} \\|_\\infty $ subject to \\eqref{Pontryagin_conditions} produced $ \\tilde{Z}_d \\subset Z_d $ with a volume ratio of $ V_r = 0.924 $.\n\\end{exmp}\n\\begin{figure}\n\t\\begin{center}\n\t\t\\includegraphics[width=7cm]{Pontryagin_Diff_3D.pdf}\n\t\t\\caption{Left: The Pontryagin difference $ Z_d = Z_1 \\ominus Z_2 $ where $ Z_1 $ and $ Z_2 $ are zonotopes but $ Z_d $ is not \\cite{Althoff2015a}. Right: The inner-approximation of $ Z_d $ by a zonotope $ \\tilde{Z}_d \\subseteq Z_d $. 
}\n\t\t\\label{Fig_Pontryagin_Diff_3D} \n\t\\end{center} \n\\end{figure}\n\n\\begin{table*}[!t]\n\t\\renewcommand{\\arraystretch}{1.1}\n\t\\caption{Pontryagin difference set complexity and computation time (seconds)}\n\t\\label{Tab_Ptryagindiff}\n\t\\centering\n\t\\begin{tabular}{ M{0.3cm} M{0.4cm} M{0.4cm}| M{0.6cm} M{0.6cm} | M{3.5cm} M{0.9cm} | M{0.6cm} M{0.6cm} M{0.6cm} }\n\t\t\\hline\n\t\t\\multicolumn{3}{c}{} & \\multicolumn{7}{c}{$Z_1 \\ominus Z_2$} \\\\\n\t\t\\cline{4-10}\n\t\t\\multicolumn{1}{c}{} & \\multicolumn{1}{c}{$Z_1$} & \\multicolumn{1}{c}{$Z_2$} & \\multicolumn{2}{c}{H-Rep} & \\multicolumn{2}{c}{CG-Rep} & \\multicolumn{3}{c}{1-Step (G-Rep)} \\\\ \\cline{2-3} \\cline{4-10}\n\t\t\\multicolumn{1}{c}{$n$} & \\multicolumn{2}{c}{$n_g$} & \\multicolumn{1}{c}{$n_h$} & \\multicolumn{1}{c}{$t_h$} & \\multicolumn{1}{c}{$n_{c} \\times n_g$} & \\multicolumn{1}{c}{$t_h\/t_{cg}$} & \\multicolumn{1}{c}{$n_g$} & \\multicolumn{1}{c}{$V_r$} & \\multicolumn{1}{c}{$t_h\/t_{g}$} \\\\\n\t\t\\hline\n\t\t2 & 4 & 4 & 16 & 0.01 & 30 \t $\\times$ 64 \t\t& 33.2 \t& 2.5 & 0.64 & 3.3\\\\\n\t\t2 & 8 & 4 & 32 & 0.01 & 30 \t $\\times$ 128 \t\t& 48.2 \t& 3.0 & 0.54 & 3.5\\\\\n\t\t2 & 4 & 8 & 16 & 0.01 & 510\t $\\times$ 1024 \t\t& 17.6 \t& 2.6 & 0.67 & 3.3\\\\\n\t\t2 & 8 & 8 & 32 & 0.02 & 510 \t $\\times$ 2048 \t& 20.9 \t& 3.2 & 0.53 & 3.1\\\\ \\hline \n\t\t3 & 6 & 6 & 60 & 0.03 & 189 \t $\\times$ 384 \t\t& 68.8 \t& 3.7 & 0.55 & 7.5 \\\\\n\t\t3 & 12 & 6 & 264 & 0.16 & 189 \t $\\times$ 768 \t\t& 261 \t& 4.8 & 0.46 & 16.0 \\\\\n\t\t3 & 6 & 12 & 60 & 0.13 & 12,285 $\\times$ 24,576 \t& 18.6 \t& 3.8 & 0.54 & 21.5\\\\\n\t\t3 & 12 & 12 & 264 & 0.40 & 12,285 $\\times$ 49,152 \t& 27.2\t& 4.9 & 0.43 & 28.2\\\\ \\hline \n\t\t4 & 8 & 8 & 224 & 0.38 & 1,020 $\\times$ 2,048 \t& 359 \t& 4.9 & 0.50 & 46.2 \\\\ \n\t\t4 & 16 & 8 & 2,240 & 41.0 & 1,020 $\\times$ 4,096 \t& 2,370 & 6.2 & 0.45 & 1,890\\\\ \n\t\t4 & 8 & 16 & 224 & 59.0 & 262,140 $\\times$ 524,288 & 271 \t& 4.7 & 0.43 & 4,078\\\\ \n\t\t4 & 
16 & 16 & 2,240 & 243 & 262,140 $\\times$ 1,048,576 & 556 \t& 6.2 & 0.48 & 6,510\\\\ \\hline \n\t\\end{tabular}\n\\end{table*}\n\n\\begin{exmp}\n\tSimilar to \\emph{\\cite{Althoff2015a}}, the scalability of exact constrained zonotope representations of the Pontryagin difference via \\emph{\\textbf{Lemma~\\ref{Pontryagin_Diff}}} and zonotopic inner-approximations via \\emph{\\textbf{Theorem~\\ref{Pontryagin_OneStep}}} is compared with the standard H-Rep approach provided in the Multi-Parametric Toolbox \\cite{Herceg2013}. Table \\ref{Tab_Ptryagindiff} shows the complexity and computational time for computing the Pontryagin difference $ Z_d = Z_1 \\ominus Z_2 $ using each of the three methods for zonotopes in $ \\mathbb{R}^2 $, $ \\mathbb{R}^3 $, and $ \\mathbb{R}^4 $. Each entry in Table~\\ref{Tab_Ptryagindiff} represents an average of 100 computations using randomly generated zonotopes $ Z_1 $ and $ Z_2 $. These random zonotopes are generated using the procedure provided in \\emph{\\cite{Althoff2015a}} and the CORA toolbox \\cite{Althoff2018a}. Cases where $ Z_d = \\emptyset $ were disregarded and not considered in the set of 100 computations. For CG-Rep and G-Rep, the ratio of computation times relative to that of H-Rep is presented. Since the G-Rep approach is an inner-approximation, the average volume ratio is also provided. From these results, it is clear that both the set complexity $ n_h $ and the computation time $ t_h $ for the H-Rep approach increase by approximately an order of magnitude as the set dimension $ n $ increases. While the CG-Rep approach increases the computation speed by approximately two orders of magnitude, the set complexity increases exponentially. Sparse matrices were used to reduce the memory requirements for these computations. The redundancy removal approach presented in Section \\ref{Sec_Redundancy} was not able to detect the high degree of redundancy in these set representations. 
Alternatively, the one-step G-Rep approximation approach also provided significant reductions in computational cost while maintaining a small number of generators. However, for these randomly generated zonotopes, the inner-approximation only captures approximately 50\\% of the volume of $ Z_d $. While these methods will likely work well for many practical applications, future work is needed to improve redundancy detection and removal for the CG-Rep approach, and improved optimization formulations are needed for the G-Rep approach to further increase the volume ratio.\n\\end{exmp}\n\\section{Application to Reachability Analysis} \\label{Sec_Hier}\n\nTo demonstrate the applicability of the algorithms developed in this paper, this section considers the exact and approximate computations of backwards reachable sets of a constrained linear system in the context of the two-level hierarchical MPC framework developed in \\cite{Koeln2019ACC,Koeln2019_Aut}. The high-level goal is to compute a \\emph{wayset} $ Z_c(k) $ at discrete time step $ k $ that captures all of the initial states $ \\mathbf{x}(k) \\in Z_c(k) \\subset \\mathbb{R}^n $ for which there are state and input trajectories $ \\mathbf{x}(k+j) $ and $ \\mathbf{u}(k+j) $ that satisfy, for all $ j \\in \\{0,\\cdots,N-1\\} $, \\emph{i}) the dynamics $ \\mathbf{x}(k+j+1) = \\mathbf{A} \\mathbf{x}(k+j) + \\mathbf{B} \\mathbf{u}(k+j) $, \\emph{ii}) the state and input constraints $ \\mathbf{x}(k+j) \\in \\mathcal{X}$ and $ \\mathbf{u}(k+j) \\in \\mathcal{U}$, and \\emph{iii}) the terminal constraint $ \\mathbf{x}(k+N) = \\mathbf{x}^* $ for some predetermined target $ \\mathbf{x}^* \\in \\mathbb{R}^n $. In the context of the hierarchical MPC framework from \\cite{Koeln2019ACC,Koeln2019_Aut}, $ \\mathbf{x}^* $ is a future state on the optimal trajectory determined by an upper-level controller and $ Z_c(k) $ is a terminal constraint imposed on a lower-level controller. 
Since $ \\mathbf{x}^* $ is updated at every evaluation of the upper-level controller, $ Z_c(k) $ must be recomputed in real-time, which is enabled through the use of constrained zonotopes.\n\n\\textbf{Algorithm \\ref{waysetComputation}} shows a simplified version of the backward reachable wayset algorithms presented in \\cite{Koeln2019ACC,Koeln2019_Aut}. Fig. \\ref{Fig_HierMPC_Wayset_evol} shows the results of this algorithm when applied to the simplified vehicle system model from \\cite{Koeln2019ACC,Koeln2019_Aut} with\n\\begin{equation}\n \\mathbf{x}(k+1) = \\begin{bmatrix} 1 & 1 & 0 \\\\ 0 & 1 & 0 \\\\ 0 & 0 & 1 \\end{bmatrix}\\mathbf{x}(k) + \\begin{bmatrix} \\phantom{-}0 & \\phantom{-}0 & \\phantom{-}0 \\\\ \\phantom{-}1 & -1 & \\phantom{-}0 \\\\ -1 & -1 & -1\n \\end{bmatrix}\\mathbf{u}(k),\n\\end{equation}\nwhere the states represent position, velocity, and on-board energy storage and the inputs represent acceleration, deceleration, and power to an on-board load. The discretization time step size is $\\Delta t = 1$ second and the state and input constraints defining $\\mathcal{X}$ and $\\mathcal{U}$ are\n\\begin{equation*}\n \\begin{bmatrix} -1 \\\\ -20 \\\\ 0 \\end{bmatrix} \\leq \\mathbf{x}(k) \\leq \\begin{bmatrix}\n 105 \\\\ 20 \\\\ 100\n \\end{bmatrix}, \\; \\begin{bmatrix} 0 \\\\ 0 \\\\ 0 \\end{bmatrix} \\leq \\mathbf{u}(k) \\leq \\begin{bmatrix} 1 \\\\ 1 \\\\ 1\\end{bmatrix}.\n\\end{equation*}\n\nTo demonstrate the halfspace intersection results from Section \\ref{halfspaceIntersections}, Table \\ref{Tab_wayset_cmplx_comptime} compares the set representation complexity and computation time of four different CG-Rep methods with those using H-Rep via the Multi-Parametric Toolbox \\cite{Herceg2013}. All computation times are averaged over 100 runs. Overall, the CG-Rep methods result in significantly less set complexity and computation time. 
The CG-Rep methods differ in the computation of $ \\hat{Z}_c(k + j-1) \\cap \\mathcal{X}$ in \\textbf{Algorithm \\ref{waysetComputation}}. Specifically, this intersection is computed using $1)$ the zonotope-hyperplane (ZH) method from \\textbf{Lemma \\ref{Zono_halfspace_int_check}} based on the parent zonotope $ \\hat{Z}(k + j-1) \\supset \\hat{Z}_c(k + j-1) $ and the H-Rep of $ \\mathcal{X} $, $2)$ the generalized intersection (GI) (from \\eqref{generalized_Intersection}) of the constrained zonotope wayset and the G-Rep of $ \\mathcal{X} $, $3)$ the linear program (LP) method from \\textbf{Lemma \\ref{conzono_hyplane_intersection}} for checking the intersection of a constrained zonotope and a hyperplane, and $4)$ the interval arithmetic (IA) approach using \\textbf{Algorithm \\ref{gen_Bounds}} to detect empty sets when $ \\hat{Z}_c(k + j-1) \\subset \\mathcal{X}$.\nIn the ZH, LP, and IA methods, if the wayset intersects the hyperplanes associated with the halfspaces of $ \\mathcal{X} $, generators and constraints are added using \\eqref{conzono_halfspace_int} to exactly compute $ \\hat{Z}_c(k + j-1) \\cap \\mathcal{X}$ in CG-Rep.\n\nAs expected, the GI approach resulted in the highest set complexity since generators and constraints are added even if $ \\hat{Z}_c(k + j-1) \\subset \\mathcal{X} $. The LP approach results in the lowest complexity by only adding generators and constraints when needed to exactly define the intersection. In this application, the ZH method also achieves this low set complexity and requires significantly less computation time. However, achieving this low complexity is not expected in general. Finally, the IA approach did not perform as well in this application, resulting in unnecessary generators and constraints and a large computation time. 
However, in practice, the zonotope-halfspace check from \\textbf{Theorem \\ref{zono_halfspace_int}} would be applied first so that \\textbf{Algorithm \\ref{gen_Bounds}} is only used in cases where the parent zonotope intersects the hyperplane. \n\n\\IncMargin{1.5em}\n\\begin{algorithm2e}[t]\n\t\\SetAlgoLined\n\t\\SetKwInOut{Input}{Input}\\SetKwInOut{Output}{Output}\n\t\\Input{$\\mathbf{x}^*$}\n\t\\Output{$ Z_c(k)$}\n\t\\BlankLine\n\t\\SetAlgoLined\n\tinitialize $ j \\leftarrow N $\\\\\n\t$Z_c(k + j) = \\mathbf{x}^* $\\;\n\t\\While{$ j \\geq 1 $}{\n\t\t$\\hat{Z}_c(k + j-1) = A^{-1}Z_c(k + j) \\oplus (-A^{-1}B) \\mathcal{U}$\\;\n\t\t$ Z_c(k + j-1) = \\hat{Z}_c(k + j-1) \\cap \\mathcal{X}$\\;\n\t\t$j \\leftarrow j - 1 $\\;\n\t}\n\t$Z_c(k) = Z_c(k + j) $\n\t\\caption{Wayset $Z_c(k)$ for target $ \\mathbf{x}^*$.}\n\t\\label{waysetComputation}\t\n\\end{algorithm2e}\n\\DecMargin{1.5em}\n\n\\begin{figure}[t]\n\t\\begin{center}\t\t\\includegraphics[width=7cm]{Wayset_S13_evol_time.pdf}\n\t\\caption{The evolution of backward reachable wayset $Z_c(k)$ for $k = 40$ and $N = 10$ time steps starting from $\\mathbf{x}^*$ projected on the position and energy states. The sets $Z_c(k + j), \\forall \\; j \\in \\{ 7, 8, 9\\}$ are zonotopes (evident from symmetry) while the sets $Z_c(k+j), \\forall \\; j \\in \\{ 0, \\cdots, 6\\}$, are constrained zonotopes. 
The constrained zonotope wayset $Z_c(k)$ contains $\\mathbf{x}_{-}^*$, ensuring the control feasibility from \\cite{Koeln2019ACC,Koeln2019_Aut}.}\n\t\\label{Fig_HierMPC_Wayset_evol} \n\t\\end{center} \n\\end{figure}\n\n\\begin{table}\n\t\\renewcommand{\\arraystretch}{1.25}\n\t\\caption{Complexity and Computation Time of Waysets}\n\t\\label{Tab_wayset_cmplx_comptime}\n\t\\centering\n\t\\begin{tabular}{ M{1cm} | M{1.5cm} | M{0.8cm} | M{1.5cm} | M{0.8cm}}\n\t\t\\multicolumn{5}{c}{} \\\\\n\t\t\\hline\t\\multicolumn{1}{c}{} & \\multicolumn{1}{c}{$Z_c$} & \\multicolumn{1}{c}{$t_{calc}$} & \\multicolumn{1}{c}{$\\tilde{Z}_c$} & \\multicolumn{1}{c}{$t_{calc}$} \\\\\n\t\t\\cline{2-5}\t\\multicolumn{1}{c}{Method} & \\multicolumn{1}{c}{$n_c \\times n_g$} & \\multicolumn{1}{c}{sec}\n\t\t& \\multicolumn{1}{c}{$\\tilde{n}_c \\times \\tilde{n}_g$} & \\multicolumn{1}{c}{sec}\\\\\n\t\t\\hline\n\t\tZH & $ 7 \\times 37$ & $1\\times10^{-3}$ & \n\t\t$7 \\times 37$ & $4\\times10^{-3}$ \\\\\n\t\tGI & $30 \\times 60$ & $2\\times10^{-3}$ & $7 \\times 37$ & $2\\times10^{-1}$ \\\\\n\t\tLP & $7 \\times 37$ & $1\\times10^{-1}$ & $7 \\times 37$ & $2\\times10^{-3}$ \\\\\n\t\tIA & $15 \\times 45$ & $1\\times10^{-1}$ & $7 \\times 37$ & $4\\times10^{-2}$\\\\\n\t\tH-Rep & $n_h = 5047$ & $161$ & $n_h = 153$ & $333$\n\t\\end{tabular}\n\\end{table}\n\nTo demonstrate the redundancy removal results from Section \\ref{Sec_Redundancy}, \\textbf{Algorithm \\ref{waysetComputation}} and \\textbf{Theorem \\ref{Redundant_ConZono}} were applied to remove all unnecessary generators and constraints, resulting in the irredundant constrained zonotope wayset $ \\tilde{Z}_c $ in Table \\ref{Tab_wayset_cmplx_comptime}.
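The complexity growth of the GI method in Table \\ref{Tab_wayset_cmplx_comptime} can be read off directly from the structure of the generalized intersection: intersecting two constrained zonotopes stacks both generator lists and adds coupling constraints whether or not they are needed. A minimal sketch (plain NumPy, following the intersection formula of \\cite{Scott2016}, applied to a hypothetical pair of boxes with $Z_1 \\subset Z_2$):

```python
import numpy as np

def cz_intersect(c1, G1, A1, b1, c2, G2, A2, b2):
    """Intersection of constrained zonotopes Z1, Z2 in CG-Rep:
    keep Z1's center, pad its generators with zeros, stack both
    constraint sets, and add the coupling constraint
    G1 xi1 - G2 xi2 = c2 - c1."""
    n, ng1 = G1.shape
    ng2 = G2.shape[1]
    G = np.hstack([G1, np.zeros((n, ng2))])
    A = np.vstack([
        np.hstack([A1, np.zeros((A1.shape[0], ng2))]),
        np.hstack([np.zeros((A2.shape[0], ng1)), A2]),
        np.hstack([G1, -G2]),
    ])
    b = np.concatenate([b1, b2, c2 - c1])
    return c1, G, A, b

# Z1 = [-1,1]^2 and Z2 = [-2,2]^2 (Z1 strictly inside Z2, no constraints)
empty_A, empty_b = np.zeros((0, 2)), np.zeros(0)
c, G, A, b = cz_intersect(np.zeros(2), np.eye(2), empty_A, empty_b,
                          np.zeros(2), 2.0 * np.eye(2), empty_A, empty_b)
# Generators and constraints are added even though the intersection equals Z1
```

This is consistent with the observation above that the LP, ZH, and IA methods gain their advantage by adding generators and constraints only when the sets actually cross.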
Overall, when compared to H-Rep, all four CG-Rep approaches are computationally efficient with lower set complexity, and the preferred CG-Rep approach is likely to be application dependent.\n\nWhen computing these waysets for complex systems, it is likely that inner-approximations are needed to restrict the complexity of the set to satisfy a predetermined upper bound on the number of generators and constraints. Demonstrating the inner-approximations from Section \\ref{Sec_InnerApprox} and the convex hull operation from Section \\ref{Sec_ConvexHull}, the top row of plots in Fig. \\ref{Fig_HierMPC_InApprox_cvxhull} shows the inner-approximating interval set $ B \\subset Z_c $ computed using the method described in \\textbf{Example \\ref{example_innerApprox}} with $ n_g = 3 $ and $ n_c = 0 $. However, in the hierarchical MPC framework from \\cite{Koeln2019ACC,Koeln2019_Aut}, the wayset must also include a key element denoted here as $ \\mathbf{x}^*_- $. Since $ \\mathbf{x}^*_- \\notin B $, the wayset can be computed as $ CH(B \\cup \\mathbf{x}^*_-) $, resulting in $ n_g = 10 $ and $ n_c = 6 $. If this increase in set complexity is undesirable for a particular application, the point containment $ \\mathbf{x}^*_- \\in B \\subseteq Z_c $ can be readily added to the LP defined in \\eqref{Conzono_containment}. The resulting inner-approximating interval set with this point containment is shown in the bottom row of plots in Fig. \\ref{Fig_HierMPC_InApprox_cvxhull}.
The computation times for these inner-approximating interval sets are approximately $0.18$ and $0.25$ seconds for the top and bottom rows, respectively.\n\n\\begin{figure}\n\t\\begin{center}\n \t\t\\includegraphics[width=8cm]{HierMPC_Wayset_InApprox_cvxhull_nd_pt_containment.pdf}\n\t\t\\caption{Top: The wayset $Z_c$, inner-approximating interval set $ B $ with $ V_r = 0.35 $, and $CH(B \\cup \\mathbf{x}^*_-)$ with $V_r = 0.39$ are shown on the left and the projections onto the position and velocity states are shown on the right. Bottom: The wayset $Z_c$ and inner-approximating interval set $ B $ containing $ \\mathbf{x}^*_- $ with $ V_r = 0.30 $ are shown on the left, with the projection shown on the right.}\n\t\t\\label{Fig_HierMPC_InApprox_cvxhull}\n\t\t\\end{center} \n\\end{figure}\n\n\\section{Conclusions and Future Work} \\label{Conclusions}\nThe use of zonotopes and constrained zonotopes for set operations provides significant computational advantages that improve the practicality of set-based techniques commonly used in systems and control theory. Operations such as halfspace intersections, convex hulls, invariant sets, and Pontryagin differences have been shown to benefit from zonotope and constrained zonotope set representations. \nComplexity reduction techniques were developed based on redundancy removal and inner-approximations to further improve the practicality of these set representations. Future work will focus on improved redundancy detection algorithms and optimization formulations that more accurately capture the volume of the approximated set.\n\n\\section{Introduction}\n\nThe interpretation presented here completely revives classical ontology: Reality is described completely by a classical trajectory $q(t)$, in the classical configuration space $Q$.
\n\nThe wave function is also interpreted in a completely classical way -- as a particular description of a classical probability flow, defined by the probability distribution $\\rho(q)$ and average velocities $v^i(q)$. The formula which defines this connection was proposed by Madelung in 1926 \\cite{Madelung} and is therefore as old as quantum theory itself. It is the polar decomposition\n\\begin{equation}\\label{polar}\n\\psi(q)=\\sqrt{\\rho(q)} e^{\\frac{i}{\\hbar} S(q)},\n\\end{equation}\nwith the phase $S(q)$ being a potential of the velocity $v^i(q)$: \n\\begin{equation}\\label{guiding}\nv^i(q) = m^{ij} \\pd_j S(q),\n\\end{equation}\nso that the flow is a potential one.\\footnote{Here, $m^{ij}$ denotes a quadratic form -- a ``mass matrix'' -- on configuration space. I assume in this paper that the Hamiltonian is quadratic in the momentum variables, thus,\n\\begin{equation}\\label{HamiltonFunction}\nH(p,q) = \\frac{1}{2}m^{ij}p_ip_j + V(q).\n\\end{equation}\nThis is quite sufficient for relativistic field theory, see app. \\ref{relativity}.\n}\n\nThis puts the interpretation into the classical realist tradition of interpretation of quantum theory, the tradition of de Broglie-Bohm (dBB) theory \\cite{deBroglie}, \\cite{Bohm} and Nelsonian stochastics \\cite{Nelson}. \n\nBut there is a difference -- the probability flow is interpreted as describing incomplete information about the true trajectory $q(t)$. This puts the interpretation into another tradition -- the interpretation of probability theory as the logic of plausible reasoning in situations with incomplete information, as proposed by Jaynes \\cite{Jaynes}. This objective variant of the Bayesian interpretation of probability follows the classical tradition of Laplace \\cite{Laplace} (to be distinguished from the subjective variant proposed by de Finetti \\cite{deFinetti}).
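As a numerical illustration of \\eqref{polar} and \\eqref{guiding}, the flow variables $\\rho(q), v^i(q)$ can be read off from any wave function sampled on a grid. A minimal one-dimensional sketch (assuming $\\hbar = m = 1$ and a plane wave, for which the exact flow is constant):

```python
import numpy as np

hbar, m = 1.0, 1.0
q = np.linspace(0.0, 10.0, 2001)
k = 2.0

# Plane wave psi(q) = exp(i k q): exact flow variables are rho = 1, v = hbar*k/m
psi = np.exp(1j * k * q)

# Polar decomposition psi = sqrt(rho) * exp(i S / hbar)
rho = np.abs(psi) ** 2
S = hbar * np.unwrap(np.angle(psi))   # unwrap recovers a smooth phase potential
v = np.gradient(S, q) / m             # guiding equation: v = (1/m) dS/dq
```

The phase unwrapping step is exactly the point where the global structure of $S(q)$ matters; it returns below in the discussion of the Wallstrom objection.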
\n\nSo this interpretation combines two classical traditions -- classical realism about trajectories and the classical interpretation of probability as the logic of plausible reasoning. \n\nThere is also some aspect of novelty -- a clear and certain program for development of subquantum theory. Quantum theory makes sense only as an approximation for potential probability flows. It has to be generalized to non-potential flows, described by the flow variables $\\rho(q), v^i(q)$. Without a potential $S(q)$ there will also be no wave function $\\psi(q)$ in such a subquantum theory. And the flow variables are not fundamental fields themselves; they also describe only incomplete information. The fundamental theory has to be one for the classical trajectory $q(t)$ alone. \n\nSo this interpretation is also a step toward the development of a subquantum theory. This is an aspect which invalidates prejudices against quantum interpretations as pure philosophy leading nowhere. Nonetheless, even this program for subquantum theory has a classical character, bringing subquantum theory closer to a classical theory. \n\nThinking about how to name this interpretation of quantum theory, I have played around with ``neoclassical'', but rejected it, for a simple reason: There is nothing ``neo'' in it. Instead, it deserves to be named ``paleo''.\\footnote{Association with the ``paleolibertarian'' political direction is welcome and not misleading -- it also revives classical libertarian ideas considered to be out of date for a long time.} \n\nAnd so I have decided to name this interpretation of quantum theory ``paleoclassical''. \n\n\\subsection{The Bayesian character of the paleoclassical interpretation}\n\nThe interpretation of the wave function is essentially Bayesian. \n\nIt is the objective (information-dependent) variant of Bayesian probability, as proposed by Jaynes \\cite{Jaynes}, which is used here.
It has to be distinguished from the subjective variant proposed by de Finetti \\cite{deFinetti}, which is embraced by the Bayesian interpretation of quantum theory proposed by Caves, Fuchs and Schack \\cite{Bayesian}. \n\nThe Bayesian interpretation of the wave function is in conflict with the objective character assigned to the wave function in dBB theory. Especially Bell has emphasized that the wave function has to be understood as a real object. \n\nBut the arguments in favour of the reality of the wave function, even if strong, appear insufficient: The effective wave function of small subsystems depends on the configuration of the environment. This dependence is sufficient to explain everything which makes the wave function similar to a really existing object. It is only the wave function of a closed system, like the whole universe, which is completely interpreted in terms of incomplete information about the system itself. \n\nComplexity and time dependence, even if they are typical properties of real things, are characteristics of incomplete information as well. \n\nThe Bayesian aspects essentially change the character of the interpretation: The dBB ``guiding equation'' \\eqref{guiding}, instead of guiding the configuration, becomes part of the definition of the information about the configuration contained in the wave function. \n\nThis has the useful consequence that there is no longer any ``action without reaction'' asymmetry: While it is completely natural that the real configuration does not have an influence on the information available about it, there is also no longer any ``guiding'' of the configuration by the wave function. \n\n\\subsection{The realistic character of the paleoclassical interpretation}\n\nOn the other hand, there is also a strong classical realistic aspect of the paleoclassical interpretation: First of all, it is a realistic interpretation.
But, even more, its ontology is completely classical -- the classical trajectory $q(t)$ is the only beable. \n\nThis defines an important difference between the paleoclassical interpretation and other approaches to interpret the wave function in terms of information: It answers the question ``information about what'' by explicitly defining the ``what'' -- the classical trajectory -- and even the ``how'' -- by giving explicit formulas for the probability distribution $\\rho(q)$ and the average velocity $v^i(q)$. \n\nThe realistic character also leads to another important difference -- that of motivation. For the paleoclassical interpretation, there is no need to solve a measurement problem -- there is none already in dBB theory, and the dBB solution of this problem -- the explicit non-Schr\\\"{o}dinger\\\/ evolution of the conditional wave function of the measured system, defined by the wave function of system and measurement device and the actual trajectory of the measurement device -- can be used without modification. \n\nAnd there is also no intention to save relativistic causality by getting rid of the non-local collapse -- the paleoclassical interpretation accepts a hidden preferred frame, which is yet another aspect of its paleo character.
Anyway, because of Bell's theorem, there is no realistic alternative.\n\\footnote{Here I, of course, have in mind the precise meaning of ``realistic'' used in Bell's theorem, instead of the metaphorical one used in ``many worlds'', which is, in my opinion, not even a well-defined interpretation.}\n\n\\subsection{The justification}\n\nSo neither the realistic approach of dBB theory, which remains influenced by the frequentist interpretation of probability (an invention of the positivist Richard von Mises \\cite{Mises}) and therefore tends to objectivize the wave function, nor the Bayesian approach, which embraces the anti-realistic rejection of unobservables and therefore rejects the preferred frame, is sufficiently classical to see the possibility of a complete classical picture. \n\nBut it is one thing to see it, and possibly even to like it because of its simplicity, and another thing to consider it as justified. \n\nThe justification of the paleoclassical interpretation presented here is based on a reformulation of classical mechanics in terms of -- a wave function. It is a simple variant of Hamilton-Jacobi theory with a density added, but this funny reformulation of classical mechanics appears to be the key for the paleoclassical interpretation. The point is that its exact equivalence to classical mechanics (in the domain of its validity) and even the very fact that it shortly becomes invalid (because of caustics) almost force us to accept -- for this classical variant -- the interpretation in terms of insufficient information. And it also provides all the details we use. \n\nBut, then, why should we change the ontological interpretation if all that changes is the equation of motion? Moreover, if (as it appears to be the case) the Schr\\\"{o}dinger\\\/ equation is simply the linear part of the classical equation, so that there is not a single new term which could invalidate the interpretation?
And where one equation is the classical limit of the other one?\n\nWe present even more evidence of the conceptual equivalence between the two equations: A shared explanation, in terms of information, that there should be a global $U(1)$ phase shift symmetry, a shared explanation of the product rule for independence, a shared explanation for homogeneity of the equation. And there is, of course, the shared Born probability interpretation. All these things being equal, why should one even think about giving the two wave functions a different interpretation? \n\n\\subsection{Objections} \n\nThere are a lot of objections against this interpretation to care about. \n\nSome of them are quite irrelevant, because they are already handled appropriately from the point of view of de Broglie-Bohm theory, so I have banished them to appendices: The objection of incompatibility with relativity (app. \\ref{relativity}), the Pauli objection that it destroys the symmetry between configuration and momentum variables (app. \\ref{Pauli}), some doubts about the viability of field-theoretic variants related to overlaps (app. \\ref{fields}). \n\nThere are the arguments in favour of interpreting the wave function as describing external real beables. These are quite strong but nonetheless insufficient: The wave functions of small subsystems -- and we have no access to different ones -- \\emph{depend} on real beables external to the system itself, namely the trajectories of the environment. And complexity and dynamics are properties of incomplete information as well. \n\nAnd there is the Wallstrom objection \\cite{Wallstrom}, in my opinion the most serious one.
How to justify, in terms of $\\rho(q)$ and $v^i(q)$, the ``quantization condition''\n\\begin{equation}\n\\oint m_{ij} v^i(q) dq^j = \\oint \\pd_j S(q) dq^j = 2\\pi m\\hbar, \\qquad m\\in\\mbox{$\\mathbb{Z}$},\n\\end{equation}\nfor closed trajectories around zeros of the wave function, which is, in quantum theory, a trivial consequence of the wave function being uniquely defined globally, but which is completely implausible if formulated in terms of the velocity field?\n\nFortunately, I have found a solution of this problem in \\cite{againstWallstrom}, based on an additional regularity postulate. All that has to be postulated is that \\emph{$0< \\Delta \\rho(q) < \\infty$ almost everywhere where $\\rho(q)=0$}. This gives the necessary quantization condition. Moreover, there are sufficiently strong arguments that it can be justified by a subquantum theory. \n\nThe idea that what we observe are localized wave packets instead of the configurations themselves I reject in app. \\ref{wavepackets}. So all the objections I know about can be answered in a satisfactory way. Or at least I think so. \n\n\\subsection{Directions of future research} \n\nDifferent from many other interpretations of quantum theory, the paleoclassical interpretation suggests a quite definite program of development of a more fundamental, subquantum theory: It defines the ontology of subquantum theory as well as the equation which can hold only approximately -- the potentiality condition for the velocity $v^i(q)$. The consideration of the Wallstrom objection even identifies the domain where modifications of quantum theory are necessary -- the environment of the zeros of the wave function. \n\nOne can also identify another domain where quantum predictions will fail in subquantum theory -- the reliability of quantum computers, in particular their ability to reach exponential speedup in comparison with classical computers.
\n\nAnother interesting question is what restrictions follow for $\\rho(q), v^i(q)$ from the interpretation as a probability flow for a more fundamental theory for the trajectory $q(t)$ alone. An answer may be interesting for finding answers to the ``why the quantum'' question. A question which cannot be answered by an interpretation, which is restricted to the ``what is the quantum'' question. \n\n\\section{A wave function variant of Hamilton-Jacobi theory}\n\nDo you know that one can reformulate classical theory in terms of a wave function? With an equation for this wave function which is completely classical, but, nonetheless, quite close to the Schr\\\"{o}dinger\\\/ equation? \n\nIn fact, this is a rather trivial consequence of the mathematics of Hamilton-Jacobi theory and the insights of Madelung \\cite{Madelung}, de Broglie \\cite{deBroglie}, and Bohm \\cite{Bohm}. All one has to do is to look at them from another point of view. Their aim was to understand quantum theory, by representing quantum theory in a known, more comprehensible, classical form -- a form resembling the classical Hamilton-Jacobi equation\n\\begin{equation}\\label{HamiltonJacobi}\n\\pd_t S(q) + \\frac12 m^{ij} \\pd_i S(q) \\pd_j S(q) + V(q) = 0.\n\\end{equation}\nSo, the real part of the Schr\\\"{o}dinger\\\/ equation, divided by the wave function, gives\n\\begin{equation}\\label{Bohm}\n\\pd_t S(q) + \\frac12 m^{ij} \\pd_i S(q) \\pd_j S(q) + V(q) + Q[\\rho] = 0,\n\\end{equation}\nwith only one additional term -- the quantum potential \n\\begin{equation}\\label{Qdef}\nQ[\\rho] = -\\frac{\\hbar^2}{2} \\frac{\\Delta \\sqrt{\\rho}}{\\sqrt{\\rho}}.\n\\end{equation}\nThe imaginary part of the Schr\\\"{o}dinger\\\/ equation (also divided by $\\psi(q)$) is the continuity equation for $\\rho$\n\\begin{equation}\\label{continuity}\n\\pd_t\\rho(q,t) + \\pd_i(\\rho(q,t)v^i(q,t)) = 0.\n\\end{equation}\n \nNow, all we have to do is to reverse the aim -- instead of presenting the Schr\\\"{o}dinger\\\/
equation like a classical equation, let's present the classical Hamilton-Jacobi equation like a Schr\\\"{o}dinger\\\/ equation. There is almost nothing to do -- to add a density $\\rho(q)$ together with a continuity equation \\eqref{continuity} is trivial. We use the same polar decomposition formula \\eqref{polar} to define the classical wave function. It remains to do the same procedure in the other direction. The difference is the same -- the quantum potential. So we obtain, as the equation for the classical wave function, an equation I have named the pre-Schr\\\"{o}dinger\\\/ equation:\n\\begin{equation}\\label{pre}\ni\\hbar \\pd_t \\psi(q,t) = -\\frac{\\hbar^2}{2} m^{ij}\\pd_i\\pd_j\\psi(q,t) + (V(q) - Q[\\rho]) \\psi(q,t) = \\hat{H} \\psi - Q[|\\psi|^2]\\psi.\n\\end{equation}\n\nOf course, the additional term is a nasty, nonlinear one. But that's the really funny point: We can now obtain quantum theory as the linear approximation of classical theory, in other words, as a simplification.\n\nBut there is more than this in this classical equation for the classical wave function. The point is that there is an exact equivalence between the different formulations of classical theory, despite the fact that one is based on a classical trajectory $q(t)$ only, and the other has a wave function $\\psi(q,t)$ together with the trajectory $q(t)$. And it is this exact equivalence which can be used to identify the meaning of the classical wave function. \n\nBecause of its importance, let's formulate this in the form of a theorem:\n\\begin{theorem}[equivalence]\nAssume $\\psi(q,t),q(t)$ fulfill the pre-Schr\\\"{o}dinger\\\/ equation \\eqref{pre} for some Hamilton function $H(p,q)$ of type \\eqref{HamiltonFunction}, together with the guiding equation \\eqref{guiding}.
\n\nThen, whatever the initial values $\\psi_0(q),q_0$ for the wave function $\\psi_0(q)=\\psi(q,t_0)$ and the initial configuration $q_0=q(t_0)$, there exist initial values $q_0,p_0$ so that the Hamilton equation for $H(p,q)$ with these initial values gives $q(t)$. \n\\end{theorem}\n\\begin{proof} \nThe difficult part of this theorem is classical Hamilton-Jacobi theory, which is presupposed here as known. The simple modifications of this theory used here -- adding the density $\\rho(q)$, with continuity equation \\eqref{continuity}, and rewriting the result in terms of the wave function $\\psi(q)$ defined by the polar decomposition \\eqref{polar}, copying what has been done by Bohm \\cite{Bohm} -- do not endanger the equivalence between Hamiltonian (or Lagrangian) and Hamilton-Jacobi theory. \n\\end{proof}\n\n\\section{The wave function describes incomplete information}\n\nSo let's now evaluate what follows from the equivalence theorem about the physical meaning of the (yet classical) wave function. \n\nFirst, it is the classical (Hamiltonian or Lagrangian) variant which is preferable as a fundamental theory. There are three arguments to justify this:\n\n\\begin{itemize}\n\n\\item Simplicity of the set of fundamental variables: We need only a single trajectory $q(t)$ instead of trajectory $q(t)$ together with a wave function $\\psi(q,t)$. \n\n\\item Correspondence between fundamental variables and observables: It is only the trajectory $q(t)$ in configuration space which is observable. \n\n\\item Stability in time: The wave function develops caustics after a short period of time and becomes invalid. The classical equations of motion do not have such a problem. \n\n\\end{itemize}\n\nSo it is the classical variant which is preferable as a fundamental theory. Thus, we can identify the true beable of classical theory with the classical trajectory $q(t)$.
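The only term separating the pre-Schr\\\"{o}dinger\\\/ equation \\eqref{pre} from its linear part is the quantum potential \\eqref{Qdef}. As a small sanity check of \\eqref{Qdef}, it can be evaluated by finite differences and compared against the closed form for a Gaussian density (a sketch assuming $\\hbar = \\sigma = 1$ and an unnormalized $\\rho$; the two boundary grid points are excluded because the periodic differences wrap around):

```python
import numpy as np

hbar, sigma = 1.0, 1.0
q = np.linspace(-5.0, 5.0, 4001)
dq = q[1] - q[0]

rho = np.exp(-q**2 / (2.0 * sigma**2))   # unnormalized Gaussian density
sqrt_rho = np.sqrt(rho)

# Q[rho] = -(hbar^2/2) * Laplacian(sqrt(rho)) / sqrt(rho), by central differences
lap = (np.roll(sqrt_rho, -1) - 2.0 * sqrt_rho + np.roll(sqrt_rho, 1)) / dq**2
Q_num = -(hbar**2 / 2.0) * lap / sqrt_rho

# Closed form for the Gaussian: Q = -(hbar^2/2) * (q^2/(4 sigma^4) - 1/(2 sigma^2))
Q_exact = -(hbar**2 / 2.0) * (q**2 / (4.0 * sigma**4) - 1.0 / (2.0 * sigma**2))
```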
\n\nThe next observation is that, once $q(t)$ is known and fixed, the wave function contains many degrees of freedom which are unobservable in principle: Many different wave functions define the same trajectory $q(t)$. So we can conclude that these different wave functions do \\emph{not} describe physically different states, containing additional beables. \n\nOn the other hand, this is true only if $q$ is known. What if $q$ is not known? In this case, the wave function simply defines a subset of all possible classical trajectories, and a probability measure on this subset. \n\nTo illustrate this, a particularly important example of a Hamilton-Jacobi function is useful: It is the function $S(q_0,t_0,q_1,t_1)$ defined by\n\\begin{equation}\\label{Minimumproblem}\n S(q_0,t_0,q_1,t_1) = \\int_{t_0}^{t_1} L(q(t),\\dot{q}(t),t) dt,\n\\end{equation} \nwhere the integral is taken over the classical solutions $q(t)$ of this minimum problem with initial and final values $q(t_0)=q_0$, $q(t_1)=q_1$. This function fulfills the Hamilton-Jacobi equation in the variables $q_0,t_0$ as well as $q_1,t_1$. In both versions, it can be characterized as a Hamilton-Jacobi function $S(q,t)$ which is defined by a subset of trajectories: The function $S(q_0,t_0,q,t)$ describes the subset of trajectories going through $q_0$ at $t_0$, while the function $S(q,t,q_1,t_1)$ describes the subset of trajectories going through $q_1$ at $t_1$. \n\nWe can generalize this. The phase $S(q,t)$ tells us the value of $\\dot{q}(t)$ given $q(t)$: \\emph{If} $q(t)=q_0$ \\emph{then} $\\dot{q}(t)=v_0$, with $v_0$ defined by the guiding equation \\eqref{guiding}. So, $S(q,t)$ always distinguishes a particular subset of classical trajectories.
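For the simplest case, a free particle in one dimension ($V = 0$, $L = m\\dot{q}^2/2$), the minimizer in \\eqref{Minimumproblem} is a straight line and the two-point function can be written down explicitly as $S = m(q_1-q_0)^2/2(t_1-t_0)$. A symbolic sketch (assuming SymPy is available) checking that it fulfills the Hamilton-Jacobi equation in both endpoint pairs:

```python
import sympy as sp

m, q0, t0, q1, t1 = sp.symbols('m q0 t0 q1 t1')

# Free-particle action along the straight line from (q0, t0) to (q1, t1)
S = m * (q1 - q0)**2 / (2 * (t1 - t0))

# Hamilton-Jacobi equation in the final endpoint: dS/dt1 + (dS/dq1)^2/(2m) = 0
hj_final = sp.diff(S, t1) + sp.diff(S, q1)**2 / (2 * m)

# ... and in the initial endpoint, where the momentum is p0 = -dS/dq0
hj_initial = sp.diff(S, t0) - sp.diff(S, q0)**2 / (2 * m)
```

Both residuals simplify to zero, confirming the statement that $S(q_0,t_0,q_1,t_1)$ is a Hamilton-Jacobi function in either pair of variables.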
\n\nEven more specifically, the subset defined by $S(q,t)$ can be uniquely described in terms of a subset of the possible initial values at the moment of time $t_0$ -- the configuration $q(t_0)$ and the momentum $p(t_0)=\\nabla S(q(t_0),t_0)$, that means, as a subset of the possible values for the fundamental beables $q,p$ (or $q,\\dot{q}$). \n\nThe other part -- the density $\\rho(q)$ -- is nothing but a probability density on this particular subset. \n\nOf course, a subset is nothing but a special delta-like probability measure, so that the wave function simply defines a probability density on the set of possible initial values:\n\\begin{equation}\\label{probability}\n\\rho(p,q)dpdq = \\rho(q)\\delta(p-\\nabla S(q))dpdq\n\\end{equation}\nThe pre-Schr\\\"{o}dinger\\\/ equation is, therefore, nothing but the evolution equation for this particular probability distribution. \n\nSo our classical, Hamilton-Jacobi wave function is nothing mystical, but simply a very particular form of a standard classical probability density $\\rho(p,q)$ on the phase space. \n\nIn particular, the pre-Schr\\\"{o}dinger\\\/ equation for the wave function is nothing but a particular case of the Liouville equation, the standard law of evolution of standard classical probability distributions $\\rho(p,q)$, for the particular ansatz \\eqref{probability}, and follows from the fundamental law of evolution for the true, fundamental classical beables. \n\nMoreover, the Liouville equation also defines the domain of applicability of the equation for the wave function. This domain is, in fact, restricted. In terms of $\\rho(p,q)$, it will always remain a probability distribution on some Lagrangian submanifold. But this Lagrangian submanifold will be, after some time, no longer a graph of a function $p=\\nabla S(q)$ on configuration space -- there may be caustics, and in this case there will be several values of momentum for the same configuration $q$.
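This focusing is easy to exhibit. Consider free motion ($m = 1$, chosen here for simplicity) with the potential flow $v(q) = -q$, i.e. $S(q) = -q^2/2$: every trajectory is $q(t) = q_0(1 - t)$, so all of them meet at $q = 0$ at $t = 1$, and the momentum ceases to be a function of the configuration exactly there:

```python
import numpy as np

# Free particles, m = 1, initial velocity field v(q) = -q (potential flow, S = -q^2/2)
q0 = np.linspace(-1.0, 1.0, 11)   # a fan of initial configurations

def flow(q0, t):
    """Position at time t of the trajectory starting at q0: q(t) = q0 + v(q0) t."""
    return q0 * (1.0 - t)

spread_before = np.ptp(flow(q0, 0.9))    # trajectories still distinct: a graph over q
spread_caustic = np.ptp(flow(q0, 1.0))   # all trajectories meet at q = 0: a caustic
```

At $t = 1$ the eleven trajectories carry eleven different momenta $v_0 = -q_0$ at the single configuration $q = 0$, so $p = \\nabla S(q)$ is no longer single-valued.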
If this happens, the wave function is no longer an adequate description. \n\nSuch an effect -- restricted validity -- is quite natural for the evolution of information, but not for fundamental beables. \n\nSo the wave function variant of Hamilton-Jacobi theory almost forces us to accept an interpretation of the wave function in terms of incomplete information. Indeed,\n\n\\begin{itemize}\n\n\\item The parts of the wave function, $\\rho(q)$ as well as $S(q)$, make sense as describing a well-defined type of incomplete information about the classical configuration, namely the probability distribution $\\rho(p,q)$ defined by \\eqref{probability}. \n\n\\item The alternative, to interpret the wave function as describing some other, external beables, does not make sense, given the observational equivalence of the theory with simple Lagrangian classical mechanics, with $q(t)$ as the only observable. Additional beables should influence, at least in some circumstances, the $q(t)$. They don't. \n\n\\end{itemize}\n\nSo it looks like incomplete information, it behaves like incomplete information, it becomes invalid like incomplete information -- it is incomplete information.\n\nAnd, given that we know, in the case of the pre-Schr\\\"{o}dinger\\\/ equation, the fundamental law of evolution of the beables themselves, it also makes no sense to reify this particular probability distribution as objective. A probability distribution defines a set of incomplete information about the real configuration, that's all. \n\n\\subsection{What about the more general case?}\n\nOf course, the considerations above have been based on the equivalence theorem between the classical evolution and the evolution defined by the pre-Schr\\\"{o}dinger\\\/ equation. It was this exact equivalence which was able to give us some certainty, to remove all doubts that there is something else, some information about other beables, hidden in the wave function.
\n\nBut what about the more general case, the case where we do not have an exact equivalence between an equation in terms of $q,\\psi(q)$ and an equation purely in terms of the classical trajectory $q(t)$? In such a situation, the case for an interpretation of $\\psi(q)$ in terms of incomplete information about the $q(t)$ is, of course, a little bit weaker. \n\nNonetheless, given such an ideal correspondence for the pre-Schr\\\"{o}dinger\\\/ equation, the interpretation remains certainly extremely plausible in a more general situation too. Indeed, why should one change the interpretation, the ontological meaning, of $\\psi(q)$, if all that is changed is that we have replaced the pre-Schr\\\"{o}dinger\\\/ equation by another equation? The evolution equation is different, that's all. In itself, a change in the evolution equation does not give even a bit of motivation to change the interpretation. \n\nAnd it should be noted that those who would argue for a different interpretation have a hard job. The similarity between the two variants of the Schr\\\"{o}dinger\\\/ equation does not make it easier: In fact, the funny observation that the Schr\\\"{o}dinger\\\/ equation is the linear part of the pre-Schr\\\"{o}dinger\\\/ equation becomes relevant here. If the Schr\\\"{o}dinger\\\/ equation contained some new terms, this would open the door for attempts to show that the new terms do not make sense in the original interpretation. But there are no new terms in the linear part of the pre-Schr\\\"{o}dinger\\\/ equation. All terms of the Schr\\\"{o}dinger\\\/ equation are already contained in the pre-Schr\\\"{o}dinger\\\/ equation. So they all make sense.\\footnote{It has to be mentioned in this connection that there is something present in the Schr\\\"{o}dinger\\\/ equation which is not present in the pre-Schr\\\"{o}dinger\\\/ equation -- the dependence on $\\hbar$. So one can at least try to base an argument on this additional dependence.
But, as described below, if one incorporates a Nelsonian stochastic process, the dependence on $\\hbar$ appears in a natural way as connected with the stochastic process. So to use this difference to justify a modification of the ontology remains quite nontrivial.}\n\n\\subsection{The classical limit}\n\nThere is also another strong argument for interpreting the above theories in the same way -- that classical theory appears as the classical limit of quantum theory. \n\nThe immediate consequence of this is that the above theories have the same intention -- the description, as accurate as possible, of the same reality. \n\nSo this is not a situation where the same mathematical apparatus is applied to quite different phenomena, so that it would be natural to use a different interpretation even though the same mathematical apparatus is applied. In our case, we use the same mathematical formalism to describe the same thing. At least in the classical limit, quantum theory has to describe the same thing -- with the same interpretation -- as the classical theory. \n\n\\section{What follows from the interpretation}\n\nBut there is not only the similarity between the equations, and that the object is essentially the same, which suggests using the same interpretation for the above variants of the wave function. \n\nThere are also some interesting consequences of the interpretation. And these consequences, to be shared by all wave functions which follow this interpretation, are fulfilled by the quantum wave function. \n\n\\subsection{The relevance of the information contained in the wave function} \n\nA first consequence of the interpretation can be found by considering the \\emph{relevance} of the information contained in the wave function, as information about the real beables. In fact, assume that the wave function $\\psi(q)$ contains a lot of information which is irrelevant as information about the $q$.
This would open the door for some speculation about the nature of this additional information. If it cannot be interpreted as information about the $q$, it has to contain information about some other real beables. So let's consider which of the information contained in the wave function is really relevant, which of it tells us something about the true trajectory $q(t)$.\n\nNow, knowledge about the probability $\\rho(q)$ is certainly relevant if we don't know the real value of $q$. And it is relevant in all of its parts. \n\nThen, as we have seen, $S(q)$ gives us, via the ``guiding equation'' \\eqref{guiding}, the clearly relevant information about the value of $\\dot{q}$ given the value of $q$. Given that we don't know the true value of $q$, this information is clearly relevant. But, different from $\\rho(q)$, the function $S(q)$ also contains a little bit more: Another function $S'=S+c$ would give \\emph{exactly} the same information about the real beables. \n\nAs a consequence, the wave function $\\psi(q)$ also contains a corresponding piece of additional, irrelevant information. The polar decomposition \\eqref{polar} defines what it is -- a global constant phase factor. \n\nSo we find that all the information contained in $\\psi(q)$ -- \\emph{except for a global constant phase factor} -- is relevant information about the real beables $q$. \n\nAt first glance, this seems trivial, but I think it is a really remarkable property of the paleoclassical interpretation. \n\nThe irrelevance of the phase factor is a property of the interpretation of the meaning of the wave function. It follows from this interpretation: We have considered all the ways to obtain information about the beables from $\\psi(q)$, and we have found that all the information is relevant, except this constant phase factor. \n\nAnd now let's consider the equations. 
The equations considered up to now -- the pre-Schr\\\"{o}dinger\\\/ equation as well as the Schr\\\"{o}dinger\\\/ equation -- have the same symmetry regarding multiplication with a constant phase factor: Together with $\\psi(q,t)$, $c\\psi(q,t)$ is a solution of the equations too. \n\nThis is, of course, how it should be if the wave function describes incomplete information about the $q$. If, instead, it described some different, external beables, then there would be no reason at all to assume such a symmetry. In fact, why should the values of some function, values at points far away from each other, be connected with each other by such a global symmetry? \n\nSo every interpretation which assigns the status of reality to $\\psi(q)$ has, in comparison, a problem explaining this symmetry. A popular solution is to assign reality not to $\\psi(q)$, but, instead, to the even more complicated density matrix $|\\psi\\rangle\\langle\\psi|$. The flow variables are, obviously, preferable on grounds of simplicity.\n\n\\subsection{The independence condition}\n\nIt is one of the great advantages of Jaynes' information-based approach to plausible reasoning \\cite{Jaynes} that it contains common sense principles to be applied in situations with insufficient information. If we have no information which distinguishes between the probabilities of the six possible outcomes of throwing a die, we \\emph{have} to assign equal probability to them. Everything else would be irrational.\\footnote{The purely subjectivist, de Finetti approach differs here. It does not make prescriptions about the initial probability distributions -- all that is required is that one updates the probabilities following the rules of Bayesian updating. This difference is one of the reasons for my preference for the objective approach proposed by Jaynes \\cite{Jaynes}.} \n\nSimilar principles work if we consider the question of independence. 
From a frequentist point of view, independence is a quite nontrivial physical assumption, which has to be tested. And, in fact, there is no justification for it, at least none coming from the frequentist interpretation. From the point of view of the logic of plausible reasoning, the situation is much better: Once we have no \\emph{information} which justifies any hypothesis of a dependence between two statements $A$ and $B$, we \\emph{have} to assume their independence. There is no information which makes $A$ more or less plausible given $B$, so we have to assign equal plausibilities: $P(A|B)=P(A)$. But this is the condition of independence, $P(AB)=P(A)P(B)$. \n\nThe different status of plausibilities in comparison with physical hypotheses makes this obligatory character reasonable. Probabilities are not hypotheses about reality, but logical conclusions derived from the available information, derived using the logic of plausible reasoning.\n\nNow, all this is much better explained in \\cite{Jaynes}, so why do I talk about it here? The point is that I want to obtain here a formula which appropriately describes different independent subsystems. Subsystems are independent if we have no information suggesting their dependence. A quite typical situation, so independence is quite typical too. \n\nSo assume we have two subsystems, and we have no information suggesting any dependence between them. What do we have to assume based on the logic of plausible reasoning? \n\nFor the probability distribution itself, the answer is trivial: \n\\begin{equation}\n\\rho(q_1,q_2)=\\rho_1(q_1)\\rho_2(q_2).\n\\end{equation}\nBut what about the phase function $S(q)$? We have to assume that the velocity of one system is independent of the state of the other one. So the potential of that velocity also should not depend on the state of the other system. 
So we obtain \n\\begin{equation}\nS(q_1,q_2) = S_1(q_1) + S_2(q_2).\n\\end{equation}\nThis gives, combined using the polar decomposition \\eqref{polar}, the following rule for the wave function:\n\\begin{equation}\n\\psi(q_1,q_2) = \\psi_1(q_1) \\psi_2(q_2). \n\\end{equation}\n\nOf course, again, the rule is trivial. Nobody would have proposed any other rule for defining a wave function in the case of independence. Nonetheless, I consider it remarkable that it has been derived from the very logic of plausible reasoning and the interpretation of the wave function. \n\nAnd, again, this property is shared by both variants, the quantum as well as the classical one. And if one thinks about the applicability of this rule to the classical variant of the wave function, one has to recognize that it is nontrivial: From a classical point of view, the rule of combining $\\rho(q)$ and $S(q)$ into a wave function $\\psi(q)$ is quite arbitrary, and there is no a priori reason to expect that such an arbitrary combination transforms the rule for defining independent classical states into a simple rule for independent wave functions, moreover, into the one used in quantum theory. \n\n\\subsection{Appropriate reduction to a subsystem}\n\nWhile the pre-Schr\\\"{o}dinger\\\/ equation is non-linear, it shares with the Schr\\\"{o}dinger\\\/ equation the weaker property of homogeneity of degree one: The operator condition \n\\begin{equation}\\label{homogenity}\n\\Omega(c\\psi) = c \\Omega(\\psi)\n\\end{equation}\nholds not only for the Schr\\\"{o}dinger\\\/ operator, but also for the non-linear pre-Schr\\\"{o}dinger\\\/ operator, and not only for $|c|=1$, as required by the $U(1)$ symmetry, which we have already justified, but for arbitrary $c$, so that the $U(1)$ symmetry is, in fact, a $GL(1)$ symmetry of the equations. \n\nSo one ingredient of linearity is shared by the pre-Schr\\\"{o}dinger\\\/ equation. 
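As a side remark, this degree-one homogeneity of the nonlinear term can be checked numerically. The sketch below is illustrative only and not part of the original argument: it assumes units $\hbar = m = 1$, a one-dimensional periodic grid, and the quantum-potential form $Q[\rho] = -\frac12 \Delta\sqrt{\rho}/\sqrt{\rho}$ that appears later in \eqref{derivativeBohm}; the test state is arbitrary.

```python
import numpy as np

# Check that the nonlinear quantum-potential term Q[|psi|^2] psi is
# homogeneous of degree one: Q[|c psi|^2] (c psi) = c Q[|psi|^2] psi
# for an arbitrary complex c (GL(1), not only |c| = 1).
# 1D periodic grid, hbar = m = 1 (assumed units, illustrative only).
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
psi = (1.5 + np.cos(x)) * np.exp(1j * np.sin(x))  # arbitrary smooth state, |psi| > 0

def lap(f):
    # periodic second difference approximating the Laplacian
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2

def omega_nl(psi):
    # nonlinear term: Q[rho] psi with Q = -(1/2) (lap sqrt(rho)) / sqrt(rho)
    s = np.sqrt(np.abs(psi) ** 2)
    return -0.5 * (lap(s) / s) * psi

c = 2.0 - 3.0j
lhs = omega_nl(c * psi)
rhs = c * omega_nl(psi)
assert np.allclose(lhs, rhs)  # holds for arbitrary complex c
```

The key point the check makes visible: $|c\psi| = |c|\,|\psi|$, so the constant $|c|$ cancels in $\Delta\sqrt{\rho}/\sqrt{\rho}$, and the remaining factor $c$ passes through linearly.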
This suggests that for this property there should be a more fundamental explanation, one which is shared by both equations. And, indeed, such an explanation is possible. There should be a principle which allows an appropriate reduction of the equation to subsystems, something like the following\n\n\\begin{principle}[splitting principle]\nThere should be a simple ``no interaction'' condition for the operators on a system consisting of two subsystems, of the type $\\Omega=\\Omega_1+\\Omega_2$, such that the equation $\\pd_t\\psi=\\Omega(\\psi)$ splits for independent product states $\\psi(q_1,q_2)=\\psi_1(q_1)\\psi_2(q_2)$ into independent equations for the two subsystems. \n\\end{principle}\n\nNow, the time derivative splits nicely:\n\\begin{equation}\n\\pd_t \\psi(q_1,q_2)= \\pd_t\\psi_1(q_1)\\psi_2(q_2) + \\psi_1(q_1)\\pd_t\\psi_2(q_2).\n\\end{equation}\nIt remains to insert the equations for the whole system $\\pd_t \\psi = \\Omega(\\psi)$ as well as for the two subsystems into this equation. This gives:\n\\begin{equation}\n\\Omega(\\psi_1(q_1)\\psi_2(q_2)) = \\Omega_1(\\psi_1(q_1))\\psi_2(q_2) + \\psi_1(q_1)\\Omega_2(\\psi_2(q_2)),\n\\end{equation}\nwhere the $\\Omega_i$ are subsystem operators acting only on the $q_i$, so that functions of the other variable are simply constants for them. On the other hand, in the splitting principle we have assumed $\\Omega = \\Omega_1+\\Omega_2$. Comparison gives\n\\begin{equation}\n\\begin{split}\n\\Omega_1(\\psi_1(q_1)\\psi_2(q_2)) &= \\Omega_1(\\psi_1(q_1))\\psi_2(q_2),\\\\\n\\Omega_2(\\psi_1(q_1)\\psi_2(q_2)) &= \\psi_1(q_1)\\Omega_2(\\psi_2(q_2)).\n\\end{split}\n\\end{equation}\nThis follows from the homogeneity condition \\eqref{homogenity}. The weaker $U(1)$ symmetry is not sufficient, because the values of $\\psi_2(q_2)$ may be arbitrary complex numbers. 
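For the linear case the splitting identity can be verified directly on a discretized product state. The sketch below is only an illustration of the structure $\Omega = \Omega_1 + \Omega_2$: it stands in for $\Omega$ with a free-particle operator (a discrete Laplacian with periodic boundaries), which is an assumption of this example, not a claim about the full pre-Schr\"{o}dinger operator.

```python
import numpy as np

def lap1d(n):
    # 1D discrete Laplacian with periodic boundary (grid spacing 1)
    L = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    L[0, -1] = L[-1, 0] = 1
    return L

n = 16
rng = np.random.default_rng(1)
psi1 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
psi2 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
prod = np.outer(psi1, psi2)  # product state psi1(q1) psi2(q2)

L = lap1d(n)
# Omega = Omega_1 + Omega_2 acting on the product state:
# Omega_1 acts only on q1 (left index), Omega_2 only on q2 (right index).
lhs = L @ prod + prod @ L.T
# splitting identity: Omega_1(psi1) psi2 + psi1 Omega_2(psi2)
rhs = np.outer(L @ psi1, psi2) + np.outer(psi1, L @ psi2)
assert np.allclose(lhs, rhs)
```

Here the identity holds exactly because each $\Omega_i$ treats the other factor as a constant, which is precisely the property that the homogeneity condition \eqref{homogenity} extends to the nonlinear case.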
\n\nSomething similar to the splitting property is necessary for any equation relevant for us -- we do not have enough information to consider the equation of the whole universe, and thus have to restrict ourselves to small subsystems. And the equations of these subsystems should be at least approximately independent of the state of the environment. \n\nSo the homogeneity of the equations -- in itself a quite nontrivial property of the equations, given the definition of the wave function by polar decomposition -- can be explained by this splitting property. \n\n\\subsection{Summary}\n\nSo we have found three points, each in itself quite trivial, but each containing some nontrivial element of explanation based on the paleoclassical principles: the global $U(1)$ phase shift symmetry of the wave function, explained by the irrelevance of the phase as information about $q(t)$; the product rule of independence, explained by the logic of plausible reasoning, applied to the set of information described by the wave function; and the homogeneity of the equations, explained in terms of a splitting property for independent equations for subsystems. \n\nAll three points follow the same scheme -- the interpretation in terms of incomplete information allows one to derive a rule, and this rule is shared by both theories, quantum theory as well as the wave function variant of classical theory. \n\nSo all three points give additional evidence that the simple, straightforward proposal to use the same interpretation of the wave function for both theories is the correct one.\n\n\\section{Incorporating Nelsonian stochastics}\n\nThe analogy between the pre-Schr\\\"{o}dinger\\\/ and Schr\\\"{o}dinger\\\/ equations is a useful one, but it is so useful especially because the equations are nonetheless very different. And one should not ignore these differences. 
\n\nOne should not, in particular, try to interpret the Schr\\\"{o}dinger\\\/ equation as the linear approximation of the pre-Schr\\\"{o}dinger\\\/ equation: The pre-Schr\\\"{o}dinger\\\/ equation does not depend on $\\hbar$. Real physical effects do depend on it. And so there should also be something in the fundamental theory, the theory in terms of the trajectory $q(t)$, which depends on $\\hbar$. \n\nHere, Nelsonian stochastics comes to mind. In Nelsonian stochastics the development of the configuration $q$ in time is described by a deterministic drift term $b^i(q(t),t)dt$ and a stochastic diffusion term $dB^i_t$:\n\\begin{equation}\\label{process}\ndq^i(t) = b^i(q(t),t) dt + dB^i_t,\n\\end{equation}\nwhere $B^i_t$ is a classical Wiener process with expectation $0$ and variance\n\\begin{equation}\n\\langle dB^i_t m_{ij} dB^j_t \\rangle = \\hbar dt,\n\\end{equation}\nso that we have an $\\hbar$-dependence on the fundamental level. The probability distribution $\\rho(q(t),t)$ then has to fulfill the Fokker-Planck equation\n\\begin{equation}\\label{FokkerPlanck}\n\\partial_t \\rho + \\pd_i (\\rho b^i) - \\frac{\\hbar}{2}m^{ij}\\pd_i\\pd_j\\rho = 0.\n\\end{equation}\nFor the average velocity $v^i$ one obtains\n\\begin{equation}\\label{vdef}\nv^i(q(t),t) = b^i(q(t),t) - \\frac{\\hbar}{2}\\frac{m^{ij}\\pd_j \\rho(q(t),t)}{\\rho(q(t),t)},\n\\end{equation}\nwhich fulfills the continuity equation. The difference between the flow velocity $b^i$ and the average velocity $v^i$, the osmotic velocity\n\\begin{equation}\\label{osmotic}\nu^i(q(t),t) = b^i(q(t),t)-v^i(q(t),t) = \\frac{\\hbar}{2}\\frac{m^{ij}\\pd_j\\rho(q(t),t)}{\\rho(q(t),t)} = \\frac{\\hbar}{2}m^{ij}\\pd_j \\ln\\rho(q(t),t),\n\\end{equation}\nhas a potential $\\ln\\rho(q)$. 
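The process \eqref{process} can be simulated with a standard Euler-Maruyama scheme, which makes the fundamental $\hbar$-dependence of the diffusion term concrete. The sketch below is one-dimensional with unit mass; the linear drift $b(q) = -q$ is a purely hypothetical placeholder (in Nelsonian stochastics the drift is determined by the wave function, which is not modeled here).

```python
import numpy as np

# Euler-Maruyama sketch of dq = b(q,t) dt + dB_t  (cf. the process above),
# 1D with unit mass, so that <(dB_t)^2> = hbar * dt.
hbar, dt, steps, paths = 1.0, 1e-3, 500, 5000
rng = np.random.default_rng(2)

def b(q, t):
    return -q  # hypothetical linear drift, for illustration only

q = np.zeros(paths)  # ensemble of trajectories, all started at q = 0
for k in range(steps):
    dB = rng.normal(0.0, np.sqrt(hbar * dt), size=paths)  # Wiener increments
    q = q + b(q, k * dt) * dt + dB

# Sanity check: the diffusion increments have variance hbar * dt,
# which is where hbar enters on the fundamental level.
dB = rng.normal(0.0, np.sqrt(hbar * dt), size=10**6)
rel_err = abs(dB.var() - hbar * dt) / (hbar * dt)
```

The ensemble `q` can then be histogrammed and compared against a solution of the Fokker-Planck equation \eqref{FokkerPlanck}; that comparison is left out here.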
The average acceleration is given by\n\\begin{equation}\\label{acceleration}\na^i(q(t),t) = \\partial_t v^i + v^j \\pd_j v^i - \\frac{\\hbar^2}{2} m^{ij}\\pd_j\n \\left(\\frac{m^{kl}\\pd_k\\pd_l \\sqrt{\\rho(q(t),t)}}{\\sqrt{\\rho(q(t),t)}}\\right).\n\\end{equation}\nFor this average acceleration the classical Newtonian law\n\\begin{equation}\\label{Newton}\na^i(q(t),t) = - m^{ij} \\pd_j V(q(t),t)\n\\end{equation}\nis postulated. Putting this into equation \\eqref{acceleration} gives\n\\begin{equation}\\label{derivativeBohm}\n\\partial_t v^i + (v^j \\pd_j)v^i = - m^{ij} \\pd_j \\left(V -\\frac{\\hbar^2}{2}\\frac{\\Delta \\sqrt{\\rho}}{\\sqrt{\\rho}}\\right) = - m^{ij}\\pd_j(V + Q[\\rho]).\n\\end{equation}\n\nThe next postulate is that the average velocity $v^i(q)$ has, at places where $\\rho>0$, a potential, that means, a function $S(q)$, so that \\eqref{guiding} holds. Then equation \\eqref{derivativeBohm} can be integrated (putting the integration constant into $V(q)$), which gives \\eqref{Bohm}. Finally, one combines the equations for $S(q)$ and $\\rho(q)$ into the Schr\\\"{o}dinger\\\/ equation as in de Broglie-Bohm theory. \n\nSo much for the basic formulas of Nelsonian stochastics. Now, the beables of Nelsonian stochastics are quite different from those of the paleoclassical interpretation. The external flow $b^i(q,t)$ does not describe some state of information, but some objective flow which carries the configuration away if it is at $q$. The configuration is guided in a way similar to a swimmer in the ocean: He can randomly swim in one or another direction, but wherever he is, he is always driven by the flow of the ocean, a flow which exists independent of the swimmer. \n\nWhat would be the picture suggested for a stochastic process by the paleoclassical interpretation? It would be different, closer to a spaceship flying through the cosmos. 
If it changes its place because of some stochastic process, there will be no predefined, independent velocity $b^i(q)$ at the new place which can guide it. Instead, it would have to follow its previous velocity. Indeed, the velocity field $v^i(q)$, and, therefore, also $b^i(q)$, at the new place describes only information, not a physical field which could possibly influence the spaceship in its new position. \n\nBut what follows for the information if there is some probability that the stochastic process causes the particle to change its location to the new one? It means that the old average velocity has to be recomputed. \n\nOf course, there is a need for such a recomputation only if the velocity at the new location is different from the old one. So, if $\\pd_i v^k=m^{jk}\\pd_i p_j=0$, there would be no need for such a recomputation at all. This condition is clearly too strong; it would cause the whole velocity field to become constant. But there is an interesting subcondition, namely that $\\pd_i p_j-\\pd_j p_i=0$, the condition that the velocity field is a potential one. So the re-averaging of the average velocity $v^i(q)$ caused by the stochastic process will decrease the curl.\n\nThis gives a first advantage in comparison with Nelsonian stochastics: The potentiality assumption does not have to be postulated, without any justification, for the external flow $b^i(q,t)$. (It is postulated for the average velocity, but their difference -- the osmotic velocity -- has a potential anyway.) In the paleoclassical interpretation we have an independent motivation for postulating this. But, let's recognize, there is no fundamental reason to postulate potentiality: On the fundamental level, the curl may be nontrivial. All that is achieved is that the assumption of potentiality is not completely unmotivated, but appears as a natural consequence of the necessary re-averaging.\n\nThere is another strangeness connected with the external flow picture of Nelsonian stochastics. 
The probability distribution $\\rho(q)$ already characterizes the information about the configuration itself -- it is a probability for the swimmer in the ocean, not for the ocean. Since they depend on $\\rho(q)$, the average velocity $v^i(q)$ as well as the average acceleration $a^i(q)$ also describe information about the swimmer, not about the ocean. Then, it is postulated that the average acceleration of the swimmer has to be defined by the Newtonian law. But because this average acceleration is, essentially, already defined by the very process -- the flow of the ocean -- together with the initial value for the swimmer, this condition for the swimmer becomes an equation for the ocean. This is conceptually unsound -- as if the ocean had to care that the swimmer is always correctly accelerated. \n\nBut this conceptual inconsistency disappears in the paleoclassical interpretation. The drift field is now part of the incomplete information about the configuration itself, as defined by the average velocity and the osmotic velocity. There is no longer any external drift field. And, so, it is quite natural that a condition for the average acceleration of the configuration gives an equation for the average velocity $v^i(q)$. So the paleoclassical picture is internally much more consistent.\n\nBut is it viable? This is a quite non-trivial question discussed below in sec. \\ref{viability}. \n\n\\section{The character of the wave function}\n\nLet's start with the consideration of the objections against the paleoclassical interpretation. Given that the basic formulas are not new at all, I do not have to wait for reactions to this paper -- some quite strong arguments are already well-known.\n\nThe first one is the evidence in favour of the thesis that the wave function describes the behaviour of real degrees of freedom, degrees of freedom which actually influence the things we can observe immediately. 
Here, Bell's argumentation comes to mind first -- an argumentation for a double ontology which, I think, has impressed many of those who today support realistic interpretations: \n\\begin{quote}\nIs it not clear from the smallness of the scintillation on the screen that we have to do with a particle? And is it not clear, from the diffraction and interference patterns, that the motion of the particle is directed by a wave? \n(\\cite{Bell} p. 191).\\end{quote}\nBut what are the points which make this argument so impressive? What is it that motivates us to accept some things as real? Here I see no way to express this better than Brown and Wallace: \n\\begin{quote}\nFrom the corpuscles' perspective, the wave-function is just a (time-dependent) function on their configuration space, telling them how to behave; it superficially appears similar to the Newtonian or Coulomb potential field, which is again a function on configuration space. No-one was tempted to reify the Newtonian potential; why, then, reify the wave-function?\n\nBecause the wave-function is a very different sort of entity. It is contingent (equivalently, it has dynamical degrees of freedom independent of the corpuscles); it evolves over time; it is structurally overwhelmingly more complex (the Newtonian potential can be written in closed form in a line; there is not the slightest possibility of writing a closed form for the wave-function of the Universe.) Historically, it was exactly when the gravitational and electric fields began to be attributed independent dynamics and degrees of freedom that they were reified: the Coulomb or Newtonian `fields' may be convenient mathematical fictions, but the Maxwell field and the dynamical spacetime metric are almost universally accepted as part of the ontology of modern physics.\n\nWe don't pretend to offer a systematic theory of which mathematical entities in physical theories should be reified. 
But we do claim that the decision is not to be made by fiat, and that some combination of contingency, complexity and time evolution seems to be a requirement.\n(\\cite{BrownWallace} p. 12-13)\\end{quote}\n\nSo, let's consider the various points in favour of the reality of the wave function:\n\n\\subsection{Complexity of the wave function} \n\nThe argument of complexity seems powerful. But, in fact, for an interpretation in terms of incomplete information \\emph{this} is not a problem at all. \n\nComplexity is, in fact, a natural consequence of incompleteness of information. The complete information about the truth of a statement is a single bit: true or false. The incomplete information is much more complex: it is a real number, the probability. \n\nLikewise, the complete information about reality in this interpretation is simple: a single trajectory $q(t)$. But incomplete information requires much more: Essentially, we need probabilities for all possible trajectories. \n\n\\subsection{Time dependence of the wave function} \n\nTime dependence is likewise a natural property of information -- complete or incomplete. The information about where the particle has been yesterday transforms into some other information about where the particle is now. \n\nThis transformation is, moreover, quite nontrivial and complex. \n\nIt is also worth noting here that the law of transformation of information is a derivative of the real, physical law of the behaviour of the real beables. \n\nSo it necessarily has all the properties of a physical law. \n\nWe can, in particular, use the standard Popperian scientific method (making hypotheses, deriving predictions from them, testing and falsifying them, and inventing better hypotheses) to find these laws. \n\nThis is, conceptually, a quite interesting point: The laws of probability themselves are best understood, following Jaynes \\cite{Jaynes}, as laws of extended logic, of the logic of plausible reasoning. 
\n\nBut, instead, the laws of \\emph{transformation} of probabilities in time follow from the laws of the original beables in time, and, therefore, have the character of physical laws.\n\nOr, in other words, incomplete information develops in time in a way indistinguishable from the development in time of real beables. In particular, we use the same objective scientific methods to find and test them. \n\nSo, neither the simple fact that there is a nontrivial time evolution, nor the very physical character of the dynamical laws, in all details, up to the scientific method we use to find them, gives any argument against an interpretation in terms of incomplete information. All this is quite natural for the evolution of incomplete information too. \n\n\\subsection{The contingent character of the wave function}\n\nThere remains the most powerful argument in favour of the reality of the wave function: Its contingent character. \n\nThere are different wave functions, and these different wave functions lead to objectively different probability distributions of the observable results. \n\nIf we have, in particular, different preparation procedures, leading to different interference pictures, we really observe different interference pictures. It is completely implausible that these different interference pictures -- quite objective pictures -- could be the result of different sets of incomplete information about the same reality. The different interference pictures are, clearly, the result of different things happening in reality. \n\nBut, fortunately, there is not even a reason to disagree with this. The very point is that one has to distinguish the wave function of a small subsystem -- we have no access to any other wave functions -- from the wave function of a closed system. 
The latter is, in fact, only an object of purely theoretical speculation, because there is no closed system in nature except the whole universe, and we have no idea about the wave function of the whole universe. \n\nFor the wave function of a small subsystem, the situation is quite different. It does not contain only incomplete information about the subsystem. In fact, it is only an effective wave function, and there is a nice formula of dBB theory which can be used in our paleoclassical interpretation too: The formula which defines the conditional wave function of a subsystem $\\psi^S(q_S,t)$ in terms of the wave function of the whole system (say the whole universe) $\\psi(q_S,q_E,t)$ and the configuration of the environment $q_E(t)$:\n\\begin{equation}\\label{conditional}\n\\psi^S(q_S,t) = \\psi(q_S,q_E(t),t).\n\\end{equation}\nThis is a remarkable formula of dBB theory which contains, in particular, the solution of the measurement problem: The evolution of $\\psi^S(q_S,t)$ is, in general, not described by a Schr\\\"{o}dinger\\\/ equation -- if there is interaction with the environment, the evolution of $\\psi^S(q_S,t)$ is different, but, nonetheless, completely well-defined. And this different evolution is the collapse of the wave function caused by measurement. \n\nLet's note that the paleoclassical interpretation requires us to justify this formula in terms of the information about the subsystem. But this is not a problem. Indeed, assume the trajectory of the environment $q_E(t)$ is known -- say by observation of a classical, macroscopic measurement device. Then the combination of the knowledge described by the wave function of the whole system with the knowledge of $q_E(t)$ gives exactly the same knowledge as that described by $\\psi^S(q_S,t)$. 
Indeed, the probability distribution gives\n\\begin{equation}\n\\rho^S(q_S,t) = \\rho(q_S,q_E(t),t),\n\\end{equation}\nand, similarly, the velocity field defined by $S(q)$ follows the same reduction principle:\n\\begin{equation}\n\\nabla S^S(q_S,t) = \\nabla S(q_S, q_E(t),t).\n\\end{equation}\nSo in the paleoclassical interpretation the dBB formula for the conditional wave function of a subsystem is a logical necessity. This provides yet another consistency check for the interpretation. \n\nBut the reason for considering this formula here was a different one: The point is that the wave function of the subsystem in fact contains important information about other real beables -- the actual configuration of the whole environment $q_E(t)$. So there are real degrees of freedom, different from the configuration of the system $q_S(t)$ itself, which are distinguished by different wave functions $\\psi^S(q_S,t)$. \n\nAnd we do not have to object at all if one argues that the wave function contains such additional degrees of freedom. That's fine, it really contains them. These degrees of freedom are those of the configuration of the environment $q_E(t)$. And this is not an excuse, but a logical consequence of the interpretation itself, a consequence of the definition of the conditional wave function of the subsystem \\eqref{conditional}.\n\n\\subsection{Conclusion}\n\nSo the arguments in favour of a beable status for the wave function, even if they seem strong and decisive at first glance, turn out to be in no conflict at all with the interpretation of the wave function of a closed system in terms of incomplete information about this system.\n\nThe most important point for understanding this is, of course, the very fact that the conditional wave functions of the small subsystems of the universe we can consider really have a different character -- they depend on the configuration of the environment. 
And, so, the argumentation that these conditional wave functions describe real degrees of freedom, external to the system itself, is accepted and even derived from the interpretation. \n\nNonetheless, the point that neither the complexity of the wave function of the whole universe, nor its evolution in time, nor the physical character of the laws of this evolution is in any conflict with an interpretation in terms of incomplete information is an important insight too. \n\n\\section{The Wallstrom objection} \n\nWallstrom \\cite{Wallstrom} has made an objection against giving the fields of the polar decomposition $\\rho(q)$, $S(q)$ (instead of the wave function $\\psi(q)$ itself) a fundamental role. \n\nThe first point is that around the zeros of the quantum mechanical wave function, the flow no longer has a potential. The quantum flow is a potential one only where $\\rho(q)>0$. But in general there will be submanifolds of dimension $n-2$ where the wave function is zero. And for a closed path $q(s)$ around such a zero submanifold one finds that\n\\begin{equation}\\label{curl}\n\\oint m_{ij} v^i(q) dq^j \\neq 0.\n\\end{equation}\n\nThis, in itself, is unproblematic for the interpretation: The condition of potentiality is not assumed to be a fundamental one -- the fundamental object is not $S(q)$ but the $v^i(q)$. There will be some mechanism in subquantum theory which locally reduces violations of potentiality, so we can assume that the flow is a potential one only as an approximation. \n\nIt is also quite natural to assume that such a mechanism works more efficiently for higher densities and fails near the zeros of the density. \n\nSo having the violations localized at the zeros of the density is quite nice -- not for a really fundamental equation, which is not supposed to have any infinities, but if we consider quantum theory as being only an approximation. 
\n\nThe really problematic part of the Wallstrom objection is a different one: It is that the quantum flow has to fulfill a nontrivial \\emph{quantization condition}, namely\n\\begin{equation}\\label{curlquantization}\n\\oint m_{ij} v^i(q) dq^j = \\oint \\pd_j S(q) dq^j = 2\\pi m\\hbar, \\qquad m\\in\\mbox{$\\mathbb{Z}$}.\n\\end{equation}\nThe point is, in particular, that the equations \\eqref{Bohm}, \\eqref{continuity} in flow variables are not sufficient to derive this quantization condition. So, in fact, this set of equations is \\emph{not} empirically equivalent to the Schr\\\"{o}dinger\\\/ equation. \n\nThis is, of course, no wonder, given the fact that the equivalence holds only for $\\rho(q)=|\\psi(q)|^2>0$. But, however natural, empirical inequivalence is empirical inequivalence. \n\nMoreover, this condition looks quite artificial in terms of the $v^i$. What is a triviality in terms of the wave function -- that it has to be globally uniquely defined -- becomes extremely artificial and strange if formulated in terms of the $v^i(q)$. As Wallstrom \\cite{Wallstrom} writes, to ``the best of my knowledge, this condition [\\eqref{curlquantization}] has not yet found any convincing explanation outside the context of the Schr\\\"{o}dinger\\\/ equation''. \n\n\\subsection{A solution for this problem} \n\nFortunately, I have found a solution for this problem in \\cite{againstWallstrom}. I do not claim that it is a complete one -- there is a part which is beyond the scope of an interpretation, which has to be left to particular proposals for a subquantum theory. One has to check whether the assumptions I have made about such a subquantum theory are really fulfilled in that particular theory. \n\nThe first step of the solution is to recognize that, for empirical equivalence with quantum theory, it is sufficient to recover only solutions with simple zeros. Such simple zeros give $m=\\pm 1$ in the quantization condition \\eqref{curlquantization}. 
This is a consequence of the principle of general position: A small enough modification of the wave function cannot be excluded by observation, but leads to a wave function in general position, and in general position the zeros of the wave function are non-degenerate. \n\nThe next step is a look at the actual solutions. For the simple, two-dimensional, rotationally invariant, zero-potential case these solutions are defined by $S(q)=m\varphi$, $\rho(q)=r^{2|m|}$. And this extends to the general situation, where $S(q)=m\varphi+\tilde{S}(q)$, $\rho(q)=r^{2|m|}\tilde{\rho}(q)$, such that $\tilde{S}(q)$ is well-defined in a whole neighborhood of the zero, and $\tilde{\rho}(0)>0$.\n\nBut that means we can replace the problem of justifying an integer $m$ in $S(q)=m\varphi$, where all values of $m$ seem equally plausible, by the quite different problem of justifying $\rho(q)=r^2$ (once we need only $m=\pm 1$) in comparison with other $\rho(q)=r^\a$. This is already a quite different perspective. \n\nWe make the natural conclusion and invent a criterion which prefers $\rho(q)=r^2$ in comparison with other $r^\a$. This is quite easy: \n\n\begin{postulate}[regularity of $\Delta\rho$]\label{preg}\nIf $\rho(q)=0$, then $0< \Delta \rho(q) < \infty$ almost everywhere. \n\end{postulate}\n\nIndeed, in the two dimensions transversal to the zero submanifold one has $\Delta r^\a = \a^2 r^{\a-2}$, which at $r=0$ is infinite for $\a<2$ and zero for $\a>2$, but finite and nonzero exactly for $\a=2$. \n\nThis postulate already solves the inequivalence argument. The equations \eqref{Bohm}, \eqref{continuity} for $\rho(q)>0$, together with postulate \ref{preg}, already define a theory empirically equivalent to quantum theory (even if the equivalence is not exact, because only solutions in general position are recovered). \n\nIt remains to invent a justification for this postulate. \n\nThe next step is to rewrite equation \eqref{Bohm} for stable states in the form of a balance of energy densities. 
In particular, we can rewrite the densitized quantum potential as\n\begin{equation}\label{Q}\nQ[\rho]\rho = \frac12\rho u^2 - \frac14 \Delta \rho,\n\end{equation}\nwith the ``osmotic velocity'' $u(q) = \frac12 \nabla \ln \rho(q)$. Then the energy balance looks like\n\begin{equation}\n\frac12\rho v^2 + \frac12 \rho u^2 + V(q) = \frac14\Delta\rho.\n\end{equation}\n\nSo, the operator we have used in the postulate is not an arbitrary expression, but a meaningful term, which appears in an important equation -- an energy balance. This observation is already sufficient to justify the $\Delta \rho(q) < \infty$ part of the condition. There may be, of course, subquantum theories which allow for infinities in energy densities, but it is hardly a problem for a subquantum theory to justify that expressions which appear like energy densities in energy balances have to be finite. \n\nLast but not least, subquantum theory has to allow for nonzero curl $\nabla\times v$, but has to suppress it to obtain a quantum limit. One way to suppress it is to add a penalty term $U(\rho,\nabla\times v)$ which increases with $|\nabla\times v|$. This would give\n\begin{equation}\n\frac12\rho v^2 + \frac12 \rho u^2 + V(q) + U(\rho,\nabla\times v) = \frac14\Delta\rho.\n\end{equation}\nMoreover, subquantum theory has to regularize the infinities of $v$ and $u$ at the zeros of the density. One can plausibly expect that this gives finite but large values of $|\nabla\times v|$ at zero which decrease sufficiently fast with $r$. Now, a look at the energy balance shows that, if the classical potential term $V(q)$ is neglected (for example, assuming that it changes only smoothly), the only term which can balance $U(\rho,\nabla\times v)$ at zero is the $\Delta \rho$ term, which, therefore, has to be finite but nonzero. 
Or, at least, it would not be difficult to modify the definition of $U(\rho,\nabla\times v)$ in such a way that the extremal value $\Delta\rho=0$ (we have necessarily $\Delta\rho\ge 0$ at the minima) has to be excluded. \n\nSo the postulate seems nicely justifiable. For some more details I refer to \cite{againstWallstrom}. What remains is the job of the particular proposals for subquantum theories -- they have to check if the way to justify the postulate really works in that theory, or if it may be justified in some other way. But this is beyond the scope of the interpretation. What has to be done by the interpretation -- in particular, to obtain empirical equivalence with quantum theory -- has been done.\n \n\section{A Popperian argument for preference of an information-based interpretation}\n\nOne of Popper's basic ideas was that we should prefer -- as long as possible without conflict with experience -- theories which are more restrictive, make more certain predictions, depend on fewer parameters. And, while this criterion has been formulated for theories, it should be applied, for the same reasons, to more general principles of constructing theories too. \n\nThis gives an argument for preferring an interpretation in terms of incomplete information. \n\nIndeed, let's consider, from this point of view, the difference between interpretations of the fields $\rho(q)$, $v^i(q)$ in terms of a probability for some real trajectories $q(t)$, and interpretations which reify them as describing some external reality, different from $q(t)$, which influences the trajectory $q(t)$. \n\nIt is quite clear and obvious which of the two approaches is more restrictive. Designing theories of the first type, we are restricted, for the real physics, to theories for single trajectories $q(t)$. Then, given that we have identified the connection between the fields $\rho(q)$, $v^i(q)$ and $q(t)$ as that of a probability flow, everything else follows. 
There is no longer any freedom of choice for the field equations. If we have fixed the Hamiltonian evolution for $p(t), q(t)$, the Liouville field equation for $\rho(p,q)$ is simply a logical consequence. Similarly, the continuity equation \eqref{continuity} is a law of logic: it cannot be modified and is no longer a subject of theoretical speculation. It is fixed by the interpretation of $\rho(q)$, $v^i(q)$ as a probability flow, in the same way as $\rho(q)\ge 0$ is fixed. \n\nIn the second case, we have much more freedom -- the full freedom of speculation about field theories in general. In particular, the continuity equation can be modified, introducing, say, some creation and destruction processes, which are quite natural if $\rho(q)$ describes a density of some external objects. \n\nThe derivation of the $U(1)$ global phase shift symmetry is another particular example of such an additional logical law following from the interpretation. \n\nSo there are some consequences of the interpretation which have purely logical character, including the continuity equation, $\rho(q)\ge 0$, and the $U(1)$ global phase shift symmetry. But these will not be the only consequences. The other equations will be restricted, in comparison with field theories, too, but in a less clear and obvious way. There is, last but not least, a large freedom of choice for the equations of the real beables $q(t)$, which corresponds to a similarly large freedom of choice of the resulting equations for $\rho(q)$, $v^i(q)$. But this freedom of choice will be, nonetheless, much smaller than the complete arbitrariness of a general field theory. \n\nThis consideration strongly indicates that we have to prefer the interpretation in terms of incomplete information until it has been falsified, until it appears incompatible with observation. 
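\n\nThe logical character of this consequence can be displayed explicitly (a standard probability-flow computation, added here only as an illustration): if each single trajectory obeys $\dot{q}^i = v^i(q,t)$, then the conservation of probability alone already forces\n\begin{equation}\n\pd_t\rho(q,t) + \pd_i\left(\rho(q,t)\, v^i(q,t)\right) = 0,\n\end{equation}\nwhich is nothing but the continuity equation \eqref{continuity} -- there is no free parameter left in it which could be the subject of theoretical speculation. 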
\n\nThe immediate, sufficiently trivial logical consequences we have found so far are compatible with the Schr\\\"{o}dinger\\\/ equation and therefore with observation. So we should prefer this interpretation. \n\n\section{Open problems}\n\nInstead of using such a Popperian argumentation, I could as well have simply used Ockham's razor: Don't multiply entities without necessity. Since, given this interpretation, there is no necessity for more than a single classical trajectory $q(t)$, one should not introduce other real entities like really existing wave functions. \n\nBut the Popperian consideration has the advantage that it implicitly defines an interesting research program.\n\n\subsection{Other restrictions following from the interpretation}\label{viability}\n\nIn fact, given the restrictive character of the interpretation, there may be other, additional, more subtle restrictions of the equations for probability flows $\rho(q)$, $v^i(q)$, restrictions which we have not yet identified, but which plausibly exist. \n\nSo what are these additional restrictions for equations for probability flows $\rho(q)$, $v^i(q)$ in comparison with four general, unspecific fields fulfilling a continuity equation? I have no answer. \n\nThis is clearly an interesting question for future research. It is certainly also a question interesting in itself, interesting from the point of view of pure mathematics, for a better understanding of probability theory. \n\nThe consequences may be fatal for this approach -- it may be that we find that the Schr\\\"{o}dinger\\\/ equation does not fit into this set of restrictions. This possibility of falsification is, of course, the very point of the Popperian consideration. I am nonetheless not afraid that this will happen, but this is only a personal opinion. 
\n\nThe situation may be, indeed, much better: That this subclass of theories contains the Schr\\\"{o}dinger\\\/ equation, but appears to be heavily restricted by some additional, yet unknown, conditions. Then, all of these additional restrictions give us partial answers to the ``why the quantum'' question. \n\n\subsection{Why is the Schr\\\"{o}dinger\\\/ equation linear?}\n\nThe most interesting question which remains open is why the Schr\\\"{o}dinger\\\/ equation is linear. We have found only some part of the answer -- an explanation of the global $U(1)$ phase symmetry based on the informational content, and of the homogeneity based on the reduction of the equation to subsystems. \n\nBut, given that the pre-Schr\\\"{o}dinger\\\/ equation is non-linear, but interpreted in the same way, the linearity of the Schr\\\"{o}dinger\\\/ equation cannot follow from the interpretation taken alone. Some other considerations are necessary to obtain linearity.\n\nOne idea is to justify linearity as an approximation. Last but not least, linearization is a standard way to obtain approximations. \n\nThe problem of stability in time may also be relevant here. The pre-Schr\\\"{o}dinger\\\/ equation becomes invalid after a short period of time, when the first caustic appears. There is no such problem in quantum theory, which has a lot of stable solutions. Moreover, there should be not only stable solutions, but also slowly changing solutions: It doesn't even matter if the fundamental time scale is Planck time or something much larger -- even if it is only the time scale of strong interactions, all the things changing around us are changing in an extremely slow way in comparison with this fundamental time scale. But the linear character of the Schr\\\"{o}dinger\\\/ equation gives us a way to obtain solutions slowly changing in time by combining different stable solutions with close energies. 
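\n\nThis mechanism is worth writing down (a textbook superposition, given here only to illustrate the point): for two stationary solutions $\psi_1$, $\psi_2$ with close energies $E_1 \approx E_2$, the linearity of the Schr\\\"{o}dinger\\\/ equation guarantees that\n\begin{equation}\n\psi(q,t) = e^{-iE_1 t\/\hbar}\psi_1(q) + e^{-iE_2 t\/\hbar}\psi_2(q)\n\end{equation}\nis again a solution, and its density $\rho = |\psi|^2$ changes only on the time scale $\hbar\/|E_1-E_2|$, which becomes arbitrarily slow for $E_1 \to E_2$. 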
\n\n\\section{A theoretical possibility to test: The speedup of quantum computers}\\label{computer}\n\nThere is an idea which suggests, at least in principle, a way to distinguish observationally the paleoclassical interpretation (or, more accurate, the class of all more fundamental theories compatible with the paleoclassical interpretation) from the minimal interpretation. \n\nThe idea is connected with the theory of quantum computers. If quantum computers really work as predicted by quantum theory, these abilities will provide fascinating tests of the accuracy of quantum theory. In the case of Simon's algorithm, the speed-up is exponential over any classical algorithm. It may be a key for the explanation of this speed-up that the state space (phase space) of a composite classical system is the Cartesian product of the state spaces of its subsystems, while the state space of a composite quantum system is the tensor product of the state spaces of its subsystems. For $n$ qubits, the quantum state space has $2^n$ instead of $n$ dimensions. So the information required to represent a general state increases exponentially with $n$ (see, for example, \\cite{Bub}). There is also the idea ``that a quantum computation is something like a massively parallel classical computation, for all possible values of a function. This appears to be Deutsch's view, with the parallel computations taking place in parallel universes.'' \\cite{Bub}. \n\nIt is this \\emph{exponential} speedup which suggests that the predictions of standard QM may differ from those of the paleoclassical interpretation. An exact quantum computer would have all beables contained in the wave function as degrees of freedom. A quantum computer in the paleoclassical interpretation has only the resources provided by its beables. But these beables are, essentially, only the classical states of the universe. Given the exponential difference between them, $n$ vs. 
$2^n$ dimensions for qubits instead of classical bits, an exact quantum computer realizable at least in principle in a laboratory on Earth can have more computational resources than the corresponding computer of the paleoclassical interpretation, which can use only the classical degrees of freedom, even if these are the classical degrees of freedom of the whole universe. \n\nBut if we distort a quantum computer, even slightly, the result will be fatal for the computation. In particular, if this distortion is of the type of the paleoclassical interpretation, which replaces an exact computer with a $2^n$-dimensional state space by an approximate one with only $N$ dimensions, then even for quite large $N\gg n$ the approximate computer will be simply unable to do the exact computations, even in principle. There simply are no parallel universes in the paleoclassical interpretation to make the necessary parallel computations. \n\nSo, roughly speaking, the prediction of the paleoclassical interpretation is that a sufficiently large quantum computer will fail to give the promised exponential speedup. The exponential speedup will work only up to a certain limit, defined by the logarithm of the ratio between the size of the whole universe and the size of the quantum computer. \n\nOf course, we do not know the size of the universe. It may be much larger than the size of the observable universe, or even infinite. Nonetheless, this argument, applied to any finite model of the universe, shows that the true theory, the theory in the configurational beables alone, cannot be exactly quantum theory. This is in my opinion the most interesting conclusion. \n\nBut let's see if we can, nonetheless, make predictions which are testable at least in principle. So let's presuppose that the universe is finite, and, moreover, let's assume that this size is not too many orders of magnitude larger than its observable part. 
This would already be sufficient to obtain some estimates of the number of qubits beyond which the $2^n$ exponential speedup is no longer possible. This number will be sufficiently small, small enough that a quantum computer in a laboratory on Earth will be sufficient to reach this limit. \n\nAnd, given the logarithmic dependence on $N$, increasing $N$ does not help much. If it is possible to build a quantum computer with $n$ qubits, why not with $2n$? This would already move the size of the universe into completely implausible regions. \n\n\subsection{The speed of quantum information as another boundary}\n\nInstead of caring about the size of the universe, it may be more reasonable to care about the size of the region which can causally influence us. Here I do not have in mind the limits given by relativity, by the speed of light. Given the violation of Bell's inequality, there has to be (from a realist's point of view) a hidden preferred frame where some other sort of information -- quantum information -- is transferred with a speed much larger than the speed of light. But if we assume that the true theory has some locality properties, even if only in terms of a much larger maximal speed, the region which may be used by a quantum computer for its computational speedup decreases in comparison with the size of the universe. \n\nSo if we assume that there is such a speed limit for quantum information, then we obtain in the paleoclassical interpretation even more restrictive limits for the speedup reachable by quantum computers, limits which depend logarithmically on the speed limit for quantum information. \n\nNonetheless, I personally don't believe that quantum computers will really reach large speedups. I think the general inaccuracy of human devices will prevent us from constructing quantum computers which can really use the full $2^n$ power for large enough $n$. 
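As an aside, the logarithmic character of the size bound discussed above is easy to illustrate numerically. A minimal Python sketch; the values of $N$ below are purely illustrative and are not estimates of the actual number of classical degrees of freedom of the universe:

```python
import math

def max_qubits(classical_dof):
    """Largest number n of qubits whose 2**n-dimensional state space
    still fits into `classical_dof` classical degrees of freedom."""
    return int(math.floor(math.log2(classical_dof)))

# Purely illustrative values of N, spanning many orders of magnitude:
for N in (10**30, 10**60, 10**90, 10**120):
    print(f"N = 1e{len(str(N)) - 1}: at most {max_qubits(N)} qubits")

# The bound is logarithmic: doubling N gains exactly one qubit,
# while one more qubit requires doubling N.
assert max_qubits(2 * 10**30) == max_qubits(10**30) + 1
```

Even letting $N$ grow by ninety orders of magnitude gains only some three hundred qubits, which is why building a quantum computer with $2n$ instead of $n$ qubits would move the required size of the universe into implausible regions.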
I would guess that the accuracy requirements necessary to obtain a full $2^n$ speedup will also grow exponentially. So I guess that quantum computers will fail already on a much smaller scale. \n\n\\section{Conclusions}\n\nSo it's time to summarize:\n\n\\begin{itemize}\n\n\\item The unknown, true theory of the whole universe is a theory defined on the classical configuration space $Q$, with the configuration $q(t)$ evolving in absolute time $t$ as a complete description of all beables. \n\n\\item The wave function of the whole universe is interpreted as a consistent set of incomplete information about these fundamental beables. \n\n\\item In particular, $\\rho(q)$ defines not some ``objective'' probability, but an incomplete set of information about the real position $q$, described, as required by the logic of plausible reasoning, by a probability distribution $\\rho(q)dq$. \n\n\\item The phase $S(q)$ describes, via the ``guiding equation'', the expectation value $\\langle\\dot{q}\\rangle$ of the velocity given the actual configuration $q$ itself. So the ``guiding equation'' is not a physical equation, but has to be interpreted as part of the definition of $S(q)$, which describes which information about $q$ is contained in $S(q)$. \n\n\\item Only a constant phase factor of $\\psi(q)$ does not contain any relevant information about the trajectory $q(t)$. Therefore, the equations for $\\psi(q)$ should not depend on such a factor. \n\n\\item The Schr\\\"{o}dinger\\\/ equation is interpreted as an approximate equation. More is not to be expected, given that it describes the evolution of an incomplete set of information. \n\n\\item The linear character of the Schr\\\"{o}dinger\\\/ equation is interpreted as an additional hint that it is only an approximate equation. \n\n\\item The interpretation can be used to reinterpret Nelsonian stochastics. The resulting picture is conceptually more consistent than the original proposal. 
\n\n\\item The Wallstrom objection appears much less serious than expected. The quantization condition for simple zeros (which is sufficient because it is the general position) can be derived from the much simpler regularity postulate that $0<\\Delta \\rho(q)<\\infty$ if $\\rho(q)=0$. While a final justification of this condition has to be left to a more fundamental theory, it is, as shown in \\cite{againstWallstrom}, plausible that this is not a problem for such theories. \n\n\\item If the true theory of the universe is defined on classical configurations, and the whole universe is finite, quantum computers can give their promised exponential speedup only up to an upper bound for the number of qubits, which is much smaller than the available qubits of the universe. This argument shows that the Schr\\\"{o}dinger\\\/ equation has to be approximate. \n\n\\item The dBB problem with of the ``action without reaction'' asymmetry is solved: For effective wave function, the collapse defines the back-reaction, for the wave function of the whole universe there should be no such back-reaction -- it is only an equation about incomplete information about reality, not about reality itself. \n\n\\item The wave functions of small subsystems obtain a seemingly objective, physical character only because they, as conditional wave functions, depend on the physical beables of the environment. \n\n\\end{itemize}\n\nFrom point of view of simplicity, the paleoclassical interpretation is superior to all alternatives. The identification of the fundamental beables with the classical configuration space trajectory $q(t)$ is sufficient for this point. \n\nIt has also the additional advantage that it leads to strong restrictions of the properties of a more fundamental, sub-quantum theory: It has to be a theory completely defined on the classical configuration space. 
Moreover, it has to be a theory which, in its statistical variant, leads to Fokker-Planck-like equations for the probability flow defined by the classical flow variables $\rho(q)$ and $v^i(q)$. \n\n\begin{appendix}\n \n\section{Compatibility with relativity}\label{relativity}\n\nMost physicists consider the problem of compatibility with relativity as the major problem of dBB-like interpretations -- sufficient to reject them completely. But I have different, completely independent reasons for accepting a preferred frame, so that I don't worry about this. \n\nThere are two parts of this compatibility problem, a physical and a philosophical one, which should not be mingled: \n\nThe physical part is that we need dBB versions of relativistic quantum field theories, in particular of the standard model of particle physics -- versions which do not have to change the fundamental scheme of dBB, and, therefore, may have a hidden preferred frame. \n\nThe philosophical part is the incompatibility of a hidden preferred frame with relativistic metaphysics. \n\nThe physical problem is heavily overestimated, in part because of the way dBB theory is often presented: As a theory of many particles. I think it should be forbidden to introduce dBB theory in such a way. The appropriate way is to present it as a general theory in terms of an abstract configuration space $Q$, and to recognize that field theories as well as their lattice regularizations fit into this scheme. The fields are, of course, fields on three-dimensional space $\mbox{$\mathbb{R}$}^3$ changing in time, and their lattice regularizations live on three-dimensional spatial lattices $\mbox{$\mathbb{Z}$}^3$, not four-dimensional space-time lattices. But this violation of manifest relativistic symmetry is already part of the second, philosophical problem. 
\n\nThe simple, seemingly non-relativistic Hamiltonian \eqref{HamiltonFunction}, with $p^2$ instead of $\sqrt{p^2 + m^2}$, is also misleading: For relativistic field theories the quadratic Hamiltonian is completely sufficient. Indeed, a relativistic field Lagrangian is of type\n\begin{equation}\label{field}\n\mathscr{L} = \frac{1}{2}((\pd_t\varphi)^2 - (\pd_i\varphi)^2)-V(\varphi).\n\end{equation}\nThis gives momentum fields $\pi = \pd_t\varphi$ and the Hamiltonian\n\begin{equation}\n\mathscr{H} = \frac{1}{2}(\pi^2 + (\pd_i\varphi)^2)+V(\varphi) = \frac{1}{2}\pi^2 + \tilde{V}(\varphi)\n\end{equation}\nquadratic in $\pi$, thus the straightforward field generalization of the standard Hamiltonian \eqref{HamiltonFunction}. And for a lattice regularization, the Hamiltonian is already exactly of the form \eqref{HamiltonFunction}. So, whatever one thinks about the dBB problems with other relativistic fields, it is certainly not relativity itself which causes the problem. \n\nThe problem with fermions and gauge fields is certainly more subtle. Here, my proposal is described in \cite{clm}. It heavily depends on a preferred frame, but for completely different reasons -- interpretations of quantum theory are not even mentioned. Nonetheless, fermion fields are obtained from field theories of type \eqref{field}, and gauge-equivalent states are interpreted as fundamentally different beables, so that no BRST factorization procedure is necessary. \n\nAnother part of the physical problem is compatibility with relativistic gravity. Here I argue that it is the general-relativistic concept of background-freedom which is incompatible with quantum theory and has to be given up. I use a quantum variant of the classical hole argument for this purpose \cite{hole}. As a replacement, I propose a theory of gravity with background and preferred frame \cite{glet}. \n\nSo there remains only the philosophical part. 
But here the violation of Bell's inequality gives a strong argument in favour of a preferred frame: Every realistic interpretation needs it. Moreover, the notion of metaphysical realism presupposed by ``realistic interpretation'' is so weak that Norsen \cite{Norsen} has titled a paper ``against realism'', arguing that one should not mention realism at all in this context. The metaphysical notion of realism used there is so weak that to give it up does not save Einstein locality at all -- it is presupposed in this notion too. \n\n\section{Pauli's symmetry argument}\label{Pauli}\n\nThere is also another symmetry argument against dBB theory, which goes back to Pauli \cite{Pauli} and deserves to be mentioned: \n\n\begin{quote}\n\ldots the artificial asymmetry introduced in the treatment of the two variables of a canonically conjugated pair\ncharacterizes this form of theory as artificial metaphysics. (\cite{Pauli}, as quoted by \cite{Freire}),\n\n``\ldots the Bohmian corpuscle picks out by fiat a preferred basis (position) \ldots'' \cite{BrownWallace}\n\end{quote}\n\nHere my counterargument is presented in \cite{kdv}. I construct there an explicit counterexample, based on the KdV equation, showing that the Hamilton operator alone, without a specification of which operator measures position, is not sufficient to fix the physics. It follows that the canonical operators have to be part of the complete definition of a quantum theory and so have to be distinguished by the interpretation as something special, different from the other similar pairs of operators. \n\nThe Copenhagen interpretation makes such a difference -- this is one of the roles played by the classical part. But attempts to get rid of the classical part of the Copenhagen interpretation, without adding something else as a replacement, are not viable \cite{pure}. One has to introduce a replacement. 
\n\nRecognizing that the configuration space has to be part of the definition of the physics gives more power to an old argument in favour of the pilot wave approach, made already by de Broglie at the Solvay conference in 1927:\n\begin{quote}\n``It seems a little paradoxical to construct a configuration space with the coordinates of points which do not exist.'' \cite{deBroglie}.\n\end{quote}\n\n\section{Problems with field theories}\label{fields}\n\nIt has been argued that fields are problematic as beables in general for dBB theory, a point which could be problematic for the paleoclassical interpretation too. \n\nIn particular, the equivalence proof between quantum theory and dBB theory depends on the fact that the overlap of the wave function for different macroscopic states is irrelevant. But it appeared in field theory that for one-particle states there is always a non-trivial overlap, even if these field states are localized far away from each other. \n\nBut, as I have shown in \cite{overlap}, the overlap decreases sufficiently fast (approximately exponentially) with increasing particle number. \n\n\section{Why we observe configurations, not wave packets}\label{wavepackets}\n\nIn the many worlds community there is a quite popular argument against dBB theory -- that it is many worlds in denial (for example, see \cite{BrownWallace}). But this argument depends on the property of dBB theory that the wave function is a beable, a really existing object. So it cannot be applied against the paleoclassical interpretation, where the wave function is no longer a beable. \n\nBut it is in fact invalid even as an argument against dBB theory: already in dBB theory it is the configuration $q(t)$ which is observable, and not the wave function. \n\nThis fact is sometimes not presented in a clear enough way, so that misrepresentations become possible. 
For example, Brown and Wallace \cite{BrownWallace} find support for another interpretation even in Bohm's original paper \cite{Bohm}:\n\begin{quote}\n\ldots even in his hidden variables paper II of 1952, Bohm seems to associate the wavepacket chosen by the corpuscles as the representing outcome of the measurement -- the role of the corpuscles merely being to point to it.\n(\cite{BrownWallace} p. 15)\end{quote}\nand support their claim with the following quote from Bohm:\n\begin{quote}\nNow, the packet entered by the apparatus variable $y$ determines the actual result of the measurement, which the observer will obtain when he looks at the apparatus.\n(\cite{Bohm} p. 118)\end{quote}\nThis quote may, indeed, lead to misunderstandings about this issue. So, maybe we observe only the wave packet containing the configuration, instead of the configuration itself? \n\nMy answer is a clear no. I do not believe in the existence of sufficiently localized wave packets to construct some effective reality out of them, as assumed by many worlders. \n\nToday they use decoherence to justify their belief that wave packets will be sufficiently localized. But decoherence presupposes another structure -- a decomposition of the world into systems. Only from the point of view of such a subdivision of $q$ into, say, $(x,y)$, a non-localized wave function like $e^{-(x-y)^2\/2}$ may look similar to a superposition, for different $a$, of product states localized in $x$ and $y$ like $e^{-(x-a)^2}\cdot e^{-(y-a)^2}$. \n\nBut where does this subdivision into systems come from? The systems around us -- observers, planets, measurement devices -- cannot be used for this purpose. They do not exist on the fundamental level, as a predefined structure on the configuration space. But the subdivision into systems has to exist on the fundamental level, once we need it to construct localized objects. 
Else, the whole construction would be circular.\n\nSo one would have to postulate something else as a fundamental subdivision into systems. This something else is undefined in the interpretations considered here, so an interpretation based on it is simply another interpretation, with another, additional fundamental structure -- a fundamental subdivision into systems. \n\n\end{appendix}\n\n\section{Introduction}\n\n\nInfrared maps of star forming Giant Molecular Clouds (GMCs) are an essential tool in the\nmodern study of star formation. When radio and millimeter maps also exist, the relationships\nbetween the regions of infrared, millimeter and radio activity provide one of the key new tools\nfor clarifying the varieties of star formation that can occur. The sensitivity of infrared techniques\nmeans that even shallow surveys can in principle reveal the processes of both low and high mass\nstar formation in clouds that are not too far away. There are, however, very few nearby GMCs\nwith which to take full advantage of these techniques. Among these available targets, one is the\nVela Molecular Ridge (VMR), a complex of four adjoining GMCs (Murphy \& May 1991;\nYamaguchi et al. 1999), located in the galactic plane ($b$=$\pm$3$^{\circ}$) outside the solar\ncircle ($l \sim$ 260$^{\circ}$ - 275$^{\circ}$); most of the gas (clouds named A, C and D) is\nlocated at a distance of about 700 pc (Liseau et al. 1992).\n\nThis team has studied the star formation activity in the VMR for many years: the concentration of\nred and young sources (Liseau et al. 1992, Lorenzetti et al. 1993); the presence of embedded\nclusters (Massi et al. 2000, 2003); the occurrence of protostellar jets (Lorenzetti et al. 2002;\nGiannini et al. 2001, 2005, De Luca et al. 2007, hereinafter D07). 
Recently we mapped with the SIMBA\nbolometer array at SEST a $\\sim$ 1 deg$^2$ area of the cloud D in the 1.2\\,mm continuum of dust emission, \nand in the $^{12}$CO(1--0) and $^{13}$CO(2--1) transitions (Massi et al. 2007, hereinafter M07; Elia et\nal. 2007, hereinafter E07).\n\nThe advent of the Spitzer Space Telescope (SST, Werner et al. 2004) and the imaging\nphotometric facilities on board, i.e. the Multiband Imaging Photometer for Spitzer (MIPS, 24, 70, \n160\\,$\\mu$m; Rieke et al. 2004) and the InfraRed Array Camera (IRAC, Fazio et al.\n2004), have enabled us to obtain maps of the VMR from 3.6 to 70~$\\mu$m across the same area\nalready surveyed in the millimeter emission of dust and gas. The primary goal of this survey is to\nobtain a census of the embedded young stellar population of VMR-D and to correlate it with\nits gas and dust cores. \n\nThis paper describes our MIPS observations of the VMR-D; it is the first of a series of papers we\nare preparing dealing with VMR-D as seen by Spitzer; the IRAC data of the same region will be\npresented in a separate paper. Spitzer surveys of several other star\nforming regions have already been published, most of them in the framework of the cores-to-disk\n(c2d) legacy project (Evans et al. 2003). Most of these regions, however, are located outside the\nGalactic plane ($|b|$ $>$ 10$^{\\circ}$) in regions that were originally selected in part to avoid\nstrong confusion and extinction problems. A huge amount of observational material has so\nfar been accumulated and published on those clouds. For the VMR, our current multi-frequency\ndatabase, when combined with the increased sensitivity of current instruments, has allowed us to\novercome many of the problems associated with observations of GMCs in the galactic plane. 
\nSince the plane is where most of the material currently forming stars is located, it is both a\nnatural and critical region to understand, and will also help with the comparison of the\nderived properties of our Galaxy with those of external galaxies, whose planes are the only\nzones we are able to sample.\n\nSpitzer has surveyed many different types of star formation regions. \nTherefore, legitimate comparisons between them all would benefit from a standard analysis and\npresentation, although some problems could arise because of their numerous differing parameters, \nas well as the various details of the observations. In this paper we therefore adopt as much \nas possible the methods that have already been used successfully in the c2d program. \n\nOur paper is organized as follows: in Sect.\\,2 we give the details of the observations and data\nreduction procedure; the results are presented in Sect.\\,3 and discussed in Sect.\\,4 and 5.\nConcluding remarks are given in Sect.\\,6.\n\n\n\\section{Observations} \n\nThe VMR-D cloud was observed with MIPS on board the Spitzer Space Telescope\nwithin the Guaranteed Time Observation program (PID 30335).\nThe observations covered $\\sim$ 1.15 (in R.A.) $\\times$ 1.6 (in dec.)
degrees centered \nat $\\alpha$, $\\delta$ (J2000) = 8$^h$~47$^m$~50$^s$,\n-43$^{\\circ}$~42$^{\\prime}$~13$^{\\prime\\prime}$ \nand 8$^h$~48$^m$~20$^s$, -43$^{\\circ}$~31$^{\\prime}$~26$^{\\prime\\prime}$ at \n24~$\\mu$m and 70~$\\mu$m, respectively (orientation: 145$^{\\circ}$ W of N).\n\nData were collected on 14 Jun 2006 in scan mode, medium speed, with 5 scan legs \nand 160$^{\\prime\\prime}$ cross-scan step, resulting in a total integration time \nof 40 seconds (for both 24 and 70~$\\mu$m) per pixel.\nThe mapping parameters were optimized for the 24 and 70~$\\mu$m bands: as a consequence, \nthe 160~$\\mu$m map suffers from coverage gaps and saturation and will not be considered \nin the following.\n\nThe SSC pipeline, version S14.4.0, produced basic calibrated data (BCDs) \nthat we have used to obtain mosaicked, pointing-refined images by means of \nthe MOPEX package provided by the Spitzer Science Center (Makovoz \\& Marleau 2005).\n\nThe main instrumental artifacts have been removed from the mosaicked images \nby means of the MOPEX package. Minor problems of residual jailbars \n(especially at 70~$\\mu$m) and of background matching between adjacent frames \n(at 24~$\\mu$m) are still visible close to the brightest objects, but they do \nnot significantly affect the point-source photometry discussed in this paper.\n\nThe global properties of the final 24\\,$\\mu$m map can be summarized as follows: \npixel scale of 2.45$^{\\prime\\prime}$\/pixel, background r.m.s. of 0.3~$\\mu$Jy\/arcsec$^2$ \nwithin the regions of high diffuse emission. The brightest sources saturate at the\nemission peak: for these we estimate a lower limit to the integrated flux of 4 Jy.\nIn the 70\\,$\\mu$m map the pixel scale is 4.0\\,$^{\\prime\\prime}$\/pixel and the background r.m.s.\nranges between 23 and 94\\,$\\mu$Jy\/arcsec$^2$. 
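As a rough consistency check, the background r.m.s. values above can be turned into an idealized point-source detection limit by scaling the per-arcsec$^2$ noise to a photometric aperture. The aperture radii below (about one FWHM: 6$^{\prime\prime}$ at 24\,$\mu$m, 18$^{\prime\prime}$ at 70\,$\mu$m) and the white-noise assumption are our simplifications; real limits are higher because of correlated noise, source confusion and the strongly varying background:

```python
import math

def aperture_noise(rms_ujy_per_arcsec2, radius_arcsec):
    """1-sigma flux noise in a circular aperture, assuming uncorrelated
    noise cells of 1 arcsec^2 each (idealized white-noise case)."""
    area = math.pi * radius_arcsec**2              # aperture area [arcsec^2]
    return rms_ujy_per_arcsec2 * math.sqrt(area)   # [uJy]

# Background r.m.s. from the maps (Sect. 2); aperture radii ~ one FWHM.
lim24 = 3 * aperture_noise(0.3, 6.0)    # 3-sigma limit at 24 um [uJy]
lim70 = 3 * aperture_noise(60.0, 18.0)  # 3-sigma limit at 70 um [uJy]
```

Both idealized limits fall far below the completeness limits of 5 and 250 mJy quoted later, showing that completeness is set by background structure and confusion rather than by pixel noise.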
None of the detected sources appears saturated\nat this wavelength.\n\n\n\n\\section{Results} \n\n\\begin{figure*}\n\\includegraphics[angle=0,scale=0.8]{f1.eps}\n\\caption{MIPS two-color map (24\\,$\\mu$m in blue, 70\\,$\\mu$m in red) of\nVMR-D.\n\\label{MIPS24-70}}\n\\end{figure*}\n\n\n\\begin{figure*}\n\\includegraphics[angle=0,scale=0.8]{f2.eps}\n\\caption{Mosaic of the VMR-D map at 24~$\\mu$m, with the $^{12}$CO intensity map superposed \n(map limits depicted in yellow, contours in green), integrated in the velocity range \n-2\\,$\\div$\\,20 km s$^{-1}$\n(adapted from E07). Also overlaid is the 1.2\\,mm\ndust emission map (red contours, adapted from M07).\nCO contour levels start from 5 K km s$^{-1}$ and are in steps of\n25 K km s$^{-1}$, while dust contours start from 50 mJy\/beam and\nare in steps of 50 mJy\/beam. \\label{MIPS24}}\n\\end{figure*}\n\n\\begin{figure*}\n\\includegraphics[angle=0,scale=0.8]{f3.eps}\n\\caption{As Figure~\\ref{MIPS24} for the 70\\,$\\mu$m map.\\label{MIPS70}}\n\\end{figure*}\n\nFigure\\,\\ref{MIPS24-70} shows the two-color final mosaic of VMR-D\n(24\\,$\\mu$m in blue, 70\\,$\\mu$m in red), while in Figures \\ref{MIPS24} and \n\\ref{MIPS70} the images in each of the two filters are shown separately.\nIn the latter, the 1.2\\,mm dust map (adapted from M07) and the $^{12}$CO intensity map\nintegrated in the velocity range -2\\,$\\div$\\,20 km s$^{-1}$ (adapted from E07) \nhave been superimposed for comparison. We define as `on-cloud' all objects \ninside the CO contours. Such a definition is perforce just a first-level effort at delimiting\nthe sources belonging to the molecular cloud; indeed, it is clear from Figures \\ref{MIPS24} and\n\\ref{MIPS70} that \nthe $^{12}$CO emission remains well above the 3$\\sigma$ level at the north and west borders \nof the gas map, and thus sources belonging to VMR-D could exist toward these directions. 
\nConsidering such sources as 'off-cloud' will have the effect of reducing the distinctions between\nthe 'on' and 'off' cloud populations; these sources should then be considered on a \ncase-by-case basis (see sect.5.1). We have also considered as 'off-cloud' those regions where the CO peak velocity\nis higher than 20~km s$^{-1}$, since they are likely to be more distant and unassociated\nwith VMR-D (see Figure~1 in Lorenzetti et al. 1993). \n\n\nThe point-source extraction and photometry processes were performed by using\nthe {\\it DAOPHOT} task of the astronomical data analysis package {\\it\nIRAF}\\footnote{IRAF, the Image Reduction and Analysis Facility, is a general\npurpose software written and supported by the IRAF programming group at\nthe National Optical Astronomy Observatories (NOAO) in Tucson, Arizona \n(http:\/\/iraf.noao.edu).}.\nGiven the size of the MIPS mosaic, it was impossible to apply an \nautomatic procedure for finding sources down to the sensitivity \nlimits without being affected by the locally varying background level; we \ntherefore applied a search algorithm that goes as deep as possible \nwhile remaining compatible with an automatic procedure. The algorithm\nwas applied to a differential image produced by subtracting from the final mosaic a 'sky' image, \nthe latter obtained by applying to the mosaic a median filter over boxes of 5$\\times$5 pixels.\nA threshold of 30\\,$\\sigma$ has been imposed on the sky-subtracted image, which\ncorresponds to at least 5\\,$\\sigma$ (depending on the local background) \nin the unsubtracted image.\n\n\n\\begin{figure}\n\\includegraphics[angle=-90,scale=.40]{f4.eps}\n\\caption{Histogram of the sources detected at 24~$\\mu$m. The completeness\nlimit is around 5 mJy, as indicated by the vertical line.\\label{compl24}}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[angle=-90,scale=.40]{f5.eps}\n\\caption{Histogram of the sources detected at 70~$\\mu$m. 
The completeness\nlimit is around 250 mJy, as indicated by the vertical line.\\label{compl70}}\n\\end{figure}\n\n\\begin{figure*}\n\\includegraphics[angle=0,scale=1]{f6.eps}\n\\caption{Left panel: differential number counts at 24~$\\mu$m. Thick and thin lines refer to\nsources in VMR-D within and outside the $^{12}$CO contours, respectively.\nExtragalactic background sources from the SWIRE ELAIS N1 field are shown for \ncomparison (these latter have been taken from Figs.6 and 7 in Rebull et al. 2007, hereinafter\nR07). Right panel: as \nin left panel at 70~$\\mu$m. \n\\label{fig:counts24_70}}\n\\end{figure*}\n\nThe automated methods just described lead to the detection of 838 and 61 point sources \nat 24~$\\mu$m and 70~$\\mu$m, respectively. A further 12 detections have been added\nto the 24~$\\mu$m list by applying local sky values in selected areas (see the discussion below).\nThe source distribution as a function of the measured flux is depicted in Figures~\\ref{compl24}\nand \\ref{compl70}, where the completeness limits can be evaluated as the flux bin corresponding\nto the maximum counts before the decline at lower fluxes due to\nthe instrumental sensitivity. We determine that our sample is complete down to 5 and 250 mJy at\n24 and 70~$\\mu$m, respectively. \n\nA statistical summary of the detected sources is presented in Table \\ref{tab:tab1}. \nAbout 45\\% of the 24~$\\mu$m sources are spatially located inside the region \ndelimited by the $^{12}$CO contours ($\\sim$ 0.61 deg$^2$), even though this region \nis only about one half the size of the remaining mapped area ($\\sim$ 1.23 deg$^2$). \nThis result gives an initial indication of how the IR source density increases by \nabout 100\\% moving from outside to inside the CO contours, and the pattern becomes even\nmore significant for the 70\\,$\\mu$m sources, whose density increases by a factor of four. 
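The two-step extraction scheme described above (a median-filtered `sky' image subtracted from the mosaic, followed by a 30$\sigma$ threshold on the difference) can be sketched as follows. The synthetic image, the Gaussian noise model and the connected-component grouping are our illustrative assumptions, not the actual DAOPHOT implementation:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# Synthetic mosaic: smoothly varying background + noise + two point sources.
y, x = np.mgrid[0:200, 0:200]
image = 50 + 0.1 * x + rng.normal(0, 1.0, (200, 200))
for (sy, sx) in [(60, 80), (150, 40)]:
    image += 400 * np.exp(-((y - sy)**2 + (x - sx)**2) / (2 * 2.0**2))

# 'Sky' image: median filter over 5x5 boxes, then subtract it.
sky = ndimage.median_filter(image, size=5)
diff = image - sky

# 30-sigma threshold on the sky-subtracted image (sigma from clipped pixels).
sigma = np.std(diff[np.abs(diff) < 5 * np.median(np.abs(diff))])
mask = diff > 30 * sigma

# Group thresholded pixels into discrete detections.
labels, n_detected = ndimage.label(mask)
```

The median filter tracks the slowly varying background (including the linear gradient) while leaving compact sources in the difference image, which is why a very high threshold on `diff` still recovers both synthetic sources.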
\nThe 24~$\\mu$m counts per deg$^2$ are \nrepresented in Figure~\\ref{fig:counts24_70}, left panel, where different symbols indicate those\nsources located respectively within and outside the CO contours.\nAlso in this plot, where the differential number density is shown, there is a drop \nfor F$_{\\nu}$ $<$ 35\\,mJy. For greater F$_{\\nu}$ values the number of objects inside the gas\ncontours (i.e. those more likely associated with the cloud) systematically\nexceeds the number of objects outside the cloud, giving reasonable support to the\nempirical significance of this crude classification. In addition, Figure~\\ref{fig:counts24_70}\ncompares the on- and off-cloud samples with the Spitzer Wide-area Infrared\nExtragalactic Survey (SWIRE, Lonsdale et al. 2003) legacy program. A\nsignificant amount of contamination from the extragalactic background is predicted (at\n24~$\\mu$m) for flux densities $<$10 mJy down to the completeness limit, so that the 'on' and 'off'\nsource populations at this level become indistinguishable. \n\nThe counts per deg$^2$ at 70~$\\mu$m are depicted \nin Figure~\\ref{fig:counts24_70}, right panel: here again the possible extragalactic contamination\nappears just at (or even below) the completeness limit.\\\\\nIn the same Figure~\\ref{fig:counts24_70}, we also note that the 70~$\\mu$m counts confirm the\non- and off-cloud distributions already found at 24\\,$\\mu$m. \n\n\nThe complete catalog of the detected sources is given in electronic form (a short sample version is printed in Table~\\ref{tab:tab2}). In Table~\\ref{tab:tab3}\nwe show the list of the 70~$\\mu$m detections: of the 61 sources, 52 of them are \ncoincident with a 24~$\\mu$m source (i.e. the distance in both right ascension \nand declination is less than the 20$^{\\prime\\prime}$ PSF radius \nat 70 $\\mu$m, see the summary of Table~\\ref{tab:tab1}). 
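The positional association used above (24\,$\mu$m counterparts accepted within the 20$^{\prime\prime}$ 70\,$\mu$m PSF radius, and dust cores within the 24$^{\prime\prime}$ SIMBA HPBW) amounts to a nearest-neighbour match on angular separation. A minimal sketch, with invented coordinates rather than entries from our catalog:

```python
import math

def ang_sep_arcsec(ra1, dec1, ra2, dec2):
    """Small-angle separation between two positions given in degrees,
    with the cos(dec) compression of right ascension."""
    dra = (ra1 - ra2) * math.cos(math.radians(0.5 * (dec1 + dec2)))
    ddec = dec1 - dec2
    return 3600.0 * math.hypot(dra, ddec)

def match(cat_a, cat_b, radius_arcsec):
    """For each (ra, dec) in cat_a, index of the nearest cat_b source
    within radius_arcsec, or None."""
    pairs = []
    for ra, dec in cat_a:
        seps = [ang_sep_arcsec(ra, dec, rb, db) for rb, db in cat_b]
        j = min(range(len(seps)), key=seps.__getitem__) if seps else None
        pairs.append(j if j is not None and seps[j] <= radius_arcsec else None)
    return pairs

# Illustrative coordinates only (not taken from the catalog of this paper).
mips70 = [(131.958, -43.704), (132.100, -43.500)]
mips24 = [(131.9585, -43.7041), (131.800, -43.300)]
matches = match(mips70, mips24, 20.0)  # 20" = 70 um PSF radius
```

The first 70\,$\mu$m position falls about 1$^{\prime\prime}$ from a 24\,$\mu$m source and is matched; the second has no counterpart within 20$^{\prime\prime}$ and is left unmatched.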
In Table~\\ref{tab:tab3}, we list the \n24~$\\mu$m coordinates (which are more accurate than the 70~$\\mu$m ones because of the\nsmaller PSF at 24~$\\mu$m), the distance from the 70~$\\mu$m coordinates, \n($\\Delta\\alpha$\/$\\Delta\\delta$)$_{70}$, the measured flux at 24 and 70~$\\mu$m along with \nthe relative uncertainties, a flag indicating whether or not \nthe source is located inside the region delimited by the CO emission contours, and\nthe association with a dust core, if any. The latter is based on the distance between the 24$\\mu$m\nand mm coordinates, ($\\Delta\\alpha$\/$\\Delta\\delta$)$_{mm}$, which must be within the SIMBA HPBW \nof 24$^{\\prime\\prime}$. All the dust cores associated with a 24~$\\mu$m source are also associated with\nits 70~$\\mu$m counterpart.\\\\\nNine 70~$\\mu$m sources have no 24~$\\mu$m\ncounterpart. Four of these were not imaged at 24~$\\mu$m because of the shift between the two\nmaps, four appear as diffuse or with a filamentary structure at 24~$\\mu$m, and one \nhas F$_{24}$ $<$ 1.2 mJy (3$\\sigma$ upper limit): this source (\\#1 in Table~\\ref{tab:tab3})\ncould be (if not a galaxy) a very young protostar that deserves further attention. \n\nWe also provide in Table~\\ref{tab:tab3} the association of\nMIPS 24\/70~$\\mu$m sources with the dust cores found in VMR-D by M07.\n The detailed study of these sources will be addressed in a future paper; here we give some\npreliminary results and point out some statistical aspects. In the region mapped in the dust\nemission at 1.2\\,mm \n(see Figures\\,\\ref{MIPS24} and \\ref{MIPS70}), a robust sample of 29 cores has been revealed,\nalong with 26\ncores whose size is below the map spatial resolution (24$^{\\prime\\prime}$). 
\nD07 have associated 12 of these cores \n(8 resolved and 4 under-resolved) with an IRAS or MSX point source, while the remaining 43\ncores\nare not associated with any FIR counterpart, so that they appear to be either cold Class\n0 sources\/starless cores (in the case of resolved cores) or possibly data artifacts (in the case of \nunder-resolved cores). As stated in D07, such a high fraction of starless cores as compared to\nprotostellar cores is most likely a result of the poor sensitivity of the IRAS\/MSX facilities. Our\nsignificantly more sensitive MIPS data offer the opportunity to check whether \nor not such a bias exists, and eventually to find weak counterparts of the dust cores.\nIn order to resolve this issue we closely reexamined our maps, performing photometry at the mm\npeak coordinates using local rather than global thresholds for the background level. This\ntechnique turned up 12 new objects at flux densities as low as 0.7 mJy at 24~$\\mu$m, fainter\nthan the completeness limit by more than a factor of 7.\n\nThis procedure, together with the automatic search described above, \nled to the association of 23 resolved and 20 under-resolved cores with 58\nsources at 24~$\\mu$m and 19 sources at 70~$\\mu$m (in some cases we found multiple\nassociations), thereby dramatically increasing the\npercentage of cores associated with an embedded protostar from 22\\% (D07) to 78\\%. This\nresult is in general agreement with recent MIPS findings in other GMCs that have substantially\nmodified the percentage of active vs. inactive cores in favor of the former (e.g. Young et al.\n2004). We also note that the existence of a MIPS counterpart to 20 out of 26 under-resolved\ncores significantly reduces the possibility that these objects are simply data artifacts. 
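The local-threshold reexamination at the mm peak positions is essentially forced aperture photometry with a locally estimated background. A minimal numpy sketch, where the aperture and annulus radii and the synthetic data are our assumptions:

```python
import numpy as np

def forced_photometry(image, x0, y0, r_ap=4.0, r_in=6.0, r_out=10.0):
    """Aperture sum at a fixed position minus the local median background
    estimated in a surrounding annulus (result in image units * pixel)."""
    y, x = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    r = np.hypot(x - x0, y - y0)
    bkg = np.median(image[(r >= r_in) & (r < r_out)])  # local, not global
    ap = r < r_ap
    return float(image[ap].sum() - bkg * ap.sum())

# Synthetic frame: bright uniform background + noise + a faint source.
rng = np.random.default_rng(1)
img = 10.0 + rng.normal(0, 0.2, (64, 64))
yy, xx = np.mgrid[0:64, 0:64]
img += 3.0 * np.exp(-((yy - 30)**2 + (xx - 40)**2) / (2 * 1.5**2))

flux = forced_photometry(img, x0=40, y0=30)
```

Because the background is measured in a small annulus around the known mm position, a source far below any global detection threshold can still be recovered, which is how objects down to 0.7 mJy were extracted.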
The lack,\neven at the MIPS sensitivity, of a FIR counterpart to five\nresolved dust peaks (namely MMS 6,\\,13,\\,15,\\,20,\\,24 in the list by M07) makes these objects a\nvery robust sample of genuine starless cores. \n\n\n\\section{Comparison with IRAS sources}\nThe similarity of the MIPS 24 and 70~$\\mu$m bandpasses to the \n25 and 60\\,$\\mu$m filters on board IRAS offers us the opportunity to \nevaluate directly the reliability of the IRAS point source catalogue (IRAS-PSC) \nfluxes in crowded and diffuse clouds like VMR-D, objects that are commonly found \nin the galactic plane. A similar study has already been performed \nby R07 in the Perseus molecular cloud; although in that case\nthe location away from the Galactic plane makes extended emission and source \nconfusion less critical, only 61\\% (at 25$\\mu$m) and 32\\% (at 60$\\mu$m)\nof the objects of the IRAS-PSC are recovered by MIPS as point-like sources, while all\nthe others, although detected, remain confused with nebulosity. Higher rates of coincidence \nare found, at least at 25\\,$\\mu$m, if the Faint Source Catalogue (FSC) -\nproduced by point-source filtering the individual detector data streams - is used.\n\nUnfortunately, the FSC does not\ncover the galactic plane, so that we cannot confirm this result for VMR-D. \nHere, a total of 57 high (f$_{qual}$=3) or moderate (f$_{qual}$=2) quality \ndetections are listed in the IRAS-PSC catalogue at 25\\,$\\mu$m; 46 of them (80\\%)\nare also seen by MIPS and recovered with our algorithm, while the remaining 11 IRAS objects\nappear as diffuse \nemission at 24\\,$\\mu$m and are thus undetected as point sources. \nThe matching rate for VMR-D is thus higher than in Perseus. The same trend is seen \nat 60\\,$\\mu$m, where, out of 48 IRAS-PSC items, the retrieval rate is about 50\\%. \nIn Table~\\ref{tab:tab4} we give the list of the IRAS-PSC sources (with any f$_{qual}$) not \nrecovered by MIPS. 
\nThe IRAS sources in the table marked as `off-edge' in one MIPS bandpass are necessarily \n`on-edge' in the other, because of the spatial shift between the two focal plane\narrays. Along with the f$_{qual}$ flag, we also give in Table~\\ref{tab:tab4}\nthe IRAS correlation coefficient flag (cc), which provides an indication of the \npoint-likeness confidence of the detected source. This flag is coded as an alphabetical\ncharacter, with subsequent letters corresponding to decreasing accuracy (i.e. A$>$99\\%,\nE$>$96\\%). \nNoticeably, PSC sources retrieved by MIPS (with f$_{qual}$=2,3) show, on average, \na `cc' flag equal to \"A\" or \"B\": this can thus be used as a \nbroad criterion to distinguish genuine point sources from diffuse emission when \nMIPS maps (and FSC detections) are unavailable.\n\n\n\n\\section{Color-Magnitude diagrams}\n\n\\subsection{K$_s$ vs. K$_s$-[24]}\n\n\\begin{figure}\n\\includegraphics[angle=0,scale=.50]{f7.eps}\n\\caption{Color-magnitude diagram for the 2MASS K$_s$-band\nand the MIPS 24\\,$\\mu$m sources. Of the 849 24~$\\mu$m sources \nin the MIPS map, 401 have a K$_s$ detection within a radius of 5 arcsec.\nThese are shown by red dots if located inside the CO contour map (180 sources)\nand by black dots if outside (221 sources).\nLarge dots denote sources with 70~$\\mu$m detections, while arrows refer to\nsources saturated at 24\\,$\\mu$m. Hatched\nareas are the {\\it loci} of the sources in the SWIRE survey (taken from R07). \nThe thick line indicates the effect of the extinction for different values of A$_V$ (open squares\nrefer to A$_V$=10 and 50~mag).\\label{fig:k-24}}\n\\end{figure}\n\nAbout half of the 24~$\\mu$m detections have identifiable 2MASS counterparts \nat K$_s$ (limiting magnitude of 15.3) within a radius of 5$^{\\prime\\prime}$. \nThese 2MASS fluxes have been used to construct the K$_s$ vs. 
K$_s$-[24] \ncolor-magnitude diagram given in Figure~\\ref{fig:k-24}, where MIPS \nsources inside and outside the CO contours are shown with different colors. \nAlso reported as hatched areas are the {\\it loci} of the extragalactic sources in the SWIRE survey. \nAs expected for a molecular cloud in the galactic plane, there are very few \nextragalactic sources seen. A remarkable number of objects fall at K$_s$$<$8.5 mag and K$_s$-[24]$\\sim$0, \nwhich, given our completeness limit at 24~$\\mu$m of 5~mJy, delimits the region\nof normal photospheres in VMR-D. \nNoticeably, in this part of the diagram, the number density (per deg$^2$)\nof the 'off-cloud' sources is larger than that of the 'on-cloud'\nones (100 vs. 69, see Table~\\ref{tab:tab4}): in principle, all the unreddened\nphotospheres detected in VMR-D could indeed be foreground\/background stars. More reasonably,\nwe can affirm that no excess of main-sequence stars (with respect to the\nadjacent field) is observed in VMR-D, as expected given the youth of the region.\n\nThe open squares indicate the effects of an extinction of A$_V$ =\n10 and 50\\,mag, respectively, on the data.\nThe quantitative A$_V$ map of the overall region by Dobashi et al. (2005) does not \nprovide values in excess of 5-10\\,mag (below the saturation limit of the catalog of 15\nmag), while toward the dust cores A$_V$ can increase up to\n$\\sim$ 20\\,mag (M07 and E07). We thus conclude that sources \nwith K$_s$-[24] $>$ 5 are most probably genuinely young objects, and not merely extinguished\nones. \nIndeed, it is not accidental that the large majority of these sources\nbelong to the molecular cloud, nor that the objects with normal photospheres are outside the\ncloud.\\\\\nFrom an evolutionary point of view, protostars can be characterized on the basis of the spectral \nclassification between 2\\,-\\,10\\,$\\mu$m (Greene et al. 
1994), according to which\ndifferent evolutionary stages, from the accretion phase (Class I) to the beginning of \nthe main sequence (Class III), are manifested. \nThe same authors have found that the 2\\,-10\\,$\\mu$m spectral index \ndoes not change substantially when computed using fluxes up to 20\\,$\\mu$m \n(by using photometry in the Q band); this result allows one to extrapolate the 24\\,$\\mu$m \nflux for different spectral indices and accordingly to compute \nthe expected value of the K$_s$-[24] color. \nThe result of this procedure is given in R07, who furthermore\nrequire K$_s$-[24]$>$2 to separate Class III sources from \nnormal photospheres and foreground\/background stars. The spectral classification\nderived for VMR-D is depicted in Fig.\\,\\ref{fig:k-24} and also reported in Table \\ref{tab:tab5}.\nThe ratio of `on-cloud' to `off-cloud' objects within the same class\nincreases with increasing K$_s$-[24] values; \nmoreover, most of the younger 'on-cloud' objects are also detected in the 70~$\\mu$m band\n(large, red dots in Fig.\\,\\ref{fig:k-24}).\nNoticeably, 70\\% of the `off-cloud' objects showing the characteristics of the youngest\nand coldest sources (black dots at K$_s$-[24]$\\simeq$7) are located just outside the north and \nwest borders of the CO gas map, and are therefore plausibly genuine members of VMR-D; the \nremaining sources (30\\%) with the same colours, if not belonging to VMR-D, could represent \nstar forming regions at larger distances.\n\n \nIn summary, we find many more young sources associated with the cloud, while\nconfirming that active star formation is going on behind and\/or in the close \nneighbourhood of our cloud as well.\\\\\nThe relative percentages of sources attributed to different evolutionary stages (see\nTable~\\ref{tab:tab4}) can be compared with those of other well studied star forming regions.\nSchmeja et al. 
(2005), in particular, have investigated the number ratios of sources \nin different evolutionary classes in several star forming regions ($\\rho$ Ophiuchi, Serpens,\nTaurus, Chamaeleon\\,I, IC348), based on data obtained before the advent of Spitzer. They find,\non average, that Class\\,I sources are $\\sim$ 1-10\\%, while Class\\,II\/III sources are about 80-95\\%\nof the total. \nWith the advent of Spitzer, these percentages have shifted slightly in favor of younger\nprotostars: by combining both IRAC and MIPS data, Reach et al. (2004) classified 11\\% of the\nsources in IC1396A as Class\\,I, and Muzerolle et al. (2004) 20\\% of the sources in NGC7129.\nFinally, in L1630 a ratio of 0.25 is found between Class\\,I and Class\\,II protostars (Muzerolle et\nal. 2005). A comparison of our statistics with these is not straightforward, both because of the\ndifferent classification adopted there (e.g. flat spectrum objects are not included) and the fact that\nthis paper uses MIPS data only. A more meaningful and direct comparison is with the works of Harvey\net al. (2007) and of R07, who performed a very similar analysis to the one we do here, but on \nthe Serpens star forming region and on the IC348 and NGC1333 clouds in Perseus. The percentage\nof Class\\,I vs. Class\\,II objects is 6\\% vs. 63\\% (Serpens), \n6\\% vs. 85\\% (IC348) and 7\\% vs. 67\\% (NGC1333), thus strongly favoring the latter. \nIn contrast, we find percentages in VMR-D of 23\\% Class\\,I and 28\\%\nClass\\,II.\n\nTwo possible alternatives could explain such a difference: {\\it i)} VMR-D is\nsignificantly younger than either Perseus or Serpens. Such a hypothesis is supported by \nthe age estimates of 1-2 Myr derived in the Perseus cloud (Palla \\& Stahler 2000, R07) and of\n2 Myr derived in Serpens (Djupvik et al. 2006), as compared with an age of 10$^5$-10$^6$ yr \ntowards the clusters of VMR-D (Massi et al. 2000); {\\it ii)} our K$_s$ vs. 
K$_s$-[24] \ndiagram suffers from missing two important categories of sources. One category \nis represented by the $\\sim$ 450 objects detected at 24$\\mu$m, but without a\nK$_s$-2MASS counterpart (see Table~\\ref{tab:tab1}). The sensitivity limits of our\nsurvey in terms of power density at a given wavelength are \n$\\lambda$F$_{\\lambda}$(2MASS) $\\simeq$ $\\lambda$F$_{\\lambda}$(24-MIPS) $\\sim$ 6$\\times$10$^{-16}$ W m$^{-2}$. \nThis implies that these 450 sources are objects whose SED is rising with\nwavelength, and thus they could be additional young objects that would further increase the already anomalous \npercentage of Class~I sources. The second category is represented \nby the $\\sim$ 5$\\times$10$^4$ 2MASS objects not having a MIPS 24$\\mu$m counterpart.\nTheir SEDs are allowed to decrease with increasing wavelength; therefore,\nalthough many of them could be foreground or background objects unrelated to\nthe VMR population, they undoubtedly represent a potential reservoir of Class~II\nand III objects. If even a very small fraction of them\n($\\sim$ 1-2\\%) were genuine Class~II\/III sources, the relative excess of Class~I\nsources in VMR would be significantly reduced, and the apparent age of the\nregion correspondingly increased. In this view the disagreement with other star forming regions \ncould be reconciled by considering that in those cases the lower\nbackground level implied by their location outside the Galactic plane\nhas permitted detection limits at 24$\\mu$m up to an order of magnitude\nfainter than in Vela to be reached, thus allowing the SEDs of faint \nK$_s$ sources that decline from the near- to the far-infrared to be traced. In any case, we expect\nto provide a more certain answer to this issue in the near future, by means of\nforthcoming IRAC images covering the relevant spectral bands at more adequate\nsensitivity.\n\n\\subsection{[24] vs. 
[24]-[70]}\n\\begin{figure}\n\\includegraphics[angle=0,scale=.50]{f8.eps}\n\\caption{Color-magnitude diagram [24] vs. [24]-[70], where only sources\nnot saturated at 24\\,$\\mu$m are plotted. Red\/black dots refer\nto sources inside\/outside the CO contour map (41\/12 sources). Large dots denote sources\nassociated with a dust core, while\nnumbered sources are the candidate exciting sources of the jets discussed\nin Sect.\\,6. The hatched area shows the {\\it locus} of \nthe SWIRE survey (taken from Fig.11 in R07).\n\\label{fig:24-70}}\n\\end{figure}\n\nIn Figure~\\ref{fig:24-70} the color-magnitude diagram based on MIPS fluxes alone is\nshown. Here the sources detected in both bands are plotted; the large majority\nof them are on-cloud, although there is no clear difference between sources associated \nor not associated with dust cores (large dots). Remarkably, all sources (except 3) are located\nto the right of [24]-[70]=2. This value pertains to SEDs that increase with wavelength in such a\nway that ($\\lambda$F$_{\\lambda}$)$_{70}$ = 2$\\times$($\\lambda$F$_{\\lambda}$)$_{24}$.\nThese red objects are much more numerous than the 70~$\\mu$m detections depicted in\nFigure~\\ref{fig:k-24}, since the majority of them lack a 2MASS counterpart.\nForthcoming IRAC data will help us to reconstruct their SEDs more adequately, giving\nconstraints on their luminosity and evolutionary stage. A few sources (6) lie \nin the locus corresponding to black-body temperatures ranging between 40 and 50 K. These\nare the values theoretically predicted (Shu, Adams \\& Lizano 1987) for a collapsing isothermal\nsphere, so that such sources can be identified as Class 0 candidates. These 6 sources are all located inside the CO cloud; two\nof them lie within a mm core and one (\\#38) is also associated with a compact H$_2$ jet\n(see Sect.\\,6). 
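The 40--50 K locus mentioned above can be reproduced by computing the [24]-[70] colour of a blackbody. A sketch, where the monochromatic treatment of the bandpasses and the zero-magnitude fluxes (taken here as $\sim$7.17 Jy at 24\,$\mu$m and $\sim$0.778 Jy at 70\,$\mu$m) are our simplifying assumptions:

```python
import math

H = 6.62607e-34   # Planck constant [J s]
C = 2.99792e8     # speed of light [m/s]
KB = 1.38065e-23  # Boltzmann constant [J/K]

def planck_nu(wavelength_m, temp_k):
    """Blackbody B_nu up to a constant factor (it cancels in the colour)."""
    nu = C / wavelength_m
    return nu**3 / math.expm1(H * nu / (KB * temp_k))

def color_24_70(temp_k, zp24=7.17, zp70=0.778):
    """[24]-[70] = -2.5 log10(F24/zp24) + 2.5 log10(F70/zp70)."""
    f24 = planck_nu(24e-6, temp_k)
    f70 = planck_nu(70e-6, temp_k)
    return -2.5 * math.log10(f24 / f70) + 2.5 * math.log10(zp24 / zp70)

# Colder blackbodies are redder: the colour decreases monotonically with T.
colors = {t: color_24_70(t) for t in (40, 45, 50)}
```

With these assumptions the 40--50 K range maps onto a narrow, very red strip of [24]-[70] colours, consistent with the small group of candidate Class 0 sources singled out in the diagram.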
Although at the moment a quantitative evaluation of their sub-mm vs.\nbolometric luminosity cannot be given, they still represent the best candidates for \nmembers of the youngest population within the cloud.\n\n\n\\section{MIPS associations with H$_2$ protostellar jets}\\label{sec:jets} \nMIPS offers a chance to identify, for the first time, very embedded compact exciting sources of\nmolecular (H$_2$) jets, found by D07, that have so far remained undetected even at the longest IRAS\nwavelengths. We recall once again that our completeness limits are \nsignificantly higher than the sensitivity limits, so that by\nscrutinizing our database for the weakest MIPS sources in selected areas, we have been able to\ndiscover new objects down to 0.7 mJy at 24~$\\mu$m.\nWe used this technique to search for the sources driving a number \nof molecular jets that were previously found by narrow-band imaging centered on the H$_2$\n(1-0)S(1) line (2.12$\\mu$m, see D07 for details). \nAlthough our H$_2$ driving source survey is still incomplete, we here point out some\ninteresting cases in the VMR-D cloud. \n\n\nThe results of the correlation of the H$_2$ maps with our MIPS maps are given\nin Table~\\ref{tab:tab6}; here we list each jet, the length and the \ndynamical time of the jet itself (having adopted d=700 pc), the total flux detected at 2.12 $\\mu$m and the corresponding\nH$_2$ luminosity, \nthe identification of the exciting source in our MIPS\ncatalogue, the coincidence with a\ndust mm-peak, and finally an estimate of the source bolometric luminosity\nobtained by summing up all the contributions from the near-IR (if\nany) to the 1.2\\,mm flux, as derived from the M07 SIMBA map.\nWe identify several different morphologies among our sources, including the discovery of the driving\nsources of three of the jets (namely \\#2, 4, 5), objects that were not detected either in the near-IR (K\nband) or at longer wavelengths (N band, IRAS, MSX). 
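The dynamical times in Table~\ref{tab:tab6} follow from the projected jet length at d=700 pc and the assumed shock velocity and inclination. A sketch under one common convention (plane-of-sky velocity $v\,\sin i$); the 1-arcmin example length is ours, not an entry from the table:

```python
import math

PC_KM = 3.0857e13                      # kilometres per parsec
ARCSEC_RAD = math.pi / (180.0 * 3600.0)  # radians per arcsecond
YR_S = 3.156e7                         # seconds per year

def dynamical_time_yr(length_arcsec, d_pc=700.0, v_shock_kms=50.0,
                      incl_deg=45.0):
    """Dynamical age: projected jet length / plane-of-sky shock velocity."""
    length_km = length_arcsec * ARCSEC_RAD * d_pc * PC_KM
    v_sky = v_shock_kms * math.sin(math.radians(incl_deg))
    return length_km / v_sky / YR_S

t_dyn = dynamical_time_yr(60.0)  # a 1-arcmin jet lobe, for illustration
```

With v$_{shock}$ = 50 km s$^{-1}$ and {\it i} = 45$^{\circ}$, a 1-arcmin lobe at 700 pc yields a dynamical age of a few thousand years, which is why the compact jets 2 and 3 imply very short ages.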
\n\n{\\bf Jet 1 -} The jet center lies towards a millimeter peak (MMS2),\nwhere no infrared source is detected down to K=17 mag. In the MIPS\n24~$\\mu$m band, an emission peak is found, although not aligned with the jet\naxis. The lack of any aligned source suggests two possible alternative\nscenarios: (i) we are observing just one jet lobe or (ii), more\nreasonably, the exciting source is too faint to be detected even by\nMIPS (F(24) $<$ 0.7 mJy). Scheduled APEX observations will hopefully\nanswer this question (Figure~\\ref{fig:jet1}).\n\n{\\bf Jets 2 and 3 -} Two small jets have been detected that correspond to faint dust peaks. The\nexciting sources, although not detected in previous infrared surveys (NIR bands, IRAS, MSX), are clearly\nrecognizable in the MIPS images, and one of them (\\#38) is a\ncandidate Class 0 protostar. The compactness of these jets\nimplies a very short dynamical age, if reasonable conditions for\nboth the shock velocity (v$_{shock}$ = 50 km s$^{-1}$) and the inclination angle ({\\it i} =\n45$^{\\circ}$) are assumed (Figures~\\ref{fig:jet2} and \\ref{fig:jet3}).\n\n\n{\\bf Jet 4 -} A parsec-scale jet emerges from a young near-infrared\ncluster centered on IRAS08476-4306. The proposed exciting source,\ndetected in the near-IR bands, is IRS20-\\#98\nin the Massi et al. (1999) list (Figure~\\ref{fig:jet4}).\n\n{\\bf Jet 5 -} A point-like 24\/70~$\\mu$m source aligned with the jet\nand corresponding to a dust peak (umms19) is found about 2 arcmin\naway towards the NE. If this source is indeed driving the\njet, then we are observing just one jet lobe, with the\ncounter-jet located outside the field investigated in H$_2$. A NIR\ncluster is also found at the MIPS source position (Figure~\\ref{fig:jet5}).\n\n{\\bf Jet 6 -} A chain of H$_2$ knots emerges from a MIPS source (not\nvisible in the H$_2$ band) centered at the dust emission peak MMS16. \nWe do not observe a counter-jet (Figure~\\ref{fig:jet6}). 
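The bolometric luminosities quoted in Table~\ref{tab:tab6} come from summing the observed contributions from the near-IR to 1.2\,mm, i.e. a crude trapezoidal integration of $L = 4\pi d^2 \int F_\nu\,d\nu$ over the sparsely sampled SED. A sketch, where the example SED points are invented for illustration:

```python
import math

C = 2.99792e8        # speed of light [m/s]
PC_M = 3.0857e16     # metres per parsec
L_SUN_W = 3.828e26   # solar luminosity [W]

def lbol_lsun(sed_jy, d_pc=700.0):
    """Bolometric luminosity from (wavelength[um], flux[Jy]) points,
    via trapezoidal integration of F_nu over frequency."""
    # Convert to (frequency [Hz], flux [W m^-2 Hz^-1]) and sort by frequency.
    pts = sorted((C / (w * 1e-6), f * 1e-26) for w, f in sed_jy)
    integral = sum(0.5 * (f1 + f2) * (nu2 - nu1)
                   for (nu1, f1), (nu2, f2) in zip(pts, pts[1:]))
    return 4.0 * math.pi * (d_pc * PC_M)**2 * integral / L_SUN_W

# Invented SED from 2.2 um to 1.2 mm, fluxes in Jy.
sed = [(2.2, 0.01), (24, 1.0), (70, 10.0), (1200, 0.5)]
lum = lbol_lsun(sed)
```

With only four photometric points the integral is a lower-order estimate of the true luminosity, which is why the table values should be read as order-of-magnitude figures.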
\n\n\\begin{figure}\n\\includegraphics[angle=0, width=14cm]{f9.eps}\n\\caption{H$_2$ contours of jet 1 (green) superposed on the MIPS 24~$\\mu$m (left)\nand 70~$\\mu$m (right) images. Dust contours (from a 3$\\sigma$ level in steps\nof 3$\\sigma$) are shown in yellow. Dust core MMS2 is located\nat the jet center. \\label{fig:jet1}}\n\\end{figure}\n\n\n\\begin{figure}\n\\includegraphics[angle=0, width=14cm]{f10.eps}\n\\caption{The same as Fig.\\ref{fig:jet1} for jet 2. Peak umms16 \n(under-resolved at the SIMBA spatial resolution) is found near \nthe jet center. The proposed exciting source is \\#38 in \nTable~\\ref{tab:tab3}.\\label{fig:jet2}}\n\\end{figure}\n\n\n\\begin{figure}\n\\includegraphics[angle=0, width=14cm]{f11.eps}\n\\caption{The same as Fig.\\ref{fig:jet1} for jet 3. A MIPS source (\\#21)\nis found at the jet center, corresponding to a location of weak dust emission. \n\\label{fig:jet3}}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[angle=0, width=14cm]{f12.eps}\n\\caption{The same as Fig.\\ref{fig:jet1} for jet 4. Dust core MMS22 crosses\nthe jet. The candidate driving source (IRS20-MGL99\\#98) is indicated.\\label{fig:jet4}}\n\\end{figure}\n\n\n\\begin{figure}\n\\includegraphics[angle=0, width=14cm]{f13.eps}\n\\caption{The same as Fig.\\ref{fig:jet1} for jet 5. The driving source is likely to be the \nMIPS source (\\#44) associated with the under-resolved peak umms19. The lack of a\ncounter jet is probably due to the driving source being located near the edge \nof the H$_2$ image.\\label{fig:jet5}}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[angle=0, width=7cm]{f14.eps}\n\\caption{The same as Fig.\\ref{fig:jet1} for jet 6. The candidate exciting source is the \n24 and 70~$\\mu$m source associated with mm peak MMS16 (\\#50). Just one lobe \nhas been detected. 
\\label{fig:jet6}}\n\\end{figure}\n\n\\clearpage\n\n\\section{Conclusions} \nMIPS maps covering 1.8 square degrees across the Vela Molecular Cloud D at 24 and\n70\\,$\\mu$m are presented. The data allowed us to derive the following results:\n\\begin{itemize}\n\\item[-] A total of 849 and 61 point sources at 24 and 70\\,$\\mu$m, respectively, have been\ndetected at completeness limits of 5 and 250 mJy.\n\\item[-] About half of the 24\\,$\\mu$m sources and two thirds of the 70\\,$\\mu$m ones \nare spatially located inside a region delimited by the $^{12}$CO contours (0.6 deg$^2$). \nThe implication is that the IR source density doubles (and quadruples when considering \nsources at 70\\,$\\mu$m) inside the CO contours as compared to outside the molecular cloud. A\nquantitative analysis of the 24 and 70\\,$\\mu$m counts per deg$^2$ confirms this result. \n\\item[-] The maps allow us to correlate MIPS sources with the distribution of the dust cores\nfound within VMR-D, and, when we extend the search for MIPS sources down to the\ninstrumental sensitivity limit, we find that most of these cores are associated with\nred and cold objects.\n\\item[-] The MIPS sensitivity has enabled us to identify many new starless cores; this result will\nprompt a revision of the relative percentages of young objects known so far in VMR-D. \n\\item[-] IRAS-PSC detections of good quality (f$_{qual}$=3,2) are also seen by\nMIPS, but only when the IRAS point-likeness confidence is high (correlation coefficient, cc,\nequal to A or B). This result may be adopted as a broad confidence prescription \nfor finding genuine point sources in the IRAS catalogue.\n\\item[-] About 400 MIPS sources have 2MASS K$_s$ counterparts. Color-magnitude\nplots constructed with magnitudes at 2.2, 24 and 70\\,$\\mu$m in VMR-D\nshow an excess of Class I objects in comparison with other well-studied star formation \nregions. 
This excess could be biased by the \nsensitivity limits of the 2MASS and MIPS surveys; otherwise, it\ncould reflect the short time elapsed since the first collapse of the cloud.\nFrom the MIPS colors, 6 objects appear as potential Class\\,0 candidates.\n\\item[-] We have detected the driving source in five out of six H$_2$ protostellar jets in VMR-D,\nfour of them embedded in mm-cores. This circumstance, along with the very low dynamical\ntime estimated for the jets, indicates ages of 10$^4$-10$^5$ yr for these sources. \n\\item[-] We note that, given the southern location of VMR-D, many of the newly detected MIPS\nsources will be excellent candidate targets for ALMA. \n\\end{itemize} \n\n\n\n\\section{Acknowledgements}\nThis paper is based on observations\nmade with the Spitzer Space Telescope, which is operated by the Jet\nPropulsion Laboratory, California Institute of Technology under\na contract with NASA. HAS acknowledges partial support from NASA grant NAG5-10659.\n\n\n\n\n\n\n\n\\section{Introduction}\nThe theory of weight functions is fundamental in the evaluation of stress intensity factors for asymptotic representations near non-regular boundaries such as cusps, \nwedges and cracks. The classical work of Bueckner defined weight functions for several types of cracks, both in 2D and 3D, as the stress intensity factors corresponding to \nthe point force loads applied to the faces of the cracks placed in a homogeneous continuum (Bueckner, 1987, 1989). In particular, for a penny-shaped crack in three \ndimensions, as well as a half-plane crack, Bueckner has introduced the weight functions corresponding to a general distribution of forces on the opposite faces of \nthe crack, which also included the case of forces acting in the same direction. 
\n\nIn the present paper, we use the term ``symmetric'' load for forces acting on opposite crack faces and in opposite directions, whereas the term ``skew-symmetric'' or \n``anti-symmetric'' load will be used for forces acting on opposite crack faces but in the same direction, see Fig.~\\ref{fig01a}. Consequently, symmetric loads are always \nself-balanced, whereas skew-symmetric loads are not, in general (however, we shall consider only the case of self-balanced loading). Note that this terminology is \ndifferent from that usually in use, where the words ``symmetric'' and ``anti-symmetric'' are intended in terms of the symmetry about the plane containing the crack, \nso that, for example, shear forces acting on opposite crack faces but in the same direction are considered to be symmetric (see for example Meade and Keer, 1984).\n\\begin{figure}[!htb]\n\\begin{center}\n\\vspace*{3mm}\n\\includegraphics[width=10cm]{fig01a.eps}\n\\caption{\\footnotesize Symmetric and skew-symmetric parts of the loading in the case of single point forces. In practical cases, the loading is \ngiven as a combination of point forces and\/or distributed forces in such a way it is self-balanced (see for example the problem analysed in \nSection \\ref{example}).}\n\\label{fig01a}\n\\end{center}\n\\end{figure}\n\nAlthough in two dimensions skew-symmetric loading does not contribute to the stress intensity factors, it becomes essential in three dimensions. Both symmetric and \nskew-symmetric loads on the faces of a half-plane crack in a three dimensional homogeneous elastic space are also considered by Meade and Keer (1984).\n\nThe situation when the crack is placed at an interface between two dissimilar elastic media is substantially different from the analogue corresponding to a homogeneous \nelastic space with a crack. 
Here, even in the case of two dimensions (plane strain or plane stress), the stress components oscillate near the crack tip, and the \nskew-symmetric loads also generate non-zero stress intensity factors. Also for Mode III, where the stress components do not oscillate, there is a non-vanishing \nskew-symmetric component of the weight function. \n\nSymmetric two-dimensional weight functions (i.e. stress intensity factors for symmetric opening loads applied on the crack faces) for interfacial cracks were analysed by \nHutchinson, Mear and Rice (1987). The problem of symmetric three-dimensional Bueckner's weight functions for interfacial cracks has been addressed by Lazarus and Leblond \n(1998) and Piccolroaz {\\em et al.}\\ (2007). On the basis of these results, Pindra {\\em et al.}\\ (2008) studied the evolution in time of the deformation of the front of a \nsemi-infinite 3D interface crack propagating quasistatically in an infinite heterogeneous elastic body.\n\nHowever, to the best of our knowledge, the skew-symmetric weight function for interfacial cracks has never been constructed for the plane strain case, nor even for the Mode III \ndeformation. On the other hand, in most applications, especially for the interfacial crack, the loading is not symmetric, and consequently the effects of the skew-symmetric \nloading in the analysis of fracture propagation require a thorough investigation.\n\nThe aim of this paper is to construct the aforementioned skew-symmetric weight functions for the two-dimensional interfacial crack problem. To this purpose, we develop a \ngeneral approach, which allows us to obtain the general weight functions, as used by Willis and Movchan (1995), defined as non-trivial singular solutions of a boundary \nvalue problem for interfacial cracks with zero tractions on the crack faces but unbounded elastic energy. 
By taking the trace of these general weight functions on the \nreal axis, one can arrive to the notion of Bueckner's weight function associated with the point force load on the crack faces (Piccolroaz {\\em et al.}, 2007). It is then \nshown that the skew-symmetric part of the loading contributes to the stress singularity at the crack edge and thus to the resulting stress intensity factors. These \nresults are presented in Section \\ref{wfunc} for the plane strain problem and in Section \\ref{antiplane} for the Mode III problem.\n\nWe summarize that the symmetric weight function matrix $\\jump{0.15}{\\mbox{\\boldmath $U$}}$ for a plane strain interfacial crack has the form (compare with \\eq{jumpu} in the main text):\n\\begin{equation}\n\\label{symm}\n\\begin{array}{ll}\n\\displaystyle \\jump{0.15}{\\mbox{\\boldmath $U$}}(x_1) = \\frac{1}{2 d_0 \\sqrt{2 \\pi x_1}} \\left\\{ \\frac{x_1^{-i \\epsilon}}{c_1^+} \\mbox{\\boldmath $\\mathcal{B}$} + \\frac{x_1^{i \\epsilon}}{c_1^-} \\mbox{\\boldmath $\\mathcal{B}$}^\\top \\right\\} \n& \\text{for } x_1 > 0, \\\\[5mm]\n\\displaystyle \\jump{0.15}{\\mbox{\\boldmath $U$}}(x_1) = 0 & \\text{for } x_1 < 0,\n\\end{array}\n\\end{equation}\nwhere $i$ is the imaginary unit, $\\mbox{\\boldmath $\\mathcal{B}$} = \\begin{pmatrix} 1 & -i \\\\ i & 1 \\end{pmatrix}$, the superscript $^\\top$ denote transposition, and the bimaterial parameters \n$\\epsilon$ and $d_0$ are defined in Appendix \\ref{app1}. The quantities $c_1^+$, $c_1^-$ appear as coefficients in the representation of oscillatory terms in the \nasymptotics of stress near the tip of the interfacial crack, as defined in the sequel (see \\eq{stress1}--\\eq{stress2}). 
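A useful structural check of \eq{symm}: since $\mbox{\boldmath $\mathcal{B}$}^\top$ coincides with the complex conjugate of $\mbox{\boldmath $\mathcal{B}$}$, the two terms in the braces are complex conjugates of one another whenever $c_1^- = \overline{c_1^+}$, so the jump matrix is real-valued, as it must be. The sketch below verifies this numerically; the values of $d_0$, $\epsilon$ and $c_1^\pm$ are assumed placeholders, the actual bimaterial constants being those of Appendix \ref{app1}.

```python
import numpy as np

# Numerical check of the symmetric weight-function matrix of eq. (symm)
# for x1 > 0.  The constants eps, d0, c1p below are ASSUMED placeholder
# values; the true bimaterial constants are defined in Appendix A.
eps = 0.05            # oscillation index epsilon (assumed)
d0 = 1.2              # constant d_0 (assumed)
c1p = 1.0 + 0.3j      # c_1^+ (assumed); c_1^- is taken as its conjugate
c1m = np.conj(c1p)

B = np.array([[1.0, -1.0j], [1.0j, 1.0]])   # the matrix \mathcal{B}

def U_jump(x1):
    """[[U]](x1): the jump matrix for x1 > 0, zero for x1 <= 0."""
    if x1 <= 0.0:
        return np.zeros((2, 2), dtype=complex)
    pref = 1.0 / (2.0 * d0 * np.sqrt(2.0 * np.pi * x1))
    osc = x1 ** (-1.0j * eps)                # x1^{-i eps}
    return pref * (osc / c1p * B + np.conj(osc) / c1m * B.T)

# Since B^T = conj(B), the two terms are complex conjugates when
# c_1^- = conj(c_1^+), hence the jump is real-valued:
J = U_jump(0.7)
assert np.max(np.abs(J.imag)) < 1e-12
```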
\n\nThe result for the skew-symmetric weight function matrix $\\langle {\\bf U} \\rangle$, unpublished in the earlier literature, reads as follows (compare with \\eq{meanu} \nin the main text):\n\\begin{equation}\n\\label{skew}\n\\begin{array}{ll}\n\\displaystyle \\langle \\mbox{\\boldmath $U$} \\rangle(x_1) = \\frac{\\alpha}{2} \\jump{0.15}{\\mbox{\\boldmath $U$}}(x_1) & \\text{for } x_1 >0, \\\\[5mm]\n\\displaystyle \\langle \\mbox{\\boldmath $U$} \\rangle(x_1) = -i \\frac{\\alpha (d_* - \\gamma_*)}{4d_0^3 \\sqrt{-2 \\pi x_1}} \n\\left\\{ \\frac{(-x_1)^{-i \\epsilon}}{c_1^+} \\mbox{\\boldmath $\\mathcal{B}$} - \\frac{(-x_1)^{i \\epsilon}}{c_1^-} \\mbox{\\boldmath $\\mathcal{B}$}^\\top \\right\\} & \\text{for } x_1 < 0,\n\\end{array}\n\\end{equation}\nwhere the bimaterial constant $\\gamma_*$ and the Dundurs parameters $\\alpha$ and $d_*$ are defined in Appendix \\ref{app1}. The conventional notations \n$\\jump{0.15}{f}(x) = f(x,0^+) - f(x,0^-)$ and $\\langle f \\rangle(x) = \\frac{1}{2}(f(x,0^+) + f(x,0^-))$ are in use here to denote the symmetric and skew-symmetric \ncomponents, respectively.\n\nNote that the result for the symmetrical weight functions is consistent with the representation for the stress intensity factors derived by Hutchinson, Mear and Rice \n(1987).\n\nFor the Mode III case, the scalar symmetric and skew-symmetric weight functions are (compare with \\eq{jumpu3} and \\eq{meanu3} in the main text):\n\\begin{equation}\n\\jump{0.15}{U_3}(x_1) = \\left\\{\n\\begin{array}{ll}\n\\displaystyle \\frac{1-i}{\\sqrt{2\\pi}} x_1^{-1\/2}, & \\text{for } x_1 > 0, \\\\[3mm]\n0, & \\text{for } x_1 < 0,\n\\end{array} \\right., \\quad\n\\langle U_3\\rangle = \\frac{\\eta}{2} \\jump{0.15}{U_3},\n\\end{equation}\nwhere $\\eta$ is another bimaterial constant defined in Appendix \\ref{app1}.\n\nIn Section \\ref{identity} we use both symmetric and skew-symmetric weight function matrices together with the Betti identity in order to derive the stress intensity 
\nfactors for an interfacial crack subjected to a general load. We summarize here that the integral formula for the computation of the complex stress intensity factor \nin the plane strain problem, $K = K_\\text{I} + i K_\\text{II}$, has the form (compare with \\eq{SIF} in the main text):\n\\begin{equation}\n\\mbox{\\boldmath $K$} = -i \\mbox{\\boldmath $\\mathcal{M}$}_1^{-1} \\lim_{x_1'\\to 0} \\int_{-\\infty}^{0} \\left\\{\\jump{0.15}{\\mbox{\\boldmath $U$}}^\\top(x_1'-x_1) \\mbox{\\boldmath $R$} \\langle \\mbox{\\boldmath $p$} \\rangle(x_1) \n+ \\langle\\mbox{\\boldmath $U$}\\rangle^\\top(x_1'-x_1) \\mbox{\\boldmath $R$} \\jump{0.15}{\\mbox{\\boldmath $p$}}(x_1) \\right\\} dx_1,\n\\end{equation}\nwhere $\\mbox{\\boldmath $K$} = [K,K^*]^\\top$, the superscript $^*$ denotes conjugation, $\\mbox{\\boldmath $\\mathcal{M}$}_1$ is defined in \\eq{emme1} and \\eq{stress2}, $\\mbox{\\boldmath $R$}$ is defined in \\eq{erre} and \n$\\langle \\mbox{\\boldmath $p$} \\rangle(x_1)$, $\\jump{0.15}{\\mbox{\\boldmath $p$}}(x_1)$ stand for the symmetric and skew-symmetric parts of the loading, respectively.\n\nFor the computation of the Mode III stress intensity factor, $K_\\text{III}$, the integral formula has the form (compare with \\eq{k3} in the main text):\n\\begin{equation}\nK_\\text{III} = - \\sqrt{\\frac{2}{\\pi}} \\int_{-\\infty}^{0} \\left\\{ \\langle p_3 \\rangle(x_1) + \\frac{\\eta}{2} \\jump{0.15}{p_3}(x_1) \\right\\} (-x_1)^{-1\/2} dx_1.\n\\end{equation}\n\nNeedless to say, by replacing the loading with the Dirac delta function, these integral formulae immediately give the Bueckner's weight functions. Also, this approach \nallows us to derive the constants in the high-order terms, which are essential in the perturbation analysis (Lazarus and Leblond, 1998; Piccolroaz {\\em et al.}, 2007). \n\nSection \\ref{perturbation} is devoted to the perturbation model of a crack advancing quasi-statically along the interface between two dissimilar media. 
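For simple loads, the Mode III formula \eq{k3} can be evaluated in closed form, which offers a convenient numerical check. The sketch below uses an assumed uniform symmetric face pressure (the value of $\eta$ is a placeholder and drops out, since the load jump vanishes for this example) and compares a quadrature of \eq{k3} with the closed form.

```python
import numpy as np
from scipy.integrate import quad

# Sketch: numerical evaluation of the Mode III SIF formula (eq. k3) for an
# ASSUMED load: a uniform pressure p3^+ = p3^- = -p0 on -b < x1 < -a, so
# <p3> = -p0 and [[p3]] = 0 (the skew-symmetric term, weighted by the
# bimaterial constant eta, drops out for this particular load).
p0, a, b = 1.0, 0.5, 2.0
eta = 0.1                     # assumed placeholder; irrelevant here

mean_p3 = lambda x1: -p0 if -b < x1 < -a else 0.0
jump_p3 = lambda x1: 0.0

integrand = lambda x1: (mean_p3(x1) + 0.5 * eta * jump_p3(x1)) * (-x1) ** -0.5
val, _ = quad(integrand, -b, -a)
K3 = -np.sqrt(2.0 / np.pi) * val

# Closed form for this load: K_III = 2 p0 sqrt(2/pi) (sqrt(b) - sqrt(a))
K3_exact = 2.0 * p0 * np.sqrt(2.0 / np.pi) * (np.sqrt(b) - np.sqrt(a))
assert abs(K3 - K3_exact) < 1e-6
```

As expected, compressive tractions on the faces ($p_3^\pm < 0$) give a positive $K_\text{III}$.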
We develop and prove \nan alternative approach consistent with the idea originally presented in Willis and Movchan (1995). Both symmetric and skew-symmetric weight function matrices are \nessential in this perturbation analysis.\n\nA simple illustrative example is given in Section \\ref{example}, which shows quantitatively the influence of the skew-symmetric loading for \ninterfacial cracks.\n\nSection \\ref{prel_analysis} is introductory and contains auxiliary results concerning the physical fields around the interfacial crack tip, which are necessary for the analysis presented \nin Sections \\ref{wfunc}--\\ref{antiplane}.\n\nFinally, the Appendix \\ref{app1} collects all bimaterial parameters involved in the analysis of the interfacial crack, including the well-known parameters as well as \nthe new parameters appearing in the skew-symmetric problem. The parameters have been classified according to whether they are involved in the symmetrical or \nskew-symmetrical weight functions. The Appendix \\ref{app3} provides the theorems required for proving the approach presented in Section \\ref{perturbation}.\n\n\\section{Preliminary results for an interfacial crack}\n\\label{prel_analysis}\nTo introduce the main notations and the mathematical framework for our model, we start with the analysis of the displacement and stress fields around a semi-infinite \nplane crack located on an interface between two dissimilar isotropic elastic media, with elastic constants denoted by $\\nu_\\pm,\\mu_\\pm$, see Fig. \\ref{fig02a}.\n\\begin{figure}[!htb]\n\\begin{center}\n\\vspace*{3mm}\n\\includegraphics[width=9cm]{fig02a.eps}\n\\caption{\\footnotesize Geometry of the model.}\n\\label{fig02a}\n\\end{center}\n\\end{figure}\n\nThe loading is given by tractions acting upon the crack faces. 
In terms of a Cartesian coordinate system with the origin at the crack tip, traction components are \ndefined as follows \n\\begin{equation}\n\\label{load}\n\\sigma_{2j}^\\pm(x_1,0^\\pm) = p_j^\\pm(x_1), \\quad \\text{for } x_1 < 0, \\quad j = 1,2,\n\\end{equation}\nwhere $p_j^\\pm(x_1)$ are prescribed functions.\n\nThe load is assumed to be self-balanced, so that its principal force and moment vectors are equal to zero. We also assume that the forces are applied outside a \nneighbourhood of the crack tip and vanish at infinity (specific assumptions on the behaviour of the loading will be discussed in the sequel). The body forces are \nassumed to be zero. It is convenient to use the notion of the symmetric and skew-symmetric parts of the loading\n\\begin{equation}\n\\label{parts}\n\\langle p_j \\rangle(x_1) = \\frac{p_j^+(x_1) + p_j^-(x_1)}{2}, \\quad \\jump{0.15}{p_j}(x_1) = p_j^+(x_1) - p_j^-(x_1), \\quad j = 1,2.\n\\end{equation}\n\nThe solution to the physical problem is sought in the class of functions which vanish at infinity and have finite elastic energy.\n\nThe results presented in Sections \\ref{mellin} and \\ref{asymptotics} are known in the literature (see for example: Williams, 1959; Erdogan, 1963; Rice, 1965; \nWillis, 1971), at least for symmetric loading. 
These results, together with the results on higher order terms presented in Section \\ref{high}, which are less common \nin the literature, provide the formulae needed for the perturbation analysis developed in the sequel of the paper.\n\n\n\\subsection{Representation in terms of Mellin transforms}\n\\label{mellin}\nIn polar coordinates the Mellin transforms for the \ndisplacement vector and the stress tensor with locally bounded elastic energy are defined as follows \n$$\n\\tilde{\\mbox{\\boldmath $u$}}(s,\\theta) = \\int_0^\\infty \\mbox{\\boldmath $u$}(r,\\theta) r^{s-1} dr, \\quad\n\\tilde{\\mbox{\\boldmath $\\sigma$}}(s,\\theta) = \\int_0^\\infty \\mbox{\\boldmath $\\sigma$}(r,\\theta) r^s dr, \n$$\nand they are represented by analytic functions in the strips $-\\vartheta_0 < \\mathop{\\mathrm{Re}}(s) < \\vartheta_\\infty$ and $-\\gamma_0 < \\mathop{\\mathrm{Re}}(s) < \\gamma_\\infty$, respectively, where \n$\\vartheta_0,\\vartheta_\\infty \\ge 0$ ($\\vartheta_0 + \\vartheta_\\infty > 0$), $\\gamma_0,\\gamma_\\infty > 0$ are constants related to the behaviour of the solution at the \ncrack tip and at infinity, namely\n\\begin{equation}\n\\label{exponents}\n\\mbox{\\boldmath $u$}(r,\\theta) = \n\\left\\{\n\\begin{array}{ll}\nO(r^{\\vartheta_0}), & r \\to 0, \\\\\nO(r^{-\\vartheta_\\infty}), & r \\to \\infty, \n\\end{array}\n\\right.\n\\quad\n\\mbox{\\boldmath $\\sigma$}(r,\\theta) = \n\\left\\{\n\\begin{array}{ll}\nO(r^{\\gamma_0-1}), & r \\to 0, \\\\\nO(r^{-\\gamma_\\infty-1}), & r \\to \\infty, \n\\end{array}\n\\right.\n\\end{equation}\n\nCorrespondingly, the inverse transforms are \n\\begin{equation}\n\\label{inverse}\n\\mbox{\\boldmath $u$}(r,\\theta) = \n\\frac{1}{2\\pi i} \\int_{\\omega_1 - i\\infty}^{\\omega_1 +i\\infty} \\tilde{\\mbox{\\boldmath $u$}}(s,\\theta) \nr^{-s} ds, \\quad\n\\mbox{\\boldmath $\\sigma$}(r,\\theta) = \n\\frac{1}{2\\pi i} \\int_{\\omega_2 - i\\infty}^{\\omega_2 + i\\infty} \\tilde{\\mbox{\\boldmath $\\sigma$}}(s,\\theta) \nr^{-s - 
1} ds, \n\\end{equation}\nwhere $-\\vartheta_0 < \\omega_1 < \\vartheta_\\infty$ and $-\\gamma_0 < \\omega_2 < \\gamma_\\infty$.\n\n\\subsubsection{General solution to the field equations of linear elasticity} \n\\label{general}\nIn polar coordinates, with the origin at the crack tip, the Mellin transforms of stresses and displacements satisfy the relations (the material constants involved in \nthese representations are reported in Appendix \\ref{app1})\n$$\n\\tilde{\\sigma}_{\\theta\\theta}^\\pm = \nC_1^\\pm \\cos[(s + 1)\\theta] + C_2^\\pm \\cos[(s - 1)\\theta] + C_3^\\pm \\sin[(s + 1)\\theta] \n+ C_4^\\pm \\sin[(s - 1)\\theta],\n$$\n$$\n\\tilde{\\sigma}_{rr}^\\pm = -\\frac{1}{s}\\left[\\tilde{\\sigma}_{\\theta\\theta}^\\pm \n- \\frac{1}{s-1}\\nderiv{}{\\theta}{2}\\tilde{\\sigma}_{\\theta\\theta}^\\pm\\right], \\quad\n\\tilde{\\sigma}_{r\\theta}^\\pm = \\frac{1}{s-1} \\deriv{}{\\theta}\\tilde{\\sigma}_{\\theta\\theta}^\\pm,\n$$\n$$\n\\tilde{u}_r^\\pm = \\frac{1}{2s \\mu_\\pm}[\\tilde{\\sigma}_{\\theta\\theta}^\\pm \n- (1 - \\nu_\\pm)\\tilde{\\sigma}_0^\\pm], \\quad\n\\tilde{u}_\\theta^\\pm = -\\frac{1}{2s \\mu_\\pm}\\left[\\tilde{\\sigma}_{r\\theta}^\\pm \n+ \\frac{1 - \\nu_\\pm}{s + 1}\\deriv{}{\\theta}\\tilde{\\sigma}_0^\\pm\\right], \\quad\n$$\nwhere $\\tilde{\\sigma}_0^\\pm = \\tilde{\\sigma}_{rr}^\\pm + \\tilde{\\sigma}_{\\theta\\theta}^\\pm$.\n\nHence, we deduce\n$$\n\\tilde{\\sigma}_{rr}^\\pm = \n-\\frac{s + 3}{s - 1} C_1^\\pm \\cos[(s + 1)\\theta] - C_2^\\pm \\cos[(s - 1)\\theta]\n- \\frac{s + 3}{s - 1} C_3^\\pm \\sin[(s + 1)\\theta] - C_4^\\pm \\sin[(s - 1)\\theta],\n$$\n$$\n\\tilde{\\sigma}_{r\\theta}^\\pm = \n-\\frac{s + 1}{s - 1} C_1^\\pm \\sin[(s + 1)\\theta] - C_2^\\pm \\sin[(s - 1)\\theta]\n+ \\frac{s + 1}{s - 1} C_3^\\pm \\cos[(s + 1)\\theta] + C_4^\\pm \\cos[(s - 1)\\theta],\n$$\n$$\\begin{array}{l}\n\\displaystyle\n\\tilde{u}_r^\\pm = \n\\frac{1}{2s \\mu_\\pm} \\left\\{ C_1^\\pm \\cos[(s + 1)\\theta] + C_2^\\pm \\cos[(s - 1)\\theta]\n+ C_3^\\pm 
\\sin[(s + 1)\\theta] + C_4^\\pm \\sin[(s - 1)\\theta] \\right. \\\\[3mm]\n\\displaystyle \\hspace{60mm}\n\\left. + \\frac{4(1 - \\nu_\\pm)}{s - 1}\n\\left[ C_1^\\pm \\cos[(s + 1)\\theta] + C_3^\\pm \\sin[(s + 1)\\theta] \\right] \\right\\},\n\\end{array}\n$$\n$$\n\\begin{array}{l}\n\\displaystyle\n\\tilde{u}_\\theta^\\pm = \n-\\frac{1}{2s \\mu_\\pm} \\left\\{ -C_1^\\pm \\frac{s + 1}{s - 1} \\sin[(s + 1)\\theta] \n- C_2^\\pm \\sin[(s - 1)\\theta]\n+ C_3^\\pm \\frac{s + 1}{s - 1} \\cos[(s + 1)\\theta] \\right. \\\\[3mm]\n\\displaystyle \\hspace{30mm}\n\\left. + C_4^\\pm \\cos[(s - 1)\\theta] + \\frac{4(1 - \\nu_\\pm)}{s - 1}\n\\left[ C_1^\\pm \\sin[(s + 1)\\theta] - C_3^\\pm \\cos[(s + 1)\\theta] \\right] \\right\\}.\n\\end{array}\n$$\n\nThe coefficients $C_j^\\pm$ depend on the loading and will be given in the next section.\n\n\\subsubsection{The boundary conditions and full field solution} \n\\label{full}\nThe coefficients $C_j^\\pm$ are obtained from the boundary conditions on the crack faces, namely \n$$\n\\sigma_{\\theta\\theta}^\\pm(r,\\pm\\pi) = p^\\pm(r), \\quad \\sigma_{r\\theta}^\\pm(r,\\pm\\pi) = q^\\pm(r),\n$$\nwhere $p^\\pm(r), q^\\pm(r)$ are prescribed functions given by $p^\\pm(r) = p_2^\\pm(-r)$, $q^\\pm(r) = p_1^\\pm(-r)$, see \\eq{load}. 
Mishuris and Kuhn (2001) give the result in a \ncompact form as the solution to the following system of algebraic equations\n\\begin{equation}\n\\label{system}\n\\frac{2\\sin \\pi s}{s - 1}\n\\left[\n\\begin{array}{l}\nC_1^\\pm(s) \\\\[3mm]\nC_3^\\pm(s) \n\\end{array}\n\\right] = \n\\left[\n\\begin{array}{ll}\n-\\sin \\pi s & \\pm \\cos \\pi s \\\\[3mm]\n\\pm \\cos \\pi s & \\sin \\pi s\n\\end{array}\n\\right]\n\\mbox{\\boldmath $D$}(s)\n\\pm \\left[\n\\begin{array}{l}\n\\displaystyle \\langle \\tilde{q} \\rangle \\pm \\frac{1}{2} \\jump{0.15}{\\tilde{q}} \\\\[3mm]\n\\displaystyle \\langle \\tilde{p} \\rangle \\pm \\frac{1}{2} \\jump{0.15}{\\tilde{p}} \n\\end{array}\n\\right],\n\\end{equation}\n$$\n\\left[\n\\begin{array}{l}\n\\displaystyle\nC_1^\\pm(s) + C_2^\\pm(s) \\\\[3mm]\n\\displaystyle\n\\frac{s + 1}{s - 1} C_3^\\pm(s) + C_4^\\pm(s)\n\\end{array}\n\\right] = \\mbox{\\boldmath $D$}(s),\n$$\nwhere $\\mbox{\\boldmath $D$}(s) = - \\mbox{\\boldmath $\\Phi$}^{-1}(s) \\mbox{\\boldmath $F$}(s)$ and \n$$\n\\mbox{\\boldmath $\\Phi$}(s) = \n\\left[\n\\begin{array}{ll}\n\\cos \\pi s & d_* \\sin \\pi s \\\\[3mm]\n-d_* \\sin \\pi s & \\cos \\pi s \n\\end{array}\n\\right],\n\\quad\n\\mbox{\\boldmath $F$}(s) = \n\\left[\n\\begin{array}{l}\n\\displaystyle \\langle \\tilde{p} \\rangle(s) + \\frac{\\alpha}{2} \\jump{0.15}{\\tilde{p}}(s) \\\\[3mm]\n\\displaystyle \\langle \\tilde{q} \\rangle(s) + \\frac{\\alpha}{2} \\jump{0.15}{\\tilde{q}}(s)\n\\end{array}\n\\right].\n$$\n\nNote that the prescribed boundary conditions appear in \\eq{system} and in the definition of the vector $\\mbox{\\boldmath $F$}(s)$, in terms of the symmetric and skew-symmetric parts of the \nloading. 
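The structure of $\mbox{\boldmath $\Phi$}(s)$ can be probed numerically: its determinant is $\cos^2 \pi s + d_*^2 \sin^2 \pi s$, whose complex zeros and derivative values drive the residue calculus used for the near-tip asymptotics (see \eq{poles} and \eq{deltap}). A sketch with an assumed value of $d_*$, using $\tanh \pi\epsilon = d_*$, i.e. $\epsilon = \frac{1}{2\pi}\ln\frac{1+d_*}{1-d_*}$:

```python
import cmath, math

# Numerical check (with an ASSUMED Dundurs-type constant d_*) that
# delta(s) = cos^2(pi s) + d_*^2 sin^2(pi s), the determinant of Phi(s)
# in eq. (system), vanishes at s_n^± = (1 - 2n)/2 ± i*eps with
# tanh(pi*eps) = d_*, and that delta'(s_n^±) = ±2*pi*i*d_*
# (cf. eqs. (poles) and (deltap)).
d_star = 0.3
eps = math.atanh(d_star) / math.pi       # bimaterial oscillation index

def delta(s):
    return cmath.cos(math.pi * s) ** 2 + d_star**2 * cmath.sin(math.pi * s) ** 2

def delta_prime(s):
    # delta'(s) = 2*pi*(d_*^2 - 1)*sin(pi s)*cos(pi s)
    return 2 * math.pi * (d_star**2 - 1) * cmath.sin(math.pi * s) * cmath.cos(math.pi * s)

for n in (0, 1, 2):
    for sign in (+1, -1):
        s = (1 - 2 * n) / 2 + sign * 1j * eps
        assert abs(delta(s)) < 1e-12
        assert abs(delta_prime(s) - sign * 2j * math.pi * d_star) < 1e-10
```

The check confirms that the zeros lie on the lines $\mathop{\mathrm{Re}}(s) = (1-2n)/2$ and that the derivative at each zero depends only on the sign of the imaginary part, not on $n$.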
The material constants $d_*$ and $\\alpha$ are the Dundurs parameters, as described in the Appendix \\ref{app1}.\n\nDenoting the determinant of $\\mbox{\\boldmath $\\Phi$}(s)$ by $\\delta(s) = \\cos^2 \\pi s + d_*^2 \\sin^2 \\pi s$, we obtain the full field solution in terms of the Mellin transform as follows \n$$\\begin{array}{l}\n\\displaystyle \nC_1^\\pm(s) = \\frac{s - 1}{2 \\delta(s)}\n\\Bigg\\{\n\\langle \\tilde{p} \\rangle[(1 \\mp d_*) \\cos \\pi s] \n+ \\frac{1}{2} \\jump{0.15}{\\tilde{p}}[(1 \\mp d_*)\\alpha \\cos \\pi s] \\\\[3mm]\n\\displaystyle \\hspace{15mm} \n+ \\langle \\tilde{q} \\rangle\\left[-(1 \\mp d_*)d_* \\sin \\pi s \\right]\n+ \\frac{1}{2} \\jump{0.15}{\\tilde{q}}\\left[(d_* - \\alpha)d_* \\sin \\pi s \n+ (1 \\mp \\alpha)\\frac{\\cos^2 \\pi s}{\\sin \\pi s}\\right]\n\\Bigg\\},\n\\end{array}\n$$\n$$\n\\begin{array}{l}\n\\displaystyle \nC_3^\\pm(s) = \\frac{s - 1}{2 \\delta(s)}\n\\Bigg\\{\n\\langle \\tilde{p} \\rangle\\left[-(1 \\mp d_*)d_* \\sin \\pi s\\right]\n+ \\frac{1}{2} \\jump{0.15}{\\tilde{p}}\\left[(d_*-\\alpha)d_* \\sin \\pi s \n+ (1 \\mp \\alpha)\\frac{\\cos^2 \\pi s}{\\sin \\pi s}\\right] \\\\[3mm]\n\\displaystyle \\hspace{15mm}\n+ \\langle \\tilde{q} \\rangle[-(1 \\mp d_*) \\cos \\pi s] \n+ \\frac{1}{2} \\jump{0.15}{\\tilde{q}}[-(1 \\mp d_*)\\alpha \\cos \\pi s] \n\\Bigg\\},\n\\end{array}\n$$\n$$\n\\begin{array}{l}\n\\displaystyle \nC_2^\\pm(s) = -\\frac{1}{2 \\delta(s)}\n\\Bigg\\{\n\\langle \\tilde{p} \\rangle[(1 + s \\pm (1 - s)d_*) \\cos \\pi s] \n+ \\frac{1}{2} \\jump{0.15}{\\tilde{p}}[(1 + s \\pm (1 - s)d_*)\\alpha \\cos \\pi s] \\\\[3mm]\n\\displaystyle \\hspace{40mm} \n+ \\langle \\tilde{q} \\rangle\\left[(\\pm(1 + s) + (1 - s)d_*)d_* \\sin \\pi s\\right] \\\\[3mm]\n\\displaystyle \\hspace{40mm}\n+ \\frac{1}{2} \\jump{0.15}{\\tilde{q}}\\left[-(\\alpha(1 + s) + d_*(1 - s))d_* \\sin \\pi s \n- (1 \\mp \\alpha)(1 - s)\\frac{\\cos^2 \\pi s}{\\sin \\pi 
s}\\right]\n\\Bigg\\},\n\\end{array}\n$$\n$$\n\\begin{array}{l}\n\\displaystyle \nC_4^\\pm(s) = -\\frac{1}{2 \\delta(s)}\n\\Bigg\\{\n\\langle \\tilde{p} \\rangle\\left[(1 - s \\pm (1 + s)d_*)d_* \\sin \\pi s\\right] \\\\[3mm]\n\\displaystyle \\hspace{15mm}\n+ \\frac{1}{2} \\jump{0.15}{\\tilde{p}}\\left[(\\alpha(1 - s) + d_*(1 + s))d_* \\sin \\pi s \n+ (1 \\mp \\alpha)(1 + s)\\frac{\\cos^2 \\pi s}{\\sin \\pi s}\\right] \\\\[3mm]\n\\displaystyle \\hspace{15mm}\n+ \\langle \\tilde{q} \\rangle[(1 - s \\pm (1 + s)d_*) \\cos \\pi s] \n+ \\frac{1}{2} \\jump{0.15}{\\tilde{q}}[(1 - s \\pm (1 + s)d_*)\\alpha \\cos \\pi s] \n\\Bigg\\}.\n\\end{array}\n$$\n\nWe note that some of the poles of the functions $C_j^\\pm(s)$ are obtained from the solution of the equation $\\delta(s) = \\cos^2 \\pi s + d_*^2 \\sin^2 \\pi s = 0$, so that \nthe poles are given by \n\\begin{equation}\n\\label{poles}\ns_n^\\pm = \\frac{1 - 2n}{2} \\pm i\\epsilon,\n\\end{equation}\nwhere $n$ is an integer, and $\\epsilon$ is the bimaterial constant defined in Appendix \\ref{app1}.\n\nFor the purpose of evaluation of residues, it will be useful to have at hand the derivative of $\\delta(s)$. This is given by \n$\\delta'(s) = 2\\pi(-1+d_*^2) \\sin \\pi s \\cos \\pi s$, and its value at $s = s_n^\\pm$ is as follows\n\\begin{equation}\n\\label{deltap}\n\\delta'\\left(s_n^\\pm\\right) = \\pm 2\\pi i d_*.\n\\end{equation}\nThis formula will be used in the next section for the derivation of the asymptotic estimates of stress and displacement fields near the crack tip.\n\n\\subsection{Asymptotic representations near the crack tip and the stress intensity factors}\n\\label{asymptotics}\nThe solution outlined above leads to the asymptotic representations of stress and displacement near the crack tip. 
Analysis of the singular terms in the stress components \nyields the complex stress intensity factor for the interfacial crack, where the stress singularity is accompanied by the oscillatory behaviour of the physical fields.\n\n\\subsubsection{Asymptotics of stress and displacement near the crack tip}\nTaking into account the assumptions on the applied forces, the inspection of the results for $\\tilde{\\mbox{\\boldmath $\\sigma$}}(s,\\theta)$ shows that $\\tilde{\\mbox{\\boldmath $\\sigma$}}(s,\\theta)$ is \nanalytic in the strip $-1\/2 < \\mathop{\\mathrm{Re}}(s) < 1\/2$. Therefore, in \\eq{exponents} we have $\\gamma_0 = \\gamma_\\infty = 1\/2$, and choosing $\\omega_2 = 0$, the inverse transform \nis given by \n$$\n\\mbox{\\boldmath $\\sigma$}(r,\\theta) = \\frac{1}{2\\pi i} \\int_{0-i\\infty}^{0+i\\infty} \n\\tilde{\\mbox{\\boldmath $\\sigma$}}(s,\\theta) r^{-s-1} ds.\n$$\n\nBy means of the Cauchy's residue theorem, we can get the asymptotics of $\\mbox{\\boldmath $\\sigma$}(r,\\theta)$ as $r \\to 0$ as follows\n$$\n\\mbox{\\boldmath $\\sigma$}(r,\\theta) = \\sum_{\\pm} \\mathop{\\mathrm{Res}}[\\tilde{\\mbox{\\boldmath $\\sigma$}}(s,\\theta) r^{-s-1},s=s_1^\\pm] + \n\\frac{1}{2\\pi i} \\int_{\\omega-i\\infty}^{\\omega+i\\infty} \\tilde{\\mbox{\\boldmath $\\sigma$}}(s,\\theta) r^{-s-1} ds, \n$$\nwhere $\\omega < -1\/2$. The first two terms are the leading term, of the order $O(r^{-1\/2})$, and the last term is a higher order term, of the order \n$O(r^{\\beta}), \\beta > -1\/2$. The notation $\\mathop{\\mathrm{Res}}[f(s),s=s^*]$ stands for the residue of $f$ at the pole $s = s^*$,\n$$\n\\mathop{\\mathrm{Res}}[f(s),s=s^*] = \\frac{1}{(m-1)!} \\left. 
\\frac{d^{m-1} f(s)(s-s^*)^m}{ds^{m-1}}\\right|_{s=s^*},\n$$\nwhere $m$ is the order of the pole.\n\nIt suffices now to compute the residues at the poles $s = s_1^\\pm = -1\/2 \\pm i\\epsilon$,\n$$\n\\begin{array}{l}\n\\mathop{\\mathrm{Res}}\\left[\\tilde{\\mbox{\\boldmath $\\sigma$}}(s,\\theta)\\, r^{-s - 1}, s = -1\/2 \\pm i\\epsilon\\right] = \\\\[3mm]\n\\hspace{30mm}\n\\displaystyle\n= \\left[\\tilde{\\mbox{\\boldmath $\\sigma$}}(s,\\theta) \\delta(s)\\, r^{-s - 1}\\right]_{s = -1\/2 \\pm i\\epsilon} \n\\cdot \\mathop{\\mathrm{Res}}\\left[\\frac{1}{\\delta(s)}, s = -1\/2 \\pm i\\epsilon\\right] \\\\[3mm]\n\\hspace{30mm}\n\\displaystyle\n= \\left[\\tilde{\\mbox{\\boldmath $\\sigma$}}(s,\\theta) \\delta(s)\\, r^{-s - 1}\\right]_{s = -1\/2 \\pm i\\epsilon} \n\\cdot \\lim_{s \\to -1\/2 \\pm i\\epsilon} \\frac{s-(-1\/2 \\pm i\\epsilon)}{\\delta(s)} \\\\[3mm]\n\\hspace{30mm}\n\\displaystyle\n= \\left[\\tilde{\\mbox{\\boldmath $\\sigma$}}(s,\\theta) \\delta(s)\\, r^{-s - 1}\\right]_{s = -1\/2 \\pm i\\epsilon} \n\\cdot \\frac{1}{\\delta'(-1\/2 \\pm i\\epsilon)} \\\\[3mm]\n\\hspace{30mm}\n\\displaystyle\n= \\mp \\frac{i}{2\\pi d_*} \n\\left[\\tilde{\\mbox{\\boldmath $\\sigma$}}(s,\\theta) \\delta(s)\\right]_{s = -1\/2 \\pm i\\epsilon} \\cdot \nr^{-1\/2 \\mp i\\epsilon},\n\\end{array}\n$$\nwhere in the last equality we used the formula \\eq{deltap}.\n\nIt can be shown that $\\beta = 0$, so that the leading term asymptotics of the stress field is \n\\begin{equation}\n\\label{sigma}\n\\mbox{\\boldmath $\\sigma$}(r,\\theta) = \\beth_{-1\/2}(r,\\theta) + O(1),\n\\end{equation}\nwhere\n$$\n\\beth_{-1\/2}(r,\\theta) = \n\\frac{i}{2\\pi d_*}\n\\left\\{-\\left[\\tilde{\\mbox{\\boldmath $\\sigma$}}(s,\\theta) \\delta(s)\\right]_{s = -1\/2 + i\\epsilon} \nr^{-1\/2 - i\\epsilon}\n+ \\left[\\tilde{\\mbox{\\boldmath $\\sigma$}}(s,\\theta) \\delta(s)\\right]_{s = -1\/2 - i\\epsilon} \nr^{-1\/2 + i\\epsilon}\\right\\}.\n$$\n\nAn inspection of the results for $\\tilde{\\mbox{\\boldmath 
$u$}}(s,\\theta)$ shows that it is analytic in the same strip $-1\/2 < \\mathop{\\mathrm{Re}}(s) < 1\/2$, except for a simple pole at $s = 0$. Since \nwe seek the solution vanishing at infinity, the strip of analyticity for the function $\\tilde{\\mbox{\\boldmath $u$}}(s,\\theta)$ is $0 < \\mathop{\\mathrm{Re}}(s) < 1\/2$. Therefore, in \\eq{exponents} we \nhave $\\vartheta_0 = 0$, $\\vartheta_\\infty = \\gamma_\\infty = 1\/2$. Choosing $\\omega_1$ in this interval and applying Cauchy's residue theorem, we can write the \nleading term asymptotics of $\\mbox{\\boldmath $u$}(r,\\theta)$ as $r \\to 0$ as follows \n$$\n\\mbox{\\boldmath $u$}(r,\\theta) = \\mathop{\\mathrm{Res}}\\left[\\tilde{\\mbox{\\boldmath $u$}}(s,\\theta)\\, r^{-s}, s = 0\\right] + \\sum_{\\pm} \\mathop{\\mathrm{Res}}\\left[\\tilde{\\mbox{\\boldmath $u$}}(s,\\theta)\\, r^{-s}, s = s_1^\\pm\\right] \n+ \\frac{1}{2 \\pi i} \\int_{\\omega-i\\infty}^{\\omega+i\\infty} \\tilde{\\mbox{\\boldmath $u$}}(s,\\theta) r^{-s} ds,\n$$\nwhere $\\omega < -1\/2$. 
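Each term of this expansion is evaluated below; the residues at $s = s_1^\pm$ rely on the values $\delta'(-1/2 \pm i\epsilon) = \pm 2\pi i d_*$ used above (formula \eq{deltap}). These can be checked numerically with a toy model $\delta(s) = \cos^2\pi s + d_*^2 \sin^2\pi s$, $d_* = \tanh\pi\epsilon$, an assumption adopted here only because it has the required simple zeros at $s = -1/2 \pm i\epsilon$; the actual $\delta(s)$ is defined earlier in the paper, and the value of $\epsilon$ below is hypothetical:

```python
import numpy as np

eps = 0.08                     # hypothetical oscillation index
d_star = np.tanh(np.pi * eps)  # needed for delta to vanish at s = -1/2 +- i*eps

def delta(s):
    # toy model with the same zero structure as delta(s) in the text (assumption)
    return np.cos(np.pi * s) ** 2 + d_star ** 2 * np.sin(np.pi * s) ** 2

h = 1e-6
results = {}
for sign in (+1, -1):
    s0 = -0.5 + sign * 1j * eps
    assert abs(delta(s0)) < 1e-12          # simple zero of delta at s0
    dprime = (delta(s0 + h) - delta(s0 - h)) / (2 * h)
    target = sign * 2j * np.pi * d_star    # formula (deltap): delta'(s0) = +-2*pi*i*d_*
    results[sign] = (dprime, target)
    assert abs(dprime - target) < 1e-5
    # hence Res[1/delta(s), s0] = 1/delta'(s0) = -+ i/(2*pi*d_*)
    res = 1e-5 / delta(s0 + 1e-5)          # crude limit of (s - s0)/delta(s)
    assert abs(res - 1.0 / target) < 1e-3
```

The same finite-difference check applies to any candidate $\delta(s)$ with simple zeros at $s = -1/2 \pm i\epsilon$.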
The first term of the order $O(1)$ corresponds to the translation of the crack tip and reads\n\\begin{equation}\n\\label{qtheta}\n\\mbox{\\boldmath $u$}(0,\\theta) = \\mbox{\\boldmath $V$}_0(\\theta) = \n\\mbox{\\boldmath $Q$}(\\theta)\\mbox{\\boldmath $w$}_0, \\quad \\mbox{\\boldmath $Q$}(\\theta) = \\left[\\begin{array}{ll} \\cos\\theta & \\sin\\theta \\\\[3mm] -\\sin\\theta & \\cos\\theta \\end{array}\\right],\n\\end{equation}\nwhere\n$$\n\\begin{array}{ll}\n\\displaystyle w_{01} & \\displaystyle = u_1(0,0) \\\\[3mm]\n& \\displaystyle = \\frac{1}{2\\pi\\mu_\\pm} \\left\\{ [1 - 2\\nu_\\pm \\pm 2d_*(\\nu_\\pm - 1)] \\pi \\int_{0}^{\\infty} \\langle p \\rangle(r) dr + \n(-1 \\pm \\alpha)(\\nu_\\pm - 1) \\int_{0}^{\\infty} \\jump{0.15}{q}(r) (\\log r) dr \\right\\},\n\\end{array}\n$$\n$$\n\\begin{array}{ll}\n\\displaystyle w_{02} & \\displaystyle = u_2(0,0) \\\\[3mm]\n& \\displaystyle = \\frac{1}{2\\pi\\mu_\\pm} \\left\\{ -[1 - 2\\nu_\\pm \\pm 2d_*(\\nu_\\pm - 1)] \\pi \\int_{0}^{\\infty} \\langle q \\rangle(r) dr + \n(-1 \\pm \\alpha)(\\nu_\\pm - 1) \\int_{0}^{\\infty} \\jump{0.15}{p}(r) (\\log r) dr \\right\\},\n\\end{array}\n$$\nare the Cartesian components of the translation of the crack tip.\nThe second and third terms are of the order $O(r^{1\/2})$, and the last term is a higher order term, which is of the order $O(r^{\\beta}), \\beta > 1\/2$. 
\n\nIt can be shown that $\\beta = 1$, so that the computation of the residues leads to the asymptotics of the displacement field as follows\n\\begin{equation}\n\\label{disp}\n\\mbox{\\boldmath $u$}(r,\\theta) \n= \\mbox{\\boldmath $V$}_0(\\theta) + \\mbox{\\boldmath $V$}_{1\/2}(r,\\theta) + O(r),\n\\end{equation}\nwhere\n$$\n\\mbox{\\boldmath $V$}_{1\/2}(r,\\theta) = \n\\frac{i}{2\\pi d_*}\n\\left\\{-\\left[\\tilde{\\mbox{\\boldmath $u$}}(s,\\theta) \\delta(s)\\right]_{s = -1\/2 + i\\epsilon} \nr^{1\/2 - i\\epsilon}\n+ \\left[\\tilde{\\mbox{\\boldmath $u$}}(s,\\theta) \\delta(s)\\right]_{s = -1\/2 - i\\epsilon} \nr^{1\/2 + i\\epsilon}\\right\\}.\n$$\n\nNote that the remainders $O(1)$ in \\eq{sigma} and $O(r)$ in \\eq{disp} are rough estimates and will be refined in Section \\ref{high}. However, their physical meaning is \nalready evident: the first corresponds to the so-called T-stress and the second to a rigid body rotation superimposed on a uniform deformation near the crack tip. 
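Although each of the two terms in $\boldsymbol{V}_{1/2}$ is complex, their sum is real whenever the coefficients at the conjugate poles $s = -1/2 \pm i\epsilon$ are complex conjugates of each other, as they must be for a real displacement field. A short check, with hypothetical values of $\epsilon$, $d_*$ and of the coefficient $[\tilde{\boldsymbol{u}}(s,\theta)\delta(s)]_{s=-1/2+i\epsilon}$, shows this cancellation and the characteristic $r^{1/2}$ growth modulated by oscillations in $\ln r$:

```python
import numpy as np

eps, d_star = 0.08, 0.246   # hypothetical bimaterial constants
g = 0.9 - 0.4j              # hypothetical value of [u~(s,theta)*delta(s)] at s = -1/2 + i*eps

def V_half(r):
    # the combination appearing in V_{1/2}, with the conjugate
    # coefficient conj(g) at the conjugate pole s = -1/2 - i*eps
    return (1j / (2 * np.pi * d_star)) * (-g * r ** (0.5 - 1j * eps)
                                          + np.conj(g) * r ** (0.5 + 1j * eps))

r = np.logspace(-6, 0, 200)
vals = V_half(r)
# the imaginary parts cancel identically ...
assert np.max(np.abs(vals.imag)) < 1e-14
# ... and the real part equals Im[g r^{1/2 - i*eps}]/(pi d_*):
# an O(r^{1/2}) amplitude oscillating in log r
assert np.allclose(vals.real, np.imag(g * r ** (0.5 - 1j * eps)) / (np.pi * d_star))
```

The same conjugate-pair structure makes $\beth_{-1/2}$ in \eq{sigma} real as well.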
\n\n\\subsubsection{The complex stress intensity factor $K = K_\\text{I} + i K_\\text{II}$}\nFrom the full field solution outlined in Sections \\ref{general} and \\ref{full} we obtain\n\\begin{equation}\n\\label{eq46}\n\\tilde{\\sigma}_{\\theta\\theta}(s,0) + i \\tilde{\\sigma}_{r\\theta}(s,0) = \n-\\left\\{\\langle \\tilde{p} \\rangle(s) + i\\langle \\tilde{q} \\rangle(s) + \\frac{\\alpha}{2} \\jump{0.15}{\\tilde{p}}(s) + i \\frac{\\alpha}{2} \\jump{0.15}{\\tilde{q}}(s)\\right\\}\n\\frac{\\cos \\pi s + i d_* \\sin \\pi s}{\\delta(s)}.\n\\end{equation}\n\nApplying the formula \\eq{sigma}, the leading term asymptotics may be written as\n$$\n\\sigma_{\\theta\\theta}(r,0) + i \\sigma_{r\\theta}(r,0) = \\frac{K}{\\sqrt{2\\pi}} r^{-1\/2 + i\\epsilon} + O(1),\n$$\nwhere\n$$\nK = - \\sqrt{\\frac{2}{\\pi}} \\cosh(\\pi\\epsilon) \\left\\{\\langle \\tilde{p} \\rangle(s) + i\\langle \\tilde{q} \\rangle(s) \n+ \\frac{\\alpha}{2} \\jump{0.15}{\\tilde{p}}(s) + i \\frac{\\alpha}{2} \\jump{0.15}{\\tilde{q}}(s)\\right\\}_{s = -1\/2 - i\\epsilon}\n$$\nis the complex stress intensity factor.\n\nCorrespondingly, by means of the formula \\eq{disp}, we obtain\n$$\n\\jump{0.15}{u_\\theta}(r) + i\\jump{0.15}{u_r}(r) = \n-\\frac{(1 - \\nu_+) \/ \\mu_+ + (1 - \\nu_-) \/ \\mu_-}{(1\/2 + i\\epsilon)\\cosh(\\pi\\epsilon)} \\frac{K}{\\sqrt{2\\pi}} r^{1\/2 + i\\epsilon} + O(r), \n$$\nwhere we used the notation $\\jump{0.15}{f}(r) = f(r,\\pi) - f(r,-\\pi)$. 
Note that the zero-order term $\\mbox{\\boldmath $V$}_0(\\theta)$ present in \\eq{disp} disappears in the last formula.\n\nThe formula for the complex stress intensity factor is then\n$$\n\\begin{array}{l}\n\\displaystyle\nK = - \\sqrt{\\frac{2}{\\pi}} \\cosh(\\pi\\epsilon) \n\\int_{0}^{\\infty}\\left\\{\\langle p \\rangle(r) + i\\langle q \\rangle(r) + \\frac{\\alpha}{2} \\jump{0.15}{p}(r) + i \\frac{\\alpha}{2} \\jump{0.15}{q}(r)\\right\\} \nr^{-1\/2 - i\\epsilon} dr.\n\\end{array}\n$$\n\nAs expected, this representation is consistent with Hutchinson, Mear and Rice (1987) who considered a crack loaded by point forces a distance $a$ behind the tip:\n$$\np^+(r) = p^-(r) = -P \\delta(r-a), \\quad q^+(r) = q^-(r) = -Q \\delta(r-a),\n$$\nwhere $\\delta(\\cdot)$ is the Dirac delta function, so that\n$$\n\\langle p \\rangle(r) = -P \\delta(r-a), \\quad \\jump{0.15}{p}(r) = 0, \\quad \\langle q \\rangle(r) = -Q \\delta(r-a), \\quad \\jump{0.15}{q}(r) = 0.\n$$\nFor this loading we obtain\n$$\nK = \\sqrt{\\frac{2}{\\pi}} \\cosh(\\pi\\epsilon) \\int_0^\\infty \\left\\{ P \\delta(r-a) + i Q \\delta(r-a) \\right\\} r^{-1\/2-i\\epsilon} dr \n= \\sqrt{\\frac{2}{\\pi}} \\cosh(\\pi\\epsilon) (P+iQ) a^{-1\/2-i\\epsilon},\n$$\nwhich fully agrees with Hutchinson, Mear and Rice (1987).\n\n\\subsection{High-order asymptotics}\n\\label{high}\nThe asymptotic procedure involving the evaluation of stress intensity factors for cracks with a slightly perturbed front requires the high-order asymptotics of the \ndisplacement and stress fields near the crack edge. 
The procedure of the previous section can be extended to the high-order terms, and hence the high-order asymptotics \nof the stress field is given by\n\\begin{equation}\n\\label{beth}\n\\begin{array}{l}\n\\mbox{\\boldmath $\\sigma$}(r,\\theta) = \\beth_{-1\/2}(r,\\theta) + \\mbox{\\boldmath $T$}(\\theta) + \\beth_{1\/2}(r,\\theta)\n+ \\mbox{\\boldmath $S$}(\\theta)r + \\beth_{3\/2}(r,\\theta) + O(r^2), \n\\end{array}\n\\end{equation}\nwhere (the superscript $^\\top$ denotes transposition, $x_1 = r \\cos\\theta$, $x_2 = r \\sin\\theta$, $\\mbox{\\boldmath $Q$}(\\theta)$ is given in \\eq{qtheta})\n$$\n\\mbox{\\boldmath $T$}(\\theta) = \\mathop{\\mathrm{Res}}\\left[\\tilde{\\mbox{\\boldmath $\\sigma$}}(s,\\theta), s = -1\\right] = T \\mbox{\\boldmath $Q$}(\\theta) \\left[\\begin{array}{ll} 1 & 0 \\\\[3mm] 0 & 0 \\end{array}\\right] \\mbox{\\boldmath $Q$}^\\top(\\theta), \n$$\n$$\n\\begin{array}{l}\n\\displaystyle \\mbox{\\boldmath $S$}(\\theta) = \\mathop{\\mathrm{Res}}\\left[\\tilde{\\mbox{\\boldmath $\\sigma$}}(s,\\theta), s = -2\\right] = \\\\[3mm]\n\\displaystyle = \\frac{1 \\mp \\alpha}{\\pi \\sqrt{x_1^2 + x_2^2}} \\mbox{\\boldmath $Q$}(\\theta) \n\\left[\n\\begin{array}{ll}\n\\displaystyle x_2 \\int_{0}^{\\infty} \\jump{0.15}{p}(r) \\frac{dr}{r^2} - x_1 \\int_{0}^{\\infty} \\jump{0.15}{q}(r) \\frac{dr}{r^2} \n& \\displaystyle x_2 \\int_{0}^{\\infty} \\jump{0.15}{q}(r) \\frac{dr}{r^2} \\\\[3mm]\n\\displaystyle x_2 \\int_{0}^{\\infty} \\jump{0.15}{q}(r) \\frac{dr}{r^2} \n& \\displaystyle 0 \n\\end{array}\n\\right] \n\\mbox{\\boldmath $Q$}^\\top(\\theta),\n\\end{array}\n$$\nand\n$$\n\\begin{array}{ll}\n\\displaystyle \\beth_{\\,l\/2}(r,\\theta) \n&\n\\displaystyle = \\sum_{\\pm} \\mathop{\\mathrm{Res}}\\left[\\tilde{\\mbox{\\boldmath $\\sigma$}}(s,\\theta)\\, r^{-s - 1}, s = -l\/2 -1 \\pm i\\epsilon\\right] \\\\[5mm]\n&\n\\displaystyle = \\frac{i}{2\\pi d_*}\n\\left\\{-\\left[\\tilde{\\mbox{\\boldmath $\\sigma$}}(s,\\theta) \\delta(s)\\right]_{s = -l\/2 -1 + 
i\\epsilon} \nr^{l\/2 - i\\epsilon}\n+ \\left[\\tilde{\\mbox{\\boldmath $\\sigma$}}(s,\\theta) \\delta(s)\\right]_{s = -l\/2 -1 - i\\epsilon} \nr^{l\/2 + i\\epsilon}\\right\\}, \\quad l = 1,3.\n\\end{array}\n$$\n\nNote that $T$ is the T-stress given by\n$$\nT = \\frac{1 \\mp \\alpha}{\\pi} \\int_{0}^{\\infty} \\jump{0.15}{q}(r) \\frac{dr}{r}.\n$$\n\nApplying the formula \\eq{beth} to \\eq{eq46}, we finally obtain\n$$\n\\sigma_{\\theta\\theta}(r,0) + i \\sigma_{r\\theta}(r,0) = \\frac{K}{\\sqrt{2\\pi}} r^{-1\/2 + i\\epsilon} \n+ \\frac{A}{\\sqrt{2\\pi}} r^{1\/2 + i\\epsilon} + \\frac{B}{\\sqrt{2\\pi}} r^{3\/2 + i\\epsilon} \n+ O(r^{2}),\n$$\nwhere the constants in the high-order terms are \n$$\nA = \\sqrt{\\frac{2}{\\pi}} \\cosh(\\pi\\epsilon) \n\\int_0^\\infty \\left\\{\\langle p \\rangle(r) + i\\langle q \\rangle(r) + \\frac{\\alpha}{2} \\jump{0.15}{p}(r) + i \\frac{\\alpha}{2} \\jump{0.15}{q}(r)\\right\\} \nr^{-3\/2-i\\epsilon} dr, \n$$\n$$\nB = -\\sqrt{\\frac{2}{\\pi}} \\cosh(\\pi\\epsilon) \n\\int_0^\\infty \\left\\{\\langle p \\rangle(r) + i\\langle q \\rangle(r) + \\frac{\\alpha}{2} \\jump{0.15}{p}(r) + i \\frac{\\alpha}{2} \\jump{0.15}{q}(r)\\right\\} \nr^{-5\/2-i\\epsilon} dr. 
\n$$\nIf the loading is smooth enough, then we can integrate by parts to obtain\n$$\nA = \\sqrt{\\frac{2}{\\pi}} \\frac{\\cosh(\\pi\\epsilon)}{1\/2+i\\epsilon} \n\\int_0^\\infty \\left\\{\\langle p \\rangle'(r) + i\\langle q \\rangle'(r) + \\frac{\\alpha}{2} \\jump{0.15}{p}'(r) + i \\frac{\\alpha}{2} \\jump{0.15}{q}'(r)\\right\\} \nr^{-1\/2-i\\epsilon} dr,\n$$\n$$\nB = -\\sqrt{\\frac{2}{\\pi}} \\frac{\\cosh(\\pi\\epsilon)}{(1\/2+i\\epsilon)(3\/2+i\\epsilon)} \n\\int_0^\\infty \\left\\{\\langle p \\rangle''(r) + i\\langle q \\rangle''(r) + \\frac{\\alpha}{2} \\jump{0.15}{p}''(r) + i \\frac{\\alpha}{2} \\jump{0.15}{q}''(r)\\right\\} \nr^{-1\/2-i\\epsilon} dr.\n$$\n\nCorrespondingly, we obtain for the high-order asymptotics of the displacement field the expression\n$$\n\\mbox{\\boldmath $u$}(r,\\theta) = \\mbox{\\boldmath $V$}_0(\\theta) + \\mbox{\\boldmath $V$}_{1\/2}(r,\\theta) + \\mbox{\\boldmath $V$}_1(\\theta)r + \\mbox{\\boldmath $V$}_{3\/2}(r,\\theta) \n+ \\mbox{\\boldmath $V$}_2(\\theta)r^2 + \\mbox{\\boldmath $V$}_{5\/2}(r,\\theta) + O(r^3), \n$$\nwhere\n$$\n\\mbox{\\boldmath $V$}_l(\\theta) = \\mathop{\\mathrm{Res}}\\left[\\tilde{\\mbox{\\boldmath $u$}}(s,\\theta), s = -l\\right] = \\mbox{\\boldmath $Q$}(\\theta) r^{-l} \\mbox{\\boldmath $w$}_l(r,\\theta), \\quad l = 1,2,\n$$\n\\vspace{3mm}\n$$\n\\begin{array}{ll}\n\\displaystyle \\mbox{\\boldmath $V$}_{l\/2} \n&\n\\displaystyle = \\sum_{\\pm} \\mathop{\\mathrm{Res}}\\left[\\tilde{\\mbox{\\boldmath $u$}}(s,\\theta)\\, r^{-s}, s = -l\/2 \\pm i\\epsilon\\right] \\\\[5mm]\n&\n\\displaystyle = \\frac{i}{2\\pi d_*}\n\\left\\{-\\left[\\tilde{\\mbox{\\boldmath $u$}}(s,\\theta) \\delta(s)\\right]_{s = -l\/2 + i\\epsilon} \nr^{l\/2 - i\\epsilon}\n+ \\left[\\tilde{\\mbox{\\boldmath $u$}}(s,\\theta) \\delta(s)\\right]_{s = -l\/2 - i\\epsilon} \nr^{l\/2 + i\\epsilon}\\right\\}, \\quad l = 3,5,\n\\end{array}\n$$\nand\n$$\nw_{11} = \\frac{1 \\mp \\alpha}{2\\pi \\mu_\\pm} (1 - \\nu_\\pm) \\left\\{x_1\\int_{0}^{\\infty} 
\\jump{0.15}{q}(r) \\frac{dr}{r} - \nx_2\\int_{0}^{\\infty} \\jump{0.15}{p}(r) \\frac{dr}{r}\\right\\},\n$$\n$$\nw_{12} = \\frac{1 \\mp \\alpha}{2\\pi \\mu_\\pm} \\left\\{(1 - \\nu_\\pm) x_1\\int_{0}^{\\infty} \\jump{0.15}{p}(r) \\frac{dr}{r} - \n\\nu_\\pm x_2\\int_{0}^{\\infty} \\jump{0.15}{q}(r) \\frac{dr}{r}\\right\\},\n$$\n$$\nw_{21} = \\frac{1 \\mp \\alpha}{8\\pi \\mu_\\pm} \\left\\{4(1 - \\nu_\\pm)x_1x_2\\int_{0}^{\\infty} \\jump{0.15}{p}(r) \\frac{dr}{r^2} + \n[(x_1^2 + x_2^2) - (3 - 2\\nu_\\pm)(x_1^2 - x_2^2)]\\int_{0}^{\\infty} \\jump{0.15}{q}(r) \\frac{dr}{r^2}\\right\\},\n$$\n$$\nw_{22} = \\frac{1 \\mp \\alpha}{8\\pi \\mu_\\pm} \\left\\{-[(x_1^2 + x_2^2) + (1 - 2\\nu_\\pm)(x_1^2 - x_2^2)]\\int_{0}^{\\infty} \\jump{0.15}{p}(r) \\frac{dr}{r^2} + \n4\\nu_\\pm x_1x_2 \\int_{0}^{\\infty} \\jump{0.15}{q}(r) \\frac{dr}{r^2}\\right\\}.\n$$\nWe note that $\\mbox{\\boldmath $w$}_l(r,\\theta)$ is homogeneous of degree $l$ in the variable $r$ and that $w_{11}$ and $w_{12}$ are the Cartesian components of the local rigid body rotation superimposed on a uniform deformation near the crack tip mentioned above.\n\nHence\n$$\n\\begin{array}{l}\n\\displaystyle \\jump{0.15}{u_\\theta}(r) + i\\jump{0.15}{u_r}(r) = \n-\\frac{(1 - \\nu_+) \/ \\mu_+ + (1 - \\nu_-) \/ \\mu_-}{\\cosh(\\pi\\epsilon)} \\times \\\\[3mm]\n\\displaystyle \\left\\{ \\frac{1}{1\/2 + i\\epsilon} \\frac{K}{\\sqrt{2\\pi}} r^{1\/2 + i\\epsilon} \n- \\frac{1}{3\/2 + i\\epsilon} \\frac{A}{\\sqrt{2\\pi}} r^{3\/2 + i\\epsilon} + \\frac{1}{5\/2 + i\\epsilon} \\frac{B}{\\sqrt{2\\pi}} r^{5\/2 + i\\epsilon} \\right\\} + O(r^{3}). \n\\end{array}\n$$\n\n\\section{Symmetric and skew-symmetric weight functions for interfacial cracks}\n\\label{wfunc}\nIn this section, we introduce a special type of singular solutions of a homogeneous problem for an interfacial crack. 
The traces of these functions on the plane containing \nthe crack are known as the weight functions, and they are used in the evaluation of the stress intensity factors in models of linear fracture mechanics. The notations \n$\\jump{0.15}{\\mbox{\\boldmath $U$}}$ and $\\langle \\mbox{\\boldmath $U$} \\rangle$ will be used for the symmetric and skew-symmetric weight function matrices, as outlined in the sequel of the paper.\n\nWe refer to the earlier publications by Antipov (1999), Bercial-Velez {\\em et al.}\\ (2005) and Piccolroaz {\\em et al.}\\ (2007) for the detailed discussion of the theory of \nweight functions and related functional equations of the Wiener-Hopf type. Here we give an outline, required for our purpose of evaluation of the weight function matrices \nfor interfacial cracks.\n\nThe special singular solutions for the interfacial crack are defined as the solutions of the elasticity problem where the crack is placed along the positive semi-axis, \n$x_1 > 0$, the boundary conditions are homogeneous (traction-free crack faces), and the solutions satisfy special homogeneity properties. In particular, \n\\begin{itemize}\n\\item[(a)]\nthe singular solution $\\mbox{\\boldmath $U$} = [U_1,U_2]^\\top$ satisfies the equation of equilibrium;\n\\item[(b)]\n$\\jump{0.15}{\\mbox{\\boldmath $U$}} = 0$ when $x_1 < 0$;\n\\item[(c)]\nthe associated traction vector acting on the plane containing the crack, $\\mbox{\\boldmath $\\Sigma$} = [\\Sigma_{21},\\Sigma_{22}]^\\top$, is continuous and $\\mbox{\\boldmath $\\Sigma$} = 0$ when $x_2 = 0$ and \n$x_1 > 0$ (homogeneous boundary conditions); \n\\item[(d)]\n$\\mbox{\\boldmath $U$}$ is a linear combination of homogeneous functions of degree $-1\/2 + i\\epsilon$ and $-1\/2 - i\\epsilon$.\n\\end{itemize}\n\nWe emphasise here that the domain for weight functions (where the crack is placed along the positive semi-axis) is different from the domain for the physical solution \n(where the crack is placed along the negative semi-axis). 
The reason for this is to have the fundamental integral identity, introduced in the sequel, eq. \\eq{betti}, \nin convolution form (see Piccolroaz {\\em et al.}, 2007, for details).\n\n\\subsection{The Wiener-Hopf equation}\n\\subsubsection{Factorization and solution}\nLet us introduce the Fourier transforms of the crack-opening singular displacements and the corresponding traction components\n$$\n\\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^+(\\beta) = \\int_{0}^{\\infty} \\jump{0.15}{\\mbox{\\boldmath $U$}}(x_1) e^{i\\beta x_1} dx_1, \\quad\n\\overline{\\mbox{\\boldmath $\\Sigma$}}^-(\\beta) = \\int_{-\\infty}^{0} \\mbox{\\boldmath $\\Sigma$}(x_1) e^{i\\beta x_1} dx_1.\n$$\nThe superscript $^+$ indicates that the function $\\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^+(\\beta)$ is analytic in the upper half-plane \n$\\ensuremath{\\mathbb{C}}^+ = \\{\\beta \\in \\ensuremath{\\mathbb{C}} : \\mathop{\\mathrm{Re}}\\beta \\in (-\\infty,\\infty), \\mathop{\\mathrm{Im}}\\beta \\in (0,\\infty)\\}$, whereas the superscript $^-$ indicates that the function \n$\\overline{\\mbox{\\boldmath $\\Sigma$}}^-(\\beta)$ is analytic in the lower half-plane $\\ensuremath{\\mathbb{C}}^- = \\{\\beta \\in \\ensuremath{\\mathbb{C}} : \\mathop{\\mathrm{Re}}\\beta \\in (-\\infty,\\infty), \\mathop{\\mathrm{Im}}\\beta \\in (-\\infty,0)\\}$.\n\nThese functions are related via the functional equation of the Wiener-Hopf type (Antipov, 1999):\n\\begin{equation}\n\\label{wh}\n\\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^+(\\beta) = \n-\\frac{b}{|\\beta|} \\mbox{\\boldmath $G$}(\\beta) \\overline{\\mbox{\\boldmath $\\Sigma$}}^-(\\beta), \\quad \\beta \\in \\ensuremath{\\mathbb{R}},\n\\end{equation}\nwhere $b = (1 - \\nu_+)\/\\mu_+ + (1 - \\nu_-)\/\\mu_-$ and \n$$\n\\mbox{\\boldmath $G$}(\\beta) = \n\\left[\n\\begin{array}{cc}\n1 & -i\\mathop{\\mathrm{sign}}(\\beta) d_* \\\\[3mm]\ni\\mathop{\\mathrm{sign}}(\\beta) d_* & 1\n\\end{array}\n\\right].\n$$\n\nThe factorization of 
$-b\/|\\beta| \\mbox{\\boldmath $G$}$ is given by\n\\begin{equation}\n\\label{fact}\n-\\frac{b}{|\\beta|} \\mbox{\\boldmath $G$}(\\beta) = \n-\\frac{b}{\\beta_+^{1\/2} \\beta_-^{1\/2}} \\mbox{\\boldmath $X$}^+(\\beta) [\\mbox{\\boldmath $X$}^-(\\beta)]^{-1}, \\quad \\beta \\in \\ensuremath{\\mathbb{R}},\n\\end{equation}\nwhere\n$$\n\\mbox{\\boldmath $X$}^+(\\beta) = d_0\n\\left[\n\\begin{array}{cc}\n\\cos B^+ & -\\sin B^+ \\\\[3mm]\n\\sin B^+ & \\cos B^+\n\\end{array}\n\\right], \\quad \\beta \\in \\ensuremath{\\mathbb{C}}^+, \\quad\n\\mbox{\\boldmath $X$}^-(\\beta) = d_0^{-1}\n\\left[\n\\begin{array}{cc}\n\\cos B^- & -\\sin B^- \\\\[3mm]\n\\sin B^- & \\cos B^-\n\\end{array}\n\\right], \\quad \\beta \\in \\ensuremath{\\mathbb{C}}^-,\n$$\n$d_0 = (1 - d_*^2)^{1\/4}$, and the limit values of the functions $\\beta_\\pm^{1\/2}$, as $\\mathop{\\mathrm{Im}}(\\beta) \\to 0^\\pm$, are\n$$\n\\beta_+^{1\/2} =\n\\left\\{ \n\\begin{array}{ll}\n\\beta^{1\/2}, & \\beta>0 \\\\[3mm]\ni(-\\beta)^{1\/2}, & \\beta<0\n\\end{array}\n\\right., \\quad\n\\beta_-^{1\/2} =\n\\left\\{ \n\\begin{array}{ll}\n\\beta^{1\/2}, & \\beta>0 \\\\[3mm]\n-i(-\\beta)^{1\/2}, & \\beta<0\n\\end{array}\n\\right.,\n$$\nand\n$$\nB^\\pm = -\\epsilon \\log(\\mp i\\beta), \\quad \\beta \\in \\ensuremath{\\mathbb{C}}^\\pm.\n$$\nFrom the definition of singular solutions given in the introduction of this section, it follows that the asymptotic behaviour of the weight functions is given by\n$$\n\\begin{array}{ll}\n\\jump{0.15}{\\mbox{\\boldmath $U$}}(x_1) \\sim x_1^{-1\/2} \\mbox{\\boldmath $F$}_1(x_1), & x_1 \\to 0^+, \\\\[3mm]\n\\mbox{\\boldmath $\\Sigma$}(x_1) \\sim (-x_1)^{-3\/2} \\mbox{\\boldmath $F$}_2(x_1), & x_1 \\to 0^-,\n\\end{array}\n$$ \n$$\n\\begin{array}{ll}\n\\jump{0.15}{\\mbox{\\boldmath $U$}}(x_1) \\sim x_1^{-1\/2} \\mbox{\\boldmath $F$}_3(x_1), & x_1 \\to \\infty, \\\\[3mm]\n\\mbox{\\boldmath $\\Sigma$}(x_1) \\sim (-x_1)^{-3\/2} \\mbox{\\boldmath $F$}_4(x_1), & x_1 \\to 
-\\infty,\n\\end{array}\n$$\nwhere $\\mbox{\\boldmath $F$}_1,\\mbox{\\boldmath $F$}_2,\\mbox{\\boldmath $F$}_3,\\mbox{\\boldmath $F$}_4$ are bounded functions. The application of the Abelian type theorem \\ref{abel} and the Tauberian type theorem \\ref{tau} \n(see Appendix \\ref{app3}) gives\n$$\n\\begin{array}{ll}\n\\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^+(\\beta) \\sim \\beta_+^{-1\/2} \\tilde{\\mbox{\\boldmath $F$}}_1(\\beta), & \n\\beta \\in \\ensuremath{\\mathbb{C}}^+, \\quad \\beta \\to \\infty, \\\\[3mm]\n\\overline{\\mbox{\\boldmath $\\Sigma$}}^-(\\beta) \\sim \\beta_-^{1\/2} \\tilde{\\mbox{\\boldmath $F$}}_2(\\beta), & \n\\beta \\in \\ensuremath{\\mathbb{C}}^-, \\quad \\beta \\to \\infty,\n\\end{array}\n$$ \n$$\n\\begin{array}{ll}\n\\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^+(\\beta) \\sim \\beta_+^{-1\/2} \\tilde{\\mbox{\\boldmath $F$}}_3(\\beta), & \n\\beta \\in \\ensuremath{\\mathbb{C}}^+, \\quad \\beta \\to 0, \\\\[3mm]\n\\overline{\\mbox{\\boldmath $\\Sigma$}}^-(\\beta) \\sim \\beta_-^{1\/2} \\tilde{\\mbox{\\boldmath $F$}}_4(\\beta), & \n\\beta \\in \\ensuremath{\\mathbb{C}}^-, \\quad \\beta \\to 0,\n\\end{array}\n$$\nwhere $\\tilde{\\mbox{\\boldmath $F$}}_1,\\tilde{\\mbox{\\boldmath $F$}}_2,\\tilde{\\mbox{\\boldmath $F$}}_3,\\tilde{\\mbox{\\boldmath $F$}}_4$ are bounded functions. 
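The factorization \eq{fact} can be verified numerically on the real axis. Since $\beta_+^{1/2}\beta_-^{1/2} = |\beta|$ for real $\beta$, the identity reduces to $\boldsymbol{X}^+(\beta)[\boldsymbol{X}^-(\beta)]^{-1} = \boldsymbol{G}(\beta)$, and the check closes provided $d_* = \tanh(\pi\epsilon)$ and $d_0^2\cosh(\pi\epsilon) = 1$, i.e. $d_0 = (1 - d_*^2)^{1/4}$ (relations assumed here; they are exactly what the factorization requires). A sketch:

```python
import numpy as np

eps = 0.08                      # hypothetical oscillation index
d_star = np.tanh(np.pi * eps)   # assumed relation d_* = tanh(pi*eps)
d0 = (1.0 - d_star ** 2) ** 0.25  # assumed d_0^2 cosh(pi*eps) = 1

def rot(B):
    # rotation-type matrix common to X^+ and X^- (entries may be complex)
    return np.array([[np.cos(B), -np.sin(B)],
                     [np.sin(B),  np.cos(B)]])

def G(beta):
    s = np.sign(beta)
    return np.array([[1.0, -1j * s * d_star],
                     [1j * s * d_star, 1.0]])

for beta in (3.0, 0.4, -0.7, -5.0):
    # limit values of B^+ (from C^+) and B^- (from C^-) on the real axis;
    # the principal branch of the logarithm gives exactly these limits
    Bp = -eps * np.log(-1j * beta)
    Bm = -eps * np.log(1j * beta)
    Xp = d0 * rot(Bp)
    Xm = rot(Bm) / d0
    assert np.allclose(Xp @ np.linalg.inv(Xm), G(beta), atol=1e-12)
ok = True
```

The check works because $B^+ - B^- = \pm i\epsilon\pi$ for $\beta \gtrless 0$, turning the rotation into a hyperbolic one with entries $\cosh(\pi\epsilon)$ and $\mp i\sinh(\pi\epsilon)$.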
Substituting the representation \\eq{fact} for the matrix $-b\/|\\beta|\\mbox{\\boldmath $G$}(\\beta)$ \ninto the Wiener-Hopf equation \\eq{wh}, we obtain\n\\begin{equation}\n\\label{wiener}\n\\beta_+^{1\/2} [\\mbox{\\boldmath $X$}^+(\\beta)]^{-1} \\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^+(\\beta) =\n-\\frac{b}{\\beta_-^{1\/2}} [\\mbox{\\boldmath $X$}^-(\\beta)]^{-1} \\overline{\\mbox{\\boldmath $\\Sigma$}}^-(\\beta), \\quad \\beta \\in \\ensuremath{\\mathbb{R}}.\n\\end{equation}\nSince $[\\mbox{\\boldmath $X$}^+(\\beta)]^{-1}$ and $[\\mbox{\\boldmath $X$}^-(\\beta)]^{-1}$ are bounded functions and taking into account the asymptotic behaviour of the weight functions at infinity, it \nfollows that the LHS and RHS of \\eq{wiener} behave as $O(1)$, $\\beta \\to \\infty$, and from the Liouville theorem, they are equal to the same constant \n$\\mbox{\\boldmath $e$} = [e_1,e_2]^\\top$, where $e_1,e_2$ are arbitrary, so that\n\\begin{equation}\n\\label{wf}\n\\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^+(\\beta) = \\frac{1}{\\beta_+^{1\/2}} \\mbox{\\boldmath $X$}^+(\\beta)\n\\left[ \n\\begin{array}{l}\ne_1 \\\\[3mm]\ne_2\n\\end{array}\n\\right], \\quad \\beta \\in \\ensuremath{\\mathbb{C}}^+, \\quad\n\\overline{\\mbox{\\boldmath $\\Sigma$}}^-(\\beta) = -\\frac{\\beta_-^{1\/2}}{b} \\mbox{\\boldmath $X$}^-(\\beta)\n\\left[ \n\\begin{array}{l}\ne_1 \\\\[3mm]\ne_2\n\\end{array}\n\\right], \\quad \\beta \\in \\ensuremath{\\mathbb{C}}^-.\n\\end{equation}\n\n\\subsubsection{The basis in the space of singular solutions}\nFrom now on, we shall use the notion of weight functions, defined as traces of the singular displacement fields obtained in the above section. It is evident from the \nsolution obtained above that the space of singular solutions is a 2-dimensional linear space. Two linearly independent weight functions can be selected as follows. 
\nSetting $e_1 = 1$, $e_2 = 0$ in \\eq{wf}, we get the first weight function as:\n\\begin{equation}\n\\label{first}\n\\begin{array}{l}\n\\displaystyle \\jump{0.15}{\\overline{U}_1^1}^+ = \\frac{d_0}{\\beta_+^{1\/2}} \\cos B^+ =\n\\frac{d_0}{2} \\beta_+^{-1\/2} \\left(e_0 \\beta_+^{i\\epsilon} + \\frac{1}{e_0} \\beta_+^{-i\\epsilon}\\right), \\quad \n\\beta \\in \\ensuremath{\\mathbb{C}}^+, \\\\[3mm]\n\\displaystyle \\jump{0.15}{\\overline{U}_2^1}^+ = \\frac{d_0}{\\beta_+^{1\/2}} \\sin B^+ =\n\\frac{i d_0}{2} \\beta_+^{-1\/2} \\left(e_0 \\beta_+^{i\\epsilon} - \\frac{1}{e_0} \\beta_+^{-i\\epsilon}\\right), \\quad \n\\beta \\in \\ensuremath{\\mathbb{C}}^+, \\\\[3mm]\n\\displaystyle \\overline{\\mbox{\\boldmath $\\Sigma$}}_{21}^{1-} = -\\frac{\\beta_-^{1\/2}}{bd_0} \\cos B^- =\n- \\frac{1}{2bd_0} \\beta_-^{1\/2} \\left(\\frac{1}{e_0} \\beta_-^{i\\epsilon} + e_0 \\beta_-^{-i\\epsilon}\\right), \\quad \n\\beta \\in \\ensuremath{\\mathbb{C}}^-, \\\\[3mm]\n\\displaystyle \\overline{\\mbox{\\boldmath $\\Sigma$}}_{22}^{1-} = -\\frac{\\beta_-^{1\/2}}{bd_0} \\sin B^- =\n- \\frac{i}{2bd_0} \\beta_-^{1\/2} \\left(\\frac{1}{e_0} \\beta_-^{i\\epsilon} - e_0 \\beta_-^{-i\\epsilon}\\right), \\quad \n\\beta \\in \\ensuremath{\\mathbb{C}}^-,\n\\end{array}\n\\end{equation}\nwhere $e_0 = e^{\\epsilon\\pi\/2}$.\n\nSetting $e_1 = 0$, $e_2 = 1$ in \\eq{wf}, we get the second weight function as:\n\\begin{equation}\n\\label{second}\n\\begin{array}{l}\n\\displaystyle \\jump{0.15}{\\overline{U}_1^2}^+ = -\\frac{d_0}{\\beta_+^{1\/2}} \\sin B^+ =\n-\\frac{i d_0}{2} \\beta_+^{-1\/2} \\left(e_0 \\beta_+^{i\\epsilon} - \\frac{1}{e_0} \\beta_+^{-i\\epsilon}\\right), \\quad \n\\beta \\in \\ensuremath{\\mathbb{C}}^+, \\\\[3mm]\n\\displaystyle \\jump{0.15}{\\overline{U}_2^2}^+ = \\frac{d_0}{\\beta_+^{1\/2}} \\cos B^+ =\n\\frac{d_0}{2} \\beta_+^{-1\/2} \\left(e_0 \\beta_+^{i\\epsilon} + \\frac{1}{e_0} \\beta_+^{-i\\epsilon}\\right), \\quad \n\\beta \\in \\ensuremath{\\mathbb{C}}^+, 
\\\\[3mm]\n\\displaystyle \\overline{\\mbox{\\boldmath $\\Sigma$}}_{21}^{2-} = \\frac{\\beta_-^{1\/2}}{bd_0} \\sin B^- =\n\\frac{i}{2bd_0} \\beta_-^{1\/2} \\left(\\frac{1}{e_0} \\beta_-^{i\\epsilon} - e_0 \\beta_-^{-i\\epsilon}\\right), \\quad \n\\beta \\in \\ensuremath{\\mathbb{C}}^-, \\\\[3mm]\n\\displaystyle \\overline{\\mbox{\\boldmath $\\Sigma$}}_{22}^{2-} = -\\frac{\\beta_-^{1\/2}}{bd_0} \\cos B^- =\n- \\frac{1}{2bd_0} \\beta_-^{1\/2} \\left(\\frac{1}{e_0} \\beta_-^{i\\epsilon} + e_0 \\beta_-^{-i\\epsilon}\\right), \\quad \n\\beta \\in \\ensuremath{\\mathbb{C}}^-.\n\\end{array}\n\\end{equation}\nIn order to use a compact notation in the sequel of the paper, we collect the weight function components together with components of the corresponding tractions in \nmatrices as follows\n$$\n\\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^+ = \n\\left[\n\\begin{array}{cc}\n\\jump{0.15}{\\overline{U}_1^1}^+ & \\jump{0.15}{\\overline{U}_1^2}^+ \\\\[3mm]\n\\jump{0.15}{\\overline{U}_2^1}^+ & \\jump{0.15}{\\overline{U}_2^2}^+ \n\\end{array}\n\\right], \\quad\n\\overline{\\mbox{\\boldmath $\\Sigma$}}^- = \n\\left[\n\\begin{array}{cc}\n\\overline{\\Sigma}_{21}^{1-} & \\overline{\\Sigma}_{21}^{2-} \\\\[3mm]\n\\overline{\\Sigma}_{22}^{1-} & \\overline{\\Sigma}_{22}^{2-} \n\\end{array}\n\\right].\n$$\n\nNote that the limit values of the functions $\\beta_+^{\\pm i\\epsilon}$, as $\\mathop{\\mathrm{Im}}(\\beta) \\to 0^+$, are\n$$\n\\beta_+^{i\\epsilon} = \n\\left\\{ \n\\begin{array}{ll} \n\\displaystyle \\beta^{i\\epsilon}, & \\beta>0, \\\\[3mm]\n\\displaystyle \\frac{(-\\beta)^{i\\epsilon}}{e_0^2}, & \\beta<0,\n\\end{array} \n\\right. 
\\qquad\n\\beta_+^{-i\\epsilon} = \n\\left\\{ \n\\begin{array}{ll} \n\\displaystyle \\beta^{-i\\epsilon}, & \\beta>0, \\\\[3mm]\n\\displaystyle (-\\beta)^{-i\\epsilon} e_0^2, & \\beta<0,\n\\end{array} \n\\right.\n$$\nwhereas the limit values of the functions $\\beta_-^{\\pm i\\epsilon}$, as $\\mathop{\\mathrm{Im}}(\\beta) \\to 0^-$, are\n$$\n\\beta_-^{i\\epsilon} = \n\\left\\{ \n\\begin{array}{ll} \n\\displaystyle \\beta^{i\\epsilon}, & \\beta>0, \\\\[3mm]\n\\displaystyle (-\\beta)^{i\\epsilon} e_0^2, & \\beta<0,\n\\end{array} \n\\right. \\qquad\n\\beta_-^{-i\\epsilon} = \n\\left\\{ \n\\begin{array}{ll} \n\\displaystyle \\beta^{-i\\epsilon}, & \\beta>0, \\\\[3mm]\n\\displaystyle \\frac{(-\\beta)^{-i\\epsilon}}{e_0^2}, & \\beta<0.\n\\end{array} \n\\right.\n$$\n\nIt is clear that the choice of a basis of two linearly independent singular solutions is not unique. Bercial-Velez {\\em et al.}\\ (2005) provided another set \nof linearly independent weight functions, namely\n$$\n\\begin{array}{l}\n\\displaystyle \\jump{0.15}{\\overline{\\ensuremath{\\mathcal{U}}}_1^1}^+ = \n\\frac{1}{2d_0^2} \\beta_+^{-1\/2} \n\\left(-\\frac{e_0 \\beta_+^{i\\epsilon}}{c_1^-} + \\frac{\\beta_+^{-i\\epsilon}}{e_0 c_1^+}\\right), \\quad \n\\beta \\in \\ensuremath{\\mathbb{C}}^+, \\\\[3mm]\n\\displaystyle \\jump{0.15}{\\overline{\\ensuremath{\\mathcal{U}}}_2^1}^+ = \n-\\frac{i}{2d_0^2} \\beta_+^{-1\/2} \n\\left(\\frac{e_0 \\beta_+^{i\\epsilon}}{c_1^-} + \\frac{\\beta_+^{-i\\epsilon}}{e_0 c_1^+}\\right), \\quad \n\\beta \\in \\ensuremath{\\mathbb{C}}^+, \\\\[3mm]\n\\displaystyle \\jump{0.15}{\\overline{\\ensuremath{\\mathcal{U}}}_1^2}^+ = -\\jump{0.15}{\\overline{\\ensuremath{\\mathcal{U}}}_2^1}^+, \\quad \\beta \\in \\ensuremath{\\mathbb{C}}^+, \\\\[3mm]\n\\displaystyle \\jump{0.15}{\\overline{\\ensuremath{\\mathcal{U}}}_2^2}^+ = \\jump{0.15}{\\overline{\\ensuremath{\\mathcal{U}}}_1^1}^+, \\quad \\beta \\in \\ensuremath{\\mathbb{C}}^+.\n\\end{array}\n$$\nIt can be shown that these weight 
functions are linear combinations of (\\ref{first}) and (\\ref{second}),\n$$\n\\left[\n\\begin{array}{cc}\n\\jump{0.15}{\\overline{\\ensuremath{\\mathcal{U}}}_1^1}^+ & \\jump{0.15}{\\overline{\\ensuremath{\\mathcal{U}}}_1^2}^+ \\\\[3mm]\n\\jump{0.15}{\\overline{\\ensuremath{\\mathcal{U}}}_2^1}^+ & \\jump{0.15}{\\overline{\\ensuremath{\\mathcal{U}}}_2^2}^+ \n\\end{array}\n\\right] = \\frac{1}{2 c_1^+ c_1^- d_0^3}\n\\left[\n\\begin{array}{cc}\n\\jump{0.15}{\\overline{U}_1^1}^+ & \\jump{0.15}{\\overline{U}_1^2}^+ \\\\[3mm]\n\\jump{0.15}{\\overline{U}_2^1}^+ & \\jump{0.15}{\\overline{U}_2^2}^+ \n\\end{array}\n\\right] \n\\left[\n\\begin{array}{cc}\n\\displaystyle -c_1^+ + c_1^- & \\displaystyle i (c_1^+ + c_1^-) \\\\[3mm]\n\\displaystyle -i (c_1^+ + c_1^-) & \\displaystyle -c_1^+ + c_1^- \n\\end{array}\n\\right].\n$$\n\n\\subsection{The half-plane problem and full representation of weight functions}\nIn this section, we will construct the full representation of singular solutions for the whole plane. This is needed in order to compute the skew-symmetric weight \nfunction matrix $\\langle \\mbox{\\boldmath $U$} \\rangle$. The complete singular solutions can be constructed by solving a boundary value problem for a semi-infinite half-plane subjected \nto traction boundary conditions at its boundary. \n\nLet us consider first the lower half-plane. A plane strain elasticity problem is usually solved by means of the Airy function. Introducing the Fourier transform with \nrespect to the $x_1$ variable, the problem is most easily solved considering the stress component $\\overline{\\sigma}_{22}$ as the primary unknown function, so that the \nproblem reduces to the following ordinary differential equation\n$$\n\\overline{\\sigma}_{22}'''' - 2\\beta^2 \\overline{\\sigma}_{22}'' + \\beta^4 \\overline{\\sigma}_{22} = 0,\n$$\nwhere a prime denotes derivative with respect to $x_2$. 
The general solution is then \n$$\n\\overline{\\sigma}_{22}(\\beta,x_2) = (A_2 + x_2 B_2) e^{|\\beta| x_2}, \\quad\n\\overline{\\sigma}_{11}(\\beta,x_2) = -\\frac{1}{\\beta^2} \\overline{\\sigma}_{22}'', \\quad\n\\overline{\\sigma}_{21}(\\beta,x_2) = -\\frac{i}{\\beta} \\overline{\\sigma}_{22}', \\quad \\beta \\in \\ensuremath{\\mathbb{R}}.\n$$\nCorrespondingly, the Fourier transforms of the displacement components are \n$$\n\\overline{u}_{1}(\\beta,x_2) = -\\frac{i}{2\\mu_- \\beta} \n\\{ \\overline{\\sigma}_{22} - (1-\\nu_-)\\overline{\\sigma}_0 \\}, \\quad\n\\overline{u}_{2}(\\beta,x_2) = \\frac{1}{2\\mu_- \\beta^2} \n\\{ \\overline{\\sigma}_{22}' + (1-\\nu_-)\\overline{\\sigma}_0' \\}, \\quad \\beta \\in \\ensuremath{\\mathbb{R}},\n$$\nwhere $\\overline{\\sigma}_0 = \\overline{\\sigma}_{11} + \\overline{\\sigma}_{22}$. The boundary conditions along the boundary $x_2 = 0^-$ are defined by\n$$\n\\overline{\\sigma}_{22}(\\beta,x_2 = 0^-) = \\overline{\\Sigma}_{22}^-(\\beta), \\quad\n\\overline{\\sigma}_{21}(\\beta,x_2 = 0^-) = \\overline{\\Sigma}_{21}^-(\\beta), \\quad \\beta \\in \\ensuremath{\\mathbb{R}},\n$$\nwhere $\\Sigma_{22}^-,\\Sigma_{21}^-$ are the tractions along the interface derived in the previous section. 
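Before applying these boundary conditions further, one can verify that the general solution above indeed satisfies the fourth-order equation for $\overline{\sigma}_{22}$; a finite-difference sketch with arbitrary (hypothetical) constants:

```python
import numpy as np

beta, A2, B2 = 1.3, 0.8, -0.5   # hypothetical values of the transform variable and constants
k = abs(beta)

def sigma22(x2):
    # general solution for the lower half-plane (decays as x2 -> -infinity)
    return (A2 + x2 * B2) * np.exp(k * x2)

# central finite differences for the 2nd and 4th derivatives
h, x = 1e-2, -0.7
f = np.array([sigma22(x + n * h) for n in range(-2, 3)])
d2 = (f[1] - 2 * f[2] + f[3]) / h ** 2
d4 = (f[0] - 4 * f[1] + 6 * f[2] - 4 * f[3] + f[4]) / h ** 4
# residual of sigma'''' - 2 beta^2 sigma'' + beta^4 sigma should vanish
residual = d4 - 2 * k ** 2 * d2 + k ** 4 * sigma22(x)
assert abs(residual) < 1e-3
```

Replacing $k$ by $-k$ gives the decaying solution for the upper half-plane, consistent with the substitution rule stated below.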
It follows that\n$$\n\\overline{\\sigma}_{22}(\\beta,x_2 = 0^-) = A_2 = \\overline{\\Sigma}_{22}^-(\\beta), \\quad\n\\overline{\\sigma}_{21}(\\beta,x_2 = 0^-) = -\\frac{i}{\\beta} (A_2|\\beta| + B_2) = \n\\overline{\\Sigma}_{21}^-(\\beta), \\quad \\beta \\in \\ensuremath{\\mathbb{R}},\n$$\nand thus\n$$\nA_2 = \\overline{\\Sigma}_{22}^-, \\quad\nB_2 = i\\beta \\overline{\\Sigma}_{21}^- - |\\beta| \\overline{\\Sigma}_{22}^-, \\quad \\beta \\in \\ensuremath{\\mathbb{R}}.\n$$\nThe full representations of Fourier transforms (with respect to $x_1$) of the required singular displacements and the corresponding components of stress are\n$$\n\\begin{array}{l}\n\\overline{\\sigma}_{22}(\\beta,x_2) = \\{ i\\beta x_2 \\overline{\\Sigma}_{21}^- \n+ (1 - |\\beta|x_2) \\overline{\\Sigma}_{22}^- \\} e^{|\\beta|x_2}, \\\\[3mm]\n\\overline{\\sigma}_{11}(\\beta,x_2) = \\{ -i(2\\mathop{\\mathrm{sign}}(\\beta) + \\beta x_2) \\overline{\\Sigma}_{21}^- \n+ (1 + |\\beta|x_2) \\overline{\\Sigma}_{22}^- \\} e^{|\\beta|x_2}, \\\\[3mm]\n\\overline{\\sigma}_{21}(\\beta,x_2) = \\{ (1 + |\\beta|x_2) \\overline{\\Sigma}_{21}^- \n+ i\\beta x_2 \\overline{\\Sigma}_{22}^- \\} e^{|\\beta|x_2}.\n\\end{array}\n$$\n$$\n\\begin{array}{l}\n\\displaystyle\n\\overline{u}_{1}(\\beta,x_2) = \\frac{1}{2\\mu_-} \\left\\{ \\left[x_2 + \\frac{2(1-\\nu_-)}{|\\beta|}\\right] \n\\overline{\\Sigma}_{21}^- \n+ i\\left[\\mathop{\\mathrm{sign}}(\\beta)x_2 + \\frac{1 - 2\\nu_-}{\\beta}\\right] \n\\overline{\\Sigma}_{22}^- \\right\\} e^{|\\beta|x_2}, \\\\[3mm]\n\\displaystyle\n\\overline{u}_{2}(\\beta,x_2) = \\frac{1}{2\\mu_-} \n\\left\\{ i\\left[\\mathop{\\mathrm{sign}}(\\beta) x_2 - \\frac{1-2\\nu_-}{\\beta} \\right] \n\\overline{\\Sigma}_{21}^- \n+ \\left[\\frac{2(1-\\nu_-)}{|\\beta|} - x_2\\right] \n\\overline{\\Sigma}_{22}^- \\right\\} e^{|\\beta|x_2}.\n\\end{array}\n$$\n\nFor the upper half-plane, we find the same equations, subject to replacing $|\\beta|$ with $-|\\beta|$, $\\mu_-$ with $\\mu_+$ and $\\nu_-$ with 
$\nu_+$.\n\n\subsection{New results for skew-symmetric weight functions}\n\subsubsection{The Fourier transform representations}\nIt is now possible to derive the jump and average of Fourier transforms of the singular displacement functions across the plane containing the crack. The traces on the \nplane containing the crack are given by\n$$\n\begin{array}{l}\n\overline{u}_{1}(\beta,x_2 = 0^+) = \n\left[\n\begin{array}{ll}\n\displaystyle -\frac{1-\nu_+}{\mu_+ |\beta|}, & \displaystyle \frac{i(1-2\nu_+)}{2\mu_+ \beta}\n\end{array}\n\right]\n\left[\n\begin{array}{l}\n\overline{\Sigma}_{21}^- \\[3mm]\n\overline{\Sigma}_{22}^-\n\end{array}\n\right], \\[3mm]\n\overline{u}_{2}(\beta,x_2 = 0^+) = \n\left[\n\begin{array}{ll}\n\displaystyle -\frac{i(1-2\nu_+)}{2\mu_+ \beta}, & \displaystyle -\frac{1-\nu_+}{\mu_+ |\beta|}\n\end{array}\n\right]\n\left[\n\begin{array}{l}\n\overline{\Sigma}_{21}^- \\[3mm]\n\overline{\Sigma}_{22}^-\n\end{array}\n\right], \\[3mm]\n\overline{u}_{1}(\beta,x_2 = 0^-) = \n\left[\n\begin{array}{ll}\n\displaystyle \frac{1-\nu_-}{\mu_- |\beta|}, & \displaystyle \frac{i(1-2\nu_-)}{2\mu_- \beta}\n\end{array}\n\right]\n\left[\n\begin{array}{l}\n\overline{\Sigma}_{21}^- \\[3mm]\n\overline{\Sigma}_{22}^-\n\end{array}\n\right], \\[3mm]\n\overline{u}_{2}(\beta,x_2 = 0^-) = \n\left[\n\begin{array}{ll}\n\displaystyle -\frac{i(1-2\nu_-)}{2\mu_- \beta}, & \displaystyle \frac{1-\nu_-}{\mu_- |\beta|}\n\end{array}\n\right]\n\left[\n\begin{array}{l}\n\overline{\Sigma}_{21}^- \\[3mm]\n\overline{\Sigma}_{22}^-\n\end{array}\n\right],\n\end{array}\n$$\nso that we obtain, in matrix form,\n$$\n\left[\n\begin{array}{cc}\n\jump{0.15}{\overline{U}_1^1}^+ & \jump{0.15}{\overline{U}_1^2}^+ \\[3mm]\n\jump{0.15}{\overline{U}_2^1}^+ & 
\\jump{0.15}{\\overline{U}_2^2}^+\n\\end{array}\n\\right] = -\\frac{b}{|\\beta|} \n\\left[\n\\begin{array}{cc}\n1 & -i\\mathop{\\mathrm{sign}}(\\beta) d_* \\\\[3mm]\ni\\mathop{\\mathrm{sign}}(\\beta) d_* & 1\n\\end{array}\n\\right]\n\\left[\n\\begin{array}{cc}\n\\overline{\\Sigma}_{21}^{1-} & \\overline{\\Sigma}_{21}^{2-} \\\\[3mm]\n\\overline{\\Sigma}_{22}^{1-} & \\overline{\\Sigma}_{22}^{2-}\n\\end{array}\n\\right], \\quad \\beta \\in \\ensuremath{\\mathbb{R}},\n$$\nand\n$$\n\\left[\n\\begin{array}{cc}\n\\langle\\overline{U}_1^1\\rangle & \\langle\\overline{U}_1^2\\rangle \\\\[3mm]\n\\langle\\overline{U}_2^1\\rangle & \\langle\\overline{U}_2^2\\rangle\n\\end{array}\n\\right] = -\\frac{b\\alpha}{2|\\beta|} \n\\left[\n\\begin{array}{cc}\n1 & -i\\mathop{\\mathrm{sign}}(\\beta) \\gamma_* \\\\[3mm]\ni\\mathop{\\mathrm{sign}}(\\beta) \\gamma_* & 1\n\\end{array}\n\\right]\n\\left[\n\\begin{array}{cc}\n\\overline{\\Sigma}_{21}^{1-} & \\overline{\\Sigma}_{21}^{2-} \\\\[3mm]\n\\overline{\\Sigma}_{22}^{1-} & \\overline{\\Sigma}_{22}^{2-}\n\\end{array}\n\\right], \\quad \\beta \\in \\ensuremath{\\mathbb{R}},\n$$\nwhere $\\gamma_*$ is a material parameter,\n$$\n\\gamma_* = \\frac{\\mu_-(1 - 2\\nu_+) + \\mu_+(1 - 2\\nu_-)}{2\\mu_-(1 - \\nu_+) - 2\\mu_+(1 - \\nu_-)}.\n$$\n\nNote that the above equations can be rewritten as\n$$\n\\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^+ = \n-\\frac{b}{|\\beta|} \\left[\\begin{array}{cc} 1 & 0 \\\\[3mm] 0 & 1 \\end{array}\\right] \\overline{\\mbox{\\boldmath $\\Sigma$}}^- - \n\\frac{i b d_*}{\\beta} \\left[\\begin{array}{cc} 0 & -1 \\\\[3mm] 1 & 0 \\end{array}\\right] \\overline{\\mbox{\\boldmath $\\Sigma$}}^-, \\quad\n\\beta \\in \\ensuremath{\\mathbb{R}},\n$$\n$$\n\\langle\\overline{\\mbox{\\boldmath $U$}}\\rangle = \n-\\frac{\\alpha b}{2|\\beta|} \\left[\\begin{array}{cc} 1 & 0 \\\\[3mm] 0 & 1 \\end{array}\\right] \\overline{\\mbox{\\boldmath $\\Sigma$}}^- - \n\\frac{i \\alpha b \\gamma_*}{2\\beta} \\left[\\begin{array}{cc} 0 & -1 
\\\\[3mm] 1 & 0 \\end{array}\\right]\n\\overline{\\mbox{\\boldmath $\\Sigma$}}^-, \\quad \\beta \\in \\ensuremath{\\mathbb{R}},\n$$\nso that we can derive the decomposition of $\\langle\\overline{\\mbox{\\boldmath $U$}}\\rangle$ in the sum of \"$+$\" and \"$-$\" functions as follows\n\\begin{equation}\n\\label{average}\n\\langle\\overline{\\mbox{\\boldmath $U$}}\\rangle = \n\\frac{\\alpha}{2} \\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^+ + \\alpha(d_* - \\gamma_*) \n\\frac{i b}{2\\beta} \\left[\\begin{array}{cc} 0 & -1 \\\\[3mm] 1 & 0 \\end{array}\\right] \n\\overline{\\mbox{\\boldmath $\\Sigma$}}^-, \\quad \\beta \\in \\ensuremath{\\mathbb{R}},\n\\end{equation}\nwhere the last term on the right-hand side is a \"$-$\" function.\n\n\\subsubsection{The weight functions - Fourier inversion}\n\\label{weight_f}\nAfter the inversion of the corresponding Fourier transforms, the weight functions, which will be needed for the computation of stress intensity factors, are as follows. \nThe symmetric weight function matrix $\\jump{0.15}{\\mbox{\\boldmath $U$}}(x_1)$ is equal to 0 for $x_1 < 0$, whereas for $x_1 > 0$ it is given by \n\\begin{equation}\n\\label{jumpu}\n\\begin{array}{l}\n\\displaystyle\n\\jump{0.15}{U_1^1}(x_1) = \\frac{x_1^{-1\/2}}{2d_0 \\sqrt{2\\pi}} \n\\left( \\frac{x_1^{-i\\epsilon}}{c_1^+} + \\frac{x_1^{i\\epsilon}}{c_1^-} \\right), \\\\[5mm]\n\\displaystyle\n\\jump{0.15}{U_2^1}(x_1) = \\frac{i x_1^{-1\/2}}{2d_0 \\sqrt{2\\pi}} \n\\left( \\frac{x_1^{-i\\epsilon}}{c_1^+} - \\frac{x_1^{i\\epsilon}}{c_1^-} \\right), \\\\[5mm]\n\\displaystyle \n\\jump{0.15}{U_1^2}(x_1) = -\\jump{0.15}{U_2^1}(x_1), \\\\[5mm]\n\\jump{0.15}{U_2^2}(x_1) = \\jump{0.15}{U_1^1}(x_1).\n\\end{array}\n\\end{equation}\nThe skew-symmetric weight function matrix $\\langle\\mbox{\\boldmath $U$}\\rangle(x_1)$ is equal to $\\displaystyle \\frac{\\alpha}{2}\\jump{0.15}{\\mbox{\\boldmath $U$}}(x_1)$ for $x_1 > 0$, whereas for $x_1 < 0$ it is \ngiven by 
\n\\begin{equation}\n\\label{meanu}\n\\begin{array}{l}\n\\displaystyle\n\\langle U_1^1 \\rangle(x_1) = -\\frac{i\\alpha(d_*-\\gamma_*)(-x_1)^{-1\/2}}{4d_0^3 \\sqrt{2\\pi}}\n\\left[ \\frac{(-x_1)^{-i\\epsilon}}{c_1^+} - \\frac{(-x_1)^{i\\epsilon}}{c_1^-} \\right], \\\\[3mm]\n\\displaystyle\n\\langle U_2^1 \\rangle(x_1) = \\frac{\\alpha(d_*-\\gamma_*)(-x_1)^{-1\/2}}{4d_0^3 \\sqrt{2\\pi}}\n\\left[ \\frac{(-x_1)^{-i\\epsilon}}{c_1^+} + \\frac{(-x_1)^{i\\epsilon}}{c_1^-} \\right], \\\\[3mm]\n\\displaystyle \n\\langle U_1^2 \\rangle(x_1) = -\\langle U_2^1 \\rangle(x_1), \\\\[3mm]\n\\langle U_2^2 \\rangle(x_1) = \\langle U_1^1 \\rangle(x_1).\n\\end{array}\n\\end{equation}\nThe function $\\mbox{\\boldmath $\\Sigma$}(x_1)$ is equal to 0 for $x_1 > 0$, whereas for $x_1 < 0$ it is given by \n$$\n\\begin{array}{l}\n\\displaystyle\n\\Sigma_{21}^1(x_1) = \\frac{(-x_1)^{-3\/2}}{2bd_0^3 \\sqrt{2\\pi}} \n\\left[ \\frac{1\/2 + i\\epsilon}{c_1^+}(-x_1)^{-i\\epsilon} + \\frac{1\/2 - i\\epsilon}{c_1^-}(-x_1)^{i\\epsilon} \\right], \\\\[3mm]\n\\displaystyle\n\\Sigma_{22}^1(x_1) = \\frac{i(-x_1)^{-3\/2}}{2bd_0^3 \\sqrt{2\\pi}} \n\\left[ \\frac{1\/2 + i\\epsilon}{c_1^+}(-x_1)^{-i\\epsilon} - \\frac{1\/2 - i\\epsilon}{c_1^-}(-x_1)^{i\\epsilon} \\right], \\\\[3mm]\n\\displaystyle \n\\Sigma_{21}^2(x_1) = -\\Sigma_{22}^1(x_1), \\\\[3mm]\n\\Sigma_{22}^2(x_1) = \\Sigma_{21}^1(x_1).\n\\end{array}\n$$\n\nIt will be shown in the sequel of the text that we only need the inverse transform of $\\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^+$ (see (\\ref{average})) in \norder to compute the stress intensity factors for both symmetric and skew-symmetric loading.\n\n\\section{The Betti identity and evaluation of the stress intensity factors}\n\\label{identity}\nHere we develop a general procedure for the evaluation of coefficients in the asymptotics of the stress components near the crack tip. This includes the stress intensity \nfactors as well as high-order asymptotics. 
In particular, the coefficients of the higher-order terms require appropriate weight functions, which are obtained by differentiation \nof the weight functions $\jump{0.15}{\mbox{\boldmath $U$}}$ and $\langle \mbox{\boldmath $U$} \rangle$ along the crack.\n\n\subsection{The fundamental Betti identity and its equivalent representation in terms of Fourier transforms}\nIn this section, the notations $\mbox{\boldmath $u$} = [u_1, u_2]^\top$ and $\mbox{\boldmath $\sigma$} = [\sigma_{21},\sigma_{22}]^\top$ will be used for the physical displacement and traction fields discussed \nin Section \ref{prel_analysis}. The notations $\mbox{\boldmath $U$} = [U_1,U_2]^\top$ and $\mbox{\boldmath $\Sigma$} = [\Sigma_{21},\Sigma_{22}]^\top$ will apply to the auxiliary singular displacements and the \ncorresponding tractions obtained in Section \ref{weight_f}. We note that $\mbox{\boldmath $U$}$ is discontinuous along the positive semi-axis $x_1 > 0$, whereas $\mbox{\boldmath $u$}$ is discontinuous \nalong the negative semi-axis $x_1 < 0$.\n\nAs in Willis and Movchan (1995) and Piccolroaz {\em et al.}\ (2007), we can apply the Betti formula to the physical fields and to the weight functions in order to \nevaluate the coefficients in the asymptotics near the crack tip. 
In particular, applying the Betti identity to the upper half-plane and lower half-plane, we obtain\n$$\n\int_{(x_2 = 0^+)} \left\{ \mbox{\boldmath $U$}^\top(x_1'-x_1,0^+) \mbox{\boldmath $R$} \mbox{\boldmath $\sigma$}(x_1,0^+) - \mbox{\boldmath $\Sigma$}^\top(x_1'-x_1,0^+) \mbox{\boldmath $R$} \mbox{\boldmath $u$}(x_1,0^+) \right\} dx_1 = 0,\n$$\nand\n$$\n\int_{(x_2 = 0^-)} \left\{ \mbox{\boldmath $U$}^\top(x_1'-x_1,0^-) \mbox{\boldmath $R$} \mbox{\boldmath $\sigma$}(x_1,0^-) - \mbox{\boldmath $\Sigma$}^\top(x_1'-x_1,0^-) \mbox{\boldmath $R$} \mbox{\boldmath $u$}(x_1,0^-) \right\} dx_1 = 0,\n$$\nrespectively, where \n\begin{equation}\n\label{erre}\n\mbox{\boldmath $R$} = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}\n\end{equation}\nis a reflection matrix. Then, subtracting one from the other, we arrive at\n$$\n\begin{array}{l}\n\displaystyle\n\int_{(x_2 = 0)} \left\{ \mbox{\boldmath $U$}^\top(x_1'-x_1,0^+) \mbox{\boldmath $R$} \mbox{\boldmath $\sigma$}(x_1,0^+) - \mbox{\boldmath $U$}^\top(x_1'-x_1,0^-) \mbox{\boldmath $R$} \mbox{\boldmath $\sigma$}(x_1,0^-) \right. \\[3mm]\n\displaystyle \hspace{30mm}\n\left. 
- \\left[\\mbox{\\boldmath $\\Sigma$}^\\top(x_1'-x_1,0^+) \\mbox{\\boldmath $R$} \\mbox{\\boldmath $u$}(x_1,0^+) - \\mbox{\\boldmath $\\Sigma$}^\\top(x_1'-x_1,0^-) \\mbox{\\boldmath $R$} \\mbox{\\boldmath $u$}(x_1,0^-)\\right] \\right\\} dx_1 = 0.\n\\end{array}\n$$\nThe traction components acting on the $x_1$ axis can be written as\n$$\n\\mbox{\\boldmath $\\sigma$}(x_1,0^+) = \\mbox{\\boldmath $p$}^+(x_1) + \\mbox{\\boldmath $\\sigma$}^{(+)}(x_1,0), \\quad \\mbox{\\boldmath $\\sigma$}(x_1,0^-) = \\mbox{\\boldmath $p$}^-(x_1) + \\mbox{\\boldmath $\\sigma$}^{(+)}(x_1,0), \n$$\nwhere $\\mbox{\\boldmath $p$}^+(x_1) = \\mbox{\\boldmath $\\sigma$}(x_1,0^+) H(-x_1)$, $\\mbox{\\boldmath $p$}^-(x_1) = \\mbox{\\boldmath $\\sigma$}(x_1,0^-) H(-x_1)$ is the loading acting on the upper and lower crack faces, respectively, and \n$\\mbox{\\boldmath $\\sigma$}^{(+)}(x_1,0)$ is the traction field ahead of the crack tip, with $H(x_1)$ being the unit-step Heaviside function. Consequently, the integral identity can be \nwritten as\n$$\n\\begin{array}{l}\n\\displaystyle\n\\int_{(x_2 = 0)} \\Big\\{ \\mbox{\\boldmath $U$}^\\top(x_1'-x_1,0^+) \\mbox{\\boldmath $R$} \\mbox{\\boldmath $p$}^+(x_1) + \\mbox{\\boldmath $U$}^\\top(x_1'-x_1,0^+) \\mbox{\\boldmath $R$} \\mbox{\\boldmath $\\sigma$}^{(+)}(x_1,0) \\\\[3mm]\n\\displaystyle \\hspace{15mm}\n- \\mbox{\\boldmath $U$}^\\top(x_1'-x_1,0^-) \\mbox{\\boldmath $R$} \\mbox{\\boldmath $p$}^-(x_1) - \\mbox{\\boldmath $U$}^\\top(x_1'-x_1,0^-) \\mbox{\\boldmath $R$} \\mbox{\\boldmath $\\sigma$}^{(+)}(x_1,0) \\\\[3mm]\n\\displaystyle \\hspace{30mm}\n- \\left[\\mbox{\\boldmath $\\Sigma$}^\\top(x_1'-x_1,0^+) \\mbox{\\boldmath $R$} \\mbox{\\boldmath $u$}(x_1,0^+) - \\mbox{\\boldmath $\\Sigma$}^\\top(x_1'-x_1,0^-) \\mbox{\\boldmath $R$} \\mbox{\\boldmath $u$}(x_1,0^-)\\right] \\Big\\} dx_1 = 0.\n\\end{array}\n$$\nFrom the continuity of $\\mbox{\\boldmath $\\sigma$}^{(+)}$ and $\\mbox{\\boldmath $\\Sigma$}$ we 
get\n$$\n\\begin{array}{l}\n\\displaystyle\n\\int_{(x_2 = 0)} \\Big\\{ \\jump{0.15}{\\mbox{\\boldmath $U$}}^\\top(x_1'-x_1) \\mbox{\\boldmath $R$} \\mbox{\\boldmath $\\sigma$}^{(+)}(x_1,0) - \\mbox{\\boldmath $\\Sigma$}^\\top(x_1'-x_1,0) \\mbox{\\boldmath $R$} \\jump{0.15}{\\mbox{\\boldmath $u$}}(x_1) \\Big\\} dx_1 = \\\\[3mm]\n\\displaystyle \\hspace{30mm}\n- \\int_{(x_2 = 0)} \\Big\\{ \\mbox{\\boldmath $U$}^\\top(x_1'-x_1,0^+) \\mbox{\\boldmath $R$} \\mbox{\\boldmath $p$}^+(x_1) - \\mbox{\\boldmath $U$}^\\top(x_1'-x_1,0^-) \\mbox{\\boldmath $R$} \\mbox{\\boldmath $p$}^-(x_1) \\Big\\} dx_1.\n\\end{array}\n$$\nIntroducing the symmetric and skew-symmetric parts of the loading (\\ref{parts}), we finally obtain\n\\begin{equation}\n\\label{betti}\n\\begin{array}{l}\n\\displaystyle\n\\int_{(x_2 = 0)} \\Big\\{ \\jump{0.15}{\\mbox{\\boldmath $U$}}^\\top(x_1'-x_1) \\mbox{\\boldmath $R$} \\mbox{\\boldmath $\\sigma$}^{(+)}(x_1,0) - \\mbox{\\boldmath $\\Sigma$}^\\top(x_1'-x_1,0) \\mbox{\\boldmath $R$} \\jump{0.15}{\\mbox{\\boldmath $u$}}(x_1) \\Big\\} dx_1 = \\\\[3mm]\n\\displaystyle \\hspace{30mm}\n- \\int_{(x_2 = 0)} \\Big\\{ \\jump{0.15}{\\mbox{\\boldmath $U$}}^\\top(x_1'-x_1) \\mbox{\\boldmath $R$} \\langle \\mbox{\\boldmath $p$} \\rangle(x_1) + \\langle \\mbox{\\boldmath $U$} \\rangle^\\top(x_1'-x_1) \\mbox{\\boldmath $R$} \\jump{0.15}{\\mbox{\\boldmath $p$}}(x_1) \\Big\\} dx_1.\n\\end{array}\n\\end{equation}\nTaking the Fourier transform of (\\ref{betti}) we derive\n\\begin{equation}\n\\label{betti2}\n\\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top} \\mbox{\\boldmath $R$} \\overline{\\mbox{\\boldmath $\\sigma$}}^+ - \\overline{\\mbox{\\boldmath $\\Sigma$}}^{-\\top} \\mbox{\\boldmath $R$} \\jump{0.15}{\\overline{\\mbox{\\boldmath $u$}}}^- = \n- \\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top} \\mbox{\\boldmath $R$} \\langle \\overline{\\mbox{\\boldmath $p$}} \\rangle - \\langle \\overline{\\mbox{\\boldmath $U$}} \\rangle^\\top \\mbox{\\boldmath $R$} 
\\jump{0.15}{\\overline{\\mbox{\\boldmath $p$}}}, \\quad \\beta \\in \\ensuremath{\\mathbb{R}}.\n\\end{equation}\n\n\\subsection{Stress intensity factors and high-order terms coefficients}\nIn this section, we will derive the stress intensity factors as well as high-order terms from the Betti formula \\eq{betti2} obtained in the previous section. The complex \nstress intensity factor will be denoted by $K = K_\\text{I} + i K_\\text{II}$, and analogous notations will be used for high-order terms: $A = A_\\text{I} + i A_\\text{II}$, \n$B = B_\\text{I} + i B_\\text{II}$.\n\nThe asymptotics of the physical traction field as $x_1 \\to 0^+$ are as follows\n$$\n\\mbox{\\boldmath $\\sigma$}^{(+)}(x_1) = \\frac{x_1^{-1\/2}}{2\\sqrt{2\\pi}} \\mbox{\\boldmath $\\mathcal{S}$}(x_1) \\mbox{\\boldmath $K$} + \\frac{x_1^{1\/2}}{2\\sqrt{2\\pi}} \\mbox{\\boldmath $\\mathcal{S}$}(x_1) \\mbox{\\boldmath $A$} + \\frac{x_1^{3\/2}}{2\\sqrt{2\\pi}} \\mbox{\\boldmath $\\mathcal{S}$}(x_1) \\mbox{\\boldmath $B$} + O(x_1^{5\/2}),\n$$\nwhere \n$$\n\\mbox{\\boldmath $\\mathcal{S}$}(x_1) = \n\\left[\\begin{array}{ll}\n\\displaystyle\n-i x_1^{i\\epsilon} & i x_1^{-i\\epsilon} \\\\[3mm]\n\\displaystyle\nx_1^{i\\epsilon} & x_1^{-i\\epsilon}\n\\end{array}\\right],\n$$\nand $\\mbox{\\boldmath $K$} = [K, K^*]^\\top$, $\\mbox{\\boldmath $A$} = [A, A^*]^\\top$, $\\mbox{\\boldmath $B$} = [B, B^*]^\\top$, where the superscript $^*$ denotes conjugation. 
The corresponding Fourier transforms, as \n$\\beta \\to \\infty$, $\\beta \\in \\ensuremath{\\mathbb{C}}^+$, are (compare with Piccolroaz {\\em et al.}, 2007)\n\\begin{equation}\n\\label{stress1}\n\\overline{\\mbox{\\boldmath $\\sigma$}}^+(\\beta) = \\frac{\\beta_+^{-1\/2}}{4} \\mbox{\\boldmath $\\mathcal{T}$}_1(\\beta) \\mbox{\\boldmath $K$} + \\frac{\\beta_+^{-1\/2}}{4\\beta} \\mbox{\\boldmath $\\mathcal{T}$}_2(\\beta) \\mbox{\\boldmath $A$} + \n\\frac{\\beta_+^{-1\/2}}{4\\beta^2} \\mbox{\\boldmath $\\mathcal{T}$}_3(\\beta) \\mbox{\\boldmath $B$} + O(\\beta^{-7\/2}),\n\\end{equation}\nwhere\n\\begin{equation}\n\\mbox{\\boldmath $\\mathcal{T}$}_j(\\beta) = \n\\left[\\begin{array}{ll}\n\\displaystyle \\frac{\\beta_+^{-i\\epsilon}}{c_j^+ e_0} & \\displaystyle -\\frac{e_0 \\beta_+^{i\\epsilon}}{c_j^-} \\\\[5mm]\n\\displaystyle \\frac{i\\beta_+^{-i\\epsilon}}{c_j^+ e_0} & \\displaystyle \\frac{ie_0 \\beta_+^{i\\epsilon}}{c_j^-} \n\\end{array}\\right], \\quad j=1,2,3,\n\\end{equation}\n\\begin{equation}\n\\label{stress2}\nc_1^\\pm = \\frac{(1+i)\\sqrt{\\pi}}{2\\Gamma(1\/2 \\pm i\\epsilon)}, \\quad\nc_2^\\pm = \\frac{(1-i)\\sqrt{\\pi}}{2\\Gamma(3\/2 \\pm i\\epsilon)}, \\quad\nc_3^\\pm = -\\frac{(1+i)\\sqrt{\\pi}}{2\\Gamma(5\/2 \\pm i\\epsilon)},\n\\end{equation}\nand $\\Gamma(\\cdot)$ denotes the Gamma function.\n\nThe components of the displacement physical field as $x_1 \\to 0^-$ have the form\n$$\n\\jump{0.15}{\\mbox{\\boldmath $u$}}(x_1) = \\frac{b d_0^2}{\\sqrt{2\\pi}} (-x_1)^{1\/2} \\mbox{\\boldmath $\\mathcal{U}$}_1(x_1) \\mbox{\\boldmath $K$} -\n\\frac{b d_0^2}{\\sqrt{2\\pi}} (-x_1)^{3\/2} \\mbox{\\boldmath $\\mathcal{U}$}_2(x_1) \\mbox{\\boldmath $A$} + \\frac{b d_0^2}{\\sqrt{2\\pi}} (-x_1)^{5\/2} \\mbox{\\boldmath $\\mathcal{U}$}_3(x_1) \\mbox{\\boldmath $B$} + O[(-x_1)^{7\/2}],\n$$\nwhere\n$$\n\\mbox{\\boldmath $\\mathcal{U}$}_j(x_1) = \n\\left[\\begin{array}{ll}\n\\displaystyle -\\frac{i (-x_1)^{i\\epsilon}}{2j-1 + 2i\\epsilon} & \n\\displaystyle \\frac{i 
(-x_1)^{-i\\epsilon}}{2j-1 - 2i\\epsilon} \\\\[3mm]\n\\displaystyle \\frac{(-x_1)^{i\\epsilon}}{2j-1 + 2i\\epsilon} & \n\\displaystyle \\frac{(-x_1)^{-i\\epsilon}}{2j-1 - 2i\\epsilon}\n\\end{array}\\right], \\quad j=1,2,3.\n$$\n\nThe corresponding Fourier transforms, as $\\beta \\to \\infty$, $\\beta \\in \\ensuremath{\\mathbb{C}}^-$ are (compare with Piccolroaz {\\em et al.}, 2007)\n\\begin{equation}\n\\label{displac}\n\\jump{0.15}{\\overline{\\mbox{\\boldmath $u$}}}^-(\\beta) = -\\frac{bd_0^2}{4\\beta} \\beta_-^{-1\/2} \\mbox{\\boldmath $\\mathcal{V}$}_1(\\beta) \\mbox{\\boldmath $K$} - \\frac{bd_0^2}{4\\beta^2} \\beta_-^{-1\/2} \\mbox{\\boldmath $\\mathcal{V}$}_2(\\beta) \\mbox{\\boldmath $A$} \n- \\frac{bd_0^2}{4\\beta^3} \\beta_-^{-1\/2} \\mbox{\\boldmath $\\mathcal{V}$}_3(\\beta) \\mbox{\\boldmath $B$} + O(\\beta^{-9\/2}),\n\\end{equation}\nwhere\n$$\n\\mbox{\\boldmath $\\mathcal{V}$}_j(\\beta) = \n\\left[\\begin{array}{ll}\n\\displaystyle \\frac{e_0 \\beta_-^{-i\\epsilon}}{c_j^+} & \\displaystyle - \\frac{\\beta_-^{i\\epsilon}}{c_j^-e_0} \\\\[5mm]\n\\displaystyle \\frac{i e_0 \\beta_-^{-i\\epsilon}}{c_j^+} & \\displaystyle \\frac{i \\beta_-^{i\\epsilon}}{c_j^-e_0} \n\\end{array}\\right], \\quad j=1,2,3.\n$$\n\nFrom these asymptotics we can derive estimates of the terms in the LHS of the Betti identity \n\\eq{betti2}, as $\\beta \\to \\infty$,\n\\begin{equation}\n\\label{lhs}\n\\begin{array}{l}\n\\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top} \\mbox{\\boldmath $R$} \\overline{\\mbox{\\boldmath $\\sigma$}}^+ = \n\\beta^{-1} \\mbox{\\boldmath $\\mathcal{M}$}_1 \\mbox{\\boldmath $K$} + \\beta^{-2} \\mbox{\\boldmath $\\mathcal{M}$}_2 \\mbox{\\boldmath $A$} + \\beta^{-3} \\mbox{\\boldmath $\\mathcal{M}$}_3 \\mbox{\\boldmath $B$} + O(\\beta^{-4}), \\quad \\beta \\in \\ensuremath{\\mathbb{C}}^+ \\\\[3mm]\n\\overline{\\mbox{\\boldmath $\\Sigma$}}^{-\\top} \\mbox{\\boldmath $R$} \\jump{0.15}{\\overline{\\mbox{\\boldmath $u$}}}^- = \n\\beta^{-1} 
\\mbox{\\boldmath $\\mathcal{M}$}_1 \\mbox{\\boldmath $K$} + \\beta^{-2} \\mbox{\\boldmath $\\mathcal{M}$}_2 \\mbox{\\boldmath $A$} + \\beta^{-3} \\mbox{\\boldmath $\\mathcal{M}$}_3 \\mbox{\\boldmath $B$} + O(\\beta^{-4}), \\quad \\beta \\in \\ensuremath{\\mathbb{C}}^-\n\\end{array}\n\\end{equation}\nwhere\n\\begin{equation}\n\\label{emme1}\n\\mbox{\\boldmath $\\mathcal{M}$}_j = \\frac{d_0}{4c_j^+c_j^-} \\left[ \\begin{array}{cc} -c_j^- & c_j^+ \\\\[3mm] ic_j^- & ic_j^+ \\end{array} \\right], \\quad j=1,2,3.\n\\end{equation}\n\nLet us introduce the following notation for the RHS of the Betti identity \\eq{betti2}\n$$\n\\mbox{\\boldmath$\\Psi$}(\\beta) = - \\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top}(\\beta) \\mbox{\\boldmath $R$} \\langle \\overline{\\mbox{\\boldmath $p$}} \\rangle(\\beta) \n- \\langle\\overline{\\mbox{\\boldmath $U$}}\\rangle^\\top(\\beta) \\mbox{\\boldmath $R$} \\jump{0.15}{\\overline{\\mbox{\\boldmath $p$}}}(\\beta), \\quad \\beta \\in \\ensuremath{\\mathbb{R}}.\n$$\nNote that for any possible loading, $\\mbox{\\boldmath$\\Psi$}(\\beta) \\to 0$, as $\\beta \\to \\pm \\infty$. In fact, if the loading is given by point forces in terms of the Dirac delta function, \nthen $\\mbox{\\boldmath$\\Psi$}(\\beta) = O(|\\beta|^{-1\/2})$. 
If, for example, $\\mbox{\\boldmath $p$}^\\pm \\in L_1(\\ensuremath{\\mathbb{R}})$ or $(\\mbox{\\boldmath $p$}^\\pm)' \\in L_1(\\ensuremath{\\mathbb{R}})$ then $\\mbox{\\boldmath$\\Psi$}(\\beta) = o(|\\beta|^{-1\/2})$ or \n$\\mbox{\\boldmath$\\Psi$}(\\beta) = o(|\\beta|^{-3\/2})$, respectively.\n\nConsequently, we can split $\\mbox{\\boldmath$\\Psi$}(\\beta)$ in the sum of a plus function and a minus function \n$$\n\\mbox{\\boldmath$\\Psi$}(\\beta) = \\mbox{\\boldmath$\\Psi$}^+(\\beta) - \\mbox{\\boldmath$\\Psi$}^-(\\beta), \\quad \\beta \\in \\ensuremath{\\mathbb{R}},\n$$\nwhere\n$$\n\\mbox{\\boldmath$\\Psi$}^\\pm(\\beta) = \\frac{1}{2\\pi i} \\int_{-\\infty}^{\\infty} \\frac{\\mbox{\\boldmath$\\Psi$}(t)}{t-\\beta} dt, \\quad \n\\beta \\in \\ensuremath{\\mathbb{C}}^\\pm.\n$$\n\nMoreover $\\mbox{\\boldmath$\\Psi$}^\\pm(\\beta) \\sim 1\/\\beta$, $\\beta \\to \\infty$, $\\beta \\in \\ensuremath{\\mathbb{C}}^\\pm$, and, from the estimates (\\ref{lhs}) we can conclude that the inhomogeneous \nWiener-Hopf equation (\\ref{betti2}) has the unique solution \n$$\n\\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top} \\mbox{\\boldmath $R$} \\overline{\\mbox{\\boldmath $\\sigma$}}^+ = \\mbox{\\boldmath$\\Psi$}^+, \\quad \\beta \\in \\ensuremath{\\mathbb{C}}^+ \\quad \\text{and} \\quad \n\\overline{\\mbox{\\boldmath $\\Sigma$}}^{-\\top} \\mbox{\\boldmath $R$} \\jump{0.15}{\\overline{\\mbox{\\boldmath $u$}}}^- = \\mbox{\\boldmath$\\Psi$}^-, \\quad \\beta \\in \\ensuremath{\\mathbb{C}}^-.\n$$\n\nFrom these identities we can extract asymptotic estimates. For this reason, let us first assume that $\\mbox{\\boldmath $p$}^\\pm \\in \\ensuremath{\\mathcal{S}}(\\ensuremath{\\mathbb{R}})$, where $\\ensuremath{\\mathcal{S}}(\\ensuremath{\\mathbb{R}})$ is \nthe Schwartz space of rapidly decreasing functions (Zemanian, 1968), so that \n$\\mbox{\\boldmath$\\Psi$}(\\beta)$ decays exponentially as $\\beta \\to \\pm\\infty$. 
Then we have, for $\\beta \\to \\infty$, $\\beta \\in \\ensuremath{\\mathbb{C}}^\\pm$,\n\\begin{equation}\n\\label{rhs}\n\\mbox{\\boldmath$\\Psi$}^\\pm(\\beta) = \\frac{1}{2\\pi i} \\int_{-\\infty}^{\\infty} \\frac{\\mbox{\\boldmath$\\Psi$}(t)}{t-\\beta} dt = \n-\\beta^{-1} \\frac{1}{2\\pi i} \\int_{-\\infty}^{\\infty} \\mbox{\\boldmath$\\Psi$}(t) dt \n-\\beta^{-2} \\frac{1}{2\\pi i} \\int_{-\\infty}^{\\infty} t \\mbox{\\boldmath$\\Psi$}(t) dt\n+ O(\\beta^{-3}). \n\\end{equation}\n\nComparing corresponding terms in (\\ref{rhs}) and (\\ref{lhs}), we obtain the following formulae for the complex stress intensity factor and the coefficient in the \nsecond-order term\n\\begin{equation}\n\\label{coeff}\n\\begin{array}{l}\n\\displaystyle\n\\mbox{\\boldmath $K$} = \\frac{1}{2\\pi i} \\mbox{\\boldmath $\\mathcal{M}$}_1^{-1}\n\\int_{-\\infty}^{\\infty} \\left\\{\\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top}(t) \\mbox{\\boldmath $R$} \\langle \\overline{\\mbox{\\boldmath $p$}} \\rangle(t) \n+ \\langle\\overline{\\mbox{\\boldmath $U$}}\\rangle^\\top(t) \\mbox{\\boldmath $R$} \\jump{0.15}{\\overline{\\mbox{\\boldmath $p$}}}(t) \\right\\} dt, \\\\[3mm]\n\\displaystyle\n\\mbox{\\boldmath $A$} = \\frac{1}{2\\pi i} \\mbox{\\boldmath $\\mathcal{M}$}_2^{-1}\n\\int_{-\\infty}^{\\infty} t \\left\\{\\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top}(t) \\mbox{\\boldmath $R$} \\langle \\overline{\\mbox{\\boldmath $p$}} \\rangle(t) \n+ \\langle\\overline{\\mbox{\\boldmath $U$}}\\rangle^\\top(t) \\mbox{\\boldmath $R$} \\jump{0.15}{\\overline{\\mbox{\\boldmath $p$}}}(t) \\right\\} dt,\n\\end{array}\n\\end{equation}\nrespectively.\n\nNote that\n$$\n\\begin{array}{l}\n\\displaystyle\n\\frac{1}{2\\pi} \\int_{-\\infty}^{\\infty} \\left\\{\\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top}(t) \\mbox{\\boldmath $R$} \\langle \\overline{\\mbox{\\boldmath $p$}} \\rangle(t) \n+ \\langle\\overline{\\mbox{\\boldmath $U$}}\\rangle^\\top(t) \\mbox{\\boldmath $R$} 
\\jump{0.15}{\\overline{\\mbox{\\boldmath $p$}}}(t) \\right\\} dt = \\\\[3mm]\n\\displaystyle\n= \\lim_{x_1'\\to 0} \\frac{1}{2\\pi} \n\\int_{-\\infty}^{\\infty} \\left\\{\\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top}(t) \\mbox{\\boldmath $R$} \\langle \\overline{\\mbox{\\boldmath $p$}} \\rangle(t) \n+ \\langle\\overline{\\mbox{\\boldmath $U$}}\\rangle^\\top(t) \\mbox{\\boldmath $R$} \\jump{0.15}{\\overline{\\mbox{\\boldmath $p$}}}(t) \\right\\} e^{-i x_1' t} dt \\\\[3mm]\n\\displaystyle\n= \\lim_{x_1'\\to 0} \\ensuremath{\\mathcal{F}}^{-1}_{x_1'} \\left\\{\\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top} \\mbox{\\boldmath $R$} \\langle \\overline{\\mbox{\\boldmath $p$}} \\rangle \n+ \\langle\\overline{\\mbox{\\boldmath $U$}}\\rangle^\\top \\mbox{\\boldmath $R$} \\jump{0.15}{\\overline{\\mbox{\\boldmath $p$}}} \\right\\} \\\\[3mm]\n\\displaystyle\n= \\lim_{x_1'\\to 0} \\int_{-\\infty}^{0} \\left\\{\\jump{0.15}{\\mbox{\\boldmath $U$}}^\\top(x_1'-x_1) \\mbox{\\boldmath $R$} \\langle \\mbox{\\boldmath $p$} \\rangle(x_1) \n+ \\langle\\mbox{\\boldmath $U$}\\rangle^\\top(x_1'-x_1) \\mbox{\\boldmath $R$} \\jump{0.15}{\\mbox{\\boldmath $p$}}(x_1) \\right\\} dx_1.\n\\end{array}\n$$\n\nThe formula for the complex stress intensity factor becomes\n\\begin{equation}\n\\label{SIF}\n\\mbox{\\boldmath $K$} = -i \\mbox{\\boldmath $\\mathcal{M}$}_1^{-1} \\lim_{x_1'\\to 0} \\int_{-\\infty}^{0} \\left\\{\\jump{0.15}{\\mbox{\\boldmath $U$}}^\\top(x_1'-x_1) \\mbox{\\boldmath $R$} \\langle \\mbox{\\boldmath $p$} \\rangle(x_1) \n+ \\langle\\mbox{\\boldmath $U$}\\rangle^\\top(x_1'-x_1) \\mbox{\\boldmath $R$} \\jump{0.15}{\\mbox{\\boldmath $p$}}(x_1) \\right\\} dx_1.\n\\end{equation}\n\nSimilarly, we can develop the integral in \\eq{coeff}$_2$ to obtain\n$$\n\\begin{array}{l}\n\\displaystyle\n\\frac{1}{2\\pi} \\int_{-\\infty}^{\\infty} t \\left\\{\\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top}(t) \\mbox{\\boldmath $R$} \\langle \\overline{\\mbox{\\boldmath $p$}} 
\\rangle(t) \n+ \\langle\\overline{\\mbox{\\boldmath $U$}}\\rangle^\\top(t) \\mbox{\\boldmath $R$} \\jump{0.15}{\\overline{\\mbox{\\boldmath $p$}}}(t) \\right\\} dt = \\\\[3mm]\n\\displaystyle\n= i \\lim_{x_1'\\to 0} \\frac{d}{dx_1'} \\frac{1}{2\\pi} \n\\int_{-\\infty}^{\\infty} \\left\\{\\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top}(t) \\mbox{\\boldmath $R$} \\langle \\overline{\\mbox{\\boldmath $p$}} \\rangle(t) \n+ \\langle\\overline{\\mbox{\\boldmath $U$}}\\rangle^\\top(t) \\mbox{\\boldmath $R$} \\jump{0.15}{\\overline{\\mbox{\\boldmath $p$}}}(t) \\right\\} e^{-i x_1' t} dt \\\\[3mm]\n\\displaystyle\n= i \\lim_{x_1'\\to 0} \\int_{-\\infty}^{0} \\left\\{\\jump{0.15}{\\mbox{\\boldmath $U$}}^\\top(x_1'-x_1) \\mbox{\\boldmath $R$} \\frac{d\\langle \\mbox{\\boldmath $p$} \\rangle(x_1)}{dx_1} \n+ \\langle\\mbox{\\boldmath $U$}\\rangle^\\top(x_1'-x_1) \\mbox{\\boldmath $R$} \\frac{d\\jump{0.15}{\\mbox{\\boldmath $p$}}(x_1)}{dx_1} \\right\\} dx_1.\n\\end{array}\n$$\n\nThe formula for the coefficient in the second order term is\n\\begin{equation}\n\\label{sec}\n\\mbox{\\boldmath $A$} = \\mbox{\\boldmath $\\mathcal{M}$}_2^{-1} \\lim_{x_1'\\to 0} \\int_{-\\infty}^{0} \\left\\{\\jump{0.15}{\\mbox{\\boldmath $U$}}^\\top(x_1'-x_1) \\mbox{\\boldmath $R$} \\frac{d\\langle \\mbox{\\boldmath $p$} \\rangle(x_1)}{dx_1} \n+ \\langle\\mbox{\\boldmath $U$}\\rangle^\\top(x_1'-x_1) \\mbox{\\boldmath $R$} \\frac{d\\jump{0.15}{\\mbox{\\boldmath $p$}}(x_1)}{dx_1} \\right\\} dx_1.\n\\end{equation}\n\nIf the loading $\\mbox{\\boldmath $p$}^\\pm$ is not smooth enough, singular or it is a distribution (such as the Dirac delta function), then we can replace $\\mbox{\\boldmath $p$}^\\pm$ \nwith a sequence $\\{\\mbox{\\boldmath $p$}^\\pm_n\\}$ such that $\\mbox{\\boldmath $p$}^\\pm_n \\in \\ensuremath{\\mathcal{S}}(\\ensuremath{\\mathbb{R}})$ and $\\mbox{\\boldmath $p$}^\\pm_n \\to \\mbox{\\boldmath $p$}^\\pm$ in $\\bar{\\ensuremath{\\mathcal{S}}}(\\ensuremath{\\mathbb{R}})$, when $n 
\\to \\infty$. \nHere $\\bar{\\ensuremath{\\mathcal{S}}}(\\ensuremath{\\mathbb{R}})$ denotes the space of distributions (Zemanian, 1968). Therefore, the representations \\eq{coeff}, \\eq{SIF}, \\eq{sec} can \nbe interpreted in the sense of distributions and we can integrate formally \\eq{sec} by parts to obtain\n$$\n\\mbox{\\boldmath $A$} = \\mbox{\\boldmath $\\mathcal{M}$}_2^{-1} \\lim_{x_1'\\to 0} \\int_{-\\infty}^{0} \\left\\{\\frac{d\\jump{0.15}{\\mbox{\\boldmath $U$}}^\\top(x_1'-x_1)}{dx_1} \\mbox{\\boldmath $R$} \\langle \\mbox{\\boldmath $p$} \\rangle(x_1) \n+ \\frac{d\\langle\\mbox{\\boldmath $U$}\\rangle^\\top(x_1'-x_1)}{dx_1} \\mbox{\\boldmath $R$} \\jump{0.15}{\\mbox{\\boldmath $p$}}(x_1) \\right\\} dx_1.\n$$\n\n\\section{Perturbation of the crack front}\n\\label{perturbation}\nFor the 2D case, the perturbation considered here is related to an advance of the crack front by a distance $a$. We denote quantities relative to the original unperturbed \ncrack problem by subscript $0$, and quantities relative to the perturbed crack problem by subscript $\\star$.\n\n\\subsection{General settings}\nThe complex stress intensity factor for the original unperturbed problem is given by\n$$\n\\mbox{\\boldmath $K$}_0 = -i \\mbox{\\boldmath $\\mathcal{M}$}_1^{-1} \\lim_{x_1'\\to 0} \\int_{-\\infty}^{0} \\left\\{\\jump{0.15}{\\mbox{\\boldmath $U$}}^\\top(x_1'-x_1) \\mbox{\\boldmath $R$} \\langle \\mbox{\\boldmath $p$} \\rangle(x_1) \n+ \\langle\\mbox{\\boldmath $U$}\\rangle^\\top(x_1'-x_1) \\mbox{\\boldmath $R$} \\jump{0.15}{\\mbox{\\boldmath $p$}}(x_1) \\right\\} dx_1.\n$$\nFor the perturbed problem, if the coordinate system is shifted by the quantity $a$ to the new crack tip position, the complex stress intensity factor is given by the \nformula\n$$\n\\mbox{\\boldmath $K$}_\\star(a) = -i \\mbox{\\boldmath $\\mathcal{M}$}_1^{-1} \\lim_{x_1'\\to 0} \\int_{-\\infty}^{0} \\left\\{\\jump{0.15}{\\mbox{\\boldmath $U$}}^\\top(x_1'-x_1) \\mbox{\\boldmath $R$} \\langle 
\\mbox{\\boldmath $p$} \\rangle(x_1+a) \n+ \\langle\\mbox{\\boldmath $U$}\\rangle^\\top(x_1'-x_1) \\mbox{\\boldmath $R$} \\jump{0.15}{\\mbox{\\boldmath $p$}}(x_1+a) \\right\\} dx_1.\n$$\n\nThe computation of the first order variation of the complex stress intensity factor is now straightforward\n$$\n\\begin{array}{l}\n\\displaystyle\n\\left.\\frac{d\\mbox{\\boldmath $K$}_\\star(a)}{da}\\right|_{a=0} = \\\\[3mm]\n\\displaystyle\n=-i \\mbox{\\boldmath $\\mathcal{M}$}_1^{-1} \\lim_{x_1'\\to 0} \\lim_{a \\to 0^+} \\int_{-\\infty}^{0} \n\\left\\{\\jump{0.15}{\\mbox{\\boldmath $U$}}^\\top(x_1'-x_1) \\mbox{\\boldmath $R$} \\frac{\\langle \\mbox{\\boldmath $p$} \\rangle(x_1+a) - \\langle \\mbox{\\boldmath $p$} \\rangle(x_1)}{a} \\right. \\\\[3mm] \n\\hspace{50mm}\n\\displaystyle\n\\left. + \\langle\\mbox{\\boldmath $U$}\\rangle^\\top(x_1'-x_1) \\mbox{\\boldmath $R$} \\frac{\\jump{0.15}{\\mbox{\\boldmath $p$}}(x_1+a) - \\jump{0.15}{\\mbox{\\boldmath $p$}}(x_1)}{a} \\right\\} \ndx_1 \\\\[3mm]\n\\displaystyle\n=-i \\mbox{\\boldmath $\\mathcal{M}$}_1^{-1} \\lim_{x_1'\\to 0} \\int_{-\\infty}^{0} \n\\left\\{\\jump{0.15}{\\mbox{\\boldmath $U$}}^\\top(x_1'-x_1) \\mbox{\\boldmath $R$} \\frac{d\\langle \\mbox{\\boldmath $p$} \\rangle(x_1)}{dx_1} \n+ \\langle\\mbox{\\boldmath $U$}\\rangle^\\top(x_1'-x_1) \\mbox{\\boldmath $R$} \\frac{d\\jump{0.15}{\\mbox{\\boldmath $p$}}(x_1)}{dx_1} \\right\\} \ndx_1.\n\\end{array}\n$$\n\nSimilarly, for the second order term, we obtain\n$$\n\\left.\\frac{d\\mbox{\\boldmath $A$}_\\star(a)}{da}\\right|_{a=0} = \\mbox{\\boldmath $\\mathcal{M}$}_2^{-1} \\lim_{x_1'\\to 0} \n\\int_{-\\infty}^{0} \\left\\{\\jump{0.15}{\\mbox{\\boldmath $U$}}^\\top(x_1'-x_1) \\mbox{\\boldmath $R$} \\frac{d^2\\langle \\mbox{\\boldmath $p$} \\rangle(x_1)}{dx_1^2} \n+ \\langle\\mbox{\\boldmath $U$}\\rangle^\\top(x_1'-x_1) \\mbox{\\boldmath $R$} \\frac{d^2\\jump{0.15}{\\mbox{\\boldmath $p$}}(x_1)}{dx_1^2} \\right\\} dx_1.\n$$\n\nWe assume here that all the derivatives exist, 
otherwise, as previously, the formula should be understood in the generalized sense.\n\n\\subsection{Computation of the perturbations of stress intensity factor according to the approach of Willis and Movchan (1995)} \nThe Betti formula for the original crack problem and the perturbed crack problem reads\n\\begin{equation}\n\\label{betti4}\n\\begin{array}{l}\n\\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top} \\mbox{\\boldmath $R$} \\overline{\\mbox{\\boldmath $\\sigma$}}_0^{+} - \\overline{\\mbox{\\boldmath $\\Sigma$}}^{-\\top} \\mbox{\\boldmath $R$} \\jump{0.15}{\\overline{\\mbox{\\boldmath $u$}}_0}^- = \n- \\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top} \\mbox{\\boldmath $R$} \\langle \\overline{\\mbox{\\boldmath $p$}} \\rangle - \\langle\\overline{\\mbox{\\boldmath $U$}}\\rangle^\\top \\mbox{\\boldmath $R$} \\jump{0.15}{\\overline{\\mbox{\\boldmath $p$}}}, \\quad \\beta \\in \\ensuremath{\\mathbb{R}}, \\\\[3mm]\n\\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top} \\mbox{\\boldmath $R$} \\overline{\\mbox{\\boldmath $\\sigma$}}_\\star^{\\ddag} - \\overline{\\mbox{\\boldmath $\\Sigma$}}^{-\\top} \\mbox{\\boldmath $R$} \\jump{0.15}{\\overline{\\mbox{\\boldmath $u$}}_\\star^{\\ddag}} = \n- \\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top} \\mbox{\\boldmath $R$} \\langle \\overline{\\mbox{\\boldmath $p$}} \\rangle - \\langle\\overline{\\mbox{\\boldmath $U$}}\\rangle^\\top \\mbox{\\boldmath $R$} \\jump{0.15}{\\overline{\\mbox{\\boldmath $p$}}}, \\quad \\beta \\in \\ensuremath{\\mathbb{R}},\n\\end{array}\n\\end{equation}\nrespectively. 
According to \\eq{stress1} and \\eq{displac}, the asymptotics of physical fields in the unperturbed problem are\n\\begin{equation}\n\\label{unpert}\n\\begin{array}{l}\n\\displaystyle\n\\overline{\\mbox{\\boldmath $\\sigma$}}_0^+(\\beta) = \\frac{\\beta_+^{-1\/2}}{4} \\mbox{\\boldmath $\\mathcal{T}$}_1(\\beta) \\mbox{\\boldmath $K$}_0 + \\frac{\\beta_+^{-1\/2}}{4\\beta} \\mbox{\\boldmath $\\mathcal{T}$}_2(\\beta) \\mbox{\\boldmath $A$}_0 + O(\\beta^{-5\/2}), \\quad \\beta \\in \\ensuremath{\\mathbb{C}}^+, \\\\[3mm]\n\\displaystyle\n\\jump{0.15}{\\overline{\\mbox{\\boldmath $u$}}_0}^-(\\beta) = -\\frac{bd_0^2}{4\\beta} \\beta_-^{-1\/2} \\mbox{\\boldmath $\\mathcal{V}$}_1(\\beta) \\mbox{\\boldmath $K$}_0 - \\frac{bd_0^2}{4\\beta^2} \\beta_-^{-1\/2} \\mbox{\\boldmath $\\mathcal{V}$}_2(\\beta) \\mbox{\\boldmath $A$}_0 + O(\\beta^{-7\/2}), \\quad \n\\beta \\in \\ensuremath{\\mathbb{C}}^-.\n\\end{array}\n\\end{equation}\n\nFor the physical fields in the perturbed problem, $\\mbox{\\boldmath $\\sigma$}_\\star^\\ddag$ and $\\mbox{\\boldmath $u$}_\\star^\\ddag$, we can write the respective transforms in the new \ncoordinate system related to the new position of the crack tip\n$$\n\\begin{array}{l}\n\\displaystyle\n\\overline{\\mbox{\\boldmath $\\sigma$}}_\\star^\\ddag(\\beta,a) = \n\\int_a^{\\infty} \\mbox{\\boldmath $\\sigma$}_\\star^\\ddag(x_1) e^{i\\beta x_1} dx_1 = \ne^{i\\beta a} \\int_0^{\\infty} \\mbox{\\boldmath $\\sigma$}_\\star(y) e^{i\\beta y} dy = \ne^{i\\beta a} \\overline{\\mbox{\\boldmath $\\sigma$}}_\\star^+(\\beta,a), \\\\[3mm]\n\\displaystyle\n\\jump{0.15}{\\overline{\\mbox{\\boldmath $u$}}_\\star^\\ddag}(\\beta,a) = \n\\int_{-\\infty}^a \\jump{0.15}{\\mbox{\\boldmath $u$}_\\star^\\ddag}(x_1) e^{i\\beta x_1} dx_1 = \ne^{i\\beta a} \\int_{-\\infty}^0 \\jump{0.15}{\\mbox{\\boldmath $u$}_\\star}(y) e^{i\\beta y} dy = \ne^{i\\beta a} \\jump{0.15}{\\overline{\\mbox{\\boldmath $u$}}_\\star}^-(\\beta,a).\n\\end{array}\n$$\nNote that $\\overline{\\mbox{\\boldmath 
$\\sigma$}}_\\star^+$ and $\\jump{0.15}{\\overline{\\mbox{\\boldmath $u$}}_\\star}^-$ are ``$+$'' and ``$-$'' functions, respectively, with the same asymptotics as in \\eq{unpert}, \nsubject to replacing $\\mbox{\\boldmath $K$}_0$ with $\\mbox{\\boldmath $K$}_\\star(a)$ and $\\mbox{\\boldmath $A$}_0$ with $\\mbox{\\boldmath $A$}_\\star(a)$.\n\nSubtracting (\\ref{betti4})$_2$ from (\\ref{betti4})$_1$, we obtain\n\\begin{equation}\n\\label{betti5}\n\\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top} \\mbox{\\boldmath $R$} (\\overline{\\mbox{\\boldmath $\\sigma$}}_0^{+} - e^{i\\beta a}\\overline{\\mbox{\\boldmath $\\sigma$}}_\\star^{+}) \n- \\overline{\\mbox{\\boldmath $\\Sigma$}}^{-\\top} \\mbox{\\boldmath $R$} (\\jump{0.15}{\\overline{\\mbox{\\boldmath $u$}}_0}^- - e^{i\\beta a}\\jump{0.15}{\\overline{\\mbox{\\boldmath $u$}}_\\star}^-) = \\mbox{\\boldmath $0$}, \n\\quad \\beta \\in \\ensuremath{\\mathbb{R}}.\n\\end{equation} \nNote that for any $a > 0$, the function $e^{i\\beta a}\\overline{\\mbox{\\boldmath $\\sigma$}}_\\star^{+}$ is a ``$+$'' function which decays exponentially as $\\beta \\to \\infty$, \n$\\beta \\in \\ensuremath{\\mathbb{C}}^+$, whereas $e^{i\\beta a}\\jump{0.15}{\\overline{\\mbox{\\boldmath $u$}}_\\star}^-$ is a ``$-$'' function which grows exponentially as $\\beta \\to \\infty$, $\\beta \\in \\ensuremath{\\mathbb{C}}^-$.\n\nFollowing Willis and Movchan (1995), we expand the exponential term as $a \\to 0^+$, so that\n$$\n\\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top} \\mbox{\\boldmath $R$} (\\overline{\\mbox{\\boldmath $\\sigma$}}_0^{+} - (1 + ia\\beta)\\overline{\\mbox{\\boldmath $\\sigma$}}_\\star^{+}) \n- \\overline{\\mbox{\\boldmath $\\Sigma$}}^{-\\top} \\mbox{\\boldmath $R$} (\\jump{0.15}{\\overline{\\mbox{\\boldmath $u$}}_0}^- - (1 + ia\\beta)\\jump{0.15}{\\overline{\\mbox{\\boldmath $u$}}_\\star}^-) = \\mbox{\\boldmath $0$}, \n\\quad \\beta \\in \\ensuremath{\\mathbb{R}}.\n$$\n\nThe substitution of two-term asymptotics for the 
unperturbed physical fields, $\\overline{\\mbox{\\boldmath $\\sigma$}}_0^{+}$, $\\jump{0.15}{\\overline{\\mbox{\\boldmath $u$}}_0}^-$, and for the perturbed \nphysical fields, $\\overline{\\mbox{\\boldmath $\\sigma$}}_\\star^{+}$, $\\jump{0.15}{\\overline{\\mbox{\\boldmath $u$}}_\\star}^-$, leads to\n$$\n\\begin{array}{l}\n\\displaystyle\n\\mbox{\\boldmath $\\mathcal{M}$}_1 (\\mbox{\\boldmath $K$}_0 - \\mbox{\\boldmath $K$}_\\star(a))\\beta_+^{-1} - ia \\mbox{\\boldmath $\\mathcal{M}$}_1 \\mbox{\\boldmath $K$}_\\star(a) -ia \\mbox{\\boldmath $\\mathcal{M}$}_2 \\mbox{\\boldmath $A$}_\\star(a) \\beta_+^{-1} \\\\[3mm]\n\\hspace{30mm}\n- \\{ \\mbox{\\boldmath $\\mathcal{M}$}_1(\\mbox{\\boldmath $K$}_0 - \\mbox{\\boldmath $K$}_\\star(a))\\beta_-^{-1} - ia \\mbox{\\boldmath $\\mathcal{M}$}_1 \\mbox{\\boldmath $K$}_\\star(a) -ia \\mbox{\\boldmath $\\mathcal{M}$}_2 \\mbox{\\boldmath $A$}_\\star(a) \\beta_-^{-1} \\} + O(\\beta^{-2}) = \\\\[3mm]\n\\displaystyle \\hspace{30mm}\n= \\{\\mbox{\\boldmath $\\mathcal{M}$}_1 (\\mbox{\\boldmath $K$}_0 - \\mbox{\\boldmath $K$}_\\star(a)) - ia \\mbox{\\boldmath $\\mathcal{M}$}_2 \\mbox{\\boldmath $A$}_\\star(a) \\} (\\beta_+^{-1} - \\beta_-^{-1}) + O(\\beta^{-2}) = \\mbox{\\boldmath $0$}.\n\\end{array}\n$$\nWe thus obtain\n\\begin{equation}\n\\label{SIF_perturb}\n\\Delta \\mbox{\\boldmath $K$} = \\mbox{\\boldmath $K$}_\\star(a) - \\mbox{\\boldmath $K$}_0 \\sim -ia \\mbox{\\boldmath $\\mathcal{M}$}_1^{-1} \\mbox{\\boldmath $\\mathcal{M}$}_2 \\mbox{\\boldmath $A$}_0 = \na \\left[\\begin{array}{ll} 1\/2 + i\\epsilon & 0 \\\\[3mm] 0 & 1\/2 - i\\epsilon \\end{array}\\right] \\mbox{\\boldmath $A$}_0, \\quad a \\to 0^+,\n\\end{equation}\nand finally\n$$\n\\Delta K \\sim a (1\/2 + i\\epsilon) A_0, \\quad \\Delta K^* \\sim a (1\/2 - i\\epsilon) A_0^*, \\quad a \\to 0^+.\n$$\n\nNote that it is assumed here that $\\mbox{\\boldmath $K$}_\\star(a)$ and $\\mbox{\\boldmath $A$}_\\star(a)$ are continuous at $a = 0$. 
This fact will be proven in the next section.\n\n\\subsection{An alternative computation of the perturbations of stress intensity factors and high-order terms}\n\\label{SIF_Abelian}\nHere we provide a rigorous asymptotic procedure leading to the computation of perturbations in the stress intensity factors as well as high-order terms. The Abelian \nand Tauberian type theorems outlined in Appendix \\ref{app3} will be used for this purpose.\n\nWe rewrite now the equation (\\ref{betti5}) as follows \n\\begin{equation}\n\\label{w-h}\n\\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top} \\mbox{\\boldmath $R$} (e^{-i\\beta a}\\overline{\\mbox{\\boldmath $\\sigma$}}_0^{+} - \\overline{\\mbox{\\boldmath $\\sigma$}}_\\star^{+}) \n- \\overline{\\mbox{\\boldmath $\\Sigma$}}^{-\\top} \\mbox{\\boldmath $R$} (e^{-i\\beta a}\\jump{0.15}{\\overline{\\mbox{\\boldmath $u$}}_0}^- - \\jump{0.15}{\\overline{\\mbox{\\boldmath $u$}}_\\star}^-) = \\mbox{\\boldmath $0$}, \n\\quad \\beta \\in \\ensuremath{\\mathbb{R}}.\n\\end{equation} \nThe function $e^{-i\\beta a} \\overline{\\mbox{\\boldmath $\\Sigma$}}^{-\\top} \\mbox{\\boldmath $R$} \\jump{0.15}{\\overline{\\mbox{\\boldmath $u$}}_0}^{-}$ is a \"$-$\" function which decays exponentially as $\\beta \\to \\infty$, \n$\\beta \\in \\ensuremath{\\mathbb{C}}^-$, whereas $e^{-i\\beta a} \\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top} \\mbox{\\boldmath $R$} \\overline{\\mbox{\\boldmath $\\sigma$}}_0^+ $ is not a proper \"$+$\" function, because of the exponential \ngrowth as $\\beta \\to \\infty$, $\\beta \\in \\ensuremath{\\mathbb{C}}^+$. 
However, for $\\beta \\in \\ensuremath{\\mathbb{R}}$, \n$e^{-i\\beta a} \\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top} \\mbox{\\boldmath $R$} \\overline{\\mbox{\\boldmath $\\sigma$}}_0^+ \\sim 1\/\\beta$, as $\\beta \\to \\pm \\infty$, so that it can be decomposed as \n$$\ne^{-i\\beta a} \\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top} \\mbox{\\boldmath $R$} \\overline{\\mbox{\\boldmath $\\sigma$}}_0^+ = \\mbox{\\boldmath $\\Xi$}^+ - \\mbox{\\boldmath $\\Xi$}^-,\n$$\nwhere\n$$\n\\mbox{\\boldmath $\\Xi$}^\\pm(\\beta) = \\frac{1}{2\\pi i} \\int_{-\\infty}^{\\infty} \n\\frac{e^{-ia t} \\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top} \\mbox{\\boldmath $R$} \\overline{\\mbox{\\boldmath $\\sigma$}}_0^+}{t - \\beta} dt, \n\\quad \\beta \\in \\ensuremath{\\mathbb{C}}^\\pm.\n$$\nNow, from the solution of the Wiener-Hopf equation (\\ref{w-h}), we obtain the following identities\n\\begin{equation}\n\\label{id}\n\\begin{array}{l}\n\\displaystyle\n\\mbox{\\boldmath $\\Xi$}^+ - \\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top} \\mbox{\\boldmath $R$} \\overline{\\mbox{\\boldmath $\\sigma$}}_\\star^{+} = \\mbox{\\boldmath $0$}, \\quad \\beta \\in \\ensuremath{\\mathbb{C}}^+, \\\\[3mm]\n\\displaystyle\n-\\mbox{\\boldmath $\\Xi$}^- - \\overline{\\mbox{\\boldmath $\\Sigma$}}^{-\\top} \\mbox{\\boldmath $R$} (e^{-i\\beta a}\\jump{0.15}{\\overline{\\mbox{\\boldmath $u$}}_0}^- - \\jump{0.15}{\\overline{\\mbox{\\boldmath $u$}}_\\star}^-) = \\mbox{\\boldmath $0$}, \\quad \\beta \\in \\ensuremath{\\mathbb{C}}^-.\n\\end{array}\n\\end{equation}\nNote that\n$$\n\\int_{-\\infty}^{\\infty} e^{-iat} \\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top} \\mbox{\\boldmath $R$} \\overline{\\mbox{\\boldmath $\\sigma$}}_0^+ dt < \\infty,\n$$\nand\n$$\ne^{-i\\beta a} \\overline{\\mbox{\\boldmath $\\Sigma$}}^{-\\top} \\mbox{\\boldmath $R$} \\jump{0.15}{\\overline{\\mbox{\\boldmath $u$}}_0}^- = o\\left( \\frac{1}{\\beta^n} \\right), \n\\quad \\beta \\to \\infty, 
\\quad \\beta \\in \\ensuremath{\\mathbb{C}}^-,\n$$\nfor any $n \\in \\ensuremath{\\mathbb{N}}$. Consequently, we obtain from the leading-term asymptotics of both identities (\\ref{id}) the same result, namely\n\\begin{equation}\n\\label{k2}\n\\mbox{\\boldmath $\\mathcal{M}$}_1 \\mbox{\\boldmath $K$}_\\star(a) = -\\frac{1}{2\\pi i} \\int_{-\\infty}^{\\infty} e^{-iat} \\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top} \\mbox{\\boldmath $R$} \\overline{\\mbox{\\boldmath $\\sigma$}}_0^+ dt.\n\\end{equation}\nThe application of the Tauberian-type Theorem \\ref{theorem} (see Appendix \\ref{app3}) leads to\n$$\n\\lim_{a \\to 0^+} \\mbox{\\boldmath $\\mathcal{M}$}_1 \\mbox{\\boldmath $K$}_\\star(a) = -\\frac{1}{i} (- i \\mbox{\\boldmath $\\mathcal{M}$}_1 \\mbox{\\boldmath $K$}_0),\n$$\nand, since the matrix $\\mbox{\\boldmath $\\mathcal{M}$}_1$ is not singular,\n$$\n\\lim_{a \\to 0^+} \\mbox{\\boldmath $K$}_\\star(a) = \\mbox{\\boldmath $K$}_0.\n$$\nThis proves that the complex stress intensity factor $K$ is continuous at $a = 0$.\n\nIn order to derive the first-order variation of the complex stress intensity factor in the perturbation problem, we consider the derivative of (\\ref{k2}) with \nrespect to $a$ (see Theorem \\ref{theorem}),\n$$\n\\mbox{\\boldmath $\\mathcal{M}$}_1 \\mbox{\\boldmath $K$}_\\star'(a) = \\frac{1}{2\\pi} \\int_{-\\infty}^{\\infty} e^{-iat} \\{t \\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top} \\mbox{\\boldmath $R$} \\overline{\\mbox{\\boldmath $\\sigma$}}_0^+ - \\mbox{\\boldmath $\\mathcal{M}$}_1 \\mbox{\\boldmath $K$}_0\\} dt,\n$$\nwhere\n$$\nt \\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top} \\mbox{\\boldmath $R$} \\overline{\\mbox{\\boldmath $\\sigma$}}_0^+ - \\mbox{\\boldmath $\\mathcal{M}$}_1 \\mbox{\\boldmath $K$}_0 = t^{-1} \\mbox{\\boldmath $\\mathcal{M}$}_2 \\mbox{\\boldmath $A$}_0 + t^{-2} \\mbox{\\boldmath $\\mathcal{M}$}_3 \\mbox{\\boldmath $B$}_0 + O(t^{-3}), \\quad t \\to \\infty, \n\\quad t \\in 
\\ensuremath{\\mathbb{C}}^+.\n$$\nApplying again the Theorem \\ref{theorem} we can conclude that\n$$\n\\lim_{a \\to 0^+} \\mbox{\\boldmath $\\mathcal{M}$}_1 \\mbox{\\boldmath $K$}_\\star'(a) = -i \\mbox{\\boldmath $\\mathcal{M}$}_2 \\mbox{\\boldmath $A$}_0, \n$$\nand, finally, the first order variation of the complex stress intensity factor in the perturbation problem is given by \n$$\n\\mbox{\\boldmath $K$}_\\star'(0) = -i \\mbox{\\boldmath $\\mathcal{M}$}_1^{-1} \\mbox{\\boldmath $\\mathcal{M}$}_2 \\mbox{\\boldmath $A$}_0,\n$$\nwhich is consistent with the result \\eq{SIF_perturb}.\n\nThis asymptotic procedure can be extended to compute the perturbations of high-order terms as follows. From the second-term asymptotics of (\\ref{id}) we derive \nthe coefficient $\\mbox{\\boldmath $A$}_\\star(a)$, which reads\n\\begin{equation}\n\\label{A2}\n\\mbox{\\boldmath $\\mathcal{M}$}_2 \\mbox{\\boldmath $A$}_\\star(a) = \\lim_{\\beta \\to \\infty} \\frac{\\beta}{2\\pi i}\n\\int_{-\\infty}^{\\infty} \\frac{t e^{-iat} \\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top} \\mbox{\\boldmath $R$} \\overline{\\mbox{\\boldmath $\\sigma$}}_0^+}{t - \\beta} dt,\n\\quad \\beta \\in \\ensuremath{\\mathbb{C}}^+.\n\\end{equation}\nNote that the integral on the right-hand side exists, but the behaviour of the integrand does not allow us to take the limit directly.\n\nLet us consider the function\n$$\n\\mbox{\\boldmath $f$}(\\beta,a) = \\frac{\\beta}{2\\pi i}\n\\int_{-\\infty}^{\\infty} \\frac{t e^{-iat} \\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top} \\mbox{\\boldmath $R$} \\overline{\\mbox{\\boldmath $\\sigma$}}_0^+}{t - \\beta} dt.\n$$\n\nIntegration by parts leads to\n\\begin{equation}\n\\label{effe2}\n\\mbox{\\boldmath $f$}(\\beta,a) = \\frac{\\beta}{2\\pi i} \\left\\{ \n\\left[g(t,\\beta) t \\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top} \\mbox{\\boldmath $R$} \\overline{\\mbox{\\boldmath $\\sigma$}}_0^+ \\right]_{-\\infty}^{\\infty}\n- 
\\int_{-\\infty}^{\\infty} g(t,\\beta) (t \\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top} \\mbox{\\boldmath $R$} \\overline{\\mbox{\\boldmath $\\sigma$}}_0^+ )' dt\n\\right\\},\n\\end{equation}\nwhere \n\\begin{equation}\n\\label{gi}\ng(t,\\beta) = \\int_{-\\infty}^{t} \\frac{e^{-ia\\xi}}{\\xi - \\beta} d\\xi, \\quad \\beta \\in \\ensuremath{\\mathbb{C}}^+.\n\\end{equation}\nThe integrand in (\\ref{gi}) is analytic in $\\ensuremath{\\mathbb{C}}^-$ and decays exponentially as $\\xi \\to \\infty$, \n$\\xi \\in \\ensuremath{\\mathbb{C}}^-$. As a result\n$$\n\\lim_{t \\to \\infty} g(t,\\beta) = 0, \\quad \\beta \\in \\ensuremath{\\mathbb{C}}^+,\n$$\nand therefore the first term in brackets in \\eq{effe2} vanishes and we can then write\n$$\n\\begin{array}{ll}\n\\displaystyle \\mbox{\\boldmath $f$}(\\beta,a) & \\displaystyle = -\\frac{\\beta}{2\\pi i} \\int_{-\\infty}^{\\infty} g(t,\\beta) (t \\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top} \\mbox{\\boldmath $R$} \\overline{\\mbox{\\boldmath $\\sigma$}}_0^+ )' dt \\\\[3mm]\n& \\displaystyle = -\\frac{\\beta}{2\\pi i} \\int_{-\\infty}^{\\infty} g(t,\\beta) (t \\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top} \\mbox{\\boldmath $R$} \\overline{\\mbox{\\boldmath $\\sigma$}}_0^+ - \\mbox{\\boldmath $\\mathcal{M}$}_1 \\mbox{\\boldmath $K$}_0)' dt \\\\[3mm]\n& \\displaystyle = -\\frac{\\beta}{2\\pi i} \\left\\{ \n\\left[g(t,\\beta) (t \\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top} \\mbox{\\boldmath $R$} \\overline{\\mbox{\\boldmath $\\sigma$}}_0^+ - \\mbox{\\boldmath $\\mathcal{M}$}_1 \\mbox{\\boldmath $K$}_0)\\right]_{-\\infty}^{\\infty} \n- \\int_{-\\infty}^{\\infty} g'(t,\\beta) (t \\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top} \\mbox{\\boldmath $R$} \\overline{\\mbox{\\boldmath $\\sigma$}}_0^+ - \\mbox{\\boldmath $\\mathcal{M}$}_1 \\mbox{\\boldmath $K$}_0) dt \n\\right\\} \\\\[3mm]\n& \\displaystyle = \\frac{\\beta}{2\\pi i} \n\\int_{-\\infty}^{\\infty} 
\\frac{e^{-iat}}{t-\\beta} (t \\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top} \\mbox{\\boldmath $R$} \\overline{\\mbox{\\boldmath $\\sigma$}}_0^+ - \\mbox{\\boldmath $\\mathcal{M}$}_1 \\mbox{\\boldmath $K$}_0) dt.\n\\end{array}\n$$\nIt is now possible to take the limit as $\\beta \\to \\infty$ in (\\ref{A2}), and we deduce\n$$\n\\begin{array}{ll}\n\\displaystyle \\mbox{\\boldmath $\\mathcal{M}$}_2 \\mbox{\\boldmath $A$}_\\star(a) & \\displaystyle = \\lim_{\\beta \\to \\infty} \n\\frac{\\beta}{2\\pi i} \\int_{-\\infty}^{\\infty} \\frac{e^{-iat} (t \\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top} \\mbox{\\boldmath $R$} \\overline{\\mbox{\\boldmath $\\sigma$}}_0^+ - \\mbox{\\boldmath $\\mathcal{M}$}_1 \\mbox{\\boldmath $K$}_0)}{t - \\beta} dt \\\\[3mm]\n& \\displaystyle = -\\frac{1}{2\\pi i} \\int_{-\\infty}^{\\infty} e^{-iat} (t \\jump{0.15}{\\overline{\\mbox{\\boldmath $U$}}}^{+\\top} \\mbox{\\boldmath $R$} \\overline{\\mbox{\\boldmath $\\sigma$}}_0^+ - \\mbox{\\boldmath $\\mathcal{M}$}_1 \\mbox{\\boldmath $K$}_0) dt.\n\\end{array}\n$$\nWe are now in a position to use Theorem \\ref{theorem}, so that\n$$\n\\lim_{a \\to 0^+} \\mbox{\\boldmath $\\mathcal{M}$}_2 \\mbox{\\boldmath $A$}_\\star(a) = -\\frac{1}{i} (-i \\mbox{\\boldmath $\\mathcal{M}$}_2 \\mbox{\\boldmath $A$}_0),\n$$\nand, since the matrix $\\mbox{\\boldmath $\\mathcal{M}$}_2$ is not singular,\n$$\n\\lim_{a \\to 0^+} \\mbox{\\boldmath $A$}_\\star(a) = \\mbox{\\boldmath $A$}_0,\n$$\nwhich proves the continuity of the function $\\mbox{\\boldmath $A$}_\\star(a)$ at $a = 0$.\n\nMoreover, the first-order variation of $\\mbox{\\boldmath $A$}_\\star(a)$ can now be derived following the same asymptotic procedure presented above, and we deduce\n$$\n\\mbox{\\boldmath $A$}_\\star'(0) = -i \\mbox{\\boldmath $\\mathcal{M}$}_2^{-1} \\mbox{\\boldmath $\\mathcal{M}$}_3 \\mbox{\\boldmath $B$}_0,\n$$\nso that\n$$\nA_\\star'(0) = (3\/2 + i\\epsilon) B_0, \\quad A_\\star^{*\\prime}(0) = (3\/2 - 
i\\epsilon) B_0^*.\n$$\n\n\\section{An illustrative example}\n\\label{example}\nIn this section, we present an example regarding the computation of the complex stress intensity factor $K$ for an interfacial crack loaded by a simple asymmetric force \nsystem, as shown in Fig. \\ref{three-point}. The loading is given by a point force $F$ acting upon the upper crack face at a distance $a$ behind the crack tip and two \npoint forces $F\/2$ acting upon the lower crack face at a distance $a - b$ and $a + b$ behind the crack tip. In terms of the Dirac delta function $\\delta(\\cdot)$, the \nloading is then given by\n$$\n\\begin{array}{l}\n\\displaystyle p^+(x_1) = - F \\delta(x_1 + a), \\\\[3mm]\n\\displaystyle p^-(x_1) = -\\frac{F}{2} \\delta(x_1 + a + b) - \\frac{F}{2} \\delta(x_1 + a - b).\n\\end{array}\n$$\n\n\\begin{figure}[!htb]\n\\begin{center}\n\\vspace*{3mm}\n\\includegraphics[width=10cm]{fig03.eps}\n\\caption{\\footnotesize \"Three-point\" loading of an interfacial crack.}\n\\label{three-point}\n\\end{center}\n\\end{figure}\n\nThe loading is self-balanced, in terms of both principal force and moment vectors, and can be divided into symmetric and skew-symmetric parts (see Fig. 
\n\\ref{three-point}), \n$$\n\\begin{array}{l}\n\\displaystyle \\langle p \\rangle(x_1) = -\\frac{F}{2} \\delta(x_1 + a) - \\frac{F}{4} \\delta(x_1 + a + b) - \\frac{F}{4} \\delta(x_1 + a - b), \\\\[3mm]\n\\displaystyle \\jump{0.15}{p}(x_1) = - F \\delta(x_1 + a) + \\frac{F}{2} \\delta(x_1 + a + b) + \\frac{F}{2} \\delta(x_1 + a - b).\n\\end{array}\n$$\n\nFor this loading system, the complex stress intensity factor $K$ has been evaluated by means of the integral formula \\eq{SIF} and the symmetric and skew-symmetric \nweight functions, \\eq{symm} and \\eq{skew} respectively, obtaining $K = K^S + K^A$, where\n$$\n\\begin{array}{l}\n\\displaystyle K^S = F \\sqrt{\\frac{2}{\\pi}} \\cosh(\\pi\\epsilon) a^{-1\/2 - i\\epsilon} \\left\\{ \\frac{1}{2} + \\frac{1}{4} (1 + b\/a)^{-1\/2 - i\\epsilon} + \n\\frac{1}{4} (1 - b\/a)^{-1\/2 - i\\epsilon} \\right\\}, \\\\[3mm]\n\\displaystyle K^A = \\alpha F \\sqrt{\\frac{2}{\\pi}} \\cosh(\\pi\\epsilon) a^{-1\/2 - i\\epsilon} \\left\\{ \\frac{1}{2} - \\frac{1}{4} (1 + b\/a)^{-1\/2 - i\\epsilon} -\n\\frac{1}{4} (1 - b\/a)^{-1\/2 - i\\epsilon} \\right\\}.\n\\end{array}\n$$\n\nThe complex stress intensity factor has been computed for the case $\\nu_+ = 0.2$ and $\\nu_- = 0.3$ and five values of the parameter $\\eta$, namely \n$\\eta = \\{ -0.99, -0.5, 0, 0.5, 0.99 \\}$. The values have been normalized and plotted in Fig. \\ref{fig04} as functions of the ratio $b\/a$ \n($a$ is fixed equal to 1). 
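The closed-form expressions for $K^S$ and $K^A$ above are straightforward to evaluate numerically. The following sketch is only a hypothetical illustration: the bimaterial constants $\epsilon$ and $\alpha$ enter as plain inputs (in the analysis they are fixed by the elastic mismatch), so the values used are merely indicative.

```python
import math

def k_components(F, a, b_over_a, eps, alpha):
    """K^S and K^A for the 'three-point' loading of an interfacial crack.

    eps (oscillation index) and alpha are material-dependent constants;
    here they are free inputs, so any numbers produced are illustrative.
    """
    p = complex(-0.5, -eps)  # the exponent -1/2 - i*eps
    pref = F * math.sqrt(2.0 / math.pi) * math.cosh(math.pi * eps) * a ** p
    t = 0.25 * (1.0 + b_over_a) ** p + 0.25 * (1.0 - b_over_a) ** p
    return pref * (0.5 + t), alpha * pref * (0.5 - t)  # (K^S, K^A)
```

As a quick check of the sketch: at $b\/a = 0$ the loading is purely symmetric and $K^A$ vanishes, while both components grow without bound as $b\/a \to 1$, consistent with a point force approaching the crack tip.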
The symmetric (skew-symmetric) \nstress intensity factor is reported on the left (right) of the figure, the real (imaginary) part is reported on the top (bottom).\n\\begin{figure}[!htb]\n\\begin{center}\n\\vspace*{3mm}\n\\includegraphics[width=11cm]{fig04arx.eps}\n\\caption{\\footnotesize Symmetric and anti-symmetric stress intensity factors as a function of $b\/a$ ($a$ is fixed equal to 1) for the case \n$\\nu_+ = 0.2$, $\\nu_-=0.3$ and: $\\eta = -0.99$ (green), $\\eta = -0.5$ (orange), $\\eta = 0$ (red), \n$\\eta = 0.5$ (blue), $\\eta = 0.99$ (black).}\n\\label{fig04}\n\\end{center}\n\\end{figure}\n\nCommenting on the results, first, we note from the figure that both $K_\\text{II}^S$ and $K_\\text{II}^A$ are not identically zero, even though the loading corresponds to a \ntensile (or opening) mode. This is typical of the interfacial crack problem, where there is no symmetry with respect to the plane containing the crack, so that Mode I and \nMode II are coupled.\n\nSecond, results pertaining to the skew-symmetric stress intensity factor show that $K_\\text{I}^A$ and $K_\\text{II}^A$ are both equal to zero for $b\/a = 0$, as expected, since \nin this case the loading is symmetric. As we increase $b\/a$ and the skew-symmetric part of the loading becomes more and more relevant, the skew-symmetric stress intensity \nfactor increases correspondingly. Both the symmetric and skew-symmetric stress intensity factors are singular as $b\/a \\to 1$, because a point force is approaching the \ncrack tip.\n\nThe analytical method developed in the present paper allows us also to evaluate the coefficients in the higher order terms of asymptotics of physical \nfields. As an example, we computed the coefficient in the second order term, $A$, which is reported in Fig. 
\\ref{fig05}, in \nanalogy with the stress intensity factor in the figures above.\n\\begin{figure}[!htb]\n\\begin{center}\n\\vspace*{3mm}\n\\includegraphics[width=11cm]{fig05arx.eps}\n\\caption{\\footnotesize Symmetric and anti-symmetric parts of $A$ as functions of $b\/a$ ($a$ is fixed equal to 1) for the case $\\nu_+ = 0.2$, \n$\\nu_-=0.3$ and: $\\eta = -0.99$ (green), $\\eta = -0.5$ (orange), $\\eta = 0$ (red), $\\eta = 0.5$ (blue), $\\eta = 0.99$ (black).}\n\\label{fig05}\n\\end{center}\n\\end{figure}\n\nIn order to appreciate the magnitude of the skew-symmetric stress intensity factor with respect to the magnitude of the symmetric stress intensity factor, the ratio \n$K_\\text{I}^A\/K_\\text{I}^S$ is plotted in Fig. \\ref{fig06} (on the left) as a function of $b\/a$. We notice from the figure that the magnitude of $K_\\text{I}^A$ may \neasily reach 20\\% of the magnitude of $K_\\text{I}^S$, a value which is not negligible and has to be taken into account in view of the application of a tensile fracture \ncriterion. Note also that the coefficient $A$ is even more sensitive to the anti-symmetric loading: $A_\\text{I}^A$ may \neasily reach the same magnitude as $A_\\text{I}^S$, see Fig. \\ref{fig06} on the right. 
The ratios $K_\\text{II}^A\/K_\\text{II}^S$ and \n$A_\\text{II}^A\/A_\\text{II}^S$ are not reported here since, in this case, they are both equal to the same constant, namely \n$K_\\text{II}^A\/K_\\text{II}^S = A_\\text{II}^A\/A_\\text{II}^S = \\alpha$.\n\\begin{figure}[!htb]\n\\begin{center}\n\\vspace*{3mm}\n\\includegraphics[width=11cm]{fig06arx.eps}\n\\caption{\\footnotesize Ratios $K_\\text{I}^A\/K_\\text{I}^S$ and $A_\\text{I}^A\/A_\\text{I}^S$ as functions of\n$b\/a$ ($a$ is fixed equal to 1) for the case \n$\\nu_+ = 0.2$, $\\nu_-=0.3$ and: $\\eta = -0.99$ (green), $\\eta = -0.5$ (orange), $\\eta = 0$ (red), \n$\\eta = 0.5$ (blue), $\\eta = 0.99$ (black).}\n\\label{fig06}\n\\end{center}\n\\end{figure}\n\n\\section{Mode III} \n\\label{antiplane}\nThe Wiener-Hopf equation relating the Fourier transforms of the singular displacement and the corresponding traction is given by \n$$\n\\jump{0.15}{\\overline{U}_3}^+ = - \\frac{b+e}{|\\beta|} \\overline{\\Sigma}_{32}^-, \\quad \\beta \\in \\ensuremath{\\mathbb{R}},\n$$\nwhere $e = \\nu_+ \/ \\mu_+ + \\nu_- \/ \\mu_-$. 
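Before factorizing the Wiener--Hopf equation above, it is worth recalling the elementary splitting $|\beta| = \beta_+^{1\/2} \beta_-^{1\/2}$ of the kernel for real $\beta$. A quick numerical sanity check of this identity, assuming the usual convention $\beta_\pm^{1\/2} = \lim_{s \to 0^+} (\beta \pm i s)^{1\/2}$ with the principal branch of the square root:

```python
import cmath

def beta_plus_sqrt(beta, s=1e-12):
    # branch of beta^(1/2) analytic in the upper half-plane: (beta + i0)^(1/2)
    return cmath.sqrt(beta + 1j * s)

def beta_minus_sqrt(beta, s=1e-12):
    # branch of beta^(1/2) analytic in the lower half-plane: (beta - i0)^(1/2)
    return cmath.sqrt(beta - 1j * s)

# On the real line the product of the two branches recovers |beta|,
# including for negative beta, where the branches are +i|beta|^(1/2)
# and -i|beta|^(1/2), respectively.
for beta in (-5.0, -1.0, 0.5, 3.0):
    prod = beta_plus_sqrt(beta) * beta_minus_sqrt(beta)
    assert abs(prod - abs(beta)) < 1e-6
```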
This equation can be immediately factorized as follows\n$$\n\\beta_+^{1\/2} \\jump{0.15}{\\overline{U}_3}^+ = -\\frac{b+e}{\\beta_-^{1\/2}} \\overline{\\Sigma}_{32}^-.\n$$\nAssuming that $\\jump{0.15}{\\overline{U}_3}^+ \\sim \\beta_+^{-1\/2}$ and $\\overline{\\Sigma}_{32}^- \\sim \\beta_-^{1\/2}$, as $\\beta \\to \\infty$, $\\beta \\in \\ensuremath{\\mathbb{C}}^\\pm$, \nwe have\n$$\n\\beta_+^{1\/2} [\\overline{U}_3]^+ = -\\frac{b+e}{\\beta_-^{1\/2}} \\overline{\\Sigma}_{32}^- = O(1), \\quad\n\\beta \\to \\pm\\infty , \\quad \\beta \\in \\ensuremath{\\mathbb{R}},\n$$\nso that, from the Liouville theorem, it follows that both sides of the equation are equal to the same constant.\n\nThe weight functions for the Mode III problem are then given by\n$$\n\\jump{0.15}{\\overline{U}_3}^+(\\beta) = \\beta_+^{-1\/2}, \\quad \\text{and} \\quad \n\\overline{\\Sigma}_{32}^-(\\beta) = -\\frac{\\beta_-^{1\/2}}{b+e},\n$$\nwith corresponding inverse transforms\n\\begin{equation}\n\\label{jumpu3}\n\\jump{0.15}{U_3}(x_1) = \\left\\{\n\\begin{array}{ll}\n\\displaystyle \\frac{1-i}{\\sqrt{2\\pi}} x_1^{-1\/2}, & \\displaystyle x_1 > 0, \\\\[3mm]\n0, & x_1 < 0,\n\\end{array} \\right. 
\\qquad\n\\Sigma_{32}(x_1) = \\left\\{\n\\begin{array}{ll}\n0, & \\displaystyle x_1 > 0, \\\\[3mm]\n\\displaystyle \\frac{1-i}{2\\sqrt{2\\pi}(b+e)} (-x_1)^{-3\/2}, & x_1 < 0,\n\\end{array} \\right.\n\\end{equation}\n\nThe Betti identity reduces to \n$$ \n\\jump{0.15}{\\overline{U}_3}^+ \\overline{\\sigma}_{32}^+ - \\overline{\\Sigma}_{32}^- \\jump{0.15}{\\overline{u}_3}^- = \n- \\jump{0.15}{\\overline{U}_3}^+ \\langle \\overline{p}_3 \\rangle - \\langle\\overline{U}_3\\rangle^+ \\jump{0.15}{\\overline{p}_3}.\n$$\n\nThe asymptotics of physical fields are\n$$\n\\begin{array}{l}\n\\displaystyle\n\\sigma_{32} = \\frac{K_\\text{III}}{\\sqrt{2\\pi}} x_1^{-1\/2} + \\frac{A_\\text{III}}{\\sqrt{2\\pi}} x_1^{1\/2} + O(x_1^{3\/2}), \\quad x_1 \\to 0^+, \\\\[5mm]\n\\displaystyle\n\\jump{0.15}{u_3} = \\frac{2(b+e)K_\\text{III}}{\\sqrt{2\\pi}} (-x_1)^{1\/2} - \\frac{2(b+e)A_\\text{III}}{3\\sqrt{2\\pi}} (-x_1)^{3\/2} + O[(-x_1)^{5\/2}], \\quad x_1 \\to 0^-,\n\\end{array}\n$$\nwith corresponding Fourier transforms\n$$\n\\begin{array}{l}\n\\displaystyle\n\\overline{\\sigma}_{32}^+ = \\frac{(1+i)K_\\text{III}}{2} \\beta_+^{-1\/2} - \\frac{(1-i)A_\\text{III}}{4} \\beta_+^{-3\/2} + O(\\beta_+^{-5\/2}), \n\\quad \\beta \\to \\infty, \\quad \\beta \\in \\ensuremath{\\mathbb{C}}^+ \\\\[5mm]\n\\displaystyle\n\\jump{0.15}{\\overline{u}_3}^- = -\\frac{(1+i)(b+e)K_\\text{III}}{2} \\beta_-^{-3\/2} + \\frac{(1-i)(b+e)A_\\text{III}}{4} \\beta_-^{-5\/2} + O(\\beta_-^{-7\/2}), \n\\quad \\beta \\to \\infty, \\quad \\beta \\in \\ensuremath{\\mathbb{C}}^-.\n\\end{array}\n$$\n\nBy the same arguments used for Mode I and II, we obtain the formulae\n$$\n\\begin{array}{l}\n\\displaystyle K_\\text{III} = -(1+i) \\lim_{x_1' \\to 0^+} \\int_{-\\infty}^{0} \n\\{ \\jump{0.15}{U_3}(x_1'-x_1) \\langle p_3 \\rangle(x_1) + \\langle U_3 \\rangle(x_1'-x_1) \\jump{0.15}{p_3}(x_1) \\} dx_1, \\\\[3mm]\n\\displaystyle A_\\text{III} = -2(1+i) \\lim_{x_1' \\to 0^+} \\int_{-\\infty}^{0} \n\\left\\{ 
\\jump{0.15}{U_3}(x_1'-x_1) \\frac{d\\langle p_3 \\rangle(x_1)}{dx_1} + \\langle U_3 \\rangle(x_1'-x_1) \\frac{d\\jump{0.15}{p_3}(x_1)}{dx_1} \\right\\} dx_1.\n\\end{array}\n$$\n\nFrom the solution of the half-plane problem, we get the full representation of weight functions. For the lower half-plane:\n$$\n\\begin{array}{l}\n\\displaystyle \\overline{u}_3(\\beta,x_2) = \\frac{1}{\\mu_-|\\beta|} \\overline{\\Sigma}_{32}^- e^{|\\beta|x_2}, \\\\[3mm]\n\\displaystyle \\overline{\\sigma}_{31}(\\beta,x_2) = -i \\mathop{\\mathrm{sign}}(\\beta) \\overline{\\Sigma}_{32}^- e^{|\\beta|x_2}, \\\\[3mm]\n\\displaystyle \\overline{\\sigma}_{32}(\\beta,x_2) = \\overline{\\Sigma}_{32}^- e^{|\\beta|x_2}.\n\\end{array}\n$$\n\nFor the upper half-plane the equations are the same subject to replacing $|\\beta|$ with $-|\\beta|$ and $\\mu_-$ with $\\mu_+$, so that we obtain\n\\begin{equation}\n\\label{meanu3}\n\\jump{0.15}{\\overline{U}_3}^+ = -\\frac{\\mu_+ + \\mu_-}{\\mu_+ \\mu_-} \\frac{1}{|\\beta|}\\overline{\\Sigma}_{32}^-, \\quad\n\\langle\\overline{U}_3\\rangle^+ = \\frac{\\mu_+ - \\mu_-}{2\\mu_+\\mu_-} \\frac{1}{|\\beta|}\\overline{\\Sigma}_{32}^- = \\frac{\\eta}{2} \\jump{0.15}{\\overline{U}_3}^+,\n\\end{equation} \nwhere\n$$\n\\eta = \\frac{\\mu_- - \\mu_+}{\\mu_- + \\mu_+}.\n$$\n\nThe formulae for the evaluation of $K_\\text{III}$ and $A_\\text{III}$ are \n\\begin{equation}\n\\label{k3}\n\\begin{array}{l}\n\\displaystyle K_\\text{III} = - \\sqrt{\\frac{2}{\\pi}} \\int_{-\\infty}^{0} \\left\\{ \\langle p_3 \\rangle(x_1) + \\frac{\\eta}{2} \\jump{0.15}{p_3}(x_1) \\right\\} (-x_1)^{-1\/2} dx_1, \\\\[3mm]\n\\displaystyle A_\\text{III} = -2 \\sqrt{\\frac{2}{\\pi}} \\int_{-\\infty}^{0} \n\\left\\{ \\frac{d\\langle p_3 \\rangle(x_1)}{dx_1} + \\frac{\\eta}{2} \\frac{d\\jump{0.15}{p_3}(x_1)}{dx_1} \\right\\} (-x_1)^{-1\/2} dx_1.\n\\end{array}\n\\end{equation}\n\nThe full representation of weight functions after Fourier inversion is, for the lower half-plane, $x_2 < 
0$,\n$$\n\\begin{array}{l}\n\\displaystyle u_3(x_1,x_2) = -\\frac{\\mu_+}{2\\sqrt{\\pi}(\\mu_- + \\mu_+)}\\left\\{ (ix_1 - x_2)^{-1\/2} - i(-ix_1 - x_2)^{-1\/2} \\right\\}, \\\\[3mm]\n\\displaystyle \\sigma_{31}(x_1,x_2) = \\frac{\\mu_+\\mu_-}{4\\sqrt{\\pi}(\\mu_- + \\mu_+)}\\left\\{ i(ix_1 - x_2)^{-3\/2} - (-ix_1 - x_2)^{-3\/2} \\right\\}, \\\\[3mm]\n\\displaystyle \\sigma_{32}(x_1,x_2) = -\\frac{\\mu_+\\mu_-}{4\\sqrt{\\pi}(\\mu_- + \\mu_+)}\\left\\{ (ix_1 - x_2)^{-3\/2} - i(-ix_1 - x_2)^{-3\/2} \\right\\},\n\\end{array}\n$$\nand, for the upper half-plane, $x_2 > 0$,\n$$\n\\begin{array}{l}\n\\displaystyle u_3(x_1,x_2) = \\frac{\\mu_-}{2\\sqrt{\\pi}(\\mu_- + \\mu_+)}\\left\\{ (ix_1 + x_2)^{-1\/2} - i(-ix_1 + x_2)^{-1\/2} \\right\\}, \\\\[3mm]\n\\displaystyle \\sigma_{31}(x_1,x_2) = -\\frac{\\mu_+\\mu_-}{4\\sqrt{\\pi}(\\mu_- + \\mu_+)}\\left\\{ i(ix_1 + x_2)^{-3\/2} - (-ix_1 + x_2)^{-3\/2} \\right\\}, \\\\[3mm]\n\\displaystyle \\sigma_{32}(x_1,x_2) = -\\frac{\\mu_+\\mu_-}{4\\sqrt{\\pi}(\\mu_- + \\mu_+)}\\left\\{ (ix_1 + x_2)^{-3\/2} - i(-ix_1 + x_2)^{-3\/2} \\right\\}.\n\\end{array}\n$$\n\nWe now compare the result obtained for Mode III with that of Modes I and II. In particular, we notice from \\eq{k3} that the skew-symmetric part of $K_\\text{III}$ \nis proportional to the material parameter $\\eta$, whereas the skew-symmetric part of the complex stress intensity factor $K$ is proportional to the material parameter \n$\\alpha$, see \\eq{SIF} and \\eq{skew}. A comparison of these two parameters is then useful to understand whether the skew-symmetric loading is more relevant for Mode III \n($\\eta > \\alpha$) or for Modes I and II ($\\alpha > \\eta$). This comparison is shown in Fig. 
\\ref{alpha}.\n\\begin{figure}[!htb]\n\\begin{center}\n\\vspace*{3mm}\n\\includegraphics[width=6cm]{fig08arx.eps}\n\\caption{\\footnotesize The material parameter $\\alpha$ as a function of the material parameter $\\eta$ for: $\\nu_+ = 0$, $\\nu_-=0.5$ (green); \n$\\nu_+ = 0.2$, $\\nu_-=0.3$ (orange); $\\nu_+ = \\nu_-=0.3$ (red); \n$\\nu_+ = 0.5$, $\\nu_-=0$ (blue).}\n\\label{alpha}\n\\end{center}\n\\end{figure}\n\n\\vspace{60mm}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn networking or telecommunications the search for the minimum-delay\npath (that is the shortest path between two points) is always on.\nThe cost on each edge, that is the time taken for a signal to\ntravel between two adjacent nodes of the network, is often a\nfunction of real time. Hence the shortest path between any two nodes\nchanges with time. Of course one can run a shortest path algorithm\nevery time a signal has to be sent, but usually some prior knowledge of\nthe network graph is given in advance, such as the structure of the\nnetwork graph and the cost functions on each edge (with time as a\nvariable).\n\nHow can one benefit from this extra information? One plausible way\nis to preprocess the initial information and store the\npreprocessed information. Every time the rest of the input is given,\nusing the preprocessed information, one can solve the optimization\nproblem faster than solving the problem from scratch. Even if the\npreprocessing step is expensive one would benefit by saving precious\ntime whenever the optimal solution has to be computed. Also, if the\nsame preprocessed information is used multiple times then the total\namount of resources used will be less in the long run.\n\nSimilar phenomena can be observed in various other combinatorial\noptimization problems that arise in practice; that is, a part of the\ninput does not change with time and is known in advance. 
However, many\ntimes it is hard to make use of this extra information.\n\nIn this paper we consider only those problems where the whole input is\na weighted graph. We assume that the graph structure and some\nknowledge of how the weights on the edges are generated are known in\nadvance. We call this the {\\it function-weighted graph} -- it is a graph\nwhose edges are labeled with continuous real functions. When all the\nfunctions are univariate (and all have the same variable), the graph\nis called a {\\em parametric weighted graph}. In other words, the graph\nis $G=(V,E,W)$ where $W:E \\rightarrow \\mathcal{F}$ and $\\mathcal{F}$ is the\nspace of all real continuous functions with the variable $x$. If $G$\nis a parametric weighted graph, and $r \\in \\mathbb{R}$ is any real\nnumber, then $G(r)$ is the standard weighted graph where the weight of\nan edge $e$ is defined to be $(W(e))(r)$. We say that $G(r)$ is an\n{\\em instantiation} of $G$, since the variable $x$ in each function is\ninstantiated by the value $r$. Parametric weighted graphs are\ntherefore, a generic instance of infinitely many instances of weighted\ngraphs.\n\nThe idea is to use the generic instance $G$ to precompute some general\ngeneric information $I(G)$, such that for any given\ninstantiation $G(r)$, we will be able to\nuse the precomputed information $I(G)$ in order to speed up the time\nto solve the given problem on $G(r)$, faster than just solving the\nproblem on $G(r)$ from scratch. Let us make this notion more\nprecise.\n\nA {\\em parametric weighted graph algorithm} (or, for brevity, a {\\em\nparametric algorithm}) consists of two phases. A {\\em preprocessing\nphase} whose input is a parametric weighted graph $G$, and whose\noutput is a data structure (the advice) that is later used by the {\\em\ninstantiation phase}, where a specific value $r$ for the variable is\ngiven. The instantiation phase outputs the solution to the (standard)\nweighted graph problem on the weighted graph $G(r)$. 
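The instantiation step just described can be sketched in code (a minimal illustration with linear edge weights; the edge encoding and all names are ours, not the paper's): the preprocessing phase sees only the parametric graph $G$, while the instantiation phase receives a value $r$ and works on the concrete weighted graph $G(r)$.

```python
# Instantiating a parametric weighted graph G at x = r yields an ordinary
# weighted graph G(r).  Here each edge carries a linear weight a*x + b,
# stored as the pair (a, b).

def instantiate(edges, r):
    """edges: {(u, v): (a, b)} encoding the weight function a*x + b.
    Returns the concrete weighted graph G(r) as {(u, v): weight}."""
    return {e: a * r + b for e, (a, b) in edges.items()}

# Example: a triangle with linear edge weights.
edges = {(0, 1): (1.0, 2.0),   # weight x + 2
         (1, 2): (-1.0, 5.0),  # weight -x + 5
         (0, 2): (0.0, 4.0)}   # constant weight 4
G3 = instantiate(edges, 3.0)   # G(3): {(0,1): 5.0, (1,2): 2.0, (0,2): 4.0}
```

Any standard shortest-path routine can then be run on the instantiated dictionary; the point of the paper is to avoid doing exactly that from scratch.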
Naturally, the\ngoal is to have the running time of the instantiation phase\nsignificantly smaller than the running time of any algorithm that\nsolves the weighted graph problem from scratch, by taking advantage of\nthe advice constructed in the preprocessing phase. Parametric\nalgorithms are therefore evaluated by a pair of running times, the\n{\\em preprocessing time} and the {\\em instantiation time}.\n\n\nIn this paper we show that parametric algorithms are beneficial for\none of the most natural combinatorial optimization problems: the {\\em\nshortest path} problem in directed graphs. Recall that given a\ndirected real-weighted graph $G$, and two vertices $u,v$ of $G$, the\ndistance from $u$ to $v$, denoted by $\\delta(u,v)$, is the length of a\nshortest path from $u$ to $v$. The {\\em single pair} shortest path\nproblem seeks to compute $\\delta(u,v)$ and construct a shortest path\nfrom $u$ to $v$. Likewise, the {\\em single source} shortest path\nproblem seeks to compute the distances and shortest paths from a\ngiven vertex to all other vertices, and the {\\em all pairs} version\nseeks to compute distances and shortest paths between all ordered\npairs of vertices. In some of our algorithms we forgo the calculation\nof the path itself to achieve a shorter instantiation time. In all\nthose cases the algorithms can be easily modified to also output a\nshortest path, in which case their instantiation time is the sum of\nthe time it takes to calculate the distance and a time linear in the\nsize of the path to be output.\n\nOur first algorithm is a parametric algorithm for single source\nshortest path, in the case where the weights are {\\em linear}\nfunctions. That is, each edge $e$ is labeled with a function\n$a_ex+b_e$ where $a_e$ and $b_e$ are reals. Such linear\nparametrization has practical importance. Indeed, in many problems\nthe cost of an edge is composed from some constant term plus a term\nwhich is a factor of some commodity, whose cost varies (e.g. 
bank\ncommissions, taxi fares, vehicle maintenance costs, and so on). Our\nparametric algorithm has preprocessing time $\\tilde{O}(n^4)$ and\ninstantiation time $O(m+n\\log n)$ (throughout this paper $n$ and $m$\ndenote the number of vertices and edges of a graph, respectively). We\nnote that the fastest algorithm for the single source shortest path in\nreal weighted directed graphs requires $O(nm)$ time; the Bellman-Ford\nalgorithm \\cite{Be-1958}. The idea of our preprocessing stage is to\nprecompute some other linear functions, on the {\\em vertices}, so that\nfor every instantiation $r$, one can quickly determine whether $G(r)$\nhas a negative cycle and otherwise use these functions to quickly\nproduce a reweighing of the graph so as to obtain only nonnegative\nweights similar to the weights obtained by Johnson's algorithm\n\\cite{Jo-1977}. In other words, we {\\em avoid} the need to run the\nBellman-Ford algorithm in the instantiation phase. The\n$\\tilde{O}(n^4)$ time in the preprocessing phase comes from the use of Megiddo's\\cite{M-79} technique that we need in order to compute the linear vertex functions.\n\n\\begin{theorem}\\label{thm:neg} There exists a parametric algorithm for\nsingle source shortest path in graphs weighted by linear functions,\nwhose preprocessing time is $\\tilde{O}(n^4)$ and whose instantiation\ntime is $O(m + n\\log n)$.\n\\end{theorem}\n\nOur next algorithm applies to a more general setting where the weights\nare polynomials of degree at most $d$. Furthermore, in this case our\ngoal is to have the instantiation phase answering distance queries\nbetween any two vertices in {\\em sublinear} time. Notice first that if\nwe allow exponential preprocessing time, this goal can be easily\nachieved. This is not hard to see since the overall\npossible number of shortest paths (when $x$ varies over the reals) is\n$O(n!)$, or from Fredman's decision tree for shortest paths whose\nheight is $O(n^{2.5})$ \\cite{Fr-1976}. 
But can we settle for {\\em\nsub-exponential} preprocessing time and still be able to have\nsublinear instantiation time? Our next result achieves this goal.\n\n\\begin{theorem}\\label{thm:gen} There exists a parametric algorithm for\nthe single pair shortest path problem in graphs weighted by degree $d$\npolynomials, whose preprocessing time is $O(n^{(O(1) + \\log f(d))\\log\nn})$ and instantiation time $O(\\log^2 n)$, where $f(d)$ is the time\nrequired to compute the intersection points of two degree $d$\npolynomials. The size of the advice that the preprocessing algorithm\nproduces is $O(n^{(O(1) + \\log d)\\log n})$.\n\\end{theorem}\n\nThe above result falls in the subject of sensitivity analysis where\none is interested in studying the effect on the optimal solution as the\nvalue of the parameter changes. We give a linear-time (linear in the\noutput size) algorithm that computes the breaking points.\n\nThe practical and theoretical importance of shortest path problems\nlead several researchers to consider fast algorithms that settle for\nan approximate shortest path. For the general case (of real weighted\ndigraphs) most of the algorithms guarantee an {\\em $\\alpha$-stretch}\nfactor. Namely, they compute a path whose length is at most $\\alpha\n\\delta(u,v)$. We mention here the $(1+\\epsilon)$-stretch algorithm\nof Zwick for the all-pairs shortest path problem, that runs in\n$\\tilde{O}(n^\\omega)$ time when the weights are non-negative reals\n\\cite{Zw-2002}. Here $\\omega < 2.376$ is the matrix multiplication\nexponent \\cite{CoWi-1990}.\n\nHere we consider probabilistic additive-approximation algorithms, or\n{\\em surplus} algorithms, that work for linear weights which may have\npositive and negative values (as long as there is no negative weight\ncycle). We say that a shortest path algorithm has an\n$\\epsilon$-surplus if it computes paths whose lengths are at most\n$\\delta(u,v)+\\epsilon$. 
We are unaware of any truly subcubic algorithm\nthat guarantees an $\\epsilon$-surplus approximation, and which\noutperforms the fastest general all-pairs shortest path algorithm\n\\cite{Ch-2007}.\n\nIn the linear-parametric setting, it is easy to obtain\n$\\epsilon$-surplus parametric algorithms whose preprocessing time is\n$O(n^4)$ time, and whose instantiation time, for any ordered pair of\nqueried vertices $u,v$ is constant. It is assumed instantiations are\ntaken from some interval $I$ whose length is independent of $n$.\nIndeed, we can partition $I$ into $O(n)$ subintervals $I_1,I_2,\\ldots$\nof size $O(1\/n)$ each, and solve, in cubic time (say, using\n\\cite{Fl-1962}), the exact all-pairs solution for any instantiation\n$r$ that is an endpoint of two consecutive intervals. Then, given any\n$r \\in I_j=(a_j,b_j)$, we simply look at the solution for $b_j$ and\nnotice that we are (additively) off from the right answer only by\n$O(1)$. Standard scaling arguments can make the surplus smaller than\n$\\epsilon$. But do we really need to spend $O(n^4)$ time for\npreprocessing? In other words, can we invest (significantly) less than\n$O(n^4)$ time and still be able to answer instantiated distance\nqueries in $O(1)$ time? The following result gives a positive answer\nto this question.\n\n\\begin{theorem}\\label{thm:approx} Let $\\epsilon > 0$, let\n$[\\alpha,\\beta]$ be any fixed interval and let $\\gamma$ be a fixed\nconstant. Suppose $G$ is a linear-parametric graph that has no\nnegative weight cycles in the interval $[\\alpha,\\beta]$, and for which\nevery edge weight $a_e+xb_e$ satisfies $|a_e|\\leq\\gamma$. There is a\nparametric randomized algorithm for the $\\epsilon$-surplus shortest\npath problem, whose preprocessing time is $\\tilde{O}(n^{3.5})$ and\nwhose instantiation time is $O(1)$ for a single pair, and hence\n$O(n^2)$ for all pairs.\n\\end{theorem} We note that this algorithm works in the restricted\naddition-comparison model. 
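The naive table-based scheme described before the theorem (exact solutions precomputed on an evenly spaced grid, constant-time lookup with a small additive surplus) might be coded as follows. This is our own sketch: `dist_at` stands in for any exact all-pairs solver, and the rounding-up convention mirrors "look at the solution for $b_j$" above.

```python
import math

def build_table(dist_at, alpha, beta, n):
    """dist_at(r) -> exact all-pairs distances {(u, v): d} at instantiation r.
    Precomputes solutions at the n+1 evenly spaced points of [alpha, beta]."""
    step = (beta - alpha) / n
    return [dist_at(alpha + i * step) for i in range(n + 1)], step

def query(table, step, alpha, u, v, r):
    """O(1) lookup: round r up to the next grid point.  The answer differs
    from the true distance by an additive term proportional to step."""
    i = min(len(table) - 1, math.ceil((r - alpha) / step))
    return table[i][(u, v)]
```

Choosing $n$ grid points of width $O(1/n)$ and scaling as in the text drives the surplus below any fixed $\epsilon$; the preprocessing cost is $n+1$ exact solves.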
We also note that given an ordered pair\n$u,v$ and $r \\in [\\alpha,\\beta]$, the algorithm outputs, in $O(1)$\ntime, a weight of an actual path from $u$ to $v$ in $G(r)$, and points\nto a linked list representing that path. Naturally, if one wants to\noutput the vertices of this path then the time for this is linear in\nthe length of the path.\n\nThe rest of this paper is organized as follows. The next subsection\nshortly surveys related research on parametric shortest path\nproblems. In the three sections following it we prove Theorems\n\\ref{thm:neg}, \\ref{thm:gen} and \\ref{thm:approx}.\nSection~\\ref{sec:conclusion} contains some concluding remarks and open\nproblems.\n\n\\subsection{Related research} Several researchers have considered parametric\nversions of combinatorial optimization problems. In particular\nfunction-weighted graphs (under different names) have been\nextensively studied in the subject of sensitivity analysis (see\n\\cite{vHKRW89}) where they study the effect on the optimal solution\nas the parameter value changes.\n\nMurty~\\cite{Mur80} showed that for parametric linear programming\nproblems the optimal solution can change exponentially many times\n(exponential in the number of variables). Subsequently, Carstensen\n\\cite{Ca-1983} has shown that there are constructions for which\nthe number of shortest path changes while $x$ varies over the reals\nis $n^{\\Omega(\\log n)}$. In fact, in her example each linear\nfunction is of the form $a_e+xb_e$ and both $a_e$ and $b_e$ are\npositive, and $x$ varies in $[0,\\infty]$. Carstensen also proved\nthat this is tight. In other words, for any linear-parametric graph\nthe number of changes in the shortest paths is $n^{O(\\log n)}$. A\nsimpler proof was obtained by Nikolova et al. \\cite{Ni-Mi-2006},\nthat also supply an $n^{O(\\log n)}$ time algorithm to compute the\npath breakpoints. 
Their method, however, does not apply to the case\nwhere the functions are not linear, such as in the case of degree\n$d$ polynomials. Gusfield~\\cite{gus} also gave a proof for the upper bound\nof the number of breakpoints in the linear function version of the parametric shortest\npath problem, in addition to studying a number of other parametric\nproblems.\n\n\nKarp and Orlin \\cite{KaOr-1981}, and, later, Young, Tarjan, and\nOrlin~\\cite{Yo-Or-1991} considered a special case of the\nlinear-parametric shortest path problem. In their case, each edge\nweight $e$ is either some fixed constant $b_e$ or is of the form\n$b_e-x$. It is not too difficult to prove that for any given vertex\n$v$, when $x$ varies from $-\\infty$ to the largest $x_0$ for which\n$G(x_0)$ has no negative weight cycle (possibly $x_0=\\infty$), then\nthere are at most $O(n^2)$ distinct shortest path trees from $v$ to\nall other vertices. Namely, for each $r \\in [-\\infty,x_0]$ one of\nthe trees in this family is a solution for single-source shortest\npath in $G(r)$. The results in \\cite{KaOr-1981,Yo-Or-1991} cleverly\nand compactly compute all these trees, and the latter does it in\n$O(nm+n^2\\log n)$ time.\n\n\n\\section{Proof of Theorem~\\ref{thm:neg}}\n\nThe proof of Theorem~\\ref{thm:neg} follows from the following two\nlemmas.\n\n\\begin{lemma}\\label{lem:negcyl} Given a linear-weighted graph\n$G=(V,E,W)$, there exist $\\alpha, \\beta \\in \\mathbb{R}\\cup \\{-\\infty\\}\n\\cup \\{+\\infty\\}$ such that $G(r)$ has no negative cycles if and only\nif $\\alpha \\leq r \\leq \\beta$. Moreover $\\alpha$ and $\\beta$ can be\nfound in $\\tilde{O}(n^4)$ time.\n\\end{lemma}\n\n\\begin{lemma}\\label{lem:reweight} Let $G=(V,E,W)$ be a linear-weighted\ngraph. 
Also let $\\alpha, \\beta \\in \\mathbb{R}\\cup \\{-\\infty\\} \\cup\n\\{+\\infty\\}$ be such that at least one of them is finite and for all\n$\\alpha \\leq r \\leq \\beta$ the graph $G(r)$ has no negative cycle.\nThen for every vertex $v \\in V$ there exists a linear function\n$g^{[\\alpha,\\beta]}_v$ such that if the new weight function $W'$ is\ngiven by\n$$\nW'\\left((u,v)\\right) = W\\left((u,v)\\right) + g^{[\\alpha, \\beta]}_u -\ng^{[\\alpha, \\beta]}_v\n$$\nthen the new linear-weighted graph $G'=(V,E,W')$ has the property that\nfor any real $\\alpha \\leq r \\leq \\beta$ all the edges in $G'(r)$ are\nnon-negative. Moreover the functions $g^{[\\alpha, \\beta]}_v$ for all\n$v \\in V$ can be found in $O(mn)$ time.\n\\end{lemma}\n\nSo given a linear-weighted graph $G$, we first use Lemma\n\\ref{lem:negcyl} to compute $\\alpha$ and $\\beta$. If at least one of\n$\\alpha$ and $\\beta$ is finite then using Lemma~\\ref{lem:reweight} we\ncompute the $n$ linear functions $g^{[\\alpha,\\beta]}_v$, one for each\n$v \\in V$. If $\\alpha = -\\infty$ and $\\beta = +\\infty$, then using\nLemma~\\ref{lem:reweight} we compute the $2n$ linear functions\n$g^{[\\alpha, 0]}_v$ and $g^{[0,\\beta]}_v$. These linear functions will\nbe the advice that the preprocessing algorithm produces. The above\nlemmas guarantee us that the advice can be computed in time\n$\\tilde{O}(n^4)$, that is, the preprocessing time is $\\tilde{O}(n^4)$.\n\nNow when computing the single source shortest path problem from vertex\n$v$ for the graph $G(r)$ our algorithm proceeds as follows:\n\\begin{enumerate}\n\\item If $r<\\alpha$ or $r>\\beta$ output ``$-\\infty$'' as there exists\na negative cycle (such instances are considered invalid).\n\\item If $\\alpha\\leq r\\leq \\beta$ and at least one of $\\alpha$ or\n$\\beta$ is finite then compute $g_u(r)$ for all $u \\in V$. Use these\nto re-weight the edges in the graph as in Johnson's algorithm\n\\cite{Jo-1977}. 
If $\\alpha = -\\infty$ and $\\beta = +\\infty$ then if\n$r\\leq 0$ compute $g^{[\\alpha,0]}_u(r)$ for all $u \\in V$ and if\n$r\\geq 0$ compute $g^{[0,\\beta]}_u(r)$ for all $u \\in V$. Notice that\nafter the reweighing we have an instance of $G'(r)$.\n\\item Use Dijkstra's algorithm \\cite{Di-1959} to solve the single\nsource shortest path problem in $G'(r)$. Dijkstra's algorithm applies\nsince $G'(r)$ has no negative weight edges. The shortest paths tree\nreturned by Dijkstra's algorithm applied to $G'(r)$ is also the\nshortest paths tree in $G(r)$. As in Johnson's algorithm, we use the\nresults $d'(v,u)$ of $G'(r)$ to deduce $d(v,u)$ in $G(r)$ since, by\nLemma \\ref{lem:reweight}, $d(v,u)=d'(v,u)-g_v(r)+g_u(r)$.\n\\end{enumerate} The running time of the instantiation phase is\ndominated by the running time of Dijkstra's algorithm which is $O(m +\nn\\log n)$ \\cite{FrTa-1987}.\n\n\\subsection{Proof of Lemma~\\ref{lem:negcyl}}\n\nSince the weights on the edges of the graph $G$ are linear functions,\nwe have that the weight of any directed cycle in the graph is also a\nlinear function. Let $C_1, C_2, \\ldots, C_T$ be the set of all\ndirected cycles in the graph. The linear weight function of a cycle\n$C_i$ will be denoted by $\\mbox{wt}(C_i)$. If $\\mbox{wt}(C_i)$ is not the\nconstant function, then let $\\gamma_i$ be the real number for which\nthe linear equation $\\mbox{wt}(C_i)$ evaluates to $0$.\n\n\\noindent Let $\\alpha$ and $\\beta$ be defined as follows:\n$$\n\\alpha = \\max_i \\left\\{ \\gamma_i \\mid \\mbox{ $\\mbox{wt}(C_i)$ has a positive\nslope}\\right\\}.\n$$\n$$\n\\beta = \\min_i \\left\\{ \\gamma_i \\mid \\mbox{ $\\mbox{wt}(C_i)$ has a negative\nslope}\\right\\}.\n$$\nNote that if $\\mbox{wt}(C_i)$ has a positive slope then\n$\\gamma_i = \\min_x \\left\\{ \\mbox{wt}(C_i)(x) \\geq 0\\right\\}.$\nThus for all $x \\geq \\gamma_i$ the value of $\\mbox{wt}(C_i)$ evaluated at\n$x$ is non-negative. 
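For a small, explicitly given family of cycle weight functions, the definitions of $\alpha$ and $\beta$ can be evaluated directly. The following is our own illustration of those definitions only: enumerating all cycles is exponential in general, which is why the actual proof resorts to a one-variable LP solved with Megiddo's technique.

```python
import math

def feasible_interval(cycle_weights):
    """cycle_weights: list of (s, c) with wt(x) = s*x + c, one pair per
    directed cycle.  Returns (alpha, beta) such that no cycle is negative
    exactly on [alpha, beta], or None if every instantiation is infeasible."""
    alpha, beta = -math.inf, math.inf
    for s, c in cycle_weights:
        if s > 0:
            alpha = max(alpha, -c / s)   # root gamma_i of a positive-slope cycle
        elif s < 0:
            beta = min(beta, -c / s)     # root gamma_i of a negative-slope cycle
        elif c < 0:
            return None                  # constant negative cycle: always bad
    return (alpha, beta) if alpha <= beta else None

# Two cycles: wt_1(x) = 2x - 4 (root 2) and wt_2(x) = -x + 5 (root 5)
print(feasible_interval([(2, -4), (-1, 5)]))  # (2.0, 5.0)
```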
So by definition for all $x \\geq \\alpha$ the\nvalue of the $\\mbox{wt}(C_i)$ is non-negative if the slope of $\\mbox{wt}(C_i)$ is\npositive, and for any $x<\\alpha$ there exists a cycle $C_i$ such that\n$\\mbox{wt}(C_i)$ has positive slope and $\\mbox{wt}(C_i)(x)$ is negative.\nSimilarly, for all $x \\leq \\beta$ the value of the $\\mbox{wt}(C_i)$ is\nnon-negative if the slope of $\\mbox{wt}(C_i)$ is negative and for any\n$x>\\beta$ there exists a cycle $C_i$ such that $\\mbox{wt}(C_i)$ has negative\nslope and $\\mbox{wt}(C_i)(x)$ is negative.\n\nThis proves the existence of $\\alpha$ and $\\beta$. There are,\nhowever, two bad cases that we wish to exclude. Notice that if\n$\\alpha > \\beta$ this means that for any evaluation at $x$, the\nresulting graph has a negative weight cycle. The same holds if there\nis some cycle for which $\\mbox{wt}(C_i)$ is constant and negative. Let us\nnow show how $\\alpha$ and $\\beta$ can be efficiently computed whenever\nthese bad cases do not hold. Indeed, $\\alpha$ is the solution to the\nfollowing Linear Program (LP), which has a feasible solution if and\nonly if the bad cases do not hold.\n\\begin{center} \\framebox{\n\\parbox[s]{3.0in}{\\\n\n{\\bf Minimize $x$} under the constraints\n\n\\bigskip {\\bf $\\forall i$, $\\mbox{wt}(C_i)(x) \\geq 0$}.\n\n\\bigskip }}\n\\end{center} This is an LP on one variable, but the number of\nconstraints can be exponential.\nUsing Megiddo's\\cite{M-79}\ntechnique for finding the minimum ratio cycles we can solve the\nlinear-program in $O(n^4\\log n)$ steps.\n\n\\subsection{Proof of Lemma~\\ref{lem:reweight}}\n\nLet $\\alpha$ and $\\beta$ be the two numbers such that for all\n$\\alpha\\leq r \\leq \\beta$ the graph $G(r)$ has no negative cycles and\nat least one of $\\alpha$ and $\\beta$ is finite.\n\nFirst let us consider the case when both $\\alpha$ and $\\beta$ are\nfinite. 
Recall that, given any number $r$, Johnson's algorithm\nassociates a weight function $h^{r}:V\\rightarrow \\mathbb{R}$ such\nthat, for any edge $(u,v)\\in E$,\n$$\nW_{(u,v)}(r) + h^r(u) - h^r(v) \\geq 0.\n$$\n(Johnson's algorithm computes this weight function by running the\nBellman-Ford algorithm over $G(r)$). Define the weight function\n$g^{[\\alpha,\\beta]}_v$ as\n$$\ng^{[\\alpha, \\beta]}_v(x) = \\left(\\frac{h^{\\beta}(v) -\nh^{\\alpha}(v)}{\\beta - \\alpha}\\right)x + h^{\\alpha}(v) -\n\\left(\\frac{h^{\\beta}(v) - h^{\\alpha}(v)}{\\beta -\n\\alpha}\\right)\\alpha~.\n$$\nThis is actually the equation of the line joining $(\\alpha,\nh^{\\alpha}(v))$ and $(\\beta, h^{\\beta}(v))$ in $\\mathbb{R}^2$.\n\n\\noindent Now we need to prove that for every $\\alpha \\leq r\\leq\n\\beta$ and for every $(u,v)\\in V$,\n$$\nW_{(u,v)}(r) + g^{[\\alpha, \\beta]}_{u}(r) - g^{[\\alpha,\\beta]}_{v}(r)\n\\geq 0~.\n$$\nSince $\\alpha \\leq r \\leq \\beta$, one can write $r = (1-\\delta)\\alpha\n+ \\delta\\beta$ where $1 \\geq \\delta \\geq 0$. Then for all $v\\in V$,\n$$\ng^{[\\alpha, \\beta]}_v(r) = (1 - \\delta) h^{\\alpha}(v) + \\delta\nh^{\\beta}(v)~.\n$$\nSince $W_{(u,v)}(r)$ is a linear function we can write\n$$\nW_{(u,v)}(r) = (1-\\delta)W_{(u,v)}(\\alpha) + \\delta W_{(u,v)}(\\beta)~.\n$$\nSo after re-weighting the weight of the edge $(u,v)$ is\n$$\n(1-\\delta)W_{(u,v)}(\\alpha) + \\delta W_{(u,v)}(\\beta) +\n(1-\\delta)h^{\\alpha}(u) + \\delta h^{\\beta}(u) - (1 - \\delta)\nh^{\\alpha}(v) - \\delta h^{\\beta}(v)~.\n$$\nNow this is non-negative as by the definition of $h^{\\beta}$ and\n$h^{\\alpha}$ we know that both $W_{(u,v)}(\\beta) + h^{\\beta}(u) -\nh^{\\beta}(v)$ and $W_{(u,v)}(\\alpha) + h^{\\alpha}(u) - h^{\\alpha}(v)$\nare non-negative.\n\nWe now consider the case when one of $\\alpha$ or $\\beta$ is not\nfinite. We will prove it for the case where $\\beta = +\\infty$. The\ncase $\\alpha = -\\infty$ follows similarly. 
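The finite-case construction above amounts to linearly interpolating the two Johnson potentials. A sketch (ours, for illustration: the dictionaries `h_alpha` and `h_beta` hold potentials valid at $x=\alpha$ and $x=\beta$ respectively):

```python
def g(v, r, alpha, beta, h_alpha, h_beta):
    """The line through (alpha, h_alpha[v]) and (beta, h_beta[v]),
    evaluated at r = (1 - delta)*alpha + delta*beta."""
    delta = (r - alpha) / (beta - alpha)
    return (1 - delta) * h_alpha[v] + delta * h_beta[v]

def reweighted(w_uv_at_r, u, v, r, alpha, beta, h_alpha, h_beta):
    """Cost of edge (u, v) in G'(r): the original cost plus g_u(r) - g_v(r).
    Non-negative whenever the endpoint potentials are valid and
    alpha <= r <= beta, by the convexity argument in the proof."""
    return (w_uv_at_r
            + g(u, r, alpha, beta, h_alpha, h_beta)
            - g(v, r, alpha, beta, h_alpha, h_beta))
```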
Consider the simple\nweighted graph $G_{\\infty}=(V,E,W_{\\infty})$ where the weight function\n$W_{\\infty}$ is defined as: if the weight of the edge $e$ is $W(e) =\na_ex + b_e$ then $W_{\\infty}(e) = a_e$.\n\nWe run the Johnson's algorithm on the graph $G_{\\infty}$. Let\n$h^{\\infty}(v)$ denote the weight that Johnson's algorithm associates\nwith the vertex $v$. Then define the weight function\n$g^{[\\alpha,\\infty]}_v$ as\n$$\ng^{[\\alpha,\\infty]}_v(x) = h^{\\alpha}(v) + (x-\\alpha)h^{\\infty}(v)~.\n$$\nWe need to prove that for every $\\alpha \\leq r$ and for every\n$(u,v)\\in V$,\n$$\nW_{(u,v)}(r) + g^{[\\alpha,\\infty]}_{u}(r) - g^{[\\alpha,\n\\infty]}_{v}(r) = W_{(u,v)}(r) + h^{\\alpha}(u) + (r-\\alpha)\nh^{\\infty}(u) - h^{\\alpha}(v) - (r-\\alpha) h^{\\infty}(v) \\geq 0~.\n$$\nLet $r = \\alpha + \\delta$ where $\\delta \\geq 0$. By the linearity of\n$W$ we can write $W_{(u,v)}(r) = W_{(u,v)}(\\alpha) + \\delta\na_{(u,v)}$, where $W_{(u,v)}(r) = a_{(u,v)}r + b_{(u,v)}$. So the\nabove inequality can be restated as\n$$\nW_{(u,v)}(\\alpha) + \\delta a_{(u,v)} + h^{\\alpha}(u) + \\delta\nh^{\\infty}(u) - h^{\\alpha}(v) - \\delta h^{\\infty}(v) \\geq 0~.\n$$\nThis now follows from the fact that both $W_{(u,v)}(\\alpha) +\nh^{\\alpha}(u) - h^{\\alpha}(v)$ and $a_{(u,v)} + h^{\\infty}(u) -\nh^{\\infty}(v)$ are non-negative.\n\nSince the running time of the reweighing part of Johnson's algorithm\ntakes $O(mn)$ time, the overall running time of computing the\nfunctions $g^{[\\alpha,\\beta]}_v$ is $O(mn)$, as claimed.\n\n\\section{Proof of Theorem~\\ref{thm:gen}}\n\nIn this section we construct a parametric algorithm that computes the\ndistance $\\delta(u,v)$ between a given pair of vertices. 
If one is\ninterested in the actual path realizing this distance, then it can be\nfound with some extra book-keeping that we omit in the proof.\n\nThe preprocessing algorithm will output the following advice: for any\npair $(u,v) \\in V\\times V$ the advice consists of a set of $t+2$ increasing\nreal numbers $-\\infty = b_0 < b_1 < \\dots < b_{t} < b_{t+1} = \\infty$\nand an ordered set of degree-$d$ polynomials $p_0, p_1, \\dots, p_t$,\nsuch that for all $b_i \\leq r \\leq b_{i+1}$ the weight of a shortest\npath in $G(r)$ from $u$ to $v$ is $p_i(r)$. Note that each $p_i$\ncorresponds to the weight of a path from $u$ to $v$. Thus if we are\ninterested in computing the exact path then we need to keep track of\nthe path corresponding to each $p_i$.\n\nGiven $r$, the instantiation algorithm has to find the $i$ such that\n$b_i \\leq r \\leq b_{i+1}$ and then output $p_i(r)$. So the\ninstantiation algorithm runs in time $O(\\log t)$. To prove our result we need to\nshow that for any $(u,v) \\in V\\times V$ we can find the advice in time\n$O(f(d) n)^{\\log n}$. In particular this will prove that $t =\nO(dn)^{\\log n}$ and hence the result will follow.\n\n\\begin{definition} A \\textit{minBase} is a sequence of increasing real\nnumbers $-\\infty = b_0 < b_1 < \\dots < b_t < b_{t+1} = \\infty$ and an\nordered set of degree-$d$ polynomials $p_0, p_1, \\dots, p_t$, such\nthat for all $b_i \\leq r \\leq b_{i+1}$ and all $j \\neq i$, $p_i(r)\n\\leq p_j(r)$.\n\\end{definition} We call the sequence of real numbers the {\\em\nbreaks}. We call each interval $[b_i, b_{i+1}]$ the $i$-th interval of\nthe minBase and the polynomial $p_i$ the $i$-th polynomial. 
The {\\em\nsize} of the minBase is $t$.\n\nThe final advice that the preprocessing algorithm produces is a\nminBase for every pair $(u,v)\\in V\\times V$ where the $i$-th\npolynomial has the property that $p_i(r)$ is the distance from $u$ to\n$v$ in $G(r)$ for each $b_i \\le r \\le b_{i+1}$.\n\n\\begin{definition} A $minBase^{\\ell}(u,v)$ is a minBase corresponding\nto the ordered pair $u,v$, where the $i$-th polynomial $p_i$ has the\nproperty that for $r \\in [b_i,b_{i+1}]$, $p_i(r)$ is the length of a\nshortest path from $u$ to $v$ in $G(r)$, that is taken among all paths\nthat use at most $2^{\\ell}$ edges.\n\n\\noindent A $minBase^{\\ell}(u,w,v)$ is a minBase corresponding to the\nordered triple $(u,w, v)$ where the $i$-th polynomial $p_i$ has the\nproperty that for each $r \\in [b_i,b_{i+1}]$, $p_i(r)$ is the sum of\nthe lengths of a shortest path from $u$ to $w$ in $G(r)$, among all\npaths that use at most $2^{\\ell}$ edges, and a shortest path from $w$\nto $v$ in $G(r)$, among all paths that use at most $2^{\\ell}$ edges.\n\\end{definition} Note that in both of the above definitions some of\nthe polynomials can be $+\\infty$ or $-\\infty$.\n\n\\begin{definition} If $B_1$ and $B_2$ are two minBases (not\nnecessarily of the same size), with polynomials $p^1_i$ and $p^2_j$,\nwe say that another minBase with breaks $b'_k$ and polynomials $p'_k$\nis $\\min(B_1 + B_2)$ if the following holds.\n\\begin{enumerate}\n\\item For all $k$ there exist $i,j$ such that $p'_k = p^1_i +p^2_j$,\nand\n\\item For $b'_k \\leq r \\leq b'_{k+1}$ and for all $i,j$ we have\n$p'_k(r) \\le p^1_i(r) + p^2_j(r)$.\n\\end{enumerate}\n\\end{definition}\n\n\\begin{definition} If $B_1,B_2,\\ldots,B_s$ are $s$ minBases (not\nnecessarily of the same size), with polynomials $p^1_{i_1}, p^2_{i_2},\n\\ldots, p^s_{i_s}$, another minBase with breaks $b'_k$ and polynomials\n$p'_k$ is $\\min\\{B_1,B_2,\\ldots,B_s\\}$ if the following holds.\n\\begin{enumerate}\n\\item For all $k$ there exist $q$ such 
that $p'_k = p^q_{i_q}$, and\n\\item For $b'_k \\leq r \\leq b'_{k+1}$ and for all $1 \\le q \\le s$ and\nall $i_q$, we have $p'_k(r) \\le p^q_{i_q}(r)$.\n\\end{enumerate}\n\\end{definition}\n\nNote that using the above definition we can write the following two\nequations:\n\\begin{equation}\\label{eq:minuwv} minBase^{\\ell+1}(u,v) = \\min_{w\\in\nV}\\left\\{minBase^{\\ell}(u,w,v)\\right\\}~.\n\\end{equation}\n\n\\begin{equation}\\label{eq:minuwwv} minBase^{\\ell}(u,w,v) =\nmin\\left(minBase^{\\ell}(u,w) + minBase^{\\ell}(w,v)\\right)~.\n\\end{equation}\n\nThe following claim will prove the result. The proof of the claim is\nomitted due to lack of space.\n\n\\begin{claim}\\label{cl:sizesminBase} If $B_1$ and $B_2$ are two\nminBases of sizes $t_1$ and $t_2$ respectively, then\n\\begin{enumerate}\n\\item[(a)] $\\min(B_1+B_2)$ can be computed from $B_1$ and $B_2$ in\ntime $O(t_1 + t_2)$.\n\\item[(b)] $\\min\\{B_1, B_2\\}$ can be computed from $B_1$ and $B_2$ in\ntime $O(f(d)(t_1 + t_2))$, where $f(d)$ is the time required to\ncompute the intersection points of two degree-$d$ polynomials. The\nsize of $\\min\\{B_1, B_2\\}$ is $O(d(t_1 + t_2))$.\n\\end{enumerate}\n\\end{claim} In order to compute $\\min\\{B_1,\\ldots,B_s\\}$ one\nrecursively computes $X=\\min\\{B_1,\\ldots,B_{s\/2}\\}$ and\n$Y=\\min\\{B_{s\/2+1},\\ldots,B_s\\}$ and then takes $\\min\\{X,Y\\}$.\n\nIf there are no negative cycles, then the advice that the\ninstantiation algorithm needs from the preprocessing algorithm\nconsists of $minBase^{\\lceil\\log n\\rceil}(u,v)$. To deal with negative\ncycles, both $minBase^{\\lceil\\log n\\rceil}(u,v)$ and\n$minBase^{\\lceil\\log n\\rceil+1}(u,v)$ are produced, and the\ninstantiation algorithm compares them. 
If they are not equal, then the\ncorrect output is $-\\infty$.\n\nAlso note that $minBase^{0}(u,v)$ is the trivial minBase where the\nbreaks are $-\\infty$ and $+\\infty$ and the polynomial is the weight\n$W((u,v))$ associated with the edge $(u,v)$ if $(u,v) \\in E$ and\n$+\\infty$ otherwise.\n\nIf the size of $minBase^{\\ell}(u,v)$ is $s_{\\ell}$, then by\n(\\ref{eq:minuwv}), (\\ref{eq:minuwwv}), and by\nClaim~\\ref{cl:sizesminBase} the time to compute\n$minBase^{\\ell+1}(u,v)$ is $O(f(d))^{\\log n}s_{\\ell}$ and the size of\n$minBase^{\\ell+1}(u,v)$ is $O(d)^{\\log n}s_{\\ell}$. Thus one can\ncompute the advice for $u$ and $v$ in time\n$$\n(O(f(d))^{\\log n})^{\\log n} = O(n^{(O(1) + \\log f(d))\\log n})~,\n$$\nand the length of the advice string is $O(n^{(O(1) + \\log d)\\log n})$.\n\n\n\n\\section{Proof of Theorem \\ref{thm:approx}}\n\nGiven the linear-weighted graph $G=(V,E,W)$, our preprocessing phase\nbegins by verifying that for all $r \\in [\\alpha,\\beta]$, $G(r)$ has no\nnegative weight cycles. From the proof of Lemma \\ref{lem:reweight} we\nknow that this holds if and only if both $G(\\alpha)$ and $G(\\beta)$\nhave no negative weight cycles. This, in turn, can be verified in\n$O(mn)$ time using the Bellman-Ford algorithm. We may now assume that\n$G(r)$ has no negative cycles for any $r \\in [\\alpha,\\beta]$.\nMoreover, since our preprocessing algorithm will solve a large set of\nshortest path problems, each of them on a specific instantiation of\n$G$, we will first compute the reweighing functions\n$g^{[\\alpha,\\beta]}_v$ of Lemma \\ref{lem:reweight} which will enable\nus to apply, in some cases, algorithms that assume nonnegative edge\nweights. Recall that by Lemma \\ref{lem:reweight}, the functions\n$g^{[\\alpha,\\beta]}_v$ for all $v \\in V$ are computed in $O(mn)$ time.\n\nThe advice constructed by the preprocessing phase is composed of two\ndistinct parts, which we respectively call the {\\em crude-short}\nadvice and the {\\em refined-long} advice. 
We now describe each of\nthem.\n\nFor each edge $e \\in E$, the weight is a linear function\n$w_e=a_ex+b_e$. Set $K=8(\\beta-\\alpha)\\max_e|a_e|$. Let $N_0 = \\lceil\nK\\sqrt{n}\\ln n\/\\epsilon \\rceil$ and let $N_1= \\lceil Kn\/\\epsilon\n\\rceil$. We define $N_0+1$ and $N_1+1$ points in\n$[\\alpha,\\beta]$ and solve certain variants of shortest path problems\ninstantiated at these points.\n\nConsider first the case of splitting $[\\alpha,\\beta]$ into $N_0$\nintervals. Let $\\rho_0 = (\\beta-\\alpha)\/N_0$ and consider the points\n$\\alpha +i\\rho_0$ for $i=0,\\ldots,N_0$. The crude-short part of the\npreprocessing algorithm solves $N_0+1$ {\\em limited} all-pairs\nshortest path problems in $G(\\alpha +i\\rho_0)$ for $i=0,\\ldots,N_0$.\nSet $t=4\\sqrt{n}\\ln n$, and let $d_i(u,v)$ denote the length of a\nshortest path from $u$ to $v$ in $G(\\alpha +i\\rho_0)$ that is chosen\namong all paths containing at most $t$ vertices (possibly\n$d_i(u,v)=\\infty$ if no such path exists). Notice that $d_i(u,v)$ is\nnot necessarily the distance from $u$ to $v$ in $G(\\alpha +i\\rho_0)$,\nsince the latter may require more than $t$ vertices. It is\nstraightforward to compute shortest paths limited to at most $k$\nvertices (for any $1 \\le k \\le n$) in a real-weighted directed graph\nwith $n$ vertices in $O(n^3 \\log k)$ time, by the repeated\nsquaring technique. In fact, they can be computed in $O(n^3)$ time\n(saving the $\\log k$ factor) using the method from \\cite{AHU-1974},\npp. 204--206. This algorithm also constructs the predecessor data\nstructure that represents the actual paths. It follows that for each\nordered pair of vertices $u,v$ and for each $i=0,\\ldots,N_0$, we can\ncompute $d_i(u,v)$ and a path $p_i(u,v)$ yielding $d_i(u,v)$ in\n$G(\\alpha +i\\rho_0)$ in $O(n^3 N_0)$ time, which is\n$O(n^{3.5} \\ln n)~.$\nWe also maintain, at no additional cost, linear functions $f_i(u,v)$\nwhich sum the linear functions of the edges of $p_i(u,v)$. 
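For illustration, the $t$-vertex-limited distances can be computed by repeated squaring in the $(\min,+)$ semiring, as mentioned above. The following Python sketch shows the $O(n^3\log t)$ variant; it omits the predecessor structure and the linear functions $f_i(u,v)$, and it is not the faster $O(n^3)$ method of \cite{AHU-1974}:

```python
INF = float("inf")

def min_plus(A, B):
    """(min, +) matrix product of two n x n matrices."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def limited_shortest_paths(W, t):
    """Shortest-path distances restricted to paths with at most t
    vertices (i.e. at most t-1 edges), by repeated squaring of the
    weight matrix in the (min, +) semiring: O(n^3 log t) time."""
    n = len(W)
    # A zero diagonal lets a path "wait" at a vertex, so paths with
    # fewer than t-1 edges survive the squaring.
    D = [[0.0 if i == j else W[i][j] for j in range(n)] for i in range(n)]
    R = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    steps = t - 1
    while steps:                 # binary exponentiation of D
        if steps & 1:
            R = min_plus(R, D)
        D = min_plus(D, D)
        steps >>= 1
    return R
```

On the path $0 \to 1 \to 2$ with unit weights, for instance, the entry for $(0,2)$ is $\infty$ with $t=2$ (a single edge does not suffice) and $2$ with $t=3$.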
Note also\nthat if $d_i(u,v)=\\infty$ then $p_i(u,v)$ and $f_i(u,v)$ are\nundefined.\n\nConsider next the case of splitting $[\\alpha,\\beta]$ into $N_1$\nintervals. Let $\\rho_1 = (\\beta-\\alpha)\/N_1$ and consider the points\n$\\alpha +i\\rho_1$ for $i=0,\\ldots,N_1$. However, unlike the\ncrude-short part, the refined-long part of the preprocessing algorithm\ncannot afford to solve an all-pairs shortest path problem for each\n$G(\\alpha +i\\rho_1)$, as the overall running time would be too large.\nInstead, we randomly select a set $H \\subset V$ of (at most)\n$\\sqrt{n}$ vertices. $H$ is constructed by performing $\\sqrt{n}$\nindependent trials, where in each trial one vertex of $V$ is added\nto $H$ uniformly at random (notice that since the same vertex can be\nselected more than once, $|H| \\le \\sqrt{n}$). For each $h \\in\nH$ and for each $i=0,\\ldots,N_1$, we solve the single-source shortest\npath problem in $G(\\alpha +i\\rho_1)$ from $h$, and also (by reversing\nthe edges) solve the single-destination shortest path problem {\\em toward}\n$h$. Notice that by using the reweighting functions\n$g^{[\\alpha,\\beta]}_v$ we can solve all of these single-source\nproblems using Dijkstra's algorithm. So, for all $h \\in H$ and\n$i=0,\\ldots,N_1$ the overall running time is\n$$\nO(N_1|H|(m+n \\log n)) = O(n^{1.5}m + n^{2.5}\\log n) = O(n^{3.5})~.\n$$\nWe therefore obtain, for each $h \\in H$ and for each $i=0,\\ldots,N_1$,\na shortest path tree $T^*_i(h)$, together with distances $d^*_i(h,v)$\nfrom $h$ to every other vertex $v \\in V$, where $d^*_i(h,v)$ is the distance from\n$h$ to $v$ in $G(\\alpha +i\\rho_1)$. We also maintain the functions\n$f^*_i(h,v)$ that sum the linear functions on the path in $T^*_i(h)$\nfrom $h$ to $v$. Likewise, we obtain a ``reversed'' shortest path\ntree $S^*_i(h)$, together with distances $d^*_i(v,h)$ from each $v \\in\nV$ to $h$, where $d^*_i(v,h)$ is the distance from $v$ to $h$ in $G(\\alpha\n+i\\rho_1)$. 
Similarly, we maintain the functions $f^*_i(v,h)$ that sum\nthe linear functions on the path in $S^*_i(h)$ from $v$ to $h$.\n\nFinally, for each ordered pair of vertices $u,v$ and for each\n$i=0,\\ldots,N_1$ we compute a vertex $h_{u,v,i} \\in H$ which attains\n$\n\\min_{h \\in H} d^*_i(u,h)+d^*_i(h,v)~.\n$\nNotice that the time to construct the $h_{u,v,i}$ for all ordered\npairs $u,v$ and for all $i=0,\\ldots,N_1$ is $O(n^{3.5})$. This\nconcludes the description of the preprocessing algorithm. Its overall\nruntime is thus $O(n^{3.5} \\ln n)$.\n\nWe now describe the instantiation phase. Given $u,v \\in V$ and $r \\in\n[\\alpha,\\beta]$ we proceed as follows. Let $i$ be the index for which\nthe number of the form $\\alpha+i\\rho_0$ is closest to $r$. As we have\nthe advice $f_i(u,v)$, we let $w_0 = f_i(u,v)(r)$ (recall that\n$f_i(u,v)$ is a function). Likewise, let $j$ be the index for which\nthe number of the form $\\alpha+j\\rho_1$ is closest to $r$. As we have\nthe advice $h=h_{u,v,j}$, we let $w_1 = f^*_j(u,h)(r)+f^*_j(h,v)(r)$.\nFinally, our answer is $z=\\min \\{w_0,w_1\\}$. Clearly, the\ninstantiation time is $O(1)$. Notice that if we also wish to output a\npath of weight $z$ in $G(r)$, we can easily do so by using either\n$p_i(u,v)$ in the case where $z=w_0$, or $S^*_j(h)$ and\n$T^*_j(h)$ (we take the path from $u$ to $h$ in $S^*_j(h)$ and\nconcatenate it with the path from $h$ to $v$ in $T^*_j(h)$) in the\ncase where $z=w_1$.\n\nIt remains to show that, with very high probability, the result $z$\nthat we obtain from the instantiation phase is at most $\\epsilon$\nlarger than the distance from $u$ to $v$ in $G(r)$. 
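Schematically, the instantiation phase amounts to two rounding operations and a few table lookups. A Python sketch follows; the dictionary-based layout of the advice, with each linear function stored as a coefficient pair $(a,b)$ meaning $ax+b$, is our own assumption:

```python
def instantiate(u, v, r, alpha, rho0, rho1, f, f_star, hub):
    """O(1) instantiation query.

    f[(u, v, i)]      -- crude-short advice: f_i(u,v) as a pair (a, b)
    hub[(u, v, j)]    -- refined-long advice: the hub vertex h_{u,v,j}
    f_star[(x, y, j)] -- f*_j(x, y) as a pair (a, b)
    """
    i = round((r - alpha) / rho0)   # index of the grid point alpha + i*rho0 closest to r
    j = round((r - alpha) / rho1)   # index of the grid point alpha + j*rho1 closest to r
    a, b = f[(u, v, i)]
    w0 = a * r + b                  # weight of the crude-short path at r
    h = hub[(u, v, j)]
    a1, b1 = f_star[(u, h, j)]
    a2, b2 = f_star[(h, v, j)]
    w1 = (a1 * r + b1) + (a2 * r + b2)  # weight of the u -> h -> v path at r
    return min(w0, w1)
```

Returning the corresponding path instead of its weight only requires additionally storing the predecessor structures $p_i(u,v)$, $S^*_j(h)$, and $T^*_j(h)$, as described above.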
For this purpose,\nwe first need to prove that the random set $H$ possesses some\n``hitting set'' properties, with very high probability.\n\n\nFor every pair of vertices $u$ and $v$ and parameter $r$, let\n$p_{u,v,r}$ be a shortest path in $G(r)$ among all simple paths from\n$u$ to $v$ containing at least $t=4\\sqrt{n}\\ln n$ vertices (if $G$ is\nstrongly connected then such a path always exists, and otherwise we can\njust put $+\\infty$ for all $u,v$ pairs for which no such path exists).\nThe following simple lemma is used in an argument similar to one used\nin \\cite{Zw-2002}.\n\\begin{lemma}\n\\label{lem:prob} For fixed $u$, $v$ and $r$, with probability at least\n$1-o(1\/n^3)$ the path $p_{u,v,r}$ contains a vertex from $H$.\n\\end{lemma}\n\\begin{proof} Indeed, the path $p_{u,v,r}$, by definition, has\nat least $4\\sqrt{n}\\ln n$ vertices. The probability that all of the\n${\\sqrt n}$ independent selections into $H$ fail to choose a vertex\nfrom this path is therefore at most\n$$\n\\left(1-\\frac{4\\sqrt{n} \\ln n }{n} \\right)^{\\sqrt{n}} < e^{-4\\ln n} <\n\\frac{1}{n^4} = o(1\/n^3)~.\n$$\n\\end{proof}\n\nLet us return to the proof of Theorem \\ref{thm:approx}. Suppose that\nthe distance from $u$ to $v$ in $G(r)$ is $\\delta$. We will prove that\nwith probability $1-o(1)$, $H$ is such that for every $u$, $v$ and $r$\nwe have $z \\le \\delta+\\epsilon$ (clearly $z \\ge \\delta$, as $z$ is the\nprecise length of some path in $G(r)$ from $u$ to $v$).\nAssume first that there is a path $p$ of length\n$\\delta$ in $G(r)$ that uses fewer than $4\\sqrt{n}\\ln n$ vertices.\nConsider the length of $p$ in $G(\\alpha+i\\rho_0)$. When going from $r$\nto $\\alpha+i\\rho_0$, each edge $e$ with weight $a_ex+b_e$ changed its\nlength by at most $|a_e|\\rho_0$. By the definition of $K$, this is at\nmost $\\rho_0 K\/(8(\\beta-\\alpha))$. 
Thus, $p$ changed its weight by at\nmost\n$$\n(4\\sqrt{n} \\ln n) \\cdot \\rho_0 \\frac{K}{8(\\beta-\\alpha)} = (4\\sqrt{n}\n\\ln n)\\frac{K}{8N_0} < \\frac{\\epsilon}{2}.\n$$\nIt follows that the length of $p$ in $G(\\alpha+i\\rho_0)$ is less than\n$\\delta+\\epsilon\/2$. But $p_i(u,v)$ is a shortest path from $u$ to\n$v$ in $G(\\alpha+i\\rho_0)$ among all the paths that contain at most $t$\nvertices. In particular, $d_i(u,v) \\le \\delta+\\epsilon\/2$. Consider\nthe length of $p_i(u,v)$ in $G(r)$. The same argument shows that the\nlength of $p_i(u,v)$ in $G(r)$ changed by at most $\\epsilon\/2$. But\n$w_0=f_i(u,v)(r)$ is that weight, and hence $w_0 \\le\n\\delta+\\epsilon$. In particular, $z \\le \\delta+\\epsilon$.\n\nAssume next that every path of length $\\delta$ in $G(r)$ uses at least\n$4\\sqrt{n}\\ln n$ vertices. Let $p$ be one such path. When going from\n$r$ to $r'=\\alpha+j\\rho_1$, each edge $e$ with weight $a_ex+b_e$\nchanged its length by at most $|a_e|\\rho_1$. By the definition of $K$,\nthis is at most $\\rho_1 K\/(8(\\beta-\\alpha))$. Thus, $p$ changed its\nweight by at most\n$$\nn \\cdot \\rho_1 \\frac{K}{8(\\beta-\\alpha)} = n \\frac{K}{8N_1} <\n\\frac{\\epsilon}{8}.\n$$\nIn particular, the length of $p_{u,v,r'}$ is not more than the length\nof $p$ in $G(r')$, which, in turn, is at most $\\delta+\\epsilon\/8$. By\nLemma \\ref{lem:prob}, with probability $1-o(1\/n^3)$, some vertex of\n$H$ appears on $p_{u,v,r'}$. Moreover, by the union bound, with\nprobability $1-o(1)$ {\\em all} paths of the type $p_{u,v,r'}$\n(remember that $r'$ can hold one of $O(n)$ possible values) are thus\ncovered by the set $H$. Let $h'$ be a vertex of $H$ appearing in\n$p_{u,v,r'}$. We therefore have $d^*_j(u,h')+d^*_j(h',v) \\le\n\\delta+\\epsilon\/8$. Since $h=h_{u,v,j}$ is taken as the vertex which\nminimizes these sums, we have, in particular, $d^*_j(u,h)+d^*_j(h,v)\n\\le \\delta+\\epsilon\/8$. 
Consider the path $q$ in $G(\\alpha+j\\rho_1)$\nrealizing $d^*_j(u,h)+d^*_j(h,v)$. The same argument shows that the\nlength of $q$ in $G(r)$ changed by at most $\\epsilon\/8$. But\n$w_1=f^*_j(u,h)(r)+f^*_j(h,v)(r)$ is that weight, and hence $w_1 \\le\n\\delta+\\epsilon\/4$. In particular, $z \\le \\delta+\\epsilon\/4$.\n\n\\section{Concluding remarks}\\label{sec:conclusion} We have constructed\nseveral parametric shortest path algorithms, whose common feature is\nthat they preprocess the generic instance and produce advice that\nenables particular instantiations to be solved faster than running the\nstandard weighted distance algorithm from scratch. It would be of\ninterest to improve upon any of these algorithms, either in their\npreprocessing time or in their instantiation time, or both.\n\nPerhaps the most challenging open problem is to improve the\npreprocessing time of Theorem \\ref{thm:gen} to a polynomial one, or,\nalternatively, to prove a hardness result for this task. A perhaps less\nambitious goal is to improve the preprocessing time of Theorem \\ref{thm:neg}.\n\nFinally, parametric algorithms are of practical importance for other\ncombinatorial optimization problems as well. It would be interesting\nto find applications where, indeed, a parametric algorithm can be\ntruly beneficial, as it is in the case of shortest path problems.\n\n\\section*{Acknowledgment} We thank Oren Weimann and Shay Mozes for\nuseful comments.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}