\\section{Introduction}\n\nThe practical utility of widely used methods in electronic structure\ntheory is in large part determined by the optimization algorithms\nthey rely on.\nThis basic theme has been repeated throughout the history of\nquantum chemistry, with methods as fundamental as Hartree-Fock\ntheory becoming dramatically more useful with the development\nof superior solution methods such as the direct inversion in\nthe iterative subspace. \\cite{pulay1982diis}\nSimilar transformations have been seen in configuration interaction\n(CI) theory thanks to Davidson's method, \\cite{davidson1975}\nin the density matrix renormalization group (DMRG) approach\nthanks to (among other innovations) the noise algorithm, \\cite{white2005noise}\nand in many other methods besides.\nAs in the case of DMRG, it is rarely a single innovation\nin the numerical methods that transforms a theory from a promising\nproof of concept into a robust computational tool.\nInstead, such tools often arise as the result of a series of innovations\nthat, once combined, fit together in a way that makes them more than\nthe sum of their parts.\n\nIn the context of quantum Monte Carlo (QMC), and more specifically\nin its variational (VMC) formulation, the introduction of the\nlinear method (LM) for trial function optimization marked a large\nstep forward along the path to practical utility and reliability.\n\\cite{Umrigar2007}\nHowever, recent research has revealed multiple options for\nbypassing the LM's memory bottleneck, making clear that there is\nstill a great deal of distance to cover in the maturation of\nVMC numerical methods.\nSome of these approaches\n\\cite{Neuscamman2012,Zhao2017}\ndepend, like the LM itself, on knowing\nat least some information about energy second derivatives, but by\navoiding the construction of full Hessian-sized matrices they\nachieve dramatically lower memory footprints compared to the LM.\nOther even more recent approaches, most of which can be classified as\naccelerated descent (AD) methods,\n\\cite{Schwarz2017,Sabzevari2018,Mahajan2019}\navoid second derivative information entirely and are thus even\nmore memory efficient, relying instead on a\nlimited knowledge of the optimization's history of energy\nfirst derivatives or, in one case, just the signs of\nthese derivatives. 
\\cite{Luo2018}\nIn the present study, we explore the relative advantages of these\nfirst and second derivative approaches and find that,\nwhen combined, they offer a highly complementary optimization\nstrategy that appears both more robust and more efficient\nthan either class of methods is on its own.\n\nThe ability to optimize larger and more complicated wave function\nforms is becoming increasingly relevant due to rapid\nprogress in other areas of VMC methodology.\nThe introduction of the table method \\cite{Clark2011,Morales2012}\nhas increased the size of CI expansions that can be\nhandled by more than an order of magnitude, and expansion\nlengths beyond 10,000 determinants are no longer unusual.\nA recent improvement to the table method \\cite{Filippi2016,Assaraf2017}\nnow allows the molecular orbital basis to be optimized\nefficiently in the presence of these large expansions,\nwhile the resurgence of interest in selected CI methods\n\\cite{evangelista2016adaptive,Holmes2016,tubman2016asci,\n Sharma2017,loos2018scipt,zimmerman2018sshci}\nhas provided a convenient route to their construction.\nIn addition to these CI-based advances, other wave function\ninnovations have also led to growing demands on\nVMC optimization methods.\nIncreasingly sophisticated correlation factors,\nsuch as those used in Hilbert space approaches\n\\cite{changlani2009cps,mezzacapo2009eps,neuscamman2012magnet,\n Neuscamman2012,Schwarz2017,Sabzevari2018,Mahajan2019}\nas well as a steady stream of developments in real space\n\\cite{Casula2004,Sorella2007,LopezRios2012,Luchow2015,Goetz2017,Goetz2018}\nhave also raised the demand for optimization approaches\nthat can deal with large numbers of highly nonlinear parameters.\nAlthough less thoroughly explored, the treatment of correlation effects\nvia back flow transformations also continues to receive attention\nand create new optimization challenges. 
\\cite{Holzmann2015iterbackflow,Luo2018}\nFinally, in addition to these increases in ansatz sophistication,\nrecent interest in using excited state variational principles\nto expand QMC's excited state capabilities has led to its own\ncollection of optimization difficulties.\n\\cite{Zhao2016,neuscamman2016varqmc,blunt2017ct,Shea2017,\n robinson2017vm,blunt2018excited,Flores2019}\n\nBy supporting these various advances in QMC methodology, improved\nVMC optimization methods have the potential for large impacts\nin diverse areas of chemistry and solid state physics.\nWork on lattice models, for example, continues to push the\nboundaries on how approximate wave functions are defined.\n\\cite{troyer2017rbm,Kochkov2018}\nIn the area of molecular excited states, QMC methods offer\npromising new routes to high-accuracy treatments of both\ndouble excitations \\cite{Zhao2016,Neuscamman2016}\nand charge transfer excitations, \\cite{blunt2017ct,Flores2019}\nboth of which continue to challenge conventional\nquantum chemistry methods.\nIn QMC's traditional area of simulating real solids, applications\nof both VMC and projector Monte Carlo would benefit immediately\nfrom the ability to prepare more sophisticated trial\nwave functions.\n\\cite{Foulkes2001,morales2018qfqmcnio,zhao2018gaps}\nDiffusion Monte Carlo (DMC) in particular would achieve higher accuracy using \nthe better nodal surfaces determined by well-optimized ansatzes from VMC.\nMore generally, the ability of QMC to combine treatments\nof weak and strong electron correlation effects within a robust\nvariational framework that operates near the basis set limit\nmakes it a powerful general-purpose\napproach for difficult molecular and materials problems where\nhigh accuracy is necessary.\nBy increasing the size and complexity of systems that fall into\nits purview, improvements in QMC wave function optimization methods\ntherefore have the potential to move electronic structure simulation\nforward on a number of fronts.\n\nThe present study seeks to aid in this endeavor by focusing on\nthe relative advantages of recently developed low-memory\nfirst and second derivative methods in VMC and in\nparticular on how they can be used to complement each other.\nUnlike deterministic optimizations, in which second derivative\nmethods are typically preferred so long as they are affordable,\nthe situation is less straightforward when the objective function\nand its derivatives are statistically uncertain.\nOne major concern is that, in practice, it can be more difficult\nto achieve low-uncertainty estimates of the second derivative\nterms that appear in the LM and its descendants.\nWhile this issue can be mitigated by the use of alternative\napproaches to importance sampling,\nthese can increase uncertainty in the energy due to the loss\nof the zero-variance principle.\nThus, as we will demonstrate, statistical precision tends to\nbe higher when using AD methods, which is an advantage on top\nof their ability to converge to the minimum without the bias\nthat arises from the LM's highly nonlinear matrix diagonalization.\nHowever, we will also see that in order to enjoy the advantages\nof a tighter and less biased final convergence,\nAD methods must first reach the vicinity of the minimum.\nFor this task, we find that the LM and its low-memory variants\noutperform all of the first derivative methods that we tested,\nespecially for optimizations in which the wave function contains\ndifferent classes of parameters that vary greatly in their\nnonlinear character 
and how they couple to each other.\nHappily, we will see that a hybrid approach --- in which\nAD and low-memory LM optimization steps are interwoven ---\nexcels both at reaching the vicinity of the minimum and\nproducing unbiased final energies while simultaneously\nmaintaining a high degree of statistical efficiency.\n\n\\section{Theory}\n\n\\subsection{Variational Monte Carlo}\n\nVMC combines the variational principle of quantum mechanics with Monte Carlo evaluation of high-dimensional integrals.\\cite{Umrigar2015}\nTo study the ground state of a system, we pick a trial wave function $\\Psi$ of some particular form and seek to minimize its energy expectation value.\n\\begin{equation}\n E(\\Psi) = \\frac{\\Braket{\\Psi | H | \\Psi}}{\\Braket{\\Psi | \\Psi}}\n\\end{equation}\n\nIn the language of mathematical optimization, $E(\\Psi)$ is an example of an objective function or cost function.\nFor a typical system with $N$ electrons, this expression contains integrals over $3N$ position space coordinates, which for some wave functions can only be evaluated efficiently through Monte Carlo sampling rather than quadrature methods.\nWe rewrite the energy as\n\\begin{align}\n\\notag\n E &= \\frac{\\int d \\mathbf{R}\\Psi (\\mathbf{R}) H \\Psi(\\mathbf{R})}{\\int d \\mathbf{R}\\Psi (\\mathbf{R})^2}\n = \\frac{\\int d \\mathbf{R}\\Psi (\\mathbf{R})^2 E_L (\\mathbf{R})}{\\int d \\mathbf{R}\\Psi (\\mathbf{R})^2} \\\\\n &= \\int d \\mathbf{R} \\rho (\\mathbf{R})E_L (\\mathbf{R})\n\\label{eqn:energy}\n\\end{align}\nwhere $E_L (\\mathbf{R}) = \\frac{H \\Psi (\\mathbf{R})}{\\Psi (\\mathbf{R})}$ is the local energy and $\\rho (\\mathbf{R}) = \\frac{\\Psi(\\mathbf{R})^2}{\\int d \\mathbf{R} \\Psi (\\mathbf{R})^2}$ is the probability density.\nThe zero-variance principle\\cite{Assaraf1999} makes $\\rho (\\mathbf{R})$ the most common choice of probability distribution for obtaining samples, but it is not the only option.\nFor effective estimation of quantities besides the energy, such as the LM matrix elements, other\nimportance sampling functions are often preferred.\n\\cite{trail2008a,trail2008b,robinson2017vm}\nIn our LM and blocked LM calculations in this study,\nwe employ the importance sampling function\n(and the appropriately modified statistical estimate formulas\n\\cite{Flores2019})\n\\begin{align}\n\\label{eqn:is}\n |\\Phi|^2 \\equiv\n |\\Psi|^2 + \\frac{\\epsilon}{M}\n \\hspace{0.5mm} \\sum_{I} |D_I|^2\n\\end{align}\nin which the $D_I$ are the $M$ different $S_z$-conserving single\nexcitations relative to the closed-shell reference determinant.\nThe logic behind this choice is that it puts some weight on\nconfigurations that are highly relevant for the orbital rotation\nparameters' wave function derivatives, as small orbital\nrotations can be approximated via the addition of singles.\nWe find that this importance sampling function substantially\nreduces the uncertainty of the LM matrix elements corresponding\nto orbital rotations, which in turn helps reduce the update\nstep uncertainty.\nFor AD, we simply use traditional $|\\Psi|^2$ importance sampling\nas in equation \\ref{eqn:energy}.\n\nBy the variational principle, we are guaranteed that $E$ is an upper bound on the true ground state energy. \nGiven some set of adjustable parameters in the functional form of $\\Psi$, we expect values of those parameters that yield a lower value of $E$ to correspond to a wave function that is closer to the ground state. 
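\nTo make the sampling procedure concrete, the following minimal Python sketch (illustrative only, and not code from our implementation) estimates the energy of equation \\ref{eqn:energy} as a simple average of local energies over configurations drawn from $|\\Psi|^2$; the functions passed in for the local energy evaluator and the Metropolis sampler are hypothetical placeholders.\n\\begin{verbatim}\nimport numpy as np\n\n# Estimate E = <E_L> from samples R ~ |Psi(R)|^2 (a sketch, not QMCPACK code).\n# local_energy(R) returns H Psi(R) / Psi(R), and sample_psi_squared(n) returns\n# n electron configurations drawn from |Psi|^2 by a Metropolis walk.\ndef vmc_energy(local_energy, sample_psi_squared, n_samples):\n    configs = sample_psi_squared(n_samples)\n    e_loc = np.array([local_energy(R) for R in configs])\n    energy = e_loc.mean()                              # sample mean of E_L\n    std_err = e_loc.std(ddof=1) / np.sqrt(len(e_loc))  # statistical uncertainty\n    return energy, std_err\n\\end{verbatim}\nThe returned uncertainty decays only as the square root of the number of samples, and it is this noise that the optimization methods discussed below must contend with.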
\nOne could then imagine the abstract space produced by the possible values of all variational parameters.\nThe set of optimal parameter values that specify the wave function expression which minimizes $E$ can be taken as a point in this space labeled by the vector $\\mathbf{p^*}$.\nIn general, the initial choice for parameters will not be at this energy minimum point, but at some other point $\\mathbf{p_0}$.\nThe problem of determining the best wave function in VMC calculations then relies on an optimization algorithm for finding $\\mathbf{p^*}$ after starting from $\\mathbf{p_0}$.\n\nWithin this framework, one of the most important considerations is that the optimization is inherently stochastic due to the introduction of noise through the Monte Carlo evaluation of the integral in equation \\ref{eqn:energy}.\nThis forms a contrast with many other methods in electronic structure theory including Hartree-Fock, CI, and coupled cluster where various deterministic optimization schemes predominate.\\cite{Helgaker2000}\nMany of the algorithms commonly encountered in a deterministic quantum chemistry context such as steepest descent and the Newton method, have been adapted for use in VMC.\\cite{Harju1997,Lin2000,Lee2005,Sorella2005,Umrigar2005}\nHowever, there is now a need to be robust to the presence of noise.\nHistorically, errors due to finite sampling led to numerical instabilities that prompted interest in minimizing variance\\cite{Kent1999,Umrigar1988} instead of energy, but later optimization developments have sought to mitigate this issue and in this paper we only consider energy minimization.\nAs we will now discuss in their respective sections, both the LM and gradient descent approaches possess features that enable them to operate stably in a stochastic setting.\n\n\\subsection{The Linear Method}\nThe LM\\cite{Nightingale2001,Umrigar2007} begins with a first order Taylor expansion of the wave function.\nFor a set of variational parameters given by vector $\\mathbf{p}$, we have\n\\begin{equation}\n\\label{eqn:lmTaylor}\n \\Psi (\\mathbf{p}) = \\Psi_0 + \\sum_i \\Delta p_i \\Psi_i\n\\end{equation}\nwhere $\\Psi_i = \\frac{\\partial \\Psi(\\mathbf{p})}{\\partial p_i}$ and $\\Psi_0$ is the wave function at the current parameter values.\n\nFinding the optimal changes to the parameters amounts to solving the generalized eigenvalue problem \n\\begin{equation}\n\\label{eqn:lmEigen}\n H\\Vec{c} = E S \\Vec{c}\n\\end{equation}\n in the basis of the wave function and its first order parameter derivatives $\\{\\Psi_0,\\Psi_1,\\Psi_2,...\\}$.\n$H$ and $S$ are the Hamiltonian and overlap matrices in this basis with elements \n\\begin{equation}\n\\label{eqn:lmH}\n H_{ij} = \\Braket{\\Psi_i | H | \\Psi_j} \n\\end{equation}\n\\begin{equation}\n\\label{eqn:lmS}\n S_{ij} = \\Braket{\\Psi_i | \\Psi_j}\n\\end{equation}\nThe matrix diagonalization to solve this eigenproblem for eigenvector $\\Vec{c} = (1,\\Delta \\mathbf{p})$ then yields the updated parameter values $\\mathbf{p_1} = \\mathbf{p_0} + \\Delta \\mathbf{p}$.\nAs the matrices $\\bm{H}$ and $\\bm{S}$ both contain a subset\nof the second derivative terms that would be present in a \nNewton-Raphson approach, \\cite{Toulouse2007}\nthe LM is most naturally categorized as a second-derivative\nmethod, and it certainly shares Newton-Raphson's difficulties\nwith regards to dealing with matrices whose dimension\ngrows as the number of variables.\n\nFor practical use with finite sampling, the LM must be stabilized to prevent unwisely large steps in 
parameter space.\nThis is accomplished by adding shift values\\cite{Umrigar2007} to the matrix diagonal that effectively act as a trust radius scheme similar to those used with the Newton method.\nIn our implementation, the Hamiltonian is modified with two shift values meant to address distinct potential problems in the optimization.\\cite{Kim2018}\n\\begin{equation}\n\\label{eqn:lmShift}\n \\mathbf{H} \\xrightarrow[]{} \\mathbf{H} + c_I \\mathbf{A} + c_S \\mathbf{B}\n\\end{equation}\n\nThe matrix elements of $\\mathbf{A}$ are given by $A_{ij} = \\delta_{ij}(1-\\delta_{i0})$ so that the shift $c_I$ effectively gives an energy penalty to directions of change from the current wave function.\\cite{Umrigar2007}\nThe second shift is intended to address problems that may arise if some wave function derivatives have norms that differ by orders of magnitude. \nIn this situation, the single shift value $c_I$ is insufficient to preserve a quick yet stable optimization.\nFor a parameter with a large derivative norm, a sufficiently high value of $c_I$ might prevent an excessively large change in its value.\nHowever, all other parameter directions with smaller derivative norms will be so heavily penalized by the large value of $c_I$ that those parameters become effectively fixed.\nThe purpose of the second $c_S \\mathbf{B}$ term is to retain important flexibility in other parameter directions. \nWe can write the matrix $\\mathbf{B}$ as\n\\begin{equation}\n\\label{eqn:bMat}\n\\mathbf{B} = (\\mathbf{Q^T})^{-1} \\mathbf{T} \\mathbf{Q}^{-1}\n\\end{equation}\nwhere\n\\begin{equation}\n\\label{eqn:qMat}\n Q_{ij} = \\delta_{ij} -\\delta_{i0}(1-\\delta_{j0})S_{0j}\n\\end{equation}\nand \n\\begin{equation}\n\\label{eqn:tMat}\n T_{ij} = (1-\\delta_{i0}\\delta_{j0})[\\mathbf{Q^T}\\mathbf{S}\\mathbf{Q}]_{ij}\n\\end{equation}\nThe matrix $\\mathbf{Q}$ provides a transformation to a basis where all update directions are orthogonal to the current wave function and the matrix $\\mathbf{T}$ is the overlap matrix in this basis. \nThe optimal choice of shift parameters $c_I$ and $c_S$ may depend on the particular optimization problem.\nIn our implementation, an adaptive scheme adjusts the shifts on each iteration by comparing the energies calculated through correlated sampling on three different sets of shift values and choosing whichever shifts produced the lowest energy.\n\nThe LM has been successfully applied to a\nvariety of systems to prepare good trial wave functions\nfor DMC.\n\\cite{Umrigar2007,Toulouse2007,Toulouse2008,Brown2007,Petruzielo2012,Goetz2017,Goetz2018}\nIt has also been used in the variational optimization of a recent functional for targeting excited states.\\cite{Zhao2016,Shea2017,Flores2019}\nHowever, it possesses a number of limitations, most notably a memory cost that scales with the square of the number of optimizable parameters due to the matrices it builds. 
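\nTo summarize the update machinery just described, the following Python sketch (illustrative only, assuming the matrices $\\mathbf{H}$ and $\\mathbf{S}$ in the $\\{\\Psi_0,\\Psi_1,\\Psi_2,...\\}$ basis have already been estimated from a sample) forms the shift matrices of equations \\ref{eqn:lmShift} through \\ref{eqn:tMat} and solves the generalized eigenvalue problem of equation \\ref{eqn:lmEigen} for a parameter update.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import eig\n\n# One stabilized LM update from sampled H and S (a sketch, not QMCPACK code).\ndef lm_update(H, S, c_I, c_S):\n    n = H.shape[0]\n    A = np.eye(n); A[0, 0] = 0.0          # penalizes motion away from Psi_0\n    Q = np.eye(n); Q[0, 1:] = -S[0, 1:]   # basis orthogonal to current Psi\n    T = Q.T @ S @ Q; T[0, 0] = 0.0        # overlap in the transformed basis\n    B = np.linalg.inv(Q.T) @ T @ np.linalg.inv(Q)\n    evals, evecs = eig(H + c_I * A + c_S * B, S)  # shifted eigenproblem\n    c = evecs[:, np.argmin(evals.real)].real      # lowest-energy root\n    return c[1:] / c[0]                   # update Delta p, with c_0 set to 1\n\\end{verbatim}\nEven this simple sketch must build and store the full matrices, whose memory footprint grows quadratically with the number of parameters.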
\nThis memory cost currently confines routine use of the LM to less than roughly 10,000 parameters, though exceptional calculations with up to about 16,000 have been made.\\cite{Clark2011}\nAnother shortcoming is the nonlinear bias of the LM.\nWe are evaluating the elements of the Hamiltonian and overlap matrices stochastically and have a nonlinear relationship between them and our energy through the generally high-order characteristic polynomial of the eigenvalue problem of equation \\ref{eqn:lmEigen}.\nAs a result, we in general expect the LM to converge to a point in parameter space slightly offset from the true minimum.\nThis nonlinear bias has been studied for the LM in Hilbert space\\cite{Zhao2016a} and a similar issue arises in the context of Full Configuration Interaction QMC.\\cite{Blunt2018}\nBoth the memory constraint and the nonlinear bias of the LM become more severe for ansatzes with larger numbers of variational parameters, which spurs the search for potential alternatives.\nOne approach suggested for memory reduction is to employ Krylov subspace methods for equation \\ref{eqn:lmEigen} to avoid building matrices, but it requires a drastically higher sampling effort due to the need for many matrix-vector multiplications and so we do not pursue the approach here.\\cite{Neuscamman2012}\n\n\\subsection{Blocked Linear Method}\nOne recent approach to bypassing the memory bottleneck is known as the blocked linear method (BLM).\\cite{Zhao2017}\nThe first step of the algorithm is to divide the full set of parameters into $N_b$ blocks. \nNext, an LM-style matrix diagonalization is carried out within each block and some number $N_k$ of the resulting eigenvectors from the blocks are retained as good directions for constructing an approximation for the overall best update direction in the full parameter space.\nFor a particular block of variables, the wave function expansion in the LM is given by\n\\begin{equation}\n\\label{eqn:blmBlock}\n \\ket{\\Psi_b} = \\ket{\\Psi_0} + \\sum_{i=1}^{M_b} c_i \\ket{\\Psi^i} \n\\end{equation}\nwhere $\\ket{\\Psi^i}$ is the wave function derivative with respect to the $i$th variable in the block, $M_b$ is the number of variables in the block, and $\\ket{\\Psi_0}$ is the current wave function as in the normal LM.\nWe can perform the same matrix diagonalization done in the LM, only with parameters outside the block fixed.\nThis yields a set of eigenvectors that we can use to construct another approximate expansion of the original wave function.\nWe can construct a matrix $\\mathbf{B}$ using the $N_k$ eigenvectors with the lowest eigenvalues from each block and write a new expansion\n\\begin{equation}\n\\label{eqn:blmNewBlock}\n \\ket{\\Tilde{\\Psi}} = \\alpha \\ket{\\Psi_0} + \\sum_{k=1}^{N_b} \\sum_{j=1}^{N_k} A_{kj} \\sum_{i=1}^{M_k} B_{ji}^{(k)} \\ket{\\Psi^{i,k}}\n\\end{equation}\nwhere $M_k$ is the number of variables in the $k$th block.\nHaving now pre-identified important directions within each block,\nthe idea is that a subsequent LM-style diagonalization\nin the basis of these good directions (which yields\nthe coefficients $A_{kj}$) should still provide a good\nupdate direction when re-expressed in the full parameter space.\n\nIn order to help retain most of the accuracy of the traditional LM, the first stage of the BLM computation includes $N_o$ other good directions that are used to supply the current block's diagonalization with information about how its variables are likely to couple to those in other blocks.\nIn practice, important out-of-block directions are obtained by\nkeeping a history of previous iterations' updates\nas 
the optimization progresses.\nWe can rewrite the one block expansion introduced in equation \\ref{eqn:blmBlock} as\n\\begin{equation}\n\\label{eqn:blmHistory}\n \\ket{\\Psi_b} = \\ket{\\Psi_0} + \\sum_{i=1}^{M_b} c_i \\ket{\\Psi^i} + \\sum_{j=1}^{N_o} \\sum_{k=1,k\\neq b}^{N_b} d_{jk} \\ket{\\Theta_{jk}}\n\\end{equation}\nwhere we take the\n\\begin{equation}\n\\label{eqn:blmTheta}\n \\ket{\\Theta_{jk}} = \\sum_{l=1}^{M_k} C_{jkl} \\ket{\\Psi^{l,k}}\n\\end{equation}\nas the linear combinations of wave function derivatives from other blocks that were identified as important based on previous iterations' updates.\nThe additional term in the expansion allows us to account for couplings between variables in different blocks and enable the construction of a better space for the second diagonalization.\nWe assemble the matrix $\\mathbf{B}$ and $\\ket{\\Tilde{\\Psi}}$ and then seek to minimize $\\frac{\\Braket{\\Tilde{\\Psi} | H | \\Tilde{\\Psi}}}{\\Braket{\\Tilde{\\Psi}|\\Tilde{\\Psi}}}$ with respect to variational parameters $\\alpha$ and $A_{kj}$ in our BLM wave function expansion in equation \\ref{eqn:blmNewBlock}.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=8.3cm]{BLM_flowchart.eps}\n \\caption{Flowchart depicting steps in the BLM algorithm to arrive at a parameter update.}\n \\label{fig:blmchart}\n\\end{figure}\n\nFigure \\ref{fig:blmchart} portrays the algorithmic steps described above. Some number of parameters too large to be handled by the standard LM is divided among different blocks whose diagonalizations produce the vectors $\\Vec{b_i}$ for the construction of the space of the second diagonalization that produces the parameter update.\nThe BLM can be thought of as achieving memory savings in the use of smaller matrices at the cost of having to run over the sample twice when the traditional LM must run over it just once.\nA more extensive description of the BLM and its precise memory usage can be found in its original paper.\\cite{Zhao2017}\n\nWe divide parameters evenly among blocks, but one could implement the use of tailored blocks of varying sizes. 
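\nAs a rough illustration of the first stage of this procedure, the sketch below (schematic only, taking hypothetical per-block estimates of the Hamiltonian and overlap matrices as input) carries out the block diagonalizations and retains the $N_k$ lowest eigenvectors of each block as the good directions for the second diagonalization.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import eig\n\n# First BLM stage (a schematic sketch). block_matrices holds one estimated\n# (H_b, S_b) pair per parameter block.\ndef blm_block_directions(block_matrices, n_keep):\n    directions = []\n    for H_b, S_b in block_matrices:\n        evals, evecs = eig(H_b, S_b)             # LM eigenproblem in the block\n        order = np.argsort(evals.real)[:n_keep]  # N_k lowest-energy roots\n        directions.append(evecs[:, order].real)  # kept as good directions\n    return directions                            # basis for the second stage\n\\end{verbatim}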
\nIt is advisable to choose the block size to be large enough to keep important parameters of the same type, such as all of those for a Jastrow factor, within the same block.\nThis enables the expected strong coupling between them to be handled more accurately by the LM-style diagonalization within that block.\nWhile the BLM has been successfully applied up to about 25,000 parameters and found to closely reproduce the results of the standard LM, \\cite{Zhao2017}\nit remains a relatively new method, and the present study will provide additional data on its efficacy.\n\n\\subsection{Gradient Descent Methods}\nIn the last few years, increasing attention\\cite{Schwarz2017,Schwarz2017a,Sabzevari2018,Luo2018,Mahajan2019} has been paid to optimization methods that use only first derivatives to minimize a Lagrangian of the form \n\\begin{equation}\n\\label{eqn:lagrangian}\n \\mathcal{L}( \\Psi(\\mathbf{p})) = \\Braket{\\Psi | H | \\Psi} - \\mu(\\Braket{\\Psi | \\Psi} - 1) \n\\end{equation}\nwhere $\\mu$ is a Lagrange multiplier and, in practice, a moving average of the local energy.\nThere is no need to solve an eigenvalue problem as in the LM and the memory cost of these approaches scales linearly with the number of parameters.\nWe also note that the stochastic evaluation of derivatives of this Lagrangian will lead to a smaller nonlinear bias compared to what is encountered in the LM.\nWhile there is some nonlinearity present in the product $\\mu \\Braket{\\Psi | \\Psi}$, it is mild compared to the high order polynomials encountered in the solution of the LM eigenvalue problem and can be avoided entirely if desired through modest amounts of extra sampling.\nMinimization of this Lagrangian targets the ground state, but excited states can similarly be targeted with these optimization algorithms merely by using derivatives of one of the excited state functionals that have been developed.\\cite{Choi1970,Ye2017,Zhao2016}\n\nThe simplest method in this category is the steepest descent algorithm.\n\\begin{equation}\n\\label{eqn:steepest}\n p_i^{k+1} = p_i^k - \\eta_k \\frac{\\partial \\mathcal{L}(\\mathbf{p})}{\\partial p_i}\n\\end{equation}\nIn this case, the value of each parameter on the $k+1$'th step is found simply by subtracting the statistically uncertain parameter derivative times a step size $\\eta_k$.\nThe step size can be taken as constant over all steps in the simplest case, but rigorous proofs on the convergence of stochastic gradient descent (SGD) rely on decaying step sizes satisfying $\\sum_{k} \\eta_k = \\infty$ and $\\sum_{k} \\eta_k^2 < \\infty$.\\cite{Bottou2012}\n\nIt may be worth briefly commenting that the typical formulation of stochastic gradient descent as seen in the machine learning and mathematical optimization literature is slightly different from what we use here within VMC.\nIn a common machine learning scenario,\\cite{Bottou2012} one has a training set of input data $\\{x_1,x_2,...,x_n\\}$ and corresponding outputs $\\{y_1,y_2,...,y_n\\}$ and wishes to minimize a loss function $Q(x,y;w)$ that measures the error produced by a model $f_w(x)$, which predicts $\\widetilde{y}_i$ given $x_i$ and is parameterized by variables $w$.\nFor this setting, the SGD algorithm refers to evaluating the gradient of $Q$ with a randomly chosen pair $(x_j,y_j)$ from the given data set and then computing the parameter update according to $w_{k+1} = w_k - \\eta_k \\nabla_w Q(x_j,y_j)$.\nFor our VMC optimization, we are dealing with a noisy gradient similar to what occurs in this machine 
learning problem, but the source of our noise is somewhat different and lies in our means of evaluating the underlying 3N dimensional integrals within our Lagrangian derivatives.\nAnother important distinction is that in machine learning applications, complete convergence to the minimum is in fact undesirable because it will overfit the model to the training data and degrade its performance on new sets of test inputs.\nMuch as SGD provides a computational speed up for machine learning problems, we are also able to operate gradient descent methods at a cheap per-iteration cost because we need only a modest number of samples to evaluate sufficiently precise Lagrangian derivatives compared to the Hamiltonian and overlap matrices in the LM.\nHowever, unlike the machine learning case, we do want to come as close as possible to the true minimum, and we will see that even reaching the vicinity of the minimum can be difficult for descent methods when typical VMC initial guesses are employed.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=8.3cm]{descent_intuition.eps}\n \\caption{Illustration of the difficulty faced by steepest descent in red on the lower left with its slow approach to the minimum. Accelerated descent in green on the upper right is able to progress more rapidly to the minimum with its memory of previous gradients.}\n \\label{fig:narrowvalley}\n\\end{figure}\n\nWhile steepest descent can be guaranteed to eventually reach the minimum of the Lagrangian even in a stochastic setting, its asymptotic convergence is very slow.\nFor some intuition, one could imagine the landscape of the Lagrangian's values forming a very narrow valley near the true minimum.\nIn this situation, steepest descent would produce parameter updates mostly back and forth along the sides of the valley with little improvement of parameter values in the direction directly toward the minimum.\nDue to the limitations of steepest descent, a number of other flavors of accelerated gradient descent (AD) have been developed that include a momentum term with information on previous values of the gradient.\nAs illustrated in Figure \\ref{fig:narrowvalley}, the general intuition is that this additional term provides some memory of the progression along narrow valleys that steepest descent lacks and thereby achieves swifter convergence.\nIn addition, there are multiple schemes for adaptively varying the step sizes used in a manner that draws on the particular derivative values for each individual parameter as the optimization progresses. \nThese methods have recently been applied successfully to Hilbert space QMC. In this study, we work in real space and investigate a combination of Nesterov momentum with RMSprop as presented by the Booth group \\cite{Schwarz2017,Schwarz2017a}, a method using random step sizes from the Clark group\\cite{Luo2018}, AMSGrad, recently used by the Sharma group\\cite{Sabzevari2018,Mahajan2019}, as well as the ADAM optimizer\\cite{Kingma}.\n\nWe now lay out the precise expressions for each of these methods in turn. 
The RMSprop algorithm used by Booth and co-workers is given by the following recurrence relations.\\cite{Schwarz2017}\n\\begin{equation}\n\\label{eqn:rmspropMomentum}\n p_i^{k+1} = (1-\\gamma_k)q_i^{k+1} - \\gamma_k q_i^k\n\\end{equation}\n\\begin{equation}\n\\label{eqn:rmspropUpdate}\n q_i^{k+1} = p_i^k - \\tau_k \\frac{\\partial \\mathcal{L}(\\mathbf{p})}{\\partial p_i}\n\\end{equation}\n\\begin{equation}\n\\label{eqn:rmspropRecur}\n\\lambda_0=0 \\hspace{7mm}\n\\lambda_k = \\frac{1}{2} + \\frac{1}{2}\\sqrt{1+4\\lambda_{k-1}^2} \\hspace{7mm}\n\\gamma_k = \\frac{1-\\lambda_k}{\\lambda_{k+1}}\n\\end{equation}\n\\begin{equation}\n\\label{eqn:rmspropStep}\n \\tau_k = \\frac{\\eta}{\\sqrt{E[(\\frac{\\partial\\mathcal{L}}{\\partial p_i})^2]^{(k)}} + \\epsilon}\n\\end{equation}\n\\begin{equation}\n\\label{eqn:rmspropAvg}\n E[(\\partial \\mathcal{L})^2]^{(k)} = \\rho E\\left[\\left(\\frac{\\partial\\mathcal{L}}{\\partial p_i}\\right)^2\\right]^{(k-1)} + (1-\\rho)\\left(\\frac{\\partial\\mathcal{L}}{\\partial p_i}\\right)^2\n\\end{equation}\nAbove, $p_i^k$ denotes the value of the $i$th parameter on the $k$th step of the optimization, $\\tau_k$ is a step size that is adaptively adjusted according to the RMSprop algorithm in equations \\ref{eqn:rmspropStep} and \\ref{eqn:rmspropAvg}.\nThe running average of the square of parameter derivatives in the denominator of $\\tau_k$ allows for the step size to decrease when the derivative is large, which should hedge against the possibility of taking excessively large steps.\nConversely, a smaller denominator when the derivative is small allows for larger steps to be taken.\nThe weighting in the running average is controlled by a factor $\\rho$ that can be thought of as the amount of memory retained of past gradients for adjusting $\\tau_k$, and $\\eta$ again denotes the chosen initial step size.\nIn order to avoid possible singularities when the gradient is very close to zero, a small positive number $\\epsilon$ is included in the denominator of $\\tau_k$.\nEquation \\ref{eqn:rmspropMomentum} shows the momentum effect in which the update for the parameter on the $k+1$ step depends on the update from the previous step as well as the current gradient.\nWe also follow the Booth group in applying a damping factor to the momentum by replacing $\\gamma_k$ with $\\gamma_k e^{-(\\frac{1}{d})(k-1)}$.\nThe quantity $d$ effectively controls how quickly the momentum is turned off, which eventually turns the algorithm into SGD.\nThe values of $d$,$\\eta$, $\\rho$, and $\\epsilon$ may all be chosen by the user of the algorithm and are known as hyperparameters in the machine learning literature.\nIn the results we present using this method, we have used $d=100$, $\\rho = .9$ and $\\epsilon = 10^{-8}$.\nWe have found adjusting these hyperparameters has relatively little influence on optimization performance compared to choices for step size $\\eta$, but their influence could be explored more systematically.\n\nThe Clark group's algorithm takes a far simpler form\n\\begin{equation}\n\\label{eqn:randomStep}\n p_i^{k+1} = p_i^k - \\alpha \\eta \\frac{\\left|\n \\frac{\\partial \\mathcal{L}}{\\partial p_i^k}\n \\right|}{\\frac{\\partial \\mathcal{L}}{\\partial p_i^k}}\n\\end{equation}\nand has been recently used with neural network wave functions in the context of the Hubbard model.\\cite{Luo2018}\nHere $\\alpha$ is a random number in the interval $(0,1)$ and $\\eta$ sets the overall scale of the random step size.\nThe motivation for allowing the step size to be random is 
that it may help the optimization escape local minima that it encounters.\nWithin VMC, this algorithm can be run with fewer samples per iteration even compared to other gradient descent based algorithms, as only the sign of the derivative needs to be known, but it typically requires many more iterations to converge.\n\nADAM and AMSGrad are popular methods within the machine learning community\\cite{Ruder2016,Reddi2018,Kingma} and have similar forms. ADAM is given by:\n\\begin{equation}\n\\label{eqn:adamUpdate}\n p_i^{k+1} = p_i^{k} - \\eta \\frac{m_i^k}{\\sqrt{n_i^k}}\n\\end{equation}\n\\begin{equation}\n\\label{eqn:adamMomentum}\n m_i^k = (1-\\beta_1)m_i^{k-1} + \\beta_1 \\frac{\\partial \\mathcal{L}}{\\partial p_i^k}\n\\end{equation}\n\\begin{equation}\n\\label{eqn:adamStep}\n n_i^k = (1-\\beta_2) \\hspace{0.5mm} n_i^{k-1}\n + \\beta_2 \\bigg(\\frac{\\partial \\mathcal{L}}{\\partial p_i^k}\\bigg)^2\n\\end{equation}\nAMSGrad is a recent adaptive step size scheme developed in response to the limitations of ADAM \\cite{Reddi2018} and has almost the same form except for a slightly different denominator.\n\\begin{equation}\n\\label{eqn:amsgradStep}\n n_i^k = \\mathrm{max}\\Bigg(n_i^{k-1}, \\hspace{2mm}\n (1-\\beta_2) \\hspace{0.5mm} n_i^{k-1} +\n \\beta_2 \\bigg(\\frac{\\partial \\mathcal{L}}{\\partial p_i^k}\\bigg)^2\\Bigg)\n\\end{equation}\n\nIn our calculations, we have used $\\beta_1 = 0.1$ and $\\beta_2 = 0.01$ for both AMSGrad and ADAM in line with the choice made by the Sharma group.\\cite{Sabzevari2018,Mahajan2019}\nIt may be worth noting that a different convention appears in the machine learning literature using $1-\\beta_1$ and $1-\\beta_2$ for what we and the Sharma group call $\\beta_1$ and $\\beta_2$. \\cite{Ruder2016,Reddi2018}\n\nCompared to the LM, these first derivative descent methods have some significant advantages.\nTheir low memory usage and reduced nonlinear bias make them a natural fit for the large parameter sets that the LM struggles to handle.\nThey are remarkably robust in the presence of noise and do not need special safeguards against statistical instabilities such as the LM's shifts.\nAt a basic practical level, the descent methods are also far simpler to implement than the LM and especially its blocked variant.\nHowever, as we will see in our results, they often struggle\nto reach the vicinity of the minimum using a comparable sampling effort.\n\n\\subsection{A Hybrid Optimization Method}\nIn an attempt to retain the benefits of both the LM and the AD\ntechniques, we have developed a hybrid optimization scheme that can be applied to large numbers of parameters.\nOur approach alternates between periods of optimization using AD and sections using the BLM. \nAmong other advantages, this allows us to use gradient descent to identify the $N_o$ previous important directions in parameter space that are used in the BLM via equation \\ref{eqn:blmHistory}. \nThe precise mixture of both methods can be flexibly altered, but a concrete example would be to first optimize for 100 iterations using RMSprop.\nBy storing a vector of parameter value differences every 20 iterations, we would produce 5 vectors that can be used for equation \\ref{eqn:blmHistory} in some number (say three) steps of the BLM. 
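\nOne macro-iteration of this alternation is sketched below in Python; the functions standing in for an RMSprop update and a BLM update are hypothetical placeholders for the machinery detailed in the preceding sections.\n\\begin{verbatim}\n# One hybrid macro-iteration (schematic): descent iterations, with parameter\n# differences stored periodically as previous important directions, followed\n# by a few BLM steps that use those directions. params is a NumPy array.\ndef hybrid_macro_iteration(params, rmsprop_step, blm_step,\n                           n_descent=100, store_every=20, n_blm=3):\n    directions, prev = [], params.copy()\n    for k in range(1, n_descent + 1):\n        params = rmsprop_step(params)\n        if k % store_every == 0:\n            directions.append(params - prev)  # one of N_o = 5 directions\n            prev = params.copy()\n    for _ in range(n_blm):\n        params = blm_step(params, directions)\n    return params\n\\end{verbatim}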
\nAfter the execution of these BLM steps, the algorithm would return to another 100 iterations of descent, and the process repeats until the minimum is reached.\nFigure \\ref{fig:hybridschematic} shows a generic depiction of how the ground state energy optimization may behave over the course of the hybrid method.\nThere are extended sections of computationally cheap optimization using gradient descent interwoven with substantial energy improvement over a few BLM steps.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=8.3cm]{hybrid_opt.eps}\n \\caption{Schematic depiction of a typical energy optimization using the hybrid method. The dashed box around a section of descent in green and BLM in red defines a macro-iteration of the method.}\n \\label{fig:hybridschematic}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=8.3cm]{Hybrid_contour.eps}\n \\caption{Schematic representation of gradient descent corrections in green to the red BLM steps, which we have observed to produce a greater degree of uncertainty about the location of the final minimum.}\n \\label{fig:hybridcontour}\n\\end{figure}\n\nThe use of AD and the BLM should naturally allow parameter sets beyond the traditional LM limit of about 10,000 variables to be addressed, a limit we will surpass in the present study in the difluorodiazene system.\nFor now, Table \\ref{tab:memCost} lays out how the memory cost of the methods we are considering scales with the number of parameters $N$.\nBoth the hybrid method and the BLM steps it contains have a memory scaling that is intermediate between that of the standard LM and the descent methods.\nThe cost is given only approximately because, while it is normally dominated by the cost of the $N_b$ blocks in the BLM, there are additional contributions related to how many directions are retained from the first BLM diagonalization and how many old directions are used.\\cite{Zhao2017}\n\n\\begin{table}[htbp]\n \\centering\n \\small\n \\caption{Rough memory cost scaling for the optimization methods we examine, with $N$ the number of optimized parameters and $N_b$ the number of blocks.}\n \\begin{tabular}{ll}\n Method Type & \\multicolumn{1}{l}{Memory Cost} \\\\ \\hline\n & \\\\\n Standard Linear Method & $O(N^2)$ \\\\ \n & \\\\\n Blocked Linear Method & $\\thicksim O\\left(\\frac{N^2}{N_b}\\right)$ \\\\ \n & \\\\\n Hybrid Method & $\\thicksim O\\left(\\frac{N^2}{N_b}\\right)$ \\\\ \n & \\\\\n Descent Methods & $O(N)$\n \\end{tabular}%\n \\label{tab:memCost}%\n\\end{table}%\n\nOne key motivation for including sections of AD, especially when the method is near convergence, is to counteract the noise we observe in LM updates.\nWhile the LM tends to converge in a relatively small number of steps, we find the individual energies still fluctuate from iteration to iteration by multiple m$E_h$, particularly when we are working with wave functions that possess many highly nonlinear parameters.\nFigure \\ref{fig:hybridcontour} shows a cartoon of this behavior near the minimum\nthat prevents tight convergence.\nUnless the shifts are large enough to constrain it to very small steps, the LM will tend to bounce around near the true minimum due to substantial (and biased) statistical uncertainties in its step direction.\nThe resulting energy fluctuations lead to ambiguity in what to report as the definitive LM energy.\nOne could take the absolute lowest energy reached on any iteration, but this is fairly unsatisfactory as it feels too dependent on a \"lucky\" step landing 
right on the minimum.\nOur practice has been to take an average over multiple steps at the end of the optimization when parameter values should be converged.\nHowever, this will generally include iterations with upward energy deviations due to the step uncertainties.\nThe use of AD offers a way out of this dilemma because it can correct the errors in the LM steps by moving towards the minimum more smoothly and with less bias.\nAs we shall demonstrate in our results, these considerations seem to give the hybrid method a statistical advantage over the LM by achieving lower error bars for the same computational cost.\nThey are also the basis of our recommendation for finishing optimizations with a long section of pure AD, which we shall show tends to improve the energy and greatly diminish the final statistical uncertainty.\n\n\\subsection{Wave Functions}\n\nAn assessment of optimization methods' effectiveness requires consideration of the form of the wave function that they are applied to. \nMulti-Slater determinant wave functions have been a common choice of ansatz in QMC and are typically combined with Jastrow factors that help recover some electron correlation and describe particle cusps.\\cite{Foulkes2001} \nWe specify our Multi-Slater Jastrow (MSJ) wave function with the following set of equations.\n\n\\begin{equation}\n\\label{eqn:psi}\n \\Psi = \\psi_{MS} \\psi_J \\psi_C\n\\end{equation}\n\\begin{equation}\n\\label{eqn:psiMS}\n \\psi_{MS} = \\sum_{i=0}^{N_D} c_i D_i\n\\end{equation}\n\\begin{equation}\n\\label{eqn:psiJ}\n \\psi_J = \\exp\\left(\\sum_i \\sum_j \\chi_j(|r_i - R_j|) + \\sum_k \\sum_{l>k} u_{kl} (|r_k - r_l|)\\right)\n\\end{equation}\n\n\\begin{equation}\n\\label{eqn:psiNCJF}\n \\psi_C = \\exp\\left(\\sum_{IJ} F_{IJ} N_I N_J + \\sum_K G_K N_K\\right) \n\\end{equation}\n\nIn equation \\ref{eqn:psiMS} above, $\\psi_{MS}$ consists of $N_D$ Slater determinants $D_i$ with coefficients $c_i$.\nIt can be generated by some other quantum chemistry calculation such as complete active space self-consistent field (CASSCF) or a selective CI method prior to the VMC optimization.\nIn the one- and two-body Jastrow factor $\\psi_J$, we have\nfunctions $\\chi_j$ and $u_{kl}$, which are constructed from\noptimizable splines whose form is constrained so as to enforce\nany relevant electron-electron and electron-nuclear cusp conditions.\n\\cite{Kim2018}\n\nWhile MSJ wave functions with these types of traditional Jastrow\nfactors (TJFs) have been successfully used in many contexts,\n\\cite{Foulkes2001,Umrigar2007,Clark2011,Assaraf2017,Flores2019}\nmore involved correlation factors can be considered.\nTypically, this involves the construction of many-body Jastrow\nfactors, \\cite{Umrigar1988,Huang1997,Casula2003}\nwhich may employ various polynomials of interparticle distances \n\\cite{Huang1997,Luchow2015,LopezRios2012}\nor an expansion in an atomic orbital basis\n\\cite{Casula2003,Casula2004,Beaudet2008,Sterpone2008,\n Marchi2009,Barborini2012,Zen2015}\nor a set of local counting functions. 
\\cite{Goetz2017,Goetz2018}\nThe latter case of many-body Jastrows, known as\nreal space number-counting Jastrow factors (NCJF),\nis employed here as an example many-body Jastrow factor.\nIn real space, Jastrow factors have historically been effective\nat encoding small changes to the wave function associated\nwith weak correlation effects, \\cite{Foulkes2001}\nbut work in Hilbert space and lattice model VMC\nreminds us that they can also be used to aid in the\nrecovery of strong correlations.\n\\cite{gutzwiller_gf,Neuscamman2013,Neuscamman2016}\nOne way to view NCJFs is as an attempt to develop a real space\nmany-body Jastrow factor that can aid in recovering both\nstrong and weak electron correlations.\n\\cite{Goetz2018}\n\nThe form of our NCJFs in equation \\ref{eqn:psiNCJF} has the same structure as previously proposed four-body Jastrow factors,\\cite{Marchi2009} where $N_I$ denotes the population of a region and the $F_{IJ}$ and $G_K$ are linear coefficients.\nThe region populations are computed by summing the values of counting functions at each electron coordinate.\n\\begin{equation}\n\\label{eqn:ncjfPop}\n N_I = \\sum_i C_I (\\mathbf{r_i})\n\\end{equation}\nIn this work, we use a recently introduced \\cite{Goetz2018} form for the counting functions consisting of normalized Gaussians.\n\\begin{equation}\n\\label{eqn:ncjfCount}\n C_I = \\frac{g_I(\\mathbf{r})}{\\sum_j g_j (\\mathbf{r})}\n\\end{equation}\nwhere \n\\begin{equation}\n\\label{eqn:ncjfGauss}\n g_j(\\mathbf{r}) = \\exp ((\\mathbf{r} - \\mu)^T \\mathbf{A} (\\mathbf{r} - \\mu) + K)\n\\end{equation}\ndescribes a Gaussian about a center $\\mu$. \nBy placing these normalized Gaussians at various centers, we can divide up space with a Voronoi tessellation.\nSchemes have been developed to generate partitions that either consist of regions centered on atoms or of finer grained divisions of space that can capture correlation within an atomic shell.\nWe make use of both types of partitioning methods for different wave functions in our study.\nFor simplicity, we only consider optimization of the parameters $F_{IJ}$ in the $F$-matrix of our NCJFs (the coefficients $G_K$ can be eliminated with a basis transformation of the region populations $N_I$),\n\\cite{Goetz2018}\nbut in principle the parameters defining the Gaussians $g_j$ could also be optimized.\nWe provide details of the Gaussians used in our ansatzes in Appendix D.\n\nWe also consider the problem of optimizing the molecular orbital\nshapes alongside the other variational parameters. 
\nThe ability to relax orbitals is important for successful study of many systems, particularly those involving excited state phenomena.\\cite{Flores2019}\nWe make use of considerable theoretical and computational machinery based on the table method enhancements developed by Filippi and coworkers \\cite{Filippi2016,Assaraf2017} that enables efficient evaluation of orbital rotation derivatives in large MSJ wave functions.\nA rotation of molecular orbitals can be described with a unitary transformation with matrix $\\mathbf{U}$ parameterized as the exponential of an antisymmetric matrix $\\mathbf{X} = - \\mathbf{X^T}$\n\\begin{equation}\n\\label{eqn:unitaryMat}\n \\mathbf{U} = \\exp (\\mathbf{X})\n\\end{equation}\nImpressively, one can obtain all wave function derivatives with\nrespect to the elements of $\\mathbf{X}$ for a large multi-Slater\ndeterminant ansatz for a cost that is only slightly higher than\nthat of the local energy evaluation.\nFor the details of how this is accomplished, we refer the reader\nto the original publications. \\cite{Filippi2016,Assaraf2017}\nFrom the standpoint of parameter optimization, the main significance\nof the orbitals (and the NCJFs) lies in both their nonlinearity\nand their strong coupling to other optimizable parameters.\nIn practice, we find that turning on the optimization of orbitals\nand NCJFs greatly enhances the difficulty of the optimization problem\ncompared to MSJ optimizations in which only the CI \ncoefficients and one- and two-body Jastrow parameters are varied.\n\n\\section{Results}\n\n\\subsection{Multi-Slater Jastrow N$_2$}\n\\label{sec:n2simple}\n\nFor a small initial test system, we consider the nitrogen dimer $\\text{N}_2$ at the near-equilibrium and stretched bond lengths of 1.1 and 1.8 \\r{A}. 
\nThe nitrogen dimer is a known example of a strongly correlated system and a common testing ground for quantum chemistry methods.\\cite{Langhoff1974,Rossi1999,Chan2004,Braida2011,Mazziotti2004,Neuscamman2013,Neuscamman2016}\nThe initial wave function ansatz consists of a modest number of Slater determinants (67 for the equilibrium geometry and 169 for the stretched, the result of a 0.01 cutoff limit on determinant coefficients) with traditional one-body and two-body Jastrow factors.\nThe Jastrow splines provide 30 additional optimizable parameters via 10-point cubic B-splines with cutoff distances of 10 bohr for the electron-nuclear and same-spin and opposite-spin electron-electron components.\nThe Slater determinant expansion is the result of a (10e,12o) CASSCF calculation in GAMESS\\cite{Baldridge1993} using BFD pseudopotentials and the corresponding VTZ basis set.\\cite{Burkatzki2007}\nDue to the simplicity of the variable space in this case, we have\nemployed the $|\\Psi|^2$ guiding function for all optimization\nmethods, including the LM and BLM.\nSee Appendix B for further computational details.\n\nThe first and simplest study we can make is to optimize our ansatzes with each of our optimization techniques until convergence and compare the final energies.\nNote that all of our VMC optimizations with different methods in this study were performed using our implementations within a development version of the QMCPACK software package.\\cite{Kim2018}\nAs N$_2$ is a small enough system that the traditional LM can be easily \nemployed, we take the approach of first obtaining a traditional LM\noptimization result and then using it as a reference against which\nto compare the performance of other methods.\nFor the gradient descent methods, multiple optimizations were attempted \nwith the initial step sizes tweaked from run to run based on a rough \nexamination of how parameter values compared to the LM's results.\nWe find that the chosen values for the step sizes and other\nhyperparameters in the gradient descent algorithms often\nlead to apparent convergence at different energies.\nIt is therefore essential to make effective choices for these\nparameters, which in part seems to rely on one's experience\nwith a given system.\n\nFigures \\ref{fig:n2citjfeq} and \\ref{fig:n2citjfst} show energy differences relative to the LM result when optimizing the equilibrium and stretched nitrogen dimer wave functions, respectively.\nTables providing the precise energies and statistical uncertainties as well as the step sizes used are shown in the appendices.\nFirst, we see that the choice of step sizes can have a substantial influence on the quality of gradient descent results.\nIn some cases, the same method can appear to converge to energies more\nthan 20 m$E_h$ apart when run with different initial step sizes.\nWhile many of the gradient descent optimizations clearly did not reach\nthe minimum, the energy differences from the LM are only about 5 m$E_h$\nor less when looking at the runs that used what turned out to be the\nbest choices for the hyperparameters.\nWith further tweaking of the hyperparameters, we would guess that at least\nsome of these descent methods could match the performance of the LM\nin this simple test case.\nFinally, we observe that the hybrid method performs about as well\nas the best descent optimizations, typically reaching energies that\nagree with the LM within error bars.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=8.3cm]{n2_tjfci_equilibrium.eps}\n 
\\caption{Different methods' optimized energies relative to that of the\n LM for equilibrium $\\text{N}_2$ when optimizing CI coefficients\n and the TJF.\n }\n \\label{fig:n2citjfeq}\n\\end{figure}\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=8.3cm]{n2_tjfci_stretched.eps}\n \\caption{Different methods' optimized energies relative to that of the\n LM for stretched $\\text{N}_2$ when optimizing CI coefficients\n and the TJF.\n }\n \\label{fig:n2citjfst}\n\\end{figure}\n\n\n\\subsection{All parameter N$_2$}\n\\label{sec:n2all}\n\nWe now add a NCJF and enable orbital optimization in order\nto extend the comparison in a setting with a larger number\nand variety of nonlinear parameters.\nWe will consider the relative merits of the optimization methods\nin much greater detail in this setting as it offers\na clearer view of their differences.\nFor the number-counting Jastrow factor, we generated a set of 16 counting regions with 8 octants per atom after dividing space in half with a plane bisecting the bond axis.\nThe details are given in Appendix D, but we will note here that\nthis adds 135 $F$-matrix parameters to the optimization.\nAllowing for orbital optimization adds another 663 and 618 parameters for the equilibrium and stretched cases, respectively.\nNote that our implementation of orbital optimization in QMCPACK removes rotation parameters for orbitals that are not occupied in any determinant and also between orbitals occupied in all determinants, and so the precise number of rotation parameters is a function of the determinant expansion.\nWith orbital optimization enabled, the choice of importance sampling\nfunction becomes an issue, and we now employ $|\\Phi|^2$ \nfor all LM and BLM steps with the $\\epsilon$ weight set to 0.001.\n\nFigures \\ref{fig:n2alleq} and \\ref{fig:n2allst} show converged ground state energies relative to that of the LM.\nFor this more difficult version of the nitrogen dimer, we find that the gradient descent methods are less effective.\nThey now often yield energies that can be 10 m$E_h$ or more above the LM's answer though we again find that choice of step size plays a significant role.\nThe worst results for AMSGrad and ADAM were the result of choosing inappropriately large step sizes and simple reductions in the initial step size produced improvements in energy of tens of m$E_h$ though the final result still remained well above the LM's.\nWhen we examined the optimizations over the course of their iterations, the gradient methods typically displayed some ability to quickly improve the wave function and energy initially, but they would then plateau and only very slowly improve the energy thereafter.\nExtrapolating from our results indicates that even if these gradient descent methods eventually converge to the minimum, they will only do so after thousands more iterations and at a computational cost well beyond that of the LM.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=8.3cm]{n2_all_param_equilibrium.eps}\n\\caption{Different methods' optimized energies relative to that of the\n traditional LM for equilibrium $\\text{N}_2$ when all parameters\n are optimized simultaneously.\n See also Table \\ref{tab:n2alleqData}.}\n \\label{fig:n2alleq}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=8.3cm]{n2_all_param_stretched.eps}\n \\caption{Different methods' optimized energies relative to that of the\n traditional LM for stretched $\\text{N}_2$ when all parameters\n are optimized simultaneously.\n See also 
Table \\ref{tab:n2allstData}.}\n \\label{fig:n2allst}\n\\end{figure}\n\n\n\n\\begin{table}[htbp]\n \\centering\n \\footnotesize\n \n \\caption{Energies, uncertainties, and sample numbers for optimization of all parameters in equilibrium N$_2$.}\n \\begin{tabular}{llll}\n Method & \\multicolumn{1}{l}{Energy (a.u.)} & \\multicolumn{1}{l}{Uncertainty (a.u.)} & \\multicolumn{1}{l}{Samples} \\\\ \\hline\n \n Hybrid 1& -19.9263 & 0.0004 &5,400,000 \\\\ \n Hybrid 2& -19.9266 & 0.0004 &10,000,000\\\\\n Hybrid 3& -19.9272 & 0.0004 &42,000,000\\\\ \n & & & \\\\\\hline\n RMSprop 1& -19.8974 & 0.0005 & 20,000,000\\\\\n RMSprop 2& -19.9242 & 0.0004 & 20,000,000\\\\\n & & & \\\\\\hline\n AMSGrad 1& -19.9115 & 0.0006 & 20,000,000\\\\\n AMSGrad 2& -19.9221 & 0.0004 & 20,000,000\\\\\n & & & \\\\\\hline\n ADAM 1& -19.8973 & 0.0009 & 20,000,000\\\\\n ADAM 2& -19.9165 & 0.0005 & 20,000,000 \\\\\n & & & \\\\\\hline\n Random 1& -19.9019 & 0.0006 & 20,000,000\\\\\n Random 2& -19.9157 & 0.0005 & 20,000,000 \\\\\n & & & \\\\\\hline\n LM & -19.9280 & 0.0008 & 40,000,000\\\\\n & & & \\\\\\hline\n BLM & -19.9297 & 0.0008 & 80,000,000\\\\\n & & & \\\\\\hline\n DF-BLM & -19.9293 &0.0001 & 90,000,000\\\\\n DF-Hybrid 1& -19.9290 &0.0001 & 15,400,000\\\\\n DF-Hybrid 2& -19.9290 &0.0001 & 20,000,000\\\\\n DF-Hybrid 3& -19.9293 &0.0001 & 52,000,000\\\\\n \n \\end{tabular}%\n \\label{tab:n2alleqData}%\n\\end{table}%\n\n\n\\begin{table}[htbp]\n \\centering\n \n \\footnotesize\n \\caption{Energies, uncertainties, and sample numbers for optimization of all parameters in stretched N$_2$.}\n \\begin{tabular}{llll}\n Method & \\multicolumn{1}{l}{Energy (a.u.)} & \\multicolumn{1}{l}{Uncertainty (a.u.)} & \\multicolumn{1}{l}{Samples}\\\\ \\hline\n \n Hybrid 1 & -19.6316 & 0.0004 & 5,400,000\\\\\n Hybrid 2 & -19.6321 & 0.0004 & 8,400,000\\\\\n Hybrid 3 & -19.6340 & 0.0004 & 49,200,000\\\\\n & & & \\\\ \\hline\n RMSprop 1& -19.6277 & 0.0005 & 20,000,000\\\\\n RMSprop 2 & -19.6313 & 0.0005 & 20,000,000\\\\\n & & & \\\\ \\hline\n AMSGrad 1& -19.5571 & 0.0010 & 20,000,000\\\\\n AMSGrad 2 & -19.6064 & 0.0008 & 20,000,000\\\\\n AMSGrad 3 & -19.6268 & 0.0008 & 20,000,000\\\\\n & & & \\\\ \\hline\n ADAM 1& -19.5889 & 0.0009 & 20,000,000\\\\\n ADAM 2 & -19.6179 & 0.0006 & 20,000,000\\\\\n ADAM 3 & -19.6192 & 0.0005 & 20,000,000\\\\\n & & & \\\\ \\hline\n Random 1& -19.6112 & 0.0006 & 20,000,000\\\\ \n Random 2 &-19.6196 & 0.0006 & 20,000,000\\\\\n & & & \\\\ \\hline\n LM & -19.6356 & 0.0009 & 40,000,000\\\\\n & & & \\\\ \\hline\n BLM & -19.6354 &0.0008 & 80,000,000\\\\\n & & & \\\\ \\hline\n DF-BLM & -19.6356 &0.0001 & 90,000,000\\\\\n DF-Hybrid 1& -19.6352 &0.0001 & 15,400,000\\\\\n DF-Hybrid 2& -19.6354 &0.0001 & 18,400,000\\\\\n DF-Hybrid 3& -19.6346 &0.0001 & 59,200,000\\\\\n \n \\end{tabular}%\n \\label{tab:n2allstData}%\n\\end{table}%\n\nA more careful comparison of the different methods can be made by referring to Tables \\ref{tab:n2alleqData} and \\ref{tab:n2allstData}, which list the precise converged energies and their error bars.\nWe also report the total number of samples used in each optimization as a proxy for computational effort, noting that for the BLM and the BLM portion of the hybrid method we double counted samples out of fairness as the BLM steps require running over their samples twice.\nIn assessing cost, one must also consider the statistical uncertainty achieved, where we see that the LM and BLM are at a disadvantage.\nTo help illustrate the update uncertainty contribution to this error, which we first discussed in 
the theoretical section above, we show the energy versus LM iteration for equilibrium N$_2$ in Figure \\ref{fig:n2lm}.\nThe fluctuations in energy from step to step, sometimes by as much as 2 m$E_h$, demonstrate the difficulty the LM faces from the uncertainty in its steps near the minimum.\nIn this case, we see that the LM's final energy uncertainty is driven by the update step uncertainty rather than the uncertainty in evaluating the energy for a given set of parameter values at a particular iteration.\nWe have observed similar behavior in the BLM and include the result of a BLM calculation in the tables.\nNote that the tabulated energies come from an average over the last ten optimization steps in the case of the standard LM and BLM and from an average over the last 50 descent steps in the case of the hybrid and pure descent methods.\n\\begin{figure}\n \\centering\n \\includegraphics[width=8.3cm]{LM_opt.eps}\n \\caption{The standard LM optimization for all parameter equilibrium N$_2$.}\n \\label{fig:n2lm}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=8.8cm]{n2_params.eps}\n \\caption{\n Values for the first off-diagonal $F$-matrix element ($F_{01}$),\n the first electron-nuclear TJF spline parameter ($u_0$),\n and the second orbital rotation variable ($X_{02}$) at each\n micro iteration of the ``Hybrid 1'' optimization for\n all parameter equilibrium N$_2$.\n }\n \\label{fig:n2param}\n\\end{figure}\n\nFrom the tabulated data, we see that the hybrid optimization can achieve\nlower energies than the gradient descent methods using fewer samples,\nand that its results are typically within a few m$E_h$ of the traditional LM.\nWhile the accelerated descent sections of the hybrid method provide\nsome swift energy reductions early on when the wave function is still far\nfrom the minimum in parameter space, the BLM steps in the algorithm\ngreatly accelerate the process of bringing the parameters near to the\nminimum, as can be seen in Figure \\ref{fig:n2param}.\nLooking at the electron-nuclear spline parameter and the orbital\nrotation parameter, we see typical\ncases in which rapid initial parameter movement during the early\npart of the first RMSprop stage transitions to much slower movement\nlater in that stage, followed by very little movement at all\nin later RMSprop stages.\nNote that the latter can be explained largely by the need to\nkeep initial step sizes small in later stages to avoid\nsignificant upward deviations in the energy as the RMSprop method\nrebuilds its momentum history.\nIn between these AD stages, the BLM updates move the parameter\nvalues in much larger steps, greatly accelerating convergence.\nThis behavior makes the hybrid approach somewhat more black box\nas compared to the pure descent approaches, as the ability to\nget near the minimum with a modest sampling effort is much\nless dependent on the choice of the initial step sizes than for\nthe AD methods.\nThis conclusion is supported by the fact that the hybrid\noptimizations in Tables \\ref{tab:n2alleqData} and \\ref{tab:n2allstData}\nused various initial step size settings (as discussed in Appendix B)\nand nonetheless produced lower energies than the pure descent methods\nin every case.\n\nAs discussed in our introduction of the hybrid method, another advantage\nis its ability to obtain a lower error bar at convergence\nthan the LM for the same overall computational cost.\nThis is a natural consequence of spending part of its sampling effort\non gradient descent steps that correct for 
the BLM steps' uncertainty\nand bias (as illustrated earlier in Figure \\ref{fig:hybridcontour})\nand that hew closer to\nthe zero-variance principle by importance sampling with $|\\Psi|^2$.\nTo demonstrate this advantage explicitly, we ran additional sets\nof LM and hybrid optimizations adjusted to have essentially the same\ntotal number of samples.\nWe then compare the standard error for the last ten LM steps\nand the last ten hybrid macro iterations in \nFigures \\ref{fig:n2eqstderr} and \\ref{fig:n2ststderr},\nwhere we find that the hybrid has a substantially lower\nstatistical uncertainty in every case.\nAssuming the usual $N^{-1\/2}$ decay of uncertainty\nwith sample size, the LM would require a factor of roughly\nfour times more samples to reach the hybrid's uncertainty.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=8.3cm]{N2_eq_error.eps}\n \\caption{Standard errors for the hybrid method and LM on all parameter\n equilibrium $\\text{N}_2$ vs different optimizations'\n total sampling costs.}\n \\label{fig:n2eqstderr}\n\\end{figure}\n\\begin{figure}\n \\centering\n \\includegraphics[width=8.3cm]{N2_st_error.eps}\n \\caption{Standard errors for the hybrid method and LM on\n all parameter stretched $\\text{N}_2$ plotted against different optimizations'\n total sampling costs.}\n \\label{fig:n2ststderr}\n\\end{figure}\n\nThese statistical advantages in the final energy can be improved even further if we finish an optimization with a long section of pure descent.\nTo demonstrate this, we have taken the final wave functions produced by the hybrid and BLM optimizations in Tables \\ref{tab:n2alleqData} and \\ref{tab:n2allstData} and applied a further period of optimization using RMSprop with initial step sizes of 0.001 for all parameters.\nThis ``descent finishing'' (DF) adds only a modest additional cost compared to the preceding optimization and yields a large improvement in statistical uncertainty and, in many cases, an improvement in the final energy value as well.\nThese advantages can be seen clearly in Figures \\ref{fig:n2dfeq} and \\ref{fig:n2dfst}, as well as in Tables \\ref{tab:n2alleqData} and \\ref{tab:n2allstData},\nwhere we observe final error bars that are a factor of\neight smaller than those of the LM.\nIn terms of cost, this implies that the traditional LM would\nhave required 64 times the original number of samples\nto achieve the DF-BLM or DF-hybrid precision.\nPut another way, we find that the DF-hybrid\napproach gives an equivalent or lower energy, with a much\nsmaller error bar, at a substantially lower cost.\nNote that, in contrast, we find that this DF approach is not very effective\nwhen used in conjunction with the pure descent methods, where\nit essentially amounts to restarting the methods at the\nparameter values found after the first run of their optimization.\nWhile we do find that this restarting of the accumulation\nof momentum can improve the energy, the wave function\nparameters still do not reach their optimal values and the\nenergy lowering vs total sampling cost is not competitive\nwith the DF-hybrid.\n
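\nFor clarity, the sample-count factors quoted in this section follow directly from this assumed scaling. Writing $N_0$ and $\\sigma_0$ for a reference sample count and error bar (notation introduced here only for this estimate), we have\n\\[\n\\sigma \\propto N^{-1\/2} \\quad \\Rightarrow \\quad N_{\\mathrm{required}} \\approx N_0 \\left( \\frac{\\sigma_0}{\\sigma_{\\mathrm{target}}} \\right)^{2},\n\\]\nso that halving the error bar costs a factor of $2^2 = 4$ in samples, while the factor-of-eight reduction delivered by descent finishing corresponds to $8^2 = 64$.\n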
\n\\begin{figure}\n \\centering\n \\includegraphics[width=8.3cm]{eq_N2_descent_finish.eps}\n \\caption{Converged energies in equilibrium $\\text{N}_2$ before and after a final descent optimization.}\n \\label{fig:n2dfeq}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=8.3cm]{st_N2_descent_finish.eps}\n \\caption{Converged energies in stretched $\\text{N}_2$ before and after a final descent optimization.}\n \\label{fig:n2dfst}\n\\end{figure}\n\n\nOur study of the nitrogen dimer provides some clarity on the relative strengths of the LM and gradient descent, while also pointing the way to a more effective synthesis of the two.\nGradient descent methods struggle in the presence of a variety of different highly nonlinear parameters, although they did perform better when we were only optimizing TJFs and CI coefficients.\nAmong the descent methods, we found that the RMSprop approach came the\nclosest to achieving the LM minimum energy.\nIt is of course difficult to rule out the possibility that this\nand other AD methods could reach the LM energy\nwith additional sampling and more experimentation with\nthe hyperparameters.\nHowever, it is far from obvious that this would\nbe cost-competitive, and the need to make careful and possibly\nsystem-specific choices for hyperparameters\nis somewhat antithetical to the general aspiration that an\noptimizer be as black-box as possible.\nFor its part, the LM is more effective at moving parameters\ninto the vicinity of the minimum, but tight convergence is then stymied by\nan unsatisfactory level of biased statistical uncertainty.\nAs a side note, this behavior --- in which the first derivative methods give\nbetter convergence once near the minimum but are at a relative disadvantage\nfar from the minimum --- is somewhat the reverse of what one would expect\nin deterministic optimization, where second derivative methods are at their\nstrongest relative to first derivative methods during the final tight\nconvergence in the vicinity of the minimum.\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=8.3cm]{eq_styrene_geometry.eps}\n \\caption{Equilibrium geometry of styrene. See Appendix C for structure coordinates.}\n \\label{fig:styrene_geom}\n\\end{figure}\nAlthough things are reversed in the stochastic VMC case, we stress that the\ntwo classes of methodology are strongly complementary, as they compensate\nfor each other's weaknesses.\nBy using a low-memory version (BLM or hybrid) of the LM to get near\nto the minimum and then handing off to an accelerated descent method\nto achieve tight convergence, we find better overall performance than when\nworking with either class of method on its own.\n
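\nA schematic of this combined strategy, in Python-style pseudocode, is given below; it is a minimal sketch rather than our actual implementation, and the routines \\texttt{rmsprop\\_step} and \\texttt{blm\\_step} are hypothetical placeholders (here trivial stubs) standing in for the VMC engine's descent and blocked-LM updates.\n\\begin{verbatim}\ndef rmsprop_step(params):  # stub: one accelerated-descent update\n    return params\n\ndef blm_step(params):      # stub: one blocked linear method update\n    return params\n\ndef hybrid_optimize(params, n_macro=6, n_ad=100, n_blm=3, n_finish=1000):\n    for macro in range(n_macro):\n        # AD section: cheap, unbiased first-derivative steps\n        for _ in range(n_ad):\n            params = rmsprop_step(params)\n        # BLM section: a few second-derivative steps that move the\n        # slowly converging nonlinear parameters in large jumps\n        for _ in range(n_blm):\n            params = blm_step(params)\n    # descent finishing: a long RMSprop run that relaxes the parameters\n    # and shrinks the statistical error bar of the final energy\n    for _ in range(n_finish):\n        params = rmsprop_step(params)\n    return params\n\\end{verbatim}\nThe default counts above mirror the settings used later for FNNF (100 RMSprop steps and three BLM steps per hybrid iteration) but are illustrative only.\n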
These insights in hand, we will now apply this combined approach in a pair of larger and more challenging VMC optimization examples.\n\n\\subsection{Styrene}\nWe first turn to styrene at its equilibrium geometry \n(Figure \\ref{fig:styrene_geom}), which offers an optimization with both\nmore electrons and more variables, but in which the traditional LM\nis still feasible for comparison.\nAs in N$_2$, we construct a multi-Slater wave function modified by both TJFs and a NCJF.\nTo generate our Slater determinants, we have employed the\nheat-bath selective CI (HCI) method as implemented in the Dice\ncode by Sharma and coworkers. \\cite{Holmes2016,Sharma2017}\nThe orbital basis for the HCI calculation was produced via a (14e,14o) CASSCF calculation in Molpro \\cite{MOLPRO_brief} using a recently developed set of pseudopotentials and their corresponding double zeta basis.\\cite{Bennett2017}\nIn this CASSCF basis, HCI then correlated 32 electrons (out of a total of 40 electrons left over after applying pseudopotentials) in 64 orbitals.\nFor our NCJF, we defined one counting region per atom, giving our $F$-matrix 135 optimizable parameters (see Appendix D for further NCJF details).\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=8.3cm]{addOrbs_F_styrene_descent_finish.eps}\n \\caption{Converged energies in equilibrium styrene before and after a final descent optimization.}\n \\label{fig:styrene_energies}\n\\end{figure}\n\nWe optimized our wave function in a staged fashion using the standard LM, the BLM, and the hybrid method.\nFirst, we conducted a partial optimization of the TJFs and the 100 most important CI coefficients.\nWe then turned on the optimization of the orbitals and the NCJF's $F$-matrix, reaching a total of 4,570 parameters, most of them highly nonlinear.\nIn the hybrid and BLM optimizations, the parameters were divided into 5 blocks and we used $N_k = 50$ and $N_o = 5$ for our numbers of kept directions and previous important directions, respectively.\nWe used a value of 0.001 for $\\epsilon$ in the $|\\Phi|^2$ distribution for the LM and BLM sampling.\nThese optimizations were then followed by 1,000 steps of RMSprop.\nAs shown in Figure \\ref{fig:styrene_energies}, we find that our hybrid method reaches a converged energy as low as or lower than that of the standard and blocked LM, and finishing our optimizations with descent provides a substantial improvement in the statistical uncertainty even in this more challenging case.\n\n\n\n\\begingroup\n\\setlength{\\tabcolsep}{1.5pt}\n\\renewcommand{\\arraystretch}{1.25}\n\\begin{table}\n \\centering\n \\small\n \\caption{A summary of the VMC optimization stages in FNNF showing\n the number of determinants\n $N_d$ included from HCI, which parameters are optimized,\n and the total number $N_p$ of optimized parameters.\n Note that CI coefficients are optimized at every stage.\n Stages 2, 3, and 4 start from the parameter values from\n the previous stage, with newly added determinants' coefficients\n initialized to zero.\n We also report the number of iterations performed in each stage,\n which for stage 4 is simply the number of RMSprop steps.\n A hybrid iteration, on the other hand, consists of 100 RMSprop steps\n followed by three BLM steps.\n All RMSprop steps use 20,000 samples drawn from $|\\Psi|^2$,\n while the BLM steps each use 1 million samples drawn from the\n $|\\Phi|^2$ guiding function with $\\epsilon$ set to 0.01.\n }\n \\begin{tabular}{cclcccrr}\n Stage & Method & $N_d$ & TJF & $F$-matrix & Orbitals & $N_p$ & Iterations \\\\\n \\hline\n 1 & Hybrid & $10^2$ & \\checkmark & & & 139 & 9 \\\\\n 2 & Hybrid & $10^3$ & \\checkmark & \\checkmark & & 1048 & 4 \\\\\n 3 & Hybrid & $10^4$ & \\checkmark & \\checkmark & \\checkmark & 15,573 & 6 \\\\\n 4 & AD & $10^4$ & \\checkmark & \\checkmark & \\checkmark & 15,573 & 1,000 \\\\\n \\end{tabular}%\n \\label{tab:fnnf_stages}%\n\\end{table}%\n\\endgroup\n\n\n\\subsection{FNNF}\n\\label{sec:fnnf}\n\nWe now turn our attention to a strongly correlated transition\nstate of the difluorodiazene (FNNF) \\textit{cis-trans}\nisomerization, where we test the hybrid optimization approach\non a much larger 
determinant expansion.\nThe FNNF isomerization can be thought of as a toy model molecule\nfor larger systems such as photoswitches,\nwhich have potential uses in molecular machines \\cite{Russew2010,Kinbara2005}\nand high-density memory storage. \\cite{tian2004switches}\nIn addition, FNNF itself is of interest as part of the synthesis\nof high energy polynitrogen compounds and has been the subject of\nmultiple electronic structure studies.\\cite{Christe1991,Christe2010,Lee1989}\nHere we focus on its strongly correlated transition state, which is\nthe direct analogue of the out-of-plane TS1 transition state \nin diazene. \\cite{Sand2012}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=18.5cm]{fnnf_plot.eps}\n \\caption{Left panel: energies during stages 3 and 4 of the FNNF\n optimization. The descent energies are reported as the\n average over the last 50 RMSprop steps within each block\n of 100 RMSprop steps, whereas the BLM energies are\n the energy estimates on the random samples used for the\n BLM update steps.\n Right panel: change in the value of the $F$-matrix\n parameter that couples the two nitrogen atoms' counting\n regions over the first three macro iterations\n of stage 3, with each micro iteration corresponding to\n one RMSprop or BLM step.\n The nine BLM points on the right panel correspond to\n the first nine BLM points on the left panel.\n }\n \\label{fig:fnnf_plot}\n\\end{figure*}\n\nOur treatment of this transition state began by locating its\ngeometry via an (8e,8o) CASSCF optimization in the cc-pVTZ basis\nusing Molpro.\\cite{MOLPRO_brief}\nAt this geometry (given in Appendix C)\nwe then switch over to using BFD pseudopotentials\nand their corresponding triple zeta basis,\n\\cite{Burkatzki2007}\nin which we use the Dice code\n\\cite{Holmes2016,Sharma2017}\nto iterate an HCI calculation\nwith 24 electrons\ndistributed in the lowest 50 (8e,8o) CASSCF\norbitals until its variational wave function\nhas reached almost 2 million determinants.\nWe then import the first 10,000 of these determinants\ninto our VMC optimization and combine them with\nTJFs, atom-centered NCJFs, and orbital optimization,\nwhich produces an ansatz with over 15,000 variational parameters.\n\nOur VMC optimization proceeds in stages as summarized in\nTable \\ref{tab:fnnf_stages}.\nThis begins with TJFs and a 100-determinant ansatz from HCI,\nwith later optimization stages adding\nmore determinants and turning on the optimization of\nthe NCJF and orbital rotation variables.\nAs in styrene, the strategy is to bring the parameters\nnear to their optimal values with the help of the LM\nand then to perform a final\nunbiased relaxation via a long run of RMSprop AD.\nDue to the large number of variational parameters,\nwe incorporate the LM via the hybrid scheme, with the BLM\nsteps employing 2, 2, and 10 blocks\nduring stages 1, 2, and 3, respectively.\nIn stages 1 and 2, we used an initial RMSprop step size of 0.01 for TJFs and CI coefficients\nbefore setting it to 0.005 at the beginning of stage 3.\nFor the $F$-matrix parameters, we began by setting the initial\nstep size to 0.001,\nbut after observing a significant rise and fall of the energy\nduring the RMSprop section of the first hybrid macro iteration\nin stage 3, we reduced this to 0.0001 and also lowered the TJF and CI step size to 0.0005\nfor the last 4 macro iterations in that stage.\nFor all steps in stage 4, we maintained the 0.0001 step size for the $F$-matrix parameters and lowered the initial step size for TJFs and CI coefficients to 
0.0002.\nAn initial step size of 0.0001 was used for orbital parameters throughout both stages 3 and 4.\nThe BLM steps used the $|\\Phi|^2$ guiding function with a value of 0.01 for $\\epsilon$.\n\n\\begin{table}\n \\centering\n \\small\n \\caption{Energies of the transition state of FNNF.}\n \\begin{tabular}{llll}\n Method & \\multicolumn{1}{l}{Energy (a.u.)} & & \\multicolumn{1}{l}{Uncertainty (a.u.)} \\\\\n \\hline\n Hartree-Fock & -67.112730 & & \\vphantom{\\big(\\big)} \\\\\n CASSCF & -67.359100 & & \\vphantom{\\big(\\big)} \\\\\n \n VMC Stage 1 & -68.1017 & & 0.0011 \\vphantom{\\big(\\big)} \\\\\n \n VMC Stage 2 & -68.1213 & &0.0009 \\vphantom{\\big(\\big)} \\\\\n \n VMC Stage 3 & -68.1698 & & 0.0006 \\vphantom{\\big(\\big)} \\\\\n \n VMC Stage 4 \\hspace{2mm} & -68.1750 & &0.0002 \\vphantom{\\big(\\big)} \\\\\n \\end{tabular}%\n \\label{tab:fnnf_energies}%\n\\end{table}%\n\nThe energies resulting from this staged optimization\nare shown in Table \\ref{tab:fnnf_energies} and Figure \\ref{fig:fnnf_plot}.\nUnsurprisingly, stage 3 proved to be the most challenging and expensive\nstage, as it is where we hope to move all parameters near to their final\nvalues in a setting where the traditional LM would face severe memory\nbottlenecks.\nAs seen in Figure \\ref{fig:fnnf_plot}, both the AD and BLM steps\nclearly work to lower the energy during the first two macro iterations of\nstage 3.\nIn the last four macro iterations of stage 3, however, the energy decreases\nmore slowly and it is less clear, at least when looking at the energetics,\nwhether the BLM steps are still necessary.\nInstead, their importance is revealed by inspecting the movement of\nthe $F$-matrix values within the NCJF, an example of which is shown\nin the right-hand panel of Figure \\ref{fig:fnnf_plot}.\nAs in N$_2$, these parameters prove to be the most resistant to optimization\nvia AD, and we clearly see that although AD does gradually move their values\nin the same direction as the BLM, the BLM steps dramatically accelerate\ntheir optimization.\nThis effect is seen throughout all six macro iterations of stage 3, and\nso although the BLM energies are not obviously improving at the end of this\nstage, the inclusion of these steps is clearly still beneficial.\nNote that relaxing the NCJF after moving from a 1,000-determinant to a\n10,000-determinant expansion is important, because the larger determinant\nexpansion is better able to capture some of the correlation effects\nthat the NCJF is encoding, and so we expect (and indeed see) that this\ndiminishing of its role leads to smaller $F$-matrix values being optimal.\n\nAlthough we have again found that it would be difficult for AD alone\nto provide a successful optimization of our ansatz, the statistical\nadvantages of its incorporation are still quite clear.\nA close inspection of the sample sizes used in the optimization\nreveals that each of the AD and BLM points in the left panel of\nFigure \\ref{fig:fnnf_plot} corresponds to averaging over 1 million\nrandom samples.\nDespite this equal sampling effort, the uncertainties for the\nAD energy estimates are about one third the size of those for the\nBLM, implying that a pure BLM approach would require an order of\nmagnitude more sampling effort to produce similar results.\nTo understand this statistical advantage, we need to remember\ntwo important differences between the AD and BLM steps.\nFirst, the nonlinearity of the LM and BLM eigenvalue problem\nleads to biases in the update steps that can both increase 
the\nstep-to-step energy uncertainty and cause the method to optimize\noff-center from the true minimum.\nSecond, the use of an alternative guiding function for the\nBLM samples in order to mitigate this step uncertainty moves\nus away from the zero-variance regime enjoyed by traditional\n$|\\Psi|^2$ sampling.\nIf we were to instead employ traditional sampling, our energy\nestimates for a specific wave function would improve,\nbut the BLM step uncertainty would increase sharply.\nAs the AD methods do not suffer from these issues, they help us to\nfurther mitigate the BLM step uncertainty and to perform\na final, high-precision relaxation during stage 4.\nIn total, incorporating the AD steps in this case roughly\ndoubles the number of samples required, but is well worthwhile given\nthat it improves statistical efficiency by almost an order of magnitude.\n\nWhile it is possible that the NCJF parameters are not quite converged\nin this particular optimization\nand that increasing iteration counts in stages 1 through 3 could\nfurther improve the energy, the lessons learned from investigating\na large MSJ optimization for the FNNF transition state are already clear.\nWhile both the BLM and the AD methods can be used in this 10,000+ parameter\nregime, they bring highly complementary advantages to the optimization\nand so would appear to work better together than apart.\nIn particular, the BLM helps optimize the parameters that change only\nvery slowly during AD, whereas the statistical advantages of AD\ngreatly increase precision at a given sample size and work to\neliminate the statistical biases suffered by the BLM.\n\n\\section{Conclusions and Outlook}\n\\label{sec:conclusion}\n\nWe have found that a combination of first and second derivative\noptimization methods appears to work better than using either\nclass of method on its own when minimizing the energies of\nwave functions in variational Monte Carlo.\nThis is particularly true for wave functions with a wide\nvariety of different types of nonlinear parameters, as for\nexample when dealing simultaneously with traditional\none- and two-body Jastrow factors, many-body Jastrow factors,\nand orbital relaxations.\nWhile the linear method and its low-memory variants show a\nsuperior ability to move these nonlinear parameters into\nthe vicinity of their optimal values, accelerated descent\nmethods prove much more capable of converging them tightly\naround the minimum.\nThis situation stands as an interesting reversal of what\nis typically encountered in deterministic optimization,\nwhere second derivative methods are usually superior\nfor tight final convergence and first derivative methods\nperform relatively better in the early stages of an\noptimization.\nThe realities of working with statistically uncertain\nenergies and energy derivatives turn this expectation\non its head, both because of the need to stabilize the\nstatistics of the linear method's second derivative\nelements through zero-variance-violating importance\nsampling schemes and due to the nonlinear biases that\nare induced when solving the linear method's eigenvalue\nproblem.\nThe linear method's ability to quickly move the parameters\nnear the minimum, however, makes it appear that employing\nit as part of a hybrid approach is well worthwhile.\nIndeed, in our testing, hybridizing low-memory\nlinear method variants with accelerated descent methods\nprovides better energies with smaller\nstatistical uncertainties at a lower computational cost\nwhen compared to the stand-alone 
use of either the linear method\nor accelerated descent methods.\n\nLooking forward, there are many questions still to be answered\nabout the interplay between first and second derivative methods.\nFor example, although the blocked linear method greatly reduces\nmemory cost vs the traditional linear method, it is not clear that\nit can be applied effectively beyond 100,000 parameters in its\ncurrent form.\nOne thus wonders whether it is necessary to optimize all of the\nparameters during the linear method steps of the hybrid\napproach, or whether it may be possible to identify\n(perhaps during an ongoing optimization?) which parameters\nwould benefit from linear method treatment and which would not.\nWere such a sorting possible, accelerated descent methods with\ntheir even lower memory footprint could be left to deal with\nmost of the parameters, with only a relatively small subset\ntreated by the linear method steps.\nAnother important issue is making the hybrid approach as black\nbox and user friendly as possible.\nAlthough we have tested it here with many different descent\nstep size settings for the different parameter types, this has\nnot been a systematic survey.\nMore extensive testing may allow clear defaults to be settled\nupon so that users can reasonably expect a successful\noptimization without resorting to careful step size control.\nWe look forward to investigating these exciting possibilities\nin future.\n\n\n\\section{Acknowledgements}\nThis work was supported by the Office of Science,\nOffice of Basic Energy Sciences, the US Department of Energy,\nContract No.\\ {DE-AC02-05CH11231}.\nCalculations were performed using the Berkeley Research Computing\nSavio cluster and\nthe National Energy Research Scientific Computing Center,\na DOE Office of Science User Facility supported by the\nOffice of Science of the U.S. 
Department of Energy\nunder Contract No.\\ {DE-AC02-05CH11231}.\n\n\\section{Appendix A: Additional Energies}\n\n\\begin{table}[H]\n \\centering\n \\footnotesize\n \\caption{Precise values for optimizing CI coefficients and traditional Jastrow factors in equilibrium N$_2$.}\n \\begin{tabular}{llll}\n Method & \\multicolumn{1}{l}{Energy (a.u.)} & \\multicolumn{1}{l}{Uncertainty (a.u.)} & \\multicolumn{1}{l}{Samples}\\\\ \\hline\n \n Hybrid 1& -19.9083 & 0.0005 & 3,900,000\\\\\n Hybrid 2 & -19.9095 & 0.0006 & 5,400,000\\\\\n Hybrid 3 & -19.9101 & 0.0005 & 29,000,000\\\\\n & & & \\\\ \\hline\n RMSprop 1& -19.8936 & 0.0006 & 20,000,000\\\\\n RMSprop 2& -19.9091 & 0.0005 & 20,000,000\\\\\n & & & \\\\\\hline\n AMSGrad 1& -19.8998 & 0.0007 & 20,000,000\\\\\n AMSGrad 2 & -19.8926 & 0.0008 & 20,000,000\\\\\n AMSGrad 3 & -19.9048 & 0.0006 & 20,000,000\\\\\n AMSGrad 4 & -19.9071 & 0.0005 & 20,000,000\\\\\n & & & \\\\ \\hline\n ADAM 1& -19.9009 & 0.0005 & 20,000,000\\\\\n ADAM 2 & -19.9056 & 0.0006 & 20,000,000\\\\\n ADAM 3 & -19.9078 & 0.0005 & 20,000,000\\\\\n & & & \\\\ \\hline\n Random 1& -19.8926 & 0.0006 & 20,000,000\\\\\n Random 2 &-19.9041 &0.0005 & 20,000,000 \\\\\n & & & \\\\\\hline\n Linear Method & -19.9105 & 0.0010 & 40,000,000 \\\\\n \\end{tabular}%\n \\label{tab:n2tjfcieqData}%\n\\end{table}%\n\n\\begin{table}[H]\n\\footnotesize\n \\centering\n \\caption{Precise values for optimizing CI coefficients and traditional Jastrow factors in stretched N$_2$.}\n \\begin{tabular}{llll}\n Method & \\multicolumn{1}{l}{Energy (a.u.)} & \\multicolumn{1}{l}{Uncertainty (a.u.)} & \\multicolumn{1}{l}{Samples}\\\\ \\hline\n \n Hybrid 1& -19.6141 & 0.0005 & 27,000,000\\\\ \n & & & \\\\ \\hline\n RMSprop 1& -19.6010 & 0.0007 & 20,000,000\\\\\n RMSprop 2& -19.6137 & 0.0005 & 20,000,000\\\\\n & & & \\\\ \\hline\n AMSGrad 1& -19.6044 & 0.0006 & 20,000,000\\\\\n AMSGrad 2 & -19.5858 & 0.0007 & 20,000,000\\\\\n AMSGrad 3 & -19.6096 & 0.0006 & 20,000,000\\\\\n & & & \\\\\\hline\n ADAM 1& -19.6028 & 0.0006 & 20,000,000\\\\\n ADAM 2 & -19.6105 & 0.0005 & 20,000,000\\\\\n & & & \\\\ \\hline\n Random 1& -19.5937 & 0.0006 & 20,000,000\\\\\n Random 2 &-19.6147 &0.0006 & 20,000,000 \\\\\n & & & \\\\ \\hline\n Linear Method & -19.6155 & 0.0010 & 40,000,000 \\\\\n \\end{tabular}%\n \\label{tab:n2tjfcistData}%\n\\end{table}%\n\n\\section{Appendix B: Details for N$_2$ Optimizations}\n\nThe details for the optimizations behind the final energies in the main paper are presented below.\nIn every case of N$_2$, all flavors of pure gradient descent based optimization were run for 2000 iterations at 10,000 samples per iteration except for the random step size method, which was run for 10,000 iterations at 2000 samples per iteration.\nThe total sampling cost was then 20 million samples, half of the standard linear method's cost of 40 million over 40 steps.\nGiven the descent methods' tendency to plateau after a few hundred iterations and then lower the energy by only a few m$E_h$ afterward, we expect that running them longer to fully match or exceed the linear method's sampling effort would not yield a much better result in most cases.\nWe found that it was often advantageous to allow for different types of parameters to be given different initial step sizes.\nTables \\ref{tab:n2tjfcieqStep} through \\ref{tab:n2allstStep} list the step sizes for different descent optimizations in all cases of N$_2$.\nThe values can be cross-referenced with the energy results in earlier tables to see which choices were most effective.\nSome 
amount of experimentation was necessary to build up intuition for what choices are effective, but we generally expect more nonlinear parameters such as those in the $F$-matrix and the orbitals to require smaller step sizes.\nWe also found that RMSprop benefited from using larger step sizes compared to other descent algorithms.\nThe energy may be significantly raised on early iterations, but tends to be quickly lowered and eventually brought to an improved result once enough gradient history has built up over more steps. \n\nThe details of the different hybrid method optimizations are slightly more involved and are discussed separately here.\nIn all cases, the blocked linear method steps of the hybrid optimization used 5 blocks with 5 directions from sections of RMSprop to provide coupling to variables outside a block and retained 30 directions from each block to construct the final space for determining the parameter update.\nThese were also the settings given to the blocked linear method optimizations of N$_2$ that appear in the main text.\nAll hybrid optimizations used the RMSprop method for their AD sections with the hyperparameters $d=100$, $\\rho = 0.9$ and $\\epsilon = 10^{-8}$.\nThe step sizes used in the AD sections varied over the course of the hybrid optimizations.\nWe typically chose larger step sizes for the AD portion of the first macro-iteration in order to obtain more energy and parameter improvement at a low sampling cost before any use of the BLM; these are tabulated separately as ``Hybrid-Initial''.\nThe smaller step sizes reported for the rest of the hybrid optimization were used in the later macro-iterations to avoid rises in the energy that might occur before a sufficient gradient history was accumulated.\nWe also list the step sizes in the long RMSprop optimization used to achieve the descent finalized energies.\nThese were sometimes larger than those for the AD sections of the initial hybrid optimizations because the descent finalization was long enough to recover from any early transient rises in the energy.\n
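\nAs a concrete reference, one RMSprop-style update of the kind used in these AD sections can be sketched as follows. This is a minimal sketch, not the implementation itself: the decay form $\\eta_t = \\eta_0 \/ (1 + t\/d)$ for the step size is our illustrative reading of the role of $d$, while $\\rho$ and $\\epsilon$ enter the standard RMSprop running average of squared gradients.\n\\begin{verbatim}\nimport numpy as np\n\ndef rmsprop_update(theta, grad, avg_sq, t,\n                   eta0, d=100, rho=0.9, eps=1e-8):\n    # running average of the squared energy gradient\n    avg_sq = rho * avg_sq + (1.0 - rho) * grad**2\n    # assumed step-size decay controlled by d\n    eta_t = eta0 \/ (1.0 + t \/ d)\n    theta = theta - eta_t * grad \/ (np.sqrt(avg_sq) + eps)\n    return theta, avg_sq\n\\end{verbatim}\n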
\nWe now specify how the hybrid method sampling costs reported in Tables \\ref{tab:n2alleqData} and \\ref{tab:n2allstData} of the main paper were divided between AD and the BLM.\nIn all parameter equilibrium N$_2$, Hybrid 1 consisted of 500 AD steps costing 3 million samples interwoven with 12 BLM steps that cost 2.4 million samples.\nHybrid 2 consisted of 600 AD steps that cost 7 million samples and 15 BLM steps that cost 3 million.\nHybrid 3 also had 600 AD steps, now using 12 million samples, and 15 BLM steps, now using 30 million samples.\nFor all parameter stretched N$_2$, Hybrid 1 used the same sequence of steps and sampling cost breakdown as Hybrid 1 for the equilibrium case.\nHybrid 2 consisted of the same sequence of steps, but had an increased sampling effort of 6 million samples on descent and 2.4 million on BLM.\nHybrid 3 had a greatly increased sampling cost and consisted of 1400 AD steps for 11.2 million samples interwoven with 19 BLM steps costing 38 million.\nFor all descent finalizations in N$_2$, we used 1000 steps of RMSprop at an additional cost of 10 million samples and took an average over the last 500 steps to obtain our reported energies and error bars.\n\nFinally, we give the breakdown of the hybrid sampling costs in Tables \\ref{tab:n2tjfcieqData} and \\ref{tab:n2tjfcistData} of Appendix A.\nFor the equilibrium case, Hybrid 1 had 500 AD steps costing 1.5 million samples and 12 BLM steps costing 2.4 million samples.\nHybrid 2 had the same combination of steps and BLM cost as Hybrid 1 while the AD steps used 3 million samples.\nHybrid 3 used the same sequence of steps, but increased the AD and BLM sampling costs to 5 million and 24 million, respectively.\nIn the stretched case, the hybrid optimization used 500 AD steps with 3 million samples and 12 BLM steps costing 24 million samples.\n\n\\begin{table}[H]\n\\footnotesize\n \\caption{Step sizes for TJFCI equilibrium N$_2$ optimizations.}\n \\centering\n \\begin{tabular}{lccc}\n Method & 2-Body TJF & 1-Body TJF & CI \n \\\\ \\hline\n RMSprop 1 & 0.01 & 0.01 & 0.005 \\\\\n RMSprop 2 & 0.05 & 0.05 & 0.01 \\\\\n \n & & & \\\\ \\hline\n AMSGrad 1 & 0.05 & 0.05 & 0.01 \\\\\n AMSGrad 2 & 0.05 & 0.05 & 0.05 \\\\\n AMSGrad 3 & 0.01 & 0.01 & 0.01\\\\\n AMSGrad 4 & 0.001 & 0.001 & 0.001\\\\\n \n & & & \\\\ \\hline\n ADAM 1 & 0.01 & 0.01 & 0.01\\\\\n ADAM 2 & 0.005 & 0.005 & 0.005 \\\\\n ADAM 3 & 0.001 & 0.001 & 0.001 \\\\\n \n & & & \\\\ \\hline\n Random 1 & 0.01 & 0.01 & 0.01\\\\\n Random 2 & 0.0005 & 0.0005 & 0.0005 \\\\\n & & & \\\\ \\hline\n Hybrid-Initial 1 & 0.1 & 0.1 & 0.01\\\\\n Hybrid-Initial 2 & 0.1 & 0.1 & 0.01 \\\\\n Hybrid-Initial 3 & 0.1 & 0.1 & 0.01 \\\\\n & & & \\\\ \\hline\n Hybrid 1 & 0.001 & 0.001 & 0.001\\\\\n Hybrid 2 & 0.005 & 0.005 & 0.005 \\\\\n Hybrid 3 & 0.005 & 0.005 & 0.005 \\\\\n \\end{tabular}\n \\label{tab:n2tjfcieqStep}\n\\end{table}\n\n\\begin{table}[H]\n\\footnotesize\n \\caption{Step sizes for TJFCI stretched N$_2$ optimizations.}\n \\centering\n \\begin{tabular}{lccc}\n Method & 2-Body TJF & 1-Body TJF & CI \n \\\\ \\hline\n RMSprop 1 & 0.01 & 0.01 & 0.005 \\\\\n RMSprop 2 & 0.05 & 0.05 & 0.01 \\\\\n \n & & & \\\\ \\hline\n AMSGrad 1 & 0.05 & 0.05 & 0.01 \\\\\n AMSGrad 2 & 0.05 & 0.05 & 0.05 \\\\\n AMSGrad 3 & 0.01 & 0.01 & 0.01\\\\\n \n & & & \\\\ \\hline\n ADAM 1 & 0.01 & 0.01 & 0.01\\\\\n ADAM 2 & 0.005 & 0.005 & 0.005 \\\\\n \n & & & \\\\ \\hline\n Random 1 & 0.01 & 0.01 & 0.01\\\\\n Random 2 & 0.001 & 0.001 & 0.001 \\\\\n & & & \\\\ \\hline\n Hybrid-Initial 1 & 0.1 & 0.1 & 0.1\\\\\n Hybrid 1 & 0.005 & 0.005 & 0.005\\\\\n \\end{tabular}\n \\label{tab:n2tjfcistStep}\n\\end{table}\n\\begin{table}[H]\n\\scriptsize\n \\caption{Step sizes for all parameter equilibrium 
N$_2$ optimizations.}\n \\centering\n \\begin{tabular}{lccccc}\n Method & 2-Body TJF & 1-Body TJF & $F$-Matrix & CI & Orbitals\n \\\\ \\hline\n RMSprop 1& 0.005 & 0.005 & 0.001 & 0.001 & 0.001 \\\\\n RMSprop 2& 0.05 & 0.05 & 0.01 & 0.01 & 0.01 \\\\\n \n & & & & & \\\\ \\hline\n AMSGrad 1 & 0.05 & 0.05 & 0.005 & 0.01 & 0.001 \\\\\n AMSGrad 2 & 0.005 & 0.005 & 0.001 & 0.001 & 0.001 \\\\\n \n & & & & & \\\\ \\hline\n ADAM 1 & 0.05 & 0.05 & 0.005 & 0.01 & 0.001 \\\\\n ADAM 2 & 0.005 & 0.005 & 0.001 & 0.001 & 0.001 \\\\\n \n & & & & & \\\\ \\hline\n Random 1 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 \\\\\n Random 2 & 0.001 & 0.001 & 0.0005 & 0.001 & 0.0005 \\\\\n & & & & & \\\\ \\hline\n Hybrid-Initial 1 & 0.1 & 0.1 & 0.0001 & 0.01 & 0.01 \\\\\n Hybrid-Initial 2 & 0.1 & 0.1 & 0.0005 & 0.01 & 0.01 \\\\\n Hybrid-Initial 3 & 0.01 & 0.01 & 0.001 & 0.01 & 0.001 \\\\\n & & & & & \\\\ \\hline\n Hybrid 1 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 \\\\\n Hybrid 2 & 0.001 & 0.001 & 0.0005 & 0.0005 & 0.0005 \\\\\n Hybrid 3 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 \\\\\n & & & & & \\\\ \\hline\n DF-Hybrid 1 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 \\\\\n DF-Hybrid 2 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 \\\\\n DF-Hybrid 3 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 \\\\\n \\end{tabular}\n \\label{tab:n2alleqStep}\n\\end{table}\n\n\\begin{table}[H]\n\\scriptsize\n \\caption{Step sizes for all parameter stretched N$_2$ optimizations.}\n \\centering\n \\begin{tabular}{lccccc}\n Method & 2-Body TJF & 1-Body TJF & $F$-Matrix & CI & Orbitals\n \\\\ \\hline\n RMSprop 1 & 0.05 & 0.05 & 0.05 & 0.01 & 0.01 \\\\\n RMSprop 2 & 0.1 & 0.1 & 0.01 & 0.01 & 0.005 \\\\\n \n & & & & & \\\\ \\hline\n AMSGrad 1 & 0.05 & 0.05 & 0.05 & 0.01 & 0.01 \\\\\n AMSGrad 2 & 0.05 & 0.05 & 0.01 & 0.02 & 0.001 \\\\\n AMSGrad 3 & 0.005 & 0.005 & 0.001 & 0.002 & 0.001 \\\\\n \n & & & & & \\\\ \\hline\n ADAM 1 & 0.05 & 0.05 & 0.05 & 0.01 & 0.01 \\\\\n ADAM 2 & 0.05 & 0.05 & 0.01 & 0.02 & 0.001 \\\\\n ADAM 3 & 0.005 & 0.005 & 0.001 & 0.002 & 0.001 \\\\\n \n & & & & & \\\\ \\hline\n Random 1 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 \\\\\n Random 2 & 0.001 & 0.001 & 0.0005 & 0.001 & 0.0005 \\\\\n & & & & & \\\\ \\hline\n Hybrid-Initial 1 & 0.1 & 0.1 & 0.01 & 0.01 & 0.001 \\\\\n Hybrid-Initial 2 & 0.1 & 0.1 & 0.01 & 0.01 & 0.001 \\\\\n Hybrid-Initial 3 & 0.1 & 0.1 & 0.01 & 0.01 & 0.001 \\\\\n & & & & & \\\\ \\hline\n Hybrid 1 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 \\\\\n Hybrid 2 & 0.001 & 0.001 & 0.0005 & 0.0005 & 0.0005 \\\\\n Hybrid 3 & 0.001 & 0.001 & 0.0005 & 0.0005 & 0.0005 \\\\\n & & & & & \\\\ \\hline\n DF-Hybrid 1 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 \\\\\n DF-Hybrid 2 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 \\\\\n DF-Hybrid 3 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 \\\\\n \\end{tabular}\n \\label{tab:n2allstStep}\n\\end{table}\n\n\n\\section{Appendix C: Molecular Geometries}\n\n\\begin{table}[H]\n \\caption{Structure of equilibrium styrene. 
Coordinates in \\r{A}.}\n \\centering\n \\begin{tabular}{lrrr}\n \\multicolumn{1}{l}{} & & & \\\\ \\hline\n C & 1.39295 & 0.00000 & 0.00000 \\\\\nC & 2.16042 & -1.19258 & 0.01801 \\\\\nC & 2.09421 & 1.23178 & -0.01914 \\\\\nC & 3.56585 & -1.15969 & 0.05286 \\\\\nC & 3.50142 & 1.27211 & 0.01795 \\\\\nC & 4.23686 & 0.07503 & 0.06081 \\\\\nC & 0.00000 & 0.00000 & 0.00000 \\\\\nC & -0.79515 & -0.93087 & 0.54406 \\\\\nH & 1.71222 & -2.11161 & -0.00239 \\\\\nH & 1.59237 & 2.12471 & -0.04753 \\\\\nH & 4.09818 & -2.03273 & 0.07153 \\\\\nH & 3.99086 & 2.16987 & 0.01692 \\\\\nH & 5.25794 & 0.10043 & 0.09503 \\\\\nH & -0.46324 & 0.77112 & -0.42775 \\\\\nH & -0.43431 & -1.72147 & 1.02278 \\\\\nH & -1.78240 & -0.84577 & 0.49296 \\\\\n \\hline\n \\end{tabular}\n \\label{tab:styreneGeo}\n \\end{table}\n\n\\begin{table}[H]\n \\caption{Structure of FNNF transition state. Coordinates in \\r{A}.}\n \\centering\n \\begin{tabular}{lrrr}\n \\multicolumn{1}{l}{} & & & \\\\ \\hline\n N & 0.49939 & -0.44656 & -0.59377\\\\\n N & 0.57066 & 0.41224 & 0.5639 \\\\\n F & -0.39084 & 0.14563 & -1.36959 \\\\\n F & -0.39807 & -0.12032 & 1.39157\\\\\n \\hline\n \\end{tabular}\n \\label{tab:fnnfGeo}\n \\end{table}\n \n \\section{Appendix D: NCJF Gaussian Basis} \n \n The form for the three dimensional Gaussian basis functions of NCJFs in the main text can equivalently be written as $g_j(\\mathbf{r}) = \\exp (\\mathbf{r}^T \\mathbf{A} \\mathbf{r} - 2 \\mathbf{B}^T\\mathbf{r} +C)$ where $\\mathbf{A}$ is a symmetric matrix defined by 6 parameters, $\\mathbf{B}$ is a three-component vector, and $C$ is a single dimensionless number.\n We used 16 basis functions for the all parameter cases of N$_2$, 16 in styrene, and 4 in FNNF.\n Their complete specifications are presented in Tables \\ref{tab:gaussianA}-\\ref{tab:fnnfGaussian}.\n The components $A_{xx},A_{xy},A_{xz},A_{yy},A_{yz},A_{zz}$ with units of inverse square bohr are the same for each basis function within a particular system and are therefore listed separately in Table \\ref{tab:gaussianA}.\n Tables \\ref{tab:n2alleqGaussian}-\\ref{tab:fnnfGaussian} contain components $B_x, B_y, B_z$, with units of inverse bohr and $C$ for each system's basis functions.\n \n \n \\begin{table}[H]\n\\footnotesize\n \\caption{Components of the matrix $\\mathbf{A}$ for our systems.}\n \\centering\n \\begin{tabular}{lcccccc}\n System & $A_{xx}$ & $A_{xy}$ & $A_{xz}$ & $A_{yy}$ & $A_{yz}$ & $A_{zz}$\n \\\\ \\hline\n Equilibrium N$_2$ & -6.9282 & 0.0 & 0.0 & -6.9282 & 0.0 & -6.9282 \\\\\n Stretched N$_2$ & -6.9282 & 0.0 & 0.0 & -6.9282 & 0.0 & -6.9282 \\\\\n Styrene &-0.1 & 0.0 &0.0 &-0.1 &0.0 &-0.1 \\\\\n FNNF &-0.1 & 0.0 &0.0 &-0.1 &0.0 &-0.1 \\\\\n \\end{tabular}\n \\label{tab:gaussianA}\n\\end{table}\n \n \\begin{table}[H]\n\\footnotesize\n \\caption{Gaussian components for all parameter equilibrium N$_2$.}\n \\centering\n \\begin{tabular}{ccccc}\n Basis Function & $B_x$ & $B_y$ & $B_z$ & $C$\n \\\\ \\hline\n $g_0$ & -0.8 &-0.8 &-0.8& -0.2771 \\\\\n $g_1$ & 0.8 &-0.8 &-0.8 & -0.2771\\\\\n $g_2$ &-0.8 &0.8 &-0.8 & -0.2771\\\\\n $g_3$ & 0.8 &0.8& -0.8 & -0.2771\\\\\n $g_4$ & -0.8 &-0.8& 0.8 & -0.2771\\\\\n $g_5$ &0.8 &-0.8& 0.8 & -0.2771\\\\\n $g_6$ &-0.8 &0.8& 0.8 &-0.2771 \\\\\n $g_7$ &0.8& 0.8& 0.8 & -0.2771\\\\\n $g_8$ &-4.4787& -0.8& -0.8 &-11.2500 \\\\\n $g_9$ &-2.8787& -0.8 &-0.8 & -4.5981\\\\\n $g_{10}$ &-4.4787 &0.8& -0.8 & -11.2500\\\\\n $g_{11}$ &-2.8787& 0.8& -0.8 & -4.5981\\\\\n $g_{12}$ &-4.4787 & -0.8 & 0.8 & -11.2500 \\\\\n $g_{13}$ &-2.8787 & -0.8 & 0.8 & -4.5981\\\\\n 
$g_{14}$ &-4.4787& 0.8& 0.8 & -11.2500\\\\\n $g_{15}$ &-2.8787& 0.8& 0.8 & -4.5981\\\\\n \\end{tabular}\n \\label{tab:n2alleqGaussian}\n\\end{table}\n\n \\begin{table}[H]\n\\footnotesize\n \\caption{Gaussian basis functions for all parameter stretched N$_2$.}\n \\centering\n \\begin{tabular}{ccccc}\n Basis Function & $B_x$ & $B_y$ & $B_z$ & $C$\n \\\\ \\hline\n $g_0$ & -0.8 &-0.8 &-0.8& -0.2771 \\\\\n $g_1$ & 0.8 &-0.8 &-0.8 & -0.2771 \\\\\n $g_2$ &-0.8& 0.8& -0.8 & -0.2771 \\\\\n $g_3$ &0.8 &0.8& -0.8 & -0.2771\\\\\n $g_4$ &-0.8 &-0.8 &0.8 & -0.2771\\\\\n $g_5$ &0.8 &-0.8& 0.8 & -0.2771\\\\\n $g_6$ &-0.8 &0.8& 0.8 & -0.2771\\\\\n $g_7$ &0.8 &0.8 &0.8 & -0.2771\\\\\n $g_8$ &-5.8015 & -0.8& -0.8 & -22.7322 \\\\\n $g_9$ &-4.2015 &-0.8 & -0.8 & -11.8474 \\\\\n $g_{10}$ &-5.8015 &0.8 &-0.8 & -22.7322\\\\\n $g_{11}$ & -4.2015& 0.8& -0.8 & -11.8474\\\\\n $g_{12}$ &-5.8015 & -0.8 & 0.8 & -22.7322\\\\\n $g_{13}$ &-4.2015 & -0.8 & 0.8 & -11.8474 \\\\\n $g_{14}$ &-5.8015 & 0.8 & 0.8 & -22.7322\\\\\n $g_{15}$ &-4.2015 & 0.8 & 0.8 & -11.8474\\\\\n \\end{tabular}\n \\label{tab:n2allstGaussian}\n\\end{table}\n\n\n\n\\begin{table}[H]\n\\footnotesize\n \\caption{Gaussian basis functions for equilibrium styrene.}\n \\centering\n \\begin{tabular}{ccccc}\n Basis Function & $B_x$ & $B_y$ & $B_z$ & $C$\n \\\\ \\hline\n $g_0$ & -0.2632 & 0.0 & 0.0 & -0.6929 \\\\\n $g_1$ & -0.4083 & 0.2254 & -0.003403& -2.1748\\\\\n $g_2$ &-0.3957 & -0.2328 & 0.003617 & -2.1081\\\\\n $g_3$ &-0.6738 & 0.2191 & -0.009989 & -5.0220\\\\\n $g_4$ &-0.6617 & -0.2404 & -0.003392 & -4.9561\\\\\n $g_5$ &-0.8007 & -0.01418 & -0.01149 & -6.4137\\\\\n $g_6$ & 0.0 & 0.0 & 0.0 & 0.0\\\\\n $g_7$ &0.1503 & 0.1759 & -0.1028 & -0.6409\\\\\n $g_8$ &-0.3236 & 0.3990 & 0.0004516 & -2.6392\\\\\n $g_9$ &-0.3009 & -0.4015 & 0.008982 & -2.5184\\\\\n $g_{10}$ &-0.7744 & 0.3841 & -0.01352 & -7.4750\\\\\n $g_{11}$ & -0.7542 & -0.41005 & -0.003197 & -7.3691 \\\\\n $g_{12}$ & -0.9936 & -0.01898 & -0.01796 & -9.8794 \\\\\n $g_{13}$ & 0.08754 & -0.1457 & 0.08083 & -0.3543\\\\\n $g_{14}$ & 0.08207 & 0.3253 & -0.19328 & -1.4992 \\\\\n $g_{15}$ & 0.3368 & 0.1598 & -0.09316 & -1.4767 \\\\\n \\end{tabular}\n \\label{tab:styreneGaussian}\n\\end{table}\n\n \\begin{table}[H]\n\\footnotesize\n \\caption{Gaussian basis functions for FNNF.}\n \\centering\n \\begin{tabular}{ccccc}\n Basis Function & $B_x$ & $B_y$ & $B_z$ & $C$\n \\\\ \\hline\n $g_0$ & -0.09437 &0.08439 &0.1122& -0.2862 \\\\\n $g_1$ & -0.1078 & -0.07790 & -0.1066 & -0.2905 \\\\\n $g_2$ &0.07386& -0.02752& 0.2588 & -0.7320\\\\\n $g_3$ &0.07522 &0.02274& -0.2630 &-0.7533 \\\\\n \\end{tabular}\n \\label{tab:fnnfGaussian}\n\\end{table}\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\nMuon spin rotation and relaxation spectroscopy ($\\mu$SR\\xspace) and density functional theory (DFT) were first theorized, and later implemented, roughly in the same years.\\cite{PhysRev.136.B864,PhysRev.140.A1133,PhysRev.105.1415} Indeed, during the development of $\\mu$SR\\xspace as an experimental technique for studying magnetism in solid state physics, the analysis of the experimental data has greatly benefited from the theoretical insights provided by the first embryonic density functional based simulations.\\cite{PhysRevLett.40.264}\n\nTo the best of our knowledge, the first example of this kind dates back to 1975.\\cite{meier1975electron} Dr. P. 
Meier provided the first results from simulations aimed at identifying the effect of a positively charged interstitial muon in elemental metals.\nFrom then on, many works\\cite{PhysRevLett.40.264,Jena_1979} presented and discussed \\emph{ab initio} methods to tackle some common sources of uncertainty that stem from complicated $\\mu$SR\\xspace experiments. \nWe summarize them with the following three questions: {\\em i)} where is the muon? {\\em ii)} Can we estimate the parameters of the $\\mu-e^{-}$ interaction Hamiltonian? {\\em iii)} Is the muon a passive probe?\n\nA renewed effort to answer the above questions with first principles simulations began a few years ago.\nIndeed, what has essentially changed since the 70s is our capability of simulating the electronic properties of complex materials, strongly reducing the impact of the approximations that must be adopted to solve the many body electronic (and in some cases also nuclear) problem on the parameters under investigation.\nIt is known, for example, that DFT is very accurate in determining the bond distances and unit cell sizes even when adopting the Local Density Approximation (LDA) for the exchange and correlation potential.\nThis is the roughest approximation, as it disregards the non-local effects of the exchange interaction, and there are many cases where it is not sufficient to obtain a realistic description of the material.\n\nStarting from the LDA, the Generalized Gradient Approximation generally improves the description of the lattice positions. From there, in recent years, many rungs have been added to the ``Jacob's ladder of density functional approximations for the exchange-correlation energy'' \\cite{vxcladder}. Moreover, many body approaches different from DFT and \\emph{ad hoc} corrections to the Kohn-Sham Hamiltonian have greatly improved the capabilities of the density functional method in describing the electronic degrees of freedom of crystals and molecules. The reader is referred to one of the many review articles that discuss this topic.\\cite{QUA:QUA24521,PhysRevB.67.153106}\n\nAt the same time, the astonishing increase in computational power obtained during the past 40 years has provided a tangible change in the range of predictions that simulations can provide.\n\nAs far as $\\mu$SR\\xspace is concerned, this has also led to the possibility of addressing the three fundamental questions discussed above with fully \\emph{ab initio} methods. \n\n\nIn this short review article we will briefly address each question, respectively in Sect.~\\ref{sec:where}, \\ref{sec:interactions} and \\ref{sec:passive}, surveying the importance of the quantum nature of the muon in Sect.~\\ref{sec:quantum}. The discussion is illustrated by old and recent examples of DFT aided $\\mu$SR\\xspace data analysis. We will limit our attention to the simulations involving positive muons since they constitute the vast majority of the experiments performed with $\\mu$SR\\xspace nowadays. Since the topic is still rather vast, particular attention will be devoted to the analysis of the simulations performed in crystalline materials, even though we will also touch on a few aspects of the analysis of the muon\/sample interactions in molecular compounds. 
\n\n\n\\section{Where is the muon?}\n\\label{sec:where}\n\nPart of the first experiments performed with $\\mu$SR\\xspace were devoted to the validation of the then new technique.\nFor this reason, studies of elemental crystals and text-book cases were predominant and represented an important development toward a more precise understanding of the muon\/sample interactions. Positive muon site identifications were mainly performed with direct experimental approaches like, for example, following the evolution of the muon frequency shift in a transverse field experiment as a function of the applied stress in a single crystal \\cite{PhysRevB.32.293}, notably with the stimulus of theoretical insight \\cite{PhysRevB.29.4170}, or by the symmetries of the same shifts as a function of the applied field direction \\cite{PhysRevB.30.186}, or by obtaining geometrical constraints on the relative position of the muon and the interacting nuclei from avoided level crossing measurements \\cite{PhysRevLett.60.224}.\nAt the same time, computational methods were used to model the electronic density surrounding the muon and provide, in some cases, information regarding the interstitial position of the muon and the interaction parameters between the muon and its surrounding electronic environment.\n\n\n\\begin{table}\n \n\\begin{tabular}{ccc}\n \\hline\n Compound & \\multicolumn{2}{c}{muon site}\\\\\n & (experiment) & ({\\em ab initio}) \\\\\n \\hline\n &&\\\\\n Fe$_3$O$_4$ & \\ \\cite{fe3o4,PhysRevB.77.045115} & -- \\\\\n $R$FeO$_{3}$ ($R$ = Sm, Eu, Dy, Ho, Y, Er) & \\ \\cite{PhysRevB.27.5294} & -- \\\\\n UCoGe & \\ \\cite{PhysRevLett.102.167003} & -- \\\\\n YBCO & \\ \\cite{Brewer_1991,Weber} & -- \\\\\n UNi$_2$Al$_3$ & \\ \\cite{Amato_2000} & -- \\\\\n GdNi$_5$ & \\ \\cite{Mulders_2000} & -- \\\\\n PrNi$_5$ & \\ \\cite{Feyerherm_1995} & -- \\\\\n PrIn$_3$ & \\ \\cite{Tashma_1997} & -- \\\\\n LiV$_2$O$_4$ & \\ \\cite{Koda_2004} & -- \\\\\n TmNi$_2$B$_2$C & \\ \\cite{Gygax_2003} & -- \\\\\n YMnO$_3$ & \\ \\cite{Lancaster_2007} & -- \\\\\n HoB$_2$C$_2$ & \\ \\cite{De_Lorenzi_2003} & -- \\\\\n UN & \\ \\cite{M_nch_1993} & -- \\\\\n UAl$_2$ & \\ \\cite{Kratzer_1986} & -- \\\\\n CeAl$_{3}$ & \\ \\cite{PhysRevB.39.11695} & -- \\\\\n ZnO &\\ \\cite{PhysRevB.85.165211} &\\ \\cite{PhysRevLett.85.1012}\\\\\n Y$_2$O$_3$ &\\ \\cite{PhysRevB.85.165211} &\\ \\cite{PhysRevB.85.165211} \\\\ \n CoF$_2$, MnF$_2$ & \\ \\cite{PhysRevB.30.186} & \\ \\cite{PhysRevB.87.121108} \\\\\n LiF & \\ \\cite{PhysRevB.33.7813} & \\ \\cite{PhysRevB.87.121108,PhysRevB.87.115148} \\\\\n YF$_3$ & \\ \\cite{Noakes1993785} & \\ \\cite{PhysRevB.87.115148} \\\\\n MnSi & \\ \\cite{PhysRevB.89.184425} & \\ \\cite{jp5125876}\\\\\n La$_2$CuO$_4$ T & \\ \\cite{PhysRevLett.59.1045,Hitti1990} & \\ \\cite{Suter2003329}\\\\\n La$_2$CuO$_4$ T$^\\prime$ & \\ \\cite{gwenthesis} & \\ \\cite{pbphd} \\\\\n $R$CoPO ($R$ = La, Pr)& \\ \\cite{PhysRevB.87.064401} & \\ \\cite{PhysRevB.87.064401} \\\\\n PrB$_2$O$_7$ (B = Sn, Zr, Hf) & \\ \\cite{PhysRevLett.114.017602} & \\ \\cite{PhysRevLett.114.017602} \\\\\n\\end{tabular}\n\\caption{Tentative list of the non-elemental crystalline compounds where the muon site is known (or a few candidates are proposed) from the experiment. 
Although elemental crystals have been widely studied and the muon site is known in most of them, they have been omitted from this table for simplicity and readability.}\\label{tab:sites}\n\\end{table}\n\n\nIn this context, one instructive problem that has been considered with \\emph{ab initio} approaches is the diffusion of the muon in copper. This topic has been widely studied both experimentally \\cite{PhysRevB.43.3284,PhysRevLett.39.836} and theoretically \\cite{PhysRevB.29.5382}.\nFrom the field and orientation dependence of the decay in single crystalline samples\\cite{PhysRevLett.39.832} and from \\emph{ab initio} calculations it was soon realized that the occupied muon site is the octahedral interstitial.\nFirst principles simulations showed that this is due to the large Zero Point Motion Energy (ZPME) that exceeds the self-trapping energy gain and makes the tetrahedral interstitial site unstable.\\cite{PhysRevB.29.5382} \n\nStarting from the mid-80s, the increasing computational power made it possible to tackle supercells of crystalline materials. Computational efforts were reported mainly for paramagnetic muon states, i.e., muonium, in carbon based and semiconducting materials. Within these simpler crystalline structures \\emph{ab initio} methods allowed the identification of the muon sites \\cite{0022-3719-17-14-009} and the determination of important characteristics of the interaction between the muon and the investigated sample. \n\nMore recently, the advent of large computational facilities \nand the development of effective methods for the solution of the Kohn-Sham equations \\cite{PhysRevB.59.1758,PhysRevB.41.7892} \nhave led to a rebirth of the DFT approach for providing a description of the muon implantation process at the very end of its deceleration path.\nIndeed, since it is rarely possible to determine the muon site(s) with experimental methods, the computational approaches are becoming a precious supporting tool for $\\mu$SR\\xspace data analysis.\n\nAs of today, the best compromise between accuracy and efficiency to identify muon sites in molecules and crystalline materials is based on a sampling of the total energy hyper-surface obtained by embedding the charged particle in the host system. \n\nThe starting point is to treat the muon as a hydrogen atom within the standard Born Oppenheimer (BO) approximation used in DFT. \nDepending on the crystalline or molecular nature of the material under study, different approaches are used. In crystalline solids periodic boundary conditions are normally adopted and the final crystal relaxation around the muon is reproduced by choosing a suitably large supercell, in order to limit the interactions between the periodic muon replicas in the simulated system. It is very important to check the convergence of the results against the supercell size. To this aim two requirements are generally inspected:\n\\begin{enumerate}\n \\item the interaction between the muon and its replicas must not influence the results,\n \\item the displacement of atoms must progressively decay as the distance from the muon increases (a minimal sketch of this check is given below). \n\\end{enumerate}\n\nThis approach has been diffusely used for neutral and charged impurity calculations and has produced accurate results also for $\\mu$SR\\xspace in Si, as well as in other elemental semiconductors \\cite{PhysRevLett.58.1547,PhysRevLett.60.2761,PhysRevLett.64.669,Jones351,EPL.7.145}.\n
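\nAs an illustration of the second requirement, the following minimal Python sketch tabulates the relaxation-induced displacement of each ion against its distance from the muon; for a well converged supercell the displacements should decay toward the cell boundary. Here \\texttt{pristine} and \\texttt{relaxed} are assumed to be $N \\times 3$ arrays of ionic positions from the unperturbed and muon-relaxed supercells, \\texttt{r\\_mu} is the muon position, and periodic minimum-image conventions are omitted for brevity.\n\\begin{verbatim}\nimport numpy as np\n\ndef displacement_vs_distance(pristine, relaxed, r_mu):\n    # magnitude of the muon-induced displacement of each ion\n    displacement = np.linalg.norm(relaxed - pristine, axis=1)\n    # distance of each ion from the muon site\n    distance = np.linalg.norm(pristine - r_mu, axis=1)\n    order = np.argsort(distance)\n    return distance[order], displacement[order]\n\\end{verbatim}\n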
\nFor molecular systems the calculation can be less computationally expensive since supercells are no longer required. However, in the case of large low symmetry molecules, one must still consider all the possible muon additions to the molecule at inequivalent positions, guiding an educated guess by chemical insight.\\cite{doi:10.1021\/jp107824p,doi:10.1021\/jp305610g}\n\nTable \\ref{tab:sites} is a partial list of crystalline compounds where the muon site is assigned, with certainty or tentatively, by experiment. The last rows list the cases where {\\em ab initio} confirmed the assignment. \n\nThe stability and the formation energy barrier of the various muon embedding configurations can be estimated, within the same BO approximation, with the nudged elastic-band (NEB) approach. This method provides an efficient way to identify saddle points and minimum energy paths between known initial and final ionic configurations.\nInitially, a series of equally separated configurations (called images) along the reaction path is guessed.\nThe images are kept equally separated from each other along the reaction path by adding an elastic band force acting on the images. A constrained optimization of the total energy for all the images, obtained by projecting out the perpendicular and the parallel components to the path of the spring force and of the true force respectively, provides an iterative method to identify the minimum energy path between the initial and final configurations of the muon and its neighborhood.\n\n\\begin{figure}\n\\center\n \\includegraphics{81411Fig1}\n \\caption{Formation energy for various types of hydrogen impurities as a function of the Fermi Energy level. The plot shows that H$^+$ is the stable form in ZnO, for Fermi Energy values (bottom scales) within the calculated energy gap (top scale), a fact that is experimentally\\cite{PhysRevLett.86.2601} verified in ZnO.\n Reprinted figure with permission from Ref.\\onlinecite{PhysRevLett.85.1012}. Copyright (2000) by the American Physical Society.}\\label{fig:zno}\n\\end{figure}\n\nOne of the first success stories was the discovery of a shallow donor state for hydrogen, hence for the muon as well, in zinc oxide. This was somewhat surprising since hydrogen in p-type (n-type) semiconductors typically acts as a compensating impurity, in a stable H$^{+}$(H$^{-}$) charge state at the bond-center. \\cite{PhysRevLett.85.1012}\nA 96-atom supercell affords the calculation of the formation energies of the various charge states of the hydrogen atom and molecule in ZnO. The large H$^{+}$-oxygen bond strength leads to the formation of shallow donor states with a low electron density at the hydrogen (or the muon) site, as shown in Fig.~\\ref{fig:zno}. The formation energy depends on three critical parameters: the total energy associated with the impurity in the selected charge state (top axis), the Fermi energy (bottom axis) and the hydrogen\/muon ZPME, discussed in Sect.~\\ref{sec:quantum}. The interplay between these quantities governs the stability of paramagnetic and diamagnetic species, although metastable states must also be taken into account in the case of epithermal implanted muons.\n\nThe experimental demonstration of the unexpected shallow donor came just one year later with the observation of its characteristic precession frequencies by Cox and co-workers\\cite{PhysRevLett.86.2601}, confirming the \\emph{ab initio} predictions, and was followed two years later by detailed single crystal studies. 
\\cite{PhysRevLett.89.255505}\n\nOne common feature of these early examples is that they necessarily concern the simplest crystal structures, which both require more manageable computational efforts and offer a few rather obvious starting guess sites for the muon.\nThe extension to non-elemental compounds with more complex structures also requires a strategy for the exploration of the candidate sites. \n\nAn example of an exploration strategy along the same lines as ZnO is provided by Vil\\~ao {\\em et al.} \\cite{PhysRevB.84.045201}, who compared experiments on paratelluride ($\\alpha$TeO$_2$) with DFT within GGA on a 3x3x3 supercell (96 atoms), including NEB calculations to discuss diffusion by means of classical energy barriers. They are thus able to identify the experimental results as a donor and a deep trap configuration.\n\n\\begin{figure}\n\\center\n \\includegraphics[width=\\columnwidth]{81411Fig2}\n \\caption{(Color online) Muon sites in Y$_{2}$O$_{3}$. Different positions are identified for the various charge states of the hydrogen impurity used to simulate the muon.\n Reprinted figure with permission from Ref.\\onlinecite{PhysRevB.85.165211}. Copyright (2012) by the American Physical Society.}\\label{fig:yttria}\n\\end{figure}\n\n\nMore recently, Silva {\\em et al.}\\cite{PhysRevB.85.165211} assigned the muon species observed in yttria. Figure~\\ref{fig:yttria} shows the various diamagnetic and paramagnetic configurations obtained for the different charge states considered for the impurity in the simulations. The results highlight the striking difference between the embedding sites obtained with the different charge states and underline the importance of considering various electronic configurations when exploring the muon embedding sites. These examples are still guided to some degree by the chemical insight allowed by the dominant covalent nature of the bonds. \n\nAdditional strategies are devised for predominantly ionic compounds. Whenever the candidate muon sites are not easily assigned by an educated guess, it is unavoidable to explore all the possible interstitial sites. To this end, it is useful to set up a grid of positions and to optimize the impurity site with the use of the Hellmann-Feynman theorem, which provides the interatomic forces and allows the identification of equilibrium geometries. This procedure produces candidate interstitial positions for the muon.\nThe number of points in the grid may usually be reduced with the help of symmetry considerations. This can substantially shrink the computational cost of the simulation.\n\nIn recent years, DFT-based muon site assignment has been used in several materials.\nGraphene has been investigated \\cite{doi:10.1021\/nl202866q} to clarify whether dilute hydrogen-decorated defects, mimicked by muons, can give rise to magnetism. The authors support their interpretation of the experimental results with DFT predictions of hydrogen-decorated carbon vacancies on a 6x6 nanoribbon cluster. By contrast, for the adsorption of a positive muon on perfect graphene, a ground state located directly on top of a carbon atom, with an energy of 0.22 eV, is predicted \\cite{Pant_2014}. \n\nRecently, $\\mu$SR\\xspace has played a prominent role in characterizing the newly discovered iron-based high-temperature superconductors. 
As a consequence, the muon site in these materials has been widely studied.\nMost of the initial estimations were based on a naive approach which relies on the analysis of the electrostatic potential of the bulk material obtained from DFT simulations \\cite{PhysRevB.80.094524,0953-2048-25-8-084009,PhysRevB.91.144423,PhysRevB.85.064517}.\nThis method is the direct evolution of the point-charge model widely used in the literature.\\cite{PhysRevB.84.054430,PhysRevB.85.054111,PhysRevLett.103.147601,PhysRevB.84.064433}\nIncidentally, we mention that this approach has also been used for materials with diverse electronic properties.\\cite{1742-6596-551-1-012052,1742-6596-551-1-012053}\nThe results obtained from the analysis of the unperturbed electrostatic potential (UEP) are generally validated by comparison with experiment. \nThis is the case of the 1111 pnictide superconductors\\cite{PhysRevB.80.094524}. A full \\emph{ab-initio} confirmation of the UEP predictions was provided for the isostructural compound LaCoPO with relaxed supercell calculations.\\cite{PhysRevB.87.064401}\n\nThe remarkable success of simple electrostatic potential predictions in simpler three-dimensional metals may perhaps be justified by heuristic arguments on the screening of point charges. A recent confirmation is the case of MnSi, where accurate supercell calculations\\cite{jp5125876} introduce only tiny improvements on the UEP predictions. However, the accuracy of the UEP method in a layered material like LaFeAsO or LaCoPO is more surprising, and it should probably be considered just a fortunate case. The method cannot be expected to produce reliable results in insulators or in two-dimensional materials with alternating metallic and charge-reservoir layers. \n\nFluorides represent a notable example of insulators where the UEP method fails.\\cite{PhysRevB.87.115148} These materials deserve a special mention. Brewer and coworkers identified a striking effect, characteristic of several of them, where the distortion of the lattice produced by the muon on F atoms is very pronounced\\cite{PhysRevB.33.7813}. The muon-fluorine distance for the most distorted nearest-neighbor ions is characterized in detail by the experimental measurement, thanks to the quantum entanglement of the muon and fluorine nuclear spins.\n\nThis has been used as an ideal test case by M\\\"oller {\\em et al.} \\cite{PhysRevB.87.121108} and by Bernardini {\\em et al.} \\cite{PhysRevB.87.115148} to verify the accuracy of DFT in reproducing the crystalline distortions introduced by the muon. The first two columns of Table~\\ref{tab:Moeller} show the experimental and calculated muon-fluorine distances, whose very good agreement is an important validation of the \\emph{ab initio} approach to the muon site identification problem. \n\n\\begin{table}\n \n\\begin{tabular}{cccc}\n\\hline\\hline\n& $2r_{\\rm DFT}$ (\\AA) & $2r_{\\rm exp}$ (\\AA) & ZPE (eV)\\\\\n\\hline\n(FHF)$^-$ & 2.36 & 2.28 & 0.30\\\\\n(F$\\mu$F)$^-$ & 2.36 & & 0.80 \\\\\nLiF & 2.34 & 2.36(2)\\cite{PhysRevB.33.7813} & 0.76\\\\\nNaF & 2.35 & 2.38(1)\\cite{PhysRevB.33.7813} & 0.76\\\\\nCaF$_2$ & 2.31 & 2.34(2)\\cite{PhysRevB.33.7813} & 0.83\\\\\nBaF$_2$ & 2.33 & 2.37(2)\\cite{PhysRevB.33.7813} & 0.79\\\\\nCoF$_2$ & 2.36 & 2.43(2) & 0.73\\\\\n\\hline\n\\hline \n\\end{tabular}\n\\caption{Calculated (DFT) and experimental (exp) properties of\nthe diamagnetic and molecular-ion fluorine-muon (F-$\\mu$-F) states in solid and vacuum. 
$r$ (\\AA) is the muon-fluorine bond length.\nReprinted table with minor edits with permission from Ref.~\\onlinecite{PhysRevB.87.121108}. Copyright (2013) by the American Physical Society.}\\label{tab:Moeller}\n\\end{table}\n\nAs has been shown from the very beginning by studying the interstitial muon site in elemental crystals, the ZPME plays a crucial role in this procedure.\\cite{teichler1978microscopic,0305-4608-9-7-013,rath1979effect}\nIndeed, the large ZPME of the muon, discussed in Sect.~\\ref{sec:quantum}, may yield classical hopping or quantum tunneling among various interstitial positions. \nThe case of fcc copper represents an instructive example in this respect. \\cite{PhysRevB.43.3284} The barrier between the tetrahedral and the octahedral interstitial is too small to bind the muon. Thus no localized muon wave-function is found at those positions. At the same time, the barrier between the octahedral sites is small enough to permit both quantum tunneling and classical diffusion at relatively low temperatures.\\cite{jp5125876}\n\nIn several instances, a successful identification of the muon sites based on a first-principles approach relies on the additional evaluation of the ZPME, not just on the solution of a purely electronic problem, and the number of cases where this inclusion proved to be essential is steadily increasing.\\cite{Suter2003329,PhysRevB.84.045201,PhysRevB.85.165211,Pant_2014,PhysRevLett.107.207207,1402-4896-88-6-068510}\n\n\n\\section{Interaction parameters}\n\\label{sec:interactions}\n\nAnother point that has been largely discussed in the literature is the estimation, from first principles, of the interaction parameters between the muon and the electrons (or possibly the nuclei) of the host material.\nSuccessful results have been reported from early studies of the hyperfine coupling for diamagnetic muon sites in metallic magnetic materials and for paramagnetic muon sites in semiconductors.\\cite{PhysRevLett.64.669}\nIn the vast majority of these preliminary studies, the electron density and its magnetic polarization at the muon position were used to estimate the hyperfine parameters according to\\cite{PhysRevLett.64.669}\n\n\\begin{align}\n\\label{eq:hyperfine}\n \\mathcal{H} & = \\sum_i\\left\\{\\frac{2 \\mu_{0}}{3} \\mathbf{m}^{\\mu} \\cdot \\mathbf{m}^{e} \\delta(\\mathbf{r}_i)\\right. \\nonumber\\\\ &\\left.\\quad + \\frac{\\mu_{0}}{4 \\pi} \\frac{1}{r_i^{3}} \\left[ 3 (\\mathbf{m}^{\\mu} \\cdot \\hat{\\mathbf{r}}_i)(\\mathbf{m}^{e} \\cdot \\hat{\\mathbf{r}}_i) - \\mathbf{m}^{\\mu} \\cdot \\mathbf{m}^{e} \\right]\\right\\}\n\\end{align}\n\n\nwhere the first term is the Fermi contact term and the second is the dipolar interaction, $\\delta$ is the Dirac delta function, $\\mathbf{m}^{e} = -g_{e} \\mu_{\\mathrm{B}} \\mathbf{S}^{e}$ and $\\mathbf{m}^{\\mu} = \\gamma_{\\mu} \\hbar \\mathbf{I}^{\\mu}$ are the electronic and the muon magnetic moment operators, $\\mathbf{r}_i$ is the coordinate of the $i$-th electron in a reference frame centered at the muon, and $\\mu_{0}$ is the vacuum permeability.\n\nThe parent compound La$_2$CuO$_4$ in its standard space group 64 structure was first addressed by a 9 Cu + Mu cluster in the Generalized Gradient Approximation (GGA), without quantum treatment of the muon. \\cite{Suter2003329} The authors extract spin densities on Cu, nearest-neighbor O ions and the muon. 
They remark that the spin density on oxygen, although definitely smaller than that on copper, provides a non-negligible contribution through the second term of Eq.~(\\ref{eq:hyperfine}), in view of the much shorter distance to the bonded muon, thanks to the $r^{-3}$ dependence of the dipolar term. This remark might be of more general relevance.\n\n\nFor chemically diamagnetic muon species, the contact hyperfine coupling at the muon site is generally the result of a rather small imbalance of the spin density, and the accuracy of its determination strongly depends on that of the DFT description of the many-body electronic problem. Even assuming small uncertainties in the muon site coordinates and in the electron magnetic moments, a large relative numerical error in the small electronic spin polarization at the muon site produces a considerable relative inaccuracy, which is usually an order of magnitude larger than that typically obtained for the dipolar coupling.\n\nThis remark applies e.g.~to the one-dimensional quantum antiferromagnet (AF) Cu(pyz)(NO$_3$)$_2$, where a 2x2x1 supercell GGA calculation \\cite{PhysRevB.91.144417} including the muon in both positive and neutral charge configurations predicts large contact couplings for the latter and much smaller ones for the former. The rather low experimental zero-field muon precession frequencies rule out the large-hyperfine, neutral muon site. The authors implicitly acknowledge the inaccuracy of the DFT estimation for low spin densities by establishing a comparison between the experimental local field and the dipolar contribution alone. This comparison justifies the qualitative statement that the Cu moment must be reduced by a large factor, of order 7-8 with respect to the nominal 1 $\\mu_B$, as expected from the low-dimensional nature of the magnetic order. \n\\begin{figure}\n\\center\n \\includegraphics[width=0.8\\columnwidth]{81411Fig3}\n \\caption{(Color online) The adiabatic muon energy isosurface in MnSi, \\cite{PhysRevB.89.184425,jp5125876} obtained by joining points where the potential acting on the muon is 1.6 times the corresponding muon ground state energy. The dots represent the points used for the potential interpolation, within DAA (see text).} \\label{fig:mnsias}\n\\end{figure}\n\n\nThus, whenever Eq.~(\\ref{eq:hyperfine}) provides a reasonably good account of the muon coupling, $\\mu$SR\\xspace data together with the muon site assignment allow the quantitative determination of magnetic moments and, in fortunate cases, even of magnetic structures. From this perspective, a promising approach based on Bayesian analysis\\cite{Blundell2012113,PhysRevB.89.140413} has been proposed in order to estimate the magnetic moment size and\/or the long-range magnetic structure of magnetic materials.\nIndeed, by feeding the probabilistic analysis with the DFT results obtained for the identified muon site(s), it is possible to compare quantitatively the expectations for various spin arrangements and determine the most probable long-range magnetic order for the sample under investigation.\n\nEquation (\\ref{eq:hyperfine}) represents only a first approximation, since it neglects the quantum nature of the muon. 
Although the approximation often provides at least the correct order of magnitude, if not the exact experimental value, for the hyperfine couplings at the muon site \\cite{Katayama1979431,Jepsen1980575}, there are known cases in which this approach is not sufficiently accurate \\cite{PhysRevB.25.297, PhysRevB.27.53, PhysRevB.15.1560}.\n\nFinally, we notice that estimating {\\em ab initio} the value of the hyperfine field at the muon site can be more or less challenging, depending on the material. For strongly correlated electron materials, it involves an accurate description of their electronic properties that is not always available with conventional DFT approaches. The reliability of the typical approximations, e.g. LDA+U, must therefore be seriously taken into consideration case by case.\n\nAt the opposite end, rather accurate results can be obtained for the hyperfine coupling parameters of paramagnetic muonated radicals.\\cite{ct500027z,PhysRevE.87.012504,C4CP00618F,C4CP04899G,Peck} This is mainly due to the fact that the contact hyperfine coupling constants are proportional, in this case, to the electron density at the muon site, which is much larger for paramagnetic species.\nOnce again, the key point in this procedure is an accurate electronic description of the molecule, usually obtained with hybrid exchange and correlation functionals.\\cite{Peck}\n\n\\section{The quantum muon}\n\\label{sec:quantum}\n\n\nIn the case of both diamagnetic and paramagnetic muon centers, improved results are obtained if the quantum nature of the muon, usually far from being negligible, is taken into account. This was already considered in most of the first reports on \\emph{ab initio} studies of the muon \\cite{rath1979effect,PhysRevB.25.297}. Since within DFT the muon is treated as a charged classical particle, the simulations must be extended in order to provide a description of the muon wave-function. \n\nThe task is accomplished with a number of different approaches\\cite{1.1356441,Gross1998291,1.3556661,C4CP05192K,PhysRevA.90.042507,PhysRevLett.101.153001,RevModPhys.85.693}. We mention here the estimation of the ground state energy of the muon through the analysis of the phonon modes, within the BO approximation for the electrons, \\cite{PhysRevB.87.121108} and the double adiabatic approximation (DAA)\\cite{jp5125876}, discussed in detail by Soudackov and Hammes-Schiffer\\cite{Soudackov1999503} and Porter {\\em et al.}\\cite{PhysRevB.60.13534}, in which an adiabatic energy surface for the solution of the Schr\\\"odinger equation of the muon is obtained.\n\nMore accurate approaches may be provided by the Nuclear-Electronic Orbital (NEO) method\\cite{C4CP06006G}, using Hartree-Fock methods\\cite{Kerridge_2004} on an orthogonal basis for the muon and the electrons. Finally, path integral molecular dynamics (PIMD)\\cite{1.471221, PhysRevLett.81.1873, Yoshikawa2014135,PhysRevLett.99.205504,PhysRevB.60.14197} can in principle yield the most accurate calculations. However, since these techniques are also listed in order of increasing computational cost, their use grows increasingly impractical for materials, such as mixed-valence, strongly correlated electron systems, that are already intrinsically complex from an {\\em ab initio} point of view.\n\nThe first two methods treat the muon as a point-like charged particle.\nThe DAA method and the linear response evaluation of the muon phonon modes in the crystal represent the simplest and least computationally intensive approaches. 
The DAA method approximates the potential of the muon Schr\\\"odinger equation with the DFT total electronic energy recalculated as a function of the fixed muon position on a suitable grid. The phonon-based approach yields the standard harmonic approximation to the mode frequency, which directly provides the ZPME by a projection method. It has the advantage of treating the muon and the nuclei on the same footing, but the harmonic approximation is usually not very accurate in the muon case. This is indirectly shown, for example, in Fig.~\\ref{fig:mnsias} by the highly non-ellipsoidal shape of the muon potential energy isosurface obtained by DAA in the case of MnSi \\cite{jp5125876}. The actual shape of the muon potential energy may be mapped within DAA, to avoid the harmonic approximation, but only inasmuch as the energy scales of the nuclei of the embedding system are well separated from those of the muon. Hence DAA may fail for systems with close-lying muon and hydrogen ions. \n\nThe NEO approach introduces a muon wave-function that is optimized together with the electronic wave-functions and overcomes the BO approximation, and PIMD treats the nuclei from a quantum perspective by mapping them onto an isomorphic classical polymer of replicas of each nucleus (called beads). In \\emph{ab initio} PIMD, each bead requires a self-consistent calculation; thus the computational cost scales linearly with the number of beads.\n\n\\begin{figure}\n\\center\n \\includegraphics[width=0.8\\columnwidth]{81411Fig4}\n \\caption{The hopping frequency for a muon in diamond as obtained from PIMD simulations. The solid line is derived from $\\mu$SR\\xspace measurements. Reprinted figure with permission from Ref.~\\onlinecite{PhysRevLett.99.205504}. Copyright (2007) by the American Physical Society.} \\label{fig:diamond}\n\\end{figure}\n\nAn instance where the drastic approximation of the first two methods (and maybe also of the NEO approach) may fail is the metastability of the neutral H$^0$ charge state in Si, which was first tentatively assigned \\cite{Cox1986516} to the tetrahedral (T) site Mu$^0_T$. Specifically, the existence of an absolute energy minimum at the bond-center (BC) Mu$^0_{BC}$ site \\cite{Cox1986516} is confirmed, in Si and diamond, by Hartree-Fock cluster calculations that identify the tetrahedral site as a local minimum.\nHowever, contrasting results regarding the height of the barrier between the two sites and the role of the ground state ZPME of Mu$^0_T$ are reported. \\cite{PhysRevLett.71.557, PhysRevLett.58.1547,PhysRevB.36.9122} \n\nRecalling that thermodynamic equilibrium may not be achieved during a muon lifetime, this finding would agree with the low-temperature experimental observation of both species in all elemental semiconductors.\nThis fact is partially confirmed by more accurate PIMD calculations in diamond.\\cite{PhysRevLett.99.205504}\nFigure \\ref{fig:diamond} displays both the experimental (solid line) and the theoretical PIMD (dashed line) jump rates for muons in diamond. The jump rate from the T site to the BC site is several orders of magnitude larger than that from the BC to the T site for all experimentally accessible temperatures. Therefore, the simulation predicts, in qualitative agreement with experiment, that Mu$^0_T$ must disappear from observation at high temperatures, when its mean residence time becomes shorter than a few nanoseconds, whereas in the same temperature range the more stable Mu$^0_{BC}$ does not delocalise during a few muon lifetimes. 
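\nThe computational price of such PIMD simulations can be made more concrete with a standard convergence rule of thumb (an illustrative estimate on our part, not taken from the cited works): the number of beads $P$ must satisfy\n\\begin{equation*}\nP \\gg \\frac{\\hbar \\omega_{\\rm max}}{k_B T},\n\\end{equation*}\nwhere $\\omega_{\\rm max}$ is the highest vibrational frequency of the simulated system. Since $\\omega \\propto m^{-1\/2}$, the light muon ($m_\\mu \\simeq m_{\\rm p}\/9$) raises $\\omega_{\\rm max}$ by roughly a factor of three with respect to hydrogen, and low temperatures increase the required $P$ further.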
\n\nAs of today, there is no universal solution to obtain a detailed description of the quantum nature of the muon. While for small molecules PIMD gives the most accurate results, this approach is usually prohibitively time-consuming for muons embedded in crystalline materials.\nIndeed, there are two aspects specific to $\\mu$SR\\xspace experiments that make PIMD very computationally demanding: the small mass of the muon and, in many cases, the (low) temperature that is used in experiments. Both these conditions contribute to the growth of the required number of beads.\nFor this reason, when performing PIMD, the \\emph{ab initio} methods for the electronic structure evaluation are usually abandoned in favor of less demanding approaches like tight-binding Hamiltonians or empirical potentials, which may degrade the quality of the description of the electron density.\n\n\n\\section{Is the muon a passive probe?}\n\\label{sec:passive}\n\nThe muon is a positively charged particle\\cite{footnote} and, when it comes to rest in the lattice, it gives rise to a charged defect that can introduce appreciable local modifications to its immediate electronic and ionic environment. The key question in this case is whether the muon can alter the properties of the system that it is supposed to probe in the experiment.\nThe answer to this question can rarely be provided by $\\mu$SR\\xspace experiments alone, and it depends largely on the goals of the measurement. Since a large fraction of muon studies are directed at magnetic materials, the question can often be rephrased as ``is the muon distortion capable of altering the apparent magnetic behavior of the investigated sample with respect to that of the crystal without the muon?''\n\n\n\nThe appreciable crystalline distortions are, however, generally not a source of concern for magnetic measurements. Their limited influence may be explained by considering that, firstly, the muon usually forms bonds with the most electronegative atoms of the hosting system, and, in many cases, these atoms do not provide the leading contribution to the exchange integral. Secondly, when the muon does modify the exchange integral, the perturbation is usually confined to the nearest-neighbor magnetic atoms. Thus, the global magnetic properties are not affected by the perturbation, although the local magnetic field at the muon site may be modified. As long as $\\mu$SR\\xspace experiments concern the relative temperature dependence of the local field, the influence is not relevant. \n\n\n\n\\begin{figure}\n\\center\n \\includegraphics{81411Fig5}\n \\caption{(Color online) Perturbation introduced by the muon on its neighboring Pr atoms in Pr$_2$Ir$_2$O$_7$. The positive electric charge of the muon modifies the crystal field levels, producing a non-negligible hyperfine coupling at the muon site as a consequence of the ground state doublet splitting. \n Reprinted figure with permission from Ref.~\\onlinecite{PhysRevLett.114.017602}. Copyright (2015) by the American Physical Society.}\\label{fig:pr}\n\\end{figure}\n\nDFT can be directly employed to check the variation of the magnetic moment of equivalent ions as a function of their distance from the muon. 
Although most often it is indeed found that the charged particle does not significantly modify the magnetic properties of its neighbors,\na notably different instance is represented by the one-dimensional AF Cu(pyz)(NO$_3$)$_2$ \\cite{PhysRevB.91.144417} discussed earlier.\nIn neutral supercell simulations, there are a couple of muon embedding sites which donate an electron to the $3d$ orbitals of their nearest Cu, turning it into the diamagnetic Cu$^{+}$ configuration. Such a substantial local perturbation is, however, still hard to detect.\nIndeed, even though the modification does affect the local value of the magnetic field via the dipolar coupling, it has practically no effect on the collective magnetic properties of the system. Therefore, the temperature dependence of the magnetic order parameter, or that of its slow fluctuations, which induce muon spin relaxation, remains the same as in an unperturbed environment. \n\n\nBy contrast, a very interesting case in which the magnetic response of the system is altered by the muon has been recently discussed by Foronda and co-authors.\\cite{PhysRevLett.114.017602} \nAn unexpected quasi-static local field at the muon site was observed in the geometrically frustrated pyrochlore iridate Pr$_2$Ir$_2$O$_7$. This effect was argued to be related to a muon-induced perturbation of the crystal field levels of Pr, which leads to the lifting of the non-Kramers degeneracy of the ground state.\\cite{1742-6596-225-1-012031}\n\nThis hypothesis has been nicely demonstrated by the results obtained with DFT simulations, which provide the deformation of the oxygen tetrahedra surrounding the Pr atoms.\nThe lifting of the degeneracy, shown in Fig. \\ref{fig:pr}, is reported for the three Pr atoms closest to the muon. The hyperfine interaction between the Pr nuclei and $f$ orbitals is enhanced by the lifted degeneracy, and the resulting magnetic moments of the three Pr atoms surrounding the muon are revealed by the $\\mu$SR\\xspace experiment, thus masking the properties of the nonmagnetic unperturbed system.\\cite{PhysRevLett.114.017602}\n\nWhenever $\\mu$SR\\xspace results deviate drastically from expectations, one should critically consider whether they are due to specific local alterations induced by the muon on its surroundings. It also happens that unconventional muon-related phenomena are invoked to justify $\\mu$SR\\xspace observations. These cases too may profit from a comparison with DFT predictions. For example, it has been suggested that spin polarons may interact strongly with muons, in particular in the noncentrosymmetric magnetic metal MnSi \\cite{PhysRevB.83.140404}. In this material, the spin polaron would consist of an electron coupled to four Mn neighbors, forming a single large spin entity. Binding of the muon to the spin polaron was claimed to justify two precessions observed by Storchak {\\em et al.} in a high transverse magnetic field. \\cite{PhysRevB.83.140404} This explanation is an alternative to the conventional hypothesis of muon sites made inequivalent by the application of the field, which would justify the appearance of more than one frequency. For MnSi, the site identification by DFT, supporting accurate transverse-field experiments and a careful data analysis, proved that the observed frequencies correspond to the latter case. \\cite{PhysRevB.89.184425,jp5125876}\n\nA similar instance is that of deconfined magnetic monopoles, which are predicted by theory in spin ices, such as some rare-earth pyrochlores. 
A recent experiment claimed the detection by $\\mu$SR\\xspace in Dy$_2$Ti$_2$O$_7$ of a second Wien effect, also referred to as magnetricity, i.e.~the dissociation of magnetic charges by an applied field. \\cite{nature461.956} A subsequent work \\cite{PhysRevLett.107.207207} demonstrated this not to be the case, by comparing observed and simulated spectra on the same material, based on DFT site assignment. Incidentally, this is a case where the error in the interpretation of the earlier experiment turned out to be a trivial one, and its recognition did not actually rule out the existence of deconfined magnetic charges in Dy$_2$Ti$_2$O$_7$. Rather, a more recent work suggested \\cite{PhysRevLett.108.147601} that the observations of Bramwell {\\em et al.} \\cite{nature461.956} are probably still due to magnetricity, although the observation in that work was rather indirect, through a large fraction of muons implanted in close contact with the sample, but outside it, in its cryostat holder.\n\nIn both the MnSi and the Dy$_2$Ti$_2$O$_7$ cases, a qualitative analysis could suffice to produce plausibility arguments towards the correct conclusion, but DFT offered precious quantitative support to the discussion of the experimental findings. This and other examples\\cite{PhysRevB.87.121108} show the effectiveness of the computational approaches in confirming or rejecting the generally accepted belief that the muon behaves as a passive probe.\n\n\\section{Limits and Perspectives}\nOne can confidently say that DFT calculations nowadays offer methods for predicting muon candidate sites in many crystalline materials, suitable to be directly employed in the design of $\\mu$SR\\xspace experiments and to provide complementary information for the data analysis. We refer here to muon {\\em candidate} sites because it is presently beyond the scope of DFT to actually predict the branching ratios among such sites during muon implantation, which is effectively an epithermal process. \n\nThe literature on DFT-based analysis of the effects of charged impurities is vast and often provides precious guidance for the validation of the muon results obtained by numerical simulations. Three intrinsic limiting factors arise when considering DFT as a tool for complementing muon experimental observations.\n\nOpen challenges are still represented by critical compositions of solid solutions, such as certain intermetallics, or intermediate-valence oxides. It is, for instance, still very hard to accurately simulate by DFT a specific composition like YBa$_2$Cu$_3$O$_{6.35}$, at the onset of high-T$_c$ superconductivity, since a very large supercell would be required. But enough insight can often be gathered by considering end members and simple intermediate compositions. \n\nAnother difficulty is sometimes caused by the mean-field approach of the Kohn-Sham method, which is not always sufficient to describe the electronic properties of the materials in all their relevant details. This is notoriously true already for semiconductors, e.g. when excited states are involved, as for the energy gap, although in this case the shortcomings for the muon are easily circumvented. Failures may be more difficult to overcome in the case of strongly correlated systems. \n\nA third problem is related to the BO approximation, which is commonly adopted when simulating the muon with \\emph{ab initio} approaches. This latter issue may become rather severe when dealing with the interaction between light atoms and the muon. 
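\nA back-of-the-envelope estimate (ours, included for illustration) indicates the size of the problem: the standard BO expansion parameter is $(m_e\/M)^{1\/4}$, which for the muon ($m_\\mu \\simeq 207\\, m_e$) gives\n\\begin{equation*}\n\\left(\\frac{m_e}{m_\\mu}\\right)^{1\/4} \\simeq 0.26,\n\\end{equation*}\nto be compared with $\\simeq 0.15$ for hydrogen, so that corrections beyond the BO approximation are systematically larger for the muon than for any nucleus.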
\n\n\n\nSince the purpose of the present review is to concentrate on methods that may be routinely available to assist the analysis of $\\mu$SR\\xspace experiments, not all the known theoretical tools qualify. In this sense, a universally viable way to tackle the quantum nature of the muon is still missing. Although, from the theoretical point of view, many approaches have been developed,\\cite{gwfornonbo,1742-6596-225-1-012031,1.471221} most of them become increasingly expensive computationally with the number of atoms and the number of electrons per atom that are included in the calculation. A compromise between accuracy and speed must be found. \n\nPromising results have been obtained with the nuclear-electronic orbital (NEO) method \\cite{jp053552i,C4CP06006G} which, by treating only a small subset of the atoms with non-BO approaches, improves the description of the muon and of other light nuclei at a smaller computational cost than other approaches like, for example, PIMD.\nFor the cases where the computational cost of performing PIMD is sustainable, this approach has demonstrated high accuracy\\cite{ct500027z}.\nNonetheless, as of today, its applicability is limited to simpler systems such as molecular materials, where the single DFT self-consistent field simulation is computationally inexpensive. \n\nWhenever the muon coupling to its environment is dominated by dipolar interactions, the site assignment is already sufficient to obtain a fully quantitative muon data analysis, and the influence of the quantum muon treatment may be much less important. By contrast, when contact hyperfine couplings are required for a chemically diamagnetic site, state-of-the-art DFT techniques may not be sufficiently accurate, although the situation will probably change in the coming years owing to advances in both the efficiency of computational methods and the availability of computational power.\n\nFinally, we have shown that the methods described above are already a very valuable tool when critically analyzing the possibility of a muon-induced effect. \n\n\\begin{acknowledgment}\n\n\n\nThe seed for both our recent DFT work and the effort to collect the systematics which is the basis of the present review was generated by the MUON JRA of EU FP7 NMI3, under grant agreement 226507. We would like to thank Franz Lang, Johannes M\\\"oller, Stephen Blundell and Fabio Bernardini for useful discussions. We also acknowledge partial funding from PRIN project 2012X3YFZ2 and from the European Union's Horizon 2020 research and innovation programme under grant agreement No 654000. \n\n\\end{acknowledgment}\n\n\\bibliographystyle{jpsj}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe effect of quenched random impurities which couple to the local energy\ndensity on the critical behaviour of a system whose pure version undergoes \na continuous transition\nis well understood in terms of the Harris criterion \\cite{Harris}: if\nthe specific heat exponent $\\alpha$ of the pure system is negative,\nsuch impurities do not change the\nqualitative nature of the transition or its universality class, while in\nthe opposite case there is expected to be different behaviour described\nby a new random fixed point of the renormalisation group (RG). 
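(In RG language this can be stated compactly; this is a textbook restatement rather than a result of this talk. The disorder couples to the local energy density, whose scaling dimension is $x_\\epsilon=d-1\/\\nu$, so its variance carries the eigenvalue\n\\begin{equation}\ny_\\Delta=d-2x_\\epsilon=\\frac2\\nu-d=\\frac\\alpha\\nu,\n\\end{equation}\nusing hyperscaling $\\alpha=2-d\\nu$; randomness is thus relevant precisely when $\\alpha>0$.)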
Given\nthe volume of literature devoted to this subject, it is rather\nsurprising to find relatively few studies of the effects of such\nimpurities on systems whose pure versions undergo a first-order\ntransition, despite the ubiquity of such systems in nature. A naive\nextension of the Harris criterion, based on the observation that the\neffective value of $\\alpha$ is unity at a first-order transition, might suggest\nthat randomness is always strongly relevant. On the other hand,\nfirst-order behaviour is accompanied by a finite correlation length and\nis commonly assumed to be stable under small perturbations. Following\nearly work by Imry and Wortis \\cite{IW}, Aizenman and Wehr \\cite{AW}\nand Hui and Berker \\cite{HB} showed that the role of dimensionality $d$\nis crucial. In fact, the first authors proved a rigorous theorem which\nstates that for $d\\leq 2$ the Gibbs state is always unique, for an\narbitrarily small but non-zero concentration of impurities. This means\nthat there can be no phase coexistence at the transition and hence no\nlatent heat. \n\nThis result raises a number of important questions: (a) what is its\nphysical mechanism; (b) is the continuous transition accompanied by a\ndivergent correlation length; (c) if so, what are the (presumably)\nuniversal critical exponents characterising the transition; and (d) what\nhappens for $d>2$? Initial Monte Carlo studies of (c) \\cite{CFL,Dom}\nsuggested that the critical behaviour of all such systems might be strongly\nuniversal, exhibiting Ising-like exponents independent of the\nunderlying symmetry of the model. This received some theoretical\nsupport \\cite{Kardar}. However, more recent numerical studies (to be\ndescribed below) support earlier theoretical results \\cite{Ludwig} in\nsuggesting that the critical behaviour of, for example, the random bond\n$q$-state Potts model depends continuously on $q$, both in the region\nwhere the pure model has a continuous transition, and where it is\nfirst-order. The aim of this talk is to summarise some of these \nrecent developments, and to provide at least partial answers to the\nabove questions.\n\n\\section{Mapping to the random field Ising model}\nFirst, I want to describe a mapping \\cite{CJ} of this problem to the \n{\\em random field} Ising model (RFIM), which is asymptotically valid for very\nstrong first-order transitions, and which provides a physical\nexplanation of the Aizenman-Wehr result \\cite{AW}, as well as a\nprediction for what happens when $d>2$. \n\nConsider a pure system at a thermal first-order transition point. There\nwill be coexistence between a (generally unique) disordered phase and\nthe (generally non-unique) ordered phases. The internal energies \n$U_1$ and $U_2$ of\nthese two kinds of phase will differ by the latent heat, $L$.\nNow consider an interface between the disordered phase and one of the \nordered phases, with surface tension $\\sigma$ (measured in units of $kT_c$).\nIf $\\sigma$ is large, there will be very few isolated bubbles of the\nopposite phase above or below the main interface. Its equilibrium\nstatistics may therefore be\ndescribed by a free energy functional equal to its area multiplied by\n$\\sigma$. Let us compare this with the interface between the two\n{\\em ordered} phases of an Ising model, with spins taking the values\n$\\pm1$, at low temperatures. Once\nagain, there will be few bubbles of the opposite phase, and the bare\ninterfacial tension will be $\\sim 2J$, where $J$ is the reduced exchange\ncoupling. 
In the limit when $\\sigma\\sim 2J$ is large, these\ninterfacial models are therefore identical.\\footnote{This is not the case when\nbubbles of the opposite phase are included. For example, regions of\nordered phase appearing in the disordered phase are counted with\nthe degeneracy factor of the ordered phases, as compared with a factor\nof unity for the Ising case.}\n\nNow consider the effects of randomness in these two models, in\nthe first case random bonds, coupling to the local energy density, and\nin the second random fields, coupling to the local magnetisation.\nIn the RFIM, these are accounted for by adding a term $\\Sigma_{r>}h(r)-\n\\Sigma_{r<}h(r)$, where $h(r)$ is the local random field, and in the two\nterms $r$ is summed respectively above and below the instantaneous\nposition of the interface. For random impurities of local concentration\n$\\delta x(r)$ we have, similarly,\n$U_1\\Sigma_{r>}\\delta x(r)+U_2\\Sigma_{r<}\\delta x(r)$. Apart from a constant\nindependent of the position of the interface, this may be written as\n$L$ times $\\Sigma_{r>}\\delta x(r)-\\Sigma_{r<}\\delta x(r)$, exactly the\nsame form as for the RFIM. \n\nWe may therefore set up a dictionary between these two cases, in which\nthe thermal variables of the random bond system are related to the\nmagnetic variables of the RFIM:\n\\begin{eqnarray*}\n\\sigma\/kT_c&\\longleftrightarrow&J\/kT\\\\\n(L\/kT_c)\\,x&\\longleftrightarrow& h_{RF}\/kT\\\\\n(T-T_c)\\,L&\\longleftrightarrow&H\\cdot M,\n\\end{eqnarray*}\nwhere the last relation is between the fields $(T-T_c)$ and a {\\em uniform}\nmagnetic field $H$ which respectively distinguish between\nthe two phases. Although the above mapping may seem ill-defined in its\nuse of the local energy density as a kind of order parameter, it may be\nmade completely explicit, for example, for the $q$-state Potts model\nthrough the mapping to the random cluster model, where $\\sigma\\sim\nL\\sim\\ln q$ for large $q$ \\cite{CJ}.\n\nThe interfacial model of the RFIM has been studied extensively\n\\cite{RFIMinterface} and, in particular, RG equations have been derived\nwhich are asymptotically exact in the limit of low temperatures and\nweak randomness, just where our mapping is valid. For $d=2$, the variable\n$h_{RF}\/J\\sim xL\/\\sigma$ is (marginally) relevant, and, just as this\ndestroys the spontaneous magnetisation for the RFIM, the latent heat\nvanishes for the random bond case, in accord with the Aizenman-Wehr\ntheorem \\cite{AW}. For the RFIM, the RG\ntrajectories flow towards a paramagnetic fixed point at which the\ncorrelation length $\\xi\\sim\\exp({\\rm const}(J\/h_{RF})^3)$ is finite, but\nnote that this is outside the region where the mapping to the random\nbond problem is valid. We cannot conclude, therefore, anything about the\nnature of the latter's true critical behaviour from this argument. In fact,\nnumerical and other analytic studies indicate that the actual\ncorrelation length is divergent, and hence $\\xi$ is merely a crossover \nlength in this case.\n\nThe predictions are more interesting for $d>2$, when the RFIM exhibits a\ncritical fixed point at $kT\/J=0$ and a finite value of $h_{RF}\/J$.\n\\begin{figure}\n\\centerline{\n\\epsfxsize=3in\n\\epsfbox{fig1.ps}}\n\\caption{Critical surface for $d>2$, constructed by analogy with the\nphase diagram of the RFIM. The axes are $(1\/\\sigma,x)$ in the\nnotation of this talk. 
The shaded region is first-order,\nbounded by a line of tricritical points, along which the RG flows go\ntowards the fixed point at $R$. The upper boundary is the percolation\nlimit, and the partially dashed curve is a conjectured line of fixed\npoints describing the continuous random critical behaviour.}\n\\label{fig1}\n\\end{figure}\nAs shown in Fig.~\\ref{fig1}, there is now a region in which the\nspontaneous magnetisation (latent heat) is non-vanishing as the sign of\nthe uniform field ($T-T_c$) is changed. This region is bounded, in the\ncase of the RFIM, by the critical curve, close to which the critical\nbehaviour is determined by the zero-temperature fixed point. Similarly,\nthe first-order region of the random bond system will be bounded by a\nline of {\\em tricritical} points, above which (presumably) the transition\nbecomes continuous. Since the fixed point occurs in the region where the\nmapping between the two systems is valid, we may infer some of the\ntricritical exponents from those of the RFIM \\cite{CJ}. For example, the latent\nheat should vanish as $(x_c-x)^\\beta$, where $\\beta$ is the usual\nmagnetisation exponent of the RFIM. Similarly, the correlation length on\nthe critical surface should diverge as $x\\to x_c$\nwith the usual exponent $\\nu$ of the RFIM. But the behaviour for \n$T\\not=T_c$ is related to the {\\em magnetic} properties of the RFIM, and is\ncomplicated by\nthe fact that the temperature at the RFIM fixed point is dangerously\nirrelevant, with an RG eigenvalue $-\\theta$ which is responsible for\nthe violation of hyperscaling $\\alpha=2-(d-\\theta)\\nu$ in that model. \nAs a result, for example, the correlation length in the random bond\nmodel at $x=x_c$ diverges as $(T-T_c)^{-1\/y}$, with \n$y=d-\\theta-\\beta\/\\nu$.\n\n\\section{Results for the Potts model}\nAs is well-known, the pure $q$-state Potts model undergoes a first-order\ntransition for $q>4$ in $d=2$, and it is a relatively simple system to\nstudy numerically in the random bond case. By choosing a suitable\ndistribution of randomness, one can fix the model to be self-dual so that\nthe critical point is determined exactly. One method, which is very\neffective for pure two-dimensional critical systems, is to study the\nfinite-size scaling behaviour of the eigenvalues $\\Lambda_i$ of the\ntransfer matrix in a strip of width $N$. By conformal invariance\n\\cite{JCconf}, these\nare related to the scaling dimensions $x_i$ by \n$2\\pi x_i\/N\\sim\\ln(\\Lambda_0\/\\Lambda_i)$ as $N\\to\\infty$. For the Potts\nmodel, it is possible \\cite{JC} to write the transfer matrix in the so-called\nconnectivity basis \\cite{BN}, allowing $q$ to appear as an easily tunable\nvariable. In the random case, however, the transfer matrices for\ndifferent rows do not commute, and the role of the $\\Lambda_i$ is\ntaken by the Lyapunov exponents, which govern the average growth rate\nof the norm of vectors under the action of the transfer matrices.\nThe asymptotic behaviour of the correlation functions is related to\nthese exponents as for the pure case. \nIn order to use conformal invariance, it is necessary to assume\ntranslational invariance, which is only recovered after quenched\naveraging. However, the Lyapunov exponents themselves are not\nself-averaging, only their logarithms. This is related to the phenomenon\nof multi-scaling, whereby the average of the $p$th power of the\ncorrelation function is governed by an exponent $x^{(p)}$ which is not\nlinear in $p$. 
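In schematic form (a standard way of expressing this, with the bar denoting the quenched average),\n\\begin{equation}\n\\overline{\\langle s(0)s(r)\\rangle^p}\\sim r^{-2x^{(p)}},\n\\end{equation}\nso that the averaged correlation function corresponds to $p=1$, while the typical decay is governed by $\\partial x^{(p)}\/\\partial p$ at $p=0$; only in a pure system would one have the linear behaviour $x^{(p)}=p\\,x_1$. 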
Since this occurs in most random systems, however, I shall\nnot treat the problem in detail here. It still proves possible, effectively\nby measuring the\nwhole distribution of the $\\ln \\Lambda_i$, to extract the decay of the\naverage correlation function, and hence, for example, the magnetic\nexponent $x_1\\equiv\\beta\/\\nu$. The results from our study \\cite{JC}\nare shown in \nFig.~\\ref{fig2}. We see that the value of $x_1$ appears to increase\n\\begin{figure}\n\\centerline{\n\\epsfxsize=4in\n\\epsfbox{fig2.ps}}\n\\caption{Measured values for the magnetic exponent $\\beta\/\\nu$ of the\nrandom-bond $q$-state Potts model for $d=2$, from Refs.~(9,12). The solid\ncurve is the exact value for the pure model for $q\\leq4$, and the\nother is the extrapolated $(q-2)$-expansion of Dotsenko et al. [14].}\n\\label{fig2}\n\\end{figure}\nsteadily with $q$. For $q<3$ it agrees with the analytic\n$(q-2)$-expansion \\cite{Ludwig,Dot}. \nThere appears to be no break at $q=4$, where the pure\ntransition becomes first-order. Our results disagree with earlier Monte\nCarlo work \\cite{CFL} for $q=8$, where a number close to the Ising value of\n$\\frac18$ was reported. However, more recent Monte Carlo results by\nPicco \\cite{Picco} find $x_1=0.150-0.155$ for $q=8$ (and $0.185\\pm0.005$\nfor $q=64$), while Chatelain and Berche \\cite{CB} \nreport $x_1=0.153\\pm0.003$ for $q=8$.\nThe small discrepancy with our results may be explained by a\ncareful study of the crossover behaviour, which shows that stronger\nrandomness (beyond the reach of our methods) must be considered\nas $q$ increases \\cite{Picco}.\n\nAll studies report a thermal exponent $\\nu\\approx1$. This approximately\nsaturates the lower bound proved by Chayes et al. \\cite{Chayes}. It\nhas been suggested \\cite{Davis} that, in general, this might be a result of the\naveraging procedure and that in some random systems \nthe `true' value of $\\nu$ might be less than unity. However, it can\nbe shown that this is not the case here.\n\n\\section{Analytic results for weak first-order transitions}\nAlthough the $q$-state Potts model is relatively simple to analyse\nnumerically, like most other first-order transitions it is difficult to\nstudy in any kind of perturbative RG approach, since the randomness is\nstrongly relevant. However, this is\nnot always the case for systems which exhibit weak `fluctuation-driven'\nfirst-order transitions. An example is afforded by $N$ Ising models\nwith spins $s_i(r)$,\ncoupled through their energy densities, with a Hamiltonian\n\\begin{equation}\n{\\cal H}=-\\sum_{r,r'}J(r,r')\\sum_is_i(r)s_i(r')+\ng\\sum_{r,r'}\\sum_{i\\not=j}s_i(r)s_i(r')s_j(r)s_j(r').\n\\end{equation}\nFor uniform couplings $J$, this exhibits a first-order transition when\n$N>2$, and in fact, on the critical surface, it is equivalent to the\nGross-Neveu model, which may be analysed nonperturbatively to show that\nthe correlation length is $\\xi\\sim\\exp({\\rm const}\/(N-2)g)$ \\cite{GN}. 
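(As a quick consistency check on our part, anticipating the one-loop flow equations displayed below: setting $\\Delta=0$ there leaves\n\\begin{equation}\ndg\/d\\ell=4(N-2)g^2\\quad\\Rightarrow\\quad g(\\ell)=\\frac{g_0}{1-4(N-2)g_0\\ell},\n\\end{equation}\nwhich diverges at $\\ell^*=1\/4(N-2)g_0$, so that $\\xi\\sim e^{\\ell^*}=\\exp\\big(1\/4(N-2)g_0\\big)$, of precisely this exponential form.) 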
\nEven with\nrandom bonds $J(r,r')$ of strength $\\Delta$,\nthe one-loop RG equations may be derived \nby very simple combinatorial methods \\cite{JCbook}, to give \\cite{RFD}\n\\begin{eqnarray}\ndg\/d\\ell&=&4(N-2)g^2-8g\\Delta+\\cdots\\label{flows1}\\\\\nd\\Delta\/d\\ell&=& -8\\Delta^2+8(N-1)g\\Delta+\\cdots\\label{flows2}\n\\end{eqnarray}\nThese flows are illustrated in Fig.~\\ref{fig3}.\n\\begin{figure}\n\\centerline{\n\\epsfxsize=5in\n\\epsfbox{fig3.eps}}\n\\caption{RG flows in the critical surface for $N$ coupled Ising models,\nfrom Ref.~(21).\nFor non-zero randomness the trajectories curve back towards the\ndecoupled pure fixed point.}\n\\label{fig3}\n\\end{figure}\nWhen $\\Delta=0$, $g$ runs away to infinity, in a manner consistent with\nthe non-perturbative result for $\\xi$. When randomness is added,\nhowever, it is marginally relevant, and in fact the flows are quite\nsimilar to those found for the RFIM quoted earlier if we identify \nthe interfacial tension $\\sim\\xi$.\nHowever, in this case we also see where the flows\nend: at the critical\nfixed point corresponding to $N$ decoupled pure Ising models. (This is\nthe only example I know of where the infrared and ultraviolet fixed\npoints of a set of RG flows are the same.) There have been various\ngeneralisations of this calculation to coupled Potts models \\cite{gen}. In most\ncases, the addition of randomness induces flow towards a critical fixed\npoint which is perturbatively accessible. In higher dimensions, however,\ndifferent outcomes are possible. Adding randomness to models in\n$4-\\epsilon$ dimensions with cubic anisotropy appears not to change\ntheir fluctuation-driven first-order character \\cite{RFD}. However, for impure \n$n$-component superconductors in $4-\\epsilon$ dimensions there is a\ncritical concentration above which the transition becomes continuous\n\\cite{CBoy}.\n\n\\section{Outlook}\nThere are still many open questions in this relatively little explored\narea. A lot more needs to be learned about the nature of the universal\ncritical behaviour (if indeed it is universal), both in $d=2$ and for\n$d>2$ when the randomness is sufficiently strong. In the former case, it\nmay be possible to solve the problem by conformal field theory methods.\nEven the limit of large $q$ appears non-trivial, however. Similar ideas\napply to quantum phase transitions and may have relevance for the\nquantum Hall effect. Most importantly, it should be possible to find\nexperimental systems which realise some of the predictions discussed\nabove. There are, after all, many three-dimensional examples of\nfirst-order transitions. However, it should be stressed that the\nrandomness should couple only to the local energy density, not to the\norder parameter, otherwise this becomes the random field problem. \nIt is also important to ensure that the tricritical\nbehaviour is driven by the effects discussed and not by some simple\nmean-field mechanism (which would give rise to mean-field tricritical\nexponents in $d=3$, apart from logs). Finally, it is important to\nunderstand the dynamics of these systems: it may be that, like the\nrandom field problem, they are plagued by logarithmically slow time\nscales close to the critical point \\cite{RFdyn}.\n\nI am grateful to Jesper Jacobsen for his continuing collaboration on\nthis problem. 
This work was supported in part by EPSRC Grant GR\/J78327.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{Section_1}\n\nIn this paper, we consider the optimal control problem\n\\begin{align}\n\\label{eq_intro_1_1}\n\\textbf{(P)}~~J(x_0,u(\\cdot)) = \\int_0^T l(r,x(r),u(r)) \\dd r + h(x_0,x(T)),\n\\end{align}\nsubject to the following state equation with $\\alpha \\in (0,1)$,\n\\begin{align}\n\\label{eq_intro_1_2}\nx(t) = x_0 + \\int_0^t \\frac{f(t,s,x(s),u(s))}{(t-s)^{1-\\alpha}} \\dd s + \\int_0^t g(t,s,x(s),u(s)) \\dd s,~ \\textrm{a.e.}~ t \\in [0,T],\n\\end{align}\nand the state constraints\n\\begin{align}\n\\label{eq_intro_1_3}\n\\begin{cases}\n(x_0, x(T)) \\in F, & \\textrm{(terminal state constraint)},\\\\\n\tG^i(t,x(t)) \\leq 0,~\\forall t \\in [0,T],~ i=1,\\ldots,m, & \\textrm{(inequality state constraint).} \n\\end{cases}\n\\end{align}\nThe precise problem statement of \\textbf{(P)}, including the space of admissible controls and the standing assumptions for (\\ref{eq_intro_1_1})-(\\ref{eq_intro_1_3}), is given in Section \\ref{Section_2_2}. We mention that optimal control problems with state constraints capture various practical aspects of systems in science, biology, engineering, and economics \\cite{Hartl_SICON_1995, Arutyunov_JOTA_2020, Evans_2010, Vinter_book}.\n\nThe state equation in (\\ref{eq_intro_1_2}) belongs to the class of Volterra integral equations. The main feature of Volterra integral equations is the presence of memory effects, which do not appear in ordinary (state) differential equations. In fact, Volterra integral equations of various kinds have been playing an important role in modeling and analyzing practical physical, biological, engineering, and other phenomena that are governed by memory effects \\cite{Burton_book}. We note that one major distinction between (\\ref{eq_intro_1_2}) and other existing Volterra integral equations is that (\\ref{eq_intro_1_2}) has two different kernels $\\frac{f(t,s,x,u)}{(t-s)^{1-\\alpha}}$ and $g(t,s,x,u)$, in which the first kernel $\\frac{f(t,s,x,u)}{(t-s)^{1-\\alpha}}$ becomes singular at $s=t$, while the second kernel $g(t,s,x,u)$ is nonsingular. In fact, $\\alpha \\in (0,1)$ in the singular kernel of (\\ref{eq_intro_1_2}) determines the strength of the singularity, with stronger singular behavior occurring for smaller $\\alpha \\in (0,1)$. \n\nOptimal control problems for various kinds of Volterra integral equations via the maximum principle have been studied extensively in the literature; see \\cite{Vinokurov_SICON_1969, Angell_JOTA_1976, Kamien_RES_1976, Medhin_JMAA_1988, Carlson_JOTA_1987, Burnap_IMA_Control_1999, Vega_JOTA_2006, Bonnans_Vega_JOTA_2013, Dmitruk_MCRF_2017, Dmitruk_SICON_2014, Bonnans_SVA_2010} and the references therein. Specifically, the first study on optimal control for Volterra integral equations (using the maximum principle) can be traced back to \\cite{Vinokurov_SICON_1969}. Several different formulations (with\/without state constraints, with\/without delay, with\/without additional equality and\/or inequality constraints) of optimal control for Volterra integral equations and their generalizations are reported in \\cite{Angell_JOTA_1976, Kamien_RES_1976, Medhin_JMAA_1988, Carlson_JOTA_1987, Burnap_IMA_Control_1999, Vega_JOTA_2006, Belbas_AMC_2007, Bonnans_SVA_2010}. 
Some recent progress in different directions, including the stochastic framework, can be found in \\cite{Bonnans_Vega_JOTA_2013, Dmitruk_MCRF_2017, Dmitruk_SICON_2014, Wang_ESAIM_2018, Hamaguchi_Arvix_2021}. We note that the above-mentioned existing works considered the situation with nonsingular kernels only in Volterra integral equations, which corresponds to $f \\equiv 0$ in (\\ref{eq_intro_1_2}). Hence, the problem settings in the earlier works can be viewed as a special case of \\textbf{(P)}.\n\nRecently, the optimal control problem for Volterra integral equations having singular kernels only (equivalently, $g \\equiv 0$ in (\\ref{eq_intro_1_2})) was studied in \\cite{Lin_Yong_SICON_2020}. Due to the presence of the singular kernel, the technical analysis, including the maximum principle (without state constraints), in \\cite{Lin_Yong_SICON_2020} has to be different from that of the existing works mentioned above. In particular, the proof of the well-posedness and estimates of Volterra integral equations in \\cite[Theorem 3.1]{Lin_Yong_SICON_2020} requires a new type of Gronwall's inequality. Furthermore, the maximum principle (without state constraints) in \\cite[Theorem 4.3]{Lin_Yong_SICON_2020} needs a different duality analysis for variational and adjoint integral equations, induced by the variational approach. More recently, the linear-quadratic optimal control problem (without state constraints) for linear Volterra integral equations with singular kernels only was studied in \\cite{Han_Arxiv_2021}.\n\n\nWe note that Volterra integral equations having singular and nonsingular kernels are strongly related to classical state equations and fractional order differential equations in the sense of Riemann-Liouville or Caputo \\cite{Kilbas_book}. For the case with singular kernels only, a similar argument is given in \\cite[Section 3.2]{Lin_Yong_SICON_2020}. In particular, let $\\mathcal{D}_{\\alpha} ^C [x(\\cdot)]$ be the fractional derivative operator of order $\\alpha \\in (0,1)$ in the sense of Caputo \\cite[Chapter 2.4]{Kilbas_book}. Then applying \\cite[Theorem 3.24 and Corollary 3.23]{Kilbas_book} to (\\ref{eq_intro_1_2}) yields\n\\begin{subequations}\n\\begin{align}\n\\label{eq_intro_1_4}\n\\mathcal{D}_{\\alpha}^C [x(\\cdot)](t) = f(t,x(t),u(t)) & ~\\Leftrightarrow~ x(t) = x_0 + \\frac{1}{\\Gamma(\\alpha)} \\int_0^t \\frac{f(s,x(s),u(s))}{(t-s)^{1-\\alpha}} \\dd s,~ \\textrm{a.e.}~ t \\in [0,T], \\\\\n\\label{eq_intro_1_5}\n\\frac{ \\dd x(t)}{\\dd t} = g(t,x(t),u(t)) & ~\\Leftrightarrow~ x(t) = x_0 + \\int_0^t g(s,x(s),u(s)) \\dd s,~ \\textrm{a.e.}~ t \\in [0,T],\n\\end{align}\n\\end{subequations}\nwhere $\\Gamma(\\cdot)$ is the gamma function. Note that while (\\ref{eq_intro_1_4}) is a class of fractional differential equations in the sense of Caputo, (\\ref{eq_intro_1_5}) is a classical ordinary differential equation.\nInstead of $\\mathcal{D}_{\\alpha}^C [x(\\cdot)]$ in (\\ref{eq_intro_1_4}), we may use the fractional derivative of order $\\alpha \\in (0,1)$ in the sense of Riemann-Liouville \\cite[Chapter 2.1 and Theorem 3.1]{Kilbas_book}. Hence, we observe that (\\ref{eq_intro_1_4}) and (\\ref{eq_intro_1_5}) are special cases of our state equation in (\\ref{eq_intro_1_2}). This implies that the state equation in (\\ref{eq_intro_1_2}) is able to describe various types of differential equations, including combinations of fractional (of Riemann-Liouville or Caputo type) and ordinary differential state equations. 
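To fix ideas, we record a simple closed-form illustration (ours, not taken from the references above). Consider the scalar linear case of (\\ref{eq_intro_1_2}) with $f(t,s,x,u) = \\lambda x$ for some $\\lambda \\in \\mathbb{R}$ and $g \\equiv 0$. Then\n\\begin{align*}\nx(t) = x_0 + \\lambda \\int_0^t \\frac{x(s)}{(t-s)^{1-\\alpha}} \\dd s \\quad \\Longleftrightarrow \\quad x(t) = x_0 E_{\\alpha}(\\lambda \\Gamma(\\alpha) t^{\\alpha}),~ t \\in [0,T],\n\\end{align*}\nwhere $E_{\\alpha}$ is the one-parameter Mittag-Leffler function. As $\\alpha \\uparrow 1$, this recovers the classical solution $x(t) = x_0 e^{\\lambda t}$ of (\\ref{eq_intro_1_5}) with $g(t,x,u) = \\lambda x$. 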
We also mention that there are several different results on optimal control for fractional differential equations; see \cite{Agrawal_ND_2004, Bourdin_arvix_2012, Kamocki_AMC_2014, Gomoyunov_SICON_2020} and the references therein.

The aim of this paper is to study the optimal control problem stated in \textbf{(P)}. As noted above, since (\ref{eq_intro_1_2}) has both singular and nonsingular kernels, when $g \equiv 0$, (\ref{eq_intro_1_2}) is reduced to the Volterra integral equation with singular kernels only studied in \cite{Lin_Yong_SICON_2020}. Since \cite{Lin_Yong_SICON_2020} did not consider the state-constrained control problem, \textbf{(P)} can be viewed as a generalization of \cite{Lin_Yong_SICON_2020} to the state-constrained control problem for Volterra integral equations having singular and nonsingular kernels. Moreover, with $f \equiv 0$, (\ref{eq_intro_1_2}) is reduced to the classical Volterra integral equation with nonsingular kernels only (e.g. \cite{Dmitruk_MCRF_2017, Dmitruk_SICON_2014, Burnap_IMA_Control_1999, Medhin_JMAA_1988, Kamien_RES_1976, Bonnans_SVA_2010}). Hence, \textbf{(P)} also covers the optimal control problems for Volterra integral equations with nonsingular kernels only.

Under mild assumptions on $f$ and $g$, we first obtain the well-posedness (in $L^p$ and $C$ spaces) and precise estimates for generalized Volterra integral equations of (\ref{eq_intro_1_2}) when the initial condition of (\ref{eq_intro_1_2}) also depends on $t$ (see Lemma \ref{Lemma_2_1} and Appendix \ref{Appendix_B}). This requires the extensive use of the generalized Gronwall's inequality with singular and nonsingular kernels, together with several different regularities of integrals having singular and nonsingular integrands, whose results (including the generalized Gronwall's inequality) are obtained in Appendix \ref{Appendix_A}. Note that the main technical analysis for the well-posedness and estimates of (\ref{eq_intro_1_2}) (see Lemma \ref{Lemma_2_1} and Appendix \ref{Appendix_B}) has to differ from that for the case with singular kernels only in \cite{Lin_Yong_SICON_2020}, as the presence of the singular and nonsingular kernels in (\ref{eq_intro_1_2}) causes various cross coupling characteristics.

Next, we obtain the maximum principle for \textbf{(P)} (see Theorem \ref{Theorem_3_1}). Due to the presence of the state constraints in (\ref{eq_intro_1_3}) and the control space being only a separable metric space (that does not necessarily have any algebraic structure), the derivation of the maximum principle in this paper must be different from that for the unconstrained case with singular kernels only studied in \cite[Theorem 4.3]{Lin_Yong_SICON_2020}. Specifically, we have to employ the Ekeland variational principle and the spike variation technique, together with the intrinsic properties of distance functions and the generalized Gronwall's inequality (see Appendix \ref{Appendix_A}), to establish the duality analysis for Volterra-type variational and adjoint equations, which leads to the desired necessary conditions for optimality.
Furthermore, as (\ref{eq_intro_1_2}) has both singular and nonsingular kernels, the proof of the maximum principle of this paper is more involved than that for the classical state-constrained maximum principle without singular kernels studied in the existing literature (e.g.
\cite[Theorem 1]{Bonnans_SVA_2010} and \cite{Dmitruk_MCRF_2017, Dmitruk_SICON_2014, Burnap_IMA_Control_1999, Medhin_JMAA_1988, Kamien_RES_1976}). In fact, the analysis of the maximum principle for state-constrained optimal control problems is entirely different from that of the problems without state constraints \cite{Hartl_SICON_1995, Bourdin_arxiv_2016}. We also note that, different from existing works on classical optimal control of Volterra integral equations (e.g. \cite{Dmitruk_MCRF_2017, Dmitruk_SICON_2014, Burnap_IMA_Control_1999, Medhin_JMAA_1988, Kamien_RES_1976, Bonnans_SVA_2010}), our paper assumes neither the differentiability of the (singular and nonsingular) kernels in $(t,s,u)$ (time and control variables) nor the convexity of the control space.

The rest of this paper is organized as follows. The notation and the problem statement of \textbf{(P)} are given in Section \ref{Section_2}. The statement of the maximum principle for \textbf{(P)} is provided in Section \ref{Section_3}. Some examples of \textbf{(P)} are studied in Section \ref{Section_5}. The proof of the maximum principle for \textbf{(P)} is given in Section \ref{Section_4}. Appendices \ref{Appendix_A}-\ref{Appendix_D} provide some preliminary results and lemmas, including the well-posedness and estimates of (\ref{eq_intro_1_2}).

\section{Notation and Problem Formulation}\label{Section_2}

\subsection{Notation}\label{Section_2_1}

Let $\mathbb{R}_+$ and $\mathbb{R}_-$ be the sets of nonnegative and nonpositive real numbers, respectively. Let $\mathbb{R}^n$ be the $n$-dimensional Euclidean space, where $\langle x,y \rangle_{\mathbb{R}^n \times \mathbb{R}^n} := x^\top y$ is the inner product and $|x|_{\mathbb{R}^n} := \langle x,x \rangle^{1/2}_{\mathbb{R}^n \times \mathbb{R}^n}$ is the norm for $x,y \in \mathbb{R}^n$. We sometimes write $\langle \cdot,\cdot \rangle$ and $|\cdot|$ when there is no confusion. For $A \in \mathbb{R}^{m \times n}$, $A^\top$ denotes the transpose of $A$. Let $I_{n}$ be the $n \times n$ identity matrix.
Let $\Delta := \{(t,s) \in [0,T] \times [0,T]~|~ 0 \leq s \leq t \leq T \}$ with $T > 0$ being a fixed horizon. Define $\mathds{1}_{A}(\cdot)$ as the indicator function of any set $A$. A modulus of continuity is any increasing real-valued function $\omega :[0,\infty) \rightarrow [0,\infty)$ that vanishes at $0$, i.e., $\lim_{t \downarrow 0} \omega(t) = 0$, and is continuous at $0$. In this paper, $C$ denotes a generic constant whose value may differ from line to line.

For any differentiable function $f:\mathbb{R}^n \rightarrow \mathbb{R}^l$, let $f_x : \mathbb{R}^n \rightarrow \mathbb{R}^{l \times n}$ be the partial derivative of $f$ with respect to $x \in \mathbb{R}^n$. Note that $f_x = \begin{bmatrix} f_{1,x}^\top & \cdots & f_{l,x}^\top \end{bmatrix}^\top$ with $f_{j,x} \in \mathbb{R}^{1 \times n}$, and when $l=1$, $f_x \in \mathbb{R}^{1 \times n}$. For any differentiable function $f : \mathbb{R}^n \times \mathbb{R}^l \rightarrow \mathbb{R}^l$, $f_{x} : \mathbb{R}^n \times \mathbb{R}^l \rightarrow \mathbb{R}^{l \times n}$ denotes the partial derivative with respect to $x \in \mathbb{R}^n$, and $f_y: \mathbb{R}^n \times \mathbb{R}^l \rightarrow \mathbb{R}^{l \times l}$ the partial derivative with respect to $y \in \mathbb{R}^l$.
\n\t\n\t\n\nFor $1 \\leq p < \\infty$, we define the following spaces:\n\\begin{itemize}\n\\item $L^p([0,T];\\mathbb{R}^n)$: the space of functions $\\psi:[0,T] \\rightarrow \\mathbb{R}^n$ such that $\\psi$ is measurable and satisfies $\\|\\psi(\\cdot)\\|_{L^p([0,T];\\mathbb{R}^n)} := \\Bigl ( \\int _0^T |\\psi(t)|^p_{\\mathbb{R}^n} \\dd t \\Bigr )^{1\/p}$;\n\\item $L^{\\infty}([0,T];\\mathbb{R}^n)$: the space of functions $\\psi:[0,T] \\rightarrow \\mathbb{R}^n$ such that $\\psi$ is measurable and satisfies $\\|\\psi(\\cdot)\\|_{L^{\\infty}([0,T];\\mathbb{R}^n)} := \\esssup_{t \\in [0,T]} |\\psi(t)|_{\\mathbb{R}^n} < \\infty$;\n\\item $C([0,T];\\mathbb{R}^n)$: the space of functions $\\psi:[0,T] \\rightarrow \\mathbb{R}^n$ such that $\\psi$ is continuous and satisfies $\\|\\psi(\\cdot)\\|_{\\infty} := \\sup_{t \\in [0,T]} |\\psi(t)|_{\\mathbb{R}^n} < \\infty $;\n\\item $\\textsc{BV}([0,T];\\mathbb{R}^n)$: the space of functions $\\psi:[0,T] \\rightarrow \\mathbb{R}^n$ such that $\\psi$ is a function with bounded variation on $[0,T]$.\n\\end{itemize}\nThe norm on $\\textsc{BV}([0,T];\\mathbb{R}^n)$ is defined by $\\|\\psi(\\cdot)\\|_{\\textsc{BV}([0,T];\\mathbb{R}^n)} := \\psi(0) + \\textsc{TV}(\\psi)$, where $\\textsc{TV}(\\psi) := \\sup_{(t_k)_k} \\bigl \\{ \\sum_{k} |\\psi(t_{k+1}) - \\psi(t_k)|_{\\mathbb{R}^n} \\bigr \\} < \\infty$ with the supremum being taken by all partitions of $[0,T]$. Let $\\textsc{NBV}([0,T];\\mathbb{R}^n)$ be the space of functions $\\psi(\\cdot) \\in \\textsc{BV}([0,T];\\mathbb{R}^n)$ such that $\\psi(\\cdot) \\in \\textsc{BV}([0,T];\\mathbb{R}^n)$ is normalized, i.e., $\\psi(0) = 0$ and $\\psi$ is left continuous. The norm on $\\textsc{NBV}([0,T];\\mathbb{R}^n)$ is defined by $\\|\\psi(\\cdot)\\|_{\\textsc{NBV}([0,T];\\mathbb{R}^n)} := \\textsc{TV}(\\psi)$. When $\\psi(\\cdot) \\in \\textsc{NBV}([0,T];\\mathbb{R})$ is monotonically nondecreasing, we have $\\|\\psi(\\cdot)\\|_{\\textsc{NBV}([0,T];\\mathbb{R}} = \\psi(T)$. Note that both $(\\textsc{BV}([0,T];\\mathbb{R}^n), \\|\\cdot\\|_{\\textsc{BV}([0,T];\\mathbb{R}^n)})$ and $(\\textsc{NBV}([0,T];\\mathbb{R}^n), \\|\\cdot\\|_{\\textsc{NBV}([0,T];\\mathbb{R}^n)})$ are Banach spaces. \n\n\\subsection{Problem Formulation}\\label{Section_2_2}\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.35]{state1.eps}~~~\n\\includegraphics[scale=0.35]{state2.eps}\t\n\\caption{State trajectories when $x_0 = 1$, $f(t,s,x,u) = -0.4 \\sin(2\\pi x)$, and $g(t,s,x,u) = -x$. Note that the state trajectory shows more singular behavior with small $\\alpha \\in (0,1)$.}\n\\label{Fig_1_1_1_1}\n\\end{figure}\n\n\nConsider the following Volterra integral equation:\n\\begin{align}\n\\label{eq_1}\nx(t) = x_0 + \\int_0^t \\frac{f(t,s,x(s),u(s))}{(t-s)^{1-\\alpha}} \\dd s + \\int_0^t g(t,s,x(s),u(s)) \\dd s,~ \\textrm{a.e.}~ t \\in [0,T],\n\\end{align}\nwhere $\\alpha \\in (0,1)$ is the parameter of singularity, $x(\\cdot) \\in \\mathbb{R}^n$ is the state with the initial condition $x_0 \\in \\mathbb{R}^n$, and $u(\\cdot) \\in U \\subset \\mathbb{R}^d$ is the control with $U$ being the control space. In (\\ref{eq_1}), $\\frac{f(t,s,x,u)}{(t-s)^{1-\\alpha}}$ is the singular kernel (with the singularity appearing at $s=t$) and $g(t,s,x,u)$ is the nonsingular kernel, where $f,g:\\Delta \\times \\mathbb{R}^n \\times U \\rightarrow \\mathbb{R}^n$ are generators. We note that $\\alpha \\in (0,1)$ determines the level of singularity of (\\ref{eq_1}); see Figure \\ref{Fig_1_1_1_1}. 
Notice also that $f$ and $g$ depend on two time parameters, $t$ and $s$, whose roles are different. While $t$ is the outer time variable determining the current time, $s$ is the inner time variable describing the path or memory of the state equation from $0$ to $t$. We sometimes use the notation $x(\cdot;x_0,u) := x(\cdot)$ to emphasize the dependence on the initial state and the control.

\begin{assumption}\label{Assumption_2_1}
\begin{enumerate}[(i)]
\item $(U,\rho)$ is a separable metric space, where $U \subset \mathbb{R}^d$ and $\rho$ is the metric induced by the standard Euclidean norm $|\cdot|_{\mathbb{R}^d}$;
\item There is a constant $K \geq 0$ such that for some modulus of continuity $\omega$,
\begin{align*}
\begin{cases}
|f(t,s,x,u) - f(t^\prime,s,x,u)| + |g(t,s,x,u) - g(t^\prime,s,x,u)| \leq K \omega(|t-t^\prime|)(1+|x|), \\
~~~~~~~~~~ \forall (t,s),(t^\prime,s) \in \Delta, ~x \in \mathbb{R}^n,~ u \in U;
\end{cases}
\end{align*}
\item For $p > \frac{1}{\alpha}$, there are nonnegative functions $K_0(\cdot) \in L^{ \frac{1}{\alpha} +}([0,T];\mathbb{R})$ and $K(\cdot) \in L^{ \frac{p}{\alpha p - 1} +}([0,T];\mathbb{R})$, where $L^{p+}([0,T];\mathbb{R}^n) := \cup_{r > p} L^{r}([0,T];\mathbb{R}^n)$ for $1 \leq p < \infty$, such that
\begin{align*}
\begin{cases}
|f(t,s,x,u) - f(t,s,x^\prime,u^\prime)| + |g(t,s,x,u) - g(t,s,x^\prime,u^\prime)| \leq K(s) (|x-x^\prime| + \rho(u,u^\prime)) , \\
~~~~~~~~~~ \forall (t,s) \in \Delta,~ x,x^\prime \in \mathbb{R}^n,~ u,u^\prime \in U, \\
|f(t,s,0,u)| + |g(t,s,0,u)| \leq K_0(s),~ \forall (t,s) \in \Delta,~ u \in U;
\end{cases}
\end{align*}
\item $f$ and $g$ are of class $C^1$ (continuously differentiable) in $x$, whose partial derivatives $f_x$ and $g_x$ are bounded and continuous in $(x,u) \in \mathbb{R}^n \times U$.
\end{enumerate}
\end{assumption}

For $p \geq 1$ and $u_0 \in U$, the space of admissible controls for (\ref{eq_1}) is defined by
\begin{align*}
\mathcal{U}^p[0,T] = \Bigl \{u:[0,T] \rightarrow U~|~ \textrm{$u$ is measurable in $t \in [0,T]$} ~\&~ \rho(u(\cdot),u_0) \in L^p([0,T];\mathbb{R}_+) \Bigr \}.
\end{align*}
We state the following lemma; the proof is provided in Appendix \ref{Appendix_B} (see Lemmas \ref{Lemma_B_1} and \ref{Lemma_B_2}).

\begin{lemma}\label{Lemma_2_1}
Let (i)-(iii) of Assumption \ref{Assumption_2_1} hold.
Then the following results hold:
\begin{enumerate}[(i)]
\item For any $(x_0,u(\cdot)) \in \mathbb{R}^n \times \mathcal{U}^p[0,T]$, (\ref{eq_1}) admits a unique solution in $C([0,T];\mathbb{R}^n)$, i.e., $x(\cdot;x_0,u) \in C([0,T];\mathbb{R}^n)$, and there is a constant $C \geq 0$ such that
\begin{align*}
\Bigl \|x (\cdot;x_0,u) \Bigr \|_{L^p([0,T];\mathbb{R}^n)} \leq C \Bigl (1 + |x_0|_{\mathbb{R}^n} + \Bigl \|\rho(u(\cdot),u_0) \Bigr \|_{L^p([0,T];\mathbb{R}_+)} \Bigr );
\end{align*}
\item For any $x_0,x_0^\prime \in \mathbb{R}^n$ and $u(\cdot),u^\prime (\cdot) \in \mathcal{U}^p[0,T]$, there is a constant $C \geq 0$ such that
\begin{align*}
& \Bigl \|x(\cdot;x_0,u) - x(\cdot;x_0^\prime,u^\prime) \Bigr \|_{L^p([0,T];\mathbb{R}^n)} \leq C |x_0 - x_0^\prime |_{\mathbb{R}^n} \\
&~~~~~ + C \Biggl [ \int_0^T \Bigl ( \int_0^t \frac{|f(t,s,x(s;x_0,u),u(s)) - f(t,s,x(s;x_0,u),u^\prime(s))|}{(t-s)^{1-\alpha}} \dd s \Bigr )^p \dd t \Biggr]^{\frac{1}{p}} \\
&~~~~~ + C \Biggl [ \int_0^T \Bigl ( \int_0^t | g(t,s,x(s;x_0,u),u(s)) - g(t,s,x(s;x_0,u),u^\prime(s)) | \dd s \Bigr)^p \dd t \Biggr ]^{\frac{1}{p}}.
\end{align*}
\end{enumerate}
\end{lemma}

We introduce the following objective functional:
\begin{align}
\label{eq_2}
J(x_0,u(\cdot)) = \int_0^T l(r,x(r),u(r)) \dd r + h(x_0,x(T)).
\end{align}
Then the main objective of this paper is to solve the following optimal control problem:
\begin{align*}
\textbf{(P)}~ \inf_{u(\cdot) \in \mathcal{U}^p[0,T]} J(x_0,u(\cdot)),~\textrm{subject to (\ref{eq_1}),}
\end{align*}
and the state constraints given by
\begin{align}
\label{eq_3}
\begin{cases}
(x_0, x(T;x_0,u)) \in F, & \textrm{(terminal state constraint)},\\
G^i(t,x(t;x_0,u)) \leq 0,~\forall t \in [0,T],~ i=1,\ldots,m, & \textrm{(inequality state constraint).}
\end{cases}
\end{align}

\begin{assumption}\label{Assumption_2_2}
\begin{enumerate}[(i)]
\item $l:[0,T] \times \mathbb{R}^n \times U \rightarrow \mathbb{R}$ is continuous in $t \in [0,T]$ and of class $C^1$ in $x$, whose partial derivative $l_x$ is bounded and continuous in $(x,u) \in \mathbb{R}^n \times U$. Moreover, there is a constant $K \geq 0$ such that
\begin{align*}
\begin{cases}
|l(s,x,u) - l(s,x^\prime,u^\prime)| \leq K (|x-x^\prime| + \rho(u,u^\prime)) ,~ \forall s \in [0,T],~ x,x^\prime \in \mathbb{R}^n,~ u,u^\prime \in U, \\
|l(s,0,u)| \leq K,~ \forall s \in [0,T],~ u \in U;
\end{cases}
\end{align*}
\item $h : \mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R}$ is of class $C^1$ in both variables, whose partial derivatives are bounded. Let $h_{x}$ and $h_{x_0}$ be the partial derivatives of $h$ with respect to $x$ and $x_0$, respectively.
Moreover, there is a constant $K \\geq 0$ such that\n\t\\begin{align*}\n\t|h(x_0,x) - h(x_0^\\prime,x^\\prime)| \\leq K ( |x_0 - x_0^\\prime |\t+ |x^\\prime - x^\\prime | ),~ \\forall (x_0,x),(x_0^\\prime,x^\\prime) \\in \\mathbb{R}^n \\times \\mathbb{R}^n;\n\t\\end{align*}\n\t\\item $F$ is a nonempty closed convex subset of $\\mathbb{R}^{2n}$;\n\t\\item For $i=1,\\ldots,m$, $G^i:[0,T] \\times \\mathbb{R}^n \\rightarrow \\mathbb{R}$ is continuous in $t \\in [0,T]$ and is of class $C^1$ in $x$, which is bounded in both variables.\n\t\\end{enumerate}\n\\end{assumption}\n \n\nUnder Assumptions \\ref{Assumption_2_1} and \\ref{Assumption_2_2}, the main objective of this paper is to derive the Pontryagin-type maximum principle for \\textbf{(P)}, which constitutes the necessary conditions for optimality. Note that Assumptions \\ref{Assumption_2_1} and \\ref{Assumption_2_2} are crucial for the well-posedness of the state equation in (\\ref{eq_1}) by Lemma \\ref{Lemma_2_1} (see also Appendix \\ref{Appendix_B}) as well as the maximum principle of \\textbf{(P)}. Assumptions similar to Assumptions \\ref{Assumption_2_1} and \\ref{Assumption_2_2} have been used in various optimal control problems and their maximum principles; see \\cite{Yong_book, Li_Yong_book, Lin_Yong_SICON_2020, Bettiol_CVOC_2021, Bourdin_arxiv_2016, Bonnans_SVA_2010, Carlson_JOTA_1987, Vega_JOTA_2006, Moon_Automatica_2020_1, Dmitruk_MCRF_2017, Dmitruk_SICON_2014, Bonnans_Vega_JOTA_2013, Vinokurov_SICON_1969, Bourdin_MP_2020, Hamaguchi_Arvix_2021} and the references therein.\n\n\\section{Statement of the Maximum Principle}\\label{Section_3}\n\nWe provide the statement of the maximum principles for \\textbf{(P)}. The proof is given in Section \\ref{Section_4}. \n\n\\begin{theorem}\\label{Theorem_3_1}\n\tLet Assumptions \\ref{Assumption_2_1} and \\ref{Assumption_2_2} hold. Suppose that $(\\overline{u}(\\cdot), \\overline{x}(\\cdot)) \\in \\mathcal{U}^p[0,T] \\times C([0,T];\\mathbb{R}^n)$ is the optimal pair for \\textbf{(P)}, i.e., $\\overline{u}(\\cdot) \\in \\mathcal{U}^p[0,T]$ and the optimal solution to \\textbf{(P)}, where $\\overline{x}(\\cdot;\\overline{x}_0,\\overline{u}) := \\overline{x}(\\cdot) \\in C([0,T];\\mathbb{R}^n)$ is the corresponding optimal state trajectory of (\\ref{eq_1}). 
Then there exists a tuple $(\lambda,\xi,\theta_1,\ldots,\theta_m)$, where $\lambda \in \mathbb{R}$, $\xi \in \mathbb{R}^{2n}$ with $(\xi_1,\xi_2) \in \mathbb{R}^n \times \mathbb{R}^n$, and $\theta(\cdot) := (\theta_1(\cdot),\ldots,\theta_m(\cdot)) \in \textsc{NBV}([0,T];\mathbb{R}^m)$ with $\theta_i(\cdot) \in \textsc{NBV}([0,T];\mathbb{R})$ for $i=1,\ldots,m$, such that the following conditions are satisfied:
\begin{itemize}
\item Nontriviality condition: the tuple $(\lambda,\xi,\theta_1(\cdot),\ldots,\theta_m(\cdot))$ is not trivial, i.e., it holds that \\ $(\lambda,\xi,\theta_1(\cdot),\ldots,\theta_m(\cdot)) \neq 0$, where
\begin{align*}
\begin{cases}
\lambda \geq 0, \\
\xi = \begin{bmatrix} \xi_1 \\ \xi_2 \end{bmatrix} \in N_F \Bigl (\begin{bmatrix} \overline{x}_0 \\ \overline{x}(T) \end{bmatrix} \Bigr ), \\
\theta_i(\cdot) \in \textsc{NBV}([0,T];\mathbb{R})~ \textrm{with}~ \|\theta_i(\cdot)\|_{\textsc{NBV}([0,T];\mathbb{R})} = \theta_i(T) \geq 0,~\forall i=1,\ldots,m,
\end{cases}
\end{align*}
with $N_F(x)$ being the normal cone to the convex set $F$ defined in (\ref{eq_4_1}), and $\theta_i(\cdot) \in \textsc{NBV}([0,T];\mathbb{R})$, $i=1,\ldots,m$, being finite, nonnegative, and monotonically nondecreasing on $[0,T]$;
\item Nonnegativity condition:
\begin{align*}
\begin{cases}
\lambda \geq 0, \\
\dd \theta_i(s) \geq 0,~ \forall s \in [0,T],~i=1, \ldots, m,
\end{cases}
\end{align*}
where $\dd \theta_i$ denotes the Lebesgue-Stieltjes measure on $[0,T]$ corresponding to $\theta_i$, $i=1,\ldots,m$;
\item Adjoint equation: there exists a nontrivial $p(\cdot) \in L^p([0,T];\mathbb{R}^n)$ such that $p$ is the unique solution to the following backward Volterra integral equation having singular and nonsingular kernels:
\begin{align*}
p(t) & = \int_t^T \frac{f_x(r,t,\overline{x}(t),\overline{u}(t))^\top}{(r-t)^{1-\alpha}} p(r) \dd r - \mathds{1}_{[0,T)} (t) \frac{f_x(T,t,\overline{x}(t),\overline{u}(t))^\top}{(T-t)^{1-\alpha}} \Bigl ( \lambda h_x(\overline{x}_0,\overline{x}(T)) + \xi_2^\top \Bigr )^\top \\
&~~~ + \int_t^T g_x(r,t,\overline{x}(t),\overline{u}(t))^\top p(r) \dd r - g_x(T,t,\overline{x}(t),\overline{u}(t))^\top \Bigl ( \lambda h_x(\overline{x}_0,\overline{x}(T)) + \xi_2^\top \Bigr )^\top \\
&~~~ - \lambda l_x(t,\overline{x}(t),\overline{u}(t))^\top - \sum_{i=1}^m G_x^{i}(t,\overline{x}(t))^\top \frac{\dd \theta_i(t)}{\dd t},~ \textrm{a.e.}~ t \in [0,T];
\end{align*}
\item Transversality condition:
\begin{align*}
0 & \leq \Bigl \langle \xi_1, \overline{x}_0 - y_1 \Bigr \rangle_{\mathbb{R}^n \times \mathbb{R}^n} + \Bigl \langle \xi_2, \overline{x}(T) - y_2 \Bigr \rangle_{\mathbb{R}^n \times \mathbb{R}^n},~ \forall y = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} \in F, \\
\int_0^T p(t) \dd t &= \xi_1 + \xi_2 + \lambda h_{x_0}(\overline{x}_0,\overline{x}(T))^\top +\lambda h_x(\overline{x}_0,\overline{x}(T))^\top;
\end{align*}
\item Complementary slackness condition:
\begin{align*}
\int_0^T G^i(t,\overline{x}(t;\overline{x}_0,\overline{u})) \dd \theta_i(t) = 0,~ \forall i=1,\ldots,m,
\end{align*}
which is equivalent to
\begin{align*}
\textsc{supp}(\dd \theta_i(\cdot)) \subset \{ t \in [0,T]~|~
G^i(t,\\overline{x}(t;\\overline{x}_0,\\overline{u})= 0\\},~ \\forall i=1,\\ldots,m,\t\n\t\\end{align*}\n\twhere $\\textsc{supp}(\\dd \\theta_i(\\cdot))$ denotes the support of the measure $\\dd \\theta_i$, $i=1,\\ldots,m$;\n\t\\item Hamiltonian-like maximum condition: \n\t\\begin{align*}\n&\\int_t^T p(r)^\\top \\frac{f(r,t,\\overline{x}(t),\\overline{u}(t))}{(r-t)^{1-\\alpha}} \\dd r - \\mathds{1}_{[0,T)}(t) \\Bigl ( \\lambda h_x(\\overline{x}_0,\\overline{x}(T)) + \\xi_2^\\top \\Bigr ) \\frac{f(T,t,\\overline{x}(t),\\overline{u}(t))}{(T-t)^{1-\\alpha}} \\\\\n&~~~ + \\int_t^T p(r)^\\top g(r,t,\\overline{x}(t),\\overline{u}(t)) \\dd r - \\Bigl ( \\lambda h_x(\\overline{x}_0,\\overline{x}(T)) + \\xi_2^\\top \\Bigr ) g(T,t,\\overline{x}(t),\\overline{u}(t)) \\\\\n&~~~ - \\lambda l(t,\\overline{x}(t),\\overline{u}(t)) \\\\\n& = \\max_{u \\in U} \\Biggl \\{ \\int_t^T p(r)^\\top \\frac{f(r,t,\\overline{x}(t),u)}{(r-t)^{1-\\alpha}} \\dd r - \\mathds{1}_{[0,T)}(t) \\Bigl ( \\lambda h_x(\\overline{x}_0,\\overline{x}(T)) + \\xi_2^\\top \\Bigr ) \\frac{f(T,t,\\overline{x}(t),u)}{(T-t)^{1-\\alpha}} \\\\\n&~~~ + \\int_t^T p(r)^\\top g(r,t,\\overline{x}(t),u) \\dd r - \\Bigl ( \\lambda h_x(\\overline{x}_0,\\overline{x}(T)) + \\xi_2^\\top \\Bigr ) g(T,t,\\overline{x}(t),u) \\\\\n&~~~ - \\lambda l(t,\\overline{x}(t),u) \\Biggr \\},~\\textrm{a.e. $ t \\in [0,T]$.}\n\\end{align*}\n\t\\end{itemize}\n\\end{theorem}\n\nSeveral important remarks are given below.\n\n\n\\begin{remark}\\label{Remark_3_3}\nThe adjoint equation $p$ in Theorem \\ref{Theorem_3_1} includes the (strong or distributional (or weak)) derivative of $\\theta$, which is expressed as $\\frac{\\dd \\theta_i(t)}{\\dd t}$, $i=1,\\ldots,m$. Notice that $\\theta_i$, $i=1,\\ldots,m$, are finite and monotonically nondecreasing by Theorem \\ref{Theorem_3_1}, where their corresponding Lebesgue-Stieltjes measures, denoted by $\\dd \\theta_i$, $i=1,\\ldots,m$, are nonnegative, i.e., $\\dd \\theta_i(s) \\geq 0$, for $s \\in [0,T]$ and $i=1,\\ldots,m$. In fact, $([0,T],\\mathcal{B}([0,T]))$, where $\\mathcal{B}$ is the Borel $\\sigma$-algebra generated by subintervals of $[0,T]$, is a measurable space on which the two nonnegative measures $\\dd \\theta_i$ and $\\dd t $ are defined. Then we can easily see that $\\dd \\theta_i \\ll \\dd t$, i.e., $\\dd \\theta_i$ is absolutely continuous with respect to $\\dd t$. That is, $\\dd \\theta_i(B) = 0$ whenever $\\dd t(B) = 0$ for $B \\in \\mathcal{B}([0,T])$ and $i=1,\\ldots,m$ \\cite[Appendix C]{Conway_2000_book}. 
By the Radon-Nikodym theorem (see \cite[Appendix C]{Conway_2000_book}), this implies that there is a unique Radon-Nikodym derivative $\Theta_i(\cdot) \in L^1([0,T];\mathbb{R})$, $i=1,\ldots,m$, such that
\begin{align*}
\frac{\dd \theta_i(t)}{\dd t} = \Theta_i(t)~\Leftrightarrow~ \theta_i(t) = \int_0^t \Theta_i(s) \dd s,~ \forall i=1,\ldots,m,~ \textrm{a.e.}~ t\in [0,T].
\end{align*}
Hence, with the Radon-Nikodym derivative $\Theta_i(\cdot)$, $i=1,\ldots,m$, the adjoint equation for $p$ in Theorem \ref{Theorem_3_1} can be written as
\begin{align}
\label{eq_3_1}
p(t) & = \int_t^T \frac{f_x(r,t,\overline{x}(t),\overline{u}(t))^\top}{(r-t)^{1-\alpha}} p(r) \dd r - \mathds{1}_{[0,T)} (t) \frac{f_x(T,t,\overline{x}(t),\overline{u}(t))^\top}{(T-t)^{1-\alpha}} \Bigl ( \lambda h_x(\overline{x}_0,\overline{x}(T)) + \xi_2^\top \Bigr )^\top \\
&~~~ + \int_t^T g_x(r,t,\overline{x}(t),\overline{u}(t))^\top p(r) \dd r - g_x(T,t,\overline{x}(t),\overline{u}(t))^\top \Bigl ( \lambda h_x(\overline{x}_0,\overline{x}(T)) + \xi_2^\top \Bigr )^\top \nonumber \\
&~~~ - \lambda l_x(t,\overline{x}(t),\overline{u}(t))^\top - \sum_{i=1}^m G_x^{i}(t,\overline{x}(t))^\top \Theta_i(t),~ \textrm{a.e.}~ t \in [0,T]. \nonumber
\end{align}
Note that the well-posedness (existence and uniqueness of the solution) of the adjoint equation in (\ref{eq_3_1}) follows from Theorem \ref{Theorem_3_1} (see also Lemma \ref{Lemma_B_5} in Appendix \ref{Appendix_B}).
\end{remark}

\begin{remark}
The strategy of the proof of Theorem \ref{Theorem_3_1} is based on the Ekeland variational principle. Moreover, as $U$ is only a (separable) metric space and does not have any algebraic structure, the spike variation technique has to be employed. In contrast to other classical approaches, our proof needs to deal with the Volterra-type variational and adjoint equations having singular and nonsingular kernels in the variational analysis.
\end{remark}

\begin{remark}\label{Remark_3_2}
The nontrivial tuple $(\lambda, \xi, \dd \theta_1,\ldots,\dd \theta_m, p)$ is a Lagrange multiplier, which is said to be normal when $\lambda > 0$ and abnormal when $\lambda = 0$. In the normal case, we may assume the Lagrange multiplier to have been normalized so that $\lambda =1$.
\end{remark}

\begin{remark}
The necessary conditions in Theorem \ref{Theorem_3_1} are of interest only when the inequality state constraint is nondegenerate in the sense that $G_x^i(t,\overline{x}(t))^\top \neq 0$ whenever $G^i(t,\overline{x}(t)) = 0$ for all $t \in [0,T]$ and $i=1,\ldots,m$.
A similar remark is given in \cite[page 330, Remarks (b)]{Vinter_book} for the classical state-constrained optimal control problem for ordinary state equations.
\end{remark}

\begin{remark}\label{Remark_3_4}
Without the state constraints in (\ref{eq_3}), Theorem \ref{Theorem_3_1} holds with $\lambda = 1$, $\xi = 0$, and $\theta = 0$.
This is equivalent to the following statement (see also \cite[Theorem 4.3]{Lin_Yong_SICON_2020} for the case with singular kernels only): If $(\overline{u}(\cdot), \overline{x}(\cdot)) \in \mathcal{U}^p[0,T] \times C([0,T];\mathbb{R}^n)$ is the optimal pair for \textbf{(P)}, then the following conditions hold:
\begin{itemize}
\item Adjoint equation: $p(\cdot) \in L^p([0,T];\mathbb{R}^n)$ is the unique solution of the following backward Volterra integral equation having singular and nonsingular kernels:
\begin{align*}
p(t) & = \int_t^T \frac{f_x(r,t,\overline{x}(t),\overline{u}(t))^\top}{(r-t)^{1-\alpha}} p(r) \dd r - \mathds{1}_{[0,T)} (t) \frac{f_x(T,t,\overline{x}(t),\overline{u}(t))^\top}{(T-t)^{1-\alpha}} h_x(\overline{x}_0,\overline{x}(T))^\top \\
&~~~ + \int_t^T g_x(r,t,\overline{x}(t),\overline{u}(t))^\top p(r) \dd r - g_x(T,t,\overline{x}(t),\overline{u}(t))^\top h_x(\overline{x}_0,\overline{x}(T))^\top \\
&~~~ - l_x(t,\overline{x}(t),\overline{u}(t))^\top,~ \textrm{a.e.}~ t \in [0,T];
\end{align*}
\item Hamiltonian-like maximum condition:
\begin{align*}
&\int_t^T p(r)^\top \frac{f(r,t,\overline{x}(t),\overline{u}(t))}{(r-t)^{1-\alpha}} \dd r - \mathds{1}_{[0,T)}(t) h_x(\overline{x}_0,\overline{x}(T)) \frac{f(T,t,\overline{x}(t),\overline{u}(t))}{(T-t)^{1-\alpha}} \\
&+ \int_t^T p(r)^\top g(r,t,\overline{x}(t),\overline{u}(t)) \dd r - h_x(\overline{x}_0,\overline{x}(T)) g(T,t,\overline{x}(t),\overline{u}(t)) - l(t,\overline{x}(t),\overline{u}(t)) \\
& = \max_{u \in U} \Biggl \{ \int_t^T p(r)^\top \frac{f(r,t,\overline{x}(t),u)}{(r-t)^{1-\alpha}} \dd r - \mathds{1}_{[0,T)}(t) h_x(\overline{x}_0,\overline{x}(T)) \frac{f(T,t,\overline{x}(t),u)}{(T-t)^{1-\alpha}} \\
&~~~ + \int_t^T p(r)^\top g(r,t,\overline{x}(t),u) \dd r - h_x(\overline{x}_0,\overline{x}(T)) g(T,t,\overline{x}(t),u) - l(t,\overline{x}(t),u) \Biggr \},~\textrm{a.e. $ t \in [0,T]$.}
\end{align*}
\end{itemize}
\end{remark}

\begin{remark}
By taking $f \equiv 0$ in Theorem \ref{Theorem_3_1}, we can obtain the maximum principle for classical Volterra integral equations with nonsingular kernels only. Note that Theorem \ref{Theorem_3_1} is different from the classical maximum principles for Volterra integral equations with nonsingular kernels only studied in the existing literature (e.g. \cite[Theorem 1]{Bonnans_SVA_2010} and \cite{Dmitruk_MCRF_2017, Dmitruk_SICON_2014, Medhin_JMAA_1988}), in that Theorem \ref{Theorem_3_1} does not need differentiability of the kernels with respect to the time variables and the adjoint equation in Theorem \ref{Theorem_3_1} is expressed in integral form.
\end{remark}

\section{Examples}\label{Section_5}

In this section, we provide two examples of \textbf{(P)}.

\begin{example}\label{Example_1}
\normalfont
Consider the minimization of the following objective functional
\begin{align*}
J(x_0,u(\cdot)) = \int_0^3 \Bigl [ x(s) + \frac{1}{2} u(s)^2 \Bigr ] \dd s + (x_0 + x(3)),
\end{align*}
subject to the Volterra integral equation with singular and nonsingular kernels given by
\begin{align}
\label{eq_s_4_1}
x(t) = x_0 + \int_0^t \frac{u(s)}{(t-s)^{1-\alpha}} \dd s + \int_0^t u(s) \dd s,~\textrm{a.e. $t \in [0,3]$,}
\end{align}
$ t \\in [0,3]$,}\n\\end{align}\nand the state constraints\n\\begin{align}\n\\label{eq_s_4_2}\n\\begin{cases}\n\t(x_0,x(3)) \\in F=\\{10\\} \\times \\{-16\\}, & \\textrm{(terminal state constraint)}, \\\\\nG(t,x(t)) = - x(t) - \\Bigl (\\frac{t^2}{5} + 20 \\Bigr ) \\leq 0,~ \\forall t \\in [0,3], & \\textrm{(inequality state constraint)}.\n\\end{cases}\n\\end{align}\nWe assume that the control space $U$ is an appropriate sufficiently large compact subset of $\\mathbb{R}^d$ to satisfy Assumption \\ref{Assumption_2_2}.\n\n\nNote that $F$ is singleton, which is closed and convex. Hence, by (\\ref{eq_s_4_2}), we can choose $\\xi = 0$. This implies that the (candidate) optimal state trajectory holds $\\overline{x}_0 = 10$ and $\\overline{x}(3) = -10$. In addition, the transversality condition leads to $\\int_0^3 p(t) \\dd t = 2 \\lambda$. Assume by contradiction that $\\lambda = 0$. Then the adjoint equation holds that $p(t) = \\frac{\\dd \\theta(t)}{\\dd t}$. This implies $\\int_0^3 p(t) \\dd t = \\int_0^3 \\dd \\theta(t) = \\theta(3) - \\theta(0) = \\theta(3) = 0$, which, by the fact that $\\theta(0) = 0$ and $\\theta$ is monotonically nondecreasing, contradicts the nontriviality condition of $\\theta$ as well as the adjoint equation $p$ in Theorem \\ref{Theorem_3_1}. Therefore, $\\lambda \\neq 0$, and we may take the normalized case with $\\lambda = 1$. Based on the preceding discussion and by Theorem \\ref{Theorem_3_1}, the following conditions hold:\n\\begin{itemize}\n\\item Nontriviality and nonnegativity conditions:\n\\begin{itemize}\n\\item $\\lambda = 1$ and $\\theta(\\cdot)\t\\in \\textsc{NBV}([0,3];\\mathbb{R})$ with $\\|\\theta(\\cdot)\\|_{\\textsc{NBV}([0,3];\\mathbb{R})} = \\theta(3) \\geq 0$, $\\theta$ being finite and monotonically nondecreasing on $[0,3]$, and $\\dd \\theta(t) \\geq 0$ for $t \\in [0,3]$;\n\\end{itemize}\n\\item Adjoint equation:\n\\begin{align}\n\\label{eq_s_4_3}\np(t) = - 1 + \\frac{\\dd \\theta(t)}{\\dd t},~ \\textrm{a.e.}~ t \\in [0,3];\n\\end{align}\n\\item Transversality condition: \n\\begin{align}\n\\label{eq_s_4_4}\n\\int_0^3 p(t) \\dd t = \\int_0^3 \\Bigl [ -1 + \\frac{ \\dd \\theta(t)}{ \\dd t} \\Bigr ] \\dd t = -3 + \\theta(3) = 2~ \\Rightarrow~ \\theta(3) = 5 > 0;\n\\end{align}\n\\item Complementary slackness condition:\n\\begin{align}\t\n\\label{eq_s_4_5}\n\t\\int_0^3 \\Bigl [ - \\overline{x}(t) - \\Bigl (\\frac{t^2}{5} + 20 \\Bigr ) \\Bigr ] \\dd \\theta(t) = 0\t;\n\\end{align}\n\\item Hamiltonian-like maximum condition: the first-order optimality condition implies\n\\begin{align}\n\\label{eq_s_4_6}\n\\overline{u}(t)\t= - 1 + \\int_t^3 p(r) \\dd r - \\frac{\\mathds{1}_{[0,3)}(t)}{(3-t)^{1-\\alpha}} + \\int_t^3 \\frac{p(r)}{(r-t)^{1-\\alpha}} \\dd r,~\\textrm{a.e. $ t \\in [0,3]$.}\n\\end{align}\n\\end{itemize}\n\nThe numerical simulation results of Example \\ref{Example_1} with $\\alpha = 0.8$ and $\\alpha = 0.5$ are given in Figures \\ref{Fig_1} and \\ref{Fig_11111}. One can easily observe that for each case, the optimal state trajectory holds the terminal condition as well as the inequality constraint in (\\ref{eq_s_4_2}). In addition, $\\theta(\\cdot) \\in \\textsc{NBV}([0,3];\\mathbb{R})$, where $\\theta$ is finite and monotonically nondecreasing on $[0,3]$ and $\\dd \\theta(t) \\geq 0$ for $t \\in [0,3]$, and the adjoint equation holds $p(\\cdot) \\in L^p([0,3];\\mathbb{R})$. The (candidate) optimal solution is obtained from the Hamiltonian-like maximum condition in (\\ref{eq_s_4_6}). 
Note that the numerical approach that we adopt is as follows (a minimal computational sketch of one pass of this procedure is given after the figures below):
\begin{enumerate}
\setlength{\itemindent}{0.2in}
\item[(s.1)] Given $\theta(0) = 0$ and $\theta(3) > 0$, provide a guess of the measure $\dd \theta$ and then construct $\theta$;
\item[(s.2)] Compute the adjoint equation in (\ref{eq_s_4_3});
\item[(s.3)] Compute the optimal solution in (\ref{eq_s_4_6});
\item[(s.4)] Compute the controlled state equation in (\ref{eq_s_4_1}) under the optimal solution (\ref{eq_s_4_6}), which needs to satisfy the terminal and inequality constraints in (\ref{eq_s_4_2});
\item[(s.5)] Check the complementary slackness condition in (\ref{eq_s_4_5}) and the transversality condition in (\ref{eq_s_4_4});
\item[(s.6)] If the constraints and conditions in (s.4) and (s.5) hold, stop the algorithm. Otherwise, iterate (s.1)-(s.5).
\end{enumerate}
\begin{figure}[t]
\centering
\includegraphics[scale=0.28]{xsolution08.eps}
\includegraphics[scale=0.28]{theta08.eps}
\includegraphics[scale=0.28]{adjoint08.eps}
\includegraphics[scale=0.28]{control08.eps}
\includegraphics[scale=0.28]{inequality08.eps}
\caption{Simulation results of Example \ref{Example_1} with $\alpha = 0.8$.}
\label{Fig_1}
\end{figure}

\begin{figure}[t]
\centering
\includegraphics[scale=0.28]{xsolution05.eps}
\includegraphics[scale=0.28]{theta05.eps}
\includegraphics[scale=0.28]{adjoint05.eps}
\includegraphics[scale=0.28]{control05.eps}
\includegraphics[scale=0.28]{inequality05.eps}
\caption{Simulation results of Example \ref{Example_1} with $\alpha = 0.5$.}
\label{Fig_11111}
\end{figure}
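The following minimal sketch illustrates one pass of (s.1)-(s.4) and the checks in (s.5) for a given guess of the density $\Theta = \dd \theta / \dd t$ from Remark \ref{Remark_3_3}. The uniform grid, the product-integration quadrature, and the two-parameter form of the guess (a hypothetical activation time together with a constant density whose total mass is $\theta(3) = 5$) are illustrative assumptions only; this is not the exact code used to generate Figures \ref{Fig_1} and \ref{Fig_11111}.
\begin{verbatim}
import numpy as np

# One pass of (s.1)-(s.5) for Example 1 (illustrative sketch only).
alpha, T, N = 0.8, 3.0, 600
dt = T / N
t = np.linspace(0.0, T, N + 1)

# (s.1): hypothetical guess of Theta, constant on [t_a, 3] with mass 5
t_a = 2.0
Theta = np.where(t >= t_a, 5.0 / (T - t_a), 0.0)

p = -1.0 + Theta        # (s.2): adjoint equation, p = -1 + d(theta)/dt

def tail_weights(k):    # int_{r_j}^{r_{j+1}} (r - t_k)^(alpha-1) dr, j >= k
    return ((t[k+1:] - t[k])**alpha - (t[k:-1] - t[k])**alpha) / alpha

u = np.empty(N + 1)
for k in range(N):      # (s.3): Hamiltonian-like maximum condition
    u[k] = (-1.0 + dt*np.sum(p[k+1:]) - (T - t[k])**(alpha - 1.0)
            + np.sum(tail_weights(k) * p[k:-1]))
u[N] = u[N - 1]         # the value at t = T is immaterial (a.e. condition)

x = np.empty(N + 1); x[0] = 10.0   # (s.4): state equation forward in time
for k in range(1, N + 1):
    w = ((t[k] - t[:k])**alpha - (t[k] - t[1:k+1])**alpha) / alpha
    x[k] = x[0] + np.sum(w * u[:k]) + dt*np.sum(u[:k])

# (s.5): residuals that should vanish for a correct guess of d(theta)
print(x[N] + 16.0)                            # terminal constraint
print(np.min(x + t**2/5 + 20.0))              # inequality constraint (>= 0)
print(dt*np.sum((-x - (t**2/5 + 20.0)) * Theta))  # complementary slackness
print(dt*np.sum(p) - 2.0)                     # transversality condition
\end{verbatim}
In practice, the guess of $\dd \theta$ must be refined over (s.1)-(s.5) until the printed residuals vanish, which is exactly the iteration described in (s.6).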
\end{example}

\begin{example}
\label{Example_2}
\normalfont
We consider the linear-quadratic problem of \textbf{(P)} without state constraints. The state equation and the objective functional are given by
\begin{align*}
x(t) & = x_0 + \int_0^t \frac{A_1(t,s) x(s) + B_1(t,s) u(s)}{(t-s)^{1-\alpha}} \dd s + \int_0^t \Bigl [ A_2(t,s) x(s) + B_2(t,s) u(s) \Bigr ] \dd s, \\
J(x_0,u(\cdot)) & = \frac{1}{2} \int_0^T \Bigl [ \langle x(s), Q(s) x(s) \rangle + \langle u(s), R(s) u(s) \rangle \Bigr ] \dd s + \frac{1}{2} \langle x(T), M x(T) \rangle,
\end{align*}
where (ii) of Assumption \ref{Assumption_2_1} holds and (see Lemmas \ref{Lemma_B_1}-\ref{Lemma_B_5_2323232} in Appendix \ref{Appendix_B})
\begin{align*}
\begin{cases}
A_1(\cdot,\cdot),A_2(\cdot,\cdot) \in L^{\infty}(\Delta;\mathbb{R}^{n \times n}),~ B_1(\cdot,\cdot),B_2(\cdot,\cdot) \in L^{\infty}(\Delta;\mathbb{R}^{n \times d}), \\
Q(\cdot) \in L^{\infty}([0,T];\mathbb{R}^{n \times n}),~ R(\cdot) \in L^{\infty}([0,T];\mathbb{R}^{d \times d}),~ M \in \mathbb{R}^{n \times n}, \\
Q(t) = Q(t)^\top \geq 0,~ R(t) = R(t)^\top > c I_d~(c > 0),~ M=M^\top,~ \forall t \in [0,T].
\end{cases}
\end{align*}
We further assume that the state space $X$ and the control space $U$ are appropriately chosen, sufficiently large compact subsets of $\mathbb{R}^n$ and $\mathbb{R}^d$, respectively, so that Assumption \ref{Assumption_2_2} is satisfied.

By Remark \ref{Remark_3_4} and the first-order optimality condition, the corresponding optimal solution is as follows:
\begin{align*}
\overline{u}(t) & = R(t)^{-1} \Biggl [ \int_t^T \frac{B_1(r,t)^\top p(r)}{(r-t)^{1-\alpha}} \dd r - \mathds{1}_{[0,T)}(t) \frac{B_1(T,t)^\top M \overline{x}(T)}{(T-t)^{1-\alpha}} \\
&~~~~~ + \int_t^T B_2(r,t)^\top p(r) \dd r - B_2(T,t)^\top M \overline{x}(T) \Biggr ],~\textrm{a.e. $ t \in [0,T]$,}
\end{align*}
where $p$ solves the adjoint equation given by
\begin{align*}
p(t) & = \int_t^T \frac{A_1(r,t)^\top p(r)}{(r-t)^{1-\alpha}} \dd r - \mathds{1}_{[0,T)}(t) \frac{A_1(T,t)^\top M \overline{x}(T)}{(T-t)^{1-\alpha}} \\
&~~~ + \int_t^T A_2(r,t)^\top p(r) \dd r - A_2(T,t)^\top M \overline{x}(T) - Q(t) \overline{x}(t),~\textrm{a.e. $ t \in [0,T]$.}
\end{align*}
Assume that $T=2$, $x_0 = 1$, $A_1 = -1$, $A_2 = 0.2$, $B_1 = 2$, $B_2 = 0.1$, $M=1$, $R=1$, and $Q=0$. By applying the shooting method \cite{Bourdin_MP_2020}, the numerical simulation results are obtained in Figures \ref{Fig_3} and \ref{Fig_323423423}; a minimal sketch of a shooting-type iteration for this forward-backward system is given after the figures.
\begin{figure}[t]
\centering
\includegraphics[scale=0.28]{xsolution_linear.eps}
\includegraphics[scale=0.28]{adjoint_linear.eps}
\includegraphics[scale=0.28]{control_linear.eps}
\caption{Simulation results of Example \ref{Example_2} with $\alpha = 0.5$.}
\label{Fig_3}
\end{figure}

\begin{figure}[t]
\centering
\includegraphics[scale=0.28]{xsolution_linear_001.eps}
\includegraphics[scale=0.28]{adjoint_linear_001.eps}
\includegraphics[scale=0.28]{control_linear_001.eps}
\caption{Simulation results of Example \ref{Example_2} with $\alpha = 0.01$.}
\label{Fig_323423423}
\end{figure}
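Since $Q = 0$ in the above data, the adjoint equation depends on $\overline{x}$ only through the terminal value $\overline{x}(T)$, so the coupled forward-backward system can be solved by shooting on $z := \overline{x}(T)$. The following minimal sketch illustrates this idea with a simple damped fixed-point update of $z$; the grid, quadrature, and damping are illustrative assumptions, and the sketch is not necessarily the variant of the shooting method developed in \cite{Bourdin_MP_2020}.
\begin{verbatim}
import numpy as np

# Shooting-type sketch for Example 2 (scalar data, Q = 0, R = 1).
alpha, T, N = 0.5, 2.0, 400
dt = T / N
t = np.linspace(0.0, T, N + 1)
A1, A2, B1, B2, M = -1.0, 0.2, 2.0, 0.1, 1.0

def weights_tail(k):  # int_{r_j}^{r_{j+1}} (r - t_k)^(alpha-1) dr, j >= k
    return ((t[k+1:] - t[k])**alpha - (t[k:-1] - t[k])**alpha) / alpha

def weights_head(k):  # int_{s_j}^{s_{j+1}} (t_k - s)^(alpha-1) ds, j < k
    return ((t[k] - t[:k])**alpha - (t[k] - t[1:k+1])**alpha) / alpha

z = 1.0                                   # shooting guess for xbar(T)
for it in range(200):
    p = np.empty(N + 1)
    p[N] = -A2 * M * z                    # adjoint value at t = T
    for k in range(N - 1, -1, -1):        # backward adjoint equation;
        # right-endpoint values p[k+1:] keep the recursion explicit
        p[k] = (A1 * np.sum(weights_tail(k) * p[k+1:])
                - (T - t[k])**(alpha - 1.0) * A1 * M * z
                + A2 * dt * np.sum(p[k+1:]) - A2 * M * z)
    u = np.empty(N + 1)
    for k in range(N):                    # first-order condition with R = 1
        u[k] = (B1 * np.sum(weights_tail(k) * p[k+1:])
                - (T - t[k])**(alpha - 1.0) * B1 * M * z
                + B2 * dt * np.sum(p[k+1:]) - B2 * M * z)
    u[N] = -B2 * M * z
    x = np.empty(N + 1); x[0] = 1.0       # forward state equation
    for k in range(1, N + 1):
        x[k] = (x[0] + np.sum(weights_head(k) * (A1 * x[:k] + B1 * u[:k]))
                + dt * np.sum(A2 * x[:k] + B2 * u[:k]))
    if abs(x[N] - z) < 1e-10:
        break
    z = 0.5 * z + 0.5 * x[N]              # damped update of the shooting value
\end{verbatim}
Each iteration solves the backward adjoint equation by product integration, evaluates $\overline{u}$ from the first-order optimality condition, and propagates the state forward; upon convergence, the computed $x(T)$ coincides with the shooting parameter $z$.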
We denote $(X^*,\\|\\cdot\\|_{X^*})$ by the dual space of $(X,\\|\\cdot\\|_{X})$, where $X^*$ is the space of bounded linear functionals on $X$ with the norm given by $\\|\\psi\\|_{X^*} := \\sup_{x \\in X,~ \\|x\\|_{X} \\leq 1} \\langle \\psi,x \\rangle_{X^* \\times X}$. Here, $\\langle \\cdot,\\cdot \\rangle_{X^* \\times X}$ denotes the usual duality paring between $X$ and $X^*$, i.e., $\\langle \\psi,x \\rangle_{X^* \\times X} := \\psi(x)$. Recall that $(X^*,\\|\\cdot\\|_{X^*})$ is also a Banach space. \n\nWe first deal with the terminal state constraints in (\\ref{eq_3}). Recall that $F$ is a nonempty closed convex subsets of $\\mathbb{R}^{2n}$. Let $d_{F} : \\mathbb{R}^{2n} \\rightarrow \\mathbb{R}_{+}$ be the standard Euclidean distance function to $F$ defined by $d_{F}(x) := \\inf_{y \\in F} |x-y|_{\\mathbb{R}^{2n}}$ for $x \\in \\mathbb{R}^{2n}$. Note that $d_{F}(x) = 0$ when $x \\in F$. Then it follows from the projection theorem \\cite[Theorem 2.10]{Ruszczynski_book} that there is a unique $P_{F}(x) \\in F$ with $P_{F}(x) : \\mathbb{R}^{2n} \\rightarrow F \\subset \\mathbb{R}^{2n}$, the projection of $x \\in \\mathbb{R}^{2n}$ onto $F$, such that $d_{F}(x) = \\inf_{y \\in F} |x-y|_{\\mathbb{R}^{2n}} = |x - P_{F}(x)|_{\\mathbb{R}^{2n}}$. By \\cite[Lemma 2.11]{Ruszczynski_book}, $P_{F}(x) \\in F$ is the corresponding projection if and only if $\\langle x - P_{F}(x),y- P_{F}(x) \\rangle_{\\mathbb{R}^{2n} \\times \\mathbb{R}^{2n}} \\leq 0$ for all $y \\in F$, which leads to the characterization of $P_{F}(x)$. In view of \\cite[Definition 2.37]{Ruszczynski_book}, we have $x - P_{F}(x) \\in N_{F}(P_{F}(x))$ for $x \\in \\mathbb{R}^{2n}$, where $N_{F}(x)$ is the normal cone to the convex set $F$ at a point $x \\in \\mathbb{R}^{2n}$ defined by\n\\begin{align}\n\\label{eq_4_1}\nN_{F}(x) := \\{ y \\in \\mathbb{R}^{2n}~|~ \\langle y,y^\\prime - x \\rangle_{\\mathbb{R}^{2n} \\times \\mathbb{R}^{2n}} \\leq 0,~ \\forall y^\\prime \\in F \\}.\t\n\\end{align}\nBased on the distance function $d_F$, the terminal state constraint in (\\ref{eq_3}) can be written as\n\\begin{align*}\nd_F(\\overline{x}_0, \\overline{x}(T;\\overline{x}_0,\\overline{u})) = 0 ~\\Leftrightarrow~ \t\\begin{bmatrix}\n\t\t\\overline{x}_0 \\\\\n\t\t\\overline{x}(T;\\overline{x}_0,\\overline{u}))\t\t \t\t\n \t\\end{bmatrix} \\in F.\n\\end{align*}\n\n\n\\begin{lemma}\\label{Lemma_4_1}\nThe function $d_F(x)^2$ is Fr\\'echet differentiable on $\\mathbb{R}^{2n}$ with the Fr\\'echet differentiation of $d_F(x)^2$ at $x$ given by $D d_F(x)^2(h) = 2 \\langle x-P_F(x),h \\rangle_{\\mathbb{R}^{2n} \\times \\mathbb{R}^{2n}}$ for $h \\in \\mathbb{R}^{2n}$.\n\\end{lemma}\n\\begin{proof}\nNote that\n\\begin{align*}\nd_F(x+h)^2 - d_F(x)^2 & \\leq |x+h - P_F(x)|_{\\mathbb{R}^{2n}}^2 - |x - P_F(x)|_{\\mathbb{R}^{2n}}^2 = 2 \\langle x - P_F(x),h \\rangle + |h|_{\\mathbb{R}^{2n}}^2,\n\\end{align*}\nand by the fact that the projection operator is nonexpansive, i.e., $|P_{F}(x) - P_{F}(x^\\prime) |_{\\mathbb{R}^{2n}} \\leq |x-x^\\prime|_{\\mathbb{R}^{2n}}$, for all $x,x^\\prime \\in \\mathbb{R}^{2n}$ (see \\cite[Theorem 2.13]{Ruszczynski_book}), \n\\begin{align*}\nd_F(x)^2 - d_F(x+h)^2 & \\leq|x - P_F(x+h)|_{\\mathbb{R}^{2n}}^2 - |x+h - P_F(x+h)|_{\\mathbb{R}^{2n}}^2 \\\\\n& = - 2 \\langle x - P_F(x), h \\rangle + 2 \\langle P_F(x+h) - P_F(x), h \\rangle + |h|_{\\mathbb{R}^{2n}}^2 \\\\\n& \\leq - 2 \\langle x - P_F(x), h \\rangle + 3|h|_{\\mathbb{R}^{2n}}^2.\n\\end{align*}\nSince $-3 |h|_{\\mathbb{R}^{2n}} \\leq \\frac{|d_F(x+h)^2 - d_F(x)^2 - 2 
\\langle x - P_F(x), h \\rangle |}{|h|_{\\mathbb{R}^{2n}}}\t \\leq |h|_{\\mathbb{R}^{2n}}$, this completes the proof.\n\\end{proof}\n\n\n\nNow, we consider the inequality state constraint given in (\\ref{eq_3}). Let $\\gamma: C([0,T];\\mathbb{R}^n) \\rightarrow C([0,T];\\mathbb{R}^m)$ be defined by $\\gamma(x(\\cdot;x_0,u)) := (\\gamma_1(x(\\cdot;x_0,u)),\\ldots, \\gamma_m(x(\\cdot;x_0,u))):= G(\\cdot,x(\\cdot;x_0,u)) = \\begin{bmatrix}\n \tG^1(\\cdot,x(\\cdot;x_0,u)) & \\cdots & G^m(\\cdot,x(\\cdot;x_0,u))\n \\end{bmatrix}$. Moreover, we let $S \\subset C([0,T];\\mathbb{R}^m)$ be the nonempty closed convex cone of $C([0,T];\\mathbb{R}^m)$ defined by $S := C([0,T];\\mathbb{R}_{-}^m)$, where $\\mathbb{R}_{-}^m := \\mathbb{R}_{-} \\times \\cdots \\times \\mathbb{R}_{-}$. Note that $S$ has a nonempty interior. Then the inequality state constraint in (\\ref{eq_3}) can be expressed as follows:\n\\begin{align}\n\\label{eq_4_2}\n\\gamma(\\overline{x}(\\cdot;\\overline{x}_0,\\overline{u})) \\in S~ \\Leftrightarrow~ G^i(t,\\overline{x}(t;\\overline{x}_0,\\overline{u})) \\leq 0,~\\forall t \\in [0,T],~ i=1,\\ldots,m.\n\\end{align}\n\nRecall that $G^i$, $i=1,\\ldots,m$, are continuously differentiable in $x$. Then $\\gamma$ is Fr\\'echet differentiable with its Fr\\'echet differentiation at $\\gamma(x(\\cdot)) \\in S$ given by $D\\gamma(x(\\cdot))(w) = G_x(\\cdot,x(\\cdot)) w $ for all $w \\in C([0,T];\\mathbb{R}^n)$ \\cite[page 167]{Flett_book}. The normal cone to $S$ at $x \\in S$ is defined by\n\\begin{align}\n\\label{eq_4_3}\nN_S(x) : = \\{ \\kappa \\in C([0,T]; \\mathbb{R}^m)^*~|~\\langle \\kappa,\\kappa^\\prime - x \\rangle_{C_m^* \\times C_m} \\leq 0,~ \\forall \\kappa^\\prime \\in S \\},\n\\end{align}\nwhere $\\langle \\cdot,\\cdot \\rangle_{C_m^* \\times C_m} := \\langle \\cdot,\\cdot \\rangle_{C([0,T]; \\mathbb{R}^m)^* \\times C([0,T]; \\mathbb{R}^m)}$ stands for the duality paring between $C([0,T]; \\mathbb{R}^m)$ and $C([0,T]; \\mathbb{R}^m)^*$ with $C([0,T]; \\mathbb{R}^m)^*$ being the dual space of $C([0,T]; \\mathbb{R}^m)$.\n\n\\begin{remark}\\label{Remark_4_2_3_5_2_3_2_1}\nNote that $(C([0,T]; \\mathbb{R}^m),\\|\\cdot\\|_{\\infty})$ is a separable Banach space \\cite[Theorem 6.6, page 140]{Conway_2000_book}. Then by \\cite[Theorem 2.18, page 42]{Li_Yong_book}, there exists a norm $\\|\\cdot \\|_{C([0,T]; \\mathbb{R}^m)}$ on $C([0,T]; \\mathbb{R}^m)$, which is equivalent to $\\|\\cdot\\|_{\\infty}$ \\cite[Definition 2.17, page 42]{Li_Yong_book}, such that $(C([0,T]; \\mathbb{R}^m)^*, \\|\\cdot \\|_{C([0,T]; \\mathbb{R}^m)^*})$ is strictly convex, i.e., $\\|x \\|_{C([0,T]; \\mathbb{R}^m)^*} = \\|y \\|_{C([0,T]; \\mathbb{R}^m)^*} = 1$ and $\\|x+y \\|_{C([0,T]; \\mathbb{R}^m)^*} =2 $ imply $x=y$ for $x,y \\in C([0,T]; \\mathbb{R}^m)^*$ \\cite[Definition 2.12, page 41]{Li_Yong_book}.\n\\end{remark}\n\nLet $d_{S}: C([0,T]; \\mathbb{R}^m) \\rightarrow \\mathbb{R}_{+}$ be the distance function to $S$ defined by\n\\begin{align*}\nd_S(x) := \\inf_{y \\in S} \\|x-y\\|_{C([0,T]; \\mathbb{R}^m)}~ \\textrm{for}~x \\in C([0,T]; \\mathbb{R}^m).\n\\end{align*}\nBy definition of $d_S$, (\\ref{eq_4_2}) is equivalent to\n\\begin{align*}\nd_S \\Bigl ( \\gamma(\\overline{x}(\\cdot;\\overline{x}_0,\\overline{u})) \\Bigr ) = 0 ~ \\Leftrightarrow ~ \\gamma \\Bigl (\\overline{x}(\\cdot;\\overline{x}_0,\\overline{u}) \\Bigr ) \\in S.\t\n\\end{align*}\n\n\\begin{lemma}\\label{Lemma_4_2}\nThe distance function $d_S$ is nonexpansive, continuous, and convex. 
\n\\end{lemma}\n\\begin{proof}\nTo simplify the notation, let $\\|\\cdot\\| := \\|\\cdot\\|_{C([0,T]; \\mathbb{R}^m)}$. We fix $x,y \\in S$. Let $\\epsilon > 0$ be given. By definition, there is $\\pi \\in S$ such that $d_S(y) \\geq \\|\\pi-y\\| - \\epsilon$. We then have\n\\begin{align*}\nd_S(x) \\leq \\|x-\\pi\\| \\leq \\|x-y\\| +\\|y-\\pi\\| \\leq \t\\|x-y\\| + d_S(y) + \\epsilon.\n\\end{align*}\nSimilarly, we have $d_S(x) \\geq |\\pi-x\\| - \\epsilon$, and\n\\begin{align*}\t\nd_S(y) \\leq \\|y-\\pi\\| \\leq \\|x-y\\| + \\|x-\\pi\\| \\leq \\|x-y\\| + d_S(x) + \\epsilon.\n\\end{align*}\nSince $\\epsilon$ is arbitrary, $|d_S(x) - d_S(y)| \\leq \\|x-y\\|$ holds, which also implies the continuity of $d_S$.\n\nAs $S$ is convex, we have $(1-\\eta) x + \\eta y \\in S$ for $\\eta \\in [0,1]$. By definition of $d_S$, there are $\\pi_x, \\pi_y \\in S$ such that $\\|\\pi_x - x\\| \\leq d_S(x) + \\epsilon$ and $\\|\\pi_y - y\\| \\leq d_S(y) + \\epsilon$. \nDefine $\\pi := (1-\\eta) \\pi_x + \\eta \\pi_y \\in S$ for $\\eta \\in [0,1]$. It then follows that\n\\begin{align*}\nd_S ((1-\\eta) x + \\eta y) & \\leq \\|\\pi - ((1-\\eta) x + \\eta y) \\| \n\\leq (1-\\eta) d_S(x) + \\eta d_S(y) + \\epsilon.\n\\end{align*}\nSince $\\epsilon$ is arbitrary, $d_S$ is convex. We complete the proof.\n\\end{proof}\n\nWe define the subdifferential of $d_S$ at $x \\in C([0,T];\\mathbb{R}^m)$ by \\cite[page 214]{Rockafellar_book}\n\\begin{align}\n\\label{eq_4_5_6_5_43_3_5_7_3_3}\n\\partial d_S(x) := \\{ y^\\prime \\in C([0,T];\\mathbb{R}^m)^*~|~\\langle y^\\prime , y -x \\rangle_{C_m^* \\times C_m}\t\\leq d_S(y) - d_S(x),~ \\forall y \\in C([0,T];\\mathbb{R}^m) \\}.\n\\end{align}\nBy \\cite[page 27]{Clarke_book} and Lemma \\ref{Lemma_4_2}, since $d_S$ is continuous, $\\partial d_S(x)$ is a nonempty ($\\partial d_S(x) \\neq \\emptyset$), convex, and weak--$^*$ compact subset of $C([0,T];\\mathbb{R}^m)^*$. Moreover, from \\cite[Proposition 2.1.2]{Clarke_book}, it holds that $\\|y^\\prime\\|_{C([0,T];\\mathbb{R}^m)^*} \\leq 1$ for all $y^\\prime \\in \\partial d_S(x)$. \n\nAn important consequence of Remark \\ref{Remark_4_2_3_5_2_3_2_1} and Lemma \\ref{Lemma_4_2} is as follows:\n\\begin{remark}\\label{Remark_4_1}\nSince $\\partial d_S(x)$ is convex, $\\eta y^\\prime + (1-\\eta) y^{\\prime \\prime} \\in \\partial d_S(x)$ for any $y^\\prime,y^{\\prime \\prime} \\in \\partial d_S(x)$ and $\\eta \\in [0,1]$. Consider $\\|\\eta y^\\prime + (1-\\eta) y^{\\prime \\prime} \\|_{C([0,T]; \\mathbb{R}^m)^*} = 1$ for $\\eta \\in [0,1]$. Then $\\|y^\\prime\\|_{C([0,T];\\mathbb{R}^m)^*} = 1$ and $ \\|y^{\\prime \\prime} \\|_{C([0,T];\\mathbb{R}^m)^*} = 1$ when $\\eta = 1$ and $\\eta = 0$, respectively. Moreover, when $\\eta = \\frac{1}{2}$, $\\|y^\\prime + y^{\\prime \\prime} \\|_{C([0,T]; \\mathbb{R}^m)^*} = 2$. Since $(C([0,T]; \\mathbb{R}^m)^*, \\|\\cdot \\|_{C([0,T]; \\mathbb{R}^m)^*})$ is strictly convex, we must have $y^\\prime = y^{\\prime \\prime} \\in C([0,T];\\mathbb{R}^m)^*$, which implies $\\partial d_S(x) =\\{y^\\prime\\}$, i.e., $\\partial d_S(x)$ is a singleton, and $\\|y^\\prime\\|_{C([0,T];\\mathbb{R}^m)^*} = 1$. \n\\end{remark} \n\n\n\\begin{lemma}\\label{Lemma_4_3_1_2_2_1}\nThe distance function $d_S$ is strictly Hadamard differentiable on $C([0,T];\\mathbb{R}^m) \\setminus S$ with the Hadamard differential $D d_S$ satisfying $\\|D d_S(x)\\|_{C([0,T]; \\mathbb{R}^m)^*} = \\|y^\\prime\\|_{C([0,T]; \\mathbb{R}^m)^*} = 1$ for all $x \\in C([0,T];\\mathbb{R}^m) \\setminus S$. 
Consequently, $d_S(x)^2$ is strictly Hadamard differentiable on $C([0,T];\mathbb{R}^m) \setminus S$ with the Hadamard differential given by $D d_S(x)^2 = 2 d_S(x) D d_S(x)$ for $x \in C([0,T];\mathbb{R}^m) \setminus S$. Moreover, $d_S(x)^2$ is Fr\'echet differentiable on $S$ with the Fr\'echet differential being $D d_S(x)^2 = 0 \in C([0,T]; \mathbb{R}^m)^*$ for all $x \in S$.
\end{lemma}

\begin{proof}
The strict Hadamard differentiability of $d_S(x)$ and $d_S(x)^2$ on $C([0,T];\mathbb{R}^m) \setminus S$ follows from Lemma \ref{Lemma_4_2} and Remark \ref{Remark_4_1}, together with \cite[Theorem 3.54]{Mordukhovich_book}. The Fr\'echet differentiability of $d_S(x)^2$ on $S$ with $D d_S(x)^2 = 0 \in C([0,T]; \mathbb{R}^m)^*$ for $x \in S$ follows from the fact that $d_S(x) = 0$ for $x \in S$ and that $d_S$ is nonexpansive, as shown in Lemma \ref{Lemma_4_2}. This completes the proof.
\end{proof}

\subsection{Ekeland Variational Principle}\label{Section_4_2}

Recall that the pair $(\overline{x}(\cdot),\overline{u}(\cdot)) \in C([0,T];\mathbb{R}^n) \times \mathcal{U}^p[0,T]$ is the optimal pair of \textbf{(P)}. We also write $\overline{x}(\cdot;\overline{x}_0,\overline{u}) := \overline{x}(\cdot)$ to emphasize the dependence of the state trajectory $\overline{x}(\cdot)$ on the optimal initial condition and control $(\overline{x}_0, \overline{u}(\cdot)) \in \mathbb{R}^n \times \mathcal{U}^p[0,T]$. Note that the pair $(\overline{x}_0,\overline{x}(\cdot;\overline{x}_0,\overline{u}))$ satisfies the state constraints in (\ref{eq_3}). The optimal cost of \textbf{(P)} under $(\overline{x}(\cdot),\overline{u}(\cdot))$ can be written as $J(\overline{x}_0, \overline{u}(\cdot))$.

Recall the distance functions $d_F$ and $d_S$ in Section \ref{Section_4_1}. For $\epsilon > 0$, we define the penalized objective functional as follows:
\begin{align}
\label{eq_4_5}
J_{\epsilon}(x_0,u(\cdot)) & = \Biggl ( \Bigl ( \bigl [ J(x_0,u(\cdot)) - J(\overline{x}_0, \overline{u}(\cdot)) + \epsilon \bigr ]^+ \Bigr)^2 + d_F \Bigl ( \begin{bmatrix} x_0 \\ x(T) \end{bmatrix} \Bigr )^2 + d_S \Bigl ( \gamma(x(\cdot)) \Bigr )^2 \Biggr )^{\frac{1}{2}}.
\end{align}
We can easily observe that $J_{\epsilon}(\overline{x}_0, \overline{u}(\cdot)) = \epsilon > 0$, i.e., $(\overline{x}_0, \overline{u}(\cdot)) \in \mathbb{R}^n \times \mathcal{U}^p[0,T]$ is an $\epsilon$-optimal solution of (\ref{eq_4_5}). Define the Ekeland metric $\widehat{d}:(\mathbb{R}^n \times \mathcal{U}^p[0,T]) \times (\mathbb{R}^n \times \mathcal{U}^p[0,T]) \rightarrow \mathbb{R}_{+}$ as follows:
\begin{align}
\label{eq_4_6}
\widehat{d} \Bigl ((x_0,u(\cdot)),(\tilde{x}_0,\tilde{u}(\cdot)) \Bigr ) := |x_0 - \tilde{x}_0 | + \overline{d}(u(\cdot),\tilde{u}(\cdot)),
\end{align}
where
\begin{align}
\label{eq_4_7}
\overline{d}(u(\cdot),\tilde{u}(\cdot)) := |\{t \in [0,T]~|~ u(t) \neq \tilde{u}(t) \} |,~ \forall u(\cdot),\tilde{u}(\cdot) \in \mathcal{U}^p[0,T].
\end{align}
It is easy to see that $(\mathbb{R}^n \times \mathcal{U}^p[0,T],\widehat{d})$ is a complete metric space \cite[Lemma 7.2]{Ekeland_JMAA_1974}. By Assumption \ref{Assumption_2_2}, together with Lemmas \ref{Lemma_4_1} and \ref{Lemma_4_2}, $J_{\epsilon}(x_0,u)$ in (\ref{eq_4_5}) is a continuous functional on $(\mathbb{R}^n \times \mathcal{U}^p[0,T],\widehat{d})$.
In view of (\ref{eq_4_5})-(\ref{eq_4_7}), we have
\begin{align}
\label{eq_4_8}
\begin{cases}
J_{\epsilon}(x_0,u(\cdot)) > 0,~ \forall (x_0,u(\cdot)) \in \mathbb{R}^n \times \mathcal{U}^p[0,T], \\
J_{\epsilon}(\overline{x}_0, \overline{u}(\cdot)) = \epsilon \leq \inf_{(x_0,u(\cdot)) \in \mathbb{R}^n \times \mathcal{U}^p[0,T]} J_{\epsilon}(x_0,u(\cdot)) + \epsilon.
\end{cases}
\end{align}
By the Ekeland variational principle \cite{Ekeland_JMAA_1974}, there exists a pair $(x_0^\epsilon,u^\epsilon) \in \mathbb{R}^n \times \mathcal{U}^p[0,T]$ such that
\begin{align}
\label{eq_4_9_1_1_1}
\widehat{d}\Bigl ( (x_0^\epsilon,u^\epsilon(\cdot)),(\overline{x}_0,\overline{u}(\cdot)) \Bigr ) \leq \sqrt{\epsilon},
\end{align}
and
\begin{align}
\label{eq_4_9}
\begin{cases}
J_{\epsilon}(x_0^\epsilon,u^\epsilon(\cdot)) \leq J_{\epsilon}(\overline{x}_0, \overline{u}(\cdot)) = \epsilon, \\
J_{\epsilon}(x_0^\epsilon,u^\epsilon(\cdot)) \leq J_{\epsilon}(x_0,u(\cdot)) + \sqrt{\epsilon} \widehat{d} \Bigl ((x_0^\epsilon,u^\epsilon(\cdot)),(x_0,u(\cdot)) \Bigr ),~ \forall (x_0,u(\cdot)) \in \mathbb{R}^n \times \mathcal{U}^p[0,T].
\end{cases}
\end{align}
As $\widehat{d} ((x_0^\epsilon,u^\epsilon(\cdot)),(x_0^\epsilon,u^\epsilon(\cdot))) = 0$, the above condition implies that the pair $(x_0^\epsilon,u^\epsilon(\cdot)) \in \mathbb{R}^n \times \mathcal{U}^p[0,T]$ is a minimizing solution of the following Ekeland objective functional over $\mathbb{R}^n \times \mathcal{U}^p[0,T]$:
\begin{align}
\label{eq_4_10}
J_{\epsilon}(x_0,u(\cdot)) + \sqrt{\epsilon} \widehat{d} \Bigl ((x_0^\epsilon,u^\epsilon(\cdot)),(x_0,u(\cdot)) \Bigr ).
\end{align}
We observe that (\ref{eq_4_10}) defines an unconstrained control problem. By notation, we write $(x^\epsilon(\cdot),u^\epsilon(\cdot)) := (x^{\epsilon}(\cdot;x_0^\epsilon,u^\epsilon),u^\epsilon(\cdot)) \in C([0,T];\mathbb{R}^n) \times \mathcal{U}^p[0,T]$, where $x^{\epsilon}(\cdot;x_0^\epsilon,u^\epsilon)$ is the state trajectory of (\ref{eq_1}) under $(x_0^\epsilon,u^\epsilon(\cdot)) \in \mathbb{R}^n \times \mathcal{U}^p[0,T]$.

\subsection{Spike Variations and First Variational Equation}\label{Section_4_3}

In the previous subsection, we obtained the $\epsilon$-optimal solution to \textbf{(P)}, which is also the optimal solution to the Ekeland objective functional in (\ref{eq_4_10}). The next step is to derive the necessary condition for $(x_0^\epsilon,u^\epsilon(\cdot)) \in \mathbb{R}^n \times \mathcal{U}^p[0,T]$. We employ the spike variation technique, as $U$ does not have any algebraic structure (hence, it is impossible to use standard (convex) variations).

For $\delta \in (0,1)$, define
\begin{align*}
\mathcal{E}_{\delta} := \{ E \subset [0,T]~|~ |E| = \delta T\},
\end{align*}
where $|E|$ denotes the Lebesgue measure of $E$. For $E_{\delta} \in \mathcal{E}_{\delta}$, we introduce the spike variation associated with $u^\epsilon$, i.e., the optimal solution of (\ref{eq_4_10}):
\begin{align*}
u^{\epsilon,\delta}(s) := \begin{cases}
u^\epsilon(s), & s \in [0,T] \setminus E_{\delta}, \\
u(s), & s \in E_{\delta},
\end{cases}
\end{align*}
where $u(\cdot) \in \mathcal{U}^p[0,T]$. Clearly, $u^{\epsilon,\delta}(\cdot) \in \mathcal{U}^p[0,T]$.
Moreover, by definition of $\\overline{d}$ in (\\ref{eq_4_7}),\n\\begin{align}\n\\label{eq_4_12_345234231}\n\t\\overline{d}(u^{\\epsilon,\\delta}(\\cdot),u^\\epsilon(\\cdot)) \\leq | E_{\\delta}| = \\delta T.\n\\end{align}\nConsider also the variation of the initial state given by $x_0 + \\delta a $, where $a \\in \\mathbb{R}^n$. By notation, let us define the perturbed state equation by\n\\begin{align}\n\\label{eq_5_234235457634545245234523}\nx^{\\epsilon,\\delta}(\\cdot) := x^{\\epsilon,\\delta}(\\cdot; x_0^\\epsilon + \\delta a,u^{\\epsilon,\\delta}) \\in C([0,T];\\mathbb{R}^n).\n\\end{align}\nIn fact, $x^{\\epsilon,\\delta}(\\cdot)$ is the state trajectory of (\\ref{eq_1}) under $(x_0^\\epsilon + \\delta a, u^{\\epsilon,\\delta}(\\cdot)) \\in \\mathbb{R}^n \\times \\mathcal{U}^p[0,T]$. We also recall $(x^\\epsilon(\\cdot),u^\\epsilon(\\cdot)) := (x^{\\epsilon}(\\cdot;x_0^\\epsilon,u^\\epsilon),u^\\epsilon(\\cdot)) \\in C([0,T];\\mathbb{R}^n) \\times \\mathcal{U}^p[0,T]$, where $x^{\\epsilon}$ is the state trajectory of (\\ref{eq_1}) under $(x_0^\\epsilon,u^\\epsilon(\\cdot)) \\in \\mathbb{R}^n \\times \\mathcal{U}^p[0,T]$. Then by (\\ref{eq_4_9}) and (\\ref{eq_4_12_345234231}), we have\n\\begin{align}\n\\label{eq_4_13_1_1_1_2}\n- \\sqrt{\\epsilon} (|a| + T) \\leq \\frac{1}{\\delta} \\Bigl ( J_{\\epsilon}(x_0^\\epsilon + \\delta a, u^{\\epsilon,\\delta}(\\cdot)) - \tJ_{\\epsilon}(x_0^\\epsilon, u^{\\epsilon}(\\cdot)) \\Bigr ).\n\\end{align}\n\n\\begin{lemma}\\label{Lemma_4_4_2342341234234}\nThe following result holds:\n\\begin{align*}\n\\sup_{t \\in [0,T]} \\Bigl | x^{\\epsilon,\\delta}(t) - x^{\\epsilon}(t) - \\delta Z^{\\epsilon}(t) \\Bigr | = o(\\delta),\n\\end{align*}\nwhere $Z^\\epsilon$ is the solution to the first variational equation related to the optimal pair $(x_0^\\epsilon, u^{\\epsilon}(\\cdot)) \\in \\mathbb{R}^n \\times \\mathcal{U}^p[0,T]$ given by\n\\begin{align*}\nZ^{\\epsilon}(t) & = a + \\int_0^t \\Bigl [ \\frac{f_x(t,s,x^\\epsilon(s),u^\\epsilon(s))}{(t-s)^{1-\\alpha}} Z^{\\epsilon}(s) \\dd s \t + \\frac{\\widehat{f}(t,s) }{(t-s)^{1-\\alpha}} \\Bigr ] \\dd s \\\\\n&~~~ + \\int_0^t \\Bigl [ g_x(t,s,x^\\epsilon(s),u^\\epsilon(s)) Z^{\\epsilon}(s) + \\widehat{g}(t,s) \\Bigr ] \\dd s,~ \\textrm{a.e.}~ t \\in [0,T], \\nonumber\n\\end{align*}\nwith for any $u(\\cdot) \\in \\mathcal{U}^p[0,T]$, \n\\begin{align*}\n\\begin{cases}\n\t\\widehat{f}(t,s) := f(t,s,x^\\epsilon(s),u(s)) - f(t,s,x^\\epsilon(s),u^\\epsilon(s)), \\\\\n\t\\widehat{g}(t,s) := g(t,s,x^\\epsilon(s),u(s)) - g(t,s,x^\\epsilon(s),u^\\epsilon(s)). 
\n\end{cases}\t\n\end{align*}\n\end{lemma}\n\n\begin{proof}\nBy definition and (\ref{eq_5_234235457634545245234523}), \n\begin{align*}\n\tx^{\epsilon,\delta}(t) = x(t;x_0^\epsilon + \delta a,u^{\epsilon,\delta}) &= (x_0^{\epsilon} + \delta a) + \int_0^t \frac{f(t,s,x^{\epsilon,\delta}(s),u^{\epsilon,\delta}(s))}{(t-s)^{1-\alpha}} \dd s + \int_0^t g(t,s,x^{\epsilon,\delta}(s),u^{\epsilon,\delta}(s)) \dd s, \\\n\tx^{\epsilon}(t) = x(t;x_0^\epsilon,u^{\epsilon}) & = x_0^{\epsilon} + \int_0^t \frac{f(t,s,x^{\epsilon}(s),u^{\epsilon}(s))}{(t-s)^{1-\alpha}} \dd s + \int_0^t g(t,s,x^{\epsilon}(s),u^{\epsilon}(s)) \dd s.\n\end{align*}\nFor $\delta \in (0,1)$, let\n\begin{align}\n\label{eq_e_1}\nZ^{\epsilon,\delta}(t) := \frac{x^{\epsilon,\delta}(t) - x^{\epsilon}(t)}{\delta},\t~ t \in [0,T],\n\end{align}\nwhere, by the Taylor expansion, $Z^{\epsilon,\delta}$ satisfies\n\begin{align*}\nZ^{\epsilon,\delta}(t) & = a + \int_0^t \Bigl [ \frac{f_{x}^{\epsilon,\delta}(t,s)}{(t-s)^{1-\alpha}} Z^{\epsilon,\delta}(s) + \frac{\mathds{1}_{E_{\delta}}(s)}{\delta} \frac{\widehat{f}(t,s) }{(t-s)^{1-\alpha}} \Bigr ] \dd s \\\n&~~~ + \int_0^t \Bigl [ g_{x}^{\epsilon,\delta}(t,s) Z^{\epsilon,\delta}(s) + \frac{\mathds{1}_{E_{\delta}}(s)}{\delta} \widehat{g}(t,s) \Bigr ] \dd s,~ t \in [0,T]\n\end{align*}\nwith $f_{x}^{\epsilon,\delta}$ and $g_{x}^{\epsilon,\delta}$ defined by\n\begin{align*}\nf_{x}^{\epsilon,\delta}(t,s) & := \int_0^1 f_x(t,s,x^{\epsilon}(s) + r (x^{\epsilon,\delta}(s) - x^{\epsilon}(s)),u^{\epsilon,\delta}(s)) \dd r, \\\ng_{x}^{\epsilon,\delta}(t,s) & := \int_0^1 g_x(t,s,x^{\epsilon}(s) + r (x^{\epsilon,\delta}(s) - x^{\epsilon}(s)),u^{\epsilon,\delta}(s)) \dd r.\n\end{align*}\n\nBy Assumptions \ref{Assumption_2_1} and \ref{Assumption_2_2}, we have\n\begin{align}\n\label{eq_4_16_4534345345}\n\begin{cases}\n|\widehat{f}(t,s)| + |\widehat{g}(t,s)| \leq 4 K_0(s) + 4 K(s)|x^\epsilon(s)| + K(s)(\rho(u(s),u_0) + \rho(u^\epsilon(s),u_0)) =: \widetilde{\psi}(s), \\\n|f_x^{\epsilon,\delta}(t,s)| + |g_x^{\epsilon,\delta}(t,s)| \leq K(s).\n\end{cases}\n\end{align}\nLet $q > \frac{1}{\alpha}$, and replace $p$ by $q$ in Lemma \ref{Lemma_A_5}. Recall $L^{l+}([0,T];\mathbb{R}^n) := \cup_{r > l} L^{r}([0,T];\mathbb{R}^n)$ for $1 \leq l < \infty$, and the $L^p$-spaces of this paper are induced by the finite measure on $([0,T],\mathcal{B}([0,T]))$. Since $p > \alpha p > 1$, it holds that $K(\cdot) \in L^{\frac{p}{\alpha p - 1} + } ([0,T];\mathbb{R}) \subset L^{ \frac{p}{p-1} +}([0,T];\mathbb{R})$. Hence, we can choose $q$ so that $K(\cdot) \in L^q([0,T];\mathbb{R}) \subset L^{\frac{p}{p-1}}([0,T];\mathbb{R})$; moreover, since $x^{\epsilon}(\cdot) \in C([0,T];\mathbb{R}^n)$, we also have $x^{\epsilon}(\cdot) \in L^{\frac{pq}{q-p}}([0,T];\mathbb{R}^n)$.
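Before applying H\"older's inequality below, it is worth recording the exponent bookkeeping explicitly (an elementary verification included for the reader's convenience): the exponents $\frac{q}{p}$ and $\frac{q}{q-p}$ are conjugate, since\n\begin{align*}\n\frac{p}{q} + \frac{q-p}{q} = 1,\n\end{align*}\nso that H\"older's inequality with this pair, applied to $|K(\cdot)|^p$ and $|x^{\epsilon}(\cdot)|^p$, produces exactly the factors $\|K(\cdot)\|_{L^q}$ and $\|x^{\epsilon}(\cdot)\|_{L^{\frac{pq}{q-p}}}$ appearing in the next display.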
It then follows from H\"older's inequality that\n\begin{align*}\n\Bigl ( \int_0^T |K(s)|^p |x^{\epsilon}(s)|^p \dd s \Bigr)^{\frac{1}{p}} \leq \Bigl ( \int_0^T |K(s)|^q \dd s \Bigr )^{\frac{1}{q}} \Bigl ( \int_0^T |x^{\epsilon}(s)|^{\frac{pq}{q-p}} \dd s \Bigr )^{\frac{q-p}{qp}} & < \infty, \\\n\Bigl ( \int_0^T |K(s)|^p (\rho(u(s),u_0) + \rho(u^\epsilon(s),u_0))^p \dd s \Bigr)^{\frac{1}{p}} &< \infty.\n\end{align*}\nTherefore, as $K_0(\cdot) \in L^{\frac{1}{\alpha} + } ([0,T];\mathbb{R}) \subset L^{p+} ([0,T];\mathbb{R})$, $\widetilde{\psi}(\cdot) \in L^p([0,T];\mathbb{R}) \subset L^{\frac{q}{q-1}}([0,T];\mathbb{R})$. \n\nBased on Assumption \ref{Assumption_2_1} and (\ref{eq_4_16_4534345345}), we can show that\n\begin{align}\t\n\label{eq_e_2}\n|x^{\epsilon,\delta}(t) - x^{\epsilon}(t) | \n& \leq b(t) + \int_0^t \frac{K(s)}{(t-s)^{1-\alpha}} |x^{\epsilon,\delta}(s) - x^{\epsilon}(s) |\dd s + \int_0^t K(s) |x^{\epsilon,\delta}(s) - x^{\epsilon}(s) |\dd s, \n\end{align}\nwhere\n\begin{align*}\nb(t) = |\delta a |_{\mathbb{R}^n} + \int_0^t \mathds{1}_{E_{\delta}}(s) \frac{\widetilde{\psi}(s)} {(t-s)^{1-\alpha}} \dd s + \int_0^t \mathds{1}_{E_{\delta}}(s) \widetilde{\psi}(s) \dd s. \n\end{align*}\nWe let $\widetilde{\psi}(t,\cdot) := \widetilde{\psi}(\cdot)$ in (\ref{eq_4_16_4534345345}). As $\widetilde{\psi}(0,\cdot) \in L^p([0,T];\mathbb{R}) \subset L^{\frac{q}{q-1}}([0,T];\mathbb{R})$, by Lemmas \ref{Lemma_A_3} and \ref{Lemma_A_4} (and using Assumption \ref{Assumption_2_1}), we have $b(\cdot) \in L^p([0,T];\mathbb{R}) \subset L^{\frac{q}{q-1}}([0,T];\mathbb{R})$. Note also that we can choose $q$ so that $x^{\epsilon}(\cdot) \in L^p([0,T];\mathbb{R}^n) \subset L^{\frac{q}{q-1}}([0,T];\mathbb{R}^n)$ and $|x^{\epsilon,\delta}(\cdot) - x^{\epsilon}(\cdot) |_{\mathbb{R}^n} \in L^p([0,T];\mathbb{R}) \subset L^{\frac{q}{q-1}}([0,T];\mathbb{R})$. \nIn addition, from Lemmas \ref{Lemma_A_3} and \ref{Lemma_A_4}, there is a constant $C \geq 0$ such that\n\begin{align*}\t\n\Biggl | \int_0^t \mathds{1}_{E_{\delta}}(s) \widetilde{\psi}(s) \dd s \Biggr | + \Biggl | \int_0^t \mathds{1}_{E_{\delta}}(s) \frac{\widetilde{\psi}(s)} {(t-s)^{1-\alpha}} \dd s \Biggr | \leq C |E_{\delta}|^{\frac{1}{q}}.\n\end{align*}\nThen applying Lemma \ref{Lemma_A_5} to (\ref{eq_e_2}) yields\n\begin{align}\n\label{eq_5_4564563452342342342342342}\n|x^{\epsilon,\delta}(t) - x^{\epsilon}(t) |_{\mathbb{R}^n} & \leq b(t) + C \int_0^t \frac{K(s)}{(t-s)^{1-\alpha}} b(s) \dd s + C \int_0^t K(s) b(s) \dd s \\\n& \leq C \Bigl ( |\delta a|_{\mathbb{R}^n} + |E_{\delta}|^{\frac{1}{q}} \Bigr )~ \rightarrow 0,~ \textrm{as $\delta \downarrow 0$, for all $ t\in [0,T]$.} \nonumber\n\end{align}\n\nOn the other hand, since the equation for $Z^{\epsilon}$ is linear, by Lemma \ref{Lemma_2_1} (see also the results in Appendix \ref{Appendix_B}), it admits a unique solution in $C([0,T];\mathbb{R}^n)$.
Hence, as\n\begin{align*}\n|Z^{\epsilon}(t)| \leq |a| + \t\int_0^t \frac {K(s)}{(t-s)^{1-\alpha}} |Z^{\epsilon}(s)| \dd s + \int_0^t \frac{\widetilde{\psi}(s)}{(t-s)^{1-\alpha}} \dd s + \int_0^t K(s) |Z^{\epsilon}(s)| \dd s + \int_0^t \widetilde{\psi}(s) \dd s,\n\end{align*}\nwe use Lemmas \ref{Lemma_A_3}-\ref{Lemma_A_5} to get\n\begin{align}\n\label{eq_e_3}\n|Z^{\epsilon}(t)| & \leq \hat{b}(t) + C \int_0^t\t\frac{K(s) }{(t-s)^{1-\alpha}} \hat{b}(s) \dd s + C \int_0^t K(s) \hat{b}(s) \dd s \leq C \Bigl ( |a|_{\mathbb{R}^n} + \|\widetilde{\psi}(\cdot)\|_{L^p([0,T];\mathbb{R})} \Bigr ),\n\end{align}\nwhere $\hat{b}(t) := \t|a|_{\mathbb{R}^n} + \int_0^t \frac{\widetilde{\psi}(s)}{(t-s)^{1-\alpha}} \dd s + \int_0^t \widetilde{\psi}(s) \dd s$.\n\nWe obtain\n\begin{align}\n\label{eq_4_17_2343234112}\nZ^{\epsilon,\delta}(t)\t- Z^{\epsilon}(t) & = \int_0^t \frac{f_x^{\epsilon,\delta}(t,s)}{(t-s)^{1-\alpha}} \Bigl [ Z^{\epsilon,\delta}(s) - Z^{\epsilon}(s) \Bigr ] \dd s + \int_0^t \Bigl ( \frac{\mathds{1}_{E_{\delta}}(s)}{\delta} - 1 \Bigr ) \frac{\widehat{f}(t,s)}{(t-s)^{1-\alpha}} \dd s \\\n&~~~ + \int_0^t \frac{f_x^{\epsilon,\delta}(t,s) - f_x(t,s,x^{\epsilon}(s),u^{\epsilon}(s))}{(t-s)^{1-\alpha}} Z^{\epsilon}(s) \dd s \nonumber\\\n&~~~ + \int_0^t g_x^{\epsilon,\delta}(t,s) \Bigl [ Z^{\epsilon,\delta}(s) - Z^{\epsilon}(s) \Bigr ] \dd s + \int_0^t \Bigl ( \frac{\mathds{1}_{E_{\delta}}(s)}{\delta} - 1 \Bigr ) \widehat{g}(t,s) \dd s \nonumber\\\n&~~~ + \int_0^t \Bigl [ g_x^{\epsilon,\delta}(t,s) - g_x(t,s,x^{\epsilon}(s),u^{\epsilon}(s)) \Bigr ] Z^{\epsilon}(s) \dd s,~ t \in [0,T]. \nonumber\n\end{align}\nNotice that\n\begin{align*}\n\Biggl | \frac{f_x^{\epsilon,\delta}(t,s) - f_x(t,s,x^{\epsilon}(s),u^{\epsilon}(s))}{(t-s)^{1-\alpha}} Z^{\epsilon}(s) \Biggr |_{\mathbb{R}^n} & \leq \frac{4 K(s)}{(t-s)^{1-\alpha}} |Z^{\epsilon}(s)|,~ \forall s \in [0,t), \\\n\Biggl | \Bigl [ g_x^{\epsilon,\delta}(t,s) - g_x(t,s,x^{\epsilon}(s),u^{\epsilon}(s)) \Bigr ] Z^{\epsilon}(s) \Biggr |_{\mathbb{R}^n} & \leq 4 K(s) |Z^{\epsilon}(s)|,~ \forall s \in [0,t],\n\end{align*}\nwhere $\lim_{\delta \downarrow 0} |f_x^{\epsilon,\delta}(t,s) - f_x(t,s,x^{\epsilon}(s),u^{\epsilon}(s))| = 0$ and $\lim_{\delta \downarrow 0} |g_x^{\epsilon,\delta}(t,s) - g_x(t,s,x^{\epsilon}(s),u^{\epsilon}(s))| = 0$.
In addition, using (\ref{eq_e_3}), we get\n\begin{align*}\n\int_0^T\t\frac{4 K(s)}{(t-s)^{1-\alpha}} |Z^{\epsilon}(s)| \dd s < \infty,~ \int_0^T 4 K(s) |Z^{\epsilon}(s)| \dd s < \infty.\n\end{align*}\n\nFor convenience, define\n\begin{align*}\t\nb^{(1,1)}(t) & := \int_0^t \frac{f_x^{\epsilon,\delta}(t,s) - f_x(t,s,x^{\epsilon}(s),u^{\epsilon}(s))}{(t-s)^{1-\alpha}} Z^{\epsilon}(s) \dd s, \\\nb^{(2,1)}(t) & := \int_0^t \Bigl [ g_x^{\epsilon,\delta}(t,s) - g_x(t,s,x^{\epsilon}(s),u^{\epsilon}(s)) \Bigr ] Z^{\epsilon}(s) \dd s, \\\nb^{(1,2)}(t) & := \int_0^t \Bigl ( \frac{\mathds{1}_{E_{\delta}}(s)}{\delta} - 1 \Bigr ) \frac{\widehat{f}(t,s)}{(t-s)^{1-\alpha}} \dd s, \\\nb^{(2,2)}(t) & := \int_0^t \Bigl ( \frac{\mathds{1}_{E_{\delta}}(s)}{\delta} - 1 \Bigr ) \widehat{g}(t,s) \dd s.\n\end{align*}\nBy the dominated convergence theorem, it follows that\n\begin{align*}\t\n\lim_{\delta \downarrow 0} b^{(1,1)}(t) = 0, ~ \lim_{\delta \downarrow 0} b^{(2,1)}(t) = 0,~ \forall t \in [0,T].\n\end{align*}\nIn addition, by letting\n\begin{align*}\n\phi(t,s) = \widehat{g}(t,s),~~~~ \psi(t,s) = \widehat{f}(t,s),\n\end{align*} \nand then invoking Lemmas \ref{Lemma_D_1} and \ref{Lemma_D_2} in Appendix \ref{Appendix_D} (by Assumptions \ref{Assumption_2_1} and \ref{Assumption_2_2}, together with Remark \ref{Remark_B_1}, $\psi$ satisfies (\ref{eq_d_1}) in Appendix \ref{Appendix_D}), for any $\delta \in (0,1)$, there exists an $E_{\delta} \in \mathcal{E}_{\delta}$ such that\n\begin{align*}\n|b^{(1,2)}(t)|_{\mathbb{R}^n} \leq \delta,~ |b^{(2,2)}(t)|_{\mathbb{R}^n} & \leq \delta,~ \forall t \in [0,T].\n\end{align*}\n\nWith $b^{(1)}(\cdot) := b^{(1,1)}(\cdot) + b^{(2,1)}(\cdot)$ and $b^{(2)}(\cdot) := b^{(1,2)}(\cdot) + b^{(2,2)}(\cdot)$ in (\ref{eq_4_17_2343234112}), we then have\n\begin{align*}\n|Z^{\epsilon,\delta}(t)\t- Z^{\epsilon}(t)| & \leq b^{(1)}(t) + b^{(2)}(t) + \int_0^t \frac{K(s)}{(t-s)^{1-\alpha}} \bigl | Z^{\epsilon,\delta}(s) - Z^{\epsilon}(s) \bigr | \dd s \\\n&~~~ + \int_0^t K(s) \bigl | Z^{\epsilon,\delta}(s) - Z^{\epsilon}(s) \bigr | \dd s,~ t \in [0,T],\n\end{align*}\nand by applying the same technique as above and using Lemma \ref{Lemma_A_5}, \n\begin{align*}\n|Z^{\epsilon,\delta}(t)\t- Z^{\epsilon}(t)| & \leq b^{(1)}(t) + b^{(2)}(t) + C \int_0^t \frac{K(s)}{(t-s)^{1-\alpha}} \Bigl [ b^{(1)}(s) + b^{(2)}(s) \Bigr ] \dd s \\\n&~~~ + C \int_0^t K(s) \Bigl [ b^{(1)}(s) + b^{(2)}(s) \Bigr ] \dd s,~ t \in [0,T].\n\end{align*}\nHence, the dominated convergence theorem implies that\n\begin{align*}\t\n\lim_{\delta \downarrow 0} |Z^{\epsilon,\delta}(t)\t- Z^{\epsilon}(t)|_{\mathbb{R}^n} = 0,~ \forall t \in [0,T].\n\end{align*}\nBy definition of $Z^{\epsilon,\delta}$ in (\ref{eq_e_1}), we have the desired result. This completes the proof.\t\n\end{proof}\n\n\n\n\subsection{Consequences of the Ekeland Variational Principle: Passing to the Limit and the Second Variational Equation}\label{Section_4_4}\n\nWe recall $Z^{\epsilon,\delta}(\cdot) := \frac{x^{\epsilon,\delta}(\cdot) - x^{\epsilon}(\cdot)}{\delta}$ defined in (\ref{eq_e_1}).
Based on the Taylor expansion, \n\begin{align*}\n& \frac{1}{\delta} \Bigl ( J(x_0^\epsilon + \delta a, u^{\epsilon,\delta}(\cdot)) - \tJ(x_0^\epsilon, u^{\epsilon}(\cdot)) \Bigr ) \\\n& = \frac{1}{\delta} \Biggl ( \int_0^T l(s,x^{\epsilon,\delta}(s),u^{\epsilon,\delta}(s)) \dd s + h(x_0^\epsilon + \delta a, x^{\epsilon,\delta}(T)) - \int_0^T l(s,x^{\epsilon}(s),u^{\epsilon}(s)) \dd s - h(x_0^\epsilon, x^{\epsilon}(T)) \Biggr ) \\\n& = \int_0^T l_x^{\epsilon,\delta}(s) Z^{\epsilon,\delta}(s) \dd s + \int_0^T \frac{\mathds{1}_{E_{\delta}}(s)}{\delta} \widehat{l}(s) \dd s + h_{x_0}^{\epsilon,\delta}(T) a + h_{x}^{\epsilon,\delta}(T) Z^{\epsilon,\delta}(T),\n\end{align*}\nwhere \n\begin{align*}\n\widehat{l}(s) & := l(s,x^\epsilon(s),u(s)) - l(s,x^\epsilon(s),u^\epsilon(s)), \\\nl_x^{\epsilon,\delta}(s) & := \int_0^1 l_x(s,x^{\epsilon}(s) + r (x^{\epsilon,\delta}(s) - x^{\epsilon}(s)),u^{\epsilon,\delta}(s)) \dd r,\n\\\nh_{x_0}^{\epsilon,\delta}(T) & := \int_0^1 \n \th_{x_0}(x_0^\epsilon + r \delta a, x^{\epsilon}(T) + r (x^{\epsilon,\delta}(T)-x^{\epsilon}(T))) \dd r, \\\nh_{x}^{\epsilon,\delta}(T) & := \int_0^1 \n h_x(x_0^\epsilon + r \delta a, x^{\epsilon}(T) + r (x^{\epsilon,\delta}(T)-x^{\epsilon}(T))) \dd r.\n\end{align*}\n\nLet us define\n\begin{align*}\n\widehat{Z}^{\epsilon}(T) & = \int_0^T l_x(s,x^{\epsilon} (s),u^{\epsilon} (s)) Z^{\epsilon}(s) \dd s + \int_0^T \widehat{l}(s) \dd s + \n \th_{x_0}(x_0^{\epsilon},x^{\epsilon}(T))a + h_x(x_0^{\epsilon},x^{\epsilon}(T)) \tZ^{\epsilon}(T).\n\end{align*}\nBy definition of $J$ in (\ref{eq_2}),\n\begin{align*}\n& \frac{1}{\delta} \Bigl ( J(x_0^\epsilon + \delta a, u^{\epsilon,\delta}(\cdot)) - \tJ(x_0^\epsilon, u^{\epsilon}(\cdot)) \Bigr ) - \t\widehat{Z}^{\epsilon}(T) \\\n& = \int_0^T l_x^{\epsilon,\delta}(s) \Bigl [ Z^{\epsilon,\delta}(s) - Z^{\epsilon}(s) \Bigr ] \dd s + \int_0^T \Bigl [ l_x^{\epsilon,\delta}(s) - l_x(s,x^{\epsilon} (s),u^{\epsilon} (s)) \Bigr ] Z^{\epsilon}(s) \dd s \\\n&~~~ + \int_0^T \Bigl ( \frac{\mathds{1}_{E_{\delta}}(s)}{\delta} - 1 \Bigr ) \widehat{l}(s) \dd s + \Bigl [ h_{x_0}^{\epsilon,\delta}(T) - h_{x_0}(x_0^\epsilon, x^\epsilon(T)) \Bigr ] a \\\n&~~~ + h_x^{\epsilon,\delta}(T)\Bigl [ Z^{\epsilon,\delta}(T) - Z^{\epsilon}(T) \Bigr ] + \Bigl [ h_x^{\epsilon,\delta}(T) - h_x(x_0^\epsilon, x^{\epsilon}(T)) \Bigr ] Z^{\epsilon}(T).\n\end{align*}\nNotice that $\lim_{\delta \downarrow 0} |Z^{\epsilon,\delta}(t)\t- Z^{\epsilon}(t)| = 0$ for all $t \in [0,T]$ by Lemma \ref{Lemma_4_4_2342341234234}.
Moreover, with $\\phi(t,s) = \\widehat{l}(s)$ in Lemma \\ref{Lemma_D_1} of Appendix \\ref{Appendix_D}, for any $\\delta \\in (0,1)$, there exists an $E_{\\delta} \\in \\mathcal{E}_{\\delta}$ such that \n\\begin{align*}\t\n\\Biggl | \\int_0^t \\Bigl ( \\frac{1}{\\delta} \\mathds{1}_{E_{\\delta}}(s) - 1 \\Bigr ) \\widehat{l}(s) \\dd s \\Biggr | & \\leq \\delta,~ \\forall t \\in [0,T].\n\\end{align*}\nHence, by using a similar technique of Lemma \\ref{Lemma_4_4_2342341234234}, we can show that\n\\begin{align}\n\\label{eq_4_15}\n\\lim_{\\delta \\downarrow 0} \\Biggl | \t\\frac{1}{\\delta} \\Bigl ( J(x_0^\\epsilon + \\delta a, u^{\\epsilon,\\delta}) - \tJ(x_0^\\epsilon, u^{\\epsilon}) \\Bigr ) - \t\\widehat{Z}^{\\epsilon}(T) \\Biggr | = 0,\n\\end{align}\nwhich is equivalent to \n\\begin{align}\n\\label{eq_4_16}\n\\Bigl | J(x_0^\\epsilon + \\delta a, u^{\\epsilon,\\delta}(\\cdot)) - \tJ(x_0^\\epsilon, u^{\\epsilon}(\\cdot)) - \t\\delta \\widehat{Z}^{\\epsilon}(T) \\Bigr |\t= o(\\delta).\n\\end{align}\n\n\nNow, from (\\ref{eq_4_13_1_1_1_2}), \n\\begin{align}\n\\label{eq_4_17_1_1_1_1}\n-\\sqrt{\\epsilon}(|a| + T) & \\leq \\frac{1}{\\delta} \\Bigl ( J_{\\epsilon}(x_0^\\epsilon + \\delta a, u^{\\epsilon,\\delta}(\\cdot)) - \tJ_{\\epsilon}(x_0^\\epsilon, u^{\\epsilon}(\\cdot)) \\Bigr ) \\\\\n& = \\frac{1}{J_{\\epsilon}(x_0^\\epsilon + \\delta a, u^{\\epsilon,\\delta}(\\cdot)) + \tJ_{\\epsilon}(x_0^\\epsilon, u^{\\epsilon}(\\cdot))} \\nonumber \\\\\n&~~~ \\times \\frac{1}{\\delta} \\Biggl ( \\Bigl ( \\bigl [ J(x_0^\\epsilon + \\delta a, u^{\\epsilon,\\delta}(\\cdot)) - J(\\overline{x}_0, \\overline{u}(\\cdot)) + \\epsilon \\bigr ]^+ \\Bigr )^2 - \\Bigl ( \\bigl [ J(x_0^\\epsilon, u^{\\epsilon}(\\cdot)) - J(\\overline{x}_0, \\overline{u}(\\cdot)) + \\epsilon \\bigr ]^+ \\Bigr )^2 \\nonumber \\\\\n&~~~~~~~ + d_F \\Bigl ( \\begin{bmatrix}\nx_0^\\epsilon + \\delta a \\\\\nx^{\\epsilon,\\delta}(T)\t\n\\end{bmatrix} \\Bigr )^2 - d_F \\Bigl ( \\begin{bmatrix}\nx_0^\\epsilon \\\\\nx^{\\epsilon}(T)\t\n\\end{bmatrix} \\Bigr )^2 + d_S \\Bigl ( \\gamma(x^{\\epsilon,\\delta}(\\cdot)) \\Bigr )^2 - d_S \\Bigl ( \\gamma(x^{\\epsilon}(\\cdot)) \\Bigr )^2 \\Biggr ). 
\nonumber\n\end{align}\n\nBy continuity of $J_{\epsilon}$ on $(\mathbb{R}^n \times \mathcal{U}^p[0,T],\widehat{d})$ and (\ref{eq_4_16}), it follows that $\lim_{\delta \downarrow 0} J_{\epsilon}(x_0^\epsilon + \delta a, u^{\epsilon,\delta}(\cdot)) = J_{\epsilon}(x_0^\epsilon, u^{\epsilon}(\cdot))$, which leads to\n\begin{align*}\n\lim_{\delta \downarrow 0} \Bigl \{ J_{\epsilon}(x_0^\epsilon + \delta a, u^{\epsilon,\delta}(\cdot)) + \tJ_{\epsilon}(x_0^\epsilon, u^{\epsilon}(\cdot))\t\Bigr \} = 2 J_{\epsilon}(x_0^\epsilon, u^{\epsilon}(\cdot)).\n\end{align*}\nIn view of (\ref{eq_4_15}) and Lemma \ref{Lemma_4_4_2342341234234},\n\begin{align*}\n& \frac{1}{\delta} \Biggl ( \Bigl ( \bigl [ J(x_0^\epsilon + \delta a, u^{\epsilon,\delta}(\cdot)) - J(\overline{x}_0, \overline{u}(\cdot)) + \epsilon \bigr ]^+ \Bigr )^2 - \Bigl ( \bigl [ J(x_0^\epsilon, u^{\epsilon}(\cdot)) - J(\overline{x}_0, \overline{u}(\cdot)) + \epsilon \bigr ]^+ \Bigr )^2 \Biggr ) \\\n& = \Biggl ( \bigl [ J(x_0^\epsilon + \delta a, u^{\epsilon,\delta}(\cdot)) - J(\overline{x}_0, \overline{u}(\cdot)) + \epsilon \bigr ]^+ + \bigl [ J(x_0^\epsilon, u^{\epsilon}(\cdot)) - J(\overline{x}_0, \overline{u}(\cdot)) + \epsilon \bigr ]^+ \Biggr ) \\\n&~~~~~~~ \times \frac{1}{\delta} \Biggl ( \bigl [ J(x_0^\epsilon + \delta a, u^{\epsilon,\delta}(\cdot)) - J(\overline{x}_0, \overline{u}(\cdot)) + \epsilon \bigr ]^+ - \bigl [ J(x_0^\epsilon, u^{\epsilon}(\cdot)) - J(\overline{x}_0, \overline{u}(\cdot)) + \epsilon \bigr ]^+ \Biggr )\n\\\n& \rightarrow 2 \bigl [ J(x_0^\epsilon, u^{\epsilon}(\cdot)) - J(\overline{x}_0, \overline{u}(\cdot)) + \epsilon \bigr ]^+ \widehat{Z}^{\epsilon}(T),~ \textrm{as $\delta \downarrow 0$.}\n\end{align*}\nLet us define (since $J_{\epsilon}(x_0^\epsilon,u^\epsilon(\cdot)) > 0$ by (\ref{eq_4_8}))\n\begin{align}\n\label{eq_4_17}\n\lambda^{\epsilon} := \t\frac{\bigl [ J(x_0^\epsilon, u^{\epsilon}(\cdot)) - J(\overline{x}_0, \overline{u}(\cdot)) + \epsilon \bigr ]^+}{J_{\epsilon}(x_0^\epsilon, u^{\epsilon}(\cdot))} \geq 0.\n\end{align}\n\nBy Lemmas \ref{Lemma_4_1} and \ref{Lemma_4_4_2342341234234}, and the definition of Fr\'echet differentiability, as $\delta \downarrow 0$,\n\begin{align*}\t\n& \frac{1}{\delta} \Biggl ( d_F \Bigl ( \begin{bmatrix}\n\tx_0^\epsilon + \delta a \\\n\tx^{\epsilon,\delta}(T)\n\end{bmatrix} \Bigr )^2 - d_F \Bigl ( \begin{bmatrix}\n\tx_0^\epsilon \\\n\tx^{\epsilon}(T)\n\end{bmatrix} \Bigr )^2 \n \Biggr ) \rightarrow 2 \Biggl \langle \begin{bmatrix}\n\tx_0^\epsilon \\\n\tx^{\epsilon}(T)\n\end{bmatrix} - P_F \Bigl (\begin{bmatrix}\n\tx_0^\epsilon \\\n\tx^{\epsilon}(T)\n\end{bmatrix} \Bigr ), \begin{bmatrix}\na \\\nZ^{\epsilon}(T)\n \end{bmatrix} \Biggr \rangle_{\mathbb{R}^{2n} \times \mathbb{R}^{2n}},\n\end{align*}\nwhere $P_F: \mathbb{R}^{2n} \rightarrow F \subset \mathbb{R}^{2n}$ is the projection operator defined in Section \ref{Section_4_1}.
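The last convergence rests on the standard differentiability property of the squared distance function, which we recall for the reader's convenience (cf. Lemma \ref{Lemma_4_1}): whenever $d_F^2$ is Fr\'echet differentiable at a point $x \in \mathbb{R}^{2n}$ with a well-defined projection $P_F(x)$, its gradient is\n\begin{align*}\n\nabla \bigl ( d_F^2 \bigr ) (x) = 2 \bigl ( x - P_F(x) \bigr ),\n\end{align*}\nwhich, evaluated at $x = \begin{bmatrix} x_0^\epsilon \\ x^{\epsilon}(T) \end{bmatrix}$ in the direction $\begin{bmatrix} a \\ Z^{\epsilon}(T) \end{bmatrix}$, produces exactly the inner product in the display above.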
Notice that by the statement in Section \ref{Section_4_1}, \n\begin{align*}\nd_F\t\Bigl (\begin{bmatrix}\n\tx_0^\epsilon \\\n\tx^{\epsilon}(T)\n\end{bmatrix} \Bigr ) = \Biggl | \begin{bmatrix}\n\tx_0^\epsilon \\\n\tx^{\epsilon}(T)\n\end{bmatrix} - P_F \Bigl (\begin{bmatrix}\n\tx_0^\epsilon \\\n\tx^{\epsilon}(T)\n\end{bmatrix} \Bigr ) \Biggr |_{\mathbb{R}^{2n}},\n\end{align*}\nand by (\ref{eq_4_1}), \n\begin{align*}\t\n\begin{bmatrix}\n\tx_0^\epsilon \\\n\tx^{\epsilon}(T)\n\end{bmatrix} - P_F \Bigl (\begin{bmatrix}\n\tx_0^\epsilon \\\n\tx^{\epsilon}(T)\n\end{bmatrix} \Bigr ) \in N_F \Bigl ( P_F \Bigl (\begin{bmatrix}\n\tx_0^\epsilon \\\n\tx^{\epsilon}(T)\n\end{bmatrix} \Bigr ) \Bigr ).\n\end{align*}\nWe define (note that $J_{\epsilon}(x_0^\epsilon,u^\epsilon (\cdot)) > 0$ by (\ref{eq_4_8}) and $\xi_1^{\epsilon},\xi_2 ^{\epsilon} \in \mathbb{R}^n$)\n\begin{align}\n\label{eq_4_18}\n\xi^{\epsilon} := \begin{bmatrix}\n \xi^{\epsilon}_1 \\\n \xi^{\epsilon}_2\t\n \end{bmatrix}\n := \frac{\begin{bmatrix}\n\tx_0^\epsilon \\\n\tx^{\epsilon}(T)\n\end{bmatrix} - P_F \Bigl (\begin{bmatrix}\n\tx_0^\epsilon \\\n\tx^{\epsilon}(T)\n\end{bmatrix} \Bigr ) }{J_{\epsilon}(x_0^\epsilon,u^\epsilon (\cdot) )}\t\in N_F \Bigl ( P_F \Bigl (\begin{bmatrix}\n\tx_0^\epsilon \\\n\tx^{\epsilon}(T)\n\end{bmatrix} \Bigr ) \Bigr ).\n\end{align}\n\nBy Lemma \ref{Lemma_2_1} (see also Lemmas \ref{Lemma_B_1} and \ref{Lemma_B_2} in Appendix \ref{Appendix_B}), it holds that $Z^{\epsilon}(\cdot) \in C([0,T];\mathbb{R}^n)$. Then using Lemmas \ref{Lemma_4_3_1_2_2_1} and \ref{Lemma_4_4_2342341234234}, as $\delta \downarrow 0$, we get\n\begin{align*}\n& \frac{1}{\delta} \Biggl ( d_S \Bigl (\gamma(x^{\epsilon,\delta}(\cdot)) \Bigr )^2 - d_S \Bigl (\gamma(x^{\epsilon}(\cdot)) \Bigr )^2 \Biggr ) \\\n&\rightarrow \begin{cases}\n2 \Biggl \langle d_S \Bigl ( \gamma(x^{\epsilon}(\cdot)) \Bigr ) D d_S \Bigl ( \gamma(x^{\epsilon}(\cdot)) \Bigr ), G_x(\cdot,x^{\epsilon}(\cdot)) Z^{\epsilon}(\cdot) \Biggr \rangle_{C_m^* \times C_m}, & \t \gamma(x^{\epsilon}(\cdot)) \notin S, \\\n0 \in \mathbb{R}, & \gamma(x^{\epsilon}(\cdot)) \in S.\n \end{cases}\n\end{align*}\nWe define (since $J_{\epsilon}(x_0^\epsilon,u^\epsilon (\cdot)) > 0$ by (\ref{eq_4_8}))\n\begin{align}\n\label{eq_4_19}\n\mu^{\epsilon} := \begin{dcases}\n\frac{d_S \Bigl (\gamma(x^{\epsilon}(\cdot)) \Bigr ) D d_S \Bigl (\gamma(x^{\epsilon}(\cdot)) \Bigr )}{J_{\epsilon}(x_0^\epsilon,u^\epsilon (\cdot) )} \in C([0,T]; \mathbb{R}^m)^*, & \gamma(x^{\epsilon}(\cdot)) \notin S, \\\n0 \in C([0,T]; \mathbb{R}^m)^*, & \gamma(x^{\epsilon}(\cdot)) \in S,\n \end{dcases} \n\end{align}\nand since $D d_S \Bigl ( \gamma(x^{\epsilon}(\cdot)) \Bigr )$ is the subdifferential of $d_S$ at $\gamma(x^{\epsilon}(\cdot))$ (see Lemma \ref{Lemma_4_3_1_2_2_1}), by (\ref{eq_4_3}) and (\ref{eq_4_5_6_5_43_3_5_7_3_3}), we have\n\begin{align}\n\label{eq_4_26_23423425345}\n\mu^{\epsilon} \in N_S \Bigl (\gamma(x^{\epsilon}(\cdot)) \Bigr ).
\t\n\\end{align}\n\n\nIn view of Lemma \\ref{Lemma_4_3_1_2_2_1} and the definitions of $J_\\epsilon$, $d_F$ and $d_S$, this leads to\n\\begin{align}\n\\label{eq_4_20_1_1_1_1_2}\n& |\\lambda^\\epsilon|^2 + \t|\\xi^\\epsilon|^2_{\\mathbb{R}^{2n}} + \\|\\mu^\\epsilon\\|^2_{C([0,T]; \\mathbb{R}^m)^*} = 1.\n\\end{align}\nHence, as $\\delta \\downarrow 0$, applying (\\ref{eq_4_17})-(\\ref{eq_4_26_23423425345}) to (\\ref{eq_4_17_1_1_1_1}) yields\n\\begin{align}\t\n\\label{eq_4_20}\n-\\sqrt{\\epsilon}(|a| + T) & \\leq \\lambda^{\\epsilon} \\widehat{Z}^{\\epsilon}(T) + \\Bigl \\langle \\xi_1^{\\epsilon}, a \\Bigr \\rangle + \\Bigl \\langle \\xi_2^{\\epsilon}, Z^{\\epsilon}(T) \\Bigr \\rangle + \\Bigl \\langle \\mu^{\\epsilon}, G_x(\\cdot,x^{\\epsilon}(\\cdot)) Z^{\\epsilon}(\\cdot) \\Bigr \\rangle_{C_m^* \\times C_m}.\n\\end{align}\n\nThe following lemma shows the estimate between the first and second variational equations, where the first variational equation is given in Lemma \\ref{Lemma_4_4_2342341234234}.\n\n\\begin{lemma}\\label{Lemma_4_5}\nFor any $(a,u(\\cdot)) \\in \\mathbb{R}^n \\times \\mathcal{U}^p[0,T]$, the following results hold:\n\\begin{align*}\n& \\textrm{(i)}~ \\lim_{\\epsilon \\downarrow 0} \\Bigl \\{ |x_0^\\epsilon - \\overline{x}_0 |_{\\mathbb{R}^n} + \\overline{d}(u^\\epsilon(\\cdot),\\overline{u}(\\cdot)) \\Bigr \\} = 0, \\\\\n& \\textrm{(ii)}~ \\sup_{t \\in [0,T]} \\Bigl | Z^{\\epsilon}(t;a,u) - Z(t;a,u) \\Bigr | = o(\\epsilon),~ \\Bigl | \\widehat{Z}^{\\epsilon}(T;a,u) - \\widehat{Z}(T;a,u) \\Bigr | = o(\\epsilon),\n\\end{align*}\nwhere $Z(\\cdot) := Z(\\cdot;a,u)$ is the solution to the second variational equation related to $(\\overline{x},\\overline{u}(\\cdot))$ and $\\widehat{Z}(\\cdot) := \\widehat{Z}(\\cdot;a,u)$ is the variational equation of $J$, both of which are given below\n\\begin{align*}\n& Z(t) = a + \\int_0^t \\Bigl [ \\frac{f_x(t,s,\\overline{x}(s),\\overline{u}(s))}{(t-s)^{1-\\alpha}} Z(s) \\dd s \t + \\frac{f(t,s,\\overline{x}(s),u(s)) - f(t,s,\\overline{x}(s),\\overline{u}(s))}{(t-s)^{1-\\alpha}} \\Bigr ] \\dd s \\\\\n&~~~ + \\int_0^t \\Bigl [ g_x(t,s,\\overline{x}(s),\\overline{u}(s)) Z(s) + \\bigl ( g(t,s,\\overline{x}(s),u(s)) - g(t,s,\\overline{x}(s),\\overline{u}(s)) \\bigr ) \\Bigr ] \\dd s,~ \\textrm{a.e.}~ t \\in [0,T], \\\\\n& \\widehat{Z}(T) = \\int_0^T l_x(s,\\overline{x}(s),\\overline{u}(s)) Z(s) \\dd s + \\int_0^T \\Bigl [ l(s,\\overline{x}(s),u(s)) - l(s,\\overline{x}(s),\\overline{u}(s)) \\Bigr ] \\dd s \\\\\n&~~~ + h_{x_0}(\\overline{x}_0,\\overline{x}(T))a + h_x(\\overline{x}_0,\\overline{x}(T)) Z(T),~ \\textrm{a.e.}~ t \\in [0,T].\n\\end{align*}\n\\end{lemma}\n\n\\begin{remark}\\label{Remark_4_8_4534535}\nNote that (i) of Lemma \\ref{Lemma_4_5} follows from the definition of the Ekeland metric in (\\ref{eq_4_6}) (see also (\\ref{eq_4_9_1_1_1})). The proof for (ii) of Lemma \\ref{Lemma_4_5} is similar to that for Lemma \\ref{Lemma_4_4_2342341234234}.\t\n\\end{remark}\n\n\nWe now consider the limit of $\\epsilon \\downarrow 0$. Instead of taking the limit with respect to $\\epsilon \\downarrow 0$, let $\\{\\epsilon_k\\}$ be the sequence of $\\epsilon$ such that $\\epsilon_k \\geq 0$ and $\\epsilon_k \\downarrow 0$ as $k \\rightarrow \\infty$. We replace $\\epsilon$ by $\\epsilon_k$. Then by (\\ref{eq_4_20_1_1_1_1_2}), the sequences $(\\{\\lambda^{\\epsilon_k} \\}, \\{\\xi^{\\epsilon_k} \\},\\{\\mu^{\\epsilon_k}\\}) $ are bounded for $k \\geq 0$. 
Note also from (\ref{eq_4_20_1_1_1_1_2}) that the sequence $\{\mu^{\epsilon_k}\}$ lies in the closed unit ball of $C([0,T];\mathbb{R}^m)^*$, which is weak--$*$ compact by the Banach-Alaoglu theorem \cite[page 130]{Conway_2000_book}. Then by a standard compactness argument, we may extract a subsequence of $\{\epsilon_k\}$, still denoted by $\{\epsilon_k\}$, such that \n\begin{align}\n\label{eq_4_28_34_34234343234}\n(\{\lambda^{\epsilon_k} \}, \{\xi^{\epsilon_k} \},\{\mu^{\epsilon_k}\}) \rightarrow (\lambda^0, \xi^0,\mu^0) =: (\lambda, \xi,\mu),~ \textrm{as $k \rightarrow \infty$,}\n\end{align}\nwhere $\{\mu^{\epsilon_k}\} \rightarrow \mu$ (as $k \rightarrow \infty$) is understood in the weak--$*$ sense \cite{Conway_2000_book}. \n\nWe claim that, in view of (\ref{eq_4_17})-(\ref{eq_4_26_23423425345}), the tuple $(\lambda,\xi,\mu)$ satisfies\n\begin{subequations}\n\label{eq_4_22}\n\begin{align}\n\label{eq_4_22_234234234234}\n\t\lambda &\geq 0, \\\n\label{eq_4_22_234234234234_wdsdf}\n\t\xi &\in N_F \Bigl ( P_F \Bigl (\begin{bmatrix}\n\t\overline{x}_0 \\\n\t\t\overline{x}(T)\n\end{bmatrix} \Bigr ) \Bigr ), \\\n\label{eq_4_22_234234234234_wdsdfdsfsdf}\n\mu &\in N_S \Bigl ( \gamma(\overline{x}(\cdot)) \Bigr ).\n\end{align}\n\end{subequations}\nIndeed, (\ref{eq_4_22_234234234234}) holds due to (\ref{eq_4_17}). Furthermore, (\ref{eq_4_22_234234234234_wdsdf}) follows from (\ref{eq_4_18}) and the property of limiting normal cones \cite[page 43]{Vinter_book}. To prove (\ref{eq_4_22_234234234234_wdsdfdsfsdf}), we note that (\ref{eq_4_26_23423425345}) and (\ref{eq_4_3}) imply that $ \langle \mu^{\epsilon_k}, z - \gamma(x^{\epsilon_k}(\cdot)) \rangle_{C_m^* \times C_m} \leq 0$ for any $z \in S$.
Then (\\ref{eq_4_22_234234234234_wdsdfdsfsdf}) holds, since by (\\ref{eq_4_3}), (\\ref{eq_4_20_1_1_1_1_2}) and (\\ref{eq_4_28_34_34234343234}), together with the boundedness of $\\{\\mu^{\\epsilon_k}\\}$, Lemma \\ref{Lemma_4_5}, and the weak--$*$ convergence property of $\\{\\mu^{\\epsilon_k}\\}$ to $\\mu$, it holds that\n\\begin{align*}\n0 \\geq \\Bigl \\langle \\mu^{\\epsilon_k}, z - \\gamma(x^{\\epsilon_k}(\\cdot) ) \\Bigr \\rangle_{C_m^* \\times C_m}\t& \\geq \\Bigl \\langle \\mu, z - \\gamma(\\overline{x}(\\cdot) ) \\Bigr \\rangle_{C_m^* \\times C_m} - \\Bigl \\|\\gamma(x^{\\epsilon_k}(\\cdot)) - \\gamma(\\overline{x}(\\cdot)) \\Bigr \\|_{\\infty} \\\\\n&~~~ + \\Bigl \\langle \\mu^{\\epsilon_k}, z - \\gamma(\\overline{x}(\\cdot)) \\Bigr \\rangle_{C_m^* \\times C_m} - \\Bigl \\langle \\mu, z - \\gamma(\\overline{x}(\\cdot)) \\Bigr \\rangle_{C_m^* \\times C_m} \\\\\n& \\rightarrow ~\\Bigl \\langle \\mu, z - \\gamma(\\overline{x}(\\cdot) ) \\Bigr \\rangle_{C_m^* \\times C_m},~ \\textrm{as $k \\rightarrow \\infty$.}\n\\end{align*}\n\nBy (\\ref{eq_4_28_34_34234343234}) and (\\ref{eq_4_20_1_1_1_1_2}), together with Lemma \\ref{Lemma_4_5}, it follows that\n\\begin{align*}\n\\lambda^{\\epsilon_k} \\widehat{Z}^{\\epsilon_k}(T) & \\leq \\lambda \\widehat{Z}(T) + |\\widehat{Z}^{\\epsilon_k}(T) - \\widehat{Z}(T)| + |\\lambda^{\\epsilon_k} - \\lambda | \\widehat{Z}(T)~\\rightarrow ~ \\lambda \\widehat{Z}(T),~ \\textrm{as $k \\rightarrow \\infty$,}\t \\\\\n\\Bigl \\langle \\xi_1^{\\epsilon_k}, a \\Bigr \\rangle &= \t \\Bigl \\langle \\xi_1, a \\Bigr \\rangle + \\Bigl \\langle \\xi_1^{\\epsilon_k}, a \\Bigr \\rangle - \\Bigl \\langle \\xi_1, a \\Bigr \\rangle~ \\rightarrow~ \\Bigl \\langle \\xi_1, a \\Bigr \\rangle,~ \\textrm{as $k \\rightarrow \\infty$,} \\\\\n\\Bigl \\langle \\xi_2^{\\epsilon_k}, Z^{\\epsilon_k}(T) \\Bigr \\rangle & \\leq \\Bigl \\langle \\xi_2, Z(T) \\Bigr \\rangle + |Z^{\\epsilon_k}(T) - Z(T)| + |\\xi_2^{\\epsilon_k} - \\xi_2||Z(T)|~\\rightarrow \\Bigl \\langle \\xi_2, Z(T) \\Bigr \\rangle,~ \\textrm{as $k \\rightarrow \\infty$,}\n\\end{align*}\nand similarly, together with the definition of the weak--$*$ convergence,\n\\begin{align*}\n\\Bigl \\langle \\mu^{\\epsilon_k}, G_x(\\cdot,x^{\\epsilon_k}(\\cdot)) Z^{\\epsilon_k}(\\cdot) \\Bigr \\rangle_{C_m^* \\times C_m} & \\leq \\Bigl \\langle \\mu, G_x(\\cdot,x(\\cdot)) Z(\\cdot) \\Bigr \\rangle_{C_m^* \\times C_m} + \\| Z^{\\epsilon_k}(\\cdot) - Z(\\cdot)\\|_{\\infty} \\\\\n&~~~ + \\Bigl \\langle \\mu^{\\epsilon_k}, G_x(\\cdot,x(\\cdot)) Z(\\cdot) \\Bigr \\rangle_{C_m^* \\times C_m} - \\Bigl \\langle \\mu, G_x(\\cdot,x(\\cdot)) Z(\\cdot) \\Bigr \\rangle_{C_m^* \\times C_m}\t\\\\\n& \\rightarrow ~ \\Bigl \\langle \\mu, G_x(\\cdot,x(\\cdot)) Z(\\cdot) \\Bigr \\rangle_{C_m^* \\times C_m},~ \\textrm{as $k \\rightarrow \\infty$.}\n\\end{align*}\nTherefore, as $k \\rightarrow \\infty$, (\\ref{eq_4_20}) becomes for any $(a,u) \\in \\mathbb{R}^n \\times \\mathcal{U}^p[0,T]$,\n\\begin{align}\t\n\\label{eq_4_23}\n0 & \\leq \\lambda \\widehat{Z}(T) + \\Bigl \\langle \\xi_1, a \\Bigr \\rangle + \\Bigl \\langle \\xi_2, Z(T) \\Bigr \\rangle + \\Bigl \\langle \\mu, G_x(\\cdot, \\overline{x}(\\cdot)) Z(\\cdot;a,u) \\Bigr \\rangle_{C_m^* \\times C_m}.\n\\end{align}\nNote that (\\ref{eq_4_23}) is the crucial inequality obtained from the Ekeland variational principle as well as the estimates of the variational equations in Lemmas \\ref{Lemma_4_4_2342341234234} and \\ref{Lemma_4_5}. 
\n\n\\subsection{Proof of Theorem \\ref{Theorem_3_1}: Complementary Slackness Condition}\\label{Section_4_5}\n\nWe prove the complementary slackness condition in Theorem \\ref{Theorem_3_1}. Let $\\mu = (\\mu_1,\\ldots, \\mu_m) \\in C([0,T];\\mathbb{R}^m)^*$, where $\\mu_i \\in C([0,T];\\mathbb{R})^*$, $i=1,\\ldots,m$. Then it holds that\n\\begin{align}\t\n\\label{Eq_4_31_3434534234234123}\n\\Bigl \\langle \\mu, z \\Bigr \\rangle_{C_m^* \\times C_m} = \\sum_{i=1}^m \\Bigl \\langle \\mu_i, z_i \\Bigr \\rangle_{C_1^* \\times C_1},~ \\forall z = (z_1,\\ldots,z_m) \\in C([0,T];\\mathbb{R}^m)\n\\end{align}\nwhere $\\langle \\cdot,\\cdot \\rangle_{C_1^* \\times C_1} := \\langle \\cdot,\\cdot \\rangle_{C([0,T];\\mathbb{R})^* \\times C([0,T];\\mathbb{R})}$ denotes the duality paring between $C([0,T];\\mathbb{R})$ and $C([0,T];\\mathbb{R})^*$. \n\nRecall $\\gamma(\\overline{x}(\\cdot)) = (\\gamma_1(\\overline{x}(\\cdot)),\\ldots, \\gamma_m(\\overline{x}(\\cdot))) = G(\\cdot,\\overline{x}(\\cdot)) = \\begin{bmatrix}\n \tG^1(\\cdot,\\overline{x}(\\cdot)) & \\cdots & G^m(\\cdot,\\overline{x}(\\cdot))\n \\end{bmatrix} \\in S$ and $\\mu \\in N_S \\Bigl (\\gamma(\\overline{x}(\\cdot;\\overline{x}_0,\\overline{u})) \\Bigr )$ by (\\ref{eq_4_22_234234234234_wdsdfdsfsdf}). Based on (\\ref{Eq_4_31_3434534234234123}) and (\\ref{eq_4_3}), this implies that for any $z \\in S$,\n\\begin{align}\n\\label{eq_4_24}\t\n\\Bigl \\langle \\mu, z - \\gamma(\\overline{x}(\\cdot;\\overline{x}_0,\\overline{u})) \\Bigr \\rangle_{C_m^* \\times C_m} = \\sum_{i=1}^m \\Bigl \\langle \\mu_i, z_i - \\gamma_i (\\overline{x}(\\cdot;\\overline{x}_0,\\overline{u})) \\Bigr \\rangle_{C_1^* \\times C_1} \\leq 0.\n\\end{align}\nTaking $z$ in (\\ref{eq_4_24}) as follows:\n\\begin{align*}\nz &= \\begin{bmatrix}\n G^1(\\cdot,\\overline{x}(\\cdot)) & \\cdots & G^{i-1}(\\cdot,\\overline{x}(\\cdot)) & 2G^{i}(\\cdot,\\overline{x}(\\cdot)) & G^{i+1}(\\cdot,\\overline{x}(\\cdot)) & \\cdots & G^m(\\cdot,\\overline{x}(\\cdot))\t\n \\end{bmatrix} \\in S, \\\\\n z^{(-i)} &= \n \\begin{bmatrix}\n \tG^1(\\cdot,\\overline{x}(\\cdot)) & \\cdots & G^{i-1}(\\cdot,\\overline{x}(\\cdot)) & 0_{\\in C([0,T];\\mathbb{R})} & G^{i+1}(\\cdot,\\overline{x}(\\cdot)) & \\cdots & G^m(\\cdot,\\overline{x}(\\cdot)) \n \\end{bmatrix} \\in S.\n\\end{align*}\nThen (\\ref{eq_4_24}) is equivalent to\n\\begin{align}\n\\label{eq_4_25}\n\\Bigl \\langle \\mu_i, G^i(\\cdot,\\overline{x}(\\cdot;\\overline{x}_0;\\overline{u})) \\Bigr \\rangle_{C_1^* \\times C_1} & = 0,~ \\forall i=1,\\ldots,m ,\n\\\\\n\\label{eq_4_26}\n\\Bigl \\langle \\mu_i, z_i \\Bigr \\rangle_{C_1^* \\times C_1} & \\geq 0,~ \\forall z_i \\in C([0,T];\\mathbb{R}_{+}),~ i=1,\\ldots,m.\n\\end{align}\n\nFor (\\ref{eq_4_25}) and (\\ref{eq_4_26}), by the Riesz representation theorem (see \\cite[page 75 and page 382]{Conway_2000_book} and \\cite[Theorem 14.5]{Limaye_book}), there is a unique $\\theta(\\cdot) = (\\theta_1(\\cdot),\\ldots,\\theta_m(\\cdot)) \\in \\textsc{NBV}([0,T];\\mathbb{R}^m)$ with $\\theta_i(\\cdot) \\in \\textsc{NBV}([0,T];\\mathbb{R})$, i.e., $\\theta_i$, $i=1,\\ldots,m$, being the normalized functions of bounded variation on $[0,T]$, such that every $\\theta_i$ is finite, nonnegative, and monotonically nondecreasing on $[0,T]$ with $\\theta_i(0) = 0$. 
Moreover, the Riesz representation theorem leads to the following representation:\n\begin{align}\t\n\Bigl \langle \mu_i, \gamma_i(\overline{x}(\cdot;\overline{x}_0,\overline{u})) \Bigr \rangle_{C_1^* \times C_1} &= \int_0^T G^i(s,\overline{x}(s;\overline{x}_0,\overline{u})) \dd \theta_i(s) = 0,~ \forall i=1,\ldots,m, \nonumber \\\n\label{eq_4_27}\n\Bigl \langle \mu_i, z_i \Bigr \rangle_{C_1^* \times C_1} &= \int_0^T z_i(s) \dd \theta_i(s) \geq 0,~ \forall z_i \in C([0,T];\mathbb{R}_{+}),~ i=1,\ldots,m.\n\end{align}\n\nNotice that (\ref{eq_4_27}) always holds, as $\theta_i$ is monotonically nondecreasing on $[0,T]$ with $\theta_i(0) = 0$ (equivalently, $\dd \theta_i$ is nonnegative) and $z_i \in C([0,T];\mathbb{R}_{+})$. Hence, (\ref{eq_4_25}) and (\ref{eq_4_26}) reduce to\n\begin{align}\t\n\label{eq_4_27_4_23_3_4_2_}\n& \Bigl \langle \mu_i, \gamma_i(\overline{x}(\cdot;\overline{x}_0,\overline{u})) \Bigr \rangle_{C_1^* \times C_1} = \int_0^T G^i(s,\overline{x}(s;\overline{x}_0,\overline{u})) \dd \theta_i(s) = 0,~ \forall i=1,\ldots,m, \\\n& \Leftrightarrow~ \t\textsc{supp}(\dd \theta_i(\cdot)) \subset \{ t \in [0,T]~|~ G^i(t,\overline{x}(t;\overline{x}_0,\overline{u}))= 0\},~ \forall i=1,\ldots,m, \nonumber\n\end{align}\nwhere the equivalence follows from the fact that $G^i(t,\overline{x}(t;\overline{x}_0,\overline{u})) \leq 0$, $i=1,\ldots,m$, and $\dd \theta_i$, $i=1,\ldots,m$, are finite nonnegative measures on $([0,T],\mathcal{B}([0,T]))$. The relation in (\ref{eq_4_27_4_23_3_4_2_}) proves the complementary slackness condition in Theorem \ref{Theorem_3_1}.\n\n\subsection{Proof of Theorem \ref{Theorem_3_1}: Nontriviality and Nonnegativity Conditions}\label{Section_4_6}\n\nWe prove the nontriviality and nonnegativity conditions in Theorem \ref{Theorem_3_1}. Recall (\ref{eq_4_24}), i.e., for any $z \in S$, \n\begin{align}\n\label{eq_4_24_2345234234234223423}\t\n\Bigl \langle \mu, z - \gamma(\overline{x}(\cdot;\overline{x}_0,\overline{u})) \Bigr \rangle_{C_m^* \times C_m} = \sum_{i=1}^m \Bigl \langle \mu_i, z_i - G^i(\cdot,\overline{x}(\cdot;\overline{x}_0,\overline{u})) \Bigr \rangle_{C_1^* \times C_1} \leq 0.\n\end{align}\nThen by the Riesz representation theorem (see \cite[page 75 and page 382]{Conway_2000_book} and \cite[Theorem 14.5]{Limaye_book}) and the fact that $\theta_i$, $i=1,\ldots,m$, is finite, nonnegative, and monotonically nondecreasing on $[0,T]$ with $\theta_i(0) = 0$ (see Section \ref{Section_4_5}), it follows that $\|\mu_i \|_{C([0,T];\mathbb{R})^*} = \|\theta_i(\cdot)\|_{\textsc{NBV}([0,T];\mathbb{R})} = \theta_i(T) \geq 0$ for $i=1,\ldots,m$. In addition, as $\theta_i$ is monotonically nondecreasing, we have $\dd \theta_i(s) \geq 0$ for $s \in [0,T]$, where $\dd \theta_i$ denotes the Lebesgue-Stieltjes measure corresponding to $\theta_i$, $i=1,\ldots,m$. \n\nBy (\ref{eq_4_22_234234234234_wdsdf}) and the fact that $\begin{bmatrix}\n\overline{x}_0 \\\overline{x}(T) \end{bmatrix} \in F$ implies $P_F \Bigl (\begin{bmatrix}\n\overline{x}_0 \\\overline{x}(T) \end{bmatrix} \Bigr ) = \begin{bmatrix}\overline{x}_0 \\\overline{x}(T) \end{bmatrix}$ (see Section \ref{Section_4_1}), we have $\xi = \begin{bmatrix}\n \xi_1 \\\n \xi_2\t\n \end{bmatrix} \in N_F \Bigl ( \begin{bmatrix}\overline{x}_0 \\\overline{x}(T) \end{bmatrix} \Bigr )$.
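Before proceeding, we record a concrete witness for the interior-point argument used in the next paragraph (an illustrative choice; any interior point of $S$ would do): taking $z^\prime$ to be the constant function whose components are all equal to $-1$ and $\sigma = \frac{1}{2}$, we have, for every $z$ with $\|z\|_{\infty} \leq 1$,\n\begin{align*}\nz_i^\prime(t) + \sigma z_i(t) \leq -1 + \tfrac{1}{2} < 0,~ \forall t \in [0,T],~ i=1,\ldots,m,\n\end{align*}\nso that $z^\prime + \sigma z \in S$.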
\nIn addition, from the fact that $S = C([0,T];\mathbb{R}_{-}^m)$ has a nonempty interior, there are $z^\prime \in S$ and $\sigma > 0$ such that $z^\prime + \sigma z \in S$ for all $z \in \overline{B}_{(C([0,T];\mathbb{R}^m),\|\cdot\|_{C([0,T];\mathbb{R}^m)})}(0,1)$ (the closed unit ball in $C([0,T];\mathbb{R}^m)$). Then by (\ref{eq_4_24_2345234234234223423}), it follows that\n\begin{align*}\t\n\sigma \Bigl \langle \mu,z \Bigr \rangle_{C_m^* \times C_m} \leq \Bigl \langle \mu, \gamma(\overline{x}(\cdot)) - z^\prime\t \Bigr \rangle_{C_m^* \times C_m},~ \forall z \in \overline{B}_{(C([0,T];\mathbb{R}^m),\|\cdot\|_{C([0,T];\mathbb{R}^m)})}(0,1).\n\end{align*}\nBy (\ref{eq_4_20_1_1_1_1_2}) \nand the definition of the norm of the dual space (the norm of linear functionals on $C([0,T];\mathbb{R}^m)$ (see Section \ref{Section_4_1})), we get\n\begin{align*}\n\sigma \|\t\mu \|_{C([0,T];\mathbb{R}^m)^*} = \sigma \sqrt{1-|\lambda|^2 - \t|\xi|^2_{\mathbb{R}^{2n}}} \leq \Bigl \langle \mu, \gamma(\overline{x}(\cdot)) - z^\prime\t \Bigr \rangle_{C_m^* \times C_m},~ z^\prime \in S.\n\end{align*}\nNotice that $\sigma > 0$. When $\mu = 0 \in C([0,T];\mathbb{R}^m)^*$ and $\xi = 0$, we must have $\lambda = 1$. When $\lambda = 0$ and $\mu = 0 \in C([0,T];\mathbb{R}^m)^*$, we must have $|\xi|_{\mathbb{R}^{2n}} = 1$. When $\lambda = 0$ and $\xi = 0$, it holds that $\mu \neq 0 \in C([0,T];\mathbb{R}^m)^*$. This implies that the tuple $(\lambda,\xi,\theta_1(\cdot),\ldots,\theta_m(\cdot))$ cannot be trivial, i.e., $(\lambda,\xi,\theta_1(\cdot),\ldots,\theta_m(\cdot)) \neq 0$ (they cannot be zero simultaneously).\n\nIn summary, based on the above discussion, it follows that the following tuple\n\begin{align*}\n\begin{cases}\n\t\lambda \geq 0, \\\n\t\xi \in N_F \Bigl ( \begin{bmatrix}\overline{x}_0 \\\overline{x}(T) \end{bmatrix} \Bigr ), \\\n\t\|\mu_i \|_{ C([0,T];\mathbb{R})^*} = \|\theta_i(\cdot)\|_{\textsc{NBV}([0,T];\mathbb{R})} = \theta_i(T) \geq 0,~\forall i=1,\ldots,m\n\end{cases}\n\end{align*}\ncannot be trivial, i.e., it holds that $(\lambda,\xi,\theta_1(\cdot),\ldots,\theta_m(\cdot)) \neq 0$, and\n\begin{align*}\n\t& \begin{cases}\n\t\lambda \geq 0, \\\n\t\dd \theta_i(s) \geq 0,~ \forall s \in [0,T],~i=1, \ldots, m.\n\t\end{cases}\n\end{align*}\nThis shows the nontriviality and nonnegativity conditions in Theorem \ref{Theorem_3_1}.\n\n\n\n\subsection{Proof of Theorem \ref{Theorem_3_1}: Adjoint Equation and Duality Analysis}\label{Section_4_7}\nRecall the variational inequality in (\ref{eq_4_23}), i.e., for any $(a,u) \in \mathbb{R}^n \times \mathcal{U}^p[0,T]$,\n\begin{align}\n\label{eq_5_40_23423402302020202}\n0 & \leq \lambda \widehat{Z}(T;a,u) + \Bigl \langle \xi_1, a \Bigr \rangle + \Bigl \langle \xi_2, Z(T;a,u) \Bigr \rangle + \Bigl \langle \mu, G_x(\cdot, \overline{x}(\cdot)) Z(\cdot;a,u) \Bigr \rangle_{C_m^* \times C_m}.\n\end{align}\nSimilar to (\ref{eq_4_27_4_23_3_4_2_}), by the Riesz representation theorem, it holds that\n\begin{align*}\t\n\Bigl \langle \mu, G_x(\cdot,\overline{x}(\cdot;\overline{x}_0,\overline{u})) Z(\cdot;a,u) \Bigr \rangle_{C_m^* \times C_m} & = \sum_{i=1}^m \Bigl \langle \mu_i, G_x^i(\cdot,\overline{x}(\cdot;\overline{x}_0,\overline{u})) Z(\cdot;a,u) \Bigr \rangle_{C_1^* \times C_1} \\\n& = \sum_{i=1}^m \int_0^T G_x^{i} (s,\overline{x}(s))
Z(s;a,u) \dd \theta_i(s),\n\end{align*}\nwhere as shown in Section \ref{Section_4_5}, we have $\theta(\cdot) = (\theta_1(\cdot),\ldots,\theta_m(\cdot)) \in \textsc{NBV}([0,T];\mathbb{R}^m)$ with $\theta_i (\cdot) \in \textsc{NBV}([0,T];\mathbb{R})$ being finite and monotonically nondecreasing on $[0,T]$. \n\nThen by using the variational equations in Lemma \ref{Lemma_4_5}, (\ref{eq_5_40_23423402302020202}) becomes\n\begin{align}\n\label{eq_4_30_4_2323234_2}\t\n0 & \leq \n\Bigl \langle \xi_1 + \lambda h_{x_0}(\overline{x}_0,\overline{x}(T))^\top, a \Bigr \rangle + \Bigl \langle \xi_2 + \lambda h_x(\overline{x}_0,\overline{x}(T))^\top, a \Bigr \rangle + \sum_{i=1}^m \int_0^T G_x^{i} (s,\overline{x}(s)) Z(s) \dd \theta_i(s) \\\n&~~~ + \int_0^T \Bigl [ \Bigl ( \lambda h_x(\overline{x}_0,\overline{x}(T)) + \xi_2^\top \Bigr ) \mathds{1}_{[0,T)}(s) \frac{f_x(T,s,\overline{x}(s),\overline{u}(s))}{(T-s)^{1-\alpha}} \nonumber \\\n&~~~~~~~~~~ + \Bigl ( \lambda h_x(\overline{x}_0,\overline{x}(T)) + \xi_2^\top \Bigr ) g_x(T,s,\overline{x}(s),\overline{u}(s)) + \lambda l_x(s,\overline{x}(s),\overline{u}(s)) \Bigr ] Z(s) \dd s \nonumber \\\n&~~~ + \int_0^T \Bigl ( \lambda h_x(\overline{x}_0,\overline{x}(T)) + \xi_2^\top \Bigr ) \mathds{1}_{[0,T)}(s) \frac{f(T,s,\overline{x}(s),u(s)) - f(T,s,\overline{x}(s),\overline{u}(s))}{(T-s)^{1-\alpha}} \dd s \nonumber \\\n&~~~ + \int_0^T \Bigl ( \lambda h_x(\overline{x}_0,\overline{x}(T)) + \xi_2^\top \Bigr ) \Bigl ( g(T,s,\overline{x}(s),u(s)) - g(T,s,\overline{x}(s),\overline{u}(s)) \Bigr ) \dd s \nonumber \\\n&~~~ + \int_0^T \lambda \Bigl [ l(s,\overline{x}(s),u(s)) - l(s,\overline{x}(s),\overline{u}(s)) \Bigr ] \dd s .\nonumber\n\end{align}\nBased on Lemma \ref{Lemma_B_5} and Remark \ref{Remark_3_3}, let $p(\cdot) \in L^p([0,T];\mathbb{R}^n)$ be the unique solution to the adjoint equation in Theorem \ref{Theorem_3_1}.
Applying it to (\\ref{eq_4_30_4_2323234_2}) yields\n\\begin{align}\n\\label{eq_4_31_32342323423423423}\n0 & \\leq \t\n\\Bigl \\langle \\xi_1 + \\lambda h_{x_0}(\\overline{x}_0,\\overline{x}(T))^\\top, a \\Bigr \\rangle + \\Bigl \\langle \\xi_2 + \\lambda h_x(\\overline{x}_0,\\overline{x}(T))^\\top, a \\Bigr \\rangle \\\\\n&~~~ + \\sum_{i=1}^m \\int_0^T G_x^{i} (s,\\overline{x}(s)) Z(s;a,u) \\dd \\theta_i(s) + \\int_0^T \\Biggl [ - p(s) - \\sum_{i=1}^m G_x^{i} (s,\\overline{x}(s))^\\top \\frac{\\dd \\theta_i(s)}{\\dd s} \\nonumber \\\\\n&~~~~~~~~~~ + \\int_s^T \\frac{f_x(r,s,\\overline{x}(s),\\overline{u}(s))^\\top}{(r-s)^{1-\\alpha}} p(r) \\dd r + \\int_s^T g_x(r,s,\\overline{x}(s),\\overline{u}(s))^\\top p(r) \\dd r \\Biggr ]^\\top Z(s;a,u) \\dd s \\nonumber \\\\\n&~~~ + \\int_0^T \\Bigl ( \\lambda h_x(\\overline{x}_0,\\overline{x}(T)) + \\xi_2^\\top \\Bigr ) \\mathds{1}_{[0,T)}(s) \\frac{f(T,s,\\overline{x}(s),u(s)) - f(T,s,\\overline{x}(s),\\overline{u}(s))}{(T-s)^{1-\\alpha}} \\dd s \\nonumber \\\\\n&~~~ + \\int_0^T \\Bigl ( \\lambda h_x(\\overline{x}_0,\\overline{x}(T)) + \\xi_2^\\top \\Bigr ) \\Bigl [ g(T,s,\\overline{x}(s),u(s)) - g(T,s,\\overline{x}(s),\\overline{u}(s)) \\Bigr ] \\dd s \\nonumber \\\\\n&~~~ + \\int_0^T \\lambda \\Bigl [ l(s,\\overline{x}(s),u(s)) - l(s,\\overline{x}(s),\\overline{u}(s)) \\Bigr ] \\dd s \\nonumber \\\\\n& = \\Bigl \\langle \\xi_1 + \\lambda h_{x_0}(\\overline{x}_0,\\overline{x}(T))^\\top, a \\Bigr \\rangle + \\Bigl \\langle \\xi_2 + \\lambda h_x(\\overline{x}_0,\\overline{x}(T))^\\top, a \\Bigr \\rangle \\nonumber \\\\\n&~~~ + \\int_0^T \\Biggl [ - p(s) + \\int_s^T \\Bigl [ \\frac{f_x(r,s,\\overline{x}(s),\\overline{u}(s))^\\top}{(r-s)^{1-\\alpha}} + g_x(r,s,\\overline{x}(s),\\overline{u}(s))^\\top \\Bigr ] p(r) \\dd r \\Biggr ]^\\top Z(s;a,u) \\dd s \\nonumber \\\\\n&~~~ + \\int_0^T \\Bigl ( \\lambda h_x(\\overline{x}_0,\\overline{x}(T)) + \\xi_2^\\top \\Bigr ) \\mathds{1}_{[0,T)}(s) \\frac{f(T,s,\\overline{x}(s),u(s)) - f(T,s,\\overline{x}(s),\\overline{u}(s))}{(T-s)^{1-\\alpha}} \\dd s \\nonumber \\\\\n&~~~ + \\int_0^T \\Bigl ( \\lambda h_x(\\overline{x}_0,\\overline{x}(T)) + \\xi_2^\\top \\Bigr ) \\Bigl [ g(T,s,\\overline{x}(s),u(s)) - g(T,s,\\overline{x}(s),\\overline{u}(s)) \\Bigr ] \\dd s \\nonumber \\\\\n&~~~ + \\int_0^T \\lambda \\Bigl [ l(s,\\overline{x}(s),u(s)) - l(s,\\overline{x}(s),\\overline{u}(s)) \\Bigr ] \\dd s. 
\nonumber\n\end{align}\n\nIn (\ref{eq_4_31_32342323423423423}), Fubini's theorem and Lemma \ref{Lemma_4_5} lead to\n\begin{align*}\n& \int_0^T \Biggl [ - p(s) + \int_s^T \Bigl [ \frac{f_x(r,s,\overline{x}(s),\overline{u}(s))^\top}{(r-s)^{1-\alpha}} + g_x(r,s,\overline{x}(s),\overline{u}(s))^\top \Bigr ] p(r) \dd r \Biggr ]^\top Z(s) \dd s \\\n& = \int_0^T - p(s)^\top Z(s) \dd s\t+ \int_0^T \int_0^s p(s)^\top \Bigl [ \frac{f_x(s,r,\overline{x}(r),\overline{u}(r))}{(s-r)^{1-\alpha}} + g_x(s,r,\overline{x}(r),\overline{u}(r)) \Bigr ] Z(r) \dd r \dd s \\\n& = \int_0^T - p(s)^\top \Biggl [ Z(s) - \int_0^s \Bigl [ \frac{f_x(s,r,\overline{x}(r),\overline{u}(r))}{(s-r)^{1-\alpha}} + g_x(s,r,\overline{x}(r),\overline{u}(r)) \Bigr ]Z(r) \dd r \Biggr ] \dd s \\\n& = \int_0^T - p(s)^\top \Biggl [a + \int_0^s \frac{f(s,r,\overline{x}(r),u(r)) - f(s,r,\overline{x}(r),\overline{u}(r))}{(s-r)^{1-\alpha}} \dd r \\\n&~~~~~~~ + \int_0^s \bigl [ g(s,r,\overline{x}(r),u(r)) - g(s,r,\overline{x}(r),\overline{u}(r)) \bigr ] \dd r \Biggr ] \dd s.\n\end{align*}\nMoreover, by definition of $N_F$ in (\ref{eq_4_1}), it follows that \n\begin{align*}\n\Bigl \langle \xi_1, a \Bigr \rangle + \Bigl \langle \xi_2, a \Bigr \rangle & \leq \Bigl \langle \xi_1, \overline{x}_0 - y_1 + a \Bigr \rangle + \Bigl \langle \xi_2, \overline{x}(T;\overline{x}_0,\overline{u}) - y_2 + a \Bigr \rangle,~ \forall y = \begin{bmatrix}\n y_1 \\ \n y_2\n \end{bmatrix} \in F.\n\end{align*}\nHence, (\ref{eq_4_31_32342323423423423}) becomes for any $(a,u) \in \mathbb{R}^n \times \mathcal{U}^p[0,T]$ and $y \in F$, \n\begin{align}\t\n\label{eq_4_32}\n0 & \leq \Bigl \langle \xi_1, \overline{x}_0 - y_1 + a \Bigr \rangle_{\mathbb{R}^n \times \mathbb{R}^n} + \Bigl \langle \xi_2, \overline{x}(T;\overline{x}_0,\overline{u}) - y_2 + a \Bigr \rangle_{\mathbb{R}^n \times \mathbb{R}^n} \\\n&~~~ + \lambda h_{x_0}(\overline{x}_0,\overline{x}(T)) a + \lambda h_x(\overline{x}_0,\overline{x}(T)) a - \Bigl \langle \int_0^T p(s) \dd s , a \Bigr \rangle \nonumber \\\n&~~~ + \int_0^T - p(s)^\top \Biggl [\int_0^s \frac{f(s,r,\overline{x}(r),u(r)) - f(s,r,\overline{x}(r),\overline{u}(r))}{(s-r)^{1-\alpha}} \dd r \nonumber \\\n&~~~~~~~ + \int_0^s \bigl [ g(s,r,\overline{x}(r),u(r)) - g(s,r,\overline{x}(r),\overline{u}(r)) \bigr ] \dd r \Biggr ] \dd s \nonumber \\\n&~~~ + \int_0^T \Bigl ( \lambda h_x(\overline{x}_0,\overline{x}(T)) + \xi_2^\top \Bigr ) \mathds{1}_{[0,T)}(s) \frac{f(T,s,\overline{x}(s),u(s)) - f(T,s,\overline{x}(s),\overline{u}(s))}{(T-s)^{1-\alpha}} \dd s \nonumber \\\n&~~~ + \int_0^T \Bigl ( \lambda h_x(\overline{x}_0,\overline{x}(T)) + \xi_2^\top \Bigr ) \Bigl [ g(T,s,\overline{x}(s),u(s)) - g(T,s,\overline{x}(s),\overline{u}(s)) \Bigr ] \dd s \nonumber \\\n&~~~ + \int_0^T \lambda \Bigl [ l(s,\overline{x}(s),u(s)) - l(s,\overline{x}(s),\overline{u}(s)) \Bigr ] \dd s.
\nonumber\n\end{align}\nBelow, we use (\ref{eq_4_32}) to prove the transversality condition, the nontriviality of the adjoint equation, and the Hamiltonian-like maximum condition in Theorem \ref{Theorem_3_1}.\n\n\n\subsection{Proof of Theorem \ref{Theorem_3_1}: Transversality Condition and Nontriviality of Adjoint Equation}\label{Section_4_8}\n\nIn (\ref{eq_4_32}), when $u=\overline{u}$, we have\n\begin{align}\n\label{eq_5_456345345345}\t\n0 & \leq \Bigl \langle \xi_1, \overline{x}_0 - y_1 + a \Bigr \rangle + \Bigl \langle \xi_2, \overline{x}(T;\overline{x}_0,\overline{u}) - y_2 + a \Bigr \rangle \\\n&~~~ + \lambda h_{x_0}(\overline{x}_0,\overline{x}(T)) a + \lambda h_x(\overline{x}_0,\overline{x}(T)) a - \Bigl \langle \int_0^T p(s) \dd s , a \Bigr \rangle,~ \forall y = \begin{bmatrix}\n y_1 \\ \n y_2\n \end{bmatrix} \in F. \nonumber\n\end{align}\nWhen $y_1 = \overline{x}_0$ and $y_2 = \overline{x}(T;\overline{x}_0,\overline{u})$, the above inequality holds for both $a$ and $-a$ for every $a \in \mathbb{R}^n$, which implies\n\begin{align}\n\label{eq_4_38_1_2_1_2_3_2_1}\n\int_0^T p(s) \dd s = \xi_1 + \xi_2 + \lambda h_{x_0}(\overline{x}_0,\overline{x}(T))^\top +\lambda h_x(\overline{x}_0,\overline{x}(T))^\top.\n\end{align}\nUnder this condition, (\ref{eq_5_456345345345}) becomes\n\begin{align*}\t\n0 \leq \Bigl \langle \xi_1, \overline{x}_0 - y_1 \Bigr \rangle + \Bigl \langle \xi_2, \overline{x}(T;\overline{x}_0,\overline{u}) - y_2 \Bigr \rangle,~ \forall y \in F.\t\n\end{align*}\nThis proves the transversality condition in Theorem \ref{Theorem_3_1}. In addition, as $p(\cdot) \in L^p([0,T];\mathbb{R}^n)$ by Lemma \ref{Lemma_B_5}, (\ref{eq_4_38_1_2_1_2_3_2_1}), together with the nontriviality condition, shows the nontriviality of the adjoint equation in Theorem \ref{Theorem_3_1}.\n\n\subsection{Proof of Theorem \ref{Theorem_3_1}: Hamiltonian-like Maximum Condition}\label{Section_4_9}\n\nWe finally prove the Hamiltonian-like maximum condition in Theorem \ref{Theorem_3_1}.
When $y_1 = \overline{x}_0$, $y_2 = \overline{x}(T;\overline{x}_0,\overline{u})$ and $a=0$ in (\ref{eq_4_32}), by Fubini's theorem, (\ref{eq_4_32}) can be written as\n\begin{align}\n\label{eq_4_33}\n0 & \leq \int_0^T -p(s)^\top \Biggl [\int_0^s \frac{f(s,r,\overline{x}(r),u(r)) - f(s,r,\overline{x}(r),\overline{u}(r))}{(s-r)^{1-\alpha}} \dd r \\\n&~~~~~~~ + \int_0^s \bigl [ g(s,r,\overline{x}(r),u(r)) - g(s,r,\overline{x}(r),\overline{u}(r)) \bigr ] \dd r \Biggr ] \dd s \nonumber \\\n&~~~ + \int_0^T \Bigl ( \lambda h_x(\overline{x}_0,\overline{x}(T)) + \xi_2^\top \Bigr ) \mathds{1}_{[0,T)}(s) \frac{f(T,s,\overline{x}(s),u(s)) - f(T,s,\overline{x}(s),\overline{u}(s))}{(T-s)^{1-\alpha}} \dd s \nonumber \\\n&~~~ + \int_0^T \Bigl ( \lambda h_x(\overline{x}_0,\overline{x}(T)) + \xi_2^\top \Bigr ) \Bigl [ g(T,s,\overline{x}(s),u(s)) - g(T,s,\overline{x}(s),\overline{u}(s)) \Bigr ] \dd s \nonumber \\\n&~~~ + \int_0^T \lambda \Bigl [ l(s,\overline{x}(s),u(s)) - l(s,\overline{x}(s),\overline{u}(s)) \Bigr ] \dd s\t \nonumber \\\n& = \int_0^T \Biggl [ \int_s^T - p(r)^\top \frac{f(r,s,\overline{x}(s),u(s)) - f(r,s,\overline{x}(s),\overline{u}(s))}{(r-s)^{1-\alpha}} \dd r \nonumber \\\n&~~~~~~~~~~ + \int_s^T - p(r)^\top \bigl [ g(r,s,\overline{x}(s),u(s)) - g(r,s,\overline{x}(s),\overline{u}(s)) \bigr ] \dd r \nonumber \\\n&~~~~~~~~~~ + \Bigl ( \lambda h_x(\overline{x}_0,\overline{x}(T)) + \xi_2^\top \Bigr ) \mathds{1}_{[0,T)}(s) \frac{f(T,s,\overline{x}(s),u(s)) - f(T,s,\overline{x}(s),\overline{u}(s))}{(T-s)^{1-\alpha}} \nonumber \\\n&~~~~~~~~~~ + \Bigl ( \lambda h_x(\overline{x}_0,\overline{x}(T)) + \xi_2^\top \Bigr ) \bigl [ g(T,s,\overline{x}(s),u(s)) - g(T,s,\overline{x}(s),\overline{u}(s)) \bigr ] \nonumber \\\n&~~~~~~~~~~ + \lambda \bigl [ l(s,\overline{x}(s),u(s)) - l(s,\overline{x}(s),\overline{u}(s)) \bigr ] \Biggr ] \dd s. \nonumber\n\end{align}\n\nLet us define for $s \in [0,T]$,\n\begin{align*}\n& \Lambda(s,\overline{x}(s),u) := \int_s^T p(r)^\top \frac{f(r,s,\overline{x}(s),u)}{(r-s)^{1-\alpha}} \dd r - \mathds{1}_{[0,T)}(s) \Bigl ( \lambda h_x(\overline{x}_0,\overline{x}(T)) + \xi_2^\top \Bigr ) \frac{f(T,s,\overline{x}(s),u)}{(T-s)^{1-\alpha}} \\\n&~~~~~ + \int_s^T p(r)^\top g(r,s,\overline{x}(s),u) \dd r - \Bigl ( \lambda h_x(\overline{x}_0,\overline{x}(T)) + \xi_2^\top \Bigr ) g(T,s,\overline{x}(s),u) - \lambda l(s,\overline{x}(s),u).\n\end{align*}\nThen we observe that (\ref{eq_4_33}) becomes\n\begin{align*}\n\int_0^T \Lambda(s,\overline{x}(s),u(s)) \dd s \leq \t\int_0^T \Lambda(s,\overline{x}(s),\overline{u}(s)) \dd s.\n\end{align*}\n\nAs $U$ is separable, there exists a countable dense set $U_0 = \{u_i,~ i \geq 1\} \subset U$. Moreover, for each $i$, there exists a measurable set $S_i \subset [0,T]$ such that $|S_i| = T$ and any $t \in S_i$ is a Lebesgue point of $\Lambda(t,\overline{x}(t),u(t))$, i.e., $\lim_{\tau \downarrow 0} \frac{1}{2\tau} \int_{t-\tau}^{t+\tau} \Lambda(s,\overline{x}(s),u(s)) \dd s = \Lambda(t,\overline{x}(t),u(t))$ \cite[Theorem 5.6.2]{Bogachev_book}. We fix $u_i \in U_0$.
For any $ t \\in S_i$, define\n\\begin{align*}\nu(s) : = \\begin{cases}\n \t\\overline{u}(s), & s \\in [0,T] \\setminus (t-\\tau,t+\\tau),\\\\\n \tu_i, & s \\in (t-\\tau,t+\\tau).\n \\end{cases}\t\n\\end{align*}\nIt then follows that\n\\begin{align*}\n0 \\leq \\lim_{\\tau \\downarrow 0} \\frac{1}{2\\tau} \\int_{t - \\tau}^{t + \\tau} \\bigl [ \t\\Lambda(s,\\overline{x}(s),\\overline{u}(s)) - \\Lambda(s,\\overline{x}(s),u_i) \\bigr ] \\dd s = \\Lambda (t,\\overline{x}(t),\\overline{u}(t)) - \\Lambda (t,\\overline{x}(t),u_i).\n\\end{align*}\nSince $\\cap_{i \\geq 1} S_i = [0,T]$, $\\Lambda$ is continuous in $u \\in U$, and $U$ is separable, we must have\n\\begin{align*}\n\\Lambda(t,\\overline{x}(t),u) \\leq \\Lambda(t,\\overline{x}(t),\\overline{u}(t)),~ \\forall u \\in U,~ \\textrm{a.e.}~ t \\in [0,T],\n\\end{align*}\nwhich proves the Hamiltonian-like maximum condition in Theorem \\ref{Theorem_3_1}. This is the end of the proof for Theorem \\ref{Theorem_3_1}.\n\n\n\n\n\n\n\\begin{appendices}\n\t\n\\section*{Appendices}\n\nWe provide some preliminary results, and obtain the well-posedness and estimates of general Volterra integral equations having singular and nonsingular kernels. To simplify the notation, we use $\\|\\cdot\\|_{q} := \\|\\cdot\\|_{L^q([0,T];\\mathbb{R}^n)}$ and $\\|\\cdot\\|_{p} := \\|\\cdot\\|_{L^p([0,T];\\mathbb{R}^n)}$. \n\n\n\n\n\\section{Preliminaries}\\label{Appendix_A}\n\n\n\n\\begin{lemma}\n\\label{Lemma_A_2}\nLet $\\alpha \\in (0,1)$. Suppose that $\\frac{1}{q} + 1 = \\frac{1}{p} + \\frac{1}{r}$ with $p,q \\geq 1$ and $ r \\in [1, \\frac{1}{1-\\alpha})$. Then for any $a 0$,\n\\begin{align*}\n\t\\|\\zeta_{\\tau}(\\cdot) \\|_{L^r([0,\\tau];\\mathbb{R})}^r = \\int_0^{\\tau} t^{-r(1-\\alpha)} \\dd t = \\frac{1}{1 - r(1-\\alpha)} \\tau^{1 - r(1-\\alpha)}.\n\\end{align*}\nThis proves the first inequality. The second inequality can be shown in a similar way by letting $\\zeta_{\\tau}(t) :=\t\\mathds{1}_{(0,\\tau]}(t)$. We complete the proof.\n\\end{proof}\n\n\\begin{lemma}[Lemma 2.3 of \\cite{Lin_Yong_SICON_2020}]\\label{Lemma_A_3}\nSuppose that $\\alpha \\in (0,1)$ and $p \\geq 1$. Assume that $\\psi :\\Delta \\rightarrow \\mathbb{R}^n$ is measurable with $\\psi(0,\\cdot) \\in L^p([0,T];\\mathbb{R}^n)$ satisfying $|\\psi(t,s) - \\psi(t^\\prime,s)|_{\\mathbb{R}^n} \\leq \\omega(|t-t^\\prime|) \\psi^\\prime(s)$ for $t,t^\\prime \\in [0,T]$ and $s \\in [0,T]$, where $\\psi^\\prime (\\cdot) \\in L^p([0,T];\\mathbb{R})$ and $\\omega$ is some modulus of continuity. Let\n\\begin{align*}\n\\varphi(t) &:= \\int_0^t \\frac{\\psi(t,s)}{(t-s)^{1-\\alpha}} \\dd s,~ \\textrm{a.e.}~ t \\in [0,T]. \n\\end{align*}\nThen $\\varphi(\\cdot) \\in L^p([0,T];\\mathbb{R}^n)$ and $\\|\\varphi(\\cdot)\\|_{p} \\leq \\frac{T^\\alpha}{\\alpha} \\Bigl ( \\|\\psi(0,\\cdot)\\|_{p} + \\omega(T) \\|\\psi^\\prime (\\cdot) \\|_{p} \\Bigr )$.\nFurthermore, if $p > \\frac{1}{\\alpha}$, then $\\varphi$ is continuous on $[0,T]$ and there is a constant $C$, independent from choice of $\\varphi$, such that \n\\begin{align*}\n|\\varphi(t)|_{\\mathbb{R}^n} \\leq C \\Bigl ( \t\\|\\psi(0,\\cdot)\\|_{p} + \\|\\psi^\\prime (\\cdot) \\|_{p} \\Bigr ),~ \\forall t \\in [0,T].\n\\end{align*}\n\\end{lemma}\n\n\n\\begin{lemma}\\label{Lemma_A_4}\nSuppose that $\\alpha \\in (0,1)$ and $q \\geq 1$. 
Assume that $\\psi :\\Delta \\rightarrow \\mathbb{R}^n$ is measurable with $\\psi(0,\\cdot) \\in L^p([0,T];\\mathbb{R}^n)$ satisfying $|\\psi(t,s) - \\psi(t^\\prime,s)|_{\\mathbb{R}^n} \\leq \\omega(|t-t^\\prime|) \\psi^\\prime(s)$ for $t,t^\\prime \\in [0,T]$ and $s \\in [0,T]$, where $\\psi^\\prime (\\cdot) \\in L^p([0,T];\\mathbb{R})$ and $\\omega$ is some modulus of continuity. Let\n\\begin{align*}\n\\hat{\\varphi}(t) &:= \\int_0^t \\psi(t,s) \\dd s,~ \\textrm{a.e.}~ t \\in [0,T].\n\\end{align*}\nThen $\\hat{\\varphi} (\\cdot) \\in L^p([0,T];\\mathbb{R}^n)$ and $\\|\\hat{\\varphi} (\\cdot) \\|_{p} \\leq T \\Bigl ( \\|\\psi(0,\\cdot)\\|_{p} + \\omega(T) \\|\\psi^\\prime (\\cdot) \\|_{p} \\Bigr )$.\nFurthermore, if $p > \\frac{1}{\\alpha}$, then $\\hat{\\varphi}$ is continuous on $[0,T]$ and there is a constant $C$, independent from choice of $\\hat{\\varphi}$, such that \n\\begin{align*}\n|\\hat{\\varphi}(t)|_{\\mathbb{R}^n} \\leq C \\Bigl ( \t\\|\\psi(0,\\cdot)\\|_{p} + \\|\\psi^\\prime (\\cdot) \\|_{p} \\Bigr ),~ \\forall t \\in [0,T].\t\n\\end{align*}\n\\end{lemma}\n\\begin{proof}\nThe proof is analogous to that for Lemma \\ref{Lemma_A_3}. Indeed, to prove the continuity, note that\n\\begin{align*}\n|\\psi(t,s)|_{\\mathbb{R}^n} \\leq |\\psi(0,s)|_{\\mathbb{R}^n} + \\omega(t)\\psi^\\prime(s) =: \\overline{\\psi}(s),~ (t,s) \\in \\Delta,\n\\end{align*}\nwhere $\\overline{\\psi} \\in L^p([0,T];\\mathbb{R})$, since $\\psi(0,\\cdot) \\in L^p([0,T];\\mathbb{R}^n)$ and $\\psi^\\prime \\in L^p([0,T];\\mathbb{R})$. It holds that\n\\begin{align*}\n& |\\hat{\\varphi}(t) - \\hat{\\varphi}(t^\\prime) |_{\\mathbb{R}^n}\t\n\\leq \\omega(|t-t^\\prime|) T^{\\frac{q-1}{q}} \\|\\psi^\\prime (\\cdot) \\|_{p} + \\|\\overline{\\psi} (\\cdot)\\|_{p} \\tau^{\\frac{p-1}{p}} + \\|\\overline{\\psi} (\\cdot) \\|_{p}(t^\\prime-t+\\tau)^{\\frac{p-1}{p}}.\n\\end{align*}\nThen the rest of the proof is similar to that of Lemma \\ref{Lemma_A_3}; thus completing the proof.\n\\end{proof}\n\n\\begin{lemma}[Gronwall-type inequality with the presence of singular and nonsingular kernels]\n\\label{Lemma_A_5}\nAssume that $\\alpha \\in (0,1)$ and $p > \\frac{1}{\\alpha}$.\nLet $P(\\cdot) \\in L^p([0,T];\\mathbb{R})$ and $b(\\cdot),z(\\cdot) \\in L^{\\frac{p}{p-1}}([0,T];\\mathbb{R})$, where $b$, $z$, and $P$ are nonnegative functions. Suppose that the following holds:\n\\begin{align}\n\\label{eq_a_1}\nz(t) \\leq b(t) + \\int_0^t \\frac{P(s)z(s)}{(t-s)^{1-\\alpha}} \\dd s + \\int_0^t \tP(s) z(s) \\dd s,~ \\textrm{a.e.}~ t \\in [0,T].\n\\end{align}\nThen there exists a constant $C \\geq 0$ such that\n\\begin{align*}\nz(t) \\leq b(t) + C \\int_0^t \\frac{P(s)b(s)}{(t-s)^{1-\\alpha}} \\dd s + C \\int_0^t P(s) b(s) \\dd s,~ \\textrm{a.e.}~ t \\in [0,T].\t\n\\end{align*}\n\\end{lemma}\n\n\\begin{proof}\nBy the H\\\"older's inequality, we have $P(\\cdot)z(\\cdot) \\in L^1([0,T];\\mathbb{R})$, which implies that the two integrals on the right-hand side of (\\ref{eq_a_1}) are well-defined in the $L^1$ sense. Below, there are several generic constants, whose values vary from line to line.\n\nWe remove the singularity of the right-hand side of (\\ref{eq_a_1}). As (\\ref{eq_a_1}) is linear in $z$, consider,\n\\begin{align*}\n\tz(t) \\leq b(t) + \\int_0^t \\frac{\\hat{P}(s;\\alpha) z(s)}{(t-s)^{1-\\alpha}} \\dd s,~ \\textrm{a.e.}~ t \\in [0,T],\n\\end{align*}\nwhere $\\hat{P}(s;\\alpha) := P(s) + (t-s)^{1-\\alpha} P(s)$ with $(t,s) \\in \\Delta$. 
Note that $\\hat{P}(\\cdot;\\alpha) \\in L^p([0,T];\\mathbb{R})$ is nonnegative, and since $1-\\alpha > 0$, we have $\\|\\hat{P}(\\cdot;\\alpha)\\|_{p} \\leq \\|P(\\cdot)\\|_{p} + T^{1-\\alpha} \\|P(\\cdot)\\|_{p} \\leq C \\|P(\\cdot)\\|_{p}$.\n\nIt follows that \n\\begin{align}\n\\label{eq_A_2_23234234}\nz(t) \n& \\leq b(t) + \\int_0^t \\frac{P(s) b(s)}{(t-s)^{1-\\alpha}} \\dd s + \\int_0^t P(s) b(s) \\dd s + \\int_0^t \t\\frac{\\hat{P}(s;\\alpha)}{(t-s)^{1-\\alpha}} \\int_0^s \\frac{\\hat{P}(\\tau;\\alpha)}{(s-\\tau)^{1-\\alpha}} z(\\tau) \\dd \\tau \\dd s,\n\\end{align}\nNotice that the double integral above represents the integration over the triangular region with base and height of $t$, where the integration is performed vertically and then horizontally with respect to $\\tau$ and $s$, respectively. Alternatively, we may reverse the order of the double integration above, i.e.,\n\\begin{align}\n\\label{eq_a_3_345345}\n \\int_0^t \\frac{\\hat{P}(s;\\alpha) }{(t-s)^{1-\\alpha}} \\int_0^s \\frac{\\hat{P}(\\tau;\\alpha)}{(s-\\tau)^{1-\\alpha}} z(\\tau) \\dd \\tau \\dd s \n& = \\int_0^t \\int_{s}^t \\frac{\\hat{P}(\\tau;\\alpha) \\hat{P}(s;\\alpha)}{(t-\\tau)^{1-\\alpha} (\\tau-s)^{1-\\alpha}} z(s) \\dd \\tau \\dd s.\n\\end{align}\n\nLet $v := \\frac{\\tau-s}{t-s}$. Note that $\\tau$ varies from $s$ to $t$, which implies $v$ varies from $0$ to $1$. Moreover, $\\tau = s+ (t-s) v $ and $\\dd \\tau = (t-s) \\dd v$. Then using the H\\\"older's inequality and changing the integration variable, the integration in (\\ref{eq_a_3_345345}) can be evaluated by\n\\begin{align*}\t\n& \\int_0^t \\int_{s}^t \\frac{\\hat{P}(\\tau;\\alpha) \\hat{P}(s;\\alpha)}{(t-\\tau)^{1-\\alpha} (\\tau-s)^{1-\\alpha}} \\dd \\tau z(s) \\dd s \\\\\n& \\leq C \\Biggl ( \\|P(\\cdot)\\|_{p} \\int_0^t P(s) \\Bigl ( \\int_s^t \\frac{1}{(t-\\tau)^{ \\frac{(1-\\alpha)p}{p-1}} (\\tau-s)^{ \\frac{(1-\\alpha)p}{p-1}}} \\dd \\tau \\Bigr)^{\\frac{p-1}{p}} z(s) \\dd s + \\|P(\\cdot)\\|_{p} T^{\\frac{p-1}{p}} \\int_0^t P(s) z(s) \\dd s \\Biggr ) \\\\\n& = C^{(1)} \\Biggl ( \\|P(\\cdot)\\|_{p} \\int_0^t \\frac{P(s) z(s) }{(t-s)^{2(1-\\alpha) - \\frac{p-1}{p}}}\n\\Bigl (\\int_0^1 \\frac{1}{(1-v)^{(1-\\alpha) \\frac{p}{p-1}} v^{(1-\\alpha) \\frac{p}{p-1}} } \\dd v \\Bigr )^{\\frac{p-1}{p}} \\dd s \\\\\n&~~~~~~~~~~ + \\|P(\\cdot)\\|_{p} T^{\\frac{p-1}{p}} \\int_0^t P(s) z(s) \\dd s \\Biggr ).\n\\end{align*}\nLet $\\alpha[\\alpha] := 1 - (1-\\alpha) \\frac{p}{p-1} = \\frac{\\alpha p - 1}{p-1} \\in (0,1)$ and $\\alpha^{(1)} := 1 - \\Bigl ( 2(1-\\alpha) - \\frac{p-1}{p} \\Bigr ) = 2 \\alpha - \\frac{1}{p} = \\alpha + \\Bigl (\\alpha - \\frac{1}{p} \\Bigr ) > \\alpha$ (note that $p > \\frac{1}{\\alpha}$). We can show that\n\\begin{align*}\n& C^{(1)} \\|P(\\cdot)\\|_{p} \\int_0^t \\frac{P(s) z(s) }{(t-s)^{2(1-\\alpha) - \\frac{p-1}{p}}}\n\\Bigl (\\int_0^1 \\frac{1}{(1-v)^{1-\\alpha[\\alpha]} v^{1-\\alpha[\\alpha]} } \\dd v \\Bigr )^{\\frac{p-1}{p}} \\dd s\t\\\\\n& = C^{(1)} \\|P(\\cdot)\\|_{p} B(\\alpha[\\alpha], \\alpha[\\alpha])^{\\frac{p-1}{p}} \\int_0^t \\frac{P(s) z(s)}{(t-s)^{1-\\alpha^{(1)}}} \\dd s =: \\bar{c}^{(1)} \\int_0^t \\frac{P(s) z(s)}{(t-s)^{1-\\alpha^{(1)}}} \\dd s,\n\\end{align*}\nwhere $B$ is the beta function defined by $B(x,y) := \\int_0^1 t^{x-1} (1-t)^{y-1} \\dd t$ for $x,y > 0$. We also have $C^{(1)} \\|P(\\cdot)\\|_{p} T^{\\frac{p-1}{p}}\t \\int_0^t P(s) z(s) \\dd s =: \\hat{c}^{(1)} \\int_0^t P(s) z(s) \\dd s$. 
Then with $c^{(1)} := \\max \\{ \\bar{c}^{(1)}, \\hat{c}^{(1)} \\}$, (\\ref{eq_a_3_345345}) is bounded above by\n\\begin{align*}\t\n\\int_0^t \\frac{\\hat{P}(s;\\alpha) }{(t-s)^{1-\\alpha}} \\int_0^s \\frac{\\hat{P}(\\tau;\\alpha)}{(s-\\tau)^{1-\\alpha}} z(\\tau) \\dd \\tau \\dd s \\leq c^{(1)} \\Bigl ( \\int_0^t \\frac{P(s) z(s)}{(t-s)^{1-\\alpha^{(1)}}} \\dd s + \\int_0^t P(s) z(s) \\dd s \\Bigr ).\n\\end{align*}\nHence, by letting $c^{(0)} := 1$ and $\\alpha^{(0)} := \\alpha$, together with (\\ref{eq_a_1}), (\\ref{eq_A_2_23234234}) can be evaluated by\n\\begin{align}\n\\label{eq_a_2}\t\nz(t) \n& \\leq b(t) + \\int_0^t \\frac{P(s) b(s)}{(t-s)^{1-\\alpha}} \\dd s + \\int_0^t P(s) b(s) \\dd s + \\int_0^t \t\\frac{\\hat{P}(t,s)}{(t-s)^{1-\\alpha}} \\int_0^s \\frac{\\hat{P}(s,\\tau)}{(s-\\tau)^{1-\\alpha}} z(\\tau) \\dd \\tau \\dd s \\\\\n& \\leq b(t) + \\sum_{i=0}^{1} c^{(i)} \\int_0^t \\frac{P(s)b(s)}{(t-s)^{1-\\alpha^{(i)}}} \\dd s + \\sum_{i=0}^{1} c^{(i)} \\int_0^t P(s) b(s) \\dd s \\nonumber \\\\\n&~~~ + c^{(1)} \\int_0^t \\frac{\\hat{P}(s;\\alpha^{(1)})}{(t-s)^{1-\\alpha^{(1)}}} \\int_0^s \\frac{\\hat{P}(\\tau;\\alpha)}{(t-s)^{1-\\alpha}} z(\\tau) \\dd \\tau \\dd s,~ \\textrm{a.e.}~ t \\in [0,T]. \\nonumber\n\\end{align}\n\nNote that by using the same technique as above, we can show that\n\\begin{align*}\n& c^{(1)} \\int_0^t \\frac{\\hat{P}(s;\\alpha^{(1)})}{(t-s)^{\\alpha^{(1)}}} \\int_0^s \\frac{\\hat{P}(\\tau;\\alpha)}{(s-\\tau)^{1-\\alpha}} z(\\tau) \\dd \\tau \\dd s = c^{(1)} \\int_0^t \\int_s^t \\frac{\\hat{P}(\\tau;\\alpha^{(1)})}{(t-\\tau)^{1-\\alpha^{(1)}}} \\frac{\\hat{P}(s;\\alpha)}{(\\tau-s)^{1-\\alpha}} z(s) \\dd \\tau \\dd s \\\\\n& \\leq C^{(2)} c^{(1)} \\Biggl ( \\|P(\\cdot)\\|_{p} \\int_0^t \\frac{P(s) z(s)}{(t-s)^{2-\\alpha-\\alpha^{(1)} - \\frac{p-1}{p} }} \\Bigl ( \\int_0^1 \\frac{1}{(1-v)^{(1-\\alpha^{(1)}) \\frac{p}{p-1} } v^{ (1-\\alpha) \\frac{p}{p-1} }} \\dd v \\Bigr )^{\\frac{p-1}{p}} \\dd s \\\\\n&~~~~~~~~~~ + \\|P(\\cdot)\\|_{p} T^{\\frac{p-1}{p}} \\int_0^t P(s) z(s) \\dd s \\Biggr ).\n\\end{align*}\nLet $\\alpha[\\alpha^{(1)}] := 1 - (1-\\alpha^{(1)}) \\frac{p}{p-1} = \\frac{\\alpha^{(1)} p - 1}{p-1} \\in (0,1)$ and $\\alpha^{(2)} := 1- \\Bigl (2-\\alpha-\\alpha^{(1)} - \\frac{p-1}{p} \\Bigr ) = \\alpha + 2 \\Bigl ( \\alpha - \\frac{1}{p} \\Bigr ) > \\alpha^{(1)} > \\alpha$. We can show that\n\\begin{align*}\n& C^{(2)} c^{(1)} \\|P(\\cdot)\\|_{p} \\int_0^t \\frac{P(s) z(s)}{(t-s)^{2-\\alpha-\\alpha^{(1)} - \\frac{p-1}{p} }} \\Bigl ( \\int_0^1 \\frac{1}{(1-v)^{1-\\alpha[\\alpha^{(1)}]} v^{1- \\alpha[\\alpha]}} \\dd v \\Bigr )^{\\frac{p-1}{p}} \\dd s \t \\\\\n& = C^{(2)} c^{(1)} \\|P(\\cdot)\\|_{p} B(\\alpha[\\alpha],\\alpha[\\alpha^{(1)}])^{\\frac{p-1}{p}} \\int_0^t \\frac{P(s)z(s)}{(t-s)^{1-\\alpha^{(2)}}} \\dd s =: \\bar{c}^{(2)} \\int_0^t \\frac{P(s)z(s)}{(t-s)^{1-\\alpha^{(2)}}} \\dd s,\n\\end{align*}\nand we have $C^{(2)} c^{(1)} \\|P(\\cdot)\\|_{p} T^{\\frac{p-1}{p}} \\int_0^t P(s) z(s) \\dd s =: \\hat{c}^{(2)} \\int_0^t P(s) z(s) \\dd s$.\nLet $c^{(2)} := \\max \\{ \\bar{c}^{(2)}, \\hat{c}^{(2)} \\}$. Then using a similar approach, (\\ref{eq_a_2}) can be evaluated by\n\\begin{align}\n\\label{eq_a_5_3423234234}\nz(t) \n& \\leq b(t) + \\sum_{i=0}^{1} c^{(i)} \\int_0^t \\frac{P(s)b(s)}{(t-s)^{1-\\alpha^{(i)}}} \\dd s + \\sum_{i=0}^{1} c^{(i)} \\int_0^t P(s) b(s) \\dd s \\\\\n&~~~ + c^{(2)} \\int_0^t \\frac{P(s)z(s)}{(t-s)^{1-\\alpha^{(2)}}}\\dd s + c^{(2)} \\int_0^t P(s) z(s) \\dd s,~ \\textrm{a.e.}~ t \\in [0,T]. 
\\nonumber\n\\end{align}\nProceeding similarly, using (\\ref{eq_a_1}), (\\ref{eq_a_5_3423234234}) is evaluated by\n\\begin{align*}\nz(t) & \\leq b(t) + \\sum_{i=0}^{2} c^{(i)} \\int_0^t \\frac{P(s)b(s)}{(t-s)^{1-\\alpha^{(i)}}} \\dd s + \\sum_{i=0}^{2} c^{(i)} \\int_0^t P(s) b(s) \\dd s \\\\\n&~~~ + c^{(3)} \\int_0^t \\frac{P(s)z(s)}{(t-s)^{1-\\alpha^{(3)}}}\\dd s + c^{(3)} \\int_0^t P(s) z(s) \\dd s,~ \\textrm{a.e.}~ t \\in [0,T],\n\\end{align*}\nwhere $\\alpha[\\alpha^{(2)}] := 1 - \\frac{(1- \\alpha^{(2)} )p}{p-1} = \\frac{\\alpha^{(2)} p - 1}{p-1} \\in (0,1)$, $\\alpha^{(3)} := \\alpha + 3 \\Bigl ( \\alpha - \\frac{1}{p} \\Bigr ) > \\alpha$, $c^{(3)} := \\max \\{ \\bar{c}^{(3)}, \\hat{c}^{(3)} \\}$, $\\bar{c}^{(3)} := C^{(3)} c^{(2)} \\|P(\\cdot)\\|_{p} B (\\alpha[\\alpha], \\alpha[\\alpha^{(2)}])$, and $\\hat{c}^{(3)} := C^{(3)} c^{(2)} \\|P(\\cdot)\\|_{p} T^{\\frac{p-1}{p}}$.\n\nTherefore, by induction, we are able to get\n\\begin{align*}\nz(t) & \\leq b(t) + \\sum_{i=0}^{k-1} c^{(i)} \\int_0^t \\frac{P(s)b(s)}{(t-s)^{1-\\alpha^{(i)}}} \\dd s + \\sum_{i=0}^{k-1} c^{(i)} \\int_0^t P(s) b(s) \\dd s \\\\\n&~~~ + c^{(k)} \\int_0^t \\frac{P(s)z(s)}{(t-s)^{1-\\alpha^{(k)}}}\\dd s + c^{(k)} \\int_0^t P(s) z(s) \\dd s,~ \\textrm{a.e.}~ t \\in [0,T],\n\\end{align*}\nwhere $c^{(0)} = 1$, $\\alpha^{(0)} = \\alpha$, and for $i=1,\\ldots,k$,\n\\begin{align*}\n\\alpha[\\alpha] & := \\frac{\\alpha p - 1}{p-1} \\in (0,1),~ \\alpha^{(i)} := \t \\alpha + i \\Bigl (\\alpha - \\frac{1}{p} \\Bigr ) > \\alpha,~ c^{(i)} := \\max \\{ \\bar{c}^{(i)}, \\hat{c}^{(i)} \\},\t \\\\\n\\bar{c}^{(i)} &:= C^{(i)} c^{(i-1)} \\|P(\\cdot)\\|_{p} B (\\alpha[\\alpha], \\alpha[\\alpha^{(i-1)}]),~ \\hat{c}^{(i)} := C^{(i)} c^{(i-1)} \\|P(\\cdot)\\|_{p} T^{\\frac{p-1}{p}}.\n\\end{align*}\n\nWe observe that there is $k^\\prime \\geq 1$ such that $\\alpha^{(k)} \\geq 1$ for any $k \\geq k^\\prime$. Hence, with a fixed $k \\geq k^\\prime$, using the H\\\"older's inequality, it follows that\n\\begin{align*}\nz(t) & \\leq b(t) + \\sum_{i=0}^{k-1} c^{(i)} \\int_0^t \\frac{P(s)b(s)}{(t-s)^{1-\\alpha^{(i)}}} \\dd s + \\sum_{i=0}^{k-1} c^{(i)} \\int_0^t P(s) b(s) \\dd s \\\\\n&~~~ + c^{(k)} T^{\\alpha^{(k)}-1} \\int_0^t P(s)z(s) \\dd s + c^{(k)} T^{\\alpha^{(k)}-1} \\int_0^t P(s) z(s) \\dd s,~ \\textrm{a.e.}~ t \\in [0,T].\n\\end{align*}\nNotice that the integrals above do not have the singularity. Hence, there is a constant $C$ such that $\\frac{c^{(i)}}{(t-s)^{1-\\alpha^{(i)}}} \\leq \n\\frac{C}{(t-s)^{1-\\alpha}}$ for all $i$ with $0 \\leq i \\leq k$ and $0 \\leq s < t \\leq T$. \nThis implies that\n\\begin{align*}\t\nz(t) \n& \\leq b(t) + C \\int_0^t \\frac{P(s) b(s)}{(t-s)^{1-\\alpha}} \\dd s + C \\int_0^t P(s) b(s) \\dd s\\\\\n&~~~ + c^{(k)} T^{\\alpha^{(k)}-1} \\int_0^t P(s)z(s) \\dd s + c^{(k)} T^{\\alpha^{(k)}-1} \\int_0^t P(s) z(s) \\dd s,~ \\textrm{a.e.}~ t \\in [0,T]. \n\\end{align*}\nThen we apply the standard Gronwall's inequality (see \\cite[page 14]{Walter_book}) to obtain the desired result. This completes the proof of the lemma.\n\\end{proof}\n\n\n\\section{Well-posedness and Estimates of Volterra Integral Equations}\\label{Appendix_B}\n\nWe prove Lemma \\ref{Lemma_2_1} in a more general setting when the initial condition of (\\ref{eq_1}) is also dependent on the outer time variable. 
Consider the following Volterra integral equation:\n\\begin{align}\n\\label{eq_b_1}\nx(t) = x_0(t) + \\int_0^t \\frac{f(t,s,x(s),u(s))}{(t-s)^{1-\\alpha}} \\dd s + \\int_0^t g(t,s,x(s),u(s)) \\dd s,~ \\textrm{a.e.}~ t \\in [0,T].\n\\end{align}\nLet $x(\\cdot;x_0,u) := x(\\cdot)$ be the solution of (\\ref{eq_b_1}) under $(x_0(\\cdot),u(\\cdot)) \\in L^p([0,T];\\mathbb{R}^n) \\times \\mathcal{U}^p[0,T]$, where we recall\n\\begin{align*}\n\\mathcal{U}^p[0,T] = \\Bigl \\{u:[0,T] \\rightarrow U~|~ \\textrm{$u$ is measurable in $t \\in [0,T]$} ~\\&~ \\rho(u(\\cdot),u_0) \\in L^p([0,T];\\mathbb{R}_+) \\Bigr \\}\n\\end{align*}\nHere, $(U,\\rho)$ is a separable metric space, where $U \\subset \\mathbb{R}^d$ and $\\rho$ is the metric induced by the standard Euclidean norm $|\\cdot|_{\\mathbb{R}^d}$\n\n\\begin{assumption}\\label{Assumption_B_1}\nFor $p \\geq 1$ and $\\alpha \\in (0,1)$, there are nonnegative $K_0(\\cdot) \\in L^{ (\\frac{p}{1 + \\alpha p} \\vee 1)+}([0,T];\\mathbb{R})$ and $K(\\cdot) \\in L^{ (\\frac{1}{\\alpha} \\vee \\frac{p}{p-1})+}([0,T];\\mathbb{R})$, where $L^{p+}([0,T];\\mathbb{R}^n) := \\cup_{r > p} L^{r}([0,T];\\mathbb{R}^n)$ for $1 \\leq p < \\infty$ and $t \\vee s := \\max \\{t,s\\}$ for $t,s \\in [0,T]$, such that\n\\begin{align*}\n\\begin{cases}\n\t|f(t,s,x,u) - f(t,s,x^\\prime,u^\\prime)| + |g(t,s,x,u) - g(t,s,x^\\prime,u^\\prime)| \\leq K(s) (|x-x^\\prime| + \\rho(u,u^\\prime)) , \\\\\n\t~~~~~~~~~~ \\forall (t,s) \\in \\Delta,~ x,x^\\prime \\in \\mathbb{R}^n,~ u,u^\\prime \\in U, \\\\\n\t|f(t,s,0,u)| + |g(t,s,0,u)| \\leq K_0(s),~ \\forall (t,s) \\in \\Delta,~ u \\in U.\n\\end{cases}\n\\end{align*}\n\\end{assumption}\n\n\\begin{lemma}\\label{Lemma_B_1}\nLet Assumption \\ref{Assumption_B_1} hold. Assume that $p \\geq 1$ and $\\alpha \\in (0,1)$. Then for any $(x_0(\\cdot),u(\\cdot)) \\in L^p([0,T];\\mathbb{R}^n) \\times \\mathcal{U}^p[0,T]$, (\\ref{eq_b_1}) admits a unique solution in $L^p([0,T];\\mathbb{R}^n)$. In addition, there is a constant $C \\geq 0$ such that (\\ref{eq_b_1}) holds the following estimate:\n\\begin{align}\n\\label{eq_b_1_1_1_2}\n\\Bigl \\| x(\\cdot;x_0,u) \\Bigr \\|_{p} \\leq C \\Bigl (1 + \\|x_0(\\cdot)\\|_{p} + \\|\\rho(u(\\cdot),u_0)\\|_{L^p([0,T];\\mathbb{R}_+)} \\Bigr ).\t\n\\end{align}\nFurthermore, for any $x_0(\\cdot),x_0^\\prime(\\cdot) \\in L^p([0,T];\\mathbb{R}^n)$ and $u(\\cdot),u^\\prime(\\cdot) \\in \\mathcal{U}^p[0,T]$, there is a constant $C \\geq 0$ such that\n\\begin{align}\n\\label{eq_b_1_1_1_3}\n& \\|x(\\cdot;x_0,u) - \tx(\\cdot;x_0^\\prime,u^\\prime) \\|_{p} \\leq C \\|x_0(\\cdot) - x_0^\\prime(\\cdot) \\|_{p} \\\\\n&~~~~~ + C\\Biggl [ \\int_0^T \\Bigl ( \\int_0^t \\frac{|f(t,s,x(s;x_0,u),u(s)) - f(t,s,x(s;x_0,u),u^\\prime(s))|}{(t-s)^{1-\\alpha}} \\dd s \\Bigr )^p \\dd t \\Biggr]^{\\frac{1}{p}} \\nonumber \\\\\n&~~~~~ + C\\Biggl [ \\int_0^T \\Bigl ( \\int_0^t | g(t,s,x(s;x_0,u),u(s)) - g(t,s,x(s;x_0,u),u^\\prime(s)) | \\dd s \\Bigr)^p \\dd t \\Biggr ]^{\\frac{1}{p}}. \\nonumber\n\\end{align}\n\\end{lemma}\n\n\\begin{remark}\\label{Remark_B_1}\n\\begin{enumerate}[(i)]\n\\item By Assumption \\ref{Assumption_B_1}, we have\n\\begin{align*}\n|f(t,s,x,u)| + |g(t,s,x,u)| & \\leq K_0(s) + K(s)(|x| + \\rho(u,u_0)),~ \\forall (t,s) \\in \\Delta,~ (x,u) \\in \\mathbb{R}^n \\times U.\n\\end{align*}\n\\item Unlike Assumption \\ref{Assumption_2_1}, we do not assume $p > \\frac{1}{\\alpha}$ in Assumption \\ref{Assumption_B_1}. 
In addition, the conditions of $K_0$ and $K$ in Assumption \\ref{Assumption_B_1} are weaker than those in Assumption \\ref{Assumption_2_1} when $p > \\frac{1}{\\alpha}$. Indeed, with $p > \\frac{1}{\\alpha}$, we can show that $K_0(\\cdot) \\in L^{\\frac{1}{\\alpha} + } ([0,T];\\mathbb{R}) \\subset L^{ (\\frac{p}{1 + \\alpha p} \\vee 1)+}([0,T];\\mathbb{R})$ and $K(\\cdot) \\in L^{\\frac{p}{\\alpha p - 1} + } ([0,T];\\mathbb{R}) \\subset L^{ (\\frac{1}{\\alpha} \\vee \\frac{p}{p-1})+}([0,T];\\mathbb{R})$. This means that Lemma \\ref{Lemma_2_1} can be shown under the weaker assumption than Assumption \\ref{Assumption_2_1}\n\\end{enumerate}\n\\end{remark}\n\n\n\\begin{proof}[Proof of Lemma \\ref{Lemma_B_1}]\nThe main idea of the proof is the extension of \\cite[Theorem 3.1]{Lin_Yong_SICON_2020}, where unlike \\cite{Lin_Yong_SICON_2020} we have to consider the cross coupling characteristics between the singular and nonsingular kernels in (\\ref{eq_b_1}). Furthermore, our proof provides a more detailed statement, which can be viewed as a refinement of \\cite{Lin_Yong_SICON_2020}. \n\nWe first use the contraction mapping argument to show the existence and uniqueness of the solution to (\\ref{eq_b_1}). For $\\tau \\in [0,T]$, where $\\tau$ will be determined below, let us define\n\\begin{align*}\t\n\\mathcal{F}[x(\\cdot)](t) := x_0(t) + \\int_0^t \\frac{f(t,s,x(s),u(s))}{(t-s)^{1-\\alpha}} \\dd s + \\int_0^t g(t,s,x(s),u(s)) \\dd s,~ \\textrm{a.e.}~ t \\in [0,\\tau]. \n\\end{align*}\nFor $q,p \\geq 1$ and $r \\in [1,\\frac{1}{1-\\alpha})$, set $r= 1+\\beta$, where $\\beta \\geq 0$ (equivalently, $\\beta \\in [0,\\frac{\\alpha}{1-\\alpha})$) and $\\frac{1}{p} + 1 = \\frac{1}{q} + \\frac{1}{1+\\beta}$. By Lemma \\ref{Lemma_A_2} and Remark \\ref{Remark_B_1}, it follows that\n\\begin{align}\n\\label{eq_b_2}\n& \\|\\mathcal{F}[x(\\cdot)] (\\cdot) \\|_{L^p([0,\\tau];\\mathbb{R}^n)} \\\\ \n& \\leq \\|x_0(\\cdot)\\|_{p} + \\Bigl ( \\Bigl ( \\frac{\\tau^{1-(1+\\beta)(1-\\alpha)}}{1-(1+\\beta)(1-\\alpha)} \\Bigr )^{\\frac{1}{1+\\beta}} + \\tau^{\\frac{1}{1+\\beta}} \\Bigr ) \\Bigl \\|K_0(\\cdot) + K(\\cdot)(|x(\\cdot)| + \\rho(u(\\cdot),u_0) ) \\Bigr \\|_{L^q([0,\\tau];\\mathbb{R})}. \\nonumber\n\\end{align}\nBelow, we consider the three different cases.\n\n\\subsubsection*{Case I: $p > \\frac{1}{1-\\alpha}$}\nNote that $\\frac{1}{p} < 1 - \\alpha$. Moreover, $\\frac{1}{\\alpha} > \\frac{p}{p-1}$, $\\frac{p}{1+\\alpha p} > 1$, and $ 1+\\beta < \\frac{1}{1-\\alpha} = 1+ \\frac{\\alpha}{1-\\alpha}$ (equivalently, $\\beta < \\frac{\\alpha}{1-\\alpha}$). In this case, we have $K_0(\\cdot) \\in L^{\\frac{p}{1+\\alpha p}+}([0,T];\\mathbb{R})$ and $K(\\cdot) \\in L^{\\frac{1}{\\alpha}+}([0,T];\\mathbb{R})$. Observe that $\\frac{1}{q} = \\frac{1}{p} + 1 - \\frac{1}{1+\\beta} < \t\\frac{1}{p} + 1 - \\frac{1}{1 + \\frac{\\alpha}{1-\\alpha}} = \\frac{1}{p} + \\alpha < 1$ and $\\frac{1}{q} - \\frac{1}{p} = \\frac{p-q}{pq} = 1 - \\frac{1}{1+\\beta} < 1 - \\frac{1}{1+\\frac{\\alpha}{1-\\alpha}} = \\alpha < 1$, \nwhich implies $q \\searrow \\frac{p}{1+\\alpha p} < 1$ and $\\frac{pq}{p-q} \\searrow \\frac{1}{\\alpha}$ as $\\beta \\nearrow \\frac{\\alpha}{1-\\alpha}$. Hence, since $K_0(\\cdot) \\in L^{\\frac{p}{1+\\alpha p}+}([0,T];\\mathbb{R})$ and $K(\\cdot) \\in L^{\\frac{1}{\\alpha}+}([0,T];\\mathbb{R})$, we may choose $\\beta$ close enough to $\\frac{\\alpha}{1-\\alpha}$ so that $K_0 (\\cdot)\\in L^{q}([0,T];\\mathbb{R})$ and $K (\\cdot) \\in L^{\\frac{pq}{p-q}}([0,T];\\mathbb{R})$. 
Therefore, as $\\frac{p-q}{p} + \\frac{q}{p} = 1$, it follows that\n\\begin{align*}\n& \\|K_0(\\cdot) + K(\\cdot)(|x(\\cdot)| + u(\\cdot))\\|_{L^q([0,\\tau];\\mathbb{R})} \\\\\n& \\leq \\|K_0(\\cdot)\\|_{L^q([0,T];\\mathbb{R})} + \\|K(\\cdot)\\|_{L^{\\frac{pq}{p-q}}([0,T];\\mathbb{R})} (\\| x(\\cdot)\\|_{L^p([0,T];\\mathbb{R}^n)} + \\|\\rho(u(\\cdot),u_0) \\|_{L^p([0,T];\\mathbb{R})}).\n\\end{align*}\nThis, together with (\\ref{eq_b_2}), implies\n\\begin{align}\n\\label{eq_b_3}\n\\bigl \\|\\mathcal{F}[x(\\cdot)] (\\cdot) \\bigr \\|_{L^p([0,\\tau];\\mathbb{R}^n)} & \\leq \\|x_0(\\cdot)\\|_{p} + \\Bigl ( \\Bigl ( \\frac{\\tau^{1-(1+\\beta)(1-\\alpha)}}{1-(1+\\beta)(1-\\alpha)} \\Bigr )^{\\frac{1}{1+\\beta}} + \\tau^{\\frac{1}{1+\\beta}} \\Bigr ) \\Bigl [ \\|K_0(\\cdot)\\|_{L^q([0,T];\\mathbb{R})}\t \\\\\n&~~~ + \\|K(\\cdot)\\|_{L^{\\frac{pq}{p-q}}([0,T];\\mathbb{R})} \\Bigl ( \\| x (\\cdot) \\|_{L^p([0,\\tau];\\mathbb{R}^n)} + \\|\\rho(u(\\cdot),u_0) \\|_{L^p([0,\\tau];\\mathbb{R})} \\Bigr ) \\Bigr ]. \\nonumber\n\\end{align}\nThis shows $\\mathcal{F}[x(\\cdot)]: L^p([0,\\tau];\\mathbb{R}^n) \\rightarrow L^p([0,\\tau];\\mathbb{R}^n)$ for $\\tau \\in [0,T]$. For $x(\\cdot),x^\\prime (\\cdot) \\in L^p([0,\\tau];\\mathbb{R}^n)$, by Lemma \\ref{Lemma_A_2} and Assumption \\ref{Assumption_2_1}, and using the same technique as (\\ref{eq_b_2}) and (\\ref{eq_b_3}), it follows that\n\\begin{align*}\n& \\Bigl \\|( \\mathcal{F}[x(\\cdot)] - \\mathcal{F}[x^\\prime(\\cdot)])(\\cdot) \\Bigr \\|_{L^p([0,\\tau];\\mathbb{R}^n)} \\\\\n& \\leq \\Bigl ( \\Bigl ( \\frac{\\tau^{1-(1+\\beta)(1-\\alpha)}}{1-(1+\\beta)(1-\\alpha)} \\Bigr )^{\\frac{1}{1+\\beta}} + \\tau^{\\frac{1}{1+\\beta}} \\Bigr ) \\|K(\\cdot)\\|_{L^{\\frac{pq}{p-q}}([0,T];\\mathbb{R})} \\|x(\\cdot) - x^\\prime(\\cdot) \\|_{L^p([0,\\tau];\\mathbb{R}^n)}.\n\\end{align*}\nTake $\\tau \\in (0,T]$, independent of $x_0$, such that $\\Bigl ( \\Bigl ( \\frac{\\tau^{1-(1+\\beta)(1-\\alpha)}}{1-(1+\\beta)(1-\\alpha)} \\Bigr )^{\\frac{1}{1+\\beta}} + \\tau^{\\frac{1}{1+\\beta}} \\Bigr ) \\|K(\\cdot)\\|_{L^{\\frac{pq}{p-q}}([0,T];\\mathbb{R})} < 1$. Then the mapping $\\mathcal{F}[x(\\cdot)]: L^p([0,\\tau];\\mathbb{R}^n) \\rightarrow L^p([0,\\tau];\\mathbb{R}^n)$ is contraction. 
Hence, in view of the contraction mapping theorem, (\\ref{eq_b_1}) admits a unique solution on $[0,\\tau]$ in $L^p([0,\\tau];\\mathbb{R}^n)$.\n\nFor $[\\tau,2\\tau]$, consider,\n\\begin{align*}\t\n\\mathcal{F}[y(\\cdot)](t) & := x_0(t) + \\int_0^{\\tau} \\frac{f(t,s,x(s),u(s))}{(t-s)^{1-\\alpha}} \\dd s + \\int_0^{\\tau} g(t,s,x(s),u(s)) \\dd s \\\\\n&~~~ + \\int_{\\tau}^{t} \\frac{f(t,s,y(s),u(s))}{(t-s)^{1-\\alpha}} \\dd s + \\int_{\\tau}^{t} g(t,s,y(s),u(s)) \\dd s,~ \\textrm{a.e.}~ t \\in [\\tau,2\\tau].\n\\end{align*}\nNote that by Lemma \\ref{Lemma_A_2} and (\\ref{eq_b_3}), we have\n\\begin{align*}\n& \\bigl \\|\\mathcal{F}[y(\\cdot)] (\\cdot) \\bigr \\|_{L^p([\\tau,2\\tau];\\mathbb{R}^n)} \\\\\n& \\leq \\|x_0(\\cdot) \\|_{p} + C \\Bigl [ \\|K_0 (\\cdot) \\|_{L^q([0,T];\\mathbb{R})}\t + \\|K (\\cdot) \\|_{L^{\\frac{pq}{p-q}}([0,T];\\mathbb{R})} \\Bigl ( \\| x(\\cdot) \\|_{L^p([0,\\tau];\\mathbb{R}^n)} \\\\\n&~~~ + \\|\\rho(u(\\cdot),u_0) \\|_{L^p([0, \\tau];\\mathbb{R})} \\Bigr ) + \\|K (\\cdot) \\|_{L^{\\frac{pq}{p-q}}([0,T];\\mathbb{R})} \\Bigl ( \\| y(\\cdot) \\|_{L^p([\\tau, 2 \\tau];\\mathbb{R}^n)} + \\|\\rho(u(\\cdot),u_0) \\|_{L^p([\\tau, 2 \\tau];\\mathbb{R})} \\Bigr ) \\Bigr ],\n\\end{align*}\nwhich shows that $\\mathcal{F}[y(\\cdot)]: L^p([\\tau,2\\tau];\\mathbb{R}^n) \\rightarrow L^p([\\tau,2\\tau];\\mathbb{R}^n)$. Moreover, by a similar argument, it follows that for any $y(\\cdot),y^\\prime (\\cdot) \\in L^p([\\tau,2\\tau];\\mathbb{R}^n)$,\n\\begin{align*}\n& \\bigl \\| (\\mathcal{F}[y(\\cdot)] - \\mathcal{F}[y^\\prime(\\cdot)] )(\\cdot)\\bigr \\|_{L^p([\\tau,2\\tau];\\mathbb{R}^n)} \\\\\n& \\leq \\Bigl ( \\Bigl ( \\frac{\\tau^{1-(1+\\beta)(1-\\alpha)}}{1-(1+\\beta)(1-\\alpha)} \\Bigr )^{\\frac{1}{1+\\beta}} + \\tau^{\\frac{1}{1+\\beta}} \\Bigr ) \\|K(\\cdot)\\|_{L^{\\frac{pq}{p-q}}([0,T];\\mathbb{R})} \\|y(\\cdot) - y^\\prime(\\cdot) \\|_{L^p([\\tau, 2 \\tau];\\mathbb{R}^n)}.\t\n\\end{align*}\nAs before, we have $\\Bigl ( \\Bigl ( \\frac{\\tau^{1-(1+\\beta)(1-\\alpha)}}{1-(1+\\beta)(1-\\alpha)} \\Bigr )^{\\frac{1}{1+\\beta}} + \\tau^{\\frac{1}{1+\\beta}} \\Bigr ) \\|K(\\cdot)\\|_{L^{\\frac{pq}{p-q}}([0,T];\\mathbb{R})} < 1$. Hence, (\\ref{eq_b_1}) admits a unique solution on $[\\tau,2 \\tau]$ in $L^p([\\tau,2 \\tau];\\mathbb{R}^n)$. By induction, we are able to prove the existence and uniqueness of the solution for (\\ref{eq_b_1}) on $[0, \\tau], [\\tau,2 \\tau], \\ldots, \n[\\lfloor \\frac{T}{\\tau} \\rfloor \\tau ,T]$. This shows the existence and uniqueness of the solution for (\\ref{eq_b_1}) on $[0,T]$ in $L^p([0,T];\\mathbb{R}^n)$. \n\nWe now prove the estimates in (\\ref{eq_b_1_1_1_2}) and (\\ref{eq_b_1_1_1_3}). Let $z(\\cdot) := |x(\\cdot)\t- x^\\prime(\\cdot) |_{\\mathbb{R}^n}$, where $x(\\cdot) := x(\\cdot;x_0,u)$ and $x^\\prime(\\cdot) := x(\\cdot;x_0^\\prime,u^\\prime)$. Then\n\\begin{align*}\nz(t) \n& \\leq b(t) + \\int_0^t \\frac{ K(s) z(s) }{(t-s)^{1-\\alpha}} \\dd s + \\int_0^t K(s) z(s) \\dd s,~ \\textrm{a.e.}~ t \\in [0,T],\n\\end{align*}\nwhere\n\\begin{align*}\nb(t) & := |x_0(t) - x_0^\\prime(t)|_{\\mathbb{R}^n} + \\int_0^t \\frac{|f(t,s,x(s),u^\\prime(s))-f(t,s,x(s),u(s))|}{(t-s)^{1-\\alpha}} \\dd s \\\\\n&~~~ + \\int_0^t |g(t,s,x(s),u^\\prime(s))-g(t,s,x(s),u(s))| \\dd s,~ \\textrm{a.e.}~ t \\in [0,T].\n\\end{align*}\nNote that $z(\\cdot),b(\\cdot) \\in L^p([0,T];\\mathbb{R})$ and $K(\\cdot) \\in L^{\\frac{1}{\\alpha}+}([0,T];\\mathbb{R})$. We replace $p$ by $q$ in Lemma \\ref{Lemma_A_5}. 
Recall that the $L^p$-spaces of this paper are induced by the finite measure on $([0,T],\\mathcal{B}([0,T]))$. Then as $\\frac{1}{q} < \\frac{1}{p} + \\alpha < 1$, we may increase $q$ enough to get $K (\\cdot) \\in L^{q}([0,T];\\mathbb{R}) \\subset L^{\\frac{1}{\\alpha}+}([0,T];\\mathbb{R})$ and $z(\\cdot),b(\\cdot) \\in L^p([0,T];\\mathbb{R}) \\subset L^{\\frac{q}{q-1}}([0,T];\\mathbb{R})$. By Lemma \\ref{Lemma_A_5}, it follows that\n\\begin{align*}\t\nz(t) = |x(t)\t- x^\\prime(t) |_{\\mathbb{R}^n} \\leq b(t) + C \\int_0^t \\frac{K(s) b(s)}{(t-s)^{1-\\alpha}} \\dd s + C \\int_0^t K(s) b(s) \\dd s,~ \\textrm{a.e.}~ t \\in [0,T].\n\\end{align*}\nHence, similar to (\\ref{eq_b_2}), \n\\begin{align*}\n& \\|x(\\cdot;x_0,u)\t- x(\\cdot;x_0^\\prime,u^\\prime)\t\\|_{p} \n \\leq C \\Bigl ( \\|b(\\cdot)\\|_{p} + \\|K(\\cdot)\\|_{L^{\\frac{pq}{p-q}}([0,T];\\mathbb{R})}\\|b(\\cdot)\\|_{p} \\Bigr ) \\leq C \\|b(\\cdot)\\|_{p}.\n\\end{align*}\nThis shows the estimate in (\\ref{eq_b_1_1_1_3}). The estimate in (\\ref{eq_b_1_1_1_2}) can be shown in a similar way.\n\n\n\\subsubsection*{Case II: $1 < p \\leq \\frac{1}{1-\\alpha}$} \n\nThis case implies $1-\\alpha \\leq \\frac{1}{p} < 1$, $\\frac{1}{\\alpha} \\leq \\frac{p}{p-1}$, and $\\frac{p}{1+\\alpha p} \\leq 1$. Moreover, for $\\beta \\in (0,p-1)$ (equivalent to $1 + \\beta \\in (1,p)$), we have $1-\\alpha \\leq \\frac{1}{p} < \\frac{1}{1+\\beta}$. Hence, $K_0 (\\cdot) \\in L^{1+}([0,T];\\mathbb{R})$ and $K (\\cdot) \\in L^{\\frac{p}{p-1}+}([0,T];\\mathbb{R})$. Then since $\\frac{1}{p} + 1 = \\frac{1}{q} + \\frac{1}{1+\\beta}$, we observe that $\\frac{1}{p}\t< \\frac{1}{q} = \\frac{1}{p} + 1 - \\frac{1}{1+\\beta} \\nearrow 1$ and $\\frac{p-q}{pq} = \\frac{1}{q} - \\frac{1}{p} = 1 - \\frac{1}{1+\\beta} \\nearrow \\frac{p-1}{p}$ as $\\beta \\nearrow p-1$.\n\nAs $K_0(\\cdot) \\in L^{1+}([0,T];\\mathbb{R})$ and $K(\\cdot) \\in L^{\\frac{p}{p-1}+}([0,T];\\mathbb{R})$, we are able to choose $\\beta$ close enough to $p-1$ to get $q > 1$ and $\\frac{p-q}{pq} < \\frac{p-1}{p}$, which implies $K_0(\\cdot) \\in L^{q}([0,T];\\mathbb{R})$ and $K(\\cdot) \\in L^{\\frac{pq}{p-q}}([0,T];\\mathbb{R})$. We replace $p$ by $q$ in Lemma \\ref{Lemma_A_5}. Since $p \\in [1,\\frac{1}{1-\\alpha}]$, choose $q$ to get $q > \\frac{p}{p-1} \\geq \\frac{1}{\\alpha}$, which implies $K(\\cdot) \\in L^{q}([0,T];\\mathbb{R}) \\subset L^{\\frac{p}{p-1}+}([0,T];\\mathbb{R})$ and $z(\\cdot),b(\\cdot) \\in L^p([0,T];\\mathbb{R}) \\subset L^{\\frac{q}{q-1}}([0,T];\\mathbb{R})$. Then the technique for Case I can be applied to prove Case II.\n \n\n\\subsubsection*{Case III: $p=1$} \n\nWe have $K_0(\\cdot) \\in L^{1+}([0,T];\\mathbb{R})$ and $K(\\cdot) \\in L^{\\infty}([0,T];\\mathbb{R})$. Choose $\\beta = 0$ and use (\\ref{eq_b_2}) to get \n\\begin{align*}\n\\bigl \\|\\mathcal{F}[x(\\cdot)](\\cdot) \\bigr \\|_{L^1([0,\\tau];\\mathbb{R}^n)} & \\leq \\|x_0(\\cdot)\\|_{p} + \\frac{\\tau^{\\alpha}}{\\alpha} \\Bigl [ \\|K_0(\\cdot)\\|_{L^1([0,T];\\mathbb{R})}\t \\\\\n&~~~~~~~ + \\|K(\\cdot)\\|_{L^{\\infty}([0,T];\\mathbb{R})}( \\| x(\\cdot) \\|_{L^1([0,\\tau];\\mathbb{R}^n)} + \\|\\rho(u(\\cdot),u_0)\\|_{L^1([0,\\tau];\\mathbb{R})}) \\Bigr ].\n\\end{align*}\nThen the rest of the proof is similar to that for Case I. This completes the proof of the theorem.\n\\end{proof}\n\n\n\\begin{remark}\\label{Remark_B_2}\n\tThe integrability of $K_0$ and $K$ in Assumption \\ref{Assumption_B_1} is crucial in the proof of Lemma \\ref{Lemma_B_1}. 
Comparing between Cases I and II, we see that $(x_0(\\cdot),u(\\cdot)) \\in L^p([0,T];\\mathbb{R}^n) \\times \\mathcal{U}^p[0,T]$ has weaker integrability in Case II. Hence, we need stronger integrability of $K$ from $K(\\cdot) \\in L^{\\frac{1}{\\alpha}+}([0,T];\\mathbb{R})$ to $K(\\cdot) \\in L^{\\frac{p}{p-1}+}([0,T];\\mathbb{R})$, and $K_0$ from $K_0(\\cdot) \\in L^{\\frac{p}{1+\\alpha p} + } ([0,T];\\mathbb{R})$ to $K_0(\\cdot) \\in L^{1 + } ([0,T];\\mathbb{R})$ (note that in Case II, $\\frac{1}{\\alpha} \\leq \\frac{p}{p-1}$ and $\\frac{p}{1+\\alpha p} \\leq 1$). Notice that for Case III, by the weakest integrability of $(x_0(\\cdot),u(\\cdot)) \\in L^p([0,T];\\mathbb{R}^n) \\times \\mathcal{U}^p[0,T]$, we need the essential boundedness of $K$, i.e., the strongest integrability condition for $K$. Finally, as the proof relies on the contraction mapping argument, the solution of (\\ref{eq_b_1}) can be constructed via the standard Picard iteration algorithm, which is applied to Examples \\ref{Example_1} and \\ref{Example_2} in Section \\ref{Section_5}.\n\\end{remark}\n\n\nWe state the continuity of the solution under the stronger assumption (see Remark \\ref{Remark_B_1}).\n\\begin{lemma}\\label{Lemma_B_2}\nLet Assumption \\ref{Assumption_2_1} hold and $x_0(\\cdot) \\in C([0,T];\\mathbb{R}^n)$. Then (\\ref{eq_b_1}) admits a unique solution in $C([0,T];\\mathbb{R}^n)$. \n\\end{lemma}\n\n\\begin{proof}\nBased on Lemma \\ref{Lemma_B_1} and Remark \\ref{Remark_B_1}, (\\ref{eq_b_1}) admits a unique solution in $L^p([0,T];\\mathbb{R}^n)$. Notice that under Assumption \\ref{Assumption_2_1}, $K_0(\\cdot) \\in L^{\\frac{1}{\\alpha} + } ([0,T];\\mathbb{R})$ and $K(\\cdot) \\in L^{\\frac{p}{\\alpha p - 1} + } ([0,T];\\mathbb{R})$. Let $q = \\frac{sp}{s+p}$, where $s > \\frac{p}{\\alpha p - 1}$. We observe $K(\\cdot) \\in L^s([0,T];\\mathbb{R}^n)$. Since $s = \\frac{pq}{p-q}$, we have $\\frac{1}{q} = \\frac{s+p}{sp} = \\frac{1}{p} + \\frac{1}{s} > \\frac{1}{p}$ and $\\frac{pq}{p-q} > \\frac{p}{\\alpha p - 1}\t\\Rightarrow \\frac{p-q}{pq} < \\frac{\\alpha p - 1}{p} \\Rightarrow \\alpha > \\frac{1}{q}$. This implies $ \\frac{1}{p} < \\frac{1}{q} < \\alpha$, i.e., $p > q > \\frac{1}{\\alpha}$. \n\nNote that $K_0(\\cdot) \\in L^q([0,T];\\mathbb{R})$ and \n\\begin{align}\n\\label{eq_b_6}\n|f(t,s,x(s),u(s))| + |g(t,s,x(s),u(s))| & \\leq K_0(s) + K(s)(|x(s)|_{\\mathbb{R}^n} + |\\rho(u(s),u_0)|) =: \\overline{\\psi}(s).\n\\end{align}\nConsequently, using the H\\\"older's inequality, we get\n\\begin{align*}\n& \\Bigl ( \t\\int_0^T |K(s)|^q(|x(s)|_{\\mathbb{R}^n} + \\rho(u(s),u_0))^q \\dd s \\Bigr)^{\\frac{1}{q}} \\\\\n& \\leq \\|K(\\cdot)\\|_{L^{ \\frac{pq}{p-q} }([0,T];\\mathbb{R})} ( \\|x (\\cdot) \\|_{p} + \\| \\rho(u(\\cdot),u_0)\\|_{L^p([0,T];\\mathbb{R})} ) < \\infty.\n\\end{align*}\nThis implies $\\overline{\\psi}$ defined in (\\ref{eq_b_6}) holds $\\overline{\\psi}(\\cdot) \\in L^q([0,T];\\mathbb{R})$. As $q > \\frac{1}{\\alpha}$, the continuity of (\\ref{eq_b_1}) follows from Lemmas \\ref{Lemma_A_3} and \\ref{Lemma_A_4}, together with Assumption \\ref{Assumption_2_1}. This completes the proof.\n\\end{proof}\n\n\nWe study linear Volterra integral equations having singular and nonsingular kernels. 
For $\\alpha \\in (0,1)$ and $x_0(\\cdot) \\in L^p([0,T];\\mathbb{R}^n)$, consider\n\\begin{align}\n\\label{eq_b_34543534234234}\nx(t) = x_0(t) + \\int_{0}^t \\frac{F(t,s)x(s)}{(t-s)^{1-\\alpha}} \\dd s + \\int_0^t H(t,s) x(s) \\dd s,~ \\textrm{a.e.}~ t \\in [0,T],\n\\end{align}\nwhere $F,G:\\Delta \\rightarrow \\mathbb{R}^{n \\times n}$ satisfy $F(\\cdot,\\cdot), H(\\cdot,\\cdot) \\in L^{\\infty}(\\Delta;\\mathbb{R}^{n \\times n})$.\n\n\n\\begin{lemma}\\label{Lemma_B_5_2323232}\nThe solution of (\\ref{eq_b_34543534234234}) can be written as\n\\begin{align}\n\\label{eq_b_34543534234234_1_1_1}\nx(t) = x_0(t) + \\int_0^t \\Psi(t,s) x_0(s) \\dd s,~ \\textrm{a.e.}~ t \\in [0,T],\n\\end{align}\nwhere $\\Psi$ is the state transition equation defined by\n\\begin{align*}\n\\Psi(t,s) = \\frac{F(t,s)}{(t-s)^{1-\\alpha}} + \\int_s^t \\frac{F(t,\\tau) \\Psi(\\tau,s) }{(t-\\tau)^{1-\\alpha}} \\dd \\tau + H(t,s) + \\int_s^t H(t,\\tau) \\Psi(\\tau,s) \\dd \\tau,~ \\textrm{a.e.}~ t \\in (s,T].\n\\end{align*}\n\\end{lemma}\n\\begin{proof}\nThe well-posedness of (\\ref{eq_b_34543534234234}) follows from Lemma \\ref{Lemma_B_1}. From (\\ref{eq_b_34543534234234_1_1_1}), it follows that\n\\begin{align*}\n& \\int_{0}^t \\frac{F(t,s)}{(t-s)^{1-\\alpha}} x(s) \\dd s + \\int_0^t H(t,s) x(s) \\dd s \\\\\n& = \\int_{0}^t \\frac{F(t,s)}{(t-s)^{1-\\alpha}} \\Bigl [x_0(s) + \\int_0^s \\Psi(s,\\tau) x_0(\\tau) \\dd \\tau \\Bigr ] \\dd s + \\int_0^t H(t,s) \\Bigl [x_0(s) + \\int_0^s \\Psi(s,\\tau) x_0(\\tau) \\dd \\tau \\Bigr ] \\dd s \\\\\n& = \\int_0^t \\Bigl [ \\frac{F(t,s) }{(t-s)^{1-\\alpha}} + \\int_s^t \\frac{F(t,\\tau) \\Psi(\\tau,s) }{(t-\\tau)^{1-\\alpha} } \\dd \\tau + H(t,s) + \\int_s^t H(t,\\tau) \\Psi(\\tau,s) \\dd \\tau \\Bigr ] x_0(s) \\dd s \\\\\n& = \\int_0^t \\Psi(t,s) x_0(s) \\dd s = x(t) - x_0(t),\n\\end{align*}\nwhich completes the proof.\n\\end{proof}\n\n\nConsider the following $\\mathbb{R}^n$-valued backward Volterra integral equation having singular and nonsingular kernels, which covers the adjoint equation in Theorem \\ref{Theorem_3_1}:\n\\begin{align}\n\\label{eq_b_7}\nz(t) & = z_0(t) + \\int_t^T \\frac{F(r,t)^\\top}{(r-t)^{1-\\alpha}} z(r) \\dd r + \\int_t^T H(r,t)^\\top z(r) \\dd r \\\\\n&~~~ + \\sum_{i=1}^m C_i(t)^\\top \\frac{\\dd \\theta_i(t)}{\\dd t} + D(t)^\\top,~ \\textrm{a.e.}~ t \\in [0,T]. \\nonumber\n\\end{align} \n\n\\begin{assumption}\\label{Assumption_6}\n\\begin{enumerate}[(i)]\n\t\t\\item $z_0(\\cdot) \\in C([0,T];\\mathbb{R}^n)$, $\\theta (\\cdot) = (\\theta_1(\\cdot),\\ldots,\\theta_m(\\cdot)) \\in \\textsc{NBV}([0,T];\\mathbb{R}^m)$, and $\\dd \\theta_i \\ll \\dd t$, i.e., $\\dd \\theta_i$ is absolutely continuous with respect to $\\dd t$ for $i=1,\\ldots,m$;\n\t\t\\item $F,H:\\Delta \\rightarrow \\mathbb{R}^{n \\times n}$ and $C_i,D:[0,T] \\rightarrow \\mathbb{R}^n$, $i=1,\\ldots,m$, satisfy $F(\\cdot,\\cdot),H(\\cdot,\\cdot) \\in L^{\\infty}(\\Delta;\\mathbb{R}^{n \\times n})$ and $C_i(\\cdot),D(\\cdot) \\in L^{\\infty}([0,T];\\mathbb{R}^{n})$, $i=1,\\ldots,m$. \n\\end{enumerate}\n\\end{assumption}\n\n\n\\begin{lemma}\\label{Lemma_B_5}\nLet Assumption \\ref{Assumption_6} hold. Assume that $p \\geq 1$ and $\\alpha \\in (0,1)$. 
Then for any $z_0(\\cdot) \\in C([0,T];\\mathbb{R}^n)$, (\\ref{eq_b_7}) admits a unique solution in $L^p([0,T];\\mathbb{R}^n)$.\n\\end{lemma}\n\n\\begin{proof}\nNote that by Remark \\ref{Remark_3_3} and the Radon-Nikodym theorem, there is a unique $\\Theta_i(\\cdot) \\in L^1([0,T];\\mathbb{R})$, $i=1,\\ldots,m$, such that $\\frac{\\dd \\theta_i(t)}{\\dd t} = \\Theta_i(t)$ for $i=1,\\ldots,m$. Hence, we may replace $\\frac{\\dd \\theta_i(t)}{\\dd t}$ by $\\Theta_i(t)$ in (\\ref{eq_b_7}). Let us define\n\\begin{align*}\n\\mathcal{G}[z(\\cdot)](t) & := z_0(t) + \\int_t^T \\frac{F(r,t)^\\top}{(r-t)^{1-\\alpha}} z(r) \\dd r + \\int_t^T H(r,t)^\\top z(r) \\dd r \\\\\n&~~~ + \\sum_{i=1}^m C_i(t)^\\top \\Theta_i(t) + D(t)^\\top,~ \\textrm{a.e.}~ t \\in [0,T].\n\\end{align*}\nClearly, $\\mathcal{G}[z(\\cdot)]: L^p([0,T];\\mathbb{R}^n) \\rightarrow L^p([0,T];\\mathbb{R}^n)$. In addition, for $\\tau > 0$, we apply a similar technique of Lemma \\ref{Lemma_B_1} (with Lemma \\ref{Lemma_A_2} for $p=q$ and $r=1$) to show that\n\\begin{align*}\n\\|(\\mathcal{G}[z(\\cdot)]- \\mathcal{G}[z^\\prime(\\cdot)])(\\cdot)\\|_{L^p([T-\\tau,T];\\mathbb{R}^n)} \\leq \t\\Bigl ( \\frac{\\tau K}{\\alpha} + \\tau K \\Bigr ) \\|z(\\cdot) - z^\\prime(\\cdot)\\|_{L^p([T-\\tau,T];\\mathbb{R}^n)}.\n\\end{align*}\nWe may choose $\\tau$, independent of $z_0$, such that $\\Bigl ( \\frac{\\tau K}{\\alpha} + \\tau K \\Bigr ) < 1$. Then by the contraction mapping theorem, (\\ref{eq_b_7}) admits a unique solution on $[T-\\tau,T]$ in $L^p([T-\\tau,T];\\mathbb{R}^n)$. By induction, we are able to show that (\\ref{eq_b_7}) admits a unique solution on $[0,T]$ in $L^p([0,T];\\mathbb{R}^n)$. \nWe complete the proof.\n\\end{proof}\n\n\n\\section{Auxiliary Lemmas}\\label{Appendix_D}\n\n\\begin{lemma}[Corollary 3.9 and page 144 of \\cite{Li_Yong_book} or Lemma 3 of \\cite{Bourdin_arxiv_2016}]\\label{Lemma_D_1}\nAssume that $(X,\\|\\cdot\\|_{X})$ is a Banach space. For $\\delta \\in (0,1)$, define $\\mathcal{E}_{\\delta} := \\{ E \\in [0,T]~|~ |E| = \\delta T\\}$, where $|E|$ denotes the Lebesgue measure of $E$. Suppose that $\\phi:\\Delta \\rightarrow X$ satisfies the properties such that (i) $\\|\\phi(t,s)\\|_{X} \\leq \\overline{\\phi}(s)$ for all $(t,s) \\in \\Delta$, where $\\overline{\\phi}(\\cdot) \\in L^1([0,T];\\mathbb{R})$, and (ii) for almost all $s \\in [0,T]$, $\\phi(\\cdot,s):[s,T] \\rightarrow X$ is continuous. Then there is an $E_{\\delta} \\in \\mathcal{E}_{\\delta}$ such that\n\\begin{align*}\n\\sup_{t \\in [0,T]} \\Biggl | \\int_0^t \\Bigl ( \\frac{1}{\\delta} \\mathds{1}_{E_{\\delta}}(s) - 1 \\Bigr ) \\phi(t,s) \\dd s \\Biggr | \\leq \\delta.\n\\end{align*}\n\\end{lemma}\n\n\n\\begin{lemma}[Lemma 4.2 of \\cite{Lin_Yong_SICON_2020}]\\label{Lemma_D_2}\nLet $\\mathcal{E}_{\\delta}$ be the set in Lemma \\ref{Lemma_D_1}. Assume $\\psi:\\Delta \\rightarrow \\mathbb{R}$ holds the following property:\n\\begin{align}\n\\label{eq_d_1}\n\\begin{cases}\n\t|\\psi(0,s)| \\leq \\overline{\\psi}(s),~ s \\in [0,T], \\\\\n\t|\\psi(t,s) - \\psi(t^\\prime,s)| \\leq \\omega(|t-t^\\prime|) \\overline{\\psi}(s),~ (t,s),(t^\\prime,s) \\in \\Delta,\n\\end{cases}\t\n\\end{align}\nwhere $\\overline{\\psi} \\in L^p([0,T];\\mathbb{R})$ with $p > \\frac{1}{\\alpha}$ and $\\omega:[0,\\infty) \\rightarrow [0,\\infty)$ is some modulus of continuity. 
Then there is an $E_{\\delta} \\in \\mathcal{E}_{\\delta}$ such that\n\\begin{align*}\n\t\\sup_{t \\in [0,T]} \\Biggl | \\int_0^t \\Bigl (\\frac{1}{\\delta} \\mathds{1}_{E}(s) - 1 \\Bigr ) \\frac{\\psi(t,s)}{(t-s)^{1-\\alpha}} \\dd s \\Biggr | \\leq \\delta.\n\\end{align*}\n\\end{lemma}\n\n\n\\end{appendices}\n\n\n\\bibliographystyle{IEEEtranS}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nWith the popularity of GPS-equipped smart devices and wireless\nmobile networks \\cite{deng2013maximizing,kazemi2012geocrowd}, nowadays people can easily identify and participate\nin some location-based tasks that are close to their current\npositions, such as taking photos\/videos, repairing houses, and\/or\npreparing for parties at some spatial locations. Recently, a new\nframework, namely \\textit{spatial crowdsourcing}\n\\cite{kazemi2012geocrowd}, for employing workers to conduct spatial\ntasks, has emerged in both academia (e.g., the database community\n\\cite{chen2014gmission}) and industry (e.g., TaskRabbit\n\\cite{taskrabbit}). A typical spatial crowdsourcing platform (e.g.,\ngMission \\cite{chen2014gmission} and MediaQ \\cite{kim2014mediaq})\nassigns a number of moving \\textit{workers} to do \\textit{spatial\n\ttasks} nearby, which requires workers to physically move to some\nspecified locations and accomplish these tasks.\n\n\n\nNote that, not all spatial tasks are as simple as taking a photo or\nvideo clip (e.g., street view of Google Maps\n\\cite{GoogleMapStreetView}), monitoring traffic conditions (e.g.,\nWaze \\cite{waze}), or reporting local hot spots (e.g., Foursquare\n\\cite{foursquare}), which can be easily completed by providing\nanswers via camera, sensing devices in smart phones, or naked eyes,\nrespectively. In contrast, some spatial tasks can be rather complex,\nsuch as repairing a house, preparing for a party, and performing\nentertainment shows for a ceremony, which may consist of several\nsteps\/phases\/aspects, and require demanding professional skills from\nworkers. 
In other words, these complex tasks cannot be simply\naccomplished by normal workers, but require the skilled workers with\nspecific expertise (e.g., fixing roofs or setting up the stage).\n\n\n\n\\begin{figure}[t]\\centering\n\t\\scalebox{0.8}[0.8]{\\includegraphics{example_locations_C.eps}}\\vspace{-1ex}\n\t\\caption{\\small An Example of Repairing a House in the Multi-Skill Spatial Crowdsourcing System.}\\vspace{-5ex}\n\t\\label{fig:bbq_example}\n\\end{figure}\n\n\\begin{table}\\vspace{-3ex}\n\t\\parbox[b]{.44\\linewidth}{\n\t\t\\caption{\\small Worker\/Task Skills}\\label{tab:marker_skill}\\vspace{-2ex}\n\t\t{\\small\\scriptsize\n\t\t\t\\begin{tabular}{l|l}\n\t\t\t\t{\\bf worker\/task} & {\\bf skill key set}\\\\\n\t\t\t\t\\hline \\hline\n\t\t\t\t$w_1$, $w_4$ $w_8$& \\{$a_1$, $a_4$, $a_6$\\}\\\\\n\t\t\t\t$w_2$& \\{$a_5$\\}\\\\\n\t\t\t\t$w_3$, $w_7$& \\{$a_2$, $a_3$\\}\\\\\n\t\t\t\t$w_5$, $w_6$& \\{$a_1$, $a_5$\\}\\\\\n\t\t\t\t\\hline\n\t\t\t\t$t_1$, $t_2$, $t_3$& \\{$a_1$ $\\sim$ $a_6$\\}\\\\\n\t\t\t\t\\hline\n\t\t\t\\end{tabular}\n\t\t}\n\t\t\\vspace{-1ex}\n\t}\n\t\\hfill\n\t\\parbox[b]{.52\\linewidth}{\n\t\t\\caption{\\small Descriptions of Skills}\\label{tab:skill_description}\\vspace{-2ex}\n\t\t{\\small\\scriptsize\n\t\t\t\\begin{tabular}{c|l}\n\t\t\t\t{\\bf skill key} & {\\bf \\quad skill description}\\\\\n\t\t\t\t\\hline \\hline\n\t\t\t\t$a_1$& painting walls\\\\\n\t\t\t\t$a_2$& repairing roofs\\\\\n\t\t\t\t$a_3$& repairing floors\\\\\n\t\t\t\t$a_4$& installing pipe systems\\\\\n\t\t\t\t$a_5$& installing electronic components\\\\\n\t\t\t\t$a_6$& cleaning\\\\\n\t\t\t\t\\hline\n\t\t\t\\end{tabular}\n\t\t}\n\t\t\\vspace{-1ex}\n\t}\n\t\\vspace{-4ex}\n\\end{table}\n\n\nInspired by the phenomenon of complex spatial tasks, in this paper,\nwe will consider an important problem in the\nspatial crowdsourcing system, namely \\textit{multi-skill spatial\n\tcrowdsourcing} (MS-SC), which assigns multi-skilled workers to those\ncomplex tasks, with the matching skill sets and high scores of the\nworker-and-task assignments.\n\nIn the sequel, we will illustrate the\nMS-SC problem by a motivation example of repairing a house.\n\n\n\\nop{\n\t\n\tThe spatial tasks in the spatial crowdsourcing system can be\n\tchecking the display shelves at neighborhood stores, taking a\n\tphoto\/video (e.g., street view of Google Maps\n\t\\cite{GoogleMapStreetView}), monitoring traffic conditions (e.g.,\n\tWaze \\cite{waze}), or reporting local hot spots (e.g., Foursquare\n\t\\cite{foursquare}). Since simple tasks are easy for workers to\n\taccomplish (i.e., workers only need to move to some locations, and\n\tprovide the answer via naked eyes, camera, or sensing devices in\n\tsmart phones), they do not require expert skills from workers. Thus,\n\tany worker can be assigned to do simple tasks.\n\t\n\t\n\tOn the other hand, however, spatial tasks can be as complex as\n\trepairing a house, decorating a room, performing entertainment shows\n\tfor a ceremony, or reporting an ad-hoc news, which consist of\n\tseveral steps\/phases\/aspects, and require demanding professional\n\tskills from workers. 
In this case, the spatial crowdsourcing\n\tplatform has to assign multi-skilled workers only to those complex\n\ttasks that exactly require their skills, which is more challenging\n\tthan the assignment with simple tasks.\n\t\n\t\n}\n\n\n\n\n\n\\vspace{0.5ex}\\noindent {\\bf Example 1 (Repairing a House).} {\\it\n\tConsider a scenario of the spatial crowdsourcing in Figure\n\t\\ref{fig:bbq_example}, where a user wants to repair a house he\/she\n\tjust bought, in order to have a good living environment for his\/her\n\tfamily. However, it is not an easy task to repair the house, which\n\trequires many challenging works (skills), such as repairing\n\troofs\/floors, replacing\/installing pipe systems and\n\telectronic components, painting walls, and finally cleaning rooms.\n\tThere are many skilled workers that can accomplish one or some of\n\tthese skill types. In this case, the user can post a spatial task\n\t$t_1$, as shown in Figure \\ref{fig:bbq_example}, in the spatial\n\tcrowdsourcing system, which specifies a set of required skills\n\t(given in Tables \\ref{tab:marker_skill} and \\ref{tab:skill_description}) for the\n\thouse-repairing task, a valid time period to repair, and the maximum\n\tbudget that he\/she would like to pay.\n\t\n\tIn Figure \\ref{fig:bbq_example}, around the spatial location of\n\ttask $t_1$, there are 8 workers, $w_1 \\sim w_8$, each of whom has a\n\tdifferent set of skills as given in Table \\ref{tab:marker_skill}.\n\tFor example, worker $w_1$ has the skill set $\\{\\text{painting walls},$\n\t$\\text{installing pipe systems},$ $\\text{cleaning}\\}$.\n\t\n\t\n\tTo accomplish the spatial task $t_1$ (i.e., repair the house), the\n\tspatial crowdsourcing platform needs to select a best subset of\n\tworkers $w_i$ ($1\\leq i\\leq 8$), such that the union of their skill\n\tsets can cover the required skill set of task $t_1$, and, moreover,\n\tworkers can travel to the location of $t_1$ with the maximum net\n\tpayment under the constraints of arrival times, workers' moving ranges, and budgets. For\n\texample, we can assign task $t_1$ with 3 workers $w_2$, $w_7$, and\n\t$w_8$, who are close to $t_1$, and whose skills can cover all the\n\trequired skills of $t_1$.\n\t\n}\n\n\n\n\n\n\\nop{\n\t\n\t\\vspace{0.5ex}\\noindent {\\bf Example 2 (Performing Ad Hoc News\n\t\tReporting).} {\\it A task requester wants to report a breaking news\n\t\tsuddenly occurring at a remote location. The news reporting needs\n\t\tdifferent skills such as taking photos, recording audio\/videos,\n\t\tinterviewing people, news background analysis, and writing the news.\n\t\tIn this situation, the task requester can post a spatial task of\n\t\tnews reporting with the specified location to the spatial\n\t\tcrowdsourcing system, which includes the information such as the\n\t\tbudget, a required skill set, and the valid period. 
The system can\n\t\tthen assign those workers, who are near the specified location and\n\t\thave the required news reporting skills, to the news reporting task.}\n\t\n}\n\n\n\n\n\nMotivated by the example above, in this paper, we will\nformalize the MS-SC problem, which aims to efficiently assign\nworkers to complex spatial tasks, under the task constraints of\nvalid time periods and maximum budgets, such that the required skill\nsets of tasks are fully covered by those assigned workers, and the\ntotal score of the assignment (defined as the total profit of\nworkers) is maximized.\n\n\nNote that, existing works on spatial crowdsourcing focused on\nassigning workers to tasks to maximize the total number of completed\ntasks \\cite{kazemi2012geocrowd}, the number of performed tasks for a\nworker with an optimal schedule \\cite{deng2013maximizing}, or the\nreliability-and-diversity score of assignments\n\\cite{cheng2014reliable}. However, they did not take into account\nmulti-skill covering of complex spatial tasks, time\/distance constraints, and the assignment\nscore with respect to task budgets and workers' salaries (excluding\nthe traveling cost). Thus, we cannot directly apply prior solutions to\nsolve our MS-SC problem.\n\n\nIn this paper, we first prove that our MS-SC problem in the spatial\ncrowdsourcing system is NP-hard, by reducing it from the \\textit{Set\n\tCover Problem} (SCP) \\cite{karp1972reducibility}. As a result, the\nMS-SC problem is not tractable, and thus very challenging to achieve\nthe optimal solution. Therefore, in this paper, we will tackle the\nMS-SC problem by proposing three effective approximation approaches,\ngreedy, $g$-\\textit{divide-and-conquer} ($g$-D\\&C), and\ncost-model-based adaptive algorithms, which can efficiently compute\nworker-and-task assignment pairs with the constraints\/goals of\nskills, time, distance, and budgets.\n\n\n\n\nSpecifically, we make the following contributions.\n\\begin{itemize}[leftmargin=*]\n\t\n\t\\item We formally define the \\textit{multi-skill spatial\n\t\tcrowdsourcing} (MS-SC) problem in Section \\ref{sec:problem_def}, under the constraints of multi-skill covering, time, distance, and budget for spatial workers\/tasks in the spatial crowdsourcing system.\n\t\n\t\\item We prove that the MS-SC problem is NP-hard, and thus\n\tintractable in Section \\ref{sec:reduction}.\n\t\n\t\\item We propose efficient approximation approaches, namely greedy,\n\t$g$-divide-and-conquer, and cost-model-based adaptive\n\talgorithms to tackle the MS-SC problem in Sections \\ref{sec:greedy}, \\ref{sec:g_D&C}, and \\ref{sec:adpative}, respectively.\n\t\n\t\\item We conduct extensive experiments on real and synthetic data sets,\n\tand show the efficiency and effectiveness of our MS-SC approaches in Section \\ref{sec:exper}.\n\\end{itemize}\n\n\nSection \\ref{sec:framework} introduces a general framework for our MS-SC problem in spatial crowdsourcing systems. Section \\ref{sec:related} reviews previous works on spatial\ncrowdsourcing. Finally, Section \\ref{sec:conclusion} concludes this\npaper.\n\n\n\\section{Problem Definition}\n\\label{sec:problem_def}\n\n\nIn this section, we present the formal definition of the multi-skill\nspatial crowdsourcing, in which we assign multi-skilled\nworkers with time-constrained complex spatial tasks.\n\n\n\\subsection{Multi-Skilled Workers}\n\nWe first define the multi-skilled workers in spatial crowdsourcing\napplications. Assume that $\\Psi=\\{a_1, a_2, ..., a_k\\}$ is a\nuniverse of $k$ abilities\/skills. 
Each worker has one or multiple\nskills in $\\Psi$, and can provide services for spatial tasks that\nrequire some skills in $\\Psi$.\n\n\n\n\n\\begin{definition} $($Multi-Skilled Workers$)$ Let\n\t$W_p$ $=\\{w_1,$ $w_2,$ $...,$ $w_n\\}$ be a set of $n$ multi-skilled\n\tworkers at timestamp $p$. Each worker $w_i$ ($1\\leq i\\leq n$) has a\n\tset, $X_i$ $(\\subseteq \\Psi)$, of skills, is located at position\n\t$l_i(p)$ at timestamp $p$, can move with velocity $v_i$, and has a maximum moving distance $d_i$.\n\t\\qquad $\\blacksquare$ \\label{definition:worker}\n\\end{definition}\n\nIn Definition \\ref{definition:worker}, the multi-skilled workers\n$w_i$ can move dynamically with speed $v_i$ in any direction, and at\neach timestamp $p$, they are located at spatial places $l_i(p)$, and\nprefer to move at most $d_i$ distance from $l_i(p)$. They can freely\njoin or leave the spatial crowdsourcing system. Moreover, each\nworker $w_i$ is associated with a set, $X_i$, of skills, such as\ntaking photos, cooking, and decorating rooms.\n\n\\subsection{Time-Constrained Complex Spatial Tasks}\n\nNext, we define complex spatial tasks in the spatial crowdsourcing\nsystem, which are constrained by deadlines of arriving at task\nlocations and budgets.\n\n\n\\begin{definition}\n\t$($Time-Constrained Complex Spatial Tasks$)$ Let $T_p=\\{t_1, t_2,\n\t..., t_m\\}$ be a set of time-constrained complex spatial tasks at\n\ttimestamp $p$. Each task $t_j$ ($1\\leq j\\leq m$) is located\n\tat a specific location $l_j$, and workers are expected to reach the\n\tlocation of task $t_j$ before the arrival deadline $e_j$. Moreover, to complete the\n\ttask $t_j$, a set, $Y_j$ $(\\subseteq \\Psi)$, of skills is\n\trequired for those assigned workers. Furthermore, each task $t_j$ is\n\tassociated with a budget, $B_j$, of salaries for workers. \\qquad\n\t$\\blacksquare$ \\label{definition:task}\n\\end{definition}\n\nAs given in Definition \\ref{definition:task}, usually, a task\nrequester creates a time-constrained spatial task $t_j$, which\nrequires workers physically moving to a specific location $l_j$, and\narriving at $l_j$ before the arrival deadline $e_j$. Meanwhile, the task\nrequester also specifies a budget, $B_j$, of salaries, that is, the\nmaximum allowance that he\/she is willing to pay for workers. 
This\nbudget, $B_j$, can be either the reward cash or bonus points in the\nspatial crowdsourcing system.\n\n\nMoreover, the spatial task $t_j$ is often complex, in the sense that\nit might require several distinct skills (in $Y_j$) to be conducted.\nFor example, a spatial task of repairing a house may require\nseveral skills, such as repairing floors, painting walls and cleaning.\n\n\n\n\\subsection{The Multi-Skill Spatial Crowdsourcing Problem}\n\nIn this subsection, we will formally define the\nmulti-skill spatial crowdsourcing (MS-SC) problem, which assigns spatial\ntasks to workers such that workers can cover the skills required by\ntasks and the assignment strategy can achieve high scores.\n\n\\vspace{0.5ex}\\noindent {\\bf Task Assignment Instance Set.} Before\nwe present the MS-SC problem, we first introduce the concept of task\nassignment instance set.\n\n\n\\begin{definition}\n\t$($Task Assignment Instance Set$)$ At timestamp $p$, given a worker\n\tset $W_p$ and a task set $T_p$, a \\textit{task assignment instance\n\t\tset}, denoted by $I_p$, is a set of worker-and-task assignment pairs\n\tin the form $\\langle w_i, t_j\\rangle$, where each worker $w_i \\in\n\tW_p$ is assigned to at most one spatial task $t_j\\in T_p$.\n\t\n\tMoreover, we denote $CT_p$ as the set of completed tasks $t_j$ that can be reached\n\tbefore the arrival deadlines $e_j$, and accomplished by those assigned workers in $I_p$.\\qquad\n\t$\\blacksquare$ \\label{definition:instance}\n\\end{definition}\n\nIntuitively, the task assignment instance set $I_p$ is one possible\n(valid) worker-and-task assignment between worker set $W_p$ and task\nset $T_p$. Each pair $\\langle w_i, t_j\\rangle$ is in $I_p$, if and\nonly if this assignment satisfies the constraints of task $t_j$,\nwith respect to distance (i.e., $d_i$), time (i.e., $e_j$), budget\n(i.e., $B_j$), and skills (i.e., $Y_j$).\n\n\nIn particular, for each pair $\\langle w_i, t_j\\rangle$, worker $w_i$\nmust arrive at location $l_j$ of the assigned task $t_j$ before its\narrival deadline $e_j$, and can support the skills required by task $t_j$,\nthat is, $X_i \\bigcap Y_j \\neq \\emptyset$. The distance between\n$l_i(p)$ and $l_j$ should be less than $d_i$. Moreover, for all\npairs in $I_p$ that contain task $t_j$, the required skills of task\n$t_j$ should be fully covered by skills of its assigned workers,\nthat is, $Y_j \\subseteq \\cup_{\\forall \\langle w_i, t_j\\rangle \\in\n\tI_p}X_i$.\n\n\nTo assign a worker $w_i$ to a task $t_j$, we need to pay him\/her\nsalary, $c_{ij}$, which is related to the traveling cost from the\nlocation, $l_i(p)$, of worker $w_i$ to that, $l_j$, of task $t_j$.\nThe traveling cost, $c_{ij}$, for vehicles can be calculated by the\nunit gas price per gallon times the number of gallons needed for the\ntraveling. For the public transportation, the cost $c_{ij}$ can be\ncomputed by the fees per mile times the traveling distance. For\nwalking, we can also provide the compensation fee for the worker\nwith the cost $c_{ij}$ proportional to his\/her traveling distance.\n\nWithout loss of generality, we assume that the cost, $c_{ij}$, is\nproportional to the traveling distance, $dist(l_i(p), l_j)$, between\n$l_i(p)$ and $l_j$, where $dist(x, y)$ is a distance function\nbetween locations $x$ and $y$. 
Note that, for simplicity, in this paper we use the Euclidean distance
as our distance function (i.e., $dist(x, y)$). Our proposed approaches
can be easily extended to other distance
functions (e.g., road-network distance) under the framework of the
spatial crowdsourcing system; we leave the study of
other distance metrics as future work.

\vspace{0.5ex}\noindent {\bf The MS-SC Problem.} In the sequel, we
give the definition of our \textit{multi-skill spatial
	crowdsourcing} (MS-SC) problem.

\begin{definition}
	(Multi-Skill Spatial Crowdsourcing Problem) Given a time interval $P$, the problem of
	\textit{multi-skill spatial crowdsourcing} (MS-SC) is to assign the
	available workers $w_i\in W_p$ to spatial tasks $t_j\in T_p$, and to obtain a
	task assignment instance set, $I_p$, at each timestamp $p \in P$, such that:
	
	\begin{enumerate}
		\item any worker $w_i\in W_p$ is assigned to at most one spatial
		task $t_j\in T_p$, such that his/her arrival time at location $l_j$
		is before the arrival deadline $e_j$, the moving distance is
		less than the worker's maximum moving distance $d_i$, and 
		all workers assigned to $t_j$ have skill sets fully covering $Y_j$;
		\item the total traveling cost of all the workers assigned to task $t_j$ does not exceed the budget of
		the task, that is, $\sum_{\forall \langle w_i, t_j\rangle \in
			I_p}c_{ij}$ $\leq B_j$; and
		\item the total score, $\sum_{p \in P}S_p$, of the task assignment
		instance sets $I_p$ within the time interval $P$ is maximized,
	\end{enumerate}
	
	where it holds that:
	
	{\small\vspace{-4ex}
		\begin{eqnarray}
			S_p &=& \sum_{t_j \in CT_p}B'_j, \quad\text{and} \label{eq:assignment_score}\\
			B'_j &=& B_j - \sum_{\langle w_i, t_j\rangle \in I_p}c_{ij}.\label{eq:flexible_budget}
		\end{eqnarray}
		\vspace{-2ex}
		\label{definition:MS_SC}}
\end{definition}

In Definition \ref{definition:MS_SC}, our MS-SC problem aims to
assign workers $w_i$ to tasks $t_j$ such that: (1) workers $w_i$ are
able to reach the locations, $l_j$, of tasks $t_j$ on time,
cover the required skill sets $Y_j$, and move no farther than $d_i$; (2) the total traveling cost
of all the workers assigned to a task should not exceed its budget $B_j$; and
(3) the total score, $\sum_{p \in P}S_p$, of the task-and-worker assignments within time interval $P$ should be maximized.

After the server-side assignment at a timestamp $p$, the assigned
workers change their status to unavailable and move to the
locations of their spatial tasks. These workers become
available again only after they finish (or reject) the assigned tasks.

\vspace{0.5ex}\noindent {\bf Discussions on the Score $S_p$.}
Eq.~(\ref{eq:assignment_score}) calculates the score, $S_p$, of a
task-and-worker assignment by summing up the \textit{flexible budgets},
$B_j'$ (given by Eq.~(\ref{eq:flexible_budget})), of all the
completed tasks $t_j\in CT_p$, where the \textit{flexible budget} of
task $t_j$ is the remaining budget of task $t_j$ after paying the
workers' traveling costs. Maximizing the score thus means maximizing the
number of accomplished tasks while minimizing the traveling costs of workers.
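As a concrete (hypothetical) illustration of Eqs.~(\ref{eq:assignment_score}) and (\ref{eq:flexible_budget}): suppose that at timestamp $p$ two tasks are completed, where task $t_1$ with budget $B_1 = 10$ is assigned two workers with traveling costs $c_{11} = 2$ and $c_{21} = 3$, and task $t_2$ with budget $B_2 = 8$ is assigned one worker with traveling cost $c_{32} = 5$. Then, the flexible budgets are $B'_1 = 10 - (2+3) = 5$ and $B'_2 = 8 - 5 = 3$, and the score of this assignment is $S_p = B'_1 + B'_2 = 8$.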
Intuitively, each task $t_j$ has a maximum budget $B_j$, which
consists of two parts: the traveling costs of the assigned workers
and the flexible budget. The former is related to the
total traveling distance of the workers, whereas the latter can be
freely and flexibly used to reward workers for their
contributions to the task. Here, the distribution of the flexible
budget among workers can follow existing incentive mechanisms in
crowdsourcing \cite{rula2014no, yang2012crowdsourcing}, which
give more rewards to workers who perform the task better.

Note that, in Eq.~(\ref{eq:assignment_score}), the score, $S_p$, of
the task assignment instance set $I_p$ only takes into account those
tasks that can be completed by the assigned workers (i.e., tasks in
set $CT_p$). Here, a task can be completed if the assigned workers
can reach the task location before the deadline and finish the task
with the required skills.

Since the spatial crowdsourcing system is quite dynamic, new
tasks/workers may arrive at subsequent timestamps. Thus, if we cannot find
enough suitable workers for a task at the current timestamp $p$,
the task can still be successfully assigned to workers
and completed at future timestamps. Meanwhile, the task requester
can also be informed by the spatial crowdsourcing system to increase
the budget (i.e., with a higher budget $B_j$, we can find more skilled
candidate workers that satisfy the budget constraint). Therefore, in
our definition of score $S_p$, we only consider those tasks in
$CT_p$ that can be completed by the assigned workers at timestamp
$p$, and maximize this score $S_p$.

\subsection{Hardness of the Multi-Skill Spatial Crowdsourcing Problem}
\label{sec:reduction}

With $m$ time-constrained complex spatial tasks and $n$
multi-skilled workers, in the worst case, there is an exponential
number of possible task-and-worker assignment strategies, which
leads to a high time complexity of $O((m + 1)^n)$. In this subsection,
we prove that our MS-SC problem is NP-hard, by reducing a well-known
NP-hard problem, the \textit{set cover problem} (SCP)
\cite{vazirani2013approximation}, to the MS-SC problem.

\begin{lemma} (Hardness of the MS-SC Problem)
	The problem of the Multi-Skill Spatial Crowdsourcing (MS-SC) is
	NP-hard. \label{lemma:np}\vspace{-1ex}
\end{lemma}
\begin{proof}
	Please refer to Appendix A.
\end{proof}

Since the MS-SC problem involves multiple spatial tasks whose skill
sets should be covered, we cannot directly use existing
approximation algorithms for SCP (or its variants) to solve the
MS-SC problem.
What is more, we also need to find an assignment
strategy such that workers and tasks match with each other (in terms
of traveling time/cost and budget constraints), which is even more
challenging.

Thus, due to the NP-hardness of our MS-SC problem, in subsequent
sections, we present a general framework for MS-SC processing and design three heuristic algorithms, namely the greedy,
$g$-divide-and-conquer, and cost-model-based adaptive approaches, to
efficiently retrieve MS-SC answers.

\begin{table}
	\begin{center}\vspace{-5ex}
		\caption{\small Symbols and Descriptions.} \label{table0}
		{\small\scriptsize
			\begin{tabular}{l|l}
				{\bf Symbol} & {\bf \qquad \qquad \qquad\qquad\qquad Description} \\ \hline \hline
				$T_p$ & a set of $m$ time-constrained spatial tasks $t_j$ at timestamp $p$\\
				$W_p$ & a set of $n$ dynamically moving workers $w_i$ at timestamp $p$\\
				$e_j$ & the deadline of arriving at the location of task $t_j$\\
				$l_i(p)$ & the position of worker $w_i$ at timestamp $p$\\
				$l_j$ & the position of task $t_j$\\
				$X_i$ & a set of skills that worker $w_i$ has\\
				$Y_j$ & a set of the required skills for task $t_j$ \\
				$d_i$ & the maximum moving distance of worker $w_i$ \\
				$B_j$ & the maximum budget of task $t_j$ \\
				$I_p$ & the task assignment instance set at timestamp $p$ \\
				$CT_p$ & a set of tasks that are assigned workers at timestamp $p$ and\\
				& \qquad can be completed by these assigned workers\\
				$C_i$ & the unit price of the traveling cost of worker $w_i$\\
				$c_{ij}$ & the traveling cost from the location of worker $w_i$ to that of task $t_j$\\
				$S_p$ & the score of the task assignment instance set $I_p$\\
				$\Delta S_p$ & the score increase when changing the pair assignment\\ \hline
				\hline
			\end{tabular}
		}
	\end{center}\vspace{-6ex}
\end{table}

Table \ref{table0} summarizes the commonly used symbols.

\section{Framework for Solving MS-SC Problems}
\label{sec:framework}

In this section, we present a general framework, 
namely {\sf MS-SC\_Framework}, in Figure \ref{alg:framework} for solving the
MS-SC problem, which greedily assigns workers to spatial tasks over
multiple rounds. In each round, at timestamp $p$, we first retrieve
a set, $T_p$, of all the available spatial tasks, and a set, $W_p$,
of available workers (lines 2-3). Here, the available task set $T_p$
contains existing spatial tasks that have not been assigned to
workers in the last round, as well as the ones that newly arrive at the
system after the last round. Moreover, set $W_p$ includes those
workers who have accomplished (or rejected) the previously assigned
tasks, and are thus available to receive new tasks in the current
round.

In our spatial crowdsourcing system, we organize both sets $T_p$ and
$W_p$ in a cost-model-based grid index. Because of space
limitations, details about the index construction can be found in
Appendix E. Due to
dynamic changes of sets $T_p$ and $W_p$, we also update the grid
index accordingly (line 4). Next, we utilize the grid index to
efficiently retrieve a set, $S$, of valid worker-and-task candidate
pairs (line 5).
That is, we obtain those pairs of workers and tasks,\n$\\langle w_i, t_j\\rangle$, such that workers $w_i$ can reach the\nlocations of tasks $t_j$ and satisfy the constraints of skill\nmatching, time, and budgets for tasks $t_j$. With valid pairs in set\n$S$, we can apply our proposed algorithms, that is, \\textit{greedy},\n\\textit{$g$-divide-and-conquer}, or \\textit{adaptive\n\tcost-model-based} approach, over set $S$, and obtain a good\nworker-and-task assignment strategy in an assignment instance set\n$I_p$, which is a subset of $S$ (line 6).\n\n\nFinally, for each pair $\\langle w_i, t_j \\rangle$ in the selected\nworker-and-task assignment set $I_p$, we will notify worker $w_i$ to\ndo task $t_j$ (lines 7-8).\n\n\n\\begin{figure}[ht]\n\t\\begin{center}\\vspace{-2ex}\n\t\t\\begin{tabular}{l}\n\t\t\t\\parbox{3.1in}{\n\t\t\t\t\\begin{scriptsize}\n\t\t\t\t\t\\begin{tabbing}\n\t\t\t\t\t\t12\\=12\\=12\\=12\\=12\\=12\\=12\\=12\\=12\\=12\\=12\\=\\kill\n\t\t\t\t\t\t{\\bf Procedure {\\sf MS-SC\\_Framework}} \\{ \\\\\n\t\t\t\t\t\t\\> {\\bf Input:} a time interval $P$\\\\\n\t\t\t\t\t\t\\> {\\bf Output:} a worker-and-task assignment strategy within the time interval $P$\\\\\n\t\t\t\t\t\t\\> (1) \\> \\> for each timestamp $p$ in $P$\\\\\n\t\t\t\t\t\t\\> (2) \\> \\> \\> retrieve all the available spatial tasks to $T_p$\\\\\n\t\t\t\t\t\t\\> (3) \\> \\> \\> retrieve all the available workers to $W_p$\\\\\n\t\t\t\t\t\t\\> (4) \\> \\> \\> update the grid index for current $T_p$ and $W_p$\\\\\n\t\t\t\t\t\t\\> (5) \\> \\> \\> obtain a set, $S$, of valid worker-and-task pairs from the index\\\\\n\t\t\t\t\t\t\\> (6) \\> \\> \\> use our \\textit{greedy}, \\textit{$g$-divide-and-conquer} or \\textit{adaptive cost-model-based} approach\\\\\n\t\t\t\t\t\t\\> \\> \\> \\> \\> to obtain a good assignment instance set, $I_p \\subseteq S$\\\\\n\t\t\t\t\t\t\\> (7) \\> \\> \\> for each pair $\\langle w_i, t_j \\rangle$ in $I_p$\\\\\n\t\t\t\t\t\t\\> (8) \\> \\> \\> \\> inform worker $w_i$ to conduct task $t_j$\\\\\n\t\t\t\t\t\t\\}\n\t\t\t\t\t\\end{tabbing}\n\t\t\t\t\\end{scriptsize}\n\t\t\t}\n\t\t\\end{tabular}\n\t\\end{center}\\vspace{-3ex}\n\t\\caption{\\small Framework for Solving the MS-SC Problem.}\n\t\\label{alg:framework}\\vspace{-5ex}\n\\end{figure}\n\n\\nop{\n\t\n\tThe available tasks include the ones arriving between the last and\n\tthe current rounds, and those that are unfinished after last round.\n\tFor any task $t_j$ of the latter type, we need to maintain the set\n\tof workers, $W_j$, who have accepted the assigned task $t_j$, and\n\tupdate the information of the task $t_j$. When we update the\n\tinformation of unfinished task $t_j$ after last round, we reduce its\n\tbudget $B_j$ to $B'_j=B_j - \\sum_{w_i \\in W_j} c_{ij}$ and change\n\tits required skill set $Y_j$ to $Y'_j = Y_j \/ \\cup_{w_i \\in\n\t\tW_j}X_i$.\n\t\n\t\n\t\n\t\n\tMoreover, we allow workers to accomplish multiple tasks within a\n\tperiod of time. However,\n\teach worker can only be available to accept more tasks after the\n\tcurrent assigned task has been rejected or finished.\n\tWhen we assign more than one task to\n\ta worker in a round, if he\/she uses too\n\tmuch time to finish one task,\n\the\/she may miss the deadlines of the other tasks,\n\twhich could harm the overall performance of the platform.\n\t\n\tIn order to facilitate the processing of the MS-SC problem,\n\twe present an efficient cost-model-based indexing\n\tmechanism, which can maintain workers and tasks and help the\n\tretrieval of MS-SC answers. 
Different from the grid-index in
	our previous work \cite{cheng2014reliable}, the grid-index in
	this work utilizes bitmap synopses to time- and space-efficiently
	organize/manipulate the sets of skills for workers and tasks.
	For the details of our grid-index, please refer to
	Appendix E of our technical report \cite{arxivReport}.
	
}

\section{The Greedy Approach}
\label{sec:greedy}

In this section, we propose a greedy algorithm, which greedily
selects one worker-and-task assignment, $\langle w_i, t_j \rangle$,
at a time, chosen to maximize the increase of the assignment score
(i.e., $\sum_{\forall p\in P} S_p$ as given in Definition
\ref{definition:MS_SC}). This greedy algorithm can be applied in
line 6 of the framework, {\sf MS-SC\_Framework}, in Fig.
\ref{alg:framework}.

\subsection{The Score Increase}
\label{subsec:score_increase}

Before we present the greedy algorithm, we first define the
increase, $\Delta S_p$, of score $S_p$ (given in
Eq.~(\ref{eq:assignment_score})), in the case where we assign a
newly available worker $w_i$ to task $t_j$. Specifically, from
Eqs.~(\ref{eq:assignment_score}) and (\ref{eq:flexible_budget}), we
define the score increase after assigning worker $w_i$ to task $t_j$
as follows:\vspace{-2ex}

{\small
	\begin{eqnarray}
		\Delta S_p = \Delta B_j' = \frac{|X_i \cap (Y_j -
			\widetilde{Y_j})|}{|Y_j|} \cdot B_j -
		c_{ij},\label{eq:score_increase}
	\end{eqnarray}\vspace{-2ex}}

\noindent where $\widetilde{Y_j}$ is the set of skills that have
already been covered by the assigned workers (excluding the new worker
$w_i$) for task $t_j$.

In Eq.~(\ref{eq:score_increase}), $\frac{|X_i \cap (Y_j -
	\widetilde{Y_j})|}{|Y_j|}$ is the fraction of skills of task $t_j$
that have not been covered by the (existing) assigned workers, but can
be covered by the new worker $w_i$. Intuitively, the first term in
Eq.~(\ref{eq:score_increase}) is the portion of the maximum budget
pre-allocated to the skills newly covered by worker $w_i$,
whereas the second term, $c_{ij}$, is the traveling cost from the
location of $w_i$ to that of $t_j$. Thus, the score increase,
$\Delta S_p$, in Eq.~(\ref{eq:score_increase}) measures the
change of score (i.e., flexible budget) $S_p$ due to the assignment
of worker $w_i$ to task $t_j$.
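Reusing the illustrative Python sketch introduced with the problem definitions (again, an illustration under our own naming assumptions, with \texttt{covered} standing for $\widetilde{Y_j}$), Eq.~(\ref{eq:score_increase}) can be computed as follows:

{\small
\begin{verbatim}
def score_increase(w, t, covered):
    """Score increase Delta S_p: the fraction of t's required skills
    that are still uncovered but newly covered by w, times the budget
    B_j, minus w's traveling cost c_ij.  `covered` plays the role of
    the already-covered skill set Y~_j."""
    newly_covered = w.skills & (t.skills - covered)
    return len(newly_covered) / len(t.skills) * t.budget - travel_cost(w, t)
\end{verbatim}}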
\subsection{Pruning Strategies}
\label{sub:pruning}

The score increase can be used as a measure to evaluate and decide
which worker-and-task assignment pair should be added to the task
assignment instance set $I_p$. That is, each time our greedy
algorithm aims to choose the worker-and-task assignment pair in $S$
with the highest score increase, which is then added to $I_p$ (i.e.,
line 6 of {\sf MS-SC\_Framework} in Fig. \ref{alg:framework}).
However, it is not efficient to enumerate all valid worker-and-task
assignment pairs in $S$ and compute their score increases: in the
worst case, the time complexity is as high as $O(m\cdot n)$,
where $m$ is the number of tasks and $n$ is the number of workers.
Therefore, in this subsection, we present three effective pruning
methods (two for pruning workers and one for pruning tasks) to quickly filter out false alarms of worker-and-task pairs
in set $S$.

\noindent\textbf{The Worker-Pruning Strategy.} When assigning available workers
to spatial tasks, we can rule out those valid worker-and-task pairs
in $S$ that contain either \textit{dominated} or \textit{high-wage
	workers}, as given in Lemmas \ref{lemma:dominate_worker} and
\ref{lemma:expensive_worker}, respectively, below.

We say that a worker $w_a$ is \textit{dominated by} a worker $w_b$
w.r.t. task $t_j$ (denoted as $w_a \succ_{t_j} w_b$), if it holds
that $X_a \subseteq X_b$ and $c_{aj}\geq c_{bj}$, where $X_a$ and
$X_b$ are the skill sets of workers $w_a$ and $w_b$, and $c_{aj}$ and
$c_{bj}$ are the traveling costs from the locations of workers $w_a$ and
$w_b$ to task $t_j$, respectively.

\begin{lemma} (Pruning Dominated Workers) Given two worker-and-task
	pairs $\langle w_a, t_j\rangle$ and $\langle w_b, t_j\rangle$ in
	valid pair set $S$, if it holds that $w_a \succ_{t_j} w_b$, then we
	can safely prune the worker-and-task pair $\langle w_a,
	t_j\rangle$. \label{lemma:dominate_worker}
\end{lemma}

\begin{proof}
	Please refer to Appendix B.
\end{proof}

Lemma \ref{lemma:dominate_worker} indicates that if there exists a
better worker $w_b$ than worker $w_a$ to do task $t_j$ (in terms of
both the skill set and the traveling cost), then we can safely
filter out the assignment of worker $w_a$ to task $t_j$.

\begin{lemma} (Pruning High-Wage Workers) Let $\widetilde{c_{\cdot j}}$
	be the total traveling cost of those workers that have already been
	assigned to task $t_j$. If the traveling cost $c_{ij}$ of assigning a worker
	$w_i$ to task $t_j$ is greater than the remaining budget
	$(B_j-\widetilde{c_{\cdot j}})$ of task $t_j$, then we will not
	assign worker $w_i$ to task $t_j$. \label{lemma:expensive_worker}
\end{lemma}
\begin{proof}
	Please refer to Appendix C.
\end{proof}

Intuitively, Lemma \ref{lemma:expensive_worker} shows that, if the
wage of a worker $w_i$ (including the traveling cost $c_{ij}$)
exceeds the remaining budget of task $t_j$ (i.e., $c_{ij} >
B_j-\widetilde{c_{\cdot j}}$), then we can safely prune the
worker-and-task assignment pair $\langle w_i, t_j\rangle$.

\noindent\textbf{The Task-Pruning Strategy.} Let $W(t_j)$ be a set of valid
workers that can be assigned to task $t_j$, and $\widetilde{W(t_j)}$
be a set of valid workers that have already been assigned to task
$t_j$. We give the lemma for pruning those tasks with insufficient
budgets below.

\begin{lemma} (Pruning Tasks with Insufficient Budgets) If an unassigned worker $w_i\in
	(W(t_j)-\widetilde{W(t_j)})$ has the highest value of $\frac{\Delta
		S_p}{|X_i \cap (Y_j - \widetilde{Y_j})|}$, and the traveling cost,
	$c_{ij}$, of worker $w_i$ exceeds the remaining budget
	$(B_j-\widetilde{c_{\cdot j}})$ of task $t_j$, then we can safely
	prune task $t_j$. \label{lemma:prune_insufficient_task}
	
\end{lemma}
\begin{proof}
	Please refer to Appendix D.
\end{proof}

Intuitively, Lemma \ref{lemma:prune_insufficient_task} provides the
conditions for pruning tasks. That is, if any unassigned worker in
$(W(t_j)-\widetilde{W(t_j)})$ either cannot fully cover
the required skill set $Y_j$ or would exceed the remaining budget of
task $t_j$, then we can directly prune all assignment pairs that
contain task $t_j$.

To summarize, by utilizing Lemmas \ref{lemma:dominate_worker},
\ref{lemma:expensive_worker}, and
\ref{lemma:prune_insufficient_task}, we do not have to check all
worker-and-task assignments iteratively in our greedy algorithm.
Instead, we can apply the three proposed pruning methods to
effectively filter out false alarms of assignment pairs, which
significantly reduces the number of score-increase computations, as
sketched below.
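The three pruning rules can be expressed compactly as follows (a minimal Python sketch building on the earlier ones; \texttt{spent} stands for $\widetilde{c_{\cdot j}}$, \texttt{covered} for $\widetilde{Y_j}$, and the task-pruning test is a simplified reading of Lemma \ref{lemma:prune_insufficient_task}):

{\small
\begin{verbatim}
def dominated(w_a, w_b, t):
    """Lemma on dominated workers: w_a is dominated by w_b w.r.t. t
    if X_a is a subset of X_b and c_aj >= c_bj."""
    return (w_a.skills <= w_b.skills
            and travel_cost(w_a, t) >= travel_cost(w_b, t))

def high_wage(w, t, spent):
    """Lemma on high-wage workers: prune <w, t> if c_ij exceeds the
    remaining budget B_j minus the cost already spent on t."""
    return travel_cost(w, t) > t.budget - spent

def task_prunable(unassigned, t, covered, spent):
    """Task-pruning lemma (sketch): take the unassigned candidate with
    the highest Delta S_p per newly covered skill; if even this worker
    does not fit into the remaining budget, task t can be pruned."""
    def ratio(w):
        k = len(w.skills & (t.skills - covered))
        return score_increase(w, t, covered) / k if k > 0 else float("-inf")
    best = max(unassigned, key=ratio, default=None)
    return best is None or travel_cost(best, t) > t.budget - spent
\end{verbatim}}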
\subsection{The Greedy Algorithm}
\label{sub:greedy_algorithm}

According to the definition of the score increase $\Delta S_p$ (as
given in Section \ref{subsec:score_increase}), we propose a
greedy algorithm, which iteratively assigns a worker to a spatial
task such that each assignment achieves the highest score increase.

\begin{figure}[ht]\vspace{-3ex}
	\begin{center}
		\begin{tabular}{l}
			\parbox{3.1in}{
				\begin{scriptsize}
					\begin{tabbing}
						12\=12\=12\=12\=12\=12\=12\=12\=12\=12\=12\=\kill
						{\bf Procedure {\sf MS-SC\_Greedy}} \{ \\
						\> {\bf Input:} $n$ workers in $W_p$ and $m$ time-constrained spatial tasks in $T_p$\\
						\> {\bf Output:} a worker-and-task assignment instance set, $I_p$\\
						\> (1) \> \> $I_p = \emptyset$;\\
						\> (2) \> \> compute all valid worker-and-task pairs $\langle w_i, t_j \rangle$ from $W_p$ and $T_p$\\
						\> (3) \> \> while $W_p \neq \emptyset$ and $T_p \neq \emptyset$\\
						\> (4) \> \> \> $S_{cand} = \emptyset;$\\
						\> (5) \> \> \> for each task $t_j\in T_p$\\
						\> (6) \> \> \> \> for each worker $w_i$ in the valid pair $\langle w_i, t_j \rangle$\\
						\> (7) \> \> \> \> \> if we cannot prune dominated worker $w_i$ by Lemma \ref{lemma:dominate_worker}\\
						\> (8) \> \> \> \> \> \> if we cannot prune high-wage worker $w_i$ by Lemma \ref{lemma:expensive_worker}\\
						\> (9) \> \> \> \> \> \> \> add $\langle w_i, t_j \rangle$ to $S_{cand}$\\
						\> (10)\> \> \> \> if we cannot prune task $t_j$ w.r.t. workers in $S_{cand}$ by Lemma \ref{lemma:prune_insufficient_task}\\
						\> (11)\> \> \> \> \> for each pair $\langle w_i, t_j \rangle$ w.r.t. task $t_j$ in $S_{cand}$\\
						\> (12)\> \> \> \> \> \> compute the score increase, $\Delta S_p(w_i, t_j)$\\
						\> (13)\> \> \> \> else\\
						\> (14)\> \> \> \> \> $T_p=T_p - \{t_j\}$\\
						\> (15)\> \> \> obtain a pair, $\langle w_r, t_j \rangle \in S_{cand}$, with the highest score increase, \\
						\> \> \> \> \> \> $\Delta S_p(w_r, t_j)$, and add this pair to $I_p$\\
						\> (16)\> \> \> $W_p=W_p - \{w_r\}$\\
						\> (17)\> \> return $I_p$\\
						\}
					\end{tabbing}
				\end{scriptsize}
			}
		\end{tabular}
	\end{center}\vspace{-5ex}
	\caption{\small The MS-SC Greedy Algorithm.}\vspace{-4ex}
	\label{alg:greedy}
\end{figure}

Figure \ref{alg:greedy} shows the pseudo code of our MS-SC greedy
algorithm, namely {\sf MS-SC\_Greedy}, which obtains one
worker-and-task pair with the highest score increase each
time, and returns a task assignment instance set $I_p$ with a
high score.

Initially, we set $I_p$ to be empty, since no workers are assigned
to any tasks (line 1). Next, we find all valid worker-and-task
pairs $\langle w_i, t_j \rangle$ in the crowdsourcing system at
timestamp $p$ (line 2). Here, a pair $\langle w_i, t_j
\rangle$ is valid if it satisfies 4 conditions: (1) the distance between the
current location, $l_i(p)$, of worker $w_i$ and the location, $l_j$,
of task $t_j$ is less than the maximum moving distance, $d_i$, of
worker $w_i$, that is, $dist(l_i(p), l_j) \leq d_i$; (2) worker
$w_i$ can arrive at the location, $l_j$, of task $t_j$ before the
arrival deadline $e_j$; (3) worker $w_i$ has some skill(s) that task $t_j$
requires; and (4) the traveling cost, $c_{ij}$, of worker $w_i$
does not exceed the budget $B_j$ of task $t_j$.

Then, in each round, we select one valid worker-and-task
assignment pair with the highest score increase, and add it to set
$I_p$ (lines 3-16). Specifically, in each round, we check every task
$t_j$ that is involved in valid pairs $\langle w_i, t_j \rangle$,
and prune the dominated and high-wage workers $w_i$ via
Lemmas \ref{lemma:dominate_worker} and \ref{lemma:expensive_worker},
respectively (lines 7-8). If worker $w_i$ cannot be pruned by either
pruning method, we add it to a candidate set $S_{cand}$ for
further checking (line 9). After obtaining all workers that match
with task $t_j$, we apply Lemma \ref{lemma:prune_insufficient_task}
to filter out task $t_j$ (if workers cannot be successfully assigned
to $t_j$). If task $t_j$ cannot be pruned, we calculate the
score increase, $\Delta S_p(w_i, t_j)$, for each pair $\langle w_i,
t_j \rangle$ in $S_{cand}$; otherwise, we remove task $t_j$ from
task set $T_p$ (lines 10-14).

After we scan all tasks in $T_p$, we retrieve the
worker-and-task assignment pair, $\langle w_r, t_j \rangle$, from
the candidate set $S_{cand}$ that has the highest score increase,
and insert this pair into $I_p$ (line 15). Since worker $w_r$ has been
assigned, we remove it from the worker set $W_p$ (line 16). The
process above repeats until all workers have been assigned (i.e.,
$W_p = \emptyset$) or no tasks remain (i.e.,
$T_p=\emptyset$) (line 3). A compact sketch of this loop is given below.
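The following illustrative Python sketch captures the core loop of {\sf MS-SC\_Greedy}; dominance pruning (Lemma \ref{lemma:dominate_worker}) and task pruning (Lemma \ref{lemma:prune_insufficient_task}) are omitted for brevity, and the worker list is consumed destructively:

{\small
\begin{verbatim}
def ms_sc_greedy(workers, tasks):
    """Repeatedly pick the valid, unpruned pair with the highest
    score increase, as in Figure 3 (simplified sketch)."""
    I_p = []
    covered = {id(t): set() for t in tasks}  # Y~_j per task
    spent = {id(t): 0.0 for t in tasks}      # cost already paid per task
    while workers and tasks:
        cand = [(score_increase(w, t, covered[id(t)]), w, t)
                for t in tasks for w in workers
                if is_valid_pair(w, t)
                and not high_wage(w, t, spent[id(t)])]
        if not cand:
            break
        _, w, t = max(cand, key=lambda c: c[0])
        I_p.append((w, t))
        covered[id(t)] |= w.skills & t.skills
        spent[id(t)] += travel_cost(w, t)
        workers.remove(w)                    # each worker assigned once
    return I_p
\end{verbatim}}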
Figure \ref{subfig:validpairs} illustrates an example of valid
pairs, where $n$ available workers and $m$ spatial tasks are denoted
by triangular and circular nodes, respectively, and valid
worker-and-task pairs are represented by dashed lines. Figure
\ref{subfig:assignment} depicts the result of one assignment with a
high score, where the bold lines indicate assignment pairs in $I_p$.

\begin{figure}[ht]\vspace{-2ex}\centering
	\subfigure[][{\scriptsize Valid Pairs}]{
		\scalebox{0.47}[0.47]{\includegraphics{valid_pairs.eps}}
		\label{subfig:validpairs}}\hspace{8ex}
	\subfigure[][{\scriptsize Assignment Instance}]{
		\scalebox{0.45}[0.45]{\includegraphics{assignment_example.eps}}
		\label{subfig:assignment}}\vspace{-2ex}
	\caption{\small Illustration of the Worker-and-Task
		Assignment.}\vspace{-4ex}
	\label{fig:assignment}
\end{figure}

\vspace{0.5ex}\noindent {\bf The Time Complexity.} We next present
the time complexity of the greedy algorithm, {\sf MS-SC\_Greedy} (in
Figure \ref{alg:greedy}). Specifically, the time cost of computing
valid worker-and-task assignment pairs (line 2) is given by $O(m
\cdot n)$ in the worst case, where any of the $n$ workers can be
assigned to any of the $m$ tasks (i.e., $m\cdot n$ valid worker-and-task
pairs). Then, in each round (lines 3-16), we apply the pruning methods
to $m\cdot n$ pairs, and select the pair with the highest score
increase. In the worst case, no pairs can be pruned, and thus the
time complexity of computing score increases for these pairs is
given by $O(m \cdot n)$. Moreover, since each of the $n$ workers can
only be assigned to one spatial task, the number of iterations is at
most $n$. Therefore, the total time complexity of our greedy
algorithm is given by $O(m \cdot n^2)$.

\section{The $g$-Divide-and-Conquer Approach}
\label{sec:g_D&C}

Although the greedy algorithm incrementally finds one
worker-and-task assignment (with the highest score increase) at a
time, it may achieve only local optimality.
Therefore, in this section, we propose an efficient
\textit{$g$-divide-and-conquer algorithm} ($g$-D\&C), which first
divides the entire MS-SC problem into $g$ subproblems, such that
each subproblem involves a smaller subgroup of $\lceil m/g\rceil$
spatial tasks, and then conquers the subproblems recursively (until
the final group size becomes 1). Since different numbers, $g$, of
divided subproblems incur different time costs, we
propose a novel cost-model-based method to estimate the best
$g$ value for dividing the problem.

Specifically, for each subproblem/subgroup (containing $\lceil
m/g\rceil$ tasks), we tackle the worker-and-task assignment
problem via recursion (note: the base case with group size equal
to 1 can be solved by the greedy algorithm
\cite{vazirani2013approximation}, which has an approximation ratio
of $\ln(N)$, where $N$ is the total number of skills). During the
recursive process, we combine/merge the assignment results from
subgroups, and obtain the assignment strategy for the merged groups
by resolving the assignment conflicts among subgroups.
Finally, we
can return the task assignment instance set $I_p$ with respect to
the entire worker and task sets.

In the sequel, we first discuss how to decompose the MS-SC problem
into subproblems in Section \ref{subsec:MS_SC_decomposition}. Then,
we illustrate our $g$-divide-and-conquer approach in Section
\ref{subsec:D&C}, which utilizes the decomposition and merge
algorithms (the latter is discussed in Section \ref{subsec:merge}).
Finally, we provide a cost model in Section
\ref{subsec:DC_cost_model} to determine the best number, $g$, of
subproblems during the $g$-D\&C process.

\begin{figure}[ht]\vspace{-2ex}
	\centering
	\subfigure[][{\scriptsize Original MS-SC Problem}]{
		\scalebox{1}[1]{\includegraphics{beforeDecompose.eps}}
		\label{subfig:beforedecomposing}}\hspace{4ex}
	\subfigure[][{\scriptsize Decomposed Subproblems}]{
		\scalebox{0.9}[0.9]{\includegraphics{decomposed.eps}}
		\label{subfig:decomposed}}\vspace{-2ex}
	\caption{\small Illustration of Decomposing the MS-SC Problem.}\vspace{-5ex}
	\label{fig:decomposing}
\end{figure}

\subsection{MS-SC Problem Decompositions}
\label{subsec:MS_SC_decomposition}

In this subsection, we discuss how to decompose an MS-SC problem into
subproblems. In order to illustrate the decomposition, we first
convert our original MS-SC problem into a bipartite graph
representation.

\vspace{0.5ex}\noindent {\bf Bipartite Graph Representation of the
	MS-SC Problem.} Specifically, given a worker set $W_p$ and a spatial
task set $T_p$, we denote each worker/task (i.e., $w_i$ or $t_j$) as
a vertex in the bipartite graph, where worker and task vertices have
distinct vertex types. There exists an edge between a worker vertex
$w_i$ and a task vertex $t_j$, if and only if worker $w_i$ can reach
spatial task $t_j$ under the constraints of skills (i.e., $X_i \cap
Y_j \ne \emptyset$), time (i.e., the arrival time is before the arrival
deadline $e_j$), distance (i.e., the traveling distance is below $d_i$), and
budget (i.e., the traveling cost is below the task budget $B_j$). We say
that the worker-and-task assignment pair $\langle w_i, t_j\rangle$
is \textit{valid}, if there is an edge between vertices $w_i$ and
$t_j$ in the graph.

As an example, in Figure \ref{subfig:beforedecomposing}, we have a
worker set $W_p = \{w_i | 1\leq i\leq 5\}$, and a spatial task set
$T_p = \{t_j | 1\leq j\leq 3\}$, which are denoted by two types of
vertices (represented by triangle and circle shapes,
respectively) in a bipartite graph. An edge connects a worker vertex
$w_i$ and a task vertex $t_j$ if worker $w_i$ can reach the location of
task $t_j$ and can contribute some skill(s) required by $t_j$.
For
example, there exists an edge between $w_1$ and $t_1$, which
indicates that worker $w_1$ can move to the location of $t_1$ before
the arrival deadline $e_1$, with a traveling distance under $d_1$ and
a traveling cost below budget $B_1$, and moreover with some
skill(s) in the required skill set $Y_1$ of task $t_1$.

Note that one or multiple worker vertices (e.g., $w_1$, $w_3$, and
$w_4$) may be connected to the same task vertex (e.g., $t_1$).
Furthermore, multiple task vertices, say $t_1$ and $t_2$, may also
share some conflicting workers (e.g., $w_3$ or $w_4$), where the
conflicting worker $w_3$ (or $w_4$) can be assigned to either task
$t_1$ or task $t_2$, mutually exclusively.

\begin{figure}[ht]
	\begin{center}
		\begin{tabular}{l}
			\parbox{3.1in}{
				\begin{scriptsize} \vspace{-4ex}
					\begin{tabbing}
						12\=12\=12\=12\=12\=12\=12\=12\=12\=12\=12\=\kill
						{\bf Procedure {\sf MS-SC\_Decomposition}} \{ \\
						\> {\bf Input:} $n$ workers in $W_p$, $m$ time-constrained spatial tasks in $T_p$, and the number\\
						\> \> \> \> of groups $g$\\
						\> {\bf Output:} decomposed MS-SC subproblems, $P_s$ ($1\leq s\leq g$)\\
						\> (1) \> \> for $s$ = 1 to $g$\\
						\> (2) \> \> \> $P_s = \emptyset$\\
						\> (3) \> \> compute all valid worker-and-task pairs $\langle w_i, t_j\rangle$ from $W_p$ and $T_p$, \\
						\> \> \> \> \> and obtain a bipartite graph $G$\\
						\> (4) \> \> for $s = 1$ to $g$\\
						\> (5) \> \> \> let set $T_p^{(j)}$ contain the next anchor task $t_j$ and its top-$(\lceil m/g \rceil -1)$\\
						\> \> \> \> \> \> nearest tasks // the task, $t_j$, whose longitude is the smallest\\
						\> (6) \> \> \> for each task vertex $t_j \in T_p^{(j)}$ in graph $G$\\
						\> (7) \> \> \> \> obtain all worker vertices $w_i$ that connect with task vertex $t_j$\\
						\> (8) \> \> \> \> add all pairs $\langle w_i, t_j\rangle$ to $P_s$\\
						\> (9) \> \> return $P_s$ (for $1\leq s\leq g$)\\
						\}
					\end{tabbing}
				\end{scriptsize}
			}
		\end{tabular}
	\end{center}
	\vspace{-5ex}
	\caption{\small The MS-SC Problem Decomposition Algorithm.}
	\vspace{-4ex}
	\label{alg:decomposing}
\end{figure}

\vspace{0.5ex}\noindent {\bf Decomposing the MS-SC Problem.} Next,
we illustrate how to decompose the MS-SC problem with respect
to task vertices in the bipartite graph. Figure
\ref{fig:decomposing} shows an example of decomposing the MS-SC
problem (as shown in Figure \ref{subfig:beforedecomposing}) into 3
subproblems (as depicted in Figure \ref{subfig:decomposed}), where
each subproblem contains a subgroup of one single spatial task
(i.e., group size = 1), associated with its connected worker
vertices. For example, the first subgroup in Figure
\ref{subfig:decomposed} contains task vertex $t_1$, as well as its
connecting worker vertices $w_1$, $w_3$, and $w_4$.
Different task
vertices may share conflicting workers; for example, tasks $t_1$ and
$t_2$ share the same (conflicting) worker vertices $w_3$ and $w_4$.

In the general case, given $n$ workers and $m$ spatial tasks, we
partition the bipartite graph into $g$ subgroups,
each of which contains $\lceil m/g \rceil$ spatial tasks, as well as their
connecting workers. Figure \ref{alg:decomposing} presents the pseudo
code of our MS-SC problem decomposition algorithm, namely {\sf
	MS-SC\_Decomposition}, which returns $g$ MS-SC
subproblems (each corresponding to a subgroup with $\lceil m/g \rceil$ tasks),
$P_s$, after decomposing the original MS-SC problem.

Specifically, we first initialize $g$ empty
subproblems, $P_s$, where $1\leq s\leq g$ (lines
1-2). Then, we find all valid worker-and-task pairs $\langle
w_i, t_j\rangle$ in the crowdsourcing system at timestamp $p$, which
form a bipartite graph $G$, where valid pairs satisfy the
constraints of skills, time, distance, and budget (line 3).

Next, we obtain one subproblem $P_s$ at a time (lines 4-8).
In particular, in each round, we retrieve an anchor task $t_j$ and
its top-$(\lceil m/g \rceil -1)$ nearest tasks, which form a task
set $T_p^{(j)}$ of size $\lceil m/g \rceil$ (line 5). Here, we
choose anchor tasks in a sweeping manner, that is, we always choose
the task with the smallest longitude (in the case where multiple
tasks have the same longitude, we choose the one with the smallest
latitude). Then, for each task $t_j\in T_p^{(j)}$, we obtain its
corresponding vertex in $G$ and all of its connecting worker
vertices $w_i$, and add the pairs $\langle w_i, t_j\rangle$ to
subproblem $P_s$ (lines 6-8). Finally, we return all the $g$
decomposed subproblems $P_s$. A compact sketch of this decomposition
is given below.
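The following illustrative Python rendering of the sweeping decomposition slightly simplifies Figure \ref{alg:decomposing}; we assume task locations are stored as (longitude, latitude) pairs and that an anchor's nearest tasks are removed from the pool, so that task groups are disjoint while workers may be shared across subproblems (the conflicting workers):

{\small
\begin{verbatim}
from math import ceil

def ms_sc_decomposition(workers, tasks, g):
    """Sweep anchors by smallest longitude (ties by latitude), group
    each anchor with its ceil(m/g)-1 nearest remaining tasks, and
    attach every worker that has a valid pair with the group."""
    size = ceil(len(tasks) / g)
    remaining = sorted(tasks, key=lambda t: t.loc)  # sweeping order
    subproblems = []
    while remaining:
        anchor = remaining.pop(0)                   # next anchor task
        nearest = sorted(remaining,
                         key=lambda t: dist(anchor.loc, t.loc))[:size - 1]
        for t in nearest:
            remaining.remove(t)
        group = [anchor] + nearest
        group_workers = [w for w in workers
                         if any(is_valid_pair(w, t) for t in group)]
        subproblems.append((group_workers, group))
    return subproblems
\end{verbatim}}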
\begin{figure}[ht]
	\begin{center}
		\begin{tabular}{l}
			\parbox{3.1in}{
				\begin{scriptsize} \vspace{-4ex}
					\begin{tabbing}
						12\=12\=12\=12\=12\=12\=12\=12\=12\=12\=12\=\kill
						{\bf Procedure {\sf MS-SC\_$g$D\&C}} \{ \\
						\> {\bf Input:} $n$ workers in $W_p$, and $m$ time-constrained spatial tasks in $T_p$\\
						\> {\bf Output:} a worker-and-task assignment instance set, $I_p$\\
						\> (1) \> \> $I_p = \emptyset$\\
						\> (2) \> \> estimate the best number of groups, $g$, for $W_p$ and $T_p$\\
						\> (3) \> \> invoke {\sf MS-SC\_Decomposition}$(W_p, T_p, g)$, and obtain subproblems $P_s$\\
						\> (4) \> \> for $s=1$ to $g$\\
						\> (5) \> \> \> if the number of tasks in subproblem $P_s$ (group size) is greater than 1\\
						\> (6) \> \> \> \> $I_p^{(s)}$ = {\sf MS-SC\_$g$D\&C}($W_p (P_s)$, $T_p(P_s)$)\\
						\> (7) \> \> \> else\\
						\> (8) \> \> \> \> invoke classical greedy set cover algorithm to solve subproblem $P_s$,\\
						\> \> \> \> \> \> \> and obtain assignment results $I_p^{(s)}$\\
						\> (9) \> \> for $i=1$ to $g$\\
						\> (10) \> \> \> find the next subproblem, $P_s$\\
						\> (11) \> \> \> $I_p$ = {\sf MS-SC\_Conflict\_Reconcile} ($I_p$, $I_p^{(s)}$) \\
						\> (12) \> \> return $I_p$\\
						\}
					\end{tabbing}
				\end{scriptsize}
			}
		\end{tabular}
	\end{center}\vspace{-5ex}
	\caption{\small The $g$-Divide-and-Conquer Algorithm.}
	\vspace{-5ex}
	\label{alg:incrementalFrame}
\end{figure}

\subsection{The $g$-D\&C Algorithm}
\label{subsec:D&C}

In this subsection, we propose an efficient
\textit{$g$-divide-and-conquer} ($g$-D\&C) algorithm, namely {\sf
	MS-SC\_$g$D\&C}, which recursively partitions the original MS-SC
problem into subproblems, solves each subproblem (via recursion),
and merges the assignment results of the subproblems by resolving
conflicts.

Specifically, in Algorithm {\sf MS-SC\_$g$D\&C}, we first estimate
the best number of groups, $g$, to partition, with respect to $W_p$ and
$T_p$, based on the cost model proposed later in Section
\ref{subsec:DC_cost_model} (line 2). Then, we call the {\sf
	MS-SC\_Decomposition} algorithm (as given in Figure
\ref{alg:decomposing}) to obtain subproblems $P_s$ (line 3). For
each subproblem $P_s$, if $P_s$ involves more than 1 task, we
recursively call Algorithm {\sf MS-SC\_$g$D\&C} itself, further
dividing the subproblem $P_s$ (lines 5-6). Otherwise, when
subproblem $P_s$ contains only one single task, we apply the greedy
algorithm of the classical set cover problem to task set $T_p(P_s)$ and worker set
$W_p(P_s)$ (lines 7-8).

After that, we obtain an assignment instance set $I_p^{(s)}$ for
each subproblem $P_s$, and merge them into one single
worker-and-task assignment instance set $I_p$, by reconciling the
conflicts (lines 9-11). In particular, $I_p$ is initially empty (line
1), and is merged each time with an assignment set $I_p^{(s)}$ from a
subproblem $P_s$ (lines 10-11). Due to possible conflicts among
subproblems, we call function {\sf MS-SC\_Conflict\_Reconcile}
$(\cdot, \cdot)$ (discussed in Section \ref{subsec:merge}) to
resolve them during the merging process. Finally,
we return the merged assignment instance set $I_p$ (line 12).
\vspace{-1ex}

\subsection{Merging Conflict Reconciliation}
\label{subsec:merge}

In this subsection, we introduce the merging conflict reconciliation
procedure, which resolves the conflicts while merging the assignment
results of subproblems (i.e., line 11 of Procedure {\sf
	MS-SC\_$g$D\&C}). Assume that $I_p$ is the current assignment
instance set we have merged so far.
Given a new subproblem $P_s$
with assignment set $I_p^{(s)}$, Figure \ref{alg:conflict_reconcile}
shows the merging algorithm, namely {\sf MS-SC\_Conflict\_Reconcile},
which combines the two assignment sets $I_p$ and $I_p^{(s)}$ by
resolving conflicts.

\begin{figure}[ht]
	\begin{center}
		\begin{tabular}{l}
			\parbox{3.1in}{
				\begin{scriptsize} \vspace{-2ex}
					\begin{tabbing}
						12\=12\=12\=12\=12\=12\=12\=12\=12\=12\=12\=\kill
						{\bf Procedure {\sf MS-SC\_Conflict\_Reconcile}} \{ \\
						\> {\bf Input:} the current assignment instance set, $I_p$, of subproblem $P$ we have merged, \\
						\> \> \> \> and the assignment instance set, $I_p^{(s)}$, of subproblem $P_s$\\
						\> {\bf Output:} a merged worker-and-task assignment instance set, $I_p$\\
						\> (1) \> \> let $W_c$ be a set of all conflicting workers between $I_p$ and $I_p^{(s)}$\\
						\> (2) \> \> while $W_c \neq \emptyset$\\
						\> (3) \> \> \> choose a worker $w_i\in W_c$ with the highest traveling cost in $I_p^{(s)}$\\
						\> (4) \> \> \> if we substitute $w_i$ with $w_i'$ in $P_s$ having the highest score $S_p^{(s)}$\\
						\> (5) \> \> \> \> compute the reduction of the assignment score, $\Delta S_p^{(s)}$\\
						\> (6) \> \> \> if we substitute $w_i$ with $w_i''$ in $P$ having the highest score $S_p$\\
						\> (7) \> \> \> \> compute the reduction of the assignment score, $\Delta S_p$\\
						\> (8) \> \> \> if $\Delta S_p > \Delta S_p^{(s)}$\\
						\> (9) \> \> \> \> substitute worker $w_i$ with $w_i'$ in $I_p^{(s)}$\\
						\> (10)\> \> \> else\\
						\> (11)\> \> \> \> substitute worker $w_i$ with $w_i''$ in $I_p$\\
						\> (12)\> \> \> $W_c = W_c - \{w_i\}$\\
						\> (13)\> \> $I_p = I_p \cup I_p^{(s)}$\\
						\> (14) \> \> return $I_p$\\
						\}
					\end{tabbing}
				\end{scriptsize}
			}
		\end{tabular}
	\end{center}\vspace{-5ex}
	\caption{\small The Merging Conflict Reconciliation Algorithm.}
	\vspace{-5ex}
	\label{alg:conflict_reconcile}
\end{figure}

In particular, two distinct tasks from two subproblems may be
assigned the same (conflicting) worker $w_i$. Since each worker
can only be assigned to one spatial task at a time, we need to
avoid such a scenario when merging the assignment instance sets of two
subproblems (e.g., $I_p$ and $I_p^{(s)}$). Our algorithm in Figure
\ref{alg:conflict_reconcile} first obtains a set, $W_c$, of all
conflicting workers between $I_p$ and $I_p^{(s)}$ (line 1). We then
greedily resolve the conflicts for workers $w_i$ in
non-increasing order of their traveling costs (i.e., $c_{ij}$) in
$I^{(s)}_p$, that is, the worker with the highest traveling cost
first (line 3). Next, in order to resolve a conflict, we
try to replace worker $w_i$ with another worker $w_i'$ (or $w_i''$)
in $P_{s}$ (or $P$) with the highest score $S_p^{(s)}$ (or
$S_p$), and compute the possible reduction of the assignment score,
$\Delta S_p^{(s)}$ (or $\Delta S_p$) (lines 4-7). Note that here we replace worker $w_i$ with other available workers. If no other worker is available to replace $w_i$, we may need to sacrifice the task $t_j$ that worker $w_i$ is assigned to. For example, when we cannot find another worker to replace
$w_i$ in $P_s$, the substitute of $w_i$ is set to an empty worker, which means the
task $t_j$ assigned to $w_i$ in $I_p^{(s)}$ will be sacrificed
and $\Delta S_p^{(s)}=B'_j$ (as calculated in Eq.~(\ref{eq:flexible_budget})). In the case that
$\Delta S_p > \Delta S_p^{(s)}$, we substitute worker $w_i$ with
$w_i'$ in $I_p^{(s)}$ (since the replacement of $w_i$ in subproblem
$P_s$ leads to a lower score reduction); otherwise, we resolve the
conflict by replacing $w_i$ with $w_i''$ in $I_p$ (lines 8-12). After
resolving all conflicts, we merge assignment instance set $I_p$ with
$I_p^{(s)}$ (line 13), and return the merged result $I_p$. A compact
sketch of this procedure is given below.
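The procedure can be sketched as follows (illustrative Python; the helper \texttt{best\_substitute} is our own assumption and encapsulates lines 4-7 of Figure \ref{alg:conflict_reconcile}, i.e., evaluating the cheapest repair, possibly sacrificing the task with a drop of $B'_j$):

{\small
\begin{verbatim}
def assigned_task(I, w):
    """The task that worker w is assigned to in assignment list I."""
    return next(t for (u, t) in I if u is w)

def ms_sc_conflict_reconcile(I_p, I_ps, best_substitute):
    """best_substitute(I, w) is assumed to return (score_drop,
    patched_I): the score reduction of replacing w by the best
    available worker in assignment I (or of sacrificing w's task),
    together with the patched assignment."""
    conflicting = [w for (w, _) in I_ps if any(u is w for (u, _) in I_p)]
    conflicting.sort(key=lambda w: travel_cost(w, assigned_task(I_ps, w)),
                     reverse=True)      # highest traveling cost first
    for w in conflicting:
        drop_p, patched_p = best_substitute(I_p, w)
        drop_s, patched_s = best_substitute(I_ps, w)
        if drop_p > drop_s:
            I_ps = patched_s    # repairing the subproblem loses less score
        else:
            I_p = patched_p     # repairing the merged part loses less
    return I_p + I_ps
\end{verbatim}}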
\subsection{Cost-Model-Based Estimation of the Best Number of Groups}
\label{subsec:DC_cost_model}

In this subsection, we discuss how to estimate the best number of
groups, $g$, such that the total cost of solving the MS-SC problem
with the $g$-divide-and-conquer approach is minimized. Specifically, the
cost of the $g$-divide-and-conquer approach consists of 3 parts: the
cost, $F_D$, of decomposing subproblems; the cost, $F_C$, of conquering
subproblems recursively; and the cost, $F_M$, of merging subproblems by
resolving conflicts.

Without loss of generality, as illustrated in Figure
\ref{fig:gDC_cost_model}, during the $g$-divide-and-conquer process,
on level $k$, we have recursively divided the original MS-SC problem into
$g^k$ subproblems, $P_1^{(k)}$, $P_2^{(k)}$, ..., and
$P_{g^k}^{(k)}$, where each subproblem involves $m/g^k$ spatial
tasks.
\begin{figure}[ht]\vspace{-3ex}
	\centering
	\scalebox{0.4}[0.4]{\includegraphics{gDC_cost_model.eps}}\vspace{-1ex}
	\caption{\small Illustration of the Cost Model Estimation.}\vspace{-4ex}
	\label{fig:gDC_cost_model}
\end{figure}

\noindent {\bf The Cost, $F_D$, of Decomposing Subproblems.} From
Algorithm {\sf MS-SC\_Decomposition} (in Figure
\ref{alg:decomposing}), we first need to retrieve all valid
worker-and-task assignment pairs (line 3), whose cost is $O(m\cdot
n)$. Then, we divide each problem into $g$ subproblems, whose
cost is given by $O(m\cdot g+m)$ on each level. On level $k$, we
have $m/g^k$ tasks in each subproblem $P^{(k)}_i$, which we further
divide into $g$ more subproblems, $P^{(k+1)}_j$, each with
$m/g^{k+1}$ tasks. To obtain the $m/g^{k+1}$ tasks in each
subproblem $P^{(k+1)}_j$, we first need to find the anchor task, which
takes $O(m/g^{k})$ cost, and then retrieve the remaining tasks,
which takes $O(m/g^{k+1})$ cost. Moreover, since there are $g^{k+1}$
subproblems on level $k+1$, the cost of decomposing tasks on level $k$
is given by $O(m\cdot g+m)$.

Since there are $\log_g(m)$ levels in total, the total cost of
decomposing the MS-SC problem is given by:
{\small
	$$F_D= m\cdot n+(m \cdot g+m)\cdot \log_g(m).$$
	\vspace{-2ex}}

\noindent {\bf The Cost, $F_C$, of Recursively Conquering
	Subproblems.} Let function $F_C(x)$ be the total cost of conquering
a subproblem that contains $x$ spatial tasks. Then, we have the
following recursive function: 
{\small\vspace{-3ex}
	$$F_C(m) = g\cdot F_C(\left\lceil \frac{m}{g}\right\rceil).$$
	\vspace{-2ex}}

Assume that $deg_t$ is the average degree of the task nodes in the
bipartite graph $G$. Then, the base case of function $F_C(x)$ is the
case $x=1$, in which we apply the greedy algorithm on just one
single task and $deg_t$ workers.
Thus, by the analysis of the time
complexity in Section \ref{sub:greedy_algorithm}, we have:
{\small\vspace{-1ex}
	$$F_C(1) = cost_{greedy}(deg_t, 1) = deg_t^2.$$
	\vspace{-3ex}
}

From the recursive function $F_C(x)$ and its base case, we can
obtain the total cost of the recursive invocations on levels 1
to $\log_g(m)$ as follows:
{\small\vspace{-1.5ex}
	$$\sum_{k = 1}^{\log_g(m)}F_C(m/g^k) =
	\frac{1-m}{1-g}\,deg_t^2.$$
	
}

\noindent {\bf The Cost, $F_M$, of Merging Subproblems.} Next, we
provide the cost, $F_M$, of merging subproblems by resolving
conflicts. Assume that we have $n_s$ workers who could be assigned
to more than one spatial task (i.e., conflicting workers), and that
each worker node has an average degree $deg_w$ in the bipartite
graph. During the subproblem merging process, in the worst case we
may resolve conflicts for each of these workers at most $(deg_w-1)$ times.

Therefore, the worst-case cost of merging subproblems is given
by: \vspace{-1ex}$$F_M = n_s\cdot (deg_w-1).$$

\noindent {\bf The Total Cost of the $g$-D\&C Approach.} The total
cost, $cost_{gD\&C}$, of the $g$-D\&C algorithm is given by
summing up the three costs, $F_D$, $F_C$, and $F_M$. That is, we have

{\small
	\vspace{-3ex}
	\begin{eqnarray}
		cost_{gD\&C} &=& F_D + \sum_{k = 1}^{\log_g(m)}F_C(m/g^k) + F_M \label{eq:D&C_cost}\\
		&=& m\cdot n + (mg+m)\log_g (m) + \frac{1-m}{1-g}deg_t^2\notag\\
		&& +\ n_s (deg_w - 1).\notag \vspace{-3ex}
	\end{eqnarray}
	\vspace{-3ex}
}

To minimize $cost_{gD\&C}$ (given in
Eq.~(\ref{eq:D&C_cost})), we take its derivative with respect to $g$
and set it to 0 (the terms $m\cdot n$ and $n_s(deg_w-1)$ do not
depend on $g$). In particular, we have:

{\small
	\vspace{-3ex}
	\begin{eqnarray}
		&&\frac{\partial cost_{gD\&C}}{\partial g} \notag\\
		&=&\frac{m\log(m)(g\log(g) - g - 1)}{g\log^2(g)} +
		\frac{1-m}{(1-g)^2}deg_t^2 = 0
	\end{eqnarray}
	\vspace{-1ex}
}

We notice that when $g=2$, $\frac{\partial cost_{gD\&C}}{\partial g}$ is far
below 0, but it increases quickly as $g$ grows. In addition, $g$ can only
be an integer. We can therefore try the integers $g = 2, 3, 4, \ldots$, until
$\frac{\partial cost_{gD\&C}}{\partial g}$ becomes positive, as
sketched below.
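A minimal sketch of this estimation (illustrative Python, assuming $m \geq 2$; the search cap \texttt{g\_max} is our own choice and not part of the cost model) is:

{\small
\begin{verbatim}
from math import log

def cost_gdc(g, m, n, deg_t, deg_w, n_s):
    """The total cost model of the g-D&C approach (Eq. eq:D&C_cost)."""
    return (m * n + (m * g + m) * log(m, g)   # decomposition cost F_D
            + (1 - m) / (1 - g) * deg_t ** 2  # recursive conquering cost
            + n_s * (deg_w - 1))              # merging cost F_M

def best_g(m, n, deg_t, deg_w, n_s, g_max=64):
    """Scan integers g >= 2 and keep the value minimizing the cost
    model; equivalent to stopping once the (discrete) derivative of
    the cost with respect to g turns positive."""
    return min(range(2, g_max + 1),
               key=lambda g: cost_gdc(g, m, n, deg_t, deg_w, n_s))
\end{verbatim}}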
\section{The Cost-Model-Based Adaptive Algorithm}
\label{sec:adpative}

In this section, we introduce a \textit{cost-model-based adaptive
	approach}, which adaptively decides how to apply our
proposed greedy and $g$-divide-and-conquer ($g$-D\&C) algorithms.
The basic idea is as follows. Unlike the $g$-D\&C algorithm, we do
not divide the MS-SC problem into subproblems recursively until the task
group sizes become $1$ (which can be solved by the greedy algorithm
for set cover problems). Instead, based on our proposed cost model,
we partition the problem into subproblems and adaptively
determine when to stop in some partitioning round (i.e., when the total
cost of solving the subproblems with the greedy algorithm is smaller
than that of continuing to divide them).

\begin{figure}[ht]
	\begin{center}
		\begin{tabular}{l}
			\parbox{3.1in}{
				\begin{scriptsize} \vspace{-4ex}
					\begin{tabbing}
						12\=12\=12\=12\=12\=12\=12\=12\=12\=12\=12\=\kill
						{\bf Procedure {\sf MS-SC\_Adaptive}} \{ \\
						\> {\bf Input:} $n$ workers in $W_p$, and $m$ time-constrained spatial tasks in $T_p$\\
						\> {\bf Output:} a worker-and-task assignment instance set, $I_p$\\
						\> (1) \> \> $I_p = \emptyset$\\
						\> (2) \> \> estimate the cost, $cost_{greedy}$, of the greedy algorithm\\
						\> (3) \> \> estimate the best number of groups, $g$, and obtain the cost, $cost_{gdc}$, \\
						\> \> \> of the $g$-D\&C approach\\
						\> (4) \> \> if $cost_{greedy} < cost_{gdc}$\\
						\> (5) \> \> \> $I_p$ = {\sf MS-SC\_Greedy}($W_p$, $T_p$) \\
						\> (6) \> \> else \qquad \textit{// $g$-D\&C algorithm}\\
						\> (7) \> \> \> invoke {\sf MS-SC\_Decomposition}$(W_p, T_p, g)$, and obtain subproblems $P_s$\\
						\> (8) \> \> \> for each subproblem, $P_s$, \\
						\> (9) \> \> \> \> $I_p^{(s)}$ = {\sf MS-SC\_Adaptive}($W_p(P_s)$, $T_p(P_s)$)\\
						\> (10)\> \> \> for $i=1$ to $g$\\
						\> (11)\> \> \> \> find the next subproblem, $P_s$\\
						\> (12)\> \> \> \> $I_p$ = {\sf MS-SC\_Conflict\_Reconcile} ($I_p$, $I_p^{(s)}$) \\
						\> (13)\> \> return $I_p$\\
						\}
					\end{tabbing}
				\end{scriptsize}
			}
		\end{tabular}
	\end{center}\vspace{-5ex}
	\caption{\small The MS-SC Cost-Model-Based Adaptive Algorithm.}
	\label{alg:hybrid}\vspace{-5ex}
\end{figure}

\subsection{Algorithm of the Cost-Model-Based Adaptive Approach}

Figure \ref{alg:hybrid} shows the pseudo-code of our
cost-model-based adaptive algorithm, namely {\sf MS-SC\_Adaptive}.
Initially, we estimate the cost, $cost_{greedy}$, of applying the
greedy approach over the worker/task sets $W_p$ and $T_p$ (line 2).
Similarly, we also estimate the best group size, $g$, and compute
the cost, $cost_{gdc}$, of using the $g$-D\&C algorithm (line 3).
If the cost of the greedy algorithm is smaller than
that of the $g$-D\&C approach (i.e., $cost_{greedy} < cost_{gdc}$),
we use the greedy algorithm by invoking function {\sf
	MS-SC\_Greedy}($\cdot, \cdot$) (due to its lower cost; lines 4-5).
Otherwise, we apply the $g$-D\&C algorithm, and further
partition the problem into subproblems $P_s$ (lines 6-7). Then, for
each subproblem $P_s$, we recursively call the cost-model-based
adaptive algorithm, and retrieve the assignment instance set
$I_p^{(s)}$ (line 9). After that, we merge all the assignment
instance sets from the subproblems by invoking function {\sf
	MS-SC\_Conflict\_Reconcile}($\cdot, \cdot$) (lines 10-12).
Finally,
we return the worker-and-task assignment instance set $I_p$ (line
13).

\subsection{Cost Model for the Stopping Condition}

Next, we discuss how to determine the stopping level when using our
cost-model-based adaptive approach to recursively solve the MS-SC
problem. Intuitively, at the current level $k$, we need to estimate
the costs, $cost_{greedy}$ and $cost_{gdc}$, of using the greedy and
$g$-D\&C algorithms, respectively, to solve the remaining MS-SC
problem. If the greedy algorithm has a lower cost, we stop the
divide-and-conquer and apply the greedy algorithm to each
subproblem.

In the sequel, we discuss how to obtain the formulae of the costs
$cost_{greedy}$ and $cost_{gdc}$.

\underline{\it The Cost, $cost_{greedy}$, of the Greedy Algorithm.}
Given a set, $W_p$, of $n$ workers and a set, $T_p$, of $m$ tasks,
the cost, $cost_{greedy}$, of our greedy approach (as given in
Figure \ref{alg:greedy}) has been discussed in Section
\ref{sub:greedy_algorithm}.

In the bipartite graph of valid worker-and-task pairs, denote
the average degree of workers as $deg_w$, and that of tasks as
$deg_t$. In Figure \ref{alg:greedy}, the computation of valid
worker-and-task pairs in line 2 needs
$O(m\cdot n)$ cost. Since there are at most $n$ iterations, in each
round (lines 3-16) we apply the two worker-pruning methods to at most
$(2m \cdot deg_t)$ pairs and select the pair with the highest score
increase, which needs $O(3m \cdot n\cdot deg_t)$ cost in total.
For the cost of task-pruning, there are in total $n$ rounds (lines 3-16; 
i.e., one of the $n$ workers is removed in each round in line 16). 
In each round, at most $deg_w$ out of the $m$ tasks (line 5) may 
potentially be pruned by Lemma \ref{lemma:prune_insufficient_task} (line 10), and checking each of these $deg_w$ tasks needs $O(deg_t)$ cost. 
Therefore, the total cost of task-pruning is given by $O(n\cdot deg_t \cdot deg_w)$. 
If we cannot prune a task that was assigned a worker in the last round (lines 3-16), then we need to update the score increases of $deg_t$ workers for that task. Each task is assigned workers at most $deg_t$ times, so the total update cost for one task is given by $O(deg^2_t)$ (line 12). Therefore, $cost_{greedy}(n, m)$ can be given by:\vspace{-3ex}

{\small
	\begin{eqnarray}
		&& cost_{greedy}(n, m) \notag\\
		&=&C_{greedy}\cdot(m\cdot n + n\cdot deg_t \cdot (3m + deg_w) + m\cdot deg^2_t), \qquad \label{eq:greedy_approach_cost}
	\end{eqnarray}}
	\vspace{-3ex}
	
	\noindent where the parameter $C_{greedy}$ is a constant factor, which
	can be inferred from the cost statistics of the greedy algorithm.
	
	\underline{\it The Cost, $cost_{gdc}$, of the $g$-D\&C Algorithm.}
	Assume that the current $g$-divide-and-conquer level is $k$. We can
	modify the cost analysis in Section \ref{subsec:DC_cost_model} by
	considering the cost, $cost_{gdc}$, of the remaining
	divide-and-conquer levels.
Specifically, we have the cost, $F_D'$,
	of the decomposition algorithm, that is:\vspace{-2ex}
	
	{\small
		$$F_D'= m\cdot n+(m\cdot g+m)\cdot k.$$
		\vspace{-3ex}}
	
	Moreover, when the current level is $k$, the cost of conquering the
	remaining subproblems is given by:\vspace{-2ex}
	
	{\small
		$$\sum_{i= k}^{\log_g(m)}F_C(m/g^i).$$
		\vspace{-3ex}}
	
	Finally, the cost of merging subproblems is given by $F_M$.
	
	As a result, the total cost, $cost_{gdc}$, of solving the MS-SC
	problem with our $g$-D\&C approach for the remaining partitioning
	levels (from level $k$ to $\log_g(m)$) can be given by:\vspace{-3ex}
	
	{\small
		$$cost_{gdc}= C_{gdc}\cdot(F_D' + \sum_{i = k}^{\log_g(m)}F_C(m/g^i)+ F_M),$$
		\vspace{-3ex}}
	
	\noindent where the parameter $C_{gdc}$ is a constant factor, which can
	be inferred from the time cost statistics of the $g$-D\&C algorithm.
	
	This way, we compare $cost_{greedy}$ with $cost_{gdc}$ (as mentioned
	in line 4 of the {\sf MS-SC\_Adaptive} algorithm). If $cost_{greedy}$ is
	smaller than $cost_{gdc}$, we stop at the current level $k$ and apply the greedy algorithm to tackle
	the remaining (sub)problem directly; otherwise, we keep dividing it
	into subproblems (i.e., $g$-D\&C).

\section{Experimental Study}
\label{sec:exper}

\subsection{Experimental Methodology}

\noindent \textbf{Data Sets.} We use both real and synthetic data to
test our proposed MS-SC approaches. Specifically, for real data, we
use the Meetup data set from \cite{liu2012event}, which was crawled from
\textit{meetup.com} between Oct. 2011 and Jan. 2012. There are 5,153,886
users, 5,183,840 events, and 97,587 groups in Meetup, where each
user is associated with a location and a set of tags, each group is
associated with a set of tags, and each event is associated with a
location and the group that created the event. For an event, we use the
tags of the group that created the event as its tags. To conduct the
experiments on our approaches, we use the locations and tags of
users in Meetup to initialize the locations and the practiced skills
of workers in our MS-SC problem. In addition, we utilize the
locations and tags of events to initialize the locations and the
required skills of tasks in our experiments. Since workers are
unlikely to move between two distant cities to conduct one spatial
task, and the constraints of time (i.e., $e_j$), budget (i.e.,
$B_j$) and distance (i.e., $d_i$) also prevent workers from moving
too far, we only consider those user-and-event pairs located in the
same city. Specifically, we select one famous and popular city, Hong
Kong, and extract the Meetup records from the area of Hong Kong (with
latitude from \ang{22.209} to \ang{22.609} and longitude from
\ang{113.843} to \ang{114.283}), in which we obtain 1,282 tasks and
3,525 workers.

For synthetic data, we generate the locations of workers and tasks in a
2D data space $[0, 1]^2$, following either a Uniform (UNIFORM) or a
Skewed (SKEWED) distribution. For the Uniform distribution, 
we uniformly generate the locations of tasks/workers in the 2D data space.
For the Skewed distribution, we locate 90\% of the tasks/workers in
a Gaussian cluster (centered at (0.5, 0.5) with variance $0.2^2$),
and distribute the remaining ones uniformly.
\\section{Experimental Study}\n\\label{sec:exper}\n\n\\subsection{Experimental Methodology}\n\n\\noindent \\textbf{Data Sets.} We use both real and synthetic data to\ntest our proposed MS-SC approaches. Specifically, for real data, we\nuse the Meetup data set from \\cite{liu2012event}, which was crawled\nfrom \\textit{meetup.com} between Oct. 2011 and Jan. 2012. There are\n5,153,886 users, 5,183,840 events, and 97,587 groups in Meetup,\nwhere each user is associated with a location and a set of tags,\neach group is associated with a set of tags, and each event is\nassociated with a location and the group that created the event. For\neach event, we use the tags of its group as the event's tags. To\nconduct the experiments on our approaches, we use the locations and\ntags of users in Meetup to initialize the locations and the\npracticed skills of workers in our MS-SC problem. In addition, we\nutilize the locations and tags of events to initialize the locations\nand the required skills of tasks in our experiments. Since workers\nare unlikely to move between two distant cities to conduct one\nspatial task, and the constraints of time (i.e., $e_j$), budget\n(i.e., $B_j$) and distance (i.e., $d_i$) also prevent workers from\nmoving too far, we only consider those user-and-event pairs located\nin the same city. Specifically, we select one famous and popular\ncity, Hong Kong, and extract Meetup records from the area of Hong\nKong (with latitude from \\ang{22.209} to \\ang{22.609} and longitude\nfrom \\ang{113.843} to \\ang{114.283}), in which we obtain 1,282 tasks\nand 3,525 workers.\n\nFor synthetic data, we generate locations of workers and tasks in a\n2D data space $[0, 1]^2$, following either a Uniform (UNIFORM) or a\nSkewed (SKEWED) distribution. For the Uniform distribution, we\nuniformly generate the locations of tasks\/workers in the 2D data\nspace. For the Skewed distribution, we locate 90\\% of the\ntasks\/workers in a Gaussian cluster (centered at (0.5, 0.5) with\nvariance $0.2^2$), and distribute the rest uniformly. Then, for the\nskills of each worker, we randomly associate one user in the Meetup\ndata set with this worker, and use the tags of that user as his\/her\nskills in our MS-SC system. For the required skills of each task, we\nrandomly select an event, and use its tags as the required skills of\nthe task.\n\nFor both real and synthetic data sets, we simulate the velocity of\neach worker with a Gaussian distribution within the range $[v^-,\nv^+]$, for $v^-, v^+ \\in (0, 1)$. For the unit price, $C_i$, w.r.t.\nthe traveling distance of each worker, we generate it following the\nUniform distribution within the range $[C^-, C^+]$. Furthermore, we\nproduce the maximum moving distance of each worker, following the\nUniform distribution within the range $[d^-, d^+]$ (for $d^-, d^+\n\\in (0,1)$). For temporal constraints of tasks, we generate the\narrival deadlines of tasks, $e_j$, within the range $[rt^-, rt^+]$\nwith a Gaussian distribution. Finally, we set the budgets of tasks\nwith a Gaussian distribution within the range $[B^-, B^+]$. Here,\nfor Gaussian distributions, we linearly map data samples within\n$[-1, 1]$ of a Gaussian distribution $\\mathcal{N}(0, 0.2^2)$ to the\ntarget ranges (see the sketch below).\n\n
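To aid reproducibility, the following sketch shows one possible\nimplementation of the above generation procedure; it is ours, not\npart of our evaluated code, and it assumes that Gaussian samples\nfalling outside $[-1, 1]$ are simply redrawn, which the description\nabove leaves unspecified.\n\n{\\small\n\\begin{verbatim}\nimport java.util.Random;\n\npublic class SyntheticGenerator {\n  private final Random rnd = new Random();\n\n  \/\/ UNIFORM: locations uniform in [0, 1]^2; SKEWED: 90% of the\n  \/\/ locations in a Gaussian cluster centered at (0.5, 0.5) with\n  \/\/ standard deviation 0.2, and the rest uniform.\n  public double[] location(boolean skewed) {\n    if (skewed && rnd.nextDouble() < 0.9) {\n      return new double[] {0.5 + 0.2 * rnd.nextGaussian(),\n                           0.5 + 0.2 * rnd.nextGaussian()};\n    }\n    return new double[] {rnd.nextDouble(), rnd.nextDouble()};\n  }\n\n  \/\/ Draws from N(0, 0.2^2), keeps samples within [-1, 1], and\n  \/\/ maps them linearly onto [lo, hi] (e.g., budgets in [B-, B+]).\n  public double gaussianInRange(double lo, double hi) {\n    double x;\n    do { x = 0.2 * rnd.nextGaussian(); } while (x < -1 || x > 1);\n    return lo + (x + 1) \/ 2 * (hi - lo);\n  }\n}\n\\end{verbatim}}\n\n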
\\noindent\\textbf{MS-SC Approaches and Measures.} We conduct\nexperiments to compare our three approaches, GREEDY, $g$-D\\&C, and\nADAPTIVE, with a random method, namely RANDOM, which randomly\nassigns workers to tasks.\n\nIn particular, GREEDY selects a ``best'' worker-and-task assignment\nwith the highest score increase each time, which is a locally\noptimal approach. The $g$-D\\&C algorithm keeps dividing the problem\ninto $g$ subproblems on each level, until the number of tasks in\neach subproblem is 1 (such one-task subproblems are then solved by\nthe greedy algorithm). Here, the parameter $g$ can be estimated by a\ncost model to minimize the computing cost. The cost-model-based\nadaptive algorithm (ADAPTIVE) makes a trade-off between GREEDY and\n$g$-D\\&C, in terms of efficiency and accuracy, by adaptively\ndeciding the stopping level of the divide-and-conquer. To evaluate\nour three proposed approaches, we would ideally compare their\nresults with the ground truth. However, as proved in Section\n\\ref{sec:reduction}, the MS-SC problem is NP-hard, and it is thus\ninfeasible to compute the real optimal result as the ground truth.\nAlternatively, we compare the effectiveness of our three approaches\nwith that of the random (RANDOM) method, which repeatedly chooses a\nrandom task and randomly assigns a worker to it. For each MS-SC\ninstance, we run RANDOM 10 times, and report the result with the\nhighest score.\n\nTable \\ref{table2} depicts our experimental settings, where the\ndefault values of parameters are in bold font. In each set of\nexperiments, we vary one parameter, while setting the other\nparameters to their default values. For each experiment, we report\nthe running time and the assignment score of our tested approaches.\nAll our experiments were implemented in Java and run on an Intel\nXeon X5675 CPU @ 3.07 GHz with 32 GB of RAM.\n\n\\begin{table}[t]\n\t\\begin{center}\\vspace{-6ex}\n\t\t\\caption{\\small Experimental Settings.} \\label{table2}\n\t\t{\\scriptsize\n\t\t\t\\begin{tabular}{l|l}\n\t\t\t\t{\\bf \\qquad \\qquad \\quad Parameters} & {\\bf \\qquad \\qquad \\qquad Values} \\\\ \\hline \\hline\n\t\t\t\tthe number of tasks $m$ & 1K, 2K, \\textbf{5K}, 8K, 10K \\\\\n\t\t\t\tthe number of workers $n$ & 1K, 2K, \\textbf{5K}, 8K, 10K \\\\\n\t\t\t\tthe task budget range $[B^-, B^+]$ & [1, 5], \\textbf{[5, 10]}, [10, 15], [15, 20], [20, 25]\\\\\n\t\t\t\tthe velocity range $[v^-, v^+]$ & [0.1, 0.2], \\textbf{[0.2, 0.3]}, [0.3, 0.4], [0.4, 0.5]\\\\\n\t\t\t\tthe unit price w.r.t. distance $[C^-, C^+]$ & [10, 20], \\textbf{[20, 30]}, [30, 40], [40, 50]\\\\\n\t\t\t\tthe moving distance range $[d^-, d^+]$ & [0.1, 0.2], [0.2, 0.3], \\textbf{[0.3, 0.4]}, [0.4, 0.5]\\\\\n\t\t\t\tthe expiration time range $[rt^-, rt^+]$ & [0.25, 0.5], [0.5, 1], \\textbf{[1, 2]}, [2, 3], [3, 4]\\\\\n\t\t\t\t\\hline\n\t\t\t\\end{tabular}\n\t\t}\n\t\\end{center}\\vspace{-7ex}\n\\end{table}\n\n\\subsection{Experiments on Real Data}\n\nIn this subsection, we show the effects of the range of task budgets\n$[B^-, B^+]$, the range of workers' velocities $[v^-, v^+]$, and the\nrange of unit prices w.r.t. distance $[C^-, C^+]$.\n\n\\noindent\\textbf{Effect of the Range of Task Budgets $[B^-, B^+]$.}\nFigure \\ref{fig:budget_b} illustrates the experimental results for\ndifferent ranges, $[B^-, B^+]$, of task budgets $B_j$, from $[1,5]$\nto $[20,25]$. In Figure \\ref{subfig:b_score}, the assignment scores\nof all four approaches increase when the value range of task budgets\ngets larger, since larger task budgets also enlarge the flexible\nbudget $B_j'$ of each task. $g$-D\\&C and ADAPTIVE achieve higher\nscores than GREEDY. In contrast, RANDOM has the lowest score, which\nshows that our three proposed approaches are more effective. As\nshown in Figure \\ref{subfig:b_cpu}, the running times of our three\napproaches increase when the range of task budgets becomes larger.\nThe reason is that, when $B_j \\in [B^-, B^+]$ increases, each task\nhas more valid workers, which leads to higher complexity of the\nMS-SC problem and thus longer running times. The RANDOM approach is\nthe fastest (however, with the lowest assignment score), since it\ndoes not even search for locally optimal assignments. The ADAPTIVE\nalgorithm achieves a much lower running time than $g$-D\\&C (at a\nslightly higher time cost than GREEDY), but has a score comparable\nto that of $g$-D\\&C (and much higher than that of GREEDY), which\nshows the good performance of ADAPTIVE, compared with GREEDY and\n$g$-D\\&C.\n\n\\begin{figure}[ht]\\vspace{-2ex}\n\t\\centering\n\t\\subfigure[][{\\scriptsize Scores of Assignment}]{\n\t\t\\scalebox{0.2}[0.2]{\\includegraphics{budget_score.eps}}\n\t\t\\label{subfig:b_score}}\n\t\\subfigure[][{\\scriptsize Running Times}]{\n\t\t\\scalebox{0.2}[0.2]{\\includegraphics{budget_cpu.eps}}\n\t\t\\label{subfig:b_cpu}}\\vspace{-2ex}\n\t\\caption{\\small Effect of the Range of Task Budgets $[B^-, B^+]$ (Real Data).}\\vspace{-4ex}\n\t\\label{fig:budget_b}\n\\end{figure}\n\n\\noindent\\textbf{Effect of the Workers' Velocity Range $[v^-,\nv^+]$.} Figure \\ref{fig:velocity_v} reports the effect of the range\nof velocities, $[v^-, v^+]$, of workers over real data. 
As shown in\nFigure \\ref{subfig:v_score}, when the range of velocities increases\nfrom $[0.1,0.2]$ to $[0.2,0.3]$, the scores of all the approaches\nfirst increase; then, they stop growing as the velocity range varies\nfrom $[0.2, 0.3]$ to $[0.4, 0.5]$. The reason is that, at the\nbeginning, with the increase of velocities, workers can reach more\ntasks before their arrival deadlines. Eventually, however, workers\nare also constrained by their maximum moving distances, which\nprevents them from reaching additional tasks. ADAPTIVE achieves\nslightly higher scores than $g$-D\\&C, and much better assignment\nscores than GREEDY.\n\nIn Figure \\ref{subfig:v_cpu}, when the velocity range $[v^-, v^+]$\nincreases, the running times of our tested approaches also\nincrease, due to the cost of handling more valid worker-and-task\npairs. Similar to the previous results, RANDOM is the fastest, and\n$g$-D\\&C is the slowest. ADAPTIVE requires about 0.5-1.5 seconds,\nand has a lower time cost than $g$-D\\&C, which shows the efficiency\nof our proposed approaches.\n\n\\begin{figure}[ht]\\vspace{-2ex}\n\t\\centering\n\t\\subfigure[][{\\scriptsize Scores of Assignment}]{\n\t\t\\scalebox{0.2}[0.2]{\\includegraphics{v_score.eps}}\n\t\t\\label{subfig:v_score}}\n\t\\subfigure[][{\\scriptsize Running Times}]{\n\t\t\\scalebox{0.2}[0.2]{\\includegraphics{v_cpu.eps}}\n\t\t\\label{subfig:v_cpu}}\\vspace{-2ex}\n\t\\caption{\\small Effect of the Range of Velocities $[v^-, v^+]$ (Real Data).}\\vspace{-4ex}\n\t\\label{fig:velocity_v}\n\\end{figure}\n\n\\noindent\\textbf{Effect of the Range of Unit Prices w.r.t. Traveling\nDistance $[C^-, C^+]$.} In Figure \\ref{subfig:c_score}, when the\nunit prices w.r.t. traveling distance $C_i \\in [C^-, C^+]$\nincrease, the scores of all the approaches decrease. The reason is\nthat, when the range of unit prices $[C^-, C^+]$ increases, we need\nto pay workers higher traveling costs (for doing spatial tasks),\nwhich in turn decreases the flexible budget of each task. However,\nADAPTIVE still achieves the highest score among all four\napproaches; the scores of $g$-D\\&C are close to those of ADAPTIVE\nand higher than those of GREEDY.\n\nIn Figure \\ref{subfig:c_cpu}, when the range of unit prices, $[C^-,\nC^+]$, of the traveling cost increases, the number of valid\nworker-and-task pairs decreases, and thus the running times of all\nthe approaches also decrease. Our ADAPTIVE algorithm is faster than\n$g$-D\\&C, and slower than GREEDY.\n\n\\begin{figure}[ht]\\vspace{-2ex}\n\t\\centering\n\t\\subfigure[][{\\scriptsize Scores of Assignment}]{\n\t\t\\scalebox{0.2}[0.2]{\\includegraphics{C_score.eps}}\n\t\t\\label{subfig:c_score}}\n\t\\subfigure[][{\\scriptsize Running Times}]{\n\t\t\\scalebox{0.2}[0.2]{\\includegraphics{C_cpu.eps}}\n\t\t\\label{subfig:c_cpu}}\\vspace{-2ex}\n\t\\caption{\\small Effect of the Range of Unit Prices w.r.t. Traveling Distance $[C^-, C^+]$ (Real Data).} \\vspace{-1ex}\n\t\\label{fig:constanct_c}\\vspace{-3ex}\n\\end{figure}\n\nIn addition, we also tested the effects of the range, $[d^-, d^+]$,\nof maximum moving distances for workers, and the expiration time\nrange, $[rt^-, rt^+]$, of tasks over the real data set, Meetup. Due\nto space limitations, please refer to Appendix F for the\nexperimental results with respect to these parameters.\n\nFrom the experimental results on the real data above, ADAPTIVE\nachieves higher scores than GREEDY and $g$-D\\&C, and it is faster\nthan $g$-D\\&C and slower than GREEDY. 
Although $g$-D\\&C can achieve\nscores close to ADAPTIVE, it is the slowest among all four\napproaches.\n\n\\subsection{Experiments on Synthetic Data}\n\nIn this subsection, we test the effectiveness and robustness of our\nthree MS-SC approaches, GREEDY, $g$-D\\&C, and ADAPTIVE, compared\nwith RANDOM, by varying parameters (e.g., the number of tasks $m$\nand the number of workers $n$) on synthetic data sets. Due to space\nlimitations, we present the experimental results for tasks\/workers\nwith the Uniform distribution; for similar results with\ntasks\/workers following the Skewed distribution, please refer to\nAppendix G.\n\n\\noindent\\textbf{Effect of the Number of Tasks $m$.} Figure\n\\ref{fig:tasks_m} illustrates the effect of the number, $m$, of\nspatial tasks, by varying $m$ from $1K$ to $10K$ over synthetic data\nsets, where other parameters are set to their default values. For\nthe assignment scores in Figure \\ref{subfig:m_score}, $g$-D\\&C\nobtains the highest scores among all four approaches. ADAPTIVE\nperforms similarly to $g$-D\\&C, achieving comparable scores. GREEDY\nis not as good as $g$-D\\&C and ADAPTIVE, but is still much better\nthan RANDOM. When the number, $m$, of spatial tasks becomes larger,\nall our approaches achieve higher scores.\n\nIn Figure \\ref{subfig:m_cpu}, when $m$ increases, the running time\nalso increases. This is because we need to deal with more\nworker-and-task assignment pairs for larger $m$. The ADAPTIVE\nalgorithm is slower than GREEDY, and faster than $g$-D\\&C. In\naddition, we find that the running time of GREEDY follows the same\ntrend as that estimated by our cost model (as given in\nEq.~(\\ref{eq:greedy_approach_cost})).\n\n\\begin{figure}[ht]\\vspace{-2ex}\n\t\\centering\n\t\\subfigure[][{\\scriptsize Scores of Assignment}]{\n\t\t\\scalebox{0.2}[0.2]{\\includegraphics{m_score.eps}}\n\t\t\\label{subfig:m_score}}\n\t\\subfigure[][{\\scriptsize Running Times}]{\n\t\t\\scalebox{0.2}[0.2]{\\includegraphics{m_cpu.eps}}\n\t\t\\label{subfig:m_cpu}}\\vspace{-2ex}\n\t\\caption{\\small Effect of the Number of Tasks $m$ (Synthetic Data).}\\vspace{-4ex}\n\t\\label{fig:tasks_m}\n\\end{figure}\n\n\\noindent\\textbf{Effect of the Number of Workers $n$.} Figure\n\\ref{fig:workers_n} shows the experimental results with different\nnumbers of workers, $n$, from $1K$ to $10K$ over synthetic data,\nwhere other parameters are set to their default values. Similar to\nthe previous results on the effect of $m$, in Figure\n\\ref{subfig:n_score}, our three proposed approaches obtain high\nassignment scores, compared with RANDOM. Moreover, when the number,\n$n$, of workers increases, the scores of all our approaches also\nincrease. The reason is that, when $n$ increases, we have more\npotential workers who can be assigned to nearby tasks, which may\nlead to even higher scores.\n\nIn Figure \\ref{subfig:n_cpu}, the running time of our approaches\nincreases with the number of workers. 
This is due to the\nhigher cost of processing more workers (i.e., larger $n$). As\nbefore, ADAPTIVE has a higher time cost than GREEDY, and a lower\ntime cost than $g$-D\\&C.\n\n\\begin{figure}[ht]\\vspace{-2ex}\n\t\\centering\n\t\\subfigure[][{\\scriptsize Scores of Assignment}]{\n\t\t\\scalebox{0.2}[0.2]{\\includegraphics{n_score.eps}}\n\t\t\\label{subfig:n_score}}\n\t\\subfigure[][{\\scriptsize Running Times}]{\n\t\t\\scalebox{0.2}[0.2]{\\includegraphics{n_cpu.eps}}\n\t\t\\label{subfig:n_cpu}}\\vspace{-2ex}\n\t\\caption{\\small Effect of the Number of Workers $n$ (Synthetic Data).}\\vspace{-1ex}\n\t\\label{fig:workers_n}\\vspace{-3ex}\n\\end{figure}\n\nIn summary, over synthetic data sets, our ADAPTIVE algorithm trades\naccuracy for efficiency, and its scores and running times lie\nbetween those of GREEDY and $g$-D\\&C.\n\n\\section{Related Work}\n\\label{sec:related}\n\nRecently, with the popularity of GPS-equipped smart devices and\nwireless networks (e.g., Wi-Fi and 4G), spatial crowdsourcing\n\\cite{deng2013maximizing, kazemi2012geocrowd}, which performs\nlocation-based tasks, has emerged and become increasingly important\nin both academia and industry. In this section, we review the\nrelated work on spatial crowdsourcing, as well as the set cover\nproblem (and its variants).\n\n\\vspace{0.5ex}\\noindent {\\bf Spatial Crowdsourcing.} Prior works on\ncrowdsourcing \\cite{alt2010location, bulut2011crowdsourcing}\nusually treated location information merely as a parameter when\ndistributing tasks to workers; in these problems, workers are not\nrequired to accomplish tasks on site.\n\nIn contrast, the spatial crowdsourcing platform\n\\cite{kazemi2012geocrowd} requires workers to physically move to the\nspecific locations of tasks, and perform the requested services,\nsuch as taking photos\/videos, waiting in line at shopping malls,\nand decorating a room. As an example, some previous works\n\\cite{cornelius2008anonysense, kanhere2011participatory} studied\nsmall-scale or specialized campaigns benefiting from\n\\textit{participatory sensing} techniques, which utilize workers'\nsmart devices to sense\/collect data for real applications.\n\nKazemi and Shahabi \\cite{kazemi2012geocrowd} classified spatial\ncrowdsourcing systems from two perspectives: workers' motivation\nand publishing models. From the perspective of workers' motivation,\nspatial crowdsourcing can be categorized into two groups:\nreward-based, in which workers receive rewards according to the\nservices they supply, and self-incentivized, in which workers\nconduct tasks voluntarily. In our work, we study the MS-SC problem\nbased on the reward-based model, where workers are paid for doing\ntasks. However, with a different goal, our MS-SC problem aims to\nassign workers to tasks by using our proposed algorithms, such that\nthe required skills of tasks can be covered, and the total reward\nbudget (i.e., the flexible budget $B_j'$ in\nEq.~(\\ref{eq:flexible_budget})) can be maximized. 
Note that we can\nembed incentive mechanisms from existing works \\cite{rula2014no,\nyang2012crowdsourcing} into our MS-SC framework to distribute\nrewards (flexible budgets) among workers, which is, however, not\nthe focus of this paper.\n\nAccording to the publishing modes of spatial tasks, spatial\ncrowdsourcing can also be classified into two categories:\n\\textit{worker selected tasks} (WST) and \\textit{server assigned\ntasks} (SAT) \\cite{kazemi2012geocrowd}. In particular, in the WST\nmode, spatial tasks are broadcast to all workers, and workers can\nselect any tasks by themselves. In contrast, in the SAT mode, the\nspatial crowdsourcing server directly assigns tasks to workers,\nbased on the location information of tasks\/workers.\n\nSome prior works \\cite{alt2010location, deng2013maximizing} on the\nWST mode allowed workers to select available tasks based on their\npersonal preferences. For the SAT mode, previous works focused on\nassigning available workers to tasks in the system, such that the\nnumber of assigned tasks on the server side\n\\cite{kazemi2012geocrowd}, the number of workers' self-selected\ntasks on the client side \\cite{deng2013maximizing}, or the\nreliability-and-diversity score of assignments\n\\cite{cheng2014reliable} is maximized. For example, Cheng et al.\n\\cite{cheng2014reliable} aim to obtain a worker-and-task assignment\nstrategy such that the assignment score (w.r.t. spatial\/temporal\ndiversity and reliability of tasks) is maximized.\n\nIn contrast, our MS-SC problem has a different, yet more general,\ngoal, which maximizes the total assignment score (i.e., the\nflexible budget, given by the total budget of the completed tasks\nminus the total traveling cost of workers). Most importantly, in\nour MS-SC problem, we need to consider several constraints, such as\nthe skill-covering, budget, time, and distance constraints. That\nis, the required skill sets of spatial tasks should be fully\ncovered by the skills of the assigned workers, which makes the\nproblem NP-hard and intractable. Thus, previous techniques\n\\cite{cheng2014reliable,deng2013maximizing,kazemi2012geocrowd} for\ndifferent spatial crowdsourcing problems cannot be directly applied\nto our MS-SC problem.\n\nNote that the abbreviation SAT is also commonly used for the\nBoolean satisfiability problem, for which standard SAT solvers have\nbeen developed. However, such solvers can only handle decision\nproblems (i.e., NP-complete problems), rather than optimization\nproblems (i.e., NP-hard problems, like our MS-SC problem). Thus, we\nneed to design specific heuristic algorithms for tackling the MS-SC\nproblem.\n\nMoreover, some previous works \\cite{pournajaf2014spatial,\nto2014framework} utilized \\textit{differential privacy} techniques\n\\cite{dwork2008differential} to protect the location information\nthat is used for the assignment, since it may reveal sensitive\nlocation\/trajectory data (enabling malicious attacks).\nNevertheless, this privacy issue is beyond the scope of this paper.\n\n\\vspace{0.5ex}\\noindent {\\bf Set Cover Problem.} As mentioned in\nLemma \\ref{lemma:np}, the \\textit{set cover problem} (SCP) is a\nclassical NP-hard problem, which aims to choose a collection of\nsubsets that covers a universe set, such that the number of\nselected subsets is minimized. SCP is actually a special case of\nour MS-SC problem, in which there exists only one spatial task. 
However,\nin most situations, we have more than one spatial task in the\nspatial crowdsourcing system, which makes the problem more complex,\nand thus more challenging, to tackle.\n\nA direct variant of SCP is the \\textit{weighted set cover problem},\nwhich associates each subset with a weight. The well-known greedy\nalgorithm \\cite{vazirani2013approximation} achieves an\napproximation ratio of $H(N) \\approx \\ln(N)$, where $H(N) =\n\\sum_{i=1}^{N} 1\/i$ is the $N$-th harmonic number and $N$ is the\nsize of the universe set. Other SCP variants, such as the\n\\textit{set multicover problem} (SMC) and the \\textit{multiset\nmulticover problem} (MSMC), focus on covering each element of the\nuniverse set at least a specified number of times, using sets (in\nSMC, each element has just one copy in a subset) or multisets (in\nMSMC, each element has a specified number of copies in a multiset).\nSun and Li \\cite{sun2005mechanism} studied the \\textit{set cover\ngames} problem, which covers multiple sets; however, they focused\non designing a mechanism that enables each single task to obtain a\nlocally optimal result. In contrast, our work aims to obtain a\nglobally optimal solution that maximizes the assignment score.\n\nDifferent from SCP and its variants, which cover only one universe\nset, our MS-SC problem targets covering multiple sets, such that\nthe assignment score is maximized. Furthermore, our MS-SC problem\nis also constrained by budget, time, and distance, which makes it\nmuch more challenging than SCP. To the best of our knowledge, no\nprior work on SCP (or its variants) has studied the MS-SC problem,\nand existing techniques cannot be directly used to tackle the MS-SC\nproblem.\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\nIn this paper, we propose the problem of \\textit{multi-skill\noriented spatial crowdsourcing} (MS-SC), which assigns\ntime-constrained and multi-skill-required spatial tasks to\ndynamically moving workers, such that the required skills of tasks\nare covered by the skills of the assigned workers and the\nassignment score is maximized. We prove that processing the MS-SC\nproblem is NP-hard, and we thus propose three approximation\napproaches (i.e., the greedy, $g$-D\\&C, and cost-model-based\nadaptive algorithms), which can efficiently retrieve MS-SC answers.\nExtensive experiments have shown the efficiency and effectiveness\nof our proposed MS-SC approaches on both real and synthetic data\nsets.\n\n\\balance\n\n\\bibliographystyle{abbrv}\n