\section{Introduction}\n\setcounter{equation}{0}\n\citeAPY{Berryman:1990:VCE} found that classical variational principles could be used to obtain information about the\nconductivity inside a body from electrical measurements on the exterior. In this paper our main focus is on using classical\nvariational principles and known bounds on the response of periodic composites to bound\nthe volume fraction of one phase in a two-phase body $\Omega$ from measurements on the exterior of the body. Of course if one\nknows the mass densities of the two phases, the easiest way to do this is just to weigh the body. However, this\nmay not always be practical, or the densities of the two phases may be very close.\n\nTwo types of boundary conditions are most natural: what we call special Dirichlet conditions, where affine Dirichlet\nconditions are imposed on the boundary of $\Omega$ (which would render the field inside $\Omega$ uniform if the body were\nhomogeneous), and what we call special Neumann conditions, where Neumann conditions are imposed that would render the field inside $\Omega$\nuniform if the body were homogeneous. Bounds on the electrical and elastic response of the body to these special boundary conditions were obtained\nby Nemat-Nasser and Hori (\citeyearNP{Nemat-Nasser:1993:MOP}, \citeyearNP{Nemat-Nasser:1995:UBO}), and were extended to piezoelectricity\nby \citeAPY{Hori:1998:UBE}. They called these bounds universal because they did not depend on any assumption about the microgeometry in the body.\nThey obtained both elementary universal bounds based on the classical variational principles (reviewed below in section 3), and\nuniversal bounds based on the Hashin-Shtrikman variational principles (Hashin and Shtrikman \citeyearNP{Hashin:1962:VAT}, \citeyearNP{Hashin:1963:VAT}).\nThe latter bounds were obtained under the assumption that $\Omega$ is either an ellipsoid or a parallelepiped, but we will see here that they\ncan easily be improved and generalized to bodies $\Omega$ of arbitrary shape. The key is to consider an assemblage of copies of $\Omega$ packed to\nfill all space, and then to use the bounds of \citeAPY{Huet:1990:AVC} which relate the effective tensor of this composite to the\nresponses of $\Omega$ under the special boundary conditions. Then existing bounds on the effective tensor [as surveyed in the books\nof \citeAPY{Nemat-Nasser:1993:MOP}, \citeAPY{Cherkaev:2000:VMS}, \citeAPY{Allaire:2002:SOH}, \citeAPY{Milton:2002:TC}, and \citeAPY{Tartar:2009:GTH}]\ncan be directly applied to bound the responses of $\Omega$ under special boundary conditions (see sections 5, 6, 7, and 8).\nSince these bounds involve the volume fractions of the phases\n(and the moduli of the phases), they can be used in an inverse fashion to bound the volume fraction. 
As shown by \citeAPY{Kang:2011:SBV},\nthe volume fraction bounds thus obtained\nfor electrical conductivity generalize those obtained by Capdeboscq and Vogelius (\citeyearNP{Capdeboscq:2003:OAE}, \citeyearNP{Capdeboscq:2004:RSR})\nfor the important case when the volume fraction is asymptotically small.\n\n Given the close connection between bounds on effective tensors\nand bounds on the responses of $\Omega$ under special boundary conditions, a natural question to ask is whether methods that have been used\nto derive bounds on effective tensors could be directly used to derive bounds on the response of $\Omega$ under more general boundary conditions.\nOne such method is the Hashin-Shtrikman (\citeyearNP{Hashin:1962:VAT}, \citeyearNP{Hashin:1963:VAT}) variational method, and this led\nNemat-Nasser and Hori to their bounds for ellipsoidal or parallelepipedic $\Omega$. Another particularly successful method is the translation method\n(\citeAY{Tartar:1979:ECH}; Lurie and Cherkaev \citeyearNP{Lurie:1982:AEC}, \citeyearNP{Lurie:1984:EEC}; \citeAY{Murat:1985:CVH}; \citeAY{Tartar:1985:EFC}; \citeAY{Milton:1990:CSP}),\nand indeed as shown in a companion paper (\citeAY{Kang:2011:SBV}) this method yields upper and lower bounds on the volume fraction in a\ntwo-phase body with general boundary conditions for two-dimensional conductivity without making any assumption on the shape of $\Omega$.\nFor special boundary conditions the bounds thus derived reduce to the ones obtained here.\n\nWe also provide (in section 4) some new conductivity bounds which involve the results of just one (flux, voltage) pair\nmeasured at the boundary of $\Omega$, and which improve upon the elementary bounds of \citeAPY{Nemat-Nasser:1993:MOP}.\nAgain these new bounds can be used in an inverse fashion to bound the volume fraction. Other volume fraction\nbounds using one measurement were derived by \citeAPY{Kang:1997:ICP},\n\citeAPY{Ikehata:1998:SEI}, \citeAPY{Alessandrini:1998:ICP}, \citeAPY{Alessandrini:2000:OSE}, and \citeAPY{Alessandrini:2002:DCE}.\nThese other bounds involve constants which are not easy to determine, making a general comparison with\nour new bounds difficult.\n\nThe various bounds on the volume fraction we have derived are too numerous to summarize in this introduction.\nHowever, we want to draw attention to the bounds \eq{3.12} and \eq{3.21}, which are the natural extension of the\nfamous \citeAPY{Hashin:1962:VAT} conductivity bounds to this problem. 
Also of particular note is the bound \eq{5.18ag},\nwhich is one natural generalization of the bulk modulus bounds of \citeAPY{Hashin:1963:VAT} and \citeAPY{Hill:1963:EPR},\nand implies that a bound on the volume fraction can be obtained by simply immersing the body in a water-filled\ncylinder with a piston at one end and measuring the change in water pressure when the piston is displaced by a known\nsmall amount.\n\n\n\section{The conductivity response tensors with special Dirichlet and special Neumann boundary conditions}\n\setcounter{equation}{0}\n\n\nIn electrical impedance tomography of a body $\Omega$ containing two isotropic components with (positive, scalar)\nconductivities $\sigma_1$ and $\sigma_2$, the potential $V$ satisfies\n\begin{equation} \nabla \cdot\sigma\nabla V=0, \quad {\rm where}\quad \sigma({\bf x})=\chi({\bf x})\sigma_1+(1-\chi({\bf x}))\sigma_2,\n\eeq{1.1}\nand $\chi({\bf x})$ is the indicator function of component $1$, taking the value $1$ in component $1$ and $0$ in component\n$2$. Equivalently, in terms of the current field ${\bf j}({\bf x})$ and electric field ${\bf e}({\bf x})$ we have\n\begin{equation} \nabla \cdot{\bf j}=0,\quad{\bf j}=\sigma{\bf e},\quad {\bf e}=-\nabla V. \eeq{1.2} \nLet us assume the components have been labeled so that $\sigma_1\geq\sigma_2$. We are given a set of Cauchy data, i.e.\nmeasurements of pairs $(V_0,q)$, where $V_0({\bf x})$ and $q({\bf x})$ are the boundary values of the voltage $V({\bf x})$\nand the flux $q({\bf x})=-{\bf n}\cdot{\bf j}({\bf x})$ at the boundary $\partial\Omega$ of $\Omega$, in which ${\bf n}({\bf x})$ is the outwards\nnormal to the boundary. From this boundary information we can immediately determine, using integration by parts,\nvolume averages such as\n\begin{eqnarray} \langle {\bf e}\cdot{\bf j}\rangle &= & \frac{1}{|\Omega|}\int_{\partial\Omega}-V_0({\bf j}\cdot{\bf n})= \frac{1}{|\Omega|}\int_{\partial\Omega}V_0q, \nonumber \\\n \langle {\bf e}\rangle & = & \frac{1}{|\Omega|}\int_{\partial\Omega}-V_0{\bf n}, \nonumber \\\n \langle {\bf j}\rangle & = & \frac{1}{|\Omega|}\int_{\partial\Omega}-{\bf x} q,\n\eeqa{1.3}\nwhere the angular brackets denote the volume average, i.e.\n\begin{equation} \langle g \rangle=\frac{1}{|\Omega|}\int_{\Omega} g, \eeq{1.4}\nfor any quantity $g({\bf x})$. From such averages our objective is to bound the\nvolume fraction $f_1=\langle \chi\rangle$ of component 1 (and hence also the volume fraction $f_2=1-f_1$ of component\n2).\n\nTo obtain good estimates of the volume fraction $f_1$ it makes physical sense to use\nmeasurements where the fields ${\bf e}({\bf x})$ and ${\bf j}({\bf x})$ probe well into the interior of $\Omega$. In this\nconnection two sets of measurements are most natural. We could apply special Dirichlet boundary conditions\n\begin{equation} V_0=-{\bf e}_0\cdot{\bf x}, \eeq{1.5}\nand measure ${\bf j}_0=\langle{\bf j}\rangle$. Here, according to \eq{1.3}, ${\bf e}_0$ equals $\langle{\bf e}\rangle$. Since\n${\bf j}_0$ is linearly related to ${\bf e}_0$ we can write\n\begin{equation} {\bf j}_0=\bfm\sigma^D{\bf e}_0, \eeq{1.6} \nwhich defines the conductivity tensor $\bfm\sigma^D$ ($D$ for Dirichlet). To determine $\bfm\sigma^D$\nin dimension $d=2,3$ it of course suffices to measure ${\bf j}_0$ for $d$ linearly independent values of ${\bf e}_0$. 
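As a concrete illustration, a minimal numerical sketch of this reconstruction (in Python, with hypothetical measured data; the columns of E are the applied fields ${\bf e}_0$ and the columns of J the corresponding measured averages ${\bf j}_0$) is\n\begin{verbatim}\nimport numpy as np\n\n# Hypothetical data: column k of E is the k-th applied field e_0,\n# column k of J is the measured average current j_0 = sigma_D e_0.\nE = np.array([[1.0, 0.0],\n              [0.0, 1.0]])\nJ = np.array([[1.3, 0.1],\n              [0.1, 1.6]])\n\nsigma_D = J @ np.linalg.inv(E)       # since J = sigma_D E\nsigma_D = 0.5*(sigma_D + sigma_D.T)  # symmetrize: sigma_D is\n                                     # self-adjoint (shown below)\nprint(sigma_D)\n\end{verbatim}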
\nAlternatively we could apply the special Neumann boundary conditions\n\begin{equation} q=-{\bf j}_0\cdot{\bf n}, \eeq{1.7}\nand measure ${\bf e}_0=\langle{\bf e}\rangle$. Again according to \eq{1.3}, ${\bf j}_0=\langle{\bf j}\rangle$. The linear relation\nbetween ${\bf e}_0$ and ${\bf j}_0$,\n\begin{equation} {\bf e}_0=(\bfm\sigma^N)^{-1}{\bf j}_0 \eeq{1.8}\ndefines the resistivity tensor $(\bfm\sigma^N)^{-1}$ and hence the conductivity tensor $\bfm\sigma^N$ ($N$ for Neumann):\nwe will see later that $(\bfm\sigma^N)^{-1}$ is invertible. To determine $\bfm\sigma^N$ it suffices to measure ${\bf e}_0$\nfor $d$ linearly independent values of ${\bf j}_0$. With either of these two sorts of boundary conditions (but\nnot in general), \citeAPY{Hill:1963:EPR} has shown that\n\begin{equation} \langle {\bf e}\cdot{\bf j}\rangle=\langle{\bf e}\rangle\cdot\langle{\bf j}\rangle, \eeq{1.9}\nas follows by substituting \eq{1.5} or \eq{1.7} in the first of equations \eq{1.3}.\nUsing this relationship, and its obvious generalizations, it is easy to check that both $\bfm\sigma^D$ and\n$\bfm\sigma^N$ are self-adjoint. Thus if ${\bf e}'({\bf x})$ and ${\bf j}'({\bf x})$ denote the electric and current fields\nassociated with the boundary conditions \eq{1.5}, with ${\bf e}_0$ replaced by ${\bf e}_0'$, while keeping\nthe same conductivity $\sigma({\bf x})$, then\n\begin{equation} {\bf e}_0'\cdot\bfm\sigma^D{\bf e}_0=\langle{\bf e}'\cdot{\bf j}\rangle=\langle{\bf e}'\cdot\sigma{\bf e}\rangle\n=\langle{\bf e}\cdot{\bf j}'\rangle={\bf e}_0\cdot\bfm\sigma^D{\bf e}_0',\n\eeq{1.10}\nwhich implies $\bfm\sigma^D$ is self-adjoint. By a similar argument $\bfm\sigma^N$ is self-adjoint.\n\n\n\n\section{Known elementary bounds}\n\setcounter{equation}{0}\n\nThis section reviews the elementary bounds on $\bfm\sigma^D$ and $\bfm\sigma^N$ obtained by\n\citeAPY{Nemat-Nasser:1993:MOP} and by Willis\nin a 1989 private communication to Nemat-Nasser and Hori. Their implications for bounding the volume fraction\nwill be studied. We will make use of two classical variational principles: the Dirichlet variational principle that\n\begin{equation} \min_{\matrix{{\underline{{\bf e}}}({\bf x})=-\nabla{\underline{V}}({\bf x})\n\cr {\underline{V}}({\bf x})=V_0({\bf x})~{\rm on}~\partial\Omega}}\int_{\Omega}\underline{{\bf e}}\cdot\sigma\underline{{\bf e}}\n=\int_{\partial\Omega}V_0q,\n\eeq{2.1}\nwhich is attained when $\underline{V}({\bf x})=V({\bf x})$, and the Neumann variational principle that\n\begin{equation} \min_{\matrix{{\underline{{\bf j}}}({\bf x})\cr \nabla \cdot{\underline{{\bf j}}}({\bf x})=0 \cr\n {\bf n}\cdot{\underline{{\bf j}}}({\bf x})=-q({\bf x})~{\rm on}~\partial\Omega}}\int_{\Omega}\underline{{\bf j}}\cdot\sigma^{-1}\underline{{\bf j}}\n=\int_{\partial\Omega}V_0q,\n\eeq{2.2}\nwhich is attained when $\underline{{\bf j}}({\bf x})={\bf j}({\bf x})$. 
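For completeness we recall the standard argument behind \eq{2.1}: writing $\underline{{\bf e}}={\bf e}+\delta{\bf e}$ with $\delta{\bf e}=-\nabla\delta V$ and $\delta V=0$ on $\partial\Omega$, we have\n\[ \int_{\Omega}\underline{{\bf e}}\cdot\sigma\underline{{\bf e}}=\int_{\Omega}{\bf e}\cdot\sigma{\bf e}+2\int_{\Omega}\delta{\bf e}\cdot{\bf j}+\int_{\Omega}\delta{\bf e}\cdot\sigma\,\delta{\bf e}, \]\nin which the middle term vanishes, since $\int_{\Omega}\nabla\delta V\cdot{\bf j}=\int_{\partial\Omega}\delta V({\bf j}\cdot{\bf n})-\int_{\Omega}\delta V\nabla \cdot{\bf j}=0$, and the last term is non-negative; the value at the minimum is $\int_{\Omega}{\bf e}\cdot{\bf j}=\int_{\partial\Omega}V_0q$ by the first identity in \eq{1.3}. The argument for \eq{2.2} is entirely analogous.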
With the special Dirichlet boundary conditions\n\eq{1.5} the Dirichlet variational principle implies\n\begin{equation} \min_{\matrix{{\underline{{\bf e}}}({\bf x})=-\nabla{\underline{V}}({\bf x})\n\cr {\underline{V}}({\bf x})=-{\bf e}_0\cdot{\bf x}~{\rm on}~\partial\Omega}}\langle\underline{{\bf e}}\cdot\sigma\underline{{\bf e}}\rangle\n={\bf e}_0\cdot\bfm\sigma^D{\bf e}_0.\n\eeq{2.3}\nTaking a trial potential $\underline{V}=-{\bf e}_0\cdot{\bf x}$ produces the elementary upper bound on ${\bf e}_0\cdot\bfm\sigma^D{\bf e}_0$,\n\begin{equation} {\bf e}_0\cdot\bfm\sigma^D{\bf e}_0\leq \langle\sigma\rangle{\bf e}_0\cdot{\bf e}_0,\n\eeq{2.4}\ngiven by \citeAPY{Nemat-Nasser:1993:MOP}.\nTo obtain a lower bound observe, following a standard argument, that the left hand side of \eq{2.3} is surely\ndecreased if we take the minimum over a larger class of trial fields. Since the constraints on\n${\underline{{\bf e}}}({\bf x})$ imply $\langle{\underline{{\bf e}}}\rangle={\bf e}_0$, let us replace them by this weaker constraint to\nobtain the inequality\n\begin{equation} {\bf e}_0\cdot\bfm\sigma^D{\bf e}_0\geq \min_{\matrix{{\underline{{\bf e}}}({\bf x})\cr\langle{\underline{{\bf e}}}\rangle={\bf e}_0}}\n\langle\underline{{\bf e}}\cdot\sigma\underline{{\bf e}}\rangle,\n\eeq{2.5}\nwhere the minimum is now over fields ${\underline{{\bf e}}}$ which are not necessarily curl-free. Using Lagrange\nmultipliers one finds that the minimum is attained when ${\underline{{\bf e}}}=\sigma^{-1}\langle\sigma^{-1}\rangle^{-1}{\bf e}_0$,\nand so we obtain the lower bound\n\begin{equation} {\bf e}_0\cdot\bfm\sigma^D{\bf e}_0\geq \langle\sigma^{-1}\rangle^{-1}{\bf e}_0\cdot{\bf e}_0\n\eeq{2.6}\nof \citeAPY{Nemat-Nasser:1993:MOP}.\nTaken together, \eq{2.4} and \eq{2.6} imply the lower and upper bounds\n\begin{equation} \left(\frac{{\bf e}_0\cdot\bfm\sigma^D{\bf e}_0}{{\bf e}_0\cdot{\bf e}_0}-\sigma_2\right)\/(\sigma_1-\sigma_2)\n\leq f_1 \leq \left(\sigma_2^{-1}-\frac{{\bf e}_0\cdot{\bf e}_0}{{\bf e}_0\cdot\bfm\sigma^D{\bf e}_0}\right)\/(\sigma_2^{-1}-\sigma_1^{-1})\n\eeq{2.7}\non the volume fraction $f_1$. These bounds give useful information even if we know ${\bf j}_0=\bfm\sigma^D{\bf e}_0$\nfor only one value of ${\bf e}_0$, i.e. if we only take one measurement. These bounds \eq{2.7} are sharp in\nthe sense that the lower bound is approached arbitrarily closely if $\Omega$ is filled with a periodic laminate\nof components 1 and 2, oriented with the normal to the layers orthogonal to ${\bf e}_0$, and we let the period\nlength go to zero, while the upper bound is approached arbitrarily closely for the same geometry, but\noriented with the normal to the layers parallel to ${\bf e}_0$. If the full tensor $\bfm\sigma^D$ is known,\nfrom $d=2,3$ measurements of pairs $({\bf e}_0,{\bf j}_0)$,\nthen we can take the intersection of the bounds \eq{2.7} as ${\bf e}_0$ is varied, and so obtain\n\begin{equation} (\lambda^D_+-\sigma_2)\/(\sigma_1-\sigma_2)\n\leq f_1 \leq (1\/\sigma_2-1\/\lambda^D_-)\/(\sigma_2^{-1}-\sigma_1^{-1}),\n\eeq{2.8}\nwhere $\lambda^D_+$ and $\lambda^D_-$ are the maximum and minimum eigenvalues of $\bfm\sigma^D$. 
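A minimal numerical sketch of how \eq{2.8} might be evaluated (with hypothetical values of $\sigma_1$, $\sigma_2$ and of the measured tensor $\bfm\sigma^D$) is\n\begin{verbatim}\nimport numpy as np\n\nsigma1, sigma2 = 5.0, 1.0            # hypothetical phase conductivities\nsigma_D = np.array([[1.3, 0.1],\n                    [0.1, 1.6]])     # hypothetical measured tensor\n\nlam = np.linalg.eigvalsh(sigma_D)    # eigenvalues in ascending order\nlam_minus, lam_plus = lam[0], lam[-1]\n\nf1_lower = (lam_plus - sigma2)/(sigma1 - sigma2)\nf1_upper = (1/sigma2 - 1/lam_minus)/(1/sigma2 - 1/sigma1)\nprint(max(f1_lower, 0.0), min(f1_upper, 1.0))\n\end{verbatim}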
However\nwe will see in the next section that an additional and typically sharper upper bound on $f_1$ can be obtained.\n\nWith the special Neumann boundary conditions \eq{1.7} the variational principle \eq{2.2} implies\n\begin{equation} \min_{\matrix{{\underline{{\bf j}}}({\bf x})\cr \nabla \cdot{\underline{{\bf j}}}({\bf x})=0 \cr\n {\bf n}\cdot{\underline{{\bf j}}}({\bf x})={\bf n}\cdot{\bf j}_0~{\rm on}~\partial\Omega}}\n\langle\underline{{\bf j}}\cdot\sigma^{-1}\underline{{\bf j}}\rangle={\bf j}_0\cdot(\bfm\sigma^N)^{-1}{\bf j}_0.\n\eeq{2.9}\nBy taking a constant trial field ${\underline{{\bf j}}}({\bf x})={\bf j}_0$, or alternatively by taking the minimum over the larger\nclass of trial fields satisfying only $\langle{\underline{{\bf j}}}\rangle={\bf j}_0$, we\nobtain the bounds\n\begin{equation} \langle\sigma\rangle^{-1}{\bf j}_0\cdot{\bf j}_0\leq{\bf j}_0\cdot(\bfm\sigma^N)^{-1}{\bf j}_0\leq \langle\sigma^{-1}\rangle{\bf j}_0\cdot{\bf j}_0\n\eeq{2.10}\nof \citeAPY{Nemat-Nasser:1993:MOP} which imply\n\begin{equation} \left(\frac{{\bf j}_0\cdot{\bf j}_0}{{\bf j}_0\cdot(\bfm\sigma^N)^{-1}{\bf j}_0}-\sigma_2\right)\/(\sigma_1-\sigma_2)\n\leq f_1 \leq \left(\sigma_2^{-1}-\frac{{\bf j}_0\cdot(\bfm\sigma^N)^{-1}{\bf j}_0}{{\bf j}_0\cdot{\bf j}_0}\right)\/(\sigma_2^{-1}-\sigma_1^{-1}).\n\eeq{2.11}\nThese bounds are applicable even if we know ${\bf e}_0=(\bfm\sigma^N)^{-1}{\bf j}_0$\nfor only one value of ${\bf j}_0$. For comparison, with these special Neumann boundary conditions \eq{1.7},\nthe bounds in Theorem 3.1 of \citeAPY{Kang:1997:ICP}, coupled with the improvement in proposition 0\nof \citeAPY{Ikehata:1998:SEI},\nwith $\sigma_1=k>1$, $\sigma_2=1$ and ${\bf j}_0\cdot{\bf j}_0=1$, imply\n\begin{equation}\n\frac{1}{k-1}(1-{\bf j}_0\cdot(\bfm\sigma^N)^{-1}{\bf j}_0)\n\leq f_1 \leq \frac{k}{k-1}(1-{\bf j}_0\cdot(\bfm\sigma^N)^{-1}{\bf j}_0).\n\eeq{2.12}\nIn this case it is easy to check that the upper bounds in \eq{2.11} and \eq{2.12} coincide, while the lower bound\nin \eq{2.11} is tighter. The bounds \eq{2.11} are each approached arbitrarily closely if $\Omega$ is filled with a periodic laminate\nof components 1 and 2, oriented with ${\bf j}_0$ either parallel or orthogonal to the layers,\nand we let the period length go to zero.\n\nIn summary, \eq{2.4}, \eq{2.6} and \eq{2.10} imply the matrix inequalities\n\begin{equation} \langle\sigma^{-1}\rangle^{-1}{\bf I}\leq\bfm\sigma^D\leq \langle\sigma\rangle{\bf I},\quad\quad\n \langle\sigma^{-1}\rangle^{-1}{\bf I}\leq\bfm\sigma^N\leq \langle\sigma\rangle{\bf I}\n\eeq{2.13}\nof \citeAPY{Nemat-Nasser:1993:MOP}.\n\nFor arbitrary boundary conditions, i.e. for any ${\bf e}$ and ${\bf j}$ satisfying \eq{1.2} within $\Omega$,\nwe have the bounds\n\begin{equation} \langle{\bf e}\cdot{\bf j}\rangle \geq {\bf e}_0\cdot\bfm\sigma^N{\bf e}_0,\quad\n\langle{\bf e}\cdot{\bf j}\rangle \geq {\bf j}_0\cdot(\bfm\sigma^D)^{-1}{\bf j}_0,\n\eeq{2.14}\nwhere ${\bf e}_0=\langle{\bf e}\rangle$ and ${\bf j}_0=\langle{\bf j}\rangle$,\ndue to Willis in a 1989 private communication to Nemat-Nasser and Hori, and presented by \citeAPY{Nemat-Nasser:1993:MOP}. 
In conjunction with \eq{2.13} they imply\nthe volume fraction bounds\n\begin{equation} \left(\frac{{\bf j}_0\cdot{\bf j}_0}{\langle{\bf e}\cdot{\bf j}\rangle}-\sigma_2\right)\/(\sigma_1-\sigma_2)\n\leq f_1 \leq \left(\sigma_2^{-1}-\frac{{\bf e}_0\cdot{\bf e}_0}{\langle{\bf e}\cdot{\bf j}\rangle}\right)\/(\sigma_2^{-1}-\sigma_1^{-1}).\n\eeq{2.15}\n\n\n\section{New bounds with one measurement}\n\setcounter{equation}{0}\n\n\nIf we have measurements of $\langle{\bf e}\cdot{\bf j}\rangle$ and both vectors ${\bf e}_0$ and ${\bf j}_0$ for arbitrary boundary\nconditions, then the bounds \eq{2.14} and \eq{2.15} can be improved. The classical variational principle\n\eq{2.1} implies\n\begin{equation}\n\min_{\matrix{{\underline{{\bf e}}}({\bf x})=-\nabla{\underline{V}}({\bf x})\n\cr {\underline{V}}({\bf x})=V_0({\bf x})~{\rm on}~\partial\Omega \cr\n\langle\sigma\underline{{\bf e}}\rangle={\bf j}_0}}\langle\underline{{\bf e}}\cdot\sigma\underline{{\bf e}}\rangle\n=\langle{\bf e}\cdot{\bf j}\rangle,\n\eeq{2.16}\nwhere we have chosen to add the constraint that $\langle\sigma\underline{{\bf e}}\rangle={\bf j}_0$ since we know that\nwithout this constraint the minimizer $\underline{{\bf e}}={\bf e}$ satisfies $\langle\sigma{\bf e}\rangle={\bf j}_0$. We surely\nobtain something lower if we take the minimum over the larger class of fields satisfying only\n$\langle\underline{{\bf e}}\rangle={\bf e}_0$ and $\langle\sigma\underline{{\bf e}}\rangle={\bf j}_0$. Thus we obtain the inequality\n\begin{equation}\n\min_{\matrix{{\underline{{\bf e}}}({\bf x})\cr \langle\underline{{\bf e}}\rangle={\bf e}_0 \cr \langle\sigma\underline{{\bf e}}\rangle={\bf j}_0}}\n\langle\underline{{\bf e}}\cdot\sigma\underline{{\bf e}}\rangle\leq\langle{\bf e}\cdot{\bf j}\rangle.\n\eeq{2.17}\nBy introducing two vector valued Lagrange multipliers associated with the two vector valued constraints we find that\nthe minimum is attained when\n\begin{equation} \underline{{\bf e}}({\bf x})\n={\bf e}_0+(\langle\sigma^{-1}\rangle-\sigma^{-1}({\bf x}))(\langle\sigma\rangle\langle\sigma^{-1}\rangle-1)^{-1}({\bf j}_0-\langle\sigma\rangle{\bf e}_0).\n\eeq{2.18}\nSubstituting this back in \eq{2.17} gives the bound\n\begin{equation} (\langle\sigma\rangle\langle\sigma^{-1}\rangle-1)(\langle{\bf e}\cdot{\bf j}\rangle-{\bf e}_0\cdot{\bf j}_0)\n\geq ({\bf j}_0-\langle\sigma\rangle{\bf e}_0)\cdot(\langle\sigma^{-1}\rangle{\bf j}_0-{\bf e}_0).\n\eeq{2.19}\nIf, with general boundary conditions, we are interested in bounding the volume fraction $f_1$ given measured values of\n$\langle{\bf e}\cdot{\bf j}\rangle$, ${\bf e}_0$ and ${\bf j}_0$, then the difference between the left hand side and right hand side\nof \eq{2.19} is a quadratic in $f_1$ whose two roots give upper and lower bounds on $f_1$. (Unless the roots happen\nto be complex, in which case there is no configuration of the two phases within $\Omega$ which produces the measured\n$\langle{\bf e}\cdot{\bf j}\rangle$, ${\bf e}_0$ and ${\bf j}_0$, indicating the presence of other phases or\nan error in the measurements.) 
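A minimal numerical sketch of this inversion (with hypothetical measured data; the admissible values of $f_1$ are found by checking where \eq{2.19} holds) is\n\begin{verbatim}\nimport numpy as np\n\nsigma1, sigma2 = 5.0, 1.0     # hypothetical phase conductivities\ne0 = np.array([1.0, 0.0])     # hypothetical <e>\nj0 = np.array([1.4, 0.2])     # hypothetical <j>\npower = 1.5                   # hypothetical <e.j>\n\ndef h(f1):                    # left minus right side of (2.19)\n    f2 = 1.0 - f1\n    sa = f1*sigma1 + f2*sigma2        # <sigma>\n    sh = f1/sigma1 + f2/sigma2        # <1/sigma>\n    lhs = (sa*sh - 1.0)*(power - e0 @ j0)\n    rhs = (j0 - sa*e0) @ (sh*j0 - e0)\n    return lhs - rhs                  # (2.19) requires h(f1) >= 0\n\nf = np.linspace(0.0, 1.0, 100001)\nok = f[np.array([h(x) for x in f]) >= 0.0]\nprint((ok.min(), ok.max()) if ok.size else 'measurements inconsistent')\n\end{verbatim}\nThe end points of the printed interval are, to grid accuracy, the two roots of the quadratic referred to above.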
In the particular cases of either special Dirichlet or special Neumann boundary conditions,\n\eq{1.5} or \eq{1.7}, the left hand\nside of \eq{2.19} vanishes (see \eq{1.9}) and we obtain the reduced bounds\n\begin{equation} 0\geq ({\bf j}_0-\langle\sigma\rangle{\bf e}_0)\cdot(\langle\sigma^{-1}\rangle{\bf j}_0-{\bf e}_0),\n\eeq{2.20}\nwhich are in fact implied by the matrix inequalities \eq{2.13}. This bound \eq{2.20} is optimal.\nFor any given fixed ${\bf e}_0$, and fixed volume fraction $f_1$, the vector ${\bf j}_0$\nhas an endpoint which is constrained by \eq{2.20} to lie within a\nsphere (disk in two dimensions) centered at $(\langle\sigma\rangle+\langle\sigma^{-1}\rangle^{-1}){\bf e}_0\/2$.\nWhen $\Omega$ is filled with a periodic laminate of the two phases\nwith interfaces orthogonal to some unit vector ${\bf m}$, and we let the period length go to zero,\nthen the endpoint of the vector ${\bf j}_0$ covers the entire surface of this sphere (disk) as ${\bf m}$\nranges over all unit vectors. These bounds are the analogs, for arbitrary bodies $\Omega$, of\nbounds on possible $({\bf e}_0,{\bf j}_0)$ pairs\nfor composites derived by \citeAPY{Raitum:1983:QES} and \citeAPY{Tartar:1995:RHM}. If we are given ${\bf e}_0$ and ${\bf j}_0$,\nand want to bound $f_1$, then we should find the range of $f_1$ where the sphere (or disk)\ncontains the endpoint of the vector ${\bf j}_0$. The endpoints of this range are the roots\nof the right hand side of \eq{2.20}, which is a quadratic function of $f_1$.\n\nKnowledge of ${\bf e}_0$ and ${\bf j}_0$ is equivalent to knowledge of $\langle{\bf e}\cdot{\bf v}\rangle$\nand $\langle{\bf v}\cdot{\bf j}\rangle$ for all constant fields ${\bf v}$. A more general alternative is to use\nthe information about\n\begin{eqnarray} a_i=\langle {\bf e}\cdot{\bf j}_i\rangle &= & \frac{1}{|\Omega|}\int_{\partial\Omega}-V_0({\bf j}_i\cdot{\bf n}),\n\nonumber \\\n b_k=-\langle \nabla V_k\cdot{\bf j}\rangle &= & \frac{1}{|\Omega|}\int_{\partial\Omega}-V_k({\bf j}\cdot{\bf n}),\n\eeqa{2.21}\nfor a given set of ``comparison flux fields'' ${\bf j}_i({\bf x})$ satisfying $\nabla \cdot{\bf j}_i=0$,\n$i=1,2,\ldots,n$, and ``comparison potentials'' $V_k({\bf x})$, $k=1,2,\ldots,m$. Suppose, for\nexample, that we have just one comparison flux field ${\bf j}_1$. We have the variational\nprinciple\n\begin{equation}\n\min_{\matrix{{\underline{{\bf e}}}({\bf x})=-\nabla{\underline{V}}({\bf x})\n\cr {\underline{V}}({\bf x})=V_0({\bf x})~{\rm on}~\partial\Omega \cr\n\langle \underline{{\bf e}}\cdot{\bf j}_1\rangle=a_1}}\langle\underline{{\bf e}}\cdot\sigma\underline{{\bf e}}\rangle\n=\langle{\bf e}\cdot{\bf j}\rangle,\n\eeq{2.22}\nwhere we have chosen to add the constraint that $\langle\underline{{\bf e}}\cdot{\bf j}_1\rangle=a_1$ since we know that\nwithout this constraint the minimizer $\underline{{\bf e}}={\bf e}$\nsatisfies $\langle{\bf e}\cdot{\bf j}_1\rangle=a_1$. 
This implies the inequality\n\begin{equation} \langle{\bf e}\cdot{\bf j}\rangle\geq \min_{\matrix{{\underline{{\bf e}}}({\bf x})\cr\n\langle \underline{{\bf e}}\cdot{\bf j}_1\rangle=a_1}}\langle\underline{{\bf e}}\cdot\sigma\underline{{\bf e}}\rangle.\n\eeq{2.23}\nBy introducing a Lagrange multiplier associated with the constraint $\langle{\bf e}\cdot{\bf j}_1\rangle=a_1$\nwe see the minimum occurs when\n\begin{equation} \underline{{\bf e}}=a_1\sigma^{-1}{\bf j}_1\/\langle{\bf j}_1\cdot\sigma^{-1}{\bf j}_1\rangle,\n\eeq{2.24}\ngiving the inequality\n\begin{equation} \langle{\bf e}\cdot{\bf j}\rangle\geq a_1^2\/\langle{\bf j}_1\cdot\sigma^{-1}{\bf j}_1\rangle. \eeq{2.25a}\nThis inequality gives information about $\sigma({\bf x})$ through $\langle{\bf j}_1\cdot\sigma^{-1}{\bf j}_1\rangle$.\nIf we only want bounds which involve the volume fraction we should choose ${\bf j}_1({\bf x})$\nwith\n\begin{equation} |{\bf j}_1({\bf x})|=1\quad {\rm for~all~}{\bf x}\in\Omega.\n\eeq{2.25}\nThere are many divergence free fields ${\bf j}_1({\bf x})$ which satisfy this constraint. For example,\nin two dimensions we can take\n\begin{equation} {\bf j}_1=({\partial \phi\/\partial x_2,-\partial \phi\/\partial x_1}),~~\n{\rm with}~|\nabla\phi({\bf x})|=1\quad {\rm for~all~}{\bf x}\in\Omega.\n\eeq{2.26}\nThus $\phi({\bf x})$ satisfies an eikonal equation, and we could take $\phi({\bf x})$ to be the shortest\ndistance between ${\bf x}$ and a curve outside $\Omega$. Once \eq{2.25} is satisfied, \eq{2.25a}\nimplies the volume fraction bound\n\begin{equation} f_1\leq\n \left(\sigma_2^{-1}-\frac{a_1^2}{\langle{\bf e}\cdot{\bf j}\rangle}\right)\/(\sigma_2^{-1}-\sigma_1^{-1}).\n\eeq{2.27}\nIn the special case when ${\bf j}_1={\bf e}_0\/|{\bf e}_0|$ this reduces to the upper bound on\n$f_1$ given by \eq{2.15}.\n\nAn important question is whether this new bound is sharp and, if so, for which $\sigma({\bf x})$ it is attained.\nThe new bound will be sharp when ${\bf e}=\underline{{\bf e}}$, where $\underline{{\bf e}}$ is\ngiven by \eq{2.24}. In that case\n\begin{equation} {\bf j}({\bf x})=a_1{\bf j}_1\/\langle{\bf j}_1\cdot\sigma^{-1}{\bf j}_1\rangle\n\eeq{2.28}\nhas zero divergence because it is proportional to ${\bf j}_1$.\nLet us impose the Neumann boundary condition\n\begin{equation} {\bf j}({\bf x})\cdot{\bf n}={\bf j}_1\cdot{\bf n}\quad {\rm for~all}~{\bf x}\in\partial\Omega,\n\eeq{2.29}\nand look for a $\sigma({\bf x})$ so that ${\bf j}({\bf x})={\bf j}_1({\bf x})$ and ${\bf e}({\bf x})=\sigma^{-1}{\bf j}_1({\bf x})$\nis curl-free. Now, as schematically represented in figure \fig{0}, choose\n$\sigma({\bf x})$ to correspond to a finely layered composite with layers\northogonal to the streamlines of ${\bf j}_1({\bf x})$, and with phase 1 occupying\na local volume fraction $p({\bf x})$. This composite will support a\ncurrent field ${\bf j}({\bf x})={\bf j}_1({\bf x})$ and an electric field ${\bf e}({\bf x})=\sigma^{-1}{\bf j}_1({\bf x})$\nprovided\n\begin{equation} \nabla \times{\bf e}_0=0,\quad{\bf e}_0\equiv[\sigma_2^{-1}-p({\bf x})(\sigma_2^{-1}-\sigma_1^{-1})]{\bf j}_1({\bf x}).\n\eeq{2.30}\nHere ${\bf e}_0({\bf x})$ is the weak limit (local volume average) of ${\bf e}({\bf x})$ as the\nlayer spacing goes to zero. 
In two dimensions, given ${\bf j}_1({\bf x})$, we could look for solutions\nfor $p({\bf x})$ such that \eq{2.30} is satisfied and $0\leq p({\bf x})\leq 1$ in $\Omega$. We expect such\nsolutions to exist for a wide class of fields ${\bf j}_1({\bf x})$. This example shows that non-constant\n``comparison flux fields'' can lead to sharp bounds on the volume fraction. In three\ndimensions we only expect to find a solution of the vector equation \eq{2.30}\nfor the scalar field $p({\bf x})$ if ${\bf j}_1({\bf x})$ satisfies some additional conditions.\n\n\n\begin{figure}\n\vspace{2in}\n\hspace{1.0in}\n{\resizebox{2.0in}{1.0in}\n{\includegraphics[0in,0in][6in,3in]{wavelam.eps}}}\n\vspace{0.1in}\n\caption{A schematic of the type of layered microstructure achieving the volume fraction bound \eq{2.27}, where the black regions denote one phase, and the\nwhite regions the other phase. The layer widths should be much finer than the size of $\Omega$. }\n\labfig{0}\n\end{figure}\n\n\n\n\section{Relationship to bounding effective tensors of composites}\n\setcounter{equation}{0}\n\nConsider a periodic composite obtained by taking the unit cell boundaries outside $\Omega\equiv\Omega_1$\nand almost filling the rest of the unit cell by non-intersecting\nrescaled and translated copies $\Omega_i$, $i=2,\ldots, n$ of\n$\Omega$, as illustrated in figure \fig{1}. The remainder of the unit cell is filled by phase 2 with\nconductivity $\sigma_2$. The unit cell structure is periodically repeated to fill all space.\nLet $\sigma_C({\bf x})$ ($C$ for composite) denote the conductivity of this composite, i.e. in\nthe unit cell\n\begin{eqnarray} \sigma_C({\bf x}) &=& \sigma({\bf x}\/a_i+{\bf b}_i)~{\rm in}~\Omega_i,~~i=1,2,\ldots, n \nonumber \\\n &=& \sigma_2~{\rm elsewhere~outside}~\cup_{i}\Omega_i,\n\eeqa{3.1}\nwhere the scaling constants $a_i$ and translation vectors ${\bf b}_i$ (with $a_1=1$ and ${\bf b}_1=0$)\nare determined by the size and\nposition of each copy $\Omega_i$, so that ${\bf x}\/a_i+{\bf b}_i$ is on the boundary of $\Omega$ if and only if\n${\bf x}$ is on the boundary of $\Omega_i$. Let $p_n$ denote the volume fraction of the unit cell\noccupied by the region outside $\cup_{i}\Omega_i$, which is filled with the material of conductivity $\sigma_2$. Let $\bfm\sigma^*_n$ denote the (matrix valued)\neffective conductivity of this composite, which in general depends upon the relative positions\nof the copies $\Omega_i$ within the unit cell.\n\n\n\begin{figure}\n\vspace{2in}\n\hspace{1.0in}\n{\resizebox{2.0in}{1.0in}\n{\includegraphics[0in,0in][6in,3in]{omegacopies.eps}}}\n\vspace{0.1in}\n\caption{A period cell containing rescaled copies of $\Omega$. }\n\labfig{1}\n\end{figure}\n\nWe have the classical variational inequality\n\begin{equation} {\bf e}_0\cdot\bfm\sigma^*_n{\bf e}_0\leq \langle\underline{{\bf e}}_C\cdot\sigma_C\underline{{\bf e}}_C\rangle,\n\eeq{3.2}\nwhich holds for any trial electric field $\underline{{\bf e}}_C$ satisfying\n\begin{equation} \nabla \times\underline{{\bf e}}_C=0,\quad\underline{{\bf e}}_C~{\rm periodic},\quad\n\langle\underline{{\bf e}}_C\rangle={\bf e}_0,\n\eeq{3.3}\nwhere now the volume averages are over the entire unit cell, rather than just $\Omega$. 
\nIn particular, letting ${\bf e}({\bf x})$ denote the electric field within $\Omega$ when the special\nDirichlet boundary conditions \eq{1.5} are applied, we may take in the unit cell\n\begin{eqnarray} \underline{{\bf e}}_C({\bf x}) &=& {\bf e}({\bf x}\/a_i+{\bf b}_i)~{\rm in}~\Omega_i,~~i=1,2,\ldots, n \nonumber \\\n &=& {\bf e}_0~{\rm elsewhere~outside}~\cup_{i}\Omega_i,\n\eeqa{3.4}\nand periodically extend it. Then we get\n\begin{eqnarray} \langle\underline{{\bf e}}_C\cdot\sigma_C\underline{{\bf e}}_C\rangle & = & p_n\sigma_2{\bf e}_0\cdot{\bf e}_0 \nonumber \\\n&~& +(1-p_n)\sum_{i=1}^n w_i\langle{\bf e}({\bf x}\/a_i+{\bf b}_i)\cdot\sigma({\bf x}\/a_i+{\bf b}_i){\bf e}({\bf x}\/a_i+{\bf b}_i)\rangle_{\Omega_i} \nonumber \\\n&=& p_n\sigma_2{\bf e}_0\cdot{\bf e}_0+(1-p_n)\langle{\bf e}({\bf x})\cdot\sigma({\bf x}){\bf e}({\bf x})\rangle_{\Omega} \nonumber \\\n&=& p_n\sigma_2{\bf e}_0\cdot{\bf e}_0+(1-p_n){\bf e}_0\cdot\bfm\sigma^D{\bf e}_0,\n\eeqa{3.5}\nwhere $\langle\cdot\rangle_{\Omega_i}$ denotes an average over the region $\Omega_i$, $w_i=|\Omega_i|\/\sum_{j=1}^n|\Omega_j|$ is the relative volume of that region, and the second equality holds because, by rescaling, each average $\langle\cdot\rangle_{\Omega_i}$ equals the corresponding average over $\Omega$.\nCombined with the variational inequality \eq{3.2} this implies the bound\n\begin{equation} {\bf e}_0\cdot\bfm\sigma^*_n{\bf e}_0\leq p_n\sigma_2{\bf e}_0\cdot{\bf e}_0+(1-p_n){\bf e}_0\cdot\bfm\sigma^D{\bf e}_0\n\leq {\bf e}_0\cdot\bfm\sigma^D{\bf e}_0,\n\eeq{3.6}\nwhere we have used the inequality $\bfm\sigma^D\geq\sigma_2{\bf I}$ implied by \eq{2.13}. Thus we get\n\begin{equation} \bfm\sigma^*_n\leq \bfm\sigma^D.\n\eeq{3.7}\nThis composite has a volume fraction $f_1'=(1-p_n)f_1$ of phase 1.\nThus any bound ``from below'' on the effective conductivity $\bfm\sigma^*_n$, applicable to composites\nhaving a volume fraction $f_1'$ of phase 1, immediately translates into a\nbound ``from below'' on $\bfm\sigma^D$. Now consider what happens as we increase $n$, inserting\nmore and more regions $\Omega_i$, while leaving undisturbed the regions $\Omega_i$ already in place,\nso that $p_n\to 0$ as $n\to\infty$. We are assured that this is possible: rescaled copies\nof a region of any shape can be packed to fill all space (see, for example, Theorem A.1\nin \citeAPY{Benveniste:2003:NER}). Define\n\begin{equation} \bfm\sigma^*=\lim_{n\to\infty}\bfm\sigma^*_n.\n\eeq{3.7a}\nWe are assured this limit exists since if we change the geometry in some\nsmall volume then the effective conductivity (assuming $\sigma_1$ and $\sigma_2$ are strictly positive and finite)\nis perturbed only by a small amount (\citeAY{Zhikov:1994:HDO}).\nWe will call $\bfm\sigma^*$ the effective conductivity tensor of an assemblage of rescaled copies\nof $\Omega$ packed to fill all space. Then \eq{3.7} implies\n\begin{equation} \bfm\sigma^*\leq \bfm\sigma^D, \eeq{3.7b}\nwhich is essentially the bound of \citeAPY{Huet:1990:AVC} applied to this assemblage.\nAssume the bound is continuous with respect\nto $f_1'$ at the point $f_1'=f_1$, as expected. Then, taking the limit $n\to\infty$, the\n``lower bound'' on the effective tensor of composites having\nvolume fraction $f_1$ must also be a lower bound on $\bfm\sigma^D$.\n\nIn particular, the harmonic mean bound\n$\bfm\sigma^*\geq\langle\sigma^{-1}\rangle^{-1}{\bf I}$ translates into the elementary bound\n$\bfm\sigma^D\geq\langle\sigma^{-1}\rangle^{-1}{\bf I}$ of Nemat-Nasser and Hori, obtained earlier. 
\nAdditionally, in our two-phase composite, the effective conductivity $\bfm\sigma^*$ satisfies\nthe Lurie-Cherkaev-Murat-Tartar bound (Lurie and Cherkaev \citeyearNP{Lurie:1982:AEC}, \citeyearNP{Lurie:1984:EEC}; \citeAY{Murat:1985:CVH}; \citeAY{Tartar:1985:EFC})\n\begin{equation} f_1{\rm Tr}[(\bfm\sigma^*-\sigma_2{\bf I})^{-1}]\leq d\/(\sigma_1-\sigma_2)+f_2\/\sigma_2,\n\eeq{3.8}\n[which is a generalization of the bounds of \citeAPY{Hashin:1962:VAT}] where $d=2,3$ is the dimensionality\nof the composite. Since $\bfm\sigma^D\geq\bfm\sigma^*\geq \sigma_2{\bf I}$ it follows that\n\begin{equation} (\bfm\sigma^*-\sigma_2{\bf I})^{-1}\geq (\bfm\sigma^D-\sigma_2{\bf I})^{-1}, \eeq{3.9}\nand so \eq{3.8} implies the new bound\n\begin{equation} f_1{\rm Tr}[(\bfm\sigma^D-\sigma_2{\bf I})^{-1}]\leq d\/(\sigma_1-\sigma_2)+f_2\/\sigma_2.\n\eeq{3.10}\nBy multiplying this inequality by $\sigma_2^2$ and adding $df_1\sigma_2$ to both sides we see that it\ncan be rewritten in the equivalent form\n\begin{equation} f_1{\rm Tr}[(\sigma_2^{-1}{\bf I}-(\bfm\sigma^D)^{-1})^{-1}]\leq d\/(\sigma_2^{-1}-\sigma_1^{-1})-(d-1)f_2\sigma_2.\n\eeq{3.10a}\nAs $d^2\/{\rm Tr}({\bf A})\leq {\rm Tr}({\bf A}^{-1})$ for any positive definite matrix ${\bf A}$, we also obtain\nthe weaker bound\n\begin{equation} \frac{1}{d}{\rm Tr}[(\bfm\sigma^D)^{-1}]\leq \sigma_2^{-1}-\frac{f_1 d}{d\/(\sigma_2^{-1}-\sigma_1^{-1})-(d-1)f_2\sigma_2},\n\eeq{3.11}\nwhich is a particular case of the universal bounds first derived by\nNemat-Nasser and Hori (\citeyearNP{Nemat-Nasser:1993:MOP}, \citeyearNP{Nemat-Nasser:1995:UBO}), see equation (5.4.9) in their 1995 paper,\nobtained under the assumption that $\Omega$ is ellipsoidal or parallelepipedic\n(an assumption which we see is not needed).\n\nIf one is interested in bounds on the volume fraction $f_1$ then \eq{3.10} implies the upper bound\n\begin{equation} f_1\leq \frac{1\/\sigma_2+d\/(\sigma_1-\sigma_2)}{1\/\sigma_2+{\rm Tr}[(\bfm\sigma^D-\sigma_2{\bf I})^{-1}]}.\n\eeq{3.12}\n\nTo obtain lower bounds on $f_1$, we consider the same periodic composite\nand apply the dual variational inequality\n\begin{equation} {\bf j}_0\cdot(\bfm\sigma^*_n)^{-1}{\bf j}_0\leq \langle\underline{{\bf j}}_C\cdot\sigma_C^{-1}\underline{{\bf j}}_C\rangle\n\eeq{3.13}\nvalid for any trial current field $\underline{{\bf j}}_C$ satisfying\n\begin{equation} \nabla \cdot\underline{{\bf j}}_C=0,\quad\underline{{\bf j}}_C~{\rm periodic},\quad\n\langle\underline{{\bf j}}_C\rangle={\bf j}_0.\n\eeq{3.14}\nLetting ${\bf j}({\bf x})$ denote the current field within $\Omega$ when the special\nNeumann boundary conditions \eq{1.7} are applied, we may take in the unit cell\n\begin{eqnarray} \underline{{\bf j}}_C({\bf x}) &=& {\bf j}({\bf x}\/a_i+{\bf b}_i)~{\rm in}~\Omega_i,~~i=1,2,\ldots, n \nonumber \\\n &=& {\bf j}_0~{\rm elsewhere~outside}~\cup_{i}\Omega_i,\n\eeqa{3.15}\nand periodically extend it. 
\nSubstituting this trial field in \eq{3.13} gives the bound\n\begin{equation} (\bfm\sigma^*_n)^{-1}\leq p_n\sigma_2^{-1}{\bf I}+(\bfm\sigma^N)^{-1}, \eeq{3.15a}\nwhich in the limit $n\to\infty$ implies\n\begin{equation} \bfm\sigma^*\geq \bfm\sigma^N, \eeq{3.16}\nwhich is essentially the bound of \citeAPY{Huet:1990:AVC} applied to the\nassemblage of rescaled copies of $\Omega$ packed to fill all space.\n\nThus any bound ``from above'' on the effective conductivity $\bfm\sigma^*_n$ of composites having a volume\nfraction $f_1'$ immediately\ntranslates into a bound ``from above'' on $(p_n\sigma_2^{-1}{\bf I}+(\bfm\sigma^N)^{-1})^{-1}$. Taking\nthe limit $n\to\infty$,\nand assuming continuity of the bound at $f_1'=f_1$, the ``upper bound''\non the effective tensor of composites having volume fraction $f_1$ must also be an upper bound\non $\bfm\sigma^N$.\nIn particular, the other Lurie-Cherkaev-Murat-Tartar bound\n\begin{equation} f_2{\rm Tr}[(\sigma_1{\bf I}-\bfm\sigma^*)^{-1}]\leq d\/(\sigma_1-\sigma_2)-f_1\/\sigma_1,\n\eeq{3.17}\nimplies\n\begin{equation} f_2{\rm Tr}[(\sigma_1{\bf I}-\bfm\sigma^N)^{-1}]\leq d\/(\sigma_1-\sigma_2)-f_1\/\sigma_1.\n\eeq{3.18}\nAgain using the inequality $d^2\/{\rm Tr}({\bf A})\leq {\rm Tr}({\bf A}^{-1})$ for ${\bf A}>0$, we obtain\nthe weaker bound\n\begin{equation} \frac{1}{d}{\rm Tr}(\bfm\sigma^N)\leq \sigma_1-\frac{f_2 d}{d\/(\sigma_1-\sigma_2)-f_1\/\sigma_1}\n\eeq{3.20}\nwhich is a particular case of the universal bounds derived by Nemat-Nasser and Hori (\citeyearNP{Nemat-Nasser:1993:MOP}, \citeyearNP{Nemat-Nasser:1995:UBO}), see\nequation (5.3.11) in their 1995 paper,\nobtained under the assumption that $\Omega$ is ellipsoidal or parallelepipedic\n(an assumption which we see is not needed).\n\nFrom \eq{3.18} we directly obtain the volume fraction bound\n\begin{equation} f_2\leq \frac{d\/(\sigma_1-\sigma_2)-1\/\sigma_1}{{\rm Tr}[(\sigma_1{\bf I}-\bfm\sigma^N)^{-1}]-1\/\sigma_1},\n\eeq{3.21}\ngiving a lower bound on the volume fraction $f_1=1-f_2$.\n\nIn the asymptotic limit as the volume fraction goes to zero\nthe volume fraction bounds \eq{3.12} and \eq{3.21} reduce to those of Capdeboscq and Vogelius (\citeyearNP{Capdeboscq:2003:OAE}, \citeyearNP{Capdeboscq:2004:RSR}),\nas shown in the two-dimensional case by \citeAPY{Kang:2011:SBV}. The paper of Kang, Kim and Milton also tests the bounds numerically, and their (two-dimensional) results\nshow the bound \eq{3.12} is typically close to the actual volume fraction for a variety of inclusions of phase 1 in a matrix of phase 2. Similarly we can expect\nthat the bound \eq{3.21} will typically be close to the actual volume fraction for a variety of inclusions of phase 2 in a matrix of phase 1. 
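A minimal numerical sketch of how \eq{3.12} and \eq{3.21} might be evaluated from measured response tensors (hypothetical data) is\n\begin{verbatim}\nimport numpy as np\n\nsigma1, sigma2, d = 5.0, 1.0, 2   # hypothetical conductivities, dimension\nsigma_D = np.array([[1.3, 0.1],\n                    [0.1, 1.6]])  # hypothetical Dirichlet response\nsigma_N = np.array([[1.2, 0.1],\n                    [0.1, 1.5]])  # hypothetical Neumann response\n\n# Upper bound (3.12) on f1 from the Dirichlet response:\ntD = np.trace(np.linalg.inv(sigma_D - sigma2*np.eye(d)))\nf1_upper = (1/sigma2 + d/(sigma1 - sigma2))/(1/sigma2 + tD)\n\n# Lower bound on f1 = 1 - f2, via (3.21) and the Neumann response:\ntN = np.trace(np.linalg.inv(sigma1*np.eye(d) - sigma_N))\nf1_lower = 1.0 - (d/(sigma1 - sigma2) - 1/sigma1)/(tN - 1/sigma1)\nprint(f1_lower, f1_upper)\n\end{verbatim}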
\section{Coupled bounds in two dimensions}\n\setcounter{equation}{0}\nThe tensors $\bfm\sigma^D$ and $\bfm\sigma^N$ obviously depend on $\sigma_1$ and $\sigma_2$, i.e.\n$\bfm\sigma^D=\bfm\sigma^D(\sigma_1,\sigma_2)$ and $\bfm\sigma^N=\bfm\sigma^N(\sigma_1,\sigma_2)$. Let us assume we have\nmeasurements of these tensors for an additional pair of conductivities $(k_1, k_2)$\n(which could be obtained, say, from thermal, magnetic permeability, or diffusivity\nmeasurements) and let ${\bf k}^D$ and ${\bf k}^N$ denote these tensors,\n\begin{equation} {\bf k}^D=\bfm\sigma^D(k_1,k_2),\quad {\bf k}^N=\bfm\sigma^N(k_1,k_2).\n\eeq{4.1}\nWe still let $\bfm\sigma^D$ and $\bfm\sigma^N$ denote the tensors associated with the first\npair of conductivities $(\sigma_1, \sigma_2)$, with $\sigma_1>\sigma_2$. From \eq{3.7b} and\n\eq{3.16} we have the inequalities\n\begin{eqnarray} \sigma_2{\bf I}\leq\bfm\sigma^N\leq\bfm\sigma^*\leq \bfm\sigma^D\leq \sigma_1{\bf I}, \nonumber \\\nk^-{\bf I}\leq {\bf k}^N\leq{\bf k}^*\leq {\bf k}^D\leq k^+{\bf I},\n\eeqa{4.2}\nwhere $k^-=\min\{k_1,k_2\}$ and $k^+=\max\{k_1,k_2\}$ and\n${\bf k}^*$ is the effective conductivity of the composite considered\nin the previous section when $\sigma_1$ and $\sigma_2$ are replaced by $k_1$ and $k_2$. (It can\neasily be checked that these inequalities still hold if $k_2>k_1$.)\n\nFor two-dimensional conductivity, from duality\n(\citeAY{Keller:1964:TCC}; \citeAY{Dykhne:1970:CTD}),\nwe know the functions\n$\bfm\sigma^D=\bfm\sigma^D(\sigma_1,\sigma_2)$ and $\bfm\sigma^N=\bfm\sigma^N(\sigma_1,\sigma_2)$ satisfy\n\begin{eqnarray} \bfm\sigma^D(\sigma_2,\sigma_1)& = &\sigma_1\sigma_2{\bf R}_\perp^T[\bfm\sigma^N(\sigma_1,\sigma_2)]^{-1}{\bf R}_\perp, \nonumber \\\n \bfm\sigma^N(\sigma_2,\sigma_1)& = &\sigma_1\sigma_2{\bf R}_\perp^T[\bfm\sigma^D(\sigma_1,\sigma_2)]^{-1}{\bf R}_\perp,\n\eeqa{4.3}\nwhere\n\begin{equation} {\bf R}_\perp=\pmatrix{0 & 1 \cr -1 & 0}\n\eeq{4.4}\nis the matrix for a $90^\circ$ rotation.\nSo if we know these tensors for the conductivity pair $(k_1, k_2)$, we also know them for the\nconductivity pair $(k_2, k_1)$. Hence, by making such an interchange if necessary, we may assume without\nloss of generality that $k_1>k_2$, i.e. that $k^+=k_1$ and $k^-=k_2$. Finally, by\ninterchanging $k$ with $\sigma$ if necessary,\nwe may assume without loss of generality that\n\begin{equation} \sigma_1\/\sigma_2\geq k_1\/k_2>1. \eeq{4.5}\n\n\n\nOptimal bounds on all possible matrix pairs $(\bfm\sigma^*,{\bf k}^*)$ for composites having a prescribed\nvolume fraction $f_1$ of phase 1 have been derived by \citeAPY{Cherkaev:1992:ECB},\nand extended to an arbitrary number of effective conductivity function values by \citeAPY{Clark:1995:OBC}.\nHowever, it seems\ndifficult to extract bounds on $f_1$ from these optimal bounds. Instead we consider a\npolycrystal checkerboard with conductivities\n\begin{equation} \bfm\sigma({\bf x})={\bf R}^T({\bf x})\bfm\sigma^*{\bf R}({\bf x}),\quad {\bf k}({\bf x})={\bf R}^T({\bf x}){\bf k}^*{\bf R}({\bf x}),\n\quad {\rm with}~{\bf R}^T({\bf x}){\bf R}({\bf x})={\bf I},\n\eeq{4.6}\nin which the rotation field ${\bf R}({\bf x})$ is ${\bf I}$ in the ``white squares'' and\n${\bf R}_\perp$ in the ``black squares''. 
By a result of \citeAPY{Dykhne:1970:CTD} this material\nhas effective conductivities $(\sigma_*{\bf I},k_*{\bf I})$ where\n\begin{equation} \sigma_*=\sqrt{\det\bfm\sigma^*},\quad k_*=\sqrt{\det{\bf k}^*}.\n\eeq{4.7}\nNow we replace the ``white squares'' by the limiting composite considered\nin the previous section (with structure much smaller than the size of the\nsquares) and we replace the ``black squares'' by the limiting composite considered\nin the previous section, rotated by $90^\circ$. The resulting material is an isotropic\ncomposite of phases 1 and 2 and so the pair $(\sigma_*,k_*)$ satisfies the bounds\nof \citeAPY{Milton:1981:BTO},\n\begin{equation} u(k_*)\leq \sigma_* \leq v(k_*), \eeq{4.10}\nwhich are attained when the composite is an assemblage of doubly coated disks, where\n\begin{eqnarray}\n v(k_*) & = &\sigma_1-\frac{2f_2\sigma_1(\sigma_1^2-\sigma_2^2)}{(f_2\sigma_1+f_1\sigma_2+\sigma_1)(\sigma_1+\sigma_2)\n+(\sigma_1-\sigma_2)^2\alpha_1(k_*)}, \nonumber \\\nu(k_*) & = &\sigma_2+\n\frac{2f_1\sigma_2(\sigma_1^2-\sigma_2^2)}{(f_2\sigma_1+f_1\sigma_2+\sigma_2)(\sigma_1+\sigma_2)\n+(\sigma_1-\sigma_2)^2\alpha_2(k_*)},\n\eeqa{4.11}\nand\n\begin{eqnarray}\n\alpha_1(k_*)& = &\frac{(k_1+k_2)[2f_2k_1(k_1-k_2)\/(k_1-k_*)-(f_2k_1+f_1k_2+k_1)]}{(k_1-k_2)^2}, \nonumber \\\n\alpha_2(k_*)& = &\frac{(k_1+k_2)[2f_1k_2(k_1-k_2)\/(k_*-k_2)-(f_2k_1+f_1k_2+k_2)]}{(k_1-k_2)^2}.\n\eeqa{4.12}\nNow for any two symmetric matrices ${\bf A}$ and ${\bf B}$ with ${\bf A}\geq{\bf B}>0$ we\nhave ${\bf B}^{-1\/2}{\bf A}{\bf B}^{-1\/2}\geq{\bf I}$, and so $\det({\bf B}^{-1\/2}{\bf A}{\bf B}^{-1\/2})\geq 1$,\nimplying $\det({\bf A})\geq\det({\bf B})$. Thus \eq{4.2} and \eq{4.7} imply\n\begin{equation} \sigma_2\leq\sigma_N\leq\sigma_*\leq \sigma_D\leq\sigma_1,\n\quad k_2\leq k_N\leq k_*\leq k_D\leq k_1,\n\eeq{4.13}\nwhere we define\n\begin{equation} \sigma_N=\sqrt{\det\bfm\sigma^N},\quad \sigma_D=\sqrt{\det\bfm\sigma^D},\quad\n k_N=\sqrt{\det{\bf k}^N},\quad k_D=\sqrt{\det{\bf k}^D}.\n\eeq{4.13a}\nThe Hashin-Shtrikman bounds (\citeAY{Hashin:1962:VAT}; \citeAY{Hashin:1970:TCM}),\n\begin{equation} k_1-\frac{2f_2k_1(k_1-k_2)}{f_2k_1+f_1k_2+k_1}\geq k_*\n\geq k_2+\frac{2f_1k_2(k_1-k_2)}{f_2k_1+f_1k_2+k_2},\n\eeq{4.14}\nimply that both $\alpha_1(k_*)$ and $\alpha_2(k_*)$ are non-negative. 
Hence the denominators\nin \eq{4.11} are positive and so \eq{4.10} implies\n\begin{eqnarray}\n(\sigma_1-\sigma_*)[(f_2\sigma_1+f_1\sigma_2+\sigma_1)(\sigma_1+\sigma_2)\n+(\sigma_1-\sigma_2)^2\alpha_1(k_*)]\geq 2f_2\sigma_1(\sigma_1^2-\sigma_2^2),\n\nonumber \\\n(\sigma_*-\sigma_2)[(f_2\sigma_1+f_1\sigma_2+\sigma_2)(\sigma_1+\sigma_2)\n+(\sigma_1-\sigma_2)^2\alpha_2(k_*)]\geq 2f_1\sigma_2(\sigma_1^2-\sigma_2^2).\n\eeqa{4.15}\nSince $\alpha_1(k_D)\geq\alpha_1(k_*)$ and $\alpha_2(k_N)\geq\alpha_2(k_*)$, we get, using\n\eq{4.13},\n\begin{eqnarray}\n(\sigma_1-\sigma_N)[(f_2\sigma_1+f_1\sigma_2+\sigma_1)(\sigma_1+\sigma_2)\n+(\sigma_1-\sigma_2)^2\alpha_1(k_D)]\geq 2f_2\sigma_1(\sigma_1^2-\sigma_2^2),\n\nonumber \\\n(\sigma_D-\sigma_2)[(f_2\sigma_1+f_1\sigma_2+\sigma_2)(\sigma_1+\sigma_2)\n+(\sigma_1-\sigma_2)^2\alpha_2(k_N)]\geq 2f_1\sigma_2(\sigma_1^2-\sigma_2^2).\n\nonumber \\ ~\n\eeqa{4.16}\nAs $\alpha_1(k_D)$ and $\alpha_2(k_N)$ depend linearly on $f_1$ and $f_2=1-f_1$,\nthe equations \eq{4.16} readily yield bounds on the volume fraction. Eunjoo Kim\nhas used an integral equation solver [as described by \citeAPY{Kang:2011:SBV}]\nto compare the bounds \eq{4.16} with the bounds \eq{3.12} and \eq{3.21}. Her results\nare presented in figures \ref{3}, \ref{4}, and \ref{5}. More numerical results testing\nthe bounds \eq{3.12} and \eq{3.21} are in the paper by \citeAPY{Kang:2011:SBV}.\n\n\n\n\begin{figure}[htbp]\n\begin{center}\n\epsfig{figure=LU_DYDNa6.3.eps,width=10cm}\n\end{center}\n\caption{The first figure shows the circular body $\Omega$ containing an ellipse of phase 1 surrounded by phase 2.\nThe second figure shows the results for the bounds\n\eq{3.12} and \eq{3.21} while the third figure shows the results\nfor the bounds \eq{4.16}.\nThe bounds are for increasing $\sigma_1$, with $\sigma_2=1$\nand (for the third figure) the pairs $(\sigma_1,k_1)$ are taken as\n$(1.1,1.05)$, $(1.2,1.1)$, $(1.5,1.2)$, $(2,1.5)$, $(3,2)$, $(5,3)$, $(10,5)$ and $(20,10)$,\nwith $\sigma_2=k_2=1$.\nHere $U(\sigma_1)$ and $L(\sigma_1)$ are the upper and\nlower bounds on the volume fraction, and the true volume fraction is $f_1=0.08$.\nFigure supplied courtesy of Eunjoo Kim.}\label{3}\n\end{figure}\n\n\begin{figure}[htbp]\n\begin{center}\n\epsfig{figure=LU_DYDNa5.3.eps,width=10cm}\n\end{center}\n\caption{The same as for figure \ref{3} but with the elliptical\ninclusion moved closer to the boundary of $\Omega$. Figure supplied courtesy of Eunjoo Kim.}\label{4}\n\end{figure}\n\n\n\n\begin{figure}[htbp]\n\begin{center}\n\epsfig{figure=LU_DNDNa4.3.eps,width=10cm}\n\end{center}\n\caption{The same as for figure \ref{3} but with a non-elliptical inclusion of phase 1 in a square region $\Omega$. The true volume\nfraction is $f_1=0.0673$. Figure supplied courtesy of Eunjoo Kim. }\label{5}\n\end{figure}\n\n\section{Coupled bounds in three dimensions}\n\setcounter{equation}{0}\n\nWe can also derive coupled bounds in three dimensions. Let us assume the phases\nhave been labeled so that\n\begin{equation} \sigma_1 k_1\geq \sigma_2 k_2, \quad {\rm i.e.}~\sigma_1\/\sigma_2\geq k_2\/k_1,\n\eeq{4.17}\nand, by interchanging $\sigma$ with $k$ if necessary, let us assume\n\begin{equation} \sigma_1\/\sigma_2\geq k_1\/k_2.\n\eeq{4.18}\nThese two inequalities imply $\sigma_1\/\sigma_2>1$ as before. We want to use the\ninequalities \eq{4.2} to derive bounds on the volume fraction. 
As in the\ntwo-dimensional case, the idea is to first construct an isotropic\npolycrystal, where the polycrystal has the conductivities \eq{4.6}\nin which the rotation field ${\bf R}({\bf x})$ is constant within grains\nwhich we take to be spheres. These spheres fill all space, and the crystal\norientation varies randomly from sphere to sphere so that the composite\nhas isotropic conductivities $(\sigma_*{\bf I},k_*{\bf I})$. We use the effective medium formula\n(\citeAY{Stroud:1975:GEM}; \citeAY{Helsing:1991:ECA})\nwhich gives\n\begin{equation} \sigma_*=g(\bfm\sigma^*), \quad k_*=g({\bf k}^*),\n\eeq{4.19}\nwhere for any positive definite symmetric $3\times 3$ matrix ${\bf A}$, $g=g({\bf A})$ is taken to be the\nunique positive root of\n\begin{equation} \frac{\lambda_1-g}{\lambda_1+2g}+\frac{\lambda_2-g}{\lambda_2+2g}+\frac{\lambda_3-g}{\lambda_3+2g}=0,\n\eeq{4.20}\nin which $\lambda_1$, $\lambda_2$, and $\lambda_3$ are the eigenvalues of ${\bf A}$. This effective\nmedium formula is realizable (\citeAY{Milton:1985:TCP}; \citeAY{Avellaneda:1987:IHD})\nin the sense that it corresponds to a limiting composite\nof spherical grains with hierarchical structure (where any two grains of\ncomparable size are well separated from each other, relative to their diameters).\nNote that the left hand side of \eq{4.20} increases if any of the eigenvalues\n$\lambda_i$ increase, and decreases if $g$ increases. So $g({\bf A})$ must increase if\nany or all of the eigenvalues of ${\bf A}$ increase. It follows that $g({\bf B})\geq g({\bf A})$\nif ${\bf B}\geq{\bf A}>0$. Hence the inequalities \eq{4.2} imply\n\begin{equation} \sigma_2\leq\sigma_N\leq\sigma_*\leq \sigma_D\leq\sigma_1,\n\quad k^-\leq k_N\leq k_*\leq k_D\leq k^+,\n\eeq{4.20a}\nwhere now\n\begin{equation} \sigma_N=g(\bfm\sigma^N),\quad \sigma_D=g(\bfm\sigma^D),\quad k_N=g({\bf k}^N),\quad k_D=g({\bf k}^D).\n\eeq{4.20b}\n\nWe next replace the material in each sphere by the appropriately oriented\nlimiting composite considered in the previous section (with structure much\nsmaller than the sphere diameter) to obtain a two-phase isotropic composite with\n$(\sigma_*,k_*)$ as its conductivities. Thus $\sigma_*$ must satisfy the upper bound of Bergman (\citeyearNP{Bergman:1976:VBS}, \citeyearNP{Bergman:1978:DCC})\n\begin{equation}\n\sigma_*\leq f_1\sigma_1+f_2\sigma_2\n-\frac{f_1f_2(\sigma_1-\sigma_2)^2}{3\sigma_2+(\sigma_1-\sigma_2)\gamma(k_*)},\n\eeq{4.23}\nwhere\n\begin{equation} \gamma(k_*)=\frac{f_1f_2(k_1-k_2)}{f_1k_1+f_2k_2-k_*}-\frac{3k_2}{k_1-k_2},\n\eeq{4.24}\nand the lower bound\n\begin{equation} \sigma_*\geq \sigma_2+\n\frac{3f_1\sigma_2(\sigma_1-\sigma_2)(\sigma_2+2\sigma_1)}{(f_2\sigma_1+f_1\sigma_2+2\sigma_2)(\sigma_2+2\sigma_1)\n+(\sigma_1-\sigma_2)^2\beta(k_*)},\n\eeq{4.21}\nwhere\n\begin{equation}\n\beta(k_*) = \frac{(k_2+2k_1)[3f_1k_2(k_1-k_2)\/(k_*-k_2)-(f_2k_1+f_1k_2+2k_2)]}{(k_1-k_2)^2}.\n\eeq{4.22}\nThis lower bound was first conjectured by \citeAPY{Milton:1981:BTO}. A proof was proposed\nby \citeAPY{Avellaneda:1988:ECP}, which was corrected by \citeAPY{Nesi:1991:MII} and \citeAPY{Zhikov:1991:EHM}.\n\nThe lower bound \eq{4.21} is sharp, being attained for two-phase assemblages of doubly\ncoated spheres (\citeAY{Milton:1981:BTO}). 
The upper bound \eq{4.23} is attained at $5$ values of $\gamma(k_*)$,\nnamely when $\gamma(k_*)=f_2, 3f_2\/2, 3f_2, 3-3f_1\/2,$ and $3-f_1$ (\citeAY{Milton:1981:BCP}).\n\nThe Hashin-Shtrikman bound (\citeAY{Hashin:1962:VAT}),\n\begin{equation} (k_*-k_2)\/(k_1-k_2)\geq 3f_1k_2\/(f_2k_1+f_1k_2+2k_2),\n\eeq{4.25}\nimplies that $\beta(k_*)$ is non-negative. Hence the denominator\nin \eq{4.21} is positive and so the inequality implies\n\begin{equation} (\sigma_*-\sigma_2)[(f_2\sigma_1+f_1\sigma_2+2\sigma_2)(\sigma_2+2\sigma_1)\n+(\sigma_1-\sigma_2)^2\beta(k_*)]\geq 3f_1\sigma_2(\sigma_1-\sigma_2)(\sigma_2+2\sigma_1).\n\eeq{4.26}\nThe Hashin-Shtrikman bounds can also be rewritten in the form\n\begin{equation}\nf_2k_1+f_1k_2+2k^- \leq \frac{f_1f_2(k_1-k_2)^2}{f_1k_1+ f_2k_2 -k_*}\leq f_2k_1+f_1k_2+2k^+.\n\eeq{4.27}\nThese inequalities imply\n$\gamma(k_*)$ lies between $f_2$ and $3-f_1$. Hence the denominator in \eq{4.23} is positive and\nthe inequality can be rewritten as\n\begin{equation}\n(f_1\sigma_1+f_2\sigma_2-\sigma_*)(3\sigma_2+(\sigma_1-\sigma_2)\gamma(k_*))\geq f_1f_2(\sigma_1-\sigma_2)^2.\n\eeq{4.28}\nWhen $k_1\geq k_2$, \eq{4.20a} implies $\beta(k_N)\geq\beta(k_*)$ and $\gamma(k_D)\geq\gamma(k_*)$, and\nhence\n\begin{eqnarray}\n(\sigma_D-\sigma_2)[(f_2\sigma_1+f_1\sigma_2+2\sigma_2)(\sigma_2+2\sigma_1)\n+(\sigma_1-\sigma_2)^2\beta(k_N)]& \geq & 3f_1\sigma_2(\sigma_1-\sigma_2)(\sigma_2+2\sigma_1),\nonumber \\\n(f_1\sigma_1+f_2\sigma_2-\sigma_N)(3\sigma_2+(\sigma_1-\sigma_2)\gamma(k_D))& \geq & f_1f_2(\sigma_1-\sigma_2)^2. \nonumber \\\n\eeqa{4.29}\nOn the other hand, when $k_1\leq k_2$, \eq{4.20a} implies $\beta(k_D)\geq\beta(k_*)$ and\n$\gamma(k_N)\geq\gamma(k_*)$, and hence\n\begin{eqnarray}\n(\sigma_D-\sigma_2)[(f_2\sigma_1+f_1\sigma_2+2\sigma_2)(\sigma_2+2\sigma_1)\n+(\sigma_1-\sigma_2)^2\beta(k_D)]& \geq & 3f_1\sigma_2(\sigma_1-\sigma_2)(\sigma_2+2\sigma_1),\nonumber \\\n(f_1\sigma_1+f_2\sigma_2-\sigma_N)(3\sigma_2+(\sigma_1-\sigma_2)\gamma(k_N))&\geq& f_1f_2(\sigma_1-\sigma_2)^2. \nonumber \\\n\eeqa{4.30}\nSince $\beta(k_N)$ and $\beta(k_D)$ depend linearly on the volume fractions $f_1$ and $f_2=1-f_1$,\nthe first inequalities\nin \eq{4.29} and \eq{4.30} also depend linearly on the volume fraction and easily yield\nbounds on the volume fraction. On the other hand, finding bounds on the volume fraction\nfrom the second inequalities in \eq{4.29} and \eq{4.30} involves solving a cubic equation in $f_1$. So, instead\nof analytically computing the roots of this cubic, it is probably better to numerically\nsearch for the range of values of $f_1$ where the second inequalities in \eq{4.29} and \eq{4.30}\nare satisfied. 
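A minimal numerical sketch of such a search (with hypothetical measured data, for the case $k_1\geq k_2$ of \eq{4.29}; the function g implements \eq{4.20} by bisection, using the monotonicity noted above) is\n\begin{verbatim}\nimport numpy as np\n\ndef g(A):\n    # Unique positive root of (4.20); the left side of (4.20) decreases\n    # in g, and the root lies between the extreme eigenvalues of A.\n    lam = np.linalg.eigvalsh(A)\n    lo, hi = lam.min(), lam.max()\n    for _ in range(60):\n        mid = 0.5*(lo + hi)\n        if np.sum((lam - mid)/(lam + 2*mid)) > 0:\n            lo = mid\n        else:\n            hi = mid\n    return 0.5*(lo + hi)\n\nsigma1, sigma2, k1, k2 = 5.0, 1.0, 3.0, 1.0   # hypothetical moduli\nsD = g(np.diag([2.0, 2.1, 2.2]))              # hypothetical measurements\nsN = g(np.diag([1.8, 1.9, 2.0]))\nkD = g(np.diag([1.6, 1.7, 1.8]))\nkN = g(np.diag([1.5, 1.6, 1.7]))\n\ndef beta(k, f1):                              # equation (4.22)\n    f2 = 1.0 - f1\n    return (k2 + 2*k1)*(3*f1*k2*(k1 - k2)/(k - k2)\n                        - (f2*k1 + f1*k2 + 2*k2))/(k1 - k2)**2\n\ndef gamma(k, f1):                             # equation (4.24)\n    f2 = 1.0 - f1\n    return f1*f2*(k1 - k2)/(f1*k1 + f2*k2 - k) - 3*k2/(k1 - k2)\n\ndef ok(f1):                                   # both inequalities (4.29)\n    f2 = 1.0 - f1\n    i1 = ((sD - sigma2)*((f2*sigma1 + f1*sigma2 + 2*sigma2)*(sigma2 + 2*sigma1)\n          + (sigma1 - sigma2)**2*beta(kN, f1))\n          >= 3*f1*sigma2*(sigma1 - sigma2)*(sigma2 + 2*sigma1))\n    i2 = ((f1*sigma1 + f2*sigma2 - sN)*(3*sigma2\n          + (sigma1 - sigma2)*gamma(kD, f1))\n          >= f1*f2*(sigma1 - sigma2)**2)\n    return i1 and i2\n\nfs = [f for f in np.linspace(1e-6, 1 - 1e-6, 10001) if ok(f)]\nprint((min(fs), max(fs)) if fs else 'no f1 is consistent with the data')\n\end{verbatim}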
\n\n\n\section{Bounds for elasticity}\n\setcounter{equation}{0}\n\nLet us consider solutions to the linear elasticity equations\n\begin{equation} \bfm\tau({\bf x})={\bfm{\cal C}}({\bf x})\bfm\epsilon({\bf x}),\quad\nabla \cdot\bfm\tau=0,\quad\bfm\epsilon=(\nabla{\bf u}+(\nabla{\bf u})^T)\/2,\n\eeq{5.1}\nwithin $\Omega$, where ${\bf u}({\bf x})$, $\bfm\epsilon({\bf x})$ and $\bfm\tau({\bf x})$ are the displacement\nfield, strain field, and stress field, and ${\bfm{\cal C}}({\bf x})$ is the fourth\norder elasticity tensor field\n\begin{equation} {\bfm{\cal C}}({\bf x})=\chi({\bf x}){\bfm{\cal C}}^1+(1-\chi({\bf x})){\bfm{\cal C}}^2,\n\eeq{5.2}\nin which ${\bfm{\cal C}}^1$ and ${\bfm{\cal C}}^2$ are the elasticity tensors of the phases, assumed\nto be isotropic with elements\n\begin{equation} {\cal C}_{ijk\ell}^h\n=\mu_h(\delta_{ik}\delta_{j\ell}+\delta_{i\ell}\delta_{jk})+(\kappa_h-2\mu_h\/d)\delta_{ij}\delta_{k\ell},\quad h=1,2,\n\eeq{5.3}\nin which $d=2$ or 3 is the dimensionality, and\n$\mu_1,\mu_2$ and $\kappa_1,\kappa_2$ are the shear and bulk moduli of the\ntwo phases. From boundary information on the displacement ${\bf u}_0({\bf x})={\bf u}({\bf x})$\nand traction ${\bf f}({\bf x})=\bfm\tau({\bf x})\cdot{\bf n}$\nwe can immediately determine, using integration by parts,\nvolume averages such as\n\begin{eqnarray} \langle \bfm\epsilon:\bfm\tau\rangle &= & \frac{1}{|\Omega|}\int_{\partial\Omega}{\bf u}\cdot{\bf f}, \nonumber \\\n \langle \bfm\epsilon\rangle & = & \frac{1}{|\Omega|}\int_{\partial\Omega}({\bf n}{\bf u}^{T}+{\bf u}{\bf n}^{T})\/2, \nonumber \\\n \langle \bfm\tau\rangle & = & \frac{1}{|\Omega|}\int_{\partial\Omega}{\bf x}{\bf f}^T,\n\eeqa{5.3a}\nin which $:$ denotes a contraction of two indices.\n\nThere are two natural sets of boundary conditions.\nFor any symmetric matrix $\bfm\epsilon_0$ we could\nprescribe the special Dirichlet boundary conditions\n\begin{equation} {\bf u}({\bf x})=\bfm\epsilon_0{\bf x},\quad {\rm for}~{\bf x}\in\partial\Omega,\n\eeq{5.4}\nand measure $\bfm\tau_0=\langle\bfm\tau\rangle$. Here, according to \eq{5.3a}, $\bfm\epsilon_0$ equals $\langle\bfm\epsilon\rangle$. Since\n$\bfm\tau_0$ is linearly related to $\bfm\epsilon_0$ we can write\n\begin{equation} \bfm\tau_0={\bfm{\cal C}}^D\bfm\epsilon_0,\n\eeq{5.5}\nwhich defines the elasticity tensor ${\bfm{\cal C}}^D$ ($D$ for Dirichlet). Alternatively,\nfor any symmetric matrix $\bfm\tau_0$ we could\nprescribe the special Neumann boundary conditions\n\begin{equation} \bfm\tau({\bf x})\cdot{\bf n}=\bfm\tau_0\cdot{\bf n},\quad {\rm for}~{\bf x}\in\partial\Omega,\n\eeq{5.6}\nand measure $\bfm\epsilon_0=\langle\bfm\epsilon\rangle$. Here, according to \eq{5.3a}, $\bfm\tau_0$ equals $\langle\bfm\tau\rangle$. Since\n$\bfm\epsilon_0$ is linearly related to $\bfm\tau_0$ we can write\n\begin{equation} \bfm\epsilon_0=({\bfm{\cal C}}^N)^{-1}\bfm\tau_0,\n\eeq{5.7}\nwhich defines the elasticity tensor ${\bfm{\cal C}}^N$ ($N$ for Neumann). 
It is easy to check\nthat ${\\bfm{\\cal C}}^D$ and ${\\bfm{\\cal C}}^N$ satisfy all the usual symmetries of elasticity tensors.\n\nDirectly analogous to \\eq{2.13} we have the bounds\n\\begin{equation} \\langle{\\bfm{\\cal C}}^{-1}\\rangle^{-1}\\leq{\\bfm{\\cal C}}^D\\leq \\langle{\\bfm{\\cal C}}\\rangle,\\quad\\quad\n \\langle{\\bfm{\\cal C}}^{-1}\\rangle^{-1}\\leq{\\bfm{\\cal C}}^N\\leq \\langle{\\bfm{\\cal C}}\\rangle\n\\eeq{5.8}\nof \\citeAPY{Nemat-Nasser:1993:MOP}, and directly analogous to \\eq{2.14},\nfor any boundary condition (not just the\nspecial boundary conditions \\eq{5.4} and \\eq{5.6}) we have the bounds\n\\begin{equation} \\langle\\bfm\\epsilon:\\bfm\\tau\\rangle \\geq \\bfm\\epsilon_0:{\\bfm{\\cal C}}^N\\bfm\\epsilon_0,\\quad\n\\langle\\bfm\\epsilon:\\bfm\\tau\\rangle \\geq \\bfm\\tau_0:({\\bfm{\\cal C}}^D)^{-1}\\bfm\\tau_0,\n\\eeq{5.9}\nwhere $\\bfm\\epsilon_0=\\langle\\bfm\\epsilon\\rangle$ and $\\bfm\\tau_0=\\langle\\bfm\\tau\\rangle$,\ndue to Willis (in a 1989 private communication to Nemat-Nasser and Hori) and presented by \\citeAPY{Nemat-Nasser:1993:MOP}.\n\nAlso directly analogous to \\eq{3.7b} and \\eq{3.16} we have the bounds\n\\begin{equation} {\\bfm{\\cal C}}^N\\leq{\\bfm{\\cal C}}^*\\leq {\\bfm{\\cal C}}^D,\n\\eeq{5.10}\nwhere ${\\bfm{\\cal C}}^*$ is the effective elasticity tensor of any assemblage of\nrescaled copies of $\\Omega$ packed to fill all space. These are\nessentially the bounds of \\citeAPY{Huet:1990:AVC} applied to this assemblage. Thus ``lower\nbounds'' on ${\\bfm{\\cal C}}^*$ directly give ``lower bounds'' on ${\\bfm{\\cal C}}^D$ and\n``upper bounds'' on ${\\bfm{\\cal C}}^*$ directly give ``upper bounds'' on ${\\bfm{\\cal C}}^N$.\nIn particular, in two dimensions lower and upper bounds on $\\bfm\\epsilon_0\\cdot{\\bfm{\\cal C}}^*\\bfm\\epsilon_0$\nhave been obtained by \\citeAPY{Gibiansky:1984:DCPa} (for the equivalent plate equation) and also by \\citeAPY{Allaire:1993:EOB}.\nAssuming that the phases have been labeled so that $\\mu_1\\geq\\mu_2$ ($\\kappa_1-\\kappa_2$ could be either positive\nor negative),\nand letting $\\epsilon_1$ and $\\epsilon_2$ denote the two eigenvalues of $\\bfm\\epsilon_0$,\nthe bounds imply\n\\begin{eqnarray}\n&~& \\bfm\\epsilon_0\\cdot{\\bfm{\\cal C}}^D\\bfm\\epsilon_0 \\geq\n(\\epsilon_1+\\epsilon_2)^2\/(f_1\/\\kappa_1+f_2\/\\kappa_2)\n+(\\epsilon_1-\\epsilon_2)^2\/(f_1\/\\mu_1+f_2\/\\mu_2), \\nonumber \\\\\n&~& \\quad \\quad {\\rm if~~} |\\kappa_1-\\kappa_2|(f_1\\mu_2+f_2\\mu_1)|\\epsilon_1+\\epsilon_2|\\leq\n |\\mu_1-\\mu_2|(f_1\\kappa_2+f_2\\kappa_1)|\\epsilon_1-\\epsilon_2|; \\nonumber \\\\\n&~& \\nonumber \\\\ &~&\n\\bfm\\epsilon_0\\cdot{\\bfm{\\cal C}}^D\\bfm\\epsilon_0 \\geq (\\epsilon_1+\\epsilon_2)^2(f_1\\kappa_1+f_2\\kappa_2)\n+(\\epsilon_1-\\epsilon_2)^2(f_1\\mu_1+f_2\\mu_2) \\nonumber \\\\\n&~&~~~~~~~~~~~~~~~~~~~-f_1f_2\\frac{[|\\kappa_1-\\kappa_2||\\epsilon_1+\\epsilon_2|+|\\mu_1-\\mu_2||\\epsilon_1-\\epsilon_2|]^2}\n{f_1(\\mu_2+\\kappa_2)+f_2(\\mu_1+\\kappa_1)}, \\nonumber \\\\\n&~&\\quad \\quad {\\rm if~~} (\\mu_2+f_1\\kappa_2+f_2\\kappa_1)|\\epsilon_1-\\epsilon_2| \\geq\nf_2|\\kappa_1-\\kappa_2||\\epsilon_1+\\epsilon_2| \\nonumber \\\\\n&~& \\quad \\quad{\\rm and~~} |\\kappa_1-\\kappa_2|(f_1\\mu_2+f_2\\mu_1)|\\epsilon_1+\\epsilon_2|\\geq\n |\\mu_1-\\mu_2|(f_1\\kappa_2+f_2\\kappa_1)|\\epsilon_1-\\epsilon_2|; \\nonumber \\\\\n&~& \\nonumber \\\\ &~&\n \\bfm\\epsilon_0\\cdot{\\bfm{\\cal C}}^D\\bfm\\epsilon_0
\\geq\n\\mu_2(\\epsilon_1-\\epsilon_2)^2+\n\\frac{\\kappa_1\\kappa_2+\\mu_2(f_1\\kappa_1+f_2\\kappa_2)}{\\mu_2+f_1\\kappa_2+f_2\\kappa_1}(\\epsilon_1+\\epsilon_2)^2, \\nonumber \\\\\n&~ & \\quad \\quad{\\rm if~~} (\\mu_2+f_1\\kappa_2+f_2\\kappa_1)|\\epsilon_1-\\epsilon_2| \\leq\nf_2|\\kappa_1-\\kappa_2||\\epsilon_1+\\epsilon_2|;\n\\eeqa{5.12}\nand\n\\begin{eqnarray}\n&~& \\bfm\\epsilon_0\\cdot{\\bfm{\\cal C}}^N\\bfm\\epsilon_0 \\leq (\\epsilon_1+\\epsilon_2)^2(f_1\\kappa_1+f_2\\kappa_2)\n+(\\epsilon_1-\\epsilon_2)^2(f_1\\mu_1+f_2\\mu_2) \\nonumber \\\\\n&~&~~~~~~~~~~~~~~~~~~~-f_1f_2\\frac{[|\\kappa_1-\\kappa_2||\\epsilon_1+\\epsilon_2|-|\\mu_1-\\mu_2||\\epsilon_1-\\epsilon_2|]^2}\n{f_1(\\mu_2+\\kappa_2)+f_2(\\mu_1+\\kappa_1)}, \\nonumber \\\\\n&~&\\quad \\quad {\\rm if~~~} f_1|\\kappa_1-\\kappa_2||\\epsilon_1+\\epsilon_2|\\leq (\\mu_1+f_1\\kappa_2+f_2\\kappa_1)|\\epsilon_1-\\epsilon_2| \\nonumber \\\\\n&~&\\quad \\quad {\\rm and~~} f_+|\\mu_1-\\mu_2||\\epsilon_1-\\epsilon_2|\\leq (\\kappa_++f_1\\mu_2+f_2\\mu_1)|\\epsilon_1+\\epsilon_2|; \\nonumber \\\\\n&~& \\nonumber \\\\ &~&\n\\bfm\\epsilon_0\\cdot{\\bfm{\\cal C}}^N\\bfm\\epsilon_0 \\leq \\mu_1(\\epsilon_1-\\epsilon_2)^2+\\frac{\\kappa_1\\kappa_2+\\mu_1(f_1\\kappa_1+f_2\\kappa_2)}{\\mu_1+f_1\\kappa_2+f_2\\kappa_1}(\\epsilon_1+\\epsilon_2)^2, \\nonumber \\\\\n&~&\\quad \\quad {\\rm if~~~} f_1|\\kappa_1-\\kappa_2||\\epsilon_1+\\epsilon_2|\\geq (\\mu_1+f_1\\kappa_2+f_2\\kappa_1)|\\epsilon_1-\\epsilon_2|; \\nonumber \\\\\n&~& \\nonumber \\\\ &~&\n\\bfm\\epsilon_0\\cdot{\\bfm{\\cal C}}^N\\bfm\\epsilon_0 \\leq \\kappa_+(\\epsilon_1+\\epsilon_2)^2+\\frac{\\mu_1\\mu_2+\\kappa_+(f_1\\mu_1+f_2\\mu_2)}{\\kappa_++f_1\\mu_2+f_2\\mu_1}(\\epsilon_1-\\epsilon_2)^2, \\nonumber \\\\\n&~&\\quad \\quad {\\rm if~~~} f_+|\\mu_1-\\mu_2||\\epsilon_1-\\epsilon_2|\\geq (\\kappa_++f_1\\mu_2+f_2\\mu_1)|\\epsilon_1+\\epsilon_2|,\n\\eeqa{5.12a}\nwhere $\\kappa_+$ is the maximum of $\\kappa_1$ and $\\kappa_2$ and $f_+$ is the volume fraction of the material corresponding to $\\kappa_+$.\n\n\nThe corresponding three-dimensional bounds follow directly from\n\\eq{5.10} and the bounds of \\citeAPY{Allaire:1993:OBE}, but are not so explicit. 
Assuming\nthat the Lam\\'e moduli\n\\begin{equation} \\lambda_1=\\kappa_1-2\\mu_1\/3~~{\\rm and}~~\\lambda_2=\\kappa_2-2\\mu_2\/3\n\\eeq{5.13}\nof both phases are positive, and that the bulk and shear moduli of the two phases are well-ordered with\n\\begin{equation} \\kappa_1>\\kappa_2>0 {\\rm ~~and~~}\\mu_1>\\mu_2>0,\n\\eeq{5.13a}\nthese bounds are\n\\begin{eqnarray}\n\\bfm\\epsilon_0:{\\bfm{\\cal C}}^D\\bfm\\epsilon_0 & \\geq & \\bfm\\epsilon_0:{\\bfm{\\cal C}}_2\\bfm\\epsilon_0\n+f_1\\max_{\\bfm\\eta}[2\\bfm\\epsilon_0:\\bfm\\eta-\\bfm\\eta:({\\bfm{\\cal C}}_1-{\\bfm{\\cal C}}_2)^{-1}\\bfm\\eta-f_2g(\\bfm\\eta)], \\nonumber \\\\\n\\bfm\\epsilon_0:{\\bfm{\\cal C}}^N\\bfm\\epsilon_0 & \\geq & \\bfm\\epsilon_0:{\\bfm{\\cal C}}_1\\bfm\\epsilon_0\n+f_2\\min_{\\bfm\\eta}[2\\bfm\\epsilon_0:\\bfm\\eta+\\bfm\\eta:({\\bfm{\\cal C}}_1-{\\bfm{\\cal C}}_2)^{-1}\\bfm\\eta-f_1h(\\bfm\\eta)],\n\\eeqa{5.14}\nwhere $g(\\bfm\\eta)$ and $h(\\bfm\\eta)$ are functions of the eigenvalues $\\eta_1, \\eta_2$, and $\\eta_3$\nof the symmetric matrix $\\bfm\\eta$.\nAssuming that\nthese are labeled with\n\\begin{equation} \\eta_1\\leq\\eta_2\\leq\\eta_3,\n\\eeq{5.15}\nwe have\n\\begin{eqnarray}\ng(\\bfm\\eta)&=&\n\\frac{(\\eta_1-\\eta_3)^2}{4\\mu_2}+\\frac{(\\eta_1+\\eta_3)^2}{4(\\lambda_2+\\mu_2)} ~~{\\rm if}~~\n\\eta_3\\geq\\frac{\\lambda_2+2\\mu_2}{2(\\lambda_2+\\mu_2)}(\\eta_1+\\eta_3)\\geq\\eta_1, \\nonumber \\\\\ng(\\bfm\\eta)&=&\n\\frac{\\eta_1^2}{\\lambda_2+2\\mu_2} ~~{\\rm if}~~\n\\eta_1>\\frac{\\lambda_2+2\\mu_2}{2(\\lambda_2+\\mu_2)}(\\eta_1+\\eta_3), \\nonumber \\\\\ng(\\bfm\\eta)&=&\n\\frac{\\eta_3^2}{\\lambda_2+2\\mu_2} ~~{\\rm if}~~\n\\eta_3<\\frac{\\lambda_2+2\\mu_2}{2(\\lambda_2+\\mu_2)}(\\eta_1+\\eta_3),\n\\eeqa{5.16}\nand\n\\begin{equation} h(\\bfm\\eta)=\\frac{1}{\\lambda_1+2\\mu_1}\\min\\{\\eta_1^2,\\eta_2^2,\\eta_3^2\\}.\n\\eeq{5.17}\n\nThe bounds \\eq{5.12}, \\eq{5.12a} and \\eq{5.14} can be used in an inverse way to bound\nthe volume fraction $f_1=1-f_2$, for a single experiment when, for special Dirichlet conditions,\n$\\bfm\\epsilon_0$ is prescribed and $\\bfm\\tau_0$ ($={\\bfm{\\cal C}}^D\\bfm\\epsilon_0$) is measured, or when, for special Neumann conditions,\n$\\bfm\\tau_0$ is prescribed and $\\bfm\\epsilon_0$ ($=({\\bfm{\\cal C}}^N)^{-1}\\bfm\\tau_0$) is measured. \\citeAPY{Allaire:1993:OBE} also derive\nbounds on the complementary energy and these imply\n\\begin{equation} \\bfm\\tau_0:({\\bfm{\\cal C}}^N)^{-1}\\bfm\\tau_0 \\geq \\bfm\\tau_0:{\\bfm{\\cal C}}_1^{-1}\\bfm\\tau_0\n+f_2\\max_{\\bfm\\zeta}[2\\bfm\\tau_0:\\bfm\\zeta-\\bfm\\zeta:({\\bfm{\\cal C}}_2^{-1}-{\\bfm{\\cal C}}_1^{-1})^{-1}\\bfm\\zeta-f_1\\bfm\\zeta:{\\bfm{\\cal C}}_1\\bfm\\zeta+f_1h({\\bfm{\\cal C}}_1\\bfm\\zeta)],\n\\eeq{5.17aa}\nand\n\\begin{equation}\n\\bfm\\tau_0:({\\bfm{\\cal C}}^D)^{-1}\\bfm\\tau_0 \\leq \\bfm\\tau_0:{\\bfm{\\cal C}}_2^{-1}\\bfm\\tau_0\n+f_1\\min_{\\bfm\\zeta}[2\\bfm\\tau_0:\\bfm\\zeta+\\bfm\\zeta:({\\bfm{\\cal C}}_2^{-1}-{\\bfm{\\cal C}}_1^{-1})^{-1}\\bfm\\zeta-f_2\\bfm\\zeta:{\\bfm{\\cal C}}_2\\bfm\\zeta+f_2g({\\bfm{\\cal C}}_2\\bfm\\zeta)].\n\\eeq{5.17ab}\n\nThe bound in \\eq{5.17aa} is particularly useful when $\\bfm\\tau_0=-p{\\bf I}$, corresponding to immersing the body $\\Omega$ in a fluid\nwith pressure $p$.
Then from measurements of the resulting volume change of the body one can determine $\\bfm\\tau_0:({\\bfm{\\cal C}}^N)^{-1}\\bfm\\tau_0=-p\\mathop{\\rm Tr}\\nolimits\\bfm\\epsilon_0$.\nLet us assume $\\lambda_1>0$ and set $\\bfm\\zeta=\\alpha{\\bf I}+{\\bf A}$, with ${\\bf A}$ being a trace-free matrix with eigenvalues $a_1$, $a_2$ and $a_3$. Then\nwe have $\\bfm\\eta={\\bfm{\\cal C}}_1\\bfm\\zeta=2\\mu_1(k{\\bf I}+{\\bf A})$ where $k=\\alpha[1+3\\lambda_1\/(2\\mu_1)]$. Substitution gives\n\\begin{eqnarray} &~&[\\bfm\\zeta:{\\bfm{\\cal C}}_1\\bfm\\zeta-h({\\bfm{\\cal C}}_1\\bfm\\zeta)]-\\alpha^2[{\\bf I}:{\\bfm{\\cal C}}_1{\\bf I}-h({\\bfm{\\cal C}}_1{\\bf I})] \\nonumber \\\\\n&~&~=2\\mu_1\\left[a_1^2+a_2^2+a_3^2-\\frac{2\\mu_1}{\\lambda_1+2\\mu_1}\\min\\{(k+a_1)^2-k^2,(k+a_2)^2-k^2,(k+a_3)^2-k^2\\}\\right] \\nonumber \\\\\n&~&~\\geq 2\\mu_1[a_1^2+a_2^2+a_3^2-\\min\\{2a_1k+a_1^2,2a_2k+a_2^2,2a_3k+a_3^2\\}],\n\\eeqa{5.17ac}\nwhich is non-negative since $\\min\\{2a_1k+a_1^2,2a_2k+a_2^2,2a_3k+a_3^2\\}\\leq a_j^2$ where $j$ is such that $ka_j$ is non-positive. (Note that\n$a_1$, $a_2$ and $a_3$ cannot all have the same sign since they sum to zero.) Consequently when $\\bfm\\tau_0=-p{\\bf I}$\nthe maximum over $\\bfm\\zeta$ in \\eq{5.17aa} is achieved when ${\\bf A}=0$, and taking the maximum over $\\alpha$ gives\n\\begin{equation} -p\\mathop{\\rm Tr}\\nolimits\\bfm\\epsilon_0 \\geq p^2\\left[\\frac{1}{\\kappa_1}+\\frac{f_2}{\\frac{\\kappa_1\\kappa_2}{\\kappa_1-\\kappa_2}+\\frac{4f_1\\mu_1\\kappa_1}{3\\kappa_1+4\\mu_1}}\\right],\n\\eeq{5.17ad}\nor equivalently\n\\begin{equation} -p\/(\\mathop{\\rm Tr}\\nolimits\\bfm\\epsilon_0)\\leq \\kappa_{HSH}^+\\equiv\\kappa_1-\\frac{f_2}{1\/(\\kappa_1-\\kappa_2)-f_1\/(\\kappa_1+4\\mu_1\/3)}, \\eeq{5.17ae}\nwhere $\\kappa_{HSH}^+$ is the upper bulk modulus bound of \\citeAPY{Hashin:1963:VAT} and\n\\citeAPY{Hill:1963:EPR}. The inequality \\eq{5.17ad} can be\nrewritten as\n\\begin{equation} \\frac{f_2}{-(\\mathop{\\rm Tr}\\nolimits\\bfm\\epsilon_0\/p)-1\/\\kappa_1}\\leq \\frac{\\kappa_1\\kappa_2}{\\kappa_1-\\kappa_2}+\\frac{4f_1\\mu_1\\kappa_1}{3\\kappa_1+4\\mu_1},\n\\eeq{5.17af}\nwhere we have used the fact that $-(\\mathop{\\rm Tr}\\nolimits\\bfm\\epsilon_0\/p)-1\/\\kappa_1$ is positive (since $\\bfm\\tau_0:({\\bfm{\\cal C}}^N)^{-1}\\bfm\\tau_0\\geq \\bfm\\tau_0:({\\bfm{\\cal C}}_1)^{-1}\\bfm\\tau_0$\nby \\eq{5.8}). This then yields the volume fraction bound\n\\begin{equation} f_2\\leq \\frac{\\frac{\\kappa_1\\kappa_2}{\\kappa_1-\\kappa_2}+\\frac{4\\mu_1\\kappa_1}{3\\kappa_1+4\\mu_1}}{\\frac{1}{-(\\mathop{\\rm Tr}\\nolimits\\bfm\\epsilon_0\/p)-1\/\\kappa_1}+\\frac{4\\mu_1\\kappa_1}{3\\kappa_1+4\\mu_1}},\n\\eeq{5.18ag}\nwhich we expect to be closest to the actual volume fraction when phase 2 (the softer phase)\nis the inclusion phase. Thus the\nbound may be particularly effective for estimating the volume of cavities in a body. Note that\nif some granules of phase 1 lie within these cavities, then such granules will not\ncontribute to this volume fraction estimate, but will contribute to the overall weight.\nIf the weight of the body has been measured (and the density of phase 1 is known),\nthis provides a way of estimating the volume of granules of phase 1 which lie within the\ncavities.
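\n\nAs a minimal numerical sketch of the bound \\eq{5.18ag} (Python; the inputs are the moduli of phase 1, the bulk modulus of phase 2, and the measured ratio $\\mathop{\\rm Tr}\\nolimits\\bfm\\epsilon_0\/p$, assuming the well-ordering \\eq{5.13a}):\n\\begin{verbatim}\ndef f2_upper_bound(kappa1, kappa2, mu1, tr_eps0_over_p):\n    # upper bound (5.18ag) on the volume fraction f2 of the\n    # softer phase from a single hydrostatic measurement\n    s = -tr_eps0_over_p - 1.0\/kappa1   # positive by (5.8)\n    assert s > 0 and kappa1 > kappa2\n    m = 4.0*mu1*kappa1\/(3.0*kappa1 + 4.0*mu1)\n    return (kappa1*kappa2\/(kappa1 - kappa2) + m)\/(1.0\/s + m)\n\\end{verbatim}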
\n\nWhen multiple experiments have been\ndone, and the full tensor ${\\bfm{\\cal C}}^D$ or ${\\bfm{\\cal C}}^N$ has been determined, then the ``trace bounds'' of\nZhikov (\\citeyearNP{Zhikov:1988:ETA}, \\citeyearNP{Zhikov:1991:EHM}) and \\citeAPY{Milton:1988:VBE} can be used.\n(These generalize the well-known Hashin-Shtrikman (\\citeyearNP{Hashin:1963:VAT})\nbounds to anisotropic elastic composites.) Define the two traces\n\\begin{equation} \\mathop{\\rm Tr}\\nolimits_h{\\bfm{\\cal A}}=A_{iijj}\/d,\\quad\\quad \\mathop{\\rm Tr}\\nolimits_s{\\bfm{\\cal A}}=A_{ijij}-(A_{iijj}\/d),\n\\eeq{5.17a}\nfor any fourth order tensor ${\\bfm{\\cal A}}$ with elements $A_{ijk\\ell}$ in spatial dimension $d$.\nThen, assuming the moduli of the two phases are well-ordered, satisfying \\eq{5.13a},\ntheir lower and upper ``bulk modulus type bounds''\nimply, through \\eq{5.10}, the universal bounds\n\\begin{eqnarray} f_1\\mathop{\\rm Tr}\\nolimits_h[({\\bfm{\\cal C}}^D-{\\bfm{\\cal C}}_2)^{-1}]\n& \\leq & \\frac{1}{d(\\kappa_1-\\kappa_2)}+\\frac{f_2}{d\\kappa_2+2(d-1)\\mu_2}, \\nonumber \\\\\nf_2\\mathop{\\rm Tr}\\nolimits_h[({\\bfm{\\cal C}}_1-{\\bfm{\\cal C}}^N)^{-1}]\n& \\leq & \\frac{1}{d(\\kappa_1- \\kappa_2)}-\\frac{f_1}{d\\kappa_1+2(d-1)\\mu_1},\n\\eeqa{5-18}\nwhile their lower and upper ``shear modulus type bounds'' imply the universal bounds\n\\begin{eqnarray} f_1\\mathop{\\rm Tr}\\nolimits_s[({\\bfm{\\cal C}}^D-{\\bfm{\\cal C}}_2)^{-1}] & \\leq &\n\\frac{(d-1)(d+2)}{4(\\mu_1-\\mu_2)}+\\frac{d(d-1)(\\kappa_2+2\\mu_2)f_2}{2\\mu_2(d\\kappa_2+2(d-1)\\mu_2)},\\nonumber \\\\\nf_2\\mathop{\\rm Tr}\\nolimits_s[({\\bfm{\\cal C}}_1-{\\bfm{\\cal C}}^N)^{-1}] & \\leq &\n\\frac{(d-1)(d+2)}{4(\\mu_1-\\mu_2)}-\\frac{d(d-1)(\\kappa_1+2\\mu_1)f_1}{2\\mu_1(d\\kappa_1+2(d-1)\\mu_1)}.\n\\eeqa{5.19}\nSince these inequalities depend linearly on $f_1=1-f_2$, they can easily be inverted to obtain\nbounds on $f_1$ given ${\\bfm{\\cal C}}^D$ or ${\\bfm{\\cal C}}^N$.\n\nAs noted by \\citeAPY{Milton:1988:VBE}, the lower and upper ``bulk modulus type bounds'' are tighter than those obtained by \\citeAPY{Kantor:1984:IRB} and\n\\citeAPY{Francfort:1986:HOB}, which imply\n\\begin{eqnarray} \\mathop{\\rm Tr}\\nolimits_h{\\bfm{\\cal C}}^N&\\leq& d\\kappa_1-\\frac{f_2}{\\frac{1}{d(\\kappa_1-\\kappa_2)}-\\frac{f_1}{d\\kappa_1+2(d-1)\\mu_1}}, \\nonumber \\\\\n1\/\\mathop{\\rm Tr}\\nolimits_h[({\\bfm{\\cal C}}^D)^{-1}]&\\geq& d\\kappa_2+\\frac{f_1}{\\frac{1}{d(\\kappa_1-\\kappa_2)}+\\frac{f_2}{d\\kappa_2+2(d-1)\\mu_2}}.\n\\eeqa{5.20}\nFor bodies $\\Omega$ of ellipsoidal or parallelopipedic shape the universal bounds \\eq{5.20} were\nobtained by Nemat-Nasser and Hori (\\citeyearNP{Nemat-Nasser:1993:MOP}, \\citeyearNP{Nemat-Nasser:1995:UBO}):\nsee the equations (4.3.9) and (4.4.8), with $I=1$, in their 1995 paper. Their other bounds, with $I=2$,\nwhich incorporate the ``shear responses'' of the tensors ${\\bfm{\\cal C}}^N$ and ${\\bfm{\\cal C}}^D$, are improved upon by the bounds \\eq{5.19},\nas can be seen using the inequality\n\\begin{equation} \\mathop{\\rm Tr}\\nolimits_s{\\bfm{\\cal A}}^{-1}\\geq (d-1)^2(d+2)^2\/(4\\mathop{\\rm Tr}\\nolimits_s{\\bfm{\\cal A}}), \\eeq{5.21}\nwhich holds for any positive definite fourth-order tensor ${\\bfm{\\cal A}}$.\n\n\\section*{Acknowledgements}\nEunjoo Kim is deeply thanked for generously providing figures \\ref{3}, \\ref{4}, and \\ref{5}, and for doing the numerical\nsimulations which generated them.
Additionally, the author is grateful to Hyeonbae Kang and\nMichael Vogelius for stimulating\nhis interest in this problem, and for their comments on an initial draft of the manuscript.\nThe author is most thankful for support from the Mathematical Sciences Research Institute and the Simons Foundation,\nthrough an Eisenbud fellowship, and from the\nNational Science Foundation through grant DMS-0707978.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Appendix: Volterra equation for the cavity amplitude}\n\\normalsize\nWe start from the Hamiltonian (1) of the main article and derive the Heisenberg operator equations (limit of zero temperature) for the cavity and spin operators, $\\dot a=i [{\\cal H},a]-\\kappa a$, $\\dot \\sigma_k^-=i [{\\cal H},\\sigma_k^-]-\\gamma \\sigma_k^-$, respectively. Here $\\kappa$ and $\\gamma$ stand for the total cavity and spin losses, respectively. We then write a set of equations for the expectation values in the frame rotating with the probe frequency $\\omega_p$, using the commonly used Holstein-Primakoff approximation, $\\langle \\sigma_k^z \\rangle \\approx -1$, which is valid if the number of excited spins is small compared to the ensemble size (which is the case for all experimental results reported in the main article). Denoting $A(t)\\equiv \\langle a(t)\\rangle$ and $B_k(t)\\equiv\\langle\\sigma_k^-(t)\\rangle$, we end up with the following set of first-order ODEs for the cavity and spin amplitudes\n\\begin{subequations}\n\\begin{eqnarray}\n\\label{Eq_a_Volt}\n\\dot{A}(t) & = & -\\left[\\kappa-i(\\omega_c-\\omega_p)\\right]A(t) + \\sum_k\ng_k B_k(t)-\\eta(t), \\\\\n\\label{Eq_bk_Volt}\n\\dot{B}_k(t) & = & -\\left[\\gamma+i(\\omega_k-\\omega_p)\\right] B_k(t) - g_k A(t).\n\\end{eqnarray}\n\\end{subequations}\nNote that the size of our spin ensemble is very large (typically $N\\sim 10^{12}$) and individual spins are distributed around a certain mean frequency $\\omega_s$. We can thus go to the continuum limit by introducing the continuous spectral density as $\\rho(\\omega)=\\sum_k g_k^2 \\delta(\\omega-\\omega_k)\/\\Omega^2$ (see, e.g. \\onlinecite{Diniz2011}), where $\\Omega$ is the collective coupling strength of the spin ensemble to the cavity and $\\int d\\omega\\rho(\\omega)=1$. In what follows we will replace any discrete function $F(\\omega_k)$ by its continuous counterpart, $F(\\omega)$: $F(\\omega_k) \\rightarrow \\Omega^2 \\int d\\omega \\rho(\\omega) F(\\omega)$. By integrating Eq.~(\\ref{Eq_bk_Volt}) in time, each individual spin amplitude, $B_k(t)$, can formally be expressed in terms of the cavity amplitude, $A(t)$. By plugging the resulting equation into Eq.~(\\ref{Eq_a_Volt}) and assuming that initially all spins are in the ground state, $B_k(t=0)=0$, we arrive at the following integro-differential Volterra equation for the cavity amplitude ($\\omega_c=\\omega_s$)\n\\begin{eqnarray}\n\\dot A(t)=-\\kappa A(t)-\\Omega^2 \\int d\\omega \\rho(\\omega) \\int\\limits_{0}^t d\\tau\ne^{-i(\\omega-\\omega_c-i\\gamma)(t-\\tau)}A(\\tau)-\\eta(t),\n\\label{Eq_rigor}\n\\end{eqnarray}\nNote that in the $\\omega_p$-rotating frame the rapid oscillations present in the original Hamiltonian (1) are absent, so that the time variation of $\\eta(t)$ in Eq.~(\\ref{Eq_rigor}) is much slower than $1\/\\omega_p$.\n\nFor a proper description of the resulting dynamics, it is essential to capture the form of the spectral density $\\rho(\\omega)$ realized in the experiment as accurately as possible.
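 Before specifying $\\rho(\\omega)$, we note that Eq.~(\\ref{Eq_rigor}) is straightforward to integrate numerically once the memory kernel $K(s)=\\Omega^2\\int d\\omega\\,\\rho(\\omega)e^{-i(\\omega-\\omega_c-i\\gamma)s}$ has been precomputed by quadrature. A minimal sketch (Python with NumPy; an explicit Euler step is used for brevity, and all names here are ours):\n\\begin{verbatim}\nimport numpy as np\n\ndef solve_cavity(rho, w, Omega, kappa, gamma, wc, eta, t):\n    # rho: spectral density sampled on the frequency grid w\n    # eta: drive eta(t_n) sampled on the uniform time grid t\n    dt, dw = t[1] - t[0], w[1] - w[0]\n    # memory kernel K(s) on the time grid, by quadrature over w\n    K = Omega**2 * (np.exp(-1j*np.outer(t, w - wc - 1j*gamma))\n                    * rho).sum(axis=1) * dw\n    A = np.zeros(t.size, dtype=complex)\n    for n in range(t.size - 1):\n        mem = dt * np.dot(K[n::-1], A[:n+1])  # convolution term\n        A[n+1] = A[n] + dt*(-kappa*A[n] - mem - eta[n])\n    return A\n\\end{verbatim}\n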
Following \\onlinecite{Sandner2012}, we take the $q$-Gaussian function for that purpose\n\\begin{eqnarray}\n\\label{rho_w_Eq}\n\\rho(\\omega)=C\\cdot\\left[1-(1-q)\\dfrac{(\\omega-\\omega_s)^2}{\\Delta^2}\\right]^{\n\\dfrac{1}{1-q}},\n\\end{eqnarray}\ncharacterized by the dimensionless shape parameter $1<q<3$.\n\nWrite $\\xi_m=\\left\\|g_m\\right\\|\/\\left\\|\\tilde{g}\\right\\|$ and $\\xi_i=\\|g^{(i)}\\|\/\\left\\|\\tilde{g}\\right\\|$ for the malicious and honest gradient norm ratios. It's easy to see that:\n\\begin{equation*}\n\\begin{aligned}\n&\\left\\|g_m\\right\\|^2 > \\xi_i^2 \\left\\|\\tilde{g}\\right\\|^2 \\\\\n&\\Longleftrightarrow ~~ \\sum_{j=1}^{d}(\\mu_j-z\\sigma_j)^2 > \\xi_i^2 \\sum_{j=1}^{d}(\\mu_j)^2 \\\\\n&\\Longleftrightarrow ~~ \\sum_{j=1}^{d}(\\mu_j^2-2z\\mu_j\\sigma_j+z^2\\sigma_j^2) > \\xi_i^2 \\sum_{j=1}^{d}(\\mu_j)^2 \\\\\n&\\Longleftrightarrow ~~ \\sum_{j=1}^{d}(z^2\\sigma_j^2) > \\xi_i^2 \\sum_{j=1}^{d}(2z\\mu_j\\sigma_j)\\\\\n&\\Longleftrightarrow ~~ z > \\xi_i^2 \\frac{\\sum_{j=1}^{d}(2\\mu_j\\sigma_j)}{\\sum_{j=1}^{d}(\\sigma_j^2)}\n\\end{aligned}\n\\end{equation*}\nTherefore, given appropriate $z$, there exists $i$ such that $1\\le \\xi_i < \\xi_m$. By using these gradient norm relations, we can get:\n\n\\begin{equation*}\n\t\\begin{aligned}\n\t&\\cos(g_m,\\tilde{g}) - \\cos(g^{(i)},\\tilde{g})\\\\\n\t&= \\frac{(\\xi_m^2+1)\\left\\|\\tilde{g}\\right\\|^2-\\left\\|g_m-\\tilde{g}\\right\\|^2}{2\\xi_m\\left\\|\\tilde{g}\\right\\|^2} - \\frac{(\\xi_i^2+1)\\left\\|\\tilde{g}\\right\\|^2-\\left\\|g^{(i)}-\\tilde{g}\\right\\|^2}{2\\xi_i\\left\\|\\tilde{g}\\right\\|^2}\\\\\n\t&> \\left(\\frac{(\\xi_m^2+1)}{2\\xi_m}-\\frac{(\\xi_i^2+1)}{2\\xi_i}\\right)+\\frac{\\left\\|g_m-\\tilde{g}\\right\\|^2}{2\\left\\|\\tilde{g}\\right\\|^2}\\left(\\frac{1}{\\xi_i}-\\frac{1}{\\xi_m}\\right)\\\\\n\t&=\\frac{(\\xi_m-\\xi_i)(\\xi_m \\xi_i -1)}{2\\xi_m \\xi_i}+\\frac{(\\xi_m-\\xi_i)\\left\\|g_m-\\tilde{g}\\right\\|^2}{2\\xi_m \\xi_i\\left\\|\\tilde{g}\\right\\|^2}\\\\\n\t&>0\n\t\\end{aligned}\n\\end{equation*}\nHence, it is possible for the malicious gradient to have a bigger cosine-similarity with the true averaged gradient than that of some honest gradients.\n\n\n\n\\subsection{Proof of Lemma 2}\n\\label{appendix:lemm2}\nConsider an arbitrary subset of clients $\\mathcal{G}$ with $|\\mathcal{G}|=(1-\\beta)n$ and $\\beta<0.5$.\nLet $\\mathbf{A}=\\sum\\limits_{i\\notin \\mathcal{G}}\\left(g_{t}^{(i)}-\\nabla F(\\mathbf{x}_{t})\\right)$, $\\mathbf{B}=\\sum\\limits_{j\\in \\mathcal{G}}\\left(g_{t}^{(j)}-\\nabla F(\\mathbf{x}_{t})\\right)$; then $\\mathbf{A}$ and $\\mathbf{B}$ are independent. We have $\\mathbb{E}[\\mathbf{A}+\\mathbf{B}]=\\mathbf{0}$.\nRecall that $\\sigma^2$ is the bound on the local gradient variance and $\\kappa^2$ is the bound on the deviation between the local and global gradients.
Applying Jensen's inequality, we have\n\\begin{equation*}\n\\begin{aligned}\n\\left\\|\\mathbb{E}\\left[\\mathbf{A}\\right]\\right\\|^2 &\\leq \\beta n\\sum\\limits_{i\\notin \\mathcal{G}}\\left\\|\\nabla F_i(\\mathbf{x}_{t})-\\nabla F(\\mathbf{x}_{t})\\right\\|^2 \\leq \\beta^2n^2\\kappa^2\\\\\n\\left\\|\\mathbb{E}\\left[\\mathbf{B}\\right]\\right\\|^2 &\\leq (1-\\beta)n\\sum\\limits_{i\\in \\mathcal{G}}\\left\\|\\nabla F_i(\\mathbf{x}_{t})-\\nabla F(\\mathbf{x}_{t})\\right\\|^2 \\leq (1-\\beta)^2n^2\\kappa^2\\\\\n\\end{aligned}\n\\end{equation*}\nNotice that $\\mathbb{E}[\\mathbf{A}]=-\\mathbb{E}[\\mathbf{B}]$, thus\n\\begin{equation*}\n\\left\\|\\mathbb{E}\\left[\\mathbf{A}\\right]\\right\\|^2=\\left\\|\\mathbb{E}\\left[\\mathbf{B}\\right]\\right\\|^2\\leq \\min\\{\\beta^2 n^2\\kappa^2, (1-\\beta)^2n^2\\kappa^2\\} = \\beta^2 n^2\\kappa^2\n\\end{equation*}\nUsing the basic relation between expectation and variance, we have\n\\begin{equation*}\n\\begin{aligned}\n&\\mathbb{E}\\left\\|\\mathbf{A}\\right\\|^2=\\left\\|\\mathbb{E}[\\mathbf{A}]\\right\\|^2+\\text{var}[\\mathbf{A}]\\leq\\left\\|\\mathbb{E}[\\mathbf{A}]\\right\\|^2+\\beta n\\sigma^2\\\\\n&\\mathbb{E}\\left\\|\\mathbf{B}\\right\\|^2=\\left\\|\\mathbb{E}[\\mathbf{B}]\\right\\|^2+\\text{var}[\\mathbf{B}]\\leq\\left\\|\\mathbb{E}[\\mathbf{B}]\\right\\|^2+(1-\\beta) n\\sigma^2\n\\end{aligned}\n\\end{equation*}\nwhich leads to\n\\begin{equation*}\n\\begin{aligned}\n\\mathbb{E}\\left\\|\\mathbf{B}\\right\\|^2 \\le \\beta^2 n^2\\kappa^2 +(1-\\beta)n\\sigma^2\n\\end{aligned}\n\\end{equation*}\nThen, we directly have\n\\begin{equation*}\n\\begin{aligned}\n&\\mathbb{E}\\left[\\left\\|\\frac{1}{|\\mathcal{G}|}\\sum\\limits_{i\\in \\mathcal{G}}\\left(g_{t}^{(i)}\\right)-\\nabla F(\\mathbf{x}_{t})\\right\\|^2\\right]=\\frac{1}{(1-\\beta)^2n^2}\\mathbb{E}\\left\\|\\mathbf{B}\\right\\|^2\\\\\n&\\leq { \\frac{\\beta^2\\kappa^2}{(1-\\beta)^2}}+\\frac{\\sigma^2}{(1-\\beta)n}\n\\end{aligned}\n\\end{equation*}\nThis completes the proof of Lemma 2.\n\n\n\\subsection{Proof of Theorem 1}\n\\label{append:theorem1}\n\nTaking the total expectation over the local sampling and the randomness in the aggregation rule, we have\n\\begin{equation*}\n\\begin{aligned}\n&\\mathbb{E}_t[F(\\mathbf{x}_{t+1})] - F(\\mathbf{x}_t) \\\\\n&\\leq -\\eta \\left\\langle \\nabla F(\\mathbf{x}_t),\\mathbb{E}_t\\left[\\hat{g}_t\\right]\\right\\rangle+\\frac{L\\eta^2}{2}\\mathbb{E}_t\\left[\\left\\|\\hat{g}_t\\right\\|^2\\right]\\\\\n&= -\\eta \\left\\langle \\nabla F(\\mathbf{x}_t),\\mathbb{E}_t\\left[\\hat{g}_t-\\tilde{g}_t+\\tilde{g}_t-\\nabla F(\\mathbf{x}_t)+\\nabla F(\\mathbf{x}_t)\\right]\\right\\rangle\\\\\n&\\qquad+\\frac{L\\eta^2}{2}\\mathbb{E}_t\\left[\\left\\|\\hat{g}_t-\\nabla F(\\mathbf{x}_t)+\\nabla F(\\mathbf{x}_t)\\right\\|^2\\right]\\\\\n&\\leq -\\eta \\left\\langle \\nabla F(\\mathbf{x}_t),\\mathbb{E}_t\\left[\\hat{g}_t-\\tilde{g}_t\\right]\\right\\rangle -\\eta \\left\\langle \\nabla F(\\mathbf{x}_t),\\mathbb{E}_t\\left[\\tilde{g}_t-\\nabla F(\\mathbf{x}_t)\\right]\\right\\rangle\\\\\n&\\qquad-\\eta\\left\\|\\nabla F(\\mathbf{x}_t)\\right\\|^2 +L\\eta^2\\left\\|\\nabla F(\\mathbf{x}_t)\\right\\|^2+L\\eta^2\\mathbb{E}_t\\left[\\left\\|\\hat{g}_t-\\nabla F(\\mathbf{x}_t)\\right\\|^2\\right]\\\\\n\\end{aligned}\n\\end{equation*}\nFrom Assumptions 1 \\& 2, we have\n\\begin{equation*}\n\\begin{aligned}\n\\left[\\mathbb{E}\\left\\|\\hat{g}_t-\\bar{g}_t\\right\\|\\right]^2\n\\leq c{\\delta}\\sup_{i,j\\in 
\\mathcal{G}}\\mathbb{E}[\\|{g_t^{(i)}}-{g_t^{(j)}}\\|^2]\\leq 2c{\\delta}(\\sigma^2+\\kappa^2)\n\\end{aligned}\n\\end{equation*}\nthen by Young's inequality with $\\rho=2$ we can get\n\\begin{equation*}\n\\begin{aligned}\n&-\\eta \\left\\langle \\nabla F(\\mathbf{x}_t),\\mathbb{E}_t\\left[\\hat{g}_t-\\tilde{g}_t\\right]\\right\\rangle\\\\\n&\\leq \\eta \\left\\|\\nabla F(\\mathbf{x}_t)\\right\\| \\cdot \\mathbb{E}_t\\left\\|\\hat{g}_t-\\tilde{g}_t\\right\\|\\\\\n& \\leq\\frac{\\sqrt{\\delta}\\eta}{2\\rho}\\left\\|\\nabla F(\\mathbf{x}_t)\\right\\|^2 + \\frac{\\rho}{2}\\cdot 2\\sqrt{\\delta}\\eta c(\\sigma^2+\\kappa^2)\\\\\n&\\leq\\frac{\\sqrt{\\delta}\\eta}{4}\\left\\|\\nabla F(\\mathbf{x}_t)\\right\\|^2 + 2\\sqrt{\\delta}\\eta c(\\sigma^2+\\kappa^2)\n\\end{aligned}\n\\end{equation*}\nCombining with Lemma 2, we get\n\\begin{equation*}\n\\begin{aligned}\n&-\\eta \\left\\langle \\nabla F(\\mathbf{x}_t),\\mathbb{E}_t\\left[\\tilde{g}_t-\\nabla F(\\mathbf{x}_t)\\right]\\right\\rangle\\\\\n&\\leq \\eta \\left\\|\\nabla F(\\mathbf{x}_t)\\right\\| \\cdot \\mathbb{E}_t\\left\\|\\tilde{g}_t-\\nabla F(\\mathbf{x}_t)\\right\\|\\\\\n&\\leq\\frac{\\beta\\eta}{2}\\left\\|\\nabla F(\\mathbf{x}_t)\\right\\|^2 + \\frac{\\beta\\eta\\kappa^2}{2(1-\\beta)^2}\n\\end{aligned}\n\\end{equation*}\nand\n\\begin{equation*}\n\\begin{aligned}\n&\\mathbb{E}_t\\left[\\left\\|\\hat{g}_t-\\nabla F(\\mathbf{x}_t)\\right\\|^2\\right] \\\\\n&= \\mathbb{E}_t\\left[\\left\\|\\hat{g}_t-\\bar{g}_t + \\bar{g}_t - \\nabla F(\\mathbf{x}_t)\\right\\|^2\\right]\\\\\n&\\leq 2\\mathbb{E}_t\\left[\\left\\|\\hat{g}_t-\\bar{g}_t\\right\\|^2\\right]+2\\mathbb{E}_t\\left[\\left\\|\\bar{g}_t - \\nabla F(\\mathbf{x}_t)\\right\\|^2\\right]\\\\\n&=2\\left[\\mathbb{E}\\left\\|\\hat{g}_t-\\bar{g}_t\\right\\|\\right]^2+2\\text{var}\\left\\|\\hat{g}_t\\right\\|+2\\mathbb{E}_t\\left[\\left\\|\\bar{g}_t - \\nabla F(\\mathbf{x}_t)\\right\\|^2\\right]\\\\\n&\\leq \\quad \\begin{matrix} \\underbrace{ 4c\\delta(\\sigma^2+\\kappa^2)+2b^2+\\frac{2\\beta^2\\kappa^2}{(1-\\beta)^2}+\\frac{2\\sigma^2}{(1-\\beta)n} } \\\\ =\\Delta_1 \\end{matrix}\n\\end{aligned}\n\\end{equation*}\nIn the above derivations, the basic inequality $2\\langle\\mathbf{a},\\mathbf{b}\\rangle\\leq \\|\\mathbf{a}\\|^2+\\|\\mathbf{b}\\|^2$ is applied. Taking total expectation and rearranging the terms, we get\n\\begin{equation*}\n\\begin{aligned}\n&\\eta\\left(\\frac{4-\\sqrt{\\delta}-2\\beta}{4}-L\\eta \\right)\\mathbb{E}[\\left\\|\\nabla F(\\mathbf{x}_t)\\right\\|^2]\\leq \\mathbb{E}[F(\\mathbf{x}_{t})-F(\\mathbf{x}_{t+1})]\\\\\n&\\qquad\\qquad+2\\sqrt{\\delta}\\eta c(\\sigma^2+\\kappa^2)+\\frac{\\beta\\eta\\kappa^2}{2(1-\\beta)^2}+L\\eta^2\\Delta_1\\\\\n\\end{aligned}\n\\end{equation*}\nAssume that $\\eta \\le (2-\\sqrt{\\delta}-2\\beta)\/(4L)$, so that $\\left(\\frac{4-\\sqrt{\\delta}-2\\beta}{4}-L\\eta \\right)\\geq\\dfrac{1}{2}$.
Summing over $t$ and dividing by $\\eta\\left(\\frac{4-\\sqrt{\\delta}-2\\beta}{4}-L\\eta \\right)T$, we finally get\n\\begin{equation*}\n\\begin{aligned}\n&\\dfrac{1}{T}\\sum_{t=0}^{T-1}\\mathbb{E}[\\left\\|\\nabla F(\\mathbf{x}_t)\\right\\|^2]\\leq \\frac{2(F(\\mathbf{x}_0)-F^*)}{\\eta T}+2L\\eta\\Delta_1\\\\\n&\\qquad\\qquad\\qquad\\qquad+\\quad\n\\begin{matrix} \\underbrace{ 4\\sqrt{\\delta} c(\\sigma^2+\\kappa^2)+\\frac{\\beta\\kappa^2}{(1-\\beta)^2} } \\\\ =\\Delta_2 \\end{matrix}\n\\end{aligned}\n\\end{equation*}\nwhich completes the proof.\n\\section{Background and Related Work}\n\\label{secRelated}\n\n\\subsection{Safety \\& Security in Federated Learning}\n\nModel safety and data security are essential principles of federated learning, owing to concerns about privacy risks and adversarial threats \\cite{yang2019federated, kairouz2019advances, lyu20survey, ma20safeFL}, especially in the\nage of emerging privacy regulations such as the General Data\nProtection Regulation (GDPR) \\cite{sharma2019data}. In the context of FL, the gradient information, instead of the raw data, is shared to jointly train a model. More advanced technologies such as secure multiparty computation or differential privacy are also employed to enhance the privacy guarantees \\cite{abadi2016deepdp,Bonawitz17secureML,Gao19FTL}. Meanwhile, learning systems are vulnerable to various kinds of failures, including non-malicious faults and malicious attacks. Data poisoning attacks and model update poisoning attacks (aka untargeted attacks) aim to degrade or even fully break the global model during the training phase, while backdoor attacks (aka targeted attacks) make the model misclassify certain samples during the inference phase \\cite{kairouz2019advances}.\nIn particular, Byzantine threats can be viewed as worst-case attacks, in which corrupted clients can produce arbitrary outputs and are allowed to collude. In many studies, the Byzantine attacker is assumed to be omniscient and to have the capability to access white-box model parameters and all honest gradients to conduct strong attacks \\cite{Fang20Local}. As pointed out in many works, appropriately crafted attacks can have a significant impact on the model performance while circumventing most current defenses \\cite{BaruchBG19LIE}. However, security mechanisms that protect privacy, such as secure aggregation, where the server cannot directly see any individual client update but only an aggregate result \\cite{Bonawitz17secureML,kairouz2019advances}, inevitably make it more challenging to successfully detect those failures and attacks. Thus, the trade-off between privacy assurance and system robustness needs further investigation.\n\n\\subsection{Existing Defense Strategies}\n\n\\noindent \\textbf{Statistic-based.} This is also known as the majority-vote-based strategy, requiring the percentage of Byzantine clients to be less than 50\\%. These methods use the $\\ell_p$-norm distance or cosine-similarity to measure the confidence of received gradients. {Krum} as well as the extended {Multi-Krum} are pioneering works towards Byzantine-robust learning \\cite{Blanchard17Byz}. In \\cite{Yin18optimalrate}, the convergence rates and error rates of {trimmed-mean (TrMean)} and {coordinate-wise median (Median)} are rigorously studied. Moreover, \\cite{Mhamdi18Bulyan} has shown that the Krum and median defenses are vulnerable to $\\ell_p$-attacks and developed a meta-method called {Bulyan} on top of other robust aggregation methods.
In particular, some works aggregate only the sign of the gradient to mitigate the Byzantine effect \\cite{bernstein18sign, li2019rsa}. Recently, a method called {Divide and Conquer} was proposed to tackle strong attacks \\cite{shejwalkar2021manipulating}.\n\n\\noindent \\textbf{Validation-based.} The most straightforward approach to evaluate whether a particular gradient is honest or not is to utilize auxiliary data on the PS to validate the performance of the updated model. {Zeno} \\cite{Xie19Zeno} uses a stochastic descendant score to evaluate the correctness of each gradient and chooses those with the highest scores. Fang et al. \\cite{Fang20Local} use error-rate-based and loss-function-based rejection mechanisms to reject gradients that have a bad impact on model updating. In \\cite{cao20FLTrust}, the authors utilize the ReLU-clipped cosine-similarity between each received gradient and a standard gradient as a weight to get a robust aggregation. The main concern with such approaches is the accessibility of auxiliary data.\n\n\\noindent \\textbf{History-aided.} If the one-to-one correspondence between gradient and client entity is known to the PS, then it is possible to utilize historical data to trace the clients' behaviors. Some studies show that malicious behavior can be revealed from the gradient trace by designing advanced filter techniques \\cite{Alistarh18Byz, zhu20safeguard}. In \\cite{Mu19AFA}, the authors propose a Hidden Markov Model to learn the quality of model updates and discard the bad or malicious updates. Besides, momentum SGD can also be considered a history-aided method and can help to alleviate the impact of Byzantine attacks \\cite{Karimireddy20history,Mahdi21momentum}.\n\n\\noindent \\textbf{Redundancy-based.} In the context of traditional distributed training, it is possible to assign each node redundant data and use this redundancy to eliminate the effect of Byzantine failures. In \\cite{Chen18Draco}, the authors present a scalable framework called {DRACO} for robust distributed training using ideas from coding theory. In \\cite{DataSD21dataencoding}, a method based on data encoding and error correction techniques over real numbers is proposed to combat adversarial attacks. In \\cite{Rajput19Detox}, a framework called {DETOX} is proposed by combining computational redundancy and hierarchical robust aggregation to filter out Byzantine gradients.\n\n\\noindent \\textbf{Learning-based.} In \\cite{li20byzautoencoder}, the authors use a VAE as a spectral anomaly detection model to learn the representation of honest gradients and use the reconstruction error in each round as the detection threshold.\nIn \\cite{Pan20Justinian}, a method called Justinian's GAAvernor is proposed to learn a robust gradient aggregation policy against Byzantine attacks via reinforcement learning. In \\cite{regatti20bygars}, the authors use auxiliary data on the PS to learn a coefficient for each received gradient in a weighted-average aggregation.\n\n\\noindent \\textbf{Ensemble-learning.} Another line of work leverages the ensemble learning approach to provably guarantee that the predicted label for a testing example is not affected by Byzantine clients, in which multiple global models are trained and each of them is learned using a randomly selected subset of clients \\cite{cao2021provably,qiao21provebackdoor}.
However, such ensemble-learning methods significantly increase the computational overhead and storage cost.\n\n\n\n\\section{Rethinking Recent Attacks}\n\\label{secAttackAnalysis}\n\nIn this section, we first give the threat model and then present our theoretical analysis along with empirical evidence of the \\emph{Little is Enough (LIE)} attack \\cite{BaruchBG19LIE} to demonstrate the limitations of existing median- and distance-based defenses.\n\n\\noindent \\textbf{Threat Model.} Similar to the threat models in previous works \\cite{Blanchard17Byz,BaruchBG19LIE,Fang20Local,shejwalkar2021manipulating}, we assume that there exists an attacker that controls some malicious clients to perform model poisoning attacks. The malicious clients could be fake clients injected by the attacker or genuine ones corrupted by the attacker. Specifically, we assume the attacker has full knowledge of all benign gradients and model parameters, and that the corrupted clients can collude to conduct strong attacks. However, the attacker cannot corrupt the server, and the proportion of malicious clients $\\beta$ is less than half. For a system with $n$ clients, without loss of generality, we assume that the first $m$ clients are corrupted and $\\beta=\\frac{m}{n}<0.5$.\n\n\\vspace{1ex}\n\\noindent \\textbf{{LIE} Attack.} Byzantine clients first estimate the coordinate-wise mean ($\\mu_j$) and standard deviation ($\\sigma_j$), and then send a malicious gradient vector with elements crafted as follows:\n\\begin{equation}\\label{eq:lie}\n(g_m)_j = \\mu_j - z\\cdot\\sigma_j, ~ j \\in [d]\n\\end{equation}\nwhere the positive attack factor $z$ depends on the total number of clients and the Byzantine fraction. The design principle behind this attack is to circumvent the coordinate-wise median and trimmed-mean methods. As advised in the original paper, $z$ can be determined using the cumulative standard normal function $\\phi(z)$:\n\n\\begin{equation}\nz_{max} = \\max_z \\left(\\phi(z)<\\frac{n-\\left\\lfloor\\frac{n}{2}+1\\right\\rfloor}{n-m}\\right)\n\\end{equation}\n\nIn the following, we will show why this attack is harmful and hard to detect. From an optimization point of view, we can check the upper bound for the non-convex distributed optimization problem before and after the \\textit{LIE} attack, where we assume for simplicity that the distributed data are IID. Lemma~\\ref{lemma:SGD-1} gives the general upper bound when no attack and no defense are performed \\cite{bottou2018optim,yu2019on}, from which we can see that the objective function converges to a critical point given a large number of iterations $T$ and a small learning rate $\\eta$. Applying a similar analysis, we can get a new upper bound when the \\textit{LIE} attack and the coordinate-wise median defense are conducted, as presented in Proposition~\\ref{proposition:SGD-LIE}, where we assume the training can converge.\n\\begin{lemma}\n\tFor a distributed non-convex optimization problem $F(\\mathbf{x})$ with $n$ benign workers, suppose the data are IID and the gradient variance is bounded by $\\sigma^2$.
Employ SGD with a fixed learning rate $\\eta \\le 1\/L$ and assume $F^*=\\min_{\\mathbf{x}}F(\\mathbf{x})$; then we have the following convergence result\\footnote{In this paper, $\\|\\cdot\\|$ denotes the $\\ell_2$ norm.}:\n\t\\begin{equation}\n\t\t\\dfrac{1}{T}\\sum_{t=0}^{T-1}\\mathbb{E}[\\left\\|\\nabla F(\\mathbf{x}_t)\\right\\|^2] \\leq \\frac{2(F(\\mathbf{x}_0)-F^*)}{\\eta T}+\\frac{L\\eta \\sigma^2}{n}\n\t\\end{equation}\n\t\\label{lemma:SGD-1}\n\\end{lemma}\n\\vspace{-2ex}\n\\begin{proposition}\n\tFor a distributed non-convex optimization problem $F(\\mathbf{x})$ with $(n-m)$ benign workers and $m$ malicious workers conducting the \\textit{LIE} attack with appropriate $z$, suppose the data are IID and the gradient variance is bounded by $\\sigma^2$. Employ Median-SGD with a fixed learning rate $\\eta \\le 1\/L$, and assume $F^*=\\min_{\\mathbf{x}}F(\\mathbf{x})$; then we have the following upper bound $B$ on the averaged squared gradient norm:\n\t\\begin{align}\n\t\tB \\leq \\frac{2(F(\\mathbf{x}_0)-F^*)}{\\eta T} + \\frac{L\\eta \\sigma^2}{n} +{\\left(1+\\frac{1}{n}\\right)z^2\\sigma^2}\n\t\\end{align}\n\t\\label{proposition:SGD-LIE}\n\\end{proposition}\n\\vspace{-2ex}\n\\begin{proof}\n\tThe detailed proof is in Appendix \\ref{appendix:prop1}.\n\\end{proof}\n\nCompared with the result in Lemma~\\ref{lemma:SGD-1}, there is an extra constant term in Proposition~\\ref{proposition:SGD-LIE}, which does not diminish even with a decreasing learning rate, enlarging the convergence error and potentially making model training collapse entirely. When no defense is employed, one can simply replace $z$ with ${\\beta z}$, because the $m\\cdot z\\sigma$ perturbation is averaged across all $n$ workers, resulting in a smaller upper bound than for the median-based defense. This also explains the phenomenon that naive Mean aggregation can even yield better results than median-based and distance-based defenses in some cases, as shown in \\cite{BaruchBG19LIE} and in the experimental results of this paper. We now turn to the coordinate point of view to further analyze why this type of crafted gradient is harmful to model training.\nRecall that signSGD can achieve good model accuracy by utilizing only the sign of the gradient, which illustrates the fact that the sign of the gradient plays a crucial role in model updating. Therefore, it is worth checking the gradient signs for this type of attack. The crafting rule of the \\textit{LIE} attack is shown in Eq. (\\ref{eq:lie}), from which we can see that $(g_m)_j$ could have the opposite sign to $\\mu_j$ when $\\mu_j>0$. For the coordinate-wise median with $\\mu_j>0$, assuming this aggregation rule results in $\\tilde{g} = g_m$, we have:\n\\begin{equation}\n{\\rm if}~~ z > \\frac{\\mu_j}{\\sigma_j}, ~~{\\rm then} ~~{\\rm sign}(\\tilde{g}_j) \\ne {\\rm sign}(\\mu_j)\n\\end{equation} For the mean aggregation rule with $\\mu_j>0$, if $\\mu_j$ and $\\sigma_j$ are estimated on the benign clients, then the $j$-th element becomes:\n\\begin{equation}\n\\tilde{g}_j = \\frac{1}{n}[m\\cdot (g_m)_j + (n-m)\\mu_j] = \\mu_j - z\\cdot \\beta \\cdot\\sigma_j\n\\end{equation}\nand in this case a bigger $z$ is needed to reverse the sign:\n\\begin{equation}\n{\\rm if}~~ z > \\frac{n\\mu_j}{m\\sigma_j}, ~~{\\rm then} ~~{\\rm sign}(\\tilde{g}_j) \\ne {\\rm sign}(\\mu_j)\n\\end{equation}\n\nEmpirical results in \\cite{BaruchBG19LIE} show that the coordinate-wise standard deviation mostly turns out to be bigger than the corresponding gradient element; thus a small $z$ can turn a large number of positive elements negative, leading to incorrect model updating.
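\n\nA minimal sketch of this crafting rule (Python; \\verb|norm.ppf| from SciPy plays the role of the inverse of $\\phi$, and the function name is ours):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import norm\n\ndef lie_gradient(benign_grads, n, m):\n    # craft mu_j - z*sigma_j with z = z_max, i.e. the largest z\n    # for which the crafted coordinates still look typical\n    s = n - (n \/\/ 2 + 1)        # = n - floor(n\/2 + 1)\n    z = norm.ppf(s \/ (n - m))   # z_max with phi(z) = s\/(n-m)\n    G = np.stack(benign_grads)  # (n-m) x d benign gradients\n    return G.mean(axis=0) - z * G.std(axis=0)\n\\end{verbatim}\nChoosing $z$ in this way keeps every crafted coordinate within the bulk of the benign empirical distribution, which is precisely what defeats the median- and distance-based screening discussed above.\n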
To verify this theoretical result, we adopt the default training setting in Section~\\ref{secExperimentSetup} to train a CNN on the MNIST dataset and a ResNet-18 on the CIFAR-10 dataset under no attack, and calculate the averaged sign statistics across all workers as well as the sign statistics of a virtual gradient crafted as in Eq. (\\ref{eq:lie}). We plot the sign statistics over the iterations in Fig.~\\ref{fig:sign_byz}, which convincingly supports our theoretical analysis. \n\n\\begin{figure}[ht]\n\t\\centering\n\t\\subfigure[Honest Gradient of CNN]{\n\t\t\\includegraphics[width=0.46\\columnwidth]{.\/figs\/mnist_honest.pdf}\n\t}\n\n\t\\subfigure[Malicious Gradient of CNN]{\n\t\t\\includegraphics[width=0.46\\columnwidth]{.\/figs\/mnist_lie.pdf}\n\t}\n\n\t\\subfigure[Honest Gradient of ResNet18]{\n\t\t\\includegraphics[width=0.46\\columnwidth]{.\/figs\/cifar_honest.pdf}\n\t}\n\n\t\\subfigure[Malicious Gradient of ResNet18]{\n\t\t\\includegraphics[width=0.46\\columnwidth]{.\/figs\/cifar_lie.pdf}\n\t}\n\t\\caption{Sign statistics of honest and malicious gradients.}\n\t\\label{fig:sign_byz}\n\\end{figure}\n\nNext, we present Proposition~\\ref{proposition:Safe-LIE} to explain why the \\textit{LIE} attack is hard to detect, in which we compare the distance to the averaged true gradient $\\tilde{g}=\\frac{1}{n}\\sum_{i=1}^{n}g^{(i)}$ and the similarity with $\\tilde{g}$ for the malicious gradient and the honest gradients, respectively.\n\n\\begin{proposition}\n\tFor a distributed non-convex optimization problem $F(\\mathbf{x})$ with $(n-m)$ benign workers and $m$ malicious workers conducting the \\textit{LIE} attack, suppose the data are IID and the gradient variance is bounded by $\\sigma^2$. Given a small enough $z$, the distance between the malicious gradient and the true averaged gradient can be smaller than that of certain honest gradients:\n\t\\begin{equation}\n\t\\exists ~i, ~s.t. ~~ \\mathbb{E}[\\left\\|g_m-\\tilde{g}\\right\\|^2] < \\mathbb{E}[\\|g^{(i)}-\\tilde{g}\\|^2]\n\t\\end{equation}\n\tand the cosine-similarity between the malicious gradient and the true averaged gradient can be bigger than that of certain honest gradients:\n\t\\begin{equation}\n\t\\exists ~i, ~s.t. ~~\\cos(g_m,\\tilde{g}) > \\cos(g^{(i)},\\tilde{g})\n\t\\end{equation}\n\t\\label{proposition:Safe-LIE}\n\\end{proposition}\n\\begin{proof}\n\tThe detailed proof is in Appendix \\ref{appendix:prop2}.\n\\end{proof}\n\nFrom the above results we can see that it is possible for the malicious gradient to appear more ``safe'' when evaluated by the Krum and Bulyan methods. Hence, it is almost impossible to detect the malicious gradient from the distance and cosine-similarity perspectives. Instead, checking the sign statistics offers a novel and promising perspective for detecting abnormal gradients. A similar analysis is also valid for the recently proposed Min-Max\/Min-Sum attacks as well as the adaptive attack that uses different perturbation vectors \\cite{shejwalkar2021manipulating}, alongside which a new method called \\textit{Divide and Conquer (DnC)} was also proposed to tackle those attacks. However, this method assumes that the malicious gradients lie in the direction of the largest singular vector of the gradient matrix, and it would fail when multiple attacks exist simultaneously or in non-IID settings. \n\n\\vspace{1ex}\n\\noindent \\textbf{New Hybrid Attack.} In this work, we extend the OFOM attack in \\cite{chang19cronus} and propose a type of hybrid attack called the \\textbf{ByzMean} attack, which makes the mean of the gradients equal an arbitrary targeted malicious gradient.
More specifically, the malicious clients are divided into two sets: one set with $m_1$ clients chooses an arbitrary gradient value $g_{m_1}=*$, and the other set with $m_2=m-m_1$ clients chooses the gradient value $g_{m_2}$ such that the average of all gradients is exactly $g_{m_1}$, as follows:\n\\begin{equation}\ng_{m_1} = *, ~ g_{m_2}=\\frac{(n-m_1)g_{m_1}-\\sum_{i=m+1}^{n}g^{(i)}}{m_2}\n\\label{eq:byzMean}\n\\end{equation}\nAll existing attacks can be integrated into this ByzMean attack, making this hybrid attack even stronger than any single attack. For example, we can set $g_{m_1}$ to a random gradient or even to the gradient crafted by the \\textit{LIE} attack. In that case, all existing defense methods, including DnC, will be broken.\n\n\n\n\n\n\n\n\n\n\n\n\\section{Our SignGuard Framework}\n\\label{secFramework} \n\nIn this section, we present the formal problem formulation and introduce our SignGuard framework for Byzantine-robust federated learning. Some theoretical analysis of the training convergence is also provided.\n\n\\subsection{System Overview and Problem Setup}\nOur federated learning system consists of a parameter server and a number of benign clients, along with a small portion of Byzantine clients. We assume there exists an attacker (adversary) that aims at poisoning the global model and controls the Byzantine clients to perform malicious attacks. We first give the following definitions of benign and Byzantine clients, along with the attacker's capability and the defense goal.\n\n\\begin{definition}\n\t\\textbf{(Benign Client)} A benign client always sends an honest gradient to the server, which is an unbiased estimate of the local true gradient at each iteration.\n\\end{definition}\n\n\\begin{definition}\n\t\\textbf{(Byzantine Client)} A Byzantine client may act maliciously and can send arbitrary messages to the server.\n\\end{definition}\n\n\\noindent \\textbf{Attacker's Capability:}\nAs mentioned in the threat model in Section~\\ref{secAttackAnalysis}, the attacker has full knowledge of all benign gradients, and the corrupted clients can collude to conduct various kinds of attacks. However, the attacker cannot compromise the server, and the proportion of Byzantine clients is less than 50\\%.\n\\vspace{-1ex}\n\n\\noindent \\textbf{Defender's Capability:} As in previous studies \\cite{Fang20Local,cao20FLTrust}, we consider the\ndefense to be performed on the server side. The parameter server does not have access to the raw training data on the clients, and the server does not know the exact number of malicious clients. However, the server has full access to the global model as well as the local model updates (i.e., local gradients) from all clients in each iteration. In particular, we further assume the received gradients are anonymous, which means the behavior of each client is untraceable. In consideration of privacy and security, we believe this assumption is reasonable in the context of federated learning.\n\n\\noindent \\textbf{Defense Goal:}\nAs mentioned in \\cite{cao20FLTrust}, an ideal defense method should give consideration to the following three aspects: Fidelity, Robustness, and Efficiency. We hope the defense method achieves Byzantine-robustness against various malicious attacks without sacrificing model accuracy. Moreover, the defense should be computationally cheap so that it does not affect the overall training efficiency.\n\n\\noindent \\textbf{Problem Formulation:}\nWe focus on federated learning in IID settings and then extend our algorithm to non-IID settings.
We assume that the training data are distributed over a number of clients in a network, and all clients jointly train a shared model based on disjoint local data. Mathematically, the underlying distributed optimization problem can be formalized as follows:\n\\begin{equation}\\label{eq:objective}\n\\min_{\\mathbf{x}\\in R^d}{F(\\mathbf{x})}=\\frac{1}{n}\\sum_{i=1}^{n}\\mathbb{E}_{\\xi_i\\sim D_i}\\left[F(\\mathbf{x};\\xi_i)\\right]\n\\end{equation}\nwhere $n$ is the total number of clients, $D_i$ denotes the local dataset of the \\textit{i}-th client and could have a different distribution from the other clients, and $F(\\mathbf{x};\\xi_i)$ denotes the local loss function given the shared model parameters $\\mathbf{x}$ and training data $\\xi_i$ sampled from $D_i$. We let all clients initialize to the same point $\\mathbf{x}_0$; then FedAvg \\cite{mcmahan17} can be employed to solve the problem. At each iteration, the \\textit{i}-th benign client draws $\\xi_i$ from $D_i$ and computes a local stochastic gradient with respect to the shared global parameters $\\mathbf{x}$, while Byzantine clients can send arbitrary gradient messages:\n\\begin{equation}\n\\begin{aligned}\ng_{t}^{(i)} = \\begin{cases} \n\\nabla F(\\mathbf{x}_{t};\\xi_i) ,&\\text{if the \\textit{i}-th client is benign}\n\\\\\n arbitrary , &\\text{if the \\textit{i}-th client is Byzantine}\n\\end{cases}\n\\end{aligned}\n\\end{equation} \nThe parameter server collects all the local gradients and employs a robust gradient aggregation rule to get a global model update:\n\\begin{equation}\n\\mathbf{x_{t+1}} = \\mathbf{x_{t}} -\\eta_{t}\\cdot \\textsl{GAR}(\\{g_{t}^{(i)}\\}_{i=1}^{n})\n\\end{equation}\nIn a synchronous, full-participation setting, the result is broadcast to all clients to update their local models and start a new iteration. In a partial-participation setting, the model update is completed on the PS and the updated model is sent to the selected clients for the next round. This process repeats until the stopping condition is satisfied.\n\n\nTo characterize the impact of a Byzantine attack, we define the following two metrics:\n\\begin{definition}\n\t\\textbf{(Attack Success Rate)} The averaged proportion of malicious gradients that are selected by the detection-based GAR throughout the training iterations.\n\\end{definition}\n\\begin{definition}\n\t\\textbf{(Attack Impact)} The model accuracy drop compared with the benchmark result obtained under no attack and no defense.\n\\end{definition}\n\nBased on the above metrics, we can measure the effect of a Byzantine attack by calculating the accuracy drop due to model poisoning, and measure the validity of a detection-based defense by calculating the attack success rate.\n\n\\begin{figure*}[htbp]\n\t\\centering\n\t{\n\t\n\t\t\\includegraphics[width=2.0\\columnwidth,clip=true]{.\/figs\/illustration_framework.pdf}\n\t}\t\n\t\\vspace{-2ex}\n\t\\caption{Illustration of the workflow of the proposed SignGuard. The collected gradients are anonymous and sent into multiple filters, after which the intersection of the filter outputs is selected as the trusted gradients.}\n\n\t\\label{fig:illustration}\n\\end{figure*}\n\n\\subsection{Our Proposed Solution}\n\nThe proposed SignGuard framework is described in Algorithms~\\ref{alg1}-\\ref{alg2} and its workflow is illustrated in Fig.~\\ref{fig:illustration}. At a high level, we pay attention to the magnitude and direction of the received gradients.
At each iteration, the collected gradients are sent into multiple filters, including a norm-based thresholding filter and a sign-based clustering filter. \\textbf{Firstly}, for the norm-based filter, the median of the gradient norms is utilized as the reference norm, since the median always lies in the benign set. Considering that gradients of small magnitude do less harm to the training while significantly large ones are definitely malicious, we apply a loose lower threshold and a strict upper threshold. \\textbf{Secondly}, for the sign-based clustering filter, we extract statistics of the gradients as features and use the Mean-Shift \\cite{meanshif} algorithm as an unsupervised clustering model with an adaptive number of cluster classes, and the cluster with the largest size is selected as the trusted set. In this work, the proportions of positive, zero, and negative signs are computed as basic features, which are sufficient for a number of attacks, including the LIE attack. \n\\begin{algorithm}[t] \n\t\\setstretch{1}\n\t\\caption{~SignGuard-based Robust Federated Learning} \n\t\\begin{algorithmic}[1] \n\t\t\\State \\textbf{Input:} learning rate $\\eta$, total iteration $T$, total client number $n$\n\t\t\\State \\textbf{Initial:} $\\mathbf{x}_0\\in R^d$\n\t\t\\For{$t=0, 1, ..., T-1$} \n\t\t\\State \\textbf{On each client \\textit{i} :}\n\t\t\\State Sample a mini-batch of data to compute gradient $\\displaystyle g_{t}^{(i)}$\n\t\n\t\t\\State Send $\\displaystyle g_{t}^{(i)}$ to the parameter server\n\t\t\\State Wait for global gradient $\\tilde{g}_{t}$ from server\n\t\t\\State Update local model: $\\displaystyle \\mathbf{x}_{t+1}=\\mathbf{x}_{t}-\\eta\\tilde{g}_{t}$ \n\t\t\\vspace{1ex}\n\t\t\\State \\textbf{On server:}\n\t\t\\State Collect gradients from all clients\n\t\n\t\t\\State Obtain global gradient: $\\displaystyle\\tilde{g}_{t}=SignGuard(\\{g_{t}^{(i)}\\}_{i=1}^{n})$\n\t\t\\State Send $\\tilde{g}_{t}$ to all clients\n\t\t\\EndFor \n\t\\end{algorithmic} \n\t\\label{alg1}\n\\end{algorithm}\n\n\\begin{algorithm}[t] \n\t\\setstretch{1}\n\t\\caption{~SignGuard Function} \n\t\\begin{algorithmic}[1] \n\t\t\\State \\textbf{Input:} Set of received gradients $S_t=\\{g_{t}^{(i)}\\}_{i=1}^{n}$, lower and upper bounds $L,R$ for the gradient norm\n\t\n\t\t\\State \\textbf{Initial:} $S_1 = S_2 = \\emptyset $\n\t\t\\State \\quad Get the $\\ell_2$-norm and element-wise sign of each gradient\n\t\t\n\t\t\\State \\textbf{Step 1:} Norm-threshold Filtering\n\t\t\\State \\quad Get the median of the norms $M = med(\\{\\|g_{t}^{(i)}\\|\\}_{i=1}^{n})$\n\t\t\\vspace{1ex}\n\t\t\\State \\quad Add the gradients that satisfy $L \\leq \\dfrac{\\|g_{t}^{(i)}\\|}{M} \\leq R $ into $S_1$\n\t\t\n\t\t\\State \\textbf{Step 2:} Sign-based Clustering\n\t\t\\State \\quad Randomly select a subset of gradient coordinates\n\t\t\\State \\quad Compute sign statistics on the selected coordinates for each gradient as features\n\t\t\\State \\quad Train a Mean-Shift clustering model\n\t\t\\State \\quad Choose the cluster with the most elements as $S_2$\n\t\t\\State \\textbf{Step 3:} Aggregation\n\t\t\\State \\quad Get the trusted set: $S'_t=S_1 \\cap S_2$ \n\t\t\\State \\quad Get $\\displaystyle\\tilde{g}_{t}=\\frac{1}{|S'_t|}\\sum_{i\\in S'_t}g_t^{(i)}$ \n\t\t\n\t\t\\State \\textbf{Output:} Global gradient: $\\displaystyle\\tilde{g}_{t}$\n\t\\end{algorithmic} \n\t\\label{alg2}\n\\end{algorithm}\n\n\nHowever, those features only consider the overall statistics and lose sight of local properties.
As a toy example, when the numbers of positive and negative elements are approximately equal (as in ResNet-18), the naive sign statistics may be insufficient to detect sign-flipped gradients \\cite{Rajput19Detox} or well-crafted attacks that have similar sign statistics. To mitigate this problem, we introduce randomized coordinate selection and add a similarity metric as an additional feature in our algorithm, such as the cosine-similarity or Euclidean distance between each received gradient and a ``correct'' gradient. However, without the help of auxiliary data on the PS, the ``correct'' gradient is not directly available. A practical way is to compute the pairwise similarities between all the other gradients and take the median as the similarity with the ``correct'' gradient. Or, more efficiently, one can simply utilize the aggregated gradient from the previous iteration as the ``correct'' gradient. Intuitively, this is promising for distinguishing irrelevant gradients and helps to improve the robustness of the anomaly detection. The challenge is that, as shown in Section~\\ref{secAttackAnalysis}, the Euclidean-distance and cosine-similarity metrics are not reliable against the state-of-the-art attacks, and can even affect the judgment of SignGuard, as we found in experiments. In this work, the plain ``SignGuard\" only uses sign statistics by default, and the enhanced variants that add a cosine-similarity feature or a Euclidean-distance feature are called ``SignGuard-Sim\" and ``SignGuard-Dist\", respectively. We will provide some comparative results for them. We emphasize that SignGuard is a flexible approach, and more advanced features could be extracted to enhance the effectiveness of the anomaly detection. How to design a more reliable similarity metric is left as an open problem for future work. \n\n\nAfter filtering, the server eventually selects the intersection of the filter outputs as the trusted gradient set, and obtains a global gradient by robust aggregation, e.g. trimmed-mean. In this work, we use mean aggregation with magnitude normalization. It is worth noting that a small fraction of honest gradients could also be filtered out due to gradient diversity, especially in non-IID settings, depending on the variance of the honest gradients and their closeness to the malicious gradients.\n\n\n\\subsection{Convergence Analysis}\n\nIn this part, we provide some theoretical analysis of the security guarantee of SignGuard and the convergence of the non-convex optimization problem, jointly considering IID and non-IID data. We first claim that high separability can be achieved when the distributions of the test statistics for malicious and honest gradients have negligible overlap.\n\n\\begin{claim}\n\tSuppose all honest gradients are computed with the global model parameters and the same batch size, and assume the test statistics of honest and malicious gradients follow two finite-covariance distributions $P$ and $Q$. For $0<\\beta<1\/2$, let $U=(1-\\beta)P+\\beta Q$ be a mixture of sample points from $P$ and $Q$, and denote by $f(\\mathbf{x})$ and $g(\\mathbf{x})$ the PDFs of $P$ and $Q$. Then, there exists an algorithm that separates the data points with a low probability of error if the total variation distance satisfies $TV(f,g)=1-o(1)$.\n\\end{claim}\n\\begin{remark}\n\tNote that the Byzantine clients face an inevitable trade-off between the attack impact and the risk of exposure in manipulating the gradient deviation.
Therefore, under our detection-based SignGuard framework, the malicious gradients either have limited attack impact or become easy to detect, depending on the discrepancy between $P$ and $Q$.\n\end{remark}\n\nTo conduct the convergence analysis, we also make the following basic assumption, which is commonly used in the literature \cite{yu2019on,bottou2018optim,Karimireddy19error_fix} on the convergence of distributed optimization.\n\begin{assumption}\n\tAssume that problem (\ref{eq:objective}) satisfies:\n\t\n\t\textbf{1. Smoothness}: The objective function $F(\cdot)$ is smooth with Lipschitz constant $L>0$, which means $\forall \mathbf{x}, \forall \mathbf{y},~\left\| \nabla F(\mathbf{x})-\nabla F(\mathbf{y})\right\| \leq L\left\| \mathbf{x}-\mathbf{y}\right\|$.\n\tThis implies that:\n\t\begin{equation}\n\tF(\mathbf{y})-F(\mathbf{x}) \leq \nabla F(\mathbf{x})^{T}(\mathbf{y}-\mathbf{x})+\frac{L}{2}\left\| \mathbf{x}-\mathbf{y}\right\|^2\n\t\end{equation}\n\t\n\t\textbf{2. Unbiased local gradient}: For each worker with local data, the stochastic gradient is locally unbiased:\n\t\begin{equation}\n\t\mathbb{E}_{\xi_i\sim D_i}\left[\nabla F(\mathbf{x};\xi_i)\right] = \nabla F_i(\mathbf{x})\n\t\end{equation}\n\t\n\t\textbf{3. Bounded variances}: The stochastic gradient of each worker has a uniformly bounded variance, satisfying:\n\t\begin{equation}\n\t\mathbb{E}_{\xi_i\sim D_i}[\left\|\nabla F(\mathbf{x};\xi_i)-\nabla F_i(\mathbf{x})\right\|^2] \leq \sigma^2\n\t\end{equation}\n\tand the deviation between the local and global gradients satisfies:\n\t\begin{equation}\n\t\left\|\nabla F_i(\mathbf{x})-\nabla F(\mathbf{x})\right\|^2 \leq \kappa^2\n\t\end{equation}\n\t\n\t\label{as:1}\n\end{assumption}\n\n\nIn the SignGuard framework, the trusted gradient set obtained by the filters may still contain some malicious gradients. In this case, any gradient aggregation rule necessarily incurs an error relative to the averaged honest gradient \cite{LaiRV16agnostic,Karimireddy20history}. Here we make another assumption on the capability of the aggregation rule:\n\begin{assumption}\n\tFor problem (\ref{eq:objective}) with $(1 - \beta)n$ benign clients (denoted by $\mathcal{G}$) and $\beta n$ Byzantine clients, suppose that at most $\delta n$ Byzantine clients can circumvent SignGuard at each iteration. We assume that the robust aggregation rule in SignGuard outputs $\hat{g}_t$ such that, for some constants $c$ and $b$,\n\t\begin{equation}\n\t\begin{aligned}\n\t&\textbf{1. Bounded Bias:}~~\left[\mathbb{E}\left\|\hat{g}_t-\bar{g}_t\right\|\right]^2\n\t\leq c{\delta}\sup_{i,j\in \mathcal{G}}\mathbb{E}[\|{g_t^{(i)}}-{g_t^{(j)}}\|^2]\\\n\t&\textbf{2. Bounded Variance:}~~ \text{var}\left\|\hat{g}_t\right\|\n\t\leq b^2\n\t\end{aligned}\t\n\t\end{equation}\n\twhere $\bar{g}_t=\frac{1}{|\mathcal{G}|}\sum_{i\in \mathcal{G}}g_t^{(i)}$ and $0\le \delta < \beta<0.5~$.\n\t\label{as:2}\n\end{assumption}\n\n\begin{remark}\nWhen $\delta=0$, it is possible to exactly recover the averaged honest gradient. For most aggregation rules, such as Krum, the output is deterministic and thus $b^2=0$.
For clustering-based rules, the output is randomized and could have negligible variance if the clustering algorithm is robust.\n\end{remark}\n\nWhen $\beta n$ Byzantine clients exist and act maliciously, the desired gradient aggregation result is the average of the $(1 - \beta)n$ honest gradients, which still deviates from the global gradient of the no-attack setting. We give the following lemma to characterize this deviation:\n\n\begin{lemma}\n\tSuppose the training data are non-IID under Assumption 1; then the deviation between the averaged gradient of the $(1-\beta)n$ clients, $\bar{g}$, and the true global gradient $\nabla F(\mathbf{x})$ can be characterized as follows:\n\t\begin{equation}\n\t\mathbb{E}\left[\left\|\bar{g}-\nabla F(\mathbf{x})\right\|^2\right]\n\t\leq \frac{\beta^2\kappa^2}{(1-\beta)^2}+\frac{\sigma^2}{(1-\beta)n}\n\t\end{equation}\n\end{lemma}\n\begin{proof}\n\tThe detailed proof is in Appendix \ref{appendix:lemm2}.\n\end{proof}\n\n\nGiven the above assumptions and lemma, and extending the analysis techniques in \cite{bottou2018optim,yu2019on, Karimireddy19error_fix, Karimireddy20history}, we can now characterize the convergence of SignGuard with the following theorem. \n\begin{theorem}\n\tFor problem (\ref{eq:objective}) under Assumption 1, suppose SignGuard, satisfying Assumption 2, is employed with a fixed learning rate $\eta \le (2-\sqrt{\delta}-2\beta)\/(4L)$, and let $F^*=\min_{\mathbf{x}}F(\mathbf{x})$; then we have the following convergence result:\n\t\begin{equation}\n\t\dfrac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}[\left\|\nabla F(\mathbf{x}_t)\right\|^2] \leq \frac{2(F(\mathbf{x}_0)-F^*)}{\eta T}+2L\eta\Delta_1 + \Delta_2\n\t\end{equation}\n\twhere the constant terms are $\Delta_1=4c\delta(\sigma^2+\kappa^2)+2b^2+\frac{2\beta^2\kappa^2}{(1-\beta)^2}+\frac{2\sigma^2}{(1-\beta)n}$ and $\Delta_2=4c\sqrt{\delta}(\sigma^2+\kappa^2)+\frac{\beta\kappa^2}{(1-\beta)^2}$.\n\t\label{theorem:signGuard}\n\end{theorem}\n\begin{proof}\n\tThe detailed proof is in Appendix \ref{append:theorem1}.\n\end{proof}\n\n\begin{remark}\n\tThe terms $\Delta_1$ and $\Delta_2$ arise from the existence of Byzantine clients and are influenced by the capability of the aggregation rule. When no Byzantine client exists ($\beta=0$ and thus $\delta=0$), we have $\Delta_2=0$ and convergence is guaranteed with a sufficiently small learning rate. If Byzantine clients exist ($\beta>0$), then even if the defender is capable of removing all malicious gradients ($\delta=0$), we still have $\Delta_2>0$ due to non-IID data, which may result in some model accuracy gaps relative to the benchmark results. \n\end{remark}\n\n\n\section{Experimental Setup}\label{secExperimentSetup}\nThe proposed SignGuard framework is evaluated on various datasets for image and text classification tasks. We mainly implement the learning tasks in an IID fashion, and investigate the performance of the different defenses in non-IID settings as well. The models trained under no attack and no defense are used as benchmarks. All evaluated attack and defense algorithms are implemented in PyTorch.\n\n\subsection{Datasets and Models}\n\n\noindent \textbf{MNIST.} MNIST is a 10-class digit image classification dataset, which consists of 60,000 training samples and 10,000 test samples; each sample is a grayscale image of size 28 \u00d7 28.
For MNIST, we construct a convolutional neural network (CNN) as the global model (see Appendix~\ref{appendix:cnn}).\n\n\noindent \textbf{Fashion-MNIST.} Fashion-MNIST \cite{xiao17fmnist} is a clothing image classification dataset with exactly the same image size and structure of training and testing splits as MNIST, and we use the same CNN as the global model.\n\n\noindent \textbf{CIFAR-10.} CIFAR-10 \cite{cifar10\/100} is a well-known color image classification dataset with 60,000 32 \u00d7 32 RGB images in 10 classes, including 50,000 training samples and 10,000 test samples. We use ResNet-18 \cite{he2016residual} as the global model\footnote{We use an open-source implementation of ResNet-18, which is available at https:\/\/github.com\/kuangliu\/pytorch-cifar}.\n\n\noindent \textbf{AG-News.} AG-News is a 4-class topic classification dataset. Each class contains 30,000 training samples and 1,900 test samples, giving 120,000 training samples and 7,600 test samples in total. We use a TextRNN consisting of a two-layer bi-directional LSTM network \cite{LiuQH16textRNN} as the global model.\n\n\subsection{Evaluated Attacks}\nWe consider various popular model poisoning attacks from the literature as well as recently proposed state-of-the-art attacks, as introduced in Section~\ref{secRelated}, and we assume the attacker knows all the benign gradients and the GAR at the server. \n\n\textbf{Random Attack.} The Byzantine clients send gradients with randomized values generated by a multi-dimensional Gaussian distribution $\mathcal{N}(\mu,\sigma^2 \textbf{I})$. In our experiments, we take $\mu = (0,...,0) \in \mathbb{R}^d\ $and $\sigma=0.5$ to conduct random attacks.\n\n\textbf{Noise Attack.} The Byzantine clients send noise-perturbed gradients generated by adding Gaussian noise to honest gradients: $ g_{m} = g_{b} + \mathcal{N}(\mu,\sigma^2 \textbf{I}) $. We take the same Gaussian distribution parameters as in the random attack.\n\n\textbf{Sign-Flipping.} The Byzantine clients send reversed gradients without scaling: $ g_{m} = -g_{b}$. This is a special case of the reversed gradient attack \cite{Rajput19Detox} or the empire attack \cite{Xie19Empires}.\n\n\textbf{Label-Flipping.} The Byzantine clients flip the local sample labels during the training process to generate faulty gradients. This is also a type of data poisoning attack. In particular, the label of each training sample in Byzantine clients is flipped from $l$ to $C-1-l$, where $C$ is the total number of label categories and $l\in \{0,1,\cdots,C-1\}$.\n\n\textbf{Little is Enough.} As in \cite{BaruchBG19LIE}, the Byzantine clients send malicious gradient vectors with elements crafted as in Eq.~(\ref{eq:lie}). We set $z=0.3$ for the default training settings in our experiments.\n\n\textbf{ByzMean Attack.} As introduced in Section~\ref{secAttackAnalysis}, we set $m_1=\lfloor 0.8m \rfloor$ and $m_2=m-m_1$, and set $g_{m_1}$ as the LIE attack in all experiments.\n\n\textbf{Min-Max\/Min-Sum.} As in \cite{shejwalkar2021manipulating}, the malicious gradient is a perturbed version of the benign aggregate as in Eq.~(\ref{eq:gstd}), where $\nabla^p$ is a perturbation vector and $\gamma$ is a scaling coefficient; the two attacks are formulated in Eqs.~(\ref{eq:minmax})-(\ref{eq:minsum}).
The Min-Max attack ensures that the malicious gradients lie close to the clique of the benign gradients, while the Min-Sum attack ensures that the sum of squared distances of the malicious gradient from all the benign gradients is upper bounded by the sum of squared distances of any benign gradient from the other benign gradients. To maximize the attack impact, all malicious gradients are kept identical. By default, we choose $\nabla^p$ as $-std(g^{\{i\in [n]\}})$, i.e., the negated standard deviation (an illustrative sketch of the search for $\gamma$ is given at the end of this section).\n\begin{equation}\ng_m = f_{avg}(g^{\{i\in [n]\}})+\gamma \nabla^p\n\label{eq:gstd}\n\end{equation}\n\begin{equation}\n\t\mathop{\arg\max}\limits_{\gamma} ~ \mathop{\max}\limits_{i\in [n]}\|g_m-g^{(i)}\|\leq \mathop{\max}\limits_{i,j\in [n]}\|g^{(i)}-g^{(j)}\|\n\t\label{eq:minmax}\n\end{equation}\n\begin{equation}\n\t\mathop{\arg\max}\limits_{\gamma} ~ \mathop{\sum}\limits_{i\in [n]}\|g_m-g^{(i)}\|^2\leq \mathop{\max}\limits_{i\in [n]}\mathop{\sum}\limits_{j\in [n]}\|g^{(i)}-g^{(j)}\|^2\n\t\label{eq:minsum}\n\end{equation}\n\n\nSpecifically, we investigate fixed and randomized attacking behaviors respectively. In the fixed setting, all corrupted clients play the role of Byzantine nodes and always perform the predefined attack method during the whole training process. In the randomized setting, all corrupted clients change their collusion attack strategy at each training epoch.\n\n\subsection{Training Settings}\nBy default, we assume there are $n = 50$ clients in total for each task, 20\% of which are Byzantine nodes with a fixed attack method, and the training data are IID among clients. To verify the resilience and robustness, we also evaluate the impact of different fractions of malicious clients for different attacks and defenses. Furthermore, our approach is also evaluated in non-IID settings. In all experiments, we set the lower and upper bounds of the gradient norm to $L = 0.1$ and $R = 3.0$, and randomly select 10\% of the coordinates to compute sign statistics in our SignGuard-based algorithms. Each training procedure is run for 60 epochs for MNIST\/Fashion-MNIST\/AG-News and 160 epochs for CIFAR-10, and the number of local iterations is always set to 1. We employ momentum on the PS side with the momentum parameter set to 0.9, and the weight decay is set to 0.0005. More details on some key hyper-parameters are given in Appendix~\ref{appendix:train}.\n\n\n\subsection{Performance Metrics}\nWe train the models for a fixed number of epochs and use the test accuracy to evaluate model performance. Considering the instability of model training and the fluctuation of model accuracy under strong attacks, we test the model at the end of each training epoch and take the best test accuracy over the whole training process to assess the efficacy of defenses. We repeat each experiment three times and report the average results. When a certain defense is performed, the accuracy gap to the baseline can be used to evaluate the efficacy of the defense under various attacks; a smaller gap indicates a more effective defense method.
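As an illustration of the Min-Max attack described above, the following minimal Python sketch (our own reconstruction under the stated assumptions, not the reference code of \cite{shejwalkar2021manipulating}) finds the largest feasible scaling coefficient $\gamma$ in Eq.~(\ref{eq:minmax}) by doubling and bisection:\n\begin{verbatim}\nimport numpy as np\n\ndef min_max_attack(benign, tau=1e-3):\n    # benign: list of 1-D numpy arrays (benign gradients known to the attacker)\n    g = np.stack(benign)\n    g_avg = g.mean(axis=0)\n    pert = -g.std(axis=0)  # perturbation direction: negated standard deviation\n    max_pair = max(np.linalg.norm(a - b) for a in g for b in g)\n    ok = lambda gm: max(np.linalg.norm(gm - gi) for gi in g) <= max_pair\n    lo, hi = 0.0, 1.0\n    while ok(g_avg + hi * pert):  # grow gamma until the constraint breaks\n        lo, hi = hi, 2.0 * hi\n    while hi - lo > tau:  # bisect for the largest feasible gamma\n        mid = (lo + hi) \/ 2.0\n        lo, hi = (mid, hi) if ok(g_avg + mid * pert) else (lo, mid)\n    return g_avg + lo * pert\n\end{verbatim}\nThe Min-Sum variant is obtained by replacing the feasibility test with the sum-of-squared-distances condition of Eq.~(\ref{eq:minsum}).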
\n\n\\section{Evaluation Results}\\label{secExperimentResult}\n\n\n\\begin{table*}[ht] \\footnotesize\n\t\\centering\n\t\\caption{Comparison of defenses under various model poisoning attacks} \n\t\\label{tab:main_result_iid} \n\t\\renewcommand\\arraystretch{1.1}\n\t\\newcommand{\\tabincell}[2]{\\begin{tabular}{@{}#1@{}}#2\\end{tabular}} \n\t\\begin{tabular}{| c | c | p{1.08cm}<{\\centering} | p{1.08cm}<{\\centering} | p{1.08cm}<{\\centering} | c | p{1.08cm}<{\\centering} | p{1.08cm}<{\\centering} | p{1.08cm}<{\\centering} | p{1.08cm}<{\\centering} | p{1.08cm}<{\\centering} |}\n\t\t\\hline\n\t\t\\multirow{2}{*}{\\tabincell{c} {Dataset\\\\(Model)}} & \n\t\t\\multirow{2}{*}{GAR} & \n\t\t\\multirow{2}{*}{No Attack} & \n\t\t\\multicolumn{3}{c|}{Simple Attacks}&\n\t\t\\multicolumn{5}{c|}{State-of-the-art Attacks}\\\\\n\t\t\\cline{4-5} \n\t\t\\cline{5-6} \n\t\t\\cline{6-7}\n\t\t\\cline{7-8}\n\t\t\\cline{8-9} \n\t\t\\cline{9-10} \n\t\t\\cline{10-11} \n\t\t& & & {Random} & {Noise} & {Label-flip} & {ByzMean} & {Sign-flip} & {LIE} & {Min-Max} & {Min-Sum}\\\\\n\t\t\\hline\n\t\t\n\t\t\\multirow{10}{*}{\\tabincell{c} {MNIST\\\\(CNN)}}&\n\t\tMean & 99.23 & 84.84 & 90.48 & 99.05 & 31.98 & 98.42 & 84.49 & 68.89 & 34.46 \\\\\n\t\t& TrMean & 98.23 & 98.63 & 98.53 & 95.31 & 58.87 & 98.44 & 94.50 & 34.48 & 43.89\\\\\n\t\t& Median & 97.46 & 94.18 & 97.45 & 93.84 & 40.04 & 97.73 & 74.37 & 26.11 & 38.13 \\\\\n\t\t& GeoMed & 93.21 & 82.77 & 78.68 & 86.20 & 45.02 & 74.78 & 34.37 & 15.62 & 20.53 \\\\\n\t\t& Multi-Krum & 99.20 & 98.98 & 99.11 & 99.06 & 83.26 & 98.82 & 90.04 & 52.77 & 27.27 \\\\\n\t\t& Bulyan & 99.10 & 99.17 & {99.12} & 99.15 & 98.58 & 98.81 & 98.86 & 52.45 & 51.95 \\\\\n\t\t& DnC & 99.09 & 99.07 & 99.08 & {99.17} & 82.25 & 98.73 & {99.12} & 98.97 & 81.04 \\\\\n\t\t& SignGuard & 99.11 & {99.09} & {98.97} & \\textbf{99.18} & \\textbf{99.02} & \\textbf{99.13} & 99.15 & \\textbf{99.18} & {99.15} \\\\\n\t\t& SignGuard-Sim & 99.16 & \\textbf{99.18} & 99.16 & 99.07 & {98.91} & {99.06} & \\textbf{99.22} & {99.08} & {99.13} \\\\\n\t\t& SignGuard-Dist & 98.95 & 99.05 & \\textbf{99.18} & 99.11 & 98.93 & 98.86 & 98.96 & 99.01 & \\textbf{99.19} \\\\\n\t\t\\hline \\hline\n\t\t\n\t\t\\multirow{10}{*}{\\tabincell{c} {Fashion-MNIST\\\\(CNN)}}&\n\t\tMean & 89.51 & 69.88 & 31.83 & 89.37 & 16.31 & 86.68 & 79.78 & 47.73 & 45.12 \\\\\n\t\t& TrMean & 87.02 & 87.81 & 87.45 & 79.58 & 62.66 & 87.45 & 54.28 & 45.71 & 42.96 \\\\\n\t\t& Median & 80.77 & 82.96 & 82.59 & 77.41 & 47.46 & 82.52 & 45.14 & 47.43 & 50.83 \\\\\n\t\t& GeoMed & 76.51 & 79.96 & 78.93 & 78.16 & 40.51 & 70.65 & 10.00 & 73.75 & 66.63 \\\\\n\t\t& Multi-Krum & 87.89 & 89.12 & 88.94 & 89.27 & 69.95 & 87.59 & 72.22 & 40.08 & 47.36 \\\\\n\t\t& Bulyan & 88.80 & 89.31 & 89.32 & 89.21 & 88.72 & 87.52 & 88.64 & 59.65 & 43.63 \\\\\n\t\t& DnC & 89.21 & 88.89 & 88.14 & 88.85 & 70.15 & 87.58 & 71.82 & 88.43 & 88.94 \\\\\n\t\t& SignGuard & 89.48 & \\textbf{89.34} & \\textbf{89.32} & 89.12 & 89.35 & 88.69 & 89.34 & \\textbf{89.48} & \\textbf{88.51} \\\\\n\t\t& SignGuard-Sim & 89.43 & {89.24} & 89.21 & \\textbf{89.33} & {89.28} & {89.08} & \\textbf{89.36} & {89.04} & {88.18}\\\\\n\t\t& SignGuard-Dist & 89.37 & 88.87 & 89.30 & 89.31 & \\textbf{89.39} & \\textbf{89.21} & \\textbf{89.36} & 89.34 & 88.38 \\\\\n\t\t\\hline \\hline\n\t\t\n\t\t\\multirow{10}{*}{\\tabincell{c} {CIFAR-10\\\\(ResNet-18)}}&\n\t\tMean & 93.16 & 44.53 & 46.34 & {91.98} & 17.18 & 79.63 & 55.86 & 23.84 & 18.17 \\\\\n\t\t& TrMean & 93.15 & 89.61 & 89.47 & 85.15 & {30.13} & 85.54 & 43.76 & 24.81 & 23.36 
\\\\\n\t\t& Median & 74.18 & 68.27 & 71.42 & 71.19 & 23.47 & 70.75 & 27.35 & 20.46 & 22.74 \\\\\n\t\t& GeoMed & 65.62 & 70.41 & 69.35 & 70.76 & 24.86 & 67.82 & 23.55 & 50.36 & 45.23 \\\\\n\t\t& Multi-Krum & 93.14 & \\textbf{92.88} & \\textbf{92.91} & 92.26 & 50.41 & 92.36 & 42.58 & 21.17 & 38.24 \\\\\n\t\t& Bulyan & 92.78 & 91.87 & 92.47 & 92.24 & 81.33 & 90.12 & 74.52 & 29.87 & 37.79 \\\\\n\t\t& DnC & 92.73 & 88.01 & 88.25 & 92.05 & 36.56 & 84.76 & 47.37 & 52.94 & 35.36 \\\\\n\t\t& SignGuard & 93.03 & \\textbf{92.78} & 92.52 & 92.28 & \\textbf{92.46} & 88.61 & \\textbf{92.93} & 92.56 & {92.47} \\\\\n\t\t& SignGuard-Sim & 93.19 & 92.51 & 91.38 & 92.26 & {92.26} & \\textbf{92.48} & 92.62 & {92.63} & 92.75 \\\\\n\t\t& SignGuard-Dist & 92.76 & 92.64 & 92.26 & \\textbf{92.51} & 92.42 & 91.69 & 92.36 & \\textbf{92.82} & \\textbf{92.93} \\\\\n\t\t\\hline \\hline\n\t\t\\multirow{10}{*}{\\tabincell{c} {AG-News\\\\(TextRNN)}}&\n\t\tMean & 89.36 & 28.18 & 28.41 & 86.72 & 25.05 & 84.18 & 79.34 & 27.32 & 25.24 \\\\\n\t\t& TrMean & 87.57 & 88.33 & 88.72 & 85.50 & 37.51 & 84.84 & 66.95 & 30.05 & 30.28 \\\\\n\t\t& Median & 84.57 & 84.52 & 84.59 & 82.08 & 28.99 & 81.10 & 32.39 & 30.28 & 29.71 \\\\\n\t\t& GeoMed & 82.38 & 77.63 & 77.18 & 78.42 & 27.36 & 81.64 & 31.57 & 74.82 & 71.48 \\\\\n\t\t& Multi-Krum & 88.86 & 89.18 & 89.22 & 86.89 & 68.53 & \\textbf{87.42} & 72.98 & 53.51 & 32.46 \\\\\n\t\t& Bulyan & 88.22 & 88.86 & 88.93 & 85.54 & 85.80 & 86.55 & 85.49 & 47.76 & 51.25 \\\\\n\t\t& DnC & 89.13 & 86.42 & 86.28 & 86.72 & 31.47 & 86.30 & 76.58 & 88.45 & 89.05 \\\\\n\t\t& SignGuard & 89.29 & \\textbf{89.22} & 89.23 & 86.78 & \\textbf{89.24} & 86.53 & 89.26 & 89.23 & 89.27 \\\\\n\t\t& SignGuard-Sim & 89.24 & 89.13 & \\textbf{89.29} & 87.05 & 89.36 & 86.76 & \\textbf{89.33} & \\textbf{89.27} & \\textbf{89.37}\\\\\n\t\t& SignGuard-Dist & 89.23 & 89.16 & 89.23 & \\textbf{87.25} & 89.31 & 87.30 & 89.17 & 89.22 & 89.35 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\\end{table*}\n\n\n\nIn this section, we conduct extensive experiments with various attack-defense pairs on both IID and non-IID data settings. We compare our methods with several existing defense methods, including TrMean, Median, GeoMed, Multi-Krum, Bulyan and DnC. The numerical results demonstrate the efficacy and superiority of our proposed SignGuard framework.\n\n\\subsection{Main Results in IID Settings}\nThe main results of best achieved test accuracy during training process under different attack and defense methods in IID setting are collected in Table~\\ref{tab:main_result_iid}. The results of naive \\textit{Mean} aggregation under \\textit{No Attack} are used as benchmarks. Note that we favor other defenses by assuming the defense algorithms know the fraction of Byzantine clients, which is somewhat unrealistic but intrinsically required by existing defenses. However, we do not use the Byzantine fraction information in our SignGuard-type methods, including plain SignGuard, SignGuard-Sim and SignGuard-Dist.\n\n\\vspace{1ex}\n\\noindent \\textbf{Sign Statistics are Powerful.} Test results on four datasets consistently show that our SignGuard-type methods can leverage the power of sign statistics and similarity features to filter out most malicious gradients and achieve comparable test accuracy as general distributed SGD under no attack. 
Consistent with the original papers \cite{BaruchBG19LIE,shejwalkar2021manipulating}, the state-of-the-art attacks, such as LIE and Min-Max\/Min-Sum, can circumvent the median-based and distance-based defenses, preventing successful model training. Taking the results of Multi-Krum on ResNet-18 as an example, when no attack is performed Multi-Krum has a negligible accuracy drop (less than 0.1\%). However, the best test accuracy drops to 42.58\% under the LIE attack and to even less than 40\% under the Min-Max\/Min-Sum attacks. Similar phenomena can also be found in model training under the TrMean, Median and Bulyan methods. Besides, even under no attack, the Median and GeoMed methods are only effective in simple tasks, such as the CNN for digit classification on MNIST and the TextRNN for text classification on AG-News. When applied to complicated model training, such as ResNet-18 on CIFAR-10, those two methods have high convergence error and result in significant model degradation. While Multi-Krum and Bulyan suffer from well-crafted attacks, they perform well against naive attacks, and even better than our plain SignGuard in mitigating the random noise and sign-flip attacks. Though the DnC method is highly effective under many attacks, we found it unstable during training, and it can be easily broken by our proposed ByzMean attack. In contrast, our proposed SignGuard-type methods are able to distinguish most of those well-crafted malicious gradients and achieve satisfactory model accuracy under various types of attacks. Considering that the local data of Byzantine clients also contribute to the global model when no attack is performed, it is not surprising that even the best defense against Byzantine attacks still results in a small gap to the benchmark results.\n\n\n\begin{figure*}[htbp]\n\t\centering\n\t\subfigure[CNN trained on Fashion-MNIST]{\n\t\t\includegraphics[width=2.1\columnwidth]{.\/figs\/fmnist_byznum.pdf}\n\t}\n\n\t\subfigure[ResNet-18 trained on CIFAR-10]{\n\t\t\includegraphics[width=2.1\columnwidth]{.\/figs\/cifar_byznum.pdf}\n\t}\n\n\t\caption{Accuracy drop comparison under various attacks and different percentages of Byzantine clients. SignGuard has the smallest gap to the baseline. }\n\t\label{fig:acc_byznum}\n\end{figure*}\n\n\n\vspace{1ex}\n\noindent \textbf{Sign Statistics are Insufficient.} Table~\ref{tab:rate_iid} reports the average selected rate of benign and Byzantine clients during the training of ResNet-18. We notice that the SignGuard-type methods inevitably exclude part of the honest gradients, and select some malicious gradients under the sign-flip attack, even with the help of the similarity feature. The reason is that the proportions of positive and negative elements in a normal gradient are approximately equal for ResNet-18, even after randomized downsampling of the gradient elements. Consequently, the ratios of positive and negative signs remain approximately equal in the sign-flipped gradient. Therefore, the simple sign statistics are insufficient to distinguish honest gradients from sign-flipped ones. We also notice that although SignGuard-Sim is resilient to all kinds of attacks and achieves high accuracy, it selects less than 80\% of the honest gradients during training.
One possible reason is that the cosine-similarity feature also has some diversity across honest gradients.\n\n\begin{table}[htbp] \footnotesize\n\t\centering\n\t\caption{Selected Rate of Honest and Malicious Gradients} \n\t\label{tab:rate_iid} \n\t\renewcommand\arraystretch{1.3}\n\t\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \n\t\begin{tabular}{| c | c | c | c | c | c | c |}\n\t\t\hline\n\t\t\multirow{2}{*}{\tabincell{c} {\textbf{Attack}}} & \n\t\t\multicolumn{2}{c|}{\textbf{SignGuard}} &\n\t\t\multicolumn{2}{c|}{\textbf{SignGuard-Sim}} &\n\t\t\multicolumn{2}{c|}{\textbf{SignGuard-Dist}} \\\n\t\t\cline{2-3} \n\t\t\cline{3-4} \n\t\t\cline{4-5}\n\t\t\cline{5-6}\n\t\t\cline{6-7}\n\t\t& {H} & {M} & {H} & {M} & {H} & {M}\\\n\t\t\hline\n\t\tByzMean & 0.9625 & 0 & 0.7791 & 0 & 0.9272 & 0.0003\\\n\t\t\hline\n\t\tSign-flip & 0.6870 & 0.3908 & 0.7639 & 0.0981 & 0.7570 & 0.2440\\\n\t\t\hline\n\t\tLIE & 0.9532 & 0 & 0.7727 & 0 & 0.9151 & 0\\\n\t\t\hline\n\t\tMin-Max & 0.9650 & 0 & 0.7866 & 0.0003 & 0.9105 & 0.0009\\\n\t\t\hline\n\t\tMin-Sum & 0.9640 & 0 & 0.7752 & 0 & 0.9111 & 0\\\n\t\t\hline\n\t\end{tabular}\n\end{table}\n\n\vspace{1ex}\n\noindent \textbf{Percentage of Byzantine Clients.} We also evaluate the performance of SignGuard-Sim with different percentages of Byzantine clients. In this part, we conduct experiments with the CNN trained on the Fashion-MNIST dataset and ResNet-18 trained on the CIFAR-10 dataset. We keep the total number of clients at 50 and vary the fraction of Byzantine clients from 10\% to 40\% to study the impact of the Byzantine percentage on different defenses. We use the default training settings, and the experiments are conducted under various state-of-the-art attacks. In particular, we compare the results of SignGuard-Sim with Median, TrMean, Multi-Krum and DnC, as shown in Fig.~\ref{fig:acc_byznum}. It can be seen that our approach can effectively filter out malicious gradients and incurs only a slight accuracy drop regardless of the high percentage of Byzantine clients, while the other defense algorithms suffer much more attack impact as the percentage of Byzantine clients increases. In particular, we also find that Multi-Krum can mitigate the sign-flip attack well in ResNet-18 training, possibly because the exact percentage of Byzantine clients is provided.\n\n\vspace{1ex}\n\noindent \textbf{Time-varying Attack Strategy.} Further, we test the different defense algorithms under a time-varying Byzantine attack strategy. We still use the default system setting, and the attack method is changed randomly at each epoch (including the no-attack scenario). The test accuracy curves of the CNN on Fashion-MNIST and ResNet-18 on CIFAR-10 are presented in Fig.~\ref{fig:acc_random_attack}, where the baseline is training under no attack and no defense, and we only test the state-of-the-art defenses. It can be seen that our SignGuard ensures successful model training and closely follows the baseline, while the other defenses result in significant accuracy fluctuations and model deterioration. For the CNN, the training process under the other defenses eventually even collapsed.\n\n\begin{figure}[htbp]\n\t\centering\n\t\subfigure[CNN on Fashion-MNIST]{\n\t\t\includegraphics[width=0.46\columnwidth]{.\/figs\/fmnist_attack.pdf}\n\t}\n\n\t\subfigure[ResNet-18 on CIFAR-10]{\n\t\t\includegraphics[width=0.46\columnwidth]{.\/figs\/cifar_attack.pdf}\n\t}\n\n\t\caption{Defense effect comparison under time-varying attacks.
SignGuard can ensure safe training and achieve decent model accuracy. }\n\t\label{fig:acc_random_attack}\n\end{figure}\n\n\n\begin{figure*}[htbp]\n\t\centering\n\t\subfigure[CNN on Fashion-MNIST]{\n\t\t\includegraphics[width=2.0\columnwidth]{.\/figs\/fmnist_noniid.pdf}\n\t}\n\n\t\subfigure[CifarNet on CIFAR-10]{\n\t\t\includegraphics[width=2.0\columnwidth]{.\/figs\/cifar_noniid.pdf}\n\t}\n\t\caption{Model accuracy comparison under various attacks and different degrees of non-IID. SignGuard has the best performance compared with other state-of-the-art defenses. }\n\t\label{fig:acc_noniid}\n\end{figure*}\n\n\n\subsection{Main Results in Non-IID Settings}\n\nByzantine mitigation in non-IID FL settings is a well-known challenging task due to the diversity of gradients. We evaluate our SignGuard-Sim method on synthetic non-IID partitions of the Fashion-MNIST and CIFAR-10 datasets. As in previous works, we simulate the non-IID data distribution between clients by allocating an $s$-fraction of the dataset in an IID fashion and the remaining $(1-s)$-fraction in a sort-and-partition fashion. Specifically, we first randomly select an $s$-proportion of the whole training data and evenly distribute it to all clients. Then, we sort the remaining data by label and divide them into multiple shards, where all data in the same shard have the same label, after which each client is randomly allocated 2 different shards. The parameter $s$ measures the skewness of the data distribution, and a smaller $s$ generates a more skewed data distribution among clients. We consider three levels of skewness with $s$ = 0.3, 0.5, 0.8, respectively. \n\n\vspace{1ex}\n\noindent \textbf{Efficacy on Non-IID Data.} We compare SignGuard-Sim with various state-of-the-art defenses. As shown in Fig.~\ref{fig:acc_noniid}, our method still works well under strong attacks in non-IID settings, achieving satisfactory accuracy in various scenarios. In contrast, TrMean and Multi-Krum cannot defend against the LIE attack and the ByzMean attack, making them no longer reliable. Bulyan performs well for the CNN trained on Fashion-MNIST, but is ineffective under the LIE attack on ResNet-18 trained on CIFAR-10. DnC can defend against the sign-flip attack well, but performs poorly in the other scenarios. These results in non-IID settings further demonstrate the general validity of sign statistics.\n\n\n\n\subsection{Computational Overhead Comparison}\nTable~\ref{tab:aggtime} reports the average aggregation time of the different defenses when training ResNet-18, where we omit TrMean and Median since they induce negligible computation cost. It can be seen that SignGuard requires the shortest time compared with GeoMed, Multi-Krum and Bulyan, which means our method achieves efficiency and robustness simultaneously.
For the other two variants, we found that the pairwise similarity\/distance calculation is time-consuming, and using the previous aggregate as the ``correct'' gradient to compute the similarity\/distance alleviates this issue.\n\n\begin{table}[htbp]\n\centering\n\caption{Averaged Aggregation Time} \n\label{tab:aggtime} \n\renewcommand\arraystretch{1.2}\n\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \n\begin{tabular}{| c | c | c | c | c |}\n\t\hline\n\t\textbf{Method} & GeoMed & Multi-Krum & Bulyan & SignGuard\\\n\t\hline\n \textbf{Time(s)} & 0.39314 & 0.29847 & 0.29629 & 0.04706\\\n\t\hline \hline\n\t\multirow{2}{*}{\tabincell{c} {\textbf{Method}}}&\n\t\multicolumn{2}{c|}{SignGuard-Sim}&\n\t\multicolumn{2}{c|}{SignGuard-Dist}\\\n\t\cline{2-3} \n\t\cline{3-4} \n\t\cline{4-5}\n\t& pairwise & previous & pairwise & previous \\\n\t\hline\n\t\textbf{Time(s)} & 0.73887 & 0.07686 & 0.39117 & 0.07834\\\n\t\hline\n\end{tabular}\n\end{table}\n\n\section{Discussions}\label{secDiscussion}\n\nFrom the previous sections we can see that the SignGuard approach performs well in many scenarios and can effectively mitigate a number of attacks. Extensive experiments demonstrate that sign statistics are powerful for malicious gradient detection, but also insufficient in some cases. In this section, we present some discussions of our approach.\n\n\n\subsection{Strength and Limitation of SignGuard} \n\nOur SignGuard approach mainly leverages sign statistics to distinguish malicious gradients from honest gradients, which overcomes the drawbacks of distance- and cosine-similarity-based detection methods and reveals a new evaluation criterion for gradient correctness. The feasibility of our algorithm rests on the fact that the sign statistics of honest gradients gather in a compact range, which currently lacks a theoretical explanation. The experimental results show that the sign statistics are capable of detecting recent state-of-the-art attacks, which are distance-indistinguishable but do not take the variation of sign statistics into account. This is the key strength of SignGuard; however, it also reveals the main limitation, namely that it essentially depends on the distinguishability of the sign statistics. As shown in the evaluation results, if the ratios of positive and negative signs are approximately equal, it is hard for SignGuard to separate sign-flipped gradients from honest ones. Moreover, if the attacker keeps the values of the gradient elements unmodified but shuffles their order, the gradient norm and sign statistics remain unchanged; that is why random coordinate selection and the similarity feature are required in the design of the SignGuard algorithm.\n\nAnother advantage is that SignGuard has good extensibility: not only sign statistics but also sophisticated distance and similarity features can be extracted, and more advanced clustering algorithms can be applied to separate benign and malicious gradients. The final aggregation rule can also be replaced by TrMean or Multi-Krum to mitigate those malicious gradients that evade the detection filters. Another limitation is also obvious: we only design a two-class K-Means algorithm in this work, which limits its wider applicability. The K-Means algorithm has its own drawbacks, for it assumes that clusters are convex and isotropic, and using a fixed number of clusters cannot tackle the case of dynamically varying attacks.
Therefore, more advanced and adaptive clustering algorithms should be developed to improve flexibility and robustness.\n\n\subsection{Possible Improvement for SignGuard} \n\nThe SignGuard framework in this work is a preliminary attempt that leverages the sign-gradient to address the Byzantine attack problem. We believe that there exist more robust characteristics of the gradient that can reflect malicious manipulation than the naive sign statistics, and the design of the SignGuard framework needs to be improved as well. First, as shown in the previous sections, the similarity feature is not always helpful and can even hide the distinction in the sign statistics, since the features in K-Means are treated isotropically, so it is hard to decide whether to use the similarity feature or not. One possible solution is to apply more advanced clustering models such as spectral clustering and Gaussian mixture models (GMM) with the expectation-maximization (EM) algorithm. Second, when a variety of malicious gradients exist simultaneously, it is impossible to specify the number of clusters beforehand. In that case, the desired clustering model should be able to find a suitable number of clusters automatically, as in hierarchical clustering and DBSCAN. Third, we find that the sign statistics of a randomly selected subset of gradient elements are almost consistent with those of the original gradient, which keeps the sign-flipped gradient hard to detect when the ratios of positive and negative signs are approximately equal. Thus, the algorithm could possibly discard the coordinates where all gradients have a positive (or negative) sign, enlarging the difference between the two ratios to improve cluster separability. Moreover, it is promising to improve SignGuard by combining it with existing defense algorithms, because no single defense method can mitigate all possible attacks, and different defense algorithms have their own advantages and weaknesses. For example, Multi-Krum is more robust to the sign-flip attack than SignGuard in both IID and non-IID settings, as shown in the previous sections; hence we are encouraged to adopt the design ideas behind Multi-Krum to calculate a useful distance feature and take it into consideration during malicious gradient filtering.\n\n\subsection{Trade-off Among Safety, Security and Fairness} \n\nTo ensure the safety of model training, we would like more detailed information about the gradients, or even the raw training data, to evaluate the correctness of each received gradient. If a small set of raw training data can be collected from voluntary clients, we can directly use it to validate the gradients. Otherwise, we have to extract statistical features, including the gradient norm, distance, similarity and sign statistics, to perform anomaly detection. However, not only the raw training data but also the raw gradients carry a risk of privacy leakage. Hence, encryption algorithms and secure multiparty computation are also considered for data security protection, which may not support many detection algorithms and makes Byzantine gradient detection especially challenging. Our SignGuard mainly makes use of the sign-gradient to achieve effective anomaly detection, and thus has the potential to preserve users' data security to a great extent. However, our experimental results also show that sign statistics and sign-similarity are insufficient in some cases, requiring the original gradient values to improve robustness. This is an important trade-off between safety and security when developing federated learning systems.
Moreover, honest gradients could be filtered out as well, which may lead to some algorithmic unfairness, especially in non-IID scenarios.\n\n\n\section{Conclusion and Future Work}\label{secConclusion}\n\nIn this work, we proposed a novel Byzantine attack detection framework, namely SignGuard, to mitigate malicious gradients in federated learning systems. It overcomes the drawbacks of median- and distance-based approaches, which are vulnerable to well-crafted attacks, and, unlike validation-based approaches, it requires no extra data collection at the PS. It also does not depend on historical data or other external information, utilizing only the magnitude and robust sign statistics of the current local gradients, which makes it a practical way to defend against most kinds of model poisoning attacks. Extensive experimental results on image and text classification tasks verify our theoretical and empirical findings, demonstrating the effectiveness of our proposed SignGuard-type algorithms. We hope this work can provide a new perspective on Byzantine attack problems in machine learning security. Future directions include developing strategies to defend against dynamic and hybrid model poisoning attacks as well as backdoor attacks in more complex federated learning scenarios. How to design more effective and robust filters in the SignGuard framework for real-world learning systems is also left as an open problem.\n\n\section{Acknowledgment}\label{secAcknowledgment}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{}\nIn many applications of statistical physics the notion of entropy enters as a basic concept suitable for characterizing the behaviour of macroscopic systems \cite{entropy_economic,tsallis2004,entropy_ecology,Franzosi_EPL15,\nFranzosi_PRE16,Felice_PhysA_2018}.\nIn the present manuscript we address the problem of the correct definition of the microcanonical entropy for classical systems. In fact, this concern has recently become a matter of debate, in which it has been discussed whether the Boltzmann or the Gibbs definition provides the correct entropy. \n\nA mechanically and adiabatically isolated system at equilibrium, composed of a macroscopic number of interacting particles, is statistically described by the microcanonical ensemble.\nIn this statistical description the relevant thermodynamic quantities are derived from the entropy $S$ through suitable thermodynamic relations.\nNow, there are -at least- two accepted definitions of the microcanonical entropy, commonly referred to as the Boltzmann entropy and the Gibbs entropy.\nThe former is proportional to the logarithm of the density of microstates at a given ``energy shell'', whereas the latter is proportional to the logarithm of the number of microstates up to a given energy.\nThe debate as to which of these definitions of entropy is the correct one dates back many years\n\cite{Hertz10,Einstein11,Schluter48,Jaynes,Munster_1987,Pearson85,Berdichevsky91,Adib04,\nLavis2005245,Campisi05}. \n\nVery recently \cite{Dunkel2013,Hilbert_PRE_2014,Hanggi_15}, it has been argued that the Gibbs entropy yields a consistent thermodynamics, and some consistency issues that a microcanonical statistical mechanics founded on the Boltzmann entropy would unveil have been discussed \n\cite{Dunkel2013,Hilbert_PRE_2014,Sokolov_2014,DunkelHilbertRep1,DunkelHilbertRep2,\nCampisi_2015,Campisi_2016}.
These and other related arguments \n\cite{Romero-Rochin,Treumann_2014,Treumann_2014a} have been contended \cite{Vilar_2014, Frenkel_2015, Schneider_2014,Wang_2015, Cerino_2015, Swendsen_Wang_Physicaa_2016,Puglisi_PhysRep_2017,\nBaldovin_JStatMech_2017}, in what has become a lively debate.\nAlthough this may seem a marginal issue, it has crucial consequences for the foundations of statistical mechanics.\nFor instance, the notion of negative temperatures would not make sense, since they are a well-founded concept in the Boltzmann description, whereas they are forbidden in the case of the Gibbs entropy, since the number of microstates with energy below a given value $E$ is a non-decreasing function of $E$.\nEven if we do not share the point of view of the authors of Refs.\n\cite{Dunkel2013,Hilbert_PRE_2014,Hanggi_15, Campisi_2015,Campisi_2016}, as we have clarified in Refs. \cite{Buonsante_AoP_2016, Buonsante_2015}, where we have shown that the Boltzmann entropy provides a consistent description of the microcanonical ensemble, in our opinion these authors must be given credit for having raised this key question.\n\nA further issue raised by the authors of Refs.\n\cite{Dunkel2013,Hilbert_PRE_2014,Hanggi_15,\nCampisi_2015,Campisi_2016} pertains to the fact that the caloric equation of state, for instance in the simple case of an isolated ideal-gas system, derived with the Boltzmann entropy is not strictly extensive.\nOn this point, in Refs. \cite{Buonsante_AoP_2016,Swendsen_2017} we have shown that the correction to the extensive behaviour is of order $1\/(nd)$, and therefore vanishes in the limit of infinitely many degrees of freedom.\nAlthough in the case of a macroscopic system (as the ones most often considered in statistical mechanics) this is not an issue and represents just an aesthetic mathematical problem, it poses a relevant matter when microcanonical thermodynamics is applied to systems that by their nature do not admit the thermodynamic limit.\nExamples of the latter class include proteins, the DNA helix, and nanosystems.\n\n\nIn the present manuscript we propose a modified version of the Boltzmann entropy that overcomes all of these issues. In fact, this entropy reproduces the same results as the Boltzmann entropy for systems with a macroscopic number of particles and predicts the correct extensivity of the caloric equation in the case of small systems.\nLet $H(x)$ be a classical Hamiltonian describing an autonomous many-body system of $n$ interacting particles in $d$ spatial dimensions, whose coordinates and canonical momenta $(q_1\ldots, p_1 ,\ldots)$ are represented as $N$-component vectors $x\in \mathbb{R}^{N}$, with $N=2nd$.\nMoreover, we assume that no conserved quantities exist other than the total energy $H$ \cite{Franzosi_JSP11,Franzosi_PRE12}.\nLet \n$\nM_E = \left\{x\in \mathbb{R}^{N} | H(x) \leq E \right\}\n$\nbe the set of phase-space states with total energy less than or equal to $E$.\nThe Gibbs entropy for this system is\n\begin{equation}\nS_G (E) = \kappa_B \ln \Omega(E) \, ,\n\label{gibbs}\n\end{equation}\nwhere $\kappa_B$ is the Boltzmann constant and\n\begin{equation}\n\Omega(E) = \dfrac{1}{h^{nd}} \int d^N x \Theta(E-H(x)) \, ,\n\label{OmegaE}\n\end{equation}\nis the number of states with energy below $E$.
Here $h$ is the Planck constant and $\Theta$ is the Heaviside function.\n\nThe Boltzmann entropy concerns the energy level sets \n$\n\Sigma_E = \left\{x\in \mathbb{R}^{N} | H(x) = E \right\} \, ,\n$\nand is given in terms of $\omega(E) = \partial \Omega\/\partial E$, according to\n\begin{equation}\nS_B (E) = \kappa_B \ln \left(\omega(E)\Delta \right) \, ,\n\label{boltzmann}\n\end{equation}\nwhere the constant $\Delta$, with the dimension of an energy, makes the argument of the logarithm dimensionless, and\n\begin{equation}\n\omega(E) = \dfrac{1}{h^{nd}} \int d^N x \delta(E-H(x)) \, ,\n\label{omegaE}\n\end{equation}\nis expressed in terms of the Dirac $\delta$ function. Remarkably, in the case of smooth level sets $\Sigma_E$, $\omega(E)$ can be cast in the following form \n\cite{RughPRL97,Franzosi_JSP11,Franzosi_PRE12}\n\begin{equation}\n\omega(E) = \dfrac{1}{h^{nd}} \n\int_{\Sigma_E} \dfrac{m^{N-1}(\Sigma_E)}{\Vert\nabla H(x) \Vert} \, ,\n\label{omegaEdiff}\n\end{equation}\nwhere $m^{N-1}(\Sigma_E)$ is the metric induced from $\mathbb{R}^N$ on the hypersurface $\Sigma_E$ and $\Vert\nabla H(x) \Vert$ is the norm of the gradient of $H$ at $x$. \n\nThe entropy that we propose here is\n\begin{equation}\nS (E) = \kappa_B \ln \left( \sigma(E) \Delta^{1\/2} \right) \, ,\n\label{enew}\n\end{equation}\nwhere\n\begin{equation}\n\sigma(E) =\dfrac{1}{h^{nd}} \int_{\Sigma_E} m^{N-1}(\Sigma_E) \, .\n\label{sigmaEdiff}\n\end{equation}\nIn the case of a system of identical particles, to avoid the Gibbs paradox it is appropriate to introduce a factor $1\/n!$ in the definitions of $\Omega$, $\omega$ and $\sigma$, Eqs. \eqref{OmegaE}, \eqref{omegaE}, \eqref{omegaEdiff} and \eqref{sigmaEdiff}, as we will do in the following.\n\n\nThe entropy is the fundamental thermodynamic potential of the microcanonical ensemble, from which secondary thermodynamic quantities are obtained by derivatives with respect to the control parameters: the total energy $E$, the occupied volume $V$ and, possibly, further Hamiltonian parameters $A_\mu$ (in the following we omit the explicit dependence on $A_\mu$ in order to simplify the notation).\nThe inverse temperature $\beta=(\kappa_B T)^{-1}$ is derived from the entropy according to $\beta = (\partial S\/\partial E)\/\kappa_B$; thus in the three cases under consideration we have\n\begin{eqnarray}\n\beta_G &=& \n\dfrac{\Omega^\prime}{\Omega} \, , \\\n\beta_B &=& \n\dfrac{\omega^\prime}{\omega} \, , \\\n\beta &=& \n\dfrac{\sigma^\prime}{\sigma} \, ,\n\end{eqnarray}\nwhere the symbol $^\prime$ denotes the partial derivative of the corresponding quantity with respect to the energy $E$.\n\nA basic requisite for $S$ is to allow the measurement of the temperature and of the other secondary thermodynamic quantities via microcanonical averages.\nIn terms of the microscopic dynamics, from the Liouville theorem it follows that the invariant measure $d \mu$ for the dynamics on each energy level-set $\Sigma_E$ is $d \mu = {m^{N-1}(\Sigma_E)}\/{\Vert \nabla H \Vert}$.\nIn the case of the Boltzmann entropy, the temperature definition meets the mentioned requisite since\n\begin{equation}\n\beta_B =\left\langle \n\nabla \left( \frac{\nabla H}{\Vert \nabla H \Vert^2} \right) \right\rangle \, ,\n\label{betaB}\n\end{equation}\nwhere $\langle \rangle$ indicates the microcanonical average\n\begin{equation}\n\langle \phi \rangle = \dfrac{1}{\omega}
\int_{\Sigma_E} \phi d\mu \, .\n\end{equation} \nEq. \eqref{betaB} is derived in Ref. \cite{RughPRL97} for the case of many-particle systems for which the energy is the only conserved quantity, and in Refs. \n\cite{Franzosi_JSP11,Franzosi_PRE12} for the general case of two or more conserved quantities. On the contrary, the Gibbs definition of temperature does not meet this important requisite, as discussed at length in Ref. \cite{Buonsante_AoP_2016}.\nBy using the Federer-Laurence derivation formula \cite{Federer_1969,Laurence_1989,Franzosi_JSP11,Franzosi_PRE12}, in the case of the proposed entropy we get\n\begin{equation}\n\beta = \dfrac{\sigma^\prime}{\sigma} = \dfrac{\sigma^\prime\/\omega}{\sigma\/\omega} =\n\dfrac{\langle \nabla \left( \frac{\nabla H}{\Vert \nabla H \Vert} \right) \rangle}\n{\langle \Vert \nabla H \Vert \rangle} \, .\n\label{beta}\n\end{equation}\nThis shows that $S$, besides $S_B$, satisfies the requirement of providing secondary thermodynamic quantities measurable as microcanonical averages. In passing, we note that under the hypothesis of ergodicity, the average of each dynamical observable of the system can be equivalently measured along the dynamics.\n\nAs a simple test let us consider a classical ideal gas in $d$ spatial dimensions composed of $n$ identical particles of mass $m$, for which it is an easy matter to verify that\n\begin{eqnarray}\n\Omega (E,V) &=& \dfrac{V^n (2\pi m)^{nd\/2}}{\Gamma(\frac{nd}{2} +1) n! h^{nd} } E^{nd\/2} \, ,\n\\\n\omega (E,V) &=& \dfrac{V^n (2\pi m)^{nd\/2}}{\Gamma(\frac{nd}{2}) n! h^{nd} } E^{nd\/2-1} \, ,\n\\\n\sigma (E,V) &=& \dfrac{2 V^n (2\pi m)^{nd\/2}}{\Gamma(\frac{nd}{2}) n! h^{nd} } E^{(nd-1)\/2} \, ,\n\end{eqnarray}\nwhere the factor $1\/n!$ is introduced in order to avoid the Gibbs paradox.\nFrom these formulas one finds the following expressions for the caloric equation\n\begin{eqnarray}\n\beta^{-1}_G &=& \dfrac{E}{{nd}\/{2}} \, ,\\\n \beta^{-1}_B &=& \dfrac{E}{\left({nd}\/{2}-1\right)} \, , \\\n\beta^{-1} &=& \dfrac{E}{{(nd-1)}\/{2}} \, .\n\end{eqnarray}\nIn counting the degrees of freedom of a system of free particles, only the kinetic term contributes.
Thus, in $d$ spatial dimensions a system of $n$ particles has $nd$ degrees of freedom and, by setting the energy $E$ to a given value, we are left with $nd-1$ degrees of freedom.\nTherefore, among these expressions only the latter is exactly extensive and, hence, rigorously satisfies the equipartition theorem for any $n$.\nWith an analogous calculation, it is an easy matter to show that for a system of $n$ independent identical harmonic oscillators, of mass $m$ and frequency $\nu$, in $d$ spatial dimensions the caloric equations derived from the three entropies are\n\begin{eqnarray}\n\beta^{-1}_G &=& \dfrac{E}{nd} \, ,\\ \n\beta^{-1}_B &=& \dfrac{E}{\left({nd}-1\right)} \, , \\\n\beta^{-1} &=& \dfrac{E}{{(2nd-1)}\/{2}} \, .\n\end{eqnarray}\nIn this case, both the coordinate and the momentum degrees of freedom contribute to the count of the degrees of freedom of the system.\nThus, when the energy is fixed to a value $E$, the number of degrees of freedom is $2nd-1$ and only $S$ leads to the correct equipartition formula.
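To make the comparison explicit, expanding the ideal-gas expressions for large $nd$ (a short check we add for clarity) gives\n\begin{equation*}\n\beta^{-1}_B = \dfrac{2E}{nd}\left(1+\dfrac{2}{nd}+O\!\left(\dfrac{1}{(nd)^2}\right)\right) \, , \qquad\n\beta^{-1} = \dfrac{2E}{nd}\left(1+\dfrac{1}{nd}+O\!\left(\dfrac{1}{(nd)^2}\right)\right) \, ,\n\end{equation*}\nso that both estimates approach $\beta^{-1}_G=2E\/(nd)$ with relative discrepancies of order $1\/(nd)$; the harmonic-oscillator expressions behave analogously.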
\n\emph{In addition to yielding the correct relation between the total energy and the true number of degrees of freedom, the entropy we propose rigorously satisfies the postulate of equal a priori probability, which lies at the very foundations of equilibrium microcanonical statistical mechanics.\nAs a matter of fact, for a generic isolated physical system at equilibrium, a given thermodynamic state is completely determined once we know the values of the macroscopic parameters, such as the energy, the volume, and possibly further external parameters, that characterize the system.\nIn this way, from a thermodynamic point of view we do not distinguish between the states of the system represented by different points on the same energy level that are consistent with the further constraints.\nThis is just what Eq. \eqref{sigmaEdiff} does: it ``counts the number'' of microstates satisfying the macroscopic constraint $H=E$, consistently with the above-mentioned postulate.\nOn the contrary, the standard Boltzmann entropy adopts a place-dependent weight $1\/\Vert \nabla H \Vert$.}\n\nIn order to better clarify the connection between the Boltzmann entropy and the one we propose, let us perform the following rough calculation.\nFor a system with $N$ degrees of freedom, if $\Delta E \ll E$, approximately we have\n\begin{equation}\n\Omega(E+\Delta E) - \Omega(E) \approx \omega(E) \Delta E + O(\Delta E^{2})\, ,\n\end{equation}\non the other hand, by Cavalieri's principle, we have\n\begin{equation}\n\Omega(E+\Delta E ) - \Omega(E) \approx \n\left(\sigma(E)\Delta^{1\/2} \n\right) \dfrac{\Delta E}{\Delta} + O(\Delta E^{2})\n\, .\n\end{equation}\nHence it follows that\n$\n\sigma (E)\Delta^{1\/2} = \omega(E)\Delta + O(\Delta E)\n$\nand, consequently,\n\begin{equation}\n\lim_{N\to \infty} \dfrac{1}{N} \left(\ln (\sigma \Delta^{1\/2})-\ln(\omega \Delta)\n\right)\n= 0 \, .\n\label{bigN}\n\end{equation}\nThis makes evident that in the limit of a large number of degrees of freedom the proposed entropy predicts the same results as the Boltzmann entropy, whereas in the case of systems with small $N$ the two entropies differ from each other.\n\nIn order to verify our claims, we have tested the proposed entropy on two systems: the two-dimensional $\Phi^4$ model and a one-dimensional model of rotors.\n\nThe $\phi^4$ model \cite{Franzosi_PRE99,Franzosi_PRL00,BPV_PRB04,Franzosi_PRA10} is defined by the Hamiltonian\n\begin{equation}\nH = \sum_{\bf j} \dfrac{1}{2} \pi^2_{\bf j} + V(\phi)\n\label{Hphi4}\n\end{equation}\nwhere\n\begin{equation}\nV(\phi) =\sum_{\bf j} \left[ \n \dfrac{\lambda}{4!} \phi^4_{\bf j} - \dfrac{\mu^2}{2}\phi^2_{\bf j} +\n\dfrac{J}{4} \sum_{{\bf k}\in I({\bf j})} (\phi_{\bf j} - \phi_{\bf k})^2\n \right] \, ,\n\label{Vphi4}\n\end{equation}\nwhere $\pi_{\bf j}$ is the conjugate momentum of the variable $\phi_{\bf j}$ that defines the field at the ${\bf j}^{th}$ site. Here ${\bf j} = (j_1,j_2)$ denotes a site of a two-dimensional lattice and $I({\bf j})$ are the nearest-neighbour lattice sites of the ${\bf j}^{th}$ site. The coordinates of the sites are integer numbers $j_k =1,\ldots,N_k$, $k=1,2$, so that the total number of sites in the lattice is $N=N_1\,N_2$. Furthermore, periodic boundary conditions are assumed.\nThe local potential displays a double-well shape whose minima are located at $\pm \sqrt{{3! \mu^2}\/{\lambda}}$, to which corresponds the ground-state energy per particle $e_0 = - 3! \mu^4\/(2 \lambda)$.\nAt low energies the system is dominated by an ordered phase where the time averages of the local field are non-vanishing. By increasing the system energy the system undergoes a second-order phase transition and the local $\mathbb{Z}_2$ symmetry is restored. In fact, at high energies the time averages of the local field go to zero.\n\nThe second model \cite{Cerino_2015} is composed of $N$ rotators with canonical coordinates $\phi_1,\ldots,\phi_N,\pi_1,\ldots,\pi_N$ and Hamiltonian\n\begin{equation}\nH=\sum^N_{j=1} [1-\cos(\pi_j)] + \epsilon \n\sum^N_{j=1}[1-\cos(\phi_j-\phi_{j-1})] \, ,\n\label{Hrot}\n\end{equation}\nwhere it is assumed that $\phi_0 = 0$.
The form of the kinetic and potential terms in \eqref{Hrot} makes the energy bounded both from above and from below, and such a Hamiltonian implies the existence of negative Boltzmann temperatures \cite{Cerino_2015}.\n\nWe have numerically integrated the equations of motion associated with the Hamiltonians of both models, by using a third-order symplectic algorithm and starting from initial conditions corresponding to different values of the system total energy $E$. We have measured along the dynamics the time averages of the relevant quantities that appear in \eqref{betaB} and \eqref{beta}, and we have then derived the curves $\beta_B(E)$ and $\beta(E)$ for the two models.\n\begin{figure}[h]\n \includegraphics[height=5.cm]{phi4_beta-e.eps}\n\caption{The figure compares $\beta_B(E\/N)$ (dotted line) and $\beta(E\/N)$ (continuous line) numerically computed for a lattice of $128\times 128$ sites for the $\Phi^4$-model. The agreement is striking; the two curves are indistinguishable. The inset shows a zoom of the two curves.\n\label{fig1}}\n\end{figure} \n\begin{figure}[h]\n \includegraphics[height=5.cm]{rot_1d_beta-e.eps}\n\caption{The figure compares $\beta_B(E\/N)$ (dotted line) and $\beta(E\/N)$ (continuous line) numerically computed for an array of $512$ rotors. Here too the agreement is striking and the two curves are indistinguishable, so we again include an inset with a zoom of the two curves.\n\label{fig2}}\n\end{figure} \nFigs. 1 and 2 clearly show the remarkable agreement between the curves $\beta_B(E\/N)$ and $\beta(E\/N)$ for both models studied.\n\nIn conclusion, we have proposed a novel definition of the microcanonical entropy for classical systems.\nWe have shown that this definition definitively resolves the debate on the correct definition of the microcanonical entropy. In fact, we have shown that this entropy definition fixes the issue inherent in the full extensivity of the caloric equation.\nFurthermore, by investigating two different models, we have given evidence that this entropy reproduces results in agreement with the ones predicted by the standard Boltzmann entropy in the case of macroscopic systems. \nSince the differences between the predictions of the Boltzmann entropy and of the one proposed here are more evident in systems with a small number of degrees of freedom, we conclude that the Boltzmann entropy (together with ours) provides a correct description of macroscopic systems, whereas extremely small systems should be described with the entropy that we have proposed, in order to avoid, for instance, issues with the extensivity of the caloric equation.\n\n\begin{acknowledgments}\nWe are grateful to A. Smerzi and P. Buonsante for useful discussions.\n\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"
\\section{Introduction}
\\label{sec1}
Compound refractive lens assemblies (CRLs) are frequently used as objective lenses in X-ray microscopes \\cite{lengeler99}, or as upstream condensers \\cite{schroer05,vaughan11}. Both applications are extremely sensitive to lateral misalignment. The full focusing procedure of a CRL requires five degrees of freedom. Independent translations along the $x$ and $y$ axes, in addition to two rotations $r_x$ and $r_y$ about those axes, produce a lateral alignment. The fifth degree of freedom is a translation along the $z$ axis, which locates the classical position of the focus (see Figure~\\ref{hugh_image}).

Our principal motivation for this work is the task of laterally aligning CRLs at X-ray free electron laser facilities (XFELs). XFELs are a new class of X-ray sources that produce the shortest duration and brightest X-ray pulses currently attainable, paving the way for experiments that were previously not possible \\cite{Yabashi2017}. A new approach to the alignment of CRLs is necessary, in large part due to the novel amplification process used to generate X-ray pulses. At XFEL facilities, Self-Amplification of Spontaneous Emission (SASE) causes the beam position, spatial mode, propagation direction (pointing), and intensity to fluctuate stochastically \\cite{Emma2010,Schneidmiller2016}. The proper lateral alignment of focusing optics is crucial to produce the highest resolution and smallest focal spots, as required for many modern X-ray experiments.

\\begin{figure}[h]
 \\centering
 \\includegraphics[width=5cm]{CRL_diagram.eps}
 \\caption{This example diagram depicts a CRL-based imaging configuration along an optical axis $z$. A CRL is aligned to the optical axis by four independent motors.
Two motors control translations along the perpendicular $x$ and $y$ axes, and two control rotations about those axes.}
 \\label{hugh_image}
\\end{figure}

The orientation of the CRL is controlled by four motorized stages: two that translate the optic along the $x$ and $y$ axes, and two that rotate about them. To align this optic, a detector is placed behind the exit surface of the CRL to measure the transmitted X-ray beam. Typically, these sensors are either charge-coupled device (CCD) cameras or different implementations of a photodiode (e.g., an ion chamber). The beam-line scientist's task is to laterally align the CRL by maximizing the transmitted X-ray light that reaches the sensor as a function of the four positions.

In more formal language, let $f\\colon \\mathbb{R}^4 \\to \\mathbb{R}$ denote an idealized model of a noiseless, steady (in space and amplitude) X-ray transmission through a CRL as a function of the orientation. In \\cite{simons17}, Simons et al. demonstrated that, for a convex region $\\Omega \\subset \\mathbb{R}^4$, the transmission $f$ can be modelled with a Gaussian distribution. In this idealized case, the alignment procedure reduces to the straightforward convex optimization problem
\\begin{equation*}
\\min_{{\\bf x} \\in \\Omega} -f({\\bf x}).
\\end{equation*}

Given the simplicity of the problem, it is common at scientific beam-lines with more-stable amplitudes to establish lateral alignment through a simple manual process. A recent study automated this task by using a modified stochastic simplex method on the lateral alignment of CRL assemblies at synchrotron facilities \\cite{Breckling2021}. In lieu of automating, the usual approach is to perform a rough initial alignment, then select two of the four dimensions of $\\Omega$ and perform a raster scan of the transmission, logging a detector's response at each particular orientation. The ``best'' position from that 2D scan is selected, and the microcontrollers are driven to that position. The alternate dimensions are then selected, and the procedure repeats until alignment is satisfactory.

For many scientific beam-lines, this dead-reckoning approach is sufficient. Unfortunately, the SASE process for generating X-rays at XFEL facilities introduces an unpredictable time-dependent intensity drift, as well as stochastic perturbations of the beam's propagation axes. This, of course, is in addition to the usual sources of measurement noise. These complications prevent the reliable success of a direct implementation of the simplex-based approach seen in \\cite{Breckling2021}. Given that it is only possible to record X-ray transmission for a single orientation at a single moment in time, an orderly raster-like scan of the transmission at an XFEL facility is not likely to see a distribution that strongly agrees with the Gaussian model developed in \\cite{simons17}. As a result, it remains common practice to rely heavily on the intuition of the beamline scientist to interpret such scans, substantially extending the time required to produce an acceptable initial lateral alignment, and realignment. Given that time is an extremely limited resource at XFELs, an alternative technique to quickly and reliably expedite this procedure is sought.

In this paper we propose a technique to estimate the gradient of the transmission function that accounts for both time-dependent amplitude fluctuations and instrumentation noise. If successful, such a gradient could be utilized in a classic steepest descent algorithm.
Given that stochastic descent-based approaches have been successful in automating similar optical alignment and focusing tasks, including the control of directed energy sources \\cite{belen2007laboratory}, aligning line-of-sight communication arrays \\cite{Raj2010}, and the alignment of two-mirror telescopes \\cite{Li20}, we suspect that these corrections will allow for expedient and accurate alignments in our application.

Let $t \\in \\mathbb{R}^+$ represent time, $T\\colon \\mathbb{R}^+ \\to \\mathbb{R}$ be an arbitrarily smooth function which denotes the intensity of the beam over time, $\\varsigma$ be the aggregate of all additive stochastic noise, and $\\Theta$ denote stochastic perturbations to the beam's orientation. We then formulate our model of the measured transmission as $G({\\bf x},t)=-T(t)f({\\bf x}+\\Theta) + \\varsigma$. Our alignment procedure then takes the form of the optimization problem
\\begin{equation}\\label{exp_prob}
\\min_{{\\bf x} \\in \\Omega} E\\left[G({\\bf x},t)\\right], \\text{ for all } t \\geq 0,
\\end{equation}
where $E$ is the expected value. While this problem does admit an optimal solution, common stochastic steepest-descent methods are not amenable to finding it without directly addressing the amplitude fluctuations \\cite{Spall}.

We propose an approach to this problem that substitutes the usual finite difference method with one which corrects for the non-steady amplitude. The method systematically intertwines the usual spatial samples for the gradient with additional samples from a fixed central location. We demonstrate through error asymptotics and numerical benchmarks that these additional samples, when collected at a sufficient rate, can account for changes in the amplitude over time. Thus, an amplitude-corrected gradient, when paired with a standard stochastic descent algorithm, becomes well-suited for minimization problems like \\eqref{exp_prob}.

The remainder of the paper is organized as follows: We formally introduce the amplitude-correcting scheme in Section~\\ref{DiffScheme}, along with notation and asymptotic error estimates. We provide two numerical benchmarks in Section~\\ref{NumericalExperiments}. There, we first develop asymptotic error estimates for stochastic gradient descent (SGD) schemes using the amplitude correction, along with a demonstration of the resulting convergence rates. We then demonstrate the efficacy of our amplitude-correcting gradient on a modified version of the Rosenbrock valley benchmark. Section~\\ref{XFEL} outlines how our method shows promise in automating the lateral alignment of CRLs at X-ray experimental facilities. There, we provide a proof-of-concept implementation of our full optimization scheme against a synthetic cost function modelled to behave appreciably similar to one used at a genuine XFEL facility. Finally, we provide summary remarks in Section~\\ref{conclusions}.

\\section{Constructing the Amplitude-Correcting Differencing Scheme} \\label{DiffScheme}
Given a function $f\\colon \\mathbb{R}^n \\to \\mathbb{R}$, we are primarily concerned with computing estimates of $\\nabla f$. To this end, we assume a high degree of smoothness, i.e., $f$ is sufficiently G\\^{a}teaux and Fr\\'{e}chet differentiable to satisfy the necessary conditions of the estimates to follow.
The gradient of a function at a particular point ${\\bf x}_c \\in \\mathbb{R}^n$ is typically estimated by sampling that function $\\mathcal{O}(N)$ $(N \\in \\mathbb{N}, N > n)$ times in a local region around ${\\bf x}_c$. We consider an $n$-dimensional ball of radius $\\delta>0$ centered at ${\\bf x}_c$, denoted $\\mathcal{B}_\\delta({\\bf x}_c)$, and define $\\Omega$ to be an open, connected, bounded set containing $\\mathcal{B}_\\delta({\\bf x}_c)$ within its interior.

For our application, $n=4$, given the degrees of freedom for lateral alignment. Additionally, the function $f$ can only be evaluated at one particular position ${\\bf x} \\in \\mathbb{R}^n$ at a time. Sampling another position ${\\bf x}' \\in \\mathbb{R}^n$ requires a discrete amount of time $h>0$ to elapse. Given a particular starting time $t_0 > 0$, we denote the interval of time required to compute a gradient using the technique defined below to be
\\begin{equation*}
 \\mathcal{T}_{h,N} := [t_0,t_0 + (4N + 1)h].
\\end{equation*}
In the interest of brevity, we may refer to $\\mathcal{T}_{h,N}$ simply as $\\mathcal{T}$. Finally, we assume that our amplitude function $T\\colon \\mathbb{R}^+ \\to \\mathbb{R}$ is at least four-times differentiable, i.e., $T \\in \\mathcal{C}^4(\\mathcal{T})$.

Let $E$ and $V$ denote the expectation and variance of a time series over $\\mathcal{T}$. Our source of additive noise is assumed to be normal and i.i.d. such that $E(\\varsigma) = 0$ and $V(\\varsigma) = \\sigma^2$. Our smooth and additive noise-corrupted functions are written:
\\begin{eqnarray*}
	F({\\bf x},t) &=& T(t)f({\\bf x}), \\\\
	G({\\bf x},t) &=& F({\\bf x},t) + \\varsigma(t).
\\end{eqnarray*}

To organize our scheme, we arrange our sample indices serially in terms of the position in $\\mathcal{B}_\\delta({\\bf x}_c)$ and the time $t_k \\in \\mathcal{T}$. Let ${\\bf e}$ be an arbitrary unit vector in $\\mathbb{R}^n$. For the noise-free case, we write:
\\begin{center}
	\\begin{tabular}{rclcl}
		$F({\\bf x}_c,t_k)$ & = & $F_k^c$ & = & $T_k f^c$, \\\\
		$F({\\bf x}_c \\pm \\delta {\\bf e}, t_k \\pm h)$ & = & $F_{k\\pm1}^{{\\bf e}^{\\pm}} $& = & $T_{k\\pm1} f^{{\\bf e}^{\\pm}}$.
	\\end{tabular}
\\end{center}
\\noindent Similarly, our noise-corrupted case is written:
\\begin{center}
	\\begin{tabular}{rclcl}
		$G({\\bf x}_c,t_k)$ & = & $G_k^c$ & = & $F_k^c + \\varsigma_k$, \\\\
		$G({\\bf x}_c \\pm \\delta {\\bf e}, t_k \\pm h)$ & = & $G_{k\\pm1}^{{\\bf e}^{\\pm}} $& = & $F_{k\\pm1}^{{\\bf e}^{\\pm}} + \\varsigma_{k\\pm1}$,
	\\end{tabular}
\\end{center}
where ${\\bf e}$ is the unit vector in the selected direction.

We use the over-bar shorthand to denote time-averaged terms, e.g.,
\\begin{equation*}
\\bar{F}_k^c = \\frac{F_{k-1}^c + F_{k+1}^c}{2}.
\\end{equation*}
We make use of the usual norm notation, i.e., $|| \\cdot ||_2$ denotes an $L^2$ norm, though the subscript is dropped in the context of Euclidean vectors. When discussing discretized approximations to the usual gradient operator $\\nabla$, we use $\\nabla_\\delta$ to denote the uncorrected differencing scheme provided in Definition \\ref{basic_grad}, and $\\nabla_{\\delta,h}$ for the amplitude-correcting gradient estimate developed further below. Directional derivative operators and their approximations are then written as $({\\bf e} \\cdot \\nabla)$, $({\\bf e} \\cdot \\nabla_\\delta)$, and $({\\bf e} \\cdot \\nabla_{\\delta,h})$, respectively.
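Concretely, a measurement of $G$ returns one scalar per (position, time) query. A minimal Python stand-in for such a measurement process, used only for illustration (the drift $T$, the noise level, and all names here are hypothetical), is:
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def measure(f, x, t, T=lambda t: 1.0 + 0.75 * np.cos(2.0 * np.pi * t),
            sigma=1.0e-3):
    # one observation G(x, t) = T(t) f(x) + noise; one (x, t) pair per call
    return T(t) * f(x) + rng.normal(0.0, sigma)
\\end{verbatim}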
\\begin{definition}[A Linear Regression-Based Gradient Estimate]\\label{basic_grad} Let $\\delta > 0$, and $\\Omega \\subset \\mathbb{R}^n$ contain the open ball $\\mathcal{B}_{\\delta}({\\bf x}_c)$. Further, let the points $\\lbrace {\\bf x}_i \\rbrace_{i=1}^{N}$ be a collection of $N$ unique points on the surface of the ball $\\mathcal{B}_{\\delta}({\\bf x}_c)$ such that $N>n$. For a given function $f :\\Omega \\rightarrow \\mathbb{R}$, sample each point on the ball, collecting each sample in the vector ${\\bf F} = \\lbrace f_i \\rbrace_{i=1}^{N}$. Use the matrix ${\\bf X} = \\lbrace 1, {\\bf x}_i \\rbrace_{i=1}^{N}$ and corresponding samples ${\\bf F}$ to assemble the linear regression problem
\\begin{equation*}
 {\\boldsymbol \\eta} = ({\\bf X}^T {\\bf X})^{-1} {\\bf X}^T {\\bf F}.
\\end{equation*}
The solution ${\\boldsymbol \\eta} = \\lbrace \\eta_i \\rbrace_{i=1}^{n+1}$ determines the gradient estimate
\\begin{equation*}
 \\nabla f({\\bf x}_c) \\approx \\nabla_\\delta f({\\bf x}_c) =: \\lbrace \\eta_k \\rbrace_{k=2}^{n+1}.
\\end{equation*}
\\end{definition}

\\subsection{The Differencing Scheme}
\\label{sec:diffscheme}
The definition below is assembled similarly to Definition \\ref{basic_grad}, but coordinates all sampling according to a uniformly-discretized time series. An example diagram is provided in Figure \\ref{ball}. If the sampling distance $\\delta > 0$ remains uniform, it is assumed that the time required to visit each point within the sequence is uniform. While this isn't a necessary limitation in practice, this assumption simplifies the analysis provided in Appendix \\ref{AccuracyEst}.

\\begin{figure}
	\\begin{center}
		{\\bf An Example 6-Point Stencil in 2D} \\\\
		\\begin{tikzpicture}[fill = white]
		\\draw[blue!20,thick,dashed] (0,0) circle (2cm);
		\\path (0,0) node(a) [circle, draw, fill] {${\\bf x}_c$}
		(2.0,0.0) node(b) [circle, draw, fill] {${\\bf x}_1$}
		(1, 1.7320) node(c) [circle, draw, fill] {${\\bf x}_3$}
		(-1,1.7320) node(d) [circle, draw, fill] {${\\bf x}_5$}
		(-2.0,0) node(e) [circle, draw, fill] {${\\bf x}_2$}
		(-1,-1.7320)node(f) [circle, draw, fill] {${\\bf x}_4$}
		( 1,-1.7320)node(g) [circle, draw, fill] {${\\bf x}_6$};

		\\draw[blue!100,thick,-{Straight Barb[left]}] (node cs:name=a, angle=8 ) -- (node cs:name=b, angle=180-8);
		\\draw[blue!100, thick,-{Straight Barb[left]}] (node cs:name=b, angle=180+8) -- (node cs:name=a, angle=-8 );

		\\draw[purple!100,thick,-{Straight Barb[left]}] (node cs:name=a, angle=60+8 ) -- (node cs:name=c, angle=180+60-8);
		\\draw[purple!100, thick,-{Straight Barb[left]}] (node cs:name=c, angle=180+60+8) -- (node cs:name=a, angle=60-8 );

		\\draw[green!100,thick,-{Straight Barb[left]}] (node cs:name=a, angle=120+8 ) -- (node cs:name=d, angle=180+120-8);
		\\draw[green!100, thick,-{Straight Barb[left]}] (node cs:name=d, angle=180+120+8) -- (node cs:name=a, angle=120-8 );

		\\draw[blue!100,thick,-{Straight Barb[left]}] (node cs:name=a, angle=180+8 ) -- (node cs:name=e, angle=180+180-8);
		\\draw[blue!100, thick,-{Straight Barb[left]}] (node cs:name=e, angle=180+180+8) -- (node cs:name=a, angle=180-8 );

		\\draw[purple!100,thick,-{Straight Barb[left]}] (node cs:name=a, angle=240+8 ) -- (node cs:name=f, angle=180+240-8);
		\\draw[purple!100, thick,-{Straight Barb[left]}] (node cs:name=f, angle=180+240+8) -- (node cs:name=a, angle=240-8 );
		\\draw[green!100,thick,-{Straight Barb[left]}] (node cs:name=a, angle=300+8 ) -- (node cs:name=g, angle=180+300-8);
		\\draw[green!100, thick,-{Straight Barb[left]}] (node cs:name=g, angle=180+300+8) -- (node cs:name=a, angle=300-8 );
		\\end{tikzpicture}
	\\end{center}
	\\caption{\\label{ball} This depicts an example six-point ($N=3$) sampling stencil for a two-dimensional search space. The procedure requires a total of 13 samples. Begin by sampling at ${\\bf x}_c.$ Next, sample at ${\\bf x}_1$, then return and sample ${\\bf x}_c$. Repeat this process sequentially for the remaining ${\\bf x}_i$.}
\\end{figure}

\\begin{definition}[Amplitude-Correcting Gradient Estimate]\\label{def_grad} Let $\\delta, h > 0,$ and $\\Omega \\subset \\mathbb{R}^n$ contain the open ball $\\mathcal{B}_{\\delta}({\\bf x}_c)$. Let the points $\\lbrace {\\bf x}_i \\rbrace_{i=1}^{N}$ be a collection of $N$ unique points on the surface of the ball $\\mathcal{B}_{\\delta}({\\bf x}_c)$, along with the $N$ corresponding antipodal points $\\lbrace {\\bf x}_i' \\rbrace_{i=1}^N$, such that $N>n$. Let $\\mathcal{T}_{h,N}$ be the uniform discretization of the time interval $\\mathcal{T}.$ For the function $f:\\Omega \\rightarrow \\mathbb{R}$, the amplitude function $T:\\mathcal{T} \\rightarrow \\mathbb{R}$, and additive noise $\\varsigma: \\mathcal{T} \\rightarrow \\mathbb{R}$, we write the given function $G : \\Omega \\times \\mathcal{T} \\rightarrow \\mathbb{R}$ such that $G({\\bf x},t) = T(t)f({\\bf x}) + \\varsigma(t)$. Let $\\mu_T = E\\left[T\\left(\\mathcal{T}_{h,N} \\right) \\right]$, and ${\\bf e}_k^+$ be the unit vector in the direction ${\\bf x}_k - {\\bf x}_c.$ Index our uniformly discretized time-steps as $k=1,\\ldots,4N+1$. Sample ${\\bf x}_c$ at odd values of $k,$ i.e., $k = 2k'-1$, collecting each sample as $G_{k}^c.$ Time-averaging these values gives $\\bar{G}_{2k'}^c$. For even values of $k,$ i.e., $k= 2k'$, alternate sampling ${\\bf x}_{k'}$ and its antipodal counterpart ${\\bf x}_{k'}'$, collecting each sample as $G_{k}^{{\\bf e}_{k}^+}.$ We organize the matrix ${\\bf X}$ such that
\\begin{equation*}
 {\\bf X} = \\begin{bmatrix}
 1 & {\\bf x}_1 \\\\
 1 & {\\bf x}_1' \\\\
 \\vdots & \\vdots \\\\
 1 & {\\bf x}_N \\\\
 1 & {\\bf x}_N'
\\end{bmatrix},
\\end{equation*}
the sample vector ${\\bf G}$ such that
\\begin{equation*}
 {\\bf G} = \\frac{1}{\\mu_T}\\lbrace G_{2i}^{{\\bf e}_{2i}^+} - \\bar{G}_{2i}^c \\rbrace_{i=1}^{2N},
\\end{equation*}
and the linear regression problem
\\begin{equation*}
 {\\boldsymbol \\eta} = ({\\bf X}^T {\\bf X})^{-1} {\\bf X}^T {\\bf G}.
\\end{equation*}
The solution ${\\boldsymbol \\eta} = \\lbrace \\eta_i \\rbrace_{i=1}^{n+1}$ determines our gradient estimate
\\begin{equation*}
 \\nabla f({\\bf x}_c) \\approx \\nabla_{\\delta,h}G({\\bf x}_c, \\mathcal{T}) =: \\lbrace \\eta_i \\rbrace_{i=2}^{n+1}.
\\end{equation*}
\\end{definition}
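A compact Python sketch of Definition \\ref{def_grad} may help clarify the sampling order; it is a sketch under the assumptions above ($\\mu_T$ known, one measurement per call to \\texttt{G}), not a reference implementation.
\\begin{verbatim}
import numpy as np

def amp_corrected_gradient(G, xc, points, h, t0, mu_T):
    # points: N stencil points on the sphere about xc (N > n);
    # their antipodes are generated here, giving 4N+1 samples in total
    xc = np.asarray(xc, dtype=float)
    seq = []
    for p in points:
        p = np.asarray(p, dtype=float)
        seq += [p, 2.0 * xc - p]           # x_k, then its antipode x_k'
    t = t0
    center = [G(xc, t)]                    # odd time steps: sample xc
    off = []                               # even time steps: sample the sphere
    for p in seq:
        t += h; off.append(G(p, t))
        t += h; center.append(G(xc, t))
    rows, rhs = [], []
    for i, p in enumerate(seq):
        gbar = 0.5 * (center[i] + center[i + 1])   # time-averaged center value
        rows.append(np.concatenate(([1.0], p)))
        rhs.append((off[i] - gbar) / mu_T)
    eta, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return eta[1:]                         # drop the intercept
\\end{verbatim}
The least-squares solve plays the role of $({\\bf X}^T{\\bf X})^{-1}{\\bf X}^T{\\bf G}$; omitting the centered samples (and the division by $\\mu_T$) recovers the uncorrected estimate of Definition \\ref{basic_grad}.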
Using the following short-hand for the standard central-differencing stencil (in spatial coordinates), directional derivatives can be written
\\begin{equation*}
\\left({\\bf e}_k \\cdot \\nabla_{\\delta}\\right)f({\\bf x}_c) = \\frac{1}{2\\delta} \\left(f^{{\\bf e}_k^+} - f^{{\\bf e}_k^-}\\right).
\\end{equation*}
Our convention of selecting antipodal points in sequence allows us to utilize these directional derivative stencils directly. Since each observation of $G({\\bf x},t)$ results in an independent noise term $\\varsigma$, combining like terms results in
\\begin{align*}
	\\left({\\bf e}_k \\cdot \\nabla_{\\delta,h}\\right)G({\\bf x}_c,\\mathcal{T}) = \\frac{1}{2\\mu_T\\delta} & \\left[\\left(G_{2k}^{{\\bf e}_k^+} - \\bar{G}_{2k}^{c} \\right) - \\left(G_{2k+2}^{{\\bf e}_k^-} - \\bar{G}_{2k+2}^{c}\\right)\\right] \\\\
	 = \\frac{1}{2\\mu_T\\delta} & \\Big[\\left(F_{2k}^{{\\bf e}_k^+} - \\bar{F}_{2k}^{c} \\right) - \\left(F_{2k+2}^{{\\bf e}_k^-} - \\bar{F}_{2k+2}^{c}\\right) \\\\
	 & + \\frac{\\varsigma_{1,k}}{2} + \\varsigma_{2,k} + \\varsigma_{3,k} + \\frac{\\varsigma_{4,k}}{2}\\Big].
\\end{align*}

\\begin{thm}[Error Estimate on Noise-Free Functions]\\label{NoiseFreeThm} Let $F({\\bf x},t) = T(t)f({\\bf x})$ where $T$ and $f$ are at least $\\mathcal{C}^4(\\mathcal{T})$ and $\\mathcal{C}^3(\\Omega)$, respectively. We sample $N$ antipodal pairs such that the resulting sampling is unbiased and quasi-uniform. For ${\\bf x}_c \\in \\Omega$, and $\\delta > 0$ such that $\\mathcal{B}_\\delta({\\bf x}_c)$ is in the interior of $\\Omega$, we let ${\\bf e}_k$ be the unit vector associated with the $k^{th}$ antipodal pair of points. Further, we let $\\mu_T$ be the known expectation of $T(t)$ over $\\mathcal{T}$. Selecting $h$ such that $h^3 < \\delta$ guarantees that there exists a constant $C^*(\\delta, h, N, T,T',T^{(4)},f, \\nabla f) > 0$ such that
	\\begin{equation*}
	\\left|\\left| \\nabla f({\\bf x}_c) - \\nabla_{\\delta,h} F({\\bf x}_c,\\mathcal{T})\\right|\\right| \\leq C^* \\left(h + \\delta^2 \\right).
	\\end{equation*}
\\end{thm}

\\noindent A similar result is provided for the case when additive i.i.d. noise is present.

\\begin{thm}[Error Estimate on Noisy Functions]\\label{NoisyThm} Let $G({\\bf x},t) = F({\\bf x},t) + \\varsigma(t)$. Under the same assumptions as Theorem \\ref{NoiseFreeThm}, the total contribution of error from stochastic sources can be written
	\\begin{eqnarray*}
		{\\varepsilon} &:=& \\nabla_{\\delta,h} G({\\bf x}_c, \\mathcal{T}) - \\nabla_{\\delta,h} F({\\bf x}_c, \\mathcal{T}) \\\\
		& =& \\left\\lbrace \\sum_{k=1}^N \\frac{\\hat{{\\bf e}}_i^T \\cdot {\\bf e}_k}{2 \\mu_T N \\delta} \\left[\\frac{\\varsigma_{1,k}}{2} + \\varsigma_{2,k} + \\varsigma_{3,k} + \\frac{\\varsigma_{4,k}}{2}\\right]\\right\\rbrace_{i=1}^n.
	\\end{eqnarray*}
	Then it follows that
	\\begin{equation*}
	 E\\left[||\\varepsilon||\\right] \\leq 4 \\frac{\\sigma}{\\mu_T \\delta} \\sqrt{\\frac{n}{N}},
	\\end{equation*}
	and for $p \\in (0,1)$
	\\begin{equation*}
	 \\mathcal{P} \\left[ ||\\varepsilon|| \\leq 4 \\frac{\\sigma}{\\mu_T \\delta} + 2 \\frac{\\sigma}{\\mu_T \\delta}\\sqrt{\\frac{\\log(1\/p)}{N}}\\right] \\geq 1-p.
	\\end{equation*}
\\end{thm}

\\section{Numerical Demonstrations} \\label{NumericalExperiments}
Given that our motivation is to employ the amplitude-correcting gradient in steepest descent methods, our demonstrations will focus on that application. We begin by presenting two accelerated versions of the classic SGD algorithm, differing only by which gradient estimation technique is utilized. A full discussion on proper choices for $\\alpha$ and $\\beta$ can be found in \\cite{Nesterov}.

\\begin{alg}[Accelerated SGD] \\label{Alg2} Choose a suitable initial condition ${\\bf x}_0 \\in \\mathbb{R}^n$, step-size $\\alpha_i > 0$, $\\alpha_i \\rightarrow 0$ as $i \\rightarrow \\infty$, and $\\beta \\in [0,1)$.
Additionally, choose a radius $\\delta > 0$ for the gradient estimator. Indexing our steps with $i=0,1,\\ldots$ we proceed such that
	\\begin{eqnarray*}
		{\\bf y}_{i+1} &=& \\beta {\\bf y}_i - \\nabla_{\\delta}f({\\bf x}_{i+1}), \\\\
		{\\bf x}_{i+1} &=& {\\bf x}_i - \\alpha_i {\\bf y}_{i+1}.
	\\end{eqnarray*}
\\end{alg}

\\begin{alg}[Dynamic Amplitude-Corrected Accelerated SGD] \\label{Alg1} Choose a suitable initial condition ${\\bf x}_0 \\in \\mathbb{R}^n$, step-size $\\alpha_i > 0$, $\\alpha_i \\rightarrow 0$ as $i \\rightarrow \\infty$, and $\\beta \\in [0,1)$. Additionally, prescribe a spatial radius and time-step $\\delta, h > 0$ for the gradient estimator. Indexing our steps with $i=0,1,\\ldots$ we proceed such that
	\\begin{eqnarray*}
		{\\bf y}_{i+1} &=& \\beta {\\bf y}_i - \\nabla_{\\delta,h}G({\\bf x}_{i+1},t), \\\\
		{\\bf x}_{i+1} &=& {\\bf x}_i - \\alpha_i {\\bf y}_{i+1}.
	\\end{eqnarray*}
\\end{alg}

In our first demonstration, we seek a direct comparison of the classic SGD algorithm with the amplitude-correcting version. In order for such a comparison to be salient, we consider two functions: Rosenbrock's valley with and without a time-varying amplitude. We then demonstrate, for well-selected parameters, that Algorithm \\ref{Alg2}'s performance on the steady-amplitude function qualitatively matches Algorithm \\ref{Alg1}'s performance on the non-steady version. When both simulations are successful against minimization problems that are otherwise formulated identically, we can conclude that the amplitude corrections encoded into the online gradient estimate effectively overcome the variations.

In the second numerical experiment, we show that the error asymptotics provided in Theorems \\ref{NoiseFreeThm} and \\ref{NoisyThm} can be seen in SGD executions. We cite two theorems that respectively provide sufficient conditions for the convergence of Algorithm \\ref{Alg2} with probability 1, and asymptotic error estimates. We then construct a noisy, time-varying function that otherwise adheres to those conditions, and prove that well-selected parameters guarantee that Algorithm \\ref{Alg1} also converges. This is numerically verified by isolating each source of error to see if the analytic rates match those encountered numerically.

\\subsection{A Quake in Rosenbrock's Valley}
Rosenbrock's Valley \\cite{Rosenbrock} is a polynomial on $\\mathbb{R}^2$ defined as
\\begin{equation}\\label{rosenbrock}
f(x,y) = (1-x)^2 + 100(y-x^2)^2.
\\end{equation}
This polynomial has a global minimum value of $f(1,1) = 0$, and is locally convex around that point. However, the downward slope along the minimal ridge is quite low in the parabolic valley. It is this feature that made Rosenbrock's Valley a popular benchmark, since many steepest descent algorithms tend to reach the ridge quite quickly, but struggle to reach the optimal answer due to the oscillations spurred by the large values of $|\\nabla f(x,y)|$ for $(x,y)$ not precisely on the ridge path. In the interest of clarity, we will refer to these as \\emph{spatial oscillations}.

The classic benchmark nonlinear programming problem is typically presented as
\\begin{equation}\\label{nlprog1}
{\\bf x}^* = \\text{argmin}_{{\\bf x} \\in \\mathbb{R}^2} f(x,y).
\\end{equation}
We complicate matters by including the amplitude function $T(t)$ such that
\\begin{equation}\\label{nlprog2}
{\\bf x}^* = \\text{argmin}_{{\\bf x} \\in \\mathbb{R}^2} E\\left[T(t)f(x,y)\\right],
\\end{equation}
where
\\begin{equation}\\label{dynamic_amp_T1}
T(t) = 1 + \\frac{3}{4} \\cos{(2 \\pi t)}.
\\end{equation}
Again, for the sake of clarity, we shall refer to oscillations caused by a dynamic amplitude like \\eqref{dynamic_amp_T1} as \\emph{temporal oscillations}.
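The temporally oscillating objective \\eqref{nlprog2} can be scripted directly. The following noise-free Python sketch (names are illustrative) is what the experiments below sample through the stencil of Definition \\ref{def_grad}:
\\begin{verbatim}
import numpy as np

def rosenbrock(x):
    # the classic valley, with global minimum f(1, 1) = 0
    return (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

def T(t):
    # the temporal oscillation of the amplitude
    return 1.0 + 0.75 * np.cos(2.0 * np.pi * t)

def G(x, t):
    # the temporally oscillating objective
    return T(t) * rosenbrock(x)
\\end{verbatim}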
In our first experiment, we attempt to solve the temporally oscillating problem \\eqref{nlprog2} with the standard gradient descent method (Algorithm \\ref{Alg2}). We initialize at ${\\bf x}_0 = (-1.2,1)$, fix $\\alpha_i = \\delta = 1\/500$ and $\\beta = 0$, enforce a step-size maximum $||{\\bf x}_{i+1} - {\\bf x}_{i}|| \\leq 1 \/ 4$, and set a maximum iteration count of $i_{\\text{max}} = 1200$. The gradient is computed by a uniform sampling of $N=15$ antipodal pairs. In Figure \\ref{rosenbrockfigs_noise}, we see that the gradient estimates are erroneous far beyond what can be tolerated by the standard algorithm. The figure only depicts steps up to $i_{200}$, since the full path eventually diverges. Increasing the momentum value $\\beta$ has no appreciable impact on this outcome.

\\begin{figure}
	\\begin{center}
		\\includegraphics[width=10cm]{diverge.eps} \\\\
		\\hspace{0.5cm} 0 \\ \\includegraphics[width=6cm]{colorbar.eps} \\ 1800
	\\end{center}
	\\caption{\\label{rosenbrockfigs_noise}This figure demonstrates the failure of Algorithm \\ref{Alg2} to solve the temporally-fluctuating problem \\eqref{nlprog2}. Our plot only considers the first 200 steps, due to an eventual divergence. Each step is depicted by a red dot, connected in sequence by a white line. The white cross in each plot depicts the optimal solution at $(1,1)$. The spatial coordinates and color axis are all non-dimensionalized.}
\\end{figure}

In the second experiment, we seek to demonstrate that our amplitude-correcting gradient estimate is effective in overcoming the temporal oscillations imposed by \\eqref{dynamic_amp_T1}. We accomplish this by comparing the performance of Algorithm \\ref{Alg1}, which utilizes the dynamic amplitude correction, on the temporally-oscillating problem \\eqref{nlprog2} to the performance of the classic gradient descent method in Algorithm \\ref{Alg2} on the non-temporally-oscillating problem in \\eqref{nlprog1}. For each execution we initialize at ${\\bf x}_0 = (-1.2,1)$, select $\\alpha_i = \\delta = 1\/500$ and $\\beta = 0$, enforce a step-size maximum $||{\\bf x}_{i+1} - {\\bf x}_{i}|| \\leq 1 \/ 4$, and set a maximum iteration count of $i_{\\text{max}} = 1200$. The gradient is computed by a uniform sampling of $N=15$ antipodal pairs. In the temporally oscillating problem, we prescribe a time-step of $h = 1\/16$. We provide comparisons with, and without, momentum in Figure~\\ref{rosenbrockfig}.

In the first row of Figure \\ref{rosenbrockfig} we see that without momentum ($\\beta = 0$) neither implementation manages to overcome the spatial oscillations. By the iteration count $i_{\\text{max}} = 1200$, both executions seem to terminate in roughly the same position.
In the second row, we see that when momentum is included, both methods overcome the spatial oscillations and reach the global minimum position. When considering the apparent qualitative similarity between these outcomes, in conjunction with the failure demonstrated in Figure \\ref{rosenbrockfigs_noise}, we posit that the amplitude corrections are effective in mitigating the temporal oscillations imposed on \\eqref{nlprog2}.\n\n\\begin{figure} \n\t\\begin{center}\n\t\t\\includegraphics[width=10cm]{comparison_plot.eps}\\\\\n\t\t0 \\ \\includegraphics[width = 6cm]{colorbar.eps} \\ 600\n\t\\end{center}\n\t\\caption{\\label{rosenbrockfig} The top row compares the results from Algorithms \\ref{Alg1} and \\ref{Alg2} to problems \\eqref{nlprog2} and \\eqref{nlprog1} respectively, with no momentum term ($\\beta = 0$). The bottom row makes the same comparison, but selects a momentum term $\\beta = 0.75$. Each step is depicted by a red dot, connected in sequence by a white line. The white cross in each plot depicts the optimal solution at $(1,1)$. The spatial coordinates and color axis are again all non-dimensionalized.}\n\\end{figure}\n\n\\subsection{A Convergence Study}\nThe following theorem provides conditions sufficient for the convergence of the standard differencing gradient in SGD (Algorithm \\ref{Alg2}), as well as an error estimate. Proof can be found in \\cite{Nguyen2019}.\n\n\\begin{thm}[Convergence of SGD with Probability One]\\label{sgd_converge} Under the following assumptions,\n\\begin{enumerate}\n \\item[{\\bf 1.)}] The objective function $f: \\mathbb{R}^n \\rightarrow \\mathbb{R}$ is $\\mu-$strongly convex, i.e., there exists $\\mu > 0$ such that\n \\begin{equation*}\n f({\\bf x}) - f({\\bf x}') \\geq \\nabla f({\\bf x}')^T \\cdot ({\\bf x} - {\\bf x}') + \\frac{\\mu}{2}||{\\bf x} - {\\bf x}'||^2.\n \\end{equation*} \n \\item[{\\bf 2.)}] For particular realizations of ${\\bf \\varsigma}$, the noise-corrupted objective function $\\hat{f}({\\bf x}) = f({\\bf x}) + \\varsigma$ is L-smooth, i.e., there exists an $L > 0$ such that for any ${\\bf x}',{\\bf x} \\in \\mathbb{R}^n$,\n \\begin{equation*}\n ||\\nabla_\\delta \\hat{f}({\\bf x}) - \\nabla_\\delta \\hat{f}({\\bf x}')|| \\leq L||{\\bf x}-{\\bf x}'||.\n \\end{equation*}\n \\item[{\\bf 3.)}] The noise-corrupted cost function $\\hat{f}$ is convex for every realization of $\\varsigma$, i.e., for any ${\\bf x}, {\\bf x}' \\in \\mathbb{R}^n$\n \\begin{equation*}\n \\hat{f}({\\bf x}) - \\hat{f}({\\bf x}') \\geq \\nabla_\\delta \\hat{f}({\\bf x}')^T \\cdot ({\\bf x}-{\\bf x}').\n \\end{equation*}\n\\end{enumerate}\nThen considering Algorithm \\ref{Alg2} with step sizes \n\\begin{equation*}\n 0 < \\alpha_i < \\frac{1}{2L}, \\ \\sum_{i=0}^\\infty \\alpha_i = \\infty \\ \\text{and} \\ \\sum_{i=0}^{\\infty} \\alpha_i^2 < \\infty,\n\\end{equation*}\nthe following holds with probability 1 (almost surely)\n\\begin{equation*}\n ||{\\bf x} - {\\bf x}^* ||^2 \\rightarrow 0,\n\\end{equation*}\nwhere ${\\bf x}^* = \\text{\\emph{argmin}}_{{\\bf x} \\in \\mathbb{R}^n} f({\\bf x})$.\n\\end{thm}\n\nThe following result presents convergence of the stochastic gradient descent method in terms of the error seen in the gradient estimates of the cost function. \n\\begin{cor}\\label{sgd_conv_rate} Under the same assumptions of Theorem \\ref{sgd_converge}, let $\\mathcal{E} = \\frac{4L}{\\mu}$. 
Initialize Algorithm \\ref{Alg2} with step size $\\alpha_i = \\frac{2}{\\mu(t+\\mathcal{E})} \\leq \\alpha_0 = \\frac{1}{2L}.$ Then,
\\begin{equation*}
 E\\left[||{\\bf x} - {\\bf x}^* ||^2 \\right] \\leq \\frac{16M}{\\mu^2} \\frac{1}{(t-\\tau+\\mathcal{E})},
\\end{equation*}
for
\\begin{equation*}
 t \\geq \\tau = \\frac{4L}{\\mu} \\text{\\emph{max}}\\left\\lbrace \\frac{L\\mu}{M}||{\\bf x}_0 - {\\bf x}^*||^2,1 \\right\\rbrace - \\frac{4L}{\\mu},
\\end{equation*}
where $M = 2E\\left[||\\nabla_\\delta \\hat{f}({\\bf x}^*) ||^2\\right]$ and ${\\bf x}^* = \\text{\\emph{argmin}}_{{\\bf x} \\in \\mathbb{R}^n} \\hat{f}({\\bf x})$.
\\end{cor}

We now look to numerically verify the convergence of Algorithm \\ref{Alg1} on the problem
\\begin{equation}\\label{last_nlp}
 \\min_{{\\bf x} \\in \\mathbb{R}^3} E\\left[G({\\bf x},t)\\right],
\\end{equation}
where the cost function is
\\begin{equation}\\label{cost_f}
 G({\\bf x},t) = -\\left(1 + \\frac{3}{4}\\cos{\\left(2\\sqrt{2} \\pi t\\right)}\\right)\\left({\\bf x}^T \\mathbf{\\Sigma} {\\bf x} \\right) + \\varsigma(t),
\\end{equation}
with $\\mathbf{\\Sigma}$ given by
\\begin{equation*}
 \\mathbf{\\Sigma} = \\begin{bmatrix}
 2 & -0.5 & 0 \\\\
 -0.5 & 2 & -0.5 \\\\
 0 & -0.5 & 2
\\end{bmatrix}.
\\end{equation*}

We proceed by first demonstrating that an \\emph{a priori} accuracy of the SGD algorithm can be written in terms of the asymptotic error of our gradient estimate developed in Theorem \\ref{NoisyThm}. This allows us to formalize a parameterization of the error developed in Algorithm \\ref{Alg1} as a function of $\\delta, h, \\sigma,$ and $N,$ and to test the error rates. As before, the additive noise term $\\varsigma(t)$ is i.i.d. and $\\mathcal{N}(0,\\sigma)$. The asymptotic error of the dynamic amplitude-correcting gradient estimates of $G$ is given, in expectation, in Corollary \\ref{grad_cor}. The proof of the following comes directly from Theorems \\ref{NoiseFreeThm}, \\ref{NoisyThm}, and Young's inequality.

\\begin{cor} \\label{grad_cor} The gradient of the cost function $G({\\bf x},t)$ in \\eqref{cost_f} can be estimated such that, given $\\sigma, \\delta, h> 0$ with $h^3 < \\delta$, there exist positive constants $c_1, c_2,$ and $c_3$ such that
\\begin{equation*}
 E\\left[||\\nabla_{\\delta,h}G({\\bf 0},t)||^2\\right] \\leq c_1(1 + N^2)h^2 + c_2 \\delta^4 + c_3 \\frac{1}{N} \\frac{\\sigma^2}{\\delta^2}.
\\end{equation*}
\\end{cor}

When we ignore the time-dependent amplitude of the cost function $G$ from \\eqref{cost_f}, we note that it was constructed to satisfy the assumptions of Theorem \\ref{sgd_converge}. In particular, Assumption 1 is satisfied with $\\mu = 2.$ In the calculations to follow, the initial position is ${\\bf x}_0 := (1,1,1)$; hence the total distance we intend Algorithm \\ref{Alg1} to travel is $|| {\\bf x}_0 - {\\bf 0}|| = \\sqrt{3}$. We also note that since $L$ is sensitive to $\\varsigma(t)$, it is not precisely known. For appropriately converging step-sizes $\\lbrace \\alpha_i \\rbrace_{i = 1}^{\\infty}$, we will see
\\begin{equation*}
 || {\\bf x}_i - {\\bf 0}||^2\\rightarrow 0
\\end{equation*}
with probability 1, and when
\\begin{equation*}
 i > \\tau = \\max \\left\\lbrace \\frac{2\\sqrt{3}L^2}{E\\left[||\\nabla_{\\delta,h}G({\\bf 0},t)||^2\\right]}, L \\right\\rbrace,
\\end{equation*}
we see
\\begin{eqnarray} \\label{alg_Error}
 E \\left[|| {\\bf x}_i - {\\bf 0}||^2 \\right] & \\leq & E\\left[||\\nabla_{\\delta,h}G({\\bf 0},t)||^2\\right] \\frac{1}{i - \\tau} \\nonumber
 \\\\ & \\leq & c_1(1 + N^2)h^2 + c_2 \\delta^4 + c_3 \\frac{1}{N} \\frac{\\sigma^2}{\\delta^2}.
\\end{eqnarray}
\\noindent Thus, for steps $i > \\tau$, the error seen in \\eqref{alg_Error} is proportional to that seen for the gradient estimate in Corollary \\ref{grad_cor}.
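The Monte Carlo studies below sample \\eqref{cost_f} through the stencil of Definition \\ref{def_grad}. For reference, a direct Python transcription of this objective reads as follows (the seed and the default noise level are placeholders; each study prescribes its own $\\sigma$):
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

Sigma = np.array([[ 2.0, -0.5,  0.0],
                  [-0.5,  2.0, -0.5],
                  [ 0.0, -0.5,  2.0]])

def G(x, t, sigma=1.0e-5):
    # oscillating amplitude times the quadratic form, plus additive noise
    amp = 1.0 + 0.75 * np.cos(2.0 * np.sqrt(2.0) * np.pi * t)
    return -amp * (x @ Sigma @ x) + rng.normal(0.0, sigma)
\\end{verbatim}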
These error estimates are verified in a series of Monte Carlo studies. For each parametrization, we store the results from 30 executions of Algorithm \\ref{Alg1} in
\\begin{equation*}
 \\text{err}(\\delta,h,\\sigma,N) = \\left\\lbrace||{\\bf x}_{i_{\\text{max}}, k}||^2 \\right\\rbrace_{k=1}^{30}.
\\end{equation*}
In the first experiment, we fix $\\delta$, $\\sigma$, and $N$ such that their contributions to the error in \\eqref{alg_Error} are several orders of magnitude below our choices for $h$. We further assume that $L \\approx ||\\mathbf{\\Sigma}||_2 = 2 + \\sqrt{2}\/2,$ which gives, for $h$ sufficiently small, that our critical algorithm step $\\tau$ is $\\mathcal{O}(h^{-2}).$ Selecting a fixed step-size $\\alpha_i = \\delta$, with a fixed stopping point, trivially satisfies the convergence requirements of Theorem \\ref{sgd_converge}. In addition, given our estimate of $\\tau$ and the minimum travel distance required, selecting $N=5$ and $\\delta = 1\/100$, we find a choice of $i_{\\text{max}} = 500$ to be appropriate. The results of this test confirm the \\emph{a priori} rate estimate of $\\mathcal{O}(h^2)$, and are presented in terms of the average result over the 30 simulations in Table \\ref{h_conv_sim}.
\\begin{table}
\\caption{\\label{h_conv_sim} Monte Carlo simulations were used to estimate the accuracy of 500 steps from Algorithm \\ref{Alg1}, in terms of $h$. Corollary \\ref{grad_cor} suggests we should see error converge at a rate of 2. We fix $N$=5, $\\delta$ = 1\/100, and $\\sigma$ = 1E-5. The other sources of error begin to dominate for choices of $h \\leq 1\/512$.}
\\begin{center}
\\begin{tabular}{|ccc|}
 \\hline
 $h$ & $\\text{AVG}\\left(\\text{err}(h)\\right)$ & Rate \\\\
 \\hline
 1\/16& 1.7E-4 & - \\\\
 1\/32& 3.2E-5 & 2.41 \\\\
 1\/64& 2.2E-6 & 3.87 \\\\
 1\/128& 5.2E-7 & 2.07 \\\\
 1\/256& 1.1E-7 & 2.28 \\\\
 1\/512& 3.7E-8 & 1.53 \\\\
 \\hline
\\end{tabular}
\\end{center}
\\end{table}

We proceed similarly in the second experiment. We fix $h = 1\/1024$, $N=10$, and $\\delta=1\/100$, varying $\\sigma$. Noting that the smallest choice for $\\sigma$ is $1\/2560$, we again estimate the critical step as $\\tau = 500$. The optimal rates are observed and presented in Table \\ref{sig_conv_sim}.

\\begin{table}
\\caption{\\label{sig_conv_sim} Monte Carlo simulations were used to estimate the accuracy of 500 steps from Algorithm \\ref{Alg1}, in terms of $\\sigma$. We fix $N$=10, $\\delta$ = 1\/100, and $h$ = 1\/1024. Corollary \\ref{grad_cor} suggests we should see error converge at a rate of 2.
The other sources of error begin to dominate for choices of $\\sigma \\leq 1\/2560$.}
\\begin{center}
\\begin{tabular}{|ccc|}
 \\hline
 $\\sigma$ & $\\text{AVG}\\left(\\text{err}(\\sigma)\\right)$ & Rate \\\\
 \\hline
 1\/80 & 4.0E-4 & - \\\\
 1\/160 & 1.2E-4 & 1.73 \\\\
 1\/320 & 4.0E-5 & 2.04 \\\\
 1\/640 & 9.7E-6 & 2.03 \\\\
 1\/1280 & 2.1E-6 & 2.19 \\\\
 1\/2560 & 5.8E-7 & 1.87 \\\\
 \\hline
\\end{tabular}
\\end{center}
\\end{table}

For the third experiment, we fix $N$=256, $\\sigma$ = 1\/2048, and $h$ = 1\/2048, varying $\\delta.$ We maintain our choice of $i_{\\text{max}}$ = 500, presenting the results in Table \\ref{delta_conv_sim}. We see rates comparable to the expected $\\mathcal{O}(\\delta^4)$ rate.

\\begin{table}
\\caption{\\label{delta_conv_sim} Monte Carlo simulations were used to estimate the accuracy of 500 steps from Algorithm \\ref{Alg1}, in terms of $\\delta$. We fix $N$=256, $\\sigma$ = 1\/2048, and $h$ = 1\/2048. Corollary \\ref{grad_cor} suggests we should see error converge at a rate of 4. The other sources of error begin to dominate for choices of $\\delta \\leq 1\/100.$}
\\begin{center}
\\begin{tabular}{|ccc|}
 \\hline
 $\\delta$ & $\\text{AVG}\\left(\\text{err}(\\delta)\\right)$ & Rate \\\\
 \\hline
 0.300 & 5.4E-2 & - \\\\
 0.210 & 1.98E-2 & 2.90 \\\\
 0.149 & 5.480E-3 & 3.70 \\\\
 0.105 & 1.38E-3 & 3.98 \\\\
 0.074 & 3.32E-4 & 4.11 \\\\
 \\hline
\\end{tabular}
\\end{center}
\\end{table}

In our final experiment, we test the linear convergence rate in the sampling count parameter $N.$ We fix $h$ = 1\/1024, $\\delta$ = 1\/100, and $\\sigma$ = 0.64, varying $N$, and select $i_{\\text{max}} = 500$. In Table \\ref{N_conv_sim} we see convergence at a rate slightly better than the expected $\\mathcal{O}(N^{-1})$ rate.

\\begin{table}
\\caption{\\label{N_conv_sim} Monte Carlo simulations were used to estimate the accuracy of 500 steps from Algorithm \\ref{Alg1}, in terms of the inverse sample count $N^{-1}$. We fix $\\delta$ = 1\/100, $h$ = 1\/1024, and $\\sigma = $0.64. Corollary \\ref{grad_cor} suggests we should see error converge at a rate of 1.}
\\begin{center}
\\begin{tabular}{|ccc|}
 \\hline
 $N$ & $\\text{AVG}\\left(\\text{err}(N)\\right)$ & Rate \\\\
 \\hline
 8 & 6.2E-1 & - \\\\
 16 & 3.6E-1 & 0.77 \\\\
 32 & 1.7E-1 & 1.05 \\\\
 64 & 1.0E-2 & 1.86 \\\\
 128 & 3.3E-3 & 1.67 \\\\
 256 & 1.1E-3 & 1.58 \\\\
 \\hline
\\end{tabular}
\\end{center}
\\end{table}

\\section{Compound Refractive Lens Alignment on a Simulated XFEL Experimental Beamline}\\label{XFEL}
What follows is a proof-of-concept implementation of Algorithm \\ref{Alg1}, simulating the alignment of a CRL assembly on a scientific beam-line with a highly dynamic intensity. We begin by developing a model X-ray transmission function from data collected at the Advanced Photon Source (APS) at Argonne National Laboratory \\cite{Breckling2021}. This steady-amplitude model is then augmented with a time-dependent intensity function recorded during an experiment performed at the Pohang Accelerator Laboratory's XFEL facility (PAL-XFEL).

Given that access to XFEL beam-lines is competitive and limited, our goal is to demonstrate the feasibility of our amplitude-correcting SGD approach to overcome the beam intensity fluctuations inherent to XFEL facilities.
We break this effort into two parts: the construction of our model cost function, and the results of our implementation of Algorithm \\ref{Alg1} using that cost function in settings similar to those seen at PAL-XFEL.

\\subsection{Developing a Model Cost Function}
Let $\\Omega_{max} \\subset \\mathbb{R}^4$ denote the travel limits for the four stepper motors that determine the orientation of the CRL. For a given orientation ${\\bf x} = (x,y,r_x,r_y) \\in \\Omega$, let the resulting image deposited on the detector panel be denoted as $I({\\bf x}),$ or simply $I$ when convenient; see Figure \\ref{hugh_image}. Further, we describe the position of a given pixel by its indices $I_{i,j}.$ Example detector images are shown in Figures \\ref{sensor_imgs}(a) and (b).

Let $\\xi(I)$, $\\mu(I)$ and $\\sigma(I)$ denote the median, mean, and standard deviation, respectively, of the pixel values of the image $I$. We then constrain $I$ to a selected region of interest (ROI) defined as
$$\\hat{I}_M := \\left\\lbrace I_{i,j} \\in I \\ \\Big| \\ |I_{i,j}-\\xi(I)| > M \\times \\sigma(I) \\right\\rbrace,$$
where $M > 0$ is a user-selected threshold parameter. In practice, we found $M=2$ to be a good choice. Figures \\ref{sensor_imgs}(c) and (d) highlight the corresponding ROIs, $\\hat{I}_M.$

\\begin{figure}
 \\begin{center}
 \\includegraphics[width=2cm]{not_well_aligned.eps} \\
 \\includegraphics[width=2cm]{well_aligned.eps} \\
 \\includegraphics[height=5.75cm]{vert_colorbar.eps} \\hspace{1cm}
 \\includegraphics[width=2cm]{not_well_aligned_support.eps} \\
 \\includegraphics[width=2cm]{well_aligned_support.eps} \\
 \\includegraphics[height=5.75cm]{vert_binarybar.eps}
 \\caption{Figure (a) is a cropped region collected from the imaging sensor when the CRL was poorly aligned. Figure (b) is the same cropped region, but shows the result from a well-aligned CRL. The images are shown on the same color axis, after feature normalizing against the maximum pixel value recorded. Figures (c) and (d) are binary images depicting the pixels identified in the ROI for Figures (a) and (b), respectively. \\label{sensor_imgs}}
 \\end{center}
\\end{figure}

In the synchrotron experiments performed at the Advanced Photon Source in \\cite{Breckling2021}, a set of coordinates found by manual alignment was defined as the ground truth providing the ``well-aligned'' position of the CRL. We denote that position as ${\\bf x}^* = (x^*, y^*, r_x^*, r_y^*)$. This ground truth served two purposes. First, it allowed us to define a feature scaling such that our metric of X-ray transmission, in terms of the CRL orientation,
\\begin{equation}\\label{crude_cost}
 f({\\bf x}; M) := \\mu\\left(\\hat{I}_{M}({\\bf x})\\right),
\\end{equation}
had a maximal value of 1. Second, the ground-truth position allowed us to establish a four-dimensional rectangular region $\\hat{\\Omega} \\subset \\Omega_{max}$ around the best point that contained the support of $f$ above the noise floor. With this ground truth and 4D window, we then collected several raster scans of $f(\\hat{\\Omega}; M=2)$. We make use of a full four-dimensional scan, and two high-resolution, independent, 2-dimensional raster scans of $\\hat{\\Omega}$.
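Concretely, the metric \\eqref{crude_cost} can be sketched in a few lines of Python; the fallback value for an empty ROI is our own assumption, and names are illustrative.
\\begin{verbatim}
import numpy as np

def roi_metric(I, M=2.0):
    # ROI: pixels more than M standard deviations from the image median
    mask = np.abs(I - np.median(I)) > M * np.std(I)
    # mean pixel value over the ROI; 0.0 if the ROI is empty (an assumption)
    return I[mask].mean() if mask.any() else 0.0
\\end{verbatim}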
\\begin{figure}
 \\begin{center}
 \\includegraphics[width=4cm]{XrY_raster.eps} \\
 \\includegraphics[width=4cm]{YrX_raster.eps} \\\\
 0 \\ \\includegraphics[width = 6cm]{colorbar.eps} \\ 1
 \\caption{Two 2D raster scans of $f$ in $\\hat{\\Omega}$ are depicted above. Both figures are mutually min-max normalized, and plotted on the same color axis.\\label{intensity_plots}}
 \\end{center}
\\end{figure}

Assuming a steady beam amplitude, it follows from the model developed by Simons \\textit{et al.} that an idealized transmission function $f:\\mathbb{R}^4 \\rightarrow \\mathbb{R}^+$ is given by a 4-variate Gaussian distribution \\cite{simons17}. We generalize that model as
\\begin{equation} \\label{regression}
 f_{\\text{Simons}}({\\bf x}; a,b,\\mathbf{A}, \\hat{{\\bf x}}) = a\\exp{\\left(-({\\bf x}-\\hat{{\\bf x}})^T \\mathbf{A} ({\\bf x}-\\hat{{\\bf x}}) \\right)} + b,
\\end{equation}
where $a,b \\in \\mathbb{R}$, the matrix $\\mathbf{A} \\in \\mathbb{R}^{4\\times4}$ is symmetric, and $\\hat{{\\bf x}} \\in \\mathbb{R}^4$ is the position associated with optimal lateral alignment. Fitting the four-dimensional raster scan data $f(\\hat{\\Omega}; M=2)$ to Simons' model \\eqref{regression} gives the idealized X-ray transmission $f_{\\text{Simons}}^*({\\bf x})$. We present two two-dimensional slice views of $f_{\\text{Simons}}^*({\\bf x})$ in Figure \\ref{f_simons_plots}.

\\begin{figure}
 \\begin{center}
 \\includegraphics[width=4cm]{XrY_Gauss.eps} \\
 \\includegraphics[width=4cm]{YrX_Gauss.eps} \\\\
 0 \\ \\includegraphics[width = 6cm]{colorbar.eps} \\ 1
 \\caption{Depicted here are 2D slices selected from $f_{\\text{Simons}}^*(\\hat{\\Omega})$. The aspect ratios were selected to agree with panels (a) and (b) of Figure \\ref{intensity_plots}. \\label{f_simons_plots}}
 \\end{center}
\\end{figure}

To model the noise characteristic of XFEL light sources, we include additive measurement noise as $\\varsigma_\\Omega(t)$. Let $\\text{diam}(\\hat{\\Omega})$ denote the maximal diameter of the set $\\hat{\\Omega}$. We collected a sampling $\\mathcal{S} = \\lbrace {\\bf x}_i \\rbrace_{i=1}^{500}$ such that for every orientation ${\\bf x}_i$, $||{\\bf x}_i - {\\bf x}^*|| > \\text{diam}(\\hat{\\Omega})$. We found that $\\sigma(f(\\mathcal{S};M=2)) \\approx 4.5 \\times 10^{-3}.$ We then model the time series of additive noise $\\varsigma_\\Omega(t)$ as i.i.d. and $\\mathcal{N}(0, 4.5 \\times 10^{-3})$.

We additionally consider fluctuations that occur because of pointing jitter (from the SASE generation scheme) \\cite{kang2017hard}. We assume the position and direction of the beam may randomly fluctuate as a function of the beam's divergence profile, which was estimated at the APS to be $6.5\\times 10^{-3}$ radians. We account for jitter in our model as random perturbations of the orientation vector ${\\bf x}$ in the $r_x$ and $r_y$ directions. Further, we expect that the beam will jitter randomly within 10\\% of the beam divergence. In doing so, we define
\\begin{equation*}
 \\Theta(t) = (0, 0, \\theta_x(t), \\theta_y(t)),
\\end{equation*}
where $\\theta_x$ and $\\theta_y$ are respectively i.i.d. and $\\mathcal{N}(0,6.5\\times 10^{-4})$.
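A Python sketch of the pieces assembled so far, under the stated assumptions (the fitted parameters $a$, $b$, $\\mathbf{A}$, and $\\hat{{\\bf x}}$ are taken as given, and all names are hypothetical), is:
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def f_simons(x, a, b, A, x_hat):
    # generalized 4-variate Gaussian transmission model
    d = np.asarray(x, dtype=float) - np.asarray(x_hat, dtype=float)
    return a * np.exp(-d @ A @ d) + b

def jitter(sd=6.5e-4):
    # Theta(t): pointing jitter applied only to the rotational axes
    return np.array([0.0, 0.0, rng.normal(0.0, sd), rng.normal(0.0, sd)])
\\end{verbatim}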
We lastly introduce the fluctuating intensity of the beam over time. To this end, we utilize the measured shot-to-shot intensity values recorded at the PAL-XFEL facility, which were recorded using a quadrant beam position monitor (QBPM) at 30 Hz \\cite{DresselhausMarais2020}. We feature-scale the raw pulse-to-pulse time-series data by normalizing the full signal against the mean recorded value. This scaled signal is written as $T_{\\text{PAL}}(t,\\kappa)$, where $\\kappa$ determines the number of pulses averaged during a data collection event. In Figure \\ref{intensity_over_time} we show $T_{\\text{PAL}}(t,1)$ in dark gray, $T_{\\text{PAL}}(t,8)$ in light gray, and $T_{\\text{PAL}}(t,264)$ in red. The mollified signals at $\\kappa=8$ and $\\kappa=264$ respectively represent the average beam intensity over a sampling interval, and the amount of time required to collect all samples necessary to compute the amplitude-correcting gradient.

\\begin{figure}
 \\centering
 \\includegraphics[width=12cm]{T_plot.eps}
 \\caption{This figure depicts a two-minute interval of the signals $T_{\\text{PAL}}(t,\\kappa=1)$ in dark gray, $T_{\\text{PAL}}(t,\\kappa=8)$ in light gray, and $T_{\\text{PAL}}(t,\\kappa=264)$ in red. All signals are feature normalized by their mean value. The data was recorded at the PAL-XFEL facility \\cite{DresselhausMarais2020}. \\label{intensity_over_time}}
\\end{figure}

Our full cost function model is hence written and evaluated as
\\begin{equation}\\label{xfel_cost}
 G_{\\text{XFEL}}({\\bf x},t; \\kappa) = -T_{\\text{PAL}}(t,\\kappa) f_{\\text{Simons}}^*({\\bf x} + \\Theta(t)) + \\varsigma_\\Omega(t).
\\end{equation}

\\subsection{Solving the CRL Alignment Problem}
We now endeavor to study the performance of Algorithm \\ref{Alg1} on our model of the CRL alignment problem
\\begin{equation*}
 \\min_{{\\bf x} \\in \\hat{\\Omega}} E\\left[G_{\\text{XFEL}}({\\bf x},t;\\kappa = 8)\\right], \\ \\forall t > 0.
\\end{equation*}
Our goal is to identify a range of nominal parameter choices for Algorithm \\ref{Alg1} that can be implemented as a starting point at an XFEL facility.

We begin by noting that when sampling \\eqref{xfel_cost} to estimate $\\nabla f_{\\text{Simons}}^*({\\bf x})$ as per Definition \\ref{def_grad}, we consider $N=8$ quasi-uniformly distributed antipodal pairs in our differencing stencil. We select the effective integration time interval for the camera to be $h_{\\text{cam}} = 8\/30$ seconds, and establish the full time interval required to complete the scheme as $\\mathcal{T}_{h,N} := [t_0, (4N+1)h + t_0].$ Further, we make use of the estimate
\\begin{equation*}
 \\mu_T = T_{\\text{PAL}}(t_0 + 264\/30,264),
\\end{equation*}
where $t_0$ is the moment we begin estimating the gradient.

For each execution of Algorithm \\ref{Alg1} that follows, the stopping condition is established to be a maximal iteration count $i_{\\text{max}};$ no other stopping conditions are considered. Additionally, we conceptualize our initial gradient sphere radius $\\alpha_0$ as some multiple $C r,$ where $r = ||{\\bf x}_0 - \\hat{{\\bf x}}||,$ though we do not expect users to know $r$ \\textit{a priori}. At each step $i$, the gradient sampling radius $\\alpha_0$ is scaled by a cooling factor such that
\\begin{equation*}
 \\alpha_i = \\frac{\\alpha_0}{(1 + i)^\\gamma},
\\end{equation*}
where $\\gamma > 0$ is fixed.
Further, we enforce a maximum step size $||{\\bf x}_i - {\\bf x}_{i-1}|| \\leq \\delta_i = \\alpha_i.$ Given that the true distance $r$ is unknown upon initialization, the executions that follow are intended to identify a performance relationship between $\\alpha_0$ with respect to $r$, $\\gamma$, and the stopping condition.

We demonstrate a single execution of Algorithm \\ref{Alg1} with an initial position ${\\bf x}_0$ selected randomly at a distance of $r = 0.4$ from $\\hat{{\\bf x}}.$ We note that this Euclidean distance is significantly further from $\\hat{{\\bf x}}$ than the positions selected during the manually-tuned rough alignments completed during data collections at the more stable synchrotron source at the APS \\cite{Breckling2021}. We fix $\\gamma = 0.3$, select $\\alpha_0$ according to the distance scalar $C = 3.0$, assign a momentum term $\\beta = 0.15,$ and set the stopping condition to $i_{\\text{max}} = 100$ iterations. In addition to the time required to collect the image data from the camera sensor, $h_{\\text{cam}}$, we need to include an estimate of the time required to move the four stepper motors and process the data. We assume $h_{\\text{move}}=5\/30$ s. Given the full time interval $h_{\\text{total}} = h_{\\text{cam}} + h_{\\text{move}} = 13\/30$ s, the total execution time assumed necessary to reach 100 iterations is
\\begin{equation*}
 (4N+1) \\times i_{\\text{max}} \\times h_{\\text{total}} = 1430 \\text{ s},
\\end{equation*}
or 23.8 minutes. The particular route taken is presented in the 2D projections shown in Figure \\ref{single_exec}.

\\begin{figure}[h]
\\centering
\\includegraphics[width=9cm]{SingleExecution.eps} \\\\
\\ \\ -1 \\includegraphics[width=6cm]{colorbar.eps} 0 \\\\
$-f_{\\text{Simons}}^*({\\bf x})$
\\caption{\\label{single_exec}
A single execution of Algorithm \\ref{Alg1} with an initial position ${\\bf x}_0$ selected randomly at a distance $r = ||{\\bf x}_0 - \\hat{{\\bf x}}|| = 0.4.$ The initial step-size $\\alpha_0$ is fixed to $3r = 1.20,$ and scaled with each step by the cooling parameter $\\gamma = 0.3.$ The slices depicted in (a) and (b) use the optimal off-axis values in $\\hat{{\\bf x}}$. The blue dots depict the initial position projected onto the respective 2D planes, the white crosses depict the optimum alignment coordinates $\\hat{{\\bf x}}$, and the red dots depict the 100 positions ${\\bf x}_i$. Each position is connected sequentially by a white line.}
\\end{figure}

Next, we present the results of three Monte Carlo experiments. We maintain the parameter choices established in the demonstration execution above, varying only the stopping condition $i_{\\text{max}} = 50, 100,$ and $200$. Each Monte Carlo study executes Algorithm \\ref{Alg1} to completion 100 times, varying the initial position randomly on $\\partial \\mathcal{B}_r(\\hat{{\\bf x}})$ where $r = 0.4.$ The results depicted in Figure \\ref{final_MC_1} demonstrate the expected convergence behavior for these well-selected parameters.

\begin{figure}
 \centering
 $i_{\text{max}} = 50; (11.9 \ \text{Minutes})$\\
 \includegraphics[width=7cm]{itermaxmax_50.eps}\\
 $i_{\text{max}} = 100; (23.8 \ \text{Minutes})$ \\
 \includegraphics[width=7cm]{itermaxmax_100.eps}\\
 $i_{\text{max}} = 200; (47.7 \ \text{Minutes})$ \\
 \includegraphics[width=7cm]{itermaxmax_200.eps}\\
 \caption{\label{final_MC_1}Depicted above are the results of three Monte Carlo simulations, wherein Algorithm \ref{Alg1} is executed 100 times to solve the synthetic CRL alignment problem, varying the stopping condition $i_{\text{max}}$. The point $\hat{{\bf x}}$ is depicted in each figure as a red cross. Figures (a) and (b) show the spatial distribution of results when $i_{\text{max}} = 50$. Similarly, Figures (c) and (d) denote the results when $i_{\text{max}} = 100$, and Figures (e) and (f) show $i_{\text{max}} = 200$. The result of a particular execution is shown as a dark blue dot. The blue ellipses highlight the 99.3\% uncertainty region.}
\end{figure}

What remains to be assessed is performance as a function of the user's choice of step size and cooling parameter. We consider two values for the cooling parameter $\gamma$, five initial step-size scales $C$, and six stopping conditions $i_{\text{max}}.$ For each particular set of parameters, Algorithm \ref{Alg1} is executed 100 times, where the starting position ${\bf x}_0$ is again sampled randomly from $\partial \mathcal{B}_r(\hat{{\bf x}})$. The results, shown in Figure \ref{final_all}, identify a collection of parameters that tend to converge reliably in under an hour ($i_{\text{max}} \leq 200$ iterations).

\begin{figure}
 \centering
 \includegraphics[width=5cm]{gamma_0_3.eps} \ 
 \includegraphics[width=5cm]{gamma_0_4.eps}\\
 \caption{Depicted above are the results from 60 Monte Carlo studies varying $i_{\text{max}}$, the cooling parameter $\gamma$, and the initial gradient-sampling radius $\alpha_0$. We vary $\alpha_0$ as a multiple of the initial position's distance from ground truth, $C||{\bf x}_0 - \hat{{\bf x}}|| = Cr.$ Figures (a) and (b) fix $\gamma$ as 0.3 and 0.4, respectively. Each Monte Carlo simulation executes Algorithm \ref{Alg1} a total of 100 times; the average final distance from $\hat{{\bf x}}$, in terms of the 4D Euclidean distance and normalized by $r$, is depicted as the vertical height. The shaded regions above and below the interpolated lines represent the corresponding standard deviation trend. \label{final_all}}
\end{figure}

While additional parameters remain to be thoroughly studied, namely the momentum term $\beta,$ we found that choices of $\beta > 0.15$ tended to perform poorly over longer periods of computation time. In particular, when $i_{\text{max}} > 50$ we saw no apparent improvement in performance. Given that the settings identified above demonstrate convergence that tends to improve with additional computation, an attractive behavior for an unsupervised optimization method, we advise being conservative with $\beta.$ We additionally note that our choice to equate the maximal step size $\delta_i$ with the gradient-sampling radius $\alpha_i$ was motivated by observation: choices of $\alpha_i$ substantially larger than $\delta_i$ frequently resulted in failure over longer time intervals. Lastly, we observed that selecting $\gamma$ too large tended to collapse $\alpha_i$ too quickly, which was also detrimental to long-time performance. Conversely, selecting $\gamma$ too small tended to result in slow convergence.
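The Monte Carlo setup above admits a compact description. The sketch below, which reuses the hypothetical \texttt{run\_alignment} driver from the previous sketch, draws each initial position uniformly on $\partial \mathcal{B}_r(\hat{{\bf x}})$ by normalizing a Gaussian draw, and reports the mean and standard deviation of the final error normalized by $r$, i.e., the quantities plotted in Figure \ref{final_all}.
\begin{verbatim}
import numpy as np

def sample_on_sphere(center, r, rng):
    # A normalized Gaussian draw is uniform in direction, giving a
    # uniform sample from the sphere of radius r about `center`.
    u = rng.normal(size=center.size)
    return center + r * u / np.linalg.norm(u)

def monte_carlo(measure, x_hat, r=0.4, trials=100, i_max=100, seed=1):
    rng = np.random.default_rng(seed)
    errs = np.empty(trials)
    for k in range(trials):
        x0 = sample_on_sphere(x_hat, r, rng)
        xf = run_alignment(measure, x0, r, i_max=i_max, seed=k)
        errs[k] = np.linalg.norm(xf - x_hat) / r  # final error over r
    return errs.mean(), errs.std()
\end{verbatim}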

\section{Conclusions}\label{conclusions}
The motivation for this work arose while attempting to automate the task of laterally aligning optics at an X-ray Free-Electron Laser (XFEL) facility. These facilities are capable of generating extremely bright pencil beams of X-ray light, but that brightness fluctuates from moment to moment. If not for the stochastic noise sources and the apparent intensity fluctuations, the task of orienting beam-line optics would reduce to a rather simple minimization problem \cite{simons17}. While the apparent level of stochastic measurement noise is certainly tractable for many stochastic descent methods, the intensity fluctuations are so severe that they require a separate, independent treatment.

In this paper, we introduced a differencing scheme to estimate the gradient of a cost functional potentially corrupted by both stochastic noise and independent amplitude fluctuations. We assume that only one position in the search space can be measured at any particular moment in time; thus, any finite-differencing scheme must procedurally move from point to point, recording each intensity along with the corresponding position and time. In this scheme, we account for the fluctuating amplitude by introducing additional samples at a single, fixed location central to the differencing stencil. Because these samples alternate sequentially in time with the stencil samples, separating the resulting data post hoc provides a proportional estimate of the functional's amplitude. This additional signal is then interpolated along the time axis and subtracted from the corresponding signal generated sequentially by the finite-differencing stencil. When well-sampled in space and time, this method is effective at detrending those measurements.

We included a detailed error analysis of this amplitude-correcting gradient estimate, as well as numerical benchmarking of its performance in nonlinear programming problems solved with SGD. Additionally, given that access to XFEL facilities is highly limited, we included a proof-of-concept implementation of an amplitude-correcting SGD method that we believe shows promise. In doing so, we identified regions of parameter choices that will likely be effective in a similarly configured apparatus.

\section*{Acknowledgments}
This manuscript has been authored in part by Mission Support and Test Services, LLC, under Contract No. DE-NA0003624 with the U.S. Department of Energy, National Nuclear Security Administration (DOE-NNSA), NA-10 Office of Defense Programs, and supported by the Site-Directed Research and Development Program. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published content of this manuscript, or allow others to do so, for United States Government purposes. The U.S. Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan). The views expressed in the article do not necessarily represent the views of the U.S. Department of Energy or the United States Government. DOE/NV/03624--1406.

Portions of this work were performed at the High Pressure Collaborative Access Team (HPCAT; Sector 16), Advanced Photon Source (APS), Argonne National Laboratory.
HPCAT operations are supported by the DOE-NNSA's Office of Experimental Sciences. The Advanced Photon Source is a DOE Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory under Contract No. DE-AC02-06CH11357.

Sunam Kim, Sangsoo Kim, and Daewoong Nam would like to acknowledge support from the National Research Foundation of Korea (NRF), specifically NRF-2019R1A6B2A02098631 and NRF-2021R1F1A1051444.

Part of this work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. We also acknowledge the support of the Lawrence Fellowship in this work.

\bibliographystyle{elsarticle-num}