diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzmncm" "b/data_all_eng_slimpj/shuffled/split2/finalzzmncm" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzmncm" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nThe recent exciting developments in the understanding of\nnon-perturbative effects in the theory formerly known as strings~\\cite{dbranes}\nhave led to tantalizing glimpses of a broader framework\nfor the Theory of Everything (TOE),\nreferred to variously as $M$ or $F$ theory. This TOE has been\napproached from several different perspectives. First were the\nstrong-coupling limits of various string theories that had been\nthought distinct before the advent of duality. Consideration of\nType IIA string led to $M$ theory~\\cite{duff} \nand that of Type IIB string led to\n$F$ theory. A second perspective has been provided by the low-energy\nlimit. In the case of $M$ theory, \nwhich is Lorentz invariant for\nnon-trivial reasons, this low-energy limit leads\nto 11-dimensional $N=1$ supergravity. The\nlow-energy limit of $F$ theory is less evident: it is not Lorentz\ninvariant, and the relation to a candidate higher-dimensional\nsupergravity theory is not yet clear. An interesting third\nperspective has been provided by Matrix theory~\\cite{matrix}, \nwhich proposes a\nnon-perturbative formulation of $M$ theory using light-cone quantization.\n\nWe have offered a fourth perspective~\\cite{emndbmonop}, based on the world-sheet\n$\\sigma$-model formulation of string theory. By extending the\nspace of conformal field theories describing critical string\ntheories to general renormalizable two-dimensional field theories,\none is able to address issues in non-critical string theory~\\cite{aben,ddk}. \nThe\nworld-sheet renormalization scale may be identified \nwith a\nLiouville field~\\cite{emn} whose dynamics is non-trivial \naway from\ncriticality. One example of\nan interesting world-sheet field theory is the non-compact\nWess-Zumino model~\\cite{wittenbh} \nthat describes a black hole in $1+1$ dimensions,\nwhich may be formulated as a monopole defect on the world \nsheet~\\cite{emnmonop}.\nWe have recently shown that the supersymmetrization of this model\nis conformal in 11 dimensions~\\cite{emndbmonop}, \nwhich we interpret as the world-sheet\ndescription of the massless solitons that appear in the strong-coupling\nstring approach to $M$ theory. This analysis is reviewed in more detail\nbelow, together with some indications that a twelfth\ndimension might be described by a Liouville field.\n\nA fifth perspective on $M$ theory has been the recent suggestion~\\cite{horava} \nthat it might be equivalent at short distances to a Chern-Simons\ntopological gauge theory (TGT) based on the supergroup $Osp(1|32,R) \\otimes\nOsp(1|32,R) $. The idea that a TGT might underly quantum gravity has\nbeen recurrent in recent years. One such theory was found at the core\nof the $(1+1)$-dimensional string black-hole model~\\cite{eguchi,emnorigin}, \nand it provides a\nnatural incarnation of the holographic principle. The provocative\nproposal of Horava~\\cite{horava} raises several questions: is the choice of\nsupergroup unique? what other degrees of freedom are present in $M$\ntheory? what is the nature of the non-trivial dynamics that leads to the\ngeneration of space-time? 
and many others.\n\nThe purpose of this paper is to address these questions from the\nworld-sheet perspective outlined above, building a bridge to the TGT\nperspective that also illuminates many aspects of the relationship\nbetween $M$ theory and $F$ theory. We recall that the \nappropriately supersymmetrized world-sheet\nmonopole corresponds to a target-space $D$ brane, and point out that\nits recoil induces an anti-de-Sitter (AdS) metric in the 11-dimensional\ntarget space, whose corresponding supergroup must be at least as\nlarge as $Osp(1|32,R) $. Within this approach, there is an extra\ntime-like dimension parametrized by a Liouville field.\nWe point out that~\\cite{holten}\nthe minimal supergroup\nextension of the Lorentz group in $11+1$ \ndimensions is $Osp(1|32,R) \\otimes Osp(1|32,R) $, which is broken to\n$Osp(1|32,R) $ because of the Lorentz non-invariance of $F$ theory. \nHorava's TGT is a local short-distance field theory with the\n$Osp(1|32,R) \\otimes Osp(1|32,R) $ symmetry, but the full $M$-theory\ndynamics involves non-local structures that may be expressed as\nWilson loops on the boundary of the 11-dimensional adS space.\nSingleton~\\cite{singl,guna,guna2} \nand higher infinite-dimensional unitary representations\nof $Osp(1|32,R) $\ndescribe non-local\nboundary states. We recall that supersymmetric Wilson loops have a\nstring interpretation~\\cite{awada}, \nin terms of which the world-sheet\nmonopoles characterize defects at the interface with AdS space.\n\nThe layout of this paper is as follows. In section 2, we review\naspects of our previous analysis of the two-dimensional string\nblack-hole model that later find echos in our analysis of $M$ theory.\nIn section 3 we discuss the critical monopole and vortex \ndeformations in\nsuperstring in 11 dimensions, \nand use $D$-brane recoil to derive \nin section 4 the corresponding AdS$_{11}$ metric on\ntarget space-time. In section 5 we motivate the \nappearance of the $Osp(1|32,R) \\otimes Osp(1|32,R) $\nsupergroup structure in the short-distance TGT, and in section 6\nwe draw parallels between AdS black holes and our earlier \ntwo-dimensional work. In section 7 we develop the interpretation of\nstrings as Wilson loops in this TGT, and in \nsection 8 we make some conjectures and advertize open issues within the\nperspective developed here.\n\n\\section{Black Holes as World-Sheet Monopole Defects, and the\nAppearance of TGT at the Core}\n\nA relevant precursor of the discussion in this\npaper is found in the two-dimensional black-hole model. The\nEuclidean target-space version \nmay be described in terms of a vortex defect on the world \nsheet~\\cite{sathiap,ovrut,emnmonop},\nobtained as the solution $X_v$ of the equation\n\\begin{equation}\n\\partial_z {\\bar \\partial}_z X_v = {i \\pi q_v \\over 2} [ \\delta(z - z_1) -\n\\delta(z - z_2)]\n\\label{defect}\n\\end{equation}\nwhere $q_v$ is the vortex charge and $z_{1,2}$ are the locations of\na vortex and antivortex, respectively, which we may map to the\norigin and the point at infinity. The corresponding\nsolution to (\\ref{defect}) is\n\\begin{equation}\nX_v = q_v {\\rm Im~ln} z\n\\label{solution}\n\\end{equation}\nand we see that the vortex charge $q_v$ must be integer. 
To\nsee that the solution (\\ref{solution}) corresponds to a black hole,\nwe introduce the space-time coordinates $(r,\\theta)$:\n\\begin{equation}\nz \\equiv (e^r - e^{-r})e^{i\\theta}\n\\label{transform}\n\\end{equation}\nin terms of which the induced target-space metric is\n\\begin{equation}\nd s^2 = {dz d{\\bar z} \\over 1 + z {\\bar z}} = dr^2 + {\\rm tanh}^2 r d\n\\theta^2\n\\end{equation}\nwhich we recognize as a Euclidean black hole located at $r = 0$.\nWe recall that this model can be regarded as an $SL(2,R) \/ U(1)$\nWess-Zumino coset model, with gauge field\n\\begin{equation}\nA_z \\rightarrow \\epsilon^2 \\partial_z \\theta\n\\label{monopole}\n\\end{equation}\nas $r \\equiv \\epsilon \\rightarrow 0$. We see that the\ngauge field is singular at the origin, so that the world-sheet defect\nmay be interpreted as a monopole of the compact $U(1)$ gauge group.\n\nThere are related `spike' configurations which are solutions of the\nequation\n\\begin{equation}\n\\partial_z {\\bar \\partial}_z X_m = - {\\pi q_m \\over 2} [ \\delta(z - z_1) -\n\\delta(z - z_2)]\n\\label{spike}\n\\end{equation}\ngiven by\n\\begin{equation}\nX_m = q_m {\\rm Re~ln} z\n\\label{solution2}\n\\end{equation}\nIt is easy to see that single-valuedness of the partition function imposes\nthe following quantization condition:\n\\begin{equation}\n2 \\pi \\beta q_v q_m = {\\rm integer}\n\\label{quantization}\n\\end{equation}\nat finite temperature $T \\equiv \\beta^{-1} \\ne 0$.\nMaking the change of variables\n\\begin{equation}\n|z|^2 \\equiv - u v \\;\\;: \\;\\; u = e^{R+t}, v = - e^{R-t}\n\\label{uandv}\n\\end{equation}\nand identifying $R \\equiv r + {\\rm ln}(1 - e^{-r})$, we\nsee that the `spike' (\\ref{solution2}) corresponds to a\nMinkowski black hole~\\cite{emnmonop}:\n\\begin{equation}\nds^2 = { dz d{\\bar z} \\over 1 + z {\\bar z}} = \n - {du dv \\over 1 - uv} = dr^2 - {\\rm tanh}^2 r dt^2\n\\label{Minkowski}\n\\end{equation}\nThe coordinates $u,v$ are natural in the Wess-Zumino\ndescription of the Minkowski black hole, which involves\ngauging the non-compact $O(1,1)$ subgroup of $SL(2,R)$.\nReparametrizing the neighbourhood of the singularity by\n$w \\sim {\\rm ln}u \\sim - {\\rm ln}v$, one finds a\ntopological gauge theory (TGT) on the world sheet:\n\\begin{equation}\nS_{CS} = i \\int d^2z \\sqrt{h} {k \\over 2 \\pi} w \\epsilon^{ij} F(A)_{ij} +\n\\dots\n\\label{chernsimons}\n\\end{equation}\nwhere $h$ is the world-sheet metric, $F$ is the field strength\nof the non-compact Abelian gauge field, and the dots represent\nadditional `matter' or `magnon' fields. Their generic form close to the\nsingularity is~\\cite{wittenbh}\n\\begin{equation}\n- {k \\over 2 \\pi} \\int d^2z \\sqrt{h} h^{ij} D_i a D_j b + \\dots\n\\label{matter}\n\\end{equation}\nwhere $D_i$ is an $O(1,1)$ covariant derivative and $ab + uv = 1$.\nWe see from this that a non-zero condensate $<ab> \\ne 0$\ncorresponds to $<uv> \\ne 1$ and hence a non-trivial target space-time\nmetric (\\ref{Minkowski}).\n\nIn this connection, we recall\nthat there appears an enhanced symmetry\nat the singularity of the black hole~\\cite{emnorigin}, since\nthe topological world-sheet theory describing the singularity \nis characterized by a target $W_{1+\\infty} \\otimes W_{1+\\infty}$ \nsymmetry. When the space-time metric is generated \naway from the singularity, there is a spontaneous breaking of\n$W_{1+\\infty} \\otimes W_{1+\\infty} \\rightarrow\nW_{1+\\infty}$ due to the expectation value $<ab> \\ne 0$,\nassociated with Wilson loops surrounding the world-sheet defects. 
\n\nThis is a convenient point to preview the two key ways in\nwhich the two-dimensional black hole model \ndescribed above is relevant to the\nconstruction of $M$ theory. The world sheet may be mapped onto\nthe target space-time in two dimensions, and this\ntwo-dimensional example\nmay usefully be viewed from either perspective.\nOn the world sheet, as we discuss in the next section, \nwhen the above monopole solution is supersymmetrized,\nit becomes a marginal deformation when the string is\nembedded in an 11-dimensional space-time. \nWe have suggested previously that this limit\ncorresponds to the masslessness of the $D$-brane representation\nof target-space black holes in the strong-coupling limit of\n$M$ theory when it becomes 11-dimensional. The \nmonopoles can be viewed as puncturing holes in the\nworld sheet through which the core of the space-time theory can be\nvisualized. \n\nOn the other hand,\ninterpreting the two-dimensional model from the space-time\npoint of view, we see at the core a TGT with a\nnon-compact gauge group, analogous to that proposed by Horava~\\cite{horava}. \nThe\ntask we tackle in Section 4 is that of identifying the `matter' fields\nthat appear around the core by analogy with\nthe fields $a,b$ above, whose condensation generates the\nspace-time metric.\nBefore addressing these issues, though, we first examine more\nclosely the supersymmetric world-sheet monopole model in an\n11-dimensional space time.\n\n~\\\\\n\\section{Critical Defects in 11-Dimensional Superstring}\n\nThe supersymmetrization of the above\nworld-sheet defects may be represented using a\nsine-Gordon theory~\\cite{ovrut,emndbmonop} \nwith local $n=1$ supersymmetry, which has\nthe following monopole deformation operator:\n\\begin{equation}\nV_m = {\\bar \\psi} \\psi : {\\rm cos} [{q_m \\over \\beta^{1\/2}_{n=1}}\n(\\phi(z) - \\phi({\\bar z}))]:\n\\label{susymonopole}\n\\end{equation}\nwhere the $\\psi, {\\bar \\psi}$ are world-sheet fermions with\nconformal dimensions $(1\/2,0), (0,1\/2)$ respectively, and\n$\\phi$ is a Liouville field. The effective temperature\n$1 \/ \\beta_{n=1}$ is related to the matter central charge by~\\cite{ovrut}\n\\begin{equation}\n\\beta_{n=1} = { 2 \\over \\pi ( d - 9 )}\n\\label{susybeta}\n\\end{equation}\nwhere we assume that $d > 9$. The corresponding vortex\ndeformation operator is\n\\begin{equation}\nV_v = {\\bar \\psi} \\psi : {\\rm cos} [ 2 \\pi q_v \\beta_{n=1}^{1\/2}\n(\\phi(z) + \\phi({\\bar z}))]:\n\\label{susyvortex}\n\\end{equation}\nwhere $q_v$ is the vortex charge. Including the conformal dimensions\nof the fermion fields, we find that the conformal dimensions of the\nvortex and monopole operators are\n\\begin{equation}\n\\Delta_v = {1 \\over 2} + {1 \\over 2} \\pi \\beta_{n=1} q_v^2 = {1 \\over 2} +\n{q_v^2 \\over (d - 9)}, \\;\\; \\\\\n\\Delta_m = {1 \\over 2} + {1 \\over 8 \\pi \\beta_{n=1}} q_m^2 = {1 \\over 2} +\n{q_m^2 (d-9) \\over 16}\n\\label{dimensions}\n\\end{equation}\nrespectively. We see that the supersymmetric vortex deformation \nwith minimal charge $|q_v| = 1$ is marginal when \nthe matter central charge $d = 11$. Below this\nlimiting value, the vortex deformation is irrelevant. The quantization\ncondition imposed by single-valuedness of the partition function\ntells us that the minimum allowed charge for a dual monopole defect\nis $|q_m| = (d - 9)\/4$, which is irrelevant for $14.04 > d > 11$.\nWe therefore see that $d = 11$ is the critical dimension in\nwhich both the supersymmetric vortex and monopole deformations\nare marginal. 
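For orientation, a brief worked check of these two critical values may be useful; it uses only the\ndimensions (\\ref{dimensions}) and the minimal charges quoted above. Setting $\\Delta_v = 1$ with\n$|q_v| = 1$ gives\n\\begin{equation}\n{1 \\over 2} + {1 \\over d - 9} = 1 \\;\\; \\Rightarrow \\;\\; d = 11\n\\end{equation}\nwhilst setting $\\Delta_m = 1$ with the minimal dual charge $|q_m| = (d - 9)\/4$ gives\n\\begin{equation}\n{1 \\over 2} + {(d - 9)^3 \\over 256} = 1 \\;\\; \\Rightarrow \\;\\; (d - 9)^3 = 128 \\;\\; \\Rightarrow \\;\\;\nd \\simeq 14.04\n\\end{equation}\nwhich is the origin of the value $14.04$ quoted in this discussion. 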
On the world sheet, this corresponds to a\nBerezinskii-Kosterlitz-Thouless transition~\\cite{xy}, \nwith an unstable plasma phase\nof free vortex defects in $11 < d < 14.04 $, whilst monopoles are bound \nfor matter central charges $d > 14.04$. We note\nthat $d=11$ is the maximal\ndimensionality of space in which it is possible to have Lorentz-covariant\nlocal supersymmetric theories, \nwhilst if one relaxes the requirement of Lorentz \ncovariance a 12-- (or higher--) dimensional \ntarget space may be allowed~\\cite{bars}. \n\n\nIn terms of the `temperature' (\\ref{susybeta}),\nassociated with the central-charge deficit of the matter theory, \nthe above pattern of phases may be expressed as follows:\n\n\\begin{itemize}\n\n\\item\n(i) $T < T_{BKT-vortex}$, corresponding to $d < 11$: vortices bound, \nmonopoles free,\n\n\\item\n(ii) $T_{BKT-vortex} < T < T_{BKT-monop}$ corresponding to $11 < d <\n14.04$: plasma of vortices and monopoles,\n\n\\item\n(iii) $T > T_{BKT-monop}$ corresponding to $d > 14.04$: monopoles bound,\nvortices free.\n\n\\end{itemize}\nwhere the two critical Berezinskii-Kosterltiz-Thouless\ntemperatures are familiar from the two-dimensional XY model~\\cite{xy}. \nIn our Liouville picture, these critical temperatures correspond to \ncritical values of the central charge $d$, as explained above, \nwhereas in critical strings such temperatures correspond to \ncritical values of the radius of the compactified dimension~\\cite{sathiap}. \n\n\nWe have argued that these defects correspond to $D$ branes,\nsince correlators involving defects and closed-string\noperators have cuts for generic values of $\\Delta_{v,m}$.\nThese cause the theory to become effectively that of an open string.\nOne may then impose Dirichlet boundary conditions on the\nboundaries of the effective world sheet, i.e., along the cuts,\nobtaining solitonic $D$-brane configurations which become\nmassless when $d \\rightarrow 11$. The \nworld-sheet Berezinskii-Kosterlitz-Thouless\ntransition when $d = 11$ corresponds to the $D$-brane condensation\nthat occurs in the strong-coupling limit of $M$ theory.\nIt is known that the low-energy limit of this critical theory\nis provided by 11-dimensional supergravity, which\npossesses only 3- and 5-brane solitonic solutions.\n\n\n\n\\section{Anti-de-Sitter Space Time from $D$-Particle Recoil} \n\nIn this section we discuss how eleven-dimensional anti-de-Sitter space\ntime AdS$_{11}$\narises from our Liouville approach to $D$-brane recoil~\\cite{dbrecoil}. \nThe $D$ brane is described as above by a world-sheet defect, whose\ninteraction with a closed-string state \nis described by a pair of logarithmic deformations~\\cite{gurarie},\ncorresponding to the collective coordinate $y_i$\nand velocity $u_i$ of the recoiling $D$ particle~\\cite{kmw,lizzi}.\nBefore the recoil, the world-sheet theory with\ndefects is conformally invariant. However, the\nlogarithmic operators are slightly relevant~\\cite{kmw}, in a \nworld-sheet renormalization group sense, with anomalous dimension \n$\\Delta = -\\frac{\\epsilon ^2}{2}$, where $\\epsilon$ is a\nregularization parameter specified below. \nThus, the recoiling $D$ particle is no longer described by a \nconformal theory on the world sheet.\nTo restore \nconformal invariance, one has to invoke \nLiouville dressing~\\cite{ddk}, which\nincreases the target space-time dimensionality\nto $d + 1$. 
Because of the supercriticality~\\cite{aben} of the \ncentral charge $d$ of the \nstringy $\\sigma$ model, which had been critical before \nincluding recoil effects, the Liouville field \nhas Minkowski signature in this approach.\nIn evaluating the\n$\\sigma$-model path integral, it is convenient to work \nwith a Euclidean time $X^0$ in the $d$-dimensional base\nspace.\\footnote{Note\nthat the time $X^0$ is therefore distinct from the Liouville\ntime $t$, which necessarily has Minkowski signature.\nIn the case of gauge theories, the Euclidean time $X^0$\nmay be thought of as temperature.}\nThus we obtain an effective curved space-time manifold $F$\nin $d+1$ dimensions, with signature $(1,d)$, \nwhich is described~\\cite{kanti} by a metric of the form: \n\\begin{equation}\nG_{00}=-1 \\,,\\, G_{ij}=\\delta_{ij} \\,,\\,\nG_{0i}=G_{i0}=f_i(y_i,t)=\\epsilon (\\epsilon y_i + u_i t)\\, ,\\,\\,i,j=1,...,d\n\\label{yiotametric}\n\\end{equation}\nwhere the regularization parameter $\\epsilon \\rightarrow 0^+$ is\nrelated~\\cite{kmw} to the world-sheet \nsize $L$ via \n\\begin{equation}\n\\epsilon ^{-2} \\sim \\eta {\\rm ln}(L\/a)^2,\n\\label{epsilon}\n\\end{equation}\nwhere $\\eta =-1$ for a \nLiouville mode $t$ of Minkowski signature,\nand $a$ a world-sheet short-distance cutoff. The quantities \n$y_i$ and $u_i$ represent the collective coordinates and velocity of a\n$d$-dimensional D(irichlet)-particle. \n\nIn our case, the original theory is conformal for $d=11$, \nso the $F$ manifold has twelve dimensions. \nThe $D$ particle is a point-like stringy soliton, with $d=11$\ncollective coordinates\nsatisfying Dirichlet boundary conditions on the open world sheet \nthat appears in the presence of a defect.\nThus the above Liouville theory describes a\n12-dimensional space time with {\\it two times}~\\cite{bars}\nif the basis space is taken to have Minkowski signature. The \nfact that the Liouville $\\sigma$-model dilaton is\nlinear in time reflects the non-covariant nature of the \nbackground~\\cite{aben}. This is consistent\nwith the Lorentz-non-covariant formalism \nof 12-dimensional superstrings~\\cite{bars},\nwhich reflects the need to use null vectors \nto construct the appropriate supersymmetries. \nIn our approach, this non-covariant nature is a natural consequence of the \nLiouville dressing, prior to supersymmmetry, but this\nremark provides for a smooth supersymmetrization of the results. \n\nWe recall~\\cite{kanti} that the components of the \nRicci tensor for the above 12-dimensional $F$ manifold are: \n\\begin{eqnarray}\nR_{00}&=& -\\frac{1}{(1+\\sum_{i=1}^{d} f_i^2)^2}\\,\n\\left(\\sum_{i=1}^{d} f_i \\frac{\\partial f_i}{\\partial t}\n\\right)\\,\\left[\\sum_{j=1}^{d} \\frac{\\partial f_j}\n{\\partial y_j} \\, \\left(1+\\sum_{k=1, k\\neq j}^{d} f_k^2\n\\right)\\right] \\\\[3mm]\n&+& \\frac{1}{(1+\\sum_{i=1}^{d} f_i^2)}\\,\\left[\\sum_{i=1}^{d}\n\\frac{\\partial^2 f_i}{\\partial y_i \\partial t}\n\\left(1+\\sum_{j=1, j\\neq i}^{d} f_j^2\\right)\n\\right]\\\\[5mm]\nR_{ii}&=&\\frac{1}{(1+\\sum_{k=1}^{d} f_k^2)^2}\\,\n\\left\\{\\,\\frac{\\partial f_i}{\\partial y_i}\\,\n\\left(\\sum_{j=1}^{d} f_j\\,\\frac{\\partial f_j}\n{\\partial t}\\right)-(1+\\sum_{k=1}^{d} f_k^2)\\,\n\\frac{\\partial^2 f_i}{\\partial y_i \\partial t}\\right.\\nonumber \\\\[3mm]\n&+& \\left. 
\\frac{\\partial f_i}{\\partial y_i} \\left[\n\\sum_{j=1, j\\neq i}^{d}\\, \\frac{\\partial f_j}\n{\\partial y_j}\\,(1+\\sum_{k=1, k\\neq j}^{d} f_k^2)\n\\right]\\right\\} \\\\[5mm]\nR_{0i}&=& \\frac{f_i}{(1+\\sum_{k=1}^{d} f_k^2)^2}\\,\n\\left\\{\\frac{\\partial f_i}{\\partial y_i}\\,\n\\left(\\sum_{j=1}^{d} f_j \\,\\frac{\\partial f_j}\n{\\partial t} \\right)-\\left(1+\\sum_{k=1}^{d} f_k^2\\right)\\,\n\\frac{\\partial^2 f_i}{\\partial y_i \\partial t}\\right\\}\\\\[5mm]\nR_{ij}&=& \\frac{1}{(1+\\sum_{k=1}^{d} f_k^2)^2}\\,\nf_i \\,f_j\\,\\frac{\\partial f_i}{\\partial y_i}\\,\n\\frac{\\partial f_j}{\\partial y_j} \n\\label{Ricci}\n\\end{eqnarray}\nWe consider below the asymptotic limit: $t >>0$. \nMoreover, we restrict ourselves to the limit\nwhere the recoil velocity $u_i \\rightarrow 0$,\nwhich is encountered if the $D$ particle \nis very heavy, with mass $M \\propto 1\/g_s$, where \n$g_s \\rightarrow 0$ is the dual string \ncoupling. \nIn such a case the closed string state splits into two open ones\n`trapped' on the $D$-particle defect. \nThe collective coordinates of the latter exhibit quantum\nfluctuations of order~\\cite{dbrecoil} \n$\\Delta y_i \\sim |\\epsilon ^2 y_i|$.\n{}From the world-sheet point of view~\\cite{emnmonop}, \nthis case of a very heavy $D$ particle corresponds \nto a strongly-coupled defect, since the coupling $e$ of the world-sheet\ndefect is related to the dual string coupling $g_s$ by\n\\begin{equation}\n e\\sqrt{\\pi\/3} \\propto \\frac{1}{\\sqrt{g_s}} \n\\label{duality}\n\\end{equation}\ncorresponding to a world-sheet\/target-space strong\/weak coupling duality. \n\nIn the limit $u_i \\rightarrow 0$, (\\ref{Ricci}) implies that\nthe only non-vanishing components of the Ricci tensor are: \n\\begin{equation}\nR_{ii} \\simeq \\frac{\\partial f_i}{\\partial y_i} \\left(\\sum_{j=1; j\\ne i}^{d}\n\\frac{\\partial f_j}{\\partial y_j} [1 + {\\cal O}(\\epsilon ^4)]\\right)\n\\frac{1}{(1 + \\sum_{k=1}^{d}f_k^2)^2} \\simeq \n\\frac{-(d-1)\/|\\epsilon|^4}{(\\frac{1}{|\\epsilon|^4} - \\sum_{k=1}^{d=11}|y_i|^2)^2} \n+{\\cal O}(\\epsilon ^{8}) \n\\label{limRicci}\n\\end{equation}\nwhere we have taken into account (\\ref{epsilon}) \nand the Minkowski signature of the Liouville \nmode $t$. Thus, in this limiting case\nand for large $t >>0$, the Liouville mode decouples\n{}from the residual $d$-dimensional manifold. \nWe may write (\\ref{limRicci}) as\n\\begin{equation} \n R_{ij}={\\cal G}_{ij}R \n\\label{newricci}\n\\end{equation}\nwhere ${\\cal G}_{ij}$ is a dimensionless diagonal \nmetric, corresponding to the line element:\n\\begin{equation}\n ds^2=\\frac{|\\epsilon|^{-4}\\sum_{i=1}^{d} \ndy_i^2}{(\\frac{1}{|\\epsilon|^4} - \\sum_{i=1}^{d}|y_i|^2)^2} \n\\label{ball}\n\\end{equation}\nThis metric describes the {\\it interior} \nof a $d$-dimensional ball, \nwhich is the Euclideanized version of an \nAdS space time~\\cite{witten}. \nOne can easily check that the curvature of the Minkowski version\nof (\\ref{ball}) \nis {\\it constant} and {\\it negative}: $R = -4d(d-1)|\\epsilon| ^4$,\nindependent of the exact location of the $D$ brane. \n\nIt is interesting that the metric of the space time (\\ref{ball}) \nexhibits a coordinate singularity\nat $\\sum _{i=1}^d|y_i|^2=|\\epsilon|^{-4}$, which prevents a naive extension \nof the {\\it open} ball $B^d$ to the {\\it closed} ball \n${\\overline B}^d$, including the boundary sphere $S^{d-1}$. 
The metric\nthat extends to ${\\overline B}^d$ is provided by a conformal\ntransformation of (\\ref{ball})~\\cite{witten}: \n\\begin{equation}\n d {\\tilde s}^2={\\cal F}^2 ds^2 \n\\label{conformal}\n\\end{equation}\nChoosing, say, \n${\\cal F}=|\\epsilon |^{-4} - \\sum_{i=1}^d|y_i|^2$\nresults in $d{\\tilde s}^2$ being associated with \nthe metric on a sphere $S^{d-1}$ of radius \n$\\frac{1}{|\\epsilon|^2}$. In general, ${\\cal F}$ may be changed by any conformal\ntransformation, leading to a conformally invariant \nEuclidean $S^{d-1}$ space as the boundary of an \nAdS$_d$ space time, whose metric is invariant under the Lorentz group \n$SO(1,d)$. \n\nWe close this section by recalling that AdS space time is the only\ntype\nof constant-curvature background, for space time dimensionality $d > 2$,\nwhich is consistent with local supersymmetry~\\cite{salam}. This is of\nvital importance here, because\n$D$ branes are stable {\\it only} in superstring theories.\n\n\\section{Specification of the Local Theory in the AdS$_{11}$ Bulk} \n\nThe important property \nof AdS space times for the next part of our discussion\nis the existence of powerful theorems implying\nthat, if a classical \nfield theory is specified in the boundary of the AdS space, then it has a\n{\\it unique} extension to the bulk~\\cite{witten,lee}. \nThese theorems underly the\n{\\it holographic} nature of field\/string theories in AdS\nspace times, \nin the sense that all the information about the \nbulk AdS theory is {\\it encoded} in the\nboundary theory~\\cite{malda,witten}. \n\nThe question now arises, what is the bulk AdS$_{11}$ theory\nunderlying $M$ theory? Some clues are provided by the\nconstruction of AdS$_{11}$ given in the previous section.\nIt is natural to look for a theory that becomes local in\nthe short-distance limit. This theory should, moreover,\nhave a local symmetry that arises naturally in such a framework.\nThe natural candidates are gauge theory and gravity,\ncombined of course with supersymmetry.\nIn the case of gauge theory, since a conventional quadratic\nkinematic term is irrelevant by simple dimensional\ncounting in more than four space-time dimensions, the most\nlikely possibility is a Topological Gauge Theory (TGT) of\nChern-Simons type.\nFurthermore, since it is known in principle how \nsupergravity may arise from an underlying TGT~\\cite{chamseddine},\nthis may also provide a framework in which the notion of a\nspace-time metric emerges dynamically. The remaining issue is\nthe choice of gauge supergroup, and this is where the above\nconstruction of the AdS$_{11}$ target space-time geometry \ncan provide key input into the short-distance\nformulation of $M$ theory as a supersymmetric TGT. \n\nThe minimal supergroup that incorporates the\nspace-time symmetry of AdS$_{11}$ is $Osp(1|32,R)$, so this\nshould be a subsupergroup of the conjectured gauge supergroup.\nHowever,\nour world-sheet approach provides a hint that the \nfull gauge supergroup should be larger than this. We recall\nthat the departure from criticality inherent to \nthe non-trivial $D$-brane recoil that generates AdS$_{11}$\ncould be absorbed by introducing a time-like Liouville field.\nThis leads to an underlying $(2,10)$ space-time signature, or\n$(1,11)$ in the Euclideanized version needed for an adequate\ndefinition of the path integral. We also recall that\nthere is an isomorphism between the minimal\nsupergroup extension of the Lorentz group $SO(1,11)$ and $Osp(1|32,R)\n\\otimes Osp(1|32,R)$~\\cite{holten}. 
It is therefore natural\nto propose that this may play a r\\^ole in the supergroup of\nthe local TGT. Note, however, that the Liouville field\ndecouples in the conformal limit of zero recoil, and that \nthe background field is linear, so any\nunderlying $SO(1,11)$ or $Osp(1|32,R)\\otimes Osp(1|32,R)$\nsymmetry must be at least spontaneously broken.\nThe natural minimal possibility is an $Osp(1|32,R)\\otimes Osp(1|32,R)\n\\rightarrow Osp(1|32,R)$ symmetry-breaking pattern, with the\nbreaking accompanied by the appearance of an AdS$_{11}$ metric\nat the world-sheet Berezinskii-Kosterlitz-Thouless transition point.\n\nThis is similar to the symmetry-breaking pattern\nproposed by~\\cite{horava}. \nIn that analysis, \nthe particular gauge supergroup $Osp(1|32,R)\\otimes Osp(1|32,R)$\narose as the minimal supersymmetric\nextension of $Osp(1|32,R)$\nwith 64 supercharges~\\cite{holten,sierra}, \nwhich are necessary in $M$ theory \nto ensure {\\it parity invariance} and\nin order to obtain a consistent compactification of $M$ theory \nto a heterotic string with the gauge group $E_8 \\otimes\nE_8$~\\cite{hw}. \nIn our Liouville approach, such a group arises \nindependently and naturally \nfrom the presence of the `auxiliary' Liouville field.\n\n\\section{Relation to Two-Dimensional Structures}\n\nIt is known from the analysis in~\\cite{guna2} \nthat the contraction of $Osp(1|32, R) \\otimes Osp(1|32, R)$ \nwith the Poincare symmetry in 11-dimensional space time leads to \na single diagonal $Osp(1|32, R)$~\\footnote{The two factors \ncorrespond to different spinor representations of the $SO(1,11)$ \nalgebra~\\cite{guna,guna2}.}. \n\nFollowing~\\cite{guna}, we now recall that \n$Osp(1|32,R)$ has a two-dimensional maximal subsupergroup \n$Osp(16\/2,R)$, which has been argued to capture the dynamics \nof $D0$ particles in the matrix-model approach to M-theory~\\cite{matrix}.\nThe maximal even subgroup \nof $Osp(16\/2,R)$ is in turn $Sp(2,R) \\otimes SO(16)$. \nIt was argued in~\\cite{guna} that the \nfactor $Sp(2,R)$ corresponds to an AdS$_{2}$\nextension of the Poincare group\nin the longitudinal directions of the matrix $D$-brane theory. The\nconnection to this formulation of $D0$ particles supports the\nmotivation for the $D$-brane interpretation of the world-sheet\ndefects and the recoil calculation presented earlier.\n\nThe singleton representations of $Sp(2,R)$, which live \non the boundary of AdS$_2$, when \nexpanded in a `particle basis', \nconsist of an infinite tower of discrete-momentum states\nwith ever-increasing quantized $U(1)$ eigenvalues~\\cite{sierra}.\nSuch an infinite tower of states was identified in~\\cite{guna} \nwith the infinite tower of $D0$ branes with quantized \nlongitudinal momentum that appear in matrix theory in the \ninfinite-momentum frame~\\cite{matrix}. \nThis is consistent with the conjecture~\\cite{malda} \nthat the conformally invariant field theory of the singleton \nrepresentation on the boundary of AdS$_2$ \nis associated with an $N=16$~~~$U(n \\rightarrow \\infty)$ \nYang-Mills quantum-mechanical theory~\\cite{guna}, which in turn describes \nmatrix theory in the infinite-momentum frame. \n\nWe observe here that \n$Sp(2,R)$ is isomorphic to $SO(2,1)$, as well as to AdS$_2$.\nThis may be related to the possibility of associating \ntwo-dimensional space times with three-dimensional \nChern-Simons theories, whose dimensional reduction\nleads to AdS$_2$. 
This possibility recalls \nthat of two-dimensional stringy black-hole space times~\\cite{emnmonop},\nas briefly reviewed in section 2. The appearance of \na $D$-particle space time \nAdS$_2 \\otimes O(16)$ through the `breaking' (described by the\ncontraction with Poincare symmetry)\nof $Osp(1\/32,R) \\otimes Osp(1\/32,R)$ parallels the breaking of\n$W_{1+\\infty} \\otimes W_{1+\\infty} \\rightarrow\nW_{1+\\infty}$ in that case. \nIn the analysis of~\\cite{emnmonop}, \nthe association of the two-dimensional model \nwith a three-dimensional \nChern-Simons theory with $CP^1$ `magnon' fields $a,b$ that represent\nmatter away from the black-hole singularity, \nleads to an interesting symmetry-breaking pattern. \nA renormalization-group analysis has shown that space time appears as a \nnon-trivial (infrared) fixed point of the flow.\nWe present in the next section an alternative formulation of the\nappearance of a space-time metric in the full $M$ theory.\n\nThe appearance of $D0$ particles provides a nice consistency \ncheck of the approach we used in section 4, employing \nLiouville $D$-particle recoil to obtain \nAdS$_{11}$ dynamically. We now observe that \nAdS$_2$ structures can be associated with \ntopology change in AdS$_{11}$.\nTo see this, we first review briefly the relevant properties \nof AdS space times~\\cite{page,witten}. \nFor concreteness, we describe explicitly\nthe AdS$_4$ case of~\\cite{page}. The generalization \nto AdS$_d$, for general $d$, is straightforward~\\cite{witten}. \nThe Minkowski-signature AdS \nSchwarzschild black hole solution of~\\cite{page} \ncorresponds to a metric line element of the form: \n\\begin{equation}\n ds^2 = -V (dt)^2 + V^{-1} (dr)^2 + r^2 d\\Omega ^2\n \\label{adsbh}\n\\end{equation}\nwhere $d\\Omega ^2$ is the line element on a round two-sphere,\n$r$ the radial coordinate of the AdS space, \nand $t$ is the time coordinate. \nThe Euclidean version of the space time has the topology\n$ X_2 =B^2 \\otimes S^{n-1}$, where $n=3$ for~\\cite{page}.\n\nAccording to the analysis of~\\cite{page}, \nthere are two relevant critical temperatures in the AdS black-hole \nsystem: \\\\\n(i) the specific heat of a gas of \nblack holes changes sign at the lowest critical temperature $T_0$. \nFor $T < T_0$ there is only radiation, and the topology of \nthe finite-temperature space time \nis $X_1 = B^n \\otimes S^1$, where $n=3$ in~\\cite{page}. \\\\\n(ii) At temperatures above $T_0$,\nthe topology of the space time \nchanges to include black holes, becoming $ X_2 =B^2 \\otimes S^{n-1}$. \\\\\n(iii) For temperatures greater than a higher value $T > T_2$,\nthere is no equilibrium\nconfiguration without a black hole.~\\footnote{There is \nalso an intermediate temperature $T_1$: $T_0 < T\n< T_1 < T_2$, below which\nthe free energy of the black hole is positive, so the black hole \ntends to evaporate, and above which\nthe free energy of the configuration with the black hole \nand thermal radiation is lower than the corresponding \nconfiguration with just thermal radiation,\nso that the radiation tunnels to a black-hole state. 
In our approach,\nthis tunnelling may be described by world-sheet instantons~\\cite{emn,yung}, \nwhich we do not\nexplore further here.}\n \n\nThe extension of the above analysis\nto our 11-dimensional AdS$_{11}$ case is straightforward~\\cite{witten}.\nThe important point to notice, for our purpose, is the fact that the \nblack-hole space time $X_2$ includes the two-dimensional AdS$_2$ \nspace, on the boundary of which live the quantum-mechanical \n$D0$ particles, as discussed in~\\cite{guna} and mentioned above. \nTherefore, we associate the group-theoretic observations of~\\cite{guna}\nwith the topology change: $X_1 \\rightarrow X_2$ due to \nthe appearance of black-hole AdS \nspace times. As discussed above, such topology changes may be interpreted\nas corresponding to condensation of world-sheet vortex defects. \nNotice, however, that the r\\^ole of temperature in this case\nis played by the \ncentral charge deficit of the `matter' theory (\\ref{susybeta}).\nThus, when the matter central charge $d$ reaches the critical\nvalue, corresponding to the lowest temperature $T_0$ of~\\cite{page}, \nthe topology changes, in the sense that the vacuum becomes\ndominated by an unstable plasma of free vortex and monopole \nworld-sheet defects. \n\nThe presence of a discrete tower of states living on the boundary \nof AdS$_2$ has been argued~\\cite{guna} to be crucial for yielding \nthe correct description of the $D0$-particle quantum mechanics. \nSuch delocalized states are therefore viewed as \ngravitational degrees of freedom. In this respect, the attention of the \nalert reader should be called to a parallel phenomenon that arises\nin two-dimensional stringy black holes with matter.\nThere, conformal invariance on the world sheet requires,\nin a black-hole space time, the coupling to lowest-level zero-momentum\nstring `tachyon' states of discrete \nsolitonic delocalized states that belong to higher string levels~\\cite{chaudh}.\nIn the AdS$_2$ case, the discrete tower of states corresponds to \na generalization of these lowest zero-momentum `tachyon' states, rather\nthan the higher-spin delocalized states of the two-dimensional black\nhole.\n\nA final topic of relevance here is the tensoring of the\nsingleton representation in each factor of $Osp(1|32,R)$. \nThe resulting theory consists, according to~\\cite{guna2}, \nof a doubleton representation that lives on the boundary of \nAdS$_{11}$, which is a ten-dimensional Minkowski space, and is\nscale invariant. The next question concerns the coupling of\nsuch scale-invariant theories to superstrings,\nwhich is discussed in the next section. \n\n\\section{Strings as Wilson Loops in TGTs}\n\nInvoking some mean-field conjectures,\nHorava~\\cite{horava} has shown how one can derive \nthe field content of the 11-dimensional supergravity by\ncalculating Wilson loops $<W(C)>$ of `partons' in his\n11-dimensional TGT with gauge supergroup \n$Osp(1|32,R) \\otimes Osp(1|32,R)$. Horava~\\cite{horava}\ndid not provide a dynamical model for these `partons' and\nWilson loops. 
However, our Liouville string approach\nprovides a natural conjecture for their origins, and, \nas we shall see below, a rather modified proposal for\na local field-theory description of $M$ theory.\n\n\nAs a prelude to our results, we first review briefly the work \nof~\\cite{awada}, according to \nwhich certain scale-invariant \nGreen-Schwarz superstring theories in flat target space\nare equivalent to Abelian gauge theories.\nThis equivalence should be understood \nin the sense that\n\\begin{equation} \n<W(C)>~\\sim~e^{iS_\\sigma }\n\\label{awadaabel}\n\\end{equation}\nwhere $S_\\sigma $ is a world-sheet \naction that encodes the area of the Wilson loop, \nand $W(C)$ is some combination of \nobservables, expressed in terms of chiral currents\n$e^{\\int J.A}$, which \ngo beyond the standard Wilson loops. The action $S_\\sigma $ becomes a standard \nworld-sheet action if a string scale is generated dynamically\nby an appropriate condensation mechanism. \n\nThe analysis of~\\cite{awada} was performed \nfor four-dimensional target spaces, but it can be \ngeneralized straightforwardly to six and ten dimensions. \nFor reasons of concreteness and calculational simplicity,\nwe review the analysis in the \nfour-dimensional case, where the gauge field theory coupling is \ndimensionless. \n\nWe consider an \nAbelian supersymmetric gauge theory, described by a standard Maxwell \nLagrangian in a superfield form.~\\footnote{The extension of this\nanalysis to the non-Abelian case raises interesting technical issues\nthat are currently under study.} \nThe connection with string theory emerges by considering the Stokes\ntheorem on a two-dimensional surface $\\Sigma$, \nwhose boundary is the loop $C$.\nIf one parametrizes the curve $C$ by $\\tau$, then one\nmay write the exponent of the Wilson loop as \n\\begin{equation}\n S_{int} =ie \\int _{C} d\\tau A( X(\\tau)) \\frac{\\partial }{\\partial \\tau } X(\\tau) \n\\label{wilsonexp}\n\\end{equation}\nand the Stokes theorem tells us that\n\\begin{equation}\n S_{int} =\\frac{ie}{2}\\int _{\\Sigma (C)} d^2\\sigma \\epsilon^{ab}\nF_{ab}, \n\\qquad a, b =1,2,\n\\label{stokes} \n\\end{equation}\nwhere $X^M$, $M=1, \\dots D$, is a $D$-dimensional \nspace-time coordinate for the gauge theory. We denote \nby lower-case Latin indices \nthe two-dimensional coordinates of the surface $\\Sigma$, \nwhich plays the r\\^ole of the world-sheet of the string \nand is equivalent to the gauge \ntheory in question~\\cite{polyakov}. The quantity \n$F_{ab} = \\partial_a X^M \\partial _b X^N F_{NM} =\n\\partial _a A_b - \\partial _b A_a $\nis the pull-back of the Maxwell tensor on the world sheet\n$\\Sigma$, with $A_a$ the corresponding projection of the gauge field \non $\\Sigma$: \n\\begin{equation}\nA_a =v_a^M A_M \\qquad ; \\qquad v_a ^M \\equiv \\partial _a X^M , \na=1,2~; M=1, \\dots D(=4).\n\\label{pullback3}\n\\end{equation}\n{}From a two-dimensional view-point, \nthis looks like a Chern-Simons term for a two-dimensional \ngauge theory on $\\Sigma$, bounded by the loop $C$. 
\nThe world sheet `magnetic field' corresponding to (\\ref{pullback3})\nreads:\n\\begin{equation}\n{\\cal B}=\\epsilon^{\\alpha\\beta}\\partial_\\alpha A_\\beta =\n\\epsilon^{\\alpha\\beta}\\partial_\\alpha \\partial_\\beta X^M A(X)_M\n+ \\frac{1}{2}\\epsilon^{\\alpha\\beta}\\partial_\\alpha X^M \\partial_\\beta X^N\nF_{NM}\n\\label{pullmagn}\n\\end{equation}\nNotice that the presence of world-sheet vortices is associated \nwith the first term on the right-hand-side of (\\ref{pullmagn}),\nwhilst world-sheet monopoles are associated with the second term,\nwhich is also gauge invariant in target space. \n\nThe novel observation of~\\cite{awada}\nis the possibility, in a supersymmetric gauge theory, \nof constructing a second superstring-like \nobservable, in addition to the Wilson loop, which is again defined on\nthe two-surface $\\Sigma$, and is consistent with all the symmetries of the\ntheory.\nThe second observable is easily understood in the two-dimensional \nsuperfield formalism:\n\\begin{equation}\nZ^{{\\cal A}} \n\\equiv (X^M, \\theta ^m, \\theta ^{{\\dot m}})\n\\label{sf}\n\\end{equation}\nThe pull-back basis $v_a ^M$ in (\\ref{pullback3}) is now extended\nto $v_a^{{\\cal A}} = E^{{\\cal A}}_{{\\cal B}} \\partial _a z^{{\\cal B}}$,\nwith \nthe following components~\\cite{awada}: \n\\begin{eqnarray}\n&~&v_a^{\\alpha{\\dot \\alpha}} =\\partial _z x^{\\alpha{\\dot \\alpha}}\n-\\frac{i}{2}\\left(\\theta^\\alpha (\\sigma) \\partial _a \\theta ^{{\\dot \\alpha}} (\\sigma)\n+ \\theta^{{\\dot \\alpha}} (\\sigma) \\partial _a \\theta ^{\\alpha} (\\sigma)\\right) \n\\nonumber \\\\\n&~& v_a^\\alpha = \\partial _a \\theta ^\\alpha (\\sigma) \\nonumber \\\\\n&~& v_a^{{\\dot \\alpha}} = \\partial _a \\theta ^{{\\dot \\alpha }} (\\sigma) \n\\label{superspace}\n\\end{eqnarray}\nin standard notation~\\cite{superspace}, where Greek dotted and undotted \nindices denote superspace components, with $x^{\\alpha{\\dot \\alpha}}\n\\equiv X^M$, etc..\nFollowing~\\cite{awada}, we now define\n\\begin{eqnarray} \n&~& C_{ab}^{\\alpha\\beta} \\equiv \\frac{i}{2} \nv_{a{\\dot \\beta}}^{(\\alpha}\nv_{b}^{\\beta){\\dot \\beta}},\n\\nonumber \\\\\n&~& C_{ab}^\\alpha \\equiv v_a^{\\alpha{\\dot \\alpha}}v_{b{\\dot\n\\alpha}}: \\nonumber \\\\\n&~& \\qquad \nC^\\alpha =\\epsilon^{ab}C_{ab}^\\alpha , \\qquad C^{\\alpha\\beta}=\\epsilon^{ab}C_{ab}^{\\alpha\\beta}\n\\label{Cs}\n\\end{eqnarray}\nwith similar relations holding for appropriately-defined \ndotted components of $C$, as found in~\\cite{awada}. \nNote that $C^{\\alpha}$ vanishes in the absence\nof supersymmetry, whilst $C^{\\alpha\\beta}$ exists also\nin non-supersymmetric gauge theories. \nIn the presence of defects, as we shall later, this is \nno longer the case, and $C^\\alpha$ can have non-supersymmetric \nremnants. \n\nThe supersymmetric Wilson loop, expressing the interaction between the \nsuperparticle and a supersymmetric gauge theory in the approach\nof~\\cite{awada}, may now expressed as: \n\\begin{eqnarray}\n &~& W(C)=e^{S_{int}^{(1)}}, \\qquad \nS_{int}^{(1)} \\equiv \\frac{i}{2} \\int _{\\Sigma (C)} d^2\\sigma \\epsilon^{ab}\n{\\cal F}_{ab} \\nonumber \\\\\n&~&{\\cal F}_{ab} \\equiv \\epsilon_{ab}\\{\\frac{1}{2}C^{\\alpha\\beta}(\\sigma)\nD_\\alpha W_\\beta (x(\\sigma),\\theta (\\sigma)) + \\nonumber \\\\\n&~& C^\\alpha (\\sigma) W_\\alpha\n(x(\\sigma),\\theta (\\sigma)) + h.c. \\} \n\\label{susywilson}\n\\end{eqnarray}\nwhere $W_\\alpha (x(\\sigma),\\theta (\\sigma))$ is the chiral superfield \nof the supersymmetric Abelian gauge theory. 
\nThe exponent $S_{int}^{(1)}$ \nclearly reduces to the standard expression (\\ref{wilsonexp}) \nin non-supersymmetric cases.\n\nIt was pointed out in~\\cite{awada} that \nthere is a second superstring-like observable, $\\Psi (\\Sigma )$, \ndefined on the world-sheet surface $\\Sigma$,\nwhich is constructed out of the $C_{ab}^\\alpha $ components, \nwhich therefore - in the absence of world-sheet defects - \nexists only in supersymmetric gauge theories:\n\\begin{equation}\n \\Psi (\\Sigma ) \\equiv e^{iS_{int}^{(2)}}, \\qquad \nS_{int}^{(2)} \\equiv \\kappa \\int _{\\Sigma (C)} d^2\\sigma \n\\sqrt{-\\gamma}\\gamma ^{ab}C_{ab}^\\alpha (\\sigma) \nW_\\alpha (x(\\sigma), \\theta (\\sigma)) + h.c.\n\\label{second}\n\\end{equation}\nwhere $\\gamma ^{ab}$ is the metric on $\\Sigma$. This term,\nunlike the standard Wilson loop, \nis not\na total world-sheet derivative, and therefore lives in the bulk of the \nworld-sheet $\\Sigma$, and depends on the metric $\\gamma $. \nThe coupling constant $\\kappa$ \nis defined classically as an independent coupling. However,\none expects that quantum effects will relate it to the gauge coupling \nconstant $e$, a point we return to later. \nThis second observable has been expressed in~\\cite{awada} \nin terms of `chiral' currents on\n$\\Sigma$, located at the string source:\n\\begin{eqnarray}\n&~& S_{int}^{(2)} =\\int d^6Z \\left({\\cal J}^\\alpha \nW_\\alpha + h.c. \\right), \\nonumber \\\\\n&~& {\\cal J}^\\alpha \\equiv \\kappa \\int _{\\Sigma (C)} d^2\\sigma \\sqrt{-\\gamma}\\gamma ^{ab}C_{ab}^\\alpha (\\sigma) \\delta ^{(6)} (Z-Z(\\sigma)), \n\\nonumber \\\\\n&~& \\delta ^{(6)} (Z-Z(\\sigma)) =\\delta ^{(4)} (Z-Z(\\sigma))\\left(\\theta - \\theta (\\sigma)\\right)^2 \n\\label{chiral}\n\\end{eqnarray}\nUpon integrating out the gauge field components in (\\ref{second}), \ni.e., considering the vacuum expectation value \n$<\\Psi (\\Sigma )>$, where $< \\dots >$ denotes\naveraging with respect to the Maxwell action for the gauge field, \nthe authors of~\\cite{awada} \nhave obtained a superstring-like action, which is \nscale invariant in target space, as well as on the world sheet:\n\\begin{eqnarray}\n &~& <\\Psi (\\sigma )>_{Maxwell} =e^{S_0 + S_1} \\nonumber \\\\\n&~& S_0 \\equiv \\frac{\\kappa _0^2}{16\\pi} \\int _{\\Sigma } d^2 \\sigma \n\\sqrt{-\\gamma}\\gamma ^{ab}v_a^Mv_b^N\\sigma _M\\sigma _N, \\nonumber \\\\ \n&~& S_1 \\equiv \\frac{\\kappa _1^2}{4\\pi} \\int _{\\Sigma (C)} \n\\sqrt{-\\gamma} \\gamma ^{ab}v_a^M v_b^N \\eta_{MN} \\sigma^K\\sigma_K\n\\label{gaugefield}\n\\end{eqnarray}\nwhere upper-case Latin indices denote target-space indices ($M,N,K=1,\n\\dots d(=4$)), and \n\\begin{eqnarray}\n&~& v_a^M \\equiv \\partial_a X^M(\\sigma) - \ni{\\overline \\theta}^m(\\sigma)\\Gamma ^M \\partial _a \\theta _m (\\sigma) \n\\nonumber \\\\\n&~& \\sigma^M=\\frac{\\sqrt{-\\gamma}\\epsilon^{ab}}{\\sqrt{{\\rm det}G}}\\partial_a v_b^M, \\quad G_{ab} \\equiv v_a^Mv_b^N\\eta_{MN} \n\\label{fourcomp}\n\\end{eqnarray}\nin standard four-component notation in target space, where the $m$ are\nspinor indices, and the $\\Gamma ^M$ are four-dimensional Dirac matrices. \nThe dimensionless coupling constants $\\kappa _{0,1}$ appear arbitrary \nat the classical level, but one expects them to be related\nto the dimensionless gauge coupling $e$, in the quantum theory. 
\n\n\nThe important observation in~\\cite{awada} was the fact that the \nworld-sheet action $S_2$ (\\ref{second}) resembles the\nclassical Green-Schwarz superstring action in flat four-dimensional \ntarget space, provided that condensation occurs for the composite field\n\\begin{equation}\n\\Phi \\equiv \\sigma^M \\sigma_M,\\qquad M=1, \\dots D(=4), \n\\label{composite}\n\\end{equation}\nwhich may be interpreted as the dilaton in target space, so that \n\\begin{equation}\n\\frac{\\kappa _1^2}{4\\pi}<\\Phi>=\\mu_{string~tension}\n\\label{tension}\n\\end{equation}\nSuch condensation would enable the string tension $\\mu$ to be obtained\nfrom a gauge theory without dimensionful parameters.\nThe question of physical interest\nis what causes this condensation, which presumably \nis responsible for a `spontaneous breaking' of \nthe scale invariance of four-dimensional string theory. \n\nWe note that the above definition of\nthe composite field $\\Phi$\nmay be extended to higher-dimensional \ncases, which are of interest to us in this $M$-theory application, \nby simply contracting the integrand of (\\ref{second}) \nwith appropriate powers of $\\Phi $ so as to make the coupling constant \n$\\kappa$ dimensionless. This can be understood as the definition of the \nsuperstring observable $\\Psi $ in more than four target space-time \ndimensions. It is important \nto note that the dimensionality of the coupling constant $\\kappa$\nis the same as that of the gauge theory coupling $e$ only \nfor space-time dimensionality four, six and ten, where supersymmetric\ntheories exist as well. \n\nWe now present a scenario for the formation of \nsuch a condensate, which is an extension of the ideas of~\\cite{witten}. \nWe associate the second observable (\\ref{second}) \nof~\\cite{awada} to the condensation \nof non-trivial world-sheet defects. \nThe scenario is based on the recent \ninterpretation~\\cite{witten} of confinement in purely gluonic\nnon-Abelian gauge theories, by means of a \nholographic principle encoding \nconfinement quantum physics in the classical \ngeometry of uncompactified AdS$_5$ space times. \nIn such a picture, four-dimensional conformally-invariant \nMinkowski space time is \nviewed as the boundary of an AdS$_5$\nspace time, in the spirit of~\\cite{malda,witten}. \nMacroscopic black holes \ndisappear at temperatures below $T_0$,\nwhere only radiation-dominated\nuniverses exist~\\cite{page}, as discussed above. \nWitten~\\cite{witten}, has associated \nthis temperature to a confining-deconfining phase \ntransition for quarks, and stressed the fact that,\nfor spatial Wilson loops the area law and the associated string tension\nare obtained only for finite-temperature field theory, because of the\nconformal invariance of the zero-temperature four-dimensional gauge theory, \nwhose renormalization-group $\\beta$ function vanishes identically. \n\nIt is important to notice that condensation of \ncomposite operators \ncan occur for both vortex (\\ref{solution}) \nand monopole (\\ref{solution2}) configurations.\nLet us first examine the case \nof the manifold with topology $X_1 = B^n \\otimes S^1$. \nIn this case, one may consider the role of vortices on the world sheet \nof the string, wrapped around the compact dimension $S^1$. 
The\ncondensation of such vortices, bound into pairs with antivortices, \nresults in the quantity $\\Phi $ (\\ref{composite})\nacquiring a non-trivial vacuum expectation value $<\\Phi > \\ne 0$.\nIn the classical AdS picture described above, \nit is the non-trivial Planckian dynamics \nof a five-dimensional space time which is responsible for the above \nphenomenon. In such a case the standard Berezinskii-Kosterlitz-Thouless\nphase transition temperature for vortex condensation \non the world \nsheet of critical strings~\\cite{sathiap}\nmay be identified with $T_0$ of the AdS Black Hole space time. \n\n\n\\section{A Conjecture for the Structure and Dynamics of $M$ Theory} \n\nWe are now equipped to formulate a conjecture\non the structure and dynamics of $M$ theory. \nSince ten-dimensional Minkowski space time\nmay be considered as the boundary of AdS$_{11}$, which arises\ndynamically through the Liouville \ndressing of a quantum-fluctuating\n$D$ particle, we invoke the analysis of~\\cite{awada} for Abelian \ngauge theories, to conjecture that there exists a\nconformal field theory on the ten-dimensional Minkowski\nspace time ${\\cal M}$, which is dual\nto AdS$_{11}$ in the following sense~\\cite{malda}:\n\\begin{equation} \n< e^{\\int_{{\\cal M}} \\phi {\\cal O}} >_{CFT} =Z_{AdS_{11}}(\\phi) \n\\label{doubleton}\n\\end{equation}\nwhere the ${\\cal O}$ are appropriate local operators of the conformal \nfield theory. This is supported by the fact that the conformal group \nof ${\\cal M}$ is the same as $SO(2,10)$. Gunaydin has suggested~\\cite{guna2} \nthat the\nconformal field theory might be the doubleton field theory living on\n${\\cal M}$. \n\nAt this point, we should also remark \nthat, according to Horava~\\cite{horava}, the correspondence with the \n11-dimensional supergravity occurs through partons of the gauge group \n$Osp(1\/32, R)\\otimes Osp(1\/32,R)$ in an 11-dimensional Chern-Simons \ntopological gauge theory. This is different from \nthe conjecture (\\ref{doubleton}), which seems more natural from the \npoint of view of~\\cite{awada,malda}. \nHowever, the approach of~\\cite{horava} may be connected to the conjecture\n(\\ref{doubleton}), if one makes a connection between the \nChern-Simons gauge theory on $Osp(1\/32,R)\\otimes Osp(1\/32,R)$\nused in~\\cite{horava} and an appropriate string theory\nin a gravitational background in $12$ space-time dimensions.\nSuch a correspondence should be expected from the isomorphism of \nthe supergroup extension of\n$SO(1,11)$ with $Osp(1\/32,R)\\otimes Osp(1\/32,R)$. \nSumming this string theory, of Liouville type, \nover world-sheet genera in gravitational 12-dimensional backgrounds,\nas discussed above, produces a path integral over string \nbackgrounds, as discussed in~\\cite{emndbraneliouville}. \n\nThe fact that the 12-dimensional space-time backgrounds are not\nLorentz covariant is not a problem, in view of the non-covariant \nform of the Liouville theory. 
\nThe basic formula of~\\cite{emndbraneliouville} for such a string-theory\nspace quantization may be summarized as:\n\\begin{equation}\n {\\cal Z}=\\int Dg^i e^{-C[g^j] + \n\\int d^2\\sigma \\partial_\\alpha X^M \\partial^\\alpha X^N < \\partial^\\gamma J_{\\gamma,M} \\partial ^\\delta J_{\\delta,N}>_{g}} + \\dots \n\\label{stringtheoryspace}\n\\end{equation}\nwhere the $\\{ g^i \\}$ denotes an appropriate set of backgrounds, \nincluding the graviton, the $\\dots $ denote antisymmetric tensor and other \nstring sources, \n$C[g]$ is the Zamolodchikov central-charge action, which plays the \nr\\^ole of an effective (low-energy) target-space action of the string\ntheory, \nand the $J_M$ are Noether currents, corresponding to the \ntranslation invariance in target space time of the string. \nThe measure of integration $Dg^i$ arises from \nsumming over world-sheet genera. \n\nIt was argued in~\\cite{emndbraneliouville} that condensation \nin the two-point function of the currents may lead to \nwell-defined normalizable metric backgrounds, \ncoupled to the string source,\nprovided that the currents are logarithmic~\\cite{gurarie}. \nThe same would be true for \nantisymmetric tensor source terms, etc..\nThis means that there are appropriate \n$p$-brane solutions, obtainable as saddle points, with respect to \nthe various backgrounds ${\\bar g}^i$, of the \npartition function\n(\\ref{stringtheoryspace}). For instance, for graviton backgrounds,\n\\begin{equation}\n \\frac{\\delta}{\\delta G_{MN}} C[{\\bar g}] + V_{MN} =0\n\\label{saddle}\n\\end{equation}\nwhere the graviton vertex operator $V_{MN}$ plays the \nr\\^ole of an external string source. \n \nAmong such solutions there would be AdS backgrounds, \nsuch as the ones discussed above, which are already known to be\nassociated with logarithmic recoil operators~\\cite{kmw}. \nBased on these considerations, one may expect a connection of\nthe quantum theory of~\\cite{horava}:\n$< W(C)> \\sim {\\cal Z}$, \nwith the classical AdS\ntheory, c.f. the mean-field approximation in (\\ref{saddle}).\nIn that case, a classical AdS effective action\nwhich has a holographic property~\\cite{witten,malda}\nmay be viewed\nas the mean-field result of the quantum theory of the \nfluctuating stringy \nbackground (\\ref{stringtheoryspace}). In this picture, \nthe appearance of a \ndoubleton conformal field theory (\\ref{doubleton}) \non the boundary Minkowski space time\nmay well be valid~\\footnote{In the sense that, at least at present, \nwe do not see an apparent conflict between the two conjectures.},\nthereby unifying the conjecture (\\ref{doubleton}), also made\nin~\\cite{guna2},\nwith that made in \\cite{horava}. \nHowever, we re-emphasize that the above discussion should be considered\nconjectural at the present stage.\n~\\\\\n\\noindent\n{\\bf Acknowledgements}\n~\\\\\nThe work of D.V.N. is supported in part by the Department of Energy\nunder grant number DE-FG03-95-ER-40917.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe guidelines on liquidity stress testing in UCITS and AIFs produced by\n\\citet{ESMA-2020a} are rooted in the banking regulation defined by the\nBasel Committee on Banking Supervision \\citep{BCBS-2010,BCBS-2013}. For\ninstance, the redemption coverage ratio, which is the key instrument of LST\nprograms, is a copy-paste of the liquidity coverage ratio (LCR) in the Basel\nIII Accord. 
According to \\citet{BCBS-2008}, liquidity risk management in the\nbanking industry must be structured around three pillars: measurement,\nmanagement and monitoring. Beyond the redemption coverage ratio, which is\ntypically a measurement tool, \\citet{ESMA-2020a} adopt a similar approach by\nmixing the three Ms.\\smallskip\n\nLiquidity risk is an important topic for the banking sector\nbecause it concerns systemic risk. We face similar issues for the\nasset management industry because it can generate big market risks.\nSince liquidity risk is an ALM risk \\citep[Chapter 7]{Roncalli-2020}, it concerns both liabilities and assets. As mentioned by \\citet{Brunnermeier-2009}, the interconnectedness between funding liquidity and market liquidity amplifies the liquidity risk. This is obvious in stress periods, but this is even the case in normal periods when we consider the asset management industry. The reason is that redeeming investors impose negative externalities on the remaining investors:\n\\begin{quote}\n\\textquotedblleft \\textsl{Strategic interaction is a key determinant of\ninvestors' behavior in financial markets and institutions. When choosing\ntheir investment strategy, investors have to consider, not only the\nexpected fundamentals of the investment, but also the expected behavior of\nother investors, which could have a first-order effect on investment\nreturns. Particularly interesting are situations with payoff\ncomplementarities, where investors' incentives to take a certain action\nincrease if they expect that more investors will take such an action.\nPayoff complementarities are expected to generate a multiplier effect, by\nwhich they amplify the impact that shocks to fundamentals have on\ninvestors' behavior. Such amplification is often referred to as\n\\textit{financial fragility}}\\textquotedblright\\ \\citep[page\n239]{Chen-2010}.\n\\end{quote}\nThis \\textit{financial fragility} has been documented in several asset classes \\citep{Bouveret-2021, Chernenko-2020, Fricke-2021, Fricke-2020, Rohleder-2017, Goldstein-2017b}. The negative externalities and their major impact when considering stress periods explain that financial regulators have recently paid more attention to liquidity management in the asset management industry \\citep{AMF-2017, BaFin-2017, EFAMA-2020, ESRB-2017}, while the regulation of asset managers in terms of liquidity management was light in the 2000s. Nevertheless, introducing more stringent regulations in the asset management industry is not a new concept and dates back to the roadmap of the Financial Stability Board (FSB) when it was created in April 2009 after the 2008 Global Financial Crisis to monitor the stability of the financial system and manage systemic risk \\citep[page 453]{Roncalli-2020}.\\smallskip\n\nHowever, the lack of maturity and benchmarking is an obstacle for the development of liquidity stress testing in the asset management industry. One of the big challenges for regulators is standardizing models and practices. In the case of the banking industry, the Basel Committee has been successful in proposing statistical frameworks for market and credit risks. This is not the case in the asset management industry, where academic research is relatively invisible on the liability side. As such, most solutions are in-house and not published, implying limited distribution of best practices and, generally simplistic and naive methods being developed. 
Against this backdrop, it is not surprising that mathematical and statistical models are completely absent from regulatory publications, especially in the case of the ESMA guidelines on liquidity stress testing in asset management.\\smallskip\n\nThis paper completes a research project that began in April 2020\nand was organized into three streams. The first stream covered the liability side and funding liquidity modeling. In \\citet{Roncalli-lst1}, we introduced\ntwo statistical approaches that can be used to define a\nredemption shock scenario. The first one is the historical approach and\nconsiders non-parametric risk measures such as the historical or\nconditional value-at-risk. The second approach deals with frequency-severity\nmodels, which produces parametric risk measures and stress scenarios.\nThree of these probabilistic models are particularly interesting: the zero-inflated (or population-based) statistical model, the behavioral (or individual-based) model and the factor-based model. The second stream focused on the asset side and transaction cost modeling. In \\citet{Roncalli-lst2}, we proposed a two-regime model to estimate ex-ante transaction costs and market impacts. This model is an extension of the square-root model and considers trading limits in order to comply with the practices of asset managers.\nBased on proprietary and industry data, we were able to perform the calibration for large cap stocks, small cap stocks, sovereign bonds and corporate bonds. Moreover, we have detailed the analytics of liquidation rate, time to liquidation and liquidation shortfall to assess the liquidity risk profile of investment funds. The third stream corresponds to this research paper. The aim is to combine liability and asset risks in order to define the ALM tools. Therefore, this paper extensively mixes the previous models. For instance, a stress scenario may originate from the liabilities\nor the assets or both. Synthetic measures such as the funding gap or funding ratio are essential for asset-liability management. These measures are particularly exploited for the purpose of defining appropriate liquidation policies and the management tools that can be put in place. Besides traditional management methods, asset managers are paying more and more attention to liquidity buffers. The widespread use of cash buffers for the purpose of liquidity stress testing may have some significant impacts in terms of reducing or increasing systemic risk. The recent debate on cash buffering versus cash hoarding and the \\textquotedblleft \\textsl{dash for cash}\\textquotedblright\\ episode during the Covid-19 crisis in March 2020 demonstrate that the liquidity issue in asset management remains as before. This implies that asset managers must continue to develop the required tools and adopt more responsive tools. This is especially true for monitoring tools that must use higher frequency data.\\smallskip\n\nThe rest of the paper is organized as follows. Section 2 presents the liquidity measurement tools. We introduce the redemption coverage ratio (RCR) and the two computational approaches (time to liquidation and high-quality liquid assets). We also focus on the redemption liquidation policy and the differences between vertical and horizontal slicing. Compared to banks,\nreverse stress testing (RST) is more complex because two dimensions can be chosen, implying that we can define a liability or an asset RST scenario.\nSection 3 is dedicated to liquidity managements tools (LMTs). 
Besides swing pricing and special arrangements (redemption suspensions, gates, side pockets and in-kind redemptions), we extensively study the set-up of a liquidity buffer. We propose an optimization model that considers the costs and benefits of implementing a cash buffer and derive the optimal solution that depends on the risk premium of assets, the tracking error risk and the liquidation gain. Using the square-root transaction cost model, we obtain analytical formulas and test the impact of the different parameters. The liquidity monitoring tools are discussed in Section 4. We distinguish the macro-economic and micro-economic approaches. The macro-economic approach helps to define overall liquidity and is related to central bank liquidity and the economic outlook. This approach is extensively used by financial regulators and international bodies. In a liquidity stress testing framework, it must be complemented by a micro-economic approach that considers the daily liquidity at the asset class, security and issuer levels. Data collection from order books, market infrastructure and the trading desk of the asset manager is the key to successfully building a suitable monitoring system. Finally, Section 5 concludes the paper.\n\n\\section{Liquidity measurement tools}\n\nAmong the three Ms, measurement is certainly the most important and\ndifficult step of liquidity stress testing programs. Indeed, it encompasses\ntwo sources of uncertainty: liability risk and asset risk. As shown by\n\\citet{Roncalli-lst1}, there are two main approaches for measuring the\nliability risk. We can use an historical approach or a frequency-severity\nframework. For this latter, we also have the choice between three models:\nthe zero-inflated statistical model, the behavioral model or the factor-based model. On the asset risk side, things are simpler since we generally consider\nthe power-law model as a standard approach. However, calibrating the\nparameters remains a fragile exercise that is highly dependent on the historical data of the asset manager \\citep{Roncalli-lst2}.\\smallskip\n\nAs explained in the introduction, benchmarking will be a key factor for\nimproving these measures. Nevertheless, there is certainly another issue that\nis even more detrimental. Indeed, the definition of the concepts is not\nalways precise, and the regulators of the asset management industry are less\nprolific than the regulators of the banking industry. However, the devil is\nin the details. This is why we define the different measurement concepts more precisely in this section. First, we present the redemption coverage ratio and the two approaches for computing it. Then, we focus on the redemption liquidation policy, which must specify the appropriate decision in the case of a liquidity crisis. Finally, the regulation requires that the asset manager defines reverse stress testing scenarios and explores circumstances that might cause them to occur.\n\n\\subsection{Redemption coverage ratio}\n\nAccording to \\citet{ESMA-2020a}, the redemption coverage ratio (RCR) is\n\\textquotedblleft \\textit{a measurement of the ability of a fund's assets to\nmeet funding obligations arising from the liabilities side of the balance\nsheet, such as a redemption shock}\\textquotedblright. Except for\nthis definition\\footnote{It can be found on page 7 of the ESMA guidelines\n\\citep{ESMA-2020a}.}, there are no other references to this concept in the\nESMA guidelines. 
Therefore, we must explore other resources to\nclarify it, but they are few in number \\citep{Bouveret-2017, IMF-2017,\nESMA-2020b}.\\smallskip\n\nThe redemption coverage ratio was introduced by \\citet{Bouveret-2017},\nwho defines it as follows:\n\\begin{equation}\n\\limfunc{RCR}=\\frac{\\text{Liquid assets}}{\\text{Net outflows}}\n\\label{eq:RCR1}\n\\end{equation}%\nwhere net outflows and liquid assets correspond respectively to redemption\nshocks and the amount of the portfolio that can be liquidated over a given time horizon. There are two possible cases:\n\\begin{itemize}\n\\item if the RCR is above 1, then the fund's portfolio is sufficiently liquid to cope with the redemption scenario;\n\\item if the RCR is below 1, then the liquidity profile of the fund may be\n worsened when the redemption scenario occurs.\n\\end{itemize}\nIn this second case, the outcome will depend largely on the market liquidity\nconditions. Indeed, there is a pricing risk on the NAV because the fund will\nhave to sell illiquid assets in an illiquid market. The amount of additional\nassets to be sold is called the liquidity shortfall (LS):\n\\begin{equation}\n\\limfunc{LS}=\\max \\left( 0,\\text{Net outflows}-\\text{Liquid assets}\\right)\n\\label{eq:LS1}\n\\end{equation}%\nIn order to compare the liquidity profile of several funds, the measure\n$\\limfunc{LS}$ is expressed as a percentage of the fund's total net assets\n(TNA).\\smallskip\n\n\\begin{remark}\nThe RCR and LS measures refer to banking ALM concepts. Indeed, asset-liability management is based on two risk measures: the funding ratio\nand the funding gap \\citep[Chapter 7, page 376]{Roncalli-2020}. When the ALM\nis applied to liquidity risk, we refer to liquidity ratio and liquidity\ngap. It is obvious that the redemption coverage ratio is related to the\nliquidity (coverage) ratio, while the liquidity shortfall is equivalent to\nthe liquidity gap.\n\\end{remark}\n\nThe International Monetary Fund has used the redemption coverage ratio in the\ncase of its financial sector assessment program (FSAP) for two countries:\nLuxembourg in 2017 and the United States in 2020. These two FSAP exercises\nshowed that a significant proportion of the funds would have enough liquid\nassets to meet redemption shocks. However, the IMF found that the most vulnerable categories are HY and EM bond funds in Luxembourg \\citep{IMF-2017} and HY and loan mutual funds in the US \\citep{IMF-2020}. In the case of Luxembourg funds, Figure \\ref{fig:fsap_lux} shows that about 30 bond funds have an RCR below 1, and 50\\% of them have a liquidity shortfall greater than 10\\%, which is the borrowing limit for UCITS funds.\n\n\\begin{figure}[tbph]\n\\centering\n\\caption{LS and RCR for selected investment funds}\n\\label{fig:fsap_lux}\n\\vskip 10pt plus 2pt minus 2pt\\relax\n\\includegraphics[scale = 0.50]{fsap_lux.png}\n\\begin{flushleft}\n{\\small \\textit{Source}: \\citet[Figure 19, page 59]{IMF-2017}.}\n\\end{flushleft}\n\\vspace*{-10pt}\n\\end{figure}\n\n\\subsubsection{Time to liquidation approach}\n\n\\paragraph{Mathematical framework}\n\nWe consider a fund, whose asset structure is given by the vector $\\omega\n=\\left( \\omega _{1},\\ldots ,\\omega _{n}\\right) $\\ where $\\omega _{i}$ is the\nnumber of shares of security $i$ and $n$ is the number of securities that\nmake up the asset portfolio. 
By construction, the fund's total net assets are\nequal to:\n\\begin{equation}\n\\limfunc{TNA}=\\sum_{i=1}^{n}\\omega _{i}\\cdot P_{i} \\label{eq:TNA}\n\\end{equation}%\nwhere $P_{i}$ is the current price of security $i$. The mathematical\nexpressions of Equations (\\ref{eq:RCR1}) and (\\ref{eq:LS1}) are:\n\\begin{equation}\n\\limfunc{RCR}=\\frac{\\ensuremath{\\boldsymbol{\\mathpzc{A}}}}{\\ensuremath{\\boldsymbol{\\mathpzc{R}}}} \\label{eq:RCR2}\n\\end{equation}%\nand:%\n\\begin{equation}\n\\limfunc{LS}=\\max \\left( 0,\\ensuremath{\\boldsymbol{\\mathpzc{R}}}-\\ensuremath{\\boldsymbol{\\mathpzc{A}}}\\right) \\label{eq:LS2}\n\\end{equation}%\nwhere $\\ensuremath{\\boldsymbol{\\mathpzc{A}}}$ is the ratio of liquid assets in the fund and\n$\\ensuremath{\\boldsymbol{\\mathpzc{R}}}$ is the redemption shock expressed in \\%. Following\n\\citet{Roncalli-lst2}, the redemption shock expressed in dollars is equal to\n$\\ensuremath{\\mathbb{R}}=\\ensuremath{\\boldsymbol{\\mathpzc{R}}}\\cdot \\limfunc{TNA}$. Let $q=\\left(\nq_{1},\\ldots ,q_{n}\\right) $ be a redemption portfolio and $q_{i}\\left(\nh\\right) $ be the number of shares liquidated after $h$ trading\ndays\\footnote{We recall that $q_{i}\\left( h\\right) $ is equal to:\n\\begin{equation}\nq_{i}\\left( h\\right) =\\min \\left( \\left( q_{i}-\\sum_{k=0}^{h-1}q_{i}\\left(\nk\\right) \\right) ^{+},q_{i}^{+}\\right) \\label{eq:asset2}\n\\end{equation}%\nwhere $q_{i}\\left( 0\\right) =0$ and $q_{i}^{+}$ denotes the maximum number of\nshares that can be sold during a trading day for the asset $i$ \\citep[Section\n3.2, page 14]{Roncalli-lst2}.}. The amount of liquid assets is equal to the\namount of assets that can be sold:\n\\begin{equation}\n\\ensuremath{\\mathbb{A}}\\left( h\\right) =\\sum_{i=1}^{n}\\sum_{k=1}^{h}q_{i}\\left( k\\right)\n\\cdot P_{i} \\label{eq:asset1}\n\\end{equation}%\nBy definition, we have $\\ensuremath{\\mathbb{A}}\\left( h\\right) =\\ensuremath{\\boldsymbol{\\mathpzc{A}}}\\left(\nh\\right) \\cdot \\limfunc{TNA}$. We notice that asset liquidation requires a parameter $h$ to be defined, which is the time horizon. Therefore, it is better\nto define RCR and LS measures as follows:\n\\begin{equation}\n\\limfunc{RCR}\\left( h\\right) =\\frac{\\ensuremath{\\mathbb{A}}\\left( h\\right) }{%\n\\ensuremath{\\mathbb{R}}}=\\frac{\\ensuremath{\\boldsymbol{\\mathpzc{A}}}\\left( h\\right) }{\\ensuremath{\\boldsymbol{\\mathpzc{R}}}}\n\\label{eq:RCR3}\n\\end{equation}%\nand:%\n\\begin{equation}\n\\limfunc{LS}\\left( h\\right) =\\frac{\\max \\left( 0,\\ensuremath{\\mathbb{R}}-\\ensuremath{\\mathbb{A}}%\n\\left( h\\right) \\right) }{\\limfunc{TNA}}=\\ensuremath{\\boldsymbol{\\mathpzc{R}}}\\cdot \\max \\left(\n0,1-\\limfunc{RCR}\\left( h\\right) \\right) \\label{eq:LS3}\n\\end{equation}%\nSince $h$ is a liquidation time horizon, the previous computation method is\ncalled the time to liquidation (TTL) approach \\citep{Bouveret-2017}.\n\n\\paragraph{Relationship with the liquidation ratio}\n\nAs its name suggests, the time to liquidation approach is related to the\nliquidation ratio. 
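Before making the relationship with the liquidation ratio explicit, the time to liquidation approach can be illustrated with a short numerical sketch. The following code is a minimal implementation of Equations (\ref{eq:asset2}), (\ref{eq:asset1}), (\ref{eq:RCR3}) and (\ref{eq:LS3}); the holdings, prices, trading limits and the $20\%$ redemption shock are hypothetical values chosen for the illustration and do not correspond to any example of this paper.
\begin{verbatim}
# Minimal sketch of the time to liquidation (TTL) approach
def rcr_ls(q, P, q_plus, R_dollar, TNA, h):
    remaining = list(q)
    A_h = 0.0                                   # liquid assets A(h)
    for _ in range(h):                          # h trading days
        for i in range(len(q)):
            sell = min(remaining[i], q_plus[i]) # q_i(k), capped by the daily limit
            remaining[i] -= sell
            A_h += sell * P[i]
    rcr = A_h / R_dollar                        # RCR(h) = A(h) / R
    ls = max(0.0, R_dollar - A_h) / TNA         # LS(h) in % of TNA
    return rcr, ls

P      = [100.0, 50.0, 20.0]             # hypothetical prices
omega  = [10_000, 5_000, 2_000]          # hypothetical holdings (shares)
q_plus = [400, 150, 30]                  # hypothetical daily trading limits
TNA    = sum(n * p for n, p in zip(omega, P))
R_pct  = 0.20
q      = [R_pct * n for n in omega]      # redemption portfolio (pro-rata here)
print(rcr_ls(q, P, q_plus, R_pct * TNA, TNA, h=3))
\end{verbatim}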
Following \\citet{Roncalli-lst2}, the liquidation ratio\n$\\mathcal{LR}\\left( q;h\\right) $ is the proportion of the redemption scenario\n$q$ that is liquidated after $h$ trading days:\n\\begin{equation}\n\\mathcal{LR}\\left( q;h\\right) =\\frac{\\sum_{i=1}^{n}\\sum_{k=1}^{h}q_{i}\\left(\nk\\right) \\cdot P_{i}}{\\sum_{i=1}^{n}q_{i}\\cdot P_{i}} \\label{eq:LR1}\n\\end{equation}%\nBy definition, $\\mathcal{LR}\\left( q;h\\right) $ is between $0$ and $1$\nwhereas $\\limfunc{RCR}\\left( h\\right) \\geq 0$. Using Equation\n(\\ref{eq:asset1}), we deduce that:\n\\begin{equation}\n\\ensuremath{\\mathbb{A}}\\left( h\\right) =\\mathcal{LR}\\left( q;h\\right) \\cdot \\mathbb{V}%\n\\left( q\\right) \\label{eq:asset3}\n\\end{equation}%\nwhere $\\mathbb{V}\\left( q\\right) =\\sum_{i=1}^{n}q_{i}\\cdot P_{i}$ is the\nvalue function of the portfolio $q$. It follows that:%\n\\begin{equation}\n\\limfunc{RCR}\\left( h\\right) =\\frac{\\mathbb{V}\\left( q\\right) }{%\n\\ensuremath{\\mathbb{R}}}\\cdot \\mathcal{LR}\\left( q;h\\right) \\label{eq:RCR4}\n\\end{equation}%\nThe redemption coverage ratio can be seen as an extension of the concept of\nthe liquidation ratio when the liquidation portfolio $q$ corresponds to the pool of liquid assets and the redemption shock is defined without any reference to $q$. \\citet{Roncalli-lst2} define the liquidation period $h^{+}=\\left\\{ \\inf h:\\mathcal{LR}\\left( q;h\\right) =1\\right\\} $ as the number of trading days we need to liquidate the portfolio $q$. We can then have three cases:\n\\begin{enumerate}\n\\item The redemption coverage ratio is equal to the liquidation ratio if and only if the redemption scenario is equal to the value of the liquidation portfolio:%\n\\begin{equation}\n\\limfunc{RCR}\\left( h\\right) =\\mathcal{LR}\\left( q;h\\right) \\Leftrightarrow\n\\ensuremath{\\mathbb{R}}=\\mathbb{V}\\left( q\\right)\n\\end{equation}%\nSince $\\mathcal{LR}\\left( q;h\\right) $ is an increasing function of $h$ and $\\mathcal{LR}\\left( q;h^{+}\\right) =1$, we have:%\n\\begin{equation}\n\\left\\{\n\\begin{array}{ll}\n\\limfunc{RCR}\\left( h\\right) <1 & \\text{if }h<h^{+} \\\\\n\\limfunc{RCR}\\left( h\\right) =1 & \\text{if }h\\geq h^{+}%\n\\end{array}%\n\\right.\n\\end{equation}\n\n\\item If $\\mathbb{V}\\left( q\\right) >\\ensuremath{\\mathbb{R}}$, we have $\\limfunc{RCR%\n}\\left( h\\right) >\\mathcal{LR}\\left( q;h\\right) $ and:%\n\\begin{equation}\n\\limfunc{RCR}\\left( h\\right) =\\frac{\\mathbb{V}\\left( q\\right) }{%\n\\ensuremath{\\mathbb{R}}}>1\\qquad \\forall \\,h\\geq h^{+}\n\\end{equation}\n\n\\item If $\\mathbb{V}\\left( q\\right) <\\ensuremath{\\mathbb{R}}$, we have $\\limfunc{RCR%\n}\\left( h\\right) <\\mathcal{LR}\\left( q;h\\right) $ and:%\n\\begin{equation}\n\\limfunc{RCR}\\left( h\\right) <1\\qquad \\forall \\,h\\geq 0\n\\end{equation}\n\\end{enumerate}\nEquation (\\ref{eq:RCR4}) shows that the redemption coverage ratio is an\nincreasing function of $h$. From a risk management perspective, the RCR is\nbelow one if the value $\\mathbb{V}\\left( q\\right) $ of liquid assets is lower\nthan the redemption shock $\\ensuremath{\\mathbb{R}}$ or if the time to liquidation is\nnot acceptable. Let $h^{\\star }=\\left\\{ \\inf h:\\limfunc{RCR}\\left( h\\right)\n>1\\right\\} $ be the number of trading days we need to absorb the redemption\nshock. The shorter the period $h^{\\star }$ is, the better the liquidity profile. Indeed, if the period $h^{\\star }$ is too long and even if $\\limfunc{RCR}\\left( h^{\\star }\\right) >1$, we cannot consider that the criterion is satisfied. This is why the risk management department must define an acceptable time to liquidation $\\tau _{h}$. 
In this case, the liquidity profile of the fund is appropriate if and only if $\\limfunc{RCR}\\left( \\tau _{h}\\right) >1$. By definition, $\\tau _{h}$ depends on the asset class. In the case of public equities, $\\tau _{h}$ is equal to a few days, whereas $\\tau _{h}$ may range from a few weeks to several months for private equities, depending on the liquidity objective of the investment fund.\\smallskip\n\nSimilarly, the liquidity shortfall $\\limfunc{LS}\\left( h\\right) $ can\nbe seen as an extension of the liquidation shortfall, which is defined as\n\\textquotedblleft \\textsl{the remaining redemption that cannot be fulfilled\nafter one trading day}\\textquotedblright\\ \\citep[Section 3.2.3, page\n18]{Roncalli-lst2}:\n\\begin{equation}\n\\mathcal{LS}\\left( q\\right) =1-\\mathcal{LR}\\left( q;1\\right)\n\\end{equation}%\nIndeed, we have:%\n\\begin{equation}\n\\limfunc{LS}\\left( h\\right) =\\ensuremath{\\boldsymbol{\\mathpzc{R}}}\\cdot \\max \\left( 0,1-\\frac{%\n\\mathbb{V}\\left( q\\right) }{\\ensuremath{\\mathbb{R}}}\\cdot \\mathcal{LR}\\left(\nq;h\\right) \\right)\n\\end{equation}%\nIn the case where $\\mathbb{V}\\left( q\\right) =\\ensuremath{\\mathbb{R}}$, we obtain:\n\\begin{eqnarray}\n\\limfunc{LS}\\left( h\\right) &=&\\ensuremath{\\boldsymbol{\\mathpzc{R}}}\\cdot \\max \\left( 0,1-%\n\\mathcal{LR}\\left( q;h\\right) \\right) \\notag \\\\\n&=&\\ensuremath{\\boldsymbol{\\mathpzc{R}}}\\cdot \\left( 1-\\mathcal{LR}\\left( q;h\\right) \\right)\n\\notag \\\\\n&=&\\ensuremath{\\boldsymbol{\\mathpzc{R}}}\\cdot \\mathcal{LS}\\left( q;h\\right)\n\\end{eqnarray}%\nwhere $\\mathcal{LS}\\left( q;h\\right) =1-\\mathcal{LR}\\left( q;h\\right) $ is\nthe \\textit{generalized} liquidation shortfall, that is the remaining\nredemption that cannot be fulfilled after $h$ trading days. While the\nliquidation shortfall is calculated with one trading day, the liquidity\nshortfall can be calculated with $h\\leq \\tau _{h}$. In the other cases, the\nliquidity shortfall is not equal to the product of the redemption rate\n$\\ensuremath{\\boldsymbol{\\mathpzc{R}}}$ and the generalized liquidation shortfall because we have:\n\\begin{equation}\n\\limfunc{LS}\\left( h\\right) =\\ensuremath{\\boldsymbol{\\mathpzc{R}}}\\cdot \\max \\left( 0,1-\\frac{%\n\\mathbb{V}\\left( q\\right) }{\\ensuremath{\\mathbb{R}}}\\cdot \\left( 1-\\mathcal{LS}%\n\\left( q;h\\right) \\right) \\right)\n\\end{equation}%\nNevertheless, we always verify that:\n\\begin{equation}\n0\\leq \\limfunc{LS}\\left( h\\right) \\leq \\ensuremath{\\boldsymbol{\\mathpzc{R}}}\n\\end{equation}%\nBy construction, the liquidity shortfall cannot exceed the redemption rate.\n\n\\paragraph{Portfolio distortion}\n\nSince the asset structure of the fund is given by the portfolio $\\omega\n=\\left( \\omega _{1},\\ldots ,\\omega _{n}\\right) $, the portfolio weights are\nequal to $w\\left( \\omega \\right) =\\left( w_{1}\\left( \\omega \\right) ,\\ldots\n,w_{n}\\left( \\omega \\right) \\right) $ where:\n\\begin{equation}\nw_{i}\\left( \\omega \\right) =\\frac{\\omega _{i}\\cdot P_{i}}{%\n\\sum_{j=1}^{n}\\omega _{j}\\cdot P_{j}} \\label{eq:w-omega}\n\\end{equation}%\nLet $q=\\left( q_{1},\\ldots ,q_{n}\\right) $ be the redemption scenario. 
It\nfollows that the redemption weights are given by:\n\\begin{equation}\nw_{i}\\left( q\\right) =\\frac{q_{i}\\cdot P_{i}}{\\sum_{j=1}^{n}q_{j}\\cdot P_{j}}\n\\label{eq:w-q}\n\\end{equation}%\nAfter the liquidation of $q$, the new asset structure is equal to $\\omega\n-q$, and the new weights of the portfolio become:\n\\begin{equation}\nw_{i}\\left( \\omega -q\\right) =\\frac{\\left( \\omega _{i}-q_{i}\\right) \\cdot\nP_{i}}{\\sum_{j=1}^{n}\\left( \\omega _{j}-q_{j}\\right) \\cdot P_{j}}\n\\label{eq:w-omega_q}\n\\end{equation}%\nExcept in the case of the proportional rule $q_{i}\\propto \\omega _{i}$, there\nis no reason that $w_{i}\\left( \\omega -q\\right) =w_{i}\\left( \\omega \\right)\n$. In fact, we have\\footnote{The weight difference $\\Delta w_{i}\\left( \\omega\n\\mid q\\right) $ is equal to:\n\\begin{eqnarray*}\n\\Delta w_{i}\\left( \\omega \\mid q\\right) &=&w_{i}\\left( \\omega -q\\right)\n-w_{i}\\left( \\omega \\right) \\\\\n&=&\\frac{\\left( \\omega _{i}-q_{i}\\right) \\cdot P_{i}}{\\mathbb{V}\\left(\n\\omega \\right) -\\mathbb{V}\\left( q\\right) }-\\frac{\\omega _{i}\\cdot P_{i}}{%\n\\mathbb{V}\\left( \\omega \\right) } \\\\\n&=&\\frac{\\mathbb{V}\\left( \\omega \\right) \\cdot \\left( \\omega\n_{i}-q_{i}\\right) \\cdot P_{i}-\\left( \\mathbb{V}\\left( \\omega \\right) -%\n\\mathbb{V}\\left( q\\right) \\right) \\cdot \\omega _{i}\\cdot P_{i}}{\\left(\n\\mathbb{V}\\left( \\omega \\right) -\\mathbb{V}\\left( q\\right) \\right) \\cdot\n\\mathbb{V}\\left( \\omega \\right) } \\\\\n&=&\\frac{\\mathbb{V}\\left( q\\right) \\cdot \\omega _{i}\\cdot P_{i}-\\mathbb{V}%\n\\left( \\omega \\right) \\cdot q_{i}\\cdot P_{i}}{\\left( \\mathbb{V}\\left( \\omega\n\\right) -\\mathbb{V}\\left( q\\right) \\right) \\cdot \\mathbb{V}\\left( \\omega\n\\right) } \\\\\n&=&\\frac{\\mathbb{V}\\left( q\\right) \\cdot w_{i}\\left( \\omega \\right) \\cdot\n\\mathbb{V}\\left( \\omega \\right) -\\mathbb{V}\\left( \\omega \\right) \\cdot\nw_{i}\\left( q\\right) \\cdot \\mathbb{V}\\left( q\\right) }{\\left( \\mathbb{V}%\n\\left( \\omega \\right) -\\mathbb{V}\\left( q\\right) \\right) \\cdot \\mathbb{V}%\n\\left( \\omega \\right) } \\\\\n&=&\\frac{\\mathbb{V}\\left( q\\right) }{\\left( \\mathbb{V}\\left( \\omega \\right) -%\n\\mathbb{V}\\left( q\\right) \\right) }\\left( w_{i}\\left( \\omega \\right)\n-w_{i}\\left( q\\right) \\right)\n\\end{eqnarray*}%\n}:%\n\\begin{eqnarray}\nw_{i}\\left( \\omega -q\\right) &=&w_{i}\\left( \\omega \\right) +\\Delta\nw_{i}\\left( \\omega \\mid q\\right) \\notag \\\\\n&=&w_{i}\\left( \\omega \\right) +\\frac{\\mathbb{V}\\left( q\\right) }{\\left(\n\\mathbb{V}\\left( \\omega \\right) -\\mathbb{V}\\left( q\\right) \\right) }\\left(\nw_{i}\\left( \\omega \\right) -w_{i}\\left( q\\right) \\right)\n\\end{eqnarray}\nThe previous analysis can be extended to the case $h<h^{+}$. Moreover, choosing a liquidation portfolio such that $\\mathbb{V}\\left( q\\right) >\\ensuremath{\\mathbb{R}}$ is a better\nchoice when it is possible. However, this constraint is not always satisfied\nand is highly dependent on the value $\\tau _{h}$ of the time horizon. 
In fact, the optimal solution necessarily depends on $\\tau _{h}$ and is given by the following optimization problem:\n\\begin{eqnarray}\nq^{\\star }\\left( \\tau _{h}\\right) &=&\\arg \\max \\limfunc{RCR}\\left( \\tau\n_{h}\\right) \\notag \\\\\n&\\text{s.t.}&\\left\\{\n\\begin{array}{l}\nq\\propto \\omega \\\\\nq\\geq \\mathbf{0}_{n}%\n\\end{array}%\n\\right.\n\\end{eqnarray}%\nBy construction, the solution is independent from the value $\\ensuremath{\\mathbb{R}}\n$ of the redemption shock since we have:\n\\begin{equation}\n\\arg \\max \\limfunc{RCR}\\left( \\tau _{h}\\right) :=\\arg \\max \\ensuremath{\\mathbb{A}}\\left(\n\\tau _{h}\\right)\n\\end{equation}%\nWe obtain a trivial combinatorial problem. Indeed, the solution must satisfy\nthe following set of constraints:\n\\begin{equation}\n\\left\\{\n\\begin{array}{l}\nq\\propto \\omega \\\\\nq_{i}\\leq \\min \\left( \\tau _{h}\\cdot q_{i}^{+},\\omega _{i}\\right)\n\\end{array}%\n\\right.\n\\end{equation}%\nWe deduce that:%\n\\begin{equation}\nq^{\\star }\\left( \\tau _{h}\\right) =\\varphi \\left( \\tau _{h}\\right) \\cdot\n\\omega\n\\end{equation}%\nwhere:%\n\\begin{equation}\n\\varphi \\left( \\tau _{h}\\right) =\\inf_{i=1,\\ldots ,n}\\min \\left( \\tau\n_{h}\\cdot \\dfrac{q_{i}^{+}}{\\omega _{i}},1\\right) \\label{eq:RCR-optimal1}\n\\end{equation}%\nMoreover, we have:%\n\\begin{eqnarray}\n\\ensuremath{\\mathbb{A}}\\left( \\tau _{h}\\right) &=&\\sum_{i=1}^{n}\\left(\n\\sum_{k=1}^{h}q_{i}\\left( k\\right) \\right) P_{i} \\notag \\\\\n&=&\\sum_{i=1}^{n}q_{i}^{\\star }\\left( \\tau _{h}\\right) \\cdot P_{i} \\notag \\\\\n&=&\\varphi \\left( \\tau _{h}\\right) \\left( \\sum_{i=1}^{n}\\omega _{i}\\cdot P_{i} \\right)\n\\notag \\\\\n&=&\\varphi \\left( \\tau _{h}\\right) \\cdot \\limfunc{TNA}\n\\end{eqnarray}%\nWe conclude that the redemption coverage rate is equal to the ratio between $\\varphi \\left( \\tau _{h}\\right) $ and $\\ensuremath{\\boldsymbol{\\mathpzc{R}}}$:\n\\begin{equation}\n\\limfunc{RCR}\\left( \\tau _{h}\\right) =\\frac{\\varphi \\left( \\tau _{h}\\right)\n}{\\ensuremath{\\boldsymbol{\\mathpzc{R}}}} \\label{eq:RCR-optimal2}\n\\end{equation}\n\n\\begin{example}[optimal pro-rata liquidation]\n\\label{ex:rcr2} We consider the optimal pro-rata liquidation when the\nredemption shock $\\ensuremath{\\boldsymbol{\\mathpzc{R}}}$ is equal to $20\\%$ and the time horizon\n$\\tau_h$ varies from one trading day to one trading week.\n\\end{example}\n\nIn Table \\ref{tab:rcr2-1}, we indicate the optimal value $\\varphi \\left( \\tau\n_{h}\\right) $ for each time horizon $\\tau _{h}$. We also report\\footnote{We\ndon't need to report the statistics for $h\\geq \\tau _{h}$ because we have\n$\\mathcal{LR}\\left( q;h\\right) =\\mathcal{LR}\\left( q;\\tau _{h}\\right) $,\n$\\ensuremath{\\mathbb{A}}\\left( h\\right) =\\ensuremath{\\mathbb{A}}\\left( \\tau _{h}\\right) $,\n$\\limfunc{RCR}\\left( h\\right) =\\limfunc{RCR}\\left( \\tau _{h}\\right) $ and\n$\\limfunc{LS}\\left( h\\right) =\\limfunc{LS}\\left( \\tau _{h}\\right) $.}\n$\\mathcal{LR}\\left( q;h\\right) $, $\\ensuremath{\\mathbb{A}}\\left( h\\right) $,\n$\\limfunc{RCR}\\left( h\\right) $ and $\\limfunc{LS}\\left( h\\right) $ for $h\\leq\n\\tau _{h}$. When $\\tau _{h}=1$, the optimal liquidation portfolio is equal to\n$\\left( 20\\,000,13\\,795,2\\,317,9\\,216,3\\,470,804\\right) $. The redemption\ncoverage ratio is equal to $22.98\\%$, implying a high liquidity shortfall\nrepresenting $15.40\\%$ of the total net assets. 
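Indeed, these figures can be recovered directly from Equations (\ref{eq:RCR-optimal2}) and (\ref{eq:LS3}): with $\varphi \left( 1\right) \simeq 4.596\%$ (reported as $4.60\%$ in Table \ref{tab:rcr2-1}), we obtain $\limfunc{RCR}\left( 1\right) =0.04596/0.20=22.98\%$ and $\limfunc{LS}\left( 1\right) =20\%\times \left( 1-0.2298\right) =15.40\%$ of the total net assets.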
When $\\tau _{h}=2$, the\noptimal portfolio $q^{\\star}$ becomes\n$\\left(40\\,000,27\\,589,4\\,633,18\\,433,6\\,941, 1\\,609\\right) $. The redemption\ncoverage ratio is then equal to $45.97\\%$ whereas the liquidity shortfall\nrepresents $10.81\\%$ of the total net assets. In Exercise \\ref{ex:rcr1}, the\nliquidation period $h^{+}$ was equal to five trading days, and we obtained\n$\\limfunc{RCR}\\left( 5\\right) =100\\%$. We notice that we achieve a better\nredemption coverage ratio with the optimal pro-rata liquidation rule. Indeed,\nwe have $\\limfunc{RCR}\\left( 5\\right) =114.92\\%$.\n\n\\begin{table}[tbph]\n\\centering\n\\caption{Computation of the RCR (Example \\ref{ex:rcr2}, optimal pro-rata liquidation)}\n\\label{tab:rcr2-1}\n\\begin{tabular}{ccccccc}\n\\hline\n\\multirow{2}{*}{$\\tau_h$} & $\\varphi \\left( \\tau _{h}\\right)$ &\n\\multirow{2}{*}{$h$} & $\\mathcal{LR}\\left( q;h\\right) $ & $\\ensuremath{\\mathbb{A}}\\left( h\\right) $ & $%\n\\limfunc{RCR}\\left( h\\right) $ & $\\limfunc{LS}\\left( h\\right) $ \\\\\n & (in \\%) & & (in \\%) & (in \\$ mn) & (in \\%) & (in \\%) \\\\ \\hline\n $1$ & ${\\hspace{5pt}}4.60$ & $1$ & $100.00$ & ${\\hspace{5pt}}6.515$ & ${\\hspace{5pt}}22.98$ & $15.40$ \\\\ \\hline\n\\mrm{2}{$2$} & \\mrm{2}{${\\hspace{5pt}}9.19$} & $1$ & ${\\hspace{5pt}}79.18$ & $10.317$ & ${\\hspace{5pt}}36.39$ & $12.72$ \\\\\n & & $2$ & $100.00$ & $13.030$ & ${\\hspace{5pt}}45.97$ & $10.81$ \\\\ \\hline\n\\mrm{3}{$3$} & \\mrm{3}{$13.79$} & $1$ & ${\\hspace{5pt}}63.66$ & $12.443$ & ${\\hspace{5pt}}43.89$ & $11.22$ \\\\\n & & $2$ & ${\\hspace{5pt}}90.02$ & $17.595$ & ${\\hspace{5pt}}62.07$ & ${\\hspace{5pt}}7.59$ \\\\\n & & $3$ & $100.00$ & $19.545$ & ${\\hspace{5pt}}68.95$ & ${\\hspace{5pt}}6.21$ \\\\ \\hline\n\\mrm{4}{$4$} & \\mrm{4}{$18.39$} & $1$ & ${\\hspace{5pt}}54.81$ & $14.284$ & ${\\hspace{5pt}}50.39$ & ${\\hspace{5pt}}9.92$ \\\\\n & & $2$ & ${\\hspace{5pt}}79.18$ & $20.633$ & ${\\hspace{5pt}}72.79$ & ${\\hspace{5pt}}5.44$ \\\\\n & & $3$ & ${\\hspace{5pt}}93.17$ & $24.280$ & ${\\hspace{5pt}}85.65$ & ${\\hspace{5pt}}2.87$ \\\\\n & & $4$ & $100.00$ & $26.060$ & ${\\hspace{5pt}}91.93$ & ${\\hspace{5pt}}1.61$ \\\\ \\hline\n\\mrm{5}{$5$} & \\mrm{5}{$22.98$} & $1$ & ${\\hspace{5pt}}47.13$ & $15.353$ & ${\\hspace{5pt}}54.16$ & ${\\hspace{5pt}}9.17$ \\\\\n & & $2$ & ${\\hspace{5pt}}70.74$ & $23.044$ & ${\\hspace{5pt}}81.29$ & ${\\hspace{5pt}}3.74$ \\\\\n & & $3$ & ${\\hspace{5pt}}85.68$ & $27.911$ & ${\\hspace{5pt}}98.46$ & ${\\hspace{5pt}}0.31$ \\\\\n & & $4$ & ${\\hspace{5pt}}94.54$ & $30.795$ & $108.64$ & ${\\hspace{5pt}}0.00$ \\\\\n & & $5$ & $100.00$ & $32.575$ & $114.92$ & ${\\hspace{5pt}}0.00$ \\\\ \\hline\n\\end{tabular}\n\\vspace*{-15pt}\n\\end{table}\n\n\\begin{remark}\nSince the optimal portfolio $q^{\\star }\\left( \\tau _{h}\\right) $ does not\ndepend on the redemption shock $\\ensuremath{\\mathbb{R}}$, $\\ensuremath{\\mathbb{A}}\\left( \\tau\n_{h}\\right) $ indicates the maximum redemption shock that can be absorbed,\nimplying that:\n\\begin{equation*}\n\\ensuremath{\\mathbb{R}}\\leq \\ensuremath{\\mathbb{A}}\\left( \\tau _{h}\\right) \\Rightarrow \\limfunc{%\nRCR}\\left( \\tau _{h}\\right) \\geq 1\n\\end{equation*}%\nBy definition, the maximum admissible redemption shock is equal to\n$\\ensuremath{\\mathbb{R}}\\left( \\tau _{h}\\right) =\\ensuremath{\\mathbb{A}} \\left( \\tau _{h}\\right) $ or\n$\\ensuremath{\\boldsymbol{\\mathpzc{R}}}\\left( \\tau _{h}\\right) =\\varphi \\left( \\tau _{h}\\right) $.\nFor instance, the maximum admissible redemption shock 
is equal to $\\$6.515$\nmn (or $4.60\\%$ of the TNA) when the time horizon is set to one trading day.\nFigure \\ref{fig:rcr2b} shows the evolution of $\\ensuremath{\\boldsymbol{\\mathpzc{R}}}\\left(\n\\tau_{h}\\right)$ with respect to $\\tau _{h}$.\n\\end{remark}\n\n\\begin{figure}[h!]\n\\centering\n\\caption{Maximum admissible redemption shock in \\% (Example \\ref{ex:rcr2}, optimal pro-rata liquidation)}\n\\label{fig:rcr2b}\n\\vskip 10pt plus 2pt minus 2pt\\relax\n\\includegraphics[width = \\figurewidth, height = \\figureheight]{rcr2b}\n\\end{figure}\n\n\\begin{example}[waterfall liquidation]\n\\label{ex:rcr3} We now consider the waterfall liquidation. In this case, the\nfund manager liquidates assets in order of their liquidity starting from the\nmost liquid ones. The redemption shock is still equal to $20\\%$.\n\\end{example}\n\n\\begin{table}[tbph]\n\\centering\n\\caption{Computation of the RCR (Example \\ref{ex:rcr3}, waterfall liquidation)}\n\\label{tab:rcr3-1}\n\\begin{tabular}{ccccc}\n\\hline\n\\multirow{2}{* }{$h$} & $\\mathcal{LR}\\left( q;h\\right) $ & $\\ensuremath{\\mathbb{A}}\\left( h\\right) $ & $%\n\\limfunc{RCR}\\left( h\\right) $ & $\\limfunc{LS}\\left( h\\right) $ \\\\\n& (in \\%) & (in \\$ mn) & (in \\%) & (in \\%) \\\\ \\hline\n$1$ & $11.80$ & $16.727$ & $ 59.01$ & $8.20$ \\\\\n$2$ & $23.38$ & $33.136$ & $116.90$ & $0.00$ \\\\\n$3$ & $34.06$ & $48.274$ & $170.30$ & $0.00$ \\\\\n$4$ & $44.21$ & $62.661$ & $221.05$ & $0.00$ \\\\\n$5$ & $52.53$ & $74.459$ & $262.67$ & $0.00$ \\\\\n$6$ & $57.55$ & $81.572$ & $287.76$ & $0.00$ \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{figure}[tbph]\n\\centering\n\\caption{Maximum admissible redemption shock in \\% (pro-rata vs. waterfall liquidation)}\n\\label{fig:rcr3b}\n\\vskip 10pt plus 2pt minus 2pt\\relax\n\\includegraphics[width = \\figurewidth, height = \\figureheight]{rcr3b}\n\\end{figure}\n\nIn the waterfall approach, there are no constraints on the liquidation\nportfolio $q$, which is equal to the fund's portfolio $\\omega $. In this\ncase, the redemption coverage ratio is entirely determined by the trading\nlimits $q^{+}$ and the current portfolio $\\omega $. Every day, we sell $%\nq_{i}^{+}$ shares of security $i$ until there is nothing left -- $%\nq_{i}\\left( h\\right) =0$. Results are given in Table \\ref{tab:rcr3-1}. Since\nthere are no constraints on the asset structure of the portfolio $\\omega -q$,\nwe obtain higher values of the redemption coverage ratio compared to the\nnaive or optimal pro-rata liquidation approach. Indeed, we have $\\limfunc{RCR%\n}\\left( 1\\right) =59.01\\%$, but $\\limfunc{RCR}\\left( 2\\right) =116.90\\%$. In\nthis example, we have $\\limfunc{RCR}\\left( \\tau _{h}\\right) >1$ when $\\tau\n_{h}\\geq 2$. By construction, the waterfall approach will always give higher\nredemption coverage ratios than the pro-rata approach. To illustrate this\nproperty, we compare the maximum admissible redemption shock in Figure\n\\ref{fig:rcr3b} for the two approaches.\n\n\\subsubsection{High-quality liquid assets approach}\n\n\\paragraph{Mathematical framework}\n\nIn the high-quality liquid assets (HQLA) method, the amount of liquid\nassets is estimated by splitting securities by HQLA classes and applying\nliquidity weights. We assume that we have $m$ HQLA classes. Let $\\func{ccf}_{k}$\ndenote the liquidity weight or the cash conversion factor (CCF) of\nthe $k^{\\mathrm{th}}$ HQLA class. 
The ratio of liquid assets in the fund is\ndefined by:\n\\begin{equation}\n\\ensuremath{\\boldsymbol{\\mathpzc{A}}}=\\sum_{i=1}^{n}w_{i}\\left( \\omega \\right) \\cdot \\limfunc{CCF}\\nolimits_{\\ell\n\\left( i\\right) }\n\\end{equation}%\nwhere $\\ell \\left( i\\right) $ indicates the HQLA class $k$ of security $i$.\nWe have:\n\\begin{eqnarray}\n\\ensuremath{\\boldsymbol{\\mathpzc{A}}} &=&\\sum_{i=1}^{n}w_{i}\\left( \\omega \\right) \\cdot \\left(\n\\sum_{k=1}^{m}\\mathds{1}\\left\\{ i\\in k\\right\\} \\cdot \\limfunc{CCF}\\nolimits_{k}\\right)\n\\notag \\\\\n&=&\\sum_{k=1}^{m}\\left( \\sum_{i=1}^{n}\\mathds{1}\\left\\{ i\\in k\\right\\} \\cdot\nw_{i}\\left( \\omega \\right) \\right) \\cdot \\limfunc{CCF}\\nolimits_{k} \\notag \\\\\n&=&\\sum_{k=1}^{m}w_{k}\\cdot \\limfunc{CCF}\\nolimits_{k}\n\\end{eqnarray}%\nwhere $w_{k}$ is the weight of the $k^{\\mathrm{th}}$ HQLA class\\footnote{We also have:%\n\\begin{eqnarray}\n\\ensuremath{\\mathbb{A}} &=&\\ensuremath{\\boldsymbol{\\mathpzc{A}}}\\cdot \\func{TNA} \\notag \\\\\n&=&\\sum_{k=1}^{m}\\left( w_{k}\\cdot \\func{TNA}\\right) \\cdot \\limfunc{CCF}\\nolimits_{k}\n\\notag \\\\\n&=&\\sum_{k=1}^{m}\\func{TNA}\\nolimits_{k}\\cdot \\limfunc{CCF}\\nolimits_{k}\n\\end{eqnarray}%\nwhere $\\func{TNA}\\nolimits_{k}$ is the dollar amount of the $k^{\\mathrm{th}}$ HQLA\nclass.}. We deduce that:%\n\\begin{equation}\n\\func{RCR}=\\frac{\\sum_{k=1}^{m}w_{k}\\cdot \\limfunc{CCF}\\nolimits_{k}}{\\ensuremath{\\boldsymbol{\\mathpzc{R}}}}\n\\end{equation}%\nand:%\n\\begin{equation}\n\\func{LS}=\\ensuremath{\\boldsymbol{\\mathpzc{R}}}\\cdot \\max \\left( 0,1-\\frac{\\sum_{k=1}^{m}w_{k}%\n\\cdot \\limfunc{CCF}\\nolimits_{k}}{\\ensuremath{\\boldsymbol{\\mathpzc{R}}}}\\right)\n\\end{equation}\n\n\\paragraph{Definition of HQLA classes}\n\nThe term HQLA refers to the liquidity coverage ratio (LCR) introduced in the\nBasel III framework \\citep{BCBS-2010, BCBS-2013}. An asset is considered to be a high-quality liquid asset if it can be easily converted into cash.\nTherefore, the concept of HQLA is related to asset quality and asset\nliquidity. The first property indicates if the asset can be sold without\ndiscount, while the second property indicates if the asset can be easily and\nquickly sold \\citep{Roncalli-2020}. Thus, the LCR ratio measures whether or not the bank has the necessary assets to face a one-month stressed period of outflows. The stock of HQLA is computed by defining eligible assets and applying haircut values. For instance, corporate debt securities rated above\n\\textsf{BBB$-$} are eligible, implying that high yield bonds are not. Then, a\nhaircut of $15\\%$ (resp. $50\\%$) is applied to corporate bonds rated\n\\textsf{AA$-$} or higher (resp. between \\textsf{A$+$} and \\textsf{BBB$-$}).\nSince the time horizon of the LCR is one month, the underlying idea is that\n(1) high yield bonds can be illiquid for one month, (2) investment grade\ncorporate bonds can be sold during the month but with a discount, (3)\ncorporate bonds rated \\textsf{AA$-$} or higher can lose $15\\%$ of their\nvalue in the month and (4) corporate bonds rated between \\textsf{A$+$}\nand \\textsf{BBB$-$} can lose $50\\%$ of their value in the\nmonth.\\smallskip\n\nIn Table \\ref{tab:hqla1}, we report the HQLA matrix given by\n\\citet{Bouveret-2017} and \\citet{IMF-2017}, which corresponds to the HQLA\nmatrix of the Basel III Accord using the following rule:\n\\begin{equation}\n\\limfunc{CCF}\\nolimits_{k}=1-H_{k}\n\\end{equation}%\nwhere $H_{k}$ is the haircut value. 
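As a minimal numerical sketch of the HQLA approach, the code below maps the fund weights to HQLA classes and applies cash conversion factors; the class weights, the CCF values and the $30\%$ redemption shock are purely illustrative assumptions and are not taken from the tables of this paper.
\begin{verbatim}
# Minimal sketch of the HQLA approach: A = sum_k w_k * CCF_k and RCR = A / R
ccf = {"cash": 1.00, "sov_ig": 0.85, "corp_ig": 0.50, "hy": 0.00, "equity": 0.50}

weights = {"cash": 0.05, "sov_ig": 0.25, "corp_ig": 0.40,
           "hy": 0.10, "equity": 0.20}     # hypothetical fund structure

R = 0.30                                   # redemption shock (in % of TNA)
A = sum(w * ccf[k] for k, w in weights.items())
RCR = A / R
LS = R * max(0.0, 1.0 - RCR)
print(f"A = {A:.2%}, RCR = {RCR:.2f}, LS = {LS:.2%}")
\end{verbatim}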
By construction, the CCF value is equal\nto $100\\%$ for cash. For equities, it is equal to $50\\%$. Although common\nequity shares are highly liquid, we can face a price drop before the\nliquidation. Therefore, this value of $50\\%$ mainly reflects a discount\nrisk. Sovereign bonds are assumed to be a perfect substitute for the cash if\nthe credit rating of the issuer is \\textsf{AA$-$} or higher. Otherwise, the\nCCF is equal to $85\\%$ and $50\\%$ for other IG sovereign bonds and $0\\%$ for\nHY sovereign bonds. In the case of corporate bonds, securities rated below\n\\textsf{BBB$-$} receive a CCF of $0\\%$, while the CCF is respectively equal to $50\\%$ and $85\\%$ for \\textsf{BBB$-$} to \\textsf{A$+$} and \\textsf{AA$-$} to \\textsf{AAA}. For securitization, the CCFs are the same as for corporate bonds, except the category \\textsf{BBB$-$} to \\textsf{BBB$+$} for which the CCF is set to zero.\n\n\\begin{table}[tbph]\n\\centering\n\\caption{Cash conversion factors}\n\\label{tab:hqla1}\n\\begin{tabular}{cccccc}\n\\hline\nCredit & \\mr{Cash} & Sovereign & Corporate & \\mr{Securitization} & \\mr{Equities} \\\\\nRating & & bonds & bonds & & \\\\ \\hline\n\\textsf{AA$-$} to \\textsf{AAA} & \\mrm{4}{$100\\%$} & $100\\%$ & $85\\%$ & $85\\%$ & \\mrm{4}{$50\\%$} \\\\\n\\textsf{A$-$} to \\textsf{A$+$} & & ${\\hspace{5pt}}85\\%$ & $50\\%$ & $50\\%$ & \\\\\n\\textsf{BBB$-$} to \\textsf{BBB$+$} & & ${\\hspace{5pt}}50\\%$ & $50\\%$ & ${\\hspace{5pt}}0\\%$ & \\\\\nBelow \\textsf{BBB$-$} & & ${\\hspace{10pt}}0\\%$ & ${\\hspace{5pt}}0\\%$ & ${\\hspace{5pt}}0\\%$ & \\\\ \\hline\n\\end{tabular}\n\\begin{flushleft}\n{\\small \\textit{Source}: \\citet[Table 6, page 14]{Bouveret-2017} and \\citet[Box 2, page 56]{IMF-2017}.}\n\\end{flushleft}\n\\vspace*{-10pt}\n\\end{table}\n\n\\begin{remark}\n\\citet[Exhibit 38, page 26]{ESMA-2019b} uses the same HQLA matrix, except\nfor securitization products. In this case, the CCFs are between $65\\%$ and $%\n93\\%$ if the credit rating of the structure is between \\textsf{AA-} and\n\\textsf{AAA}, and $0\\%$ otherwise.\n\\end{remark}\n\nAs noticed by \\citet{ESMA-2019b}, \\textquotedblleft \\textsl{the HQLA approach\nis very attractive from an operational point of view since it is easy to\ncompute and interpret}\\textquotedblright. However, this approach has three\ndrawbacks. First, the HQLA matrix proposed by the IMF and ESMA is a copy\/paste of\nthe HQLA matrix proposed by the Basel Committee, suggesting that the implicit\ntime horizon $\\tau _{h}$ is one month or 21 trading days. However, the time\nhorizon is never mentioned, implying that there is a doubt about the IMF and ESMA's true intentions. Second, the granularity of the HQLA matrix is\nquite coarse. For instance, there is no distinction between large cap and\nsmall cap stocks. In the case of sovereign bonds, the CCR only depends on the\ncredit rating. However, we know that some bonds are more liquid than others\neven if they belong to the same category of credit rating. For example,\nsovereign bonds issued by France, Germany, the UK and the US are more liquid than sovereign bonds issued by Belgium, Denmark, Finland, Ireland, Japan,\nNetherlands and Sweden\\footnote{This can be measured by the turnover ratio.}.\nWe observe the same issue with peripheral debt securities (Greece, Italy,\nPortugal, Spain) and EM bonds. In the case of corporate bonds, this problem is even more serious, because liquidity is not only an issuer-related question. 
For instance, the maturity impacts the liquidity of the bonds issued by the same company. The last drawback concerns the absence of the portfolio structure in the computation of the RCR. Indeed, the RCR depends neither on the portfolio holdings nor on the portfolio concentration. Therefore, the HQLA method is a specific top-down approach, which only focuses on asset classes. Two equity funds will have the same redemption coverage ratio for the same redemption shock (top left-hand panel in Figure \\ref{fig:hqla1}). For example, we have $\\func{RCR}=2.5$ if $\\ensuremath{\\boldsymbol{\\mathpzc{R}}}=20\\%$. The RCR is below one if the redemption shock is greater than $50\\%$. For a high yield fund, the RCR is equal to zero whatever the value of the redemption shock (bottom right-hand panel\nin Figure \\ref{fig:hqla1}). For a balanced fund, comprised of $50\\%$\nIG bonds and $50\\%$ public equities, we obtain the following bounds:\n\\begin{equation}\n\\frac{50\\%}{\\ensuremath{\\boldsymbol{\\mathpzc{R}}}}\\leq \\func{RCR}\\leq \\frac{75\\%}{\\ensuremath{\\boldsymbol{\\mathpzc{R}}}}\n\\end{equation}%\nTherefore, it is obvious that the HQLA method is a macro-economic approach, that can make sense for regulators to monitor the liquidity risk at the industry level, but it is not adapted for comparing the liquidity risk of two funds.\n\n\\begin{figure}[tbph]\n\\centering\n\\caption{Redemption coverage ratio in \\% with the HQLA approach}\n\\label{fig:hqla1}\n\\vskip 10pt plus 2pt minus 2pt\\relax\n\\includegraphics[width = \\figurewidth, height = \\figureheight]{hqla1}\n\\end{figure}\n\n\\paragraph{Implementation of the HQLA approach}\n\nBecause of the previous comments, asset managers that would like to\nimplement the HQLA approach must take into account the following\nconsiderations:\n\\begin{itemize}\n\\item The HQLA matrix must be more granular.\n\\item The asset manager must use different time horizons.\n\\item The calibration of the cash conversion factor mixes two factors\\footnote{See\nAppendix \\ref{appendix:hqla} on page \\pageref{appendix:hqla}\nfor the derivation of this result. A more conservative formula is\n$\\limfunc{CCF}\\nolimits_{k}\\left( \\tau _{h}\\right) =\\limfunc{LF}%\n\\nolimits_{k}\\left( \\tau _{h}\\right) \\cdot \\left( 1-\\limfunc{DF}%\n\\nolimits_{k}\\left( \\tau _{h}\\right) \\right)$.}:\n\\begin{equation}\n\\limfunc{CCF}\\nolimits_{k}\\left( \\tau _{h}\\right) =\\limfunc{LF}%\n\\nolimits_{k}\\left( \\tau _{h}\\right) \\cdot \\left( 1-\\limfunc{DF}%\n\\nolimits_{k}\\left( \\frac{\\tau _{h}}{2}\\right) \\right)\n\\label{eq:hqla2}\n\\end{equation}%\nwhere $\\limfunc{LF}_{k}\\left( \\tau _{h}\\right) $ is the (pure) liquidity\nfactor and $\\limfunc{DF}\\nolimits_{k}\\left( \\tau _{h}\\right) $ is the\ndiscount (or drawdown) factor.\n\\item The liquidity factor $\\limfunc{LF}_{k}\\left( \\tau _{h}\\right) $ is an\nincreasing function of $\\tau _{h}$. It indicates the proportion of the HQLA\nbucket that can be sold in $\\tau _{h}$ trading days. By definition, we have\n$\\limfunc{LF}\\nolimits_{k}\\left( 0\\right) =0$ and\n$\\limfunc{LF}\\nolimits_{k}\\left( \\infty \\right) =1$.\n\n\\item The drawdown factor $\\limfunc{DF}\\nolimits_{k}\\left( \\tau _{h}\\right)$ is an increasing function of $\\tau _{h}$. It indicates the loss value of the HQLA bucket in a worst-case scenario of a price drop after $\\tau _{h}$ trading days. 
By definition, we have $\\limfunc{DF}\\nolimits_{k}\\left( 0\\right) =0$ and $\\limfunc{DF}\\nolimits_{k}\\left( \\infty \\right) \\leq 1$.\n\\end{itemize}\nConcerning the HQLA classes, we can consider more granularity concerning the\nasset class. For example, we can distinguish DM vs. EM equities, LC vs. SC\nequities, etc. Moreover, we can introduce the specific risk factor of the\nfund, which encompasses two main dimensions: the fund's size and its\nportfolio structure. For instance, liquidating a fund of $\\$100$\nmn is different to liquidating a fund of $\\$10$ bn. Similarly, the\nliquidation of two funds with the same size can differ because of the weight\nconcentration difference. Indeed, liquidating a S\\&P 500 index fund of $\\$1$\nbn is different to liquidating an active fund of $\\$1$ bn that is\nconcentrated on $10$ American stocks. Therefore, the cash conversion factor\nbecomes:\n\\begin{equation}\n\\limfunc{CCF}\\nolimits_{k,j}\\left( \\tau _{h}\\right) =\\limfunc{LF}%\n\\nolimits_{k}\\left( \\tau _{h}\\right) \\cdot \\left( 1-\\limfunc{DF}%\n\\nolimits_{k}\\left( \\frac{\\tau _{h}}{2}\\right) \\right) \\cdot \\left( 1-%\n\\limfunc{SF}\\nolimits_{k}\\left( \\limfunc{TNA}\\nolimits_{j},\\mathcal{H}%\n_{j}\\right) \\right) \\label{eq:hqla3}\n\\end{equation}%\nwhere $\\limfunc{SF}\\nolimits_{k}\\in \\left[ 0,1\\right] $ is the specific risk\nfactor associated to the fund $j$. This is a decreasing function of the fund\nsize $\\limfunc{TNA}\\nolimits_{j}$ and the Herfindahl index $\\mathcal{H}_{j}$\nof the portfolio. Concerning the time horizon, $\\tau _{h}$ can be one day,\ntwo days, one week, two weeks or one month. Finally, the three functions\n$\\limfunc{LF}_{k}\\left( \\tau _{h}\\right) $, $\\limfunc{DF}\\nolimits_{k}\\left( \\tau _{h}\\right) $\nand $\\limfunc{SF}\\nolimits_{k}\\left( \\limfunc{TNA}\\nolimits_{j},\\mathcal{H}_{j}\\right)$\ncan be calibrated using standard econometric procedures.\\smallskip\n\nA basic specification of the liquidity factor is:\n\\begin{equation}\n\\limfunc{LF}\\nolimits_{k}\\left( \\tau _{h}\\right) =\\min \\left( 1.0,\\lambda\n_{k}\\cdot \\tau _{h}\\right)\n\\end{equation}%\nwhere $\\lambda _{k}$ is the selling intensity. For the drawdown factor, it\nis better to use a square root function\\footnote{This is what we observe when we compute the value-at-risk of equity indices. For instance, we have reported the historical value-at-risk of the S\\&P 500 index in Figure \\ref{fig:hqla3} on page \\pageref{fig:hqla3} for different confidence levels $\\alpha $. We obtain a square-root shape. In risk management, the square-root-of-time rule is very popular and is widely used for modeling drawdown functions \\citep[page 46]{Roncalli-2020}.}:\n\\begin{equation}\n\\limfunc{DF}\\nolimits_{k}\\left( \\tau _{h}\\right) =\\min \\left( \\limfunc{MDD}%\n\\nolimits_{k},\\eta _{k}\\cdot \\sqrt{\\tau _{h}}\\right)\n\\end{equation}%\nwhere $\\limfunc{MDD}\\nolimits_{k}$ is the maximum drawdown and $\\eta _{k}$ is\nthe loss intensity of the HQLA class. Let us consider the example of a large\ncap equity fund, whose total net assets are equal to $\\$1$ bn. The redemption\nshock is set to $\\$400$ mn. We assume that $\\lambda _{k}=5\\%$ per day, $\\eta\n_{k}=6.25\\%$ and $\\limfunc{MDD}\\nolimits_{k}=50\\%$. Results are reported in\nFigure \\ref{fig:hqla2}. We notice that the RCR depends on the value of $\\tau\n_{h}$. For small values of $\\tau _{h}$ (less than 10 days), the RCR is below\n1. 
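This behaviour can be reproduced with a short script. The sketch below only uses the parametric forms and the parameters quoted above ($\lambda _{k}=5\%$ per day, $\eta _{k}=6.25\%$, $\limfunc{MDD}\nolimits_{k}=50\%$, a $\$400$ mn shock on a $\$1$ bn large cap fund); the specific risk factor is deliberately ignored at this stage.
\begin{verbatim}
import math

lam, eta, mdd = 0.05, 0.0625, 0.50   # selling intensity, loss intensity, max drawdown
R = 400 / 1000                       # redemption shock: $400 mn on a $1 bn fund

def LF(tau):                         # liquidity factor
    return min(1.0, lam * tau)

def DF(tau):                         # drawdown factor (square-root-of-time rule)
    return min(mdd, eta * math.sqrt(tau))

def CCF(tau):                        # CCF(tau_h) = LF(tau_h) * (1 - DF(tau_h / 2))
    return LF(tau) * (1.0 - DF(tau / 2.0))

for tau in (1, 5, 10, 20, 60):
    print(f"tau_h = {tau:2d}   CCF = {CCF(tau):.3f}   RCR = {CCF(tau) / R:.2f}")
# The RCR crosses one around tau_h = 10 trading days.
\end{verbatim}
Incidentally, these values coincide with the first column ($\$1$ bn, $\mathcal{H}=0.01$) of Table \ref{tab:hqla5} below, for which the specific risk factor is equal to zero.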
For large values of $\\tau _{h}$ (greater than 10 days), the RCR is above 1\nbecause the liquidation factor overtakes the drawdown factor. Finally, we\nobserve that the CCF and RCR functions are increasing and then decreasing\nwith respect to the time horizon\\footnote{This is normal since we combine an\nincreasing linear function with a decreasing square-root function.}. We now\nconsider a second fund with the same assets under management, which is\ninvested in small cap stocks. In this case, we assume that $\\lambda _{k}$ is\nreduced by a factor of two and $\\eta _{k}$ is increased by 20\\%. Results are\ngiven in Figure \\ref{fig:hqla2}. We verify that the small cap fund has a\nlower RCR than the large cap fund.\\smallskip\n\n\\begin{figure}[tbph]\n\\centering\n\\caption{Specification of the cash conversion factor}\n\\label{fig:hqla2}\n\\vskip 10pt plus 2pt minus 2pt\\relax\n\\includegraphics[width = \\figurewidth, height = \\figureheight]{hqla2}\n\\end{figure}\n\n\\begin{figure}[tbph]\n\\centering\n\\caption{Specification of the specific risk factor}\n\\label{fig:hqla4}\n\\vskip 10pt plus 2pt minus 2pt\\relax\n\\includegraphics[width = \\figurewidth, height = \\figureheight]{hqla4}\n\\end{figure}\n\nAs explained previously, we should consider the specific risk of the\nfund. We propose the following formula:\n\\begin{equation}\n\\limfunc{SF}\\nolimits_{k}\\left( \\limfunc{TNA},\\mathcal{H}\\right) =\\min\n\\left( \\xi _{k}^{\\mathrm{size}}\\left( \\frac{\\limfunc{TNA}}{\\limfunc{TNA}%\n\\nolimits^{\\star }}-1\\right) ^{+}+\\xi _{k}^{\\mathrm{concentration}}\\left(\n\\sqrt{\\frac{\\mathcal{H}}{\\mathcal{H}^{\\star }}}-1\\right) ^{+},\\limfunc{SF}%\n\\nolimits^{+}\\right)\n\\end{equation}%\nwhere $\\limfunc{TNA}$ and $\\mathcal{H}$ are the total net assets and the\nHerfindahl index of the fund, which is computed as\n$\\mathcal{H}=\\sum_{i=1}^{n}w_{i}^{2}\\left( \\omega \\right) $. By definition,\nwe have $n^{-1}\\leq \\mathcal{H}\\leq 1$. $\\limfunc{TNA}\\nolimits^{\\star }$ and\n$\\mathcal{H}^{\\star }$ are two thresholds. Below these two limits,\n$\\limfunc{SF}\\nolimits_{k}\\left( \\limfunc{TNA},\\mathcal{H}\\right) $ is equal\nto zero. $\\xi _{k}^{\\mathrm{size}}$ and $\\xi _{k}^{\\mathrm{concentration}}$\nare two coefficients that control the importance of the size and\nconcentration risks. Moreover, $\\limfunc{SF}\\nolimits^{+}$ indicates the\nmaximum value that can be taken by the specific risk since we have the\nfollowing inequalities:\n\\begin{equation}\n\\left\\{\n\\begin{array}{l}\n0\\leq \\limfunc{SF}\\nolimits^{+}\\leq 1 \\\\\n0\\leq \\limfunc{SF}\\nolimits_{k}\\left( \\limfunc{TNA},\\mathcal{H}\\right) \\leq\n\\limfunc{SF}\\nolimits^{+}%\n\\end{array}%\n\\right.\n\\end{equation}%\nFigure \\ref{fig:hqla4} illustrates the specific risk of the fund when\n$\\limfunc{TNA}\\nolimits^{\\star }=\\$1$ bn, $\\mathcal{H}^{\\star }=1\/100$, $\\xi\n_{k}^{\\mathrm{size}}=10\\%$, $\\xi _{k}^{\\mathrm{concentration}}=25\\%$ and\n$\\limfunc{SF}\\nolimits^{+}=0.80$. 
We have also reported the two components\n$\\limfunc{SF}\\nolimits_{k}^{\\mathrm{size}}\\left( \\limfunc{TNA}\\right) $ and\n$\\limfunc{SF}\\nolimits_{k}^{\\mathrm{concentration}}\\left( \\mathcal{H}\\right)\n$:\n\\begin{equation}\n\\limfunc{SF}\\nolimits_{k}\\left( \\limfunc{TNA},\\mathcal{H}\\right) =\\min\n\\left( \\limfunc{SF}\\nolimits_{k}^{\\mathrm{size}}\\left( \\limfunc{TNA}\\right) +\n\\limfunc{SF}\\nolimits_{k}^{\\mathrm{concentration}}\\left( \\mathcal{H}\\right) ,\n\\limfunc{SF}\\nolimits^{+}\\right)\n\\end{equation}%\nIt is better to use additive components than multiplicative components,\nbecause the specific risk tends quickly to the cap value\n$\\limfunc{SF}\\nolimits^{+}$ in this last case.\n\n\\begin{example}\n\\label{ex:hqla5} We assume that $\\lambda _{k}=5\\%$ per day, $\\eta\n_{k}=6.25\\%$, $\\limfunc{MDD}\\nolimits_{k}=50\\%$,\n$\\limfunc{TNA}\\nolimits^{\\star }=\\$1$ bn, $\\mathcal{H}^{\\star }=1\/100$,\n$\\xi_{k}^{\\mathrm{size}}=10\\%$, $\\xi _{k}^{\\mathrm{concentration}}=25\\%$ and\n$\\limfunc{SF}\\nolimits^{+}=0.80$. We consider four mutual funds, whose TNA are respectively equal to $\\$1$, $\\$5$, $\\$7$ and $\\$10$ bn. The redemption shock is equal to 40\\% of the total net assets.\n\\end{example}\n\nResults are given in Table \\ref{tab:hqla5} with respect to the horizon time\n$\\tau_h$ and the fund size. We consider two concentration indices:\n$\\mathcal{H} = 0.01$ and $\\mathcal{H} = 0.04$. We notice the impact of the\nfund size on the RCR. For instance, when $\\tau_h$ is set to $10$ days\nand the concentration index is equal to $1\\%$, $\\func{RCR}$ is respectively equal to $1.08$, $0.65$, $0.43$ and $0.22$ for a fund size of $\\$1$ bn, $\\$5$ bn, $\\$7$ bn, and $\\$10$ bn. Therefore, the RCR is above one only when\nthe fund size is $\\$1$ bn. If we increase the concentration\nindex, the RCR can be below one even if the fund size is small.\nFor instance, when $\\tau_h$ is set to $10$ days and $\\mathcal{H}$ is equal to $4\\%$, $\\func{RCR}$ is equal to $0.81$ for a fund size of $\\$1$ bn. To summarize, the redemption coverage ratio is an increasing function of the time to liquidation $\\tau_h$, but a decreasing function of the concentration index $\\mathcal{H}$ and the fund size $\\limfunc{TNA}$.\n\n\\begin{table}[tbph]\n\\centering\n\\caption{Computation of the RCR in the HQLA approach}\n\\label{tab:hqla5}\n\\begin{tabular}{ccccccccc}\n\\hline\n$\\tau_h$ & \\multicolumn{4}{c}{$\\mathcal{H} = 0.01$} & \\multicolumn{4}{c}{$\\mathcal{H} = 0.04$} \\\\\n & $\\$1$ bn & $\\$5$ bn & $\\$7$ bn & $\\$10$ bn & $\\$1$ bn & $\\$5$ bn & $\\$7$ bn & $\\$10$ bn \\\\ \\hline\n${\\hspace{5pt}}1$ & $0.12$ & $0.07$ & $0.05$ & $0.02$ & $0.09$ & $0.04$ & $0.02$ & $0.02$ \\\\\n${\\hspace{5pt}}5$ & $0.56$ & $0.34$ & $0.23$ & $0.11$ & $0.42$ & $0.20$ & $0.11$ & $0.11$ \\\\\n $10$ & $1.08$ & $0.65$ & $0.43$ & $0.22$ & $0.81$ & $0.38$ & $0.22$ & $0.22$ \\\\\n $20$ & $2.01$ & $1.20$ & $0.80$ & $0.40$ & $1.50$ & $0.70$ & $0.40$ & $0.40$ \\\\\n $60$ & $1.64$ & $0.99$ & $0.66$ & $0.33$ & $1.23$ & $0.58$ & $0.33$ & $0.33$ \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\subsection{Redemption liquidation policy}\n\nThe previous analysis demonstrates that the redemption coverage ratio is highly dependent on the redemption portfolio $q=\\left( q_{1},\\ldots ,q_{n}\\right) $. Generally, the redemption shock is expressed as a percentage. $\\ensuremath{\\boldsymbol{\\mathpzc{R}}}$ represents the proportion of the fund size that can be redeemed. 
Then, we can convert the redemption shock into a nominal value by using the identity formula:\n\\begin{equation}\n\\ensuremath{\\mathbb{R}}=\\ensuremath{\\boldsymbol{\\mathpzc{R}}}\\cdot \\func{TNA}\n\\end{equation}%\nFor instance, if the redemption rate $\\ensuremath{\\boldsymbol{\\mathpzc{R}}}$ is set to $10\\%$ and\nthe fund size $\\func{TNA}$ is equal to $\\$1$ bn, the redemption shock\n$\\ensuremath{\\mathbb{R}}$ is $\\$100$ mn. However, the computation of $\\func{RCR}$\nrequires defining the liquidation policy or the portfolio $q$. Two main\napproaches are generally considered: the pro-rata liquidation and the\nwaterfall liquidation. The first one ensures that the asset structure of the\nfund is the same before and after the liquidation. The second one minimizes\nthe time to liquidation. In practice, fund managers can mix the two schemes.\nIn this case, it is important to define the objective function in order to\nunderstand the trade-off between portfolio distortion and liquidation time.\n\n\\subsubsection{The standard approaches}\n\n\\paragraph{Vertical slicing}\n\nThe pro-rata liquidation uses the proportional rule, implying that each\nasset is liquidated such that the structure of the asset portfolio is the\nsame before and after the liquidation. This rule is also called the vertical\nslicing approach. From a mathematical point of view, we have:\n\\begin{equation}\nq=\\ensuremath{\\boldsymbol{\\mathpzc{R}}}\\cdot \\omega\n\\end{equation}%\nwhere $\\omega $ is the fund's asset portfolio (before the liquidation).\nIn practice, $q_{i}$ is not necessarily an integer and must be rounded\\footnote{This is why the vertical slicing approach is also called the near proportional rule.}. For instance, if $\\omega =\\left( 1000,514,17\\right) $ and $\\ensuremath{\\boldsymbol{\\mathpzc{R}}}=10\\%$, we obtain $q=\\left( 100,51.4,1.7\\right) $. Since we cannot sell a fraction of an asset, we can choose $q=\\left(100,51,2\\right) $.\\smallskip\n\nWe recall that the tracking error due to the liquidation is equal to:\n\\begin{eqnarray}\n\\sigma \\left( \\omega \\mid q\\right) &=&\\sqrt{\\left( w\\left( \\omega -q\\right)\n-w\\left( \\omega \\right) \\right) ^{\\top }\\Sigma \\left( w\\left( \\omega\n-q\\right) -w\\left( \\omega \\right) \\right) } \\notag \\\\\n&=&\\sqrt{\\Delta w\\left( \\omega \\mid q\\right) ^{\\top }\\Sigma\\, \\Delta w\\left( \\omega\n\\mid q\\right) }\n\\end{eqnarray}%\nwhere $\\Sigma $ is the covariance matrix of asset returns, $w\\left( \\omega\n\\right) $ is the weight vector of portfolio $\\omega $ (before liquidation)\nand $w\\left( \\omega -q\\right) $ is the weight vector of portfolio $\\omega - q$ (after liquidation). The proportional rule ensures that the asset\ncomposition does not change because of the redemption. Since the weights are\nthe same, i.e. $\\Delta w\\left( \\omega \\mid q\\right) =\\mathbf{0}_{n}$, the\ntracking error is equal to zero:\n\\begin{equation}\n\\sigma \\left( \\omega \\mid q\\right) =0\n\\end{equation}%\nThis property is important because there is no portfolio distortion with the\npro-rata liquidation rule.\\smallskip\n\nWe have seen that the redemption coverage ratio is highly dependent on the time to liquidation $\\tau _{h}$. 
In \\citet[Section 3.2.2, page 18]{Roncalli-lst2}, we have defined the liquidation time as the inverse function of the liquidation ratio:%\n\\begin{equation}\n\\mathcal{LT}\\left( q,p\\right) =\\mathcal{LR}^{-1}\\left( q;p\\right) =\\inf\n\\left\\{ h:\\mathcal{LR}\\left( q;h\\right) \\geq p\\right\\}\n\\end{equation}%\nWe now define the liquidity time (or time to liquidity) as follows:\n\\begin{equation}\n\\limfunc{TTL}\\left( p\\right) =\\limfunc{RCR}\\nolimits^{-1}\\left( p\\right) =\\inf \\left\\{ h:%\n\\limfunc{RCR}\\left( h\\right) \\geq p\\right\\}\n\\end{equation}%\nIt measures the required number of days to have a redemption coverage ratio\nlarger than $p$. As we have seen that $\\limfunc{RCR}\\left( h\\right) $ and\n$\\limfunc{LS}\\left( h\\right) $ are related to $\\mathcal{LR}\\left( q;h\\right)$\nand $\\mathcal{LS}\\left( q;h\\right) $, $\\limfunc{TTL}\\left( p\\right) $ is also\nrelated to $\\mathcal{LT}\\left( q,p\\right) $. In the case where the\nredemption portfolio satisfies $\\ensuremath{\\mathbb{R}}=\\mathbb{V}\\left( q\\right) $,\nwe verify that $\\limfunc{TTL}\\left( p\\right) =\\mathcal{LT}\\left( q,p\\right) $\nbecause we have $\\limfunc{RCR}\\left( h\\right) =\\mathcal{LR}\\left( q;h\\right)$\nand $\\limfunc{LS}\\left( h\\right) =\\mathcal{LS}\\left( q;h\\right) $. In the\ngeneral case, we have:\n\\begin{eqnarray}\n\\limfunc{TTL}\\left( p\\right) &=&\\inf \\left\\{ h:\\frac{\\mathbb{V}\\left( q\\right)\n}{\\ensuremath{\\mathbb{R}}}\\cdot \\mathcal{LR}\\left( q;h\\right) \\geq p\\right\\}\n\\notag \\\\\n&=&\\left\\{\n\\begin{array}{ll}\n\\mathcal{LT}\\left( q,\\dfrac{\\ensuremath{\\mathbb{R}}}{\\mathbb{V}\\left( q\\right) }\n\\cdot p\\right) & \\text{if }p\\leq \\dfrac{\\mathbb{V}\\left( q\\right) }{\\ensuremath{\\mathbb{R}}} \\\\\n+\\infty & \\text{otherwise}\n\\end{array}%\n\\right.\n\\end{eqnarray}\n\\smallskip\n\nWhile vertical slicing is optimal to minimize the tracking risk, the\nliquidation of the redemption portfolio can however take a lot of time.\nIndeed, the maximum we can liquidate each day is bounded by the liquidation\npolicy limit $q_{i}^{+}$. We have:\n\\begin{equation}\n\\sum_{h=1}^{\\tau _{h}}q_{i}\\left( h\\right) \\leq \\tau _{h}\\cdot q_{i}^{+}\n\\end{equation}%\nIn the case of the pro-rata liquidation rule, we have $q_{i}=\\ensuremath{\\boldsymbol{\\mathpzc{R}}}\n\\cdot \\omega _{i}$. We deduce that the redemption portfolio can be fully liquidated after $\\func{TTL}\\left( 1\\right) =\\left\\lfloor \\tau _{h}^{+}\\right\\rfloor $ days where:\n\\begin{equation}\n\\tau _{h}^{+}=\\ensuremath{\\boldsymbol{\\mathpzc{R}}}\\cdot \\sup_{i=1,\\ldots ,n}\\frac{\\omega _{i}}{%\nq_{i}^{+}}\n\\end{equation}%\nIt may be difficult to sell some assets, because the value of $q_{i}^{+}$ is\nlow. Nevertheless, the remaining redemption value may be very small. This is\nwhy fund managers generally consider in practice that the portfolio is liquidated when the proportion $p$ is set to $99\\%$.\n\n\\paragraph{Horizontal slicing}\n\nHorizontal slicing is the technical term to define waterfall liquidation. In\nthis approach, the portfolio is liquidated by selling the most liquid\nassets first. 
Contrary to vertical slicing, the fund manager accepts
that the portfolio composition will be disturbed and his investment
strategy has to be modified, implying a tracking error risk:
\begin{equation}
\sigma \left( \omega \mid q\right) >0
\end{equation}

It is obvious that the waterfall approach minimizes the liquidity risk when
it is measured by the liquidity shortfall. Let us illustrate this property
with the example described in Table \ref{ex:rcr0} on page \pageref{ex:rcr0}.
If we consider the naive pro-rata liquidation rule, we obtain the liquidity
times given in Figure \ref{fig:ttl1b} on page \pageref{fig:ttl1b}. We notice
that they are very similar for $p=95\%$, $99\%$ and $100\%$. We now assume
that $q_{7}^{+}=20$, meaning that the seventh asset is not very liquid.
Therefore, we have a huge position on this asset ($\omega _{7}=1\,800$)
compared to the daily liquidation limit. If we wanted to liquidate the
full exposure on this asset, it would take $90$ trading days versus $2$
trading days previously. The consequence of this illiquid exposure is that
the liquidity times are very different for $p=95\%$, $99\%$ and $100\%$ (see
Figure \ref{fig:ttl2b} on page \pageref{fig:ttl2b}). For instance, the
maximum liquidity time\footnote{It is obtained by considering the case
$\ensuremath{\boldsymbol{\mathpzc{R}}}=100\%$.} is respectively equal to $20$, $46$ and $90$
trading days for $p=95\%$, $99\%$ and $100\%$. Previously, the maximum
liquidity time was equal to $18$, $21$ and $22$ trading days when $q_{7}^{+}$
was equal to $1\,000$. Having some illiquid assets in the portfolio may then
dramatically increase the liquidity time when we choose the pro-rata
liquidation rule. We have also computed the liquidity time when we consider
the waterfall liquidation rule. Results are reported in Figures
\ref{fig:ttl3a} and \ref{fig:ttl3b} on page \pageref{fig:ttl3a}. We observe
two phenomena. First, if we compare Figures \ref{fig:ttl1b} and
\ref{fig:ttl3a}, we notice the higher convexity of the waterfall approach
when we increase the redemption shock. Second, we retrieve the similarity
pattern for $p=95\%$, $99\%$ and $100\%$ except for very large redemption
shocks when we have illiquid assets. The reason is that the share of illiquid
assets is much lower than the remaining value of the portfolio. Figure
\ref{fig:ttl3c} summarizes the two phenomena by comparing the pro-rata and
waterfall approaches when $q_{7}^{+}=20$.\smallskip

\begin{figure}[tbph]
\centering
\caption{Liquidity time in days (pro-rata versus waterfall liquidation, illiquid exposure, $p = 99\%$)}
\label{fig:ttl3c}
\vskip 10pt plus 2pt minus 2pt\relax
\includegraphics[width = \figurewidth, height = \figureheight]{ttl3c}
\end{figure}

\begin{figure}[tbph]
\centering
\caption{Daily liquidation}
\label{fig:ttl4}
\vskip 10pt plus 2pt minus 2pt\relax
\includegraphics[width = \figurewidth, height = \figureheight]{ttl4}
\end{figure}

In order to determine the proportion of non-liquidated assets in the case of
the waterfall approach, we consider an analysis in terms of weights.
We recall that the portfolio weight of Asset $i$ is given by:
\begin{equation}
w_{i}\left( \omega \right) =\frac{\omega _{i}\cdot P_{i}}{\limfunc{TNA}}
\end{equation}%
Since the number of required trading days to liquidate the exposure to Asset
$i$ is equal to:
\begin{equation}
\tau _{i}\left( \omega \right) =\frac{\omega _{i}}{q_{i}^{+}}
\end{equation}%
the portfolio weight of Asset $i$ that can be liquidated within one trading day
is given by the following formula:
\begin{equation}
\psi _{i}\left( \omega \right) =\frac{w_{i}\left( \omega \right) }{\tau
_{i}\left( \omega \right) }=\frac{q_{i}^{+}\cdot P_{i}}{\limfunc{TNA}}
\end{equation}%
Using Equation (\ref{eq:LS3}) on page \pageref{eq:LS3}, we deduce that
the liquidity shortfall of a full redemption scenario under the waterfall
approach is equal to\footnote{We have $\func{LS}\left( 0\right) =100\%$.}:
\begin{equation}
\func{LS}\left( h\right) =1-\sum_{i=1}^{n}\min \left( h\cdot \psi _{i}\left(
\omega \right) ,w_{i}\left( \omega \right) \right)
\end{equation}%
The relative weight of the portfolio that can be liquidated at time $h$ is
then equal to $W\left( h\right) =\func{LS}\left( h-1\right) -\func{LS}\left(
h\right) $. $W\left( h\right)$ is the daily liquidation expressed
in \%. In Figure \ref{fig:ttl4}, we have reported the values taken by $W\left( h\right) $ for the previous example. We notice that significant liquidation occurs over the first $22$ days. After this period, the amount liquidated decreases substantially because it concerns illiquid assets.

\begin{remark}
We can use the previous analysis to determine the amount of \textquotedblleft
\textsl{illiquid assets}\textquotedblright\ in the portfolio. For that, we
choose a threshold $w^{\star }$ below which the amount liquidated is too
small\footnote{$w^{\star }$ is generally set to $0.5\%$.}:
\begin{equation}
h^{\star }=\inf \left\{ h:W\left( h\right) \leq w^{\star }\right\}
\end{equation}%
Alternatively, we can directly set the value of $h^{\star }$ above which the
liquidation is assumed to concern illiquid assets. The amount of illiquid assets is
then equal to $\sum_{h\geq h^{\star }}W\left( h\right) $ or equivalently
$\func{LS}\left( h^{\star }-1\right) =1-\sum_{i=1}^{n}\min \left( \left(
h^{\star }-1\right) \cdot \psi _{i}\left( \omega \right) ,w_{i}\left( \omega
\right) \right) $. In the previous example, it is equal to $2.50\%$ if
$w^{\star }=1\%$ and $1.52\%$ if $w^{\star }=0.5\%$.
\end{remark}

\subsubsection{The mixing approach}

So far, the analysis of the redemption coverage ratio and the redemption
liquidation policy has been focused on the trading limits and the daily
amounts that can be liquidated. This volume-based approach is not enough and
may lead to misleading conclusions. Indeed, the previous analysis
completely omits the transaction costs. This is obviously the case of the
vertical slicing approach, where the fund manager is forced to sell exposures
that are not liquid. Therefore, no cost analysis is done in the pro-rata
liquidation rule. This is also the case in the above presentation of the
horizontal slicing approach, since the liquidation policy only considers the
daily trading limits through the variable $q^{+}$. Nevertheless, the practice
of the waterfall approach is a little bit different, because it is not
limited to the liquidity depth.
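Before turning to this cost dimension, the following sketch illustrates the waterfall liquidation profile through the functions $\func{LS}\left( h\right) $ and $W\left( h\right) $ defined above; the weights and liquidation horizons are hypothetical and do not correspond to the previous example.
\begin{verbatim}
import numpy as np

# Hypothetical portfolio weights and liquidation horizons tau_i = omega_i / q_i^+
w   = np.array([0.30, 0.25, 0.20, 0.15, 0.10])   # w_i(omega), sums to one
tau = np.array([1.0, 2.0, 5.0, 10.0, 90.0])      # number of days to sell each position
psi = w / tau                                    # weight liquidated per trading day

def LS(h):
    # liquidity shortfall of a full redemption after h trading days
    return 1.0 - np.minimum(h * psi, w).sum()

W = [LS(h - 1) - LS(h) for h in range(1, 16)]    # daily liquidation W(h)
print(round(LS(10), 4), [round(x, 4) for x in W[:5]])
\end{verbatim}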
Indeed, the ultimate goal of this approach is\nto liquidate the exposures at the lowest cost. Therefore, it includes a cost\nanalysis. However, as seen previously, the waterfall approach implies a\ntracking risk that is not controlled. This is not acceptable in\npractice.\\smallskip\n\nThe optimal liquidation approach consists in defining a maximum acceptable\nlevel $\\mathcal{TR}^{+}$ of tracking risk and to minimize the transaction\ncost $\\mathcal{TC}\\left( q\\right) $ of the liquidation portfolio:\n\\begin{eqnarray}\nq^{\\star } &=&\\underset{q}{\\arg \\min }\\mathcal{TC}\\,\\left( q\\right)\n\\label{eq:liquidation1} \\\\\n&\\text{s.t.}&\\left\\{\n\\begin{array}{l}\n\\mathcal{TR}\\left( \\omega \\mid q\\right) \\leq \\mathcal{TR}^{+} \\\\\n\\mathcal{LS}\\left( q;h\\right) \\leq \\mathcal{LS}^{+} \\\\\n\\mathbf{1}_{n}^{\\top }w\\left( \\omega -q\\right) =0 \\\\\nw\\left( \\omega -q\\right) \\geq \\mathbf{0}_{n}%\n\\end{array}%\n\\right. \\notag\n\\end{eqnarray}%\nIn the case of an equity portfolio, the tracking risk is equal to the\ntracking error volatility:\n\\begin{equation}\n\\mathcal{TR}\\left( \\omega \\mid q\\right) =\\sigma \\left( \\omega \\mid q\\right) =\n\\sqrt{\\Delta w\\left( \\omega \\mid q\\right) ^{\\top }\\Sigma \\Delta w\\left(\n\\omega \\mid q\\right) }\n\\end{equation}%\nIn the case of a bond portfolio, it is more difficult to define the tracking\nrisk because the volatility is not the right approach to measure the risk of\nfixed-income instruments \\citep{Roncalli-2020}. Moreover, there are several\nrisk dimensions to take into account. For instance, \\citet{BenSlimane-2021}\nconsiders three dimensions\\footnote{In fact, \\citet{BenSlimane-2021} adds two\nliquidity components: the first one concerns the liquidity costs whereas the\nsecond one concerns the liquidity depth (or the axis component of market\nmakers).}: sectorial risk, duration risk and credit risk. Following\n\\citet{BenSlimane-2021}, we can define the tracking risk as the sum of three\nrisk measures:\n\\begin{equation}\n\\mathcal{TR}\\left( \\omega \\mid q\\right) =\\mathcal{R}_{w}\\left( \\omega \\mid\nq\\right) +\\mathcal{R}_{\\mathrm{MD}}\\left( \\omega \\mid q\\right) +\n\\mathcal{R}_{\\mathrm{DTS}}\\left( \\omega \\mid q\\right)\n\\end{equation}%\nThe weight risk measure $\\mathcal{R}_{w}\\left( \\omega \\mid q\\right) $ is the\nweight difference between Portfolio $\\omega -q$ and Portfolio $\\omega $\nwithin the sector $s$:\n\\begin{equation}\n\\mathcal{R}_{w}\\left( \\omega \\mid q\\right) =\\sum_{s=1}^{n_{\\mathcal{S}ector}}\n\\left\\vert \\sum_{i\\in \\mathcal{S}ector\\left( s\\right) }\\Delta\nw_{i}\\left( \\omega \\mid q\\right) \\right\\vert\n\\end{equation}%\nwhere $n_{\\mathcal{S}ector}$ is the number of sectors and $\\Delta w_{i}\\left(\n\\omega \\mid q\\right) =w_{i}\\left( \\omega -q\\right) -w_{i}\\left( \\omega\n\\right) $ is the weight distortion of Bond $i$ because of the liquidation. 
We\ndefine $\\mathcal{R}_{\\mathrm{MD}}\\left( \\omega \\mid q\\right) $ as the\nmodified duration risk of $\\omega -q$ with respect to $\\omega $ within the\nsector $s$:\n\\begin{equation}\n\\mathcal{R}_{\\mathrm{MD}}\\left( \\omega \\mid q\\right) =\\sum_{s=1}^{n_{%\n\\mathcal{S}ector}}\\sum_{j=1}^{n_{\\mathcal{B}ucket}}\\left\\vert \\sum_{i\\in\n\\mathcal{S}ector\\left( s\\right) }\\Delta w_{i}\\left( \\omega \\mid q\\right)\n\\cdot \\limfunc{MD}\\nolimits_{i}\\left( \\mathcal{B}ucket_{j}\\right)\n\\right\\vert\n\\end{equation}%\nwhere $n_{\\mathcal{B}ucket}$ is the number of maturity buckets and\n$\\func{MD}_{i}\\left( \\mathcal{B}ucket_{j}\\right) $ is the modified duration\ncontribution of Bond $i$ to the maturity bucket $j$. The rationale of this\ndefinition is to track the difference in modified duration per bucket.\nFinally, we define the DTS risk measure $\\mathcal{R}_{\\mathrm{DTS}}\\left(\n\\omega \\mid q\\right) $ as the weighted DTS difference between $\\omega -q$ and\n$\\omega $:\n\\begin{equation}\n\\mathcal{R}_{\\mathrm{DTS}}\\left( \\omega \\mid q\\right) =\\sum_{s=1}^{n_{%\n\\mathcal{S}ector}}\\left\\vert \\sum_{i\\in \\mathcal{S}ector\\left( s\\right)\n}\\Delta w_{i}\\left( \\omega \\mid q\\right) \\cdot \\limfunc{DTS}%\n\\nolimits_{i}\\right\\vert\n\\end{equation}%\nwhere $\\limfunc{DTS}_{i}$ is the duration-times-spread of Bond $i$. Regarding\nthe transaction cost function, we recall that it is defined as follows\n\\citep[Equation (26), page 25]{Roncalli-lst2}:\n\\begin{equation}\n\\mathcal{TC}\\left( q\\right) =\\sum_{i=1}^{n}\\sum_{h=1}^{h^{+}}\\mathds{1}\\left\\{\nq_{i}\\left( h\\right) >0\\right\\} \\cdot q_{i}\\left( h\\right) \\cdot P_{i}\\cdot %\n\\pmb{c}_{i}\\left( \\frac{q_{i}\\left( h\\right) }{v_{i}}\\right)\n\\label{eq:liq-tc2}\n\\end{equation}%\nwhere $\\pmb{c}_{i}\\left( x\\right) $ is the unit transaction cost function\nassociated with Asset $i$. In \\citet{Roncalli-lst2}, $\\pmb{c}_{i}\\left( x\\right)$ follows a two-regime power-law model. We also notice that the optimization problem (\\ref{eq:liquidation1}) includes a constraint related to the liquidation shortfall. Without this constraint, the solution consists in\nliquidating each day an amount $q_{i}\\left( h\\right) $ much smaller than the\ntrading limit $q_{i}^{+}$ in order to minimize the transaction costs due to\nthe market impact. Of course, the idea is not to indefinitely delay the\nliquidation. Therefore, this constraint is very important to ensure that a\nsignificant portion of the redemption portfolio has been sold before $h$. It\nfollows that the optimization problem (\\ref{eq:liquidation1}) can be tricky\nto solve from a numerical point of view, in particular for bond funds.\nNevertheless, it perfectly illustrates the trade-off between the three risk\ndimensions: the transaction cost risk $\\mathcal{TC}\\left( q\\right) $, the\ntracking risk $\\mathcal{TR}\\left( \\omega \\mid q\\right) $ and the liquidation\nshortfall risk $\\mathcal{LS}\\left( q;h\\right) $.\\smallskip\n\nOnce again, we consider the example described in Table \\ref{ex:rcr0} on page \\pageref{ex:rcr0}. 
We assume that the volatility of the assets is respectively equal to\footnote{The correlation matrix of asset returns is given by:
\begin{equation*}
\rho =\left(
\begin{array}{rrrrrrr}
100\% & & & & & & \\
10\% & 100\% & & & & & \\
40\% & 70\% & 100\% & & & & \\
50\% & 40\% & 80\% & 100\% & & & \\
30\% & 30\% & 50\% & 50\% & 100\% & & \\
30\% & 30\% & 50\% & 50\% & 70\% & 100\% & \\
30\% & 30\% & 50\% & 50\% & 70\% & 70\% & 100\% \\
\end{array}%
\right)
\end{equation*}}
$20\%$, $18\%$, $15\%$, $15\%$, $22\%$, $30\%$ and $35\%$
whereas the bid-ask spread is equal to $5$, $3$, $5$, $8$, $12$, $15$ and $15$ bps. The transaction cost function corresponds to the SQRL model defined by \citet{Roncalli-lst2} with $\varphi_1 = 0.4$, $\tilde{x} = 5\%$ and $x^{+} = 10\%$. We deduce that the daily volume $v_i$ of each asset is equal to $10 \times q_i^{+}$. In Table \ref{tab:mixing1d}, we define five liquidation portfolios where the redemption rate $\ensuremath{\boldsymbol{\mathpzc{R}}} $ is set to $10\%$. Portfolio \#1 satisfies the pro-rata liquidation rule. We verify that the tracking risk (measured by the tracking error volatility) is equal to zero. The total transaction cost is equal to $22.4$ bps with the following break-down: $6.1$ bps for the bid-ask spread component and $16.2$ bps for the market impact component. The pro-rata rule is therefore attractive in terms of tracking error. However, if the fund manager's objective is to liquidate the redemption in one trading day, we notice that the liquidation shortfall is equal to $23.5\%$. In Portfolio \#2, the liquidation is concentrated in the second and third assets. Because these assets are more liquid than the others, the transaction cost is lower and equal to $20.4$ bps. Nevertheless, this portfolio leads to a high tracking error risk of $79.6$ bps. Portfolio \#3 is made up of the least liquid assets.
Therefore, it is normal to obtain a high transaction cost of $42.5$ bps.
Again, this portfolio presents a high tracking risk since we have
$\mathcal{TR}\left( \omega \mid q\right) \approx 2\%$! If the objective function is to fulfill the redemption in one day, Portfolio \#4 is a good candidate since we have $\mathcal{LS}\left( q;1\right) =0$ and the transaction cost is moderate\footnote{It is a little bit higher than the transaction cost of the vertical slicing approach.} ($\mathcal{TC}\left( q\right) =25.6$ bps). However, the tracking risk is high and is equal to $35.4$ bps. Portfolio \#5 is a compromise between tracking risk and liquidity shortfall\footnote{Portfolio \#5 is equal to $40\%$ of Portfolio \#1 and $60\%$ of Portfolio \#4.}, because we have $\mathcal{TR}\left( \omega \mid q\right) =21.2$ bps, $\mathcal{TC}\left( q\right) =22.6$ bps but $\mathcal{LS}\left( q;1\right) =9.4\%$.
If the objective is to find the optimal liquidation policy with the constraints $\mathcal{LS}\left( q;1\right) \leq 10\%$ and $\mathcal{TR}\left( \omega \mid q\right) \leq 20$ bps, Portfolio \#5 is a good starting point.

\begin{table}[tbph]
\centering
\caption{Comparison of five redemption portfolios}
\label{tab:mixing1d}
\begin{tabular}{ccrrrrr}
\hline
\multicolumn{2}{c}{Liquidation portfolio} & \#1 & \#2 & \#3 & \#4 & \#5 \\
\hline
$q_{1}$ & & $43\,510$ & ${\hspace{20pt}}\,0$ & ${\hspace{20pt}}\,0$ & $20\,000$ & $29\,404$ \\
$q_{2}$ & & $30\,010$ & $27\,000$ & ${\hspace{20pt}}\,0$ & $20\,000$ & $24\,004$ \\
$q_{3}$ & & ${\hspace{5pt}}5\,040$ & $22\,238$ & ${\hspace{20pt}}\,0$ & $10\,000$ & ${\hspace{5pt}}8\,016$ \\
$q_{4}$ & & $20\,050$ & ${\hspace{20pt}}\,0$ & ${\hspace{20pt}}\,0$ & $20\,000$ & $20\,020$ \\
$q_{5}$ & & ${\hspace{5pt}}7\,550$ & ${\hspace{20pt}}\,0$ & $34\,315$ & $18\,044$ & $13\,846$ \\
$q_{6}$ & & ${\hspace{5pt}}1\,750$ & ${\hspace{20pt}}\,0$ & $17\,500$ & ${\hspace{20pt}}\,0$ & ${\hspace{10pt}}\,700$ \\
$q_{7}$ & & ${\hspace{10pt}}\,180$ & ${\hspace{20pt}}\,0$ & ${\hspace{5pt}}1\,800$ & ${\hspace{20pt}}\,0$ & ${\hspace{15pt}}\,72$ \\ \hline
$\mathcal{TR}\left( \omega \mid q\right)$ & (in bps) & ${\hspace{5pt}}0.0$ & $79.6$ & $201.0$ & $35.4$ & $21.2$ \\
$\mathcal{TC}\left( q\right)$ & (in bps) & $22.4$ & $20.4$ & ${\hspace{5pt}}42.5$ & $25.6$ & $22.6$ \\
$\mathcal{TC}_{\ensuremath{\boldsymbol{\mathpzc{s}}}}\left( q\right)$ & (in bps) & ${\hspace{5pt}}6.1$ & ${\hspace{5pt}}4.5$ & ${\hspace{5pt}}13.8$ & ${\hspace{5pt}}6.6$ & ${\hspace{5pt}}6.4$ \\
$\mathcal{TC}_{\pmb{\pi}}\left( q\right)$ & (in bps) & $16.2$ & $15.9$ & ${\hspace{5pt}}28.7$ & $19.1$ & $16.2$ \\
$\mathcal{LS}\left( q;1\right)$ & (in \%) & $23.5$ & $48.2$ & ${\hspace{5pt}}60.7$ & ${\hspace{5pt}}0.0$ & ${\hspace{5pt}}9.4$ \\
\hline
\end{tabular}
\end{table}

\subsection{Reverse stress testing}

Reverse stress testing is a \textquotedblleft \textsl{fund-level stress test
which starts from the identification of the pre-defined outcome with regards
to fund liquidity (e.g. the point at which the fund would no longer be liquid
enough to honor requests to redeem units) and then explores scenarios and
circumstances that might cause this to occur}\textquotedblright\ \citep[page
6]{ESMA-2020a}. Following \citet{Roncalli-2020}, reverse stress testing
consists in identifying stress scenarios that could bankrupt the fund.
Therefore, reverse stress testing can be viewed as an inverse problem.
Indeed, liquidity stress testing starts with a liability liquidity scenario
and an asset liquidity scenario in order to compute the redemption coverage
ratio. The liability liquidity scenario is defined by the redemption shock
$\ensuremath{\mathbb{R}}$ (or the redemption rate $\ensuremath{\boldsymbol{\mathpzc{R}}}$), while the
asset liquidity scenario is given by the stressed trading limits $q^{+}$ or
the HQLA classification. Given a time horizon $\tau _{h}$, the outcome is
$\limfunc{RCR}\left( \tau _{h}\right) $. From a theoretical point of view,
the bankruptcy of the fund depends on whether the condition $\limfunc{RCR}\left( \tau _{h}\right) \geq 1$ is satisfied or not. The underlying idea is that the fund is not viable if $\limfunc{RCR}\left( \tau _{h}\right) <1$.
In practice, the fund can continue to exist because it can use short-term borrowing or
other liquidity management tools such as gates or side pockets\footnote{These
different tools will be explored in the next section on page
\pageref{section:lmt}.}. In fact, the fund's survival depends on many
parameters. However, we can consider that an excessively small value of
$\limfunc{RCR}\left( \tau_{h}\right) $ is critical and can produce the
collapse of the fund. Let $\limfunc{RCR}^{-}$ be the minimum acceptable level
of the redemption coverage ratio. Then, reverse stress testing consists in
finding the liability liquidity scenario and/or the asset liquidity scenario
such that $\limfunc{RCR}\left( \tau _{h}\right) =\limfunc{RCR}^{-}$.

\subsubsection{The liability RST scenario}

From a liability perspective, reverse stress testing consists in finding
the redemption shock above which the redemption coverage ratio is lower
than the minimum acceptable level:
\begin{equation}
\limfunc{RCR}\left( \tau _{h}\right) \leq \limfunc{RCR}\nolimits^{-}
\Longrightarrow \left\{
\begin{array}{c}
\ensuremath{\mathbb{R}}\geq \ensuremath{\mathbb{R}^{\mathrm{RST}}}\left(\tau_h\right) = \dfrac{\ensuremath{\mathbb{A}}\left( \tau _{h}\right) }{\limfunc{RCR}\nolimits^{-}} \\
\text{or} \\
\ensuremath{\boldsymbol{\mathpzc{R}}}\geq \ensuremath{\boldsymbol{\mathpzc{R}}^{\mathrm{RST}}}\left(\tau_h\right) = \dfrac{\ensuremath{\boldsymbol{\mathpzc{A}}}\left( \tau _{h}\right) }{\limfunc{RCR}\nolimits^{-}}%
\end{array}%
\right.
\label{eq:rst1}
\end{equation}%
$\ensuremath{\mathbb{R}^{\mathrm{RST}}}\left(\tau_h\right)$ (or $\ensuremath{\boldsymbol{\mathpzc{R}}^{\mathrm{RST}}}\left(\tau_h\right)$) is called the
liability reverse stress testing scenario. At first sight, computing the
liability RST scenario seems to be easy since the calculation of
$\ensuremath{\mathbb{A}}\left( \tau _{h}\right) $ is straightforward. However, it is a
little bit more complicated since $\ensuremath{\mathbb{A}}\left( \tau _{h}\right) $
depends on the liquidation portfolio $q$. Therefore, we have to define $q$.
This is the hard task of reverse stress testing. Indeed, the underlying idea
is to analyze each asset exposure individually and decide the quantity of
each asset that can be sold in the market during a stress period.\smallskip

The simplest way to define $q$ is to use the multiplicative approach with
respect to the portfolio $\omega $:
\begin{equation}
q_{i}^{\mathrm{RST}}=\alpha _{i}\cdot \omega _{i}
\end{equation}%
where $\alpha _{i}$ represents the proportion of Asset $i$ that can be
sold during a liquidity stress event. In particular, $\alpha _{i}=0$
indicates that the asset is illiquid during this period. $\alpha _{i}$ also
depends on the size $\omega _{i}$.
For instance, a large exposure on an asset
can lead to a small value of $\alpha _{i}$ because it can be difficult to
liquidate such an exposure.\smallskip

\begin{table}[tbph]
\centering
\caption{Computation of the liability RST scenario}
\label{tab:rst1}
\begin{tabular}{c|cccc|cccc}
\hline
& & & & & & & & \\[-1em]
& \multicolumn{4}{c|}{$\ensuremath{\mathbb{R}^{\mathrm{RST}}}\left(\tau_h\right)$ (in \$ mn)} &
\multicolumn{4}{c}{$\ensuremath{\boldsymbol{\mathpzc{R}}^{\mathrm{RST}}}\left(\tau_h\right)$ (in \%)} \\
$\limfunc{RCR}\nolimits^{-}$ & $25\%$ & $50\%$ & $75\%$ & $100\%$
& $25\%$ & $50\%$ & $75\%$ & $100\%$ \\ \hline
 $\tau_h =1$ & $25.1$ & $12.6$ & ${\hspace{5pt}}8.4$ & ${\hspace{5pt}}6.3$ & $17.7$ & ${\hspace{5pt}}8.9$ & ${\hspace{5pt}}5.9$ & ${\hspace{5pt}}4.4$ \\
 $\tau_h =2$ & $46.2$ & $23.1$ & $15.4$ & $11.5$ & $32.6$ & $16.3$ & $10.9$ & ${\hspace{5pt}}8.1$ \\
 $\tau_h =3$ & $63.2$ & $31.6$ & $21.1$ & $15.8$ & $44.6$ & $22.3$ & $14.9$ & $11.1$ \\
 $\tau_h =4$ & $80.1$ & $40.1$ & $26.7$ & $20.0$ & $56.5$ & $28.3$ & $18.8$ & $14.4$ \\
$\tau_h \geq 5$ & $87.5$ & $43.8$ & $29.2$ & $21.9$ & $61.8$ & $30.9$ & $20.6$ & $15.4$ \\
\hline
\end{tabular}
\end{table}

Let us consider again the example described in Table \ref{ex:rcr0} on page
\pageref{ex:rcr0}. We assume that the third, fifth, sixth and seventh assets
are illiquid in a stress period. For the other assets, we set $\alpha_1 =
20\%$, $\alpha_2 = 30\%$ and $\alpha_4 = 15\%$. Results are given in Table
\ref{tab:rst1}. For instance, if the minimum acceptable level of the
redemption coverage ratio is equal to $25\%$, we obtain $\ensuremath{\boldsymbol{\mathpzc{R}}^{\mathrm{RST}}}\left(1\right)
= 17.7\%$. This means that the fund may support a redemption shock below
$17.7\%$, whereas the RCR limit of $25\%$ is broken if the fund experiences a
redemption shock above $17.7\%$. If the minimum acceptable level is set to
$100\%$, which is the regulatory requirement, the liability RST scenario
corresponds to $\ensuremath{\boldsymbol{\mathpzc{R}}^{\mathrm{RST}}}\left(1\right) = 4.4\%$.

\begin{remark}
We don't always have a solution to Problem (\ref{eq:rst1}). Nevertheless, we
notice that:
\begin{equation}
\limfunc{RCR}\left( \infty \right) =\frac{\sum_{i=1}^{n}q_{i}^{\mathrm{RST}}\cdot P_{i}}{\sum_{i=1}^{n}\omega _{i}\cdot P_{i}}=\sum_{i=1}^{n}\alpha
_{i}\cdot w_{i}\left( \omega \right)
\end{equation}%
A condition to obtain a solution such that $\ensuremath{\mathbb{R}}\leq \func{TNA}$
and $\ensuremath{\boldsymbol{\mathpzc{R}}}\leq 1$ is to impose the constraint $\limfunc{RCR}\nolimits^{-}\geq \sum_{i=1}^{n}\alpha _{i}\cdot w_{i}\left( \omega \right)$.
\end{remark}

\subsubsection{The asset RST scenario}

The asset RST scenario consists in finding the asset liquidity shock above
which the redemption coverage ratio is lower than the minimum acceptable
level. Contrary to the liability RST scenario, for which the liquidity shock
is measured by the redemption rate, it is not easy to define what a liquidity
shock is when we consider the asset side. For that, we recall that the stress
testing of the assets consists in defining three multiplicative (or additive)
shocks for the bid-ask spread, the volatility and the daily volume
\citep[Section 5.4, page 51]{Roncalli-lst2}. Let $x_{i}$ be the participation
rate.
We have:
\begin{equation}
x_{i}=\frac{q_{i}}{v_{i}}
\end{equation}%
where $v_{i}$ is the daily volume. The trading limit $x_{i}^{+}$ (expressed
in participation rate) is supposed to be fixed, implying that it is the same
in normal and stress periods. However, the stress period generally faces a
reduction in the daily volume, meaning that the trading limit $q_{i}^{+}$
(expressed in number of shares) is not the same:
\begin{equation}
q_{i}^{+}=\left\{
\begin{array}{ll}
v_{i}\cdot x_{i}^{+} & \text{in a normal period} \\
m_{v}\cdot v_{i}\cdot x_{i}^{+} & \text{in a stressed period}%
\end{array}%
\right.
\end{equation}%
where $m_{v}<1$ is the multiplicative shock of the daily volume. The
underlying idea of the asset RST scenario is then to define the upper limit
$m_{v}^{\mathrm{RST}}$ below which the redemption coverage ratio is lower
than the minimum acceptable level:
\begin{equation}
\limfunc{RCR}\left( \tau _{h}\right) \leq \limfunc{RCR}\nolimits^{-}
\Longrightarrow m_{v}\leq m_{v}^{\mathrm{RST}}\left( \tau _{h}\right) <1
\end{equation}%
Nevertheless, the computation of $m_{v}^{\mathrm{RST}}\left( \tau _{h}\right)
$ requires defining a liquidation portfolio. For that, we can use the
vertical slicing approach where $q_{i}=\ensuremath{\boldsymbol{\mathpzc{R}}}^{\star }\cdot
\omega _{i}$ and $\ensuremath{\boldsymbol{\mathpzc{R}}}^{\star }$ is a standard redemption rate\footnote{A typical value of $\ensuremath{\boldsymbol{\mathpzc{R}}}^{\star }$ is $10\%$. It is important to
use a low value for $\ensuremath{\boldsymbol{\mathpzc{R}}}^{\star }$ because the asset RST scenario
measures the liquidity stress from the asset perspective, not from the
liability perspective.}. As in the case of the liability RST problem, the
solution may not exist if $\limfunc{RCR}\left( \tau _{h}\right) \leq
\limfunc{RCR}\nolimits^{-}$ when $m_{v}$ is set to one.

\begin{remark}
In the liability RST problem, a low value of $\ensuremath{\boldsymbol{\mathpzc{R}}^{\mathrm{RST}}}$ indicates that the fund
is highly vulnerable. Indeed, this means that a small redemption shock may
produce a funding liquidity stress on the investment fund. In the asset RST problem, the fund is vulnerable if the value of $m_{v}^{\mathrm{RST}}$ is high. In this case, a slight deterioration of the market depth induces a market liquidity stress on the investment fund even if it faces a small redemption. To summarize, fund managers would prefer to have high values of $\ensuremath{\boldsymbol{\mathpzc{R}}^{\mathrm{RST}}}$ and low values of $m_{v}^{\mathrm{RST}}$.
\end{remark}

The computation of $m_{v}^{\mathrm{RST}}$ for the previous example is
reported in Figure \ref{fig:rst2}. We first notice that the solution does not always exist, because we may have $\limfunc{RCR}\left( \tau _{h}\right) \leq \limfunc{RCR}\nolimits^{-}$ even when $m_{v}$ is set to one. For instance, this is the case of $\tau _{h}\leq 6$ when $\ensuremath{\boldsymbol{\mathpzc{R}}}^{\star }$ is set to $30\%$ (bottom right-hand panel). By construction, $m_{v}^{\mathrm{RST}}\left( \tau _{h}\right) $ is a decreasing function of $\tau _{h}$. Indeed, the reverse stress testing scenario is more severe for short time windows than for long time windows.
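For completeness, the following sketch shows one way to back out $m_{v}^{\mathrm{RST}}\left( \tau _{h}\right) $ numerically by bisection; the redemption portfolio, prices and trading limits are hypothetical, and the liquidatable value within $\tau _{h}$ days is simply approximated by $\sum_{i}\min \left( q_{i},\tau _{h}\cdot m_{v}\cdot q_{i}^{+}\right) \cdot P_{i}$.
\begin{verbatim}
import numpy as np

q      = np.array([10000.0, 5000.0, 2000.0])   # redemption portfolio (number of shares)
P      = np.array([100.0, 50.0, 20.0])         # asset prices
q_plus = np.array([4000.0, 1000.0, 100.0])     # daily trading limits in a normal period
R_nominal = (q * P).sum()                      # redemption shock in nominal value

def rcr(tau_h, m_v):
    # value liquidated within tau_h days when daily limits are scaled by m_v
    A = (np.minimum(q, tau_h * m_v * q_plus) * P).sum()
    return A / R_nominal

def m_v_rst(tau_h, rcr_min, tol=1e-6):
    # bisection on m_v such that RCR(tau_h; m_v) = RCR^-
    if rcr(tau_h, 1.0) <= rcr_min:
        return None                            # no solution: limit breached even without stress
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if rcr(tau_h, mid) < rcr_min else (lo, mid)
    return 0.5 * (lo + hi)

print(m_v_rst(tau_h=5, rcr_min=0.75))
\end{verbatim}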
We also verify that $m_{v}^{\\mathrm{RST}}\\left( \\tau _{h}\\right) $ is an increasing function of $\\limfunc{RCR}\\nolimits^{-}$, because the constraint is tighter.\n\n\\begin{remark}\nReverse stress testing does not reduce to the computation of $\\ensuremath{\\boldsymbol{\\mathpzc{R}}^{\\mathrm{RST}}}\\left( \\tau _{h}\\right)$ or $m_{v}^{\\mathrm{RST}}\\left( \\tau _{h}\\right) $. This step must be completed by the economic analysis to understand what market or financial scenario can imply $\\ensuremath{\\boldsymbol{\\mathpzc{R}}}\\geq \\ensuremath{\\boldsymbol{\\mathpzc{R}}^{\\mathrm{RST}}}\\left(\\tau_h\\right)$ or $m_{v}\\leq m_{v}^{\\mathrm{RST}}\\left( \\tau _{h}\\right) $.\n\\end{remark}\n\n\\begin{figure}[tbph]\n\\centering\n\\caption{Computation of the asset RST scenario}\n\\label{fig:rst2}\n\\vskip 10pt plus 2pt minus 2pt\\relax\n\\includegraphics[width = \\figurewidth, height = \\figureheight]{rst2}\n\\vspace*{-10pt}\n\\end{figure}\n\n\\section{Liquidity management tools}\n\\label{section:lmt}\n\nLiquidity management tools are measures applied by fund managers in\nexceptional circumstances to control or limit dealing in fund units\n\\citep{ESMA-2020a}. According to \\citet{Darpeix-2020}, the main LMTs are\nanti-dilution levies, gates, liquidity buffers, redemption fees,\nin-kind redemptions, redemption suspensions, short-term borrowing, side\npockets and swing pricing. They can be grouped into three categories\n(Table \\ref{tab:esma-lmt}). First, we have liquidity buffers that may or not be mandatory, and short-term borrowing. The underlying idea is to invest a portion of assets in cash and to use it in the case of a liquidity stress. As such, this category has an impact on the structure of the asset portfolio. Second, we have special arrangements that include gates, in-kind redemptions, redemption suspensions and side pockets. The objective of this second group is to limit or delay the redemptions. Finally, we have swing pricing mechanisms\\footnote{They include anti-dilution levies.}, the purpose of which is clearly to protect the remaining investors.\n\n\\begin{table}[tbph]\n\\centering\n\\caption{LMTs available to European corporate debt funds (June 2020)}\n\\label{tab:esma-lmt}\n\\scalebox{0.975}{\n\\begin{tabular}{llcc}\n\\hline\n & & AIF & UCITS \\\\ \\hline\nShort-term borrowing & & $78\\%$ & $91\\%$ \\\\ \\hdashline\n & Gates & $23\\%$ & $73\\%$ \\\\\nSpecial arrangements & Side pockets & $10\\%$ & $10\\%$ \\\\\n & In-kind redemptions & $34\\%$ & $77\\%$ \\\\ \\hdashline\nSwing pricing & & ${\\hspace{5pt}}7\\%$ & $57\\%$ \\\\\nAnti-dilution levies & & $11\\%$ & $17\\%$ \\\\ \\hline\n\\end{tabular}}\n\\begin{flushleft}\n{\\small \\textit{Source}: \\citet[page 38]{ESMA-2020b}.}\n\\end{flushleft}\n\\end{table}\n\n\\subsection{Liquidity buffer and cash holding}\n\nAs noticed by \\citet{Yan-2006}, cash is a critical component of mutual funds'\nportfolios for three reasons. First, cash is generally used to manage the\ninflows and outflows of the fund. For instance, in the case of a\nsubscription, the fund manager may decide to delay the investment in order to\nfind better investment opportunities later. In the case of a redemption, cash\ncan be used to liquidate a part of the portfolio without selling the risky\nassets. Second, cash is important for the day-to-day management of the fund\nfor paying management fees, managing collateral risk, investing in\nderivatives, etc. Third, cash is a financial instrument of market timing\n\\citep{Simutin-2010, Simutin-2014}. 
This explains that cash holding is an old\npractice of mutual funds.\\smallskip\n\nSince the 2008 Global Financial Crisis, the importance of cash management has\nincreased due to liquidity policies of asset managers, and liquidity (or\ncash) buffers have become a central concept in liquidity risk management.\nNevertheless, implementing a cash buffer has a cost in terms of expected\nreturn. Therefore, cash buffer policies are increasingly integrated into\ninvestment policies.\n\n\\subsubsection{Definition}\n\nA liquidity buffer refers to the stock of cash instruments held by the fund\nmanager in order to manage the future redemptions of investors. This suggests\nthe intentionality of the fund manager to use the buffer only for liquidity\npurposes. Because it is difficult to know whether cash is used for other\npurposes (e.g. tactical allocation, supply\/demand imbalance), the cash\nholding of the investment fund is considered as a measurement proxy of its\nliquidity buffer. \\citet{Chernenko-2016} go further and suggest that cash holding is\n\\textquotedblleft \\textsl{a good measure of a fund's liquidity transformation\nactivities}\\textquotedblright.\\smallskip\n\nSince we use a strict definition, we consider that a liquidity buffer\ncorresponds to the following instruments:\n\n\\begin{itemize}\n\\item Cash\n\n\\begin{itemize}\n\\item Cash at hand\n\n\\item Deposits\n\\end{itemize}\n\n\\item Cash equivalents\n\n\\begin{itemize}\n\\item Repurchase agreements (repo)\n\n\\item Money market funds\n\n\\item Short-term debt securities\n\\end{itemize}\n\\end{itemize}\n\n\\noindent Generally, we assume that short-term debt securities have a\nmaturity less than one year. We notice that cash and cash equivalents do not\nexactly coincide with liquid assets. Indeed, liquid assets may include stocks\nand government bonds that can be liquidated the next day. Therefore, our\ndefinition of the liquidity buffer is in fact the definition of a cash\nbuffer.\n\n\\subsubsection{Cost-benefit analysis}\n\nMaintaining a cash buffer has the advantage of reducing the cost of redemption liquidation and mitigating funding risk. However, it also induces some costs in terms of return, tracking error, beta exposure, etc. Since a cash buffer corresponds to a deleverage of the risky assets, it may breach the fiduciary duties of the fund manager. Indeed, the investors pay management and performance fees in order to be fully exposed to a given asset class. Therefore, all these dimensions make the cost-benefit analysis difficult and complex, and computing an \\textquotedblleft\n\\textit{optimal}\\textquotedblright\\ level of cash buffer is a difficult task from a professional point of view.\n\n\\paragraph{Cash buffer analytics}\nIn what follows, we define the different concepts that are necessary to\nconduct a cost-benefit analysis.\n\n\\subparagraph{Cash-to-assets ratio}\n\nWe assume that a cash buffer is implemented in the fund, and we note $w_{\\mathrm{cash}}$ as the cash-to-assets ratio:%\n\\begin{equation}\nw_{\\mathrm{cash}}=\\frac{\\mathrm{cash}}{\\func{TNA}}\n\\end{equation}%\n$w_{\\mathrm{cash}}$ indicates the proportion of cash held for liquidity\npurposes, whereas $w_{\\mathrm{asset}}=1-w_{\\mathrm{cash}}$ measures the risky\nexposure to the assets. Traditionally, the fund is fully exposed to the\nassets, meaning that $w_{\\mathrm{cash}}=0\\%$ and $w_{\\mathrm{asset}}=100\\%$.\nImplementing a cash buffer implies that $w_{\\mathrm{cash}}>0$. 
Nevertheless,\nit is difficult to give an order of magnitude in terms of policies and\npractices by asset managers. Using a sample of US funds regulated by the SEC,\n\\citet{Chernenko-2016} found that $w_{\\mathrm{cash}}$ is equal to $7.5\\%$ and\n$7.9\\%$ for equity and bond funds on average. However, the dispersion is very\nhigh because $\\sigma \\left( w_{\\mathrm{cash}}\\right) $ is approximately equal\nto $8\\%$. Moreover, this high dispersion is observed in both the cross\nsection and the time series. Using the percentile statistics, we can\nestimate that the common practice is to have a cash buffer between $0\\%$ and $15\\%$.\n\n\\subparagraph{Mean-variance analysis}\n\nIn Appendix \\ref{appendix:cash} on page \\pageref{appendix:cash},\nwe derive several statistics by comparing a fund\nthat is fully exposed to the assets and a fund that implements a cash\nbuffer. Let $R$ be the random return of this latter. We have:\n\\begin{equation}\n\\mathbb{E}\\left[ R\\right] =\\mu _{\\mathrm{asset}}-w_{\\mathrm{cash}}\\cdot\n\\left( \\mu _{\\mathrm{asset}}-\\mu _{\\mathrm{cash}}\\right)\n\\end{equation}%\nand:%\n\\begin{equation}\n\\sigma \\left( R\\right) =\\sqrt{w_{\\mathrm{cash}}^{2}\\cdot \\sigma _{\\mathrm{%\ncash}}^{2}+w_{\\mathrm{asset}}^{2}\\cdot \\sigma _{\\mathrm{asset}}^{2}+2w_{%\n\\mathrm{cash}}\\cdot w_{\\mathrm{asset}}\\cdot \\rho _{\\func{cash},\\mathrm{asset}%\n}\\cdot \\sigma _{\\mathrm{cash}}\\cdot \\sigma _{\\mathrm{asset}}}\n\\end{equation}%\nwhere $\\mu _{\\mathrm{cash}}$ and $\\mu _{\\mathrm{asset}}$ are the expected\nreturns of the cash and asset components, $\\sigma _{\\mathrm{cash}}$ and\n$\\sigma _{\\mathrm{asset}}$ are the corresponding volatilities, and\n$\\rho _{\\mathrm{cash},\\mathrm{asset}}$ is the correlation between the cash and the assets. Since the volatility of the cash buffer is considerably lower than the volatility of the assets, we deduce that:\n\\begin{equation}\n\\sigma \\left( R\\right) \\approx \\left( 1-w_{\\mathrm{cash}}\\right) \\cdot\n\\sigma _{\\mathrm{asset}}\n\\end{equation}%\nWe observe that both the expected return\\footnote{Because\nwe generally have $\\mu _{\\mathrm{asset}}>\\mu _{\\mathrm{cash}}$.} and\nthe volatility decrease with the introduction of the cash buffer. In\nconclusion, maintaining constant liquidity consists in taking less risk\nwith little impact on the Sharpe ratio of the fund. Indeed, we obtain:\n\\begin{equation*}\n\\limfunc{SR}\\left( R\\right) \\approx \\limfunc{SR}\\left( R_{\\mathrm{asset}%\n}\\right)\n\\end{equation*}%\nwhere $\\limfunc{SR}\\left( R_{\\mathrm{asset}}\\right) $ is the Sharpe ratio of\nthe assets. Therefore, the implementation of a cash buffer is equivalent to\ndeleveraging the asset portfolio. This result is confirmed by the portfolio's beta, which is lower than one:\n\\begin{equation}\n\\beta \\left( R\\mid R_{\\mathrm{asset}}\\right) \\approx 1-w_{\\mathrm{cash}}\\leq 1\n\\end{equation}\n\n\\subparagraph{Tracking error analysis}\n\nIn this analysis, we consider that the benchmark is the asset portfolio (or the index of the corresponding asset class). 
On page \pageref{appendix:cash-te}, we show that the expected excess return is equal
to:
\begin{equation}
\mathbb{E}\left[ R\mid R_{\mathrm{asset}}\right] =-w_{\mathrm{cash}}\cdot
\left( \mu _{\mathrm{asset}}-\mu _{\mathrm{cash}}\right)
\end{equation}%
whereas the tracking error volatility $\sigma \left( R\mid R_{\mathrm{asset}}\right) $ is equal to:%
\begin{equation}
\sigma \left( R\mid R_{\mathrm{asset}}\right) \approx w_{\mathrm{cash}}\cdot
\sigma _{\mathrm{asset}}
\end{equation}%
In a normal situation where $\mu _{\mathrm{asset}}>\mu _{\mathrm{cash}}$, the
expected excess return is negative whereas the tracking error volatility is
proportional to the cash-to-assets ratio. An important result is that the
information ratio is the opposite of the Sharpe ratio of the assets:
\begin{equation}
\limfunc{IR}\left( R\mid R_{\mathrm{asset}}\right) \approx -\limfunc{SR}\left( R_{\mathrm{asset}}\right)
\end{equation}%
Again, this implies that the information ratio is generally negative.

\subparagraph{Liquidation gain}

The previous analysis shows that there is a cost associated with the cash
buffer. Nevertheless, there are also some benefits. The most important is the
liquidation gain, which is related to the difference of the transaction costs
without and with the cash buffer:
\begin{equation}
\mathcal{LG}\left( w_{\mathrm{cash}}\right) =\mathcal{TC}_{\mathrm{without}}-
\mathcal{TC}_{\mathrm{with}} \label{eq:lg1}
\end{equation}%
where $\mathcal{TC}_{\mathrm{without}}$ is the transaction cost without the cash
buffer and $\mathcal{TC}_{\mathrm{with}}$ is the transaction cost with the cash
buffer. In Appendix \ref{appendix:cash-lg} on page
\pageref{appendix:cash-lg}, we show that:
\begin{eqnarray}
\mathcal{LG}\left( w_{\mathrm{cash}}\right) &=&\mathcal{TC}_{\mathrm{asset}}\left( \ensuremath{\boldsymbol{\mathpzc{R}}}\right) -\mathcal{TC}_{\mathrm{cash}}\left(
\ensuremath{\boldsymbol{\mathpzc{R}}}\right) \cdot \mathds{1}\left\{ \ensuremath{\boldsymbol{\mathpzc{R}}}<w_{\mathrm{cash}}\right\} - \notag \\
&&\left( \mathcal{TC}_{\mathrm{cash}}\left( w_{\mathrm{cash}}\right) +\mathcal{TC}_{\mathrm{asset}}\left( \ensuremath{\boldsymbol{\mathpzc{R}}}-w_{\mathrm{cash}}\right) \right) \cdot \mathds{1}\left\{ \ensuremath{\boldsymbol{\mathpzc{R}}}\geq w_{\mathrm{cash}}\right\}
\end{eqnarray}%
where $\mathcal{TC}_{\mathrm{cash}}\left( x\right) $ and $\mathcal{TC}_{\mathrm{asset}}\left( x\right) $ are the transaction costs of liquidating a proportion $x$ of the fund from the cash component and from the risky assets. In order to compute the expected liquidation gain $\mathbb{E}\left[ \mathcal{LG}\left( w_{\mathrm{cash}}\right) \right] $, we need to specify the probability distribution of the redemption rate. We consider the following example.

\begin{example}
\label{ex:cash3} We assume that the redemption rate $\ensuremath{\boldsymbol{\mathpzc{R}}}$ follows the probability distribution $\mathbf{F}\left( x\right) =x^{\eta }$ where $\eta >0$.
\end{example}

In the top left-hand panel in Figure \ref{fig:cash3a}, we have reported the
transaction cost function
$\mathcal{TC}_{\mathrm{asset}}\left(\ensuremath{\boldsymbol{\mathpzc{R}}}\right) $ for the
following parameters: a bid-ask spread $\ensuremath{\boldsymbol{\mathpzc{s}}}$ of $20$ bps, a price impact
sensitivity $\beta _{\pmb{\pi}}$ of $0.4$ and an annualized volatility of
$20\%$. We notice that the transaction cost is between $0$ and $70$ bps.
Whereas the unit transaction cost function is concave, the total transaction
cost is convex. The first derivative $\mathcal{TC}_{\mathrm{asset}}^{\prime
}\left( \ensuremath{\boldsymbol{\mathpzc{R}}}\right) $ is given in the top right-hand panel in
Figure \ref{fig:cash3a}. We verify that
$\mathcal{TC}_{\mathrm{asset}}^{\prime }\left( \ensuremath{\boldsymbol{\mathpzc{R}}}\right) >0$,
but $\mathcal{TC}_{\mathrm{asset}}^{\prime }\left( \ensuremath{\boldsymbol{\mathpzc{R}}}\right) $
is far from constant. Therefore, the approximation of
$\mathcal{TC}_{\mathrm{asset}}\left( \ensuremath{\boldsymbol{\mathpzc{R}}}-w_{\mathrm{cash}}\right)
$ by the function $\mathcal{TC}_{\mathrm{asset}}\left( \ensuremath{\boldsymbol{\mathpzc{R}}}\right)
-\mathcal{TC}_{\mathrm{asset}}\left( w_{\mathrm{cash}}\right) $ is not accurate.
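Since the total cost function is convex with $\mathcal{TC}_{\mathrm{asset}}\left( 0\right) =0$, the difference $\mathcal{TC}_{\mathrm{asset}}\left( \ensuremath{\boldsymbol{\mathpzc{R}}}\right) -\mathcal{TC}_{\mathrm{asset}}\left( w_{\mathrm{cash}}\right) $ always overstates $\mathcal{TC}_{\mathrm{asset}}\left( \ensuremath{\boldsymbol{\mathpzc{R}}}-w_{\mathrm{cash}}\right) $. The following two-line check illustrates this point; it assumes the square-root total cost $\mathcal{TC}_{\mathrm{asset}}\left( x\right) =x\left( \ensuremath{\boldsymbol{\mathpzc{s}}}+\beta _{\pmb{\pi}}\sigma \sqrt{x}\right) $ with a daily volatility of about $1.25\%$, which is a simplification of the SQRL model used in the figures.
\begin{verbatim}
# Square-root total cost TC(x) = x * (s + beta * sigma * sqrt(x))
# Hypothetical parameters: s = 20 bps, beta = 0.4, daily sigma ~ 1.25%
s, beta, sigma = 0.0020, 0.4, 0.0125

def tc(x):
    return x * (s + beta * sigma * x ** 0.5)

R, w_cash = 0.40, 0.10
print(1e4 * tc(R - w_cash))        # exact cost of liquidating R - w_cash (in bps)
print(1e4 * (tc(R) - tc(w_cash)))  # naive difference, larger because TC is convex
\end{verbatim}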
This discrepancy is illustrated in the bottom panels in Figure
\ref{fig:cash3a} when $w_{\mathrm{cash}}$ is equal to $10\%$ and $50\%$.

\begin{figure}[tbph]
\centering
\caption{Transaction cost function (\ref{eq:ex-cash3a}) in bps}
\label{fig:cash3a}
\vskip 10pt plus 2pt minus 2pt\relax
\includegraphics[width = \figurewidth, height = \figureheight]{cash3a}
\end{figure}

As such, it is not surprising that the exact formula of $\mathbb{E}\left[
\mathcal{LG}\left( w_{\mathrm{cash}}\right) \right] $ is:
\begin{eqnarray}
\mathbb{E}\left[ \mathcal{LG}\left( w_{\mathrm{cash}}\right) \right] &=&
\frac{\eta \left( \ensuremath{\boldsymbol{\mathpzc{s}}}-\ensuremath{\boldsymbol{\mathpzc{c}}}\right) }{\eta +1}\cdot w_{\mathrm{cash}}^{\eta +1}+\frac{2\eta \beta _{\pmb{\pi}}\sigma }{2\eta +3}+ \notag \\
&&\eta \ensuremath{\boldsymbol{\mathpzc{s}}}\cdot w_{\mathrm{cash}}\left( 1-w_{\mathrm{cash}}\right) -\eta
\beta _{\pmb{\pi}}\sigma \cdot I\left( w_{\mathrm{cash}};\eta \right)
\label{eq:cash3b}
\end{eqnarray}%
whereas the approximate formula is very different:
\begin{equation}
\mathbb{E}\left[ \mathcal{LG}\left( w_{\mathrm{cash}}\right) \right] \approx
\ensuremath{\boldsymbol{\mathpzc{s}}}\cdot w_{\mathrm{cash}}+\beta _{\pmb{\pi}}\sigma \cdot w_{\mathrm{cash}}^{1.5}-\frac{\ensuremath{\boldsymbol{\mathpzc{s}}}}{\eta +1}\cdot w_{\mathrm{cash}}^{\eta +1}-\frac{3\beta _{\pmb{\pi}}\sigma }{2\eta +3}\cdot w_{\mathrm{cash}}^{\eta +1.5}
\label{eq:cash3c}
\end{equation}%
We have reported these two functions in Figure \ref{fig:cash3d}. The
liquidation gains are expressed in bps. We observe some differences between
the exact formula (\ref{eq:cash3b}) and the approximate formula
(\ref{eq:cash3c}), but these differences tend to diminish when
$w_{\mathrm{cash}}$ tends to $1$. Moreover, the differences increase with
respect to the parameter $\eta $, which controls the shape of the redemption rate distribution function\footnote{On page
\pageref{fig:cash3b}, Figure \ref{fig:cash3b} shows the density and
distribution functions of the redemption rate. If $\eta =1$, we obtain the
uniform probability distribution. If $\eta \rightarrow 0$, the redemption
rate is located at $\ensuremath{\boldsymbol{\mathpzc{R}}} = 0$. If $\eta \rightarrow \infty $, the
redemption rate is located at $\ensuremath{\boldsymbol{\mathpzc{R}}} = 1$. If $\eta <1$, the
probability that the redemption rate is lower than $50\%$ is greater than
$50\%$. If $\eta >1$, the probability that the redemption rate is lower than
$50\%$ is less than $50\%$. Therefore, $\eta $ controls the location of the
redemption rate. The greater the value of $\eta $, the greater the risk of observing a large redemption rate.}. This is normal because the probability of observing a large redemption rate increases with the parameter $\eta $. In fact, the poor approximation of $\mathbb{E}\left[ \mathcal{LG}\left(
w_{\mathrm{cash}}\right) \right] $ mainly comes from the solution of
$\mathbb{E}\left[ \mathcal{LG}_{\mathrm{asset}}\left(
w_{\mathrm{cash}}\right) \right] $ and not the solution of $\mathbb{E}\left[
\mathcal{LG}_{\mathrm{cash}}\left( w_{\mathrm{cash}}\right) \right] $ as
illustrated in Figure \ref{fig:cash3c} on page
\pageref{fig:cash3c}.\smallskip

\begin{figure}[tbph]
\centering
\caption{Exact vs.
approximate solution of
$\mathbb{E}\left[ \mathcal{LG}\left( w_{\mathrm{cash}}\right) \right] $ in bps
(Example \ref{ex:cash3}, page \pageref{ex:cash3})}
\label{fig:cash3d}
\vskip 10pt plus 2pt minus 2pt\relax
\includegraphics[width = \figurewidth, height = \figureheight]{cash3d}
\end{figure}

This example allows us to verify the properties that have been demonstrated
previously. Indeed, Figure \ref{fig:cash3d} confirms that the approximate
function of $\mathbb{E}\left[ \mathcal{LG}\left( w_{\mathrm{cash}}\right)
\right] $ is increasing and reaches its maximum at $w_{\mathrm{cash}}^{\star
}=1$, whereas the exact function of $\mathbb{E}\left[ \mathcal{LG}\left(
w_{\mathrm{cash}}\right) \right] $ increases almost everywhere and only
decreases when $w_{\mathrm{cash}}$ is close to $1$. This implies that the
exact function $\mathbb{E}\left[ \mathcal{LG}\left( w_{\mathrm{cash}}\right)
\right] $ reaches its maximum at $w_{\mathrm{cash}}^{\star }<1$. In our
example, $w_{\mathrm{cash}}^{\star }$ is equal to $97.40\%$, $96.67\%$,
$93.55\%$ and $83.37\%$ when $\eta $ is respectively equal to $0.5$, $1$, $2$
and $3$.

\begin{example}
\label{ex:cash4} We consider Example \ref{ex:cash3} on page
\pageref{ex:cash3}, but we impose a daily trading limit $x^{+}$. This example
is more realistic than the previous one, because selling $100\%$ of the
assets generally requires more than one day. This is especially true in a
liquidity stress testing framework. For example, $x^{+}=10\%$ imposes that we
can sell $10\%$ of the fund every trading day, implying that we need $10$
trading days to liquidate the fund.
\end{example}

If $x\leq x^{+}$, we have:
\begin{equation}
\mathcal{TC}_{\mathrm{asset}}\left( x\right) =x\left( \ensuremath{\boldsymbol{\mathpzc{s}}}+\beta _{\pmb{\pi}}\sigma \sqrt{x}\right)
\end{equation}%
If $x>x^{+}$, the liquidation must be split over several trading days and the
total transaction cost is obtained by applying the unit cost function to each
daily slice. The optimal cash buffer $w_{\mathrm{cash}}^{\star }$ is then the
value that minimizes the net buffer cost, that is the expected carry cost
$w_{\mathrm{cash}}\cdot \left( \mu _{\mathrm{asset}}-\mu _{\mathrm{cash}}\right) $
plus a tracking error penalty controlled by the parameter $\lambda $ minus the
expected liquidation gain $\mathbb{E}\left[ \mathcal{LG}\left( w_{\mathrm{cash}}\right) \right] $.
Figure \ref{fig:cash6a} reports the net buffer cost when
$\mu _{\mathrm{asset}}-\mu _{\mathrm{cash}}=1\%$ and $\lambda =0$. Since the
transaction costs are higher in a stress period, the liquidation gain is
larger when the stressed parameters are used. Therefore, it is
more interesting to use a \textquotedblleft \textit{stressed}\textquotedblright\
transaction cost function when we would like to
calculate cash buffer analytics. This is why we only focus on cases (c) and
(d) in what follows. Figure \ref{fig:cash6b} shows the optimal value
$w_{\mathrm{cash}}^{\star }$ of the cash buffer with respect to the expected
redemption rate\footnote{We have $\mathbb{E}\left[ \ensuremath{\boldsymbol{\mathpzc{R}}}\right] =
\frac{\eta }{\eta +1}$ when $\mathbf{F}\left( x\right) =x^{\eta }$.}. We
verify that $w_{\mathrm{cash}}^{\star }$ increases with the trading limit $x^{+}$ and the expected redemption rate. For instance, the optimal cash buffer is equal to $10\%$ if $\mathbb{E}\left[ \ensuremath{\boldsymbol{\mathpzc{R}}}\right] =50\%$ and $x^{+}=10\%$. If there is no trading limit, $w_{\mathrm{cash}}^{\star }=10\%$ if $\mathbb{E}\left[ \ensuremath{\boldsymbol{\mathpzc{R}}}\right] =23\%$. Of course, these results are extremely sensitive to the values of $\mu _{\mathrm{asset}}-\mu _{\mathrm{cash}}$, $\lambda $ and $\sigma _{\mathrm{asset}}$. For example, we obtain Figure \ref{fig:cash6c} on page \pageref{fig:cash6c} when $\mu _{\mathrm{asset}}- \mu _{\mathrm{cash}}$ is equal to $2.5\%$. $w_{\mathrm{cash}}^{\star }$ is dramatically reduced, and there is no liquidity buffer when $x^{+}=10\%$. There is also no implementation when $x^{+}=100\%$ and $\mathbb{E}\left[ \ensuremath{\boldsymbol{\mathpzc{R}}}\right] \leq 50\%$.
Therefore, the value of $w_{\mathrm{cash}}^{\star }$ is very sensitive to $\mu _{\mathrm{asset}}- \mu _{\mathrm{cash}}$. We observe the same phenomenon with the parameter $\lambda $. Indeed, when we take into account the tracking error risk, the optimal value $w_{\mathrm{cash}}^{\star }$ is reduced\footnote{See Figures \ref{fig:cash6d} and \ref{fig:cash6e} on page \pageref{fig:cash6d}.}.\smallskip

\begin{figure}[tbph]
\centering
\caption{Net buffer cost
($\mu _{\mathrm{asset}}-\mu _{\mathrm{cash}}=1\%$ and $\lambda =0$)}
\label{fig:cash6a}
\vskip 10pt plus 2pt minus 2pt\relax
\includegraphics[width = \figurewidth, height = \figureheight]{cash6a}
\end{figure}

\begin{figure}[tbph]
\centering
\caption{Optimal cash buffer
($\mu _{\mathrm{asset}}-\mu _{\mathrm{cash}}=1\%$ and $\lambda =0$)}
\label{fig:cash6b}
\vskip 10pt plus 2pt minus 2pt\relax
\includegraphics[width = \figurewidth, height = \figureheight]{cash6b}
\end{figure}

Given $w_{\mathrm{cash}}$, we define the break-even risk premium as the
value of $\mu _{\mathrm{asset}}-\mu _{\mathrm{cash}}$ such that the net cost
function is minimized at $w_{\mathrm{cash}}$. It is equal to:
\begin{equation}
\varrho \left( w_{\mathrm{cash}}\right) =\frac{\partial \,\mathbb{E}\left[
\mathcal{LG}\left( w_{\mathrm{cash}}\right) \right] }{\partial \,w_{\mathrm{cash}}}-\lambda w_{\mathrm{cash}}\left( \sigma _{\mathrm{cash}}^{2}+\sigma _{\mathrm{asset}}^{2}-2\rho _{\mathrm{cash},\mathrm{asset}}\sigma _{\mathrm{cash}}\sigma _{\mathrm{asset}}\right)
\end{equation}%
In Figures \ref{fig:cash7a} and \ref{fig:cash7b} on page \pageref{fig:cash6c}, we have reported the value of $\varrho \left( w_{\mathrm{cash}}\right) $ for the previous example. Once $\varrho \left( w_{\mathrm{cash}}\right) $ is computed, we obtain the following rules\footnote{For instance, Figures \ref{fig:cash7e} and \ref{fig:cash7f} on page \pageref{fig:cash7e} illustrate this set of rules for a liquidity buffer of $10\%$.}:%
\begin{equation}
\left\{
\begin{array}{l}
\mu _{\mathrm{asset}}-\mu _{\mathrm{cash}}<\varrho \left( w_{\mathrm{cash}}\right) \Rightarrow w_{\mathrm{cash}}^{\star }>w_{\mathrm{cash}} \\
\mu _{\mathrm{asset}}-\mu _{\mathrm{cash}}=\varrho \left( w_{\mathrm{cash}}\right) \Rightarrow w_{\mathrm{cash}}^{\star }=w_{\mathrm{cash}} \\
\mu _{\mathrm{asset}}-\mu _{\mathrm{cash}}>\varrho \left( w_{\mathrm{cash}}\right) \Rightarrow w_{\mathrm{cash}}^{\star }<w_{\mathrm{cash}}%
\end{array}%
\right.
\end{equation}%
In particular, the implementation of a cash buffer is justified if and only if:
\begin{equation}
w_{\mathrm{cash}}^{\star }>0\Leftrightarrow \mu _{\mathrm{asset}}-\mu _{\mathrm{cash}}<\varrho \left( 0\right) =\frac{\partial \,\mathbb{E}\left[
\mathcal{LG}\left( 0\right) \right] }{\partial \,w_{\mathrm{cash}}}
\end{equation}%
We notice that $\varrho \left( 0\right) $ does not depend on
the tracking error risk.
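As an illustration, the following sketch evaluates the break-even risk premium by numerical differentiation, using the approximate liquidation gain (\ref{eq:cash3c}); the parameters ($\eta =1$, a daily volatility of $1.25\%$, $\lambda =0$ by default) are hypothetical, so the magnitudes differ from those obtained with the exact formula and the trading limits.
\begin{verbatim}
# Approximate expected liquidation gain (approximate formula above)
# Hypothetical parameters: s = 20 bps, beta_pi = 0.4, daily sigma ~ 1.25%, eta = 1
s, beta_pi, sigma, eta = 0.0020, 0.4, 0.0125, 1.0

def expected_lg(w):
    return (s * w + beta_pi * sigma * w ** 1.5
            - s / (eta + 1.0) * w ** (eta + 1.0)
            - 3.0 * beta_pi * sigma / (2.0 * eta + 3.0) * w ** (eta + 1.5))

def rho(w, lam=0.0, sigma_cash=0.0, sigma_asset=0.20, corr=0.0, h=1e-6):
    # break-even risk premium: derivative of E[LG] minus the tracking error term
    dlg = (expected_lg(w + h) - expected_lg(w)) / h
    return dlg - lam * w * (sigma_cash ** 2 + sigma_asset ** 2
                            - 2.0 * corr * sigma_cash * sigma_asset)

for w in (0.0, 0.05, 0.10, 0.20):
    print(w, round(1e4 * rho(w), 1))   # break-even premium in bps for each buffer level
\end{verbatim}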
Figures \\ref{fig:cash7c} and \\ref{fig:cash7d} show\nwhen a liquidity buffer is implemented with respect to the risk premium\n$\\mu_{\\mathrm{asset}}-\\mu _{\\mathrm{cash}}$ and the expected redemption rate\n$\\mathbb{E}\\left[ \\ensuremath{\\boldsymbol{\\mathpzc{R}}}\\right] $.\n\n\\begin{figure}[p]\n\\centering\n\\caption{Implementation of a cash buffer when $x^{+} = 10\\%$}\n\\label{fig:cash7c}\n\\vskip 10pt plus 2pt minus 2pt\\relax\n\\includegraphics[width = \\figurewidth, height = \\figureheight]{cash7c}\n\\end{figure}\n\n\\begin{figure}[p]\n\\centering\n\\caption{Implementation of a cash buffer when $x^{+} = 100\\%$}\n\\label{fig:cash7d}\n\\vskip 10pt plus 2pt minus 2pt\\relax\n\\includegraphics[width = \\figurewidth, height = \\figureheight]{cash7d}\n\\end{figure}\n\n\\subsubsection{The debate on cash hoarding}\n\nWe cannot finish this section without saying a few words about the debate on\ncash hoarding. Indeed, the underlying idea of the previous analysis is to\nimplement a cash buffer before the redemption occurs, and to help\nthe liquidation process during the liquidity stress period\n\\citep{Chernenko-2016, Goldstein-2017a, Ma-2021}. However, \\citet{Morris-2017}\nfound that asset managers can hoard cash during redemption\nperiods, because they anticipate worst days. Instead of liquidating the cash\nbuffer to meet investor redemptions, asset managers can preserve the\nliquidity of their portfolios \\citep{Jiang-2021} or even increase the proportion of cash during the stress period \\citep{Schrimpf-2021}. In this case, cash hoarding may amplify fire sales and seems to be contradictory with the implementation of a cash buffer. However, cash hoarding is easy to understand in our framework. Indeed, during a stress period, asset managers may anticipate a very pessimistic scenario, meaning that they dramatically reduce the expected risk premium $\\mu _{\\mathrm{asset}}-\\mu _{\\mathrm{cash}}$. This implies increasing the level of the optimal cash buffer $w_{\\mathrm{cash}}^{\\star }$. Therefore, the previous framework explains that cash buffering and cash hoarding are compatible if we consider that asset managers have a dynamic view of the risk premium of assets.\n\n\\subsection{Special arrangements}\n\nSpecial arrangements are used extensively by the hedge fund industry. In\nparticular, gates and side pockets were extensively implemented during\nthe 2008 Global Financial Crisis after the Lehman Brothers collapse\n\\citep{Aiken-2015, Teo-2011}. Nevertheless, mutual funds are increasingly\nfamiliar with these tools and are allowed in many European countries\n\\citep[Table 4.3.A, page 33]{Darpeix-2020}. For instance, gates, in-kind\nredemptions, side pockets and redemption suspensions are active in France, Italy, Spain and the Netherlands. In Germany, gates and side pockets are not\npermitted whereas side pockets are prohibited in the United Kingdom.\n\n\\subsubsection{Redemption suspension and gate}\n\nWhen implementing a gate, the fund manager temporarily limits the amount of\nredemptions from the fund. In this case, the gate forces the redeeming\ninvestors to wait until the next regular withdrawal dates to receive the\nbalance of their withdrawal request. For instance, the fund manager can impose that the daily amount of withdrawals do not exceed $2\\%$ of the fund's net assets. Let us assume a redemption rate of $5\\%$ at time $t$ (investors\n$A$) and $2\\%$ at time $t+1$ (investors $B$). 
Because we have a daily gate of\n$2\\%$, only $40\\%$ of the withdrawal of investors $A$ may be executed at time\n$t$. The remaining $60\\%$ is executed at times $t+1$ and $t+2$. Investors $B$ who\nwould like to redeem at time $t+1$ must wait until time $t+2$, because redeeming investors $A$ take precedence. Finally, we obtain the redemption schedule reported in Table~\\ref{tab:lmt-gate1}. We notice that the last redeeming investors may be greatly penalized because of the queuing system. If there are many redemptions, the remaining investors have no incentive to redeem because they face two risks. The first risk concerns the redemption time, which depends on the frequency of withdrawal dates. In the case of monthly withdrawals, investors can wait several months before obtaining their cash. For instance, we observed this situation during the hedge fund crisis at the end of 2008. The second risk concerns the valuation. Indeed, the unit price can change\ndramatically during the redemption gate period. This is why regulators\ngenerally impose a maximum period for mutual funds that would like to apply\na redemption gate.\\smallskip\n\n\\begin{table}[tbph]\n\\centering\n\\caption{Redemption schedule with and without a $2\\%$ daily gate}\n\\label{tab:lmt-gate1}\n\\begin{tabular}{cccccc}\n\\hline\nRedemption & Redeeming & \\multicolumn{4}{c}{Time} \\\\\nGate & Investors & $t$ & $t+1$ & $t+2$ & $t+3$ \\\\ \\hline\n\\multirow{4}{*}{No gate}\n& \\multirow{2}{*}{$A$}\n & $5\\%$ & & & \\\\\n & & ($100\\%$)& & & \\\\ \\cline{2-6}\n& \\multirow{2}{*}{$B$}\n & & $2\\%$ & & \\\\\n & & & ($100\\%$) & & \\\\ \\hline\n\\multirow{4}{*}{$2\\%$}\n& \\multirow{2}{*}{$A$}\n & $2\\%$ & $2\\%$ & $1\\%$ & \\\\\n & & ($40\\%$)& ($40\\%$) & ($20\\%$) & \\\\ \\cline{2-6}\n& \\multirow{2}{*}{$B$}\n & & & $1\\%$ & $1\\%$ \\\\\n & & & & ($50\\%$) & ($50\\%$) \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\nAn extreme case of a redemption gate is when the manager completely suspends redemptions from the fund. A redemption suspension is rare and was originally used by hedge funds\\footnote{See for instance the famous suspension of redemptions decided by GAM after its top manager in charge of absolute return strategies was the subject of a disciplinary procedure \\citep{GAM-2018}.}. However, it is now part of the liquidity management tools that can be used by mutual funds. For instance, it is the only mechanism that is available in all European jurisdictions \\citep{ESRB-2017, Darpeix-2020}. It was used by at least $215$ European investment funds (with net assets totaling \\euro 73.4 bn) during the coronavirus crisis in February and March 2020 \\citep{Grill-2021}. The authors found that \\textquotedblleft \\textsl{many of those funds had invested in illiquid assets, were leveraged or had lower cash holdings than funds that were not suspended}\\textquotedblright.\\smallskip\n\nAt first sight, a suspension of redemptions seems to be a tougher decision\nthan a redemption gate. Indeed, in the latter case, redemptions continue to be\naccepted, but they are delayed. However, it is not certain that a redemption\ngate will have less impact than a redemption suspension. In a period of fire\nsales, gates can also exacerbate the liquidity crisis because of the asset\nliquidation\/market transmission channel of systemic risk \\citep{Roncalli-2015a}. On the contrary, redemption suspensions do not\ndirectly contribute to the asset liquidation from a theoretical point of\nview. However, we generally observe higher redemptions when suspensions are lifted.
This means that we can have an ex-post overreaction of investors.\nIn fact, it seems that a suspension of redemptions is preferable when the fund manager faces a temporary liquidity crisis such that many securities cannot be priced. In the absence of price valuation, it may be good to wait until normal conditions are restored. Of course, this is not always possible and depends on the nature of the liquidity crisis.\\smallskip\n\nThe impact of gates has received little attention from academics.\nNevertheless, the theoretical study of \\citet{Cipriani-2014} showed that\nthere can be preemptive runs when a fund manager is able to impose a gate,\nalthough it can be ex-post optimal for the fund's investors.\nThis illustrates the issue of strategic interaction and payoff complementarities described by \\citet{Chen-2010}. Moreover, imposing a gate generally leads to a reputational risk for the fund and a negative externality for the corresponding asset class and other similar investment funds. More generally, \\citet{Voellmy-2021} showed that redemption gates are less efficient than redemption fees, which are described on page \\pageref{section:redemption-fees}.\n\n\\subsubsection{Side pocket}\n\nWhen a side pocket is created, the fund separates illiquid assets from\nliquid assets. Therefore, the fund is split into two funds: the mirror fund,\nwhich is made up of the liquid assets, and the side pocket, which contains the illiquid\nassets. Each investor in the initial fund receives the same number of units\nof the mirror fund and the side pocket. The mirror fund inherits the\nproperties of the original fund. Therefore, the mirror fund can continue to\nbe subscribed or redeemed. On the contrary, the side pocket fund becomes a\nclosed-end fund \\citep{Opromolla-2009}. The fund manager's objective is then to liquidate the assets of the side pocket fund. However, he is not forced to liquidate them immediately and can wait until market conditions improve. For instance, it took many months (and sometimes one or two years) for hedge funds to manage the side pockets created in October 2008 and for investors to retrieve their cash.\\smallskip\n\nTo the best of our knowledge, the only academic study on side pocketing\nis the research work conducted by \\citet{Aiken-2015}, who analyzed\nthe behavior of $740$ hedge funds between 2006 and 2011.\nThe authors found that side pockets and gates are positively correlated, meaning that hedge funds both gated investors and placed assets into a side pocket during the 2008 Global Financial Crisis.\nThis result suggests that gates and side pockets are not mutually exclusive.\nThis explains the bad reputation of side pockets. Indeed, investors\ngenerally have the feeling of facing a double sentence. A part of their\ninvestment is segregated, and they do not know when and how much of their\ncapital they will retrieve. And the remainder of their investment is gated.\nThis is not the original objective of side pocketing, since the underlying\nidea is to separate the original fund into a healthy portfolio and a bad\nportfolio. But generally, the healthy fund is also gated.\\smallskip\n\nCertainly, side pocketing is a last-resort discretionary liquidity restriction because of the reputational risk. First, the fund\nmanager gives a strong signal to the market that the liquidity crisis is not temporary but will persist for a long time. Therefore, side pocketing indirectly contributes to strengthening the spillover effect of the liquidity crisis because market sentiment deteriorates.
Second, if we restrict our analysis to the fund level, the effect of side pocketing is ambiguous.\nIt is obvious that it eliminates the first-mover advantage, but it is also a sign that the liquidity calibration of the original fund was poor.\nMoreover, side pockets can be used to protect management fees on\nthe more liquid assets or to hide a poor risk management process.\nThis explains why side pocketing is generally followed by the collapse of the fund, which suffers from withdrawals by existing investors while being unable to attract new ones.\n\n\\subsubsection{In-kind redemptions}\n\nIn-kind redemptions are non-monetary payments. In this case, the fund\nmanager offers a basket of securities to the redeeming investor, generally\nthe asset portfolio of the fund on a pro-rata basis. Since the beginning of\nthe 2000s, in-kind redemptions have been used extensively in order to improve the tax efficiency of US exchange traded funds \\citep{Poterba-2002}. Even though they are less common in the mutual fund industry, in-kind redemptions have become increasingly popular for managing liquidity runs. For instance, according to \\citet{ESRB-2017}, in-kind redemptions are among the most widely available tools in the European Union, just after the suspension of redemptions.\\smallskip\n\nIn-kind redemptions are generally considered an efficient tool for\nmanaging liquidity runs since they transfer the liquidation issue to\nredeeming investors. As shown by \\citet{Agarwal-2020}, redemption-in-kind funds tend to deliver more illiquid securities. Moreover, these funds\n\\textquotedblleft \\textsl{experience less flow subsequently because\ninvestors avoid such funds where they are unable to benefit from liquidity\ntransformation function of funds}\\textquotedblright\\ \\citep[page 30]{Agarwal-2020}.\\smallskip\n\nNormally, in-kind redemptions solve the valuation problem of the redemption portfolio when it corresponds to the pro-rata asset portfolio\\footnote{Indeed, the valuation problem is transferred to the redeeming investors.}. This property is appealing in a period of liquidity stress. However, the pro-rata rule only concerns large redemptions in order to be sure that the rounding effect and the decimalization impact are small.\nFrom a technical point of view, redemption-in-kind is more difficult to manage than gating the fund. This certainly explains why there are few mutual funds that have applied in-kind redemptions in Europe.\n\n\\subsection{Swing pricing}\n\nThe objective of swing pricing is to protect existing investors from\ndilution\\footnote{This means a reduction in the fund's value.} caused by\nlarge trading costs and market impacts due to subscriptions and\/or\nredemptions. Since this mechanism is relatively new, there are few research\nstudies on its benefits. From a theoretical and empirical point of view, it\nseems that swing pricing can eliminate the first-mover advantage\n\\citep{Jin-2019, Capponi-2020} and mitigate the systemic risk\n\\citep{Malik-2017, Jin-2019}. Nevertheless, these results are challenged\nby the works of \\citet{Lewrick-2017a, Lewrick-2017b}:\n\\begin{quote}\n[...]
\\textquotedblleft \\textsl{we show that, within our theoretical\nframework, swing pricing can prevent self-fulfilling runs on the fund.\nHowever, in practice, the scope for swing pricing to prevent\nself-fulfilling runs is more limited, primarily because the share of\nliquidity-constrained investors is difficult to assess}\\textquotedblright\\\n\\citep{Lewrick-2017a}.\\smallskip\n\n[...] \\textquotedblleft \\textsl{we show that swing pricing dampens outflows\nin reaction to weak fund performance, but has a limited effect during\nstress episodes. Furthermore, swing pricing supports fund returns, while\nraising accounting volatility, and may lead to lower cash\nbuffers}\\textquotedblright\\ \\citep{Lewrick-2017b}.\n\\end{quote}\n\n\\subsubsection{Investor dilution}\n\nFollowing \\citet{Roncalli-lst1}, the total net assets (TNA) equal the total\nvalue of assets $A\\left( t\\right) $ less the current or accrued liabilities\n$D\\left( t\\right) $:\n\\begin{equation*}\n\\limfunc{TNA}\\left( t\\right) =A\\left( t\\right) -D\\left( t\\right)\n\\end{equation*}%\nThe net asset value (NAV) represents the share price or the unit price:\n\\begin{equation*}\n\\limfunc{NAV}\\left( t\\right) =\\frac{\\limfunc{TNA}\\left( t\\right) }{N\\left(\nt\\right) }\n\\end{equation*}%\nwhere the total number $N\\left( t\\right) $ of shares or units in issue is the\nsum of all units owned by all unitholders. In the sequel, we assume that the\ndebits are negligible: $D\\left( t\\right) \\ll A\\left( t\\right) $. This implies\nthat:\n\\begin{equation*}\n\\limfunc{NAV}\\left( t+1\\right) \\approx \\frac{A\\left( t+1\\right) }{N\\left(\nt+1\\right) }\n\\end{equation*}%\n$R_{A}\\left( t+1\\right) $ denotes the return of the assets. We can then\nface three situations:\n\\begin{enumerate}\n\\item There is no net subscription or redemption flows, meaning that $\n N\\left( t+1\\right) =N\\left( t\\right) $ and $A\\left( t+1\\right) =\\left(\n 1+R\\left( t+1\\right) \\right) \\cdot A\\left( t\\right) $. In this case, we\n have:\n\\begin{eqnarray}\n\\limfunc{NAV}\\left( t+1\\right) &=&\\left( 1+R_{A}\\left( t+1\\right) \\right)\n\\frac{A\\left( t\\right) }{N\\left( t\\right) } \\notag \\\\\n&=&\\left( 1+R_{A}\\left( t+1\\right) \\right) \\cdot \\limfunc{NAV}\\left(\nt\\right) \\label{eq:dilution1}\n\\end{eqnarray}%\nThe growth of the net asset value is exactly equal to the return of the\nassets:%\n\\begin{equation*}\nR_{\\limfunc{NAV}}\\left( t+1\\right) =\\frac{\\limfunc{NAV}\\left( t+1\\right) }{%\n\\limfunc{NAV}\\left( t\\right) }-1=R_{A}\\left( t+1\\right)\n\\end{equation*}\n\n\\item If the investment fund experiences some net subscription flows, the\n number of units becomes:\n\\begin{equation*}\nN\\left( t+1\\right) =N\\left( t\\right) +\\Delta N\\left( t+1\\right)\n\\end{equation*}%\nwhere $\\Delta N\\left( t+1\\right) =N^{+}\\left( t+1\\right) $ is the number of\nunits to be created. 
At time $t+1$, we have\\footnote{$\\Delta N\\left(\nt+1\\right) \\cdot \\limfunc{NAV}\\left( t\\right) $ is the amount invested in\nthe new assets at time $t$.}:%\n\\begin{eqnarray*}\nA\\left( t+1\\right) &=&\\left( 1+R_{A}\\left( t+1\\right) \\right) \\cdot \\left(\nA\\left( t\\right) +\\Delta N\\left( t+1\\right) \\cdot \\limfunc{NAV}\\left(\nt\\right) \\right) \\\\\n&=&\\left( 1+R_{A}\\left( t+1\\right) \\right) \\cdot \\left( N\\left( t\\right)\n\\cdot \\limfunc{NAV}\\left( t\\right) +\\Delta N\\left( t+1\\right) \\cdot \\limfunc{%\nNAV}\\left( t\\right) \\right) \\\\\n&=&\\left( 1+R_{A}\\left( t+1\\right) \\right) \\cdot N\\left( t+1\\right) \\cdot\n\\limfunc{NAV}\\left( t\\right)\n\\end{eqnarray*}%\nand:%\n\\begin{equation*}\n\\limfunc{TNA}\\left( t+1\\right) =A\\left( t+1\\right) -\\mathcal{TC}\\left(\nt+1\\right)\n\\end{equation*}%\nwhere $\\mathcal{TC}\\left( t+1\\right) $ is the transaction cost of buying\nthe new assets. We deduce that:\n\\begin{eqnarray}\n\\limfunc{NAV}\\left( t+1\\right) &=&\\frac{A\\left( t+1\\right) -\\mathcal{TC}%\n\\left( t+1\\right) }{N\\left( t+1\\right) } \\notag \\\\\n&=&\\left( 1+R_{A}\\left( t+1\\right) \\right) \\cdot \\limfunc{NAV}\\left(\nt\\right) -\\frac{\\mathcal{TC}\\left( t+1\\right) }{N\\left( t+1\\right) }\n\\label{eq:dilution2}\n\\end{eqnarray}%\nIn this case, the growth of the net asset value is less than the return of\nthe assets:%\n\\begin{equation*}\nR_{\\limfunc{NAV}}\\left( t+1\\right) =R_{A}\\left( t+1\\right) -\\frac{\\mathcal{TC%\n}\\left( t+1\\right) }{N\\left( t+1\\right) \\cdot \\limfunc{NAV}\\left( t\\right) }%\n\\leq R_{A}\\left( t+1\\right)\n\\end{equation*}\n\n\\item If the investment fund experiences some net redemption flows, the\n number of units becomes:\n\\begin{equation*}\nN\\left( t+1\\right) =N\\left( t\\right) +\\Delta N\\left( t+1\\right)\n\\end{equation*}%\nwhere $\\Delta N\\left( t+1\\right) =-N^{-}\\left( t+1\\right) $ and\n$N^{-}\\left(\nt+1\\right) $ is the number of units to be redeemed. 
At time $t+1$, we have:%\n\\begin{eqnarray}\n\\limfunc{NAV}\\left( t+1\\right) &=&\\frac{\\left( 1+R_{A}\\left( t+1\\right)\n\\right) \\cdot N\\left( t\\right) \\cdot \\limfunc{NAV}\\left( t\\right) -\\mathcal{%\nTC}\\left( t+1\\right) }{N\\left( t\\right) } \\notag \\\\\n&=&\\left( 1+R_{A}\\left( t+1\\right) \\right) \\cdot \\limfunc{NAV}\\left(\nt\\right) -\\frac{\\mathcal{TC}\\left( t+1\\right) }{N\\left( t\\right) }\n\\label{eq:dilution3}\n\\end{eqnarray}%\nIn this case, the growth of the net asset value is less than the return of\nthe assets:%\n\\begin{equation*}\nR_{\\limfunc{NAV}}\\left( t+1\\right) =R_{A}\\left( t+1\\right) -\\frac{\\mathcal{TC%\n}\\left( t+1\\right) }{N\\left( t\\right) \\cdot \\limfunc{NAV}\\left( t\\right) }%\n\\leq R_{A}\\left( t+1\\right)\n\\end{equation*}\n\\end{enumerate}\nWhen comparing Equations (\\ref{eq:dilution1}), (\\ref{eq:dilution2}) and\n(\\ref{eq:dilution3}), we notice that subscription\/redemption flows may\npenalize existing\/remaining investors, because the net asset value is reduced\nby the transaction costs that are borne by all investors in the fund:\n\\begin{equation*}\n\\limfunc{NAV}\\left( t+1\\mid \\Delta N\\left( t+1\\right) =0\\right) -\\limfunc{NAV%\n}\\left( t+1\\mid \\Delta N\\left( t+1\\right) \\neq 0\\right) =\\frac{\\mathcal{TC}%\n\\left( t+1\\right) }{\\max \\left( N\\left( t\\right) ,N\\left( t+1\\right) \\right)}\n\\end{equation*}%\nThe decline in the net asset value is referred to as \\textquotedblleft\n\\textit{investor dilution}\\textquotedblright.\\smallskip\n\nIn order to illustrate the dilution, we consider a fund with the following\ncharacteristics: $\\limfunc{NAV}\\left( t\\right) =\\$100$, $N\\left( t\\right)\n=10$ and $R_{A}\\left( t+1\\right) =5\\%$. In the absence of\nsubscriptions\/redemptions, we have:\n\\begin{equation*}\n\\limfunc{NAV}\\left( t+1\\right) =\\left( 1+5\\%\\right) \\times 100=105\n\\end{equation*}%\nWe assume that creating\/redeeming $5$ shares induces a transaction cost of\n$\\$30$. In the case of a net subscription of $\\$500$, we have $N\\left(\nt+1\\right) =15$ and:\n\\begin{equation*}\n\\limfunc{NAV}\\left( t+1\\right) =\\left( 1+5\\%\\right) \\times 100-\\frac{30}{15}%\n=103\n\\end{equation*}%\nIn the case of a net redemption of $\\$500$, we have $N\\left( t+1\\right) =5$\nand:%\n\\begin{equation*}\n\\limfunc{NAV}\\left( t+1\\right) =\\left( 1+5\\%\\right) \\times 100-\\frac{30}{10}%\n=102\n\\end{equation*}%\nThe transaction cost therefore reduces the NAV and impacts all investors in the fund. Moreover, we notice that the dilution is greater for redemptions than subscriptions. The reason is that the number of shares increases in the case of a subscription, implying that the transaction cost by share is lower than in the case of a redemption.\\smallskip\n\nThis asymmetry property between subscriptions and redemptions is an important\nissue when considering a liquidity stress testing program. Another factor is\nthat the unit transaction cost is an increasing function of the size of the\nsubscription\/redemption amount. This is particularly true in a stress market\nwhen it is difficult to sell assets because of the low demand. If we consider\nthe previous example, we can assume that selling $\\$500$ in a stress period\nmay induce a transaction cost of $\\$50$. 
In this case, we obtain:\n\\begin{equation*}\n\\limfunc{NAV}\\left( t+1\\right) =\\left( 1+5\\%\\right) \\times 100-\\frac{50}{10}%\n=100\n\\end{equation*}%\nThis example illustrates how investor dilution is an important issue when\nthe fund faces redemptions in a stress period.\n\n\\subsubsection{The swing pricing principle}\n\nThe swing pricing principle means that the NAV is adjusted for net\nsubscriptions\/redemptions. Therefore, transaction costs are only borne by the\nsubscribing\/redeeming investors. In the case of a net redemption, the NAV\nmust be reduced by the transaction costs divided by the number of net\nredeeming shares:\n\\begin{equation*}\n\\limfunc{NAV}\\nolimits_{\\mathrm{swing}}\\left( t+1\\right) =\\limfunc{NAV}%\n\\nolimits_{\\mathrm{gross}}\\left( t+1\\right) -\\frac{\\mathcal{TC}\\left(\nt+1\\right) }{N^{-}\\left( t+1\\right) -N^{+}\\left( t+1\\right) }\n\\end{equation*}%\nwhere $\\limfunc{NAV}\\nolimits_{\\mathrm{gross}}$ is the \\textquotedblleft\n\\textit{gross}\\textquotedblright\\ net asset value calculated before swing pricing is\napplied \\citep{AFG-2016}. In the case of a net subscription, the NAV becomes:\n\\begin{equation*}\n\\limfunc{NAV}\\nolimits_{\\mathrm{swing}}\\left( t+1\\right) =\\limfunc{NAV}%\n\\nolimits_{\\mathrm{gross}}\\left( t+1\\right) +\\frac{\\mathcal{TC}\\left(\nt+1\\right) }{N^{+}\\left( t+1\\right) -N^{-}\\left( t+1\\right) }\n\\end{equation*}%\nTherefore, the NAV\\ is increased if $N^{+}-N^{-}>0$. Finally, we obtain the\nfollowing compact formula:%\n\\begin{equation*}\n\\limfunc{NAV}\\nolimits_{\\mathrm{swing}}\\left( t+1\\right) =\\limfunc{NAV}%\n\\nolimits_{\\mathrm{gross}}\\left( t+1\\right) +\\frac{\\mathcal{TC}\\left(\nt+1\\right) }{\\Delta N\\left( t+1\\right) }\n\\end{equation*}%\nThe adjustment only impacts investors that trade on that day, since existing\ninvestors are not affected by this adjustment. Indeed, the total net asset is\nequal to:\n\\begin{eqnarray*}\n\\limfunc{TNA}\\left( t+1\\right) &=&A\\left( t+1\\right) -\\mathcal{TC}\\left(\nt+1\\right) \\\\\n&=&N\\left( t\\right) \\cdot \\limfunc{NAV}\\nolimits_{\\mathrm{gross}}\\left(\nt+1\\right) +\\Delta N\\left( t+1\\right) \\cdot \\limfunc{NAV}\\nolimits_{\\mathrm{%\nswing}}\\left( t+1\\right) -\\mathcal{TC}\\left( t+1\\right) \\\\\n&=&N\\left( t\\right) \\cdot \\limfunc{NAV}\\nolimits_{\\mathrm{gross}}\\left(\nt+1\\right) +\\Delta N\\left( t+1\\right) \\cdot \\limfunc{NAV}\\nolimits_{\\mathrm{%\ngross}}\\left( t+1\\right) \\\\\n&=&N\\left( t+1\\right) \\cdot \\limfunc{NAV}\\nolimits_{\\mathrm{gross}}\\left(\nt+1\\right)\n\\end{eqnarray*}%\nmeaning that it is exactly equal to the gross net asset value. 
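\\smallskip

The following Python sketch reproduces the dilution mechanics and the swing pricing adjustment described above. The function is ours and is only a stylized illustration of the one-period formulas; the numerical inputs are those of the previous worked example.
\\begin{verbatim}
def nav_next(nav, n_shares, r_asset, delta_n=0, tc=0.0, swing=False):
    # One-period NAV update; TC is borne by all investors (no swing) or
    # only by the net flow, NAV_swing = NAV_gross + TC / Delta N (swing)
    gross = (1.0 + r_asset) * nav
    if delta_n == 0 or tc == 0.0:
        return gross
    if swing:
        return gross + tc / delta_n          # delta_n is signed
    return gross - tc / max(n_shares, n_shares + delta_n)

# NAV(t) = $100, 10 shares, 5% asset return, TC = $30 for 5 shares
print(nav_next(100, 10, 0.05))                                 # 105.0
print(nav_next(100, 10, 0.05, delta_n=+5, tc=30))              # 103.0
print(nav_next(100, 10, 0.05, delta_n=-5, tc=30))              # 102.0
print(nav_next(100, 10, 0.05, delta_n=-5, tc=50))              # 100.0 (stress)
print(nav_next(100, 10, 0.05, delta_n=+5, tc=30, swing=True))  # 111.0
print(nav_next(100, 10, 0.05, delta_n=-5, tc=30, swing=True))  #  99.0
\\end{verbatim}
The first four values correspond to the dilution example above, while the last two anticipate the swing-adjusted prices discussed below.\\smallskip
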
If there is no redemption\/subscription at time $t+2$, we obtain:\n\\begin{eqnarray*}\n\\limfunc{NAV}\\left( t+2\\right) &=&\\left( 1+R_{A}\\left( t+2\\right) \\right)\n\\cdot \\limfunc{NAV}\\nolimits_{\\mathrm{gross}}\\left( t+1\\right) \\\\\n&=&\\left( 1+R_{A}\\left( t+2\\right) \\right) \\cdot \\left( 1+R_{A}\\left(\nt+1\\right) \\right) \\cdot \\limfunc{NAV}\\left( t+1\\right)\n\\end{eqnarray*}%\nWe notice that swing pricing has protected the fund's buy-and-hold investors.\\smallskip\n\nIf we consider the previous example, we have $\\limfunc{NAV}\\nolimits_{%\n\\mathrm{gross}}\\left( t+1\\right) =105$ and:%\n\\begin{equation*}\n\\limfunc{NAV}\\nolimits_{\\mathrm{swing}}\\left( t+1\\right) =\\left\\{\n\\begin{array}{ll}\n105+\\dfrac{30}{5}=111 & \\text{if subscription} \\\\\n\\\\\n105-\\dfrac{30}{5}=99 & \\text{if redemption}%\n\\end{array}%\n\\right.\n\\end{equation*}%\nWe observe that swing pricing increases the fund's volatility since the NAV\nadjustment with swing pricing is greater than the NAV adjustment without\nswing pricing. Moreover, the adjustment is smaller for subscriptions because\nthe number of shares increases\\footnote{Indeed, we have $\\max \\left( N\\left(\nt\\right) ,N\\left( t+1\\right) \\right) =N\\left( t+1\\right)\n>N\\left( t\\right) $ in the case of net subscriptions and $\\max \\left( N\\left(\nt\\right) ,N\\left( t+1\\right) \\right) =N\\left( t\\right) $ in the case of net\nredemptions.}. Therefore, we notice an asymmetry between subscriptions and\nredemptions since the latter impact the unit price more than the former. In\nthe case of a liquidity crisis where there is a substantial imbalance between demand and supply, the impact of redemptions is even stronger and the\ncontagion risk of a spillover effect is increased.\n\n\\subsubsection{Swing pricing in practice}\n\nSwing pricing is regulated in Europe and the U.S. and can be used under\nregulatory constraints \\citep{Malik-2017}. For instance, in France, the asset\nmanager should inform the AMF and the fund's auditor of the implementation of\nswing pricing \\citep{AFG-2016}. The use of swing pricing has also been\nencouraged during the Coronavirus crisis in order to manage the liquidity:\n\\begin{quote}\n\\textquotedblleft \\textsl{The AMF also favors the use of swing pricing and\nanti-dilution levies mechanisms during the current crisis, given the low\nliquidity of certain underlying assets and the sometimes-high costs\ninvolved in restructuring portfolios}\\textquotedblright\\ \\citep[page 4]{AMF-2020}.\n\\end{quote}\nAccording to \\citet{ESMA-2020b}, swing pricing was the most used LMT in\nEurope during the market stress in February and March 2020, far ahead\nof redemption suspension. This follows the recommendations provided by the ESRB. Similar rules have existed in the U.S. for some years \\citep{SEC-2016}, even though the use of swing pricing is less widespread than in E.U. jurisdictions.\n\n\\paragraph{Full vs. partial vs. dual pricing}\n\nAccording to \\citet{Jin-2019}, asset managers use three alternative pricing mechanisms:\n\\begin{enumerate}\n\\item Partial swing pricing\\\\\nThe NAV is adjusted only when the net fund flow is greater than a threshold.\n\n\\item Full swing pricing\\\\\nThe NAV is adjusted every time there is a net inflow or outflow.\nFull swing pricing is a special case of partial swing pricing by considering that the threshold is equal to zero.\n\n\\item Dual pricing\\\\\nWe distinguish bid and ask NAVs, meaning that the investment fund has two NAVs. 
Therefore, investors purchase the fund shares at the ask price and sell at the bid price.\n\\end{enumerate}\nUsing a dataset of UK based asset managers, \\citet{Jin-2019} estimated that approximately a quarter of investment funds use traditional pricing mechanisms whereas the three remaining quarters consider alternative pricing mechanisms. Within this group, the break down is the following:\n$25\\%$ employ full swing pricing, $50\\%$ prefer partial swing pricing and $25\\%$ promote dual pricing.\\smallskip\n\nDual pricing is an extension of full swing pricing that distinguishes\nbetween subscriptions and redemptions. However, dual pricing is more complex\nto calibrate. Indeed, it is not obvious to allocate transaction costs to both\nredeeming and subscribing investors because of the netting process. We have:\n\\begin{equation*}\n\\limfunc{NAV}\\nolimits_{\\mathrm{ask}}\\left( t+1\\right) =\\limfunc{NAV}%\n\\nolimits_{\\mathrm{gross}}\\left( t+1\\right) +\\frac{\\alpha \\cdot \\mathcal{TC}%\n\\left( t+1\\right) }{N^{+}\\left( t+1\\right) }\n\\end{equation*}%\nand:%\n\\begin{equation*}\n\\limfunc{NAV}\\nolimits_{\\mathrm{bid}}\\left( t+1\\right) =\\limfunc{NAV}%\n\\nolimits_{\\mathrm{gross}}\\left( t+1\\right) -\\frac{\\left( 1-\\alpha \\right)\n\\cdot \\mathcal{TC}\\left( t+1\\right) }{N^{-}\\left( t+1\\right) }\n\\end{equation*}%\nwhere $\\alpha $ is the portion of the transaction costs allocated to gross\nsubscriptions. For instance, we can use the pro-rata rule:%\n\\begin{equation*}\n\\alpha =\\frac{N^{+}\\left( t+1\\right) }{N^{+}\\left( t+1\\right) +N^{-}\\left(\nt+1\\right) }\n\\end{equation*}%\nbut we can also penalize redeeming investors:%\n\\begin{equation*}\n\\alpha =\\frac{N^{+}\\left( t+1\\right) }{N^{+}\\left( t+1\\right) +\\gamma \\cdot\nN^{-}\\left( t+1\\right) }\n\\end{equation*}%\nwhere $\\gamma \\geq 1$ is the penalization factor. Let us consider the\nprevious example with $N^{+}\\left( t+1\\right) =10$, $N^{-}\\left( t+1\\right)\n=5$ and $\\mathcal{TC}\\left( t+1\\right) =30$. We have $\\limfunc{NAV}_{\\mathrm{\nswing}}\\left( t+1\\right) =111$. If we assume that $\\gamma =1$, we have $\\limfunc{NAV}\\nolimits_{\\mathrm{ask}}\\left( t+1\\right) =107$ and $\\limfunc{NAV}\\nolimits_{\\mathrm{bid}}\\left( t+1\\right) =103$. If $\\gamma $ is set to $2$, the previous figures become $\\limfunc{NAV}\\nolimits_{\\mathrm{ask}}\\left(\nt+1\\right) =106.5$ and $\\limfunc{NAV}\\nolimits_{\\mathrm{bid}}\\left(\nt+1\\right) =102$.\n\n\\begin{remark}\nThe previous example illustrates one of the drawbacks of swing pricing.\nIndeed, since there are $10$ subscriptions and $5$ redemptions, the swing NAV\nis greater than the gross NAV ($\\$111$ vs. $\\$105$). Redeeming investors benefit from the entry of new investors. In the case of dual pricing,\nthe unit price of redeeming investors is equal to $\\$103$ (if $\\gamma$ is set to $1$), which is lower than $\\$111$.\n\\end{remark}\n\n\\paragraph{Setting the swing threshold and the swing factor}\n\nIn most cases, swing pricing is applied only when the net amount of\nsubscriptions and redemptions reaches a threshold\\footnote{An alternative\napproach is to replace $\\min \\left( N\\left( t\\right) ,N\\left( t+1\\right)\n\\right) $ with $N\\left( t\\right) $.}:\n\\begin{equation*}\n\\left\\vert \\frac{\\Delta N\\left( t+1\\right) }{\\min \\left( N\\left( t\\right)\n,N\\left( t+1\\right) \\right) }\\right\\vert \\geq sw_{\\mathrm{threshold}}\n\\end{equation*}%\nwhere $sw_{\\mathrm{threshold}}$ is the swing threshold. 
For example,\n$sw_{\\mathrm{threshold}}=5\\%$ implies that the swing pricing mechanism is\nactivated every time we observe at least $5\\%$ of inflows\/outflows. A swing\nfactor is then applied to the NAV:\n\\begin{equation*}\n\\limfunc{NAV}\\nolimits_{\\mathrm{swing}}\\left( t+1\\right) =\\left\\{\n\\begin{array}{ll}\n\\left( 1+sw_{\\mathrm{\\mathrm{factor}}}\\right) \\cdot \\limfunc{NAV}\\left(\nt+1\\right) & \\text{if net subscription}\\geq sw_{\\mathrm{threshold}} \\\\\n\\left( 1-sw_{\\mathrm{\\mathrm{factor}}}\\right) \\cdot \\limfunc{NAV}\\left(\nt+1\\right) & \\text{if net redemption}\\geq sw_{\\mathrm{threshold}}%\n\\end{array}%\n\\right.\n\\end{equation*}%\n\\smallskip\n\nWe can use different approaches to calibrate the parameters\n$sw_{\\mathrm{threshold}}$ and $sw_{\\mathrm{\\mathrm{factor}}}$. For instance,\nwe can assume that $sw_{\\mathrm{threshold}}$ is constant for a family of\nfunds (e.g. equity funds). In this case, $sw_{\\mathrm{threshold}}$ is\nestimated using a historical sample of flow rates and transaction costs. The\nunderlying idea is to use a value of $sw_{\\mathrm{threshold}}$ such that\ntransaction costs become significant. However, this approach may appear too\nsimple in a liquidity stress testing framework. Indeed, transaction costs are\nlarger in a stress period, meaning that $sw_{\\mathrm{threshold}}$ is a\ndecreasing function of the stress intensity. For instance, the asset manager\ncan calibrate two values of $sw_{\\mathrm{threshold}}$, a standard figure\nwhich is valid for normal periods and a lower figure which is valid for\nstress periods. Typical values are $5\\%$ and $2\\%$. The parameter\n$sw_{\\mathrm{\\mathrm{factor}}}$ must reflect the transaction costs. Again,\ntwo approaches are possible: ex-ante or ex-post transaction costs. In the\nfirst case, we consider the transaction cost function calibrated to\nmeasure the asset risk, whereas the effective cost is used in the second\ncase.\\smallskip\n\nBy construction, the swing factor $sw_{\\mathrm{\\mathrm{factor}}}$ varies over\ntime while the swing threshold $sw_{\\mathrm{threshold}}$ is more static. When\nthe swing pricing mechanism is applied, we can estimate the amount of\ntransaction costs:\n\\begin{equation*}\n\\mathcal{TC}\\left( t+1\\right) =sw_{\\mathrm{\\mathrm{factor}}}\\cdot \\limfunc{%\nNAV}\\left( t+1\\right) \\cdot \\left\\vert \\Delta N\\left( t+1\\right) \\right\\vert\n\\end{equation*}%\nWe deduce that the transaction cost ratio is greater than the product of the\nswing threshold and the swing factor:\n\\begin{eqnarray*}\n\\frac{\\mathcal{TC}\\left( t+1\\right) }{\\min \\left( N\\left( t\\right) ,N\\left(\nt+1\\right) \\right) \\cdot \\limfunc{NAV}\\left( t+1\\right) } &=&sw_{\\mathrm{%\n\\mathrm{factor}}}\\cdot \\left\\vert \\frac{\\Delta N\\left( t+1\\right) }{\\min\n\\left( N\\left( t\\right) ,N\\left( t+1\\right) \\right) }\\right\\vert \\\\\n&\\geq &sw_{\\mathrm{\\mathrm{factor}}}\\cdot sw_{\\mathrm{threshold}}\n\\end{eqnarray*}%\nAnother approach consists in fixing the value of the product:\n\\begin{equation*}\nsw_{\\mathrm{\\mathrm{factor}}}\\cdot sw_{\\mathrm{threshold}}=sw_{\\mathrm{%\nproduct}}\n\\end{equation*}%\nIn this case, we are sure that swing pricing is activated when the\ntransaction cost ratio is greater than the swing product\n$sw_{\\mathrm{product}}$. In the previous approaches, the swing factor is\ncalculated once we have verified that the fund flow is larger than\n$sw_{\\mathrm{threshold}}$.
In this new approach, the swing factor is first\ncalculated in order to determine the swing threshold:\n\\begin{equation*}\nsw_{\\mathrm{threshold}}=\\frac{sw_{\\mathrm{product}}}{sw_{\\mathrm{\\mathrm{%\nfactor}}}}\n\\end{equation*}%\nTherefore, the swing threshold is dynamic and changes every day.\\smallskip\n\nLet us see an example to illustrate the difference between the static and\ndynamic approaches. We consider that $sw_{\\mathrm{threshold}}=5\\%$ and\n$sw_{\\mathrm{\\mathrm{factor}}}=40$ bps. We deduce that\n$sw_{\\mathrm{product}}=2$ bps. In the static approach, the swing pricing\nmechanism is not activated if we face a redemption rate of $4\\%$ whatever the\nvalue of the swing factor. We assume that we are in a period of stress and a\nredemption rate of $4\\%$ implies a swing factor of $60$ bps. In the dynamic\napproach, the swing threshold is equal to $3.33\\%$, implying the activation\nof the swing pricing mechanism. More generally, we have a hyperbolic\nrelationship between $sw_{\\mathrm{\\mathrm{factor}}}$ and\n$sw_{\\mathrm{threshold}}$ as illustrated in Figure \\ref{fig:swing_pricing5}.\n\n\\begin{figure}[tbph]\n\\centering\n\\caption{Dynamic approach of swing pricing}\n\\label{fig:swing_pricing5}\n\\vskip 10pt plus 2pt minus 2pt\\relax\n\\includegraphics[width = \\figurewidth, height = \\figureheight]{swing_pricing5}\n\\end{figure}\n\n\\subsubsection{Anti-dilution levies}\n\nAnti-dilution levies (ADL) are very close to swing pricing since the fund\nmanager does not use the transaction costs to adjust the NAV, but to adjust\nentry and exit fees. According to \\citet{AFG-2016}, these fees are equal to:\n\\begin{equation*}\n\\scalebox{0.9}{\n\\begin{tabular}{c|c:c|c:c|c}\n\\hline\n& \\multicolumn{2}{c|}{$N^{+}>N^{-}$} & \\multicolumn{2}{c|}{$N^{+}N^{-}$ or redeeming investors if $N^{+}N^{-}$ and\n$N^{+}2$, where the conformal group\nis finite. However, consideration of $d>2$ is also important,\nparticularly in the statistical mechanical context when $d=3$.\nIn the case of general $d$ conformal invariance still provides\nquite powerful constraints. For example, in the infinite\ngeometry $R^d$ the forms of the two and three point\nfunctions of scalar fields in a conformal field theory are determined\nexactly (up to normalisation) by the restrictions of conformal\ninvariance.\n\nCardy has shown how to generalise the principle of\nconformal invariance to the case of the semi-infinite geometry\n$R^d_+$, so that surface critical phenomena can be probed using\nthese techniques~\\cite{cardy:NPB240,cardy:domb}.\nIn $R^d_+$ it is only appropriate to have\nconformal invariance under conformal transformation which leave\nthe boundary fixed. In this case the restrictions on the form of\ncorrelations functions are not as strong. In particular the\nform of the two point\nfunction of a scalar field in $R^d_+$ is restricted by conformal\ninvariance only up to some function of a single conformally invariant\nvariable~\\cite{cardy:NPB240}. This function must be then be determined\nfor the particular theory under consideration.\n\nIn this paper we outline a powerful method, which makes essential\nuse of conformal invariance, for calculating the two point functions\nof scalar, vector and tensor fields of conformal field theories in\nthe semi-infinite space $R^d_+$. In particular we give a\nprescription for treating the conformally invariant integrals\nthat arise in a diagrammatic expansion of the theory. 
Techniques for\nhandling such integrals have been developed for the infinite space\n$R^d$, and have proven to be very\nuseful~\\cite{peliti:LNC,symanzik:LNC}. However these techniques\ndo not extend to $R^d_+$ and so this alternative technique is\ndeveloped.\n\n\n\\subsect{\\bf Conformal Invariance}\n\\label{conformal}\n\nA transformation of coordinates $x_\\mu \\to x_\\mu^g(x)$ is a\nconformal transformation if it leaves the line\nelement unchanged up to a local scale factor $\\Omega(x)$. That is\n\\begin{equation}\n{\\rm d} x^g_\\mu {\\rm d} x^g_\\mu =\\Omega(x)^{-2}{\\rm d} x_\\mu {\\rm d} x_\\mu \\; .\n\\end{equation}\n\nFor the discussion of two point functions of fields in a conformal\nfield theory we need to consider the effect of conformal\ntransformations on these fields. If a field ${\\cal O}(x)$\ntransforms under the conformal group as\n\\begin{equation}\n{\\cal O}(x) \\to {\\cal O}^g(x^g)=\\Omega(x)^\\eta {\\cal O}(x)\\; ,\n\\end{equation}\nfor some $\\eta$, then ${\\cal O}(x)$ is said to be a quasi primary scalar field\nwith scale dimension $\\eta$.\nA quasi primary vector field ${\\cal V}_\\mu(x)$ with scale dimension $\\eta$\nis one which transforms like\n\\begin{equation}\n{\\cal V}(x) \\to {\\cal V}_\\mu^g(x^g)=\\Omega(x)^\\eta{\\cal R}_{\\mu\\alpha}(x){\\cal V}_\\alpha(x)\\; ,\n\\end{equation}\nwhere ${\\cal R}_{\\mu\\alpha}(x)=\\Omega(x)\\partial x^g_\\mu \/\\partial x_\\alpha$. The\ntransformation for quasi primary tensor fields follows analogously.\nWe will restrict our attention to quasi primary\nfields in this paper.\n\nIn the semi-infinite space $R^d_+$ we define coordinates\n $x_\\mu=(y,{\\bf x})$ where $y$\nmeasures the perpendicular distance from the boundary, and ${\\bf x}_i$\nare coordinates in the $d-1$ dimensional hyperplanes parallel to\nthe boundary. The two point\nfunctions of scalar operators are restricted by translational\nand rotational invariance in planes parallel to the boundary to be\n\\begin{equation}\n\\vev{{\\cal O}_1(x){\\cal O}_2(x')}=G(y,y',\\mod {{\\bf x}-{\\bf x}'})\\; ,\n\\end{equation}\nand scale invariance further restricts the form of $G$ to depend\non two independent scale invariant variables $s^2\/ y^2$ and\n$s^2\/y'{}^2$, where $s^2=(x-x')^2$. This situation should\nbe contrasted with the case of infinite space where it is not possible\nto construct a variable from two points which is invariant under all of\nscale, translational and rotational transformations.\n\nFor two points in $R^d_+$ conformal invariance provides further\nrestrictions. 
Under conformal transformations which leave the\nboundary fixed\n\\begin{equation}\ns^2 \\to {s^2 \\over \\Omega(x)\\Omega(x')}\\, , \\quad \\quad\ny \\to {y\\over \\Omega(x)} \\, , \\quad \\quad y' \\to {y' \\over\n\\Omega(x')} \\, ,\n\\label{eq:confy}\n\\end{equation}\nso that only one independent conformally invariant variable can\nbe constructed from two points\n\\begin{equation}\n\\xi={s^2 \\over 4yy'} \\, \\quad \\quad {\\rm or} \\quad \\quad\nv^2={s^2 \\over {\\bar s}^2}={\\xi\\over 1+\\xi} \\, ,\n\\end{equation}\nwhere ${\\bar s}^2=({\\bf x}-{\\bf x}')^2 +(y+y')^2$ is the square of the distance\nalong the path between $x$ and the image point of $x'$ .\n\nAs a consequence, the correlation function of two quasi primary scalar\nfields may be written as\n\\begin{equation}\n\\vev {{\\cal O}_1(x){\\cal O}_2(x')} = {1\\over (2y)^{\\eta_1}}{1\\over (2y')^{\\eta_2}}\nf(\\xi)\\, ,\n\\end{equation}\nfor some arbitrary function $f(\\xi)\\;$\\footnote{The $\\xi \\to 0$ and\n$\\xi \\to \\infty$ limiting behaviour of this function is fixed by\nthe Operator Product and Boundary Operator\nExpansions~\\cite{mca-osb:sur2}.}\n\nAs an example we consider free scalar field theory, where the field\n$\\phi(x)$ satisfies Dirichlet or Neumann boundary conditions at $y=0$.\nThen by the method of images the Green's function is simply\n\\begin{equation}\n\\vev{\\phi(x)\\phi(x')}=G_\\phi(x,x')=A \\Big ({1\\over s^{d-2}} \\pm\n{1\\over {\\bar s}^{d-2}} \\Big )={A\\over (4yy')^{\\eta_\\phi}}f_\\phi(\\xi)\n \\, ,\\label{eq:Gfree}\n\\end{equation}\nwhere\n\\begin{equation}\nA={1\\over (d-2) S_d}\\, , \\quad \\quad \\eta_\\phi={\\textstyle {1 \\over 2}} d-1 \\, , \\quad\n\\quad f_\\phi(\\xi)= \\xi^{-\\eta_\\phi} \\pm (1+\\xi)^{-\\eta_\\phi} \\; .\n\\end{equation}\nIn the above expression the upper (lower) sign corresponds to\nNeumann (Dirichlet) boundary conditions and the factor\n$S_d=2\\pi^{{1\\over 2} d}\/ \\Gamma({\\textstyle {1 \\over 2}} d)$ is the area of a unit hypersphere in\n$d$ dimensions.\n\nIn~\\cite{mca-osb:sur2}, henceforth referred to as $I$,\nthe form of the\ntwo point functions of scalar, vector and\ntensor fields was worked out in detail for the $O(N)$ sigma model\nin both the $\\varepsilon$ and large $N$ expansions. These calculation were\nsignificantly simplified by the use of a new technique to solve the\nconformally invariant integrals on $R^d_+$ that naturally arise.\nIn the next section this technique is discussed in detail.\n\n\\subsect{Parallel Transform Method}\n\\label{method}\n\nWe consider integrals of the form\n\\begin{eqnarray}\nf(\\xi) & = & \\int_0^\\infty\\!\\! {\\rm d} z\\int \\! {\\rm d}^{d-1}{\\bf r} \\,\n{1\\over (2z)^d} f_1({\\tilde {\\xi}}) f_2({\\tilde {\\xi}}')\\; , \\label{eq:integ} \\\\\n && {\\tilde {\\xi}}\\, =\\, {(x-r)^2\\over 4yz}\n\\quad\\quad {\\tilde {\\xi}}' \\,=\\, {(x'-r)^2\\over 4y'z} \\; ,\n\\quad\\quad r\\,=\\,(z,{\\bf r}) \\; ,\\nonumber\n\\end{eqnarray}\nwhere conformal invariance restricts the form of the integral to be a\nfunction of $\\xi$ only. 
This follows because under conformal\ntransformations which leave the boundary fixed, the\nintegration measure transforms like ${\\rm d}^dx \\to \\Omega(x)^{-d}{\\rm d}^dx$\nand the factor $1\/(2z)^d \\to \\Omega(x)^d\/(2z)^d$ so the\nlocal scaling factor cancels.\n\nGiven functions $f_1$ and $f_2$ we may solve\nintegrals of this type indirectly by first integrating $f(\\xi)$ over\nhyperplanes parallel to the boundary\\footnote{This is related to the\nRadon transformation of $f(x)$~\\cite{gelfand}, which is defined as the\nintegral of\n$f(x)$ over all possible hyperplanes in $R^d$. Here we consider\nintegrals over the subset of hyperplanes in $R^d_+$ which are\nparallel to the boundary.}\n\\begin{equation}\n \\int \\! {\\rm d}^{d-1} {\\bf x} \\, f(\\xi) = (4yy')^{\\lambda}\n{\\hat f}(\\rho) \\; , \\quad\\quad \\rho = {(y-y')^2\\over 4yy'} \\; ,\n\\quad\\quad \\lambda = {\\textstyle {1 \\over 2}} (d-1) \\; ,\n\\label{eq:hf}\\end{equation}\nwhich defines the function ${\\hat{f}}(\\rho)$ to be\n\\begin{equation}\n{\\hat f}(\\rho) = {\\pi^\\lambda \\over \\Gamma (\\lambda)}\n\\int_0^\\infty \\!\\! {\\rm d} u \\, u^{\\lambda -1}f(u+\\rho) \\; .\n\\label{eq:trans}\\end{equation}\nThe crucial point is that this defines an integral transform\n$f \\to {\\hat{f}}$ which is invertible.\nThus $f(\\xi)$ can be retrieved from ${\\hat{f}}(\\rho)$ via\n\\begin{equation}\n f(\\xi) = {1\\over \\pi^\\lambda \\Gamma (-\\lambda)} \\int_0^\\infty \\!\\!\n{\\rm d} \\rho \\, \\rho^{-\\lambda - 1}{\\hat f}(\\rho + \\xi) \\; .\n\\label{eq:inver}\\end{equation}\nThe integral in the above formula is actually singular for values of\n$\\lambda$ that we consider here, but the inversion formula may still be\ndefined by analytic continuation in $\\lambda$ from ${\\sl Re}(\\lambda)<0$.\nTo verify that the transformation \\rref{eq:trans} is compatible with the\ninversion formula \\rref{eq:inver} it is sufficient to make use of the\nfollowing relation involving generalised functions\n\\begin{equation}\n\\int \\! {\\rm d} u \\, (\\rho -u)_+^{\\mu-1} u_+^{\\lambda-1} = B(\\mu,\\lambda)\\,\n\\rho_+^{\\mu+\\lambda-1} \\sim \\Gamma (-\\lambda) \\Gamma (\\lambda) \\delta (\\rho)\n\\ \\ \\hbox{as} \\ \\ \\mu\\to -\\lambda \\; .\n\\end{equation}\nFor the case $d=3$ when $\\lambda=1$ we use\n\\begin{equation}\n{\\rho_+^{-\\lambda -1}\\over \\Gamma(-\\lambda)} \\sim \\delta'(\\rho) \\quad\n \\hbox{as} \\quad \\lambda \\to 1 \\, ,\n\\end{equation}\nto reduce the inversion formula~\\rref{eq:inver} to the simple form\n\\begin{equation}\n f(\\xi) = - {1\\over \\pi} \\, {\\hat f}'(\\xi) \\, .\n\\end{equation}\n\nNow that this parallel transform has been defined it is possible to\nderive an integral relation for the transformed functions by\nintegrating $f(\\xi)$ in~\\rref{eq:integ} with respect to ${\\bf x}$ so that\n\\begin{equation}\n{\\hat{f}}(\\rho)= \\int_0^\\infty\\!\\! {\\rm d} z \\, {1\\over 2z}\n{\\hat{f}}_1({\\tilde \\rho}){\\hat{f}}_2({\\tilde \\rho}')\\, \\quad \\quad {{\\tilde \\rho}} =\n{(y-z)^2\\over 4yz} \\; , \\quad {{\\tilde \\rho}'} =\n{(y'-z)^2\\over 4y'z} \\; .\n\\label{eq:tran-int}\\end{equation}\nIn order to solve integrals of this type we first change variables\n $z=e^{2\\theta}$, $y=e^{2\\theta_1}$ and $y'=e^{2\\theta_2}$ so that\nequation \\rref{eq:tran-int} becomes\n\\begin{equation}\n{\\hat{f}} \\Big (\\sinh^2(\\theta_1-\\theta_2)\\Big ) =\n\\int_{-\\infty}^\\infty \\!\\! 
{\\rm d}\\theta \\,{\\hat{f}}_1 \\left\n(\\sinh^2(\\theta-\\theta_1)\n\\right ){\\hat{f}}_2 \\left (\\sinh^2(\\theta-\\theta_2)\\right )\\, .\n\\label{eq:theta}\\end{equation}\nNow by taking the Fourier transform\n\\begin{equation}\n{\\tilde {{\\hat{f}}}}(k)= \\int_{-\\infty}^\\infty \\!\\! {\\rm d}\\theta\\, e^{ik\\theta}\n{\\hat{f}}(\\sinh^2{\\theta}) \\; ,\n\\label{eq:four}\n\\end{equation}\nthen by the convolution theorem the transformed integral relation\n\\rref{eq:tran-int} becomes\n\\begin{equation}\n{\\tilde {\\hat {f}}}(k)={\\tilde {\\hat {f}}}_1(k){\\tilde {\\hat {f}}}_2(k) \\; .\n\\label{eq:four-int}\n\\end{equation}\nThus we may solve integrals of the general type given\nin~\\rref{eq:integ} by this double integral transform method provided\nthat it is possible to make the transforms $f_i(\\xi) \\to {\\hat{f}}_i(\\rho) \\to\n{\\tilde {\\hat {f}}}_i(k)$ for both the functions $f_1$ and $f_2$ {\\em and}\nthat the subsequent inverse transforms of the resulting function\n${\\tilde {\\hat {f}}}(k)$ can be made. Of course the form of the functions $f_1$ and\n$f_2$ are crucial in order for this procedure to be successfully\nundertaken. For the typical cases which arise in the diagrammatic\nexpansion of a conformal field theory this method has proven to be\nvery successful, although the intermediate steps often involve\nnontrivial manipulations of hypergeometric functions. In the next\nsection several examples which are likely to occur in calculations in\nconformal field theory are given to illustrate the method, and provide\na table of transforms for future reference.\n\n\\subsect{Illustration of the Method}\n\\label{illustrate}\n\n\nFor application to the calculation of two point functions in a\nconformal field theory we may use this method to solve the integrals\nover products of propagators that occur in a diagrammatic expansion\nof the theory. Therefore, by considering, for example, the Green's\nfunction of the free scalar field given in~\\rref{eq:Gfree}\nwe wish to solve integrals of the following type\n\\begin{equation}\nI = \\int_0^\\infty\\!\\! {\\rm d} z \\int \\! {\\rm d}^{d-1}{\\bf r} \\,{1\\over (2z)^\\beta}\n{1\\over \\big ({\\tilde s}^{2}\\big )^{\\alpha} \\big ({\\bar{\\tilde{s}}}{}^{2}\\big )^{{\\bar {\\alpha}}}\n\\big ({{\\tilde s}'^2}\\big )^{\\alpha'} \\big ({{\\bar{\\tilde{s}}}{}'^2}\\big)^{{\\bar {\\alpha}}'}} \\,\n \\; , \\label{eq:I}\n\\end{equation}\nwith\n\\begin{eqnarray}\n&&{\\tilde s}^2\\; =\\; ({\\bf x}-{\\bf r})^2+(y-z)^2\\; , \\quad \\quad \\ \\\n{\\bar{\\tilde{s}}}{}^2\\; =\\;({\\bf x}-{\\bf r})^2+(y+z)^2\\; , \\nonumber \\\\\n&&{\\tilde s}{}'^2 \\; =\\;({\\bf x}'-{\\bf r})^2+(y'-z)^2\\; ,\\quad \\quad\n{\\bar{\\tilde{s}}}{}'^2 \\; =\\;({\\bf x}'-{\\bf r})^2+(y'+z)^2\\; . 
\\nonumber\n\\end{eqnarray}\nFor conformal invariance, following~\\rref{eq:confy}, we must also require\n\\begin{equation}\n\\alpha+{\\bar {\\alpha}}+\\alpha'+{\\bar {\\alpha}}'+\\beta=d \\; .\n\\end{equation}\nThis integral may be readily cast into the general\nform~\\rref{eq:integ}, for which we should then take\n\\begin{equation}\nf_1({\\tilde {\\xi}}) ={1\\over(2y)^{\\alpha+{\\bar {\\alpha}}}}{1\\over {\\tilde {\\xi}}^\\alpha\n(1+{\\tilde {\\xi}})^{\\bar {\\alpha}}} \\; , \\quad \\quad \\quad\nf_2({\\tilde {\\xi}}') = {1\\over(2y')^{\\alpha'+{\\bar {\\alpha}}'}}{1\\over {\\tilde {\\xi}}'{}^{\\alpha'}\n(1+\\xi')^{{\\bar {\\alpha}}'}} \\; .\n\\end{equation}\nLater in this section we will consider the more general integrals that\narise in the discussion of the large $N$ expansion of the $O(N)$ sigma\nmodel, where the propagator for the auxiliary field $\\lambda$ has\na more complicated functional form.\n\nTo solve the integral~\\rref{eq:I} using the method of\nsection~\\ref{method}\nwe first take the sequence of transforms $f \\to {\\hat{f}} \\to\n{\\tilde {\\hat {f}}}$ as defined in~\\rref{eq:trans} and~\\rref{eq:four}\nfor functions of the form $f_i(\\xi)$ above.\nFor simplicity we take\n\\begin{equation}\nf(\\xi) = {1\\over \\xi^\\alpha (1+\\xi)^{{\\bar {\\alpha}}}}\n\\label{eq:func}\\end{equation}\nThe first transform $f\\to{\\hat{f}}$ follows from standard references\n\\begin{eqnarray}\n{\\hat{f}}(\\rho) &=& {\\pi^\\lambda \\over \\Gamma (\\lambda)}\n\\int_0^\\infty \\!\\! {\\rm d} u \\, u^{\\lambda -1}\n{1\\over (u+\\rho)^\\alpha(1+u+\\rho)^{\\bar {\\alpha}}} \\nonumber \\\\\n&=& \\pi^\\lambda{ \\Gamma(\\alpha+{\\bar {\\alpha}}-\\lambda) \\over \\Gamma(\\alpha+{\\bar {\\alpha}})}\n{1\\over (1+\\rho)^{\\alpha+{\\bar {\\alpha}}-\\lambda}}\nF\\Big (\\alpha+{\\bar {\\alpha}}-\\lambda,\\alpha\\,;\\,\\alpha+{\\bar {\\alpha}}\\, ;\\,\n{1\\over 1+\\rho} \\Big ) \\;\t .\n\\end{eqnarray}\nThe function $F(a,b;c;z)$ is a hypergeometric function whose definition\nis given in~\\rref{eq:2F1}\nFor the subsequent transform ${\\hat{f}}\\to {\\tilde {\\hat {f}}}$ we consider the cases\n${\\bar {\\alpha}}=0,\\;\\alpha=0,\\;\\alpha={\\bar {\\alpha}}$ separately\n\\begin{eqnarray}\nf_{\\rm I}(\\xi)&=&{1\\over \\xi^\\alpha } \\nonumber \\\\\n{\\hat{f}}_{\\rm I}(\\rho)& = &\\pi^\\lambda {\\Gamma(\\alpha-\\lambda)\\over\n\\Gamma(\\alpha)}{1\\over \\rho^{\\alpha-\\lambda}} \\label{eq:aa1} \\\\\n{\\tilde {\\hat {f}}}_{\\rm I}(k)&=&\\pi^{{1\\over 2} d}{\\Gamma(2\\alpha-2\\lambda)\\Gamma(1-2\\alpha+\n2\\lambda) \\over\\Gamma(\\alpha)\\Gamma({\\textstyle {1 \\over 2}}+\\alpha-\\lambda)} \\left [\n{\\Gamma(\\alpha-\\lambda+{ \\textstyle{i\\over 2}} k) \\over \\Gamma(1-\\alpha+\\lambda+{ \\textstyle\n{i\\over 2}} k)} + {\\Gamma(\\alpha-\\lambda-{ \\textstyle {i\\over 2}} k)\n\\over \\Gamma(1-\\alpha+\\lambda-{ \\textstyle {i\\over 2}}k)} \\right ] \\nonumber \\\\\n&&\\nonumber\\\\\n f_{\\rm II}(\\xi)&=&{1\\over (1+\\xi)^{\\bar {\\alpha}}} \\nonumber \\\\\n{\\hat{f}}_{\\rm II}(\\rho)& =& \\pi^\\lambda {\\Gamma({\\bar {\\alpha}}-\\lambda)\\over\n\\Gamma({\\bar {\\alpha}})}{1\\over (1+\\rho)^{{\\bar {\\alpha}}-\\lambda}} \\label{eq:aa2}\\\\\n{\\tilde {\\hat {f}}}_{\\rm II}(k)&=&\\pi^{{1\\over 2} d} {1\\over \\Gamma({\\bar {\\alpha}})\\Gamma({\\textstyle {1 \\over 2}}+{\\bar {\\alpha}}\n-\\lambda)}\\Gamma({\\bar {\\alpha}}-\\lambda +{ \\textstyle {i\\over 2}} k)\\Gamma({\\bar {\\alpha}}-\\lambda -\n{ \\textstyle {i\\over 2}}\\ k) \\nonumber \\\\\n&&\\nonumber \\\\\nf_{\\rm III}(\\xi)&=& {\n1\\over \\xi^\\alpha 
(1+\\xi)^\\alpha} \\nonumber \\\\\n{\\hat{f}}_{\\rm III}(\\rho)&=&\\pi^\\lambda {\\Gamma(2\\alpha-\\lambda)\\over\n\\Gamma(2\\alpha)} {1\\over(1+\\rho)^{2\\alpha-\\lambda}}F\\Big (2\\alpha-\n\\lambda,\\alpha\\,;\\,2\\alpha\\, ;\\, {1\\over 1+\\rho} \\Big ) \\label{eq:aa3} \\\\\n{\\tilde {\\hat {f}}}_{\\rm III}(k)&=&\\pi^{{1\\over 2} d}4^{\\alpha-{1\\over 2} d}{\\Gamma({\\textstyle {1 \\over 2}} d-\\alpha)\n\\over \\Gamma(\\alpha)} {\\Gamma(\\alpha - {\\textstyle {1 \\over 2}} \\lambda- { \\textstyle {i\\over 4}} k)\n\\Gamma(\\alpha-{\\textstyle {1 \\over 2}} \\lambda + { \\textstyle {i\\over 4}} k) \\over\n\\Gamma({\\textstyle {1 \\over 2}}+{\\textstyle {1 \\over 2}} \\lambda - { \\textstyle {i\\over 4}} k)\n\\Gamma({\\textstyle {1 \\over 2}}+{\\textstyle {1 \\over 2}} \\lambda + { \\textstyle {i\\over 4}} k) } \\nonumber\n\\end{eqnarray}\nThere is one other case, a particular combination of two functions of\nthe type~\\rref{eq:func}, which is of interest\n\\begin{eqnarray}\n\\hskip -20pt f_{\\rm IV}(\\xi)&=&\n{2\\xi+1\\over \\xi^\\alpha (1+\\xi)^\\alpha}\\nonumber \\\\\n\\hskip -20pt{\\hat{f}}_{\\rm IV}(\\rho)& = & 2\\pi^\\lambda\n {\\Gamma(2\\alpha-\\lambda-1)\\over\n\\Gamma(2\\alpha-1)} {1\\over (1+\\rho)^{2\\alpha-\\lambda-1}}F\\Big (2\\alpha-\n\\lambda-1,\\alpha-1\\,;\\,2\\alpha-2\\, ;\\, {1\\over 1+\\rho} \\Big )\n\\label{eq:aa4} \\\\\n\\hskip -20pt{\\tilde {\\hat {f}}}_{\\rm IV}(k)&=&\\pi^{{1\\over 2} d}4^{\\alpha-{1\\over 2} d}\n{\\Gamma({\\textstyle {1 \\over 2}} d-\\alpha)\n\\over \\Gamma(\\alpha)} {\\Gamma(\\alpha - {\\textstyle {1 \\over 2}}\\lambda-{\\textstyle {1 \\over 2}} -{ \\textstyle{i\\over4}} k)\n\\Gamma(\\alpha-{\\textstyle {1 \\over 2}} \\lambda-{\\textstyle {1 \\over 2}} + { \\textstyle {i\\over 4}} k) \\over\n\\Gamma({\\textstyle {1 \\over 2}} \\lambda - { \\textstyle {i\\over 4}} k) \\Gamma({\\textstyle {1 \\over 2}} \\lambda +\n{ \\textstyle {i\\over 4}} k) }\\, .\\nonumber\n\\end{eqnarray}\nThe last two cases, $f_{\\rm III}$ and $f_{\\rm IV}$, are important\nbecause the more general case where ${\\bar {\\alpha}}$ differs from $\\alpha$\nby any integer follows in a straightforward manner from them.\nHowever, the derivation of those two results directly is nontrivial.\nThe simplest way to verify them is by working\nbackwards and taking the inverse transforms. A general procedure for\ntaking the inverse transforms is discussed next.\n\nFor application to conformal field theory where we have integrals\nof the form~\\rref{eq:integ} then the transformed\nrelation~\\rref{eq:four-int}\nsuggests that we need to take the inverse transform\nof products of the functions ${\\tilde {\\hat {f}}}_i(k)$ in I to IV.\nIn all of these cases the dependence of ${\\tilde {\\hat {f}}}(k)$ on $k$\nis through combinations of Gamma functions. Consequently, by considering\nthe poles of the Gamma function, the inverse Fourier transform\n${\\tilde {\\hat {f}}} \\to {\\hat{f}}$ of~\\rref{eq:four-int} can be performed by contour\nintegration. 
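
Before doing so, we note that the tabulated pairs I to IV are easy to check numerically from the definitions~\\rref{eq:trans} and~\\rref{eq:four}. As an illustration, the short Python sketch below verifies case II; the values of $d$, ${\\bar {\\alpha}}$, $\\rho$ and $k$ are arbitrary choices made for this check only.
\\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

d, abar, k = 3.0, 2.2, 0.7          # illustrative values (abar > lambda)
lam = 0.5 * (d - 1.0)

# parallel transform (eq:trans) of f_II(xi) = (1+xi)^(-abar) by quadrature
rho = 0.4
num = np.pi**lam / gamma(lam) * quad(
    lambda u: u**(lam - 1.0) * (1.0 + u + rho)**(-abar), 0.0, np.inf)[0]
ana = np.pi**lam * gamma(abar - lam) / gamma(abar) * (1.0 + rho)**(lam - abar)
print(num, ana)                     # hat-f_II(rho): both numbers should agree

# Fourier transform (eq:four) of hat-f_II against the quoted Gamma-function form
hatf = lambda th: (np.pi**lam * gamma(abar - lam) / gamma(abar)
                   * np.cosh(th)**(2.0 * (lam - abar)))
lhs = 2.0 * quad(lambda th: np.cos(k * th) * hatf(th), 0.0, 40.0)[0]
rhs = (np.pi**(0.5 * d) / (gamma(abar) * gamma(0.5 + abar - lam))
       * abs(gamma(abar - lam + 0.5j * k))**2)
print(lhs, rhs)                     # tilde-hat-f_II(k): should also agree
\\end{verbatim}
Both pairs of numbers should agree to quadrature accuracy, which provides a quick consistency check of the table above.
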
We first consider the following combination\n of Gamma functions\nwhich is appropriate for verifying the transforms of $f_{\\rm III}$ and\n$f_{\\rm IV}$ above\n\\begin{equation}\n{\\tilde {\\hat {g}}}_{a,b}(k) \\equiv {\\Gamma (a-{ \\textstyle{i\\over 4}}k)\n\\Gamma (a+{ \\textstyle{i\\over 4}}k) \\over \\Gamma (b-{ \\textstyle{i\\over 4}}k)\n\\Gamma (b+{ \\textstyle{i\\over 4}}k) } \\, .\n\\label{eq:thgab}\n\\end{equation}\nThe poles of $\\Gamma(a-{ \\textstyle{i\\over 4}}k)$ occur at ${ \\textstyle{i\\over\n4}}k=a+n$ with residue $(-1)^n\/n!$ (for $n$ a non negative integer).\nTherefore, the inverse transform is obtained as a sum of the residues of\n${\\tilde {\\hat {g}}}_{a,b}$, resulting in a series that has hypergeometric form\n\\begin{eqnarray}\n\\!\\!\\!\\!\\!\\!\\!\\!{\\hat g}_{a,b}(\\sinh^2 \\theta)& = &\n{1\\over 2\\pi}\\int \\! {\\rm d} k \\, e^{-ik\\theta} \\, {\\tilde {\\hat {g}}}_{a,b}(k) \\nonumber \\\\\n&=& {4\\Gamma(2a)\\over \\Gamma(b-a)\\Gamma(b+a)}\\, e^{-4a|\\theta|}\nF \\bigl ( 2a,a-b+1;a+b; e^{-4|\\theta|} \\bigl ) \\, . \\label{eq:hgab} \\\\\n&=& {4\\Gamma(2a)\\over \\Gamma(b-a)\\Gamma(b+a)}\\,\n{1\\over (4\\cosh^2 \\theta)^{2a}}\nF \\bigl ( 2a,a+b-{\\textstyle {1 \\over 2}}; 2a+2b-1;{1\\over \\cosh^2\\theta}\\bigl )\\nonumber \\, .\n\\end{eqnarray}\nBy choosing appropriate values for $a,b$, and noting\nthat $\\cosh^2\\theta=1+\\rho$ then the Fourier\ntransformed functions ${\\tilde {\\hat {f}}}_{\\rm III}$ and ${\\tilde {\\hat {f}}}_{\\rm IV}$\n follow directly from this result.\nTo obtain the inverse parallel transform we use\n\\begin{equation}\n {1 \\over \\Gamma (-\\lambda)} \\int_0^\\infty \\!\\!\\!\\!\n{\\rm d} \\rho \\, \\rho^{-\\lambda -1} \\, {1\\over (1+\\rho+\\xi)^p} =\n{\\Gamma(p+\\lambda)\\over\\Gamma(p)}{1\\over (1+\\xi)^{p+\\lambda}} \\, ,\n\\end{equation}\nwith $p=2a+n$, in the last line of~\\rref{eq:hgab}\nso that\n\\begin{equation}\ng_{a,b}(\\xi)= {\\Gamma(2a+\\lambda)\\over 4^{2a-1}\\pi^\\lambda\\Gamma(b-a)\n\\Gamma(b+a)}\\,{1\\over(1+ \\xi)^{2a+\\lambda}}\\,\nF \\Big ( 2a+\\lambda,a+b-{\\textstyle {1 \\over 2}}; 2a+2b-1;{1\\over 1+\\xi}\\Big)\\; .\n\\label{eq:gab}\\end{equation}\nNow, with the appropriate choice of $a,b$, we can use this result\nto verify the parallel transforms ${\\hat{f}}_{\\rm III}$ and ${\\hat{f}}_{\\rm IV}$\nin equations~\\rref{eq:aa3} and~\\rref{eq:aa4}.\n\nIn order to solve the integrals of the type~\\rref{eq:integ} we must\nfind the inverse Fourier transform of products of the functions\n${\\tilde {\\hat {f}}}_i(k)$ in I to IV. These may can be simply obtained as\nhypergeometric series by contour integration in a similar way to\nabove above calculation.\nThe procedure for finding the inverse parallel\ntransform differs, though, because it is not always possible to make the\nsimplifying manipulation of the hypergeometric function that is\nmade in~\\rref{eq:hgab}. This is because\nthe hypergeometric series is often of higher order. However, a\nprocedure for taking the inverse transform ${\\hat{f}} \\to\nf$ which bypasses this step is derived in the appendix. This procedure\nmakes essential use of a special property of the hypergeometric\nseries which arises on taking the inverse Fourier transform, that is\ndue the symmetry ${\\tilde {\\hat {g}}}(k)={\\tilde {\\hat {g}}}(-k)$. 
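
The pole-sum result~\\rref{eq:hgab} can be confirmed in the same numerical fashion: the sketch below compares a brute-force evaluation of the inverse Fourier transform of~\\rref{eq:thgab} with the closed hypergeometric form, for arbitrary illustrative values of $a$, $b$ and $\\theta$, chosen with $b-a>{\\textstyle {1 \\over 2}}$ so that the $k$ integral converges.
\\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, hyp2f1

a, b, theta = 0.6, 2.6, 0.3        # illustrative values with b - a > 1/2

# tilde-hat-g_{a,b}(k), which is real and even in k for real a and b
tg = lambda k: abs(gamma(a + 0.25j * k))**2 / abs(gamma(b + 0.25j * k))**2

# inverse Fourier transform evaluated by brute-force quadrature
lhs = quad(lambda k: np.cos(k * theta) * tg(k), 0.0, 120.0, limit=200)[0] / np.pi

# closed form (eq:hgab) obtained from the sum over the Gamma-function poles
rhs = (4.0 * gamma(2 * a) / (gamma(b - a) * gamma(b + a))
       * np.exp(-4.0 * a * theta)
       * hyp2f1(2 * a, a - b + 1.0, a + b, np.exp(-4.0 * theta)))
print(lhs, rhs)                    # the two values should agree
\\end{verbatim}
The same brute-force comparison can be repeated for the products of transformed functions considered below.
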
After taking the inverse Fourier\ntransform of products of the functions in I to IV, we obtain\na hypergeometric series with one of the two following forms\n\\begin{eqnarray}\n{\\hat{g}}(\\sinh^2\\theta)&= & e^{-4a|\\theta|} {}_{q+1}F_q \\bigl\n(2a,b_1,\\cdots b_q; c_1, \\cdots c_q ; e^{-4|\\theta|} \\bigl )\\, ,\n\\label{eq:hgform}\\\\\n{\\hat {h}}(\\sinh^2\\theta)&=& e^{-2a|\\theta|} {}_{q+1}F_q \\bigl\n( 2a,b_1,\\cdots b_q; c_1, \\cdots c_q ; e^{-2|\\theta|} \\bigl )\\, ,\n\\label{eq:hhform}\n\\end{eqnarray}\nwhere the notation ${}_{q+1}F_q$ refers to a generalised\nhypergeometric series which is defined in~\\rref{eq:pFq}.\nThe crucial point is that the parameters $b_i$ and $c_i$ in these\nfunctions are always related by $c_i=1+2a-b_i$.\n\nWe now present the inverse transforms of six of the\npossible combinations of the functions in I to IV, which have been\nobtained using this method. These represent solutions to particular\nintegrals of the type~\\rref{eq:integ}.\nFirst we consider products of the functions $f_{\\rm I}$ and $f_{\\rm\nII}$. In these\ncases the inverse Fourier transform results in hypergeometric series\nof the form~\\rref{eq:hhform} and the inverse parallel transform can\nbe found via the methods outlined in the appendix. Thus,\nusing~\\rref{eq:hresult}, we obtain\n\\begin{eqnarray}\n{\\cal I}_{\\rm I,I}(\\xi) & = & \\int_0^\\infty\\!\\! {\\rm d} z\\int \\!\n{\\rm d}^{d-1}{\\bf r} \\, {1\\over (2z)^d} {1\\over{\\tilde {\\xi}}^\\alpha}{1\\over\n{\\tilde {\\xi}}'{}^{\\alpha'}} \\nonumber\\\\\n&=& \\pi^{{1\\over 2} d} {\\Gamma(1+\\alpha+\\alpha'-d)\\Gamma({\\textstyle {1 \\over 2}} d-\\alpha-\\alpha')\n\\over \\Gamma(1-{\\textstyle {1 \\over 2}} d)\\Gamma({\\textstyle {1 \\over 2}} d)}\\,\nF\\big (\\alpha,\\alpha';1+\\alpha+\\alpha'-{\\textstyle {1 \\over 2}} d; -\\xi \\big ) \\nonumber \\\\\n&& + \\;\n\\pi^{{1\\over 2} d} {\\Gamma(\\alpha+\\alpha-{\\textstyle {1 \\over 2}} d)\\Gamma({\\textstyle {1 \\over 2}} d -\\alpha)\n\\Gamma({\\textstyle {1 \\over 2}} d-\\alpha')\\over\\Gamma(d-\\alpha-\\alpha')\\Gamma(\\alpha)\\Gamma(\\alpha')}\n {1\\over \\xi^{\\alpha+\\alpha'-{1\\over 2} d}} \\nonumber \\\\\n&& \\hskip 100pt \\times \\; F\\big ({\\textstyle {1 \\over 2}}\nd -\\alpha,{\\textstyle {1 \\over 2}} d -\\alpha',1+{\\textstyle {1 \\over 2}} d-\\alpha-\\alpha'; -\\xi\\big ) \\; ,\n\\label{eq:I-I}\\\\\n{\\cal I}_{\\rm I,II}(\\xi) & = & \\int_0^\\infty\\!\\! {\\rm d} z\\int \\!\n{\\rm d}^{d-1}{\\bf r} \\, {1\\over (2z)^d} {1\\over{\\tilde {\\xi}}^\\alpha}{1\\over\n(1+{\\tilde {\\xi}}')^{\\alpha'}} \\nonumber\\\\\n&=&\\pi^{{1\\over 2} d} {\\Gamma(1+\\alpha+\\alpha'-d)\\Gamma({\\textstyle {1 \\over 2}} d-\\alpha)\\over\n\\Gamma({\\textstyle {1 \\over 2}} d)\\Gamma(1+\\alpha'-{\\textstyle {1 \\over 2}} d)} F(\\alpha,\\alpha';{\\textstyle {1 \\over 2}} d; -\\xi)\\; ,\n\\label{eq:I-II}\\\\\n{\\cal I}_{\\rm II,II}(\\xi) & = & \\int_0^\\infty\\!\\! {\\rm d} z\\int \\!\n{\\rm d}^{d-1}{\\bf r} \\, {1\\over (2z)^d} {1\\over(1+{\\tilde {\\xi}})^\\alpha}{1\\over\n(1+{\\tilde {\\xi}}')^{\\alpha'}} \\nonumber\\\\\n&=&\\pi^{{1\\over 2} d} {\\Gamma(1+\\alpha+\\alpha'-d)\\over\n\\Gamma(1+\\alpha+\\alpha'-{\\textstyle {1 \\over 2}} d)} F(\\alpha,\\alpha';1+\\alpha+\\alpha'-\n{\\textstyle {1 \\over 2}} d;-\\xi)\\; . 
\\label{eq:II-II}\n\\end{eqnarray}\nIn order to bring these results to this form\nit is necessary to use several identities of the\nhypergeometric function which can be found in the standard\nreferences~\\cite{grad}.\n\nIf we take the limit $\\alpha+\\alpha' \\to d$ in these integrals,\nwhich corresponds to $\\beta \\to 0$ in the original\nintegral~\\rref{eq:I} then the following relation\n\\begin{equation}\n{1\\over \\Big ( (x-x')^2\\Big )^{{1\\over 2} d - \\beta}} \\sim {1\\over 2\\beta}\\,\nS_d \\delta^d (x-x') \\ \\ \\hbox{as} \\ \\ \\beta\\to 0 \\; ,\n\\end{equation}\ncan be used to show that\n\\begin{equation}\n{\\cal I}_{\\rm I,I}+{\\cal I}_{\\rm II,II} =\\pi^d {\\Gamma({\\textstyle {1 \\over 2}} d -\\alpha)\n\\Gamma({\\textstyle {1 \\over 2}} d-\\alpha') \\over \\Gamma(\\alpha)\\Gamma(\\alpha')} \\delta^d (x-x')\n\\; ,\n\\end{equation}\nin the limit $\\alpha+\\alpha' \\to d$. This is the expected result when\nthe range of the integral~\\rref{eq:I}, with ${\\bar {\\alpha}}={\\bar {\\alpha}}'=\\beta=0$,\nis extended to the infinite space $R^d$. In a similar way it is\npossible to show that if $\\alpha+\\alpha'=d$ then ${\\cal I}_{\\rm I,II}+{\\cal I}_{\\rm\nII,I}=0$, where ${\\cal I}_{\\rm II,I}$ is defined by taking\n$\\alpha\\leftrightarrow \\alpha'$ in ${\\cal I}_{\\rm I,II}$.\n\n\nWe now evaluate three more conformally\ninvariant integrals involving combinations of the functions $f_{\\rm\nIII}$ and\n$f_{\\rm IV}$. In these cases the inverse Fourier transform results in a\nhypergeometric series of the form~\\rref{eq:hgform}. One obtains\n\\begin{eqnarray}\n{\\cal I}_{\\rm III,III}(\\xi)&=&\\int_0^\\infty\\!\\!{\\rm d} z \\int\\! {\\rm d}^{d-1}{\\bf r} \\,\n{1\\over (2z)^d} {1\\over {\\tilde {\\xi}}^\\alpha (1+{\\tilde {\\xi}})^\\alpha}\n{1\\over {\\tilde {\\xi}}'{}^{\\alpha'} (1+{\\tilde {\\xi}}')^{\\alpha'}} \\nonumber \\\\\n&=& {\\Gamma({\\textstyle {1 \\over 2}} d-\\alpha')\\Gamma(\\alpha'-\\alpha) \\Gamma(\\alpha+\n\\alpha'-\\lambda) \\over \\Gamma({\\textstyle {1 \\over 2}} d -\\alpha)\\Gamma(\\alpha')\n\\Gamma(\\alpha+{\\textstyle {1 \\over 2}})} \\pi^{{1\\over 2} d}4^{\\alpha'-{1\\over 2} d}\n{1\\over [\\xi(1+\\xi)]^\\alpha} \\label{eq:III}\\\\\n&&\\times \\;{}_3F_2\\Big (\\alpha,1+\\alpha-{\\textstyle {1 \\over 2}} d,\n{\\textstyle {1 \\over 2}} d-\\alpha'; \\alpha+{\\textstyle {1 \\over 2}}, 1+\\alpha-\\alpha';-{1\\over 4\\xi (1+\\xi)}\n\\Big) \\, + \\, \\alpha \\leftrightarrow \\alpha' \\; ,\\nonumber \\\\\n{\\cal I}_{\\rm III,IV}(\\xi)&=&\\int_0^\\infty\\!\\!{\\rm d} z \\int \\!{\\rm d}^{d-1}{\\bf r} \\,\n{1\\over (2z)^d} {1\\over {\\tilde {\\xi}}^\\alpha (1+{\\tilde {\\xi}})^\\alpha} {2{\\tilde {\\xi}}'+\n1\\over {\\tilde {\\xi}}'{}^{\\alpha'} (1+{\\tilde {\\xi}}')^{\\alpha'}} \\nonumber \\\\\n&=&{\\Gamma({\\textstyle {1 \\over 2}} d-\\alpha)\\Gamma({\\textstyle {1 \\over 2}} d-\\alpha')\\Gamma(\\alpha+\\alpha'-{\\textstyle {1 \\over 2}} d)\n\\over\\Gamma(\\alpha)\\Gamma(\\alpha')\\Gamma(d-\\alpha-\\alpha')}\\pi^{{1\\over 2} d}\n{1\\over[\\xi(1+\\xi)]^{\\alpha+\\alpha'-{1\\over 2} d}} \\label{eq:III-IV} \\\\\n&&\\times \\;F\\Big (\\lambda -\\alpha,{\\textstyle {1 \\over 2}} d -\\alpha'\\, ;\nd-\\alpha-\\alpha'\\, ;\n-4\\xi(1+\\xi) \\Big ) \\nonumber \\\\\n{\\cal I}_{\\rm IV,IV}(\\xi)& =&\\int_0^\\infty\\!\\!{\\rm d} z \\int \\!{\\rm d}^{d-1}{\\bf r} \\,\n{1\\over (2z)^d} {2{\\tilde {\\xi}}+1\\over {\\tilde {\\xi}}^\\alpha (1+{\\tilde {\\xi}})^\\alpha} {2{\\tilde {\\xi}}'+\n1\\over {\\tilde {\\xi}}'{}^{\\alpha'} (1+{\\tilde {\\xi}}')^{\\alpha'}} \\nonumber 
\\\\\n&=&{\\Gamma({\\textstyle {1 \\over 2}} d-\\alpha')\\Gamma(\\alpha'-\\alpha) \\Gamma(\\alpha+\n\\alpha'-1-\\lambda) \\over \\Gamma({\\textstyle {1 \\over 2}} d -\\alpha)\\Gamma(\\alpha')\n\\Gamma(\\alpha-{\\textstyle {1 \\over 2}})} \\pi^{{1\\over 2} d}4^{\\alpha'-{1\\over 2} d}\n{2\\xi+1\\over [\\xi(1+\\xi)]^\\alpha} \\label{eq:IV}\\\\\n&&\\times\\; {}_3F_2\\Big (\\alpha,1+\\alpha-{\\textstyle {1 \\over 2}} d,\n{\\textstyle {1 \\over 2}} d-\\alpha'; \\alpha-{\\textstyle {1 \\over 2}}, 1+\\alpha-\\alpha';- {1\\over 4\\xi (1+\\xi)}\n\\Big) \\, + \\, \\alpha \\leftrightarrow \\alpha'\\; .\\nonumber\n\\end{eqnarray}\nTo solve for ${\\cal I}_{\\rm III,III}$ we require the transformed\nfunction ${\\tilde {\\hat {f}}}_{\\rm III}(k)$ together with the result for the inverse\ntransform of the general case~\\rref{eq:g}, which is given in the\nappendix. For ${\\cal I}_{\\rm IV,IV}$ we use ${\\tilde {\\hat {f}}}_{\\rm IV}(k)$ with the\ninverse transform~\\rref{eq:brg}. To obtain ${\\cal I}_{\\rm III,IV}$ in the\nform~\\rref{eq:III-IV}, we follow a similar procedure to the other two\ncases, but also use a relationship between hypergeometric functions with\nargument $-z$ and hypergeometric functions with argument $-1\/z$ to\nsimplify the expression.\n\nThe solutions to the integrals in~\\rref{eq:I-I} to~\\rref{eq:IV}\nall have a pole at $\\alpha=d\/2$, except for~\\rref{eq:II-II}.\nThis pole arises due to the short-distance logarithmic singularity for\n$r\\sim x$ in each of these integrals when $\\alpha=d\/2$.\n\nWe are now in a position to evaluate integrals of the\ntype~\\rref{eq:integ} with products of more general functions than\nthose discussed thus far. For example, if we consider the function\n$g_{a,b}$ given in~\\rref{eq:gab}, which was derived from the definition\nof ${\\tilde {\\hat {g}}}_{a,b}$ in~\\rref{eq:thgab}, then since\n\\begin{equation}\n{\\tilde {\\hat {g}}}_{a,b}(k){\\tilde {\\hat {g}}}_{b,c}(k)={\\tilde {\\hat {g}}}_{a,c}(k)\\; ,\n\\end{equation}\nit follows directly that\n\\begin{eqnarray}\n\\int_0^\\infty\\!\\!{\\rm d} z \\int\\! 
{\\rm d}^{d-1}{\\bf r}\\,{1\\over (2z)^d} g_{a,b}({\\tilde {\\xi}})\ng_{b,c}({\\tilde {\\xi}}') & = & g_{a,c}(\\xi) \\; , \\quad\\quad\\quad\\quad\\quad\n\\quad a \\ne c \\nonumber \\\\\n&=&(4yy')^{{1\\over 2} d}\\delta^d(x-x') \\; , \\quad a=c \\; .\n\\end{eqnarray}\nThis is a solution to an integral of the product of two\nhypergeometric functions with the special form~\\rref{eq:gab}.\nThis relation is useful in the large $N$ expansion of the $O(N)$ sigma\nmodel with the Ordinary transition,\nwhere the Green's function of the auxiliary field $\\lambda$ is\na hypergeometric function of exactly this\ntype~\\cite{mca-osb:sur2,ohno:let1,ohno:prog}.\n\nWe may generalise this further by considering the function\n\\begin{equation}\n{\\tilde {\\hat {g}}}_{ab,c\\delta}(k)\n \\equiv {\\Gamma (a-{ \\textstyle{i\\over 4}}k)\n\\Gamma (a+{ \\textstyle{i\\over 4}}k) \\Gamma (b-{ \\textstyle{i\\over 4}}k)\n\\Gamma (b+{ \\textstyle{i\\over 4}}k)\\over \\Gamma (c-{ \\textstyle{i\\over 4}}k)\n\\Gamma (c+{ \\textstyle{i\\over 4}}k) \\Gamma (\\delta-{ \\textstyle{i\\over 4}}k)\n\\Gamma (\\delta+{ \\textstyle{i\\over 4}}k) } \\; .\n\\label{eq:thgabcd}\n\\end{equation}\nThe methods of the appendix can then be used to obtain the inverse\ntransforms of this function provided $\\delta={\\textstyle {1 \\over 2}} \\lambda$ or $\\delta={\\textstyle {1 \\over 2}}\n+ {\\textstyle {1 \\over 2}} \\lambda$.\nThe inverse Fourier transform gives\\footnote{This function is\nrelated to Meijer's G-function, which is defined by a contour\nintegral of combinations of Gamma functions with arguments of a\nparticular form~\\cite{grad}.}\n\\begin{eqnarray}\n{\\hat{g}}_{ab,c\\delta}(\\sinh^2\\theta)&=&{4\\Gamma(2a)\\Gamma(b-a)\\Gamma(b+a) \\over\n\\Gamma(c-a)\\Gamma(c+a)\\Gamma(\\delta-a)\\Gamma(\\delta+a)}\\, e^{-4a|\\theta|}\n\\label{eq:gabcd1}\\\\\n&&\\hskip -10pt\\times \\; {}_4F_3\\Big(\n2a,b+a,1+a-c,1+a-\\delta;1+a-b,c+a,\\delta+a\n\\, ; e^{-4|\\theta|} \\Big ) \\nonumber \\\\\n&& + \\, a \\leftrightarrow b\\; .\\nonumber\n\\end{eqnarray}\nSubsequently, using~\\rref{eq:g} for the case $\\delta={\\textstyle {1 \\over 2}} +{\\textstyle {1 \\over 2}}\\lambda$,\nwe find\n\\begin{eqnarray}\n\\hskip -35pt g_{ab,c\\delta}(\\xi)&=&{1\\over 4^{2a-1}\\pi^\\lambda}\n{\\Gamma(2a+\\lambda)\\Gamma(b-a)\\Gamma(b+a) \\over\n\\Gamma(c-a)\\Gamma(c+a)\\Gamma({\\textstyle {1 \\over 2}} +{\\textstyle {1 \\over 2}} \\lambda-a)\\Gamma({\\textstyle {1 \\over 2}} +{\\textstyle {1 \\over 2}} \\lambda\n+a)} {1\\over [\\xi(1+\\xi)]^{a+{1\\over 2}\\lambda}} \\label{eq:gabcd2}\\\\\n&&\\times\\;{}_3F_2\\Big (a+{\\textstyle {1 \\over 2}}\\lambda,{\\textstyle {1 \\over 2}}+a-{\\textstyle {1 \\over 2}}\\lambda,c-b\\, ;\n1+a-b,a+c\\, ; -{1\\over4\\xi(1+\\xi)} \\Big ) \\; \\nonumber \\\\\n&& + \\, a \\leftrightarrow b\\; ,\\nonumber\n\\end{eqnarray}\nwhereas when $\\delta={\\textstyle {1 \\over 2}} \\lambda$, using~\\rref{eq:brg} we obtain\n\\begin{eqnarray}\ng_{ab,c\\delta}(\\xi)&=&{1\\over 4^{2a-1}\\pi^\\lambda}\n{\\Gamma(2a+\\lambda)\\Gamma(b-a)\\Gamma(b+a) \\over\n\\Gamma(c-a)\\Gamma(c+a)\\Gamma({\\textstyle {1 \\over 2}} \\lambda-a)\\Gamma({\\textstyle {1 \\over 2}} \\lambda+a)}\n {\\xi+{\\textstyle {1 \\over 2}}\\over [\\xi(1+\\xi)]^{{1\\over 2}+ a+{1\\over 2}\\lambda}} \\\\\n&&\\times\\;{}_3F_2\\Big ({\\textstyle {1 \\over 2}}+a+{\\textstyle {1 \\over 2}}\\lambda,1 +a-{\\textstyle {1 \\over 2}}\\lambda,c-b\\, ;\n1+a-b,a+c\\, ; -{1\\over4\\xi(1+\\xi)} \\Big ) \\nonumber \\\\\n&& + \\, a \\leftrightarrow b\\; .\\nonumber\n\\end{eqnarray}\nThus, 
provided $\\delta$ is either ${\\textstyle {1 \\over 2}} \\lambda$ or ${\\textstyle {1 \\over 2}} +{\\textstyle {1 \\over 2}}\n\\lambda$, $g_{ab,c\\delta}(\\xi)$ can be obtained in terms of ${}_3F_2$\nhypergeometric functions. The solutions to the integrals\nin~\\rref{eq:III}-\\rref{eq:IV} represent special\ncases of these functions. More generally, integrals\nof products of these types of ${}_3F_2$\nhypergeometric functions are possible. Since\n\\begin{equation}\n{\\tilde {\\hat {g}}}_{ab,c\\delta}(k)\\, {\\tilde {\\hat {g}}}_{ce,bf}(k)\\;=\\; {\\tilde {\\hat {g}}}_{ae,\\delta f}(k) \\; ,\n\\end{equation}\nit follows that\n\\begin{equation}\n\\int_0^\\infty\\!\\!{\\rm d} z \\int\\! {\\rm d}^{d-1}{\\bf r}\\,{1\\over (2z)^d} g_{ab,c\\delta}\n({\\tilde {\\xi}}) g_{ce,bf}({\\tilde {\\xi}}')\\; =\\; g_{ae,\\delta f}(\\xi) \\; ,\n\\end{equation}\nprovided each of $\\delta$ and $f$ is ${\\textstyle {1 \\over 2}}\\lambda$ or ${\\textstyle {1 \\over 2}}+{\\textstyle {1 \\over 2}} \\lambda$. Similar integral\nrelations can be derived by considering possible combinations of\n$g_{a,b}$ with $g_{ab,c\\delta}$ for particular choices of the\nparameters $a,b,c,\\delta$.\nIntegrals such as\nthese occur in a discussion of the large $N$ expansion of the $O(N)$\nsigma model with the Special transition, where the Green's\nfunction for the auxiliary field $\\lambda$ contains hypergeometric\nfunctions of this type~\\cite{mca-osb:sur2,ohno:prog,ohno:let2}.\n\n\\subsect{Integrals Involving Spin Factors}\n\\label{spin}\n\nWe now turn our attention to conformally invariant integrals\ninvolving spin factors, which occur in the discussion of two-point\nfunctions of vector and tensor fields. For this we define fields\n$X_\\mu$, ${\\tilde X}_\\mu$, with scale dimension\nzero, which transform like vectors at the point $x$\nunder conformal transformations that leave the boundary fixed.\n\\begin{equation}\nX_\\mu={y\\over \\xi^{1\\over 2}(1+\\xi)^{1\\over 2}}\\partial_\\mu \\xi \\; , \\quad \\quad \\quad\n{\\tilde X}_\\mu= {y\\over {\\tilde {\\xi}}^{1\\over 2}(1+{\\tilde {\\xi}})^{1\\over 2}}\\partial_\\mu {\\tilde {\\xi}} \\; .\n\\end{equation}\nThese are constructed to be unit vectors so that\n$\\; X_\\mu X_\\mu\\, =\\, {\\tilde X}_\\mu {\\tilde X}_\\mu\\,=\\,1\\, .$\n\nWe will use the example of an integral with\none spin factor in the integrand to illustrate the method. Such an\nintegral would be appropriate for correlation functions involving a\nsingle vector field. We define\n\\begin{equation}\nI_\\mu=I(\\xi)X_\\mu=\n\\int_0^\\infty \\!\\!{\\rm d} z \\int\\!{\\rm d}^{d-1}{\\bf r} \\, {1\\over (2z)^d} {\\tilde X}_\\mu\nf_1({\\tilde {\\xi}}) f_2({\\tilde {\\xi}}') \\; ,\n\\label{eq:Ivmu}\\end{equation}\nwhich has the functional form $I_\\mu=I(\\xi)X_\\mu$ due to conformal\ninvariance. 
To find $I(\\xi)$ we use the fact that $X_\\mu$ is a unit\nvector to obtain\n\\begin{equation}\nI(\\xi)=\\int_0^\\infty \\!\\!{\\rm d} z \\int\\!{\\rm d}^{d-1}{\\bf r} \\, {1\\over (2z)^d} (X\\cdot{\\tilde X})\nf_1({\\tilde {\\xi}}) f_2({\\tilde {\\xi}}')\\; .\n\\label{eq:Iv}\\end{equation}\nNow, since\n\\begin{equation}\nX\\cdot {\\tilde X}={(2\\xi+1)(2{\\tilde {\\xi}}+1)-(2{\\tilde {\\xi}}'+1) \\over 4\n\\Big (\\xi(1+\\xi){\\tilde {\\xi}}(1+{\\tilde {\\xi}})\\Big )^{1\\over 2}}\\; ,\n\\end{equation}\nthe methods of section~\\ref{illustrate} can be used to solve\nfor $I(\\xi)$ in terms of hypergeometric functions.\nFor example, if we take\n\\begin{equation}\nf_1(\\xi)={1\\over \\xi^\\alpha (1+\\xi)^\\alpha} \\; , \\quad \\quad \\quad\nf_2(\\xi)={1\\over \\xi^{\\alpha'} (1+\\xi)^{\\alpha'} }\\; ,\n\\end{equation}\nthen we may use the solution to the integral ${\\cal I}_{\\rm III,IV}$\n in~\\rref{eq:III-IV}\nto obtain\n\\begin{eqnarray}\nI(\\xi)&=&\\pi^{{1\\over 2} d}\n{\\Gamma({\\textstyle {1 \\over 2}}+{\\textstyle {1 \\over 2}} d -\\alpha)\\Gamma({\\textstyle {1 \\over 2}} d-\\alpha')\\Gamma({\\textstyle {1 \\over 2}}+\\alpha+\n\\alpha'-{\\textstyle {1 \\over 2}} d) \\over\\Gamma({\\textstyle {1 \\over 2}} +\\alpha)\\Gamma(\\alpha')\\Gamma({\\textstyle {1 \\over 2}}+ d\n-\\alpha-\\alpha') } {1\\over [\\xi(1+\\xi)]^{\\alpha+\\alpha'-{1\\over 2} d}} \\nonumber\\\\\n&&\\times \\; F\\Big ( d-2\\alpha,d -2\\alpha';{\\textstyle {1 \\over 2}}\n+d-\\alpha-\\alpha'; -\\xi\\Big )\\; .\n\\label{eq:Iresult}\\end{eqnarray}\nThe case where $\\alpha={\\textstyle {1 \\over 2}}(d-1)$ can be worked out by an alternative\nmethod by noting that\\footnote{For this we recall that $-\\partial^2\ns^{2-d}=(d-2)S_d\\delta^d (s)\\, .$}\n\\begin{equation}\n\\partial_\\mu\\left ({1\\over (2y)^{d-1}} {{\\tilde X}_\\mu \\over\n[{\\tilde {\\xi}}(1+{\\tilde {\\xi}})]^{{1\\over 2}(d-1)}} \\right )=S_d\\delta^d(x-r)\\; ,\n\\end{equation}\nso that from the definition of $I_\\mu$ in~\\rref{eq:Ivmu},\n\\begin{equation}\n\\partial_\\mu \\left ({1\\over (2y)^{d-1}} I_{\\mu}\\right )=S_d{1\\over (2y)^d}\n{1\\over [\\xi(1+\\xi)]^{\\alpha'}}\\; .\n\\label{eq:prImu}\\end{equation}\nThis result may be rewritten as a differential equation for $I(\\xi)$\n\\begin{equation}\n{{\\rm d} \\over {\\rm d} \\xi}\\left ( [ \\xi(1+\\xi) ]^{{1\\over 2}(d-1)} I(\\xi) \\right )\n= {\\textstyle {1 \\over 2}} S_d {1\\over [\\xi(1+\\xi)]^{1+\\alpha'-{1\\over 2} d}} \\; ,\n\\end{equation}\nwhich may be solved to give\n\\begin{equation}\nI(\\xi) = {S_d \\over d-2\\alpha'} \\, {1\\over [\\xi(1+\\xi)]^{\\alpha'-{1\\over 2}}}\\,\nF\\Big (1,d-2\\alpha'\\,;1+{\\textstyle {1 \\over 2}} d - \\alpha'\\,;-\\xi \\Big ) \\; .\n\\end{equation}\nThe constant of integration is taken to be zero, because otherwise\nthe presence of such a term would violate~\\rref{eq:prImu} by producing\nan extra delta function contribution to the RHS.\nThis solution is in agreement with~\\rref{eq:Iresult}. 
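For completeness, this agreement can be made explicit. Assuming the usual normalisation $S_d=2\\pi^{{1\\over 2} d}\/\\Gamma({\\textstyle {1 \\over 2}} d)$ for the area of a unit sphere in $d$ dimensions, setting $\\alpha={\\textstyle {1 \\over 2}}(d-1)$ in~\\rref{eq:Iresult} and using $\\Gamma({\\textstyle {1 \\over 2}}+{\\textstyle {1 \\over 2}} d-\\alpha)=1$, $\\Gamma({\\textstyle {1 \\over 2}}+\\alpha+\\alpha'-{\\textstyle {1 \\over 2}} d)=\\Gamma(\\alpha')$, $\\Gamma({\\textstyle {1 \\over 2}}+\\alpha)=\\Gamma({\\textstyle {1 \\over 2}} d)$ and $\\Gamma({\\textstyle {1 \\over 2}}+d-\\alpha-\\alpha')=({\\textstyle {1 \\over 2}} d-\\alpha')\\Gamma({\\textstyle {1 \\over 2}} d-\\alpha')$, one finds\n\\begin{eqnarray}\nI(\\xi)\\Big |_{\\alpha={1\\over 2}(d-1)} &=& {\\pi^{{1\\over 2} d}\\over \\Gamma({\\textstyle {1 \\over 2}} d)\\,({\\textstyle {1 \\over 2}} d-\\alpha')}\\,\n{1\\over [\\xi(1+\\xi)]^{\\alpha'-{1\\over 2}}}\\,\nF\\Big (1, d-2\\alpha'\\, ;1+{\\textstyle {1 \\over 2}} d -\\alpha'\\, ; -\\xi\\Big ) \\nonumber \\\\\n&=& {S_d \\over d-2\\alpha'} \\, {1\\over [\\xi(1+\\xi)]^{\\alpha'-{1\\over 2}}}\\,\nF\\Big (1,d-2\\alpha'\\,;1+{\\textstyle {1 \\over 2}} d - \\alpha'\\,;-\\xi \\Big ) \\; ,\n\\end{eqnarray}\nwhich reproduces the expression obtained above from the differential equation. 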
A similar\nprocedure can be used for integrals involving more spin factors, which\nwould be appropriate for correlation functions involving,\nfor example, the energy-momentum tensor or two vector fields.\nSuch integrals are evaluated in $I$ by a slightly different method.\n\n\\subsection{Large $N$ Expansion for the $O(N)$ Model}\n\\label{largeN}\n\nIn this section, by way of conclusion,\nwe demonstrate the use of the parallel transform method\nto calculate two-point functions in the $1\\over N$\nexpansion of the $O(N)$\nnon-linear sigma model for the case of semi-infinite geometry.\nAs usual, the nonlinear constraint $\\phi^2=N$ on the fields $\\phi_\\alpha(x)$\ncan be removed by introducing an auxiliary field\n$\\lambda(x)$ in the Lagrangian via an interaction term ${\\cal L}_{\\rm\nI}={\\textstyle {1 \\over 2}} \\lambda \\phi^2$. To analyse the two-point functions of the\nfields $\\phi_\\alpha$ and $\\lambda$ we first define\n\\begin{equation}\n\\vev{\\phi_\\alpha(x) \\phi_\\beta(x')}=G_\\phi(x,x')\\delta_{\\alpha\\beta}\\; ,\n\\quad \\quad \\quad \\vev{\\lambda(x)\\lambda(x')}=G_\\lambda(x,x')\\; .\n\\end{equation}\nThen, to zeroth order in the $1\\over N$ expansion,\nthese Green's functions satisfy the following relations~\\cite{vas:tmp1}\n\\begin{eqnarray}\n\\Big ( -\\nabla^2+ \\vev{\\lambda(x)} \\Big ) G_\\phi (x,x')&=&\n\\delta^d(x-x')\\; ,\\label{eq:gphi}\\\\\n\\int\\! {\\rm d}^d r\\, G_\\phi^2(x,r) G_\\lambda(r,x')&=&- {2 \\over N}\n\\delta^d(x-x')\\; .\\label{eq:glambda}\n\\end{eqnarray}\nBoth of these relations may be solved by making use of conformal\ninvariance and using the parallel transform method discussed in\nsection~\\ref{method}. For this we write\n\\begin{equation}\nG_\\phi(x,x')={1\\over (4yy')^{\\eta_\\phi}} f_\\phi(\\xi)\\; , \\quad \\quad\n\\quad G_\\lambda(x,x')={1\\over (4yy')^{\\eta_\\lambda}}\nf_\\lambda(\\xi)\\; . \\label{eq:confg}\n\\end{equation}\nSince $2\\eta_\\phi+\\eta_\\lambda=d$ due to conformal\ninvariance of the integral in~\\rref{eq:glambda}, the zeroth\norder result $\\eta_\\phi={\\textstyle {1 \\over 2}} d -1$ implies that\n $\\eta_\\lambda=2$ to this order. Now, with the scaling relation\n $\\vev{\\lambda(x)}=A_\\lambda\/4y^2$, it is possible to obtain\n $G_\\phi$ as a solution to a differential equation. Alternatively we\ncan recast~\\rref{eq:gphi} into an integral equation so that the method of\nparallel transforms can be used to obtain a solution. Writing\n\\begin{equation}\n\\int\\! 
{\\rm d}^d r \\, H(x,r)G_\\phi(r,x')=\\delta^d(x-x')\\; ,\\label{eq:intgphi}\n\\end{equation}\nrequires that\n\\begin{equation}\nH(x,x')=\\Big (-\\nabla^2+{A_\\lambda\\over 4y^2} \\Big )\\delta^d(x-x') \\; .\n\\end{equation}\nThe integral of $H(x,x')$ over planes parallel to the boundary may\nbe written as\n\\begin{equation}\n\\int {\\rm d}^{d-1} {\\bf x} \\, H(x,x') = {1\\over (4yy')^{3\\over 2}} {\\hat {h}}(y,y')\\; ,\n\\end{equation}\ndefining ${\\hat {h}}$ to be\n\\begin{equation}\n{\\hat {h}} (e^{2\\theta}, e^{2\\theta'}) = \\Big (- {{\\rm d}^2\\over {\\rm d}\\theta^2}\n+ 1 + A_\\lambda \\Big ) \\delta (\\theta -\\theta')\\; .\n\\end{equation}\nThe subsequent Fourier transform of ${\\hat {h}}(e^{2\\theta},\ne^{2\\theta'})$\ngives the simple expression\n\\begin{equation}\n{\\tilde {\\hat {h}}}(k)= k^2+1+A_\\lambda \\; .\n\\end{equation}\nWe may now solve for $G_\\phi$ by first integrating the integral\nequation~\\rref{eq:intgphi} over planes parallel to the boundary\nand then taking the Fourier transform as defined in~\\rref{eq:four}.\nThe resulting equation is\n\\begin{equation}\n{\\tilde {\\hat {h}}}(k){\\tilde {\\hat {f}}}_\\phi(k)=1 \\; ,\n\\end{equation}\nwhere ${\\tilde {\\hat {f}}}_\\phi(k)$ is the transform of the function\n$f_\\phi(\\xi)$ defined in~\\rref{eq:confg}.\nConsequently, the desired result is\n\\begin{equation}\n{\\tilde {\\hat {f}}}_\\phi(k)={1\\over {\\tilde {\\hat {h}}}(k)}={1\\over k^2+1+A_\\lambda} \\; .\n\\end{equation}\nIf we now express ${\\tilde {\\hat {f}}}_\\phi(k)$ as\n\\begin{equation}\n{\\tilde {\\hat {f}}}_\\phi(k)={1\\over 16} {\\Gamma (\\mu+ { \\textstyle{i\\over\n4}}k)\\Gamma (\\mu-{ \\textstyle{i\\over 4}}k) \\over\n\\Gamma (1+\\mu+{ \\textstyle{i\\over 4}}k)\\Gamma (1+\\mu-{ \\textstyle{i\\over 4}}k)} \\; ,\n\\quad \\quad \\quad \\mu^2={1+A_\\lambda \\over 16}\\; ,\n\\end{equation}\nthen we may use the result~\\rref{eq:gab} to obtain the inverse\ntransform directly:\n\\begin{equation}\nf_\\phi(\\xi)={1\\over 4^{1+2\\mu}\\pi^\\lambda }\n{\\Gamma(2\\mu+\\lambda )\\over \\Gamma(1+2\\mu)} {1\\over (1+\\xi)^{2\\mu+\\lambda}}\nF\\Big (2\\mu+\\lambda, {\\textstyle {1 \\over 2}} + 2 \\mu; 1+4\\mu; {1\\over 1+\\xi} \\Big ) \\;\n{}.\n\\end{equation}\nThis general form for $f_\\phi(\\xi)$ gives the correct large $N$\nGreen's function $G_\\phi(x,x')$ appropriate for both the Ordinary\nand Special transitions in the statistical mechanical context, where\nwe should take $\\mu=(d-3)\/4$ and $\\mu=(d-5)\/4$,\nrespectively~\\cite{bray:jpa}.\nSolutions for $G_\\lambda(x,x')$ can now be obtained in a similar way\nvia the integral equation~\\rref{eq:glambda}. Results for\n$G_\\lambda(x,x')$ for both the Ordinary and Special transitions were\ncalculated with the parallel transform method in $I$ and also\nin~\\cite{ohno:let1,ohno:prog,ohno:let2} by a different method.\nIt would be interesting to see if the next order in the $1\\over N$\nexpansion can be obtained using the methods discussed in this paper;\nthis is the subject of future research.\n\n\\vskip 20pt\n\n\\leftline{\\large \\bf Acknowledgements}\n\\vskip 5pt\nI wish to thank Hugh Osborn for many useful ideas and suggestions.\nThis research was funded by a Postdoctoral Research Fellowship from\n the Natural Sciences and Engineering Research Council of\nCanada.\n\\vskip 5pt\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}