\section{Introduction}\label{sec:intro}

Being able to replicate, and therefore investigate, the structure and function of real-world complex networks is a profoundly difficult problem. However, the pervasiveness of systems that can be interpreted as complex networks cannot be overstated: social networks \cite{girvan2002community}, the spread of disease \cite{newman2003structure}, artificial intelligence \cite{hopfield1982neural}, language structure \cite{ronen2014links}, and transportation networks \cite{santi2014quantifying}. Accordingly, a number of network models and network generating algorithms have been proposed~\cite{watts1998collective, newman2001random, kim2004performance, volz2004random, serrano2005tuning, bansal2009exploring, ritchie2014higher, ritchie2014beyond, overbury2015using}. Many of these network models seek to reproduce a specific network property or characteristic: the degree distribution \cite{barabasi1999emergence, newman2001random}, small-worldness \cite{watts1998collective}, degree-degree correlations \cite{newman2002assortative, newman2003structure} or clustering, the propensity of 3-cycles in a network \cite{milo2002network, newman2003properties}. However, investigations of \emph{higher-order} structure, that is, subgraphs and arrangements of subgraphs not specified by standard network metrics, have been limited by a lack of accurate and versatile network models. Some progress has been made using the \emph{configuration model}~\cite{karrer2010random, ritchie2014higher, ritchie2014beyond}, and it is this work we seek to build upon.

In the standard configuration model, triangle subgraphs appear infrequently, as a by-product of working with finite-size networks~\cite{bollobas1980probabilistic}. But what if one \emph{wants} triangle subgraphs to appear in a network, in particular, if one wants to model a complex network with clustering? An extension of the configuration model to this case exists~\cite{miller2009percolation, newman2009random}. In this extension a node is allocated a number of stubs, which may go on to form standard edges, as well as a number of triangle `corners' or \emph{hyperstubs}, pairs of stubs that will form triangles. While edges are formed in the usual way, triangles are formed by selecting three triangle hyperstubs at random and connecting their pairs of constituent stubs. Just as the total number of stubs must be divisible by two for the edges to form, the total number of triangle hyperstubs must be divisible by three for the triangle hyperstub sequence to be graphical. Another similarity this model shares with the standard configuration model is that the probability that any two triangles will share an edge, thus forming a $G_\boxslash$ subgraph (see Figure~\ref{fig:examples}), vanishes in the limit of large network size~\cite{karrer2010random}. Just as a network composed only of edges is limited in recreating real-world networks, so is a model that can only include edges and triangles. Admittedly, this depends on the properties and structure of the real network, but in many cases edges and triangles alone are not enough to produce a sufficiently accurate artificial replica of the real network.
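To make the construction concrete, the following is a minimal Matlab sketch of the edge-triangle wiring step described above, assuming the stub and corner lists have already been drawn; the variable names and toy sequences are illustrative rather than those of any published implementation, and the self- and multi-edge checks discussed in Section~\ref{sec:MM} are omitted for brevity.

\begin{verbatim}
% Minimal sketch of edge-triangle wiring (illustrative only).
% `stubs' lists each node once per free stub; `corners' lists each
% node once per triangle corner (a pair of stubs).
stubs   = [1 1 2 3 3 4];     % must sum to an even number
corners = [1 2 3 4 5 6];     % length must be divisible by three
A = zeros(6);                % adjacency matrix of the network

stubs = stubs(randperm(numel(stubs)));
for i = 1:2:numel(stubs)     % pair off stubs to form classical edges
    A(stubs(i), stubs(i+1)) = 1;
    A(stubs(i+1), stubs(i)) = 1;
end

corners = corners(randperm(numel(corners)));
for i = 1:3:numel(corners)   % join corners in threes to form triangles
    t = corners(i:i+2);
    A(t(1), t(2)) = 1; A(t(2), t(1)) = 1;
    A(t(2), t(3)) = 1; A(t(3), t(2)) = 1;
    A(t(3), t(1)) = 1; A(t(1), t(3)) = 1;
end
\end{verbatim}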
\n\n\\begin{figure}[!htbp]\n\\begin{center}\n\\begin{tabular}{ccccc}\n\\begin{tikzpicture}\n\\node [place] (2) at (0, 0) {};\n\\node [place] (1) at (0.001, 1) {};\n \\draw [ultra thick](1) to (2);\n\\end{tikzpicture}\n&\n\\begin{tikzpicture}\n \\newdimen\\R\n \\R=0.58cm\n \\draw (0:\\R)\n \\foreach \\x in {120,240,360} { (\\x:\\R) }\n (360:\\R) node [place] (3) {}\n (240:\\R) node [place] (2) {}\n (120:\\R) node [place] (1) {};\n \n \\draw [ultra thick](2) to (3);\n \\draw [ultra thick](3) to (1);\n\\end{tikzpicture} \n&\n\\begin{tikzpicture}\n \\newdimen\\R\n \\R=0.58cm\n \\draw (0:\\R)\n \\foreach \\x in {120,240,360} { (\\x:\\R) }\n (360:\\R) node [place] (3) {}\n (240:\\R) node [place] (2) {}\n (120:\\R) node [place] (1) {};\n \n \\draw [ultra thick](1) to (2);\n \\draw [ultra thick](2) to (3);\n \\draw [ultra thick](3) to (1);\n\\end{tikzpicture} \n&\n\\begin{tikzpicture}\n\\node [place] (1) at (0.001, 0) {};\n\\node [place] (2) at (0.001, 1.001) {};\n\\node [place] (3) at (1, 1) {};\n\\node [place] (4) at (1, 0.001) {};\n\\draw [ultra thick](2) to (3);\n\\draw [ultra thick](3) to (4);\n\\draw [ultra thick](4) to (1);\n\\end{tikzpicture}\n&\n\\begin{tikzpicture}\n \\newdimen\\R\n \\R=0.65cm\n \\draw (0:\\R)\n \\foreach \\x in {90,210,330} { (\\x:\\R) }\n (330:\\R) node [place] (3) {}\n (210:\\R) node [place] (2) {}\n (90:\\R) node [place] (1) {};\n \\node [place] (4) at (0, 0) {};\n \\draw [ultra thick](1) to (4);\n \\draw [ultra thick](2) to (4);\n \\draw [ultra thick](3) to (4);\n\\end{tikzpicture} \n\\\\\n$G_0$ & \n$u3$ &\n$G_\\triangle~\/~t3$ &\n$u4$ &\n$s4$ \\\\ \n\\begin{tikzpicture}\n\\node [place] (1) at (0.001, 0) {};\n\\node [place] (2) at (0.001, 1.001) {};\n\\node [place] (3) at (1, 1) {};\n\\node [place] (4) at (1, 0.001) {};\n\\draw [ultra thick](1) to (2);\n\\draw [ultra thick](2) to (3);\n\\draw [ultra thick](4) to (1);\n\\draw [ultra thick](1) to (3);\n\\end{tikzpicture}\n&\n\\begin{tikzpicture}\n\\node [place] (1) at (0.001, 0) {};\n\\node [place] (2) at (0.001, 1.001) {};\n\\node [place] (3) at (1, 1) {};\n\\node [place] (4) at (1, 0.001) {};\n\\draw [ultra thick](1) to (2);\n\\draw [ultra thick](2) to (3);\n\\draw [ultra thick](3) to (4);\n\\draw [ultra thick](4) to (1);\n\\end{tikzpicture}\n&\n\\begin{tikzpicture}\n\\node [place] (1) at (0.001, 0) {};\n\\node [place] (2) at (0.001, 1.001) {};\n\\node [place] (3) at (1, 1) {};\n\\node [place] (4) at (1, 0.001) {};\n\\draw [ultra thick](1) to (2);\n\\draw [ultra thick](2) to (3);\n\\draw [ultra thick](3) to (4);\n\\draw [ultra thick](4) to (1);\n\\draw [ultra thick](1) to (3);\n\\end{tikzpicture} \n&\n\\begin{tikzpicture}\n\\node [place] (1) at (0.001, 0) {};\n\\node [place] (2) at (0.001, 1.001) {};\n\\node [place] (3) at (1, 1) {};\n\\node [place] (4) at (1, 0.001) {};\n\\draw [ultra thick](1) to (2);\n\\draw [ultra thick](2) to (3);\n\\draw [ultra thick](3) to (4);\n\\draw [ultra thick](4) to (1);\n\\draw [ultra thick](1) to (3);\n\\draw [ultra thick](2) to (4);\n\\end{tikzpicture} \n&\n~\n\\\\\n$i4$ & \n$G_\\boxempty~\/~e4$ &\n$G_\\boxslash~\/~d4$ &\n$G_\\boxtimes~\/~c4$ &\n\t\t\t \\\\ \n\\begin{tikzpicture}\n \\newdimen\\R\n \\R=0.75cm\n \\draw (0:\\R)\n \\foreach \\x in {1,71,...,281} { (\\x:\\R) }\n (1:\\R) node [place] (1) {}\n (71:\\R) node [place] (2) {}\n (141:\\R) node [place] (3) {}\n (211:\\R) node [place] (4) {}\n (281:\\R) node [place] (5) {};\n \\draw [ultra thick](1) to (2);\n \\draw [ultra thick](2) to (3);\n \\draw [ultra thick](3) to (4);\n \\draw [ultra thick](4) to (5);\n \\draw [ultra 
thick](5) to (1);
\end{tikzpicture}
&
\begin{tikzpicture}
  \newdimen\R
  \R=0.75cm
  \draw (0:\R)
  \foreach \x in {1,61,...,301} { (\x:\R) }
  (1:\R) node [place] (1) {}
  (61:\R) node [place] (2) {}
  (121:\R) node [place] (3) {}
  (181:\R) node [place] (4) {}
  (241:\R) node [place] (5) {}
  (301:\R) node [place] (6) {};
  \draw [ultra thick](1) to (2);
  \draw [ultra thick](2) to (3);
  \draw [ultra thick](3) to (4);
  \draw [ultra thick](4) to (5);
  \draw [ultra thick](5) to (6);
  \draw [ultra thick](6) to (1);
\end{tikzpicture}
&
~
&
~
&
\\
$G_{\pentagon}$&
$G_{\hexagon}$&
~&
~&
\end{tabular}
\caption{The set of subgraphs that have been used in this paper. The subgraphs denoted by $\{G_0, G_\triangle, G_\boxempty, G_\boxslash, G_\boxtimes, G_{\pentagon}, G_{\hexagon}\}$ are those that have been used as input for the proposed network construction algorithms. We use $\{u3, t3, u4, s4, i4, e4, d4, c4\}$ to denote the total numbers of uniquely counted subgraphs given by the subgraph counting algorithm~\cite{ritchie2014higher}.}\label{fig:examples}
\end{center}
\end{figure}

The configuration model has since received further attention to address this issue~\cite{karrer2010random}. Building on the edge-triangle model, a more general subgraph-based approach is taken in which one may specify distributions of edges alongside distributions of arbitrary subgraphs. In the case of complete subgraphs it is obvious how to do this. For example, $G_\boxtimes$ subgraphs can be formed by allocating to nodes hyperstubs composed of three stubs; four of these hyperstubs can then be selected at random to form a $G_\boxtimes$ subgraph. However, it is not clear how this may work for subgraphs that are not symmetric. For example, in $G_\boxslash$ there are two different types of hyperstubs, and it is necessary for any network model or construction algorithm to be able to make this distinction. Karrer and Newman proposed that it is possible to identify a node's \emph{role} within a subgraph using \emph{orbits}. To find the orbits of a subgraph one must first list all possible automorphisms of the subgraph, that is, permutations of nodes that do not create or destroy edges. The orbit of a node is then the set of nodes with which it may be permuted so that no edges are created or destroyed. Of course, computing the automorphism group of a subgraph is a computationally hard problem in general, but this is not an obstacle so long as subgraphs with few nodes are used~\cite{karrer2010random}.

Network models are rarely used independently of other processes. Instead, they typically provide the substrate for dynamical processes to operate upon. For example, the compartmental Susceptible-Infected-Recovered ($SIR$) model of contagion is often embedded into a network to help better understand how the network and its properties affect the epidemic. Previous work~\cite{ritchie2014beyond} successfully incorporated the Karrer and Newman approach into an approximate ODE or mean-field model for $SIR$ epidemics on networks displaying higher-order structure, and this mean-field model showed excellent agreement with simulation results. In order to achieve this, Ritchie et~al. bypassed the need to classify a node's role in a subgraph via the automorphism group. Instead, nodes within asymmetric subgraphs were uniquely enumerated, even if they were topologically equivalent to one another, and this enumeration defined their role. The motivation for this adaptation was to simplify the derivation of the ODE model. Using the orbit approach or the full enumeration are different ways of satisfying different modelling needs, and these are not the only possible approaches. In fact, when modelling networks and nodes within subgraphs, one can instead classify nodes by the stub cardinality of their hyperstubs.
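As an illustration of the orbit idea described above, the brute-force Matlab sketch below recovers the orbits of $G_\boxslash$ by testing all $4!$ node permutations. This is a toy calculation, with variable names of our choosing, that is only feasible for subgraphs with few nodes, as noted above.

\begin{verbatim}
% Brute-force orbit computation for G_boxslash (a square plus one
% diagonal). Two nodes share an orbit if some automorphism maps one
% onto the other.
E = [1 2; 2 3; 3 4; 4 1; 1 3];           % edge list of G_boxslash
A = full(sparse(E(:,1), E(:,2), 1, 4, 4));
A = A + A';                              % symmetric adjacency matrix
P = perms(1:4);                          % all candidate permutations
orbitMates = eye(4);
for i = 1:size(P, 1)
    p = P(i, :);
    if isequal(A(p, p), A)               % p preserves every edge
        orbitMates(sub2ind([4 4], 1:4, p)) = 1;
    end
end
% Nodes with identical rows lie in the same orbit:
[~, ~, orbitId] = unique(orbitMates, 'rows')  % -> orbits {1,3} and {2,4}
\end{verbatim}

As expected, the two degree-3 (diagonal) nodes form one orbit and the two degree-2 nodes form the other, matching the two hyperstub types of $G_\boxslash$.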
A feature common to all of the above models, i.e., the edge-triangle model, the more general Karrer-Newman model, and that proposed by Ritchie et al., is that sequences of hyperstubs must be specified for each and every subgraph that is to be included. From these sequences it is possible to recover the network's degree sequence by multiplying each sequence by the stub cardinality of the hyperstub which it represents, and then summing the resulting sequences. Therefore the degree sequence of the network is a result of the construction of the network rather than a quantity that is controlled for. However, given that the degree sequence is arguably the single most important characteristic of a network, there is a need for methods that can generate networks with a particular subgraph family and distribution yet preserve a given degree sequence. In~\cite{ritchie2014beyond}, we recently showed that it is possible to constrain the hyperstub sequences so that the $1^{st}$ and $2^{nd}$ moments of the resulting degree sequence are controlled. In this paper, we go beyond this work and propose two generation algorithms that provide full control over the degree sequence and clustering.

The paper is organised as follows. In Section~\ref{sec:MM}, we describe in detail the two generation algorithms, including the tuning of clustering. In Section~\ref{sec:results}, we validate our algorithms and explore the diversity of the generated networks by comparing them to the widely used Big-V rewiring scheme. We further analyse networks generated by using different subgraph families or distributions. Epidemic and complex contagion models are simulated on these networks and we show that degree distribution and global clustering alone are not sufficient to predict the outcome of these processes. Finally, we discuss extensions and further research questions relating to our work.


\section{Materials and methods}\label{sec:MM}

In this section we propose two new algorithms, both of which are parametrised by a degree sequence and a set of subgraphs. The algorithms construct hyperstub degree sequences (from which the input degree sequence may be recovered exactly) that can be used in a modified configuration-model-style connection procedure to realise a network.

There are some caveats regarding the preservation of the input degree sequence that are common to all configuration-like models. Firstly, a degree sequence must sum to an even number to be graphical. If it does not, a stub must be created or destroyed to satisfy this constraint. In general, a hyperstub degree sequence must sum to a multiple of the number of times the hyperstub appears in its parent subgraph, e.g., a triangle hyperstub sequence must sum to a multiple of three. When selecting stubs or hyperstubs at random to form subgraphs it is possible that self- or multi-edges may form. The expected number of such events depends only on the average degree $\langle k \rangle$ and thus remains constant as network size grows. It is possible to simply delete self-edges or collapse multi-edges down to a single edge.
If this approach is taken then the guiding degree sequence will be violated. Instead, we disallow such connections by reselecting nodes in the connection procedure until no self- or multi-edges would be created by forming the subgraph. This is known as the \emph{matching algorithm}~\cite{milo2003uniform}. Finally, it is possible for the process to be left with no option other than to add subgraphs over existing links or to select multiple instances of the same node. In this case we completely reset the algorithm, regenerating hyperstub sequences and forming subsequent connections until a network is formed.

\subsection{The underdetermined sampling algorithm -- UDA}\label{sec:UDA}

The concept underpinning this algorithm is that, for each node, there are combinations of hyperstubs that will satisfy its degree. For example, a node with $k=3$ classical edges could form 3 single $G_0$ edges, or 1 $G_0$ edge and one $G_\triangle$ hyperstub. The number of possible arrangements depends on the degree of the node and the number of input subgraphs. From these arrangements a single one is selected at random. For a given degree $k$ this problem is equivalent to solving an underdetermined linear Diophantine equation with right-hand side $k$, subject to non-negativity constraints. The coefficients are given by the edge counts of the hyperstubs induced by the input subgraphs, and a solution gives the number of each hyperstub such that the degree of the node is matched exactly.

To generate a network using this algorithm, let us assume that a degree sequence, $D=\{d_1,d_2, \dots, d_{N}\}\in \mathbb{N}_0^{1 \times N}$, and the set of subgraphs to be included in the network's construction, $G = \{G_1, G_2, \dots, G_l\}$, are given. Then, for each subgraph we classify its hyperstubs by their edge cardinality. It is now possible to form a vector whose elements specify the number of edges in each hyperstub, and from this vector we take the unique elements. For example, the $G_\boxslash$ subgraph has the corresponding hyperstub vector $\alpha = (2,3)$. For a given degree $k$ we must consider all possible hyperstubs and hyperstub combinations that yield a classical degree equal to $k$. To systematically list all such combinations, we first concatenate all the hyperstub vectors into a single vector, $\boldsymbol{\alpha}$, to be used as coefficients for the following linear underdetermined Diophantine equation
\begin{eqnarray}\label{eq:dio}
k = \alpha_1 x_1 + \alpha_2 x_2 + \dots + \alpha_{r}x_{r},
\end{eqnarray}
where $k = k_{min},k_{min}+1,\dots,k_{max}$ and $r$ denotes the number of hyperstubs eligible for the given degree $k$ (a node with degree $k=3$ can only go on to form subgraphs whose hyperstubs contain no more than three edges). The equation is solved subject to the constraint $\boldsymbol{x} \in \mathbb{N}_0^{r}$. A solution $\boldsymbol{x}$ of this equation corresponds to the number of each type of hyperstub required to result in a node of degree $k$. For example, if $\alpha_1$ and $\alpha_2$ take values $1$ and $2$, corresponding to hyperstubs of $G_0$ and $G_\triangle$ respectively, and the degree of the node is $k=5$, the Diophantine equation takes the form $5 = x_1 + 2x_2$ and its solution space is given by the pairs $(x_1,x_2) =\{(5,0),~(3,1),~(1,2)\}$. In general these equations may be solved recursively by fixing a trial value $x_i = j$ and reducing the dimensionality of the equation by absorbing this term. This is repeated until the equation becomes of the standard form $k' = \alpha_1x_1 + \alpha_2x_2$, which can be solved explicitly. A solution obtained this way forms a single solution of the original equation. This process is then repeated for different trial values and, since we seek only non-negative solutions and $k$ is finite, the corresponding solution space has a finite number of elements. Matlab code for this process is available at \url{https://github.com/martinritchie/Network-generation-algorithms}, and a sketch is given below.
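The following Matlab sketch mirrors the recursion just described; it is a simplified illustration in the spirit of the published code, with names of our choosing.

\begin{verbatim}
% Enumerate all non-negative solutions of
% k = alpha(1)*x(1) + ... + alpha(r)*x(r), one solution per row.
function X = dio_solve(k, alpha)
    if numel(alpha) == 1                 % base case: alpha(1)*x = k
        if mod(k, alpha(1)) == 0
            X = k / alpha(1);
        else
            X = zeros(0, 1);             % no solution for this branch
        end
        return
    end
    X = zeros(0, numel(alpha));
    for j = 0:floor(k / alpha(1))        % fix a trial value x(1) = j ...
        Y = dio_solve(k - alpha(1)*j, alpha(2:end)); % ... and recurse
        X = [X; repmat(j, size(Y, 1), 1), Y];        %#ok<AGROW>
    end
end
% Example: dio_solve(5, [1 2]) returns the three solutions
% (5,0), (3,1) and (1,2) of 5 = x1 + 2*x2, as in the text.
\end{verbatim}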
Once the entire solution space for each degree has been found it is possible to start forming the hyperstub degree sequences. To proceed, the algorithm works sequentially through the degree sequence $D =\{d_1,d_2, \dots, d_N\}$ of the $N$ nodes, where $d_i \in \{k_{min}, k_{min}+1, \dots, k_{max}\}$. For each node a solution is selected at random from the solution space corresponding to $k=d_i$, which specifies that node's hyperstub configuration; concatenating the selected solutions for all nodes yields a hyperstub degree sequence of dimension $h \times N$, where $h$ denotes the total number of hyperstubs induced by the input subgraphs.

For incomplete subgraphs it is not possible to simply select solutions from the Diophantine equations' solution spaces at random. The reason for this is two-fold: (1) not all asymmetric subgraphs are composed of equal quantities of each of their constituent hyperstubs, and (2) hyperstubs of lower stub cardinality will appear more frequently than hyperstubs of higher stub cardinality, because hyperstubs with fewer edges can be more readily accommodated into the degree of a node. Problem (1) may be addressed by representing every hyperstub induced by a subgraph in the vector of coefficients, as opposed to grouping hyperstubs by their stub cardinality. Problem (2) may be addressed by decomposing hyperstubs generated in excess into simple/classical edges. This is the approach we take in our implementation, a choice motivated by its wider applicability. For example, when using $G_\boxslash$ as an input subgraph there will be more degree-2 corners generated than degree-3 corners. However, once all degree-3 corners are allocated to $G_\boxslash$ subgraphs, any leftover degree-2 corners may be decomposed back into stubs that can form edges, thus preserving the degree sequence. Finally, it should be noted that if the input subgraphs do not admit hyperstub combinations that can sum to a particular degree in the network then the proposed method will fail; for this reason it is almost always necessary to include $G_0$ (the single edge) as an input subgraph.

Pseudo-code for the UDA algorithm is given in Appendix~\ref{UDA}, and the Matlab code is available from \url{https://github.com/martinritchie/Network-generation-algorithms}.

\subsubsection{A priori clustering calculation}\label{sec:UDA_c}

The global clustering coefficient is defined as the ratio between the total number of triangles and the total number of connected triples of nodes, $\triangle + \vee$, since each triangle contains 3 triples of nodes: $C = \frac{\triangle}{\triangle + \vee}$. It should be noted that in these counts each unique triangle is counted 6 times and each unique triple is counted twice. The number of triples incident to a node of degree $k$ is given by $\triangle + \vee = k(k-1)$, since a node will form a triple with every pair of its neighbours and each triple is counted twice.
The expected number of triples for a node of degree $k$ is therefore given by $P(K=k)\times k(k-1)$, where $P(K=k)$ is the probability of finding a node of degree $k$. The expected number of triangles incident to a node of degree $k$, $\langle \triangle_k \rangle$ (counted with the same six-fold multiplicity), may be obtained from the Diophantine equations' solution space associated with that degree. To do this, one sums all occurrences of triangle corners, regardless of which subgraph they belong to, over that solution space and divides by the number of solutions in that particular solution space, since solutions are selected uniformly at random. Finally we are in a position to compute the global clustering coefficient as
\begin{eqnarray}
C = \frac{\sum_{k=2}^{k_{max}} P(K=k) \langle \triangle_k \rangle}{\sum_{k=2}^{k_{max}} P(K=k)\, k(k-1)}.
\end{eqnarray}
For example, let us consider the homogeneous network with $k=5$ and the input subgraphs $G_0$ and $G_\boxslash$. These subgraphs induce the vector of coefficients $\boldsymbol{\alpha} = (1,2,3)$ that, for $k=5$, has the following solution space
\[\begin{array}{rccccc}
 G_0: & 5 & 3 & 2 & 1 & 0,\\
 g_2: & 0 & 1 & 0 & 2 & 1, \\
 g_3: & 0 & 0 & 1 & 0 & 1,
\end{array}\]
where the rows give the number of each hyperstub, the columns give the individual solutions, and $g_2$ and $g_3$ denote the double and triple hyperstubs of $G_\boxslash$ respectively. From this we may calculate the expected number of triangles $\langle \triangle_5 \rangle$. In this example we can see that, on average, for every $g_3$ corner the UDA algorithm will generate two $g_2$ corners. Since the excess $g_2$ corners are decomposed into edges, $g_2$ and $g_3$ corners end up being used in equal quantities, so the expected number of $g_2$ corners is given by the expected number of $g_3$ corners, i.e., $2/5$ per node. Since $g_2$ denotes a triangle corner, the number of $g_2$ corners also gives the number of uniquely counted triangles per node. The expected number of triangles per node, with each triangle counted 6 times, is therefore $12/5$, and this network will have a theoretical global clustering of $C=(12/5)/(5\times 4)=0.12$. Computationally, we verify this by generating such networks with $N=5000$: the number of connected triples (counted with the multiplicities above) is exactly $\triangle + \vee = 100000$ and the number of triangles is $\triangle = 12120$, resulting in a global clustering of $0.1212$, as expected.
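Assuming the enumeration routine \texttt{dio\_solve} sketched in Section~\ref{sec:UDA}, the worked example above can be checked numerically as follows; this is a toy verification of ours, not part of the published code.

\begin{verbatim}
% Check the a priori clustering of the homogeneous k = 5 example with
% input subgraphs G_0 and G_boxslash, i.e., alpha = (1, 2, 3).
alpha = [1 2 3];               % G_0 stub, g_2 corner, g_3 corner
S = dio_solve(5, alpha);       % solution space, one solution per row
g3 = mean(S(:, 3));            % expected g_3 corners per node (= 2/5)
% Excess g_2 corners are decomposed into edges, so matched g_2 = g_3
% and the number of g_2 corners equals the number of unique triangles.
tri6 = 6 * g3;                 % triangles per node, 6-fold counted
C = tri6 / (5 * 4)             % = (12/5)/20 = 0.12, as in the text
\end{verbatim}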
\subsection{Cardinality matching -- CMA}\label{sec:CMA}

The cardinality matching algorithm (CMA) requires as input a degree sequence, a set of subgraphs and corresponding \emph{subgraph sequences}, i.e., multiple sequences specifying to which and to how many subgraphs nodes belong. Note that these sequences are not yet allocated to nodes. The algorithm proceeds to allocate the hyperstubs of subgraphs to nodes that have a sufficient number of stubs to accommodate the hyperstub degree. The algorithm outputs hyperstub degree sequences, from which the input degree sequence may be recovered exactly. These can then be used to realise a network based on a modification of the configuration model.

To generate a CMA network one needs first to decide on a degree sequence $D$, a subgraph set $G = \{G_1, G_2, \dots, G_l\}$ and a set of subgraph sequences $S = \{S_1,S_2,\dots, S_l\}$, where $S_j(k)$, with $j=1, 2, \dots, l$ and $k=1, 2,\dots, N$, gives the number of times a node will be part of a $G_j$ subgraph, without specifying the precise hyperstubs that connect the node to a $G_j$ subgraph. Our goal is to map the subgraph sequences into hyperstub sequences, which can then be allocated to nodes that can accommodate them. From the hyperstub sequence it is possible to work out the lower bound on the degree of nodes that can accommodate a specific hyperstub configuration. To complete this mapping one needs to differentiate between complete and incomplete subgraphs.

For complete subgraphs the subgraph sequence is identical to its hyperstub sequence, since there is only one way or hyperstub by which a node can connect to such a subgraph. Thus, multiplying the hyperstub degree by the number of edges in the hyperstub gives the lower bound on the degree of nodes that can accommodate the hyperstub sequence. For incomplete subgraphs the subgraph sequence does not specify how the node connects to the subgraph; hence, we need to determine how the various hyperstubs are allocated to nodes. To see how to do this, let us consider an arbitrary subgraph $G$ with subgraph sequence $S$. Given that the subgraph has $m$ distinct hyperstubs, let $p=(p_1, p_2, \dots, p_m)$ be the vector of probabilities of picking the different hyperstubs. We note that the values of $p$ reflect the proportion of each hyperstub found in the subgraph. For example, $G_\boxslash$ has two distinct hyperstubs that both appear with multiplicity two; in this case $p=(1/2,1/2)$. This ensures that the hyperstub numbers are balanced and subgraphs can be formed.

Next, using the multinomial distribution corresponding to subgraph $G$, $M^{G}(s^{G}_i,p)$, where $s^{G}_i$ denotes the subgraph sequence entry of index $i$ (this is not yet a node label), we pick hyperstub types to transform the subgraph degree into a hyperstub degree. For each $s^{G}_i$ this results in a vector of length $m$ specifying the exact number of each hyperstub. It is possible to concatenate all the resulting choices from all multinomial distributions $M^{G}(s^{G}_i,p)$, where $i=1,2, \dots, N$, to form the following matrix
$$
\bordermatrix{
 & s_1^{G} & s_2^{G} & \dots & s_N^{G} \cr
h_1^{G} & h_1^{G}(1) & h_1^{G}(2) & \dots & h_1^{G}(N) \cr
h_2^{G} & h_2^{G}(1) & h_2^{G}(2) & \dots & h_2^{G}(N) \cr
\vdots & \vdots & \ddots & \ddots & \vdots \cr
h_m^{G} & h_m^{G}(1) & h_m^{G}(2) & \dots & h_m^{G}(N)
\cr}=H^{G},
$$
where $h_i^{G}(j)$ denotes the number of $h_i$ hyperstubs contributing to the subgraph degree $s_j^{G}$. We now need to compute the total number of edges specified by each column of the above matrix, i.e., by each hyperstub degree. This is given by $H^{G}(i)=\sum_{j=1}^{m}|h_j^{G}|h_j^{G}(i)$, which denotes the total number of edges required by the subgraph degree $s_i^{G}$, where $|h_j^{G}|$ represents the number of edges needed to form hyperstub $j$ of subgraph $G$ and $i=1,2,\dots,N$. This process needs to be repeated for each subgraph to be included in the network's construction, i.e., for each subgraph $G_i$ with subgraph sequence $S^{G_i}=(s_1^{G_i},s_2^{G_i},\dots, s_N^{G_i})$ there is a corresponding $H^{G_i}$ whose elements the algorithm will use as the lower bound on the degree of the nodes that can accept such a selection of hyperstubs.
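A minimal Matlab sketch of this multinomial mapping for a single subgraph is given below. It assumes the Statistics Toolbox function \texttt{mnrnd}; the toy sequence and variable names are illustrative.

\begin{verbatim}
% Map a subgraph sequence s into hyperstub degrees for one subgraph G
% (here G_boxslash: two hyperstub types with p = [1/2 1/2]).
p     = [1/2 1/2];               % hyperstub proportions within G
hcard = [2 3];                   % edges per hyperstub, |h_j|
s     = [2 1 0 3 1];             % example subgraph sequence (toy data)
H = zeros(numel(p), numel(s));   % the matrix H^G of the text
for i = 1:numel(s)
    H(:, i) = mnrnd(s(i), p)';   % multinomial split of entry i
end
Hdeg = hcard * H;                % total edges required per entry,
                                 % the lower bound on host-node degree
\end{verbatim}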
The algorithm then proceeds by choosing the largest value, $H_{\max}$, from all $H^{G_i}$ matrices; this is used as the lower bound on the degree of nodes that can accept the hyperstub configuration associated with $H_{\max}$, i.e., nodes that have enough edges of the classical type. From the list of all nodes with degree equal to or larger than $H_{\max}$, a node is selected uniformly at random. The degree of the selected node is reduced accordingly, and the index of the node is associated with the hyperstub degree corresponding to $H_{\max}$. This node is then removed from the pool of eligible nodes for that particular subgraph, as otherwise it could be selected twice for the same subgraph, thus violating the subgraph degree sequence. Similarly, the element $H_{\max}$ is removed from the pool of subgraph degrees that have yet to be allocated to nodes. This is repeated until all elements of each subgraph degree sequence are allocated to nodes. Any stubs that are not allocated to a particular hyperstub or subgraph are left to form classical edges.

In some cases it may be necessary to impose cardinality constraints on the subgraph sequences. Obviously, if the network is homogeneous with $k=3$ we cannot include complete pentagon subgraphs or allocate two $G_\triangle$ subgraphs to each node. More generally, it may be necessary to constrain the moments of the subgraph sequences. Let $\langle k \rangle$ denote the mean of the given degree sequence and let $G_i$ be a subgraph composed of a single hyperstub with cardinality $\alpha$ and with a subgraph degree sequence of mean $\langle s \rangle$; then $\langle \alpha s \rangle = \alpha \langle s \rangle \leq \langle k \rangle$ is a necessary condition for the two sequences to be graphical. In the case of more than one hyperstub, this extends to $\sum_{i = 1}^m\alpha_i \langle s_i \rangle \leq \langle k \rangle$, where $m$, $\alpha_i$ and $s_i$ denote the number of hyperstubs, the hyperstub cardinalities and the associated subgraph sequences respectively. For the networks generated in this paper, the degree sequence and subgraph sequences were measured from networks previously generated by the UDA, so that the sequences were known to be graphical without the need to impose any such constraints.

Clustering calculations for this algorithm are straightforward since the subgraph degree sequences are known. One simply sums a sequence and then multiplies this figure by the number of triangles induced by that subgraph, taking care not to double count across multiple sequences for the same subgraph. The number of triples of connected nodes can be calculated following the method given for the UDA in Section~\ref{sec:UDA_c}. Pseudo-code for the CMA is given in Appendix~\ref{CMA}, with the corresponding Matlab code available from \url{https://github.com/martinritchie/Network-generation-algorithms}.

\subsection{Connection process}\label{sec:CP}

We describe this process for a single incomplete subgraph; the case of a complete subgraph is trivial and has already been described (see Section~\ref{sec:intro}). This process was first presented by Karrer \& Newman \cite{karrer2010random}. Consider a subgraph composed of three different hyperstub types, $h_1$, $h_2$ and $h_3$, that occur with multiplicities of 1, 2 and 3 respectively, i.e., the subgraph is composed of 6 nodes. For these hyperstub sequences to be graphical we require
\begin{eqnarray}
\sum_{i=1}^N |h_1|_i = \frac{1}{2}\sum_{i=1}^N |h_2|_i = \frac{1}{3}\sum_{i=1}^N |h_3|_i,
\end{eqnarray}
where $|h_i|_j$ specifies the $h_i$ hyperstub degree of node $j$.
If these conditions are not met, one needs to decompose any surplus hyperstubs into stubs that may form classical edges in order to preserve the degree sequence.

Using the hyperstub sequences, one can create three dynamic lists, or bins, one for each hyperstub type, in which a node appears with multiplicity equal to its hyperstub degree. Once the dynamic lists are fully populated, the connection process can start. This is done by selecting 1 node from the $h_1$ bin, 2 from the $h_2$ bin and 3 from the $h_3$ bin, with all selections made uniformly at random and without replacement. Before forming the connections between these 6 nodes, one must ensure that: (1) the selection contains no duplicates (which would form self-edges), and (2) no pair of the selected nodes is already connected; if a connection already exists, a multi-edge may form and/or subgraphs will share edges. If neither condition is violated then the connections are formed; otherwise all nodes are returned to their bins and a new selection is made. As previously discussed, it is possible to delete self- and multi-edges, however, this would destroy the degree sequence. The method of reselecting nodes has been previously introduced and is known as the \emph{matching algorithm} \cite{milo2003uniform}. It is possible that after many selections no valid combination of nodes remains; for example, all bins may contain only the same node. In this and other non-viable cases, all bins are re-populated and the connection process is started anew. It should be noted that, as none of the construction constraints discussed above involve the neighbours of the nodes being connected, it is possible for previously created subgraphs to become connected into a set of subgraphs with overlap, see Figure~\ref{fig:byproductexample} for an illustration. Evidence of such by-products will be shown in Section~\ref{sec:UDA_CMA}.
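The Matlab fragment below sketches a single selection step of this matching-style procedure for the three-hyperstub example above. It assumes bins \texttt{h1}, \texttt{h2}, \texttt{h3} holding node indices with multiplicity equal to hyperstub degree, the current adjacency matrix \texttt{A}, and \texttt{E}, a two-column list of the subgraph's edges over the local labels $1,\dots,6$ (label 1 the $h_1$ node, labels 2--3 the $h_2$ nodes, labels 4--6 the $h_3$ nodes); bin bookkeeping and the global restart are omitted, and all names are ours.

\begin{verbatim}
valid = false;
while ~valid
    pick = [h1(randi(numel(h1))), ...
            h2(randperm(numel(h2), 2)), ...
            h3(randperm(numel(h3), 3))];      % 6 candidate nodes
    noSelf  = numel(unique(pick)) == 6;       % condition (1): no repeats
    noMulti = ~any(A(sub2ind(size(A), ...
              pick(E(:,1)), pick(E(:,2)))));  % condition (2): no existing edge
    valid = noSelf && noMulti;
end
% ... form the subgraph's edges between the chosen nodes and
% remove the used entries from their bins.
\end{verbatim}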
\begin{figure}[!htbp]
\begin{center}
\begin{tikzpicture}[scale=.5]
  \node[place, minimum size = 8mm] (A) at ( 2.5, -4.33) {\scalebox{1}{$A$}};
  \node[place, minimum size = 8mm] (B) at ( 0,0) {\scalebox{1}{$B$}};
  \node[place, minimum size = 8mm] (D) at ( 5,0) {\scalebox{1}{$D$}};
  \node[place, minimum size = 8mm] (E) at ( 10,0) {\scalebox{1}{$E$}};
  \node[place, minimum size = 8mm] (C) at ( 2.5,4.33) {\scalebox{1}{$C$}};
  \node[place, minimum size = 8mm] (F) at ( 7.5, 4.33) {\scalebox{1}{$F$}};
  \node[place, minimum size = 8mm] (G) at ( 5, 8.65) {\scalebox{1}{$G$}};

  \path[fill=red!20,opacity=.5] (B.center) to (C.center) to (F.center) to (D.center) to (B.center);
  \path[fill=green!20,opacity=.5] (C.center) to (F.center) to (E.center) to (D.center) to (C.center);
  \path[fill=blue!20,opacity=.5] (C.center) to (G.center) to (F.center) to (D.center) to (C.center);

  \node[place, minimum size = 8mm] (A) at ( 2.5, -4.33) {\scalebox{1}{$A$}};
  \node[place, minimum size = 8mm] (B) at ( 0,0) {\scalebox{1}{$B$}};
  \node[place, minimum size = 8mm] (D) at ( 5,0) {\scalebox{1}{$D$}};
  \node[place, minimum size = 8mm] (E) at ( 10,0) {\scalebox{1}{$E$}};
  \node[place, minimum size = 8mm] (C) at ( 2.5,4.33) {\scalebox{1}{$C$}};
  \node[place, minimum size = 8mm] (F) at ( 7.5, 4.33) {\scalebox{1}{$F$}};
  \node[place, minimum size = 8mm] (G) at ( 5, 8.65) {\scalebox{1}{$G$}};

  \draw [line width=1pt, dashed] (C) -- (G);
  \draw [line width=1pt, dashed] (G) -- (F);
  \draw [line width=1pt, dashed] (F) -- (C);

  \draw [line width=1pt] (E) -- (F);
  \draw [line width=1pt] (E) -- (D);
  \draw [line width=1pt] (D) -- (F);

  \draw [line width=1pt] (C) -- (B);
  \draw [line width=1pt] (D) -- (C);
  \draw [line width=1pt] (B) -- (D);

  \draw [line width=1pt] (A) -- (B);
  \draw [line width=1pt] (A) -- (D);
  \draw [line width=1pt] (B) -- (D);
\end{tikzpicture}
\end{center}
\caption{Unintended generation of subgraphs with overlap. Despite satisfying the generation constraints given in Section~\ref{sec:CP}, the addition of triangle (C,G,F) to toast (A,B,C,D) and triangle (D,F,E) results in 3 unintended distinct toasts \{(B,C,F,D) in red, (D,C,F,E) in green, and (D,C,G,F) in blue\} overlapping on one unintended triangle (C,F,D), in gray.}
\label{fig:byproductexample}
\end{figure}

\subsection{The Big-V algorithm}\label{sec:bigv}
The Big-V algorithm does not generate networks as such; rather, it is a widely used degree-preserving rewiring algorithm (see~\cite{house2010impact,house2011insights,ritchie2014higher,green2010large} for example) that makes it possible to control clustering. At each iteration, the algorithm selects a linear chain of 5 nodes at random, e.g., $\{a,b,c,d,e\}$ with 4 edges $\{(a,b),(b,c),(c,d),(d,e)\}$. It then deletes edges $(a,b)$ and $(d,e)$ and forms $(a,e)$ and $(b,d)$. When starting from an unclustered network, this process leads to at least one extra $G_\triangle$ being created~\cite{bansal2009exploring}. This is repeated until the desired level of clustering is achieved. It is possible to include a Metropolis-style augmentation whereby at each step the local clustering coefficient is computed for the five nodes before and after rewiring, and the rewired configuration is only accepted if it results in an increase in average local clustering. It is worth noting that this algorithm leads to a positive degree-degree correlation which was not necessarily present in the original network.
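For reference, a single Big-V rewiring step may be sketched in Matlab as follows; the path search and the optional Metropolis acceptance step are omitted.

\begin{verbatim}
% One Big-V step on adjacency matrix A, given a path a-b-c-d-e
% (the centre node c is untouched). All degrees are preserved.
function A = bigv_step(A, a, b, d, e)
    A(a, b) = 0; A(b, a) = 0;   % delete the two outer path edges
    A(d, e) = 0; A(e, d) = 0;
    A(a, e) = 1; A(e, a) = 1;   % rewire into the `big V'
    A(b, d) = 1; A(d, b) = 1;   % this closes the triangle (b, c, d)
end
\end{verbatim}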
In this paper, we use the Big-V algorithm to demonstrate that our newly proposed algorithms are able to sample from a larger part of the state space of all possible networks with a given degree sequence and global clustering coefficient.

\subsection{Models of contagion}\label{sec:contagion}

In order to illustrate the impact of network structure -- and higher-order structure in particular -- different epidemic dynamics were simulated on the generated networks. Three different models were chosen: Susceptible-Infected-Susceptible ($SIS$), Susceptible-Infected-Recovered ($SIR$) and complex contagion \cite{miller2015complex, osullivan2015mathematical}. To simulate $SIS$ and $SIR$ dynamics, the fully susceptible network of nodes is perturbed by infecting a small number of nodes. Infected nodes spread the infection to susceptible neighbours at a per-link rate of infection $\tau$. Infected individuals recover independently of the network at rate $\gamma$ and become susceptible again (for $SIS$ dynamics) or become removed (for $SIR$ epidemics). In contrast to the infection process in the previous two dynamics, the complex contagion process requires that susceptible nodes are exposed to multiple infectious events before becoming infected. These events must come from different infectious neighbours, as only the first infection attempt from a given infectious node counts. The critical infection threshold of each node is set in advance and is usually bounded from above by the degree of the node. To simulate the complex contagion dynamics, nodes are allocated infection thresholds $r_i\in \mathbb{N}$, where $i=1,2,\dots, N$, and the fully susceptible population of nodes is perturbed by infecting an initial number of nodes chosen at random. In this model a susceptible node $i$ becomes infected as soon as it has received infectious contacts from at least $r_{i}$ distinct infectious neighbours. There is no recovery in this model and infected individuals remain infected for the duration of the epidemic.
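To make the update rule concrete, a deterministic, synchronous Matlab sketch of this threshold dynamic is given below. The simulations reported in Section~\ref{sec:results} use stochastic (Gillespie-type) event timing, so this fragment, with names of our choosing, is illustrative only.

\begin{verbatim}
% Synchronous complex contagion: node i becomes infected once it has
% at least r(i) distinct infectious neighbours; there is no recovery.
function infected = complex_contagion(A, r, seeds)
    N = size(A, 1);
    infected = false(N, 1); infected(seeds) = true;
    changed = true;
    while changed
        exposure = A * double(infected);         % infectious neighbours
        newInf = ~infected & (exposure >= r(:)); % thresholds reached
        changed = any(newInf);
        infected = infected | newInf;
    end
end
\end{verbatim}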
\n\n\\begin{table}[!htbp]\n\\centering\n\\begin{tabular}{ccccccccc}\\rowcolor{LightBlue} \n & c4 & d4 & e4 & i4 & s4 & t3 & u3 & u4 \\\\ \\hline\nRandom & 0 & 0 & 42 & 17 & 446 & 6 & 482 & 1706 \\\\\\rowcolor{LightBlue} \nBig-V & 1 & 23 & 10 & 10 & 212 & 7 & 386 & 1220 \\\\\nUDA & 7 & 10 & 22 & 5 & 243 & 1 & 389 & 1239 \\\\\\rowcolor{LightBlue} \nCMA & 0 & 9 & 10 & 40 & 185 & 24 & 389 & 1201\n\\end{tabular}\n\\caption{Subgraph counts for the networks of Figure~\\ref{fig:example_nets}. Note: if one adds a single $G_\\triangle$ so that it shares a single edge with a $G_\\boxslash$ and this edge is not the diagonal edge of $G_\\boxslash$, then $d4$ increases by one but $t3$ will have only increased by one, not two. We note that $2\\cdot d3$ yields the maximum number of possible $G_\\triangle$ induced by $G_\\boxslash$. In general, calculating the number of $G_\\triangle$ in this way will always yield the maximum possible count but not necessarily the true count because a single $G_\\triangle$ could be shared by more than one $G_\\boxslash$.}\n\\label{tab:example_counts}\n\\end{table}\n\n\\begin{figure}[!htbp]\n\\centering\n\\begin{subfigure}{0.45\\textwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{small_random}\n \\caption{Random}\n \\label{fig:small_random}\n\\end{subfigure}%\n\\begin{subfigure}{0.45\\textwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{small_bigv}\n \\caption{Big-V, C=0.22}\n \\label{fig:small_big_v}\n\\end{subfigure}\n\\vskip\\baselineskip\n\\begin{subfigure}{0.45\\textwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{small_c4}\n \\caption{UDA, C=0.22}\n \\label{fig:small_uda}\n\\end{subfigure}%\n\\begin{subfigure}{0.45\\textwidth}\n \\centering\n \\includegraphics[width=0.9\\linewidth]{small_card}\n \\caption{CMA, C=0.22}\n \\label{fig:small_card}\n\\end{subfigure}\n\\caption{Small networks generated by the Big-V, UDA and CMA algorithms. All networks have the same homogeneous degree sequence with $k=5$. The Big-V algorithm re-wired the random network, Figure~\\ref{fig:small_random}. The UDA was parametrised with subgraphs $G_0$ $G_\\boxslash$ and $G_\\boxtimes$. The CMA was parametrised so that every node was incident to 2 $G_\\triangle$. The Big-V, UDA, and CMA networks all have a global clustering coefficient of $C=0.22$. The network nodes are coloured so that green\/orange\/pink denotes nodes of low\/medium\/high clustering, respectively.}\n\\label{fig:example_nets}\n\\end{figure}\n\nTo properly demonstrate the proposed algorithms' control over the building blocks in the network, we used a recently described subgraph counting algorithm~\\cite{ritchie2014higher} to count the number of subgraphs \\emph{a posteriori}. In our implementation we counted subgraphs composed of 4 nodes or less -- see the top two rows of Figure~\\ref{fig:examples}, as well as 5- and 6-cycles. Table~\\ref{tab:example_counts} provides the subgraph counts for the networks displayed in Figure~\\ref{fig:example_nets}. It confirms that the random network given in Figure~\\ref{fig:small_random} contains 6 $G_\\triangle$, counted uniquely, as observed above. The table also reveals that, through increasing the frequency of $G_\\triangle$, the Big-V algorithm also introduced $G_\\boxslash$ and $G_\\boxtimes$ subgraphs. The UDA was parametrised with $\\{G_0, G_\\boxslash, G_\\boxtimes\\}$ and the table confirms a significant presence of these subgraphs when compared to the random network. 
Although the CMA was parametrised solely with $G_\\triangle$ subgraphs distributed so that each node was incident to 2 $G_\\triangle$ subgraphs, the subgraph counts reveal that this network contains 9 $G_\\boxslash$ subgraphs. This is a consequence of attempting to generate \\emph{small} networks with such a high prevalence of triangles: it is highly likely that the algorithms will select nodes that already share one other common neighbour later in the connection process. One expects the proportion of these events to become increasingly negligible with greater network size. \n\nNext, we used the above motif counting algorithm to evaluate the extent to which the proposed algorithms can exert control over the prevalence of subgraphs in the generated networks. Figure~\\ref{fig:homo_k5_counting} compares \\emph{measured} counts of subgraphs in UDA and CMA networks with \\emph{expected} counts. Here, an important observation must be made at the outset. Even in random networks, cycles ($G_\\boxempty$, $G_{\\pentagon}$ and $G_{\\hexagon}$) appear in significant quantities: 33, 100 and 333 times respectively, and regardless of network size. They are a natural consequence of the fact that the probability of selecting two nodes in different branches of a finite tree-like network is non-zero. Therefore, our \\emph{expected} counts are the sum of the counts expected \\emph{by construction} and those \\emph{measured} in the random networks. For example, since the CMA networks were generated with each node being incident to a single $G_{\\hexagon}$ subgraph, a total of 833 uniquely counted $G_{\\hexagon}$ subgraphs were expected \\emph{by construction} in networks of size $N=5000$. However, because an average of 344 $G_{\\hexagon}$ subgraphs were counted in random networks of size $N=5000$, our \\emph{expected} count was $833+344=1177$. The \\emph{measured} count was found to be 1165. More generally, we found the \\emph{expected} counts to match well with the \\emph{measured} counts, indicating that the generating algorithms did not create by-products in addition to those observed at random\\footnote{Although we will show in Section~\\ref{sec:UDA_CMA} that for specific parameterisations of CMA, by-products are possible.}. However, these results also suggest that the level of control exerted by the algorithms over subgraph prevalence depends on how often those subgraphs appear naturally as by-products. Control is strongest for subgraphs that do not appear naturally as by-products. When considering subgraphs that appear naturally with high frequency, e.g., $G_{\\pentagon}$, real control over their prevalence can only be achieved if an even higher frequency is imposed, which may not always be possible for a given degree sequence and global clustering. \n\n\\begin{figure}[!htbp]\n\\centering\n\\begin{subfigure}{0.49\\textwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{rand_and_uda}\n \\caption{UDA}\n \\label{fig:rand_and_uda}\n\\end{subfigure}%\n\\begin{subfigure}{0.49\\textwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{rand_and_card}\n \\caption{CMA}\n \\label{fig:rand_and_card}\n\\end{subfigure}\n\\caption{A comparison of subgraphs found in the UDA and CMA networks to their random network analogues and expected counts plotted with thick lines, thin lines and discrete markers respectively. $p5$ and $h6$ denote the counts of $G_{\\pentagon}$ and $G_{\\hexagon}$ respectively. 
All networks have the same homogeneous degree sequence with $k=5$ but with increasing size: $N=250,500,1000,2500,5000$, where 100 networks of each size were generated. (a) The UDA algorithm was parametrised with subgraphs $\{G_\triangle, G_\boxempty, G_{\pentagon}, G_{\hexagon}\}$, and the resulting average subgraph counts are shown on the left. (b) The CMA algorithm was parametrised so that each node was incident to a single $G_{\pentagon}$ and a single $G_{\hexagon}$ subgraph, and the resulting average subgraph counts are shown on the right. The expected values were calculated by summing the total counts from the subgraph sequences, dividing them by the subgraphs' node cardinality, and adding these figures to the number of subgraphs found as by-products in the random networks.}
\label{fig:homo_k5_counting}
\end{figure}

In what follows, we set out to highlight differences between the new algorithms and classic ones, and to emphasise the diversity within networks generated by the same algorithms.

\subsection{Sampling from a different area of the network state space}\label{sec:UDA_Big_V}

In this section, we seek to highlight the versatility of the proposed generation mechanisms by showing that, given a degree distribution and a global clustering, they sample different areas of the network state space than existing methods such as Big-V. We begin by reminding the reader that the Big-V algorithm searches for paths of 5 nodes and rewires such paths so that additional triangles are created. In other words, the principal building block of this algorithm is the $G_\triangle$ subgraph, together with subgraphs that may be constructed by overlapping $G_\triangle$ subgraphs. It follows that this algorithm is unlikely to give rise to a higher number of $G_\boxempty$ or other `empty' cycles than expected at random. The UDA algorithm was therefore parametrised with subgraph family $\{G_0, G_\triangle, G_\boxempty, G_{\pentagon}, G_{\hexagon} \}$. In order to eliminate the effect of degree heterogeneity, a homogeneous degree sequence with $k=5$ was used. The resulting networks had a global clustering coefficient of $C=0.04$, induced by 666 (uniquely counted) $G_\triangle$ subgraphs. We then used the Big-V algorithm to rewire random networks constructed from the same degree sequence until the desired level of clustering, $C=0.04$, was achieved. Significant differences between the generated networks would confirm that the Big-V and UDA generated networks are sampled from different areas of the state space of networks satisfying that degree sequence and global clustering. As a further point of reference, data taken from a random network realisation of the degree sequence was included in all of our analyses.
Henceforth we shall refer to these three types of networks as network family \textbf{A}.


\begin{figure}[!htbp]
\centering
\begin{subfigure}{0.31\textwidth}
  \centering
  \includegraphics[width=1\linewidth]{homo_k5_cycles_avpl}
  \caption{Average path length}
  \label{fig:homo_k5_cycles_avpl}
\end{subfigure}%
\begin{subfigure}{0.31\textwidth}
  \centering
  \includegraphics[width=1\linewidth]{homo_k5_cycles_betw_avg}
  \caption{Average betweenness}
  \label{fig:homo_k5_cycles_avg_betw}
\end{subfigure}
\begin{subfigure}{0.31\textwidth}
  \centering
  \includegraphics[width=1\linewidth]{homo_k5_cycles_betw_max}
  \caption{Maximum betweenness}
  \label{fig:homo_k5_cycles_max_betw}
\end{subfigure}
\caption{Plots of the average path length and of the average and maximum betweenness centrality for homogeneous networks ($N=5000$ and $k=5$) of network family \textbf{A}. The Big-V algorithm was parametrised solely by clustering, in this case $C=0.04$, to match the networks produced by the UDA. The differences in average path length, average betweenness centrality and maximum betweenness centrality between the random network and its Big-V analogue were of similar magnitude to the differences between the Big-V and the cycle-based UDA networks, and these were significant.}
\label{fig:homo_k5_cycles_betw}
\end{figure}

\begin{figure}[!htbp]
\centering
\includegraphics[scale=1]{homo_k5_cycles_sg_count}
\caption{Distributions of the total number of subgraphs in network family \textbf{A} ($N=5000$, $k=5$). The Big-V and UDA networks have a global clustering coefficient of $C=0.04$. All given counts are unique. The $t3$ counts denote the number of $G_\triangle$ subgraphs that are not involved in any subgraphs of four nodes (i.e., $G_\boxslash$ and $G_\boxtimes$). However, the $c4$ and $d4$ counts may include $G_\triangle$ subgraphs shared by $G_\boxslash$ and $G_\boxtimes$. The number of $G_\boxempty$ subgraphs generated by the Big-V algorithm is very close to the counts found in random networks.}
\label{fig:homo_k5_counts}
\end{figure}

In Figure~\ref{fig:homo_k5_cycles_betw}, the distributions of the average path length, average betweenness centrality and maximum betweenness centrality for the above networks are given. In general, an increase in clustering results in a higher average path length -- see the average path length of random and Big-V networks in Figure~\ref{fig:homo_k5_cycles_avpl}; this is a known result~\cite{bansal2009exploring}. Surprisingly, a similar magnitude of difference in average path length and in average and maximum betweenness centrality is observed between the Big-V and UDA networks despite them having the same global clustering, see Figures~\ref{fig:homo_k5_cycles_avpl}, \ref{fig:homo_k5_cycles_avg_betw} and \ref{fig:homo_k5_cycles_max_betw}, respectively. Output from the subgraph counting algorithm (Figure~\ref{fig:homo_k5_counts}) confirms that, as expected, the Big-V algorithm does not generate more $G_\boxempty$ subgraphs than are observed in the random network. More generally, the results show that the Big-V and UDA networks exhibit markedly different subgraph topologies, with the Big-V networks relying heavily on $G_\boxslash$ to cluster the networks, unlike the UDA networks that rely almost exclusively on $G_\triangle$ subgraphs not appearing as part of any other subgraph.
It may be that such variation was facilitated by the low level of clustering considered, and that with higher clustering, eliciting such differences might be more challenging. However, these results provide evidence that the UDA algorithm can sample from a different part of the state space than the Big-V algorithm.\n\n\\subsection{Diversity within the newly proposed algorithms}\\label{sec:UDA_CMA}\n\nIn this section, we illustrate the diversity of networks generated with UDA and CMA by exploring the impact of subgraph distribution over nodes (for identical degree distribution and global clustering) and how it may change network characteristics. \n\nTo do this we first parametrised the UDA with subgraph family $\\{G_0, G_\\triangle, G_\\boxempty, G_\\boxslash, G_\\boxtimes\\}$ (chosen due to its frequent use in the literature, e.g., ~\\cite{ritchie2014beyond, ritchie2014higher, house2010generalised, house2009motif, bansal2009exploring, karrer2010random}), and a heterogeneous degree sequence generated using the Poisson distribution with $\\lambda = 5$. Since it is difficult to control for the number of subgraphs that appear in a network generated using the UDA we counted the total number of each subgraph, from UDA-produced subgraph sequences, and used these counts to create alternative subgraph sequences as input to the CMA, see Section~\\ref{sec:CMA}, rather than drawing such sequences from a theoretical distribution. The resulting networks were therefore expected to have identical degree sequence, global clustering of 0.13 and subgraph counts. Since the CMA allows us to choose arbitrary sequences of subgraphs, we opted to push the clustered subgraphs, $\\{ G_\\triangle, G_\\boxslash, G_\\boxtimes\\}$, onto the higher-degree nodes to accentuate the effect of clustering. We did this by specifying that these subgraphs had to appear with multiplicity greater than one. For example, a degree-three $G_\\boxtimes$ hyperstub required a minimum $k=9$-degree node. As previously, we included a random network realisation of the heterogeneous degree sequence for comparison. Henceforth, we shall refer to these three types of networks as network family \\textbf{B}. \n\n\n\\begin{figure}[!htbp]\n\\centering\n\\begin{subfigure}{0.45\\textwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{pois_k5_ass}\n \\caption{Assortativity}\n \\label{fig:pois_k5_ass}\n\\end{subfigure}%\n\\begin{subfigure}{0.45\\textwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{k5_pois_semilog}\n \\caption{Degree dependent clustering}\n \\label{fig:pois_k5_ddc}\n\\end{subfigure}\n\\caption{Plots of assortativity and degree-dependent clustering for network family \\textbf{B} with $k \\sim Pois(5)$. The UDA and CMA networks have a global clustering coefficient of $C=0.13$. The distribution of subgraphs in CMA networks was manipulated so that the clustered subgraphs $\\{G_\\triangle, G_\\boxslash, G_\\boxtimes\\}$ appeared around nodes with multiplicity greater than one. In order to preserve the subgraph degree sequence these aggregated subgraphs were allocated to the higher degree nodes, resulting in higher assortativity and a more positively skewed distribution of degree-dependent clustering. 
The dash-dotted line corresponds to $c(k)=k^{-1}$.}\n\\label{fig:pois_k5_hetm}\n\\end{figure}\n\n\\begin{figure}[!htbp]\n\\centering\n\\begin{subfigure}{0.45\\textwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{pois_k5_avpl}\n \\caption{Average path length}\n \\label{fig:pois_k5_avpl}\n\\end{subfigure}%\n\\begin{subfigure}{0.45\\textwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{pois_k5_diam}\n \\caption{Diameter}\n \\label{fig:pois_k5_diam}\n\\end{subfigure}\n\\caption{Plots of average path length and diameter for network family \\textbf{B} with $k \\sim Pois(5)$. The UDA and CMA networks have a global clustering coefficient of $C=0.13$. The increased average path length and diameter between the UDA and random networks is attributable to the higher clustering. The similar increase between UDA and CMA networks is a reflection of the higher assortativity of the CMA networks.}\n\\label{fig:pois_k5_pl}\n\\end{figure}\n\nThe heterogeneity in degree distribution allows us to use additional degree-dependent metrics: degree-degree correlations and degree-dependent clustering~\\cite{newman2002assortative, serrano2005tuning}. These have been plotted in Figure~\\ref{fig:pois_k5_hetm}. The plot for the degree-degree correlation coefficient shows that by aggregating clustered subgraphs around high-degree nodes, the CMA-constructed networks yield a higher assortativity than that of UDA and random networks, see Figure~\\ref{fig:pois_k5_ass}. This is an important property of the methodology since the clustering potential of a network is bounded by the degree-degree correlation coefficient~\\cite{serrano2005tuning}. Moreover, if one wishes to maximise clustering in heterogeneous networks, it is necessary for nodes of similar degree to mix preferentially. Figure~\\ref{fig:pois_k5_ddc} shows that the CMA networks yield a negatively skewed distribution of degree-dependent clustering, with nodes of degree $k\\geq 9$ contributing most to clustering. The ability to manipulate the degree and clustering relationship as well as assortativity clearly demonstrates the broader scope of the CMA when sampling from the ensemble of networks with same degree distribution and global clustering. \n\n\\begin{figure}[!htbp]\n\\centering\n\\begin{subfigure}{0.31\\textwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{pois_k5_min_betw}\n \\caption{Minimum}\n \\label{fig:pois_k5_min_betw}\n\\end{subfigure}%\n\\begin{subfigure}{0.31\\textwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{pois_k5_avg_betw}\n \\caption{Average}\n \\label{fig:pois_k5_avg_betw}\n\\end{subfigure}\n\\begin{subfigure}{0.31\\textwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{pois_k5_max_betw}\n \\caption{Maximum}\n \\label{fig:pois_k5_max_betw}\n\\end{subfigure}\n\\caption{Plots of betweenness centrality for network family \\textbf{B} with $k \\sim Pois(5)$. The UDA and CMA networks have a global clustering coefficient of $C=0.13$. A trend of increasing average and maximum betweenness centrality is observed between random, UDA and CMA networks, respectively.}\n\\label{fig:pois_k5_betw}\n\\end{figure}\n\nAs with network family \\textbf{A}, an increase in average path length, diameter, average and maximum betweenness centrality of UDA and CMA networks over random networks will be attributable to the increased global clustering coefficient, $C=0.13$, see Figure~\\ref{fig:pois_k5_pl} and ~\\ref{fig:pois_k5_betw}. 
However, since UDA and CMA networks share the same degree sequence and global clustering coefficient, differences in these metrics between UDA and CMA can only be due to the increased degree-degree correlation and the negatively skewed distribution of degree-dependent clustering. It has previously been noted that increased assortativity corresponds to an increase in average path length~\\cite{xulvi2004reshuffling}, and this will be compounded by the higher-degree nodes (which inevitably serve as central hubs) being more clustered. Similarly, an increase in diameter (a function of path length) will be due to these highly clustered high-degree nodes. Finally, Figures~\\ref{fig:pois_k5_avg_betw} and~\\ref{fig:pois_k5_max_betw} show a significant increase in average and maximum betweenness centrality between UDA and CMA networks. This is yet another manifestation of the presence of these highly-clustered high-degree nodes.\n\nTable~\\ref{tab:countsB} presents a comparison between \\emph{measured} and \\emph{expected} average subgraph counts for the networks in family \\textbf{B}.\nWhereas there is good agreement for UDA networks, it is observed that CMA networks have produced by-products other than what was expected at random, e.g., an additional 50\\% of $G_\\boxslash$ subgraphs have appeared as by-products. The effects of finite size have been exacerbated by aggregating clustered subgraphs around higher degree nodes, effectively excluding lower to medium degree nodes during this part of the connection process. Within this densely connected component it is easy to envisage a situation where adding only a single edge may create additional (unwanted) subgraphs. This highlights the fact that whilst the total number of $G_\\triangle$ is preserved (as evidenced by identical global clustering), the way these subgraphs contribute to higher-order structure can vary significantly.\n\n\\begin{table}[!htbp]\n\\centering\n\\begin{tabular}{ccccc}\\rowcolor{LightBlue} \n & c4 & d4 & e4 & t3 \\\\ \\hline \nRandom & 0 & 0 & 79 & 21 \\\\ \\rowcolor{LightBlue}\nUDA & 243 & 504 & 587 & 718 \\\\\nCMA & 232 & 743 & 772 & 691\\\\ \\rowcolor{LightBlue}\nExpected & 243 & 504 & 619 & 741\n\\end{tabular}\n\\caption{Subgraph counts for network \\textbf{B} ($N=5000$, $k \\sim Pois(5)$ and $C=0.13$). The counts are of distinct subgraph instances. The expected counts are computed by summing the total counts from the subgraph sequences, dividing them by the subgraphs' node cardinality, and adding these figures to the number of subgraphs found as by-products in the random network. The counts for $t3$ are for $G_\\triangle$ subgraphs that do not appear in any other subgraphs.}\n\\label{tab:countsB}\n\\end{table}\n\nThis section has highlighted that control over the choice of subgraph families and their distributions makes it possible to flexibly explore the solution space of networks with the same degree distribution and global clustering. This in turn provides us with the means to investigate specific areas of this solution space as well as further our understanding of how network metrics deal with such diversity.\n\n\\subsection{Does higher-order structure matter?}\\label{sec:SIS}\n\nIn order to answer this question, we make use of the network families \\textbf{A} and \\textbf{B} detailed above and test the impact of higher-order structure by considering the outcome and evolution of widely used dynamics on networks, namely, $SIS$, $SIR$ and the complex contagion model. 
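\n\nFor completeness, we briefly recall the three dynamics (standard definitions, with notation matching the figure captions below). In the $SIS$ and $SIR$ models, infection passes across each link between an infectious and a susceptible node at rate $\\tau$, and infectious nodes recover independently at rate $\\gamma$, becoming susceptible again ($SIS$) or permanently removed ($SIR$). In the complex contagion model there is no recovery, and a susceptible node $u$ only becomes infected once sufficiently many of its neighbours are infectious, namely when\n\\begin{equation*}\n|\\{v : v \\sim u, \\ v \\ \\text{infectious}\\}| \\geq r,\n\\end{equation*}\nwhere $v \\sim u$ denotes neighbours of $u$ and $r$ is the threshold of infection.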
\n\nFor each network type in families \\textbf{A} and \\textbf{B} a series of networks was generated. For each network we performed a single Gillespie realisation of the $SIS$, $SIR$ and complex contagion epidemics. The mean time evolution of infectious prevalence was then calculated, plotted and compared between network types. Complex contagion dynamics were simulated in a similar way, but without recovery and with the caveat that a single infectious contact is usually not sufficient to infect a node. Different thresholds of infection and infectious seeds were used, and these are specified in the figure captions. Matlab code for the $SIS$ and $SIR$ Gillespie algorithms is available from \\url{https:\/\/github.com\/martinritchie\/Dynamics}.\n\n\n\\begin{figure}[!htbp]\n\\centering\n\\begin{subfigure}{0.45\\textwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{homo_k5_cycles_SIS}\n \\caption{SIS}\n \\label{fig:k5_homo_cycles_SIS}\n\\end{subfigure}%\n\\begin{subfigure}{0.45\\textwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{homo_k5_cycles_SIR}\n \\caption{SIR}\n \\label{fig:k5_homo_cycles_SIR}\n\\end{subfigure}\n\\vskip\\baselineskip\n\\begin{subfigure}{0.9\\textwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{homo_k5_cycles_cc}\n \\caption{Complex contagion}\n \\label{fig:homo_k5_cycles_cc}\n\\end{subfigure}\n\\caption{Epidemic dynamics for network family \\textbf{A} with $k=5$. The Big-V and UDA networks have a global clustering coefficient of $C=0.04$. In (a) and (b) the orange line, blue circles and green vertical markers correspond to the random, Big-V and UDA networks respectively. In (c) the same colour scheme is used but with bars. The $SIS$ and $SIR$ epidemics represent the average of single Gillespie simulations on each of the 1000 network realisations from each network generation algorithm. The $SIS$ and $SIR$ epidemics were seeded with an initial infectious seed of $I_0=10$ and had a per link rate of infection of $\\tau=1$ and recovered independently at rate $\\gamma=1$. The complex contagion epidemics had an initial infectious seed of $I_0=250$ and a fixed threshold of infection of $r=2$.}\n\\label{fig:k5_homo_cycles_epi}\n\\end{figure}\n\n\\begin{figure}[!htbp]\n\\centering\n\\begin{subfigure}{0.45\\textwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{pois_k5_SIS}\n \\caption{SIS}\n \\label{fig:pois_k5_sis}\n\\end{subfigure}%\n\\begin{subfigure}{0.45\\textwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{pois_k5_SIR}\n \\caption{SIR}\n \\label{fig:pois_k5_sir}\n\\end{subfigure}\n\\vskip\\baselineskip\n\\begin{subfigure}{0.9\\textwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{pois_k5_classical_cc}\n \\caption{Complex contagion}\n \\label{fig:pois_k5_classical_cc}\n\\end{subfigure}\n\\caption{Epidemic dynamics for network family \\textbf{B} with $k \\sim Pois(5)$. The UDA and CMA networks have a global clustering coefficient of $C=0.13$. In (a) and (b) the orange line, blue circles and green vertical markers correspond to the random, UDA and CMA networks respectively. In (c) the same colour scheme is used for the bars. The $SIS$ and $SIR$ epidemics represent the average of single Gillespie simulations on each of the 1000 network realisations from each network generation algorithm. The $SIS$ and $SIR$ epidemics were seeded with an initial infectious seed of $I_0=10$ and had a per link rate of infection of $\\tau=1$ and recovered independently at rate $\\gamma=1$. 
The complex contagion epidemics had an initial infectious seed of $I_0=1000$ and a fixed threshold of infection of $r=3$. In the $SIS$ and $SIR$ dynamics, the effect of clustering on the high degree nodes, inhibiting disease propagation, dominates the epidemiologically encouraging effect of increased assortativity.}\n\\label{fig:pois_k5_epi}\n\\end{figure}\n\nWe know by construction that members of network family \\textbf{A} were generated using different subgraphs, and Section~\\ref{sec:UDA_CMA} has shown that observable differences were found between networks in terms of average path length, betweenness centrality and subgraph composition. Despite this, Figures~\\ref{fig:k5_homo_cycles_SIS} and~\\ref{fig:k5_homo_cycles_SIR}, which show the time evolution for $SIS$ and $SIR$ dynamics respectively, illustrate that these dynamics can display a certain degree of insensitivity to these differences in structure. In this case, it is the $SIR$ dynamics that show the greatest difference, in peak infectious prevalence (Figure~\\ref{fig:k5_homo_cycles_SIR}), albeit a marginal one. In contrast, complex contagion dynamics do show sensitivity to structural differences found between Big-V and UDA networks. Figure~\\ref{fig:homo_k5_cycles_cc} reveals that for Big-V networks the epidemic fully percolates in almost 100\\% of the simulations, compared to only 80\\% of the cases for UDA networks. This indicates that whilst Big-V networks operate in the supercritical regime, UDA networks are closer to the transition point. Locating this transition is possible but is beyond the scope of this paper. \n\nWhen network family \\textbf{A} is used, the networks' degree distribution and clustering appear to be the main determinants of the time evolution and outcome of the $SIS$ and $SIR$ epidemics. In contrast, when network family \\textbf{B} is used, Figure~\\ref{fig:pois_k5_epi} shows that all dynamics considered are impacted by differences in network topology. In Figures~\\ref{fig:pois_k5_sis} and~\\ref{fig:pois_k5_sir}, a trend of increasingly inhibited spread of infection is observed from the random to the UDA to the CMA networks. It has already been shown that clustering slows the spread of infection~\\cite{keeling1999effects,green2010large}, and we see that this effect dominates over higher assortativity, which usually leads to faster initial spread of the epidemic \\cite{kiss2008effect}. Similarly, Figure~\\ref{fig:pois_k5_classical_cc}, which shows the distribution of the final epidemic size for the complex contagion dynamics, reveals that: (a) the higher clustering observed in the UDA networks fails to have a significant impact when compared to the random network equivalent, and (b) the CMA networks significantly slow the pace of the epidemic as well as reduce its final size compared to both random and UDA networks. Hence, for the UDA and CMA networks, where both degree distribution and global clustering are identical, the observed differences are explained by the combined effect of varying distributions of subgraphs around nodes and varying prevalence of subgraphs (both of which are related to one another to some extent), as shown by Table~\\ref{tab:countsB}.\n\nTaken together, our simulation data shows that even though the proposed algorithms construct networks with identical degree sequence and global clustering, these networks can give rise to measurable differences in resulting epidemics, be it in time evolution or final outcome. 
With the exception of the $SIS$ and $SIR$ epidemics on network family \\textbf{A} (where some small differences were still present), we found significant differences in all other instances. A more systematic investigation of further network models and a wider parameter range for the dynamics is needed, but is left to future work.\n\n\\section{Discussion}\\label{sec:discussion}\n\nIn this paper, we have described two novel network generating algorithms that strictly preserve a given degree sequence whilst permitting control over the building blocks of the network and enabling tuning of global clustering. We have compared these algorithms to one another as well as to the widely used Big-V rewiring algorithm. Using our algorithms, we have empirically demonstrated that it is possible to create networks that are identical with respect to degree sequence and global clustering, yet elicit significant differences in network metrics and in the outcome of dynamical processes unfolding on them. We have presented evidence to suggest that the methods sample from different areas of the network state space and that these sampling variations do matter. \n\nOf the two algorithms proposed, UDA is the simplest to use. It requires less input and is conceptually elegant. We believe that this algorithm, when parametrised with complete subgraphs, would be more likely to yield analytical results due to its combinatorial nature. Note that whilst varying levels of clustering can be achieved and estimated before network construction, it is not possible to target a specific level of clustering, due to the emergent nature of the distribution of subgraphs around nodes. A second potential limitation of this algorithm is its dependence on solving the underdetermined Diophantine equations that reside in high dimensional spaces. Computationally, it may become difficult to include large families of subgraphs. However, we did not encounter such a problem in our experiments.\n\nThe CMA algorithm is more complex but also more versatile. Being able to specify distributions of subgraphs alongside a given degree sequence, and preserve both, is highly novel. Fixing the degree sequence allows for some interesting ways to construct the subgraph sequences. Knowing the number of nodes in each degree class $k$ allows us to combine such nodes to form a complete subgraph with $k$ nodes, as this requires $(k-1)$ links from each node. The remaining single edges can then be used to connect to the rest of the network. We used this for the heterogeneous degree sequence presented in the results section, yielding a network with a global clustering coefficient of $C=0.67$ and a giant component of $N\\approx 4800$ out of $N=5000$ nodes. We were unable to achieve such high clustering with either the Big-V or the UDA algorithm. It must be noted that in our application of this algorithm, we had the luxury of using hyperstub sequences we knew to be graphical -- as they were output from the UDA -- to guide how we parametrised the CMA. In general, the stub sequences induced by the hyperstub sequences would have to be constrained to ensure that they are graphical. This is possible, but must be taken into account when considering the application of this algorithm. \n\nWe have shown that despite identical degree distribution and global clustering, significant diversity in networks can still be elicited. This has occurred in two ways: (1) by construction, by redistributing the same number of subgraphs, and (2) unexpectedly, through the emergence of by-products. 
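To give a concrete instance of (2): in family \\textbf{B}, the CMA networks contained 743 $G_\\boxslash$ subgraphs against an expected 504 (Table~\\ref{tab:countsB}), i.e., roughly 240 unintended $G_\\boxslash$, presumably created when separately placed triangles ended up sharing edges among the densely connected high-degree nodes. 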
We conjecture that any controlled -- or believed to be controlled -- network generation algorithm will yield by-products, unless heuristic constraints are introduced to reduce the likelihood of subgraphs sharing lower-order subgraph components, for example. As witnessed in our results, even configuration model networks lead to a large number of loops with 4, 5, and 6 nodes (longer cycles were not measured). This problem can only be exacerbated when control of more sophisticated structures is implemented. As such, care has to be taken when parametrising algorithms. For example, one would need to specify a relatively large number of $G_{\\hexagon}$ subgraphs in a network's construction to impact the subgraph count beyond what one would observe by chance in a random network. More surprisingly, as we witnessed with $G_\\boxslash$ subgraphs in the CMA networks from network family \\textbf{B}, significant numbers of subgraph by-products can appear in addition to what was observed in the random networks, depending on how one wishes to place the subgraphs around nodes. \n\nWe have seen that by using a very modest selection of subgraphs, we have been able to substantially influence dynamics running on the network, particularly complex contagion dynamics. All results relating to this model indicate that constraining a network by degree sequence and clustering is not sufficient to accurately predict the course of the epidemic. More importantly, the results appear to suggest that the location of the critical regime depends on the higher-order structure of the network (above and beyond clustering). \n\nBeing able to generate networks with different structural properties or higher-order structure is a key feature of any network construction algorithm. However, if such structural details do not impact on dynamics unfolding on the network, then models for such dynamics can rely with high confidence on a limited set of network descriptors. Although degree sequence, degree-degree correlations and global clustering coefficient were observed to be the main drivers of disease transmission in models such as $SIS$ and $SIR$, we found this not to be true in general. This is an important finding because one should remember that the dynamics simulated here are modest in complexity, when compared to models of neuronal dynamics, for example, and yet we were able to elicit significant differences by simply tuning the network structure above and beyond triangles. This implies that determining the role and impact of higher-order structure may yet hold many important and surprising answers. \\\\\n\n\n\\noindent{\\textbf{Acknowledgements:}} Martin Ritchie gratefully acknowledges EPSRC (Engineering and Physical Sciences Research Council) and the University of Sussex for funding for his PhD. We would also like to thank Dr J.C. 
Miller for useful discussions on the complex contagion model~\\cite{miller2015complex}, and for sharing his code for simulating complex contagion on networks~\\cite{miller2015private}.\n\n\n\n\\clearpage\n\n\\section{Appendix}\\label{sec:app}\n\n\\subsection{Pseudocode for UDA}\\label{UDA}\n\\begin{algorithm}[!htbp]\n\\SetKwInOut{Input}{input}\\SetKwInOut{Output}{output}\n\\leavevmode\\rlap{\\hbox to \\hsize{\\color{yellow!30}\\leaders\\hrule height .8\\baselineskip depth .5ex\\hfill}}\\Input{$D=(d_1,d_2,\\dots,d_N)$, $G=\\{G_1,G_2,\\dots,G_l\\}$}\n\\leavevmode\\rlap{\\hbox to \\hsize{\\color{yellow!30}\\leaders\\hrule height .8\\baselineskip depth .5ex\\hfill}}\\Output{$H \\in \\mathbb{N}_0^{l \\times N}$.}\n\\leavevmode\\rlap{\\hbox to \\hsize{\\color{blue!20}\\leaders\\hrule height .8\\baselineskip depth .5ex\\hfill}}\\KwSty{Variables} \\\\\n$D$: degree sequence, $N$: number of nodes, \\\\\n$G$: set of subgraphs, $l$: number of subgraphs, \\\\\n$g_i$: subgraph adjacency matrix, $X_k$: solution space for degree $k$, \\\\\n$H$: hyperstub degree sequence \\\\\n\\leavevmode\\rlap{\\hbox to \\hsize{\\color{orange!50}\\leaders\\hrule height .8\\baselineskip depth .5ex\\hfill}}\\KwSty{Procedure} \\\\\n\\For{Each subgraph, $G_i$}{\n\\leavevmode\\rlap{\\hbox to \\hsize{\\color{green!20}\\leaders\\hrule height .8\\baselineskip depth .5ex\\hfill}}{\\% Identify the degree sequences of the subgraphs.} \\\\\n$s_i = \\sum g_i$ \\\\\n\\leavevmode\\rlap{\\hbox to \\hsize{\\color{green!20}\\leaders\\hrule height .8\\baselineskip depth .5ex\\hfill}}{\\% Take the unique elements.} \\\\\n$s_i = unique(s_i)$} \n\\leavevmode\\rlap{\\hbox to \\hsize{\\color{green!20}\\leaders\\hrule height .8\\baselineskip depth .5ex\\hfill}}{\\% Concatenate into a single vector.} \\\\\n $S = (s_1, s_2, \\dots, s_l)$ \\\\ \n \\For{$k=1,2,\\dots, k_{max}$}{\n \\leavevmode\\rlap{\\hbox to \\hsize{\\color{green!20}\\leaders\\hrule height .8\\baselineskip depth .5ex\\hfill}}{\\% $X_k(i,:)$ denotes a hyperstub arrangement for a degree $k$ node.} \\\\\n $X_k = diorecur(S,k)$ \\\\\n }\n \\For{n = 1,2,\\dots, N}{\n \\leavevmode\\rlap{\\hbox to \\hsize{\\color{green!20}\\leaders\\hrule height .8\\baselineskip depth .5ex\\hfill}}{\\% Take random element from the solution space.} \\\\\n $r = rand$; $h_n = X_{D(n)}(r,\\cdot)$}\n \\leavevmode\\rlap{\\hbox to \\hsize{\\color{green!20}\\leaders\\hrule height .8\\baselineskip depth .5ex\\hfill}}{\\% Concatenate into a single matrix.} \\\\\n $H = (h_1, h_2, \\dots, h_N)$ \\\\ \n \\nl \\KwRet\n\\caption{Pseudocode for the underdetermined network generation algorithm (UDA). This pseudocode focuses on the salient points of the UDA, namely, how the algorithm draws solutions from the solution space of an underdetermined Diophantine equation to determine the arrangement of hyperstubs around a particular node. Other steps, such as ensuring the handshake lemma is satisfied for both lines and subgraphs, are detailed in Section~\\ref{sec:UDA} and can be viewed in the source code. 
The output hyperstub degree sequence $H$ must be used as input for a modified configuration model connection process to realise a network, see Section~\\ref{sec:CP}.}\\label{algo:UDA}\n\\end{algorithm}\n\n\\clearpage\n\n\\subsection{Pseudocode for CMA}\\label{CMA}\n\\IncMargin{1em}\n\\begin{algorithm}[!htbp]\n\\SetKwInOut{Input}{input}\\SetKwInOut{Output}{output}\n\\SetKwFunction{FRecurs}{FnRecursive}\n\\leavevmode\\rlap{\\hbox to \\hsize{\\color{yellow!30}\\leaders\\hrule height .8\\baselineskip depth .5ex\\hfill}}\\Input{$D=(d_1,d_2,\\dots,d_N)$, $G=\\{G_1,G_2,\\dots,G_l\\}$, $S=\\{S_1,S_2,\\dots,S_l\\}$.}\n\\leavevmode\\rlap{\\hbox to \\hsize{\\color{yellow!30}\\leaders\\hrule height .8\\baselineskip depth .5ex\\hfill}}\\Output{$H \\in \\mathbb{N}_0^{|s| \\times N}$.}\n\\leavevmode\\rlap{\\hbox to \\hsize{\\color{blue!20}\\leaders\\hrule height .8\\baselineskip depth .5ex\\hfill}}\\KwSty{Variables} \\\\\n$D$: degree sequence, $N$: number of nodes, \\\\\n$G$: set of subgraphs, $l$: number of subgraphs, \\\\\n$S$: subgraph sequence, $g_i$: subgraph adjacency matrix, \\\\\n$|s|$: number of unique corners in a subgraph, $H$: hyperstub degree sequence \\\\\n\\leavevmode\\rlap{\\hbox to \\hsize{\\color{orange!50}\\leaders\\hrule height .8\\baselineskip depth .5ex\\hfill}}\\KwSty{Procedure} \\\\\n\\For{Each subgraph, $G_i$}{\n\\leavevmode\\rlap{\\hbox to \\hsize{\\color{green!20}\\leaders\\hrule height .8\\baselineskip depth .5ex\\hfill}}{\\% Identify the degree sequence, $s$, of the subgraph.} \\\\\n$s_i = \\sum g_i$, $s_i = unique(s_i)$, $m = length(s_i)$ \\\\\n\\leavevmode\\rlap{\\hbox to \\hsize{\\color{green!20}\\leaders\\hrule height .8\\baselineskip depth .5ex\\hfill}}{\\% $p$ reflects the proportions of hyperstubs} \\\\\n$p_i = (p_1,p_2,\\dots,p_m)$ \\\\\n\t\\For{$j = 1,2,\\dots,N$}{\n \\leavevmode\\rlap{\\hbox to \\hsize{\\color{green!20}\\leaders\\hrule height .8\\baselineskip depth .5ex\\hfill}}{\\% The subgraph sequence is decomposed into a hyperstub} \\\\\n \\leavevmode\\rlap{\\hbox to \\hsize{\\color{green!20}\\leaders\\hrule height .8\\baselineskip depth .5ex\\hfill}}{\\% sequence using the multinomial distribution, $M$, } \\\\\n \\leavevmode\\rlap{\\hbox to \\hsize{\\color{green!20}\\leaders\\hrule height .8\\baselineskip depth .5ex\\hfill}}{\\% so that $H_i \\in \\mathbb{N}_0^{m \\times N}$ } \\\\\n $H_i(j) = M(S_i(j),p_i)$, \\\\}\n \\leavevmode\\rlap{\\hbox to \\hsize{\\color{green!20}\\leaders\\hrule height .8\\baselineskip depth .5ex\\hfill}}{\\% $H'_i$ is a sequence of the true stub count } \\\\\n $H'_i=H_i \\cdot s_i$\\\\\n \\leavevmode\\rlap{\\hbox to \\hsize{\\color{green!20}\\leaders\\hrule height .8\\baselineskip depth .5ex\\hfill}}{\\% Sum so that $H'_i \\in \\mathbb{N}_0^{1 \\times N}$ } \\\\\n $H'_i(j) = \\sum_{\\alpha=1}^{m}H'_i(\\alpha,j)$ \\\\\n}\n\\While{elements of each $H_i$ are non-zero}{\n\\leavevmode\\rlap{\\hbox to \\hsize{\\color{green!20}\\leaders\\hrule height .8\\baselineskip depth .5ex\\hfill}}{\\% Find the largest subgraph degree, } \\\\\n$h_i(j) = \\max \\{\\max \\{H'_1\\}, \\max \\{H'_2 \\}, \\cdots, \\max \\{H'_l\\} \\}$ \\\\\n\\leavevmode\\rlap{\\hbox to \\hsize{\\color{green!20}\\leaders\\hrule height .8\\baselineskip depth .5ex\\hfill}}{\\% i.e., the $j^{th}$ element of $H_i$. 
} \\\\\n\\leavevmode\\rlap{\\hbox to \\hsize{\\color{green!20}\\leaders\\hrule height .8\\baselineskip depth .5ex\\hfill}}{\\% Find all elements of the degree sequence at least this large and } \\\\\n\\leavevmode\\rlap{\\hbox to \\hsize{\\color{green!20}\\leaders\\hrule height .8\\baselineskip depth .5ex\\hfill}}{\\% select an element from $d'$ at random} \\\\\n$d' = \\{d \\in D: d \\geq h_i(j) \\}$, $\\delta = d'(random)$ \\\\\n\\leavevmode\\rlap{\\hbox to \\hsize{\\color{green!20}\\leaders\\hrule height .8\\baselineskip depth .5ex\\hfill}}{\\% pair $H_i(j)$ to $\\delta$ and update} \\\\\n\\leavevmode\\rlap{\\hbox to \\hsize{\\color{green!20}\\leaders\\hrule height .8\\baselineskip depth .5ex\\hfill}}{\\% $\\delta$'s available degree and $H_i$} \\\\\n$\\delta = \\delta - H_i(j)$,~ $H_i(j) = 0$ \\\\\n}\n\\caption{Pseudocode for the cardinality matching algorithm (CMA). Other steps, such as ensuring the handshake lemma is satisfied for both lines and subgraphs, are identical to those used for the UDA; they are detailed in Section~\\ref{sec:UDA} and can be viewed in the Matlab source code. The output hyperstub degree sequence $H$ must be used as input for a modified configuration model connection process to realise a network, see Section~\\ref{sec:CP}.}\n\\end{algorithm}\n\n\\clearpage\n\n\\section{Introduction}\n\nPrincipal Component Analysis (PCA) is a celebrated dimension reduction method. It was first described by \\cite{pearson1901}, and was developed further by several authors \\citep[see \\emph{e.g.},][and references therein]{jolliffe1986}. In a nutshell, PCA summarises high-dimensional data $(x_1,...,x_m)\\in(\\mathbb{R}^d)^m$, $m\\in\\mathbb{N}^*$, into a smaller space, which is designed to be `meaningful' and more easily interpretable. By `meaningful' we mean that this new subspace still captures efficiently the correlations between data points, while at the same time reducing drastically the dimension of the space. A popular tool to design this meaningful subspace is the \\emph{Gram Matrix} of the data, defined as $(\\langle x_i,x_j\\rangle)_{i,j}$. PCA then considers the eigenvectors of this matrix. Note that this is a linear operation, in the sense that PCA consists of an orthogonal transformation of the coordinate system in which we describe our data, followed by a projection onto the first $k$ directions in the new system, corresponding to the largest $k$ eigenvalues of the Gram matrix.\n\nOver the past two decades, PCA has been studied and enriched \\citep[\\emph{e.g.}, principal curves as a nonlinear extension of PCA, as done by][]{guedj2018sequential}. The particular extension of PCA that we focus on is `kernel PCA' \\citep[which may be traced back to][]{scholkopf1998}. Using a kernel, we map our data \ninto a reproducing kernel Hilbert space\\footnote{We refer the reader to \\cite{hein2004kernels} or \\cite{hofmann2005tutorial} for an introduction to RKHS and their uses in machine learning.} (RKHS).\nThe linear PCA then operates in this Hilbert space to yield a finite-dimensional subspace onto which we project new data points. The final step is to assess how close this projection is to the original data. 
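\n\nAs a toy illustration of this implicit embedding (a standard example, not specific to our setting), take $x,y\\in\\mathbb{R}^2$ and $\\kappa(x,y)=(x^{T}y)^2$. Then $\\kappa(x,y)=\\langle \\phi(x),\\phi(y)\\rangle$ for the explicit feature map\n\\begin{equation*}\n\\phi(x)=\\left(x_1^2, \\ \\sqrt{2}\\, x_1 x_2, \\ x_2^2\\right)\\in\\mathbb{R}^3,\n\\end{equation*}\nso that linear PCA on the features $\\phi(x_i)$ amounts to a nonlinear analysis of the original data, while all computations only ever require evaluations of $\\kappa$. 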
Kernel PCA is widely used in the machine learning literature \\citep[\\emph{e.g.},][to name but a few recent works]{kim2020,xu2019,vo2016}, which makes the need for a better theoretical understanding even more pressing.\n\nA first theoretical study was conducted by \\cite{shawe-taylor2005eigenspectrum}, who derived PAC (Probably Approximately Correct) guarantees for kernel PCA. The PAC bounds proposed by \\cite{shawe-taylor2005eigenspectrum} were set up to control the averaged projection of new data points onto a finite-dimensional subspace of the RKHS into which data is embedded.\n\n\nBounds in \\cite{shawe-taylor2005eigenspectrum} include a Rademacher complexity. Rademacher complexity terms are known to be challenging to compute in many settings as they typically blow up combinatorially. To provide more numerically friendly results, we investigate a different route than \\cite{shawe-taylor2005eigenspectrum} and introduce the first PAC-Bayesian study of kernel PCA which, as a byproduct, allows us to replace the Rademacher term by a Kullback-Leibler divergence (which has a closed form when distributions are Gaussian, and can be approximated by Monte Carlo in other cases). PAC-Bayes theory is a powerful framework to study generalisation properties of randomised predictors, and was introduced in the seminal works of \\cite{shawe1997pac,McAllester1998,McAllester1999}. PAC-Bayes theory was then developed further by \\cite{seeger2002,McAllester2003,maurer2004note,catoni2007} among others. PAC-Bayes has emerged in the past few years as one of the most promising leads to study generalisation properties of deep neural networks \\citep{conf\/uai\/Dziugaite2017,letarte2019}. A surge of recent papers in a variety of different settings illustrates the flexibility and relevance of PAC-Bayes as a principled tool \\citep{conf\/icml\/Amit2018,conf\/icml\/Dziugaite2018,conf\/nips\/Dziugaite2018,conf\/nips\/Rivasplata2018,rivasplata2020, holland2019,haddouche2020,nozawa2020,mhammedi19pac,mhammedi2020pacbayesian,cantelobre2020pacbayesian}.\nWe refer to the recent survey \\cite{guedj2019primer} and tutorial \\cite{GueSTICML}, and the paper \\cite{rivasplata2020}, for details on PAC-Bayes theory.\n\n\n\\paragraph{Our contributions.} We aim at PAC and PAC-Bayesian bounds on the performance of kernel PCA.\nWe provide empirical PAC bounds which improve on those from \\cite{shawe-taylor2005eigenspectrum}. We introduce the first PAC-Bayes lower and upper bounds for kernel PCA, which clarify the merits and limitations of the overall method. These results are unprecedented, to the best of our knowledge.\n\n\n\\paragraph{Outline.} We introduce our notation and recall existing theoretical results on kernel PCA in \\cref{sec:notation}. \\cref{sec:pac} contains two new PAC bounds for kernel PCA, and \\cref{sec:pacb} is devoted to two new PAC-Bayes bounds, along with our proofs. The paper closes with a brief illustration of the numerical value of our bounds (\\cref{sec: experiments}) and concluding remarks (\\cref{sec:end}). We gather proofs of technical results in \\cref{sec:proofs}.\n\n\n\\section{Notation and preliminaries}\n\\label{sec:notation}\n\nWe let $\\mathbb{R}^{m \\times n}$ denote the space of matrices of shape $m \\times n$ with real entries. The data space is $\\mathcal{X} \\subseteq \\mathbb{R}^d$. 
We assume we have access to $s = (x_1, \\ldots, x_m) \\in \\mathcal{X}^m$, a realisation of the size-$m$ random vector $S = (X_1, \\ldots, X_m) \\in \\mathcal{X}^m$.\n\nWe let $\\mathcal{M}_1(\\mathcal{X})$ denote the space of probability distributions over $\\mathcal{X}$, and let $\\mu \\in \\mathcal{M}_1(\\mathcal{X})$ stand for the distribution that generates one random example $X \\in \\mathcal{X}$. Its empirical counterpart is given by $\\hat{\\mu} = \\frac{1}{m}\\sum_{i=1}^{m} \\delta_{X_i}$, \\emph{i.e.}, the empirical distribution defined by the random sample. We assume the collected sample to be independent and identically distributed (iid): $S \\sim \\mu^m$, where $\\mu^m = \\mu\\otimes\\cdots\\otimes\\mu$ ($m$ copies).\n\n$\\mathbb{E}_{\\nu}[f] = \\mathbb{E}_{X \\sim\\nu}[f(X)] = \\int_{\\mathcal{X}} f(x) \\nu(dx)$ denotes the expectation under $\\nu \\in \\mathcal{M}_{1}(\\mathcal{X})$, for $f : \\mathcal{X}\\to\\mathbb{R}$.\nWe denote by $\\mathcal{H}$ a (separable) Hilbert space, equipped with an inner product $\\langle \\cdot,\\cdot \\rangle$. We let $\\| u \\| = \\langle u,u \\rangle^{1\/2}$ be the norm of $u \\in \\mathcal{H}$.\nThe operator $P_V : \\mathcal{H} \\to \\mathcal{H}$ is the orthogonal projection onto a subspace $V$, and \n$P_{v} = P_{\\operatorname{span}\\{ v \\}}$. In what follows, $\\mathcal{F}$ is a set of predictors, and $\\pi,\\pi^0 \\in \\mathcal{M}_1(\\mathcal{F})$ represent probability distributions over $\\mathcal{F}$. Finally, $\\mathbb{E}_{\\pi}[L] = \\mathbb{E}_{f \\sim\\pi}[L(f)] = \\int_{\\mathcal{F}} L(f) \\pi(df)$ is the expectation under $\\pi \\in \\mathcal{M}_1(\\mathcal{F})$, for $L : \\mathcal{F}\\to\\mathbb{R}$.\n\n\n\\paragraph{On Reproducing Kernel Hilbert Spaces (RKHS).} We recall results from \\cite{hein2004kernels} on the links between RKHS and different mathematical structures.\n\nLet us start with a primer on kernels.\nThe key idea is that while data belongs to a data space $\\mathcal{X}$, a kernel function $\\kappa : \\mathcal{X}\\times\\mathcal{X} \\to \\mathbb{R}$ implicitly embeds data into a Hilbert space (of real-valued functions), where there is an abundance of structure to exploit. 
Such a function $\\kappa$ is required to be \\emph{symmetric} in the sense that $\\kappa(x_1,x_2) = \\kappa(x_2,x_1)$ for all $x_1,x_2 \\in \\mathcal{X}$.\n\n\\begin{definition}[PSD kernels]\n A symmetric real-valued function $\\kappa:\\mathcal{X}\\times\\mathcal{X}\\rightarrow \\mathbb{R}$ is said to be a \\emph{positive semi-definite} (PSD) kernel if \n $\\forall n\\geq 1$, $\\forall x_1,...,x_n \\in\\mathcal{X}$, $\\forall c_1,...,c_n\\in\\mathbb{R}$:\n \\begin{align*}\n \\sum_{i,j=1}^n c_i c_j \\kappa(x_i,x_j)\\geq 0.\n \\end{align*}\nIf the inequality is strict (for non-zero coefficients $c_1,\\ldots,c_n$), then the kernel is said to be \\emph{positive definite} (PD).\n\\end{definition}\n\nFor instance, polynomial kernels $\\kappa(x,y)= (x^{T}y+r)^n$ and Gaussian kernels $\\kappa(x,y)=\\exp(-||x-y||^2\/2\\sigma^2)$ (for $n\\geq 1,(x,y)\\in(\\mathbb{R}^d)^2, r\\geq 0,\\sigma>0$) are PD kernels.\n\n\\begin{definition}[RKHS]\n A \\emph{reproducing kernel Hilbert space} (RKHS) on $\\mathcal{X}$ is a Hilbert space $\\mathcal{H} \\subset \\mathbb{R}^\\mathcal{X}$ (functions from $\\mathcal{X}$ to $\\mathbb{R}$) where all evaluation functionals $\\delta_x : \\mathcal{H} \\rightarrow \\mathbb{R}$, defined by $\\delta_x(f) = f(x)$, are continuous.\n\\end{definition}\nNote that, since the evaluation functionals are linear, an equivalent condition to continuity (of all the $\\delta_x$'s) is that for every $x\\in\\mathcal{X}$, there\nexists $M_x < +\\infty$ such that\n\\[ \n\\forall f \\in \\mathcal{H}, \\hspace{3mm} |f(x)|\\leq M_x ||f||. \n\\]\nBy the Riesz representation theorem, this is equivalent to the existence, for each $x\\in\\mathcal{X}$, of an element $\\kappa_x\\in\\mathcal{H}$ such that $f(x)=\\langle f,\\kappa_x\\rangle$ for all $f\\in\\mathcal{H}$: this is the so-called \\emph{reproducing property}.\nWhen $\\mathcal{H}$ is an RKHS over $\\mathcal{X}$, there is a kernel $\\kappa:\\mathcal{X}\\times\\mathcal{X}\\rightarrow \\mathbb{R}$ and a mapping $\\psi:\\mathcal{X}\\to\\mathcal{H}$ such that $\\kappa(x_1,x_2) = \\langle \\psi(x_1),\\psi(x_2) \\rangle_{\\mathcal{H}}$. Intuitively: the `feature mapping' $\\psi$ maps data points $x \\in \\mathcal{X}$ to `feature vectors' $\\psi(x) \\in \\mathcal{H}$, while the kernel computes the inner products between those feature vectors without needing explicit knowledge of $\\psi$.\n\nThe following key theorem from \\cite{aronszajn1950} links PD kernels and RKHS.\n\n\\begin{theorem}[Moore-Aronszajn, 1950]\nIf $\\kappa$ is a positive definite kernel, then there exists a unique reproducing kernel Hilbert space $\\mathcal{H}$ whose kernel is $\\kappa$.\n\\end{theorem}\n\nIn \\cite{hein2004kernels}, a sketch of the proof is provided: from a PD kernel $\\kappa$, we build an RKHS from the pre-Hilbert space \n$$\nV=\\operatorname{span}\\left\\{\\kappa(x,.)\\mid x\\in\\mathcal{X} \\right\\}. \n$$\nWe endow $V$ with the following inner product:\n\\begin{equation}\\label{remark_RKHS_finite_data_space_X}\n\\left\\langle \\sum_{i}a_i\\kappa(x_i,.), \\sum_j b_j \\kappa(x_j,.) \\right\\rangle_V = \\sum_{i,j} a_i b_j \\kappa(x_i,x_j) .\n\\end{equation}\nIt can be shown that this is indeed a well-defined inner product.\nThus, the rest of the proof consists in the completion of $V$ into a Hilbert space (of functions) verifying the reproducing property, and the verification of the uniqueness of such a Hilbert space.\n\nAn important special case is when $|\\mathcal{X}|< +\\infty$: then $V$ is a finite-dimensional vector space. Thus if we endow it with an inner product, $V$ is already a Hilbert space (it already contains all pointwise limits of Cauchy sequences of elements of $V$). 
As a consequence, the associated RKHS is finite-dimensional in this case.\n\n\\begin{definition}[Aronszajn mapping]\n \\label{aronszajn_mapping}\n For a fixed PD kernel $\\kappa$, we define the \\emph{Aronszajn mapping} \n $\\psi: \\mathcal{X}\\rightarrow \\left(\\mathcal{H},\\langle \\cdot,\\cdot\\rangle\\right)$ such that\n \\[\n \\forall x \\in\\mathcal{X}, \\hspace{3mm} \\psi(x)= \\kappa(x,.) ,\n \\]\n where we denote by $\\mathcal{H}$ the RKHS given by the Moore-Aronszajn theorem and by $\\langle\\cdot,\\cdot\\rangle$ the inner product given in \\eqref{remark_RKHS_finite_data_space_X}.\n In the sequel, we refer to the Aronszajn mapping as the pair $(\\psi,\\mathcal{H})$ when it is important to highlight the space $\\mathcal{H}$ into which $\\psi$ embeds the data.\n\\end{definition}\n\nWhen it comes to embedding points of $\\mathcal{X}$ into a Hilbert space through a feature mapping $\\psi$, several approaches have been considered \\citep[see][Section 3.1]{hein2004kernels}. The Aronszajn mapping is one choice among many.\n\n\n\\paragraph{On kernel PCA.} We present here the results from \\cite{shawe-taylor2005eigenspectrum}. Fix a PD kernel $\\kappa$. We denote by $(\\psi,\\mathcal{H})$ the Aronszajn mapping of $\\mathcal{X}$ into $\\mathcal{H}$.\n\n\\begin{definition}\n\\label{d:gram_matrix}\nThe kernel Gram matrix of a data set $s=(x_1,\\ldots,x_m) \\in \\mathcal{X}^m$ is the element $K(s)$ of $\\mathbb{R}^{m \\times m}$ defined as\n\\[ \nK(s)= (\\kappa( x_i,x_j))_{i,j}.\n\\]\nWhen the data set is clear from the context, we will shorten this notation to $K = (\\kappa( x_i,x_j))_{i,j}$.\n\\end{definition}\nNote that for a random sample $S = (X_1, \\ldots, X_m)$, the corresponding $K(S)$ is a random matrix. \n\nThe goal of kernel PCA is to analyse $K$ by mapping the (possibly intricate) size-$m$ data sample from $\\mathcal{X}$ into $\\mathcal{H}$, where the data are better separated, and then finding a small (in terms of dimension) subspace $V$ of $\\mathcal{H}$ which captures the major part of the information contained in the data. 
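\n\nTo fix ideas, the following minimal sketch (in Matlab-style code, as an illustration under simplified assumptions; this is not the implementation evaluated in \\cref{sec: experiments}) assembles the Gram matrix of Definition~\\ref{d:gram_matrix} for a Gaussian kernel and computes the squared norms of the projections of the mapped sample onto the span of the $k$ leading eigendirections, anticipating notions formalised in the remainder of this section.\n\\begin{verbatim}\n% Minimal kernel PCA sketch (illustration only).\nm = 200; d = 3; sigma = 1; k = 5;\nX = randn(m, d);                            % one data point per row\nD2 = sum(X.^2,2) + sum(X.^2,2).' - 2*(X*X.');\nK = exp(-D2 \/ (2*sigma^2));                 % Gram matrix of the sample\n[U, lam] = eig((K + K.')\/2, 'vector');      % symmetrised for stability\n[lam, idx] = sort(lam, 'descend'); U = U(:, idx);\n% Squared projection of psi(x_i) onto the span of the first k\n% eigendirections: sum over j <= k of lam(j) * U(i,j)^2.\nproj2  = (U(:,1:k).^2) * lam(1:k);\nresid2 = diag(K) - proj2;                   % squared residuals\n\\end{verbatim}\n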
\nWe define $\\mu \\in \\mathcal{M}_1(\\mathcal{X})$, a probability measure over $\\mathcal{X}$, as the distribution representing the way data are spread out over $\\mathcal{X}$.\n\n\nIn other words, we want to find a subspace $V \\subseteq \\mathcal{H}$ such that\n\\[\n\\forall x\\in \\mathcal{X}, \\hspace{3mm}\n\\left| ||P_V(\\psi(x))||^2 - ||\\psi(x)||^2 \\right| \\approx 0\n\\]\nwhere $P_V$ is the orthogonal projection over the subspace $V$.\n\nThe notation $[m] = \\{ 1,\\ldots,m \\}$ is convenient.\nRecall that $\\psi$ and $\\mathcal{H}$ are defined such that we can express the elements of $K$ as a scalar product in $\\mathcal{H}$:\n\\[\n\\forall i,j \\in [m], \\hspace{3mm}\nK_{i,j}= \\kappa(x_i,x_j) = \\langle \\psi(x_i),\\psi(x_j)\\rangle_\\mathcal{H}.\n\\]\n\n\n\\begin{definition}\nFor any probability distribution $\\nu$ over $\\mathcal{X}$,\nwe define the self-adjoint operator on $L^2(\\mathcal{X},\\nu)$ associated to the kernel function $\\kappa$ as:\n\\[\n\\mathcal{K}_\\nu (f)(x) = \\int_{\\mathcal{X}} f(x') \\kappa(x,x')\\nu(dx').\n\\]\n\\end{definition}\n\n\\begin{definition}\n\\label{d:e_values}\nWe use the following conventions:\n\\begin{itemize}[leftmargin=15pt,itemsep=1pt]\n \\item If $\\mu$ is the data-generating distribution, then we rename $\\mathcal{K} := \\mathcal{K}_{\\mu}$.\n \\item If $\\hat{\\mu}$ is the empirical distribution of our $m$-sample $(x_i)_i$, then we name $\\hat{\\mathcal{K}} := \\mathcal{K}_{\\hat{\\mu}}$.\n \\item $\\lambda_1\\geq \\lambda_2\\geq\\cdots$ are the eigenvalues of the operator $\\mathcal{K}$.\n \\item $\\hat{\\lambda}_1\\geq\\cdots\\geq \\hat{\\lambda}_m\\geq 0$ are the eigenvalues of the kernel matrix $K$.\n\\end{itemize}\n\\end{definition}\n\nMore generally, let $\\lambda_1(A) \\geq \\lambda_2(A) \\geq \\cdots$ be the eigenvalues of a matrix $A$, or a linear operator $A$. Then in Definition \\ref{d:e_values}, we use the shortcuts $\\lambda_i = \\lambda_i(\\mathcal{K})$ and $\\hat{\\lambda}_i = \\lambda_i(K)$. \nNotice that $\\forall i \\in\\{1,\\dots,m\\}$, we have $\\lambda_i(\\hat{\\mathcal{K}})= \\frac{\\hat{\\lambda}_i}{m}$.\n\n\n\\begin{definition}\n\\label{d:sums}\nFor a given sequence of real scalars $(a_i)_{i \\ge 1}$ of length $m$, possibly infinite, we define for any $k$ the initial sum and the tail sum as\n\\[\na^{\\leq k}:= \\sum_{i= 1}^k a_i\n\\hspace{3mm}\\text{ and}\\hspace{3mm}\na^{>k}:= \\sum_{i= k+1}^m a_i.\n\\]\n\\end{definition}\n\n\n\\begin{definition}\n\\label{d:corr_matrix}\nThe \\emph{sample covariance matrix} of a random data set $S=(X_1,\\ldots,X_m)$ is the element $C(S)$ of $\\mathbb{R}^{m \\times m}$ defined by\n\\[ \nC(S)= \\frac{1}{m}\\sum_{i=1}^m \\psi(X_i)\\psi(X_i)',\n\\]\nwhere $\\psi(x)'$ denotes the transpose of $\\psi(x)$. \nNotice that this is the sample covariance in feature space.\nWhen $S$ is clear from the context, we will shorten $C(S)$ to $C$.\n\\end{definition}\n\n\nOne could object that $C$ may not be finite-dimensional, because $\\mathcal{H}$ is not (in general). 
However, notice that the subspace of $\\mathcal{H}$ spanned by $\\psi(x_1),...,\\psi(x_m)$ is always finite-dimensional, hence by choosing a basis of this subspace, $C$ becomes effectively a finite-dimensional square matrix (of size no larger than $m \\times m$).\n\n\n\n\\begin{definition}\n\\label{def: subspaces}\nFor any probability distribution $\\nu$ over $\\mathcal{X}$, we define $\\mathcal{C}_\\nu : \\mathcal{H} \\to \\mathcal{H}$ as the mapping $\\alpha \\mapsto \\mathcal{C}_\\nu(\\alpha)$ where:\n\\[ \n\\mathcal{C}_\\nu(\\alpha) = \\int_{\\mathcal{X}}\\langle \\psi(x),\\alpha\\rangle \\psi(x)\\nu(dx).\n\\]\nIf $\\nu$ has density $v(x)$, then we write \n$\\mathcal{C}_v (\\alpha) = \\int_{\\mathcal{X}}\\langle \\psi(x),\\alpha\\rangle \\psi(x)v(x)dx$.\nNotice that the eigenvalues of $\\mathcal{K}_\\nu$ and $\\mathcal{C}_\\nu$ are the same for any $\\nu$; the proof of this fact may be found in \\cite{shawe-taylor2005eigenspectrum}.\nWe similarly define the simplified notations $\\mathcal{C} := \\mathcal{C}_{\\mu}$ (when $\\mu$ is the population distribution) \nand $\\hat{\\mathcal{C}} = \\mathcal{C}_{\\hat{\\mu}}$ (when $\\hat{\\mu}$ is the empirical distribution).\nWe then define for any $k\\in\\{1,\\dots,m\\}$ \n\\begin{itemize}\n \\item $V_k$ the subspace spanned by the $k$-first eigenvectors of $\\mathcal{C}$,\n \\item $\\hat{V_k}$ the subspace spanned by the $k$-first eigenvectors of $\\hat{\\mathcal{C}}$.\n\\end{itemize}\n\\end{definition}\nNotice that $\\hat{\\mathcal{C}}$ coincides with the sample covariance matrix $C$, \\emph{i.e.}, we have\n\\[\n\\hat{\\mathcal{C}}(\\alpha)= C\\alpha,\n\\hspace{5mm} \\forall \\alpha \\in \\mathcal{H}.\n\\]\nSo for any $k>0$, $\\hat{V_k}$ is the subspace spanned by the first $k$ eigenvectors of the matrix $C$.\n\n\n\\begin{proposition}[Courant-Fischer's corollary]\n\\label{p:courant-fischer}\nIf $(u_i)_i$ are the eigenvectors associated to $(\\lambda_i(\\mathcal{K}_\\nu))_i$ and $V_k$ is the space spanned by the $k$ first eigenvectors:\n\\[ \n\\lambda_k(\\mathcal{K}_\\nu) \n= \\mathbb{E}_\\nu[||P_{u_k}(\\psi(x))||^2] \n= \\max_{\\dim(V)=k} \\min_{0\\neq v\\in V}\\mathbb{E}_\\nu[||P_{v}(\\psi(x))||^2],\n\\]\n\\[\n\\lambda^{\\leq k}(\\mathcal{K}_\\nu) = \\max_{\\dim(V)=k} \\mathbb{E}_\\nu[||P_{V}(\\psi(x))||^2],\n\\]\n\\[\n\\lambda^{>k}(\\mathcal{K}_\\nu) = \\min_{\\dim(V)=k} \\mathbb{E}_\\nu[||P_{V}^\\bot(\\psi(x))||^2].\n\\]\n\\end{proposition}\n\nNow we will denote by $\\mathbb{E}_{\\mu}$ the expectation under the true data-generating distribution $\\mu$ and by $\\mathbb{E}_{\\hat{\\mu}}$ the expectation under the empirical distribution of an $m$-sample $S$.\nCombining the last properties gives us the following equalities.\n\n\\begin{proposition}\n\\label{p: characterization_eigenvalues}\n We have\n \\begin{align*}\n \\mathbb{E}_{\\hat{\\mu}} [ ||P_{\\hat{V_k}}(\\psi(x))||^2] \n &= \\frac 1 m \\sum_{i=1}^m ||P_{\\hat{V_k}}(\\psi(x_i))||^2 = \\frac 1 m \\sum_{i=1}^k \\hat{\\lambda}_i ,\\\\\n \\mathbb{E}_{\\mu} [ ||P_{V_k}(\\psi(x))||^2] \n &= \\sum_{i=1}^k \\lambda_i .\n \\end{align*} \n\\end{proposition}\n
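\n\nFor later reference, let us also record how projections onto $\\hat{V_k}$ are computed in practice (a standard kernel PCA identity, see \\cite{scholkopf1998}): if $u_1,\\dots,u_m$ are orthonormal eigenvectors of the kernel matrix $K$ with $K u_j = \\hat{\\lambda}_j u_j$ and $\\hat{\\lambda}_j>0$, then the unit vectors\n\\begin{equation*}\nv_j = \\frac{1}{\\sqrt{\\hat{\\lambda}_j}} \\sum_{i=1}^m (u_j)_i \\, \\psi(x_i)\n\\end{equation*}\nare eigenvectors of $\\hat{\\mathcal{C}}$ (with eigenvalues $\\hat{\\lambda}_j\/m$), and for any $x\\in\\mathcal{X}$,\n\\begin{equation*}\n||P_{\\hat{V_k}}(\\psi(x))||^2 = \\sum_{j=1}^k \\langle \\psi(x), v_j\\rangle^2\n\\hspace{3mm}\\text{ with }\\hspace{3mm}\n\\langle \\psi(x), v_j\\rangle = \\frac{1}{\\sqrt{\\hat{\\lambda}_j}}\\sum_{i=1}^m (u_j)_i \\, \\kappa(x_i,x),\n\\end{equation*}\nso that every empirical quantity appearing below can be evaluated from kernel evaluations alone.\n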
\n\nWe now recall the main results from \\cite{shawe-taylor2005eigenspectrum} before introducing our own results.\n\nWith the notation introduced in Definition \\ref{d:sums}, when projecting onto the subspace $\\hat{V_k}$ spanned by the first $k$ eigenvectors of $\\hat{\\mathcal{C}} = \\hat{\\mathcal{K}}$,\nthe tail sum $\\lambda^{>k} = \\sum_{i>k} \\lambda_i$ lower-bounds the expected squared residual.\n\n\n\\begin{theorem}[\\citealp{shawe-taylor2005eigenspectrum}, Theorem 1]\n\\label{th : st_2005_residual}\nIf we perform PCA in the feature space defined by the Aronszajn mapping $(\\psi,\\mathcal{H})$ of a kernel $\\kappa$, then with probability at least $1-\\delta$ over random $m$-samples $S$, we have for all $k \\in [m]$ that, if we project new data $x$ onto the space $\\hat{V}_k$, \nthe expected square residual satisfies:\n\\begin{align*}\n\\lambda^{>k} \\leq \\mathbb{E}_{\\mu}[||P_{\\hat{V}_k}^\\bot(\\psi(x))||^2]\n &\\leq \\min_{1\\leq \\ell \\leq k} \\left[\\frac 1 m\\hat{\\lambda}^{>\\ell}(S) + \\frac{1 + \\sqrt{\\ell}} {\\sqrt{m}} \\sqrt{\\frac 2 m \\sum_{i=1}^m \\kappa(x_i,x_i)^2}\\right] \\\\\n & + R^2 \\sqrt{\\frac {18} {m} \\log\\left(\\frac {2m} {\\delta}\\right)} .\n\\end{align*}\nThis holds under the assumption that $\\| \\psi(x) \\| \\leq R$, for any $x \\in \\mathcal{X}$.\n\\end{theorem}\n\n\\begin{theorem}[\\citealp{shawe-taylor2005eigenspectrum}, Theorem 2]\n\\label{th : st_2005_proj}\nIf we perform PCA in the feature space defined by the Aronszajn mapping $(\\psi,\\mathcal{H})$ of a kernel $\\kappa$, then with probability at least $1-\\delta$ over random $m$-samples $S$, we have for all $k \\in [m]$ that, if we project new data $x$ onto the space $\\hat{V}_k$, the expected square projection satisfies:\n\\begin{align*}\n\\lambda^{\\leq k} \\geq \\mathbb{E}_{\\mu}[||P_{\\hat{V}_k}(\\psi(x))||^2]\n &\\geq \\max_{1\\leq \\ell\\leq k} \\left[\\frac 1 m\\hat{\\lambda}^{\\leq \\ell}(S) - \\frac{1 + \\sqrt{\\ell}} {\\sqrt{m}} \\sqrt{\\frac 2 m \\sum_{i=1}^m \\kappa(x_i,x_i)^2}\\right] \\\\\n & - R^2 \\sqrt{\\frac {19} {m} \\log\\left(\\frac {2(m+1)} {\\delta}\\right)} .\n\\end{align*}\nThis holds under the assumption that $\\| \\psi(x) \\| \\leq R$, for any $x \\in \\mathcal{X}$.\n\\end{theorem}\n\n\n\nNotice that the purpose of those two theorems is to control, by upper and lower bounds, the theoretical averaged squared norm projections of a new data point $x$ onto the empirical subspaces $\\hat{V}_k$ and $\\hat{V}_k^\\bot$: $ \\mathbb{E}_{\\mu}[||P_{\\hat{V}_k}(\\psi(x))||^2] $ and $\\mathbb{E}_{\\mu}[||P_{\\hat{V}_k}^\\bot(\\psi(x))||^2]$. Let us note that for each theorem, only one side of the inequality is empirical (while the other one consists in an unknown quantity, $\\lambda^{\\leq k}$ or $\\lambda^{>k}$, respectively). \n\nOur contribution is twofold: \n\\begin{itemize}\n \\item We first propose two empirical PAC bounds improving (in some cases) the results of \\cite{shawe-taylor2005eigenspectrum}. These are collected in \\cref{sec:pac}.\n \\item Casting this into the PAC-Bayes framework, we then move on to two more sophisticated empirical bounds which replace the theoretical quantities $\\lambda^{>k}$ and $\\lambda^{\\leq k}$ in Theorems \\ref{th : st_2005_residual} and \\ref{th : st_2005_proj}. This is the core of the paper (\\cref{sec:pacb}).\n\\end{itemize}\n\n\n\n\\section{A theoretical analysis of kernel PCA: PAC bounds}\\label{sec:pac}\n\nWe present in this section two PAC bounds, which are directly comparable to those of \\citet{shawe-taylor2005eigenspectrum} recalled in \\cref{th : st_2005_residual} and \\cref{th : st_2005_proj} (see also \\cref{sec: experiments} for a brief numerical comparison). 
These bounds exploit the classical idea of splitting a data set in half, one being used as a training set and the other as a test set.\n\\begin{theorem}\n\\label{th : pac_inequality_1}\nIf we perform PCA in the feature space defined by the Aronszajn mapping $(\\psi,\\mathcal{H})$ of a kernel $\\kappa$, then with probability at least $1-\\delta$ over random $2m$-samples $S=S_1\\cup S_2$ (where $S_1=\\{x_1,...,x_m\\}$, $S_2=\\{x_{m+1},...,x_{2m}\\}$ are two disjoint $m$-samples), we have for all $k \\in [m]$, if we project new data $x$ onto the space $\\hat{V}_k(S_1)$, the expected square projection is bounded by:\n\\begin{align*}\n \\mathbb{E}_{\\mu}\\left[ ||P_{\\hat{V}_k(S_1)}(\\psi(x))||^2\\right] & \\geq \\frac{1}{m}\\sum_{i=1}^m ||P_{\\hat{V}_k(S_1)}(\\psi(x_{m+i}))||^2 -R^2\\sqrt{\\frac{2}{m}\\log\\left(\\frac{1}{\\delta}\\right)} .\n\\end{align*}\nHere $\\hat{V}_k(S_1)$ is the subspace spanned by the $k$ eigenvectors of the covariance matrix $C(S_1)$ corresponding to the $k$ largest eigenvalues of $C(S_1)$.\nThis holds under the assumption that $\\| \\psi(x) \\| \\leq R$, for any $x \\in \\mathcal{X}$.\n\\end{theorem}\n\nThis result provides an empirical lower bound for the theoretical expected squared projection. In other words, it quantifies how accurate the projection of a new data point onto our empirical subspace is.\n\n\\begin{theorem}\n\\label{th : pac_inequality_2}\nIf we perform PCA in the feature space defined by the Aronszajn mapping $(\\psi,\\mathcal{H})$ of a kernel $\\kappa$, then with probability at least $1-\\delta$ over random $2m$-samples $S=S_1\\cup S_2$ (where $S_1=\\{x_1,...,x_m\\}$, $S_2=\\{x_{m+1},...,x_{2m}\\}$ are two disjoint $m$-samples), we have for all $k \\in [m]$, the expected square residual is bounded by:\n\\begin{align*}\n \\mathbb{E}_{\\mu}\\left[ ||P_{\\hat{V}_k^{\\bot}(S_1)}(\\psi(x))||^2\\right] & \\leq \\frac{1}{m}\\sum_{i=1}^m ||P_{\\hat{V}_k^{\\bot}(S_1)}(\\psi(x_{m+i}))||^2 +R^2\\sqrt{\\frac{2}{m}\\log\\left(\\frac{1}{\\delta}\\right)} .\n\\end{align*}\nHere $\\hat{V}_k^{\\bot}(S_1)$ is the orthogonal complement of $\\hat{V}_k(S_1)$, the subspace spanned by the $k$ eigenvectors of the covariance matrix $C(S_1)$ corresponding to the $k$ largest eigenvalues of $C(S_1)$.\nThis holds under the assumption that $\\| \\psi(x) \\| \\leq R$, for any $x \\in \\mathcal{X}$.\n\\end{theorem}\n\n\n\\cref{th : pac_inequality_2} provides an upper bound on the residual squared projection. It therefore measures how much information is lost by the projection of a new data point onto our empirical subspace.\n\nThe rest of this section is devoted to the proofs of \\cref{th : pac_inequality_1} and \\cref{th : pac_inequality_2}. Numerical implementations of both theorems are gathered in \\cref{sec: experiments}.\n\n\nWe first recall a classical concentration inequality of \\citet{mcdiarmid1989}.\n\n\\begin{theorem}[Bounded differences, McDiarmid]\n\\label{th: mcdiarmid}\nLet $X_1,...,X_n$ be independent random variables taking values in $\\mathcal{X}$ and $f: \\mathcal{X}^n \\rightarrow \\mathbb{R}$. Assume that for all $1\\leq i\\leq n$ and all $x_1,\\ldots,x_{i-1},x_{i+1},\\ldots,x_n \\in \\mathcal{X}$ we have:\n\\[\n\\sup_{x_i,\\hat{x}_i} \n|f(x_1,\\ldots,x_{i-1},x_{i},x_{i+1},\\ldots,x_n) - f(x_1,\\ldots,x_{i-1},\\hat{x}_{i},x_{i+1},\\ldots,x_n)| \\leq c_i .\n\\]\nThen for all $\\delta >0$:\n\\[\n\\mathbb{P}\\left( f(X_1,...,X_n) - \\mathbb{E}[f(X_1,...,X_n)] > \\delta \\right) \\leq \\exp\\left(- \\frac{2\\delta^2}{\\sum_{i=1}^n c_i^2}\\right) .\n\\]\n\\end{theorem}
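\n\nTo see how Theorem~\\ref{th: mcdiarmid} is used below (a routine instantiation, spelled out for convenience), consider $f(x_1,\\ldots,x_n)=\\frac{1}{n}\\sum_{i=1}^n g(x_i)$ with $0\\leq g\\leq B$: changing one coordinate changes $f$ by at most $c_i=B\/n$, so that\n\\begin{equation*}\n\\mathbb{P}\\left( f(X_1,...,X_n) - \\mathbb{E}[f(X_1,...,X_n)] > t \\right) \\leq \\exp\\left(-\\frac{2nt^2}{B^2}\\right),\n\\end{equation*}\nand equating the right-hand side to $\\delta$ yields deviations of order $B\\sqrt{\\log(1\/\\delta)\/n}$. In the proofs below this is applied with $n=m$, $g(\\cdot)=||P_{\\hat{V}_k(S_1)}(\\psi(\\cdot))||^2$ and $B=R^2$.\n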
\n\n\\begin{proof}[Proof of Theorem \\ref{th : pac_inequality_1}]\n\nLet $S=S_1\\cup S_2$ be a size-$2m$ sample. We recall that our data are iid.\nWe first apply Proposition \\ref{p:courant-fischer} and Proposition \\ref{p: characterization_eigenvalues}\nto $S_1$.\n\nWe define $\\hat{V}_k(S_1)$ as the subspace spanned by the $k$ eigenvectors of the covariance matrix $C(S_1)$ corresponding to the top $k$ eigenvalues of $C(S_1)$.\nWe now need to study \n$\\mathbb{E}_{\\mu}\\left[ ||P_{\\hat{V}_k(S_1)}(\\psi(x))||^2\\right]$.\n\nNote that\n\\begin{align*}\n\\mathbb{E}_{\\mu}\\left[ ||P_{\\hat{V}_k(S_1)}(\\psi(x))||^2\\right] \n&= \\left(\\mathbb{E}_{\\mu}\\left[ ||P_{\\hat{V}_k(S_1)}(\\psi(x))||^2\\right]- \\frac{1}{m}\\sum_{i=1}^m ||P_{\\hat{V}_k(S_1)}(\\psi(x_{m+i}))||^2\\right) \\\\\n&\\hspace*{17mm}+ \\frac{1}{m}\\sum_{i=1}^m ||P_{\\hat{V}_k(S_1)}(\\psi(x_{m+i}))||^2 .\n\\end{align*}\nBecause our data are iid, following the distribution $\\mu$, we know that $S_1$ and $S_2$ are independent, hence \n\\[\n\\mathbb{E}_{S_2}\\left[ \\frac{1}{m}\\sum_{i=1}^m ||P_{\\hat{V}_k(S_1)}(\\psi(x_{m+i}))||^2 \\right] = \\mathbb{E}_{\\mu}\\left[||P_{\\hat{V}_k(S_1)}(\\psi(x))||^2\\right] .\n\\]\nWe can now apply McDiarmid's inequality: with probability at least $1-\\delta$,\n\\[\n\\mathbb{E}_{\\mu}\\left[ ||P_{\\hat{V}_k(S_1)}(\\psi(x))||^2\\right] - \\frac{1}{m}\\sum_{i=1}^m ||P_{\\hat{V}_k(S_1)}(\\psi(x_{m+i}))||^2 \\geq - R^2 \\sqrt{\\frac{2}{m}\\log\\left(\\frac{1}{\\delta}\\right)} .\n\\]\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{th : pac_inequality_2}]\nThe proof is similar to the previous one, just replace $P_{\\hat{V}_k(S_1)}$ by $P_{\\hat{V}_k^{\\bot}(S_1)}$ and use McDiarmid's inequality.\n\\end{proof}\n\n\n\\section{A theoretical analysis of kernel PCA: PAC-Bayes bounds}\\label{sec:pacb}\n\nThis section contains our main results which harness the power and flexibility of PAC-Bayes. 
We bring bounds of a new kind for kernel PCA: while our PAC bounds (in \\cref{sec:pac}) assessed, with a certain level of confidence, that kernel PCA is efficient, the next two theorems, on the contrary, make explicit the limitations we face when projecting onto an empirical subspace, therefore contributing to a better overall theoretical understanding.\n\n\\begin{theorem}\n\\label{th: pac_bayes_thm_proj}\nFor a finite data space $\\mathcal{X}$, for $\\alpha\\in\\mathbb{R}$, $\\delta\\in]0,1]$, if we perform PCA in the feature space defined by the Aronszajn mapping $(\\psi,\\mathcal{H})$ of a kernel $\\kappa$, then with probability at least $1-\\delta$ over random $m$-samples $S$, we have for all $k \\in [m]$, the expected square projection is bounded by:\n\\begin{align*}\n \\mathbb{E}_{\\mu}[||P_{\\hat{V}_k}(\\psi(x))||^2]\n &\\leq \\frac{1}{m}\\hat{\\lambda}^{\\leq k} + \\frac{\\log(1\/\\delta)}{m^{\\alpha}} + \\frac{R^4}{2m^{1-\\alpha}} .\n\\end{align*}\nThe optimal value for $\\alpha$ is $\\alpha_0= \\frac{1}{2}+\\frac{1}{2\\log(m)}\\log\\left( \\frac{2\\log(1\/\\delta)}{R^{4}} \\right)$.\nThis holds under the assumption that $\\| \\psi(x) \\| \\leq R$, for any $x \\in \\mathcal{X}$.\n\\end{theorem}\n\nThis theorem shows that the expected squared projection cannot improve on a quantity close to the partial sum of empirical eigenvalues. This demonstrates that measuring the efficiency of our projection through this empirical sum is therefore relevant.\n\nThe next theorem is intended in the same spirit, but holds for empirical squared residuals.\n\n\n\\begin{theorem}\n\\label{th: pac_bayes_thm_residual}\nFor a finite data space $\\mathcal{X}$, for $\\alpha\\in\\mathbb{R}$, $\\delta\\in]0,1]$, if we perform PCA in the feature space defined by the Aronszajn mapping $(\\psi,\\mathcal{H})$ of a kernel $\\kappa$, then with probability at least $1-\\delta$ over random $m$-samples $S$, we have for all $k \\in [m]$, the expected square residual is bounded by:\n\\begin{align*}\n \\mathbb{E}_{\\mu}[||P_{\\hat{V}_k^{\\bot}}(\\psi(x))||^2]\n &\\geq \\frac{1}{m}\\hat{\\lambda}^{>k} - \\frac{\\log(1\/\\delta)}{m^{\\alpha}} - \\frac{R^4}{2m^{1-\\alpha}}.\n\\end{align*}\nThe optimal value for $\\alpha$ is $\\alpha_0= \\frac{1}{2}+\\frac{1}{2\\log(m)}\\log\\left( \\frac{2\\log(1\/\\delta)}{R^{4}} \\right)$.\nThis holds under the assumption that $\\| \\psi(x) \\| \\leq R$, for any $x \\in \\mathcal{X}$.\n\\end{theorem}\n\n\\begin{remark}\n The assumption $|\\mathcal{X}|< +\\infty$ may appear to be restrictive at first glance. As a matter of fact, it covers the case of a bounded $\\mathcal{X}\\subseteq\\mathbb{R}^d$ if one decides to discretise $\\mathcal{X}$ into a finite grid $\\mathcal{G}$. With a large number of points on the grid, one can approximate $\\mathcal{X}$ efficiently and apply Theorems \\ref{th: pac_bayes_thm_proj} and \\ref{th: pac_bayes_thm_residual}, which provide bounds independent of the number of points inside $\\mathcal{G}$.\n\\end{remark}\n\nNumerical implementations of those two bounds are gathered in \\cref{sec: experiments}.\n\n\n\nWe now move to the proofs of \\cref{th: pac_bayes_thm_proj} and \\cref{th: pac_bayes_thm_residual}. We start with additional technical background.\n\n\\paragraph{Self-bounding functions.}\n\nWe use the notion of \\emph{self-bounding function} \\citep[presented for instance in][Definition 2]{boucheron2009} which allows us to deal with a certain type of exponential moment. 
This tool is at the core of the recent work from \\cite{haddouche2020}, to obtain PAC-Bayesian generalisation bounds when the loss is unbounded \\citep[a bounded loss being typically assumed in most of the PAC-Bayes literature, see the discussion of][and references therein]{haddouche2020}.\n\n \n\n\\begin{definition}[\\citealp{boucheron2009}, Definition 2]\n\\label{d: self-bounding}\n A function $f:\\mathcal{X}^m\\rightarrow\\mathbb{R}$ is said to be $(a,b)$-self-bounding with $(a,b)\\in\\left(\\mathbb{R}^{+}\\right)^2\\backslash\\{(0,0)\\}$, if there exist functions $f_i :\\mathcal{X}^{m-1}\\rightarrow \\mathbb{R}$ for every $i\\in[m]$ such that for all $i\\in[m]$ and $x\\in\\mathcal{X}^m$:\n \\[\n 0\\leq f(x) - f_i(x^{(i)}) \\leq 1 \n \\]\n and\n \\[\n \\sum_{i=1}^m \\left( f(x)-f_i(x^{(i)}) \\right) \\leq af(x) +b ,\n \\]\n where for all $1\\leq i \\leq m$, the removal of the $i$th entry is $x^{(i)}= (x_1,...,x_{i-1},x_{i+1},...,x_m)$.\n We denote by $\\texttt{SB}(a,b)$ the class of\n functions that satisfy this definition.\n\\end{definition}\n\n\nIn \\cite{boucheron2009}, the following bound was presented to deal with the exponential moment of a self-bounding function.\nLet $c_{+}:=\\max(c,0)$ denote the positive part of $c \\in \\mathbb{R}$. We define $c_{+}^{-1}:= +\\infty$ when $c_{+}=0$.\n\n\\begin{theorem}[\\citealp{boucheron2009}, Theorem 3.1]\n\\label{th: exp_inequality2009 }\n Let $Z=g(X_1,...,X_m)$ where $X_1,...,X_m$ are independent (not necessarily identically distributed) $\\mathcal{X}$-valued random variables.\n We assume that $\\mathbb{E}[Z]<+\\infty$. \n \n \n If $g\\in\\texttt{SB}(a,b)$, then defining $c=(3a-1)\/6$, \n for any $s\\in[0;c_{+}^{-1})$ we have:\n \\[ \n \\log\\left(\\mathbb{E}\\left[e^{s(Z-\\mathbb{E}[Z])} \\right] \\right) \\leq \\frac{\\left(a\\mathbb{E}[Z] +b \\right)s^2}{2(1-c_{+}s)}. \n \\]\n \n\\end{theorem}\n\n\n\n\\paragraph{PAC-Bayes.}\n\n We will consider a fixed learning problem with data space $\\mathcal{Z}$, set of predictors $\\mathcal{F}$, and loss function $\\ell : \\mathcal{F}\\times \\mathcal{Z} \\rightarrow \\mathbb{R}^{+} $.\n We denote by $\\mu$ a probability distribution over $\\mathcal{Z}$, and $s = (z_1,\\ldots,z_m)$ denotes a size-$m$ sample.\n We denote by $\\Sigma_{\\mathcal{F}}$ the considered $\\sigma$-algebra on $\\mathcal{F}$. \n Finally, we define for any $f\\in\\mathcal{F}$, $L(f)= \\mathbb{E}_{z\\sim \\mu}[\\ell(f,z)]$ and $\\hat{L}_s(f)=\\frac{1}{m}\\sum_{i=1}^m \\ell(f,z_i)$.\n\nLet us highlight that in supervised learning, for instance, $\\mathcal{Z} = \\mathcal{X}\\times\\mathcal{Y}$\nwhere $\\mathcal{X} \\subset \\mathbb{R}^d$ is a space of inputs, and $\\mathcal{Y}$ a space of labels. In this case predictors are functions $f:\\mathcal{X}\\to\\mathcal{Y}$. 
One may be interested in applying PCA over the input space $\\mathcal{X}$ to reduce the input dimension.\n\n\nWe use the PAC-Bayesian bound given by \\cite{rivasplata2020}, in which the authors allow the use of a data-dependent prior (we refer to the discussion and references therein for an introduction to data-dependent prior distributions in PAC-Bayes).\n\n\\begin{definition}[Stochastic kernels]\n A \\emph{stochastic kernel} from $\\mathcal{Z}^m$ to $\\mathcal{F}$ is defined as a mapping $Q: \\mathcal{Z}^m\\times \\Sigma_{\\mathcal{F}} \\rightarrow [0;1]$ where\n \\begin{itemize}\n \\item For any $B\\in \\Sigma_{\\mathcal{F}}$, the function $s=(z_1,...,z_m)\\mapsto Q(s,B)$ is measurable,\n \\item For any $s\\in\\mathcal{Z}^m$, the function $B\\mapsto Q(s,B)$ is a probability measure over $\\mathcal{F}$.\n \\end{itemize}\n We will denote by $\\texttt{Stoch}(\\mathcal{Z}^m,\\mathcal{F})$ the set of all stochastic kernels from $\\mathcal{Z}^m$ to $\\mathcal{F}$.\n\\end{definition}\n\n\\begin{definition}\n For any $Q\\in \\texttt{Stoch}(\\mathcal{Z}^m,\\mathcal{F}) $ and $s\\in\\mathcal{Z}^m$, we define $Q_s[L]:=\\mathbb{E}_{f\\sim Q_s}[L(f)]$ and $Q_s[\\hat{L}_s]:=\\mathbb{E}_{f\\sim Q_s}[\\hat{L}_s(f)]$. We generalise this definition to the case where we consider $S=(Z_1,...,Z_m)$ a random sample. Then we consider the random quantities $Q_S[L]$ and $Q_S[\\hat{L}_S]$.\n\\end{definition}\n\nFor the sake of clarity, we now present a slightly less general version of one of the main theorems of \\cite{rivasplata2020}. We define $\\mu^m := \\mu \\otimes\\cdots \\otimes \\mu$ ($m$ times).\n\n\\begin{theorem}[\\citealp{rivasplata2020}]\n\\label{th: rivasplata2020}\n For any $F:\\mathbb{R}^2\\rightarrow \\mathbb{R}$ convex, for any $Q^{0}\\in\\texttt{Stoch}(\\mathcal{Z}^m,\\mathcal{F})$, we define\n \\[ \\xi:= \\mathbb{E}_{S\\sim \\mu^m} \\mathbb{E}_{f\\sim Q^{0}_S}\\left[\\exp\\left(F(\\hat{L}_S(f),L(f))\\right)\\right] . \\]\n Then for any $Q\\in\\texttt{Stoch}(\\mathcal{Z}^m,\\mathcal{F})$, any $\\delta\\in ]0;1]$, with probability at least $1-\\delta$ over the random draw of the sample $S\\sim \\mu^m$\n \\[ F(Q_S[\\hat{L}_S],Q_S[L]) \\leq \\operatorname{KL}(Q_S,Q_S^{0}) + \\log(\\xi\/\\delta) . \\]\n\\end{theorem}\n\nFor a fixed PD kernel $\\kappa$, $(\\psi,\\mathcal{H})$ is the associated Aronszajn mapping.\nLet us now consider a finite data space $\\mathcal{X}$. We therefore have\n\\[N_\\mathcal{H}:=\\dim(\\mathcal{H})< +\\infty.\\] \nFor the sake of clarity, we will assume that $\\mathcal{H}=\\mathbb{R}^{N_\\mathcal{H}}$ endowed with the Euclidean inner product. The space is provided with the Borel $\\sigma$-algebra $\\mathcal{B}(\\mathbb{R}^{N_\\mathcal{H}})$.\n\n\nWe assume that our data distribution $\\psi(\\mu)$ over $\\mathcal{H}$ is bounded in $\\mathcal{H}$ by a scalar $R$:\n\\begin{equation*}\n\\forall x\\in\\mathcal{X}, \\hspace{3mm}\n||\\psi(x)||\\leq R .\n\\end{equation*}\n\\begin{remark}\n Note that this assumption is not especially restrictive. Indeed, if $\\kappa$ is bounded by $C$, then for all $x\\in\\mathcal{X}, ||\\psi(x)||^2=\\kappa(x,x)\\leq C$.\n\\end{remark}\n\n\nFirst, note that the function $||P_{V_k}(\\psi(x))||^2$ has a quadratic dependency on the coordinates of $\\psi(x)$. However, it would be far more convenient to consider linear functions.\n\n\\begin{proposition}[\\citealp{shawe-taylor2005eigenspectrum}, Prop. 
11]\n\\label{p:shawetaylor_prop_11}\nLet $\\hat{V}$ be the subspace spanned by some subset $I$ of the set of eigenvectors of the correlation matrix $C(S)$ associated to the kernel matrix $K(S)$. There exists a Hilbert space mapping $\\hat{\\psi}:\\mathcal{X}\\rightarrow \\mathbb{R}^{N_\\mathcal{H}^2}$ such that the projection norm $||P_{\\hat{V}}(\\psi(x))||^2$ is a linear function $\\hat{f}$ in $\\mathbb{R}^{N_\\mathcal{H}^2}$ and, for all $(x,z)\\in\\mathcal{X}^2$,\n\\[\n\\langle \\hat{\\psi}(x),\\hat{\\psi}(z)\\rangle = \\kappa(x,z)^2.\n\\]\nFurthermore, if $|I| = k$, then the 2-norm of the function $\\hat{f}$ is $\\sqrt{k}$.\n\\end{proposition} \n\n\\begin{proof}\nLet $X=U\\Sigma V'$ be the SVD of the sample matrix $X$ (where each column represents a data point).\nThe projection norm is then given by:\n\\[|| P_{\\hat{V}}(\\psi(x))||^2 = \\psi(x)' U(I)U(I)' \\psi(x) \\]\nwhere $U(I)\\in\\mathbb{R}^{N_\\mathcal{H}\\times k}$ contains the $k$ columns of $U$ in the set $I$ (or equivalently $U(I)\\in\\mathbb{R}^{N_\\mathcal{H}\\times{N_\\mathcal{H}}}$ filled with zeros). If we define $w:= U(I)U(I)'\\in\\mathbb{R}^{N_\\mathcal{H}^2}$ then we have:\n\\[ \n|| P_{\\hat{V}}(\\psi(x))||^2 \n= \\sum_{i,j=1}^{N_\\mathcal{H}} w_{i,j}\\psi(x)_i \\psi(x)_j \n= \\sum_{i,j=1}^{N_\\mathcal{H}} w_{i,j} \\hat{\\psi}(x)_{i,j} \n= \\langle w, \\hat{\\psi}(x)\\rangle ,\n\\]\nwhere for all $i$, $\\psi(x)_i$ represents the $i$-th coordinate of $\\psi(x)$ and for all $x\\in\\mathcal{X},\\hat{\\psi}(x)= (\\psi(x)_i\\psi(x)_j)_{i,j=1,\\ldots,N_\\mathcal{H}}$.\n\nThus, the projection norm is a linear function $\\hat{f}$ in $\\mathbb{R}^{N_\\mathcal{H}^2}$ with the mapping $\\hat{\\psi}$. Note that considering the 2-norm of $\\hat{f}$ is equivalent to considering the 2-norm of $w$.\nThen, if we denote by $(u_i)_i$ the columns of $U$, we have:\n\\begin{align*}\n ||\\hat{f}||^2 \n & = \\sum_{i,j=1}^{N_\\mathcal{H}} w_{i,j}^2= ||U(I)U'(I)||^2 \\\\\n & = \\left\\langle\\sum_{i\\in I}u_i u_i', \\sum_{j\\in I}u_j u_j' \\right\\rangle \\\\\n &= \\sum_{i,j\\in I} (u'_i u_j)^2 \\\\ &= k.\n\\end{align*}\nHence the 2-norm of $\\hat{f}$ is $\\sqrt{k}$. Finally, for $(x,z)\\in\\mathcal{X}^2$:\n\\begin{align*}\n \\kappa(x,z)^2\n &= \\langle \\psi(x),\\psi(z) \\rangle ^2 \n = \\left( \\sum_{i=1}^{N_\\mathcal{H}}\\psi(x)_i \\psi(z)_i \\right)^2 \\\\\n &= \\sum_{i,j=1}^{N_\\mathcal{H}} \\psi(x)_i \\psi(z)_i \\psi(x)_j \\psi(z)_j \\\\\n &= \\sum_{i,j=1}^{N_\\mathcal{H}} (\\psi(x)_i \\psi(x)_j) (\\psi(z)_i \\psi(z)_j)\\\\\n &=\\langle \\hat{\\psi}(x), \\hat{\\psi}(z) \\rangle.\n\\end{align*}\n\\end{proof}\nNow, recall that for a fixed $k$, $P_{V_k}$ minimises the shortfall between the squared projection norm of $\\psi(x)$ and $||\\psi(x)||^2$ for any $x$ over the space of projections onto a subspace of dimension at most $k$. We therefore introduce the following learning framework.
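\nBefore spelling the framework out, the two identities of Proposition \\ref{p:shawetaylor_prop_11} can be checked numerically. The minimal sketch below (in Python) does so for a hypothetical explicit feature map $\\psi:\\mathbb{R}^2\\to\\mathbb{R}^3$; the map and all numbers are illustrative assumptions, not part of the result.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(1)\n\ndef psi(x):\n    # a hand-picked finite-dimensional feature map (hypothetical)\n    return np.array([x[0], x[1], x[0] * x[1]])\n\ndef kappa(x, z):\n    # the kernel induced by psi\n    return psi(x) @ psi(z)\n\ndef psi_hat(x):\n    # psi_hat(x) = (psi(x)_i psi(x)_j)_{i,j}, flattened to R^{N^2}\n    p = psi(x)\n    return np.outer(p, p).ravel()\n\nx, z = rng.normal(size=2), rng.normal(size=2)\nassert np.isclose(psi_hat(x) @ psi_hat(z), kappa(x, z) ** 2)\n\n# The weight w = U(I) U(I)' of a k-dimensional projection has 2-norm sqrt(k).\nU, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # orthonormal columns\nk = 2\nw = U[:, :k] @ U[:, :k].T\nassert np.isclose(np.linalg.norm(w), np.sqrt(k))\nprint('both identities verified')\n\\end{verbatim}\n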
\nThe data space is $\\mathcal{X}$ with the probability distribution $\\mu$, and $\\mathcal{X}$ is endowed with a $\\sigma$-algebra $\\Sigma_\\mathcal{X}$.\n\nThe goal is to minimise the loss over the set of linear predictors in $\\mathbb{R}^{N_\\mathcal{H}^2}$ corresponding to projections onto a $k$-dimensional subspace of $\\mathbb{R}^{N_\\mathcal{H}}$ \n\\begin{align*}\n \\mathcal{F}_{k} &:= \\left\\{ f_{w}: x \\mapsto \\langle w , \\hat{\\psi}(x)\\rangle \\mid \\exists V\\subseteq \\mathbb{R}^{N_\\mathcal{H}}, \\text{dim}(V)=k, \\; \\forall x\\in \\mathcal{X}, f_{w}(x)= ||P_V(\\psi(x))||^2 \\right\\}.\n\\end{align*}\n\nBecause $k$ may vary between $1$ and $N_{\\mathcal{H}}$, we also define the space $\\mathcal{F}$ such that for all $k$, $\\mathcal{F}_k\\subseteq\\mathcal{F}$:\n\\[ \\mathcal{F} := \\left\\{ f_{w}: x \\mapsto \\langle w , \\hat{\\psi}(x)\\rangle \\mid \\exists V\\subseteq \\mathbb{R}^{N_\\mathcal{H}}, \\forall x\\in \\mathcal{X}, f_{w}(x)= ||P_V(\\psi(x))||^2 \\right\\}. \\] \nWhen needed, we will identify $f_w$ with $w$.\n\nThe loss function is $\\ell : \\mathcal{F} \\times \\mathcal{X} \\to [0,R^2]$ such that for $f_w \\in \\mathcal{F}$ and $ x\\in\\mathcal{X}$:\n\\[ \n\\ell(f_w,x)= ||\\psi(x)||^2 - f_w(x) .\n\\]\n\n\nThis loss is indeed non-negative: every $f_w\\in\\mathcal{F}$ represents the squared norm of a projection in $\\mathbb{R}^{N_\\mathcal{H}}$, so for all $f_w\\in\\mathcal{F}$ and $x\\in\\mathcal{X}$, $f_w(x)\\leq ||\\psi(x)||^2$.\n\n\n\\begin{remark}\nIn other contexts and tasks, other loss functions are used. The reason the loss function defined above makes sense in our task is given by Proposition \\ref{p:courant-fischer}. Indeed, we know that if $f$ is a projector onto a space $V$ of dimension $k$, then we have $\\ell(f,x)=||P_{V}^{\\bot}(\\psi(x))||^{2} $ and moreover the expected squared residual $\\lambda^{>k}$ satisfies:\n\\[ \\lambda^{>k} = \\min_{\\dim(V)=k} \\mathbb{E}_{\\mu}[||P_{V}^\\bot(\\psi(x))||^2] = \\mathbb{E}_{\\mu} [ ||P_{V_k}^{\\bot}(\\psi(x))||^2]. \\]\n\\end{remark}\nWe assume $|\\mathcal{X}|< +\\infty$.\nTo apply our PAC-Bayesian theorem (\\cref{th: rivasplata2020}) we need to introduce an adapted stochastic kernel. For a fixed $k$, we set $Q^k, P^k$ as follows, $\\forall s\\in\\mathcal{X}^m$, $\\forall B\\in \\Sigma_{\\mathcal{F}_{k}}$:\n\\[ Q^k(s,B)= \\mathds{1}\\left\\{ f_{\\hat{w}_k(s)}\\in B \\right\\}, \\]\n\\[ P^k(s,B)= \\mathds{1}\\left\\{ f_{\\hat{w}^{\\bot}_k(s)}\\in B \\right\\}, \\]\nwhere $\\Sigma_{\\mathcal{F}_k}$ is the $\\sigma$-algebra on $\\mathcal{F}_k$, and: \n\\begin{itemize}\n \\item For all $k$, the vector $\\hat{w}_k(s)$ of $\\mathbb{R}^{N_{\\mathcal{H}}^2}$ is such that $f_{\\hat{w}_k(s)}= ||P_{\\hat{V}_k(s)}(.)||^2 $ where $\\hat{V}_k(s)$ is the $k$-dimensional subspace defined in Definition \\ref{def: subspaces} obtained from the sample $s$;\n \\item For all $k$, the vector $\\hat{w}^{\\bot}_k(s)$ of $\\mathbb{R}^{N_{\\mathcal{H}}^2}$ is such that $f_{\\hat{w}^{\\bot}_k(s)}= ||P_{\\hat{V}^{\\bot}_k(s)}(.)||^2 $ where $\\hat{V}^{\\bot}_k(s)$ is the orthogonal complement of $\\hat{V}_k(s)$.\n\\end{itemize}\nWe need the following technical results.\n\\begin{theorem}\n\\label{th: measurability_kernel_Q}\nFor all $k$, $Q^k$ is a stochastic kernel. 
\n\\end{theorem}\n\n\\begin{proof}\nThe proof is postponed to \\cref{sec:proofs}.\n\\end{proof}\n\n \\begin{theorem}\n \\label{th: measurability_kernel_P}\n For all $k$, $P^k$ is a stochastic kernel.\n \\end{theorem}\n\n\\begin{proof}\nThe proof is postponed to Section \\ref{sec:proofs}.\n\\end{proof}\n\n\\begin{remark}\nThe proof of Theorem \\ref{th: measurability_kernel_Q} exploits the hypothesis $|\\mathcal{X}|< +\\infty$. Indeed, the fact that $\\mathcal{H}$ is finite-dimensional allows us to consider well-defined symmetric matrices and to exploit a result from \\cite{wilcox1972}.\n\\end{remark}\n\n\n\n\nNow we prove \\cref{th: pac_bayes_thm_proj}, which we recall here for convenience: for a finite data space $\\mathcal{X}$, for $\\alpha\\in\\mathbb{R}$, $\\delta\\in]0,1]$, for any $1\\leq k\\leq m$, we have with probability $1-\\delta$ over the random $m$-sample $S$\n\\begin{align*}\n \\mathbb{E}_{\\mu}[||P_{\\hat{V}_k}(\\psi(x))||^2]\n &\\leq \\frac{1}{m}\\sum_{i=1}^k \\hat{\\lambda}_i + \\frac{\\log(1\/\\delta)}{m^{\\alpha}} + \\frac{R^4}{2m^{1-\\alpha}},\n\\end{align*}\nand the optimal value for $\\alpha$ is $\\alpha_0= \\frac{1}{2}+\\frac{1}{2\\log(m)}\\log\\left( \\frac{2\\log(1\/\\delta)}{R^{4}} \\right)$.\n\n\n\\begin{proof}[Proof of \\cref{th: pac_bayes_thm_proj}]\n\nLet $k\\in[m]$.\nWe first apply \\cref{th: rivasplata2020} with $F:(x,y)\\mapsto m^{\\alpha}(y-x)$ and the stochastic kernel $P^k$ (well defined thanks to \\cref{th: measurability_kernel_P}) as both prior and posterior, which nullifies the KL-divergence term. \nWe then have, with probability $1-\\delta$,\n\\[ m^{\\alpha}\\left( P^k_S(L(f))- P^k_S(\\hat{L}_S(f))\\right) \\leq \\operatorname{KL}(P^k_S,P_S^{k}) + \\log(\\xi\/\\delta) \\]\nwhere $\\xi:= \\mathbb{E}_{S\\sim \\mu^m} \\mathbb{E}_{f\\sim P^{k}_S}\\left[\\exp\\left(m^{\\alpha}(L(f)-\\hat{L}_S(f))\\right)\\right]$.\n\nNotice that for any sample $S$, $P^k_S$ is the Dirac measure in $f_{\\hat{w}^{\\bot}_k(S)}$ \\emph{i.e.} the function representing the squared norm projection $||P_{\\hat{V}^{\\bot}_k(S)}(.)||^2$. 
Hence $P^k_S(L(f))= L(f_{\\hat{w}^{\\bot}_k(S)})$, $P^k_S(\\hat{L}_S(f))= \\hat{L}_S(f_{\\hat{w}^{\\bot}_k(S)})$.\n\nFinally we have\n \\[ m^{\\alpha}\\left(L(f_{\\hat{w}^{\\bot}_k(S)})- \\hat{L}_S(f_{\\hat{w}^{\\bot}_k(S)})\\right) \\leq \\log(\\xi\/\\delta) \\]\nhence:\n\\[ m^{\\alpha}\\left(\\mathbb{E}_{\\mu}\\left[ ||\\psi(x)||^2- ||P_{\\hat{V}^{\\bot}_k(S)}(\\psi(x))||^2 \\right] - \\mathbb{E}_{\\hat{\\mu}}\\left[ ||\\psi(x)||^2- ||P_{\\hat{V}^{\\bot}_k(S)}(\\psi(x))||^2 \\right]\\right) \\leq \\log(\\xi\/\\delta) \\] \nwhere $\\hat{\\mu}$ is the empirical distribution over the $m$-sample $S$.\n\nFurthermore, because we are considering orthogonal projections, we can see that for any $x\\in\\mathcal{X}$, we have \n\\[ \n||\\psi(x)||^2- ||P_{\\hat{V}^{\\bot}_k(S)}(\\psi(x))||^2 = ||P_{\\hat{V}_k(S)}(\\psi(x))||^2.\n\\]\nSo\n\\begin{align*}\n\\mathbb{E}_{\\mu}\\left[ ||P_{\\hat{V}_k(S)}(\\psi(x))||^2 \\right] &\\leq \\mathbb{E}_{\\hat{\\mu}}\\left[ ||P_{\\hat{V}_k(S)}(\\psi(x))||^2 \\right] +\\frac{\\log(\\xi\/\\delta)}{m^{\\alpha}}.\n\\end{align*}\n\nThanks to Proposition \\ref{p: characterization_eigenvalues}, we know that $\\mathbb{E}_{ \\hat{\\mu}}\\left[ ||P_{\\hat{V}_k(S)}(\\psi(x))||^2 \\right]= \\frac{1}{m}\\sum_{i=1}^k \\hat{\\lambda}_i$, which yields\n\\begin{align*}\n\\mathbb{E}_{\\mu}\\left[ ||P_{\\hat{V}_k(S)}(\\psi(x))||^2 \\right] &\\leq \\frac{1}{m}\\sum_{i=1}^k \\hat{\\lambda}_i +\\frac{\\log(\\xi\/\\delta)}{m^{\\alpha}}.\n\\end{align*}\nFinally, we need to control $\\log(\\xi)$. To do so, we use \\cref{th: exp_inequality2009 }. We first recall that\n\\begin{align*}\n\\xi &:= \\mathbb{E}_{S\\sim \\mu^m} \\mathbb{E}_{f\\sim P^{k}_S}\\left[\\exp\\left(m^{\\alpha}(L(f)-\\hat{L}_S(f))\\right)\\right] \\\\\n& = \\mathbb{E}_{S\\sim \\mu^m} \\left[\\exp\\left(m^{\\alpha}(L(f_{\\hat{w}^{\\bot}_k})-\\hat{L}_S(f_{\\hat{w}^{\\bot}_k}))\\right)\\right] \\\\\n&= \\mathbb{E}_{S\\sim \\mu^m} \\left[\\exp\\left(m^{\\alpha}(\\mathbb{E}_{\\mu}\\left[ ||P_{\\hat{V}_k(S)}(\\psi(x))||^2 \\right] - \\mathbb{E}_{\\hat{\\mu}}\\left[ ||P_{\\hat{V}_k(S)}(\\psi(x))||^2 \\right] )\\right)\\right].\n\\end{align*}\nYet, thanks to Propositions \\ref{p:courant-fischer} and \\ref{p: characterization_eigenvalues}, we know that:\n\\begin{align*}\n \\mathbb{E}_{\\mu}\\left[ ||P_{\\hat{V}_k(S)}(\\psi(x))||^2 \\right] \\leq \\max_{\\dim(V)=k} \\mathbb{E}_\\mu[||P_{V}(\\psi(x))||^2] = \\mathbb{E}_{\\mu}\\left[ ||P_{V_k}(\\psi(x))||^2 \\right]\n\\end{align*}\nand \n\\begin{align*}\n \\mathbb{E}_{\\hat{\\mu}}\\left[ ||P_{\\hat{V}_k(S)}(\\psi(x))||^2 \\right]& = \\max_{\\dim(V)=k} \\mathbb{E}_{\\hat{\\mu}}[||P_{V}(\\psi(x))||^2]\\\\\n & \\geq \\mathbb{E}_{\\hat{\\mu}}\\left[ ||P_{V_k}(\\psi(x))||^2 \\right].\n\\end{align*}\nThus we have \n\\[ \\mathbb{E}_{\\mu}\\left[ ||P_{\\hat{V}_k(S)}(\\psi(x))||^2 \\right] - \\mathbb{E}_{\\hat{\\mu}}\\left[ ||P_{\\hat{V}_k(S)}(\\psi(x))||^2 \\right] \\leq \\mathbb{E}_{\\mu}\\left[ ||P_{V_k}(\\psi(x))||^2 \\right] - \\mathbb{E}_{\\hat{\\mu}}\\left[ ||P_{V_k}(\\psi(x))||^2 \\right].\\]\n\\begin{remark}\n \n The interest of this manoeuvre is to replace the projector $P_{\\hat{V}_k(S)}$, which is data-dependent, by $P_{V_k}$, which is not. 
Doing so, if we set $Y= \\frac{1}{m}\\sum_{i=1}^m Y_i $ and for all $i$, $Y_i=||P_{V_k}(\\psi(x_i))||^2 $ (where the $Y_i$ are iid), we can write properly: \n\\[\\mathbb{E}_{\\mu}\\left[ ||P_{V_k}(\\psi(x))||^2 \\right] - \\mathbb{E}_{\\hat{\\mu}}\\left[ ||P_{V_k}(\\psi(x))||^2 \\right]= \\mathbb{E}_{\\mathcal{S}}[Y] -Y \\]\nwhile $\\mathbb{E}_{\\mu}\\left[ ||P_{\\hat{V}_k(S)}(\\psi(x))||^2 \\right] \\neq \\mathbb{E}_{\\mathcal{S}}\\left[\\mathbb{E}_{\\hat{\\mu}}\\left[ ||P_{\\hat{V}_k(S)}(\\psi(x))||^2 \\right]\\right]$.\n\\end{remark}\nSo, finally we have\n\\[\\xi \\leq \\mathbb{E}_{S\\sim \\mu^m} \\left[\\exp\\left(m^{\\alpha}(\\mathbb{E}_{\\mathcal{S}}[Y]- Y )\\right)\\right]. \\] \nWe define the function $f : \\mathcal{X}^{m} \\to \\mathbb{R}$ as\n \\[\n f:z \\mapsto \\frac{1}{R^2}\\hspace{1mm}\\sum_{i=1}^{m}\\left(R^2-||P_{V_k}(\\psi(z_i))||^2\\right)\n \\hspace{7mm}\\text{for}\\hspace{2mm} z=(z_1,...,z_m)\\in\\mathcal{X}^{m}.\n \\]\n We define $Z=f(X_1,...,X_m)$ and notice that $\\mathbb{E}_{\\mathcal{S}}[Y]- Y= \\frac{R^2}{m}\\left( Z- \\mathbb{E}_{\\mathcal{S}}[Z] \\right)$. \n We first prove that $f\\in\\texttt{SB}(\\beta,(1-\\beta)m)$ (cf. Def \\ref{d: self-bounding}) for any $\\beta\\in[0,1]$.\n \n Indeed, for all $1\\leq i\\leq m$, we define\n \\[\n f_i(z^{(i)})= \\frac{1}{R^2}\\sum_{j\\neq i} \\left(R^2-||P_{V_k}(\\psi(z_j))||^2\\right)\n \\]\n where $z^{(i)}=(z_1,...,z_{i-1},z_{i+1},...,z_{m})\\in\\mathcal{X}^{m-1}$ for any $z\\in\\mathcal{X}^{m}$ and for any $i$.\n \n Then, since $0\\leq ||P_{V_k}(\\psi(z_i))||^2 \\leq R^2$ for all $i$, we have \n \\[\n 0\\leq f(z)-f_i(z^{(i)})= \\frac{R^2-||P_{V_k}(\\psi(z_i))||^2}{R^2} \\leq 1 .\n \\]\nMoreover, because $f(z)\\leq m$ for any $z\\in\\mathcal{X}^m$, we have\n\\begin{align*}\n \\sum_{i=1}^m f(z)-f_i(z^{(i)}) \n & = \\sum_{i=1}^m \\frac{R^2-||P_{V_k}(\\psi(z_i))||^2}{R^2} \\\\\n & = f(z) =\\beta f(z) + (1-\\beta)f(z)\n \\leq \\beta f(z) + (1-\\beta)m.\n\\end{align*}\nSince this holds for any $z\\in\\mathcal{X}^{m}$, this proves that $f$ is $(\\beta,(1-\\beta)m)$-self-bounding.\n\nNow, to complete the proof, we will use \\cref{th: exp_inequality2009 }. Because $Z$ is $(1\/3,(2\/3)m)$-self-bounding (taking $\\beta=1\/3$, so that $c=(3a-1)\/6=0$), we have for all $s\\in\\mathbb{R}^{+}$\n\\[ \n\\log\\left(\\mathbb{E}_{\\mathcal{S}}\\left[e^{s(Z-\\mathbb{E}_{\\mathcal{S}}[Z])} \\right] \\right) \\leq \\frac{\\left(\\frac{1}{3}\\mathbb{E}_{\\mathcal{S}}[Z] +\\frac{2m}{3} \\right)s^2}{2}. \n\\]\nAnd since $Z\\leq m$:\n\\begin{align*}\n \\mathbb{E}_{\\mathcal{S}}\\left[ e^{m^\\alpha(\\mathbb{E}_{\\mathcal{S}}[Y]-Y)}\\right] \n &= \\mathbb{E}_{\\mathcal{S}}\\left[ e^{\\frac{R^2}{m^{1-\\alpha}}(Z-\\mathbb{E}_{\\mathcal{S}}[Z])}\\right] \n & \\\\\n &\\leq \\exp\\left( \\frac{\\left(\\frac{1}{3}\\mathbb{E}_{\\mathcal{S}}[Z] +\\frac{2m}{3} \\right)R^4}{2m^{2-2\\alpha}} \\right)\n & (\\emph{\\cref{th: exp_inequality2009 }})\\\\\n & \\leq \\exp\\left( \\frac{R^4}{2m^{1-2\\alpha}} \\right).\n & (\\textrm{since } \\mathbb{E}_{\\mathcal{S}}[Z]\\leq m).\n\\end{align*}\nSo, finally, we have\n\\[ \\frac{\\log(\\xi)}{m^{\\alpha}} \\leq \\frac{R^4}{2m^{1-\\alpha}}, \\]\nhence the final result.\n\nTo obtain the optimal value of $\\alpha$, we simply study the derivative of the univariate function $f_{R,\\delta}(\\alpha):= \\frac{\\log(1\/\\delta)}{m^{\\alpha}} + \\frac{R^{4}}{2m^{1-\\alpha}}$: the derivative vanishes when $m^{1-2\\alpha}= R^{4}\/(2\\log(1\/\\delta))$, which gives $\\alpha_0$ after taking logarithms.\n\\end{proof}\n\nWe now prove \\cref{th: pac_bayes_thm_residual}, which deals with the expected squared residuals. 
We recall the theorem: for a finite data space $\\mathcal{X}$, for $\\alpha\\in\\mathbb{R}$, $\\delta\\in]0,1]$, for any $1\\leq k\\leq m$, we have with probability $1-\\delta$ over the random $m$-sample $S$:\n\\begin{align*}\n \\mathbb{E}_{\\mu}[||P_{\\hat{V}^{\\bot}_k}(\\psi(x))||^2]\n &\\geq \\frac{1}{m}\\sum_{i=k+1}^m \\hat{\\lambda}_i - \\frac{\\log(1\/\\delta)}{m^{\\alpha}} - \\frac{R^4}{2m^{1-\\alpha}},\n\\end{align*}\nand the optimal value for $\\alpha$ is $\\alpha_0= \\frac{1}{2}+\\frac{1}{2\\log(m)}\\log\\left( \\frac{2\\log(1\/\\delta)}{R^{4}} \\right)$.\n\n\n\\begin{proof}[Proof of \\cref{th: pac_bayes_thm_residual}]\n The proof is similar to the one of \\cref{th: pac_bayes_thm_proj} but it rather involves the stochastic kernel $Q^k$. Let $k\\in[m]$. We first apply \\cref{th: rivasplata2020} with $F:(x,y)\\mapsto m^{\\alpha}(x-y)$ and the stochastic kernel $Q^k$ (cf. Theorem \\ref{th: measurability_kernel_Q}) as both prior and posterior, which nullifies the KL-divergence term. We then have with probability $1-\\delta$:\n\\[ m^{\\alpha}\\left( Q^k_S(\\hat{L}_S(f))-Q^k_S(L(f))\\right) \\leq \\operatorname{KL}(Q^k_S,Q_S^{k}) + \\log(\\xi\/\\delta) \\] \nwhere $\\xi:= \\mathbb{E}_{S\\sim \\mu^m} \\mathbb{E}_{f\\sim Q^{k}_S}\\left[\\exp\\left(m^{\\alpha}(\\hat{L}_S(f)-L(f))\\right)\\right]$.\n\nWe notice that for any sample $S$, $Q^k_S$ is the Dirac measure in $f_{\\hat{w}_k(S)}$ i.e. the function representing the squared norm projection $||P_{\\hat{V}_k(S)}(.)||^2$. Hence $Q^k_S(L(f))= L(f_{\\hat{w}_k(S)})$, $Q^k_S(\\hat{L}_S(f))= \\hat{L}_S(f_{\\hat{w}_k(S)}) $.\nFinally we have:\n \\[ m^{\\alpha}\\left( \\hat{L}_S(f_{\\hat{w}_k(S)})-L(f_{\\hat{w}_k(S)})\\right) \\leq \\log(\\xi\/\\delta) \\]\nhence\n\\[ m^{\\alpha}\\left(\\mathbb{E}_{\\hat{\\mu}}\\left[ ||\\psi(x)||^2- ||P_{\\hat{V}_k(S)}(\\psi(x))||^2 \\right] - \\mathbb{E}_{\\mu}\\left[ ||\\psi(x)||^2- ||P_{\\hat{V}_k(S)}(\\psi(x))||^2 \\right]\\right) \\leq \\log(\\xi\/\\delta), \\] \nwhere $\\hat{\\mu}$ is the empirical distribution over the $m$-sample $S$.\n\nFurthermore, because we are considering orthogonal projections, we remark that for any $x\\in\\mathcal{X}$, we have: \n\\[ ||\\psi(x)||^2- ||P_{\\hat{V}_k(S)}(\\psi(x))||^2 = ||P_{\\hat{V}^{\\bot}_k(S)}(\\psi(x))||^2. \\]\nSo we have, multiplying by $-1$:\n\\begin{align*}\n\\mathbb{E}_{\\mu}\\left[ ||P_{\\hat{V}^{\\bot}_k(S)}(\\psi(x))||^2 \\right] &\\geq \\mathbb{E}_{\\hat{\\mu}}\\left[ ||P_{\\hat{V}^{\\bot}_k(S)}(\\psi(x))||^2 \\right] -\\frac{\\log(\\xi\/\\delta)}{m^{\\alpha}}.\n\\end{align*}\nThanks to Proposition \\ref{p: characterization_eigenvalues}, we know that $\\mathbb{E}_{ \\hat{\\mu}}\\left[ ||P_{\\hat{V}^{\\bot}_k(S)}(\\psi(x))||^2 \\right]= \\frac{1}{m}\\sum_{i=k+1}^m \\hat{\\lambda}_i$ and then we have: \n\\begin{align*}\n\\mathbb{E}_{x\\sim \\mu}\\left[ ||P_{\\hat{V}^{\\bot}_k(S)}(\\psi(x))||^2 \\right] &\\geq \\frac{1}{m}\\sum_{i=k+1}^m \\hat{\\lambda}_i -\\frac{\\log(\\xi\/\\delta)}{m^{\\alpha}}.\n\\end{align*}\nFinally we need to control $\\log(\\xi)$. To do so, we use \\cref{th: exp_inequality2009 }. 
We first recall that:\n\\begin{align*}\n\\xi &:= \\mathbb{E}_{S\\sim \\mu^m} \\mathbb{E}_{f\\sim Q^{k}_S}\\left[\\exp\\left(m^{\\alpha}(\\hat{L}_S(f)-L(f))\\right)\\right] \\\\\n& = \\mathbb{E}_{S\\sim \\mu^m} \\left[\\exp\\left(m^{\\alpha}(\\hat{L}_S(f_{\\hat{w}_k})-L(f_{\\hat{w}_k}))\\right)\\right] \\\\\n&= \\mathbb{E}_{S\\sim \\mu^m} \\left[\\exp\\left(m^{\\alpha}\\left(\\mathbb{E}_{\\hat{\\mu}}\\left[ ||P_{\\hat{V}^{\\bot}_k(S)}(\\psi(x))||^2 \\right] - \\mathbb{E}_{\\mu}\\left[ ||P_{\\hat{V}^{\\bot}_k(S)}(\\psi(x))||^2 \\right] \\right)\\right)\\right].\n\\end{align*}\nYet, thanks to Propositions \\ref{p:courant-fischer} and \\ref{p: characterization_eigenvalues}, we know that:\n\\begin{align*}\n \\mathbb{E}_{\\mu}\\left[ ||P_{\\hat{V}^{\\bot}_k(S)}(\\psi(x))||^2 \\right] \\geq \\min_{\\dim(V)=k} \\mathbb{E}_\\mu[||P_{V^{\\bot}}(\\psi(x))||^2] = \\mathbb{E}_{\\mu}\\left[ ||P_{V^{\\bot}_k}(\\psi(x))||^2 \\right]\n\\end{align*}\nand \n\\begin{align*}\n \\mathbb{E}_{\\hat{\\mu}}\\left[ ||P_{\\hat{V}^{\\bot}_k(S)}(\\psi(x))||^2 \\right]& = \\min_{\\dim(V)=k} \\mathbb{E}_{\\hat{\\mu}}[||P_{V^\\bot}(\\psi(x))||^2]\\\\\n & \\leq \\mathbb{E}_{\\hat{\\mu}}\\left[ ||P_{V^{\\bot}_k}(\\psi(x))||^2 \\right].\n\\end{align*}\nThus we have \n\\[ \\mathbb{E}_{\\hat{\\mu}}\\left[ ||P_{\\hat{V}^{\\bot}_k(S)}(\\psi(x))||^2 \\right] - \\mathbb{E}_{\\mu}\\left[ ||P_{\\hat{V}^{\\bot}_k(S)}(\\psi(x))||^2 \\right] \\leq \\mathbb{E}_{\\hat{\\mu}}\\left[ ||P_{V^{\\bot}_k}(\\psi(x))||^2 \\right] - \\mathbb{E}_{\\mu}\\left[ ||P_{V^{\\bot}_k}(\\psi(x))||^2 \\right].\\]\n\n\\begin{remark}\n The interest of this manoeuvre is to replace the projector $P_{\\hat{V}^{\\bot}_k(S)}$, which is data-dependent, by $P_{V^{\\bot}_k}$, which is not. Doing so, if we set $Y= \\frac{1}{m}\\sum_{i=1}^m Y_i $ and for all $i$, $Y_i=||P_{V^{\\bot}_k}(\\psi(x_i))||^2 $ (where the $Y_i$ are iid), we can write properly: \n\\[\\mathbb{E}_{\\hat{\\mu}}\\left[ ||P_{V^{\\bot}_k}(\\psi(x))||^2 \\right] - \\mathbb{E}_{\\mu}\\left[ ||P_{V^{\\bot}_k}(\\psi(x))||^2 \\right]= Y-\\mathbb{E}_{\\mathcal{S}}[Y] \\]\nwhile $\\mathbb{E}_{\\mu}\\left[ ||P_{\\hat{V}^{\\bot}_k(S)}(\\psi(x))||^2 \\right] \\neq \\mathbb{E}_{\\mathcal{S}}\\left[\\mathbb{E}_{\\hat{\\mu}}\\left[ ||P_{\\hat{V}^{\\bot}_k(S)}(\\psi(x))||^2 \\right]\\right]$.\n\\end{remark}\nSo, finally, we have:\n\\[\\xi \\leq \\mathbb{E}_{S\\sim \\mu^m} \\left[\\exp\\left(m^{\\alpha}(Y-\\mathbb{E}_{\\mathcal{S}}[Y] )\\right)\\right] . \\] \nWe define the function $f : \\mathcal{X}^{m} \\to \\mathbb{R}$ as\n \\[\n f:z \\mapsto \\frac{1}{R^2}\\hspace{1mm}\\sum_{i=1}^{m}||P_{V^{\\bot}_k}(\\psi(z_i))||^2\n \\hspace{7mm}\\text{for}\\hspace{2mm} z=(z_1,...,z_m)\\in\\mathcal{X}^{m}.\n \\]\n\n We define $Z=f(X_1,...,X_m)$ and notice that $Y-\\mathbb{E}_{\\mathcal{S}}[Y]= \\frac{R^2}{m}\\left( Z- \\mathbb{E}_{\\mathcal{S}}[Z] \\right)$. \n We first prove that $f\\in\\texttt{SB}(\\beta,(1-\\beta)m)$ (cf. 
Def \\ref{d: self-bounding}) for any $\\beta\\in[0,1]$.\n \n Indeed, for all $1\\leq i\\leq m$, we define:\n \\[\n f_i(z^{(i)})= \\frac{1}{R^2}\\sum_{j\\neq i} ||P_{V^{\\bot}_k}(\\psi(z_j))||^2\n \\]\n where $z^{(i)}=(z_1,...,z_{i-1},z_{i+1},...,z_{m})\\in\\mathcal{X}^{m-1}$ for any $z\\in\\mathcal{X}^{m}$ and for any $i$.\n \n Then, since $0\\leq ||P_{V^{\\bot}_k}(\\psi(z_i))||^2 \\leq R^2$ for all $i$, we have \n \\[\n 0\\leq f(z)-f_i(z^{(i)})= \\frac{||P_{V^{\\bot}_k}(\\psi(z_i))||^2}{R^2} \\leq 1 .\n \\]\nMoreover, because $f(z)\\leq m$ for any $z\\in\\mathcal{X}^m$, we have:\n\\begin{align*}\n \\sum_{i=1}^m f(z)-f_i(z^{(i)}) \n & = \\sum_{i=1}^m \\frac{||P_{V^{\\bot}_k}(\\psi(z_i))||^2}{R^2} \\\\\n & = f(z) =\\beta f(z) + (1-\\beta)f(z)\n \\leq \\beta f(z) + (1-\\beta)m.\n\\end{align*}\nSince this holds for any $z\\in\\mathcal{X}^{m}$, this proves that $f$ is $(\\beta,(1-\\beta)m)$-self-bounding.\n\nNow, to complete the proof, we use \\cref{th: exp_inequality2009 }. Because $Z$ is $(1\/3,(2\/3)m)$-self-bounding, we have for all $s\\in\\mathbb{R}^{+}$:\n\\[ \n\\log\\left(\\mathbb{E}_{\\mathcal{S}}\\left[e^{s(Z-\\mathbb{E}_{\\mathcal{S}}[Z])} \\right] \\right) \\leq \\frac{\\left(\\frac{1}{3}\\mathbb{E}_{\\mathcal{S}}[Z] +\\frac{2m}{3} \\right)s^2}{2}. \n\\]\nAnd since $Z\\leq m$:\n\\begin{align*}\n \\mathbb{E}_{\\mathcal{S}}\\left[ e^{m^\\alpha(Y-\\mathbb{E}_{\\mathcal{S}}[Y])}\\right] \n &= \\mathbb{E}_{\\mathcal{S}}\\left[ e^{\\frac{R^2}{m^{1-\\alpha}}(Z-\\mathbb{E}_{\\mathcal{S}}[Z])}\\right] \n & \\\\\n &\\leq \\exp\\left( \\frac{\\left(\\frac{1}{3}\\mathbb{E}_{\\mathcal{S}}[Z] +\\frac{2m}{3} \\right)R^4}{2m^{2-2\\alpha}} \\right)\n & (\\cref{th: exp_inequality2009 })\\\\\n & \\leq \\exp\\left( \\frac{R^4}{2m^{1-2\\alpha}} \\right)\n & (\\textrm{since } \\mathbb{E}_{\\mathcal{S}}[Z]\\leq m).\n\\end{align*}\nSo, finally, we have\n\\[ \\frac{\\log(\\xi)}{m^{\\alpha}} \\leq \\frac{R^4}{2m^{1-\\alpha}}. \\]\nHence the final result.\n\nTo obtain the optimal value of $\\alpha$, we study the derivative of the univariate function $f_{R,\\delta}(\\alpha):= \\frac{\\log(1\/\\delta)}{m^{\\alpha}} + \\frac{R^{4}}{2m^{1-\\alpha}}$, exactly as in the proof of \\cref{th: pac_bayes_thm_proj}.\n\\end{proof}\n \n\n\\section{A brief numerical illustration}\n\\label{sec: experiments}\n\nIn this section we briefly illustrate the numerical behaviour of our lower and upper bounds, with respect to \\cite{shawe-taylor2005eigenspectrum}. We conduct two experiments below.\n\n\\paragraph{Experiment 1}\nWe exploit the dataset used in \n\\citet{shawe-taylor2005eigenspectrum} to compare our bound in \\cref{th : pac_inequality_1}, which we recall here:\n\\[ \n\\frac{1}{m}\\sum_{i=1}^m ||P_{\\hat{V}_k(S_1)}(\\psi(x_{m+i}))||^2 -R^2\\sqrt{\\frac{2}{m}\\log\\left(\\frac{1}{\\delta}\\right)},\n\\]\nwith the one from \\citet{shawe-taylor2005eigenspectrum}, given by\n\\[ \n\\max_{1\\leq \\ell\\leq k} \\left[\\frac 1 m\\hat{\\lambda}^{\\leq \\ell}(S) - \\frac{1 + \\sqrt{\\ell}} {\\sqrt{m}} \\sqrt{\\frac 2 m \\sum_{i=1}^m \\kappa(x_i,x_i)^2}\\right] - R^2 \\sqrt{\\frac {19} {m} \\log\\left(\\frac {2(m+1)} {\\delta}\\right)}.\n\\]\nWe choose $\\delta=0.05$, and the dataset size is $N= 696$. We take $m=\\frac N 4 = 174$, and $k\\in\\{1,\\dots,100\\}$.\n\nWe generate a $2m$-sized dataset $S=S_1\\cup S_2$; we apply our kernel PCA method over $S_1$ (to obtain the projection space $\\hat{V}_k(S_1)$), then we compute our bound by using $S_2= \\{x_{m+1},\\cdots, x_{2m}\\}$, as sketched below. 
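\nThe following minimal sketch (in Python) outlines this computation. Since the original dataset cannot be reproduced here, it uses synthetic Gaussian data as a stand-in, together with a hypothetical RBF kernel; only the sizes $m=174$ and $\\delta=0.05$ follow the text, and the numbers it prints are purely illustrative. It also evaluates the upper bound of \\cref{th: pac_bayes_thm_proj} with the naive $\\alpha=1\/2$ and with the optimised $\\alpha_0$, anticipating Experiment 2.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(2)\nm, d, delta = 174, 5, 0.05\nS1, S2 = rng.normal(size=(m, d)), rng.normal(size=(m, d))\n\ndef rbf(A, B, gamma=0.5):\n    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T\n    return np.exp(-gamma * sq)\n\nK11 = rbf(S1, S1)\nR2 = np.max(np.diag(K11))            # R^2 >= kappa(x, x) = ||psi(x)||^2\n\n# Eigen-decomposition of (1\/m) K(S1), eigenvalues in decreasing order.\nevals, evecs = np.linalg.eigh(K11 \/ m)\norder = np.argsort(evals)[::-1]\nevals, evecs = evals[order], evecs[:, order]\n\n# Squared projections of the fresh points S2 onto the top-k eigenspace:\n# <psi(x), u_i>^2 = (K(x, S1) v_i)^2 \/ (m * evals_i).\nK21 = rbf(S2, S1)\nproj = (K21 @ evecs) ** 2 \/ (m * np.clip(evals, 1e-12, None))\n\nR4 = R2 ** 2\nalpha0 = 0.5 + np.log(2 * np.log(1 \/ delta) \/ R4) \/ (2 * np.log(m))\nfor k in (5, 20, 50):\n    lower = proj[:, :k].sum(1).mean() - R2 * np.sqrt(2 \/ m * np.log(1 \/ delta))\n    for alpha in (0.5, alpha0):\n        upper = (evals[:k].sum() + np.log(1 \/ delta) \/ m**alpha\n                 + R4 \/ (2 * m**(1 - alpha)))\n        print(k, round(alpha, 3), round(lower, 4), round(upper, 4))\n\\end{verbatim}\n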
The resulting bound is the blue curve in \\autoref{experiment1_pac_inequality}.\n\nWe then compute the bound from \\citet{shawe-taylor2005eigenspectrum} by using the eigenvalues of $K(S)$ (the orange curve in \\autoref{experiment1_pac_inequality}).\nFinally, we draw $\\lambda^{\\leq k}$ (green curve).\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{experiment_1.png}\n \\caption{Evaluation of \\cref{th : pac_inequality_1}. The x-axis is the number $k$ of considered eigenvalues to compute the bound, the y-axis is the amount of information contained in the projection, from 0 to 1.} \n \\label{experiment1_pac_inequality}\n\\end{figure}\nClearly, on this specific instance of the kernel PCA problem, our \\cref{th : pac_inequality_1} leads to a much tighter bound than the one of \\cite{shawe-taylor2005eigenspectrum}. Let us stress here that this is merely a safety check on a specific example.\n\n\n\\paragraph{Experiment 2}\n\nWe use the same experimental framework as in Experiment 1. We now compute four curves: the theoretical eigenvalues, the bound from \\cref{th: pac_bayes_thm_proj} with the `naive' choice of $\\alpha=1\/2$ and also with the optimised $\\alpha_0$, and the bound from \\cref{th : pac_inequality_1}. Results are shown in \\autoref{experiment_2}. Clearly, the choice of $\\alpha$ significantly influences the tightness of the bound.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{experiment_3.png}\n \\caption{The x-axis is the number $k$ of considered eigenvalues to compute the bound, the y-axis is the amount of information contained in the projection, from 0 to 1.}\n \\label{experiment_2}\n\\end{figure}\n\n\\section{Conclusion}\\label{sec:end}\n\nWe provided empirical bounds for two quantities: the expected squared norm and the expected residual of a new data point projected onto the empirical (small) subspaces given by the kernel PCA method. This outperforms (as illustrated on an example) the existing bounds given by \\cite{shawe-taylor2005eigenspectrum}. Another improvement on the seminal work of \\cite{shawe-taylor2005eigenspectrum} is that we provide both lower and upper empirical bounds for each studied quantity. Doing so, we contribute to a better theoretical understanding of kernel PCA, which we hope will translate into practical insights on strengths and limitations of kernel PCA for machine learning theoreticians and practitioners.\n\n\n\\section{Proofs -- technical results}\n\\label{sec:proofs}\n\n\\subsection{Proof of \\cref{th: measurability_kernel_Q}}\n\n\n\n\nFirst, for all $k$ and $s\\in\\mathcal{X}^m$, the function $B\\mapsto Q^k_s(B)$ is the Dirac measure at $||P_{\\hat{V}_k(s)}(.)||^2 \\in \\mathcal{F}_k$, hence it is a well-defined probability law.\nNow, we fix $k\\in\\{1,\\ldots, N_{\\mathcal{H}}\\}$ and $B\\in \\Sigma_{\\mathcal{F}_k}$. We need to prove that the function $A:s\\mapsto Q^k(s,B)$ is measurable. 
We first decompose $A$ into several functions:\n\n\\begin{align*} \ns & \\stackrel{\\psi}{\\longmapsto} \\hspace{1mm} (\\psi(x_1)\\cdots \\psi(x_m)) \\hspace{1mm} \\stackrel{A_1}{\\longmapsto} \\hspace{1mm} C(s) \\hspace{1mm}\\stackrel{A_2}{\\longmapsto}\\hspace{1mm} \\text{eigenvectors of } C(s) \\\\\n& \\stackrel{A_3}{\\longmapsto} \\hspace{1mm} f_{\\hat{w}_k} \\hspace{1mm} \\stackrel{A_4}{\\longmapsto} \\hspace{1mm} \\mathds{1}\\left\\{ f_{\\hat{w}_k}\\in B \\right\\}\n\\end{align*}\nAll the intermediate spaces (which are finite-dimensional vector spaces, or subsets of them) are considered with their Lebesgue $\\sigma$-algebra (or the one induced on the corresponding subset), which allows us to consider the usual notion of continuity on those spaces. Then for every $s$, we have $A(s)= A_4 \\circ A_3 \\circ A_2 \\circ A_1 \\circ \\psi (s)$.\\\\\nOur goal is now to prove that all those functions are measurable; then $A$ is measurable as a composition of measurable functions.\n\nFirst, because the $\\sigma$-algebra on $\\mathcal{X}$ is $\\mathcal{P}(\\mathcal{X})$, we know that $\\psi$ is measurable. \n\n\\paragraph{Measurability of $A_1$.} Thanks to the definition of $C(S)$, we know that every coordinate of $C(s)$ is a polynomial function of the coordinates of $(\\psi(x_1),\\cdots, \\psi(x_m))$. Thus, $A_1$ is continuous, therefore measurable.\n\\paragraph{Measurability of $A_2$.} To prove that $A_2$ is measurable, we need to show that the eigenvectors of a symmetric matrix ($C(s)$ is indeed symmetric) are a measurable function of this matrix. This result is true: to prove it, we will detail the problem treated in \\cite{wilcox1972}. Let us consider a polynomial \n\\[M(p)= \\sum_{|\\alpha|\\leq q} M_\\alpha p^\\alpha \\]\n where $ q\\in\\mathbb{N}$, $\\alpha= (\\alpha_1,\\cdots,\\alpha_n)\\in\\mathbb{N}^n, |\\alpha|= \\alpha_1+\\cdots+\\alpha_n $. Also, $p=(p_1,...,p_n)\\in\\mathbb{R}^n, p^{\\alpha}= p_1^{\\alpha_1}\\cdots p_n^{\\alpha_n}$.\n Finally, for any $\\alpha$, $M_\\alpha$ is a Hermitian matrix of size $d\\times d$. Let us consider the following eigenvalue problem for $E$ a Hermitian positive definite matrix: \n\\begin{align*}\n M(p)x= \\lambda E x\\hspace{5mm} x\\in\\mathbb{R}^d.\n\\end{align*}\nIf we denote by $\\lambda_1(p)\\geq\\cdots\\geq \\lambda_d(p)$ the $d$ eigenvalues of $M(p)$ in descending order, we can use the following:\n\\begin{theorem}[\\citealp{wilcox1972}, Theorem 2]\n\\label{th: wilcox1972}\n There exist functions\n $v_i : \\mathbb{R}^n \\to \\mathbb{R}^d$, $i \\in [d]$, \n such that \n \\begin{itemize}\n \\item For all $i\\in [d]$ and $p\\in\\mathbb{R}^n$, it holds that $M(p)v_i(p)= \\lambda_i(p)v_i(p)$;\n \\item For all $i\\in [d]$, the function $v_i(p)$ is Lebesgue measurable.\n \\end{itemize}\n\\end{theorem}\nWe now prove the following lemma:\n\\begin{lemma}\n Let $M = (m_{i,j})_{i,j}\\in\\mathbb{R}^{d\\times d}$ be a symmetric matrix, then the eigenvectors of $M$ are measurable functions of the coordinates of $M$.\n\\end{lemma}\n\\begin{proof}\nWe set for $(i,j)\\in[d]\\times[d]$, $E_{i,j}$ the matrix with value $1$ in coordinate $(i,j)$ and $0$ everywhere else. Then we take $n= d(d+1)\/2$ and we define $p=(m_{i,j})_{i\\leq j}\\in\\mathbb{R}^{n}$. 
We also define for $i<j$ the Hermitian matrices $M_{(i,j)}:= E_{i,j}+E_{j,i}$ and, for $i=j$, $M_{(i,i)}:= E_{i,i}$, so that\n\\[ M= \\sum_{i\\leq j} p_{i,j} M_{(i,j)} =: M(p) \\]\nis a polynomial of degree one in $p$ with Hermitian coefficients. Applying Theorem \\ref{th: wilcox1972} with $E$ the identity matrix, the eigenvectors of $M(p)$ are Lebesgue-measurable functions of $p$, and $p$ is itself a continuous function of the coordinates of $M$, which concludes the proof.\n\\end{proof}\n\n\\begin{definition}\n\\label{def-rand-st}\nLet $(\\mathcal{G}_t)_{t\\in[0,T]}$ be a filtration. A \\emph{$(\\mathcal{G}_t)$-randomised stopping time} is a random variable $\\eta$ of the form\n\\begin{equation}\n\\eta=\\inf\\{t\\in[0,T]:\\, \\rho_t > Z\\}, \\quad \\mathbb{P}\\as\n\\label{eq-def-rand-st}\n\\end{equation}\nwhere $Z\\sim U([0,1])$ is independent of $\\mathcal{G}_T$, and $(\\rho_t)$ is a non-decreasing, right-continuous, $[0,1]$-valued, $(\\mathcal{G}_t)$-adapted process with $\\rho_{0-}=0$ and $\\rho_T=1$.\nThe variable $Z$ is called a \\emph{randomisation device} for the randomised stopping time $\\eta$, and the process $\\rho$ is called the \\emph{generating process}. The set of $(\\mathcal{G}_t)$-randomised stopping times is denoted by $\\mathcal{T}^R(\\mathcal{G}_t)$. It is assumed that randomisation devices of different stopping times are independent.\n\\end{definition}\nWe refer to \\cite{Solanetal2012}, \\cite{TouziVieille2002} for an extensive discussion on various definitions of randomised stopping times and conditions that are necessary for their equivalence. To avoid unnecessary complication of notation, we assume that the probability space $(\\Omega, \\mathcal{F}, \\mathbb{P})$ supports two independent random variables $Z_\\tau$ and $Z_\\sigma$ which are also independent of $\\mathcal{F}_T$ and are the randomisation devices for the randomised stopping times $\\tau$ and $\\sigma$ of the two players. \n\n\\begin{definition}\\label{def-value-rand-strat}\nDefine\n\\begin{equation*}\nV_*:=\\sup_{\\sigma \\in \\mathcal{T}^R(\\mathcal{F}^2_t)} \\operatornamewithlimits{inf\\vphantom{p}}_{\\tau\\in \\mathcal{T}^R(\\mathcal{F}^1_t)} N(\\tau,\\sigma)\\quad\\text{and}\\quad V^*:= \\operatornamewithlimits{inf\\vphantom{p}}_{\\tau\\in \\mathcal{T}^R(\\mathcal{F}^1_t)}\\sup_{\\sigma \\in \\mathcal{T}^R(\\mathcal{F}^2_t)} N(\\tau,\\sigma).\n\\end{equation*}\nThe \\emph{lower value} and {\\em upper value of the game in randomised strategies} are given by $V_*$ and $V^*$, respectively. If they coincide, the game is said to have a \\emph{value in randomised strategies} $V=V_*=V^*$. \n\\end{definition}\nThe following theorem states the main result of this paper.\n\\begin{theorem}\n\\label{thm:main2}\nUnder assumptions \\ref{eq-integrability-cond}, \\ref{ass:regular_gen}, \\ref{eq-order-cond}-\\ref{ass:filtration}, the game has a value in randomised strategies. Moreover, if $\\hat f$ and $\\hat g$ in \\ref{ass:regular_gen} are non-increasing and non-decreasing, respectively, there exists a pair $(\\tau_*,\\sigma_*)$ of optimal strategies.\n\\end{theorem}\nFor clarity of presentation of our methodology, we first prove a theorem with more restrictive regularity properties of the payoff processes and then show how to extend the proof to the general case of Theorem \\ref{thm:main2}.\n\\begin{theorem}\n\\label{thm:main}\nUnder assumptions \\ref{eq-integrability-cond}-\\ref{ass:filtration}, the game has a value in randomised strategies and there exists a pair $(\\tau_*,\\sigma_*)$ of optimal strategies.\n\\end{theorem}\nProofs of the above theorems are given in Section \\ref{sec:sions}. They rely on two key results: an approximation procedure (Propositions \\ref{thm:conv_lipsch} and \\ref{thm:conv_lipsch_gen}) and an auxiliary game with `nice' regularity properties (Theorems \\ref{th-value-cont-strat} and \\ref{th-value-cont-strat_gen}) which enables the use of a known min-max theorem (Theorem \\ref{th-the-Sion}).\n\n\nThe $\\sigma$-algebra $\\mathcal{F}_0$ is not assumed to be trivial. It is therefore natural to consider a game in which players assess their strategies ex-post, i.e., after observing the information available to them at time $0$ when their first action may take place. Allowing for more generality, let $\\mathcal{G}$ be a $\\sigma$-algebra contained in $\\mathcal{F}^1_0$ and in $\\mathcal{F}^2_0$, i.e., containing only information available to both players at time $0$. 
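\nBefore writing this conditional payoff, the construction in \\eqref{eq-def-rand-st} can be made concrete by simulation. The minimal sketch below (in Python) samples $\\eta=\\inf\\{t:\\rho_t>Z\\}$ on a time grid for a hand-picked generating process; all numerical choices are hypothetical, and the printed probabilities illustrate identities of the type $\\mathbb{E}[\\ind{\\{\\eta\\le\\kappa\\}}|\\mathcal{F}_T]=\\rho_\\kappa$ established in Lemma \\ref{lem-eta-xi} below.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(3)\n\n# Time grid on [0, T] and a hand-picked generating process rho:\n# non-decreasing, right-continuous, rho_T = 1 (hypothetical choice).\nT, n = 1.0, 1000\nt = np.linspace(0.0, T, n + 1)\nrho = np.minimum(1.0, t ** 2 + 0.2 * (t >= 0.5))  # jump at t = 0.5\nrho[-1] = 1.0\n\ndef sample_eta(n_samples):\n    # eta = inf{ t : rho_t > Z } with Z ~ U(0,1) independent of rho\n    Z = rng.uniform(size=n_samples)\n    idx = np.searchsorted(rho, Z, side='right')  # first index with rho > Z\n    return t[np.minimum(idx, n)]\n\netas = sample_eta(100_000)\n# The jump of rho at t = 0.5 creates an atom of mass ~ 0.2 there, and\n# P(eta <= kappa) should be close to rho_kappa.\nprint('P(eta = 0.5)   ~', np.mean(np.isclose(etas, 0.5)))\nprint('P(eta <= 0.25) ~', np.mean(etas <= 0.25), ' rho_0.25 =', rho[250])\n\\end{verbatim}\n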
The expected payoff of the game in this case takes the form (recall that $\\tau,\\sigma\\in[0,T]$):\n\\begin{equation}\\label{eqn:cond_func}\n\\mathbb{E}\\big[ \\mathcal{P}(\\tau, \\sigma) \\big| \\mathcal{G} \\big]\n=\n\\mathbb{E}\\big[ f_{\\tau} \\ind{\\{\\tau<\\sigma\\}} \n+\ng_{\\sigma} \\ind{\\{{\\sigma}<{\\tau}\\}}\n+\nh_{\\tau} \\ind{\\{\\tau=\\sigma\\}}\\big| \\mathcal{G} \\big].\n\\end{equation}\n\nThe proof of the following theorem is in Section \\ref{sec:ef_functional}.\n\\begin{theorem}\\label{thm:ef_0_value}\nUnder the assumptions of Theorem \\ref{thm:main2} and for any $\\sigma$-algebra $\\mathcal{G} \\subseteq \\mathcal{F}^1_0 \\cap \\mathcal{F}^2_0$, the $\\mathcal{G}$-conditioned game has a value, i.e.\n\\begin{equation}\\label{eqn:value_ef0}\n\\operatornamewithlimits{\\mathrm{ess\\,sup}}_{\\sigma \\in \\mathcal{T}^R(\\mathcal{F}^2_t)} \\operatornamewithlimits{\\mathrm{ess\\,inf\\vphantom{p}}}_{\\tau\\in \\mathcal{T}^R(\\mathcal{F}^1_t)} \\mathbb{E}\\big[ \\mathcal{P}(\\tau, \\sigma) \\big| \\mathcal{G} \\big] = \\operatornamewithlimits{\\mathrm{ess\\,inf\\vphantom{p}}}_{\\tau\\in \\mathcal{T}^R(\\mathcal{F}^1_t)}\\operatornamewithlimits{\\mathrm{ess\\,sup}}_{\\sigma \\in \\mathcal{T}^R(\\mathcal{F}^2_t)} \\mathbb{E}\\big[\\mathcal{P}(\\tau, \\sigma) \\big| \\mathcal{G} \\big], \\qquad \\mathbb{P}\\as\n\\end{equation}\nMoreover, if $\\hat f$ and $\\hat g$ in \\ref{ass:regular_gen} are non-increasing and non-decreasing, respectively, there exists a pair $(\\tau_*,\\sigma_*)$ of optimal strategies in the sense that\n\\begin{equation}\\label{eqn:saddleG}\n\\mathbb{E}\\big[ \\mathcal{P}(\\tau_*, \\sigma) \\big| \\mathcal{G} \\big]\n\\le\n\\mathbb{E}\\big[ \\mathcal{P}(\\tau_*, \\sigma_*) \\big| \\mathcal{G} \\big]\n\\le\n\\mathbb{E}\\big[ \\mathcal{P}(\\tau, \\sigma_*) \\big| \\mathcal{G} \\big], \\qquad \\mathbb{P}\\as\n\\end{equation}\nfor all other admissible pairs $(\\tau,\\sigma)$.\n\n\\end{theorem}\n\n\\section{Examples}\\label{sec:examples}\nBefore moving on to prove the theorems stated above, in this section we illustrate some of the specific games for which our general results apply. We draw from the existing literature on two-player zero-sum Dynkin games in continuous time and show that a broad class of these (all those we are aware of) fits within our framework. Since our contribution is mainly to the theory of games with partial\/asymmetric information, we exclude the well-known case of games with full information which has been extensively studied (see our literature review in the introduction). \n\n\\subsection{Game with partially observed scenarios}\\label{subsec:game_1}\nOur first example extends the setting of \\cite{Grun2013} and it reduces to that case if $\\J=1$ and the {\\em payoff processes} $f$, $g$ and $h$ are deterministic functions of an It\\^o diffusion $(X_t)$ on $\\mathbb{R}^d$, i.e., $f_t=f(t,X_t)$, $g_t=g(t,X_t)$ and $h_t=h(t, X_t)$. On a discrete probability space $(\\Omega^s, \\mathcal{F}^s, \\mathbb{P}^s)$, consider two random variables $\\mcalI$ and $\\mcalJ$ taking values in $\\{1,\\ldots,\\I\\}$ and in $\\{1,\\ldots,\\J\\}$, respectively. Denote their joint distribution by $(\\pi_{i,j})_{i=1, \\ldots, \\I,j=1,\\ldots,\\J}$ so that $\\pi_{i,j} = \\mathbb{P}^s(\\mcalI = i,\\mcalJ=j)$. The indices $(i,j)$ are used to identify the {\\em scenario} in which the game is played and are the key ingredient to model the asymmetric information feature; a small simulation sketch of the scenario draw follows below. 
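\nThe sketch (in Python) draws one scenario from a hypothetical distribution $\\pi$ and prints what each player observes together with her conditional belief about the unobserved index; all numbers are illustrative assumptions.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(4)\n\n# Hypothetical joint distribution pi over scenarios (i, j), with I = 2, J = 3.\npi = np.array([[0.10, 0.25, 0.15],\n               [0.20, 0.05, 0.25]])\n\n# Draw one scenario (i, j) from pi.\nflat = rng.choice(pi.size, p=pi.ravel())\ni, j = np.unravel_index(flat, pi.shape)\n\n# Player 1 observes i only; player 2 observes j only.\nprint('true scenario:', (i, j))\nprint('player 1 sees i =', i, 'and holds the conditional law of j:',\n      pi[i] \/ pi[i].sum())\nprint('player 2 sees j =', j, 'and holds the conditional law of i:',\n      pi[:, j] \/ pi[:, j].sum())\n\\end{verbatim}\n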
Consider another probability space $(\\Omega^p, \\mathcal{F}^p, \\mathbb{P}^p)$ with a filtration $(\\mathcal{F}^p_t)$ satisfying the usual conditions, and $(\\mathcal{F}^p_t)$-adapted payoff processes $f^{i,j}$, $g^{i,j}$, $h^{i,j}$, with $(i,j)$ taking values in $\\{1,\\ldots,\\I\\} \\times \\{1,\\ldots,\\J\\}$. For all $i,j$, we assume that $f^{i,j}$, $g^{i,j}$, $h^{i,j}$ satisfy conditions \\ref{eq-integrability-cond}-\\ref{eq-terminal-time-order-cond}.\n\nThe game is set on the probability space $(\\Omega, \\mathcal{F}, \\mathbb{P}) := (\\Omega^p \\times \\Omega^s, \\mathcal{F}^p \\vee \\mathcal{F}^s, \\mathbb{P}^p \\otimes \\mathbb{P}^s)$. The first player is informed about the outcome of $\\mcalI$ before the game starts but never directly observes $\\mcalJ$. Hence, her actions are adapted to the filtration $\\mathcal{F}^1_t = \\mathcal{F}^p_t \\vee \\sigma(\\mcalI)$. Conversely, the second player knows $\\mcalJ$ but not $\\mcalI$, so her actions are adapted to the filtration $\\mathcal{F}^2_t = \\mathcal{F}^p_t \\vee \\sigma(\\mcalJ)$. Given a choice of random times $\\tau\\in \\mathcal{T}^R(\\mathcal{F}^1_t)$ and $\\sigma \\in \\mathcal{T}^R(\\mathcal{F}^2_t)$ for the first and the second player, the payoff is\n\\begin{equation*}\n\\mathcal{P} (\\tau, \\sigma) = f^{\\mcalI,\\mcalJ}_{\\tau} \\ind{\\{\\tau<\\sigma\\}} \n+\ng^{\\mcalI,\\mcalJ}_{\\sigma} \\ind{\\{{\\sigma}<{\\tau}\\}}\n+\nh^{\\mcalI,\\mcalJ}_\\tau \\ind{\\{\\tau = \\sigma\\}}.\n\\end{equation*}\nPlayers assess the game by looking at the expected payoff as in \\eqref{eq-uninf-payoff}. It is worth noticing that this corresponds to the so-called `{\\em ex-ante}' expected payoff, i.e., the expected payoff before the players acquire the additional information about the values of $\\mcalI$ and $\\mcalJ$. The structure of the game is common knowledge, i.e., both players know all processes $f^{i,j}$, $g^{i,j}$ and $h^{i,j}$ involved; however, they have partial and asymmetric knowledge of the pair $(i,j)$ which is drawn at the start of the game from the distribution of $(\\mcalI,\\mcalJ)$.\n\nDrawing a precise parallel with the framework of Section \\ref{sec:setting}, the above setting corresponds to $f_t = f^{\\mcalI,\\mcalJ}_t$, $g_t = g^{\\mcalI,\\mcalJ}_t$, and $h_t = h_t^{\\mcalI,\\mcalJ}$ with the filtration $\\mathcal{F}_t = \\mathcal{F}^p_t \\vee \\sigma(\\mcalI, \\mcalJ)$. The observation flows for the players are given by $(\\mathcal{F}^1_t)$ and $(\\mathcal{F}^2_t)$, respectively. \n\nThe particular structure of players' filtrations $(\\mathcal{F}^1_t)$ and $(\\mathcal{F}^2_t)$ allows for the following decomposition of randomised stopping times, see \\cite[Proposition 3.3]{esmaeeli2018} (recall the randomisation devices $Z_\\tau\\sim U([0,1])$ and $Z_\\sigma\\sim U([0,1])$, which are mutually independent and independent of $\\mathcal{F}_T$). 
\n\\begin{Lemma}\\label{lem:tau_decomposition}\nAny $\\tau \\in \\mathcal{T}^R(\\mathcal{F}^1_t)$ has a representation\n\\begin{equation}\\label{eqn:tau_decomposition}\n\\tau = \\sum_{i=1}^\\I \\ind{\\{\\mcalI = i\\}} \\tau_i, \n\\end{equation}\nwhere $\\tau_1,\\ldots,\\tau_\\I \\in \\mathcal{T}^R(\\mathcal{F}^p_t)$, with generating processes $\\xi^1,\\ldots,\\xi^\\I \\in {\\mathcal{A}^\\circ} (\\mathcal{F}^p_t)$ and a common randomisation device $Z_\\tau$.\nAn analogous representation holds for $\\sigma$ with $\\sigma_1, \\ldots, \\sigma_\\J \\in \\mathcal{T}^R(\\mathcal{F}^p_t)$, generating processes $\\zeta^1, \\ldots, \\zeta^\\J \\in {\\mathcal{A}^\\circ} (\\mathcal{F}^p_t)$, and a common randomisation device $Z_\\sigma$. \n\\end{Lemma}\n\\begin{cor}\nAny $(\\mathcal{F}^1_t)$-stopping time $\\tau$ has a decomposition \\eqref{eqn:tau_decomposition} with $\\tau_1,\\ldots,\\tau_\\I$ being $(\\mathcal{F}^p_t)$-stopping times (and analogously for $(\\mathcal{F}^2_t)$-stopping times).\n\\end{cor}\nHence, given a realisation of the idiosyncratic scenario variable $\\mcalI$ (resp.\\ $\\mcalJ$), the first (second) player chooses a randomised stopping time whose generating process is adapted to the common filtration $(\\mathcal{F}^p_t)$. The resulting expected payoff can be written as\n\\begin{equation*}\nN(\\tau, \\sigma) = \\sum_{i=1}^\\I \\sum_{j=1}^\\J \\pi_{i,j} \\mathbb{E} \\Big[ f^{i,j}_{\\tau_i} \\ind{\\{\\tau_i<\\sigma_j\\}}+\ng^{i,j}_{\\sigma_j} \\ind{\\{{\\sigma_j}<{\\tau_i}\\}}+ h^{i,j}_{\\tau_i} \\ind{\\{\\tau_i = \\sigma_j\\}} \\Big].\n\\end{equation*}\n\n\\subsection{Game with a single partially observed dynamics} \\label{subsec:game_2}\nOur second example generalises the set-ups of \\cite{DGV2017} and \\cite{DEG2020} and reduces to those cases when $J=2$, the time horizon is infinite and the payoff processes are (particular) time-homogeneous functions of a (particular) one-dimensional diffusion. Here the underlying dynamics of the game is a diffusion, whose drift depends on the realisation of an independent random variable $\\mcalJ\\in\\{1,\\ldots, J\\}$. Formally, on a probability space $(\\Omega, \\mathcal{F}, \\mathbb{P})$ we have a Brownian motion $(W_t)$ on $\\mathbb{R}^d$, an independent random variable $\\mcalJ\\in\\{1,\\ldots, J\\}$ with distribution $\\pi_j=\\mathbb{P}(\\mcalJ=j)$, and a process $(X_t)$ on $\\mathbb{R}^d$ with the dynamics\n\\[\ndX_t=\\sum_{j=1}^J \\ind{\\{\\mcalJ=j\\}} \\mu_j(X_t)dt+\\sigma(X_t)dW_t,\\quad X_0=x,\n\\]\nwhere $\\sigma$, $(\\mu_j)_{j=1,\\ldots,J}$ are given functions (known to both players) that guarantee existence of a unique strong solution of the SDE for each $j=1,\\ldots,J$. The payoff processes are deterministic functions of the underlying process, i.e., $f_t=f(t,X_t)$, $g_t=g(t,X_t)$ and $h_t=h(t,X_t)$, and they are known to both players. We assume that the payoff processes satisfy conditions \\ref{eq-integrability-cond}-\\ref{eq-terminal-time-order-cond}. It is worth remarking that in the specific setting of \\cite{DGV2017} the norms $\\| f \\|_{\\mcalL}$ and $\\| g \\|_{\\mcalL}$ are not finite, so that our results cannot be directly applied. 
However, the overall structure of the game in \\cite{DGV2017} is simpler than ours, so that some other special features of the payoff processes can be used to establish existence of the value therein.\n\nTo draw a precise parallel with the notation from Section \\ref{sec:setting}, here we take $\\mathcal{F}_t=\\mathcal{F}^W_t\\vee\\sigma(\\mcalJ)$, where $(\\mathcal{F}^W_t)$ is the filtration generated by the Brownian sample paths and augmented with $\\mathbb{P}$-null sets. Both players observe the dynamics of $X$, however they have partial\/asymmetric information on the value of $\\mcalJ$. In \\cite{DGV2017} neither of the two players knows the true value of $\\mcalJ$, so we have $(\\mathcal{F}^1_t)=(\\mathcal{F}^2_t)=(\\mathcal{F}^X_t)$, where $(\\mathcal{F}^X_t)$ is generated by the sample paths of the process $X$ and it is augmented by the $\\mathbb{P}$-null sets (notice that $\\mathcal{F}^X_t\\subsetneq \\mathcal{F}_t$). In \\cite{DEG2020} instead, the first player (minimiser) observes the true value of $\\mcalJ$. In that case $(\\mathcal{F}^1_t)=(\\mathcal{F}_t)$ and $(\\mathcal{F}^2_t)=(\\mathcal{F}^X_t)$, so that $\\mathcal{F}^2_t\\subsetneq \\mathcal{F}^1_t$. Using the notation $X^\\mcalJ$ to emphasise the dependence of the underlying dynamics on $\\mcalJ$, and given a choice of random times $\\tau\\in \\mathcal{T}^R(\\mathcal{F}^1_t)$ and $\\sigma \\in \\mathcal{T}^R(\\mathcal{F}^2_t)$ for the first and the second player, the game's payoff reads\n\\begin{equation*}\n\\mathcal{P} (\\tau, \\sigma) = f(\\tau,X^\\mcalJ_\\tau) \\ind{\\{\\tau<\\sigma\\}} \n+\ng (\\sigma,X^\\mcalJ_\\sigma) \\ind{\\{{\\sigma}<{\\tau}\\}}\n+\nh (\\tau, X^\\mcalJ_\\tau) \\ind{\\{\\tau = \\sigma\\}}.\n\\end{equation*}\nPlayers assess the game by looking at the expected payoff as in \\eqref{eq-uninf-payoff}. Finally, we remark that under a number of (restrictive) technical assumptions and with infinite horizon \\cite{DGV2017} and \\cite{DEG2020} show the existence of a value and of a saddle point in a smaller class of strategies. In \\cite{DGV2017} both players use $(\\mathcal{F}^X_t)$-stopping times, with no need for additional randomisation. In \\cite{DEG2020} the uninformed player uses $(\\mathcal{F}^X_t)$-stopping times but the informed player uses $(\\mathcal{F}_t)$-randomised stopping times.\n\n\\subsection{Game with two partially observed dynamics}\nHere we show how the setting of \\cite{GenGrun2019} also fits in our framework. This example is conceptually different from the previous two because the players observe two different stochastic processes. On a probability space $(\\Omega,\\mathcal{F},\\mathbb{P})$ two processes $(X_t)$ and $(Y_t)$ are defined (in \\cite{GenGrun2019} these are finite-state continuous-time Markov chains). The first player only observes the process $(X_t)$ while the second player only observes the process $(Y_t)$. In the notation of Section \\ref{sec:setting}, we have $(\\mathcal{F}^1_t)=(\\mathcal{F}^X_t)$, $(\\mathcal{F}^2_t)=(\\mathcal{F}^Y_t)$ and $(\\mathcal{F}_t)=(\\mathcal{F}^X_t\\vee\\mathcal{F}^Y_t)$, where the filtration $(\\mathcal{F}^X_t)$ is generated by the sample paths of $(X_t)$ and $(\\mathcal{F}^Y_t)$ by those of $(Y_t)$ (both filtrations are augmented with $\\mathbb{P}$-null sets). The payoff processes are deterministic functions of the underlying dynamics, i.e., $f_t=f(t,X_t,Y_t)$, $g_t=g(t,X_t,Y_t)$ and $h_t=h(t, X_t,Y_t)$, and they satisfy conditions \\ref{eq-integrability-cond}-\\ref{eq-terminal-time-order-cond}. 
Given a choice of random times $\\tau \\in \\mathcal{T}^R(\\mathcal{F}^1_t)$ and $\\sigma\\in \\mathcal{T}^R(\\mathcal{F}^2_t)$ for the first and the second player, the game's payoff reads\n\\begin{equation*}\n\\mathcal{P} (\\tau, \\sigma) = f(\\tau,X_\\tau,Y_\\tau) \\ind{\\{\\tau<\\sigma\\}} \n+\ng (\\sigma,X_\\sigma,Y_\\sigma) \\ind{\\{{\\sigma}<{\\tau}\\}}\n+\nh (\\tau, X_\\tau,Y_\\tau) \\ind{\\{\\tau = \\sigma\\}}.\n\\end{equation*}\nPlayers assess the game by looking at the expected payoff as in \\eqref{eq-uninf-payoff}. We remark that the proof of existence of the value in \\cite{GenGrun2019} is based on variational inequalities and relies on the finiteness of the state spaces of both underlying processes, and therefore cannot be extended to our general non-Markovian framework.\n\n\\subsection{Game with a random horizon}\nHere we consider a non-Markovian extension of the framework of \\cite{lempa2013}, where the time horizon of the game is exponentially distributed and independent of the payoff processes. On a probability space $(\\Omega,\\mathcal{F},\\mathbb{P})$ we have a filtration $(\\mathcal{G}_t)_{t\\in[0,T]}$, augmented with $\\mathbb{P}$-null sets, and a positive random variable $\\theta$ which is independent of $\\mathcal{G}_T$ and has a continuous distribution. Let $\\Lambda_t:=\\ind{\\{t\\ge \\theta\\}}$ and take $\\mathcal{F}_t=\\mathcal{G}_t\\vee\\sigma(\\Lambda_s,\\,0\\le s\\le t)$.\n\n\nThe players have asymmetric knowledge of the random variable $\\theta$. The first player observes the occurrence of $\\theta$, whereas the second player does not. We have $(\\mathcal{F}^1_t)=(\\mathcal{F}_t)$ and $(\\mathcal{F}^2_t)=(\\mathcal{G}_t)\\subsetneq (\\mathcal{F}^1_t)$.\nGiven a choice of random times $\\tau \\in \\mathcal{T}^R(\\mathcal{F}^1_t)$ and $\\sigma\\in \\mathcal{T}^R(\\mathcal{F}^2_t)$ for the first and the second player, the game's payoff reads\n\\begin{align}\\label{eq:PLM}\n\\mathcal{P} (\\tau, \\sigma) \n&= \n\\indd{\\tau \\wedge \\sigma \\le \\theta} \\big(f^0_\\tau \\ind{\\{\\tau<\\sigma\\}} \n+\ng^0_\\sigma \\ind{\\{{\\sigma}<{\\tau}\\}}\n+\nh^0_\\tau \\ind{\\{\\tau = \\sigma\\}} \\big),\n\\end{align}\nwhere $f^0$, $g^0$ and $h^0$ are $(\\mathcal{G}_t)$-adapted processes that satisfy conditions \\ref{eq-integrability-cond}-\\ref{eq-terminal-time-order-cond} and $f^0 \\ge 0$.\n\nNotice that the problem above does not fit directly into the framework of Section \\ref{sec:setting}: Assumption \\ref{eq-integrability-cond} is indeed violated, because the processes $(\\indd{t \\le \\theta} f^0_t),(\\indd{t \\le \\theta}g^0_t)$ are not c\\`adl\\`ag. However, we now show that the game can be equivalently formulated as a game satisfying the conditions of our framework. 
The expected payoff can be rewritten as follows\n\\begin{align*}\nN^0(\\tau, \\sigma) := \\mathbb{E}\\big[\\mathcal{P} (\\tau, \\sigma) \\big]\n&= \n\\mathbb{E}\\big[\\indd{\\tau \\le \\theta} \\ind{\\{\\tau<\\sigma\\}} f^0_\\tau \n+\n\\indd{\\sigma \\le \\theta} \\ind{\\{{\\sigma}<{\\tau}\\}} g^0_\\sigma \n+\n\\indd{\\sigma \\le \\theta} \\ind{\\{\\tau = \\sigma\\}} h^0_\\tau\\big]\\\\\n&= \n\\mathbb{E}\\big[\\indd{\\tau \\le \\theta} \\ind{\\{\\tau<\\sigma\\}} f^0_\\tau \n+\n\\indd{\\sigma < \\theta} \\ind{\\{{\\sigma}<{\\tau}\\}} g^0_\\sigma \n+\n\\indd{\\sigma < \\theta} \\ind{\\{\\tau = \\sigma\\}} h^0_\\tau\\big],\\notag\n\\end{align*}\nwhere the second equality holds because $\\theta$ is continuously distributed and independent of $\\mathcal{F}^2_T$, so $\\mathbb{P}(\\sigma=\\theta) = 0$ for any $\\sigma \\in \\mathcal{T}^R(\\mathcal{F}^2_t)$. Fix $\\varepsilon > 0$ and set\n\\begin{align*}\nf^\\varepsilon_t:=f^0_{t}\\ind{\\{t<\\theta+\\varepsilon\\}},\\quad g_t:=g^0_t\\ind{\\{t < \\theta\\}}, \\quad h_t:=h^0_t\\ind{\\{t < \\theta\\}}, \\qquad t \\in [0, T].\n\\end{align*}\nWe see that conditions \\ref{eq-integrability-cond}, \\ref{eq-order-cond}, \\ref{eq-terminal-time-order-cond} hold for the processes $(f^\\varepsilon_t)$, $(g_t)$, $(h_t)$ (for condition \\ref{eq-order-cond} we use that $f^0 \\ge 0$). Condition \\ref{ass:regular} (regularity of payoffs $f^\\varepsilon$ and $g$) is satisfied, because $\\theta$ has a continuous distribution, so it is a totally inaccessible stopping time for the filtration $(\\mathcal{F}_t)$ by \\cite[Example VI.14.4]{RogersWilliams}. Therefore, by Theorem \\ref{thm:main}, the game with expected payoff\n\\[\nN^\\varepsilon(\\tau, \\sigma) =\\mathbb{E}\\big[\\mathcal{P}^\\varepsilon(\\tau,\\sigma)\\big]:= \\mathbb{E} \\big[\\ind{\\{\\tau<\\sigma\\}} f^\\varepsilon_\\tau \n+\n\\ind{\\{{\\sigma}<{\\tau}\\}} g_\\sigma \n+\n\\ind{\\{\\tau = \\sigma\\}} h_\\tau\\big]\n\\]\nhas a value and a pair of optimal strategies exists.\n\nWe now show that the game with expected payoff $N^0$ has the same value as the one with expected payoff $N^\\varepsilon$, for any $\\varepsilon > 0$. First observe that\n\\begin{align*}\nN^\\varepsilon(\\tau, \\sigma) - N^0(\\tau, \\sigma) = \\mathbb{E}\\big[\\indd{\\tau < \\sigma} \\indd{\\theta < \\tau < \\theta + \\varepsilon} f^0_\\tau\\big] \\ge 0\n\\end{align*}\nby the assumption that $f^0 \\ge 0$. Hence, \n\\begin{equation}\\label{eqn:N_eps_upper}\n\\operatornamewithlimits{inf\\vphantom{p}}_{\\tau \\in \\mathcal{T}^R(\\mathcal{F}^1_t)} \\sup_{\\sigma \\in \\mathcal{T}^R(\\mathcal{F}^2_t)} N^\\varepsilon (\\tau, \\sigma)\n\\ge\n\\operatornamewithlimits{inf\\vphantom{p}}_{\\tau \\in \\mathcal{T}^R(\\mathcal{F}^1_t)} \\sup_{\\sigma \\in \\mathcal{T}^R(\\mathcal{F}^2_t)} N^0 (\\tau, \\sigma).\n\\end{equation}\nTo derive an opposite inequality for the lower values, fix $\\sigma \\in \\mathcal{T}^R(\\mathcal{F}^2_t)$. For $\\tau \\in \\mathcal{T}^R(\\mathcal{F}^1_t)$, define\n\\[\n\\hat \\tau =\n\\begin{cases}\n\\tau, & \\tau \\le \\theta,\\\\\nT, & \\tau > \\theta.\n\\end{cases}\n\\]\nThen, using that $\\mathcal{P}^\\varepsilon(\\tau,\\sigma)=\\mathcal{P}(\\tau,\\sigma)$ on $\\{\\tau\\le \\theta\\}$ and $\\mathcal{P}^\\varepsilon(T,\\sigma)=g^0_\\sigma\\ind{\\{\\sigma<\\theta\\}}=\\mathcal{P}(\\tau,\\sigma)$ on $\\{\\tau>\\theta\\}$, we have\n$N^\\varepsilon(\\hat \\tau, \\sigma) = N^0(\\tau, \\sigma)$. 
It then follows that\n\\[\n\\operatornamewithlimits{inf\\vphantom{p}}_{\\tau \\in \\mathcal{T}^R(\\mathcal{F}^1_t)} N^\\varepsilon (\\tau, \\sigma) \\le \\operatornamewithlimits{inf\\vphantom{p}}_{\\tau \\in \\mathcal{T}^R(\\mathcal{F}^1_t)} N^0 (\\tau, \\sigma),\n\\]\nwhich implies\n\\begin{equation}\\label{eqn:N_eps_lower}\n\\sup_{\\sigma \\in \\mathcal{T}^R(\\mathcal{F}^2_t)} \\operatornamewithlimits{inf\\vphantom{p}}_{\\tau \\in \\mathcal{T}^R(\\mathcal{F}^1_t)} N^\\varepsilon (\\tau, \\sigma) \\le \n\\sup_{\\sigma \\in \\mathcal{T}^R(\\mathcal{F}^2_t)} \\operatornamewithlimits{inf\\vphantom{p}}_{\\tau \\in \\mathcal{T}^R(\\mathcal{F}^1_t)} N^0 (\\tau, \\sigma).\n\\end{equation}\n\nSince the value of the game with expected payoff $N^\\varepsilon$ exists, combining \\eqref{eqn:N_eps_upper} and \\eqref{eqn:N_eps_lower} we see that the value of the game with expected payoff $N^0$ also exists. It should be noted, though, that this does not imply that an optimal pair of strategies for $N^\\varepsilon$ is optimal for $N^0$. \n\nIt is worth noticing that in \\cite{lempa2013} the setting is Markovian with $T=\\infty$, $f^0_t=h^0_t=e^{-rt} \\bar f(X_t)$, $g^0_t=e^{-rt} \\bar g(X_t)$, $\\bar f$, $\\bar g$ deterministic functions, $r\\ge 0$, $\\theta$ exponentially distributed and $(X_t)$ a one-dimensional linear diffusion. Under specific technical requirements on the functions $\\bar f$ and $\\bar g$ the authors find that a pair of optimal strategies for the game \\eqref{eq:PLM} exists when the first player uses $(\\mathcal{F}^1_t)$-stopping times and the second player uses $(\\mathcal{F}^2_t)$-stopping times (in the form of hitting times to thresholds), with no need for randomisation. Their methods rely on the theory of one-dimensional linear diffusions (using scale function and speed measure) and free-boundary problems, hence do not admit an extension to a non-Markovian case.\n\n \n\\section{Reformulation as a game of (singular) controls} \\label{sec:reform}\n\nIn order to integrate out the randomisation devices for $\\tau$ and $\\sigma$ and obtain a reformulation of the payoff functional $N(\\tau, \\sigma)$ in terms of generating processes for the randomised stopping times $\\tau$ and $\\sigma$, we need the following two auxiliary lemmata. We remark that if $\\eta$ is a $(\\mathcal{G}_t)$-randomised stopping time for $(\\mathcal{G}_t) \\subseteq (\\mathcal{F}_t)$, then $\\eta$ is also an $(\\mathcal{F}_t)$-randomised stopping time. Therefore, the results below are formulated for $(\\mathcal{F}_t)$-randomised stopping times.\n\n\\begin{Lemma}\\label{lem-eta-xi}\nLet $\\eta\\in\\mathcal{T}^R(\\mathcal{F}_t)$ with the generating process $(\\rho_t)$. Then, for any $\\mathcal{F}_T$-measurable random variable $\\kappa$ with values in $[0,T]$, \n\\begin{alignat}{3}\n&\\mathbb{E}[\\ind{\\{\\eta\\le \\kappa\\}}|\\mathcal{F}_T]=\\rho_\\kappa, \\qquad &&\\mathbb{E}[\\ind{\\{\\eta>\\kappa\\}}|\\mathcal{F}_T]=1-\\rho_\\kappa, \\label{eq-xi-eta-1}\\\\ \n&\\mathbb{E}[\\ind{\\{\\eta<\\kappa\\}}|\\mathcal{F}_T]=\\rho_{\\kappa_-},\\qquad &&\\mathbb{E}[\\ind{\\{\\eta\\ge \\kappa\\}}|\\mathcal{F}_T]=1-\\rho_{\\kappa_-}. \\label{eq-xi-eta-3} \n\\end{alignat}\n\\end{Lemma}\n\\begin{proof}\nThe proof of \\eqref{eq-xi-eta-1} follows the lines of \\cite[Proposition 3.1]{DEG2020}. Let $Z$ be the randomisation device for $\\eta$. 
Since $\\rho$ is right-continuous, non-decreasing and (\\ref{eq-def-rand-st}) holds, we have\n\\begin{equation*}\n\\{\\rho_\\kappa > Z\\}\\subseteq \\{\\eta\\le \\kappa\\}\\subseteq\\{\\rho_\\kappa\\ge Z\\}.\n\\end{equation*}\nUsing that $\\rho_\\kappa$ is $\\mathcal{F}_T$-measurable, and $Z$ is uniformly distributed and independent of $\\mathcal{F}_T$, we compute\n\\begin{equation*}\n\\mathbb{E}[\\ind{\\{\\eta\\le \\kappa\\}}|\\mathcal{F}_T]\\ge \\mathbb{E}[\\ind{\\{\\rho_\\kappa> Z\\}}|\\mathcal{F}_T] = \\int_0^1 \\ind{\\{\\rho_\\kappa> y\\}} dy = \\rho_\\kappa,\n\\end{equation*}\nand\n\\begin{equation*}\n\\mathbb{E}[\\ind{\\{\\eta\\le \\kappa\\}}|\\mathcal{F}_T]\\le \\mathbb{E}[\\ind{\\{\\rho_\\kappa\\ge Z\\}}|\\mathcal{F}_T] = \\int_0^1 \\ind{\\{\\rho_\\kappa\\ge y\\}} dy = \\rho_\\kappa.\n\\end{equation*}\nThis completes the proof of the first equality in \\eqref{eq-xi-eta-1}. The other one is a direct consequence.\n\nTo prove $(\\ref{eq-xi-eta-3})$, we observe that, by (\\ref{eq-xi-eta-1}), for any $\\varepsilon>0$ we have\n\\[\n\\ind{\\{\\kappa>0\\}}\\mathbb{E}[\\ind{\\{\\eta\\le (\\kappa-\\varepsilon) \\vee (\\kappa\/2)\\}}|\\mathcal{F}_T]=\\ind{\\{\\kappa>0\\}} \\rho_{(\\kappa-\\varepsilon) \\vee (\\kappa\/2)}.\n\\]\nDominated convergence theorem implies \n\\begin{align*}\n\\mathbb{E}[\\ind{\\{\\eta< \\kappa\\}}|\\mathcal{F}_T] &= \\ind{\\{\\kappa > 0\\}}\\, \\mathbb{E}[\\ind{\\{\\eta< \\kappa\\}}|\\mathcal{F}_T] \n= \\lim_{\\varepsilon\\downarrow 0} \\ind{\\{\\kappa>0\\}}\\,\\mathbb{E}[\\ind{\\{\\eta\\le (\\kappa-\\varepsilon) \\vee (\\kappa\/2)\\}}|\\mathcal{F}_T]\\\\\n&= \\lim_{\\varepsilon\\downarrow 0} \\ind{\\{\\kappa>0\\}}\\, \\rho_{(\\kappa-\\varepsilon) \\vee (\\kappa\/2)} = \\ind{\\{\\kappa>0\\}}\\, \\rho_{\\kappa-} = \\rho_{\\kappa-},\n\\end{align*}\nwhere in the last equality we used that $\\rho_{0-}=0$. This proves the first equality in \\eqref{eq-xi-eta-3}. The other one is a direct consequence.\n\\end{proof}\n\n\n\\begin{Lemma}\\label{lem:integ_out}\nLet $\\eta,\\theta\\in\\mathcal{T}^R(\\mathcal{F}_t)$ with generating processes $(\\rho_t)$, $(\\chi_t)$ and independent randomisation devices $Z_\\eta$, $Z_\\theta$. For $(X_t)$ measurable, adapted and such that $\\|X\\|_{\\mcalL}<\\infty$ (but not necessarily {c\\`adl\\`ag\\@ifnextchar.{}{\\@ifnextchar,{}{\\@ifnextchar;{}{ }}}\\!\\!}),\nwe have\n\\begin{align*}\n&\\mathbb{E}\\left[X_\\eta \\ind{\\{\\eta\\le\\theta\\}\\cap\\{\\eta y\\}.\n\\end{equation*}\nThen, $\\eta=q(Z_\\eta)$. Using that $Z_\\eta \\sim U(0,1)$ and Fubini's theorem, we see that\n\\begin{align*}\n\\mathbb{E}\\left[X_\\eta \\ind{\\{\\eta\\le\\theta\\}\\cap\\{\\etaT-1\/n\\}}\\ind{\\{\\theta>T-1\/n\\}}|\\mathcal{F}_T\\big]\n=\n\\lim_{n\\to\\infty}\\mathbb{E}\\big[\\ind{\\{\\rho_{T-1\/n}\\le Z_\\eta\\}}\\ind{\\{\\chi_{T-1\/n}\\le Z_\\theta\\}}|\\mathcal{F}_T\\big]\\\\\n&=\n\\lim_{n\\to\\infty}(1-\\rho_{T-1\/n})(1-\\chi_{T-1\/n})\n=\n\\Delta\\rho_T\\Delta\\chi_T,\n\\end{align*}\nwhere the second equality is by \n\\[\n\\{\\rho_{T-1\/n}T-\\tfrac{1}{n}\\}\\subseteq\\{\\rho_{T-1\/n}\\le Z_\\eta\\},\n\\]\nand analogous inclusions for $\\{\\theta\\!>\\!T\\!-\\!\\frac{1}{n}\\}$. The third equality uses that $\\rho_{T-1\/n}$ and $\\chi_{T-1\/n}$ are $\\mathcal{F}_T$-measurable, and $Z_\\eta$, $Z_\\theta$ are independent of $\\mathcal{F}_T$. The final equality follows since $\\rho_T=\\chi_T=1$. 
Combining the above gives\nthe desired result.\n\\end{proof}\n\nApplying Lemma \\ref{lem-alt-repr-both-rand} and Corollary \\ref{cor:j} to \\eqref{eqn:payoff} and \\eqref{eq-uninf-payoff}, we obtain the following reformulation of the game.\n\n\\begin{Proposition}\\label{prop-functionals-equal}\nFor $\\tau\\in \\mathcal{T}^R (\\mathcal{F}^1_t)$, $\\sigma\\in \\mathcal{T}^R(\\mathcal{F}^2_t)$, \n\\begin{equation}\nN(\\tau,\\sigma)= \\mathbb{E}\\bigg[\\int_{[0, T)} f_t(1-\\zeta_{t})d\\xi_t + \\int_{[0, T)} g_t(1-\\xi_t)d\\zeta_t + \\sum_{t \\in [0, T]} h_t \\Delta\\xi_t \\Delta\\zeta_t\\bigg],\n\\label{eq-functional-in-terms-of-controls}\n\\end{equation}\nwhere $(\\xi_t)$ and $(\\zeta_t)$ are the generating processes for $\\tau$ and $\\sigma$, respectively.\n\\end{Proposition}\n\nWith a slight abuse of notation, we will denote the right-hand side of \\eqref{eq-functional-in-terms-of-controls} by $N(\\xi,\\zeta)$. \n\n\\begin{remark}\\label{rem-Laraki-Solan}\nIn the Definition \\ref{def-value-rand-strat} of the lower value, the infimum can always be replaced by infimum over \\emph{pure} stopping times (cf. \\cite{LarakiSolan2005}). Same holds for the supremum in the definition of the upper value.\n\nLet us look at the upper value: take arbitrary $\\tau\\in \\mathcal{T}^R(\\mathcal{F}^1_t)$, $\\sigma\\in \\mathcal{T}^R(\\mathcal{F}^2_t)$, and define the family of stopping times\n\\begin{equation*}\nq(y)=\\operatornamewithlimits{inf\\vphantom{p}}\\{t\\in [0,T]: \\zeta_t > y\\}, \\qquad y \\in [0,1),\n\\end{equation*}\nsimilarly to the proof of Lemma \\ref{lem-alt-repr-both-rand} and with $(\\zeta_t)$ the generating process of $\\sigma$. Then,\n\\begin{equation*}\nN(\\tau,\\sigma)=\\int_0^1 N(\\tau,q(y)) dy \\le \\sup_{y\\in[0,1)} N(\\tau,q(y))\\le \\sup_{\\sigma\\in\\mathcal{T}(\\mathcal{F}^2_t)} N(\\tau,\\sigma),\n\\end{equation*}\nwhere $\\mathcal{T}(\\mathcal{F}^2_t)$ denotes the set of pure $(\\mathcal{F}^2_t)$-stopping times. 
Since $\\mathcal{T}(\\mathcal{F}^2_t) \\subset \\mathcal{T}^R(\\mathcal{F}^2_t)$, we have\n\\begin{equation*}\n\\sup_{\\sigma\\in\\mathcal{T}^R(\\mathcal{F}^2_t)}N(\\tau,\\sigma)= \\sup_{\\sigma\\in\\mathcal{T}(\\mathcal{F}^2_t)} N(\\tau,\\sigma),\n\\end{equation*}\nand, consequently, the `{\\em inner}' optimisation can be done over pure stopping times:\n\\begin{equation*}\n\\operatornamewithlimits{inf\\vphantom{p}}_{\\tau\\in\\mathcal{T}^R(\\mathcal{F}^1_t)}\\sup_{\\sigma\\in\\mathcal{T}^R(\\mathcal{F}^2_t)} N(\\tau,\\sigma)= \\operatornamewithlimits{inf\\vphantom{p}}_{\\tau\\in \\mathcal{T}^R(\\mathcal{F}^1_t)}\\sup_{\\sigma\\in\\mathcal{T}(\\mathcal{F}^2_t)} N(\\tau,\\sigma).\n\\end{equation*}\nBy the same argument one can show that\n\\begin{equation*}\n\\sup_{\\sigma\\in\\mathcal{T}^R(\\mathcal{F}^2_t)} \\operatornamewithlimits{inf\\vphantom{p}}_{\\tau\\in\\mathcal{T}^R(\\mathcal{F}^1_t)} N(\\tau,\\sigma)= \\sup_{\\sigma\\in\\mathcal{T}^R(\\mathcal{F}^2_t)} \\operatornamewithlimits{inf\\vphantom{p}}_{\\tau\\in \\mathcal{T}(\\mathcal{F}^1_t)} N(\\tau,\\sigma).\n\\end{equation*}\nHowever, in general an analogue result for the `{\\em outer}' optimisation does not hold, i.e.,\n\\begin{equation*}\n\\sup_{\\sigma\\in\\mathcal{T}^R(\\mathcal{F}^2_t)} \\operatornamewithlimits{inf\\vphantom{p}}_{\\tau\\in \\mathcal{T}^R(\\mathcal{F}^1_t)} N(\\tau,\\sigma)\\neq \\sup_{\\sigma\\in\\mathcal{T}(\\mathcal{F}^2_t)} \\operatornamewithlimits{inf\\vphantom{p}}_{\\tau\\in \\mathcal{T}^R(\\mathcal{F}^1_t)} N(\\tau,\\sigma)\n\\end{equation*}\nas shown by an example in Section \\ref{sec:Nikita-examples}.\n\\end{remark}\n\n\n\\section{Sion's theorem and existence of value}\\label{sec-Sion-existence-of-value}\\label{sec:sions}\nThe proofs of Theorems \\ref{thm:main2} and \\ref{thm:main}, i.e., that the game with payoff \\eqref{eq-uninf-payoff} has a value in randomised strategies, utilises Sion's min-max theorem \\cite{Sion1958} (see also \\cite{Komiya1988} for a simple proof). The idea of relying on Sion's theorem comes from \\cite{TouziVieille2002} where the authors study zero-sum Dynkin games with full and symmetric information. Here, however, we need different key technical arguments as explained in, e.g., Remark \\ref{rem-TV-norm-doesnt-work} below.\n\nLet us start by recalling Sion's theorem.\n\\begin{theorem}[Sion's theorem]\\label{th-the-Sion}\n\\cite[Corollary 3.3]{Sion1958}\nLet $A$ and $B$ be convex subsets of a linear topological space one of which is compact. Let $\\varphi(\\mu,\\nu)$ be a function $A\\times B \\mapsto \\mathbb{R}$ that is quasi-concave and upper semi-continuous in $\\mu$ for each $\\nu\\in B$, and quasi-convex and lower semi-continuous in $\\nu$ for each $\\mu\\in A$. Then,\n\\begin{equation*}\n\\sup_{\\mu\\in A}\\operatornamewithlimits{inf\\vphantom{p}}_{\\nu\\in B} \\varphi(\\mu,\\nu)=\\operatornamewithlimits{inf\\vphantom{p}}_{\\nu\\in B}\\sup_{\\mu\\in A} \\varphi(\\mu,\\nu).\n\\end{equation*}\n\\end{theorem}\n\nThe key step in applying Sion's theorem is to find a topology on the set of randomised stopping times, or, equivalently, on the set of corresponding generating processes so that the functional $N(\\cdot, \\cdot)$ satisfies the assumptions. We will use the weak topology of \n\\[\n\\mathcal{S} := L^2 \\big([0, T] \\times \\Omega, \\mathcal{B}([0, T]) \\times \\mathcal{F}, \\lambda \\times \\mathbb{P}\\big),\n\\]\nwhere $\\lambda$ denotes the Lebesgue measure on $[0, T]$. 
Given a filtration $(\\mathcal{G}_t) \\subseteq (\\mathcal{F}_t)$, in addition to the class of increasing processes ${\\mathcal{A}^\\circ}(\\mathcal{G}_t)$ introduced in Section \\ref{sec:setting}, here we also need\n\\begin{align*}\n{\\mathcal{A}^\\circ_{ac}} (\\mathcal{G}_t) :=&\\,\\{\\rho\\in {\\mathcal{A}^\\circ}(\\mathcal{G}):\\,\\text{$t\\mapsto\\rho_t(\\omega)$ is absolutely continuous on $[0,T)$ for all $\\omega\\in\\Omega$}\\}.\n\\end{align*}\nIt is important to notice that $\\rho\\in{\\mathcal{A}^\\circ_{ac}}(\\mathcal{G}_t)$ may have a jump at time $T$ if\n\\[\n\\rho_{T-}(\\omega):=\\lim_{t\\uparrow T}\\int_0^t\\big(\\tfrac{d}{d t}\\rho_s\\big)(\\omega)d s<1=\\rho_T(\\omega).\n\\] \nAs with ${\\mathcal{A}^\\circ}(\\mathcal{G}_t)$, in the definition of ${\\mathcal{A}^\\circ_{ac}}(\\mathcal{G}_t)$ we require that the stated properties hold for all $\\omega \\in \\Omega$, which causes no loss of generality if $\\mathcal{G}_0$ contains all $\\mathbb{P}$-null sets of $\\Omega$. It is clear that ${\\mathcal{A}^\\circ_{ac}} (\\mathcal{G}_t) \\subset{\\mathcal{A}^\\circ}(\\mathcal{G}_t) \\subset\\mathcal{S}$.\n\nFor reasons that will become clear later (e.g., see Lemma \\ref{lem-strat-set-compact}), we prefer to work with slightly more general processes than those in ${\\mathcal{A}^\\circ}(\\mathcal{G}_t)$ and ${\\mathcal{A}^\\circ_{ac}}(\\mathcal{G}_t)$. Let us denote\n\\begin{align*}\n\\mathcal{A}(\\mathcal{G}_t) :=&\\, \\{ \\rho \\in \\mathcal S : \\,\\exists\\; \\hat\\rho\\in{\\mathcal{A}^\\circ} (\\mathcal{G}_t) \\,\\text{such that $\\rho = \\hat \\rho$ for $(\\lambda \\times \\mathbb{P})$\\ae $(t,\\omega)\\in[0,T]\\times\\Omega$}\\},\\\\\n\\mathcal{A}_{ac}(\\mathcal{G}_t) := &\\,\\{ \\rho \\in \\mathcal S : \\,\\exists\\; \\hat\\rho\\in{\\mathcal{A}^\\circ_{ac}} (\\mathcal{G}_t) \\,\\text{such that $\\rho = \\hat \\rho$ for $(\\lambda \\times \\mathbb{P})$\\ae $(t,\\omega)\\in[0,T]\\times\\Omega$}\\}.\n\\end{align*}\nWe will call $\\hat \\rho$ in the definition of the set $\\mathcal{A}$ (and $\\mathcal{A}_{ac}$) the \\emph{c\\`adl\\`ag\\@ifnextchar.{}{\\@ifnextchar,{}{\\@ifnextchar;{}{ }}}} (and {\\em absolutely continuous}) {\\em representative} of $\\rho$. Although it is not unique, all c\\`adl\\`ag\\@ifnextchar.{}{\\@ifnextchar,{}{\\@ifnextchar;{}{ }}} representatives are indistinguishable (Lemma \\ref{lem:cadlag_indis}). Hence, all c\\`adl\\`ag\\@ifnextchar.{}{\\@ifnextchar,{}{\\@ifnextchar;{}{ }}} representatives $\\hat\\rho$ of $\\rho\\in\\mathcal{A}$ define the same positive measure on $[0,T]$ for $\\mathbb{P}$\\ae $\\omega\\in\\Omega$ via a non-decreasing mapping $t\\mapsto\\hat\\rho_t(\\omega)$.\nThen, given any bounded measurable process $(X_t)$ the stochastic process (Lebesgue-Stieltjes integral)\n\\begin{equation*}\nt \\mapsto \\int_{[0, t]} X_s\\, d\\hat \\rho_s, \\qquad t \\in [0, T], \n\\end{equation*}\ndoes not depend on the choice of the c\\`adl\\`ag\\@ifnextchar.{}{\\@ifnextchar,{}{\\@ifnextchar;{}{ }}} representative $\\hat \\rho$ in the sense that it is defined up to indistinguishability.\n\nThe next definition connects the randomised stopping times that we use in the construction of the game's payoff (Proposition \\ref{prop-functionals-equal}) with processes from the classes $\\mathcal{A}(\\mathcal{F}^1_t)$ and $\\mathcal{A}(\\mathcal{F}^2_t)$. 
Note that $\\mathcal{A}(\\mathcal{G}_t) \\subseteq \\mathcal{A}(\\mathcal{F}_t)$ whenever $(\\mathcal{G}_t) \\subseteq (\\mathcal{F}_t)$, so the definition can be stated for $\\mathcal{A}(\\mathcal{F}_t)$ without any loss of generality.\n\\begin{definition}\\label{def:integral}\nLet $(X_t)$ be measurable and such that $\\|X\\|_{\\mcalL}\\!<\\!\\infty$ (not necessarily {c\\`adl\\`ag\\@ifnextchar.{}{\\@ifnextchar,{}{\\@ifnextchar;{}{ }}}\\!\\!}). For $\\chi,\\rho \\in \\mathcal{A}(\\mathcal{F}_t)$, we define the Lebesgue-Stieltjes integral processes\n\\[\nt \\mapsto \\int_{[0, t]} X_s\\, d\\rho_s,\\quad t\\mapsto\\int_{[0, t]} X_s\\,(1-\\chi_{s}) d\\rho_s\\quad\\text{and}\\quad t\\mapsto\\int_{[0, t]} X_s\\,(1-\\chi_{s-}) d\\rho_s \\qquad t \\in [0, T], \n\\]\nby \n\\[\nt \\mapsto \\int_{[0, t]} X_s\\, d\\hat{\\rho}_s,\\quad t\\mapsto\\int_{[0, t]} X_s\\,(1-\\hat{\\chi}_{s}) d\\hat{\\rho}_s\\quad\\text{and}\\quad t\\mapsto\\int_{[0, t]} X_s\\,(1-\\hat{\\chi}_{s-}) d\\hat{\\rho}_s \\qquad t \\in [0, T], \n\\]\nfor any choice of the c\\`adl\\`ag\\@ifnextchar.{}{\\@ifnextchar,{}{\\@ifnextchar;{}{ }}} representatives $\\hat \\rho$ and $\\hat \\chi$, uniquely up to indistinguishability.\n\\end{definition}\n\nWith a slight abuse of notation we define a functional $N: \\mathcal{A}(\\mathcal{F}^1_t) \\times \\mathcal{A}(\\mathcal{F}^2_t) \\to \\mathbb{R}$ by the right-hand side of \\eqref{eq-functional-in-terms-of-controls}. It is immediate to verify using Definition \\ref{def-value-rand-strat} and Proposition \\ref{prop-functionals-equal} that the lower and the upper value of our game satisfy\n\\begin{align}\\label{eq:VV}\nV_{*}=\\sup_{\\zeta\\in\\mathcal{A}(\\mathcal{F}^2_t)}\\operatornamewithlimits{inf\\vphantom{p}}_{\\xi\\in\\mathcal{A}(\\mathcal{F}^1_t)} N(\\xi,\\zeta), \\qquad V^*=\\operatornamewithlimits{inf\\vphantom{p}}_{\\xi\\in\\mathcal{A}(\\mathcal{F}^1_t)} \\sup_{\\zeta\\in\\mathcal{A}(\\mathcal{F}^2_t)} N(\\xi,\\zeta).\n\\end{align}\nNotice that even though according to Definition \\ref{def-rand-st} the couple $(\\xi,\\zeta)$ should be taken in ${\\mathcal{A}^\\circ}(\\mathcal{F}^1_t)\\times{\\mathcal{A}^\\circ}(\\mathcal{F}^2_t)$, in \\eqref{eq:VV} we consider $(\\xi,\\zeta)\\in \\mathcal{A}(\\mathcal{F}^1_t)\\times\\mathcal{A}(\\mathcal{F}^2_t)$. This causes no inconsistency thanks to the discussion above and Definition \\ref{def:integral} for integrals.\n\n\n\\begin{remark} \nThe mapping $\\mathcal{A}(\\mathcal{F}^1_t) \\times \\mathcal{A}(\\mathcal{F}^2_t) \\ni (\\xi, \\zeta) \\mapsto N(\\xi, \\zeta)$ does not satisfy the conditions of Sion's theorem under the strong or the weak topology of $\\mathcal{S}$. Indeed, taking $\\xi^n_t = \\ind{\\{t \\ge T\/2 + 1\/n\\}}$, we have $\\xi^n_t \\to \\ind{\\{t \\ge T\/2\\}}=:\\xi_t$ for $\\lambda$\\ae $t \\in [0, T]$, so that by the dominated convergence theorem $(\\xi^n)$ also converges to $\\xi$ in $\\mathcal{S}$. Then, fixing $\\zeta_t = \\ind{\\{t \\ge T\/2\\}}$ in $\\mathcal{A}(\\mathcal{F}^2_t)$ we have $N(\\xi^n, \\zeta) = \\mathbb{E}[g_{T\/2}]$ for all $n\\ge 1$ whereas $N(\\xi, \\zeta) =\\mathbb{E}[ h_{T\/2} ]$. So the lower semicontinuity of $\\xi \\mapsto N(\\xi, \\zeta)$ cannot be ensured if, for example, $\\mathbb{P}(h_{T\/2}>g_{T\/2})>0$.\n\\end{remark}\n\nDue to issues indicated in the above remark, as in \\cite{TouziVieille2002}, we `smoothen' the control strategy of one player in order to introduce additional regularity in the payoff. 
We will show that this procedure does not change the value of the game (Proposition \\ref{thm:conv_lipsch}). We choose (arbitrarily and with no loss of generality, thanks to Remark \\ref{rem:ineq}) to consider an auxiliary game in which the first player can only use controls from $\\mathcal{A}_{ac} (\\mathcal{F}^1_t)$. Let us define the associated upper\/lower values:\n\\begin{equation}\\label{eq-value-cont-restriction}\nW_{*}=\\sup_{\\zeta\\in\\mathcal{A}(\\mathcal{F}^2_t)}\\operatornamewithlimits{inf\\vphantom{p}}_{\\xi\\in\\mathcal{A}_{ac}(\\mathcal{F}^1_t)} N(\\xi,\\zeta)\\quad\\text{and}\\quad W^*=\\operatornamewithlimits{inf\\vphantom{p}}_{\\xi\\in\\mathcal{A}_{ac}(\\mathcal{F}^1_t)} \\sup_{\\zeta\\in\\mathcal{A}(\\mathcal{F}^2_t)} N(\\xi,\\zeta).\n\\end{equation}\n\nHere, we work under the regularity assumption on the payoff processes \\ref{ass:regular}. Relaxation of this assumption is conducted in Section \\ref{sec:relax}.\nThe main results can be distilled into the following theorems:\n\n\\begin{theorem}\\label{th-value-cont-strat}\nUnder assumptions \\ref{eq-integrability-cond}-\\ref{ass:filtration}, the game (\\ref{eq-value-cont-restriction}) has a value, i.e.\n\\begin{equation*}\nW_{*}=W^{*}:=W.\n\\end{equation*}\nMoreover, the $\\zeta$-player (maximiser) has an optimal strategy, i.e. there exists $\\zeta^*\\in\\mathcal{A}(\\mathcal{F}^2_t)$ such that\n\\begin{equation*}\n\\operatornamewithlimits{inf\\vphantom{p}}_{\\xi\\in\\mathcal{A}_{ac}(\\mathcal{F}^1_t)} N(\\xi,\\zeta^*)=W.\n\\end{equation*}\n\\end{theorem}\n\\begin{Proposition}\\label{thm:conv_lipsch}\nUnder assumptions \\ref{eq-integrability-cond}-\\ref{ass:filtration}, for any $\\zeta \\in \\mathcal{A}(\\mathcal{F}^2_t)$ and $\\xi \\in \\mathcal{A}(\\mathcal{F}^1_t)$, there is a sequence $\\xi^n \\in \\mathcal{A}_{ac}(\\mathcal{F}^1_t)$ such that\n\\[\n\\mathop{\\lim\\sup}_{n \\to \\infty} N(\\xi^n, \\zeta) \\le N(\\xi, \\zeta).\n\\]\n\\end{Proposition}\n\nThe proofs of the above theorems will be conducted in the following subsections: Section \\ref{sec:tech} contains a series of technical results which we then use to prove Theorem \\ref{th-value-cont-strat} (in Section \\ref{sec:verif}) and Proposition \\ref{thm:conv_lipsch} (in Section \\ref{sec:approx}). With the results from Theorem \\ref{th-value-cont-strat} and Proposition \\ref{thm:conv_lipsch} in place we can provide a (simple) proof of Theorem \\ref{thm:main}.\n\\begin{proof}[{\\bf Proof of Theorem \\ref{thm:main}}]\nObviously, $V_* \\le W_*$ and $V^* \\le W^*$. However, Proposition \\ref{thm:conv_lipsch} implies that \n\\begin{align}\\label{eq:W*}\n\\operatornamewithlimits{inf\\vphantom{p}}_{\\xi\\in\\mathcal{A}_{ac}(\\mathcal{F}^1_t)} N(\\xi,\\zeta) = \\operatornamewithlimits{inf\\vphantom{p}}_{\\xi\\in\\mathcal{A}(\\mathcal{F}^1_t)} N(\\xi,\\zeta)\\quad\\text{for any $\\zeta\\in\\mathcal{A}(\\mathcal{F}^2_t)$},\n\\end{align} \nso $V_* \\ge W_*$ and therefore $V_* = W_*$. Then, thanks to Theorem \\ref{th-value-cont-strat}, we have a sequence of inequalities which completes the proof of existence of the value\n\\[\nW = W_* = V_* \\le V^* \\le W^* = W.\n\\]\nIn \\eqref{eq:W*} we can choose $\\zeta^*$ which is optimal for $W$ (its existence is guaranteed by Theorem \\ref{th-value-cont-strat}). 
Then,\n\\[\nV=V_*=\\operatornamewithlimits{inf\\vphantom{p}}_{\\xi\\in\\mathcal{A}(\\mathcal{F}^1_t)} N(\\xi,\\zeta^*).\n\\]\nThanks to Remark \\ref{rem:ineq}, we can repeat the same arguments above with the roles of the two players swapped as in \\eqref{eq:swap}, i.e., the $\\tau$-player ($\\xi$-player) is the maximiser and the $\\sigma$-player ($\\zeta$-player) is the minimiser. Thus, applying again Theorem \\ref{th-value-cont-strat} and Proposition \\ref{thm:conv_lipsch} (with $\\mathcal{P}'$ as in Remark \\ref{rem:ineq} in place of $\\mathcal{P}$) we arrive at \n\\[\n-V=:V'=\\operatornamewithlimits{inf\\vphantom{p}}_{\\zeta\\in\\mathcal{A}(\\mathcal{F}^2_t)} \\mathbb{E}\\big[\\mathcal{P}'(\\xi^*,\\zeta)\\big],\n\\]\nwhere $\\xi^*\\in\\mathcal{A}(\\mathcal{F}^1_t)$ is optimal for the maximiser in the game with value $W'=-W$. Hence $\\xi^*$ is optimal for the minimiser in the original game with value $V$ and the couple $(\\xi^*,\\zeta^*)\\in\\mathcal{A}(\\mathcal{F}^1_t)\\times\\mathcal{A}(\\mathcal{F}^2_t)$ is a saddle point. The corresponding randomised stopping times, denoted $(\\tau_*,\\sigma_*)$, are an optimal pair for the players. \n\\end{proof}\n\n\n\n\n\\subsection{Technical results}\\label{sec:tech}\n\nIn this section we give a series of results concerning the convergence of integrals when either the integrand or the integrator converge in a suitable sense. We start by stating a technical lemma whose easy proof is omitted. \n\\begin{Lemma}\\label{lem:cadlag_indis}\nLet $(X_t)$ and $(Y_t)$ be c\\`adl\\`ag\\@ifnextchar.{}{\\@ifnextchar,{}{\\@ifnextchar;{}{ }}} measurable processes such that $X_t = Y_t$, $\\mathbb{P}$\\as for $t \\in D \\subset [0, T)$ countable and dense, $X_{0-} = Y_{0-}$ and $X_T = Y_T$, $\\mathbb{P}$\\as Then $(X_t)$ is indistinguishable from $(Y_t)$.\n\\end{Lemma}\n\n\n\\begin{definition}\\label{def:def_C}\nGiven a c\\`adl\\`ag\\@ifnextchar.{}{\\@ifnextchar,{}{\\@ifnextchar;{}{ }}} measurable process $(X_t)$, for each $\\omega\\in\\Omega$ we denote\n\\[\nC_X(\\omega):= \\{ t\\in[0,T]: X_{t-}(\\omega)=X_t(\\omega) \\}.\n\\]\n\\end{definition}\nOur next result tells us that the convergence $(\\lambda\\times\\mathbb{P})$\\ae of processes in $\\mathcal{A} (\\mathcal{G}_t)$ can be lifted to $\\mathbb{P}$\\as convergence at all points of continuity of the corresponding c\\`adl\\`ag\\@ifnextchar.{}{\\@ifnextchar,{}{\\@ifnextchar;{}{ }}} representatives.\n\\begin{Lemma}\\label{lem:cadlag_convergence}\nFor a filtration $(\\mathcal{G}_t) \\subseteq (\\mathcal{F}_t)$, let $(\\rho^n)_{n\\ge 1}\\subset\\mathcal{A}(\\mathcal{G}_t)$ and $\\rho \\in \\mathcal{A}(\\mathcal{G}_t)$ with $\\rho^n \\to \\rho$ $(\\lambda \\times \\mathbb{P})$\\ae as $n\\to\\infty$. \nThen for any c\\`adl\\`ag\\@ifnextchar.{}{\\@ifnextchar,{}{\\@ifnextchar;{}{ }}} representatives $\\hat \\rho^n$ and $\\hat \\rho$ we have\n\\begin{equation}\\label{eqn:cadlag_convergence}\n\\mathbb{P}\\Big(\\big\\{\\omega \\in \\Omega:\\ \\lim_{n\\to\\infty}\\hat \\rho^n_t(\\omega)= \\hat \\rho_t(\\omega) \\:\\:\\text{for all $t\\in C_{\\hat \\rho}(\\omega)$}\\big\\}\\Big) = 1.\n\\end{equation}\n\\end{Lemma}\n\\begin{proof}\nThe $(\\lambda \\times \\mathbb{P})$\\ae convergence of $\\rho^n$ to $\\rho$ means that the c\\`adl\\`ag\\@ifnextchar.{}{\\@ifnextchar,{}{\\@ifnextchar;{}{ }}} representatives $\\hat \\rho_n$ converge to $\\hat \\rho$ also $(\\lambda \\times \\mathbb{P})$\\ae. 
Hence, there is a set $D \\subset [0, T]$ with $\\lambda([0,T]\\setminus D) = 0$ such that $\\hat \\rho^n_t \\to \\hat \\rho_t$ $\\mathbb{P}$\\as for $t \\in D$. Since $\\lambda([0,T]\\setminus D) = 0$, there is a countable subset $D_0 \\subset D$ that is dense in $[0, T]$. Define \n\\[\n\\Omega_0 := \\{ \\omega \\in \\Omega:\\ \\hat \\rho^n_t (\\omega) \\to \\hat \\rho_t (\\omega)\\:\\: \\text{for all $t \\in D_0$}\\}. \n\\]\nThen $\\mathbb{P}(\\Omega_0) = 1$.\n\nNow, fix $\\omega\\in\\Omega_0$ and let $t\\in C_{\\hat \\rho}(\\omega) \\cap (0, T)$. Take an increasing sequence $(t^1_k)_{k\\ge 1}\\subset D_0$ and a decreasing one $(t^2_k)_{k\\ge 1}\\subset D_0$, both converging to $t$ as $k\\to\\infty$. For each $k\\ge 1$ we have\n\\begin{equation}\\label{eqn:upward_conv}\n\\hat \\rho_t(\\omega)=\\lim_{k\\to\\infty}\\hat \\rho_{t^2_k}(\\omega)=\\lim_{k\\to\\infty}\\lim_{n\\to\\infty}\\hat \\rho^n_{t^2_k}(\\omega)\\ge \\mathop{\\lim\\sup}_{n\\to\\infty}\\hat \\rho^n_t(\\omega), \n\\end{equation}\nwhere in the final inequality we use that $\\hat \\rho^n_{t^2_k}(\\omega)\\ge \\hat\\rho^n_t(\\omega)$ by monotonicity. By analogous arguments we also obtain\n\\[\n\\hat \\rho_t(\\omega)=\\lim_{k\\to\\infty}\\hat \\rho_{t^1_k}(\\omega)=\\lim_{k\\to\\infty}\\lim_{n\\to\\infty}\\hat \\rho^n_{t^1_k}(\\omega)\\le \\mathop{\\lim\\operatornamewithlimits{inf\\vphantom{p}}}_{n\\to\\infty}\\hat \\rho^n_t(\\omega), \n\\]\nwhere the first equality holds because $t\\in C_{\\hat \\rho}(\\omega)$. Combining the above we get \\eqref{eqn:cadlag_convergence} (apart from $t\\in\\{0,T\\}$) by recalling that $\\omega\\in\\Omega_0$ and $\\mathbb{P}(\\Omega_0)=1$. The convergence at $t = T$, irrespective of whether it belongs to $C_{\\hat\\rho}(\\omega)$, is trivial as $\\hat \\rho^n_T(\\omega) = \\hat \\rho_T(\\omega) = 1$. If $0 \\in C_{\\hat\\rho}(\\omega)$, then $\\hat \\rho_0 (\\omega) = \\hat \\rho_{0-}(\\omega) = 0$. Inequality \\eqref{eqn:upward_conv} reads $0 = \\hat \\rho_0 (\\omega) \\ge \\mathop{\\lim\\sup}_{n \\to \\infty} \\hat \\rho^n_0(\\omega)$. Since $\\hat \\rho^n_0(\\omega) \\ge 0$, this proves that $\\hat\\rho^n_0(\\omega) \\to \\hat \\rho_0(\\omega)=0$.\n\\end{proof}\n\n\\begin{Lemma}\\label{prop-terminal-time-jump-limit}\nFor a filtration $(\\mathcal{G}_t) \\subseteq (\\mathcal{F}_t)$, let $(\\rho^n)_{n\\ge 1}\\subset{\\mathcal{A}^\\circ}(\\mathcal{G}_t)$ and $\\rho\\in{\\mathcal{A}^\\circ}(\\mathcal{G}_t)$ with $\\rho^n\\to \\rho$ $(\\lambda\\times\\mathbb{P})$\\ae as $n\\to\\infty$. For any $t \\in [0, T]$ and any random variable $X \\ge 0$ with $\\mathbb{E}[X]<\\infty$, we have\n\\begin{equation*}\n\\mathop{\\lim\\sup}_{n\\to\\infty} \\mathbb{E}[X\\Delta \\rho^n_t]\\le \\mathbb{E}[X\\Delta \\rho_t].\n\\end{equation*}\n\\end{Lemma}\n\\begin{proof}\nFix $t \\in (0, T)$. 
Using $(\\lambda\\times\\mathbb{P})$\\ae convergence of $\\rho^{n}$ to $\\rho$, i.e., that $\\int_0^T \\mathbb{P}\\big(\\lim_{n\\to\\infty}\\rho^{n}_t=\\rho_t\\big) d t=T$, there is a decreasing sequence $\\delta_m \\to 0$ such that\n\\begin{equation*}\n\\lim_{n\\to\\infty} \\rho^{n}_{t-\\delta_m} = \\rho_{t-\\delta_m},\n\\qquad\n\\lim_{n\\to\\infty} \\rho^{n}_{t+\\delta_m} = \\rho_{t+\\delta_m},\\qquad \\mathbb{P}\\as\n\\end{equation*}\nThen, by the dominated convergence theorem,\n\\begin{align*}\n\\mathbb{E}[X\\Delta \\rho_t]&=\\lim_{m \\to \\infty} \\mathbb{E}[ X (\\rho_{t + \\delta_m} - \\rho_{t - \\delta_m})]\\\\\n&=\n\\lim_{m\\to\\infty}\\lim_{n\\to\\infty} \\mathbb{E}[X(\\rho^{n}_{t+\\delta_m}-\\rho^{n}_{t-\\delta_m})]\\\\\n&=\n\\lim_{m\\to\\infty}\\mathop{\\lim\\sup}_{n\\to\\infty} \\mathbb{E}[X(\\rho^{n}_{t+\\delta_m}-\\rho^{n}_{t-\\delta_m})]\\\\\n&=\n\\lim_{m\\to\\infty}\\mathop{\\lim\\sup}_{n\\to\\infty} \\mathbb{E}[X(\\rho^{n}_{t+\\delta_m}-\\rho^{n}_{t} + \\rho^{n}_{t-} - \\rho^{n}_{t-\\delta_m} + \\Delta \\rho^{n}_{t})] \\ge \\mathop{\\lim\\sup}_{n\\to\\infty} \\mathbb{E}[X \\Delta \\rho^{n}_t],\n\\end{align*}\nwhere the last inequality is due to $t\\mapsto\\rho^{n}_t$ being non-decreasing. This finishes the proof for $t\\in(0,T)$.\nThe proof for $t \\in\\{ 0, T\\}$ is a simplified version of the argument above, since $\\rho^n_T=\\rho_T=1$ and $\\rho^n_{0-}=\\rho_{0-}=0$, $\\mathbb{P}$\\as\n\\end{proof}\n\nWe need to consider a slightly larger class of processes ${\\tilde{\\mathcal{A}}^\\circ}(\\mathcal{G}_t) \\supset {\\mathcal{A}^\\circ}(\\mathcal{G}_t)$ defined by\n\\begin{align*}\n{\\tilde{\\mathcal{A}}^\\circ}(\\mathcal{G}_t):=&\\,\\{\\rho\\,:\\,\\text{$\\rho$ is $(\\mathcal{G}_t)$-adapted with $t\\mapsto\\rho_t(\\omega)$ c\\`adl\\`ag\\@ifnextchar.{}{\\@ifnextchar,{}{\\@ifnextchar;{}{ }}},}\\\\\n&\\qquad\\,\\text{non-decreasing, $\\rho_{0-}(\\omega)=0$ and $\\rho_T(\\omega)\\le 1$ for all $\\omega\\in\\Omega$}\\}.\n\\end{align*}\n\n\\begin{Proposition}\\label{prop:r-convergence}\nFor a filtration $(\\mathcal{G}_t) \\subseteq (\\mathcal{F}_t)$, let $(\\rho^n)_{n\\ge 1}\\subset{\\tilde{\\mathcal{A}}^\\circ}(\\mathcal{G}_t)$ and $\\rho\\in{\\tilde{\\mathcal{A}}^\\circ}(\\mathcal{G}_t)$. Assume\n\\[\n\\mathbb{P}\\Big(\\big\\{\\omega \\in \\Omega:\\ \\lim_{n\\to\\infty}\\rho^n_t(\\omega)=\\rho_t(\\omega)\\:\\:\\text{for all $t\\in C_\\rho(\\omega)\\cup\\{T\\}$} \\big\\} \\Big) = 1.\n\\]\nThen for any $X\\in\\mcalL$ that is also $(\\mathcal F_t)$-adapted and regular, we have\n\\begin{equation}\\label{eqn:b_t}\n\\lim_{n\\to\\infty}\\mathbb{E}\\bigg[\\int_{[0, T]} X_t d\\rho^n_t\\bigg] = \\mathbb{E}\\bigg[\\int_{[0, T]} X_t d\\rho_t\\bigg].\n\\end{equation}\n\\end{Proposition}\n\\begin{proof}\nLet us first assume that $(X_t) \\in \\mcalL$ has continuous trajectories (but is not necessarily adapted). If we prove that \n\\begin{equation}\\label{eqn:b_t_omega}\n\\lim_{n\\to\\infty}\\int_{[0, T]} X_t(\\omega) d \\rho^n_t(\\omega)= \\int_{[0, T]} X_t(\\omega) d \\rho_t (\\omega),\\quad\\text{for $\\mathbb{P}$\\ae $\\omega\\in\\Omega$,}\n\\end{equation}\nthen the result in \\eqref{eqn:b_t} will follow by the dominated convergence theorem. By assumption there is $\\Omega_0\\subset \\Omega$ with $\\mathbb{P}(\\Omega_0)=1$ and such that $\\rho^n_t(\\omega)\\to\\rho_t(\\omega)$ at all points of continuity of $t\\mapsto \\rho_t(\\omega)$ and at the terminal time $T$ for all $\\omega \\in \\Omega_0$. 
Since $d\\rho^n_t(\\omega)$ and $d\\rho_t(\\omega)$ define positive measures on $[0,T]$ for each $\\omega\\in\\Omega_0$, the convergence of integrals in \\eqref{eqn:b_t_omega} can be deduced from the weak convergence of finite measures, see \\cite[Remark III.1.2]{Shiryaev}. Indeed, if $\\omega\\in\\Omega_0$ is such that $\\rho_T(\\omega)=0$, the right-hand side of \\eqref{eqn:b_t_omega} is zero and we have\n\\begin{equation*}\n\\mathop{\\lim\\sup}_{n\\to\\infty}\\left|\\int_{[0,T]} X_t(\\omega) d \\rho^n_t(\\omega)\\right| \\le \\mathop{\\lim\\sup}_{n\\to\\infty}\\sup_{t \\in [0, T]} |X_t(\\omega)| \\rho^n_T(\\omega)= 0,\n\\end{equation*}\nwhere we use $X\\in\\mcalL$ to ensure that $\\sup_{t \\in [0, T]} |X_t(\\omega)|<\\infty$. If instead, $\\omega\\in\\Omega_0$ is such that $\\rho_T(\\omega)>0$, then for all sufficiently large $n$'s, we have $\\rho^n_T(\\omega) > 0$ and $t \\mapsto \\rho^n_t(\\omega) \/ \\rho^n_T(\\omega)$ define cumulative distribution functions (cdfs) converging pointwise to $\\rho_t(\\omega) \/ \\rho_T(\\omega)$ at the points of continuity of $\\rho_t(\\omega)$. Since $t \\mapsto X_t(\\omega)$ is continuous, \\cite[Thm III.1.1]{Shiryaev} justifies\n\\begin{align*}\n\\lim_{n\\to\\infty}\\int_{[0, T]} X_t (\\omega) d \\rho^{n}_t(\\omega) =&\\,\\lim_{n\\to\\infty} \\rho^{n}_T (\\omega) \\int_{[0, T]} X_t(\\omega) d\\left(\\frac{\\rho^{n}_t(\\omega)}{\\rho^{n}_T(\\omega)}\\right) \\\\\n=&\\,\\rho_T(\\omega) \\int_{[0, T]} X_t(\\omega) d\\left(\\frac{\\rho_t(\\omega)}{\\rho_T(\\omega)}\\right) = \\int_{[0, T]} X_t(\\omega) d \\rho_t(\\omega).\n\\end{align*}\n\nNow we drop the continuity assumption on $X$. We turn our attention to c\\`adl\\`ag\\@ifnextchar.{}{\\@ifnextchar,{}{\\@ifnextchar;{}{ }}}, $(\\mathcal{F}_t)$-adapted and regular $(X_t) \\in \\mcalL$. By \\cite[Theorem 3]{Bismut1978} there is $(\\tilde X_t) \\in \\mcalL$ with continuous trajectories (not necessarily adapted) such that $(X_t)$ is an $(\\mathcal{F}_t)$-optional projection of $(\\tilde X_t)$. From the first part of the proof we know that \\eqref{eqn:b_t} holds for $(\\tilde X_t)$. To show that it holds for $(X_t)$ it is sufficient to notice that $(\\rho^n_t)$ and $(\\rho_t)$ are $(\\mathcal{F}_t)$-optional processes, and apply \\cite[Thm VI.57]{DellacherieMeyer} to obtain\n\\[\n \\mathbb{E}\\bigg[\\int_{[0, T]} X_t d \\rho^n_t\\bigg] = \\mathbb{E}\\bigg[\\int_{[0, T]} \\tilde X_t d \\rho^n_t\\bigg]\\qquad \\text{and} \\qquad \n \\mathbb{E}\\bigg[\\int_{[0, T]} X_t d \\rho_t\\bigg] = \\mathbb{E}\\bigg[\\int_{[0, T]} \\tilde X_t d \\rho_t\\bigg].\n\\]\n\\end{proof}\n\n\\begin{remark}\nThe statement of Proposition \\ref{prop:r-convergence} can be strengthened to include all processes in $\\mcalL$ which are regular but not necessarily $(\\mathcal{F}_t)$-adapted. One can prove it by adapting arguments of the proof of \\cite[Thm.\\ 3]{Meyer}. \n\\end{remark}\n\n\n\\begin{Proposition}\\label{prop-specific-convergence-2}\nFor a filtration $(\\mathcal{G}_t) \\subseteq (\\mathcal{F}_t)$, let $\\chi\\in {\\mathcal{A}^\\circ}(\\mathcal{G}_t)$ and $\\rho\\in\\mathcal{A}_{ac}(\\mathcal{G}_t)$ and consider $X\\in\\mcalL$ which is $(\\mathcal{F}_t)$-adapted and regular. 
If $(\\rho^n)_{n\\ge 1}\\subset\\mathcal{A}_{ac}(\\mathcal{G}_t)$ converges $(\\lambda \\times \\mathbb{P})$\\ae to $\\rho$ as $n\\to\\infty$, then\n\\begin{equation}\\label{eq:lim00}\n\\lim_{n\\to\\infty}\\mathbb{E}\\bigg[\\int_{[0, T]} X_t(1-\\chi_{t-})d\\rho^n_t\\bigg]=\\mathbb{E}\\bigg[\\int_{[0, T]} X_t(1-\\chi_{t-})d\\rho_t\\bigg].\n\\end{equation}\n\\end{Proposition}\n\\begin{proof}\nDefine absolutely continuous adapted processes\n\\begin{equation*}\nR^n_t = \\int_{[0, t]} (1-\\chi_{s-})d\\rho^n_s\\quad\\text{and}\\quad R_t = \\int_{[0, t]} (1-\\chi_{s-})d\\rho_s,\n\\end{equation*}\nso that\n\\begin{equation}\\label{eq:intdR}\n\\int_{[0, T]} X_t(1-\\chi_{t-})d\\rho^n_t=\\int_{[0, T]} X_tdR^n_t\\quad\\text{and}\\quad \\int_{[0, T]} X_t(1-\\chi_{t-})d\\rho_t=\\int_{[0, T]} X_tdR_t.\n\\end{equation}\nWith no loss of generality we can consider the absolutely continuous representatives of $\\rho$ and $\\rho^n$ from the class ${\\mathcal{A}^\\circ_{ac}}(\\mathcal{G}_t)$ in the definition of all the integrals above (which we still denote by $\\rho$ and $\\rho^n$ for simplicity). In light of this observation it is clear that $(R^n)_{n\\ge 1}\\subset{\\tilde{\\mathcal{A}}^\\circ}(\\mathcal{G}_t)$ and $R\\in {\\tilde{\\mathcal{A}}^\\circ}(\\mathcal{G}_t)$. The idea is then to apply Proposition \\ref{prop:r-convergence} to the integrals with $R^n$ and $R$ in \\eqref{eq:intdR}.\n\nThanks to Lemma \\ref{lem:cadlag_convergence} and recalling that $\\rho^n_T = \\rho_T = 1$, the set\n\\[\n\\Omega_0 = \\big\\{\\omega\\in\\Omega:\\lim_{n\\to\\infty}\\rho^n_t(\\omega)= \\rho_t(\\omega) \\text{ for all $t \\in [0, T]$}\\big\\}\n\\]\nhas full measure, i.e., $\\mathbb{P}(\\Omega_0)=1$. For any $\\omega \\in \\Omega_0$ and $t\\in[0,T]$, integrating by parts (see, e.g., \\cite[Prop. 4.5, Chapter 0]{revuzyor}), using the dominated convergence theorem and then again integrating by parts give\n\\begin{equation}\\label{eqn:conv_R}\n\\lim_{n \\to \\infty} R^n_t= \\lim_{n \\to \\infty} \\bigg[(1-\\chi_{t})\\rho^n_t - \\int_{[0,t]} \\rho^n_sd(1-\\chi_{s}) \\bigg]\n= (1-\\chi_{t})\\rho_t - \\int_{[0,t]} \\rho_sd(1-\\chi_{s})=R_t.\n\\end{equation}\nHence $R^n$ and $R$ satisfy the assumptions of Proposition \\ref{prop:r-convergence} and we can conclude that \\eqref{eq:lim00} holds.\n\\end{proof}\n\n\\begin{cor}\\label{cor-specific-convergence-2}\nUnder the assumptions of Proposition \\ref{prop-specific-convergence-2}, we have \n\\begin{equation*\n\\lim_{n\\to\\infty}\\mathbb{E}\\bigg[\\int_{[0, T)} X_t(1-\\chi_{t})d\\rho^n_t + X_T \\Delta \\chi_T \\Delta \\rho^n_T\\bigg]=\\mathbb{E}\\bigg[\\int_{[0, T)} X_t(1-\\chi_{t})d\\rho_t + X_T \\Delta \\chi_T \\Delta \\rho_T\\bigg].\n\\end{equation*}\n\\end{cor}\n\\begin{proof}\nRecall that $\\rho^n$ and $\\rho$ are continuous everywhere apart from $T$. Hence, we can rewrite the left- and right-hand side of \\eqref{eq:lim00} as\n\\[\n\\int_{[0, T]} X_t(1-\\chi_{t-})d\\rho^n_t = \\int_{[0, T]} X_t(1-\\chi_{t})d\\rho^n_t + X_T \\Delta \\chi_T \\Delta \\rho^n_T\n\\]\nand\n\\[\n\\int_{[0, T]} X_t(1-\\chi_{t-})d\\rho_t = \\int_{[0, T]} X_t(1-\\chi_{t})d\\rho_t + X_T \\Delta \\chi_T \\Delta \\rho_T\\,,\n\\]\nrespectively. It remains to note that $\\int_{[0, T]} X_t(1-\\chi_{t})d\\rho^n_t = \\int_{[0, T)} X_t(1-\\chi_{t})d\\rho^n_t$ because $\\chi_T = 1$.\n\\end{proof}\n\nWe close this technical section with a similar result to the above but for approximations which are needed for the proof of Proposition \\ref{thm:conv_lipsch}. 
The next proposition is tailored for our specific type of regularisation of processes in $\\mathcal{A}(\\mathcal{F}^1_t)$. Notice that the left hand side of \\eqref{eq:lim-in-A} features $\\chi_{t-}$ while the right hand side has $\\chi_t$.\n\n\\begin{Proposition}\\label{prop-specific-convergence-3}\nFor a filtration $(\\mathcal{G}_t) \\subseteq (\\mathcal{F}_t)$, let $\\chi,\\rho \\in{\\mathcal{A}^\\circ} (\\mathcal{G}_t)$, $(\\rho^n)_{n\\ge 1}\\subset{\\mathcal{A}^\\circ} (\\mathcal{G}_t)$ and consider $X\\in\\mcalL$ which is $(\\mathcal{F}_t)$-adapted and regular. Assume the sequence $(\\rho^n)_{n\\ge 1}$ is non-decreasing and for $\\mathbb{P}$\\ae $\\omega\\in\\Omega$\n\\begin{align}\\label{eq:conv-rho}\n\\lim_{n\\to\\infty}\\rho^{n}_t(\\omega)=\\rho_{t-}(\\omega)\\:\\: \\text{for all $t\\in[0,T)$}.\n\\end{align}\nThen\n\\begin{equation}\\label{eq:lim-in-A}\n\\lim_{n\\to\\infty}\\mathbb{E}\\bigg[\\int_{[0, T)} X_t(1-\\chi_{t-})d\\rho^n_t\\bigg]= \\mathbb{E}\\bigg[\\int_{[0, T)} X_t(1-\\chi_{t})d\\rho_t\\bigg],\n\\end{equation}\nand for $\\mathbb{P}$\\ae $\\omega \\in \\Omega$\n\\begin{equation}\\label{eqn:lim-in-t-}\n\\lim_{n\\to\\infty}\\rho^n_{t-}(\\omega)=\\rho_{t-}(\\omega)\\quad \\text{for all $t \\in [0, T]$}.\n\\end{equation}\n\\end{Proposition}\n\\begin{proof}\nDenote by $\\Omega_0$ the set on which the convergence \\eqref{eq:conv-rho} holds. The first observation is that for all $\\omega\\in\\Omega_0$ and $t \\in (0, T]$\n\\begin{align}\\label{eq:lim-nt}\n\\lim_{n\\to\\infty}\\rho^n_{t-}(\\omega)=\\lim_{n\\to\\infty}\\lim_{u\\uparrow t}\\rho^n_{u}(\\omega)=\\lim_{u\\uparrow t}\\lim_{n\\to\\infty}\\rho^n_{u}(\\omega)=\\lim_{u\\uparrow t}\\rho_{u-}(\\omega)=\\rho_{t-}(\\omega),\n\\end{align}\nwhere the order of limits can be swapped by monotonicity of the process and of the sequence. The convergence at $t=0$ is obvious as $\\rho^n_{0-} = \\rho_{0-} = 0$. This proves \\eqref{eqn:lim-in-t-}.\n\nDefine for $t \\in [0, T)$,\n\\begin{equation}\\label{eqn:def_Rn}\nR^n_t=\\int_{[0, t]} (1-\\chi_{s-}) d\\rho^n_s, \\qquad R_t=\\int_{[0, t]} (1-\\chi_{s}) d\\rho_s,\n\\end{equation}\nand extend both processes to $t=T$ in a continuous way by taking $R^n_{T}:=R^n_{T-}$ and $R_{T}:=R_{T-}$. By construction we have $(R^n)_{n\\ge 1}\\subset\\tilde{{\\mathcal{A}^\\circ}} (\\mathcal{G}_t)$ and $R\\in \\tilde{{\\mathcal{A}^\\circ}}(\\mathcal{G}_t)$ and the idea is to apply Proposition \\ref{prop:r-convergence}. First we notice that for all $\\omega\\in\\Omega$ and any $t\\in[0,T)$ we have\n\\[\n\\Delta R_t(\\omega)=(1-\\chi_t(\\omega))\\Delta\\rho_t(\\omega),\n\\] \nso that we can write the set of points of continuity of $R$ as (recall Definition \\ref{def:def_C})\n\\[\nC_R(\\omega)=C_{\\rho}(\\omega)\\cup \\{t\\in[0,T]:\\chi_t(\\omega)=1\\}.\n\\] \n\nFor any $t\\in[0, T)$ and all $\\omega\\in\\Omega_0$, integrating $R^n_t(\\omega)$ by parts (\\citep[Prop. 4.5, Chapter 0]{revuzyor}) and then taking limits as $n\\to\\infty$ we get\n\\begin{align}\\label{eq:Rcon}\n\\lim_{n\\to\\infty}R^n_t(\\omega)=&\\,\\lim_{n\\to\\infty} \\Big[(1-\\chi_{t}(\\omega))\\rho^n_t(\\omega) - \\int_{[0, t]} \\rho^n_s(\\omega)d (1-\\chi_{s}(\\omega))\\Big]\\\\\n=&\\, (1-\\chi_{t}(\\omega))\\rho_{t-}(\\omega) - \\int_{[0, t]} \\rho_{s-}(\\omega)d (1-\\chi_{s}(\\omega)) \\notag\\\\\n=&\\, R_t(\\omega) - (1-\\chi_t(\\omega)) \\Delta \\rho_t(\\omega)=R_{t-}(\\omega),\\notag \n\\end{align}\nwhere the second equality uses dominated convergence and \\eqref{eq:conv-rho}, and the third equality is integration by parts. 
We can therefore conclude that \n\\begin{align*}\n\\lim_{n\\to\\infty}R^n_t(\\omega)=R_t(\\omega),\\quad\\text{for all $t\\in C_R(\\omega) \\cap[0,T)$ and all $\\omega\\in\\Omega_0$}.\n\\end{align*}\n\nIt remains to show the convergence at $T$ which is in $C_R(\\omega)$ by our construction of $R$. Since the function $t \\mapsto \\rho_t(\\omega)$ is non-decreasing and the sequence $(\\rho^n(\\omega))_n$ is non-decreasing, the sequence $(R^n(\\omega))_n$ is non-decreasing too (an easy proof of this fact involves integration by parts and observing that $t\\mapsto d (1-\\chi_t(\\omega))$ defines a negative measure; notice also the link to the first-order stochastic dominance). As in \\eqref{eq:lim-nt}, we show that $\\lim_{n \\to \\infty} R^n_{T-}(\\omega) = R_{T-} (\\omega)$ for $\\omega \\in \\Omega_0$. By construction of $R^n$ and $R$, this proves convergence of $R^n_T$ to $R_T$.\n\nThen, the processes $R^n$ and $R$ fulfil all the assumptions of Proposition \\ref{prop:r-convergence} whose application allows us to obtain \\eqref{eq:lim-in-A}.\n\\end{proof}\n\nFrom the convergence \\eqref{eq:Rcon}, an identical argument as in \\eqref{eq:lim-nt} proves convergence of left-limits of processes $(R^n)$ at any $t \\in [0, T]$. The following corollary formalises this observation. It will be used in Section \\ref{sec:relax}.\n\\begin{cor}\\label{cor:lim_R_in-t-}\nConsider the processes $(R^n)$ and $R$ defined in \\eqref{eqn:def_Rn}. For $\\mathbb{P}$\\ae $\\omega \\in \\Omega$ we have\n\\begin{equation*}\n\\lim_{n\\to\\infty} R^n_{t-}(\\omega)=R_{t-}(\\omega)\\quad \\text{for all $t \\in [0, T]$}.\n\\end{equation*}\n\\end{cor}\n\n\n\\subsection{Verification of the conditions of Sion's theorem}\\label{sec:verif}\n\nFor the application of Sion's theorem, we will consider a weak topology on $\\mathcal{A}_{ac}(\\mathcal{F}^1_t)$ and $\\mathcal{A}(\\mathcal{F}^2_t)$ inherited from the space $\\mathcal{S}$. In our arguments, we will often use that for convex sets the weak and strong closedness are equivalent \\cite[Theorem 3.7]{Brezis2010} (although weak and strong convergence are not equivalent, c.f. \\cite[Corollary 3.8]{Brezis2010}).\n\n\\begin{Lemma} \\label{lem-strat-set-compact}\nFor any filtration $(\\mathcal{G}_t) \\subseteq (\\mathcal{F}_t)$ satisfying the usual conditions, the set $\\mathcal{A}(\\mathcal{G}_t)$ is weakly compact in $\\mathcal{S}$.\n\\end{Lemma}\n\\begin{proof}\nWe write $\\mathcal{A}$ for $\\mathcal{A}(\\mathcal{G}_t)$ and ${\\mathcal{A}^\\circ}$ for ${\\mathcal{A}^\\circ}(\\mathcal{G}_t)$. The set $\\mathcal{A}$ is a subset of a ball in $\\mathcal{S}$. Since $\\mathcal{S}$ is a reflexive Banach space, this ball is weakly compact (Kakutani's theorem, \\cite[Theorem 3.17]{Brezis2010}). Therefore, we only need to show that $\\mathcal{A}$ is weakly closed. Since $\\mathcal{A}$ is convex, it is enough to show that $\\mathcal{A}$ is strongly closed \\cite[Theorem 3.7]{Brezis2010}.\n\nTake a sequence $(\\rho^n)_{n\\ge 1}\\subset\\mathcal{A}$ that converges strongly in $\\mathcal{S}$ to $\\rho$. 
We will prove that $\\rho\\in\\mathcal{A}$ by constructing a c\\`adl\\`ag\\@ifnextchar.{}{\\@ifnextchar,{}{\\@ifnextchar;{}{ }}} non-decreasing adapted process $(\\hat \\rho_t)$ such that $\\hat\\rho_{0-} = 0$, $\\hat \\rho_T = 1$, and $\\hat \\rho = \\rho$ $(\\lambda\\times\\mathbb{P})$\\ae With no loss of generality we can pass to the c\\`adl\\`ag\\@ifnextchar.{}{\\@ifnextchar,{}{\\@ifnextchar;{}{ }}} representatives $(\\hat \\rho^n)_{n\\ge 1}\\subset{\\mathcal{A}^\\circ}$ which also converge to $\\rho$ in $\\mathcal S$. Then, there is a subsequence $(n_k)_{k\\ge 1}$ such that $\\hat\\rho^{n_k}\\to \\rho$ $(\\lambda \\times \\mathbb{P})$\\ae \\cite[Theorem 4.9]{Brezis2010}.\n\nSince \n\\[\n\\int_0^t \\mathbb{P}\\big(\\lim_{k\\to\\infty}\\hat\\rho^{n_k}_s=\\rho_s\\big) d s=t,\\quad\\text{for all $t\\in[0,T]$,}\n\\]\nwe can find $\\hat D\\subset [0,T]$ with $\\lambda([0,T]\\setminus\\hat D)=0$ such that $\\mathbb{P}(\\Omega_t)=1$ for all $t\\in \\hat D$, where \n\\[\n\\Omega_t:=\\{\\omega\\in\\Omega: \\lim_{k\\to\\infty}\\hat \\rho^{n_k}_t(\\omega)=\\rho_t(\\omega)\\}.\n\\]\nThen we can take a dense countable subset $D\\subset \\hat D$ and define $\\Omega_0:=\\cap_{t\\in D}\\Omega_t$ so that $\\mathbb{P}(\\Omega_0)=1$ and \\[\n\\lim_{k\\to\\infty}\\hat \\rho^{n_k}_t(\\omega)=\\rho_t(\\omega),\\qquad\\text{for all $(t,\\omega)\\in D\\times\\Omega_0$.}\n\\]\nSince $\\hat \\rho^{n_k}$ are non-decreasing, so is the mapping $D\\ni t\\mapsto \\rho_t(\\omega)$ for all $\\omega\\in\\Omega_0$. Let us extend this mapping to $[0,T]$ by defining $\\hat \\rho_t(\\omega):=\\rho_t(\\omega)$ for $t\\in D$ and\n\\[\n\\hat{\\rho}_t(\\omega):=\\lim_{s\\in D:s\\downarrow t} \\rho_s(\\omega),\\quad\\hat{\\rho}_{0-}(\\omega):=0,\\quad \\hat{\\rho}_{T}(\\omega):=1,\\quad\\text{for all $\\omega\\in \\Omega_0$,}\n\\]\nwhere the limit exists due to monotonicity. For $\\omega\\in \\mathcal{N}:=\\Omega\\setminus\\Omega_0$, we set $\\hat \\rho_t(\\omega) = 0$ for $t < T$ and $\\hat \\rho_T(\\omega)=1$. Notice that $\\mathcal{N}\\in\\mathcal{G}_0$ since $\\mathbb{P}(\\mathcal{N})=0$ so that $\\hat{\\rho}_t$ is $\\mathcal{G}_t$-measurable for $t\\in D$. Moreover, $\\hat{\\rho}$ is c\\`adl\\`ag\\@ifnextchar.{}{\\@ifnextchar,{}{\\@ifnextchar;{}{ }}} by construction and $\\hat{\\rho}_t$ is measurable with respect to $\\cap_{s\\in D, s > t\\,}\\mathcal{G}_s=\\mathcal{G}_{t+}=\\mathcal{G}_t$ for each $t\\in[0,T]$ by the right-continuity of the filtration. Hence, $\\hat \\rho$ is $(\\mathcal{G}_t)$-adapted and $\\hat \\rho \\in {\\mathcal{A}^\\circ}$.\n\nIt remains to show that $\\hat \\rho^{n_k}\\to \\hat{\\rho}$ in $\\mathcal{S}$ so that $\\hat \\rho=\\rho$ $(\\lambda\\times\\mathbb{P})$\\ae and therefore $\\rho\\in\\mathcal{A}$. It suffices to show that $\\hat \\rho^{n_k}\\to \\hat{\\rho}$ $(\\lambda\\times\\mathbb{P})$\\ae and then conclude by dominated convergence that $\\hat \\rho^{n_k}\\to \\hat{\\rho}$ in $\\mathcal{S}$. For each $\\omega\\in\\Omega_0$ the process $t\\mapsto\\hat \\rho(\\omega)$ has at most countably many jumps (on any bounded interval) by monotonicity, i.e., $\\lambda([0,T]\\setminus C_{\\hat \\rho}(\\omega))=0$ (recall Definition \\ref{def:def_C}). 
Moreover, arguing as in the proof of Lemma \\ref{lem:cadlag_convergence}, we conclude\n\\[\n\\lim_{k\\to\\infty}\\hat \\rho^{n_k}_t(\\omega)=\\hat \\rho_t(\\omega),\\quad\\text{for all $t\\in C_{\\hat \\rho}(\\omega)$ and all $\\omega\\in\\Omega_0$}.\n\\]\nSince $(\\lambda\\!\\times\\!\\mathbb{P})(\\{(t,\\omega)\\!:\\!t\\in C_{\\hat\\rho}(\\omega)\\cap B,\\omega\\in\\Omega_0\\})\\!=\\!\\lambda(B)$ for any bounded interval $B\\subseteq[0,T]$ then $\\hat \\rho^{n_k}\\!\\to\\! \\hat{\\rho}$ in $\\mathcal{S}$ and $\\mathcal{A}$ is strongly closed in $\\mathcal{S}$.\n\\end{proof}\n \n\\begin{remark}\\label{rem-TV-norm-doesnt-work}\nOur space $\\mathcal{A}(\\mathcal{G}_t)$ is the space of processes that generate randomised stopping times and for any $\\rho\\in\\mathcal{A}(\\mathcal{G}_t)$ we require that $\\rho_T(\\omega)=1$, for all $\\omega\\in\\Omega$. In the finite horizon problem, i.e., $T<\\infty$, such specification imposes a constraint that prevents a direct use of the topology induced by the norm considered in \\cite{TouziVieille2002}. Indeed, in \\cite{TouziVieille2002} the space $\\mathcal S$ is that of $(\\mathcal{G}_t)$-adapted processes $\\rho$ with\n\\begin{equation*}\n\\|\\rho\\|^2:=\\mathbb{E}\\bigg[\\int_{0}^T (\\rho_t)^2 d t + (\\Delta \\rho_T)^2\\bigg] < \\infty,\\quad \\Delta\\rho_T:=\\rho_T-\\mathop{\\lim\\operatornamewithlimits{inf\\vphantom{p}}}_{t\\uparrow T}\\rho_t.\n\\end{equation*}\nThe space of generating processes $\\mathcal{A}(\\mathcal{G}_t)$ is not closed in the topology induced by $\\|\\cdot\\|$ above: define a sequence $(\\rho^n)_{n\\ge 1}\\subset\\mathcal{A}(\\mathcal{G}_t)$ by\n\\begin{equation*}\n\\rho^n_t = n \\bigg(t - T + \\frac{1}{n}\\bigg)^+, \\qquad t \\in [0, T].\n\\end{equation*}\nThen $\\|\\rho^n\\|\\to 0$ as $n\\to\\infty$ but $\\rho\\equiv 0\\notin\\mathcal{A}(\\mathcal{G}_t)$ since it fails to be equal to one at $T$ (and it is not possible to select a representative from $\\mathcal{A}(\\mathcal{G}_t)$ with the equivalence relation induced by $\\|\\,\\cdot\\,\\|$). \n\\end{remark}\n\nIt is of interest to explore the relationship between the topology on $\\mathcal{A}(\\mathcal{G}_t)$ implied by the weak topology on $\\mathcal{S}$ (denote it by $\\mcalO_2$) and the topology introduced in \\cite{BaxterChacon, Meyer} (denote it by $\\mcalO_1$). The topology $\\mcalO_1$ is the coarsest topology in which all functionals of the form\n\\begin{equation}\\label{eqn:top_O2}\n\\mathcal{A}(\\mathcal{G}_t) \\ni \\rho \\mapsto \\mathbb{E} \\Big[\\int_{[0, T]} X_t\\, d \\rho_t\\Big]\n\\end{equation}\nare continuous for any $X \\in \\mcalL$ with continuous trajectories. Our topology $\\mcalO_2$, instead, is the restriction to $\\mathcal{A}(\\mathcal{G}_t)$ of the weak topology on $\\mathcal{S}$. That is, $\\mcalO_2$ is the coarsest topology for which all functionals of the form\n\\begin{equation*\n\\mathcal{A}(\\mathcal{G}_t) \\ni \\rho \\mapsto \\mathbb{E} \\Big[\\int_{[0, T]} \\rho_t\\, Y_t\\, d t\\Big]\n\\end{equation*}\nare continuous for all $Y \\in \\mathcal{S}$. 
\n\\begin{Lemma}\\label{lem:top}\nTopologies $\\mcalO_1$ and $\\mcalO_2$ are identical.\n\\end{Lemma}\n\\begin{proof}\nDenoting \n\\begin{align}\\label{eq:Xint}\nX_t = \\int_{[0, t]} Y_t\\, d t\n\\end{align} \nand integrating by parts, we obtain for $\\rho \\in \\mathcal{A} (\\mathcal{G}_t)$\n\\[\n\\mathbb{E} \\Big[\\int_{[0, T]} \\rho_t\\, Y_t\\, d t\\Big]\n=\n\\mathbb{E} \\Big[X_T \\rho_T - X_0 \\rho_{0-} - \\int_{[0, T]} X_t\\, d \\rho_t \\Big]\n=\n\\mathbb{E} \\Big[X_T - \\int_{[0, T]} X_t\\, d \\rho_t \\Big],\n\\]\nwhere we used that $\\rho_T = 1$ and $\\rho_{0-} = 0$, $\\mathbb{P}$\\as Hence, $\\mcalO_2$ is the coarsest topology on $\\mathcal{A} (\\mathcal{G}_t)$ for which functionals \\eqref{eqn:top_O2} are continuous for all processes $X$ defined as in \\eqref{eq:Xint}. Since these processes $X$ are continuous, we conclude that $\\mcalO_2 \\subset \\mcalO_1$.\n\nThe set $\\mathcal{A} (\\mathcal{G}_t)$ is compact in the topologies $\\mcalO_1$ \\citep[Theorem 3]{Meyer} and $\\mcalO_2$ (see Lemma \\ref{lem-strat-set-compact} above). The compact Hausdorff topology is the coarsest among Hausdorff topologies \\cite[Cor.~3.1.14, p. 126]{Engelking}. Since $\\mcalO_2$ is Hausdorff by \\cite[Prop~3.3]{Brezis2010}, so is $\\mcalO_1$ and we have $\\mcalO_1 = \\mcalO_2$.\n\n\\end{proof}\n\n\\begin{remark}\nMeyer \\cite[Thm.~4]{Meyer} shows that if $\\mathcal{F}$ is separable (i.e., countably generated) then the topology $\\mcalO_1$ (hence $\\mcalO_2$) is metrizable. This could also be seen directly for the topology $\\mcalO_2$ by \\cite[Thm.~3.29]{Brezis2010}, because $\\mathcal{A}(\\mathcal{G}_t)$ is bounded in $\\mathcal{S}$ and $\\mcalO_2$ is the restriction to $\\mathcal{A}(\\mathcal{G}_t)$ of the weak topology on $\\mathcal{S}$. Indeed, it emerges from this argument for the metrizability of $\\mcalO_2$ that it is sufficient to require that only $\\mathcal{G}_T$ be separable.\n\\end{remark}\n\n\n\\begin{Lemma}\\label{lem:semi-cont}\nGiven any $(\\xi,\\zeta)\\in\\mathcal{A}_{ac}(\\mathcal{F}^1_t) \\times\\mathcal{A} (\\mathcal{F}^2_t)$, the functionals $N(\\xi,\\cdot):\\mathcal{A}(\\mathcal{F}^2_t)\\to\\mathbb{R}$ and $N(\\cdot,\\zeta):\\mathcal{A}_{ac}(\\mathcal{F}^1_t)\\to\\mathbb{R}$ are, respectively, upper semicontinuous and lower semicontinuous in the strong topology of $\\mathcal{S}$.\n\\end{Lemma}\n\\begin{proof}\nSince $\\xi \\in \\mathcal{A}_{ac} (\\mathcal{F}^1_t)$, we have from \\eqref{eq-functional-in-terms-of-controls} that the contribution of simultaneous jumps reduces to a single term:\n\\begin{equation}\\label{eqn:N_cont}\nN(\\xi,\\zeta)= \\mathbb{E}\\bigg[\\int_{[0, T)} f_t(1-\\zeta_{t})d\\xi_t + \\int_{[0, T)} g_t(1-\\xi_t)d\\zeta_t + h_T\\Delta\\xi_T\\Delta\\zeta_T\\bigg].\n\\end{equation}\n\n\\emph{Upper semicontinuity of $N(\\xi,\\cdot)$}. Fix $\\xi\\in\\mathcal{A}_{ac}(\\mathcal{F}^1_t)$ and consider a sequence $(\\zeta^{n})_{n \\ge 1}\\subset\\mathcal{A}(\\mathcal{F}^2_t)$ converging to $\\zeta \\in \\mathcal{A}(\\mathcal{F}^2_t)$ strongly in $\\mathcal{S}$. We have to show that\n\\begin{equation*}\n\\mathop{\\lim\\sup}_{n\\to\\infty} N(\\xi,\\zeta^{n})\\le N(\\xi,\\zeta).\n\\end{equation*}\nAssume, by contradiction, that $\\mathop{\\lim\\sup}_{n\\to\\infty} N(\\xi,\\zeta^{n}) > N(\\xi,\\zeta)$. There is a subsequence $(n_k)$ over which the limit on the left-hand side is attained. Along a further subsequence we have $(\\mathbb{P}\\times\\lambda)\\ae$ convergence of $\\zeta^n$ to $\\zeta$ \\citep[Theorem 4.9]{Brezis2010}. 
With an abuse of notation we will assume that the original sequence possesses those two properties, i.e., the limit $\\lim_{n\\to\\infty} N(\\xi,\\zeta^{n})$ exists, it strictly dominates $N(\\xi,\\zeta)$, and there is $(\\mathbb{P} \\times \\lambda)\\ae$ convergence of $\\zeta^{n}$ to $\\zeta$.\n\nSince $\\xi$ is absolutely continuous on $[0, T)$, \n\\begin{equation*}\n\\lim_{n\\to\\infty}\\mathbb{E}\\bigg[\\int_{[0, T)} f_t(1-\\zeta^{n}_{t})d\\xi_t\\bigg]=\\mathbb{E}\\bigg[\\int_{[0, T)} f_t(1-\\zeta_{t})d\\xi_t \\bigg]\n\\end{equation*}\nby the dominated convergence theorem. For the last two terms of $N(\\xi, \\zeta^n)$ in \\eqref{eqn:N_cont} we have\n\\begin{align*}\n\\mathbb{E}\\bigg[\\int_{[0, T)} g_t(1-\\xi_t)d\\zeta^n_t + h_T\\Delta\\xi_T\\Delta\\zeta^n_T\\bigg]\n&=\n\\mathbb{E}\\bigg[\\int_{[0, T)} g_t(1-\\xi_{t-})d\\zeta^n_t + h_T\\Delta\\xi_T\\Delta\\zeta^n_T\\bigg]\\\\\n&=\n\\mathbb{E}\\bigg[\\int_{[0, T]} g_t(1-\\xi_{t-})d\\zeta^n_t + (h_T - g_T) \\Delta\\xi_T\\Delta\\zeta^n_T\\bigg],\n\\end{align*}\nwhere the first equality is by the continuity of $\\xi$ and for the second one we use that $1-\\xi_{T-} = \\Delta \\xi_T$. From Lemma \\ref{lem:cadlag_convergence} and the boundedness and continuity of $(\\xi_t)$ we verify the assumptions of Proposition \\ref{prop:r-convergence} (with $X_t=g_t(1-\\xi_{t-})$ therein since $\\xi_{t-}$ is continuous on $[0,T]$), so\n\\begin{equation*}\n\\lim_{n \\to \\infty} \\mathbb{E}\\bigg[\\int_{[0, T]} g_t(1-\\xi_{t-})d\\zeta^n_t \\bigg] = \\mathbb{E}\\bigg[\\int_{[0, T]} g_t(1-\\xi_{t-})d\\zeta_t \\bigg].\n\\end{equation*}\nRecalling that $g_T \\le h_T$, we obtain from Lemma \\ref{prop-terminal-time-jump-limit}\n\\begin{equation*}\n\\mathop{\\lim\\sup}_{n\\to\\infty} \\mathbb{E}\\big[(h_T-g_T)\\Delta\\xi_T\\Delta\\zeta^n_T\\big] \\le \\mathbb{E}\\big[(h_T-g_T)\\Delta\\xi_T\\Delta\\zeta_T\\big].\n\\end{equation*}\nCombining above convergence results contradicts $\\lim_{n\\to\\infty} N(\\xi,\\zeta^n) >N(\\xi,\\zeta)$, hence, proves the upper semicontinuity.\n\n\\emph{Lower semicontinuity of $N(\\cdot,\\zeta)$}. Fix $\\zeta\\in\\mathcal{A}(\\mathcal{F}^2_t)$ and consider a sequence $(\\xi^{n})_{n\\ge 1}\\subset\\mathcal{A}_{ac}(\\mathcal{F}^1_t)$ converging to $\\xi \\in \\mathcal{A}_{ac} (\\mathcal{F}^1_t)$ strongly in $\\mathcal{S}$. Arguing by contradiction as above, we assume that there is a subsequence of $\\xi^{n}$, which we denote the same, such that $\\xi^{n} \\to \\xi$ $(\\mathbb{P}\\times\\lambda)$\\ae and \n\\begin{equation}\\label{eqn:two_terms2}\n\\lim_{n\\to\\infty} N(\\xi^{n},\\zeta) < N(\\xi,\\zeta). \n\\end{equation}\nBy Lemma \\ref{lem:cadlag_convergence} and the continuity of $(\\xi_t)$ we have for $\\mathbb{P}$\\ae $\\omega\\in\\Omega$\n\\[\n\\lim_{n\\to\\infty} \\xi^{n}_t(\\omega) = \\xi_t(\\omega)\\quad\\text{for all $t \\in [0,T)$}.\n\\] \nThen by dominated convergence for the second term of $N(\\xi^n,\\zeta)$ in \\eqref{eqn:N_cont} we get\n\\[\n\\lim_{n\\to\\infty}\\mathbb{E}\\bigg[\\int_{[0, T)} g_t(1-\\xi^{n}_t)d\\zeta_t\\bigg]= \\mathbb{E}\\bigg[\\int_{[0, T)} g_t(1-\\xi_t)d\\zeta_t\\bigg].\n\\]\nFor the remaining terms of $N(\\xi^{n}, \\zeta)$, we have \n\\[\n\\mathbb{E}\\bigg[\\int_{[0, T)} f_t(1-\\zeta_{t})d\\xi^n_t + h_T\\Delta\\xi^n_T\\Delta\\zeta_T\\bigg]=\\mathbb{E}\\bigg[\\int_{[0, T)} f_t(1-\\zeta_{t})d\\xi^n_t + f_T \\Delta\\xi^n_T\\Delta\\zeta_T + (h_T-f_T)\\Delta\\xi^n_T\\Delta\\zeta_T\\bigg]. 
\n\\]\nObserve that, by Lemma \\ref{prop-terminal-time-jump-limit},\n\\begin{equation*}\n\\mathop{\\lim\\operatornamewithlimits{inf\\vphantom{p}}}_{n\\to\\infty}\\mathbb{E}\\big[(h_T-f_T)\\Delta\\xi^n_T\\Delta\\zeta_T\\big]\\ge \\mathbb{E}\\big[(h_T-f_T)\\Delta\\xi_T\\Delta\\zeta_T\\big],\n\\end{equation*}\nbecause $h_T-f_T\\le 0$. Further,\n\\begin{equation*}\n\\lim_{n\\to\\infty}\\mathbb{E}\\bigg[\\int_{[0, T)} f_t(1-\\zeta_{t})d\\xi^n_t + f_T \\Delta\\xi^n_T\\Delta\\zeta_T \\bigg]=\\mathbb{E}\\bigg[\\int_{[0, T)} f_t(1-\\zeta_{t})d\\xi_t + f_T \\Delta\\xi_T\\Delta\\zeta_T\\bigg]\n\\end{equation*}\nby Corollary \\ref{cor-specific-convergence-2}. The above results contradict \\eqref{eqn:two_terms2}, therefore, proving the lower semicontinuity.\n\\end{proof}\n\nWe are now ready to prove that the game with continuous randomisation for the first player ($\\tau$-player) has a value.\n\\begin{proof}[{\\bf Proof of Theorem \\ref{th-value-cont-strat}}]\nWe will show that the conditions of Sion's theorem hold (recall the notation in Theorem \\ref{th-the-Sion}) with $(A,B)=(\\mathcal{A}(\\mathcal{F}^2_t),\\mathcal{A}_{ac}(\\mathcal{F}^1_t))$ on the space $\\mathcal{S} \\times \\mathcal{S}$ equipped with its weak topology. For the sake of compactness of notation, we will write $\\mathcal{A}$ for $\\mathcal{A}(\\mathcal{F}^2_t)$ and $\\mathcal{A}_{ac}$ for $\\mathcal{A}_{ac}(\\mathcal{F}^1_t)$. It is straightforward to verify that the sets $\\mathcal{A}$ and $\\mathcal{A}_{ac}$ are convex. Compactness of $\\mathcal{A}$ in the weak topology of $\\mathcal{S}$ follows from Lemma \\ref{lem-strat-set-compact}. It remains to prove the convexity and semi-continuity properties of $N$ with respect to the weak topology of $\\mathcal{S}$. This is equivalent to showing that for any $a\\in\\mathbb{R}$, $\\hat{\\xi}\\in\\mathcal{A}_{ac}$ and $\\hat{\\zeta}\\in\\mathcal{A}$ the level sets \n\\[\n\\mathcal{K}(\\hat{\\zeta},a)=\\{\\xi\\in\\mathcal{A}_{ac}:N(\\xi,\\hat{\\zeta})\\le a\\} \\qquad \\text{and}\\qquad \\mathcal{Z}(\\hat{\\xi},a)=\\{\\zeta\\in\\mathcal{A}:N(\\hat{\\xi},\\zeta)\\ge a\\}\n\\]\nare convex and closed in $\\mathcal{A}_{ac}$ and $\\mathcal{A}$, respectively, with respect to the weak topology of $\\mathcal{S}$. For any $\\lambda \\in [0,1]$ and $\\xi^{1}, \\xi^{2} \\in \\mathcal{A}_{ac}$, $\\zeta^{1}, \\zeta^{2} \\in \\mathcal{A}$, using the expression in \\eqref{eq-functional-in-terms-of-controls} it is immediate (by linearity) that \n\\begin{align*}\nN(\\lambda \\xi^{1} + (1-\\lambda) \\xi^{2}, \\hat \\zeta) &= \\lambda N(\\xi^{1}, \\hat \\zeta) + (1-\\lambda) N(\\xi^{2}, \\hat\\zeta),\\\\\nN(\\hat\\xi, \\lambda \\zeta^{1} + (1-\\lambda)\\zeta^{2}) &= \\lambda N(\\hat\\xi, \\zeta^{1}) + (1-\\lambda) N(\\hat\\xi, \\zeta^{2}).\n\\end{align*}\nThis proves the convexity of the level sets. Their closedness in the strong topology of $\\mathcal{S}$ is established in Lemma \\ref{lem:semi-cont}. The latter two properties imply, by \\cite[Theorem 3.7]{Brezis2010}, that the level sets are closed in the weak topology of $\\mathcal{S}$. 
Sion's theorem (Theorem \\ref{th-the-Sion}) therefore yields the existence of the value of the game: $W_* = W^*$.\n\nThe second part of the statement results from using a version of Sion's theorem proved in \\cite{Komiya1988} which allows one to write $\\max$ instead of $\\sup$ in \\eqref{eq-value-cont-restriction}, i.e.,\n\\begin{equation*}\n\\sup_{\\zeta\\in\\mathcal{A}}\\operatornamewithlimits{inf\\vphantom{p}}_{\\xi\\in\\mathcal{A}_{ac}} N(\\xi,\\zeta)=\\max_{\\zeta\\in\\mathcal{A}}\\operatornamewithlimits{inf\\vphantom{p}}_{\\xi\\in\\mathcal{A}_{ac}} N(\\xi,\\zeta)=\\operatornamewithlimits{inf\\vphantom{p}}_{\\xi\\in\\mathcal{A}_{ac}} N(\\xi,\\zeta^*),\n\\end{equation*}\nwhere $\\zeta^*\\in\\mathcal{A}$ delivers the maximum.\n\\end{proof}\n\n\n\\subsection{Approximation with continuous controls}\\label{sec:approx}\nWe now prove Proposition \\ref{thm:conv_lipsch} by constructing a sequence $(\\xi^{n})$ of Lipschitz continuous processes with the Lipschitz constant for each process bounded by $n$ for all $\\omega$. This uniform bound on the Lipschitz constant is not used in this paper as we only need that each of the processes $(\\xi^{n}_t)$ has absolutely continuous trajectories with respect to the Lebesgue measure on $[0,T)$ so that it belongs to $\\mathcal{A}_{ac}(\\mathcal{F}^1_t)$.\n\n\n\\begin{proof}[Proof of Proposition \\ref{thm:conv_lipsch}]\n\nFix $\\zeta\\in\\mathcal{A}(\\mathcal{F}^2_t)$. We need to show that for any $\\xi\\in\\mathcal{A}(\\mathcal{F}^1_t)$, there exists a sequence $(\\xi^{n})_{n \\ge 1} \\subset \\mathcal{A}_{ac}(\\mathcal{F}^1_t)$ such that\n\\begin{equation}\n\\mathop{\\lim\\sup}_{n\\to\\infty} N(\\xi^{n},\\zeta)\\le N(\\xi,\\zeta).\n\\label{eq-liminf-M}\n\\end{equation}\n\nWe will explicitly construct absolutely continuous $\\xi^{n}$ that approximate $\\xi$ in a suitable sense. As $N(\\xi, \\zeta)$ does not depend on the choice of c\\`adl\\`ag representatives, by Definition \\ref{def:integral}, without loss of generality we assume that $\\xi \\in {\\mathcal{A}^\\circ}(\\mathcal{F}^1_t)$ and $\\zeta \\in {\\mathcal{A}^\\circ}(\\mathcal{F}^2_t)$. Define the function $\\phi^n_t = ((nt)\\wedge 1)\\vee 0$. Let $\\xi^{n}_t = \\int_{[0, t]} \\phi^n_{t-s} d\\xi_s$ for $t\\in[0,T)$, and $\\xi^{n}_T = 1$. We shall show that $(\\xi^{n}_t)$ is $n$-Lipschitz, hence absolutely continuous on $[0, T)$.\nNote that $\\phi^n_t\\equiv 0$ for $t\\le 0$, and therefore $\\xi^{n}_t = \\int_{[0, T]} \\phi^n_{t-s} d\\xi_s$ for $t \\in [0, T)$. For arbitrary $t_1,t_2\\in[0,T)$ we have\n\\begin{align*}\n|\\xi^{n}_{t_1}-\\xi^{n}_{t_2}| \n&= \n\\left|\\int_{[0, T]} (\\phi^n_{t_1-s}-\\phi^n_{t_2-s}) d\\xi_s\\right| \n\\le\n\\int_{[0, T]} |\\phi^n_{t_1-s}-\\phi^n_{t_2-s}| d\\xi_s\\\\\n&\\le \n\\int_{[0, T]} n|(t_1-s)-(t_2-s)| d\\xi_s \n=\n\\int_{[0, T]} n|t_1-t_2| d\\xi_s=n|t_1-t_2|,\n\\end{align*}\nwhere the first inequality is Jensen's inequality (which is applicable since $\\xi(\\omega)$ is a cumulative distribution function on $[0, T]$ for each $\\omega$), and the second inequality follows by the definition of $\\phi^n$.\n\nWe will verify the assumptions of Proposition \\ref{prop-specific-convergence-3}. Clearly the sequence $(\\xi^n)$ is non-decreasing in $n$, as the measure $d \\xi(\\omega)$ is positive for each $\\omega \\in \\Omega$ and the sequence $\\phi^n$ is non-decreasing. By the construction of $\\xi^{n}$ we have $\\xi^{n}_0 = 0 \\to \\xi_{0-}$ as $n\\to\\infty$. 
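To illustrate the construction with a simple worked example, suppose that $\\xi$ has a single deterministic jump, say $\\xi_s=\\ind{\\{s\\ge t_0\\}}$ for some fixed $t_0\\in(0,T)$. Then, for $t\\in[0,T)$,\n\\begin{equation*}\n\\xi^{n}_t = \\int_{[0, t]} \\phi^n_{t-s} d\\xi_s = \\phi^n_{t-t_0},\n\\end{equation*}\ni.e., $\\xi^n_t=0$ for $t\\le t_0$, $\\xi^n_t=n(t-t_0)$ for $t\\in[t_0,t_0+\\tfrac{1}{n}]$ and $\\xi^n_t=1$ afterwards: the jump of $\\xi$ at $t_0$ is replaced by a linear ramp of slope $n$, and $\\xi^{n}_t\\to\\ind{\\{t>t_0\\}}=\\xi_{t-}$ for every $t\\in[0,T)$, in line with the general convergence established below.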
Moreover, for any $t \\in (0, T)$ and $n > 1\/t$\n\\begin{align*}\n\\xi^{n}_t = \\int_{[0, t)}\\phi^n_{t-s}d\\xi_s=\\xi_{t-\\tfrac{1}{n}}+\\int_{(t-\\tfrac{1}{n}, t)}n(t-s)d\\xi_s,\n\\end{align*}\nwhere the first equality uses that $\\phi^n_{0}=0$, so that a jump of $\\xi$ at time $t$ gives zero contribution, and the second one uses the definition of $\\phi^n$. Letting $n\\to\\infty$ we obtain $\\xi^{n}_t\\to \\xi_{t-}$ as the second term above vanishes since\n\\begin{align*}\n0\\le \\int_{(t-\\tfrac{1}{n}, t)}n(t-s)d\\xi_s\\le \\xi_{t-} - \\xi_{t-\\tfrac{1}{n}}\\to 0. \n\\end{align*}\nThe continuity of $\\xi^n$ on $[0, T)$ and Proposition \\ref{prop-specific-convergence-3} imply that\n\\begin{equation*}\n\\lim_{n\\to\\infty}\\mathbb{E}\\bigg[\\int_{[0, T)}f_t(1-\\zeta_{t})d\\xi^{n}_t\\bigg]\n=\n\\lim_{n\\to\\infty}\\mathbb{E}\\bigg[\\int_{[0, T)}f_t(1-\\zeta_{t-})d\\xi^{n}_t\\bigg]\n= \n\\mathbb{E}\\bigg[\\int_{[0, T)}f_t(1-\\zeta_{t})d\\xi_t\\bigg],\n\\end{equation*}\nand $\\lim_{n \\to \\infty} \\xi^{n}_{T-} = \\xi_{T-}$ so that\n\\[\n\\lim_{n\\to\\infty}\\Delta\\xi^{n}_T =\\Delta\\xi_T,\n\\]\nsince $\\xi^{n}_T=1$ for all $n\\ge 1$.\nThe dominated convergence theorem (applied to the second integral below) also yields\n\\begin{equation}\\label{eqn:M_ij_conv}\n\\begin{aligned}\n\\lim_{n\\to\\infty} N(\\xi^{n},\\zeta)\n&=\\lim_{n\\to\\infty}\\mathbb{E}\\bigg[\\int_{[0, T)}f_{t}(1-\\zeta_{t})d\\xi^{n}_{t}+ \\int_{[0, T)} g_t(1-\\xi^{n}_{t})d\\zeta_t + h_T\\Delta\\xi^{n}_T\\Delta\\zeta_T\\bigg]\\\\\n&= \\mathbb{E}\\bigg[\\int_{[0, T)}f_{t}(1-\\zeta_{t})d\\xi_{t} + \\int_{[0, T)} g_t(1-\\xi_{t-})d\\zeta_t + h_T\\Delta\\xi_T\\Delta\\zeta_T\\bigg].\n\\end{aligned}\n\\end{equation}\nNote that\n\\begin{align}\\label{eq-remove-common-jumps}\nN(\\xi,\\zeta)&=\\mathbb{E}\\bigg[\\!\\int_{[0, T)}\\! f_t(1-\\zeta_{t})d\\xi_t +\\! \\int_{[0, T)}\\! g_t(1\\!-\\xi_t)d\\zeta_t + \\sum_{t \\in [0, T]} h_t \\Delta\\xi_t\\Delta\\zeta_t\\bigg]\\notag\\\\\n&= \\mathbb{E}\\bigg[\\!\\int_{[0, T)}\\! f_t(1-\\zeta_{t})d\\xi_t +\\! \\int_{[0, T)}\\! g_t(1-\\xi_{t-})d\\zeta_t\n+\\! \\sum_{t \\in [0, T)}\\!\\! (h_t-g_t)\\Delta\\xi_t\\Delta\\zeta_t + h_T\\Delta\\xi_T\\Delta\\zeta_T\\bigg]\\\\\n&\\ge \\mathbb{E}\\bigg[\\!\\int_{[0, T)}\\! f_t(1-\\zeta_{t})d\\xi_t +\\! \\int_{[0, T)}\\! g_t(1-\\xi_{t-})d\\zeta_t + h_T\\Delta\\xi_T\\Delta\\zeta_T\\bigg],\\notag\n\\end{align}\nwhere the last inequality is due to Assumption \\ref{eq-order-cond}. Combining this with \\eqref{eqn:M_ij_conv} completes the proof of \\eqref{eq-liminf-M}.\n\\end{proof}\n\n\\subsection{Relaxation of Assumption \\ref{ass:regular}}\\label{sec:relax}\n\nAssumption \\ref{ass:regular}, which requires that the payoff processes be regular, can be relaxed to allow for a class of jumps including predictable ones with nonzero conditional mean (i.e., violating regularity, see Eq. \\eqref{eq:cond-reg}). In this section we extend Theorem \\ref{th-value-cont-strat} and Proposition \\ref{thm:conv_lipsch} to the case of Assumption \\ref{ass:regular_gen} with $(\\hat g_t)$ from the decomposition of the payoff process $g$ being non-decreasing. In this case we must `smoothen' the generating process $\\xi$ of the minimiser in order to guarantee the desired semicontinuity properties of the game's expected payoff (see Remark \\ref{rem:contrad}). Arguments when $(\\hat f_t)$ from the decomposition of $f$ in Assumption \\ref{ass:regular_gen} is non-increasing are analogous thanks to the symmetry of the set-up pointed out in Remark \\ref{rem:ineq}. 
However, in that case we restrict strategies of the maximiser to absolutely continuous generating processes $\\zeta\\in\\mathcal{A}_{ac}(\\mathcal{F}^2_t)$ and the first player (minimiser) picks $\\xi\\in\\mathcal{A}(\\mathcal{F}^1_t)$.\n\n\\begin{theorem}\\label{th-value-cont-strat_gen}\nUnder assumptions \\ref{eq-integrability-cond}, \\ref{ass:regular_gen}, \\ref{eq-order-cond}-\\ref{ass:filtration} (with $\\hat g$ non-decreasing), the game \\eqref{eq-value-cont-restriction} has a value, i.e.,\n\\begin{equation*}\nW_{*}=W^{*}:=W.\n\\end{equation*}\nMoreover, the $\\zeta$-player (maximiser) has an optimal strategy, i.e., there exists $\\zeta^*\\in\\mathcal{A}(\\mathcal{F}^2_t)$ such that\n\\begin{equation*}\n\\operatornamewithlimits{inf\\vphantom{p}}_{\\xi\\in\\mathcal{A}_{ac}(\\mathcal{F}^1_t)} N(\\xi,\\zeta^*)=W.\n\\end{equation*}\n\\end{theorem}\n\\begin{Proposition}\\label{thm:conv_lipsch_gen}\nUnder assumptions \\ref{eq-integrability-cond}, \\ref{ass:regular_gen}, \\ref{eq-order-cond}-\\ref{ass:filtration} (with $\\hat g$ non-decreasing), for any $\\zeta \\in \\mathcal{A}(\\mathcal{F}^2_t)$ and $\\xi \\in \\mathcal{A}(\\mathcal{F}^1_t)$, there is a sequence $\\xi^n \\in \\mathcal{A}_{ac}(\\mathcal{F}^1_t)$ such that\n\\[\n\\mathop{\\lim\\sup}_{n \\to \\infty} N(\\xi^n, \\zeta) \\le N(\\xi, \\zeta).\n\\]\n\\end{Proposition}\n\\begin{proof}[{\\bf Proof of Theorem \\ref{thm:main2}}]\nThe proof of the existence of the value is identical to the proof of Theorem \\ref{thm:main} but with references to Theorem \\ref{th-value-cont-strat} and Proposition \\ref{thm:conv_lipsch} replaced by the above results. \n\nFor the existence of the saddle point, the additional requirement that $\\hat g$ be non-decreasing {\\em and} $\\hat f$ be non-increasing guarantees the complete symmetry of the problem when swapping the roles of the two players as in Remark \\ref{rem:ineq}. Thus, the same proof as in Theorem \\ref{thm:main} can be repeated verbatim.\n\\end{proof}\n \n\nIn the rest of the section we prove Theorem \\ref{th-value-cont-strat_gen} and Proposition \\ref{thm:conv_lipsch_gen}. Processes $\\hat f, \\hat g$ have the following decomposition according to Theorem VI.52 in \\cite{DellacherieMeyer} and remarks thereafter: there are $(\\mathcal{F}_t)$-stopping times $(\\eta^f_k)_{k \\ge 1}$ and $(\\eta^g_k)_{k \\ge 1}$, non-negative $\\mathcal{F}_{\\eta^f_k}$-measurable random variables $X^f_k$, $k \\ge 1$, and non-negative $\\mathcal{F}_{\\eta^g_k}$-measurable random variables $X^g_k$, $k \\ge 1$, such that \n\\begin{equation}\\label{eqn:decomposition_piecewise}\n\\hat f_t = \\sum_{k=1}^\\infty (-1)^k X^f_k \\ind{\\{t \\ge \\eta^f_k\\}}, \\qquad \\hat g_t = \\sum_{k=1}^\\infty X^g_k \\ind{\\{t \\ge \\eta^g_k\\}}.\n\\end{equation}\nThe alternating terms in the sum for $(\\hat f_t)$ come from interweaving sequences for the two non-decreasing processes $(\\hat f^+_t)$ and $(\\hat f^-_t)$ from $\\mcalL$ arising from the decomposition of the integrable variation process $(\\hat f_t)$ (recall $\\hat f_t=\\hat f^+_t-\\hat f^-_t$). This is for notational convenience and results in no mathematical complications, as the infinite sum is absolutely convergent. Recall that $\\hat g$ is assumed non-decreasing.\n\nThe condition that $\\hat f_0 = \\hat g_0 = 0$ means that $\\eta^f_k, \\eta^g_k > 0$ for all $k \\ge 1$. Since $\\hat f, \\hat g$ have integrable variation (in the sense of \\cite[p. 
115]{DellacherieMeyer}), the infinite series in \\eqref{eqn:decomposition_piecewise} are dominated by the integrable random variables $X^f$ and $X^g$: for any $t \\in [0, T]$\n\\begin{align}\\label{eq:Xfg}\n|\\hat f_t| \\le X^f := \\sum_{k=1}^\\infty X^f_k, \\qquad \\text{and}\\qquad \\hat g_t \\le X^g := \\sum_{k=1}^\\infty X^g_k.\n\\end{align}\n\nTo handle convergence of integrals with piecewise-constant processes, we need to extend the results of Proposition \\ref{prop:r-convergence}.\n\\begin{Proposition}\\label{prop:A}\nFor a filtration $(\\mathcal{G}_t)\\subseteq (\\mathcal{F}_t)$, consider $(\\rho^n)_{n \\ge 1} \\subset \\tilde{\\mathcal{A}^\\circ}(\\mathcal{G}_t)$ and $\\rho \\in \\tilde{\\mathcal{A}^\\circ}(\\mathcal{G}_t)$ with\n\\[\n\\mathbb{P}\\Big(\\Big\\{\\omega \\in \\Omega:\\ \\lim_{n\\to\\infty}\\rho^n_t(\\omega)=\\rho_t(\\omega),\\quad\\text{for all $t\\in C_\\rho(\\omega)\\cup\\{T\\}$} \\Big\\} \\Big) = 1.\n\\]\nThen for any $\\mathcal{F}$-measurable random variables $\\theta\\in(0,T]$ and $X\\in[0,\\infty)$ with $\\mathbb{E}[X] < \\infty$ we have\n\\begin{equation}\\label{eqn:theta_t}\n\\mathop{\\lim\\sup}_{n\\to\\infty}\\mathbb{E}\\Big[\\int_{[0, T]} \\ind{\\{t \\ge \\theta\\}} X d\\rho^n_t\\Big] \\le \\mathbb{E}\\Big[\\int_{[0, T]} \\ind{\\{t \\ge \\theta\\}} X d\\rho_t\\Big].\n\\end{equation}\nFurthermore, if $\\mathbb{P} (\\{\\omega:\\ \\theta(\\omega) \\in C_\\rho(\\omega) \\text{ or } X(\\omega) = 0\\}) = 1$, then\n\\begin{equation}\\label{eqn:theta_t_eq}\n\\lim_{n\\to\\infty}\\mathbb{E}\\Big[\\int_{[0, T]} \\ind{\\{t \\ge \\theta\\}} X d\\rho^n_t\\Big] = \\mathbb{E}\\Big[\\int_{[0, T]} \\ind{\\{t \\ge \\theta\\}} X d\\rho_t\\Big].\n\\end{equation}\n\\end{Proposition}\n\\begin{proof}\nLet $\\Omega_0$ be the set of $\\omega \\in \\Omega$ for which $\\rho^n_t(\\omega) \\to \\rho_t(\\omega)$ for all $t \\in C_\\rho(\\omega) \\cup \\{T\\}$. \nFix $\\omega \\in \\Omega_0$. For any $t$ such that $t \\in C_\\rho(\\omega)$ and $t < \\theta(\\omega)$ (such $t$ always exists as $\\theta(\\omega) > 0$ and $\\rho$ has at most countably many jumps on any bounded interval) we have $\\rho^n_t(\\omega) \\le \\rho^n_{\\theta(\\omega)-}(\\omega)$ so that by assumption \n\\[\n\\mathop{\\lim\\operatornamewithlimits{inf\\vphantom{p}}}_{n \\to \\infty} \\rho^n_{\\theta(\\omega)-} (\\omega) \\ge \\rho_{t} (\\omega).\n\\]\nSince $C_{\\rho}(\\omega)$ is dense in $(0, T)$, by arbitrariness of $t<\\theta(\\omega)$ we have\n\\begin{equation}\\label{eqn:hash}\n\\mathop{\\lim\\operatornamewithlimits{inf\\vphantom{p}}}_{n \\to \\infty} \\rho^n_{\\theta(\\omega)-} (\\omega) \\ge \\rho_{\\theta(\\omega)-} (\\omega).\n\\end{equation}\nWe rewrite the integral as follows: $\\int_{[0, T]} \\ind{\\{t \\ge \\theta\\}} X d\\rho^n_t = X (\\rho^n_T - \\rho^n_{\\theta-})$. 
Therefore,\n\\begin{align*}\n\\mathop{\\lim\\sup}_{n\\to\\infty}\\mathbb{E}\\Big[\\int_{[0, T]} \\ind{\\{t \\ge \\theta\\}} X d\\rho^n_t\\Big]\n=\n\\mathop{\\lim\\sup}_{n\\to\\infty}\\mathbb{E}\\big[X (\\rho^n_T - \\rho^n_{\\theta-}) \\big]\n=\n\\mathop{\\lim\\sup}_{n \\to \\infty} \\mathbb{E} [X \\rho^n_T] - \\mathop{\\lim\\operatornamewithlimits{inf\\vphantom{p}}}_{n \\to \\infty} \\mathbb{E}[X \\rho^n_{\\theta-}].\n\\end{align*}\nThe dominated convergence theorem yields that $\\lim_{n \\to \\infty} \\mathbb{E} [X \\rho^n_T] = \\mathbb{E} [X \\rho_T]$, while applying Fatou's lemma gives\n\\[\n\\mathop{\\lim\\operatornamewithlimits{inf\\vphantom{p}}}_{n \\to \\infty} \\mathbb{E}[X \\rho^n_{\\theta-}] \\ge \\mathbb{E}[\\mathop{\\lim\\operatornamewithlimits{inf\\vphantom{p}}}_{n \\to \\infty} X \\rho^n_{\\theta-}] \\ge \\mathbb{E}[ X \\rho_{\\theta-}],\n\\]\nwhere the last inequality is by \\eqref{eqn:hash}. Combining the above estimates completes the proof of \\eqref{eqn:theta_t}.\n\nAssume now that $\\theta(\\omega) \\in C_\\rho(\\omega)$ or $X(\\omega) = 0$ for $\\mathbb{P}$\\ae $\\omega \\in \\Omega_0$. This and the dominated convergence theorem yield\n\\[\n\\mathbb{E}[X(\\rho_T - \\rho_{\\theta-})] = \\mathbb{E}[X(\\rho_T - \\rho_{\\theta})] = \\lim_{n \\to \\infty} \\mathbb{E}[X(\\rho^n_T - \\rho^n_{\\theta})] \\le \\mathop{\\lim\\sup}_{n \\to \\infty} \\mathbb{E}[X(\\rho^n_T - \\rho^n_{\\theta-})],\n\\]\nwhere the last inequality follows from the monotonicity of $\\rho^n$. This estimate and \\eqref{eqn:theta_t} prove \\eqref{eqn:theta_t_eq}.\n\\end{proof}\n\n\\begin{remark}\\label{rem:contrad0}\nThe inequality \\eqref{eqn:theta_t} in Proposition \\ref{prop:A} can be strict even if $\\rho^n_t \\to \\rho_t$ for all $t \\in [0, T]$ because this condition does not imply that $\\rho^n_{t-} \\to \\rho_{t-}$. One needs further continuity assumptions on $(\\rho_t)$ to establish equality \\eqref{eqn:theta_t_eq}.\n\\end{remark}\n\n\\begin{proof}[{\\bf Proof of Theorem \\ref{th-value-cont-strat_gen}}]\nCompared to the proof of the analogous result under the more stringent condition \\ref{ass:regular} (i.e., Theorem \\ref{th-value-cont-strat}), we only need to establish lower and upper semicontinuity of the functional $N$, while all remaining arguments stay valid. For the semicontinuity, we extend arguments of Lemma \\ref{lem:semi-cont}. \n\n\\emph{Upper semicontinuity of $N(\\xi,\\cdot)$}. Fix $\\xi\\in\\mathcal{A}_{ac}(\\mathcal{F}^1_t)$ and consider a sequence $(\\zeta^{n})_{n \\ge 1}\\subset\\mathcal{A}(\\mathcal{F}^2_t)$ converging to $\\zeta \\in \\mathcal{A}(\\mathcal{F}^2_t)$ strongly in $\\mathcal{S}$. Arguing by contradiction, we assume that there is a subsequence of $(\\zeta^n)_{n \\ge 1}$, denoted the same with an abuse of notation, that converges $(\\mathbb{P} \\times \\lambda)$\\ae to $\\zeta$ and such that\n\\begin{equation*}\n\\lim_{n\\to\\infty} N(\\xi,\\zeta^{n}) > N(\\xi,\\zeta).\n\\end{equation*}\nWithout loss of generality, we can further require that $(\\zeta^n)_{n \\ge 1} \\subset {\\mathcal{A}^\\circ}(\\mathcal{F}^2_t)$ and $\\zeta \\in {\\mathcal{A}^\\circ}(\\mathcal{F}^2_t)$. \nSince $\\xi$ is absolutely continuous on $[0, T)$, \n\\begin{equation}\n\\lim_{n\\to\\infty}\\mathbb{E}\\bigg[\\int_{[0, T)} f_t(1-\\zeta^{n}_{t})d\\xi_t\\bigg]=\\mathbb{E}\\bigg[\\int_{[0, T)} f_t(1-\\zeta_{t})d\\xi_t \\bigg]\n\\label{eq-int-conv-1a}\n\\end{equation}\nby the dominated convergence theorem. 
For the last two terms of $N(\\xi, \\zeta^n)$ (recall \\eqref{eqn:N_cont}) we have\n\\begin{align*}\n\\mathbb{E}\\bigg[\\int_{[0, T)} g_t(1-\\xi_t)d\\zeta^n_t + h_T\\Delta\\xi_T\\Delta\\zeta^n_T\\bigg]\n&=\n\\mathbb{E}\\bigg[\\int_{[0, T]} g_t(1-\\xi_{t-})d\\zeta^n_t + (h_T - g_T) \\Delta\\xi_T\\Delta\\zeta^n_T\\bigg].\n\\end{align*}\nAs in the proof of Lemma \\ref{lem:semi-cont}, for the regular part $\\tilde g$ of the process $g$ we have\n\\begin{equation}\\label{eqn:tl_g_conv}\n\\lim_{n \\to \\infty} \\mathbb{E}\\bigg[\\int_{[0, T]} \\tilde g_t(1-\\xi_{t-})d\\zeta^n_t \\bigg] = \\mathbb{E}\\bigg[\\int_{[0, T]} \\tilde g_t(1-\\xi_{t-})d\\zeta_t \\bigg].\n\\end{equation}\n\nFor the pure jump part $\\hat g$ of the process $g$, we will prove that \n\\begin{equation}\\label{eqn:hat_g_conv}\n\\mathop{\\lim\\sup}_{n \\to \\infty} \\mathbb{E}\\bigg[\\int_{[0, T]} \\hat g_t(1-\\xi_{t-})d\\zeta^n_t \\bigg] \\le \\mathbb{E}\\bigg[\\int_{[0, T]} \\hat g_t(1-\\xi_{t-})d\\zeta_t \\bigg].\n\\end{equation}\nTo this end, let us define\n\\[\nR^n_t = \\int_{[0, t]} (1-\\xi_{s-})d\\zeta^n_s, \\qquad R_t = \\int_{[0, t]} (1-\\xi_{s-})d\\zeta_s, \\qquad \\text{for $t\\in[0,T]$,}\n\\]\nwith $R^n_{0-} = R_{0-} = 0$, and we will then apply Proposition \\ref{prop:A} with $R^n$ and $R$ instead of $\\rho^n$ and $\\rho$. We need, for $\\mathbb{P}$\\ae $\\omega \\in \\Omega$, that $R^n_t (\\omega) \\to R_t(\\omega)$ as $n\\to\\infty$ for $t \\in C_R(\\omega)=C_{\\zeta}(\\omega)\\cup \\{t\\in[0,T]:\\xi_t(\\omega)=1\\}$. \nThe latter is indeed true. Setting $\\Omega_0 = \\{\\omega \\in \\Omega:\\ \\lim_{n \\to \\infty} \\zeta^n_t(\\omega) = \\zeta_t(\\omega)\\ \\forall\\, t \\in C_\\zeta(\\omega) \\}$, we have $\\mathbb{P}(\\Omega_0) = 1$ by Lemma \\ref{lem:cadlag_convergence}. For any $\\omega \\in \\Omega_0$ and $t \\in C_\\zeta(\\omega)$, invoking the absolute continuity of $(\\xi_t)$, we obtain (omitting the dependence on $\\omega$)\n\\begin{equation*}\n\\lim_{n \\to \\infty} R^n_t \n=\n\\lim_{n \\to \\infty} \\Big[ (1-\\xi_t) \\zeta^n_t + \\int_{[0, t]} \\zeta^n_s d \\xi_s \\Big]\n=\n(1-\\xi_t) \\zeta_t + \\int_{[0, t]} \\zeta_s d \\xi_s = R_t,\n\\end{equation*}\nwhere the convergence of the second term is a consequence of the dominated convergence theorem and the fact that $\\lambda ([0,T]\\setminus C_\\zeta(\\omega)) = 0$ and $\\zeta^n_T = \\zeta_T = 1$. 
\n\n\nFor any $k \\ge 1$, since $X^g_k \\ge 0$, Proposition \\ref{prop:A} gives (recall \\eqref{eqn:decomposition_piecewise})\n\\begin{equation}\\label{eqn:limsup01}\n\\mathop{\\lim\\sup}_{n \\to \\infty} \\mathbb{E} \\bigg[ \\int_{[0, T]} X^g_k \\ind{\\{t \\ge \\eta^g_k\\}} d R^n_t \\bigg] \\le \\mathbb{E} \\bigg[\\int_{[0, T]} X^g_k \\ind{\\{t \\ge \\eta^g_k\\}} d R_t \\bigg].\n\\end{equation}\nWe apply the decomposition of $\\hat g$ and then the monotone convergence theorem:\n\\[\n\\mathbb{E}\\bigg[\\int_{[0, T]} \\hat g_t(1-\\xi_{t-})d\\zeta^n_t \\bigg] = \\mathbb{E} \\bigg[ \\sum_{k=1}^\\infty \\int_{[0, T]} X^g_k \\ind{\\{t \\ge \\eta^g_k\\}} d R^n_t \\bigg]\n= \\sum_{k=1}^\\infty \\mathbb{E} \\bigg[ \\int_{[0, T]} X^g_k \\ind{\\{t \\ge \\eta^g_k\\}} d R^n_t \\bigg].\n\\]\nSince $\\hat g \\in \\mcalL$ we have the bound (recall \\eqref{eq:Xfg})\n\\[\n\\sum_{k=1}^\\infty \\sup_n \\mathbb{E} \\bigg[ \\int_{[0, T]} X^g_k \\ind{\\{t \\ge \\eta^g_k\\}} d R^n_t \\bigg] \\le \\sum_{k=1}^\\infty \\mathbb{E} [ X^g_k ] \n< \\infty .\n\\]\nThen we can apply (reverse) Fatou's lemma (with respect to the counting measure on $\\mathbb{N}$)\n\\begin{align*}\n\\mathop{\\lim\\sup}_{n \\to \\infty} \\sum_{k=1}^\\infty \\mathbb{E} \\bigg[ \\int_{[0, T]} X^g_k \\ind{\\{t \\ge \\eta^g_k\\}} d R^n_t \\bigg]\n&\\le\n\\sum_{k=1}^\\infty \\mathop{\\lim\\sup}_{n \\to \\infty} \\mathbb{E} \\bigg[ \\int_{[0, T]} X^g_k \\ind{\\{t \\ge \\eta^g_k\\}} d R^n_t \\bigg]\\\\\n&\\le\n\\sum_{k=1}^\\infty \\mathbb{E} \\bigg[\\int_{[0, T]} X^g_k \\ind{\\{t \\ge \\eta^g_k\\}} d R_t \\bigg]\n=\n\\mathbb{E}\\bigg[\\int_{[0, T]} \\hat g_t(1-\\xi_{t-})d\\zeta_t \\bigg],\n\\end{align*}\nwhere the last inequality is due to \\eqref{eqn:limsup01} and the final equality follows by monotone convergence and the decomposition of $\\hat g$. This completes the proof of \\eqref{eqn:hat_g_conv}.\n\nRecalling that $g_T \\le h_T$, we obtain from Lemma \\ref{prop-terminal-time-jump-limit}\n\\begin{equation*}\n\\mathop{\\lim\\sup}_{n\\to\\infty} \\mathbb{E}\\big[(h_T-g_T)\\Delta\\xi_T\\Delta\\zeta^n_T\\big] \\le \\mathbb{E}\\big[(h_T-g_T)\\Delta\\xi_T\\Delta\\zeta_T\\big],\n\\end{equation*}\nand combining the latter with \\eqref{eqn:tl_g_conv}, \\eqref{eqn:hat_g_conv} and \\eqref{eq-int-conv-1a} shows that\n\\begin{align}\\label{eq:usc_gen}\n\\mathop{\\lim\\sup}_{n \\to \\infty} N(\\xi, \\zeta^n) \\le N(\\xi, \\zeta).\n\\end{align}\nHence we have a contradiction with $\\lim_{n\\to\\infty} N(\\xi,\\zeta^n) >N(\\xi,\\zeta)$, which proves the upper semicontinuity.\n\n\\emph{Lower semicontinuity of $N(\\cdot,\\zeta)$}. The proof follows closely the argument of the proof of Lemma \\ref{lem:semi-cont}: we fix $\\zeta\\in\\mathcal{A}(\\mathcal{F}^2_t)$, consider a sequence $(\\xi^{n})_{n\\ge 1}\\subset\\mathcal{A}_{ac}(\\mathcal{F}^1_t)$ converging to $\\xi \\in \\mathcal{A}_{ac} (\\mathcal{F}^1_t)$ strongly in $\\mathcal{S}$, assume that \\eqref{eqn:two_terms2} holds and reach a contradiction. We only show how to handle the convergence for $(\\hat f_t)$ as all other terms are handled by the proof of Lemma \\ref{lem:semi-cont}.\n\nBy Lemma \\ref{lem:cadlag_convergence} and the continuity of $(\\xi_t)$ we have $\\mathbb{P} \\big( \\lim_{n\\to\\infty} \\xi^{n}_t(\\omega) = \\xi_t(\\omega)\\ \\forall\\,t \\in [0,T)\\big) = 1$. \nLet\n\\[\nR^n_t = \\int_{[0,t]} (1-\\zeta_{s-})d\\xi^n_s, \\qquad R_t = \\int_{[0,t]} (1-\\zeta_{s-})d\\xi_s,\n\\]\nwith $R^n_{0-} = R_{0-} = 0$. 
Due to the continuity of $(\\xi^n_t)$ and $(\\xi_t)$ for $t \\in [0, T)$, processes $(R^n_t)$ and $(R_t)$ are continuous on $[0, T)$ with a possible jump at $T$. From \\eqref{eqn:conv_R} in the proof of Proposition \\ref{prop-specific-convergence-2} we conclude that for $\\mathbb{P}$\\ae $\\omega \\in \\Omega$\n\\[\n\\lim_{n \\to \\infty} R^n_t(\\omega) = R_t(\\omega) \\quad \\text{for all $t \\in [0, T]$}.\n\\]\nSince $\\Delta \\hat f_T = 0$ (see Assumption \\ref{ass:regular_gen}), there is a decomposition such that $X^f_k \\ind{\\{\\eta^f_k = T\\}} = 0$ $\\mathbb{P}$\\as for all $k$. Recalling that $(R_t)$ is continuous on $[0, T)$, we can apply \\eqref{eqn:theta_t_eq} in Proposition \\ref{prop:A}: for any $k \\ge 1$\n\\[\n\\lim_{n \\to \\infty} \\mathbb{E} \\bigg[ \\int_{[0, T]} X^f_k \\ind{\\{t \\ge \\eta^f_k\\}} d R^n_t \\bigg] = \\mathbb{E} \\bigg[\\int_{[0, T]} X^f_k \\ind{\\{t \\ge \\eta^f_k\\}} d R_t \\bigg].\n\\]\nCombining the latter with decomposition \\eqref{eqn:decomposition_piecewise} and the dominated convergence theorem (with the bound $X^f$) we obtain\n\\[\n\\lim_{n \\to \\infty} \\mathbb{E} \\bigg[ \\int_{[0, T]} \\hat f_t d R^n_t \\bigg] = \\mathbb{E} \\bigg[\\int_{[0, T]} \\hat f_t d R_t \\bigg].\n\\]\nArguing as in the proof of Corollary \\ref{cor-specific-convergence-2}, we have\n\\begin{equation*}\n\\lim_{n \\to \\infty} \\mathbb{E} \\bigg[ \\int_{[0, T)} \\hat f_t (1-\\zeta_t) d \\xi^n_t + \\hat f_T \\Delta \\zeta_T \\Delta \\xi^n_T\\bigg] = \\mathbb{E} \\bigg[\\int_{[0, T)} \\hat f_t (1-\\zeta_t) d \\xi_t + \\hat f_T \\Delta \\zeta_T \\Delta \\xi_T \\bigg].\n\\end{equation*}\nCorollary \\ref{cor-specific-convergence-2} implies an analogous convergence for $(\\tilde f_t)$ and the rest of the proof of lower semicontinuity from Lemma \\ref{lem:semi-cont} applies.\n\\end{proof}\n\n\\begin{remark}\\label{rem:contrad}\nIn the arguments above, item (4) in Assumption \\ref{ass:regular_gen} implies in particular that the payoff process $(g_t)$ does not have predictable jumps that are $\\mathbb{P}$\\as negative. This assumption cannot be further relaxed as this may cause the proof of the upper semicontinuity in Theorem \\ref{th-value-cont-strat_gen} to fail. Recall that the process $(g_t)$ corresponds to the payoff of the second player and her strategy $(\\zeta_t)$ is not required to be absolutely continuous. For example, fix $t_0\\in(0,T)$ and take $g_t=1-\\ind{\\{t\\ge t_0\\}}$, $\\zeta_t = \\ind{\\{t\\ge t_0\\}}$ and $\\xi_t = \\ind{\\{t = T\\}}$. Let us consider the sequence $\\zeta^n_t =\\ind{\\{t\\ge t_0-\\frac{1}{n}\\}}$, which converges to $\\zeta$ pointwise and also strongly in $\\mathcal{S}$. We have\n\\begin{equation*}\n\\int_{[0, T]} g_t(1-\\xi_{t-})d\\zeta^n_t\\equiv 1, \\:\\:\\: \\text{for all $n$, but}\\:\\:\\: \\int_{[0, T]} g_t(1-\\xi_{t-})d\\zeta_t\\equiv 0,\n\\end{equation*}\nhence \\eqref{eqn:hat_g_conv} fails and so does \\eqref{eq:usc_gen}.\n\\end{remark}\n\n\n\n\\begin{proof}[{\\bf Proof of Proposition \\ref{thm:conv_lipsch_gen}}]\nHere, too, we only show how to extend the proof of Proposition \\ref{thm:conv_lipsch} to the more general setting. Fix $\\zeta \\in {\\mathcal{A}^\\circ}(\\mathcal{F}^2_t)$ and $\\xi \\in {\\mathcal{A}^\\circ}(\\mathcal{F}^1_t)$. Construct a sequence $(\\xi^n) \\subset {\\mathcal{A}^\\circ_{ac}}(\\mathcal{F}^1_t)$ as in the proof of Proposition \\ref{thm:conv_lipsch}. 
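For the reader's convenience, the construction reads, explicitly,\n\\begin{equation*}\n\\xi^{n}_t = \\int_{[0, t]} \\phi^n_{t-s} d\\xi_s\\quad\\text{for $t\\in[0,T)$,}\\qquad \\xi^{n}_T = 1,\\qquad\\text{with }\\phi^n_t = ((nt)\\wedge 1)\\vee 0,\n\\end{equation*}\nso that each $(\\xi^n_t)$ is $n$-Lipschitz on $[0, T)$ and $\\xi^n_t\\to\\xi_{t-}$ as $n\\to\\infty$ for every $t\\in(0,T)$.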
It is sufficient to show that\n\\begin{equation}\\label{eqn:lipch_conv_liminf}\n\\mathop{\\lim\\sup}_{n \\to \\infty} N(\\xi^n, \\zeta) \\le N(\\xi, \\zeta).\n\\end{equation}\n\nFrom the proof of Proposition \\ref{thm:conv_lipsch} we have that\n\\begin{equation}\\label{eqn:M_ij_conv_gen}\n\\begin{aligned}\n&\\lim_{n\\to\\infty}\\mathbb{E}\\bigg[\\int_{[0, T)}\\tilde f_{t}(1-\\zeta_{t})d\\xi^{n}_{t}+ \\int_{[0, T)} \\tilde g_t(1-\\xi^{n}_{t})d\\zeta_t + h_T\\Delta\\xi^{n}_T\\Delta\\zeta_T\\bigg]\\\\\n&= \\mathbb{E}\\bigg[\\int_{[0, T)}\\tilde f_{t}(1-\\zeta_{t})d\\xi_{t} + \\int_{[0, T)} \\tilde g_t(1-\\xi_{t-})d\\zeta_t + h_T\\Delta\\xi_T\\Delta\\zeta_T\\bigg].\n\\end{aligned}\n\\end{equation}\nFor $t \\in [0, T]$, define\n\\[\nR^n_t = \\int_{[0, t]} (1-\\zeta_{s-})d\\xi^n_s, \\qquad R_t = \\int_{[0, t]} (1-\\zeta_{s})d\\xi_s\n\\]\nwith $R^n_{0-} = R_{0-} = 0$. Corollary \\ref{cor:lim_R_in-t-} implies that for $\\mathbb{P}$\\ae $\\omega \\in \\Omega$\n\\begin{align}\\label{eq:convR}\n\\lim_{n \\to \\infty} R^n_{t-}(\\omega) = R_{t-}(\\omega) \\quad \\text{for all $t \\in [0, T]$}.\n\\end{align}\nBy the decomposition of $(\\hat f_t)$ in \\eqref{eqn:decomposition_piecewise} and the dominated convergence theorem for the infinite sum (recalling \\eqref{eq:Xfg}) we obtain\n\\begin{align*}\n\\mathbb{E}\\bigg[\\int_{[0, T)}\\hat f_{t}(1-\\zeta_{t})d\\xi^{n}_{t}\\bigg] \n&= \n\\mathbb{E}\\bigg[\\int_{[0, T)}\\hat f_{t}(1-\\zeta_{t-})d\\xi^{n}_{t}\\bigg]\n=\n\\sum_{k=1}^\\infty \\mathbb{E}\\bigg[(-1)^k \\int_{[0, T)} X^f_k \\ind{\\{t \\ge \\eta^f_k\\}} d R^n_t \\bigg]\\\\\n&=\n\\sum_{k=1}^\\infty \\mathbb{E} \\big[ (-1)^k X^f_k (R^n_{T-} - R^n_{\\eta^f_k-})\\big],\n\\end{align*}\nwhere the first equality follows from the continuity of $(\\xi^n_t)$ on $[0, T)$.\nWe further apply dominated convergence (with respect to the product of the counting measure on $\\mathbb N$ and the measure $\\mathbb{P}$) to obtain\n\\begin{equation}\\label{eqn:part2}\n\\begin{aligned}\n\\lim_{n\\to\\infty}\\mathbb{E}\\bigg[\\!\\int_{[0, T)}\\!\\hat f_{t}(1-\\zeta_{t})d\\xi^{n}_{t}\\bigg] \n&=\\sum_{k=1}^\\infty \\mathbb{E} \\big[(-1)^k \\lim_{n\\to\\infty} X^f_k (R^n_{T-} - R^n_{\\eta^f_k-})\\big]\\\\\n&=\n\\sum_{k=1}^\\infty \\mathbb{E} \\big[ (-1)^k X^f_k (R_{T-} - R_{\\eta^f_k-})\\big]=\n\\mathbb{E}\\bigg[\\int_{[0, T)}\\hat f_{t}(1-\\zeta_{t})d\\xi_{t}\\bigg],\n\\end{aligned}\n\\end{equation}\nwhere the second equality uses \\eqref{eq:convR} and the final one the decomposition of $\\hat f$. 
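The use of dominated convergence here is justified by the uniform (in $n$) bound\n\\begin{equation*}\n\\big| X^f_k (R^n_{T-} - R^n_{\\eta^f_k-})\\big| \\le X^f_k, \\qquad \\sum_{k=1}^\\infty \\mathbb{E}[X^f_k] = \\mathbb{E}[X^f] < \\infty,\n\\end{equation*}\nwhich holds because each $(R^n_t)$ is non-decreasing with $0\\le R^n_t\\le \\xi^n_t\\le 1$ (recall \\eqref{eq:Xfg}).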
\nRecalling that $\\xi^n_t\\to\\xi_{t-}$ as $n\\to\\infty$ by construction, dominated convergence gives\n\\begin{equation}\\label{eqn:part3}\n\\lim_{n\\to\\infty}\\mathbb{E}\\bigg[\\int_{[0, T)}\\hat g_{t}(1-\\xi^n_{t})d\\zeta_{t}\\bigg]\n=\n\\mathbb{E}\\bigg[\\int_{[0, T)}\\hat g_{t}(1-\\xi_{t-})d\\zeta_{t}\\bigg].\n\\end{equation}\nPutting together \\eqref{eqn:M_ij_conv_gen}, \\eqref{eqn:part2} and \\eqref{eqn:part3} shows\n\\[\n\\lim_{n \\to \\infty} N(\\xi^n, \\zeta) = \\mathbb{E}\\bigg[\\int_{[0, T)} f_{t}(1-\\zeta_{t})d\\xi_{t} + \\int_{[0, T)} g_t(1-\\xi_{t-})d\\zeta_t + h_T\\Delta\\xi_T\\Delta\\zeta_T\\bigg].\n\\]\nIt remains to notice that by \\eqref{eq-remove-common-jumps} the right hand side is dominated by $N(\\xi, \\zeta)$, which completes the proof of \\eqref{eqn:lipch_conv_liminf}.\n\\end{proof}\n\n\n\\subsection{Proof of Theorem \\ref{thm:ef_0_value}} \\label{sec:ef_functional}\nRandomisation devices $Z_\\tau$ and $Z_\\sigma$ associated to a pair $(\\tau,\\sigma)\\in\\mathcal{T}^R(\\mathcal{F}^1_t)\\times\\mathcal{T}^R(\\mathcal{F}^2_t)$ are independent of $\\mathcal{G}$. Denoting by $(\\xi_t) \\in {\\mathcal{A}^\\circ}(\\mathcal{F}^1_t)$ and $(\\zeta_t) \\in {\\mathcal{A}^\\circ}(\\mathcal{F}^2_t)$ the generating processes for $\\tau$ and $\\sigma$, respectively, the statement of Proposition \\ref{prop-functionals-equal} can be extended to encompass the conditional functional \\eqref{eqn:cond_func}:\n\\begin{equation}\\label{eqn:cond_reform}\n\\mathbb{E}\\big[ \\mathcal{P}(\\tau, \\sigma) \\big| \\mathcal{G} \\big] = \\mathbb{E}\\bigg[\\int_{[0, T)} f_{t}(1-\\zeta_{t})d \\xi_t + \\int_{[0, T)} g_{t}(1-\\xi_t) d \\zeta_t + \\sum_{t \\in [0, T]} h_t \\Delta \\xi_t \\Delta \\zeta_t \\bigg|\\mathcal{G}\\bigg].\n\\end{equation}\nWe can also repeat the same argument as in Remark \\ref{rem-Laraki-Solan} to obtain that\n\\[\n\\underline V:=\\operatornamewithlimits{\\mathrm{ess\\,sup}}_{\\sigma\\in\\mathcal{T}^R(\\mathcal{F}^2_t)}\\operatornamewithlimits{\\mathrm{ess\\,inf\\vphantom{p}}}_{\\tau \\in \\mathcal{T}^R(\\mathcal{F}^1_t)} \\mathbb{E}\\big[ \\mathcal{P}(\\tau, \\sigma) \\big| \\mathcal{G} \\big]=\\operatornamewithlimits{\\mathrm{ess\\,sup}}_{\\sigma\\in\\mathcal{T}^R(\\mathcal{F}^2_t)}\\operatornamewithlimits{\\mathrm{ess\\,inf\\vphantom{p}}}_{\\tau \\in \\mathcal{T}(\\mathcal{F}^1_t)} \\mathbb{E}\\big[ \\mathcal{P}(\\tau, \\sigma) \\big| \\mathcal{G} \\big]\n\\]\nand \n\\[\n\\overline V:=\\operatornamewithlimits{\\mathrm{ess\\,inf\\vphantom{p}}}_{\\tau \\in \\mathcal{T}^R(\\mathcal{F}^1_t)} \\operatornamewithlimits{\\mathrm{ess\\,sup}}_{\\sigma\\in\\mathcal{T}^R(\\mathcal{F}^2_t)} \\mathbb{E}\\big[ \\mathcal{P}(\\tau, \\sigma) \\big| \\mathcal{G} \\big]=\\operatornamewithlimits{\\mathrm{ess\\,inf\\vphantom{p}}}_{\\tau \\in \\mathcal{T}^R(\\mathcal{F}^1_t)}\\operatornamewithlimits{\\mathrm{ess\\,sup}}_{\\sigma\\in\\mathcal{T}(\\mathcal{F}^2_t)} \\mathbb{E}\\big[ \\mathcal{P}(\\tau, \\sigma) \\big| \\mathcal{G} \\big].\n\\]\nNotice that $\\overline V\\ge \\underline V$, $\\mathbb{P}$\\as We will show that\n\\begin{align}\\label{eq:EV}\n\\mathbb{E}[\\,\\underline V\\,]=\\mathbb{E}[\\,\\overline V\\,],\n\\end{align}\nso that $\\overline V= \\underline V$\\,, $\\mathbb{P}$\\as as needed.\n\nIn order to prove \\eqref{eq:EV}, let us define\n\\begin{equation*}\n\\overline{M}(\\tau):=\\operatornamewithlimits{\\mathrm{ess\\,sup}}_{\\sigma \\in \\mathcal{T}(\\mathcal{F}^2_t)} \\mathbb{E}\\big[ \\mathcal{P}(\\tau, \\sigma) \\big| \\mathcal{G} \\big],\\quad\\text{for 
$\\tau\\in\\mathcal{T}^R(\\mathcal{F}^1_t)$},\n\\end{equation*}\nand \n\\begin{align*}\n\\underline{M}(\\sigma):=\\operatornamewithlimits{\\mathrm{ess\\,inf\\vphantom{p}}}_{\\tau \\in \\mathcal{T}(\\mathcal{F}^1_t)} \\mathbb{E}\\big[ \\mathcal{P}(\\tau, \\sigma) \\big| \\mathcal{G} \\big],\\quad\\text{for $\\sigma\\in\\mathcal{T}^R(\\mathcal{F}^2_t)$}.\n\\end{align*}\nThese are two standard optimal stopping problems and the theory of Snell envelope applies (see, e.g., \\cite[Appendix D]{Karatzas1998} and \\cite{elkaroui1981}). We adapt some results from that theory to suit our needs in the game setting.\n\\begin{Lemma}\\label{lem:directed}\nThe family $\\{\\overline{M}(\\tau),\\,\\tau\\in \\mathcal{T}^R(\\mathcal{F}^1_t)\\}$ is downward directed and the family $\\{\\underline{M}(\\sigma),\\,\\sigma\\in \\mathcal{T}^R(\\mathcal{F}^2_t)\\}$ is upward directed. \n\\end{Lemma}\n\\begin{proof}\nLet $\\tau^{(1)},\\tau^{(2)}\\in\\mathcal{T}^R(\\mathcal{F}^1_t)$ and let $\\xi^{(1)},\\xi^{(2)}\\in{\\mathcal{A}^\\circ}(\\mathcal{F}^1_t)$ be the corresponding generating processes. Fix the $\\mathcal{G}$-measurable event $B=\\{\\overline{M}(\\tau^{(1)})\\le\\overline{M}(\\tau^{(2)})\\}$ and define another $(\\mathcal{F}^1_t)$-randomised stopping time as $\\hat{\\tau}=\\tau^{(1)} \\ind{B} +\\tau^{(2)} \\ind{B^c}$. We use $\\mathcal{G}\\subset\\mathcal{F}^1_0$ to ensure that $\\hat \\tau\\in\\mathcal{T}^R(\\mathcal{F}^1_t)$. The generating process of $\\hat \\tau$ reads $\\hat \\xi_t=\\xi^{(1)}_t \\ind{B} +\\xi^{(2)}_t \\ind{B^c}$ for $t\\in[0,T]$. Using the linear structure of $\\hat \\xi$ and recalling \\eqref{eqn:cond_reform}, for any $\\sigma\\in\\mathcal{T}(\\mathcal{F}^2_t)$, we have\n\\begin{align*}\n\\mathbb{E}\\big[\\mathcal{P}(\\hat{\\tau}, \\sigma)|\\mathcal{G}\\big]\n&=\n\\ind{B}\\mathbb{E}\\bigg[\\int_{[0, \\sigma)} f_{u}d \\xi^{(1)}_u + g_{\\sigma}(1-\\xi^{(1)}_\\sigma) + h_\\sigma \\Delta \\xi^{(1)}_\\sigma \\bigg|\\mathcal{G}\\bigg]\\\\\n&\\hspace{12pt}+\\ind{B^c}\\mathbb{E}\\bigg[\\int_{[0, \\sigma)} f_{u}d \\xi^{(2)}_u + g_{\\sigma}(1-\\xi^{(2)}_\\sigma) + h_\\sigma \\Delta \\xi^{(2)}_\\sigma\\bigg|\\mathcal{G}\\bigg]\\\\\n&=\\ind{B}\\mathbb{E}\\big[\\mathcal{P}(\\tau^{(1)} ,\\sigma)|\\mathcal{G}\\big]+\\ind{B^c}\\mathbb{E}\\big[\\mathcal{P}(\\tau^{(2)} ,\\sigma)|\\mathcal{G}\\big]\\\\\n&\\le\\ind{B}\\overline{M}(\\tau^{(1)})+\\ind{B^c}\\overline{M}(\\tau^{(2)})=\\overline{M}(\\tau^{(1)})\\wedge\\overline{M}(\\tau^{(2)}),\n\\end{align*} \nwhere the inequality is by definition of essential supremum and the final equality by definition of the event $B$.\nThus, taking the supremum over $\\sigma\\in\\mathcal{T}(\\mathcal{F}^2_t)$ we get\n\\[\n\\overline{M}(\\hat \\tau)\\le \\overline{M}(\\tau^{(1)})\\wedge\\overline{M}(\\tau^{(2)}),\n\\]\nhence the family $\\{\\overline{M}(\\tau),\\,\\tau\\in \\mathcal{T}^R(\\mathcal{F}^1_t)\\}$ is downward directed. 
A symmetric argument proves that \nthe family $\\{\\underline{M}(\\sigma),\\,\\sigma\\in \\mathcal{T}^R(\\mathcal{F}^2_t)\\}$ is upward directed.\n\\end{proof}\nAn immediate consequence of the lemma and of the definition of essential supremum\/infimum is that (see, e.g., \\cite[Lemma I.1.3]{Peskir2006}) we can find sequences $(\\sigma_n)_{n\\ge 1}\\subset \\mathcal{T}^R(\\mathcal{F}^2_t)$ and $(\\tau_n)_{n\\ge 1}\\subset\\mathcal{T}^R(\\mathcal{F}^1_t)$ such that $\\mathbb{P}$\\as\n\\begin{align}\\label{eq:limV}\n\\overline V=\\lim_{n\\to\\infty}\\overline{M}(\\tau_n)\\quad\\text{and}\\quad\\underline V=\\lim_{n\\to\\infty}\\underline{M}(\\sigma_n),\n\\end{align}\nwhere the convergence is monotone in both cases. \n\nAnalogous results hold for the optimisation problems defining $\\overline M(\\tau)$ and $\\underline M(\\sigma)$. The proof of the following lemma is similar to that of Lemma \\ref{lem:directed} and omitted.\n\\begin{Lemma}\\label{lem:directed2}\nThe family $\\{\\mathbb{E}\\big[\\mathcal{P}(\\tau,\\sigma)|\\mathcal{G}\\big],\\,\\sigma\\in \\mathcal{T}(\\mathcal{F}^2_t)\\}$ is upward directed for each $\\tau\\in \\mathcal{T}^R(\\mathcal{F}^1_t)$. The family $\\{\\mathbb{E}\\big[\\mathcal{P}(\\tau,\\sigma)|\\mathcal{G}\\big],\\,\\tau\\in \\mathcal{T}(\\mathcal{F}^1_t)\\}$ is downward directed for each $\\sigma\\in \\mathcal{T}^R(\\mathcal{F}^2_t)$. \n\\end{Lemma}\nIt follows that for each $\\tau\\in\\mathcal{T}^R(\\mathcal{F}^1_t)$ and $\\sigma\\in\\mathcal{T}^R(\\mathcal{F}^2_t)$, there are sequences $(\\sigma^\\tau_n)_{n\\ge 1}\\subset\\mathcal{T}(\\mathcal{F}^2_t)$ and $(\\tau^\\sigma_n)_{n\\ge 1}\\subset\\mathcal{T}(\\mathcal{F}^1_t)$ such that \n\\begin{align}\\label{eq:limM}\n\\overline M(\\tau)=\\lim_{n\\to\\infty}\\mathbb{E}\\big[\\mathcal{P}(\\tau,\\sigma^\\tau_n)|\\mathcal{G}\\big]\\quad\\text{and}\\quad\\underline M(\\sigma)=\\lim_{n\\to\\infty}\\mathbb{E}\\big[\\mathcal{P}(\\tau^\\sigma_n,\\sigma)|\\mathcal{G}\\big],\n\\end{align}\nwhere the convergence is monotone in both cases. Equipped with these results we can prove the following lemma which will quickly lead to \\eqref{eq:EV}.\n\n\\begin{Lemma}\\label{cor-exp-for-values}\nRecall $V_*$ and $V^*$ as in Definition \\ref{def-value-rand-strat}. We have\n\\begin{equation}\\label{eq-cor-exp}\n\\mathbb{E}[\\overline{V}] = V^*, \\qquad \\text{and}\\qquad \\mathbb{E}[\\underline{V}] = V_*.\n\\end{equation}\n\\end{Lemma}\n\\begin{proof}\nFix $\\tau\\in\\mathcal{T}^R(\\mathcal{F}^1_t)$. By \\eqref{eq:limM} and the monotone convergence theorem\n\\[\n\\mathbb{E}[ \\overline{M}(\\tau) ] = \\lim_{n\\to\\infty}\\mathbb{E}[\\mathcal{P}(\\tau,\\sigma^\\tau_n)]\\le \\sup_{\\sigma\\in\\mathcal{T}(\\mathcal{F}^2_t)}\\mathbb{E}[\\mathcal{P}(\\tau,\\sigma)].\n\\]\nThe opposite inequality follows from the fact that $\\overline{M}(\\tau) \\ge \\mathbb{E}[\\mathcal{P}(\\tau,\\sigma)|\\mathcal{G}]$ for any $\\sigma \\in \\mathcal{T}(\\mathcal{F}^2_t)$ by the definition of the essential supremum. Therefore, we have\n\\begin{equation}\\label{eqn:M1}\n\\mathbb{E}[ \\overline{M}(\\tau) ] = \\sup_{\\sigma\\in\\mathcal{T}(\\mathcal{F}^2_t)}\\mathbb{E}[\\mathcal{P}(\\tau,\\sigma)]. 
\n\\end{equation}\nFrom \\eqref{eq:limV}, similar arguments as above prove that\n\\begin{equation}\\label{eqn:M2}\n\\mathbb{E}[\\overline{V}] = \\operatornamewithlimits{inf\\vphantom{p}}_{\\tau \\in \\mathcal{T}^R(\\mathcal{F}^1_t)} \\mathbb{E}[ \\overline{M}(\\tau) ].\n\\end{equation}\nCombining \\eqref{eqn:M1} and \\eqref{eqn:M2} completes the proof that $\\mathbb{E}[\\overline{V}] = V^*$. The second part of the statement requires analogous arguments.\n\\end{proof}\n\nFinally, \\eqref{eq-cor-exp} and Theorem \\ref{thm:main2} imply \\eqref{eq:EV}, which concludes the proof of Theorem \\ref{thm:ef_0_value}.\n\n\n\n\n\n\n\\section{Counterexamples}\n\\label{sec:Nikita-examples}\n\nIn the three subsections below we show that: (a) relaxing condition \\ref{eq-order-cond} may lead to a game without a value, (b) in situations where one player has all the informational advantage, the use of randomised stopping times may be beneficial also for the uninformed player, and (c) Assumption \\ref{ass:regular_gen} is tight in requiring that either $(\\hat f_t)$ is non-increasing or $(\\hat g_t)$ is non-decreasing. \n\nIn order to keep the exposition simple we consider the framework of Section \\ref{subsec:game_1} with $\\I = 2$, $\\J = 1$, and impose that $(\\mathcal{F}^p_t)$ be the trivial filtration (hence all payoff processes are deterministic, since they are $(\\mathcal{F}^p_t)$-adapted). Furthermore we restrict our attention to the case in which $f^{1,1} = f^{2,1} = f$, $g^{1,1} = g^{2,1} = g$ and $h^{1,1}_t\\ind{\\{t<T\\}}=h^{2,1}_t\\ind{\\{t<T\\}}=f_t\\ind{\\{t<T\\}}$, so that only the terminal values $h^1:=h^{1,1}_T$ and $h^2:=h^{2,1}_T$ remain to be specified.\n\n\\subsection{Relaxing condition \\ref{eq-order-cond}}\n\nWe set $T=1$, $\\pi_1=\\pi_2=\\tfrac{1}{2}$ and consider the payoffs\n\\[\nf_t=1,\\quad g_t=\\tfrac{1}{2}t,\\quad h^1=2,\\quad h^2=0,\n\\]\nso that condition \\ref{eq-order-cond} is violated: $h^1=2>1=f_{1-}$ and $h^2=0<\\tfrac{1}{2}=g_{1-}$.\n\\begin{Proposition}\nIn the example of this subsection we have\n\\begin{equation*}\nV_{*}\\le\\frac12,\\quad\\text{and}\\quad V^{*}> \\frac12,\n\\end{equation*}\nso the game does not have a value.\n\\end{Proposition}\n\\begin{proof}\nFirst we show that $V_{*}\\le\\frac{1}{2}$. Recall that (cf. Remark \\ref{rem-Laraki-Solan})\n\\begin{equation*}\nV_*=\\sup_{\\sigma\\in\\mathcal{T}^R}\\operatornamewithlimits{inf\\vphantom{p}}_{\\tau_1,\\tau_2\\in\\mathcal{T}^R}N((\\tau_1,\\tau_2),\\sigma)=\\sup_{\\sigma\\in\\mathcal{T}^R}\\operatornamewithlimits{inf\\vphantom{p}}_{\\tau_1,\\tau_2\\in[0,1]}N((\\tau_1,\\tau_2),\\sigma),\n\\end{equation*} \nso we can take $\\tau_1, \\tau_2 \\in [0, 1]$ deterministic in the arguments below. Take any $\\sigma \\in \\mathcal{T}^R$ and the corresponding generating process $(\\zeta_t)$ which is, due to the triviality of the filtration $(\\mathcal{F}^p_t)$,\na deterministic function. For $\\tau_1\\in[0,1)$, $\\tau_2=1$ we obtain\n\\begin{align*}\nN((\\tau_1,\\tau_2),\\sigma)&= \\mathbb{E}\\big[(\\frac{1}{2}\\sigma \\indd{\\sigma<\\tau_1}+1\\cdot \\ind{\\{\\sigma\\ge\\tau_1\\}})\\indd{\\mcalI = 1} + (\\frac{1}{2}\\sigma \\indd{\\sigma<1}+0 \\cdot \\indd{\\sigma=1}) \\indd{\\mcalI = 2}\\big]\\\\\n&\\le \\frac{1}{2}(\\frac{1}{2}\\zeta_{\\tau_1-}+(1-\\zeta_{\\tau_1-}))+ \\frac{1}{4}\\zeta_{1-}\n= \\frac{1}{2}-\\frac{1}{4}\\zeta_{\\tau_1-}+\\frac{1}{4}\\zeta_{1-},\n\\end{align*}\nwhere we used that $\\sigma$ is bounded above by $1$ and that $\\mcalI$ is independent of $\\sigma$ with $\\mathbb{P}(\\mcalI=1)=\\mathbb{P}(\\mcalI=2)=\\tfrac{1}{2}$.\nIn particular,\n\\begin{equation*}\n\\operatornamewithlimits{inf\\vphantom{p}}_{\\tau_1,\\tau_2 \\in [0, 1]} N((\\tau_1,\\tau_2),\\sigma) \\le \\mathop{\\lim\\sup}_{\\tau_1\\to 1-} N((\\tau_1,1),\\sigma) \\le \\frac{1}{2}.\n\\end{equation*}\nThis proves that $V_* \\le \\frac12$.\n\nNow we turn our attention to demonstrating that $V^{*}>\\frac{1}{2}$. 
Noting again that\n\\begin{equation*}\nV^*=\\operatornamewithlimits{inf\\vphantom{p}}_{\\tau_1,\\tau_2\\in\\mathcal{T}^R}\\sup_{\\sigma\\in\\mathcal{T}^R}N((\\tau_1,\\tau_2),\\sigma)=\\operatornamewithlimits{inf\\vphantom{p}}_{\\tau_1,\\tau_2\\in\\mathcal{T}^R}\\sup_{\\sigma\\in[0,1]}N((\\tau_1,\\tau_2),\\sigma),\n\\end{equation*} \nwe can restrict our attention to constant $\\sigma \\in [0, 1]$. Take any $\\tau_1, \\tau_2 \\in \\mathcal{T}^R$ and the corresponding generating processes $(\\xi^1_t), (\\xi^2_t)$ which are also deterministic functions. \n\nTake any $\\delta \\in (0, 1\/2)$. If $\\xi^1_{1-} > \\delta$, then for any $\\sigma<1$ we have\n\\begin{align*}\nN((\\tau_1,\\tau_2),\\sigma)\n&\\ge \n\\mathbb{E}\\big[\\big(1 \\cdot \\indd{\\tau_1\\le\\sigma}+ \\frac{1}{2}\\sigma \\indd{\\sigma<\\tau_1}\\big) \\indd{\\mcalI=1}+\\frac{1}{2}\\sigma \\indd{\\mcalI=2}\\big]\\\\\n&=\n\\mathbb{E}\\big[\\big(\\xi^1_\\sigma + \\frac{1}{2}\\sigma(1-\\xi^1_\\sigma)\\big) \\ind{\\{\\mcalI=1\\}}+\\frac{1}{2}\\sigma \\ind{\\{\\mcalI=2\\}}\\big]\\\\\n&=\n\\frac{1}{2}\\xi^1_\\sigma - \\frac{1}{4}\\sigma\\xi^1_\\sigma+\\frac{1}{2}\\sigma\n=\n\\frac{1}{2}\\xi^1_\\sigma(1 - \\frac{1}{2}\\sigma)+\\frac{1}{2}\\sigma,\n\\end{align*}\nand, in particular,\n\\begin{equation*}\n\\sup_{\\sigma \\in [0, 1]} N((\\tau_1,\\tau_2),\\sigma) \\ge \\mathop{\\lim\\sup}_{\\sigma\\to 1-} N((\\tau_1,\\tau_2),\\sigma) \\ge \\frac{1}{4}\\xi^1_{1-} + \\frac{1}{2}\\ge \\frac12 + \\frac14\\delta>\\frac12.\n\\end{equation*}\nOn the other hand, if $\\xi^1_{1-} \\le \\delta$, taking $\\sigma=1$ yields\n\\begin{equation*}\n\\sup_{\\sigma \\in [0, 1]} N((\\tau_1,\\tau_2),\\sigma) \\ge N((\\tau_1,\\tau_2),1) \\ge \\mathbb{E}[2\\cdot \\ind{\\{\\tau_1= 1\\}} \\ind{\\{\\mcalI=1\\}}] = 1-\\xi^1_{1-} \\ge 1 - \\delta > \\frac{1}{2}.\n\\end{equation*}\nThis completes the proof that $V^* > \\frac12$.\n\\end{proof}\n\n\n\\subsection{Necessity of randomisation}\\label{sec-example-necessity-of-rand}\n\nHere we argue that randomisation is not only sufficient in order to find the value in Dynkin games with asymmetric information, but in many cases it is also necessary. In \\cite{DEG2020} there is a rare example of an explicit construction of optimal strategies for a zero-sum Dynkin game with asymmetric information in a diffusive set-up (see Section \\ref{subsec:game_2} above for details). The peculiarity of the solution in \\cite{DEG2020} lies in the fact that the informed player uses a randomised stopping time whereas the uninformed player sticks to a pure stopping time. An interpretation of that result suggests\nthat the informed player uses randomisation to `gradually reveal' information about the scenario in which the game is being played, in order to induce the uninformed player to act in a certain desirable way. Since the uninformed player has `nothing to reveal' one may be tempted to draw a general conclusion that she should never use randomised stopping rules. However, Proposition \\ref{prop-uninf-benefits-from-rand} below shows that such a conclusion would be wrong in general and even the \\emph{uninformed} player may benefit from randomisation of stopping times.\n\n\n\\begin{figure}[tb]\n\\begin{center}\n\\includegraphics[width=0.6\\textwidth]{example_graph.png}\n\\end{center}\n\\caption{Payoff functions $f$ in blue, $g$ in orange.}\n\\label{fig:1}\n\\end{figure}\n\nWe consider specific payoff functions $f$ and $g$ plotted in Figure \\ref{fig:1}. 
Their analytic formulae read\n\\[\nf_t = (10t+4)\\ind{\\{t\\in[0,\\frac{1}{10})\\}} + 5\\ind{\\{t\\in[\\frac{1}{10},1]\\}}, \\qquad \ng_t = (15t - 6) \\ind{\\{t\\in[\\frac{2}{5},\\frac{1}{2})\\}}+(9-15 t)\\ind{\\{t\\in[\\frac{1}{2},\\frac{3}{5})\\}}\n\\]\nwith\n\\begin{equation*}\nh^1 = 0 = g_{1-},\\quad h^2=5=f_{1-}.\n\\end{equation*}\nWe also set $T=1$, $\\pi_1 = \\pi_2 =\\frac{1}{2}$.\nAs above, we identify randomised strategies with their generating processes. In particular, we denote by $\\zeta$ the generating process for $\\sigma \\in \\mathcal{T}^R$. \n\nBy Theorem \\ref{thm:main}, the game has a value in randomised strategies, i.e., $V^* = V_*$. Restriction of the uninformed player's (player 2) strategies to pure stopping times affects only the lower value (see Remark \\ref{rem-Laraki-Solan}). The lower value of the game in which player 2 is restricted to using pure stopping times reads\n\\begin{equation*}\n\\widehat{V}_*:=\\sup_{\\sigma\\in[0,1]}\\operatornamewithlimits{inf\\vphantom{p}}_{\\tau_1, \\tau_2 \\in \\mathcal{T}^R} N((\\tau_1, \\tau_2),\\sigma)=\\sup_{\\sigma\\in[0,1]}\\operatornamewithlimits{inf\\vphantom{p}}_{\\tau_1, \\tau_2 \\in [0,1]} N((\\tau_1, \\tau_2),\\sigma),\n\\end{equation*}\nwhere the equality is again due to Remark \\ref{rem-Laraki-Solan} (notice that here all pure stopping times are $(\\mathcal{F}^p_t)$-stopping times hence deterministic, because $(\\mathcal{F}^p_t)$ is trivial). As the following proposition shows, $\\widehat{V}_*<V_*$, i.e., the uninformed player strictly benefits from randomisation.\n\\begin{Proposition}\\label{prop-uninf-benefits-from-rand}\nIn the example of this subsection we have\n\\begin{equation*}\nV_{*}>\\widehat{V}_*.\n\\end{equation*}\n\\end{Proposition}\n\\begin{proof}\nFirst, notice that \n\\begin{equation*}\n\\widehat{V}_*\\le\\sup_{\\sigma\\in [0,1]} N(\\hat{\\tau}(\\sigma),\\sigma),\n\\end{equation*}\nwhere we take\n\\[\n\\hat \\tau (\\sigma)= (\\tau_1(\\sigma), \\tau_2(\\sigma)) = \\begin{cases}\n(1,1),& \\text{for } \\sigma\\in[0,1),\\\\\n(1,0),& \\text{for } \\sigma=1.\n \\end{cases}\n\\]\nIt is easy to verify that $\\sup_{\\sigma\\in[0,1]} N(\\hat{\\tau}(\\sigma),\\sigma)=2$: for $\\sigma<1$ the payoff equals $g_\\sigma\\le\\tfrac32$, while $N(\\hat\\tau(1),1)=\\tfrac12 h^1+\\tfrac12 f_0=2$. \n\nWe will show that the $\\sigma$-player can ensure a strictly larger payoff by using a randomised strategy. Define $\\zeta_t=a \\ind{\\{t\\ge\\frac{1}{2}\\}}+(1-a)\\ind{\\{t=1\\}}$, i.e., the corresponding $\\sigma\\in\\mathcal{T}^R$ prescribes to `stop at time $\\frac{1}{2}$ with probability $a$ and at time $1$ with probability $1-a$'. The value of the parameter $a \\in [0,1]$ will be determined below. We claim that\n\\begin{equation}\\label{eqn:zb}\n\\operatornamewithlimits{inf\\vphantom{p}}_{\\tau_1, \\tau_2 \\in[0,1]} N((\\tau_1, \\tau_2),\\zeta) = N((1,0),{\\zeta})\\wedge N((1,1),{\\zeta}).\n\\end{equation}\nAssuming that the above is true, we calculate\n\\[\nN((1,0),{\\zeta})=2+\\frac{3}{4}a, \\qquad N((1,1),{\\zeta}) = \\frac{5}{2}-a.\n\\]\nPicking $a = \\frac27$, the above quantities are equal to $\\frac{31}{14}$. Hence $V_* \\ge \\frac{31}{14}>2\\ge\\widehat{V}_*$.\n\nIt remains to prove \\eqref{eqn:zb}. Recall that $\\zeta_t=a \\ind{\\{t\\ge\\frac{1}{2}\\}}+(1-a)\\ind{\\{t=1\\}}$ is the generating process of $\\sigma$ and the expected payoff reads\n\\begin{equation*}\nN((\\tau_1, \\tau_2), \\zeta) = \\sum_{i=1}^2 \\mathbb{E} \\big[ \\ind{\\{\\mcalI=i\\}}\\left(f_{\\tau_i} \\ind{\\{\\tau_i \\le \\sigma\\} \\cap \\{\\tau_i < 1\\}} + g_{\\sigma} \\ind{\\{\\sigma<\\tau_i\\} \\cap \\{\\sigma < 1\\}} + h^i \\ind{\\{\\tau_i = \\sigma = 1\\}}\\right) \\big].\n\\end{equation*}\nIt is clear that on the event $\\{\\mcalI=1\\}$ the infimum is attained for $\\tau_1=1$, irrespective of the choice of $\\zeta$. 
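To make the last claim explicit, one can compare the two payoffs pathwise using the specific values of this example ($f_t\\ge 4$, $g_t\\le\\tfrac32$ and $h^1=0$): for any $\\tau_1<1$,\n\\begin{equation*}\nf_{\\tau_1} \\ind{\\{\\tau_1 \\le \\sigma\\}} + g_{\\sigma} \\ind{\\{\\sigma<\\tau_1\\}}\\;\\ge\\; g_{\\sigma} \\ind{\\{\\sigma < 1\\}} + h^1 \\ind{\\{\\sigma = 1\\}},\n\\end{equation*}\nwhere the right-hand side is precisely the payoff of $\\tau_1=1$; indeed, on $\\{\\sigma<\\tau_1\\}$ both sides equal $g_\\sigma$, while on $\\{\\tau_1\\le\\sigma\\}$ the left-hand side is $f_{\\tau_1}\\ge 4$ and the right-hand side is at most $\\tfrac32$. 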
On the event $\\{\\mcalI=2\\}$ the informed player would only stop either at time zero, where the function $f$ attains the minimum cost $f_0=4$, or at time $t>\\frac12$ since $\\zeta$ only puts mass at $t=\\frac12$ and at $t=1$ (the informed player knows her opponent may stop at $t=\\frac12$ with probability $a$). The latter strategy corresponds to a payoff $5-\\frac72 a$ and can also be achieved by picking $\\tau_2=1$. Then the informed player needs only to consider the expected payoff associated to the strategies $(\\tau_1,\\tau_2)=(1,0)$ and $(\\tau_1,\\tau_2)=(1,1)$, so that \\eqref{eqn:zb} holds. For the values used above, recall that $g_{\\frac12}=\\frac32$, $f_0=4$, $h^1=0$ and $h^2=5$, so that $N((1,0),\\zeta)=\\frac12\\cdot\\frac32 a+\\frac12\\cdot 4=2+\\frac34 a$ and $N((1,1),\\zeta)=\\frac12\\cdot\\frac32 a+\\frac12\\big(\\frac32 a+5(1-a)\\big)=\\frac52-a$.\n\\end{proof}\n\n\\subsection{Necessity of Assumption \\ref{ass:regular_gen}} \\label{subsec:example_jumps}\n\nOur final counterexample shows that violating Assumption \\ref{ass:regular_gen} by allowing both predictable upward jumps of $f$ \\emph{and} predictable downward jumps of $g$ may also lead to a game without a value.\n\nConsider the payoffs\n\\[\nf_t=1+2\\ind{\\{t\\ge \\frac{1}{2}\\}},\\quad g_t=-\\ind{\\{t\\ge \\frac{1}{2}\\}},\\quad h^1=3,\\quad h^2=-1,\n\\]\nso that $h^1=f_{1-}$ and $h^2=g_{1-}$, and let us also set $T=1$, $\\pi_1=\\pi_2=\\tfrac{1}{2}$. Assumption \\ref{ass:regular_gen} is violated as $g$ has a predictable downward jump and $f$ has a predictable upward jump at time $t=\\frac{1}{2}$.\n\n\\begin{Proposition}\nIn the example of this subsection we have\n\\begin{equation*}\nV_{*}\\le 0,\\quad\\text{and}\\quad V^{*}>0,\n\\end{equation*}\nso the game does not have a value.\n\\end{Proposition}\n\\begin{proof}\nFirst we show that $V_{*}\\le 0$. For this step, it is sufficient to restrict our attention to pure stopping times $\\tau_1,\\tau_2\\in[0,1]$ for the informed player (cf. Remark \\ref{rem-Laraki-Solan}). Let $\\sigma\\in\\mathcal{T}^R$ with a (deterministic) generating process $(\\zeta_t)$ and fix $\\varepsilon\\in (0, \\frac12)$. For $\\tau_1=\\frac{1}{2}-\\varepsilon$ and $\\tau_2=1$ we obtain\n\\begin{align*}\nN((\\tau_1,\\tau_2),\\sigma)&\n=\\mathbb{E}\\big[\\indd{\\mcalI=1}(0\\cdot\\indd{\\sigma<\\tau_1}+1\\cdot\\indd{\\sigma\\ge\\tau_1}) + \\indd{\\mcalI=2}(0\\cdot\\indd{\\sigma<\\frac{1}{2}}-1\\cdot\\indd{\\sigma\\ge \\frac{1}{2}})\\big]\\\\\n&=\\frac12\\big(1-\\zeta_{(\\frac{1}{2}-\\varepsilon)-}\\big)-\\frac12\\big(1-\\zeta_{\\frac{1}{2}-}\\big).\n\\end{align*}\nTherefore, using that $(\\zeta_t)$ has c\\`adl\\`ag trajectories we have\n\\begin{equation*}\n\\operatornamewithlimits{inf\\vphantom{p}}_{\\tau_1,\\tau_2\\in[0,1]} N((\\tau_1,\\tau_2),\\sigma) \\le \\lim_{\\varepsilon\\to 0} \\frac12\\cdot(\\zeta_{\\frac{1}{2}-}-\\zeta_{(\\frac{1}{2}-\\varepsilon)-}) =0.\n\\end{equation*}\nSince the result holds for all $\\sigma\\in\\mathcal{T}^R$ we have $V_*\\le 0$.\n\nNext, we demonstrate that $V^{*}>0$. For this step it is sufficient to consider pure stopping times $\\sigma\\in[0,1]$ for the uninformed player (Remark \\ref{rem-Laraki-Solan}). Let $\\tau_1,\\tau_2\\in\\mathcal{T}^R$ and let $\\xi^1,\\xi^2$ be the associated (deterministic) generating processes. Fix $\\delta\\in(0,1)$ and $\\varepsilon \\in (0, \\frac12)$, and consider first the case in which $\\xi^1_{\\frac{1}{2}-}+\\xi^2_{\\frac{1}{2}-}>\\delta$. 
For $\\sigma=\\frac{1}{2}-\\varepsilon$ we have\n\\begin{align*}\nN((\\tau_1,\\tau_2),\\sigma)\n&=\n\\mathbb{E}\\big[\\indd{\\mcalI=1}(1\\cdot\\indd{\\tau_1\\le\\sigma}+0\\cdot\\indd{\\sigma<\\tau_1}) + \\indd{\\mcalI=2}(1\\cdot\\indd{\\tau_2\\le\\sigma}+0\\cdot\\indd{\\sigma<\\tau_2})\\big]\\\\\n&= \\frac12\\big(\\xi^1_{\\frac{1}{2}-\\varepsilon} + \\xi^2_{\\frac{1}{2}-\\varepsilon}\\big),\n\\end{align*}\nthus implying\n\\begin{equation}\\label{eq:last0}\n\\sup_{\\sigma\\in[0,1]} N((\\tau_1,\\tau_2),\\sigma) \\ge \\lim_{\\sigma\\to \\frac{1}{2}-} N((\\tau_1,\\tau_2),\\sigma)= \\frac12(\\xi^1_{\\frac{1}{2}-}+\\xi^2_{\\frac{1}{2}-})>\\frac{\\delta}{2}>0.\n\\end{equation}\nIf, instead, $\\xi^1_{\\frac{1}{2}-}+\\xi^2_{\\frac{1}{2}-}\\le\\delta$ so that, in particular, $\\xi^1_{\\frac{1}{2}-}\\vee\\xi^2_{\\frac{1}{2}-}\\le\\delta$, then\n\\begin{equation}\\label{eq:last1}\n\\begin{aligned}\n\\sup_{\\sigma\\in[0,1]} N((\\tau_1,\\tau_2),\\sigma)\n&\\ge \nN((\\tau_1,\\tau_2),1)\\\\\n&\\ge \n\\mathbb{E}\\big[\\indd{\\mcalI=1}(1\\cdot\\indd{\\tau_1<\\frac{1}{2}}+3\\cdot\\indd{\\tau_1\\ge\\frac{1}{2}}) + \\indd{\\mcalI=2}(-1)\\big]\\\\\n&\\ge \\frac{1}{2}\\left(\\xi^1_{\\frac{1}{2}-}+3\\big(1-\\xi^1_{\\frac{1}{2}-}\\big)\\right) - \\frac{1}{2}\n=1-\\xi^1_{\\frac{1}{2}-}\\ge 1-\\delta>0.\n\\end{aligned}\n\\end{equation}\nCombining \\eqref{eq:last0} and \\eqref{eq:last1} we have $V^*>0$.\n\\end{proof}\n\n\n\n\\bibliographystyle{abbrvnat}\n\n\\section{Introduction}\nThe AdS\/CFT correspondence \\cite{Maldacena:1997re, Witten:1998qj, Gubser:1998bc}, the celebrated conjecture relating type IIB strings on $AdS_5\\times S^5$ to $\\mathcal{N}=4$ super Yang-Mills theory, has received a lot of attention given the possibility of extracting information about the strongly coupled gauge theory by means of perturbative computations in the gravitational dual. However, this same property has made it difficult to find a way to prove the conjecture in all generality, and one needs to rely on tests restricted to the BPS sector.\n\nIn particular, the evaluation of four-point correlation functions of BPS operators in tree level supergravity has made it possible to check the correspondence in the limit $N\\rightarrow \\infty$, large $\\lambda$. Four-point functions are very interesting objects as they are not completely fixed by conformal symmetry, and they can be given an Operator Product Expansion (OPE) interpretation, which is known to encode all the dynamical information of the theory. Moreover, their quantum behaviour is severely restricted due to the existence of a lagrangian formulation of $d=4$ SYM, so the predictions on the dynamical piece can be verified by direct computation.\n\nThe present availability of the spectrum has limited the calculations to fields arising in the compactification of IIB supergravity on $AdS_5\\times S^5$. The standard AdS\/CFT dictionary relates the infinite tower of KK scalar excitations originating from the trace of the graviton and the five-form on $S^5$ to $1\/2$-BPS operators of $\\mathcal{N}=4$ SYM theory. These operators are known to have protected conformal dimensions, two- and three-point functions \\cite{Lee:1998bxa,D'Hoker:1998tz}. Four-point functions are then the simplest objects which exhibit non-trivial dynamics when going to the strongly coupled regime. 
Therefore, comparison of results obtained from supergravity with those obtained from either free or perturbative YM often reveals new insights into the behaviour of the theory, while also constituting a probing test of the duality.\n\nGiven the technical difficulty associated with evaluating diagrams for generic operators, supergravity-induced four-point functions have been studied only for specific examples\\footnote{Other known examples involving superconformal descendants can be found in \\cite{Liu:1998ty,Uruchurtu:2007kq}.}. The first example in the literature, in which the basic techniques for evaluating amplitudes were developed, was the four-point function of dilaton-axion fields \\cite{D'Hoker:1999pj}, whose dual operators belong to the (ultrashort) current multiplet of $\\mathcal{N}=4$ SYM. Four-point functions of superconformal primaries followed later, since the cubic and quartic couplings are difficult to evaluate \\cite{Arutyunov:1999en,Arutyunov:1999fb}. The examples have been restricted to those involving four identical operators with weight $\\Delta=2,3,4$ \\cite{Arutyunov:2000py,Arutyunov:2002fh,Arutyunov:2003ae}, and the results have been shown to have the dynamical structure predicted by the gauge theory and superconformal symmetry. The first example that explored the dynamics in the $t$-channel between massless fields and Kaluza-Klein (KK) excitations was presented in \\cite{Berdichevsky:2007xd}, and so far, there are no known computations from supergravity that address fields transforming in generic representations, that is, of the form $[0,n,0]$. \n\nIn this paper we then continue the programme of evaluating new examples of four-point functions involving BPS operators. In this case we will consider two operators of lowest conformal dimension $\\Delta=2$, and two operators of generic conformal dimension $\\Delta=n$. This example generalises the result in \\cite{Berdichevsky:2007xd} and is the first one involving operators transforming in generic representations of the $R$-symmetry group. This constitutes a first step towards computing the four-point function of 1\/2-BPS primaries of arbitrary weight, while also allowing interactions between the massless graviton multiplet and the infinite tower of KK excitations to emerge. We will start by establishing the general structure of the amplitude, restricting its functional dependence using superconformal symmetry and the dynamical procedure known as the insertion procedure. We then evaluate the amplitude in AdS supergravity and compare this result against the predictions made on the gauge theory side. \n\nTo this end, one needs to obtain the on-shell value of the five-dimensional effective action for type IIB supergravity on $AdS_5\\times S^5$ relevant for the calculation. These terms can be found in \\cite{Lee:1998bxa,Arutyunov:1999en,Arutyunov:1999fb, Arutyunov:1998hf}. To calculate the on-shell action, we use the techniques in \\cite{Arutyunov:2002fh, Berdichevsky:2007xd, D'Hoker:1999ni} for evaluating the AdS $z$-integrals. However, for the evaluation of the effective vertices coming from the integrals over the $S^5$, we introduce a new method, as sums of products of $SO(6)$ $C$-tensors cannot be evaluated in a closed form when including representations depending on a generic label $n$\\footnote{Even for fixed $n>4$ the computation becomes very involved and requires the use of a computer algebra program.}\\cite{Arutyunov:2002fh}. 
We then show that, as in the previous cases in the literature, the four-derivative terms in the effective lagrangian can be re-expressed in terms of two- and zero-derivative terms, so the lagrangian is of $\\sigma$-model type. We also show how the resulting quartic lagrangian takes a rather simple form, after the dramatic simplification coming from adding the different contributions. Finally, we will verify that the result for the strongly coupled four-point amplitude splits into a free and an interacting piece, which has the structure predicted by the insertion procedure \\cite{Arutyunov:2002fh,Intriligator:1998ig}. This phenomenon has also been observed in all other four-point functions involving superconformal primary operators, and is a highly non-trivial result as there is no argument supporting this splitting in the gravitational theory. This result then serves as further evidence for the AdS\/CFT correspondence.\n\nThe plan of this paper is as follows. In section \\ref{sec:structure} we consider the general structure of the four-point amplitude of $1\/2$-BPS operators using the different symmetries (i.e. conformal, crossing and $R$-symmetry) and we see that the dependence is contained in four functions of the conformally invariant ratios. In section \\ref{sec:insertion}, we introduce further constraints on the interacting piece from the insertion procedure, which reduce the number of independent functions from four to one. Section \\ref{sec:chiralpsdiffweight} is devoted to the evaluation of the four-point function of interest in the supergravity approximation. Some technical details are postponed to the appendices, including the derivation of the quartic lagrangian and the novel method for computing the effective interaction vertices coming from integrals on $S^5$. In section \\ref{sec:verifying} we analyse the supergravity result in the light of the predictions obtained from the CFT side, and verify that indeed the supergravity-induced amplitude splits into a free and an interacting piece. We also reveal a puzzling result pertaining to one of the coefficient functions entering the amplitude. Finally, section \\ref{sec:conclusions} summarises our results and presents some interesting problems that could be addressed in the future.\n\\section{General Structure of the Four-Point Function}\n\\label{sec:structure}\nThe general structure of the process we are considering is constrained by $R$-symmetry and crossing symmetry. In this paper we are concerned with four-point functions of $1\/2$-BPS superconformal primaries of $\\mathcal{N}=4$ supersymmetric Yang-Mills theory. The canonically normalised operators \\cite{Lee:1998bxa} with conformal dimension $\\Delta=k$ are given by\n\\begin{equation}\n\\mathcal{O}_k^{I}(\\vec{x})=\\frac{(2\\pi)^k}{\\sqrt{k\\lambda^k}}C_{i_1\\cdots i_k}^{I}\\mathrm{tr}(\\varphi^{i_1}(\\vec{x})\\cdots \\varphi^{i_k}(\\vec{x}))\n\\label{normCPOk}\n\\end{equation}\nwhere $C_{i_1\\cdots i_k}^I$ are totally symmetric traceless $SO(6)$ tensors of rank $k$ and the index $I$ runs over a basis of a representation of $SO(6)$ specified by $k$. The four-point function we wish to study has the form\n\\begin{equation}\n\\langle \\mathcal{O}^{I_1}_{2}(\\vec{x}_1) \\mathcal{O}^{I_2}_{2}(\\vec{x}_2) \\mathcal{O}^{I_3}_{n}(\\vec{x}_3) \\mathcal{O}^{I_4}_{n}(\\vec{x}_4)\\rangle\n\\label{diffweightprocess}\n\\end{equation}\nThe content of the OPEs is given by operators in the representations arising in the tensor product of the $SU(4)$ representations $[0,2,0]$ and $[0,n,0]$. 
This is\n\\begin{equation}\n\\label{4pdiffweightreps}\n\\langle \\mathcal{O}_{2}(\\vec{x}_1)\\mathcal{O}_{2}(\\vec{x}_2)\\mathcal{O}_{n}(\\vec{x}_3)\\mathcal{O}_{n}(\\vec{x}_4)\\rangle\n\\in [0,2,0] \\otimes [0,2,0] \\otimes [0,n,0] \\otimes [0,n,0]\n\\end{equation}\nwhere\n\\begin{equation}\n[0,n,0]\\otimes[0,n,0]=\\sum_{k=0}^{n}\\sum_{l=0}^{n-k}[l,2n-2l-2k,l] \n\\end{equation}\nAll the OPE channels with $l=0,1$ contain only short and semishort operators. We now follow the ideas and methods in \\cite{Arutyunov:2002fh}. An appropriate basis to study the content of a four-point function is given by the \\emph{propagator basis} arising in free field theory. Recall that the propagator for scalar fields is given by\n\\begin{equation}\n\\label{scalarpropYM}\n\\langle \\varphi^{i}(\\vec{x}_1)\\varphi^j(\\vec{x}_2)\\rangle=\\frac{\\delta^{ij}}{|\\vec{x}_{12}|^2}\n\\end{equation}\nLet us introduce the harmonic (complex) variables $u^i$ satisfying the following constraints\n\\begin{equation}\nu_iu_i=0 \\qquad \\qquad u_i\\bar{u}_i=1\n\\end{equation}\nThese variables parametrise the coset $SO(6)\/SO(2)\\times SO(4)$ so that under an $SO(6)$ transformation, the highest weight vector representation transforms as $u^{i_1}\\cdots u^{i_n}$, so projections onto representations $[0,n,0]$ can be achieved by writing\n\\begin{equation}\n\\mathcal{O}^{(n)}=u_{i_1}\\cdots u_{i_n}\\mathrm{tr}(\\varphi^{i_1}\\cdots \\varphi^{i_n})\n\\end{equation}\nwith $(n)$ denoting the highest weight of the representation $[0,n,0]$. Scalar fields can also be projected\n\\begin{equation}\n\\varphi^{i_1}(\\vec{x}_1)=\\varphi(1)\\bar{u}_1^{i_1}\n\\end{equation} \nso (\\ref{scalarpropYM}) can be rewritten as\n\\begin{equation}\n\\langle \\varphi(1)\\varphi(2) \\rangle =\\frac{ {u_1}^{i_1} {u_2}^{i_2}\\delta^{i_1 i_2}}{|\\vec{x}_{12}|^2}\n=\\frac{ ({u_1}^{i_1} {u_2}^{i_2})}{|\\vec{x}_{12}|^2}\n\\end{equation}\nWe can now construct four-point functions by connecting pairs of points by propagators. For the case in hand, the amplitude will have $n+2$ contractions, so the propagator basis for (\\ref{4pdiffweightreps}) is determined from six graphs belonging to four equivalence classes, as depicted in figure \\ref{colourbasis}. \n\\begin{figure}[ht]\n\\begin{center}\n\\resizebox{60mm}{100mm}{\\input{colourd.pdftex_t}}\n\\end{center}\n\\caption{Propagator basis for the process $\\langle \\mathcal{O}_{2}(\\vec{x}_1) \\mathcal{O}_{2}(\\vec{x}_2) \\mathcal{O}_{n}(\\vec{x}_3)\\mathcal{O}_{n}(\\vec{x}_4)\\rangle$. The graphs are arranged in four equivalence classes. 
The symbol $n$ stands\nfor the $n$ propagators coming out from the corresponding vertices.}\n\\label{colourbasis}\n\\end{figure}\nEach of the propagator structures can be multiplied by an arbitrary function of the conformally invariant ratios $u$ and $v$\n\\begin{equation}\nu=\\frac{|\\vec{x}_{12}|^2|\\vec{x}_{34}|^2}{|\\vec{x}_{13}|^2|\\vec{x}_{24}|^2} \\qquad \\qquad\nv=\\frac{|\\vec{x}_{14}|^2|\\vec{x}_{23}|^2}{|\\vec{x}_{13}|^2|\\vec{x}_{24}|^2} \n\\label{crossradii}\n\\end{equation}\nHence, the most general four-point amplitude with the required transformation properties is given by\n\\begin{eqnarray}\n&&\\langle \\mathcal{O}_{2}(\\vec{x}_1) \\mathcal{O}_{2}(\\vec{x}_2) \\mathcal{O}_{n}(\\vec{x}_3)\\mathcal{O}_{n}(\\vec{x}_4)\\rangle=\na(u,v)\\frac{({u_1}^{i_1}{u_2}^{i_2})^2({u_3}^{i_3}{u_4}^{i_4})^n}{|\\vec{x}_{12}|^4|\\vec{x}_{34}|^{2n}}\n\\nonumber \\\\\n&+&b_1(u,v)\\frac{({u_1}^{i_1}{u_2}^{i_2})({u_3}^{i_3}{u_4}^{i_4})^{n-1}({u_1}^{i_1}{u_3}^{i_3})({u_2}^{i_2}{u_4}^{i_4})}{|\\vec{x}_{12}|^2|\\vec{x}_{34}|^{2(n-1)}|\\vec{x}_{13}|^2|\\vec{x}_{24}|^2}\n\\nonumber \\\\\n&+&b_2(u,v)\\frac{({u_1}^{i_1}{u_2}^{i_2})({u_3}^{i_3}{u_4}^{i_4})^{n-1}({u_1}^{i_1}{u_4}^{i_4})({u_2}^{i_2}{u_3}^{i_3})}{|\\vec{x}_{12}|^2|\\vec{x}_{34}|^{2(n-1)}|\\vec{x}_{14}|^2|\\vec{x}_{23}|^2}\n\\nonumber \\\\\n&+&c_1(u,v)\\frac{({u_1}^{i_1}{u_3}^{i_3})^2({u_2}^{i_2}{u_4}^{i_4})^2({u_3}^{i_3}{u_4}^{i_4})^{n-2}}{|\\vec{x}_{34}|^{2(n-2)}|\\vec{x}_{13}|^{4}|\\vec{x}_{24}|^{4}}\n+c_2(u,v)\\frac{({u_1}^{i_1}{u_4}^{i_4})^2({u_2}^{i_2}{u_3}^{i_3})^2({u_3}^{i_3}{u_4}^{i_4})^{n-2}}{|\\vec{x}_{34}|^{2(n-2)}|\\vec{x}_{14}|^{4}|\\vec{x}_{23}|^{4}}\n\\nonumber \\\\\n&+&d(u,v)\\frac{({u_1}^{i_1}{u_3}^{i_3})({u_2}^{i_2}{u_4}^{i_4})({u_2}^{i_2}{u_3}^{i_3})({u_1}^{i_1}{u_4}^{i_4})({u_3}^{i_3}{u_4}^{i_4})^{n-2}}{|\\vec{x}_{13}|^2|\\vec{x}_{24}|^{2}|\\vec{x}_{23}|^{2}|\\vec{x}_{14}|^2|\\vec{x}_{34}|^{2(n-2)}}\n\\label{structure4p}\n\\end{eqnarray}\nPermutation symmetries under exchange of $1\\leftrightarrow 2$ and $3 \\leftrightarrow 4$ reduce the number of coefficient functions to four since\n\\begin{eqnarray}\na(u,v)&=&a(u\/v,1\/v) \\nonumber \\\\\nb_2(u,v)&=&b_1(u\/v,1\/v) \\nonumber \\\\\nc_2(u,v)&=&c_1(u\/v,1\/v) \\nonumber \\\\\nd(u,v)&=&d(u\/v,1\/v)\n\\end{eqnarray}\nThe harmonic variables in (\\ref{structure4p}) can be re-expressed in terms of $SO(6)$ $C$-tensors (Appendix \\ref{sec:SphereInts}) as\n\\begin{eqnarray}\n&&\\langle \\mathcal{O}_{2}^{I_1}(\\vec{x}_1) \\mathcal{O}_{2}^{I_2}(\\vec{x}_2) \\mathcal{O}_{n}^{I_3}(\\vec{x}_3)\\mathcal{O}_{n}^{I_4}(\\vec{x}_4)\\rangle=\na(u,v)\\frac{\\delta^{I_1 I_2}_2\\delta^{I_3 I_4}_n}{|\\vec{x}_{12}|^4|\\vec{x}_{34}|^{2n}}\n+b_1(u,v)\\frac{C^{I_1I_2I_3I_4}}{|\\vec{x}_{12}|^2|\\vec{x}_{34}|^{2(n-1)}|\\vec{x}_{13}|^2|\\vec{x}_{24}|^2}\n\\nonumber \\\\\n&+&b_2(u,v)\\frac{C^{I_1I_2I_4I_3}}{|\\vec{x}_{12}|^2|\\vec{x}_{34}|^{2(n-1)}|\\vec{x}_{14}|^2|\\vec{x}_{23}|^2}\n+c_1(u,v)\\frac{\\Upsilon^{I_1I_2I_3I_4}}{|\\vec{x}_{34}|^{2(n-2)}|\\vec{x}_{13}|^{4}|\\vec{x}_{24}|^{4}}\n\\nonumber \\\\\n&+&c_2(u,v)\\frac{\\Upsilon^{I_1I_2I_4I_3}}{|\\vec{x}_{34}|^{2(n-2)}|\\vec{x}_{14}|^{4}|\\vec{x}_{23}|^{4}}\n+d(u,v)\\frac{S^{I_1I_2I_3I_4}}{|\\vec{x}_{13}|^2|\\vec{x}_{24}|^{2}|\\vec{x}_{23}|^{2}|\\vec{x}_{14}|^2|\\vec{x}_{34}|^{2(n-2)}}\n\\label{structure4pCten}\n\\end{eqnarray}\nIt is possible to compute the value of the coefficient functions using free field theory in the large $N$ limit (i.e. contributions from planar diagrams only). 
This was done in \\cite{Rayson:2007th} and the results are reproduced here\\footnote{The coefficient of the disconnected piece is set to one as a consequence of the normalisation choice for the two-point functions.}\n\\begin{equation}\na=1 \\qquad b_i=\\frac{2n}{N^2} \\qquad c_i=\\frac{n(n-1)}{2N^2}\\left(\\frac{X_{i_1\\cdots i_{n-2}kk}X_{j_1\\cdots j_{n-2}ll}}{X_{m_1\\cdots m_n}X_{m_1\\cdots m_n}}\\right) \\qquad d=\\frac{2n(n-1)}{N^2}\n\\label{largeNfreecoeff}\n\\end{equation}\nwhere $X_{i_1\\cdots i_n}$ is a totally symmetric rank $n$ colour tensor, so that the value of $c_i$ depends on a non-trivial tensor calculation\\footnote{For $n=2$, $c_i=1$ and for $n=3$, $c_i=0$. For $n\\geq 4$ it was shown in \\cite{Rayson:2007th} that the value of $c_i$ can be approximated as\n\\begin{equation}\nc_i\\simeq \\frac{2n(n-2)}{N^2}\\simeq (n-2)b_i \n\\end{equation}\n}. Notice also that $d=(n-1)b_i$ for any value of $n$ and $N$. \n\\section{The Insertion Formula}\n\\label{sec:insertion}\nWe now follow the ideas developed in \\cite{Arutyunov:2002fh} to restrict the dynamical piece of the four-point function. The derivative with respect to the coupling $g_{YM}^2$ of the amplitude (\\ref{diffweightprocess}) can be expressed as (see also \\cite{Intriligator:1998ig})\n\\begin{equation}\n\\frac{\\partial}{\\partial g_{YM}^2}\\langle \\mathcal{O}_{2}\\mathcal{O}_{2}\\mathcal{O}_{n}\\mathcal{O}_{n}\\rangle \n\\propto \\int d^{4}\\vec{x_0} d^{4}\\theta_0\n\\langle \\mathcal{O}_\\tau(\\vec{x_0})\\mathcal{O}_{2}\\mathcal{O}_{2}\\mathcal{O}_{n}\\mathcal{O}_{n}\\rangle \n\\end{equation}\nThe integration above is consistent with supersymmetry, as the $\\theta$-expansion for the case of $\\mathcal{O}_2$ terminates at four $\\theta$'s, and one can show that the five-point function on the right-hand side of the previous expression gives rise to a nilpotent superconformal covariant. By following this procedure, in which we insert an additional \\emph{ultrashort} operator, it is possible to extract more information about the four-point function we have been studying. As the construction of nilpotent covariants is of a technical nature, we refer to \\cite{Arutyunov:2002fh} for references and the derivation of the results reproduced below.\n\nThe key idea is to assume that the nilpotent covariant must have the following form\n\\begin{equation}\n\\label{inserting}\n\\langle \\mathcal{O}_\\tau(\\vec{x_0})\\mathcal{O}_{2}\\mathcal{O}_{2}\\mathcal{O}_{n}\\mathcal{O}_{n}\\rangle=R^{2222}(\\theta_0)^4F^{00n-2n-2}(\\vec{x_0},\\cdots,\\vec{x}_4,u_1,\\cdots,u_4)\n\\end{equation}\nso the five-point function is factorised into a kernel with weight $2$ and an additional factor carrying the remaining $SO(6)$ quantum numbers, such that at each point the weight is $k_i'=k_i-2$. Note here that the Grassmann factor $(\\theta_0)^4$ carries the full harmonic dependence at the insertion point. 
The relevant expressions are given by\n\\begin{eqnarray}\n\\label{Rkernel}\nR^{2222}&=&u \\frac{({u_1}^{i_1}{u_2}^{i_2})^2({u_3}^{i_3}{u_4}^{i_4})^2}{|\\vec{x}_{12}|^2|\\vec{x}_{34}|^2}+(v-u-1)\\frac{({u_1}^{i_1}{u_2}^{i_2})({u_3}^{i_3}{u_4}^{i_4})({u_1}^{i_1}{u_3}^{i_3})({u_2}^{i_2}{u_4}^{i_4})}{|\\vec{x}_{12}|^2|\\vec{x}_{34}|^2|\\vec{x}_{13}|^2|\\vec{x}_{24}|^2}\n\\nonumber \\\\\n&+&(1-u-v)\n\\frac{({u_1}^{i_1}{u_2}^{i_2})({u_3}^{i_3}{u_4}^{i_4})({u_1}^{i_1}{u_4}^{i_4})({u_2}^{i_2}{u_3}^{i_3})}{|\\vec{x}_{12}|^2|\\vec{x}_{34}|^2|\\vec{x}_{14}|^2|\\vec{x}_{23}|^2}+\n \\frac{({u_1}^{i_1}{u_3}^{i_3})^2({u_2}^{i_2}{u_4}^{i_4})^2}{|\\vec{x}_{13}|^4|\\vec{x}_{24}|^4}\n \\nonumber \\\\\n&+& \\frac{({u_1}^{i_1}{u_4}^{i_4})^2({u_2}^{i_2}{u_3}^{i_3})^2}{|\\vec{x}_{14}|^4|\\vec{x}_{23}|^4}+(u-v-1)\\frac{({u_1}^{i_1}{u_3}^{i_3})({u_1}^{i_1}{u_4}^{i_4})({u_2}^{i_2}{u_4}^{i_4})({u_2}^{i_2}{u_3}^{i_3})}{|\\vec{x}_{13}|^2|\\vec{x}_{14}|^2|\\vec{x}_{23}|^2|\\vec{x}_{24}|^2}\n\\end{eqnarray}\nand\n\\begin{equation}\nF^{0 k_1'k_2'k_3'}=\n\\left(\\frac{{u_2}^{i_2}{u_3}^{i_3}}{|\\vec{x}_{23}|^2}\\right)^{\\frac{1}{2}(k_1+k_2-k_3-2)}\n\\left(\\frac{{u_2}^{i_2}{u_4}^{i_4}}{|\\vec{x}_{24}|^2}\\right)^{\\frac{1}{2}(k_1+k_3-k_2-2)}\n\\left(\\frac{{u_3}^{i_3}{u_4}^{i_4}}{|\\vec{x}_{34}|^2}\\right)^{\\frac{1}{2}(k_2+k_3-k_1-2)}f(\\vec{x}_0,\\cdots, \\vec{x}_4)\n\\end{equation}\nSubstitution of these expressions into (\\ref{inserting}) and integration over the Grassmann variable $\\theta_0$ leads to the following dependence on the coupling of the four-point function (\\ref{diffweightprocess})\n\\begin{eqnarray}\n\\label{insertionres}\n&&\\frac{\\partial}{\\partial g_{YM}^2}\\langle \\mathcal{O}_{2}^{I_1}\\mathcal{O}_{2}^{I_2}\\mathcal{O}_{n}^{I_3}\\mathcal{O}_{n}^{I_4}\\rangle=u G(u,v)\\frac{\\delta_{2}^{I_1I_2}\\delta_{n}^{I_3I_4}}{|\\vec{x}_{12}|^{4}|\\vec{x}_{34}|^{2n}}+\n(v-u-1)G(u,v)\\frac{C^{I_1I_2I_3I_4}}{|\\vec{x}_{12}|^2|\\vec{x}_{34}|^{2(n-1)}|\\vec{x}_{13}|^2|\\vec{x}_{24}|^2}\n\\nonumber \\\\\n&&+(1-u-v)G(u,v)\\frac{C^{I_1I_2I_4I_3}}{|\\vec{x}_{12}|^2|\\vec{x}_{34}|^{2(n-1)}|\\vec{x}_{14}|^2|\\vec{x}_{23}|^2}+G(u,v)\\frac{\\Upsilon^{I_1I_2I_3I_4}}{|\\vec{x}_{34}|^{2(n-2)}|\\vec{x}_{13}|^{4}|\\vec{x}_{24}|^{4}}\n\\nonumber \\\\\n&&+vG(u,v)\\frac{\\Upsilon^{I_1I_2I_4I_3}}{|\\vec{x}_{34}|^{2(n-2)}|\\vec{x}_{14}|^{4}|\\vec{x}_{23}|^{4}}+(u-v-1)G(u,v)\\frac{S^{I_1I_2I_3I_4}}{|\\vec{x}_{13}|^2|\\vec{x}_{24}|^{2}|\\vec{x}_{23}|^{2}|\\vec{x}_{14}|^2|\\vec{x}_{34}|^{2(n-2)}}\\nonumber \\\\\n\\end{eqnarray}\nwith\n\\begin{equation}\nG(u,v)=\\int d^4 \\vec{x}_0 f(\\vec{x}_0,\\cdots, \\vec{x}_4)\n\\end{equation}\nSo comparing (\\ref{insertionres}) with (\\ref{structure4pCten}) one realises that the amplitude depends on a single function $\\mathcal{F}(u,v)$, satisfying\n\\begin{eqnarray}\na(u,v)&=&u\\mathcal{F}(u,v) \\nonumber \\\\\nb_1(u,v)&=&(v-u-1)\\mathcal{F}(u,v)\\nonumber \\\\\nb_2(u,v)&=&(1-u-v)\\mathcal{F}(u,v)\\nonumber \\\\\nc_1(u,v)&=&\\mathcal{F}(u,v)\\nonumber \\\\\nc_2(u,v)&=&v\\mathcal{F}(u,v)\\nonumber \\\\\nd(u,v)&=&(u-v-1)\\mathcal{F}(u,v)\n\\label{insertrels}\n\\end{eqnarray}\nThis is a (partial) non-renormalisation theorem for the structure of the amplitude (i.e. a dynamical constraint). 
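As a consistency check — using nothing beyond the relations displayed above — note that (\\ref{insertrels}) is compatible with the permutation symmetries of section \\ref{sec:structure}. From $a(u,v)=a(u\/v,1\/v)$ and $a=u\\mathcal{F}$ one deduces the crossing property\n\\begin{equation}\n\\mathcal{F}(u\/v,1\/v)=v\\,\\mathcal{F}(u,v)\n\\end{equation}\nand the remaining relations are then automatically consistent; for instance\n\\begin{equation}\nb_2(u,v)=b_1(u\/v,1\/v)=\\Big(\\frac{1}{v}-\\frac{u}{v}-1\\Big)\\mathcal{F}(u\/v,1\/v)=(1-u-v)\\,\\mathcal{F}(u,v),\n\\qquad\nc_2(u,v)=c_1(u\/v,1\/v)=v\\,\\mathcal{F}(u,v)\n\\end{equation}\nin agreement with (\\ref{insertrels}). 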
Verification of this dynamical constraint in the supergravity calculation therefore constitutes an indirect test of the AdS\/CFT correspondence.\n\\section{Supergravity Calculation}\n\\label{sec:chiralpsdiffweight}\nThe precise relation between the operators in the gauge theory and the fields in the bulk was established in \\cite{Witten:1998qj,Gubser:1998bc} and refined in \\cite{Mueck:1999kk,Mueck:1999gi,Bena:1999jv}. The proposition is \n \\begin{equation}\n\\langle \\exp\\{ \\int d^{4}x \\phi_{0}(\\vec{x})\\mathcal{O}(\\vec{x})\\}\\rangle_{CFT}=\\exp \\{ -S_{IIB}[\\phi_{0}(\\vec{x})] \\}\n\\label{prescriptionadscft}\n\\end{equation}\nOn the left-hand side of (\\ref{prescriptionadscft}) the field $\\phi_0(\\vec{x})$, which stands for the boundary value of the bulk field $\\phi(z_0,\\vec{x})$, is a source for the operator $\\mathcal{O}(\\vec{x})$, and the expectation value is computed by expanding the exponential and evaluating the correlation functions in the field theory. On the right-hand side, one has the generating functional encompassing all dynamical processes of IIB strings on $AdS_5\\times S^{5}$. In the supergravity approximation, $S_{IIB}$ is just the type IIB supergravity action on $AdS_5\\times S^5$, and it is assumed here that all the bulk fields $\\phi(z_0,\\vec{x})$ have appropriate boundary behaviour so they source the YM operators on the left-hand side. Hence, in practice, one first finds the boundary data for the corresponding gravitational fields and then computes correlation functions as functions of these (on-shell) values, by functional differentiation.\n\nGiven that we are interested in computing correlation functions of superconformal primaries, we first need to identify the bulk fields whose boundary values serve as sources. From looking at the representations, we see that the fields dual to superconformal primaries are obtained from mixtures of modes of the graviton and the five-form on the $S^5$ \\cite{Kim:1985ez} and are denoted as $s_k^I$, with $I$ running over the basis of the corresponding $SO(6)$ irrep. with Dynkin labels $[0,k,0]$. The four-point function can then be determined from the expression\n\\begin{equation}\n\\langle \\mathcal{O}^{I_1}_{k_1}(\\vec{x}_{1})\\mathcal{O}^{I_2}_{k_2}(\\vec{x}_{2})\\mathcal{O}^{I_3}_{k_3}(\\vec{x}_{3})\\mathcal{O}^{I_4}_{k_4}(\\vec{x}_{4})\\rangle=\n\\frac{\\delta}{\\delta s_{k_1}^{I_{1}}(\\vec{x}_{1})}\\frac{\\delta}{\\delta s_{k_2}^{I_{2}}(\\vec{x}_{2})}\n\\frac{\\delta}{\\delta s_{k_3}^{I_{3}}(\\vec{x}_{3})}\\frac{\\delta}{\\delta s_{k_4}^{I_{4}}(\\vec{x}_{4})}(-S_{IIB})\n\\end{equation}\n\\subsection{On-Shell Lagrangian}\n\\label{subsec:onshelldiffCPO}\nWe are interested in computing (\\ref{diffweightprocess}) in strongly coupled $\\mathcal{N}=4$ SYM theory, using the supergravity approximation. The prescription (\\ref{prescriptionadscft}) indicates that we need to evaluate the on-shell value of the five-dimensional effective action of compactified type IIB supergravity on $AdS_5\\times S^5$. We write this action as\n\\begin{equation}\nS=\\frac{N^{2}}{8\\pi^2}\\int [dz] \\left(\\mathcal{L}_2+\\mathcal{L}_3+\\mathcal{L}_4\\right)\n\\end{equation}\nwhich involves the sum of quadratic, cubic and quartic terms. 
The normalisation of the action can be derived from expressing the ten dimensional gravitational coupling as $2\\kappa_{10}^2=(2\\pi)^7g_s^2\\alpha'^4$ and using the volume of $S^5$ to get the five dimensional gravitational coupling\n\\begin{equation}\n\\label{normaction5d}\n\\frac{1}{2\\kappa_{5}^2}=\\frac{\\mathrm{Vol}(S^5)}{2\\kappa_{10}^2}=\\frac{N^2}{8\\pi^2l^3}\n\\end{equation}\nwith $l$ being the $AdS_5$ radius, which will be set to one. The quadratic terms \\cite{Kim:1985ez,Arutyunov:1998hf} read\n\\begin{eqnarray}\n\\mathcal{L}_{2}&=&\\frac{1}{4}(D_{\\mu}{s_{2}}^{1}D^{\\mu}{s_{2}}^{1}-4{s_{2}}^{1}{s_{2}}^{1})\n +\\frac{1}{4}(D_{\\mu}{s_{n}}^{1}D^{\\mu}{s_{n}}^{1}+n(n-4){s_{n}}^{1}{s_{n}}^{1})\n \\nonumber \\\\\n &+&\\frac{1}{2}({F_{\\mu \\nu,1}}^{1})^{2} +\\frac{1}{2}(({F_{\\mu \\nu,n-1}}^{1})^{2}+2n(n-2)(A^{1}_{\\mu,n-1})^{2})\n \\nonumber \\\\\n &+&\\frac{1}{4}D_{\\mu}\\phi_{\\nu \\rho,0}D^{\\mu}\\phi_{0}^{\\nu \\rho}-\\frac{1}{2}D_{\\mu}\\phi^{\\mu \\nu,0}D^{\\rho}\\phi_{\\rho \\nu,0}\n +\\frac{1}{2}D_{\\mu}\\phi^{\\nu}_{\\nu,0}D_{\\rho}\\phi^{\\mu \\rho}_{0}-\\frac{1}{4}D_{\\mu}\\phi^{\\nu}_{\\nu,0}D^{\\mu}\\phi^{\\rho}_{\\rho,0}\n \\nonumber \\\\\n &-&\\frac{1}{2}\\phi_{\\mu \\nu,0}\\phi^{\\mu \\nu}_{0}+\\frac{1}{2}(\\phi^{\\mu}_{\\mu,0})^{2}\n \\nonumber \\\\\n &+&\\frac{1}{4}D_{\\mu}\\phi_{\\nu \\rho,n-2}D^{\\mu}\\phi_{n-2}^{\\nu \\rho}-\\frac{1}{2}D_{\\mu}\\phi^{\\mu \\nu,n-2}D^{\\rho}\\phi_{\\rho \\nu,n-2}\n +\\frac{1}{2}D_{\\mu}\\phi^{\\nu}_{\\nu,n-2}D_{\\rho}\\phi^{\\mu \\rho}_{n-2}\n \\nonumber \\\\\n &-&\\frac{1}{4}D_{\\mu}\\phi^{\\nu}_{\\nu,n-2}D^{\\mu}\\phi^{\\rho}_{\\rho,n-2}\n +\\frac{(n^2-6)}{4}\\phi_{\\mu \\nu,n-2}\\phi^{\\mu \\nu}_{n-2}-\\frac{(n^2-2)}{4}(\\phi^{\\mu}_{\\mu,n-2})^{2}\n\\end{eqnarray}\nwhere $F_{\\mu \\nu,k}=\\partial_{\\mu}A_{\\nu,k}-\\partial_{\\nu}A_{\\mu,k}$, and summation over upper indices is assumed, running over the basis of the irreducible representation corresponding to the field\\footnote{We often use the notation $s_k^{I_m}\\equiv s_{k}^{m}$.}. We should point out that the fields have been rescaled in order to simplify the action. In this case, the corresponding rescaling factors are given by\n\\begin{equation}\ns_{n} \\rightarrow \\sqrt{ \\frac{(n+1)}{2^{6}n(n-1)(n+2)}} s_{n} \\qquad A_{\\mu,n-1} \\rightarrow\n2\\sqrt{\\frac{n+1}{n}} A_{\\mu,n-1}\n\\end{equation}\nand all symmetric tensors are left unscaled. 
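As a quick arithmetic check of (\\ref{normaction5d}) — assuming the standard relations $\\mathrm{Vol}(S^5)=\\pi^3 l^5$ and $l^4=4\\pi g_s N \\alpha'^2$, which are not displayed above but enter the derivation — one finds\n\\begin{equation}\n\\frac{\\mathrm{Vol}(S^5)}{2\\kappa_{10}^2}=\\frac{\\pi^3 l^5}{(2\\pi)^7 g_s^2\\alpha'^4}\n=\\frac{\\pi^3 l^5}{(2\\pi)^7}\\,\\frac{(4\\pi N)^2}{l^8}\n=\\frac{16\\pi^5 N^2}{128\\,\\pi^7 l^3}=\\frac{N^2}{8\\pi^2 l^3}\n\\end{equation}\nsince $g_s^2\\alpha'^4=l^8\/(4\\pi N)^2$, in agreement with the normalisation of the action used throughout. 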
The cubic couplings \\cite{Arutyunov:1999en, Lee:1998bxa, Lee:1999pj} are given by\n\\begin{eqnarray}\n\\mathcal{L}_{3}&=&-\\frac{1}{3}\\langle C^{1}_{2}C^{2}_{2}C^{3}_{[0,2,0]}\\rangle s_{2}^{1}s_{2}^{2}s_{2}^{3}\n -\\frac{n(n-1)}{2}\\langle C^{1}_{n}C^{2}_{n}C^{3}_{[0,2,0]} \\rangle s_{n}^{1}s_{n}^{2}s_{2}^{3}\n \\nonumber \\\\\n &-&\\frac{1}{4}\\left(D^{\\mu}s_{2}^{1}D^{\\nu}s_{2}^{1}\\phi_{\\mu \\nu,0}-\\frac{1}{2}(D^{\\mu}s_{2}^{1}D_{\\mu}s_{2}^{1}\n -4s_{2}^{1}s_{2}^{1})\\phi^{\\nu}_{\\nu,0} \\right)\n \\nonumber \\\\\n &-&\\frac{1}{4}\\left(D^{\\mu}s_{n}^{1}D^{\\nu}s_{n}^{1}\\phi_{\\mu \\nu,0}-\\frac{1}{2}(D^{\\mu}s_{n}^{1}D_{\\mu}s_{n}^{1}\n +n(n-4)s_{n}^{1}s_{n}^{1})\\phi^{\\nu}_{\\nu,0} \\right)\n \\nonumber \\\\\n &-&\\frac{1}{2}\\langle C_{2}^{1}C_{n}^{1}C^{3}_{[0,n-2,0]}\\rangle \\left(D^{\\mu}s_{2}^{1}D^{\\nu}s_{n}^{1}\\phi_{\\mu \\nu,n-2}\n -\\frac{1}{2}(D^{\\mu}s_{2}^{1}D_{\\mu}s_{n}^{1}-2n s_{2}^{1}s_{n}^{1})\\phi^{\\nu}_{\\nu,n-2} \\right)\n \\nonumber \\\\\n &-&\\langle C^{1}_{2}C^{2}_{2}C^{3}_{[1,0,1]}\\rangle s_{2}^{1}D^{\\mu}s_{2}^{2}A_{\\mu,1}^{3}\n -\\frac{n}{2}\\langle C^{1}_{n}C^{2}_{n}C^{3}_{[1,0,1]}\\rangle s_{n}^{1}D^{\\mu}s_{n}^{2}A_{\\mu,1}^{3}\n \\nonumber \\\\\n &-&\\sqrt{\\frac{n(n-1)}{2}}\\langle C^{1}_{2}C^{2}_{n}C^{3}_{[1,n-2,1]}\\rangle s_{2}^{1}D^{\\mu}s_{n}^{2}A_{\\mu,n-1}^{3}\n -\\sqrt{\\frac{n(n-1)}{2}}\\langle C^{1}_{n}C^{2}_{2}C^{3}_{[1,n-2,1]}\\rangle s_{n}^{1}D^{\\mu}s_{2}^{2}A_{\\mu,n-1}^{3}\n \\nonumber\n\\end{eqnarray}\nAs one can see, there are different contributions to the $s$- and $t$-channels. Finally, the quartic couplings are given by\n\\begin{equation}\n\\mathcal{L}_{4}=\\mathcal{L}_{4}^{(0)}+\\mathcal{L}_{4}^{(2)}+\\mathcal{L}_{4}^{(4)}\n\\end{equation}\nwhere the superscript indicates contributions coming from zero-, two- and four-derivative terms, which are given by\n\\begin{eqnarray}\n\\mathcal{L}_{4}&=&\\mathcal{L}_{k_{1}k_{2}k_{3}k_{4}}^{(0)I_{1}I_{2}I_{3}I_{4}}s_{k_{1}}^{I_{1}}s_{k_{2}}^{I_{2}}s_{k_{3}}^{I_{3}}s_{k_{4}}^{I_{4}}+\n\\mathcal{L}_{k_{1}k_{2}k_{3}k_{4}}^{(2)I_{1}I_{2}I_{3}I_{4}}s_{k_{1}}^{I_{1}}D_{\\mu}s_{k_{2}}^{I_{2}}s_{k_{3}}^{I_{3}}D^{\\mu}s_{k_{4}}^{I_{4}}\n\\nonumber \\\\\n&+&\\mathcal{L}_{k_{1}k_{2}k_{3}k_{4}}^{(4)I_{1}I_{2}I_{3}I_{4}}s_{k_{1}}^{I_{1}}D_{\\mu}s_{k_{2}}^{I_{2}}D^{\\nu}D_{\\nu}(s_{k_{3}}^{I_{3}}D^{\\mu}s_{k_{4}}^{I_{4}})\n\\end{eqnarray}\nThe explicit form of these terms has been computed in \\cite{Arutyunov:1999fb}. \nFor our case, two of the $k_{i}$'s are equal to 2 and the other two are equal to $n$. This allows for six possible permutations, where the indices $I_{i}$ run over the basis of the representation $[0,k_{i},0]$ which is being summed over. The least trivial part of the calculation is to compute the explicit coefficients of these terms. It can be shown, however, that the relevant interactions can be reduced to a simple expression, as occurs in all the examples that have been computed previously. We refer to appendix \\ref{sec:QuarticSimp} for the details, and reproduce the final expression here\n\\begin{eqnarray}\n\\mathcal{L}_{4}&=&-\\frac{1}{4}(C^{1234}-S^{1234})s_{2}^{1}D_{\\mu}s_{2}^{2}s_{n}^{3}D^{\\mu}s_{n}^{4}\n\\nonumber \\\\\n&+&\\frac{1}{8}n(-\\delta^{12}_{2}\\delta^{34}_{n}+(6+n)C^{1234}+(3n-4)S^{1234}-n\\Upsilon^{1234})s_{2}^{1}s_{2}^{2}s_{n}^{3}s_{n}^{4}\n\\end{eqnarray}\nwhich can be shown to reproduce the $n=3$ case in \\cite{Berdichevsky:2007xd}. The quantities in this expression will be defined later. 
It should be noted that all four-derivative terms disappear, which is consistent with the fact that this is a sub-subextremal process, i.e. $k_{1}=k_{2}+k_{3}+k_{4}-4$ (indeed, for the present process $n=2+2+n-4$), as indicated in \\cite{Arutyunov:2000ima,D'Hoker:2000dm}.\n\n\nNow that the relevant terms in the lagrangian have been specified, it remains to compute its on-shell value. From the couplings, one can determine the diagrams that need to be computed. In the $s$-channel, one has a scalar exchange of $s_{2}^{I}$, a vector exchange $A^{I}_{a,[1,0,1]}$ and a graviton exchange, $\\phi_{ab,[0,0,0]}$. In the $t$-channel, one has a scalar exchange of $s_{n}^{I}$, a vector exchange $A^{I}_{a,[1,n-2,1]}$ and a massive symmetric tensor $\\phi_{ab,[0,n-2,0]}$. Finally, one has contact diagrams contributing to the process. The Witten diagrams for the $s$-channel are shown in Fig. \\ref{schanneldiffw}. The corresponding diagrams for the $t$-channel and the contact diagram are shown in Fig. \\ref{tchanneldiffw}.\n\\begin{figure}[ht]\n\\begin{center}\n\\resizebox{120mm}{40mm}{\\input{diffweights2.pdftex_t}}\n\\end{center}\n\\caption{Witten diagrams for the $s$-channel process. \\emph{(a)} exchange by a scalar with $m^2=-4$ \\emph{(b)} exchange by a massless vector \\emph{(c)} graviton exchange}\n\\label{schanneldiffw}\n\\end{figure}\n\\begin{figure}[ht]\n\\begin{center}\n\\resizebox{160mm}{40mm}{\\input{diffweights.pdftex_t}}\n\\end{center}\n\\caption{Witten diagrams for the $t$-channel process. \\emph{(a)} exchange by a scalar of mass $m^2=\\Delta(\\Delta-4)$ \\emph{(b)} exchange by a vector of mass $m_{k}^2=k^2-1$ \\emph{(c)} exchange by a tensor field of mass $f_k=k(k+4)$ \\emph{(d)} contact diagram.}\n\\label{tchanneldiffw}\n\\end{figure}\n\nIt is convenient to introduce the currents\n\\begin{eqnarray}\nT_{\\mu\\nu}&=&D_{(\\mu}s_{k_{1}}D_{\\nu)}s_{k_{2}}-\\frac{1}{2}g_{\\mu\\nu}\\left(D^{\\rho}s_{k_{1}}D_{\\rho}s_{k_{2}}+\\frac{1}{2}(m^2_{k_{1}}+m^2_{k_{2}}-k_{3}(k_{3}+4))s_{k_{1}}s_{k_{2}}\\right) \\nonumber \\\\\nJ_{\\mu}&=&s_{k_{1}}D_{\\mu}s_{k_{2}}-s_{k_{2}}D_{\\mu}s_{k_{1}}\n\\end{eqnarray}\nwhere $k_{1},k_{2},k_{3}$ are the conformal weights of the corresponding scalar operators, and the primaries here carry the appropriate weight depending on the channel one is considering. One then represents the solution to the equations of motion in the form\n\\begin{equation}\ns_{k}=s_{k}^{0}+\\tilde{s}_{k} \\qquad A_{\\mu}=A_{\\mu}^{0}+\\tilde{A}_{\\mu} \\qquad \\phi_{\\mu\\nu}=\\phi_{\\mu\\nu}^{0}+\\tilde{\\phi}_{\\mu\\nu}\n\\end{equation}\nwhere $s^{0}_{k}$, $A_{\\mu}^{0}$ and $\\phi_{\\mu\\nu}^{0}$ are solutions to the linearised equations with fixed boundary conditions and $\\tilde{s}_{k}$, $\\tilde{A}_{\\mu}$ and $\\tilde{\\phi}_{\\mu\\nu}$ represent the fields in the AdS bulk with vanishing boundary conditions. It is then possible to express these fields in terms of an integral over the bulk involving the corresponding Green function. 
For the $s$-channel process one needs\n\\begin{eqnarray}\n\\tilde{s}_{2}^{5}(w)&=&2\\langle C^{1}_{2}C^{2}_{2}C^{5}_{[0,2,0]}\\rangle \\int [dz] G_{2}(z,w) s^{1}_{2}(z)s^{2}_{2}(z) +n(n-1)\\langle C^{3}_{n}C^{4}_{n}C^{5}_{[0,2,0]}\\rangle\\int [dz] G_{2}(z,w)s^{3}_{n}(z)s^{4}_{n}(z) \n\\nonumber \\\\\n\\tilde{A}_{\\mu,1}^{5}(w)&=&\\frac{1}{4}\\langle C^{1}_{2}C^{2}_{2}C^{5}_{[1,0,1]} \\rangle \\int [dz] {G_{\\mu}}^{\\nu}(z,w)J_{\\nu}(z) \n\\nonumber \\\\\n\\tilde{\\phi}^{5}_{\\mu\\nu,0}(w)&=&\n\\frac{1}{4}\\langle C^{1}_{2}C^{2}_{2}C^{5}_{[0,0,0]} \\rangle\\int [dz]G_{\\mu\\nu\\mu'\\nu'}(z,w)T^{\\mu'\\nu'}(z)\n\\end{eqnarray}\nwhere the $z$-integral is done at the vertex involving the $\\mathcal{O}_{2}$'s. For the $t$-channel process, the bulk fields couple to a $\\Delta=2$ primary and a $\\Delta=n$ primary, so the $z$-integrals read\n\\begin{eqnarray}\n\\tilde{s}_{n}^{5}(w)&=&2n(n-1)\\langle C^{1}_{2}C^{3}_{n}C^{5}_{[0,n,0]}\\rangle \\int [dz] G_{n}(z,w) s^{1}_{2}(z)s^{3}_{n}(z) \n\\nonumber \\\\\n\\tilde{A}_{\\mu,n-1}^{5}(w)&=&\\frac{1}{2}\\sqrt{\\frac{n(n-1)}{2}}\\langle C^{1}_{2}C^{3}_{n}C^{5}_{[1,n-2,1]} \\rangle \\int [dz] {G_{\\mu}}^{\\nu}(z,w)J_{\\nu}(z) \n\\nonumber \\\\\n\\tilde{\\phi}^{5}_{\\mu\\nu,n-2}(w)&=&\\frac{1}{2}\n\\langle C^{1}_{2}C^{3}_{n}C^{5}_{[0,n-2,0]} \\rangle \\int [dz]G_{\\mu\\nu\\mu'\\nu'}(z,w)T^{\\mu'\\nu'}(z)\n\\end{eqnarray}\nand the currents are defined with the appropriate weights. We will drop the tilde in the following. Using the expressions above, we arrive at the following expression for the on-shell value of the action for each of the channels we are considering. For the $s$-channel, the amplitude is determined by\n\\begin{eqnarray}\n\\mathcal{L}_{s-channel}&=&-n(n-1)\n\\langle C_{2}^{1}C_{2}^{2}C_{2}^{5}\\rangle \\langle C_{n}^{3}C_{n}^{4}C_{2}^{5}\\rangle\n\\int [dz] s^{1}_{2}(z)s^{2}_{2}(z)G(z,w)s^{3}_{n}(w)s^{4}_{n}(w) \\nonumber \\\\\n&-&\\frac{n}{2^4}\\langle C^{1}_{2}C^{2}_{2}C^{5}_{[1,0,1]} \\rangle\\langle C^{3}_{n}C^{4}_{n}C^{5}_{[1,0,1]} \\rangle \\int [dz]J^{\\mu}(z)G_{\\mu\\nu}(z,w)J^{\\nu}(w) \\nonumber \\\\\n&-&\\frac{1}{2^4}\\langle C_{2}^{1}C_{2}^{2}C_{[0,0,0]}^{5}\\rangle \\langle C_{n}^{3}C_{n}^{4}C_{[0,0,0]}^{5}\\rangle\n\\int [dz] T^{\\mu\\nu}_{22}(z)G_{\\mu\\nu\\mu'\\nu'}(z,w)T^{\\mu'\\nu'}_{nn}(w) \\label{s-channel}\n\\end{eqnarray}\nand for the $t$-channel one has\n\\begin{eqnarray}\n\\mathcal{L}_{t-channel}&=&-n^2(n-1)^2\n\\langle C_{2}^{1}C_{n}^{3}C_{n}^{5}\\rangle \\langle C_{2}^{2}C_{n}^{4}C_{n}^{5}\\rangle\n\\int [dz] s^{1}_{2}(z)s^{3}_{n}(z)G(z,w)s^{2}_{2}(w)s^{4}_{n}(w) \n\\nonumber \\\\\n&-&\\frac{n(n-1)}{2^3}\\langle C^{1}_{2}C^{3}_{n}C^{5}_{[1,n-2,1]} \\rangle\\langle C^{2}_{2}C^{4}_{n}C^{5}_{[1,n-2,1]} \\rangle \\int [dz]J^{\\mu}(z)G_{\\mu\\nu}(z,w)J^{\\nu}(w) \n\\nonumber \\\\\n&-&\\frac{1}{2^3}\n\\langle C_{2}^{1}C_{n}^{3}C_{[0,n-2,0]}^{5}\\rangle \\langle C_{2}^{2}C_{n}^{4}C_{[0,n-2,0]}^{5}\\rangle\n\\int [dz] T^{\\mu\\nu}_{2n}(z)G_{\\mu\\nu\\mu'\\nu'}(z,w)T^{\\mu'\\nu'}_{2n}(w) \\label{t-channel}\n\\end{eqnarray}\nThe expressions in brackets arise from the integrals over $S^5$ and are defined in appendix \\ref{sec:SphereInts}. We will worry about contact interactions later. So far, we see that we need to compute three Witten diagrams for each channel, involving exchanges of scalars, massless and massive gauge bosons, and massless and massive gravitons. 
In order to do so, we extend the methods developed in \\cite{D'Hoker:1999pj,D'Hoker:1999ni,Berdichevsky:2007xd} to perform the computations.\n\\subsection{Results for Exchange Integrals}\nWe now carry out the integrals and write the results in terms of $\\bar{D}$-functions, which are functions of $u$ and $v$ and are related to the more familiar $D$-functions \\cite{D'Hoker:1999pj}, defined as\n\\begin{equation}\nD_{\\Delta_1\\Delta_2\\Delta_3\\Delta_4}(\\vec{x}_1,\\vec{x}_2,\\vec{x}_3,\\vec{x}_4)=\\int [dw] \\tilde{K}_{\\Delta_1}(w,\\vec{x}_1)\\tilde{K}_{\\Delta_2}(w,\\vec{x}_2)\\tilde{K}_{\\Delta_3}(w,\\vec{x}_3)\\tilde{K}_{\\Delta_4}(w,\\vec{x}_4) \n\\end{equation}\nwhere $\\tilde{K}_{\\Delta}(w,\\vec{x})$ is the unit normalised bulk-to-boundary propagator for a scalar of conformal dimension $\\Delta$\n\\begin{equation}\n\\tilde{K}_{\\Delta}(z,\\vec{x})=\\left(\\frac{z_{0}}{z_{0}^2+(\\vec{z}-\\vec{x})^2}\\right)^{\\Delta}\n\\end{equation}\n$D_{\\Delta_1\\Delta_2\\Delta_3\\Delta_4}$ can be identified with a quartic scalar interaction (see Fig. \\ref{wittendfunc}). The relation between the $D$-functions and the $\\bar{D}$-functions, and their properties, can be found in appendix \\ref{sec:Dfunc}. \n\\begin{figure}[t]\n\\begin{center}\n\\resizebox{60mm}{38mm}{\\input{dfunction.pdftex_t}}\n\\end{center}\n\\caption{Graphic representation of a $D$-function.}\n\\label{wittendfunc}\n\\end{figure}\nLet us first introduce the following notation for the various exchange integrals that contribute to the amplitude. \n\\begin{eqnarray}\nS_{\\Delta_{1}\\Delta_{2}\\Delta_{3}\\Delta_{4}}(\\vec{x}_{1},\\vec{x}_{2},\\vec{x}_{3},\\vec{x}_{4})&=&\\int [dw] [dz]\\tilde{K}_{\\Delta_{1}}(z,\\vec{x}_{1})\\tilde{K}_{\\Delta_{2}}(z,\\vec{x}_{2})G(z,w)\\tilde{K}_{\\Delta_{3}}(w,\\vec{x}_{3})\\tilde{K}_{\\Delta_{4}}(w,\\vec{x}_{4})\n\\nonumber \\\\\nV_{\\Delta_{1}\\Delta_{2}\\Delta_{3}\\Delta_{4}}(\\vec{x}_{1},\\vec{x}_{2},\\vec{x}_{3},\\vec{x}_{4})&=&\\int [dw] [dz]\\tilde{K}_{\\Delta_{1}}(z,\\vec{x}_{1})\\buildrel\\leftrightarrow\\over{D^{\\mu}}\\tilde{K}_{\\Delta_{2}}(z,\\vec{x}_{2})\nG_{\\mu\\nu}(z,w)\\tilde{K}_{\\Delta_{3}}(w,\\vec{x}_{3})\\buildrel\\leftrightarrow\\over{D^{\\nu}}\\tilde{K}_{\\Delta_{4}}(w,\\vec{x}_{4}) \n\\nonumber \\\\\nT_{\\Delta_{1}\\Delta_{2}\\Delta_{3}\\Delta_{4}}(\\vec{x}_{1},\\vec{x}_{2},\\vec{x}_{3},\\vec{x}_{4})&=&\\int [dz] [dw] T^{\\mu\\nu}_{\\Delta_{1}\\Delta_{2}}(z,\\vec{x}_{1},\\vec{x}_{2})G_{\\mu\\nu\\mu'\\nu'}(z,w)T^{\\mu'\\nu'}_{\\Delta_{3}\\Delta_{4}}(w,\\vec{x}_{3},\\vec{x}_{4}) \n\\end{eqnarray}\nwith the bulk-to-bulk propagators appropriately chosen, depending on the particle that is being exchanged. For our case, the $s$-channel integrals yield\n\\begin{eqnarray}\nS_{22nn}(\\vec{x}_{1},\\vec{x}_{2},\\vec{x}_{3},\\vec{x}_{4})&=&\\frac{\\pi^2}{8}\\frac{1}{(n-1)\\Gamma(n)} \n\\frac{u}{{|\\vec{x}_{12}|}^{4}{|\\vec{x}_{34}|}^{2n}}\\bar{D}_{11nn}\n\\nonumber \\\\\nV_{22nn}(\\vec{x}_{1},\\vec{x}_{2},\\vec{x}_{3},\\vec{x}_{4})&=&-\\frac{\\pi^2}{4\\Gamma(n)}\\frac{u}{{|\\vec{x}_{12}|}^{4}{|\\vec{x}_{34}|}^{2n}}\n\\left\\{-2\\bar{D}_{21nn+1}+\\bar{D}_{21n+1n}+\\bar{D}_{12nn+1}\\right\\}\n\\nonumber \\\\\nT_{22nn}(\\vec{x}_{1},\\vec{x}_{2},\\vec{x}_{3},\\vec{x}_{4})&=&-\\frac{\\pi^2}{2\\Gamma(n)}\\frac{u}{{|\\vec{x}_{12}|}^{4}{|\\vec{x}_{34}|}^{2n}}\n\\left\\{\\frac{1}{3}n\\bar{D}_{11nn}-n(n-1)u\\bar{D}_{22nn}\\right. 
\n\\nonumber \\\\\n&-&\\left.n(1+v-u)\\bar{D}_{22n+1n+1}\\right\\}\n\\label{schannresults}\n\\end{eqnarray}\nand the $t$-channel amplitudes are given by\n\\begin{eqnarray}\nS_{2n2n}(\\vec{x}_{1},\\vec{x}_{3},\\vec{x}_{2},\\vec{x}_{4})&=&\\frac{\\pi^2}{8}\\frac{1}{(n-1)\\Gamma(n)}\\frac{u^2}{{|\\vec{x}_{12}|}^{4}{|\\vec{x}_{34}|}^{2n}}\\bar{D}_{12n-1n}\n\\nonumber \\\\\nV_{2n2n}(\\vec{x}_{1},\\vec{x}_{3},\\vec{x}_{2},\\vec{x}_{4})&=&-\\frac{\\pi^2}{2n\\Gamma(n)}\\frac{u^2}{{|\\vec{x}_{12}|}^{4}{|\\vec{x}_{34}|}^{2n}}\n\\left\\{-\\bar{D}_{31nn}+\\bar{D}_{12nn+1}-(n-1)\\bar{D}_{22n-1n+1}\\right.\n\\nonumber \\\\\n&+&\\left.(n-1)u\\bar{D}_{23n-1n}\\right\\}\n\\nonumber \\\\\nT_{2n2n}(\\vec{x}_{1},\\vec{x}_{3},\\vec{x}_{2},\\vec{x}_{4})&=&-\\frac{\\pi^2}{\\Gamma(n)}\\left[\\frac{n}{(n+1)(n+2)}\\right]\\frac{u^2}{{|\\vec{x}_{12}|}^{4}{|\\vec{x}_{34}|}^{2n}}\n\\left\\{2\\bar{D}_{31n+1n+1}\\right.\n\\nonumber \\\\\n&+&n(n-1)u\\bar{D}_{33n-1n+1}+\\left.2n(1-v-u)\\bar{D}_{23nn+1}\\right\\}\n\\label{rchannresults}\n\\end{eqnarray}\nwhere $u$ and $v$ were introduced in (\\ref{crossradii}).\nThese expressions are to be substituted in the action, including an overall factor of $C(n)^2C(2)^2$ where\n\\begin{equation}\nC(n)=\\begin{cases}\n \\frac{\\Gamma(n)}{\\pi^{2}\\Gamma(n-2)}, \\qquad n> 2 \\\\\n \\frac{1}{\\pi^2}, \\hspace{16mm} n=2\n\\end{cases}\n\\end{equation}\n\\subsection{Contact Diagrams}\nOne starts from the quartic lagrangian\n\\begin{eqnarray}\n\\mathcal{L}_{4}&=&-\\frac{1}{4}(C^{1234}-S^{1234})s_{2}^{1}\\nabla_{\\mu}s_{2}^{2}s_{n}^{3}\\nabla^{\\mu}s_{n}^{4}\n\\nonumber \\\\\n&+&\\frac{1}{8}n(-\\delta_{2}^{12}\\delta_{n}^{34}+(6+n)C^{1234}+(3n-4)S^{1234}\n-n\\Upsilon^{1234})s_{2}^{1}s_{2}^{2}s_{n}^{3}s_{n}^{4}\n\\label{quartic}\n\\end{eqnarray}\nWe record the useful identity\n\\begin{equation}\nD_{\\mu}K_{\\Delta_{1}}(z,\\vec{x}_{1})D^{\\mu}K_{\\Delta_{2}}(z,\\vec{x}_{2}) =\\Delta_{1}\\Delta_{2}\n(K_{\\Delta_{1}}(z,\\vec{x}_{1})K_{\\Delta_{2}}(z,\\vec{x}_{2})-2|\\vec{x}_{12}|^{2}K_{\\Delta_{1}+1}(z,\\vec{x}_{1})K_{\\Delta_{2}+1}(z,\\vec{x}_{2}))\n\\end{equation}\nUsing this expression and the definition of the $D$-functions, we see that the contribution to the amplitude from\nthe quartic lagrangian is given by\n\\begin{eqnarray}\n\\mathcal{L}_{4}&=&-\\frac{1}{4}(C^{1234}-S^{1234})(2n D_{22nn}-4n |\\vec{x}_{24}|^{2}D_{23nn+1})\n\\nonumber \\\\\n&+&\\frac{1}{8}n(-\\delta^{12}_{2}\\delta^{34}_{n}+(6+n)C^{1234}+(3n-4)S^{1234}-n\\Upsilon^{1234})D_{22nn}\n\\end{eqnarray}\nwhere again an overall factor of $C(n)^{2}C(2)^{2}$ was omitted, but should be included. We can rewrite this\nexpression in terms of the $\\bar{D}$-functions\n\\begin{eqnarray}\n\\mathcal{L}_{4}&=&\\pi^2\\frac{(C(2)C(n))^{2}}{\\Gamma(n)}\\frac{u^2}{|\\vec{x}_{12}|^{4}|\\vec{x}_{34}|^{2n}}\\left[-\\frac{n}{4}(C^{1234}-S^{1234})(\n\\bar{D}_{22nn}-\\bar{D}_{23nn+1})\\right.\n\\nonumber \\\\\n&+&\\left.\\frac{1}{2^4}n(-\\delta^{12}_{2}\\delta^{34}_{n}+(6+n)C^{1234}+(3n-4)S^{1234}-n\\Upsilon^{1234})\\bar{D}_{22nn}\\right]\n\\label{quarticfinal}\n\\end{eqnarray}\nThe final result for the on-shell action is then given by substituting the expressions for the exchange amplitudes in equations (\\ref{s-channel}) and (\\ref{t-channel}), and by equation (\\ref{quarticfinal}).\n\\subsection{Results for the Four-Point Function}\nWe collect the results for the relevant on-shell action. 
First we write down the part of the lagrangian that contributes to the four-point function of interest\n\\begin{eqnarray}\n\\mathcal{L}_{on-shell}&=&-n(n-1)\\langle C_{2}^{1}C_{2}^{2}C_{2}^{5}\\rangle \\langle C_{n}^{3}C_{n}^{4}C_{2}^{5}\\rangle\n\\int [dz] s^{1}_{2}(z)s^{2}_{2}(z)G(z,w)s^{3}_{n}(w)s^{4}_{n}(w) \n\\nonumber \\\\\n&-&n^2(n-1)^2\\langle C_{2}^{1}C_{n}^{3}C_{n}^{5}\\rangle \\langle C_{2}^{2}C_{n}^{4}C_{n}^{5}\\rangle\n\\int [dz] s^{1}_{2}(z)s^{3}_{n}(z)G(z,w)s^{2}_{2}(w)s^{4}_{n}(w) \n\\nonumber \\\\\n&-&\\frac{n}{2^4}\\langle C^{1}_{2}C^{2}_{2}C^{5}_{[1,0,1]} \\rangle \\langle C^{3}_{n}C^{4}_{n}C^{5}_{[1,0,1]} \\rangle\n\\int [dz]s^{1}_{2}(z)\\buildrel\\leftrightarrow\\over\\nabla^{\\mu}s^{2}_{2}(z)\nG_{\\mu\\nu}(z,w)s_{n}^{3}(w)\\buildrel\\leftrightarrow\\over\\nabla^{\\nu }s^{4}_{n}(w) \n\\nonumber \\\\ \n&-&\\frac{n(n-1)}{2^3}\\langle C^{1}_{2}C^{3}_{n}C^{5}_{[1,n-2,1]} \\rangle \\langle C^{2}_{2}C^{4}_{n}C^{5}_{[1,n-2,1]} \\rangle\n\\int [dz]s_{2}^{1}(z)\\buildrel\\leftrightarrow\\over\\nabla^{\\mu}s_{n}^{3}(z)\nG_{\\mu\\nu}(z,w)s^{2}_{2}(w)\\buildrel\\leftrightarrow\\over\\nabla^{\\nu}s_{n}^{4}(w) \n\\nonumber\n\\end{eqnarray}\n\\begin{eqnarray}\n&-&\\frac{1}{2^4} \\langle C^{1}_{2}C^{2}_{2}C^{5}_{[0,0,0]} \\rangle \\langle C^{3}_{n}C^{4}_{n}C^{5}_{[0,0,0]} \\rangle\n\\int [dz] T^{\\mu\\nu}_{22}(z)G_{\\mu\\nu\\mu'\\nu'}(z,w)T^{\\mu'\\nu'}_{nn}(w) \n\\nonumber \\\\\n&-&\\frac{1}{2^3}\\langle C^{1}_{2}C^{3}_{n}C^{5}_{[0,n-2,0]} \\rangle \\langle C^{2}_{2}C^{4}_{n}C^{5}_{[0,n-2,0]} \\rangle\n\\int [dz] T^{\\mu\\nu}_{2n}(z)G_{\\mu\\nu\\mu'\\nu'}(z,w)T^{\\mu'\\nu'}_{2n}(w) \n\\nonumber \\\\\n&-&\\frac{1}{2^2}(C^{1234}-S^{1234})s_{2}^{1}(w)\\nabla_{\\mu}s_{2}^{2}(w)s_{n}^{3}(w)\\nabla^{\\mu}s_{n}^{4}(w)\n\\nonumber \\\\\n&+&\\frac{1}{2^3}n\\left(-\\delta^{12}_{2}\\delta^{34}_{n}+(6+n)C^{1234}+(3n-4)S^{1234}-n\\Upsilon^{1234}\\right)s_{2}^{1}(w)s_{2}^{2}(w)s_{n}^{3}(w)s_{n}^{4}(w)\n\\end{eqnarray}\nWe now substitute the summation of overlapping $SO(6)$ tensors (see appendix \\ref{sec:SphereInts}) and use the results for the exchange integrals. 
After relabelling the indices, one finally gets the on-shell value of the action that determines the four-point function \n\\begin{eqnarray}\n\\mathcal{S}&=&\\frac{N^{2}}{8\\pi^{2}}\\frac{(n-1)^2(n-2)^2}{4\\pi^{6}\\Gamma(n)}\n\\int d^{4}\\vec{x}_{1}d^{4}\\vec{x}_{2}d^{4}\\vec{x}_{3}d^{4}\\vec{x}_{4}\ns_{2}^{1}(\\vec{x}_{1})s_{2}^{2}(\\vec{x}_{2})s_{n}^{3}(\\vec{x}_{3})s_{n}^{4}(\\vec{x}_{4})\n\\frac{u}{|\\vec{x}_{12}|^{4}|\\vec{x}_{34}|^{2n}}\\left\\{\\right.\n\\nonumber \\\\\n&+&\\delta^{12}_{2}\\delta^{34}_{n}\\frac{n}{2^{5}}\\left[\\bar{D}_{11nn}-(n+1)u\\bar{D}_{22nn}-(1+v-u)\\bar{D}_{22n+1n+1}\\right]\n\\nonumber \\\\\n&+&C^{1234}\\frac{n}{2^4}\\left[-2\\bar{D}_{11nn}-2(n-1)u\\bar{D}_{12n-1n}+(n+6)u\\bar{D}_{22nn}-2\\bar{D}_{21nn+1}+2\\bar{D}_{12nn+1}\\right.\n\\nonumber \\\\\n&-&\\left.(u\\bar{D}_{31nn}-(n-1)u^2\\bar{D}_{23n-1n})-u((n-1)\\bar{D}_{22n-1n+1}-\\bar{D}_{12nn+1})\\right]\n\\nonumber \\\\\n&+&C^{1243}\\frac{n}{2^2}\\left[u\\bar{D}_{23nn+1}-u\\bar{D}_{22nn}\\right]\n\\nonumber \\\\\n&+&\\Upsilon^{1234}\\frac{n}{2^4(n+2)}\\left[\\frac{2(n-1)^{2}(n+2)}{(n+1)}u\\bar{D}_{12n-1n}+(n-2)(u\\bar{D}_{31nn}-(n-1)u^2\\bar{D}_{23n-1n})\\right.\n\\nonumber \\\\\n&+&(n-2)u((n-1)\\bar{D}_{22n-1n+1}-\\bar{D}_{12nn+1})-n(n+2)u\\bar{D}_{22nn}\n\\nonumber \\\\\n&+&\\left.\\frac{2}{n+1}(n(n-1)u^2\\bar{D}_{33n-1n+1}+2u\\bar{D}_{31n+1n+1}+2n(1-u-v)u\\bar{D}_{23nn+1})\\right]\n\\nonumber \n\\end{eqnarray}\n\\begin{eqnarray}\n&+&S^{1234}\\frac{n}{2^4}\\left[-2(n-1)^2u\\bar{D}_{12n-1n}+3nu\\bar{D}_{22nn}\n-4u\\bar{D}_{23nn+1}\\right.\n\\nonumber \\\\\n&+&\\left.\\left.(u\\bar{D}_{31nn}-(n-1)u^2\\bar{D}_{23n-1n})+((n-1)u\\bar{D}_{22n-1n+1}-u\\bar{D}_{12nn+1})\\right]\n\\right\\}\n\\end{eqnarray}\nHere we have made use of some identities relating $\\bar{D}$-functions (appendix \\ref{sec:Dfunc}) to simplify the expressions. Notice that here we are abusing notation, as the scalar fields now refer to the boundary sources, and so depend on the $\\vec{x}_i$ coordinates. We are now ready to compute the four-point function (\\ref{diffweightprocess}) using the AdS\/CFT prescription given in (\\ref{prescriptionadscft}). Of course, we first need to canonically normalise the corresponding 1\/2-BPS operators, taking into account the rescaling of the action we performed at the beginning of this computation\n\\begin{equation}\n\\tilde{s}^{I}_{n}=\\frac{N}{4\\pi^{2}}(n-2)^{1\/2}(n-1)s^{I}_{n}\n\\qquad \\qquad\n\\tilde{s}_{2}^{I}=\\frac{N}{4\\sqrt{2}\\pi^2}s^{I}_{2}\n\\end{equation}\nThis implies that the connected piece of the four-point function is of order $\\mathcal{O}(1\/N^2)$. The explicit form can be determined from\n\\begin{equation}\n\\langle \\mathcal{O}_{2}(\\vec{x}_{1})\\mathcal{O}_{2}(\\vec{x}_{2})\\mathcal{O}_{n}(\\vec{x}_{3})\\mathcal{O}_{n}(\\vec{x}_{4})\\rangle=\n\\frac{2^{9}\\pi^{8}}{N^4}\\frac{1}{(n-2)(n-1)^2}\n\\frac{\\delta}{\\delta s_{2}^{I_{1}}(\\vec{x}_{1})}\\frac{\\delta}{\\delta s_{2}^{I_{2}}(\\vec{x}_{2})}\n\\frac{\\delta}{\\delta s_{n}^{I_{3}}(\\vec{x}_{3})}\\frac{\\delta}{\\delta s_{n}^{I_{4}}(\\vec{x}_{4})}(-S)\n\\end{equation}\nUpon functional differentiation, the contribution to the amplitude from each of the tensor structures will be given by the corresponding orbit, that is, the $s$-, $t$- and $u$-channels obtained by independent permutations of the points $1 \\leftrightarrow 2$, $3 \\leftrightarrow 4$. 
Here we make use of the symmetries of the $SO(6)$ tensors, so the final result reads as follows\n\\begin{eqnarray}\n&&\\langle \\mathcal{O}_{2}^{I_{1}}(\\vec{x}_{1})\\mathcal{O}_{2}^{I_{2}}(\\vec{x}_{2})\\mathcal{O}_{n}^{I_{3}}(\\vec{x}_{3})\\mathcal{O}_{n}^{I_{4}}(\\vec{x}_{4})\\rangle\n=\\frac{1}{|\\vec{x}_{12}|^4|\\vec{x}_{34}|^{2n}}\\left\\{A(u,v)\\delta^{I_1I_2}_{2}\\delta^{I_3I_4}_{n}+B_{1}(u,v)C^{I_{1}I_{2}I_{3}I_{4}}\n\\right.\n\\nonumber \\\\\n&&\\left.+B_{2}(u,v)C^{I_{1}I_{2}I_{4}I_{3}}+C_{1}(u,v)\\Upsilon^{I_{1}I_{2}I_{3}I_{4}}+C_{2}(u,v)\\Upsilon^{I_{1}I_{2}I_{4}I_{3}}\n+D(u,v)S^{I_{1}I_{2}I_{3}I_{4}}\\right\\}\n\\label{resultdiffweightsugra}\n\\end{eqnarray}\nwhere the functions $(A, B_{1}, B_{2}, C_{1}, C_{2}, D)$ are given by \n\\begin{equation}\n(A, B_{1}, B_{2}, C_{1}, C_{2}, D)=\\frac{2^{4}(n-2)}{\\Gamma(n)}\\frac{1}{N^2}(\\tilde{A},\\tilde{B}_{1},\\tilde{B}_{2},\\tilde{C}_1,\\tilde{C}_2, \\tilde{D})\n\\end{equation}\nand\n\\begin{eqnarray}\n\\tilde{A}(u,v)&=&\n-\\frac{n}{2^3}u\\left\\{\\bar{D}_{11nn}-(n+1)u\\bar{D}_{22nn}-(1+v-u)\\bar{D}_{22n+1n+1}\\right\\}\n\\nonumber \\\\\n\\tilde{B}_1(u,v)&=&\n-\\frac{n}{2^3}u \\left\\{-2\\bar{D}_{11nn}-2(n-1)u\\bar{D}_{12n-1n}\n-2(\\bar{D}_{21nn+1}-\\bar{D}_{21n+1n})+(n+6)u\\bar{D}_{22nn}\\right.\n\\nonumber \\\\\n&-&(u\\bar{D}_{31nn}-(n-1)u^2\\bar{D}_{23n-1n})-((n-1)u\\bar{D}_{22n-1n+1}-u\\bar{D}_{12nn+1})\n\\nonumber \\\\\n&-&\\left.4u(\\bar{D}_{22nn}-\\bar{D}_{32nn+1})\\right\\}\n\\nonumber \\\\\n\\tilde{B}_2(u,v)&=&\n-\\frac{n}{2^3}u\\left\\{-2\\bar{D}_{11nn}-2(n-1)u\\bar{D}_{12nn-1}\n-2(\\bar{D}_{12nn+1}-\\bar{D}_{21nn+1})+(n+6)u\\bar{D}_{22nn}\\right.\n\\nonumber \\\\\n&-&(u\\bar{D}_{13nn}-(n-1)u^2\\bar{D}_{23nn-1})-((n-1)u\\bar{D}_{22n+1n-1}-u\\bar{D}_{12n+1n})\n\\nonumber \\\\\n&-&\\left.4u(\\bar{D}_{22nn}-\\bar{D}_{23nn+1})\\right\\}\n\\nonumber \\\\\n\\tilde{C}_1(u,v)&=&\n-\\frac{n}{2^3(n+2)}u^2\\left\\{\\frac{2(n-1)^2(n+2)}{(n+1)}\\bar{D}_{12n-1n}\n-n(n+2)\\bar{D}_{22nn} \\right.\n\\nonumber \\\\\n&+&\\frac{2n(n-1)}{n+1}u\\bar{D}_{33n-1n+1}\n+\\frac{4}{n+1}\\bar{D}_{31n+1n+1}+\\frac{4n}{n+1}(1-u-v)\\bar{D}_{23nn+1}\n\\nonumber \\\\\n&+&\\left.(n-2)((n-1)\\bar{D}_{22n-1n+1}-\\bar{D}_{12nn+1})+(n-2)(\\bar{D}_{31nn}-(n-1)u\\bar{D}_{23n-1n})\\right\\}\n\\nonumber \\\\\n\\tilde{C}_2(u,v)&=&\n-\\frac{n}{2^3(n+2)}u^2\\left\\{\\frac{2(n-1)^2(n+2)}{(n+1)}\\bar{D}_{12nn-1}\n-n(n+2)\\bar{D}_{22nn}\\right.\n\\nonumber \\\\\n&+&\\frac{2n(n-1)}{n+1}u\\bar{D}_{33n+1n-1}\n+\\frac{4}{n+1}\\bar{D}_{13n+1n+1}+\\frac{4n}{n+1}(v-u-1)\\bar{D}_{23n+1n}\n\\nonumber \\\\\n&+&\\left.(n-2)((n-1)\\bar{D}_{22n+1n-1}-\\bar{D}_{12n+1n})+(n-2)(\\bar{D}_{13nn}-\n(n-1)u\\bar{D}_{23nn-1})\\right\\}\n\\nonumber \\\\\n\\tilde{D}(u,v)&=&\n-\\frac{n}{2^3}u^2\\left\\{-2(n-1)^2(\\bar{D}_{12n-1n}+\\bar{D}_{12nn-1})+6n\\bar{D}_{22nn}-4(\\bar{D}_{23nn+1}+\\bar{D}_{32nn+1})\\right.\n\\nonumber \\\\\n&+&(n-1)(\\bar{D}_{22n-1n+1}+\\bar{D}_{22n+1n-1})-(\\bar{D}_{12nn+1}+\\bar{D}_{12n+1n})\n\\nonumber \\\\\n&+&\\left.(\\bar{D}_{31nn}+\\bar{D}_{13nn})-(n-1)u(\\bar{D}_{23n-1n}+\\bar{D}_{23nn-1})\\right\\}\n\\label{orbits}\n\\end{eqnarray}\nFrom (\\ref{orbits}) it is possible to see that the crossing symmetries are respected and that the overall form of the four-point amplitude is consistent with conformal symmetry.\n\\section{Verifying the CFT Predictions}\n\\label{sec:verifying}\nWe now verify, on the supergravity result, the dynamical constraints imposed on the amplitude by the insertion procedure. 
To do this, we need to rewrite the result (\\ref{resultdiffweightsugra}) in a simpler way. We will follow the notation in \\cite{Rayson:2007th}, which is based on ideas developed in \\cite{Nirschl:2004pa, Dolan:2003hv}, and introduce the conformal invariants\n\\begin{equation}\n\\sigma=\\frac{u_1\\cdot u_3 u_2\\cdot u_4}{u_1\\cdot u_2 u_3 \\cdot u_4} \\qquad \\qquad \\tau=\\frac{u_1 \\cdot u_4 u_2 \\cdot u_3}{u_1 \\cdot u_2 u_3 \\cdot u_4} \n\\label{ucrossrad1}\n\\end{equation}\nso the four-point function (\\ref{diffweightprocess}) is given by\n\\begin{equation}\n \\langle \\mathcal{O}_{2}(\\vec{x}_1,u_1)\\mathcal{O}_{2}(\\vec{x}_2,u_2)\\mathcal{O}_{n}(\\vec{x}_3,u_3)\\mathcal{O}_{n}(\\vec{x}_4,u_4)\\rangle =\\left(\\frac{u_1.u_2}{|\\vec{x}_{12}|^2}\\right)^{2}\\left(\\frac{u_3.u_4}{|\\vec{x}_{34}|^2}\\right)^{n}\\mathcal{G}^{(2,2,n,n)}(u,v;\\sigma,\\tau)\n\\end{equation}\nwhere\n\\begin{equation}\n\\mathcal{G}^{(2,2,n,n)}(u,v;\\sigma,\\tau)=\\mathcal{G}_{0}(u,v;\\sigma,\\tau)+s(u,v;\\sigma,\\tau)\\mathcal{H}_{I}(u,v;\\sigma,\\tau)\n\\label{generalstrucG}\n\\end{equation}\n$\\mathcal{H}_I$ contains all the non-trivial dynamical contributions and $\\mathcal{G}_{0}$ is the free field part, which has the following structure\n\\begin{equation}\n\\mathcal{G}_{0}(u,v;\\sigma,\\tau)=k+G_{f}(u,v;\\sigma,\\tau)+s(u,v;\\sigma,\\tau)\\mathcal{H}_{0}(u,v,\\sigma,\\tau)\n\\end{equation}\nIn these expressions\n\\begin{equation}\ns(u,v;\\sigma,\\tau)=v+\\sigma^2 uv +\\tau^2 u +\\sigma v (v-1-u)+\\tau (1-u-v) + \\sigma \\tau u (u-1-v)\n\\end{equation}\nThe free field term in the $22\\rightarrow nn$ channel is given by the expression \\cite{Rayson:2007th,Nirschl:2004pa, Dolan:2003hv}\n\\begin{equation}\n\\label{freeg}\n\\mathcal{G}_0(u,v;\\sigma,\\tau)=1+b_1\\left(\\sigma u +\\tau \\frac{u}{v}\\right)+c_1\\left(\\sigma^2 u^2 +\\tau^2 \\frac{u^2}{v^2}\\right)+d \\sigma \\tau \\frac{u}{v}\n\\end{equation}\nwith $b_1, c_1$ and $d$ given in (\\ref{largeNfreecoeff}). The $2n \\rightarrow 2n$ channel can be obtained using crossing symmetry. From (\\ref{resultdiffweightsugra}), one can read off the expression in the interacting theory \n\\begin{equation}\n\\mathcal{G}(u,v;\\sigma,\\tau)=a(u,v)+\\left(\\sigma u b_{1}(u,v)+\\tau\\frac{u}{v}b_{2}(u,v) \\right)+\\left(\\sigma^2 u^2 c_{1}(u,v)+\\tau^2\\frac{u^2}{v^2}c_{2}(u,v)\\right)+\\sigma\\tau \\frac{u^2}{v}d(u,v)\n\\label{intg}\n\\end{equation}\nwhere $a(u,v)=A(u,v)$, $b_1(u,v)=\\frac{B_{1}(u,v)}{u}$, $c_1(u,v)=\\frac{C_{1}(u,v)}{u^2}$ and $d(u,v)=\\frac{v}{u^2}D(u,v)$. $b_2(u,v)$ and $c_2(u,v)$ can be obtained from crossing symmetry, as the supergravity result (\\ref{orbits}) satisfies this property. Notice also that the cross-ratios $\\sigma$ and $\\tau$ defined in (\\ref{ucrossrad1}) arise naturally from expressing the products of $C$-tensors in terms of harmonic polynomials (see appendix \\ref{sec:HarmPoly}).\n\nIt is possible to rewrite (\\ref{intg}) by simplifying the supergravity result (\\ref{orbits}), using identities between $\\bar{D}$-functions (see appendix \\ref{sec:Dfunc}). The simplification was done in \\cite{Rayson:2007th} and we reproduce it here. One gets\n\\begin{equation}\n\\mathcal{G}(u,v;\\sigma,\\tau)=1+\\frac{2n}{N^2}\\left(\\sigma u +\\tau \\frac{u}{v}+(n-1)\\sigma\\tau \\frac{u^2}{v}-\\frac{1}{(n-2)!}s(u,v;\\sigma,\\tau)u^n\\bar{D}_{nn+222}(u,v) \\right)\n\\end{equation}\nwhere the disconnected piece has been normalised to 1. 
In the free field limit, $\\mathcal{G}\\rightarrow \\mathcal{G}_0$, so comparing (\\ref{freeg}) with (\\ref{intg}) one has\n\\begin{equation}\na(u,v)\\rightarrow 1 \\qquad b_i(u,v) \\rightarrow b_i \\qquad c_i(u,v) \\rightarrow c_i \\qquad d(u,v) \\rightarrow d\n\\end{equation}\nfrom which we can identify $k=1+(n+1)b_i+2c_i$ and from (\\ref{generalstrucG}) one sees that\\footnote{This can be read off from $a(u,v)$ as its connected piece has no free field contributions.}\n\\begin{equation}\n\\mathcal{H}_I(u,v)=-\\frac{2n}{N^2}\\frac{1}{(n-2)!}u^n \\bar{D}_{nn+222}(u,v)\n\\label{Hint}\n\\end{equation}\nIn the $2n\\rightarrow 2n$ channel the previous expression reads\n\\begin{equation}\n\\hat{\\mathcal{H}}_I(u,v)=-\\frac{2n}{N^2}\\frac{1}{(n-2)!}u^2 \\bar{D}_{2n+22n}(u,v)\n\\end{equation}\nIt is now clear that one can write\n\\begin{eqnarray}\na(u,v)&=&1+v \\mathcal{H}_I(u,v) \\hspace{28.mm} d(u,v)=d+\\frac{v}{u}(u-v-1)\\mathcal{H}_I(u,v)\n\\nonumber\\\\\nb_{1}(u,v)&=&b_1+\\frac{v}{u}(v-u-1)\\mathcal{H}_I(u,v) \\qquad b_{2}(u,v)=b_2+\\frac{v}{u}(1-u-v)\\mathcal{H}_I(u,v)\n\\nonumber \\\\\nc_{1}(u,v)&=&c_1+\\frac{v}{u}\\mathcal{H}_I(u,v) \\hspace{25.mm} c_{2}(u,v)=c_2+\\frac{v^2}{u}\\mathcal{H}_I(u,v)\n\\label{splitcoeff}\n\\end{eqnarray}\nso the supergravity result also splits into a free and a quantum part, as predicted by superconformal symmetry. Defining \n\\begin{equation}\n\\mathcal{H}_I(u,v)=\\frac{u}{v}\\mathcal{F}(u,v)\n\\end{equation}\nit becomes clear that the relations (\\ref{insertrels}) are satisfied; indeed, substituting $\\mathcal{F}(u,v)=\\frac{v}{u}\\mathcal{H}_I(u,v)$ into (\\ref{insertrels}) reproduces term by term the interacting parts of (\\ref{splitcoeff}), e.g. $u\\,\\mathcal{F}=v\\,\\mathcal{H}_I$ for $a$ and $v\\,\\mathcal{F}=\\frac{v^2}{u}\\,\\mathcal{H}_I$ for $c_2$. We consider this fact as strong evidence in favour of the AdS\/CFT correspondence. \n\nWe can also read off the values of the coefficients $b_i, c_i$ and $d$ from the free part of the function $\\mathcal{G}(u,v;\\sigma,\\tau)$. The results are\n\\begin{equation}\nb_i=\\frac{2n}{N^2} \\qquad c_i=0 \\qquad d = \\frac{2n(n-1)}{N^2} \n\\label{cizero}\n\\end{equation}\nNotice that the values of $b_i$ and $d$ agree with those computed using free field theory. This is a highly non-trivial result. However, the $c_i$ vanish, which is apparently at odds with what was obtained using free fields; but recall that $c_i$ was dependent on the colour structure of the operators. This might suggest that this quantity receives quantum corrections. It should also be noticed that in the case $n=3$ one has $c_i=0$, so there is agreement \\cite{Berdichevsky:2007xd}. \n\n\\section{Conclusions and Outlook}\n\\label{sec:conclusions}\nIn this paper, we have investigated four-point functions of different weight operators in the context of the AdS\/CFT correspondence. We have looked at a specific computation in the supergravity approximation (large $\\lambda$, large $N$) of a process involving fields dual to primaries of conformal dimension $2$ and primaries of conformal dimension $n$. The results have been analysed using input from free field Yang-Mills theory and superconformal symmetry. Some of our key results are summarised below:\n\\begin{itemize}\n\\item The connected piece of the four-point function of 1\/2-BPS superconformal primaries of conformal weights 2 and $n$ was shown to have a structure that is consistent with superconformal symmetry. Moreover, we have seen that it naturally separates into a free and an interacting (quantum) piece, which involves all the non-trivial dynamics and satisfies the restrictions imposed by the insertion procedure. \n\n\\item A new method was used for evaluating effective couplings in the lagrangian arising from integrals over $S^5$. 
This allowed the determination of the on-shell lagrangian for KK scalars dual to superconformal primaries on the YM side. \n\n\\item We provided further evidence for the possibility that the quartic four-derivative lagrangian of \\cite{Arutyunov:1999fb} vanishes, having now extended the computation of the lagrangian to include primaries with different conformal weights, two of them generic (i.e. with no specification of the representation content). As has been argued before in \\cite{Arutyunov:2002fh, Arutyunov:2003ae}, this would imply the existence of a $\\sigma$-model action describing the extension of $d=5$ $\\mathcal{N}=8$ supergravity to include massive KK modes of the IIB compactification.\n\\end{itemize}\nWith the techniques developed in appendix \\ref{sec:HarmPoly} to compute the interaction couplings arising from the products of $C$-tensors, it seems likely that the correlation function \n\\begin{equation*}\n\\langle \\mathcal{O}_{n_1}(\\vec{x}_1)\\mathcal{O}_{n_1}(\\vec{x}_2)\\mathcal{O}_{n_2}(\\vec{x}_3)\\mathcal{O}_{n_2}(\\vec{x}_4)\\rangle\n\\end{equation*} \ncould be evaluated in AdS supergravity. This would give us further information on the dynamics of KK scalars, and would provide additional evidence for the vanishing of the quartic four-derivative lagrangian in the five-dimensional effective theory.\n\nAnother problem one could explore is the effect of $\\mathcal{R}^4$ corrections to four-point functions of superconformal primary operators. Recalling that the dual fields are built from the trace of the graviton on the $S^5$ and the RR four-form on $S^5$, and given that all the terms at order $\\alpha'^3$ involving the metric and the four-form are known from \\cite{Paulos:2008tn}, it is conceivable that the corrections to the five-dimensional effective lagrangian can be obtained. This would indeed be a difficult task, but a first step would be to consider the case of lowest scale dimension primaries ($\\Delta=2$). In this way, it should be possible to compute the order $(g_{YM}^2N)^{-3\/2}$ correction to the four-point function of lowest weight primaries. \n\nA puzzle that remains to be addressed is the mismatch between the $c_i$ coefficient function from the supergravity computation, eq. (\\ref{cizero}), and the free-field theory one, eq. (\\ref{largeNfreecoeff}). Given that the supergravity result gives $c_i=0$, one might imagine that there should be stringy corrections to this quantity. Corrections in $\\alpha'$ could be considered once the higher order corrections to the five-dimensional effective action are known. Another interesting avenue would be to consider the potential contribution coming from non-perturbative effects \\cite{Green:2002vf}.\n\nFinally, it should be mentioned that the supergravity result obtained here can be used to analyse the structure of the OPE of the primaries at strong coupling and to evaluate anomalous dimensions. Some results on this matter can be found in \\cite{Rayson:2007th}.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\nIn a recent paper \\cite{EHIIM11}, Eto et al. 
investigated the electromagnetic\nproperties of baryons under the influence of external electromagnetic fields,\nbased on the Skyrme model \\cite{Skyrme61} with the Wess-Zumino-Witten term\nincluding electromagnetism \\cite{WZ71},\\cite{Witten83}, thereby\nconcluding that a nucleon in external electromagnetic\nfields has an anomalous charge distribution due to the chiral anomaly.\nFurthermore, the Gell-Mann-Nishijima relation, $Q = I_3 + N_B \/ 2$\n($Q$ : electric charge, $I_3$ : the third component of isospin,\n$N_B$ : baryon number), acquires an additional term due to the quantum\nanomaly. As a consequence, a non-zero net charge, which is generally\nnon-integer, is induced even for a neutron. This astounding conclusion\nstems from the gauged Wess-Zumino-Witten action with two flavors,\ngiven in the form \\cite{KRS84},\\cite{PR85}:\n\\begin{equation}\n S_{WZW} [U, A_\\mu] \\ = \\ - \\ e \\,\\int \\,d^4 x \\,A_\\mu \\,\n \\left(\\, \\frac{N_c}{6} \\,j^\\mu_B \\ + \\ \\frac{1}{2} \\,j^\\mu_{anm} \\,\\right),\n \\label{WZW1} \n\\end{equation}\nwhere\n\\begin{eqnarray}\n j^\\mu_B &=& - \\,\\frac{1}{24 \\,\\pi^2} \\,\\epsilon^{\\mu \\nu \\alpha \\beta} \\,\n \\mbox{\\rm tr} \\,(L_\\nu \\,L_\\alpha \\,L_\\beta) , \\label{current_B} \\\\\n j^\\mu_{anm} &=& \\frac{i \\,e \\,N_c}{96 \\,\\pi^2} \\,\n \\epsilon^{\\mu \\nu \\alpha \\beta} \\,F_{\\nu \\alpha} \\,\n \\mbox{\\rm tr} \\,\\tau_3 \\,(L_\\beta + R_\\beta) , \\label{current_anm}\n\\end{eqnarray}\nwith\n\\begin{equation}\n L_\\mu \\ \\equiv \\ U \\,\\partial_\\mu U^\\dagger, \\ \\ \\ \n R_\\mu \\ \\equiv \\ \\partial_\\mu U^\\dagger \\,U . \n\\end{equation}\n(We point out that our definition of $L_\\mu$ and $R_\\mu$ is different\nfrom that in \\cite{EHIIM11}.)\nHere, $j^\\mu_B$ is the well-known baryon current giving an integer baryon\nnumber \\cite{Witten83}. According to \\cite{EHIIM11}, in the presence of\nbackground electromagnetic fields, not only\nthe first term but also the second term of Eq.~(\\ref{WZW1}) is important.\nThe electric charge $Q$ with the anomaly contribution included is then written as\n\\begin{equation}\n Q \\ = \\ I_3 \\ + \\ \\frac{N_B}{2} \\ + \\ \\frac{Q_{anm}}{2} ,\n\\end{equation}\nwhere $N_B = \\int d^3 x \\,j^0_B$ and $Q_{anm} = \\int \\,d^3 x \\,j^0_{anm}$.\nThis means that the Gell-Mann-Nishijima relation receives\na remarkable modification in the presence of background electromagnetic fields.\n\nIt appears to us, however, that the above-mentioned anomalous induction\nof a non-zero net charge for a nucleon (or a Skyrmion) is not in good harmony\nwith the schematic physical picture illustrated in Fig.~1 of their paper.\nThis schematic diagram represents the electric charge generation of a nucleon\nthrough the anomalous coupling between one pion and two photons\n(or electromagnetic fields). Since the electromagnetic fields (as abelian\ngauge fields) carry no electric charge, the exchanged pion in this\nfigure must be neutral. 
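To make this statement concrete, it is instructive to expand the anomalous\ncurrent (\\ref{current_anm}) to lowest order in the pion fields. Adopting the\nstandard parametrization $U = \\exp (i \\,\\vec{\\tau} \\cdot \\vec{\\pi} \/ f_\\pi)$\n(the normalization of $f_\\pi$ is conventional and is introduced here only for\nillustration), one has $L_\\beta + R_\\beta \\simeq - \\,2 \\,i \\,\\vec{\\tau} \\cdot \\partial_\\beta \\,\\vec{\\pi} \/ f_\\pi$, so that\n\\begin{equation}\n j^\\mu_{anm} \\ \\simeq \\ \\frac{e \\,N_c}{24 \\,\\pi^2 \\,f_\\pi} \\,\n \\epsilon^{\\mu \\nu \\alpha \\beta} \\,F_{\\nu \\alpha} \\,\\partial_\\beta \\,\\pi^0 .\n\\end{equation}\nInserted into Eq.~(\\ref{WZW1}) and integrated by parts, this reproduces, up to\na convention-dependent sign, the familiar $\\pi^0 \\gamma \\gamma$ vertex\n$\\frac{N_c \\,e^2}{96 \\,\\pi^2 \\,f_\\pi} \\,\\pi^0 \\,\\epsilon^{\\mu \\nu \\alpha \\beta} \\,F_{\\mu \\nu} \\,F_{\\alpha \\beta}$,\nin which only the neutral pion appears.\n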
In fact, this lowest order diagram results\nfrom the same vertex as the one describing the famous decay process\n$\\pi^0 \\rightarrow 2 \\,\\gamma$ due to the triangle\nanomaly \\cite{Adler69},\\cite{BJ69},\nwhich is legitimately contained in the\ngauged Wess-Zumino-Witten action \\cite{Witten83}\\nocite{KRS84}-\\cite{PR85}.\nNaturally, the gauged Wess-Zumino-Witten\naction also contains higher-power terms in the pion fields.\nHowever, even if one considers diagrams in which more pions are exchanged\nbetween the nucleon and the electromagnetic fields, the exchanged pions\nmust be electrically neutral as a whole, since the electromagnetic fields\ncarry no electric charge. \nWhat we are worrying about here is a conflict between this intuitive thought\nand the principal conclusion of the paper \\cite{EHIIM11}, i.e. the anomalous\ninduction of a non-zero net charge for a nucleon.\n\nThe purpose of the present paper is to unravel the origin of this contradiction.\nHere, we unavoidably encounter the problem of how to define the electromagnetic\nhadron current in an unambiguous manner, starting with the gauged\nWess-Zumino-Witten action.\nA subtlety arises from the fact that the gauged Wess-Zumino-Witten action\ncontains nonlinear terms in the electromagnetic fields. If it contained\nonly linear terms in the electromagnetic fields, one could easily\nread off the electromagnetic hadron current as the coefficient of the\nelectromagnetic field. To handle this delicate point, \nwe first briefly analyze, in sect.~II, the familiar lagrangian of\nscalar electrodynamics, which contains couplings between photons and\ncomplex scalar fields that are nonlinear in the photon fields.\nParticular emphasis is put on how to define the electromagnetic matter\ncurrent based on a solid guiding principle. It will be shown there that the two\nforms of current, i.e. the one defined on the basis of the Noether theorem\nand the other defined as the source current of the Maxwell equation through\nthe equations of motion, perfectly coincide with each other.\nIt is also shown that this current is gauge-invariant and conserved,\nthereby ensuring the consistency of scalar electrodynamics as a quantum\ngauge theory. In sect.~III, we shall carry out a similar analysis for the\nnonlinear meson action supplemented by the Wess-Zumino-Witten action including\nelectromagnetism, to find something unexpected, which is thought to be\nthe origin of the discrepancy pointed out above. Finally, in sect.~IV,\nwe briefly summarize what we have found in the present paper. 
\n\n\n\\section{A lesson learned from scalar electrodynamics}\n\nLet us start with the familiar lagrangian of scalar electrodynamics given by\n\\begin{equation}\n {\\cal L} \\ = \\ - \\,\\frac{1}{4} \\,F_{\\mu \\nu} \\,F^{\\mu \\nu} \\ + \\ \n D_\\mu \\phi^* \\,D^\\mu \\phi \\ - \\ V (\\phi^* \\,\\phi) ,\n \\label{SED_lagrangian}\n\\end{equation}\nwith\n\\begin{eqnarray}\n F_{\\mu \\nu} &\\equiv& \\partial_\\mu \\,A_\\nu \\ - \\ \n \\partial_\\nu A_\\mu , \\\\\n D^\\mu \\,\\phi (x) &\\equiv& [\\,\\partial^\\mu \\ + \\ i \\,e \\,A^\\mu (x) \\,] \\,\n \\phi(x) .\n\\end{eqnarray}\nThis lagrangian is manifestly gauge-invariant under the following gauge\ntransformation: \n\\begin{eqnarray}\n \\phi (x) &\\rightarrow& e^{- \\,i \\,e \\,\\alpha (x)} \\,\\phi (x), \\ \\ \\ \n A^\\mu (x) \\ \\rightarrow \\ A^\\mu (x) \\ + \\ \\partial^\\mu \\,\\alpha (x).\n \\label{SED_GT}\n\\end{eqnarray}\nThe equations of motion derived from the above lagrangian are given by\n\\begin{eqnarray}\n \\partial_\\mu \\,F^{\\mu \\nu} &=& j^\\nu , \\label{SED_Maxwell_eq}\\\\\n D_\\mu \\,D^\\mu \\,\\phi &=& - \\,\\partial V \\,\/ \\,\\partial \\phi^*, \\\\\n \\left( D_\\mu \\,D^\\mu \\,\\phi \\right)^* &=& - \\,\\partial V \\,\/ \\,\\partial \\phi .\n\\end{eqnarray}\nHere, the source current $j^\\nu$ of the Maxwell\nequation (\\ref{SED_Maxwell_eq}) is given by\n\\begin{equation}\n j^\\nu \\ = \\ i \\,e \\,\\left[\\, \\phi^* \\,D^\\nu \\,\\phi \\ - \\ \n (D^\\nu \\,\\phi)^* \\,\\phi \\,\\right] \\ = \\ \n i \\,e \\,\\phi^* \\,\\overleftrightarrow{\\partial^\\nu} \\,\\phi \\ - \\ \n 2 \\,e^2 \\,\\phi^* \\,\\phi \\,A^\\nu , \n\\end{equation}\nwith $\\overleftrightarrow{\\partial^\\nu} = \\overrightarrow{\\partial^\\nu} - \\overleftarrow{\\partial^\\nu}$.\nBy using the equations of motion, it can be shown that this matter current\n$j^\\nu$ is conserved, i.e.\n\\begin{equation}\n \\partial_\\nu \\,j^\\nu \\ = \\ 0 .\n\\end{equation}\nOne can also convince oneself that this matter (or source) current is\ninvariant under the full gauge transformation (\\ref{SED_GT}).\nOne should recognize that the conservation of the source current is crucially\nimportant. 
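For completeness, this conservation can be verified explicitly. Noting that\n$\\partial_\\nu \\,(\\phi^* \\,D^\\nu \\phi) = (D_\\nu \\phi)^* \\,D^\\nu \\phi + \\phi^* \\,D_\\nu \\,D^\\nu \\phi$,\nsince the $A_\\nu$-dependent pieces cancel, the equations of motion together\nwith $V = V (\\phi^* \\,\\phi)$ give\n\\begin{equation}\n \\partial_\\nu \\,j^\\nu \\ = \\ i \\,e \\,\\left[\\, \\phi^* \\,D_\\nu \\,D^\\nu \\,\\phi \\ - \\ \n \\left( D_\\nu \\,D^\\nu \\,\\phi \\right)^* \\phi \\,\\right] \\ = \\ \n i \\,e \\,\\left[\\, - \\,\\phi^* \\,\\frac{\\partial V}{\\partial \\phi^*} \\ + \\ \n \\frac{\\partial V}{\\partial \\phi} \\,\\phi \\,\\right] \\ = \\ 0 .\n\\end{equation}\n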
In fact, if it were broken, one would encounter a serious contradiction\nwith the Maxwell equation (\\ref{SED_Maxwell_eq}) in such a way that\n\\begin{equation}\n 0 \\ = \\ \\partial_\\nu \\,\\partial_\\mu \\,F^{\\mu \\nu} \\ = \\ \n \\partial_\\nu \\,j^\\nu \\ \\neq \\ 0.\n\\end{equation}\nFor later discussion, it is useful to remember the fact that the matter current\nabove can also be obtained by using the standard Noether prescription.\nTo confirm it, we first consider the infinitesimal version of the gauge\ntransformation given by\n\\begin{eqnarray}\n \\delta \\phi \\ = \\ - \\,i \\,e \\,\\epsilon (x) \\, \\phi , \\ \\ \\ \n \\delta \\phi^* \\ = \\ i \\,e \\,\\epsilon (x) \\,\\phi^*, \\ \\ \\ \n \\delta A^\\mu \\ = \\ \\partial^\\mu \\epsilon (x).\n\\end{eqnarray}\nNaturally, the lagrangian of scalar electrodynamics is invariant under\nthis gauge transformation.\nThe Noether current is obtained by considering another variation\n\\begin{eqnarray}\n \\delta^\\prime \\phi \\ = \\ - \\,i \\,e \\,\\epsilon (x) \\, \\phi , \\ \\ \\ \n \\delta^\\prime \\phi^* \\ = \\ i \\,e \\,\\epsilon (x) \\,\\phi^*, \\ \\ \\ \n \\delta^\\prime A^\\mu \\ = \\ 0.\n\\end{eqnarray}\nThe variation of the whole lagrangian under this transformation\nreduces to the form\n\\begin{equation}\n \\delta^\\prime {\\cal L} \\ = \\ \\epsilon (x) \\,\\partial_\\mu \\,J^\\mu \\ + \\ \n \\partial_\\mu \\epsilon (x) \\,J^\\mu,\n\\end{equation} \nwhich defines the current $J^\\mu$ in such a way that \n\\begin{eqnarray}\n J^\\mu \\ &=& \\frac{\\partial (\\delta^\\prime {\\cal L})}{\\partial (\\partial_\\mu \\epsilon (x))}, \n \\label{SED_Noether_current} \\\\\n \\partial_\\mu \\,J^\\mu &=& \\ \\frac{\\partial (\\delta^\\prime {\\cal L})}{\\partial \\epsilon (x)}.\n\\end{eqnarray} \nIf the lagrangian ${\\cal L}$ is invariant under a space-time independent\n(global) transformation\n\\begin{eqnarray}\n \\delta^\\prime \\phi \\ = \\ - \\,i \\,e \\,\\epsilon \\, \\phi , \\ \\ \\ \n \\delta^\\prime \\phi^* \\ = \\ i \\,e \\,\\epsilon \\,\\phi^*, \n\\end{eqnarray}\nwhich is indeed the case with our lagrangian (\\ref{SED_lagrangian}), we conclude that\n\\begin{equation}\n 0 \\ = \\ \\delta^\\prime {\\cal L} \\ = \\ \\epsilon \\,\\,\\partial_\\mu \\,J^\\mu.\n\\end{equation}\nThis means that the current $J^\\mu$ defined by the equation\n(\\ref{SED_Noether_current}) is in fact\na conserved Noether current. 
The above-explained method of obtaining\nthe Noether current is known as the Gell-Mann-Levy method.\n\nNow, for the lagrangian of scalar electrodynamics, we have\n\\begin{eqnarray}\n \\delta^\\prime [\\,\\partial_\\mu \\,\\phi^* \\,\\partial^\\mu \\phi \\,]\n &=& i \\,e \\,\\partial_\\mu \\,\\epsilon (x) \\,\n [\\,\\phi^* \\,\\partial^\\mu \\,\\phi \\ - \\ \\partial^\\mu \\,\\phi^* \\,\\phi \\,], \\\\\n \\delta^\\prime \\,[\\,i \\,e \\,(\\,\\partial_\\mu \\,\\phi^* \\,\\phi \\ - \\ \n \\phi^* \\,\\partial_\\mu \\,\\phi \\,) \\,A^\\mu \\,] &=&\n \\partial_\\mu \\epsilon (x) \\,( - \\,2 \\,e^2 ) \\,\\phi^* \\,\\phi \\,A^\\mu , \\\\\n \\delta^\\prime \\,[\\,e^2 \\,A_\\mu \\,A^\\mu \\,\\phi^* \\,\\phi \\,] &=& 0 ,\n\\end{eqnarray}\nwhich leads to\n\\begin{equation}\n \\delta^\\prime \\,{\\cal L} \\ = \\ \\partial_\\mu \\,\\epsilon (x) \\,\n \\left\\{\\,i \\,e \\,[\\,\\phi^* \\,\\partial^\\mu \\,\\phi \\ - \\ \n (\\partial^\\mu \\,\\phi)^* \\,\\phi \\,]\n \\ - \\ 2 \\,e^2 \\,\\phi^* \\,\\phi \\,A^\\mu \\,\\right\\} .\n\\end{equation}\nThe resultant Noether current is therefore given by\n\\begin{eqnarray}\n J^\\mu &=& i \\,e \\,\\,\\phi^* \\,\\overleftrightarrow{\\partial^\\mu} \\,\\phi \n \\ - \\ 2 \\,e^2 \\,\\phi^* \\,\\phi \\,A^\\mu\n \\ = \\ \n i \\,e \\,[\\,\\phi^* \\,D^\\mu \\,\\phi \\ - \\ (D^\\mu \\,\\phi)^* \\,\\phi \\,].\n\\end{eqnarray}\nOne confirms that this conserved Noether current precisely coincides\nwith the source current appearing in the Maxwell\nequation (\\ref{SED_Maxwell_eq}) for the photon\nfield. This ensures the consistency of scalar electrodynamics as a\nclassical and a quantum field theory. Somewhat embarrassingly, we shall\nsee below that the familiar gauged Wess-Zumino-Witten action does not\nexhibit the same kind of consistency.\n\n\n\\section{Electromagnetic hadron current resulting from the gauged\nWess-Zumino-Witten action}\n\n\\subsection{Matter current derived from the Noether principle}\n\nHere, we start with the gauged Wess-Zumino-Witten action with two flavors,\nexpressed in the following form:\n\\begin{eqnarray}\n S_{WZW} [\\,U, A_\\mu \\,] &=& S_{WZ} [U] \\ - \\ \n \\frac{1}{2} \\,e \\,\\int \\,d^4 x \\,A_\\mu\n \\nonumber \\\\\n &\\times& \\left\\{\\,- \\,\\frac{1}{24 \\,\\pi^2} \\,\n \\epsilon^{\\mu \\nu \\alpha \\beta} \\,\n \\mbox{tr} \\,(L_\\nu \\, L_\\alpha \\,L_\\beta ) \\right. \\nonumber \\\\\n &\\,& \\ \\ \\left. + \\ \\frac{3 \\,i \\,e}{48 \\,\\pi^2} \\,\n \\epsilon^{\\mu \\nu \\alpha \\beta} \\,\n F_{\\nu \\alpha} \\,\\mbox{tr} \\,Q \\,(L_\\beta + R_\\beta) \\,\\right\\} ,\n \\label{WZW2}\n\\end{eqnarray}\nwhere $Q$ is the SU(2) charge matrix given as\n\\begin{equation}\n Q \\ = \\ \\left( \\begin{array}{cc}\n \\frac{2}{3} & 0 \\\\\n 0 & - \\,\\frac{1}{3} \\\\\n \\end{array} \\right) .\n\\end{equation}\n(As is well-known, although $S_{WZ} [U]$ vanishes in the SU(2) case,\nits gauge variation does not. We therefore retain it here.) \nBy construction, i.e. 
as a consequence of the\n``trial and error'' gauging {\\it \\`{a} la} Witten \\cite{Witten83}, the gauged\nWess-Zumino-Witten action is invariant under the following\ninfinitesimal gauge transformation: \n\\begin{equation}\n \\delta U \\ = \\ i \\,\\epsilon (x) \\,[Q, U] , \\ \\ \n \\delta U^\\dagger \\ = \\ i \\,\\epsilon (x) \\,[Q, U^\\dagger], \\ \\ \n \\delta \\,A_\\mu \\ = \\ - \\,\\frac{1}{e} \\,\\,\\partial_\\mu \\,\\epsilon (x) .\n \\label{fullGT}\n\\end{equation}\nLet us first see what answer we obtain for the Noether current\nif we apply the Gell-Mann-Levy method to the above action (\\ref{WZW2}).\nThe transformation we consider to this end is given by\n\\begin{equation}\n \\delta^\\prime U \\ = \\ i \\,\\epsilon (x) \\,[Q, U] , \\ \\ \n \\delta^\\prime U^\\dagger \\ = \\ i \\,\\epsilon (x) \\,[Q, U^\\dagger], \\ \\ \n \\delta^\\prime \\,A_\\mu \\ = \\ 0 . \n\\end{equation}\nMaking use of the relation\n\\begin{eqnarray}\n \\delta^\\prime \\,S_{WZW} &=& - \\,\\int \\,d^4 x \\,\\,\\partial_\\mu \\epsilon (x) \\,\n \\frac{1}{48 \\,\\pi^2} \\,\\epsilon^{\\mu \\nu \\alpha \\beta} \\,\\mbox{tr} \\,\n (L_\\nu \\,L_\\alpha \\,L_\\beta ) \\nonumber \\\\\n &\\,& - \\ \\int \\,d^4 x \\,\\,\\partial_\\nu \\,\\epsilon (x) \\,\\frac{3 \\,i \\,e}{48 \\,\\pi^2}\n \\,\\epsilon^{\\mu \\nu \\alpha \\beta} \\,A_\\mu \\,\\partial_\\alpha \\,\n \\mbox{tr} \\,Q \\,(L_\\beta + R_\\beta),\n\\end{eqnarray}\nwe readily find that the corresponding Noether current is given by\n\\begin{eqnarray}\n J^\\mu_{\\rm I} \\ \\equiv \\ \\frac{\\delta \\,(\\delta^\\prime \\,S_{WZW})}\n {\\delta (\\partial_\\mu \\,\\epsilon (x))} &=&\n - \\,\\frac{1}{48 \\,\\pi^2} \\,\\,\\epsilon^{\\mu \\nu \\alpha \\beta} \\,\\,\n \\mbox{tr} \\,(L_\\nu \\,L_\\alpha \\,L_\\beta) \\nonumber \\\\\n &\\,& + \\,\\frac{3 \\,i \\,e}{48 \\,\\pi^2} \\,\\epsilon^{\\mu \\nu \\alpha \\beta} \\,\\,\n A_\\nu \\,\\partial_\\alpha \\,\n \\mbox{tr} \\,Q \\,(L_\\beta + R_\\beta) .\n\\end{eqnarray}\nOne can also verify that this current is invariant under the full\ngauge transformation (\\ref{fullGT}). Unfortunately, this current is not conserved.\nIn fact, we find that\n\\begin{eqnarray}\n \\partial_\\mu \\,J^\\mu_{\\rm I} &=& \\frac{3 \\,i \\,e}{48 \\,\\pi^2} \\,\\,\n \\epsilon^{\\mu \\nu \\alpha \\beta} \\,\\partial_\\mu A_\\nu \\,\\,\n \\partial_\\alpha \\,\\mbox{tr} \\,Q \\,(L_\\beta + R_\\beta) \\ \\neq \\ 0. \\label{div_current}\n\\end{eqnarray}\nHowever, one can verify that the r.h.s. of (\\ref{div_current}) can be written\nas the total divergence of another four-vector,\n\\begin{equation}\n \\partial_\\mu \\,J^\\mu_{\\rm I} \\ = \\ \\partial_\\mu \\,X^\\mu,\n\\end{equation}\nwith\n\\begin{equation}\n X^\\mu \\ \\equiv \\ \\frac{3 \\,i \\,e}{48 \\,\\pi^2} \\,\n \\epsilon^{\\mu \\nu \\alpha \\beta} \\,A_\\nu \\,\\partial_\\alpha \\,\n \\mbox{tr} \\,Q \\,(L_\\beta + R_\\beta) .\n\\end{equation}\nThis means that, if we define another current $J^\\mu_{\\rm II}$ by\n\\begin{equation}\n J^\\mu_{\\rm II} \\ \\equiv \\ J^\\mu_{\\rm I} \\ - \\ X^\\mu \\ = \\ \n - \\,\\frac{1}{48 \\,\\pi^2} \\,\\epsilon^{\\mu \\nu \\alpha \\beta} \\,\n \\mbox{tr} \\,(L_\\nu \\,L_\\alpha \\,L_\\beta ),\n\\end{equation}\nthen $J^\\mu_{\\rm II}$ is conserved. 
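Indeed, the conservation of $J^\\mu_{\\rm II}$ holds as an identity, independently\nof any equation of motion. To see this, note that the definition\n$L_\\mu = U \\,\\partial_\\mu U^\\dagger$ implies the Maurer-Cartan relation\n$\\partial_\\mu \\,L_\\nu \\ - \\ \\partial_\\nu \\,L_\\mu \\ = \\ - \\,[L_\\mu , L_\\nu]$, so that\n\\begin{equation}\n \\partial_\\mu \\,\\left[\\, \\epsilon^{\\mu \\nu \\alpha \\beta} \\,\n \\mbox{tr} \\,(L_\\nu \\,L_\\alpha \\,L_\\beta) \\,\\right] \\ = \\ \n - \\,\\frac{3}{2} \\,\\epsilon^{\\mu \\nu \\alpha \\beta} \\,\n \\mbox{tr} \\,([L_\\mu , L_\\nu] \\,L_\\alpha \\,L_\\beta) \\ = \\ \n - \\,3 \\,\\epsilon^{\\mu \\nu \\alpha \\beta} \\,\n \\mbox{tr} \\,(L_\\mu \\,L_\\nu \\,L_\\alpha \\,L_\\beta) \\ = \\ 0 ,\n\\end{equation}\nwhere the last trace vanishes because a cyclic shift of the four contracted\nindices is an odd permutation.\n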
The price to pay is that the new current\n$J^\\mu_{\\rm II}$ is no longer gauge-invariant.\n\nIncidentally, in the case of Poincar\\'{e} symmetry, as opposed to internal symmetry,\nthe ambiguous nature of the Noether current is widely known.\nFor example, in the case of quantum\nchromodynamics (QCD), the 2nd-rank energy-momentum tensor obtained from\na naive Noether procedure does not satisfy the desired symmetry property\nunder the exchange of two Lorentz indices \\cite{JM90}. However, there exists\na well-known procedure for ``improving'' the Noether current by adding\na superpotential (the divergence of an antisymmetric tensor), which does not\nspoil the current conservation. The symmetric energy-momentum tensor\nof QCD obtained in such a procedure is sometimes called the Belinfante\nsymmetrized energy-momentum tensor. \n\nSummarizing the analysis in this subsection, we have applied the familiar\nGell-Mann-Levy method to the gauged Wess-Zumino-Witten action for\nobtaining a Noether current as a candidate for the electromagnetic hadron\ncurrent. However, we have ended up\nwith two different forms of current, i.e. $J^\\mu_{\\rm I}$ and $J^\\mu_{\\rm II}$.\nThe current $J^\\mu_{\\rm I}$ is gauge-invariant but not conserved, while\nthe current $J^\\mu_{\\rm II}$ is conserved but not gauge-invariant.\nAs pointed out in the paper by Son and Stephanov \\cite{SS08}, one can\nconstruct a third current, which satisfies both gauge invariance and current\nconservation, by using the ``trial and error'' gauging method proposed by\nWitten. It is given by\n\\begin{eqnarray}\n J^\\mu_{\\rm III} &=& - \\,\\frac{1}{48 \\,\\pi^2} \\,\\epsilon^{\\mu \\nu \\alpha \\beta} \\,\n \\mbox{tr} \\,(L_\\nu \\,L_\\alpha \\,L_\\beta) \\nonumber \\\\\n &\\,& - \\,\\frac{3 \\,i \\,e}{48 \\,\\pi^2} \\,\\epsilon^{\\mu \\nu \\alpha \\beta} \\,\n \\partial_\\nu \\,[\\,A_\\alpha \\,\\mbox{tr} \\,Q \\,(L_\\beta + R_\\beta) \\,].\n\\end{eqnarray}\nUnfortunately, it is not a current derived from the gauged Wess-Zumino-Witten\naction on the basis of a definite prescription guided by the Noether principle.\n\nIn this way, we must conclude that, quite differently from the case of\nscalar electrodynamics, the standard Noether method does not do the\ndesired job for the gauged Wess-Zumino-Witten action, in the sense \nthat it fails to give a candidate for the electromagnetic hadron current\nsatisfying both gauge invariance and conservation.\nIn the next subsection, we shall investigate the nature of another candidate\nfor the electromagnetic hadron current, i.e. the source current, which is defined\nthrough the equation of motion for the electromagnetic field. \n\n\n\\subsection{Matter current as the source of the Maxwell equation}\n\nThe full action of the two-flavor Skyrme model coupled to the\nelectromagnetic field $A_\\mu$ is given by\n\\begin{equation}\n S \\ = \\ S_\\gamma [A_\\mu] \\ + \\ S_{Skyrme} [U, A_\\mu] \\ + \\ \n S_{WZW} [U, A_\\mu] .\n\\end{equation}\nHere, the 1st term\n\\begin{equation}\n S_\\gamma \\ = \\ - \\,\\frac{1}{4} \\,\\int \\,d^4 x \\,\\,F_{\\mu \\nu} \\,F^{\\mu \\nu}\n\\end{equation}\nis the kinetic term of the electromagnetic field, while the 2nd term,\n$S_{Skyrme} [U,A_\\mu]$, stands for the non-anomalous part of the action\nfor the two-flavor Skyrme model minimally coupled to the electromagnetic\nfield. The 3rd term, i.e. 
$S_{WZW} [U, A_\\mu]$, is the gauged\nWess-Zumino-Witten action given by (\\ref{WZW2}).\nIn the following, we shall discard\nthe part $S_{Skyrme} [U, A_\\mu]$ for simplicity, since it plays no essential role\nin our discussion below. The Euler-Lagrange equation of motion for the\nelectromagnetic field therefore follows from\n\\begin{equation}\n \\frac{\\delta}{\\delta A_\\nu} \\,\\left\\{\\,S_\\gamma [A_\\mu] \\ + \\ \n S_{WZW} [U, A_\\mu] \\,\\right\\} \\ = \\ 0 ,\n\\end{equation}\nwhich gives the Maxwell equation\n\\begin{equation}\n \\partial_\\mu \\,F^{\\mu \\nu} \\ = \\ j^\\nu ,\n\\end{equation}\nwith the definition of the source current $j^\\nu$ as\n\\begin{equation}\n e \\,j^\\nu \\ \\equiv \\ - \\,\\frac{\\delta}{\\delta A_\\nu} \\,S_{WZW} [U, A_\\mu] .\n\\end{equation}\n(Naturally, if we had included the part $S_{Skyrme} [U, A_\\mu]$, it would also\ncontribute to the source current of the Maxwell equation. However, this part\nof the current is conserved by itself and does not cause any trouble of the kind\ndiscussed below.) An immediate question is whether the above definition, given as a\nfunctional derivative of the gauged Wess-Zumino-Witten action with respect\nto the electromagnetic fields, offers us the same answer as obtained with\nthe Noether prescription. The answer is no.\nWe find that the source current is given by\n\\begin{eqnarray}\n j^\\mu &=& - \\,\\frac{1}{48 \\,\\pi^2} \\,\\epsilon^{\\mu \\nu \\alpha \\beta} \\,\\,\n \\mbox{\\rm tr} \\,(L_\\nu \\,L_\\alpha \\,L_\\beta) \\nonumber \\\\\n &\\,& + \\,\\frac{3 \\,i \\,e}{96 \\,\\pi^2} \\,\\epsilon^{\\mu \\nu \\alpha \\beta} \\,\\,\n F_{\\nu \\alpha} \\,\\mbox{\\rm tr} \\,Q \\,(L_\\beta + R_\\beta) \\nonumber \\\\\n &\\,& + \\,\\frac{3 \\,i \\,e}{48 \\,\\pi^2} \\,\\,\\epsilon^{\\mu \\nu \\alpha \\beta} \\,\\,\n \\partial_\\nu \\,\\left[\\,\n A_\\alpha \\,\\mbox{\\rm tr} \\,Q \\,(L_\\beta + R_\\beta) \\,\\right] , \n \\label{source_current}\n\\end{eqnarray}\nwhich does not coincide with any of the currents\n$J^\\mu_{\\rm I}$, $J^\\mu_{\\rm II}$, and $J^\\mu_{\\rm III}$ discussed in the\nprevious subsection. Somewhat unexpectedly, it turns out that this current\n$j^\\mu$ is not gauge-invariant. A more serious problem is that it\nis not conserved, owing to the presence of the 2nd term\nof (\\ref{source_current}). In fact, we find that\n\\begin{equation}\n \\partial_\\mu \\,j^\\mu \\ = \\ \\partial_\\mu \\,X^\\mu \\ \\neq \\ 0,\n\\end{equation}\nwith\n\\begin{equation}\n X^\\mu \\ = \\ \\frac{3 \\,i \\,e}{48 \\,\\pi^2} \\,\\epsilon^{\\mu \\nu \\alpha \\beta} \\,\\,\n A_\\nu \\,\\partial_\\alpha \\,\\mbox{\\rm tr} \\,Q \\,(L_\\beta + R_\\beta) .\n\\end{equation}\nAs emphasized in the example of scalar electrodynamics, non-conservation\nof the source current is not permissible, since it causes an incompatibility\nwith the fundamental equation of electromagnetism, i.e.\nthe Maxwell equation \\cite{Jackiw85}.\nHow can we come to terms with this trouble? One possible attitude would\nbe to follow the argument given by Kaymakcalan, Rajeev and Schechter many\nyears ago \\cite{KRS84}.\nThey argue that the low-energy effective action for QCD involves many more\nnew fields and interactions, so one should not worry too much about the\ncomplete consistency of the equations of motion. The effective action is, after all,\nbeing used as a handy mnemonic to read off the relevant vertices. 
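Let us note, in this connection, that the offending divergence is explicitly\nproportional to the background field strength. With the help of the Bianchi\nidentity $\\epsilon^{\\mu \\nu \\alpha \\beta} \\,\\partial_\\mu \\,F_{\\nu \\alpha} = 0$,\none finds from (\\ref{source_current})\n\\begin{equation}\n \\partial_\\mu \\,j^\\mu \\ = \\ \\frac{3 \\,i \\,e}{96 \\,\\pi^2} \\,\n \\epsilon^{\\mu \\nu \\alpha \\beta} \\,F_{\\mu \\nu} \\,\\partial_\\alpha \\,\n \\mbox{\\rm tr} \\,Q \\,(L_\\beta + R_\\beta) ,\n\\end{equation}\nso that no trouble arises as long as no background electromagnetic field is\nswitched on. 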
The\ngauged Wess-Zumino-Witten action certainly describes the typical\nanomalous processes involving photons, like $\\pi^0 \\rightarrow 2 \\,\\gamma$\nand\/or $\\gamma \\rightarrow 3 \\,\\pi$, consistently with the low energy theorems,\ni.e. the anomalous Ward identities.\n\nNow we are in a position to pinpoint the origin of the somewhat astounding conclusion\nobtained in the paper \\cite{EHIIM11}, i.e. the anomalous induction of a net electric\ncharge for a nucleon in magnetic fields. This conclusion follows from\nthe electromagnetic hadron current given as half the sum\nof $j^\\mu_B$ in (\\ref{current_B}) and $j^\\mu_{anm}$ in (\\ref{current_anm}).\nSetting $N_c = 3$, this reduces to\n\\begin{eqnarray}\n j^\\mu &=& - \\,\\frac{1}{48 \\,\\pi^2} \\,\\epsilon^{\\mu \\nu \\alpha \\beta} \\,\\,\n \\mbox{\\rm tr} \\,(L_\\nu \\,L_\\alpha \\,L_\\beta) \\nonumber \\\\\n &\\,& + \\,\\frac{i \\,e \\,N_c}{192 \\,\\pi^2} \\,\\epsilon^{\\mu \\nu \\alpha \\beta} \\,\\,\n F_{\\nu \\alpha} \\,\\,\\mbox{\\rm tr} \\,\\,\\tau_3 \\,(L_\\beta + R_\\beta) .\n \\label{current_B+anm}\n\\end{eqnarray}\nIn consideration of the fact that $Q = \\frac{1}{6} + \\frac{\\tau_3}{2}$,\nthis current just coincides with the sum of the 1st and 2nd terms\nin the source current (\\ref{source_current}), which we have derived above.\nSince the 3rd term of the current (\\ref{source_current}) is of a\ntotal derivative form, it does not contribute to the net charge of a\nnucleon. We thus find that the 2nd term of the current (\\ref{source_current}),\nor of the current (\\ref{current_B+anm}), is\nthe cause of the trouble, preventing\nthe conservation of the source current of the Maxwell equation.\nIn any case, what we can say definitely from the analysis above is that\nthe anomalous induction of a non-zero net charge for a nucleon (or a Skyrmion)\nclaimed in the paper \\cite{EHIIM11} is inseparably connected with\nthis unfavorable feature of the gauged Wess-Zumino-Witten action.\nStill, what is lacking in our understanding is a deep explanation of why the\ngauged Wess-Zumino-Witten action, which was constructed so as to fulfill\nelectromagnetic gauge invariance with use of the ``trial and error'' method,\nis not consistent with the Maxwell equation. \n \nA final comment is on a related work by Kharzeev, Yee, and Zahed \\cite{KEZ11},\nwhich was motivated by the paper \\cite{EHIIM11}.\nStarting with a simple effective lagrangian of QCD (it corresponds to\nthe lowest power term in the pion field in the gauged Wess-Zumino-Witten\naction), they investigated the effect of the anomaly-induced charge distribution\nin the nucleon. Under a certain kinematical approximation concerning the\nclassical equation of motion for the pion field in a nucleon, they\nconclude that the abelian anomaly of QCD induces a quadrupole moment\nfor a neutron but does not induce a net electric charge for it.\nThe last statement, i.e. the absence of an induced net electric charge for a neutron,\nappears to be consistent with the nature of their effective lagrangian\nand also with the intuitive consideration given in the introduction of\nthe present paper. \n\n\n\\section{Summary and conclusion}\n\nTo conclude, motivated by the recent claim that, under external magnetic\nfields, the anomalous couplings between mesons and electromagnetic fields\ncontained in the gauged Wess-Zumino-Witten action induce a non-zero\nnet electric charge for a nucleon, we have carefully re-investigated the\nproblem of how to define the electromagnetic hadron current from this\nwidely-known action. 
To this end, we first compare the two methods of\nobtaining the electromagnetic matter current for the familiar lagrangian\nof scalar electrodynamics. One is the Gell-Mann-Levy\nmethod for obtaining the Noether current, while the other is the method of\nusing the equations of motion to define the source current.\nFor this standard lagrangian, we confirm that these\ntwo methods give precisely the same form of the electromagnetic\nmatter current. It can also be verified that this current is gauge-invariant\nand conserved. Unfortunately, this is not the case with\nthe gauged Wess-Zumino-Witten action.\nThat is, the currents obtained by these two methods do not coincide with\neach other. Particularly troublesome here is the fact that the source current\nof the Maxwell equation is not conserved. This means that\nthe gauged Wess-Zumino-Witten action, which was constructed so as to\nfulfill electromagnetic gauge invariance by using the ``trial and error''\nmethod, is not consistent with the fundamental equation\nof electromagnetism. Although mysterious, it seems at least\nclear that the recently claimed anomalous induction of a net electric charge\nfor a nucleon in magnetic fields is inseparably connected with this\nunwelcome feature of the gauged Wess-Zumino-Witten action. \n\n\\begin{acknowledgments}\nThe author would like to thank Prof. T.~Kubota for useful discussions.\nThis work is supported in part by a Grant-in-Aid for Scientific\nResearch from the Ministry of Education, Culture, Sports, Science\nand Technology, Japan (No.~C-21540268).\n\\end{acknowledgments}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}