diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzlkhr" "b/data_all_eng_slimpj/shuffled/split2/finalzzlkhr" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzlkhr" @@ -0,0 +1,5 @@ +{"text":"\\subsubsection*{\\bibname}}\n\\bibliographystyle{apalike}\n\n\n\n\\usepackage{subfig}\n\n\n\\newenvironment{talign*}\n{\\let\\displaystyle\\textstyle\\csname align*\\endcsname}\n{\\endalign}\n\\newenvironment{talign}\n{\\let\\displaystyle\\textstyle\\align}\n{\\endalign}\n\n\\def\\mathcal{H}{\\mathcal{H}}\n\\def\\mathbb{N}{\\mathbb{N}}\n\\def\\mathbb{R}{\\mathbb{R}}\n\\def\\mathcal{X}{\\mathcal{X}}\n\\def\\hat{\\Pi}_{\\textup{MC}}{\\hat{\\Pi}_{\\textup{MC}}}\n\\def\\hat{\\Pi}_{\\textup{MLMC}}{\\hat{\\Pi}_{\\textup{MLMC}}}\n\\def\\hat{\\Pi}_{\\textup{BQ}}{\\hat{\\Pi}_{\\textup{BQ}}}\n\\def\\hat{\\Pi}_{\\textup{MLBQ}}{\\hat{\\Pi}_{\\textup{MLBQ}}}\n\\def\\textup{MSE}{\\textup{MSE}}\n\\def\\textup{Cost}{\\textup{Cost}}\n\\def\\mathbb{E}{\\mathbb{E}}\n\\def\\mathbb{V}{\\mathbb{V}}\n\\DeclareMathOperator*{\\argmin}{argmin}\n\\def\\textup{Err}{\\textup{Err}}\n\n\n\\newtheorem{assumption}{Assumption}\n\\newtheorem{theorem}{Theorem}\n\\newtheorem{alg}{Algorithm}\n\\newtheorem{lemma}{Lemma}\n\\newtheorem{corollary}{Corollary}\n\\newtheorem{proposition}{Proposition}\n\\newtheorem{fact}{Fact}\n\\newtheorem{definition}{Definition}\n\\newtheorem{property}{Property}\n\\newtheorem{remark}{Remark}\n\\newtheorem{example}{Example}\n\n\n\n\\title{Multilevel Bayesian Quadrature}\n\n\\author{\n Kaiyu Li\\textsuperscript{1}, Daniel Giles\\textsuperscript{1}, Toni Karvonen\\textsuperscript{2}, Serge Guillas\\textsuperscript{13}, Fran\\c{c}ois-Xavier Briol \\textsuperscript{13} \\\\\n \\textsuperscript{\\bf{1}}Department of Statistical Science, University College London\\\\\n \\textsuperscript{\\bf{2}}Department of Mathematics and Statistics, University of Helsinki\\\\\n \\textsuperscript{\\bf{3}}The Alan Turing Institute\\\\\n \\texttt{\\{kaiyu.li.19,d.giles\\}@ucl.ac.uk,}\\texttt{toni.karvonen@helsinki.fi,} \\texttt{\\{s.guillas,f.briol\\}@ucl.ac.uk} \\\\\n}\n\\date{}\n\n\n\n\n\\begin{document}\n\n\\maketitle\n\n\\begin{abstract}\nMultilevel Monte Carlo is a key tool for approximating integrals involving expensive scientific models. The idea is to use approximations of the integrand to construct an estimator with improved accuracy over classical Monte Carlo. We propose to further enhance multilevel Monte Carlo through Bayesian surrogate models of the integrand, focusing on Gaussian process models and the associated Bayesian quadrature estimators. We show using both theory and numerical experiments that our approach can lead to significant improvements in accuracy when the integrand is expensive and smooth, and when the dimensionality is small or moderate. We conclude the paper with a case study illustrating the potential impact of our method in landslide-generated tsunami modelling, where the cost of each integrand evaluation is typically too large for operational settings. \n\\end{abstract}\n\n\n\\section{INTRODUCTION}\n\\label{sec:Intro}\n\nThis paper considers the task of approximating an unknown integral, or expectation, when evaluations of the integrand are expensive, either from a computational or financial point of view. This is a common problem in statistics and machine learning, where one commonly needs to marginalise random variables, compute normalisation constants of probability density functions or compute posterior expectations. 
However the problem is even more pronounced when doing uncertainty quantification for large mathematical models in science and engineering. For example, a scientist might be uncertain about the value of certain model parameters, and might therefore wish to estimate the expected value of some quantity of interest involving the model with respect to distributions on these parameters. \n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.3\\textwidth]{images\/sketch_Aug.png}\n {\\includegraphics[width=0.3\\textwidth]{images\/heatmap.png}}%\n\\caption{Tsunami Model. \\emph{Left}: Sketch of the submerged landslide-generated tsunami. \\emph{Right:} Solution of the differential equation through time and space.}\n\\label{fig:sketch}\n\\end{figure}\n\nAn example which illustrates this problem (and which will be revisited in \\Cref{sec:result}) is the modelling of landslide-generated tsunamis, where the evolution of the wave through space and time is described through a complex system of differential equations \\citep{behrens2015tsunami, reguly2018volna, giles2020performance, Marras2021tsunami}; see Figure \\ref{fig:sketch} for an illustration. In this context, designers of tsunami resistant buildings, prevention structures or early warning systems might be interested in estimating the total wave energy or momentum flux of the tsunami at a fixed location. These quantities are functions of the solution of the differential equations, but there will usually be some uncertainty associated with certain physical parameters, such as those characterising the slope or size of the landslide. This uncertainty is represented through probability distributions, leading to the need to compute the expected value of the quantities of interest. The main challenge here is that in order to obtain high accuracy estimates, it is necessary to use very fine time and space meshes to solve the differential equations, leading to large computational costs.\n\n\n\nA common approach to the approximation of such integrals is Monte Carlo (MC) methods, which include a wide range of simulation-based algorithms. Of particular relevance is \\emph{Multilevel Monte Carlo} (MLMC) \\citep{giles2015multilevel} and its various extensions \\citep{giles2009multilevel,Dick2011,kuo2015multi,kuo2017multilevel}. MLMC is designed for expensive integrands where cheap approximations are available at several levels of accuracy. Such models are called multifidelity models \\citep{Peherstorfer2018}, and are widely used, including for atmospheric dispersion modelling \\citep{katsiolides2018multilevel}, biochemical reaction network modelling \\citep{Warne2019}, reliability theory \\citep{Aslett2017}, erosion and flood risk modelling \\citep{clare2021assessing}, pricing in finance \\citep{dempster2018high}, the design of advanced aerospace vehicles \\citep{geraci2017multifidelity}, or tsunami modelling \\citep{sanchez2016uncertainty}. \n\nMLMC evaluates the cheap but inaccurate approximate integrands a large number of times, and only evaluates the high-accuracy but expensive approximate integrands a small number of times. For the tsunami example above, standard MC would use a small time and space mesh, and evaluate the integrand at fixed high accuracy level. In contrast, MLMC will use several approximations with different meshes (each corresponding to a level), and use fewer evaluations of the expensive levels. For a fixed computational budget, this allows MLMC to obtain much more accurate estimate than standard MC. 
Beyond the scientific application areas above, this has also led MLMC to be used to enhance computational tools including Markov chain Monte Carlo \\citep{Dodwell2019,Wang2022}, particle filters \\citep{gregory2017seamless}, approximate Bayesian computation \\citep{Jasra2019}, Bayesian experimental design \\citep{Goda2020} or variational inference \\citep{Shi2021,Fujisawa2021}.\n\nUnfortunately, most multilevel methods suffer from the fact that they are simulation-based methods which neglect all known properties of the integrand. This makes the methods widely applicable, but means that their convergence rate will be slow when the integrand satisfies stronger regularity conditions. This is clearly sub-optimal when working with expensive models, where the number of evaluations will be limited. In this work, we propose to enhance MLMC through the use of surrogate models which encode properties of the integrand, such as smoothness, sparsity or even periodicity. We focus in particular on Gaussian processes (GPs), which naturally lead to a class of algorithms that we call \\emph{multilevel Bayesian quadrature} (MLBQ). \n\nMLBQ is a Bayesian probabilistic numerical method \\citep{Hennig2015,Cockayne2017BPNM,Wenger2021,Hennig2022}, and more specifically a Bayesian quadrature algorithm (BQ; \\citealp{Diaconis1988,o1991bayes,Rasmussen2003}); see \\cite{briol2019probabilistic} for a recent overview. As we will see in the remainder of the paper, this approach can lead to a posterior distribution on the value of the integral, with (1) significant improvements in accuracy over existing methods when using the posterior mean as a point estimate, and (2) the ability to quantify our uncertainty (given limited function evaluations) over the value of the integral. \n\n\n\\section{BACKGROUND}\n\\label{sec:background}\n\nWe now review key components of our approach: MC, multilevel models, MLMC and BQ.\n\n\n\\paragraph{Monte Carlo Methods}\nLet $\\Pi$ be a probability distribution on $\\Omega \\subseteq \\mathbb{R}^d$ ($d \\in \\mathbb{N}_+$) and let $f \\colon \\Omega \\to \\mathbb{R}$ be some integrand.\nWe focus on approximating $\\Pi[f] \\coloneqq \\int_\\Omega f(\\omega) \\Pi(d\\omega)$ and assume that $f$ is square integrable with respect to $\\Pi$ (i.e.\\ $\\Pi[f^2] < \\infty$). To tackle this task, we use pointwise evaluations of $f$: $\\{\\omega_i,f(\\omega_i)\\}_{i=1}^n$ for $n \\in \\mathbb{N}_+$ and $\\omega_i \\in \\Omega$ for $i \\in \\{1,\\ldots,n\\}$. For example, an MC estimator \\citep{robert2004monte,rubinstein2016simulation} takes the form\n\\begin{talign*}\n \\hat{\\Pi}_{\\textup{MC}}[f] \\coloneqq \\frac{1}{n} \\sum_{i=1}^n f(\\omega_i), \n\\end{talign*}\nwhere $\\{\\omega_i\\}_{i=1}^n \\sim \\Pi$; that is, $\\{\\omega_i\\}_{i=1}^n$ are independent and identically distributed (IID) realisations from $\\Pi$.\nAs $n \\rightarrow \\infty$ and under mild regularity conditions, MC estimators converge to $\\Pi[f]$, making these approaches widely applicable. However, their performance when $n$ is finite and relatively small can be quite poor, which is a common issue when $f$ is expensive to evaluate, such as for multilevel models. 
Alternative equal-weight estimators suffering from similar drawbacks include quasi-Monte Carlo (QMC) or randomised QMC \\citep{owen2013monte}, which use $\\{\\omega_i\\}_{i=1}^n$ that form a space-filling design.\n\n\n\n\\paragraph{Monte Carlo for Multifidelity Models}\n\nFor multifidelity models, we can improve on MC through MLMC.\nSuppose that $f_L=f$, and $f_l \\colon \\Omega \\rightarrow \\mathbb{R}$ for $l \\in \\{1,\\ldots,L-1\\}$ are approximations of $f$ which increase both in accuracy and cost with the level $l$.\nThe integral of interest can be expressed through a telescoping sum as\n\\begin{talign}\\label{eq:telescopic}\n \\Pi[f] =\\Pi[f_L] &= \\Pi[f_0]+\\sum^L_{l=1} \\Pi[f_l-f_{l-1}].\n\\end{talign}\nInstead of using a single MC estimator for $\\Pi[f]$, we can estimate each term in the sum separately. Suppose that $\\{\\{\\omega_{(l, i)}\\}_{i=1}^{n_l}\\}_{l=0}^L \\sim \\Pi$; then the MLMC estimator is\n\\begin{talign*}\n \\hat{\\Pi}_{\\textup{MLMC}}[f] \\coloneqq{}& \\hat{\\Pi}_{\\textup{MC}}[f_0] + \\sum_{l=1}^L \\hat{\\Pi}_{\\textup{MC}}[f_l -f_{l-1} ] \\\\\n ={}& \\frac{1}{n_0}\\sum^{n_0}_{i=1}f_0(\\omega_{(0,i)}) + \\sum^L_{l=1} \\frac{1}{n_l} \\sum^{n_l}_{i=1} (f_l(\\omega_{(l,i)})-f_{l-1}(\\omega_{(l,i)})).\n\\end{talign*}\nFor expensive integrands, there are two main advantages to this approach over MC. Firstly, each integrand (except the first) in the telescoping sum is of the form $f_l - f_{l-1}$, which will have low variance since we expect $f_l \\approx f_{l-1}$ and hence $\\mathbb{V}[f_l - f_{l-1}] \\approx \\mathbb{V}[0] = 0$. As a result, a small $n_l$ is sufficient to estimate such terms accurately through MC. Secondly, we have assumed that the functions are cheaper to evaluate for small $l$, so the initial terms in the sum can be estimated accurately through MC estimation with a large $n_l$. \n\nThese remarks can be made precise by considering the computational cost necessary to obtain a given accuracy $\\varepsilon$, or equivalently a given mean-squared error (MSE) $\\varepsilon^2$. For an estimator $\\hat{\\Pi}[f]$, denote by $\\textup{Cost}(\\hat{\\Pi},\\varepsilon)$ this cost and by $\\textup{MSE}(\\hat{\\Pi}) \\coloneqq \\mathbb{E}[(\\hat{\\Pi}[f] -\\Pi[f])^2]=\\mathbb{V}[\\hat{\\Pi}[f]] + (\\mathbb{E}[\\hat{\\Pi}[f]]-\\Pi[f])^2$ the MSE, where $\\mathbb{E}$ and $\\mathbb{V}$ denote the mean and variance with respect to all random variables in the estimator. For MC, $\\mathbb{E}[\\hat{\\Pi}_{\\textup{MC}}[f]]=\\Pi[f]$ and $\\textup{MSE}(\\hat{\\Pi}_{\\textup{MC}}) = \\mathbb{V}[\\hat{\\Pi}_{\\textup{MC}}[f]] = \\mathbb{V}[f] n^{-1}$. To achieve an MSE of $\\varepsilon^2$, $n$ should be at least $\\varepsilon^{-2}\\mathbb{V}[f]$. If $C$ is the computational cost per sample, an MSE of $\\varepsilon^2$ will lead to $\\textup{Cost}(\\hat{\\Pi}_{\\textup{MC}},\\varepsilon) = \\varepsilon^{-2}\\mathbb{V}[f] C$.\n\nAs we will now see, MLMC can provide significant improvements over MC. Let $C_0$ denote the cost of $f_0$, $C_l$ the cost of $f_l-f_{l-1}$, $V_0 = \\mathbb{V}[f_0]$ and $V_l = \\mathbb{V}[f_l - f_{l-1}]$. The total cost of MLMC is $\\sum_{l=0}^L n_l C_l$. 
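As a concrete illustration, a minimal Python sketch of the MLMC estimator defined above is given below; it assumes that the user supplies the approximations $f_0,\\ldots,f_L$ as vectorised callables and a sampler for $\\Pi$, and all function and argument names are illustrative rather than part of any existing implementation.
\\begin{verbatim}
import numpy as np

def mlmc_estimate(fs, sample_pi, n_per_level, seed=None):
    # fs: list of callables [f_0, ..., f_L]; sample_pi(n, rng) returns n IID
    # samples from Pi; n_per_level: sample sizes [n_0, ..., n_L].
    rng = np.random.default_rng(seed)
    w0 = sample_pi(n_per_level[0], rng)
    estimate = np.mean(fs[0](w0))              # MC estimate of Pi[f_0]
    for l in range(1, len(fs)):                # MC estimates of Pi[f_l - f_{l-1}]
        wl = sample_pi(n_per_level[l], rng)
        estimate += np.mean(fs[l](wl) - fs[l - 1](wl))
    return estimate
\\end{verbatim}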
The MSE and cost to achieve a MSE of $\\varepsilon^2$ are hence\n\\begin{talign*}\n\\textup{MSE}(\\hat{\\Pi}_{\\textup{MLMC}}) &= \\mathbb{V}[\\hat{\\Pi}_{\\textup{MLMC}}[f]] = \\sum_{l=0}^L n_l^{-1}V_l, \\\\\n\\textup{Cost}(\\hat{\\Pi}_{\\textup{MLMC}},\\varepsilon) &= \\varepsilon^{-2}(\\sum^L_{l=0}\\sqrt{V_lC_l})^2.\n\\end{talign*}\nTo compare this cost with that of MC, we will consider two cases.\nFirstly, if $V_lC_l$ increases rapidly with levels, we will have $\\textup{Cost}(\\hat{\\Pi}_{\\textup{MLMC}},\\varepsilon) \\approx \\varepsilon^{-2} V_LC_L$. Secondly, if $V_lC_l$ decreases rapidly with levels, $\\textup{Cost}(\\hat{\\Pi}_{\\textup{MLMC}},\\varepsilon) \\approx \\varepsilon^{-2} V_0C_0$. For standard MC, the variance of the estimate is $\\mathbb{V}[f] = \\mathbb{V}[f_L] \\approx \\mathbb{V}[f_0]$ and the cost of evaluating $f_L$ is similar to the cost of evaluating $f_L -f_{L-1}$, so we have $\\textup{Cost}(\\hat{\\Pi}_{\\textup{MC}},\\varepsilon) \\approx \\varepsilon^{-2}V_0C_L$. Since $V_0 > V_L$ and $C_L > C_0$, we will therefore have $\\textup{Cost}(\\hat{\\Pi}_{\\textup{MC}},\\varepsilon) > \\textup{Cost}(\\hat{\\Pi}_{\\textup{MLMC}},\\varepsilon)$ regardless of the behaviour of $V_l C_l$. Indeed, in the first case $\\textup{Cost}(\\hat{\\Pi}_{\\textup{MC}},\\varepsilon) \\approx (V_0\/V_L) \\textup{Cost}(\\hat{\\Pi}_{\\textup{MLMC}},\\varepsilon)$, whilst in the second case, $\\textup{Cost}(\\hat{\\Pi}_{\\textup{MC}},\\varepsilon) \\approx (C_L \/ C_0) \\textup{Cost}(\\hat{\\Pi}_{\\textup{MLMC}},\\varepsilon)$. \n\nThis analysis of MLMC can be extended to find the optimal sample sizes per level given a fixed computational cost $T$ (see Appendix \\ref{appendix_mlmc_optimN} or \\citealp[Section~1.3]{giles2015multilevel} for a similar analysis with optimal sample sizes for a fixed MSE):\n\\begin{talign*}\nn^{\\text{MLMC}} = \\left(n^{\\text{MLMC}}_0, \\ldots, n^{\\text{MLMC}}_L\\right) \\coloneqq \\left(D \\sqrt{\\frac{V_0}{C_0}}, \\ldots, D \\sqrt{\\frac{V_L}{C_L}}\\right)\n\\end{talign*} \nwhere $D = T (\\sum_{l^\\prime=0}^L \\sqrt{V_{l^\\prime} C_{l^\\prime}})^{-1}$. In practice, there are limitations which prevent the direct use of $n^{\\text{MLMC}}_l$. Firstly, $V_l$ is usually unknown, although it can be estimated from data. Unfortunately, estimates of $V_l$ may be unreliable if the sample size at level $l$ is small. Secondly, $f_L$ is usually an approximation to $f$ (as opposed to $f=f_L$). Thirdly, as for our tsunami example, the number of levels can be chosen by the user and it is hence difficult to decide which approximations $f_0, \\ldots, f_L$ to include. \n\n\n\\paragraph{Bayesian Quadrature}\n\nClearly, the MLMC estimator can lead to significant gains, but we note that it focuses solely on sampling from $\\Pi$ and does not utilise properties of~$f$. This is in contrast to BQ, an approach to integration which is based on a GP model of $f$. GPs are widely used as models for deterministic but computationally expensive functions, especially in computer experiments \\citep{santner2003design, sacks1989design} and in spatial statistics \\citep{stein1999interpolation}. We will denote a GP by $\\mathcal{GP}(m, c)$ to emphasise the mean function $m: \\Omega \\to \\mathbb{R}$ and the (symmetric and positive semi-definite) covariance function $c \\colon \\Omega \\times \\Omega \\to \\mathbb{R}$ (also called kernel), which uniquely identify the model. 
Given a $\\mathcal{GP}(m, c)$ prior on $f$ and some observations $\\{\\omega_i,f(\\omega_i)\\}_{i=1}^n$ at pairwise distinct $\\{\\omega_i\\}_{i=1}^n \\subset \\Omega$ for some $n \\in \\mathbb{N}_+$, the posterior on $f$ is also a GP with mean and covariance \\citep{williams2006gaussian}\n\\begin{talign*}\n \\tilde{m}(\\omega)&=m(\\omega)+c(\\omega,W)c(W,W)^{-1}(f(W)-m(W)), \\nonumber \\\\\n \\tilde{c}(\\omega,\\omega')&=c(\\omega,\\omega')-c(\\omega,W)c(W,W)^{-1}c(W,\\omega')\n\\end{talign*}\n for all $\\omega,\\omega' \\in \\Omega$. Here, $W=(\\omega_1,\\omega_2,\\ldots,\\omega_n)^\\top$, $f(W)=(f(\\omega_1),f(\\omega_2),\\ldots,f(\\omega_n))^\\top$, $c(\\omega, W)=c(W,\\omega)^\\top=(c(\\omega,\\omega_1),\\ldots,c(\\omega,\\omega_n))$ and $(c(W,W))_{i,j}=c(\\omega_i,\\omega_j)$ for all $i,j \\in \\{1,\\ldots,n\\}$. \n Prior knowledge on $f$, such as smoothness and periodicity, can be incorporated by specifying $m$ and~$c$. For example, the squared exponential covariance function $c_\\text{SE}(\\omega,\\omega^\\prime)=\\exp\\left( -\\|\\omega-\\omega'\\|_2^2\/\\gamma^2\\right)$ with length-scale $\\gamma > 0$ implies a prior belief that $f$ has infinitely many derivatives. Alternatively, the Mat\\'ern covariance function $\nc_{\\text{Mat\\'ern}}(\\omega,\\omega^\\prime)= 2^{1-v} \\Gamma^{-1}(v)\\left(\\sqrt{2v}\\|\\omega-\\omega^\\prime\\|_2 \/ \\gamma\\right)^v K_v\\left(\\sqrt{2v}\\|\\omega-\\omega^\\prime\\|_2\/ \\gamma\\right)$ with smoothness $v>0$ and length-scale $\\gamma>0$ where $K_v$ is a modified Bessel function of the second kind implies a belief that $f$ is $\\lceil v \\rceil-1$ times differentiable. \n\n\n\n\nBQ \\citep{Diaconis1988,o1991bayes,Rasmussen2003} is an estimator for $\\Pi[f]$ motivated through Bayesian inference. The idea is to specify a prior on $f$, obtain the posterior on $f$ given evaluations of $f$, then consider the implied (pushforward) posterior on $\\Pi[f]$. The most common approach uses a $\\mathcal{GP}(m, c)$ prior; in that case, the posterior on $\\Pi[f]$ is a Gaussian with mean and variance \\citep{briol2019probabilistic}\n\\begin{talign*}\n \\mathbb{E}_{\\text{BQ}}[\\Pi[f]] = \\hat{\\Pi}_{\\textup{BQ}}[f] &= \\Pi[\\tilde{m}] \\\\\n &= \\Pi[m]+\\Pi[c(\\cdot,W)]c(W,W)^{-1} (f(W)-m(W)),\\nonumber \\\\\n \\mathbb{V}_{\\text{BQ}}[\\Pi[f]] &= \\Pi [\\Pi[\\tilde{c}]] \\\\\n & =\\Pi[\\Pi[c]]-\\Pi[c(\\cdot,W)]c(W,W)^{-1}\\Pi[c(W,\\cdot)], \\nonumber\n\\end{talign*}\nwhere $\\Pi[c(\\cdot,W)]=(\\Pi[c(\\cdot,\\omega_1)],\\ldots,\\Pi[c(\\cdot,\\omega_n)])^\\top$ and we use the convention that for a function with two inputs, $\\Pi[\\Pi[\\cdot]]$ always denotes integration once with respect to each input. In contrast with MC methods which rely on central limit theorems, $\\mathbb{V}_{\\text{BQ}}[\\Pi[f]]$ can quantify our uncertainty about $\\Pi[f]$ for finite $n$. \n\nA particular advantage of the formulae above is that they are defined for arbitrary $\\{\\omega_i\\}_{i=1}^n$. 
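For concreteness, the formulae above can be instantiated in a few lines of code. The Python sketch below (with illustrative names; it is not taken from any existing package) uses the one-dimensional case with $\\Pi = \\text{Unif}(0,1)$ and the squared exponential covariance function, for which the kernel mean $\\Pi[c(\\cdot,\\omega)]$ and the initial error $\\Pi[\\Pi[c]]$ are available in closed form.
\\begin{verbatim}
import numpy as np
from scipy.special import erf

def k_se(x, y, gamma):
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / gamma ** 2)

def kernel_mean(w, gamma):        # Pi[c_SE(., w)] for Pi = Unif(0, 1)
    return 0.5 * np.sqrt(np.pi) * gamma * (erf((1 - w) / gamma) + erf(w / gamma))

def initial_error(gamma):         # Pi[Pi[c_SE]] for Pi = Unif(0, 1)
    return (np.sqrt(np.pi) * gamma * erf(1 / gamma)
            - gamma ** 2 * (1 - np.exp(-1 / gamma ** 2)))

def bq_posterior(w, fw, gamma, jitter=1e-10):
    # Posterior mean and variance of Pi[f] under a zero-mean GP(0, c_SE) prior,
    # given evaluations fw = f(w) at 1D points w.
    K = k_se(w, w, gamma) + jitter * np.eye(len(w))
    z = kernel_mean(w, gamma)
    mean = z @ np.linalg.solve(K, fw)
    var = initial_error(gamma) - z @ np.linalg.solve(K, z)
    return mean, max(var, 0.0)
\\end{verbatim}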
A number of choices have been studied including IID \\citep{Rasmussen2003}, QMC \\citep{briol2019probabilistic,Jagadeeswaran2018}, realisations from determinantal point processes \\citep{Bardenet2019}, point sets with symmetry properties \\citep{Karvonen2017symmetric,Karvonen2019} and adaptive designs \\citep{Osborne2012active,Gunter2014,briol2015frank}.\nFor specific point sets and GP priors, $\\hat{\\Pi}_{\\textup{BQ}}[f]$ coincides with classical quadrature rules \\citep{Diaconis1988,Karvonen2017classical}.\n\nThe two main disadvantages of BQ are that (i) as for GPs, the computational cost is $\\mathcal{O}(n^3)$ due to the need to invert $n \\times n$ matrices, and (ii) $\\Pi[c(\\cdot,\\omega)]$ for $\\omega \\in \\Omega$ and $\\Pi[\\Pi[c]]$ are only tractable for some pairs of distributions and covariance functions (see Table 1 in \\citealp{briol2019probabilistic}). That being said, BQ also has much faster convergence rates than classical Monte Carlo methods when $d$ is small or moderate \\citep{briol2019probabilistic,kanagawa2020convergence,wynne2021convergence}. For this reason, BQ has mostly been applied to problems where $n$ is constrained to be small (for example when the integrand is expensive) and the integration measure is relatively simple. This includes problems in global illumination in computer graphics \\citep{Brouillat2009}, cardiac modelling \\citep{Oates2017heart}, engineering control \\citep{Paul2016}, econometrics \\citep{Oettershagen2017}, risk \\citep{CADINI201615} and in variational inference \\citep{Acerbi2018}.\n\n\n\n\n\n\n\n\\section{METHODOLOGY}\n\\label{sec:method}\n\nAlthough MLMC is particularly well-suited to integrals involving multifidelity models, it usually disregards any prior information on the integrand. \nWe now remedy this issue by designing a novel estimator which combines the advantages of BQ and MLMC. \n\nOur proposed algorithm uses the telescopic sum in Equation \\eqref{eq:telescopic} and approximates each of the terms through BQ rather than MC. Here and throughout the remainder of the paper, we use the convention that $f_{-1} \\equiv 0$ to simplify all expressions. Suppose we have access to the evaluations $\\{\\{f_l(\\omega_{(l,i)})-f_{l-1}(\\omega_{(l,i)})\\}_{i=1}^{n_l}\\}_{l=0}^L$ of the approximate integrands on $\\Omega$. We will specify a sequence of priors such that $\\mathcal{GP}(m_l,c_l)$ is a prior on the increment $f_l-f_{l-1}$, and we will take these increments to be independent a priori.\n\\begin{proposition} \n\\label{BMLMC_sum}\nGiven the priors and datasets described above, the posterior distribution on $\\Pi[f]$ is a univariate Gaussian with mean\n\\begin{talign*}\n \\mathbb{E}_{\\textup{MLBQ}}[\\Pi[f]] &\\coloneqq \\sum_{l=0}^L \\hat{\\Pi}_{\\textup{BQ}}[f_l -f_{l-1} ] \\\\\n & = \\sum_{l=0}^L \\big( \\Pi[m_l] + \\Pi[c_l(\\cdot,W_l)]c_l(W_l,W_l)^{-1} (f_l(W_l)-f_{l-1}(W_l)-m_l(W_l)) \\big)\n\\end{talign*}\nand variance\n\\begin{talign*}\n \\mathbb{V}_{\\textup{MLBQ}}[\\Pi[f]] &\\coloneqq \\sum^{L}_{l=0}\\mathbb{V}_{\\textup{BQ}}[\\Pi[f_l-f_{l-1}]]\\\\\n & = \\sum_{l=0}^L \\big( \\Pi[\\Pi[c_l]] - \\Pi[c_l(\\cdot,W_l)]c_l(W_l,W_l)^{-1}\\Pi[c_l(W_l,\\cdot)] \\big)\n\\end{talign*}\nwhere $W_l=(\\omega_{(l,1)},\\ldots,\\omega_{(l,n_l)})^\\top$ for $l \\in \\{0,\\ldots,L\\}$.\n\\end{proposition} \nThe proof is given in Appendix \\ref{append:ProofProp1}. 
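Proposition \\ref{BMLMC_sum} translates directly into code: each level is handled by a standard BQ computation on the increment, and the resulting means and variances are summed. The sketch below (again with illustrative names) reuses the \\texttt{bq\\_posterior} helper from the previous snippet.
\\begin{verbatim}
import numpy as np

def mlbq_posterior(fs, point_sets, gamma):
    # fs: [f_0, ..., f_L]; point_sets: [W_0, ..., W_L].  Independent zero-mean
    # GP priors on the increments f_l - f_{l-1}, with the convention f_{-1} = 0.
    mean, var = 0.0, 0.0
    f_prev = lambda w: np.zeros_like(w)
    for f_l, w_l in zip(fs, point_sets):
        m_l, v_l = bq_posterior(w_l, f_l(w_l) - f_prev(w_l), gamma)
        mean, var = mean + m_l, var + v_l   # sum over levels by independence
        f_prev = f_l
    return mean, var
\\end{verbatim}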
Once again, a point estimator can be obtained through the posterior mean $\\hat{\\Pi}_{\\textup{MLBQ}}[f] \\coloneqq \\mathbb{E}_{\\text{MLBQ}}[\\Pi[f]]$ and we will call this the \\emph{multilevel Bayesian quadrature} (MLBQ) \\emph{estimator}. Although MLBQ requires only a straightforward modification of the MLMC algorithm, we will see in the remainder of the paper that it will allow us to take advantage of the properties of both MLMC and BQ. A simple illustrative example comparing BQ and MLBQ ($L=2$) under the same evaluation constraint is shown in Figure \\ref{fig:Illustration}. We used the approximations from the Poisson equation experiment in Section \\ref{sec:result} and Appendix \\ref{Append: FEM}. As can be seen, the GP for MLBQ fits $f_2$ better than the GP for BQ, and the MLBQ estimator has smaller error and smaller variance than the BQ estimator.\n\\begin{figure*}[t!]\n \\centering\n \\includegraphics[width=0.9\\textwidth]{images\/Illustration.pdf} \n \\caption{Illustration Example: \\textit{Upper Left}: the approximations to $f$, GP for BQ, GP for MLBQ. \\textit{Bottom Left}: GP for level 0, 1, 2 of MLBQ. \\textit{Right}: BQ and MLBQ estimators.} \\label{fig:Illustration}\n\\end{figure*}\nThe cost for implementing MLBQ is $\\mathcal{O}(\\sum_{l=0}^L n_l^3)$, which is larger than the $\\mathcal{O}(\\sum_{l=0}^L n_l)$ of MLMC. However, for most multifidelity models, we expect these costs to be dwarfed by the cost of function evaluations, which is $\\sum_{l=0}^L n_l C_l$. Additionally, we will see in the next section that MLBQ can have a much faster convergence rate than MLMC.\n\nDue to the independence assumption, we can estimate the GP hyperparameters separately for each level; see Appendix \\ref{append: hyper_prior}. If the assumption is violated, we could be under- or over-estimating our uncertainty. It is possible to do away with this assumption by modelling levels jointly following the work on multi-output BQ of \\cite{Xi2018MultiOutput}, but this would prohibitively increase the cost to $\\mathcal{O}((\\sum_{l=0}^L n_l)^3)$.\n\n\n\n\n\n\\section{THEORY}\n\\label{sec:theory}\n\n\nWe now prove an upper bound on the error of MLBQ and derive the optimal number of samples per level. \n\nLet $L^2(\\Omega)$ denote the space of square-integrable functions on $\\Omega \\subseteq \\mathbb{R}^d$; i.e. 
functions for which $\\| f \\|_{L^2(\\Omega)} \\coloneqq (\\int_\\Omega f(\\omega)^2 d \\omega)^{1\/2} < \\infty$.\nThe Sobolev space $W^\\alpha_2(\\Omega)$ of integer order $\\alpha \\geq 0$ consists of functions $f \\in L^2(\\Omega)$ for which $\\|f\\|_{\\alpha} \\coloneqq (\\sum_{\\beta\\in \\mathbb{N}^d\\, : \\, \\lvert\\beta\\rvert \\leq \\alpha } \\|D^\\beta f\\|^2_{L^2(\\Omega)} )^{1\/2} < \\infty$,\nwhere $\\lvert \\beta \\rvert = \\sum^d_{i=1} \\beta_i$ and $D^\\beta f$ is the weak derivative~\\citep[p.\\@~22]{adams2003sobolev} of order $\\beta$.\nFor non-integer $\\alpha \\geq 0$, the Sobolev norm can be defined via Fourier transforms and the two definitions coincide, up to a constant, for integer $\\alpha$ if $\\Omega$ is sufficiently regular~\\citep[Section~2.2]{wynne2021convergence}.\nThe space $W_2^\\alpha(\\Omega)$ is a Hilbert space.\n\n\nBy the Moore--Aronszajn Theorem~\\citep[Theorem~3 in Chapter~1]{Berlinet2004}, every positive semi-definite covariance function $c \\colon \\Omega \\times \\Omega \\to \\mathbb{R}$ induces a unique reproducing kernel Hilbert space (RKHS) $\\mathcal{H}(c)$ consisting of functions $f \\colon \\Omega \\to \\mathbb{R}$ and equipped with an inner product $\\langle \\cdot, \\cdot \\rangle_{\\mathcal{H}(c)}$ and norm $\\| \\cdot \\|_{\\mathcal{H}(c)}$.\n The RKHS satisfies: (1) $c(\\cdot, \\omega) \\in \\mathcal{H}(c)$ for every $\\omega \\in \\Omega$, and (2) the reproducing property that $f(\\omega)=\\langle f,c(\\cdot, \\omega )\\rangle_{\\mathcal{H}(c)}$ for every $f \\in \\mathcal{H}(c)$ and $\\omega \\in \\Omega$.\n \nThe following assumptions are used in our results:\n\\begin{enumerate}\n\\item[A1.] The domain is of the form $\\Omega = \\Omega_1 \\times \\cdots \\times \\Omega_d$ for each $\\Omega_i$ a non-empty interval.\n\\item[A2.] The distribution $\\Pi$ has a bounded density function $\\pi$; i.e. $\\| \\pi \\|_{L^\\infty(\\Omega)} \\coloneqq \\sup_{ \\omega \\in \\Omega } \\pi(\\omega) < \\infty$.\n\\item[A3.] For each $l \\in \\{0, \\ldots, L\\}$, the RKHS $\\mathcal{H}_l \\coloneqq \\mathcal{H}(c_l)$ is norm-equivalent to $W_2^{\\alpha_l}(\\Omega)$ for $\\alpha_l > d\/2$. Two Hilbert spaces $H_1$ and $H_2$ are norm-equivalent if and only if they are equal as sets and there are constants $b_1,b_2 >0$ such that $b_1 \\| f \\|_{H_1} \\leq \\| f \\|_{H_2} \\leq b_2 \\| f \\|_{H_1}$ for all $f \\in H_1 = H_2$.\n\\item[A4.] There are $\\beta_l > d\/2$ such that $f_0 \\in W_2^{\\beta_0}(\\Omega)$ and $f_l, f_{l-1} \\in W_2^{\\beta_l}(\\Omega)$ for every $l \\in \\{1, \\ldots, L\\}$.\n\\item[A5.] For each $l \\in \\{0, \\ldots, L\\}$, the fill-distance $h_{W_l, \\Omega} = h_{l, \\Omega} \\coloneqq \\sup_{ \\omega \\in \\Omega} \\min_{i = 1,\\ldots, n_l} \\| \\omega - \\omega_{(l,i)} \\|_2$ satisfies $h_{l, \\Omega} \\leq h_\\textup{qu} n_l^{-1\/d}$ for a constant $h_\\textup{qu} > 0$.\n\\item[A6.] 
The prior means are $m_l \\equiv 0$ for all $l \\in \\{0, \\ldots, L\\}$.\n\\end{enumerate}\n\nThe purpose of Assumption~A1 is to ensure that the domain is sufficiently regular for the use of Sobolev extension and embedding theorems.\nThis assumption could be generalised to allow more complex domains~\\citep[Section~3.1]{wynne2021convergence}.\nAssumption~A3 and its relatives are standard in the error analysis of GP and BQ methods~\\citep[e.g.,][]{karvonen2020maximum, teckentrup2020convergence, wynne2021convergence}.\nThe RKHS of a Mat\\'ern kernel $c_\\textup{Mat\\'ern}$ with smoothness $v$ and any length-scale is norm-equivalent to $W_2^{\\alpha}(\\Omega)$ for $\\alpha = v + d\/2$ whenever $\\Omega$ satisfies Assumption~A1.\nBecause the fill-distance of a set $W$ equals the radius of the largest ball in $\\Omega$ which contains no point from $W$, Assumption~A5, known as the quasi-uniformity assumption~\\citep[Section~14.1]{wendland2004scattered}, ensures that each of the sets $W_{l}$ covers $\\Omega$ in a sufficiently uniform manner.\nRegular grids are examples of sets that satisfy Assumption~A5.\nAssumption~A6 is made out of convenience and could be replaced with the assumption that $m_l \\in W_2^{\\beta_l}(\\Omega)$ for each $l$.\n\n\n\\begin{theorem} \\label{theo1}\nSuppose that assumptions A1--A6 hold and define $\\tau_l \\coloneqq \\min\\{ \\alpha_l, \\beta_l\\}$.\nThen \n\\begin{talign*} \n \\textup{Err}(\\hat{\\Pi}_{\\textup{MLBQ}}) &= \\lvert \\Pi[f_L]-\\hat{\\Pi}_{\\textup{MLBQ}}[f_L] \\rvert \\\\\n &\\leq \\| \\pi \\|_{L^\\infty(\\Omega)} \\sum^L_{l=0} a_l \\|f_l-f_{l-1}\\|_{\\tau_l}n_l^{-\\tau_l\/d}\\nonumber\n\\end{talign*}\nwhenever each $n_l$ is sufficiently large.\nEach constant $a_l > 0$ depends on $\\alpha_l$, $\\beta_l$, $c_l$, $h_\\textup{qu}$, $d$, and $\\Omega$, but not on $f_l$ or the data points.\n\\end{theorem}\n\n\nTheorem~\\ref{theo1} is proved in Appendix \\ref{append:ProofTheo1}. The proof is similar to the convergence proofs in~\\citet{kanagawa2020convergence,karvonen2020maximum, teckentrup2020convergence, wynne2021convergence}. The Sobolev norm in the bound may be replaced with the RKHS norm $\\| f_l - f_{l-1} \\|_{\\mathcal{H}_l}$ if $\\beta_l \\geq \\alpha_l$ due to assumption~A3.\n\n\\begin{remark} \\label{remark:iid-points}\nIf it is assumed that $\\beta_l \\geq \\alpha_l$ for each $l$, one may use Theorem~1 and Corollary~2 in \\citet{Krieg2022} to prove a variant of Theorem~\\ref{theo1} in which the points are independent samples from a uniform distribution on $\\Omega$ and the upper bound is for the expected error of the MLBQ.\n\\end{remark}\n\nVarious other generalisations of Theorem~\\ref{theo1} are possible but are not included here so as to simplify the presentation of our assumptions. 
These include non-zero prior means, varying kernel parameters~\\citep{teckentrup2020convergence}, misspecified likelihoods~\\citep{wynne2021convergence}, and improved rates when each $f_l$ has, essentially, twice the smoothness of $c_l$ (\\citealp{tuo2020improved};~\\citealp[Sections~3.4 and~4.5]{karvonen2020maximum}) or when both $f_l$ and $c_l$ are infinitely differentiable \\citep[Theorem~2.20]{karvonen2019kernel}.\n\n\nAt each level $l$, the convergence rate of $\\mathcal{O}(n_l^{-\\tau_l\/d})$ is faster than the rate for MC estimators of $\\mathcal{O}(n_l^{-1\/2})$ because $\\tau_l\/d = \\min\\{\\alpha_l, \\beta_l\\} \/ d > 1\/2$.\nSince $f_0,\\ldots,f_L$ approximate the same function, the kernels $c_l$ and the smoothnesses $\\alpha_l$ and $\\beta_l$ do not typically change with $l$, which means that the constants $a_l$ do not change.\nIf, additionally, $\\| f_l - f_{l-1} \\|_{\\tau_l}$ tends to zero as $l$ increases, which is usually the case because approximation quality should increase with the level, we see that fewer evaluations are needed at higher levels.\nHowever, if the $\\tau_l$ differ significantly, more evaluations than expected may be needed at higher levels.\n\n\nUsing Theorem \\ref{theo1} and assuming we use the same prior at each level, we can also derive the optimal number of samples for MLBQ under a limited computational budget.\nTo do so, we assume that the cost of fitting GPs at each level is dwarfed by the cost of function evaluations.\n\nThis is reasonable because function evaluation costs tend to be relatively large for applications where MLMC is commonly used. For example, for differential equation models the cost is usually driven by the cost of the solvers, such as finite difference, finite element or finite volume methods, and this can be large for fine meshes. For instance, for the tsunami example in \\Cref{sec:Tsunami}, fitting all the GPs takes less than $25$ seconds whereas a single evaluation of $f_L-f_{L-1}$ takes $150$ seconds even on clusters. For this reason, we assume that the total cost of running MLBQ and evaluating the functions is given by $\\gamma \\sum^L_{l=0} C_{l} n_{l}$ for some $\\gamma \\geq 1$ but close to $1$.\n\n\n\\begin{theorem} \\label{theo2}\n Suppose that assumptions A1--A6 hold and $c_l$ and $\\tau \\coloneqq \\tau_l = \\min\\{\\alpha_l, \\beta_l\\} = \\alpha_l$ do not depend on $l$. Then \n \\begin{talign*}\nn^{\\textup{MLBQ}} &= \\left(n_0^{\\textup{MLBQ}},\\ldots,n_L^{\\textup{MLBQ}}\\right) \\\\\n&\\coloneqq \\underset{\\substack{n_0,n_1,\\cdots,n_L \\\\ \\text{s.t. } \\gamma \\sum^L_{l=0} C_{l} n_{l} = T }}\\argmin \\sum^L_{l=0} a_l \\|f_l-f_{l-1}\\|_{\\tau} n_l^{-\\tau\/d} \n\\end{talign*}\n for $\\gamma \\geq 1$ and $T > 0$ is solved by\n \\begin{talign*} \n n^{\\textup{MLBQ}}_l &= D \\left( \\frac{ \\|f_l-f_{l-1}\\|_{\\tau}}{C_l} \\right)^{\\frac{d}{\\tau + d}} \\quad \\forall l \\in \\{0, \\ldots, L\\},\n \\end{talign*}\n where $D= T( \\gamma \\sum^L_{{l'}=0}C_{l'}^{\\frac{\\tau}{\\tau + d}} ( \\|f_{l'}-f_{{l'}-1}\\|_{\\tau} )^{\\frac{d}{\\tau + d}} )^{-1}$.\n\\end{theorem}\n\nThe proof is given in Appendix \\ref{append:ProofTheo2}. The additional assumptions were introduced to simplify the result by ensuring that $a_l$ does not depend on $l$. 
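In code, the allocation of \\Cref{theo2} is a short computation once the norms, costs and budget are available. The Python sketch below uses illustrative names and assumes the Sobolev norms $\\|f_l-f_{l-1}\\|_{\\tau}$ have been computed in closed form or estimated; the returned sample sizes are real-valued and would be rounded down in practice.
\\begin{verbatim}
import numpy as np

def mlbq_sample_sizes(norms, costs, budget, tau, d, gamma=1.0):
    # norms[l] = ||f_l - f_{l-1}||_tau, costs[l] = C_l, budget = T.
    norms, costs = np.asarray(norms, float), np.asarray(costs, float)
    e = d / (tau + d)
    D = budget / (gamma * np.sum(costs ** (tau / (tau + d)) * norms ** e))
    return D * (norms / costs) ** e
\\end{verbatim}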
If the function evaluation costs do not dominate or if the $\\tau_l$ differ, one can still calculate the optimal sample sizes by solving the optimisation problem in \\Cref{theo2} numerically.\n\nThe optimal sample sizes for MLMC and MLBQ are similar; here, $\\|f_l -f_{l-1}\\|_\\tau$ is analogous to $V_l$ in that it measures the size of each element in the telescoping sum. We expect $\\|f_l-f_{l-1}\\|_{\\tau}$ to be a decreasing function of $l$ which converges to zero. If the convergence is slow, the sample size for large $l$ has to be relatively large, whereas it can be relatively small otherwise. Additionally, a large cost $C_l$ also leads to relatively smaller sample sizes. For MLMC, the optimal sample size at level $l$ is proportional to $C_l^{-1\/2}$ whereas for MLBQ it is proportional to $C_l^{-d\/(\\tau+d)}$. Therefore, when $\\tau > d$, the penalisation for large $C_l$ is smaller for MLBQ than MLMC, and vice versa. This is intuitive because when $\\tau$ is large, the integrands are smoother and we expect BQ to approximate them at a faster rate in the number of samples.\n\nPlugging the optimal sample sizes of \\Cref{theo2} into the bound in \\Cref{theo1}, we obtain that \n\\begin{talign*}\n\\textup{Err}(\\hat{\\Pi}_{\\textup{MLBQ}}) &\\leq A T^{-\\frac{\\tau}{d}} \\left( \\sum^L_{{l}=0} C_{l}^{\\frac{\\tau}{\\tau + d}} \\|f_{l}-f_{{l}-1}\\|_{\\tau}^{\\frac{d}{\\tau + d}} \\right)^{\\frac{\\tau+d}{d}},\n\\end{talign*}\nwhere $A = \\| \\pi \\|_{L^\\infty(\\Omega)} a \\gamma^{\\tau\/d}$. \nFor BQ based on evaluations of $f_L$ and utilising the same computational budget we obtain \n\\begin{talign*}\n\\textup{Err}(\\hat{\\Pi}_{\\textup{BQ}}) & \\leq A T^{-\\frac{\\tau}{d}} C_L^{\\frac{\\tau}{d}}\\|f_L\\|_{\\tau}\n\\end{talign*}\nfrom \\Cref{theo1} by setting $f_l \\equiv 0$ and $C_l = 0$ for every $l \\in \\{0, \\ldots, L - 1\\}$.\nLet us denote the two upper bounds above by $B_\\textup{MLBQ}$ and $B_\\textup{BQ}$.\nTo compare these bounds, we consider two cases. Firstly, if the term $b_l \\coloneqq {C_l}^{\\tau\/(\\tau+d)}\\|f_l-f_{l-1}\\|_{\\tau}^{d\/(\\tau+d)}$ grows rapidly with $l$, then $B_\\textup{MLBQ}$ is dominated by the highest level $L$, so that $B_\\textup{MLBQ} \\approx A T^{-\\tau\/d} C_L^{\\tau\/d} \\|f_L-f_{L-1}\\|_{\\tau}$. Secondly, if $b_l$ decreases rapidly with $l$, then $B_\\textup{MLBQ} \\approx A T^{-\\tau\/d} C_0^{\\tau\/d} \\|f_0\\|_{\\tau}$. In either case, the bound on $\\textup{Err}(\\hat{\\Pi}_{\\textup{MLBQ}})$ is smaller than that on $\\textup{Err}(\\hat{\\Pi}_{\\textup{BQ}})$ under natural assumptions. \nIn the first case\n\\begin{talign*}\nB_\\textup{BQ} \\approx (\\|f_L\\|_{\\tau} \/ \\|f_L-f_{L-1}\\|_{\\tau}) B_\\textup{MLBQ} \\geq B_\\textup{MLBQ}\n\\end{talign*}\nif $\\|f_L\\|_{\\tau} \\geq \\|f_L-f_{L-1}\\|_{\\tau}$, whilst in the second case\n\\begin{talign*}\nB_\\textup{BQ} \\approx (C_L\/C_0)^{\\tau\/d} ( \\| f_L \\|_\\tau \/ \\| f_0 \\|_\\tau ) B_\\textup{MLBQ} \\geq B_\\textup{MLBQ}\n\\end{talign*}\nif $C_L \\geq C_0$ and $\\| f_L \\|_\\tau \\geq \\| f_0 \\|_\\tau$.\n\n\\section{EXPERIMENTS}\n\\label{sec:result}\nWe now evaluate MLBQ for synthetic differential equation models and landslide-generated tsunami modelling. \n\n\n\\paragraph{Poisson Equation} \\label{sec:PE example}\n\nThe Poisson equation is a canonical partial differential equation which abounds in physics~\\citep[e.g.,][Chapter~8]{MathewsWalker1970}. 
We consider a synthetic model where for $f \\colon (0, 1) \\to \\mathbb{R}$, \n\\begin{talign*}\n f''(\\omega) &= z(\\omega) \\: \\text{ for } \\: \\omega \\in (0,1) \\quad \\& \\quad\n f(0)=f(1)=0\n\\end{talign*}\nwhere $z(\\omega)=1$. Here, $\\Pi[f]=\\int_0^1 f(\\omega) d\\omega$ so that $\\Pi$ is the $\\text{Unif}(0,1)$ distribution. To obtain $f_0,\\ldots,f_L$, we use piecewise linear finite element approximations as described in Appendix \\ref{Append: FEM}. We use $L=2$ and have $C=(C_0,C_1,C_2) = (3.6,8.5, 42.4)$ (all measured in $10^{-3}$ seconds). This problem is relatively simple and could be brute-forced with MC, but has the advantage that we can compute the optimal sample sizes for MLBQ and MLMC, since A1--A6 are all satisfied when using a uniform grid of points and $\\|f_l-f_{l-1}\\|_{\\tau_l}$ can be computed in closed form for all $l$. It therefore makes for a good test-bed for our method.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.6\\textwidth]{images\/PE_Plot.pdf}\n \\caption{Poisson Equation. \\textit{Left}: Absolute integration error. \\textit{Right}: Calibration of MLBQ, IID points.}\n \\label{fig:PE3L}\n\\end{figure}\n\n\nWe compare four different settings: MLBQ using $n^{\\text{MLBQ}}$ and uniform grid points, MLBQ using $n^{\\text{MLMC}}$ and uniform grid points or IID points, and MLMC using $n^{\\text{MLMC}}$ and IID points. To implement $n^{\\text{MLMC}}$, we brute-forced the computation of $V_0,\\ldots,V_L$ through an MC approximation. All MLBQ algorithms use a mean-zero GP with a Mat\\'ern $0.5$ kernel, and all sample sizes are given in Appendix \\ref{append: Experimental_Details_PE}. \n\nFigure \\ref{fig:PE3L} visualises the result of $100$ repetitions of the experiment, where for each repetition, we evaluated $f_0,\\ldots,f_L$ at new point sets, and used the same dataset for MLBQ and MLMC to estimate $\\Pi[f]$. When using uniform grids, there is no randomness and the experiment is therefore done only once. The left-hand side plot shows that $\\hat{\\Pi}_{\\textup{MLBQ}}[f]$ significantly outperforms $\\hat{\\Pi}_{\\textup{MLMC}}[f]$ across a range of budgets $T$. For MLBQ, we also see that the impact of the sample size per level is not as significant as that of the type of points used, with the uniform grid outperforming IID points. This is promising since the optimal sample sizes will be difficult to obtain in general due to the need to access $V_l$ or $\\|f_l-f_{l-1}\\|_{\\tau_l}$ for each level $l$ (in the cases of $n^{\\text{MLMC}}$ and $n^{\\text{MLBQ}}$ respectively). The right-hand side plot shows coverage frequencies for various credible levels. Most of the results lie close to the identity line, indicating that MLBQ has good frequentist coverage. The only exception is for larger budgets $T$, in which case MLBQ is under-confident in the sense that the posterior variance is too large relative to frequentist coverage probabilities. 
This is generally preferable to being over-confident.\n\n\n\n\n\\paragraph{ODE with Random Coefficient and Forcing} \\label{ODE}\n\nWe now consider a popular test-bed for MLMC as first studied in Section 7.1 of \\cite{giles2015multilevel}:\n\\begin{talign*}\n \\frac{\\mathrm{d}}{\\mathrm{d} x}\\left( c(x)\\frac{\\mathrm{d}u}{\\mathrm{d} x} \\right) &= -50^2 \\omega_2^2 \\ \\text{ for } \\ x\\in (0,1)\n\\end{talign*}\nwith $u(0)=u(1)=0$, $c(x) = 1+\\omega_1 x$, $\\omega_1 \\sim \\text{Unif}(0,1)$ and $\\omega_2 \\sim \\mathcal{N}(0,1)$.\nThe integral is $\\Pi [f] =\\int_\\Omega f(\\omega) \\Pi(d \\omega) =\\int_\\Omega (\\int^1_0u(x,\\omega)dx) \\Pi(d \\omega)$, where $\\omega=(\\omega_1,\\omega_2)$ and $\\Pi$ is a product of the marginal distributions for $\\omega_1$ and $\\omega_2$, and $\\Omega=[0,1]\\times(-\\infty,\\infty)$. We take $L=2$ and each level is obtained through a finite difference approximation of $f$ with grid size $h_l$ (see Appendix \\ref{appendixODE_solver}). We have $C=(C_0,C_1,C_2) = (1.0, 2.6, 21.8)$ (in $10^{-3}$ seconds).\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.6\\textwidth]{images\/ODE_Plot.pdf}\n \\caption{ODE with Random Coefficient and Forcing. \\textit{Left}: Log-absolute integration error. \\textit{Right}: Calibration of MLBQ and BQ with IID points.}\n \\label{fig:ODE3L}\n\\end{figure}\n\nThe assumptions from \\Cref{sec:theory} do not hold here since $\\Omega_2$ is unbounded (which breaks A1), but we still use this example to study our method in a broad range of settings.\nWe compare MLBQ with different point sets, MLMC and BQ with IID samples. For all multilevel methods, we select the sample size according to $n^{\\text{MLMC}}$ (see Appendix \\ref{appendixODE_solver}). In this example, we cannot use $n^{\\text{MLBQ}}$ since $\\|f_{l}-f_{l-1}\\|_{\\tau_l}$ is not available in closed form. All methods using a GP use a product of univariate Mat\\'ern kernels per dimension with $v=2.5$, or a squared exponential kernel (``SE''). \n\nThere are three interesting observations in the left-hand side plot in Figure \\ref{fig:ODE3L}. Firstly, MLBQ with a Halton sequence (``QMC'') or a Latin hypercube design (``LHS'') lead to a better performance than with IID sampling, once again reflecting the importance of the choice of point set. Secondly, the choice of kernel also has some impact, with the MLBQ estimator with squared exponential kernel outperforming the corresponding estimator with Mat\\'ern kernel. Thirdly, MLBQ significantly outperforms BQ and MLMC, even though a sub-optimal sample size per level was used here. More precisely, MLBQ (with any point set) at $T=1.517\\text{s}$ is able to outperform MLMC with a budget $20$ times larger ($T=30.347\\text{s}$) and is comparable to MLMC with a budget $100$ times larger ( $T=151.736\\text{s}$). A similar conclusion holds when comparing MLBQ with BQ. \n\nFinally, the right-hand side plot shows that the calibration performances of MLBQ and BQ are very similar. The methods tend to be over-confident when $T$ is very small, and become under-confident when $T$ is larger.\n\n\n\\paragraph{Landslide-Generated Tsunami}\\label{sec:Tsunami}\n\nWe now consider a variation of the submerged landslide-generated tsunami of \\cite{lynett2005numerical}. The movement of the landslide mass on the beach slope results in the generation of tsunami waves (see Figure~\\ref{fig:sketch}, left), and we consider the temporal evolution of this wave. 
We use a tsunami simulator called Volna-OP2 \\citep{reguly2018volna, giles2020performance}, which is a differential equation solver capable of simulating the complete life-cycle of a tsunami: generation, propagation and inundation. Volna-OP2 numerically solves the nonlinear shallow water equations (see Appendix \\ref{appendixlandslide}) with a finite volume method. The simulations with Volna-OP2 are run on a single NVIDIA P100 graphical processing unit (the Wilkes2 machine in Cambridge's CSD3).\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.6\\textwidth]{images\/TsunamiPlot.pdf}\n \\caption{Landslide-Generated Tsunami. \\textit{Top}: Log-absolute integration error. \\textit{Bottom}: Calibration of MLBQ. The left-hand side plots correspond to $\\Pi [f^e]$ and the right-hand side plots to $\\Pi[f^m]$. }\n \\label{TsunamiPlot}\n\\end{figure}\n\nWe use a bathymetry $h(x,t)$ with $x \\in [-100, 3100]$ (in meters) and $t \\in [0, 300]$ (in seconds). The parameters of interest are: $\\omega_1$, defined to be the ratio of the maximum vertical thickness of the slide ($\\Delta h$) to the initial vertical distance from the center point of the slide to the surface ($d_o=50$ m); $\\omega_2$, the slope angle; and $\\omega_3$, the length of the slide. All of these parameters lead to nonlinear effects which can greatly influence the amplification of tsunami waves. The value of these parameters tends to be unknown a priori and we take $\\Pi$ to consist of marginal distributions representing our uncertainty, given by $\\text{Unif}(0.125, 0.5)$, $\\text{Unif}(5^{\\circ}, 15^{\\circ})$ and $\\text{Unif}(100m, 200 m)$ respectively. A representative example of the solution provided by Volna-OP2 for $\\omega=(0.375,10^{\\circ},150m)$ is on the right-hand side in Figure \\ref{fig:sketch}. In tsunami modelling, two functionals of the solution of the model which are often of interest are the total energy flux \\citep{degueldre2016random}, denoted $f^e \\colon \\Omega \\rightarrow \\mathbb{R}$, and the momentum flux \\citep{park2017probabilistic}, denoted $f^m \\colon \\Omega \\rightarrow \\mathbb{R}$, and we therefore want to compute $\\Pi[f^e]$ and $\\Pi[f^m]$.\n\n\nIn the experiments, we estimate these quantities at a gauge at $x=3000$ with MLBQ and MLMC using the same IID point sets, and repeat the experiment $20$ times. We take $L=4$ and each level corresponds to a different spatial and temporal resolution used in the solver. The number of evaluations per level is listed in Appendix \\ref{appendixlandslide}. We have $C= (C_0,C_1,C_2,C_3,C_4) = (5,15,30,65,150)$ (measured in seconds). These costs are significantly larger than the cost of fitting all Gaussian processes, which is carried out on a laptop and ranges from 1 second to 25 seconds depending on the sample sizes per level. We use a tensor product Mat\\'ern kernel with smoothness $v=2.5$ for MLBQ. The related analytical formulas are provided in Appendix \\ref{appendix_kmie}. \n\n The upper box plots of Figure~\\ref{TsunamiPlot} show the absolute errors of our estimates on the logarithmic scale. As can be seen, MLBQ always significantly outperforms MLMC. We did not compare to BQ here because $f_L$ is too computationally expensive to obtain a reliable estimate. The calibration plots show that MLBQ tends to be overconfident when the budget is small ($T=1200$ or $T=6000$) but becomes under-confident when the budget is larger ($T=12000$). 
\n \n Overall, although the setup studied in this paper would be considered a `toy model' for the tsunami modelling community, any method which showcases such a drastic reduction in computing time should be of interest to tsunami warning centres given their tight budget constraints. \n\n\n\n\n\n\n\n\n\n\n\n\\section{CONCLUSION}\n\n\nWe introduced MLBQ, a method for computing integrals involving multifidelity models. MLBQ enhances MLMC by bringing to it the advantages of Bayesian methods, namely: (1) the ability to make use of prior information about the integrand, which leads to faster convergence rates, and (2) the ability to provide Bayesian quantification of uncertainty over the value of the integral of interest. From the point of view of Bayesian probabilistic numerics, this algorithm is also a step forward towards making the field reach applications where it can be most impactful, including specifically when models are computationally expensive and it is therefore desirable to make use of as much prior knowledge as possible to improve estimates.\n\nThere are a large number of possible extensions and we therefore only mention some of the most promising. Firstly, one could consider extending MLBQ to multi-index Monte Carlo \\citep{Haji-Ali2016}, which can be useful for models where levels can have multiple indices. For example, in partial differential equation models, one index could be discretisation through time and the other through space, and using this structure could bring further gains.\nSecondly, one could consider improving scalability through hybrid strategies where BQ is used on the more expensive levels and alternatives, such as MC or scalable BQ methods \\citep[e.g.][]{Karvonen2017symmetric,Jagadeeswaran2018}, are used on the cheaper levels. Finally, since we observed that the choice of point set had a large impact on performance, one could consider designing novel acquisition functions for adaptive experimental design \\citep[e.g. following the work of][]{Ehara2021}.\n\n\n\n\n\\subsubsection*{Acknowledgements}\n\n\nThe authors would like to thank Dimitra Salmadinou for support in accessing tsunami simulations, and Zhuo Sun for helpful discussions. KL and SG acknowledge funding from the Lloyd's Tercentenary Research Foundation, the Lighthill Risk Network and the Lloyd's Register Foundation-Data Centric Engineering Programme of the Alan Turing Institute for the project ``Future Indonesian Tsunamis: Towards End-to-end Risk quantification (FITTER)''. SG also acknowledges support from the Alan Turing Institute project ``Uncertainty Quantification of multi-scale and multiphysics computer models: applications to hazard and climate models'' under the EPSRC grant EP\/N510129\/1. DG and SG were supported by the EPSRC project EP\/W007711\/1 ``Software Environment for Actionable \\& VVUQ-evaluated Exascale Application'' (SEAVEA). \nTK was supported by the Academy of Finland postdoctoral researcher grant \\#338567 ``Scalable, adaptive and reliable probabilistic integration''. \nFXB was supported by the Lloyd's Register Foundation Programme on Data-Centric Engineering and The Alan Turing Institute under the EPSRC grant [EP\/N510129\/1], and through an Amazon Research Award on\n\"Transfer Learning for Numerical Integration in Expensive Machine Learning Systems\". 
\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{intro}\n\nMachine learning is an integral part of daily decision-support systems, ranging from personalised medicine to credit lending in banks and financial institutions. The downside of such a pervasive use of machine learning is the privacy concern associated with the use of personal data~\\citep{Elliot2018}. In some applications, such as medicine, sharing data with privacy in mind is fundamental for advancing the science~\\citep{Beaulieu2019}. However, the available data may not be sufficient to realistically train state-of-the-art machine learning algorithms. Synthetic data is one way of mitigating this challenge. \n\n\n\n\n\nCurrent state-of-the-art methods for synthetic data generation, such as Generative Adversarial Networks (GANs)~\\citep{Goodfellow2014}, use complex deep generative networks to produce high-quality synthetic data for a large variety of problems~\\citep{Choi2017, Xu2019}. \nHowever, GANs are known to be unstable and delicate to train~\\citep{Arjovsky2017}. Moreover, GANs are also notoriously difficult to interpret, and finding an appropriate loss function is an active area of research~\\citep{wang2019}. Another class of models, variational autoencoders (VAEs)~\\citep{Kingma2013, Rezende2015}, has also been used to generate data, although their primary focus is finding a lower-dimensional data representation. While deep generative models, such as VAEs and GANs, can generate high-fidelity synthetic data, they are both difficult to interpret. Due to the complex composition of deep networks, it is nearly impossible to characterise the impact of varying weights on the density of the generated data. For VAEs, a latent space may be used to interpret different aspects of the data. \n\nIn this work, we focus on probabilistic models based on copulas. Sklar showed that any multivariate distribution function can be expressed as a function of its univariate marginals and a copula~\\citep{Sklar1959}. \nHowever, estimating copulas for multivariate mixed data types is typically hard, owing to the facts that 1) very few parametric copulas have multivariate formulations and 2) copulas are not unique for discrete or count data~\\citep{Nikoloulopoulos2009}. To address the first challenge, the current standard is to use a pairwise copula construction~\\cite{Aas2009}, which involves two steps: identifying the pair copulas and a tree structure defining the pairwise relationships~\\citep{Elidan2010, Lopez-paz2013, Czado2010, Chang2019}. \n\nIn this paper, we propose a probabilistic synthetic data generator that is interpretable and can model arbitrarily complex data-sets. \nOur approach is based on normalising flows to estimate copulas. We show that our model is flexible enough to learn complex relationships and does not rely on explicit learning of the graph or parametric bivariate copulas. Figure~\\ref{fig:copulaintro} illustrates the flexibility of our proposed method by modelling a copula with a complex structure. We are not aware of any copula-based method able to learn the copula shown in Figure~\\ref{fig:copulaintro} a). \n\nTo deal with count data, we propose a modified spline flow~\\citep{Durkan2019} as a continuous relaxation of discrete probability mass functions. 
This formulation\nallows us to build complex multivariate models with mixed data types that can\nlearn copula-based flow models, which can be used for the following tasks:\n\\begin{itemize}\n\t\\item [1.] \\textit{Synthetic data generation:} Use the estimated model to\n\tgenerate new data-sets that have a distribution similar to the training set.\n\t\n\t\\item[2.] \\textit{Inferential statistics:} When the copula has\n\tlearned the relationship between the variables correctly, one can change the\n\tmarginals to study the effects of change. For example, if we estimated a\n\tcopula flow based on a UK dataset with age as one of the variables, we can\n\tmodify the marginal of the age distribution to generate synthetic data for a\n\tdifferent country. To generate data for Germany, we would use\n\tGermany's age distribution as a marginal for the data-generating process.\n\t\\item[3.]\\textit{Privacy preservation:} The data generated from the copula\n\tflow model is fully synthetic, i.e., the generator does not rely on actual\n\tobservations for generating new data. One can perturb the generated data\n\tbased on differential privacy mechanisms to prevent privacy attacks based on\n\ta large number of identical queries.\n\\end{itemize}\n\n\n\n\\begin{figure*}[tb]\n\\centering\n\\scalebox{0.8}{\n\t\\begin{tikzpicture}\n\t\\centering\n\t\t\\node[inner sep=0pt] (copula) at (0,0)\n\t\t{\\includegraphics[width=.30\\textwidth]{figures\/copula_intro}};\n\t\t\\node[inner sep=0pt] (plus) at (2.5,0)\n\t\t{\\includegraphics[width=.02\\textwidth]{figures\/plus_sign.png}};\n\t\t\\node[inner sep=0pt] (marginal) at (5,0)\n\t\t{\\includegraphics[width=.30\\textwidth]{figures\/marginals_intro}};\n\t\t\\node[inner sep=0pt] (equal) at (7.6,0)\n\t\t{\\includegraphics[width=.02\\textwidth]{figures\/equal_image.png}};\n\t\t\\node[inner sep=0pt] (joint) at (10,0.0)\n\t\t{\\includegraphics[width=.30\\textwidth]{figures\/joint_intro}};\n\t\t\n\t\t\\node[inner sep=0pt] (coupalHead) at (0,-2.5) {a) Copula flow };\n\t\t\n\t\t\\node[inner sep=0pt] (MargHead) at (5,-2.5) {b) Marginal flow};\n\t\t\n\t\t\\node[inner sep=0pt] (MargHead) at (10,-2.5) {b) Complex joint density\n\t\t};\n\t\\end{tikzpicture}\n\t}\n\t\\caption{Learning the joint and marginal distributions for `2 rings' data.\n\t This figure demonstrates how copulas\n\t and marginals can be combined to obtain complex joint distributions.\n\tThe copula associated with the `2 rings' distribution cannot be modelled\n\tby any standard bi-variate copula, whereas our proposed copula flow method\n\tcan learn this structure.}\n\t\\label{fig:copulaintro}\n\\end{figure*}\n\\section{Background}\n\\label{sec:background}\nWe denote the domain of a function $ F $ by $ \\domain F $ and range by $\\range F$. \nWe use capital letters $ X, Y $ to denote random variables \nand lower case $ x, y $ to represent their realisations. Bold symbols $ \\mat X =\n[X_1, \\ldots, X_d]$ and corresponding $ \\vec x = [x_1, \\ldots, x_d]$ represent vectors of random variables and their realisations, respectively. \nWe denote the distribution function (also known as the Cumulative Distribution\nFunction (CDF)) of a random variable $X$ by $F_X$, and the corresponding Probability Density Function (PDF) by $f_X$.\nSuch functions play an important role in copula theory. If we map a\nrandom variable through its own distribution function we get an uniformly distributed random variable-$\n\\uniform{1}$ or uniform marginal. 
This is known as\n\\emph{probability integral transform}~\\citep{David1948}.\\footnote{This\ndefinition holds only for continuous variables; we discuss CDFs for discrete variables in section \\ref{sec:discreteFlows}} A copula\nis a relationship, literally a link, between uniform marginals obtained via the\nprobability integral transform~\\citep{Nelsen2006}. Two random variables are\nindependent if the joint copula of their uniform marginals is\nindependent. Conversely, random variables are \\textit{linked} via their\ncopulas. \n\n\n\n\\subsection{Copulas}\n For a pair of random variables, the marginal CDFs $F_X, F_Y$ describe how each\nvariable is individually distributed. The joint CDF $F_{X,Y}$ tells us how they\nare jointly distributed. Sklar's seminal result \\citep{Sklar1959} allows us to\nwrite the joint CDF as a function of the univariate marginals $ F_X $ and $ F_Y\n$, i.e., \n\t\t\\begin{align}\n\t\t\tF_{X, Y}(X, Y) &= C \\left( U_X, U_Y \\right) = C \\left(F_X (X), F_Y(Y) \\right),\n\t\t\\end{align}\n\t\twhere $C$ is a copula. \nHere, the\nuniform marginals are obtained by the probability integral transform, i.e., $ U_X =\nF_X(X)$ and $U_Y = F_Y(Y)$. \n\n\nCopulas describe the dependence structure independent of the marginal\ndistribution, which we exploit for synthetic data generation; especially in\nprivacy preservation, where we perturb data samples by another random process,\ne.g., the Laplacian mechanism~\\citep{Dwork2013}. This changes the marginal\ndistribution but crucially it does not alter the copula of the original data.\n\nFor continuous marginals $ F_X $ and $ F_Y $, the copula $C$ is unique, whereas\nfor discrete marginals it is unique only on $ \\range{F_X} \\times \\range{F_Y}$.\nA corollary of Sklar's theorem is that $X$ and $Y$ are independent if\nand only if their copula is an independent copula, i.e., $C (F_X(X), F_Y(Y))= F_X(X) \\, F_Y(Y)$. \n\\subsection{Generative Sampling}\n\\label{sec:gen_sampling}\nWe can use the constructive inverse of Sklar's theorem for building a generative model.\nThe key concept is that the inverse of the probability integral transform,\ncalled \\textit{inverse transform sampling}, allows us to generate random samples\nstarting from a uniform distribution. The procedure to generate a random sample $\nx $ distributed as $ F_X $ is to first sample a random variable $ u \\sim\n\\uniform{1}$ and second to set $ x := F_X^{(-1)} (u)$.\n Here, the the function $ F^{(-1)} $ is a quasi-inverse function.\n \\begin{defn}[Quasi-inverse function] \n\t\\label{defn:quasi-inverse}\n\tLet $ F $ be a CDF. Then\n \tthe function $ F^{(-1)} $ is a quasi-inverse function of $F$ with domain $\n \t[0, 1] $ such that \n \t $F(F^{(-1)} (y)) = y \\quad \\forall \\, y \\in \\range F $\n \tand $ F^{(-1)} (y) = \\inf \\left\\lbrace x | F(x) \\geq y \\right\\rbrace\n \t\t\\forall \\, x \\in \\domain F, \\, \\text{and} \\, y \\notin \\range F$.\n\tFor strictly monotonically increasing $ F $, the quasi-inverse becomes the regular inverse CDF $\n \tF^{-1}$~\\citep{Nelsen2006}.\n \\end{defn}\n\nThe inverse function, also called as the \\emph{quantile function}, maps a random\nvariable from a uniform distribution to $F$-distributed random variables, i.e.,\n$ X = F\\inv (U) $ or $ F\\inv : \\uniform{1} \\rightarrow \\range F $. Hence, we\ncan present the problem of synthetic data generation as the problem of\nestimating the (quasi)-inverse function for the given data. 
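\n\nAs a small stand-alone illustration of inverse transform sampling (and not part of the proposed model), the Python snippet below draws samples from an exponential distribution by pushing uniform variates through a closed-form quantile function; the marginal flows introduced later play exactly this role, except that the quantile function is learned from data rather than known analytically.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\n\ndef exponential_quantile(u, scale=0.5):\n    # Quasi-inverse of F(x) = 1 - exp(-x / scale) for u in (0, 1).\n    return -scale * np.log1p(-u)\n\nu = rng.uniform(size=10_000)   # u ~ Uniform(0, 1)\nx = exponential_quantile(u)    # x = F^(-1)(u) is exponentially distributed\nprint(x.mean())                # close to the true mean, i.e. scale = 0.5\n\\end{verbatim}\n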
\n\nTo generate samples from $F_X$ using a copula-based model, we start with a pair of uniformly distributed random\nvariables, then pass them through inverse of the copula CDF to obtain correlated\nuniform marginals; see Figure~\\ref{fig:generative_model} b). Note that the\ncopulas are defined over uniform marginals, i.e., the random variables are\nuniformly distributed. We use the correlated univariate marginals to subsequently generate\n$F$-distributed random variables via inverse transform sampling; see\nFigure~\\ref{fig:generative_model} c). Hence, if we can learn the CDFs for the\ncopula as well as the marginals, we can then combine these two models to\ngenerate correlated random samples that are distributed similar to the training\ndata. The procedure is illustrated in\nFigure~\\ref{fig:generative_model}. \n\n\n\\begin{figure*}\n\t\\begin{tikzpicture}\n\t\\centering\n\t\t\\node[inner sep=0pt] (uniform) at (-0.5,0)\n\t\t{\\includegraphics[width=.25\\textwidth]{figures\/uniform_1}};\n\t\t\\node[inner sep=0pt] (copula_text) at (2.0,0)\n\t\t{$\\Longrightarrow $ $\\mathcal{C}_X$ $\\Longrightarrow $};\n\t\t\\node[inner sep=0pt] (marginal) at (4.65, 0)\n\t\t{\\includegraphics[width=.25\\textwidth]{figures\/copula_2}};\n\t\t\\node[inner sep=0pt] (equal) at (7.2,0)\n\t\t{$\\Longrightarrow $ $\\mathcal{F}_X$ $\\Longrightarrow $};\n\t\t\\node[inner sep=0pt] (joint) at (9.85,0)\n\t\t{\\includegraphics[width=.25\\textwidth]{figures\/joint_3}};\n\t\t\n\t\t\\node[inner sep=0pt] (UnilHead) at (0,-2.2) {a) Uniform independent\n\t\tmarginal};\n\t\t\n\t\t\\node[inner sep=0pt] (copulaHead) at (5,-2.2) {b) Copula samples $C (U_X, U_Y)$ };\n\t\t\n\t\t\\node[inner sep=0pt] (MargHead) at (9.9,-2.2) {c) Joint distribution\n\t\t};\n\n\t\\end{tikzpicture}\n\t\\caption{Copula flow generative model. Left: We start with uniformly distributed\n\t independent variable samples. These samples are passed through the copula\n\t flow network to generate correlated uniformly distributed random variables (middle).\n\t The correlated variables are transformed to the desired distribution by\n\t using univariate marginal flows $\\mathcal{F}_X$ (right).}\n\t \\label{fig:generative_model}\n\\end{figure*}\n\n\\section{Copula Flows}\n\nThe inverse function used to generate synthetic data can be described as a flow\nfunction $ \\mathcal{F}_X \\approx F_X^{(-1)}$ that transforms a uniform random variable\n$ U$ into $ X\\sim F_X$, see Figure~\\ref{fig:generative_model}. We can interpret the (quasi)-inverse function as a\nnormalising flow~\\citep{Tabak2010, Rezende2015, Papamakarios2019} that\ntransforms a uniform density into the one described by the copula distribution\nfunction $C$. We refer it as \\emph{copula flow}. \n\n\\subsection{Normalising Flows}\nNormalising flows are compositions of smooth, invertible mappings that\ntransform one density into another~\\citep{Rezende2015,Papamakarios2019}. The\napproximation in $\\mathcal{F}_X \\approx F_X^{(-1)}$ indicates that we are estimating a\n(quasi) inverse distribution function by using a flow function $\\mathcal{F}_X $. If we\nlearn the true CDF $ F_X$, we\nhave\n\\begin{align}\n\tF_X(x) = \\mathcal{F}_X\\inv (x) = u, \\, u\\sim \\uniform{1}, \\quad \\forall x \\in \\domain {F_X},\n\\end{align} \ndue to uniqueness of the CDF.\nAn advantage of the normalising flow formulation is that we can construct\narbitrarily complex maps as long as they are invertible and\ndifferentiable. 
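\n\nTo make the sampling pipeline of Figure~\\ref{fig:generative_model} concrete before describing the learned maps, the sketch below replaces both flows by closed-form stand-ins: a Gaussian copula plays the role of the copula flow $\\mathcal{C}_{\\vec X}$ and parametric quantile functions play the role of the marginal flows $\\mathcal{F}_{X}$. This only illustrates the mechanism; in the proposed model both maps are learned splines.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy import stats\n\nrng = np.random.default_rng(0)\n\n# a) independent uniform samples\nu = rng.uniform(size=(10_000, 2))\n\n# b) Gaussian copula as a stand-in for the copula flow C_X\nrho = 0.8\nchol = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))\nz = stats.norm.ppf(u) @ chol.T   # correlated standard normals\nu_x = stats.norm.cdf(z)          # correlated uniform marginals\n\n# c) marginal quantile functions as stand-ins for the marginal flows F_X\nx1 = stats.gamma(a=2.0).ppf(u_x[:, 0])\nx2 = stats.halfnorm().ppf(u_x[:, 1])\nsamples = np.column_stack([x1, x2])\n\\end{verbatim}\n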
\n\nConsider an invertible flow function $ \\vec\\mathcal{F} : \\mathbb{R}^{D}\n\\rightarrow \\mathbb{R}^{D}$, which transforms a random variable as $ \\vec X=\n\\vec \\mathcal{F} (\\vec U) $. By using the change of\nvariables formula, the density $f_{\\vec X}$ of variable $\\vec X$ is obtained as\n\\begin{align}\n\\label{eq:normFlow}\nf_{\\vec X}(\\vec X) &= f_{\\vec U}\\left(\\mathcal{F}\\inv (\\vec X)\\right) \\left|\\operatorname{det} \\left(\\tfrac{\\partial \\vec \\mathcal{F}\\inv(\\vec X)}{\\partial \\vec X} \\right) \\right| \n = \\, \\left|\\operatorname{det} \\left(\\tfrac{\\partial \\vec \\mathcal{F}\\inv(\\vec X)}{\\partial \\vec X} \\right) \\right|,\n\\end{align}\nwhere $f_{\\vec U}\\left(\\mathcal{F}\\inv (\\vec X)\\right) = f_{\\vec U}(\\vec U) = 1.0$, since $f_{\\vec U}=\\uniform{1}$.\n\n\nFor the copula flow model, we assume that the copula density $c$ of the copula CDF $C$ exists. Starting with the bivariate case for random variables $X, Y$, we compute the density $f_{XY}$\nvia the partial derivatives of the $C$, i.e., \n\\begin{align}\n\\label{eq:bivariate_density}\n\tf_{XY} &= \\frac{\\partial^2 C (F_X, F_Y) }{\\partial F_X \\partial F_Y} \n\t= c_{XY}(F_X, F_Y) \\, f_X \\, f_Y.\n\\end{align}\nWe can generalise this result to the joint density $f_{\\mat\nX}(\\vec X) $ of a $d$-dimensional random vector $\\vec X =[ X_{1}, \\ldots,\nX_{d}]$ as\n\\begin{align}\n\\label{eqn:jointMarginal}\nf_{\\mat X}&= \n\\underbrace{\nc_{\\mat X} (F_{X_1}, \\ldots, F_{X_d}) \n\\vphantom{\\prod\\nolimits_{k=1}^{d}}\n}_{\\text{copula density}} \n\\, \n\\underbrace{\n\\prod\\nolimits_{k=1}^{d}f_{X_k}\n}_{\\text{marginal density}}\n.\n\\end{align}\n\nTo construct the copula-based flow model, we begin with rewriting the joint\ndensity $f_{\\mat X}$ in~\\eqref{eqn:jointMarginal} in terms of marginal\nflows $ \\mathcal{F} $ and the joint copula flow $\\mathcal{C}_{\\vec X}$. We then obtain\n\\begin{align}\n\t\\label{eqn:jointMarginalNoText}\n\tf_{\\mat X}(\\vec X)\n\t &=\\left|\\operatorname{det} \\left(\\tfrac{\\partial \\mathcal{C}_{\\vec X}\\inv(\\vec U_{\\vec X})}\n\t{\\partial \\vec U_{\\vec X}} \\right) \\right| \\prod\\nolimits_{k=1}^{d} \n\t\\left| \\tfrac{\\partial \\mathcal{F}_{X_k}\\inv(X_k)}{\\partial X_k} \\right|,\n\\end{align}\nwhere $\\mathcal{C}_{\\vec X}$ is the copula flow function and $\\mathcal{F}_{X_1},\n \\ldots, \\mathcal{F}_{X_d}$ are marginal flow functions; see Appendix~\\ref{sec:apendixFlow}\n for the detailed derivation. Even though the copula itself is defined on\n independent univariate marginals functions, we use $X$ in the notation to\n emphasise that these univariates are obtained by inverting the flow $U_{X_k} =\n \\mathcal{F}\\inv _{X_k}(X_k)$ for marginals.\n\n\n\n\nGiven a data-set $\\{\\vec x_1, \\ldots, \\vec x_N\\}$ of size $N$, we \ntrain the flow by maximising the total log-likelihood $ \n\\mathcal L=\\log f_{\\mat X}(\\vec x) = \\sum_{n=1}^{N} \\log\nf_{\\mat X}(\\vec x_{n})$ with respect to the parametrisation of flow function\n$ \\mathcal{F} $. \nThe log-likelihood can be written as the sum of two terms, namely\n\\begin{align}\n\\label{eqn:Loglikelihood}\n\\mathcal{L} \n&=\\mathcal{L}_{\\mathcal{C}_{\\vec X}} \n+\\mathcal{L}_{\\vec \\mathcal{F}} \n= \n\\mathcal{L}_{\\mathcal{C}_{\\vec X}} \n+ \\sum\\nolimits_{k=1}^d \\mathcal{L}_{X_k}.\n\\end{align}\nThe copula flow log-likelihood $\n \\mathcal{L}_{\\mathcal{C}_{\\vec X}} $ depends on the marginals flows. Hence, we\n train the marginal flows first and then the copula flow. 
The gradients of\n both log-likelihood terms are independent as they are separable. The procedure\n of first training the marginals before fitting copula models is standard in the\n copula literature and yields better performance and numerical stability\n \\citep{Joe2005}. \n\n\n\\subsection{Copula Flows}\nCopula is a CDF defined over uniform marginals, i.e., $\\uniform{1}^d \\rightarrow\n\\uniform{1}$. We can use the inverse of this CDF to generate samples via inverse\ntransform sampling. For the multivariate case, we can use the conditional\ngeneration procedure~\\citep{Hansen1994}. Let $ \\vec U_{\\vec X} = [U_{X_1}, \\ldots,\nU_{X_d}] $ be a random vector with copula $ \\mat C_{\\vec X} $, and $ \\vec U=\nU_1, \\ldots, U_d$ be i.i.d. $ \\uniform{1} $ random variables. Then the\nmultivariate flow transform $\\vec U_X := \\mat \\mathcal{C}_{\\vec X} ( \\vec U)\n$ can be defined recursively as\n\\begin{align}\n\\label{eqn:multivariateCopula}\n\tU_{X_1} &:= \\mathcal{C}_{X_1} (U_{1}),\\quad\n\tU_{X_k} := \\mathcal{C}_{X_{k|1, \\ldots k-1}}\\left( U_{k} | U_{X_1, \\ldots, X_{k-1}} \\right),\n\t \\quad 2 \\leq k \\leq d,\n\\end{align}\nwhere $ \\mathcal{C}_{X_{k|1, \\ldots k-1}} = C\\inv_{X_{k|1, \\ldots k-1}} $ is the\nflow function conditioned on all the variables $ U_{X_1}, \\ldots,\nU_{X_{k-1}}$. The key concept is the interpretation of the inverse of the\n(normalising) copula flow $\\mathcal{C}_{\\vec X}\\inv$ as a conditional CDF.\nMoreover, we estimate $ \\mathcal{C}_{X_{k|1, \\ldots k-1}}$ recursively, with one\ndimension at a time, via a neural spline\nnetwork~\\citep{Durkan2019}. The conditioning variables, $X_{1}, \\ldots X_{k-1}$ are\ninput to the network that outputs spline parameters for the flow $\n\\mathcal{C}_{X_{k|1, \\ldots k-1}}$.\nThe procedure above is similar to\nauto-regressive density estimators~\\citep{Papamakarios2017,Papamakarios2019}\nwith Masked Autoregressive Flow (MAF)\nstructure~\\citep{Germain2015,Papamakarios2017}. Similar to MAF we can stack such\nmultiple flows to create flexible multivariate copula flow $\\mat\n\\mathcal{C}_{\\vec X}$. In the copula literature this multivariate extension is\nconvenient for Archimedean copulas~\\citep{Nelsen2006}, where the generators for\nsuch copulas can be interpreted as conditional flow functions. \n\n\n\n\\subsection{Univariate Marginal Flows}\nTo estimate univariate marginals, we can use both parametric and non-parametric\ndensity estimation methods as the dimensionality is not a challenge. However,\nfor training the copula, we require models that can be inverted easily to obtain\nuniform marginals, i.e., we need to have a well-defined CDF for the methods we\nemploy for the density estimation. Simultaneously, we need a model that can be used\nfor generating data via inverse transform sampling.\nWe propose to use monotone rational quadratic splines, neural spline flows\n(NSF)~\\citep{Durkan2019}. With the a sufficiently dense spline, i.e., a large\nnumber of knot positions, we can learn arbitrarily complex marginal\ndistributions. As we directly attempt to learn the quasi-inverse\nfunction $ \\mathcal{F} : \\uniform{1} \\, \\rightarrow \\, \\range F$, where $F$ is the\n\\textit{true} CDF, a single parameter vector $ \\vec\n\\theta_d = [\\vec \\theta_d^w, \\vec\n\\theta_d^h, \\vec \\theta_d^s]$ describing the width, height and slope parameters,\nrespectively, is sufficient. 
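\n\nTo illustrate the two-stage procedure described above on a toy example, the following sketch uses off-the-shelf parametric stand-ins (scipy maximum-likelihood fits for the marginals and a Gaussian copula for the dependence) in place of the spline flows; the structure of the computation, marginals first and dependence second, is the same.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy import stats\n\nrng = np.random.default_rng(0)\n\n# Toy data with gamma / half-normal marginals and a Gaussian copula (rho = 0.7).\nz = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.7], [0.7, 1.0]], size=2000)\ndata = np.column_stack([stats.gamma(a=2.0).ppf(stats.norm.cdf(z[:, 0])),\n                        stats.halfnorm().ppf(stats.norm.cdf(z[:, 1]))])\n\n# Stage 1: fit the marginals and map the data to uniform marginals.\na, loc, scale = stats.gamma.fit(data[:, 0], floc=0)\nu1 = stats.gamma(a, loc=loc, scale=scale).cdf(data[:, 0])\nloc2, scale2 = stats.halfnorm.fit(data[:, 1], floc=0)\nu2 = stats.halfnorm(loc=loc2, scale=scale2).cdf(data[:, 1])\n\n# Stage 2: fit the dependence on the uniform marginals.\nrho_hat = np.corrcoef(stats.norm.ppf(u1), stats.norm.ppf(u2))[0, 1]\nprint(rho_hat)   # close to 0.7\n\\end{verbatim}\n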
\nWe document the details of the proposed spline network\nin Appendix~\\ref{sec:SoftwareDetails}\n\n\n\n\n\n\n\n\n\n\\section{Discrete Data Modelling}\n\\begin{wrapfigure}[13]{r}{0.35\\textwidth}\n\t\\vspace{-15mm}\n\t \\centering\n\t\t\\includegraphics[width=0.35\\textwidth]{figures\/hyperGeom}\n\t\t\\caption{Illustration of discrete marginal flow for three discrete classes. The marginal flow (blue) gives discrete samples when rounded up. The distributional transform\n\t\tis uniform along the vertical line of the true CDF (orange). }\n\t\t\\label{fig:hypergeom}\n\\end{wrapfigure}\nModelling mixed variables via normalising flows is challenging~\\citep{Onken2016,\nZiegler2019, Tran2019}. Discrete data poses challenges for both copula learning\nas well as learning flow functions. For marginals, i.e., univariate flow\nfunctions, the input is a uniform distribution continuous in $[0, 1]$, whereas\nthe output is discrete and hence discontinuous. For copula learning, we need\nuniform marginals for the given training data. With discrete inputs to the\ninverse flow function (CDF), the output is discontinuous in $[0, 1]$. \n\n\\subsection{Marginal Flows for Discrete\/Count Data}\n\\label{sec:discreteFlows}\n\n\nWe first focus on learning the univariate flow maps, i.e., marginal learning.\nOrdinal variables have a natural order, which we can use directly. For\ncategorical data, we propose to assign each class a unique integer in $\\{0\n,\\ldots, n-1 \\}$, where $n $ is the number of classes. We define a CDF over\nthese integer values. As this assignment is not\nunique, the same category assignment needs to be maintained for training and\ndata generation. \n\n\n\nFor discrete data generation, we round the data output of the flow function to\nthe next higher integer, i.e., we can consider random variable $Y = \\text{ceil}\n(X)$. However, this procedure does not yield a valid density. To ensure that\nthat the samples are properly discretised, we use a quantised\ndistribution~\\citep{Salimans2019, Dillon2017} so that density learning can be\nformulated as a quantile learning procedure. \n\nWe assign a quantile range for a given class or ordinal integer. We use the same\nspline-based flow functions as the one used for continuous marginals, but with\nquantisation as the last step. The advantage of this discrete flow is that our\nquasi-inverse, i.e., the flow function is a continuous and monotonic function,\nwhich can be trained by maximising the likelihood. Figure~\\ref{fig:hypergeom}\nshows the spline-based flow function learned for a hypergeometric distribution.\nThe continuous flow function (blue curve in Figure~\\ref{fig:hypergeom}) learned\nfor the discrete marginal, light orange step function in\nFigure~\\ref{fig:hypergeom}, allows us to generate discrete data via inverse\ntransform sampling.\n\n\\subsection{Mixed Variable Copulas}\n\nHowever, the inverse of this flow function, i.e., the CDF of the marginal,\nresults in discontinuous values at the locations of the training inputs (blue\ncircles in Figure~\\ref{fig:hypergeom}). Copulas are unique and well defined only\nfor continuous univariate marginals. An elegant way to find continuous\nunivariate marginals for the discrete variable is via the distributional\ntransform~\\citep{Ruschendorf2009}.\n\\begin{defn}[Distributional Transform]\n\t\\label{defn:dist_xform}\n\tLet $ X \\sim F_X $. 
The modified CDF is defined as \n\t\\begin{align}\n\t\tF_X (x, \\lambda) := \\mathrm{Pr}(X < x) + \\lambda \\mathrm{Pr}(X = x).\n\t\\end{align}\n\tLet $ V $ be uniformly distributed and independent of $X$. Then the\n\tdistributional transform $ X \\rightarrow U $ of $ X $ is \n\t\\begin{align}\n\t\tu := \\mathcal{F}\\inv_{X} (x) = F_{X-}(x) + V(F_X(x)- F_{X-}(x)),\n\t\\end{align}\n\twhere $F_{X-}(x) = \\mathrm{Pr} (X < x)$, $F_{X}(x) = \\mathrm{Pr} (X \\le x) $ and\n\t$\\mathrm{Pr}(X=x) = F_X(x)- F_{X-}(x)$. With this, we have $ U \\, \\sim \\uniform{1}$\n\tand $ X = \\mathcal{F}_{X} (U) \\, a.s. $~\\citep{Ruschendorf2009}. \n\\end{defn}\nThis distributional transform behaves similarly to the probability integral\ntransform for continuous distributions, i.e., a discrete random variable $X$\nmapped through the distributional transform gives a uniform marginal. However,\nunlike for continuous variables, the distributional map is stochastic and hence not\nunique. With the distributional transform, the copula model does not need any\nspecial treatment for discrete variables; see Section 2\nin~\\citep{Ruschendorf2009} for details. In Figure~\\ref{fig:hypergeom},\nthe red cross marks show the values from the distributional transform; all samples\nalong the $y$-axis share the same $x$-value.\nWith the distributional transform and marginal splines, the copula flow model\nleverages recent advances in normalising flows to learn arbitrarily complex\ndistributions. Moreover, we can show that:\n\\begin{theorem}\n\tA copula flow is a universal density approximator. \\quad Proof:\n\tAppendix~\\ref{sec:uni_density_approx}.\n\\end{theorem}\nHence, we can learn a model to generate any type of data, discrete or\ncontinuous, with the proposed copula flow model. This property holds true when\nthe flow network converges to the inverse function.\n\n\n \\section{Related Work}\n\n The Synthetic Data Vault (SDV) project~\\citep{Patki2016} is very close in\n spirit to this paper. In the SDV, the univariate marginals are learned by using\n a Gaussian mixture model, and the multivariate copula is learned as a Gaussian\n copula. \n\n Unlike the flow-based estimation proposed in this\n work, copula-based models typically follow a two-step procedure, the\n pair-copula construction or \\emph{vine copulas}: first constructing a\n pairwise dependence structure and then learning pair-copulas for\n it~\\citep{Aas2009}. Copula Bayesian networks \\citep{Elidan2010} use conditional\n copulas and learn a Bayesian network over the variables. In~\\citep{Chang2019},\n the authors propose to use Monte-Carlo tree search for finding the network\n structure. Such models have limited flexibility, e.g., no standard copula model\n can learn the structure in Figure~\\ref{fig:copulaintro}, which our model\n (copula flow) can learn.\n \n \n\n GAN-based methods are the workhorse of the majority of recent advances in\n synthetic data generation; e.g., the conditional GAN~\\citep{Xu2019} builds a\n conditional generator to tackle mixed-variable types, whereas the\n tableGAN~\\citep{Park2018} builds GANs for privacy preservation. Unlike the\n tableGAN, the PATE-GAN~\\citep{Jordon2019} produces synthetic data with\n differential privacy guarantees. Data generators for medical\n records~\\citep{Che2017, Choi2017} are also based on GANs. \n \n\n Normalising flow methods have seen very rapid growth in terms of continuous\n improvements on the benchmarks introduced by~\\citep{Papamakarios2017}. 
Our work\n relies heavily on the NSF~\\citep{Durkan2019, Muller2019}. Instead of training\n models with a likelihood loss, in~\\citep{Nash2019}, the authors use\n auto-regressive energy models for density estimation. While most methods either\n rely on coupling~\\citep{Dinh2014, Dinh2016} or\n auto-regressive~\\citep{Papamakarios2017, Kingma2016} models, transformation\n networks~\\cite{Oliva2018} use a combination of both. The neural auto-regressive\n flows~\\citep{Huang2018}, which have monotonic function guarantees, can be used\n as a drop-in replacement for the spline-based copula network used in this\n paper. \n\n\n\\section{Experiments}\n\\label{sec:experiemnts}\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=\\textwidth]{figures\/mixed_var_plot_2}\n\t\\caption{Mixed variable copula learning: joint marginals, joint scatter plots, regression plots. Fitting a mixed type vine~\\citep{Onken2016}. $ X_2 $ is a\n\t\tdiscrete Hyper geometric, $ X_1 $ is Half Normal, and $X_3 $ is Gamma\n\t\tdistributed. Pairwise copulas $\\left( X_1 \\text{ and } X_2 \\right)$\n\t\tGaussian, $\\left( X_2 \\text{ and } X_3 \\right)$ Clayton, $\\left( X_1\n\t\t\\text{ and } X_3 \\right)$ Gumbel. The diagonal plots show the marginals for each\n\t\tvariable, whereas the upper triangle shows joint scatter plots. The lower triangle shows\n\t\tregression plots for each variable. Here, we overlay a\n\t\tlinear regression fit and data for the pair of variables. For continuous variables $X_1$ and $X_3$ all\n\t\tthree methods produce regression lines similar to each other. For\n\t\trelations between discrete and continuous variables, the copula flow and the\n\t\tground truth align. However, for both discrete to continuous relations, the\n\t\tmixed vine fails to capture the relation as evident by the misaligned\n\t\tlinear regression fit. }\n\t\\label{fig:mixedExperiment}\n\\end{figure}\n\nOur proposed copula flow can learn copula models with continuous and discrete\nvariables. This allows us to use the model to learn complex relations in tabular\ndata as well. \n\n\n\\paragraph{Mixed Variable Copula Learning}\nWe start with an\nillustrative example by constructing a generative model with discrete and\ncontinuous variables. We demonstrate that the copula flow model can indeed learn the structure\nas well as the marginal densities. We construct a vine model with three different\nmarginals: 1) $ X_1$, continuous, Half Normal; 2) $X_2$ discrete,\nhypergeometric; 3) $X_3$, continuous, Gamma. Since the copula describes a symmetric\nrelationship, we have three different bivariate relations between three\nvariables. We assign three widely used bivariate copulas: a Gaussian for $X_1$ and $X_2$, a Clayton for $X_2$ and $X_3$, and a Gumbel copula for $X_1$ and $X_3$. This model is inspired by the mixed vine data model described \nin~\\citep{Onken2016}, where authors propose to build a likelihood using CDFs and vine for mixed data. We use this model as a baseline for comparison.\nThe copula flow can learn all three marginals, which are plotted\nalong the diagonal of Figure~\\ref{fig:mixedExperiment}, while the mixed vine copula struggles a bit (second row, second column). The copula flow also successfully learns copulas between discrete and continuous variables, as evident by the linear regression line in the lower half of the plots in \nFigure~\\ref{fig:mixedExperiment}. 
Both the \nmixed vine model~\\citep{Onken2016} and the copula flow can learn the continuous copula well as evident by the left lowermost regression plot, but the mixed vine copula struggles with mixed variable copulas (third row, second column of Figure~\\ref{fig:mixedExperiment}). Scatter plots in the upper diagonal show that the samples from mixed vine model are more spread out while copula flow samples align with the true samples.\n\n\n \n\n\\paragraph{Density Estimation}\nFor density estimation, we compare the performance against the current state of\nthe art neural density estimators. The standard benchmark introduced by\n\\citep{Papamakarios2017} modifies the original data with discrete marginals. For example, in the UCI power data-set, the authors\nadd a uniform random noise to the sub-metering columns. This is equivalent to using the\ndistributional transform, Definition~\\ref{defn:dist_xform}, with a fixed seed.\nThese ad-hoc changes defeat the purpose of interpretability and transparency.\nHowever, to maintain the exact comparison across different density estimators we\nkeep the data, training, test, and validation splits exactly as \nin~\\citep{Papamakarios2017}, with the code provided by~\\citep{Oliva2018}. We use\nCDF splines flow for estimating the marginals, which is equivalent to estimating\nthe model with an independence copula.\nWe obtain the univariate uniform marginals by inverting the flow, and then\nfit a copula flow model to obtain the joint density. \n We summarise the density\nestimation results in Table~\\ref{tbl:density_estim}. The marginal likelihood scores ignore the\nrelationships amongst the variables, whereas the copula likelihood shows information\ngained by using copula. The joint likelihood can\nbe expressed as sum of marginal and copula likelihoods, see~\\eqref{eqn:Loglikelihood} The copula flow based\nmodel achieves a performance that is close to the state of the art. Further\nimprovements may be achieved by fine tuning the copula flow structure as the\nmarginal CDFs are close to the empirical CDF as verified by using the Kolmogorov-Smirnov (KS) test; see Appendix~\\ref{sec:extra_results}\n \n \\begin{table}[th]\n\t\\centering\n\t\\begin{tabular}{c c c c c c }\n\t\t\\hline \n\t\tModel & Power & Gas & Hepmass & Miniboone \\\\ \n\t\t\\hline \n\t\tFFJORD & $ 0.46 \\pm 0.01 $ & $ 8.59 \\pm 0.12 $ & $ - 14.92 \\pm 0.08 $ &\n\t\t$ - 10.43 \\pm 0.04 $ \\\\\n\t\tRQ-NSF (AR)~\\citep{Durkan2019} & $ 0.66 \\pm 0.01 $ & $ 13.09 \\pm 0.02 $ & $ - 14.01 \\pm\n\t\t0.03 $ & $ - 9.22 \\pm 0.48 $ \\\\\n\t\tMAF~\\citep{Papamakarios2017} & $ 0.45 \\pm 0.01 $ & $12.35 \\pm 0.02$ & $-17.03 \\pm 0.02$ &$-10.92\n\t\t\\pm 0.46$ \\\\ \n\t\t\\hline\n\t\tMarginal Flows $ \\mathcal{F} $& $ -0.80 \\pm 0.02 $ & $ -6.67 \\pm 0.02 $& $\n\t\t- 26.42 \\pm 0.05$\n\t\t& $-53.17 \\pm 0.06$ \\\\ \n\t\tCopula Flow $ \\mathcal{C}\t $& $ 1.39 \\pm 0.03 $ & $ 15.6 \\pm 0.67$ &\n\t\t$5.4 \\pm 0.10$ & $37.77 \\pm 0.21 $ \\\\ \n\t\tJoint Model $ \\mathcal{C}\t\\, + \\, \\mathcal{F} $ & $ 0.59 \\pm 0.03 $\n\t\t& $ 8.05 \\pm 0.68 $ & $-19.6 \\pm 0.12$ & $-14.83 \\pm 0.21$ \\\\ \n\t\t\\hline \n\t\\end{tabular} \n\t\\caption{Test log-likelihoods (nats) on density estimation benchmarks. \n\tExact replication of the dataset and splits as provided\n\tby~\\citep{Papamakarios2017}.}\n\t\\label{tbl:density_estim}\n\\end{table}\n\n\n\\paragraph{Synthetic Data Generation}\n Estimating the performance of data generators is a challenging\n process~\\citep{Theis2016}. 
Merely having very high likelihood on the training\n or test set is not necessarily the best metric. One measure to look at is the\n machine learning efficacy of the generated data, i.e., whether can we use the synthetic\n data to effectively perform machine learning tasks (e.g., regression or\n classification), which we wished to accomplish with the true\n data~\\citep{Xu2019, Jordon2019}. Here, we test the synthetic data generator\n using the framework introduced in~\\citep{Xu2019}.\n The copula flow model can capture the relations between different\n variables while learning a density model as it is evident by the high scores in the\n ML tasks benchmark; see\n Table~\\ref{tbl:synth_data}. For Gaussian mixture (GM) based models, the copula flow model performs very close to the `Identity'\n transform, which uses the true data. This indicates a high similarity of the true and the synthetic data-set. For real data, the model\n performs very close to the current state of the art~\\citep{Xu2019} for\n classification tasks and outperforms it on regression tasks on aggregate. The\n quality of learned CDFs is assessed by using the KS test; see Appendix~\\ref{sec:extra_results}.\n \n\\begin{table}[ht] \n\t\\centering\n\\begin{tabular}{c c c c c c c c c c c}\n\\hline\n & \\multicolumn{2}{c}{GM Sim.} & & & \\multicolumn{2}{c}{BN Sim.} & & & \\multicolumn{2}{c}{Real} \\\\ \\cline{2-3} \\cline{6-7} \\cline{10-11} \nMethod & $\\mathcal{L}_{syn}$ & $\\mathcal{L}_{test}$ & & & $\\mathcal{L}_{syn}$ & $\\mathcal{L}_{test}$ & & & clf & reg \\\\ \\hline\nIdentity & -2.61 & -2.61 & & & -9.33 & -9.36 & & & 0.743 & 0.14 \\\\ \\hline\nCLBN~\\citep{Che2017} & -3.06 & 7.31 & & & -10.66 & -9.92 & & & 0.382 & -6.28 \\\\\nPrivBN~\\citep{Zhang2017} & -3.38 & -12.42 & & & -12.97 & -10.90 & & & 0.225 & -4.49 \\\\\nMedGAN~\\citep{Choi2017} & -7.27 & -60.03 & & & -11.14 & -12.15 & & & 0.137 & -8.80 \\\\\nVeeGAN~\\citep{Srivastava2017} & -10.06 & -4.22 & & & -15.40 & -13.86 & & & 0.143 & -6.5e6 \\\\\nTableGAN~\\citep{Park2018} & -8.24 & -4.12 & & & -11.84 & -10.47 & & & 0.162 & -3.09 \\\\\nTVAE~\\citep{Xu2019} & \\textbf{-2.65} & -5.42 & & & \\textbf{-6.76} & \\textbf{-9.59} & & & \\textbf{0.519} & -0.20 \\\\\nCTGAN~\\citep{Xu2019} & -5.72 & -3.40 & & & -11.67 & -10.60 & & & 0.469 & -0.43 \\\\ \\hline\nCopulaFlow & -3.17 & \\textbf{-2.98} & & & -10.51 & -9.69 & & & 0.431 & \\textbf{-0.18} \\\\ \\hline\n\\end{tabular}\n\\caption{Performance of the copula flow models for synthetic data generation. \nWe benchmark by using the synthetic data to perform the \nmachine learning (ML) task, classification or regression, associated with\nthe data-set. The metric is generated for the original test data.\nGaussian mixture (GM) and Bayesian network (BN) simulations correspond\nto data generated from known GM or BN models. Log-likelihoods $\\mathcal{L}_{syn}$\nof the true model given synthetic data and $\\mathcal{L}_{test}$ of the \ngenerative model given the test data are shown. All values except the copula \nflow are from the framework in~\\citep{Xu2019}. }\n\t\\label{tbl:synth_data}\n\\end{table}\n\n\n\\section{Conclusion}\nIn this work we proposed a new probabilistic model to generate high fidelity\nsynthetic data. Our model is based on copula theory which allows us to build\ninterpretable and flexible model for the data generation. We used distributional\ntransform to address the challenges associated with learning copulas for\ndiscrete data. 
Experimental results show that our proposed model is capable of\nlearning the mixed-variable copulas required for tabular data-sets. We show that\nthe synthetic data generated from a variety of real data-sets captures\nthe relationships amongst the mixed variables.\nOur model can be extended to generate data with differential\nprivacy guarantees. We consider this an important extension and leave it for future work.\nMoreover, our copula flow model can also be used wherever copula models are already used,\ne.g., in finance and actuarial science.\n\n\\section*{Broader Impact}\n Copula-based density models are interpretable, as they combine individual\n(marginal) behaviour with a joint dependence structure that is independent of that\nindividual behaviour. Hence,\nthese models are used in financial and insurance modelling, where interpretable\nmodels are needed for regulatory purposes. The appeal of synthetic data stems from\nthe inherent privacy achieved by the model, as it hides the true data. However, the\ncurrent implementation is still prone to privacy attacks, as one can use a large\nnumber of queries to learn the statistics from the model, i.e., private data may\nleak through the synthetic data generated by the proposed model. Copula models\nare typically governed by a small number of parameters, whereas the copula\nflow model has a very large deep network. This can pose a challenge even if the\noutput of the network is interpretable. \n\n\n\\section{Flow Model}\n\\label{sec:apendixFlow}\n\nWe can derive the flow-based likelihood by starting from the generative procedure\nfor the samples. We start with a $d$-dimensional independently distributed random\nvector $\\vec U \\sim \\uniform{1}^d$ and map it through the copula flow $\\mathcal{C}_{\\vec\nX}$ to obtain the random vector $\\vec U_X = \\mathcal{C}_{\\vec X} (\\vec U)$. The\njoint vector is then mapped through the marginal flows $\\vec \\mathcal{F}_{\\vec X}$ to\nobtain a random vector $\\vec X = \\mathcal{F}_{\\vec X} (\\vec U_X)$. The combined\nformulation can be written as a composition of two flows to obtain $\\vec X =\n\\mathcal{F}_{\\vec X} \\left( \\mathcal{C}_{\\vec X} (\\vec U) \\right)$, which is also a\nvalid flow function~\\citep{Rezende2015}.\n\nThe likelihood for this flow function can be written as\n\\begin{align}\n\\label{eqn:likelihood_part1}\n\t\\vec f_{\\vec X} (\\vec X) &= \\vec f_{\\vec U_{\\vec X}}(\\vec \\mathcal{F}_{\\vec X}\\inv (\\vec X)) \\, \\left| \\mathrm{det} \\left(\\frac{\\partial \\vec \\mathcal{F}_{\\vec X}\\inv(\\vec X)}{\\partial \\vec X} \\right) \\right|. \n\\end{align}\nThe marginal flows are independent for each dimension of the random vector $\\vec\nX$. 
Hence, the determinant can be expressed as a product of the diagonal terms to\nobtain the likelihood \n\\begin{align}\n\t\\vec f_{\\vec X} (\\vec X) &= \\vec f_{\\vec U_{\\vec X}}(\\vec \\mathcal{F}_{\\vec X}\\inv (\\vec X)) \\, \\prod_{i=1}^{d} \\left| \\left(\\frac{\\partial \\mathcal{F}_{X_i}\\inv(x_i)}{\\partial x_i} \\right) \\right|.\n\\end{align}\nThe quantity $\\vec f_{\\vec U_{\\vec X}}(\\vec \\mathcal{F}_{\\vec X}\\inv (\\vec X))$ is\nessentially the flow likelihood for the copula density, which we can write as \n\\begin{align}\n\t\\vec f_{\\vec U_{\\vec X}}(\\vec \\mathcal{F}_{\\vec X}\\inv (\\vec X)) & = \\vec f_{\\vec U_{\\vec X}} \\left( \\vec \\mathcal{C}_{\\vec X} (\\vec U) \\right) \\\\\n\t& = \\vec f_{\\vec U}(\\vec \\mathcal{C}_{\\vec X}\\inv (\\vec U_{\\vec X})) \\, \\left|\\mathrm{det} \\left(\\frac{\\partial \\vec \\mathcal{C}_{\\vec X}\\inv(\\vec U_{\\vec X})}{\\partial \\vec U_{\\vec X}} \\right) \\right| \\\\\n\t& = \\left| \\mathrm{det} \\left(\\frac{\\partial \\vec \\mathcal{C}_{\\vec X}\\inv(\\vec U_{\\vec X})}{\\partial \\vec U_{\\vec X}} \\right) \\right|,\n\t\\label{eqn:copula_likelihood}\n\\end{align}\nwhere $f_{\\vec U}\\left(\\mathcal{C}_{\\vec X}\\inv (\\vec U_{\\vec X})\\right) = f_{\\vec U}(\\vec U) = 1.0$, since $f_{\\vec U}=\\uniform{1}$.\nUsing the the copula likelihood in~\\eqref{eqn:likelihood_part1} in\nthe total likelihood~\\eqref{eqn:copula_likelihood} yields\n\\begin{align}\n\t\\vec f_{\\mat X}(\\vec X) &= \\left| \\mathrm{det} \\left(\\frac{\\partial \\vec \\mathcal{C}_{\\vec X}\\inv(\\vec \\mathcal{F}_{\\vec X}\\inv (\\vec X))}{\\partial \\vec \\mathcal{F}_{\\vec X}\\inv (\\vec X)} \\right) \\right| \\prod_{i=1}^{d} \\left| \\left(\\frac{\\partial \\mathcal{F}_{X_i}\\inv(X_i)}{\\partial X_i} \\right) \\right|.\n\\end{align}\n\n\n\n\n\n\\paragraph{Sklar's Theorem}\nAs discussed in section~\\ref{sec:background}, we can interpret the inverse of the flow\nfunction as a CDF: \n\\begin{align}\n\t&\\vec X = \\mathcal{F}_{\\vec X} \\left( \\mathcal{C}_{\\vec X} (\\vec U) \\right), \\\\\n\t&\\mathcal{C}_{\\vec X}\\inv \\left( \\mathcal{F}_{\\vec X}\\inv ( \\vec X) \\right)= \\vec U.\n\\end{align}\nIf we replace the flow function with the\ntrue marginal CDF $\\vec F_{\\vec X}$ and the true copula CDF $\\vec C_{\\vec X}$, we can rewrite it as\n\\begin{align}\n\t\\vec C_{\\vec X} \\left( \\vec F_{\\vec X}( \\vec X) \\right)&= \\vec U.\n\\end{align}\nBy the probability integral transform we know that if $\\vec H_{\\vec X}$ is the\nCDF of $\\vec X$ then $\\vec H_{\\vec X} (\\vec X)= \\vec U$. Hence, we can write the\njoint CDF in terms of marginal CDFs as \n\\begin{align}\n\t\\vec H_{\\vec X}( \\vec X) = \\vec C_{\\vec X} \\left( \\vec F_{\\vec X}( \\vec X) \\right).\n\\end{align}\nThis is essentially Sklar's theorem~\\citep{Sklar1959}. Note we\nreached this result by starting from the normalising flow formulation, whereas in\nsection~\\ref{sec:background} we used Sklar's theorem for the generative model. \n\n\n\\section{Additional Results}\n\\label{sec:extra_results}\n\\subsection{Copula Fitting}\nThe majority of practical copula models are bivariate in nature and have the additional\nadvantage of easy visualisation. While the major theme of this work is ability\nto learn copula for mixed variables types we demonstrate that the proposed\nmodel can learn standard bivariate copulas easily. 
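\n\nFor such experiments, reference samples with a prescribed dependence structure can be generated with the standard conditional-inversion samplers for Archimedean copulas. As an illustration (this is not part of our model), the snippet below draws pairs with uniform marginals from a Clayton copula, one of the families shown in Figure~\\ref{fig:default}; pushing these pairs through marginal quantile functions then gives training data with known copula and known marginals.\n\\begin{verbatim}\nimport numpy as np\n\ndef sample_clayton(n, theta, rng):\n    # Conditional-inversion sampler for the Clayton copula (theta > 0).\n    u1 = rng.uniform(size=n)\n    w = rng.uniform(size=n)\n    u2 = (u1**(-theta) * (w**(-theta / (theta + 1.0)) - 1.0) + 1.0)**(-1.0 / theta)\n    return np.column_stack([u1, u2])\n\nrng = np.random.default_rng(0)\npairs = sample_clayton(10_000, theta=0.7, rng=rng)   # Clayton(0.7) dependence\n\\end{verbatim}\n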
\n\n\\subsection{Marginal fitting }\n\nFor arbitrarily complex models it is difficult to test the goodness of fit, as the\nlikelihood only gives an indication for specific samples. Moreover, average likelihood values do not give a reliable comparison metric. To compare samples generated from the model we perform a two-sample Kolmogorov-Smirnov (KS) test for goodness of fit. \n\n\\begin{table}[]\n\\centering\n\\begin{tabular}{ccclcc}\n\\hline\n\\multicolumn{3}{c}{Power} & & \\multicolumn{2}{c}{Gas} \\\\ \\cline{1-3} \\cline{5-6} \n & KS Stat & P value & & KS Stat & P value \\\\ \\cline{1-3} \\cline{5-6} \n1 & 0.012 & 0.474 & & 0.009 & 0.755 \\\\\n2 & 0.011 & 0.582 & & 0.017 & 0.099 \\\\\n3 & 0.012 & 0.521 & & 0.017 & 0.39 \\\\\n4 & 0.014 & 0.320 & & 0.016 & 0.509 \\\\\n5 & 0.012 & 0.452 & & 0.014 & 0.256 \\\\\n6 & 0.009 & 0.792 & & 0.016 & 0.148 \\\\\n\\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & & 0.010 & 0.685 \\\\\n\\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & & 0.011 & 0.590 \\\\ \\hline\n\\end{tabular}\n\\caption{The table shows the two-sample Kolmogorov-Smirnov (KS) test for each dimension of the Power and Gas data-sets. A total of 10,000 samples were selected, as the two-sample KS test is exact up to 10,000 data points; it uses approximations for larger sample sizes. Low values of the test statistic and high p-values indicate that we cannot reject the hypothesis that the two samples are from the same distribution.}\n\\label{tab:power-kstest}\n\\end{table}\n\n\\begin{figure*}[ht]\n\t\\centering\n\t\\subfloat[Marginal flow inverse $\\vec U_{\\vec X} = \\mathcal{F}_{\\vec X}\\inv (\\vec X)$]{%\n\t\t\\includegraphics[width=0.48\\linewidth]{figures\/marginal_inverse.png}%\n\t\t\\label{fig:marginal_inverse}%\n\t} \n\t\\subfloat[Copula flow inverse $\\vec U = \\mathcal{C}_{\\vec U_{\\vec X}}\\inv ({\\vec U_{\\vec X}}) $ ]{%\n\t\t\\includegraphics[width=0.48\\linewidth]{figures\/inverse_test}%\n\t\t\\label{fig:copula_inverse}%\n\t}\n\t\t\\caption{The figure depicts the inverse flow transformation of the 3-dimensional\n\t\ttoy example of Section~\\ref{sec:experiemnts}. In\n\t\tFigure~\\ref{fig:marginal_inverse} we can clearly see the copula\n\t\tstructures in the off-diagonal scatter plots. The almost uniform\n\t\tdistribution along the diagonal indicates that $\\mathcal{F}_{\\vec X}\\inv \\approx \\vec F_{\\vec X} $. Figure~\\ref{fig:copula_inverse} on the right shows the inverse copula transform applied to the output of the inverse marginal flow, i.e., $\\mathcal{C}_{\\vec X}\\inv \\left( \\mathcal{F}_{\\vec X}\\inv (\\vec X) \\right) $. As evident from the scatter plots, the copula flow and marginal flow together closely match the true CDF; see Appendix~\\ref{sec:uni_density_approx}. }\n\t\\label{fig:inverse_flow}\n\\end{figure*} \n\\begin{figure*}[ht]\n\t\\centering\n\t\\subfloat[Gumbel Copula]{%\n\t\t\\includegraphics[width=0.33\\linewidth]{figures\/Gumbel_1_25.png}%\n\t\t\\label{fig:left}%\n\t} \n\t\\subfloat[Clayton Copula]{%\n\t\t\\includegraphics[width=0.33\\linewidth]{figures\/Clayton_0_7.png}%\n\t\t\\label{fig:middle}%\n\t}\n\t\\subfloat[Frank Copula]{%\n\t\t\\includegraphics[width=0.33\\linewidth]{figures\/Frank_1_25.png}%\n\t\t\\label{fig:right}%\n\t}\n\t\\caption{Learning bivariate copulas using the copula flow model. 
\n\tEach bi-variate density is learned by the same flow }\n\t\\label{fig:default}\n\\end{figure*} \n\n\n\\section{Definitions}\n\n\\begin{defn}\n\t\\label{defn:CDF}\nA distribution function (cumulative distribution function) is a function $F$\nwith domain $R = [-\\infty, \\infty]$, such that \n\\begin{itemize}\n \\item $F$ is non-decreasing \n \\item $F(-\\infty)$ = 0 and $F(\\infty) = 1$\n \\item $F(x) = \\mathrm{Pr}[X \\leq x] \\quad \\forall x \\in [-\\infty, \\infty]$\n \\item $F$ is right continuous\n\\end{itemize}\n\\end{defn}\n\n\n\\section{Universal Density Approximator}\n\\subsection{Copula Flows are Universal Density Approximators }\n\\label{sec:uni_density_approx}\n\n\nAny invertible non-linear function that can take i.i.d. $ \\uniform{1} $ data and map it onto a\ndensity via monotone functions is a universal\napproximator~\\citep{Hyvarinen1999}.\n\nThe copula flow model can be used to represent any distribution function, i.e., \nthe model is a universal density approximator. The inverse of the flow model can be\ninterpreted as a CDF. By the probability integral transform and Theorem 1\nin~\\citep{Hyvarinen1999}, we know that if $H_X$ is the CDF of the random variable\nthen $H_X(X)$ is uniformly distributed in the hyper-cube $[0,1]^d$, where\n$d $ is the dimension of the vector. \n\nIn the following, we show\nthat with the combination of marginal and copula flows we can learn any CDF with desired\naccuracy. Then the inverse of this CDF, i.e., the quantile function, can be used to\ngenerate random variables distributed according to $X$.\n\n\n\\paragraph{Convergence of the Marginal Flow} \n\\begin{prop}\n\t\\label{prop:spline_universality}\n\tMonotonic rational quadratic splines can universally approximate any\n\tmonotonic function with appropriate derivative approximations, see\n\tTheorem~3.1 in~\\citep{Gregory1982} \n\\end{prop}\nAs discussed in section~\\ref{sec:background}, a (quasi) inverse function $F\\inv$ can be used to\ntransform random variable $U \\sim \\uniform{1}$ to the random variable $X\\sim F$. The inverse function is a monotonic\nfunction, and from Proposition~\\ref{prop:spline_universality} we can\nsay that\n\\begin{lemma}\n\t\\label{lem:spline_CDF}\n\tGiven a flow map $\\mathcal{F}_X: U \\rightarrow \\tilde{X}$, such that $\\mathcal{F}_X$\n\tconverges point-wise to the true inverse function $F_X\\inv$, the transformed\n\trandom variable $\\tilde{X} = \\mathcal{F}_X(U)$ converges in distribution to the\n\tdistribution $X = F_X\\inv(U)$.\n\\end{lemma}\nProof: This is Portmanteau's Theorem applied to flow functions based on Proposition~\\ref{prop:spline_universality}\n\n\n\\paragraph{Convergence of the Copula Flow}\n\nFor any arbitrary ordering of the variables we\nhave $\\vec U = \\vec C_{\\vec X} (\\vec U_{\\vec X}) $. We can write \n\\begin{align}\n\tU_{1} & \\doteq C_{X_1}(U_{X_1}, \\emptyset ) = \\mathrm{Pr}(U_{X_1} \\le u_{X_1} | \\emptyset) \\approx \\mathcal{C}_{X_1}\\inv(U_{X_1}, \\emptyset ) \\\\\n\tU_{k} & \\doteq C_{X_k}(U_{X_k}, U_{X_{1:k-1}} )\\\\\n\t\t\t& = \\mathrm{Pr}(U_{X_k}\\le u_{X_k} | U_{X_{1:k-1}} ) \\\\\n\t\t\t& \\approx \\mathcal{C}_{X_k}\\inv(U_{X_k}, U_{X_{1:k-1}} ), \\qquad 2 < k < d, \n\\end{align} where $\\emptyset$ indicates null set.\nAccording to~\\citep{Hyvarinen1999}, the variables $U_1, \\ldots, U_d$ are\nindependently and uniformly distributed. The conditional copula flow is also\nmodelled as a rational quadratic spline. 
From Lemma~\\ref{lem:spline_CDF}\nthe $ \\mathcal{C}_{{\\vec X}}\\inv(\\vec U_{\\vec X} )$ converges to the true\ncopula CDF $\\vec C_{\\vec X}$. Therefore, $\\vec \\mathcal{C}_{\\vec X} (\\vec U)$\nconverges in distribution to $\\vec U_X$. \n\n\nWe can extend the conditional copula flow by adding the marginal flow $U_{X_k}=\nF_{X_k}(X_k)$ to obtain \n\\begin{align}\n\tU_{k} & \\doteq C_{X_k}(F_{X_k}(X_k), F_{X_{1:k-1}}(X_{1:k-1}) ) = H_{X_k}(X_k, X_{1:k-1} )\\\\\n\t\t\t& = \\mathrm{Pr}(X_k \\le x_k | X_{1:k-1} ) \\\\\n\t\t\t& \\approx \\mathcal{C}_{X_k}\\inv(\\mathcal{F}_{X_k}\\inv (X_k), \\mathcal{F}_{X_{1:k-1}}\\inv (X_{1:k-1})), \\qquad 2 < k < d \n\\end{align}\nFrom Theorem 1 in~\\citep{Hyvarinen1999}, the variables $U_1, \\ldots, U_d$ are\nindependently and uniformly distributed. Moreover, the combined flow is\nmonotonic as both the copula flow and marginal flow are monotonic. Hence, the\ninverse of the combined flow can transform any random vector $\\vec X$ to uniform independent\nsamples in the cube $\\uniform{1}^d$.\n\nThe flows are learned as invertible monotonic functions. Therefore, using Lemma~\\ref{lem:spline_CDF} we obtain that the distribution\n$\\mathcal{F}_{\\vec X }\\inv \\left(\n\\mathcal{C}_{\\vec X} (\\vec U) \\right)$ converges in distribution to $\\vec X$. \nWe can generate any random\nvector $\\vec X$ starting from i.i.d. random variables. Hence, the copula flow model\nis universal density approximator.\n\n\n\n\n\n\n \\section{Software Details}\n \\label{sec:SoftwareDetails}\n A key point of our approach is the interpretation of normalising flow\n functions as a quantile function transforming a uniform density to the desired\n density via inverse transform sampling. As quantile functions are monotonic, we\n use monotone rational quadratic splines described in~\\citep{Durkan2019} as a\n normalising flow function. However, we make changes to the original splines.\n These changes are necessary to ensure that the flow is learning quantile\n function.\n \\begin{itemize}\n \\item The univariate flow $\n\\mathcal{F} $ maps from the $ \\uniform{1} $ to $ \\range {F_X} $ of the random variable\n$ X $. Hence, unlike~\\citep{Durkan2019}, our splines are asymmetric in their\nsupport.\n\t\\item \tWe modify the standard spline network to build a map as $ (0, 1)\n\t\\rightarrow (B_{\\text{lower}}, B_{\\text{upper}}) $ where $B$ is the range of\n\tthe marginal. Infinite\n\tsupport can be added with $B\\to\\infty$.\n\t\\item We do not parametrise the knot positions by a neural network. We instead\n\ttreat them as a free vector, i.e., $ \\vec \\theta_d = [\\vec \\theta_d^w, \\vec\n\t\\theta_d^h, \\vec \\theta_d^s]$ are the width, height and slope parameters,\n\trespectively, that characterise the flow\n\tfunction for the independent marginals of data dimension $\\mathcal{F}_{\\vec \\theta_d} $\n\t\\item We map out-of-bound values back into the range and set the gradients\n\tof the flow map to $0$ at these locations.\n\\end{itemize}\n\nFor copula flow we use same network as autoregressive neural spline\nflows~\\citep{Durkan2019}. As copula is a CDF defined over uniform densities, the\ncopula flow spline maps $\\uniform{1} \\rightarrow \\uniform{1}$. Apart from the\nchange in the range of the splines, the copula flow architecture is sames as\nthat of autoregressive neural spline flows~\\citep{Durkan2019}. 
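\n\nTo illustrate the kind of map the modified marginal spline realises, the sketch below implements a piecewise-linear (rather than rational-quadratic) monotone map from $(0, 1)$ onto a bounded range $(B_{\\text{lower}}, B_{\\text{upper}})$, with out-of-bound inputs mapped back into the valid range as described above; it is a simplification for illustration only and not the spline network used in our experiments.\n\\begin{verbatim}\nimport numpy as np\n\ndef make_marginal_flow(inner_knots, b_lower, b_upper):\n    # Piecewise-linear, monotone stand-in for the marginal flow\n    # F: (0, 1) -> (b_lower, b_upper); inner_knots must be distinct\n    # values strictly inside (b_lower, b_upper).\n    knot_x = np.concatenate([[b_lower], np.sort(inner_knots), [b_upper]])\n    knot_u = np.linspace(0.0, 1.0, knot_x.size)\n\n    def forward(u):    # flow, i.e. quantile function\n        return np.interp(np.clip(u, 0.0, 1.0), knot_u, knot_x)\n\n    def inverse(x):    # CDF, used to obtain uniform marginals for the copula\n        return np.interp(np.clip(x, b_lower, b_upper), knot_x, knot_u)\n\n    return forward, inverse\n\nforward, inverse = make_marginal_flow(np.array([-1.0, 0.0, 2.0]), -3.0, 5.0)\nu = np.random.default_rng(1).uniform(size=5)\nprint(np.allclose(inverse(forward(u)), u))   # True: the map is invertible\n\\end{verbatim}\n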
\n\n\\begin{table}[]\n\t\\scalebox{0.90}{\n\t\\begin{tabular}{llccccc}\n\t\\hline\n\t\t\t\t & & Power & Gas & Hepmass & Miniboone & Synthetic Data \\\\ \\hline\n\t\t\t\t & Dimension & 6 & 8 & 21 & 43 & Various \\\\\n\t & Train Data Points & 1,615,917 & 852,174 & 315,123 & 29,556 & Various \\\\ \\hline\n\t\t\t\t & Batch Size & 2048 & 1024 & 1024 & 512 & 1024 \\\\\n\tMarginal Flow & Epochs & 50 & 50 & 50 & 50 & 100 \\\\\n\t\t\t\t & Bins & 512 & 512 & 512 & 512 & 512 \\\\\n\t\t\t\t & Learning Rate & 1e-3 & 1e-3 & 1e-3 & 1e-3 & 1e-3 \\\\ \\hline\n\t\t\t\t & Batch Size & 512 & 512 & 512 & 512 & 512 \\\\\n\tCopula Flow & Epochs & 100 & 100 & 100 & 50 & 100 \\\\\n\t\t\t\t & Bins & 16 & 16 & 16 & 8 & 16 \\\\\n\t\t\t\t & Learning Rate & 1e-4 & 1e-4 & 1e-4 & 1e-4 & 1e-4 \\\\\n\t & Hidden Features & [512, 512] & [256, 256] & [256, 256] & [128, 128] & [512, 512] \\\\\n\t\t\t\t & Number of Flows & 5 & 10 & 15 & 10 & 10 \\\\ \\hline\n\t\\end{tabular}\n\t}\n\t\\caption{Hyperprameters for the density estimation results }\n\t\\label{tab:hyper-parameters}\n\t\\end{table}\n\n\\section{Comparing Generative Samples}\nWe need to clearly define what the metric we are comparing against. This is\nsimple question of saying if we think our method is better then we need to\nclearly have a valid metric. \n\ncandidates so far, really overweight on kernel based methods\n\\begin{itemize}\n \\item Log-likelihood of test data, we have full probabilistic model, can't\n directly compare with GAN based papers. \n \\item Maximum Mean Discrepancy MMD\n \\item Omptised Kernel version of MMD ( Generative Models and Model Criticism\nvia Optimized Maximum Mean Discrepancy (2017))\n \\item two-sample classifier tests (Gretton, Borgwardt, Rasch, Sch\u00f6lkopf,\n Smola. A kernel two-sample test (2012))\n \\item Goodness of fit test, Finite sample Stien Discrepency [FSSD, NeurIPS\n 2017 Best Paper], alternative to log-likelihood. \n\\end{itemize}\n\n\\section{Main Method}\nCrux of the paper\n\\begin{itemize}\n\t\\item Learn copula based models to learn joint densities correctly.This\n\tmethod separates joint density learning and marginal density learning.\n\tcopula learns joint and marginals are 1d densities which are easy to learn\n\t\\item we can learn arbitrarily complex marginal densities, using normalising\n\tflows [cite(renzende, shakir icml 2015)].\n\t\\item normalising flows relies on give us invertible function\n\ttransformations, which can use to get uniform marginals [vias CDF of base\n\tdistribution like univariate Gaussian for example ]\n\t\\item Use Masked auto-regressive flows to learn the joint density of the\n\tuniform marginals (which is a copula density, assuming it exists) \n\t\\item to generate a sample from the model, first sample from copula to\n\tobtain correlated uniform random variables and then push through the learned\n\tnormalising flow map\n\\end{itemize}\n\n\nNormalizing flows transform a probability distribution using an invertible\nfunction (Tabak and Turner, 2013; Rezende and Mohamed, 2015; Rippel and Adams,\n2013).\n\n\nFor multivariate case we can readily extend the transform as well as the\ninverse quantile function or flow $ \\mathcal{F} $. Let $ \\vec X = {X_1, \\ldots,\nX_d} $ be a random vector with distribution function $ F $. 
Let the vector $\n\\vec U= {U_1, \\ldots, U_d}$ be iid $ \\uniform{1} $ distributed random variable\nthen the multivariate quantile transform $ Y := \\tau\\inv (V) $ can be defined\nrecursively as, \n\\begin{align}\nY1 &:= F_{1}\\inv (U_1) \\\\\nY_k &:= F\\inv_{k|1, \\ldots k-1}\\left( U_k | Y_1, \\ldots, Y_{k-1} \\right), \\quad 2 \\leq k \\leq d \\nonumber\n\\end{align}, \nwhere $ F\\inv_{k|1, \\ldots k-1} $ denotes conditional distribution function. \n\n\nWe can apply nomalizing flow methods to learn the conditional distribution\nfunction $ F_{k|1, \\ldots k-1} $. This is presented as autoregressive density\nestimation in Masked Autoregressive Flows MAF~\\cite{Papamakarios2017}. \n\n\n\n\nwe have parametric models for the marginal densities $f_{X_i}(x_i | \\vec\n\\theta_i)$ where $ \\vec \\theta_i $ are parameters of the $ i^th $ marginal. And\nthe joint copula density $\\vec c_{\\mat X}( \\vec x)$ is defined by $\\vec\n\\theta_c$ the log-likelihood for data set $\\mat D = (\\vec x^{(1)}, \\ldots, \\vec\nx^{(N)})$can be written as, \n\\begin{align}\n\\log \\mathcal{L} &(\\mat D | \\vec \\theta_1, \\vec \\ldots, \\vec \\theta_d, \\vec \\theta_c) \\\\ &= \\log \\vec f_{\\mat X}(\\vec x) \\\\\n&= \\log c_{\\mat X}\\left (F_{X_1}(x_1), \\ldots, F_{X_d}(x_d) | \\vec \\theta_c \\right) \\nonumber \\\\\n& \\quad \\quad + \\sum_{i}^{d}\\log \\left( f_{X_i}(x_i | \\vec \\theta_i) \\right) \n\\end{align}\n\n\n\nWe can then train the model with gradient decent, as the likelihood function\nseparable in data points we can do minibatching\/ stochastic gradient descent. \n\n\n\n\n\n\n\\subsection{Normalising Flows }\n\nNormalising flows~\\cite{Rezende2015} construct an invertible flow function, e.g.\n$G$, that transforms one density into another density. An alternative way to\nlook at flow densities is the classical way of generating samples of random\nvariable $ X $ following a distribution function $ F $. A standard way to\ngenrate randomvariable is \n\n\n\nthat we can learn distribution functions $ F_{X}(x) $ needed for training\ncopulas via normalising flow function. \n\n\\textbf{rember to write about generating truncated distributions, e.g., truncated normal, no rejection sampling is needed if know inverse CDF }\n\n\\begin{theorem}\n\t\\label{thm:copulaInvariance}\n\tLet X and Y be continuous random variables with copula $ C_{XY} $. If $\n\t\\alpha $ and $\\beta $ are strictly increasing on $ \\range X $ and$ \\range Y\n\t$, respectively, then the copula $ C_{ \\alpha (X) \\, \\beta(Y)}= C_{XY} $,\n\ti.e., $ C_{XY} $ is invariant under strictly increasing transofrmations of $\n\tX $ and $ Y $\n\\end{theorem}\nproof see appendix or \\cite{Nelson}\n\n\n\n\n\\subsubsection{Hoeffding and Frechet bounds}\n\n\n\\subsubsection{Training Copulas}\n\nThe bottle neck is the Jacobian computation $ \\left(\\partial G^{-1} \/ \\partial\nx\\right) $ in \\ref{eq:normFlow} which is of order $O (D^3)$ for D dimensional\nmap . So the main aim of recent developments in normalising flow based methods\nis to create flow maps where Jacobain is easier to compute. This is typically\ntrue for functions with lower triangular Jacobians [\\textit{May be explain this\nbti more}]. Masked Autoregressive flows \\cite{Papamakarios2017}\n\n\n\n\\subsection{Masked Autoregressive Flows}\n\nWe can express the density $\\vec f_{\\mat X}(\\vec x)$ as production of\nautoregressive dependence, i.e., $\\vec f_{\\mat X}(\\vec x) = \\prod_i f_{X_i}(x_i\n| x_{1:i-1})$. 
We can model this by building flow maps recursively, i.e., $\nF_{X}(\\vec x) = F_{X_d} \\left( \\ldots F_{X_2}(F_{X_1}(x_1)) \\right)$. Normally,\neach individual density is expressed in terms of a location-scale family, for\nexample a normal distribution,\n\\begin{align}\nf_{X_i}\\left(x_{i} | \\mathbf{x}_{1: i-1}\\right) &=\\mathcal{N}\\left(x_{i} | \\mu_{i},\\left(\\exp \\alpha_{i}\\right)^{2}\\right) \n\\end{align}\nDue to the autoregressive construction the Jacobian is lower triangular; hence, the\ndeterminant of the Jacobian is just $\\exp \\left(-\\sum_i \\alpha_{i}\\right)$, where\n$\\alpha_{i} = g_{NN}(\\vec x_{1:i-1})$; see~\\cite{Papamakarios2017} for more details.\n\n\nThe neural network function can be conveniently computed by masking specific\nweights to impose the recursive structure; one such example is\nMADE~\\cite{Germain2015}. An important advantage of using MADE is that the flow can be\nimplemented in one forward pass rather than by a recursive construction. Masked\nAutoregressive Flows (MAF)~\\cite{Papamakarios2017} are built by stacking MADE layers\none after the other. For better performance one can also permute the data so that\nthe autoregressive network can be more expressive; a permutation is an invertible\nrotation map with unit determinant and can improve the performance of MAFs.\n\n\nThis is expressed as a combination of Gaussian densities. We can express this in\nterms of the inverse of the distribution function \n\n\\begin{align}\nF_{X_{d+1}}^{-1}(x_{d+1}) = \\frac{x_{d+1}- \\mu_{X_{1:d}}}{\\sigma_{X_{1:d}}} \n\\end{align}\n\n\n\\subsection{Spline Flows}\n\nAnother alternative for implementing invertible maps with lower triangular Jacobians\nis the coupling transform [Cite NICE and Real NVP paper]. In neural spline flows the\ncoupling is implemented by monotonic rational-quadratic transforms [Cite neural\nspline and original spline paper].\n\nThe spline maps an interval $[-B, B]$ to $[-B, B]$ using $K$ different\nrational-quadratic functions set at knots $\\lbrace \\left( x^k,\ny^k\\right)\\rbrace$ where $k= 0, \\ldots, K$. The spline can be made a\nmonotonically increasing function by placing the knots in monotonically increasing\nfashion between $ (-B, -B) $ and $ (B, B) $. \n\nAn interesting question to ask is what the distribution function of the univariate\nmarginals $ U_{X}, U_{Y} $ obtained by the probability integral transform of $ X, Y\n$ is, i.e., $ U_X = F_X (x) , \\, \\, U_Y = F_Y (y) $. This\nquestion was the primary motivation for the study of copulas by Fr\\'{e}chet [Cite\nFrechet] and was later consolidated by Sklar~\\parencite{Sklar1959}. \n\nFor synthetic data generation this question is crucial, especially in privacy\npreservation, where we perturb the data samples by another random process: this\nchanges the distribution functions of the underlying variables, whereas for\nsynthetic data to be useful the relationships between the variables should be\npreserved. This can be achieved by using copulas. \n\nWe can also do structure learning and build a dependency graph; C-, D- or R-vine copulas\nas well as copula Bayesian networks (CBNs) fit in this setting. We use stacked MAFs to\nautomate this, but either choice is plausible under the framework. 
\n\nmake tree structure to MADE connection, with learned MADE we can do this in one\npass \n\nShow how multivariate Gaussian Copula is is essentially a generative map $ u\n\\rightarrow \\phi(x) \\rightarrow AX \\rightarrow \\phi(x) $\n\n\n\\section{Introduction}\n\\input{1_intro.tex}\n\\input{2_background.tex}\n\\input{3_copulaFlow.tex}\n\\input{4_experiments.tex}\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\\setcounter{lemma}{0}\n\nThe notion of derivative was first defined in \\cite{BZ-Induced} for smooth\nrepresentations of $\\ G_{n}=GL(n)$ over non-archimedean fields and became a\ncrucial tool in the study of this category. The purpose of the present paper\nis to transfer part of this construction to the archimedean case.\n\nThe definition of derivative is based on the \\textquotedblleft\nmirabolic\\textquotedblright\\ subgroup $P_{n}$ of $G_{n}$ consisting of\nmatrices with last row $(0,\\dots,0,1)$. The unipotent radical of this subgroup\nis an $\\left( n-1\\right) $-dimensional linear space that we denote $V_{n}$,\nand the reductive quotient is $G_{n-1}$. The group $G_{n-1}$ has 2 orbits on\n$V_{n}$ and hence also on $V_{n}^{\\ast }$: the zero and the non-zero orbit.\nThe stabilizer in $G_{n-1}$ of a non-trivial character $\\psi $ of $V_n$ is isomorphic to $P_{n-1}$.\n\nThe construction of derivative in \\cite{BZ-Induced} is based on two\nfunctors: $\\Phi ^{-}$ and $\\Psi ^{-}$. In this paper we denote those functors just by $\\Phi$ and $\\Psi$. The functor $\\Psi$ goes from the\ncategory of smooth representations of the mirabolic group $P_{n}$ to the\ncategory of smooth representations of $G_{n-1}$ (for each $n$) and $\\Phi$ goes from the category of smooth representations of $P_{n}$ to the\ncategory of smooth representations of $P_{n-1}$. The functor $\\Psi$ is\nthe functor of (normalized) coinvariants with respect to $V_{n}$ and the functor $\\Phi$ is the functor of (normalized) co-equivariants with respect to $(V_{n},\\psi )$. The\nfunctor of $k$-th derivative is defined to be the composition $\\Psi \\circ \\Phi^{k-1}$.\n\nAnother way to describe those two functors is via the equivalence of\ncategories of the smooth representations of $P_{n}$ and the category of \nG_{n-1}$-equivariant sheaves on $V_{n}^{\\ast }$. This equivalence is based\non the Fourier transform. Under this equivalence, $\\Psi$ becomes the\nfiber at $0$ and $\\Phi$ becomes the fiber at the point $\\psi $. The\nfunctor $\\Phi $ can also be viewed as the composition of two functors:\nrestriction to the open orbit and an equivalence of categories between\nequivariant sheaves on the orbit and representations of the stabilizer of a\npoint.\n\nIn the archimedean case, the notion of fiber of a sheaf behaves differently\nthan in the non-archimedean case; in particular it is not exact. One way to\ndeal with this problem is to consider instead the notion of a stalk or a jet. On the\nlevel of representations this means that one uses generalized coinvariants\ninstead of usual coinvariants. For example, the Casselman-Jacquet functor\nis defined in this way. Therefore in our definition we replace the functor $\\Psi $ by the functor of generalized coinvariants. 
However, we do not\nchange the definition of the functor $\\Phi $ since we think of it as\nrestriction to an open set followed by an equivalence of categories, and in\nparticular it should be exact.\n\n\n\\DimaA{\nThis gives the following definition of derivative.\nLet $\\psi_{n}$ be the standard non-degenerate character of $V_{n}$, given by $\\psi _{n}(x_1,\\dots,x_{n-1}):=\\exp(\\sqrt{-1}\\pi \\re x_{n-1}).$ We will also denote by $\\psi_n$ the corresponding character of the Lie algebra $\\fv_n$.\nFor all $n$ and for all representations $\\pi $ of ${\\mathfrak{p}}_{n}$, we define\n$$\\Phi(\\pi):=|\\det|^{-1\/2} \\otimes \\pi_{(\\fv_n,\\psi_n)}:=|\\det|^{-1\/2} \\otimes \\pi\/\\Span\\{\\alp v - \\psi_n(\\alp)v \\, : \\, v \\in \\pi, \\, \\alp \\in \\fv_{n}\\}$$\nand\n$$\\Psi(\\pi):= \\lim_{\\overset{\\longleftarrow }{l}} \\pi \/\\Span\\{\\beta v\\,|\\,v \\in \\pi ,\\beta \\in ({\\mathfrak{v}}_{n})^{\\otimes l} \\}.$$\nNow, define $D^k(\\pi):=\\Psi\\Phi^{k-1}(\\pi)$.\n}\n\nConsider the case $k=n$; in this case the derivative becomes the (dual) Whittaker\nfunctor. It is well known that the behavior of the Whittaker functor depends\non the category of representations that we consider. For example in the\ncategory of (admissible) Harish-Chandra modules the Whittaker functor gives high\ndimensional vector spaces while in the equivalent category of\nsmooth admissible {Fr\\'{e}chet }representations the Whittaker functor gives\nvector spaces of dimension $\\leq 1$ just as in the non-archimedean case. For\nthis reason we view the functor $D^{k}$ restricted to the category of smooth\nadmissible {Fr\\'{e}chet} representations as the natural counterpart of the\nBernstein-Zelevinsky derivative.\n\nNevertheless in order to study this functor we will need to consider also the category of Harish-Chandra modules as well as some other related functors.\n\nIn the non-archimedean case the highest non-zero derivative plays a special\nrole. It has better properties than the other derivatives. In particular in\nits definition one can omit the last step of $\\Psi $ since $V_{n-k}$\nalready acts trivially on the obtained representation. The index of the\nhighest derivative is called the depth of the representation.\nAs observed in \\cite{GS} the depth can also be described in\nterms of the wavefront set of the representation.\n\nIn the archimedean case the wavefront set of a representation $\\pi$ is\ndetermined by its annihilator variety, and we will use the latter to\ndefine ``depth''. We recall that if $\\pi$ is an admissible representation\nof $G_{n}$ then its annihilator variety $\\mathcal{V}_{\\pi}$ is a subset\nof the cone of nilpotent $n\\times n$ matrices. We define the depth of\n$\\pi$ to be the smallest index $d$ such that $X^{d}=0$ for all\n$X\\in \\mathcal{V}_{\\pi}$.\n\n\n\\begin{example*}\n$ $\n\\begin{enumerate}\n\\item For a finite dimensional representation $\\pi $ of $G_{n}$, $depth(\\pi\n)=1$, $D^{1}(\\pi )=\\Phi (\\pi )|_{G_{n-1}}=\\pi |_{G_{n-1}}$, and $D^{k}(\\pi\n)=0$ for any $k>1$.\n\n\\item $D^{n}=(\\Phi) ^{n-1}$ is the Whittaker functor. 
On the category of\nsmooth admissible {Fr\\'{e}chet} representations it is proven to\nbe exact in \\cite{CHM} and in the category of admissible Harish-Chandra modules in \\cite{Kos}.\nIt is also proven in\n\\cite{Kos} that $depth(\\pi )=n$ if and only if $D^{n}(\\pi )\\neq 0$ (in both\ncategories).\n\\end{enumerate}\n\\end{example*}\n\nFrom now on, let $F$ be an archimedean local field and $G_n:=\\GL(n,F)$.\n\nIn this paper we will mainly be interested in the depth derivative. The following theorem summarizes the main results of this paper.\n\n\n\n\\begin{introthm}\n\\label{thm:main} Let $\\mathcal{M}_{\\infty}(G_{n})$ denote the category of smooth\nadmissible {Fr\\'{e}chet} representations of moderate growth and let \n\\mathcal{M}^{d}_{\\infty}(G_{n})$ denote the subcategory of representations of depth \n\\leq d$. Then\n\n\\begin{enumerate}\n\\item \\label{mainit:Adm} $D^d$ defines a functor $\\cM^d_{\\infty}(G_n) \\to\n\\mathcal{M}_{\\infty}(G_{n-d})$.\n\n\\item \\label{mainit:Exact} The functor $D^d : \\cM^d_{\\infty}(G_n) \\to\n\\mathcal{M}_{\\infty}(G_{n-d})$ is exact.\n\n\\item \\label{mainit:nilp} For any $\\pi \\in \\cM^d_{\\infty}(G_n), \\,\nD^d(\\pi)=(\\Phi)^{d-1}(\\pi)$.\n\n\\item \\label{mainit:Zero} $D^{k}|_{\\mathcal{M}^{d}_{\\infty}(G_{n})}=0$ for any $k>d$.\n\n\\item \\label{mainit:Prod} Let $n=n_1+\\cdots +n_d$ and let $\\chi_i$ be characters of $G_{n_i}$. Let $\\pi= \\chi_1 \\times \\cdots \\times \\chi_d \\in \\cM_d(G_n)$ denote the corresponding monomial representation. Then\n $$D^d(\\pi)\\cong((\\chi_1)|_{G_{n_1-1}} \\times \\RamiA{\\cdots} \\times (\\chi_d)|_{G_{n_d-1}}) $$\n\\Dima{\n\\item \\label{mainit:A} If $\\tau$ is an irreducible unitary representation of $G_n$ and $\\tau^{\\infty}$ has depth $d$ then $D^d(\\tau^{\\infty})\\cong (A\\tau)^{\\infty}$, where $A\\tau$ denotes the adduced representation defined in \\cite{Sahi-Kirillov} (see \\S\\S\\S\\ref{subsubsec:PrelA}).\n }\n\\end{enumerate}\n\\end{introthm}\n\n\\begin{remark*}$ $\n\\begin{enumerate}[(i)]\n\\item Part (\\ref{mainit:nilp}) of the theorem means that ${\\mathfrak{v}}\n_{n-d+1}$ acts nilpotently on $(\\Phi )^{d-1}(\\pi )$. Unlike the p-adic\ncase, $V_{n-d+1}$ need not act trivially on $(\\Phi )^{d-1}(\\pi )$.\n\n\\item The proofs of parts (\\ref{mainit:Exact}),(\\ref{mainit:Zero}) are based on the results of \\cite{AGS2}\n\n\\item In this paper we do not prove that for $\\pi$ of depth $d$, $D^{d}(\\pi )\\neq 0$. This is in fact true but needs an additional argument which is provided in \\cite{GS-Gen}. However, for $\\pi$ monomial or unitarizable it follows from parts \\eqref{mainit:Prod} and \\eqref{mainit:A} of Theorem \\ref{thm:main} respectively.\n\n\\item We prove analogs of items (\\ref{mainit:Adm}), (\\ref{mainit:nilp}), and\n(\\ref{mainit:Zero}) of the theorem also for the category of Harish-Chandra modules.\n\n\\end{enumerate}\n\\end{remark*}\n\n\\subsection{Related results}\n$\\quad$\\\\\nAs was mentioned earlier, the non-archimedean counterpart of this paper was done in \\cite{BZ-Induced}. In the archimedean case, an analogous notion to the notion of highest derivative was introduced for irreducible unitary representations in \\cite{Sahi-Kirillov} and called ``adduced representation\".\n\nThe case of smooth representations over archimedean fields differs from the above cases in several ways. First of all, we do not have a suitable category of representations of $P_n$. 
The existence of such a category in other cases was crucial for the study of derivatives.\n\nAnother difference is the relation between the derivative and the classification of irreducible representations. In the non-archimedean case, the theory of derivatives was the base for the Zelevinsky classification.\nIn the unitary case, the notion of adduced representation is closely related with the Vogan classification (see \\S\\S\\S \\ref{subsubsec:IntroAddRep} and \\S\\S\\S\\ref{subsubsec:PrelA}).\n\nIn our case, we do not currently have a classification that is suitable for the theory of derivatives. The Langlands classification is not compatible with the notion of derivative. In particular, it is hard to read from the Langlands classification the annihilator variety or even the\ndepth of the representation, which are crucial notions in the study of derivatives. We hope that eventually it will be possible to make an archimedean analog of the Zelevinsky classification. However, it seems to be quite difficult. Let us explain why.\n\nThe Langlands classification presents any irreducible representation as a ``smallest\" subquotient of a parabolic induction of a discrete series representation. In the non-archimedean case the discrete series representations can be presented as ``largest\" subquotients of parabolic induction of cuspidal representations.\nThe Zelevinsky classification is dual (under the Zelevinsky involution) to the Langlands classification. Namely, the Zelevinsky classification presents any irreducible representation as a ``largest\" subquotient of a generalized Speh representation corresponding to a segment of cuspidal representations and any such Speh representation as a ``smallest\" subquotient of a parabolic induction of a cuspidal representation.\n\nSuch a nice picture cannot exist in the archimedean case or even in the complex case. Indeed, in this case only $G_1$ has discrete series representations. Thus one would expect that the natural analog of generalized Speh representation as above exists only for $G_1$. Thus, the naive analogy would suggest that any irreducible representation is the ``largest\" subquotient of a principal series representation. This is not true. Moreover, it is even not true that any irreducible representation is the ``largest\" subquotient of a monomial representation (i.e. a Bernstein-Zelevinsky product of characters)\\Dima{, or even the ``largest\" subquotient of a BZ-product of finite-dimensional representations}.\n\nThe n-th derivative of representations of $G_n$ is the Whittaker functor. Thus, a special case of Theorem \\ref{thm:main} implies that the Whittaker functor is exact and maps a principal series representation to a one-dimensional space. This is known for any quasi-split reductive group by \\cite{Kos} and \\cite{CHM}.\n\n\\subsection{Structure of our proof}\n$ $\\\\\nWe start working in the Harish-Chandra category. We show that for a Harish-Chandra module $\\pi$ of depth $d$, $D^d(\\pi)$ is an admissible\nHarish-Chandra module over $G_{n-d}$. From this we deduce that $D^d(\\pi)=\\Phi^{d-1}(\\pi)$ and $D^k(\\pi)=0$ for any $k>d$.\n\n\n\n In \\cite{AGS2} we analyze the functor $\\Phi^k$ as a functor from $\\mathcal{M}_{\\infty}(G_n)$\nto the category of representations of ${\\mathfrak{p}}_{n-k}$. We prove that\nit is exact and for any $\\pi \\in \\mathcal{M}_{\\infty}(G_n),$ $\\Phi^k(\\pi)$ is a\nHausdorff space. This means that $\\mathfrak{u}_n^k( \\pi \\otimes \\psi^k)$ is\na closed subspace. 
In fact, we prove those statements for a wider class of representations of $\fp_n$.\n\nThen we deduce items \eqref{mainit:Adm}-\eqref{mainit:Zero} of Theorem \ref{thm:main} from the above results.\n\nIn \cite{AGS2} we prove \eqref{mainit:Prod} by computing $\Phi^{d-1}$ on certain representations of $\fp_n$ using the results on exactness and Hausdorffness of $\Phi$ for those representations.\n\nFinally, we prove \eqref{mainit:A} using \eqref{mainit:Adm}-\eqref{mainit:Prod}, \cite{GS}, and the Vogan classification.\n\n\subsubsection{Admissibility}\nLet ${\mathfrak{n}}_n$ denote the Lie algebra of upper triangular nilpotent $n \times n$ matrices. A finitely-generated $(\g_n,K)$-module $\pi$ is admissible if and\nonly if it is finitely generated over $\mathfrak{n}_n$ (see Theorem \ref{thm:EqAdm}). Thus, we know that $\Phi^{d-1}(\pi)$ is finitely generated over $\mathfrak{n}_{n-d+1}$ and we need to show that it is in fact finitely\ngenerated over $\mathfrak{n}_{n-d}$.\n\nTo do that we use two invariants of modules over Lie algebras: the annihilator\nvariety (see \ref{subsubsec:AnnVar}) and the associated variety (see \ref{subsec:Filt}). Both are analogs of the notion of support of a module over a commutative algebra. Both are subvarieties of the dual space to the Lie\nalgebra, and the annihilator variety includes the associated variety. The\ndefinition of the associated variety requires the module to be filtered, but\nthe resulting variety does not depend on the choice of a good filtration on\nthe module.\n\nTo prove that $\Phi^{d-1}(\pi )$ is finitely generated over $\mathfrak{n}_{n-d}$ we show that the associated variety of $\Phi^{d-1}(\pi )$, viewed as\na module over $\mathfrak{n}_{n-d+1}$, is included in $\mathfrak{n}_{n-d}^{\ast }$. Using a lemma that we prove in \S \S \ref{subsec:keylem},\nwe deduce this from the bound on the annihilator variety of $\pi $ that we\nhave by definition of the depth of $\pi $.\n\n\n\subsection{Applications}\label{subsec:Appl}\n\n\subsubsection{Degenerate Whittaker models}\n\n\begin{notation}\n$ $\n\begin{itemize}\n\item For $n' > n$ we embed $G_n$ into $G_{n'}$ in the standard way; we denote the union by $G_{\infty}$, and all the groups we will consider will be embedded into $G_{\infty}$ in a standard way.\n\item We denote by $P_{n}\subset G_{n}$ the mirabolic subgroup (consisting of matrices with last row $(0 , \dots , 0,1)$).\n\item Let $V_n \subset P_n$ be the last column. Note that $V_n \cong F^{n-1}$ and $P_n = G_{n-1} \ltimes V_n$. Let $U_n^k := V_{n-k+1} V_{n-k+2} \cdots V_{n}$ and $S_n^k := G_{n-k} U_n^k$. Note that $U_n^k$ is the unipotent radical of $S_n^k$. Let $N_n:=U_n^n$.\n\item Fix a non-trivial unitary additive character $\theta$ of $F$, given by $\theta(x)=\exp(\sqrt{-1}\pi \re x)$.\n\item Let $\bar{\psi}_n^k:U_n^k \to F$ be the standard non-degenerate homomorphism\Dima{, given by $\bar{\psi}_n^k(u)=\sum_{j=n-k}^{n-1} u_{j,j+1}$,} and let $\psi_n^k:=\theta \circ \bar{\psi}_n^k$.\n\end{itemize}\n\nWe will usually omit the $n$ from the notations $U_n^k$ and $S_n^k$, and both indexes from $\psi_n^k$.\n\end{notation}\n\n\n\begin{defn}\label{def:main}\n\DimaA{Define functors $\Phi: \cM(\fp_n) \to \cM(\fp_{n-1})$ by $\Phi(\pi):=\pi_{\fv_n,\psi} \otimes |\det|^{-1\/2}$ and $\Psi,\Psi_0: \cM(\fp_n) \to \cM(\fg_{n-1})$ by $\Psi(\pi):= \pi_{gen,\fv_{n}}$ and $\Psi_0(\pi):= \pi_{\fv_{n}}$. 
}\n\nFor a $\\p_n$-module $\\pi$ we define three notions of derivative:\n\\DimaA{\n\\begin{enumerate}\n\\item $E^k(\\pi):=\\Phi^{k-1}(\\pi):=\\pi_{\\fu^{k-1},\\psi^{k-1}}\\otimes |\\det|^{-(k-1)\/2}.$ Clearly it has a structure of a $\\p_{n-k+1}$ - representation.\n\n\\item $D^k(\\pi):= \\Psi(E^k(\\pi)).$\n\n\\item $B^k(\\pi):= \\Psi_0(E^k(\\pi)).$\n\\end{enumerate}\n}\n\nNote that the derivative functor $D^k$ was defined in the introduction.\nFor convenience we will also use untwisted versions of the above functors, defined by \\Dima{$\\oPhi(\\pi):=\\Phi(\\pi) \\otimes |\\det|^{1\/2}$, and $\\oE^k(\\pi):=E^k(\\pi) \\otimes |\\det|^{(k-1)\/2}$.}\n\nWe denote the restrictions of the above functors to the subcategory $\\cM_{\\infty}(G_n)$ by $B_{\\infty}^k$, $D_{\\infty}^k$, and $E_{\\infty}^k$. Similarly, we denote the restrictions to $\\cM_{HC}(G_n)$ by $B_{HC}^k$, $D_{HC}^k$ and $E_{HC}^k$. Note that if $\\pi \\in \\cM_{\\infty}(G_n)$ then $D_{\\infty}^k(\\pi)$ has a natural structure of a $P_{n-k+1}$ topological representation and if $\\pi \\in \\cM_{HC}(G_n)$ the $D_{HC}^k(\\pi)$ has a natural structure of a $K'$ representation where $K'$ is the maximal compact subgroup of $G_{n-k}$. The same is true for the functors $B$ and $E$.\n\nWe have natural maps: $E^k \\to D^k \\to B^k$,\n$ HC \\circ B^k_{\\infty} \\to B^k_{HC} \\circ HC$, $HC \\circ D^k_{\\infty} \\to D^k_{HC}\\circ HC$ and $HC \\circ E^k_{\\infty} \\to E^k_{HC}\\circ HC$. Here $HC$ is the functor of taking $K-$ finite vectors and the last three maps are maps of $K$ representations and $\\p$ representations.\n\n\\end{defn}\n\n\n\\Dima{\n\\begin{prop}\\label{prop:D3Adm}\nLet $\\pi \\in \\cM_{HC}(G_n)$. Then $B_{HC}^k(\\pi)$ is admissible for any $1\\leq k \\leq n$.\n\\end{prop}\n\\begin{proof}\nBy Theorem \\ref{thm:EqAdm}, $\\pi$ is finitely generated over $\\n_n$.\nNote that the functor $B_{HC}^k$ quotients by the last $k$ columns of $\\n_n$ (with an appropriate character) and thus $B_{HC}^k(\\pi)$ is finitely generated over $\\n_{n-k}$.\n Therefore, by Theorem \\ref{thm:EqAdm} again, $B_{HC}^k(\\pi)$ is admissible.\n\\end{proof}\n\n\\begin{remark}\nLet $\\pi \\in \\cM_{HC}(G_n)$, and let $S^{\\prime} \\in \\bC^n$ be the multiset\ncorresponding to an infinitesimal character of $B^k(\\pi)$. Then $S'$ is obtained from\nthe multiset corresponding to some infinitesimal character of $\\pi$ by deleting $k$ of the elements and adding $1\/2$ to\neach of the remaining ones. This is proven by the argument in the proof of \\cite[Proposition 4.5.4]{GS}.\n\\end{remark} }\n\n\nIn \\RamiA{\\S \\ref{sec:Adm}} we prove the following theorem\n\\begin{theorem}\\label{thm:HCAdm}\nLet $\\pi \\in \\cM_{HC}^d(G_n)$. Then the restriction of $E_{HC}^d(\\pi)$ to $\\g_{n-d}$ is admissible.\n\\end{theorem}\n\n\\begin{cor}\\label{cor:HCNilp}\nLet $\\pi \\in \\cM_{HC}^d(G_n)$. Then $\\fv_{n-d+1}$ acts nilpotently on $E_{HC}^d(\\pi)$. Namely, there exists a number $k$ such that for any $X \\in \\fv_{n-d}$, $X^k$ acts by zero on $E_{HC}^d(\\pi)$.\n\\end{cor}\n\\begin{proof}\nLet $\\tau:= E_{HC}^d(\\pi)$. Since it is admissible over $\\g_{n-d}$, it is finite over the center of $U(\\g_{n-d})$. Hence there exists a polynomial $p$ such that $\\tau(p(I))=0$, where $I \\in \\g_{n-d}$ denotes the identity matrix. Let $k$ be the degree of $p$ and $X \\in \\fv_{n-d}$ be any element. We will show that $\\tau(X)^k=0$.\n\nNote that $[I,X] = X$ and hence $ad(X)^k(I^k)= k!(-X)^k$ and $ad(X)^kI^{k-i}=0$ for any $i>0$. Thus $ad(X)^k (p(I))$ is proportional to $X^k$. 
On the other hand, $\tau(p(I))=0$, hence $\tau(ad(X)^k (p(I)))=0$ and thus $\tau(X)^k=0$.\n\end{proof}\n\n\begin{cor}\nLet $\pi \in \cM_{HC}^d(G_n)$. Then\n\begin{enumerate}\n\item $D_{HC}^{d}(\pi)=E_{HC}^{d}(\pi).$\n\item $E_{HC}^{d+1}(\pi)=D_{HC}^{d+1}(\pi)=B_{HC}^{d+1}(\pi)=0$.\n\end{enumerate}\n\end{cor}\n\n\begin{thm}[\cite{AGS2}, Theorem A] \label{thm:ExactHaus}\nFor any $0 \leq k \leq n$ the functor $\Phi^{k}$ is exact and, for any $\pi \in \cM_{\infty}(G_n)$, the space $\Phi^{k}(\pi)$ is Hausdorff.\n\end{thm}\n\n\subsection{Good filtrations and associated varieties}\label{subsec:Filt}\n\n\begin{defn}\nLet $A$ be a filtered algebra and let $M$ be an $A$-module. An increasing exhaustive filtration $F^{i}M$ is called good if\n\begin{enumerate}\n\item $F^{i}A \, F^{j}M \subset F^{i+j}M$ for all $i,j$;\n\item there exists $n$ such that for any $i > n$, $F^{i+1}M=F^{1}AF^{i}M$.\n\end{enumerate}\n\nA filtration on $A$ is called good if it is a good filtration of $A$ as a\nmodule over itself.\n\end{defn}\n\n\begin{exm}\nLet ${\mathfrak{g}}$ be a (finite dimensional) Lie algebra and\n$U({\mathfrak{g}})$ the universal enveloping algebra. Define a good filtration\n$U^{i}$ on $U({\mathfrak{g}})$ by the order of the tensor. Then\n$\operatorname{Gr}(U({\mathfrak{g}})) = \Sym({\mathfrak{g}})$, the symmetric\nalgebra.\n\n\end{exm}\n\nFrom now on we fix a good filtration on $A$. Let $M$ be an $A$-module.\n\n\begin{exm}\nSuppose that $M$ is finitely generated. Let $\{m_{1} , \dots , m_{k}\}$ be a set of\ngenerators. We define a good filtration on $M$ by $F^{i}M=\{\sum_{j=1}^{k}a_{j}m_{j} \text{ s.t. } a_{j}\in F^{i}A\}$.\n\end{exm}\n\nThe following lemma is evident.\n\begin{lem}\n$ $\n\begin{enumerate}[(i)]\n\item There exists a good filtration on $M$ if and only if $M$ is finitely\ngenerated over $A$.\n\n\item A filtration $F^{i}M$ is good if and only if $\operatorname{Gr}_{F}(M)$\nis finitely generated over $\operatorname{Gr}(A)$.\n\end{enumerate}\n\end{lem}\n\n\begin{cor}\nSuppose that $\operatorname{Gr}(A)$ is Noetherian. Suppose that $M$ is\nfinitely generated and let $F^{i}M$ be a good filtration on $M$.\nThen\newline(i) For any submodule $L \subset M$, the induced filtration\n$F^{i}L:=F^{i}M \cap L$ is good.\newline(ii) $A$ is Noetherian.\n\end{cor}\n\nIn particular, $U({\mathfrak{g}})$ is Noetherian.\n\nAn important tool in the proof of Theorem \ref{thm:HCAdm} will be the following proposition.\n\n\begin{prop} \label{prop:WonFilt}\nLet $\pi \in \cM_{HC}(G)$ be a Harish-Chandra module and let $F^i$ be a good $\g$-filtration on it. Then $F^i$ is good as an $\n$-filtration, i.e. $F^{i+1} = \n F^i$ for $i$ big enough.\n\end{prop}\n\RamiA{\nThis proposition is due to Gabber\nand is based on a proposition by Casselman and Osborne.\nFor completeness we included its proof in Appendix \ref{sec:PfWonFilt}.\n}\n\n\begin{defn}\nFor any filtration $F$ on $M$ we can associate to $M$ a\nsubvariety of $\operatorname{Spec} Gr(A)$ by $AV_F(M):=\mathrm{Supp}(Gr(M))\n\subset\operatorname{Spec} Gr(A)$.\n\nIf $M$ is finitely generated we choose a good filtration $F$ on $M$\nand define the associated variety of $M$ to be $AV(M):=AV_F(M)$. This variety does not depend on the choice of the good filtration.\n\end{defn}\n\n\n\begin{remark*}\nIf $A$ is commutative then the associated variety is equal to the annihilator variety. Otherwise, the associated variety can be smaller.\n\end{remark*}\n\n\n\n\n\subsection{Bigraded Lie algebra} \label{bigraded}\n\n\nLet $\mathfrak{a}$ be a Lie algebra and let $X,Y\in \mathfrak{a}$ be\ncommuting ad-semisimple elements with integer eigenvalues. 
Define\n\begin{equation*}\n\mathfrak{a}_{ij}:=\left\{ Z\in \mathfrak{a}:\left[ X,Z\right] =iZ,\left[ Y,Z\right] =jZ\right\}\n\end{equation*}\nThus we have the direct sum decomposition\n\begin{equation}\n\mathfrak{a}=\oplus_{i}\mathfrak{a}_{i}\text{ where }\mathfrak{a}_{i}=\mathfrak{\oplus }_{j}\mathfrak{a}_{ij} \label{=dec}\n\end{equation}\n\nWe now choose an ordered basis $\mathcal{B}$ of $\mathfrak{a}$ as follows.\nPick ordered bases $\mathcal{B}_{ij}$ for each $\mathfrak{a}_{ij}$, order\nthe pairs $\left( i,j\right) $ lexicographically so that\n\begin{equation}\n\left( i,j\right) \succ \left( k,l\right) \text{ if }i>k\text{ or if }i=k\text{ and }j>l \label{=lex0}\n\end{equation}\nand let $\mathcal{B}$ be the \emph{descending} union of $\mathcal{B}_{ij}$.\nThus\emph{\ }$\mathcal{B}$ is ordered so that $\mathcal{B}_{ij}$ precedes $\mathcal{B}_{kl}$ if $\left( i,j\right) \succ \left( k,l\right) $. By the\nPoincare-Birkhoff-Witt theorem the corresponding ordered (PBW) monomials\nform a basis for the enveloping algebra $\mathcal{U}\left( \mathfrak{a}\right) $.\n\n\begin{definition}\n\label{contain}If $u\in \mathcal{U}\left( \mathfrak{a}\right) $ and a PBW\nmonomial $T$ has a nonzero coefficient in the expansion of $u$, we say $u$\n\emph{contains} $T.$\n\end{definition}\n\nNote that $adX,adY$ act on $\mathcal{U}\left( \mathfrak{a}\right) $ with\ninteger eigenvalues as well and we define\n\begin{equation}\n\mathcal{U}_{ij}\left( \mathfrak{a}\right) =\left\{ u\in \mathcal{U}\left(\n\mathfrak{a}\right) :\left[ X,u\right] =iu,\left[ Y,u\right] =ju\right\}\n\label{=Uij}\n\end{equation}\nBy construction each PBW monomial belongs to some $\mathcal{U}_{ij}\left(\n\mathfrak{a}\right) $, and thus the following result is obvious.\n\n\begin{lemma}\n\label{lem-contain}If\ $u\in $ $\mathcal{U}_{ij}\left( \mathfrak{a}\right) $\nand $u$ contains $T$, then $T\in $ $\mathcal{U}_{ij}\left( \mathfrak{a}\right) $.\n\end{lemma}\n\n\subsection{Coinvariants module} \label{coinv}\n\nFor $s\geq 0$ define $\mathcal{N}_{s}=\oplus _{i\geq s}\mathfrak{a}_{i}$;\nthen $\mathcal{N}_{0}$ is a Lie subalgebra and $\mathcal{N}_{1}$ is a\nnilpotent ideal of $\mathcal{N}_{0}$. Let $\xi \in \mathcal{N}_{1}^{\ast }$\nbe such that $\xi |_{\mathcal{N}_{2}}=0$; then $\xi $ defines a Lie algebra\ncharacter of $\mathcal{N}_{1}$, and we have\n\begin{equation*}\n\mathcal{N}_{0}^{\xi }=\mathfrak{a}_{0}^{\xi }\oplus \mathcal{N}_{1}\n\end{equation*}\nwhere $\mathcal{N}_{0}^{\xi }$ and $\mathfrak{a}_{0}^{\xi }$ denote the\nstabilizers of $\xi $ in the respective Lie algebras.\n\nConsider the linear map $\DimaA{\Xi} :\mathcal{N}_{0}^{\xi }\rightarrow \mathcal{U}\left( \mathfrak{a}_{0}^{\xi }\right) $ given by\n\begin{equation}\n\DimaA{\Xi} \left( Z\right) =\left\{\n\begin{tabular}{lll}\n$Z$ & if & $Z\in \mathfrak{a}_{0}^{\xi }$ \\\n$\xi \left( Z\right) $ & if & $Z\in \mathcal{N}_{1}$\n\end{tabular}\n\right. \label{=Psi}\n\end{equation}\nIt is easy to check that $\DimaA{\Xi} $ is a Lie algebra map, i.e. 
it intertwines\nthe Lie bracket with the commutator, and hence by universality it extends to\nan algebra map from $\mathcal{U}\left( \mathcal{N}_{0}^{\xi }\right) $ to $\mathcal{U}\left( \mathfrak{a}_{0}^{\xi }\right) $ that we continue to\ndenote by $\DimaA{\Xi} $.\n\nSuppose $M$ is an $\mathcal{N}_{0}^{\xi }$-module; we define the $\xi $-coinvariant\nmodule to be\n\begin{equation*}\nL=M\/M^{\prime }\text{ where }M^{\prime }=span\left\{ Zv-\xi \left( Z\right)\nv\mid Z\in \mathcal{N}_{1},v\in M\right\}\n\end{equation*}\nThen $L$ is an $\mathfrak{a}_{0}^{\xi }$-module and the projection map $\varpi :M\rightarrow L$ satisfies\n\begin{equation}\n\varpi \left( uv\right) =\DimaA{\Xi} \left( u\right) \varpi \left( v\right) \text{\nfor }u\in \mathcal{U}\left( \mathcal{N}_{0}^{\xi }\right) ,v\in M\text{.}\n\label{=inter}\n\end{equation}\n\n\subsection{The \RamiA{Algebraic} Key Lemma}\label{subsec:keylem}\n\nIn this subsection we assume the following.\n\n\begin{cond}\n\label{assume}\n$ $\n\begin{enumerate}\n\item $\mathfrak{a}$ is a Lie algebra with elements $X,Y$ and bigrading $\mathfrak{a}_{ij}$ as in \S\S \ref{bigraded}.\n\n\item $\mathfrak{a}_{ij}=\left\{ 0\right\} $ if $j\not\in \left\{\n-1,0,1\right\} $ and also $\mathfrak{a}_{1,-1}=\left\{ 0\right\} $.\n\n\item $\xi $ is a character of $\mathcal{N}_{1}$ as in \S\S \ref{coinv}\nand $\xi |_{\mathfrak{a}_{ij}}=0$ unless $i=1,j=0$.\n\n\end{enumerate}\n\end{cond}\n\n\begin{lemma}\nSuppose $\mathfrak{a}$ and $\xi $ satisfy (1)-(3) of Condition \ref{assume};\nthen we have $\mathfrak{a}_{0,1}\subset \mathfrak{a}_{0}^{\xi }.$\n\end{lemma}\n\n\begin{proof}\nWe need to show that $\DimaA{\xi} \left( \left[ A,B\right] \right) =0$ for all $A\in \mathfrak{a}_{0,1}$, $B\in \mathcal{N}_{1}$.\n\nTo prove this we may assume that $B\in \mathfrak{a}_{ij}$ for some $i,j$.\nThen we have $\left[ A,B\right] \in \mathfrak{a}_{i,j+1}$ and so by\nCondition \ref{assume} (3) we have $\DimaA{\xi} \left( \left[ A,B\right] \right) =0$\nunless $i=1$ and $j=-1$. 
This forces $B\in $ $\mathfrak{a}_{1,-1}$ and hence\n$B=0$ by Condition \ref{assume} (2) and so $\DimaA{\xi} \left( \left[ A,B\right]\n\right) =0$ in this case as well.\n\end{proof}\n\nIn view of the previous lemma we have a well-defined restriction map\n\begin{equation*}\nRes:\left( \mathfrak{a}_{0}^{\DimaA{\xi} }\right) ^{\ast }\rightarrow \left(\n\mathfrak{a}_{0,1}\right) ^{\ast }\n\end{equation*}\n\nLet $M$ be an $\mathfrak{a}$-module and fix a (not necessarily good) filtration $F^iM$ on $M$.\nWe now define the $\DimaA{\xi} $-coinvariants module $L$ of $M$ as in \S\S \ref{coinv} and let $F^iL$ be the induced filtration on it.\n\nLet\n\begin{equation*}\nAV_F\left( L\right) \subset \left( \mathfrak{a}_{0}^{\DimaA{\xi} }\right) ^{\ast }\text{ and }\mathcal{V}\left( M\right) \subset \mathfrak{a}^{\ast }\n\end{equation*}\ndenote the respective $F$-associated variety of $L$ and annihilator variety of $M$ as in\n\S\S \ref{subsec:Filt} and \S\S\S \ref{subsubsec:AnnVar}.\n\nWe are now ready to formulate the key lemma.\n\begin{lem}[\RamiA{The key lemma}]\n\label{lem:key} Suppose $\phi \in Res\left[ AV_F\left( L\right) \right]\n\subset \left( \mathfrak{a}_{0,1}\right) ^{\ast }$ and regard $\phi +\DimaA{\xi} $\nas an element of $\mathfrak{a}^{\ast }$ via\n\begin{equation*}\n\left( \phi +\DimaA{\xi} \right) |_{\mathfrak{a}_{0,1}}=\phi \text{, }\left( \phi\n+\DimaA{\xi} \right) |_{\mathfrak{a}_{1,0}}=\DimaA{\xi} \text{ and }\left( \phi +\DimaA{\xi}\n\right) |_{\mathfrak{a}_{ij}}=0\text{ for all other pairs }\left( i,j\right)\n\end{equation*}\nThen we have\n\begin{equation*}\n\phi +\DimaA{\xi} \in \mathcal{V}\left( M\right)\n\end{equation*}\n\end{lem}\n\nThe proof involves in a crucial way the PBW basis discussed in \S\S \ref{bigraded} above. Note that by Condition \ref{assume} (2), the sequence of\npairs $\left( i,j\right) $ ordered as in (\ref{=lex0}) looks as follows:\n\begin{equation}\n\fbox{$\cdots ,\left( 1,1\right) $},\fbox{$\left( 1,0\right) ,(0,1)$},\fbox{$(0,0),(0,-1)$},\fbox{$\left( -1,1\right) ,\cdots $} \label{=lex}\n\end{equation}\nwhere we have grouped the possible pairs $\left( i,j\right) $ into 4 groups\nfor ease of reference below. Note that we do not mean to imply that $\mathcal{B}_{ij}\neq \emptyset $ for the indicated pairs in (\ref{=lex}),\nbut rather that $\mathcal{B}_{ij}=\emptyset $ for the \emph{missing pairs}\ne.g. $\left( 1,-1\right) ,\left( 0,2\right) $ etc.\n\n\begin{proof}\n[Proof of Lemma \ref{lem:key}]\n\Dima{Let $\sigma^n:\mathcal{U}^n(\mathfrak{a}) \to \Sym^n(\mathfrak{a})$ denote the $n$-th symbol map. 
}\nWe need to show that for all $n$, and for all $\nP\in Ann\left( M\right) \cap \mathcal{U}^{n}\left( \mathfrak{a}\right) $ we\nhave\n\begin{equation}\n\left\langle \sigma ^{n}\left( P\right) ,\phi +\DimaA{\xi} \right\rangle =0\n\label{=show}\n\end{equation}\nSince $Ann\left( M\right) $ and $\mathcal{U}^{n}\left( \mathfrak{a}\right) $\nare stable under the adjoint action $ad$, $Ann\left( M\right) \cap \mathcal{U}^{n}\left( \mathfrak{a}\right) $ is a direct sum of $ad(X)$-eigenspaces.\nSince $X$ and $Y$ commute, each $ad(X)$-eigenspace in $Ann\left( M\right) \cap \mathcal{U}^{n}\left( \mathfrak{a}\right) $ is a direct sum of $ad(Y)$-eigenspaces.\nThus we may further assume\n\begin{equation*}\nP\in Ann\left( M\right) \cap \mathcal{U}^{n}\left( \mathfrak{a}\right) \cap\n\mathcal{U}_{kl}\left( \mathfrak{a}\right)\n\end{equation*}\nfor some integers $k,l$, where $\mathcal{U}_{kl}\left( \mathfrak{a}\right) $\nis defined as in (\ref{=Uij}).\n\nConsider the PBW monomials contained in $P$ in the sense of Definition \ref{contain}. We say such a monomial is \textquotedblleft\nrelevant\textquotedblright\ if it is a product of precisely $n$ factors from\ngroup 2 in the sequence (\ref{=lex}) \Dima{(i.e. $\{(1,0),(0,1)\}$)} and \textquotedblleft\nirrelevant\textquotedblright\ otherwise. Thus we get a decomposition\n\begin{equation*}\nP=R+I\n\end{equation*}\nwhere $R$ and $I$ are combinations of relevant and irrelevant monomials\nrespectively.\n\nWe note that $R\in \mathcal{U}\left( \mathcal{N}_{0}^{\DimaA{\xi} }\right) $ and we\nclaim that the following properties hold\n\begin{eqnarray}\n\left\langle \sigma ^{n}\left( P\right) ,\phi +\DimaA{\xi} \right\rangle\n&=&\left\langle \sigma ^{n}\left( R\right) ,\phi +\DimaA{\xi} \right\rangle\n\label{C1} \\\n\DimaA{\Xi} \left( R\right) &\in &\mathcal{U}^{n-k}\left( \mathfrak{a}_{0,1}\right) \label{C2} \\\n\sigma ^{n-k}\left( \DimaA{\Xi} \left( R\right) \right) &\in &Ann\left( Gr_F(L)\right) \label{C3} \\\n\left\langle \sigma ^{n}\left( R\right) ,\phi +\DimaA{\xi} \right\rangle\n&=&\left\langle \sigma ^{n-k}\left( \DimaA{\Xi} \left( R\right) \right) ,\phi\n\right\rangle \label{C4}\n\end{eqnarray}\nGranted these claims for the moment, we can prove the Lemma as\nfollows. Since $\phi \in Res\left[ AV_F\left( L\right) \right] $ we deduce\nfrom (\ref{C2}) and (\ref{C3}) that $\left\langle \sigma ^{n-k}\left( \DimaA{\Xi}\n\left( R\right) \right) ,\phi \right\rangle =0$. Now by (\ref{C1}) and (\ref{C4}) we get (\ref{=show}) as desired.\n\nWe now turn to the proof of claims (\ref{C1} -- \ref{C4}). For (\ref{C1})\nit suffices to show that\n\begin{equation}\n\left\langle \sigma ^{n}\left( T\right) ,\phi +\DimaA{\xi} \right\rangle =0\n\label{=showT}\n\end{equation}\nfor every \emph{irrelevant} monomial $T$ contained in $P$. Indeed if $T$ has\nfewer than $n$ factors then $\sigma ^{n}\left( T\right) =0,$ otherwise $T$\nmust have a factor not from group 2 and then (\ref{=showT}) holds since\n$\phi +\DimaA{\xi} $ vanishes on such factors by definition.\n\nIf $R=0$ then certainly (\ref{C2} -- \ref{C4}) hold. Therefore we may assume\nthat $P$ contains at least one relevant monomial $S$. 
By definition every\nsuch $S$ is of the form\n\begin{equation*}\nS=A_{1}\cdots A_{p}B_{1}\cdots B_{n-p}\text{ with }A_{i}\in \mathcal{B}_{1,0}\text{ and }B_{j}\in \mathcal{B}_{0,1}\n\end{equation*}\nBy Lemma \ref{lem-contain} we have $S\in \mathcal{U}_{kl}\left( \mathfrak{a}\right) $ which forces\n\begin{equation}\nk,l\geq 0\text{ and }n=k+l. \label{=kl}\n\end{equation}\nand that $S$ is necessarily of the form\n\begin{equation}\nS=A_{1}\cdots A_{k}B_{1}\cdots B_{n-k}\text{ with }A_{i}\in \mathcal{B}_{1,0}\text{ and }B_{j}\in \mathcal{B}_{0,1} \label{=relev}\n\end{equation}\nNow by (\ref{=Psi}) we get\n\begin{equation}\n\DimaA{\Xi} \left( S\right) =\DimaA{\Xi} \left( A_{1}\cdots A_{k}B_{1}\cdots\nB_{n-k}\right) =\DimaA{\xi} \left( A_{1}\right) \cdots \DimaA{\xi} \left( A_{k}\right)\nB_{1}\cdots B_{n-k}\in \mathcal{U}^{n-k}\left( \mathfrak{a}_{0,1}\right)\n\label{=cont}\n\end{equation}\nSince $R$ is a combination of relevant monomials (\ref{C2}) follows.\n\nTo prove (\ref{C3}) we need to show that\n\begin{equation*}\n\DimaA{\Xi} \left( R\right) L^{i}\subset L^{i+n-k-1}\n\end{equation*}\nBy formula (\ref{=inter}) we have\n\begin{equation*}\n\DimaA{\Xi} \left( R\right) L^{i}=\DimaA{\Xi} \left( R\right) \varpi \left( M^{i}\right)\n=\varpi \left( RM^{i}\right) =\varpi \left( \left( P-I\right) M^{i}\right) .\n\end{equation*}\nSince $P\in Ann\left( M\right) $ we have $PM^{i}=0$ and so it suffices to\nshow that\n\begin{equation}\n\varpi \left( TM^{i}\right) \subset L^{i+n-k-1} \label{=showC3}\n\end{equation}\nfor every \emph{irrelevant} monomial $T$ contained in $P$. For this we\nconsider several cases.\n\nFirst suppose $T$ has a group 1 factor, then we can write $T=ZT^{\prime }$\nwhere $Z$ is a group 1 basis vector and $T^{\prime }$ is a smaller PBW\nmonomial. In this case we have $\DimaA{\xi} \left( Z\right) =0$ and hence we get\n\begin{equation*}\n\varpi \left( TM^{i}\right) =\varpi \left( ZT^{\prime }M^{i}\right) =\DimaA{\xi}\n\left( Z\right) \varpi \left( T^{\prime }M^{i}\right) =0\n\end{equation*}\nwhich certainly implies (\ref{=showC3}).\n\nThus we may suppose $T$ has no group 1 factors. It follows then that the\nonly possible factors of $T$ with positive $ad$ $X$ weight are those from $\mathcal{B}_{1,0}$. Now suppose that $T$ has a group 4 factor. Since such a\nfactor has negative $ad$ $X$ weight and so since $T$ has $ad$ $X$ weight $k$, $T$ must have\ at least $k+1$ factors from $\mathcal{B}_{1,0}$. Thus $T=A_{1}\cdots A_{k+1}T^{\prime }$ where $A_{i}\in \mathcal{B}_{1,0}$ and $T^{\prime }\in \mathcal{U}^{n-k-1}\left( \mathfrak{a}\right) $. Thus we get\n\begin{equation*}\n\varpi \left( TM^{i}\right) =\DimaA{\xi} \left( A_{1}\right) \cdots \DimaA{\xi} \left(\nA_{k+1}\right) \varpi \left( T^{\prime }M^{i}\right) \subset L^{i+n-k-1}\n\end{equation*}\n\nTherefore we may assume that $T$ has only group 2 and group 3 factors. Since\n$T\in \mathcal{U}_{kl}\left( \mathfrak{a}\right) $ it follows that $T$ must\nhave exactly $k$ factors from $\mathcal{B}_{1,0}$ and \emph{at least }$l$\nfactors from $\mathcal{B}_{0,1}$. Since $T$ has at most $n$ factors and $k+l=n,$ it follows that $T$ has \emph{exactly} $l$ factors from $\mathcal{B}_{0,1}$. Hence $T$ is relevant, contrary to assumption. 
This finishes the\nproof of (\ref{=showC3}).\n\nFinally to prove (\ref{C4}) it suffices to show that\n\begin{equation*}\n\left\langle \sigma ^{n}\left( S\right) ,\phi +\DimaA{\xi} \right\rangle\n=\left\langle \sigma ^{n-k}\left( \DimaA{\Xi} \left( S\right) \right) ,\phi\n\right\rangle\n\end{equation*}\nfor every relevant monomial $S=A_{1}\cdots A_{k}B_{1}\cdots B_{n-k}$ as in (\ref{=relev}). For this we calculate as follows\n\begin{equation*}\n\left\langle \sigma ^{n}\left( S\right) ,\phi +\DimaA{\xi} \right\rangle\n=\left\langle \DimaA{\xi} \left( A_{1}\right) \cdots \DimaA{\xi} \left( A_{k}\right)\n\sigma ^{n-k}\left( B_{1}\cdots B_{n-k}\right) ,\phi \right\rangle\n=\left\langle \sigma ^{n-k}\left( \DimaA{\Xi} \left( S\right) \right) ,\phi\n\right\rangle\n\end{equation*}\n\end{proof}\n\n\n\subsection{Proof of Theorem \ref{thm:HCAdm}}\label{subsec:PfHCAdm}\n\n\n\nIn the notations of Theorem \ref{thm:HCAdm}:\n\nIt is enough to show that $\oE^d(\pi)$ is admissible.\nLet $(\tau,L):= \oE^d(\pi)= \pi_{\fu^{d-1},\psi},$ considered as a representation of $\fp_{n-d+1} = \g_{n-d} \ltimes \fv_{n-d+1}$.\nDenote by $\varpi$ the projection $\pi \onto \tau$.\n\nWe will need the following lemma from linear algebra.\n\begin{lem}\label{lem:LinAlg}\nLet $u \in \fu_n^d$ be the matrix that has 1s on the superdiagonal of the lower block and 0s elsewhere. Then for any $v \in \fv_{n-d+1}$, if $(u+v)^d=0$ then $v=0$.\n\end{lem}\n\begin{proof}\nLet $A:=u+v$. Computing $A^k$ by induction for $k \leq d$ we see that its first $n-d$ columns will be zero, the $n-d+k$-th column of the submatrix consisting of the first $n-d$ rows will be $v$, the other columns of this submatrix will be zero and the square submatrix formed by the last $d$ rows and columns will be $J_d^k$, where $J_d$ is the (upper triangular) Jordan block. Thus, $A^d=0$ if and only if $v=0$.\n\end{proof}\n\nFix a good filtration $\pi^i$ on $\pi$. Note that by Proposition \ref{prop:WonFilt} it will also be good as an $\n$-filtration and define $L^i:=\varpi(\pi^i)$. Note that $L^i$ is a good filtration.\n\n\begin{cor} \label{cor:AV0}\n$pr_{\fv_{n-d+1}^*}(AV(L)) = \{0\}$.\n\end{cor}\n\begin{proof}\nLet $\mathfrak{a}:=\g_n$ and let $X,Y \in \g$ be diagonal matrices given by $$X = \diag(0^{n-d},1,2 , \dots , d) \text{ and }Y= \diag(0^{n-d-1},1^{d+1}).$$ Consider the bigrading $\mathfrak{a}=\bigoplus_{ij} \mathfrak{a}_{ij}$ defined as in \S \ref{bigraded}. Note that $\cN_1 = \DimaA{\fu_d}$ and let $\psi=\psi^d$. Note that the conditions of Condition \ref{assume} are satisfied.\n\nLet $\phi \in pr_{\fv_{n-d+1}^*}(AV(L))$. By the key Lemma \ref{lem:key}, we have $\phi+\psi \in \cV(\pi)$. By the definition of $d$, this implies $((\phi+\psi)^*)^d=0$ and thus, by Lemma\n\ref{lem:LinAlg}, $\phi=0$.\n\end{proof}\n\n\n\begin{proof}[Proof of Theorem \ref{thm:HCAdm}]\nBy Corollary \ref{cor:AV0}, $pr_{\fv_{n-d+1}^*}(AV(L)) = \{0\}$.\nHence any $X\in \fv_{n-d+1}$ vanishes on $AV(L) \subset \p_{n-d+1}^*$.\nBy Hilbert's Nullstellensatz this implies that there exists $k$ such that $X^k \in Ann(gr(L))$. Since $\fv_{n-d+1}$ is finite dimensional, one can find one $k$ suitable for all $X\in \fv_{n-d+1}$. Since $L^i$ is an $\n_{n-d+1}$-good filtration on $L$, $gr(L)$ is finitely generated over $\Sym(\n_{n-d+1})$. 
Since $\\Sym^{>k}(\\fv_{n-d+1})$ acts by zero, $gr(L)$ is finitely generated even over $\\Sym(\\n_{n-d})$. Thus, $L^i$ is an $\\n_{n-d}$- good filtration and hence $L$ is finitely generated over $\\n_{n-d}$. Thus, by Theorem \\ref{thm:EqAdm} it is an admissible Harish-Chandra module over $G_{n-d}$.\n\\end{proof}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n$\\mathcal N=2$ gauge theories have been of great interest in the past twenty-five years. While $\\mathcal N=4$ SYM has trivial non-perturbative physics \nthe more realistic $\\mathcal N=1$ gauge theories are yet to be solved.\n $\\mathcal N=2$ gauge theories exhibit many interesting phenomena, such as confinement and monopole condensation. Moreover, their topological sector gives access to their non-perturbative regime.\n\nSeiberg and Witten derived the Wilsonian low energy effective action of the $\\mathcal N=2$ $SU(2)$ gauge theory by encoding the problem into a two-dimensional (2D) holomorphic curve \\cite{Seiberg:1994rs}. Their work was soon after generalized to other gauge groups and matter contents \\cite{Argyres:1994xh,Klemm:1994qs,Seiberg:1994aj,Argyres:1995wt}. Although for the paradigmatic $SU(2)$ case the Seiberg-Witten (SW) curve was derived from first principles \\cite{Seiberg:1994rs}, its construction becomes difficult for generic quiver gauge theories. \nTherefore, other methods have been employed, e.g., integrability \\cite{Donagi:1995cf}, geometric engineering \\cite{Katz:1996fh,Katz:1997eq} and the type IIA\/M-theory brane constructions \\cite{Witten:1997sc,Kol:1997fv,Brandhuber:1997ua}. The SW curve was initially introduced as an auxiliary space \\cite{Seiberg:1994rs}, however, it was later understood that it is part of the M-theory target space \\cite{Witten:1997sc}. Using string theory, $\\mathcal{N}=2$ gauge theories can be realized as world volume theories on D4-branes, which are suspended between NS5-branes. Uplifting this brane setup to M-theory, all the branes can be seen as one single M5-brane with a non-trivial topology. The geometry of this M5-brane is encoded in the SW curve. Therefore, the SW curve can also be derived by studying the minimal surface of the M5-brane \\cite{Witten:1997sc}.\n\nAn alternative way to derive the Seiberg-Witten results was discovered by Nekrasov \\cite{Nekrasov:2002qd}. He succeeded in finding the instanton partition functions of the $\\mathcal N=2$ gauge theories by introducing a special deformation called the $\\Omega$ background. The deformed theory should in fact be interpreted as a five-dimensional (5D) $\\mathcal N=1$ gauge theory defined on the space $\\mathcal{M}_4 \\times S^1$. This class of 5D gauge theories was first studied by Seiberg \\cite{Seiberg:1996bd} and their relation to the four-dimensional (4D) $\\mathcal N=2$ gauge theories on $\\mathcal{M}_4$ was explored in \\cite{Nekrasov:1996cz}. Further, it was found that the 5D $\\mathcal N=1$ gauge theories can be realized using D5- and NS5-branes \\cite{Aharony:1997ju,Aharony:1997bh}. This D5\/NS5 brane construction is T-dual to the D4\/NS5 system discussed above \\cite{Witten:1997sc} as well as the original D3\/NS5 Hanany-Witten set-up \\cite{Hanany:1996ie}. The 5D extension of the SW curve has been studied in \\cite{Kol:1997fv,Brandhuber:1997ua}. The curve was obtained by compactifying one of the directions along which the NS5-branes extend in the D4\/NS5 setup. 
After T-duality along the compactified direction, D4-branes turn into D5-branes, whose world volume theory is a 5D $\\mathcal{N}=1$ gauge theory.\n\nAn intriguing relation between the gauge theory partition function and topological string theory was conjectured by Nekrasov \\cite{Nekrasov:2002qd}. String theory compactified on Calabi-Yau threefold (CY$_3$) yields $\\mathcal{N}=2$ gauge theory on the 4D transverse space. The partition function of this gauge theory is equivalent to the field theory limit of the topological string partition function. This relation has been tested and verified by several authors \\cite{Iqbal:2003ix,Iqbal:2003zz,Eguchi:2003sj}. The topological string theory computation leads to a special case of $\\Omega$ deformed gauge theories. The generic $\\Omega$ deformation of gauge theories is obtained by considering an extension called refined topological string partition function \\cite{Iqbal:2007ii,Awata:2008ed,Taki:2007dh}.\nTopological strings without field theory limit gives the generating function of the BPS states \ncoming from M2-branes wrapped on two-cycles inside CY$_3$.\nThis means that the topological string theory describes the holomorphic sector of M-theory on CY$_3$. \nThe topological string partition function is then equivalent to the Nekrasov partition function \nfor 5D gauge theory via M-theory lift of the geometric engineering.\n\n\nIn the present article, we consider the 5D $\\mathcal{N}=1$ $SU(N)^{M-1}$ liner quivers depicted in Figure~\\ref{quiver}. Their type IIA string theory description involves $N$ D4-branes and $M$ NS5-branes. In this set-up, the NS5-branes wrap a coordinate circle $S^1$. \nFrom the M-theory point of view, there is, in addition, an M-theory circle around which the M5-branes that lead to D4-branes wrap. \nWe have thus two compact circles, whose roles can be exchanged. In other words, all we have is an M5-brane with non-trivial topology, which yields the SW curve of either $SU(N)^{M-1}$ theory or $SU(M)^{N-1}$ theory. In this sense, $SU(N)^{M-1}$ gauge theory is dual to $SU(M)^{N-1}$ gauge theory. Although the conceptual understanding of this duality has been discussed previously \\cite{Katz:1997eq,Aharony:1997bh}\\footnote{For related work see also \\cite{Muneyuki:2011qu}.}, the explicit duality map was not known.\n\n\nWe take a first step toward understanding this duality in detail. The strategy we adopt is to compare the low energy effective theories of 5D $SU(N)^{M-1}$ and $SU(M)^{N-1}$ gauge theories (Figure~\\ref{quiver}). This is achieved by independently using both the Seiberg-Witten formalism and Nekrasov's partition function. We derive the map between the ultraviolet (UV) parameters of the two gauge theories, through which they are dual to each other. 
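\nAs an elementary consistency check of this statement (a simple counting based only on the field content of Figure~\ref{quiver}, independent of the derivations below), both quivers depend on the same number of UV parameters:\n\begin{align}\n\underbrace{2N + (M-2)}_{\text{masses}} + \underbrace{M-1}_{\text{couplings}} \;=\; 2N + 2M - 3 \;=\; \underbrace{2M + (N-2)}_{\text{masses}} + \underbrace{N-1}_{\text{couplings}} \, ,\n\end{align}\nwhere the masses count the $N+N$ (respectively $M+M$) fundamental hypermultiplets at the two ends of the quiver together with the $M-2$ (respectively $N-2$) bifundamentals. Likewise, the Coulomb branch has dimension $(N-1)(M-1)$ on both sides.\n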
\n\\begin{figure}[t]\n \\begin{center}\n \\includegraphics[width=150mm,clip]{LinQuiver.eps}\n \\end{center}\n \\caption{The circle $SU(N)_{(i)}$ corresponds to the $i$-th gauge group,\nthe segments between two circles are bifundamental hypermultiplets,\nand the flavor symmetries are illustrated by the two blue boxes $SU(N)_{(0)}$ and $SU(N)_{(M)}$ at the ends of the quiver.}\n \\label{quiver}\n\\end{figure}\nThe Seiberg-Witten curves are obtained by minimizing the worldvolume of an M5-brane with nontrivial geometry.\nNekrasov partition functions are computed using topological string theory.\nBoth in the M-theory and the topological string theory descriptions the duality is geometrically realized simply as a rotation of the M5-brane and toric diagram respectively.\nWe would also like to mention that there is another duality for 4D $\\mathcal{N}=1$ gauge theories that is based on performing (non-trivial in this case) operations on the toric diagrams. The $\\mathcal{N}=1$ toric duality is studied in \\cite{Feng:2000mi}.\n\n\\bigskip\n\nThis article is organized as follows. In Section \\ref{sec:Review}, we review well known tools and notions that will be used for our study of the duality. In particular, we will describe the Seiberg-Witten framework adopted for the 5D gauge theories, as well as the derivation of Nekrasov's partition function using topological string theory. In Section \\ref{sec:MtheoryDeriv}, we compute the duality map for the gauge theory parameters based on analysis of the SW curve. The same map will then be re-derived independently in Section \\ref{sec:TopStringDeriv}, where we calculate Nekrasov's partition function via the topological string partition function for toric CY$_3$. Starting from the toric diagram, one notices that the duality is manifest. The consequences of the duality for 2D CFTs through AGTW are discussed in Section \\ref{sec:GaugeToCFT} together with the simplest extension to the generic $\\Omega$ background. Section \\ref{sec:Discussion} is devoted to discussions of our results and possible future applications.\n\n\n\\section{Background material}\n\\label{sec:Review}\n\n\\subsection{Seiberg-Witten formalism}\n\\label{sec:SWReview}\n\nWe begin by summarizing the Seiberg-Witten solution for 4D $\\mathcal{N}=2$ gauge theories. Nice reviews of this topic can be found in \\cite{Bilal:1995hc,Lerche:1996xu,Klemm:1997gg,Peskin:1997qi}. The complete description of the low energy effective theory (up to two derivatives and quartic fermion terms) is encoded in the prepotential $\\mathcal{F}(a)$ according to\n\\begin{equation}\nS_{eff} = \\int d^4x d^4 \\theta \\mathcal{F} (a) + \\int d^4x d^4 \\bar{\\theta} \\bar{\\mathcal{F}} (\\bar{a}) \\, .\n\\end{equation}\nThe prepotential is a holomorphic function of the vacuum expectation values (${a}_i$) of the scalar fields in the $\\mathcal{N} = 2$ vector multiplet. 
The holomorphic gauge couplings are obtained as\n\\begin{equation}\n\\tau_{ij} =\\frac{ \\partial^2\\mathcal{F}(a)}{\\partial {a}_i \\partial {a}_j} \\, ,\n\\end{equation}\nwhile the expectation values of the scalar fields in the dual (magnetic) theory are given by\n\\begin{equation}\n{a_D}^i = \\frac{\\partial \\mathcal{F}(a)}{\\partial {a}_i} \\, .\n\\eqnlab{PrepotentialDeriv}\n\\end{equation}\nThe electromagnetic duality acts on the Coulomb moduli as the modular transformation\n\\begin{equation}\n \\left( \\begin{array}{c}\n {a_D}^i \\\\\n {a}_i \\\\\n \\end{array}\n \\right) \\rightarrow\n \\left( \\begin{array}{cc}\n a & b \\\\\n c & d \\\\\n \\end{array}\n \\right)\n \\left( \\begin{array}{c}\n {a_D}^i \\\\\n {a}_i \\\\\n \\end{array}\n \\right)\n \\quad \\mbox{with} \\quad \\left( \\begin{array}{cc}\n a & b \\\\\n c & d \\\\\n \\end{array}\n \\right) \\in SL(2, \\mathbb{Z}) \\, .\n\\end{equation}\nThe prepotential is determined using an auxiliary curve called the SW curve\n\\begin{equation}\nF_{4D}(t,v) = 0\n\\end{equation}\ntogether with a meromorphic differential $\\lambda_{SW}$. The derivatives of the meromorphic one-form with respect to the moduli of the SW curve\\footnote{The moduli of the SW curve $u$ for the SU(2) example is the gauge invariant Coulomb moduli $u=\\langle \\text{tr}\\phi^2 \\rangle + \\dots$.} are the holomorphic differentials of the curve. The Coulomb moduli are then computed according to\n\\begin{equation}\na_i = \\oint_{A_i} \\lambda_{SW} \\quad \\mbox{and} \\quad {a_D}^i = \\oint_{B^i} \\lambda_{SW} \\, ,\n\\label{a_aD}\n\\end{equation}\nwhere $A_i$ and $B_i$ are the basic cycles of the algebraic curve with intersection number $A_i \\cdot B^j = \\delta_i^j$. The prepotential itself can be found by integrating \\eqnref{PrepotentialDeriv}. Moreover, contour integrals of the meromorphic differential $\\lambda_{SW}$ around its poles give linear combinations of the bare quark masses ($m_i$).\n\nThe SW curve and one-form can also be derived from M-theory \\cite{Witten:1997sc}. To do this we consider the brane setup in Table \\ref{config}, where $N$ D4-branes are suspended between $M$ NS5-branes. We introduce also $2N$ flavor branes attached to the two outermost NS5-branes and extended to infinity. The theory described by this setup is 4D \n$\\mathcal{N}=2$ $SU(N)^{M-1}$ gauge theory, which is asymptotically conformal. The rotation of $x^{4}$ and $x^{5}$ coordinates corresponds to $U(1)_R$ symmetry, while rotation of $x^{7}$, $x^{8}$, and $x^{9}$ corresponds to \n$SU(2)_R$ symmetry.\n\\begin{table}\n\\centering\n\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|}\n\\hline\n&$x^0$ & $x^1$ &$x^2$ &$x^3$ &$x^4$ &$x^5$ &$x^6$ &$x^7$ &$x^8$ &$x^9$ &($x^{10}$) \\\\\n\\hline\n$M$ NS5-branes &$-$&$-$&$-$&$-$&$-$&$-$&.&.&.&.&.\\\\\n\\hline\n$N$ D4-branes &$-$&$-$&$-$&$-$&.&.&$-$&.&.&.&$-$\\\\\n\\hline\n\\end{tabular}\n\\caption{Brane configuration in type IIA string theory}\n\\label{config}\n\\end{table}\nTable \\ref{config} is a classical configuration from the gauge theory point of view. Taking the tension of the branes into account, the configuration has to be modified to include the quantum effects. Uplifting to M-theory and minimizing the world volume of the corresponding M5-brane under fixed boundary condition yields the SW curve. 
This curve describes a 2D subsurface inside the space spanned by the coordinates $\\{x^4, x^5, x^6, x^{10}\\}$, where $x^{10}$ is the direction of the M-theory circle.\n\nTo obtain 5D $\\mathcal{N}=1$ gauge theory we compactify the $x^{5}$ coordinate. After T-duality along $x_5$, the system becomes an D5\/NS5 brane system in type IIB string theory with a 5D $\\mathcal N=1$ gauge theory living on the D5-branes (Table \\ref{configIIB}). This is the 5D $\\mathcal N=1$ gauge theory for which we are constructing the SW curve. The spacetime of this gauge theory is $\\mathcal{M}_4 \\times S^1$ with the circumference of the IIB circle being\n\\begin{equation}\n\\beta =\\frac{ 2 \\pi \\alpha'}{R_5} = \\frac{ 2 \\pi \\ell^3_{p}}{R_5 R_{10}} \\, ,\n\\end{equation}\nwhere $\\alpha' = \\ell_{s}^2 = \\frac{\\ell_{\\text{p}}^3}{R_{10}}$. Going back to the type IIA description, we define the complex coordinates $v$ and $s$ according to\n\\begin{equation}\n v \\equiv x^4 + i x^5 \\quad \\text{and} \\quad s \\equiv x^6 + i x^{10} \\, .\n\\end{equation}\nDue to the periodic nature of $x^5$ and $x^{10}$ it is natural to introduce another pair of complex coordinates\n\\begin{equation}\n w \\equiv e^{-\\frac{v}{R_5}} \\quad \\text{and} \\quad t \\equiv e^{-\\frac{s}{R_{10}}} \\, .\n\\label{def_tw}\n\\end{equation}\nThe radius of the $x^{5}$ circle is denoted as $R_5$ and that of the M-theory circle as $R_{10}$.\n\\begin{table}\n\\centering\n\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|}\n\\hline\n&$x^0$ & $x^1$ &$x^2$ &$x^3$ &$x^4$ &$x^5$ &$x^6$ &$x^7$ &$x^8$ &$x^9$ &($x^{10}$) \\\\\n\\hline\n$M$ NS5-branes &$-$&$-$&$-$&$-$&$-$&$-$&.&.&.&.&.\\\\\n\\hline\n$N$ D5-branes &$-$&$-$&$-$&$-$&.&$-$&$-$&.&.&.&$-$\\\\\n\\hline\n\\end{tabular}\n\\caption{Brane configuration in type IIB string theory}\n\\label{configIIB}\n\\end{table}\n\nThe SW curve of the 5D $SU(N)^{M-1}$ theory is now written as a polynomial of degree $N$ in $w$ and degree $M$ in $t$ as\n\\begin{align}\nF (t,w) \\equiv \\sum_{i=0}^N \\sum_{j=0}^M C_{p,q} w^p t^q \\, .\n\\label{SW_curve}\n\\end{align}\nThe periodic boundary condition along the $x^5$ coordinate makes the curve invariant under a shift of the positions of the color branes ($a'$) and flavor branes ($m'$) by $2 \\pi R_5$. Therefore, the coefficients of the curve $C_{p,q}$ depend only on the gauge coupling $q$ and\n\\begin{equation}\n\\label{tildema}\n\\begin{split}\n\\tilde{m} &\\equiv e^{-m' \/ R_5} = e^{- \\beta m} \\, , \\\\\n\\tilde{a} &\\equiv e^{-a' \/ R_5} = e^{- \\beta a} \\, ,\n\\end{split}\n\\end{equation}\nin which periodicity is manifest. Note that quantities that have dimension of mass are related to the ones with dimension of length (primed) as\n\\begin{equation}\na = \\frac{a'}{2\\pi \\ell^2_s} \\quad \\text{and} \\quad\nm = \\frac{m'}{2\\pi \\ell^2_s} \\, .\n\\end{equation}\nThe coefficients $C_{p,q}$ will be determined explicitly in Section \\ref{sec:MtheoryDeriv}.\n\n\\bigskip\n\nThe M-theory derivation of the SW one-form can be found in \\cite{Fayyazuddin:1997by, Henningson:1997hy, Mikhailov:1997jv}. We summarize it for pure $SU(2)$ theory here. The extension to generic quiver theories is straightforward. The idea is to relate two different expressions of the masses of BPS states. On one hand, the mass of a BPS particle is given by\n\\begin{align}\n{m_{\\text{BPS}}}^2 = |n_e a + n_m a_D|^2 \\, ,\n\\label{BPS}\n\\end{align}\nwhere $n_e$ and $n_m$ are the electric and magnetic charges of the BPS state respectively. 
This formula can be rewritten using the SW one-form as\n\\begin{align}\n{m_{\\text{BPS}}}^2 = \\left| \\int_{n_e A + n_m B} \\lambda_{\\rm SW} \\right| ^2 \\, .\n\\label{mass1}\n\\end{align}\nOn the other hand, a BPS state is interpreted as an open M2-brane of minimal volume attached to the M5-brane. The boundary of such a minimal M2-brane with charge $(n_e,n_m)$ is the cycle $n_e A + n_m B$. Finally, the mass of this BPS state is calculated using the volume-form of the M2-brane\n\\begin{align}\n\\omega = ds \\wedge dv = d \\left[ \\log t \\, (d \\log w) \\right]\n\\end{align}\nand reads\n\\begin{align}\n{m_{\\text{BPS}}}^2 = \\left| \\frac{1}{(2 \\pi)^2 \\ell_p{}^3} \\int_{M_2} \\omega \\right| ^2 \\, ,\n\\label{mass2}\n\\end{align}\nwhere $1\/ (2 \\pi)^2 \\ell_p{}^3 $ is the tension of the M2-brane. Comparing (\\ref{mass1}) with (\\ref{mass2}), we find that the SW one-form takes the form\n\\begin{align}\n\\lambda_{\\rm SW} = - \\frac{i}{(2 \\pi)^2 \\ell_p{}^3} \\log t \\, (d \\log w) \\, .\n\\label{SW_1-form}\n\\end{align}\n\n\\subsection{Partition function and topological vertex}\n\nThe microscopic way to obtain the prepotential is via Nekrasov's partition function \\cite{Nekrasov:2002qd,Nekrasov:2003rj}\n\\begin{equation}\n\\label{ZF}\nZ(a;\\epsilon_1 ,\\epsilon_2) = e^{ \\frac{\\mathcal{F} (a)}{\\epsilon_1 \\epsilon_2} + \\cdots} \\, ,\n\\end{equation}\nwhich contains the full low energy effective description of $\\mathcal{N} = 2$ gauge theories in a deformed background. More details can be found in \\cite{Nekrasov:2005wg,Bruzzo:2002xf,Marino:2004cn,Tachikawa:2004,Shadchin:2005mx}. The starting point of Nekrasov's derivation is 5D $\\mathcal{N} = 1$ gauge theory on $\\mathcal{M}_4 \\times S^1$. This theory depends on two deformation parameters $(\\epsilon_1, \\epsilon_2)$ and the circumference of the circle $\\beta$. Taking the limit $\\beta \\rightarrow 0$ leads to the so-called $\\Omega$-deformed 4D $\\mathcal{N}=2$ gauge theory. The deformation parameterized by $\\epsilon_1$ and $\\epsilon_2$ breaks the $SO(4)$ Lorentz symmetry down to $SO(2){\\times}SO(2)$. In this way the path integral is localized to one point on $\\mathcal{M}_4$ and the computation of the partition function is simplified to supersymmetric quantum mechanics along $S^1$.\n\nNekrasov's partition function $Z(a,m,q;\\epsilon_1,\\epsilon_2)$ of 4D $\\mathcal{N}=2$ gauge theory is a function of the set of moduli $a$ parameterizing the Coulomb branch, the masses $m$ of all the flavor and bifundamental fields, the coupling constants $q=e^{2\\pi i \\tau}$ and the two parameters $\\epsilon_1$ and $\\epsilon_2$. It can be factorized as\n\\begin{equation}\n\\label{Partition}\nZ = Z_{\\rm pert}\\,Z_{\\rm inst} \\, ,\n\\end{equation}\nwhere $Z_{\\rm pert}$ is the perturbative part containing tree-level and one-loop contributions, while $Z_{\\rm inst}$ is the contribution from the instantons. The instanton part can be expanded with respect to the instanton number $k$\n\\begin{equation}\n\\label{Instantons}\nZ_{\\rm inst} = \\sum_{k} q^{k} Z_{k} \\, .\n\\end{equation}\n\nAs discussed previously, one way to realize 4D $\\mathcal{N}=2$ gauge theories is the Hanany-Witten setup in Table \\ref{config}.\nAnother way is to consider CY$_3$ compactification of type IIA string theory. \nThese two different points of view are connected by a series of duality transformations \\cite{Karch:1998yv}. 
Starting from the Hanany-Witten setup, the transformations consist of a T-duality along the $x^6$ coordinate, followed by an S-duality involving $x^6$ and $x^{10}$ and lastly another T-duality along the new $x^6$ coordinate. The resulting theory is IIA string theory on non-compact CY$_3$ without any branes.\nThe gauge symmetry of the 4D theory is geometrically realized by the vanishing cycles inside CY$_3$. A special class of CY$_3$ which yields $\\mathcal{N}=2$ gauge theories is the toric type \\cite{Aganagic:2003db}. Its generic configuration is a fibration of special Lagrangian $T^2 \\times \\mathbb{R}$ over the base $\\mathbb{R}^3$. For $SU(N)$ gauge symmetry it is further required that the CY$_3$ manifold is a non-trivial fibration of $A_{N-1}$ singularity over the space $\\mathbb{P}^1$ \\cite{Iqbal:2003zz}.\n\n\nAlready in \\cite{Nekrasov:2002qd} Nekrasov suggested that the partition function of $\\mathcal{N}=2$ gauge theories is the field theory limit of the topological string partition function on toric CY$_3$. \n For toric CY$_3$ the topological string partition function can be computed graphically using the so called toric diagram\\footnote{In this article we use the word toric diagram for the dual graph of the toric data. This is also called web-diagram \\cite{Aharony:1997bh}.}, which characterizes the toric Calabi-Yau manifold. The toric diagram consists of a collection of trivalent vertices, which are joined together by oriented straight lines.\n\nWriting down the topological string partition function is simple using the topological vertex formalism. The procedure is very similar to computing Feynman diagrams for usual field theory, where the internal momentum integrals are replaced by sums over the Young diagrams $R$\n\\begin{equation}\n\\int d p \\quad \\longrightarrow \\quad \\sum_{R} \\nonumber \\, .\n\\end{equation}\nSchematically, it takes the form\n\\begin{equation}\n \\mathcal{Z} = \\sum_{R} \\, (\\text{three-vertices}) \\times (\\text{oriented lines}) \\, .\n \\eqnlab{PartitionFuncSum}\n\\end{equation}\nThe topological three-vertex describes the open string amplitude on a local $\\mathbb{C}^3$ coordinate patch. In the case $\\hbar = \\epsilon_1 = - \\epsilon_2$, the contribution from the topological vertex is given by \\cite{Aganagic:2003db}\n\\begin{equation}\n C_{R_1 R_2 R_3}(\\mathfrak{q}) = \\mathfrak{q}^{\\frac{\\kappa_{R_3}}{2}}S_{R_2}(\\mathfrak{q}^\\rho) \\sum_\\eta S_{R_1\/\\eta}(\\mathfrak{q}^{R_2^T+\\rho})\\, S_{R_3^T\/\\eta}(\\mathfrak{q}^{R_2+\\rho}) \\, ,\n \\label{def_topv}\n\\end{equation}\nwhere $S_{\\alpha}$ and $S_{\\alpha \/ \\eta}$ are the Schur and skew-Schur functions, respectively. \nWe also introduce the symbol $\\mathfrak{q}=e^{ - \\beta \\hbar}$ for the exponentiated $\\Omega$ background.\nThe three free indices represent the three straight lines going out from the vertex. Each line is labeled by an infinite set of all possible Young tableaux associated with the group $U(\\infty)$. 
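As a simple illustration of (\\ref{def_topv}), when only one leg carries a non-trivial representation the sum over $\\eta$ collapses, since $S_{\\emptyset\/\\eta}=\\delta_{\\eta,\\emptyset}$, and the vertex reduces to a single Schur function,\n\\begin{equation}\nC_{\\emptyset R \\emptyset}(\\mathfrak{q}) = S_{R}(\\mathfrak{q}^{\\rho})\n\\quad \\text{and} \\quad\nC_{\\emptyset \\emptyset R}(\\mathfrak{q}) = \\mathfrak{q}^{\\frac{\\kappa_{R}}{2}} S_{R^T}(\\mathfrak{q}^{\\rho}) \\, ,\n\\end{equation}\nwhile for three trivial representations $C_{\\emptyset \\emptyset \\emptyset}(\\mathfrak{q})=1$.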
\nIn the Feynman diagram analogy, the vertex\n\\begin{figure}[h!]\n\\qquad \\qquad \\qquad \\quad\n\\includegraphics[scale=0.6]{vertex.eps}\n\\qquad \\qquad \n\\put(17,21){$\\longrightarrow$}\n\\qquad \\qquad \n\\includegraphics[scale=0.6]{topological-vertex.eps} \n\\qquad \\qquad \n\\put(13,21){$= \\quad C_{R_1 R_2 R_3}$}\n\\put(-19,17){\\tiny$R_1$}\n\\put(-50,32){\\tiny$R_3$}\n\\put(-50,13){\\tiny$R_2$}\n\\end{figure}\nis replaced by the topological vertex (\\ref{def_topv}), while the propagator is replaced according to\n\\begin{equation}\nG(p)=\\frac{1}{p^2} \\quad \\longrightarrow \\quad (-Q)^{|R|} (-1)^m \\mathfrak{q}^{-\\frac{m}{2}\\kappa_{R}} \\, , \\nonumber\n\\end{equation}\nwhere $Q=e^{-t}$ is the exponentiated K\\\"ahler modulus (size) of the two-cycle represented by the segment. The framing factor ($(-1)^m \\mathfrak{q}^{-\\frac{m}{2}\\kappa_{R}}$) of the ``propagator'' contains the second Casimir $\\kappa_R$ of the representation $R$, which is defined as $\\kappa_R = \\sum_j R_j (R_j-2j+1)=-\\kappa_{R^T}$, where $R_j$ is the number of boxes in the $j$-th row of the tableau and $R^T$ is the transposed Young tableau; for instance, $\\kappa_{(2)}=2=-\\kappa_{(1,1)}$. The integer $(-m-1)$ is the self-intersection number of the two-cycle and is illustrated in Figure \\ref{mfigure} together with two examples. \n\\begin{figure}[t]\n\\center\n\\includegraphics[scale=0.6]{framing_factor.eps}\n\\qquad \\qquad \n\\includegraphics[scale=0.6]{framing_factor_uss.eps}\n\\qquad \\qquad \\quad \n\\includegraphics[scale=0.6]{framing_factor_paral.eps} \n \\put(-336,8){\\footnotesize$ \\vec{v}_{\\text{in}}$} \n \\put(-258,32){\\footnotesize$ \\vec{v}_{\\text{out}}$}\n \\put(-296,21){\\footnotesize$R$} \n \\put(-296,10){\\footnotesize$m$} \n \\put(-180,8){\\footnotesize$m=1$} \n \\put(-48,10){\\footnotesize$m=0$} \n\\caption{In the definition of the framing factor we have $m=\\det \\left(\\vec{v}_{\\text{in}} \\;\\, \\vec{v}_{\\text{out}}\\right)$, the determinant of the $2\\times 2$ matrix whose columns are $\\vec{v}_{\\text{in}}$ and $\\vec{v}_{\\text{out}}$. We graphically clarify its definition and give two examples.}\n\\label{mfigure}\n\\end{figure}\n\nThe closed string amplitude on the full CY$_3$ is obtained by gluing together the local $\\mathbb{C}^3$ patches as in \\eqnref{PartitionFuncSum}. The sum in \\eqnref{PartitionFuncSum} is taken over all the sets of Young tableaux attached to the internal lines of the toric diagram. After carrying out the summation explicitly, it is straightforward to compare with the gauge theory partition function given by Nekrasov. 
The topological vertex formalism thus gives an alternative derivation of\n Nekrasov's partition function based on the geometric shape of the corresponding toric diagram.\n\n\n\nWhen $\\epsilon_1 \\neq - \\epsilon_2$, the topological vertex function above should be replaced by the \\textit{refined} topological vertex function \\cite{Iqbal:2007ii,Awata:2008ed}\n\\begin{align}\n\\begin{split}\nC_{R_1 R_2 R_3} (\\mathfrak{t},\\mathfrak{q}) = & {\\left( {\\frac{\\mathfrak{q}}{\\mathfrak{t}}} \\right)}^{\\frac{{\\left\\| R_1 \\right\\|^2 + \\left\\| R_2 \\right\\|^2 }}{2}} \\mathfrak{t}^{\\frac{{{\\kappa}_{R_1}}}{2}} \nP_{{R_2}^T } (\\mathfrak{t}^{ - \\rho } ;\\mathfrak{q},\\mathfrak{t}) \\, \\times \\\\\n& \\sum_\\eta {{\\left( {\\frac{\\mathfrak{q}}{\\mathfrak{t}}} \\right)}^{\\frac{{\\left| \\eta \\right| + \\left| R_3 \\right| - \\left| R_1 \\right|}}{2}} \nS_{R_1 \/\\eta } (\\mathfrak{t}^{ - {R_2}^T } \\mathfrak{q}^{ - \\rho } )}\nS_{{R_3}^T \/\\eta } (\\mathfrak{t}^{ - \\rho } \\mathfrak{q}^{ - R_2 } ) \\, ,\n\\end{split}\n\\end{align}\nwhere \n$\\mathfrak{q}=e^{-\\beta \\epsilon_1},\\, \\mathfrak{t}=e^{\\beta \\epsilon_2}$,\nand $P_{R^T} (\\mathfrak{t}^{ - \\rho } ;\\mathfrak{q},\\mathfrak{t})$ is the principal specialization of the Macdonald function\n\\begin{align}\nP_{R^T } (\\mathfrak{t}^{ - \\rho } ;\\mathfrak{q},\\mathfrak{t})=\\mathfrak{t}^{\\frac{1}{2}||R||^2} \\prod_{(i,j)\\in R}(1-\\mathfrak{t}^{{R_j}^T-i+1} \\mathfrak{q}^{R_i-j})^{-1} \\, .\n\\end{align}\nThe refined topological vertex function is a generalization which reduces to the ordinary vertex function when choosing $\\epsilon_1 = - \\epsilon_2$. It has slightly different properties compared to the ordinary topological vertex. For example, instead of being fully cyclically symmetric, it singles out one of its legs as a preferred direction. Slicing invariance is a conjecture claiming that the full partition function should be invariant under a change of the choice of the preferred direction.\n\n\\subsection{Introducing the duality}\n\\label{subsec:review_duality}\n\nThe first hint toward a duality between the 5D gauge theories with gauge groups $SU(N)^{M-1}$ and $SU(M)^{N-1}$ is given by counting the physical parameters of these two theories. Indeed, we find that the number of parameters matches exactly. For this counting we can ignore the infinite tower of Kaluza-Klein modes and count only the zero modes\\footnote{To include all the Kaluza-Klein modes only the circumference of the circle $\\beta$ is added as an extra parameter. However, the circumference $\\beta$ always appears multiplied by gauge theory parameters of mass dimension $1$, so it does not actually introduce any new degree of freedom.}. The zero modes coincide with the parameters of the corresponding 4D gauge theories on $\\mathcal{M}_4$; we will therefore use 4D terminology in the rest of this section.\n\nThe infrared (IR) physics of the $SU(N)^{M-1}$ and $SU(M)^{N-1}$ gauge theories at generic points on the Coulomb branch is in both cases described by the $U(1)^{(N-1)(M-1)}$ theory. Both theories are thus described by $(N-1)(M-1)$ IR effective coupling constants\n\\begin{equation}\n\\tau_{IR}^i = \\tau_{IR}^i\\left(\\tau_{UV} , m_{\\text{f}} , m_{\\text{bif}} , a \\right) \\, ,\n\\end{equation}\nwhich depend holomorphically on the gauge theory parameters. 
$\\tau_{UV}$ are the UV coupling constants, $m_{\\text{f}}$ are the mass parameters of the flavor hypermultiplets, $m_{\\text{bif}}$ are the mass parameters of the bifundamental hypermultiplets and $a$ are the Coulomb moduli parameters. The counting of the parameters for asymptotically superconformal $SU(N)^{M-1}$ and $SU(M)^{N-1}$ gauge theories is summarized in Table~\\ref{tab:CountParam}. Summing all the parameters, $(M-1)+2N+(M-2)+(N-1)(M-1)=(N+1)(M+1)-3$, shows that there are in total $[(N+1)(M+1)-3]$ parameters in both theories, which opens up the possibility of a map between them.\n\\begin{table}[h]\n\t \\centering\n \\begin{tabular}{| c || c | c |}\n \\hline\n & $SU(N)^{M-1}$ & $SU(M)^{N-1}$ \\\\\n \\hline\n\t \\hline\n\t $\\tau_{UV}$ & $M-1$ & $N-1$ \\\\\n\t \\hline\n\t $m_{\\text{f}}$ & $2N$ & $2M$ \\\\\n\t \\hline\n\t $m_{\\text{bif}}$ & $M-2$ & $N-2$ \\\\\n\t \\hline\n\t $a$ & $(N-1)(M-1)$ & $(M-1)(N-1)$ \\\\\n\t \\hline\n\t \\hline\n\t Total & $(N+1)(M+1)-3$ & $(M+1)(N+1)-3$ \\\\\n\t \\hline\n \\end{tabular}\n\t \\caption{Counting of the gauge theory parameters}\n\t \\label{tab:CountParam}\n\\end{table}\n\nOne of the approaches we use is to match the coefficients of the SW curves and the SW one-forms of the two dual theories. Before attempting that, we first count the degrees of freedom that are encoded in the SW curve. The SW curve of the 5D $SU(N)^{M-1}$ gauge theory is a polynomial of degree $M$ in the variable $t$ and $N$ in the variable $w$. We therefore have $[(M+1)(N+1)-1]$ complex coefficients, where one has been subtracted because an overall rescaling of the curve is irrelevant. Moreover, there is the freedom to set the origins of the coordinates $s$ and $v$. Removing two more coefficients, we find $[(M+1)(N+1)-3]$ degrees of freedom in total. Thus, the number of coefficients in the SW curve is always the same as the number of physical parameters.\n\nIf we exchange the roles of the variables $t$ and $w$, the SW curve (\\ref{SW_curve}) of the original $SU(N)^{M-1}$ theory \ncan be read as the SW curve of the dual $SU(M)^{N-1}$ theory.\nThe coefficients $C_{p,q}$ in the original curve \nget reinterpreted as the coefficients in the curve of the dual theory $(C_{q,p})_d$.\nIn addition, the SW one-form (\\ref{SW_1-form}) also remains the same up to a minus sign (\\ref{equalSWoneform}).\nUsing (\\ref{a_aD}) the IR effective coupling constant is given by\n\\begin{equation} \n \\tau_{IR} \n = \\frac{ \\frac{\\partial a_D}{\\partial u} }{ \\frac{\\partial a}{\\partial u} } \n = \\frac{ \\int_B \\omega }{ \\int_A \\omega } \\, ,\n\\end{equation}\nwhere $\\omega$ is the holomorphic differential. Since the holomorphic differential does not distinguish\\footnote{This is true because $\\omega$ has no poles as opposed to the meromorphic $\\lambda_{SW}$ that does have poles.}\nthe cycle $A$ (or $B$) of the original theory from $A_d$ (or $B_d$) of the dual theory, \nwe conclude that the dual IR effective coupling constant is equal to the original one.\nTherefore, once the relation between the gauge theory parameters and the coefficients $C_{p,q}$ in the SW curve is established, it is straightforward to find the duality map. \nThe map is obtained by equating the coefficients $C_{p,q}$, written in terms of the gauge theory parameters of the original $SU(N)^{M-1}$ theory, with the coefficients $(C_{q,p})_d$, written in terms of the parameters of the dual $SU(M)^{N-1}$ theory.\n\n\n\nThe interpretation of this duality in the context of the brane setup in IIA\/M and IIB theories \\cite{Aharony:1997bh} is the following. Consider M-theory compactified on $T^2$. 
The cycles of the torus correspond to the two phases of the variables $t$ and $w$. Exchanging $t$ and $w$ is the holomorphic extension of a particular $SL(2,\\mathbb{Z})$ modular transformation on the compactification torus, where the M-theory circle is exchanged with the $x^5$ circle. This modular transformation is equivalent to S-duality in IIB theory compactified on $S^1$ via T-duality along the $x^5$ circle. \nThe modular transformation in the IIA theory limit exchanges D4-branes with NS5-branes compactified along the $x^5$ circle, while S-duality in IIB theory exchanges D5-branes with NS5-branes.\nRigorously, the 90 degree rotation of the brane configuration ($w\\rightarrow t$ and $t\\rightarrow w^{-1}$) corresponds to this S-duality, but in the main body of this paper we study the $t\\leftrightarrow w$ reflection, which is technically simpler. The 90 degree rotation case is presented in detail in Appendix \\ref{app:90Rotation}.\nNote that this S-duality is different from the one which appears as the \nelectric-magnetic duality in the 4D Seiberg-Witten theory. \nAs we will see in the following sections, it acts on the gauge theory parameters \nin a totally different manner.\nThe difference between these two types of S-duality is due to the difference in the brane setups.\nIt is known that the Montonen-Olive duality,\nwhich is the extension of the electric-magnetic duality for 4D $\\mathcal{N}=4$ theory,\n is obtained by compactifying the M5-branes on a torus \\cite{Vafa:1997mh, Tachikawa:2011ch}.\nIn the brane setup of Table~\\ref{config} this requires compactifying the $x^6$ direction, rather than $x^5$ as in our case. \n\n\n\nThe duality described here was originally found in the context of geometric engineering \\cite{Katz:1997eq}. On the IIB string theory side, the SW curve is embedded in the CY manifold and the duality can be seen in a similar way as in the M-theory analysis above. In the mirror IIA theory, on the other hand, the duality is most clearly seen from the toric diagram. Indeed, the toric diagram for the $SU(N)^{M-1}$ theory is exactly the same as the one for the $SU(M)^{N-1}$ theory, up to a simple reflection or 90 degree rotation. This duality is therefore manifest at the level of the topological string partition function. Depending on which sums are carried out explicitly in \\eqnref{PartitionFuncSum}, we obtain Nekrasov's partition function of the $SU(N)^{M-1}$ theory or the $SU(M)^{N-1}$ theory. The topological vertex formalism provides the extension of the duality to the non-zero self-dual $\\Omega$ background.\n\n\n\\section{M-theory derivation}\n\\label{sec:MtheoryDeriv}\n\nIn this section we present the first derivation of the duality map using the Seiberg-Witten formalism reviewed in Section \\ref{sec:SWReview}. Another, independent derivation based on the topological vertex formalism is given in Section \\ref{sec:TopStringDeriv}. The map between the gauge theory parameters of the 5D $\\mathcal{N}=1$ $SU(N)^{M-1}$ and $SU(M)^{N-1}$ linear quiver gauge theories compactified on $S^1$ is obtained by comparing their Seiberg-Witten curves. The SW curves are derived using the M-theory approach \\cite{Witten:1997sc}. We first warm up with the self-dual case of $SU(2)$ gauge theory with four flavors and then turn to the generic duality between $SU(N)^{M-1}$ and $SU(M)^{N-1}$. 
The special case ($M=2$) between $SU(N)$ and $SU(2)^{N-1}$ is given at the end of this section.\n\n\\subsection{$SU(2)$ gauge theory with four flavors}\n\\label{subsec:Msu2}\n\nWe begin by deriving the SW curve for the simplest case; the compactified 5D $SU(2)$ gauge theory with four flavors.\\footnote{\nAn alternative derivation of the SW curve is given in \\cite{Minahan:1997ch} using a different point of view that exploits the enhancement of the global symmetry to $E_5 = SO(10)$ \\cite{Seiberg:1996bd}.} \nThe brane setup is described in Table \\ref{config} together with Figure \\ref{su2branesetup} and includes $M=2$ NS5-branes with $N=2$ D4-branes suspended between them.\n\\begin{figure}[htb]\n\\centering\n\\tiny\n\\setlength{\\unitlength}{4cm}\n\\vspace{10mm}\n\\begin{picture}(1,1)(0,0)\n\\linethickness{0.25mm}\n\\scriptsize\n\\put(0.5,0){\\line(0,1){1}}\n\\put(0.44,1.15){NS5$_1$}\n\\put(0.9,0){\\line(0,1){1}}\n\\put(0.81,1.15){NS5$_{2}$}\n\\tiny\n\n\\put(0.1,0.2){\\line(1,0){0.4}}\n\\put(0.27,0.23){$m'_{1}$}\n\\put(0.1,0.7){\\line(1,0){0.4}}\n\\put(0.27,0.73){$m'_{2}$}\n\n\\scriptsize\n\\put(-0.2,0.18){D4$_1$}\n\\put(-0.2,0.68){D4$_2$}\n\\tiny\n\n\\put(0.5,0.25){\\line(1,0){0.4}}\n\\put(0.67,0.28){$a'{}_{1}$}\n\\put(0.5,0.75){\\line(1,0){0.4}}\n\\put(0.67,0.78){$a'{}_{2}$}\n\n\\put(0.9,0.15){\\line(1,0){0.4}}\n\\put(1.07,0.18){$m'_{3}$}\n\\put(0.9,0.8){\\line(1,0){0.4}}\n\\put(1.07,0.83){$m'_{4}$}\n\\end{picture}\n\\normalsize\n\\caption{Brane configuration for $SU(2)$ gauge theory.}\n\\label{su2branesetup}\n\\end{figure}\n\nThe asymptotic behavior of the NS5-branes is determined by the holomorphic extension of the equations of motion $\\nabla^2 s=0$, which minimizes its worldvolume. If the $x^5$ direction is not compactified the asymptotic behavior of an NS5-brane at large $|v|$ is given by \n\\begin{align}\n\\frac{s}{R_{10}} = \\sum_{i=1}^2 \\log (v-a'_i)\n- \\sum_{i=1}^{2} \\log (v-b'_i)\n+ {\\rm const} \\, ,\n\\end{align}\nwhere $a'_i$ and $b'_i$ are the classical positions on the $v$-plane of the D4-branes attached to the NS5-brane from the left and the right respectively. Compactifying the $x^5$ direction is equivalent to periodically attaching D4-branes on a non-compact $x^5$ coordinate. Firstly, we concentrate on the first NS5-brane. The two flavor D4-branes that are attached to it at $v=m'_1$ and $v=m'_2$ can be reinterpreted as infinitely many D4-branes attached at $v = m'_1 + 2 \\pi i R_5 n$ and $v=m'_2 + 2 \\pi i R_5 n$, with $n=\\cdots, -1,0,1,\\cdots$. Similarly, two color D4-branes are attached at $v = a'_1 + 2 \\pi i R_5 n$ and $v=a'_2 + 2 \\pi i R_5 n$ from the other side. 
The asymptotic behavior of the first NS5-brane is, therefore, given by\n\\begin{equation}\n\\begin{split}\n\\frac{s_{(1)}}{R_{10}} \n=& \\sum_{n=-\\infty}^{\\infty} \\left( \\log (v - m'_1 - 2 \\pi i R_5 n)\n + \\log(v - m'_2 - 2 \\pi i R_5 n) \\right) - \\\\\n& \\sum_{n=-\\infty}^{\\infty} \\left( \\log (v - a'_1 - 2 \\pi i R_5 n)\n + \\log(v - a'_2 - 2 \\pi i R_5 n) \\right)\n+ {\\rm const} \\, .\n\\end{split}\n\\nonumber\n\\end{equation}\nUsing the definitions of the periodic coordinates (\\ref{def_tw}) and gauge theory parameters (\\ref{tildema}) we can write the position of the first NS5-brane as\n\\begin{align}\nt_{(1)} = & \\, C \\, \n\\frac{\\sinh\\left( \\frac{v-a'_1}{2R_5} \\right)\\sinh\\left( \\frac{v-a'_2}{2R_5} \\right)}\n{\\sinh\\left( \\frac{v-m'_1}{2R_5} \\right)\\sinh\\left( \\frac{v-m'_2}{2R_5} \\right)} \\quad \\longrightarrow \\quad C \n\\left\\{\n\\begin{array}{l}\n\\sqrt{\n\\frac{ \\tilde{m}_1 \\, \\tilde{m}_2\n}{\n\\tilde{a}_1\\, \\tilde{a}_2\n}\n}\n\\quad (w \\to \\infty) \\\\\n\\sqrt{ \\frac{\n\\tilde{a}_1 \\, \\tilde{a}_2\n}\n{\n\\tilde{m}_1 \\, \\tilde{m}_2\n}\n}\n\\quad (w \\to 0)\n\\end{array}\n\\right. \\, ,\n\\label{asym_t1}\n\\end{align}\nwhere the expressions after the arrow are the asymptotic behaviors in the $w \\to \\infty$ and $w \\to 0$ regions.\nSimilarly, for the second NS5-brane we have\n\\begin{align}\nt_{(2)}\n= & C' \\frac{\\sinh\\left( \\frac{v-m'_3}{2R_5} \\right)\\sinh\\left( \\frac{v-m'_4}{2R_5} \\right)}\n{\\sinh\\left( \\frac{v-a'_1}{2R_5} \\right)\\sinh\\left( \\frac{v-a'_2}{2R_5} \\right)} \\quad \\longrightarrow \\quad C' \\left\\{\n\\begin{array}{l}\n\\sqrt{ \\frac{\n\\tilde{a}_1 \\, \\tilde{a}_2\n}\n{\n\\tilde{m}_3 \\, \\tilde{m}_4\n} }\n\\quad (w \\to \\infty) \\\\\n\\sqrt{ \\frac{\n\\tilde{m}_3 \\, \\tilde{m}_4\n}{\n\\tilde{a}_1 \\, \\tilde{a}_2\n}\n}\n\\quad (w \\to 0)\n\\end{array}\n\\right. \\, .\n\\end{align}\nFollowing \\cite{Witten:1997sc}, the distance between the two NS5-branes should give the 4D bare coupling constant $q \\equiv \\exp \\left( 2 \\pi i \\tau_{\\rm bare} \\right)$ in the limit $R_5 \\to \\infty$. However, since we are studying the compactified 5D case, we have\n\\begin{align}\n\\frac{t_{(2)}}{t_{(1)}}\n= & \\exp \\left( \\frac{s_{(1)}-s_{(2)}}{R_{10}} \\right) \n= \\frac{C'}{C} \n\\frac{\\prod_{\\mathfrak{i}=1}^4 \\sinh\\left( \\frac{v-m'_\\mathfrak{i}}{2R_5} \\right)}\n{\\sinh^2 \\left( \\frac{v-a'_1}{2R_5} \\right) \\sinh^2 \\left( \\frac{v-a'_2}{2R_5} \\right)} \\, ,\n\\label{1lp}\n\\end{align}\nand the asymptotic distance between the NS5-branes at $w \\to 0$ is different from the distance at $w \\to \\infty$ by a factor\\footnote{Note that the index $i=1,2$ counts the color, while the index $\\mathfrak{i}=1,\\dots,4$ counts the flavor.} $\\prod_\\mathfrak{i} \\tilde{m}_\\mathfrak{i} \\prod_i \\tilde{a}_i^{-2}$. Thus, relating the constants $C$ and $C'$ to the 4D gauge theory parameters is subtle. In the rest of this section, we assume that these constants do not depend on the radius $R_5$ and that\n\\begin{align}\n\\frac{C'}{C} = q\n\\label{CCq}\n\\end{align}\nis an exact relation for arbitrary $R_5$. 
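As a short intermediate step, which follows directly from (\\ref{asym_t1}) and the analogous asymptotics of $t_{(2)}$, the two limiting values of the ratio (\\ref{1lp}) are\n\\begin{align}\n\\left. \\frac{t_{(2)}}{t_{(1)}} \\right|_{w \\to \\infty}\n= \\frac{C'}{C} \\, \\frac{\\tilde{a}_1 \\tilde{a}_2}{\\left( \\tilde{m}_1 \\tilde{m}_2 \\tilde{m}_3 \\tilde{m}_4 \\right)^{1\/2}}\n\\quad \\text{and} \\quad\n\\left. \\frac{t_{(2)}}{t_{(1)}} \\right|_{w \\to 0}\n= \\frac{C'}{C} \\, \\frac{\\left( \\tilde{m}_1 \\tilde{m}_2 \\tilde{m}_3 \\tilde{m}_4 \\right)^{1\/2}}{\\tilde{a}_1 \\tilde{a}_2} \\, ,\n\\end{align}\nso that their geometric mean is exactly $C'\/C$; the identification (\\ref{CCq}) therefore equates $q$ with this mean value.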
\nThis assumption indicates that the bare coupling constant is identified as the average of the\n two asymptotic distances, which is one of the most natural possibilities.\nIndeed, as discussed in Section \\ref{subsec:topsu2}, this identification is justified by \ncomparing the topological string partition function with the Nekrasov partition function.\n\n\n\n\n\nWe continue by writing the 5D SW curve as a polynomial of degree two in $w$\n\\begin{align}\nq_1(t) w^2 + q_2(t) w + q_3(t) =0 \\, .\n\\label{WPolynomial}\n\\end{align}\nIn the $w \\to \\infty$ region, having two NS5-branes at $t=C \\left(\\frac{ \\tilde{m}_1 \\, \\tilde{m}_2 }{ \\tilde{a}_1\\, \\tilde{a}_2 }\\right)^{\\frac{1}{2}}$ \nand $t=Cq \\left( \\frac{ \\tilde{a}_1 \\, \\tilde{a}_2 }{ \\tilde{m}_3 \\, \\tilde{m}_4 } \\right)^{\\frac{1}{2}}$ leads to\n\\begin{align}\nq_1(t) = \\left(t-C \\left(\\frac{ \\tilde{m}_1 \\, \\tilde{m}_2 }{ \\tilde{a}_1\\, \\tilde{a}_2 }\\right)^{\\frac{1}{2}}\n\\right)\n\\left(\nt-Cq \n\\left( \\frac{ \\tilde{a}_1 \\, \\tilde{a}_2 }{ \\tilde{m}_3 \\, \\tilde{m}_4 } \\right)^{\\frac{1}{2}}\n\\right) \\, .\n\\end{align}\nSimilarly, in the $w \\to 0$ region, we have the two NS5-branes at $t=C \\left(\\frac{ \\tilde{a}_1\\, \\tilde{a}_2 }{ \\tilde{m}_1 \\, \\tilde{m}_2 }\\right)^{\\frac{1}{2}}$ and $t=Cq \\left( \\frac{ \\tilde{m}_3 \\, \\tilde{m}_4 }{ \\tilde{a}_1 \\, \\tilde{a}_2 } \\right)^{\\frac{1}{2}}$, so we obtain\n\\begin{align}\nq_3(t) = d' \\left(t-C \\left(\\frac{ \\tilde{a}_1\\, \\tilde{a}_2 }{ \\tilde{m}_1 \\, \\tilde{m}_2 }\\right)^{\\frac{1}{2}}\n\\right)\n\\left(\nt-Cq \n\\left( \\frac{ \\tilde{m}_3 \\, \\tilde{m}_4 }{ \\tilde{a}_1 \\, \\tilde{a}_2 } \\right)^{\\frac{1}{2}}\n\\right) \\, ,\n\\end{align}\nwhere $d'$ is a temporarily undetermined constant.\nIf we now write the 5D SW curve as a polynomial of degree two in $t$ and consider the asymptotic behavior of the flavor D4-branes, we can determine some more coefficients. In the $t \\to \\infty$ ($s \\to -\\infty$) region there are two flavor D4-branes at $w=\\tilde{m}_1$ and $w=\\tilde{m}_2$ and in the $t \\to 0$ ($s \\to \\infty$) region two flavor D4-branes at $w=\\tilde{m}_3$ and $w=\\tilde{m}_4$. These boundary conditions constrain the SW curve to be of the form\n\\begin{align}\n(w-\\tilde{m}_1)(w-\\tilde{m}_2) t^2 + P_2(w) t + d (w-\\tilde{m}_3)(w-\\tilde{m}_4) = 0 \\, ,\n\\label{TPolynomial}\n\\end{align}\nwhere $d$ is another undetermined constant that we will now fix.\nThe two forms (\\ref{WPolynomial}) and (\\ref{TPolynomial}) of the SW curve are simultaneously satisfied if we write\n\\footnotesize\n\\begin{equation}\n\\begin{split}\n& (w-\\tilde{m}_1)(w-\\tilde{m}_2) t^2 \\\\\n& - \\left( \\left[\nC \\left(\\frac{ \\tilde{m}_1 \\, \\tilde{m}_2 }{ \\tilde{a}_1\\, \\tilde{a}_2 }\\right)^{\\frac{1}{2}}\n+ Cq\n\\left( \\frac{ \\tilde{a}_1 \\, \\tilde{a}_2 }{ \\tilde{m}_3 \\, \\tilde{m}_4 } \\right)^{\\frac{1}{2}}\n\\right]w^2\n- b \\, w \n\\right. 
\\\\\n& \\quad \\left.\n+ \\, \\tilde{m}_1 \\, \\tilde{m}_2 \\left[\nC \\left(\\frac{ \\tilde{a}_1\\, \\tilde{a}_2 }{ \\tilde{m}_1 \\, \\tilde{m}_2 }\\right)^{\\frac{1}{2}}\n+ Cq\n\\left( \\frac{ \\tilde{m}_3 \\, \\tilde{m}_4 }{ \\tilde{a}_1 \\, \\tilde{a}_2 } \\right)^{\\frac{1}{2}}\n\\right]\n\\right) t \\\\\n& + C^2 q \\left( \\frac{ \\tilde{m}_1 \\, \\tilde{m}_2 }{ \\tilde{m}_3 \\, \\tilde{m}_4 } \\right)^{\\frac{1}{2}} (w-\\tilde{m}_3)(w-\\tilde{m}_4) \\, =0 \\, ,\n\\label{curveC}\n\\end{split}\n\\end{equation}\n\\normalsize\nor equivalently\n\\footnotesize\n\\begin{equation}\n\\begin{split}\n&\\left(t-C \\left(\\frac{ \\tilde{m}_1 \\, \\tilde{m}_2 }{ \\tilde{a}_1\\, \\tilde{a}_2 }\\right)^{\\frac{1}{2}}\n\\right)\n\\left(\nt-Cq\n\\left( \\frac{ \\tilde{a}_1 \\, \\tilde{a}_2 }{ \\tilde{m}_3 \\, \\tilde{m}_4 } \\right)^{\\frac{1}{2}}\n\\right) w^2 \\\\\n& + \\left( - \\left( \\tilde{m}_1 + \\tilde{m}_2 \\right) t^2\n+ b \\, t\n- \\, C^2q \\left( \\frac{ \\tilde{m}_1 \\, \\tilde{m}_2 }{ \\tilde{m}_3 \\, \\tilde{m}_4 } \\right)^{\\frac{1}{2}}\n\\left( \\tilde{m}_3 + \\tilde{m}_4 \\right)\n\\right) w \\\\\n& + \\,\n\\tilde{m}_1 \\, \\tilde{m}_2\n\\left(t-C \\left(\\frac{ \\tilde{a}_1\\, \\tilde{a}_2 }{ \\tilde{m}_1 \\, \\tilde{m}_2 }\\right)^{\\frac{1}{2}}\n\\right)\n\\left(\nt-Cq\n\\left( \\frac{ \\tilde{m}_3 \\, \\tilde{m}_4 }{ \\tilde{a}_1 \\, \\tilde{a}_2 } \\right)^{\\frac{1}{2}}\n\\right)\n\\, =0 \\, .\n\\label{curveCD}\n\\end{split}\n\\end{equation}\n\\normalsize\nWe have, thus, determined all the coefficients in the curve except for $b$, which is related to the Coulomb moduli parameter.\n\nA comment on the weak coupling limit ($q \\equiv C'\/C \\ll 1$) of the obtained curve is in order. In this limit, the curve (\\ref{curveC}) reduces to\n\\small\n\\begin{equation}\n\\begin{split}\n& (w-\\tilde{m}_1)(w-\\tilde{m}_2) t^2 \n- \\left( \nC \\left(\\frac{ \\tilde{m}_1 \\, \\tilde{m}_2 }{ \\tilde{a}_1\\, \\tilde{a}_2 }\\right)^{\\frac{1}{2}} w^2\n- b \\, w \n+ \\, \nC \\left( \\tilde{m}_1 \\, \\tilde{m}_2 \\tilde{a}_1\\, \\tilde{a}_2 \\right)^{\\frac{1}{2}}\n\\right) t \\\\\n& + C^2 q \\left( \\frac{ \\tilde{m}_1 \\, \\tilde{m}_2 }{ \\tilde{m}_3 \\, \\tilde{m}_4 } \\right)^{\\frac{1}{2}} (w-\\tilde{m}_3)(w-\\tilde{m}_4) = 0 \\, .\n\\label{reduced_C}\n\\end{split}\n\\end{equation}\n\\normalsize\nIf we choose $C=1$ and assume that $b=\\tilde{a}_1+\\tilde{a}_2$ with $\\tilde{a}_1 = \\tilde{a}_2^{-1}$ the curve (\\ref{reduced_C}) coincides with the one previously given in \\cite{Nekrasov:1996cz,Brandhuber:1997cc,Eguchi:2000fv}. However, we want to emphasize that this expression is valid only under the weak coupling approximation. \nMoreover, we want to briefly comment on the 4D limit ($R_5 \\to \\infty$) of our 5D curve (\\ref{curveC}). Details are provided in Appendix \\ref{app:4DLimit}. In the 4D limit the curve (\\ref{curveC}) reduces to the one obtained in \\cite{Eguchi:2009gf}. This is an additional check of our result.\n\n\\bigskip\n\nWe are now ready to derive the duality map that corresponds to the exchange of the coordinates\n\\begin{align}\nt_d = w \\, , \\qquad w_d = t \\, ,\n\\end{align}\nwhere $d$ stands for dual.\nWithout any loss of generality we pick $|\\tilde{m}_1| \\geq |\\tilde{m}_2|$, $|\\tilde{m}_3| \\geq |\\tilde{m}_4|$, $|(\\tilde{m}_1)_d| \\geq |(\\tilde{m}_2)_d|$ and $|(\\tilde{m}_3)_d| \\geq |(\\tilde{m}_4)_d|$. 
Then, simply by comparing the two expressions (\\ref{curveC}) and (\\ref{curveCD}) of the SW curve we obtain the duality transformation\n\\begin{equation}\n\\begin{split}\n(\\tilde{m}_1)_d = C \\left(\\frac{ \\tilde{m}_1 \\, \\tilde{m}_2 }{ \\tilde{a}_1 \\, \\tilde{a}_2 }\\right)^{\\frac{1}{2}} \\, ,&\n\\qquad\n(\\tilde{m}_2)_d = \\, C q\n\\left( \\frac{ \\tilde{a}_1 \\, \\tilde{a}_2 }{ \\tilde{m}_3 \\, \\tilde{m}_4 } \\right)^{\\frac{1}{2}}\n\\, , \n\\\\\n(\\tilde{m}_3)_d = C \\left(\\frac{ \\tilde{a}_1 \\, \\tilde{a}_2 }{ \\tilde{m}_1 \\, \\tilde{m}_2 }\\right)^{\\frac{1}{2}} \\, ,&\n\\qquad\n(\\tilde{m}_4)_d = \\, C q\n\\left( \\frac{ \\tilde{m}_3 \\, \\tilde{m}_4 }{ \\tilde{a}_1 \\, \\tilde{a}_2 } \\right)^{\\frac{1}{2}} \\, ,\n\\\\\nb_d = b \\, ,& \\qquad\nq_d = \\left( \\frac{\\tilde{m}_2\\tilde{m}_4}{\\tilde{m}_1 \\tilde{m}_3} \\right)^{\\frac{1}{2}} \\, ,\n\\\\\nC_d = \\tilde{m}_1^{\\frac{1}{2}} \\tilde{m}_3^{\\frac{1}{2}} \n\\, ,&\n\\qquad\n(\\tilde{a}_1)_d{} (\\tilde{a}_2)_d{} = \\, C^2 q \n\\left( \\frac{\\tilde{m}_2\\tilde{m}_3}{\\tilde{m}_1 \\tilde{m}_4} \\right)^{\\frac{1}{2}}\n\\, .\n\\label{Map_SU(2)_C}\n\\end{split}\n\\end{equation}\n\n\n\\smallskip\n\nSo far we have not specified the origin of $v$; a natural choice is to set it at the center of mass of the two color D4-branes, where\n\\begin{align}\na_1 = - a_2 \\quad \\Rightarrow \\quad \\tilde{a}_1 = \\tilde{a}_2^{-1} \\, .\n\\label{U(1)}\n\\end{align}\nSimilarly, we pick the origin of $s=v_d$ so that \n\\begin{align}\n(\\tilde{a}_1)_d = (\\tilde{a}_2)_d^{-1}\n\\label{U(1)D}\n\\end{align}\nis realized. This condition is satisfied when the constant $C$ is\n\\begin{align}\nC = \\left( \\frac{\\tilde{m}_1 \\tilde{m}_4}{\\tilde{m}_2\\tilde{m}_3} \\right)^{\\frac{1}{4}}\nq^{-\\frac{1}{2}} \n\\, = \\, (\\tilde{m}_1)_d^{\\frac{1}{2}} (\\tilde{m}_3)_d^{\\frac{1}{2}} \\, .\n\\end{align}\n\n\n\\begin{figure}[t]\n \\begin{center}\n \\includegraphics[width=6cm,clip]{cyclesSU2.eps}\n \\put(-210,-20){\\vector(0,1){190}}\n \\put(-220,170){\\vector(0,-1){190}}\n \\put(-210,-20){\\vector(1,0){240}}\n \\put(30,-30){\\vector(-1,0){240}}\n \\put(50,-25){$s$}\n \\put(50,-35){$t$}\n \\put(-223,180){$w$}\n \\put(-213,180){$v$}\n \\put(-200,50){$w=\\tilde{m}_1$}\n \\put(-200,120){$w=\\tilde{m}_2$}\n \\put(-5,50){$w=\\tilde{m}_3$}\n \\put(-5,120){$w=\\tilde{m}_4$}\n \\put(-153,170){\\footnotesize$t=\\sqrt[4]{ \\frac{ \\tilde{m}_4}{ \\tilde{m}_1 \\tilde{m}_2^3 \\tilde{m}_3 q^{2}} } $}\n \\put(-73,170){\\footnotesize$t=\\sqrt[4]{ \\frac{\\tilde{m}_1 \\tilde{m}_3 \\tilde{m}_4^3 q^{2}}{ \\tilde{m}_2 } } $}\n \\put(-150,-5){\\footnotesize$t=\\sqrt[4]{ \\frac{\\tilde{m}_1^3 \\tilde{m}_2 \\tilde{m}_4}{ \\tilde{m}_3 \\, q^{2}} } $}\n \\put(-73,-5){\\footnotesize$t=\\sqrt[4]{ \\frac{\\tilde{m}_1 \\, q^{2}}{ \\tilde{m}_2 \\tilde{m}_3^3 \\tilde{m}_4} } $}\n \\put(-90,30){\\small$A_1$}\n \\put(-90,100){\\small$A_2$}\n \\put(-150,30){\\small$M_1$}\n \\put(-150,100){\\small$M_2$}\n \\put(-25,30){\\small$M_3$}\n \\put(-25,100){\\small$M_4$}\n \\put(-32,145){\\small$T_4$}\n \\put(-102,145){\\small$T_3$}\n \\put(-32,20){\\small$T_2$}\n \\put(-102,20){\\small$T_1$}\n \\put(-37,80){\\footnotesize$\\left(A_2\\right)_d$}\n \\put(-107,80){\\footnotesize$\\left(A_1\\right)_d$}\n \\end{center}\n \\caption{The configuration of the M5-brane that leads to the 5D $SU(2)$ theory with four flavors.}\n \\label{web}\n\\end{figure}\n\n\nThe transformation rule $b_d=b$ contains the duality transformation of the Coulomb moduli implicitly. 
We could in principle calculate $b$ explicitly in terms of the Coulomb moduli, as in \\cite{Eguchi:2009gf}. However, for our purpose it is enough to consider the contour integrals of the SW one-form around the $A$ and the $A_d$ cycles. We depict all the cycles on the M5-brane in Figure \\ref{web}. For the four junctions in Figure \\ref{web} we find the following topological relations\n\\begin{equation}\n\\begin{split}\nA_1 - T_1 - M_1 + \\left(A_1\\right)_d = 0 \\, , \\qquad\n& - \\left(A_1\\right)_d - M_2 + T_3 + A_2 = 0 \\, , \\\\\n-A_2 + T_4 + M_4 - \\left(A_2\\right)_d = 0 \\, , \\qquad\n& \\left(A_2\\right)_d+ M_3 - T_2 - A_1 = 0\n\\end{split}\n\\label{cycle_cons}\n\\end{equation}\namong the cycles. In our conventions the integrals are positive when we go around $w = \\infty$ and $t=\\infty$ clockwise. The relations (\\ref{cycle_cons}) among the cycles can also be read as relations among cycle integrals by replacing $A \\to \\int_{A} \\lambda_{SW}$. Using the expression of the SW one-form given in (\\ref{SW_1-form})\n\\begin{align}\n\\lambda_{SW} \n= \\frac{i}{(2\\pi)^2 \\ell_p{}^3} v ds\n= \\frac{i R_5 R_{10}}{(2\\pi)^2 \\ell_p{}^3} \\log w \\frac{dt}{t} \\, ,\n\\end{align}\nwhere the factor $1\/(2\\pi)^2 \\ell_p{}^3$ is the tension of the M2-brane, we can calculate the cycle integrals. The integral around $M_1$ is obtained by considering the limit $t \\to \\infty$ and regarding the coordinate $w$ as a multivalued function of $t$. The curve around this region is approximately given by\n\\begin{equation}\n(w(t) - \\tilde{m}_1)(w(t) - \\tilde{m}_2) t^2 \\approx 0 \n\\qquad \\Rightarrow \\qquad \nw(t) = \\left\\{\n\\begin{array}{l}\n\\tilde{m}_1 + {\\cal O}(t^{-1}) \\\\\n\\tilde{m}_2 + {\\cal O}(t^{-1})\n\\end{array}\n\\right. \\, .\n\\end{equation}\nThe first branch contributes to the integral around the cycle $M_1$. The contour integral can be carried out as\n\\begin{equation}\n\\oint_{M_1} \\log w(t) \\frac{dt}{t} \n= - \\oint_{t=\\infty} \\left( \\log \\tilde{m}_1 + {\\cal O}(t^{-1}) \\right) \\frac{dt}{t} = 2 \\pi i \\log \\tilde{m}_1 \\, .\n\\end{equation}\nSimilarly, we obtain the rest of the integrals around the cycles $M_\\mathfrak{i}$ and $T_\\mathfrak{i}$\n\\begin{equation}\n\\begin{split}\n\\oint_{M_\\mathfrak{i}} \\lambda_{SW} &= - \\frac{R_5 R_{10}}{2 \\pi \\ell_p{}^3} \\log\\tilde{m}_\\mathfrak{i} = m_\\mathfrak{i} \\, ,\n\\\\\n\\oint_{T_\\mathfrak{i}} \\lambda_{SW} &= \\frac{R_5 R_{10}}{2 \\pi \\ell_p{}^3} \\log (\\tilde{m}_\\mathfrak{i})_d = \\left( m_\\mathfrak{i} \\right)_d \\, ,\n\\end{split}\n\\label{cint}\n\\end{equation}\nwhere $\\mathfrak{i}=1,\\dots,4$ is the flavor index. Moreover, the contour integrals around the cycles $A_i$ and $\\left(A_i\\right)_d$, by definition, give the Coulomb moduli:\n\\begin{equation}\n\\begin{split}\n\\oint_{A_1} \\lambda_{SW} \n&= - \\oint_{A_2} \\lambda_{SW}\n= - \\frac{R_5 R_{10}}{2 \\pi \\ell_p{}^3} \\log \\tilde{a} = a \\, ,\n\\\\\n\\oint_{\\left(A_1\\right)_d} \\lambda_{SW} \n&= - \\oint_{\\left(A_2\\right)_d} \\lambda_{SW} \n= \\frac{R_5 R_{10}}{2 \\pi \\ell_p{}^3} \\log \\tilde{a}_d = - a_d \\, ,\n\\end{split}\n\\label{aint}\n\\end{equation} \nwhere $i=1,2$ is the color index. The first equality in each line is ensured by (\\ref{cycle_cons}). 
The sign of $a_d$ in the second line is inverted because\n\\begin{align}\n\\label{equalSWoneform}\n(\\lambda_{SW})_d \n= \\frac{i (R_5)_d (R_{10})_d}{(2\\pi)^2 \\ell_p{}^3} \\log (w)_d \\frac{d(t)_d}{(t)_d}\n= \\frac{i R_{10} R_5}{(2\\pi)^2 \\ell_p{}^3} \\log t \\frac{dw}{w}\n= - \\lambda_{SW} \\, .\n\\end{align}\nThe four conditions in (\\ref{cycle_cons}) consistently lead to the duality relation\n\\begin{align}\n\\tilde{a}_d = \\left( \\frac{\\tilde{m}_2 \\tilde{m}_4}\n{\\tilde{m}_1\\tilde{m}_3} \\right)^{\\frac{1}{4}} q^{-\\frac{1}{2}} \\tilde{a} \\, .\n\\label{dual_a}\n\\end{align}\nThe positions $a_1$ and $a_2$ of the color D4-branes were originally defined ``classically'' in the D4\/NS5 brane setup and are not necessarily equal to $a$ defined by the cycle integral (\\ref{aint}). However, the first equality of each line in (\\ref{aint}) indicates that the classical conditions (\\ref{U(1)}) and (\\ref{U(1)D}) are satisfied even when we include the quantum effects. \n\nSummarizing, the duality map for the self-dual $SU(2)$ case is\n\\begin{equation}\n\\begin{split}\n\\label{map_reflection_su2}\n(\\tilde{m}_1)_d \n= \\tilde{m}_1^{\\frac{3}{4}} \\tilde{m}_2^{\\frac{1}{4}}\n\\tilde{m}_3^{-\\frac{1}{4}} \\tilde{m}_4^{\\frac{1}{4}} q^{-\\frac{1}{2}} \\, ,&\n\\qquad\n(\\tilde{m}_2)_d \n= \\tilde{m}_1^{\\frac{1}{4}} \\tilde{m}_2^{-\\frac{1}{4}}\n\\tilde{m}_3^{-\\frac{3}{4}} \\tilde{m}_4^{-\\frac{1}{4}} q^{\\frac{1}{2}} \\, ,\n\\\\\n(\\tilde{m}_3)_d \n= \\tilde{m}_1^{-\\frac{1}{4}} \\tilde{m}_2^{-\\frac{3}{4}}\n\\tilde{m}_3^{-\\frac{1}{4}} \\tilde{m}_4^{\\frac{1}{4}} q^{-\\frac{1}{2}} \\, ,&\n\\qquad\n(\\tilde{m}_4)_d \n= \\tilde{m}_1^{\\frac{1}{4}} \\tilde{m}_2^{-\\frac{1}{4}}\n\\tilde{m}_3^{\\frac{1}{4}} \\tilde{m}_4^{\\frac{3}{4}} q^{\\frac{1}{2}} \\, ,\n\\\\\nq_d = \\left( \\frac{\\tilde{m}_2\\tilde{m}_4}{\\tilde{m}_1 \\tilde{m}_3} \\right)^{\\frac{1}{2}} \\, ,&\n\\qquad\n\\tilde{a}_d = \\left( \\frac{\\tilde{m}_2 \\tilde{m}_4}\n{\\tilde{m}_1\\tilde{m}_3} \\right)^{\\frac{1}{4}} q^{-\\frac{1}{2}} \\tilde{a} \\,\n\\end{split}\n\\end{equation}\nand can, alternatively, be reorganized as\n\\begin{equation}\n\\begin{split}\n(\\tilde{m}_1)_d (\\tilde{m}_2)_d (\\tilde{m}_3)_d (\\tilde{m}_4)_d \n= \\frac{\\tilde{m}_1 \\tilde{m}_4}{\\tilde{m}_2 \\tilde{m}_3} \\, ,&\n\\qquad\n\\frac{(\\tilde{m}_1)_d (\\tilde{m}_4)_d}{(\\tilde{m}_2)_d (\\tilde{m}_3)_d}\n= \\tilde{m}_1 \\tilde{m}_2 \\tilde{m}_3 \\tilde{m}_4 \\, ,\n\\\\\n\\frac{(\\tilde{m}_2)_d (\\tilde{m}_4)_d}{(\\tilde{m}_1)_d (\\tilde{m}_3)_d}\n= q^2 \\, ,&\n\\qquad\n{q_d}^2\n= \\frac{\\tilde{m}_2 \\tilde{m}_4}{\\tilde{m}_1 \\tilde{m}_3} \\, ,\n\\\\\n\\frac{(\\tilde{m}_1)_d (\\tilde{m}_2)_d}{(\\tilde{m}_3)_d (\\tilde{m}_4)_d}\n= \\frac{\\tilde{m}_1 \\tilde{m}_2}{\\tilde{m}_3 \\tilde{m}_4} \\, ,&\n\\qquad\nq_d^{-\\frac{1}{2}} \\tilde{a}_d = q^{-\\frac{1}{2}} \\tilde{a} \\, .\n\\end{split}\n\\label{Map_5DC}\n\\end{equation}\n\n\n\n\n\n\n\n\\bigskip\n\n\n\\subsection{$SU(N)^{M-1} \\leftrightarrow SU(M)^{N-1}$ duality}\n\n\nThe extension of the analysis in the previous subsection to the generic linear quiver gauge theory is straightforward. 
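Before proceeding, we note a quick consistency check on the $SU(2)$ map just obtained: the reorganized relations (\\ref{Map_5DC}) follow from (\\ref{map_reflection_su2}) by elementary power counting. This can also be verified symbolically; the following minimal SymPy sketch (a purely illustrative aid, with variable names chosen by us and not part of the derivation) checks the relations, each rearranged so that no quotients appear:\n\\begin{verbatim}\nimport sympy as sp\n\nm1, m2, m3, m4, q, a = sp.symbols('m1 m2 m3 m4 q a', positive=True)\nr = sp.Rational\n\n# dual parameters of the SU(2) duality map, written with rational exponents\nm1d = m1**r(3,4) * m2**r(1,4) * m3**r(-1,4) * m4**r(1,4) * q**r(-1,2)\nm2d = m1**r(1,4) * m2**r(-1,4) * m3**r(-3,4) * m4**r(-1,4) * q**r(1,2)\nm3d = m1**r(-1,4) * m2**r(-3,4) * m3**r(-1,4) * m4**r(1,4) * q**r(-1,2)\nm4d = m1**r(1,4) * m2**r(-1,4) * m3**r(1,4) * m4**r(3,4) * q**r(1,2)\nqd = (m2*m4)**r(1,2) * (m1*m3)**r(-1,2)\nad = (m2*m4)**r(1,4) * (m1*m3)**r(-1,4) * q**r(-1,2) * a\n\n# the reorganized relations, each rewritten without quotients\nassert sp.simplify(m1d*m2d*m3d*m4d*m2*m3 - m1*m4) == 0\nassert sp.simplify(m1d*m4d - m1*m2*m3*m4*m2d*m3d) == 0\nassert sp.simplify(m2d*m4d - q**2*m1d*m3d) == 0\nassert sp.simplify(m1d*m2d*m3*m4 - m1*m2*m3d*m4d) == 0\nassert sp.simplify(ad*q**r(1,2) - a*qd**r(1,2)) == 0\n\\end{verbatim}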
The asymptotics of the NS5-branes and the D4-branes constrain the form of the SW curve of $SU(N)^{M-1}$ gauge theory to\n\\begin{align}\n\\prod^{N}_{\\alpha=1} (w-\\tilde{m}_\\alpha) t^M + \\cdots + d \\prod^{2N}_{\\alpha=N+1} (w-\\tilde{m}_\\alpha)\n=0\n\\label{NMw}\n\\end{align}\nand\n\\footnotesize\n\\begin{align}\n\\prod^{M}_{i=1}\\left[t-C_{(i)}\\left( \n\\frac{ \\prod_{\\alpha=1}^N \\tilde{a}_\\alpha^{(i-1)} }{ \\prod_{\\beta=1}^N \\tilde{a}^{(i)}_\\beta}\n\\right)^{1\/2}\n\\right] w^N + \\cdots\n+ d' \\prod^{M}_{i=1}\\left[ t-C_{(i)}\\left(\n\\frac{ \\prod_{\\alpha=1}^N \\tilde{a}^{(i)}_\\alpha}{ \\prod_{\\beta=1}^N \\tilde{a}_\\beta^{(i-1)} }\n\\right)^{1\/2}\n\\right]\n=0 \\, ,\n\\label{NMt}\n\\end{align}\n\\normalsize\nwhere we have defined\n\\begin{align}\n\\label{mass-a}\n\\tilde{a}_{\\alpha}^{(0)} \\equiv \\tilde{m}_\\alpha\n\\quad \\text{and} \\quad\n\\tilde{a}_{\\alpha}^{(M)} \\equiv \\tilde{m}_{N+\\alpha} \\, .\n\\end{align}\nThe index $\\alpha=1,\\dots,N$ is used to count colors inside each single $SU(N)$ factor, whereas the index $i=1,\\dots,M$ counts hypermultiplets along the quiver gauge group. This notation is further clarified in Figure \\ref{branesetup}.\n\\begin{figure}[t]\n\\centering\n\\tiny\n\\setlength{\\unitlength}{4cm}\n\\vspace{10mm}\n\\begin{picture}(1,1)(0,0)\n\\linethickness{0.25mm}\n\\scriptsize\n\\put(-0.3,0){\\line(0,1){1}}\n\\put(-0.36,1.15){NS5$_1$}\n\\put(0.1,0){\\line(0,1){1}}\n\\put(0.06,1.15){NS5$_2$}\n\\put(0.5,0){\\line(0,1){1}}\n\\put(0.44,1.15){NS5$_i$}\n\\put(0.9,0){\\line(0,1){1}}\n\\put(0.81,1.15){NS5$_{i+1}$}\n\\put(1.3,0){\\line(0,1){1}}\n\\put(1.19,1.15){NS5$_{M-1}$}\n\\put(1.7,0){\\line(0,1){1}}\n\\put(1.63,1.15){NS5$_M$}\n\\tiny\n\n\\put(-0.7,0.1){\\line(1,0){0.4}}\n\\put(-0.65,0.13){$m'_{1} = a'{}_{1}^{(0)}$}\n\\put(-0.7,0.25){\\line(1,0){0.4}}\n\\put(-0.65,0.28){$m'_{2} = a'{}_{2}^{(0)}$}\n\\put(-0.5,0.45){\\circle*{0.01}}\n\\put(-0.5,0.55){\\circle*{0.01}}\n\\put(-0.5,0.65){\\circle*{0.01}}\n\\put(-0.7,0.75){\\line(1,0){0.4}}\n\\put(-0.78,0.78){$m'_{N-1} = a'{}_{N-1}^{(0)}$}\n\\put(-0.7,0.9){\\line(1,0){0.4}}\n\\put(-0.66,0.93){$m'_{N} = 
a'{}_{N}^{(0)}$}\n\n\\scriptsize\n\\put(-1.1,0.08){D4$_1$}\n\\put(-1.1,0.23){D4$_2$}\n\\put(-1.1,0.73){D4$_{N-1}$}\n\\put(-1.1,0.88){D4$_{N}$}\n\\tiny\n\n\\put(-0.3,0.15){\\line(1,0){0.4}}\n\\put(-0.17,0.18){$a'{}_{1}^{(1)}$}\n\\put(-0.3,0.3){\\line(1,0){0.4}}\n\\put(-0.17,0.33){$a'{}_{2}^{(1)}$}\n\\put(-0.1,0.47){\\circle*{0.01}}\n\\put(-0.1,0.53){\\circle*{0.01}}\n\\put(-0.1,0.59){\\circle*{0.01}}\n\\put(-0.3,0.7){\\line(1,0){0.4}}\n\\put(-0.19,0.73){$a'{}_{N-1}^{(1)}$}\n\\put(-0.3,0.85){\\line(1,0){0.4}}\n\\put(-0.17,0.88){$a'{}_{N}^{(1)}$}\n\n\\put(0.22,0.5){\\circle*{0.01}}\n\\put(0.3,0.5){\\circle*{0.01}}\n\\put(0.38,0.5){\\circle*{0.01}}\n\n\\put(0.5,0.1){\\line(1,0){0.4}}\n\\put(0.63,0.13){$a'{}_{1}^{(i)}$}\n\\put(0.7,0.27){\\circle*{0.01}}\n\\put(0.7,0.34){\\circle*{0.01}}\n\\put(0.7,0.41){\\circle*{0.01}}\n\\put(0.5,0.5){\\line(1,0){0.4}}\n\\put(0.63,0.53){$a'{}_{\\alpha}^{(i)}$}\n\\put(0.7,0.67){\\circle*{0.01}}\n\\put(0.7,0.74){\\circle*{0.01}}\n\\put(0.7,0.81){\\circle*{0.01}}\n\\put(0.5,0.9){\\line(1,0){0.4}}\n\\put(0.63,0.93){$a'{}_{N}^{(i)}$}\n\n\\put(1.02,0.5){\\circle*{0.01}}\n\\put(1.1,0.5){\\circle*{0.01}}\n\\put(1.18,0.5){\\circle*{0.01}}\n\n\\put(1.3,0.1){\\line(1,0){0.4}}\n\\put(1.39,0.13){$a'{}_{1}^{(M-1)}$}\n\\put(1.3,0.3){\\line(1,0){0.4}}\n\\put(1.39,0.33){$a'{}_{2}^{(M-1)}$}\n\\put(1.5,0.48){\\circle*{0.01}}\n\\put(1.5,0.54){\\circle*{0.01}}\n\\put(1.5,0.6){\\circle*{0.01}}\n\\put(1.3,0.7){\\line(1,0){0.4}}\n\\put(1.39,0.73){$a'{}_{N-1}^{(M-1)}$}\n\\put(1.3,0.9){\\line(1,0){0.4}}\n\\put(1.39,0.93){$a'{}_{N}^{(M-1)}$}\n\n\\put(1.7,0.05){\\line(1,0){0.4}}\n\\put(1.73,0.08){$m'_{N+1} = a'{}_{1}^{(M)}$}\n\\put(1.7,0.2){\\line(1,0){0.4}}\n\\put(1.73,0.23){$m'_{N+2} = a'{}_{2}^{(M)}$}\n\\put(1.9,0.41){\\circle*{0.01}}\n\\put(1.9,0.51){\\circle*{0.01}}\n\\put(1.9,0.61){\\circle*{0.01}}\n\\put(1.7,0.75){\\line(1,0){0.4}}\n\\put(1.73,0.78){$m'_{2N-1} = a'{}_{N-1}^{(M)}$}\n\\put(1.7,0.87){\\line(1,0){0.4}}\n\\put(1.73,0.9){$m'_{2N} = a'{}_{N}^{(M)}$}\n\\end{picture}\n\\vspace{5mm}\n\\normalsize\n\\caption{Brane setup for $SU(N)^{M-1}$ gauge theory, with vertical lines being D4-branes and horizontal ones being NS5-branes.\nWithout the loss of generality, we assume that $|\\tilde{m}_1| \\ge |\\tilde{m}_2| \\ge \\cdots \\ge |\\tilde{m}_N|$ and $|\\tilde{m}_{N+1}| \\ge |\\tilde{m}_{N+2}| \\ge \\cdots \\ge |\\tilde{m}_{2N}|$.\n}\n\\label{branesetup}\n\\end{figure}\nSimilarly, the curve for $SU(M)^{N-1}$ can be written in two forms:\n\\begin{align}\n\\prod^{M}_{i=1} (w-\\tilde{m}_i) t^N + \\cdots\n+ D \\prod^{2M}_{i=M+1} (w-\\tilde{m}_i)\n= 0\n\\label{MNw}\n\\end{align}\nand\n\\footnotesize\n\\begin{align}\n\\prod^{N}_{\\alpha=1}\\left[t - C_{(\\alpha)}\\left(\n\\frac{ \\prod_{i=1}^M \\tilde{a}_i^{(\\alpha-1)} }{ \\prod_{j=1}^M \\tilde{a}^{(\\alpha)}_j}\n\\right)^{1\/2}\n\\right] w^M + \\cdots\n+ D' \\prod^{N}_{\\alpha=1}\\left[t-C_{(\\alpha)}\\left( \n\\frac{ \\prod_{i=1}^M \\tilde{a}^{(\\alpha)}_i}{ \\prod_{j=1}^M \\tilde{a}_j^{(\\alpha-1)} }\n\\right)^{1\/2}\n\\right] = 0 \\, ,\n\\label{MNt}\n\\end{align}\n\\normalsize\nwhere, now, the index $i=1,\\dots,M$ is used to count colors inside a single $SU(M)$ factor and the index $\\alpha=1,\\dots,N$ counts hypermultiplets along the product gauge group. 
\nAlso, we define\n\\begin{align}\n\\tilde{a}_{i}^{(0)} \\equiv \\tilde{m}_i\n\\quad \\text{and} \\quad\n\\tilde{a}_{i}^{(N)} \\equiv \\tilde{m}_{M+i} \\, .\n\\end{align}\nAs in the previous subsection, we now have to express the constants $C_{(i)}$ of the $SU(N)^{M-1}$ SW curve in terms of the gauge coupling constants $q^{(i)}$. The educated assumption \n\\begin{align}\nq^{(i)} = \\frac{C_{(i+1)}}{C_{(i)}} \n\\quad \\Rightarrow \\quad\nC_{(i)}= C \\prod_{k=1}^{i-1} q^{(k)} \\, ,\n\\label{CK}\n\\end{align}\nturns out to be the correct one, where $C \\equiv C_{(1)}$ is some common constant that corresponds to the ambiguity of the rescaling of the coordinate $t$. The same relation (\\ref{CK}) holds for the constants $C_{(\\alpha)}$ of the $SU(M)^{N-1}$ SW curve in terms of the gauge coupling constants $q^{(\\alpha)}$.\nWe are now ready to derive the duality map for the exchange $w \\leftrightarrow t$. By comparing the SW curves (\\ref{NMw}) and (\\ref{MNt}) we obtain\\footnote{We use the notation $(A B)_d\\equiv A_d B_d$.}\n\\small\n\\begin{align}\n\\tilde{m}_\\alpha\n= \\left( C_{(\\alpha)}\\left( \n\\frac{ \\prod_{i=1}^M \\tilde{a}_i^{(\\alpha-1)} }{ \\prod_{j=1}^M \\tilde{a}^{(\\alpha)}_j}\n\\right)^{1\/2} \\right)_d \\, ,\n\\quad\n\\tilde{m}_{N+\\alpha}\n= \\left( C_{(\\alpha)}\\left( \n\\frac{ \\prod_{i=1}^M \\tilde{a}^{(\\alpha)}_i}{ \\prod_{j=1}^M \\tilde{a}_j^{(\\alpha-1)} }\n\\right)^{1\/2} \\right)_d \\, .\n\\label{comp1}\n\\end{align}\n\\normalsize\nFurthermore, by comparing (\\ref{NMt}) with (\\ref{MNw}), we obtain\n\\small\n\\begin{align}\nC_{(i)}\\left(\n\\frac{ \\prod_{\\alpha=1}^N \\tilde{a}_\\alpha^{(i-1)} }{ \\prod_{\\beta=1}^N\\tilde{a}^{(i)}_\\beta}\n\\right)^{1\/2}\n= ( \\tilde{m}_i )_d \\, ,\n\\quad\nC_{(i)}\\left( \n\\frac{ \\prod_{\\alpha=1}^N \\tilde{a}^{(i)}_\\alpha}{ \\prod_{\\beta=1}^N \\tilde{a}_\\beta^{(i-1)} }\n\\right)^{1\/2}\n= ( \\tilde{m}_{M+i} )_d \\, .\n\\label{comp2}\n\\end{align}\n\\normalsize\n\\begin{figure}[t]\n \\begin{center}\n \\includegraphics[width=5cm,clip]{cycles_generic.eps}\n \\put(-87,-5){\\footnotesize$ \\left( A^{(\\alpha-1)}_i \\right)_d$}\n \\put(-11,70){\\small $A^{(i)}_\\alpha$}\n \\put(-165,70){ \\small$A^{(i-1)}_\\alpha$}\n \\put(-82,135){\\footnotesize $\\left( A^{(\\alpha)}_i \\right)_d$}\n \\put(-200,-20){\\vector(0,1){160}}\n \\put(-200,-20){\\vector(1,0){240}}\n \\put(50,-25){$s$}\n \\put(-215,140){$v$}\n \\end{center}\n \\caption{In this figure a ``junction'' of M5-branes is depicted. From this we read off the rule for the ``conservation'' of the cycle integrals. 
}\n\\end{figure}\n\nAgain, as in the previous subsection, we have to impose the ``conservation'' of the cycle integrals,\n\\begin{align}\n\\left( A^{(\\alpha)}_i \\right)_d \n- A^{(i)}_\\alpha\n- \\left( A^{(\\alpha-1)}_i \\right)_d\n+ A^{(i-1)}_\\alpha = 0 \\, ,\n\\label{cycle_cons_gen}\n\\end{align}\nwhich leads to the map\n\\begin{align}\n\\label{aa_aa}\n\\frac{ (\\tilde{a}^{(\\alpha)}_i )_d}{(\\tilde{a}^{(\\alpha-1)}_i )_d}\n= \\frac{\\tilde{a}^{(i)}_\\alpha}{\\tilde{a}^{(i-1)}_\\alpha} \\, .\n\\end{align}\nCombining the equations above we get\n\\begin{equation}\n\\begin{split}\n\\left( \\tilde{a}^{(\\alpha)}_{i} \\right)_d\n&= C \\left( \n\\frac{\\prod_{\\gamma=1}^\\alpha \\tilde{a}^{(i)}_\\gamma \\prod_{\\delta=\\alpha+1}^N \\tilde{a}^{(i-1)}_\\delta}\n{ \\prod_{\\gamma=1}^\\alpha \\tilde{a}_\\gamma^{(i-1)} \\prod_{\\delta=\\alpha+1}^N \\tilde{a}_\\delta^{(i)} }\\right)^{1\/2} \\prod_{k=1}^{i-1} q^{(k)} \\, , \\\\\n\\left( q^{(\\alpha)} \\right)_d\n&= \\left( \\frac{\\tilde{m}_{\\alpha+1} \\tilde{m}_{N+\\alpha+1}}{\\tilde{m}_{\\alpha} \\tilde{m}_{N+\\alpha}} \\right)^{1\/2} \\, .\n\\end{split}\n\\label{gen_map}\n\\end{equation}\nThe duality map as derived above still includes one unknown coefficient $C$ that reflects the freedom to rescale the coordinate $t$. Moreover, the Coulomb moduli parameters are defined up to the choice of the origin of $v$.\nA natural way to fix both is to impose \n\\begin{align}\n\\prod_{\\alpha=1}^N \\tilde{a}^{(0)}_\\alpha = 1\n\\quad \\text{and} \\quad\n\\prod_{i=1}^M (\\tilde{a}^{(0)}_i)_d = 1 \\, .\n\\label{prod_a}\n\\end{align}\nThe latter condition determines the constant $C$ to be\n\\begin{align}\nC = \n{\\prod_{\\alpha=1}^N \\left( \\tilde{a}^{(M)}_{\\alpha} \\right)^{\\frac{1}{2M}}}\n\\prod_{i=1}^{M-1} \\left( q^{(i)} \\right)^{- \\frac{M-i}{M}} \\, .\n\\label{Def_C}\n\\end{align}\nAt this point we have to stress that the constant $C$ depends on the choice of the origin (\\ref{prod_a}) for the coordinates $v$ and $s$. Naively, one may think that the duality map depends on this choice. However, in terms of the {\\it physical} gauge theory parameters the map is independent of this choice. A detailed description of the physical gauge theory parameters is given in Appendix \\ref{phys-par}.\n\n\n\nIn terms of the physical gauge theory parameters the duality map is\n\\footnotesize\n\\begin{align}\n& \\left( \\hat{a}_{i}^{(\\alpha)} \\right)_d\n= \\left( \\tilde{m}_{\\text{bif}}^{(i-1,i)} \\right)^{\\alpha-\\frac{N}{2}}\n\\prod_{\\gamma=1}^\\alpha \n\\left( \n\\frac{\\hat{a}_{\\gamma}^{(i)} }{\\hat{a}_{\\gamma}^{(i-1)} } \\right)\n\\left(\n\\frac{\\hat{a}_\\gamma^{(0)}}{\\hat{a}_\\gamma^{(M)}}\n\\right) ^{\\frac{1}{M}}\n \\prod_{k=1}^{M} \n\\left( \\tilde{m}_{\\text{bif}}^{(k-1,k)} \\right)^{\\frac{N-2\\alpha}{2M}}\n\\prod_{\\ell=1}^{i-1} \\left( q^{(\\ell)} \\right)^{\\frac{i}{M}}\n\\prod_{\\ell=i}^{M-1} \\left( q^{(\\ell)} \\right) ^{- \\frac{M-\\ell}{M}} \\, , \\nonumber\n\\\\\n&\\left( \\tilde{m}^{(\\alpha-1,\\alpha)}_{\\text{bif}} \\right)_d\n= \\left(\n\\frac{\\hat{a}_\\alpha^{(M)}}{\\hat{a}_\\alpha^{(0)}} \n\\prod_{k=1}^{M} \\tilde{m}_{\\text{bif}}^{(k-1,k)}\n\\right)^{\\frac{1}{M}} \\, ,\n\\\\\n& \\left( q^{(\\alpha)} \\right)_d\n= \\left( \\frac{\\hat{a}^{(0)}_{\\alpha+1} \\hat{a}^{(M)}_{\\alpha+1}}\n{\\hat{a}^{(0)}_{\\alpha} \\hat{a}^{(M)}_{\\alpha}} \\right)^{1\/2} \\, . \\nonumber\n\\end{align}\n\\normalsize\n\n\n\\subsubsection*{$SU(M)$ $\\leftrightarrow$ $SU(2)^{M-1}$ case}\n\nBefore ending this section we wish to consider the special case with $N=2$. 
This duality between $SU(M)$ and $SU(2)^{M-1}$ gauge theories is of particular interest due to its implications in 2D CFTs. Through the AGTW conjecture this duality relates four-point correlation functions of $q$-deformed $W_M$ Toda theories to $(M+2)$-point correlation functions of $q$-deformed Liouville theories. This topic will be discussed in Section \\ref{sec:GaugeToCFT}. For now we just give the map\n\\normalsize\n\\begin{equation}\n\\begin{split}\n&(\\tilde{m}_{1}^{\\text{f}})_d \n= \\left( \\frac{(\\tilde{m}^{\\text{f}}_1)^{1+M} \\tilde{m}^{\\text{f}}_3}\n{(\\tilde{m}^{\\text{f}}_2)^{1-M} \\tilde{m}^{\\text{f}}_4} \\right) ^{\\frac{1}{2M}} \n\\prod_{k=1}^{M-1} \\left( q^{(k)} \\right)^{\\frac{M-k}{M}} \\, ,\n\\\\\n&(\\tilde{m}_{i}^{\\text{f}})_d \n= \\left( \\frac{\\tilde{m}^{\\text{f}}_1 \\tilde{m}^{\\text{f}}_3}\n{\\tilde{m}^{\\text{f}}_2 \\tilde{m}^{\\text{f}}_4} \\right) ^{\\frac{1}{2M}} \n\\tilde{m}_{ \\text{bif} }^{ (i-1,i)} \n\\prod_{k=1}^{i-1} \\left( q^{(k)} \\right)^{-\\frac{k}{M}}\n\\prod_{k=i}^{M-1} \\left( q^{(k)} \\right)^{\\frac{M-k}{M}} \\, ,\n\\\\\n&(\\tilde{m}_{M}^{\\text{f}})_d \n= \\left( \\frac{\\tilde{m}^{\\text{f}}_1 (\\tilde{m}^{\\text{f}}_3)^{1+M}}\n{\\tilde{m}^{\\text{f}}_2 (\\tilde{m}^{\\text{f}}_4)^{1-M} } \\right) ^{\\frac{1}{2M}} \n\\prod_{k=1}^{M-1} \\left( q^{(k)} \\right)^{-\\frac{k}{M}} \\, ,\n\\\\\n&(\\tilde{m}_{M+1}^{\\text{f}})_d \n= \\left( \\frac{(\\tilde{m}_2^{\\text{f}})^{1+M} \\tilde{m}_4^{\\text{f}}}\n{(\\tilde{m}_1^{\\text{f}})^{1-M} \\tilde{m}_3^{\\text{f}}}\n\\right) ^{\\frac{1}{2M}}\n\\prod_{k=1}^{M-1} \\left( q^{(k)} \\right)^{-\\frac{M-k}{M}} \\, ,\n\\\\\n&(\\tilde{m}_{M+i}^{\\text{f}})_d \n= \\left( \\frac{\\tilde{m}_2^{\\text{f}} \\tilde{m}_4^{\\text{f}}}\n{\\tilde{m}_1^{\\text{f}} \\tilde{m}_3^{\\text{f}}}\n\\right) ^{\\frac{1}{2M}}\n\\tilde{m}_{ \\text{bif} }^{ (i-1,i)} \n\\prod_{k=1}^{i-1} \\left( q^{(k)} \\right)^{\\frac{k}{M}}\n\\prod_{k=i}^{M-1} \\left( q^{(k)} \\right)^{-\\frac{M-k}{M}} \\, ,\n\\\\\n&(\\tilde{m}_{2M}^{\\text{f}})_d \n= \\left( \\frac{\\tilde{m}_2^{\\text{f}} (\\tilde{m}_4^{\\text{f}})^{1+M}}\n{\\tilde{m}_1^{\\text{f}} (\\tilde{m}_3^{\\text{f}})^{1-M}}\n\\right) ^{\\frac{1}{2M}}\n\\prod_{k=1}^{M-1} \\left( q^{(k)} \\right)^{\\frac{k}{M}} \\, ,\n\\end{split}\n\\end{equation}\n\\normalsize\nand\n\\normalsize\n\\begin{equation}\n\\begin{split}\n&\\left( \\tilde{a}_{1}^{\\text{f}} \\right)_d\n= \\tilde{a}^{(1)}_{\\text{f}} \n\\left( \n\\frac{(\\tilde{m}_2^{\\text{f}})^{1-M} \\tilde{m}_4^{\\text{f}}}\n{(\\tilde{m}_1^{\\text{f}})^{1-M} \\tilde{m}_3^{\\text{f}}}\n\\right) ^{\\frac{1}{2M}}\n\\prod_{k=1}^{M-1} \\left( q^{(k)} \\right)^{-\\frac{M-k}{M}} \\, ,\n\\\\\n&\\left( \\tilde{a}_{i}^{\\text{f}} \\right)_d\n= \\frac{\\tilde{a}^{(i)}_{\\text{f}} }{\\tilde{a}^{(i-1)}_{\\text{f}} }\n\\left( \\frac{\\tilde{m}_2^{\\text{f}} \\tilde{m}_4^{\\text{f}}}\n{\\tilde{m}_1^{\\text{f}} \\tilde{m}_3^{\\text{f}}}\n\\right) ^{\\frac{1}{2M}}\n\\prod_{k=1}^{i-1} \\left( q^{(k)} \\right)^{\\frac{k}{M}}\n\\prod_{k=i}^{M-1} \\left( q^{(k)} \\right)^{-\\frac{M-k}{M}} \\, ,\n\\\\\n&\\left( \\tilde{a}_{M}^{\\text{f}} \\right)_d\n= \\frac{1}{\\tilde{a}^{(M-1)}_{\\text{f}} }\n\\left( \\frac{\\tilde{m}_2^{\\text{f}} \\tilde{m}_4^{\\text{f}}}\n{\\tilde{m}_1^{\\text{f}} \\tilde{m}_3^{\\text{f}}}\n\\right) ^{\\frac{1}{2M}}\n\\prod_{k=1}^{M-1} \\left( q^{(k)} \\right)^{\\frac{k}{M}} \\, ,\n\\\\\n&q_d\n= \\left( \\frac{ \\tilde{m}_{1}^{\\text{f}} \\tilde{m}_{4}^{\\text{f}}}\n{\\tilde{m}_{2}^{\\text{f}} \\tilde{m}_{3}^{\\text{f}}}\n\\right)^{1\/2} \\, 
.\n\\end{split}\n\\end{equation}\n\\normalsize\nIt is interesting to note that the mass parameters and the gauge coupling constant of the dual $SU(M)$ theory are completely independent of the Coulomb moduli parameters of the original $SU(2)^{M-1}$ theory. \n\n\n\n\n\\section{Topological string derivation}\n\\label{sec:TopStringDeriv}\n\nIn the previous section we presented a derivation of the duality map using the Seiberg-Witten formalism. Here, we will present an independent derivation (or check) using Nekrasov's partition function. We compute Nekrasov's partition functions for 5D $\\mathcal{N}=1$ $SU(N)^{M-1}$ and $SU(M)^{N-1}$ linear quivers and show that they are equal when we relate their gauge theory parameters with the duality map (\\ref{gen_map}).\n\nThe computation of Nekrasov's partition functions is performed using topological string theory. As we reviewed in Section \\ref{sec:Review}, topological string theory offers an alternative derivation of the gauge theory partition functions and most importantly provides {\\it a rewriting of the partition function} in a form in which the duality is manifest. \nIt is unlikely that gauge theory reasoning alone would lead to this rewriting. However, from the string theory point of view it is natural. Due to the fact that the partition function is read off from a toric diagram, symmetries that arise from the CY geometry (and are obscured otherwise) are manifest in this formalism.\n\nIn the previous section we used the type IIA D4\/NS5 brane setup to realize the linear quiver gauge theories. As we discussed in Section \\ref{sec:Review}, the D4\/NS5 brane configuration is dual to type IIA string theory compactified on CY$_3$. We are interested in the special class of Calabi-Yau manifolds that satisfy the toric condition and lead to $SU(N)$ gauge theory. These CY$_3$ are completely specified by their toric diagrams. In the case of linear quivers the toric diagram is essentially the same as the brane diagram.\n\n\\begin{figure}[htbp]\n \\begin{center}\n \\includegraphics[width=130mm,clip]{BraneToToric.eps}\n \\end{center}\n \\caption{The D4\/NS5 system is T-dual to the IIB $(p,q)$ 5-brane system.\nThe M\/IIB duality relates it to M-theory on the corresponding toric CY.}\n \\label{BraneToToric}\n\\end{figure}\nFollowing \\cite{Aharony:1997bh}, the D4\/NS5 brane setup in IIA theory is T-dual to the IIB $(p,q)$-brane web system (D5\/NS5). When uplifting this system to M-theory via M\/IIB duality, we obtain M-theory on $M^4\\times \\textrm{CY}_3\\times S^1$ where CY is a toric three-fold whose $(p,q)$-cycle shrinks.\nIn this way the D4\/NS5 system is equivalent to M-theory on toric CY,\nor IIA on CY which is the usual geometric engineering setup. This connection is illustrated in Figure \\ref{BraneToToric}.\n \n\nGiven the toric diagram we can use the topological vertex formalism to calculate Nekrasov's partition function of 4D $\\mathcal{N}=2$ gauge theories. We should stress again that in this paper we study the Nekrasov partition function for the 5D uplift of the 4D gauge theory. The 5D Nekrasov partition function is precisely equal to the topological string partition function\\footnote{To be precise, the obtained topological string partition function is the Nekrasov partition function for the $U(N)$ gauge theory whose Coulomb moduli parameters are constrained as $a_1=-a_2=a$. According to \\cite{Alday:2009aq}, this constrained partition function is still not precisely $SU(N)$. 
The difference is the overall factor which in \\cite{Alday:2009aq} is called the U(1) factor and is independent of the Coulomb moduli. This $U(1)$ factor does not affect the low-energy effective coupling constants which we studied in the previous section.}; of course after the appropriate identification of the gauge theory parameters with the string theory parameters.\n\nWriting down the topological string partition function is simple using the topological vertex formalism. The procedure was reviewed in Section~\\ref{sec:Review}.\nWhat is quite tedious is to bring the topological string partition function in the form given by Nekrasov. For that we have to perform the sums. Such calculations have previously been done by \\cite{Aganagic:2002qg,Iqbal:2003zz,Eguchi:2003sj,Iqbal:2004ne,Taki:2007dh}. They involve summations over Young diagrams. The summand contains Schur and skew-Schur functions. The calculation is quite technical so we hide most of the details in Appendix \\ref{app:NekrasovTop}. We first warm up with the $SU(2)$ case and then present the general $\\mathcal{N}=2$ $SU(N)^{M-1}$ linear quiver in its full glory. Once we bring the topological string partition function in the form that was given by Nekrasov, we obtain the identification between the gauge theory and the string theory parameters. Using this identification we finally write down the duality map that is identical to the one found in Section \\ref{sec:MtheoryDeriv}.\n\n\n\\subsection{$SU(2)$ gauge theory with four flavors}\n\\label{subsec:topsu2}\n\nIn this subsection we compute the topological string partition function for $SU(2)$ SQCD with four flavors using the topological vertex formalism. The toric diagram from which we read off the partition functions is depicted in Figure~\\ref{fig:4flavSQCD}. Due to the symmetry of the diagram, it is convenient to first consider the ``half-geometry'' of the corresponding toric CY shown in Figure~\\ref{fig:bifund}.\n\\begin{figure}[htbp]\n \\begin{center}\n \\includegraphics[width=40mm,clip]{BifundGeom2v2.eps}\n \\end{center}\n \\caption{The sub-diagram that engineers the bifundamental hypermultiplet of $SU(2)$ quiver gauge theories, where $R_i,Y_i,\\mu_i$ denote Young diagrams. The parameters $Q_1,Q_2,Q_3$ are associated with the line labeled by the Young diagram $\\mu_1, \\mu_2, \\mu_3$, respectively.}\n \\label{fig:bifund}\n\\end{figure}\nThis sub-diagram is dual to two horizontal D4-branes crossing a vertical NS5-brane. The vertical sequence of closed loops describes a combination of the two-cycles in CY$_3$ which give a vector multiplet and two fundamental hypermultiplets. As we will see, the K\\\"ahler parameters of the three two-cycles inside the CY geometry correspond to the Coulomb moduli of the $SU(2)$ gauge group and two of the hypermultiplet masses. After gluing the two ``half-geometries'' according to Figure~\\ref{fig:4flavSQCD} we obtain a toric CY$_3$ with six two-cycles, which correspond to the Coulomb moduli parameter $a$, the four flavor masses $m_{\\mathfrak{i}}$ ($\\mathfrak{i}=1,\\dots,4$) and the gauge coupling constant $q$. \n\n\nFirst, we will focus on the computation of the contribution from this ``half-geometry'' to the topological partition function. The Young diagram $R_i$ is kept to be arbitrary for as long as possible so that this computation can be used also for more generic cases like $SU(2)^{M-1}$ gauge theory. 
For the $SU(2)$ gauge theory with four flavors we then set $R_1=R_2=\\emptyset$ in order to get the partition function.\n\nUsing the topological vertex formalism we read off the following sub-amplitude for the local geometry depicted in Figure~\\ref{fig:bifund} \n\\footnotesize\n\\begin{equation}\n\\begin{split}\nL^{\\,R_1\\,Y_1}_{\\,R_2\\,Y_2}\n(Q_{1},\\,Q_{2},\\,Q_{3})\n&\\equiv\n\\sum_{\\mu_1,\\mu_2,\\mu_3}\n(-1)^{|\\mu_2|}\\mathfrak{q}^{\\frac{1}{2}\\kappa_{\\mu_2} }\\,\n\\prod_{i=1}^3(-Q_i)^{|\\mu_i|}\n\\\\\n&\\rule{0pt}{3ex}\n\\qquad \\times \nC_{\\emptyset R_1 \\mu_1}(\\mathfrak{q})\\,\nC_{\\mu_2 Y_1^T \\mu_1^T}(\\mathfrak{q})\\,\nC_{\\mu_3 Y_2^T\\mu_2^T}(\\mathfrak{q})\\,\nC_{\\mu_3^T R_2 \\emptyset}(\\mathfrak{q})\n\\\\\n&\\rule{0pt}{4ex}=\n\\,S_{R_1}(\\mathfrak{q}^\\rho)\\,S_{R_2}(\\mathfrak{q}^\\rho)\\,S_{Y_1^T}(\\mathfrak{q}^\\rho)\\,S_{Y_2^T}(\\mathfrak{q}^\\rho)\\\\\n&\\rule{0pt}{4ex}\n\\qquad \\times\n\\sum_{\\mu_1,\\mu_2,\\mu_3}\n\\sum_{\\zeta,\\eta}\nS_{\\mu_1^T}(-Q_1\\mathfrak{q}^{R_1+\\rho})\\,\nS_{\\mu_1\/\\zeta}(\\mathfrak{q}^{Y_1^T+\\rho})\\,\nS_{\\mu_2\/\\zeta}(\\mathfrak{q}^{Y_1+\\rho})\\\\\n&\\qquad\n\\times\nS_{\\mu_2\/\\eta}(Q_2\\mathfrak{q}^{Y_2^T+\\rho})\\,\nS_{\\mu_3\/\\eta}(Q_2^{-1}\\mathfrak{q}^{Y_2+\\rho})\\,\nS_{\\mu_3^T}(-Q_2Q_3\\mathfrak{q}^{R_2^T+\\rho})\\, .\n\\end{split}\n\\end{equation}\n\\normalsize\nThe second line of the equation is obtained by inserting the definition of the vertex function (\\ref{def_topv}). In order to get a closed form of the topological string amplitude we have to perform the summation explicitly. For that we employ the Cauchy formulas\n\\begin{equation}\n\\begin{split}\n\\sum_\\eta\nS_{\\eta\/R_1}(x)\nS_{\\eta\/R_2}(y)\n&=\n\\prod_{i,j}\n(1-x_iy_j)^{-1}\n\\sum_\\eta\nS_{R_1\/\\eta}(y)\nS_{R_2\/\\eta}(x) \\, , \\\\\n\\sum_\\eta\nS_{\\eta^T\/R_1}(x)\nS_{\\eta\/R_2}(y)\n&=\n\\prod_{i,j}\n(1+x_iy_j)\n\\sum_\\eta\nS_{R_1^T\/\\eta}(y)\nS_{R_2^T\/\\eta^T}(x) \\, .\n\\end{split}\n\\end{equation}\nNotice that $S_{R\/\\emptyset}=S_{R}$ and $S_{\\emptyset\/R}=\\delta_{R,\\emptyset}$. 
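As a simple sanity check (this special case is our own addition, included only for orientation), setting $R_1=R_2=\\emptyset$ in the first formula and using $S_{\\emptyset\/R}=\\delta_{R,\\emptyset}$ reduces it to the classical Cauchy identity\n\\begin{align}\n\\sum_\\eta S_{\\eta}(x)\\,S_{\\eta}(y)\n=\\prod_{i,j}(1-x_iy_j)^{-1} \\, ,\n\\end{align}\nwhile the same specialization of the second formula gives the dual Cauchy identity $\\sum_\\eta S_{\\eta^T}(x)\\,S_{\\eta}(y)=\\prod_{i,j}(1+x_iy_j)$. 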
By using these formulas repeatedly, we obtain the following closed form of the amplitude\n\\small\n\\begin{equation}\n\\begin{split}\nL^{\\,R_1\\,Y_1}_{\\,R_2\\,Y_2}\n(Q_{1},\\,Q_{2},\\,Q_{3})\n&=\n\\,S_{R_1}(\\mathfrak{q}^\\rho)\\,S_{R_2}(\\mathfrak{q}^\\rho)\\,S_{Y_1^T}(\\mathfrak{q}^\\rho)\\,S_{Y_2^T}(\\mathfrak{q}^\\rho)\n\\\\\n&\\rule{0pt}{4ex}\\times\n\\frac{\n\\left[R_1, Y_1^T \\right]_{Q_1}\n\\left[Y_2, R_2^T \\right]_{Q_3}\n\\left[R_1, Y_2^T \\right]_{Q_1Q_2}\n\\left[Y_1, R_2^T, \\right]_{Q_2Q_3}\n}\n{\n\\left[ Y_1, Y_2^T \\right]_{Q_2}\n\\left[ R_1, R_2^T \\right]_{Q_1Q_2Q_3}\n} \\, ,\n\\end{split}\n\\end{equation}\n\\normalsize\nwhere $[*,*]_Q$ is defined as\n\\begin{align}\n\\left[Y_1, Y_2 \\right]_{Q}\n\\equiv\n\\prod_{i,j=1}^\\infty\n(1-Q\\mathfrak{q}^{Y_{1i}+Y_{2j}-i-j+1})\n=\\left[Y_2, Y_1 \\right]_{Q} \\, .\n\\end{align}\nThe instanton contribution of the gauge theory partition function is given by the normalized amplitude\n\\footnotesize\n\\begin{equation}\n\\begin{split}\n&\\tilde{L}^{\\,R_1\\,Y_1}_{\\,R_2\\,Y_2}\n\\equiv\n\\frac{L^{\\,R_1\\,Y_1}_{\\,R_2\\,Y_2}}{L^{\\,\\emptyset\\,\\emptyset}_{\\,\\emptyset\\,\\emptyset}}\\\\\n&=\\rule{0pt}{4ex}\n2^{|R_1|+|R_2|+|Y_1|+|Y_2|}\n\\left(\\sqrt{\\frac{Q_1}{Q_3}}\\right)^{|R_1|-|R_2|}\n\\left(\\sqrt{{Q_1}{Q_3}}\\right)^{|Y_1|+|Y_2|}\n\\mathfrak{q}^{-\\frac{1}{4}(\\kappa_{R_1}-\\kappa_{R_2}-\\kappa_{Y_1}+\\kappa_{Y_2})}\n\\\\\n&\\times\\rule{0pt}{4ex}\n\\label{bifundamp}\nS_{R_1}(\\mathfrak{q}^\\rho)\\,S_{R_2}(\\mathfrak{q}^\\rho)\\,S_{Y_1^T}(\\mathfrak{q}^\\rho)\\,S_{Y_2^T}(\\mathfrak{q}^\\rho)\n\\frac{\nP^{-1}_{Y_1R_1}(Q_1)\nP^{-1}_{Y_2R_1}(Q_1Q_2)\nP^{-1}_{R_2Y_1}(Q_2Q_3)\nP^{-1}_{R_2Y_2}(Q_3)\n}\n{P^{-1}_{R_2R_1}(Q_1Q_2Q_3)P^{-1}_{Y_2Y_1}(Q_2)} \\, ,\n\\end{split}\n\\end{equation}\n\\normalsize\nwhere we define the function $P$ as follows \\cite{Konishi:2003qq}:\n\\begin{equation}\n\\begin{split}\n&\\frac{1\n}{P_{Y_1Y_2}(\\mathfrak{q},Q)}\n\\equiv\n\\prod_{(i,j)\\in Y_1}\n\\sinh \\frac{\\beta}{2}\n\\left(\na+\\hbar(Y_{1\\,i}+Y^T_{2\\,j}-i-j+1)\n\\right)\n\\\\\n&\\rule{0pt}{5ex}\n\\quad\\qquad\\qquad\\quad\n\\times\n\\prod_{(i,j)\\in Y_2}\n\\sinh \\frac{\\beta}{2}\n\\left(a+\\hbar\n(-Y^T_{1\\,j}-Y_{2\\,i}+i+j-1)\\right)\n\\end{split}\n\\end{equation}\nfor $\\mathfrak{q}=e^{-\\beta \\hbar}$ and $Q=e^{-\\beta a}$. To get this expression we have used the formulas (\\ref{NPrelation}) and (\\ref{bracket_N}).\n\n\\begin{figure}[htbp]\n \\begin{center}\n \\includegraphics[width=70mm,clip]{4fSQCDv2.eps}\n \\end{center}\n \\caption{The toric diagram that gives $SU(2)$ SQCD with four fundamental hypermultiplets. Since the eight external lines are semi-infinite half-lines, we assign the empty Young diagram $\\emptyset$ to them.}\n \\label{fig:4flavSQCD}\n\\end{figure}\nWith this sub-amplitude at hand we move on to the computation of the full partition function of $SU(2)$ SQCD with four flavors. The associated toric diagram is depicted in Figure~\\ref{fig:4flavSQCD}. The partition function for this toric diagram is obtained by gluing two sub-diagrams $\\tilde{L}$ according to\n\\small\n\\begin{align}\nZ_{\\,\\textrm{inst}}=\n\\sum_{Y_1,Y_2}\nQ_B^{|Y_1|+|Y_2|}\\,\n\\mathfrak{q}^{\\frac{\\kappa_{Y_1}}{2}-\\frac{\\kappa_{Y_2}}{2}}\\,\n\\tilde{L}^{\\,\\emptyset\\,Y_1}_{\\,\\emptyset\\,Y_2}\\,\n(Q_{m1},Q_{F},Q_{m2})\\,\n\\tilde{L}^{\\,\\emptyset\\,Y_2^T}_{\\,\\emptyset\\,Y_1^T}\\,\n(Q_{m4},Q_{F},Q_{m3}) \\, .\n\\end{align}\n\\normalsize\nThis expression is written in terms of the string theory parameters used in geometric engineering. 
By comparing it with the Nekrasov partition function in \\cite{Nekrasov:2002qd} we obtain the identifications\n\\small\n\\begin{equation}\n\\begin{split}\n&Q_{m1}=e^{-\\beta(m_1^{\\text{f}} - a)}\n=\\frac{\\tilde{m}_1}{\\tilde{a}},\\quad\nQ_{m2}=e^{-\\beta(-m_2^{\\text{f}}-a)}\n= \\frac{1}{\\tilde{m}_2\\tilde{a}}\n,\\quad\\cr\n&Q_{m3}=e^{-\\beta(m_3^{\\text{f}}-a)}\n=\\frac{\\tilde{m}_3}{\\tilde{a}}\n,\\quad\nQ_{m4}=e^{-\\beta(-m_4^{\\text{f}}-a)}\n= \\frac{1}{\\tilde{m}_4 \\tilde{a}}, \\quad\nQ_{F}=e^{-2a\\beta}\n=\\tilde{a}^{2} \\, ,\n\\label{Q_Def}\n\\end{split}\n\\end{equation}\n\\normalsize\nwhere the second equality is written in the M-theoretical parametrization from Section \\ref{sec:MtheoryDeriv}.\n\nIn particular, the ``numerator contribution\" of the left sub-diagram\n$\n\\tilde{L}^{\\,\\emptyset\\,Y_1}_{\\,\\emptyset\\,Y_2}\\,\n(Q_{m1},Q_{F},Q_{m2})\n$\ntakes the form\n\\begin{equation}\n\\begin{split}\n&P^{-1}_{Y_1\\emptyset}(Q_{m1})\nP^{-1}_{Y_2\\emptyset}(Q_{m1}Q_F)\nP^{-1}_{\\emptyset Y_1}(Q_FQ_{m2})\nP^{-1}_{\\emptyset Y_2}(Q_{m2})\n\\cr\n&\\rule{0pt}{4ex} \\qquad\\qquad\\qquad\n= (-1)^{|Y_1|+|Y_2|}\n\\prod_{f=1,2}\nZ_{\\,\\textrm{fund}}(\\,a,\\vec{Y},m_f,\\hbar;\\beta\\,) \\, ,\n\\end{split}\n\\end{equation}\nwhere we have used (\\ref{Pinverse}) together with Nekrasov's expresion (\\ref{Nek5Dfund}). This is precisely the contribution from the two fundamental hypermultiplets with masses $m_1$ and $m_2$. The sub-diagram $\\tilde{L}^{Y_2^T\\emptyset}_{Y_1^T\\emptyset}$ \ngives the contribution of the two fundamental hypermultiplets with masses $m_3$ and $m_4$ in a similar fashion. Moreover, the remaining part has the interpretation of contribution from the vector multiplet\n\\begin{align}\n\\frac{S_{Y_1}(\\mathfrak{q}^\\rho)\\,S_{Y_2}(\\mathfrak{q}^\\rho)}{P^{-1}_{Y_1 Y_2}(Q_F)}\n\\frac{S_{Y_2^T}(\\mathfrak{q}^\\rho)\\,S_{Y_1^T}(\\mathfrak{q}^\\rho)}{P^{-1}_{Y_2^T Y_1^T}(Q_F)}\n = (-4)^{-|Y_1|-|Y_2|} Z_{\\textrm{vector}} (\\,a,\\vec{Y},\\hbar;\\beta\\,)\n\\end{align}\nwhere we have used (\\ref{specializedSchur}), (\\ref{Ptranspose}) and (\\ref{Nek5Dvect2}). The details of the computation can be found in Appendix \\ref{app:NekrasovTop}. We have thus exactly reproduced the Nekrasov partition function \\cite{Nekrasov:2002qd}, where the instanton factor is given by\n\\begin{align}\nq\n=Q_B\\sqrt{Q_{m1}Q_{m2}Q_{m3}Q_{m4}}\n=Q_B\\, \\tilde{a}^{-2} \\sqrt{\\frac{\\tilde{m}_1\\tilde{m}_3}{\\tilde{m}_2\\tilde{m}_4}} \\, .\n\\label{q_Def}\n\\end{align}\nIt is remarkable that the parametrization (\\ref{q_Def}) does not depend on the $\\Omega$ background parameter $\\mathfrak{q}=e^{ - \\beta \\hbar}$.\n\nWe can interpret the identification of the parameters (\\ref{Q_Def}) and (\\ref{q_Def}) in the context of the brane setup. Taking into account that $\\tilde{a}_{\\alpha}$ and $\\tilde{m}_\\mathfrak{i}$, correspond to the positions of the color branes and the flavor branes respectively, the ratio of them corresponds to the distance between the corresponding branes as in (\\ref{Q_Def}). In the similar way, by rewriting (\\ref{q_Def}) as \n$$\nq = \\sqrt{(Q_{m1} Q_B Q_{m3}) \\times (Q_{m2} Q_B Q_{m4})} \n$$\nwe see that $q$ can be interpreted as the average distance between the two NS5-branes in the $v \\rightarrow \\pm \\infty$ asymptotic regions, as discussed in Section \\ref{sec:MtheoryDeriv}. 
This observation justifies the identification (\\ref{CCq}).\n\n\n\\subsubsection*{Reflection symmetry}\n\nThe topological string partition function $Z=\\left(L_{\\,\\emptyset\\emptyset}^{\\,\\emptyset\\emptyset}\\right)^2 Z_{\\textrm{inst}}$ (without normalization) has the same symmetries as the toric diagram it is based on. The normalization factor $\\left(L_{\\,\\emptyset\\emptyset}^{\\,\\emptyset\\emptyset}\\right)^2$ gives the perturbative contribution of the Nekrasov partition function, while $Z$ is equivalent to the full Nekrasov partition function. Therefore, a graphical symmetry of the toric diagram is also a symmetry of the full quantum gauge theory, including perturbative and instanton corrections.\n\nTypical examples are the reflection symmetries of Figure~\\ref{fig:4flavSQCD}. The partition function is invariant under reflection along the diagonal axis when it is performed together with the transformation \n\\begin{align}\nQ_{m2}\\leftrightarrow Q_{m3}\\, , \\quad\nQ_{B}\\leftrightarrow Q_{F} \\, .\n\\end{align}\nThis reflection symmetry implies the following duality relations\n\\begin{equation}\n\\begin{split}\n(Q_{m1})_d=Q_{m1}\\, ,\\quad\n(Q_{m2})_d=Q_{m3}\\, ,&\\quad\n(Q_{m3})_d=Q_{m2}\\, ,\\quad\n(Q_{m4})_d=Q_{m4}\\, ,\\cr\n(Q_{B})_d=Q_{F}\\, ,&\\quad\n(Q_{F})_d=Q_{B}\\, .\n\\end{split}\n\\end{equation}\nIn the M-theory language, it is an invariance of the Nekrasov partition function under the transformation\n\\small\n\\begin{equation}\n\\begin{split}\n\\frac{(\\tilde{m}_1)_d}{(\\tilde{a})_d}=\n\\frac{\\tilde{m}_1}{\\tilde{a}} \\, , \\quad \n\\frac{1}{(\\tilde{m}_2)_d (\\tilde{a})_d}\n=\\frac{\\tilde{m}_3}{\\tilde{a}} \\, , \\quad\n\\rule{0pt}{4ex}\n& \\frac{(\\tilde{m}_3)_d}{(\\tilde{a})_d}\n= \\frac{1}{\\tilde{m}_2\\tilde{a}} \\, , \\quad \n\\frac{1}{(\\tilde{m}_4)_d (\\tilde{a})_d} = \\frac{1}{\\tilde{m}_4 \\tilde{a}} \\, ,\n\\cr\n\\rule{0pt}{4ex}\nq_d\\,{(\\tilde{a})_d^{2}}\\sqrt{\\frac{(\\tilde{m}_2)_d(\\tilde{m}_4)_d}{(\\tilde{m}_1)_d(\\tilde{m}_3)_d}}\n=\\tilde{a}^{2} \\, , \\quad \n\\rule{0pt}{4ex}\n& (\\tilde{a})_d^{2}=q\\,{\\tilde{a}^{2}}\\sqrt{\\frac{\\tilde{m}_2\\tilde{m}_4}{\\tilde{m}_1\\tilde{m}_3}} \\, .\n\\label{top_su2_map1}\n\\end{split}\n\\end{equation}\n\\normalsize\nThis is the self-duality of the holomorphic sector of the 5D gauge theory in the Coulomb branch.\n\nNote that if we combine this duality map with a known symmetry of the Nekrasov partition function, we obtain another expression for this self-duality. In particular, we can combine with a simultaneous change of the sign of the Coulomb moduli and the masses discussed at the end of Appendix \\ref{app:NekrasovTop}, which correspond to $\\tilde{m}_{\\mathfrak{i}} \\to \\tilde{m}_{\\mathfrak{i}}^{-1}$, $\\tilde{a} \\to \\tilde{a}^{-1}$. 
Acting with this symmetry transformation on both the original and the dual theory in (\\ref{top_su2_map1}) we obtain\n\\small\n\\begin{equation}\n\\begin{split}\n\\frac{(\\tilde{a})_d}{(\\tilde{m}_1)_d}=\n\\frac{\\tilde{a}}{\\tilde{m}_1} \\, , \\quad\n(\\tilde{m}_2)_d (\\tilde{a})_d\n=\\frac{\\tilde{a}}{\\tilde{m}_3} \\, ,\\quad\n\\rule{0pt}{4ex}\n& \\frac{(\\tilde{a})_d}{(\\tilde{m}_3)_d}\n=\\tilde{m}_2\\tilde{a} \\, , \\quad\n(\\tilde{m}_4)_d (\\tilde{a})_d = \\tilde{m}_4 \\tilde{a} \\, ,\n\\cr\n\\rule{0pt}{4ex}\nq_d\\,{(\\tilde{a})_d^{-2}}\\sqrt{\\frac{(\\tilde{m}_1)_d(\\tilde{m}_3)_d}{(\\tilde{m}_2)_d(\\tilde{m}_4)_d}}\n=\\tilde{a}^{-2} \\, , \\quad\n\\rule{0pt}{4ex}\n& (\\tilde{a})_d^{-2}=q\\,{\\tilde{a}^{-2}}\\sqrt{\\frac{\\tilde{m}_1\\tilde{m}_3}{\\tilde{m}_2\\tilde{m}_4}} \\, .\n\\label{top_su2_map2}\n\\end{split}\n\\end{equation}\n\\normalsize\nIt is now straightforward to see that (\\ref{top_su2_map2}) is equivalent to the duality map (\\ref{Map_5DC}) which was derived using the M5-brane construction in the previous section.\\footnote{It is possible to obtain (\\ref{top_su2_map2}) directly if we define the toric diagram in Figure~\\ref{fig:4flavSQCD} with all the arrows reversed. In that case, the parametrization of the geometric engineering parameters in (\\ref{Q_Def}) gets inverted. In this article, we use the standard parametrization from \\cite{Iqbal:2004ne}.} The point here is that the self-dual $\\Omega$-background deformation $\\hbar$ maintains this duality, since we have shown that not only the Seiberg-Witten solution but also the Nekrasov partition function is invariant under the duality transformation. This result is due to the nontrivial fact that the duality map does not depend on the $\\Omega$-background parameter $\\hbar$. In the following we will see that this duality for higher-rank gauge theories also satisfies this non-trivial property.\n\n\n{\\subsection{$SU(N)^{M-1}\\leftrightarrow SU(M)^{N-1}$ duality}}\n\n\\begin{figure}[htbp]\n \\begin{center}\n \\includegraphics[width=110mm,clip]{QuiverSub2.eps}\n \\end{center}\n \\caption{The sub-diagram of the toric diagram for $SU(N)$ quiver gauge theories. The parameters $Q_{m\\alpha}$ and $Q_{F\\alpha}$ are associated with the lines labeled by the Young diagrams $\\mu_{\\alpha}$ and $\\nu_{\\alpha}$, respectively.}\n \\label{fig:quiv2}\n\\end{figure}\nWe will now generalize to $SU(N)$ quiver gauge theories. As in the previous subsection, we divide the toric diagram into sub-diagrams along its symmetry lines. The sub-diagram of the generic ladder geometry we choose to compute is shown in Figure~\\ref{fig:quiv2}. 
Using the topological vertex formalism, the contribution coming from this sub-diagram is\n\\small\n\\begin{equation}\n\\begin{split}\n&H_{\\,Y_1Y_2\\cdots Y_N}^{\\,R_1R_2\\cdots R_N}\\,(\\,\\mathfrak{q},Q_{m1},\\cdots,Q_{mN},Q_{F1},\\cdots,Q_{FN})\n\\\\\n&=\\sum_{\\mu_{1,\\cdots ,N},\\nu_{1,\\cdots ,N}}\n\\prod_{\\alpha=1}^N\\,\n(-Q_{m\\alpha})^{|\\mu_\\alpha|}\\,\n\\prod_{\\alpha=1}^{N-1}\\,\n(-Q_{F\\alpha})^{|\\nu_\\alpha|}\\,\n\\\\\n&\\times\nC_{ \\emptyset R_1\\mu_1^T}\\,\nC_{ \\nu_1^TY_1^T \\mu_1 }\\,\nC_{ \\nu_1 R_2 \\mu_2^T}\\,\nC_{ \\nu_2^T Y_2^T \\mu_2}\\,\nC_{ \\nu_2 R_3\\mu_3^T}\\,\nC_{ \\nu_3^TY_3^T \\mu_3}\\,\n\\cdots\nC_{ \\nu_{N-1}R_N \\mu_N^T}\\,\nC_{ \\emptyset Y_N^T \\mu_N} \\, .\n\\end{split}\n\\end{equation}\n\\normalsize\nBy substituting the definition of the topological vertex, we obtain the following expression\n\\small\n\\begin{equation}\n\\begin{split}\n&H_{\\,Y_1Y_2\\cdots Y_N}^{\\,R_1R_2\\cdots R_N}\n\\\\\n&=\\rule{0pt}{4ex}\n\\prod_{\\alpha=1}^N\\,\nS_{R_\\alpha}(\\mathfrak{q}^{\\rho})\\,\nS_{Y_\\alpha^T}(\\mathfrak{q}^{\\rho})\\,\n\\sum_{\\mu,\\nu,\\eta,\\zeta}\n\\prod_{\\alpha=1}^N\\,\n(-Q_{m\\alpha})^{|\\mu_\\alpha|}\\,\n(-Q_{F\\alpha})^{|\\nu_\\alpha|}\n\\\\\n&\\quad\\times\n\\prod_{\\alpha=1}^N\\,\nS_{ \\nu_{\\alpha-1}\/\\zeta_{\\alpha-1}}(\\mathfrak{q}^{R_\\alpha^T+\\rho})\\,\nS_{ \\mu_\\alpha\/\\zeta_{\\alpha-1}}(\\mathfrak{q}^{R_\\alpha+\\rho})\\,\nS_{ \\mu_\\alpha^T\/\\eta_\\alpha}(\\mathfrak{q}^{Y_\\alpha^T+\\rho})\\,\nS_{ \\nu_\\alpha^T\/\\eta_\\alpha}(\\mathfrak{q}^{Y_\\alpha+\\rho}) \\, .\n\\end{split}\n\\end{equation}\n\\normalsize\nNote that the lines on the left and right edges are associated with a singlet, that is, the empty tableau $\\nu_0=\\nu_N=\\emptyset$. We can perform the summation since all the $\\kappa$-factors from the framing factors are canceled out in the partition function. This type of subdiagram is called ``the vertex on a strip geometry\" and is studied extensively in \\cite{Iqbal:2004ne}. By using the formula (B.1) from \\cite{Taki:2007dh} we can compute it explicitly:\n\\begin{equation}\n\\begin{split}\nH_{\\,Y_1Y_2\\cdots Y_N}^{\\,R_1R_2\\cdots R_N}\n&=\n\\frac{\\prod_{\\alpha=1}^N\\,\nS_{R_\\alpha}(\\mathfrak{q}^{\\rho})\\,\nS_{Y_\\alpha^T}(\\mathfrak{q}^{\\rho})}\n{\\prod_{1\\leq \\alpha<\\beta\\leq N}\\left[R_\\alpha, R_\\beta^T\\right]_{Q_{\\alpha\\beta}}\n\\left[Y_\\alpha,Y_\\beta^T \\right]_{Q_{m\\alpha}^{-1}Q_{\\alpha\\beta}Q_{m\\beta}}\n}\n\\\\\n&\\qquad\\times\\rule{0pt}{4ex}\n\\prod_{1\\leq \\alpha<\\beta\\leq N}\n\\left[Y_\\alpha,R_\\beta^T \\right]_{Q_{m\\alpha}^{-1}Q_{\\alpha\\beta}}\n\\prod_{1\\leq \\alpha\\leq\\beta\\leq N}\n\\left[R_\\alpha, Y_\\beta^T\\right]_{Q_{\\alpha\\beta}Q_{m\\beta}} \\, ,\n\\end{split}\n\\end{equation}\nwhere $Q_{\\alpha\\beta} = \\prod_{a=\\alpha}^{\\beta-1}Q_{m\\,a}Q_{F\\,a}$. 
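For orientation (this small remark is ours and is not needed for the derivation), note that the definition of $Q_{\\alpha\\beta}$ implies\n\\begin{align}\nQ_{\\alpha\\,\\alpha+1}=Q_{m\\,\\alpha}Q_{F\\,\\alpha} \\, ,\n\\qquad\nQ_{\\alpha\\beta}\\,Q_{\\beta\\gamma}=Q_{\\alpha\\gamma}\n\\quad (\\alpha<\\beta<\\gamma) \\, ,\n\\end{align}\nso the $Q_{\\alpha\\beta}$ compose like exponentials of differences of Coulomb moduli, consistently with the identification made below. 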
Normalizing this sub-diagram by dividing with $H_{\\emptyset\\cdots}^{\\emptyset\\cdots}$ we obtain\n\\small\n\\begin{equation}\n\\begin{split}\n\\tilde{H}_{\\,Y_1Y_2\\cdots Y_N}^{\\,R_1R_2\\cdots R_N}\n&=\n\\frac{\\prod_{\\alpha=1}^N\\,\nS_{R_\\alpha}(\\mathfrak{q}^{\\rho})\\,\nS_{Y_\\alpha^T}(\\mathfrak{q}^{\\rho})}\n{\\prod_{1\\leq \\alpha<\\beta\\leq N}\nN_{R_\\beta R_\\alpha}(Q_{\\alpha\\beta})\\,\nN_{Y_\\beta Y_\\alpha}(Q_{m\\alpha}^{-1}Q_{\\alpha\\beta}Q_{m\\beta})\n}\n\\\\\n&\\qquad\\times\\rule{0pt}{4ex}\n\\prod_{1\\leq \\alpha<\\beta\\leq N}\nN_{R_\\beta Y_\\alpha}(Q_{m\\alpha}^{-1}Q_{\\alpha\\beta})\n\\prod_{1\\leq \\alpha\\leq\\beta\\leq N}\nN_{Y_\\beta R_\\alpha}(Q_{\\alpha\\beta}Q_{m\\beta}) \\, ,\n\\end{split}\n\\label{Htilde}\n\\end{equation}\n\\normalsize\nwhere \n\\begin{align}\nN_{Y_1Y_2}(\\mathfrak{q},Q)\n\\equiv \\frac{\\left[Y_1^T, Y_2 \\right]_{Q}}\n{\\left[\\emptyset, \\emptyset \\right]_{Q}}\n=N_{Y_2^TY_1^T}(\\mathfrak{q},Q) \\, .\n\\end{align}\n\nThe generic $SU(N)^{M-1}$ linear quiver theories with fundamental and bifundamental hypermultiplets are engineered using CY$_3$ with\n linear toric diagrams that are obtained by gluing local structures depicted in Figure~\\ref{fig:quiv2}.\nThe partition function for $SU(N)^{M-1}$ quivers is read off from Figure~\\ref{fig:large} and is written in terms of the local structure of the geometry that is illustrated in Figure~\\ref{fig:genQuiver}, \n\\begin{figure}[htbp]\n \\begin{center}\n \\includegraphics[width=100mm,clip]{genericQuiverVer3.eps}\n \\end{center}\n \\caption{The local structure of the toric diagrams for the $SU(N)$ linear quivers.}\n \\label{fig:genQuiver}\n\\end{figure}\n\\begin{equation}\n\\label{genQuivPartFuncmain}\n\\begin{split}\nZ_{\\,\\textrm{inst}} &=\n\\sum\n\\cdots\n\\sum_{Y_1^{(i)},\\cdots,Y_N^{(i)}}\n\\cdots\n\\prod_{\\alpha}(-Q_{B\\alpha}^{(i)})^{|Y_\\alpha|}\n\\cr\n&\\quad \\cdots \\tilde{H}_{\\,Y_1^{(i+1)} Y_2^{(i+1)} \\cdots Y_N^{(i+1)}}\n^{\\,Y_1^{(i)} Y_2^{(i)} \\cdots Y_N^{(i)}}\\,\n(\\,Q^{(i)}_{m1}, \\cdots,Q^{(i)}_{mN},Q^{(i)}_{F1},\\cdots,Q^{(i)}_{FN})\n\\cr\n&\\quad \\times\n\\tilde{H}^{\\,Y_1^{(i-1)} Y_2^{(i-1)} \\cdots Y_N^{(i-1)}}\n_{\\,Y_1^{(i)} Y_2^{(i)} \\cdots Y_N^{(i)}}\\,\n(\\,Q^{(i-1)}_{m1},\\cdots,Q^{(i-1)}_{mN},Q^{(i-1)}_{F1},\\cdots,Q^{(i-1)}_{FN})\n\\cdots.\n\\end{split}\n\\end{equation}\n\\normalsize\nThis expression is written in terms of the string theory parameters. In order to make contact with Nekrasov's partition function \n we introduce the following identification for the K\\\"ahler parameters\n\\begin{align}\nQ^{(i)}_{\\alpha\\beta}=e^{-\\beta(a^{(i)}_\\alpha-a^{(i)}_\\beta)}\n= \\frac{\\tilde{a}_{\\alpha}^{(i)}}{\\tilde{a}_{\\beta}^{(i)}} \\, , \\qquad \nQ^{(i)}_{m\\alpha}=e^{-\\beta(a^{(i)}_\\alpha-{a}^{(i+1)}_\\alpha-m^{(i,i+1)})}\n= \\frac{\\tilde{a}^{(i)}_{\\alpha}}{\\tilde{a}^{(i+1)}_{\\alpha}} \\, ,\n\\label{Qab_Qm}\n\\end{align}\nwhere $\\tilde{a}_{\\alpha}^{(i)}$ are the M-theory parameters from Section \\ref{sec:MtheoryDeriv}. 
Here, we have defined \n\\begin{align}\nQ_{\\alpha \\, \\alpha+1}^{(i)} \\equiv Q_{m \\, \\alpha}^{(i)} Q_{F\\alpha}^{(i)}\n\\label{QF_Qab}\n\\end{align}\n(see Figure~\\ref{fig:genQuiver}), which leads to the identification\n\\begin{align}\nQ_{F\\alpha}^{(i)}\n{= \\frac{\\tilde{a}_{\\alpha}^{(i+1)}}{\\tilde{a}_{\\alpha+1}^{(i)}}}\n= \\exp \\left[ - \\beta (a_{\\alpha}^{(i+1)}-a_{\\alpha+1}^{(i)} + m^{(i,i+1)}) \\right] \\, .\n\\label{QF_param}\n\\end{align}\nNote that the parameters above satisfy the following relations\n\\begin{equation}\n\\begin{split}\nQ^{(i)}_{\\alpha\\beta} \n&= Q^{(i-1)}_{\\alpha\\beta} \\frac{Q^{(i-1)}_{m\\beta}}{Q^{(i-1)}_{m\\alpha}} \\, ,\n\\\\\nQ_{F\\alpha}^{(i)} \n&= Q_{F \\alpha}^{(i-1)} \\frac{Q_{m \\, \\alpha+1}^{(i-1)}}{Q_{m\\alpha}^{(i)}} \\, .\n\\label{rec_QF}\n\\end{split}\n\\end{equation}\n\nWhen comparing with the expression (\\ref{GluingOfQuiver}) of the Nekrasov partition function \\cite{Nekrasov:2002qd, Fucito:2004gi}, we can show that the topological string partition function (\\ref{genQuivPartFuncmain}) is almost the same as the partition function for the quiver gauge theory.\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width=120mm,clip]{genericQuiverDiag.eps}\n \\caption{The quiver diagram for the $SU(N)$ quiver gauge theory associated with Figure \\protect\\ref{fig:genQuiver}. } \n \\label{fig:genQuiverDiag}\n\\end{figure}\nThe remaining problem is to find the identification between the base K\\\"ahler parameters $Q_{B}^{(i)}$ and the gauge coupling constants $q^{(i)}$. For this purpose, let us compute the corresponding part of the partition function (\\ref{genQuivPartFuncmain}). The K\\\"ahler parameters $Q_{B\\alpha}$ of the two-cycles $B+m_\\alpha F+n_\\alpha F'$ are given by\n\\begin{align}\nQ_{B1}^{(i)}=Q_B^{(i)}\n\\qquad \\text{and} \\qquad\nQ_{B\\alpha}^{(i)} \n= Q^{(i)}_{B\\,\\alpha-1} \\frac{Q^{(i)}_{m \\, \\alpha-1}}{ Q^{(i-1)}_{m\\alpha}} \\, .\n\\label{rec_QB}\n\\end{align}\nThe part of the partition function (\\ref{genQuivPartFuncmain}) that contains these parameters is\n\\begin{equation}\n\\begin{split}\n\\prod_{\\alpha}(-Q_{B\\alpha}^{(i)})^{|Y_\\alpha^{(i)}|}\n&=(-Q_B^{(i)})^{\\sum{|Y_\\alpha^{(i)}|}}\n\\frac\n{\n\\prod_{1\\leq\\alpha<\\beta\\leq N}\n(Q^{(i)}_{m\\alpha})^{|Y_\\beta^{(i)}|}\n}\n{\n\\prod_{2\\leq\\alpha\\leq\\beta\\leq N}\n(Q^{(i-1)}_{m\\alpha})^{|Y_\\beta^{(i)}|}\n}\\\\\n&=(-Q_B ^{(i)}Q^{(i-1)}_{m1})^{\\sum{|Y_\\alpha^{(i)}|}}\n\\frac\n{\n\\prod_{1\\leq\\alpha<\\beta\\leq N}\n(Q^{(i)}_{m\\alpha})^{|Y_\\beta^{(i)}|}\n}\n{\n\\prod_{1\\leq\\alpha\\leq\\beta\\leq N}\n(Q^{(i-1)}_{m\\alpha})^{|Y_\\beta^{(i)}|}\n}\n\\, .\n\\label{QB_q}\n\\end{split}\n\\end{equation}\nBy comparing (\\ref{QB_q}) with (\\ref{GaugeCouplingKahlers}), we find the following relation between the gauge coupling constants and the K\\\"ahler parameters of the base $\\mathbb{P}^1$\n\\begin{equation}\n\\begin{split}\nQ_B^{(i)}\n= &\nq^{(i)}\n\\frac{1}{Q^{(i-1)}_{m1}}\n\\prod_{\\alpha=1}^N\n\\sqrt{\\frac{Q^{(i-1)}_{m\\alpha}}\n{Q^{(i)}_{m\\alpha}}}\n= \nq^{(i)}\n\\frac{\\tilde{a}_1^{(i)}}{\\tilde{a}_1^{(i-1)}}\n\\prod_{\\alpha=1}^N\n\\frac{\\sqrt{\\tilde{a}_{\\alpha}^{(i-1)} \\tilde{a}_{\\alpha}^{(i+1)}}}\n{\\tilde{a}_{\\alpha}^{(i)}}\n\\cr\n= &\nq^{(i)}\n\\exp \\left[\n- \\beta \n\\left( \na_1^{(i)} - a_1^{(i-1)} - \\frac{N-2}{2} m^{(i-1,i)} + \\frac{N}{2} m^{(i,i+1)} \n\\right)\n\\right] \\, .\n\\end{split}\n\\label{QB_param}\n\\end{equation}\nInserting (\\ref{Qab_Qm}), (\\ref{QF_param}) and (\\ref{QB_param}) into the topological partition 
function (\\ref{genQuivPartFuncmain})\ngives precisely the Nekrasov partition function for the quiver theory in Figure~\\ref{fig:genQuiverDiag}.\n\nFrom the relations (\\ref{QF_Qab}), (\\ref{rec_QF}), and (\\ref{rec_QB}) we see that all the $Q_{\\alpha \\beta}^{(i)}$, $Q_{F\\alpha}^{(i)}$ for $1 \\le i \\le M-1$ and $Q_{B\\alpha}^{(i)}$ for $2 \\le \\alpha \\le N$ are not independent. The toric diagram in Figure~\\ref{fig:genQuiverDiag} shows that the remaining parameters $Q_{m\\alpha}^{(i)}$, $Q_{B}^{(i)}$ and $Q_{F \\alpha}^{(0)}$ are independent; this can also be deduced from the relations (\\ref{Qab_Qm}), (\\ref{QF_param}) and (\\ref{QB_q}). Moreover, the number of parameters adds up to $(M+1)(N+1)-3$, which is the same as the number of parameters of the $SU(N)^{M-1}$ gauge theory as discussed in Section \\ref{subsec:review_duality}. Therefore, they are in one-to-one correspondence with the gauge theory parameters. \n\n\\begin{figure}[htbp]\n \\begin{center}\n \\includegraphics[width=140mm,clip]{largefigure.eps}\n \\end{center}\n \\caption{The toric diagram for the linear $SU(N)$ quiver gauge theory. $Q_{B}^{(i)}$ is related to the gauge coupling constant $q^{(i)}$ of the $i$-th gauge group $SU(N)_{(i)}$. The Coulomb moduli of the $i$-th gauge group are given by $Q_{\\alpha\\beta}^{(i)}$. Since $SU(N)_{(0)}$ and $SU(N)_{(M)}$ are in fact not gauge groups but global flavor symmetries, the K\\\"ahler parameters $Q_{\\alpha\\beta}^{(0)}$ and $Q_{\\alpha\\beta}^{(M)}$ on the edges encode the masses of the (anti-) fundamental hypermultiplets living on the endpoints of the corresponding quiver diagram.}\n \\label{fig:large}\n\\end{figure}\nThe duality map of the reflection transformation is given by\n\\begin{equation}\n\\begin{split}\n(Q_{mi}^{(\\alpha-1)})_d = Q_{m \\alpha}^{(i-1)} \\, , & \\quad\n(Q_{Fi}^{(\\alpha-1)})_d = Q_{B \\alpha}^{(i)} \\, , \\cr\n\\qquad(Q_{Bi}^{(\\alpha)})_d = Q_{F \\alpha}^{(i-1)} \\, , & \\quad\n(Q_{\\alpha \\, \\alpha+1}^{(i)})_d = Q_{m \\, i+1}^{(\\alpha-1)} Q_{B \\, i+1}^{(\\alpha)} \\, .\n\\label{top_gen_map}\n\\end{split}\n\\end{equation}\nAgain, by taking into account (\\ref{QF_Qab}), (\\ref{rec_QF}), and (\\ref{rec_QB}), we see that in (\\ref{top_gen_map}) the second map (in the first line) for $2 \\le \\alpha \\le N$, the third (in the second line) for $2 \\le i \\le M$, and the fourth map can all be derived from the remaining maps and are thus redundant. Therefore, the independent ones are the first map, the second with $\\alpha=1$ and the third with $i=1$.\n\nFinally, we show that the duality map obtained here is equivalent to the one we found using the M5-brane analysis. To do so, it is enough to show that the independent duality relations in (\\ref{top_gen_map}) can also be derived from the relations (\\ref{gen_map}). Just as in the $SU(2)$ case, we combine this duality map with a {simultaneous} transformation $\\tilde{a}_{\\alpha}^{(i)} \\to \\tilde{a}_{\\alpha}^{(i)} {}^{-1}$ and $(\\tilde{a}_{\\alpha}^{(i)})_d \\to (\\tilde{a}_{\\alpha}^{(i)})_d^{-1}$, which is a symmetry of the Nekrasov partition function. 
Then, the first map, the second map (in the first line) for $\\alpha=1$, and the third map (in the second line) for $i=1$ in (\\ref{top_gen_map}) respectively become\n\\small\n\\begin{equation}\n\\begin{split}\n\\left( \\frac{\\tilde{a}^{(\\alpha-1)}_{i}}{\\tilde{a}^{(\\alpha)}_{i}} \\right)_d\n&= \\frac{\\tilde{a}^{(i-1)}_{\\alpha}}{\\tilde{a}^{(i)}_{\\alpha}} \\, ,\n\\\\\n\\left( \\frac{\\tilde{a}^{(1)}_{i}}{\\tilde{a}^{(0)}_{i+1}} \\right)_d\n&= q^{(i)}\n\\frac{\\tilde{a}_1^{(i-1)}}{\\tilde{a}_1^{(i)}}\n\\prod_{\\alpha=1}^N\n\\frac{\\sqrt{\\tilde{a}_{\\alpha}^{(i-1)} \\tilde{a}_{\\alpha}^{(i+1)}}}\n{\\tilde{a}_{\\alpha}^{(i)}} \\, ,\n\\\\\n\\frac{\\tilde{a}^{(1)}_{i}}{\\tilde{a}^{(0)}_{i+1}} \n&= \\left(\nq^{(i)}\n\\frac{\\tilde{a}_1^{(i-1)}}{\\tilde{a}_1^{(i)}}\n\\prod_{\\alpha=1}^N\n\\frac{\\sqrt{\\tilde{a}_{\\alpha}^{(i-1)} \\tilde{a}_{\\alpha}^{(i+1)}}}\n{\\tilde{a}_{\\alpha}^{(i)}}\n\\right)_d\n\\label{top_final_map}\n\\end{split}\n\\end{equation}\n\\normalsize\nafter inserting (\\ref{Qab_Qm}), (\\ref{QF_param}) and (\\ref{QB_param}). The first line in (\\ref{top_final_map}) is precisely the relation (\\ref{aa_aa}), from which the duality map (\\ref{gen_map}) is derived. The second line can be derived from (\\ref{gen_map}), while the third line in (\\ref{top_final_map}) is the same as the second line in (\\ref{top_final_map}) with the parameters of the original theory exchanged with the ones of the dual theory. \nSince the role of the original and the dual theory can be exchanged, the third line of (\\ref{top_final_map}) is also correct.\nWe have thus shown that the duality map obtained from the topological string analysis is identical to the one obtained from the M-theory analysis.\n\n\n\n\\section{From 5D $\\mathcal{N}=1$ gauge theory to 2D CFT}\n\\label{sec:GaugeToCFT}\n\n\nIn this section we discuss the implications of the 5D $SU(N)^{M-1} \\leftrightarrow SU(M)^{N-1}$ duality in 2D CFTs and we propose that the DOZZ three-point function of $q$-deformed Toda theory is obtained from the topological string partition function of $U(1)$ linear quivers.\nWe rewrite the $U(1)$ gauge theory partition function into the DOZZ three-point function of $q$-deformed Liouville theory that is given in \\cite{Kozcaz:2010af}. What is more, we\nextend it to $q$-deformed Toda theory and then\n conjecture that $q$-deformed Heisenberg free CFT on a multi-punctured sphere is dual to $q$-deformed Toda CFT on a three-punctured sphere.\nWe begin with a short review of the AGTW duality \\cite{Alday:2009aq,Wyllard:2009hg} between 4D $\\mathcal{N}=2$ $SU(N)$ conformal gauge quivers and 2D $A_{N-1}$ conformal Toda field theories and then turn to its 5D generalization between $\\mathcal{N}=1$ gauge theories and $q$-deformed Virasoro and $W_N$ algebra \\cite{Awata:2010yy}. The 5D gauge theory duality studied in this article then implies relations between correlation functions (conformal blocks) of the $q$-deformed Virasoro algebra and those of the $q$-deformed $W_N$ algebra. Ultimately, the 4D version of this duality should lead to relations between Liouville and Toda theories.\n\nIn \\cite{Gaiotto:2009we} Gaiotto was able to obtain a large class of $\\mathcal{N}=2$ superconformal field theories in four dimensions by compactifying (a twisted version of) the six-dimensional $(2,0)$ SCFT on a Riemann surface with genus $g$ and $n$ punctures. 
The parameter space of the exactly marginal gauge couplings of the 4D gauge theory is identified with the complex structure moduli space $\\mathcal{C}_{g,n} \/\\Gamma_{g,n}$ of the Riemann surface. The discrete group $\\Gamma_{g,n}$ corresponds to the generalized S-duality transformations of the 4D theory.\n\nSoon after, Alday, Gaiotto and Tachikawa conjectured \\cite{Alday:2009aq} that the instanton partition function of an $\\mathcal{N}=2$ $SU(2)$ quiver gauge theory in $\\Omega$ background is equal to the conformal block of the conformal Liouville theory on a certain Riemann surface $\\mathcal{C}_{g,n}$. This Riemann surface can be found in a systematic way from the quiver diagram of the 4D gauge theory\\footnote{The quiver diagram drawn \\`a la Gaiotto \\cite{Gaiotto:2009we} looks identical to the diagram associated with the conformal block.}. The two theories are equal under the following identification between their parameters\n\\begin{equation} \n\\epsilon_1 = b \\, , \\qquad \\epsilon_2 = \\frac{1}{b} \\, ,\n\\end{equation}\nwith the central charge of the Virasoro algebra being $c=1+6\\left(b+\\frac{1}{b}\\right)^2$. The coupling constants $q$ are identified with the cross-ratios $z$, the hypermultiplet masses $m$ (both flavor and bifundamental) correspond to the external momenta in the Liouville theory and the Coulomb moduli $a$ correspond to the internal momenta in the conformal block. Both external and internal momenta are denoted by $\\alpha$ here. \nThe AGTW conjecture has been proved for a special case in \\cite{Fateev:2009aw, Hadasz:2010xp, Mironov:2009qn, Mironov:2010pi}, and attempts at a proof in more general settings have been made by using a new basis of the Verma module \\cite{Alba:2010qc, Fateev:2011hq, Belavin:2011js}.\n\n\nThe one-loop contribution in the partition function precisely reproduces the product of the so-called DOZZ three-point function of the Liouville theory\n\\cite{Dorn:1994xn,Zamolodchikov:1995aa,Teschner:2001rv,Nakayama:2004vk}\n\\small\n\\begin{equation}\n\\begin{split}\n\\label{dozz}\n&C_{\\,{}_{\\textrm{DOZZ}}}(\\alpha_1,\\alpha_2,\\alpha_3)=\n\\left[\n\\pi\\, \\mu\\, \\gamma\\,\\left(b^2\\right)\\, b^{2-2b^2}\n\\right]^{\\frac{b+1\/b-\\sum_{i}\\alpha_i}{b}}\n\\\\\n&\\quad \\quad \\rule{0pt}{5ex}\n\\times\\frac{\\Upsilon_0\\,\\Upsilon_b (2\\alpha_1)\\Upsilon_b (2\\alpha_2)\\Upsilon_b (2\\alpha_3)}\n{\\Upsilon_b (\\alpha_1+\\alpha_2+\\alpha_3-b-1\/b)\n\\Upsilon_b (\\alpha_1+\\alpha_2-\\alpha_3)\n\\Upsilon_b (\\alpha_2+\\alpha_3-\\alpha_1)\n\\Upsilon_b (\\alpha_3+\\alpha_1-\\alpha_2)} \\,,\n\\end{split}\n\\end{equation}\n\\normalsize\nwhere the special function $\\Upsilon_b (x)$ is defined by\n\\begin{align}\n&\\Upsilon_b (x)=\\frac{1}{\\Gamma_b(x)\\Gamma_b(\\epsilon_1+\\epsilon_2-x)},\n\\\\\n&\\Gamma_b(x)\n=\n\\exp \\frac{d}{ds}\n\\frac{1}{\\Gamma(s)}\n\\int_0^\\infty\n\\frac{dt}{t}\\,\n\\frac{t^se^{-tx}}{(1-e^{-\\epsilon_1t})(1-e^{-\\epsilon_2t})}\n\\Big|_{s=0}\n\\propto\n\\prod_{m,n=0}^\\infty\n(x+\\epsilon_1m+\\epsilon_2n)^{-1}.\n\\end{align}\nFinally, the partition function of the 4D SCFT on $S^4$,\n\\begin{equation} \n\\int d a \\, a^2 \\, |\\,Z_{\\,\\textrm{Nek}}(a)\\,|^2\n\\end{equation}\nwith $Z_{\\textrm{Nek}} =Z_{\\textrm{tree}} Z_{\\textrm{1-loop}}Z_{\\textrm{inst}}$ being the full partition function, is equal to the correlation function of primary fields $V_\\alpha =e^{2 \\alpha \\phi}$ in the Liouville theory with conformal dimension $\\Delta =\\alpha\\left(b+\\frac{1}{b} - \\alpha \\right)$. 
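As a small consistency remark (ours, not taken from the references above), the definition of $\\Upsilon_b$ immediately implies the reflection property\n\\begin{align}\n\\Upsilon_b(x)=\\Upsilon_b\\left(b+\\frac{1}{b}-x\\right) \\, ,\n\\end{align}\nand, since the integral representation of $\\Gamma_b$ is symmetric under $\\epsilon_1\\leftrightarrow\\epsilon_2$, $\\Upsilon_b$ is also invariant under $b\\to 1\/b$; the conformal dimension $\\Delta=\\alpha\\left(b+\\frac{1}{b}-\\alpha\\right)$ is likewise invariant under the reflection $\\alpha\\to b+\\frac{1}{b}-\\alpha$ of the Liouville momenta. 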
Take the $SU(2)$ gauge theory with four flavors as an example, this theory corresponds to the Liouville CFT on the Riemann sphere with four punctures $\\mathcal{C}_{0,4}$. Quantitatively, the AGTW conjecture states\n\\begin{equation} \n\\int d \\mu(\\alpha)\\,C_{\\,\\textrm{DOZZ}}C_{\\,\\textrm{DOZZ}} \\, | \\,q^{\\Delta-\\Delta_1-\\Delta_2} \\mathcal{B}_{0,4}(\\alpha)\\,|^2\n\\propto\n\\int d a \\, a^2 \\, |\\,Z_{\\,\\textrm{Nek}}(a)\\,|^2 \\, ,\n\\end{equation}\nwhere the two DOZZ factors come from the decomposition of the four punctured sphere into two pants. The conformal block $\\mathcal{B}$ is then equal to the instanton part of the Nekrasov partition function, and the ``square root'' of the DOZZ part gives the perturbative correction of the partition function.\n\nThe 5D extension of the conjecture suggests that the instanton part of the 5D Nekrasov partition function is equal to the conformal block of a $q$-deformed CFT. Schematically this conjectured duality is the following equality\n\\begin{equation} \n\\mathcal{B}^{\\,q-\\textrm{Liouville}}(\\alpha)=\nZ^{\\,\\textrm{5D}}_{\\,\\textrm{Nek}}(a) \\, .\n\\end{equation}\nIn \\cite{Awata:2010yy} the authors studied the case of $SU(2)$ pure SYM, which is the simplest setup of the AGT duality, and they found that the partition function coincides with the irregular conformal block of the $q$-deformed Virasoro algebra. Although the 5D extension of the instanton counting is established, the theoretical framework of $q$-deformed CFT's is not well developed. Therefore, we cannot establish the duality for the full sector yet.\nThe $q$-deformation of conformal field theory should first be developed to reveal the scope of the AGTW duality. However, by assuming the 5D AGTW conjecture, we will now illustrate how the gauge theory duality studied in Section \\ref{sec:MtheoryDeriv} and \\ref{sec:TopStringDeriv} can be used to make predictions about $q$-deformed CFT's. Although we have mostly reviewed the $SU(2)$ quiver case, the ideas can be generalized to $SU(N)$ quivers.\n\n\n\\begin{figure}[t]\n\\qquad \\qquad \\qquad \\qquad \\quad \n\\includegraphics[scale=0.5]{Todacorrelator.eps} \n\\qquad \\qquad \\qquad \\qquad \\qquad \n\\includegraphics[scale=0.5]{liouvilecorrelator.eps}\n\\caption{The $SU(3) \\leftrightarrow SU(2)\\times SU(2)$ duality implies that the four-point $W_{3}$ Toda correlator on a sphere (left) should be equal to the five-point Liouville correlator on a sphere (right). 
The black points denote $U(1)$ punctures and the encircled ones $SU(3)$ punctures in the $W_{3}$ Toda theory respectively, whereas the grey points correspond to $SU(2)$ punctures in the Liouville theory.}\n\\label{SU(3)example}\n\\end{figure}\n\n\n\\subsection{5D quiver $U(1)$ gauge theories and $q$-deformation of DOZZ}\n\nIn this subsection we give an example involving $U(1)$ gauge theory, whose instanton partition function is given by\n\\small\n\\begin{equation} \nZ^{\\,\\textrm{5D inst}}_{\\,U(1)}=\n\\sum_{Y}\\,\n{q}^{|Y|}\n\\frac{\\prod_{(i,j)\\in Y}\\sinh \\frac{\\beta}{2}(m_1+\\hbar(i-j))\\sinh \\frac{\\beta}{2}(-m_2+\\hbar(i-j))}\n{\\prod_{(i,j)\\in Y}\\sinh \\frac{\\beta}{2}(\\hbar(Y_i+Y^T_j-i-j+1))\\sinh \\frac{\\beta}{2}(-\\hbar(Y_i+Y^T_j-i-j+1))}\n\\, ,\n\\end{equation}\n\\normalsize\nwith one fundamental and one anti-fundamental hypermultiplet.\nMoreover, we introduce the perturbative part of the partition function:\n\\begin{equation} \nZ^{\\,\\textrm{5D pert}}_{\\,U(1)}={[\\emptyset,\\emptyset]_{e^{-\\beta m_1}}[\\emptyset,\\emptyset]_{e^{-\\beta m_2}}} \\, ,\n\\end{equation}\nwhere the bracket is defined in (\\ref{bracket}). The full Nekrasov partition function is the product of the two:\n$Z^{\\,\\textrm{5D}}_{\\,U(1)}=Z^{\\,\\textrm{5D pert}}_{\\,U(1)}Z^{\\,\\textrm{5D inst}}_{\\,U(1)}$.\nBy using the techniques from the topological vertex formalism, we can perform the summation inside the full partition function to obtain\n\\begin{equation}\n\\label{eq:U1PartFunc}\nZ^{\\,\\textrm{5D}}_{\\,U(1)}=\n\\frac{[\\emptyset,\\emptyset]_{Q_1}[\\emptyset,\\emptyset]_{Q_F}[\\emptyset,\\emptyset]_{Q_2}[\\emptyset,\\emptyset]_{Q_1Q_FQ_2}}\n{[\\emptyset,\\emptyset]_{Q_1{Q_F}}[\\emptyset,\\emptyset]_{{Q_F}Q_2}} \\, .\n\\end{equation} \nThe right hand side of this equation has appeared already in \\cite{Iqbal:2004ne,Kozcaz:2010af}.\nThe parameters are defined as\n\\begin{equation} \nQ_i=e^{-\\beta m_i}\\,\\,(i=1,2)\n\\quad \\text{and} \\quad\n-Q_F{\\sqrt{Q_1Q_2}}=q \\, .\n\\end{equation}\n\nWhat is interesting here is that the expression (\\ref{eq:U1PartFunc}) corresponds to the $q$-deformed DOZZ function \\cite{Kozcaz:2010af}\n\\begin{equation} \n\\mid[\\emptyset,\\emptyset]_{Q_1}[\\emptyset,\\emptyset]_{Q_F}[\\emptyset,\\emptyset]_{Q_2}[\\emptyset,\\emptyset]_{Q_1Q_FQ_2}\\mid^2\n\\propto\nC_{\\,\\textrm{DOZZ}}^{\\,\\mathfrak{q}} \\, ,\n\\end{equation} \nwhere the $q$-deformation parameter is $\\mathfrak{q}=e^{-\\beta \\hbar}$ and the identification of parameters takes the form\n\\begin{equation} \nQ_1=e^{-\\beta(-\\alpha_1-\\alpha_2+\\alpha_3)} \\, , \\quad\nQ_F=e^{-\\beta(-\\alpha_1+\\alpha_2-\\alpha_3)} \\, , \\quad\nQ_2=e^{-\\beta(\\alpha_1-\\alpha_2-\\alpha_3)} \\, .\n\\end{equation}\nBy using the rotational duality described in Appendix~\\ref{app:90Rotation}, the $q$-deformed DOZZ function is expected to be given by the following replacement of the $\\Upsilon$-function in the definition (\\ref{dozz}):\n\\begin{align}\n\\Upsilon_b (x)=\\frac{1}{\\Gamma_b(x)\\Gamma_b(\\epsilon-x)}\n\\quad \\longrightarrow \\quad\n\\Upsilon_b^{\\,\\mathfrak{q}} (x)\n=\\frac{1}{\\Gamma_b^{\\,\\mathfrak{q}}(x)\\Gamma_b^{\\,\\mathfrak{q}}(\\epsilon-x)} \\, ,\n\\end{align}\nwhere $\\Gamma_b^{\\,\\mathfrak{q}}(x)\\propto \\prod_{i,j} \\left( \\sinh \\frac{\\beta}{2}(x+\\epsilon_1 i+\\epsilon_2j) \\right)^{-1}$. 
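As a rough cross-check (our own remark), in the 4D limit $\\beta\\to 0$ each factor behaves as $\\sinh \\frac{\\beta}{2}(x+\\epsilon_1 i+\\epsilon_2 j)\\simeq\\frac{\\beta}{2}(x+\\epsilon_1 i+\\epsilon_2 j)$, so that, up to an overall normalization, $\\Gamma_b^{\\,\\mathfrak{q}}$ reduces to the undeformed $\\Gamma_b$ and the $q$-deformed DOZZ function goes back to (\\ref{dozz}). 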
The idea is illustrated using the toric diagrams in Figure~\\ref{fig:abelquiv}, where the $U(1)^{N-1}$ quiver gauge theory is on the left\\footnote{See \\cite{Tai:2010ps} for work related to this idea.}. The rotated diagram on the right depicts the so-called 4D Gaiotto theory for the sphere with two full punctures and one simple puncture.\n\\begin{figure}[htbp]\n \\begin{center}\n \\includegraphics[width=120mm,clip]{AbelianQuiver.eps}\n \\end{center}\n \\caption{The toric diagram for $U(1)^2$ linear quiver (left) and the free hypermultiplets (right).}\n \\label{fig:abelquiv}\n\\end{figure}\nThe AGT dual of this $U(1)$ gauge theory partition function is the DOZZ three-point function of the rank-$N$ Toda field theory. In response to Gaiotto's construction of the 4D gauge theory, the DOZZ function is the three-point function for two full primary fields and one semi-degenerate field. Above we studied $U(1)$ gauge theory with two flavors, which is dual to Liouville theory on the sphere with 3 punctures. Since we consider the 5D uplift of the gauge theory, 2D CFT is replaced by the $q$-analogue of it. It is straightforward to extend this argument to generic $\\Omega$ background, in which case the 2D CFT with generic central charge appears.\n\nUsing the idea above we can conjecture the $q$-analogue of the Toda DOZZ function. We consider the $U(1)^{N-1}$ linear quiver gauge theory. The toric diagram for this theory is shown on the left in Figure~\\ref{fig:abelquiv}. With the formalism of the \\textit{refined} topological vertex, we can compute the closed form of the full Nekrasov partition function\n\\small\n\\begin{equation} \n \\label{5Dabelianquiver}\nZ^{\\,\\textrm{5D}}_{\\,U(1)^{N-1}}=\n\\prod_{i,j=1}^\\infty\n\\frac{\\prod_{1\\leq a\\leq b\\leq N}\\left(1-Q_{ab}Q_{m\\,b}\\mathfrak{t}^{i-\\frac{1}{2}}\\mathfrak{q}^{j-\\frac{1}{2}} \\right)\n\\prod_{1\\leq a\\le b\\leq N}\\left(1-Q_{m\\,a}^{-1}Q_{ab}\\mathfrak{t}^{i-\\frac{1}{2}}\\mathfrak{q}^{j-\\frac{1}{2}} \\right)\n}\n{\n\\prod_{1\\leq a\\le b\\leq N}\\left(1-Q_{ab}\\mathfrak{t}^{i-\\frac{3}{2}}\\mathfrak{q}^{j+\\frac{1}{2}} \\right)\n\\left(1-Q_{m\\,a}^{-1}Q_{ab}Q_{ab}\\mathfrak{t}^{i+\\frac{1}{2}}\\mathfrak{q}^{j-\\frac{3}{2}} \\right)\n} \\, ,\n\\end{equation}\n\\normalsize\nwhere $Q_{ab}=\\prod_{i=a}^{b-1}Q_{m\\,i}{q}^{(i)}$. In order to relate this expression to the combinatorial form of the instanton part of the partition function, we have to assume the slicing invariance of the refined topological vertex (see \\cite{Iqbal:2008ra} for details). Our claim is that the square of (\\ref{5Dabelianquiver}) gives the major portion of the DOZZ three-point function of the ``$q$-deformed $sl(N)$ Toda field theory'' on sphere with two full primary fields and one semi-degenerate field. This result would be a powerful guide to formulate a yet-unknown $q$-deformation of the Toda field theory.\n\nWe can also recast our proposal as a duality between the $(M+2)$-point function of $W_{N}$-algebra and the $(N+2)$-point function of $W_{M}$-algebra. \nSee Figure \\ref{SU(3)example} for an example. 
The $q$-deformed conformal blocks for the Heisenberg algebra are defined in the form of the Dotsenko-Fateev integral representation \\cite{Mironov:2011dk,Awata:2010yy}, and we can see that these conformal blocks give the 5D Nekrasov partition functions for $U(1)$ quiver gauge theories\\footnote{See \\cite{Losev:2003py,Marshakov:2009gs,Alba:2009ya}\nfor the 4D version of the AGTW ``Heisenberg\/$U(1)$'' duality, which implies the equality between the free conformal block for the Heisenberg algebra and the Nekrasov partition function for $U(1)$ quiver gauge theory. This is a simplified toy model for the original AGT duality of Virasoro\/$SU(2)$.}. The simplest situation we have studied in this subsection is thus the equivalence between the $(N+2)$-point function of the ``$W_1$'' (Heisenberg) algebra and the three-point function of the $W_{N}$ algebra. This conjecture for $W_{N}$ algebras is the direct consequence of combining the duality from Section~\\ref{sec:TopStringDeriv} and the AGTW conjecture. It gives a CFT analogue of this duality, which can be valuable in the studies of 2D CFT.\n\n\n\\section{Summary and discussion}\n\\label{sec:Discussion}\n\n\nIn this paper, we have studied the duality between two 5D ${\\cal N}=1$ linear quiver gauge theories compactified on $S^1$ with gauge groups $SU(N)^{M-1}$ and $SU(M)^{N-1}$, respectively. We have found the explicit map between the gauge theory parameters of these two theories, under which they describe the same \nlow energy effective theory on the Coulomb branch. We have derived the map both by considering the M5-brane configuration and by calculating the topological string partition function. There are several interesting extensions and applications of this duality.\n\n\\smallskip\n\nThe implications of this duality in 2D CFT through the 5D extension of the AGTW conjecture have been discussed above.\nWe conjectured the three-point function of $q$-deformed Toda theory from the topological string partition function of the U(1) linear quiver. Moreover, we proposed a duality between the $(M+2)$-point function of the $q$-deformed $W_{N}$-algebra and the $(N+2)$-point function of the $q$-deformed $W_{M}$-algebra. An interesting future direction is to study in detail the duality we have proposed here between Liouville and Toda correlation functions.\n\n\\smallskip\n\nAlthough it is natural and interesting to consider the 4D limit of this duality, it seems to be subtle. In an upcoming paper \\cite{index} we follow a simple path to the 4D version of this duality, where the 4D superconformal index \\cite{Kinney:2005ej,Romelsberger:2005eg} is used to study the duality between the 4D conformal $\\mathcal{N}=2$ $SU(N)^{M-1}$ and $SU(M)^{N-1}$ linear quivers. The superconformal index counts the multiplets that obey shortening conditions, up to equivalence relations that set to zero all the short multiplets that can recombine into long multiplets. Basically, it knows the complete list of protected operators in a superconformal theory. Together with one-loop computations, the analysis of the chiral ring, and representation theory arguments, it was used to study the spectrum of $\\mathcal{N}=2$ superconformal QCD at large $N$ in \\cite{Gadde:2009dj,Gadde:2010zi}. What is more, there is a relation between the 4D superconformal index and topological quantum field theories in 2D \\cite{Gadde:2009kb, Gadde:2010te, Gadde:2011ik}, which provides a simpler version of the AGTW relation between 4D partition functions and 2D CFT correlators. 
The index is the partition function on $S^3 \\times S^1$ \\cite{Festuccia:2011ws}; it is coupling-independent and easier to calculate than Pestun's partition function on $S^4$. It is related to a 2D TQFT correlation function \\cite{Gadde:2009kb} as opposed to the full-fledged CFT correlation function that is required in AGTW. The superconformal index has been successfully used to test ${\\mathcal N}=1$ Seiberg duality \\cite{Romelsberger:2005eg,Romelsberger:2007ec} \nand ${\\mathcal N}=1$ toric duality \\cite{Gadde:2010en} (as well as AdS\/CFT \\cite{Kinney:2005ej}).\n\n\\smallskip\n\nLow energy physics of supersymmetric gauge theories can also be captured by matrix models. Different types of matrix models have been studied in this context. First, the (``old'') Dijkgraaf-Vafa matrix model \\cite{Dijkgraaf:2002fc,Dijkgraaf:2002vw,Dijkgraaf:2002dh} gives the low energy effective superpotential of 4D ${\\cal N}=1$ gauge theory that is obtained by \ndeforming ${\\cal N}=2$ with the addition of superpotential terms of polynomial type. \nThe action of this matrix model is given by its tree-level superpotential.\nAnother matrix model was later proposed by the same authors in \\cite{Dijkgraaf:2009pc}. The ``new'' Dijkgraaf-Vafa matrix model gives Nekrasov's partition function of 4D ${\\cal N}=2$ gauge theory, and, through the AGTW conjecture, the conformal block of the Liouville\/Toda CFT \\cite{Itoyama:2009sc,Awata:2010yy,Eguchi:2009gf,Mironov:2011dk}.\nSince the prepotential of the ${\\cal N}=2$ gauge theory can be reproduced from the low energy effective superpotential \\cite{Cachazo:2002pr}, these two matrix models should be closely related even if they are computing different quantities. Indeed, both of them are introduced in the context of topological string theory in such a way that \nthe spectral curves of these matrix models reproduce the Seiberg-Witten curve.\n\nHowever, at first sight they look quite different in the following way. On the one hand, in the ``old'' Dijkgraaf-Vafa matrix model the matrix corresponds to the zero-modes of the adjoint scalar fields. Therefore, $SU(N)$ theory is studied using a single matrix while $SU(2)^{M-1}$ requires a quiver matrix (multi-matrix) model. On the other hand, in the ``new'' Dijkgraaf-Vafa matrix model $SU(2)^{M-1}$ theory corresponds to the single matrix model with multi-Penner type action, while $SU(N)$ theory corresponds to the quiver matrix model with $N-1$ adjoint matrices \\cite{Schiappa:2009cc}. As was already pointed out in \\cite{Dijkgraaf:2009pc}, the role of the base and the fiber of the Calabi-Yau geometry is inverted in the second matrix model compared to the first one. Since the structure of the base and the fiber \nis related to the numbers $N$ and $M$ of the $SU(N)^{M-1}$ gauge theory, this implies that these matrix models are related by the duality studied in this paper. We expect that the duality will play an important role in understanding the relation between these matrix models.\n\n\\smallskip\n\nSeveral other kinds of extensions of the duality we study here are also possible. In this article we focus on the duality between theories which are 5D uplifts of 4D superconformal field theories. It should be possible to extend the duality to the theories which are uplifts of asymptotically free theories.\nIn such cases it is known that we can introduce the Chern-Simons term \\cite{Seiberg:1996bd} in the action. The configuration of the M5-brane curve depends on the Chern-Simons level \\cite{Brandhuber:1997ua} and thus the duality will also act on it. 
Considering such an effect would be interesting.\n\nThe extension to the elliptic quiver gauge theories, including $\\mathcal{N}=2^*$ theory, is another future direction. Such quiver gauge theories are obtained by further compactifying the $x^6$ direction in addition to the $x^5$ direction in Table \\ref{config}. Following \\cite{Vafa:1997mh,Tachikawa:2011ch}, the S-duality corresponding to the electric-magnetic duality appears by compactifying the $x^6$ direction in the special case where no NS5-branes are placed. The duality studied in this article can be also interpreted as S-duality, but it acts on the gauge theories in a totally different manner than the conventional electric-magnetic duality. The elliptic quiver gauge theories will offer an interesting playground to understand these two different types of S-dualities in a unified manner.\n\nIn the present article, we have studied the duality in the self-dual $\\Omega$ background. The extension to the generic $\\Omega$ background would be an important direction related to the existence of a preferred direction in the refined topological vertex. The conjectured slicing invariance would then be crucial for extending our result to the refined case. The duality maps we have derived are maintained even after switching on the self-dual $\\Omega$ background. However, it is non-trivial whether the generic $\\Omega$ background modifies the maps.\n\nConsidering this duality for generic $\\Omega$ background in the context of the integrable system would be also interesting, where the ``quantum Seiberg-Witten curve\"\nappears as the Hamiltonian of the Schr\\\"odinger equation. If we manage to find the explicit expression of the 5D Hamiltonian \\cite{Aganagic:2011mi}, then it would be straightforward to obtain the duality map by using the same method we employed in this paper. \nThe Nekrasov-Schatashivilli \\cite{Nekrasov:2009ui,Nekrasov:2009rc,Nekrasov:2009zz} limit is especially interesting because the time-dependent terms\\footnote{The time coordinates are interpreted as gauge couplings.} in the Schr\\\"odinger equation are expected to vanish there. We then get a simple eigenvalue problem as an alternative way to solve quantum gauge theory.\n\n\n\\section*{Acknowledgements}\nIt is a pleasure to thank Giulio Bonelli, Nadav Drukker, Tohru Eguchi, Amihay Hanany, Kazunobu Maruyoshi, Sara Pasquetti, Filippo Passerini, Leonardo Rastelli, Kazuhiro Sakai, Yuji Tachikawa, Alessandro Tanzini, Niclas Wyllard and Konstantinos Zoubos for useful discussions and correspondence. L.B., E.P. and F.Y. would like to thank IHES for providing a stimulating atmosphere during the course of this work. E.P. wishes to thank IHP for its warm hospitality as this work was in progress. The work of E.P. is supported by the Humboldt Foundation.\nThe research of M.T. is supported in part by JSPS Grant-in-Aid for Creative Scientific Research No. 19GS0219.\n F.Y. is partially supported by the INFN project TV12. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:Introduction}\nOver-parameterization {helps} deep neural networks (DNNs) to generalize better in real-life applications \\cite{brown2020language, huang2018gpipe, alex2019big,zagor2016}, despite providing them with the capacity to fit almost any set of random labels~\\cite{Zhang17}. 
\nThis~phenomenon has spawned a growing body of work that aims at identifying fundamental differences between real and random labels, such as in training time \\cite{arpit2017closer, gu2019neural,han2018co,zhang2018generalized}, sharpness of the minima \\cite{keskar2016large,neyshabur2017exploring}, dimensionality of layer embeddings \\cite{ansuini2019intrinsic,collins2018detecting,pmlr-v80-ma18d}, and sensitivity \\cite{arpit2017closer,novak2018sensitivity}, among other complexity measures \\cite{bartlett1998sample,Bartlett2017,neyshabur2015norm,neyshabur2017exploring}. \nWhile it is obvious that over-parameterization helps DNNs to interpolate any set of random labels, it is not immediately clear what DNNs \\emph{learn} when trained in this setting.\nThe objective of this study is to provide a partial answer to this question. \n\nThere are at least two reasons why answering this question is of value. \nFirst, \nin order to understand how DNNs work, it is imperative to observe how they behave under ``extreme'' conditions, such as when trained with labels that are entirely random. \nSince the pioneering work of~\\cite{Zhang17}, several works have looked into the case of random labels. \nWhat distinguishes our work from others is that previous works aimed to demonstrate {differences} between real and random labels, highlighting the \\emph{negative} side of training on random labels. By contrast, this work provides insights into what properties of the\ndata distribution DNNs learn when trained on random labels.\n\nSecond, observing DNNs trained on random labels can explain phenomena that have been previously noted, but were poorly understood. In particular, by studying what is learned on random labels, we offer new insights into: (1)~why DNNs exhibit critical stages \n\\cite{achille2017critical, frankle2020early}, (2)~how earlier layers in DNNs generalize while later layers specialize \\cite{ansuini2019intrinsic,arpit2017closer,cohen2018dnn,yosinski2014transferable}, (3)~why the filters learned by DNNs in the first layer seem to encode some useful structure when trained on random labels \\cite{arpit2017closer}, \nand (4)~why pre-training on random labels can accelerate training in downstream tasks \\cite{pondenkandath2018leveraging}. \nWe show that even when controlling for simple explanations like weight scaling (which was not always accounted for previously), such curious observations continue to hold. \n\nThe main contributions of this work are:\n\\begin{itemize}[leftmargin=*]\n \\item We investigate DNNs trained with random labels and fine-tuned on disjoint image data with real or random labels, demonstrating unexpected positive and negative effects.\n \\item We provide explanations of the observed effects. \n We show analytically for convolutional and fully connected networks that an alignment between the principal components of the network parameters and the data takes place. \n We demonstrate experimentally how this effect explains why pre-training on random labels helps. \n We also show why, under certain conditions, pre-training on random labels can hurt the downstream task due to specialization at the later layers. 
\n \\item We conduct experiments verifying that these effects are present in several network architectures, including VGG16 \\cite{simonyan2014very} and ResNet18-v2 \\cite{He2016}, on CIFAR10 \\cite{Krizhevsky09learningmultiple} and\n ImageNet ILSVRC-2012~\\cite{deng2009imagenet}, across a range of hyper-parameters,\n such as the learning rate, initialization,\n number of training iterations, \n width and depth.\n\\end{itemize}\n\nIn this work,\nwe do not use data augmentation as it provides a (weak) supervisory signal. \nMoreover, we use the terms ``positive'' and ``negative'' to describe the impact of what is learned with random labels on the downstream training, such as faster\/slower training.\nThe networks reported throughout the paper are taken from a big set of experiments that we conducted using popular network architectures, datasets, and wide hyperparameter ranges. \nExperimental details are provided in Appendix~A and~B. \nWe use boldface for random variables, \nsmall letters for their values,\nand capital letters for matrices.\n\n\\subsection{Motivating example}\n\n\\begin{figure}[tb]\n \\scriptsize \\sffamily\n \\centering\n \\begin{minipage}{0.23\\textwidth}\n \\centering \n 1. Pre-training helps\\\\\n (real labels)\n \\end{minipage}\n \\begin{minipage}{0.23\\textwidth}\n \\centering\n 2. Pre-training helps\\\\\n (random labels)\n \\end{minipage}\n \\begin{minipage}{0.23\\textwidth}\n \\centering\n 3. Pre-training hurts\\\\\n (real labels)\n \\end{minipage}\n \\begin{minipage}{0.23\\textwidth}\n \\centering\n 4. Pre-training hurts\\\\\n (random labels)\n \\end{minipage}\n \\includegraphics[width=0.23\\textwidth,height=0.21\\textwidth]{images\/figure1_vgg16_positive_real.pdf}\n \\includegraphics[width=0.23\\textwidth,height=0.21\\textwidth]{images\/figure1_vgg16_positive_random.pdf}\n \\includegraphics[width=0.23\\textwidth,height=0.21\\textwidth]{images\/figure1_vgg16_negative_real.pdf}\n \\includegraphics[width=0.23\\textwidth,height=0.21\\textwidth]{images\/figure1_vgg16_negative_random.pdf}\n \\caption{Pre-training on random labels may exhibit both positive ({\\sc 1 \\& 2}) and negative ({\\sc 3 \\& 4}) effects on the downstream fine-tuning depending on the setup.\n VGG16 models are pre-trained on\n CIFAR10 examples with random labels and subsequently fine-tuned on the fresh\n CIFAR10 examples with either real labels ({\\sc 1 \\& 3}) or 10 random labels ({\\sc 2 \\& 4}) using different hyperparameters.\n \n }\n \\label{fig:image1}\n \n\\end{figure}\n\nFigure~\\ref{fig:image1} shows learning curves of the VGG16 architecture~\\cite{simonyan2014very} pre-trained \non 20k CIFAR10 examples~\\cite{Krizhevsky09learningmultiple} with random labels (upstream) and fine-tuned on a disjoint subset of 25k CIFAR10 examples with either random or real labels (downstream). \nWe observe that in this setup, pre-training a neural network on images with random labels accelerates training on a second set of images, both for real and random labels (positive effect). However, in the \\emph{same setting} but with a different initialization scale and number of random classes upstream, a negative effect can be observed downstream: training becomes slower. 
\nWe also observe a lower final test accuracy for real labels in both cases, which we are \nnot explicitly investigating in this paper (and which has been observed before, e.g.\\ in~\\cite{frankle2020early}).\n\nThe fact that pre-training on random labels can accelerate training downstream has been observed previously, e.g.\\ in \\cite{pondenkandath2018leveraging}. However, there is a ``simple'' property that can explain improvements in the downstream task: Because the cross-entropy loss is scale-sensitive, training the network tends to increase the scale of the weights \\cite{neyshabur2017exploring}, which can increase the effective learning rate of the downstream task (see the gray curve in Figure~\\ref{fig:PCA_init}).\nTo eliminate this effect, in all experiments we re-scale the weights of the network after pre-training to match their $\\ell_2$ norms at initialization.\nWe show that even after this correction, pre-training on random labels positively affects the downstream task.\nThis holds for both VGG16 and ResNet18 trained on CIFAR10 and ImageNet (see Appendix B).\n\nWe show experimentally that some of the positive transfer is due to the second-order statistics of the network parameters. \nWe prove that\nwhen trained on random labels, the principal components of weights at the first layer are aligned with the principal components of data. \nInterestingly, this \\emph{alignment effect} implies that the model parameters learned at the first layer can be summarized by a {one-dimensional mapping} between the eigenvalues of the data and the eigenvalues of the network parameters.\nWe study these mappings empirically and raise some new open questions. \nWe also analyze how, under certain conditions, a competing effect of specialization at the later layers may hide the positive transfer of pre-training on random labels, which we show to be responsible for the negative effect demonstrated in Figure \\ref{fig:image1}. \n\nTo the best of our knowledge, the alignment effect has not been established in the literature before. \nThis paper proves the existence of this effect and studies its implications.\nNote\nthat while these effects are established for training on random labels,\nwe also observe them empirically for real labels. \n\n\\subsection{Related work}\n\\label{sec:PreviousWorks}\n\nA large body of work in the literature has developed techniques for mitigating the impact of \\emph{partial} label noise, such as \\cite{zhang2018generalized,Huber2011,natarajan2013learning,jiang2018mentornet,li2017learning,liu2015classification,sukhbaatar2014learning}. Our work is distinct from this line of literature because we focus on the case of purely random labels.\n\nThe fact that positive and negative learning takes place is related to the common observation that earlier layers in DNNs learn general-purpose representations whereas later layers specialize \\cite{ansuini2019intrinsic,arpit2017closer,cohen2018dnn,yosinski2014transferable}. For random labels, it has been noted that memorization happens at the later layers, as observed by measuring the classification accuracy using activations as features \\cite{cohen2018dnn} or by estimating the intrinsic dimensionality of the activations \\cite{ansuini2019intrinsic}. We show that specialization at the later layers has a negative effect because it exacerbates the inactive ReLU phenomenon. 
Inactive ReLUs have been studied in previous works, which suggest that this effect could be mitigated by either increasing the width or decreasing the depth \\cite{lu2019dying}, using skip connections \\cite{douglas2018relu}, or using other activation functions, such as the leaky ReLU \\cite{maas2013rectifier,he2015} or the exponential learning unit (ELU)~\\cite{clevert2015fast}. \n\nFor transfer learning, it has been observed that pre-training on random labels can accelerate training on real labels in the downstream task \\cite{pondenkandath2018leveraging}. However, prior works have not accounted for simple effects, such as the change in first-order weight statistics (scaling), which increases when using the scale-sensitive cross-entropy loss \\cite{neyshabur2017exploring}. Changing the norm of the weights alters the effective learning rate. \\cite{Raghu19} investigated transfer from ImageNet to medical data and observed that the transfer of first-order weight statistics provided faster convergence. \nWe show that even when taking the scaling effect into account, additional gains from second-order statistics are identified. \n\nOther works have considered PCA-based convolutional filters either as a model by itself without training~\\cite{Gan15,Dehmamy2019}, as an initialization ~\\cite{Ren16,Wagner13}, or to estimate the dimensionality of intermediate activations~\\cite{collins2018detecting,montavon2011kernel}.\nNote that our results suggest an initialization by \\textit{sampling} from the data covariance instead of initializing the filters directly using the principal axes of the data.\n\\mbox{\\citet{Ye2020deconv}} show that a ``deconvolution'' of data at input and intermediate layers can be beneficial. This deconvolution corresponds to a whitening of the data distribution, therefore aligning data with an isotropic weight initialization, which is related to a positive effect of alignment we observe in this paper.\n\n\nIn addition, there is a large body of work on unsupervised learning.\nAmong these, the \\emph{Exemplar-CNN} method~\\cite{Dosovitskiy14} can be seen as the limiting case of using random labels with infinitely many classes (one label per image) and large-scale data augmentation. \nIn our study\nwe do \\emph{not} use data augmentation since it provides a supervisory signal to the neural network that can cause additional effects.\n\n\\section{Covariance matrix alignment between network parameters and data}\n\\label{sec:PCA}\n\nReturning to the motivating example in Figure \\ref{fig:image1}, we observe that pre-training on random labels can improve training in the downstream tasks for both random and real labels. This improvement is in the form of \\emph{faster training}. In this section, we explain this effect. We start by considering the first layer in the neural network, and extend the argument to later layers in Section~\\ref{subsec:DeepNetworks}.\n\n\\subsection{Preliminaries}\\label{sect::pca_preliminaries}\nLet $\\calD$ be the probability distribution over the instance space $\\mathcal{X}\\subseteq\\BR^d$ and $\\mathcal{Y}$ be a finite target set.\nWe fix a network architecture, a loss function, a learning rate\/schedule, and a \ndistribution of weights for random initialization. Then, ``training on \nrandom labels'' corresponds to the following procedure: We randomly sample i.i.d.\\ instances $ \\textbf{x}_1,..., \\textbf{x}_N \\sim \\calD$,\nand i.i.d.\\ labels $\\textbf{y}_1,...,\\textbf{y}_N\\in \\mathcal{Y}$ independently of each other. 
\nWe also sample the initial weights of the neural network, and\n train the network on the data $\\{(\\textbf{x}_i, \\textbf{y}_i)\\}_{i=1,\\ldots,N}$ for $T$ training iterations using stochastic gradient descent (SGD). During training, the\nweights are \\emph{random variables} due to the randomness of the initialization and the training sample. Hence, we can speak of their statistical properties, such as their first and second moments. \n\nIn the following, we are interested in layers that are convolutional or fully connected. We assume that the output of the $k$-th neuron in the first layer can be written as: $f_k(x)=g(\\langle w_k,\\,x\\rangle +b_k)$ for some activation function $g$. \nWe write $\\mu_x=\\mathbb{E}[\\textbf{x}]$ and observe that since the covariance matrix $\\Sigma_x = \\mathbb{E}[(\\textbf{x}-\\mu_x)\\cdot (\\textbf{x}-\\mu_x)^T]$ is symmetric positive semi-definite,\nthere exists an orthogonal de\\-com\\-po\\-si\\-tion $\\BR^d = V_1 \\oplus ... \\oplus V_r$\nsuch that $V_i$ are eigenspaces to $\\Sigma_x$ with distinct eigenvalues $\\sigma_i^2$.\n\n\\begin{definition}[Alignment]\\label{def:alignment} A symmetric matrix $A$ is said to be \\emph{\\textbf{aligned}} with a symmetric matrix $B$ if each eigenspace of $B$ is a subset of an eigenspace of $A$. If $A$ is aligned with $B$, we define for each eigenvalue\nof $B$ with eigenspace $V\\subseteq \\BR^d$ the \\emph{\\textbf{corresponding}} eigenvalue of \n$A$ as the one belonging to the eigenspace that contains $V$.\n\\end{definition}\n\nIf $A$ and $B$'s eigenspaces are all of dimension 1 (which\nis true except for a Lebesgue null set in the space of symmetric matrices),\n ``$A$ is aligned with $B$'' becomes equivalent to the assertion that they share the same eigenvectors. \nHowever, the\nrelation is not symmetric in general (e.g.\\ only scalar multiples of the identity matrix $I_d$ are aligned with $I_d$, but $I_d$ is aligned with any symmetric matrix).\n\n\\subsection{Alignment for centered Gaussian inputs}\n\\label{subsec:Gaussian}\n\n\\begin{proposition}\n\\label{prop:1}\nAssume the instances $\\textbf{x}$ are drawn i.i.d. from $\\mathcal{N}(0,\\Sigma_x)$ and the initial weights in\nthe first layer are drawn from an isotropic distribution (e.g. the standard Gaussian). Let $\\textbf{w}\\in\\mathbb{R}^d$ be a random variable whose value is drawn uniformly at random from the set of weights in the first layer after training using SGD with random labels (see Section \\ref{sect::pca_preliminaries}). Then: (1) $\\mathbb{E}[\\textbf{w}]=0$ and (2) $\\Sigma_w = \\mathbb{E}[\\textbf{w}\\cdot\\textbf{w}^T]$ is aligned with the covariance matrix of data $\\Sigma_x$.\n\\end{proposition}\n\n\\begin{proof}\nThe proof exploits symmetries: The input, initialization, and gradient descent are invariant under\nthe product of the orthogonal groups of the eigenspaces of $\\Sigma_x$, so the distribution of\nweights must have the same invariance properties. The full proof is given in Appendix~C. \n\\end{proof}\n\nProposition~\\ref{prop:1} says that independently of many settings (e.g.\\ number of random labels, network architecture, learning rate or schedule), the eigenspaces of $\\Sigma_w\\in\\mathbb{R}^{d\\times d}$\nare given by the eigenspaces of $\\Sigma_x\\in\\mathbb{R}^{d\\times d}$. Hence, the only information needed to fully determine $\\Sigma_w$ is a function $f$ that maps the eigenvalues $\\sigma_i^2$ of $\\Sigma_x$ \nto the corresponding eigenvalues $\\tau_i^2$ of $\\Sigma_w$. 
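\nThe statement of Proposition~\\ref{prop:1} can be checked numerically in a toy setting. The sketch below (Python with PyTorch; the dimensions, learning rate, number of steps, and number of runs are illustrative choices and not the settings used in our experiments) trains a small ReLU network on Gaussian inputs with random labels over several independent runs, pools the first-layer weights, and measures how far their covariance is from being diagonal in the eigenbasis of $\\Sigma_x$:
\\begin{verbatim}
# Minimal sketch: check the alignment of Proposition 1 on synthetic data.
import torch
import torch.nn as nn

d, width, n_classes, n_samples, n_runs = 16, 64, 10, 2048, 50
sigma2 = torch.linspace(0.1, 3.0, d)        # distinct eigenvalues of Sigma_x
pooled_w = []
for run in range(n_runs):
    x = torch.randn(n_samples, d) * sigma2.sqrt()      # x ~ N(0, diag(sigma2))
    y = torch.randint(0, n_classes, (n_samples,))      # random labels
    net = nn.Sequential(nn.Linear(d, width), nn.ReLU(),
                        nn.Linear(width, n_classes))
    nn.init.normal_(net[0].weight, std=0.1)            # isotropic initialization
    opt = torch.optim.SGD(net.parameters(), lr=0.05)
    loss_fn = nn.CrossEntropyLoss()
    for step in range(500):                            # plain full-batch SGD
        opt.zero_grad()
        loss_fn(net(x), y).backward()
        opt.step()
    pooled_w.append(net[0].weight.detach())            # first-layer weights
W = torch.cat(pooled_w, dim=0)                         # (n_runs * width, d)
Sigma_w = W.T @ W / W.shape[0]                         # estimate of E[w w^T]
_, V = torch.linalg.eigh(torch.diag(sigma2))           # eigenbasis of Sigma_x
S = V.T @ Sigma_w @ V                                  # Sigma_w in that basis
off_diag = (S - torch.diag(torch.diag(S))).norm() / S.norm()
print("relative off-diagonal mass:", off_diag.item())
\\end{verbatim}
If the proposition holds, the reported off-diagonal mass should be small and shrink further as more runs are pooled.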
\nNote that the argument in the proof of Proposition \\ref{prop:1} also applies to the case of a single training run of an infinitely wide network in which the layers are given by weight vector distributions, see e.g.~\\cite{sirignano2018}. \nFor finite networks, in practice, one would only be able to approximate $\\Sigma_w$ based on several independent training runs.\n\nNext, we present experimental evidence that first-layer alignment actually takes place, not just for Gaussian inputs with random labels, but also for real image datasets with random labels, and even when training on real labels using convolutional networks. The intuition behind this result for real labels is that small patches in the image (e.g. $3\\times 3$) are nearly independent of the labels. Before we do that, we introduce a suitable measure of alignment that we use in the experiments.\n\n\\begin{definition}\nFor two positive definite matrices $A, B$, the ``misalignment'' \n$M(A, B)$ is defined as: \n\\begin{equation}\n M(A, B) := \\inf_{\\Sigma\\succ 0\\ \\hbox{ aligned with}\\ A} \n \\big\\{\\tfrac{1}{2}\\textbf{tr}(\\Sigma^{-1}B+B^{-1}\\Sigma)-d\\big\\}\n \\label{eq:DefinitionA}\n\\end{equation}\n\\end{definition}\nThe rationale behind this definition of misalignment is presented in Appendix~D.\nIn particular, it can be shown that for any $A, B\\succ 0$, we have $M(A, B)\\ge 0$ with equality if and only if $B$ is aligned with $A$. In addition, $M(A,B)$ is continuous in $B$, satisfies desirable equivariance and invariance properties, and can be computed in closed form as\n$M(A, B) + d = \\sum_{i=1}^r \\sqrt{ \\textbf{tr}(B|_{V_i}) \\cdot \\textbf{tr}(B^{-1}|_{V_i})}$,\nwhere $V_1 \\oplus ... \\oplus V_r$ is the orthogonal decomposition of $\\BR^d$ into eigenspaces of $A$,\nand $B|_{V_i}$ is the linear map $V_i\\arrow V_i, \\textbf{v}\\mapsto pr_i(B(\\textbf{v}))$ with $pr_i$ the \northogonal projection $\\BR^d\\arrow V_i$. \n\n\n\\begin{SCfigure}[2]\n \\centering\n \\includegraphics[width=0.5\\textwidth]%\n {images\/alignment_figure.png}\n \\caption{Plots of the misalignment scores of the filters of the first layer in a two-layer neural network (256 convolutional filters, 64 fully-connected nodes) when trained on CIFAR10 with either real or random labels. Throughout training, misalignment scores between the filters of the first layer and the data remain very small compared to those between filters and a random orthonormal basis.}\n \\label{fig:Experiment_alignment}\n\\end{SCfigure}\n\nFigure \\ref{fig:Experiment_alignment} displays the misalignment scores between the covariance of the filters at the first layer and the covariance of the data (patches of images). For comparison, the misalignment scores with respect to some random orthonormal basis are plotted as well. \nAs predicted by Proposition \\ref{prop:1}, the weight eigenvectors stay aligned with the data eigenvectors but not with an arbitrary random basis.
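\nFor completeness, the closed-form expression above can be evaluated directly. The sketch below (Python with NumPy; the function name and the tolerance used to group numerically equal eigenvalues are our own choices) computes $M(A,B)$ from the eigendecomposition of $A$:
\\begin{verbatim}
# Sketch of the closed form M(A,B) + d = sum_i sqrt(tr(B|V_i) * tr(B^-1|V_i)).
import numpy as np

def misalignment(A, B, tol=1e-8):
    d = A.shape[0]
    evals, V = np.linalg.eigh(A)      # columns of V: orthonormal eigenvectors
    B_inv = np.linalg.inv(B)
    groups, current = [], [0]         # group equal eigenvalues into eigenspaces
    for i in range(1, d):
        if abs(evals[i] - evals[i - 1]) < tol:
            current.append(i)
        else:
            groups.append(current)
            current = [i]
    groups.append(current)
    score = 0.0
    for idx in groups:
        P = V[:, idx]                              # basis of the eigenspace V_i
        tr_B = np.trace(P.T @ B @ P)               # tr(B restricted to V_i)
        tr_Binv = np.trace(P.T @ B_inv @ P)        # tr(B^{-1} restricted to V_i)
        score += np.sqrt(tr_B * tr_Binv)
    return score - d

A = np.diag([3.0, 2.0, 1.0])
B_aligned = np.diag([0.5, 4.0, 1.5])               # shares the eigenvectors of A
R = np.linalg.qr(np.random.randn(3, 3))[0]         # random rotation
print(misalignment(A, B_aligned), misalignment(A, R @ B_aligned @ R.T))
\\end{verbatim}
A matrix that shares the eigenvectors of $A$ yields a score of (numerically) zero, whereas a rotated copy generally does not.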
\n\n\\newcommand{$v_w$}{$v_w$}\n\\newcommand{$\\overline{v_x}$}{$\\overline{v_x}$}\n\\newcommand{$v_x$}{$v_x$}\n\\begin{figure}[tb]\n\\centerline{\n\\small\n\\raisebox{-0.55\\height}{\\includegraphics[height=0.22\\textwidth]{images\/rnd1_new.png}}\\hfill\n{\\setlength{\\extrarowheight}{5pt}\n\\def0.038\\textwidth{0.038\\textwidth}\n\\newcommand{\\cpi}[2]{$\\vcenter{\\hbox{\\includegraphics[height=0.038\\textwidth]{images\/pca\/pca-#1-#2.png}}}$}\n\\newcommand{\\ncpi}[2]{$\\vcenter{\\hbox{\\includegraphics[height=0.038\\textwidth]{images\/pca_new\/pca-#1-#2.png}}}$}\n\\begin{tabular}{r@{\\hspace{1em}}c@{ $\\sim$ }c}\n \\# & $v_w${} & $v_x${}\\\\\n4 & \\ncpi{3}{0} & \\ncpi{3}{1} \\\\\n6 & \\ncpi{5}{0} & \\ncpi{5}{1} \\\\\n7 & \\ncpi{6}{0} & \\ncpi{6}{1} \\\\\n8 & \\ncpi{7}{0} & \\ncpi{7}{1} \\\\ \n10& \\ncpi{9}{0} & \\ncpi{9}{1} \n\\end{tabular}\\hfill\n\\begin{tabular}{r@{\\hspace{1em}}c@{ $\\sim$ }c@{ $=$ }l}\n\\# & $v_w${} & $\\overline{v_x}${} & $v_x${} $+\\ldots$\\\\\n1 & \\ncpi{0}{0} & \\ncpi{0}{1} & \\ncpi{0}{2} $+$ \\ncpi{0}{3} \\\\\n2 & \\ncpi{1}{0} & \\ncpi{1}{1} & \\ncpi{1}{2} $+$ \\ncpi{1}{3} \\\\\n3 & \\ncpi{2}{0} & \\ncpi{2}{1} & \\ncpi{2}{3} $+$ \\ncpi{2}{2} \\\\\n5 & \\ncpi{4}{0} & \\ncpi{4}{1} & \\ncpi{4}{3} $+$ \\ncpi{4}{2} \\\\\n9 & \\ncpi{8}{0} & \\ncpi{8}{1} & \\ncpi{8}{3} $+$ \\ncpi{8}{2} \n\\end{tabular}%\n\n\n \\caption{Visualization of covariance alignment. {\\sc left}: Random selection of WRN-28-4 first-layer convolutional filters (CIFAR10, random labels). {\\sc center\/right}: Eigenvectors $v_w${} of $\\Sigma_w$ \n with largest eigenvalues (rank in column `\\#') and eigenvectors $v_x${} of $\\Sigma_x$ with $\\langle v_x, v_w\\rangle > 0.4$. {\\sc center}: Cases where one $v_x${} matches. {\\sc right}: Cases where two $v_x${} and their weighted combination $\\overline{v_x}${} match.}\n \\label{fig:filters}\n\\end{figure}\n\nFor image data, we can also visualize the alignment of $\\Sigma_w$ to $\\Sigma_x$. \nFigure~\\ref{fig:filters} shows results based on 70 wide ResNet models \\cite{zagor2016} trained on CIFAR10 with random labels.\nFor better visualization, we use a $5\\times 5$ initial convolution here. The left part of the figure shows a random selection of some of the 70$\\cdot$64\nconvolution filters. From the filters we estimate $\\Sigma_w$ and \ncompute the eigenvectors $v_w${}, then visualize the ten $v_w${} with the largest eigenvalues. From the image data we compute the\npatch data covariance $\\Sigma_x$ and its eigenvectors $v_x${}. \nWe show the data eigenvectors for which the inner product with the filter eigenvectors exceeds a\nthreshold and the weighted sum $\\overline{v_x}${} of these if there are multiple such $v_x${}. \n(See Appendix D.1. for why this is expected to occur as well.)\n\nThe visual similarity between the $v_w${} and the $\\overline{v_x}${} illustrates the predicted covariance alignment. Note that this alignment is visually non-obvious when looking at individual filters as shown on the left.\n\n\\subsection{Mapping of eigenvalues}\\label{subsect::mapping_eigenvalues}\nAs stated earlier, Proposition \\ref{prop:1} shows that, on average, the first layer effectively learns a \nfunction which maps each eigenvalue of $\\Sigma_x$ to the \\emph{corresponding} eigenvalue of $\\Sigma_w$ (see\nDefinition \\ref{def:alignment}). 
In this section, we examine the shape of this function.\n\nSince, in practice, we will only have \\textit{approximate} alignment due to the finiteness of the number of inputs and weights, non--Gaussian inputs, and correlations between overlapping patches, we extend the definition of $f(\\sigma)$.\nSuch an extension is based on the following identity~\\eqref{eq:general_tau}:\nFor $\\Sigma_x\\in\\mathbb{R}^{d\\times d}$ let $\\textbf{v}_i$ be an eigenvector of length 1 \nwith eigenvalue $\\sigma_i^2$. If $\\Sigma_w$ is aligned with \n$\\Sigma_x$, $\\textbf{v}_i$ is also an eigenvector of $\\Sigma_w$ and the corresponding eigenvalue $\\tau_i^2$ is:\n\\begin{equation}\\label{eq:general_tau}\n \\tau_i^2 = \\textbf{v}_i^T \\Sigma_w \\textbf{v}_i \n = \\textbf{v}_i^T \\BE[(\\textbf{w}-\\mu_w)(\\textbf{w}- \\mu_w)^T] \\textbf{v}_i = \\BE[ \\langle \n \\textbf{w} - \\mu_w, \\textbf{v}_i\\rangle ^2], \n\\end{equation}\nwhich is the variance of the projection of the weight vectors onto the principal axis $\\textbf{v}_i$.\nWe can take this as the definition of $\\tau_i$ in the general case, since this formulation can be applied even when we have an imperfect alignment between the eigenspaces.\n\\begin{definition}\nGiven two positive definite symmetric $d\\times d$ matrices $\\Sigma_x, \\Sigma_w$, such that \n$\\Sigma_w$ is aligned with $\\Sigma_x$ or $\\Sigma_x$ has $d$ distinct eigenvalues. \nLet $\\sigma_1^2,\\sigma_2^2,...$ be the eigenvalues of $\\Sigma_x$ with corresponding \neigenvectors $\\textbf{v}_1,\\textbf{v}_2,...$ of length 1, we define \nthe transfer function from $\\Sigma_x$ to $\\Sigma_w$ as\n\\begin{equation}\n f: \\{\\sigma_1,\\sigma_2,...\\} \\arrow \\BR, \\ \\ \\sigma_i \\mapsto \\sqrt{\\textbf{v}_i^T \\Sigma_w \\textbf{v}_i}\n\\end{equation}\n\\end{definition}\n\nIn practice, the eigenvalues are distinct almost surely so every eigenvalue of the data has a unique corresponding eigenvector of length 1 (up to $\\pm$) and the function $f(\\sigma)$ is well-defined. \n\n\\begin{figure}[tb]\n \\begin{center}\n \\centerline{\n \\includegraphics[width=0.33\\textwidth,height=3.4cm]{images\/st8.pdf}\n \\includegraphics[width=0.33\\textwidth]{images\/fsigma_random_15_10_4curves.png}\n \\includegraphics[width=0.33\\textwidth]{images\/fsigma_real_15_10_4curves.png}\n }\n \\caption{\n {\\sc left:} $f(\\sigma)$ for synthetic data $\\calN(0,\\mathrm{diag}(0.1, 0.2, \\dots, 3.0))$ in fully-connected neural networks with two layers of size 256. The graph is approximately continuous and of a regular structure: increasing, then decreasing. {\\sc center,right:}\n $f(\\sigma)$ that results from training a 2-layer convolutional network (256 filters followed by 64 fully-connected nodes) on CIFAR10 for random ({\\sc center}) and real labels ({\\sc right}) after 5, 10, 20, and 40 epochs ($\\sim$195 training iterations per epoch).\n } \n \\label{fig:Experiment2_f_sigma}\n \\end{center}\n\\end{figure}\n\nUsing this definition of $f(\\sigma)$, we can now look at the shape of the function for concrete examples.\nHere, we train a simple fully connected network\n50 times, collect statistics, and plot the corresponding mapping between each eigenvalue $\\sigma_i$ of $\\Sigma_x$ with the corresponding $\\tau_i$ in (\\ref{eq:general_tau}) (see Appendix E.1 for more details). \nThe result is shown in Figure~\\ref{fig:Experiment2_f_sigma} ({\\sc left}).\nIn general, $f(\\sigma)$ on synthetic data looks smooth and exhibits a surprising structure: the function first increases before it decreases. 
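\nThe mapping plotted in Figure~\\ref{fig:Experiment2_f_sigma} can be computed directly from the identity for $\\tau_i^2$ above. A minimal sketch (Python with NumPy; \\texttt{weights} and \\texttt{patches} stand for first-layer weight vectors pooled over training runs and flattened input patches, and are assumed to be collected beforehand):
\\begin{verbatim}
# Sketch: estimate f(sigma) from trained first-layer weights,
# using tau_i^2 = E[ <w - mu_w, v_i>^2 ].
import numpy as np

def transfer_function(weights, patches):
    Xc = patches - patches.mean(axis=0, keepdims=True)
    Sigma_x = Xc.T @ Xc / Xc.shape[0]            # data covariance
    sigma2, V = np.linalg.eigh(Sigma_x)          # eigenvalues / eigenvectors v_i
    Wc = weights - weights.mean(axis=0, keepdims=True)
    proj = Wc @ V                                # projections <w - mu_w, v_i>
    tau2 = (proj ** 2).mean(axis=0)              # variance along each v_i
    return np.sqrt(sigma2), np.sqrt(tau2)        # pairs (sigma_i, f(sigma_i))

# Synthetic stand-ins with the shapes of 3x3x3 filters and patches:
rng = np.random.default_rng(0)
sigma, f_sigma = transfer_function(rng.normal(size=(256, 27)),
                                   rng.normal(size=(10000, 27)))
\\end{verbatim}
Plotting \\texttt{f\\_sigma} against \\texttt{sigma} for filters from actual training runs produces curves of the kind shown in the figure.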
Either the decreasing part or the increasing part may be missing depending on the setting (e.g.\\ the dimensionality of the data and the network architecture), but we observe the same overall shape in all experiments.\nFor real data (CIFAR10), Figure~\\ref{fig:Experiment2_f_sigma} ({\\sc center\/right}) shows that the function $f(\\sigma)$ appears to have a similar shape (increasing, then decreasing, but less smooth) for training with both real and random labels. \n\nWe interpret (without a formal argument) this surprisingly regular shape of $f(\\sigma)$ to be the result of two effects:\n(1)~Larger eigenvalues $\\sigma_i$ lead to a larger effective learning rate in gradient descent, which leads in turn to larger corresponding $\\tau_i$, hence the increasing part of $f$.\n(2)~Very large eigenvalues $\\tau_i$ would dominate the output of the layer, masking the contribution of other components. Backpropagation compensates for this effect to capture more of the input signal. This leads to the decreasing part of $f$ for higher $\\sigma_i$.\n(See also Appendix E.1.)\n\n\n\\subsection{Covariance alignment and eigenvalue mapping explain positive transfer experimentally}\n\\label{subsect:explain_pos_transfer}\n\\begin{figure}[tb]\n \\centerline{\n\\includegraphics[width=0.33\\textwidth]{images\/real_oneConv.pdf}%\n\\includegraphics[width=0.33\\textwidth]{images\/random_oneConv.pdf}%\n\\includegraphics[width=0.33\\textwidth]{images\/random_oneConv_detail.pdf}\n }\n \\caption{\n Training accuracy for learning real labels ({\\sc left}) and random labels ({\\sc middle}, zoomed in: {\\sc right}) on CIFAR10 with a simple CNN (one $3\\times 3$ convolution, one hidden dense layer). Randomly initializing the convolutional filters from a learned covariance reproduces the effect of pre-training within measurement error (red and pink lines are almost indistinguishable in the right plot). }\n \\label{fig:PCA_init}\n \n\\end{figure}\n\nTo connect the alignment effect to the faster learning downstream, we conduct the following experiment.\nSuppose that instead of pre-training on random labels, we sample from the Gaussian approximation of the filters in the first layer that were trained on random labels. \nIn a simple CNN on CIFAR10, consisting of one convolutional and one fully connected layer, the gain in the downstream task is almost fully recovered, as shown in Figure~\\ref{fig:PCA_init} (a code sketch of this procedure is given at the end of this subsection). \nThe gray curves show the raw effect of pre-training, but this contains the \nscaling effect.\nTo eliminate the latter effect, we always re-scale the weights after pre-training to match their $\\ell_2$ norm at initialization (using the $\\ell_1$ norm gives similar results).\nRecovering the transfer effect in this way implies that the positive transfer is mainly due to the second-order statistics of the weights in the first layer, which, by Proposition~\\ref{prop:1}, are fully described by the alignment of principal components in combination with the shape of the function $f(\\sigma)$. \n\nNote that the combined results presented up to here indicate that both analytically and experimentally the following seems to be true: Given the training data of a neural network that is trained with random labels, we can predict the second-order statistics of its first-layer weights up to a one-dimensional scaling function $f(\\sigma)$ of the eigenvalues, and this function has a surprisingly regular shape.
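\nThe ``sample instead of pre-train'' experiment described above can be summarised in a few lines. The sketch below (Python with NumPy; the filter count, filter size, and reference initialization are illustrative, and \\texttt{trained\\_filters} stands for first-layer filters collected from an upstream run on random labels) fits a Gaussian to the trained filters, samples fresh filters from it, and re-scales them to the $\\ell_2$ norm of a standard initialization so that only second-order statistics are transferred:
\\begin{verbatim}
# Sketch of the sampling-based first-layer initialization described in the text.
import numpy as np

def sample_first_layer(trained_filters, n_new, init_norm, rng):
    mu = trained_filters.mean(axis=0)
    Xc = trained_filters - mu
    Sigma_w = Xc.T @ Xc / Xc.shape[0]                 # Gaussian approximation
    new_w = rng.multivariate_normal(mu, Sigma_w, size=n_new)
    new_w *= init_norm / np.linalg.norm(new_w)        # undo the scaling effect
    return new_w

rng = np.random.default_rng(0)
trained_filters = rng.normal(size=(256, 27))          # stand-in: 3x3x3 filters
reference_init = 0.05 * rng.normal(size=(256, 27))    # e.g. a standard init draw
new_filters = sample_first_layer(trained_filters, 256,
                                 np.linalg.norm(reference_init), rng)
\\end{verbatim}
By Proposition~\\ref{prop:1}, the covariance used here is determined by the data eigenvectors together with the eigenvalues $\\tau_i$, which brings us back to the shape of $f$.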
If further investigation of $f$ leads us to understand its shape (which we regard as an interesting area of research), we could predict these after-training statistics perfectly by gathering the data statistics alone. \n\n\\subsection{Deeper layers}\n\\label{subsec:DeepNetworks}\nSo far, we have only discussed effects in the first layer of a neural network.\nHowever, in Figure~\\ref{fig:TransferKLayers} we show that transferring more of the layers from the model pre-trained on\nrandom labels improves the effect considerably. We now turn to generalizing Section\n\\ref{subsect:explain_pos_transfer} to the multi-layer case, i.e., we reproduce this effect with\nweight initializations computed from the input distribution.\n\nFor the first layer, we have seen that we can reproduce the effect of training on random labels\nby randomly sampling weights according to the corresponding covariance matrix $\\Sigma_w$, which\nin turn is given by the same eigenvectors $e_1,...,e_d$ as the data covariance $\\Sigma_x$, \nand a set of new eigenvalues $\\tau_1^2,...,\\tau_d^2$. \nSo, if we can approximate the right (or good) eigenvalues, \nwe can directly compute an initialization that results in faster training in a subsequent\ntask of learning real labels. See the first two accuracy columns in Table~\\ref{table:experiment_3conv} for\nresults in an example case, and Appendix E for different choices of $\\tau_1,...,\\tau_d$ (it turns\nout that different reasonable choices of the $\\tau_i$ all give results very similar to Table~\\ref{table:experiment_3conv}).\n\nWe can then iterate this procedure for the subsequent (fully connected or convolutional) layers.\nGiven the filters for the earlier layers $L_1,...,L_{k-1}$, for each training image\nwe can compute the output after layer $L_{k-1}$, which becomes the input to layer $L_k$.\nTreating this as our input data, we determine the\neigenvectors $e_1, e_2,...$ of the corresponding data covariance matrix. \nThen we compute the $d$ most important directions and use $\\tau_1 e_1, \\tau_2 e_2,...,\n\\tau_d e_d$ (with the same assumed $\\tau_1, \\tau_2,...$ as before) as our constructed filters.\n(Alternatively, we can sample according to the covariance\nmatrix given by the eigenvectors $e_i$ and eigenvalues $\\tau_1^2,...,\\tau_d^2,0,...,0$, which\nagain gives essentially the same results; compare Tables 2 and 4 in Appendix E.3.)\n\nApplying this recipe to a CNN with three convolutional layers and \none fully connected layer, we see that this indeed gives initializations that \nimprove as they are applied to 1, 2, and 3 layers; see Table \\ref{table:experiment_3conv}.\nThe performance after applying this approach to all three convolutional \nlayers matches the performance of transferring the first three layers of a network trained on random labels.\nSee Appendix E.3 for details.\n\n\n\\begin{SCtable}\n\\caption{Training and test accuracy on subsets of CIFAR10 \nfor the initialization procedure described in Section~\\ref{subsec:DeepNetworks}, applied to the layers of a simple convolutional network.
Both training and test accuracies improve with the number of layers that are initialized in this way.}\n\\label{table:experiment_3conv}\n\\begin{tabular}{@{}rlcccc@{}}\n\\hline \n & & \\multicolumn{4}{c}{Convolutional layers sampled} \\\\\nIterations & Data &$\\{\\}$ & $\\{1\\}$ & $\\{1,2\\}$ & $\\{1,2,3\\}$ \\\\ \\hline \n100 & Train & 0.31 & 0.34 & 0.38 & 0.41 \\\\\n & Test & 0.31 & 0.33 & 0.37 & 0.40 \\\\\n1000 & Train & 0.58 & 0.61 & 0.67 & 0.68 \\\\\n & Test & 0.53 & 0.55 & 0.56 & 0.56 \\\\\n\\hline\n\\end{tabular}\n\\end{SCtable}\n\n\\begin{figure}[tb]\n \\centering\n \\scriptsize \\sffamily\n \\begin{minipage}{0.33\\textwidth}\n \\centering \n Simple CNN \n \\end{minipage}\n \\begin{minipage}{0.33\\textwidth}\n \\centering\n VGG16\n \\end{minipage}\n \\begin{minipage}{0.33\\textwidth}\n \\centering\n ResNet18\n \\end{minipage}\\\\\n \\includegraphics[width=0.33\\textwidth]{images\/cifar10_simplecnn_random_k_layers.pdf}\n \\includegraphics[width=0.33\\textwidth]{images\/cifar10_vgg16_random_k_layers.pdf}\n \\includegraphics[width=0.33\\textwidth]{images\/cifar10_resnet18_random_k_layers.pdf}\n \\caption{Transferring more layers improves downstream performance.\n Simple CNN architecture with 3 convolutional layers ({\\sc left}),\n VGG16 ({\\sc center}), and\n ResNet18 ({\\sc right})\n pre-trained on CIFAR10 examples with random labels and subsequently\n fine-tuned on 25k fresh CIFAR10 examples with random labels.\n Lines with circular markers correspond to training from scratch.\n Error bars correspond to min\/max over 3 runs.\n Plots for fine-tuning with real labels are available in the appendix.\n \n }\n \\label{fig:TransferKLayers}\n\\end{figure}\n\n\\section{Specializing neurons}\n\\label{sec:Specializing}\nDespite the alignment effect taking place at the earlier layers of the neural network when trained with random labels, negative transfer is sometimes observed when fine-tuning on a downstream task, as shown in Figure~\\ref{fig:image1}. In this section, we show that this is likely due to a specialization at the later layers.\n\nFigure~\\ref{fig:VGGNeuronActivation} displays the distribution of neurons with respect to the number of held-out images that activate them, for the settings of Figure~\\ref{fig:image1} that exhibited positive (top row) and negative (bottom row) transfers. Comparing neural activations at initialization, at the end of pre-training, and at the end of fine-tuning, we note that neural activations are markedly diminished in the negative transfer case compared to the positive transfer case despite the fact that their neural activation distributions were identical at initialization. In Appendix F, we show that the significant drop in neural activation in the negative transfer case happens immediately after switching to the downstream task. As a result, the effective capacity available downstream is diminished. By contrast, neural activations are not severely impacted in the positive transfer setting. In Appendix F, we provide detailed figures describing this phenomenon across all layers of VGG16, which reveal that such a specialization effect becomes more prominent in the later layers compared to the earlier layers. In particular, Appendix F shows that, due to specialization, neural activations at the later layers can drop abruptly and permanently once the\n switch to the downstream task takes place, which can prevent the network from recovering its full capacity.
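\nThe activation statistic underlying Figure~\\ref{fig:VGGNeuronActivation} can be computed with a simple forward hook. The sketch below (Python with PyTorch; the model, the chosen post-activation layer, and the held-out data loader are placeholders for the setup of a given experiment) records, for every unit of the chosen layer, the fraction of held-out examples on which it fires:
\\begin{verbatim}
# Sketch: per-unit activation fractions on a held-out set.
import torch

@torch.no_grad()
def activation_fractions(model, layer, held_out_loader, device="cpu"):
    acts = []
    hook = layer.register_forward_hook(lambda m, i, o: acts.append(o))
    counts, n_examples = None, 0
    for x, _ in held_out_loader:
        acts.clear()
        model(x.to(device))
        fired = acts[0] > 0                    # post-ReLU activity
        if fired.dim() == 4:                   # conv layer: any spatial position
            fired = fired.flatten(2).any(dim=2)
        fired = fired.float().sum(dim=0)       # per-unit counts for this batch
        counts = fired if counts is None else counts + fired
        n_examples += x.shape[0]
    hook.remove()
    return counts / n_examples                 # fraction in [0, 1] per unit
\\end{verbatim}
Units whose fraction drops to zero right after the switch to the downstream task are the inactive ReLUs discussed above; histograms of these fractions give plots of the kind shown in Figure~\\ref{fig:VGGNeuronActivation}.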
\n \n\n\\begin{figure}[tb]\n \\centering \\scriptsize \\sffamily\n \\begin{minipage}{0.32\\textwidth}\n \\centering\n \\small\n \\hspace{.9cm}Initialization\n \\end{minipage}\n \\begin{minipage}{0.32\\textwidth}\n \\centering\n \\small\n End of pre-training\n \\end{minipage}\n \\begin{minipage}{0.32\\textwidth}\n \\centering\n \\small\n End of fine-tuning\n \\end{minipage}\\\\\n \\includegraphics[width=.32\\textwidth]{images\/activation_up_init_positive.pdf}\n \\includegraphics[width=.32\\textwidth]{images\/activation_up_end_positive.pdf}\n \\includegraphics[width=.32\\textwidth]{images\/activation_down_end_positive.pdf}\\\\\n \\includegraphics[width=.32\\textwidth]{images\/activation_up_init.pdf}\n \\includegraphics[width=.32\\textwidth]{images\/activation_up_end.pdf}\n \\includegraphics[width=.32\\textwidth]{images\/activation_down_end.pdf}\n \\caption{\n Activation plots for the two VGG16 models in Figure \\ref{fig:image1}\n at initialization ({\\sc left}),\n after pre-training\n with random labels ({\\sc center}),\n and after subsequently fine-tuning on fresh\n examples with random labels ({\\sc right}). Top row is for the positive transfer case; bottom row shows negative transfer.\n Histograms depict distributions of neurons over the fraction of \\emph{held out}\n examples that activate them.\n The two histograms in each subplot correspond to two different\n activation spaces.\n \n \n }\n \\label{fig:VGGNeuronActivation}\n\\end{figure}\n\n\\begin{figure}[tb]\n \\centering\n \\includegraphics[width=.32\\textwidth]{images\/cifar_simplecnn_width_64.pdf}\n \\includegraphics[width=.32\\textwidth]{images\/cifar_simplecnn_width_128.pdf}\n \\includegraphics[width=.32\\textwidth]{images\/cifar_simplecnn_width_1024.pdf}\n \\caption{\n Increasing model width mitigates negative transfer.\n Simple CNN architectures with two convolutional layers and 64 ({\\sc left}), 128 ({\\sc center}), and 1024 ({\\sc right}) units in the dense layer.\n \n }\n \\label{fig:SimpleCNNIncreasingWIdth}\n\\end{figure}\n\nOne way to mitigate the effect of the inactive ReLU units is to increase the width so that the capacity remains sufficiently large for the downstream task. Figure~\\ref{fig:SimpleCNNIncreasingWIdth} shows that increasing the width can indeed mitigate the negative transfer effect. \nWhile increased width seems to have general performance advantages~\\cite{zagor2016}, it seems to be\nalso particularly useful in the case of transfer learning~\\cite{alex2019big}.\n\n\n\\section{Concluding remarks}\nThe objective of this paper is to answer the question of what neural networks learn when trained on random labels. \nWe provide a partial answer by proving an alignment effect of principal components of network parameters and data and studying its implications, particularly for transfer learning. \nOne important consequence is that second-order statistics of the earlier layers can be reduced to a one-dimensional function, which exhibits a surprising, regular structure. \nIt remains an open question what the ``optimal'' shape of such function is, or whether it can be described analytically. 
\n\nThe models used in this paper are taken from a large set of experiments that we conducted using popular network architectures and datasets, such as simple convolutional networks, VGG16, ResNet18-v2, CIFAR10,\nand ImageNet, with a wide range of hyperparameter settings (Appendix B).\nThese experiments show that pre-training on random labels\nvery often accelerates training on downstream tasks compared to training from scratch with the \\emph{same hyperparameters},\nand\nrarely hurts the training speed.\n\nBy studying what is learned on random labels, we offer new insights into phenomena that have previously been reported in the literature. For instance, the alignment effect at the earlier layers explains the empirical observations of \\cite{pondenkandath2018leveraging} that pre-training on random labels can accelerate training in downstream tasks, and the observation of \\cite{arpit2017closer} that the filters learned on random labels seem to exhibit some useful structure. Also, our findings related to the inactive ReLU units at the later layers demonstrate how upper layers specialize early during training, which may explain why neural networks exhibit critical learning stages \\cite{achille2017critical} and why increasing the width seems to be particularly useful in transfer learning \\cite{alex2019big}. Both alignment and specialization are in agreement with the observation that earlier layers generalize while later layers specialize, a conclusion that has been reached consistently in the literature when training on real labels \\cite{ansuini2019intrinsic,arpit2017closer,cohen2018dnn,yosinski2014transferable}. We show that it holds for random labels as well. \n\n\n\\section*{Acknowledgements}\nThe authors are grateful to\nAlexander Kolesnikov, \nAlexandru \u0162ifrea,\nJessica Yung,\nLarisa Markeeva,\nLionel Ngoupeyou Tondji, \nLucas Beyer,\nPhilippe Gervais,\nand other members of the Google Brain Team\nfor valuable discussions.\n\n\\section*{Broader Impact}\nThis work is partially theoretical and contains experiments to study the theoretical results and related hypotheses. The paper aims at improving our understanding of how DNNs learn from data and therefore does not have a \\textit{direct} impact on applications or society. Hence, speculating on its potential broader impact is difficult at this stage. Nevertheless, we hope that a better understanding of deep neural networks will lead to future improvements in the direction of interpretable and explainable AI,\nwhich are critical ingredients for the creation of socially responsible AI systems.\n\n\n\\bibliographystyle{abbrvnat} \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}