{"text":"\n\\section{Introduction} \n\nEdge devices such as smartphones, remote sensors and smart home appliances\ngenerate massive amounts of data \\citep{wang2018smart, cao2017deepmood, shi2016promise}. In recent years, Federated Learning (FL)\nhas emerged as a technique to train models on this data while\npreserving privacy \\citep{FedAvg,FedProx}.\n\nIn FL, we have a single server that is connected to many clients. Each\nclient stores a local dataset that it does not want to share with the server\nbecause of privacy concerns or law enforcement \\citep{voigt2017eu}. The server wants to train a model on all local\ndatasets. To this end, it initializes the model and sends it to a random\nsubset of clients. Each client trains the model on its local dataset and\nsends the trained model back to the server. The server accumulates all\ntrained models into an updated model for the next iteration and repeats the\nprocess for several rounds until some termination criterion is met. This\nprocedure enables the server to train a model without accessing any local\ndatasets. \n\nToday's neural network models often have millions or even billions\n\\citep{gpt3} of parameters, which makes high communication costs a\nconcern in FL. In fact, \\citet{carbon} suggest that communication between clients\nand server may account for over 70\\% of energy consumption in FL. Reducing\ncommunication in FL is an attractive area of research because it lowers\nbandwidth requirements, energy consumption and training time.\n\nCommunication in FL occurs in two phases: Sending parameters from the\nserver to clients (\\emph{downlink}) and sending updated parameters from\nclients to the server (\\emph{uplink}). Uplink bandwidth usually imposes a\ntighter bottleneck than downlink bandwidth. This has several reasons. For\none, the average global mobile upload bandwidth is currently less than one\nfourth of the download bandwidth \\citep{speedtest}. For another, FL downlink\ncommunication sends the same parameters to each client. Broadcasting\nparameters is usually more efficient than the accumulation of parameters\nfrom different clients that is required for uplink communication \\citep{LFL,\nFedPAQ}. For these reasons, we seek to compress uplink communication.\n\n\\begin{figure}[h] \n \\begin{subfigure}[b]{0.50\\textwidth}\n \\def18em{18em}\n \\input{samplequant_complex_static.pdf_tex}\n \\caption{Static quantization.}\n \\label{fig:static_quantization}\n \\end{subfigure} \n \\hfill\n \\begin{subfigure}[b]{0.50\\textwidth}\n \\def18em{18em}\n \\input{samplequant_complex_dynamic.pdf_tex}\n \\label{fig:dynamic_quantization}\n \\caption{Client-adaptive quantization.}\n \\end{subfigure} \n \\caption{Static quantization vs. client-adaptive quantization when\n accumulating parameters $p_A$ and $p_B$. (a): Static quantization uses the\n same quantization level for $p_A$ and $p_B$. (b) Client-adaptive\n quantization uses a slightly higher quantization level for $p_B$ because\n $p_B$ is weighted more heavily. This allows us to use a significantly lower quantization level $q_A$ for $p_A$ while keeping the quantization error measure\n $\\mathrm{E}_{p_A,p_B}\\left[\\mathrm{Var}\\left(\\quantizer(p)\\right)\\right]$\n roughly constant. 
\begin{figure}[h]
  \begin{subfigure}[b]{0.50\textwidth}
    \def18em{18em}
    \input{samplequant_complex_static.pdf_tex}
    \caption{Static quantization.}
    \label{fig:static_quantization}
  \end{subfigure}
  \hfill
  \begin{subfigure}[b]{0.50\textwidth}
    \def18em{18em}
    \input{samplequant_complex_dynamic.pdf_tex}
    \caption{Client-adaptive quantization.}
    \label{fig:dynamic_quantization}
  \end{subfigure}
  \caption{Static quantization vs. client-adaptive quantization when
  accumulating parameters $p_A$ and $p_B$. (a): Static quantization uses the
  same quantization level for $p_A$ and $p_B$. (b): Client-adaptive
  quantization uses a slightly higher quantization level for $p_B$ because
  $p_B$ is weighted more heavily. This allows us to use a significantly lower quantization level $q_A$ for $p_A$ while keeping the quantization error measure
  $\mathrm{E}_{p_A,p_B}\left[\mathrm{Var}\left(\quantizer(p)\right)\right]$
  roughly constant. Since communication is approximately proportional to
  $q_A + q_B$, client-adaptive quantization communicates less data.}
  \label{fig:clientdynamicquant}
  \vspace*{-2mm}
\end{figure}

\begin{wrapfigure}{r}{0.49\textwidth}
  \vspace*{-5mm}
  \begin{tikzpicture}[scale=0.9]
  \fill [magenta, opacity=0.2] (0,2.1) rectangle (6, 3.7);
  \fill [teal, opacity=0.2] (0,1.1) rectangle (6, 2.1);
  \fill [brown, opacity=0.2] (0,0) rectangle (6, 1.1);
  \draw [<->, >=stealth] (0,4) -- (0,0) -- (6.3,0);
  \node [left] at (0,2) {Loss};
  \draw [magenta, thick](0,3.7) .. controls (0.1,1.3) and (0.3,1.3) .. (5.5, 1.4) node [above, magenta] {\small q=1};
  \draw [teal, thick](0,3.7) .. controls (0.6,0.7) and (1.6,0.7) .. (5.5, 0.85) node [above, teal] {\small q=2};
  \draw [brown, thick](0,3.7) .. controls (1.8,0.45) and (3.8,0.45) .. (5.5, 0.45) node [below, brown] {\small q=4};
  \begin{scope}
  \clip (0, 2.1) rectangle (6, 4);
  \draw [black, very thick, dotted, line cap=round, dash pattern=on 0pt off 2.5\pgflinewidth](0,3.7) .. controls (0.1,1.3) and (0.3,1.3) .. (5.5, 1.4);
  \end{scope}
  \begin{scope}[shift={(-1.02,0.00)}]
  \clip (0, 0) rectangle (6, 1.1);
  \draw [brown, dotted, very thick, line cap=round, dash pattern=on 0pt off 2.5\pgflinewidth](0,4) .. controls (1.8,0.45) and (3.8,0.45) .. (5.5, 0.45);
  \end{scope}
  \begin{scope}[shift={(-0.245,-0.00)}]
  \clip (0, 1.065) rectangle (6, 2.1);
  \draw [teal, very thick, dotted, line cap=round, dash pattern=on 0pt off 2.5\pgflinewidth](0,4) .. controls (0.6,0.7) and (1.6,0.7) .. (5.5, 0.85);
  \end{scope}
  \node [below] at (2, 0.7) {\small adaptive q};

  \begin{scope}[shift={(0,-1.5)}]
  \begin{axis}[%
  ,xlabel=Communication
  ,xlabel near ticks
  ,ylabel near ticks
  ,ylabel=q
  ,axis x line = bottom,axis y line = left
  ,ytick={0, 1,2,4}
  ,xtick=\empty
  ,ymax=4.5
  ,ymin=0
  ,width=3in
  ,height=1.2in
  ]
  \addplot+[const plot, no marks, thick] coordinates {(0,1) (0.15,1) (0.15,2) (0.7,2) (0.7,4) (3,4)};
  \end{axis}
  \end{scope}

  \end{tikzpicture}
  \caption{Time-adaptive quantization. A small quantization level (q)
  decreases the loss with less communication than a large q, but converges
  to a higher loss. This motivates an adaptive quantization strategy that uses a small q as long as
  it is beneficial and then switches over to a large q. We generalize this idea into an algorithm
  that monotonically increases q based on the training loss.}
  \label{fig:timedynamicquant}
\end{wrapfigure}

A large class of compression algorithms for FL apply some lossy quantizer $\quantizer$, optionally
followed by a lossless compression stage. $\quantizer$ usually provides a ``quantization level''
hyperparameter $q$ to control the coarseness of quantization (e.g. the number of bins
for fixed-point quantization). When $q$ is kept constant during training, we speak of
\emph{static quantization}. When $q$ changes, we speak of \emph{adaptive quantization}. Adaptive
quantization can exploit asymmetries in the FL framework to minimize communication. One such
asymmetry lies in FL's training time, where we observe that early training rounds can use a lower
$q$ without affecting convergence. \Cref{fig:timedynamicquant} illustrates how \emph{time-adaptive
quantization} leverages this phenomenon to minimize communication. Another asymmetry lies in FL's
client space, because most FL algorithms weight client contributions to the global model
proportional to their local dataset sizes.
\Cref{fig:clientdynamicquant} illustrates how
\emph{client-adaptive quantization} can minimize the quantization error. Intuitively, FL clients
with greater weighting should have a greater communication budget, and our proposed client-adaptive
quantization achieves this in a principled way. To this end, we introduce the expected variance of
an accumulation of quantized parameters, $\mathbb{E}[\mathrm{Var}(\sum\quantizer(p))]$, as a measure of the
quantization error. Our client-adaptive quantization algorithm then assigns clients minimal
quantization levels, subject to a fixed $\mathbb{E}[\mathrm{Var}(\sum\quantizer(p))]$. This lowers the amount of
data communicated from clients to the server, without increasing the quantization error.

DAdaQuant (Doubly Adaptive Quantization) combines time- and client-adaptive quantization with an adaptation
of the QSGD fixed-point quantization algorithm to achieve state-of-the-art
FL uplink compression.
In this paper, we make the following contributions:

\begin{itemize}[noitemsep,leftmargin=*]
  \item We introduce the concept of client-adaptive quantization and develop algorithms for time-
  and client-adaptive quantization that are computationally efficient, empirically Pareto optimal
  and compatible with arbitrary FL quantizers. Our client-adaptive quantization is provably optimal
  for stochastic fixed-point quantizers.

  \item We create Federated QSGD as an adaptation of the stochastic
  fixed-point quantizer QSGD that works with FL. Federated QSGD outperforms
  all other quantizers, establishing a strong baseline for FL compression with
  static quantization.

  \item We combine time- and client-adaptive quantization into DAdaQuant. We
  demonstrate DAdaQuant's state-of-the-art compression by empirically
  comparing it against several competitive FL compression
  algorithms.
\end{itemize}


\section{Related Work}

FL research has explored several approaches to reduce communication. We
identify three general directions.

First, there is growing interest in FL algorithms that can converge in
fewer rounds. FedAvg
\citep{FedAvg} achieves this with prolonged local training, while FOLB
\citep{folb} speeds up convergence through more principled client
sampling. Since communication is proportional to the number of training
rounds, these algorithms effectively reduce communication.

Second, communication can be reduced by shrinking the model itself,
since the amount of training communication is proportional to the model size.
PruneFL \citep{fedprune} progressively prunes the model over the course of
training, while AFD \citep{feddropout} only trains submodels on clients.

Third, it is possible to directly compress FL training communication. FL
compression algorithms typically apply techniques like top-$k$ sparsification
\citep{fedzip, fetchsgd} or quantization \citep{FedPAQ, uveqfed} to
parameter updates, optionally followed by lossless compression. Our work
applies to quantization-based compression algorithms. It is partially based
on QSGD \citep{QSGD}, which combines lossy fixed-point quantization with a
lossless compression algorithm to compress gradients communicated in
distributed training. DAdaQuant adapts QSGD into Federated QSGD, which works
with Federated Learning. DAdaQuant also draws inspiration from FedPAQ
\citep{FedPAQ}, the first FL framework to use lossy compression based on
model parameter update quantization.
However, FedPAQ does not explore the
advantages of additional lossless compression or adaptive quantization.
UVeQFed \citep{uveqfed} is an FL compression algorithm that generalizes
scalar quantization to vector quantization and subsequently employs lossless
compression with arithmetic coding. Like FedPAQ, UVeQFed also limits itself
to a single static quantization level.

Faster convergence, model size reduction and communication compression are
orthogonal techniques, so they can be combined for further communication
savings. For this paper, we limit the scope of empirical comparisons to
quantization-based FL compression algorithms.

For quantization-based compression for model training, prior works have
demonstrated that DNNs can be successfully trained in low precision
\citep{banner2018scalable,gupta2015deep,sun2019hybrid}. There are also
several adaptive quantization algorithms for training neural networks in a
non-distributed setting. \Citet{adaparams} use different quantization levels
for different parameters of a neural network. FracTrain \citep{FracTrain}
introduced multi-dimensional adaptive quantization by developing
time-adaptive quantization and combining it with parameter-adaptive
quantization. However, FracTrain uses the current loss to decide on the
quantization level. FL generally can only compute local client losses that
are too noisy to be practical for FracTrain. AdaQuantFL introduces
time-adaptive quantization to FL, but requires the global loss
\citep{adaquantfl}. To compute the global loss, AdaQuantFL has to
communicate with every client each round. We show in
\Cref{sec:experimentsresults} that this quickly becomes impractical as the
number of clients grows. DAdaQuant's time-adaptive quantization overcomes
this issue without compromising on the underlying FL communication. In
addition, to the best of our knowledge, DAdaQuant is the first algorithm to
use client-adaptive quantization.

\section{The DAdaQuant method}

\subsection{Federated Learning}

Federated Learning assumes a client-server topology with a set ${\mathbb{C}} =
\{c_i \mid i \in \{1,2,\ldots,N\}\}$ of $N$ clients that are connected to a single server. Each client
$c_k$ has a local dataset $D_k$ from the local data distribution
$\mathcal{D}_k$.
Given a model $M$ with parameters ${\bm{p}}$, a loss function
$f_{{\bm{p}}}(d\in D_k)$ and the local loss $F_k({\bm{p}}) =
\frac{1}{|D_k|}\sum_{d \in D_k} f_{\bm{p}}(d)$, FL seeks to minimize the global
loss $G({\bm{p}}) = \sum_{k=1}^{N} \frac{|D_k|}{\sum_l|D_l|}F_k({\bm{p}})\label{eq:flobjective}$.


\subsection{Federated Averaging (FedAvg)}

DAdaQuant makes only minimal assumptions about the FL algorithm. Crucially,
DAdaQuant can complement FedAvg \citep{FedAvg}, which is representative of a
large class of FL algorithms.

FedAvg trains the model $M$ over several rounds. In each round $t$, FedAvg
sends the model parameters ${\bm{p}}_t$ to a random subset ${\mathbb{S}}_t$ of $K$ clients
who then optimize their local objectives $F_k({\bm{p}}_t)$ and send the updated
model parameters ${\bm{p}}_{t+1}^k$ back to the server. The server accumulates
all parameters into the new global model ${\bm{p}}_{t+1} = \sum_{k\in{\mathbb{S}}_t}
\frac{|D_k|}{\sum_j |D_j|} {\bm{p}}_{t+1}^{k}$ and starts the next round.
\Cref{alg:fedavgdadaquant} lists FedAvg in detail. For our experiments, we
use the FedProx \citep{FedProx} adaptation of FedAvg.
FedProx improves the
convergence of FedAvg by adding the proximal term
$\frac{\mu}{2}\|{\bm{p}}_{t+1}^k - {\bm{p}}_t\|^2$ to the local objective
$F_k({\bm{p}}_{t+1}^k)$ in \Cref{alg:fedavgobjective} of
\Cref{alg:fedavgdadaquant}.

\subsection{Quantization with Federated QSGD}
\label{sec:qsgd}

While DAdaQuant can be applied to any quantizer with a configurable
quantization level, it is optimized for fixed-point quantization. We
introduce Federated QSGD as a competitive fixed-point quantizer on top of
which DAdaQuant is applied.

In general, fixed-point quantization uses a quantizer $\quantizer_q$ with
quantization level $q$ that splits $\mathbb{R}_{\geq0}$ and $\mathbb{R}_{\leq0}$ into $q$ intervals each.
$\quantizer_q(p)$ then returns the sign of $p$ and $|p|$ rounded to one of
the endpoints of its encompassing interval. $\quantizer_q({\bm{p}})$ quantizes the
vector ${\bm{p}}$ elementwise.

We design DAdaQuant's quantization stage based on QSGD, an efficient
fixed-point quantizer for state-of-the-art gradient compression.
QSGD quantizes a vector ${\bm{p}}$ in
three steps:
\begin{enumerate}[noitemsep]
  \item Quantize ${\bm{p}}$ as $\quantizer_q(\frac{{\bm{p}}}{||{\bm{p}}||_2})$ into $q$ bins in $[0,1]$, storing signs and $||{\bm{p}}||_2$ separately. (\emph{lossy})
  \item Encode the resulting integers with zero run-length encoding. (\emph{lossless})
  \item Encode the resulting integers with Elias $\omega$ coding. (\emph{lossless})
\end{enumerate}

QSGD has been designed specifically for quantizing gradients, which makes it
not directly applicable to parameter compression. To overcome this
limitation, we apply difference coding to uplink compression, first
introduced to FL by FedPAQ. Each client $c_k$ applies $\quantizer_q$ to the
\emph{parameter updates} ${\bm{p}}^k_{t+1}-{\bm{p}}_t$ (cf.
\Cref{alg:fedavgclientreturn} of \Cref{alg:fedavgdadaquant}) and sends them to the server. The server
keeps track of the previous parameters ${\bm{p}}_t$ and accumulates the quantized
parameter updates into the new parameters as ${\bm{p}}_{t+1} = {\bm{p}}_t + \sum_{k\in{\mathbb{S}}_t}
\frac{|D_k|}{\sum_l |D_l|} \quantizer_q({\bm{p}}^k_{t+1}-{\bm{p}}_t)$ (cf.
\Cref{alg:fedavgaccumulate} of \Cref{alg:fedavgdadaquant}). We find that QSGD works well with parameter
updates, which can be regarded as an accumulation of gradients over several
training steps. We call this adaptation of QSGD \emph{Federated QSGD}.
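For illustration, the lossy step (1) admits the following compact sketch of an unbiased stochastic fixed-point quantizer. This is a schematic reading of the steps above rather than a reference implementation; the lossless stages (2) and (3) are omitted.

\begin{verbatim}
# Sketch of the lossy stage of a QSGD-style quantizer (illustration
# only; lossless run-length and Elias coding stages are omitted).
import numpy as np

def quantize(p, q, rng=None):
    """Unbiased stochastic fixed-point quantization of vector p."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(p)
    if norm == 0.0:
        return np.zeros_like(p)
    scaled = np.abs(p) / norm * q          # map |p_i|/||p||_2 into [0, q]
    lower = np.floor(scaled)
    # round up with probability equal to the fractional part -> unbiased
    bins = lower + (rng.random(p.shape) < scaled - lower)
    return np.sign(p) * bins / q * norm
\end{verbatim}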
\subsection{Time-adaptive quantization}

Time-adaptive quantization uses a different quantization level $q_t$ for
each round $t$ of FL training. DAdaQuant chooses $q_t$ to minimize
communication costs without sacrificing accuracy. To this end, we find that
lower quantization levels suffice to initially reduce the loss, while partly
trained models require higher quantization levels to further improve (as illustrated in \Cref{fig:timedynamicquant}).
FracTrain is built on similar observations for non-distributed training.
Therefore, we design DAdaQuant to mimic FracTrain in monotonically
increasing $q_t$ as a function of $t$ and using the training loss to inform
increases in $q_t$.

When $q$ is too low, FL converges prematurely. Like FracTrain, DAdaQuant
monitors the FL loss and increases $q$ when it converges. Unlike FracTrain,
there is no single centralized loss function to evaluate, and unlike
AdaQuantFL, we do not assume availability of the global training loss
$G({\bm{p}}_t)$. Instead, we estimate $G({\bm{p}}_t)$ as the average
local loss $\hat{G}_t = \sum_{k\in{\mathbb{S}}_t} \frac{|D_k|}{\sum_l
|D_l|}F_k({\bm{p}}_t)$ where ${\mathbb{S}}_t$ is the set of clients sampled at round $t$.
Since ${\mathbb{S}}_t$ typically consists of only a small fraction of all clients,
$\hat{G}_t$ is a very noisy estimate of $G({\bm{p}}_t)$. This makes it
unsuitable for convergence detection. Instead, DAdaQuant tracks a running
average loss $\doublehat{{G}}_{t} = \psi \doublehat{{G}}_{t-1} + (1-\psi) \hat{G}_{t}$.

We initialize $q_0 = q_\text{min}$ for some $q_\text{min} \in {\mathbb{N}}$.
DAdaQuant declares convergence whenever $\doublehat{{G}}_{t} \geq
\doublehat{{G}}_{t+1-\phi}$ for some $\phi \in {\mathbb{N}}$ that specifies the number of
rounds across which we compare $\doublehat{{G}}$. On convergence, DAdaQuant sets
$q_{t} = 2q_{t-1}$ and keeps the quantization level fixed for at least
$\phi$ rounds to enable reductions in $G$ to manifest in $\doublehat{{G}}$.
Eventually, the training loss converges regardless of the quantization
level. To avoid unconstrained quantization increases on convergence, we
limit the quantization level to $q_\text{max}$.

The following equation
summarizes DAdaQuant's time-adaptive quantization:
$$
q_{t} \longleftarrow
\begin{cases}
  q_{\text{min}} & t = 0 \\
  2q_{t-1} & t > 0 \text{ and } \doublehat{{G}}_{t-1} \geq \doublehat{{G}}_{t-\phi} \text{ and } t > \phi \text{ and } 2q_{t-1} \leq q_{\text{max}} \text{ and } q_{t-1} = q_{t-\phi} \\
  q_{t-1} & \text{else}
\end{cases}
$$
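For concreteness, the update rule can be read as the following sketch, where \texttt{q\_hist} and \texttt{G\_hat} are hypothetical histories of quantization levels and running-average losses, respectively:

\begin{verbatim}
# Sketch of the time-adaptive controller above (illustration only).
def update_q(t, q_hist, G_hat, phi, q_min, q_max):
    # q_hist[i]: level used at round i; G_hat[i]: running-average loss
    if t == 0:
        return q_min
    if (t > phi
            and G_hat[t - 1] >= G_hat[t - phi]    # loss stopped improving
            and q_hist[t - 1] == q_hist[t - phi]  # level held for phi rounds
            and 2 * q_hist[t - 1] <= q_max):
        return 2 * q_hist[t - 1]                  # double on convergence
    return q_hist[t - 1]
\end{verbatim}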
\begin{figure}[]\small\centering
\begin{subtable}[]{0.492\textwidth}
\begin{tabular}{l|l|lllll}
\multicolumn{2}{r|}{Round} & \multicolumn{1}{l|}{1} & \multicolumn{1}{l|}{2} & \multicolumn{1}{l|}{3} & \multicolumn{1}{l|}{4} & \multicolumn{1}{l|}{5} \\ \hline
Client & Samples & \multicolumn{5}{c}{Quantization level} \\ \hline
A & 1 & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\gradient{8}} & \multicolumn{1}{l|}{} \\
B & 2 & \multicolumn{1}{l|}{\gradient{8}} & \multicolumn{1}{l|}{\gradient{8}} & \multicolumn{1}{l|}{\gradient{8}} & \multicolumn{1}{l|}{\gradient{8}} & \multicolumn{1}{l|}{} \\
C & 3 & \multicolumn{1}{l|}{\gradient{8}} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\gradient{8}} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\gradient{8}} \\
D & 4 & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\gradient{8}} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\gradient{8}}
\end{tabular}
\caption{Static quantization.}
\end{subtable}\hfill
\begin{subtable}[h]{0.492\textwidth}
\begin{tabular}{l|l|lllll}
\multicolumn{2}{r|}{Round} & \multicolumn{1}{l|}{1} & \multicolumn{1}{l|}{2} & \multicolumn{1}{l|}{3} & \multicolumn{1}{l|}{4} & \multicolumn{1}{l|}{5} \\ \hline
Client & Samples & \multicolumn{5}{c}{Quantization level} \\ \hline
A & 1 & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\gradient{4}} & \multicolumn{1}{l|}{} \\
B & 2 & \multicolumn{1}{l|}{\gradient{1}} & \multicolumn{1}{l|}{\gradient{2}} & \multicolumn{1}{l|}{\gradient{2}} & \multicolumn{1}{l|}{\gradient{4}} & \multicolumn{1}{l|}{} \\
C & 3 & \multicolumn{1}{l|}{\gradient{1}} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\gradient{2}} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\gradient{8}} \\
D & 4 & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\gradient{2}} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\gradient{8}}
\end{tabular}
\caption{Time-adaptive quantization.}
\end{subtable}

\begin{subtable}[h]{0.492\textwidth}
\begin{tabular}{l|l|lllll}
\multicolumn{2}{r|}{Round} & \multicolumn{1}{l|}{1} & \multicolumn{1}{l|}{2} & \multicolumn{1}{l|}{3} & \multicolumn{1}{l|}{4} & \multicolumn{1}{l|}{5} \\ \hline
Client & Samples & \multicolumn{5}{c}{Quantization level} \\ \hline
A & 1 & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\gradient{6}} & \multicolumn{1}{l|}{} \\
B & 2 & \multicolumn{1}{l|}{\gradient{7}} & \multicolumn{1}{l|}{\gradient{6}} & \multicolumn{1}{l|}{\gradient{7}} & \multicolumn{1}{l|}{\gradient{9}} & \multicolumn{1}{l|}{} \\
C & 3 & \multicolumn{1}{l|}{\gradient{9}} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\gradient{9}} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\gradient{7}} \\
D & 4 & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\gradient{9}} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\gradient{9}}
\end{tabular}\hspace{0.47em}
\caption{Client-adaptive quantization.}
\end{subtable}
\begin{subtable}[h]{0.492\textwidth}
\begin{tabular}{l|l|lllll}
\multicolumn{2}{r|}{Round} & \multicolumn{1}{l|}{1} & \multicolumn{1}{l|}{2} & \multicolumn{1}{l|}{3} & \multicolumn{1}{l|}{4} & \multicolumn{1}{l|}{5} \\ \hline
Client & Samples & \multicolumn{5}{c}{Quantization level} \\ \hline
A & 1 & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\gradient{3}} & \multicolumn{1}{l|}{} \\
B & 2 & \multicolumn{1}{l|}{\gradient{1}} & \multicolumn{1}{l|}{\gradient{1}} & \multicolumn{1}{l|}{\gradient{2}} & \multicolumn{1}{l|}{\gradient{5}} & \multicolumn{1}{l|}{} \\
C & 3 & \multicolumn{1}{l|}{\gradient{1}} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\gradient{2}} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\gradient{7}} \\
D & 4 & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\gradient{1}} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\gradient{9}}
\end{tabular}
\caption{Time-adaptive and client-adaptive quantization.}
\end{subtable}
\caption{Exemplary quantization level assignment for 4 FL clients that train over 5 rounds. In each round, two clients are sampled for training.}
\label{fig:quantexample}
\end{figure}

\subsection{Client-adaptive quantization}

FL algorithms typically accumulate each parameter $p_i$ over all clients
into a weighted average $p = \sum_{i=1}^K{w_ip_i}$ (see \Cref{alg:fedavgdadaquant}).
Quantized FL communicates and accumulates quantized parameters as
$\quantizer_q(p) = \sum_{i=1}^K{w_i\quantizer_q(p_i)}$ where $q$ is the
quantization level. We define the quantization error $e^q_p$ as $e^q_p = |p
- \quantizer_q(p)|$. We observe that $\mathbb{E}_{p_1\ldots
p_K}[\mathrm{Var}(\quantizer_q(p))]$ is a useful statistic of the quantization error
because it strongly correlates with the loss added by quantization.
For
a stochastic, unbiased fixed-point compressor like Federated QSGD, $\mathbb{E}_{p_1\ldots
p_K}[\mathrm{Var}(\quantizer_q(p))]$ equals $\mathbb{E}_{p_1\ldots p_K}[\mathrm{Var}(e^q_p)]$ and
can be evaluated analytically.

We observe in our experiments that communication cost per client is roughly
a linear function of Federated QSGD's quantization level $q$. This means that the
communication cost per round is proportional to $Q = Kq$. We call $Q$ the
communication budget and use it as a proxy measure of communication cost.

Client-adaptive quantization dynamically adjusts the quantization level of each client. This means
that even within a single round, each client $c_k$ can be assigned a different quantization level $q_k$. The
communication budget of client-adaptive quantization is then $Q = \sum_{k=1}^K{q_k}$ and
$\quantizer_q(p)$ generalizes to $\quantizer_{q_1\ldots q_K}(p) = \sum_{i=1}^K{w_i\quantizer_{q_i}(p_i)}$. We devise an algorithm that chooses $q_k$ to minimize $Q$ subject to $\mathbb{E}_{p_1\ldots
p_K}[\mathrm{Var}(e^{q_1\ldots q_K}_p)] = \mathbb{E}_{p_1\ldots p_K}[\mathrm{Var}(e^q_p)]$ for a given $q$. Thus, our algorithm effectively minimizes
communication costs while maintaining a quantization error similar to static quantization. \Cref{theorem:q} provides us with an analytical formula for the quantization
levels $q_1\ldots q_K$.

\begin{theorem}{Given parameters $p_1\ldots p_K\sim\mathcal{U}[-t, t]$ and a quantization level $q$, the objective $\sum_{i=1}^K{q_i}$
  subject to $\mathbb{E}_{p_1\ldots p_K}[\mathrm{Var}(e^{q_1\ldots q_K}_p)] = \mathbb{E}_{p_1\ldots p_K}[\mathrm{Var}(e^q_p)]$ is minimized by $q_i = \sqrt{\frac{a}{b}}\times w_i^{2/3}$ where $a = {\sum_{j=1}^K w_j^{2/3}}$ and $b = {\sum_{j=1}^K \frac{w_j^2}{q^2}}$.}
\label{theorem:q}
\end{theorem}

DAdaQuant applies \Cref{theorem:q} to lower communication costs while
maintaining the same loss as static quantization does with a fixed $q$. To
ensure that quantization levels are natural numbers, DAdaQuant approximates
the optimal real-valued solution as $q_i = \max(1,
\text{round}(\sqrt{\frac{a}{b}}\times w_i^{2/3}))$. \Cref{sec:proofs} gives a detailed proof
of \Cref{theorem:q}. To the best of our knowledge, DAdaQuant is the first algorithm
to use client-adaptive quantization.
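In code, the rounded solution of \Cref{theorem:q} amounts to a few lines; the sketch below is illustrative, with \texttt{w} denoting the weights of the sampled clients:

\begin{verbatim}
# Sketch of the client-adaptive level assignment from Theorem 1
# (illustration only). w: client weights; q: static reference level.
import numpy as np

def client_levels(w, q):
    w = np.asarray(w, dtype=float)
    a = np.sum(w ** (2 / 3))
    b = np.sum(w ** 2 / q ** 2)
    levels = np.round(np.sqrt(a / b) * w ** (2 / 3))
    return np.maximum(1, levels).astype(int)
\end{verbatim}

As a sanity check, for uniform weights $w_i = 1/K$ this reduces to $q_i = q$ for every client, recovering static quantization.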
\begin{algorithm}[H]
  \SetAlgoLined
  \Fn(){\FServer{}} {
    Initialize $w_i = \frac{|D_i|}{\sum_j |D_j|}$ for all $i\in[1,\ldots,N]$\;
    \For(){$t = 0,\dots,T-1$}{

      Choose ${\mathbb{S}}_t \subset {\mathbb{C}}$ with $|{\mathbb{S}}_t| = K$, including each $c_k \in {\mathbb{C}}$ with uniform probability\;
      \colorbox{brown!40}{
      $q_{t} \longleftarrow
      \begin{cases}
        q_{\text{min}} & t = 0 \\
        2q_{t-1} & t > 0 \text{ and } \doublehat{{G}}_{t-1} \geq \doublehat{{G}}_{t-\phi} \text{ and } t > \phi \text{ and } 2q_{t-1} \leq q_{\text{max}} \text{ and } q_{t-1} = q_{t-\phi} \\
        q_{t-1} & \text{else}
      \end{cases}$}\;
      \For(in parallel){$c_k \in {\mathbb{S}}_t$}{
        \colorbox{magenta!40}{
        $q_t^k \longleftarrow
        \sqrt{\sum_{j=1}^K w_j^{2/3}/\sum_{j=1}^K \frac{w_j^2}{q_t^2}}\times w_k^{2/3}
        $
        }\;

        $Send(c_k, {\bm{p}}_t, ${$q_t^k$}$)$\;

        $Receive(c_k, {\bm{p}}_{t+1}^k,${$\hat{G}_t^k$}$)$\;
      }

      ${\bm{p}}_{t+1} \longleftarrow \sum_{k\in{\mathbb{S}}_t} w_k {\bm{p}}_{t+1}^{k}$\;
      \label{alg:fedavgaccumulate}
      \colorbox{brown!40}{
      $\hat{G}_{t} \longleftarrow \sum_{k\in{\mathbb{S}}_t} w_k \hat{G}_t^k$
      }\;
      \colorbox{brown!40}{
      $\doublehat{{G}}_{t} \longleftarrow
      \begin{cases}
        \hat{G}_0 & t = 0 \\
        \psi \doublehat{{G}}_{t-1} + (1-\psi) \hat{G}_{t} & \textrm{else} \\
      \end{cases}$
      }\;
    }
  }
  \Fn(){\FClient{$c_k$}} {
    $Receive(\textrm{Server}, {\bm{p}}_t,$\,{$q_t^k$}$)$\;
    \colorbox{brown!40}{$\hat{G}_t^k \longleftarrow F_k({\bm{p}}_{t})$}\;

    ${\bm{p}}_{t+1}^k \longleftarrow {\bm{p}}_t$ trained on $F_k$ with SGD for $E$ epochs with learning rate $\eta$\;
    \label{alg:fedavgobjective}
    $Send(\textrm{Server},\,$\colorbox{teal!40}{$\quantizer_{q_t^k}({\bm{p}}_{t+1}^k)$}$, ${$\hat{G}_t^k$}$)$\;
    \label{alg:fedavgclientreturn}
  }
  \caption{The FedAvg and DAdaQuant algorithms. The uncolored lines list FedAvg. Adding the colored lines creates DAdaQuant. \textrect{teal} --- quantization, \textrect{magenta} --- client-adaptive quantization, \textrect{brown} --- time-adaptive quantization.}
  \label{alg:fedavgdadaquant}
\end{algorithm}

\subsection{Doubly-adaptive quantization (DAdaQuant)}

DAdaQuant combines the time-adaptive and client-adaptive quantization
algorithms described in the previous sections. At each round $t$,
time-adaptive quantization determines a preliminary quantization level
$q_t$. Client-adaptive quantization then finds the client quantization
levels $q_t^k, k \in \{1, \ldots, K\}$ that minimize $\sum_{i=1}^K{q_i}$
subject to $\mathbb{E}_{p_1\ldots p_K}[\mathrm{Var}(e^{q_1\ldots q_K}_p)] = \mathbb{E}_{p_1\ldots
p_K}[\mathrm{Var}(e^q_p)]$. \Cref{alg:fedavgdadaquant} lists DAdaQuant in detail.
\Cref{fig:quantexample} gives an example of how our time-adaptive,
client-adaptive and doubly-adaptive quantization algorithms set quantization levels.

\Citet{FedPAQ} prove the convergence of FL with quantization for convex
and non-convex cases as long as the quantizer $\quantizer$ is (1) unbiased
and (2) has a bounded variance. These convergence results extend to
DAdaQuant when combined with any quantizer that satisfies (1) and (2) for
DAdaQuant's minimum quantization level $q=1$. Crucially, this includes
Federated QSGD.
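A DAdaQuant round then simply composes the two stages. The following usage sketch reuses the hypothetical helpers from the previous listings; all inputs are illustrative placeholders:

\begin{verbatim}
# One DAdaQuant round, composing the hypothetical helpers update_q,
# client_levels and quantize from the previous listings.
import numpy as np

t, phi, q_min, q_max = 12, 5, 1, 8
q_hist = [1] * 12                        # levels of earlier rounds
G_hat = list(np.linspace(2.0, 1.0, 13))  # running-average losses
w = np.array([0.1, 0.2, 0.3, 0.4])       # weights of sampled clients

q_t = update_q(t, q_hist, G_hat, phi, q_min, q_max)  # time-adaptive
q_clients = client_levels(w, q_t)                    # client-adaptive
# client k would now send quantize(update_k, q_clients[k]) to the server
\end{verbatim}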
We highlight DAdaQuant's low overhead and general applicability. The
computational overhead is dominated by an additional evaluation epoch per
round per client to compute $\doublehat{{G}}_t$, which is negligible when training
for many epochs per round. DAdaQuant can complement any FL algorithm that
trains models over several rounds and accumulates a weighted average of
client parameters. Most FL algorithms, including FedAvg, follow this design.


\section{Experiments}
\label{sec:experiments}

\subsection{Experimental details}

\textbf{Evaluation} We use DAdaQuant with Federated QSGD to train
different models with FedProx on different datasets for a fixed number of rounds.
We monitor the test loss and accuracy at fixed intervals and measure
uplink communication at every round across all devices.

\textbf{Models \& datasets} We select a broad and diverse set of five
models and datasets to demonstrate the general applicability of DAdaQuant.
To this end, we use DAdaQuant to train a linear model, CNNs and LSTMs of
varying complexity on a federated synthetic dataset (\dataset{Synthetic}),
as well as two federated image datasets (\dataset{FEMNIST} and
\dataset{CelebA}) and two federated natural language datasets
(\dataset{Sent140} and \dataset{Shakespeare}) from the LEAF \citep{LEAF}
project for standardized FL research. We refer to
\Cref{sec:models_datasets_detailed} for more information on the
models, datasets, training objectives and implementation.

\textbf{System heterogeneity}
In practice, FL has to cope with clients that have different compute
capabilities. We follow \citet{FedProx} and simulate this \emph{system
heterogeneity} by randomly reducing the number of epochs to $E'$ for a
random subset ${\mathbb{S}}_t' \subset {\mathbb{S}}_t$ of clients at each round $t$, where
$E'$ is sampled from $[1, \ldots, E]$ and $|{\mathbb{S}}_t'| = 0.9K$.
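A sketch of this simulation protocol (with illustrative names; not our exact experiment harness):

\begin{verbatim}
# Sketch of the simulated system heterogeneity (illustration only).
import random

def heterogeneous_epochs(sampled_clients, E):
    stragglers = random.sample(sampled_clients,
                               int(0.9 * len(sampled_clients)))
    # stragglers train for a random number of epochs E' in [1, E]
    return {c: (random.randint(1, E) if c in stragglers else E)
            for c in sampled_clients}
\end{verbatim}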
\textbf{Baselines}
We compare DAdaQuant against competing quantization-based algorithms for FL
parameter compression, namely Federated QSGD, FedPAQ \citep{FedPAQ}, GZip
with fixed-point quantization (FxPQ + GZip), UVeQFed \citep{uveqfed} and
FP8. Federated QSGD (see \cref{sec:qsgd}) is our most important baseline
because it outperforms the other algorithms. FedPAQ only applies fixed-point
quantization, which is equivalent to Federated QSGD without lossless
compression. Similarly, FxPQ + GZip is equivalent to Federated QSGD with
GZip for its lossless compression stages. UVeQFed generalizes scalar
quantization to vector quantization, followed by arithmetic coding. We apply
UVeQFed with the optimal hyperparameters reported by its authors. FP8
\citep{FP8} is a floating-point quantizer that uses an 8-bit floating-point
format designed for storing neural network gradients. We also evaluate all
experiments without compression to establish an accuracy benchmark.

\textbf{Hyperparameters} With the exception of \dataset{CelebA}, all our datasets
and models are also used by \citeauthor{FedProx}. We therefore adopt most of
the hyperparameters from \citeauthor{FedProx} and use LEAF's hyperparameters for \dataset{CelebA} \citep{LEAF}.
For all experiments, we sample 10 clients each round. We train
\dataset{Synthetic}, \dataset{FEMNIST} and \dataset{CelebA} for 500 rounds
each. We train \dataset{Sent140} for 1000 rounds due to slow convergence and
\dataset{Shakespeare} for 50 rounds due to rapid convergence. We use batch
size 10, learning rates 0.01, 0.003, 0.3, 0.8, 0.1 and $\mu$s (FedProx's
proximal term coefficient) 1, 1, 1, 0.001, 0 for \dataset{Synthetic},
\dataset{FEMNIST}, \dataset{Sent140}, \dataset{Shakespeare},
\dataset{CelebA} respectively. We randomly split the local datasets into
80\% training set and 20\% test set.

To select the quantization level $q$ for static quantization with Federated
QSGD, FedPAQ and FxPQ + GZip, we run a grid search over $q = 1, 2, 4, 8,
\ldots$ and choose for each dataset the lowest $q$ for which Federated QSGD
exceeds uncompressed training in accuracy. We set UVeQFed's ``coding rate''
hyperparameter $R=4$, which is the lowest value for which UVeQFed achieves
negligible accuracy differences compared to uncompressed training.
We set
the remaining hyperparameters of UVeQFed to the optimal values reported by
its authors. \Cref{sec:uveqfed} shows further experiments that compare against
UVeQFed with $R$ chosen to maximize its compression factor.

For DAdaQuant's time-adaptive quantization, we set $\psi$ to 0.9, $\phi$ to
$1/10$th of the number of rounds and $q_\textrm{max}$ to the
quantization level $q$ for each experiment. For \dataset{Synthetic} and
\dataset{FEMNIST}, we set $q_\textrm{min}$ to 1. We find that
\dataset{Sent140}, \dataset{Shakespeare} and \dataset{CelebA} require a high
quantization level to achieve top accuracies and/or converge in few rounds.
This prevents time-adaptive quantization from increasing the quantization
level quickly enough, resulting in prolonged low-precision training that
hurts model performance. To counter this effect, we set $q_\textrm{min}$ to
$q_\textrm{max}/2$. This effectively results in binary time-adaptive
quantization with an initial low-precision phase with $q =
q_\textrm{max}/2$, followed by a high-precision phase with $q =
q_\textrm{max}$.

\subsection{Results}
\label{sec:experimentsresults}

\begin{wrapfigure}[18]{r}{0.40\textwidth}
  \vspace*{-1.7cm}
  \begin{center}
  \scalebox{0.75}{
  \input{anc/adaquantfl.pgf}}
  \end{center}
  \caption{Comparison of AdaQuantFL and DAdaQuant. We plot the total
  client$\rightarrow$server communication required to train an MLR model on
  synthetic datasets with 10, 100, 200 and 400 clients. AdaQuantFL's
  communication increases linearly with the number of clients because it
  trains the model on all clients at each round. In contrast, DAdaQuant's
  communication does not change with the number of clients.}
  \label{fig:adaquantfl}
\end{wrapfigure}

We repeat the main experiments three times and report average results and
their standard deviation (where applicable).
\Cref{tab:results} shows the
highest accuracy and total communication for each experiment.
\Cref{fig:pareto} plots the maximum accuracy achieved for any given amount
of communication.

\begin{table}[]
  \scriptsize
  \centering
  \begin{tabular}{ll@{\hskip -0mm}ll@{\hskip 2mm}ll@{\hskip 2mm}l}
  \multicolumn{1}{l|}{} & \multicolumn{2}{c|}{\textbf{Synthetic}} & \multicolumn{2}{c|}{\textbf{FEMNIST}} & \multicolumn{2}{c}{\textbf{Sent140}} \\ \hline
  \rowcolor[HTML]{EFEFEF}
  \multicolumn{1}{l|}{\cellcolor[HTML]{EFEFEF}\textbf{Uncompressed}} & {$78.3\pm 0.3$} & \multicolumn{1}{l|}{\cellcolor[HTML]{EFEFEF}{$12.2$\,MB}} & {$77.7\pm 0.4$} & \multicolumn{1}{l|}{\cellcolor[HTML]{EFEFEF}{$\!\!\!\!\!132.1$\,GB}} & {$69.7\pm 0.5$} & {$43.9$\,GB} \\
  \multicolumn{1}{l|}{\textbf{Federated QSGD}} & {$-0.1\pm 0.1$} & \multicolumn{1}{l|}{{$17\times$}} & {$+0.7\pm 0.5$} & \multicolumn{1}{l|}{{$\!\!\!\!\!2809\times$}} & {$-0.0\pm 0.5$} & {$90\times$} \\
  \rowcolor[HTML]{EFEFEF}
  \multicolumn{1}{l|}{\cellcolor[HTML]{EFEFEF}\textbf{FP8}} & {$\bm{+0.1\pm 0.4}$} & \multicolumn{1}{l|}{\cellcolor[HTML]{EFEFEF}{$4.0\times$ ($0.23\!\times\!\!\!\!\!\times$)}} & {$-0.1\pm 0.4$} & \multicolumn{1}{l|}{\cellcolor[HTML]{EFEFEF}{$\!\!\!\!\!4.0\times$ ($0.00\!\times\!\!\!\!\!\times$)}} & {$-0.2\pm 0.5$} & {$4.0\times$ ($0.04\!\times\!\!\!\!\!\times$)} \\
  \multicolumn{1}{l|}{\textbf{FedPAQ (FxPQ)}} & {$-0.1\pm 0.1$} & \multicolumn{1}{l|}{{$6.4\times$ ($0.37\!\times\!\!\!\!\!\times$)}} & {$+0.7\pm 0.5$} & \multicolumn{1}{l|}{{$\!\!\!\!\!11\times$ ($0.00\!\times\!\!\!\!\!\times$)}} & {$-0.0\pm 0.5$} & {$4.0\times$ ($0.04\!\times\!\!\!\!\!\times$)} \\
  \rowcolor[HTML]{EFEFEF}
  \multicolumn{1}{l|}{\cellcolor[HTML]{EFEFEF}{\color[HTML]{333333} \textbf{FxPQ + GZip}}} & {\color[HTML]{333333} {$-0.1\pm 0.1$}} & \multicolumn{1}{l|}{\cellcolor[HTML]{EFEFEF}{\color[HTML]{333333} {$14\times$ ($0.82\!\times\!\!\!\!\!\times$)}}} & {\color[HTML]{333333} {$+0.6\pm 0.2$}} & \multicolumn{1}{l|}{\cellcolor[HTML]{EFEFEF}{\color[HTML]{333333} {$\!\!\!\!\!1557\times$ ($0.55\!\times\!\!\!\!\!\times$)}}} & {\color[HTML]{333333} {$-0.0\pm 0.6$}} & {\color[HTML]{333333} {$71\times$ ($0.79\!\times\!\!\!\!\!\times$)}} \\
  \multicolumn{1}{l|}{\textbf{UVeQFed}} & {$-0.5\pm 0.2$} & \multicolumn{1}{l|}{{$0.6\times$ ($0.03\!\times\!\!\!\!\!\times$)}} & {$-2.8\pm 0.5$} & \multicolumn{1}{l|}{{$\!\!\!\!\!12\times$ ($0.00\!\times\!\!\!\!\!\times$)}} & {$+0.0\pm 0.2$} & {$15\times$ ($0.16\!\times\!\!\!\!\!\times$)} \\ \hline
  \rowcolor[HTML]{EFEFEF}
  \multicolumn{1}{l|}{\cellcolor[HTML]{EFEFEF}\textbf{DAdaQuant}} & {$-0.2\pm 0.4$} & \multicolumn{1}{l|}{\cellcolor[HTML]{EFEFEF}{$\bm{48\times}$ ($\bm{2.81\!\times\!\!\!\!\!\times}$)}} & {$+0.7\pm 0.1$} & \multicolumn{1}{l|}{\cellcolor[HTML]{EFEFEF}{$\!\!\!\!\!\bm{4772\times}$ ($\bm{1.70\!\times\!\!\!\!\!\times}$)}} & {$-0.1\pm 0.4$} & {$\bm{108\times}$ ($\bm{1.19\!\times\!\!\!\!\!\times}$)} \\
  \multicolumn{1}{l|}{\textbf{DAdaQuant$_{\text{time}}$}} & {$-0.1\pm 0.5$} & \multicolumn{1}{l|}{{$37\times$ ($2.16\!\times\!\!\!\!\!\times$)}} & {$\bm{+0.8\pm 0.2}$} & \multicolumn{1}{l|}{{$\!\!\!\!\!4518\times$ ($1.61\!\times\!\!\!\!\!\times$)}} & {$-0.1\pm 0.6$} & {$93\times$
($1.03\!\times\!\!\!\!\!\times$)} \\
  \rowcolor[HTML]{EFEFEF}
  \multicolumn{1}{l|}{\cellcolor[HTML]{EFEFEF}\textbf{DAdaQuant$_{\text{clients}}$}} & {$+0.0\pm 0.3$} & \multicolumn{1}{l|}{\cellcolor[HTML]{EFEFEF}{$26\times$ ($1.51\!\times\!\!\!\!\!\times$)}} & {$+0.7\pm 0.4$} & \multicolumn{1}{l|}{\cellcolor[HTML]{EFEFEF}{$\!\!\!\!\!3017\times$ ($1.07\!\times\!\!\!\!\!\times$)}} & {$\bm{+0.1\pm 0.6}$} & {$105\times$ ($1.16\!\times\!\!\!\!\!\times$)} \\
  & & & & & & \\
  \multicolumn{1}{l|}{} & \multicolumn{2}{c|}{\textbf{Shakespeare}} & \multicolumn{2}{c}{\textbf{CelebA}} & & \\ \hhline{-----}
  \multicolumn{1}{l|}{\cellcolor[HTML]{EFEFEF}\textbf{Uncompressed}} & \cellcolor[HTML]{EFEFEF}{$\bm{49.9\pm 0.3}$} & \multicolumn{1}{l|}{\cellcolor[HTML]{EFEFEF}{$267.0$\,MB}} & \cellcolor[HTML]{EFEFEF}{$90.4\pm 0.0$} & \cellcolor[HTML]{EFEFEF}{$12.6$\,GB} & & \\
  \multicolumn{1}{l|}{\textbf{Federated QSGD}} & {$-0.5\pm 0.6$} & \multicolumn{1}{l|}{{$9.5\times$}} & {$-0.1\pm 0.1$} & {$648\times$} & & \\
  \multicolumn{1}{l|}{\cellcolor[HTML]{EFEFEF}\textbf{FP8}} & \cellcolor[HTML]{EFEFEF}{$-0.2\pm 0.4$} & \multicolumn{1}{l|}{\cellcolor[HTML]{EFEFEF}{$4.0\times$ ($0.42\!\times\!\!\!\!\!\times$)}} & \cellcolor[HTML]{EFEFEF}{$\bm{+0.0\pm 0.1}$} & \cellcolor[HTML]{EFEFEF}{$4.0\times$ ($0.01\!\times\!\!\!\!\!\times$)} & & \\
  \multicolumn{1}{l|}{\textbf{FedPAQ (FxPQ)}} & {$-0.5\pm 0.6$} & \multicolumn{1}{l|}{{$3.2\times$ ($0.34\!\times\!\!\!\!\!\times$)}} & {$-0.1\pm 0.1$} & {$6.4\times$ ($0.01\!\times\!\!\!\!\!\times$)} & & \\
  \multicolumn{1}{l|}{\cellcolor[HTML]{EFEFEF}\textbf{FxPQ + GZip}} & \cellcolor[HTML]{EFEFEF}{$-0.5\pm 0.6$} & \multicolumn{1}{l|}{\cellcolor[HTML]{EFEFEF}{$9.3\times$ ($0.97\!\times\!\!\!\!\!\times$)}} & \cellcolor[HTML]{EFEFEF}{$-0.1\pm 0.2$} & \cellcolor[HTML]{EFEFEF}{$494\times$ ($0.76\!\times\!\!\!\!\!\times$)} & & \\
  \multicolumn{1}{l|}{\textbf{UVeQFed}} & {$-0.0\pm 0.4$} & \multicolumn{1}{l|}{{$7.9\times$ ($0.83\!\times\!\!\!\!\!\times$)}} & {$-0.4\pm 0.3$} & {$31\times$ ($0.05\!\times\!\!\!\!\!\times$)} & & \\ \hhline{-----}
  \multicolumn{1}{l|}{\cellcolor[HTML]{EFEFEF}\textbf{DAdaQuant}} & \cellcolor[HTML]{EFEFEF}{$-0.6\pm 0.5$} & \multicolumn{1}{l|}{\cellcolor[HTML]{EFEFEF}{$\bm{21\times}$ ($\bm{2.21\!\times\!\!\!\!\!\times}$)}} & \cellcolor[HTML]{EFEFEF}{$-0.1\pm 0.1$} & \cellcolor[HTML]{EFEFEF}{$\bm{775\times}$ ($\bm{1.20\!\times\!\!\!\!\!\times}$)} & & \\
  \multicolumn{1}{l|}{\textbf{DAdaQuant$_{\text{time}}$}} & {$-0.5\pm 0.5$} & \multicolumn{1}{l|}{{$12\times$ ($1.29\!\times\!\!\!\!\!\times$)}} & {$-0.1\pm 0.2$} & {$716\times$ ($1.10\!\times\!\!\!\!\!\times$)} & & \\
  \multicolumn{1}{l|}{\cellcolor[HTML]{EFEFEF}\textbf{DAdaQuant$_{\text{clients}}$}} & \cellcolor[HTML]{EFEFEF}{$-0.4\pm 0.5$} & \multicolumn{1}{l|}{\cellcolor[HTML]{EFEFEF}{$16\times$ ($1.67\!\times\!\!\!\!\!\times$)}} & \cellcolor[HTML]{EFEFEF}{$-0.1\pm 0.0$} & \cellcolor[HTML]{EFEFEF}{$700\times$ ($1.08\!\times\!\!\!\!\!\times$)} & &
  \end{tabular}

  \caption{Top-1 test accuracies and total client$\rightarrow$server communication of all baselines, DAdaQuant,
  DAdaQuant$_\textrm{time}$ and DAdaQuant$_\textrm{clients}$.
Entry $x \pm y\,\,\,\, p\!\times (q\!\times\!\!\!\!\times)$ denotes an accuracy difference of $x$\% w.r.t. the uncompressed
  accuracy with a standard deviation of $y$\%, a compression factor of $p$ w.r.t. the uncompressed
  communication and a compression factor of $q$ w.r.t. Federated QSGD.
  }
  \label{tab:results}
\end{table}

\paragraph{Baselines}

\Cref{tab:results} shows that the accuracy of most experiments lies within
the margin of error of the uncompressed experiments. This reiterates the
viability of quantization-based compression algorithms for communication
reduction in FL.
For all experiments, Federated QSGD achieves a significantly higher
compression factor than the other baselines. The authors of FedPAQ and
UVeQFed also compare their methods against QSGD and report them as superior.
However, FedPAQ is compared against ``unfederated'' QSGD that communicates
gradients after each local training step, and UVeQFed is
compared against QSGD without its lossless compression stages.

\paragraph{Time-adaptive quantization} The purely time-adaptive version of
DAdaQuant, DAdaQuant$_\textrm{time}$, universally outperforms Federated QSGD
and the other baselines in \Cref{tab:results}, achieving comparable accuracies
while lowering communication costs. DAdaQuant$_\textrm{time}$
performs particularly well on \dataset{Synthetic} and \dataset{FEMNIST},
where it starts from the lowest possible quantization level $q=1$. However,
binary time-adaptive quantization still measurably improves over QSGD for
\dataset{Sent140}, \dataset{Shakespeare} and \dataset{CelebA}.

\Cref{fig:adaquantfl} provides empirical evidence that AdaQuantFL's communication scales linearly
with the number of clients. As a result, AdaQuantFL is prohibitively expensive for datasets with
thousands of clients such as \dataset{CelebA} and \dataset{Sent140}. DAdaQuant does not face this
problem because its communication is unaffected by the number of clients.

\begin{figure}
  \vspace*{-0.7cm}
  \begin{center}
  \hspace*{-0.5cm}
  \scalebox{0.75}{
  \input{anc/pareto_paper_full.pgf}}
  \end{center}
  \vspace*{-0.3cm}
  \caption{Communication-accuracy trade-off curves for training on
  \dataset{FEMNIST} with Federated QSGD and DAdaQuant. We plot the average highest
  accuracies achieved up to any given amount of client$\rightarrow$server communication.
  \Cref{sec:paretofull} shows curves for all datasets, with similar results.}
  \label{fig:pareto}
\end{figure}

\paragraph{Client-adaptive quantization}
The purely client-adaptive version of DAdaQuant, DAdaQuant$_\textrm{clients}$,
also universally outperforms Federated QSGD and the other baselines in
\Cref{tab:results}, achieving similar accuracies while lowering
communication costs. Unsurprisingly, the performance of
DAdaQuant$_\textrm{clients}$ is correlated with the coefficient of variation
$c_v = \frac{\sigma}{\mu}$ of the numbers of samples in the local datasets
with mean $\mu$ and standard deviation $\sigma$: \dataset{Synthetic}
($c_v=3.3$) and \dataset{Shakespeare} ($c_v=1.7$) achieve significantly
higher compression factors than \dataset{Sent140} ($c_v=0.3$),
\dataset{FEMNIST} ($c_v=0.4$) and \dataset{CelebA} ($c_v=0.3$).

\paragraph{DAdaQuant}
DAdaQuant outperforms DAdaQuant$_\textrm{time}$ and
DAdaQuant$_\textrm{clients}$ in communication while achieving similar
accuracies.
The compression factors of DAdaQuant are roughly multiplicative
in those of DAdaQuant$_\textrm{clients}$ and DAdaQuant$_\textrm{time}$. This
demonstrates that we can effectively combine time- and client-adaptive
quantization for maximal communication savings.


\paragraph{Pareto optimality} \Cref{fig:pareto} shows that DAdaQuant
achieves a higher accuracy than the strongest baseline, Federated QSGD, for
any fixed amount of communication. This means that DAdaQuant is Pareto optimal for the
datasets we have explored.

\section{Conclusion}

We introduced DAdaQuant as a computationally efficient and robust algorithm
to boost the performance of quantization-based FL compression algorithms. We
showed intuitively and mathematically how DAdaQuant's dynamic adjustment of
the quantization level across time and clients minimizes
client$\rightarrow$server communication while maintaining convergence speed.
Our experiments establish DAdaQuant as nearly universally superior to
static quantizers, achieving state-of-the-art compression factors when
applied to Federated QSGD. The communication savings of DAdaQuant
effectively lower FL bandwidth usage, energy consumption and training time.
Future work may apply and adapt DAdaQuant to new quantizers, further pushing
the state of the art in FL uplink compression.

\section{Reproducibility Statement}

Our submission includes a repository with the source code for DAdaQuant and
for the experiments presented in this paper. All the datasets used in our
experiments are publicly available. Any post-processing steps of the
datasets are described in \Cref{sec:models_datasets_detailed}. To facilitate
the reproduction of our results, we have bundled all our source code,
dependencies and datasets into a Docker image. The repository submitted with
this paper contains instructions on how to use this Docker image and
reproduce all plots and tables in this paper.

\section{Ethics Statement}

FL trains models on private client datasets in a privacy-preserving manner.
However, FL does not completely eliminate privacy concerns, because the
transmitted model updates and the learned model parameters may expose the
private client data from which they are derived. Our work does not directly
target privacy concerns in FL. With that said, it is worth noting that
DAdaQuant does not expose any client data that is not already exposed
through standard FL training algorithms. In fact, DAdaQuant reduces the
amount of exposed data through lossy compression of the model
updates. We therefore believe that DAdaQuant is free of ethical
complications.

\section{Introduction}

Global symmetries are instrumental in organizing our understanding of phases of matter. The celebrated Landau paradigm classifies phases according to broken symmetries, which also determine the universality classes of transitions between phases.
Symmetry principles become even more powerful from the point of view of long-wavelength, low-energy physics, as the renormalization group fixed points (i.e. IR) often embody more symmetries than the microscopic lattice model (i.e. UV), which is the phenomenon of emergent symmetry~\cite{Moessner2001,Isakov2003,Senthil2004,YYHe2016,NvsenMa2019}. A common example is the emergence of continuous space-time symmetries in the field-theoretical description of a continuous phase transition~\cite{altland2010condensed}.
It is even plausible that a critical point is determined up to finite choices by its full emergent symmetry, which is the basic philosophy (or educated guess) behind the conformal bootstrap program~\cite{Poland2019}.

Modern developments in quantum many-body physics have significantly broadened the scope of quantum phases beyond the Landau classification~\cite{Wen2019}. For these exotic phases, more general notions of global symmetry are called for to completely characterize the phases and the associated phase transitions. Intuitively, these ``beyond Landau'' phases do not have local order parameters. Instead, non-local observables are often needed to characterize them. For a well-known example, confined and deconfined phases of a gauge theory are distinguished by the behavior of the expectation value of Wilson loop operators~\cite{fradkin2013field,Gregor2011}. To incorporate such extended observables into the symmetry framework, higher-form symmetries~\cite{Nussinov2006, Nussinov2009, Gaiotto_2015}, and more generally algebraic symmetries~\cite{ji2019categorical, kong2020algebraic}, have been introduced. These are symmetries whose charged objects are spatially extended, e.g. strings and membranes. In other words, their symmetry transformations act nontrivially only on extended objects. Most notably, spontaneous breaking of such higher symmetries can lead to highly entangled phases, such as topological order~\cite{Gaiotto_2015}. Therefore, even though topologically ordered phases are often said to be beyond the Landau paradigm, they can actually be understood within a similar conceptual framework once higher symmetries are included. In addition, just as the usual global symmetries, higher-form symmetries can have quantum anomalies~\cite{Gaiotto_2015}, which lead to strong non-perturbative constraints on low-energy dynamics~\cite{Gaiotto2017}.

In this work, we make use of the prototypical continuous quantum phase transition, the Ising transition, to elucidate the role of higher-form symmetry. The motivation to re-examine the well-understood Ising transition is the following: in addition to the defining 0-form $\mathbb{Z}_2$ symmetry, the topological requirement that $\mathbb{Z}_2$ domain walls must be closed (in the absence of spatial boundary) can be equivalently formulated as having an unbreakable $\mathbb{Z}_2$ $(D-1)$-form symmetry, where $D$ is the spatial dimension. The gapped phase on either side of the transition spontaneously breaks one and only one of the two symmetries. Therefore, to correctly determine the full emergent internal symmetry of the Ising CFT, the $\mathbb{Z}_2$ higher-form symmetry should be taken into account. For $D=2$, the $1$-form symmetry manifests more clearly in the dual formulation~\cite{Wegner1971}, namely as the confinement-deconfinement transition of a $\mathbb{Z}_2$ gauge theory, which will shed light on higher-form symmetry breaking transitions in a concrete setting.

A basic question about a global symmetry is whether it is broken spontaneously or not in the ground state. For clarity, let us focus on the $D=2$ case. It is well-known that the Ising symmetric, or ``quantum disordered'' phase, spontaneously breaks the higher-form symmetry, and the opposite happens in the Ising symmetry-breaking phase. The fate of the higher-form symmetry at the critical point remains unclear to date.
To diagnose higher-form symmetry breaking, we compute the ground state expectation value of the ``order parameter'' for the higher-form symmetry -- commonly known as the disorder operator in the literature~\citep{KadanoffPRB1971,Fradkin2016,XCWu2020,YCWang2021,XCWu2021} -- which creates a domain wall in the Ising system. Spontaneous breaking of the $\mathbb{Z}_2$ $1$-form symmetry is signified by a perimeter law for the disorder operator. In the dual formulation, the corresponding object is the Wilson loop operator. Through large-scale QMC simulations, we find numerically that at the transition, the disorder operator defined on a rectangular region scales as $l^s e^{-a_1l}$, where $l$ is the perimeter of the region and $s>0$ is a universal constant. We thus conclude that the 1-form symmetry is spontaneously broken at the (2+1)d Ising transition, and it remains so in the disordered phase of the model. This is in stark contrast with the $D=1$ case, where the disorder operator has a power-law decay.

To corroborate the numerical results, we consider more generally the disorder operator corresponding to a 0-form $\mathbb{Z}_2$ symmetry in a free scalar theory in $D$ dimensions, which is a stable fixed point for $D\geq 3$. We show that for the kind of $\mathbb{Z}_2$ symmetry in this case, the disorder operator can be related to the 2nd Renyi entropy. Therefore, the disorder operator also obeys a ``perimeter'' (i.e. volume of the boundary) scaling, with possibly multiplicative power-law corrections. Whether the higher-form symmetry is broken or not is determined by the subleading power-law corrections. We also discuss other free theories, such as a Fermi liquid, where the decay of the disorder operator is in between the ``perimeter'' and the ``area'' laws, signaling no higher-form symmetry breaking.

The rest of the paper is organized as follows. In Sec.~\ref{sec:ii} we review higher-form symmetry and its spontaneous breaking, and its relevance in conventional phases. We also consider higher-form symmetry breaking in free and interacting conformal field theories. In Sec.~\ref{sec:iii} we specialize to the setting of the quantum Ising model in (2+1)d and define the disorder operator. Sec.~\ref{sec:iv} presents the main numerical results from quantum Monte Carlo simulations, which reveal the key evidence of the 1-form symmetry breaking at the $(2+1)$d Ising transition. Sec.~\ref{sec:v} outlines a few immediate directions about higher-form symmetry breaking and its measurement in unbiased numerical treatments of other quantum many-body systems.

\section{Generalized global symmetry}
\label{sec:ii}
Consider a quantum many-body system in $D$ spatial dimensions. Global symmetries are unitary transformations which commute with the Hamiltonian. Typically the symmetry transformation is defined over the entire system, and charges of the global symmetry are carried by particle-like objects.

An important generalization of global symmetry is the higher-form symmetry~\cite{Gaiotto_2015}. For an integer $p\geq 0$, $p$-form symmetry transformations act nontrivially on $p$-dimensional objects. In other words, ``charges'' of a $p$-form symmetry are carried by extended objects. In this language, the usual global symmetry is 0-form, as the charged particle-like objects are zero-dimensional. $p$-form symmetry transformations themselves are unitary operators supported on each codimension-$p$ (i.e. spatial dimension $(D-p)$) closed submanifold $M_{D-p}$.
In particular, this means that there are infinitely many symmetry transformations in the thermodynamic limit. In this work we will only consider discrete, Abelian higher-form symmetry, so for each submanifold $M_{D-p}$ the associated unitary operators form a finite Abelian group $G$.\nPhysically, higher-form symmetry means that certain $p$-dimensional objects are charged under the group $G$, and the quantum numbers they carry constrain the processes of creation, annihilation, splitting, etc. In particular, these extended objects are ``unbreakable'', i.e. they are always closed and cannot end on $(p-1)$-dimensional objects.\n\nFor a concrete example, let us consider (2+1)$d$ $\\mathbb{Z}_2$ gauge theory defined on a square lattice. Each edge of the lattice is associated with a $\\mathbb{Z}_2$ gauge field (i.e. a qubit), subject to Gauss's law at each site $v$:\n\\begin{equation}\n\t\\prod_{e\\ni v}\\tau_e^x=1.\n\t\\label{eqn:gauss}\n\\end{equation}\nHere $e$ runs over edges ending on $v$.\n\nThe divergence-free condition implies that there are no electric charges in the gauge theory. In other words, all $\\mathbb{Z}_2$ electric field lines must form loops. An electric loop can be created by applying the following operator along any closed path $\\gamma$ on the lattice:\n\\begin{equation}\n\tW_e(\\gamma)=\\prod_{e\\in \\gamma}\\tau_e^z.\n\t\\label{}\n\\end{equation}\nThe corresponding $\\mathbb{Z}_2$ 1-form symmetry operator is defined as\n\\begin{equation}\n\tW_m(\\gamma^\\star)=\\prod_{e\\perp \\gamma^\\star} \\tau_e^x\n\t\\label{eqn:Wm}\n\\end{equation}\nfor any closed path $\\gamma^\\star$ on the dual lattice. Here the subscript $m$ in $W_m$ indicates that this is actually the string operator for $\\mathbb{Z}_2$ flux excitations. In field theory parlance, $W_e$ is the Wilson operator of the $\\mathbb{Z}_2$ gauge theory, and $W_m$ is the corresponding Gukov-Witten operator~\\cite{gukov2008rigid}. \n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=\\columnwidth]{figure\/braiding}\n\t\\caption{(a) 0-form symmetry charge is a point-like object, measured by the symmetry transformation defined on the entire system (i.e. at a fixed time slice). (b) 1-form symmetry charge is a loop (the solid line), measured by the symmetry transformation defined on a loop as well, when the two are linked. }\n\t\\label{fig:braiding}\n\\end{figure}\n\nWe notice that the $W_m(\\gamma^\\star)$ operator is in fact the product of the Gauss's law terms $\\prod_{e\\ni v}\\tau_e^x$ for all $v$ in the region enclosed by $\\gamma^\\star$. In other words, the smallest possible $\\gamma^\\star$ is a loop around one vertex $v$, and the fact that $W_m(\\gamma^\\star)$ is conserved by the dynamics means that the gauge charge at site $v$ must be conserved (mod 2) as well. Therefore, the $\\mathbb{Z}_2$ gauge theory with electric 1-form symmetry is one with completely static charges, including the case with no charges at all. For applications in relativistic quantum field theories, it is usually further required that the 1-form symmetry transformation is ``topological'', i.e. not affected by local deformations of the loop $\\gamma^\\star$, which is equivalent to the absence of gauge charge as given in Eq. \\eqref{eqn:gauss}.\n\nIt is instructive to consider how the 1-form charge of an electric loop can be measured. This is most clearly done in space-time: to measure a $p$-dimensional charge, one ``wraps'' around the charge by a $(D-p)$-dimensional symmetry operator. 
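Concretely, at a fixed time slice, each edge where the loops $\\gamma$ and $\\gamma^\\star$ cross contributes one anticommuting pair $\\tau_e^z$, $\\tau_e^x$, so that\n\\begin{equation}\n\tW_e(\\gamma)\\,W_m(\\gamma^\\star)=(-1)^{|\\gamma\\cap\\gamma^\\star|}\\,W_m(\\gamma^\\star)\\,W_e(\\gamma),\n\t\\label{}\n\\end{equation}\nwhere $|\\gamma\\cap\\gamma^\\star|$ counts the crossings; this worked consequence of the Pauli algebra shows that an electric loop crossing $\\gamma^\\star$ an odd number of times carries unit $\\mathbb{Z}_2$ charge under $W_m$. 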
Applying the symmetry transformation is equivalent to shrinking the symmetry operator, and in $(D+1)$ spacetime, because of the linking, the two must collide, and the non-commutativity (e.g. between $W_e$ and $W_m$) measures the charge value. We illustrate the process for $p=0$ (Fig. \\ref{fig:braiding}(a)) and $p=1$ (Fig. \\ref{fig:braiding}(b)), in three-dimensional space-time.\n\nNow consider the following Hamiltonian of the Ising gauge theory:\n\\begin{equation}\n\tH=-J\\sum_e\\tau_e^x - K\\sum_p \\prod_{e\\in \\partial p}\\tau_e^z,\n\t\\label{}\n\\end{equation}\nwhere $J,K > 0$. When $J\\ll K$, the ground state is in the deconfined phase, which can be viewed as an equal-weight superposition of all closed $\\mathbb{Z}_2$ electric loops. In this phase, the $\\mathbb{Z}_2$ 1-form symmetry is spontaneously broken. When $J\\gg K$, the ground state is a product state with $\\tau_e^x=1$ everywhere, and the 1-form symmetry is preserved. This is the confined phase. Similar to the usual boson condensation, the expectation value of the electric loop creation operator $W_e(\\gamma)$ can be used to characterize the 1-form symmetry breaking phase: it obeys a perimeter law in the deconfined phase.\n\nThis example shows that higher-form symmetries naturally arise in gauge theories. In condensed matter applications, gauge theories are usually emergent~\\cite{Senthil2004,NvsenMa2018}, which means that dynamical gauge charges are inevitably present and the electric 1-form symmetry is explicitly broken. Even under such circumstances, at energy scales well below the electric charge gap, the theory still has an emergent 1-form symmetry~\\cite{WenPRB2019}.\n\nLet us now discuss more generally the spontaneous breaking of higher-form symmetry~\\cite{Gaiotto_2015, Lake2018, Hofman2018}. We will assume that the symmetry group is discrete. For a $p$-form symmetry, a charged object is created by an extended operator $W(C)$ defined on a $p$-dimensional manifold $C$. When the symmetry is unbroken, we have\n\\begin{equation}\n\t\\langle W(C)\\rangle \\sim e^{-t_{p+1} \\mathrm{Area}(C)},\n\t\\label{eq:eq5}\n\\end{equation}\nwhere $\\mathrm{Area}(C)$ is the volume of a minimal $(p+1)$-dimensional manifold whose boundary is $C$. $t_{p+1}$ can be understood as the ``tension'' of the $(p+1)$-dimensional manifold. This generalizes the exponential decay of correlations of a charged local operator in the 0-form case. On the other hand, when the symmetry is spontaneously broken,\n\\begin{equation}\n\t\\langle W(C)\\rangle \\sim e^{-t_p \\mathrm{Perimeter}(C)},\n\t\\label{}\n\\end{equation}\nwhere $\\mathrm{Perimeter}(C)$ denotes the ``volume'' of $C$ itself. Importantly, the expectation value only depends locally on $C$, which is the analog of the factorization of the correlation function of a local order parameter, $\\langle O(x)O^\\dag(y)\\rangle \\approx \\langle O(x)\\rangle \\langle O^\\dag(y)\\rangle$, for a $0$-form symmetry. One can then redefine the operator $W(C)$ to remove the perimeter scaling, and in that case $\\langle W(C)\\rangle$ would approach a constant in the limit of large $C$~\\cite{HastingsPRB2005}. At a critical point, however, subleading corrections become important, which will be examined below. \n\nThe $\\mathbb{Z}_2$ gauge theory is famously dual to a quantum Ising model~\\cite{Kogut1979}. 
In fact, more generally, there is a duality transformation which relates a system with global $\\mathbb{Z}_2$ 0-form symmetry (in the $\\mathbb{Z}_2$ even sector) to one with global $\\mathbb{Z}_2$ $(D-1)$-form symmetry, a generalization of the Kramers-Wannier duality in (1+1)d. \n\nLet us now review the duality in (2+1)d. The dual Ising spins are defined on plaquettes, whose centers form the dual lattice. For a given edge $e$ of the original lattice, denote the two adjacent plaquettes by $p$ and $q$, as shown in the figure below:\n\\begin{center}\n\\begin{tikzpicture}\n\t\\draw[thick] (-1.3, 0) -- (1.3, 0);\n\t\\draw[thick] (-1.3, 1) -- (1.3, 1);\n\t\\draw[thick] (-1.3,-1) -- (1.3,-1);\n\t\\draw[thick, dashed] (-1.3,-0.5) -- (1.3,-0.5);\n\t\\draw[thick, dashed] (-1.3,0.5) -- (1.3,0.5);\n\t\\draw[thick] (-1, -1.3) -- (-1, 1.3);\n\t\\draw[thick] (0, -1.3) -- (0, 1.3);\n\t\\draw[thick] (1, -1.3) -- (1, 1.3);\n\t\\draw[thick, dashed] (0.5, -1.3) -- (0.5, 1.3);\n\t\\draw[thick, dashed] (-0.5, -1.3) -- (-0.5, 1.3);\n\t\\filldraw (-0.5, -0.5) circle (1.5pt);\n\t\\filldraw (0.5, -0.5) circle (1.5pt);\n\t\\filldraw (-0.5, 0.5) circle (1.5pt);\n\t\\filldraw (0.5, 0.5) circle (1.5pt);\n\t\\node at (-0.35, 0.3) {$p$};\n\t\\node at (0.61, 0.3) {$q$};\n\t\\node at (0.13, 0.7) {$e$};\n\\end{tikzpicture}\n\\end{center}\n\nThe duality map is defined as follows:\n\\begin{equation}\n\t\\sigma_{p}^z\\sigma_{q}^z \\leftrightarrow \\tau_e^x, \\qquad \\sigma^x_{p} \\leftrightarrow \\prod_{e\\in \\partial p}\\tau_e^z.\n\t\\label{}\n\\end{equation}\nNote that the expression automatically ensures $\\prod_p\\sigma^x_p=1$ in a closed system, so the dual spin system has a $\\mathbb{Z}_2$ 0-form symmetry generated by $S=\\prod_p \\sigma_p^x$, and the map can only be done in the $\\mathbb{Z}_2$ even sector with $S=1$~\\footnote{In a sense the $\\mathbb{Z}_2$ symmetry is gauged. In fact one way to derive the duality is to first gauge the $\\mathbb{Z}_2$ symmetry and then perform gauge transformations to eliminate the Ising matter.}. Conversely, the mapping also implies $\\prod_{e\\ni v}\\tau_e^x=1$, and in fact $W_m(\\gamma^\\star)=1$ for any $\\gamma^\\star$, i.e. the $\\mathbb{Z}_2$ 1-form symmetry is strictly enforced.\n\nIn the dual model, the electric field lines of the $\\mathbb{Z}_2$ gauge theory become the domain walls separating regions with opposite Ising magnetizations. Therefore, a Wilson loop $W_e(\\gamma)$ maps to\n\\begin{equation}\n\tX_M=\\prod_{p\\in M} \\sigma_p^x,\n\t\\label{eqn:Xdef}\n\\end{equation}\nwhere $\\partial M=\\gamma$, i.e. $M$ is the region enclosed by $\\gamma$. Physically $X_M$ flips all the Ising spins in the region $M$, thus creating a domain wall along the boundary $\\gamma$. It is called the disorder operator for the Ising system, and will be the focus of our study below.\n\nUnder the duality map, the Hamiltonian becomes\n\\begin{equation}\n\tH=-J\\sum_{\\langle pq\\rangle}\\sigma_p^z\\sigma_q^z - K\\sum_p \\sigma_p^x.\n\t\\label{eqn:TFI}\n\\end{equation}\n\nThe phases of the gauge theory can be readily understood in the dual representation. For $K\\gg J$, the $\\mathbb{Z}_2$ gauge theory is in the deconfined phase, which means that the ground state contains arbitrarily large electric loops. For the dual Ising model, the ground state is disordered, with all $\\sigma_p^x=1$. 
If we work in the $\\sigma^z$ eigenbasis (which is natural for discussing symmetry breaking), the ground state wavefunction is given by\n\\begin{equation}\n\t\\ket{\\psi_{K=\\infty}}\\propto \\prod_p \\frac{1+\\sigma_p^x}{2}\\ket{\\uparrow\\uparrow\\cdots\\uparrow}.\n\t\\label{}\n\\end{equation}\nNamely, we pick any basis state and apply the ground state projector. Expanding out the projector, one can see that the wavefunction is an equal superposition of all domain wall configurations, i.e. a condensation of domain walls. Since the domain walls carry $\\mathbb{Z}_2$ 1-form charges, the condensation breaks the 1-form symmetry spontaneously, much like the Bose condensation spontaneously breaks the conservation of particle number.\n\nIn the other limit $K\\ll J$, the gauge theory is confined. Correspondingly, the dual Ising model is in the ferromagnetically ordered phase: there are two degenerate ground states $\\ket{\\uparrow\\cdots\\uparrow}$ and $\\ket{\\downarrow\\cdots\\downarrow}$. There are no domain walls at all in the limit $K\\rightarrow 0$. When a small but finite $K\/J$ is turned on, quantum fluctuations create domain walls on top of the fully polarized ground states, but these domain walls are small and sparse.\n\n\n\n\\subsection{Non-invertible anomaly and gapless states}\nA notable feature of the duality map is that on either side, only one of the two symmetries, the $\\mathbb{Z}_2$ 0-form and the $\\mathbb{Z}_2$ 1-form symmetry, is faithfully represented (in the sense that the symmetry transformation is implemented by a nontrivial operator, even though the duality is supposed to work only in the symmetric sector). The other symmetry transformation is mapped to the identity at the operator level. Physically, only one of them is an explicit global symmetry, while the other one appears as a global constraint (e.g. on the Ising side, domain walls of the 0-form global symmetry are codimension-1 closed manifolds, which is the manifestation that they are charged under a $(D-1)$-form symmetry). \n\nA closely related fact is that the ordered phase for one symmetry is necessarily the disordered phase of the other, and any non-degenerate gapped phase must break one and only one of the two symmetries. This has been proven rigorously in one spatial dimension~\\cite{Levin2019}, and is believed to hold in general dimensions as well.\n\nIt is clear from these results that these two symmetries cannot be considered as completely independent. Recently, Ref. [\\onlinecite{JiPRR2019}] proposed that the precise relation between the two dual symmetries is captured by the notion of a non-invertible quantum anomaly. Intuitively, the meaning of the non-invertible anomaly in the context of the $\\mathbb{Z}_2$ Ising model can be understood as follows: the charge of the $\\mathbb{Z}_2$ 0-form symmetry is an Ising spin flip, while the charge of the $\\mathbb{Z}_2$ 1-form symmetry is an Ising domain wall. These two objects have nontrivial mutual ``braiding'', in the sense that when an Ising charge is moved across a domain wall, it picks up a minus sign due to the Ising symmetry transformation applied to one side of the domain wall. In other words, the charge of the 1-form symmetry is actually a flux loop of the 0-form symmetry. Ref. [\\onlinecite{JiPRR2019}] suggested that two symmetries whose charged objects braid nontrivially with each other cannot be realized faithfully in a local Hilbert space. 
If locality is insisted upon, then the only option is to realize the $D$-dimensional system as the boundary of a $\\mathbb{Z}_2$ toric code model in $(D+1)$ spatial dimensions. In this case, the charged objects are in fact bulk topological excitations brought to the boundary. The nontrivial braiding statistics between the two kinds of charges reflects the topological order in the bulk. Such an anomaly is fundamentally different from the more familiar 't Hooft anomaly realized on the boundary of a symmetry-protected topological phase (which is an invertible state). We refer to Ref. [\\onlinecite{JiPRR2019}] for a more thorough discussion of the non-invertible anomaly.\n\n\n\n\n Since any gapped state must break one of the two symmetries, it is very natural to ask whether there are gapless states that preserve both symmetries. An obvious candidate for such a gapless state is the symmetry-breaking continuous transition. At the transition, the two-point correlation function of the Ising order parameter decays algebraically with the distance, implying that the $\\mathbb{Z}_2$ 0-form symmetry is indeed unbroken. For the dual $(D-1)$-form symmetry, the Kramers-Wannier duality maps the disorder operator, which is a string operator in the Ising basis, to the two-point correlator of the Ising order parameter. Therefore the expectation value of the disorder operator also exhibits power-law correlation, and the dual $0$-form symmetry is preserved. Therefore the Ising conformal field theory in (1+1)d indeed provides an example of a symmetric gapless state with non-invertible anomaly~\\cite{JiPRR2019}. But for the case of $D>1$, the situation is far from clear, and that is what we will address in this paper. First, we analyze the expectation value of the disorder operator in a free field theory.\n\n\\subsection{Scaling of disorder operator in field theory}\nWe now discuss the scaling form of the disorder operator at or near the critical point from a field-theoretical point of view. The natural starting point is the Gaussian fixed point, i.e. a free scalar theory, described by the following Hamiltonian:\n\\begin{equation}\n\t{H}[\\phi]=\\int\\mathrm{d}^D\\mathbf{r}\\,\\left[\\frac{\\pi^2}{2}+\\frac{1}{2}(\\nabla \\phi)^2\\right].\n\t\\label{eqn:freeboson1}\n\\end{equation}\nThe real scalar $\\phi$ can be thought of as the coarse-grained Ising order parameter, and $\\pi$ is the conjugate momentum of $\\phi$. The $\\mathbb{Z}_2$ symmetry acts as $\\phi\\rightarrow -\\phi$. The disorder operator $X_M$ is defined as the continuum version of Eq. \\eqref{eqn:Xdef}, where the $\\mathbb{Z}_2$ symmetry is applied to a finite region $M$.\n\nInterestingly, for the free theory the expectation value of the disorder operator can be related to another well-studied quantity, the 2nd Renyi entanglement entropy $S_2$. More precisely, for a region $M$, we have\n\\begin{equation}\n\te^{-S_2(M)}=\\langle X_M\\rangle.\n\t\\label{eqn:S2=X}\n\\end{equation}\nHere $S_2(M)$ is the 2nd Renyi entropy of the region $M$.\n\nTo see why this is the case, recall that the 2nd Renyi entropy $S_2$ for a region $M$ of a quantum state $\\ket{\\Psi}$ is given by\n\\begin{equation}\n\te^{-S_2(M)}=\\Tr \\rho_M^2,\n\t\\label{}\n\\end{equation}\nwhere $\\rho_M$ is the reduced density matrix for the region $M$, obtained from tracing out the degrees of freedom in the complement $\\ol{M}$: $\\rho_M= \\Tr_{\\ol{M}} \\ket{\\Psi}\\bra{\\Psi}$. 
In the following we denote the ground state wave functional of the state $\\ket{\\Psi}$ by $\\Psi(\\phi)$:\n\\begin{equation}\n\t\\ket{\\Psi}=\\int D\\phi\\, \\Psi(\\phi)\\ket{\\phi}.\n\t\\label{eqn:wfn}\n\\end{equation}\n\nThe Renyi entropy can be calculated with a replica trick, which we now review in the Hamiltonian formalism. Consider two identical copies of the system, in the state $\\ket{\\Psi}\\otimes\\ket{\\Psi}$. In the field theory example, the fields in the two copies are denoted by $\\phi^{(1)}$ and $\\phi^{(2)}$, respectively. We denote the basis state with a given field configuration $\\phi^{(i)}$ in the $i$-th copy by $\\ket{\\phi^{(i)}_M,\\phi^{(i)}_{\\ol{M}}}$, where $\\phi^{(i)}_M$ is the field configuration restricted to $M$ and similarly $\\phi^{(i)}_{\\ol{M}}$ for the complement of $M$. Since the two copies are completely identical, there is a swap symmetry $R$ acting between the two copies, $R: \\phi^{(1)}\\leftrightarrow \\phi^{(2)}$. $R_M$ then swaps the field configurations only within the region $M$:\n\\begin{equation}\n\tR_M\\ket{\\phi^{(1)}_M, \\phi^{(1)}_{\\ol{M}}}\\otimes\\ket{\\phi^{(2)}_M, \\phi^{(2)}_{\\ol{M}}}=\n\t\\ket{\\phi^{(2)}_M, \\phi^{(1)}_{\\ol{M}}}\\otimes\\ket{\\phi^{(1)}_M, \\phi^{(2)}_{\\ol{M}}}.\n\t\\label{}\n\\end{equation}\n\nThe expectation value of $R_M$ on the replicated ground state $\\ket{\\Psi}\\otimes\\ket{\\Psi}$ is then given by\n\\begin{equation}\n\t\\begin{split}\n\t(\\bra{\\Psi}&\\otimes\\bra{\\Psi})R_M(\\ket{\\Psi}\\otimes\\ket{\\Psi})\\\\\n\t&=\\int \\prod_{i=1,2}D\\phi^{(i)}_M D\\phi^{(i)}_{\\ol{M}}\\,\\Psi(\\phi^{(1)}_M,\\phi^{(1)}_{\\ol{M}})\\Psi^*(\\phi^{(2)}_M,\\phi^{(1)}_{\\ol{M}})\\\\\n\t&\\quad\\quad\\Psi(\\phi^{(2)}_M,\\phi^{(2)}_{\\ol{M}})\\Psi^*(\\phi^{(1)}_M,\\phi^{(2)}_{\\ol{M}})\\\\\n\t&=\\int D\\phi^{(1)}_M D\\phi^{(2)}_{M}\\,\\rho_M(\\phi^{(1)}_M, \\phi^{(2)}_M)\\rho_M(\\phi^{(2)}_M, \\phi^{(1)}_M)\\\\\n\t&=\\Tr \\rho_M^2.\n\t\\end{split}\n\t\\label{}\n\\end{equation}\nTherefore $e^{-S_2}$ is the expectation value of the partial swap $R_M$, which plays the role of the disorder operator for the replica symmetry.\n\nFor a free theory, we rotate the basis to $\\phi_\\pm = \\frac{1}{\\sqrt{2}}(\\phi^{(1)}\\pm \\phi^{(2)})$. In the new basis, the swap symmetry operator becomes:\n\\begin{equation}\n\tR:\\phi_\\pm \\rightarrow \\pm \\phi_\\pm.\n\t\\label{}\n\\end{equation}\nIt is straightforward to check that the Hamiltonian of the replica takes essentially the same form in the new basis:\n\\begin{equation}\n\tH[\\phi^{(1)}]+H[\\phi^{(2)}]=H[\\phi_+]+H[\\phi_-].\n\t\\label{}\n\\end{equation}\nThe ground state again is factorized: $\\ket{\\Psi}\\otimes\\ket{\\Psi}=\\ket{\\Psi}_+\\otimes\\ket{\\Psi}_-$, where $\\ket{\\Psi}_\\pm$ is the state of the $\\phi_\\pm$ field, with the same wave functional as $\\phi$: $\\braket{\\phi_\\pm |\\Psi}_\\pm = \\Psi(\\phi_\\pm)$ as defined in Eq. \\eqref{eqn:wfn}.\n\nWe can now compute the expectation value of $R_M$:\n\\begin{equation}\n\t(\\bra{\\Psi}_+\\otimes\\bra{\\Psi}_-)R_M(\\ket{\\Psi}_+\\otimes\\ket{\\Psi}_-) =\\braket{X_M},\n\t\\label{}\n\\end{equation}\nwhere we used the fact that $R$ acts as the identity on $\\phi_+$. For $\\phi_-$, $R_M$ is nothing but the disorder operator $X_M$. \n\nThe 2nd Renyi entropy of a free scalar has been well-studied~\\cite{Casini2006,Casini2009, Dowker2015, ElvangPLB2015, Helmes2016, Bueno2019, Berthiere2018} and we summarize the results below.\n\nIt is important to distinguish the case where the boundary is smooth from that where the boundary has sharp corners.\n\nFirst consider a smooth boundary. 
For a sphere of radius $R$, in $D=1,2,3$ we have:\n\\begin{equation}\n\tS_2=\n\t\\begin{cases}\n\t\t\\frac{1}{6}\\ln R & D=1\\\\\n\t\ta_1\\frac{R}{\\epsilon} -\\gamma & D=2\\\\\n\t\ta_2\\left(\\frac{R}{\\epsilon}\\right)^2-\\frac{1}{192}\\ln \\frac{R}{\\epsilon} & D=3\n\t\\end{cases}.\n\t\\label{eqn:S2scaling}\n\\end{equation}\nHere $\\epsilon$ is a short-distance cutoff, e.g. the lattice spacing, $a_1, a_2$ are non-universal coefficients, and $\\gamma$ is a universal constant. For a more general smooth entangling boundary, in 2D the same form holds, although the constant correction $\\gamma$ depends on the shape of the region. In 3D, it is known that the coefficient of the logarithmically divergent part of the Renyi entropy can be determined entirely from the local geometric data (e.g. curvature) of the surface in a general CFT~\\cite{Solodukhin2008, Fursaev2012}. \n\nIf the boundary has sharp corners then there are additional divergent terms in the entropy. The prototypical case is $D=2$ when the entangling region has sharp corners. In that case\n\\begin{equation}\n\tS_2=a_1\\frac{l}{\\epsilon}-s\\ln \\frac{l}{\\epsilon},\n\t\\label{eq:eq16}\n\\end{equation}\nwhere $l$ is the perimeter of the entangling region and $s$ is a universal function that only depends on the opening angles of the corners.\nFor a real free scalar, the coefficient of the logarithmic correction is $s\\approx 0.0260$ for a square region (so four $\\pi\/2$ corners, as those in Fig.~\\ref{fig:fig1})~\\cite{Casini2009, Helmes2016}. \n\nQualitatively, it is important that for $D=2,3$ the leading term in $S_2$ always obeys a ``perimeter'' law, i.e. it only depends on the ``area'' (length in 2D) of the entangling boundary. If instead we view $S_2$ as the disorder operator for the $\\mathbb{Z}_2$ replica symmetry, the non-universal, cutoff-dependent perimeter term can be removed by redefining the disorder operator locally along the boundary, and the remaining term is universal. For $D=2$, the subleading term is either a \\emph{negative} constant when the boundary is smooth, or a $\\ln l$ correction with a \\emph{negative} coefficient. So according to the relation Eq.~\\eqref{eqn:S2=X}, the disorder parameter $\\braket{X_M}$, after renormalizing away the perimeter term, does not decrease with the size of $M$, and therefore the corresponding $(D-1)$-form symmetry is spontaneously broken. This is consistent with the fact that the replica symmetry itself must be preserved, as there is no coupling between the two copies.\n\nAlthough the free Gaussian theory is unstable against quartic interactions below the upper critical dimension, and the actual critical theory is the interacting Wilson-Fisher fixed point, results from the free theory can still provide useful insights. It is well-known that for $D=1$, for $M$ an interval of length $R$, the disorder operator $\\braket{X_M}\\sim R^{-1\/4}$, the same power-law decay as that of the Ising order parameter due to Kramers-Wannier duality. For $D=2$, we will resort to numerical simulations below to address the question.\n\n Notice that the relation between $\\langle X\\rangle$ and $S_2$ essentially holds for all free theories, including free fermions. For example, the disorder operator associated with the fermion parity symmetry is also given by $e^{-S_2}$. Interestingly, for a Fermi liquid, it is well-known that $\\ln \\langle X\\rangle = - S_2\\sim -l^{D-1}\\ln l$~\\cite{Gioev2006, Wolf2006}, where $l$ is now the linear size of the region. 
This is an example of a gapless state where the $(D-1)$-form symmetry is preserved. Similar results hold for non-interacting bosonic systems with a ``Bose surface''~\\cite{LaiPRL2013}, an example of which in 2D is given by the exciton Bose liquid~\\cite{ParamekantiPRB2002, TayPRL2010}:\n \\begin{equation}\n\t H=\\int\\mathrm{d}^2\\mathbf{r}\\,\\left[\\frac{\\pi^2}{2}+\\kappa (\\partial_x\\partial_y \\phi)^2\\right].\n\t \\label{}\n \\end{equation}\nIn other words, to preserve both the $0$-form symmetry and the dual $(D-1)$-form symmetry, it is necessary to have a surface of gapless modes in momentum space.\n\nWhile the analytical results discussed in this work are limited to free theories, we conjecture that similar scaling relations hold for interacting CFTs as well. To see why this is plausible, we notice that the entanglement Hamiltonian of a CFT is algebraically ``localized'' near the boundary of the subsystem~\\cite{Casini2011}, which suggests that even for a non-local observable, such as the disorder operator, the major contribution is expected to come from the boundary, and hence a perimeter law scaling. We leave a more systematic study along these lines for future work. In Sec. \\ref{sec:iv} we numerically confirm our conjecture for the Ising CFT in (2+1)d.\n\nWe now briefly discuss what happens if a small mass is turned on in Eq. \\eqref{eqn:freeboson1}. Suppose we are in a gapped phase, and denote by $\\xi$ the correlation length. In general, we expect that $S_2$ obeys a perimeter scaling in the gapped phase, namely the leading term in $S_2$ is given by $a\\frac{R}{\\epsilon}$. In 2D, for a disk entangling region of radius $R$, we have~\\cite{Metlitski2009EE}\n\\begin{equation}\n\tS_2= a_{c}\\frac{R}{\\epsilon} + f\\left( \\frac{R}{\\xi} \\right).\n\t\\label{eq:eq17}\n\\end{equation}\nHere $a_{c}$ is the value of $a$ at the critical point (which was denoted by $a_1$ in Eq. \\eqref{eqn:S2scaling}). The function $f(x)$ satisfies\n\\begin{equation}\n\tf(x)\\rightarrow\n\t\\begin{cases}\n\t\trx & x\\rightarrow \\infty\\\\\n\t\t-\\gamma_c & x\\rightarrow 0\n\t\\end{cases}.\n\t\\label{}\n\\end{equation}\nHere $r$ is a universal constant (once the definition of $\\xi$ is fixed). Suppose the transition is tuned by an external parameter $g$ and the critical point is reached at $g_c$. Since $\\xi\\sim (g-g_c)^{-\\nu}$, where $\\nu$ is the correlation length exponent, one finds that\n\\begin{equation}\n\ta-a_c \\sim (g-g_c)^{\\nu}.\n\t\\label{eq:eq19}\n\\end{equation}\n \n\n\\section{Order and disorder in Ising spin models}\n\\label{sec:iii}\n\nIn the following we study $1$-form symmetry breaking in the transverse field Ising (TFI) model, which gives rise to the $(2+1)$d Ising transition. We have reviewed the connection with the $\\mathbb{Z}_2$ gauge theory in Sec. \\ref{sec:ii}, as well as the 1-form symmetry in the Ising spin system. We will now focus more on the quantitative aspects of the TFI model. Even though the TFI model and the $\\mathbb{Z}_2$ lattice gauge theory are equivalent by the duality map, we choose to work with the TFI model here because the numerical simulation is more straightforward.\n\n We will now consider a square lattice with one Ising spin per site, where the global Ising symmetry is generated by $S=\\prod_\\mb{r}\\sigma_\\mb{r}^x$. 
There are, generally speaking, two phases: a ``disordered'' phase, where the Ising symmetry is preserved by the ground state~\\footnote{We note that there are in fact two distinct types of Ising-disordered phases in 2D, one a trivial paramagnet and the other a nontrivial Ising symmetry-protected topological phase.}, and an ordered phase where the ground states spontaneously break the symmetry. They are separated by a quantum phase transition, described by a conformal field theory with $\\mathbb{Z}_2$ symmetry. It is well-understood how to characterize the Ising symmetry breaking (and its absence) in the three cases: consider the two-point correlation function of the order parameter $\\sigma^z_\\mb{r}$. The asymptotic forms of the correlation function $\\langle \\sigma_\\mb{r}^z\\sigma_\\mb{r'}^z\\rangle$ for large $|\\mb{r}-\\mb{r}'|$ distinguish the three cases:\n\\begin{equation}\n\t\\langle \\sigma_\\mb{r}^z\\sigma_\\mb{r'}^z\\rangle\\sim\n\t\\begin{cases}\n\t\te^{-\\frac{|\\mb{r}-\\mb{r}'|}{\\xi}} & \\text{disordered}\\\\\n\t\t\\frac{1}{|\\mb{r}-\\mb{r}'|^{2\\Delta}} & \\text{critical}\\\\\n\t\t\\text{const.} & \\text{ordered}\n\t\\end{cases}.\n\t\\label{}\n\\end{equation}\nIn both the disordered phase and at the quantum critical point, the Ising symmetry is preserved because of the absence of long-range order. The prototypical lattice model that displays all these features is the TFI model defined on a square lattice:\n\\begin{equation}\n\tH=-\\sum_{\\langle \\mb{r}\\mb{r'}\\rangle}\\sigma_\\mb{r}^z\\sigma_{\\mb{r}'}^z - h\\sum_\\mb{r} \\sigma_\\mb{r}^x, \\quad h\\geq 0.\n\t\\label{eq:eq6}\n\\end{equation}\nNote that this is the same as Eq. \\eqref{eqn:TFI}, but we have set $J=1$ and renamed $K$ as $h$, to align with the standard convention in the literature. The model is in the ordered (disordered) phase for $h\\ll 1$ ($h\\gg 1$). The precise location of the critical point varies with dimension: $h_c=1$ in $D=1$ and $h_c=3.044$ in $D=2$~\\cite{Bloete2002,ZiHongLiu2019}.\n\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.72\\linewidth]{.\/figure\/fig1.pdf}\n\t\\caption{Disorder operator $X$ applied on regions with different shapes: (a) $M$ is a square region with size $R\\times R$ and perimeter $l$. (b) $M$ is a rectangular region with size $R\\times 2R$.}\n\t\\label{fig:fig1}\n\\end{figure}\n\n We will be interested in the disorder operator:\n\\begin{equation}\n\tX_{ M}=\\prod_{\\mb{r}\\in M}\\sigma_\\mb{r}^x,\n\t\\label{eq:eq8}\n\\end{equation}\nwhere $M$ is a rectangular region in the lattice, illustrated in Fig.~\\ref{fig:fig1}.\nIn Ref.~\\onlinecite{ji2019categorical} this operator is called the patch symmetry operator.\n\n When $X_M$ is applied to e.g. $\\ket{\\uparrow\\cdots\\uparrow}$, a domain wall is created along the boundary of the region $M$. These operators are charged under the dual $\\mathbb{Z}_2$ 1-form symmetry. One can easily see that $\\bra{\\psi_{h=\\infty}}X_M\\ket{\\psi_{h=\\infty}}=1$, and $\\bra{\\psi_{h=0}}X_M\\ket{\\psi_{h=0}}=0$. More generally,\n\\begin{equation}\n\t\\bra{\\psi}X_M\\ket{\\psi} \\sim\n\t\\begin{cases}\n\t\te^{-al_M} & h>h_c\\\\\n\t\te^{-bA_M} & h<h_c\n\t\\end{cases},\n\t\\label{eq:eq9}\n\\end{equation}\nwhere $l_M$ is the perimeter of the region $M$ and $A_M$ is its area. In the disordered phase the perimeter law indicates that the 1-form symmetry is spontaneously broken, while in the ordered phase the area law indicates that it is preserved. For $h\\gg h_c$, perturbation theory in $1\/h$ gives the perimeter-law coefficient\n\\begin{equation}\n\ta \\approx \\frac{1}{8h^2}.\n\t\\label{eq:eq10}\n\\end{equation}\n\n\\section{Numerical results}\n\\label{sec:iv}\nWe now present large-scale QMC results for the disorder operator in the model Eq.~\\eqref{eq:eq6}.\n\n\\begin{figure}[htp!]\n\t\\centering\n\t\\includegraphics[width=\\columnwidth]{.\/figure\/fig2.pdf}\n\t\\caption{$\\ln\\langle X_M \\rangle$ versus the perimeter $l$ in the disordered phase for different values of $h$. The inset shows the slopes approaching $1\/8h^{2}$ for large $h$, as expected from Eq.~\\eqref{eq:eq10}.}\n\t\\label{fig:fig2}\n\\end{figure}\n\n\\subsection{Disordered phase $h>h_c$}\n\nFirst we present results in the disordered phase $h>h_c$. As shown in Eq.~\\eqref{eq:eq9}, we expect that the disorder operator obeys a perimeter law scaling, and for $h\\gg h_c$ the coefficient is given in Eq.~\\eqref{eq:eq10}.\n\nFig.~\\ref{fig:fig2} shows the QMC-obtained $\\ln\\langle X_M \\rangle$ as a function of $l$ for different values of $h$. 
The inverse temperature is taken to be $\\beta=10$, and we have checked that the results have already converged at this value of $\\beta$. We observe a clear linear scaling, and the inset shows that for large field $h\\gg h_c$, the slopes of $\\ln\\langle X_M \\rangle$ are indeed given by $1\/8h^{2}$ asymptotically.\n\n\\begin{figure}[htp!]\n\t\\centering\n\t\\includegraphics[width=\\columnwidth]{.\/figure\/fig3.pdf}\n\t\\caption{$\\ln(a_{c} - a)$ versus $\\ln(h-h_{c})$ in the disordered phase for $L=24$ when $h$ is approaching the critical point. The fitted slope (red line) is $0.63\\pm 0.02$, consistent with the correlation length exponent of the $(2+1)$d Ising transition, as expected from Eq.~\\eqref{eq:eq19}.}\n\t\\label{fig:fig3}\n\\end{figure}\n\nNow we consider the other limit, when $h$ approaches the critical point $h_c$ from the disordered side. To test the scaling given in Eq.~\\eqref{eq:eq19}, we measure the disorder operator and find the slope $a$ by a linear fit. Fig.~\\ref{fig:fig3} shows $a_c-a$ as a function of $h-h_c$ in a log-log plot. A clear power law manifests in the data, and the exponent is found to be $\\nu=0.63(2)$. Considering the finite-size effect, the result agrees very well with the 3D Ising correlation length exponent.\n\n\\subsection{Critical point $h=h_c$}\n The central question to be addressed is whether the $\\mathbb{Z}_2$ 1-form symmetry is spontaneously broken at the critical point. To this end, we measure the disorder operator $\\langle X \\rangle$ at $h=h_c$ and scale the inverse temperature as $\\beta=L$ in these simulations. We have also checked that finite-$\\beta$ effects are negligible in our calculations.\n\nFig.~\\ref{fig:fig4} shows $\\ln\\langle X_{M} \\rangle$ as a function of the perimeter $l$, where $M$ is taken to be a square region, as illustrated in Fig.~\\ref{fig:fig1} (a). Results for different system sizes $L=8,16,24,32,40$ are presented, and it is clear that the finite-size effect is negligible. The data clearly demonstrate a linear scaling as in Eq.~\\eqref{eq:eq16}, and the slope $a_1$ quickly converges to $0.0394 \\pm 0.0004$. \n\n\n\n\\begin{figure}[htp!]\n\t\\centering\n\t\\includegraphics[width=\\columnwidth]{.\/figure\/fig4.pdf}\n\t\\caption{$-\\ln(\\langle X \\rangle)$ versus $l$ at the critical point. We use the relation of Eq.~\\eqref{eq:eq16} to fit the data, and the fitted curve of the data up to $L=40$ is $-\\ln(\\langle X \\rangle)=(0.0394 \\pm 0.0004)l-(0.0267 \\pm 0.005)\\ln(l)-(0.0158 \\pm 0.008)$.}\n\t\\label{fig:fig4}\n\\end{figure}\n\nAs we have explained, the boundary of $M$ generally contributes to the disorder operator a term proportional to the perimeter. To detect 1-form symmetry breaking, we need to check whether $\\langle X\\rangle$ depends on the area or not. For this purpose, we consider rectangular regions with different aspect ratios: one with $1:1$ (Fig.~\\ref{fig:fig1} (a)) and the other with $1:2$ (Fig.~\\ref{fig:fig1} (b)), and present the results of $\\langle X \\rangle$ at $h=h_c$ together in Fig.~\\ref{fig:fig5}. It can be seen that the two sets of data basically fall on the same curve, indicating that the disorder parameter only depends on the perimeter.\n\n\\begin{figure}[htp!]\n\\centering\n\\includegraphics[width=\\columnwidth]{.\/figure\/fig5.pdf}\n\\caption{$-\\ln \\langle X_M\\rangle$ versus $l$ at the phase transition point for $M$ with the shape $R\\times R$ (already shown in Fig.~\\ref{fig:fig4}) and $R \\times 2R$, for system size $L=32$. 
The blue line represents the fitted curve of the data for $R\\times 2R$ using the relation specified in Eq.~\\eqref{eq:eq16}. The fitted result for $R\\times 2R$ is $-\\ln\\langle X \\rangle=(0.0397 \\pm 0.0002)l-(0.0279 \\pm 0.003)\\ln(l)-(0.0192 \\pm 0.006)$ and for $R\\times R$ at $L=32$ the result is $-\\ln\\langle X \\rangle=(0.0399 \\pm 0.0003)l-(0.0272 \\pm 0.004)\\ln(l)-(0.0162 \\pm 0.005)$. The coefficients are indistinguishable within error bars.}\n\\label{fig:fig5}\n\n\\end{figure}\n\nGiven the relation between $\\langle X_M\\rangle$ and the Renyi entropy in the free theory, let us examine possible corner contributions to $\\langle X_M\\rangle$, which are parameterized by the coefficient $s$ of Eq.~\\eqref{eq:eq16}. We fit the data points in Fig.~\\ref{fig:fig5} to Eq.~\\eqref{eq:eq16}, which yields $s=0.0272 \\pm 0.004$, close to the free value. We perform the same fit for data points with aspect ratio $1:2$ and obtain essentially the same result ($s=0.0279\\pm 0.003$). The agreement between the fitting results for regions with different aspect ratios again lends strong support to the perimeter dependence of $\\langle X_M\\rangle$ even beyond the leading order, and consequently the 1-form symmetry breaking at the $(2+1)$d Ising CFT. \n\n\nThe convergence of the coefficients $a_1$, $s$ and $a_0$ versus the linear system size $L$ is given in Fig.~\\ref{fig:fig7} in Appendix~\\ref{Sec:appB3}.\n\n\\subsection{Ordered phase $h<h_c$}\nIn the ordered phase, the disorder operator is instead expected to decay with the area of the region $M$, as in Eq.~\\eqref{eq:eq9}, so that the 1-form symmetry is preserved there.\n\n\\section{Discussion}\n\\label{sec:v}\nIn this work, we have studied the spontaneous breaking of the dual $\\mathbb{Z}_2$ $(D-1)$-form symmetry at the Ising transition for $D>1$. The most challenging case is $D=2$, where the transition is described by the interacting Wilson-Fisher fixed point, and we exploit large-scale quantum Monte Carlo simulations. We use the disorder operator of the Ising system to probe the breaking of the dual higher-form symmetry. We find numerically that at the critical point of the 2D quantum Ising model, the one-form symmetry is spontaneously broken, as in the disordered phase, whereas in the ordered phase, the one-form symmetry is intact. \n\nThe disorder operator is intimately related to a line defect (also called a twist operator) in an Ising CFT, around which the spin operator sees an anti-periodic boundary condition. In fact, a line defect is nothing but the boundary of a disorder operator. It is believed that in general such a line defect can flow to a conformal one at low energy, which is indeed consistent with a perimeter law scaling for the expectation value of the disorder operator~\\footnote{We are grateful to Shu-Heng Shao for discussions on this point.}. Local properties of disorder line defects have been previously investigated in Refs. [\\onlinecite{Billo2013}] and [\\onlinecite{Gaiotto2013}]. It will be interesting to understand the relation between the local properties and the universal corner contributions to the disorder operator~\\cite{Bueno2015}.\n\nOur findings, besides elucidating the physics of quantum Ising systems from a new angle, provide a working example of higher-form symmetry in practical use. \nSimilar physical systems can be studied: for example, the disorder operator constructed in this work is readily generalized to the $(2+1)$d XY transition and can be measured with unbiased QMC simulations. Another important direction is to study other higher-form symmetry breaking transitions, such as 1-form symmetry breaking transitions in 3D systems. 
It would also be interesting to investigate the utility of the disorder operator in the topological Ising paramagnetic phase.\nMore applications in quantum lattice models are waiting to be explored, and will certainly lead to new insights into a framework that unifies our understanding of exotic quantum phases and transitions, both beyond the Landau paradigm and within it. \n\n{\\it Note added.-} We would like to draw the reader's attention to a few closely related recent works by X.-C. Wu, C.-M. Jian and C. Xu~\\cite{XCWu2020,XCWu2021} and by some of the present authors on the scaling of the disorder operator at $(2+1)$d U(1) quantum criticality~\\cite{YCWang2021}. \n\n\\section*{Acknowledgement}\nJRZ, ZY and ZYM thank Yan-cheng Wang and Yang Qi for enjoyable discussions, and acknowledge the support from the RGC of Hong Kong SAR of China\n(Grant Nos. 17303019 and 17301420), MOST through the\nNational Key Research and Development Program (Grant\nNo. 2016YFA0300502) and the Strategic Priority Research\nProgram of the Chinese Academy of Sciences (Grant No.\nXDB33000000). We are grateful to Xiao-Gang Wen, Shu-Heng Shao and William Witczak-Krempa for helpful comments. MC would like to thank Zhen Bi, Wenjie Ji and Chao-Ming Jian for enlightening discussions and acknowledges support from NSF (DMR-1846109). We thank the Computational Initiative at the\nFaculty of Science and the Information Technology Services at the University of Hong Kong, and the\nTianhe-1A, Tianhe-2 and Tianhe-3 prototype platforms at the\nNational Supercomputer Centers in Tianjin and Guangzhou for\ntheir technical support and generous allocation of CPU time.\n\n\n\\section{Free scalar field}\n\\label{sec:app1}\nIn this appendix, we calculate the disorder operator at the Gaussian fixed point and show that its expectation value is directly related to the 2nd Renyi entropy.\n\nConsider a free scalar field in $d$ spatial dimensions:\n\\begin{equation}\n\tH=\\frac{1}{2}\\int\\mathrm{d}^d\\mb{x}\\, \\big[\\pi(\\mb{x})^2 + (\\nabla \\phi(\\mb{x}))^2\\big].\n\t\\label{}\n\\end{equation}\nThe theory has a $\\mathbb{Z}_2$ symmetry $X: \\phi\\rightarrow -\\phi$.\n\nTo calculate the disorder operator, let us regularize the theory on a lattice:\n\\begin{equation}\n\tH=\\frac{1}{2}\\sum_i \\pi_i^2 + \\frac{1}{2}\\sum_{ij}\\phi_iK_{ij}\\phi_j.\n\t\\label{}\n\\end{equation}\nHere $i, j, \\dots$ label lattice sites. Define $W= \\sqrt{K}$. 
In the $\\phi$ basis, the ground state wavefunction is given by\n\\begin{equation}\n\t\\Psi(\\phi)=\\left(\\det \\frac{W}{\\pi}\\right)^{1\/4} e^{-\\frac{1}{2}\\phi^\\mathsf{T} W \\phi}.\n\t\\label{}\n\\end{equation}\nThe full density matrix is then\n\\begin{equation}\n\t\\rho(\\phi, \\phi')=\\Psi(\\phi)\\Psi^*(\\phi')=\\sqrt{\\det\\frac{W}{\\pi}} e^{-\\frac{1}{2}(\\phi^\\mathsf{T} W \\phi + {\\phi'}^\\mathsf{T} W\\phi')}.\n\t\\label{}\n\\end{equation}\n\nConsider a region $M$, and represent the matrix $W$ in block form as\n\\begin{equation}\n\tW =\n\t\\begin{pmatrix}\n\t\tA & B \\\\\n\t\tB^\\mathsf{T} & C\n\t\\end{pmatrix},\n\t\\label{}\n\\end{equation}\nwhere the blocks $A$ and $C$ are the restrictions of $W$ to sites inside and outside $M$, respectively, and $B$ contains the matrix elements connecting the two. Tracing out the sites outside $M$ gives\n\\begin{eqnarray}\n\t\\rho_M(\\phi_\\mathrm{i}, \\phi_\\mathrm{i}') &=& \\int\\cal{D}\\phi_\\mathrm{o}\\,\\rho(\\{\\phi_\\mathrm{i}, \\phi_\\mathrm{o}\\},\\{\\phi_\\mathrm{i}', \\phi_\\mathrm{o}\\}) \\nonumber\\\\\n\t&=&\\sqrt{\\det\\frac{W}{\\pi}} e^{-\\frac{1}{2}(\\phi_\\mathrm{i}^\\mathsf{T} A\\phi_\\mathrm{i} + {\\phi_\\mathrm{i}'}^\\mathsf{T} A\\phi_\\mathrm{i}')}\\int\\cal{D}\\phi_\\mathrm{o}\\,e^{-\\phi_\\mathrm{o}^\\mathsf{T} C\\phi_\\mathrm{o} + (\\phi_\\mathrm{i}+\\phi_\\mathrm{i}')^\\mathsf{T} B\\phi_\\mathrm{o}} \\nonumber\\\\\n\t&=& \\sqrt{\\det\\frac{W}{\\pi}} e^{-\\frac{1}{2}(\\phi_\\mathrm{i}^\\mathsf{T} A\\phi_\\mathrm{i} + {\\phi_\\mathrm{i}'}^\\mathsf{T} A\\phi_\\mathrm{i}')} \\frac{1}{\\sqrt{\\det \\frac{C}{\\pi}}}e^{\\frac{1}{4}(\\phi_\\mathrm{i}+\\phi_\\mathrm{i}')^\\mathsf{T} B C^{-1}B^\\mathsf{T} (\\phi_\\mathrm{i}+\\phi_\\mathrm{i}')}.\n\t\\label{}\n\\end{eqnarray}\nWe use the identity\n\\begin{equation}\n\t\\det W = \\det (A-B C^{-1}B^\\mathsf{T}) \\det C.\n\t\\label{}\n\\end{equation}\nThe reduced density matrix is therefore\n\\begin{equation}\n\t\\rho_M(\\phi, \\phi')= \\sqrt{\\det \\frac{A-B C^{-1}B^\\mathsf{T}}{\\pi}}\\,e^{-\\frac{1}{2}(\\phi^\\mathsf{T} A\\phi + {\\phi'}^\\mathsf{T} A\\phi')+\\frac{1}{4}(\\phi+\\phi')^\\mathsf{T} B C^{-1}B^\\mathsf{T} (\\phi+\\phi')}.\n\t\\label{}\n\\end{equation}\n Now we can calculate the disorder operator:\n\\begin{eqnarray}\n\t\\langle X_M\\rangle &=& \\Tr (\\rho_M X) \\nonumber\\\\\n\t&=& \\int \\mathcal{D}\\phi\\mathcal{D}\\phi'\\, \\langle \\phi|\\rho_M|\\phi'\\rangle\\langle \\phi'|X|\\phi\\rangle\\nonumber\\\\\n\t&=& \\int \\mathcal{D}\\phi\\, \\langle \\phi|\\rho_M|\\mathrm{-}\\phi\\rangle \\nonumber\\\\\n\t&=& \\sqrt{\\frac{\\det (A-B C^{-1}B^\\mathsf{T})}{\\det A}}\\nonumber\\\\\n\t&=&\\sqrt{\\det(\\mathds{1}-A^{-1\/2}B C^{-1}B^\\mathsf{T} A^{-1\/2})}.\n\t\\label{}\n\\end{eqnarray}\nThe second Renyi entropy is\n\\begin{eqnarray}\n\t\te^{-S_2} &=& \\Tr \\rho_M^2 \\nonumber\\\\\n\t&=&\\int\\cal{D}\\phi\\cal{D}\\phi'\\, \\rho_M^2(\\phi, \\phi')\\nonumber\\\\\n\t&=& \\det \\frac{A-B C^{-1}B^\\mathsf{T}}{\\pi}\\int\\cal{D}\\phi\\cal{D}\\phi' \\,e^{-\\phi^\\mathsf{T} A\\phi - {\\phi'}^\\mathsf{T} A\\phi'+\\frac{1}{2}(\\phi+\\phi')^\\mathsf{T} B C^{-1}B^\\mathsf{T} (\\phi+\\phi')}\\nonumber\\\\\n\t&=&\\det \\frac{A-B C^{-1}B^\\mathsf{T}}{\\pi}\\int\\cal{D}\\phi_+\\cal{D}\\phi_- \\, e^{- \\phi_+^\\mathsf{T} A \\phi_+ - \\phi_-^\\mathsf{T} A\\phi_- + \\phi_+^\\mathsf{T} B C^{-1}B^\\mathsf{T} \\phi_+} \\nonumber\\\\\n\t&=& \\sqrt{\\det(\\mathds{1}-A^{-1\/2}B C^{-1}B^\\mathsf{T} A^{-1\/2})}.\n\t\\label{}\n\\end{eqnarray}\nHere $\\phi_\\pm = \\frac{\\phi\\pm \\phi'}{\\sqrt{2}}$. 
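The agreement of the final two determinant expressions can also be verified numerically against the standard correlation-matrix method for Gaussian states. The following minimal sketch (in Python) does so for an assumed 1D chain of $N$ sites with a small mass as infrared regularization; the lattice, the choice of region $M$ and all variable names are illustrative assumptions rather than part of the derivation above.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import sqrtm, eigvals\n\nN, m = 40, 0.1                         # chain length and mass (assumed)\ninside = np.zeros(N, dtype=bool)\ninside[10:20] = True                   # region M\nK = (2 + m**2) * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)\nW = np.real(sqrtm(K))                  # W = sqrt(K)\n\n# Disorder operator from the determinant formula derived above\nA = W[np.ix_(inside, inside)]\nB = W[np.ix_(inside, ~inside)]         # couples M to its complement\nC = W[np.ix_(~inside, ~inside)]\nG = B @ np.linalg.solve(C, B.T)        # B C^{-1} B^T\nAi = np.linalg.inv(np.real(sqrtm(A)))\nX_M = np.sqrt(np.linalg.det(np.eye(G.shape[0]) - Ai @ G @ Ai))\n\n# S_2 from ground-state correlators restricted to M: the symplectic\n# eigenvalues nu_k of sqrt(Xc Pc) give Tr rho_M^2 = prod_k (2 nu_k)^{-1}\nXc = np.linalg.inv(W)[np.ix_(inside, inside)] * 0.5   # <phi phi>\nPc = W[np.ix_(inside, inside)] * 0.5                  # <pi pi>\nnu = np.sqrt(np.real(eigvals(Xc @ Pc)))\nS2 = np.sum(np.log(2 * nu))\n\nprint(X_M, np.exp(-S2))                # the two numbers agree\n\\end{verbatim}\n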
Thus we have found $e^{-S_2}=\\langle X_M\\rangle$.\n\n\\end{document}\n\n\\subsection{Classification accuracy across four days}\\label{appen:classification}\n The detailed results of the Naive and CTR models trained on different hourly data across four days are shown in Tables~\\ref{tab:res:classification:2018-02-12}, \\ref{tab:res:classification:2018-04-05}, \\ref{tab:res:classification:2018-08-26} and \\ref{tab:res:classification:2018-10-23}, respectively. First, across all days, CTR models always outperform the baseline model. Moreover, the performance decay along the day is up to $3.74\\%$ for the Naive model (on Apr. 5) and up to $1.71\\%$ for the CTR model (CTR$_{12}$ on Aug. 26). CTR models have much smaller performance variations with the fluctuation of loads and renewables across the four days.\n Second, the CTR model scales well as more hours of data are included. A stark performance decay is only observed when more than 12 hours of data are included.\n\n \\begin{table*}[!hb]\n \\centering\n \\caption{Classification accuracy of Naive and CTR models (Feb. 12)}\n \\resizebox{.7\\columnwidth}{!}{%\n \\label{tab:res:classification:2018-02-12}\n \\begin{tabular}{lrrrrrrrrr}\n \\toprule\n \\multicolumn{1}{c}{Hour} & Naive & CTR$_1$ & CTR$_2$ & CTR$_3$ & CTR$_4$ & CTR$_6$ & CTR$_8$ & CTR$_{12}$ & CTR$_{24}$ \\\\\n \\midrule\n 03:00 - 04:00 & 98.37 & 99.24 & 99.24 & 99.29 & 99.24 & \\textbf{99.31} & 99.26 & 99.17 & 98.94 \\\\\n 04:00 - 05:00 & 98.79 & 99.36 & 99.38 & 99.40 & 99.32 & \\textbf{99.42} & 99.37 & 99.30 & 99.11 \\\\\n 05:00 - 06:00 & 98.74 & 99.68 & 99.70 & \\textbf{99.71} & 99.69 & \\textbf{99.71} & 99.70 & 99.65 & 99.55 \\\\\n 06:00 - 07:00 & 98.60 & 99.85 & \\textbf{99.87} & 99.86 & \\textbf{99.87} & 99.86 & 99.86 & 99.84 & 99.77 \\\\\n 07:00 - 08:00 & 98.14 & \\textbf{99.88} & \\textbf{99.88} & 99.87 & 99.85 & 99.87 & 99.85 & 99.85 & 99.80 \\\\\n 08:00 - 09:00 & 98.66 & \\textbf{99.97} & \\textbf{99.97} & \\textbf{99.97} & \\textbf{99.97} & \\textbf{99.97} & 99.96 & 99.96 & 99.95 \\\\\n 09:00 - 10:00 & 99.27 & 99.96 & 99.96 & 99.96 & \\textbf{99.97} & 99.96 & 99.96 & 99.95 & 99.94 \\\\\n 10:00 - 11:00 & 99.04 & \\textbf{99.80} & \\textbf{99.80} & \\textbf{99.80} & 99.79 & 99.77 & 99.77 & 99.74 & 99.70 \\\\\n 11:00 - 12:00 & 97.64 & 99.24 & \\textbf{99.38} & \\textbf{99.38} & 99.34 & 99.29 & 99.31 & 99.21 & 99.08 \\\\\n 12:00 - 13:00 & 97.96 & \\textbf{99.37} & 99.36 & 99.34 & 99.31 & 99.30 & 99.31 & 99.19 & 99.04 \\\\\n 13:00 - 14:00 & 97.97 & 99.22 & \\textbf{99.26} & 99.25 & 99.21 & 99.21 & 99.18 & 99.05 & 98.91 \\\\\n 14:00 - 15:00 & 97.25 & 99.29 & \\textbf{99.32} & 99.30 & 99.29 & 99.29 & 99.27 & 99.13 & 98.95 \\\\\n 15:00 - 16:00 & 98.37 & \\textbf{99.25} & 99.23 & \\textbf{99.25} & 99.18 & 99.19 & 99.16 & 99.06 & 98.94 \\\\\n 16:00 - 17:00 & 98.71 & 99.52 & 99.53 & \\textbf{99.54} & 99.53 & 99.51 & 99.45 & 99.43 & 99.34 \\\\\n 17:00 - 18:00 & 98.12 & \\textbf{99.69} & \\textbf{99.69} & 99.68 & \\textbf{99.69} & 99.65 & 99.62 & 99.59 & 99.53 \\\\\n 18:00 - 19:00 & 98.93 & \\textbf{99.88} & \\textbf{99.88} & \\textbf{99.88} & \\textbf{99.88} & 99.82 & 99.83 & 99.82 & 99.78 \\\\\n 19:00 - 20:00 & 99.17 & 99.87 & \\textbf{99.88} & \\textbf{99.88} & \\textbf{99.88} & 99.84 & 99.85 & 99.83 & 99.81 \\\\\n 20:00 - 21:00 & 99.03 & 99.68 & 99.64 & \\textbf{99.69} & 99.59 & 99.58 & 99.59 & 99.58 & 99.54 \\\\\n 21:00 - 22:00 & 97.83 & \\textbf{99.47} & \\textbf{99.47} & 99.39 & 99.38 & 99.31 & 99.38 & 99.33 & 99.22 \\\\\n 22:00 - 23:00 & 96.11 & 
99.45 & \\textbf{99.47} & 99.42 & 99.37 & 99.32 & 99.36 & 99.30 & 99.23 \\\\\n 23:00 - 24:00 & 96.82 & \\textbf{99.02} & 98.96 & 98.89 & 98.82 & 98.76 & 98.81 & 98.71 & 98.60 \\\\\n \\bottomrule\n \\end{tabular}\n }\n \\end{table*}\n \\begin{table*}[!h]\n \\centering\n \\caption{Classification accuracy of Naive and CTR models (Apr. 05)}\n \\resizebox{.7\\columnwidth}{!}{%\n \\label{tab:res:classification:2018-04-05}\n \\begin{tabular}{lrrrrrrrrr}\n \\toprule\n \\multicolumn{1}{c}{Hour} & Naive & CTR$_1$ & CTR$_2$ & CTR$_3$ & CTR$_4$ & CTR$_6$ & CTR$_8$ & CTR$_{12}$ & CTR$_{24}$ \\\\\n \\midrule\n 03:00 - 04:00 & 97.95 & \\textbf{99.24} & \\textbf{99.24} & \\textbf{99.24} & \\textbf{99.24} & \\textbf{99.24} & 99.21 & 99.11 & 98.95 \\\\\n 04:00 - 05:00 & 98.29 & \\textbf{99.34} & 99.33 & 99.33 & 99.31 & 99.33 & 99.31 & 99.23 & 99.08 \\\\\n 05:00 - 06:00 & 98.43 & 99.44 & 99.45 & \\textbf{99.46} & 99.44 & \\textbf{99.46} & 99.44 & 99.36 & 99.19 \\\\\n 06:00 - 07:00 & 99.02 & 99.66 & \\textbf{99.69} & \\textbf{99.69} & \\textbf{99.69} & 99.60 & \\textbf{99.69} & 99.65 & 99.55 \\\\\n 07:00 - 08:00 & 98.20 & \\textbf{99.56} & 99.54 & 99.55 & 99.53 & 99.46 & 99.53 & 99.49 & 99.35 \\\\\n 08:00 - 09:00 & 98.22 & 99.31 & \\textbf{99.34} & 99.30 & 99.32 & 99.24 & 99.23 & 99.24 & 99.11 \\\\\n 09:00 - 10:00 & 98.05 & 99.27 & 99.27 & 99.27 & \\textbf{99.28} & 99.15 & 99.16 & 99.17 & 98.97 \\\\\n 10:00 - 11:00 & 98.40 & 99.26 & 99.29 & \\textbf{99.30} & \\textbf{99.30} & 99.21 & 99.23 & 99.23 & 99.08 \\\\\n 11:00 - 12:00 & 97.68 & 98.94 & 98.97 & 98.98 & \\textbf{99.00} & 98.86 & 98.89 & 98.88 & 98.64 \\\\\n 12:00 - 13:00 & 97.11 & 98.46 & \\textbf{98.51} & 98.45 & 98.44 & 98.22 & 98.42 & 98.10 & 98.05 \\\\\n 13:00 - 14:00 & 97.36 & \\textbf{98.83} & 98.80 & 98.77 & 98.78 & 98.56 & 98.65 & 98.45 & 98.32 \\\\\n 14:00 - 15:00 & 96.64 & 98.94 & 98.97 & 98.98 & \\textbf{99.00} & 98.82 & 98.89 & 98.71 & 98.64 \\\\\n 15:00 - 16:00 & 96.55 & 99.10 & \\textbf{99.21} & 99.14 & \\textbf{99.21} & 99.01 & 99.07 & 98.93 & 98.81 \\\\\n 16:00 - 17:00 & 97.36 & \\textbf{98.93} & 98.71 & 98.81 & 98.56 & 98.60 & 98.54 & 98.52 & 98.38 \\\\\n 17:00 - 18:00 & 97.03 & \\textbf{98.72} & 98.59 & 98.67 & 98.51 & 98.46 & 98.47 & 98.42 & 98.31 \\\\\n 18:00 - 19:00 & 98.01 & 99.15 & \\textbf{99.19} & \\textbf{99.19} & 99.02 & 99.11 & 99.02 & 98.95 & 98.88 \\\\\n 19:00 - 20:00 & 98.72 & 99.33 & 99.39 & \\textbf{99.40} & 99.21 & 99.33 & 99.27 & 99.20 & 99.15 \\\\\n 20:00 - 21:00 & 98.87 & 99.37 & 99.41 & \\textbf{99.42} & 99.38 & 99.38 & 99.31 & 99.24 & 99.21 \\\\\n 21:00 - 22:00 & 98.04 & 99.41 & \\textbf{99.43} & 99.34 & 99.39 & 99.35 & 99.30 & 99.26 & 99.22 \\\\\n 22:00 - 23:00 & 95.28 & 99.22 & 99.23 & 99.23 & \\textbf{99.25} & 99.17 & 99.07 & 99.00 & 98.87 \\\\\n 23:00 - 24:00 & 98.37 & \\textbf{99.39} & 99.36 & 99.35 & 99.37 & 99.33 & 99.25 & 99.22 & 99.16 \\\\\n \\bottomrule\n \\end{tabular}\n }\n \\end{table*}\n \n \\vspace{10cm}\n \n \\begin{table*}[!h]\n \\centering\n \\caption{Classification accuracy of Naive and CTR models (Aug. 
26)}\n \\resizebox{.7\\columnwidth}{!}{%\n \\label{tab:res:classification:2018-08-26}\n \\begin{tabular}{lrrrrrrrrr}\n \\toprule\n \\multicolumn{1}{c}{Hour} & Naive & CTR$_1$ & CTR$_2$ & CTR$_3$ & CTR$_4$ & CTR$_6$ & CTR$_8$ & CTR$_{12}$ & CTR$_{24}$ \\\\\n \\midrule\n 03:00 - 04:00 & 99.73 & \\textbf{99.95} & 99.94 & 99.92 & \\textbf{99.95} & 99.92 & 99.91 & 99.90 & 99.88 \\\\\n 04:00 - 05:00 & 99.73 & \\textbf{99.92} & 99.91 & 99.89 & 99.88 & 99.89 & 99.88 & 99.88 & 99.85 \\\\\n 05:00 - 06:00 & 99.66 & 99.88 & \\textbf{99.90} & 99.88 & 99.87 & 99.88 & 99.87 & 99.86 & 99.84 \\\\\n 06:00 - 07:00 & 97.93 & 99.65 & \\textbf{99.67} & 99.66 & \\textbf{99.67} & 99.62 & 99.65 & 99.61 & 99.50 \\\\\n 07:00 - 08:00 & 98.94 & \\textbf{99.63} & \\textbf{99.63} & \\textbf{99.63} & \\textbf{99.63} & 99.60 & 99.60 & 99.59 & 99.51 \\\\\n 08:00 - 09:00 & 98.94 & \\textbf{99.62} & \\textbf{99.62} & \\textbf{99.62} & 99.59 & 99.58 & 99.46 & 99.58 & 99.49 \\\\\n 09:00 - 10:00 & 98.87 & 99.52 & \\textbf{99.55} & 99.49 & 99.52 & 99.51 & 99.39 & 99.52 & 99.41 \\\\\n 10:00 - 11:00 & 98.91 & \\textbf{99.59} & 99.58 & 99.57 & 99.58 & 99.57 & 99.50 & 99.58 & 99.50 \\\\\n 11:00 - 12:00 & 97.98 & 99.15 & 99.18 & \\textbf{99.19} & \\textbf{99.19} & 99.17 & 99.15 & 99.15 & 99.10 \\\\\n 12:00 - 13:00 & 98.08 & 99.03 & \\textbf{99.07} & 99.06 & \\textbf{99.07} & 98.95 & 98.83 & 98.44 & 98.82 \\\\\n 13:00 - 14:00 & 98.31 & 98.71 & 98.87 & 98.97 & \\textbf{98.98} & 98.81 & 98.64 & 98.35 & 98.65 \\\\\n 14:00 - 15:00 & 98.05 & 98.68 & 98.84 & 98.93 & \\textbf{98.97} & 98.90 & 98.56 & 98.20 & 98.60 \\\\\n 15:00 - 16:00 & 98.15 & 98.77 & 98.90 & 98.89 & \\textbf{98.98} & 98.90 & 98.67 & 98.29 & 98.68 \\\\\n 16:00 - 17:00 & 97.85 & 98.75 & 98.75 & 98.88 & 98.62 & \\textbf{98.94} & 98.32 & 98.19 & 98.66 \\\\\n 17:00 - 18:00 & 98.96 & 99.32 & 99.36 & \\textbf{99.38} & \\textbf{99.38} & 99.31 & 99.37 & 99.11 & 99.31 \\\\\n 18:00 - 19:00 & 98.77 & 99.42 & \\textbf{99.46} & 99.44 & 99.43 & 99.36 & 99.39 & 99.10 & 99.28 \\\\\n 19:00 - 20:00 & 98.68 & \\textbf{99.64} & \\textbf{99.64} & 99.63 & 99.61 & 99.59 & 99.58 & 99.42 & 99.53 \\\\\n 20:00 - 21:00 & 98.42 & \\textbf{99.55} & 99.50 & 99.50 & 99.53 & 99.46 & 99.46 & 99.25 & 99.38 \\\\\n 21:00 - 22:00 & 97.96 & 99.46 & 99.44 & 99.45 & \\textbf{99.47} & 99.35 & 99.36 & 99.06 & 99.25 \\\\\n 22:00 - 23:00 & 98.63 & 99.59 & 99.61 & 99.59 & \\textbf{99.62} & 99.52 & 99.53 & 99.31 & 99.46 \\\\\n 23:00 - 24:00 & 98.89 & 99.63 & \\textbf{99.64} & 99.61 & 99.62 & 99.52 & 99.54 & 99.29 & 99.46 \\\\\n \\bottomrule\n \\end{tabular}\n }\n \\end{table*}\n \\begin{table*}[!h]\n \\centering\n \\caption{Classification accuracy of Naive and CTR models (Oct. 
23)}\n \\resizebox{.7\\columnwidth}{!}{%\n \\label{tab:res:classification:2018-10-23}\n \\begin{tabular}{lrrrrrrrrr}\n \\toprule\n \\multicolumn{1}{c}{Hour} & Naive & CTR$_1$ & CTR$_2$ & CTR$_3$ & CTR$_4$ & CTR$_6$ & CTR$_8$ & CTR$_{12}$ & CTR$_{24}$ \\\\\n \\midrule\n 03:00 - 04:00 & 99.46 & \\textbf{99.75} & \\textbf{99.75} & 99.71 & \\textbf{99.75} & 99.70 & 99.71 & 99.64 & 99.60 \\\\\n 04:00 - 05:00 & 98.62 & 99.55 & \\textbf{99.56} & 99.50 & 99.54 & 99.50 & 99.54 & 99.39 & 99.22 \\\\\n 05:00 - 06:00 & 98.95 & 99.57 & \\textbf{99.60} & 99.55 & 99.58 & 99.55 & \\textbf{99.60} & 99.50 & 99.32 \\\\\n 06:00 - 07:00 & 99.16 & 99.55 & \\textbf{99.56} & 99.53 & 99.55 & 99.42 & 99.55 & 99.42 & 99.29 \\\\\n 07:00 - 08:00 & 98.43 & 99.31 & \\textbf{99.34} & 99.30 & 99.25 & 99.09 & 99.27 & 99.01 & 98.74 \\\\\n 08:00 - 09:00 & 97.02 & 98.99 & 98.99 & \\textbf{99.01} & 98.90 & 98.80 & 98.70 & 98.67 & 98.41 \\\\\n 09:00 - 10:00 & 97.14 & 99.03 & \\textbf{99.05} & 98.90 & 98.91 & 98.83 & 98.71 & 98.65 & 98.33 \\\\\n 10:00 - 11:00 & 97.66 & \\textbf{98.85} & 98.67 & 98.75 & 98.77 & 98.67 & 98.55 & 98.50 & 98.15 \\\\\n 11:00 - 12:00 & 97.74 & \\textbf{98.75} & 98.62 & 98.71 & 98.68 & 98.65 & 98.60 & 98.48 & 98.24 \\\\\n 12:00 - 13:00 & 97.34 & 98.61 & \\textbf{98.70} & 98.65 & 98.52 & 98.41 & 98.51 & 98.26 & 98.05 \\\\\n 13:00 - 14:00 & 98.43 & 98.81 & 98.84 & \\textbf{98.86} & 98.71 & 98.68 & 98.66 & 98.55 & 98.40 \\\\\n 14:00 - 15:00 & 98.33 & 98.76 & \\textbf{98.86} & 98.82 & 98.70 & 98.62 & 98.63 & 98.52 & 98.35 \\\\\n 15:00 - 16:00 & 97.00 & 98.64 & \\textbf{98.83} & 98.76 & 98.63 & 98.57 & 98.52 & 98.33 & 98.07 \\\\\n 16:00 - 17:00 & 98.47 & 99.31 & 99.33 & \\textbf{99.36} & 99.31 & 99.15 & 99.28 & 99.08 & 98.99 \\\\\n 17:00 - 18:00 & 99.13 & \\textbf{99.47} & \\textbf{99.47} & \\textbf{99.47} & \\textbf{99.47} & 99.33 & 99.46 & 99.32 & 99.25 \\\\\n 18:00 - 19:00 & 99.13 & 99.57 & \\textbf{99.58} & 99.57 & 99.54 & 99.55 & 99.54 & 99.43 & 99.36 \\\\\n 19:00 - 20:00 & 98.20 & 99.35 & \\textbf{99.42} & \\textbf{99.42} & 99.34 & 99.38 & 99.36 & 99.21 & 99.06 \\\\\n 20:00 - 21:00 & 98.78 & 99.48 & 99.51 & \\textbf{99.52} & 99.47 & 99.49 & 99.47 & 99.34 & 99.20 \\\\\n 21:00 - 22:00 & 97.76 & \\textbf{99.54} & 99.53 & 99.52 & 99.52 & 99.52 & 99.50 & 99.40 & 99.18 \\\\\n 22:00 - 23:00 & 98.69 & \\textbf{99.59} & \\textbf{99.59} & 99.58 & 99.58 & 99.57 & 99.56 & 99.46 & 99.40 \\\\\n 23:00 - 24:00 & 99.01 & 99.57 & \\textbf{99.58} & 99.55 & 99.55 & 99.53 & 99.53 & 99.43 & 99.34 \\\\\n \\bottomrule\n \\end{tabular}\n }\n \\end{table*}\n\n\n\\newpage\n\\subsection{Hourly data distribution}\\label{appen:data_dist}\nThe distribution shifts between different hours are measured using the Hamming distance of the commitment decisions and the energy distance of the input features, shown in Figures~\\ref{fig:heat_map_commitment_decisions} and \\ref{fig:heat_map_energy_distance}, respectively. Each entry indicates the distance between the corresponding hours. The diagonal entries are always $0$, since they measure the distance of each hourly dataset to itself. Interestingly, block-wise patterns can be observed for the energy distances, and are even more obvious for the Hamming distances. Within a block, which consists of several consecutive hours, the distribution shift tends to be small. Across blocks, the distribution shifts largely. 
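For reference, the energy distance between two hourly datasets can be estimated directly from samples; the following is a minimal sketch (in Python), where the feature matrices and their shapes are illustrative assumptions rather than the actual features of our pipeline.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.spatial.distance import cdist\n\ndef energy_distance(X, Y):\n    # E(X, Y) = 2 E|x - y| - E|x - x'| - E|y - y'| (sample estimate)\n    return (2.0 * cdist(X, Y).mean()\n            - cdist(X, X).mean()\n            - cdist(Y, Y).mean())\n\nrng = np.random.default_rng(0)\nX = rng.normal(0.0, 1.0, size=(500, 8))  # stand-in for hour-10 features\nY = rng.normal(0.3, 1.0, size=(500, 8))  # stand-in for hour-11 features\nprint(energy_distance(X, Y))             # near zero if distributions match\n\\end{verbatim}\nComputed pairwise over all hours, such distances produce the block structure visible in Figure~\\ref{fig:heat_map_energy_distance}. 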
These block-wise patterns, to some degree, explain from the data distribution perspective why the CTR models scale well as long as the span of the combined dataset does not exceed a certain number of hours.\n\n\\begin{figure*}[!h]\n\\centering\n\\includegraphics[width=0.30\\linewidth]{img\/heatmap_2018-02-12.pdf}\n\\includegraphics[width=0.30\\linewidth]{img\/heatmap_2018-04-05.pdf}\\\\\n\\includegraphics[width=0.30\\linewidth]{img\/heatmap_2018-08-26.pdf}\n\\includegraphics[width=0.30\\linewidth]{img\/heatmap_2018-10-23.pdf}\n\\caption{Heat map of the Hamming distances of commitment decisions across hours}\n\\label{fig:heat_map_commitment_decisions}\n\\end{figure*}\n\n\n\\begin{figure*}[!h]\n\\centering\n\\includegraphics[width=0.30\\linewidth]{img\/all_ed_heatmap_2018-02-12.pdf}\n\\includegraphics[width=0.30\\linewidth]{img\/all_ed_heatmap_2018-04-05.pdf} \\\\\n\\includegraphics[width=0.30\\linewidth]{img\/all_ed_heatmap_2018-08-26.pdf}\n\\includegraphics[width=0.30\\linewidth]{img\/all_ed_heatmap_2018-10-23.pdf}\n\\caption{Heat map of the energy distances between hourly distributions}\n\\label{fig:heat_map_energy_distance}\n\\end{figure*}\n\n\n\n\\section{Conclusion}\n\nThe paper presented several challenges that may hamper the use of ML\nmodels in power systems operations, including the variability of\nelectricity demand and renewable production, the variations in\nproduction costs, and the combinatorial structure of commitment\ndecisions. To address these challenges, the paper proposed a novel ML\npipeline that leverages day-ahead forecasts and\nthe TSO's knowledge of commitment decisions several hours before they\ntake place. The proposed pipeline consists of two main phases: 1)\nday-ahead data preparation and training; and 2) real-time predictions,\nwhich fits naturally within the operations of a TSO. Moreover, informed\nby the behavior of real-time markets, the paper proposed a novel\nClassification-Then-Regression (CTR) approach that leverages deep neural\nnetworks based on Latent Surgical Interventions to capture generator\ncommitments and generators operating at their limits. Computational\nexperiments that replicated MISO's operational pipeline on a real,\nlarge-scale transmission system demonstrated the feasibility of the\napproach. In particular, the results show that optimization proxies\nbased on ML models have the potential to provide operators with new\ntools for real-time risk monitoring.\n\nSeveral extensions are possible, which will be the topic of future\nwork. The integration, during training, of Lagrangian-based penalties\nhas the potential to further improve the performance of the neural\nnetwork models. Such methods have been shown to improve the\nfeasibility of the predictions with respect to the original\nconstraints. 
Moreover, once trained, the proposed classifier may also be
used to accelerate the SCED resolution, by providing hints as to
which variables should be fixed at their minimum or maximum limit.
Finally, the proposed CTR model is not limited to SCED models, and may
be applied to other optimization problems.

\section*{Acknowledgments}

This research is partly funded by NSF Awards 1912244
and 2112533, and ARPA-E Perform Award AR0001136.



\bibliographystyle{IEEEtran}

\section{Introduction}
\label{sec:intro}

The \textit{Security-Constrained Economic Dispatch} (SCED) is a fundamental optimization model for Transmission System Operators (TSOs) to clear real-time energy markets while ensuring reliable operations of power grids \cite{conejo2018power}.
In the US, TSOs like MISO and PJM execute a SCED every five minutes, which means that the optimization problem must be solved in an even tighter time frame, i.e., well under a minute \cite{Chen2018_MarketClearingSoftware}.
Security constraints, which enforce robustness against the loss of any individual component, render SCED models particularly challenging for large systems \cite{Chiang2015_SolvingSCOPF,Wang2016_SolvingCorrectiveSCOPF,velloso2021exact} unless only a subset of contingencies is considered.
With more distributed resources and increased operational uncertainty, such computational bottlenecks will only become more critical \cite{Chen2018_MarketClearingSoftware}.

This paper is motivated by the growing share of renewable generation, especially wind, in the MISO system, which calls for risk-aware market-clearing algorithms.
One particular challenge is the desire to perform risk analysis in real time, by solving a large number of scenarios for load and renewable production \cite{werho2021_ScenarioGenerationWind}.
However, systematically solving many SCED instances is not practical given the tight constraints of real-time operations. To overcome this computational challenge, this paper proposes to learn an optimization proxy for SCED, i.e., a Machine Learning (ML) model that can predict an optimal solution for SCED, within acceptable numerical tolerances and in milliseconds.

It is important to emphasize that the present goal is not to replace optimization-based market-clearing tools.
Instead, the proposed optimization proxy provides operators with an additional tool for risk assessment.
It makes it possible to quickly evaluate how the system would behave under various scenarios, without the need to run costly optimizations. In particular, because these predictions are combined into aggregated risk metrics, small prediction errors on individual instances are acceptable.

\subsection{Related Literature and Challenges}
\label{sec:intro:motivation}

 The combination of ML and optimization for power systems has attracted increased attention in recent years. A first thread \cite{pan2019deepopf, fioretto2020predicting, chatzos2020high, lei2020data, chatzos2021spatial, velloso2020combining, zamzam2020learning, owerko2020optimal} uses ML models to predict an optimal solution to the Optimal Power Flow (OPF) problem.
 Pan et al. \cite{pan2019deepopf} train a Deep Neural Network (DNN) model to predict solutions to security-constrained DC-OPFs, and report results on systems with no more than 300 buses. One common limitation of ML models, also noted in \cite{pan2019deepopf}, is that predictions are not guaranteed to satisfy the original problem's constraints.
To address this limitation, recent efforts \cite{fioretto2020predicting, chatzos2020high} integrate Lagrangian duality into the training of DNNs in order to capture the physical and operational constraints of AC-OPFs. A similar approach is followed by Velloso et al. \cite{velloso2020combining} in the context of preventive security-constrained DC-OPF. Namely, a DNN is trained using Lagrangian duality techniques, then used as a proxy for the time-consuming master problem in a column-and-constraint generation algorithm. More recently, Chatzos et al. \cite{chatzos2021spatial} embed the Lagrangian duality framework in a two-stage learning that exploits a regional decomposition of the power network, enabling a more efficient distributed training.


 Another research thread is the integration of ML models within optimization algorithms, in order to improve runtime performance. For instance, a number of papers (e.g., \cite{deka2019learning,misra2021learning,xavier2021learning, yang2020fast,guha2019machine}) try to identify active constraints in order to reduce the problem complexity. In \cite{xavier2021learning}, the authors investigate learning feasible solutions to the unit commitment problem. Venzke et al. \cite{venzke2020neural} use a DNN model to learn the feasible region of a dynamic SCOPF and transform the resulting DNN into a mixed-integer linear program.

 Existing research papers in ML for power systems typically suffer from two limitations. On the one hand, most papers report numerical results on small academic test systems, which are one to two orders of magnitude smaller than real transmission systems. This is especially concerning, as higher-dimensional data has an adverse impact on the convergence and accuracy of machine-learning algorithms.
 On the other hand, almost all papers rely on artificially-generated data whose distribution does not capture the variability found in actual operations. For instance, only changes in load are considered, typically without capturing spatio-temporal correlations. Other sources of uncertainty, such as renewable production, are not considered, nor is the variability of economic bids.
 Finally, changes in commitment decisions throughout the day are rarely addressed, although they introduce non-trivial, combinatorial distribution shifts.

\subsection{Contributions and Outline}
\label{sec:intro:contributions}


 To address the above limitations, the paper proposes a novel ML pipeline that is grounded in the structure of real-world market operations. The approach leverages the TSO's forward knowledge of 1) commitment decisions and 2) day-ahead forecasts for load and renewable production. Indeed, both are available by the time the day-ahead market has been executed. Furthermore, the paper presents an in-depth analysis of the real-time market behavior, which informs a novel ML architecture to better capture the nature of actual operations.

 \begin{figure}[!t]
 \centering
 \includegraphics[width=0.95\columnwidth]{img/ML_pipelines-crop.pdf}
 \caption{The proposed machine learning pipeline.}
 \label{fig:learning_pipeline}
 \end{figure}

 The proposed learning pipeline is depicted in Figure~\ref{fig:learning_pipeline}. First, commitment decisions and day-ahead forecasts for load and renewable generation are gathered, following the clearing of the day-ahead market. Second, this information is used to generate a training dataset of SCED instances, using classical data augmentation techniques.
Third, specialized ML models are trained, ideally one for each hour of the operating day, thereby alleviating the combinatorial explosion of commitment decisions: each ML model only needs to consider one set of commitments and focus on load/renewable scenarios around the forecasts for that hour. Fourth, throughout the operating day, the trained models are used in real time to evaluate a large number of scenarios. The entire data-generation and training procedure is completed in a few hours.


 The rest of the paper is organized as follows. Section \ref{sec:opt_pipeline} describes the interplay between day-ahead and real-time markets in the MISO system, and gives an overview of the real-time SCED for MISO. Section \ref{sec:methodology} analyzes the behavior of the real-time market solutions, proposes a combined Classification-Then-Regression architecture, and presents the overall ML pipeline. Numerical experiments on a real-life system are reported in Section \ref{sec:results}.

\section{The Learning Methodology}
\label{sec:methodology}

This section reviews the learning methodology for the RT-SCED. First,
Section \ref{sec:pattern_dispatch} presents an analysis of the
behavior of optimal SCED solutions in MISO's optimization pipeline.
These patterns motivate a novel ML architecture, which is described in
Section \ref{sec:ml_model}, followed by an overview of the proposed ML
pipeline in Section \ref{sec:ML_pipeline}. Further details on the
data are given in Section \ref{sec:results}, as well as in
\cite{PSCC2022-data}.

\subsection{Pattern Analysis of Optimal SCED Solutions} 
\label{sec:pattern_dispatch}
 \begin{figure}[!t]
 \centering
 \includegraphics[width=0.95\columnwidth]{img/daily_commitments.png}
 \vspace{0.2cm}
 \caption{Distribution of unique hourly commitments per day, for each month of the year 2018. Higher values indicate higher variability in commitment decisions.}
 \label{fig:daily_commitment}
 \end{figure}

 The system at hand is the French transmission system, whose network topology is provided by the French TSO, RTE.
 It contains $6{,}708$ buses, $8{,}965$ transmission lines and transformers, and $1{,}890$ individual generators, $1{,}177$ of which are wind and solar generators and $713$ of which are conventional units.
 The MISO optimization pipeline, described in Section \ref{sec:opt_pipeline}, is replicated on this system for the entire year 2018, yielding $365$ DA-SCUC instances and about $100{,}000$ RT-SCED instances.
 Relevant statistics are reported next.

First, unsurprisingly, commitment decisions display a high variability; for example, in 2018, a total of $5{,}380$ different hourly commitments were recorded across the total $8{,}760$ hours of the year, where each ``hourly commitment" is a binary vector of size
713 that contains the (hourly) commitment status of the conventional generators. 
The intra-day variability of commitment decisions follows a seasonal pattern, which is illustrated in Figure \ref{fig:daily_commitment}. 
Namely, for each month of the year, Figure \ref{fig:daily_commitment} displays the distribution, across every day of the month, of the number of unique hourly commitments over a day. Higher values correspond to higher variability in commitment decisions, while lower values indicate that the commitment decisions are stable throughout the day.
Typically, the variability of commitment decisions is lower in summer and higher in winter; this behavior is expected since more generators are online in winter, which naturally tends to yield more diverse commitments.
Indeed, in June 2018, all the days have at least 5 and at most 19 different hourly commitments, with $50\%$ of days having fewer than 7 and $50\%$ having more than 8. In contrast, in January 2018, except for two outliers, every day has at least 19 different commitments, and 16 days had a different commitment decision every hour. 
Overall, this combinatorial explosion of commitment decisions has an adverse effect on ML models, since it creates distribution shifts on unseen commitments that are detrimental to performance.

 \begin{figure}[!t]
 \centering
 \includegraphics[width=.5\columnwidth]{img/01_2018_gen_at_bounds_non_renew-crop.pdf} 
 \includegraphics[width=.438\columnwidth]{img/08_2018_gen_at_bounds_non_renew-crop.pdf} 
 \vspace{0.2cm}
 \caption{Proportion of generators at their maximum and minimum limits in January (left) and August (right).}
 \label{fig:dispatch_pattern}
 \end{figure}

Second, an analysis of RT-SCED solutions reveals, also unsurprisingly, that a majority of generators are dispatched to either their minimum or maximum limit; such limits include economic offer data and ramping constraints. Detailed statistics are reported in Figure \ref{fig:dispatch_pattern} for January and August 2018: each plot quantifies, across all RT-SCED instances for that month, the proportion of generators dispatched at their minimum (at\_min) or maximum (at\_max) limit, or neither (non\_tight); these statistics exclude the renewable generators and the generators for which the minimum and maximum limits are identical. In January, the median proportion of generators being dispatched at their minimum (resp., maximum) limit is close to $40\%$ (resp., $20\%$), while in August, these values are around $45\%$ and $20\%$. 
The variability is also visibly higher in winter, echoing the previous observations for
commitment decisions.

\subsection{First Classify, then Perform Regression}
\label{sec:ml_model}
 \begin{figure*}[!t]
 \centering
 \includegraphics[width=0.9\textwidth]{img/Models.pdf}
 \vspace{0.2cm}
 \caption{The proposed CTR model architecture. \textbf{Left}: Overall structure of the Classification-Then-Regression (CTR) model, where the outputs of the classifier are used to inform the subsequent regressor. Thus, the regressor only needs to predict the dispatches of non-tight generators. \textbf{Right}: The Latent Surgical Intervention (LSI) block for each classifier and regressor in the CTR model, in which a gate operator takes the binary indicator to filter out some representations from the fully connected layers. The result is then added element-wise to the representations.
}
 \label{fig:ML_model}
 \end{figure*}

 The previous analysis indicates that, given the knowledge of which generators are dispatched to their minimum or maximum limit, only a small number of generators need to be predicted.
 This suggests a \textbf{Classification-Then-Regression} (CTR) approach, first screening the active generators to identify those at their bounds, and then applying a regression to predict the dispatch of the remaining generators.
 The overall architecture is depicted in Figure~\ref{fig:ML_model}.
 The input of the learning task is a dataset $\mathcal{D} = \{(\mathbf{x}_i, \mathbf{y}_i, \mathbf{p}_i)\}_{i=1}^N$, where $\mathbf{x}_i$, $\mathbf{y}_i$, $\mathbf{p}_i$ represent the $i^{th}$ observation of the system state, the status indicators of the generators (i.e., whether each generator is at its maximum/minimum limit), and the optimal dispatches, respectively.

 \begin{table}[!t]
 \centering
 \caption{Input features of the DNN model}
 \label{tab: ML_features}
 \begin{tabular}{ccc}
 \toprule
 Feature & Size & Source\\
 \midrule
 Loads & $L$ & Load forecasts\\
 Cost of generators & $G$ & Bids\\
 Cost of reserves & $2G$ & Bids\\
 Previous solution & $G$ & SCED\\
 Commitment decisions & $G$ & SCUC\\
 Reserve commitments & $G$ & SCUC\\
 Generator min/max limits & $2G$ & Renewable forecasts\\
 Line losses factor & $2B+1$ & System\\
 \bottomrule
 \end{tabular}
 \end{table}

The CTR model is composed of two modules: a classifier and a
regressor. The classifier component aims at predicting whether
each generator is at its minimum or maximum limit. It considers the
status of the power network with $B$ buses, $G$ generators, and $L$
loads as its inputs. Specifically, the classifier, parameterized by
$\mathbf{w}_1$, is a mapping $f_{\mathbf{w}_1}: \mathbb{R}^{d}
\xrightarrow{} \{0, 1\}^{2G}$, where $d$ is the dimension of the input
features. The overall input features describe the state of the power
system, as detailed in Table~\ref{tab: ML_features}. The classifier outputs a
binary vector of size $2G$, meaning that, for each generator, there are two
classification choices: one for determining whether it is dispatched at its
maximum limit and one for determining whether it is dispatched
at its minimum limit. In the experiments presented in the subsequent
section, the dimension $d$ of the input space can be as large as
$24{,}403$ in the RTE system.
The optimal trainable parameters
$\mathbf{w^*_1}$ of the classifier are obtained by minimizing the
following loss function:

 \begin{align}
 \mathbf{w^*_1} = \argmin_{\mathbf{w_1}} \frac{1}{N}\sum_{i=1}^N \mathcal{L}_c(\mathbf{y}_i, f_{\mathbf{w}_1}(\mathbf{x}_i)), \label{eq: cls_loss}
 \end{align}
 where $\mathcal{L}_c$ denotes the cross-entropy loss, i.e., 
 \begin{align}
 \mathcal{L}_c(\mathbf{y}_i, \hat{\mathbf{y}}_i) = -\sum_{j=1}^{2G} \left[ \mathbf{y}_{i,j}\log(\hat{\mathbf{y}}_{i,j}) + (1-\mathbf{y}_{i,j})\log(1-\hat{\mathbf{y}}_{i,j}) \right].
 \end{align}

 The second architectural component, the regressor, parameterized by $\mathbf{w}_2$, is a mapping $f_{\mathbf{w}_2}: \mathbb{R}^{d+2G} \xrightarrow{} \mathbb{R}^{G}$.
 The additional $2G$ features in the input of the regressor come from the outputs of the classifier.
 Given the trained classifier $f_{\mathbf{w}^*_1}$, the optimal trainable parameters of the regressor $\mathbf{w^*_2}$ are obtained by minimizing the loss function $\mathcal{L}_r$ over all training instances as 
 \begin{align}
 \label{eq: reg_loss}
 \mathbf{w^*_2} = \argmin_{\mathbf{w_2}} \frac{1}{N}\sum_{i=1}^N \mathcal{L}_r \left( \mathbf{p}_i, f_{\mathbf{w}_2}\left(\mathbf{x}_i, f_{\mathbf{w}^*_1}(\mathbf{x}_i)\right) \right), 
 \end{align}
 where $\mathcal{L}_r$ is the mean absolute error (MAE) loss, i.e., $\mathcal{L}_r(\mathbf{p}, \hat{\mathbf{p}}) = \|\mathbf{p} - \hat{\mathbf{p}}\|_1$.

The CTR architecture features a deep neural network
(DNN). Specifically, it uses a Latent Surgical Intervention (LSI)
network \cite{donnot2018latent} as its building block. LSI is a
variant of residual neural networks \cite{he2016identity} that is
augmented with binary interventions. As illustrated in Figure
\ref{fig:ML_model}, the LSI block exploits, via gate operators, the
binary information coming from commitment decisions and the
classifier. As mentioned earlier, because the variability in SCED
solutions primarily comes from the combinatorial nature of the
commitment decisions, it is crucial to design the DNN architecture to
use commitment decisions as an input of the model. Thus, the
LSI-based CTR model makes it possible to learn the generator dispatch
from various commitment decisions, thereby allowing the proposed
approach to generalize to cases where the models are trained over
multiple commitment decisions.
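For concreteness, the sketch below outlines this classification-then-regression coupling in PyTorch. It is a simplified illustration rather than the actual implementation: the LSI gating blocks are omitted, the layer sizes are hypothetical, and, as described above, the classifier is trained first (with the cross-entropy loss) before the regressor is trained (with the MAE loss) on inputs augmented with the classifier outputs.

\begin{verbatim}
import torch
import torch.nn as nn

class CTR(nn.Module):
    # Minimal CTR coupling; the LSI blocks of the paper are omitted.
    def __init__(self, d, G, hidden=256):
        super().__init__()
        # Classifier f_w1: R^d -> [0,1]^(2G); sigmoid outputs play the
        # role of the binary at-min/at-max indicators during training.
        self.classifier = nn.Sequential(
            nn.Linear(d, hidden), nn.LeakyReLU(0.01),
            nn.Linear(hidden, 2 * G), nn.Sigmoid())
        # Regressor f_w2: R^(d+2G) -> R^G, fed with x and the
        # classifier outputs.
        self.regressor = nn.Sequential(
            nn.Linear(d + 2 * G, hidden), nn.LeakyReLU(0.01),
            nn.Linear(hidden, G))

    def forward(self, x):
        y_hat = self.classifier(x)
        p_hat = self.regressor(torch.cat([x, y_hat], dim=-1))
        return y_hat, p_hat

# Losses matching the two training objectives above.
classification_loss = nn.BCELoss()  # cross entropy on the indicators
regression_loss = nn.L1Loss()       # mean absolute error on dispatches
\end{verbatim}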
\subsection{Machine Learning Pipeline} 
\label{sec:ML_pipeline}

The high variability of SCED instances in real operations, in terms of
commitment decisions and forecasts of loads and renewable energy
sources, makes it extremely challenging to learn the SCED optimization.
To mitigate this significant variability, this paper proposes a
learning pipeline that closely follows MISO's optimization pipeline.
The machine-learning pipeline is depicted in
Figure~\ref{fig:learning_pipeline} and consists of three phases: data
preparation, training, and prediction.

 \paragraph{Data Preparation} At 12pm of day $D-1$, the TSO outputs the commitment decisions of generators for the next 24 hours and Monte-Carlo scenarios for day $D$.
 Each scenario consists of 15-minute-level forecasts for load demands and renewable generators.
 Since the SCED is solved every 5 minutes throughout a day, the ML pipeline first linearly interpolates the forecasts to the 5-minute level.
 Further data augmentation may be performed, if necessary, by perturbing load and renewable generations following the strategy described in \cite{chatzos2020high}.
 The data is then used as input to SCED optimization models, which generate the various optimal dispatches.
 The entire dataset is then divided into subsets, each of which spans one or a few hours, based on computational considerations. Indeed, the goal is to strike the proper balance between the accuracy of the models and the training time, since the models have to be available before midnight. This data preparation process yields the input instances to the learning task described previously.

\paragraph{Training}

First, each subset is split into the traditional
training/validation/test instances. The CTR classifier and regressor are
trained in sequence on the training instances, while the validation dataset is used
for hyperparameter tuning and the test dataset is used for reporting
the performance of the machine learning models. This training step
takes place in parallel for each subset.

\paragraph{Prediction} Starting from midnight on day $D$, at each time step, the corresponding ML model takes the latest system state as input and predicts the optimal dispatch of the SCED model in real time.

\section{Overview of MISO's Market-Clearing Pipeline} 
\label{sec:opt_pipeline}

This section describes the interplay between MISO's day-ahead and real-time markets; the reader is referred to \cite{BPM_002} for a detailed overview of MISO's energy and reserve markets.

\subsection{MISO Optimization Pipeline}

The day-ahead market consists of two phases, and is executed every day
at 10am. First, a day-ahead security-constrained unit commitment
(DA-SCUC) is executed: it outputs the commitment and regulation status
of each generator for every hour of the following day. Then, a
day-ahead SCED is executed to compute day-ahead prices and settle the
market. The results of the day-ahead market clearing are then posted
online at approximately 1pm, i.e., there is a delay of several hours
before the commitment decisions take effect.
While MISO may commit
additional units during the operating day, through out-of-market
reliability studies, in practice, $99\%$ of commitment decisions are
decided in the day-ahead market
\cite{Chen2018_MarketClearingSoftware}. Accordingly, for simplicity
and without loss of generality, this paper assumes that commitment
decisions from the DA-SCUC are not modified afterwards.

Then, throughout the operating day, the real-time market is executed
every 5 minutes. This real-time SCED (RT-SCED) adjusts the dispatch
of every generator in response to variations in load and renewable
production, and maximizes economic benefit. Despite its short time
horizon, the RT-SCED must still account for uncertainty in load and
renewable production. In the MISO system, this uncertainty mainly
stems from the intermittency of wind farms, and from increasing
variability in load. The latter is caused, in part, by the growing
number of behind-the-meter distributed energy resources (DERs) such as
residential storage and rooftop solar. Therefore, operators
continuously monitor the state of the power grid, and may take
preventive and/or corrective actions to ensure safe and reliable
operations.

\subsection{RT-SCED Formulation}

The RT-SCED used by MISO is a DC-based linear programming formulation that co-optimizes energy and reserves \cite{MISO2009_SCED}.
The reader is referred to \cite{BPM002_D} for the full mathematical formulation; only its core elements are described here.
The computation of market-clearing prices is beyond the scope of this paper, and is therefore not discussed here.

The RT-SCED model comprises, for each generator, one variable for energy dispatch and up to four categories of reserves: regulating, spinning, online supplemental, and offline supplemental.
In addition, each generator is subject to ramping constraints and individual limits on energy and reserve dispatch.
Energy production costs are modeled as piece-wise linear convex functions.
Each market participant submits its own production costs and reserve prices via MISO's market portal, and may submit different offers for each hour of the day.
For intermittent generators such as solar and wind, the latest forecast is used in lieu of binding offers.

At the system level, line losses are estimated in real time from
MISO's state estimator, and incorporated in the formulation using a
loss factor approach as in \cite{FERC2017_MarginalLossCalculation}.
Transmission constraints are modeled using PTDF matrices, where flow
sensitivities with respect to power injections and withdrawals are
provided by an external tool. Reserves are dispatched on individual
generators, in order to meet zonal and market-wide minimum
requirements. Additional constraints ensure that, in a contingency
event, reserves may be deployed without tripping transmission lines.
Power balance, reserve requirements, and transmission limits are soft
constraints, i.e., they may be violated, albeit at a suitably high cost.

In summary, the RT-SCED receives the following inputs: the commitment decisions from DA-SCUC, the most recent forecast for load and renewable production, the economic limits and production costs of the generators, the current state estimation, and the transmission constraints and reserve requirements. The RT-SCED produces as outputs the active power and reserve dispatch for each generator.
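To make the structure of such a dispatch problem concrete, the toy example below solves a single-period economic dispatch with linear costs, generator limits, and a soft power balance. It deliberately omits reserves, ramping, and PTDF-based transmission constraints, and all numbers are hypothetical.

\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

cost  = np.array([20.0, 35.0, 50.0])   # $/MWh offers (hypothetical)
p_min = np.array([10.0,  0.0,  0.0])   # MW
p_max = np.array([80.0, 60.0, 40.0])   # MW
demand, penalty = 150.0, 1e4           # MW, $/MWh violation penalty

# Decision vector: [p1, p2, p3, shortfall, surplus]; the two slack
# variables make the power balance a soft constraint.
c = np.concatenate([cost, [penalty, penalty]])
A_eq = np.array([[1.0, 1.0, 1.0, 1.0, -1.0]])  # sum(p)+short-surp = demand
b_eq = np.array([demand])
bounds = list(zip(p_min, p_max)) + [(0, None), (0, None)]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print(res.x[:3])   # optimal dispatch: [80. 60. 10.]
\end{verbatim}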
\section{Experimental Results}
\label{sec:results}

\subsection{Test Cases}
\label{sec:data_setup}

The experiments replicate MISO's operations on the French transmission
system, for four representative days in 2018, namely, February
12\textsuperscript{th}, April 5\textsuperscript{th}, August
26\textsuperscript{th}, and October 23\textsuperscript{rd}. These
four days were selected at random to represent annual seasonality.
For each operating day, 2000 Monte-Carlo scenarios for total load and
renewable power production are sampled in a day-ahead fashion, i.e.,
scenarios are produced at noon of the previous day. These scenarios
are illustrated in Figure \ref{fig:res:scenarios}, which displays 200
scenarios for solar production, wind production, total load, and total
net load, respectively. The total net load in this figure represents
the amount of power expected to be produced by the conventional
generators, i.e., the total load minus the renewable generation.

The forecasting models for load consumption and renewable power
generation in this paper use Long Short-Term Memory (LSTM) neural
networks \cite{hochreiter1997long}, and the scenarios are generated by
an MC-dropout approach \cite{gal2016dropout}. Other forecasting methods
and scenario generation approaches could be used instead, without
changing the methodology.


 \begin{figure}
 \centering
 \includegraphics[width=0.48\columnwidth]{img/sced_learning_solar.png}
 \includegraphics[width=0.48\columnwidth]{img/sced_learning_wind.png}\\
 \includegraphics[width=0.48\columnwidth]{img/sced_learning_load.png}
 \includegraphics[width=0.48\columnwidth]{img/sced_learning_net_load.png}\\
 \caption{Day-ahead scenarios for solar output (top-left), wind output (top-right), total load (bottom-left) and net load (bottom-right). 200 Monte-Carlo scenarios are depicted.}
 \label{fig:res:scenarios}
 \end{figure}

Day-ahead commitment decisions are obtained by solving a DA-SCUC problem and recording its solution. 
The SCUC formulation used in the present experiment follows
MISO's DA-SCUC described in \cite{BPM_002,BPM_002_B}. 
Again, the proposed ML pipeline is agnostic to the SCUC formulation itself and how it is solved, as it only requires the resulting commitments.

Finally, for each scenario and each day, 288 RT-SCED instances are
solved (one every 5 minutes), yielding a total of $576{,}000$
instances per day. This initial dataset is then divided into 24 hourly
datasets, each containing $24{,}000$ SCED instances, since commitment
decisions are hourly. To avoid information leakage, the data is split
between training/validation/testing instances as follows: $85\%$ of
scenarios are used for training, $7.5\%$ for validation, and $7.5\%$
for testing. All the reported results are in terms of the testing
instances.

The optimization problems (SCUC and SCED) are formulated in the JuMP
modeling language \cite{DunningHuchetteLubin2017_JuMP} and solved with
Gurobi 9.1 \cite{gurobi}, using Linux machines with dual Intel Xeon
6226@2.7GHz CPUs on the PACE Phoenix cluster \cite{PACE}; the entire
data generation phase is performed in less than 2 hours.
The proposed
CTR model is implemented using PyTorch \cite{NEURIPS2019_9015} and trained using the Adam optimizer \cite{kingma2014adam} with a learning rate of 5e-4.
A grid search is performed to choose the hyperparameters, i.e., the
dimension of the hidden layers (taken in the set \{64, 128, 256\})
and the number of layers (from the set \{2, 3, 4, 5, 6\}) for the
fully connected layers in the LSI block. A leaky ReLU with leakiness
of $\alpha=0.01$ is used as the activation function in the LSI
block. An early stopping criterion with a patience of 20 epochs is used for training the classifier and regressor models in order to prevent overfitting: when the loss values (Eq. \ref{eq: cls_loss} and Eq. \ref{eq: reg_loss}) on the validation dataset do not decrease for 20 consecutive epochs, the training process is terminated. Training is performed using Tesla V100-PCIE GPUs with 16\,GB of
HBM2 RAM, on machines with Intel CPU cores at 2.1GHz.

\subsection{Baselines}

The proposed CTR models are evaluated against the following baselines:
Naive CTR, Naive Regression (Naive Reg), and Regression (Reg). The
naive baselines replicate the behavior of the previous SCED solution,
i.e., they use the dispatch solution obtained 5 minutes earlier. The
naive baselines are motivated by the fact that, if the system does not
change much between two consecutive intervals, then the SCED solution
should not change much either. The Naive CTR uses the same
approach as the CTR models: it first predicts that a given
generator is at its minimum (resp., maximum) limit if it was at its
minimum (resp., maximum) limit in the previous dispatch; then, for the
remaining generators, it predicts the active dispatch using the
regressor. The naive baselines are expected to perform worse when
large fluctuations in load and renewable production are observed,
which typically occurs in the morning and evening. Note that these
times of the day also display the largest ramping needs, and are among
the most critical for reliability. The effectiveness of the CTR
architecture is also demonstrated by comparing it to Reg for the optimal active dispatch, i.e., a model that omits the
classification step of the CTR.

\subsection{Optimal Dispatch Prediction Errors}

 \begin{table}[!t]
 \centering
 \caption{Average Classification Accuracy (\%) of the CTR Classifier and Naive CTR on 4 Representative Days in 2018.}
 \label{tab:res:classification_overall}
 \begin{tabular}{lccccc}
 \toprule
 & \multicolumn{4}{c}{Dates} & \\
 \cmidrule{2-5}
 Methods & Feb. 12 & Apr. 5 & Aug. 26 & Oct. 23 & Avg. \\
 \midrule
 Naive classifier & 98.26 & 97.79 & 98.64 & 98.31 & 98.25 \\
 CTR & 99.56 & 99.18 & 99.40 & 99.24 & 99.35 \\
 \bottomrule
 \end{tabular}
 \end{table}


 Table \ref{tab:res:classification_overall} reports the overall
 classification accuracy of the CTR classifier and the Naive CTR across
 four representative days in 2018. Surprisingly, the naive classifier
 is a strong baseline, with an accuracy that ranges from $97.79\%$ in
 the spring to $98.64\%$ in the summer. The CTR classifier always
 improves on the baseline, by around one percentage point on average
 and by up to $1.40$ percentage points in the spring.
More detailed results
 about the classifiers are given in Appendix~\ref{appen:classification}.


\begin{figure}
 \centering
 \includegraphics[width=1\columnwidth]{img/MAE_along_days_CTR_1_no_sub-crop.pdf}
 \caption{Mean Absolute Error (MAE) over time on Feb. 12 (top-left), Apr. 5 (top-right), Aug. 26 (bottom-left), and Oct. 23 (bottom-right).}
 \label{fig:MAE_along_time}
\end{figure}

\begin{table}[t]
 \centering
 \caption{Mean Absolute Error (MW) by Generator Size$^{\dagger}$.}
 \label{tab:res:disaggerated_dsipatch}
 \resizebox{0.95\columnwidth}{!}{
 \begin{tabular}{lrrrrr}
 \toprule
 Date & Method & Small & Medium & Large & All \\
 \midrule
 \multirow{4}{*}{Feb. 12} & Naive Reg & 0.122 & 0.465 & 1.602 & 0.374 \\
 & Naive CTR & 0.084 & 0.333 & 1.128 & 0.262 \\
 & Reg & 0.057 & 0.188 & 0.654 & 0.153 \\
 & CTR & \textbf{0.043} & \textbf{0.141} & \textbf{0.535} & \textbf{0.117} \\
 \midrule
 \multirow{4}{*}{Apr. 05} & Naive Reg & 0.242 & 0.345 & 7.480 & 0.772 \\
 & Naive CTR & 0.197 & 0.220 & 4.374 & 0.463 \\
 & Reg & 0.149 & 0.152 & 2.553 & 0.282 \\
 & CTR & \textbf{0.105} & \textbf{0.110} & \textbf{2.291} & \textbf{0.241} \\
 \midrule
 \multirow{4}{*}{Aug. 26} & Naive Reg & 0.097 & 0.256 & 7.447 & 0.637 \\
 & Naive CTR & 0.080 & 0.149 & 4.045 & 0.352 \\
 & Reg & 0.054 & 0.124 & 2.454 & 0.218 \\
 & CTR & \textbf{0.034} & \textbf{0.064} & \textbf{2.220} & \textbf{0.190} \\
 \midrule
 \multirow{4}{*}{Oct. 23} & Naive Reg & 0.176 & 0.535 & 8.425 & 0.778 \\
 & Naive CTR & 0.140 & 0.341 & 4.596 & 0.434 \\
 & Reg & 0.106 & 0.192 & 2.724 & 0.263 \\
 & CTR & \textbf{0.076} & \textbf{0.145} & \textbf{2.525} & \textbf{0.235} \\
 \bottomrule
 \end{tabular}
 }
 \\
 $^{\dagger}$Small: 0-10MW; Medium: 10-100MW; Large: $>$100MW\\
\end{table}

Figure~\ref{fig:MAE_along_time} reports the mean absolute error (MAE)
for the active power dispatch of the different models for each of the
considered days. Given a ground-truth dispatch $p_i^g$
and a predicted dispatch $\hat{p}^g_i$, the MAE is
defined as
\begin{align*}
 MAE = \frac{1}{N} \frac{1}{G} \sum_{i=1}^{N} \sum_{g=1}^{G} \left|p^g_i - \hat{p}^g_i\right|,
\end{align*}
where $N$ is the number of instances in the test dataset and $G$ is
the number of generators for each instance. As shown in
Figure~\ref{fig:MAE_along_time}, the MAEs of CTR are always lower than
those of Reg, demonstrating the benefits of the classifier. The MAEs
of Naive CTR are always lower than those of Naive Reg, showing that
even a naive classifier has benefits. Moreover, the methods using DNNs
for classification and regression, i.e., CTR and Reg, are always
better than their naive counterparts, demonstrating the value of deep
learning. Note that the performance of the naive methods fluctuates
during the day, mainly due to the variability of the commitments. The
naive methods are not robust with respect to changes in commitments,
contrary to the proposed CTR approach.

To investigate how the machine-learning models perform for different
generator types, Table~\ref{tab:res:disaggerated_dsipatch}
reports their behavior for different generator sizes. The generators
are clustered into three groups based on their actual active
dispatches, and the MAE is reported for each group separately.
Small-size generators have a capacity between $0$ and $10$\,MW,
medium-size generators have a capacity between $10$ and $100$\,MW, and
large-size generators have a capacity above $100$\,MW. The last column
reports the MAEs across all
generators. Table~\ref{tab:res:disaggerated_dsipatch} further confirms
that the CTR models consistently outperform the corresponding
regressors. These improvements are most notable for small and medium
generators, which are expected to have higher variability: the MAEs are
decreased by up to $37.04\%$ across small generators and $48.39\%$
across medium generators (on Aug. 26).
Moreover, across the four days, the CTR model achieves a Mean Absolute Percentage Error (MAPE) of $0.59\%$ and $0.34\%$ for medium and large generators, respectively.
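The per-group MAEs of Table~\ref{tab:res:disaggerated_dsipatch} amount to a simple bucketing of the absolute dispatch errors by generator capacity, as in the following sketch (placeholder arrays; the real errors come from the test predictions).

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
caps = rng.uniform(0.0, 300.0, size=500)            # capacities (MW)
abs_err = rng.exponential(0.2, size=(1000, 500))    # |p - p_hat| (MW)

# Buckets mirroring the table: 0-10 MW, 10-100 MW, >100 MW.
group = np.digitize(caps, [10.0, 100.0])
for name, g in zip(["Small", "Medium", "Large"], range(3)):
    print(name, abs_err[:, group == g].mean())      # per-group MAE
\end{verbatim}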
\subsection{Solving Time vs Inference Time}
Solving the SCED using traditional optimization tools takes, on
average, $15.93$ seconds. Actual computing times fluctuate throughout
the day, with increased load and congestion leading to higher
computing times. In contrast, evaluating the optimization proxy for a
batch of 288 instances takes an average of $1.5$ milliseconds. In other
words, roughly $200{,}000$ scenarios may be evaluated in less than one
second. This represents an improvement of 4 orders of magnitude, even
under the assumption that several hundred SCED instances can be solved
in parallel.
\section{Sensitivity Analysis}
\label{sec:results2}

\subsection{Motivations and Experiment Settings}

How many CTR models need to be trained is an important question to
ponder in practice. Preparing a single CTR model for the 24 hours of day
$D$ would be most convenient. However, as described in
Section~\ref{sec:pattern_dispatch}, the variability in commitments is
notoriously harmful to the prediction accuracy and training time of
the CTR model. On the other hand, successive hours during a day may only differ by a
few commitments, and hence a single CTR model may be
sufficient for predicting the associated SCED optimizations.

To answer that question, the original instance data is used to produce
a variety of datasets, containing data for 2, 3, 4, 6, 8,
12, and 24 consecutive hours. For instance, the 8-hour dataset contains three
sets of instances, each grouping SCED data for 8 successive
hours; three models are then trained, each using the instances covering its 8
hours of data. All models use the termination criterion presented in Section \ref{sec:results}. 
The various CTR models for 1, 2, ..., 24 hours are
then compared over the 24 hours of the day, using the models
appropriate for each hour. It is expected that the quality of the
prediction will deteriorate with a coarser granularity, since there
will be a larger variability in commitments and net loads. However,
the LSI architecture of the CTR model and the availability of more
instances during training may compensate for this increase in
variability. In the following, CTR models trained on $i$ hours are
denoted by CTR$_i$.

\subsection{Results}


 \begin{figure*}[t]
 \centering
 \includegraphics[width=1.8\columnwidth]{img/MAE_CTR_days-crop.pdf}
 \caption{MAEs of CTR models for representative days: the CTR model
 scales well when trained over multiple hours.
The performance
 degradation becomes significant only when aggregating 6 hours
 or more.}
 \label{fig:MAE_multiple_hours}
 \end{figure*}



Figure~\ref{fig:MAE_multiple_hours} illustrates the MAE values of the
various CTR models. The performance of the CTR model remains strong
even when aggregating up to 4 successive hours. As more hours are
aggregated, the performance starts to degrade. To get more insight into
the performance of the various CTR models, it is useful to consider
the energy distance \cite{rizzo2016energy} between the empirical
distributions of the testing and training instances. Recall that, given
two empirical distributions $\{x_i\}_{i=1}^N$ and $\{y_i\}_{i=1}^M$,
their energy distance is given by
 \begin{align*}
 \mathcal{E}(\mathcal{X},\mathcal{Y}) &= \frac{2}{NM} \sum_{i=1}^{N} \sum_{j=1}^{M} \|x_i - y_j\| \\
 &- \frac{1}{N^2} \sum_{i=1}^{N} \sum_{j=1}^{N} \|x_i - x_j\| \\
 &- \frac{1}{M^2} \sum_{i=1}^{M} \sum_{j=1}^{M} \|y_i - y_j\|.
 \end{align*}

 \begin{figure}[t]
 \centering
 \includegraphics[width=1\columnwidth]{img/energy_scatter_MAE_scaled-crop.pdf}
 \caption{Scaled MAE values of the CTR models vs. energy distance between the test and training datasets for a given hourly period on Feb. 12 (top-left), Apr. 5 (top-right), Aug. 26 (bottom-left), and Oct. 23 (bottom-right).}
 \label{fig:MAE_energy_distances}
 \end{figure}

It is thus interesting to study the relationship between the MAE of a
CTR model and the energy distance between its testing and training
datasets. Figure~\ref{fig:MAE_energy_distances} reports the
relationship between the scaled MAE and the energy distance, where the
scaled MAE is the MAE value divided by the best MAE value across all
experiments. Each dot captures the scaled MAE value (y-axis) for a CTR
model and the associated energy distance (x-axis) between its training
and testing datasets.

Observe first that the energy distance increases as more hourly data
are combined into the training set. In
Figure~\ref{fig:MAE_energy_distances}, the dots from CTR$_i$ tend to
have smaller energy distances than those of CTR$_j$ for $j > i$. The
scaled MAE also increases as the energy distance grows. More
interestingly, Figure~\ref{fig:MAE_energy_distances} shows that the
MAE value only increases slightly as the energy distance becomes
larger. Specifically, an increase of $10^2$ in energy distance
produces an increase of only 10\% in MAE (corresponding to a scaled MAE
of 1.1). These results confirm that the CTR models scale well when
facing a reasonable amount of variability in commitment decisions and
net loads. However, when the energy distance goes over a certain
threshold, the performance of the CTR models starts to degrade
significantly, even in the presence of more data and more training time.
Note that the energy distance between the training and testing
datasets can be computed before the training process. Hence, the
proper granularity for the CTR model can be chosen to obtain the
desired accuracy. This quantification shows the potential of CTR to
perform well with more complex components involved in the MISO
pipeline, such as the Look-Ahead Commitment (LAC).
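Since this diagnostic can be evaluated before any model is trained, the energy distance above is straightforward to compute from the raw input features. A direct transcription of the formula is sketched below (rows of X and Y are samples).

\begin{verbatim}
import numpy as np
from scipy.spatial.distance import cdist

def energy_distance(X, Y):
    # E(X, Y) = 2 E||x - y|| - E||x - x'|| - E||y - y'||, with the
    # expectations taken over the empirical distributions.
    return (2.0 * cdist(X, Y).mean()
            - cdist(X, X).mean()
            - cdist(Y, Y).mean())
\end{verbatim}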
\begin{figure}[t]
 \centering
 \includegraphics[width=.49\columnwidth]{img/reg_training_time-crop.pdf}
 \includegraphics[width=.47\columnwidth]{img/cls_training_time-crop.pdf}
 \caption{Training times of the CTR$_i$ models: regression (left) and classification (right).}
 \label{fig:training_time}
 \end{figure}

Figure~\ref{fig:training_time} reports the training times of the CTR models. The average training time increases as more hourly data are included, and the regression takes more training time than the classification task. Section \ref{sec:ML_pipeline} indicated that the training process must complete within about 12 hours after accumulating the data instances at 12pm on day $D-1$.
This is clearly possible for all the CTR$_i$ models with $i \leq 4$. Consider CTR$_4$ for instance. Figure~\ref{fig:training_time} shows that it can be trained for a 4-hour block within 10 hours in the worst case (4 hours for classification and 6 hours for regression). As a result, all 6 CTR$_4$ models can easily be trained in parallel within the required time frame.

\section{Introduction} 

Ultra-hot Jupiters allow us to advance our understanding of planetary migration and orbital stability (\citealt{Delrez2016}), and they offer great prospects for atmospheric characterization (\citealt{Parmentier2018}). Their high temperature (typically higher than about 2000\,K) simplifies the atmospheric chemistry by dissociating molecular species into their atomic constituents (\citealt{Lothringer2018}). Multiple atoms and ions could thus be detected in the atmospheric limb of ultra-hot Jupiters using high-resolution transmission spectroscopy (e.g., \citealt{Hoeijmakers2018, Hoeijmakers2019} for the prototypical ultra-hot Jupiter KELT-9b). These planets are also interesting candidates for probing evaporation and the effect of photo-ionization on the upper atmospheric structure. Their extreme irradiation by the host star causes the hydrodynamical expansion of their upper atmosphere, allowing metals to escape and be detected in the near-ultraviolet after they are ionized in the exosphere (\citealt{Fossati2010}, \citealt{Haswell2012}, \citealt{Sing2019}). 

Interestingly, several ultra-hot Jupiters were found on highly misaligned orbits (e.g., WASP-12b, \citealt{Albrecht2012}; WASP-33b, \citealt{Cameron2010}; WASP-121b, \citealt{Delrez2016}), suggesting dynamical migration processes induced by gravitational interactions with companions, rather than disk migration (e.g., \citealt{Nagasawa2008}, \citealt{Fabrycky2007}, \citealt{Guillochon2011}). At such close distances to their stars, ultra-hot Jupiters are subjected to strong tidal interactions that determine their final orbital evolution. Precisely measuring the orbital architecture of ultra-hot Jupiters and monitoring its evolution is thus of particular importance to determine their migration history and their potential decay into the star. The occultation of a rotating star by a transiting planet removes the light of the hidden photosphere from the observed stellar lines (the so-called Rossiter-McLaughlin effect, or RM effect, \citealt{Holt1893}; \citealt{Rossiter1924}; \citealt{McLaughlin1924}).
Different techniques have been developed to analyze the radial velocity (RV) anomaly induced by the distortion of the stellar absorption lines (e.g., \citealt{ohta2005}, \citealt{gimenez2006}, \citealt{hirano2011b}, \citealt{boue2013}), to model their profile while accounting for the planet occultation (e.g., \citealt{Cameron2010}, \citealt{Gandolfi2012}, \citealt{Crouzet2017}), or to isolate the local stellar lines from the planet-occulted regions (e.g., \citealt{Cegla2016}, \citealt{Bourrier_2018_Nat}). These techniques enable deducing the trajectory of the planet across the stellar disk, and thus inferring the projected or true 3D alignment between the spins of the planetary orbit and the stellar rotation. \\

The ultra-hot Jupiter WASP-121b (\citealt{Delrez2016}) is a good candidate for both atmospheric and orbital architecture studies (Table~\ref{tab:sys_prop}). This super-inflated gas giant transits a bright F6-type star (V = 10.4), favoring optical transmission spectroscopy measurements. Its near-polar orbit at the edge of the Roche limit ($P$ = 1.27\,days) makes WASP-121b subject to strong tidal interactions with the star (\citealt{Delrez2016}) and to intense atmospheric escape. The increase in transit depth of WASP-121b toward near-UV wavelengths (\citealt{Evans2018}, \citealt{Salz2019_NUV_WASP121b}) was recently shown to arise from iron and magnesium atoms that escape into the exosphere (\citealt{Sing2019}), which confirms the hydrodynamical evaporation of WASP-121b and opens new avenues to link the structure and composition of the lower and upper atmosphere.

In the present study we investigate the atmosphere of WASP-121b, and refine the properties of its planetary system. In Sect.~\ref{sec:RV_fit}, we reanalyze long-term RVs and activity indices of the system. Sect.~\ref{sec:reloaded RM} exploits transit spectroscopy of WASP-121b obtained with the High Accuracy Radial velocity Planet Searcher (HARPS), combined with simultaneous EulerCam photometry, to analyze the orbital architecture of WASP-121b and its star. In Sect.~\ref{sec:atmo_struc} we characterize the atmospheric structure of the planet at the limb, using a new method to isolate the signal of the planetary atmosphere from the occulted stellar lines. We conclude the study in Sect.~\ref{sec:conclu}.




\section{Radial velocity monitoring of WASP-121}
\label{sec:RV_fit}

\subsection{Planet-induced motion}

We analyzed RV data points of WASP-121 obtained with the Coralie (\citealt{baranne1996}, \citealt{Queloz2000}) and HARPS (\citealt{Mayor2003}) spectrographs to revise the semi-amplitude of the stellar reflex motion and the mass of WASP-121b (the complete RV dataset is shown in Fig.~\ref{fig:RV_ana_appendix_nobin}). RV data were analyzed with the Data and Analysis Center for Exoplanets web platform (DACE\footnote{https://dace.unige.ch}). We excluded datapoints obtained during four planet transits (one observed with Coralie, the other three with HARPS) and binned the remaining data in time by 0.25 day separately for each instrument (to mitigate short-term stellar signals and to avoid favoring HARPS datapoints). The processed data (Fig.~\ref{fig:RV_fit}) were fit with the Keplerian model described in \citet{Delisle2016}. It was combined with the activity detrending described in \citet{Delisle2018}, which adds a term that is linearly correlated with the bisector of the cross-correlation functions (CCFs). The model was fit to the data using a Markov chain Monte Carlo (MCMC) algorithm (\citealt{Diaz2014,Diaz2016}) with Gaussian priors on the period, time of mid-transit, eccentricity, and periastron argument derived from photometry obtained with the Transiting Exoplanet Survey Satellite (TESS) by \citet{Bourrier2019}. Results are given in Table~\ref{fig:RV_ana_appendix}. The mass we derive for WASP-121b is consistent with that of \citet{Delrez2016}. We kept the values of the properties that have been derived from the TESS photometry as our final estimates for the revised planetary properties (Table~\ref{tab:sys_prop}), because the fit to the RV data did not improve their precision, nor change their values significantly.\\
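As an illustration of this model, the sketch below fits a circular Keplerian reflex motion (justified by the near-zero eccentricity) plus a term linear in the bisector span, using nonlinear least squares in place of the MCMC employed here; the synthetic data, variable names, sign convention, and initial guesses are assumptions for illustration only.

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

P, T0 = 1.27492504, 2458119.72074   # period (d) and mid-transit (BJD)

def rv_model(X, K, gamma, alpha):
    # Circular orbit plus a linear bisector-span (BIS) activity term;
    # K and gamma in km/s, the sign convention may differ by instrument.
    t, bis = X
    return gamma - K * np.sin(2 * np.pi * (t - T0) / P) + alpha * bis

# Synthetic stand-ins for the measured times, bisector spans, and RVs.
rng = np.random.default_rng(42)
t = T0 + rng.uniform(0.0, 100.0, size=80)
bis = rng.normal(0.0, 0.05, size=80)
rv = rv_model((t, bis), 0.177, 0.0, 0.1) + rng.normal(0, 0.01, 80)

popt, pcov = curve_fit(rv_model, (t, bis), rv, p0=[0.2, 0.0, 0.0])
\end{verbatim}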
\begin{center}
\begin{figure}
\centering
\includegraphics[trim=0.cm 0.cm 0.cm 0cm,clip=true,width=\columnwidth]{WASP-121_rvs_binned}
\caption[]{Radial velocities of WASP-121b, phase-folded with the orbital period and detrended from stellar activity. Coralie data points have been obtained before (red) and after (blue) the fiber upgrade (\citealt{Segransan2010}). HARPS datapoints are shown in gold. Our best-fit Keplerian model to the out-of-transit data is shown as a solid black line (datapoints obtained in the frame of transit observations were binned, see text). The first and fourth transit contacts are indicated by dash-dotted vertical black lines.}
\label{fig:RV_fit}
\end{figure}
\end{center}




\subsection{Stellar rotation}
\label{sec:Prot}

After the contribution of WASP-121b was removed, the periodogram of the RV residuals reveals three significant signals at periods of 0.89, 1.13, and $\sim$8.4 days. These signals are also visible in the periodograms of the bisector span (BIS SPAN) and the full-width at half-maximum (FWHM) time-series measured on the CCFs. They arise from magnetically active regions at the surface of the fast-rotating star WASP-121, and are all aliases of one another. We show in Sect.~\ref{sec:results_RM} that the signal at $\sim$8.4 days must be an alias because of the high measured stellar projected rotational velocity. We then used the technique proposed by \citet{Dawson2010} to determine whether the signal at 0.89 or at 1.13 days directly traces the rotational modulation of WASP-121. To distinguish the real signal from its aliases, \citet{Dawson2010} proposed simulating data with the same time sampling and injecting the signals that are to be tested as being real or aliases. For each injected signal, a comparison of the period and phase of all the aliases created by the observational sampling is then performed between the simulated and real data sets. Using this technique, \citet{Dawson2010} were able to show that the period originally derived for planet 55\,Cnc\,e (\citealt{McArthur2004}, \citealt{Fischer2008}) was an alias of the real signal.\\
Here, we extend the approach of \citet{Dawson2010} by performing 100 simulations for each injected signal, taking different configurations for the noise into account. We also analyze the rotational signal using the RVs, the BIS SPAN, and the FWHM time-series. For each real or alias signal in the real or simulated data, we calculate the area below each peak and its phase (Fig.\,\ref{fig:1}). The area is defined as the sum of the power over all frequencies that lie within 5 bins of the frequency corresponding to the maximum power of the peak.
Finally, the sums of the absolute phase and area differences are calculated for each of the 100 simulations on the RVs, the BIS SPAN, and the FWHM. These sums are given in Table\,\ref{table:rotation_period}. Overall, we observe smaller differences between the real and simulated data when the 1.13-day signal is considered compared to the signal at 0.89 day. We therefore propose that the 1.13-day signal traces the rotational modulation of WASP-121. \\


\begin{table}[tbh]
\caption{
\label{table:rotation_period}
Area and phase differences, in arbitrary units, for the 0.89- and 1.13-day signals seen in the RV, BIS SPAN, and FWHM time-series periodograms. Bold numbers highlight the lower difference values when the two signals are compared.}
\begin{center}
\begin{tabular}{cccc}
\hline\hline 
 & Period [d] & Area difference & Phase difference \\
\hline
RV & \begin{tabular}{@{}c@{}}0.89 \\ 1.13\end{tabular} & \begin{tabular}{@{}c@{}}922 \\ \bf{696}\end{tabular} & \begin{tabular}{@{}c@{}}1629 \\ \bf{1347}\end{tabular} \\
\hline
BIS SPAN & \begin{tabular}{@{}c@{}}0.89 \\ 1.13\end{tabular} & \begin{tabular}{@{}c@{}}1923 \\ \bf{690}\end{tabular} & \begin{tabular}{@{}c@{}}\bf{447} \\ 647\end{tabular} \\
\hline
FWHM & \begin{tabular}{@{}c@{}}0.89 \\ 1.13\end{tabular} & \begin{tabular}{@{}c@{}}1975 \\ \bf{1202}\end{tabular} & \begin{tabular}{@{}c@{}}\bf{2608} \\ 2820\end{tabular} \\
\hline
\end{tabular}
\end{center}
\end{table}


\begin{figure*}[]
\center
\includegraphics[angle=0,width=16cm]{gls_with_alias_period.pdf}
\caption[]{Comparison between the real and alias signals in the RV data of WASP-121 corrected for the planet signal (red shadow) and in the simulated data (black). The top panels correspond to the 0.89-day signal, the bottom panels to the 1.13-day signal. These plots correspond to one realization of noise out of 100 different trials. Real signals are shown in black (panels A3 and B2), yearly aliases ($\pm$0.0027 days$^{-1}$) in green (panels A3 and B2), daily aliases ($\pm$1, $\pm$1.0027 and $\pm$1.0056 days$^{-1}$) in blue (panels A1, A4, B1, and B4), and 2-day aliases ($\pm$2.0027 and $\pm$2.0055 days$^{-1}$) in purple (panels A2, A5, B3, and B5). Arrows at the top of each peak show the phase of each signal for the real (red) and the simulated data (black).}
\label{fig:1}
\end{figure*}




We also ran a periodogram analysis on the residuals between the TESS photometry and the best-fit model derived by \citet{Bourrier2019}. The two strongest peaks are measured at periods of 1.16 and 1.37 days. The first signal corresponds well to the rotational modulation identified in the RVs of WASP-121, and likely originates in the same active regions at the surface of the star. WASP-121 was observed over two TESS orbits. We cut each of them in half, and ran independent periodogram analyses on the four resulting segments. The stronger 1.37-day signal is only present in the second TESS orbit, with similar power in its two halves (Fig.~\ref{fig:TESS:residualspg}). Our best interpretation is that WASP-121 rotates differentially, with the 1.37-day signal arising from active regions located at higher latitudes (and thus rotating more slowly) than those responsible for the 1.13-day signal. These high-latitude regions would have developed rapidly around epoch $\sim 1502$ and lasted at least for the rest of the TESS observations. The possibility of differential rotation is investigated in more detail in Sect.~\ref{sec:fit_RM}. \\
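The alias structure discussed above can be inspected directly: the sketch below computes a Lomb-Scargle periodogram of the RV residuals and the expected alias frequencies of a candidate rotation signal, following the daily and yearly alias families quoted in the caption of Fig.~\ref{fig:1} (synthetic placeholder data; in practice the measured residuals are used).

\begin{verbatim}
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 300.0, size=150))   # epochs (days)
rv_res = np.sin(2 * np.pi * t / 1.13) + rng.normal(0, 0.5, t.size)

freq, power = LombScargle(t, rv_res).autopower()

# Daily (+-1, +-1.0027, +-1.0056 1/d) and yearly (+-0.0027 1/d)
# aliases of the candidate rotation frequency f0 = 1/1.13 1/d.
f0 = 1.0 / 1.13
shifts = [1.0, 1.0027, 1.0056, 0.0027]
aliases = sorted(abs(f0 + s * k) for s in shifts for k in (-1, 1))
\end{verbatim}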
\begin{figure}
\begin{center}
\includegraphics[trim=1.5cm 0.8cm 1.cm 0.5cm,clip=true,width=\columnwidth]{PAPER_pg_res_Orb2}
\caption{\label{fig:TESS:residualspg} Lomb-Scargle periodogram of the residuals between TESS photometry in orbit 22 and the best-fit model for WASP-121b. The orange dashed line indicates the rotational modulation identified in the RVs at 1.13\,d. The peak indicated by the red dashed line at 1.37\,d is only present in this orbit, and likely traces a transient active region at higher stellar latitudes.}
\end{center}
\end{figure}




\begin{table*} 
\caption[]{Properties of the WASP-121 system}
\centering
\begin{threeparttable}
\begin{tabular}{c|c|c|c|c}
 \hline
 \hline
 Parameter & Symbol & Value & Unit & Origin \\
 \hline 
\textit{Stellar properties} & & & & \\ 
 \hline 
Mass & $M_{\star}$ & 1.358$\stackrel{+0.075}{_{-0.084}}$ & M$_{\odot}$ & \citealt{Delrez2016} \\
Radius & $R_{\star}$ & 1.458$\pm$0.030 & R$_{\odot}$ & \citealt{Delrez2016} \\ 
Density & $\rho_{\star}$ & 0.434$\pm$0.038 & $\rho_{\odot}$ & \citealt{Delrez2016}$^{\dagger}$ \\
Limb-darkening coefficients & $u_{1}$ & 0.364$\stackrel{+0.034}{_{-0.030}}$ & & EulerCam \\
 & $u_{2}$ & 0.146$\stackrel{+0.066}{_{-0.049}}$ & & EulerCam \\
Inclination & $i_\mathrm{\star}^{\rm North}$ & 8.1$\stackrel{+3.0}{_{-2.6}}$ & deg & RM \\
 & $i_\mathrm{\star}^{\rm South}$ & 171.9$\stackrel{+2.5}{_{-3.4}}$ & deg & RM \\
Equatorial velocity & $v_\mathrm{eq}$ & [65.28 - 120] & km\,s$^{-1}$ & RM \\
 \hline
 \hline 
\textit{Planetary properties} & & & & \\ 
 \hline
Transit epoch & $T_{0}$ & 2458119.72074$\pm$0.00017 & BJD$_\mathrm{TDB}$ & TESS \\ 
Orbital period & $P$ & 1.27492504$^{+1.5\times 10^{-7}}_{-1.4\times 10^{-7}}$ & d & (TESS+EulerCam) \\ 
Scaled semi-major axis & $a_\mathrm{p}/R_{\star}$ & 3.8131$^{+0.0075}_{-0.0060}$ & & (TESS+EulerCam) \\
Semi-major axis & $a_\mathrm{p}$ & 0.02596$^{+0.00043}_{-0.00063}$ & au & (TESS+EulerCam)$^{\dagger}$ \\
Eccentricity & $e$ & [0 - 0.0032] & & TESS \\
Argument of periastron & $\omega$ & 10$\pm$10 & deg & TESS \\
Orbital inclination & $i_\mathrm{p}$ & 88.49$\pm$0.16 & deg & (TESS+RM) \\ 
Impact parameter & $b$ & 0.10$\pm$0.01 & & (TESS+RM)$^{\dagger}$ \\ 
Transit durations & $T_\mathrm{14}$ & 2.9053$\stackrel{+0.0065}{_{-0.0059}}$ & h & TESS$^{\dagger}$ \\
 & $T_\mathrm{23}$ & 2.2605$\stackrel{+0.0055}{_{-0.0053}}$ & h & TESS$^{\dagger}$ \\
Planet-to-star radius ratio & $R^\mathrm{T}_\mathrm{p}/R_{\star}$ & 0.12355$\stackrel{+0.00033}{_{-0.00029}}$ & & TESS \\ 
 & $R^\mathrm{E}_\mathrm{p}/R_{\star}$ & 0.12534$\stackrel{+0.00043}{_{-0.00060}}$ & & EulerCam \\ 
Radius & $R^\mathrm{T}_\mathrm{p}$ & 1.753$\pm$0.036 & $R_\mathrm{Jup}$ & TESS$^{\dagger}$ \\
 & $R^\mathrm{E}_\mathrm{p}$ & 1.773$\stackrel{+0.041}{_{-0.033}}$ & $R_\mathrm{Jup}$ & EulerCam$^{\dagger}$ \\
Stellar reflex velocity & $K$ & 177.0$\stackrel{+8.5}{_{-8.1}}$ & m\,s$^{-1}$ & RV \\
Mass & $M_\mathrm{p}$ & 1.157$\pm$0.070 & $M_\mathrm{Jup}$ & RV$^{\dagger}$ \\
Density & $\rho_\mathrm{p}$ & 0.266$\stackrel{+0.024}{_{-0.022}}$ & g\,cm$^{-3}$ & (TESS+RV)$^{\dagger}$ \\
Surface gravity & $g_\mathrm{p}$ & 9.33$\stackrel{+0.71}{_{-0.67}}$ & 
m\,s$^{-2}$ & (TESS+RV)$^{\dagger}$ \\
Sky-projected obliquity & $\lambda$ & 87.20$\stackrel{+0.41}{_{-0.45}}$ & deg & RM \\
3D obliquity & $\psi^{\rm North}$ & 88.1$\pm$0.25 & deg & RM$^{\dagger}$\\
 & $\psi^{\rm South}$ & 91.11$\pm$0.20 & deg & RM$^{\dagger}$\\
 \hline
 \end{tabular}
 \begin{tablenotes}[para,flushleft]
 Notes: Values in square brackets indicate the 1$\sigma$ confidence intervals for the equatorial velocity and eccentricity, whose probability distributions peak at the lower boundary values for these parameters. The 3$\sigma$ confidence intervals for these parameters are [65.28 - 295]\,km\,s$^{-1}$ and [0 - 0.0078]. Properties with TESS origin are reported from \citet{Bourrier2019}, or revised when combined with other datasets. Coefficients $u_1$ and $u_2$ are associated with a quadratic limb-darkening law. The daggers indicate derived parameters. Planetary density and surface gravity were calculated using the lowest planet-to-star radius ratio (from TESS). There are two possible solutions for the stellar inclination and 3D obliquity of WASP-121b, marked as \textit{North} or \textit{South} depending on which pole of the star is visible. \\
 \end{tablenotes}
 \end{threeparttable}
\label{tab:sys_prop}
\end{table*}

\section{Reloaded Rossiter-McLaughlin analysis}
\label{sec:reloaded RM}

\subsection{HARPS observations of WASP-121}
\label{sec:HARPS_data}

We studied the orbital architecture of WASP-121b and the properties of its host star by analyzing three transit observations obtained with the HARPS echelle spectrograph (HEARTS survey, ESO program 100.C-0750; PI: D. Ehrenreich). Three visits were scheduled on 31 December 2017 (Visit 1), 9 January 2018 (Visit 2), and 14 January 2018 (Visit 3). They lasted between 6.6 and 8.1\,h, covering the full duration of the transit ($\sim$2.9\,h) with sufficient baseline on each side to determine the unocculted stellar properties (Table~\ref{tab:log}).

Observations were reduced with the HARPS (version 3.8) Data Reduction Software, yielding spectra with a resolving power of 115,000 and covering the region 380-690 nm. The reduction includes a correction of the color effect due to variability of the extinction caused by Earth's atmosphere during the transit (e.g., \citealt{bourrier2014b}, \citealt{Bourrier_2018_Nat}). The spectrum of a hot F-type star such as WASP-121 contains far fewer absorption lines than those of later-type stars. Including these absent lines in the mask would reduce the contrast of the CCF and the precision of its derived properties. Furthermore, the fast rotation of WASP-121 broadens the stellar lines, blending lines that are isolated in the spectra of colder stars. A single mask line needs to be associated with the unresolved stellar lines contributing to the same blended line to avoid introducing correlated information into the CCF. We thus computed CCFs for each spectral order using a custom mask specific to WASP-121 (this mask is available in electronic form at the CDS). All measured 1D spectra were averaged and smoothed with a 0.09\,\AA\, moving average. The continuum was estimated by running a local maximum detection algorithm on the spectrum, using an alpha-shape algorithm to remove unreliable local maxima and applying a cubic interpolation on the remaining maxima to extrapolate the continuum on the full wavelength grid.
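As an illustration of this continuum-estimation step, a minimal Python sketch could read as follows. This is not our reduction code: the alpha-shape pruning is replaced by a simple quantile cut, and the smoothing width and thresholds are placeholders.

\begin{verbatim}
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def estimate_continuum(wave, flux, smooth_pix=9):
    """Smooth the spectrum, detect local maxima, prune weak ones, and
    interpolate a continuum over the full wavelength grid."""
    kernel = np.ones(smooth_pix) / smooth_pix
    sflux = np.convolve(flux, kernel, mode="same")       # moving average
    imax = argrelextrema(sflux, np.greater, order=3)[0]  # local maxima
    # crude stand-in for the alpha-shape pruning of unreliable maxima
    imax = imax[sflux[imax] > np.quantile(sflux[imax], 0.10)]
    return CubicSpline(wave[imax], sflux[imax])(wave)    # cubic interpolation
\end{verbatim}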
The stellar lines to be included in our custom mask were then defined as a local minimum surrounded by two local maxima. A first estimate of the detectable lines and their position was made by running a local minimum detection algorithm on the stellar spectrum. Positions were then derived more accurately as the minimum of a parabola fit around the line minimum in a window of $\pm$3\,km\,s$^{-1}$. We discarded lines with windows smaller than 5 pixels, lines with derived centers farther away than 0.03\,\AA\, from the local minimum, and shallow lines with a relative flux difference between the local minimum and the highest local maximum smaller than 0.05. Last, we generated a synthetic telluric spectrum with Molecfit \citep{Smette2015}, and removed mask lines for which the core of a neighboring telluric line (with a depth ratio relative to the mask line higher than 2\%) entered the region defined by the two local maxima of the mask line for at least one Earth barycentric RV of the spectrum. The final mask is composed of 1828 lines. Their weights were set to the relative flux difference between the local minimum of each stellar line and the average of its two local maxima (\citealt{pepe2002}). 

Because the CCFs generated with the HARPS DRS are oversampled with a step of 0.25\,km\,s$^{-1}$, and have a pixel width of about 0.8\,km\,s$^{-1}$, we kept one in four points in all CCFs prior to their analysis (\citealt{Cegla2017}). Here we note that by construction, our custom mask lines are at rest in the stellar rest frame. The null velocity in the CCFs calculated with this mask thus directly corresponds to the stellar rest velocity. \\

\begin{table} 
\caption[]{Log of WASP-121 b HARPS transit observations}
\centering
\begin{threeparttable}
\begin{tabular}{c|c|c|c}
 \hline
 \hline
 Visits & 1 & 2 & 3 \\
 \hline
 Date (start) & 31-12-17 & 09-01-18 & 14-01-18 \\
 Number of exposures & 35 & 55 & 50 \\
 Exposure duration (s) & 570-720 & 500-600 & 500-660 \\ 
 Exposure S/N (550\,nm) & 26-61 & 21-43 & 32-49 \\
 \hline
 \end{tabular}
 \begin{tablenotes}[para,flushleft]
 Notes: The S/N is given per pixel.\\
 \end{tablenotes}
 \end{threeparttable}
\label{tab:log}
\end{table}

\subsection{Simultaneous EulerCam photometry}
\label{sec:LC_fit}

The reloaded RM technique requires knowledge of the transit light curve in the spectral band of the CCFs (\citealt{Cegla2016}). Measuring the transit simultaneously in photometry and spectroscopy further allows us to detect occulted spots and plages along the chord that is transited by the planet. We therefore obtained simultaneous photometry throughout the transits in Visits 1 and 2 using EulerCam at the 1.2m Euler telescope at La Silla. Observations were carried out using an r'-Gunn filter to match the HARPS wavelength band as closely as possible, and we applied a slight defocus to the telescope to improve the target point spread function (PSF) sampling and the observation efficiency. After standard image correction procedures, we extracted relative aperture photometry, iteratively selecting a set of stable reference stars and testing a number of extraction apertures. For details on EulerCam and the relevant observation and data reduction procedures, see \citet{Lendl2012}. We combined the new broadband photometry with two archival EulerCam light curves observed in r' band on 2014 January 19 and 23 (Fig.~\ref{fig:EULER_LC_all}, \citealt{Delrez2016}).
The four light curves are shown in Fig.~\ref{fig:EULER_LC_all}, and they are available in electronic form at the CDS. 

The increased scatter during the transit on 2014 January 23 is caused by the passage of a cloud. The light curve obtained in Visit 1 shows a much shallower transit than the light curves obtained during the other epochs. We did not find any large variation (beyond the mmag level) in the overall stellar brightness between Visits 1 and 2, as would be created, for example, by a changing star spot coverage, which would translate into an offset in the measured transit depth. No variations in transit depth similar to that of Visit 1 are found in any of the 17 transits observed with TESS (\citealt{Bourrier2019}). We lack a convincing physical explanation for this anomaly, and suggest that instrumental effects linked to image saturation are the likely origin of this variation. Indeed, the target saturated the detector near the transit center in Visit 1, and the data are therefore likely affected by detector nonlinearity at high flux levels. The light curve from Visit 1 was therefore excluded from further analyses. \\

We made use of the MCMC code described in \citet{Lendl2017} to fit the EulerCam data. We assumed a uniform prior distribution for the planet-to-star radius ratio (i.e., this parameter was fit without any a priori constraints), and placed normal prior distributions on the impact parameter, the transit duration, the mid-transit time, and the planetary period. These priors were centered on the values derived in \citet{Bourrier2019}, and their width corresponds to the respective 1\,$\sigma$ uncertainties. We used the routines of \citet{Espinoza2015} to compute quadratic limb-darkening coefficients, using a wide ($\sigma_{prior}=0.1$) normal prior distribution centered on these values in our analysis. Our code allows for the use of parametric baseline models (see, e.g., \citealt{Gillon2010}), and we find that the light curves of 2014 January 19 and 23, and 2018 January 14 are best fit by models of the form $p(t^2)+p(\mathit{xy}^1) + p(\mathit{FWHM}^1)$, $p(t^1)+ p(\mathit{FWHM}^2)$, and $p(t^2)+p(\mathit{xy}^1)+p(\mathit{FWHM}^2)$, respectively, where $p(\mathit{i}^n)$ refers to a polynomial of order $n$ in parameter $i$. The parameters are the time $t$, coordinate shifts $xy$, and stellar $\mathit{FWHM}$. System properties specific to the EulerCam passband are given in Table~\ref{tab:sys_prop}. Because the EulerCam transits are separated by several years, they provide a long baseline that improved the precision on the orbital period and semi-major axis compared to the TESS fit. We updated their values accordingly.\\

We compared our measurement for R$_\mathrm{p}/$R$_{*}$ (0.1253$\stackrel{+0.0004}{_{-0.0006}}$ from EulerCam in 619-710\,nm) and that of \citet{Bourrier2019} (0.1236$\pm{0.0003}$ from TESS in 600-1000\,nm) with those of \citet{Evans2018} obtained with the G430L and G750L HST/STIS gratings, averaging their measurements within the respective EulerCam and TESS passbands. The \citet{Evans2018} results yield R$_\mathrm{p}/$R$_{*}$ = 0.12238$\pm$0.00036 (EulerCam) and 0.12244$\pm$0.00021 (TESS), which is significantly lower than the EulerCam and TESS measurements by 0.003 (5.3$\sigma$) and 0.001 (2.7$\sigma$), respectively.
\citet{Evans2018} previously noted that their measurements were lower than values obtained using ground-based photometry in the $B$, $r'$, and $z'$ bandpasses (\citealt{Delrez2016}) and proposed that these discrepancies could arise from systematics in the latter measurements. Interestingly, our planet-to-star radius ratios are consistent with those obtained by \citet{Delrez2016} in the bands that overlap with those of EulerCam (0.12521$\pm$0.0007 in 555-670\,nm) and TESS (0.12298$\pm$0.0012 in 836-943\,nm). The good agreement between ground- and space-based measurements suggests that the reduction procedure or systematics specific to the HST data might have offset the transit depths derived by \citet{Evans2018}.\\

\begin{center}
\begin{figure}
\centering
\includegraphics[trim=0.cm 2.cm 0.cm 0cm,clip=true,width=\columnwidth]{EULER_LC_all}
\caption[]{Transit light curves of WASP-121b obtained with EulerCam, offset by 0.02 for visibility. Best-fit models fitted to the 2014 and 2018 data are shown in red. They include a common transit model and a detrending model specific to each visit. The abnormal shape of the 2017 light curve is likely due to instrumental effects. For this epoch we only overplot the transit model.}
\label{fig:EULER_LC_all}
\end{figure}
\end{center}

\subsection{Analysis of the local stellar CCFs}
\label{sec:extra}

The HARPS CCFs (hereafter CCF$_\mathrm{DI}$) originate from starlight integrated over the disk of WASP-121. We used the reloaded RM technique (\citealt{Cegla2016}, see also \citealt{Bourrier2017_WASP8,Bourrier_2018_Nat}) to isolate the local CCF (hereafter CCF$_\mathrm{loc}$) from the regions of the photosphere that are occulted by WASP-121b. The CCF$_\mathrm{DI}$ calculated in the stellar rest frame were first corrected for the stellar Keplerian motion induced by WASP-121b. We identified the CCF$_\mathrm{DI}$ obtained outside of the transit, taking care to exclude those that even partially overlapped with the transit window, and coadded them to build a ``master-out'' CCF$_\mathrm{DI}$ in each night, which corresponds to the unocculted star. The continua of the master-out and individual CCF$_\mathrm{DI}$ outside of the transit were normalized to the same continuum at unity, while in-transit CCF$_\mathrm{DI}$ were scaled to reflect the planetary disk absorption. This scaling was made using the theoretical transit light curve derived from the fit to the EulerCam data (Sect.~\ref{sec:LC_fit}), whose spectral range is closer to that of HARPS than that of TESS. 

The CCF$_\mathrm{loc}$ associated with the planet-occulted regions were retrieved by subtracting the scaled in-transit CCF$_\mathrm{DI}$ from the master-out in each night. The local stellar line profiles from the planet-occulted regions of the photosphere are clearly visible in Fig.~\ref{fig:2D_maps}. They are always redshifted, and this redshift slightly increases along the transit chord. WASP-121b therefore always transits the hemisphere of the star that rotates away from us, with a transit chord farther from the projected stellar spin axis at egress than at ingress.
This preliminary analysis implies that the sky-projected obliquity $\lambda$ must be slightly lower than 90$^{\circ}$, in contrast to the value of 102.2$\pm$5.5$^{\circ}$ (using the same convention as in the present study) derived by \citet{Delrez2016} from a classical velocimetric analysis of the RM effect in CORALIE data.

\begin{center}
\begin{figure}[tbh!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm,clip=true,width=\columnwidth]{2D_maps_col}
\caption[]{Maps of the residuals between the scaled CCF$_\mathrm{DI}$ and their master-out in each visit. Residuals are colored as a function of their flux, and plotted as a function of RV in the stellar rest frame (in abscissa) and orbital phase (in ordinate). The vertical dashed black line indicates the stellar rest velocity. Horizontal dotted lines are the transit contacts. In-transit residuals correspond to the CCF$_\mathrm{loc}$, and show the average local stellar line profile (recognizable by a lower flux in the CCF$_\mathrm{loc}$ cores) from the planet-occulted regions of the stellar disk. For comparison, the spectroscopic width of the disk-integrated stellar lines is about 14\,km\,s$^{-1}$. Black crosses with error bars indicate the centroids of the detected stellar line profile. The slanted dashed black line tracks the orbital trajectory of the planet.}
\label{fig:2D_maps}
\end{figure}
\end{center}

The RV centroids of the CCF$_\mathrm{loc}$ can generally be derived from a simple Gaussian fit. The CCFs generated with our custom mask for WASP-121, however, show side lobes that would limit the precision of CCF properties derived with a Gaussian fit. Therefore, we used the double-Gaussian model introduced by \citet{Bourrier_2018_Nat} for the M dwarf GJ\,436, which consists of the sum of a Gaussian function representing the CCF continuum and side lobes, and an inverted Gaussian function representing the CCF core. As illustrated in Fig.~\ref{fig:CCF_DI_fit}, the double-Gaussian model reproduces the entire CCF of WASP-121 well and thus exploits the full information contained in its profile. 

We performed a preliminary fit to the CCF$_\mathrm{loc}$ using a double-Gaussian model where the FWHM ratio, contrast ratio, and centroid difference between the core and lobe components were set to the best-fit values for the nightly master-out CCF$_\mathrm{DI}$ (as in \citealt{Bourrier_2018_Nat}). The local average stellar line is considered detected if the amplitude of the model CCF$_\mathrm{loc}$ (defined as the flux difference between the minimum of the model CCF$_\mathrm{loc}$ and its continuum) is three times larger than the dispersion in the continuum of the observed CCF$_\mathrm{loc}$. This led us to discard a few CCF$_\mathrm{loc}$ that were located very near the stellar limb, where the lower flux and the partial occultation by the planet yield very low S/N ratios. The remaining CCF$_\mathrm{loc}$ were shifted to the same rest velocity and averaged on each night to create a master local CCF$_\mathrm{loc}$ (e.g., \citealt{Wyttenbach2017}). The comparison with the master CCF$_\mathrm{DI}$ (Fig.~\ref{fig:Compa_Models_Out_Loc}) clearly shows the effect of rotational broadening; the local average stellar line is far narrower and deeper than the disk-integrated line. Both CCFs show side lobes, which are well fit with a double-Gaussian model but with different properties.
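For reference, a minimal sketch of such a double-Gaussian profile and of an unconstrained fit is given below. In the actual analysis the FWHM ratio, contrast ratio, and centroid difference of the two components are tied to the master values, a constraint omitted in this sketch; all numbers are illustrative.

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def double_gaussian(rv, c0, a_lobe, mu_lobe, sig_lobe,
                    a_core, mu_core, sig_core):
    """Continuum + Gaussian (continuum and side lobes)
    - inverted Gaussian (core)."""
    lobe = a_lobe * np.exp(-0.5*((rv - mu_lobe)/sig_lobe)**2)
    core = a_core * np.exp(-0.5*((rv - mu_core)/sig_core)**2)
    return c0 + lobe - core

# toy usage on a synthetic CCF
rv = np.linspace(-60.0, 60.0, 121)
ccf = double_gaussian(rv, 1.0, 0.05, 0.0, 25.0, 0.25, 2.0, 8.0)
ccf += np.random.default_rng(1).normal(0.0, 2e-3, rv.size)
popt, pcov = curve_fit(double_gaussian, rv, ccf,
                       p0=[1.0, 0.05, 0.0, 20.0, 0.2, 0.0, 10.0])
print("fitted core centroid: %.3f km/s" % popt[5])
\end{verbatim}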
In the master CCF$_\mathrm{loc}$ the lobe component is broader, and more redshifted relative to the core component, than in the master CCF$_\mathrm{DI}$. The final fit to the CCF$_\mathrm{loc}$ in individual exposures was performed with a double-Gaussian model where the core and lobe components were linked as in the nightly master CCF$_\mathrm{loc}$. Flux errors assigned to the CCF$_\mathrm{loc}$ were set to the standard deviation in their continuum flux, and the uncertainties on the derived parameters were set to the 1\,$\sigma$ statistical errors from a Levenberg-Marquardt least-squares minimization. The local stellar surface RVs were defined as the derived centroids of the CCF$_\mathrm{loc}$ core component.\\

\begin{center}
\begin{figure}
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm,clip=true,width=\columnwidth]{CCF_DI_fit.pdf}
\caption[]{Typical CCF$_\mathrm{DI}$ integrated over the disk of WASP-121 (blue points, obtained during one of the out-of-transit exposures in Visit 2). The solid black profile is the best-fit double-Gaussian model to the measured CCF. The dashed black profiles show the individual Gaussian components to this model, which yields a low dispersion on the fit residual (bottom panel). The blue shaded regions indicate the velocity ranges used to define the CCF continuum.}
\label{fig:CCF_DI_fit}
\end{figure}
\end{center}

\begin{center}
\begin{figure}
\centering
\includegraphics[trim=1.5cm 0cm 1cm 0cm,clip=true,width=\columnwidth]{Compa_Models_Out_Loc.pdf}
\caption[]{Master-out CCF$_\mathrm{DI}$ (magenta) and master local CCF$_\mathrm{loc}$ (blue), binned over the three visits and normalized to the same continuum. The dashed and dotted black profiles show the best-fit models to the master-out and master-local, respectively. They are based on the same double-Gaussian model, but with different correlations between the properties of the lobe and core Gaussian components.}
\label{fig:Compa_Models_Out_Loc}
\end{figure}
\end{center}

\subsection{Analysis of the stellar rotation and orbital architecture}
\label{sec:fit_RM}

\subsubsection{Model and prior constraints}
\label{sec:priors_RM}

Despite some variability, the local RVs follow a similar trend in the three visits (Fig.~\ref{fig:RV_local}). They become more redshifted along the transit chord and remain always positive, confirming the preliminary interpretation performed in Sect.~\ref{sec:extra} of a near-polar orbit only crossing the redshifted stellar hemisphere. The orbital architecture of the system and the properties of the velocity field of the stellar photosphere can be derived from the fit to the local RVs using the reloaded RM model (\citealt{Cegla2016}; see their Figure 3 for the definitions of the coordinate system and angle conventions), which calculates brightness-weighted theoretical RVs averaged over each planet-occulted region. In previous reloaded RM studies (\citealt{Cegla2016}, \citealt{Bourrier2017_WASP8, Bourrier_2018_Nat}) the model was fit to the data by varying the stellar projected rotational velocity $v_{\rm eq}\sin i_{*}$ (and in some cases, the differential rotation or convective motions of the stellar photosphere) and the sky-projected obliquity $\lambda$. The latter parameter alone thus controlled the model planet trajectory, and the coordinates of the occulted regions.
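To make the geometry concrete, the sketch below fits a deliberately simplified version of this model to synthetic local RVs with \textit{emcee}: solid-body rotation only, no brightness weighting, no exposure blur, and no differential rotation; priors, noise levels, and all numerical values are illustrative, and sign conventions follow \citet{Cegla2016} only up to the orientation choices made here.

\begin{verbatim}
import numpy as np
import emcee

A_RS = 3.8131                        # a_p / R_*
VSINI, VSINI_ERR = 13.56, 0.69       # spectroscopic v_eq sin(i_*) prior [km/s]

def local_rv(phase, veq, cosi, lam, ip):
    # Sky-plane planet position in stellar radii (circular orbit), then
    # distance from the projected spin axis after rotating by lambda.
    xp = A_RS * np.sin(2.0*np.pi*phase)
    yp = -A_RS * np.cos(2.0*np.pi*phase) * np.cos(ip)
    x_perp = xp*np.cos(lam) - yp*np.sin(lam)
    return veq * np.sqrt(1.0 - cosi**2) * x_perp   # solid-body rotation

def log_prob(theta, phase, rv, err):
    veq, cosi, lam, ip = theta
    if not (65.28 < veq < 300.0 and -1.0 < cosi < 1.0
            and -np.pi < lam < np.pi):
        return -np.inf
    lp = -0.5*((veq*np.sqrt(1.0 - cosi**2) - VSINI)/VSINI_ERR)**2
    lp += -0.5*((ip - np.radians(88.49))/np.radians(0.5))**2  # toy i_p prior
    res = rv - local_rv(phase, veq, cosi, lam, ip)
    return lp - 0.5*np.sum((res/err)**2)

rng = np.random.default_rng(0)
phase = np.linspace(-0.045, 0.045, 40)             # in-transit phases
truth = (96.0, np.cos(np.radians(8.1)), np.radians(87.2), np.radians(88.49))
rv_obs = local_rv(phase, *truth) + rng.normal(0.0, 0.3, phase.size)

p0 = np.array(truth) + 1e-4*rng.normal(size=(16, 4))
sampler = emcee.EnsembleSampler(16, 4, log_prob, args=(phase, rv_obs, 0.3))
sampler.run_mcmc(p0, 2000)
print(np.median(sampler.get_chain(discard=500, flat=True), axis=0))
\end{verbatim}

With this orientation choice the model reproduces the qualitative behavior seen in the data: local RVs that stay positive and increase slightly from ingress to egress for a near-polar orbit.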
The near-polar orbit of WASP-121b, however, results in $v_{\rm eq}\sin i_{*}$ being strongly degenerate with the planet impact parameter (see Appendix~\ref{apn:polar_orb}), which remains poorly determined because of the uncertainty on the orbital inclination $i_\mathrm{p}$ (\citealt{Bourrier2019}). Interestingly, an impact parameter close to zero would require that the planet cross the projected stellar spin axis (where local RVs are zero), which is incompatible with the transit of a single stellar hemisphere indicated by the positive local RV series. This means that more stringent constraints on the orbital inclination can be derived from the fit to the local RVs, and we therefore modified the reloaded RM model to include $i_\mathrm{p}$ as a free parameter. The scaled semi-major axis of WASP-121b has less influence on the local RVs and is much better determined than $i_\mathrm{p}$. After checking that $a_\mathrm{p}/R_{*}$ could not be better constrained through the fit, we fixed it to its nominal value. Similarly, the other orbital properties and the ephemeris of WASP-121b are known to a much higher precision through transit photometry and velocimetry than could be obtained via the fit to the local RVs, and they were accordingly fixed to their nominal values. The planet-to-star radius ratio and the stellar limb-darkening coefficients cannot be retrieved from the fit to the local RVs because absolute flux levels are lost in ground-based HARPS data. We note that the measured local RVs do not depend on our choice for $i_\mathrm{p}$ because the photometric scaling of the CCFs was performed directly with the transit light-curve model fit to the simultaneous EulerCam data (Sect.~\ref{sec:extra}). \\

\begin{center}
\begin{figure}
\centering
\includegraphics[trim=0cm 0.cm 0cm 0cm,clip=true,width=\columnwidth]{PAPER_WASP121b_RV_stsurf_phase.pdf}
\caption[]{Radial velocities of the stellar surface regions occulted by WASP-121b as a function of orbital phase. Horizontal bars show the exposure durations. The black curve is the best-fit reloaded RM model (indistinguishable between the low- and high-$i_{*}$ solutions) to the three visits. Dashed vertical lines are the transit contacts. The horizontal dashed line highlights the stellar rest velocity, which is found along the projected stellar spin axis. \textbf{Upper panel}: Local RVs in individual Visits 1 (red), 2 (gold), and 3 (blue). \textbf{Bottom panel}: Local RVs derived from the CCF$_\mathrm{loc}$ binned over the three visits, shown separately for the sake of clarity.}
\label{fig:RV_local}
\end{figure}
\end{center}

Additional constraints can be set from the independent measurements of stellar line broadening and the stellar rotational period. \citet{Delrez2016} derived a spectroscopic value $v_{\rm eq}\sin i_{*/\rm spec}$ = 13.56$\stackrel{+0.69}{_{-0.68}}$\,km\,s$^{-1}$ from the fit to stellar \ion{Fe}{i} lines in CORALIE spectra. A similar estimate can be derived from the comparison between the HARPS master-out CCF$_\mathrm{DI}$ and master-local CCF$_\mathrm{loc}$. Under the assumption that the CCF$_\mathrm{loc}$ measured along the transit chord are representative of the entire stellar disk, the observed master-out was fit by tiling a model star with the master-local CCF$_\mathrm{loc}$, weighted by the limb-darkening law derived from the EulerCam photometry, and shifted in RV position by the solid rotation of the photosphere, which was left free to vary.
We obtain a good fit for $v_{\rm eq}\sin i_{*} \sim 13.9$\,km\,s$^{-1}$ (Fig.~\ref{fig:Fit_Mout_Mloc}), suggesting that the local average stellar line profile does not change substantially across the stellar disk within the precision of HARPS, and that $v_{\rm eq}\sin i_{*/\rm spec}$ can be used as a prior for the stellar projected rotational velocity. 

Analysis of ground-based spectroscopy and TESS photometry of WASP-121 (Sect.~\ref{sec:RV_fit}) revealed a persistent rotational modulation at 1.13 days, and a transient modulation at 1.37 days. We understand these results as an indication of differential rotation, with the equator of WASP-121 rotating at least as fast as the latitudes probed by the 1.13-day signal, and the transient signal arising from higher latitudes that rotate more slowly. This sets a prior on the stellar equatorial velocity $v_{\rm eq}\geqslant$ 65.28\,km\,s$^{-1}$.

The three local RV series were simultaneously fit with the updated RM model. We assumed a solar-like differential rotation law $P(\theta_\mathrm{lat}) = P_\mathrm{eq}/(1 - \alpha\,\sin^{2}\theta_\mathrm{lat})$, where $\theta_\mathrm{lat}$ is the stellar latitude and $\alpha = 1 - P_\mathrm{eq}/P_\mathrm{pole}$ is the relative differential rotation rate (\citealt{Cegla2016}). We accounted for the blur caused by the planetary motion during a HARPS exposure by oversampling the transit chord between the planetary position at the beginning and end of each exposure (\citealt{Bourrier2017_WASP8}). We sampled the posterior distributions of the model parameters using the \textit{emcee} MCMC package (\citealt{Foreman2013}), as in \citet{Bourrier_2018_Nat}. Jump parameters for the MCMC are the stellar equatorial velocity $v_{\rm eq}$, the cosine of the stellar inclination cos$(i_{*})$, the sky-projected obliquity $\lambda$, the orbital inclination $i_\mathrm{p}$, and the differential rotation rate $\alpha$. We set a uniform prior on $v_{\rm eq}$ and a Gaussian prior on $v_{\rm eq}\sin i_{*/\rm spec}$, following the above discussion. The posterior distribution from the fit to TESS photometry was used as prior for $i_\mathrm{p}$ (\citealt{Bourrier2019}). Uniform priors were set on the other parameters over their definition range: [-1 ; 1] for cos$(i_{*})$, [-180 ; 180]$^{\circ}$ for $\lambda$, and [-1 ; 1] for $\alpha$. \\

\begin{center}
\begin{figure}
\centering
\includegraphics[trim=1cm 0cm 1cm 0.5cm,clip=true,width=\columnwidth]{PAPER_WASP121b_scr_Mout_Mloc_binned_HARPS-binned.pdf}
\caption[]{Master-out CCF$_\mathrm{DI}$ (thick magenta line) and its best fit (black line) obtained by tiling a model star with the limb-darkened master local CCF$_\mathrm{loc}$, used as a proxy for the specific stellar intensity profile. The best-fit value for the projected stellar rotational velocity, which controls the spectral position of this profile over the model stellar disk, is in good agreement with its measured spectroscopic value and yields a good fit to the observed master-out CCF$_\mathrm{DI}$.}
\label{fig:Fit_Mout_Mloc}
\end{figure}
\end{center}

\subsubsection{Results}
\label{sec:results_RM}

Posterior probability distributions are shown in Figs.~\ref{fig:PD_lowi} and \ref{fig:PD_highi}. Best-fit values for the model parameters were set to the median of their distributions, and are given in Table~\ref{tab:sys_prop}.
Some of the parameter distributions are asymmetrical, and we therefore chose to define their $1\sigma$ uncertainties using the highest density intervals, which contain 68.3\% of the posterior distribution mass such that no point outside the interval has a higher density than any point within it. The probability distributions show unique solutions for all model parameters, except for the stellar inclination. While we find that WASP-121 is highly inclined, the data do not allow us to distinguish whether the south pole ($i_\mathrm{*}$ = 171.9$\stackrel{+2.5}{_{-3.4}}^{\circ}$) or the north pole ($i_\mathrm{*}$ = 8.1$\stackrel{+3.0}{_{-2.6}}^{\circ}$) is visible. Both scenarios yield similar $\chi^{2}$ of 111 for 43 degrees of freedom. The relatively high reduced $\chi^{2}$ (2.6) is caused by the dispersion of the local RV measurements between the three nights. Deviations from the nominal best-fit model beyond the photon noise are present in all nights and in all phases of the transit, suggesting that variability in the local photospheric properties of this active star could be the origin of these variations. The noise in individual CCF$_\mathrm{loc}$ prevents us from searching for variations in their bisector span. No clear correlations were found between the local RVs and the FWHM or contrast of the CCF$_\mathrm{loc}$, with the EulerCam photometry, or with the Ca\,II$_\mathrm{HK}$, H$\alpha$, and Na activity indexes. We show the best-fit model for the local stellar RVs in Fig.~\ref{fig:RV_local} and the orbital architecture corresponding to the visible north pole in Fig.~\ref{fig:disque}. 

The stellar equatorial rotation remains poorly constrained, with a highest density interval of [65.28 - 120]\,km\,s$^{-1}$ that corresponds to rotation periods between [0.61 - 1.13]\,days. The probability distribution for $v_\mathrm{eq}$ nonetheless favors low velocities, which suggests that the persistent 1.13-day signal measured in photometry and ground-based data arises from active regions close to the stellar equator. We cannot confirm the differential rotation of WASP-121, with $\alpha$ = 0.08$\stackrel{+0.11}{_{-0.13}}$, but this result excludes high differential rotation rates and is consistent within 1$\sigma$ with the observed rotational modulations. Indeed, the constraints $P_\mathrm{eq}\leqslant$ 1.13\,days and $P_\mathrm{pole}\geqslant$ 1.34\,days imply $\alpha\geqslant$ 0.16. These results are also consistent with measurements obtained for Kepler stars by \citet{Balona2016}, who showed that $|\alpha|$ ranges between 0 and 0.2 for stars with rotation periods on the order of 1 day (see their Figure 9). We note that even in the case of differential rotation, the signal measured at $\sim$8.4\,days (Sect.~\ref{sec:Prot}) cannot trace the rotational modulation of a high-latitude region because the lowest $\alpha$ required would be 0.87 at the stellar poles. Measurements of the local surface RVs at higher S/N, for instance, with the ESPRESSO spectrograph, will be crucial in assessing the differential rotation of WASP-121.\\

The orbit of WASP-121b is almost but not exactly edge-on ($i_\mathrm{p}$ = 88.49$\pm$0.16$^{\circ}$) and polar ($\lambda$ = 87.20$\stackrel{+0.41}{_{-0.45}}^{\circ}$).
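For concreteness, such a highest density interval can be computed from posterior samples with a few lines of Python (a generic sketch, not tied to our chains):

\begin{verbatim}
import numpy as np

def hdi(samples, mass=0.683):
    """Smallest interval containing `mass` of the samples: every point
    inside has a higher density than any point outside."""
    x = np.sort(np.asarray(samples))
    n_in = int(np.ceil(mass * x.size))
    widths = x[n_in - 1:] - x[:x.size - n_in + 1]
    i = np.argmin(widths)
    return x[i], x[i + n_in - 1]

# example on a skewed toy posterior
samples = np.random.default_rng(4).gamma(2.0, 1.0, 100_000)
print(hdi(samples))   # asymmetric interval, unlike mean +/- std
\end{verbatim}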
We substantially improved the precision on these properties compared to previous studies, and find that $\lambda$ is 15$^{\circ}$ lower (3$\sigma$) than the value derived by \citet{Delrez2016} (we converted their spin-orbit angle $\beta$ = 257.8$^{\circ}$ into the same frame as our study). We combined the probability distributions of $i_\mathrm{p}$, $\lambda$, and $i_{*}$ to derive the 3D obliquity of the system, $\psi$ = arccos(sin\,$i_{*}$ cos\,$\lambda$ sin\,$i_\mathrm{p}$ + cos\,$i_{*}$ cos\,$i_\mathrm{p}$), and measure $\psi^{\rm South}$ = 91.11$\pm$0.20$^{\circ}$ (stellar south pole visible) or $\psi^{\rm North}$ = 88.1$\pm$0.25$^{\circ}$ (north pole visible). We note that our result for the obliquity does not change the conclusion by \citet{Delrez2016} that WASP-121b is on a highly misaligned orbit, and that it likely underwent strong dynamical interactions with a third companion, possibly an outer planet, during the life of the system (1.5$\pm$1.0\,Gyr). The dynamical evolution of WASP-121b is now controlled by tidal interactions with the star, leading to a gradual decrease in the obliquity and semi-major axis of the planet and to its eventual disruption (\citealt{Delrez2016}). Even with a strong tidal dissipation, however, it would take millions of years to decrease the obliquity by one degree (\citealt{Delrez2016}), and our value for the semi-major axis is not significantly lower than that of \citet{Delrez2016}. This mechanism therefore cannot explain the difference between our measurement for $\lambda$ and that of \citet{Delrez2016}, which could be due to a bias induced by their use of the classical RM technique (\citealt{Cegla2016a}). An interesting alternative, however, might be the nodal precession of the orbit, as is the case for the ultra-hot Jupiter WASP-33b (\citealt{Johnson_WASP33}). The uncertainties on the orbital inclination and obliquity from \citet{Delrez2016} prevent us from measuring a clear variation in the argument of the ascending node, with a change of $-0.95\pm0.64^{\circ}$ in about three years (from the end of 2014 to the end of 2017). Interestingly, this decrease would correspond to a stellar gravitational quadrupole moment of 9.0$\times$10$^{-4}$ (for $\psi^{\rm North}$) or $-1.5\times10^{-4}$ (for $\psi^{\rm South}$), calculated with the equation in \citet{Barnes2013}. A negative moment is excluded by the expected oblateness of WASP-121, but the former solution is on the same order as the moments estimated for the early-type fast-rotating star WASP-33 (\citealt{Johnson_WASP33,Johnson_WASP33_erratum}, \citealt{Iorio2016}).\\

\begin{figure}
\centering
\includegraphics[trim=0.5cm 3cm 0.5cm 1cm,clip=true,width=\columnwidth]{WASP121b_st_disk_ross_lowistar.pdf}
\caption[]{Projection of WASP-121 in the plane of sky for the best-fit scenario where the north pole of the star is visible. The stellar spin axis is displayed as a black arrow extending from the north pole. The stellar equator is represented as a black line, solid when visible and dashed when hidden from view. The stellar disk is colored as a function of its RV field. The normal to the orbital plane is shown as a green arrow, and the orbital trajectory is displayed as a green curve.
The black disk is WASP-121b, to scale.}
\label{fig:disque}
\end{figure}

\section{Distinguishing the planetary and stellar atmospheres}
\label{sec:atmo_struc}

Atmospheres of ultra-hot Jupiters were recently found to contain atomic metal species (e.g., \citealt{Hoeijmakers2018, Hoeijmakers2019}), which are prevented from condensing by the high temperatures (e.g., \citealt{Visscher2010}, \citealt{Wakeford2017}). As an ultra-hot Jupiter, WASP-121b receives extreme amounts of stellar radiation and likely undergoes atmospheric escape orders of magnitude stronger than that of hot Jupiters (\citealt{Salz2019_NUV_WASP121b}). Magnesium and iron ions were recently detected in its exosphere through their near-UV absorption lines (\citealt{Sing2019}), consistent with the marginally larger transit depth measured in the broadband near-UV by \citet{Salz2019_NUV_WASP121b}. These metallic species likely become photoionized within the exosphere after being carried upward by the hydrodynamically expanding upper atmosphere. They could be present in their neutral form in the atmosphere of WASP-121b, and yield strong absorption in optical lines. The custom mask we built to define the CCFs of WASP-121 is based on the stellar spectral absorption lines (Sect.~\ref{sec:HARPS_data}), most of which arise from iron in the stellar atmosphere. Indeed, cross-matching our mask with the VALD database (\citealt{Piskunov1995,Kupka2000,Ryabchikova2015}) shows that, of the 989 lines they have in common, more than half (570) arise from neutral iron. The second most frequent species is neutral nickel, with 67 lines. This means that if the atmospheric limb of WASP-121b contains atomic iron, we would expect its average signature to be superimposed on the stellar CCF measured during transit. We present here a summary of the technique that we devised to search for and extract the atmospheric absorption signal of an exoplanet, based on the reloaded RM approach. It will be fully described in a forthcoming paper. \\

\subsection{Method}

The in-transit CCF$_\mathrm{loc}$ extracted in Sect.~\ref{sec:extra} corresponds to the specific intensity spectrum of the occulted stellar region, multiplied by the wavelength-dependent occulting area of the planet. The intensity spectrum contains the cross-correlated absorption line from the stellar photosphere, centered at the RV of the occulted region. The occulting area is the sum of the continuum level set by the opaque planetary layers (here averaged over the HARPS band) and the equivalent surface of the atmospheric limb. If the planet contains species that absorb the CCF mask lines, this surface corresponds to the cross-correlated absorption line from the planetary atmosphere, centered at the orbital RV of the planet. When the stellar and planetary absorption lines follow sufficiently different tracks in RV-phase space, as is the case with WASP-121b (Fig.~\ref{fig:2D_maps}), it is possible to distinguish their individual contributions from the CCF$_\mathrm{loc}$.\\

\begin{enumerate}
\item The first step consists of subtracting the stellar light that is occulted by the planetary continuum from the CCF$_\mathrm{loc}$. To do this, we used the master CCF$_\mathrm{loc}$, assuming that it is representative of the individual CCF$_\mathrm{loc}$ along the transit chord (see Sect.~\ref{sec:extra}).
The master was rescaled to the correct photometric level using the best-fit EulerCam transit model (Sect.~\ref{sec:LC_fit}), which accounts for the limb-darkening and planetary continuum associated with each exposure. The rescaled master was then shifted to the RV of the planet-occulted regions, calculated with the best-fit model for the local stellar surface RVs (Sect.~\ref{sec:results_RM}). These operations yield the CCF of the product between the local stellar spectra and the transmission spectrum of the atmospheric limb in each exposure.

\item The second step consists of dividing these CCFs by the master CCF$_\mathrm{loc}$, rescaled and shifted as described in the first step, to isolate the cross-correlated absorption line of the atmospheric limb, or CCF$_\mathrm{atm}$ (a schematic code sketch of these two steps is given below). The scaling was made using the total surface of the star rather than the surface associated with the planetary continuum, to obtain CCF$_\mathrm{atm}$ in classical units of absorption relative to the stellar surface. The RV-phase maps of the CCF$_\mathrm{atm}$ from WASP-121b reveal a bright streak aligned with the orbital trajectory of the planet, which is visible only during transit, and is therefore consistent with absorption by metals in the atmosphere of WASP-121b. 

\item The third and last step consists of shifting all CCF$_\mathrm{atm}$ into the planet rest frame, and averaging them over exposures where the entire planet occults the star. We calculated the theoretical RV track of the planet in the stellar rest frame using the orbital properties of the planet listed in Table~\ref{tab:sys_prop}. Ingress and egress are excluded because they probe a smaller fraction of the planetary atmosphere that varies in time. Observing WASP-121b with higher-sensitivity spectrographs such as ESPRESSO might allow studying the shape of the planetary signal during ingress/egress, and possibly resolving longitudinal variations in the planetary atmosphere. We note that we analyzed the three HARPS visits binned together, because of the small amplitude of the planetary signal, and so that the master CCF$_\mathrm{loc}$ could be determined with a high S/N. The low dispersion of the residuals outside of the planetary track in Fig.~\ref{fig:2D_atmo_maps} confirms that, within the precision of the HARPS data, the master CCF$_\mathrm{loc}$ is representative of the stellar line along the transit chord. 
\end{enumerate}
The advantage of this approach is that it allows us to directly use the local stellar lines measured along the transit chord to correct for the bias of the atmospheric signal induced by the RM effect (e.g., \citealt{Louden2015}, \citealt{Casasayas2017,Casasayas2018,Casasayas2019}). One caveat is that step 2 divides the CCF of the product between planetary and stellar lines by the CCF of the stellar lines. Unless all dominant planetary or stellar lines in the CCF mask keep the same profile, this division does not fully remove the contribution of the stellar lines in exposures where they overlap with the planetary lines. We will address this caveat in the forthcoming paper.\\

We performed a preliminary analysis to identify the velocity range that is absorbed by the planetary atmosphere in each exposure. We then carried out the reloaded RM analysis again (Sect.~\ref{sec:reloaded RM}), excluding these planet-absorbed ranges from the fits to the CCF$_\mathrm{loc}$ and from the construction of the master CCF$_\mathrm{out}$ and CCF$_\mathrm{loc}$.
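A schematic sketch of the first two steps, for a single in-transit exposure, is given below. The bookkeeping of the scaling factors is deliberately simplified: \texttt{depth} stands for the occulted flux fraction from the EulerCam transit model and \texttt{rv\_loc} for the local surface RV from the best-fit RM model, and the sketch is illustrative rather than our extraction code.

\begin{verbatim}
import numpy as np

def shift_ccf(rv_grid, ccf, dv):
    # Shift a CCF by +dv (km/s) through linear interpolation.
    return np.interp(rv_grid, rv_grid + dv, ccf)

def ccf_atm(rv_grid, ccf_loc, master_loc, depth, rv_loc):
    """Steps 1-2 for one exposure (scaling simplified): subtract the
    stellar light occulted by the opaque planetary disk, then divide by
    the rescaled, shifted master local CCF to isolate the atmospheric
    absorption line."""
    model = depth * shift_ccf(rv_grid, master_loc, rv_loc)  # step 1 reference
    return (ccf_loc - model) / model                        # step 2
\end{verbatim}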
In four exposures (from phase -0.007 to 0.018) the RVs of the transit chord and planetary orbit are too close to fit the uncontaminated local stellar line and retrieve its centroid (see Fig.~\ref{fig:2D_maps}). We fit the remaining RVs as in Sect.~\ref{sec:fit_RM} and found no significant changes in the properties derived in Sect.~\ref{sec:results_RM}. The contamination from the planet likely does not bias the local stellar RVs beyond the precision of the HARPS data, and the contaminated phase range likely has less influence on the RV model than if it were closer to ingress or egress. Future studies of WASP-121b and similar planets using higher-precision spectrographs should nonetheless take special care with planet-contaminated exposures. The final extraction of the planetary signal was performed using the local RV model derived in Sect.~\ref{sec:results_RM} and the new uncontaminated master stellar CCF$_\mathrm{loc}$. Fig.~\ref{fig:2D_atmo_maps} shows the final RV-phase map of the CCF$_\mathrm{atm}$. 

\begin{figure}
\centering
\includegraphics[trim=0cm 0cm 0.cm 0cm,clip=true,width=\columnwidth]{binned_HARPS-binned_WASP121b_CCF_res_abs_PAPER.pdf}
\caption[]{Map of the atmospheric CCF$_\mathrm{atm}$ binned over the three visits, colored as a function of absorption, and plotted as a function of RV in the stellar rest frame (in abscissa) and orbital phase (in ordinate). Horizontal dotted lines are the transit contacts. The bright streak is the absorption signature from the planetary atmosphere. It follows the track of the planetary orbital motion (solid green curve), but with a slight blueshift.}
\label{fig:2D_atmo_maps}
\end{figure}

\subsection{Results}

The master atmospheric signal, shown in Fig.~\ref{fig:master_atm}, is well fit by a Gaussian profile. Errors on the master CCF$_\mathrm{atm}$ were set to the dispersion in its continuum. We measure a significant blueshift of -5.2$\pm$0.5\,km\,s$^{-1}$ in the planetary rest frame, and an FWHM of 14.5$\pm$1.2\,km\,s$^{-1}$. Correcting this width for the HARPS LSF broadening (2.61\,km\,s$^{-1}$) and for the blur induced by the planet motion during an exposure\footnote{The blur does not quadratically broaden the Gaussian profile of the atmospheric signal. We therefore shifted Gaussian profiles at the rate of the planetary motion during a 690\,s long exposure and compared their average with the measured signal.} yields a net FWHM of 12.9$\pm$1.2\,km\,s$^{-1}$ for the atmospheric signal. Thermal broadening contributes negligibly to the measured width (FWHM$_\mathrm{thermal}\sim$1.5\,km\,s$^{-1}$ at 2800\,K). If WASP-121b is tidally locked, then its atmosphere rotates in the stellar rest frame with the same angular velocity as the planet orbits the star (5.7$\times$10$^{-5}$\,rad\,s$^{-1}$). Accounting for the orbital inclination (but assuming that the planet is not inclined with respect to its orbital plane), we obtain a projected rotational RV of 7.15\,km\,s$^{-1}$ for atmospheric layers close to the planet surface, corresponding to an FWHM of 11.9\,km\,s$^{-1}$.
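This rotational velocity is straightforward to reproduce, assuming $R_\mathrm{Jup} = 71\,492$\,km; the conversion from equatorial velocity to line FWHM depends on the limb weighting of the absorption and is not reproduced here.

\begin{verbatim}
import numpy as np

P_orb = 1.27492504 * 86400.0       # orbital = rotation period if locked [s]
omega = 2.0 * np.pi / P_orb        # ~5.7e-5 rad/s
R_p = 1.753 * 71492.0              # planet radius [km]
i_p = np.radians(88.49)

v_rot = omega * R_p * np.sin(i_p)  # projected equatorial rotation speed
print(f"omega = {omega:.2e} rad/s, v_rot = {v_rot:.2f} km/s")  # ~7.15 km/s
\end{verbatim}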
Planetary rotation (in the stellar rest frame) therefore likely accounts for most of the atmospheric broadening, especially if the measured signal arises from higher altitudes, where the planetary rotation induces a higher velocity.\\

The measured blueshift could trace fast winds going from the dayside to the nightside along both terminators, as predicted for atmospheric circulation in the lower atmospheric layers of hot Jupiters (\citealt{Showman2013}). In this scenario the hotspot is expected to be located at the substellar point, as is indeed measured in the TESS phase curve of WASP-121b (\citealt{Bourrier2019}). However, it might then be expected that heat is efficiently redistributed through the fast day- to nightside winds, whereas the phase curve revealed a strong temperature contrast. This might indicate that the iron signal arises from different layers than those probed by the TESS photometry. It has been proposed (\citealt{Beatty2019,Keating2019}) that the nightsides of most hot Jupiters are covered with clouds of similar composition, which would form at temperatures of about 1100\,K. With an irradiation temperature of $\sim$3310\,K, WASP-121b is in a regime where such clouds are not yet predicted to disperse (\citealt{Keating2019}). The HARPS measurements might therefore probe absorption signals from layers at lower altitudes than are probed by TESS, where fast day- to nightside winds homogenize the temperature longitudinally. Meanwhile, the TESS phase curve could trace emission from high-altitude clouds on the nightside (T$_\mathrm{night} <$ 2200\,K at 1$\sigma$), which would hide the emission from the deeper, hotter regions probed on the dayside (T$_\mathrm{day}$ = 2870\,K). Alternatively, the measured blueshift could trace an anisotropic expansion of the upper atmospheric layers, for example due to the asymmetrical irradiation of the dayside atmosphere (\citealt{Guo2013}) or its compression by the stellar wind and radiation. Interestingly, a stronger but marginal blueshift was measured in the metal species escaping WASP-121b (\citealt{Sing2019}), supporting the idea that the atmospheric layers are increasingly blueshifted as their altitude increases. We note that varying the stellar mass within its 3$\sigma$ uncertainties, thus affecting the planet orbital velocity track (e.g., \citealt{Hoeijmakers2019}), does not change the measured blueshift beyond its uncertainty. 

We do not have the precision required to study individual HARPS exposures, but we analyzed the shape and position of the planetary signal averaged over the first half, and then the second half, of the transit (ingress and egress excluded). We found that the absorption signal maintains the same FWHM (13.2$\pm$1.1\,km\,s$^{-1}$ and 13.2$\pm$2.0\,km\,s$^{-1}$, respectively) but becomes more blueshifted (from -3.82$\pm$0.48 to -6.63$\pm$0.86\,km\,s$^{-1}$). Interestingly, blueshifted absorption signals whose shift increases during transit have been observed in the near-IR helium lines of extended planetary atmospheres (\citealt{Allart2018}, \citealt{Nortmann2018}, \citealt{Salz2018}). It is unclear whether these features trace material that escapes from WASP-121b and is blown away by the stellar wind or radiation pressure, as no absorption is observed before or after the transits and the absorption profile shows no strong asymmetries.
The atmospheric circulation may show strong spatial asymmetries, and the atmospheric limb probes regions with different speeds as the tidally locked planet rotates during transit. Three-dimensional simulations of the planetary atmosphere and more precise observations are required to explore the origin of the measured blueshift.\\

\begin{figure}
\centering
\includegraphics[trim=0.5cm 0cm 0.5cm 0.5cm,clip=true,width=\columnwidth]{MplCCF_res_absCCF.pdf}
\caption[]{Master atmospheric CCF$_\mathrm{atm}$ averaged over the full in-transit exposures. The absorption signal from the planetary atmosphere is clearly detected and well approximated by a Gaussian profile (dashed black profile) with a significant blueshift with respect to the planetary rest velocity (vertical dotted black line). }
\label{fig:master_atm}
\end{figure}

\section{Conclusion}
\label{sec:conclu}

The ultra-hot Jupiter WASP-121b, transiting a bright F-type star on a near-polar orbit, offers great opportunities to investigate the dynamical and atmospheric properties of giant planets in extreme gravitational and energetic conditions. \\

We combined RV measurements with EulerCam and published TESS photometry to revise the orbital and bulk properties of the planet. Three HARPS transit observations of WASP-121b were then used to refine the orbital architecture of the system. We applied the reloaded RM method to isolate the properties of the stellar photosphere along the transit chord, using a custom mask to compute the CCFs of WASP-121, and simultaneous EulerCam photometry to rescale them to their absolute flux level. Analysis of the local RVs from the planet-occulted regions confirms the near-polar orbit of WASP-121b, which leads to a strong degeneracy between the impact parameter and the stellar rotational velocity. We thus improved the reloaded RM model to include the orbital inclination and semi-major axis in the fit to the local RVs. We further derived independent constraints on the stellar rotation period by analyzing the activity indexes of the star, and by comparing the shapes of the local and disk-integrated stellar lines. This allowed us to derive the stellar inclination, orbital inclination, and 3D obliquity to a high precision (Table~\ref{tab:sys_prop}), and to exclude high differential rotation rates for WASP-121. These measurements will be helpful in constraining studies of WASP-121b's past and future dynamical evolution. We encourage follow-up transit observations of the planet to monitor a possible evolution of the obliquity and impact parameter that would result from the nodal precession of the orbit.\\

The custom mask used to calculate the CCFs of WASP-121 was built from the stellar lines, most of which arise from iron transitions. The presence of iron is also expected in the atmosphere of ultra-hot Jupiters because the high temperatures prevent it from condensing. As a result, we developed a new method for removing the contribution of the stellar lines from the local CCFs of the planet-occulted regions and isolating the contribution from the planetary atmosphere. This method relies on deriving directly from the data the local stellar lines, uncontaminated by the planet, which is possible when the orbital trajectory of the planet and its transit chord across the stellar surface are sufficiently separated in RV-phase space.
The application of this method to the HARPS observations of WASP-121b binned over three transits revealed the absorption CCF of iron in the planet atmospheric limb. The width of the signal is consistent with the rotation of WASP-121b, if it is tidally locked. The absorption signal is blueshifted in the planetary rest frame, increasing from -3.82$\pm$0.48\,km\,s$^{-1}$ during the first half of the transit to -6.63$\pm$0.86\,km\,s$^{-1}$ in the second half. This is reminiscent of the effect seen for the ultra-hot gas giant WASP-76\,b (Ehrenreich et al. 2020). These features could arise from day- to nightside winds along both terminators or from the upward winds of an anisotropically expanding atmosphere, combined with the different regions probed by the atmospheric limb as the planet rotates during transit. Observations at higher spectral resolution and with better sensitivity, for instance with the ESPRESSO spectrograph, will enable us to refine the shape of the signal and its temporal evolution. Similar measurements at other wavelengths, searching for species located in layers different from those probed by iron, would furthermore allow us to map the full dynamical structure of the WASP-121b atmosphere.\\

Like their colder relatives, ultra-hot Jupiters display a wide range of orbital architectures (from aligned, such as WASP-19b, \citealt{TregloanReed2013}, to nearly polar, such as WASP-121b). Ground-based instruments with high resolving power (e.g., HARPS and ESPRESSO in the visible; CARMENES, SPIRou, and NIRPS in the infrared) will make it possible to investigate their dynamical properties in detail and to carry out transmission and emission spectroscopy of their atmospheres, allowing us to identify precisely the signatures of their atomic and molecular components and to characterize their 3D atmospheric flows.\\ 

\begin{acknowledgements}
We thank the referee for their fair and useful review of our study. We thank J.B. Delisle for his advice in correcting for activity in the RV measurements and N. Hara for his help in statistical matters. V.B. and R.A. acknowledge support by the Swiss National Science Foundation (SNSF) in the framework of the National Centre for Competence in Research ``PlanetS''. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (project Four Aces, grant agreement No 724427; project Exo-Atmos, grant agreement No 679633). This publication made use of the Data \& Analysis Center for Exoplanets (DACE), which is a facility based at the University of Geneva (CH) dedicated to extrasolar planet data visualisation, exchange, and analysis. DACE is a platform of the PlanetS NCCR, federating the Swiss expertise in exoplanet research. The DACE platform is available at https://dace.unige.ch. N.A-D. acknowledges the support of FONDECYT project 3180063.
This work has made use of the VALD database, operated at Uppsala University, the Institute of Astronomy RAS in Moscow, and the University of Vienna.
\end{acknowledgements}

\bibliographystyle{aa}

\usepackage{pifont}
\newcommand{\cmark}{\ding{51}}%
\newcommand{\xmark}{\ding{55}}%

\usepackage{fullpage}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc} 
\usepackage{hyperref}
\usepackage{url}
\usepackage{booktabs} 
\usepackage{amsfonts,amsmath,amsthm,amssymb} 
\usepackage{nicefrac} 
\usepackage{microtype} 
\usepackage{graphicx}
\usepackage{todonotes}
\usepackage{bm}
\usepackage{soul}
\usepackage{mathrsfs}
\usepackage[capposition=bottom]{floatrow}

\input{math_macros}

\setlength\columnsep{0.30in}

\title{Harmonizable mixture kernels with variational Fourier features}

\author{ Zheyang Shen \qquad Markus Heinonen \qquad Samuel Kaski \\ Helsinki Institute for Information Technology HIIT \\ Aalto University}

\begin{document}
\twocolumn[\maketitle]

\begin{abstract}
The expressive power of Gaussian processes depends heavily on the choice of kernel. In this work we propose the novel harmonizable mixture kernel (HMK), a family of expressive, interpretable, non-stationary kernels derived from mixture models on the generalized spectral representation. As a theoretically sound treatment of non-stationary kernels, HMK supports harmonizable covariances, a wide subset of kernels including all stationary and many non-stationary covariances. We also propose variational Fourier features, an inter-domain sparse GP inference framework that offers a representative set of `inducing frequencies'. We show that harmonizable mixture kernels interpolate between local patterns, and that variational Fourier features offer a robust kernel learning framework for the new kernel family. 
\end{abstract}

\section{INTRODUCTION}

Kernel methods are one of the cornerstones of machine learning and pattern recognition. Kernels, as measures of similarity between two objects, depart from common linear hypotheses by allowing for complex nonlinear patterns \citep{vapnik2013nature}. In a Bayesian framework, kernels are interpreted probabilistically as covariance functions of random processes, such as Gaussian processes (GPs) in Bayesian nonparametrics. As rich distributions over functions, GPs serve as an intuitive nonparametric inference paradigm, with well-defined posterior distributions. \par
The kernel of a GP encodes the prior knowledge of the underlying function. The \emph{squared exponential} (SE) kernel is a common choice which, however, can only model global monotonic covariance patterns, while generalisations have explored local monotonicities \citep{gibbs1998bayesian, paciorek2004nonstationary}. In contrast, expressive kernels can learn hidden representations of the data \citep{wilson2013gaussian}.\par
The two main approaches to constructing expressive kernels are the composition of simple kernel functions \citep{archambeau2011multiple, durrande2016detecting, gonen2011multiple, rasmussen2006, sun2018differentiable}, and the modelling of the spectral representation of the kernel \citep{wilson2013gaussian, samo2015generalized, remes2017non}.
In the compositional approach, kernels are composed of simpler kernels whose choice often remains ad hoc.
\par
The spectral representation approach, proposed by \citet{quia2010sparse} and extended by \citet{wilson2013gaussian}, constructs \emph{stationary} kernels as the Fourier transform of a Gaussian mixture, with theoretical support from Bochner's theorem. Stationary kernels are unsuitable for large-scale datasets that are typically rife with locally-varying patterns \citep{samo2016string}. \citet{remes2017non} proposed a practical \emph{non-stationary} spectral kernel generalisation based on Gaussian process frequency functions, but with unclear theoretical foundations. An earlier technical report studied a non-stationary spectral kernel family derived via the generalised Fourier transform \citep{samo2015generalized}. \citet{samo2017advances} expanded the analysis to non-stationary continuous bounded kernels. \par
The cubic time complexity of GP models significantly hinders their scalability. Sparse Gaussian process models \citep{herbrich2003fast, snelson2006sparse, titsias2009variational, hensman2015scalable} scale GP models with variational inference on pseudo-input points as a concise representation of the input data. Inter-domain Gaussian processes generalize sparse GP models by linearly transforming the original GP and computing cross-covariances, thus placing the inducing points in the transformed domain \citep{lazaro2009inter}.
\begin{table*}[t]
 \centering
 \resizebox{\textwidth}{!}{
 \begin{tabular}{lcccr}
 Kernel & Harmonizable & Non-stationary & Spectral inference & Reference \\
 \hline
 SE: squared exponential & \cmark & \xmark & \cmark & \citet{rasmussen2006} \\
 SS: sparse spectral & \cmark & \xmark & \cmark & \citet{quia2010sparse} \\
 SM: spectral mixture & \cmark & \xmark & \cmark & \citet{wilson2013gaussian} \\
 GSK: generalised spectral kernel & \cmark & \cmark & \xmark & \citet{samo2017advances}\\
 GSM: generalised spectral mixture & \bf{?} & \cmark & \xmark & \citet{remes2017non} \\
 HMK: harmonizable mixture kernel & \cmark & \cmark & \cmark & current work
 \end{tabular}
 }
 \caption{Overview of spectral kernels. The SE, SS and SM kernels are stationary with scalable spectral inference paradigms \citep{lazaro2009inter, quia2010sparse, gal2015improving}. The GSM kernel is theoretically poorly understood, with unknown harmonizability properties. HMK is well-defined, with variational Fourier features as its spectral inference framework.}
 \label{tab:spkernels}
\end{table*}


In this paper we propose a theoretically sound treatment of non-stationary kernels, with the following main contributions:
\begin{itemize}
 \item We present a detailed analysis of \textit{harmonizability}, a concept mainly found in the statistics literature. Harmonizable kernels are non-stationary kernels that, like stationary ones, are interpretable through their \emph{generalized} spectral representations.

 \item We propose practical \emph{harmonizable mixture kernels} (HMK), a class of kernels dense in the set of harmonizable covariances, whose generalized spectral distribution is a mixture.
 \item We propose \emph{variational Fourier features}, an inter-domain GP inference framework for GPs equipped with HMK.
Functions drawn from such GP priors have a well-defined Fourier transform, a desirable property not found in stationary GPs.
\end{itemize}

\section{HARMONIZABLE KERNELS}

In this section we introduce \emph{harmonizability}, a generalization of stationarity largely unknown to the field of machine learning. We first define harmonizable kernels, and then analyze two existing special cases of harmonizable kernels: stationary and locally stationary kernels. We present a theorem demonstrating the expressiveness of previous stationary spectral kernels. Finally, we introduce the Wigner transform as a tool to interpret and analyze these kernels.\par
Throughout the paper we consider complex-valued kernels with vectorial inputs, $k(\mathbf{x}, \mathbf{x}'): \mathbb{R}^D\times \mathbb{R}^D\mapsto\mathbb{C}$. We denote vectors from the input (data) domain by $\mathbf{x}, \mathbf{x}', \boldsymbol{\tau}, \mathbf{t}$, and frequencies by $\boldsymbol{\xi}, \boldsymbol{\omega}$.

\begin{figure*}[t]
 \centering
 \includegraphics[width=\textwidth]{plots/fig1_op4-crop.pdf}
 \caption{Comparison of Gaussian, SS, SM, GSK, GSM and HM kernels (columns) with respect to the kernel, the Wigner distribution, and the real and imaginary parts of the generalized spectral density (rows).}
 \label{fig:fig1}
\end{figure*}

\subsection{Harmonizable kernel definition}

A harmonizable kernel \citep{kakihara1985, yaglom1987correlation, loeve1994probability} is a kernel with a \emph{generalized spectral distribution} defined by a generalized Fourier transform:
\begin{definition}
A complex-valued bounded continuous kernel $k: \mathbb{R}^D \times\mathbb{R}^D\mapsto \mathbb{C}$ is \emph{harmonizable} when it can be represented as
\begin{align}
 k(\mathbf{x},\mathbf{x}') &= \int_{\mathbb{R}^D\times\mathbb{R}^D} e^{2i\pi(\boldsymbol{\omega}^\top \mathbf{x}-\boldsymbol{\xi}^\top \mathbf{x}')}\mu_{\Psi_k}(\text{d}\boldsymbol{\omega}, \text{d}\boldsymbol{\xi}),
\end{align}
where $\mu_{\Psi_k}$ is the Lebesgue--Stieltjes measure associated to some positive definite function $\Psi_k(\boldsymbol{\omega}, \boldsymbol{\xi})$ with bounded variation.
\end{definition}

Harmonizability is a property shared by kernels and the random processes that have them as covariances. The positive definite measure induced by the function $\Psi_k$ is called the generalized spectral distribution of the kernel, and when $\mu_{\Psi_k}$ is twice differentiable, the derivative $S_k(\boldsymbol{\omega}, \boldsymbol{\xi}) = \dfrac{\partial^2\Psi_k}{\partial\boldsymbol{\omega}\partial\boldsymbol{\xi}}$ is called the \emph{generalized spectral density} (GSD).\par
Harmonizable kernels form a very general class in the sense that they contain a large portion of the bounded, continuous kernels (see Table \ref{tab:spkernels}), with only a handful of (somewhat pathological) exceptions \citep{yaglom1987correlation}.


\subsection{Comparison with Bochner's theorem}
Stationary kernels are kernels whose value depends only on the lag $\boldsymbol{\tau}=\mathbf{x}-\mathbf{x}'$, and which are therefore invariant to translations of the input.
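As a concrete illustration (ours, for exposition), the SE kernel with unit amplitude and lengthscale $\ell$ in one dimension depends on its inputs only through the lag, and under the Fourier convention above its spectral density is again Gaussian:
\begin{align}
 k_{\text{SE}}(x, x') = e^{-\frac{(x-x')^2}{2\ell^2}} = k_{\text{SE}}(\tau),
 \qquad
 S_{k_{\text{SE}}}(\omega) = \sqrt{2\pi\ell^2}\, e^{-2\pi^2\ell^2\omega^2},
\end{align}
so that $k_{\text{SE}}(\tau) = \int_{\mathbb{R}} e^{2i\pi\omega\tau} S_{k_{\text{SE}}}(\omega)\,\text{d}\omega$. Concentrating the generalized spectral measure on the diagonal $\boldsymbol{\omega}=\boldsymbol{\xi}$ with this density recovers the same kernel as a (trivially) harmonizable one.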
Bochner's theorem \citep{bochner1959lectures, stein2012interpolation} establishes a similar correspondence between finite measures and stationary kernels:
\begin{theorem}
(Bochner) A complex-valued function $k: \mathbb{R}^D\times\mathbb{R}^D\mapsto\mathbb{C}$ is the covariance function of a weakly stationary, mean square continuous, complex-valued random process on $\mathbb{R}^D$ if and only if it can be represented as
\begin{align}
 k(\boldsymbol{\tau}) &= \int_{\mathbb{R}^D} e^{2i\pi\boldsymbol{\omega}^\top\boldsymbol{\tau}} \psi_k(\text{d}\boldsymbol{\omega}),
\end{align}
where $\psi_k$ is a positive finite measure.
\end{theorem}
Bochner's theorem draws a duality between the space of finite measures and the space of stationary kernels. The \emph{spectral distribution} $\psi_k$ of a stationary kernel is the finite measure induced by a Fourier transform. When $\psi_k$ is absolutely continuous with respect to the Lebesgue measure, its density $S_k(\boldsymbol{\omega})=\dfrac{\d{\psi_k(\boldsymbol{\omega})}}{\d{\boldsymbol{\omega}}}$ is called the \emph{spectral density} (SD).\par
Harmonizable kernels include stationary kernels as a special case. When the mass of the measure $\mu_{\Psi_k}$ is concentrated on the diagonal $\boldsymbol{\omega}=\boldsymbol{\xi}$, the generalized inverse Fourier transform reduces to an inverse Fourier transform with respect to $\boldsymbol{\tau}=\mathbf{x}-\mathbf{x}'$, and therefore recovers the exact form in Bochner's theorem.

A key distinction between the two spectral distributions is that the spectral distribution is a nonnegative finite measure, whereas the generalized spectral distribution is a complex-valued measure, assigning complex values to sets. Even for a real-valued harmonizable kernel, $\Psi_k$ can be complex-valued.

\subsection{Stationary spectral kernels}
Viewing the spectral distribution as a normalized probability measure makes it possible to construct expressive stationary kernels by modeling their spectral distributions. Notable examples include the sparse spectrum (SS) kernel \citep{quia2010sparse} and the spectral mixture (SM) kernel \citep{wilson2013gaussian},
\begin{align}
 k_{SS}(\boldsymbol{\tau}) &= \sum_{q=1}^Q \alpha_q\cos(2\pi\boldsymbol{\omega}_q^\top\boldsymbol{\tau}),\\
 k_{SM}(\boldsymbol{\tau}) &= \sum_{q=1}^Q \alpha_q e^{-2\pi^2\boldsymbol{\tau}^\top{\boldsymbol{\Sigma}}_q\boldsymbol{\tau}}\cos(2\pi\boldsymbol{\omega}_q^\top\boldsymbol{\tau}),
\end{align}
with number of components $Q \in \mathbb{N}_+$, component weights (amplitudes) $\alpha_q \in \mathbb{R}_+$, (mean) frequencies $\boldsymbol{\omega}_q\in\mathbb{R}_+^D$, and frequency covariances ${\boldsymbol{\Sigma}}_q \succeq \mathbf{0}$.
We now prove a theorem demonstrating the expressiveness of the above two kernels.
\begin{theorem}
Let $h$ be a complex-valued positive definite, continuous and integrable function. Then the family of \emph{generalized spectral kernels}
\begin{align}
 k_{GS}(\boldsymbol{\tau}) &= \sum_{q=1}^Q \alpha_q h(\boldsymbol{\tau}\circ\boldsymbol{\gamma}_q)e^{2i\pi\boldsymbol{\omega}_q^\top\boldsymbol{\tau}}
\end{align}
is dense in the family of stationary, complex-valued kernels with respect to pointwise convergence of functions.
Here $\circ$ denotes the Hadamard product, with $\alpha_q\in\mathbb{R}_+$, $\boldsymbol{\omega}_q\in\mathbb{R}^D$, $\boldsymbol{\gamma}_q\in\mathbb{R}^{D}_+$, and $Q\in\mathbb{N}_+$.
\end{theorem}
\begin{proofskch}
Discrete measures are dense in the Banach space of finite measures. Therefore, the complex extension of the sparse spectrum kernel
 $k_{SS}(\boldsymbol{\tau}) = \sum_{q=1}^Q \alpha_q e^{2i\pi\boldsymbol{\omega}_q^\top\boldsymbol{\tau}}$ is dense in the stationary kernels.\par
For each $q$, the function $\dfrac{\alpha_q}{h(0)} h(\boldsymbol{\tau}\circ\boldsymbol{\gamma}_q)e^{2i\pi\boldsymbol{\omega}_q^\top\boldsymbol{\tau}}$ converges to $\alpha_q e^{2i\pi\boldsymbol{\omega}_q^\top\boldsymbol{\tau}}$ pointwise as $\boldsymbol{\gamma}_q\downarrow \mathbf{0}$. Therefore, the proposed kernel form is dense in the set of sparse spectrum kernels and, by extension, in the stationary kernels.
See Section 1 in the supplementary materials for a more detailed proof.
\end{proofskch}\par
We strengthen the claim of \citet{samo2015generalized} by adding the constraint $\alpha_q > 0$, which restricts the family of functions to only valid PSD kernels \citep{samo2017advances}. The spectral distribution of $k_{GS}$ is
\begin{align}
 \psi_{k_{GS}}(\boldsymbol{\xi}) &= \sum_{q=1}^Q \dfrac{\alpha_q}{\prod_{d=1}^D\gamma_{qd}}\psi_h((\boldsymbol{\xi}-\boldsymbol{\omega}_q)\oslash\boldsymbol{\gamma}_q),
\end{align}
with $\oslash$ denoting elementwise division of vectors. A real-valued kernel can be obtained by averaging a complex kernel with its complex conjugate, which induces a symmetry on the spectral distribution, $\psi_k(\boldsymbol{\xi}) = \psi_k(-\boldsymbol{\xi})$. For instance, the SM kernel has the symmetric Gaussian mixture spectral distribution
\begin{align}
 \psi_{k_{SM}}(\boldsymbol{\xi}) &= \dfrac{1}{2}\sum_{q=1}^Q\alpha_q(\mathcal{N}(\boldsymbol{\xi}|\boldsymbol{\omega}_q, {\boldsymbol{\Sigma}}_q)+\mathcal{N}(\boldsymbol{\xi}|-\boldsymbol{\omega}_q, {\boldsymbol{\Sigma}}_q)).
\end{align}

\subsection{Locally stationary kernels}

As a generalization of stationary kernels, locally stationary kernels \citep{silverman1957locally} are a simple yet largely unexplored concept in machine learning. A locally stationary kernel is a stationary kernel multiplied by a sliding power factor:
\begin{align}
 k_{LS}(\mathbf{x},\mathbf{x}') &= k_1\left(\dfrac{\mathbf{x}+\mathbf{x}'}{2}\right)k_2(\mathbf{x}-\mathbf{x}'),
\end{align}
where $k_1: \mathbb{R}^D\mapsto\mathbb{R}_{\geq 0}$ is an arbitrary nonnegative function and $k_2:\mathbb{R}^D\mapsto\mathbb{C}$ is a stationary kernel. $k_1$ is a function of the \emph{centroid} of $\mathbf{x}$ and $\mathbf{x}'$, describing the scale of covariance on a global structure, while $k_2$, as a stationary covariance, describes the local structure \citep{genton2001classes}. It is straightforward to see that locally stationary kernels reduce to stationary kernels when $k_1$ is constant.

Integrable locally stationary kernels are of particular interest because they are harmonizable with a GSD. Consider a locally stationary Gaussian kernel (LSG) defined as an SE kernel multiplied by a Gaussian density on the centroid $\widetilde{\mathbf{x}} = (\mathbf{x}+\mathbf{x}')/2$.
Its GSD can be obtained using the generalized Wiener--Khintchin relations \citep{silverman1957locally}:
\begin{align}
 k_{\text{LSG}}(\mathbf{x}, \mathbf{x}') &= e^{-2\pi^2\widetilde{\mathbf{x}}^\top{\boldsymbol{\Sigma}}_1\widetilde{\mathbf{x}}}e^{-2\pi^2\boldsymbol{\tau}^\top{\boldsymbol{\Sigma}}_2\boldsymbol{\tau}},\\
 S_{k_{\text{LSG}}}(\boldsymbol{\omega}, \boldsymbol{\xi}) &= \mathcal{N}\left(\left.\dfrac{\boldsymbol{\omega}+\boldsymbol{\xi}}{2}\right\vert \mathbf{0}, {\boldsymbol{\Sigma}}_2\right)\mathcal{N}\left(\left.\boldsymbol{\omega}-\boldsymbol{\xi}\right\vert \mathbf{0}, {\boldsymbol{\Sigma}}_1\right).
\end{align}

\subsection{Interpreting spectral kernels}

While the spectral distribution of a stationary kernel can easily be interpreted as a `spectrum', the analogy does not directly apply to harmonizable kernels. In this section, we introduce the Wigner transform \citep{flandrin1998time}, which adds interpretability to kernels with spectral representations.
\begin{definition}
The \emph{Wigner distribution function} (WDF) of a kernel $k(\cdot,\cdot):\mathbb{R}^D\times\mathbb{R}^D\mapsto\mathbb{C}$ is the function $W_k:\mathbb{R}^D\times\mathbb{R}^D\mapsto\mathbb{R}$:
\begin{align}
 W_k(\mathbf{x}, \boldsymbol{\omega}) &= \int_{\mathbb{R}^D} k\left(\mathbf{x}+\dfrac{\boldsymbol{\tau}}{2}, \mathbf{x}-\dfrac{\boldsymbol{\tau}}{2}\right)e^{-2i\pi\boldsymbol{\omega}^\top\boldsymbol{\tau}} \d{\boldsymbol{\tau}}.
\end{align}
\end{definition}

The Wigner transform first rewrites the kernel $k$ as a function of the input centroid $(\mathbf{x}+\mathbf{x}')/2$ and the lag $\mathbf{x}-\mathbf{x}'$, and then takes a Fourier transform in the lag. The Wigner transform is invertible, so the WDF carries the same information as the kernel itself. Given its domain, we can view the WDF as a `spectrogram' showing the relation between inputs and frequencies. Converting an arbitrary kernel into its Wigner distribution thus sheds light on the frequency structure of the kernel (see Figure \ref{fig:fig1}).\par
The WDFs of locally stationary kernels adhere to the intuitive notion of local stationarity, where frequencies remain constant on a local scale. Take the locally stationary Gaussian kernel $k_{\text{LSG}}$ as an example:
\begin{align}
 W_{k_{\text{LSG}}}(\mathbf{x},\boldsymbol{\omega}) &= \mathcal{N}(\boldsymbol{\omega}|\mathbf{0}, {\boldsymbol{\Sigma}}_2)\, e^{-2\pi^2\mathbf{x}^\top{\boldsymbol{\Sigma}}_1\mathbf{x}}.
\end{align}

\section{HARMONIZABLE MIXTURE KERNEL}
In this section we propose the novel \emph{harmonizable mixture kernel}, a family of kernels dense in the harmonizable covariance functions.
We present the kernel in an intentionally concise form, and refer the reader to Section 2 in the Supplement for a full derivation.

\subsection{Kernel form and spectral representations}

The \emph{harmonizable mixture kernel} (HMK) is defined with an additive structure:
\begin{align}
 k_{\text{HM}}(\mathbf{x},\mathbf{x}')&=\sum_{p=1}^P k_p(\mathbf{x}-\mathbf{x}_p, \mathbf{x}'-\mathbf{x}_p),\\
 k_p(\mathbf{x}, \mathbf{x}') &= k_{\text{LSG}}(\mathbf{x}\circ\boldsymbol{\gamma}_p, \mathbf{x}'\circ\boldsymbol{\gamma}_p)\phi_p(\mathbf{x})^\top\mathbf{B}_p\phi_p(\mathbf{x}'),
\end{align}
where $P\in\mathbb{N}_+$ is the number of centers, $\left(\phi_p(\mathbf{x})\right)_{q=1}^{Q_p}=e^{2i\pi\boldsymbol{\mu}_{pq}^\top\mathbf{x}}$ are sinusoidal feature maps, $\mathbf{B}_p\succeq\mathbf{0}_{Q_p}$ are spectral amplitude matrices, $\boldsymbol{\gamma}_p\in\mathbb{R}^D_+$ are input scalings, $\mathbf{x}_p\in\mathbb{R}^D$ are input shifts, and $\boldsymbol{\mu}_{pq}\in\mathbb{R}^D$ are frequencies. It is easy to verify that $k_{\text{HM}}$ is a valid kernel, since each $k_p$ is the product of an LSG kernel and an inner product of a finite basis expansion of sinusoidal functions.\par

HMKs have closed-form spectral representations, such as the \emph{generalized spectral density} (see Section 2 in the Supplement for a detailed derivation):
\begin{align}
 S_{k_{\text{HM}}}(\boldsymbol{\omega}, \boldsymbol{\xi}) &= \sum_{p=1}^P S_{k_p}(\boldsymbol{\omega}, \boldsymbol{\xi})e^{-2i\pi\mathbf{x}_p^\top(\boldsymbol{\omega}-\boldsymbol{\xi})},\\
 S_{k_p}(\boldsymbol{\omega}, \boldsymbol{\xi}) &= \dfrac{1}{\prod_{d=1}^D\gamma_{pd}^2}\sum_{1\leq i, j \leq Q_p} b_{pij}S_{pij}(\boldsymbol{\omega}, \boldsymbol{\xi}),\\
 S_{pij}(\boldsymbol{\omega}, \boldsymbol{\xi})&=S_{k_{\text{LSG}}}((\boldsymbol{\omega}-\boldsymbol{\mu}_{pi})\oslash\boldsymbol{\gamma}_p, (\boldsymbol{\xi}-\boldsymbol{\mu}_{pj})\oslash\boldsymbol{\gamma}_p).
\end{align}
The \emph{Wigner distribution function} can be obtained in a similar fashion:
\begin{align}
 W_{k_{\text{HM}}}(\mathbf{x},\boldsymbol{\omega})&=\sum_{p=1}^P W_{k_p}(\mathbf{x}-\mathbf{x}_p, \boldsymbol{\omega}),\\
 W_{k_p}(\mathbf{x},\boldsymbol{\omega}) &= \dfrac{1}{\prod_{d=1}^D\gamma_{pd}}\sum_{1\leq i,j\leq Q_p} W_{pij}(\mathbf{x},\boldsymbol{\omega}),\\
 W_{pij}(\mathbf{x},\boldsymbol{\omega}) &= W_{k_{\text{LSG}}}\left(\mathbf{x}\circ\boldsymbol{\gamma}_p, \left(\boldsymbol{\omega}-(\boldsymbol{\mu}_{pi}+\boldsymbol{\mu}_{pj})/2\right)\oslash\boldsymbol{\gamma}_p\right)\notag\\
 &\times\cos(2\pi(\boldsymbol{\mu}_{pi}-\boldsymbol{\mu}_{pj})^\top\mathbf{x}).
\end{align}
The kernel form, GSD and WDF all take a normal density form. It is straightforward to see that $S_{k_{\text{HM}}}$ is PSD, and that $k_{\text{HM}}(-\mathbf{x}, -\mathbf{x}')$ is the GSD of $S_{k_{\text{HM}}}$. A real-valued kernel $k_r$ is obtained by averaging with its complex conjugate, which induces the symmetries $W_{k_r}(\mathbf{x},\boldsymbol{\omega})=W_{k_r}(\mathbf{x},-\boldsymbol{\omega})$ and $S_{k_r}(\boldsymbol{\omega}, \boldsymbol{\xi}) = S_{k_r}(-\boldsymbol{\omega}, -\boldsymbol{\xi})$.
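To make the structure concrete, consider an illustrative special case of our own (not a configuration used in the experiments): a single center $P=1$ with one frequency $Q_1=1$, scalar amplitude $b>0$, and shift $\mathbf{x}_1=\mathbf{0}$. Reading the inner product as conjugate-symmetric, as positive definiteness requires, the kernel reduces to an LSG envelope modulating a single complex sinusoid,
\begin{align}
 k_{\text{HM}}(\mathbf{x},\mathbf{x}') &= b\, k_{\text{LSG}}(\mathbf{x}\circ\boldsymbol{\gamma}, \mathbf{x}'\circ\boldsymbol{\gamma})\, e^{2i\pi\boldsymbol{\mu}^\top(\mathbf{x}-\mathbf{x}')},
\end{align}
whose symmetrized real part is $b\,k_{\text{LSG}}(\mathbf{x}\circ\boldsymbol{\gamma}, \mathbf{x}'\circ\boldsymbol{\gamma})\cos(2\pi\boldsymbol{\mu}^\top(\mathbf{x}-\mathbf{x}'))$: a Gaussian-localized pattern oscillating at frequency $\boldsymbol{\mu}$, matching a single Gaussian bump of the GSD centered at $(\boldsymbol{\mu},\boldsymbol{\mu})$.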
\subsection{Expressiveness of HMK}
Similar to the construction of the \emph{generalized spectral kernels}, we can construct a generalized version $k_h$ in which $k_{\text{LSG}}$ is replaced by $k_{\text{LS}}$, a locally stationary kernel with a GSD.
\begin{theorem} Given a continuous, integrable kernel $k_{\text{LS}}$ with a valid \emph{generalized spectral density}, the harmonizable mixture kernel
\begin{align}
 k_h(\mathbf{x},\mathbf{x}')&=\sum_{p=1}^P k_p(\mathbf{x}-\mathbf{x}_p, \mathbf{x}'-\mathbf{x}_p),\\
 k_p(\mathbf{x}, \mathbf{x}')&=k_{\text{LS}}(\mathbf{x}\circ\boldsymbol{\gamma}_p,\mathbf{x}'\circ\boldsymbol{\gamma}_p)\phi_p(\mathbf{x})^\top\mathbf{B}_p\phi_p(\mathbf{x}'),
\end{align}
is dense in the family of harmonizable covariances with respect to pointwise convergence of functions. Here $P\in\mathbb{N}_+$, $(\phi_p(\mathbf{x}))_q=e^{2i\pi\boldsymbol{\mu}_{pq}^\top\mathbf{x}}$, $q=1,\hdots, Q_p$, $\boldsymbol{\gamma}_p\in\mathbb{R}_+^D$, $\mathbf{x}_p\in\mathbb{R}^D$, $\boldsymbol{\mu}_{pq}\in\mathbb{R}^D$, and the $\mathbf{B}_p$ are positive definite Hermitian matrices.
\end{theorem}
\begin{proof}
See Section 3 in the supplementary materials.
\end{proof}

\section{VARIATIONAL FOURIER FEATURES}

In this section we propose variational inference for Gaussian process models equipped with harmonizable kernels.

We assume a dataset of $n$ inputs $X = \{\mathbf{x}_i\}_{i=1}^n$ and outputs $\mathbf{y} = \{ y_i\} \in \mathbb{R}^n$ observed from some function $f(\mathbf{x})$ with a Gaussian observation model:
\begin{align}
y = f(\mathbf{x}) + \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, \sigma_y^2). \label{eq:noise}
\end{align}

\subsection{Gaussian processes}

Gaussian processes (GPs) are a family of Bayesian models that characterise distributions over functions \citep{rasmussen2006}. We assume a zero-mean Gaussian process prior on a latent function $f(\mathbf{x}) \in \mathbb{R}$ over vector inputs $\mathbf{x} \in \mathbb{R}^D$,
\begin{align}
f(\mathbf{x}) &\sim \mathcal{GP}( 0, K(\mathbf{x},\mathbf{x}')),
\end{align}
which defines a prior distribution over function values $f(\mathbf{x})$ with mean $\mathbb{E}[ f(\mathbf{x})] = 0$ and covariance
\begin{align}
\cov[ f(\mathbf{x}), f(\mathbf{x}')] &= K(\mathbf{x},\mathbf{x}').
\end{align}
A GP prior specifies that for any collection of $n$ inputs $X$, the corresponding function values $\mathbf{f} = ( f(\mathbf{x}_1), \ldots, f(\mathbf{x}_n))^\top \in \mathbb{R}^n$ are coupled by following a multivariate normal distribution
$\mathbf{f} \sim \mathcal{N}(\mathbf{0}, \mathbf{K}_{ff}),$
where $\mathbf{K}_{ff} = (K(\mathbf{x}_i, \mathbf{x}_j))_{i,j=1}^n \in \mathbb{R}^{n \times n}$ is the kernel matrix over input pairs. The key property of GPs is that the outputs $f(\mathbf{x})$ and $f(\mathbf{x}')$ correlate according to the similarity of their inputs $\mathbf{x}$ and $\mathbf{x}'$, as defined by the kernel $K(\mathbf{x},\mathbf{x}') \in \mathbb{R}$.
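To ground these definitions, the following NumPy sketch (our own illustration, assuming a simplified one-dimensional HMK with $P=1$, $Q_1=1$ and scalar LSG variances; not the authors' implementation) builds the Gram matrix of the real-valued single-component kernel from Section 3 and draws samples from the corresponding GP prior:
\begin{verbatim}
import numpy as np

def lsg_kernel(x1, x2, s1=0.5, s2=3.0):
    # LSG kernel: Gaussian envelope on the centroid (variance s1)
    # times an SE factor on the lag (variance s2); s1 <= 4*s2
    # keeps the kernel positive definite.
    centroid, lag = 0.5 * (x1 + x2), x1 - x2
    return (np.exp(-2 * np.pi**2 * s1 * centroid**2)
            * np.exp(-2 * np.pi**2 * s2 * lag**2))

def hmk_kernel(x1, x2, b=1.0, mu=1.5, gamma=1.0, shift=0.0):
    # Real part of a P=1, Q=1 harmonizable mixture kernel: a
    # shifted, rescaled LSG envelope modulated at frequency mu.
    x1, x2 = x1 - shift, x2 - shift
    return (b * lsg_kernel(gamma * x1, gamma * x2)
            * np.cos(2 * np.pi * mu * (x1 - x2)))

x = np.linspace(-2.0, 2.0, 200)
K = hmk_kernel(x[:, None], x[None, :])               # Gram matrix
L = np.linalg.cholesky(K + 1e-6 * np.eye(len(x)))    # jitter
samples = L @ np.random.randn(len(x), 3)             # 3 prior draws
\end{verbatim}
The draws oscillate at roughly the frequency $\mu$ and decay away from the kernel center, mirroring the Wigner picture of Section 2.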
\subsection{Variational inference with inducing features}

In this section, we introduce variational inference for sparse GPs in an inter-domain setting. Consider a GP prior $f(\mathbf{x})\sim\mathcal{GP}(0, k)$ and a valid linear transform $\mathcal{L}$ projecting $f$ to another GP $\mathcal{L}_f(\mathbf{z})\sim\mathcal{GP}(0, k')$.

We begin by \emph{augmenting} the Gaussian process with $m < n$ inducing variables $u_j = \mathcal{L}_f(\mathbf{z}_j)$ using a Gaussian model. The $\mathbf{z}_j$ are \emph{inducing features} placed on the domain of $\mathcal{L}_f(\mathbf{z})$, with prior $p(\u) = \mathcal{N}( \u | \mathbf{0}, \mathbf{K}_{uu})$ and conditional model \citep{hensman2015scalable}
\begin{align}
 p(\mathbf{f} | \u) &= \mathcal{N}( \mathbf{A} \u, \mathbf{K}_{ff} - \mathbf{A} \mathbf{K}_{uu} \mathbf{A}^\dag), \label{eq:interp}
\end{align}
where $\mathbf{A} = \mathbf{K}_{fu} \mathbf{K}_{uu}^{-1}$, and $\mathbf{A}^\dag$ denotes the Hermitian transpose of $\mathbf{A}$, allowing for complex GPs.
The matrix $\mathbf{K}_{uu}$ is the $m \times m$ covariance between the inducing variables, and $\mathbf{K}_{fu}$ is the cross-covariance induced by $\mathcal{L}$, $\left(\mathbf{K}_{fu}\right)_{is} = \cov(f(\mathbf{x}_i), \mathcal{L}_f(\mathbf{z}_s))$. Next, we define a variational approximation $q(\u) = \mathcal{N}( \u | \mathbf{m}, \mathbf{S})$, which, combined with the Gaussian interpolation model \eqref{eq:interp}, yields the marginal
\begin{align}
 q(\mathbf{f})
 &= \mathcal{N}( \mathbf{f} | \mathbf{A} \mathbf{m}, \mathbf{K}_{ff} - \mathbf{A} (\mathbf{K}_{uu} - \mathbf{S}) \mathbf{A}^\dag),
\end{align}
with free variational mean $\mathbf{m} \in \mathbb{R}^m$ and variational covariance $\mathbf{S} \in \mathbb{R}^{m \times m}$ to be optimised. Finally, variational inference \citep{blei2016} gives an evidence lower bound (ELBO) for augmented Gaussian processes:
\begin{align}
 \hspace{-2.5mm}\log p(\mathbf{y}) & \ge \sum_{i=1}^n \mathbb{E}_{q(f_i)} \log p(y_i | f_i) - \mathrm{KL}[ q(\u) \,||\, p(\u)]. \label{eq:elbo}
\end{align}


\subsection{Fourier transform of a harmonizable GP}

In this section, we compute cross-covariances between a GP and its Fourier transform. Consider a GP prior $f\sim\mathcal{GP}(0,k)$ where the kernel $k$ is harmonizable with a GSD $S_k$, and let $\hat{f}$ be the Fourier transform of $f$,
\begin{align}
 \hat{f}(\boldsymbol{\omega}) &\triangleq \int_{\mathbb{R}^D} f(\mathbf{x})e^{-2i\pi\boldsymbol{\omega}^\top \mathbf{x}}\d{\mathbf{x}}.
\end{align}
The validity of this setting is easily verified, because $f$ is square integrable in expectation:
\begin{align}
 \mathbb{E}\left\{\int_{\mathbb{R}^D} |f(\mathbf{x})|^2\d{\mathbf{x}}\right\} &= \int_{\mathbb{R}^D} k(\mathbf{x},\mathbf{x}) \d{\mathbf{x}} < \infty.
\end{align}

We can therefore derive the cross-covariances
\begin{align}
 \cov(\hat{f}(\boldsymbol{\omega}), f(\mathbf{x}))
 &= \int_{\mathbb{R}^D} k(\t,\mathbf{x}) e^{-2 i \pi\boldsymbol{\omega}^\top \t} \d{\t}, \\
 \cov(\hat{f}(\boldsymbol{\omega}), \hat{f}(\boldsymbol{\xi}))
 &= S_k(\boldsymbol{\omega}, \boldsymbol{\xi}).
\end{align}
The above derivation is valid for any harmonizable kernel with a GSD.
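For completeness, the second identity can be unpacked as follows (a routine calculation under the convention $\cov(a,b) = \mathbb{E}[a\bar{b}]$ implicit in the first identity):
\begin{align}
 \cov(\hat{f}(\boldsymbol{\omega}), \hat{f}(\boldsymbol{\xi}))
 &= \int_{\mathbb{R}^D}\int_{\mathbb{R}^D} k(\t, \mathbf{s})\, e^{-2i\pi(\boldsymbol{\omega}^\top\t - \boldsymbol{\xi}^\top\mathbf{s})} \d{\t}\,\d{\mathbf{s}}
 = S_k(\boldsymbol{\omega}, \boldsymbol{\xi}),
\end{align}
that is, the double Fourier transform simply inverts the generalized spectral representation of $k$ from the Definition in Section 2.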
The Fourier transform of $\mathcal{GP}(0,k)$ is thus a complex-valued GP with kernel $S_k$, which correlates with the original GP.\par

For a harmonizable, integrable kernel $k$, we can therefore construct the inter-domain sparse GP model of Section 4.2 by plugging in $\mathcal{L}_f = \hat{f}$.

\begin{figure*}[t]
\begin{center}
 \includegraphics[width=\textwidth, clip=true]{plots/banana1.pdf}
\end{center}
 \caption{Sparse GP classification on the banana dataset. The model is learned with an HMK with $P=4$ components; 2 inducing frequencies per component constitute a total of $2 \times 4$ inducing frequencies.}
 \label{figure:gpc}
\end{figure*}

\subsection{Variational Fourier features of the harmonizable mixture kernel}
HMK belongs to the kernel family discussed in Section 4.3, but we can additionally exploit the additive structure of an HMK $k_{HM} = \sum_{p=1}^P k_p(\mathbf{x}-\mathbf{x}_p, \mathbf{x}'-\mathbf{x}_p)$. A GP with kernel $k_{HM}$ can be decomposed into $P$ independent GPs:
\begin{align}
 f(\mathbf{x}) &= \sum_{p=1}^P f_p(\mathbf{x}-\mathbf{x}_p),\\
 f_p(\mathbf{x}) &\sim \mathcal{GP}(0, k_p(\mathbf{x}, \mathbf{x}')).
\end{align}
Given this formulation, we can derive \emph{variational Fourier features} with inducing frequencies conditioned on one $f_p$. For the $p^{th}$ component, we have $m_p$ inducing frequencies $(\boldsymbol{\omega}_{p1}, \ldots, \boldsymbol{\omega}_{pm_p})$ and $m_p$ inducing values $(u_{p1}, \cdots, u_{pm_p})$. We can compute the inter-domain covariances in a similar fashion:
\begin{align}
 \mathbf{K}_{fu}(\boldsymbol{\omega}_{qj}, \mathbf{x}) &\triangleq \cov(f(\mathbf{x}), u_{qj}) \label{eq:kfu} \\
 &= \sum_{p=1}^P\cov(f_p(\mathbf{x}-\mathbf{x}_p), u_{qj}) \notag \\
 &= \cov(f_q(\mathbf{x}-\mathbf{x}_q), \hat{f}_q(\boldsymbol{\omega}_{qj})). \notag
\end{align}
Similarly, we compute the entries of the matrix $\mathbf{K}_{uu}$:
\begin{align}
 \mathbf{K}_{uu}(\boldsymbol{\omega}_{pi}, \boldsymbol{\omega}_{qj}) \triangleq \cov(u_{pi}, u_{qj}) &= \begin{cases}
 S_p(\boldsymbol{\omega}_{pi}, \boldsymbol{\omega}_{qj}), & p=q,\\
 0, & p\neq q.
 \end{cases} \label{eq:kuu}
\end{align}
The matrix $\mathbf{K}_{uu}$ thus has a block-diagonal structure, which allows for faster matrix inversion. The variational Fourier features are then completed by plugging the entries of $\mathbf{K}_{fu}$ \eqref{eq:kfu} and $\mathbf{K}_{uu}$ \eqref{eq:kuu} into the evidence lower bound \eqref{eq:elbo}.\par
\subsection{Connection to previous work}
In this section we demonstrate that an inter-domain stationary GP with a windowed Fourier transform \citep{lazaro2009inter} is equivalent to a rescaled VFF with a tweaked kernel. GPs with stationary kernels do not have a valid Fourier transform; therefore, previous attempts at using Fourier transforms of GPs have been accompanied by a window function:
\begin{align}
 \mathcal{L}_f(\boldsymbol{\omega}) &= \int_{\mathbb{R}^D} f(\mathbf{x}) w(\mathbf{x}) e^{-2i\pi\boldsymbol{\omega}^\top \mathbf{x}} \d{\mathbf{x}}.
\end{align}
The windowing function $w(\mathbf{x})$ can be a soft Gaussian window $w(\mathbf{x}) = \mathcal{N}(\mathbf{x}|\boldsymbol{\mu},{\boldsymbol{\Sigma}})$ \citep{lazaro2009inter} or a hard interval window $w(x)=\mathbb{I}_{[a\leq x\leq b]}e^{2i\pi a}$ \citep{hensman2017variational}.
The windowing approach shares the caveat that the frequency space is blurred, since the windowed transform is only an approximation of the true Fourier transform \citep{lazaro2009inter}.\par
Consider $f\sim\mathcal{GP}(0, k)$ with $k$ a stationary kernel and $w(\mathbf{x}) = \mathcal{N}(\mathbf{x}|\boldsymbol{\mu},{\boldsymbol{\Sigma}})$. Then $g(\mathbf{x}) = w(\mathbf{x})f(\mathbf{x}) \sim\mathcal{GP}(0, w(\mathbf{x})w(\mathbf{x}')k(\mathbf{x}-\mathbf{x}'))$, and it is easy to verify that the kernel of $g(\mathbf{x})$ is locally stationary. The following relations hold between the cross-covariances:
\begin{align}
 \cov(f(\mathbf{x}), \mathcal{L}_f(\boldsymbol{\omega})) &= \dfrac{\cov(g(\mathbf{x}), \hat{g}(\boldsymbol{\omega}))}{w(\mathbf{x})},\\
 \cov(\mathcal{L}_f(\boldsymbol{\omega}), \mathcal{L}_f(\boldsymbol{\xi})) &= \cov(\hat{g}(\boldsymbol{\omega}), \hat{g}(\boldsymbol{\xi})).
\end{align}
Therefore, windowed inter-domain GPs are equivalent to rescaled GPs with a tweaked kernel.
\section{EXPERIMENTS}
In this section, we experiment with harmonizable mixture kernels on kernel recovery, GP classification and GP regression. We use a simplified version of the harmonizable kernel where the two matrices of the locally stationary $k_{\text{LSG}}$ are diagonal: ${\boldsymbol{\Sigma}}_1=\mbox{diag}(\sigma_d^2)$, ${\boldsymbol{\Sigma}}_2=\lambda^2 I$. See Section 6 in the supplement for more details.
\subsection{Kernel recovery}
We demonstrate the expressiveness of HMK by using it to recover certain non-stationary kernels. We choose the non-stationary \emph{generalized spectral mixture} (GSM) kernel \citep{remes2017non} and the covariance function of a time-inverted fractional Brownian motion (IFBM):
\begin{align}
 k_{\text{GSM}}(x,x') &= w(x)w(x') k_{\text{Gibbs}}(x, x')\cos(2\pi(\mu(x)x-\mu(x')x')), \notag\\
 k_{\text{Gibbs}}(x, x') &= \sqrt{\dfrac{2l(x)l(x')}{l(x)^2+l(x')^2}}\exp\left(-\dfrac{(x-x')^2}{l(x)^2+l(x')^2}\right), \notag\\
 k_{\text{IFBM}}(t,s) &= \dfrac{1}{2}\left(\dfrac{1}{t^{2h}}+\dfrac{1}{s^{2h}} - \left\vert\dfrac{1}{t}-\dfrac{1}{s}\right\vert^{2h}\right),\notag
\end{align}
where $s, t \in (0.1, 1.1]$ and $x, x' \in [-1, 1]$.
The hyperparameters of $k_{\text{HM}}$ are randomly initialized and optimized with stochastic gradient descent.

\begin{figure}[t]
 \centering
 \includegraphics[width=\columnwidth, clip=true]{plots/approx4-crop.pdf}
 \caption{Kernel recovery experiment with true kernels (left) against HM kernel approximations (right).}
 \label{fig:ifbm_rec}
\end{figure}

Both kernels can be recovered almost perfectly, with mean squared errors of $0.0033$ and $0.0008$. The result indicates that we can use the GSD and the Wigner distribution of the approximating HM kernel to interpret the GSM kernel (see Section 5 in the supplementary materials).

\begin{figure}[t]
 \centering
 \includegraphics[width=\columnwidth, clip=true]{plots/solar4-crop.pdf}
 \caption{Sparse GP regression on the solar irradiance dataset.}
 \label{figure:gpr}
\end{figure}

\subsection{GP classification with banana dataset}

In this section, we show the effectiveness of variational Fourier features in GP classification with HMK. We use an HMK with $P=4$ components to classify the banana dataset, and compare SVGP with inducing points (IP) \citep{hensman2015scalable} against SVGP with variational Fourier features (VFF). The model parameters are learned by alternating optimization rounds of natural gradients for the variational parameters and the Adam optimizer for the other parameters \citep{salimbeni2018natural}.
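This alternating scheme is straightforward to express in a modern GP library. The sketch below is our own (not the authors' code): it assumes GPflow 2's standard natural-gradient pattern, toy data in place of the banana dataset, and a placeholder SE kernel, since an HMK with inducing frequencies would require a custom \texttt{gpflow.kernels.Kernel} subclass.
\begin{verbatim}
import numpy as np
import tensorflow as tf
import gpflow

# Toy binary classification data standing in for the banana set.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 2))
Y = (np.sum(X**2, axis=1, keepdims=True) < 1.0).astype(float)
Z = X[:20].copy()  # initial inducing inputs

model = gpflow.models.SVGP(
    gpflow.kernels.SquaredExponential(),      # placeholder for HMK
    gpflow.likelihoods.Bernoulli(),
    gpflow.inducing_variables.InducingPoints(Z),
    num_data=len(X))

# Variational parameters are updated by natural gradients only.
gpflow.set_trainable(model.q_mu, False)
gpflow.set_trainable(model.q_sqrt, False)

natgrad = gpflow.optimizers.NaturalGradient(gamma=0.1)
adam = tf.optimizers.Adam(learning_rate=0.01)
loss = model.training_loss_closure((X, Y))

for _ in range(1000):
    natgrad.minimize(loss, [(model.q_mu, model.q_sqrt)])
    adam.minimize(loss, model.trainable_variables)
\end{verbatim}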
Figure \ref{figure:gpc} shows the decision boundaries of the two methods as the number of inducing variables grows.
We experiment with model complexities from 6 to 24 inducing points for IP, and from 2 to 8 inducing frequencies per HMK component for VFF. The centers of the HMK (red triangles) spread to support the data distribution. At the same parameter counts, the IP method is slightly more complex than VFF in terms of nonzero entries in the variational parameters.

The VFF method recovers roughly the correct decision boundary even with a small number of inducing frequencies, and converges faster to the final decision boundaries as the number of inducing frequencies increases.

\subsection{GP regression with solar irradiance}

In this section, we demonstrate the effectiveness of HMK for interpolation on the non-stationary solar irradiance dataset. We run sparse GP regression with the squared exponential, spectral mixture and harmonizable mixture kernels, and show the predicted mean and 95\% confidence intervals for each model (see Figure \ref{figure:gpr}).

We use the sparse GP regression of \citet{titsias2009variational} with 50 inducing points, marked on the x axis. The SE kernel cannot capture the periodic pattern and overestimates the smoothness of the signal. The SM kernel fits the training data well, but misidentifies frequencies on the first and fourth intervals of the test set.

For sparse GP with HMK, we use the same framework with the variational lower bound adjusted for VFF. The model extrapolates better thanks to the added flexibility of non-stationarity, and the inducing frequencies aggregate near the learned frequencies. Both the first and the last test intervals are fitted well. The Wigner distribution of the optimised HM kernel, together with the inducing frequencies, is shown in Figure \ref{figure:gpr}d.


\section{CONCLUSION}
In this paper, we extend the expressive power of Gaussian processes by proposing the harmonizable mixture kernel, a non-stationary kernel family spanning the wide class of harmonizable covariances. Such kernels can be used as an expressive tool for GP models. We also proposed variational Fourier features, an inter-domain inference framework that serves as a drop-in replacement for standard sparse GP inference. This work bridges previous research on spectral representations of kernels and on sparse Gaussian processes.\par
Despite its expressiveness, one may brand the parametric form of HMK as not fully Bayesian, since it contradicts the nonparametric nature of GPs. A fully Bayesian approach would be to place a nonparametric prior over harmonizable mixture kernels, representing the uncertainty of the kernel form \citep{shah2014student}.

\bibliographystyle{plainnat}