diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzafxy" "b/data_all_eng_slimpj/shuffled/split2/finalzzafxy" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzafxy" @@ -0,0 +1,5 @@ +{"text":"\\section{Submission of conference papers to ICLR 2022}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Method}\n\n\\begin{figure}\n\\vspace{-3mm}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{figures\/CLIP_framework.pdf}\n\\vspace{-6mm}\n\\caption{Overall architecture of FILIP, a dual-stream model with Transformer-based image and text encoders. On top of the image and text encoders, the representations of textual tokens and visual tokens are linearly projected to the multi-modal joint space. A novel fine-grained contrastive learning equipped with cross-modal late interaction is proposed, which uses a token-wise maximum similarity between visual and textual tokens. }\n\\label{fig:framework}\n\\end{center}\n\\vspace{-4mm}\n\\end{figure}\n\n\\label{sec:model_overview}\n\nIn this paper, we propose a new cross-modal pre-training model that excels in fine-grained interaction between image encoder and text encoder for mining more detailed semantic alignment, named as FILIP, as shown in Figure \\ref{fig:framework}. \nParticularly, FILIP is a dual-stream model with Transformer-based image and text encoders.\nFor the visual modality, the image encoder is a Vision Transformer \\citep{dosovitskiy2020image} which takes the concatenation of an extra [CLS] token embedding and linearly projected image patches as input.\nFor the textual modality, following \\cite{radford2021learning}, we use the lower-cased byte pair encoding (BPE) \\citep{sennrich2016neural}\nwith a vocabulary size of 49,408 to tokenize the text. \nEach text sequence starts with [BOS] token and ends with [EOS] token.\nAfter the word embedding layer, the token embeddings are fed into a modified decoder-only Transformer model as in \\citep{radford2019language}. \nOn top of the image and text encoders, the representations of textual tokens and visual tokens are linearly projected to the multi-modal common space, and are separately L2-normalized. \nDifferent from existing dual-stream models (e.g., CLIP and ALIGN) which models cross-modal interaction via only the global features of the entire image and text sequence, \nwe introduce a novel fine-grained contrastive learning objective equipped with cross-modal late interaction which takes into account the fine-grained interaction between image patches and textual tokens, detailed in Section \\ref{sec:late_interaction}.\n\n\n\n\n\\vspace{-1mm}\n\\subsection{Fine-grained Contrastive Learning}\n\\label{sec:late_interaction}\n\nContrastive representation learning has recently been found to learn better representations than its predictive counterpart \nin both visual \\citep{tian2020contrastive} and vision-language cross-modal pre-training \\citep{radford2021learning}.\nUnder a general formulation of cross-modal contrastive learning \\citep{radford2021learning}, we want to learn encoders $f_\\theta$ for image data $\\mathcal{I}$ and $g_\\phi$ for text data $\\mathcal{T}$ such that, given an image $ \\vx_{img} \\in {\\mathcal{I}}$, and a text $\\vx_{text} \\in {\\mathcal{T}}$, the encoded representations $f_\\theta(\\vx_{img})$ and $g_\\phi(\\vx_{text})$ are close if they are related and far apart if not, under a distance metric. 
\nIn each training batch, we sample $b$ image-text pairs $\\{\\vx_{img}_k, \\vx_{text}_k\\}_{k=1}^b$.\nFor image $\\vx_{img}_k$ in image-text pair $\\{\\vx_{img}_k, \\vx_{text}_k\\}$, $\\vx_{text}_k$ is its positive, while the other texts \nwill be used as in-batch negatives. \nThe image-to-text contrastive loss ${\\mathcal{L}}^I_k$ for $\\vx_{img}_k$ can then be formulated as\n\\[\n{\\mathcal{L}}^I_k (\\vx_{img}_k, \\{\\vx_{text}_j\\}_{j=1}^b) = -\\frac{1}{b} \\log \\frac{\\exp(s_{k,k}^I)}{\\sum_{j} \\exp(s_{k,j}^I)},\n\\]\nwhere $s_{k,j}^I$ denotes the similarity of the $k$-th image to the $j$-th text.\nSimilarly, the text-to-image contrastive loss for $\\vx_{text}_k$ is\n\\[\n{\\mathcal{L}}^T_k (\\vx_{text}_k, \\{\\vx_{img}_j\\}_{j=1}^b) = -\\frac{1}{b} \\log \\frac{\\exp(s_{k,k}^T)}{\\sum_{j} \\exp(s_{j,k}^T)}.\n\\]\nThe total loss of this mini-batch can be represented by \n\\begin{equation}\n{\\mathcal{L}} = \\frac{1}{2}\\sum\\limits_{k=1}^b ({\\mathcal{L}}^I_k + {\\mathcal{L}}^T_k). \\label{eq:contrastive_loss}\n\\end{equation}\n\n\\vspace{-1mm}\n\\subsubsection{Cross-modal Late Interaction}\n\\label{sec:Cross-modal-late}\nFrom the contrastive loss (\\ref{eq:contrastive_loss}), the cross-modal interaction is reflected in how we compute the similarities $s_{i,j}^I$ and $s_{i,j}^T$ for the $i$-th image and $j$-th text.\nPrevious methods like CLIP~\\citep{radford2021learning} and ALIGN~\\citep{jia2021scaling} simply encode each image or text separately to a global feature, i.e., $f_\\theta(\\vx_{img}_i) \\in \\mathbb{R}^{d}$ and $g_\\phi(\\vx_{text}_j) \\in \\mathbb{R}^{d}$, and compute these two similarities as\n\\begin{equation}\n s^I_{i,j} = s^T_{i,j} = f_\\theta(\\vx_{img}_i)^\\top g_\\phi(\\vx_{text}_j), \\label{eq:orig_loss}\n\\end{equation}\nneglecting finer-grained interactions (e.g., word-patch alignment) between the two modalities.\nTo alleviate this problem, while simultaneously maintaining the training and inference efficiency of dual-stream models, we apply a cross-modal late interaction inspired by \\cite{khattab2020colbert} to model the token-wise cross-modal interaction.\n\nSpecifically, \ndenote $n_1$ and $n_2$ as the numbers of (non-padded) tokens of the $i$-th image and $j$-th text, respectively, \nand the corresponding encoded features \nare $f_\\theta(\\vx_{img}_i) \\in \\mathbb{R}^{n_1 \\times d}$ and $g_\\phi(\\vx_{text}_j) \\in \\mathbb{R}^{n_2 \\times d}$.\nFor the $k$-th visual token, we compute its similarities with all textual tokens of $\\vx_{text}_j$, and use the largest one \n\\begin{equation}\n\\max_{0\\le r < n_2} [f_\\theta(\\vx_{img}_i)]_k^\\top [g_\\phi(\\vx_{text}_j)]_r\n\\label{eq:tokenwise_max_sim}\n\\end{equation} \nas its token-wise maximum similarity with $\\vx_{text}_j$.\nWe then use the average token-wise maximum similarity of all non-padded tokens \nin the image (resp. text) as the similarity of an image to a text (resp. a text to an image). 
\nThe similarity of the $i$-th image to the $j$-th text\ncan thus be formulated as:\n\\begin{equation}\ns_{i,j}^I (\\vx_{img}_i, \\vx_{text}_j) = \\frac{1}{n_1}\\sum_{k=1}^{n_1} [f_\\theta(\\vx_{img}_i)]_k^\\top [g_\\phi(\\vx_{text}_j)]_{m_k^I}, \\label{eq:late_sim_i}\n\\end{equation}\nwhere $m_k^I = \\arg \\max_{0\\le r < n_2} [f_\\theta(\\vx_{img}_i)]_k^\\top [g_\\phi(\\vx_{text}_j)]_r$.\nSimilarly, the similarity of the $j$-th text to the $i$-th image is\n\\begin{equation}\ns_{i,j}^T (\\vx_{img}_i, \\vx_{text}_j) = \\frac{1}{n_2}\\sum_{k=1}^{n_2} [f_\\theta(\\vx_{img}_i)]_{m_k^T}^\\top [g_\\phi(\\vx_{text}_j)]_k, \\label{eq:late_sim_t}\n\\end{equation}\nwhere $m_k^T = \\arg \\max_{0\\le r < n_1} [f_\\theta(\\vx_{img}_i)]_r^\\top [g_\\phi(\\vx_{text}_j)]_k$.\nNote that $s_{i,j}^I (\\vx_{img}_i, \\vx_{text}_j)$ in Equation (\\ref{eq:late_sim_i}) does not necessarily equal $s_{i,j}^T (\\vx_{img}_i, \\vx_{text}_j)$ in Equation (\\ref{eq:late_sim_t}).\n\n\n\\begin{remark}\n\\label{rmk:late_interaction_loss}\nIntuitively, the token-wise maximum similarity in Equation~(\\ref{eq:tokenwise_max_sim}) means that\nfor each image patch, we find its most similar textual token.\nSimilarly, for each textual token, we also find its closest image patch.\nBy applying this to the similarity calculation in (\\ref{eq:late_sim_i}) and (\\ref{eq:late_sim_t}) \nfor the contrastive loss (\\ref{eq:contrastive_loss}),\n the dual-stream model learns fine-grained alignment between image patches and textual tokens.\n\\end{remark}\n\n\nThe original late interaction mechanism in \\citep{khattab2020colbert} computes the relevance score of a document to a query \\textit{padded with mask tokens}, as a \\textit{sum} of token-wise maximum similarities,\nand is optimized via a \\textit{pairwise} softmax cross-entropy loss.\nThough inspired by \\citet{khattab2020colbert}, our proposed cross-modal late interaction differs in several aspects.\nFirstly, we exclude the padded textual tokens when computing the similarity, as they harm the performance. \nWe speculate that this is because these padded tokens also learn textual representations and will mislead the model to align image patches to these meaningless padded tokens rather than meaningful non-padded words.\nSecondly, when computing similarities (\\ref{eq:late_sim_i}) and (\\ref{eq:late_sim_t}), we use the average of the token-wise maximum similarities instead of the summation in \\citep{khattab2020colbert}. This is because the number of non-padded tokens varies from text to text, and the summation over all non-padded tokens can have quite different magnitudes, leading to less stable training and worse final performance.\nThirdly, we optimize the late interaction mechanism via a contrastive loss (\\ref{eq:contrastive_loss}), which has been found powerful in vision-language pre-training~\\citep{radford2021learning}, instead of the original pairwise loss in \\citep{khattab2020colbert}.\n\n\\textbf{Training Efficiency.}\nThough the cross-modal late interaction is able to capture finer-grained features \ncompared with the original loss, \nit relies on the token-wise representations of both modalities, \nand can be inefficient in terms of communication, memory and computation, especially when the batch size is large. 
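\nTo make the cost concrete, the similarities in Equations (\\ref{eq:late_sim_i}) and (\\ref{eq:late_sim_t}) for a whole batch can be computed by the following minimal PyTorch-style sketch (our own illustration rather than the released implementation; the tensor names and the masking details are our assumptions):\n\\begin{verbatim}\nimport torch\n\ndef late_interaction_sim(img_tok, txt_tok, txt_mask):\n    # img_tok: [b, n1, d], txt_tok: [b, n2, d] (L2-normalized token features)\n    # txt_mask: [b, n2], 1 for non-padded textual tokens, 0 for padding\n    # token-wise similarities for every image-text pair: [b, b, n1, n2]\n    sim = torch.einsum('ikd,jrd->ijkr', img_tok, txt_tok)\n    neg_inf = torch.finfo(sim.dtype).min\n    masked = sim.masked_fill(txt_mask[None, :, None, :] == 0, neg_inf)\n    # s^I: mean over image patches of the max over non-padded textual tokens\n    s_i2t = masked.max(dim=-1).values.mean(dim=-1)             # [b, b]\n    # s^T: mean over non-padded textual tokens of the max over patches\n    max_over_patches = sim.max(dim=-2).values                  # [b, b, n2]\n    n2 = txt_mask.sum(dim=-1).clamp(min=1)                     # [b]\n    s_t2i = (max_over_patches * txt_mask[None, :, :]).sum(-1) / n2[None, :]\n    return s_i2t, s_t2i\n\\end{verbatim}\nThe intermediate tensor of token-wise similarities has shape $[b, b, n_1, n_2]$, so both memory and computation grow quadratically with the batch size $b$, on top of the cost of communicating the token-wise features across workers.\n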
\nTo alleviate this problem, we utilize several methods.\nFirstly, we reduce the embedding size\nto 256.\nBesides, we reduce the precision of the last-layer features of both modalities from fp32 to fp16 before node communication in a distributed learning setting, \nand perform the multiplication in Equations (\\ref{eq:late_sim_i}) and (\\ref{eq:late_sim_t}) under the reduced precision.\nIn addition, since the complexity of similarity calculation scales with the sequence length of \ntextual tokens and image patches,\nfor each image (resp. text), we select the 25\\% of tokens with the highest token-wise maximum similarity score (Equation (\\ref{eq:tokenwise_max_sim})) among all texts (resp. images) in the same local worker before node communication, based on the intuition that each sample can be represented by a few of the most representative tokens. Effects of these modifications are studied in Section \\ref{sec:efficiency-study-of-late-loss}.\n \n\n\n\n\\subsubsection{Prompt Ensemble and Templates}\n\\label{sec:prompt_ensemble}\n\nDue to the problem of polysemy and inconsistency with the pre-training process, following \\citet{radford2021learning}, we also use prompt templates to augment the original label for some downstream tasks.\nFor visualizations, for simplicity, we \nuse only one prompt template across the paper, i.e., ``a photo of a \\{label\\}.'', as in \\citet{radford2021learning}.\nFor other experiments, we\nreport results using prompt ensemble following~\\citet{radford2021learning}.\nWhen multiple prompts are allowed, the token-wise representations of different prompt templates for the same class label are different, and cannot be summed together to form \na mean textual representation as in \\citep{radford2021learning}.\nThus, instead of ensembling different prompt templates by their mean textual representation, we ensemble them\nby their mean token-wise similarity.\nSpecifically, suppose there are $C$ prompt templates; then each label is augmented to $C$ different texts $\\vx_{text}_1, \\vx_{text}_2, \\cdots, \\vx_{text}_C$.\nThe \nsimilarity between an image $\\vx_{img}$ and this label is computed as\n$\n\\frac{1}{C}\\sum_{c=1}^C s_{\\cdot,\\cdot}^I (\\vx_{img}, \\vx_{text}_c),\n$\nwhere $s_{\\cdot,\\cdot}^I$ is defined in Equation (\\ref{eq:late_sim_i}).\n\nWe use a unified rule-based method inspired by \\citet{radford2018improving}\nto construct prompt templates for image classification tasks.\nSpecifically, each template consists of four components:\n\n\\vspace{-3mm}\n\\begin{equation}\n\\text{[prefix] \\{label\\}, [category description]. [suffix].} \\label{eq:prompt_template}\n\\end{equation}\nHere, the ``[prefix]'' is an in-context description like ``a photo of a\", similar to~\\cite{radford2021learning};\n``\\{label\\}'' is a class label of the dataset;\n``[category description]'' describes the category, which is found helpful for some fine-grained image classification datasets \\citep{radford2021learning}, e.g., ``a type of pet'' for the Oxford-IIIT Pets dataset.\nAn interesting finding is that\nadding a suffix that includes the reference word ``it\" (e.g., ``I like it.\") at the end of the prompt empirically improves the zero-shot classification performance of the proposed model.\nWe speculate this is because the reference word ``it\" strengthens the fine-grained cross-modal alignment, as it\ncan also be aligned to image patches of the target object. 
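\nAs a concrete illustration of this ensemble rule, the zero-shot classification logits can be computed as in the following sketch (ours, under the same assumptions as the previous code block; the per-class prompt features are assumed to be pre-computed offline):\n\\begin{verbatim}\nimport torch\n\ndef zero_shot_logits(img_tok, class_prompt_toks, class_prompt_masks):\n    # img_tok: [n1, d] patch features of one image (L2-normalized)\n    # class_prompt_toks: per class, a [C, n2, d] tensor of token features\n    #                    of the C prompt variants of that class label\n    # class_prompt_masks: matching [C, n2] masks of non-padded tokens\n    logits = []\n    for txt_tok, mask in zip(class_prompt_toks, class_prompt_masks):\n        sim = torch.einsum('kd,crd->ckr', img_tok, txt_tok)   # [C, n1, n2]\n        neg_inf = torch.finfo(sim.dtype).min\n        sim = sim.masked_fill(mask[:, None, :] == 0, neg_inf)\n        s_I = sim.max(dim=-1).values.mean(dim=-1)             # [C]\n        logits.append(s_I.mean())   # average the C token-wise similarities\n    return torch.stack(logits)      # one score per class label\n\\end{verbatim}\n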
Detailed prompt templates for different datasets can be found in Appendix~\\ref{apdx:prompt_template}.\n\n\n\n\n\\vspace{-1mm}\n\\subsection{Image and Text Augmentation}\n\n\\label{sec:augmentation}\nTo obtain better generalization and data-efficiency of the model, we perform data augmentation on both images and texts during the pre-training phase to construct more image-text pairs. \nWe apply AutoAugment \\citep{krizhevsky2012imagenet,sato2015apac,cubuk2019autoaugment,hoffer2020augment} for image augmentation, following the SOTA vision recognition methods \\citep{touvron2021training,xie2020self}.\nTo ensure the augmented texts are semantically similar to the original one, for\ntext augmentation, we rewrite the original text using \nback-translation \\citep{xie2020unsupervised,sennrich2016improving}.\nSpecifically,\nthe texts are first translated to the target language and then translated back to the source language. \nWe choose German and Russian as the target languages and get two extra texts for each image-text pair. \nWhen constructing a batch of image-text pairs during the pre-training, the text of each image-text pair is randomly sampled from the three candidate texts, i.e., the original text and two back-translated texts.\n\n\n\\vspace{-1mm}\n\\subsection{Pre-training Dataset}\n\\label{sec:dataset_construction}\n\nA sufficiently large image-text dataset is a prerequisite for vision-language pre-training. \nRecent CLIP \\citep{radford2021learning} and ALIGN \\citep{jia2021scaling} construct datasets with 400M and 1800M image-text pairs, respectively. \nIn this work, we also construct a large-scale dataset called FILIP300M, which consists of 300M image-text pairs and covers broad vision and language concepts.\nSpecifically, we collect image-text pairs from the Internet,\nand apply the following image- and text-based filtering rules to clean the data.\nFor image-based filtering, we remove the images whose shorter dimension is smaller than 200 pixels and whose aspect ratio is larger than 3. \nFor text-based filtering, we keep only English texts, and exclude the meaningless ones, e.g., img\\_0.jpg. \nWe also discard image-text pairs whose texts are repeated over 10 times. \nBesides, we also use 3 public datasets, including Conceptual Captions 3M (CC3M) \\citep{sharma2018conceptual}, Conceptual 12M (CC12M) \\citep{changpinyo2021cc12m} and Yahoo Flickr Creative Commons 100M (YFCC100M) \\citep{thomee2016yfcc100m}. We apply the same filtering rules on YFCC100M. Finally, we use about 340M image-text pairs for pre-training. \nDespite using a smaller training dataset than CLIP and ALIGN, our models still outperform them in most downstream tasks (see Section~\\ref{sec:expt}).\n\n\n\\begin{table}\n\\vspace{-2mm}\n\\Large\n\\caption{Top-1 accuracy (\\%) of zero-shot image classification on 12 datasets. 
Our FILIP improves the average accuracy by 3$\\sim$5\\%.}\n\\label{zeroshot-classification-table}\n\\vspace{-1mm}\n\\begin{center}\n\\resizebox{\\textwidth}{!}{\n\\begin{tabular}{l|cccc cccc cccc |c}\n\\toprule\n&\\rotatebox{90}{\\Large{CIFAR10}}~~ &\n\\rotatebox{90}{\\Large{CIFAR100}}~~ &\n\\rotatebox{90}{\\Large{Caltech101}}~~ &\n\\rotatebox{90}{\\Large{StanfordCars}}~~ &\n\\rotatebox{90}{\\Large{Flowers102}}~~ &\n\\rotatebox{90}{\\Large{Food101}}~~ &\n\\rotatebox{90}{\\Large{SUN397}}~~ &\n\\rotatebox{90}{\\Large{DTD}}~ &\n\\rotatebox{90}{\\Large{Aircrafts}}~~ &\n\\rotatebox{90}{\\Large{OxfordPets}}~~ &\n\\rotatebox{90}{\\Large{EuroSAT}}~~ & \n\\rotatebox{90}{\\Large{\\textbf{ImageNet}}}~~ &\n\\rotatebox{90}{\\Large{\\textbf{Average}}}~~ \\\\\n\\midrule\nCLIP-ViT-B\/32 & 91.3 & 65.1 & 87.9 & 59.4 & 66.7 & 84.4 & 63.2 & 44.5 & 21.2 & 87.0 & 49.4 & 63.2 & 65.3 \\\\\n$\\text{FILIP}_{\\text{base}}$-ViT-B\/32 & 86.9 & 65.5 & 91.9 & 55.4 & 85.3 & 82.8 & 69.1 & 49.3 & 57.2 & 88.1 & 49.9 & 68.8 & \\textbf{70.9}$^{+5.6}$ \\\\\n\n\\midrule\nCLIP-ViT-L\/14 & 96.2 & 77.9 & 92.6 & 77.3 & 78.7 & 92.9 & 67.7 & 55.3 & 36.1 & 93.5 & 59.9 & 75.3 & 75.3 \\\\ \n$\\text{FILIP}_{\\text{large}}$-ViT-L\/14 & 95.7 & 75.3 & 93.0 & 70.8 & 90.1 & 92.2 & 73.1 & 60.7 & 60.2 & 92.0 & 59.2 & 77.1 & \\textbf{78.3}$^{+3.0}$ \\\\\n\n\\bottomrule\n\\end{tabular}}\n\\end{center}\n\\vspace{-2mm}\n\\end{table}\n\n\n\n\n\n\\section{Conclusion and Future Work}\n\\label{sec:conclusion}\nThis paper introduces FILIP, a simple yet generic framework towards fine-grained vision-language pre-training.\nBy using a token-wise maximum similarity, our method learns fine-grained representations for patches in the images and words in the sentences.\nWhile it achieves competitive results against several large-scale multi-modal pre-training models on various downstream tasks, both its architecture and training procedure can still be optimized to improve its performance. In the future, a\nmore advanced image encoder as well as a well-designed interaction layer can be used to boost the performance.\nFurthermore, masked language\/image modeling losses can be added to support more generation tasks.\nTo this end, we hope to extend FILIP as a generic and unified interface for solving a large variety of vision-language tasks.\n\n\n\\section{Related Work}\n\\paragraph{Vision-Language Pre-training Models.} The pre-train-and-fine-tune scheme has achieved great success in the domains of natural language processing~\\citep{devlin2018bert,brown2020language} and computer vision~\\citep{dosovitskiy2020image}.\nIt is then naturally extended to the joint cross-modal domain of \nVision-and-Language Pre-training (VLP). \nThe pre-training datasets of recent VLP models \ninclude publicly available \ndatasets \nlike YFCC100M \\citep{thomee2016yfcc100m} and CC12M \\citep{changpinyo2021cc12m}, as well as \nlarger-scale datasets with more than 100M samples\nin CLIP \\citep{radford2021learning} and ALIGN \\citep{jia2021scaling}, which are shown to be even more powerful. 
\nThe pre-training tasks of VLP models fall into two categories: image-text contrastive learning tasks and Language Modeling (LM) based tasks:\n(i) CLIP \\citep{radford2021learning}, ALIGN \\citep{jia2021scaling} and UNIMO \\citep{li2020unimo} make use of cross-modal contrastive learning which aligns the textual and visual information into a unified semantic space; (ii) VisualBERT \\citep{li2019visualbert}, UNITER \\citep{chen2020uniter}, M6 \\citep{lin2021m6}, and DALL-E \\citep{ramesh2021zeroshot} employ LM-like objectives, including both masked LM (e.g., Masked Language\/Region Modeling), and autoregressive LM (e.g., image captioning, text-grounded image generation).\nOn the other hand, some methods rely on a pre-trained object detection model such as Faster-RCNN \\citep{ren2015faster} to extract image regional\nfeatures offline, which requires extra labeled bounding-box data and makes the approach less scalable.\nRecent efforts such as SOHO \\citep{huang2021seeing} and SimVLM \\citep{wang2021simvlm} try to eliminate this burden via a visual dictionary or PrefixLM \n\\citep{raffel2020exploring}. \nIn this paper, we \n directly \nlearn fine-grained vision-language representations\nin an end-to-end and simpler manner while maintaining the benefit of inference efficiency.\n\n\n\\paragraph{Multi-Modality Interaction Mechanism.} \nThe core of vision-language pre-training models lies in modeling the interaction between the two modalities. \nThere are mainly two types of cross-modal interaction architectures: single-stream and dual-stream models. \nSingle-stream models like VisualBERT \\citep{li2019visualbert} and ViLT \\citep{kim2021vilt} directly concatenate the patch-wise or regional visual features and textual embeddings \nand feed them to the transformer-based model.\nDual-stream models such as ViLBERT \\citep{lu2019vilbert} and CLIP~\\citep{radford2021learning} have separate encoders for different modalities. \nThis allows flexible use of different models for different modalities, and\n efficient inference for downstream tasks like image-text retrieval, through the ability of decoupling the encoders and pre-computing image\/text features offline.\nIn this paper, while following the dual-stream approach for its flexible and efficient inference,\nwe further propose a new multi-modal interaction mechanism to capture the fine-grained representations. \n\n\n\n\\section{Introduction}\nLarge-scale Vision-Language Pre-training (VLP) models like CLIP \\citep{radford2021learning} and ALIGN \\citep{jia2021scaling} have recently demonstrated success across \nvarious downstream tasks. They learn visual and textual representations from millions of image-text pairs collected from the Internet and show superior zero-shot ability and robustness. The core technique of these models lies in the global contrastive alignment of the images and texts through a dual-stream model. Such an architecture is inference-efficient for downstream tasks like retrieval because the encoders for the two modalities can be decoupled and the image or text representations can be pre-computed offline. 
\nHowever, CLIP and ALIGN\nmodel the cross-modal interaction solely via the similarity of the global features of each modality, lacking the ability to capture finer-level information like the relationship between visual objects and textual words.\nIn this paper, we develop a simple yet efficient cross-modal finer-grained interaction mechanism for large-scale VLP.\n\nTo achieve finer-grained cross-modal interaction, previous works mainly exploit two kinds of approaches.\n(1) One line of work \\citep{chen2020uniter,li2020oscar,m5product,li2020unimo,zhang2021vinvl,capture} uses a pre-trained object detector to extract region-of-interest (ROI) features from images, and then fuses them with the paired text through a VLP model.\nThis design complicates the pre-training as it requires pre-computing and storing a large number of ROI features. \nIn addition, the zero-shot ability of these approaches is usually limited by the predefined number of classes and their performance is also restricted by the quality of the detector.\n(2) Another line of work \\citep{li2021align, kim2021vilt} \nenforces the token-wise or patch-wise representations from both modalities into the same space and models these finer-grained interactions via cross-attention \\citep{li2021align} or self-attention \\citep{kim2021vilt}. \nHowever, these methods are usually less efficient in terms of both training and inference. In particular, during training, the cross-attention in \\citep{li2021align} has to be performed in an encoder-decoder structure, while the complexity of the self-attention \\citep{kim2021vilt} grows quadratically with the length of the prolonged concatenated sequences of both modalities. During inference, the data from both modalities are intertwined to compute the cross-attention or self-attention, and cannot be pre-computed offline as in dual-stream models like CLIP and ALIGN.\nThis can be less efficient for downstream tasks like image\/text retrieval and image classification.\n\nIn this paper, we propose a large-scale Fine-grained Interactive Language-Image Pre-training framework named FILIP \nto address these limitations.\nInspired by \\cite{khattab2020colbert}, \nwe model the fine-grained semantic alignment through a novel cross-modal\nlate interaction mechanism in the contrastive loss, instead of using cross or self-attention.\nSpecifically, our fine-grained contrastive learning uses a token-wise maximum similarity between visual and textual tokens to guide the contrastive objective.\nIn this way,\nFILIP successfully leverages the finer-grained expressiveness\namong image patches and textual words \nwhile simultaneously gaining the ability to pre-compute image and text representations offline.\nUnlike \\cite{khattab2020colbert}, we discard the padded tokens and use the average \ninstead of the summation \nof token-wise maximum similarities when computing the image-text alignment,\nwhich enhances the cross-modal representation learning and stabilizes training.\nFurthermore, we construct a large-scale pre-training dataset named FILIP300M from the Internet.\nData cleaning and image-text data augmentation are also explored and proved useful in this work.\n\nExtensive experiments show that by effectively learning fine-grained representations,\nFILIP achieves state-of-the-art performance on multiple downstream vision-language tasks, including zero-shot image classification and image-text retrieval. 
For example, FILIP reaches 77.1\\% top-1 accuracy for zero-shot ImageNet classification, surpassing CLIP with less training data.\nVisualizations on word-patch alignment further show that FILIP learns meaningful finer-grained features with promising localization ability. \n\n\n\n\n\n\n\\section{Experiments}\n\\label{sec:expt}\n\n\\vspace{-1mm}\n\\subsection{Experimental Setup}\n\\label{sec:experiment_details}\n\\textbf{Model Architectures.}\nWe train two \nmodels from scratch, i.e., $\\text{FILIP}_{\\text{base}}$ and $\\text{FILIP}_{\\text{large}}$. \nThe model architectures follow\nCLIP \\citep{radford2021learning}, i.e., the image encoder is ViT-B\/32 for $\\text{FILIP}_{\\text{base}}$ and ViT-L\/14 for $\\text{FILIP}_{\\text{large}}$. More details can be found in Appendix \\ref{apdx:expt_setting}.\n\n\\textbf{Pre-training Details.}\nTo save memory and scale up the batch size, automatic mixed-precision \\citep{micikevicius2018mixed} and gradient checkpointing \\citep{griewank2000algorithm, chen2016training} are used.\nThe input images are resized to $224 \\times 224$ resolution during pre-training and the maximum length of the text is limited to $77$ tokens following \\citet{radford2021learning}.\nThe training is mainly conducted on Nvidia V100 GPUs and Ascend cards.\n$\\text{FILIP}_{\\text{base}}$ is trained on 128 cards for about 9 days and $\\text{FILIP}_{\\text{large}}$ takes about 24 days to train on 192 cards. \nUnless otherwise specified, we use $\\text{FILIP}_{\\text{large}}$ to compare with other methods and $\\text{FILIP}_{\\text{base}}$ for ablation.\nWe train both models using the LAMB optimizer \\citep{2019Large} and a cosine learning rate schedule \\citep{2016SGDR} with a linear warmup. \nWeight decay regularization is applied to all parameters except the bias, layer normalization, token embedding, positional embedding and the temperature in the contrastive loss. \nDetailed values of hyperparameters for different datasets and models can be found in Appendix \\ref{apdx:expt_setting}.\n\n\n\n\\vspace{-1mm}\n\\subsection{Zero-Shot Image Classification}\n\\label{sec:zeroshot_classification}\nIn this section, we evaluate our proposed FILIP on the zero-shot image classification task.\nWe compare our FILIP with CLIP \\citep{radford2021learning} on 12 downstream classification datasets, using the same evaluation setting as in CLIP. As described in Section \\ref{sec:prompt_ensemble}, we apply a set of prompts for each dataset and ensemble them to get the final results; see Appendix \\ref{apdx:prompt_template} for details. We only compare the zero-shot performance with CLIP here as ALIGN does not release its model and the related performances are not reported in their paper.\n\nTable \\ref{zeroshot-classification-table} shows the results on 12 datasets.\nDespite using less training data (340M vs. 400M), both $\\text{FILIP}_{\\text{base}}$ and $\\text{FILIP}_{\\text{large}}$ considerably outperform their CLIP counterparts in terms of average top-1 accuracy over the 12 datasets, i.e., achieving absolute improvements of 5.6\\% and 3.0\\%, respectively. In particular, our FILIP surpasses CLIP on ImageNet, the largest dataset among the 12 datasets.\nFILIP also achieves substantial performance gains on some domain-specific datasets, e.g., for Aircrafts, the two FILIP models reach a 30\\% improvement over CLIP on average. 
We speculate this is because,\nunlike CLIP which aggregates the information of the whole image into the representation of the [CLS] token, our proposed FILIP model focuses more on the target object by directly aligning the image patches corresponding to the target object with the textual tokens corresponding to the class label (visualizations of word-patch alignment are in Section \\ref{sec:Visualization of Fine-grained Alignment}).\n\n\n\n\\begin{table}\n\\vspace{-3mm}\n\\center\n\\caption{Results of zero-shot image-text retrieval on Flickr30K and MSCOCO datasets. The last two rows (marked with *) report the zero-shot results on Flickr30K dataset of model fine-tuned on MSCOCO dataset, following the setting of ALBEF \\citep{li2021align}.}\n\\huge\n\\label{tab:zero-shot-retrieval-table}\n\\resizebox{\\textwidth}{!}{\n\\begin{tabular}{ccccccccccccc}\n\\toprule\n & \\multicolumn{6}{c}{Flickr30K} & \\multicolumn{6}{c}{MSCOCO} \\\\\n & \\multicolumn{3}{c}{image-to-text} & \\multicolumn{3}{c}{text-to-image} & \\multicolumn{3}{c}{image-to-text} & \\multicolumn{3}{c}{text-to-image} \\\\\n & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 \\\\\n\\midrule\nUnicoder-VL & 64.3 & 85.8 & 92.3 & 48.4 & 76.0 & 85.2 & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\\\\nImageBERT & 70.7 & 90.2 & 94.0 & 54.3 & 79.6 & 87.5 & 44.0 & 71.2 & 80.4 & 32.3 & 59.0 & 70.2 \\\\\nUNITER & 83.6 & 95.7 & 97.7 & 68.7 & 89.2 & 93.9 & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\\\\nCLIP & 88.0 & 98.7 & 99.4 & 68.7 & 90.6 & 95.2 & 58.4 & 81.5 & 88.1 & 37.8 & 62.4 & 72.2 \\\\\nALIGN & 88.6 & 98.7 & 99.7 & \\textbf{75.7} & \\textbf{93.8} & \\textbf{96.8} & 58.6 & 83.0 & 89.7 & 45.6 & 69.8 & 78.6 \\\\\n\\textbf{FILIP} & \\textbf{89.8} & \\textbf{99.2} & \\textbf{99.8} & {75.0} & {93.4} & {96.3} & \\textbf{61.3} & \\textbf{84.3} & \\textbf{90.4} & \\textbf{45.9} & \\textbf{70.6} & \\textbf{79.3} \\\\ \\hline\nALBEF* & 94.1 & 99.5 & 99.7 & 82.8 & 96.3 & 98.1 & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\\\\n\\textbf{FILIP}* & \\textbf{95.4} & \\textbf{99.8} & \\textbf{100.0} & \\textbf{84.7} & \\textbf{97.0} & \\textbf{98.7} & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\\\\n\\bottomrule \n\\end{tabular}}\n\\vspace{-3mm}\n\\end{table}\n\n\n\n\\vspace{-1mm}\n\\subsection{Image-Text Retrieval}\n\\label{sec:image_text_retrieval}\nImage-text retrieval consists of two sub-tasks: image-to-text retrieval and text-to-image retrieval. \nWe evaluate our FILIP model on two retrieval benchmark datasets: Flickr30K \\citep{dataset_flickr30k} and MSCOCO \\citep{dataset_mscoco}, under both zero-shot and fine-tuned settings. \nMore details of experimental setting can be found in Appendix \\ref{apdx:expt_setting}.\n\nTables \\ref{tab:zero-shot-retrieval-table} and \\ref{tab:finetuned-retrieval-table} show the results of \nzero-shot and fine-tuned \nimage-text retrieval,\nrespectively. We compare our FILIP model against methods with complex attention layers including Unicoder-VL \\citep{unicoder_vl}, ImageBERT \\citep{imagebert}, UNITER \\citep{chen2020uniter}, VILLA \\citep{villa}, ERNIE-ViL \\citep{ernie_vil}, Oscar \\citep{li2020oscar}, VinVL \\citep{zhang2021vinvl}, ALBEF \\citep{li2021align}, and methods trained on larger-scale image-text datasets including CLIP \\citep{radford2021learning} and ALIGN \\citep{jia2021scaling}. As we can see, FILIP achieves state-of-the-art performances under all metrics on both Flickr30K and MSCOCO datasets, except for zero-shot text-to-image retrieval on Flickr30K, where FILIP achieves competitive performance with SOTA. 
For zero-shot image-to-text retrieval on MSCOCO dataset, the absolute R@1 of our proposed FILIP is 2.7\\% higher than ALIGN, which is trained on a much larger dataset.\n\n\\begin{table}\n\\vspace{-3mm}\n\\center\n\\caption{Results of \nfine-tuned \nimage-text retrieval on Flickr30K and MSCOCO datasets.}\n\\Large\n\\label{tab:finetuned-retrieval-table}\n\\resizebox{\\textwidth}{!}{\n\\begin{tabular}{ccccccccccccc}\n\\toprule\n & \\multicolumn{6}{c}{Flickr30K} & \\multicolumn{6}{c}{MSCOCO} \\\\\n & \\multicolumn{3}{c}{image-to-text} & \\multicolumn{3}{c}{text-to-image} & \\multicolumn{3}{c}{image-to-text} & \\multicolumn{3}{c}{text-to-image} \\\\\n & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 \\\\\n\\midrule\nUnicoder-VL & 86.2 & 96.3 & 99.0 & 71.5 & 90.9 & 94.9 & 62.3 & 87.1 & 92.8 & 48.4 & 76.7 & 85.9 \\\\\nImageBERT & 87.0 & 97.6 & 99.2 & 73.1 & 92.6 & 96.0 & 66.4 & 89.8 & 94.4 & 50.5 & 78.7 & 87.1 \\\\\nUNITER & 87.3 & 98.0 & 99.2 & 75.6 & 94.1 & 96.8 & 65.7 & 88.6 & 93.8 & 52.9 & 79.9 & 88.0 \\\\\nVILLA & 87.9 & 97.5 & 98.8 & 76.3 & 94.2 & 96.8 & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\\\\nERNIE-ViL & 88.1 & 98.0 & 99.2 & 76.7 & 93.6 & 96.4 & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\\\\nOscar & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & 73.5 & 92.2 & 96.0 & 57.5 & 82.8 & 89.8 \\\\\nVinVL & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & 75.4 & 92.9 & 96.2 & 58.8 & 83.5 & 90.3 \\\\\nALIGN & 95.3 & 99.8 & 100.0 & 84.9 & 97.4 & 98.6 & 77.0 & 93.5 & 96.9 & 59.9 & 83.3 & 89.8 \\\\\nALBEF & 95.9 & 99.8 & \\textbf{100.0} & 85.6 & 97.5 & 98.9 & 77.6 & 94.3 & 97.2 & 60.7 & \\textbf{84.3} & 90.5 \\\\\nOur FILIP & \\textbf{96.6} & \\textbf{100.0} & \\textbf{100.0} & \\textbf{87.1} & \\textbf{97.7} & \\textbf{99.1} & \\textbf{78.9} & \\textbf{94.4} & \\textbf{97.4} & \\textbf{61.2} & \\textbf{84.3} & \\textbf{90.6} \\\\\n\\bottomrule\n\\end{tabular}}\n\\vspace{-3mm}\n\\end{table}\n\n\n\n\\begin{table}\n\\vspace{-3mm}\n\\caption{Ablation study of different components on pre-training subset of YFCC100M. I2T and T2I are abbreviations for image-to-text and text-to-image retrieval, respectively. ``ZS'' means zero-shot performance. Underlined numbers have\nthe highest improvements for the corresponding metrics. }\n\\label{ablation-yfcc-table}\n\\begin{center}\n\\begin{tabular}{l|rrrrr}\n\\toprule \\multirow{2}{*}{ Model } & \\multicolumn{4}{c} { MSCOCO } & ImageNet \\\\\n & I2T R@1 & I2T R@5 & T2I R@1 & T2I R@5 & ZS Top1 \\\\\n\\midrule Baseline (ViT-B\/32) & $25.0$ & $49.5$ & $14.7$ & $34.7$ & 30.4 \\\\\n~w\/ image augmentation & $26.1$ & $51.8$ & $16.5$ & $37.5$ & $ 32.5 $ \\\\ \n~w\/ back translation & $29.2$ & $55.0$ & $17.9$ & $39.8$ & $33.9$ \\\\\n~w\/ cross-modal late interaction & $\\underline{30.5}$ & $\\underline{55.3}$ & $\\underline{18.5}$ & $\\underline{40.0}$ & $\\underline{34.3}$\\\\\nOur $\\text{FILIP}_{\\text{base}}$ & $\\mathbf{33.4}$ & $\\mathbf{60.1}$ & $\\mathbf{23.0}$ & $\\mathbf{46.2}$ & $\\mathbf{37.8}$ \\\\ \n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\vspace{-3mm}\n\\end{table}\n\n\n\\begin{table}[t!]\n\\vspace{-3mm}\n \\caption{Efficiency study of the cross-modal late interaction. ``orig'' and ``late'' stand for the contrastive loss based on the original cosine similarity \n in CLIP and our proposed cross-modal late interaction, respectively. ``ZS'' means zero-shot performance.\n We report results for ViT-B\/32 trained on filtered YFCC100M with 8 V100 GPUs, with a batch size of 512 per GPU. 
Training time and memory consumption are tested using the same gradient checkpoint configuration. \n \n * denotes our final setting used in other experiments.}\n \\label{tab:efficiency-late}\n \\begin{center}\n\n \\begin{tabular}{cccccccc}\n \\toprule \\multirow{2}{*}{ Loss } &Embed&Embed & Token & Training time & Memory & ImageNet \\\\\n & dim & precision& \\% & (sec\/iter) & (MB) & ZS Top1 \\\\\n \\midrule \n orig (baseline)& 512 & fp32 & - &1.31 & 14300 & 30.4 \\\\\n late & 512 & fp32 &100\\% & 2.85 & 26000 & 34.6 \\\\\n late & 512 & fp16 &100\\% & 2.67 & 23468 & 34.5 \\\\\n late & 256 & fp16 &100\\% & 2.31 & 22382 & \\textbf{35.2} \\\\\n late & 256 & fp16 &50\\% & 1.61 & 16336 & 34.5 \\\\\n late* & 256 & fp16 &25\\% & 1.39 & 16100 & 34.3 \\\\\n \n \n \n \n \\bottomrule\n \\end{tabular}\n \\end{center}\n \\vspace{-3mm}\n\\end{table}\n\n\\vspace{-1mm}\n\\subsection{Ablation Study}\n\\label{sec:ablation_yfcc}\n\n\\textbf{Effectiveness of Each Component.}\nWe study the effectiveness of each component in FILIP, i.e., image\/text augmentations and cross-modal late interaction. Experiments are conducted on \n$\\text{FILIP}_{\\text{base}}$,\nwith a filtered subset of YFCC100M as the training dataset (as described in Section \\ref{sec:dataset_construction}),\non both zero-shot retrieval and classification tasks. \nWe measure models' performance on MSCOCO zero-shot image-text retrieval and ImageNet zero-shot classification, which are two effective indicators \nfor the quality of the learned vision-language representations. \n\nTable \\ref{ablation-yfcc-table} reports the results. As can be seen, all three components \nare beneficial for both tasks.\nDespite the simple design, cross-modal late interaction brings significant performance improvements over the baseline (the vanilla CLIP ViT-B\/32), with an absolute R@1 gain of 5.5\\% (resp. 3.8\\%) for image-to-text (resp. text-to-image) retrieval on MSCOCO and an absolute top-1 accuracy gain of 3.9\\% for zero-shot classification on ImageNet. \nFurther improvements are observed when all components are combined together.\n\n\n\n\n\n\\begin{figure}\n\\vspace{-3mm}\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{figures\/ImageNetVis_vsCLIP.pdf}\n\t\\vspace{-5mm}\n\t\\caption{Visualizations of word-patch alignment for 4 classes of the ImageNet dataset and ``a photo of a \\{label\\}.\" is the prompt. Numbers in the parentheses after the class label indicate the location indices of the class label in the tokenized textual sequence. The correct predictions are highlighted by opaque patches with the class label indices in red.}\n\t\\label{fig:single_obj}\n\t\\vspace{-3mm}\n\\end{figure}\n\n\n\\textbf{Efficiency Study of Cross-modal Late Interaction.}\n\\label{sec:efficiency-study-of-late-loss}\nSince the late interaction mechanism in Section~\\ref{sec:Cross-modal-late} requires to calculate the similarity between all visual and textual tokens, \nits efficiency can be a problem when\nemployed in large-scale distributed training.\nAs described in Section \\ref{sec:Cross-modal-late}, we make several attempts to address the issue. \nTable \\ref{tab:efficiency-late} shows the efficiency improvement on zero-shot classification on ImageNet when\nthese attempts are applied. \nAs can be seen, these attempts improve the efficiency of late interaction \nwithout accuracy drop. 
Combining all three attempts achieves only slightly slower training and larger memory consumption than the original loss in CLIP.\n\n\n\\vspace{-1mm}\n\\subsection{Visualization of Fine-grained Alignment}\n\\label{sec:Visualization of Fine-grained Alignment}\n\nIn this section, we visualize FILIP's capability of capturing fine-grained cross-modal correspondence using the method of word-patch alignment. To make a fair comparison, we use our $\\text{FILIP}_{\\text{base}}$ trained on YFCC100M and CLIP's ViT-B\/32, which are of the same size, for visualization. Each image is patchified to $7\\times 7$ image patches. \nMore visualization results can be found in Appendix~\\ref{apdx:more_vis}.\n\n\\textbf{Visualization Method.} The word-patch alignment is performed based on the token-wise similarity between the image patches and textual tokens. Specifically, for the $k$-th image patch, the location index of the textual token with the largest similarity with it ($m_k^I$ in Equation~(\\ref{eq:late_sim_i})) is considered as its predicted label, and is placed at its center.\nTake class ``balloon'' as an example. \nThere are 8 tokens in the tokenized textual sequence ``[BOS] a photo of a balloon. [EOS]'', \nand the location index of the class label ``balloon'' is ``5''. \nNote that one class label may be tokenized into more than one token.\nLocation indices of textual tokens corresponding to the class label are highlighted in red, while the others are marked in white.\nA desired model that learns fine-grained representations would map image patches of the target object to red indices.\n\n\\textbf{Observations.}\nFigure~\\ref{fig:single_obj} shows the word-patch alignment results for FILIP and CLIP on 4 classes from the ImageNet dataset.\nAs can be seen,\nFILIP exhibits a finer-grained understanding of \nan image in the following aspects. \n(i) A single object: \nFrom the visualization of class ``small white butterfly'', the image patches covering the object are all classified correctly;\n(ii) Same object in different shapes:\nFrom the visualizations of classes ``balloon'' and ``lifeboat'', image patches corresponding to all target objects with different shapes and locations are correctly classified; \n(iii) Key components of an object: For class ``electric locomotive'', there are two key components crucial to correctly classifying the image, i.e., ``electric'' and ``locomotive'', whose corresponding textual token indices are ``5'' and ``6'', respectively. As can be seen, image patches matching these two key components are each correctly classified.\nOn the other hand, CLIP cannot correctly align image patches with the corresponding textual tokens.\nCompared with \\cite{kim2021vilt}, which uses an extra optimal transport to align the textual word and image patch distributions, the word-patch alignment is learned automatically by our method. 
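\nIn code, the visualization reduces to an argmax over the token-wise similarities; a minimal sketch (our own illustration of the procedure described above, not the authors' plotting code) reads:\n\\begin{verbatim}\nimport torch\n\ndef word_patch_alignment(img_tok, txt_tok):\n    # img_tok: [n1, d] features of the 7x7 image patches (L2-normalized)\n    # txt_tok: [n2, d] features of the tokenized prompt, e.g.\n    #          '[BOS] a photo of a balloon. [EOS]' with n2 = 8\n    sim = img_tok @ txt_tok.T    # [n1, n2] token-wise similarities\n    pred = sim.argmax(dim=-1)    # m_k^I: predicted token index per patch\n    return pred.reshape(7, 7)    # a patch is correct if its index hits\n                                 # the class-label token(s), e.g. index 5\n\\end{verbatim}\n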
\n\n\n\n\n\n\n\n\\section{Appendix}\n\n\\subsection{Datasets Summary}\nTable \\ref{tab:dataset_statistics} shows the number of image-text pairs of each datasets used in different pre-training methods.\n\\begin{table}[h]\n\\center\n\\caption{Number of image-text pairs used in the pre-training of FILIP, CLIP and ALIGN.}\n\\label{tab:dataset_statistics}\n\\begin{tabular}{ c|cccc|c|c } \n \\hline\n\\multirow{2}{*}{} & \\multicolumn{4}{c|} { FILIP } & { CLIP } & { ALIGN } \\\\\n& CC3M & CC12M & YFCC100M & FILIP300M & \\citep{radford2021learning} & \\citep{jia2021scaling} \\\\ \n \\hline\n \\# & 3M & 10M & 26M & 300M & 400M & 1800M \\\\\n \\hline\n\\end{tabular}\n\\end{table}\n\n\n\n\n\\subsection{Detailed Experimental Settings}\n\\label{apdx:expt_setting}\n\n\n\\begin{table}[htbp]\n\\caption{The architecture parameters for FILIP models.}\n\\label{tab:Filip_model_Hyperparameter}\n \\centering\n \n \\begin{tabular}{l|cc ccc ccc}\n \\toprule \n \\multirow{2}{*}{ Model } & Embedding & Input & \\multicolumn{3}{c}{ Image Encoder } & \\multicolumn{3}{c}{ Text Encoder } \\\\\n & dimension & resolution & \\#layers & width & \\#heads & \\#layers & width & \\#heads \\\\\n \\midrule \n \n $\\text{FILIP}_{\\text{base}}$ & 256 & $224\\times 224$ & 12 & 768 & 12 & 12 & 512 & 8 \\\\\n $\\text{FILIP}_{\\text{large}}$ & 256 & $224\\times 224$ & 24 & 1024 & 16 & 12 & 768 & 12 \\\\\n \n \\bottomrule\n \\end{tabular}\n \n\\end{table}\n\n\n\\paragraph{Model Architectures.} \nWe follow the same architecture design as CLIP, for both $\\text{FILIP}_{\\text{base}}$ and $\\text{FILIP}_{\\text{large}}$, except that we reduce the embedding dimension from 512\/768 to 256 for the efficiency of loss computation.\nTable \\ref{tab:Filip_model_Hyperparameter} describes the details of architectures.\n\n\\begin{table}\n \\centering\n \\caption{Common hyperparameters used for FILIP pre-training.}\n \n \\label{tab:pre-training hyperparams}\n \\centering\n \\begin{tabular}{l|c}\n \\toprule Hyperparameter & Value \\\\\n \\midrule\n Vocabulary size & 49408 \\\\\n Initial temperature & $0.07$ \\\\\n LAMB $\\beta_{1}$ & $0.9$ \\\\\n LAMB $\\beta_{2}$ & $0.999$ \\\\\n LAMB $\\epsilon$ & $10^{-4}$ \\\\\n Warm-up iters & 3000 \\\\\n Training epochs & 30 \\\\\n \\bottomrule\n \\end{tabular}\n \n \\end{table}\n \n\\begin{table}\n \\centering\n \\caption{Model- and dataset-specific hyperparameters used for FILIP pre-training.\n \n Numbers in batch size represent the total batch size across all workers and are calculated as: batch size per GPU $\\times$ \\#GPUs. FILIP340M is the combination of FILIP300M, YFCC100M, CC12M and CC3M.}\n \\label{tab:Model_and_dataset_specific hyperparameters}\n \\begin{minipage}{\\textwidth}\n \\centering\n \\begin{tabular}{l|l|cccc}\n \\toprule \n Model & Dataset & Batch size & Base LR & Weight decay & \\\\\n \\midrule \n $\\text{FILIP}_{\\text{base}}$ & YFCC100M & $1024 \\times 8 $ & $6 \\times 10^{-3}$ & 3e-2 \\\\\n $\\text{FILIP}_{\\text{base}}$ & FILIP340M & $ 320 \\times 128 $ & $ 2 \\times 10^{-3}$ & 3e-3 \\\\\n \\midrule\n $\\text{FILIP}_{\\text{large}}$ & FILIP340M & $ 160 \\times 192 $ & $ 8 \\times 10^{-4}$ & 3e-3 \\\\ \n \n \n \n \n \n \\bottomrule\n \\end{tabular}\n \\end{minipage}\n\\end{table}\n\n\n\\paragraph{Details for Pre-training and Hyperparameters.} \nFor the implementation of the contrastive loss, following CLIP \\citep{radford2021learning} and ALIGN \\citep{jia2021scaling}, we also set the temperature in the softmax function to be a learnable parameter and initialize it as 0.07. 
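\nIn practice this is commonly done by learning the logarithm of the inverse temperature, as in the public CLIP implementation; the following sketch shows one way to realize it (variable names are ours, and whether FILIP uses exactly this parametrization is our assumption):\n\\begin{verbatim}\nimport math\nimport torch\n\n# learnable logit scale, initialized so that the temperature is 0.07\nlogit_scale = torch.nn.Parameter(torch.tensor(math.log(1 / 0.07)))\n\ndef scaled_logits(sim):\n    # sim: [b, b] similarity matrix from the late interaction;\n    # the softmax in the contrastive loss is taken over these logits\n    return logit_scale.exp() * sim\n\\end{verbatim}\n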
\nFor the pre-training, we use the LAMB optimizer implemented in cybertronai's open-source repository (\\url{https:\/\/github.com\/cybertronai\/pytorch-lamb}). \nFor the learning rate scheduler, we first assign a base learning rate and then linearly warm it up to the peak learning rate according to the effective total batch size by a square root strategy, $peak\\_lr=base\\_lr \\times \\sqrt{\\frac{total\\_bs}{512}}$. \nWe note that a large weight decay is crucial to stabilize training and improve generalization.\nSpecifically, we found that the training stability is a challenging issue when applying mixed-precision training to large-scale models, i.e., the training is extremely unstable and NaN losses easily happen. Recent works DALL-E \\citep{ramesh2021zeroshot} and Cogview \\citep{ding2021cogview} also notice this issue and provide their solutions. \nHowever, we found that simply increasing the weight decay and applying the trick of removing the weight decay of specific parameters as described in Section \\ref{sec:experiment_details} work in our case.\nThe base learning rate and weight decay are selected manually by observing the performance at the early training stage.\nTable \\ref{tab:pre-training hyperparams} summarizes the common hyperparameters and Table \\ref{tab:Model_and_dataset_specific hyperparameters} shows the model- and dataset-specific hyperparameters for FILIP pre-training.\n\n\n\n\n\n\\paragraph{Details for Image-text Retrieval.} Following previous works \\citep{jia2021scaling,li2021align}, for Flickr30K, we test on the 1K test set with or without fine-tuning on the 30K training set, while for MSCOCO, we test on the 5K test set with or without fine-tuning on the 113K training set. We use the similarity between image and text for ranking and use the contrastive loss for fine-tuning. Since there are multiple texts for each image in these two datasets, we change the ground-truth label of the contrastive loss to consider multiple positives, by assigning a probability of 1\/\\#positive to each positive following ALBEF~\\citep{li2021align}. Besides, we also use prompts during evaluation for both datasets; see Appendix \\ref{apdx:prompt_template} for details. Table \\ref{tab:hyperparameter_image_text_retrieval} shows the hyperparameters for image-text retrieval fine-tuning.\n\n\\begin{table}[htbp]\n \\centering\n \\caption{Hyperparameters used for image-text retrieval fine-tuning.}\n \\label{tab:hyperparameter_image_text_retrieval}\n \\begin{tabular}{l|c}\n \\toprule Hyperparameter & Value \\\\\n \\midrule\n Image size & 392 $\\times$ 392 \\\\\n Training epochs & 3 \\\\\n Optimizer & LAMB \\\\ \n Batch size & 5120 \\\\\n Base LR & $2 \\times 10^{-4}$ \\\\\n Weight decay & $3 \\times 10^{-4}$ \\\\\n \\bottomrule\n \\end{tabular}\n\\end{table}\n\n\n\\subsection{More Visualizations of Word-patch Alignment and Grad-CAM Heatmaps}\n\\label{apdx:more_vis}\nIn Figure~\\ref{fig:MoreImageNetVis}, we visualize the cross-modal alignment of the proposed method for more images, in terms of\nboth word-patch alignment as described in Section~\\ref{sec:Visualization of Fine-grained Alignment} and Grad-CAM heatmaps~\\citep{selvaraju2017grad}.\nWe compute the Grad-CAM heatmaps based on the average self-attention maps over the image patches classified to the targeted textual tokens (i.e., the textual token(s) corresponding to the class label in the ImageNet dataset) in the last layer of the image encoder. 
We average the heatmaps over all attention heads.\nAs can be seen, our proposed model learns meaningful alignment between image patches and textual tokens.\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{figures\/ImageNetVis_appendix.pdf}\n\\caption{More visualizations on different classes of ImageNet dataset. Numbers in the parentheses after the class label indicate the location indices of class label in the tokenized textual sequence. }\n\\label{fig:MoreImageNetVis}\n\\end{center}\n\\end{figure}\n\n\n\\subsection{Prompt Templates for downstream tasks}\n\\label{apdx:prompt_template}\n\\paragraph{Image Classification.} Table~\\ref{tbl:prompt_templates} shows the prompt templates for different image classification datasets\nin the form of ``\n$\n\\text{[prefix] \\{label\\}, [category description]. [suffix].} \n$\n''\nin Equation (\\ref{eq:prompt_template}).\nThere are three components to be determined in the template, i.e., the prefix, the category description and the suffix.\nFor each component, we select several well-performed ones for each dataset. \nThen we use the full combinations of all three components as the set of prompt templates for ensemble.\nFor instance, we use 5 prefixes, no category descriptions, and 6 suffixes for dataset ImageNet. \nThen the total number of prompt templates for this dataset is: $5 \\times 1 \\times 6=30$.\n\n\\paragraph{Image-text Retrieval.} Following CLIP \\citep{radford2021learning}, we use prompt in zero-shot image-text retrieval for both Flickr30K and MSCOCO datasets.\nThe prompt is selected by the same rule as described in Section~\\ref{sec:prompt_ensemble}, except that we do not use ``[category description]'' here. Table \\ref{tab:prompts_for_retrieval} shows the prompt templates for zero-shot image-text retrieval on Flickr30K and MSCOCO datasets. \n\n\n\\begin{table}\n\\small\n \\centering\n \\caption{Prompt templates used for 12 downstream image classification tasks. 
\n }\n \\label{tbl:prompt_templates}\n \\resizebox{\\textwidth}{!}{\n \\begin{tabular}{m{1.5cm}|m{4cm}|m{2cm}|m{4cm}}\n \t\\toprule\n \tDataset & Prefix & Category description & Suffix \\\\ \\midrule\n \tCIFAR10 & ``a photo of a\", ``a jpeg photo of a\", ``a painting of a\", ``itap of a\", ``graffiti of a\", ``a cartoon\", ``a doodle\" & None & None, ``It's common in daily life\", ``It's cute\", ``It's ugly\", ``It's weird\", ``Hope you like it\" \\\\ \\midrule\n \tCIFAR100 & ``a jpeg photo of a\", ``a painting of a\", ``a good photo of a\", ``a bad photo of a\", ``a photo of a\", ``itap of a\", ``a rendering of a\" & None & None, ``It's common in daily life\", ``It's beautiful\", ``It's ugly\", ``I like it\", ``I take it today\" \\\\ \\midrule\n \n \tCaltech101 & ``a photo of a\", ``a cropped photo of a\", ``a good photo of a\", ``a bad photo of a\" & None & None, ``I like it\", ``I hate it\", ``It's ugly\", ``It's cute\" \\\\ \\midrule\n \tStanford-Car & ``a photo of a\", ``a close-up photo of a\", ``a good photo of a\", ``a bad photo of a\" & ``a type of car\", ``a type of automobile\" & ``I like it\", ``It belongs to my friend\", ``It's brand new\", ``It's popular recently\", ``It's important to me\", ``I take it today\" \\\\ \\midrule\n \tFlowers102 & ``a photo of a (many) \", ``a rendering of a (many) \", ``itap of a (many) \" & ``a type of flower\", ``a type of bloom\" & ``It's beautiful\", ``It's from my best friend\", ``It gives out a sweet perfume\/fragrance\" \\\\ \\midrule\n \tImageNet & ``a photo of a\", \"a good photo of a\", ``a bad photo of a\", ``a close-up photo of a\", ``itap of a\" & None & ``I like it\", ``It's common in daily life\", ``It's not common in daily life\", ``It's ugly\", ``It's cute\", ``It's beautiful\" \\\\ \\midrule\n \tFood101 & ``a photo of my\", ``a close-up photo of my\", ``itap of my\" & ``a type of food\", ``a type of nourishment\" & ``I made it today\", ``I like it\", ``I hate it\", ``It's delicious\", ``It's with nice flavour\", ``It's with terrible flavour\", ``It's popular recently\" \\\\ \\midrule\n \tSUN397 & ``a photo of a\", ``a good photo of a\", ``a bad photo of a\", ``a bright photo of a\", a dark photo of a\", ``a black and white photo of a\", ``a nice scene of a\", ``a terrible scene of a\" & None & None, ``I like it\", ``I hate it\", ``It's beautiful\", ``It's common in daily life\", ``It's important to me\" \\\\ \\midrule\n \tDTD & ``itap of a\", ``a close-up photo of a\" & ``texture\", ``surface\", ``material\" & None, ``It's out of style\", ``It's popular in old days\", ``It's ugly\", ``It's beautiful\" \\\\ \\midrule\n \tAircrafts & ``a photo of the\", ``a close-up photo of the\", ``a good photo of the \", ``a pixelated photo of the\" & ``a type of plane\", ``a type of aircraft\", ``a type of airliner\" & None,``I like it\", ``It's important to me\", ``I take it today\", ``Hope you like it\" \\\\ \\midrule\n \tOxford Pet & ``a photo of my\", ``a low resolution photo of my\", ``a good photo of my\" & ``a type of pet\", ``a type of dog or cat\" & None, ``It's cute\", ``It's important to me\", ``I like it\", ``It's beautiful\" \\\\ \\midrule\n \tEuroSAT & ``a photo of a\", ``a painting of a\", ``a cropped photo of a\", ``a good photo of a\", ``a blurry photo of a\" & None, ``an example of aerial or satellite images\" & None, ``I like it\", ``It's taken from an aircraft or some flying object\", ``It's collected by imaging satellites\" \\\\ \\bottomrule\n \\end{tabular}\n }\n\\end{table}\n\n\n\n\\begin{table}[!t]\n\\small\n 
\\centering\n \\caption{Prompt templates used for zero-shot image-text retrieval on Flickr30K and MSCOCO datasets. \n }\n \\label{tab:prompts_for_retrieval}\n \\resizebox{0.8\\textwidth}{!}{\n \\begin{tabular}{m{1.5cm}|m{3cm}|m{3cm}|m{1.5cm}}\n \t\\toprule\n \tDataset & Task & Prefix & Suffix \\\\ \\midrule\n \t\\multirow{2}{*}{ Flickr30K } & image-to-text retrieval & ``a good photo of the'' & ``I hate it.'' \\\\ \n \t & text-to-image retrieval & ``a good photo of'' & None \\\\ \\midrule\n \t\\multirow{2}{*}{ MSCOCO } & image-to-text retrieval & ``a good photo of'' & ``It is ugly.'' \\\\ \n \t & text-to-image retrieval & None & None \\\\ \\bottomrule\n\n \\end{tabular}\n }\n\\end{table}\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \n\\label{section:intro}\n\n\nIn [IKoT1, IKoT2] we showed that there is an interesting relation \nbetween the exact WKB theory and the topological recursion for the \nconfluent family of the Gauss hypergeometric differential equations, \nthat is, we verified that the Voros coefficients of (confluent) \nhypergeometric equations are expressed as the difference values \nof the free energy of the spectral curve obtained as the classical \nlimit of the equations. In this paper we discuss the extension of \nthis result to a family of hypergeometric differential equations \nassociated with $2$-dimensional degenerate Garnier systems. \n\nThe $N$-dimensional Garnier system is a Hamiltonian system with $N$ \nvariables obtained through monodromy preserving deformations of second \norder linear differential equations on $\\mathbb{P}^1$ with $N+3$ regular \nsingular points. In the case of $N=1$, the system reduces to the sixth \nPainlev\\'e equation $P_{\\rm VI}$ and the Gauss hypergeometric function \ngives a particular solution of $P_{\\rm VI}$. In this sense the Gauss \nhypergeometric equation and its confluent version are associated with \nPainlev\\'e equations (i.e., $1$-dimensional Garnier system). In the \nsame manner a confluent family of hypergeometric differential systems \nwith two independent variables are associated with the $2$-dimensional \nGarnier system according to the following diagram of degeneration: \n\\begin{figure}[htbp]\n$$\n\\xymatrix@!C=25pt@R=7pt{\n&&\n&& \\text{$K(1,2,2)$} \\ar@{->}[rr] \\ar@{->}[rrdd]\n&& \\text{$K(2,3)$} \\ar@{->}[rrd] \n&&\n&&\n\\\\\n\\text{$K(1,1,1,1,1)$} \\ar@{->}[rr] \n&& \\text{$K(1,1,1,2)$} \\ar@{->}[rru] \\ar@{->}[rrd] \n&&\n&&\n&& \\text{$K(5)$} \n\\\\\n&&\n&& \\text{$K(1,1,3)$} \\ar@{->}[rr] \\ar@{->}[rruu]\n&& \\text{$K(1,4)$} \\ar@{->}[rru] \n&&\n&&\n}\n$$\n\\label{fig:confluence}\n\\end{figure}\n\n\\noindent\nHere $K(1,1,1,1,1)$ designates the $2$-dimensional Garnier system and, \nin general, the symbol $(\\#)=(r_1, \\ldots, r_m)$ means that its underlying \nmonodromy preserving deformation is concerned with a linear equation \nwith $m$ singular points of Poincar\\'e ranks $r_1-1, \\ldots, r_m-1$. \nIn what follows a hypergeometric differential system with two independent \nvariables is called a hypergeometric system of type $(\\#)$ when it is \nassociated with a confluent $2$-dimensional Garnier system $K(\\#)$. 
\nAmong them, in this article, we consider the hypergeometric systems of \ntype $(1,4)$ and $(2,3)$, or the following two third-order ordinary \ndifferential equations obtained from the hypergeometric systems of \ntype $(1,4)$ and $(2,3)$ by fixing the second variable $x_2 = t$: \nThe first one is \n\\begin{equation}\n\\label{eq:intro:(1,4)_eq(d\/dx)} \n\t\\left\\{ 3 \\hbar^3 \\frac{d^3}{dx^3} + 2t \\hbar^2 \\frac{d^2}{dx^2} + x \\hbar \\frac{d}{dx} \n\t\t\t- \\hat{\\lambda}_\\infty \\right\\} \\psi = 0, \n\\end{equation} \nwhich is called the hypergeometric equation of type $(1,4)$, and the second one is \n\\begin{equation}\n\\label{eq:intro:(2,3)_eq(d\/dx)}\n\t\\left\\{ 4 \\hbar^3 \\frac{d^3}{dx^3} - 2 x \\hbar^2 \\frac{d^2}{dx^2} \n\t\t\t+ 2 ( \\hat{\\lambda}_\\infty - \\hbar ) \\hbar \\frac{d}{dx} - t \\right\\} \\psi = 0, \n\\end{equation}\nwhich is called the hypergeometric equation of type $(2,3)$. \n\nThe purpose of this article is to show that the Voros coefficients \nof \\eqref{eq:intro:(1,4)_eq(d\/dx)} and \\eqref{eq:intro:(2,3)_eq(d\/dx)} \nare expressed as the difference values of the free energy \ndefined through the topological recursion due to Eynard and Orantin \\cite{EO} \n(\\thmref{main(i)}) and further that the explicit forms \nof Voros coefficients and the free energy of \n\\eqref{eq:intro:(1,4)_eq(d\/dx)} and \\eqref{eq:intro:(2,3)_eq(d\/dx)} can be \nobtained by using this relation between these two quantities \n(\\thmref{main(iii)} and \\thmref{main(iv)}). \n\nVoros coefficients are defined as contour integrals of the logarithmic \nderivative of WKB solutions in the thoery of the exact WKB analysis. \nIts importance in the study of the global behavior of solutions has been already recognized by the pioneering work of Voros (\\cite{Voros83}). The explicit form of Voros coefficients plays an important role to describe parametric Stokes phenomena, which are Stokes phenomena with respect to parameters included in the equation. \nThe explicit form of the Voros coefficients is now known for the confluent family of the Gauss hypergeometric equation and the hypergeometric equation of type (1,4) (\\cite{SS, Takei08, KoT11, ATT, Aoki-Tanda, AIT, IKo}). \nVoros coefficients are also studied for the Painlev\\'e equations to clarify the parametric Stokes phenomena (\\cite{I14}). \n\nOn the other hand, the topological recursion introduced by Eynard and \nOrantin (\\cite{EO}) is a generalization of the loop equations that the \ncorrelation functions of the matrix model satisfy. For a Riemann surface \n$\\Sigma$ and meromorphic functions $x$ and $y$ on $\\Sigma$, it \nproduces an infinite tower of meromorphic differentials $W_{g,n}(z_1, \\ldots, z_n)$ on $\\Sigma$. \nA triplet $(\\Sigma, x, y)$ is called a spectral curve and $W_{g,n}(z_1, \\ldots, z_n)$ is called a correlation function. As is shown in \\cite{GS, DM, BE} etc., \nthe quantization scheme connects WKB solutions of \ndifferential equations with the topological recursion. More precisely, \nWKB solutions of a differential equation are constructed also by \ncorrelation functions for the spectral curve corresponding to the \nclassical limit of the differential equation provided that the spectral curve \nsatisfies the so-called ``admissibility condition\" (cf. \\ \\cite[Definition 2.7]{BE}). \nMoreover, for a spectral curve, we can define free energies \n(also called symplectic invariants) $F_g$. For more details about the topological recursion, \nsee, e.g., the review paper \\cite{EO-08}. 
\nThe main results of this paper as well as those of \\cite{IKT-part1,IKT-part2} \nstrengthen the interplay between the WKB theory and the topological recursion \nand these interplays are expected to produce more profound insights in these theories. \n\nThe paper is organized as follows: \nIn \\S2 we recall some fundamental facts about \nthe exact WKB analysis and Eynard-Orantin's topological recursion. \nIn \\S3 we study quantization of the spectral curve. \nIn \\S4 we state our main theorem (\\thmref{main(i)}, \\thmref{main(ii)}, \\thmref{main(iii)} and \\thmref{main(iv)}). \nWe give a proof of our result only for (1,4) curve, but (2,3) curve can be treated similarly. \n\\section*{Acknowledgement}\nThe author would like to express my special thanks to late Professor Tatsuya Koike. \nThe author is also very grateful to Professors \nTakashi Aoki, \nSampei Hirose, \nKohei Iwaki, \nShingo Kamimoto, \nTakahiro Kawai, \nSaiei Matsubara,\nGenki Shibukawa, \nTakahiro Shigaki, \nNobuki Takayama, \nYoshitsugu Takei \nand \nMika Tanda \nfor helpful discussions and communications. \n\n\n\n\\section{Voros coefficients and the topological recursion}\n\\label{sec:review}\n\n\n\\subsection{WKB solutions}\n\\label{subsec:WKB-sol}\n\nIn this article we discuss the third order ordinary differential equation with a small parameter $\\hbar \\ne 0$ \nof the form\n\\begin{equation}\n\\label{eq:3rd-ODE}\n\t\\left\\{\n\t\tp_0(x, \\hbar) \\hbar^3 \\frac{d^3}{dx^3} + p_1(x, \\hbar) \\hbar^2 \\frac{d^2}{dx^2} \n\t\t+ p_2(x, \\hbar) \\hbar \\frac{d}{dx} + p_3(x, \\hbar)\n\t\\right\\}\\psi = 0,\n\\end{equation}\nwhere $x \\in \\mathbb{C}$, and\n\\begin{equation}\n\tp_i(x, \\hbar) = p_{i,0}(x) + \\hbar p_{i,1}(x) \\quad (i = 0, 1, 2, 3)\n\\end{equation}\nwith rational functions $p_{i,j}(x)$ ($i = 0, 1, 2, 3$, $j = 0, 1$) and \n\\begin{equation}\n\tp_{0,1}(x) = p_{1,1}(x) = 0. \n\\end{equation} \nWe consider \\eqref{eq:3rd-ODE} as a differential equations on the Riemann sphere $\\mathbb{P}^1$ \nwith regular or irregular singular points. \nFor \\eqref{eq:3rd-ODE} we can construct a formal solution, called a WKB solution, of the form \n\\begin{align}\n\\label{eq:WKB-type}\n\t\\psi (x, \\hbar) \n\t\t&= \\exp \\left[ \\int^x S(x, \\hbar) dx \\right].\n\\end{align}\nThe logarithmic derivative $S(x, \\hbar)$ of WKB solutions of \\eqref{eq:3rd-ODE} satisfies the equation \n\\begin{equation}\n\\begin{split}\n\\label{eq:Riccati-gen}\n\tp_0(x, \\hbar) \\hbar^3 \n\t\t\\left( \\frac{d^2}{dx^2} S(x, \\hbar) + 3 S(x, \\hbar) \\frac{d}{dx} S(x, \\hbar) + {S(x, \\hbar)}^3 \\right) \n\t+ p_1(x, \\hbar) \\hbar^2 \\left( \\frac{d}{dx} S(x, \\hbar) + {S(x, \\hbar)}^2 \\right) \\\\\n\t\\qquad + \\hbar p_2(x, \\hbar) S(x, \\hbar) + p_3(x, \\hbar) = 0. \n\\end{split}\n\\end{equation}\nEq. \\eqref{eq:Riccati-gen} is a counterpart of the Riccati equation in the second-order case \nand admits a solution of the form \n\\begin{align}\n\\label{eq:Riccati-gen-expansion}\n\tS(x, \\hbar) \n\t\t&:= \\hbar^{-1} S_{-1}(x) + S_0(x) + \\hbar S_1(x) + \\cdots \n\t\t= \\sum_{m = -1}^{\\infty} \\hbar^m S_m(x). 
\n\\end{align}\nIn fact, by substituting \\eqref{eq:Riccati-gen-expansion} into \\eqref{eq:Riccati-gen}, and comparing like powers of both sides with respect to $\\hbar$, we obtain \n\\begin{align}\n\\label{eq:Riccati-gen-1}\n\tp_{0,0}(x) S_{-1}^3 + p_{1,0}(x) S_{-1}^2 + p_{2,0}(x) S_{-1} + p_{3,0}(x) &= 0,\\\\\n\\label{eq:Riccati-gen-2}\n\t\\left(3 p_{0,0}(x) S_{-1}{}^2 +2 p_{1,0}(x) S_{-1} +p_{2,0}(x)\\right) S_0 \n\t+ 3 p_{0,0}(x) S_{-1} \\frac{d S_{-1}}{dx} +p_{1,0}(x) \\frac{d S_{-1}}{dx} \\\\\n\t+p_{2,1}(x) S_{-1} +p_{3,1}(x) &= 0, \\notag \n\\end{align}\nand\n\\begin{align}\n\\label{eq:Riccati-gen-3}\n\t\\left(3 p_{0,0}(x) S_{-1}{}^2 +2 p_{1,0}(x) S_{-1} +p_{2,0}(x)\\right) S_{m + 1} \n\t+ \\sum_{\\substack{i + j + k = m-1 \\\\ i, j, k \\geq 0}} S_{i} S_{j} S_{k} + 3 \\sum_{j = 0}^{m-1} S_{m - j - 1} S_{j} \n\t\\\\ \n\t+ 3 p_{0,0}(x) S_m \\frac{d S_{-1}}{dx} + 3 p_{0,0}(x) S_{-1} \\frac{d S_{m}}{dx} \n\t+ p_{0,0}(x)\\frac{d^2 S_{m-1}}{dx^2} \n\t+ p_{1,0}(x) \\sum_{j = 0}^m S_{m - j} S_{j} \\notag \\\\\n\t+ p_{1,0}(x) \\frac{d S_{m}}{dx} + p_{2,1}(x) S_m = 0 \\quad (m \\geq 0). \\notag \n\\end{align} \nEq. \\eqref{eq:Riccati-gen-1} has three solutions, and once we fix one of them, \nwe can determine $S_m$ for $m \\geq 0$ uniquely and recursively \nby \\eqref{eq:Riccati-gen-2} and \\eqref{eq:Riccati-gen-3}. \n\n\n\n\\subsection{Voros coefficients}\n\\label{subsec:Voros-coeff}\n\nA Voros coefficient is defined as a properly regularized integral of $S(x, \\hbar)$ along \na path connecting singular points of \\eqref{eq:3rd-ODE}. \nWhen $S_m(x)$ with $m \\geq 1$ is integrable at any singular point of \\eqref{eq:3rd-ODE}, \nwe can define Voros coefficients by \n\\begin{equation}\\label{eq:def-Voros-coeff}\nV_{\\gamma_{b_1, b_2}}(\\hbar)\n:= \\int_{\\gamma_{b_1, b_2}}\n\\big( S(x, \\hbar) -\\hbar^{-1}S_{-1}(x) - S_0(x)\\big) dx\n= \\sum_{m = 1}^{\\infty} \\hbar^m \\int_{\\gamma_{b_1, b_2}} S_m(x) dx,\n\\end{equation}\nwhere $\\gamma_{b_1, b_2}$ is a path from a singular point $b_1$ to a singular point $b_2$.\n(When there is no need to specify a path $\\gamma_{b_1,b_2}$, we use the abbreviated \nnotation $V(\\hbar)$ instead of $V_{\\gamma_{b_1, b_2}}(\\hbar)$.)\nNote that Voros coefficients only depend on the class \n$[\\gamma_{b_1, b_2}]$ of paths in the relative homology group\n$$\nH_1 \\big(\\mathbb{P}^1 \\setminus\n\\{\\text{Turning points}\\},\n\\{\\text{Singular points}\\}; \\mathbb{Z} \\big).\n$$\nSuch an integration contour (or a relative homology class) \ncan be understood as a lift of a path on $x$-plane \nonto the Riemann surface of $S_{-1}(x)$ \n(i.e., three sheeted covering of $x$-plane). \nThe lift of a path is specified by drawing branch cuts and distinguishing the \nfirst, second and third sheets of the Riemann surface.\n\n\n\\subsection{The global topological recursion}\n\\label{sec:TR}\n\nLet us first fix notation. \nWe restrict ourselves to the case when a spectral curve is of genus $0$ \nbecause we will not discuss the general case in this paper \n(see \\cite{BE-12} for the general definition). \n\n\\begin{dfn} \\label{def:spectral-curve}\nA spectral curve (of genus $0$) is a pair $(x(z), y(z))$\nof non-constant rational functions on $\\mathbb{P}^1$, \nsuch that their exterior differentials \n$dx$ and $dy$ never vanish simultaneously. 
\n\\end{dfn}\n\nLet $R$ be the set of ramification points of $x(z)$, \ni.e., $R$ consists of zeros of $dx(z)$ of any order and poles of $x(z)$ \nwhose orders are greater than or equal to two \n(here we consider $x$ as a branched covering map from $\\mathbb{P}^1$ to itself). \nWe further assume that \n\n\\begin{itemize}\n\\item[(A1)]\nA function field $\\mathbb{C}(x(z), y(z))$ coincides with $\\mathbb{C}(z)$.\n\n\\item[(A2)]\n\nIf $r$ is a ramification point which is a pole of $x(z)$, \nand if $Y(z) = - x(z)^2 y(z)$ is holomorphic near $r$,\nthen $dY(r) \\neq 0$.\n\n\\item[(A3)]\nAll of the ramification points of $x(z)$ are simple,\ni.e., the ramification index of each ramification point\nis two.\n\n\\item[(A4)]\nWe assume branch points are all distinct,\nwhere a branch point is defined as the image of\na ramification point by $x(z)$.\n\\end{itemize}\n\nWe need to introduce some notation to define the topological recursion. \n\n\\begin{dfn} \\label{def:effective-ramification}\nA ramification point $r$ is said to be ineffective if \nthe correlation functions $W_{g,n}(z_1,\\dots,z_n)$ \nfor $(g,n) \\ne (0,1)$ are holomorphic at $z_i = r$ for each $i=1,\\dots,n$. \nA ramification point which is not ineffective is called effective. \nThe set of effective ramification points is denoted by $R^{\\ast}$ $(\\subset R)$. \n\\end{dfn}\n\n\n\\begin{dfn}\nFor two sets $A$ and $B$, $A \\subseteq_k B$ means $A \\subseteq B$ and $|A| = k$. \n\\end{dfn}\n\n\\begin{dfn}\n$\\mathcal{S}(\\bm{t})$ denotes the set of set partitions of $\\bm{t} = \\{ t_1, \\ldots, t_k \\}$. \n\\end{dfn}\n\nThen, we define the recursive structure: \n\n\\begin{dfn}[{\\cite[Definition 3.4]{BE}}]\nLet $\\{ W_{g, n} \\}$ be an arbitrary collection of symmetric multidifferential on $(\\mathbb{P}^1)^n$ \nwith $g \\geq 0$ and $n \\geq 1$. Let $k \\geq 1$, \n$\\bm{t} = \\{ t_1, \\ldots, t_k \\}$ and $\\bm{z} = \\{ z_1, \\ldots, z_n \\}$. Then, we define \n\\begin{align}\n\\label{eq:R(k)}\n\t{\\mathcal{R}}^{(k)} \\left(W_{g, n+1}(\\bm{t}; \\bm{z})\\right) \n\t\t&:= \\sum_{\\mu \\in \\mathcal{S}(\\bm{t})} \n\t\t\t\\sum_{\\sqcup_{i=1}^{l(\\mu)} I_i = \\{1, 2, \\cdots, n\\}} \n\t\t\t\\sum'_{\\sum_{i=1}^{l(\\mu)} g_i = g + l(\\mu) - k} \n\t\t\t\\left\\{ \\prod_{i=1}^{l(\\mu)} W_{g_i, |\\mu_i| + |I_i|}(\\mu_i, z_{I_i}) \\right\\}. \n\\end{align}\nThe first summation in \\eqref{eq:R(k)} is over set partitions of $\\bm{t}$, \n$l(\\mu)$ is the number of subsets in the set partition $\\mu$. \nThe third summation in \\eqref{eq:R(k)} is over all $l(\\mu)$-tuple \nof non-negative integers $(g_1, \\ldots, g_{l(\\mu)})$ such that $\\sum_{i=1}^{l(\\mu)} g_i = g + l(\\mu) - k$. \n$\\sqcup$ denotes the disjoint union, \nand the prime ${}'$ on the summation symbol in \\eqref{eq:R(k)} means that we exclude terms for\n$(g_i, |\\mu_i| + |I_i|) = (0, 1)$ ($i = 1, \\ldots, l(\\mu)$)\n(so that $W_{0, 1}$ does not appear) in the sum. \nWe also define \n\\begin{align}\n\t{\\mathcal{R}}^{(0)} W_{g, n+1}(\\bm{z}) &:= \\delta_{g,0} \\delta_{n,0}, \n\\end{align}\nwhere $\\delta_{i,j}$ is the Kronecker delta symbol. \n\\end{dfn}\n\n\\begin{ex} \nFor $k=2$, $\\mathcal{S}(\\bm{t})$ is given by \n\\begin{align}\n\\mathcal{S}(\\{ t_1, t_2 \\}) \n= \\Bigl\\{ \\bigl\\{ \\{ t_1, t_2 \\} \\bigr\\}, \\bigl\\{\\{ t_1 \\}, \\{ t_2 \\} \\bigr\\} \\Bigr\\}. 
\n\\end{align}\nTherefore, we have \n\\begin{align}\n{\\mathcal{R}}^{(2)} \\left(W_{g, n+1}(\\bm{t}; \\bm{z})\\right) \n&= W_{g-1,n+2}(\\bm{t}, \\bm{z}) \n\t+ \\sum_{I_1\\sqcup I_2 = \\{1, 2, \\cdots, n\\} } \n\t\t\\sum'_{g_1 + g_2 = g - 1} \n\t\t\\left\\{ \\prod_{i=1}^{2} W_{g_i, 1 + |I_i|}(t_i, z_{I_i}) \\right\\}. \n\\end{align}\n\\end{ex}\n \nWe now define the topological recursion. \n\n\\begin{dfn}[{\\cite[Definition 3.6]{BE}}]\nEynard-Orantin's correlation function\n$W_{g, n}(z_1, \\cdots, z_n)$ for $g \\geq 0$ and $n \\geq 1$ \nis defined as a multidifferential \non $(\\mathbb{P}^1)^n$ using the recurrence relation\n\\begin{align}\n\\label{eq:gTR}\n\t&W_{g, n+1}(z_0, z_1, \\cdots, z_n) \\\\\n\t&:= \\sum_{r \\in R} \\mathop{\\rm{Res}}_{z = r} \\left\\{ \n\t\t\t\\sum_{k=1}^{r-1} \\sum_{\\beta(z) \\subseteq_k \\tau'(z)} \n\t\t\t(-1)^{k+1} \\frac{w^{z - \\alpha}(z_0)}{E^{(k)}(z;\\beta(z))} \n\t\t\t{\\mathcal{R}}^{(k+1)} \\left(W_{g, n+1}(z, \\beta(z);z_1, \\cdots, z_n)\\right) \n\t\t\\right\\} \\notag\n\\end{align}\nfor $2g + n \\geq 2$ with initial conditions\n\\begin{align}\nW_{0, 1}(z_0) &:= y(z_0) dx(z_0),\n\\quad\nW_{0, 2}(z_0, z_1) = B(z_0, z_1)\n:= \\frac{dz_0 dz_1}{(z_0 - z_1)^2}.\n\\end{align}\nHere we set $W_{g,n} \\equiv 0$ for a negative $g$ and \n\\begin{align}\n\\label{eq:E(k)}\n\tE^{(k)}(z; t_1, \\ldots, t_k)\n\t\t&:= \\prod_{i=1}^k (W_{0,1}(z) - W_{0,1}(t_i)). \n\\end{align}\nThe second and third summations in \\eqref{eq:gTR} together mean that \nwe are summing over all subsets of $\\tau'(z)$. \n$\\alpha$ is an arbitrary base point on $\\mathbb{P}^1$, but it can be checked (see \\cite{BE-12}) that the definition is actually independent of the choice of base point $\\alpha$.\nWe have also used the multi-index notation:\nfor $I = \\{i_1, \\cdots, i_m\\} \\subset \\{1, 2, \\cdots, n\\}$\nwith $i_1 < i_2 < \\cdots < i_m$, $z_I:= (z_{i_1}, \\cdots, z_{i_m})$.\n\\end{dfn}\n\nNote that this recursion was called ``global topological recursion\" in \\cite{BE-12}. \nIt was shown in \\cite{BE-12} that it is indeed equivalent to the following \nusual local formulation of the topological recursion when the ramification points are all simple. \n\n\n\\subsection{Free energy through the topological recursion}\n\\label{subsec:free-energy}\n\nThe $g$-th free energy $F_g$ ($g\\geq 0$) is a complex number\ndefined for the spectral curve,\nand one of the most important objects in Eynard-Orantin's theory.\nIt is also called a symplectic invariant since it is \n``almost'' invariant under symplectic transformations\nof spectral curves (see \\cite{EO-13} for the details). \n\n\\begin{dfn}[{\\cite[Definition 4.3]{EO}}]\nFor $g \\geq 2$, the $g$-th free energy $F_g$ is defined by\n\\begin{equation}\n\\label{def:Fg2}\nF_g := \\frac{1}{2- 2g} \\sum_{r \\in R} \\mathop{\\rm{Res}}_{z = r}\n\\big[\\Phi(z) W_{g, 1}(z) \\big]\n\\quad (g \\geq 2),\n\\end{equation}\nwhere $\\Phi(z)$ is a primitive of $y(z) dx(z)$. \nFor $g=1$, we define the free energy $F_1$ satisfying \\eqref{eq:variational_free-energy}. \nThe free energies $F_0$ for $g=0$ is also defined, but in a different manner \n(see \\cite[\\S 4.2.3]{EO} for the definition). \n\\end{dfn}\nNote that the right-hand side of \\eqref{def:Fg2} does not\ndepend on the choice of the primitive\nbecause $W_{g, 1}$ has no residue at each ramification point. 
\n\nIn applications (and in our article), the generating series\n\\begin{equation} \\label{eq:total-free-energy}\nF := \\sum_{g = 0}^{\\infty} \\hbar^{2g-2} F_g\n\\end{equation}\nof $F_g$'s is crucially important. \nWe also call the generating series \\eqref{eq:total-free-energy} \nthe free energy of the spectral curve. \n\n\n\\subsection{Variational formulas for the correlation functions}\n\\label{subsec:variational-formula-TR}\n\nIn \\S \\ref{subsection:quantum-(1,4)} and \\S \\ref{subsection:quantum-(2,3)} \nwe will consider a family of spectral curves parametrized by complex parameters. \nFor our purpose, we briefly recall the variational formulas obtained \nby \\cite[\\S 5]{EO} which describe the differentiation \nof the correlation functions $W_{g,n}$ and the free energies $F_g$ \nwith respect to the parameters.\n\nSuppose that we have given a family \n$(x_\\varepsilon(z), y_\\varepsilon(z))$\nof spectral curves parametrized by a complex parameter \n$\\varepsilon$ which lies on a certain domain $U \\subset {\\mathbb C}$ \nsuch that \n\\begin{itemize}\n\\item \n$x_\\varepsilon(z), y_\\varepsilon(z)$ depend \nholomorphically on $\\varepsilon \\in U$. \n\\item \n$x_\\varepsilon(z), y_\\varepsilon(z)$ \nsatisfy the assumptions (A1) -- (A4) \nfor any $\\varepsilon \\in U$. \n\\item \nThe cardinality of the set $R_\\varepsilon$ of \nramification points of $x_\\varepsilon(z)$ is constant on $\\varepsilon \\in U$\n(i.e. ramification points of $x_\\varepsilon(z)$ \nare distinct for any $\\varepsilon \\in U$).\n\\end{itemize}\nThen, the correlation functions $W_{g,n}(z_1, \\dots, z_n; \\varepsilon)$ \nand the $g$-th free energy $F_g(\\varepsilon)$ defined from the spectral curve \n$(x_\\varepsilon(z), y_\\varepsilon(z))$\nare holomorphic in $\\varepsilon \\in U$ \nas long as $z_i \\notin R_\\varepsilon$ for any $i=1,\\dots,n$. \n\nIn order to formulate a variational formula for correlation functions, \nwe need to introduce the notion of ``differentiation with fixed $x$\". \nFor a meromorphic differential $\\omega(z; \\varepsilon)$ on ${\\mathbb P}^1$, \nwhich depends on $\\varepsilon$ holomorphically, define \n\\begin{equation}\n\\delta_{\\varepsilon} \\, \\omega(z; \\varepsilon) \n:= \\left( \n\\frac{\\partial}{\\partial \\varepsilon} \\omega(z_{\\varepsilon}(x); \\varepsilon) \n\\right) \\biggl|_{x=x_{\\varepsilon}(z)} \n\\quad (z \\notin R_\\varepsilon), \n\\end{equation}\nwhere $z_{\\varepsilon}(x)$ is (any branch of) the inverse function of \n$x = x_{\\varepsilon}(z)$ which is defined away from branch points \n(i.e. points in $x_{\\varepsilon}(R_\\varepsilon)$). \nIn \\cite{EO} the notation \n$\\delta_{\\Omega} \\, \\omega(z; \\varepsilon) \\big|_{x(z)}$ \nis used for $\\delta_{\\varepsilon} \\omega(z; \\varepsilon)$ defined above.\nSuch differentiation $\\delta_{\\varepsilon}$ can be generalized \nto multidifferentials in an obvious way. 
\nThen, under these assumptions, the variational formula is formulated as follows.\n\n\\begin{thm}[{\\cite[Theorem 5.1]{EO}}] \\label{thm:VariationFormula}\nIn addition to the above conditions, for any $\\varepsilon \\in U$, \nwe further assume that \n\\begin{itemize}\n\\item \nIf $r_\\varepsilon \\in R_\\varepsilon$ is a zero of \n$dx_\\varepsilon(z)$, then the functions\n$\\partial x_\\varepsilon\/ \\partial \\varepsilon$ and \n$\\partial y_\\varepsilon\/ \\partial \\varepsilon$ are holomorphic\n(as functions of $z$) at $r_\\varepsilon$, \nand $dy_\\varepsilon(z)$ does not vanish\n(as a differential of $z$) at $r_\\varepsilon$.\n\\item \nIf $r_\\varepsilon \\in R_\\varepsilon$ is a pole of $x_\\varepsilon(z)$ \nwith an order greater than or equal to two, then \n\\[\n\\frac{\\Omega_\\varepsilon(z) \\, B(z_1, z) \\, B(z_2 , z)}\n{dy_\\varepsilon(z) dx_\\varepsilon(z)}\n\\]\nis holomorphic (as a differential in $z$) at $r(\\varepsilon)$, where \n\\begin{equation} \\label{eq:Omega}\n\\Omega_\\varepsilon(z) := \n\\frac{\\partial y_\\varepsilon}{\\partial \\varepsilon}(z) \\, dx(z)\n- \\frac{\\partial x_\\varepsilon}{\\partial \\varepsilon}(z) \\, dy(z).\n\\end{equation}\n\\item \nThere exist a path $\\gamma$ in $\\mathbb{P}^1$ passing through\nno ramification point and a function $\\Lambda_\\varepsilon (z)$ \nholomorphic in a neighborhood of $\\gamma$ for which the following holds.\n\\begin{equation}\n\\Omega_\\varepsilon(z) = \n\\int_{\\zeta \\in \\gamma} \\Lambda_\\varepsilon(\\zeta) \\, B(z, \\zeta).\n\\end{equation} \n\\end{itemize}\nThen, $W_{g,n}(z_1, \\dots, z_n; \\varepsilon)$ \nand $F_g(\\varepsilon)$ defined from the spectral curve \n$(x_\\varepsilon(z), y_\\varepsilon(z))$\nsatisfy the following relations: \n\\begin{itemize}\n\\item[{\\rm{(i)}}]\nFor $2g + n \\geq 2$, \n\\begin{equation}\n\\delta_{\\varepsilon} \\, W_{g, n} \n(z_1, \\cdots, z_n; \\varepsilon)\n= \\int_{\\zeta \\in \\gamma} \\Lambda_\\varepsilon(\\zeta) \\,\nW_{g, n + 1}(z_1, \\cdots, z_n, \\zeta; \\varepsilon) \n\\end{equation}\nholds on $\\varepsilon \\in U$ as long as each of $z_1, \\cdots, z_n$ satisfies \n$z_i \\notin R_\\varepsilon$. \n\n\n\\item[{\\rm{(ii)}}]\nFor $g \\geq 1$,\n\\begin{equation}\n\\label{eq:variational_free-energy}\n\\frac{\\partial F_g}{\\partial \\varepsilon}(\\varepsilon)\n= \\int_{\\gamma}\\Lambda_\\varepsilon(z) \\, W_{g, 1}(z;\\varepsilon)\n\\end{equation}\nholds on $\\varepsilon \\in U$.\n\n\n\\end{itemize}\n\n\\end{thm}\n\nSee \\cite[\\S 5.1]{EO} \n(based on the Rauch's variation formula; see \\cite{KK} for example) \nfor the proof.\nWe note that, since we modify the definition of the topological recursion \nby adding higher order poles of $x(z)$ as ramification point, \nwe also need to require the second condition in the above claim. \n\n\n\n\n\n\n\\section{Quantization of spectral curves}\n\\label{subsec:quantum-curve}\n\n\nWe treat the quantization by using the divisor with parameters which was introduced by \\cite{BE}. \n\n In this article, we consider the defining equation of the spectral curve\n\\begin{equation}\n\\label{eq:spectral-curve}\n\tP(x, y) = p_0(x) y^3 + p_1(x) y^2 + p_2(x) y + p_3(x) = 0. \n\\end{equation}\n\n\\begin{dfn}[{\\cite[Definition 2.3]{BE}}]\nLet us rewrite the defining equation \\eqref{eq:spectral-curve} of the spectral curve as \n\\begin{equation}\n\tP(x, y) = \\sum_{i, j \\in A} \\alpha_{i, j} x^i y^j = 0 \\quad (\\alpha_{i, j} \\ne 0).\n\\end{equation}\nThen the Newton polygon $\\Delta$ of \\eqref{eq:spectral-curve} is the convex hull of the set $A$. 
\n\\end{dfn}\n\n\\begin{dfn}[{\\cite[Definition 2.5]{BE}}]\nFor $m = 2, 3$, we define the following meromorphic function on $\\mathbb{P}^1$: \n\\begin{equation}\n\tP_m(x, y) = \\sum_{k = 1}^{m-1} p_{m-1-k}(x) y^k = 0. \n\\end{equation}\n\\end{dfn}\n\n\\begin{dfn}[{\\cite[Definition 2.7]{BE}}]\nWe say that a spectral curve is admissible if: \n\\begin{itemize}\n\t\\item[1.] Its Newton polygon $\\Delta$ has no interior point; \n\t\\item[2.] If the origin $(x, y) = (0, 0) \\in \\mathbb{C}^2$ is on the curve \n\t\t\t$\\{ P(x, y) = 0 \\subset \\mathbb{C}^2\\}$, then the curve is smooth at this point. \n\\end{itemize}\n\\end{dfn}\n\nWe assume that our spectral curve $(x(z), y(z))$ is admissible. \nThen the following theorem holds according to \\cite{BE}. \n\n\\begin{thm}[{\\cite[Lemma 5.14]{BE}}]\n\\label{thm:WKB-Wg,n-BE}\nLet $\\beta_i \\quad (1 \\leqq i \\leqq n)$ be simple poles of $x(z)$ and \n\\begin{equation}\n\\label{eq:D}\n\\begin{split}\n\tD(z ; \\underline{\\nu}) &= [z] - \\sum_{i=1}^{n} \\nu_i [\\beta_i]\n\\end{split}\n\\end{equation}\nbe a divisor on $\\mathbb{P}^1$, where $\\nu_i \\quad (1 \\leqq i \\leqq n)$ are complex numbers satisfying \n$\\sum_{i=1}^{n} \\nu_i = 1$. \nFor a differential $\\omega(z)$, we define its integration along the divisor $D(z; \\underline{\\nu})$ by \n\\[\n\\int_{D(z; \\underline{\\nu})} \\omega(z) = \n\\sum_{i=1}^{n} \\nu_i \\int^{z}_{\\beta_i} \\omega(z) \n\\]\nand extend the definition to multidifferentials in an obvious way. \nLet $W_{g, n}(z_1, \\cdots, z_{n})$ be the correlation functions of a spectral curve $(x(z), y(z))$ defined from (\\ref{eq:spectral-curve}). \nThen,\n\\begin{equation}\n\\label{eq:WKB-Wg,n}\n\\begin{split}\n\t\\psi(x, \\hbar)\n\t&= \\exp \\Bigg[ \\hbar^{-1} \\int^z W_{0, 1}(z) \n\t\t+ \\frac{1}{2!} \\int_{D(z ; \\nu)} \\int_{D(z ; \\nu)} \n\t\t\t\\left( W_{0, 2}(z_1, z_2) - \\frac{dx(z_1) \\, dx(z_2)}{(x(z_1) - x(z_2))^2} \\right) \\\\\n\t&\\quad \\left. \\left.\n\t + \\sum_{m = 1}^{\\infty} \\hbar^m \n\t \t\\left\\{ \\sum_{\\substack{2g + n - 2 = m \\\\ g \\geq 0, \\, n \\geq 1}} \n\t\t\t\\frac{1}{n!} \\int_{D(z ; \\nu)} \\cdots \\int_{D(z ; \\nu)} W_{g, n}(z_1, \\ldots, z_n) \n\t\t\\right\\} \\right] \\right|_{z = z(x)}\n\\end{split}\n\\end{equation}\nis a WKB type formal solution of \n\\begin{equation}\n\\begin{split}\n\\label{eq:quantization}\n\t\\left[ D_1 D_2 \\frac{p_0(x)}{x^{\\lfloor \\alpha_{3} \\rfloor}} D_{3} \n\t\t\t+ D_1 \\frac{p_1(x)}{x^{\\lfloor \\alpha_{2} \\rfloor}} D_{2} \n\t\t\t+ \\frac{p_2(x)}{x^{\\lfloor \\alpha_{1} \\rfloor}} D_{1} \n\t\t\t+ \\frac{p_3(x)}{x^{\\lfloor \\alpha_{0} \\rfloor}} \n\t\t\t- \\hbar C_1 D_1 \\frac{x^{\\lfloor \\alpha_{2} \\rfloor}}{x^{\\lfloor \\alpha_{1} \\rfloor}} \n\t\t\t- \\hbar C_2 \\frac{x^{\\lfloor \\alpha_{1} \\rfloor}}{x^{\\lfloor \\alpha_{0} \\rfloor}} \n\t\\right] \\psi = 0, \n\\end{split}\n\\end{equation}\nwhere \n\\begin{align*}\n\t\\alpha_m &= \\inf \\{ a \\mid (a, m) \\in \\Delta \\} \\quad (m = 0, 1, 2, 3), \\\\\n\tD_i \n\t&= \\hbar \\frac{x^{\\lfloor \\alpha_{i} \\rfloor}}{x^{\\lfloor \\alpha_{i-1} \\rfloor}} \\frac{d}{dx} \n\t\t\\quad (i = 1, 2, 3), \\\\\n\tC_k \n\t&= \\sum_{i = 1}^{n} \\nu_i \\left( \n\t\t\\lim_{z \\rightarrow {\\beta_i}} \\frac{P_{k+1}(x(z), y(z))}{{x(z)}^{\\lfloor \\alpha_{3-k} \\rfloor + 1}}\n\t\t\\right) \\quad (k = 1, 2). 
\n\\end{align*}\n\\end{thm}\n\n\\begin{rem}\nIt is mentioned by \\cite[Remark 5.12]{BE} that \nit is also possible to choose a pole of $x$ of order more than one as $\\beta_i$ in \\eqref{eq:D}\nwhen $\\beta \\notin R^{\\ast}$. \nTherefore, we can use \\thmref{WKB-Wg,n-BE} in the case (1,4) and (2,3) curve in the next section. \n\\end{rem}\n\n\n\\subsection{Quantum (1,4) curve}\n\\label{subsection:quantum-(1,4)}\n\nLet us consider the (1,4) curve defined by \n\\begin{equation}\n\\label{eq:(1,4)_P(x,y)}\n\tP(x, y) = 3 y^3 + 2t y^2 + x y - {\\lambda_\\infty} = 0, \n\\end{equation}\nwith parameters $t, {\\lambda_\\infty} \\ne 0$. \nA rational parameterization of this curve is \n\\begin{equation}\n\\label{eq:(1,4)_parameterization}\n\\begin{cases}\n\t\\displaystyle\n\tx = x(z) \n\t= \\frac{-3 z^3 - 2t z^2 + {\\lambda_\\infty}}{z} = -3 z^2 - 2t z + \\frac{{\\lambda_\\infty}}{z}, \\\\[10pt]\n\t\\displaystyle\n\ty = y(z) = z. \n\\end{cases}\n\\end{equation}\nFirst few terms of the correlation functions and free energies are computed as \n\\begin{align*}\n\tW_{0, 3}(z_1, z_2, z_3) \n\t&= \\biggl\\{ \n\t\t\\frac{2 z_1(15 {z_1}^5 - (9{z_2} + 9{z_3} - 4t) {z_1}^4 \n\t\t\t\t- (2t{z_2} + 2t{z_3} - 3{z_2}{z_3}) {z_1}^3 + {\\lambda_\\infty} {z_1}^2 \n\t\t\t\t- {\\lambda_\\infty} {z_2}{z_3})}\n\t\t\t{({z_1} - {z_2})^3 ({z_1} - {z_3})^3 (6 {z_1}^3 + 2t {z_1}^2 + {\\lambda_\\infty})^2} \\\\ \n\t&\\quad\n\t\t+ \\frac{2 z_2(15 {z_2}^5 - (9{z_3} + 9{z_1} - 4t) {z_2}^4 \n\t\t\t\t- (2t{z_3} + 2t{z_1} - 3{z_3}{z_1}) {z_2}^3 + {\\lambda_\\infty} {z_2}^2 \n\t\t\t\t- {\\lambda_\\infty} {z_3}{z_1})}\n\t\t\t{({z_2} - {z_3})^3 ({z_2} - {z_1})^3 (6 {z_2}^3 + 2t {z_2}^2 + {\\lambda_\\infty})^2} \\\\ \n\t&\\quad\n\t\t+ \\frac{2 z_3(15 {z_3}^5 - (9{z_1} + 9{z_2} - 4t) {z_3}^4 \n\t\t\t\t- (2t{z_1} + 2t{z_2} - 3{z_1}{z_2}) {z_3}^3 + {\\lambda_\\infty} {z_3}^2 \n\t\t\t\t- {\\lambda_\\infty} {z_1}{z_2})}\n\t\t\t{({z_3} - {z_1})^3 ({z_3} - {z_2})^3 (6 {z_3}^3 + 2t {z_3}^2 + {\\lambda_\\infty})^2}\n\t\t\\biggl\\} \\\\\n\t&\\quad\n\t\t\\times d{z_1} \\, d{z_2} \\, d{z_3}, \\\\\n\tW_{1, 1}(z)\n\t&= \\frac{z^2 (27 z^6 - 99 z^3 - 36 {\\lambda_\\infty} t z^2 - 4 {\\lambda_\\infty} t^2 z + 3 {\\lambda_\\infty}^2)}\n\t\t\t{(6 z^3 + 2t z^2 + {\\lambda_\\infty})^4} \\, dz,\n\\end{align*}\n\\begin{align*}\t\n\tF_0(\\lambda_\\infty, t)\n\t&= - \\frac{t^6}{972} + \\frac{2 {\\lambda_\\infty} t^3}{27} - \\frac{3 {\\lambda_\\infty}^2}{4} \n\t\t+ \\frac{{\\lambda_\\infty}^2}{4} \\log{(-3 {\\lambda_\\infty}^2)}, \\quad\n\tF_1(\\lambda_\\infty, t) = - \\frac{1}{12} \\log{\\lambda_\\infty}.\n\\end{align*}\n\\begin{rem}\n\tIt seems $W_{0,3}$ has singularities at $z_1 = z_2 = z_3$, \n\tbut we can verify that $W_{0,3}$ is holomorphic there. \n\\end{rem}\n\nWe choose \n\\begin{equation}\n\\label{eq:(1,4)_D}\n\\begin{split}\n\tD(z ; \\nu) \n\t&= [z] - (1 - \\nu_\\infty) [0] - \\nu_\\infty [\\infty] \\\\\n\t&= (1 - \\nu_\\infty) ([z] - [0]) + \\nu_\\infty ([z] - [\\infty]) \n\\end{split}\n\\end{equation}\nas the divisor for the quantization. \n\\begin{rem}\n\t$z = \\infty$ is a double pole of $x(z)$, i.e., $\\infty \\in R$, but we can verify that $\\infty \\notin R^{\\ast}$. \n\tTherefore, we can choose $\\beta = \\infty$ as a base point. \n\\end{rem}\nThen, \\thmref{WKB-Wg,n-BE} gives the quantum curve of the (1,4) curve (quantum (1,4) curve): \n\\begin{equation}\n\\label{eq:(1,4)_eq(d\/dx)} \n\t\\left\\{ 3 \\hbar^3 \\frac{d^3}{dx^3} + 2t \\hbar^2 \\frac{d^2}{dx^2} + x \\hbar \\frac{d}{dx} \n\t\t\t- \\hat{\\lambda}_\\infty \\right\\} \\psi = 0. 
\n\\end{equation} \nHere we used the notation \n\\begin{equation} \\label{eq:lambda-hat-(1,4)}\n\t\\hat{\\lambda}_\\infty = \\lambda_\\infty - \\nu_{\\infty} \\hbar.\n\\end{equation}\nNote that the special case $t = 0$\nof the equation has been already constructed as \na quantum curve in \\cite[\\S6.2.2]{BE}. \n\nLet $S_m(x, \\lambda, \\nu)$ be the coefficient of the Voros coefficient of \\eqref{eq:(1,4)_eq(d\/dx)}. Then $S_m(x, \\lambda, \\nu)$ satisfies the following lemma. \n\\begin{lem}\n\\label{lem:(1,4)_Sn} \nFor $m = 1, 2, \\cdots$, we have\n\\begin{align} \nS_m(x, \\lambda, \\nu) = O(x^{-{2}}) \n\\quad\n(x \\rightarrow \\infty).\n\\end{align} \n\\end{lem} \n\n\n\\subsection{Quantum (2,3) curve}\n\\label{subsection:quantum-(2,3)}\n\nLet us consider the (2,3) curve defined by \n\\begin{equation}\n\\label{eq:(2,3)_P(x,y)}\n\tP(x, y) = 4 y^3 - 2 x y^2 + 2 {\\lambda_\\infty} y - t = 0\n\\end{equation}\nwith parameters $t, {\\lambda_\\infty} \\ne 0$. \nA rational parameterization of this curve is \n\\begin{equation}\n\\label{eq:(2,3)_parameterization}\n\\begin{cases}\n\t\\displaystyle\n\tx = x(z) \n\t= \\frac{4 z^3 + 2 {\\lambda_\\infty} z - t}{2 z^2} = 2 z + \\frac{{\\lambda_\\infty}}{z} - \\frac{t}{2 z^2} \\\\[10pt]\n\t\\displaystyle\n\ty = y(z) = z. \n\\end{cases}\n\\end{equation}\nFirst few terms of the correlation functions and free energies are computed as \n\\begin{align*}\n\tW_{0, 3}(z_1, z_2, z_3) \n\t&= \\biggl\\{ \n\t\t- \\frac{{z_1}^2 (8 {z_1}^5 - 4({z_2} + {z_3}) {z_1}^4 - 2 {\\lambda_\\infty} {z_1}^3 \n\t\t\t\t+ t {z_1}^2 + (2 {\\lambda_\\infty} {z_2}{z_3} + t {z_2} + t {z_3}) {z_1} - 3t {z_2}{z_3}}\n\t\t\t{({z_1} - {z_2})^3 ({z_1} - {z_3})^3 (2 {z_1}^3 - {\\lambda_\\infty} {z_1} + t)^2} \\\\ \n\t&\\qquad\n\t\t- \\frac{{z_2}^2 (8 {z_2}^5 - 4({z_3} + {z_1}) {z_2}^4 - 2 {\\lambda_\\infty} {z_2}^3 \n\t\t\t\t+ t {z_2}^2 + (2 {\\lambda_\\infty} {z_3}{z_1} + t {z_3} + t {z_1}) {z_2} - 3t {z_3}{z_1}}\n\t\t\t{({z_2} - {z_3})^3 ({z_2} - {z_1})^3 (2 {z_2}^3 - {\\lambda_\\infty} {z_2} + t)^2} \\\\ \n\t&\\qquad\n\t\t- \\frac{{z_3}^2 (8 {z_3}^5 - 4({z_1} + {z_2}) {z_3}^4 - 2 {\\lambda_\\infty} {z_3}^3 \n\t\t\t\t+ t {z_3}^2 + (2 {\\lambda_\\infty} {z_1}{z_2} + t {z_1} + t {z_2}) {z_3} - 3t {z_1}{z_2}}\n\t\t\t{({z_3} - {z_1})^3 ({z_3} - {z_2})^3 (2 {z_3}^3 - {\\lambda_\\infty} {z_3} + t)^2}\n\t\t\\biggl\\} \\\\\n\t&\\quad\n\t\t\\times d{z_1} \\, d{z_2} \\, d{z_3}, \\\\\n\tW_{1, 1}(z)\n\t&= - \\frac{(4 z^3 - t) (8 {\\lambda_\\infty} z^4 - 20t z^3 + 2 {\\lambda_\\infty} t z - t^2)}\n\t\t\t{8(2 z^3 - {\\lambda_\\infty} z + t)^4} \\, dz,\n\\end{align*}\n\\begin{align*}\t\n\tF_0(\\lambda_\\infty, t)\n\t= - \\frac{{\\lambda_\\infty}^2}{4} \\log{(-2 t)}, \\quad\n\tF_1(\\lambda_\\infty, t)\n\t= - \\frac{1}{8} \\log{t}.\n\\end{align*}\n\\begin{rem}\n\tIt seems $W_{0,3}$ has singularities at $z_1 = z_2 = z_3$, \n\tbut we can verify that $W_{0,3}$ is holomorphic there. \n\\end{rem}\n\nWe choose \n\\begin{equation}\n\\label{eq:(2,3)_D}\n\\begin{split}\n\tD(z ; \\nu) \n\t&= [z] - (1 - \\nu_\\infty) [0] - \\nu_\\infty [\\infty] \\\\\n\t&= (1 - \\nu_\\infty) ([z] - [0]) + \\nu_\\infty ([z] - [\\infty]) \n\\end{split}\n\\end{equation}\nas the divisor for the quantization. \n\\begin{rem}\n\t$z = 0$ is a double pole of $x(z)$, i.e., $0 \\in R$, but we can verify that $0 \\notin R^{\\ast}$. \n\tTherefore, we can choose $\\beta = 0$ as a base point. 
\n\\end{rem}\nThen, \\thmref{WKB-Wg,n-BE} gives the quantum curve of the (2,3) curve (quantum (2,3) curve): \n\\begin{equation}\n\\label{eq:(2,3)_eq(d\/dx)}\n\t\\left\\{ 4 \\hbar^3 \\frac{d^3}{dx^3} - 2 x \\hbar^2 \\frac{d^2}{dx^2} \n\t\t\t+ 2 ( \\hat{\\lambda}_\\infty - \\hbar ) \\hbar \\frac{d}{dx} - t \\right\\} \\psi = 0, \n\\end{equation}\nwhere \n\\begin{equation} \\label{eq:lambda-hat-(2,3)}\n\t\\hat{\\lambda}_\\infty = \\lambda_\\infty - \\nu_{\\infty} \\hbar.\n\\end{equation}\n\n\\begin{lem}\n\\label{lem:(1,4)_Sn} \nFor $m = 1, 2, \\cdots$, we have\n\\begin{align} \nS_m(x, \\lambda, \\nu) = O(x^{-{3\/2}}) \n\\quad\n(x \\rightarrow \\infty).\n\\end{align} \n\\end{lem} \n\n\n\n\n\n\\section{Voros coefficients and the free energy}\n\\label{sec:Voros-vs-TR}\n\n\n\\subsection{Relations between Voros coefficients and the free energy}\n\\label{subsec:Voros-vs-TR}\n\n\nIn this subsection we formulate the main results which allow us to express the Voros coefficients \nof the quantum curves discussed in \\S \\ref{subsec:quantum-curve} by the free energy \nwith a parameter shift. \n\nLet \n\\begin{equation} \n\\label{eq:total-free-energy}\n\tF({\\lambda_{\\infty}}, t; \\hbar)\n\t= \\sum_{g = 0}^{\\infty} \\hbar^{2g - 2} F_g({\\lambda_{\\infty}}, t)\n\\end{equation}\nbe the free energy for the spectral curve in \\S \\ref{subsec:quantum-curve}. \nThen, the precise statement is formulated as follows. \n\n\\begin{thm}\n\\label{thm:main(i)}\n\\begin{equation} \n\\label{eq:V-and-F-general}\n\tV({\\lambda_{\\infty}}, t, \\nu_{\\infty}; \\hbar)\n\t= F(\\hat{\\lambda}_{\\infty} + \\hbar, t, \\hbar) - F(\\hat{\\lambda}_{\\infty}, t, \\hbar) \n\t\t- \\frac{\\partial F_0}{\\partial \\lambda_{\\infty}} \\hbar^{-1} \n\t\t+ \\frac{2 \\nu_{\\infty} - 1}{2} \\frac{\\partial^2 F_0}{\\partial {\\lambda_{\\infty}}^2}. \n\\end{equation}\nHere $\\hat{\\lambda}_{\\infty} = {\\lambda_{\\infty}} - \\nu_{\\infty} \\hbar$ as we have introduced in \\eqref{eq:lambda-hat-(1,4)}. \n\\end{thm}\n\nWe can prove \\thmref{main(i)} similarly to the case of the Weber equation because the proof of \\thmref{main(i)} does not depend on $t$. \n\nTo prove \\thmref{main(i)}, we need the following identity. \n\n\\begin{lem}\n\\label{lem:variation}\n\\begin{equation}\n\\label{eq:variation}\n\t\\frac{\\partial^n}{\\partial{\\lambda_{\\infty}}^n} F_g\n\t= \\int_{\\zeta_1 = 0}^{\\zeta_1=\\infty}\\cdots \\int_{\\zeta_n = 0}^{\\zeta_n=\\infty}\n\t\tW_{g, n}(\\zeta_1, \\cdots, \\zeta_n)\\qquad (2g + n \\geq 3).\n\\end{equation}\n\\end{lem}\n\n\\begin{proof}[Proof of Lemma \\ref{lem:variation}]\nBecause\n\\begin{equation}\n\t\\Omega(z) \n\t= \\frac{\\partial y(z)}{\\partial {\\lambda_{\\infty}}} \\cdot dx(z)\n\t\t- \\frac{\\partial x(z)}{\\partial {\\lambda_{\\infty}}} \\cdot dy(z)\n\t= - \\frac{dz}{z}\n\t= \\int^{\\zeta = \\infty}_{\\zeta = 0} B(z, \\zeta)\n\\end{equation}\nholds, Theorem \\ref{thm:VariationFormula}\ngives \\eqref{eq:variation}, except for the case $g=0$. \nBy using the expressions of $W_{0,3}$ and $F_0$, \nwe can verify \\eqref{eq:variation} holds for for $(g,n) = (0,3)$ directly. \nTherefore, thanks to Theorem \\ref{thm:VariationFormula}, \nwe can conclude that \\eqref{eq:variation} is also valid for $g=0$ and $n \\ge 3$. \nThis completes the proof. 
\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{thm:main(i)}]\nBy Theorem \\ref{thm:WKB-Wg,n-BE}, the Voros coefficient can be rewritten as\n\\begin{align}\n\tV({\\lambda_{\\infty}}, t, \\nu_{\\infty}; \\hbar) \n\t&= \\sum_{m = 1}^{\\infty} \\hbar^m \\int_0^\\infty \n\t\t\\Bigl( S(x(z), {\\lambda_{\\infty}}, t, \\nu_{\\infty}; \\hbar) - \\hbar^{-1} S_{-1}(x(z), {\\lambda_{\\infty}}, t) \n\t\t\t- S_0(x(z), {\\lambda_{\\infty}}, t, \\nu_{\\infty}) \n\t\t\\Bigr) \\frac{dx}{dz} \\, dz \\\\ \n\t&= \\sum_{m = 1}^{\\infty} \\hbar^m \\int_0^\\infty \n\t\t\\left\\{ \\sum_{\\substack{2g + n - 2 = m \\\\ g \\geq 0, \\, n \\geq 1}} \n\t\t\t\t\\frac{1}{n!} \\frac{d}{dz} \\int_{\\zeta_1 \\in D(z; \\underline{\\nu})}\n\t\t\t\t\\cdots \\int_{\\zeta_n \\in D(z; \\underline{\\nu})} W_{g, n}(\\zeta_1, \\ldots, \\zeta_n) \n\t\t\\right\\} dz \\notag \\\\\n\t&= \\sum_{m = 1}^{\\infty} \\hbar^m \n\t\t\\sum_{\\substack{2g + n - 2 = m \\\\ g \\geq 0, \\, n \\geq 1}} \\frac{1}{n!} \n\t\t\t\\left( \\int_{\\zeta_1 \\in D(\\infty; \\underline{\\nu})} \\cdots \n\t\t\t\t\t\\int_{\\zeta_n \\in D(\\infty; \\underline{\\nu})} \\right. \\notag \\\\\n\t&\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\left. \n\t\t\t\t\t- \\int_{\\zeta_1 \\in D(0; \\underline{\\nu})} \\cdots \n\t\t\t\t\t\\int_{\\zeta_n \\in D(0; \\underline{\\nu})} \n\t\t\t\\right) W_{g, n}(\\zeta_1, \\ldots, \\zeta_n). \\notag\n\\end{align}\nBecause\n\\begin{equation}\n\tD(\\infty; \\underline{\\nu}) = (1 - \\nu_{\\infty}) ([\\infty] - [0]) \n\t\\quad\\text{and}\\quad \n\tD(0; \\underline{\\nu}) = - \\nu_{\\infty} ([\\infty] - [0]), \n\\end{equation}\nwe have\n\\begin{equation}\n\tV({\\lambda_{\\infty}}, t, \\nu_{\\infty}; \\hbar) \n\t= \\sum_{m = 1}^{\\infty} \\hbar^m \\sum_{\\substack{2g + n - 2 = m \\\\ g \\geq 0, \\, n \\geq 1}} \n\t\t\\frac{(1 - \\nu_{\\infty})^n - (- \\nu_{\\infty})^n}{n!} \\int_0^\\infty \\cdots \\int_0^\\infty \n\t\t\t\tW_{g, n}(\\zeta_1, \\ldots, \\zeta_n). \n\\end{equation}\nNow we use Lemma \\ref{lem:variation}:\n\\begin{align}\n\tV({\\lambda_{\\infty}}, t, \\nu_{\\infty}; \\hbar) \n\t&= \\sum_{m = 1}^{\\infty} \\hbar^m \\sum_{\\substack{2g + n - 2 = m \\\\ g \\geq 0, \\, n \\geq 1}} \n\t\t\\frac{(1 - \\nu_{\\infty})^n - (- \\nu_{\\infty})^n}{n!} \\frac{ \\partial^n F_g }{ \\partial {\\lambda_{\\infty}}^n } \\\\\n\t&= \\sum_{n = 1}^{\\infty} \\frac{(1 - \\nu_{\\infty})^n - (- \\nu_{\\infty})^n}{n!} \n\t\t\\hbar^n \\frac{ \\partial^n F({\\lambda_{\\infty}}, t; \\hbar) }{ \\partial {\\lambda_{\\infty}}^n } \n\t\t\t- \\frac{(1 - \\nu_{\\infty}) - (- \\nu_{\\infty})}{\\hbar}\\frac{\\partial F_0}{\\partial {\\lambda_{\\infty}}} \n\t\t\\notag\\\\\n\t&\\qquad \n\t\t\t- \\frac{(1 - \\nu_{\\infty})^2 - (- \\nu_{\\infty})^2}{2!} \\frac{\\partial^2 F_0}{\\partial{\\lambda_{\\infty}}^2} \n\t\t\\notag\\\\\n\t&= F \\left({\\lambda_{\\infty}} - \\nu_{\\infty} \\hbar + \\hbar, t; \\hbar \\right) \n\t\t- F \\left({\\lambda_{\\infty}} - \\nu_{\\infty} \\hbar, t; \\hbar \\right) \n\t\t- \\frac{\\partial F_0}{\\partial {\\lambda_{\\infty}}} \\hbar^{-1}\n\t\t+ \\frac{2 \\nu_{\\infty} - 1}{2} \\frac{\\partial^2 F_0}{\\partial {\\lambda_{\\infty}}^2}. \\notag \n\\end{align}\n\\end{proof}\n\n\\begin{rem} \\label{rem:regularization}\nIn the definition (\\ref{eq:def-Voros-coeff}) of the Voros coefficient, we subtracted the first two terms \n$\\hbar^{-1}S_{-1}$ and $S_0$ because these terms are singular at endpoints of the path \n$\\gamma$. 
\nHowever, a regularization procedure of divergent integral (see \\cite{Voros-zeta} for example) \nallows us to define the regularized Voros coefficient as follows:\n\\begin{equation} \n\tV_{\\rm reg}({\\lambda_{\\infty}}, t, \\nu_{\\infty}; \\hbar) \n\t:= \\hbar^{-1}V_{-1}({\\lambda_{\\infty}}, t, \\nu_{\\infty}) + V_0({\\lambda_{\\infty}}, t, \\nu_{\\infty}) \n\t\t+ V({\\lambda_{\\infty}}, t, \\nu_{\\infty}; \\hbar), \n\\end{equation}\nwhere $V_{-1}({\\lambda_{\\infty}}, t, \\nu_{\\infty})$ and $V_0({\\lambda_{\\infty}}, t, \\nu_{\\infty})$ are obtained by solving \n\\begin{equation} \n\\label{eq:zeta-regularization-equation}\n\t\\frac{\\partial^2}{\\partial {\\lambda_{\\infty}}^2} V_{-1} \n\t= \\int_{\\gamma} \\frac{\\partial^2}{\\partial {\\lambda_{\\infty}}^2} S_{-1}(x) \\, dx, \\quad \n\t\\frac{\\partial}{\\partial {\\lambda_{\\infty}}} V_{0}\n \t= \\int_{\\gamma} \\frac{\\partial}{\\partial {\\lambda_{\\infty}}} S_{0}(x) \\, dx.\n\\end{equation}\nActually, we can verify that $\\partial_{{\\lambda_{\\infty}}}^2 S_{-1}(x) dx$ and \n$\\partial_{{\\lambda_{\\infty}}}S_0(x) dx$ are holomorphic at $x=\\infty$ although \n$S_{-1}$ and $S_0$ are singular there.\nHence, the equations \\eqref{eq:zeta-regularization-equation} make sense\nand we can find $V_{-1}$ and $V_{0}$. For example, in the case of the (1,4) quantum curve, \nwe obtain \n\\begin{align}\n\t\\frac{\\partial^2}{\\partial {\\lambda_{\\infty}}^2} V_{-1} \n\t= \\frac{1}{{\\lambda_{\\infty}}}, \\qquad \n\t\\frac{\\partial}{\\partial {\\lambda_{\\infty}}} V_{0} \n\t= - \\frac{2 \\nu_{\\infty} - 1}{2 {\\lambda_{\\infty}}}. \n\\end{align}\nActually, we can verify that the regularized integrals are realized by the correction terms \n\\begin{equation}\n\tV_{-1} = \\frac{\\partial F_0}{\\partial {\\lambda_{\\infty}}}, \\qquad\n\tV_{0} = - \\frac{2 \\nu_{\\infty} - 1}{2} \\frac{\\partial^2 F_0}{\\partial {\\lambda_{\\infty}}^2} \n\\end{equation}\nin the right hand-side of the relation \\eqref{eq:V-and-F-general}.\nThus we conclude that the regularized Voros coefficient satisfies \n\\begin{equation} \n\\label{eq:Vreg-and-free-energy}\n\tV_{\\rm reg}({\\lambda_{\\infty}}, t, \\nu_{\\infty}; \\hbar) \n\t= F \\left({\\lambda_{\\infty}} - \\nu_{\\infty} \\hbar + \\hbar, t; \\hbar \\right) \n\t\t- F \\left({\\lambda_{\\infty}} - \\nu_{\\infty} \\hbar, t; \\hbar \\right). \n\\end{equation}\n\\end{rem}\n\n\n\n\\subsection{Three-term difference equations satisfied by the free energy} \n\\label{subsec:Three-term_difference-eq.} \n\n\nIn this subsection, we derive the three-term difference equation which the generating function of the free energies satisfies. The precise statement is formulated as follows. \n\n\\begin{thm}\n\\label{thm:main(ii)}\nThe free energy \\eqref{eq:total-free-energy} satisfies the following difference equation.\n\\begin{equation}\n\\label{eq:free-energy_difference-eq.}\n\tF({\\lambda_{\\infty}} + \\hbar, t; \\hbar) - 2 F({\\lambda_{\\infty}}, t; \\hbar) + F({\\lambda_{\\infty}} - \\hbar, t; \\hbar) \n\t= \\frac{\\partial^2 F_0}{\\partial {\\lambda_{\\infty}}^2}. \n\\end{equation}\n\\end{thm}\n\nWe will only give the proof for the (quantum) (1,4) curve because the result for the (quantum) (2,3) curve is proved in a similar manner. 
\n\nTo prove Theorem \\ref{thm:main(ii)}, we need the following identity.\n\n\\begin{lem}\n\\label{lem:Voros-parameter}\n\\begin{equation}\n\\label{eq:Voros-difference}\n\tV({\\lambda_{\\infty}}, t, - \\nu_{\\infty}, \\hbar) - V({\\lambda_{\\infty}}, t, 1 - \\nu_{\\infty}, \\hbar) \n\t\t= - \\log{ \\left( 1 - \\frac{\\nu_{\\infty} \\hbar}{{\\lambda_{\\infty}}} \\right) }.\n\\end{equation}\n\\end{lem}\n\n\\begin{proof}[Proof of Theorem \\ref{thm:main(ii)}]\nFrom \\lemref{Voros-parameter}, \n\\begin{equation}\n\\label{eq:main:tmpeq}\n\tV({\\lambda_{\\infty}}, t, 0, \\hbar) = V({\\lambda_{\\infty}}, t, 1, \\hbar).\n\\end{equation}\nIt follows from \\thmref{main(i)} that\n\\begin{align}\n\tV({\\lambda_{\\infty}}, t, 0, \\hbar)\n\t\t&= F \\left({\\lambda_{\\infty}} + \\hbar, t; \\hbar \\right) - F \\left({\\lambda_{\\infty}}, t; \\hbar \\right)\n\t\t\t- \\frac{\\partial F_0}{\\partial {\\lambda_{\\infty}}} \\hbar^{-1}\n\t\t\t- \\frac{1}{2} \\frac{\\partial^2 F_0}{\\partial {\\lambda_{\\infty}}^2}, \\\\\n\tV({\\lambda_{\\infty}}, t, 1, \\hbar) \n\t\t&= F \\left({\\lambda_{\\infty}}, t; \\hbar \\right) - F \\left({\\lambda_{\\infty}} - \\hbar, t; \\hbar \\right)\n\t\t\t- \\frac{\\partial F_0}{\\partial {\\lambda_{\\infty}}} \\hbar^{-1}\n\t\t\t+ \\frac{1}{2} \\frac{\\partial^2 F_0}{\\partial {\\lambda_{\\infty}}^2}.\n\\end{align}\nBy substituting these two relations into \\eqref{eq:main:tmpeq}, we obtain \\thmref{main(ii)}.\n\\end{proof}\n\n\n\n\\subsection{The explicit form of the free energy}\n\\label{subsec:FreeEnergy} \n\n\nWe obtain explicit formulas for the coefficients of the free energy and Voros coefficients. In this subsection we provide the explicit expressions for the free energy. We will only give the proof for the (quantum) (1,4) curve because the result for the (quantum) (2,3) curve is proved in a similar manner. \n\n\\begin{thm}\n\\label{thm:main(iii)}\nFor $g \\geq 2$, the $g$-th free energy of \nthe spectral curve $(C)$\nhas the following expression.\n\\begin{itemize}\n\t\\item[$\\bullet$] For (1,4) curve (\\S \\ref{subsection:quantum-(1,4)}):\n\\end{itemize} \\vspace{-1.3em}\n\\begin{equation}\n\\label{eq:(1,4)_Fg(concrete-form)}\n\tF_g({\\lambda_{\\infty}}, t) = \\frac{B_{2g}}{2g(2g - 2)} \\dfrac{1}{{{\\lambda_\\infty}}^{2g-2}} \\quad (g \\geq 2), \n\\end{equation}\nwhere $\\{B_n\\}_{n \\geq 0}$ designates the Bernoulli number defined by \n\\begin{equation}\n\\label{def:Bernoulli}\n\t\\frac{w}{e^w - 1} = \\sum_{n = 0}^{\\infty} B_n \\frac{w^n}{n!}.\n\\end{equation}\n($F_0$ and $F_1$ for (1,4) curve are given in \\S \\ref{subsection:quantum-(1,4)}.) \n\\begin{itemize}\n\t\\item[$\\bullet$] For (2,3) curve (\\S \\ref{subsection:quantum-(2,3)}):\n\\end{itemize} \\vspace{-1.3em}\n\\begin{equation}\n\\label{eq:(2,3)_Fg(concrete-form)}\n\tF_g({\\lambda_{\\infty}}, t) = 0 \\quad (g \\geq 2).\n\\end{equation}\n($F_0$ and $F_1$ for (2,3) curve are given in \\S \\ref{subsection:quantum-(2,3)}.) \n\\end{thm}\n\n\nTo prove \\thmref{main(iii)}, we need the following lemma. \n\n\\begin{lem}\n\\label{lem:t-dependence}\n\\begin{equation}\n\\label{eq:t-dependence}\n\t\\frac{ \\partial F_g }{ \\partial t} = 0 \\qquad (g \\geq 1).\n\\end{equation}\n\\end{lem}\n\nLemma \\ref{lem:t-dependence} is obtained from \n\n\\begin{lem}\n\\label{lem:variation-t}\nFor the (1,4) equation \n\\begin{equation}\n\\label{eq:variation-t}\n\t\\frac{ \\partial F_{g} }{ \\partial t} \n\t= - \\mathop{\\rm{Res}}_{z = \\infty} z^2 \\, W_{g,1}(z) \n\\end{equation}\nholds. 
\n\\end{lem}\n\n\\begin{lem}\n\\label{lem:variation-t()}\nFor the (1,4) equation the following relations hold: \n\\begin{align}\n\\label{eq:variation-t()_1}\n\t\\mathop{\\rm{Res}}_{z = \\infty} z^2 \\sum_{m = -1}^{\\infty} \\hbar^m S_m(x(z)) dx(z) \n\t= C_{-1}(z, {\\lambda_{\\infty}}, \\nu_{\\infty}) \\hbar^{-1} + C_0(z, {\\lambda_{\\infty}}, \\nu_{\\infty}), \\\\\n\\label{eq:variation-t()_3}\n\t\\mathop{\\rm{Res}}_{z = \\infty} z^2 \\sum_{\\substack{g \\geq 0, \\, n \\geq 2 \\\\ (g, n) \\ne (0, 2)}} \n\t\t\\frac{\\hbar^{2g - 2 + n}}{(n-1)!} \n\t\t\\int_{\\infty}^z \\cdots \\int_{\\infty}^z W_{g, n}(z, z_2, \\ldots, z_n)\n\t= \\sum_{g \\geq 1} \\hbar^{2g} C_{g, 2}, \n\\end{align}\nwhere $C_{-1}$, $C_{0}$ and $C_{g, 2}$ $(g \\geq 1)$ are constant with respect to $\\hbar$. \n\\end{lem}\n\n\\begin{proof}[Proof of Lemma \\ref{lem:variation-t}]\nBy using the Riccati equation, \nwe can verify \\eqref{eq:variation-t()_1} directly. \nBecause\n\\begin{equation}\n\t\\Omega(z) \n\t= \\frac{\\partial y(z)}{\\partial t} \\cdot dx(z)\n\t\t- \\frac{\\partial x(z)}{\\partial t} \\cdot dy(z)\n\t= 2z \\, dz \n\t= - \\frac{1}{2 \\pi i} \\int_{\\zeta \\in \\gamma} {\\zeta}^2 B(z, \\zeta)\n\\end{equation}\nholds, Theorem \\ref{thm:VariationFormula}\ngives \\eqref{eq:variation-t}. \n\\end{proof}\n\n\\begin{proof}[Proof of Lemma \\ref{lem:variation-t()}]\nBecause $W_{g, n}(z_1, \\cdots, z_n)$ are holomorphic at $z_i = \\infty$ $(1 \\leq i \\leq n)$ \nfor $2g - 2 + n \\geq 1$, we find that \n\\begin{align*}\n\tW_{g, n}(z, z_2, \\ldots, z_n) \n\t\\sim \\frac{d z_i}{{z_i}^2} \\left( {C}^{(i)}_{g,n} + O(1\/z_i) \\right\n\t\\quad (z_i \\rightarrow \\infty ). \n\\end{align*}\nThen, since lower order terms $O(1\/z_i)$ vanish in the limit $z_i \\to \\infty$ $(1 \\leq i \\leq n)$,\nwe obtain\n\\begin{align*}\n\t\\int_{\\zeta_2 = \\infty}^{\\zeta_2 = z_2} W_{g, n}(z, \\zeta_2, \\ldots, \\zeta_n) \n\t= \\int_{\\zeta_2 = \\infty}^{\\zeta_2 = z_2} \n\t\t\t\\frac{C_{g,n} d \\zeta_2 \\cdots d \\zeta_n}\n\t\t\t\t{{z}^2 {\\zeta_2}^2 {\\zeta_3}^2 \\cdots {\\zeta_n}^2} d z\n\t= - \\frac{C_{g,n} d \\zeta_3 \\cdots d \\zeta_n}{{z}^2 {z_2} {\\zeta_3}^2 \\cdots {\\zeta_n}^2} d z.\n\\end{align*}\nTherefore, \n\\begin{align*}\n\t\\left. \\int_{\\infty}^{z_2} \\cdots \\int_{\\infty}^{z_n} W_{g, n}(z, z_2, \\ldots, z_n) \n\t\\right|_{z_2 = \\cdots = z_n = z}\n\t\\sim \\left( \\frac{(-1)^{n+1}C_{g,n}}{z^{n+1}} + \\cdots \\right) dz \n\\end{align*}\nholds. Multiplying both sides of the equation by $z^2$ and calculating residues, we obtain \\eqref{eq:variation-t()_3}. \n\n\\end{proof}\n\n\\begin{proof}[Proof of Lemma \\ref{lem:t-dependence}]\nBy taking $\\nu = 0$ in \\thmref{WKB-Wg,n-BE} we obtain \n\\begin{align}\n\t&\\left. \\log{\\psi} \\right|_{x = x(z)} = \\sum_{m = -1}^{\\infty} \\hbar^m \\int^{x(z)} S_m dx \\\\\n\t&= \\sum_{m = -1}^{\\infty} \\hbar^m \n\t \t\\left\\{ \\sum_{\\substack{2g + n - 2 = m \\\\ g \\geq 0, \\, n \\geq 1}} \n\t\t\t\\frac{1}{n!} \\int_{\\infty}^z \\cdots \\int_{\\infty}^z \n\t\t\t\t\\left( W_{g, n}(z_1, \\ldots, z_n) \n\t\t\t\t\t\t- \\delta_{g,0} \\delta_{n,2} \\frac{dx(z_1) \\, dx(z_2)}{(x(z_1) - x(z_2))^2} \n\t\t\t\t\\right)\n\t\t\\right\\}. 
\\notag \n\\end{align}\nIt follows from this equation that \n\\begin{align}\n\t\\sum_{g \\geq 0} \\hbar^{2g - 1} \\left(- \\mathop{\\rm{Res}}_{z = \\infty} z^2 W_{g, 1}(z) \\right) \n\t&= - \\mathop{\\rm{Res}}_{z = \\infty} z^2 \\sum_{m = -1}^{\\infty} \\hbar^m S_m(x(z)) dx(z) \\\\\n\t&\\qquad \\notag \n\t\t+ \\mathop{\\rm{Res}}_{z = \\infty} z^2 \n\t\t\t\\int_{\\infty}^z \\left( W_{0, 2}(z, z_2) - \\frac{dx(z) \\, dx(z_2)}{(x(z) - x(z_2))^2} \\right) \\\\\n\t&\\qquad \\notag\n\t\t+ \\mathop{\\rm{Res}}_{z = \\infty} z^2 \\sum_{\\substack{g \\geq 0, \\, n \\geq 2 \\\\ (g, n) \\ne (0, 2)}} \n\t\t\t\\frac{\\hbar^{2g - 2 + n}}{(n-1)!} \n\t\t\t\\int_{\\infty}^z \\cdots \\int_{\\infty}^z W_{g, n}(z, z_2, \\ldots, z_n). \n\\end{align}\nBecause the left hand side of this equation is written by \n\\begin{align*}\n\t\\sum_{g \\geq 0} \\hbar^{2g - 1} \\left(- \\mathop{\\rm{Res}}_{z = \\infty} z^2 W_{g, 1}(z) \\right) \n\t= - \\hbar^{-1} \\mathop{\\rm{Res}}_{z = \\infty} z^2 W_{0, 1}(z) \n\t\t+ \\sum_{g \\geq 1} \\hbar^{2g - 1} \\frac{ \\partial F_g }{ \\partial t}, \n\\end{align*}\nwe compare the odd terms with respect to $\\hbar$ of both sides. \nBy using \\lemref{variation-t()} we find that there is no odd term whose order with respect to \n$\\hbar$ is greater than or equal to one in the right hand side. \nIt means that \\eqref{eq:t-dependence} holds. \n\\end{proof}\n\nNow we give a proof of \\thmref{main(iii)}.\n\n\\begin{proof}[Proof of Theorem \\ref{thm:main(iii)}]\nBy using a shift operator (or an infinite order differential operator) \n$e^{\\hbar\\partial_{{\\lambda_{\\infty}}}}$, the equation (\\ref{eq:free-energy_difference-eq.}) \nin \\thmref{main(ii)} becomes\n\\begin{equation}\n\\label{prop:difference-eq:sol:tmp:1}\n\te^{-\\hbar\\partial_{{\\lambda_{\\infty}}}} (e^{\\hbar\\partial_{{\\lambda_{\\infty}}}} - 1)^2 F({\\lambda_{\\infty}}, t; \\hbar)\n\t= \\frac{\\partial^2 F_0}{\\partial {\\lambda_{\\infty}}^2}. \n\\end{equation}\nIt follows from \n\\begin{equation}\n\te^{-w} (e^w - 1)^2 \n\t\\left\\{ \\frac{1}{w^2} - \\sum_{n = 0}^{\\infty} \\frac{B_{n + 2}}{\\, n + 2 \\,} \\frac{\\, w^n \\,}{\\, n! \\,}\n\t\\right\\} = 1\n\\end{equation} \n(which follows from the definition \\eqref{def:Bernoulli} of the Bernoulli numbers) that\n\\begin{equation}\n\te^{-\\hbar\\partial_{{\\lambda_{\\infty}}}} (e^{\\hbar\\partial_{{\\lambda_{\\infty}}}} - 1)^2\n\t\\left\\{ (\\hbar\\partial_{{\\lambda_{\\infty}}})^{-2} - \\sum_{n = 0}^{\\infty} \\frac{B_{n + 2}}{\\, n + 2 \\,} \n\t\t\t\\frac{\\, (\\hbar\\partial_{{\\lambda_{\\infty}}})^n \\,}{\\, n! \\,}\n\t\\right\\} = {\\rm{id}}.\n\\end{equation}\nHence we find that\n\\begin{align}\n\\label{sol:FreeEnergy}\n\t\\hat{F}({\\lambda_{\\infty}}, t;\\hbar)\n\t&:= \\left\\{ (\\hbar\\partial_{{\\lambda_{\\infty}}})^{-2} - \\sum_{n = 0}^{\\infty} \\frac{B_{n + 2}}{\\, n + 2 \\,} \n\t\t\t\t\\frac{\\, (\\hbar\\partial_{{\\lambda_{\\infty}}})^n \\,}{\\, n! \\,}\n\t\t\\right\\} \\frac{\\partial^2 F_0}{\\partial {\\lambda_{\\infty}}^2} \\\\\n\t&= \\hbar^{-2} F_0({\\lambda_{\\infty}}, t) \n\t\t- \\frac{1}{12} \\frac{\\partial^2 F_0}{\\partial {\\lambda_{\\infty}}^2} \n\t\t+ \\sum_{g = 2}^{\\infty} \\frac{B_{2g}}{2g(2g-2)} \\frac{\\hbar^{2g - 2}}{{\\lambda_{\\infty}}^{2g-2}} \n\t\t+ \\hat{F}_t (t)\n\t\t\\notag\n\\end{align}\nis a solution of \\eqref{eq:free-energy_difference-eq.}. 
\nHere we note that, \n\\begin{equation}\n\\frac{\\partial^2 F_0}{\\partial {\\lambda_{\\infty}}^2} = \\frac{1}{2} \\log{(-3 {\\lambda_{\\infty}}^2)} \n\\end{equation}\nholds.\n\nSince $F$ and $\\hat{F}$ satisfies the same difference equation \n\\eqref{eq:free-energy_difference-eq.}, their difference \n\t$G := F - \\hat{F} = \\sum_{g=2}^{\\infty} \\hbar^{2g-2} G_{g}({\\lambda_{\\infty}}, t)$ \nsatisfies \n\\begin{equation}\n\tG({\\lambda_{\\infty}} + \\hbar, t; \\hbar) - 2G({\\lambda_{\\infty}}, t; \\hbar) + G({\\lambda_{\\infty}} - \\hbar, t; \\hbar) = 0.\n\\end{equation}\nThis relation implies that, each coefficient $G_{g}({\\lambda_{\\infty}}, t)$ of $G$ must satisfy \n$\\partial_{\\lambda_{\\infty}}^2 G_{g} = 0$.\nTherefore, each term of $G$ must be a linear in ${\\lambda_{\\infty}}$. \nHowever, due to the homogeneity\nand \\lemref{t-dependence}, \n$F_g - \\hat{F}_g$ must be zero for all $g$. \nThis shows the desired equality \\eqref{eq:(1,4)_Fg(concrete-form)}. \n\\end{proof}\n\n\n\\subsection{The explicit form of Voros coefficients} \n\\label{subsec:voros} \n\n \nIn this subsection we provide the explicit expressions for Voros coefficients. We will only give the proof for the (quantum) (1,4) curve because the result for the (quantum) (2,3) curve is proved in a similar manner. \n\n\\begin{thm}\n\\label{thm:main(iv)}\nThe Voros coefficients\nfor the following quantum curve \nhas the following expression. \n\\begin{itemize}\n\t\\item[$\\bullet$] For (1,4) curve (\\S \\ref{subsection:quantum-(1,4)}):\n\\end{itemize} \\vspace{-1.3em}\n\\begin{equation}\n\\label{eq:(1,4)_Voros(concrete-form)}\n\tV({\\lambda_{\\infty}}, t, \\nu_{\\infty}, \\hbar) \n\t= \\sum_{m = 1}^{\\infty} \\frac{B_{m+1}(\\nu_{\\infty})}{m(m + 1)} \n\t\t\\left( \\frac{\\hbar}{{\\lambda_{\\infty}}} \\right)^{m}.\n\\end{equation}\nHere $B_m(t)$ is the Bernoulli polynomial defined through the generating function as\n\\begin{equation}\n\\label{def:BernoulliPoly}\n\\frac{w e^{X w}}{e^w - 1} = \\sum_{m = 0}^{\\infty} B_m(X) \\frac{w^m}{m!}.\n\\end{equation}\n(These expressions were also obtained in \\cite{IKo}.)\n\\begin{itemize}\n\t\\item[$\\bullet$] For (2,3) curve (\\S \\ref{subsection:quantum-(2,3)}):\n\\end{itemize} \\vspace{-1.3em}\n\\begin{equation}\n\\label{eq:(2,3)_Voros(concrete-form)}\n\tV({\\lambda_{\\infty}}, t, \\nu_{\\infty}, \\hbar) = 0. \n\\end{equation}\n\\end{thm}\n\n\n\n\\begin{proof}\nThe relation \\eqref{eq:Vreg-and-free-energy} \nbetween the regularized Voros coefficient \nand the free energy can be written as\n\\begin{equation}\n\tV_{\\rm reg}(\\lambda_\\infty, t, \\nu_\\infty; \\hbar) \n\t= e^{- \\nu_\\infty \\hbar \\partial_{\\lambda_\\infty}} \\Big( e^{\\hbar\\partial_{\\lambda_\\infty}} - 1 \\Big) \n\t\tF(\\lambda_\\infty, t; \\hbar) \n\\end{equation}\nby the shift operators. 
\nUsing the three term relation \\eqref{eq:free-energy_difference-eq.} of $F$, \nwe have\n\\begin{equation} \n\\label{eq:(1,4)-diffrence-eq-for-V}\n\\begin{split}\n\te^{(\\nu_\\infty - 1) \\hbar \\partial_{\\lambda_\\infty}} \\Big( e^{\\hbar\\partial_{\\lambda_\\infty}} - 1 \\Big) \n\t\tV(\\lambda_\\infty, t, \\nu_\\infty; \\hbar)\n\t= e^{-\\hbar \\partial_{\\lambda_\\infty}} \\Big( e^{\\hbar\\partial_{\\lambda_\\infty}} - 1 \\Big)^2 \n\t\tF(\\lambda_\\infty, t; \\hbar) \n\t= \\frac{1}{2} \\log{(-3 {\\lambda_\\infty}^2)}.\n\\end{split}\n\\end{equation}\n\nLet us invert the shift operator \n$e^{(\\nu_\\infty - 1) \\hbar \\partial_{\\lambda_\\infty}} \\left( e^{\\hbar\\partial_{\\lambda_\\infty}} - 1 \\right)$\n(or solving the difference equation) \nto obtain an expression of $V_{\\rm reg}$. \nFor the purpose, we use a similar technique used in the previous subsection. \nNamely, it follows from \n\\begin{equation}\n\te^{- X w} (e^{w} - 1) \n\t\\left(\\frac{1}{w} + \\sum_{m=0}^{\\infty} \\frac{B_{m+1}(X)}{m+1} \\frac{w^m}{m!} \\right) = 1\n\\end{equation}\n(cf.\\,\\eqref{def:BernoulliPoly}) that \n\\begin{equation}\n\te^{- X \\hbar \\partial_{\\lambda_\\infty}} (e^{\\hbar \\partial_{\\lambda_\\infty}} - 1) \n\t\\left( (\\hbar \\partial_{\\lambda_\\infty})^{-1} \n\t\t\t+ \\sum_{m=0}^{\\infty} \\frac{B_{m+1}(X)}{m+1} \\frac{(\\hbar \\partial_{\\lambda_\\infty})^m}{m!} \n\t\\right) \n\t= {\\rm id}. \n\\end{equation}\nThe last equality with $X = 1 - \\nu_\\infty$ shows that the formal series\n\\begin{align} \n\\label{eq:expression-Vreg}\n\tV_{\\rm reg} \n\t&= \\hbar^{-1} \\frac{\\partial F_0}{\\partial \\lambda_\\infty} \n\t\t- \\frac{\\nu_\\infty - 1}{2} \\frac{\\partial^2 F_0}{\\partial {\\lambda_\\infty}^2} \n\t\t+ \\sum_{m=1}^{\\infty} \\frac{B_{m+1}(1- \\nu_\\infty)}{m+1} \n\t\t\\frac{(\\hbar \\partial_{\\lambda_\\infty})^m \\log \\lambda_\\infty}{m!} \\\\\n\t\t\\notag \n\t&= \\hbar^{-1} V_{-1} + V_0 + \n\t\t\\sum_{m=1}^{\\infty} \\frac{(-1)^{m+1}B_{m+1}(1 - \\nu_\\infty)}{m(m+1)} \n\t\t\\left(\\frac{\\hbar}{\\lambda_\\infty}\\right)^m \\\\\n\t\t\\notag \n\t&= \\hbar^{-1} V_{-1} + V_0 + \n\t\t\\sum_{m=1}^{\\infty} \\frac{B_{m+1}(\\nu_\\infty)}{m(m+1)} \n\t\t\\left(\\frac{\\hbar}{\\lambda_\\infty}\\right)^m \n\\end{align}\nsatisfies the difference equation \\eqref{eq:(1,4)-diffrence-eq-for-V}.\nHere we used $B_1(X) = X - 1\/2$ and the equality \n$B_m(X) = (-1)^{m}B_m(1-X)$.\n\\end{proof}\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \n\nF\\\"urstenberg's correspondence principle creates a fruitful link between finite combinatorics and ergodic theory. It connects additive combinatorics with the study of shift invariant measures on the Cantor set $\\{0,1\\}^\\mathbb{Z}$. In particular it leads to various strengthenings and generalizations of Szemer\\'edi's celebrated theorem on arithmetic progressions. \n\nThe goal of this paper is to study a similar correspondence principle between finite large girth $d$-regular graphs and ${\\rm Aut}(T_d)$ invariant probability measures on $F^{V(T_d)}$ where $F$ is a finite set and $T_d$ is the $d$-regular tree with vertex set $V(T_d)$. The case $d=2$ is basically classical ergodic theory however the case $d\\geq 3$ is much less developed. \n\nOur approach can be summarized as follows. Assume that $G$ is a $d$-regular graph of girth $g$. We think of $d$ as a fixed number (say $10$) and $g$ as something very large. We wish to scan the large scale structure of $G$ in the following way. 
We put a coloring $f:V(G)\rightarrow F$ on the vertices of $G$ with values in a finite set $F$. (It does not have to be a proper coloring, i.e., neighboring vertices may receive the same color.) Then we look at the colored neighborhoods (of bounded radius) of randomly chosen vertices $v\in V(G)$. By this sampling we obtain a probability distribution on $F$-colored (bounded) trees that carries valuable information about the global structure of $G$. For example, if there is a coloring $f:V(G)\rightarrow\{0,1\}$ such that, with high probability, a random vertex $v$ has a color different from all of its neighbors, then $G$ is essentially bipartite.

It turns out to be very convenient to regard the information obtained from a specific coloring as an approximation of a probability measure on $F^{V(T_d)}$ that is invariant under ${\rm Aut}(T_d)$. This can be made precise by using Benjamini--Schramm limits of colored graphs (see Section \ref{invproc}, or \cite{bs} for the original formulation). We will use the following definition.

\begin{definition} Let $\mathcal{S}=\{G_i\}_{i=1}^\infty$ be a sequence of $d$-regular graphs. We say that $\mathcal{S}$ is a large girth sequence if for every $\varepsilon>0$ there is an index $n$ such that for every $i\geq n$ the probability that a random vertex in $G_i$ is contained in a cycle of length at most $\lceil 1/\varepsilon\rceil$ is at most $\varepsilon$.
\end{definition}

\begin{definition}\label{profile} Let $\mathcal{S}=\{G_i\}_{i=1}^\infty$ be a large girth sequence of $d$-regular graphs, and $F$ a finite set. We denote by $[\mathcal{S}]_F$ the set of ${\rm Aut}(T_d)$ invariant probability measures on $F^{V(T_d)}$ that arise as Benjamini--Schramm limits of $F$-colorings $\{f_i:V(G_i)\rightarrow F\}_{i=1}^\infty$ of $\mathcal{S}$. We denote by $[\mathcal{S}]$ the set $\bigcup_{n\in\mathbb{N}}[\mathcal{S}]_{\{1,2,\dots,n\}}$.
\end{definition}

It is clear that if $\mathcal S'$ is a subsequence of $\mathcal S$, then $[\mathcal S]\subseteq [\mathcal S']$. If $[\mathcal{S}]=[\mathcal S']$ holds for every subsequence $\mathcal S'$ of $\mathcal S$, then $\mathcal{S}$ is called {\it local-global convergent} (see Subsection \ref{corresp} and \cite{HLSz}). Local-global convergent sequences of graphs have limit objects in the form of a {\it graphing} \cite{HLSz}. For a convergent sequence $\mathcal S$ the set $[\mathcal{S}]$ carries important information on the structure of the graphs in $\mathcal{S}$.

We call a process $\mu$ {\it universal} if $\mu\in[\mathcal{S}]$ for every large girth sequence $\mathcal{S}$. Universality means, roughly speaking, that it describes a structure that is universally present in every large girth $d$-regular graph.
Weakening the notion of universality, we call a process $\mu$ {\it typical} if $\mu\in [\{\mathbb G_{n_i}\}_{i=1}^\infty]$ holds with probability 1 for some fixed sequence $\{n_i\}_{i=1}^{\infty}$, where $\{\mathbb G_{n_i}\}_{i=1}^{\infty}$ is a sequence of independently and uniformly chosen random $d$-regular graphs with $|V(\mathbb G_{n_i})|=n_i$. We will see that understanding typical processes is basically equivalent to understanding the large scale structure of random $d$-regular graphs. More precisely, we will formulate a correspondence principle (see Subsection \ref{corresp}) between the properties of random $d$-regular graphs and typical processes.
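To make the sampling scheme above concrete, the following sketch estimates the local statistics of a colored finite graph, i.e.\ the empirical distribution of colored $r$-balls around uniformly random vertices. The sketch is ours and all names in it are illustrative; in particular, encoding a ball by the multiset of (distance, color) pairs is a crude simplification of the rooted colored isomorphism type used in the text.

\begin{verbatim}
import random
from collections import Counter

def ball(graph, v, r):
    # breadth-first search: vertices of the r-ball around v with distances
    dist = {v: 0}
    frontier = [v]
    for k in range(1, r + 1):
        nxt = []
        for u in frontier:
            for w in graph[u]:
                if w not in dist:
                    dist[w] = k
                    nxt.append(w)
        frontier = nxt
    return dist

def local_statistics(graph, coloring, r, samples=1000, rng=random):
    # empirical distribution of colored r-balls at uniform random vertices;
    # a ball is encoded (crudely) by its sorted (distance, color) pairs
    counts = Counter()
    vertices = list(graph)
    for _ in range(samples):
        v = rng.choice(vertices)
        key = tuple(sorted((k, coloring[u])
                           for u, k in ball(graph, v, r).items()))
        counts[key] += 1
    return {key: c / samples for key, c in counts.items()}

# toy usage: the 3-regular cube graph with a proper 2-coloring
cube = {i: [i ^ 1, i ^ 2, i ^ 4] for i in range(8)}
f = {i: bin(i).count("1") % 2 for i in range(8)}
print(local_statistics(cube, f, r=1))
\end{verbatim}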
\n\n\\medskip\n\nAmong universal processes, factor of i.i.d processes on $T_d$ (see \\cite{russ} and the references therein) have a distinguished role because of their close connection to local algorithms \\cite{gamarnik, HLSz, kungabor}. They can be used to give estimates for various structures (such as large independent sets \\cite{csoka, harangi, hoppen, mustazee}, matchings \\cite{csokalipp, nazarov}, subgraphs of large girth \\cite{damien, kungabor}, etc., see also \\cite{goldberg}) in $d$-regular graphs. On the other hand, \\cite{cordec} characterizes the covariance structure of weak limits of factor of i.i.d. processes and thus it gives a necessary condition for a process to be factor of i.i.d. However, there are only few general and widely applicable sufficient conditions. This is a difficult question even for branching Markov processes that are important in statistical physics (e.g. Ising model, Potts model). In Section \\ref{glauber} we give a Dobsrushin-type sufficient condition for a branching Markov chain to be factor of i.i.d. \nWe use standard methods from statistical physics, in particular, a heat-bath version of Glauber dynamics. The idea behind this goes back to Ornstein and Weiss: sufficient conditions for fast mixing of Glauber dynamics often imply that the process is factor of i.i.d. See also the paper of H\\\"aggstr\\\"om, Jonasson and Lyons \\cite{russregi}.\nWe will see that the necessary condition on the covariance structure given in \\cite{cordec} is not sufficient for a branching Markov chain to be factor of i.i.d. To show this, we use our necessary conditions for typical processes (Section \\ref{entropy}), which automatically apply for factor of i.i.d. processes.\n\n\\medskip\n\nOur paper is built up as follows. In the first part we summarize various known and new facts about factor of i.i.d, universal and typical processes, local-global convergence and graphings. Moreover, in this part, we formulate our correspondence principle between typical processes and random $d$-regular graphs. In Section \\ref{glauber} we focus more on branching Markov chains on $T_d$. We give a Dobrushin-type sufficient condition for a branching Markov chain to be factor of i.i.d. In the last part (Section \\ref{entropy}) we give necessary conditions for a process to be typical using joint entropy functions. We will see that this result implies necessary conditions on the large scale structure of random $d$-regular graphs. (Note that our entropy method is closely related to the F-invariant, introduced by Lewis Bowen \\cite{lewis} in ergodic theory, and also to the ideas developed by Molloy and Reed \\cite{molloyreed} to study random $d$-regular graphs in combinatorics.) In particular, we prove that the value distributions of eigenvectors of random $d$-regular graphs can not be concentrated around boundedly many values (this is even true for approximative eigenvectors). Moreover, we show that random $d$-regular graphs do not cover bounded $d$-regular weighted graphs (for precise formulation, see Theorem \\ref{thm:combap}). These results are closely related to the papers of Molloy and Reed \\cite{molloyreed} about dominating ratio and Bollob\\'as \\cite{bollind} about independence numbers. \n\n\n\n\n\n\\section{Invariant processes}\\label{invproc}\n\nLet $T_d$ be the (infinite) $d$-regular tree with vertex set $V(T_d)$ and edge set $E(T_d)$. \nLet $M$ be a topological space. 
We denote by $I_d(M)$ the set of $M$-valued random processes on the $d$-regular tree
$T_d$ that are invariant under automorphisms of $T_d$. More precisely, $I_d(M)$ is the set of ${\rm Aut}(T_d)$
invariant Borel probability measures on the space $M^{V(T_d)}$. (If $\Psi\in {\rm Aut}(T_d)$, then $\Psi$ naturally induces
a map from $M^{V(T_d)}$ to itself: given a labelling of the vertices of $T_d$, the new label of a vertex is
the label of its inverse image under $\Psi$. The probability measures should be invariant with respect to this induced map.)
The set $I_d(M)$ possesses a topological structure, namely the restriction of the weak topology for probability measures on $M^{V(T_d)}$ to $I_d(M)$. Note that most of the time in this paper $M$ is a finite set. We denote by $I_d$ the set of invariant processes on $T_d$ with finitely many values.

Let $T_d^*$ denote the rooted $d$-regular tree: it is $T_d$ with a distinguished vertex $o$, which is called the root.
Let $N$ be a topological space and let $f:M^{V(T_d^*)}\rightarrow N$ be a Borel measurable function that is invariant under
${\rm Aut}(T_d^*)$, the set of root-preserving automorphisms of $T_d^*$. For every $\mu\in I_d(M)$ the function $f$ defines a new process $\nu \in I_d(N)$ by evaluating
$f$ simultaneously at every vertex $v$ (by placing the root at $v$) on a $\mu$-random element in $M^{V(T_d)}$.
We say that $\nu$ is a {\it factor} of $\mu$.

A possible way to get processes in $I_d$ goes through Benjamini--Schramm limits. For the general definition see \cite{bs}. We will use and formulate it for colored large girth graph sequences, as follows. Let $F$ be a finite set. Assume that $\{G_i\}_{i=1}^\infty$ is a large girth sequence of $d$-regular graphs. Let $\{f_i:V(G_i)\rightarrow F\}_{i=1}^\infty$ be a sequence of colorings of $G_i$. For every pair of numbers $r,i\in\mathbb{N}$ we define the probability distribution $\mu_{r,i}$, concentrated on rooted $F$-colored finite graphs, as follows. We pick a random vertex $v\in V(G_i)$ and then we look at the neighborhood $N_r(v)$ of radius $r$ of $v$ (rooted at $v$) together with the coloring $f_i$ restricted to $N_r(v)$. The colored graphs $(G_i,f_i)$ are Benjamini--Schramm convergent if for every $r\in\mathbb{N}$ the sequence $\{\mu_{r,i}\}_{i=1}^\infty$ weakly converges to some measure $\mu_r$. The limit object is the probability measure $\mu$ on $F^{V(T_d^*)}$ with the property that the marginal of $\mu$ in the neighborhood of radius $r$ of the root is $\mu_r$. It is easy to see that the measure we get from $\mu$ by forgetting the root is in $I_d(F)$.

We list various classes of invariant processes on $T_d$ that are related to large girth sequences of finite graphs.

\bigskip

\noindent{\bf Factor of i.i.d. processes:}~Let $\mu\in I_d([0,1])$ be the uniform distribution on $[0,1]^{V(T_d)}$, which is
the product measure of the uniform distributions on the interval $[0,1]$. A {\it factor of i.i.d.\ process} is a factor of the process $\mu$. Let $F_d$ denote the set of such processes in $I_d$.
See Lemma \ref{ebred} for an easy example producing independent sets as factor of i.i.d.\ processes.

\bigskip

\noindent{\bf Local processes:}~We say that a process is {\it local} if it is in the closure of factor of i.i.d.\
processes in the weak topology.
Let $L_d$ denote the set of such processes in $I_d$.

\bigskip

\noindent{\bf Universal processes:}~A process $\mu\in I_d$ is called universal if $\mu\in [\mathcal{S}]$ holds for every large girth sequence $\mathcal{S}$ of $d$-regular graphs. We denote the set of such processes by $U_d$.

\bigskip

\noindent{\bf Typical processes:}~A process $\mu\in I_d$ is called typical if $\mu\in [\{\mathbb G_{n_i}\}_{i=1}^\infty]$ holds with probability 1 for some fixed sequence $\{n_i\}_{i=1}^{\infty}$, where $\{\mathbb G_{n_i}\}_{i=1}^{\infty}$ is a sequence of independently chosen uniform random $d$-regular graphs with $|V(\mathbb G_{n_i})|=n_i$. We denote the set of typical processes by $R_d$.

\bigskip

\begin{lemma} \label{lem:bovul}We have the following containments:

$$F_d\subseteq L_d\subseteq U_d\subseteq R_d.$$

\end{lemma}

\begin{proof} The first and last containments are trivial. The containment $L_d\subseteq U_d$ is easy to see. For a proof we refer to \cite{HLSz}, where a much stronger theorem is proved. \hfill $\square$
\end{proof}

\medskip

We also know by recent results of Gamarnik and Sudan \cite{gamarnik} and Rahman and Vir\'ag \cite{mustazee} that $L_d\neq R_d$ for
sufficiently large $d$. Their result implies that the indicator function of a maximum independent set
(an independent set is a set of vertices containing no two adjacent vertices) in a random $d$-regular graph is not in $L_d$ (that is, the largest independent sets cannot be approximated by
factor of i.i.d.\ processes); on the other hand, it is in $R_d$.

It is sometimes useful to consider variants of $F_d,L_d,U_d$ and $R_d$ where the values are in an infinite topological space $N$. The definitions can be easily modified using the extension of Benjamini--Schramm limits to colored graphs where the colors are in a topological space. We denote by $F_d(N),L_d(N),U_d(N)$ and $R_d(N)$ the corresponding sets of processes.
Using this notation, it was proved in \cite{harangi} that $F_d(\mathbb{R})\neq L_d(\mathbb{R})$. In that paper Harangi and Vir\'ag used random Gaussian wave functions \cite{wave} to show this.
See also Corollary 3.3 in the paper of Lyons
\cite{russ}: it provides a discrete-valued example, namely a process in $L_d(\lbrace 0,1\rbrace)\setminus F_d(\lbrace 0,1\rbrace)$.
\medskip

The following question remains open after these results.

\begin{question} Is it true that $U_d=L_d$?~ Is it true that $U_d=R_d$?
\end{question}

\medskip

It is an important goal of this paper to give sufficient conditions (for particular models) and necessary conditions for processes to be in one of the above classes.
A recent result \cite{cordec} in this direction is the following.

\begin{theorem}\label{thmcordec} Let $\mu\in L_d(\mathbb{R})$ and let $v,w\in V(T_d)$ be two vertices of distance $k$. Let $f:T_d\rightarrow\mathbb{R}$ be a $\mu$-random function. Then the correlation of $f(v)$ and $f(w)$ is at most $(k+1-2k/d)(d-1)^{-k/2}$.
\end{theorem}

Note that the statement also holds for processes in $R_d$; however, the proof of that extension uses the very hard theorem of J. Friedman \cite{friedman} on the second eigenvalue of random $d$-regular graphs. There are various examples showing that the condition of Theorem \ref{thmcordec} is not sufficient.
We also give a family of such examples using branching Markov processes (see Theorem \ref{exnotsuff}).
Branching Markov processes will play an important role in this paper, so we give a brief description of them.

\medskip

\noindent{\bf Branching Markov processes:}~Now choose $M$ to be a finite state space $S$ with the
discrete topology. Let $Q$ be the transition matrix of a reversible Markov chain on the
state space $S$. Choose the state of the root uniformly at random. Then make random steps according
to the transition matrix $Q$ to obtain the states of the neighbors of the root. These steps
are made conditionally independently, given the state of the root. Continue this: given the
state of a vertex at distance $k$ from the root, choose the states of its neighbors at distance $k+1$ from the root conditionally independently, according to the transition matrix $Q$.
It is easy to see that reversibility implies that the distribution of the collection of random variables we get is invariant;
hence the distribution of the branching Markov process (which will be denoted by $\nu_Q$) is in $I_d(S)$.

In the particular case when there is a fixed probability
of staying at a given state, and another fixed probability of transition between distinct states, the branching Markov process is identical to
the Potts model on the tree, and for $|S|=2$ we get the Ising model. See e.g. \cite{evans, sly} for the description of the connection between the parameters
of the two models.
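The following sketch (ours; the function name and its parameters are illustrative) samples such a process on the ball of radius $R$ in $T_d$, encoding a vertex by the sequence of child indices along the path from the root. The root state is drawn uniformly, which matches the description above when $Q$ is doubly stochastic; for a general reversible chain one would start from its stationary distribution instead.

\begin{verbatim}
import random

def sample_branching_markov(Q, d, R, rng=random):
    # Q: row-stochastic transition matrix (list of rows),
    # d: degree of the tree, R: depth of the sampled ball
    states = list(range(len(Q)))
    def step(s):
        return rng.choices(states, weights=Q[s])[0]
    labels = {(): rng.choice(states)}        # state of the root
    frontier = [()]
    for _ in range(R):
        nxt = []
        for v in frontier:
            children = d if v == () else d - 1   # the root has d neighbors
            for j in range(children):
                w = v + (j,)
                labels[w] = step(labels[v])      # conditionally independent
                nxt.append(w)
        frontier = nxt
    return labels

# toy usage: the Ising model with parameter 1/2 on T_3, up to depth 2
Q = [[0.75, 0.25],
     [0.25, 0.75]]
print(sample_branching_markov(Q, d=3, R=2))
\end{verbatim}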
\\] \n\nLet $U^{r,k}$ be the set of triples $(H, o, f)$ where $(H, o)$ is a rooted graph of radius at most $r$ and $f: V(H)\\rightarrow [k]$ is a coloring of its vertices with (at most) $k$ colors. Let $\\mathcal M(U^{r,k})$ be the set of probability measures on $U^{r,k}$. With this notation, we have that $Q_{r, G, k}\\subseteq \\mathcal M(U^{r,k})$. The space $\\mathcal M(U^{r,k})$ is a compact metric space equipped with the total variation \ndistance of probability measures: \n\\[d_{TV}(\\mu, \\nu)=\\sup_{A\\subseteq U^{r,k}}|\\mu(A)-\\nu(A)|.\\]\n\n(Note that we will use an equivalent definition of total variation distance later in this paper.)\n\n\\begin{definition}[Local-global convergence, \\cite{HLSz}.] A sequence of finite graphs $(G_n)_{n=1}^{\\infty}$ with uniform degree bound $d$ is locally-globally convergent if for every $r, k\\geq 1$, the sequence $(Q_{r,G_n, k})$ converges in the Hausdorff distance inside the compact metric space $(\\mathcal M(U^{r,k}),\\, d_{TV})$.\n\\end{definition}\n\nFor every locally-globally convergent sequence $(G_n)$ of bounded degree graphs there is a limit object called graphing such that the sets of local statistics of $G_n$ converge to the local stastics of the limit object; see Theorem 3.2 of \\cite{HLSz} for the precise statement, and e.g. \\cite{aldous, cordec, gabor} for more about graphings. \n\n The following metrization of local-global convergence was defined by Bollob\\'as and Riordan \\cite{BR}.\n\n\\begin{definition}[Colored neighborhood metric, \\cite{BR}]\n Let $G, G'$ be finite graphs. Their colored neighborhood distance is the following:\n\\begin{equation}\\label{dcn}d_{CN}(G,G')=\\sum_{k=1}^{\\infty}\\sum_{r=1}^{\\infty} 2^{-k-r} d_H(Q_{r,G,k}, Q_{r, G',k}),\\end{equation}\nwhere $d_H$ denotes the Hausdorff distance of sets in the compact metric space $(\\mathcal M(U^{r,k}),\\, d_{TV})$. \n\\end{definition}\n\nLet $X_d$ be the set of all finite graphs with maximum degree at most $d$. It is clear from the definition that every \nsequence in $X_d$ contains a locally-globally convergent subsequence \\cite{HLSz}. It follows that the completion \n$\\overline {X_d}$ of the metric space $(X_d, d_{CN})$ is a compact metric space. It was proved in \\cite{HLSz} that the elements of $X_d$ can be represented by certain measurable graphs called \ngraphings. \n\n\\begin{definition} [Graphing, \\cite{HLSz}.]Let $\\Omega$ be a Polish topological space and let $\\nu$ be a probability measure on the Borel sets in $X$. A graphing is a graph $\\mathcal G$ on $V(\\mathcal G)=\\Omega$ with Borel measureable edge set $E(\\mathcal G)\\subset \\Omega\\times \\Omega$ in which all degrees are at most $d$ and \n\\[\\int_A e(x, B)d\\nu(x)=\\int_B e(x, A)d\\nu(x)\\] \nfor all measurable sets $A, B\\subset \\Omega$, where $e(x, S)$ is the number of edges from \n$x\\in\\Omega$ to $S\\subseteq \\Omega$.\n\\end{definition}\nIf $\\mathcal G$ is graphing, then $Q_{r, \\mathcal G, k}$ makes sense with the additional condition that the coloring $f: \\Omega\\rightarrow [k]$ is measurable. Hence local-global convergence and metric both extend to graphings. \n\nWe will need the following two lemmas about the metric $d_{CN}$. We remark that for sake of simplicity we will use the notion of random $d$-regular graphs with $n$ vertices in the sequel without any restriction on $d$ and $n$. If $d$ and $n$ are both odd, then there are no such graphs. 
We will need the following two lemmas about the metric $d_{CN}$. We remark that, for the sake of simplicity, we will use the notion of random $d$-regular graphs with $n$ vertices in the sequel without any restriction on $d$ and $n$. If $d$ and $n$ are both odd, then there are no such graphs. We will formulate the statements so that they trivially hold for the empty set as well.

\begin{lemma} \label{lem:halo}For all $d\geq 1$ and $\varepsilon>0$ there exists $F(\varepsilon)$ such that for all $n\geq 1$, in
the set of $d$-regular graphs with $n$ vertices endowed with $d_{CN}$, there exists an $\varepsilon$-net of size at most $F(\varepsilon)$. \label{lem:net}
\end{lemma}
\begin{proof}Using compactness, we can choose an $\varepsilon/2$-net $N$ in the space $(\overline {X_d}, d_{CN})$. We show that $F(\varepsilon):=|N|$ is a good choice. Let $N'$ be the subset of $N$ consisting of points $x$ such that the ball of radius
$\varepsilon/2$ around $x$ contains a $d$-regular graph with $n$ vertices. To each element of $N'$ we assign a $d$-regular graph with $n$ vertices at distance at most $\varepsilon/2$ from it. It is clear that the set of these
graphs has the desired properties. \hfill $\square$
\end{proof}

\begin{lemma}\label{lem:lip}
For all $\delta>0$ there exists $i_0$ such that for all $i\geq i_0$, any two graphs $G_1, G_2\in X_d$ on the vertex set $[i]$ with $|E(G_1)\triangle E(G_2)|=1$ satisfy
$d_{CN}(G_1, G_2)\leq \delta$.
\end{lemma}
\begin{proof}
Since the sum of the weights in \eqref{dcn} is finite and all the Hausdorff distances are at most 1, it is enough to prove the statement for a single term. Let us fix $k$ and $r$. Let $\mu_{r,G_1,f}\in Q_{r, G_1, k}$ be an arbitrary element corresponding to a coloring $f: [i]\rightarrow [k]$. It is enough to prove that the
total variation distance of $\mu_{r,G_1,f}$ and $\mu_{r,G_2,f}$ can be bounded from above by a quantity depending only on $i$ and tending to zero as $i$ goes to $\infty$. Let $e$ be the only edge in $E(G_1)\triangle E(G_2)$. In both $G_1$ and $G_2$ there are boundedly many vertices $v$ such that $e$ intersects the neighborhood of radius $r$ of $v$; it is easy to see that $2(d+1)^r$ is such a bound. The colored neighborhoods of the remaining vertices are the same in $G_1$ and $G_2$. It follows that the total variation distance of $\mu_{r,G_1,f}$ and $\mu_{r,G_2,f}$ is at most $2(d+1)^r/i$. This completes the proof. \hfill $\square$
\end{proof}

\subsubsection{Typical processes}

In this section we prove a correspondence principle between typical processes and random $d$-regular graphs.

Throughout this section, $d\geq 3$ will be fixed, and $\mathbb G_n$ will be a uniformly chosen random $d$-regular graph on $n$ vertices.

\begin{lemma}\label{typl1} For fixed $d\geq 3$ there is a sequence $\lbrace B_n\rbrace_{n=1}^{\infty}$ of $d$-regular graphs with $|V(B_n)|=n$ such that $d_{CN}(B_n, \mathbb G_n)$ tends to $0$ in probability as $n\rightarrow \infty$.
\end{lemma}

\begin{proof}
Given $\varepsilon>0$, for all $n\geq 1$, by using Lemma \ref{lem:net} we choose an $\varepsilon/4$-net $N_n$ of size at most $F(\varepsilon/4)$ in the set of $d$-regular graphs with $n$ vertices with respect to the colored neighborhood metric. (We emphasize that the size of the net does not depend on the number of vertices of the graph.) For each $n$, let $B_{n, \varepsilon}\in N_n$ be a (deterministic) $d$-regular graph on $n$ vertices such that
\begin{equation}\label{conce1}\mathbb P(d_{CN}(B_{n, \varepsilon}, \mathbb G_n)\leq \varepsilon/4)\geq \frac{1}{F(\varepsilon/4)},\end{equation}
where $\mathbb G_n$ is a uniform random $d$-regular graph on $n$ vertices.
\nSuch a $B_{n, \\varepsilon}$ must exist according to the definition of the $\\varepsilon\/4$-net $N_n$. \n\nWe define $f_{n, \\varepsilon}(H_n)=d_{CN}(B_{n, \\varepsilon}, H_n)$ for $d$-regular graphs $H_n$ on $n$ vertices. By Lemma \\ref{lem:lip}, if $n\\geq n_0$ with some fixed $n_0$, then $f_{n, \\varepsilon}$ is a Lipschitz function with $\\delta$. By well-known concentration inequalities (based on the exploration process and Azuma's inequality on martingales, see e.g. \\cite[Chapter 7]{alon}, this implies the following. For all $\\eta>0$ there exists $n_1=n_1(\\eta)$ such that \n\\begin{equation}\\label{conce2}\\mathbb P(|f_{n, \\varepsilon}(\\mathbb G_n)-\\mathbb E(f_{n, \\varepsilon}(\\mathbb G_n))|>\\eta)\\leq \\eta \\qquad (n\\geq n_1).\\end{equation}\nBy choosing $0<\\eta<\\min(\\varepsilon\/4, 1\/F(\\varepsilon\/4))$, inequalities \\eqref{conce1} and \\eqref{conce2} together imply $\\mathbb E(f_{n, \\varepsilon}(\\mathbb G_n))\\leq \\varepsilon\/2$ $(n\\geq n_1)$. That is, since $f_{n, \\varepsilon}$ is concentrated around its expectation (due to its Lipschitz property) for large $n$, and $\\mathbb G_n$ is close to some fixed graph with probability with a positive lower bound not depending on $n$, we conclude that this expectation has to be small for $n$ large enough. \n\nPutting this together, this yields \n\\[\\mathbb P(f_{n, \\varepsilon}(\\mathbb G_n)>\\varepsilon)=\\mathbb P(d_{CN}(B_{{n, \\varepsilon}},\\mathbb G_n)>\\varepsilon)\\leq \\varepsilon \\qquad (n\\geq n(\\varepsilon)).\\]\n\nBy a standard diagonalization argument, let $k(n)=\\max\\{k\\, \\vert \\, n(1\/k)\\varepsilon\\rbrace$ is infinite for some $\\varepsilon>0$. Choose $S'\\subseteq S$ by Proposition \\ref{prop:graphing}; that is, $(\\mathbb G_i)_{i\\in S'}$ locally-globally converges to a fixed graphing $\\mathcal G$ with probability 1. On the other hand, by independence, it follows that with probability 1 we have $\\mathbb G_i\\in C$ for infinitely many $i\\in S'$. \nSince $C$ is closed in the local-global topology, and $\\mathcal G$ is the limit of the whole sequence almost surely, this implies that $\\mathcal G$ has to be in $C$. But, by definition, $\\mathcal G$ is typical. This contradicts our assumption on $C$. \\hfill $\\square$\n\\end{proof}\n\n\\medskip\n\n\n\nThe main application of Proposition \\ref{prop:corres} is that we can turn statements about typical processes into statements about random $d$-regular graphs. \nAs we have explained before, typical processes are exactly the processes coming from typical graphings. Therefore if we succeed in excluding typical processes from a closed set within the weak topology of invariant processes, then at the same time we exclude typical graphings from a closed set within the local-global topology, and through Proposition \\ref{prop:corres} we obtain a result for random $d$-regular graphs. \nWe will demonstrate this principle on concrete examples in Section \\ref{dominating}. \n\n\n\n\n\\subsection{Joinings and related metric}\n\n\\label{joining}\n\n\nAn invariant coupling, or shortly {\\it joining}, of two elements $\\mu,\\nu\\in I_d(M)$ is a process $\\psi\\in I_d(M\\times M)$ such that the two marginal processes of $\\psi$ (with respect to the first and second coordinate in $M\\times M$) are $\\mu$ and $\\nu$.\nWe denote by $C(\\mu,\\nu)$ the set of all joinings of $\\mu$ and $\\nu$.\n\nAssume that the topology on $M$ is given by a metric $m:M\\times M\\rightarrow\\mathbb{R}^+\\cup\\{0\\}$. 
Then we define a distance $m_c$ on $I_d(M)$ in the following way:
\begin{equation}m_c(\mu,\nu)=\inf_{\psi\in C(\mu,\nu)}\mathbb{E}(m(\psi|_v)),\label{eq:metric}\end{equation}
where $v$ is an arbitrary fixed vertex of $T_d$ and $\psi|_v$ is the restriction of $\psi$ to $v$. Note that automorphism invariance implies that $m_c$ does not depend on the choice of $v$.
If $M$ has finite diameter, then $m_c(\mu,\nu)$ is a finite number bounded by this diameter.

This is basically Ornstein's $\bar d$-metric, which was originally defined for $\mathbb Z$-invariant processes; see e.g. \cite{glasner}. See also the recent papers of Lyons and Thom \cite{russ, monoton1}, where
several results and open questions on $T_d$ are presented, connecting factor of i.i.d.\ processes to
this metric.

The key to the proof of the fact that this is a metric is the notion of relatively independent joining \cite[Chapter 15, Section 7]{glasner}. Assume that $\psi_{1,2}\in C(\mu_1,\mu_2)$ and $\psi_{2,3}\in C(\mu_2,\mu_3)$. Let us consider the unique joining of $\psi_{1,2}$ and $\psi_{2,3}$ that identifies the marginal $\mu_2$ and has the property that $\mu_1$ and $\mu_3$ are conditionally independent with respect to $\mu_2$. The marginal of this joining on the first and third coordinates is a joining of $\mu_1$ and $\mu_3$, which yields the triangle inequality for $m_c$.
We remark that using relatively independent joinings and a Borel--Cantelli type argument one can check that the space of invariant processes is complete with respect to the $\bar d$-metric.

The case when $M$ is a finite set plays a special role in our paper. In this case we define
$m(x,y)=1$ if $x\neq y$ and $m(x,x)=0$ for $x,y\in M$. The corresponding metric $m_c$ is regarded as the Hamming distance for processes in $I_d(M)$.

\medskip

\section{Glauber dynamics and branching Markov processes}

\label{glauber}

Glauber dynamics is an important tool in statistical physics. In this section we consider a variant of
heat-bath Glauber dynamics that is an $m_c$-continuous transformation on $I_d(M)$.
We begin with the finite case, then we define the Dobrushin coefficient and formulate the main result: a Dobrushin-type sufficient condition for a branching Markov chain to be factor of i.i.d.
Then we give a brief description of the Poisson Glauber dynamics, which seems to be the closest analogue of classical Glauber dynamics, and we define a similar but more technical variant that is more useful in our applications.

\subsection{Glauber dynamics on finite graphs}

\label{finiteglaub}

First suppose that $G$ is a (potentially infinite) $d$-regular graph, and that we have a reversible Markov chain with finite state space $S$
and transition matrix $Q$. We think of $G$ as follows: each vertex has a state from $S$, and the state of the graph is an element of $S^{V(G)}$. A {\it Glauber step at vertex} $v\in V(G)$ is a way of generating a random state from a given state of the graph.
We do this by randomizing the state of $v$ conditionally on the states of its neighbors, as follows.

Let $N(v)$ denote the set of neighbors of $v$. Let $C=\{v\}\cup N(v)$ and let $\mu_C$ be the distribution of the branching Markov process restricted to $C$. For a state $\omega\in S^{N(v)}$, we define $B_{v, \omega}$ to be the conditional distribution of the state of $v$ given $\omega$. The Glauber step at $v$ (the so-called heat-bath version) is the operation of randomizing the state of $v$ from $B_{v, \omega}$.
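For the branching Markov process, $B_{v,\omega}$ can be written explicitly (this formula is our addition): the restriction $\mu_C$ gives the central state $s$ together with the neighbor configuration $\omega$ with probability $\pi(s)\prod_{u\in N(v)} Q(s,\omega(u))$, where $\pi$ is the stationary distribution of the chain ($\pi$ is uniform in the setting above), since the neighbors are conditionally independent given the state at $v$. Hence
\[B_{v,\omega}(s)=\frac{\pi(s)\prod_{u\in N(v)} Q(s,\omega(u))}{\sum_{s'\in S}\pi(s')\prod_{u\in N(v)} Q(s',\omega(u))}.\]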
Now we define the Glauber dynamics on a finite graph. It is a Markov chain on the state space $S^{V(G)}$ of the graph, obtained by choosing a vertex $v$ uniformly at random and performing the Glauber step at $v$.
See e.g. Section 3.3 in \cite{markovmixing} on Glauber dynamics for various models.

It is also clear from the theory of
finite state space Markov chains that (under appropriate conditions on $Q$) this Markov chain has a unique stationary
distribution, which is the limiting distribution of the Glauber dynamics. However, the order of the mixing time depends on $Q$; the question typically is whether the mixing
time can be bounded by a linear
function of the number of vertices. Our main result will show that the so-called Dobrushin condition, which implies fast mixing, also implies that the process is factor of i.i.d. Note that the connection between fast mixing and the factor of i.i.d.\ property was also implicitly used in \cite{gamarnik}. A paper of Berger, Kenyon, Mossel and Peres \cite{berger} deals with the problem of fast mixing on trees for the Ising model, i.e.\ when there are only two states; see Theorem 1.4 of \cite{berger}. Furthermore, Mossel and Sly \cite{exact} gave a sharp threshold for
general bounded degree graphs. The recent paper
of Lubetzky and Sly \cite{spacetime} contains more refined results for the Ising model with underlying graph
$(\mathbb Z/n\mathbb Z)^d$, and its Theorem 4 refers to analogous results for general graphs.

It is important to mention the paper of Bubley and Dyer \cite{pathcoupling} on fast mixing of
the Glauber dynamics of Markov chains and on the path coupling technique, which
is applied in \cite{berger}
and whose ideas will be used in what follows.
See also the paper of Dembo and Montanari \cite{dembo} and Chapter 15 in \cite{markovmixing} for more details on the mixing time of the Glauber dynamics.

\subsection{The Dobrushin coefficient and factor of i.i.d. processes}

When we examine how the properties of the Glauber dynamics depend on the transition matrix $Q$, it is helpful to investigate the following question: how does a change in the state of a single neighbor of $v$ affect the conditional
distribution of the state of $v$ at the Glauber step? This is the idea behind the definition of the Dobrushin coefficient (see e.g.
\cite{pathcoupling, dobrushin}).

\begin{definition}[Dobrushin coefficient] \label{def:dobr}Let us consider a reversible Markov chain on a finite state space $S$ with transition matrix $Q$.
The Dobrushin coefficient of the Markov chain is defined by
\begin{multline*}D=\sup \bigl \lbrace d_{TV}( B_{v,\omega}, B_{v,\omega'}): \omega, \omega'\in S^{N(v)},\ |\lbrace u\in N(v): \omega(u)\neq \omega'(u)\rbrace|=1 \bigr \rbrace, \end{multline*}
where $d_{TV}$ is the total variation distance of
probability distributions:
\begin{multline*}d_{TV}(P_1,P_2)
=\frac{1}{2}\sum_{s\in S}|P_1(s)-P_2(s)|\\=\inf\lbrace \mathbb P(X\neq Y): X\sim P_1,\ Y\sim P_2,\ \mathbb P \textrm{\ is a coupling of } X \textrm{\ and\ }Y
\rbrace.\end{multline*}
\end{definition}

To put it another way, we consider pairs of configurations on the neighbors of $v$ that differ at exactly one place.
We calculate the total variation distance between the conditional distributions at $v$ given the two configurations.
Finally we take the supremum over all these pairs. Note that this definition depends only on $Q$ and the number of neighbors of $v$.
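As an illustration (this computation is our addition), consider the Ising chain with parameter $\vartheta\geq 0$ on the $d$-regular tree, i.e.\ $S=\{+,-\}$, $Q(s,s)=\frac{1+\vartheta}{2}$ and $Q(s,-s)=\frac{1-\vartheta}{2}$. Using the explicit form of $B_{v,\omega}$ above, write $t=\frac{1+\vartheta}{1-\vartheta}$ and $x=B_{v,\omega}(+)/B_{v,\omega}(-)$. Changing one neighbor of $v$ from $-$ to $+$ multiplies $x$ by $t^2$, so
\[d_{TV}(B_{v,\omega}, B_{v,\omega'})=\frac{xt^2}{1+xt^2}-\frac{x}{1+x}=\frac{x(t^2-1)}{(1+x)(1+xt^2)}\ \leq\ \frac{t-1}{t+1}=\vartheta,\]
with the maximum attained at $x=1/t$. This is consistent with the fact, used later, that the Dobrushin coefficient of the Ising model with parameter $\vartheta\geq 0$ is $\vartheta$.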
\medskip

Now we can formulate the main result of this section, which will be proved in Subsection \ref{proofthm1}.

\begin{theorem}\label{thm1}
If the condition $D<1/d$ holds for a reversible Markov chain with transition matrix $Q$ on a finite state space $S$, then the
branching Markov process $\nu_Q$ corresponding to $Q$ on the $d$-regular tree $T_d$ is a factor of i.i.d. process; that is, $\nu_Q\in F_d(S)$.
\end{theorem}

This theorem is
heuristically in accordance with the results of Bubley and Dyer \cite{pathcoupling}, who proved fast mixing of the Glauber
dynamics under the condition $D<1/d$. Moreover, this condition has further consequences for correlation decay and the
uniqueness of the Gibbs measure under various circumstances \cite{dobrushin, lovasz, sokal, weitz}. However, we do not know in general
whether fast mixing or the uniqueness of the Gibbs measure implies that the branching Markov process is factor of i.i.d.

\subsection{Poisson Glauber dynamics on $T_d$}

When the vertex set of the underlying graph is finite, as we have already seen in Subsection \ref{finiteglaub}, it is easy to define the Glauber dynamics.
From now on we return to the infinite $d$-regular tree, where it is not possible to choose a vertex uniformly at
random and perform the Glauber dynamics step by step in this way.
In this subsection we give a heuristic description of the continuous time Glauber dynamics on the infinite tree for motivation. However, for our purposes the discrete version defined in the next subsection is
more convenient; hence we omit the precise details of the definition of the continuous time model.

We assign independent Poisson processes with rate 1 to the vertices of the tree. That is, each vertex has a
sequence of random times when it wakes up. At the beginning, at time zero, the vertices are in random
states chosen independently and uniformly from the finite state space $S$. When a vertex wakes up, it performs a single
Glauber step defined earlier. This depends only on the states of the neighbors of the
vertex. However, to know these states, we have to know what has happened when the neighbors performed Glauber steps earlier.
This continues, hence it is not trivial that this process is well-defined. To see this, one can check that the
expectation of the number of Glauber steps that affect the randomization of a vertex waking up is finite.

This argument can be made precise (see e.g. \cite[Theorem 1]{howard} for the definition of the joint distribution of the Poisson processes on $T_3$). The advantage of the continuous time Glauber dynamics is the fact that
the probability that neighbors wake up at the same time is zero. When we define the discrete time Glauber step
in the next subsection, we will have to pay attention to avoid neighbors waking up simultaneously.

\subsection{The factor of i.i.d. Glauber step on $T_d$}

As we have seen in Subsection \ref{finiteglaub}, a single Glauber step for finite graphs maps each configuration in
$S^{V(G)}$ to a random configuration. Now we are working with the infinite $d$-regular tree $T_d$; hence
we deal with random processes, which are probability distributions on $S^{V(T_d)}$. We
will describe a way of performing Glauber steps simultaneously at different vertices such that our procedure produces factor of i.i.d.\ processes from factor of i.i.d.\ processes.
Given a configuration $\omega \in S^{V(T_d)}$, which is a labelling of
the vertices of the $d$-regular tree with labels from the finite state space $S$ of the Markov chain, we will perform a
single Glauber step to get a random configuration $G\omega$ in $S^{V(T_d)}$. Fix the
transition matrix $Q$. The scheme is
the following; we give the details afterwards.
\begin{enumerate}
\item Choose an invariant random subset $U$ of $V(T_d)$ such that it has positive density and does not contain any two vertices of distance less than 3.
\item For each vertex $v\in U$ perform the usual Glauber step at $v$: randomize the state of vertex $v$ according to
the conditional distribution with respect to the states of its neighbors.
\end{enumerate}

More precisely, for the first part we need the following lemma.

\begin{lemma} \label{ebred}It is possible to find an invariant random subset $U$
of $V(T_d)$ such that \begin{itemize}
\item it is factor of i.i.d.: the distribution of the indicator function of $U$ is in $F_d(\lbrace 0,1\rbrace)$;
\item it has positive density: the probability
that the root $o$ is in $U$ is positive;
\item it does not contain any two vertices of distance less than 3.
\end{itemize}
\end{lemma}

\begin{proof} We start with $[0,1]^{V(T_d)}$ endowed with $\mu$, the product measure of the uniform distributions on the
interval $[0,1]$. That is, the vertices have independent and uniformly distributed labels from $[0,1]$.

A vertex $v\in V(T_d)$ will be in $U$ if its label is larger than the labels of all other vertices in its neighborhood of
radius 2. That is, for $\omega\in [0,1]^{V(T_d)}$ we set $f(\omega)=1$ if $\omega$ at the root $o$ is larger than $\omega_u$ for every $u\neq o$ at distance at most 2 from the root; otherwise $f(\omega)=0$. Then we get the characteristic function of $U$ by placing the root at each vertex and applying $f$. This is a factor of i.i.d. process satisfying all three conditions. \hfill $\square$

\end{proof}

\medskip

This lemma ensures that we can perform the first part of the Glauber step as a factor of i.i.d. process.
As for the second part, we just refer to the definition of the Glauber step at a single vertex: each vertex $v\in U$ randomizes its
state given the states of its neighbors, according to the distribution of the branching Markov process
restricted to the finite subset $\{v\}\cup N(v)$. Since the distance between any two vertices
in $U$ is at least 3, these randomizations can be performed simultaneously and independently.

It is straightforward to extend the definition of the Glauber step to a map from the set of probability measures on $S^{V(T_d)}$ to
itself. Namely, choose a random configuration from $S^{V(T_d)}$ according to the given measure, and perform the
Glauber step described above. This gives a new probability measure on $S^{V(T_d)}$. It is also easy to see
that if we apply this to an invariant probability measure, then the resulting measure will also be invariant.
Hence we have extended the definition of the Glauber step to a transformation of the form $G: I_d(S)\rightarrow I_d(S)$.

Moreover, note that if $\nu$ is factor of i.i.d., then $G(\nu)$ is also factor of i.i.d.,
since the set of vertices performing Glauber steps is chosen by a factor of i.i.d.\ process by Lemma \ref{ebred}, and the Glauber steps depend only on the states of the neighbors of these vertices.
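The following sketch (ours; all names are illustrative) implements one such factor of i.i.d.\ Glauber step on a finite $d$-regular graph, which serves here as a stand-in for $T_d$. It assumes that $Q$ is doubly stochastic, so that the stationary distribution is uniform and $B_{v,\omega}(s)$ is proportional to $\prod_{u\in N(v)} Q(s,\omega(u))$.

\begin{verbatim}
import random

def glauber_step(graph, state, Q, rng=random):
    # the set U: vertices whose i.i.d. uniform label beats all labels
    # within distance 2 (the construction of Lemma ebred)
    labels = {v: rng.random() for v in graph}
    def punctured_ball2(v):
        out = set()
        for u in graph[v]:
            out.add(u)
            out.update(graph[u])
        out.discard(v)
        return out
    U = [v for v in graph
         if all(labels[v] > labels[u] for u in punctured_ball2(v))]
    # heat-bath update at the vertices of U; since they are at distance
    # at least 3 from each other, the order of the updates is irrelevant
    k = len(Q)
    for v in U:
        weights = [1.0] * k
        for u in graph[v]:
            for s in range(k):
                weights[s] *= Q[s][state[u]]
        state[v] = rng.choices(range(k), weights=weights)[0]
    return state
\end{verbatim}

Iterating \texttt{glauber\_step} from an i.i.d.\ uniform initial state mirrors the iteration behind Theorem \ref{thm1}.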
\subsection{The invariance of the branching Markov process for the Glauber step}

In order to prove Theorem \ref{thm1}, we will need the fact that the Glauber step defined above does not change
the distribution of the branching Markov process.

\begin{proposition}[Invariance] \label{prop:inv}
If $\nu_Q\in I_d(S)$ is the branching Markov process with transition matrix $Q$,
then it is a fixed point of the Glauber step corresponding to $Q$ and $d$ (i.e. $G(\nu_Q)=\nu_Q$).
\end{proposition}

\begin{proof}
First we check that the Glauber step at a single vertex $u$ does not change the
distribution of the branching Markov process. This follows from the fact that the distribution of the state of $u$ and the joint distribution of the states on $V(T_d)\setminus (\{u\}\cup N(u))$ are conditionally independent given the states of the vertices in $N(u)$.

Let $U$ be the set of vertices performing Glauber steps when we apply $G$. Since these vertices are far away from each other
(their distance is at least 3 according to Lemma \ref{ebred}), the randomizations are independent; therefore, since the Glauber step at a
single vertex does not change the distribution, the distribution is also invariant for finitely many steps. On the other hand, for arbitrary $U$ it is possible to
find finite sets of vertices $U_n$ such that (i) $U_n\subseteq U_{n+1}$ for all $n$; (ii) $\bigcup_{n=1}^{\infty} U_n=V(T_d)$; (iii) if a vertex is in $U\cap U_n$, then all its neighbors are in $U_n$. For example, one can use balls of appropriate radius with a few vertices
omitted from the boundary. Since every $U_n$ contains finitely many vertices, and vertices on the boundary of $U_n$ do not perform Glauber steps, the distribution of the branching Markov process is invariant under the
Glauber steps at the vertices of $U\cap U_n$. This also implies that the branching Markov process is invariant under $G$, where we perform Glauber steps at
the vertices of $U$ simultaneously.
\hfill $\square$
\end{proof}

\subsection{The Glauber step as a contraction}

We will prove that if the Dobrushin coefficient (Definition \ref{def:dobr}) is small enough, then the factor of i.i.d. Glauber step is a contraction
with respect to the metric
$m_c$ derived from the Hamming distance on $S$.
First we need a piece of notation and a lemma.

\begin{definition}[Coupling Hamming distance] Let $S$ be a finite state space with the discrete topology and with the Hamming distance: $m(s,s)=0$ for all $s\in S$ and $m(s,t)=1$ if $s\neq t$. We denote by $h_c$ the metric defined by equation \eqref{eq:metric} on $I_d(S)$ corresponding to the Hamming distance (see Section \ref{joining}).
\end{definition}

Recall that $B_{v,\omega}$ is the distribution of the state of vertex $v$ after the Glauber step if the states of its
neighbors are given by $\omega\in S^{N(v)}$.

\begin{lemma}\label{lem:pc}Suppose that we have a branching Markov process on $T_d$ with Dobrushin coefficient $D$. Fix
$v\in V(T_d)$ and $\omega, \omega'\in S^{N(v)}$ such that $|\lbrace u\in N(v): \omega(u)\neq \omega'(u)\rbrace|=k$. Then we have that
\[d_{TV}(B_{v,\omega}, B_{v, \omega'})\leq k D.\]

\end{lemma}
\begin{proof} The case $k=1$ is just the definition of the Dobrushin coefficient. The general case follows by induction, using the triangle inequality. \hfill $\square$
\end{proof}

\medskip
Now we can prove that the factor of i.i.d.\ Glauber step is a contraction, provided that the Dobrushin condition holds.
\begin{proposition}\label{prop:contract}
If $D<1/d$, then $G: I_d(S)\rightarrow I_d(S)$ is a contraction with respect to the coupling Hamming distance $h_c$; that is,
there exists $r<1$ such that
\[h_c(G(\nu_1), G(\nu_2))\leq r\, h_c(\nu_1,\nu_2)\]
holds for all $\nu_1, \nu_2\in I_d(S)$.
\end{proposition}
\begin{proof}
Choose $\varepsilon>0$ such that $r:=(1+\varepsilon)(1-p+pdD)<1$, where $p>0$ is the density of $U$ in the Glauber step. This is possible if $D<1/d$. Fix $\nu_1, \nu_2\in I_d(S)$. Denote their distance $h_c(\nu_1, \nu_2)$ by $h$. By the definition of the metric $h_c$, there is
a joining $\Psi$ of $\nu_1$ and $\nu_2$ such that $\mathbb E(m(\Psi|_v))<(1+\varepsilon)h$ holds at any given vertex $v$, where $m$ denotes the Hamming distance on $S$.

Our goal is to construct a joining $\Psi'$ of $G(\nu_1)$ and $G(\nu_2)$ such that $\mathbb{E}(m(\Psi'|_v))\leq rh$.
We construct this joining in such a way that the set of vertices that perform the Glauber step is the same for $\nu_1$ and $\nu_2$.
As a first step we choose an invariant random set $U$ according to Lemma \ref{ebred} such that $U$ is independent of $\Psi$.

We define $\Psi'$ from $\Psi$ and $U$ as follows. When we randomize the state of a given vertex $v\in U$, conditionally on the states of the vertices in $N(v)$, we use the best possible coupling of the two conditional distributions in total variation distance (i.e., the probability that the two random variables differ is minimal). Since we deal with a finite number of configurations and a discrete probability space for fixed $v$, this makes sense. For the distinct vertices in $U$ we join these couplings independently to get $\Psi'$ for a fixed $U$. This defines $\Psi'$ on the whole extended probability space.

Since $U$ is invariant and the randomizations depend only on the states of the neighbors, $\Psi'$ is also invariant.
It is clear that the marginal distributions $\nu_1'$ and $\nu_2'$ of $\Psi'$ are identical to $G(\nu_1)$
and $G(\nu_2)$, respectively.

Now we give an upper bound on the coupling Hamming distance of $\nu_1'$ and $\nu_2'$.

Fix $v\in V(T_d)$. The probability that $v\in U$ is $p$ by definition. With probability $1-p$ the state of $v$ is not changed, and in this case there is a difference at $v$ in $\Psi'$ with probability $\mathbb E(m(\Psi|_v))<(1+\varepsilon)h$. If $v\in U$, then the expected number of neighbors of $v$ at which the two configurations differ is at most $d(1+\varepsilon)h$; hence, by Lemma \ref{lem:pc} and the coupling characterization of the total variation distance, the optimal coupling produces a difference at $v$ with probability at most $dD(1+\varepsilon)h$. Altogether we obtain
\[\mathbb E(m(\Psi'|_v))\leq (1-p)(1+\varepsilon)h+p\,dD(1+\varepsilon)h=(1+\varepsilon)(1-p+pdD)h=rh,\]
which proves the proposition. \hfill $\square$
\end{proof}

\subsection{Proof of Theorem \ref{thm1}}

\label{proofthm1}

\begin{proof}
Let $\mu_0\in I_d(S)$ be the process in which the states of the vertices are independent and uniformly distributed on $S$, and let $\mu_{n+1}:=G(\mu_n)$ for $n\geq 0$. Each $\mu_n$ is a factor of i.i.d.\ process: $\mu_0$ trivially is, and we have seen that $G$ maps factor of i.i.d.\ processes to factor of i.i.d.\ processes. By Proposition \ref{prop:inv}, $\nu_Q$ is a fixed point of $G$, and by Proposition \ref{prop:contract}, $G$ is a contraction; hence $h_c(\mu_n, \nu_Q)\leq r^n h_c(\mu_0, \nu_Q)\rightarrow 0$. Moreover, the couplings constructed in the proof of Proposition \ref{prop:contract} together with a Borel--Cantelli argument show that the factor maps defining $\mu_n$ can be chosen to converge almost surely; therefore the limit $\nu_Q$ is itself a factor of i.i.d.\ process, that is, $\nu_Q\in F_d(S)$. \hfill $\square$
\end{proof}

\section{Entropy inequalities}

\label{entropy}

In this section we give necessary conditions for an invariant process to be typical, in terms of entropy. For an invariant process $\nu\in I_d(S)$ and a finite set $F\subset V(T_d)$ we denote by $h(F)$ the Shannon entropy of the marginal of $\nu$ on $F$; by invariance, $h(F)$ depends only on the isomorphism type of $F$. In particular, we write $h(\begin{picture}(4,10)
\put(2,3){\circle*{2}}
\end{picture})$ and $h(\begin{picture}(4,10)
\put(2,0){\line(0,1){6}}
\put(2,0){\circle*{2}}
\put(2,6){\circle*{2}}
\end{picture})$ for the entropy of the marginal of $\nu$ on a single vertex and on a pair of adjacent vertices, respectively, and we set $H(F)=e^{h(F)}$; similarly, $h(\eta)$ and $H(\eta)=e^{h(\eta)}$ denote the Shannon entropy of a probability distribution $\eta$ and its exponential. Finally, $PM(m)$ denotes the number of perfect matchings of a set of $m$ points. We will use the following counting lemma, which follows from standard entropy estimates.

\begin{lemma}\label{entlem} Let $\mu$ be a probability distribution on a finite set $S$ and let $\eta$ be a probability distribution on $S\times S$ that is invariant under exchanging the two coordinates and has marginal $\mu$. Assume that a set $A$ of $m$ points is colored by $S$ such that the color distribution of a uniform random point is $\varepsilon$-close to $\mu$ in total variation distance. Then the number of perfect matchings of $A$ in which the color pair of a uniform random directed matched pair is $\varepsilon$-close to $\eta$ is
\[PM(m)\, H(\eta)^{(m/2)(1+o(1))} H(\mu)^{-m(1+o(1))},\]
where $o(1)$ denotes a quantity tending to $0$ if first $m$ goes to infinity and then $\varepsilon$ goes to $0$.
\end{lemma}

\begin{theorem}\label{edgevertex} For any typical process $\nu\in R_d$ the following holds:
\[\frac{d}{2}\, h(\begin{picture}(4,10)
\put(2,0){\line(0,1){6}}
\put(2,0){\circle*{2}}
\put(2,6){\circle*{2}}
\end{picture})\geq (d-1)\, h(\begin{picture}(4,10)
\put(2,3){\circle*{2}}
\end{picture}).\]
\end{theorem}

\begin{proof}
Let $\nu\in R_d\cap I_d(S)$, and let $\nu_v$ and $\nu_e$ denote the marginal distributions of $\nu$ on a vertex and on a directed edge of $T_d$, respectively. Let $V_n$ be a set of $n$ vertices and let $\varepsilon>0$. We denote by $G_{n,\varepsilon}$ the set of $S$-colored $d$-regular graphs on the vertex set $V_n$ with the restriction that the distribution of vertex colors is $\varepsilon$-close to $\nu_v$ and the distribution of colored (directed) edges is $\varepsilon$-close to $\nu_e$ in total variation distance.
Since $\nu$ is typical, we know that if $n$ is large enough and belongs to the sequence $\{n_i\}_{i=1}^{\infty}$, then almost every $d$-regular graph on $n$ vertices is in $G_{n,\varepsilon}$. It follows that
\begin{equation}\label{entpr1}
\limsup_{n\rightarrow\infty} \frac{|G_{n,\varepsilon}|}{t_n}\geq 1
\end{equation}
holds for every $\varepsilon>0$, where $t_n$ is the number of $d$-regular graphs on $n$ vertices.

In the rest of the proof we basically compute the asymptotic behavior of $\log|G_{n,\varepsilon}|$ when $\varepsilon$ is small and $n$ is large enough depending on $\varepsilon$. We start by assigning $d$ half-edges to each element of $V_n$. Let $V_n^*$ denote the set of these half-edges. We first color the vertices according to the distribution $\nu_v$. We color $V_n^*$ such that each half-edge inherits the color of its incident vertex.
Then we match these half-edges
such that the distribution of the colors of the endpoints of a uniform random edge is $\nu_e$.
To be more precise, in each coloring throughout this proof we allow an $\varepsilon$ error in the total variation distance of the distributions.

There are $H(\begin{picture}(4,10)
\put(2,3){\circle*{2}}
\end{picture})^{n(1+o(1))}$ ways
to color $V_n$ with distribution $\nu_v$.
Here $o(1)$ means a quantity that goes to $0$ if first $n$ goes to infinity and then $\varepsilon$ goes to $0$.

Assume that the vertices of $V_n$ have a fixed coloring. Let $M$ denote the set of perfect matchings on $V_n^*$ that satisfy the above requirement. By Lemma \ref{entlem} we have that $$|M|= PM(nd)H(\begin{picture}(4,10)
\put(2,0){\line(0,1){6}}
\put(2,0){\circle*{2}}
\put(2,6){\circle*{2}}
\end{picture})^{(nd/2)(1+o(1))}/H(\begin{picture}(4,10)
\put(2,3){\circle*{2}}
\end{picture})^{nd(1+o(1))}.$$

Finally we have to take into consideration that the order of the half-edges does not matter; hence we obtain every colored graph $(d!)^{n}$ times.

Putting everything together, the number of colored $d$-regular graphs on $V_n$ with the required property is the following:
\[\frac{H(\begin{picture}(4,10)
\put(2,3){\circle*{2}}
\end{picture})^{n(1+o(1))} PM(nd)H(\begin{picture}(4,10)
\put(2,0){\line(0,1){6}}
\put(2,0){\circle*{2}}
\put(2,6){\circle*{2}}
\end{picture})^{(nd/2)(1+o(1))}}{H(\begin{picture}(4,10)
\put(2,3){\circle*{2}}
\end{picture})^{nd(1+o(1))}(d!)^n}.\]

Using the same argument about the half-edges, but forgetting about all colorings, one can see that the number of
$d$-regular graphs on $n$ vertices is
\[\frac{PM(nd)}{(d!)^n}.\]

By (\ref{entpr1}) we conclude that
\[\limsup_{n\rightarrow\infty} \frac{H(\begin{picture}(4,10)
\put(2,3){\circle*{2}}
\end{picture})^{n(1+o(1))}H(\begin{picture}(4,10)
\put(2,0){\line(0,1){6}}
\put(2,0){\circle*{2}}
\put(2,6){\circle*{2}}
\end{picture})^{(nd/2)(1+o(1))}}{H(\begin{picture}(4,10)
\put(2,3){\circle*{2}}
\end{picture})^{nd(1+o(1))}}\geq 1;\]
\[ H(\begin{picture}(4,10)
\put(2,0){\line(0,1){6}}
\put(2,0){\circle*{2}}
\put(2,6){\circle*{2}}
\end{picture})^{(d/2)(1+o(1))}\geq H(\begin{picture}(4,10)
\put(2,3){\circle*{2}}
\end{picture})^{(d-1)(1+o(1))}.\]

By letting $\varepsilon$ tend to $0$, taking the logarithm of both sides and rearranging, we get the statement of the theorem. \hfill $\square$
\end{proof}
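For a branching Markov process the entropies appearing in these inequalities can be computed in closed form: the vertex entropy is $h(\pi)$, the edge entropy is $h(\pi)+E$, and the entropy of the degree $d$ star is $h(\pi)+dE$, where $\pi$ is the stationary distribution and $E=\sum_{s}\pi_s h(Q_{s,\cdot})$ is the average row entropy (the children of a vertex are conditionally independent given its state). The following sketch (ours) checks the inequality just proved, and the star inequality of the next theorem, for a concrete chain; it assumes $Q$ is doubly stochastic, so that $\pi$ is uniform.

\begin{verbatim}
import math

def entropy(p):
    return -sum(x * math.log(x) for x in p if x > 0)

def check(Q, d):
    k = len(Q)
    pi = [1.0 / k] * k
    E = sum(pi[s] * entropy(Q[s]) for s in range(k))
    h_vertex = entropy(pi)          # entropy of a single vertex
    h_edge = h_vertex + E           # entropy of an adjacent pair
    h_star = h_vertex + d * E       # entropy of the degree d star
    print("edge-vertex:", d / 2 * h_edge >= (d - 1) * h_vertex)
    print("star-edge:  ", h_star >= d / 2 * h_edge)

# the Ising model with parameter 1/2 on T_3
check([[0.75, 0.25], [0.25, 0.75]], d=3)
\end{verbatim}

Both inequalities reduce here to $E\geq \frac{d-2}{d}\,h(\pi)$, which is exactly the condition that fails in the Ramanujan graph example of Theorem \ref{exnotsuff} below.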
Similarly to the proof of Theorem \ref{edgevertex}, one can show the following.

\begin{theorem}\label{staredge} For any typical process $\nu\in R_d$ the following holds:
\[h(\begin{picture}(10,12)
\put(5,3){\circle*{2}}
\put(5,8){\circle*{2}}
\put(5,-2){\circle*{2}}
\put(0,3){\circle*{2}}
\put(10,3){\circle*{2}}
\put(5,3){\line(0,1){5}}
\put(5,-2){\line(0,1){5}}
\put(5,3){\line(1,0){5}}
\put(0,3){\line(1,0){4}}
\end{picture}_d)\geq \frac{d}{2} h(\begin{picture}(4,10)
\put(2,0){\line(0,1){6}}
\put(2,0){\circle*{2}}
\put(2,6){\circle*{2}}
\end{picture}),\]
where \ \begin{picture}(10,12)
\put(5,3){\circle*{2}}
\put(5,8){\circle*{2}}
\put(5,-2){\circle*{2}}
\put(0,3){\circle*{2}}
\put(10,3){\circle*{2}}
\put(5,3){\line(0,1){5}}
\put(5,-2){\line(0,1){5}}
\put(5,3){\line(1,0){5}}
\put(0,3){\line(1,0){4}}
\end{picture}$_d$ \ denotes the star of degree $d$.
\end{theorem}

\begin{proof} The proof is very similar to that of Theorem \ref{edgevertex}, so we only give the details that are different. Let $\nu\in R_d\cap I_d(S)$. Let $C$ denote the star of degree $d$. We label the root of $C$ by $0$ and the endpoints of the rays by $\{1,2,\dots,d\}$. Let \ $\begin{picture}(10,12)
\put(5,3){\circle*{2}}
\put(5,8){\circle*{2}}
\put(5,-2){\circle*{2}}
\put(0,3){\circle*{2}}
\put(10,3){\circle*{2}}
\put(5,3){\line(0,1){5}}
\put(5,-2){\line(0,1){5}}
\put(5,3){\line(1,0){5}}
\put(0,3){\line(1,0){4}}
\end{picture}_d$\ and\ $\begin{picture}(4,10)
\put(2,0){\line(0,1){6}}
\put(2,0){\circle*{2}}
\put(2,6){\circle*{2}}
\end{picture}$\ denote the marginal distributions of $\nu$ on the degree $d$ star and on an edge of $T_d$, respectively. Again we count $S$-colored $d$-regular graphs on $n$ vertices with the restriction that the distributions on random stars and edges are close to\ $\begin{picture}(10,12)
\put(5,3){\circle*{2}}
\put(5,8){\circle*{2}}
\put(5,-2){\circle*{2}}
\put(0,3){\circle*{2}}
\put(10,3){\circle*{2}}
\put(5,3){\line(0,1){5}}
\put(5,-2){\line(0,1){5}}
\put(5,3){\line(1,0){5}}
\put(0,3){\line(1,0){4}}
\end{picture}_d$ and\ $\begin{picture}(4,10)
\put(2,0){\line(0,1){6}}
\put(2,0){\circle*{2}}
\put(2,6){\circle*{2}}
\end{picture}$. Let $V_n$ be a set of $n$ elements. To each element $v_i\in V_n$ we assign $d$ half-edges $\{v_{i,j}\}_{j=1}^d$. We denote by $V_n^*$ the set of half-edges. Let $f:V_n^*\rightarrow S\times S$ be a coloring of the half-edges with pairs of elements of $S$ such that the first coordinates of $f(v_{i,j})$ and $f(v_{i,k})$ are the same, say $g(i)\in S$, for every triple $1\leq i\leq n$ and $1\leq j,k\leq d$.
To each number $1\leq i\leq n$ we can assign an $S$-colored version of the star $C$ such that the color of the root $0$ is $g(i)$ and the color of $j\in V(C)$ is the second coordinate of $f(v_{i,j})$ for $1\leq j\leq d$.
We say that $f$ is ``good'' if the distribution of these colored stars is ($\varepsilon$-close to)\ $\begin{picture}(10,12)
\put(5,3){\circle*{2}}
\put(5,8){\circle*{2}}
\put(5,-2){\circle*{2}}
\put(0,3){\circle*{2}}
\put(10,3){\circle*{2}}
\put(5,3){\line(0,1){5}}
\put(5,-2){\line(0,1){5}}
\put(5,3){\line(1,0){5}}
\put(0,3){\line(1,0){4}}
\end{picture}_d$\ when $1\leq i\leq n$ is chosen uniformly at random.
The number of good colorings is $H(\begin{picture}(10,12)
\put(5,3){\circle*{2}}
\put(5,8){\circle*{2}}
\put(5,-2){\circle*{2}}
\put(0,3){\circle*{2}}
\put(10,3){\circle*{2}}
\put(5,3){\line(0,1){5}}
\put(5,-2){\line(0,1){5}}
\put(5,3){\line(1,0){5}}
\put(0,3){\line(1,0){4}}
\end{picture}_d)^{n(1+o(1))}$.
We obtain a $d$-regular graph $G$ with a desired coloring $g$ by using a perfect matching on the set of half-edges such that the second coordinate of each half-edge is equal to the first coordinate of its pair in the matching.
Using Lemma \ref{entlem}, we obtain that the number of such perfect matchings is $$PM(nd)H(\begin{picture}(4,10)
\put(2,0){\line(0,1){6}}
\put(2,0){\circle*{2}}
\put(2,6){\circle*{2}}
\end{picture})^{(dn/2)(1+o(1))}H(\begin{picture}(4,10)
\put(2,0){\line(0,1){6}}
\put(2,0){\circle*{2}}
\put(2,6){\circle*{2}}
\end{picture})^{-dn(1+o(1))}.$$ Thus the number of $d$-regular graphs with a desired coloring is
$$\frac{PM(nd)H(\begin{picture}(10,12)
\put(5,3){\circle*{2}}
\put(5,8){\circle*{2}}
\put(5,-2){\circle*{2}}
\put(0,3){\circle*{2}}
\put(10,3){\circle*{2}}
\put(5,3){\line(0,1){5}}
\put(5,-2){\line(0,1){5}}
\put(5,3){\line(1,0){5}}
\put(0,3){\line(1,0){4}}
\end{picture}_d)^{n(1+o(1))}}{H(\begin{picture}(4,10)
\put(2,0){\line(0,1){6}}
\put(2,0){\circle*{2}}
\put(2,6){\circle*{2}}
\end{picture})^{(dn/2)(1+o(1))}(d!)^n}.$$ Similarly to the proof of Theorem \ref{edgevertex} we obtain that $H(\begin{picture}(10,12)
\put(5,3){\circle*{2}}
\put(5,8){\circle*{2}}
\put(5,-2){\circle*{2}}
\put(0,3){\circle*{2}}
\put(10,3){\circle*{2}}
\put(5,3){\line(0,1){5}}
\put(5,-2){\line(0,1){5}}
\put(5,3){\line(1,0){5}}
\put(0,3){\line(1,0){4}}
\end{picture}_d)\geq H(\begin{picture}(4,10)
\put(2,0){\line(0,1){6}}
\put(2,0){\circle*{2}}
\put(2,6){\circle*{2}}
\end{picture})^{d/2}$. This completes the proof. \hfill $\square$

\end{proof}

\subsection{Entropy inequalities and branching Markov chains}

In Theorem \ref{thm1} we gave a sufficient condition for a branching Markov process to be a factor of i.i.d. process. This condition cannot be necessary, as the example of the Ising model shows. The
Ising model with parameter $\vartheta$ is the particular case where the Markov chain has only two states and the transition
matrix $Q=\left(
\begin{tabular}{cc}
$\frac{1+\vartheta}{2}$ & $\frac{1-\vartheta}{2}$\\ $\frac{1-\vartheta}{2}$ & $\frac{1+\vartheta}{2}$
\end{tabular}
\right)$ is symmetric.
That is, when we propagate the states from the root along the tree, $\frac{1+\vartheta}{2}$ is the probability that we keep
the current state. The model is called ferromagnetic if $\vartheta\geq 0$, i.e.\ if it is more likely to keep the current state than to change it. The Dobrushin coefficient of the Ising model with parameter $\vartheta\geq 0$ is just $\vartheta$.
Therefore our theorem implies that the Ising model with $|\vartheta|<1/d$ is a factor of i.i.d.
process. But a stronger statement is known: the Ising model is a factor of i.i.d. if $-1/(d-1)\leq\vartheta\leq 1/(d-1)$. To prove this,
one can use that the clusters in the random cluster representation of the Ising model are almost surely finite in this
regime. See e.g.
Section 3 of \cite{russ} for the details.
See also the paper of H\\\"aggstr\\\"om, Jonasson and Lyons \\cite{russregi} for a generalization of this result to random-cluster and Potts models.\n\nIt is also known that the Ising model with parameter $|\\vartheta|>1\/\\sqrt{d-1}$ can not be factor of i.i.d. (not even a weak limit of factor of i.i.d processes) see \\cite{russ} and \\cite{cordec}. \nIt is an open question whether the Ising model with $1\/(d-1)< |\\vartheta|\\leq 1\/\\sqrt{d-1}$ is factor of i.i.d. or not (or whether it is limit of \nfactor of i.i.d). \n\n\nFor the ferromagnetic Ising model, the parameter $\\vartheta$ is equal to the spectral radius of the transition matrix $Q$, which is, in general, the second largest eigenvalue in absolute value after the eigenvalue $1$. \nMore generally, the results of \\cite{cordec} imply that a branching Markov process is not the weak limit of factor of i.i.d. \nprocesses if the spectral radius $\\varrho$ of its transition matrix $Q$ is larger than $1\/\\sqrt{d-1}$. We will \nuse Theorem \\ref{edgevertex} to show that for general branching Markov processes the correlation bound is far from being optimal. \n\n\\begin{theorem}\\label{exnotsuff} For every $d\\geq 3$ and $\\varepsilon>0$ there exists a transition matrix $Q$ such that \n\\begin{itemize}\n\\item its spectral radius is less than $\\varepsilon$;\n\\item the branching Markov process on the $d$-regular tree $T_d$ according to $Q$ is not a typical process, and hence it is not the weak limit of factor of i.i.d. processes.\n\\end{itemize}\n\\end{theorem}\n\\begin{proof}\nChoose a prime $p$ which is equal to 1 modulo 4 and which satisfies $\\frac{2\\sqrt{p}}{p+1}<\\varepsilon$. \nLet $G$ be a $(p+1)$-regular Ramanujan graph (see the definition below) on $k$ vertices such that \n\\[k>(p+1)^{\\frac{d}{d-2}}.\\] Due to Lubotzky, Phillips and Sarnak \\cite{lbs}, this is possible. Let $Q$ be the\ntransition matrix of the simple random walk on the vertices of $G$. (That is, $Q$ is the adjacency matrix of $G$ normalized \nby $p+1$.) Let $r$ be the spectral radius of $G$. \nBy the definition of Ramanujan graphs we \nhave that $r\\leq \\frac{2\\sqrt{p}}{p+1}< \\varepsilon$.\n\nThe branching Markov process on $T_d$ according to $Q$ is an invariant process in $I_d(N)$, where $N$ represents the \nvertices of $G$, that is, it has $k$ elements. Since $G$ is regular, the stationary random walk is uniformly distributed \non its \nvertices, and therefore the vertex entropy of this branching Markov process is just $\\ln k$. \n\nAs for the edge entropy: we can choose the first vertex uniformly at random, and then one of its \n$p+1$ neighbors arbitrarily, but the order does not matter. Therefore the edge entropy is $\\ln k+\\ln (p+1)$. \n\nFrom Proposition \\ref{edgevertex} we get that if the branching Markov process according to the transition matrix $Q$ was a typical process, \nthen the following would be true: \n\\[\\frac{d}{2}h(\\begin{picture}(4,10)\n\\put(2,0){\\line(0,1){6}}\n\\put(2,0){\\circle*{2}}\n\\put(2,6){\\circle*{2}}\n\\end{picture})\\geq (d-1)h(\\begin{picture}(4,10)\n\\put(2,3){\\circle*{2}}\n\\end{picture});\\]\n\\[\\frac{d}{2}[\\ln k+\\ln(p+1)]\\geq (d-1)\\ln k;\\]\n\\[d\\ln(p+1)\\geq (d-2)\\ln k;\\]\n\\[(p+1)^{d\/(d-2)}\\geq k.\\]\n\nThis contradicts the choice of $k$. Therefore the branching Markov process according to $Q$ is not a typical process. \n\\end{proof}\n\n\\begin{remark} The example of the Potts model shows that the typicallity of a process or the fact whether it is factor of i.i.d. 
can not be decided based only on the \nnumber of states and the spectral radius. \nLet $Q_1$ be the \ntransition matrix of the Potts model on $k$ states (see e.g. \\cite{sly}): with a \ngiven probability $p$ it stays at the actual state, otherwise it chooses another state uniformly at random. Its \nspectral radius is equal to $1-\\frac{pk}{k-1}$. Moreover, it is also known that the Potts model satisfies the Dobrushin condition if $k>2d$ \\cite{sokal}. \nBy choosing $p$ such that the spectral radius is so small that the previous theorem can be applied, we get that the branching Markov chain in the previous theorem is not limit of factor of i.i.d., while Theorem \\ref{thm1} implies that the branching Markov process according to $Q_1$ \nis a factor of i.i.d. process. \n\n \n\\end{remark}\n\n\\begin{remark}\nWe have seen that the entropy inequality can lead to stronger bound than the correlation decay when the number of states is sufficiently large. However, for the Ising model, when $k=2$, the correlation decay bound is stronger than the bound we get from this entropy inequality. \n\\end{remark}\n\n\\medskip\n\n\\subsection{Entropy inequalities and random $d$-regular graphs}\n\n\\label{dominating}\n\nIn this section we show how to use entropy inequalities to obtain results about random $d$-regular graphs. Our strategy is that we use Theorem \\ref{staredge} to show that certain invariant processes can not be typical. Then, by the correspondence principle, we translate this to statements about random $d$-regular graphs. Throughout this section we assume that $d\\geq 3$. \n\n\nWe denote by $C$ the degree $d$ star in $T_d$ with root $o$ and leaves $w_1,w_2,\\dots,w_d$.\nLet $\\mu\\in I_d(M)$ be an invariant process. If $F$ is a finite subset of $V(T_d)$, then we denote by $\\mu_F$ the marginal distribution of $\\mu$ restricted to $F$, and by $\\nu_F$ the product measure of the marginals of $\\mu_F$.\nWe denote by $t(F)$ the total correlation of the joint distribution of $\\mu_F$; that is, $t(F)=h(\\nu_F)-h(F)$.\n\n\n\n\n\n\\begin{proposition} \\label{prop:41}\n\n\n\nLet $\\mu$ be a typical process and suppose that $h(C)-h(C\\setminus \\{ w_1\\})\\leq b$ for some $b\\geq 0$. \nThen $t(C\\setminus \\{ w_1\\})\\leq b\\frac{2d-2}{d-2}$ and \\[d_{TV}(\\mu_{C\\setminus \\{ w_1\\}}, \\nu_{C\\setminus \\{ w_1\\}})\\leq \\sqrt{b(d-1)\/(d-2)}. \\]\n\\end{proposition}\n\n\\begin{proof}\nBy Theorem \\ref{staredge} and the condition of the proposition we get \n\\begin{equation}\\label{eq:p41}0\\leq h(C)-\\frac d2 h(\\{o, w_1\\})\\leq h(C\\setminus \\{w_1\\})-\\frac d2 h(\\{o, w_1\\})+b.\\end{equation}\nBy using a simple upper bound on the entropy of $C\\setminus \\{w_1\\}$ we get \n\\begin{equation*}0\\leq h(o)+(d-1)[h(\\{o,w_1\\})-h(o)]- \\frac d2 h(\\{o, w_1\\})+b.\n\\end{equation*}\nBy rearranging and multiplying by $d\/(d-2)$, this implies \n\\[-\\frac d2h(\\{o, w_1\\})\\leq \\frac{db}{d-2}-dh(o).\\]\nPutting this together with inequality \\eqref{eq:p41}, we conclude \n\\[0\\leq h(C\\setminus \\{w_1\\})-dh(o)+\\frac{2d-2}{d-2}b.\\] \nSince $h(\\nu_F)=dh(o)$ for an invariant process if $F$ consists of $d$ vertices, this concludes the proof of the first inequality.\n\nObserve that $t(C\\setminus \\{ w_1\\})=D\\big(\\mu_{C\\setminus \\{ w_1\\}}||\\nu_{C\\setminus \\{ w_1\\}}\\big)$, where $D$ denotes the relative entropy. \nRecall that Pinsker's inequality says that $D(P||Q)\\geq 2d_{TV}(P, Q)^2$, where $P$ and $Q$ are two probability distributions on the same set. This implies the statement. 
\\hfill $\\square$\n\\end{proof}\n\n\\medskip\n\n\nAs a first application of Proposition \\ref{prop:41}, we use it in the case of $b=0$. \n\n\\begin{definition} \\label{def:rigid}\nLet $S$ be a finite set and $\\mu\\in I_d(S)$ be an invariant process. Assume that $C$ is a degree $d$ star in $T_d$ with root $o$ and leaves $w_1,w_2,\\dots,w_d$. We say that $\\mu$ is {\\it rigid} if \n\\begin{enumerate} \n\\item the values on $C\\setminus\\{w_1\\}$ uniquely determine the value on $w_1$;\n\\item $\\mu$ restricted to $C\\setminus\\{w_1\\}$ is not i.i.d. at the vertices. \n\\end{enumerate}\n\\end{definition}\n\n\n\n\n\n\n\\begin{proposition}\\label{apstaredge} If $\\mu\\in I_d(S)$ is a rigid process, then it is not typical.\n\\end{proposition}\n\\begin{proof} The first assumption in Definition \\ref{def:rigid} implies that Proposition \\ref{prop:41} holds for $\\mu$ with $b=0$, and thus we obtain that $\\mu_{C\\setminus\\{w_1\\}}=\\nu_{C\\setminus\\{w_1\\}}$, which contradicts the second assumption. \\hfill $\\square$ \n\\end{proof}\n\n\\medskip\n\nWe give an example for families of rigid processes. \n\n\\begin{lemma}\\label{rig1} Assume that $S$ is a finite set in $\\mathbb{R}$ and that $\\mu$ satisfies the eigenvector equation; namely, that a $\\mu$-random function $f:T_d\\rightarrow S$ satisfies that $\\lambda f(o)=f(w_1)+f(w_2)+\\dots+f(w_d)$ holds with probability $1$. Then $\\mu$ is rigid.\n\\end{lemma} \n\\begin{proof}\nObserve that $f(w_1)=\\lambda f(o)-(f(w_2)+f(w_3)+\\dots+f(w_d))$, which shows that the first condition is satisfied. We want to exclude the possibility that $f(o), f(w_2), f(w_3), \\ldots, f(w_n)$ are identically distributed independent random variables. We can assume that all values in $S$ are taken with positive probability. This means that for every pair $(c_1,c_2)\\in S\\times S$ we have with positive probability that $f(w_2)=f(w_3)=\\dots=f(w_d)=c_1$, $f(o)=c_2$, and thus $f(w_1)=\\lambda c_2-(d-1)c_1$. It follows that $\\lambda S+(1-d)S\\subseteq S$ (using Minkowski sum), which is impossible if $S$ is finite.\\hfill $\\square$\n\\end{proof}\n\n\\medskip\n\nWe give further applications of Proposition \\ref{prop:41} in extremal combinatorics.\n\n\\begin{definition}\\label{def:cover}\nLet $G=(V, E)$ be a $d$-regular (not necessarily finite) graph. Let $M: S\\times S\\rightarrow \\mathbb N \\cup \\{0\\}$. We assume that $\\sum_{q\\in S} M(s,q)=d$ holds for every $s\\in S$. Furthermore, we suppose that the weighted directed graph with adjacency matrix $M$ is connected. Let $f: V\\rightarrow S$ be an arbitrary function. We say that $f$ is a covering at $v\\in V$ if \n\\[\\big |\\,\\{w\\ | \\ f(w)=q, w\\in N(v)\\}\\,\\big |=M(f(v), q),\\] \nwhere $N(v)$ is the set of neighbors of $v$. \n\\end{definition}\n\n\\begin{lemma}\\label{lem:covrig} Assume that $M: S\\times S\\rightarrow \\mathbb N \\cup \\{0\\}$ is as in the previous definition. Fix $\\varepsilon\\geq 0$ and $d\\geq 3$. Assume furthermore that $\\mu\\in I_d(S)$ is an invariant process such that a $\\mu$-random function $f: V(T_d^*)\\rightarrow S$ is a covering at the root $o$ with probability $1-\\varepsilon$, and the distribution of $f(o)$ is supported on at least two elements. Then the following hold. \n\\begin{enumerate}[(a)]\n\\item $h(C)-h(C\\setminus \\{ w_1\\})\\leq \\varepsilon \\log |S|$. 
\n\\item There exists $\\delta=\\delta(M, \\varepsilon)>0$ such that $\\mathbb P(f(o)=s)\\geq \\delta$ holds for all $s\\in S$.\n\\item By using the notation of Proposition \\ref{prop:41}, we have \\[d_{TV}(\\mu_{C\\setminus \\{ w_1\\}}, \\nu_{C\\setminus \\{ w_1\\}})\\geq \\frac12 (\\delta^d-\\varepsilon).\\] \n\\item If $\\varepsilon=0$, then $\\mu$ is rigid.\n\\end{enumerate} \n\\end{lemma}\n\\begin{proof} We denote by $A$ the event that $f$ is a covering at $o$, and by $B$ its complement. Then $\\mathbb P(B)=\\varepsilon$.\n\n\\noindent $(a)$ For $\\varepsilon=0$: observe that $f(w_1)$ is the unique element $q\\in S$ with the following property: \n\\[|\\,\\{w\\ | \\ f(w)=q, w\\in \\{w_2, w_3, \\ldots, w_d\\}\\}\\,\\big |=M(f(o), q)-1,\\] \nwhich depends only on the values of $f$ on $C\\setminus \\{w_1\\}$. Therefore the values on $C\\setminus \\{w_1\\}$ uniquely determine the value on $w_1$, and the two entropies are equal.\nOtherwise, conditional entropy with respect to an event with positive probability will be defined as the entropy of the conditional distribution. Then we have \n\\[\nh(C)=h(C|A)\\mathbb P(A)+h(C|B)\\mathbb P(B)-\\mathbb P(A)\\log \\mathbb P(A)-\\mathbb P(B)\\log \\mathbb P(B);\\]\n\\[h(C\\setminus \\{w_1\\})=h(C\\setminus \\{w_1\\}|A)\\mathbb P(A)+h(C\\setminus \\{w_1\\}|B)\\mathbb P(B)-\\mathbb P(A)\\log \\mathbb P(A)-\\mathbb P(B)\\log \\mathbb P(B).\\]\nIf $A$ holds, then by the argument above, the value on $w_1$ is uniquely determined by the other ones. Hence \n$h(C\\setminus \\{w_1\\}|A)=h(C|A)$. On the other hand, $h(C|B)\\leq h(C\\setminus \\{w_1\\}|B)+\\log |S|$ is a trivial upper bound. Therefore we obtain\n\\[h(C)-h(C\\setminus \\{ w_1\\})=[h(C|B)-h(C\\setminus \\{ w_1\\}|B)]\\mathbb P(B)\\leq \\varepsilon \\log |S|.\\]\n\n \\noindent $(b)$ We show that $\\delta(M, \\varepsilon)\\geq \\frac{a}{d^k}-\\frac{\\varepsilon}{d-1}$ holds, where $k$ is the diameter of the directed graph with adjacency matrix $M$. If $s\\in S$ has probability $a$, then any of its neighbors $t$ has probability at least $(a-\\varepsilon)\/d$, due to the following. The probability of the event $D$ that $f(o)=s$ and $f$ is a covering at the root is at least $a-\\varepsilon$. Given $D$, the joint distribution of the neighbors is permutation invariant. On the event $D$, the values of $f$ evaluated at the neighbors of the root are exactly the neighbors of $s$ with multiplicity in $M$. Hence the probability that the value of $f$ at a fixed neighbor of the root is $t$ is at least $1\/d$ conditionally on $D$. Using the invariance of the process, this proves the lower bound for the probability of $t$. \n \n We can choose an element $s_0\\in S$ which has probability at least $1\/|S|$. By induction, we have that an element of distance $m$ from $s_0$ in the directed graph $M$ has probability at least \n \\[\\frac{1}{|S|d^m}-\\varepsilon \\bigg [\\frac{1}{d}+\\frac{1}{d^2}+\\ldots+\\frac{1}{d^m} \\bigg].\\]\n Since every other element in $S$ can be reached by a directed path of length at most $k$ in $M$, the proof is complete. \n \n \\noindent $(c)$ Choose $s_1,s_2\\in S$ such that $M(s_1, s_2)\\leq d\/2$. The covering property at $o$ implies that the probability of the event $\\{f(o)=s_1, f(w_2)=s_2, f(w_3)=s_2, \\ldots, f(w_d)=s_2\\}$ is zero. That is, this event has conditional probability 0 with respect to $A$. 
It follows that \n \\[\\mathbb P(f(o)=s_1, f(w_2)=s_2, f(w_3)=s_2, \\ldots, f(w_d)=s_2)\\leq \\mathbb P(B)=\\varepsilon.\\]\nOn the other hand, by part $(b)$ and invariance, the same event has probability at least $\\delta^d$ when we consider $\\nu$ restricted to $C\\setminus \\{w_1\\}$ (recall that $\\nu$ is the product measure of the marginals). This implies the statement.\n\n\\noindent $(d)$ The first property follows from the argument in $(a)$. In addition, we have seen in part $(c)$ that the probability of a given configuration is 0. On the other hand, by $(b)$, the probability of each value is positive. \n This excludes the possibility that $\\mu$ restricted to $C\\setminus\\{w_1\\}$ is i.i.d. \\hfill $\\square$\n\n\\end{proof}\n\n\\medskip\n\nFor the combinatorial applications, we need the following definition.\n\n\\begin{definition} \nLet $G=(V, G)$ be a finite $d$-regular graph, and $M: S\\times S\\rightarrow \\mathbb N\\cup \\{0\\}$ as in definition \\ref{def:cover}. For an arbitrary function $g: V\\rightarrow S$ let $W\\subset V$ be the subset of vertices $v$ at which $h$ is not a covering. We introduce the quantity $e(g):=|W|\/|V|$. Furthermore, we define the covering error ratio of $G$ with respect to $M$ by \\[c(G,M)=\\min_{g: V\\rightarrow S} e(g).\\] \n\\end{definition}\n\nIt will be important that the covering error ratio can be extended to graphings in a natural way such that the extension is continuous in the local-global topology. Let $\\mathcal{G}$ be a graphing on the vertex set $\\Omega$. Let $g:\\Omega\\rightarrow S$ be an arbitrary measurable function. Let $W\\subseteq\\Omega$ be the set of vertices at which $g$ is not a covering of $M$. We denote by $e(g)$ the measure of $W$. We define $c(\\mathcal{G},M)$ as the infimum of $e(g)$ where $g$ runs through all measurable maps $g:\\Omega\\rightarrow S$. We can also obtain $c(\\mathcal{G},M)$ as a minimum taken on processes. For $\\mu\\in I_d(S)$ let $e(\\mu)$ denote the probability that a $\\mu$ random function $f:T_d^*\\rightarrow S$ is not a covering of $M$ at $o$. Using the fact that $e(\\mu)$ is continuous in the weak topology and that $\\gamma(\\mathcal{G},S)$ is compact in the weak topology we obtain that\n\\begin{equation}\\label{cermin}\nc(\\mathcal{G},M)=\\min_{\\mu\\in\\gamma(\\mathcal{G},S)}e(\\mu).\n\\end{equation}\n\nNow we are ready to prove the next combinatorial statement. Recall that $\\delta(M, 0)>0$, and hence $\\varepsilon_0$ defined in the theorem is also positive.\n\n\\begin{theorem}\\label{thm:combap} Fix $d\\geq 3$ and $M$ as in the definition \\ref{def:cover}. Let \n\\[\\varepsilon_0=\\inf\\bigg\\{\\varepsilon>0: \\frac 12(\\delta(M, \\varepsilon)^d-\\varepsilon)\\leq \\sqrt {\\varepsilon\\log |S|\\frac{d-1}{d-2}}\\bigg\\},\\] \nwhere $\\delta(M, \\varepsilon)$ is defined in Lemma \\ref{lem:covrig} $(b)$.\n Then for every $0<\\varepsilon<\\varepsilon_0$ the probability $\\mathbb P(c(\\mathbb G_i, M)<\\varepsilon)$ converges to $0$ as $i\\rightarrow \\infty$, where $\\mathbb G_i$ is a random $d$-regular graph on $i$ vertices. \n\\end{theorem}\n\n\\begin{proof} Suppose that the invariant process $\\mu\\in I_d(S)$ satisfies the conditions of Lemma \\ref{lem:covrig} for some $\\varepsilon>0$, and it is typical. Part $(a)$ implies that Proposition \\ref{prop:41} can be applied with $b=\\varepsilon \\log |S|$. 
Putting this together with part $(c)$ of the lemma, we obtain\n\\[\\frac 12[\\delta(M, \\varepsilon)^d-\\varepsilon]\\leq d_{TV}(\\mu_{C\\setminus \\{ w_1\\}}, \\nu_{C\\setminus \\{ w_1\\}})\\leq \\sqrt {\\varepsilon\\log |S|\\frac{d-1}{d-2}}.\\]\n\nBy equation (\\ref{cermin}) it follows that $c(\\mathcal{G},M)\\geq \\varepsilon_0$ holds for every typical graphing in $\\overline{X_d}$. Let $0<\\varepsilon<\\varepsilon_0$ be an arbitrary real number and and let $Q_\\varepsilon=\\{\\mathcal{G}|c(\\mathcal{G},M)\\leq\\varepsilon\\}$. By applying Proposition \\ref{prop:corres} for $Q_\\varepsilon$, the proof is complete. \\hfill $\\square$\n\\end{proof}\n\n\\medskip\n\nTheorem \\ref{thm:combap} provides a family of combinatorial statements depending on the matrix $M$. \nAn interesting application of Theorem \\ref{thm:combap} is when $M$ is the adjacency matrix of a $d$-regular simple graph $H$. In this case we obtain that random $d$-regular graphs do not cover (not even in an approximative way) the graph $H$. If we apply Proposition \\ref{apstaredge} to such a matrix $M$ we get the following. \nLet $\\mu\\in I_d(V(H))$ be the invariant process on $T_d$ that is a covering map from $T_d$ to $H$. Then $\\mu$ is not typical and thus it is not in the weak closure of factor of i.i.d processes. \n\n\n\n\n\n\n\n\n We show two concrete examples, using only $2\\times 2$ matrices, to illustrate how our general statement of Theorem \\ref{thm:combap} is related to known results. Note that in these special cases the literature has better bounds then ours; our goal is only demonstrating the connection between different areas.\n\n\\begin{equation*}\nM_1=\\begin{pmatrix} 0 & d \\\\ 1 & d-1 \\end{pmatrix}~~~,~~~M_2=\\begin{pmatrix} 0~~ & d \\\\ d~~ & 0 \\end{pmatrix}\n\\end{equation*}\n\nThe dominating ratio of a finite graph $G$ is the following. Let $m$ be the size of the smallest set of vertices $V'$ of $G$ such that each vertex of $G$ is either in $V'$ or connected to a vertex in $V'$. The dominating ratio is defined as $dr(G)=m\/|V(G)|$. It is clear that the dominating ratio of a $d$-regular graph is at least $1\/(d+1)$. It is easy to see that the dominating ratio of a $d$-regular graph $G$ is equal to $1\/(d+1)$ if and only if $c(G,M_1)=0$. For this particular matrix, one can use a better bound than the general one given in Lemma \\ref{lem:covrig}. Namely, as a simple calculation shows, $\\delta(M, \\varepsilon)=1\/(d+1)-\\varepsilon\/(d+1)$ can be chosen.\nTheorem \\ref{thm:combap} applied to $M_1$ gives to following combinatorial statement. \n\n\\begin{proposition} For every $d\\geq 3$ we define \n\\[\\varepsilon_0=\\inf\\bigg\\{\\varepsilon>0: \\frac 12\\bigg[\\bigg(\\frac{1-\\varepsilon}{d+1}\\bigg)^d-\\varepsilon\\bigg]\\leq \\sqrt {\\varepsilon\\log |S|\\frac{d-1}{d-2}}\\bigg\\}.\\] \nThen $P(dr(\\mathbb G_i)<1\/(d+1)+\\varepsilon)$ converges to $0$ as $i\\rightarrow \\infty$ for all $0<\\varepsilon<\\varepsilon_0$. \\end{proposition}\n\nThis gives the following for small values of $d$. \n\n\\begin{center}\n\\begin{tabular}{lcccc}\n$d$ & 3&4&5&6\\\\ \\hline\n$\\varepsilon_0$ & $4.38\\cdot 10^{-5}$ & $6.15\\cdot 10^{-7}$ & $4.47\\cdot 10^{-9}$&$2.08\\cdot10^{-11}$ \n\\end{tabular}\n\\end{center}\n\nFor $d=3$ Molloy and Reed \\cite{molloyreed} gave a much better bound $0.2636$ for the dominating ratio; our result gives $0.2500438$. It would be interesting to improve our bounds for larger $d$ as well. 
\n\n\n\\medskip\n\nThe next application shows that random $d$-regular graphs are separated from being bipartite, which was first proved by Bollob\\'as \\cite{bollind}. To put it in another way, it says that the independence ratio (size of the largest independent set divided by the number of vertices) of a random $d$-regular graph is at most $1\/2-\\varepsilon_0$ with probability tending to $1$ with the number of vertices for some $\\varepsilon_0>0$. We can obtain this by applying Theorem \\ref{thm:combap} for the matrix $M_2$. In fact, $\\delta(M, \\varepsilon)\\leq 1\/2-\\varepsilon$, due to the following argument. One of the states has probability at least $1\/2$, let us say state $0$. Fix a neighbor of the root. If the root is in state $0$, and the random function is a covering at $0$, then its neighbor is in state 1. This event has probability at least $1\/2-\\varepsilon$, hence the probability of 1 is at least $1\/2-\\varepsilon$. \n\nTherefore \n\\[\\varepsilon_0=\\inf\\bigg\\{\\varepsilon>0: \\frac 12[(1\/2-\\varepsilon)^d-\\varepsilon]\\leq \\sqrt {\\varepsilon\\log 2\\cdot \\frac{d-1}{d-2}}\\bigg\\}.\\] \n\n\nAbout the best known bounds, see McKay \\cite{mckay} for small $d$. \nFor large $d$, the independence ratio of random $d$-regular graphs is concentrated around $2\\log d\/d$ \\cite{bollind, sly}. Our results do not improve their bounds.\n\n\n\\medskip\n\n\n\n\\begin{remark} From Lemma \\ref{rig1} and Proposition \\ref{apstaredge} we obtain that any typical processes $\\mu$ (and thus any factor of i.i.d process) that satisfy the eigenfunction equation must take infinitely many values. It would be good to see a finer statement about the possible value distributions. Maybe these distributions are always Gaussian. \n\\end{remark}\n\n\\begin{remark} The proof of Theorem \\ref{thm:combap} makes use of the fact that $c(G,M)$ is continuous in the local-global topology. The continuity of various combinatorial parameters in the Benjamini--Schramm topology was studied in e.g. \\cite{miklos1, miklos2, gabor}. In those cases it is also possible to prove combinatorial statements through continuity and the analytic properties of the limit objects.\n\\end{remark}\n\n\\subsubsection*{Acknowledgement.} The authors are grateful to Mikl\\'os Ab\\'ert and to B\\'alint Vir\\'ag for helpful discussions and for organizing active seminars in Budapest related to this topic. The research was supported by the MTA R\\'enyi Institute Lend\\\"ulet Limits of Structures Research Group.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe notion of causality is deeply rooted in our understanding of nature. In ordinary situations with a fixed spacetime background we can always say that the cause belongs to the past light cone of the effect, and the effect to the future light cone of the cause. 
This familiar idea might be untenable at regimes at which the quantum mechanical properties of the systems under study are of comparable relevance to their gravitational properties \\cite{isham1993canonical, hartle1993spacetime, butterfield2001spacetime}, for instance if the metric tensor, and thus the causal relations, are subject to quantum fluctuations.\n\nThe crucial role played by probability in quantum mechanics on the one hand, and the dynamical causal structure of general relativity on the other hand, led to the conjecture that a theory unifying general relativity and quantum mechanics should be a probabilistic theory on a dynamical causal structure \\cite{hardy2005probability}. Adopting an operational point of view, we can ask what the measurable consequences of an indefinite causal structure would be. The process matrix framework \\cite{oreshkov2012quantum} is a possible way to address this question, and exploits techniques typical of quantum information to deal with the problem. The framework retains the validity of ordinary quantum mechanics at a local level, i.e. in local laboratories where quantum operations are performed, but makes no assumptions on the global causal structure outside the laboratories. Interestingly, the framework allows for processes that are more general than those allowed by the standard (causal) quantum formalism. In particular, they include situations in which the direction of the signaling, and thus causality in the operational sense, is not fixed. Nonetheless, logical paradoxes, such as signaling to the past, are ruled out by consistency conditions. \n\nWe call a process matrix causally ordered if it allows for signalling only in one fixed direction between the parties. A (bipartite) process matrix is {\\it causally separable} if it can be decomposed as a convex combination of causally ordered processes. An example of a {\\it causally nonseparable} process is the `quantum switch' \\cite{oreshkov2012quantum, oreshkov2015causal}. This is a quantum system with an auxiliary degree of freedom which can coherently control the order in which operations are applied. The quantum switch provides quantum computational advantages with respect to quantum circuits with fixed gate order \\cite{chiribella2013quantum, chiribella2012perfect, araujo2014computational} and has recently been implemented with linear optics \\cite{procopio2014experimental}.\n\nIn their original formulation, process matrices were only defined for finite dimensional Hilbert spaces \\cite{oreshkov2012quantum, araujo2014computational, araujo2015witnessing, oreshkov2015causal}. Despite providing an arena for the experimental verification of systems like the quantum switch, finite-dimensional systems are too restrictive for the purpose of studying indefinite causality. The generalization of the formalism to continuous variables broadens the class of systems which can be described with the formalism. In particular Gaussian quantum optics, used to describe some cases of continuous-variable quantum systems, has a very important role in quantum information processing \\cite{weedbrook2012gaussian}. The generalization proposed here can be straightforwardly used to devise new experiments. As an example of such applications, we propose an infinite-dimensional version of the quantum switch.\n\nIn addition, quantum fluctuations of the metric and of the causal structures are expected at high energies, where both quantum and gravitational effects become relevant. 
At these regimes a description in terms of quantum fields is required. The generalization proposed here is a necessary step towards this goal and paves the way for a more thorough study of quantum fields on indefinite causal structures. With this in mind, it is worth noting that a proper treatment of quantum fields requires to solve problems related to the localisation of the local laboratories and the tensor product structure of the Hilbert spaces. The study of this problem is beyond the scope of this work and is likely to require the framework of algebraic quantum field theory \\cite{haag1964algebraic, haag1992local}.\n\nContrary to the finite-dimensional case, in this work we face difficulties related to singularities. These singularities arise from the straightforward generalization of the approach used in finite dimensions when the dimensions of the Hilbert space tend to infinity. We solve this problem by using a phase space representation of the process matrices in terms of Wigner functions. We also show that the notion of causal nonseparability is maintained in infinite dimensions and we provide an argument for the causal nonseparability of the quantum switch. Specifically, we show that it exhibits interference due to the superposition of the order in which the operations are applied.\n\n\\section{The W-matrix formalism}\nIn this section we give a brief introduction to the W-matrix formalism in finite-dimensional Hilbert spaces, following the first formulation given in \\cite{oreshkov2012quantum}. Here we restrict the discussion to a two-party scenario, but the formalism is valid for an arbitrary number or local observers. Let us consider the two observers A and B, situated in separate local laboratories. We assume that standard quantum mechanics is valid in each local laboratory. However, we make no assumptions on the global causal structure outside the laboratories. This means that each observer is free to perform local quantum operations on a physical system in a finite-dimensional Hilbert space. More specifically, the local operations performed in the laboratories are completely positive (CP) maps $\\mathcal{M}_i^{A}: \\mathcal{L}(\\mathcal{H}^{A_1}) \\rightarrow \\mathcal{L}(\\mathcal{H}^{A_2})$ and $\\mathcal{M}_j^{B}: \\mathcal{L}(\\mathcal{H}^{B_1}) \\rightarrow \\mathcal{L}(\\mathcal{H}^{B_2})$, where $\\mathcal{L}(\\mathcal{H})$ denotes linear operators acting on the finite-dimensional Hilbert space $\\mathcal{H}$, and where $\\mathcal{H}^{A_1},\\, \\mathcal{H}^{A_2}$ and $ \\mathcal{H}^{B_1}, \\, \\mathcal{H}^{B_2}$ are respectively the input and output Hilbert spaces of A and B. It is convenient to use the Choi-Jamio{\\l}kowski (CJ) isomorphism \\cite{jamiolkowski1972linear, choi2000completely}, which associates an operator in the tensor product of two given Hilbert spaces to a map between the two. We write the CJ-equivalent of the local operations as $M_i^{X}= (\\mathbb{1}\\otimes \\mathcal{M}_i^{X})\\left| \\Phi^+ \\right> \\left< \\Phi^+ \\right|$ on $\\mathcal{H}^{X_1}\\otimes \\mathcal{H}^{X_2}$, $X=A,\\, B$, where $\\left| \\Phi^+ \\right>= \\sum_i \\left| i \\right>_{X_1} \\left| i \\right>_{X_1}$ is the maximally entagled state in the input Hilbert space. Given the set of CP maps $\\left\\lbrace \\mathcal{M}_i^{X} \\right\\rbrace_{i=1}^{n}$ corresponding to all possible $n$ local outcomes, the sum $\\sum_i \\mathcal{M}_i^X$ is also trace preserving (TP). Physically this means that an outcome always occurs in an experiment. 
Using the Choi-Jamio{\\l}kowski isomorphism, we can write this condition (CPTP condition) as $\\sum_i\\text{Tr}_{X_2}M_i^X= \\mathbb{1}_{X_1}$.\n\nGiven the set of CP maps accounting for all possible local operations, we can ask which are the most general correlations between the outcomes of the two observers. The most general way to linearly map local quantum operations to probability distributions can be written as $p(\\mathcal{M}_i^A,\\, \\mathcal{M}_j^B)= \\text{Tr}\\left[ W (M_i^A \\otimes M_j^B) \\right]$, where we introduce the process matrix $W \\in \\mathcal{L}\\left( \\mathcal{H}^{A_1}\\otimes \\mathcal{H}^{A_2} \\otimes \\mathcal{H}^{B_1} \\otimes \\mathcal{H}^{B_2}\\right)$, a positive linear operator $W \\geq 0$. The non-negativity of the probabilities (including the case when the two parties share entanglement) is ensured by the positivity of the W-matrix. Moreover, we require that probabilities are normalised, i.e. $\\sum_{ij}p(\\mathcal{M}_i^A,\\, \\mathcal{M}_j^B)=1$.\n\nIn \\cite{araujo2015witnessing} it was shown that the characterization of the W-matrix in the two-party scenario and finite-dimensional Hilbert spaces can be given as\n\\begin{align} \\label{eq:Characterization}\n\t& W \\geq 0, \\\\\n\t& \\text{Tr}W =d_{A_2} d_{B_2}, \\qquad d_X= \\text{dim}(\\mathcal{H}_{X}), \\nonumber\\\\\n\t& _{B_1 B_2} W = _{A_2 B_1 B_2} W, \\nonumber\\\\\n\t& _{A_1 A_2} W = _{B_2 A_1 A_2} W,\\nonumber\\\\\n\t& W= _{A_2} W + _{B_2} W - _{A_2 B_2} W, \\nonumber\n\\end{align}\nwhere $_{X} W= \\frac{\\mathbb{1}_X}{d_X} \\otimes \\text{Tr}_X W$.\nThis means that not all the subspaces of the space of process matrices are allowed, because they give rise to non-normalized probabilities. In \\cite{oreshkov2012quantum} it is shown that these terms can be interpreted as logical paradoxes. As an example, let us assume a one-party scenario in which the input and output Hilbert spaces are two-dimensional and a basis is provided by the two states $\\left|0\\right>$ and $\\left|1\\right>$. Let the W-matrix be an identity channel from the observer's output to the observer's input. Then if the observer applies a local operation which flips the qubit, we get the paradox $\\left|0\\right>=\\left|1\\right>$. This paradox is of the type of the `grandfather paradox', in which an agent goes back in time and kills his grandfather. This situations are automatically ruled out in the W-matrix formalism by the conditions \\eqref{eq:Characterization}. On the other hand, the requirements \\eqref{eq:Characterization} together with the local CPTP maps give rise to correlations which are more general than those of standard quantum mechanics.\n\nIn the formulation in finite-dimensional Hilbert spaces the characterization of the process matrix heavily relies on the dimension of the Hilbert spaces of the observers, so that taking the representation of W and letting the dimensions tend to infinity would lead to singularities. Therefore a straightforward generalization to infinite dimensions is not possible. An alternative formulation, suitable for infinite-dimensional Hilbert spaces, is given in terms of Wigner functions, which provide an equivalent description to the usual operator representation. 
We will see that the requirement that W gives rise to consistent probabilities restricts the possible Wigner representations, and provides an equivalent characterization of the process matrix to the finite-dimensional case.\n\n\\section{Extension to infinite dimensions}\nThe extension of the W-matrix formalism to continuous variables presents some novel features in contrast to the original framework in finite-dimensional Hilbert spaces. These features are analogous to those encountered in the infinite-dimensional limit of ordinary quantum mechanics of finite-dimensional systems \\cite{peres2006quantum}, and mainly concern the boundedness of the operators representing a quantum state.\n\nWe consider two local observers, $A$ and $B$, each provided with a local laboratory and free to perform local operations on a quantum system. In infinite dimensions we have to restrict the domain $\\mathcal{L}(\\mathcal{H})$ of linear operators on the Hilbert space $\\mathcal{H}$ to bounded linear operators on $\\mathcal{H}$. We call this space $\\mathcal{B}(\\mathcal{H})$. The maps describing the local operations in A and B are represented by completely positive (CP) maps $\\mathcal{M}_i^{A}: \\mathcal{B}(\\mathcal{H}_{A_{1}}) \\rightarrow \\mathcal{B}(\\mathcal{H}_{A_{2}})$, $\\mathcal{M}_j^{B}: \\mathcal{B}(\\mathcal{H}_{B_{1}}) \\rightarrow \\mathcal{B}(\\mathcal{H}_{B_{2}})$, where $\\mathcal{H}_{X_{1}}, \\, \\mathcal{H}_{X_{2}}$, $X=A,\\,B$, are the (infinite-dimensional) input and output Hilbert spaces of each laboratory. Each map $\\mathcal{M}_i^{X}$ describes transformations of a state $\\rho$ with outcome $i$ and output state $\\mathcal{M}_i^{X}(\\rho)$. A convenient way of representing CP maps is through the Choi-Jamiolkowski (CJ) isomorphism (see \\cite{jamiolkowski1972linear, choi2000completely} for the original definition in finite dimensions, \\cite{holevo2011entropy} for the extension to infinite dimensions), which associates an operator $M_i^X$ to a CP map $\\mathcal{M}_i^X$ through\n\t$M_i^{X}= \\left( \\mathbb{1} \\otimes \\mathcal{M}_i^{X} \\right) \\left| \\Phi^+ \\right> \\left< \\Phi^+ \\right|$.\nHere $\\left| \\Phi^+ \\right>= \\int dx \\left| xx \\right>_{X_{1}}$ is the non-normalized maximally entangled state in $\\mathcal{H}_{X_{1}} \\otimes \\mathcal{H}_{X_{1}}$ and $\\mathbb{1}$ is the identity operator. Since the probability of obtaining an outcome is unity, the sum over all possible $\\mathcal{M}_i^X$ is a completely positive trace-preserving (CPTP) map. This condition, which we refer to as CPTP condition, is expressed in terms of the CJ equivalent $M^X= \\sum_i M_i^X$ as $\\operatorname{Tr}_{X_{2}}(M^{X})=\\mathbb{1}_{X_{1}}$.\n\nThe process matrix is an operator $W \\in \\mathcal{B}(\\mathcal{H}_{A_{1}} \\otimes \\mathcal{H}_{A_{2}}\\otimes \\mathcal{H}_{B_{1}}\\otimes \\mathcal{H}_{B_{2}})$ such that $W \\geq 0$ and the probability of two measurement outcomes $i$ and $j$ is\n\\begin{equation}\n\tp(\\mathcal{M}_i^{A},\\, \\mathcal{M}_j^{B}) = \\operatorname{Tr} \\left[ W (M_i^{A} \\otimes M_j^{B}) \\right].\n\\end{equation}\nThe probability should satisfy $0 \\leq p(\\mathcal{M}_i^{A},\\, \\mathcal{M}_j^{B}) \\leq 1$. In particular, the condition $\\sum_{ij} p(\\mathcal{M}_i^{A},\\, \\mathcal{M}_j^{B}) = 1$ implies that $\\operatorname{Tr}\\left[W (M^{A}\\otimes M^{B})\\right]=1$ for every pair of CPTP maps $\\mathcal{M}^{A},\\, \\mathcal{M}^{B}$. From now on we will only consider the CJ representation of the CP maps. 
\n\n\\subsection{Characterization of the one-party scenario}\nThe one party scenario can be obtained from the two parties when the Hilbert spaces of one observer are one-dimensional.\nThe Wigner equivalent of a CPTP map $M$ (we omit here the index relative to the observer) and of a process matrix $W$ is a function of four variables on the phase space, namely $M(\\boldsymbol{\\xi}_{1}, \\boldsymbol{\\xi}_{2})$ and $W(\\boldsymbol{\\xi}_{1}, \\boldsymbol{\\xi}_{2})$. Here the subscripts $1$ and $2$ refer respectively to the input and output Hilbert space and the quantity $\\boldsymbol{\\xi}_i$ corresponds to the point in the phase space $\\boldsymbol{\\xi}_i=(x_i, p_i)$. In terms of Wigner functions, the CPTP condition becomes\n\t$\\frac{1}{2\\pi} \\int d\\boldsymbol{\\xi}_{2} M (\\boldsymbol{\\xi}_{1}, \\boldsymbol{\\xi}_{2})=1$.\nBy computing the Fourier transform $\\tilde{M}(\\boldsymbol{\\eta}_{1}, \\boldsymbol{\\eta}_{2})= \\frac{1}{(2\\pi)^2}\\int d\\boldsymbol{\\xi}_{1} d\\boldsymbol{\\xi}_{2} M(\\boldsymbol{\\xi}_{1}, \\boldsymbol{\\xi}_{2}) e^{-i \\boldsymbol{\\xi}_{1} \\cdot \\boldsymbol{\\eta}_{1}}e^{-i \\boldsymbol{\\xi}_{2} \\cdot \\boldsymbol{\\eta}_{2}}$, with $\\boldsymbol{\\eta}_i= (\\kappa_i, \\omega_i)$ the previous condition reads\n\t\\begin{equation} \\label{eq:CPTPcondition}\n\t\t\\tilde{M} (\\boldsymbol{\\eta}_{1},\\boldsymbol{0})= 2\\pi \\delta(\\boldsymbol{\\eta}_{1}),\n\t\\end{equation}\nwhere $\\delta(\\boldsymbol{\\eta}_1)= \\delta(\\kappa_1)\\delta(\\omega_1)$ and $\\delta$ is the Dirac delta function.\n\nWe use the CPTP condition \\eqref{eq:CPTPcondition} to characterize the $W$-matrix. In terms of the Wigner representation the normalization of probability $Tr(W M^A)=1$ is\n\\begin{equation} \\label{eq:norm_oneparty}\n\t\\frac{1}{(2\\pi)^2} \\int d \\boldsymbol{\\eta}_{1} d \\boldsymbol{\\eta}_{2} \\tilde{W} (\\boldsymbol{\\eta}_{1}, \\boldsymbol{\\eta}_{2})\\tilde{M} (\\boldsymbol{\\eta}_{1}, \\boldsymbol{\\eta}_{2})=1.\n\\end{equation}\t\nFor each $\\tilde{M}(\\boldsymbol{\\eta}_{1}, \\boldsymbol{\\eta}_{2})$ we identify a small interval $S_{2}(\\tilde{M}) \\in \\mathbb{R}^2$ around $\\boldsymbol{\\eta}_{2} = \\boldsymbol{0}$ where we can approximate $\\tilde{M} (\\boldsymbol{\\eta}_{1}, \\boldsymbol{\\eta}_{2})$ with $\\tilde{M} (\\boldsymbol{\\eta}_{1}, \\boldsymbol{0})$. We assume that the function $\\tilde{M}$ has a well-defined limit at $\\boldsymbol{\\eta}_2=0$. For all possible $\\tilde{M} (\\boldsymbol{\\eta}_{1}, \\boldsymbol{\\eta}_{2})$ we choose the smallest interval $S_{2}=\\min_{\\tilde{M}} S_{2}(\\tilde{M})$. We set\n\t$S_{2} \\equiv \\left[ -\\frac{\\epsilon}{2},\\,\\frac{\\epsilon}{2}\\right]\\times \\left[ -\\frac{\\delta}{2},\\,\\frac{\\delta}{2}\\right]$.\nWe now split our integral in two parts: in the first one the output variables are integrated over $S_{2}$; in the second one the integration is performed on $\\mathbb{R}^2\\setminus S_{2}$. By using equation \\eqref{eq:CPTPcondition} in the integral on $S_{2}$, equation \\eqref{eq:norm_oneparty} reads\n\\begin{equation} \\label{eq:splitoneparty}\n\t1= \\frac{\\epsilon \\delta}{2\\pi} \\tilde{W} (\\boldsymbol{0}, \\boldsymbol{0}) + \\left<\\tilde{W}\\tilde{M}\\right>_{\\mathbb{R}^2, \\mathbb{R}^2\\setminus S_{2}},\n\\end{equation}\nwhere $\\left< f \\right>_{R_i, R_j}= \\frac{1}{(2\\pi)^2}\\int_{R_i} d \\boldsymbol{\\eta}_{1} \\int_{R_j} d \\boldsymbol{\\eta}_{2} f(\\boldsymbol{\\eta}_{1}, \\boldsymbol{\\eta}_{2})$. 
Note that, in order to satisfy equation \\eqref{eq:splitoneparty}, $\\tilde{W}(\\boldsymbol{\\eta}_1,\\boldsymbol{0})$ can not diverge faster than $1\/\\epsilon \\delta$. This implies that for all possible $\\tilde{M} (\\boldsymbol{\\eta}_{1}, \\boldsymbol{\\eta}_{2})$, restricted to the domain $\\mathbb{R}^2 \\times (\\mathbb{R}^2 \\setminus S_2)$, the second term in the sum is always equal to the same constant. This can only happen if the second term in the sum in equation \\eqref{eq:splitoneparty} vanishes, so we conclude that \n\t$\\tilde{W} (\\boldsymbol{\\eta}_{1}, \\boldsymbol{\\eta}_{2})=0$ when $\\boldsymbol{\\eta}_{2} \\notin S_{2}$ and $\\tilde{W} (\\boldsymbol{0}, \\boldsymbol{\\eta}_{2}) = \\frac{2\\pi}{\\epsilon \\delta}$ when $\\boldsymbol{\\eta}_{2} \\in S_{2}$.\nWe now send $\\epsilon$ and $\\delta$ to zero. In the limit we find\n\\begin{equation} \\label{eq:Woneparty}\n\t\\tilde{W} (\\boldsymbol{\\eta}_{1}, \\boldsymbol{\\eta}_{2})= 2 \\pi w(\\boldsymbol{\\eta}_{1}) \\delta(\\boldsymbol{\\eta}_{2}),\n\\end{equation}\nwhere $w(\\boldsymbol{\\eta}_1)$ is a function to be determined.\n\nWe now ask which conditions $w(\\boldsymbol{\\eta}_{1})$ should satisfy in order for the probability to be normalized. If we substitute the result \\eqref{eq:Woneparty} in the condition for the normalization of the probability \\eqref{eq:norm_oneparty} we see that\n\t$1=\t\\frac{1}{2\\pi} \\int d \\boldsymbol{\\eta}_{1} w(\\boldsymbol{\\eta}_{1}) \\tilde{M} (\\boldsymbol{\\eta}_{1},\\boldsymbol{0})= w(\\boldsymbol{0})$.\nMoreover, we can write the complete expression for the Wigner function as\n\t$W (\\boldsymbol{\\xi}_{1}, \\boldsymbol{\\xi}_{2}) \n\n\t=\\frac{1}{2\\pi} \\int d \\boldsymbol{\\eta}_{1} e^{i\\boldsymbol{\\xi}_{1} \\cdot \\boldsymbol{\\eta}_{1}}w (\\boldsymbol{\\eta}_{1})$.\nThe Wigner equivalent of the $W$-matrix does not depend on the variables of the second Hilbert space. In the operator representation this result is equivalent to having the identity in the second Hilbert space. This is compatible with the finite-dimensional case shown in \\cite{oreshkov2012quantum}. Moreover, given $W = W_1 \\otimes \\mathbb{1}_2$, computing the partial trace on the first system leads to\n\t$Tr_{1} W_1 =\\frac{1}{(2\\pi)^2} \\int d \\boldsymbol{\\xi}_{1} d \\boldsymbol{\\eta}_{1} e^{i\\boldsymbol{\\xi}_{1} \\cdot \\boldsymbol{\\eta}_{1}} w (\\boldsymbol{\\xi}_{1})= w(\\boldsymbol{0})=1$.\nThis means that in $\\mathcal{H}_1$ the $W$-matrix is a state with unit trace. 
Therefore, the most general form of the total $W$ for the one-party case is $W= \\rho \\otimes \\mathbb{1}$, consistent with the finite-dimensional case.\n\n\\subsection{Characterization of the two-party scenario}\nIn the bipartite case the Wigner equivalent of the $W$-matrix is a function of eight variables in the phase space $W(\\boldsymbol{\\xi}_{A_{1}}, \\boldsymbol{\\xi}_{A_{2}}, \\boldsymbol{\\xi}_{B_{1}}, \\boldsymbol{\\xi}_{B_{2}})$, where the notation is consistent with the previous case.\nThe probability normalization in terms of the Fourier transform of the Wigner equivalents of the operators is\n\\begin{align} \\label{eq:normprobAB}\n\t1=&\\frac{1}{(2\\pi)^4} \\int d \\boldsymbol{\\eta}_{A_{1}} d \\boldsymbol{\\eta}_{A_{2}} d \\boldsymbol{\\eta}_{B_{1}} d \\boldsymbol{\\eta}_{B_{2}}\\tilde{W}(\\boldsymbol{\\eta}_{A_{1}}, \\boldsymbol{\\eta}_{A_{2}}, \\boldsymbol{\\eta}_{B_{1}}, \\boldsymbol{\\eta}_{B_{2}})\\nonumber \\\\ \n\t& \\times\\tilde{M}^{A}(\\boldsymbol{\\eta}_{A_{1}}, \\boldsymbol{\\eta}_{A_{2}}) \\tilde{M}^{B}( \\boldsymbol{\\eta}_{B_{1}}, \\boldsymbol{\\eta}_{B_{2}}),\n\\end{align}\nwhere the CPTP condition for $\\tilde{M}^A$ and $\\tilde{M}^B$ is described by equation \\eqref{eq:CPTPcondition}.\nConsider now a specific local operation for one of the two parties, say Alice, given by\n$\\tilde{M}^{A}(\\boldsymbol{\\eta}_{A_{1}}, \\boldsymbol{\\eta}_{A_{2}}) = 2\\pi \\delta(\\boldsymbol{\\eta}_{A_{1}}) \\chi(R_{A_{2}})$, where $\\chi(R_{A_{2}})$ is the characteristic function over the set $R_{A_{2}}$, $\\chi(R_{A_{2}})= 1$ when $\\boldsymbol{\\eta}_{A_{2}} \\in R_{A_{2}}$, $\\chi(R_{A_{2}})= 0$ otherwise. $R_{A_{2}}$ is a two-dimensional set defined as $R_{A_{2}} = \\left[ -\\frac{1}{2\\alpha_1},\\frac{1}{2\\alpha_1} \\right]\\times \\left[ -\\frac{1}{2\\alpha_2},\\frac{1}{2\\alpha_2} \\right]$ and $\\alpha_1, \\, \\alpha_2$ are two arbitrary positive numbers. This choice of the measurement satisfies the CPTP condition for all $\\alpha_1,\\, \\alpha_2$. By inserting this in equation \\eqref{eq:normprobAB} we obtain\n\\begin{align}\n\t1=&\\frac{1}{(2\\pi)^3}\\frac{\\alpha_1 \\alpha_2}{\\alpha_1 \\alpha_2} \\int d \\boldsymbol{\\eta}_{A_{2}} d \\boldsymbol{\\eta}_{B_{1}} d \\boldsymbol{\\eta}_{B_{2}} \\tilde{W}(\\boldsymbol{0}, \\boldsymbol{\\eta}_{A_{2}}, \\boldsymbol{\\eta}_{B_{1}}, \\boldsymbol{\\eta}_{B_{2}})\\nonumber \\\\\n\t&\\times\\chi(R_{A_{2}}) \\tilde{M}^{B}(\\boldsymbol{\\eta}_{B_{1}}, \\boldsymbol{\\eta}_{B_{2}}) \\nonumber\n\\end{align}\nIf we now let $\\alpha_1, \\alpha_2$ be very large, but still finite, we can approximate $\\alpha_1 \\alpha_2 \\chi(R_{A_{2}})$ with the product of two delta functions, so that we can perform the integration in $\\boldsymbol{\\eta}_{A_{2}}$ by evaluating the $W$-matrix in the origin. Therefore, the condition to impose on the total $W$ to have an integral converging to a constant (one) is\n\t$\\tilde{W}(\\boldsymbol{0},\\, \\boldsymbol{\\eta}_{A_{2}}, \\boldsymbol{\\eta}_{B_{1}}, \\boldsymbol{\\eta}_{B_{2}})= 2\\pi\\alpha_1 \\alpha_2 \\tilde{W}_{B}(\\boldsymbol{\\eta}_{B_{1}}, \\boldsymbol{\\eta}_{B_{2}})$\n whenever $\\boldsymbol{\\eta}_{A_{2}} \\in R_{A_{2}}$ and $W=0$ otherwise. $\\tilde{W}_{B}(\\boldsymbol{\\eta}_{B_{1}}, \\boldsymbol{\\eta}_{B_{2}})$ is the reduced $W$ of the observer B. 
As a consequence, in the limit $\\alpha_1,\\,\\alpha_2 \\rightarrow \\infty$ we obtain\n$1=\\frac{1}{(2\\pi)^2} \\int d \\boldsymbol{\\eta}_{B_{1}} d \\boldsymbol{\\eta}_{B_{2}} \\tilde{W}_{B}(\\boldsymbol{\\eta}_{B_{1}}, \\boldsymbol{\\eta}_{B_{2}})\\tilde{M}^{B}( \\boldsymbol{\\eta}_{B_{1}}, \\boldsymbol{\\eta}_{B_{2}})$.\nThe previous equation describes exactly the one-party case, so we can apply the result \\eqref{eq:Woneparty} and write\n\\begin{equation} \\label{eq:middlecondW_A}\n\t\\tilde{W}(\\boldsymbol{0},\\, \\boldsymbol{\\eta}_{A_{2}}, \\boldsymbol{\\eta}_{B_{1}}, \\boldsymbol{\\eta}_{B_{2}})= (2 \\pi)^2 \\tilde{w}_{B_{1}}(\\boldsymbol{\\eta}_{B_{1}})\\delta(\\boldsymbol{\\eta}_{B_{2}})\\delta(\\boldsymbol{\\eta}_{A_{2}}).\n\\end{equation}\nThis decomposition of $\\tilde{W}$ is correct only if $\\boldsymbol{\\eta}_{A_{2}}$ is arbitrarily close to the origin. If we now repeat the same procedure by swapping the measurements of Alice and Bob we find an analogous condition\n\\begin{equation} \\label{eq:middlecondW_B}\n\t\\tilde{W}(\\boldsymbol{\\eta}_{A_1},\\, \\boldsymbol{\\eta}_{A_{2}}, \\boldsymbol{0}, \\boldsymbol{\\eta}_{B_{2}})= (2\\pi)^2\\tilde{w}_{A_{1}}(\\boldsymbol{\\eta}_{A_{1}})\\delta(\\boldsymbol{\\eta}_{A_{2}})\\delta(\\boldsymbol{\\eta}_{B_{2}}),\n\\end{equation}\nwhich holds when $\\boldsymbol{\\eta}_{B_{2}}$ is arbitrarily close to the origin. \n\nWe now go back to the equation \\eqref{eq:normprobAB} for the normalization of probability. Similarly to the one-party case, we define two intervals\n$S_{A_{2}} = \\left[ -\\frac{\\epsilon_A}{2},\\frac{\\epsilon_A}{2} \\right]\\times \\left[ -\\frac{\\delta_A}{2},\\frac{\\delta_A}{2} \\right] \\in \\mathbb{R}^2$ and\n\t$S_{B_{2}} = \\left[ -\\frac{\\epsilon_B}{2},\\frac{\\epsilon_B}{2} \\right]\\times \\left[ -\\frac{\\delta_B}{2},\\frac{\\delta_B}{2} \\right] \\in \\mathbb{R}^2$, where we can approximate the functions $\\tilde{M}^{A}$ and $\\tilde{M}^{B}$ with their values in respectively $\\boldsymbol{\\eta}_{A_{2}} = \\boldsymbol{0}$ and $\\boldsymbol{\\eta}_{B_{2}} = \\boldsymbol{0}$. We can now split the probability condition in four parts, writing the integrals over $A_2$ and $B_2$ as the sum of an integral over $S_{A_2}$ and $S_{B_2}$ and on the rest of the integration region $\\bar{S}_{A_{2}}$ and $\\bar{S}_{B_{2}}$. Using the CPTP condition for the local operations we find\n\\begin{equation}\n\t1=P_{S_{A_{2}},S_{B_{2}}}+ P_{S_{A_{2}}, \\bar{S}_{B_{2}}} + P_{\\bar{S}_{A_{2}}, S_{B_{2}}} + P_{\\bar{S}_{A_{2}},\\bar{S}_{B_{2}}}\n\\end{equation}\nwhere\n\\begin{align*}\n\t&P_{S_{A_{2}},S_{B_{2}}}= const,\\\\\n\t&P_{S_{A_{2}}, \\bar{S}_{B_{2}}}= k_A \\int d \\boldsymbol{\\eta}_{B_{1}} \\tilde{w}_{B_{1}}(\\boldsymbol{\\eta}_{B_{1}})\\int_{\\mathbb{R}^2 \\setminus S_{B_{2}}}d\\boldsymbol{\\eta}_{B_{2}}\\delta(\\boldsymbol{\\eta}_{B_{2}}),\\\\\n\t&P_{\\bar{S}_{A_{2}}, S_{B_{2}}}= k_B \\int d \\boldsymbol{\\eta}_{A_{1}} \\tilde{w}_{A_{1}}(\\boldsymbol{\\eta}_{A_{1}})\\int_{\\mathbb{R}^2 \\setminus S_{A_{2}}}d\\boldsymbol{\\eta}_{A_{2}}\\delta(\\boldsymbol{\\eta}_{A_{2}}),\\\\\n\t&P_{\\bar{S}_{A_{2}},\\bar{S}_{B_{2}}}= \\left< \\tilde{W}\\tilde{M}^{A} \\tilde{M}^{B}\\right>_{\\mathbb{R}^2, \\mathbb{R}^2, \\mathbb{R}^2\\setminus S_{A_{2}}, \\mathbb{R}^2 \\setminus S_{B_{2}}}.\n\\end{align*}\nHere, $k_A,\\ k_B$ are constants and the notation for the last term is analogous to the one used in the one-party case. 
$P_{S_{A_{2}}, \\bar{S}_{B_{2}}}$ and $P_{\\bar{S}_{A_{2}}, S_{B_{2}}}$ are identically zero because the delta functions vanish in the interval.\nSince the integral is equal to the same constant for all local operations we conclude that the fourth term is zero in the interval considered. For this to be the case, the $W$-function should be zero outside $S_{A_{2}}$ or $S_{B_{2}}$, at least in one of the outputs. Setting $\\tilde{W}$ equal to zero in the input would instead lead to the trivial solution $W=0$. By taking the limit when the intervals $S_{A_{2}}, S_{B_{2}}$ reduce to a point, and following an analogous procedure to the one-party case, it is possible to show that the $W$-matrix is a delta function at least in one of the two outputs. Applying the inverse Fourier transform, in the original variables $\\boldsymbol{\\xi}_i$ the conditions on the $W$ imply that the Wigner equivalent of the process matrix can not depend on both outputs at the same time, i.e. $W(\\boldsymbol{\\xi}_{A_{1}}, \\boldsymbol{\\xi}_{A_{2}}, \\boldsymbol{\\xi}_{B_{1}})$ or $W(\\boldsymbol{\\xi}_{A_{1}}, \\boldsymbol{\\xi}_{B_{1}}, \\boldsymbol{\\xi}_{B_{2}})$. As we have already pointed out in the one-party scenario, this condition is equivalent to having an identity in at least one of the two output Hilbert spaces when $W$ is represented in the space of linear operators on the tensor product of the four Hilbert spaces.\n\n\nThe results for the infinite-dimensional process matrices show that the bipartite $W$ allows for three different situations. The first case consists in a shared state between $A$ and $B$ with no-signaling between the two observers. In the framework of infinite-dimensional W-matrices this is described as $W(\\boldsymbol{\\xi}_{A_1}, \\boldsymbol{\\xi}_{B_1})$. The fact that W does not depend on the output variables corresponds to the condition, shown in \\cite{oreshkov2012quantum}, $W= \\rho_{A_{1} B_{1}} \\otimes \\mathbb{1}_{A_{2} B_{2}}$. The second and third case describe signaling from one observer to the other. In this case the W-matrix is written as $W(\\boldsymbol{\\xi}_{A_1}, \\boldsymbol{\\xi}_{B_1}, \\boldsymbol{\\xi}_{B_2})$, with correlations at least between $\\boldsymbol{\\xi}_{A_1}$ and $\\boldsymbol{\\xi}_{B_2}$, when B signals to A or as $W(\\boldsymbol{\\xi}_{A_1}, \\boldsymbol{\\xi}_{A_2}, \\boldsymbol{\\xi}_{B_1})$, where at least $\\boldsymbol{\\xi}_{B_1}$ and $\\boldsymbol{\\xi}_{A_2}$ are correlated, when A signals to B. These two terms are described respectively as $W_{A_{1} A_{2} B_{1}} \\otimes \\mathbb{1}_{B_{2}}$ and $W_{A_{1} B_{1} B_{2}} \\otimes \\mathbb{1}_{A_{2}}$ in the finite-dimensional case.\n\nWe are interested in processes, which we refer to as \\emph{causally nonseparable}, where it is not possible to decompose the $W$-matrix as \\cite{araujo2015witnessing, oreshkov2015causal}\n\\begin{equation} \\label{eq:causallyseparable}\n\tW= \\lambda W^{A \\prec B} + (1-\\lambda)W^{B \\prec A},\n\\end{equation}\nwhere $0 \\leq \\lambda \\leq 1$. If equation \\eqref{eq:causallyseparable} holds, the $W$-matrix can always be understood as a classical (convex) mixture of a term which allows signaling from A to B with probability $\\lambda$ and a term which allows signaling from B to A with probability $1-\\lambda$. 
The possibility for A and B to share an entangled state with no-signaling correlations is also included in equation \\eqref{eq:causallyseparable}.\n\n\\section{Quantum switch in infinite dimensions}\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[scale=0.35]{switch.pdf}\n\t\\caption{A quantum system is prepared in a state $\\left| \\psi_I \\right>$ at time $t_I$ and is sent in a superposition of two paths. Each path, realized by sending the particle through a fiber (solid and dotted line in the figure), enters the two laboratories A and B in a fixed order and is detected by C at time $t_O$ after exiting the two laboratories. In each local laboratory the state undergoes local quantum operations described as measurement and repreparation. The probability of measurement outcomes shows an interference pattern due to the superposition of two causal orders. The interference can not be reproduced from local operations performed in a fixed causal order.}\n\t\\label{fig:switch}\n\\end{figure}\nA scheme of the quantum switch is provided in Figure \\ref{fig:switch}. The switch involves three local observers, which we denote as A, B and C. The observers perform local quantum operations, here chosen to be a measurement followed by a repreparation of a quantum state. Outside the laboratories the system propagates along two ``fibers'' (solid and dotted line in Figure \\ref{fig:switch}), which represent the propagation of the quantum system along an additional spatial degree of freedom. A quantum state $\\left|\\psi_I \\right>$ is prepared at time $t_I$ and sent in a superposition of two paths. In one of the paths the particle enters laboratory A at time $t_1$ and laboratory B at time $t_2 > t_1$; in the second path the order of the operations A and B is reversed. After exiting the laboratories A and B the system is detected by the observer C at time $t_O$. Note that in order to preserve the coherence of the process the measurements should not reveal the time.\n \nThe switch describes a quantum process in which the order of the local operations is in a superposition. In finite dimensions it has been proved that the $W$-matrix which describes the switch is causally nonseparable \\cite{araujo2015witnessing}, i.e. it can not be written as $W=\\lambda W^{A \\prec B \\prec C}+(1-\\lambda)W^{B \\prec A \\prec C}$, where C always comes after A and B and $0 \\leq\\lambda \\leq 1$. Here we generalize the switch to infinite dimensions, and provide an alternative proof of its causal nonseparability.\n\nThe $W$-matrix is an operator acting on the tensor product of six Hilbert spaces, $W \\in \\mathcal{B}(\\mathcal{H}_{A_1}\\otimes \\mathcal{H}_{A_2} \\otimes \\mathcal{H}_{B_1} \\otimes \\mathcal{H}_{B_2} \\otimes \\mathcal{H}_{C_1} \\otimes \\mathcal{H}_{p})$. The first five spaces are infinite-dimensional and $\\mathcal{H}_{p}$ is a two-dimensional Hilbert space spanned by the vectors $\\left| 0 \\right>$ and $\\left| 1 \\right>$, which label each of the paths (fibers) taken by the particle (see Figure \\ref{fig:switch}). The W-matrix of the switch is pure and can be written as\n\t$W= \\left| w \\right> \\left< w \\right|$,\nwhere $\\left| w \\right>= \\int d\\bar{r}\\, w(\\bar{r})\\left| \\bar{r} \\right>$, with $\\bar{r}=(r_{A_{1}},\\ r_{A_{2}},\\ r_{B_{1}}, \\ r_{B_{2}}, r_{C_{1}})$. 
Explicitly,\n\\begin{equation} \\label{eq:Wfunction}\n\tw(\\bar{r})= \\frac{1}{\\sqrt{2}}\\int dr_I \\psi_I(r_I) \\left( w^{A \\prec B \\prec C}\\left| 0 \\right> + w^{B \\prec A\\prec C} \\left| 1 \\right>\\right).\n\\end{equation} \nHere, $\\psi_I (r_I)$ is a normalized square-integrable function. The variables of the functions $w^{A \\prec B \\prec C}=w^{A \\prec B \\prec C}(r_I, \\bar{r})$ and $w^{B \\prec A \\prec C}=w^{B \\prec A \\prec C}(r_I,\\bar{r})$, where the arguments parametrize the propagation along the fiber, are omitted in \\eqref{eq:Wfunction} for simplicity. The total state $\\left| w \\right>$ is a superposition of two terms, decribed by $w^{A \\prec B \\prec C}$ and $w^{B \\prec A \\prec C}$, which can be explicitly written as\n\\begin{align}\n\tw^{A \\prec B \\prec C} = &G_{I1}(r_{A_{1}}-r_{I}) G_{12}(r_{B_{1}}-r_{A_{2}})G_{2O}(r_{C_1}-r_{B_{2}}) \\label{eq:wABC}\\\\\n\tw^{B \\prec A \\prec C} = &G_{I1}(r_{B_{1}}-r_{I}) G_{12}(r_{A_{1}}-r_{B_{2}})G_{2O}(r_{C_1}-r_{A_{2}}) \\label{eq:wBAC}\n\\end{align}\nwhere $G_{ab}(r_b-r_a)= \\left< r_b \\right| e^{-\\frac{i}{\\hbar}\\hat{H}(t_b - t_a)}\\left| r_a \\right>$ is the Green function between $r_a$ and $r_b$ and $\\hat{H}$ is the hamiltonian which generates the evolution along the fiber.\n\nConsider now the local operations performed by one of the parties, say A. Suppose that A measures the state in a region $R_i$ of the whole laboratory A. Afterwards, the state is reprepared in $\\left| \\phi_A \\right>$. The Choi-Jamio{\\l}kowski equivalent of this local operation in A's laboratory is\n\t$M_i^A = \\int_{R_i} dy_A \\left| y_A \\right> \\left< y_A \\right| \\otimes \\left| \\phi_A \\right>\\left< \\phi_A \\right|$. \nThe intervals $R_i$ satisfy $R_i \\cap R_j= \\emptyset$ for $i \\neq j$ and $\\cup_i R_i= V_{A}$, where $V_A$ is the volume of the local laboratory. The same considerations are valid for the case of B. The observer C detects the state he receives by projecting it over the region $R_k$ of the volume of his laboratory $V_C$ and by recombining the two paths via a measurement on the $\\left| \\pm \\right>= (\\left| 0 \\right>\\pm \\left| 1 \\right>)\/\\sqrt{2}$ basis. As a consequence, the local operation performed by C is $M_{k \\pm}^C= M_k^C \\otimes \\left| \\pm \\right>\\left< \\pm \\right|$, where $M_k^C= \\int_{R_k} dy_C \\left| y_C \\right> \\left< y_C \\right|$ and it is implied that the output Hilbert space of C is one-dimensional.\n\nThe probability of the measurement outcomes is then given by\n\t$p_{ijk \\pm}= p(\\mathcal{M}_i^A, \\ \\mathcal{M}_j^B, \\mathcal{M}_{k \\pm}^C)= \\left< w\\right| (M_i^A \\otimes M_j^B \\otimes M_{k \\pm}^C) \\left| w \\right>$. 
For simplicity, we first consider a probability density $\\Pi_{ijk \\pm}=\\Pi_{ijk \\pm}(r_I, r'_I)$ such that\n\t$p_{ijk \\pm}= \\int dr_I dr'_I \\psi_I (r_I) \\psi^*_I (r'_I) \\Pi_{ijk \\pm}(r_I, r'_I)$.\nThen we can write\n\\begin{equation} \\label{eq:densityswitch}\n\t\\Pi_{ijk \\pm}= \\frac{1}{2} \\left[ \\pi^{A \\prec B \\prec C}_{ijk \\pm} + \\pi^{B \\prec A \\prec C}_{ijk \\pm} + 2 \\operatorname{Re} \\pi^{int}_{ijk \\pm}\\right],\n\\end{equation}\nwhere we can express the single terms in the sum by adopting a vector notation with $\\left| w^{A\\prec B \\prec C} \\right> = \\int d \\bar{r}\\, w^{A\\prec B \\prec C} \\left| \\bar{r} \\right>$ and $\\left| w^{B\\prec A \\prec C} \\right> = \\int d \\bar{r}\\, w^{B\\prec A \\prec C} \\left| \\bar{r} \\right>$,\n\\begin{align}\n\t&\\pi^{A \\prec B \\prec C}_{ijk \\pm}=\\frac{1}{2}\\left< w^{A\\prec B\\prec C} \\right| M_i^A \\otimes M_j^B \\otimes M_{k}^C \\left| w^{A\\prec B \\prec C} \\right> \\nonumber\\\\\n\t&\\pi^{B \\prec A \\prec C}_{ijk \\pm}=\\frac{1}{2}\\left< w^{B\\prec A \\prec C} \\right| M_i^A \\otimes M_j^B \\otimes M_k^C \\left| w^{B\\prec A \\prec C}\\right> \\nonumber\\\\\n\t&\\pi^{int}_{ijk \\pm}=\\pm\\frac{1}{2}\\left< w^{A\\prec B \\prec C} \\right| M_i^A \\otimes M_j^B \\otimes M_k^C \\left| w^{B\\prec A \\prec C} \\right>.\n\\end{align}\n\nAssuming $t_1-t_I=t_2-t_1=t_O-t_2=\\Delta t$, we can show that $p_{ijk \\pm}$ describes two-way signaling, from A to B to C and from B to A to C. Specifically, we show that the two terms $\\pi_{ijk \\pm}^{A \\prec B \\prec C}$ and $\\pi_{ijk \\pm}^{B \\prec A \\prec C}$ correspond to processes in which the order of the events is fixed. Instead, $\\pi^{int}_{ijk \\pm}$ is an interference term, due to the superposition of causal orders, describing two-way signaling between the three observers. To show this, we sum over the outputs of the observers and examine how the marginals depend on the settings $\\phi_A$ of $M_i^A$ and $\\phi_B$ of $M_j^B$.\n\nWe assume that the states $\\psi_I, \\phi_A$ and $\\phi_B$ are prepared so that the probability of detection in the three local laboratories is almost one. This means that the integration over the volume of any local laboratory (A, B or C) can be extended to an integral over the whole space, since this amounts to adding a negligible term to the sum. Defining $p^{ABC}(ijk\\pm | \\phi_A, \\phi_B)= \\int d r_I d r'_I \\psi_I (r_I) \\psi^*_I (r'_I)\\pi_{ijk\\pm}^{A \\prec B \\prec C}$ as the integral of the first term in equation \\eqref{eq:densityswitch}, we find that $\\sum_{jk \\pm}p^{ABC}(ijk\\pm | \\phi_A, \\phi_B)= p^{ABC}(i)$, which means that A does not receive information from B or C. Moreover, since $\\sum_{ij}p^{ABC}(ijk\\pm | \\phi_A, \\phi_B)= p^{ABC}(k\\pm | \\phi_B)$, C receives information from B but not from A. Finally, the fact that $\\sum_{ik\\pm}p^{ABC}(ijk\\pm | \\phi_A, \\phi_B)= p^{ABC}(j | \\phi_A)$ means that B receives information from A but not from C. Therefore, we conclude that the probability describes a causally ordered process where A signals to B and B signals to C. The situation is symmetric under the exchange of A and B if we consider the integral of the second term in equation \\eqref{eq:densityswitch}, $p^{BAC}(ijk\\pm | \\phi_A, \\phi_B)= \\int d r_I d r'_I \\psi_I (r_I) \\psi^*_I (r'_I)\\pi_{ijk\\pm}^{B \\prec A \\prec C}$.\n\nA probabilistic mixture of the two terms corresponds to a process with no fixed causal order, which is nevertheless causally separable in the sense previously discussed.
In contrast, when the quantum switch is considered, an additional interference term appears. The interference corresponds to $\\pi^{int}_{ijk \\pm}$ in equation \\eqref{eq:densityswitch} and can be shown to be\n\\begin{align}\n\t&\\pi^{int}_{ijk\\pm}= \\pm \\frac{1}{2}\\int_{R_i} dr_{A_1} \\int_{R_j} dr_{B_1} \\int_{R_k} dr_{C_1} \\times \\nonumber\\\\\n\t&\\int dr_{A_2} dr'_{A_2} dr_{B_2} dr'_{B_2} w^{A\\prec B \\prec C}w^{*B\\prec A \\prec C}\\times \\nonumber\\\\\n\t&\\phi_A(r'_{A_2})\\phi^*_A(r_{A_2})\\phi_B(r'_{B_2})\\phi^*_B(r_{B_2}),\n\\end{align}\nwhere $w^{A\\prec B \\prec C}= w^{A\\prec B \\prec C}(r_I, r_{A_1}, r_{A_2}, r_{B_1}, r_{B_2}, r_{C_1})$ and $w^{B\\prec A \\prec C}=w^{B\\prec A \\prec C}(r'_I, r_{A_1}, r'_{A_2}, r_{B_1}, r'_{B_2}, r_{C_1})$ are the functions defined in equations \\eqref{eq:wABC} and \\eqref{eq:wBAC}. To show that there is two-way signaling, we define $p^{int}(ijk \\pm| \\phi_A, \\phi_B)=\\int d r_I d r'_I \\psi_I (r_I) \\psi^*_I (r'_I)\\pi_{ijk\\pm}^{int}$ and sum over the outputs of the three observers. We find that $\\sum_{ij}p^{int}(ijk \\pm| \\phi_A, \\phi_B)= p^{int}(k \\pm | \\phi_A, \\phi_B)$, so both A and B signal to C. Moreover, the two conditions $\\sum_{j}p^{int}(ijk \\pm | \\phi_A, \\phi_B)= p^{int}(ik \\pm | \\phi_A, \\phi_B)$ and $\\sum_{i}p^{int}(ijk \\pm| \\phi_A, \\phi_B)= p^{int}(jk \\pm | \\phi_A, \\phi_B)$ mean, respectively, that B signals to A and C, and that A signals to B and C. Therefore, we conclude that there is two-way signaling. Since the $W$-matrix is pure and the correlations can exhibit signaling in both directions, A to B to C and B to A to C, we conclude that the process is causally nonseparable.\n\nTo summarise, in this paper we generalize the process matrix framework to continuous-variable quantum systems. This means that, just as in finite dimensions, it is possible to describe the correlations between the measurement outcomes of two (or more) observers who can receive or send signals in the absence of a global causally-ordered background. The correlations obtained are more general than those allowed by ordinary (causal) quantum mechanics. This generalization makes it possible to devise new experiments using continuous-variable quantum systems, such as those considered in Gaussian quantum optics. Moreover, this work constitutes a first step towards the goal of formulating quantum fields on indefinite causal structures. As an example of an application of this work, we implemented an infinite-dimensional version of the quantum switch exhibiting correlations stemming from a quantum superposition of channels. \n\n\n\\begin{acknowledgments}\n\tWe thank Bernhard Baumgartner, Fabio Costa, Adrien Feix and Magdalena Zych for useful discussions. We acknowledge support from the European Commission project RAQUEL (No. 323970); the Austrian Science Fund (FWF) through the Special Research Program Foundations and Applications of Quantum Science (FoQuS), the doctoral programme CoQuS, and Individual Project (No. 24621).\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n+{"text":"\\section{Introduction}\n\nThe problem of string matching can be defined as follows. Given a text $T=t_1\nt_2\\cdots t_n$ and a pattern $P=p_1p_2\\cdots p_m$, with letters from an alphabet $\\Sigma$,\nfind all the occurrences of the pattern in the text.\nThis problem can be solved in $O(n+m)$ time by using well-known algorithms\n(e.g., KMP \\cite{KMP77}).
\n\nA more general formulation allows ``don't care'' or ``wild card''\ncharacters in the text and\/or the pattern. Pattern matching with don't cares can\nbe solved in $O(n \\log |\\Sigma| \\log m)$ time as shown in \\cite{FP74}. A more\nrecent result \\cite{CC07} gives a deterministic $O(n \\log m)$ time algorithm.\n\nYet another enhancement is to allow for mismatches. We can\nformulate two versions of this problem: {\\bf 1) pattern matching with mismatches:} \nfind the distance between the pattern and the text for every alignment; or {\\bf 2) pattern matching with $k$\nmismatches:} find only the alignments for which the distance is no\nmore than a given threshold $k$. \n\n\nThe distance metric used can be the Hamming distance, the edit distance or\nother criteria such as the number of non-overlapping inversions (e.g.,\n\\cite{CC+13}). In this paper we focus on the Hamming distance.\nThe Hamming distance between two strings $A$ and $B$ is defined as the number\nof positions where the two strings differ and is denoted by $Hd(A,B)$. \n\nPattern matching with mismatches can be solved, naively, by computing the\nHamming distance for every alignment of the pattern in the text, in $O(nm)$ time. The fastest known exact algorithm is\nAbrahamson's algorithm \\cite{ABR87}, which runs in $O(n \\sqrt{m \\log m})$\ntime.\n\nPattern matching with $k$ mismatches can be solved in $O(nk)$\ntime (see \\cite{LV85} and \\cite{GG86}). These algorithms are based on a\ntechnique called the Kangaroo method (see section \\ref{sec_kangaroo}). This\nmethod computes the Hamming distance for every alignment in $O(k)$ time by\n``jumping'' from one error to the next. A faster algorithm for pattern\nmatching with $k$ mismatches runs in $O(n\\sqrt{k \\log k})$ time \\cite{ALP04}. A\nsimpler version of this algorithm was given in \\cite{NR15}.\n\nRecent work has also addressed the online\nversion of pattern matching, where the text is received in a\nstreaming model, one character at a time, and cannot be stored in its\nentirety (see e.g., \\cite{CKP08}, \\cite{PP09}, \\cite{PL07}).\nAnother version of this problem matches the pattern against multiple input\nstreams (see e.g., \\cite{CEP+07}). Yet another interesting problem is to sample a\nrepresentative set of mismatches for every alignment (see e.g., \\cite{CEP+12}).\nA survey of string matching with mismatches is given in \\cite{Nav01}.\nA description of practical on-line string searching algorithms can be\nfound in \\cite{NR02}.\n\nYet another formulation allows for don't care or wild card characters.\nPattern matching with mismatches and don't cares can be solved in $O(n \\sqrt{g \\log m})$ time, where $g$ is the number of non-wild card\npositions in the pattern (see \\cite{NR15}). This is done by a simple extension of Abrahamson's algorithm.\n\nPattern matching with $k$ mismatches and don't cares can be solved in\n$O(nk^2\\log^2m)$ time as shown in \\cite{Clif10}. The runtime can be improved to\n$O(nk\\;\\text{polylog}\\;m)$ as shown in \\cite{Clif10, CEP+09}.\nIf we allow don't cares only in the pattern, the problem can be solved in\n$O(n\\sqrt[3]{mk\\log^2m})$ time as shown in \\cite{CP07}. This is also the problem we discuss in this paper.
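Before setting up notation, a naive $O(nm)$-time reference implementation of this problem may help fix ideas. The sketch below is our own illustration (the function name and the use of '?' as the don't care symbol are ours): it simply counts mismatches at every alignment, skipping don't care positions of the pattern.

\\begin{verbatim}
def k_mismatch_naive(text, pattern, k, wildcard='?'):
    # Report all 0-based alignments i with Hd(P, T_i) <= k, where a
    # don't care in the pattern matches any text character.
    n, m = len(text), len(pattern)
    result = []
    for i in range(n - m + 1):
        mismatches = 0
        for j in range(m):
            if pattern[j] != wildcard and pattern[j] != text[i + j]:
                mismatches += 1
                if mismatches > k:        # early exit for this alignment
                    break
        if mismatches <= k:
            result.append(i)
    return result

print(k_mismatch_naive("abcabdaa", "ab?a", 1))   # prints [0, 3]
\\end{verbatim}

Both algorithms presented below improve on this quadratic baseline.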
{\\bf Notation:} Let $T_i$ denote $t_i t_{i+1}\\cdots t_{i+m-1}$ for all\n$i=1,\\ldots,n-m+1$.\n \n{\\bf Pattern matching with $k$ mismatches and don't cares in the pattern:}\nGiven a text $T=t_1t_2\\ldots t_n$ and a pattern $P=p_1p_2\\ldots p_m$ from an\nalphabet $\\Sigma$, with $|\\Sigma|\\leq n$, and an integer $k$, output all $i$, $1\\leq\ni\\leq n-m+1$, for which $Hd(P, T_i) \\leq k$.\nThe pattern may contain don't care characters, which match any character.\n\nGiven a pattern $P$ with don't cares, a maximal-length substring of $P$ that\nhas no don't cares is called an ``{\\bf island}''. We denote the number of\nislands in $P$ by $q$.\nIn this paper we give two algorithms for pattern matching with $k$ mismatches\nwhere there are don't cares in the pattern. The first one runs in\n$O(n\\sqrt{(q+k)\\log m})$ time. The second one runs in $O(n\\sqrt[3]{qk\\log^2 m} +\nn\\sqrt{k\\log m})$ time. By combining the\ntwo, we show that pattern matching with $k$ mismatches and don't cares in the\npattern can be solved in $O(n\\sqrt{k\\log m}+n\\min\\{\\sqrt[3]{qk\\log^2 m},\\sqrt{q\\log m}\\})$ time.\nIf the number of islands is $O(k)$, our runtime becomes $O(n\\sqrt{k \\log m})$,\nwhich essentially matches the best known runtime for pattern matching with $k$\nmismatches without don't cares ($O(n\\sqrt{k\\log k})$). Since $q$ is always less\nthan $m$, our algorithm outperforms the $O(n\\sqrt[3]{mk\\log^2m})$ algorithm of\n\\cite{CP07}.\nFor $q=O(k^2)$, our algorithm also outperforms the best known $O(nk\n\\;\\text{polylog}\\; m)$ algorithms of \\cite{Clif10, CEP+09}.\n\n\n\\section{Methods}\n\nBoth algorithms in this paper have the same basic structure (see section\n\\ref{sec_basic}).\nThe difference lies in how fast we can answer the single alignment verification question:\n\n\\begin{question}\nGiven $i$, is the Hamming distance between $P$ and $T_i$ no more than\n$k$?\n\\end{question}\n\nIn the first algorithm (section \\ref{sec_alg1}), we can answer this question in\n$O(q+k)$ time. In the second algorithm (section \\ref{sec_alg2}), we can answer\nthis question in $O(\\sqrt[3]{k^2q^2\\log m} + k)$ time.\n\n\\subsection{Background}\n\nWe start by reviewing a number of well-known techniques\nused in the literature for pattern matching with $k$ mismatches (e.g., see\n\\cite{ALP04}), namely:\nconvolution, marking, filtering and the Kangaroo method.\n\n\\subsubsection{Convolution}\nGiven two arrays $T=t_1t_2\\ldots t_n$ and $P=p_1p_2\\ldots\np_m$ (with $m\\leq n$), the convolution of $T$ and $P$ is a sequence\n$C=c_1,c_2,\\ldots,c_{n-m+1}$ where $c_i=\\sum_{j=1}^m t_{i+j-1}p_j$, for $1\\leq\ni\\leq n-m+1$. \n\n \n\nConvolution can be applied to pattern matching with mismatches, as follows.\nGiven a string $S$ and a character $\\alpha$, define the string $S^{\\alpha}$\nby $S^\\alpha[i]=1$ if $S[i]=\\alpha$ and $S^\\alpha[i]=0$ otherwise.\nLet $C^\\alpha=convolution(T^\\alpha, P^\\alpha)$. Then $C^\\alpha[i]$ gives the\nnumber of matches between $P$ and $T_i$\nwhere the matching character is $\\alpha$. Therefore, one convolution gives us\nthe number of matches contributed by a single character to each of the\nalignments. Then $\\sum_{\\alpha \\in \\Sigma}C^{\\alpha}[i]$ is the total number of\nmatches between $P$ and $T_i$.\n\nOne convolution can be computed in\n$O(n\\log m)$ time by using the Fast Fourier Transform.
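As an illustration, the following sketch implements this character-by-character match counting with an off-the-shelf FFT (numpy); indexing is 0-based and the names are ours. One correlation per character of the pattern's alphabet yields, summed up, the number of matches at every alignment.

\\begin{verbatim}
import numpy as np

def count_matches(text, pattern):
    # total[i] = number of j with text[i+j] == pattern[j]
    n, m = len(text), len(pattern)
    total = np.zeros(n - m + 1, dtype=np.int64)
    size = 1 << (n + m).bit_length()        # FFT length >= n + m - 1
    for alpha in set(pattern):
        t_ind = np.array([1.0 if c == alpha else 0.0 for c in text])
        p_ind = np.array([1.0 if c == alpha else 0.0 for c in pattern])
        # correlation with pattern == convolution with reversed pattern
        c = np.fft.irfft(np.fft.rfft(t_ind, size) *
                         np.fft.rfft(p_ind[::-1], size), size)
        # c[m-1+i] = sum_j t_ind[i+j] * p_ind[j]
        total += np.rint(c[m - 1:n]).astype(np.int64)
    return total

print(count_matches("abcabdaa", "abda"))    # prints [3 0 0 4 1]
\\end{verbatim}

Note that the $O(n\\log m)$ bound is obtained by cutting the text into overlapping blocks of length $O(m)$ and transforming each block; the sketch above uses a single length-$O(n)$ transform for brevity.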
\nIf the convolutions are applied to binary inputs, as is often the case in\npattern matching applications, some speedup techniques are presented in \\cite{FG09}.\n\n\\subsubsection{Marking}\\label{sec_marking} \n\n\nMarking is an algorithm that counts the number of matches for\nevery alignment, as follows.\nThe algorithm scans the text one character at a time\nand ``marks'' all the alignments that would produce a match between the current\ncharacter in the text and the corresponding character in the pattern. \nThe marking algorithm is generally used only on a subset of the pattern. That\nis, given a set $A$ of positions in $P$, the marking algorithm counts matches\nbetween the text and the subset of $P$ given by $A$. The pseudocode of the\nmarking algorithm is given in Algorithm \\ref{alg_counting}.\n\n\\begin{algorithm}\n\\SetKwInOut{Input}{input}\\SetKwInOut{Output}{output}\n\\caption{Mark$(T, P, A)$}\\label{alg_counting} \n\\Input{Text $T$, pattern $P$ and a set $A$ of positions in $P$} \n\\Output{An array $M$ where $M[i]$ gives the number of matches between $T_i$\nand $P$, on the subset of positions of $P$ given by $A$}\n\\lFor {$i\\leftarrow 1$ \\KwTo $n$}{$M[i]=0$}\n\\For {$i\\leftarrow 1$ \\KwTo $n$}{\n \\For{$j \\in A$ s.t. $P[j] = T[i]$}{\n \\lIf{$i-j+1 > 0$}{\n $M[i-j+1]${\\bf $++$}\n }\n }\n}\n\\Return $M$\\;\n\\end{algorithm}\n\n\\subsubsection{Filtering}\n\nFiltering is a method for reducing the number of alignments to look\nat. It is based on the following principle:\nif we restrict our pattern to only $2k$ positions, any alignment that has no more than\n$k$ mismatches must have at least $k$ matches among the $2k$ positions.\nTo count the matches among the $2k$ selected positions, for every alignment, we use\nthe marking algorithm. If the total number of marks generated is $B$, then there can\nbe no more than $B\/k$ positions that have at least $k$ marks. Therefore, instead\nof $n-m+1$ alignments we only have to look at $B\/k$ alignments. Each alignment\nis then verified using other methods.\n\n\\subsubsection{The Kangaroo method}\\label{sec_kangaroo} \n\nThe Kangaroo method allows us to check whether the number of mismatches for a\nparticular alignment is no more than $k$, in $O(k)$ time. The Kangaroo method\nconstructs a generalized suffix tree of $T+P$, where $+$ denotes concatenation.\nThis suffix tree can be enhanced to answer Lowest Common Ancestor\n(LCA) queries in $O(1)$ time \\cite{AH+76}. LCA queries give us the longest\ncommon prefix between any portion of the text and any portion of the pattern,\nessentially telling us where the first mismatch appears. \nSpecifically, to count mismatches between\n$P$ and $T_i$, first perform an LCA query to find the position of the\nfirst mismatch between $P$ and $T_i$. Let this position be $j$. Then,\nperform another LCA query to find the first mismatch between $P_{j+1..m}$ and\n$T_{i+j+1..i+m-1}$, which gives the second mismatch of alignment $i$.\nContinue to ``jump'' from one mismatch to the next, until the end\nof the pattern is reached or we have found more than $k$ mismatches.\nTherefore, after $O(k)$ LCA queries we will either find all the mismatches or\ndetermine that there are more than $k$ of them.
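As a self-contained illustration of the same jumping idea, the sketch below (names are ours) replaces the suffix-tree LCA with a longest common extension computed by binary search over polynomial rolling hashes, a common practical substitute; equality of hashes is taken as equality of substrings, which holds up to hash collisions. Each jump then costs $O(\\log m)$ rather than $O(1)$, i.e. $O(k\\log m)$ per alignment, and the hash tables would of course be precomputed once rather than per call.

\\begin{verbatim}
MOD, BASE = (1 << 61) - 1, 131

def prefix_hashes(s):
    h = [0] * (len(s) + 1)
    for i, c in enumerate(s):
        h[i + 1] = (h[i] * BASE + ord(c)) % MOD
    return h

def kangaroo_hashed(text, pattern, i, k):
    # True iff Hd(pattern, text[i:i+m]) <= k (no don't cares here)
    m = len(pattern)
    ht, hp = prefix_hashes(text), prefix_hashes(pattern)
    pw = [1] * (m + 1)
    for x in range(m):
        pw[x + 1] = pw[x] * BASE % MOD
    sub = lambda h, a, b: (h[b] - h[a] * pw[b - a]) % MOD
    j, mism = 0, 0
    while j < m:
        lo, hi = 0, m - j   # binary search the longest common extension
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if sub(ht, i + j, i + j + mid) == sub(hp, j, j + mid):
                lo = mid
            else:
                hi = mid - 1
        j += lo             # skip the matching stretch
        if j == m:
            break
        mism += 1           # text[i+j] != pattern[j]: the next mismatch
        if mism > k:
            return False
        j += 1              # jump over the mismatch
    return True
\\end{verbatim}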
\nThe pseudocode of the original $O(k)$-per-alignment Kangaroo method is given in Algorithm \\ref{alg_kangaroo}.\n\n\\begin{algorithm}\n\\SetKw{LCA}{LCA}\n\\SetKw{True}{true}\n\\SetKw{False}{false}\n\\SetKwInOut{Input}{input}\\SetKwInOut{Output}{output}\n\\caption{Kangaroo$(P, T_i, k)$}\\label{alg_kangaroo}\n\\Input{A pattern $P$, an alignment $T_i$ and an integer $k$}\n\\Output{\\True if the pattern matches the alignment with no more than $k$\nmismatches, \\False otherwise}\n$j=0$\\;\n$d=0$\\;\n\\While{$d \\leq k$} {\n $j = j + \\LCA(T_{i+j}, P_{j+1})+1$\\;\n \\If {$j > m$} {\n \\Return{\\True}\\;\n }\n $d=d+1$\\;\n} \n\\Return{\\False}\\;\n\\end{algorithm}\n\n\\subsection{General Algorithm}\\label{sec_basic}\n\nWe are now ready to present the main algorithms of this paper. The\ngeneral structure of both algorithms is given in Algorithm \\ref{alg_basic}.\n\n\n\\begin{algorithm}\n\\caption{$K$-Mismatches with Wild Cards}\\label{alg_basic}\nLet $F_a$ be the number of occurrences of character $a$ in $T$ for all $a \\in\n\\Sigma$\\; \nLet $Cost(A)=\\sum_{i \\in A}F_{P[i]}$\\;\nLet $A$ be a set of positions in $P$ such that $|A|\\leq 2k$\nand $Cost(A) \\leq B$\\; \n$M=Mark(T, P, A)$\\;\n\\eIf{$|A| == 2k$}{\n $R=\\{\\}$\\;\n \\For{$i=1$ to $n$} {\n \\If {$M_i \\geq k$ {\\bf and} $DistNoMoreThanK(T_i, P, k)$} {\n $R = R \\cup \\{i\\}$\\;\n }\n }\n}{\n \n \\For{$a \\in \\Sigma$ s.t. $a \\neq P[i], \\forall i \\in A$} {\n $M'=Convolution(T,P,a)$\\;\n $M \\text{+=} M'$\\;\n }\n $R = \\{i \\in [1..n] | M_i \\geq m - k\\}$\\; \n}\n\\Return{$R$}\\;\n\\end{algorithm}\n\n{\\bf Algorithm and analysis:} \nFor each position $i$ in $P$ such that $P[i]=a$, we assign a cost $F_a$, the number of occurrences of $a$ in $T$. The algorithm starts by\nchoosing up to $2k$ positions from the pattern such that the total cost does not exceed a ``budget''\n$B$. The positions are chosen by a simple greedy strategy: sort all the\ncharacters by their cost $F_a$, choose the positions holding the ``cheapest''\ncharacter, then the positions holding the next cheapest character, and\nso on, until we have chosen $2k$ positions or we have exceeded the budget $B$; a sketch of this selection step is given below.\n\n{\\bf Case 1:} If we can find $2k$ positions that cost no more than $B$, then\nwe call the marking algorithm with those $2k$ positions. Any\nposition in $T$ that receives fewer than $k$ marks has more than $k$ mismatches,\nso we now focus on positions in $T$ that have at least $k$ marks.\nSince the total number of marks is at most $B$, there can be no more than\n$B\/k$ positions that have at least $k$ marks. We verify each of these\npositions to see if it has more than $k$ mismatches. Let the time for a\nsingle verification be $O(V)$.\nThen, the runtime is $O(BV\/k)$.\n\n{\\bf Case 2:} If we cannot find $2k$ positions that cost no more than $B$,\nthen we compute the marking for the positions that we did choose before we ran out\nof budget.\nThen, for each of the characters that we did not choose, we compute one\nconvolution to count how many matches it contributes to each alignment. It\nis easy to see that each of the characters not chosen for marking must have $F_a\n> B\/(2k)$.\nTherefore, the total number of such characters is no more than $n\/(B\/(2k))=2nk\/B$, and the runtime of the convolution stage is $O((nk\/B) \\cdot n \\log m)$. The runtime of the marking stage is $O(B)$, so the total runtime of this case is $O(B + (n^2k\/B) \\log m)$.
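The greedy selection step referenced above can be sketched as follows (our own illustration; the names and the '?' wildcard are ours). It returns the chosen positions and their total cost, from which the caller decides between Case 1 ($2k$ positions found within budget) and Case 2 (budget exhausted first).

\\begin{verbatim}
from collections import Counter

def choose_positions(text, pattern, k, budget, wildcard='?'):
    freq = Counter(text)                  # F_a for every character a
    chars = sorted((c for c in set(pattern) if c != wildcard),
                   key=lambda c: freq[c]) # cheapest characters first
    chosen, cost = [], 0
    for c in chars:
        for i, p in enumerate(pattern):
            if p == c:
                if len(chosen) == 2 * k or cost + freq[c] > budget:
                    return chosen, cost   # out of slots or budget
                chosen.append(i)
                cost += freq[c]
    return chosen, cost

positions, spent = choose_positions("abcabdaa", "ab?a", k=1, budget=10)
print(positions, spent)                   # prints [1, 0] 6
\\end{verbatim}

In Case 1, the at most $B\/k$ alignments with $k$ or more marks are then verified one by one; in Case 2, one convolution is computed for each character left out, of which there are at most $2nk\/B$.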
\nIf we make the runtimes of the two cases equal, we can find the optimal value of\n$B$:\n\n\\begin{align*}\nBV\/k = B+(n^2k\/B) \\log m \\Rightarrow B=nk\\sqrt{\\frac{\\log m}{V}},\n\\end{align*}\n\nwhere the lower-order term $B$ on the right-hand side has been dropped.\nThis gives an asymptotic runtime of $O(BV\/k)=O(n\\sqrt{V \\log m})$. Therefore,\nthe runtime of the algorithm depends on $V$, the time it takes to\nverify whether a given single alignment has no more than $k$ mismatches.\n \n\\subsection{Single alignment distance in $O(q+k)$ time}\n\\label{sec_alg1}\n\nWe can answer the single alignment question \nin $O(q+k)$ time, where $q$ is the number of {\\it islands} in the pattern, as\nshown in Algorithm \\ref{alg_verif1}.\nThe algorithm uses Kangaroo jumps \\cite{LV85} to go to the next mismatch within\nan island in $O(1)$ time. If there is no mismatch left in the island, the algorithm moves\nto the next island, also in $O(1)$ time.\nTherefore, the runtime is $O(q+k)$. With $V=O(q+k)$, Algorithm \\ref{alg_basic} solves pattern matching\nwith $k$ mismatches in $O(n\\sqrt{(q+k)\\log m})$ time.\n\n\n\\begin{algorithm}\n\\label{alg_verif1}\n\\caption{$DistNoMoreThanK\\_V1(T_i, P, k)$}\n$d=0$\\;\n$j=1$\\;\n\\While{$d \\leq k$ {\\bf and} $j \\leq q$}{\n $r =$ no. of mismatches between island $j$ and\n corresponding region of $T_i$ (use Kangaroo jumps)\\; \n $d \\text{+=} r$\\; \n $j \\text{+=} 1$\\; \n}\n\\Return{$d \\leq k$}\n\\end{algorithm}\n\n\\subsection{Single alignment distance in $O(k^{2\/3}q^{2\/3}\\log^{1\/3}m+k)$ time}\n\\label{sec_alg2}\n\nThe second algorithm is based on splitting the pattern into sections. We know that no more\nthan $k$ sections can contain mismatches; the remaining sections have to match\nexactly. Consider exact pattern matching with don't cares.\nWe can check where a pattern matches the text exactly by using a constant number of convolutions,\nbecause the values $C_i = \\sum_{j=0}^{m-1}(T_{i+j}-P_j)^2T_{i+j}P_j$ can be computed with a constant\nnumber of convolutions (see \\cite{CC07}). If $C_i=0$ then the pattern matches\nthe text at position $i$ (don't care positions are encoded as $P_j=0$, so they contribute nothing to the sum). \n\nUsing this result, we split the pattern into $S$\nsections, each containing $q\/S$ islands. For each of the $S$\nsections, we use a constant number of convolutions to check where the section\nmatches the text. If $P$ has no more than $k$ mismatches at a particular\nalignment, then at least $S-k$ sections have to match exactly. Each of the at\nmost $k$ sections that do not match exactly is verified using Kangaroo jumps, as seen\nearlier. One section takes at most $O(q\/S+k')$ time, where $k'$ is the number\nof mismatches discovered in that section. Over all the sections, the $k'$\nterms add up to no more than $k$; therefore, the entire alignment can be verified\nin $O(S+k+kq\/S)$ time.\n\n\nIf we make $V=O(S + k + kq\/S)$ in Algorithm \\ref{alg_basic}, then its runtime\nbecomes $O(n \\sqrt{V\\log m}) = O(n \\sqrt{(S + k + kq\/S)\\log m})$. \nThe preprocessing time for the $S$ sections is $O(Sn \\log m)$. The\noptimal value of $S$ is such that the preprocessing cost equals the main runtime:\n\n\\begin{align*}\n & n \\sqrt{(S + k + kq\/S)\\log m} = Sn \\log m \\\\\n\\Rightarrow & S + k + kq\/S = S^2 \\log m\\\\\n\\Rightarrow & S^2\/\\log m + kS\/\\log m + kq\/\\log m = S^3\\\\\n\\Rightarrow & S = O(\\sqrt[3]{kq\/\\log m})\n\\end{align*}\n\nThis makes $V=O(S + k + kq\/S)=\nO(k +\\sqrt[3]{k^2q^2\\log m})$.
This gives a\nruntime for pattern matching with $k$ mismatches of:\n\n\\begin{align*}\nO(nS\\log m + n \\sqrt{V\\log m}) = & O\\left(n\\sqrt[3]{kq\\log^2\nm} + n\\sqrt{ (k + \\sqrt[3]{k^2q^2\\log m}) \\log m }\\right)\\\\\n= & O\\left( n \\sqrt[3]{kq\\log^2 m} + n\\sqrt{k \\log m} \\right).\n\\end{align*}\n\n\\subsection{Combined result}\nIf $q \\leq k^2$, we use the algorithm of section\n\\ref{sec_alg1}, which runs in $O(n\\sqrt{(q+k)\\log m})$ time. Otherwise, if\n$q > k^2$, we use the algorithm of section \\ref{sec_alg2}, which runs in\n$O(n\\sqrt[3]{qk\\log^2 m} + n\\sqrt{k \\log m})$ time.\nThus we have the following:\n\n\\begin{theorem}\nPattern matching with $k$ mismatches, with don't care\nsymbols in the pattern, can be solved in\n$O\\left(n\\sqrt{k \\log m} + n\\min\\{\\sqrt{q\\log m}, \\sqrt[3]{qk\\log^2\nm}\\}\\right)$ time.\n\\end{theorem}\n\n\\section{Conclusions}\nIn this paper we have given efficient algorithms for pattern matching with $k$ mismatches and don't cares in the pattern. Specifically,\nwe have presented an algorithm that runs in\n$O(n\\sqrt{k\\log m}+n\\min\\{\\sqrt[3]{qk\\log^2 m},\\sqrt{q\\log m}\\})$ time, where $q$ is the number of islands. If the number of islands $q$ is $o(m)$, this algorithm is asymptotically\nfaster than the previous best algorithm for pattern matching with $k$ mismatches\nwith don't cares in the pattern.\n\n\\section{Acknowledgments}\nThis work has been supported in part by the following grants: NSF 1447711 and NIH R01-LM010101.\n\n\n\n\\section*{Bibliography}\n\\bibliographystyle{elsarticle-num} \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}