diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzgvcf" "b/data_all_eng_slimpj/shuffled/split2/finalzzgvcf" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzgvcf" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nThe laws of Brownian motion, formulated first by Einstein more than\na century ago \\cite{Einstein}, have now found so many applications\nand generalizations in all quantitative sciences \\cite{Haw}. Many\nfractal structures in the nature can be derived from the sample\npaths of Brownian motion characterized by some appropriate fractal\ndimensions \\cite{Mandelbrot}.\n\n\nA $d$-dimensional Brownian motion is known to be recurrent, i.e.,\nthe particle returns to the origin, for $d\\leq$2 and escapes to\ninfinity for $d>$2. It is also known that the fractal (Hausdorff)\ndimension of the graph of a Brownian motion is equal to 3\/2 for\n$d=$1, and 2 for $d\\geq$2.\n\n\nThe scaling limit of interfaces in various critical 2$d$ lattice\nmodels are proven or conjectured to be described by the family of\nconformally invariant random curves i.e., Schramm-Loewner evolution\n(or SLE$_\\kappa$) \\cite{schramm} which is driven by a 1$d$ Brownian\nmotion of diffusivity $\\kappa$ \\cite{SLE}.\n\n\n\nOne of the most important invariance properties of planar Brownian\nmotion is conformal invariance. Although the scaling limit of 2$d$\nrandom walk, i.e., 2$d$ Brownian motion, because of self-crossing\nitself does not fall in the SLE category, variations of Brownian\nmotion are described by SLE. Loop erased random walk (LERW) where\nloops are removed along the way, is one of the examples which has\nbeen studied by Schramm and shown that can be described by\nSLE$_2$.\\\\The external perimeter of 2$d$ random walk is also a\nnon-intersecting fractal curve which can be defined by SLE.\nVerifying an earlier conjecture by Mandelbrot \\cite{Mandelbrot}, it\nhas been proven using SLE techniques \\cite{Lawler} that the fractal\ndimension of the Brownian perimeter is $d_f=$4\/3, i.e, the same as\nthe fractal dimension of self-avoiding random walk (SAW) and the\nexternal perimeter of the percolation hull.\n\nIn this paper, we investigate the statistical and fractal properties\nof a 3$d$ random walker which is attracted by a plane. We believe\nthat this study can provide useful intuitive extensions for many\nrelated physical phenomena including the problems with a discrete\ntime lattice walk \\cite{appl0, appl1}, relaxation phenomena\n\\cite{relax}, exciton trapping \\cite{trap} and diffusion-limited\nreactions \\cite{appl1, react}.\n\n\n\\section{The model}\n\nWe consider a random walker moving along the bonds of a cubic\nlattice with the \\emph{xy}-plane as an attractive plane. The\n'walker' source is considered to be the origin of the coordinate\nsystem. At each lattice point with $z\\neq0$, there are six\npossibilities for the random walker to select a link and move along.\nIn our model, the random walker prefers walking on and near the\nattractive plane, and thus the probability that the random walker\nchooses the link which approximates it to the attractive plane is\nset to be $\\alpha p$, and for remaining five links is considered to\nbe $p$, such that $\\alpha>1$ (and will be called \\emph{the strength\nof attraction}) and $p=\\frac{1}{\\alpha+5}$. 
For each lattice point on the attractive plane, $z=0$, the probability that each of the four links lying in the plane is chosen is set to $\\alpha p'$, while for each of the two links perpendicular to the plane it is taken to be $p'$, where $p'=\\frac{1}{4\\alpha+2}$ follows similarly from normalization, $4\\alpha p'+2p'=1$. The single parameter $\\alpha$ in our model controls the strength of attraction. Note that in the limiting case $\\alpha\\rightarrow\\infty$ our model reduces to the pure 2$d$ random walk on the plane, while for $\\alpha=1$ the pure 3$d$ random walk is recovered.\\\\Thus there are four possible probabilities: $\\alpha p'$ for links that lie in the attractive plane, $p'$ for links from the attractive plane to either of the neighboring planes, $p$ for links in all of the neighboring planes or leading from them into the bulk, and $\\alpha p$ for links from the neighboring planes to the attractive plane.\\\\By detailed balance, in equilibrium at inverse temperature $\\beta$, the ratio $\\alpha p \/ p'$ of the probabilities onto and off the attractive plane defines an attraction energy $\\beta\\epsilon = \\ln(\\alpha p\/p') = \\ln[2\\alpha(1+2\\alpha)\/(\\alpha +5)]$.\n\n\n\\section{Fractal dimension of the set of all visited sites and its level set}\n\n\\begin{figure}[b]\\begin{center}\n\\includegraphics[scale=0.39]{Fig0.eps}\n\\narrowtext\\caption{\\label{Fig0}(Color online) The average number of total lattice sites $M^{(3d)}$ visited (at least) once by the attracted random walker (ARW) (main panel), and the number $M^{(2d)}$ of those on the attractive plane (inset), as a function of their average radius of gyration, for two different values of the strength of attraction $\\alpha=$1.3 ($\\blacksquare$) and $\\alpha=$10 ($\\blacktriangle$). The solid lines show the best fit to our data. The error bars are almost the same size as the symbols.}\\end{center}\n\\end{figure}\n\nRandom walks exhibit \\emph{generic scale invariance}, meaning that such systems can display self-similarity and power laws without special tuning of parameters. This is why we expect our model to exhibit rich fractal properties for all values of $\\alpha$.\\\\Let us first look at the fractal spatial structure of the 3$d$ \\emph{attracted} random walk (ARW) and of its intersection with the attractive plane. In order to estimate the fractal dimension $d_f$ of the set of points visited (at least) once by the random walker, we examine the scaling relation between the average number of such points $M^{(3d)}$ and their corresponding radius of gyration $R_g$, i.e., $M^{(3d)}\\sim R_g^{d_f}$. Each ensemble average for $M^{(3d)}$ (and also for $M^{(2d)}$ in the following) and $R_g$ was taken over $5\\times 10^4$ independent samples for a fixed number of random walk steps $N$. The measurements were done for $10^3\\leq N\\leq 10^5$ with the step interval $\\delta N=2\\times 10^3$. We have also computed the fractal dimension of the total number of sites on the attractive plane (i.e., $M^{(2d)}$) visited by the random walker (in this case the corresponding radius of gyration is computed for the set of distinct visited sites on the attractive plane only $-$ see Fig. \\ref{Fig0}).\\\\ We find that the fractal dimensions have a remarkable continuous dependence on the parameter $\\alpha$. The results for these fractal dimensions as a function of the strength of attraction $\\alpha$ are illustrated in Fig. \\ref{Fig1}. 
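In practice, each $d_f$ is read off as the slope of a least-squares fit in log-log space, which is also how the quoted error bars arise. As an illustration (not the authors' original code), a minimal Python\/NumPy sketch of such an estimate, assuming arrays \\texttt{M} and \\texttt{Rg} of ensemble-averaged masses and radii of gyration, reads:\n\\begin{verbatim}\nimport numpy as np\n\ndef fractal_dimension(M, Rg):\n    # Fit log M = d_f log R_g + const;\n    # the slope of the fit is d_f.\n    logM, logR = np.log(M), np.log(Rg)\n    d_f, const = np.polyfit(logR, logM, 1)\n    return d_f\n\\end{verbatim}\n\n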
As can be seen from Fig. \\ref{Fig1}, for large values of $\\alpha$ the two fractal dimensions converge to the same value $\\sim1.83$, since the problem reduces to the 2$d$ random walk on the attractive plane (this is comparable with the fractal dimension of the set of distinct sites visited by a 2$d$ RW on a square lattice, deduced from the results reported in \\cite{Lee}).\n\\\\All error bars in this paper are estimated using standard least-squares analysis, and are almost of the same size as the symbols used in the figures.\n\\\\For an ideal linearly self-similar fractal of dimension $d_f$, one expects the fractal dimension of its planar intersection to be $d'_f=d_f-1$ \\cite{Mandelbrot}. But this is apparently not the case here for $\\alpha\\neq 1$, since in our model the attractive plane disturbs the homogeneity of the probability distribution in the \\emph{z}-direction. Only for $\\alpha=1$, where $d_f=2$ \\footnote{The random walk on a simple cubic lattice is a \\emph{transient} process, since it has a finite escape probability $\\approx$ 0.66. Therefore, the number of distinct sites visited by the random walker is almost the same as the number of steps, or equivalently the trajectory length, and thus both are expected to have the same fractal dimension 2.}, do we find $d'_f=1=d_f-1$.\n\\begin{figure}[t]\\begin{center}\n\\includegraphics[scale=0.39]{Fig1.eps}\n\\narrowtext\\caption{\\label{Fig1}(Color online) The fractal dimension of the set of all lattice points visited (at least) once by the attracted random walker (ARW) ($\\blacksquare$), and of the set of visited points on the attractive plane ($\\square$), as a function of the strength of attraction $\\alpha$. The error bars are almost the same size as the symbols.}\\end{center}\n\\end{figure}\n\n\\section{Cluster size distribution on the attractive plane}\n\nHenceforth we investigate the fractal and scaling properties of the set of all distinct sites visited by the 3$d$ ARW on the attractive plane only. Each of these sites is visited at least once by the 3$d$ ARW and is marked upon the first visit.\\\\In this section, rather than analyzing the properties of the whole set, after marking all visited sites on the plane we identify each cluster as a set of nearest-neighbor visited sites on the lattice, assigned a specific color. Two typical examples of such a clustering are shown in Fig. \\ref{Fig2} for two different values of the strength of attraction, $\\alpha=2$ and $\\alpha=10$. As Fig. \\ref{Fig2} shows, for lower values of $\\alpha$ there exist many isolated clusters of different scales which are accessed by the ARW only via the third dimension. By increasing the strength of the attraction, the number of isolated clusters decreases until, for $\\alpha\\rightarrow\\infty$, only one large cluster remains on the attractive plane.\n\n\n\\begin{figure}[h]\\begin{center}\n\\includegraphics[scale=0.28]{Fig2a.eps}\\hspace{0.5cm}\\includegraphics[scale=0.23]{Fig2b.eps}\n\\narrowtext\\caption{\\label{Fig2}(Color online) Typical samples of clusters of visited sites on the attractive plane for a 3$d$ ARW of $N=10^6$ steps, shown in different colors, for $\\alpha=2$ (left) and $\\alpha=10$ (right). }\\end{center}\n\\end{figure}\n\nTo examine the possible scale invariance of the cluster ensemble for rather small values of $\\alpha$, we compute the cluster size distribution and check whether it follows a power-law scaling. 
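As an illustration of this measurement (not the authors' original code), the cluster decomposition and the size histogram can be obtained with standard connected-component labeling; a minimal Python sketch, assuming a binary array \\texttt{visited} marking the visited sites on the plane and nearest-neighbor connectivity, reads:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy import ndimage\n\ndef cluster_size_distribution(visited):\n    # Label nearest-neighbor clusters of visited sites.\n    labels, num_clusters = ndimage.label(visited)\n    # Cluster sizes, dropping the background label 0.\n    sizes = np.bincount(labels.ravel())[1:]\n    # n_s: number of clusters of size s divided by\n    # the total number of clusters.\n    s, counts = np.unique(sizes, return_counts=True)\n    return s, counts \/ num_clusters\n\\end{verbatim}\n\n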
In critical statistical physics, the scaling properties of fractal clusters can be described by percolation theory \\cite{SA}, where the asymptotic behavior of the cluster distribution $n_s(\\lambda)$ near the critical point $\\lambda\\rightarrow \\lambda_c$ has the following general form \\be\\label{Eq1}n_s(\\lambda)= s^{-\\tau}F[(\\lambda-\\lambda_c)s^{\\sigma}],\\ee where $\\sigma$ is a critical exponent, and the scaling function $F(u)$ approaches a constant value for $|u|\\ll 1$ and decays rapidly for $|u|\\gg 1$.\n\n\\begin{figure}[t]\\begin{center}\n\\includegraphics[scale=0.4]{Fig3.eps}\n\\narrowtext\\caption{\\label{Fig3}(Color online) Cluster size distribution exponent $\\tau$, defined in Eq. (\\ref{Eq1}), as a function of the strength of attraction $\\alpha$. Inset: number density $n_s$ of clusters of visited lattice sites of size $s$ on the attractive plane, for three different values $\\alpha=1.2$, $4$ and $8$. The solid lines show the power-law behavior in the scaling region. The error bars are almost the same size as the symbols.}\\end{center}\n\\end{figure}\n\nWe undertook simulations for several values of $\\alpha$ to measure the distribution of the sizes of the clusters of lattice sites visited by the 3$d$ ARW on the attractive plane (this is the probability that a visited lattice site on the attractive plane belongs to a cluster of size $s$). We gathered ensembles of $5\\times10^4$ (for smaller $\\alpha$) and $1.5\\times10^6$ (for larger values of $\\alpha$) independent samples of fractal patterns with marked visited sites on the attractive plane. The number of random walk steps was chosen to be $N=4\\times10^6$ in all simulations. The number density $n_s$ of clusters of size $s$ has then been computed for each specific value of $\\alpha$ by dividing the number of clusters of size $s$ by the total number of clusters.\\\\We find that for rather small and intermediate cluster sizes, the distribution shows a power-law behavior compatible with the scaling relation in Eq. (\\ref{Eq1}). As can be seen in the inset of Fig. \\ref{Fig3}, the curves for different values of $\\alpha$ exhibit a sharp drop-off, indicating that the ensembles indeed contain only small clusters. With increasing $\\alpha$, the extent of the scaling region decreases and a peak appears, signaling the formation of large-scale clusters.\\\\Our estimate of the cluster size distribution exponent $\\tau$ in the scaling region as a function of $\\alpha$ is also shown in Fig. \\ref{Fig3}. One observes that the exponent $\\tau$ has a significant dependence on the strength of attraction $\\alpha$.\n\n\n\\begin{figure}[t]\\begin{center}\n\\includegraphics[scale=0.39]{Fig4.eps}\n\\narrowtext\\caption{\\label{Fig4}(Color online) The fractal dimension of the perimeter of a cluster of visited sites on the attractive plane by the 3$d$ ARW, as a function of the strength of attraction $\\alpha$. Inset: the average length of the perimeter $l$ of a cluster versus its average radius of gyration $r_g$, for two different strengths of attraction, $\\alpha=1.2$ (upper graph) and $\\alpha= 16$ (lower graph). The solid lines show the power-law behavior in the scaling region. The error bars are almost the same size as the symbols. 
}\\end{center}\n\\end{figure}\n\n\\section{Fractal dimension of the cluster boundaries on the attractive plane}\n\nThe remainder of this paper is dedicated to investigating the fractal properties of the boundaries of the visited-site clusters on the attractive plane.\\\\Given a configuration of sites visited by the 3$d$ ARW on the attractive plane, the first step is to identify the different clusters as outlined above. After that, the boundary curve of each isolated cluster has to be identified. Although the definition of interfaces and cluster boundaries on a square lattice can contain some ambiguities, a well-defined \\emph{tie-breaking} rule was introduced in \\cite{Saberi} that generates non-intersecting cluster boundaries on a square lattice without any ambiguity.\\\\To define the hull of each identified cluster according to the algorithm of \\cite{Saberi}, a walker (which, of course, has to be distinguished from the 3$d$ ARW) moves clockwise along the edges of the dual lattice (which is also a square lattice) around the cluster, starting from a given boundary edge of the cluster. The direction at each step is always chosen such that walking on the selected edge leaves a visited site on the right and an empty plaquette on the left of the walker. If there are two possible ways of proceeding, the preferred direction is that to the right of the walker. The directions \\emph{right} and \\emph{left} are defined locally according to the orientation of the walker.\\\\According to this procedure, we have generated an ensemble of cluster boundary loops for several different strengths of attraction in the range $1.1\\leq\\alpha\\leq16$. Using the scaling relation $l\\sim r_g^{d_f}$ between the average length of the perimeter of the loops $l$ and their average radius of gyration $r_g$, we computed the fractal dimension $d_f$ of the cluster boundaries as a function of $\\alpha$. The results are shown in Fig. \\ref{Fig4}.\n\nThe fractal dimension again shows a significant dependence on the strength of attraction $\\alpha$. In the limit $\\alpha\\rightarrow\\infty$, $d_f$ converges to the value $\\frac{4}{3}=1.3\\bar{3}$, which is the fractal dimension of the SAW, i.e., of the outer perimeter of planar Brownian motion.\n\n\\section{conclusions}\n\nIn this paper, we have studied the scaling properties and the fractal structure of the lattice sites visited by a Brownian particle in 3$d$ which is attracted by a plane with strength $\\alpha$. The fractal dimensions of the set of sites visited by the 3$d$ random walker, both in the full three dimensions and on the attractive plane, were computed; they converge to the same value $\\sim1.83$ for large $\\alpha$. We also found that the size distribution of the clusters of sites visited by the particle on the attractive plane has a scaling form characterized by an exponent that depends significantly on the strength of attraction.\\\\The fractal dimension of the loops surrounding the clusters on the plane has been computed as a function of $\\alpha$. This also converges asymptotically to the value expected for the SAW, i.e., for the external perimeter of planar Brownian motion.\n\nThese results, however, still call for a theoretical framework and mathematical proof. 
Another interesting feature which could be investigated is the possible conformal invariance of the cluster boundaries on the attractive plane, which can be treated using SLE techniques (such a study has so far been done only for the limiting case $\\alpha\\rightarrow\\infty$, where the problem reduces to a 2$d$ random walk on the attractive plane whose boundary is described by SLE$_{8\/3}$). The fractal dimension of an SLE$_\\kappa$ curve is given by $d_f=1+\\kappa\/8$. If the cluster boundaries on the attractive plane in our model are conformally invariant, they would be characterized by a diffusivity $\\kappa$ which depends on the strength of attraction.\n\n\n\\textbf{Acknowledgement.} I would like to thank H. Dashti-Naserabadi for his help with programming. This work is financially supported by the National Elite Foundation of Iran, and INSF grant No. 87041917.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\t\\renewcommand{\\thefootnote}{\\fnsymbol{footnote}}\n\t\\footnotetext[2]{Correspondence to: Adri\u00e0 Recasens (arecasens@google.com)}\n\t\\renewcommand*{\\thefootnote}{\\arabic{footnote}}\n\n\\label{sec:intro}\nOur perception of the world is inherently multimodal: humans and other animals effortlessly integrate many modalities to build their view of the world \\cite{ghazanfar2006neocortex, amedi2017task}. Although multimodal integration can help construct a richer perspective on reality \\cite{bavelier2002cross, shams2008benefits}, humans can easily process information and perform tasks even when only a single modality (e.g., sound, vision, or touch) is present \\cite{shimojo2001sensory, lacey2014visuo, bola2017task}. \nHowever, this flexibility is hard to find in computational models of perception. Architectures for multimodal perception have typically been divided into early-fusion, mid-fusion and late-fusion designs, but most of them require all modalities to be present in order to operate. With human flexibility as an inspiration, in this paper we introduce {\\em Zorro}, a multimodal Transformer architecture which is able to operate in both single-modality and multi-modality settings. This property improves the overall performance of the model while opening the door to off-the-shelf self-supervised pre-training.\n\nOur key architectural innovation in Zorro\\xspace is to create separate unimodal and multimodal (fusion) representation streams within a single standard Transformer backbone. We achieve this not by re-engineering the architecture, but by applying appropriate masks in all attention operations, resulting in some outputs that capture only individual modalities and some outputs that capture multimodal information. This has the direct benefit that the model can be applied when a subset of modalities is absent, e.g.\\ a model trained on audio and video can be evaluated on audio alone.\n\n\nWhile most of the emphasis of novel developments in the supervised setting is put on the architecture, the unimodal outputs can be further exploited by introducing additional self-supervised training schemes. In contrast to recent multimodal attention-based models~\\cite{mbt,perceiver} that entangle both modalities throughout the network, Zorro\\xspace supports self-supervised contrastive training in a single network without representation collapse, thanks to its unimodal outputs (see Figure~\\ref{fig:teaser}). In this work, we explore this possibility by pre-training our model with an audio-visual contrastive loss~\\cite{alayrac2020self}. 
Unlike previous work, we can perform this pre-training without the need for separate backbones per modality. \n\n\n\\begin{figure*}[t!]\n\t\\centering\n\t\\includegraphics[width=\\textwidth]{figures\/pull_figure.pdf}\n\t\\caption{\\small \n\t\tIn this paper we introduce the Zorro\\xspace multimodal architecture, which enables both self-supervised contrastive learning and supervised learning. When used for self-supervision, the single-modality outputs are used together with a standard cross-modal self-supervised loss. When used for supervised learning, all outputs can be used for the final classification.\n\t} \n\t\\vspace*{-0.3cm}\n\t\\label{fig:teaser}\n\\end{figure*}\n\n\n\n\nThis paper presents four contributions: \\textbf{(a)} we introduce Zorro\\xspace, a novel set of Transformer-based multimodal architectures which enable both supervised and self-supervised training and, once trained, can be used with multimodal or unimodal inputs; \\textbf{(b)} we introduce three Zorro\\xspace-based architectures using state-of-the-art models such as ViT, Swin and HiP; \\textbf{(c)} we show that Zorro\\xspace can be pre-trained on a large-scale audio-visual dataset in a self-supervised manner, and can also be pre-trained on unimodal datasets; and \\textbf{(d)} we benchmark our resulting models on AudioSet, VGGSound, Kinetics-400 and ESC-50. \nThe model achieves state-of-the-art performance compared with previous self-supervised learning techniques on the most relevant benchmarks, while also achieving performance comparable to previous work when trained with supervised labels. \n\n\n\n\n\n\\section{Related Work}\n\\label{sec:related}\n\n\n\\noindent \\textbf{Multimodal perception}: Multimodal perception is challenging, as data from the various modalities can have different topologies, temporal frequencies and relative importances that depend on each task~\\cite{baltruvsaitis2018multimodal}. \nWith the emergence of convolutional neural networks, numerous works fused activations from intermediate tensors~\\cite{wang2020makes,fayek2020large,arandjelovic2018objects,simonyan2014,Feichtenhofer_2016_CVPR,carreira2017quovadis,xiao2020audiovisual}, but this required considerable engineering, as different modalities come in differently shaped feature grids and there are many different ways to combine them. \n\n\\vspace{2mm} \\noindent\\textbf{Self-supervised audio-visual learning}: Various methods have been used to employ cross-modality similarity as a self-supervisory signal~\\cite{arandjelovic17look,arandjelovic2018objects,Senocak_2018_CVPR,owens2018audio,korbar2018cooperative,alwassel2019self,mandela2020datatrans,morgado20avid}. Most approaches rely on single-modality backbones whose representations are used in the self-supervised loss~\\cite{alwassel2019self,mandela2020datatrans,alayrac2020self,recasens2021broaden}. These techniques process different modalities with different sets of weights and restrict the ability to reason across modalities. Less common are approaches which learn self-supervised models with multiple modalities at once. One recent work in this direction is \\cite{shvetsova2021everything}, which learns representations using audio, video and text. However, to avoid the collapse of the self-supervised loss, they feed the modalities two at a time, increasing the number of necessary forward passes. Instead, Zorro\\xspace masking can produce unimodal outputs without running the model multiple times. 
\n\n\\vspace{2mm} \\noindent\\textbf{Transformer architectures}: Inspired by ViT~\\cite{vit}, follow-up work proposed single-modality processing for video~\\cite{vivit} and audio~\\cite{gong2021ast} using patch-based encodings. Transformer-based methods have also been proposed to tackle audio-visual classification. The closest to our method is MBT~\\cite{mbt}, which builds a multimodal architecture out of single-modality Transformers for video~\\cite{vit,vivit} and audio~\\cite{gong2021ast}. MBT merges modalities by creating an attention bottleneck which restricts communication between the audio and visual heads. Our method also regulates cross-modality communication, but by masking the latent connections we are able to obtain modality-specific heads, whereas in MBT the representation is entirely multimodal. Another relevant work is VATT~\\cite{VATT}, a Transformer-based architecture to model video, audio and text with a single backbone. In contrast to our work, in VATT each modality is independently processed by the transformer. Finally, the Perceiver architecture~\\cite{perceiver} scales to a large number of inputs by cross-attending to a set of latent queries. In this work, we use the follow-up Hierarchical Perceiver~\\cite{carreira2022hierarchical}, which splits inputs and outputs into groups to improve model efficiency. \n\n\\vspace{2mm} \\noindent\\textbf{Masking attention in Transformers}: The original transformer architecture~\\cite{vaswani2017attention} used attention masking for language modelling. After the success of image-based architectures, alternatives have been proposed that use attention masking to alleviate the computational requirements of the architecture. Swin~\\cite{liu2021swin} proposed the use of local windows, restricting the self-attention layers to neighbouring pixels only. Furthermore, Mask2Former~\\cite{cheng2022masked} also restricted cross-attention to local regions, enabling the use of transformers for high-dimensional outputs (e.g., segmentation). \n\n\n\n\n\n\n\n\\label{sec:method}\n\t\\begin{figure*}[t!]\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{figures\/model_figure.pdf}\n\t\t\\caption{\\textbf{The Zorro\\xspace-ViT model architecture}: The inputs to our model are video frames and audio spectrograms. Each of these inputs is patched using a 2D convolution and projected to input dimension $D$. Both audio and video input tokens are concatenated with a set of learned fusion vectors, and a position embedding is added. Next, we process these inputs through $L$ of Zorro\\xspace's self-attention layers, where the Zorro\\xspace masking is applied. Specifically, our masking strategy blocks information from flowing into the unimodal hidden representations, while still allowing the fusion representation to access all modalities. By doing this, we ensure that the video and audio representations depend only on the video and audio inputs, respectively. To produce the outputs, we learn a set of queries that cross-attend (also using masked attention) to the unimodal and multi-modal representations. \n\t\t}\n\t\t\\label{fig:model_figure}\n\t\t\\vspace*{-0.05cm}\n\t\\end{figure*}\n\t\n\t\n\\section{Zorro\\xspace: the masked multimodal Transformer}\n\\label{sec:model}\nIn this paper, we introduce Zorro\\xspace, a multimodal architecture which enables both supervised and self-supervised training. 
In this section, we unpack how Zorro\\xspace accomplishes this, using modality-aware masking and repurposing the original transformer components to allow contrastive learning between modalities. The key innovation of the architecture is introducing separate latent allocations for the different modalities, leading to a final representation which is partially unimodal (part of the representation sees only a single modality) and partially multimodal (part of the representation can attend to all modalities). First, we describe Zorro\\xspace applied to the ViT architecture. Second, we extend Zorro\\xspace to two other state-of-the-art transformer architectures, Swin and HiP. Finally, we end this section by describing how to use Zorro\\xspace for self-supervised contrastive learning. \n\n\\subsection{Architecture}\n\\label{section:zorro}\n\\vspace{2mm} \\noindent \\textbf{Zorro\\xspace-ViT overview.}\nFigure~\\ref{fig:model_figure} depicts the Zorro\\xspace architecture, which consists of three main blocks. First, Zorro\\xspace processes the data in the form of patches (similar to ViT~\\cite{vit}). In this stage, data from each modality is first converted into a 2D array of representations. This can be done either by (i) dividing the input tensor into sequential groups (either points or patches) and applying a linear projection, or (ii) applying domain-specific processing such as 1D\/2D\/3D convolutions and flattening. We use a 2D convolution to extract $16 \\times 16$ patches and project them to the input dimension $D$. Next, position embeddings are added to the projected vectors so that the model is able to localise and distinguish each embedded patch.\nLearned multimodal fusion vectors are then introduced.\nSecond, the resulting tokens are concatenated to form a single set and are then processed by $L$ layers of a Transformer~\\cite{vaswani2017attention} with Zorro\\xspace masking. \nFinally, to produce the final output we learn a set of queries that cross-attend to the output of the last self-attention layer, similar to PerceiverIO~\\cite{perceiverIO}.\nWe utilise the standard cross-attention operation~\\cite{perceiverIO}, and produce 4 different outputs: an audio output, a video output, a fusion output (which only sees the multi-modal part of the representation) and a global output that sees the whole representation.\nThese three steps are described in more detail next.\n\n\n\\vspace{2mm} \\noindent \\textbf{Input pre-processing.} Let $x=(x_v,x_a)$ be a video sample consisting of frames $x_v \\in \\mathbb{R}^{N_f \\times H \\times W \\times 3}$ and audio spectrogram $x_a \\in \\mathbb{R}^{T \\times N_s}$, where $N_f$ is the number of frames, $H$ is the height of the frame, $W$ is the width of the frame, $T$ is the number of temporal steps in the spectrogram and $N_s$ is the number of spectrogram bins. To downscale the input, we use per-modality 2D convolutions $f_v^{\\textrm{patch}}$ and $f_a^{\\textrm{patch}}$, which yield $u = (u_v,u_a) = (f_v^{\\textrm{patch}}(x_v),f_a^{\\textrm{patch}}(x_a))$. The arrays $(u_v,u_a)$ are then flattened and absolute learned position encodings are added. Finally, we learn a set of $n_{\\textrm{fusion}}$ latent vectors which are concatenated to the audio and video input tokens. \n\n\\vspace{2mm} \\noindent \\textbf{Masked attention.} The key contribution of this paper is splitting the Transformer representation into specialised groups. Using masked attention, we force part of the representation to attend only to itself, while other parts can attend to the whole representation. 
The main goal of this approach is to split the representation into three parts: a part which only attends to video tokens, a part which only attends to audio tokens, and the remaining vectors, which can attend to the whole representation. \n\nWe mask two parts of the model: the self-attention~\\cite{vaswani2017attention} and the decoding cross-attention~\\cite{perceiverIO}. \nBoth parts consist of the same underlying operation, which takes keys $k$, values $v$ and queries $q$ to produce the final output $o$. To this end, \nwe introduce a binary masking tensor $m$ that specifies which vectors are connected to each other.\nEntries of the masking matrix are $m_{ij}=1$ if latent $i$ may attend to latent $j$, i.e., if information can flow from latent $j$ to latent $i$. By setting $m_{ij}=0$, we indicate to the model that this connection should be omitted. This mask is applied to the standard attention output operation $o_{i} = \\sum_{j} a_{ij} \\cdot v_j $, which becomes $o_{i} = \\sum_{j} \\hat{a}_{ij} \\cdot v_j$ where:\n\n\n\\begin{equation}\n    \\label{eqn:masked}\n    \\hat{a}_{ij} = \\frac{m_{ij} \\exp ({\\frac{q_i^\\top k_j}{\\sqrt{D}}})}{\\sum\\limits_{\\{j', \\ m_{i{j'}} = 1\\}} \\exp ( { \\frac{q_i^\\top k_{j'}}{\\sqrt{D}}} ) }.\n\\end{equation}\nIn contrast to MBT~\\cite{mbt}, our modality-specific representations do not have access to the global representation, which prevents cross-modal information flow. Specifically, we set $m_{ij}=1$ if $i$ is part of the fusion representation; otherwise, we set $m_{ij}=1$ only if $i$ and $j$ are vectors of the same modality. \nBy doing this, we explicitly prevent information from the fusion stream from leaking into the unimodal representations. \nThis is the key to preserving pure streams that correspond to single modalities. \n\n\\vspace{2mm} \\noindent \\textbf{Output space.}\nIn the ViT architecture, a learnable CLS token is used to produce the output embedding vector. Instead, inspired by PerceiverIO~\\cite{perceiverIO}, we learn a set of decoding vectors which are used to query the output of the Transformer and produce the final output. Each decoding vector cross-attends to a subset of tokens to produce a final output vector. This decoding strategy can be used to produce as many outputs as desired, opening up the possibility of dense tasks such as segmentation or flow estimation. \n\nAs we rely on having the Transformer representation split into specialised groups, we need to also apply Zorro\\xspace's masking to the output cross-attention. Specifically, we found it beneficial to define four outputs for our model: the audio-specific output $o_a$, which only contains information coming from the audio input; the video-specific output $o_v$, which only includes information from the video modality; the fusion-specific output $o_f$, which is computed by attending only to the fusion stream; and, finally, a global output $o_g$, which attends to the whole representation. Although $o_g$ and $o_f$ contain similar information, we found it useful to keep two separate heads.\n\n\n\\subsection{Extending Zorro\\xspace to other architectures}\nIn this section, we propose variants of Zorro\\xspace for two state-of-the-art attention-based architectures, Swin and HiP. In contrast to the ViT implementation, when building Zorro\\xspace-Swin and Zorro\\xspace-HiP we use the architecture-specific building block for each modality and for the fusion stream, while joining the modalities with a cross-attention operation. 
This is required as the ViT masking is not directly applicable to Swin and HiP, but the overall idea remains the same. \n\n\\noindent \\textbf{Zorro\\xspace-Swin}: \nSwin~\\cite{liu2021swin} is a ViT-inspired transformer architecture which has shown improved efficiency and performance. Its main innovation over the original ViT architecture is to apply the self-attention operations to nearby tokens instead of all tokens in the input image. This reduces the computational requirements while allowing the model to perform bottom-up inference. In order to build Zorro\\xspace-Swin, our main modification to the original architecture is to process the individual modalities using Swin transformers. At the end of each Swin block, we update the fusion representation by cross-attending to both the unimodal and the multimodal representations. To process the fusion representation, we use the same self-attention as in Zorro\\xspace-ViT. Given this design, we are free to use different architectures to process each modality. We use the original 2D Swin~\\cite{liu2021swin} to process the audio spectrograms, and our adaptation of the Swin architecture for video. As in Zorro\\xspace-ViT, no multimodal information flows into the unimodal streams. A detailed description of Zorro\\xspace-Swin can be found in Section~\\ref{arch:details} in the Appendix.\n\n\\noindent \\textbf{Zorro\\xspace-HiP}: The Hierarchical Perceiver~\\cite{carreira2022hierarchical} extends the previously introduced Perceiver models~\\cite{perceiver, perceiverIO} by splitting the inputs into groups and operating only within those groups. Through the hierarchical architecture, those groups fuse together in order to aggregate information and reason globally about the input. In our implementation of HiP, instead of directly using the pixels and the audio signal as input, we create patches similarly to the ViT\/Swin implementations. In order to create Zorro\\xspace-HiP, we use HiP building blocks for each modality. Specifically, these blocks group the inputs into smaller sets, cross-attend to them using learned features and finally apply self-attention layers to the outputs of the cross-attention operation (see~\\cite{carreira2022hierarchical} for more details). In order to update the fusion representation, we learn a set of queries which cross-attend to both the unimodal and the multimodal representations at each layer. More details can be found in Section~\\ref{arch:details} in the Appendix.\n\n\n\\subsection{Contrastive learning with Zorro\\xspace}\n\\label{sec:selfsup}\n\nContrastive audio-visual methods learn representations by aligning audio and video in a common embedding space. As opposed to unimodal approaches, instead of producing multiple views of the data, they use the different modalities as views. \nOne important requirement is for the two backbones not to share information. If information is shared across modalities, the self-supervised training can easily collapse or converge to a trivial solution.\n\nModels for multimodal perception typically produce a single output for the multiple inputs. This is sufficient for supervised applications, but prevents the use of these audio-visual contrastive techniques. We design Zorro\\xspace to produce unimodal as well as multimodal outputs precisely in order to enable the use of self-supervised contrastive losses. 
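For concreteness, this separation is entirely determined by the attention mask of Section~\\ref{section:zorro}; a minimal illustrative sketch (not the exact production implementation), written in Python\/NumPy with hypothetical token counts \\texttt{nv}, \\texttt{na} and \\texttt{nf} for the video, audio and fusion streams, is:\n\\begin{verbatim}\nimport numpy as np\n\ndef zorro_mask(nv, na, nf):\n    # Latents are ordered as [video | audio | fusion].\n    # m[i, j] = True means token i may attend to token j.\n    n = nv + na + nf\n    m = np.zeros((n, n), dtype=bool)\n    m[:nv, :nv] = True                # video attends to video only\n    m[nv:nv + na, nv:nv + na] = True  # audio attends to audio only\n    m[nv + na:, :] = True             # fusion attends to everything\n    return m\n\ndef masked_attention(q, k, v, m):\n    # Masked softmax attention over the last axis.\n    logits = q @ k.T \/ np.sqrt(q.shape[-1])\n    logits = np.where(m, logits, -np.inf)\n    w = np.exp(logits - logits.max(axis=-1, keepdims=True))\n    w = w \/ w.sum(axis=-1, keepdims=True)\n    return w @ v\n\\end{verbatim}\nBecause the video and audio rows of the mask never reach the fusion (or each other's) columns, the corresponding outputs remain strictly unimodal, which is what makes the contrastive training below possible.\n\n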
\n\n\\vspace{2mm} \\noindent \\textbf{Noise Contrastive Estimation}: For training with the standard noise-contrastive estimation loss, we follow the implementation of the audio-visual loss \nfrom~\\cite{alayrac2020self}. Given the audio output $o_a$ and the video output $o_v$, we apply final linear projections (one per modality) $g_a$ and $g_v$ to yield the final embedding vectors: $z_a = g_a(o_a)$ and $z_v = g_v(o_v)$. We compute the similarity between $z_a$ and $z_v$ by taking the normalised dot product divided by a temperature parameter $\\tau$, $\\textrm{sim}(z_a,z_v) = \\exp (\\frac{\\hat{z}_a^\\top \\hat{z}_v}{\\tau})$. Finally, we apply the NCE loss:\n\\begin{equation}\n    L_{\\textrm{NCE}}(z_a,z_v) = - \\sum_i \\log \\frac{\\textrm{sim}(z_a^i,z_v^i)}{\\sum_{j,k} \\textrm{sim}(z_a^k,z_v^j)}\n    \\label{eqn:nce}\n\\end{equation}\n\nEquation~\\ref{eqn:nce} describes the loss for audio-visual contrastive training. However, this technique does not train any parameters specific to the fusion representation or output (e.g., the fusion cross-attention, or the fusion weights if the model has separate weights per modality). In order to self-supervise the output of the fusion stream, we add fusion-visual and fusion-audio contrastive losses. That is, we define a self-supervised loss contrasting both unimodal representations (audio and video) separately with the multimodal one (fusion), where, analogously, the fusion embedding $z_f$ is obtained from the fusion output $o_f$ with a linear projection. With these changes, the total loss is:\n\\begin{equation}\n    \\label{eqn:fusion_contrastive}\n    \\small L = L_{\\textrm{NCE}}(z_a,z_v)+L_{\\textrm{NCE}}(z_a,z_f)+L_{\\textrm{NCE}}(z_v,z_f)\n\\end{equation}\n\n\n\\section{Experiments}\n\\label{sec:experiments}\nIn this section, we evaluate the Zorro\\xspace architecture in multiple settings. We first present details of the training and evaluation procedures, as well as the main datasets we use. We evaluate the method against state-of-the-art models on three standard audio-visual benchmarks (AudioSet~\\cite{gemmeke2017audio}, VGGSound~\\cite{chen2020vggsound} and Kinetics-400~\\cite{carreira2017quovadis}), one vision benchmark (Kinetics-400~\\cite{carreira2017quovadis}) and one audio benchmark (ESC-50~\\cite{piczak2015dataset}). Finally, we ablate the main design decisions that drove our research and showcase Zorro\\xspace's flexibility. Specifically, we compare the different architectures, study the effect of missing modalities, pre-train Zorro\\xspace with unimodal data and explore alternative attention-masking strategies.\n\n\\subsection{Experimental details}\nIn order to showcase Zorro\\xspace's ability to reason across different modalities, we pre-train it using self-supervision as well as with standard supervision using class labels. \nIn this section, we provide the most important details of the training procedure. Additional details about inputs, architectures and training can be found in Sections \\ref{arch:details} and \\ref{sec:training_details} in the Appendix.\n\n\\vspace{2mm} \\noindent \\textbf{Pre-training datasets}: We utilise four datasets for pre-training: AudioSet~\\cite{gemmeke2017audio}, YouTube-8M, ACAV-100M~\\cite{lee2021acav100m} and ImageNet-21k~\\cite{ridnik2021imagenet}. AudioSet consists of $1.9$M videos annotated with $527$ classes of sounds. As the dataset is highly unbalanced, \\cite{mbt} proposed a smaller, more balanced variant of the training set with $500$k examples. 
For the ablation experiments and training from scratch we use the $1.9$M version, while for fine-tuning we also use AudioSet-500k for fair comparison with the state-of-the-art. YouTube-8M~\\cite{abu2016youtube} consists of $8$M videos with audio and visual frames, annotated in a multi-label fashion with $3862$ different classes. The videos are representative of many activities, resulting in a very natural distribution of data. ACAV-100M consists of $100$M videos with audio and visual frames without associated labels, which have been curated to contain a strong audio-visual correlation. We use $59$M of those videos for self-supervised learning. ImageNet-21k consists of $13$M images annotated with $21$k classes, and has typically been used for large-scale pre-training of visual transformer models~\\cite{vit}. \n\n\n\\vspace{2mm} \\noindent \\textbf{Audio-visual evaluation benchmarks}: To evaluate the ability of Zorro\\xspace to learn and transfer multimodal representations, we evaluate on standard audio-visual benchmarks. Specifically, we evaluate Zorro\\xspace on AudioSet, VGGSound~\\cite{chen2020vggsound} and Kinetics-400~\\cite{kay2017kinetics}. VGGSound consists of $163,603$ training and $13,579$ test samples drawn from 10-second YouTube videos which span $309$ single-label, mutually exclusive classes. It focuses on real-life audio evaluation with audio-visual correspondence, where sounds are visually evident in the video. \nKinetics-400 consists of $201$K training videos of everyday actions which are classified into $400$ unique classes. While some datasets are biased towards the audio or the video modality, Zorro\\xspace is able to learn the extent to which to rely on each modality. \n\n\\vspace{2mm} \\noindent \\textbf{Unimodal evaluation benchmarks}: Zorro\\xspace can be trained on multi-modal data but evaluated on unimodal data. To further show this, we evaluate the multi-modally trained Zorro\\xspace models on unimodal fine-tuning tasks: Kinetics-400 for vision and ESC-50 for audio. The ESC-50 dataset contains $2$k clips classified into $50$ unique classes.\n\n\\vspace{2mm} \\noindent\\textbf{Zorro\\xspace inputs}: The inputs to our model are video and audio. The audio and video are synced and cover the same time span. The video consists of $8$ frames of size $224 \\times 224$. When training on AudioSet, we sample videos at $3.12$ FPS, which results in $2.56$s of audio and video. The specific FPS and audio length per model for pre-training and fine-tuning are reported in Section~\\ref{sec:training_details} in the Appendix.\nDuring training, we use random cropping as well as color augmentation on the frames. \nFor ESC-50, we match the input lengths of the pre-trained model, looping over the audio sequence if required. Audio is sampled at $48$ kHz and converted to spectrograms with $128$ bins as inputs to our model.\nTo augment the audio during training, we use SpecAugment~\\cite{park19specaug} and frequency jittering. During evaluation, we subsample the input video and audio into multiple equally spaced clips and average their predictions. \n\n\n\\vspace{2mm} \\noindent\\textbf{Architectural details}: \nZorro\\xspace is based on unimodal transformer architectures (ViT, Swin and HiP), adapted for multimodal processing (similar to~\\cite{mbt}). Throughout our experiments we use ViT-B\/16. For details on the ViT, Swin and HiP architectures, see Section~\\ref{arch:details} in the Appendix.\n\n\\vspace{2mm} \\noindent\\textbf{Training details}:\nWe use the Adam optimiser with a cosine-decay learning rate schedule, weight decay and learning rate warmup. 
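For reference, this is the standard warmup-plus-cosine-decay schedule; a minimal sketch (with hypothetical step counts and peak rate, not our exact values) is:\n\\begin{verbatim}\nimport math\n\ndef learning_rate(step, warmup, total, peak):\n    # Linear warmup followed by cosine decay to zero.\n    if step < warmup:\n        return peak * step \/ warmup\n    t = (step - warmup) \/ (total - warmup)\n    return 0.5 * peak * (1.0 + math.cos(math.pi * t))\n\\end{verbatim}\n\n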
When fine-tuning Zorro\\xspace-ViT and Zorro\\xspace-Swin, we find it better to use the SGD optimiser with momentum $0.9$. We train all models for $50$ epochs, except on the ACAV-100M dataset, where we train for $10$ epochs, and for the \\textit{input-level} and \\textit{bottleneck} baselines, where we train for $25$ epochs to prevent severe overfitting. We find it best to use $n_{\\textrm{fusion}}=6$ in all models. For AudioSet fine-tuning, we use mixup ($\\alpha=0.3$) and label smoothing. We use a cross-entropy loss for single-label datasets and a binary sigmoid cross-entropy loss for multi-label ones. We train one classifier for each of the $4$ outputs of the model and average their predictions. For contrastive training, we follow the procedure outlined in Section~\\ref{sec:selfsup}.\n\n\n\n\\subsection{State-of-the-art comparison}\nNext, we evaluate Zorro\\xspace against state-of-the-art methods. We evaluate our audio-visual trained Zorro\\xspace on benchmarks for audio-visual classification, video classification and audio classification, showcasing the universality of the approach.\n\n\t\\begin{table}[t]\n\t\t\\centering\n\t\t\\caption{\\small {\\bf AudioSet-2M comparison: training from scratch.} We report the performance of our models trained on audio-visual data, compared with the state-of-the-art when training from scratch. We report the mean average precision on the AudioSet test set.\n }\n\t\t\t\\begin{tabular}{c|cc|c} \\toprule\n\t\t\tModel & Train Mod & Eval Mod & AudioSet \\\\\n\t\t\t \\hline\n\t\t\t HiP~\\cite{carreira2022hierarchical} & A+V & A+V & 43.8 \\\\\n\t\t\t Perceiver~\\cite{perceiver} & A+V & A+V & 44.2 \\\\\n\t\t\t ERANN~\\cite{verbitskiy2021eranns} & A & A & 45.0 \\\\\n\t\t\t \\hline\n\t\t\t Zorro\\xspace-ViT & A+V & A+V & 45.1 \\\\\n\t\t\t Zorro\\xspace-HiP & A+V & A+V & 45.2 \\\\\n\t\t\t \\textbf{Zorro\\xspace-Swin} & \\textbf{A+V} & \\textbf{A+V} & \\textbf{46.5} \\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\t\\label{tab:audioset_scratch}\n\t\\end{table}\n\n\n\n\t\\begin{table*}[t]\n\t\t\\centering\n\t\t\\caption{\\small {\\bf State-of-the-art results:} We compare Zorro\\xspace with the state-of-the-art in two settings: when labels are not used in pre-training and when they are. We report the mean average precision on the AudioSet test set and top-1 accuracy on K-400, VGGSound and ESC-50. IN-21k is ImageNet-21k~\\cite{ridnik2021imagenet}, YT8M is YouTube-8M~\\cite{abu2016youtube}, ACAV is ACAV-100M~\\cite{lee2021acav100m} and K-400 is Kinetics-400~\\cite{carreira2017quovadis}. 
\n }\n\t\t\t\\begin{tabular}{c|ccc|ccc|c|c} \\toprule\n\t\t\tModel & \\multicolumn{3}{c|}{Pre-Training} & \\multicolumn{3}{c|}{Eval: Video+Audio} & Eval: Video & Eval: Audio \\\\\n\t\t\t\\hline\n\t\t\t & Dataset & Sup\/SSL & Mod & AS & VGGSound & K-400 & K-400 & ESC-50 \\\\\n\t\t\t\\hline\n\t\t\t\\multicolumn{4}{l|}{\\textbf{No pre-training}} & & & & & \\\\\n\t\t\tSlowFast R101-NL~\\cite{feichtenhofer2019slowfast} & & & & & & \\bf 79.8 & \\bf 79.8 & \\\\\n\t\t\tAVSlowFast~\\cite{xiao2020audiovisual}, R101 & & & & & & 78.8 & & \\\\\n\t\t\tAudioSlowFast~\\cite{kazakos2021slow} & & & & & 52.5 & & & \\\\\n\t\t\tERANN~\\cite{verbitskiy2021eranns}, R101 & & & & 45.0 & & & & 89.2 \\\\\n\t\t\tPlayItBack~\\cite{stergiou2022play}, R101 & & & & 47.7 & 53.7 & & & \\\\\n\t\t\t\\midrule\n\t\t\t\\multicolumn{4}{l|}{\\textbf{Self-supervised pre-training}} & & & & & \\\\\n\t\t\tMaskSpec~\\cite{chong2022masked}, ViT & AS & SSL & A & 47.1 & & & & 89.6 \\\\\n\t\t\tZorro\\xspace-HiP & ACAV & SSL & A+V & 49.4 & 61.3 & 67.9 & 64.6 & 88.4 \\\\\n\t\t\tZorro\\xspace-Swin & ACAV & SSL & A+V & 49.4 & 61.1 & 73.7 & 69.4 & 91.4 \\\\\n\t\t\tZorro\\xspace-ViT & ACAV & SSL & A+V & \\bf 50.3 & \\bf 63.6 & 76.5 & 74.1 & \\bf 93.6 \\\\\n\t\t\t\\midrule\n\t\t\t\\multicolumn{4}{l|}{\\textbf{Supervised pre-training}} & & & & & \\\\\n\t\t\t\\spv{ViViT-Base~\\cite{vivit}} & \\spv{IN-21k} & \\spv{Sup.} & \\spv{V} & & & \\spv{80.0} & \\spv{80.0} & \\\\\n\t\t\t\\spv{MaskSpec~\\cite{chong2022masked}, ViT} & \\spv{AS} & \\spv{Sup.} & \\spv{A} & & & & & \\spv{98.2} \\\\\n\t\t\t\\spv{PaSST~\\cite{koutini2021efficient}} & \\spv{IN} & \\spv{Sup.} & \\spv{V} & \\spv{49.6} & & & & \\spv{96.8} \\\\\n\t\t\t\\spv{AST~\\cite{gong2021ast}} & \\spv{IN-21k} & \\spv{Sup.} & \\spv{V} & \\spv{45.9} & & & & \\spv{95.7} \\\\\n\t\t\t\\spv{MBT~\\cite{mbt}, ViT} & \\spv{IN-21k} & \\spv{Sup.} & \\spv{V} & \\spv{52.1} & \\spv{64.1} & \\spv{80.8} & \\spv{79.4} & \\\\\n\t\t\t\\spv{Zorro\\xspace-ViT} & \\spv{IN-21k} & \\spv{Sup.} & \\spv{V} & \\spv{50.9} & \\spv{63.1} & \\spv{79.8} & \\spv{77.6} & \\spv{81.7} \\\\\n\t\t\t\\spv{Zorro\\xspace-ViT} & \\spv{YT8M} & \\spv{Sup.} & \\spv{A+V} & \\spv{51.5} & \\spv{64.8} & \\spv{79.6} & \\spv{76.1} & \\spv{93.1} \\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\t\\label{tab:sota_experiments}\n\t\\end{table*}\n\n\\vspace{2mm} \\noindent \\textbf{Training on AudioSet-2M from scratch}: First, we evaluate Zorro\\xspace when trained from scratch on AudioSet-2M using both the audio and visual modalities. Table~\\ref{tab:audioset_scratch} shows that Zorro\\xspace matches or outperforms other methods trained directly on AudioSet-2M from scratch. Note that PlayItBack~\\cite{stergiou2022play} is not listed in Table~\\ref{tab:audioset_scratch} as it was trained with AudioSet-500k. This setting shows the model's ability to adapt to the multi-modal inputs without the need for pre-training data. \n\n\\vspace{2mm} \\noindent \\textbf{Multi-modal comparison}: We train and evaluate our pre-trained models on AudioSet-500k (see \\cite{mbt} for details), VGGSound and Kinetics-400, where we use both the audio and visual inputs. Similar to \\cite{mbt}, for Zorro\\xspace-ViT we allocate different weights for the audio, video and fusion latents. We found this useful for improving the fine-tuning accuracy. \nTable~\\ref{tab:sota_experiments} reports the performance of our models. We divide the discussion into two parts. 
First, we report the Zorro\\xspace performance when contrastive self-supervision is used for pre-training (no labels). Zorro\\xspace improves over all previous works on AudioSet and VGGSound. On AudioSet, our best-performing model in that setting is only $2\\%$ away from the supervised state-of-the-art, which demonstrates the ability of the self-supervised pre-training technique to learn general features. On VGGSound, Zorro\\xspace performs similarly to the supervised state-of-the-art when pre-trained only with self-supervision. Finally, for Kinetics-400, the resulting performance is not far from that of models with supervised pre-training. In the bottom part of the table we report the performance of Zorro\\xspace when using supervised pre-training. We include the performance of the model when initialised with a ViT pre-trained on ImageNet-21k. Even without multi-modal pre-training, Zorro\\xspace is able to perform comparably to SOTA models. When pre-trained on YouTube-8M, Zorro\\xspace also performs similarly to MBT~\\cite{mbt}. However, unlike Zorro\\xspace, MBT cannot perform unimodal inference when trained with multi-modal data. \nNote that, although we have not demonstrated it here, Zorro\\xspace can also be trained using unimodal self-supervised methods such as MAE~\\cite{he2022masked} and DINO~\\cite{caron2021emerging} separately on the audio and visual streams. We discuss supervised unimodal training below.\n\n\\vspace{2mm} \\noindent \\textbf{Video comparison}: To showcase Zorro\\xspace's performance in the unimodal regime, we fine-tune our models (pre-trained on audio and video) on the task of video classification on Kinetics-400 using only video. Table~\\ref{tab:sota_experiments} reports the results. Our goal is not to show state-of-the-art performance in this setting, as we are aware of the improvements made to Transformer architectures for that task~\\cite{zhang2021co,liu2021swin,yan2022multiview}. Our goal is to provide an efficient mechanism for pre-training those architectures in order to improve the final performance in unimodal and multimodal inference. When Zorro\\xspace is pre-trained using a contrastive loss and fine-tuned on Kinetics-400 (video only), Zorro\\xspace-ViT performs only $2.4\\%$ worse than when using audio-visual input. This shows the robustness of our model when reduced to using a single modality. Furthermore, when using the Zorro\\xspace model pre-trained on YT8M, our model is able to perform similarly to comparable architectures. As an alternative to fine-tuning, we can also use the audio-visual trained model (column \\textit{Audio+Video}) and feed only the video. In that setting, our model trained on YouTube-8M performs at $76.3$ top-1, on par with the video-only fine-tuned result. This unimodal inference with a multi-modally trained model is not possible with MBT, where retraining is needed. \n\n\\vspace{2mm} \\noindent \\textbf{Audio comparison}: To evaluate Zorro\\xspace's audio capabilities, we fine-tune our models on ESC-50 (an audio-only dataset) and report results in Table~\\ref{tab:sota_experiments}. When pre-trained on YouTube-8M, Zorro\\xspace performs close to AST, a specialised audio transformer of comparable size. When using self-supervised pre-training, Zorro\\xspace improves performance over previous methods; Zorro\\xspace-ViT reaches an accuracy of $93.6\\%$, close to state-of-the-art supervised methods. 
\n\n\t\\begin{table*}[t]\n\t\t\\centering\n\t\t\\caption{\\small {\\bf Masking configurations and architectures:} We evaluate the different masking configurations by training Zorro\\xspace on AudioSet with a supervised loss and with an audio-visual contrastive loss. Specifically, we test the audio-visual trained models in unimodal (audio, video) and multimodal settings. Our proposed configuration performs well across the board while providing additional unimodal outputs.\n }\n\t\t\t\\begin{tabular}{ccc|ccc|ccc} \\toprule\n\t\t\t& & & \\multicolumn{3}{c}{Supervised (Audio+Video)} & \\multicolumn{3}{c}{Self-Supervised (Audio+Video)} \\\\\n\t\t\t\\hline\n\t\t\tArchitecture & Params & Fusion & Video & Audio & Audio+Video & Video & Audio & Audio+Video \\\\\n\t\t\t\\hline\n\t\t\tViT & 98M & Two Streams & 23.1 & 40.1 & 42.2 & 18.9 & 32.3 & 34.8 \\\\\n\t\t\tViT & 98M & Input Level & 9.1 & 31.6 & 42.2 & Collapse & Collapse & Collapse \\\\\n\t\t\tViT & 98M & Bottleneck~\\cite{mbt} & 9.7 & 32.6 & 42.5 & Collapse & Collapse & Collapse \\\\\n\t\t\tViT & 98M & Zorro\\xspace & 22.5 & 39.7 & 45.1 & 17.8 & 29.8 & 33.6 \\\\\n\t\t\tHiP & 136M & Zorro\\xspace & 22.0 & 39.5 & 45.2 & 11.3 & 21.9 & 26.5 \\\\\n\t\t\tSwin & 161M & Zorro\\xspace & 25.4 & 40.6 & 46.5 & 20.5 & 31.6 & 35.7 \\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\t\\label{tab:ablation_experiments}\n\t\\end{table*}\n\\subsection{Architecture comparison}\nIn this section, we discuss the different architectures introduced in this paper. \nIn Table~\\ref{tab:ablation_experiments} we report a comparison of these architectures in two settings: when trained from scratch, and when pre-trained with an audio-visual contrastive loss and evaluated with a linear layer on top, using AudioSet-2M. \nWhen training from scratch, we observe that Zorro\\xspace-Swin performs best among the different models, in both the supervised and the contrastive regime. Although its number of parameters is larger than ViT's, Swin trains $25\\%$ faster than ViT. HiP is the fastest of the three, while not losing much accuracy. See Section~\\ref{arch:details} in the Appendix for a model speed comparison. Furthermore, in Table~\\ref{tab:sota_experiments} we also present the results of fine-tuning these architectures after contrastive pre-training. It is important to note that for ViT, in that table we use one set of parameters per modality, which significantly increases the parameter count ($98$M to $267$M). In this regime, we observe that ViT performs best. However, Swin and HiP are faster and retain most of the performance. \n\n\\subsection{Zorro\\xspace model flexibility}\n\n\\vspace{2mm} \\noindent \\textbf{Unimodal inference with a multimodal backbone}:\nHere we study the ability of the audio-visual trained Zorro\\xspace to produce meaningful unimodal outputs when fed with unimodal data.\nTo achieve this, we zero out the missing modality and only provide useful inputs for one modality, either video or audio. \nResults are reported in Table~\\ref{tab:ablation_experiments}. Models without unimodal outputs suffer significantly when one modality is missing. In contrast, both Zorro\\xspace and the model with two separate modality streams achieve high performance when only a single modality is provided. This is due to the fact that in those models some capacity is allocated specifically to each modality, and the model is able to produce unimodal outputs. \n\n\n\\vspace{2mm} \\noindent \\textbf{Unimodal pre-training for multi-modal fine-tuning}:\nThroughout the paper, we have assumed the availability of a large multi-modal dataset for training. 
However, in some situations only large amounts of unimodal samples (e.g.\ video or audio) and a small set of multi-modal data are available. To showcase the flexibility of our proposal, we run a single experiment where we train with two unimodal datasets and fine-tune on a smaller multi-modal dataset. We use only the audio signal from the AudioSet dataset and the videos from the Kinetics-400 dataset. When training, we mix batches with probability $0.5$ per dataset, and do not compute the loss for the missing modalities. For evaluation, we fine-tune the resulting model on VGGSound and compare its result to the model trained from scratch. The fine-tuned model performs at $59.2$ top-1 accuracy while the model trained from scratch performs at $54.4$. This experiment shows the flexibility of the Zorro\xspace model to adapt to unimodal training while providing a useful initialization for multi-modal fine-tuning. \n\n\t\n\subsection{Masking configurations}\nIn this ablation, we study four different types of attention masking. First, we evaluate having two independent data streams (\textit{two streams}), where both models share weights but the modalities are not connected. Secondly, we evaluate input-level fusion, which consists of applying no masking in the model. This reduces the model to a vanilla ViT applied to the two concatenated modalities. Inspired by~\cite{mbt}, we also evaluate \textit{bottleneck masking}, where the fusion tokens can attend to each modality's tokens and each modality can also attend to the fusion tokens. We want to make clear that although this approach uses the main proposal from MBT, it is not a reproduction of their work. This configuration forces each stream to mostly concentrate on one modality, but information can flow across modalities through the fusion vectors. Finally, we compare all those masking strategies with our Zorro\xspace masking. For each masking configuration we train a model in a supervised manner (keeping the same number of outputs for fairness, except for \textit{two streams}, which has two outputs). We also train the model in a self-supervised way, where the audio and the video outputs are used to compute the contrastive loss. To report performance, we train a linear classifier on top of the contrastive representations.\n\nTable~\ref{tab:ablation_experiments} reports the results. We extract two main conclusions. First, having modality-independent streams is crucial for self-supervised training. Both the \textit{input-level} and the \textit{bottleneck} configurations immediately collapse, as information can flow from one modality to the other. Performance for Zorro\xspace and \textit{two streams} is very similar, as Zorro\xspace trained in a self-supervised manner reduces to the two-streams architecture. Secondly, we find that having separate modality streams is also useful for supervised learning. It is especially interesting to compare the performances of \textit{input-level}, \textit{bottleneck} and Zorro\xspace: Zorro\xspace performs better as the modality streams are treated more independently. We believe this is due to the ability of the model to keep modality-specific information through the network, which can be useful at later stages of processing. Finally, for self-supervised training of Zorro\xspace, we use equation~\ref{eqn:fusion_contrastive}, which also trains the fusion output. Although this produces a slight decrease in performance with respect to \textit{two streams}, it is beneficial for downstream tasks. 
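\nTo make this objective concrete, the sketch below shows a symmetric audio-visual contrastive loss with an optional fusion anchor; the normalization, the temperature value and the exact way the fusion output enters the loss are illustrative assumptions, not the precise form of equation~\ref{eqn:fusion_contrastive}.\n\begin{verbatim}\nimport torch\nimport torch.nn.functional as F\n\n# Hedged sketch of a symmetric audio-visual InfoNCE loss; treating the\n# fusion output as an extra anchor is an illustrative guess.\ndef contrastive_loss(z_a, z_v, z_f=None, tau=0.07):\n    # z_a, z_v, z_f: (batch, dim) audio, video and fusion outputs\n    z_a, z_v = F.normalize(z_a, dim=-1), F.normalize(z_v, dim=-1)\n    targets = torch.arange(z_a.size(0), device=z_a.device)\n    logits = z_a @ z_v.t() \/ tau\n    loss = 0.5 * (F.cross_entropy(logits, targets)\n                  + F.cross_entropy(logits.t(), targets))\n    if z_f is not None:  # optionally also align the fusion output\n        z_f = F.normalize(z_f, dim=-1)\n        loss = loss + 0.5 * (F.cross_entropy(z_f @ z_a.t() \/ tau, targets)\n                             + F.cross_entropy(z_f @ z_v.t() \/ tau, targets))\n    return loss\n\end{verbatim}\nWhen the fusion anchor is dropped (\texttt{z\_f=None}), the objective reduces to the two-streams case.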
Alternatively, when Zorro\xspace is trained using only the audio and video outputs, it performs the same as \textit{two streams} ($35.0$ vs $34.8$), since the two models are equivalent.\n\n\n\n\n\section{Conclusion}\n\label{sec:conclusions}\nIn this paper, we introduced Zorro\xspace, a novel Transformer masking configuration which enables simultaneous unimodal and multimodal training and inference, as well as contrastive pre-training. Differently from previous approaches to multimodal perception, our proposed method is able to generate both unimodal and multimodal outputs. By splitting the information flow into unimodal and multimodal streams, we are able to improve performance when the architecture is trained with a supervised loss, and we show the ability of the model to be trained in a self-supervised way with a contrastive loss. We evaluate our model on multimodal tasks, showing great flexibility and state-of-the-art performance.\n\n\bibliographystyle{ieee_fullname}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}%\nSpontaneous symmetry breaking is one of the key notions \nin modern physics. \nIn particular, spontaneous breaking of continuous global symmetries leads to Nambu-Goldstone (NG) modes~\cite{Nambu:1961tp,Goldstone:1961eq,Goldstone:1962es}, which are gapless excitations that dominate low-energy physics.\nVarious gapless excitations can be identified as NG modes, such as phonons in superfluids, pions in hadron physics, axions or axionlike particles in particle physics and condensed matter physics, and so on.\nWhile the counting rule for the number of NG modes was originally formulated in relativistic systems possessing Lorentz invariance~\cite{Nambu:1961tp,Goldstone:1961eq,Goldstone:1962es}, \nLorentz invariance is usually absent in realistic materials.\nThe extension to nonrelativistic systems without Lorentz invariance~\cite{Nielsen:1975hm, Miransky:2001tw, Schafer:2001bq, Nambu:2004yia, Watanabe:2011ec} was also established~\cite{Watanabe:2012hr, Hidaka:2012ym, Watanabe:2014fva}.\nThere, the number of NG modes, which may be reduced in the absence of Lorentz invariance, can be counted by using the symmetry algebra.\n\nRecently, the notion of symmetries has been extended to higher-form symmetries~\cite{Gaiotto:2014kfa} \n(see also Refs.~\cite{Batista:2004sc,Pantev:2005zs,Pantev:2005wj,Pantev:2005rh,Nussinov:2006iva,Nussinov:2008aa,Nussinov:2009zz, Nussinov:2011mz,Banks:2010zn,Distler:2010zg,Kapustin:2014gua}),\n where the charged objects are extended objects rather than the pointlike operators of ordinary symmetries. \nIn particular, the Maxwell theory in $(3+1)$ dimensions can be understood as a broken phase of a $U(1)$ 1-form symmetry whose charged object is a Wilson loop, and the photon can be identified \nas the NG mode~\cite{Kugo:1985jc,Kovner:1992pu,Gaiotto:2014kfa,Lake:2018dqm}. The counting rule for the number of NG modes for higher-form symmetries has also been formulated recently~\cite{Hidaka:2020ucc}.\n\nOn the other hand, there are cases where NG modes become unstable in the presence of background fields, which lie beyond the conventional counting rule above. 
In fact, such instabilities have been discussed in various contexts, such as the instability of a photon with a time-dependent axion background in cosmology~\cite{Carroll:1989vb,Garretson:1992vt,Anber:2006xt}\nor with a chirality imbalance known as the chiral plasma instability in cosmology and astrophysics~\cite{Joyce:1997uy,Akamatsu:2013pjd}, and the instability of an emergent dynamical axion with a background electric field in condensed matter physics~\cite{Ooguri:2011aa}; see also Ref.~\cite{Nakamura:2009tf} for a related instability in the five-dimensional Maxwell-Chern-Simons theory with a constant electric field.\nOne can ask whether the existence of such instabilities may be understood as a universal property of NG modes dictated by some symmetry algebra and whether there may be a general counting rule for these unstable modes similar to that of usual NG modes.\n\nIn this paper, we derive a general counting rule of these unstable NG modes for the spontaneous breaking of internal symmetries in the presence of background fields.\nWe show that the number of unstable NG modes is determined by the rank of a matrix of correlation functions of broken symmetry generators; see \er{main} for our main result. \nWe verify the validity of our formula for known examples of instabilities.\n\n\n\n\section{Comparison between type-B and unstable Nambu-Goldstone modes}\nBefore going into the detailed discussion, we briefly summarize the previous studies and our results on the classification of NG modes based on their dispersion relations.\nIn Lorentz-invariant systems, the dispersion relation of NG modes is always $\omega = |\boldsymbol{k}|$, and the number of NG modes is equal to the number of broken symmetry generators~\cite{Nambu:1961tp,Goldstone:1961eq,Goldstone:1962es}. \nIn the presence of background fields where Lorentz symmetry is explicitly broken, it has been shown that there can be NG modes with the quadratic dispersion $\omega \sim |\boldsymbol{k}|^2$ and gapped modes with $\omega = {\rm const} + O (|\boldsymbol{k}|^2)$ for 0-form symmetries~\cite{Nielsen:1975hm} and higher-form symmetries~\cite{Yamamoto:2015maz,Sogabe:2019gif}. \nThe counting rules of these modes for 0-form symmetries and higher-form symmetries are proved in Refs.~\cite{Watanabe:2012hr,Hidaka:2012ym} and Ref.~\cite{Hidaka:2020ucc}, respectively. \nWithout fine-tuning of the parameters of a theory, whether the dispersion relation of an NG mode is linear or quadratic is determined by the quantity $\rho_{\cal IJ}^0 \equiv \langle [Q_{\cal I}, Q_{\cal J}] \rangle$ in the ground state. Here, $Q_{\cal I}$ are broken symmetry generators whose independent degrees of freedom are parametrized by the index ${\cal I}$ (see Ref.~\cite{Hidaka:2020ucc} for the detailed definition), and the superscript ``0'' refers to the temporal direction in which commutators are defined: NG modes for $\rho_{\cal IJ}^0 = 0$ have linear dispersion and are called type A, while NG modes for $\rho_{\cal IJ}^0 \neq 0$ have quadratic (or gapped) dispersion and are called type B.\n\nOur new insight in this paper is that there are generically additional modes with the dispersion $\omega \sim \pm {\rm i} |\boldsymbol{k}|$ that can also be understood as NG modes dictated by the symmetry algebra. 
Although several examples of the modes with this dispersion relation with a positive imaginary part are already known simply as instabilities~\cite{Carroll:1989vb,Garretson:1992vt,Joyce:1997uy,Ooguri:2011aa,Akamatsu:2013pjd,Anber:2006xt}, they have not been identified as NG modes so far.%\n\footnote{The partner mode with the negative imaginary part, $\omega \sim - {\rm i} |\boldsymbol{k}|$, also has a remarkable feature: Although it is a damping mode, it does not involve any entropy production (somewhat similarly to Landau damping), unlike usual diffusion modes with the dispersion $\omega \sim - {\rm i} |\boldsymbol{k}|^2$. While it can also be understood as a new type of NG mode, below we will mostly focus on the unstable NG mode with the positive imaginary part, as the number of the former is simply equal to the number of the latter. This is similar to the fact that type-B NG modes are accompanied by gapped modes, and their numbers are equal.}\nMoreover, to the best of our knowledge, such a mode has not been known even for the conventional 0-form symmetries. \nThe purpose of this paper is to generalize the counting rule for the conventional NG modes to these unstable NG modes and to provide a new such example for a 0-form symmetry. To this end, we will introduce a new quantity $\rho^l_{\cal IJ}$ in Eq.~(\ref{rho_cal}) below, which is the matrix of the correlators of broken symmetry generators put on the planes perpendicular to the spatial $x^l$ direction. The comparison between type-B and unstable NG modes is summarized in Table~\ref{tab:classification}. \n\n\begin{table}\n \begin{tabular}{c|c|c}\n & Dispersion & Condition \\\n \hline \n Type B & $\omega \sim |\boldsymbol{k}|^2$ & $\rank \rho^0_{\cal IJ} \neq 0$ \\\n \hline\n Unstable & $\omega \sim {\rm i} |\boldsymbol{k}| $ & \n $\rank \rho^l_{\cal IJ} \neq 0$ \n \end{tabular}\n \caption{Comparison between type-B and unstable NG modes. Here, $\rho^0_{\cal IJ}$ is the matrix of the equal-time commutators of the broken symmetry generators put on spatial directions, and $\rho^l_{\cal IJ}$ is the matrix of the correlators of broken symmetry generators put on the planes perpendicular to the spatial $x^l$ direction.}\n \label{tab:classification}\n\end{table}\n\nBefore going into the mathematical proof for a more generic case, we first provide a rough idea of when and how the type-B or unstable NG modes appear. In essence, the directions of background fields can be classified into spatial and temporal ones (whose precise definitions will be given in Sec.~\ref{sec:counting} below). All previous works on the classification of NG modes implicitly assume that the background fields are in the spatial direction.\nIn this case, the dispersion relation is modified by a term linear in $\omega$ as $\omega^2 = \pm \alpha \omega + |\boldsymbol{k}|^2 +\cdots $ with $\alpha$ being some real constant, leading to type-B NG modes and gapped modes. The number of type-B NG modes can be counted by the correlation of spatially extended symmetry generators.\n\nOn the other hand, if the directions of the background field strengths are temporal, the modification of the dispersion relation is given by a term linear in $|\boldsymbol{k}|$ as $\omega^2 = \pm \beta |\boldsymbol{k}| + |\boldsymbol{k}|^2$ with $\beta$ being some real constant, leading to unstable NG modes and damping modes. 
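\nTo make the origin of the instability explicit, consider the minus branch of this relation at momentum $k \equiv |\boldsymbol{k}|$ with $\beta > 0$ (a short computation under the same assumptions):\n\begin{equation}\n\omega^2 = k^2 - \beta k = k (k - \beta) < 0\n\quad \text{for} \quad 0 < k < \beta \,,\n\end{equation}\nso that $\omega = \pm \mathrm{i} \sqrt{k (\beta - k)}$ in this window: the upper sign is an exponentially growing unstable mode and the lower sign a damping mode, while the plus branch remains real.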
\nIn this case, the number of unstable NG modes can be counted by the correlation of temporally extended symmetry generators.\n\nIn the following, we put this argument on a more mathematical basis using effective field theories for higher-form symmetries.\n\n\n\section{Effective theories}%\nWe consider low-energy effective theories for spontaneous breaking of continuous 0- and higher-form internal symmetries with couplings to \nbackground fields.\nTo discuss the dispersion relations in the low-energy region, it is sufficient to focus on the effective action up to second order in derivatives. \nNote that, up to second order in fields and derivatives, the effective action for the higher-form symmetries includes that of 0-form symmetries.\nWe consider $D$-dimensional Minkowski spacetime with the mostly plus metric $\eta_{\mu\nu} = \diag (-1,1,...,1)$.\n\nFor the 0-form symmetries, we assume that a continuous 0-form symmetry with a compact Lie group $G$ is spontaneously broken to its subgroup $H$. For the higher-form symmetries, we assume that $U(1)$ $p_I$-form symmetries ($I = 1,..., N$) are spontaneously broken, since higher-form symmetries are always Abelian~\cite{Gaiotto:2014kfa}.\nWe introduce a charged object for the $p_I$-form symmetry, which is a Wilson loop $W(C_I) = \exp \left(\mathrm{i} \int_{C_I }a_I \right)$ on a $p_I$-dimensional closed subspace $C_I$. Here, $a_I$ is a $p_I$-form field, which has a gauge redundancy under the transformation $ a_I \to a_I + \mathrm{d} \lambda_I $ for a $U(1)$ $(p_I-1)$-form parameter $\lambda_I$ satisfying $ \int_{C_I} \mathrm{d} \lambda_I \in 2\pi \mathbb{Z}$. \nFor a $U(1)$ symmetry, the Wilson loop is transformed as $W(C_I) \to \mathrm{e}^{\mathrm{i} \alpha_I} W(C_I)$ with $\mathrm{e}^{\mathrm{i} \alpha_I} \in U(1)$, which is generated by a shift $a_I \to a_I + \epsilon_I$ with a $p_I$-form $\epsilon_I$ satisfying $\mathrm{d} \epsilon_I =0$ on $C_I$ and $\int_{C_I} \epsilon_I = \alpha_I$.\nFor a 0-form symmetry with a non-Abelian group, the global transformation leads to a constant shift of the field at leading order.\nThe spontaneous breaking of the higher-form symmetries can be characterized by a nonzero vacuum expectation value of the Wilson loop in the large volume limit, $\vevs{W(C_I)} \to 1$, up to a renormalization that can depend on the volume of $C_I$.\n\nWe now construct the effective action. To have an action preserving the higher-form global symmetries, we use the field strength $f_{I} = \mathrm{d} a_I$. 
\nUp to second order in derivatives and fields, the effective action is given by%\n\footnote{For a construction of the effective actions based on the nonlinear realization of higher-form symmetries, see Ref.~\cite{Hidaka:2020ucc}.}\n\begin{equation}\nS \n =\n- \frac{1}{2} \int F^2_{IJ} \mathrm{d} a_I \wedge *\mathrm{d} a_J\n+\n\frac{1}{2}\n\int \mathrm{d} a_I \wedge \mathrm{d} a_J \wedge A_{IJ} \,.\n\label{S}\n\end{equation}\nHere, $F^2_{IJ} = F_{KI} F_{KJ}$ is expressed in terms of an invertible matrix $F_{IJ}$ that represents the matrix of decay constants, and \n$A_{IJ} = (-1)^{(p_I+1)(p_J+1) }A_{JI}$ is a $p_{IJ}$-form with $p_{IJ} = D- p_I -p_J -2$ that can depend on the spacetime coordinates.%\n\footnote{The background field satisfies the quantization condition $\int_{\Sigma_{IJ}} \mathrm{d} A_{IJ} \in \frac{1}{2\pi} \mathbb{Z}$ on a $(p_{IJ}+1)$-dimensional closed compact subspace $\Sigma_{IJ}$.} \nThe first term in \er{S} can be understood as a generalization of the Maxwell term, and the second term is a topological term with a possible background field $A_{IJ}$.\nThe matrix $F_{IJ}$ is nonzero only for $p_I = p_J$.\nIt would also be possible to regard $F_{IJ}^2$ as a $(p_J - p_I)$-form background field, but we assume that $F_{IJ}$ is just a constant 0-form for simplicity.\nThe presence of the background field $A_{IJ}$ breaks Lorentz invariance in general.\nNote that the assumption of the $p_I$-form global symmetries of the action excludes terms that break these symmetries explicitly, such as dynamical charged matter, which may be allowed to exist if we assume only $(p_I -1)$-form gauge invariance.\n\nIn passing, we remark that we can reproduce the nonrelativistic effective actions previously considered in Refs.~\cite{Watanabe:2012hr, Hidaka:2012ym, Watanabe:2014fva} by choosing, e.g., $A_{IJ} = \mu_{IJ} x^1 \mathrm{d} x^2\wedge \cdots \wedge \mathrm{d} x^{D-1} $ with an appropriate constant $\mu_{IJ}$ up to a total derivative.\nIt is possible to identify $A_{IJ}$ as a background gauge field whose symmetry is called a composite or Chern-Weil global symmetry~\cite{Brauner:2020rtz,Heidenreich:2020pkc}.\n\n\n\n\section{Counting rule of unstable NG modes}%\n\label{sec:counting}\nTo count the number of unstable modes in the presence of the background field, we first derive the dispersion relations.\nHereafter, we assume that the translational invariance of the system is not broken.\nWe consider the configuration of the background field in the temporal direction where $\mathrm{d} A_{IJ} = \frac{1}{p_{IJ}!} E_{IJ,i_1...i_{p_{IJ}}} \mathrm{d} x^0 \wedge \mathrm{d} x^{i_1} \wedge \cdots \wedge \mathrm{d} x^{i_{p_{IJ}}}$ with a constant $E_{IJ,i_1...i_{p_{IJ}}}$.\nThis background field can be understood as a generalization of an ordinary background electric field.%\n\footnote{If we introduce a background magnetic field or its generalization in the spatial direction instead of the electric field, some of the NG modes become gapped rather than unstable~\cite{Yamamoto:2015maz,Brauner:2017mui,Sogabe:2019gif,Hidaka:2020ucc}.}\n\n\n\subsection{Dispersion relations}\nThe equation of motion for $a_I$ is\n\begin{equation}\nF^2_{IJ} \mathrm{d} * f_J\n= (-1)^{p_J +1} \nf_J \wedge \mathrm{d} A_{IJ}.\n\label{EOM}\n\end{equation}\nTo focus on physical degrees of freedom, we take the temporal gauge $a_{I,0 i_1...i_{p_I-1}} =0$ with the Gauss law constraint $\partial^i f_{I,0 i i_1 ... 
i_{p_I-2}} =0$.\nIn momentum space,\n\\er{EOM} \ncan be written as\n\\begin{equation}\n(\\omega^2 - |\\boldsymbol{k}|^2)\n \\hat{a}_I^{i_1...i_{p_I}}\n = -\\frac{\\mathrm{i}}{p_J!}\n\\hat{M}^{ i_1...i_{p_I} l j_1 ... j_{p_J}}_{IJ}\nk_l \\hat{a}_{J, j_1 ...j_{p_J}}\\,,\n\\label{EOM_momentum}\n\\end{equation}\nwhere we defined\n$\\hat{a}_I = F_{IJ} a_J $\nto simplify our notation.\nWe also introduced $\\hat{M}^{i_1...i_{p_I} l j_1 ... j_{p_J}}_{IJ}\n = \n F^{-1}_{KI} M^{i_1...i_{p_I} l j_1 ... j_{p_J}}_{KL} F_{LJ}^{-1}$, where\n\\begin{equation}\nM^{i_1...i_{p_I} l j_1 ... j_{p_J}}_{IJ}\n= \n\\frac{\\epsilon^{0 i_1... i_{p_I} l j_1 ..j_{p_J}\nk_1 ... k_{p_{IJ}}}}{p_{IJ}!\n} \nE_{IJ, k_1 ... k_{p_{IJ}}}\\,, \n\\end{equation}\nwith $\\epsilon_{\\mu_1...\\mu_D}$ being the totally antisymmetric tensor satisfying $\\epsilon_{01...D-1} = 1$.\nWe remark that $\\hat{M}^{ i_1...i_{p_I}l j_1 ... j_{p_J}}_{IJ}$ \nis antisymmetric under the exchange \n${\\cal I} = (I, i_1,...,i_{p_I}) \\leftrightarrow {\\cal J} = (J, j_1,..., j_{p_J})$.\nUsing these collective indices, the equation of motion in \\er{EOM_momentum} can be simplified as\n\\begin{equation}\n(\\omega^2 - |\\boldsymbol{k}|^2)\n \\hat{a}_{\\cal I}\n = \n - \\mathrm{i} k_l \\hat{M}_{{\\cal IJ}}^l\n \\hat{a}_{\\cal J},\n\\label{EOM_collective}\n\\end{equation}\nwith the antisymmetric matrices \n$\\hat{M}_{{\\cal IJ}}^l = \\hat{M}^{i_1...i_{p_I} l j_1 ... j_{p_J}}_{IJ}$\nsatisfying \n$\\hat{M}_{\\cal JI}^l = - \\hat{M}_{\\cal IJ}^l$.\nHere, the indices $(i_1,...,i_{p_I})$ in ${\\cal I}$ are ordered as $i_1< \\cdots < i_{p_I}$ to avoid overcounting.\nThe number of degrees of freedom of ${\\cal I}$ is denoted as ${\\cal N}$.\n\nWe can now count the number of unstable modes in the background \n$\\hat{M}_{{\\cal IJ}}^l \\neq 0$. Equation~\\eqref{EOM_collective} implies that the dispersion relations may depend on the direction of the wave vector. 
\nWe thus consider the dispersion relation for each direction.\nIf we choose a wave vector along the $x^l$ direction, i.e., $\boldsymbol{k} = (0,...0,\underbrace{k}_{l\text{-th}},0,...,0)$, \er{EOM_collective} is further simplified as\n $(\omega^2 - k^2)\n \hat{a}_{\cal I}\n = \n- \mathrm{i} k \hat{M}_{{\cal IJ}}^l\n \hat{a}_{\cal J}$.\nSince the matrix $\hat{M}^l = (\hat{M}_{\cal IJ}^l)$ is antisymmetric, we can transform it by an orthogonal matrix $P$ into the form\n\begin{equation}\n\begin{split}\n&\nP \hat{M}^{l} P^T \n= \mtx{ \Lambda_{1}^l && \\ &\ddots& \\ && \Lambda_{n }^l \\ &&&\n0_{({\cal N} - 2n)\times ({\cal N} -2n)}\n}\,,\n\\\n& \n \Lambda_{ m}^l = \mtx{ 0 & - \lambda_{m}^l \\ \lambda_m^{l} & 0}\,, \n\quad \nn = \frac{1}{2}\rank (\hat{M}^{l})\,.\n\end{split}\n\end{equation}\nNote that $n$ is an integer because the rank of an antisymmetric matrix is always an even integer.\nIn this basis, we can solve \er{EOM_collective} and obtain the dispersion relation for each $\Lambda_{m }^l$:\n\begin{equation}\n\label{instability}\n\omega^2 = k^2 \pm |k\lambda_{m}^l|,\n\end{equation}\nwhich exhibits an instability in the region $|k| < |\lambda_{m}^l|$.\nThe number of unstable NG modes $N^l_{\rm unst}$ in the $x^l$ direction is, therefore,\n\begin{equation}\n N^l_{\rm unst} \n =\n \frac{1}{2}\rank (\hat{M}_{{\cal IJ} }^l)=\frac{1}{2}\rank (M_{{\cal IJ} }^l)\,.\n \label{Nunst}\n\end{equation}\nWe remark that the Gauss law $k_i a^{i i_1 ...i_{p_I -1}} =0 $ is automatically satisfied for the counting of the unstable modes, since $ k_i M_{{\cal IJ} }^i = 0$ if the direction of $k_i$ is the same as any of the polarization directions, $i_1,...,i_{p_I}$ or $j_1,..., j_{p_J}$, due to the totally antisymmetric tensor.\nIn other words, the unstable NG modes are always transverse modes.\nNote that the number of unstable modes is determined for a given direction of the wave vector.\n\n\n\subsection{Correlation function between symmetry generators}%\nHere, we show that the matrix $M_{{\cal IJ} }^l$ can be expressed in terms of the correlation functions of broken symmetry generators, and, hence, the number of unstable NG modes is equal to half of the rank of the matrix of such correlation functions (see the Appendix for details).\n\nFrom the equation of motion in \er{EOM}, we can define a conserved charge on a $(D - p_I - 1)$-dimensional closed subspace $\Sigma_{I}$:\n\begin{equation}\n Q_I (\Sigma_{I})\n = \int_{\Sigma_{I}} \n\left(- F^{2}_{IJ} * f_J + \nf_J \wedge {A}_{IJ}\right) \n.\n\end{equation}\nThe correlation function can be calculated in the path integral formulation by using the Ward-Takahashi identity as\n\begin{equation}\n\mathrm{i} \vevs{ Q_I (\Sigma_{I}) Q_J (\Sigma_{J})}\n= - \n\int_{\Sigma_I \cap \Omega_J}\n \mathrm{d} A_{IJ}\,,\n\label{correlation}\n\end{equation}\nwhere $\Omega_J$ is a $(D-p_J)$-dimensional subspace satisfying $\partial \Omega_J = \Sigma_J$. 
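\nAs a cross-check of \er{Nunst}, the counting can also be verified numerically for a toy choice of the background; the sketch below uses a purely illustrative $4\times 4$ antisymmetric matrix and ignores the Gauss law constraint.\n\begin{verbatim}\nimport numpy as np\n\n# Toy check of N_unst = (1\/2) rank(M^l): for a real antisymmetric M,\n# -i k M is Hermitian, and w^2 = k^2 + eig(-i k M) as derived above.\ndef count_unstable(M, k):\n    mu = np.linalg.eigvalsh(-1j * k * M)\n    return int(np.sum(k**2 + mu < 0))  # one growing mode per w^2 < 0\n\nlam = 1.0\nM = np.zeros((4, 4))\nM[0, 1], M[1, 0] = -lam, lam  # a single 2x2 block, so rank(M) = 2\n\nprint(count_unstable(M, k=0.1),        # k inside the window |k| < lam\n      np.linalg.matrix_rank(M) \/\/ 2)  # both print 1\n\end{verbatim}\nWithin the window $|k| < |\lambda_m^l|$ the two printed numbers agree, in accordance with the counting rule derived next.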
\nFrom the correlation function, we can extract the matrix $M^l_{\\cal IJ}$\nby using a spatial version of the equal-time commutation relation of symmetry generators, where the $x^l$ direction plays the role of the temporal direction.\nA $(D- p)$-dimensional plane localized at $x^{i_1} = \\cdots = x^{i_p} =0$ is denoted by $S^{i_1...i_p}$.\nTo have a commutation relation, we also introduce a plane $S^{i_1...i_p}_{x^{i_p} = c}$ localized at $x^{i_1} = \\cdots = x^{i_{p-1}} =0$, $x^{i_p} = c$ with $c$ being some constant.\nWe first take the large volume limit $\\Sigma_I \\to S^{i_1...i_{p_I} l} \\cup \\b{S}^{i_1...i_{p_I} l}_{x^l = - \\epsilon}$ for an infinitesimal positive parameter $\\epsilon$. Here, $S^{i_1...i_{p_I} l}$ intersects with $\\Omega_J$ while $\\b{S}^{i_1...i_{p_I} l}_{x^l = - \\epsilon}$ (which is $S^{i_1...i_{p_I} l}_{x^l = - \\epsilon}$ with an opposite orientation) does not.\nThen, we take the large volume limit \n$\\Sigma_J \\to \nS^{j_1...j_{p_J} l}_{x^l =\\frac{\\epsilon}{2} } \\cup \n\\b{S}^{j_1...j_{p_J} l}_{x^l = - \\frac{\\epsilon}{2} }$.\nIn these limits, we have \n$\\Sigma_I \\cap \\Omega_J \\to\nS^{i_1...i_{p_I} l} \\cap \nS^{j_1...j_{p_J} l} = \nS^{i_1...i_{p_I} l j_1...j_{p_J}}$,\nand, hence,\n\\begin{align}\n&\n\\frac{\\mathrm{i} \\vevs{ Q_I (\\Sigma_{I}) Q_J(\\Sigma_{J})}}{\\vol (\\Sigma_I \\cap \n\\Omega_J)} \n\\nonumber \\\\\n&\\to\n\\frac{\\mathrm{i} \\vevs{ Q_I (S^{i_1...i_{p_I} l} \\cup \n\\b{S}^{i_1...i_{p_I} l}_{x^l = - \\epsilon} ) Q_J(S^{j_1...j_{p_J} l}_{x^l =\\frac{\\epsilon}{2} } \\cup \n\\b{S}^{j_1...j_{p_J} l}_{x^l = - \\frac{\\epsilon}{2} })}}{\\vol (S^{i_1...i_{p_I} l j_1...j_{p_J}})} \n\\nonumber \\\\\n&\n=:\n\\rho^{i_1...i_{p_I} l j_1...j_{p_J}}_{IJ}\n\\,.\n\\label{rho}\n\\end{align}\nHere, $\\vol (\\Sigma)$ is the volume of a subspace $\\Sigma$ including the temporal direction.\nBy the explicit calculation of \\er{correlation}, we have\n\\begin{align}\n&\n\\rho^{i_1...i_{p_I} l j_1...j_{p_J}}_{IJ}\n=(-1)^{p_{IJ} (D-p_{IJ}-1)+1}\nM^{i_1...i_{p_I} l j_1...j_{p_J}}_{IJ}\\,.\n\\label{rhoM}\n\\end{align}\nCombining \\ers{Nunst} and \\eqref{rhoM}, we arrive at \n\\begin{equation}\n N^l_{\\rm unst}=\\frac{1}{2}\\rank (\\rho^l_{\\cal IJ}) \\,,\n \\label{main}\n\\end{equation}\nwhere we have defined \n\\begin{equation}\n\\label{rho_cal}\n\\rho^l_{\\cal IJ} \n=\n\\rho^{i_1...i_{p_I} l j_1...j_{p_J}}_{IJ}.\n\\end{equation}\n\nNote that our counting rule in \\er{main} is similar to that of the type-B NG modes for the higher-form symmetries derived in Ref.~\\cite{Hidaka:2020ucc}.\nThe difference between the counting of type-B NG modes and unstable modes is whether the directions of the nonzero components of field strengths for the background fields are spatial directions $\\mathrm{d} A_{IJ} \\sim \\mathrm{d} x^{i_1} \\wedge \\cdots \\wedge \\mathrm{d} x^{i_{p_{IJ}+1}} $ or temporal directions $\\mathrm{d} A_{IJ} = \\frac{1}{p_{IJ}!}E_{IJ, i_1...i_{p_{IJ}}} \\mathrm{d} x^0 \\wedge \\mathrm{d} x^{i_1} \\wedge \\cdots \\wedge \\mathrm{d} x^{i_{p_{IJ}}}$.\n\nWe also note that the correlation function evaluated here indicates only the presence of instabilities rather than the presence of long-lasting ones.\nAlthough we assumed $\\mathrm{d} A_{IJ}$ to be a constant when evaluating the correlation function, the background field may evolve dynamically to relax the instabilities in realistic systems. 
It is indeed the case for the chiral plasma instability discussed in Refs.~\\cite{Joyce:1997uy, Akamatsu:2013pjd}.\n\n\n\n\\section{Examples}\nIn this section, we provide examples of unstable NG modes to verify the validity of our general counting rule in physical systems.\n\n\\subsection{Chiral plasma instability}%\nFirst, we consider electromagnetism in the background of an axion field:\n\\begin{equation}\n S\n =\n-\\frac{1}{2e^2} \\int \\mathrm{d} a \\wedge *\\mathrm{d} a \n+ \n \\frac{1}{2}\\int \\Theta_{aa} \\mathrm{d} a \\wedge \\mathrm{d} a\\,. \n\\end{equation}\nHere, the photon is described by a 1-form gauge field $a = a_\\mu \\mathrm{d} x^\\mu$, which can be understood as a NG mode for the spontaneously broken $U(1)$ 1-form symmetry, $e$ is a coupling constant, and $\\Theta_{aa} = -2 C \\mu_5 x^0$ is the background axion field with $\\mu_5$ corresponding to the chiral chemical potential for $C = 1\/(4\\pi^2)$. In fact, the second term above can be rewritten as the effective Chern-Simons term $S_{\\rm CS} = -C \\int \\mu_5 \\epsilon^{0ijk} a_i \\partial_j a_k$~\\cite{Redlich:1984md} by integration by parts. Then, taking its variation with respect to $a_i$ reproduces the so-called chiral magnetic effect: $j^i := e^2\\delta S_{\\rm CS}\/\\delta a_i = -2C e^2\\mu_5 \\epsilon^{0ijk} \\partial_j a_k$~\\cite{Vilenkin:1980fu,Nielsen:1983rb,Fukushima:2008xe}.\n\nThe equation of motion for the photon is $\\frac{1}{e^2} \n\\square \na^i \n- \n2C \\epsilon^{0 i j k} \n\\mu_5 \\partial_j a_k=0,\n$\nwhere we have taken the temporal gauge $a_0 =0$ and the Gauss law constraint $\\partial_i a^i =0$. Under the plane wave ansatz, we have the equation of motion in momentum space:\n$\n(\\omega^2 - |\\boldsymbol{k}|^2) \\hat{a}^i \n+\n \\mathrm{i}\n \\hat{M}^{ilj} k_l\n \\hat{a}_j=0,\n$\nwhere we defined $\\hat{a}_\\mu = \\frac{1}{e}a_\\mu$ and \n$\\hat{M}^{ilj}\n=e^2 M^{ilj} = \n-\n2 \\epsilon^{0ilj} C e^2 \\mu_5 $.\nFor instance, we choose the wave vector along the $x^1$ direction, $\\boldsymbol{k} = (k_1, 0,0)$.\nIn this case, the Gauss law constraint leads to $a_1=0$ for nonzero $k_1$.\nThen the equation of motion is simplified as\n\\begin{equation}\n \\mathrm{i} k_1 \n\\mtx{\n0 &\n \\hat{M}^{213 } \n\\\\\n \\hat{M}^{312}\n& 0\n}\\mtx{\\hat{a}_2 \\\\ \\hat{a}_3}\n= \n(\\omega^2 - k_1^2)\n\\mtx{\\hat{a}_2 \\\\ \\hat{a}_3}.\n\\end{equation}\nThe dispersion relation is obtained as\n$\\omega^2 \n= k_1^2 \\pm \n| \\hat{M}^{213 }\n k_1|\n$.\nTherefore, we have one unstable mode in the $x^1$ direction (and similarly for the $x^{2,3}$ directions). 
\nThis is the chiral plasma instability~\cite{Joyce:1997uy,Akamatsu:2013pjd}.\n\nNext, we discuss the relation between the matrix $M^{ilj} $ and the correlation function of symmetry generators.\nThe equation of motion for $a$ gives the conserved charge on a closed surface $\Sigma_a$:\n\begin{equation}\n Q (\Sigma_a)\n = \n\int_{\Sigma_a}\left(\n-\frac{1}{e^2} * \mathrm{d} a\n+\n\Theta_{aa} \mathrm{d} a \right)\,.\n\end{equation}\nThe correlation function between two symmetry generators is \n\begin{equation}\n \mathrm{i} \vevs{ Q (\Sigma_a) Q (\Sigma'_a)}\n =\n-\n\int_{\Sigma_a \cap \Omega_a' }\mathrm{d} \Theta_{aa}\,,\n\end{equation}\nwhere $\Omega_a'$ is a world volume whose boundary is the closed surface $\Sigma_a'$.\nNow, we take the limit \n$\Sigma_a \to S^{il} \cup \b{S}^{il}_{x^l = -\epsilon} $\nand then \n$\Sigma_a' \to S^{jl}_{x^l = \epsilon\/2} \cup \b{S}^{jl}_{x^l = -\epsilon\/2} $.\nWe have a temporally extended one-dimensional subspace \n$\Sigma_a \cap \Omega_a' \to S^{ilj}$. \nThen, the matrix $\rho^{ilj}$ in \er{rho} becomes\n\begin{equation}\n\begin{split}\n& \n\rho^{ilj}\n = \n\frac{2C \epsilon^{0i l j} \int_{S^{ilj}} \mu_5 \mathrm{d} x^0 }{\vol(S^{ilj})}\n= - M^{ilj}\,. \n\end{split}\n\end{equation}\nThe number of unstable NG modes along the $x^l$ direction coincides with $\frac{1}{2}\rank (\rho^{ilj})$.\n\n\n\subsection{Dynamical axion in electric field}%\nThe next example is the dynamical axion in a background electric field in $(3+1)$ dimensions~\cite{Ooguri:2011aa}. We consider the effective action\n\begin{equation}\n S = -\frac{F^2_\phi}{2} \int |\mathrm{d} \phi|^2 -\frac{1}{2e^2}\n\int |\mathrm{d} a|^2\n+ \n\int \mathrm{d}\phi \wedge \mathrm{d} a \wedge A_{\phi a}\,,\n\end{equation}\nwhere $\phi$ is a 0-form axion field, $F_\phi$ is the decay constant of the axion, and $\mathrm{d} A_{\phi a} = E_{\phi a, i} \mathrm{d} x^0 \wedge \mathrm{d} x^i$ with a constant $E_{\phi a,i }$.\nIn the plane wave basis, the equations of motion for $\phi$ and $a$ can be written as\n$ (\omega^2 -|\boldsymbol{k}|^2) \hat{\phi }\n+\n\mathrm{i} k_l \hat{M}^{lj}_{\phi a}\n \hat{a}_j \n=0$\nand \n$(\omega^2 -|\boldsymbol{k}|^2) \hat{a}^i \n+\n\mathrm{i} k_l \n\hat{M}^{i l}_{ a \phi}\n\hat\phi \n=0$,\nrespectively.\nHere, we defined\n$\hat\phi = F_\phi \phi$,\n $\hat{a}_\mu = \frac{1}{e} a_\mu$, and \n $\hat{M}^{li}_{\phi a} = - \hat{M}^{il}_{ a \phi }\n = \frac{e} {F_{\phi}}M^{li}_{\phi a} = \n \epsilon^{0 lik}\frac{e} {F_{\phi}} E_{\phi a,k }$.\nFor concreteness, we choose our coordinates such that \n$E_{\phi a,i } = (0,0,E_{\phi a,3 })$,\nand we focus on the wave vector $\boldsymbol{k} = (k_1, 0,0)$.\nWe then find the dispersion relations $ \omega^2 = k_1^2$, $ k_1^2 \pm |k_1 \hat{M}_{\phi a}^{12}|$, among which there is one unstable mode.\n\nWe can also find nontrivial correlations between symmetry generators. 
The equations of motion for $\\phi$ and $a$ lead to the conserved charges\n\\begin{equation}\n\\begin{split}\n Q_{\\phi} (\\Sigma_\\phi )\n&\n= \\int_{\\Sigma_\\phi} \\left(-F^2_\\phi *\\mathrm{d} \\phi +\n\\mathrm{d} a \n\\wedge A_{\\phi a}\\right),\n\\\\\n Q_{a} (\\Sigma_a)\n&\n= \\int_{\\Sigma_a} \\left(-\\frac{1}{e^2}* \\mathrm{d} a + \n\\mathrm{d}\\phi \\wedge A_{\\phi a}\\right).\n\\end{split}\n\\end{equation}\nHere, $\\Sigma_\\phi$ is a three-dimensional closed subspace.\nThe correlation function between them is\n\\begin{equation}\n\\mathrm{i} \\vevs{ Q_{\\phi} (\\Sigma_\\phi)\nQ_{a} (\\Sigma_a)}\n = - \n\\int_{\\Sigma_\\phi \\cap \\Omega_a} \\mathrm{d} A_{\\phi a}\\,.\n\\end{equation}\nWe take the limit \n$\\Sigma_\\phi \\to S^{l} \\cup \\bar{S}^{l}_{x^l = - \\epsilon}$ \nand then the limit \n$\\Sigma_a \\to S^{jl}_{x^l =\\epsilon\/2} \\cup \\b{S}^{jl}_{x^l = -\\epsilon\/2}$.\nIn this case, we have \n$\\Sigma_\\phi \\cap \\Omega_a\n\\to S^{lj}$, and\n\\begin{equation}\n\\begin{split}\n&\n\\rho_{\\phi a}^{lj}\n= \n- M^{lj}_{\\phi a}\n\\,.\n\\end{split}\n\\end{equation}\nFor the above choice of $(E_{\\phi a , i})$, we have, e.g.,\n$\\frac{1}{2}\\rank (\\rho^1) = 1$, \nwhich matches the number of unstable NG modes propagating along the $x^1$ direction.\n\n\n\\subsection{Unstable NG modes from 0-form symmetry breaking}\nBased on our generic description of unstable NG modes above, we give the third example of an unstable NG mode for a conventional 0-form symmetry with a background vector field, which is simple yet new to the best of our knowledge.\n\nWe consider the following low-energy effective action for the spontaneous breaking of the 0-form symmetry $U(1)\\times U(1) \\to \\{1\\}$ in $(2+1)$ dimensions:\n\\begin{equation}\n S = -\\frac{F_\\phi^2}{2} \n\\int |\\mathrm{d}\\phi|^2 - \\frac{F_\\chi^2}{2} \\int |\\mathrm{d}\\chi|^2\n+ \n\\int \\mathrm{d}\\phi \\wedge \\mathrm{d}\\chi \\wedge A_{\\phi \\chi}\\,.\n\\label{S_0}\n\\end{equation}\nHere, $(\\mathrm{e}^{\\mathrm{i} \\phi }, \\mathrm{e}^{\\mathrm{i}\\chi})$ is the set of \nNG modes for the $U(1)\\times U(1)$ symmetry,\nand \n$A_{\\phi \\chi} = - A_{\\chi \\phi}\n= A_{\\phi \\chi, \\mu} \n\\mathrm{d} x^\\mu \n= E_{\\phi \\chi, i } x^0 \\mathrm{d} x^i $\nis a background field with a constant $E_{\\phi \\chi, i}$.\nOne may understand $A_{\\phi \\chi}$ as a background gauge field whose conserved charge is the number of linked vortex loops for $\\phi $ and $\\chi$~\\cite{Brauner:2020rtz}.\nThe equations of motion for $\\phi$ and $\\chi $ in the plane wave basis are\n$ (\\omega^2 - |\\boldsymbol{k}|^2) \\hat{\\phi}\n+\n\\mathrm{i} k_l \\hat{M}^l_{\\phi \\chi} \\hat\\chi \n=0$\nand\n$ (\\omega^2 - |\\boldsymbol{k}|^2) \n\\hat\\chi\n-\n\\mathrm{i} k_l \\hat{M}^l_{\\phi \\chi} \\hat\\phi =0$,\nrespectively.\nHere, we defined \n$\\hat{\\phi} = F_\\phi \\phi$, \n$\\hat{\\chi} = F_\\chi \\chi$,\nand\n$ \n\\hat{M}_{\\phi \\chi}^l\n= \n\\frac{1}{F_\\phi F_\\chi} M_{\\phi \\chi}^l \n= \\frac{1}{F_\\phi F_\\chi}\n\\epsilon^{0 l i } E_{\\phi \\chi, i }$. 
\nWithout loss of generality, we choose the direction of the background field so that only $E_{\phi \chi , 2}$ is nonvanishing.\nIn this case, we have \n$M^1_{\phi \chi} \n=\n\epsilon^{0 12} E_{\phi \chi, 2}\n=\n- E_{\phi \chi, 2}\n$\nand $M^2_{\phi \chi} =0$.\nWe also choose a wave vector \n$\boldsymbol{k} = (k_1,0)$.\nThen, we have the dispersion relation\n$\omega^2 = k_1^2\n\pm \n|k_1 \hat{M}_{\phi \chi}^1|$, \nand there exists one unstable mode for\n$|k_1| < |\hat{M}_{\phi \chi}^1|$ along the $x^1$ direction.\n\nNext, we relate the matrix $M_{\phi \chi}^l $ to the correlation function of symmetry generators. The conserved charges for the spontaneously broken 0-form symmetries are found from the equations of motion as\n\begin{equation}\n\begin{split}\n Q_\phi (\Sigma_\phi)\n &= \int_{\Sigma_\phi} \n (-F_\phi^2 *\mathrm{d}\phi \n+\n\mathrm{d}\chi \wedge A_{\phi \chi})\,,\n\\\nQ_\chi (\Sigma_\chi)\n &\n =\n \int_{\Sigma_\chi} \n (-F_\chi^2 *\mathrm{d}\chi \n-\n\mathrm{d}\phi \wedge A_{\phi \chi} )\,,\n\end{split}\n\end{equation} \nwhere $\Sigma_{\phi, \chi}$ are closed surfaces. \nThe conserved charges satisfy the correlation function\n\begin{equation}\n \mathrm{i} \vevs{Q_\phi(\Sigma_{\phi}) Q_\chi(\Sigma_{\chi})}\n = \n-\int_{\Sigma_\phi \cap \Omega_\chi} \mathrm{d} A_{\phi \chi} \,.\n\end{equation}\nHere,\n$\Omega_\chi$ is\na three-dimensional subspace whose boundary is $\Sigma_\chi$.\nTo obtain the matrix $M_{\phi \chi}^l$, we take the limit \n$\Sigma_\phi \to S^l\cup \bar{S}^l_{x^l = -\epsilon}$\nand then the limit\n$\Sigma_\chi \to \nS^l_{x^l = \epsilon\/2}\cup \bar{S}^l_{x^l = -\epsilon\/2}$.\nWe have \n$\Sigma_\phi \cap \Omega_\chi \to S^l $ and\n\begin{equation}\n\begin{split}\n\rho_{\phi \chi}^l\n= M^l_{\phi \chi} \,.\n\end{split}\n\end{equation}\nFor the $x^1$ direction, we have $\frac{1}{2} \rank (\rho^1) = 1$, which \ncoincides with the number of unstable NG modes.\n\n\n\n\sectaps{Discussions}%\nWe remark that our derivation is similar to that of the counting rule for the so-called type-B NG modes~\cite{Watanabe:2012hr,Hidaka:2012ym,Watanabe:2014fva,Hidaka:2020ucc}. \nIn our derivation, the background fields with only spatial directions \n$\mathrm{d} A_{IJ} \sim \mathrm{d} x^{i_1} \wedge \cdots \wedge \mathrm{d} x^{i_{p_{IJ}+1}} $ of the latter are replaced by those with the temporal direction $\mathrm{d} A_{IJ} = \frac{1}{p_{IJ}!}E_{IJ, i_1...i_{p_{IJ}}} \mathrm{d} x^0 \wedge \mathrm{d} x^{i_1} \wedge \cdots \wedge \mathrm{d} x^{i_{p_{IJ}}} $, and the commutation relation of symmetry generators, i.e., the time-ordered product of symmetry generators, is replaced by the product ordered in the spatial directions.\nOne main difference here is that there are various choices of spatial directions, which implies that the unstable NG modes depend on the direction of the wave vector.\nThe classification of conventional type-B NG modes and unstable NG modes in the presence of both spatial and temporal background fields is deferred to future work. \nOne may also be able to extend our counting rule for unstable NG modes to finite temperature in a way similar to the one for conventional NG modes of 0-form symmetries in Ref.~\cite{Minami:2015uzo}. 
In such a case, dissipative effects could modify the dispersion relation of unstable NG modes; see, e.g., Refs.~\\cite{Joyce:1997uy,Akamatsu:2013pjd} in the case of chiral plasma instability.\nFinally, it would also be interesting to explore possible physical realizations of the theory considered in \\er{S_0} with straightforward extensions to $D$ dimensions by replacing $A_{\\phi \\chi}$ with a $(D-2)$-form field.\n\n\\section*{Acknowledgements}\nThis work is supported in part by the Keio Institute of Pure and Applied Sciences (KiPAS) project at Keio University and JSPS KAKENHI Grant No.~JP19K03852 (N.~Y.) and by JSPS KAKENHI Grants No.~JP21J00480 and No.~JP21K13928 (R.~Y.).\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nRecommender systems research has employed item ratings, bookmarking actions and other user activities as primary sources of information to generate personalized suggestions because they provide evidence about user preferences. \nIn particular, User-to-User Collaborative Filtering \\cite{Desrosiers-Karypis:11} (henceforth, denoted as U2UCF) analyzes the ratings of items provided by users in order to identify ``like-minded'' people for preference prediction. However, the sparsity of the rating matrices affects recommendation performance. Thus, recent algorithms have been proposed to improve the recognition of preference similarity from rating data (e.g., Matrix Factorization algorithms \\cite{,Koren-Bell:11} such as SVD++ \\cite{Koren:08}), possibly combined with trust information derived from the establishment of social links among users; e.g., \\cite{Tang-etal:13,Yang-etal:17}. While these algorithms achieve good accuracy and coverage, they challenge the explanation of recommendation results because the policies applied to rank items can hardly be described in an intuitive way.\n\nIn the present work, we are interested in assessing whether U2UCF, which has nice explanation properties, can be improved by using other types of information that are complementary to rating data.\nSpecifically, we investigate whether the identification of frequently co-occurring interests in information search can be used to improve recommendation performance. \nWe start from the observation that, if the people who search for items tagged with a certain information category typically also search for items tagged with another category, the two categories might represent related interests.\nTherefore, even though we ignore the reasons behind this relatedness, we might leverage the strength of the association in preference estimation.\nIn this perspective, we propose to to build rich user profiles by extending the preferences for categories of items identified from rating behavior with frequently co-occurring interests for item categories, extracted from the logs of search engines. It can be noticed that interest co-occurrence can be learned by analyzing anonymous interaction sessions because it is aimed at describing general user behavior. Therefore, it can be applied to anonymized search logs, as long as search sessions can be identified.\n\nStarting from a category-based representation of user preferences, based on the analysis of ratings and on items categorization, we propose the following research question: \n\n{\\em RQ: How does the integration of data about interest co-occurrence in information search influence the performance of a collaborative recommender system that manages category-based user profiles? 
}\n\nIn order to answer this question, we start from a {\em Simple Category-based Collaborative Filtering (SCCF)} algorithm which infers a user's preferences on the basis of the distribution of her\/his ratings on item categories: a category-based user profile provides a conceptual view on preferences, so that user similarity can be computed by abstracting from item ratings, thus contrasting data sparsity; see \cite{Sieg-etal:07b,Sieg-etal:10b}.\nThen, we propose the {\em Extended Category-based Collaborative Filtering (ECCF)} algorithm, which enriches category-based user profiles with evidence about interests that frequently co-occur in information search. ECCF employs the extended user profiles for rating estimation.\n\nIn order to evaluate the recommendation performance of ECCF, we extract information about co-occurring interests by analyzing the query log of a widely used search engine. Then, we test our algorithm by applying it to the Yelp Dataset \cite{Yelp-dataset}, which stores user ratings of various types of businesses.\n\nWe analyze a few settings of ECCF in order to integrate different amounts of information about co-occurring preferences with rating data. In our experiments, we evaluate performance by taking U2UCF and SCCF as baselines: these algorithms differ in neighbor identification but are based on the same rating estimation approach. Therefore, they are a good basis to assess the impact of extended category-based user profiles on preference prediction. We also compare these algorithms with SVD++ to evaluate whether preference extension challenges the capability of recommending relevant items.\nThe results of our experiments show that ECCF outperforms U2UCF and SCCF in accuracy, MRR, diversity of recommendations and user coverage; moreover, it outperforms SVD++ in accuracy and diversity of the generated suggestion lists. We thus conclude that preference co-occurrence information can positively contribute to the identification of good neighbors for rating estimation.\n\nIn summary, the main contributions of this work are:\n\begin{itemize}\n \item\n The integration of data about frequently co-occurring information interests (inferred by observing general search behavior) with category-based user preferences, in order to acquire rich individual user profiles.\n \item The ECCF category-based recommendation algorithm, which extends User-to-User Collaborative Filtering to take both frequently co-occurring information interests and preference similarity into account in neighbor identification.\n \item Evaluation results aimed at proving the benefits of frequently co-occurring interests to Collaborative Filtering.\n\end{itemize}\n\nIn the following,\nSection \ref{sec:related} positions our work with respect to related research. Section \ref{model} presents ECCF. Section \ref{sec:validation} describes the experiments we carried out to validate ECCF and discusses the evaluation results. Section \ref{sec:conclusions} concludes the paper and outlines our future work. \n\n\n\section{Related Work}\n\label{sec:related}\n\n\subsection{Recommender Systems}\n\nCross-domain recommendation has received the researchers' attention as a way to employ multiple information sources to contrast data sparsity; e.g., \cite{Fernandez-Tobias-etal:16}.
Moreover, holistic user models have been developed that jointly analyze different types of user behavior to enhance the recognition of the user's needs; e.g., \cite{Teevan-etal:05, Musto-etal:2018b}.\nHowever, the fusion of personal information from different applications is problematic, unless it is done within a tightly integrated software environment. For instance, most people operate anonymously \cite{Greenstein-etal:17} or have multiple identities \cite{Doychev-etal:14}; moreover, most user activity logs are anonymized for privacy preservation purposes. It is thus interesting to consider other types of knowledge integration that do not require user identification across applications. Our work investigates this path of research.\n\n\nCollaborative Filtering generates suggestions by analyzing item ratings to identify similar users or similar items.\nSeveral algorithms have been developed, from K-Nearest Neighbors (KNN) to more recent ones such as Matrix Factorization \cite{Desrosiers-Karypis:11,Koren-Bell:11}. In our work we adopt KNN because it has nice explanation capabilities and has proved to achieve good performance in a comparison with other approaches \cite{Jannach-Ludewig:17,Ludewig-Jannach:18}. \n\nOntological user profiles model preferences at the semantic level. In \cite{Sieg-etal:07b,Sieg-etal:10b}, Sieg et al. propose to exploit a taxonomy whose concepts represent item types, and to infer user interests on the basis of the observed ratings given to the instances of such concepts. The neighborhood for rating estimation is then identified by measuring the semantic similarity between ontological user profiles. The category-based user similarity we propose is close to this approach. However, we go one step further in the identification of preferences by extending the user profiles with frequently co-occurring information interests.\nThis type of extension also differentiates our work from that of Ronen et al., who propose to extend the preferences of the individual user by analyzing her\/his behavior in search logs \cite{Ronen-etal:16}: that work assumes that the user's activities can be tracked across applications and extends the user profile by analyzing her\/his overall behavior. In contrast, we extend user preferences by analyzing anonymous data about general search behavior. \n\nSen et al. define tag-aware recommender systems as\n``recommender algorithms that predict user's preferences for tags''.\nIn \cite{Sen-etal:09} they describe different signs of interest; e.g., searching for or applying a tag, and so forth.\nOur work relates to tag-aware recommender systems because we analyze rating behavior on items associated with categories expressed as tags. However, we do not consider any other types of interaction with tags for estimating user preferences. \n\nIn \cite{Gemmel-etal:12}, Gemmel et al. present a linear-weighted hybrid framework for resource recommendation that models different scenarios, among which is tag-specific item recommendation. They propose to match users and items on the basis of their tag profiles. Differently, we match users on the basis of category-based profiles learned from rating behavior.\nThe same kind of difference holds between our work and that of Nakamoto \cite{Nakamoto:2007}.\n\nWhile TagiCoFi \cite{Zhen:2009} employs user similarities defined from tagging information to regularize Matrix Factorization, we use tags in a KNN algorithm. 
\nIn \cite{Tso-Sutter:2008}, Tso and Sutter extend the ratings matrix using tagging information.\nThey reduce the three-dimensional correlations $\langle user, item, tag \rangle$ to two-dimensional correlations $\langle user, item \rangle$, $\langle user, tag \rangle$ and $\langle item, tag \rangle$. Then, they apply a fusion method to combine the correlations for rating prediction. Differently, we extend the rating matrix with the categories (tags) associated with the items rated by users and with further categories identified from general search behavior. \n\nRecently, rating information has been combined with other types of data to improve recommendation. For instance, item reviews are used, possibly in combination with ratings, in \cite{Chen-etal:15,Musat-Faltings:15,Muhammad-etal:15,Lu-etal:18}.\nMoreover, trust relations and reputation are used to steer recommendation on the basis of the feedback on items provided by trusted parties; e.g., \cite{Kuter-etal:07,Liu-Lee:10,Tang-etal:13,Alotaibi-Vassileva:16,Mcnally-etal:14,Du-etal:17,Yang-etal:17}.\nIn \cite{Mauro-etal:19}, we investigate multi-faceted trust for personalized recommendation. \nHowever, in the present work we focus on rating information to assess the potential improvement of Collaborative Filtering when combined with general preference co-occurrence. \n\n\n\subsection{Analysis of Interaction Sessions}\nThe identification of interest co-occurrence we propose is related to a few works supporting query expansion, query reformulation and term suggestion in Information Retrieval. Some researchers propose to analyze session-based user behavior in order to detect co-occurrence relations useful to improve search queries, taking the search context into account. For instance, in \cite{Cao-etal:08} Cao et al. suggest queries on the basis of the context provided by the user's recent search history, by clustering queries on the basis of the search results visited by users. Moreover, Huang et al. \cite{Huang-etal:03} and Chen et al. \cite{Chen-etal:08} detect term co-occurrence in search sessions to group sets of relevant words that can be mutually suggested. \nOur work is different because we adopt a linguistic interpretation approach (based on lemmatization and Word Sense Disambiguation) to find the concepts referenced in the queries; see \cite{Mauro-Ardissono:17b}. \nTherefore, we extract information about {\em concept co-occurrence}, which is more general than {\em term co-occurrence}. \n\nIt is worth mentioning that our analysis of interaction sessions differs from session-based recommendation, which analyzes the user's behavior during an interaction session to identify relevant item(s) to suggest; e.g., see \cite{Garcin-etal:13,Jannach-Ludewig:17,Greenstein-etal:17,Jannach-etal:17}. In fact, we mine interest co-occurrence by abstracting from the particular sequence of queries performed by the users. Moreover, as previously discussed, we mine concept associations. \n\n\n\n\subsection{Graph-based Information Filtering}\nKnowledge graphs describe item features and relations among entities, supporting the analysis of item relatedness, as well as similarity for information filtering and top-N recommendation. \nIn several works these graphs are extracted from document pools and\/or from the Linked Data Cloud. For instance, CoSeNa \cite{Candan-etal:09} employs keyword co-occurrence in the corpus of documents to be retrieved, and ontological knowledge about the domain concepts, to support the exploration of text collections using a keywords-by-concepts graph. Moreover, in \cite{DiNoia-etal:16}, Di Noia et al. 
create a relatedness graph by analyzing external data sources such as DBpedia in order to support the evaluation of semantic similarity between items. Analogously, item features have been extracted from the Linked Data Cloud to improve recommendation performance in \cite{Musto-etal:16,Ragone-etal:17,Musto-etal:17,Musto-etal:18}. \n\nSome works attempt to extend the relations among information items by integrating data derived from the observation of different types of user behavior. E.g., the Google search engine manages the Knowledge Graph \cite{GoogleKnowledgeGraph} to relate facts, concepts and entities depending on their co-occurrence in queries. Moreover, entity2rec learns user-item relatedness from knowledge graphs by analyzing data about users' feedback and item information from Linked Open Data \cite{Palumbo-etal:17}. Furthermore, in \cite{Oramas-etal:15} Oramas et al. propose a hybrid recommender that integrates users' implicit feedback into a knowledge graph describing item information, enriched with semantic data extracted from external sources. Finally, in \cite{Vahedian-etal:17}, Vahedian et al. generalize graph-based approaches by simultaneously taking into account multiple types of relations among entities: they introduce meta-paths to represent patterns of relations and apply random walks along such paths to identify relevant entities to suggest.\n\nOur work has analogies with the works listed above because we employ a graph-based type of knowledge representation. However, we work at the conceptual level: our knowledge graph relates item categories instead of individual users and\/or items. Moreover, we do not compute similarity or relatedness by means of the knowledge graph: we use the graph to extend category-based user profiles. In turn, those profiles are employed in neighborhood identification. The separation between how preferences are inferred and how they are used for recommendation makes it possible to extend both types of activities in a modular way.\n\n\n\n\section{Extended Category-based Collaborative Filtering}\n\label{model}\nWe describe ECCF incrementally, starting from U2UCF, which provides the basic match-making approach for rating estimation. \n\n\subsection{User-to-User Collaborative Filtering}\nIn \cite{Ricci-etal:11}, Ricci et al. define U2UCF as follows: ``the simplest and original implementation of this approach recommends to the active user the items that other users with similar tastes liked in the past. The similarity in taste of two users is calculated based on the similarity in the rating history of the users''.\nGiven: \n\begin{itemize}\n \item $U$ as the set of users and $I$ as the set of items;\n \item $r: U X I \Rightarrow {\rm I\!R}$ as a map of ratings;\n \item $R \in {\rm I\!R}^{U X I}$ as the users-items rating matrix, where each value is a rating $r_{ui}=R[u,i]$ given by a user $u \in U$ to an item $i \in I$. \n\end{itemize}\nThe recommender system estimates $u$'s rating of $i$ ($\hat{r}_{ui}$) as follows:\n \begin{equation}\n \label{eq:rmeancentering}\n \hat{r}_{ui} = \bar{r}_u + \frac{ \n \t\sum\limits_{v\in N_i(u)}\sigma(u,v) (r_{vi} - \bar{r}_v)\n }{\n \t\sum\limits_{v\in N_i(u)}|\sigma(u,v)|}\n \end{equation}\nwhere $N_i(u)$ is the set of neighbors of $u$ that rated item $i$ and $\sigma(u,v)$ is the similarity between user $u$ and user $v$ ($v \in N_i(u)$). The similarity among users is computed by applying a distance metric, e.g., Cosine or Pearson similarity, to their rating vectors. 
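\nAs an illustration, the following sketch implements the prediction of Equation (\ref{eq:rmeancentering}) on a ratings matrix with missing values encoded as NaN; the neighborhood size and the data layout are illustrative assumptions.\n\begin{verbatim}\nimport numpy as np\n\n# Sketch of the mean-centered KNN prediction defined above.\n# R[u, i]: rating of user u for item i (NaN if unrated);\n# sigma[u, v]: precomputed user-to-user similarity (e.g., Cosine).\ndef predict(u, i, R, sigma, k=20):\n    r_bar = np.nanmean(R, axis=1)              # per-user mean rating\n    cands = [v for v in np.where(~np.isnan(R[:, i]))[0] if v != u]\n    neigh = sorted(cands, key=lambda v: -abs(sigma[u, v]))[:k]  # N_i(u)\n    num = sum(sigma[u, v] * (R[v, i] - r_bar[v]) for v in neigh)\n    den = sum(abs(sigma[u, v]) for v in neigh)\n    return r_bar[u] + num \/ den if den > 0 else r_bar[u]\n\end{verbatim}\nThe algorithms presented below only differ in how $\sigma(u,v)$ is computed.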
\n\n\n\\subsection{Simple Category-based Collaborative Filtering (SCCF)}\n\\label{category-based-CF}\nSCCF manages user profiles in which the user's interest in each item category is represented as a positive number; the higher the value, the stronger the interest. \nWe define:\n\\begin{itemize}\n \\item $U$, $I$, $r$ and $R$ as above; \n \\item $C$ as the set of item categories;\n \\item $f: U \\times C \\rightarrow {\\rm I\\!N}$ as a map between users and categories;\n \\item $UC \\in {\\rm I\\!N}^{U \\times C}$ as the Users-Categories matrix. For each $u \\in U$ and $c \\in C$, $UC[u,c]$ represents the interest of $u$ in $c$. We take as evidence of interest the {\\em frequency of exploration} of a category, i.e., the frequency of interaction of the user with items associated with the category. \n\\end{itemize}\nCategory exploration can be mapped to different types of user behavior; e.g., tagging items and searching for items by tag. We map exploration to rating behavior and we define $UC[u, c]$ as the number of ratings that $u$ has given to the items associated with $c$. \n\nSCCF computes user similarity on the basis of the estimated user preferences for item categories. Specifically, $\\sigma(u, v)$ is defined as the Cosine similarity of the user vectors in the $UC$ matrix and it is used in Equation (\\ref{eq:rmeancentering}) to estimate ratings. Thus, $\\hat{r}_{ui}$ is computed on the basis of the ratings $r_{vi}$ provided by the users $v \\in U$ whose preferences for categories are similar to those of $u$. \n\n\n\\subsection{Acquisition of Preferences Co-occurrence}\n\\label{graph}\nIn order to learn the strength of the associations between item categories in search behavior, we analyze their co-occurrence in the search sessions of a query log. By co-occurrence we mean the fact that two or more categories are referred to by the queries belonging to the same session. \nIn the following we summarize the analysis of category co-occurrence; see \\cite{Mauro-Ardissono:18} for details.\n\nThe Category Co-occurrence Graph ($CCG$) represents category co-occurrence:\nin the $CCG$, nodes represent the data categories referenced in the analyzed queries and the weight of edges represents the co-occurrence frequency of the connected categories; i.e., how many times the categories have been identified within the same search sessions.\n\nWe retrieve the categories occurring in the queries by applying a Natural Language approach that identifies the referred concepts in a flexible way, by considering synonyms and by applying Word Sense Disambiguation to resolve the meaning of words; see \\cite{Ardissono-etal:16,Mauro-Ardissono:17b}. For Word Sense Disambiguation we use the Babelfy tool \\cite{Babelfy}.\n\nThe $CCG$ is built as follows:\ngiven two categories $x$ and $y$, the weight of the edge that connects them is defined as:\n\\begin{equation}\n\\label{eq1}\nw_{xy}=\\sum_{S \\in Sessions} Freq_{S_{xy}}\n\\end{equation}\nwhere $Freq_{S_{xy}}$ represents the evidence provided by session $S$ for the co-occurrence of $x$ and $y$.\nGiven $S=\\{Q_1, \\dots, Q_n\\}$,\n$Freq_{S_{xy}}$ is computed as the maximum evidence of co-occurrence of $x$ and $y$ in $S$: \n\\begin{equation}\n\\label{eq2}\nFreq_{S_{xy}} = \\max_{k=1}^{n}(Freq_{xy_{Q_k}}, ev_{xy_{Q_{k-1}}})\n\\end{equation}\nwhere $Freq_{xy_{Q_k}}$ is the co-occurrence evidence of $x$ and $y$ provided by query $Q_k$, and $ev_{xy_{Q_{k-1}}}$ is the one provided by $Q_1, \\dots, Q_{k-1}$. 
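\n\nTo make this aggregation concrete, the following simplified Python sketch (our illustration, not the implementation of \\cite{Mauro-Ardissono:18}) computes the session-level evidence and the edge weights of Equations (\\ref{eq1}) and (\\ref{eq2}); each query is represented as a map from the recognized categories to their evidence (1 for an unambiguous term, $\\frac{1}{m}$ for an $m$-way ambiguous one, according to the contribution rules reported below), and the evidence of a category pair is approximated by the weaker of the two values, which is a simplifying assumption of ours.\n\\begin{verbatim}\nfrom collections import defaultdict\nfrom itertools import combinations\n\ndef session_evidence(session):\n    # Freq_S_xy: running maximum, over the queries of S, of the\n    # co-occurrence evidence of each category pair seen so far.\n    freq = defaultdict(float)\n    seen = {}  # best evidence per category observed so far in S\n    for query in session:\n        for c, ev in query.items():\n            seen[c] = max(seen.get(c, 0.0), ev)\n        for x, y in combinations(sorted(seen), 2):\n            # simplifying assumption: pair evidence = weaker value\n            freq[(x, y)] = max(freq[(x, y)], min(seen[x], seen[y]))\n    return freq\n\ndef ccg_weights(sessions):\n    # w_xy: sum of the session-level evidence over the whole log\n    w = defaultdict(float)\n    for s in sessions:\n        for pair, ev in session_evidence(s).items():\n            w[pair] += ev\n    return dict(w)\n\\end{verbatim}\nFor instance, a two-query session in which ``pizza'' is identified unambiguously in both queries, and ``cafes'' is identified with evidence 0.5 in the second one, contributes 0.5 to the weight of the edge connecting the two categories.\n\n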
Similar to \\cite{Mauro-Ardissono:18}, we take the maximum, and not the sum, of the evidence because co-occurrence could derive either from query reformulation \\cite{Rieh-Xie:06}, or from the repetition of queries in click-through events of the log; see Section \\ref{sec:AOLlog}, which describes the query log we used.\n\nA query $Q$ contributes to the estimation of co-occurrence as follows:\n\\begin{itemize}\n\\item \nIf $Q$ contains $k$ terms ($k \\geq 0$), each one identifying a non-ambiguous category: \n$T_1 \\Rightarrow c_1, \\quad \\dots, \\quad T_k \\Rightarrow c_k$, then, for each category $c$ of $Q$:\n\\begin{itemize}\n \\item The co-occurrence evidence between $c$ and every other category $d$ of $Q$ is $Freq_{cd_{Q}} = 1$. \n \\item The co-occurrence evidence between $c$ and every other category $e$ identified in a non-ambiguous way in the other queries of $S$ is $Freq_{ce_{Q}} = 1$. \n \\item The co-occurrence evidence between any other categories $w$ and $z$ identified in $S$ is $Freq_{wz_{Q}} = 0$.\n\\end{itemize}\n\\item \nIf $Q$ contains an ambiguous term $t$ that refers to $m$ categories, the particular category the user is focusing on cannot be identified. Therefore, the co-occurrence evidence brought by $t$ is computed as above, but the assigned evidence is $\\frac{1}{m}$, in order to consider all the possible interpretations of $Q$ and to divide the evidence among the candidate categories.\n\\end{itemize}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.9\\linewidth]{model.pdf}\n \\caption{Extension of Category-based User Profiles.}\n \\label{fig:matrici}\n\\end{figure}\n\n\n\\subsection{Extended Category-based Collaborative Filtering (\\textit{ECCF})}\n\\label{sec:ECCF}\nIn this recommendation model we employ frequently co-occurring information interests to extend category-based user profiles.\nWe reinforce the preferences for item categories learned by analyzing rating behavior (stored in the Users-Categories matrix $UC$) with interest co-occurrence associations (stored in the $CCG$ graph) in order to acquire an extended set of user preferences for neighbor identification.\n\nThe idea behind preference extension is that the more the user has appreciated the items of a category, the more sense interest co-occurrence makes. Therefore, starting from the category-based user profiles stored in the $UC$ matrix, we increment user preferences with the contribution of the strongest co-occurrence relations of the $CCG$ graph, depending on the number of positive ratings available in the users-items matrix $R$. \nThe output of this process is stored in the Extended Preferences matrix $EP$, which is used to compute $\\sigma(u,v)$ in Equation \\ref{eq:rmeancentering}.\n\nFigure \\ref{fig:matrici} provides a graphical view of the computation of $EP$: the information stored in $UC$ is combined with that stored in the $CCG$ to set the values of this matrix. In this process, the users-ratings matrix $R$ is used to limit the reinforcement of preferences to the categories of the positively rated items.\nMoreover, the $CCG$ is used to propagate preference information according to the strongest co-occurrence of interests. \nIn detail, we compute the values of $EP$ as follows:\n\\begin{itemize}\n \\item \n let $Cat_i$ be the set of categories associated to item $i$; \n \\item\n let $CatSet_i$ be the set of categories directly connected to any category $c \\in Cat_i$ in the $CCG$ through the heaviest outbound arcs. 
These are the categories which most frequently co-occur with some categories of $Cat_i$ in search sessions.\n\\end{itemize}\nThen:\n\\begin{equation}\n\\label{eq:pm}\nEP[u,c]=UC[u,c]+\\sum_{i \\in I} f(u,i,c)\n\\end{equation}\nwhere\n\\begin{equation}\n f(u,i,c) =\n \\begin{cases}\n 1 & \\quad \\text{if } R[u,i] \\in PositiveRatings ~\\wedge~ c \\in CatSet_i\\\\\n 0 & \\quad \\text{otherwise}\n \\end{cases}\n\\label{eq:f}\n\\end{equation}\nIn Equation \\ref{eq:f}, $PositiveRatings$ denotes the set of ratings that are considered as positive in the dataset; e.g., \\{5\\}, or \\{4, 5\\} in a [1, 5] Likert scale.\n\n\n\\section{Validation of ECCF}\n\\label{sec:validation}\n\n\\subsection{Dataset of Item Ratings}\n\\label{sec:YELP}\nAs a source of rating data we exploit the Yelp Dataset \\cite{Yelp-dataset}, which contains information about a set of businesses, users and reviews and is available for academic purposes. In the dataset, item ratings take values in a [1, 5] Likert scale where 1 is the worst value and 5 is the best one. \nMoreover, each item is associated with a list of categories describing the kind of service it offers.\n\nThe full list of Yelp categories is available at \\url{www.yelp.com\/developers\/documentation\/v3\/category_list} and is organized in a taxonomy to specify businesses at different levels of detail. The taxonomy includes a large set of first-level categories, representing broad types of businesses; e.g., ``Active life'', ``Arts \\& entertainment'', ``Automotive'', \\dots, ``Food'', ``Restaurants'', and many others. In turn, the first-level categories are specialized into sub-categories; e.g., ``Restaurants'' includes many types of restaurants such as ``Indian'', ``Chinese'' and the like. \nWe apply two filters to the dataset:\n\\begin{enumerate}\n \\item \n We select all the Yelp categories that are subclasses of ``Restaurants'' or ``Food'': e.g., ``Indian'', ``Chinese'', ``Cafes'', ``Kebab'', ``Pizza'', and so forth; the total number of categories is 254. \n Then, we project the Yelp dataset on the set of items associated with at least one of these categories. In the rest of this paper we refer to this set of categories as {\\em CATS}. \n \\item We further filter the dataset on the users who rated at least 20 items.\n\\end{enumerate}\n\n\n\\begin{table}[t]\n\\centering\n\\caption{Statistics about the Filtered Datasets}\n\\begin{tabular}{l|l|l}\n\\hline\nYelp & Number of users & 26,600 \\\\\n& Number of businesses & 76,317 \\\\ \n& Number of ratings & 1,326,409 \\\\ \\hline\nAOL & Number of sessions & 1,248,803 \\\\\n& Number of queries & 2,136,029 \\\\\n\\hline\n\\end{tabular}\n\\label{t:dataset}\n\\end{table}\nThe upper portion of Table \\ref{t:dataset} summarizes the number of users, businesses and ratings of the filtered Yelp dataset.\n\n\n\\subsection{Dataset of Search Sessions}\n\\label{sec:AOLlog}\nFor the generation of the Category Co-occurrence Graph we use the AOL query log.\\footnote{The log is available at \\url{http:\/\/www.cim.mcgill.ca\/~dudek\/206\/Logs\/AOL-user-ct-collection\/}.} \nEach line of the log represents either a query or a click-through event on one of the search results of a query. 
The line contains various fields, including the submitted query and its submission date and hour.\n\nIn order to build a graph that is thematically related to the items of the filtered Yelp dataset, we select from the log the search sessions that are relevant to the categories $c \\in CATS$, which we enrich with the following two types of external knowledge. The enrichment is useful to abstract from the specific category names used in Yelp and to take into account semantically related information: \n\\begin{enumerate}\n \\item \n {\\em Lemmatized knowledge:} we enrich each element $c \\in CATS$ with a set of keywords and synonyms from the WordNet \\cite{WordNet} lexical database. \n \\item \n {\\em Relevant terms from the ProBase \\cite{Wu-etal:12} taxonomy:} \n \\begin{itemize}\n \\item\n For each element $c \\in CATS$, we enrich $c$ with the $\\langle concept, instance \\rangle$ pairs of ProBase such that $concept$ has at least 85\\% WordNet similarity with any term of the lemmatized knowledge of $c$, and the WordNet similarity between the two components of the pair is at least 85\\%.\n \\item \n ProBase, recently called Microsoft Concept Graph, is a large concept network harnessed from web pages and search logs. It is organized as a list of $\\langle concept, instance \\rangle$ pairs related by a subclass relation and it contains 5,376,526 classes and 12,501,527 instances.\n \\end{itemize}\n\\end{enumerate}\nFor the selection of relevant search queries in the AOL log we match the lemmatized words occurring in the queries to the enriched categories of $CATS$. If there is at least one match between such terms and a query, we consider the query as relevant and we include its parent session in the filtered log. \n\nWe identify the search sessions by aggregating the queries performed by the same user according to their temporal proximity, following the widely applied rule that two consecutive queries belong to different sessions if the time interval between them exceeds half an hour; see \\cite{White-etal:07}. \n \nThe lower portion of Table \\ref{t:dataset} shows the number of sessions and queries of the filtered AOL dataset.\n\nIt is worth noting that the AOL log was involved in an information leak issue, but we decided to use it for two reasons. Firstly, our analysis is ethically correct because we study general search behavior to acquire aggregate data, abstracting from the search histories of individual users. Secondly, to the best of our knowledge, the AOL log is the only available large dataset that reports textual search queries and can therefore be used for linguistic interpretation. We analyzed some public datasets but they did not meet our requirements. For instance, the Excite query dataset\\footnote{\\url{https:\/\/svn.apache.org\/repos\/asf\/pig\/trunk\/tutorial\/data\/}} contains about 1M queries while the AOL log contains 20M queries. 
Moreover, in the Yahoo dataset\\footnote{\\url{https:\/\/webscope.sandbox.yahoo.com\/catalog.php?datatype=l\\&did=50}} the queries are coded; thus, it is not possible to extract any linguistic information to learn category co-occurrence.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.6\\linewidth]{graph-distribution.jpg}\n \\caption{Distribution of the Weight of Edges in the $CCG$.}\\label{fig:graph-distribution}\n\\end{figure}\n\n\n\\subsection{Category Co-occurrence Graph}\nWe instantiate the {\\em CCG} with the interests that co-occur in the sessions of the filtered AOL dataset by applying the procedure described in Section \\ref{graph}.\nThe resulting graph is densely connected: almost every pair of categories is linked by an edge having weight $>0$.\nHowever, the distribution of weights in the graph shows that there is a large number of weakly connected categories and a very small number of strongly associated ones. The ``heavy'' edges identify the interests that co-occur very frequently in search sessions and suggest selecting the arcs having maximum weight in the {\\em CCG} for the extension of the user profiles, as done in Section \\ref{sec:ECCF}.\nFigure \\ref{fig:graph-distribution} shows this distribution; the x-axis represents the edges of the graph, and the y-axis represents their weights, which take values in [1, 272,224].\n\n\\begin{table}[b]\n\\centering\n\\caption{Performance Evaluation @10; the Best Values Are in Boldface, the Worst Ones Are Strikethrough}\n\\begin{tabular}{l|l|l|l|l|l}\n\\hline \n\\textbf{Metrics} \n & \\multicolumn{1}{c|}{\\textbf{\\begin{tabular}[c]{@{}c@{}}U2UCF\\end{tabular}}} & \\multicolumn{1}{c|}{\\textbf{SCCF}} & \\multicolumn{1}{c|}{\\textbf{\\begin{tabular}[c]{@{}c@{}}ECCF\\\\ \\{3,4,5\\}\\end{tabular}}} & \\multicolumn{1}{c|}{\\textbf{\\begin{tabular}[c]{@{}c@{}}ECCF\\\\ \\{4,5\\}\\end{tabular}}} & \\multicolumn{1}{c}{\\textbf{\\begin{tabular}[c]{@{}c@{}}ECCF\\\\ \\{5\\}\\end{tabular}}} \\\\ \\hline\n\\textbf{Precision} & \\st{0.7823} & \\textbf{0.786} & 0.7857 & 0.7855 & 0.7859 \\\\\n\\textbf{Recall} & \\st{0.7473} & 0.7526 & 0.7536 & \\textbf{0.755} & 0.7529 \\\\\n\\textbf{F1} & \\st{0.7644} & 0.7689 & 0.7693 & \\textbf{0.7699} & 0.769 \\\\\n\\textbf{RMSE} & \\st{1.0001} & 0.9899 & 0.9897 & 0.9893 & \\textbf{0.9892} \\\\\n\\textbf{MRR} & \\st{0.733} & 0.7367 & 0.737 & \\textbf{0.7391} & 0.7384 \\\\ \n\\textbf{Diversity} & \\st{0.3042} & 0.3053 & \\textbf{0.3056} & 0.3053 & 0.3049 \\\\\n\\textbf{User cov.} & \\st{0.8497} & 0.8521 & 0.8526 & \\textbf{0.8542} & 0.8534 \\\\\n\\hline\n\\end{tabular}%\n\\label{t:results@10}\n\\end{table}\n\n\\subsection{Test Methodology}\n\\label{experiments}\nWe evaluate the recommendation performance of ECCF by comparing it to U2UCF and SCCF, which we consider as baselines. Moreover, we compare these algorithms with SVD++ in order to assess the improvement in the suggestion of relevant items given by frequently co-occurring interests.\n\nThe SCCF and ECCF recommendation algorithms are developed by extending the Surprise library \\cite{Surprise}, while we use the default Surprise implementations of U2UCF and SVD++.\n\nWe test the algorithms by applying a 10-fold cross-validation on the filtered Yelp dataset, after randomly distributing the ratings across folds: we use 90\\% of the ratings as training set and 10\\% as test set. 
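\n\nAs a reference for this setup, a minimal sketch based on the Surprise library could look as follows; the file and column names are illustrative assumptions, and KNNWithMeans is Surprise's mean-centered KNN, which corresponds to Equation (\\ref{eq:rmeancentering}).\n\\begin{verbatim}\nimport pandas as pd\nfrom surprise import Dataset, Reader, KNNWithMeans\nfrom surprise.model_selection import cross_validate\n\n# Illustrative dump of the filtered Yelp ratings\ndf = pd.read_csv('yelp_filtered_ratings.csv')\ndata = Dataset.load_from_df(\n    df[['user_id', 'business_id', 'stars']],\n    Reader(rating_scale=(1, 5)))\n\n# Mean-centered user-based KNN with 50 neighbors;\n# SCCF and ECCF replace the similarity with Cosine over UC or EP.\nalgo = KNNWithMeans(\n    k=50, sim_options={'name': 'cosine', 'user_based': True})\ncross_validate(algo, data, measures=['RMSE'], cv=10, verbose=True)\n\\end{verbatim}\n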
In all the tests, we configure the KNN algorithms to work with 50 neighbors.\n\nIn order to analyze the impact on recommendation performance of a looser or stricter extension of user preferences with category co-occurrence, we validate ECCF on different settings of $PositiveRatings$ in Equation \\ref{eq:f}, i.e., on different interpretations of what counts as a good rating. For each fold\nwe generate three versions of the Extended Preferences matrix $EP$, having set $PositiveRatings$ to $\\{3,4,5\\}$, $\\{4,5\\}$, and $\\{5\\}$, respectively. \n\nWe evaluate Top-k recommendation performance with $k=10$ and $k=20$ by taking the ratings observed in the Yelp dataset as ground truth. For the evaluation we consider the following metrics: Precision, Recall, F1, RMSE, MRR, Diversity and User Coverage.\n\nDiversity describes the mean intra-list diversity of items in the suggestion lists @k; see \\cite{Bradley-Smyth:01}. In this work, we interpret diversity from the viewpoint of item classification. Therefore, we measure the diversity of a recommendation list as follows:\n\\begin{equation}\n\\text{intra-list diversity@}k=\\frac{\\sum_{i=1}^{k}\\sum_{j=i}^{k} (1 - sim(i, j))}{\\frac{k(k+1)}{2}}\n\\end{equation}\nwhere $sim(i, j)$ is the cosine similarity between the lists of categories associated to items $i$ and $j$ in the ratings dataset. \n\n\\subsection{Results}\n\\label{results}\nTable \\ref{t:results@10} shows the performance results of the KNN recommenders we compared, by taking into account a maximum of 10 suggested items (performance@10). \n\\begin{itemize}\n \\item \\textbf{Precision:} similarly to previous results described in \\cite{Sieg-etal:07b}, all of the category-based recommenders outperform U2UCF. This can be explained by the fact that the matrices describing preferences for item categories are denser than the rating matrix. Thus, they improve recommendation by supporting a better identification of neighbors for Equation \\ref{eq:rmeancentering}. \n However, SCCF outperforms all of the ECCF variants. The second best recommender is ECCF\\{5\\}, which extends user profiles in the strictest way: it only considers as pivots for extension the categories associated to the items that the user has rated 5 stars. Notice also that the precision of ECCF decreases when $PositiveRatings$ is lax. The reason is that the extension of user profiles with frequently co-occurring interests can increase the estimated interest in some noisy categories with respect to the pure observation of the distribution of ratings on categories. In particular, noise grows when the policy applied to extend preferences is less restrictive. \n \\item \\textbf{Recall:} ECCF outperforms the baselines in all the settings of $PositiveRatings$. Specifically, ECCF\\{4,5\\} achieves the best result, while recall is lower in ECCF\\{3,4,5\\} and further decreases in ECCF\\{5\\}.\n We explain this finding as follows: an extension of user profiles based on the categories of highly rated items supports the identification of a richer set of user preferences, and a more effective identification of neighbors, than only considering the distribution of ratings on categories. However, if we restrict $PositiveRatings$ too much, the user profiles are not extended enough to appreciably improve Recall. 
Moreover, as noticed for Precision, if $PositiveRatings$ is lax, noise in the estimation of user preferences challenges neighbor selection.\n \n\\begin{table}[t]\n\\centering\n\\caption{Performance Evaluation @20}\n\\begin{tabular}{l|l|l|l|l|l}\n\\hline \n\\textbf{Metrics} \n & \\multicolumn{1}{c|}{\\textbf{\\begin{tabular}[c]{@{}c@{}}U2UCF\\end{tabular}}} & \\multicolumn{1}{c|}{\\textbf{SCCF}} & \\multicolumn{1}{c|}{\\textbf{\\begin{tabular}[c]{@{}c@{}}ECCF\\\\ \\{3,4,5\\}\\end{tabular}}} & \\multicolumn{1}{c|}{\\textbf{\\begin{tabular}[c]{@{}c@{}}ECCF\\\\ \\{4,5\\}\\end{tabular}}} & \\multicolumn{1}{c}{\\textbf{\\begin{tabular}[c]{@{}c@{}}ECCF\\\\ \\{5\\}\\end{tabular}}} \\\\ \\hline\n\\textbf{Precision} & \\st{0.7806} & \\textbf{0.7842} & 0.7839 & 0.7838 & \\textbf{0.7842} \\\\ \n\\textbf{Recall} & \\st{0.757} & 0.7624 & 0.7634 & \\textbf{0.7649} & 0.7626 \\\\ \n\\textbf{F1} & \\st{0.7686} & 0.7731 & 0.7735 & \\textbf{0.7742} & 0.7732 \\\\\n\\textbf{RMSE} & \\st{0.9935} & 0.9838 & 0.9835 & \\textbf{0.9832} & \\textbf{0.9832} \\\\ \n\\textbf{MRR} & \\st{0.733} & 0.7369 & 0.7372 & \\textbf{0.7391} & 0.7384 \\\\ \n\\textbf{Diversity} & \\st{0.3059} & 0.307 & \\textbf{0.3073} & 0.307 & 0.3067 \\\\ \n\\textbf{User cov.} & \\st{0.8497} & 0.8521 & 0.8526 & \\textbf{0.8542} & 0.8534 \\\\\n\\hline\n\\end{tabular}%\n\\label{t:results@20}\n\\end{table}\n\n\n\\begin{figure}[b]\n \\centering\n \\includegraphics[width=0.9\\linewidth]{accuracy_10.jpg}\n \\caption{Graphical Representation of Accuracy@10.}\n \\label{fig:accuracy@10}\n\\end{figure}\n\n\n \\item \\textbf{F1:} ECCF outperforms the baselines. In detail, ECCF\\{4,5\\} achieves the best F1 = 0.7699; moreover, F1 varies consistently with Recall, depending on $PositiveRatings$. \n \\item \\textbf{RMSE:} SCCF reduces the mean error between estimated and observed ratings with respect to the baseline, showing the benefits of category-based user profiles. Moreover, consistently with the variation of Precision, the best results are obtained by ECCF\\{5\\}, i.e., with a strict extension of user profiles. RMSE progressively increases (i.e., gets worse) for $PositiveRatings=\\{4, 5\\}$ and \\{3, 4, 5\\}.\n \\item \\textbf{MRR:} ECCF outperforms the baselines. Specifically, ECCF\\{4,5\\} obtains the best MRR = 0.7391. The second best value corresponds to a more selective extension of user profiles in ECCF\\{5\\}; moreover, if $PositiveRatings=\\{3, 4, 5\\}$, results get worse.\n \\item \\textbf{Diversity:} both SCCF and ECCF outperform U2UCF. In this case, the best results are obtained with a lax extension of user preferences (ECCF\\{3,4,5\\}) and Diversity decreases as the preference extension policy becomes stricter. We explain these findings with the fact that category-based user profiles improve the estimation of user preferences concerning a varied set of item categories, with respect to a flat recommendation based on ratings. However, the stricter the extension of user preferences, the fewer item categories are used in neighbor identification.\n \\item \\textbf{User coverage:} ECCF outperforms the baselines, confirming the usefulness of preference extension. However, the selection of the ratings for the extension influences coverage: ECCF\\{4,5\\} achieves the best results by suggesting at least one relevant item to 85.42\\% of the users, against 84.97\\% of U2UCF. 
The second best is ECCF\\{5\\}, while ECCF\\{3,4,5\\} has the worst results.\n \\end{itemize}\nIn the described experiments, the $EP$ matrix is defined by only taking into account positive ratings. In order to get a broader view on the performance of ECCF, we also consider its application to all the user ratings; i.e., we set $PositiveRatings$ to $\\{1, \\dots, 5\\}$. With respect to the previous results, in this case the algorithm achieves similar Precision but lower Recall (0.7524), MRR (0.7369) and User coverage (0.8155). \n\nTable \\ref{t:results@20} shows the results obtained by comparing performance@20. \nThese results confirm the usefulness of category-based user profiles and of their extension with frequently co-occurring information interests: \n\\begin{itemize}\n \\item Also in this case, ECCF\\{4,5\\} is the best recommendation algorithm. It outperforms the others in Recall, F1, MRR and User coverage. Moreover, both ECCF\\{5\\} and ECCF\\{4,5\\} achieve the best RMSE in comparison with the other recommenders. \n \\item However, while SCCF has the best Precision@10, both SCCF and ECCF\\{5\\} achieve the best Precision@20.\n\\end{itemize}\nWith respect to $k=10$, Precision@20 is lower while Recall@20 and F1@20 take higher values; this makes sense because we are considering longer suggestion lists. Moreover, RMSE@20 is lower, which tells us that the longer lists contain proportionally fewer errors in the estimation of ratings. In contrast, most algorithms obtain the same MRR for $k=10$ and $k=20$ (except for SCCF and ECCF\\{3,4,5\\}): this shows that the first relevant item is almost always placed in the first 10 positions of the suggestion lists.\nFurthermore, Diversity@20 takes the highest values for all the recommenders: this might be due to the fact that the longer suggestion lists have more chances to include items belonging to different categories. Finally, User coverage@10 = User coverage@20 because we interpret coverage as the percentage of users who receive at least one suggestion.\n\n\\begin{figure}[b]\n \\centering\n \\includegraphics[width=0.9\\linewidth]{accuracy_20.jpg}\n \\caption{Graphical Representation of Accuracy@20.}\n \\label{fig:accuracy@20}\n\\end{figure}\n\nFigures \\ref{fig:accuracy@10} and \\ref{fig:accuracy@20} depict the accuracy @10 and @20:\n\\begin{itemize}\n \\item \n All of the category-based recommenders outperform U2UCF, confirming the benefits of the introduction of category-based preferences in KNN Collaborative Filtering. The conceptual representation of user preferences generally improves performance because the matrices describing user preferences ({\\em UC} and {\\em EP}) are denser than the users-items matrix storing ratings ({\\em R}). Therefore, better neighbors can be identified for the computation of Equation \\ref{eq:rmeancentering}.\n \\item\n A comparison between category-based algorithms shows that the best performance results are obtained by extending user profiles on the basis of the items that users have rated very well, i.e., with 4 or 5 stars in a [1, 5] Likert scale. If the items that received middle ratings are considered as well, accuracy decreases. \n \\item The category-based representation of user profiles has a positive impact on the Diversity of recommendation lists. Conversely, the extension of user profiles does not further help this aspect, unless user profiles are extended in a lax way. 
However, a lax extension is not convenient because it degrades the other metrics.\n\\end{itemize}\n\n\nIn order to assess the usefulness of preference extension in Top-k recommendation, we also compare the previously described algorithms with SVD++ \\cite{Koren:08}, which adopts Matrix Factorization to learn latent user and item factors, basing rating prediction on the sole analysis of user ratings. The comparison results show that:\n\\begin{itemize}\n \\item \n SVD++ is more accurate than U2UCF and SCCF. On the filtered Yelp dataset, SVD++ obtains F1@10 = 0.7696. This finding shows that the management of category-based user profiles helps recommendation but it can be outperformed by a deeper understanding of the features of items and users. \n \\item\n SVD++ achieves similar accuracy results with respect to ECCF but it is outperformed by ECCF\\{4, 5\\}. Therefore, the extension of user profiles with frequently co-occurring information interests, integrated into a KNN recommender, improves accuracy and makes it comparable to, or higher than, that of Matrix Factorization algorithms. \n \\item\n ECCF outperforms SVD++ as far as the diversity of the recommendation lists is concerned: SVD++ has Diversity@10 = 0.3041; this is comparable to the diversity achieved by U2UCF and lower than that of all the category-based recommenders we presented.\n \\item\n In contrast, SVD++ has the highest User coverage of all the algorithms (0.8709), showing its superior capability to counter data sparsity. \n \\end{itemize}\n \n\\subsection{Discussion}\n\\label{discussion}\nIn summary, the evaluation results show that ECCF outperforms U2UCF, SCCF and SVD++ in accuracy and intra-list diversity. Moreover, it outperforms U2UCF and SCCF in MRR and user coverage, while SVD++ excels in the latter metric. The results also show that ECCF achieves the best results when applied to positive ratings, while its performance slightly decreases when the user profiles are extended by taking both positive and negative ratings into account.\n\nThese results support the hypothesis that preference extension, based on frequently co-occurring information interests, improves the accuracy of the suggestions generated by a KNN recommender system. However, further research has to be carried out to improve other performance metrics, possibly also investigating the integration of preference co-occurrence in Matrix Factorization algorithms.\n\nIt might be questioned whether extending user profiles with general interest co-occurrence data might provide less personalized recommendations than, e.g., focusing the extensions on the user's neighborhood. In this respect, we point out that we aim at developing a model that does not depend on cross-domain user identification. However, an investigation of this issue can be interesting to deal with the cases in which user information can be shared among applications, or public information about the users can be connected to the local profiles; e.g., public data on social networks.\n\nBefore closing this discussion, it is worth noting that, even though the AOL query log dates back to 2006, it can be considered as a good information source as long as it is analyzed from the viewpoint of the concepts expressed by the users. In other words, while the specific information items mentioned in the log might not exist any more, the topics referred to in the queries are general and long-lasting. Of course, some new topics (e.g., new types of restaurants) might have emerged since 2006, and maybe new concept associations could exist now. 
However, the described performance results show that the co-occurring interests we identified are useful to improve recommendation performance; moreover, the methodology described in this paper can be applied to other, more recent datasets, if available.\n\n\\section{Conclusions}\n\\label{sec:conclusions}\nWe investigated whether the identification of frequently co-occurring interests in information search can be used to improve the performance of KNN collaborative recommender systems. For this purpose, we defined a preference extension model that, applied to a category-based representation of user profiles, infers user preferences by exploiting frequently co-occurring information interests. Then, we implemented the model in the Extended Category-based Collaborative Filtering algorithm (ECCF). This is a variant of User-to-User Collaborative Filtering that works on category-based user profiles, enriched with preferences inferred from general search behavior.\nFor the analysis of user interests, we analyzed the query log of a widely used search engine. \n\nWe evaluated ECCF on a large dataset of item ratings, by applying different levels of strictness in the extension of user profiles. The evaluation showed that ECCF outperforms User-to-User Collaborative Filtering in accuracy, MRR, intra-list diversity and user coverage. Interestingly, ECCF also obtains higher accuracy and diversity than the SVD++ recommender system, based on Matrix Factorization; however, ECCF has lower user coverage than SVD++. \n\nIn our future work we will focus on the coverage aspect in order to improve the performance of KNN Collaborative Filtering.\nMoreover, we will carry out further experiments, considering (i) a broader domain than Restaurants and Food, on which we have focused our current work, and (ii) users who have provided few or zero ratings. \nWe will also analyze other datasets to check whether the performance results described in this article can be generalized. Finally, we will compare the performance of ECCF with a larger set of recommendation approaches based on preference extension.\n\n\\begin{acks}\nThis work was supported by the University of Torino through projects ``Ricerca Locale'', MIMOSA\n(MultIModal Ontology-driven query system for the heterogeneous data of a SmArtcity, ``Progetto di Ateneo Torino\\_call2014\\_L2\\_157'', 2015-17)\nand the Computer Science PhD program.\nWe are grateful to Zhongli Filippo Hu, who helped us filter the Yelp dataset.\n\\end{acks}\n\n\n \\bibliographystyle{ACM-Reference-Format} \n \n\\balance\n
While these algorithms achieve good accuracy and coverage, they challenge the explanation of recommendation results because the policies applied to rank items can hardly be described in an intuitive way.\n\nIn the present work, we are interested in assessing whether U2UCF, which has nice explanation properties, can be improved by using other types of information that are complementary to rating data.\nSpecifically, we investigate whether the identification of frequently co-occurring interests in information search can be used to improve recommendation performance. \nWe start from the observation that, if the people who search for items tagged with a certain information category typically also search for items tagged with another category, the two categories might represent related interests.\nTherefore, even though we ignore the reasons behind this relatedness, we might leverage the strength of the association in preference estimation.\nIn this perspective, we propose to to build rich user profiles by extending the preferences for categories of items identified from rating behavior with frequently co-occurring interests for item categories, extracted from the logs of search engines. It can be noticed that interest co-occurrence can be learned by analyzing anonymous interaction sessions because it is aimed at describing general user behavior. Therefore, it can be applied to anonymized search logs, as long as search sessions can be identified.\n\nStarting from a category-based representation of user preferences, based on the analysis of ratings and on items categorization, we propose the following research question: \n\n{\\em RQ: How does the integration of data about interest co-occurrence in information search influence the performance of a collaborative recommender system that manages category-based user profiles? }\n\nIn order to answer this question we start from a {\\em Simple Category-based Collaborative Filtering (SCCF)} algorithm which infers a user's preferences on the basis of the distribution of her\/his ratings on item categories: a category-based user profile provides a conceptual view on preferences, so that user similarity can be computed by abstracting from item ratings, thus contrasting data sparsity; see \\cite{Sieg-etal:07b,Sieg-etal:10b}.\nThen, we propose the {\\em Extended Category-based Collaborative Filtering (ECCF)} algorithm that enriches category-based user profiles with evidence about interests that frequently co-occur in information search. ECCF employs the extended user profiles for rating estimation.\n\nIn order to evaluate the recommendation performance of ECCF, we extract information about co-occurring interests by analyzing the query log of a largely used search engine. Then, we test our algorithm by applying it to the Yelp Dataset \\cite{Yelp-dataset}, which stores user ratings of various types of businesses.\n\nWe analyze a few settings of ECCF in order to integrate different amounts of information about co-occurring preferences with rating data. In our experiments, we evaluate performance by taking U2UCF and SCCF as baselines: these algorithms differ in neighbor identification but are based on the same rating estimation approach. Therefore, they are a good basis to assess the impact of extended category-based user profiles on preference prediction. 
We also compare these algorithms with SVD++ to evaluate whether preference extension challenges the capability of recommending relevant items.\nThe results of our experiments show that ECCS outperforms U2UCF and SCCF in accuracy, MRR, diversity of recommendations and user coverage; moreover it outperforms SVD++ in accuracy and diversity of the generated suggestion lists. We thus conclude that preference co-occurrence information can positively contribute to the identification of good neighbors for rating estimation.\n\nIn summary, the main contributions of this work are:\n\\begin{itemize}\n \\item\n The integration of data about frequently co-occurring information interests (inferred by observing general search behavior) with category-based user preferences, in order to acquire rich individual user profiles.\n \\item The ECCF category-based recommendation algorithm, which extends User-to-User Collaborative Filtering to take both frequently co-occurring information interests and preference similarity into account in neighbor identification.\n \\item Evaluation results aimed at proving the benefits of frequently co-occurring interests to Collaborative Filtering.\n\\end{itemize}\n\nIn the following,\nSection \\ref{sec:related} positions our work in the related one. Section \\ref{model} presents ECCF. Section \\ref{sec:validation} describes the experiments we carried out to validate ECCF and discusses the evaluation results. Section \\ref{sec:conclusions} concludes the paper and outlines our future work. \n\n\n\\section{Related Work}\n\\label{sec:related}\n\n\\subsection{Recommender Systems}\n\nCross-domain recommendation has received the researchers' attention as a way to employ multiple information sources to contrast data sparsity; e.g., \\cite{Fernandez-Tobias-etal:16}. Moreover, holistic user models have been developed that jointly analyze different types of user behavior to enhance the recognition of the user's needs; e.g., \\cite{Teevan-etal:05, Musto-etal:2018b}.\nHowever, the fusion of personal information from different applications is problematic, unless it is done within a tightly integrated software environment. For instance, most people operate anonymously \\cite{Greenstein-etal:17} or have multiple identities \\cite{Doychev-etal:14}; moreover, most user activity logs are anonymized for privacy preservation purposes. It is thus interesting to consider other types of knowledge integration that do not require user identification across applications. Our work investigates this path of research.\n\n\nCollaborative Filtering generates suggestions by analyzing item ratings to identify similar users or similar items.\nSeveral algorithms have been developed, from K-Nearest Neighbors (KNN) to more recent ones such as Matrix Factorization \\cite{Desrosiers-Karypis:11,Koren-Bell:11}. In our work we adopt KNN because it has nice explanation capabilities and has proved to achieve good performance in a comparison with other approaches \\cite{Jannach-Ludewig:17,Ludewig-Jannach:18}. \n\nOntological user profiles model preferences at the semantic level. In \\cite{Sieg-etal:07b,Sieg-etal:10b}, Sieg et al. propose to exploit a taxonomy whose concepts represent item types, and to infer user interests on the basis of the observed ratings to the instances of such concepts. The neighborhood for rating estimation is then identified by measuring the semantic similarity between ontological user profiles. The category-based user similarity we propose is close to this approach. 
However, we go one step forward in the identification of preferences by extending the user profiles with frequently co-occurring information interests.\nThis type of extension also differentiates our work from that of Ronen et al., who propose to extend the preferences of the individual user by analyzing her\/his behavior in search logs \\cite{Ronen-etal:16}: that work assumes that the user's activities can be tracked across applications and extends the user profile by analyzing her\/his overall behavior. In contrast, we extend user preferences by analyzing anonymous data about general search behavior. \n\nSen et al. define tag-aware recommender systems as\n``recommender algorithms that predict user's preferences for tags''.\nIn \\cite{Sen-etal:09} they describe different signs of interest; e.g., searching or applying a tag, and so forth.\nOur work relates to tag-aware recommender systems because we analyze rating behavior on items associated to categories expressed as tags. However, we do not consider any other types of interaction with tags for estimating user preferences. \n\nIn \\cite{Gemmel-etal:12}, Gemmel et al. present a linear-weighted hybrid framework for resource recommendation that models different scenarios, among which tag-specific item recommendation. They propose to match users and items on the basis of their tag profiles. Differently, we match users on the basis of category-based profiles learned from rating behavior.\nThe same kind of difference holds between our work and the one of Nakamoto \\cite{Nakamoto:2007}.\n\nWhile TagiCoFi \\cite{Zhen:2009} employs user similarities defined from tagging information to regularize Matrix Factorization, we use tags in a KNN algorithm. \nIn \\cite{Tso-Sutter:2008} Tso and Sutter extend the ratings matrix using tagging information.\nThey reduce the three-dimensional correlations $$ to two-dimensional correlations $$, $$ and $$. Then, they apply a fusion method to combine the correlations for rating prediction. Differently, we extend the rating matrix with the categories (tags) associated to the items rated by users and with further categories identified from general search behavior. \n\nRecently, rating information has been combined with other types of data to improve recommendation. For instance, item reviews are used, possibly in combination with ratings, in \\cite{Chen-etal:15,Musat-Faltings:15,Muhammad-etal:15,Lu-etal:18}.\nMoreover, trust relations and reputation are used to steer recommendation on the basis of the feedback on items provided by trusted parties; e.g., \\cite{Kuter-etal:07,Liu-Lee:10,Tang-etal:13,Alotaibi-Vassileva:16,Mcnally-etal:14,Du-etal:17,Yang-etal:17}.\nIn \\cite{Mauro-etal:19}, we investigate multi-faceted trust for personalized recommendation. \nHowever, in the present work we focus on rating information to assess the potential improvement of Collaborative Filtering, when combined with general preference co-occurrence. \n\n\n\\subsection{Analysis of Interaction Sessions}\nThe identification of interest co-occurrence we propose is related to a few works supporting query expansion, query reformulation and term suggestion in Information Retrieval. Some researchers propose to analyze session-based user behavior in order to detect co-occurrence relations useful to improve search queries, taking the search context into account. For instance, in \\cite{Cao-etal:08} Cao et al. 
suggest queries on the basis of the context provided by the user's recent search history, by clustering queries on the basis of the search results visited by users. Moreover, Huang et al. \\cite{Huang-etal:03} and Chen et al. \\cite{Chen-etal:08} detect term co-occurrence in search sessions to group sets of relevant words that can be mutually suggested. \nOur work is different because we adopt a linguistic interpretation approach (based on lemmatization and Word Sense Disambiguation) to find the concepts referenced in the queries; see \\cite{Mauro-Ardissono:17b}. \nTherefore, we extract information about {\\em concept co-occurrence}, which is more general than {\\em term co-occurrence}. \n\nIt is worth mentioning that our analysis of interaction sessions differs from session-based recommendation, which analyzes the user's behavior during an interaction session to identify relevant item(s) to suggest; e.g., see \\cite{Garcin-etal:13,Jannach-Ludewig:17,Greenstein-etal:17,Jannach-etal:17}. In fact, we mine interest co-occurrence by abstracting from the particular sequence of queries performed by the users. Moreover, as previously discussed, we mine concept associations. \n\n\n\n\\subsection{Graph-based Information Filtering}\nKnowledge graphs describe item features and relations among entities, supporting the analysis of item relatedness, as well as similarity for information filtering and top-N recommendation. \nIn several works these graphs are extracted from document pools and\/or from the Linked Data Cloud. For instance, CoSeNa \\cite{Candan-etal:09} employs keyword co-occurrence in the corpus of documents to be retrieved, and ontological knowledge about the domain concepts, to support the exploration of text collections using a keywords-by-concepts graph. Moreover, in \\cite{DiNoia-etal:16}, Di Noia et al. create a relatedness graph by analyzing external data sources such as DBpedia in order to support the evaluation of semantic similarity between items. Analogously, item features have been extracted from the Linked Data Cloud to improve recommendation performance in \\cite{Musto-etal:16,Ragone-etal:17,Musto-etal:17,Musto-etal:18}. \n\nSome works attempt to extend the relations among information items by integrating data derived from the observation of different types of user behavior. E.g., Google search engine manages the Knowledge Graph \\cite{GoogleKnowledgeGraph} to relate facts, concepts and entities depending on their co-occurrence in queries. Moreover, entity2rec learns user-item relatedness from knowledge graphs by analyzing data about users' feedback and item information from Linked Open Data \\cite{Palumbo-etal:17}. Furthermore, in \\cite{Oramas-etal:15} Oramas et al. propose a hybrid recommender that integrates users implicit feedback into a knowledge graph describing item information, enriched with semantic data extracted from external sources. Finally, in \\cite{Vahedian-etal:17}, Vahedian et al. generalize graph-based approaches by simultaneously taking into account multiple types of relations among entities: they introduce meta-paths to represent patterns of relations and apply random-walk along such paths to identify relevant entities to suggest.\n\nOur work has analogies to the above listed ones because we employ a graph-based type of knowledge representation. However, we work at the conceptual level: our knowledge graph relates item categories instead of individual users and\/or items. 
Moreover, we do not compute similarity or relatedness by means of the knowledge graph: we use the graph to extend category-based user profiles. In turn, those profiles are employed in neighborhood identification. The separation between how preferences are inferred and how they are used for recommendation makes it possible to extend both types of activities in a modular way.\n\n\n\n\\section{Extended Category-based Collaborative Filtering}\n\\label{model}\nWe describe ECCF incrementally, starting from U2UCF that provides the basic match-making approach for rating estimation. \n\n\\subsection{User-to-User Collaborative Filtering}\nIn \\cite{Ricci-etal:11}, Ricci et al. define U2UCF as follows: ``the simplest and original implementation of this approach recommends to the active user the items that other users with similar tastes liked in the past. The similarity in taste of two users is calculated based on the similarity in the rating history of the users\".\nGiven: \n\\begin{itemize}\n \\item $U$ as the set of users and $I$ as the set of items;\n \\item $r: U X I \\Rightarrow {\\rm I\\!R}$ as a map of ratings;\n \\item $R \\in {\\rm I\\!R}^{U X I}$ as the users-items rating matrix, where each value is a rating $r_{ui}=R[u,i]$ given by a user $u \\in U$ to an item $i \\in I$. \n\\end{itemize}\nThe recommender system estimates $u$'s rating of $i$ ($\\hat{r}_{ui}$) as follows:\n \\begin{equation}\n \\label{eq:rmeancentering}\n \\hat{r}_{ui} = \\bar{r}_u + \\frac{ \n \t\\sum\\limits_{v\\in N_i(u)}\\sigma(u,v) (r_{vi} - \\bar{r}_v)\n }{\n \t\\sum\\limits_{v\\in N_i(u)}|\\sigma(u,v)|}\n \\end{equation}\nwhere $N_i(u)$ is the set of neighbors of $u$ that rated item $i$ and $\\sigma(u,v)$ is the similarity between user $u$ and user $v$ ($v \\in N_i(u)$). The similarity among users is computed by applying a distance metric, e.g., Cosine or Pearson similarity, to their rating vectors. \n\n\n\\subsection{Simple Category-based Collaborative Filtering (SCCF)}\n\\label{category-based-CF}\nSCCF manages user profiles in which the user's interest in each item category is represented as a positive number; the higher is the value, the stronger is the interest. \nWe define:\n\\begin{itemize}\n \\item $U$, $I$, $r$ and $R$ as above; \n \\item $C$ as the set of item categories;\n \\item $f: U X C \\Rightarrow {\\rm I\\!N} $ as a map between users and categories;\n \\item $UC\\in {\\rm I\\!N}^{U X C}$ as the Users-Categories matrix. For each $u \\in U$ and $c \\in C$, $UC[u,c]$ represents the interest of $u$ in $c$. We take as evidence of interest the {\\em frequency of exploration} of a category, i.e., the frequency of interaction of the user with items associated with the category. \n\\end{itemize}\nCategory exploration can be mapped to different types of user behavior; e.g., tagging items and searching for items by tag. We map exploration to rating behavior and we define $UC[u, c]$ as the number of ratings that $u$ has given to the items associated with $c$. \n\nSCCF computes user similarity on the basis of the estimated user preferences for item categories. Specifically, $\\sigma(u, v)$ is defined as the Cosine similarity of the users vectors in the $UC$ matrix and it is used in Equation (\\ref{eq:rmeancentering}) to estimate ratings. Thus, $\\hat{r}_{ui}$ is computed on the basis of the ratings $r_{vi}$ provided by the users $v \\in U$ whose preferences for categories are similar to those of $u$. 
\n\n\n\\subsection{Acquisition of Preferences Co-occurrence}\n\\label{graph}\nIn order to learn the strength of the associations between item categories in search behavior, we analyze their co-occurrence in the search sessions of a query log. By co-occurrence we mean the fact that two or more categories are referred by the queries belonging to the same session. \nIn the following we summarize the analysis of category co-occurrence; see \\cite{Mauro-Ardissono:18} for details.\n\nThe Category Co-occurrence Graph ($CCG$) represents category co-occurrence:\nin the $CCG$, nodes represent the data categories referenced in the analyzed queries and the weight of edges represents the co-occurrence frequency of the connected categories; i.e., how many times the categories have been identified within the same search sessions.\n\nWe retrieve the categories occurring in the queries by applying a Natural Language approach that identifies the referred concepts in a flexible way, by considering synonyms and by applying Word Sense Disambiguation to resolve the meaning of words; see \\cite{Ardissono-etal:16,Mauro-Ardissono:17b}. For Word Sense Disambiguation we use the Babelfy tool \\cite{Babelfy}.\n\nThe $CCG$ is built as follows:\ngiven two categories $x$ and $y$, the weight of the edge that connects them is defined as:\n\\begin{equation}\n\\label{eq1}\nw_{xy}=\\sum_{S\\in|Sessions|} Freq_{S_{xy}}\n\\end{equation}\nwhere $Freq_{S_{xy}}$ represents the evidence provided by session $S$ to the co-occurrence frequency of $x$ and $y$.\nGiven $S=\\{Q_1, \\dots, Q_n\\}$,\n$Freq_{S_{xy}}$ is computed as the maximum evidence of co-occurrence of $x$ and $y$ in $S$: \n\\begin{equation}\n\\label{eq2}\nFreq_{S_{xy}} = Max_{k=1}^{n}(Freq_{xy_{Q_k}}, ev_{xy_{Q_{k-1}}})\n\\end{equation}\nwhere $Freq_{xy_{Q_k}}$ is the co-occurrence evidence of $x$ and $y$ provided by query $Q_k$, and $ev_{xy_{Q_{k-1}}}$ is the one provided by $Q_1, \\dots, Q_{k-1}$. Similar to \\cite{Mauro-Ardissono:18}, we take the maximum, and not the sum of evidence because co-occurrence could derive either from query reformulation \\cite{Rieh-Xie:06}, or from the repetition of queries in click-through events of the log; see Section \\ref{sec:AOLlog} that describes the query log we used.\n\nA query $Q$ contributes to the estimation of co-occurrence as follows:\n\\begin{itemize}\n\\item \nIf $Q$ contains $k$ terms ($k>=0$), each one identifying a non-ambiguous category: \n$T_1 \\Rightarrow c_1, \\quad \\dots, \\quad T_k \\Rightarrow c_k$, then, for each category $c$ of $Q$:\n\\begin{itemize}\n \\item The co-occurrence evidence between $c$ and every other category $d$ of $Q$ is $Freq_{cd_{Q}} = 1$. \n \\item The co-occurrence evidence between $c$ and every other category $e$ identified in a non-ambiguous way in the other queries of $S$ is $Freq_{ce_{Q}} = 1$. \n \\item The co-occurrence evidence between any other categories $w$ and $z$ identified in $S$ is $Freq_{wz_{Q}} = 0$.\n\\end{itemize}\n\\item \nIf $Q$ contains an ambiguous term $t$ that refers to $m$ categories, the particular category the user is focusing on cannot be identified. Therefore, the co-occurrence evidence brought by $t$ is computed as above, but the assigned evidence is $\\frac{1}{m}$ in order to consider the possible interpretations of $Q$, and divide evidence among ambiguous categories.\n\\end{itemize}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.9\\linewidth]{model.pdf}\n \\caption{Extension of Category-based User Profiles. 
}\n \\label{fig:matrici}\n\\end{figure}\n\n\n\\subsection{Extended Category-based Collaborative Filtering (\\textit{ECCF})}\n\\label{sec:ECCF}\nIn this recommendation model we employ frequent co-occurring information interests to extend category-based user profiles.\nWe reinforce the preferences for item categories learned by analyzing rating behavior (stored in the Users-Categories matrix $UC$) with interest co-occurrence associations (stored in the $CCG$ graph) in order to acquire an extended set of user preferences for neighbor identification.\n\nThe idea behind preference extension is that, the more the user has appreciated the items of a category, the more interest\n\\linebreak\nco-occurrence makes sense. Therefore, starting from the category-based user profiles stored in the $UC$ matrix, we increment user preferences with the contribution of the strongest co-occurrence relations of the $CCG$ graph, depending on the number of positive ratings available in the users-items matrix $R$. \nThe output of this process is stored in the Extended Preferences matrix $EP$, which is used to compute $\\sigma(u,v)$ in Equation \\ref{eq:rmeancentering}.\n\nFigure \\ref{fig:matrici} provides a graphical view of the computation of $EP$: the information stored in $UC$ is combined with that stored in the $CCG$ to set the values of this matrix. In this process, the users-ratings matrix $R$ is used to limit the reinforcement of preferences to the categories of the positively rated items.\nMoreover, the $CCG$ is used to propagate preference information according to the strongest co-occurrence of interests. \nIn detail, we compute the values of $EP$ as follows:\n\\begin{itemize}\n \\item \n let $Cat_i$ be the set of categories associated to item $i$; \n \\item\n let $CatSet_i$ be the set of categories directly connected to any category $c \\in Cat_i$ in the $CCG$ through the heaviest outbound arcs. These are the categories which most frequently co-occur with some categories of $Cat_i$ in search sessions.\n\\end{itemize}\nThen:\n\\begin{equation}\n\\label{eq:pm}\nEP[u,c]=UC[u,c]+\\sum_{i\\in|I|} f(u,i,c)\n\\end{equation}\nwhere\n\\begin{equation}\n f(u,i,c) =\n \\begin{cases}\n 1 & \\quad \\text{if $R[u,i] \\in$} ~ \\text{\\textit{PositiveRatings}} ~ \\text{$\\wedge$} ~\\text{$c \\in CatSet_i$}\\\\\n 0 & \\quad \\text{otherwise}\n \\end{cases}\n\\label{eq:f}\n\\end{equation}\nIn Equation \\ref{eq:f} $PositiveRatings$ denotes the set of ratings that are considered as positive in the dataset; e.g., \\{5\\}, or \\{4, 5\\} in a [1, 5] Likert scale.\n\n\n\\section{Validation of ECCF}\n\\label{sec:validation}\n\n\\subsection{Dataset of Item Ratings}\n\\label{sec:YELP}\nAs a source of rating data we exploit the Yelp Dataset \\cite{Yelp-dataset}, which contains information about a set of businesses, users and reviews and is available for academic purposes. In the dataset, item ratings take values in a [1, 5] Likert scale where 1 is the worst value and 5 is the best one. \nMoreover, each item is associated with a list of categories describing the kind of service it offers.\n\nThe full list of Yelp categories is available at \\url{www.yelp.com\/developers\/documentation\/v3\/category_list} and is organized in a taxonomy to specify businesses at different levels of detail. The taxonomy includes a large set of first-level categories, representing broad types of businesses; e.g., ``Active life'', ``Arts \\& entertainment'', ``Automotive'', \\dots, ``Food'', ``Restaurants'', and many others. 
\n\n\\section{Validation of ECCF}\n\\label{sec:validation}\n\n\\subsection{Dataset of Item Ratings}\n\\label{sec:YELP}\nAs a source of rating data we exploit the Yelp Dataset \\cite{Yelp-dataset}, which contains information about a set of businesses, users and reviews, and is available for academic purposes. In the dataset, item ratings take values in a [1, 5] Likert scale where 1 is the worst value and 5 is the best one. \nMoreover, each item is associated with a list of categories describing the kind of service it offers.\n\nThe full list of Yelp categories is available at \\url{www.yelp.com\/developers\/documentation\/v3\/category_list} and is organized in a taxonomy to specify businesses at different levels of detail. The taxonomy includes a large set of first-level categories, representing broad types of businesses; e.g., ``Active life'', ``Arts \\& entertainment'', ``Automotive'', \\dots, ``Food'', ``Restaurants'', and many others. In turn, the first-level categories are specialized into sub-categories; e.g., ``Restaurants'' includes many types of restaurants such as ``Indian'', ``Chinese'' and the like. \nWe apply two filters to the dataset:\n\\begin{enumerate}\n \\item \n We select all the Yelp categories that are subclasses of ``Restaurants'' or ``Food''; e.g., ``Indian'', ``Chinese'', ``Cafes'', ``Kebab'', ``Pizza'', and so forth; the total number of categories is 254. \n Then, we project the Yelp dataset on the set of items associated with at least one of these categories. In the rest of this paper we refer to this set of categories as {\\em CATS}. \n \\item We further filter the dataset on the users who rated at least 20 items.\n\\end{enumerate}\n\n\n\\begin{table}[t]\n\\centering\n\\caption{Statistics about the Filtered Datasets}\n\\begin{tabular}{l|l|l}\n\\hline\nYelp & Number of users & 26,600 \\\\\n& Number of businesses & 76,317 \\\\ \n& Number of ratings & 1,326,409 \\\\ \\hline\nAOL & Number of sessions & 1,248,803 \\\\\n& Number of queries & 2,136,029 \\\\\n\\hline\n\\end{tabular}\n\\label{t:dataset}\n\\end{table}\nThe upper portion of Table \\ref{t:dataset} summarizes the number of users, businesses and ratings of the filtered Yelp dataset.\n\n\n\\subsection{Dataset of Search Sessions}\n\\label{sec:AOLlog}\nFor the generation of the Category Co-occurrence Graph we use the AOL query log.\\footnote{The log is available at \\url{http:\/\/www.cim.mcgill.ca\/~dudek\/206\/Logs\/AOL-user-ct-collection\/}.} \nEach line of the log represents either a query or a click-through event on one of the search results of a query. The line contains various fields, among which the submitted query and the submission date and hour.\n\nIn order to build a graph that is thematically related to the items of the filtered Yelp dataset, we select from the log the search sessions relevant to the categories $c \\in CATS$, enriched with the following two types of external knowledge. The enrichment is useful to abstract from the specific category names used in Yelp and to take into account semantically related information: \n\\begin{enumerate}\n \\item \n {\\em Lemmatized knowledge:} we enrich each element $c \\in CATS$ with a set of keywords and synonyms from the WordNet \\cite{WordNet} lexical database. \n \\item \n {\\em Relevant terms from the ProBase \\cite{Wu-etal:12} taxonomy:} \n \\begin{itemize}\n \\item \n ProBase, recently renamed Microsoft Concept Graph, is a large concept network harnessed from web pages and search logs. It is organized as a list of $\\langle concept, instance \\rangle$ pairs related by a subclass relation and it contains 5,376,526 classes and 12,501,527 instances.\n \\item\n For each element $c \\in CATS$, we enrich $c$ with the $\\langle concept, instance \\rangle$ pairs of ProBase such that $concept$ has at least 85\\% WordNet similarity with any term of the lemmatized knowledge of $c$, and the WordNet similarity between the two components of the pair is at least 85\\%.\n \\end{itemize}\n\\end{enumerate}\nFor the selection of relevant search queries in the AOL log we match the lemmatized words occurring in the queries to the enriched categories of $CATS$. If at least one term of a query matches the enriched categories, we consider the query relevant and we include its parent session in the filtered log.
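\n\nThe following sketch illustrates this selection step. It is a simplification of our pipeline: lemmatization is delegated to a generic \\texttt{lemmatize} function, and the enriched categories are assumed to be available as a flat set of lemmatized keywords (both assumptions are ours, for illustration).\n\\begin{verbatim}\ndef filter_relevant_sessions(sessions, enriched_cats, lemmatize):\n    # sessions: list of sessions, each one a list of query strings.\n    # enriched_cats: set of lemmatized keywords obtained by expanding\n    # CATS with WordNet synonyms and ProBase pairs.\n    relevant = []\n    for session in sessions:\n        for query in session:\n            lemmas = {lemmatize(w) for w in query.lower().split()}\n            if lemmas & enriched_cats:\n                # one matching query makes the whole session relevant\n                relevant.append(session)\n                break\n    return relevant\n\\end{verbatim}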
\n\nWe identify the search sessions by aggregating the queries performed by the same user according to their temporal proximity, following the widely applied rule that two consecutive queries belong to different sessions if the time interval between them exceeds half an hour; see \\cite{White-etal:07}.
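\nA sketch of this segmentation rule follows; timestamps are assumed to be datetime objects and the half-hour threshold is exposed as a parameter.\n\\begin{verbatim}\nfrom datetime import timedelta\n\ndef split_sessions(user_queries, gap=timedelta(minutes=30)):\n    # user_queries: (timestamp, query) pairs of a single user, sorted\n    # by timestamp. A new session starts whenever more than half an\n    # hour elapses between two consecutive queries.\n    sessions, current, last_time = [], [], None\n    for time, query in user_queries:\n        if last_time is not None and time - last_time > gap:\n            sessions.append(current)\n            current = []\n        current.append(query)\n        last_time = time\n    if current:\n        sessions.append(current)\n    return sessions\n\\end{verbatim}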
\n \nThe lower portion of Table \\ref{t:dataset} shows the number of sessions and queries of the filtered AOL dataset.\n\nIt is worth noting that the AOL log was involved in an information leak issue, but we decided to use it for two reasons. Firstly, our analysis is ethically sound because we study general search behavior to acquire aggregate data, abstracting from the search histories of individual users. Secondly, to the best of our knowledge, the AOL log is the only available large dataset that reports textual search queries and can therefore be used for linguistic interpretation. We analyzed some public datasets but they did not meet our requirements. For instance, the Excite query dataset\\footnote{\\url{https:\/\/svn.apache.org\/repos\/asf\/pig\/trunk\/tutorial\/data\/}} contains about 1M queries while the AOL log contains 20M queries. Moreover, in the Yahoo dataset\\footnote{\\url{https:\/\/webscope.sandbox.yahoo.com\/catalog.php?datatype=l\\&did=50}} the queries are coded; thus, it is not possible to extract any linguistic information to learn category co-occurrence.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.6\\linewidth]{graph-distribution.jpg}\n \\caption{Distribution of the Weight of Edges in the $CCG$.}\\label{fig:graph-distribution}\n\\end{figure}\n\n\n\\subsection{Category Co-occurrence Graph}\nWe instantiate the {\\em CCG} with the interests that co-occur in the sessions of the filtered AOL dataset by applying the procedure described in Section \\ref{graph}.\nThe resulting graph is densely connected: almost all of the categories are linked to each other by an edge having weight $>0$.\nHowever, the distribution of weights in the graph shows that there is a large number of weakly connected categories and a very small number of strongly associated ones. The ``heavy'' edges identify the interests that co-occur very frequently in search sessions and suggest selecting the arcs having maximum weight in the {\\em CCG} for the extension of the user profiles, as done in Section \\ref{sec:ECCF}.\nFigure \\ref{fig:graph-distribution} shows this distribution; the x-axis represents the edges of the graph, and the y-axis represents their weights, which take values in [1, 272224].\n\n\\begin{table}[b]\n\\centering\n\\caption{Performance Evaluation @10; the Best Values Are in Boldface, the Worst Ones Are Struck Through}\n\\begin{tabular}{l|l|l|l|l|l}\n\\hline \n\\textbf{Metrics} \n & \\multicolumn{1}{c|}{\\textbf{\\begin{tabular}[c]{@{}c@{}}U2UCF\\end{tabular}}} & \\multicolumn{1}{c|}{\\textbf{SCCF}} & \\multicolumn{1}{c|}{\\textbf{\\begin{tabular}[c]{@{}c@{}}ECCF\\\\ \\{3,4,5\\}\\end{tabular}}} & \\multicolumn{1}{c|}{\\textbf{\\begin{tabular}[c]{@{}c@{}}ECCF\\\\ \\{4,5\\}\\end{tabular}}} & \\multicolumn{1}{c}{\\textbf{\\begin{tabular}[c]{@{}c@{}}ECCF\\\\ \\{5\\}\\end{tabular}}} \\\\ \\hline\n\\textbf{Precision} & \\st{0.7823} & \\textbf{0.786} & 0.7857 & 0.7855 & 0.7859 \\\\\n\\textbf{Recall} & \\st{0.7473} & 0.7526 & 0.7536 & \\textbf{0.755} & 0.7529 \\\\\n\\textbf{F1} & \\st{0.7644} & 0.7689 & 0.7693 & \\textbf{0.7699} & 0.769 \\\\\n\\textbf{RMSE} & \\st{1.0001} & 0.9899 & 0.9897 & 0.9893 & \\textbf{0.9892} \\\\\n\\textbf{MRR} & \\st{0.733} & 0.7367 & 0.737 & \\textbf{0.7391} & 0.7384 \\\\ \n\\textbf{Diversity} & \\st{0.3042} & 0.3053 & \\textbf{0.3056} & 0.3053 & 0.3049 \\\\\n\\textbf{User cov.} & \\st{0.8497} & 0.8521 & 0.8526 & \\textbf{0.8542} & 0.8534 \\\\\n\\hline\n\\end{tabular}%\n\\label{t:results@10}\n\\end{table}\n\n\\subsection{Test Methodology}\n\\label{experiments}\nWe evaluate the recommendation performance of ECCF by comparing it to U2UCF and SCCF, which we consider as baselines. Moreover, we compare these algorithms with SVD++ in order to assess the improvement in the suggestion of relevant items given by frequently co-occurring interests.\n\nThe SCCF and ECCF recommendation algorithms are developed by extending the Surprise library \\cite{Surprise}, while we use the default Surprise implementations of U2UCF and SVD++.\n\nWe test the algorithms by applying a 10-fold cross-validation on the filtered Yelp dataset, after randomly distributing the ratings across folds: we use 90\\% of the ratings as training set and 10\\% as test set. In all the tests, we configure the KNN algorithms to work with 50 neighbors.\n\nIn order to analyze the impact of a looser or stricter extension of user preferences with category co-occurrence on recommendation performance, we validate ECCF on different settings of $PositiveRatings$ in Equation \\ref{eq:f}, i.e., on different interpretations of what is a good rating. For each fold\nwe generate three versions of the Extended Preferences matrix $EP$, having set $PositiveRatings$ to $\\{3,4,5\\}$, $\\{4,5\\}$, and $\\{5\\}$ respectively. \n\nWe evaluate Top-k recommendation performance with k=10 and k=20 by taking the ratings observed in the Yelp dataset as ground truth. For the evaluation we consider the following metrics: Precision, Recall, F1, RMSE, MRR, Diversity and User Coverage.
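\n\nAs an indication of the protocol, the following sketch evaluates the two off-the-shelf algorithms with the Surprise library. It is a simplified sketch under assumptions of ours: the ratings file name is a placeholder, U2UCF is mapped to a user-based \\texttt{KNNWithMeans} (up to configuration details), and only RMSE is computed, since the ranking-based metrics listed above require a custom top-k evaluation of the suggestion lists.\n\\begin{verbatim}\nfrom surprise import Dataset, KNNWithMeans, Reader, SVDpp\nfrom surprise.model_selection import cross_validate\n\n# ratings.csv is a placeholder for the filtered Yelp ratings.\nreader = Reader(line_format='user item rating', sep=',',\n                rating_scale=(1, 5))\ndata = Dataset.load_from_file('ratings.csv', reader)\n\n# U2UCF baseline: mean-centered user-based KNN with 50 neighbors.\nu2ucf = KNNWithMeans(k=50, sim_options={'name': 'pearson',\n                                        'user_based': True})\nfor algo in (u2ucf, SVDpp()):\n    cross_validate(algo, data, measures=['RMSE'], cv=10, verbose=True)\n\\end{verbatim}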
\n\nDiversity describes the mean intra-list diversity of items in the suggestion lists @k; see \\cite{Bradley-Smyth:01}. In this work, we interpret diversity from the viewpoint of item classification. Therefore, we measure the diversity of a recommendation list as follows:\n\\begin{equation}\n\\text{intra-list diversity@k}=\\frac{\\sum_{i=1}^{k}\\sum_{j=i}^{k} (1 - sim(i, j))}{k(k+1)\/2}\n\\end{equation}\nwhere $sim(i, j)$ is the cosine similarity between the lists of categories associated with items $i$ and $j$ in the ratings dataset.
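\n\nA direct transcription of this measure in Python, with category lists represented as sets so that the cosine similarity reduces to the binary case (both helper functions are ours, for illustration):\n\\begin{verbatim}\ndef cosine(cats_a, cats_b):\n    # Cosine similarity between two category sets (binary vectors).\n    if not cats_a or not cats_b:\n        return 0.0\n    return len(cats_a & cats_b) \/ (len(cats_a) ** 0.5\n                                   * len(cats_b) ** 0.5)\n\ndef intra_list_diversity(items, item_cats):\n    # items: top-k suggestion list; item_cats: item -> set of the\n    # categories associated with the item in the ratings dataset.\n    # Pairs with j >= i, normalized by k*(k+1)\/2 as in the equation\n    # above; diagonal terms contribute 0 because sim(i, i) = 1.\n    k = len(items)\n    if k == 0:\n        return 0.0\n    total = 0.0\n    for x in range(k):\n        for y in range(x, k):\n            total += 1 - cosine(item_cats[items[x]],\n                                item_cats[items[y]])\n    return total \/ (k * (k + 1) \/ 2)\n\\end{verbatim}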
\n\n\\subsection{Results}\n\\label{results}\nTable \\ref{t:results@10} shows the performance results of the KNN recommenders we compared, by taking into account a maximum of 10 suggested items (performance@10). \n\\begin{itemize}\n \\item \\textbf{Precision:} in line with previous results described in \\cite{Sieg-etal:07b}, all of the category-based recommenders outperform U2UCF. This can be explained by the fact that the matrices describing preferences for item categories are denser than the ratings matrix. Thus, they improve recommendation by supporting a better identification of neighbors for Equation \\ref{eq:rmeancentering}. \n However, SCCF outperforms all of the ECCF variants. The second best recommender is ECCF$\\{5\\}$, which extends user profiles in the strictest way: it only considers as pivots for extension the categories associated with the items that the user has rated 5 stars. Notice also that the precision of ECCF decreases when $PositiveRatings$ is lax. The reason is that the extension of user profiles with frequently co-occurring interests can increase the estimated interest in some noisy categories with respect to the pure observation of the distribution of ratings on categories. In particular, noise grows when the policy applied to extend preferences is less restrictive. \n \\item \\textbf{Recall:} ECCF outperforms the baselines in all the settings of $PositiveRatings$. Specifically, ECCF\\{4,5\\} achieves the best result, while recall is lower in ECCF\\{3,4,5\\} and further decreases in ECCF\\{5\\}.\n We explain this finding as follows: an extension of user profiles based on the categories of highly rated items supports the identification of a richer set of user preferences, and a more effective identification of neighbors, than only considering the distribution of ratings on categories. However, if we restrict $PositiveRatings$ too much, the user profiles are not extended enough to noticeably improve Recall. Moreover, as noticed for Precision, if $PositiveRatings$ is lax, noise in the estimation of user preferences challenges neighbor selection.\n \n\\begin{table}[t]\n\\centering\n\\caption{Performance Evaluation @20}\n\\begin{tabular}{l|l|l|l|l|l}\n\\hline \n\\textbf{Metrics} \n & \\multicolumn{1}{c|}{\\textbf{\\begin{tabular}[c]{@{}c@{}}U2UCF\\end{tabular}}} & \\multicolumn{1}{c|}{\\textbf{SCCF}} & \\multicolumn{1}{c|}{\\textbf{\\begin{tabular}[c]{@{}c@{}}ECCF\\\\ \\{3,4,5\\}\\end{tabular}}} & \\multicolumn{1}{c|}{\\textbf{\\begin{tabular}[c]{@{}c@{}}ECCF\\\\ \\{4,5\\}\\end{tabular}}} & \\multicolumn{1}{c}{\\textbf{\\begin{tabular}[c]{@{}c@{}}ECCF\\\\ \\{5\\}\\end{tabular}}} \\\\ \\hline\n\\textbf{Precision} & \\st{0.7806} & \\textbf{0.7842} & 0.7839 & 0.7838 & \\textbf{0.7842} \\\\ \n\\textbf{Recall} & \\st{0.757} & 0.7624 & 0.7634 & \\textbf{0.7649} & 0.7626 \\\\ \n\\textbf{F1} & \\st{0.7686} & 0.7731 & 0.7735 & \\textbf{0.7742} & 0.7732 \\\\\n\\textbf{RMSE} & \\st{0.9935} & 0.9838 & 0.9835 & \\textbf{0.9832} & \\textbf{0.9832} \\\\ \n\\textbf{MRR} & \\st{0.733} & 0.7369 & 0.7372 & \\textbf{0.7391} & 0.7384 \\\\ \n\\textbf{Diversity} & \\st{0.3059} & 0.307 & \\textbf{0.3073} & 0.307 & 0.3067 \\\\ \n\\textbf{User cov.} & \\st{0.8497} & 0.8521 & 0.8526 & \\textbf{0.8542} & 0.8534 \\\\\n\\hline\n\\end{tabular}%\n\\label{t:results@20}\n\\end{table}\n\n\n\\begin{figure}[b]\n \\centering\n \\includegraphics[width=0.9\\linewidth]{accuracy_10.jpg}\n \\caption{Graphical Representation of Accuracy@10.}\n \\label{fig:accuracy@10}\n\\end{figure}\n\n\n \\item \\textbf{F1:} ECCF outperforms the baselines. In detail, ECCF\\{4,5\\} achieves the best F1 = 0.7699; moreover, F1 varies consistently with Recall, depending on $PositiveRatings$. \n \\item \\textbf{RMSE:} SCCF reduces the mean error between estimated and observed ratings with respect to the baseline, showing the benefits of category-based user profiles. Moreover, consistent with the variation of Precision, the best results are obtained by ECCF\\{5\\}, i.e., with a strict extension of user profiles. RMSE progressively increases (i.e., gets worse) for $PositiveRatings=\\{4, 5\\}$ and \\{3, 4, 5\\}.\n \\item \\textbf{MRR:} ECCF outperforms the baselines. Specifically, ECCF\\{4,5\\} obtains the best MRR = 0.7391. The second best value corresponds to a more selective extension of user profiles in ECCF\\{5\\}; moreover, if $PositiveRatings=\\{3, 4, 5\\}$ the results get worse.\n \\item \\textbf{Diversity:} both SCCF and ECCF outperform U2UCF. In this case, the best results are obtained with a lax extension of user preferences (ECCF\\{3,4,5\\}) and Diversity decreases as the preference extension policy becomes stricter. We explain these findings with the fact that category-based user profiles improve the estimation of user preferences concerning a varied set of item categories, with respect to a flat recommendation based on ratings. However, the stricter the extension of user preferences, the fewer item categories are used in neighbor identification.\n \\item \\textbf{User coverage:} ECCF outperforms the baselines, confirming the usefulness of preference extension. However, the selection of the ratings for the extension influences coverage: ECCF\\{4,5\\} achieves the best results by suggesting at least one relevant item to 85.42\\% of the users, against 84.97\\% of U2UCF.
 The second best is ECCF\{5\} and ECCF\{3,4,5\} has the worst results.\n \end{itemize}\nIn the described experiments the $EP$ matrix is defined by only taking into account positive ratings. In order to get a broader view of the performance of ECCF, we also consider its application to all the user ratings; i.e., we set $PositiveRatings$ to $\{1, \dots, 5\}$. With respect to the previous results, in this case the algorithm achieves similar Precision but lower Recall (0.7524), MRR (0.7369) and User coverage (0.8155). \n\nTable \ref{t:results@20} shows the results obtained by comparing performance@20. \nThese results confirm the usefulness of category-based user profiles and of their extension with frequently co-occurring information interests: \n\begin{itemize}\n \item Also in this case, ECCF\{4,5\} is the best recommendation algorithm. It outperforms the others in Recall, F1, MRR and User coverage. Moreover, both ECCF\{5\} and ECCF\{4,5\} achieve the best RMSE in comparison with the other recommenders. \n \item However, while SCCF has the best Precision@10, both SCCF and ECCF\{5\} achieve the best Precision@20.\n\end{itemize}\nWith respect to k=10, Precision@20 is lower while Recall@20 and F1@20 take higher values; this makes sense because we are considering longer suggestion lists. Moreover, RMSE@20 is lower, which tells us that the longer lists contain proportionally fewer errors in the estimation of ratings. In contrast, most algorithms obtain the same MRR for k=10 and k=20 (except for SCCF and ECCF\{3,4,5\}): this shows that the first relevant item is almost always placed in the first 10 positions of the suggestion lists.\nFurthermore, Diversity@20 has the highest values for all the recommenders: this might be due to the fact that the longer suggestion lists have more chances to include items belonging to different categories. Finally, User coverage@10 = User coverage@20 because we interpret coverage as the percentage of users who receive at least one suggestion.\n\n\begin{figure}[b]\n \centering\n \includegraphics[width=0.9\linewidth]{accuracy_20.jpg}\n \caption{Graphical Representation of Accuracy@20.}\n \label{fig:accuracy@20}\n\end{figure}\n\nFigures \ref{fig:accuracy@10} and \ref{fig:accuracy@20} depict the accuracy @10 and @20:\n\begin{itemize}\n \item \n All of the category-based recommenders outperform U2UCF, confirming the benefits of the introduction of category-based preferences in KNN Collaborative Filtering. The conceptual representation of user preferences generally improves performance because the matrices describing user preferences ({\em UC} and {\em EP}) are denser than the users-items matrix storing ratings ({\em R}). Therefore, better neighbors can be identified for the computation of Equation \ref{eq:rmeancentering}.\n \item\n A comparison between category-based algorithms shows that the best performance results are obtained by extending user profiles on the basis of the items that users have rated very well, i.e., with 4 or 5 stars in a [1, 5] Likert scale. If the items that received middle ratings are considered as well, accuracy decreases. \n \item The category-based representation of user profiles has a positive impact on the Diversity of recommendation lists. Conversely, the extension of user profiles does not further help this aspect, unless user profiles are extended in a lax way.
 However, a lax extension is not advisable because it degrades the other metrics.\n\end{itemize}\n\n\nIn order to assess the usefulness of preference extension in Top-k recommendation, we also compare the previously described algorithms with SVD++ \cite{Koren:08}, which adopts Matrix Factorization to learn latent user and item factors, basing rating prediction on the sole analysis of user ratings. The comparison results show that:\n\begin{itemize}\n \item \n SVD++ is more accurate than U2UCF and SCCF. On the filtered Yelp dataset, SVD++ obtains F1@10 = 0.7696. This finding shows that the management of category-based user profiles helps recommendation but it can be outperformed by a deeper understanding of the features of items and users. \n \item\n SVD++ achieves similar accuracy results with respect to ECCF but it is outperformed by ECCF\{4, 5\}. Therefore, the extension of user profiles with frequently co-occurring information interests, integrated into a KNN recommender, improves accuracy and makes it comparable to or higher than that of Matrix Factorization algorithms. \n \item\n ECCF outperforms SVD++ as far as the diversity of the recommendation lists is concerned: SVD++ has Diversity@10 = 0.3041; this is comparable to the diversity achieved by U2UCF and lower than that of all the category-based recommenders we presented.\n \item\n In contrast, SVD++ has the highest User coverage of all the algorithms (0.8709), showing its superior capability to counteract data sparsity. \n \end{itemize}\n \n\subsection{Discussion}\n\label{discussion}\nIn summary, the evaluation results show that ECCF outperforms U2UCF, SCCF and SVD++ in accuracy and intra-list diversity. Moreover, it outperforms U2UCF and SCCF in MRR and user coverage, while SVD++ excels in the latter metric. The results also show that ECCF achieves the best results when applied to positive ratings, while its performance slightly decreases when the user profiles are extended by taking both positive and negative ratings.\n\nThese results support the hypothesis that preference extension, based on frequently co-occurring information interests, improves the accuracy of the suggestions generated by a KNN recommender system. However, further research has to be carried out to improve other performance metrics, possibly also investigating the integration of preference co-occurrence in Matrix Factorization algorithms.\n\nIt might be questioned whether extending user profiles with general interest co-occurrence data might provide less personalized recommendations than, e.g., focusing the extensions on the user's neighborhood. In this respect, we point out that we aim at developing a model that does not depend on cross-domain user identification. However, an investigation of this issue can be interesting to deal with the cases in which user information can be shared among the applications, or public information about the users can be connected to the local profiles; e.g., public data on social networks.\n\nBefore closing this discussion, it is worth noting that, even though the AOL query log dates back to 2006, it can be considered as a good information source as long as it is analyzed from the viewpoint of the concepts expressed by the users. In other words, while the specific information items mentioned in the log might not exist any more, the topics referred to in the queries are general and long-lasting. Of course, some new topics (e.g., new types of restaurants) might have emerged since 2006, and maybe new concept associations could exist now.
 However, the described performance results show that the co-occurring interests we identified are useful to improve recommendation performance; moreover, the methodology described in this paper can be applied to other, more recent datasets, if available.\n\n\section{Conclusions}\n\label{sec:conclusions}\nWe investigated whether the identification of frequently co-occurring interests in information search can be used to improve the performance of KNN collaborative recommender systems. For this purpose, we defined a preference extension model that, applied to a category-based representation of user profiles, infers user preferences by exploiting frequently co-occurring information interests. Then, we implemented the model in the Extended Category-based Collaborative Filtering algorithm (ECCF). This is a variant of User-to-User Collaborative Filtering that works on category-based user profiles, enriched with preferences inferred from general search behavior.\nFor the analysis of user interests, we analyzed the query log of a widely used search engine. \n\nWe evaluated ECCF on a large dataset of item ratings, by applying different levels of strictness in the extension of user profiles. The evaluation showed that ECCF outperforms User-to-User Collaborative Filtering in accuracy, MRR, intra-list diversity and user coverage. Interestingly, ECCF also obtains higher accuracy and diversity than the SVD++ recommender system, based on Matrix Factorization; however, ECCF has lower user coverage than SVD++. \n\nIn our future work we will focus on the coverage aspect in order to improve the performance of KNN Collaborative Filtering.\nMoreover, we will carry out further experiments, considering (i) a broader domain than Restaurants and Food, on which we have focused our current work, and (ii) users who have provided few or zero ratings. \nWe will also analyze other datasets to check whether the performance results described in this article can be generalized.
Finally, we will compare the performance of ECCF with a larger set of recommendation approaches based on preference extension.\n\n\\begin{acks}\nThis work was supported by the University of Torino through projects ``Ricerca Locale'', MIMOSA\n(MultIModal Ontology-driven query system for the heterogeneous data of a SmArtcity, ``Progetto di Ateneo Torino\\_call2014\\_L2\\_157'', 2015-17)\nand the Computer Science PhD program.\nWe are grateful to Zhongli Filippo Hu, who helped us filter the Yelp dataset.\n\\end{acks}\n\n\n \\bibliographystyle{ACM-Reference-Format} \n \n\\balance\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\subsection{\\@startsection{subsection}{2}%\n \\z@{.35\\linespacing\\@plus.7\\linespacing}{.25\\linespacing}%\n {\\normalfont\\itshape}}\n\\makeatother\n\n\\numberwithin{equation}{section}\n\n\\theoremstyle{plain}\n\\newtheorem{lemma}{Lemma}[section]\n\n\\newaliascnt{proposition}{lemma}\n\\newtheorem{proposition}[proposition]{Proposition}\n\\aliascntresetthe{proposition} \n\n\\newaliascnt{theorem}{lemma}\n\\newtheorem{theorem}[theorem]{Theorem} \n\\aliascntresetthe{theorem}\n\\newtheorem*{theorem*}{Theorem}\n\n\\newaliascnt{corollary}{lemma}\n\\newtheorem{corollary}[corollary]{Corollary} \n\\aliascntresetthe{corollary}\n\n\\newtheoremstyle{citing\n {3pt\n {3pt\n {\\itshape\n {\n {\\bfseries\n {.\n {.5em\n \n {\\thmnote{#3}\n\n\\theoremstyle{citing}\n\\newtheorem*{varthm}{\n\n\n\n\\def\\equationautorefname~#1\\null{(#1)\\null}\n\\def\\itemautorefname~#1\\null{(#1)\\null}\n\\def\\sectionautorefname~#1\\null{\\S#1\\null}\n\\def\\subsectionautorefname~#1\\null{\\S#1\\null}\n\n\\providecommand*{\\lemmaautorefname}{Lemma}\n\\providecommand*{\\corollaryautorefname}{Corollary}\n\\providecommand*{\\propositionautorefname}{Proposition}\n\\providecommand*{\\theoremautorefname}{Theorem}\n\n\n\\newcommand\\CA{{\\mathcal A}} \\newcommand\\CB{{\\mathcal B}}\n\\newcommand\\CC{{\\mathcal C}} \\newcommand\\CE{{\\mathcal E}}\n\\newcommand\\CF{{\\mathcal F}} \\newcommand\\CG{{\\mathcal G}}\n\\newcommand\\CH{{\\mathcal H}} \\newcommand\\CL{{\\mathcal L}}\n\\newcommand\\CM{{\\mathcal M}} \\newcommand\\CN{{\\mathcal N}}\n\\newcommand\\CO{{\\mathcal O}} \\newcommand\\CP{{\\mathcal P}}\n\\newcommand\\CQ{{\\mathcal Q}} \\newcommand\\CR{{\\mathcal R}}\n\n\\newcommand\\fb{{\\mathfrak b}} \\newcommand\\fc{{\\mathfrak c}}\n\\newcommand\\fg{{\\mathfrak g}} \\newcommand\\fh{{\\mathfrak h}}\n\\newcommand\\fp{{\\mathfrak p}} \\newcommand\\fq{{\\mathfrak q}}\n\\newcommand\\fu{{\\mathfrak u}} \\newcommand\\fl{{\\mathfrak l}}\n\\newcommand\\fm{{\\mathfrak m}} \\newcommand\\fn{{\\mathfrak n}}\n\\newcommand\\FN{{\\mathfrak N}} \\newcommand\\fr{{\\mathfrak r}}\n\\newcommand\\ft{{\\mathfrak t}} \\newcommand\\fv{{\\mathfrak v}}\n\n\\newcommand\\BBC{{\\mathbb C}} \\newcommand\\BBD{{\\mathbb D}}\n\\newcommand\\BBF{{\\mathbb F}} \\newcommand\\BBP{{\\mathbb P}}\n\\newcommand\\BBQ{{\\mathbb Q}} \\newcommand\\BBR{{\\mathbb R}}\n\\newcommand\\BBZ{{\\mathbb Z}}\n\n\\newcommand\\Bbar{\\overline{B}} \\newcommand\\Cbar{\\overline{C}}\n\\newcommand\\CObar{\\overline{\\mathcal O}} \\newcommand\\Gbar{\\overline{G}}\n\\newcommand\\Hbar{\\overline{H}} \\newcommand\\jtilde{{\\widetilde j}}\n\\newcommand\\ktilde{{\\widetilde{k}}} \\newcommand\\Lbar{\\overline{L}}\n\\newcommand\\ptilde{{\\tilde {p}}} \\newcommand\\Tbar{\\overline{T}}\n\\newcommand\\Xtilde{\\widetilde X}\n\\newcommand\\Ytilde{\\widetilde Y} \\newcommand\\Ztilde{{\\widetilde Z}}\n\n\\DeclareMathOperator{\\Ad}{Ad} 
\n\\DeclareMathOperator{\\Alt}{Alt}\n\\DeclareMathOperator{\\Aut}{Aut} \n\\DeclareMathOperator{\\gr}{gr}\n\\DeclareMathOperator{\\Hom}{Hom} \n\\DeclareMathOperator{\\Lie}{Lie}\n\\newcommand{\\red}{\\operatorname{r}} \n\\DeclareMathOperator{\\res}{res}\n\n\\newcommand\\id{{id}}\n\\newcommand\\inverse{^{-1}}\n\\renewcommand\\th{{^{\\text{th}}}}\n\n\\newcommand\\CHIJ{{\\mathcal H}^{IJ}}\n\\newcommand\\etaIJ{\\eta^{IJ}}\n\\newcommand\\etaEJ{\\eta^{\\emptyset J}}\n\\newcommand\\FNt{\\widetilde{\\FN}}\n\\newcommand\\IzJ{I\\cap{}^zJ}\n\\newcommand\\LIbar{\\overline{L}_I}\n\\newcommand\\LJbar{\\overline{L}_J}\n\\newcommand\\OIbar{{\\overline{\\CO}_I}} \n\\newcommand\\Osbar{{\\overline{\\CO}_{\\{s\\}}}} \n\\newcommand\\point{{\\operatorname {pt}}}\n\\newcommand\\SN{\\mathfrak N}\n\\newcommand\\SNt{\\widetilde{\\SN}}\n\\newcommand\\ssleq{{\\scriptscriptstyle \\leq}}\n\\newcommand\\sslt{{\\scriptscriptstyle <}}\n\\newcommand\\Waf{W_{\\operatorname{af}}}\n\\newcommand\\Wex{W_{\\operatorname{ex}}}\n\\newcommand\\WIJ{{W^{IJ}}} \n\\newcommand\\XEJ{X^{\\emptyset J}}\n\\newcommand\\XIE{{X^{I \\emptyset}}}\n\\newcommand\\XIJ{X^{IJ}}\n\\newcommand\\ZIbar{{\\overline{Z}_I}} \n\\newcommand\\Zsbar{{\\overline{Z}_{\\{s\\}}}} \n\n\\begin{document}\n\n\n\\title[Equivariant K-theory] {Equivariant K-theory of\n generalized Steinberg varieties}\n\n\\author[J. M. Douglass]{J. Matthew Douglass} \n\\address{Department of Mathematics\\\\University of North Texas\\\\Denton TX,\n USA 76203}\n\\email{douglass@unt.edu} \\urladdr{http:\/\/hilbert.math.unt.edu}\n\n\\author[G. R\\\"ohrle]{Gerhard R\\\"ohrle}\n\\address{Fakult\\\"at f\\\"ur Mathematik\\\\Ruhr-Universit\\\"at Bochum\\\\D-44780\n Bochum, Germany} \n\\email{gerhard.roehrle@rub.de}\n\\urladdr{http:\/\/www.ruhr-uni-bochum.de\/ffm\/Lehrstuehle\/Lehrstuhl-VI}\n\n\\subjclass[2010]{Primary 20G05; Secondary 14L30 20C08}\n\n\\keywords{Equivariant $K$-theory, Hecke algebra, Steinberg variety}\n\n\n\n\\begin{abstract}\n We describe the equivariant $K$-groups of a family of generalized\n Steinberg varieties that interpolates between the Steinberg variety of a\n reductive, complex algebraic group and its nilpotent cone in terms of the\n extended affine Hecke algebra and double cosets in the extended affine\n Weyl group. As an application, we use this description to define\n Kazhdan-Lusztig ``bar'' involutions and Kazhdan-Lusztig bases for these\n equivariant $K$-groups.\n\\end{abstract}\n\n\\maketitle\n\n\\section{Introduction}\n\nSuppose $G(\\BBF_q)$ is a Chevalley group defined over the finite field\n$\\BBF_q$. A fundamental result in the classification of irreducible complex\nrepresentations of $G(\\BBF_q)$ is the classification of representations that\ncontain a vector fixed by a Borel subgroup $B(\\BBF_q)$ of $G(\\BBF_q)$. These\nrepresentations are completely determined, and their characters may be\ncomputed, using the centralizer ring or Hecke algebra $\\CH(G(\\BBF_q),\nB(\\BBF_q))$. Iwahori \\cite{iwahori:structure} conjectured that\n$\\CH(G(\\BBF_q), B(\\BBF_q))$ is isomorphic to the group algebra of the Weyl\ngroup $W$ of $G(\\BBF_q)$. An explicit isomorphism between $\\CH(G(\\BBF_q),\nB(\\BBF_q))$ and the group algebra of $W$ was constructed by Lusztig\n\\cite{lusztig:theorem}. 
More generally, irreducible representations that\ncontain a vector fixed by a parabolic subgroup $P_I(\\BBF_q)$ of $G(\\BBF_q)$\nare determined by the parabolic Hecke algebra $\\CH(G(\\BBF_q), P_I(\\BBF_q))$.\nCurtis, Iwahori, and Kilmoyer \\cite{curtisiwahorikilmoyer:hecke} showed that\nthis last algebra is isomorphic to the Hecke algebra $\\CH(W, W_I)$, where\n$W_I$ is the corresponding parabolic subgroup of $W$, and Curtis\n\\cite{curtis:isomorphism} extended Lusztig's construction to obtain an\nexplicit isomorphism between $\\CH(G(\\BBF_q), P_I(\\BBF_q))$ and $\\CH(W,W_I)$.\n\nNow suppose $G(\\BBQ_p)$ is a Chevalley group defined over $\\BBQ_p$. In this\ncase, one important class of representations consists of those\nrepresentations that contain a vector fixed by an Iwahori subgroup of\n$G(\\BBQ_p)$. These representations are again classified by a Hecke algebra,\nthis time the extended affine Hecke algebra $\\CH$ of the complex dual group,\n$\\check G$. Kazhdan and Lusztig \\cite{kazhdanlusztig:langlands} construct\nan isomorphism between $\\CH$ and the equivariant $K$-theory of the Steinberg\nvariety of $\\check G$. They then use this isomorphism to give a construction\nof the irreducible representations of $\\CH$. In this paper we extend their\nconstruction and explicitly describe the equivariant $K$-groups of the\ngeneralized Steinberg varieties $\\XIJ$ from~\\cite{douglassroehrle:geometry}\nin terms of the Hecke algebra $\\CH$. When $I=J$, the subspace of $\\CH$ we\nconsider is, up to an involution, the extension to $\\CH$ of the subalgebra\n$\\CH(G(\\BBF_q), P_I(\\BBF_q))$ of $\\CH(G(\\BBF_q), B(\\BBF_q))$.\n\nIn another direction, it follows from \\cite[Theorem 5.1.3]\n{douglassroehrle:homology} and \\cite[Theorem 2.5]\n{douglassroehrle:steinberg} that the rational Borel-Moore homology of $\\XIJ$\nmay be computed algebraically as the space of $W_I\\times W_J$-invariants in\nthe smash (semidirect) product of the coinvariant algebra of $W$ with the\ngroup algebra of $W$. The results in this paper may be viewed as the\nextension of this computation to the more refined level of equivariant\n$K$-theory and the affine Hecke algebra.\n\nFrom now on, suppose that $G$ is a connected, reductive complex algebraic\ngroup such that the derived group of $G$ is simply connected. Set\n$\\fg=\\Lie(G)$. For $g\\in G$ and $x\\in \\fg$ write $gx$ instead of\n$\\Ad(g)(x)$, where $\\Ad$ is the adjoint representation of $G$. Define $\\FN$\nto be the cone of nilpotent elements in $\\fg$ and let $B$ be a fixed Borel\nsubgroup of $G$ with Lie algebra $\\fb$. Then, the Steinberg variety of $G$\nis the variety $Z$ of all triples $(x,gB,hB)$ in $\\FN\\times G\/B \\times G\/B$\nsuch that $g\\inverse x, h\\inverse x\\in \\fb$. Based on a construction of\nKazhdan and Lusztig \\cite{kazhdanlusztig:langlands}, Chriss and Ginzburg\n\\cite {chrissginzburg:representation} and Lusztig \\cite{lusztig:bases} have\nshown that there is an algebra structure on $K^{G\\times \\BBC^*}(Z)$, the\n$G\\times \\BBC^*$-equivariant $K$-group of $Z$, such that $K^{G\\times\n \\BBC^*}(Z)$ is isomorphic to the extended, affine Hecke algebra $\\CH$\nassociated to $G$. Ostrik \\cite{ostrik:equivariant} used this isomorphism to\ndescribe $K^{G\\times \\BBC^*}(\\FN)$ in terms of $\\CH$ and to define a\nKazhdan-Lusztig ``bar'' involution, and a Kazhdan-Lusztig basis, of\n$K^{G\\times \\BBC^*}(\\FN)$. As indicated above, in this paper we describe the\nequivariant $K$-groups of the generalized Steinberg varieties $\\XIJ$ in\nterms of $\\CH$. 
These generalized Steinberg varieties interpolate between\n$Z=X^{\\emptyset \\emptyset}$ and $\\FN=X^{SS}$ ($S$ is the Coxeter generating\nset for $W$ determined by $B$). We then use our description to define\nKazhdan-Lusztig ``bar'' involutions and Kazhdan-Lusztig bases of the\nequivariant $K$-groups $K^{G\\times \\BBC^*}(\\XIJ)$.\n\nThe proof of the main theorem in this paper (\\autoref{thm:main}) relies on\nOstrik's computation of $K^{G\\times \\BBC^*}(\\FN)$. For a generalized\nSteinberg variety $\\XIJ$ we use a filtration of $K^{G\\times \\BBC^*}(\\XIJ)$\nindexed by $G$-orbits in the product of two partial flag varieties. In the\nspecial case of $\\FN$, there is a single $G$-orbit, the filtration of\n$K^{G\\times \\BBC^*}(\\FN)$ is trivial, and Ostrik has computed $K^{G\\times\n \\BBC^*}(\\FN)$ in terms of $\\CH$. In the general case, each associated\ngraded piece has the form $K^{L'\\times \\BBC^*}(\\FN')$, where $L'$ is a Levi\nfactor of a parabolic subgroup of $G$ and $\\FN'$ is the nilpotent cone in\n$\\Lie(L')$. Thus, each of these graded pieces is described using Ostrik's\ntheory.\n\nIn \\autoref{sec:main} we give the basic constructions and state the main\ntheorem relating the extended affine Hecke algebra and the equivariant\n$K$-theory of generalized Steinberg varieties. Assuming facts that are\nproved in subsequent sections, \\autoref{sec:proof} contains a proof\nof~\\autoref{thm:main}. The constructions of the ``standard basis,'' the\n``bar'' involution, and the Kazhdan-Lusztig basis are given in\n\\autoref{sec:kl}. In \\autoref{sec:kthy} we review the intersection\/Tor\nproduct construction in the form it is used in this paper. The final three\nsections contain proofs of the main ingredients used in the proof\nof~\\autoref{thm:main}.\n\n\\subsection{Notation and conventions}\nWith $G$ and $B$ as above, fix a maximal torus $T$ contained in $B$. Let\n$W=N_G(T)\/T$ be the Weyl group of $(G,T)$, let $S$ be the set of simple\nreflections in $W$ determined by the choice of $B$, and let $X(T)$ be the\ncharacter group of $T$. Then $X(T)$ is a free abelian group and $W$ acts on\n$X(T)$ as group automorphisms. We use additive notation for $X(T)$ and\nconsider the root system $\\Phi$ of $(G,T)$ as a subset of $X(T)$. The roots\ncorresponding to the root subgroups in $B$ determine a positive system\n$\\Phi^+$, and a base $\\Pi$, of $\\Phi$. If $s$ is in $S$, then $s=s_\\alpha$\nfor a unique $\\alpha$ in $\\Pi$.\n\nLet $H$ be a complex linear algebraic group. We use the convention that a\nlowercase fraktur letter denotes the Lie algebra of the group denoted by the\nsame uppercase roman letter, so for example, $\\fh= \\Lie(H)$. Let $R(H)$\ndenote the representation ring of $H$ and let $X(H)$ denote the character\ngroup of $H$. Define $\\overline H$ to be the product of $H$ with the\none-dimensional complex torus $\\BBC^*$, so $\\overline H= H\\times\n\\BBC^*$. Let $v\\colon \\Hbar \\to \\BBC^*$ be the character defined by\n$v(h,z)=z$ for $h\\in H$ and $z\\in \\BBC^*$ and let $A=\\BBZ[v,v\\inverse]$ be\nthe subring of $R(\\Hbar)$ generated by $v$. With this notation, there is a\nnatural isomorphism $R(\\Hbar)\\cong A\\otimes_{\\BBZ} R(H)$. In particular,\n\\[\nR(\\Tbar) \\cong A\\otimes_{\\BBZ} R(T)\\cong A[X(T)]\\quad\\text{and} \\quad\nR(\\Gbar)\\cong A\\otimes_{\\BBZ} R(G).\n\\]\n\nWhen the group $H$ acts on a quasiprojective variety $Y$, let $K^{H}(Y)$\ndenote the Grothendieck group of the abelian category of $H$-equivariant\ncoherent sheaves on $Y$. 
The group $K^{H}(Y)$ is naturally an $R(H)$-module.\n\nSuppose $C$ is a closed subgroup of $H$. For a $C$-variety $F$, let\n$H\\times^C F$ denote the quotient of $H\\times F$ by the $C$-action given by\n$c\\cdot (h,y)= (hc\\inverse, cy)$ for $c\\in C$, $h\\in H$, and $y\\in F$. The\nimage of $(h,y)$ in $H\\times^C F$ is denoted by $h*y$. The group $H$ acts on\n$H\\times^C F$ by left multiplication and the projection $f\\colon H\\times^C\nF\\to H\/C$ given by $h*y\\mapsto hC$ is a well-defined $H$-equivariant\nmorphism. Conversely, suppose $Y$ is an $H$-variety and $f_Y\\colon Y\\to H\/C$\nis an $H$-equivariant morphism. Set $F=f_Y\\inverse(C)$. Then the map\n$m\\colon H\\times^C F\\to Y$ given by $h*y\\mapsto hy$ is a well-defined\n$H$-equivariant isomorphism such that $f=f_Ym$. Suppose $\\Cbar$ acts on $F$.\nThen $\\Hbar$ acts on both $\\Hbar\\times^{\\Cbar}F$ and $H\\times^CF$, and these\nvarieties are canonically isomorphic $\\Hbar$-varieties. It follows from work\nof Thomason \\cite[Proposition 6.2]{thomason:algebraic} that\n$K^{\\Hbar}(H\\times^CF)$ is naturally isomorphic to $K^{\\Cbar}(F)$, and that\nif $C_{\\red}$ is a reductive subgroup of $C$ such that $C\\cong\nC_{\\operatorname{u}} \\rtimes C_{\\red}$, where $C_{\\operatorname{u}}$ is the\nunipotent radical of $C$, then $K^{\\Cbar}(F)$ is isomorphic to\n$K^{\\Cbar_{\\red}}(F)$ (see \\cite[\\S5.2]{chrissginzburg:representation}). Let\n\\[\n\\res_F\\colon K^{\\Hbar}(H\\times^CF) \\xrightarrow{\\ \\cong\\ }\nK^{\\Cbar_{\\red}}(F)\n\\]\ndenote the composition of these two isomorphisms.\n\nSuppose $Y_1$ and $Y_2$ are $H$-varieties with $Y_1\\subseteq Y_2$. To\nsimplify the notation, if $Y_1$ is closed in $Y_2$, then we sometimes denote\nthe direct image map $K^{H}(Y_1) \\to K^H(Y_2)$ simply by $()_*$, and if\n$Y_1$ is open in $Y_2$, then we sometimes denote the restriction map\n$K^{H}(Y_2) \\to K^H(Y_1)$ by $()^*$.\n\nUnless otherwise indicated, we consider $\\fg$ as a $\\BBC^*$-module with the\naction of $\\BBC^*$ given by $z\\cdot x= z^{-2}x$ for $z$ in $\\BBC^*$ and $x$\nin $\\fg$. Then $\\Gbar$ acts on $\\FN$ by $(g,z) \\cdot x =z^{-2}g\\cdot x$. For\na subgroup $P$ of $G$, $\\Gbar$ acts on $G\/P$ by $(g,z)\\cdot hP= gh P$.\nDefine\n\\[\n\\FNt=\\{\\, (x, gB)\\in \\FN\\times G\/B\\mid g\\inverse x\\in \\fb\\,\\}.\n\\]\nAs above, the Steinberg variety of $G$ is\n\\[\nZ=\\{\\, (x, gB, hB) \\in \\FN \\times G\/B \\times G\/B \\mid g\\inverse x, h\\inverse\nx\\in \\fb \\,\\} \\cong \\FNt\\times_{\\FN} \\FNt .\n\\]\nThen $\\Gbar$ acts on $\\FNt$ and $Z$ via the diagonal action, and the\nprojections\n\\[\np_Z\\colon Z\\to \\FN,\\quad q_Z\\colon Z\\to G\/B \\times G\/B,\\quad p\\colon \\FNt\\to\n\\FN, \\quad\\text{and} \\quad q\\colon \\FNt\\to G\/B\n\\]\nare all $\\Gbar$-equivariant. \n\n\\section{Statement of the main theorem }\\label{sec:main} \n\n\\subsection{Generalized Steinberg varieties}\n\nFor $I\\subseteq S$, let $W_I=\\langle I \\rangle$ be the subgroup of $W$\ngenerated by $I$ and let $P_I$ be the parabolic subgroup of $G$ that\ncontains $B$ such that $N_{P_I}(T)\/T = W_I$. Let $U_I$ denote the unipotent\nradical of $P_I$ and let $L_I$ be the Levi factor of $P_I$ that contains\n$T$. Then $P_I=L_IU_I$ and $\\fp_I=\\fl_I + \\fu_I$ are Levi decompositions of\n$P_I$ and $\\fp_I$, respectively. Define $\\Pi_I= \\{\\, \\alpha\\in \\Pi\\mid\ns_\\alpha\\in I\\,\\}$ and let $\\Phi_I$ be the intersection of $\\Phi$ with the\nspan of $\\Pi_I$. 
Then, with respect to the action of $T$, $\\Phi_I$ is the\nset of roots of $\\fl_I$, $\\Phi^+\\cup \\Phi_I$ is the set of roots of $\\fp_I$,\nand $\\Phi^+\\setminus \\Phi_I$ is the set of roots of $\\fu_I$. In the special\ncase when $I=\\emptyset$, $P_I=B$ and we define $U = U_{\\emptyset}$.\n\nEach pair of subsets $I, J\\subseteq S$ determines a \\emph{generalized\n Steinberg variety}\n\\[\n\\XIJ= \\{\\, (x, gP_I, hP_J)\\in \\FN\\times G\/P_I \\times G\/P_J \\mid g\\inverse\nx\\in \\fp_I,\\ h\\inverse x\\in \\fp_J\\,\\}\n\\]\n(see \\cite[\\S2]{douglassroehrle:geometry}). Define\n\\[\n\\etaIJ\\colon Z\\to \\XIJ \\quad\\text{by}\\quad \\etaIJ(x, gB, hB)= (x, gP_I,\nhP_J).\n\\]\nThen $\\XIJ$ is a $\\Gbar$-variety ($\\Gbar$ acts diagonally on $\\XIJ$) and\n$\\etaIJ$ is a surjective, proper, $\\Gbar$-equivariant morphism. Notice that\n\\begin{itemize}\n\\item if $I=J=\\emptyset$, then $\\XIJ=Z$ and $\\etaIJ$ is the identity, and\n\\item if $I=J=S$, then $\\XIJ \\cong \\FN$ and $\\etaIJ$ may be identified with\n projection $p_Z\\colon Z\\to \\FN$.\n\\end{itemize}\n\n\n\\subsection{Hecke algebras}\n\nThe \\emph{Iwahori-Hecke algebra of $W$} is the $A$-algebra $\\CH_S$ with\n$A$-basis $T_w$, for $w$ in $W$, and multiplication satisfying\n\\begin{equation}\n \\label{eq:std}\n \\begin{cases}\n T_w T_{w'}= T_{ww'}& \\text{if $\\ell(ww')= \\ell(w)+ \\ell(w')$, and} \\\\\n T_s^2= v^2 T_1+(v^2-1)T_s&\\text{for $s$ in $S$,}\n \\end{cases}\n\\end{equation}\nwhere $\\ell$ is the length function on $W$ determined by $S$ and the\nsubscript $1$ in $T_1$ denotes the identity in $W$ (see\n\\cite{kazhdanlusztig:coxeter}).\n\nThe \\emph{extended affine Hecke algebra of $W$} is the $A$-algebra $\\CH$\nwith generators $T_w$, $\\theta_\\lambda$, for $w$ in $W$ and $\\lambda$ in\n$X(T)$, and multiplication satisfying\n\\begin{equation}\n \\label{eq:bernstein}\n \\begin{cases}\n T_w T_{w'}= T_{ww'}& \\text{if $\\ell(ww')= \\ell(w)+ \\ell(w')$,} \\\\\n T_s^2= v^2 T_1+(v^2-1)T_s&\\text{for $s$ in $S$,} \\\\\n \\theta_\\lambda \\theta_{\\mu}= \\theta_{\\lambda+\\mu}& \\text{for\n $\\lambda, \\mu$ in $X(T)$,} \\\\\n \\theta_\\lambda T_s -T_s \\theta_{s(\\lambda)}= (v^2-1) \\frac\n {\\theta_\\lambda-\\theta_{s(\\lambda)}} {1-\\theta_{-\\alpha}} &\\text{for\n $\\lambda$ in $X(T)$ and $s=s_\\alpha$ in $S$, and} \\\\\n \\theta_0=T_1 &\\text{is the identity in $\\CH$ .}\n \\end{cases}\n\\end{equation}\n(See \\cite[\\S1]{lusztig:bases}. Note that for $w$ in $W$, the generator\n$T_w$ in the preceding definition is related to the generator $\\tilde T_w$\nin \\cite[\\S1]{lusztig:bases} by $\\tilde T_w=v^{-\\ell(w)} T_w$.)\n\nWe identify the $A$-span, in $\\CH$, of $\\{\\,T_w\\mid w\\in W\\,\\}$ with the\nIwahori-Hecke algebra $\\CH_S$, and we identify the $A$-span, in $\\CH$, of\n$\\{\\,\\theta_\\lambda \\mid \\lambda \\in X(T)\\,\\}$ with the group algebra\n$A[X(T)]$ of $X(T)$. Then $\\CH_S$ and $A[X(T)]$ are subalgebras of $\\CH$\nthat contain the identity. The center of $\\CH$ is $A[X(T)]^W$ (see\n\\cite{lusztig:singularities}). We identify $R(\\Gbar)$ with $A[X(T)]^W$, and\nhence with the center of $\\CH$, via the isomorphism $R(\\Gbar) \\cong\nA[X(T)]^W$ given by associating with a representation of $\\Gbar$ its\ncharacter in $A[X(T)]$. The map $A[X(T)]\\otimes_A \\CH_S\\to \\CH$ given by\nmultiplication, $\\theta_\\lambda\\otimes T_w\\mapsto \\theta_\\lambda T_w$, is an\n$A$-module isomorphism. 
We call $\{\, \theta_\lambda T_w\mid \lambda\in X(T), w\in W\,\}$ the \emph{Bernstein basis} of $\CH$ because it arises from the Bernstein presentation~\eqref{eq:bernstein}.\n\nFor $\lambda$ in $X(T)$, let $t_\lambda$ denote translation by $\lambda$ in $X(T)$. Then $\{\, t_\lambda\mid \lambda\in X(T)\,\}$ is a subgroup of $\Aut(X(T))$ isomorphic to $X(T)$. Recall that $W$ acts faithfully on $X(T)$. Define $\Wex$, the \emph{extended affine Weyl group of $\Phi$,} to be the subgroup of $\Aut(X(T))$ generated by the image of $W$ and $\{\, t_\lambda\mid \lambda \in X(T)\,\}$. Then $\Wex$ is isomorphic to the semi-direct product $X(T) \rtimes W$. We frequently identify $W$ with its image in $\Aut(X(T))$ and consider $W$ as a subgroup of $\Wex$.\n\nThe \emph{affine Weyl group of $\Phi$,} $\Waf$, is the subgroup of $\Aut(X(T))$ generated by the image of $W$ and $\{\, t_\alpha\mid \alpha \in \Phi\,\}$. Then $\Waf$ is a normal subgroup of $\Wex$ and there is a finite abelian subgroup $\Gamma$ of $\Wex$ such that $\Wex= \Waf \Gamma$ and $\Waf\cap \Gamma =1$. The group $\Waf$ is a Coxeter group with a Coxeter generating set $S_{\textrm{af}}$ that contains $S$, and $\Gamma$ acts on $\Waf$ as Coxeter group automorphisms preserving $S_{\textrm{af}}$. Extend the length function $\ell$ and Bruhat order $\leq$ on $\Waf$ to $\Wex$ by defining\n\[\n\ell(y\gamma)= \ell(y)\qquad\text{and} \qquad \text{$y\gamma \leq y'\gamma'$ if and only if $\gamma=\gamma'$ and $y\leq y'$}\n\]\nfor $y,y'$ in $\Waf$ and $\gamma, \gamma'$ in $\Gamma$ (see~\cite[\S2]{lusztig:singularities}).\n\nThe algebra $\CH$ has a \emph{standard basis}, $\{\, T_w\mid w\in \Wex\,\}$, such that the relations~\eqref{eq:std} hold (see \cite[\S1]{lusztig:bases}). The ``bar'' involution of $\CH$, $\overline{\phantom {x} }\colon \CH \to \CH$, is the ring automorphism of $\CH$ defined by $\overline v=v\inverse$ and $\overline{T_x}= T_{x\inverse} \inverse$ for $x$ in $\Wex$. As observed by Lusztig~\cite{lusztig:singularities}, the argument in the proof of \cite[Theorem 1.1]{kazhdanlusztig:coxeter} can be applied to show that for $x$ in $\Wex$, there are unique elements $C_x$ and $C_x'$ in $\CH$ such that\n\begin{equation}\n \label{eq:cx}\n \begin{cases}\n \overline{C_x}= C_x \\\n C_x=v_x\inverse T_x+ \sum_{y< x} \epsilon_y\epsilon_x v_x v_y^{-2} \overline{P}_{y,x} T_y\n \end{cases}\n\end{equation}\nand\n\begin{equation}\n \label{eq:cx'}\n \begin{cases}\n \overline{C_x'}= C_x' \\\n C_x'=v_x\inverse T_x+ \sum_{y< x} v_x\inverse P_{y,x}T_y,\n \end{cases}\n\end{equation}\nwhere \n\[\n\epsilon_x= (-1)^{\ell(x)}\quad\text{and}\quad v_x= v^{\ell(x)}\n\]\nfor $x$ in $\Wex$, and $P_{y,x}$ is a polynomial in $v$ of degree at most $\ell(x)-\ell(y) -1$. (See also~\cite[\S1.7, 1.8]{lusztig:bases}.) We call $\{\, C_x\mid x\in \Wex\,\}$ and $\{\, C_x'\mid x\in \Wex\,\}$ \emph{Kazhdan-Lusztig bases} of $\CH$. A fundamental property of the Kazhdan-Lusztig bases is that if $x$ is in $\Wex$ and $s,t$ are in $S$ with $\ell(tx)<\ell(x)$ and $\ell(xs)<\ell(x)$, then\n\begin{equation}\n \label{eq:cxts}\n T_tC_x= C_xT_s=-C_x\quad\text{and}\quad T_tC_x'= C_x'T_s=v^2 C_x'.\n\end{equation}
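\n\nFor example, suppose $s$ is in $S$. The only element of $\Wex$ less than $s$ in the Bruhat order is $1$ and $P_{1,s}=1$, so\n\[\nC_s=v\inverse T_s-vT_1 \quad\text{and}\quad C_s'=v\inverse(T_s+T_1).\n\]\nUsing $T_s\inverse= v^{-2}T_s-(1-v^{-2})T_1$, one checks directly that $\overline{C_s}=C_s$ and $\overline{C_s'}=C_s'$, and that $T_sC_s=-C_s$ and $T_sC_s'=v^2C_s'$, illustrating~\eqref{eq:cxts}.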
\n\nFor a subset $I$ of $S$ let $w_I$ be the longest element in $W_I$. Then\n\[\nC_{w_I}= (-v)^{\ell(w_I)} \sum_{y\in W_I} \epsilon_y v_y^{-2} T_y \quad \text{and}\quad T_sC_{w_I}= C_{w_I}T_s=-C_{w_I}\n\]\nfor $s$ in $I$. For subsets $I$ and $J$ of $S$ we have the Kazhdan-Lusztig basis elements $C_{w_I}$ and $C_{w_J}$ of $\CH$. Define\n\[\n\CHIJ=C_{w_I} \CH C_{w_J}\quad \text{and}\quad \chi^{IJ}\colon \CH\to \CHIJ\text{ by } \chi^{IJ}(h)= C_{w_I} h C_{w_J}.\n\]\nObviously, $\CHIJ$ is an $R(\Gbar)$-submodule of $\CH$ and $\chi^{IJ}$ is a surjective $R(\Gbar)$-module homomorphism.\n\n\subsection{The isomorphism \texorpdfstring{$\CHIJ \cong K^{\Gbar}(\XIJ)$}{}} \n\nThe group $\Gbar$ acts on the Steinberg variety $Z$ and so $K^{\Gbar}(Z)$ is naturally an $R(\Gbar)$-module, and hence an $A$-module. Chriss and Ginzburg \cite[\S7.2]{chrissginzburg:representation} and Lusztig \cite{lusztig:bases} have shown that $K^{\Gbar}(Z)$ has an $A$-algebra structure such that $\CH \cong K^{\Gbar}(Z)$. Let $\varphi\colon \CH\to K^{\Gbar}(Z)$ be the $A$-algebra isomorphism constructed by Lusztig \cite[Theorem 8.6]{lusztig:bases}. The main result in this paper is the following theorem.\n\n\begin{theorem}\label{thm:main}\n For each pair of subsets $(I,J)$ of $S$ there is a unique\n $R(\Gbar)$-module isomorphism\n \[\n \psi^{IJ}\colon \CHIJ \xrightarrow{\ \cong\ } K^{\Gbar}(\XIJ)\n \]\n such that the diagram\n \begin{equation}\n \label{eq:mainthm}\n \vcenter{\vbox{\n \xymatrix{\CH \ar[d]^{\chi^{IJ}} \ar[rr]^-{\varphi}_-{\cong} &&\n K^{\Gbar}(Z) \ar[d]^{\etaIJ_*} \\\n \CHIJ \ar[rr]^-{\psi^{IJ}}_-\cong && K^{\Gbar}(\XIJ) }\n }}\n \end{equation}\n commutes.\n\end{theorem}\n\nIn~\autoref{sec:kl} we use the isomorphism $\psi^{IJ}$ to define a standard basis, a ``bar'' involution, and a Kazhdan-Lusztig basis for $K^{\Gbar}(\XIJ)$. As we explain next, the proof of~\autoref{thm:main} gives significantly more information about the isomorphism $\psi^{IJ}$ than is encoded in~\eqref{eq:mainthm}. This leads to a graded refinement of the theorem that is given in~\autoref{cor:ref}.\n\nThe preimages of the $G$-orbits on $G\/P_I \times G\/P_J$ under the projection on the second and third factors, say $\{\XIJ_z\}$, form a partition of $\XIJ$ into locally closed, equidimensional subvarieties indexed by $\{W_IzW_J\}$, the set of $(W_I, W_J)$-double cosets in $W$. The closures of these subvarieties are the irreducible components of $\XIJ$ (see \cite[\S3]{douglassroehrle:geometry}). Thus, the fundamental classes of the closures of the subvarieties $\XIJ_z$ form a basis of the top Borel-Moore homology of $\XIJ$. As explained in detail below, the contribution of each subvariety $\XIJ_z$ to $K^{\Gbar}(\XIJ)$ is not just a single homology class; rather, it is the full equivariant $K$-group of isomorphism classes of $\LIbar\cap {}^z \LJbar$-equivariant coherent sheaves on the nilpotent cone of $\fl_I\cap z\fl_J$. A basis of this $K$-group is indexed by the set of $(W_I,W_J)$-double cosets in $W_I\backslash \Wex\/W_J$ that project to the double coset $W_IzW_J$ in $W$. Taking the union over $z$ (in a suitable sense) gives rise to a basis of $K^{\Gbar}(\XIJ)$ indexed by the $(W_I,W_J)$-double cosets in $\Wex$.\n\nIn case $I=J=\emptyset$, $X^{\emptyset \emptyset}=Z$, and $\{W_\emptyset z W_\emptyset\}=W$.
For $w$ in $W$, $X^{\\emptyset \\emptyset}_w=Z_w$ is the\nconormal bundle to the $G$-orbit in $G\/B\\times G\/B$ corresponding to $w$,\n$L_\\emptyset \\cap {}^w L_\\emptyset =T$, the cone of nilpotent elements in\n$\\fl_\\emptyset \\cap w\\fl_\\emptyset$ is $\\{0\\}$, and $K^{\\Tbar}(\\{0\\}) \\cong\nA[X(T)]$. Obviously, $\\{\\, t_\\lambda w\\mid \\lambda\\in X(T)\\,\\}$ parametrizes\nthe set of $(W_\\emptyset, W_\\emptyset)$-double cosets in $\\Wex$ that project\nto $\\{w\\}$. It follows from~\\autoref{thm:ost} that $\\{\\, t_\\lambda w\n\\mid \\lambda\\in X(T)\\,\\}$ parametrizes an $A$-basis of $K^{\\Gbar}(Z_w)$ and\nit follows from ~\\cite[\\S8.6]{lusztig:bases} that $\\{\\, t_\\lambda w \\mid\nw\\in W,\\ \\lambda\\in X(T)\\,\\}$ parametrizes an $A$-basis of $K^{\\Gbar}(Z)$.\n\nAt the other extreme, when $I=J=S$, there is a single $(W,W)$-double coset\nin $W$ (with representative $1$) and $\\{W_S z W_S\\}=\\{W\\}$. In this case,\n$X^{SS}_1=X^{SS}\\cong \\FN$, $L_S \\cap {}^1 L_S =G$, and the cone of\nnilpotent elements in $\\fl_S \\cap {}^1\\fl_S$ is $\\FN$. The\n$(W_S,W_S)$-double cosets in $\\Wex$ that project to $W$ are simply the\n$(W,W)$-double cosets in $\\Wex$. These are parametrized by $\\{\\,\nt_\\lambda\\mid \\lambda\\in X(T)^+\\,\\}$, where $X(T)^+$ is the set of dominant\nweights relative to the choice of $B$. For $\\lambda$ in $X(T)$ let\n$\\CL_\\lambda$ be the $G$-equivariant line bundle on $G\/B$ such that $T$ acts\non the fibre over $B$ with character $-\\lambda$. We consider $\\CL_\\lambda$\nas a $\\Gbar$-equivariant line bundle on $G\/B$ via the natural projection\n$\\Gbar \\to G$. It follows from a result of Broer~\\cite{broer:line} that if\n$\\lambda$ is dominant, then $p_* q^* ([\\CL_\\lambda]) = [R^0p_*q^*\n\\CL_\\lambda]$ in $K^{\\Gbar}(\\FN)$. Ostrik~\\cite[\\S2.2] {ostrik:equivariant}\nhas proved the following key lemma he attributes to R.~Bezrukavnikov.\n\n\\begin{lemma}\\label{lem:ostlem}\n The set $\\{\\, p_*q^* ([\\CL_\\lambda]) \\mid \\lambda\\in X(T)^+ \\,\\}$ is an\n $A$-basis of $K^{\\Gbar}(\\FN)$.\n\\end{lemma}\n\nIn~\\cite{ostrik:equivariant} Ostrik assumes that the group $G$ is\nsimple. The extension to reductive groups is straightforward. It follows\nfrom the lemma that $\\{\\,t_\\lambda \\mid \\lambda\\in X(T)^+\\,\\}$ parametrizes\nan $A$-basis of $K^{\\Gbar} (X^{SS}_1)= K^{\\Gbar}(\\FN)$.\n\nFor arbitrary $I,J\\subseteq S$, the results we prove below are an amalgam of\nthe two extreme cases. Let $\\WIJ$ denote the set of minimal length\n$(W_I,W_J)$-double coset representatives in $W$. For $z$ in $\\WIJ$, set\n\\[\nL_z=L_I\\cap {}^zL_J\n\\]\nand let $X(T)^+_{z}$ be the set of weights in $X(T)$ that are dominant for\n$L_z$. Then $L_z$ is a reductive group (see~\\cite[\\S69B]\n{curtisreiner:methodsII}). We show in~\\autoref{thm:ost} that there is an\nisomorphism $K^{\\Gbar}(\\XIJ_z)\\cong K^{\\Lbar_z}(\\SN_z)$, where $\\SN_z$ is\nthe nilpotent cone in $\\fl_z$ and we show in~\\autoref{lem:double} that $\\{\\,\nt_\\lambda z \\mid \\lambda\\in X(T)^+_{z}\\,\\}$ parametrizes the set of $(W_I,\nW_J)$-double cosets in $\\Wex$ that project to $W_IzW_J$. 
It follows from~\autoref{thm:wideiso2} that $\{\, t_\lambda z \mid \lambda\in X(T)^+_{z}\,\}$ parametrizes an $A$-basis of $K^{\Gbar}(\XIJ_z)$ and it follows from~\autoref{cor:linind} that $\{\, t_\lambda z \mid z\in \WIJ,\ \lambda\in X(T)^+_{z} \,\}$ parametrizes an $A$-basis of $K^{\Gbar}(\XIJ)$.\n\nIn summary, there is a filtration of $K^{\Gbar}(\XIJ)$ such that the direct summands of the associated graded $A$-module are naturally indexed by $(W_I,W_J)$-double cosets in $W$. In addition, the summand indexed by $z$ in $\WIJ$ is isomorphic to $K^{\Lbar_z}(\SN_z)$ and has a basis consisting of isomorphism classes of equivariant coherent sheaves canonically indexed by the $(W_I,W_J)$-double cosets in $\Wex$ that project to $W_IzW_J$.\n\n\section{The proof of \autoref{thm:main}}\label{sec:proof}\n\nIn this section we prove~\autoref{thm:main}. The proof proceeds in three steps. The first step is to show that $K^{\Gbar}(\XIJ)$ is a free $A$-module, the second step is to show that the composition $\etaIJ_* \varphi$ factors through $\chi^{IJ}$, and the third step is to show that the resulting map $\psi^{IJ}$ is an isomorphism. In the course of the argument we construct explicit $A$-bases of $\CHIJ$ and $K^{\Gbar}(\XIJ)$ that correspond under $\psi^{IJ}$.\n\nTo show that $K^{\Gbar}(\XIJ)$ is a free $A$-module, we use the filtration on $K^{\Gbar}(\XIJ)$ determined by the $G$-orbits on $G\/P_I \times G\/P_J$. Recall that the rule $z\mapsto G\cdot (P_I,zP_J)$ defines a bijection between $\WIJ$ and the set of $G$-orbits in $G\/P_I \times G\/P_J$. Let $q_{IJ}\colon \XIJ\to G\/P_I \times G\/P_J$ be the projection on the second and third factors, and for $z$ in $\WIJ$ define $\XIJ_z$ to be the preimage in $\XIJ$ of the orbit $G\cdot (P_I, zP_J)$. Then\n\[\n\XIJ_z=\{\, (x, gP_I, gzP_J)\in \FN\times G\/P_I\times G\/P_J \mid g\inverse x\in \FN\cap \fp_I\cap z\fp_J\,\}.\n\]\nChoose a linear order on $\WIJ$, say $\WIJ=\{\, z_i\mid 1\leq i\leq |\WIJ|\,\}$, that extends the Bruhat order and define\n\[\n\XIJ_{\ssleq i}=\coprod_{j\leq i} \XIJ_{z_j}.\n\] \nThen $\XIJ_{\ssleq i}= \XIJ_{\ssleq i-1} \amalg \XIJ_{z_i}$, where $\XIJ_{\ssleq i-1}$ is closed in $\XIJ_{\ssleq i}$ and $\XIJ_{{z_i}}$ is open in $\XIJ_{\ssleq i}$. It is shown in \autoref{ssec:ex} that for $i\geq1$ the sequence\n\begin{equation}\n \label{eq:exz}\n \xymatrix{0 \ar[r] & K^{\Gbar}(\XIJ_{\ssleq i-1}) \ar[r]^-{()_*} &\n K^{\Gbar}(\XIJ_{\ssleq i}) \ar[r]^-{()^*} & K^{\Gbar}(\XIJ_{z_i}) \ar[r]&\n 0 } \n\end{equation}\nof $R(\Gbar)$-modules is exact. It follows that the embedding $\XIJ_{\ssleq i}\hookrightarrow \XIJ$ induces an injection $K^{\Gbar}(\XIJ_{\ssleq i}) \hookrightarrow K^{\Gbar}(\XIJ)$ in equivariant $K$-theory. \n\nSuppose $z$ is in $\WIJ$ and recall that $L_z=L_I\cap {}^zL_J$ and that $\SN_z=\FN\cap \fl_z$ is the cone of nilpotent elements in $\fl_z$. It is shown in \autoref{ssec:xijz}, in the course of the proof of~\autoref{thm:ost}, that $K^{\Gbar}(\XIJ_z) \cong K^{\Lbar_z}(\SN_z)$.
Suppose $z$ is in $\WIJ$ and recall that $L_z=L_I\cap {}^zL_J$ and that
$\SN_z=\FN\cap \fl_z$ is the cone of nilpotent elements in $\fl_z$. It is
shown in \autoref{ssec:xijz}, in the course of the proof
of~\autoref{thm:ost}, that $K^{\Gbar}(\XIJ_z) \cong
K^{\Lbar_z}(\SN_z)$. Thus, the next proposition follows from the exact
sequence~\eqref{eq:exz} by induction and~\autoref{lem:ostlem}.

\begin{proposition}
  The equivariant $K$-group $K^{\Gbar}(\XIJ)$ is a free $A$-module.
\end{proposition}

Next, to show that $\etaIJ_* \varphi$ factors through $\chi^{IJ}$, it is
enough to show that for each generator $s$ in $J$ and each generator $t$ in
$I$,
\begin{equation}
  \label{eq:4b}
  \etaIJ_* \varphi (hT_s)= - \etaIJ_* \varphi(h)= \etaIJ_* \varphi (T_th)
\end{equation}
for all $h$ in $\CH$. Indeed, if this condition holds, then for all $h$ in
$\CH$, $x$ in $W_I$, and $y$ in $W_J$, $\etaIJ_* \varphi(T_x h T_y)=
\epsilon_x \epsilon_y \etaIJ_*\varphi(h)$, and so
$\etaIJ_*\varphi(C_{w_I} h C_{w_J}) = r_{IJ} \etaIJ_*\varphi(h)$, where
$r_{IJ}$ is a non-zero element of $A$ that depends only on $I$ and $J$.
Then the map $\psi^{IJ}\colon \CHIJ\to K^{\Gbar}(\XIJ)$ given by
$\psi^{IJ}(C_{w_I} h C_{w_J}) = \etaIJ_*\varphi(h)$ is well defined: if
$C_{w_I} h C_{w_J}= 0$, then $r_{IJ}\, \etaIJ_*\varphi(h)=
\etaIJ_*\varphi(C_{w_I} h C_{w_J})=0$, and because $K^{\Gbar}(\XIJ)$ is a
free $A$-module and $r_{IJ}\neq 0$, it follows that
$\etaIJ_*\varphi(h)=0$.

Let $\pi_I\colon \XEJ \to \XIJ$ by $\pi_I(x, gB, hP_J)= (x, gP_I, hP_J)$
and let $\pi_J\colon \XIE \to \XIJ$ by $\pi_J(x, gP_I, hB)= (x, gP_I,
hP_J)$. Then the diagram
\begin{equation*}
  \label{eq:3b}
  \xymatrix{Z\ar[r]^{\eta^{\emptyset J}} \ar[d]_{\eta^{I \emptyset}}
    \ar[dr]^{\etaIJ} & \XEJ \ar[d]^{\pi_I} \\
    \XIE \ar[r]_{\pi_J} &\XIJ}
\end{equation*}
commutes and so~\eqref{eq:4b} follows from the equalities
\[
\eta^{\emptyset J}_* \varphi (hT_s)= - \eta^{\emptyset J}_*
\varphi(h)\quad\text{and}\quad \eta^{I \emptyset}_* \varphi (T_th)= -
\eta^{I \emptyset}_* \varphi(h),
\]
for $s\in J$ and $t\in I$, by applying $(\pi_I)_*$ and $(\pi_J)_*$,
respectively. Thus, by symmetry, it is enough to show that $\eta^{\emptyset
  J}_* \varphi$ factors through the projection $\CH\to \CH C_{w_J}$ given by
right multiplication by $C_{w_J}$:
\[
\xymatrix{ \CH \ar@{-->}[d] \ar[r]^{\varphi} & K^{\Gbar}(Z)
  \ar[d]^-{\eta^{\emptyset J}_*} \\
  \CH C_{w_J} \ar@{-->}[r] & K^{\Gbar}(\XEJ).}
\]

The intersection/Tor-product construction described by Lusztig in
\cite[\S6.4]{lusztig:bases} can be used to define a
$K^{\Gbar}(Z)$-module structure on $K^{\Gbar}(\XEJ)$, say
\[
\star_J\colon K^{\Gbar}(Z) \times K^{\Gbar}(\XEJ) \to K^{\Gbar}(\XEJ),
\]
such that the map $\eta^{\emptyset J}_*\colon K^{\Gbar}(Z) \to
K^{\Gbar}(\XEJ)$ is $K^{\Gbar}(Z)$-linear (see~\autoref{pro:xlin}). Thus,
for all $h$ in $\CH$ and all $s$ in $S$,
\[
\eta^{\emptyset J}_* \varphi(hT_s)= \eta^{\emptyset J}_*(\varphi(h) \star
\varphi(T_s)) = \varphi(h) \star_J \eta^{\emptyset J}_* (\varphi(T_s)),
\]
where $\star$ is the convolution product in $K^{\Gbar}(Z)$. Hence, it is
enough to show that $\eta^{\emptyset J}_* (\varphi(T_s))= -\eta^{\emptyset
  J}_*( \varphi(1))$ for all $s$ in $J$, or equivalently that
$\eta^{\emptyset J}_*(\varphi(T_s+1))=0$. For a simple reflection $s$ in
$W$, let $\mathbf a_s$ in $K^{\Gbar}(Z)$ be defined as in
\cite[\S7.20]{lusztig:bases} (see \autoref{sec:as}).
By~\cite[\S7.25]{lusztig:bases}, $\varphi(T_s+1)=-v \mathbf a_s$.
Therefore, the existence of the map $\psi^{IJ}$ in~\autoref{thm:main}
follows from \autoref{thm:as} below.
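The sign pattern in~\eqref{eq:4b} can be made concrete in the simplest
case. Suppose $J=\{s\}$ and normalize the quadratic relation in $\CH$ as
$(T_s+1)(T_s-v^2)=0$ (any normalization in which $-1$ is an eigenvalue of
$T_s$ leads to the same computation); then $C_{w_J}$ is a unit multiple of
$T_s-v^2$, and
\[
(T_s-v^2)T_s= T_s^2-v^2T_s= (v^2-1)T_s+v^2-v^2T_s= -(T_s-v^2),
\]
so right multiplication by $T_s$ acts as $-1$ on $\CH C_{w_J}$, as
in~\eqref{eq:4b}. Similarly, if $I=J=\{s\}$, then applying~\eqref{eq:4b}
repeatedly gives
$\etaIJ_*\varphi\big((T_s-v^2)h(T_s-v^2)\big)=(1+v^2)^2\,\etaIJ_*\varphi(h)$,
which exhibits a scalar of the form $r_{IJ}$.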
\begin{theorem}\label{thm:as}
  Suppose $s$ is in $J$. Then $\etaEJ_*(\mathbf a_s)= 0$ in
  $K^{\Gbar}(\XEJ)$.
\end{theorem}

This theorem is proved in \autoref{sec:as}.

Finally, to show that $\psi^{IJ}$ is an isomorphism, we use an $A$-basis of
$\CHIJ$ and the partition $\XIJ= \coprod_{z\in \WIJ} \XIJ_z$ to define
compatible filtrations on $\CHIJ$ and $K^{\Gbar}(\XIJ)$. We also need the
analogous constructions for $\CH$ and $K^{\Gbar}(Z)$.

Recall that $q_{Z}\colon Z\to G/B \times G/B$ is the projection on the
second and third factors. For $w$ in $W$ define $Z_w$ to be the preimage in
$Z$ of the orbit $G\cdot (B, wB)$. Then
\[
Z_w= \{\, (x, gB, gwB)\in \FN\times G/B\times G/B \mid g\inverse x\in
\fu\cap w\fu\,\}.
\]
Define
\[
Z_{\ssleq w}=\coprod_{y\leq w} Z_y \quad \text{and} \quad Z_{\sslt w}=
Z_{\ssleq w}\setminus Z_w= \coprod_{y< w} Z_y,
\]
where $\leq$ is the Bruhat order on $W$. Similarly, for $z$ in $\WIJ$ define
\[
\XIJ_{\ssleq z} =\coprod_{y\leq z} \XIJ_y \quad \text{and} \quad \XIJ_{\sslt
  z}= \XIJ_{\ssleq z}\setminus \XIJ_z =\coprod_{y< z} \XIJ_y,
\]
where the unions are over $y$ in $\WIJ$.

Suppose $z$ is in $\WIJ$ and let $\eta^z$, $\eta^{\ssleq z}$, and
$\eta^{\sslt z}$ be the restrictions of $\etaIJ$ to $Z_z$, $Z_{\ssleq z}$,
and $Z_{\sslt z}$, respectively. It is shown in
\cite[\S3]{douglassroehrle:geometry} that if $w_1$ is in $W_I$ and $w_2$ is
in $W_J$, then $\etaIJ(Z_{w_1zw_2})\subseteq \XIJ_z$, and that
$\etaIJ(Z_{w_1zw_2}) =\XIJ_z$ if and only if $w_1zw_2=z$. It is shown
in~\cite[Lemma 2.2]{douglass:inversion} that if $z$ and $z'$ are in $\WIJ$,
then $z\leq z'$ if and only if there are elements $w$ in $W_IzW_J$ and $w'$
in $W_Iz'W_J$ such that $w\leq w'$. Therefore, $\etaIJ(Z_{\ssleq
  z})\subseteq \XIJ_{\ssleq z}$, and letting $r_z$ and $r_z^{IJ}$ denote the
inclusions $Z_z\to Z_{\ssleq z}$ and $\XIJ_z\to \XIJ_{\ssleq z}$,
respectively, the square
\begin{equation}
  \label{eq:cart}
  \vcenter{\vbox{
      \xymatrix{Z_z\ar[r]^{r_z} \ar[d]^{\eta^z} & Z_{\ssleq
          z}\ar[d]^{\eta^{\ssleq z}} \\
        \XIJ_z\ar[r]^{r_z^{IJ}} & \XIJ_{\ssleq z}}
    }}
\end{equation}
is cartesian.

\begin{lemma}\label{lem:hyp1}
  The map $\eta^z\colon Z_z\to \XIJ_z$ is a proper morphism.
\end{lemma}

\begin{proof}
  Define $Z_{W_IzW_J} = \coprod_{w\in {W_IzW_J}} Z_w$. Then $Z_{{W_IzW_J}}=
  (\etaIJ)\inverse \big( \XIJ_z \big)$ and so by base change, the
  restriction of $\etaIJ$ to $Z_{{W_IzW_J}}$ is proper. Now $G\cdot (B,zB)$
  is closed in $\coprod_{w\in W_IzW_J} G\cdot (B, wB)$ and hence $Z_z$ is
  closed in $Z_{{W_IzW_J}}$. It follows that $\eta^z$ is proper. \qed
\end{proof}

Since $\etaIJ(Z_{\ssleq z}) \subseteq \XIJ_{\ssleq z}$ for all $z$ in
$\WIJ$ and~\eqref{eq:cart} is cartesian (so that, by base change for the
open immersions $r_z$ and $r_z^{IJ}$, we have $(r_z^{IJ})^*\, \eta^{\ssleq
  z}_* = \eta^z_*\, r_z^*$), the proper morphism $\etaIJ$ and its
restrictions induce a map of short exact sequences such that the following
diagram commutes:
\begin{equation}
  \label{eq:exwz}
  \vcenter{\vbox{
      \xymatrix{0 \ar[r] & K^{\Gbar}(Z_{\sslt z}) \ar[r]^{()_*}
        \ar[d]^{\eta^{\sslt z}_*} & K^{\Gbar}(Z_{\ssleq z}) \ar[r]^{()^*}
        \ar[d]^{\eta^{\ssleq z}_*}& K^{\Gbar}(Z_{z}) \ar[r]
        \ar[d]^{\eta^{z}_*}& 0 \\
        0 \ar[r] & K^{\Gbar}(\XIJ_{\sslt z}) \ar[r]^{()_*} &
        K^{\Gbar}(\XIJ_{\ssleq z}) \ar[r]^{()^*} & K^{\Gbar}(\XIJ_{z})
        \ar[r]& 0 ,}
    }}
\end{equation}
where $Z_{\sslt 1}= \XIJ_{\sslt 1}= \emptyset$.
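In the $SL_3(\BBC)$ example above (with $I=\{s_1\}$ and $J=\{s_2\}$), the
partition of $Z$ has six pieces $Z_w$, one for each $w$ in $S_3$, and the
results just cited say that $\etaIJ$ maps $Z_1$, $Z_{s_1}$, $Z_{s_2}$, and
$Z_{s_1s_2}$ into $\XIJ_1$, with $\etaIJ(Z_w)=\XIJ_1$ only for $w=1$, while
$\etaIJ$ maps $Z_{s_2s_1}$ and $Z_{w_0}$ into $\XIJ_{s_2s_1}$, with
$\etaIJ(Z_w)=\XIJ_{s_2s_1}$ only for $w=s_2s_1$.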
Suppose $z$ is in $\WIJ$. Then $L_z = L_I\cap {}^zL_J =L_{\IzJ}$ is a Levi
subgroup of $P_I\cap {}^zP_J$ (see~\cite[\S69B]{curtisreiner:methodsII}).
Set
\[
B_z=L_z\cap B \quad\text{and} \quad \SNt_z= \{\, (x, hB_z)\in \SN_z\times
L_z/B_z\mid h\inverse x\in \fb_z\,\},
\]
and let
\[
p_z\colon \SNt_z\to \SN_z\quad\text{and}\quad q_z\colon \SNt_z\to L_z/B_z
\]
be the projections. Then $B_z$ is a Borel subgroup of $L_z$ and $p_z$ is the
Springer resolution of $\SN_z$.

\begin{theorem}\label{thm:ost}
  Suppose $z$ is in $\WIJ$. Then there is a commutative diagram of
  $R(\Gbar)$-modules
  \begin{equation*}
    \xymatrix{ K^{\Gbar}(Z_z) \ar[r]_{\cong} \ar[d]^{\eta^z_*} &
      K^{\Lbar_z}(\SNt_z) \ar[d]^{(p_z)_*} \\
      K^{\Gbar}(\XIJ_z) \ar[r]_{\cong} & K^{\Lbar_z}(\SN_z), }
  \end{equation*}
  where the horizontal maps are isomorphisms and the vertical maps are
  surjections.
\end{theorem}

This theorem is proved in \autoref{ssec:xijz}.

\begin{corollary}\label{cor:surj}
  The map $\eta^{\ssleq z}_*\colon K^{\Gbar}(Z_{\ssleq z}) \to
  K^{\Gbar}(\XIJ_{\ssleq z})$ is surjective for all $z$ in
  $\WIJ$. Therefore, the maps $\etaIJ_*\colon K^{\Gbar}(Z) \to
  K^{\Gbar}(\XIJ)$ and $\psi^{IJ}\colon \CHIJ\to K^{\Gbar}(\XIJ)$ are
  both surjective.
\end{corollary}

\begin{proof}
  We show that $\eta^{\ssleq z}_*$ is surjective using induction on $z$ in
  the Bruhat order on $\WIJ$. If $z=1$, then $\eta^{\ssleq 1}_*=\eta^1_*$
  is surjective by~\autoref{thm:ost}. Suppose $z>1$. It is shown
  in~\cite[\S3.17]{kazhdanlusztig:langlands} that the image of
  $K^{\Gbar}(Z_{\sslt z})$ in $K^{\Gbar}(Z)$ coincides with the sum of the
  images of the spaces $K^{\Gbar}(Z_{\ssleq y})$ for $y<z$, and the same
  argument applies to the image of $K^{\Gbar}(\XIJ_{\sslt z})$ in
  $K^{\Gbar}(\XIJ)$. Since $\eta^{\ssleq y}_*$ is surjective for $y<z$ by
  induction, it follows that $\eta^{\sslt z}_*$ is surjective. The rows
  of~\eqref{eq:exwz} are exact and the outer vertical maps are surjective
  ($\eta^z_*$ by~\autoref{thm:ost}), so a diagram chase shows that
  $\eta^{\ssleq z}_*$ is surjective. This proves the first statement.
  Taking $z$ to be the maximal element of $\WIJ$, so that $\XIJ_{\ssleq
    z}=\XIJ$, it follows that $\etaIJ_*$ is surjective. Finally, $\varphi$
  is an isomorphism and $\etaIJ_*\varphi= \psi^{IJ}\chi^{IJ}$, so
  $\psi^{IJ}$ is surjective as well. \qed
\end{proof}

The key step in the proof that $\psi^{IJ}$ is an isomorphism is the
following theorem.

\begin{theorem}\label{thm:wideiso2}
  Suppose $z$ is in $\WIJ$. Then there is a commutative diagram
  \begin{equation}
    \label{eq:wideiso}
    \vcenter{\vbox{
        \xymatrix{%
          \CH_{\ssleq z} \ar[r]^-{}_-{\cong} \ar[d]^{\chi^{\ssleq z}} &
          K^{\Gbar}(Z_{\ssleq z}) \ar@{->>}[r]^-{r_z^*} \ar[d]^{\eta^{\ssleq
              z}_*}& K^{\Gbar}(Z_z) \ar[r]_-{\cong} \ar[d]^{\eta^z_*} &
          K^{\Lbar_z}(\SNt_z) \ar[d]^{(p_z)_*} \\
          \CHIJ_{\ssleq z} \ar[r]^-{}_-{\cong} & K^{\Gbar}(\XIJ_{\ssleq z})
          \ar@{->>}[r]^-{(r_z^{IJ})^*} & K^{\Gbar}(\XIJ_z) \ar[r]_-{\cong} &
          K^{\Lbar_z}(\SN_z), }
      }}
  \end{equation}
  where
  \begin{enumerate}
  \item the horizontal maps are surjective or bijective, as indicated, and
    the vertical maps are surjective, \label{i:wi1}
  \item if $f_1$ is the composition across the top row, then $\CH_y$ is in
    the kernel of $f_1$ for all $y<z$. \label{i:wi2}
  \end{enumerate}
\end{theorem}

If $z$ is the maximal element of $\WIJ$, then $\CHIJ_{\ssleq z}=\CHIJ$ and
$\XIJ_{\ssleq z}=\XIJ$, and the map $\CHIJ\to K^{\Gbar}(\XIJ)$ in the
bottom row of~\eqref{eq:wideiso} is $\psi^{IJ}$ (see \autoref{ssec:isoz1});
thus \autoref{thm:main} follows from~\autoref{thm:wideiso2}.

\subsection{The isomorphism \texorpdfstring{$K^{\Gbar}(\XIJ_z)\cong
    K^{\Lbar_z}(\SN_z)$}{}} \label{ssec:xijz}

In this subsection we prove~\autoref{thm:ost}. Fix $z$ in $\WIJ$, set
$P_z=P_I\cap {}^zP_J$, and set
\[
F^{IJ}= \FN\cap \fp_I\cap z\fp_J \quad\text{and}\quad F=\{\, (x, gB,
gzB)\in Z_z\mid g\in P_z\,\},
\]
so that the action of $G$ induces isomorphisms $Z_z\cong G\times^{P_z}F$
and $\XIJ_z\cong G\times^{P_z}F^{IJ}$. The restriction of $\eta^z$ to $F$
is a map $\tilde\eta^z\colon F\to F^{IJ}$. The projection of $\fp_I\cap
z\fp_J$ onto $\fl_z$ along the nilradical of $\fp_I\cap z\fp_J$ restricts
to a vector bundle projection $\tilde p^{IJ}\colon F^{IJ}\to \SN_z$ whose
zero section is the inclusion $i_z^{IJ}\colon \SN_z\to F^{IJ}$, and there
is similarly a vector bundle projection $\tilde p\colon F\to \SNt_z$ whose
zero section we denote by $i_z\colon \SNt_z\to F$. Then the diagram
\begin{equation}
  \label{eq:co1}
  \vcenter{\vbox{
      \xymatrix{\SNt_z \ar[r]^-{i_z} \ar[d]^{p_z} & F \ar@{^{(}->}[r]
        \ar[d]^{\tilde \eta^z} & Z_z\ar[d]^{\eta^z}\\
        \SN_z\ar[r]^-{i_z^{IJ}} & F^{IJ}\ar@{^{(}->}[r] &\XIJ_z}
    }}
\end{equation}
commutes. Moreover, one checks that the left-hand square is cartesian.

Now applying $K^{\Gbar}$ and $K^{\Lbar_z}$ to~\eqref{eq:co1} we get
\[
\xymatrix{K^{\Gbar}(Z_z) \ar[r]^-{\res_z}\ar[d]^{\eta^z_*} &
  K^{\Lbar_z}(F) \ar[r]^-{i_z^*} \ar[d]^{\tilde \eta^z_*} &
  K^{\Lbar_z}(\SNt_z) \ar[d]^{(p_z)_*} \\
  K^{\Gbar}(\XIJ_z) \ar[r]^-{\res_z^{IJ}} & K^{\Lbar_z}(F^{IJ})
  \ar[r]^-{(i_z^{IJ})^*} & K^{\Lbar_z}(\SN_z), }
\]
where $\res_z=\res_{F}$ is defined using the isomorphism $Z_z\cong
G\times^{P_z}F$ and $\res_z^{IJ}=\res_{F^{IJ}}$ is defined using the
isomorphism $\XIJ_z\cong G\times^{P_z}F^{IJ}$. The left-hand square commutes
by the naturality of $\res$. For the right-hand square, the diagram
\[
\xymatrix{F \ar[r]^-{\tilde p} \ar[d]^{\tilde \eta^z} & \SNt_z
  \ar[d]^{p_z} \\
  F^{IJ} \ar[r]^-{\tilde p^{IJ}}& \SN_z}
\]
is cartesian and $\tilde p$ and $\tilde p^{IJ}$ are vector bundles, so
$\tilde \eta^z_* \tilde p^* = (\tilde p^{IJ})^* (p_z)_*$. By the Thom
isomorphism in equivariant $K$-theory, $i_z^*= (\tilde p^*)\inverse$ and
$(i_z^{IJ})^* = ((\tilde p^{IJ})^*)\inverse$, so $(i_z^{IJ})^* \tilde
\eta^z_*= (p_z)_* i_z^*$. Since $\res_z$ and $\res_z^{IJ}$ are
isomorphisms, the rows of the diagram compose to the horizontal
isomorphisms in~\autoref{thm:ost}. Finally, by~\autoref{lem:ostlem}
$(p_z)_*$ is a surjection. Therefore, $\tilde \eta^z_*$ and $\eta^z_*$ are
surjections as well.
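As a consistency check (nothing below depends on it), consider the extreme
case $I=J=\emptyset$, so that $\WIJ=W$ and $\XIJ_w=Z_w$ for $w$ in $W$.
Here $L_w=T$, $B_w=T$, $\SN_w=\{0\}$, and $\SNt_w=\{0\}$, the vertical maps
in \autoref{thm:ost} are the identity, and the horizontal isomorphisms
reduce to $K^{\Gbar}(Z_w)\cong K^{\Tbar}(\{0\})\cong A[X(T)]$, as in the
first extreme case discussed before~\autoref{lem:ostlem}.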
\subsection{The sequence (\ref{eq:exz}) is exact} \label{ssec:ex}

\begin{proposition}\label{pro:k1}
  Suppose $H$ is a linear algebraic group and that $Y$ is an $H$-variety
  such that $H$ acts on $Y$ with finitely many orbits. Then $K^H_1(Y)=0$.
\end{proposition}

\begin{proof}
  If $H$ acts transitively with point stabilizer $H_0$, then $Y\cong
  H/H_0$ and the result is known (see~\cite[\S1.3
  (p)]{kazhdanlusztig:langlands}). In the general case, choose an open
  orbit $\CO$ in $Y$. Then there is an exact sequence
  \[
  \dotsm \to K^H_1(Y\setminus \CO) \to K^H_1(Y) \to K^H_1(\CO) \to \dotsm .
  \]
  By induction on the number of orbits, $K^H_1(Y\setminus \CO) =0$. We have
  already observed that $K^H_1(\CO)=0$. Thus, $K^H_1(Y)=0$. \qed
\end{proof}

\begin{lemma}\label{lem:k1}
  Suppose $I, J\subseteq S$, and $z\in \WIJ$. Then $K^{\Gbar}_1(\XIJ_z)=0$.
\end{lemma}

\begin{proof}
  The constructions used in the proof of~\autoref{thm:ost} apply to the
  functors $K_i^{\Gbar}$ for $i\geq 0$ (see \cite[\S5.2,
  5.4]{chrissginzburg:representation}) and give isomorphisms
  \[
  \xymatrix{ K_1^{\Gbar}(\XIJ_z) \ar[r]^-{\res_z^{IJ}}_-{\cong} &
    K_1^{\Lbar_z}(F^{IJ}) \ar[r]^-{(i_z^{IJ})^*}_-{\cong} &
    K_1^{\Lbar_z}(\SN_z) .}
  \]
  Because $\Lbar_z$ acts on $\SN_z$ with finitely many orbits, it follows
  from~\autoref{pro:k1} that $K^{\Lbar_z}_1 (\SN_z)=0$, and hence
  $K^{\Gbar}_1(\XIJ_z)=0$. \qed
\end{proof}

The fact that sequence~\eqref{eq:exz} is exact follows immediately
from~\autoref{lem:k1} and the long exact sequence in equivariant $K$-theory.
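The finiteness hypothesis in \autoref{pro:k1} holds in the proof of
\autoref{lem:k1} because a reductive group acts on its nilpotent cone with
finitely many orbits. For instance, if $\fl_z\cong \mathfrak{gl}_2$, as in
the $SL_3(\BBC)$ example above, then $\SN_z$ is the union of exactly two
$\Lbar_z$-orbits, namely $\{0\}$ and the set of nonzero nilpotent matrices.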
\subsection{The isomorphism \texorpdfstring{$\CHIJ_{\ssleq z}\cong
    K^{\Gbar}(\XIJ_{\ssleq z})$}{}} \label{ssec:isoz1}

The rest of this section is devoted to the proof
of~\autoref{thm:wideiso2}. We first define the maps in~\eqref{eq:wideiso},
\[
\xymatrix{%
  \CH_{\ssleq z} \ar[r]^-{} \ar[d]^{\chi^{\ssleq z}}& K^{\Gbar}(Z_{\ssleq
    z}) \ar[r]^-{r_z^*} \ar[d]^{\eta^{\ssleq z}_*} & K^{\Gbar}(Z_z) \ar[r]
  \ar[d]^{\eta^z_*} & K^{\Lbar_z}(\SNt_z) \ar[d]^{(p_z)_*} \\
  \CHIJ_{\ssleq z} \ar[r] & K^{\Gbar}(\XIJ_{\ssleq z})
  \ar[r]^-{(r_z^{IJ})^*} & K^{\Gbar}(\XIJ_z) \ar[r] &
  K^{\Lbar_z}(\SN_z), }
\]
and show that the diagram commutes.

The middle square is induced by the cartesian diagram~\eqref{eq:cart} and
the right-hand square is as in~\autoref{thm:ost}. Both of these squares
commute.

\subsection{The left-hand square in diagram (\ref{eq:wideiso})}

We observed after~\eqref{eq:8} that if $\lambda$ is in $X(T)$, $y$ is in
$\WIJ$, and $w$ is in the double coset $W_IyW_J$, then $C_{w_I}
\theta_\lambda T_{w}C_{w_J}$ is in the span of $\{\, C_{w_I} \theta_\mu T_y
C_{w_J} \mid \mu \in X(T)\,\}$. Therefore,
$\chi^{IJ}(\CH_{w})\subseteq\CHIJ_y$. It follows that $\chi^{\ssleq
  z}(\CH_{\ssleq z})\subseteq \CHIJ_{\ssleq z}$. In particular,
$\chi^{\ssleq z}\colon \CH_{\ssleq z}\to \CHIJ_{\ssleq z}$ is defined.

Consider the commutative diagram
\begin{equation*}
  \xymatrix{%
    \CH_z\ar@{^{(}->}[r] \ar[d]^{\chi^z} &
    \CH_{\ssleq z} \ar[d]^{\chi^{\ssleq z}}\\
    \CHIJ_z\ar@{^{(}->}[r] & \CHIJ_{\ssleq z}, }
\end{equation*}
where the horizontal maps are the inclusions and $\chi^z$ is the restriction
of $\chi^{IJ}$ to $\CH_z$. It is clear that $\chi^y(\CH_y)=\CHIJ_y$ for $y$
in $\WIJ$ and so $\chi^{\ssleq z}$ is surjective.

The left-hand square in diagram~\eqref{eq:wideiso} is
\begin{equation*}
  \xymatrix{%
    \CH_{\ssleq z} \ar[r]^-{\varphi_z} \ar[d]^{\chi^{\ssleq z}} &
    K^{\Gbar}(Z_{\ssleq z}) \ar[d]^{\eta^{\ssleq z}_*} \\
    \CHIJ_{\ssleq z} \ar[r]^-{\psi_z} & K^{\Gbar}(\XIJ_{\ssleq z}), }
\end{equation*}
where $\varphi_z$ and $\psi_z$ are defined below.

Recall that for $w$ in $W$, $j_w\colon Z_{\ssleq w}\to Z$ is the
inclusion. Similarly, let $j^{IJ}_z$ denote the inclusion $\XIJ_{\ssleq
  z}\to \XIJ$ for $z$ in $\WIJ$. The maps $\varphi_z$ and $\psi_z$ are the
restrictions of $\varphi$ and $\psi^{IJ}$, respectively, in the sense that
$(j_z)_* \circ \varphi_z$ is the restriction of $\varphi$ to $\CH_{\ssleq
  z}$ and $(j^{IJ}_z)_* \circ \psi_z$ is the restriction of $\psi^{IJ}$ to
$\CHIJ_{\ssleq z}$. In order to prove~\autoref{thm:wideiso2} we need a
formula for $\varphi_z$, and so we define $\varphi_z$ explicitly, show that
$(j_z)_* \circ \varphi_z$ is the restriction of $\varphi$ to $\CH_{\ssleq
  z}$, and then define $\psi_z$.

For $w$ in $W$, let $q_{w,1} \colon Z_w\to G/B$ by $q_{w,1}(x,gB, gwB)= gB$.
Then $q_{w,1}$ is a $\Gbar$-equivariant affine space bundle over $G/B$ and
so $q_{w,1}^*\colon K^{\Gbar}(G/B) \to K^{\Gbar}( Z_w)$ is an
$R(\Gbar)$-module isomorphism.

\begin{theorem}\label{thm:basis}
  Suppose $w$ is in $W$. There is an $A$-module isomorphism
  \[
  \varphi_w\colon \CH_{\ssleq w} \to K^{\Gbar}(Z_{\ssleq w})
  \]
  such that
  \begin{enumerate}
  \item $(j_w)_* \varphi_w\colon \CH_{\ssleq w}\to K^{\Gbar}(Z)$ is the
    restriction of $\varphi$ to $\CH_{\ssleq w}$, and \label{it:phi1}
  \item for $\lambda$ in $X(T)$, $r_w^* \varphi_w(\theta_\lambda T_w)=
    \epsilon_w q_{w,1}^* [\CL_\lambda]$. \label{it:phi2}
  \end{enumerate}
  In particular, $\{\, r_w^* \varphi_w(\theta_\lambda T_w)\mid \lambda\in
  X(T) \,\}$ is an $A$-basis of $K^{\Gbar}(Z_w)$.
\end{theorem}
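To orient the reader, consider the case $w=1$, which is immediate from the
statement: $Z_{\ssleq 1}=Z_1$, $r_1$ is the identity, $\epsilon_1=1$, and
$q_{1,1}\colon Z_1\to G/B$ is the affine space bundle $(x,gB,gB)\mapsto
gB$, so \autoref{it:phi2} says that $\varphi_1(\theta_\lambda)=
q_{1,1}^*[\CL_\lambda]$, and the resulting $A$-basis of $K^{\Gbar}(Z_1)$ is
the pullback of the basis $\{\, [\CL_\lambda]\mid \lambda\in X(T)\,\}$ of
$K^{\Gbar}(G/B)$.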
\begin{proof}
  Lusztig has shown (see \cite[Lemma 8.9]{lusztig:bases}) that there is a
  unique element $\xi_y$ in $K^{\Gbar}(Z_{\ssleq y})$ such that
  $(j_y)_*(\xi_y)=\varphi(T_y)$. Let $d_1\colon \FNt\to Z_1$ be the
  ``diagonal'' isomorphism given by $d_1(x, gB)= (x, gB, gB)$. With this
  notation, using the $K^{\Gbar}(Z_1)$-module structure $\star_w$ on
  $K^{\Gbar}(Z_{\ssleq w})$, define
  \[
  \varphi_w\colon \CH_{\ssleq w}\to K^{\Gbar}(Z_{\ssleq w})
  \quad\text{by}\quad \varphi_w (\theta_\lambda T_y)= (d_1)_*
  q^*[\CL_\lambda] \star_w (j_y^w)_* (\xi_y)
  \]
  for $\lambda$ in $X(T)$ and $y$ in $W$ with $y\leq w$.

  Set $d=j_1d_1\colon \FNt \to Z$, so $d(x,gB)=(x, gB, gB)$. By the
  definition of $\varphi$, for $\lambda$ in $X(T)$, $\varphi(
  \theta_\lambda)= d_* q^*[\CL_\lambda]$. Thus, using
  equation~\eqref{eq:star1} and~\autoref{lem:starlin} we have
  \begin{align*}
    (j_w)_* \varphi_w (\theta_\lambda T_y) & = (j_w)_* \big((d_1)_*
    q^*[\CL_\lambda] \star_w (j_y^w)_* \xi_y\big) \\
    & = (j_1)_*(d_1)_* q^*[\CL_\lambda] \star (j_w)_*(j_y^w)_* \xi_y
    = d_* q^*[\CL_\lambda] \star (j_y)_* \xi_y \\ & = \varphi(\theta_\lambda)
    \star \varphi(T_y) = \varphi(\theta_\lambda T_y) .
  \end{align*}
  This proves the first statement.

  To prove~\autoref{it:phi2}, let $\BBC_{Z_w}$ be the trivial line bundle on
  $Z_w$ and let
  \[
  p_{w,1}\colon Z_w\to \FNt\quad\text{by}\quad p_{w,1} (x,gB, gwB)= (x, gB).
  \]
  Then $q_{w,1}=q p_{w,1}$. Set $V_w=(\Ztilde_1\times \FNt) \cap (\FNt
  \times \Ztilde_{w})$ and let $p_{12}'\colon V_w \to Z_1$ and
  $p_{13}'\colon V_w \to Z_w$ be the obvious projections. It is
  straightforward to check that $p_{w,1}$ and $p_{12}'$ are smooth,
  that $p_{13}'$ is an isomorphism, and that the diagram
  \[
  \xymatrix{ Z_w \ar[r]^-{(p_{13}')\inverse} \ar[d]^{p_{w,1}} & V_w
    \ar[d]^{p_{12}'} \\
    \FNt \ar[r]^-{d_1} &Z_1}
  \]
  is cartesian. Thus $(p_{12}')^* (d_1)_*= \big((p_{13}')\inverse\big)_*\,
  p_{w,1}^*$. Using the $K^{\Gbar}(Z_1)$-module structure $\star_w'$ on
  $K^{\Gbar}(Z_{w})$, we have
  \begin{align*}
    r_w^* \varphi_w(\theta_\lambda T_w)&= r_w^* \big( (d_1)_*
    q^*[\CL_\lambda] \star_w \xi_w \big)&& \\
    &= (d_1)_* q^*[\CL_\lambda] \star_w' r_w^*( \xi_w) &&
    \text{\autoref{lem:starlin}} \\
    &= (d_1)_* q^*[\CL_\lambda] \star_w' \epsilon_w [\BBC_{Z_w}] &&
    \text{\cite[\S8.9]{lusztig:bases}} \\
    &= \epsilon_w\ (p_{13}')_*\, (p_{12}')^* (d_1)_* q^*[\CL_\lambda] &&
    \text{\autoref{pro:convlin}\,\autoref{i:conv3}} \\
    &= \epsilon_w\ p_{w,1}^*\, q^*[\CL_\lambda] && \\
    &= \epsilon_w\ q_{w,1}^*\, [\CL_\lambda]. &&
  \end{align*}
  This completes the proof of the second statement.

  The last statement in the theorem follows from~\autoref{it:phi2} and the
  fact that $q_{w,1}^*$ is an isomorphism.

  Choose a linear order on the interval $[1,w]$ in the Bruhat poset of $W$
  that extends the Bruhat order. This linear order determines gradings on
  $\CH_{\ssleq w}$ and $K^{\Gbar}(Z_{\ssleq w})$. As $\{\,
  \theta_\lambda T_y\mid \lambda\in X(T) \,\}$ is an $A$-basis of $\CH_{y}$
  and $\{\, r_y^* \varphi_y(\theta_\lambda T_y)\mid \lambda\in X(T) \,\}$ is
  an $A$-basis of $K^{\Gbar}(Z_y)$ for $y$ in $W$, the associated graded map
  $\gr \varphi_w$ is an isomorphism. Thus, $\varphi_w$ is an
  isomorphism. \qed
\end{proof}

It follows from the theorem that for $z$ in $\WIJ$,
\[
\etaIJ_* \varphi(\CH_{\ssleq z})= \etaIJ_* (j_z)_*( K^{\Gbar}(Z_{\ssleq z})
)= (j^{IJ}_z)_* \eta^{\ssleq z}_*( K^{\Gbar}(Z_{\ssleq z})) \subseteq
(j^{IJ}_z)_* ( K^{\Gbar}(\XIJ_{\ssleq z})).
\]
On the other hand, we have seen that $\etaIJ_* \varphi =\psi^{IJ} \chi^{IJ}$
and that $\chi^{\ssleq z}$ is surjective, so
\[
\etaIJ_* \varphi(\CH_{\ssleq z})=\psi^{IJ} \chi^{IJ}(\CH_{\ssleq z}) =
\psi^{IJ} \chi^{\ssleq z} (\CH_{\ssleq z})= \psi^{IJ} (\CHIJ_{\ssleq z}).
\]
Therefore, $\psi^{IJ} (\CHIJ_{\ssleq z}) \subseteq (j^{IJ}_z)_* (
K^{\Gbar}(\XIJ_{\ssleq z}))$, and since $(j^{IJ}_z)_*$ is injective, there
is an $A$-module homomorphism $\psi_z\colon \CHIJ_{\ssleq z}\to
K^{\Gbar}(\XIJ_{\ssleq z})$ such that $\eta^{\ssleq z}_* \varphi_z= \psi_z
\chi^{\ssleq z}$. In particular, the maps $\varphi_z$ and $\psi_z$ in the
left-hand square in~\eqref{eq:wideiso} are defined and the diagram
commutes.
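Via the usual identification of the Bernstein basis $\{\, \theta_\lambda
T_w\mid w\in W,\ \lambda\in X(T)\,\}$ of $\CH$ with the elements $t_\lambda
w$ of $\Wex$, \autoref{thm:basis} makes explicit the parametrization,
quoted from~\cite[\S8.6]{lusztig:bases} above, of an $A$-basis of
$K^{\Gbar}(Z)$ by $\{\, t_\lambda w \mid w\in W,\ \lambda\in X(T)\,\}$.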
\subsection{Proof of~\autoref{thm:wideiso2}} \label{ssec:isoz3}

In this subsection we complete the proof of~\autoref{thm:wideiso2}. The
arguments above show that diagram~\eqref{eq:wideiso} commutes.

To prove~\autoref{thm:wideiso2}\,\autoref{i:wi2}, let $f_1\colon \CH_{\ssleq
  z} \to K^{\Lbar_z}(\SNt_z)$ be the composition across the top row
in~\eqref{eq:wideiso}. Then $f_1= i_z^* \res_z r_z^* \varphi_z$, where
$i_z^*$, $\res_z$, and $\varphi_z$ are isomorphisms and $r_z^*$ is
surjective. If $y<z$, then by~\autoref{lem:starlin} and the definition of
$\varphi_z$, $\varphi_z(\CH_y)$ is contained in the image of $(j_y^z)_*$,
and hence in the image of $K^{\Gbar}(Z_{\sslt z})$ in $K^{\Gbar}(Z_{\ssleq
  z})$, on which $r_z^*$ vanishes by the analogue for $Z$ of the exact
sequence~\eqref{eq:exz}. Therefore $\CH_y$ is in the kernel of $f_1$ for
all $y<z$.