diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzgwnu" "b/data_all_eng_slimpj/shuffled/split2/finalzzgwnu" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzgwnu" @@ -0,0 +1,5 @@ +{"text":"\\section*{Introduction}\n\nSmolin\n\\cite{Smolin:1992,Smolin:1997,Smolin:2013}\nhas advocated the idea of cosmological natural selection.\nSuppose that, among the sets of laws of physics in the multiverse,\nthere are some that allow a universe to reproduce,\nand to pass on that ability to its offspring.\nUniverses that can reproduce efficiently\ncan lead to a population explosion,\ncrowding the multiverse with their progeny.\nIf there exist laws of physics that allow for efficient reproduction,\nthen it would not be surprising if our Universe could reproduce.\n\nSmolin\n\\cite{Smolin:1992,Smolin:1994}\nproposed that the natural place for reproduction to occur is\nat the singularities of black holes.\nHe suggested that quantum effects would replace\nthe singularity of each black hole by a bounce,\nso that each black hole would effectively produce one baby universe.\nIf so,\nthen the most abundant universes would be those\nthat produce the largest number of black holes.\n\nRecent theoretical research\n\\cite{Hamilton:2008zz,Hamilton:2010a,*Hamilton:2010b,*Hamilton:2010c,Hamilton:2011vr}\nshows that, if general relativity is correct,\nthen rotating black holes inevitably generate enormous\n(Planckian or hyper-Planckian) energies\nin an ingenious machine at the inner horizon\nthat I call the Black Hole Particle Accelerator (BHPA).\nIf our Universe reproduces,\nthen the BHPA, not a singularity, surely provides the mechanism.\n\n\\section*{The Black Hole Particle Accelerator}\n\nThe Kerr \\cite{Kerr:2007dk} geometry for a rotating black hole has,\nunlike the Schwarzschild geometry,\nnot only an outer horizon but also an inner horizon.\nThe inner horizon is the boundary of predictability\n(a Cauchy horizon),\na gateway to fun but bizarre pathologies,\nsuch as wormholes, white holes, naked (timelike) singularities,\nand closed timelike loops\n\\cite{Carter:1968a}.\n\n\\trajectoryunstablefig\n\nLinear perturbation theory\nshows that waves incident on the inner horizon amplify without bound,\nsuggesting instability\n\\cite{Chandrasekhar:1982},\nbut it was not until the seminal work of\nPoisson \\& Israel\n\\cite{Poisson:1990eh,*Barrabes:1990}\nthat the nonlinear development of the instability at the inner horizon\nstarted to become clear.\n\\cite{Poisson:1990eh,*Barrabes:1990}\nargued that counter-streaming between outgoing and ingoing streams\njust above the inner horizon would cause the interior mass\nto grow exponentially,\na phenomenon they dubbed ``mass inflation.''\n\n\\cite{Poisson:1990eh,*Barrabes:1990}\nproposed that outgoing and ingoing streams would be\nproduced by Price tails of radiation generated during the collapse\nof a black hole.\nHowever,\nthe energy density of the least rapidly decaying (quadrupole) mode\nof a Price tail decays as $t^{- 12}$,\nfalling below cosmic microwave background density\nin about a second, for a stellar-mass black hole.\nThus the astronomically realistic situation is that outgoing\nand ingoing streams are fed by ongoing accretion,\nwhich quickly overwhelms any Price tail.\n\n\\penrosekerrinflationfig\n\nAccretion, most probably of baryons and dark matter,\nnaturally produces both outgoing and ingoing streams.\nFigure~\\ref{trajectoryunstable}\nillustrates a pair of example geodesics of (dark matter, say)\nparticles that free-fall 
from zero velocity at infinity.\nA freely-falling particle that is prograde,\nthough necessarily ingoing at the outer horizon,\ngenerically switches to being outgoing at the inner horizon\n(physically, its angular momentum gives it an outward centrifugal push).\nConversely, retrograde particles generically remain ingoing\ndown to the inner horizon.\n\nFigure~\\ref{penrosekerrinflation} shows a partial Penrose diagram\nthat illustrates why the Kerr geometry\nis subject to the inflationary instability.\nThe problem is that, to fall through the inner horizon,\noutgoing and ingoing streams must fall through separate\noutgoing and ingoing inner horizons into causally separated regions\nwhere the timelike Boyer-Linquist time coordinate $t$\ngoes in opposite directions.\nThis requires that outgoing and ingoing streams exceed the speed\nof light relative to each other, which is physically impossible.\nIn practice,\nregardless of their initial orbital parameters,\noutgoing and ingoing streams approaching the inner horizon\nbecome highly focused along\nthe outgoing and ingoing principal null directions,\nand enormously blueshifted relative to each other\n\\cite{Hamilton:2010a,*Hamilton:2010b,*Hamilton:2010c,Hamilton:2011vr}.\nHowever tiny the initial streams might be,\nthe proper streaming energy density grows large enough\nto become a source of gravity that competes with the native gravity\nof the black hole.\nIn a fashion that is peculiarly general relativistic,\nthe gravitational force is in opposite directions\nfor the two streams,\naccelerating them even faster through each other.\nThe streams accelerate exponentially,\npowered by the gravity generated by their own counter-streaming.\nThis is inflation.\n\nAs Figure~\\ref{penrosekerrinflation} illustrates,\noutgoing particles stream through ingoing particles accreted in the future,\nwhile\ningoing particles stream through outgoing particles accreted in the past.\nThe blueshift between the outgoing and ingoing streams increases exponentially.\nEach stream sees approximately one black hole crossing time\nelapse on the opposing stream for each $e$-fold increase of blueshift\n\\cite{Hamilton:2008zz}.\nThe center-of-mass collision energy between particles of rest mass\n$\\sim 1$--$10^3 \\unit{GeV}$\nreaches the Planck energy of $\\sim 10^{19} \\unit{GeV}$\nafter of order 100 $e$-folds.\nThis happens before the streaming energy density and curvature\nhit the Planck density,\nwhich takes of order 300 $e$-folds\n\\cite{Hamilton:2008zz}.\nA few hundred $e$-folds corresponds to less than a second\nfor a stellar mass black hole,\nor hours to years for a supermassive black hole.\nThus what happens during inflation at any time depends\non accretion during the immediate past and future,\nbut not on the history over cosmological timescales.\n\nThe enormous gravitational acceleration that drives inflation\neventually leads to collapse in the transverse directions\n\\cite{Hamilton:2008zz,Hamilton:2010a,*Hamilton:2010b,*Hamilton:2010c,Hamilton:2011vr}.\nCollapse occurs after approximately $1\/\\dot{M}$ $e$-folds,\nwhere $\\dot{M}$ is the dimensionless ($c = G = 1$) accretion rate.\nThus the larger the accretion rate, the more rapidly collapse occurs.\nIf the accretion rate is larger than $\\dot{M} \\gtrsim 0.01$,\nthen collapse occurs before streams can blueshift\nthe $100$ $e$-folds needed to reach the Planck scale,\nand presumably no baby universe can be made.\nHowever,\naccretion rates as high as $0.01$\noccur only rarely,\nsuch as during the initial collapse 
of a black hole,\nor during a black hole merger,\nor during occasional events of high accretion.\nAstronomical black holes\nspend the vast majority of their lives accreting more slowly,\ntherefore producing collisions at Planck energies and above.\n\n\\section*{Making baby universes}\n\nMaking baby universes is not easy\n\\cite{Berezin:1985,*Berezin:1985re,Farhi:1986ty,Farhi:1989yr,Linde:1991sk,Aguirre:2005nt,Aguirre:2007gy}.\nThe fundamental problem is that\nmaking a baby universe appears to violate the second law of thermodynamics.\nIf a baby is a causal product of its mother,\nthen there are observers whose worldlines pass from mother to baby,\nand such observers see a transition from a state of high entropy,\nour Universe,\nto one of low entropy,\na baby universe in an inflating, vacuum-dominated state\nsmooth over its horizon scale.\n\nThe BHPA has at least two ingredients essential for reproduction.\nThe first is that it drives energies up to the Planck scale\n(and beyond).\nIf the genes of a universe are encoded for example\nin the structure of its Calabi-Yau manifold,\nthen some mechanism is needed to drive the system to a sufficient\nenergy to allow a slight rearrangement of the Calabi-Yau.\n\nThe second ingredient is a mechanism to overcome the entropy barrier.\nOur Universe apparently solves the entropy problem\nat least in part by brute force:\nevery day,\nit is carrying out prodigious numbers of collision experiments over a broad\nrange of Planckian and hyper-Planckian center-of-mass energies\nin large numbers of black holes\nthroughout the Universe.\n\nThe BHPA has another feature\nthat it surely harnesses\nto overcome the entropy barrier.\nThe most common way that systems in Nature reduce their\nentropy is not by happenstance,\nbut rather by exploiting the free energy of a system out of\nthermodynamic equilibrium.\nThe BHPA is far out of thermodynamic equilibrium:\ninitially,\nit comprises two cold streams accelerating through each other\nto enormous energies.\n\n\\section*{Supermassive black holes}\n\nIf babies are made where the largest number of collisions are taking place,\nthen supermassive black holes are implicated as the most likely mothers.\nThe number of collisions over any $e$-fold of energy is proportional\nto $M^3$, the black hole mass cubed.\nTwo powers of $M$ come from collisions being a two-body process,\nand a third power comes from the fact that the time spent\nin any $e$-fold range of collision energy is proportional to the black hole\ncrossing time.\n\nThe hypothesis of cosmological natural selection\npredicts that the most abundant universes in the multiverse\nare those that reproduce most efficiently\n\\cite{Smolin:1992,Smolin:1994}.\nIf reproduction works as proposed in this essay,\nthen the most successful reproducers are those that make large numbers\nof supermassive black holes.\n\nIn fact our Universe is remarkably\nefficient at making supermassive black holes.\nObservations indicate that most, if not all, massive galaxies\nhost supermassive black holes at their centers\n\\cite{Magorrian:1998}.\nThe existence of $\\sim 10^9 \\unit{{\\rm M}_\\odot}$ black holes\nalready at redshift $z \\sim 6$\nmeans that supermassive black holes were able to form\nso fast as to pose a challenge to conventional models\nthat assume Eddington-limited growth by accretion~\\cite{Begelman:2006db}.\nAs our Universe grows old,\nalmost all the matter in it may end up falling into\na supermassive black hole.\n\n\\section*{Challenge}\n\nAre there laws of physics that allow 
universes to reproduce?\nIf so,\nthen chances are our Universe is using them in the BHPA.\nDoes reproduction work only for special sets of laws of physics?\nExcellent.\nUndoubtedly\nthose laws hold for our Universe.\nNature is offering clues.\nLet us listen.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{The subject. An introduction}\n\n\\, \\, \\rm Let $ (\\Omega ,\\mu )$ be a measure space ($ \\mu $ \nis not a finite sum of atoms), $ L^{2}_{{\\Bbb R}}(\\Omega ,\\mu )$ be\nLebesgue space of real valued functions and $ (u_{k})_{k\\geq 1}$ a \n\\it frame \\rm in $ L^{2}_{{\\Bbb R}}(\\Omega ,\\mu )$. Recall that this \nmeans that the selfadjoint operator $ S$ (the frame operator),\\, \n$$\n Sf= \\sum _{k\\geq 1}(f,u_{k})u_{k},\n$$\n\\rm is an isomorphism on $ L^{2}_{{\\Bbb R}}(\\Omega ,\\mu )$: there exist \n$ A>0,\\, B>0$ such that $ A\\cdot I\\leq \\, S\\leq \\, B\\cdot \nI$, that is\\, \n$$\nA\\Big \\Vert f\\Big \\Vert ^{2}\\leq \\, \\displaystyle \\sum _{k\\geq \n1}\\Big \\vert (f,u_{k})\\Big \\vert ^{2}\\leq \\, B\\Big \\Vert f\\Big \\Vert ^{2}\\quad \\forall f\\in L^{2}_{{\\Bbb R}}(\\Omega ,\\mu )\\,.\n$$\n\\, \n\\rm The right hand ``half\" of this condition is called the ``Bessel sequence \nproperty\"; its dual (equivalent) form is $ \\Big \\Vert \\displaystyle \\sum _{k\\geq \n1}c_{k}u_{k}\\Big \\Vert ^{2}\\leq $ $ B\\displaystyle \\sum _{k\\geq 1}\\Big \\vert c_{k}\\Big \\vert \n^{2}$ for every $ c=\\, (c_{k})_{k\\geq 1}\\in l^{2}$ (look on the \nadjoint $ T^{*}$ to $ Tf=\\, ((f,u_{k}))_{k\\geq 1}$). Every Riesz \nbasis (i.e., an isomorphic image of an orthonormal basis) is a bounded \nframe, and conversely, following the famous Marcus- Spielman- Srivastava \ntheorem [MSS2015], every bounded frame is a finite union of Riesz basis \nsequences (i.e., Riesz bases in their closed span).\\, \n\nBelow, we consider the question on how can be distributed \nthe signs $ sign(u_{k}(x))$ of a frame for $ k=\\, 1,2,...$. For \nthe case of orthonormal bases $ (u_{k})_{k\\geq 1}$ the question was \nraised in [Koz1948]. Kozlov's result is as follows:\\, \n\n\\medskip\n\n\\, \\it Let $ (u_{k})_{k\\geq 1}$ be an orthonormal basis in $ L^{2}_{{\\Bbb R}}(0,1;dx)$ \nand $ u_{k}^{\\pm }(x)=\\, max(0,\\pm u_{k}(x))$, $ x\\in (0,1)$ positive \nand negative parts of $ u_{k}$, respectively. Then $ \\sum _{k}u_{k}^{+}(x)^{2}=\\, \n\\sum _{k}u_{k}^{-}(x)^{2}=\\, \\infty $ almost everywhere.\\, \n\n\\medskip\n\n\\rm Kozlov's proof is quite involved and is based on topological properties \nof Lebesgue measure $ dx$ on $ (0,1)$. In [Koz1948], there are \nalso some applications to uniqueness\/divergence of Fourier series of \n$ L^{2}$ functions with respect to general orthogonal bases. Later \non, the same questions were discussed in [Aru1966], [Ovs1980]. We are \nalso informed (thanks to D. Yakubovich, University Autonoma de Madrid) \nthat the non-existence of positive Riesz bases was requested in the \nperceptive fields theory developed by V. D. Glezer and others, see for \nexample [Gle2016]. After this paper appeared in arXiv (1812.06313 in math.FA), Prof. 
\nA.M.Powell kindly informed us on two more papers \\cite{JS2015} and \\cite{PS2016}\nwhere the question on positive bases in $ L_{{\\Bbb R}}^{p}(0,1)$ is also \nconsidered, see comments in 1.2(5) below.\\, \n\n\n\n\\subsection{Results}\n\\label{res}\nWe give (simple) proofs to the following theorems.\\, \n\\begin{theorem}\n\\label{1.1}\nLet $ \\mu $ be a continuous measure (i.e., without \npoint masses) and $ (u_{k})_{k\\geq 1}$ a frame in $ L^{2}_{{\\Bbb R}}(\\Omega \n,\\mu )$. Then\\, \n$$\n \\sum _{k}(u_{k}^{+}(x))^{2}= \\sum _{k}(u_{k}^{-}(x))^{2}=\n \\infty, \\quad \\mu -a.e.\n$$\n In particular, there exists no positive frames (nor Riesz bases).\\, \n\\end{theorem}\n\n\\, \\, \\rm Theorem 1.1 is sharp in several senses: 1) first, \none \\it cannot weaken the frame condition \\rm of Theorem 1.1 up to \n``complete Bessel system\" condition; 2) secondly, the signs of $ u_{k}(x)$ \nare \\it not ``equidistributed\" on subsequences \\rm of $ (u_{k})$ even \nfor orthonormal bases; and 3) third, for sequence spaces $ l$ strictly \nlarger than $ l^{2}$, the sequences $ (u_{k}(x))_{k\\geq 1}$ can be \nin $ l$ for every $ x\\in \\Omega $. Precisely, the following facts hold.\\, \n\\, \n\\begin{theorem}\n\\label{1.2}\n \\it Let $ (\\Omega ,\\mu )$ be a measure space, $ \\mu $ \na continuous measure.\\, \n\nI. There exists a sequence $ (v_{n})_{n\\geq 1}$ in $ L^{2}_{{\\Bbb R}}(\\Omega \n,\\mu )$ such that\\, \n(1) $ v_{n}\\geq 0$ on $ \\Omega $,\\, \n(2) $ \\sum _{n}v_{n}(x)^{2}=\\, \\infty $ on $ \\Omega $,\\, \n(3) $ 0<\\, \\sum _{n}\\vert (f,v_{n})\\vert ^{2}\\leq $ $ B\\Vert f\\Vert \n^{2},$ $ \\forall f\\in L^{2}_{{\\Bbb R}}(\\Omega ,\\mu )$, $ f\\not= 0$ \n(i.e., $ (v_{n})_{k\\geq 1}$ is a complete Bessel sequence).\\, \n\nII. There exists a subset $ E\\subset \\, \\Omega $, $ 0<\\mu \nE<\\, \\infty $, and an orthonormal basis $ (u_{k})_{k\\geq 1}$ in \n$ L^{2}_{{\\Bbb R}}(\\Omega ,\\mu )$ such that $ v_{n}:=\\, u_{2n}\\vert \nE$, $ n=\\, 1,2,...$, satisfy conditions (1)-(3) of I \\textup($ \\Omega $ \nis replaced by $ E$\\textup).\\, \n\\end{theorem}\n\\begin{theorem}\n\\label{1.3}\n\\it Let $ \\{b_{n}\\},\\, b_{n}>0,\\, \\lim_{n}b_{n}=\\, \n\\infty $, be a monotone sequence such that\n$$\n\\lim_{n}\\frac{ b_{n}}{b_{n-1}}\n=\\, 1,\n$$\n and \n $$\n \\sum \\frac{1}{b_{n}}\n=\\, \\infty .\n$$\n\\it Then there exists a weight $ w(x)>0$ on the real line $ {\\Bbb R}$ \nsuch that the orthonormal polynomials $ p_{n}$, $ n=0,1,...$, form \na basis in $ L^{2}({\\Bbb R},wdx)$ and\n$$\n\\Big \\vert p_{n}(x)\\Big \\vert ^{2}\\leq \\, \\frac{\\displaystyle C(x)}{\\displaystyle \nb_{n}}\n$$\n\\it where $ C(x)>0$ is locally bounded on $ {\\Bbb R}$. Notice that \n$ \\Big \\vert p_{n}(x)\\Big \\vert ^{2}=\\, (p_{n}^{+}(x)\\pm p_{n}^{-}(x))^{2}=\\, \np_{n}^{+}(x)^{2}+\\, p_{n}^{-}(x)^{2}$.\\, \n\\end{theorem}\nThe proof of Theorem 1.3 is given in the \nspirit of the spectral theory of Jacobi matrices, and heavily depends \non methods developed by A. M\\'at\\'e and P. Nevai [MaN1983] and R. Szwarc \n[Szw2003], see more references and comments in Section 3 below. \\, \n\n\\bigskip\n\n\n\\subsection{Comments}\n\\label{1.4}\n \\bf (1) \\rm For measures \\it with point masses\\rm , \nno analog of Theorem 1.1 can be valid: there exist even orthogonal \nbases of nonnegative functions, for example, the natural basis in $ \nl^{2}=\\, L^{2}({\\Bbb N},count)$.\\, \n\n\n\\, \\bf (2) \\rm Also, in Theorem 1.1, \\it the completeness property \nis essential\\rm , i.e. 
just for Riesz (or even orthonormal) sequences, \nnothing similar is true: the sequences $ (u_{k}^{\\pm }(x))_{k\\geq 1}$ \ncan even have finitely many non-zero coordinates only. Theorem 1.2 \nshows that keeping only ``a half of \\it frame \\rm conditions\", namely \nthat of complete Bessel systems, we loose the conclusion of 1.1: there \nexist \\it positive \\rm complete Bessel sequences $ (u_{k})$ for which \n$ \\sum _{k}u_{k}(x)^{2}=\\, \\infty $ a.e.\\, \n\n\n\\, \\bf (3) \\rm Theorem 1.2 implies also a kind of ``non-equidistribution\" \nof the signs in the family $ (u_{k})_{k}$ forming a frame (and even \nan orthonormal basis); see comments in Section 4. \\, \n\n\n\\, \\bf (4) \\it The sharpness of Theorem 1.1\\rm , as stated in \nTheorem 1.3, implies in particular, that taking $ b_{n}=\\, n$ \nwe obtain an orthonormal polynomial basis $ (u_{k})_{k}$ in a weighted \nspaces $ L^{2}({\\Bbb R},w(x)dx)$, $ w(x)>0$, with the property $ \\vert u_{k}(x)\\vert \n\\leq \\, {\\frac{c(x)}{k^{1\/2}}} $ for every $ x\\in {\\Bbb R}$, and \nhence\\, \n$$\n \\sum _{k}\\vert u_{k}(x)\\vert ^{2+\\epsilon }<\\, \\infty \\forall \\epsilon >0\\quad \\forall x\\in {\\Bbb R}\\,.\n$$\n\\, \n\\, \\, \\rm It is curious that it seems there exist \\it no \nclassical \\rm (or ``semi-classical\") orthonormal polynomials which show \nsuch kind asymptotic behavior. Indeed, in the classical setting, \nthe best known estimates are shown by Laguerre orthonormal polynomial \nbasis $ L_{k}$, $ k=\\, 0,1,...$, in $ L^{2}_{{\\Bbb R}}(0,\\infty ;e^{-x}dx)$, \nwhere we have\n$$\n L_{k}(x)=\\, {\\frac{x^{-1\/4}e^{x\/2}}{\\sqrt{\\pi } \nk^{1\/4}}} Cos(2\\sqrt{kx} -\\, {\\frac{\\pi }{4}} )+O(k^{-3\/4})\\quad x>0\\,,\n$$\n\\rm see [Sz1975], p.198), and hence $ \\sum _{k}\\vert L_{k}(x)\\vert ^{4+\\epsilon \n}<\\, \\infty $ ($ \\forall \\epsilon >0$) almost everywhere, but \n$ (L_{k}(x))_{k\\geq 0}\\not\\in l^{4}$. Similar property holds for Hermite \nnormalized polynomials in $ L^{2}_{{\\Bbb R}}({\\Bbb R};e^{-x^{2}}dx)$.\\, \n\n\n\\, \\, \\bf (5) \\rm The theme of the sign distribution of bases \nwas developed at least in two other papers, [Aru1966] and [Ovs1980]. \nIn [Aru1966], it is proved that for an \\it unconditional basis $ (u_{k})_{k\\geq \n1}$ \\rm in $ L_{{\\Bbb R}}^{p}(0,1)$, $ \\sum _{k}u_{k}^{\\pm }(x)^{p'}=$ \n$ \\infty $ a. e. if $ 2\\leq p<\\infty ,\\, {\\frac{1}{p'}} +{\\frac{1}{p}} \n=\\, 1$ (which contains Kozlov's theorem), and $ \\sum _{k}u_{k}^{\\pm \n}=$ $ \\infty $ a.e. if $ 1
\\, 0$ for every $ E\\subset \\, (0,1),\\, \\vert E\\vert >\\, \n0$, then $ \\sum _{k}u_{k}^{\\pm }(x)^{p}=\\, $ $ \\infty $ a.e. on \n$ (0,1)$, $ \\forall p<\\, \\infty $. Below, we show on a very simple \nexample that, there exist positive uniformly minimal complete normalized \nsequences $ (u_{k})\\subset $ $ L_{{\\Bbb R}}^{2}(0,1)$, $ u_{k}\\geq 0$. In \\cite{PS2016}, it is shown that there exists neither positive unconditional basis in $ L_{{\\Bbb R}}^{p}(0,1)$, $ 1\\leq p<\\infty $ (already known from \n(Aru1966)), nor positive quasibasis; there are however positive Markushevich bases (minimal complete sequences having complete biorthogonal). In \\cite{JS2015} a positive Schauder basis in $ L_{{\\Bbb R}}^{1}(0,1)$ is \n constructed.\\, \n\n\\bigskip\n\n The rest of the paper is as follows: {\\S}2 - proof \nof theorem 1.1 and unconditional bases in Banach spaces, {\\S}3 - proof \nof theorem 1.3, and possible nonsymmetry between $ u^{\\pm }_{k}$, {\\S}4 \n- proof of theorem 1.2.\\, \n\n\n\\bigskip\n\n\\, \\, \\bf Acknowledgements. \\rm The first author is highly \ngrateful to Sasha and Olga Volberg, as well as to the Math Department \nof the MSU, organizing his short visit to Lansing-Ann Arbor (Fall 2018) \nwith remarkable working conditions. He also recognizes a support from \nRNF grant 14-41-00010 and the Chebyshev Lab, SPb University.\\, \nThe second author is supported by NSF grant DMS 1600065. \nBoth authors are grateful to Alexander Powell who indicated to them the papers \\cite{JS2015} and \\cite{PS2016}.\\, \n\\, \n\\section{Proof of Theorem 1.1, and signs of unconditional \nbases}\n\\, \n\\, \\, \\rm We start with a simplest version of our principal \nobservation.\\, \n\\, \n\\subsection{There exist no nonnegative Riesz bases in $ L^{2}$}\n\\label{2.1}\n\nThis result is not new, see \\cite{Aru1966}, \\cite{PS2016}. However, seems that our proof is somewhat simpler.\n\\begin{proof}\n\\, \\, \\, \\rm Indeed, let $ L^{2}=\\, L^{2}_{{\\Bbb R}}(\\Omega \n,\\mu )$, $ \\mu $ continuous, $ \\mu \\Omega <\\, \\infty $, and assume \nthat $ (u_{k})$ is\\, \n$$\n \\text{a normalized unconditional (= Riesz) basis having }\n u_{k}\\geq 0 \\,\\,\\text{on} \\,\\, \\Omega \n$$\n\\rm and $ f\\in L^{2}_{{\\Bbb R}}(\\Omega ,\\mu )$. Using \nthe development $ f=\\, \\sum _{k\\geq 1}(f,u'_{k})u_{k}$ (where \n$ (u'_{k})$ stands for the dual sequence, $ (u_{k},u'_{j})= \\delta \n_{kj}$), define $ R_{N}f=\\, $ $ \\sum _{k\\geq N}(f,u'_{k})u_{k}$ \nand observe that\\, \n$$\n \\Big \\Vert R_{N}f\\Big \\Vert _{L^{1}}\\leq \\, \\displaystyle \\int \n_{\\Omega }\\displaystyle \\sum _{k\\geq N}\\Big \\vert (f,u'_{k})\\Big \\vert u_{k}=\\, \n\\displaystyle \\sum _{k\\geq N}\\Big \\vert (f,u'_{k})\\Big \\vert (u_{k},1)_{L^{2}}=\n$$\n$$\n (f_{*},R_{N}^{*}1)_{L^{2}}, \\rm\\text{where}\\,\\, f_{*}=\\, \n\\displaystyle \\sum _{k\\geq 1}\\Big \\vert (f,u'_{k})\\Big \\vert u_{k}\\,.\n$$\n\\rm Since $ \\Vert f_{*}\\Vert _{2}\\leq \\, B\\Vert f\\Vert _{2}$, \nit means $ \\Big \\Vert R_{N}:L^{2}\\longrightarrow L^{1}\\Big \\Vert _{L^{1}}\\leq \n\\, B\\Vert R^{*}_{N}1\\Vert _{2}$. But $ \\lim_{N}\\Vert R^{*}_{N}1\\Vert \n_{2}=\\, 0$, and the map $ S_{N}f=\\, f-R_{N}f$ has a finite \nrank, so we get that $ id:L^{2}\\longrightarrow L^{1}$ is compact, which \nis not the case (for example, if $ \\mu \\Omega =\\, 1$, there exists \na unimodular orthonormal sequences in $ L^{2}$). 
\n\\end{proof} \n\n\\bigskip\n\n\\subsection{Remarks on other spaces}\n\\label{Lp}\n\n\\\n\\bigskip\n\nLet $ L^{p}= L^{p}_{{\\Bbb R}}(\\Omega \n,\\mu )$\\rm , $ \\mu $ continuous, $ \\mu \\Omega <$ $ \\infty $.\n\n \\bf (1) \n\\rm Exactly the same lines (with $ \\Vert f_{*}\\Vert _{2}$ replaced \nby $ \\Vert f_{*}\\Vert _{X}$, and $ \\Vert R^{*}_{N}1\\Vert _{2}$ by $ \n\\Vert R^{*}_{N}1\\Vert _{X^{*}}$) show that\\, \n\\, \n\\, \\it there is no nonnegative unconditional bases in any reflexive \nBanach space $ X$ of measurable functions\\, \n\\, \n\\rm such that $ L^{\\infty }(\\mu )\\subset \\, X^{*}\\subset \\, L^{1}(\\mu \n)$, $ X^{*}$ stands for the dual space with respect to the duality \n$ (f,h)=\\, \\int _{\\Omega }f\\overline{h}d\\mu $.\\, \n\\, \\, \\it Example: $ X=\\, L^{p}_{{\\Bbb R}}(\\Omega ,\\mu \n)$\\rm , $ 1
0$ such that\\, \n$$\n \\sum _{k\\geq 1}(u_{k}^{-}(x))^{2}\\leq \\, M^{2}\\quad \\forall x\\in E\\,\\, \\text{and} \\,\\, 0<\\, \\mu E<\\, \\infty \\,.\n $$\n\\, \n\\rm This implies the same contradiction as in point 2.1 that the natural \nembedding $ L^{2}(E,\\mu )\\, \\hookrightarrow \\, L^{1}(E,\\mu )$ \nis compact. The steps are as follows.\n\n\n(1) Setting $ v_{k}=\\, u^{-}_{k}$, we have from Lemma \\ref{2.2}\\, \n$$\n \\Vert Vf\\Vert _{l^{2}}^{2}= \\sum _{k\\geq 1}\\vert (f,u^{-}_{k})\\vert \n^{2}\\leq \\, \\Vert V\\Vert ^{2}\\cdot \\Vert f\\Vert ^{2}\\,\\, \\text{on}\\,\\, L^{2}(E,\\mu \n)\n$$\n\\rm and from the frame definition $ \\sum _{k\\geq 1}\\vert (f,u_{k})\\vert \n^{2}=\\, (Sf,f)\\leq \\, B\\Vert f\\Vert ^{2}_{2}$ ($ \\forall f\\in \nL^{2}(\\Omega ,\\mu )$). Hence\\, \n$$\n\\sum _{k\\geq 1}\\vert (f,u^{+}_{k})\\vert ^{2} \\leq \n C^{2}\\Vert f\\Vert ^{2}\\,\\,\\text{on}\\,\\,L^{2}(E,\\mu )\\,,\\,\\, C^{2}\\leq \\, 2(\\Vert \nV\\Vert ^{2}+B)\\,.\n$$\n\n\\rm (2) It follows from $ u^{+}_{k}=\\, u_{k}+\\, u^{-}_{k}$\\it , \n\\rm (1) and Lemma \\ref{2.3} that $ W$,\\, \n$$\n Wf:=\\, \\sum _{k\\geq 1}(f,u^{+}_{k})u^{-}_{k}\\vert E\\,,\n$$\n\\rm acting as $ L^{2}(E,\\mu )\\longrightarrow L^{2}_{{\\Bbb R}}(E,\\mu )$ \nis compact.\\, \n\n\n(3) Now, the quadratic form $ (Sf,f)=\\, (Uf,f)+\\, (Wf,f)+\\, \n(Xf,f)$ on $ L^{2}(E,\\mu )$, where\\, \n$$\n Xf:=\\, \\sum _{k\\geq 1}(f,u^{+}_{k})u^{+}_{k}\n$$\n\\rm is equivalent to $ (f,f)=\\, \\Vert f\\Vert ^{2}$ (in the sens \n$ A\\Vert f\\Vert ^{2}\\leq \\, (Sf,f)\\leq \\, B\\Vert f\\Vert ^{2}$), \nand the forms $ (Uf,f)$ and $ (Wf,f)$ are compact on $ L^{2}(E,\\mu )$. \nIt implies that $ (Xf,f)$ is equivalent to $ \\Vert f\\Vert ^{2}$ on \na subspace $ H\\subset \\, L^{2}(E,\\mu )$ of finite co-dimention.\\, \n\n(4) The latter property means that the compression\\, \n$$\nX_{E}:L^{2}(E,\\mu )\\longrightarrow L^{2}(E,\\mu ),\\quad X_{E}f=\\, Xf\\vert E, \\,\\, f\\in L^{2}(E,\\mu \n$$\n\\rm is Fredholm. Let $ R:L^{2}(E,\\mu )\\longrightarrow L^{2}(E,\\mu )$ \nbe a regularizer of $ X_{E}$, a bounded operator such that\\, \n$$\nRX_{E}=\\, id+\\, K\\,\\, \\text{where} \\,\\,K:L^{2}(E,\\mu )\\longrightarrow \nL^{2}(E,\\mu )\\,\\, \\text{is compact}\\,.\n$$\n\n\\rm (5) Show that $ X_{E}:L^{2}(E,\\mu )\\longrightarrow \\, L^{1}(E,\\mu \n)$ is compact. Indeed, similarly to Lemmas \\ref{2.2} and \\ref{2.3}, the norms $ \n\\Vert X_{E,N}:L^{2}(E,\\mu )\\longrightarrow L^{1}(E,\\mu )\\Vert $ tends \nto zero as $ N\\longrightarrow \\infty $, where\\, \n$X_{E,N}f=\\, \\sum _{k\\geq N}(f,u^{+}_{k})u^{+}_{k}\\vert \nE$. In fact,\n\\begin{align*} \n& \\Vert \\sum _{k\\geq N}(f,u^{+}_{k})u^{+}_{k}\\Vert _{L^{1} (E,\\mu \n)}\\leq \\, \\sum _{k\\geq N}\\vert (f,u^{+}_{k})\\vert \\cdot \\Vert u^{+}_{k}\\Vert \n_{L^{1} (E,\\mu )}=\n\\\\\n&\\sum _{k\\geq N}\\vert (f,u^{+}_{k})\\vert \\cdot \n(u^{+}_{k},1)_{L^{2} (E,\\mu )}\\leq \n\\\\\n& \\leq \\, (\\sum _{k\\geq N}\\vert (f,u^{+}_{k})\\vert ^{2})^{1\/2}(\\sum \n_{k\\geq N}\\vert (u^{+}_{k},1)_{L^{2} (E,\\mu )}\\vert ^{2})^{1\/2}\\leq \n\\\\\n& C\\Vert f\\Vert (\\sum _{k\\geq N}\\vert (u^{+}_{k},1)_{L^{2} (E,\\mu \n)}\\vert ^{2})^{1\/2}:=\\, C\\Vert f\\Vert \\epsilon _{N}\\,,\n\\end{align*}\n\\rm and $ \\lim_{N}\\epsilon _{N}=\\, 0$ in view of (1) above. Now, \nregarding the identity $ RX_{E}=$ $ id+$ $ K$ as acting from $ L^{2}(E,\\mu \n)$ to $ L^{1}(E,\\mu )$, we get that the natural embedding $ id:L^{2}(E,\\mu \n)$ $ \\hookrightarrow $ $ L^{1}(E,\\mu )$ is compact which contradicts \nto $ \\mu E>0$. 
\n\\end{proof} \n\\, \n\\subsection{Sign distributions for bases in more general spaces}\n\\label{2.5}\n \\rm Here \nwe give ``an abstract version\" of the reasoning from 2.1-2.4 (without \ntrying to find the most general setting). Let as before, $ (\\Omega ,\\mu \n)$ be a measure space with a continuous measure, and (WLOG) $ \\mu \\Omega \n<\\, \\infty $. $ X$ will be a real \\it reflexive Banach lattice \n\\rm of measurable functions such that\\, \n$$\nL^{\\infty }\\subset \\, X\\subset \\, L^{1}\\,\\,\\text{\nand} \\,\\, X^{*}=\\, \\{h:\\, hf\\in L^{1},\\, \\forall f\\in X\\}\n$$\n\\rm with the duality $ (f,h)=\\, \\int _{\\Omega }f\\,hd\\mu $.\\, \n\n\\medskip\n\n\\noindent{\\bf Example:} $ X=\\, L^{p}_{\\bR}(\\Omega ,\\mu )$\\rm , \n$ 1
\\, 0$,\\, \n$$\n \\Big (\\displaystyle \\int _{E}u_{k}^{+}d\\mu \\Big )_{k\\geq \n1}\\not\\in (Coef(U))^{*} \\rm \\text{and}\\,\\, \\Big (\\displaystyle \\int _{E}u_{k}^{-}d\\mu \n\\Big )_{k\\geq 1}\\not\\in (Coef(U))^{*}\\,.\n$$\n\\end{theorem}\n\n\\begin{proof} Here is the proof of Theorem \\ref{2.6}. \\rm The reasoning repeats our steps \nabove. Namely, let\\, \n$$\n V^{\\pm }f=\\, \\sum _{k\\geq 1}(f,u'_{k})u^{\\pm }_{k},\n\\,\\,\\text{so that}\\,\\, id=\\, V^{+}-\\, V^{-}\n$$\n\\rm and\\, \n$$\nV_{N}^{\\pm }f= \\sum _{k\\geq N}(f,u'_{k})u^{\\pm }_{k},\\,\\,\n f\\in X,\\,\\, N=\\, 1,2,...\n $$\n\n\\rm Now, assuming $ R^{-}u:=\\, \\Big (\\displaystyle \\int _{E}u_{k}^{-}d\\mu \n\\Big )_{k\\geq 1}\\in (Coef(U))^{*}$ for some $ E$, $ \\mu E>0$, we obtain\\, \n$$\n \\Vert V_{N}^{-}f\\Vert _{1}\\leq \\sum _{k\\geq N}\\vert \n(f,u'_{k})\\vert \\int _{E}u^{-}_{k}d\\mu =\\, (c(f_{*}),R_{N}^{-}u),\n$$\n\\rm where $ f_{*}=\\, \\sum _{k\\geq 1}\\vert (f,u'_{k})\\vert u_{k}$ \n(with $ \\Vert f_{*}\\Vert _{X}\\leq \\, C\\Vert f\\Vert _{X}$, unconditional \nbasis) and $ R_{N}^{-}u=\\, \\{0,...0,\\int _{E}u^{-}_{N+1}d\\mu ,...\\}$. \nSince $ (u'_{k})$ is a basis in $ X^{*}$ (reflexivity of $ X$), we \nget\\, \n$$\n \\Big \\Vert V_{N}^{-}:X\\longrightarrow L^{1}\\Big \\Vert \\leq \n\\, C\\Big \\Vert R_{N}^{-}u\\Big \\Vert _{X^{*\\, }}\\longrightarrow \\, \n0\\,\\,\\text{as}\\,\\,N\\longrightarrow \\infty \\,.\n$$\n\\, \n\\rm The same for $ V_{N}^{+}$:\\, \n$$\n \\Vert V_{N}^{+}f\\Vert _{1}\\leq \\sum _{k\\geq N}\\vert \n(f,u'_{k})\\vert \\int _{E}u^{+}_{k}d\\mu = (c(f_{*}),R_{N}u-R_{N}^{-}u)\\,,\n$$\n\\rm where $ R_{N}u=$ $ \\{0,...0,\\int _{E}u_{N+1}d\\mu ,...\\}$, and hence\\, \n\\, \n $$\n \\Vert V_{N}^{+}:X\\longrightarrow L^{1}\\Vert \\leq C\\Vert R_{N}u-R_{N}^{-}u\\Vert _{X^{*}}\\leq \\, C(\\Vert R_{N}u\\Vert \n_{X^{*\\, }}+\\, \\Vert R_{N}^{-}u\\Vert _{X^{*}})\\,.\n$$ \n\\rm But $ \\displaystyle \\int _{E}u_{k}d\\mu =\\, (\\chi _{E},u_{k})$ \nand hence $ \\Big (\\displaystyle \\int _{E}u_{k}d\\mu \\Big )_{k\\geq 1}\\in \n(Coef(U))^{*}$ ($ X\\subset \\, L^{1}$, and so $ L^{\\infty }\\subset \\, \nX^{*}$). It implies $ \\lim_{N}\\Vert R_{N}u\\Vert _{X^{*}}=\\, 0$, \nand as above, we conclude that both $ V^{+}$, $ V^{-}:\\, X\\longrightarrow \\, \nL^{1}$ are compact operators and $ id=\\, V^{+}-\\, V^{-}$. \nBut there exists a unimodular sequence $ v_{n}$ in $ X$ which tends \nweakly to zero (it is clear when replacing $ (\\Omega ,\\mu )$ by isomorphic \nmeasure space $ ((0,1,dx)$), but $ \\Vert v_{n}\\Vert _{1}=\\, \\mu \\Omega \n>0$. Contradiction. \n \\end{proof} \n\n\\bigskip\n\n\\subsection{\nNow we give an application of Theorem \\ref{2.6}}\n\\label{appl}\n\n\\medskip\n\n\\bf (1) Type, cotype, and unconditional bases. 
\\rm Recall (see for \nexample [Woj1996], point III.A.17) that a Banach space $ X$ is said \nto \\it have cotype $ q,\\,\\, 2\\leq q\\leq \\infty $\\rm , if for some \nconstant $ C>0$ and for every finite sequence $ x=\\,\\, (x_{j})$, \n$ x_{j}\\in X$,\\ \\par \n$$\n C\\displaystyle \\int _{0}^{1}\\Big \\Vert \\displaystyle \\sum _{j}r_{j}(t)x_{j}\\Big \\Vert \ndt\\geq \\,\\, \\Big \\Vert x\\Big \\Vert _{l^{q}}:=\\,\\, \\Big (\\displaystyle \\sum _{j}\\Big \\Vert \nx_{j}\\Big \\Vert ^{q}\\Big )^{1\/q}\\,,\n$$\nand it \\it has type $ q,\\,\\, 1\\leq q\\leq 2$\\rm , if\\ \\par \n$$\n \\displaystyle \\int _{0}^{1}\\Big \\Vert \\displaystyle \\sum _{j}r_{j}(t)x_{j}\\Big \\Vert \ndt\\leq \\,\\, C\\Big \\Vert x\\Big \\Vert _{l^{q}}\\,,\n$$\n where $ (r_{j})_{j\\geq 1}$ stands for the sequence of Rademacher \nfunctions. It is known (and is proved in [Woj1996], Ch. III.A) that \n$ X$ has type $ q$ if and only if $ X^{*}$ has cotype $ q'$, $ {\\frac{1}{q'}} \n+\\,\\, {\\frac{1}{q}} =\\,\\, 1$, and \\it if $ X$ has type $ q_{1}\\leq \n2$ and a cotype $ q_{2}\\geq 2$ and if $ U=\\,\\, (u_{k})$ is a normalized \nunconditional basis in $ X$ then\\ \\par \n\\ \\par \n \\centerline{ $ l^{q_{1}}\\subset \\,\\, Coef(U,X)\\subset \\,\\, l^{q_{2}}$\\rm .}\n\\ \\par \n\\bf Corollary. \\it If in condition of Theorem 2.3, the lattice $ X$ \nhas a cotype $ q_{2}$ then\\ \\par \n\\ \\par \n \\centerline{ $ \\Big (\\displaystyle \\int _{E}u_{k}^{\\pm }d\\mu \\Big )_{k\\geq \n1}\\not\\in l^{q_{2} '}$ \\rm ($ \\forall E\\subset $ $ \\Omega ,$ $ \\mu E>0$), \nwhence $ \\sum _{k\\geq 1}(u_{k}^{\\pm }(x))^{q_{2} '}=\\,\\, \\infty $ \na.e. $ \\Omega $.}\n\n\\medskip\n\n\\rm Indeed, $ l^{q'_{2}}\\subset \\,\\, Coef(U,X)^{*}$, \nand the first claim follows from the theorem. Also \n$$ \\Big (\\displaystyle \\int _{E}u_{k}^{\\pm \n}d\\mu \\Big )^{q_{2} '}\\leq \\,\\, c\\displaystyle \\int _{E}(u_{k}^{\\pm })^{q_{2} '}d\\mu \\,,\n$$\nwhence $ \\displaystyle \\int _{E}\\displaystyle \\sum _{k}(u_{k}^{\\pm })^{q_{2} '}d\\mu \n=\\,\\, \\infty $ for every $ E,\\,\\, \\mu E>0$, which is equivalent \nto\n$$\n \\sum _{k\\geq 1}(u_{k}^{\\pm }(x))^{q_{2} '}= \\infty \\quad\na.e. \\,\\, \\Omega \\,.\n$$ \n\\ \\par \n\\bf (2) The spaces $ X=$ $ L^{p}_{{\\Bbb R}}(\\Omega ,\\mu )$. \\rm It \nis known (and is basically equivalent to Khintchin's inequality, see \n[Woj1996], point III.A.22) that $ L^{p}$ \\it is of type $ q_{1}=\\,\\, min(2,p)$ \nand of cotype $ q_{2}=\\,\\, max(2,p)$\\rm , and hence\\ \\par \n\\ \\par \n \\centerline{ $ Coef(U,L^{p})\\subset $ $ l^{q}$, where $ q=\\,\\, max(2,p)$.}\n\\ \\par \n\\rm (It is curious to note how different is the coefficient space for \nthe standard trigonometric \\it Schauder basis \\rm of $ L^{p}(0,2\\pi $): \nthe Hausdorff--Young inequality tells that $ Coef(e^{inx},L^{p})\\subset $ \n$ l^{p'}$ for $ 1
0$, we have\\ \\par \n\\ \\par \n \\centerline{ for $ 1
0$). Theorem 1.3 is a simple restating of Theorem 3.1 below. The proof \nof Theorem 3.1 is based on the three terms recurrence for orthogonal \npolynomials but its direct application (replacing moduli of sums by \nsums of moduli) fails. Instead, we use a subtle reasoning introduced \nin a similar situation in important papers by A. M\\'at\\'e and P. Nevai \n[MaN1983] and R.Szwarc [Szw2003]. The basic facts of the theory of \northogonal polynomials are contained (for example) in the books [Sz1975], \n[Ber1968], [Sim2005]. One of them, the classical J. Favard theorem (1935), \nclaims that whatever are real sequences $ b_{k}\\in {\\Bbb R}$ and $ a_{k}>0$ \nand the sequence of polynomials $ p_{k}$, $ deg(p_{k})=\\, k$, \n$ k=0,1,...$ defined by the three term recurrence \n$$\n xp_{k}(x)=\\, a_{k+1}p_{k+1}(x)+\\, b_{k}p_{k}(x)+\\, \na_{k}p_{k-1}(x), \\quad k=\\, 0,1,2,...\\,,\n$$\n\\rm where $ p_{0}=\\, 1$, $ p_{-1}(x)=\\, 0$, there exists \n(at least one) Borel measure $ \\mu \\geq 0$ on the real line such that \n$ p_{k}\\in L^{2}(\\mu )$ ($ \\forall k\\geq 0$) and $ (p_{k},p_{j})_{L^{2} (\\mu \n)}=\\, \\delta _{k,j}$ (Kronecker delta). \n\nIn fact, the measure $\\mu$ is the scalar spectral measure of the associated tridiagonal (selfadjoint) Jacobi matrix $J$ having $(b_k)_{k\\ge 0}$ on the main diagonal and $(a_k)_{k\\ge 1}$ on two side diagonals.\n\n\n\n\n\nAnother classical theorem \n(T. Carleman) tell us that such a measure is unique if $ \\sum _{k\\geq 0}{\\frac{1}{a_{k}}} \n=\\, \\infty $ (the so-called ``determined case\") - the condition \nis obviously satisfied in case of Theorem 3.1 below. It follows that \nthe polynomials are dense in $ L^{2}(\\mu )$, and hence $ (p_{k})_{k\\geq \n0}$ forms an orthonormal basis in $ L^{2}(\\mu )$. A huge theory of orthogonal polynomials and the associated Jacobi matrices is (partially) presented\nin books mentioned above.\n\n\\bigskip\n\nWe use here the work of R. Szwarc \\cite{Szw2003}. We just repeat several calculations from this article to get the following result.\n\\begin{theorem}\n\\label{bn}\nLet $\\{b_n\\}$ , $b_n>0$, $b_n\\to \\infty$, be a monotone sequences such that $b_n\/b_{n-1} \\to 1$, and $\\sum b_n^{-1} =\\infty$ and let $a_n$ be such that\n$a_n = \\frac1{2B} \\sqrt{b_n b_{n-1}}$, where $01\/4$.\nLet the sequences\n$$\n\\frac{a_n^2}{b_n b_{n-1}}, \\,\\, \\frac{(b_n+ b_{n-1})}{ a_n^2},\\,\\, \\frac1{a_n^2}\n$$\nhave bounded variation. Then the corresponding Jacobi matrix $J$ with $b_n$ on the main diagonal is essentially\nself-adjoint if and only if $\\sum a_n^{-1} =\\infty$. In that case the spectrum of $J$ coincides \nwith the whole real line and the spectral measure is absolutely continuous.\n\\end{theorem}\n\nTheorem \\ref{bn} follows from this claim (except for the estimates of polynomials) immediately as the monotonicity of $\\{ b_n\\}$ ensures all the regularity required in Theorem \\ref{RSthm}, and, of course, in assumptions of Theorem \\ref{bn} $\\sum b_n^{-1}=\\infty$ gives $\\sum a_n ^{-1}=\\infty$.\n\nLet us follow \\cite{Szw2003} to show the estimate on orthogonal polynomials with respect to the spectral measure of $J$. 
There are several non essential typos in \\cite{Szw2003}, and we will correct them on the way.\n\nWe have\n\\begin{equation}\n\\label{3terms}\nxp_n(x)= a_{n+1} p_{n+1}(x) + b_n p_n(x) + a_{n} p_{n-1}(x)\\,.\n\\end{equation}\nWe put \n\\begin{equation}\n\\label{defAn}\nA_n(x):= p_n(x) \\sqrt{b_n-x},\\quad n\\ge N,\\,\\, \\Lambda_n := B\\frac{a_n}{\\sqrt{(b_n-x)(b_{n-1}-x)}}\\,.\n\\end{equation}\n\nWith this notation \\eqref{3terms} becomes\n\\begin{equation}\n\\label{3t}\n0= \\Lambda_{n+1} A_{n+1}(x) + B A_n(x) + \\Lambda_{n} A_{n-1}(x)\\,.\n\\end{equation}\n\nBy assumptions, $\\Lambda_n\\to \\frac12$ and $B<1$. Moreover,\nsince\n$$\nB^2\\Lambda_n^{-2} = \\frac{b_n b_{n-1}}{a_n^2}- \\frac{(b_n+ b_{n-1})}{ a_n^2}x+ \\frac1{a_n^2} x^2,\n$$\nit is of bounded variation, and thus so is $\\Lambda_n$.\n\n\\begin{theorem} (Mat\\'e, Nevai, \\cite{MaN1983} ) \n\\label{MN}\nLet $\\Lambda_n(x)$ be a positive valued sequence whose terms depend continuously on $x \\in [a,b]$. Let $A_n(x)$ be a real valued sequence of continuous functions satisfying\n\\eqref{3t}\nfor $n \\ge N$. Assume the sequence $\\Lambda_n(x)$ has bounded variation and $\\Lambda_n(x)\\to \\frac12$ for\n$x \\in [a,b]$. Let $|B| < 1$. Then there is a strictly positive function $f(x)$ continuous on $[a,b]$\n such that\n\\begin{equation}\n\\label{det}\nA_n^2(x) - A_{n-1}(x)A_{n+1}(x) \\to f(x)\n\\end{equation}\nuniformly for $x \\in [a,b]$. Moreover, there is a constant $c$ such that\n\\begin{equation}\n\\label{bound}\n|A_n(x)| \\le c\n\\end{equation}\nfor $n\\ge 0$ and $x \\in [a,b]$.\n\\end{theorem}\n\nClearly to prove Theorem \\ref{bn} it is sufficient to use this result of Mat\\'e, Nevai. Indeed, \\eqref{bound} obviously gives us the bound on $p_n(x)^2$ stated in Theorem \\ref{bn}.\nFor the reader's convenience and for making the paper self-contained we give a proof to Theorem \\ref{MN}.\n\n\\bigskip\n\n\\begin{proof} To prove Theorem \\ref{MN}, one first uses recurrent relation \\eqref{3t} to write\n$$\nA_{n-1} = -\\frac{\\Lambda_{n+1}}{\\Lambda_{n} }A_{n+1}(x) -\\frac{ B}{\\Lambda_{n}} A_n(x) \n$$\nand, hence,\n\\begin{equation}\n\\label{alg1}\nA_n^2-A_{n-1}A_{n+1} = A_n^2 +\\frac{\\Lambda_{n+1}}{\\Lambda_{n}} A_{n+1}^2 + \\frac{B}{\\Lambda_{n}} A_n A_{n+1}\n\\end{equation}\nThis can be rewritten as follows\n\\begin{equation}\n\\label{1}\nA_n^2-A_{n-1}A_{n+1} = \\Big( A_{n} + \\frac{B}{2\\Lambda_{n}} A_{n+1}\\Big)^2 + \\Big(\\frac{\\Lambda_{n+1}}{\\Lambda_{n}} - \\frac{B^2}{4\\Lambda_{n-1}^2} \\Big) A_{n+1}^2\n\\end{equation}\nNow we combine that equality with the facts that $\\Lambda_n\\to \\frac12$ and $B<1$, and this combination implies the following estimate:\n\\begin{equation}\n\\label{1a}\nA_{n+1}^2 \\le C (A_n^2-A_{n-1}A_{n+1})\\,.\n\\end{equation}\nBut we can also rewrite the equality\n\\eqref{alg1} in another form:\n\\begin{equation}\n\\label{2}\nA_n^2-A_{n-1}A_{n+1} = \\frac{\\Lambda_{n+1}}{\\Lambda_{n}} \\Big( A_{n+1} + \\frac{B}{2\\Lambda_{n+1}} A_{n}\\Big)^2 + \\Big(1 - \\frac{B^2}{4\\Lambda_{n}\\Lambda_{n+1}} \\Big) A_{n}^2\n\\end{equation}\n\nThis formula and the same two facts that $\\Lambda_n\\to \\frac12$ and $B<1$ imply now the following estimate:\n\\begin{equation}\n\\label{2a}\nA_{n}^2 \\le C (A_n^2-A_{n-1}A_{n+1})\\,.\n\\end{equation}\nLet us also write \n$$\nA_{n+2} = - \\frac{B}{\\Lambda_{n+2}}A_{n+1} -\\frac{\\Lambda_{n+1}}{\\Lambda_{n+2}} A_{n}\n$$\nThat equality together with \\eqref{alg1} give us the following:\n\\begin{align}\n\\label{alg2}\n&(A_{n+1}^2 - A_nA_{n+2} ) - (A_n^2 -A_{n-1}A_{n+1} )= 
\\Big(1-\\frac{\\Lambda_{n+1}}{\\Lambda_{n}} \\Big) A_{n+1}^2 +\\notag\n\\\\ \n&B \\Big(\\frac1{\\Lambda_{n+2}} -\\frac1{\\Lambda_{n}}\\Big) A_n A_{n+1} + \\big( \\frac{\\Lambda_{n+1}}{\\Lambda_{n+2}} -1\\Big) A_n^2\\,.\n\\end{align}\n\nDenoting $\\Delta_n:= A_n^2- A_{n-1}A_{n+1}$ we get from \\eqref{1a}, \\eqref{2a} and \\eqref{alg2}:\n$$\n\\Delta_n >0,\\,A_n^2 + A_{n+1}^2 \\le C\\Delta_n,\\quad |\\Delta_{n+1}-\\Delta_n| \\le C\\big( |\\Lambda_{n+1} -\\Lambda_n|+ |\\Lambda_{n+1} -\\Lambda_{n+2}|\\big)\\Delta_n\\,.\n$$\nDenote $\\eps_n:= |\\Lambda_{n+1} -\\Lambda_n|+ |\\Lambda_{n+1} -\\Lambda_{n+2}|$. Then\n$$\n(1-C\\eps_n) \\Delta_n \\le \\Delta_{n+1}\\le (1+ C\\eps_n) \\Delta_n,\n$$\nand $\\sum\\eps_n$ converges by the assumption that $\\Lambda_n$ has bounded variation.\n\nTherefore, $\\Delta_n$ uniformly converges to a strictly positive function $f$, and hence,\n$A_n^2$ are uniformly bounded uniformly bounded for $n > N(x)$ (namely, by a multiple $Cf(x)$ of $f(x)$). Thus \\eqref{bound} is proved, Theorem \\ref{MN} of Mat\\'e--Nevai is proved, and we already said that this proves the bound of Theorem \\ref{bn}.\n\n\\end{proof}\n\n\\section{Counterexamples: an attempt on Bessel systems, \nand proof of Theorem 1.2.}\n\\label{4}\n\\subsection{Part I of the Theorem 1.2.}\n\\label{4.1}\n \\rm A \\it natural question whether a ``half of the frame \ncondition\"\\rm , namely the Bessel one, \\it is sufficient \\rm for getting \nthe conclusion of theorem 1.1, is essentially equivalent (in the notation \nof Theorem 1.2) to the following: whether\\, \n$$\n(1)\\&(2) \\Rightarrow (3')\\it := \\exists f\\in L^{2}_{{\\Bbb R}}(\\Omega \n,\\mu ) \\rm \\,\\,\\text{such that}\\,\\,\\sum _{n}\\vert (f,v_{n})\\vert ^{2}=\\, \\infty \n\\, ?\n$$\n\\, \\, \\rm Indeed, if (3') does not hold (and we have $ \\sum _{n}\\vert \n(f,v_{n})\\vert ^{2}<\\, \\infty $, $ \\forall f\\in L^{2}_{{\\Bbb R}}(\\Omega \n,\\mu )$), we automatically get property (3) of theorem 1.3 just due \nto Banach--Steinhaus theorem applied to the semi-norms\\, \n$$\n p_{n}(f)=\\, \\Big (\\displaystyle \\sum _{k=1}^{n}(f,v_{k})^{2}\\Big )^{1\/2}\\,.\n $$\n\\, \n\\, \\, \\rm However, there is a counterexample which gives \na negative answer to this question and proves Part I of the Theorem 1.2.\\, \n\n\\medskip\n\n\\subsection{Counterexample}\n\\label{counter}\n \\rm Let $ (\\Omega ,\\mu )=\\, (0,1),\\, \ndx$, and $ (v_{k})_{k\\geq 1}$ be any enumeration of the indicator functions \n$ \\chi _{I}$ of dyadic subintervals $ {\\mathcal D}=\\, \\{I=\\, I_{j,n}\\}\\, \n$of $ (0,1)$:\\, \n$$\n I_{j,n}=\\, ({\\frac{\\displaystyle j}{\\displaystyle 2^{n}}} \n,{\\frac{\\displaystyle j+1}{\\displaystyle 2^{n}}} ),\\, j=\\, 0,...,\\, \n2^{n}-1\\,.\n$$\n\n\\medskip\n\n\\, \\rm Properties (1) and (2), as well as the completeness of \n$ (v_{k})$, are obvious. For (3), we write\\, \n$$\n \\displaystyle \\sum _{k}\\Big \\vert (f,v_{k})\\Big \\vert ^{2}=\\, \n\\displaystyle \\sum _{I\\in {\\mathcal D}}\\Big ({\\frac{\\displaystyle 1}{\\displaystyle \\vert \nI\\vert }} \\displaystyle \\int _{I}fdx\\Big )^{2}\\Big \\vert I\\Big \\vert ^{2}\\,,\n$$\n\\rm and notice that the desired property (3) is the ``Carleson embedding\"\n$$\n \\displaystyle \\sum _{I\\in {\\mathcal D}}\\Big ({\\frac{\\displaystyle 1}{\\displaystyle \n\\vert I\\vert }} \\displaystyle \\int _{I}fdx\\Big )^{2}w_{I}\\leq \\, B\\Big \\Vert \nf\\Big \\Vert _{2}^{2}\\,,\n$$\nwhere $ w_{I}=\\, \\vert I\\vert ^{2}$, $ I\\in {\\mathcal D}$. 
The \nnecessary and sufficient condition for such an embedding is (see [NTV1999], \n[NT1996])\\, \n$$\n \\sup_{J\\in {\\mathcal D}}{\\frac{\\displaystyle 1}{\\displaystyle \\vert \nJ\\vert }} \\displaystyle \\sum _{I\\subset J,I\\in {\\mathcal D}}w_{I}<\\, \\infty \n\\,,\n$$ \n\\rm which is obviously fulfilled for $ w_{I}=$ $ \\vert I\\vert ^{2}$, \n$ I\\in {\\mathcal D}$. \n\n\n\\bigskip\n\n\n\\subsection{Part II of the Theorem 1.2}\n\\label{4.3}\n \\rm Take $ \\Omega =\\, (0,2)$, and let $ (v_{n})_{n\\geq \n1}$ be the sequence in $ L^{2}_{{\\Bbb R}}((0,1),dx)$ constructed in \nPart I. Without loss of generality, we suppose that $ B<\\, 1$. \nThen, the linear mapping $ T:l^{2}\\longrightarrow L^{2}(0,1)$ defined \nby $ T\\delta _{n}=\\, v_{n},\\, n\\geq 1$ ($ \\delta _{n}$ stands \nfor the natural basis of $ l^{2}$) is a (strict) contraction. Let $ \nD_{T}=\\, (I-T^{*}T)^{1\/2}:l^{2}\\longrightarrow l^{2}$ its defect \noperator, and $ V:l^{2}\\longrightarrow L^{2}(1,2)$ an arbitrary isometric \noperator. We naturally consider $ L^{2}(0,2)$ as an orthogonal sum \n$ L^{2}(0,2)=\\, L^{2}(0,1)\\oplus L^{2}(1,2)$ and set $ Ux=\\, Tx\\oplus \nVD_{T}x$ for $ x\\in l^{2}$. Then, $ U$ is isometric, $ \\Vert Ux\\Vert ^{2}=\\, \n\\Vert Tx\\Vert ^{2}+\\, \\Vert D_{T}x\\Vert ^{2}=\\, \\Vert x\\Vert ^{2}$, \nand hence $ u_{2n}:=\\, U\\delta _{n}$, $ n=1,2,...$ is an orthonormal \nbasis in $ F:=\\, Ul^{2}\\subset \\, L^{2}(0,2)$. Choosing an \narbitrary orthonormal basis $ (u_{2n+1})_{n\\geq 1}$ in the orthogonal \ncomplement $ F^{\\perp }$, we obtain an orthonormal basis $ (u_{k})_{k\\geq \n1}$ in $ L^{2}(0,2)$ satisfying all requirements of the theorem (with \n$ E=\\, (0,1)$). \n\n\n\\subsection{A lapse of equidistribution between $ u_{k}^{\\pm }(x)$}\n\\label{4.4}\n\\begin{proof}\n \\rm One \ncan reordering the basis from 4.2 in order to get the following: \\it there \nexists an orthonormal basis $ (U_{k})$ in $ L_{\\bR}^{2}(0,2)$ such that\\, \n$$\n \\displaystyle \\sum _{k=1}^{n}(U_{k}^{-}(x))^{2}=\\, o\\Big (\\displaystyle \\sum \n_{k=1}^{n}(U_{k}^{+}(x))^{2}\\Big )\\,\\,\\text{as}\\,\\, n\\longrightarrow \\infty \\,\\,\n x\\in (0,1)\\,.\n $$\nIndeed, it suffices to set\\, \n$$\n(U_{k}):\\, u_{2}, u_{4},...,u_{2N_{1}},\\,\\, u_{1},\\,\\,\nu_{2N_{1} +2}, ...,\\, u_{2N_{2}}{\\rm ,}u_{3},...\n$$ \nwhere the integers $ N_{1}<\\, N_{2}<...$ increase sufficiently \nfast. \n\\end{proof}\n\\, \n\\subsection{A minimal sequence can be positive}\n\\label{4.5}\n \\rm Let $ u_{k}(x)=\\, {\\frac{1}{1+\\sqrt{2} \n}} (1+\\, Cos(\\pi kx))$, $ k=\\, 1,2,...$ in $ L_{\\bR}^{2}(0,1)$. \nThen $ (u_{k})$ spans $ L_{\\bR}^{2}(0,1)$, is normalized and uniformly \nminimal (with the dual $ u'_{k}=\\, (2+\\sqrt{2} )Cos(\\pi kx)$), \nand $ u_{k}(x)\\geq 0$. In fact, the Fourier series with respect to \n$ (u_{k})$ of a function $ f\\in L_{\\bR}^{2}(0,1)$, $ \\sum _{k\\geq 1}(f,u'_{k})u_{k}$, \nconverges to $ f$, if $ f$ is (for example) Dini continuous at $ x=0$ \nand $ f(0)=\\, 0$. However, $ (u_{k})$ is not a basis.\n\nThe question of the existence of non-negative Schauder basis in $L^p, p>1$ is open to the best of our knowledge. Detailed discussion can be found in \\cite{PS2016}. For $p=1$, as it is already mentioned, non-negative Schauder basis exists, see \\cite{JS2015}.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\n\\section{Introduction}\nThe multi-armed bandit problem is a model that exemplifies the exploration-exploitation tradeoff in online decision making. It has various applications in drug design, online advertising, recommender systems, and so on. In stochastic multi-armed bandit problems, the agent sequentially chooses an arm from the given arm set at each time step and then observes a random reward drawn from the unknown distribution associated with the chosen arm. \n\nThe standard multi-armed bandit problem, where the arms are not correlated with one another, has been studied extensively in literature. While the \\emph{regret minimization} problem aims at maximizing the cumulative rewards by the trade-off between exploration and exploitation \\citep{auer2002finite, bubeck2012regret,thompson1933likelihood,agrawal2012analysis}, the \\emph{pure exploration} problem focuses on efficient exploration with specific goals, e.g., to identify the best arm \\citep{even2006action,bubeck2009pure,karnin2013almost,carpentier2016tight,gabillon2012best,garivier2016optimal}. There are two complementary settings for the problem of \\emph{best arm identification}: (i) Given $T \\in \\mathbb N$, the agent aims to maximize the probability of finding the best arm in at most $T$ time steps; (ii) Given $\\delta > 0$, the agent aims to find the best arm with the probability of at least $1-\\delta$ in the smallest number of steps. These settings are respectively known as the fixed-budget and fixed-confidence settings.\n\n\nIn this paper, we consider the problem of best arm identification in linear bandits in the fixed-budget setting. In linear bandits, the arms are correlated through an unknown global regression parameter vector $\\theta^{*} \\in \\mathbb{R}^{d}$. In particular, each arm $i$ from the arm set $\\mathcal A$ is associated with an arm vector $a(i) \\in \\mathbb{R}^{d}$, and the expected reward of arm $i$ is given by the inner product between $\\theta^{*}$ and $a(i)$. Hence, the standard multi-armed bandits and linear bandits are fundamentally different due to the fact that for the latter, pulling one arm can indirectly reveal information about the other arms but in the former, the arms are independent. \n\nA wide range of applications in practice can be modeled by linear bandits. For example, \\citet{tao2018best} considered online advertising, where the goal is to select an advertisement from a pool to maximize the probability of clicking for web users with different features. Empirically, the probability of clicking can be approximated by a linear combination of various attributes associated with the user and the advertisements (such as age, gender, the domain, keywords, advertising genres, etc.). Moreover, \\citet{hoffman2014correlation} applied the linear bandit model into the traffic sensor network problem and the problem of automatic model selection and algorithm configuration.\n\n\\paragraph{Main contributions.} Our main contributions are as follows:\n\\begin{enumerate}[(i)]\n \\item We design an algorithm named {\\em Optimal Design-based Linear Best Arm Identification} (OD-LinBAI). This parameter-free algorithm utilizes a phased elimination-based strategy in which the number of times each arm is pulled in each phase depends on G-optimal designs \\citep{kiefer1960equivalence}. \n \\item We derive an upper bound on the failure probability of OD-LinBAI. In particular, we show that the exponent depends on a hardness quantity $H_{2,\\mathrm{lin}}$. 
This quantity is a function of {\\em only} the first $d-1$ optimality gaps, where $d$ is the dimension of the arm vectors. This is a significant improvement over the upper bound of the failure probability of BayesGap in \\citet{hoffman2014correlation} which depends on a hardness quantity that depends on {\\em all} the gaps. Furthermore, OD-LinBAI is parameter-free, whereas BayesGap requires the knowledge of the hardness quantity (which is usually not known).\n \\item Finally, using ideas from~\\citet{carpentier2016tight}, we prove a minimax lower bound which involves another hardness quantity $H_{1,\\mathrm{lin}}$. By comparing $H_{1,\\mathrm{lin}}$ to $H_{2,\\mathrm{lin}}$, we show that OD-LinBAI is minimax optimal up to constants in the exponent. Experiments firmly corroborate the efficacy of OD-LinBAI vis-\\`a-vis BayesGap \\cite{hoffman2014correlation} and its variants, especially on hard instances.\n\\end{enumerate}\n \n\\paragraph{Related work.} The problem of regret minimization in linear bandits was first studied by \\citet{abe1999associative}, and has attracted extensive interest in the development of various algorithms (e.g., UCB-style algorithms \\citep{auer2002using,dani2008stochastic,rusmevichientong2010linearly,abbasi2011improved,chu2011contextual}, Thompson sampling \\citep{agrawal2013thompson, abeille2017linear}). In particular, in the book of \\citet{lattimore2020bandit}, a regret minimization algorithm based on the G-optimal design was proposed for linear bandits with finitely many arms. Although both this algorithm and our algorithm OD-LinBAI utilize the G-optimal design technique, they differ in numerous aspects including the manner of elimination and arm allocation, which emanates from the two different objectives.\n\nFor the problem of best arm identification in linear bandits, the fixed-confidence setting has previously studied in \\citep{tao2018best, soare2014best, xu2018fully,zaki2019towards,fiez2019sequential,kazerouni2021best,degenne2020gamification}. In particular, \\citet{soare2014best} introduced the optimal G-allocation problem and proposed a static algorithm $\\mathcal X \\mathcal Y$-Oracle as well as a semi-adaptive algorithm $\\mathcal X \\mathcal Y$-Adaptive; see Remark~\\ref{rmk:g-opt} for more discussions to \\citet{soare2014best}. \\citet{degenne2020gamification} treated the problem as a two-player zero-sum game between the agent and the\nnature, and thus designed an asymptotically optimal algorithm for the fixed-confidence setting.\n\nTo the best of our knowledge, \\citet{hoffman2014correlation} is the only previous work on the problem of best arm identification in linear bandits in the fixed-budget setting. They introduced a gap-based exploration algorithm BayesGap, which is a Bayesian treatment of UGapEb \\citep{gabillon2012best}. However, BayesGap is computationally expensive and not parameter-free. See Section~\\ref{section_mainresults} and Section~\\ref{section_exp} for more comparisons.\n\n\\section{Problem setup and preliminaries}\n\\paragraph{Best arm identification in linear bandits.} We consider the standard linear bandit problem with an unknown global regression parameter. In a linear bandit instance $\\nu$, the agent is given an arm set $\\mathcal A = [K]$, which corresponds to arm vectors $\\{a(1), a(2),\\ldots,a(K)\\} \\subset \\mathbb R^d$. 
At each time $t$, the agent chooses an arm $A_t$ from the arm set $\\mathcal A$ and then observes a noisy reward \n$$\nX_{t}=\\braket{\\theta^{*}, a(A_{t})}+\\eta_{t}\n$$\nwhere $\\theta^{*} \\in \\mathbb{R}^{d}$ is the unknown parameter vector and $\\eta_{t}$ is independent zero-mean $1$-subgaussian random noise.\n\n\nIn the fixed-budget setting, given a time budget $T \\in \\mathbb N$, the agent aims at maximizing the probability of identifying the best arm, i.e., the arm with the largest expected reward, with no more than $T$ arm pulls. More formally, the agent uses an \\emph{online} algorithm $\\pi$ to decide the arm $A_t^{\\pi}$ to pull at each time step $t$, and the arm $i_{\\mathrm{out}}^{\\pi} \\in \\mathcal A$ to output as the identified best arm by time $T$. We abbreviate $A_t^{\\pi}$ as $A_t$ and $i_{\\mathrm{out}}^{\\pi}$ as $i_{\\mathrm{out}}$ when there is no ambiguity.\n\nFor any arm $i\\in \\mathcal A$, let $p(i) = \\braket{\\theta^{*}, a(i)}$ denote the expected reward. For convenience, we assume that the expected rewards of the arms are in descending order and the best arm is unique. That is to say, $p(1) > p(2) \\ge \\dots \\ge p(K)$. For any suboptimal arm $i$, we denote $\\Delta_i = p(1) - p(i)$ as the optimality gap. For ease of notation, we also set $\\Delta_1= \\Delta_2$. Furthermore, let $\\mathcal E$ denote the set of all the linear bandit instances defined above.\n\n\\paragraph{Dimensionality-reduced arm vectors.} For any linear bandit instance, if the corresponding arm vectors do not span $\\mathbb R ^d$, i.e., $\\operatorname{span} \\left( \\{a(1), a(2),\\dots,a(K)\\} \\right) \\subsetneq \\mathbb R ^d$, the agent can work with a set of dimensionality-reduced arm vectors $\\{a'(1), a'(2),\\dots,a'(K)\\} \\subset \\mathbb R^{d'}$, that spans $\\mathbb R ^{d'}$, with little consequence. Specifically, let $B \\in \\mathbb R ^{d\\times d'} $ be a matrix whose columns form an orthonormal basis of the subspace spanned by $a(1), a(2),\\dots,a(K)$.\\footnote{Such an orthonormal basis can be calculated efficiently with the reduced singular value decomposition, Gram\u2013Schmidt process, etc.} Then the agent can simply set $a'(i) = B^{\\top}a(i)$ for each arm $i$.\n\nTo verify this, notice that $BB^{\\top}$ is a projection matrix onto the subspace spanned by $ \\{a(1), a(2),\\dots,a(K)\\} $ and consequently\n\\begin{align*}\n p(i) = \\braket{\\theta^*, a(i)} &= \\braket{\\theta^*, BB^{\\top}a(i)} = \\braket{B^{\\top}\\theta^*, B^{\\top}a(i)} = \\braket{{\\theta^*}', a'(i)} .\n\\end{align*}\nNote that $\\theta^*$ is the unknown parameter vector for original arm vectors while ${\\theta^*}' = B^{\\top}\\theta^*$ is the corresponding unknown parameter vector for the dimensionality-reduced arm vectors. In the problem of linear bandits, what we really care about is not the original unknown parameter $\\theta^*$ itself but the inner products between $\\theta^*$ and the arm vectors $a(i)$, which establishes the equivalence of original arm vectors and dimensionality-reduced arm vectors. \n\nIn our work, without loss of generality, we assume that the entire set of original arm vectors $ \\{a(1), a(2),\\dots,a(K)\\} $ span $\\mathbb R ^d$ and $d\\ge 2$.\\footnote{The situation that $d = 1$ is trivial: each arm vector is a scalar multiple of one another.} However, this idea of transforming into dimensionality-reduced arm vectors is often used in our elimination-based algorithm. 
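As a concrete illustration (ours, not part of the original text), the matrix $B$ and the dimensionality-reduced arm vectors $a'(i) = B^{\top}a(i)$ can be obtained from a reduced singular value decomposition of the matrix whose columns are the arm vectors. The following minimal Python sketch assumes the arm vectors are stacked as the rows of a NumPy array; the function name and the rank tolerance are illustrative choices.
\begin{verbatim}
import numpy as np

def reduce_arm_vectors(arm_vectors, tol=1e-10):
    """Return (B, reduced): B has orthonormal columns spanning
    span{a(1),...,a(K)}, and reduced[i] = B^T a(i) has dimension d'."""
    A = np.asarray(arm_vectors, dtype=float)     # shape (K, d), rows are a(i)
    # Reduced SVD of the (d, K) matrix whose columns are the arm vectors.
    U, s, _ = np.linalg.svd(A.T, full_matrices=False)
    d_prime = int(np.sum(s > tol * s.max()))     # numerical rank of the span
    B = U[:, :d_prime]                           # orthonormal basis of the span
    return B, A @ B                              # row i equals B^T a(i)
\end{verbatim}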
See Section~\\ref{section_algo} for details.\n\n\\paragraph{Least squares estimators.} Let $A_1, A_2, \\dots, A_n$ be the sequence of arms pulled by the agent and $X_1,X_2,\\dots,X_n$ be the corresponding noisy rewards. Suppose that the corresponding arm vectors $\\{a(A_1), a(A_2), \\dots, a(A_n)\\}$ span $\\mathbb R ^d$. Then the ordinary least squares (OLS) estimator of $\\theta^*$ is given by\n\\begin{equation*}\n\\hat{\\theta}=V^{-1} \\sum_{t=1}^{n} a(A_{t}) X_{t}\n\\end{equation*}\nwhere $V=\\sum_{t=1}^{n} a(A_t) a(A_t)^{\\top} \\in \\mathbb R^{d\\times d}$ is invertible. By applying the properties of subgaussian random variables, a confidence bound for the OLS estimator can be derived as follows. \n\\begin{proposition}[{\\citet[Chapter 20]{lattimore2020bandit}}]\n\\label{prop_concentration_estimator}\nIf $A_1, A_2, \\dots, A_n$ are deterministically chosen without knowing the realizations of $X_1,X_2,\\dots,X_n$, then for any $x \\in \\mathbb R^d$ and $\\delta > 0$,\n\\begin{equation*}\n \\Pr \\left( \\braket{\\hat{\\theta}-\\theta^*, x}\\ge \\sqrt{2\\|x\\|^2_{V^{-1}} \\log\\left(\\frac 1 {\\delta} \\right)}\\right) \\le \\delta.\n\\end{equation*}\n\\end{proposition}\n\n\\begin{remark} \nWhen the arm pulls are adaptively chosen according to the random rewards, Proposition~\\ref{prop_concentration_estimator} no longer applies and an extra factor $\\sqrt d$ has to be paid for adaptive arm pulls \\citep{abbasi2011improved}. Our algorithm avoids this issue by deciding the arm pulls at the beginning of each phase, and designing the OLS estimator based on information from the current phase. See Section~\\ref{section_algo} for details.\n\\end{remark}\n\n\\paragraph{G-optimal design.} The confidence interval in Proposition~\\ref{prop_concentration_estimator} shows the strong connection between the arm allocation in linear bandits and experimental design theory \\citep{pukelsheim2006optimal}. To control the confidence bounds, we first introduce the G-optimal design technique into the problem of best arm identification in linear bandits in the fixed-budget setting. Formally, the G-optimal design problem aims at finding a probability distribution $\\pi: \\{a(i):i\\in \\mathcal A \\} \\rightarrow[0,1]$ that minimises \n\\begin{equation*}\n g(\\pi) = \\max_{i\\in \\mathcal A} \\| a(i) \\|^2_{V(\\pi)^{-1}}\n\\end{equation*}\nwhere $V(\\pi) = \\sum_{i\\in \\mathcal A} \\pi(a(i))a(i)a(i)^{\\top}$. Theorem~\\ref{theorem_goptimal} states the existence of a small-support G-optimal design and the minimum value of $g$.\n\\begin{theorem}[\\citet{kiefer1960equivalence}]\n\\label{theorem_goptimal}\nIf the arm vectors $\\{a(i):i\\in \\mathcal A \\}$ span $\\mathbb R ^d$, the following statements are equivalent:\n\\begin{enumerate}[(1)]\n\\item $\\pi^*$ is a minimiser of $g$.\n\\item $\\pi^*$ is a maximiser of $f(\\pi) = \\log \\det V(\\pi)$.\n\\item $g(\\pi^*) = d$.\n\\end{enumerate}\nFurthermore, there exists a minimiser $\\pi^*$ of $g$ such that $|\\operatorname{Supp}\\left(\\pi^*\\right)| \\leq d(d+1) \/ 2$. \n\\end{theorem}\n\n\\begin{remark} \\label{rmk:g-opt}\nIt is worth mentioning that the G-optimal design problem for finite arm vectors is a convex optimization problem while the G-allocation problem in \\citet{soare2014best} for the fixed-confidence best arm identification in linear bandits is an NP-hard discrete optimization problem. 
A classical algorithm to solve the G-optimal design problem is the Frank-Wolfe algorithm \\citep{frank1956algorithm}, whose modified version guarantees linear convergence \\citep{damla2008linear}. For our work, it is sufficient to compute an $\\epsilon$-approximate optimal design\\footnote{For an $\\epsilon$-approximate optimal design $\\pi$, $g(\\pi) \\le (1+\\epsilon)d$.} with minimal impact on performance. Recently, a near-optimal design with smaller support was proposed in \\citet{lattimore2020learning}, which might be helpful in some scenarios. See Appendix~\\ref{appendix_goptimal} for more discussions on the above issues. To reduce clutter and ease the reading, henceforward in the main text, we assume that a G-optimal design for finite arm vectors can be found accurately and efficiently.\n\\end{remark}\n\n\n\\section{Algorithm}\n\\label{section_algo}\nOur algorithm Optimal Design-based Linear Best Arm Identification (OD-LinBAI) is presented in Algorithm~\\ref{algo1}.\n\n\\begin{algorithm}[hbpt]\n\\caption{Optimal Design-based Linear Best Arm Identification (OD-LinBAI)} \n\\label{algo1}\n\\hspace*{0.02in} {\\bf Input:} time budget $T$, arm set $\\mathcal A = [K]$ and arm vectors $\\{a(1), a(2),\\dots,a(K)\\} \\subset \\mathbb R^d$.\n\\begin{algorithmic}[1]\n\\State Initialize $t_0 = 1$, $ \\mathcal A_{0} \\leftarrow \\mathcal A$ and $d_0=d$.\n\\State For each arm $i \\in \\mathcal A_{0}$, set $a_0(i) = a(i)$.\n\\State Solve $m$ by Equation~(\\ref{equation_m}).\n\\For{$r=1$ to $\\lceil\\log _{2} d\\rceil$} \n\\State Set $d_r = \\dim \\left (\\operatorname{span}\\left(\\{a_{r-1}(i):i\\in \\mathcal A_{r-1}\\}\\right)\\right)$.\n\\If{$d_r = d_{r-1}$}\n\\State For each arm $i \\in \\mathcal A_{r}$, set $a_r(i) = a_{r-1}(i)$.\n\\Else\n\\State Find matrix $B_r \\in \\mathbb R^{d_{r-1}\\times d_r}$ whose columns form a orthonormal basis of the subspace spanned by $\\{a_{r-1}(i):i\\in \\mathcal A_{r-1}\\}$.\n\\State For each arm $i \\in \\mathcal A_{r-1}$, set $a_r(i) = B_r^{\\top}a_{r-1}(i)$.\n\\EndIf\n\\If{$r=1$}\n\\State Find a G-optimal design $\\pi_r:\\{a_r(i):i\\in \\mathcal A_{r-1}\\}\\rightarrow[0,1]$ with $|\\operatorname{Supp}\\left(\\pi_{r}\\right)| \\leq d(d+1) \/ 2$. \n\\Else\n\\State Find a G-optimal design $\\pi_r:\\{a_r(i):i\\in \\mathcal A_{r-1}\\}\\rightarrow[0,1]$.\n\\EndIf\n\\State Set \n$$\nT_r(i) = \\left\\lceil \\pi_r(a_r(i)) \\cdot m \\right\\rceil \\quad \\text { and } \\quad \nT_r = \\sum_{i \\in \\mathcal A_{r-1}} T_r(i).\n$$\n\\State Choose each arm $i \\in \\mathcal A_{r-1}$ exactly $T_r(i)$ times.\n\\State Calculate the OLS estimator: \n$$\n\\hat{\\theta}_{r}=V_{r}^{-1} \\sum_{t=t_{r}}^{t_{r}+T_{r}-1} a_r(A_{t}) X_{t} \\quad \\text { with } \\quad V_{r}=\\sum_{i \\in \\mathcal{A}_{r-1}} T_{r}(i) a_r(i) a_r(i)^{\\top}\n$$\n\\State For each arm $i \\in \\mathcal A_{r-1}$, estimate the expected reward: \n$$\n\\hat{p}_{r}(i) = \\braket{\\hat{\\theta}_{r},a_r(i)}.\n$$\n\\State Let $ \\mathcal A_{r}$ be the set of $\\lceil d \/ 2^r\\rceil$ arms in $ \\mathcal A_{r-1}$ with the largest estimates of the expected rewards.\n\\State Set $t_{r+1} = t_r+T_r$.\n\\EndFor\n\\end{algorithmic}\n\\hspace*{0.02in} {\\bf Output:} the only arm $i_{\\mathrm{out}}$ in $ \\mathcal A_{\\lceil\\log _{2} d \\rceil}$.\n\\end{algorithm}\n\nThe algorithm partitions the whole horizon into $\\lceil\\log _{2} d \\rceil$ phases, and maintains an \\emph{active} arm set $\\mathcal A_r$ in each phase $r$. The length of each phase is roughly equal to $m$, which will be formally defined later. 
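\n\nAs a complement to the pseudocode, the following condensed sketch (our own illustration under simplifying assumptions, written with NumPy; the reward oracle \\texttt{pull} and the change of basis of Lines $5-11$ are assumed to be provided) approximates the G-optimal design of Lines $12-16$ with the Frank-Wolfe-type updates mentioned in Remark~\\ref{rmk:g-opt} and then performs one phase of allocation, estimation and elimination:\n\\begin{verbatim}\nimport numpy as np\n\ndef g_optimal_design(A, n_iter=2000):\n    # Frank-Wolfe-type iterations for an approximate G-optimal design.\n    # A: (k, d) rows are the active (dimensionality-reduced) arm vectors,\n    # assumed to span R^d.\n    k, d = A.shape\n    pi = np.full(k, 1.0 / k)\n    for _ in range(n_iter):\n        V = A.T @ (A * pi[:, None])          # V(pi) = sum_i pi_i a_i a_i^T\n        g = np.einsum('ij,jk,ik->i', A, np.linalg.inv(V), A)\n        j = int(np.argmax(g))                # g[i] = ||a_i||^2 in the V(pi)^{-1} norm\n        if g[j] <= d + 1e-6:\n            break                            # g(pi) = d at the optimum\n        gamma = (g[j] / d - 1.0) / (g[j] - 1.0)   # exact line-search step\n        pi = (1.0 - gamma) * pi\n        pi[j] += gamma\n    return pi\n\ndef run_phase(active, A, m, n_keep, pull):\n    # One phase: design, allocation, per-phase OLS, elimination.\n    # n_keep should be ceil(d / 2**r) in phase r.\n    Asub = A[active]\n    pi = g_optimal_design(Asub)\n    T = np.ceil(pi * m).astype(int)          # T_r(i) = ceil(pi_r(a_r(i)) * m)\n    V = (Asub * T[:, None]).T @ Asub\n    b = np.zeros(A.shape[1])\n    for idx, i in enumerate(active):\n        for _ in range(T[idx]):\n            b += Asub[idx] * pull(i)         # accumulate a_r(i) * X_t\n    theta_hat = np.linalg.solve(V, b)        # OLS estimator of this phase\n    scores = Asub @ theta_hat                # estimated expected rewards\n    keep = np.argsort(-scores)[:n_keep]\n    return [active[idx] for idx in keep]\n\\end{verbatim}\nIterating this phase routine with the number of surviving arms set to $\\lceil d \/ 2^{r}\\rceil$ in phase $r$, and recomputing the dimensionality-reduced arm vectors between phases as in Lines $5-11$, mirrors the schedule described next.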
\n\nMotivated by the equivalence of the original arm vectors and the dimensionality-reduced arm vectors, at the beginning of each phase $r$, the algorithm computes a set of dimensionality-reduced arm vectors $\\{a_r(i):i\\in \\mathcal A_{r-1}\\} \\subset \\mathbb R ^{d_r}$ which spans the $d_r$-dimensional Euclidean space $\\mathbb R ^{d_r}$. This can be implemented based on the dimensionality-reduced arm vectors of the last phase $\\{a_{r-1}(i):i\\in \\mathcal A_{r-1}\\} $ in an iterative manner (Lines $5-11$).\n\nAfter that, Algorithm~\\ref{algo1} finds a G-optimal design $\\pi_r$ for the current dimensionality-reduced arm vectors, with a restriction on the cardinality of the support when $r=1$. OD-LinBAI then pulls each arm in $\\mathcal A _{r-1}$ according to the proportions specified by the optimal design $\\pi_r$. Specifically, the algorithm chooses each arm $i \\in \\mathcal A_{r-1}$ exactly $T_r(i) = \\left\\lceil \\pi_r(a_r(i)) \\cdot m \\right\\rceil$ times, where the parameter $m$ is fixed among different phases and defined as \n\\begin{equation}\n\\label{equation_m}\n m = \\frac {T-\\min(K, \\frac { d (d +1)} 2) -\\sum\\limits_{r=1}^{{\\lceil\\log _{2} d \\rceil}-1} {\\left\\lceil\\frac {d} {2^{r}}\\right\\rceil}} {{\\lceil\\log _{2} d \\rceil} }.\n\\end{equation}\nNote that $m = \\Theta(T\/\\log_2 d)$ as $T\\rightarrow \\infty$ with $K$ fixed. Lemma~\\ref{lemma_m} in Appendix~\\ref{appendix_upperbound} shows with such choice of $m$, the total time budget consumed by the agent is no more than $T$. It turns out that the parameter $m$ plays a significant role in the implementation as well as the theoretical analysis of Algorithm~\\ref{algo1}.\n\nSince the support of the G-optimal design $\\pi_r$ must span $\\mathbb R ^{d_r}$, the OLS estimator can be directly applied (Line $19$). Then for each arm $i \\in \\mathcal A _{r-1}$, an estimate of the expected reward is derived. Note that Algorithm~\\ref{algo1} decouples the estimates of different phases and only utilizes the information obtained in the current phase $r$. \n\nAt the end of each phase $r$, Algorithm~\\ref{algo1} eliminates a subset of possibly suboptimal arms. In particular, $K-\\lceil d \/ 2\\rceil$ arms are eliminated in the first phase and about half of the active arms are eliminated in each of the following phases. Eventually, there is only single arm $i_{\\mathrm{out}}$ in the active set, which is the output of Algorithm~\\ref{algo1}.\n\n\\begin{remark}\nIt is worth considering the case of standard multi-armed bandits, which can be modeled as a special case of linear bandits. In particular, for any arm $i\\in \\mathcal A=[K]$, the corresponding arm vector is chosen to be $e_i$, which is the $i^{\\mathrm{th}}$ standard basis of $\\mathbb{R}^K$. It follows that $ d = K$, $\\theta ^ * = [p(1),p(2),\\cdots,p(K)]^{\\top} \\in \\mathbb{R}^K$ and arms are not correlated with one another. A simple mathematical derivation shows that we can always use a set of standard basis vectors of $\\mathbb{R}^{d_r}$ to represent the arm vectors regardless of which arms remain active during phase $r$. Also, the G-optimal design for a set of standard basis vectors is the uniform distribution on all of the active arms. Since pulling one arm is not able to give information about the other arms, the empirical estimates based on the OLS estimator are exactly the empirical means. 
Altogether, for standard multi-armed bandits, OD-LinBAI reduces to the procedure of Sequential Halving \\citep{karnin2013almost}, which is a state-of-the-art algorithm for best arm identification in standard multi-armed bandits.\n\\end{remark}\n\n\n\n\n\\section{Main results}\n\\label{section_mainresults}\n\\subsection{Upper bound}\nWe first state an upper bound on the error probability of OD-LinBAI (Algorithm~\\ref{algo1}). The proof of Theorem~\\ref{theorem_upperbound} is deferred to Appendix~\\ref{appendix_upperbound}.\n\n\\begin{theorem}\n\\label{theorem_upperbound}\nFor any linear bandit instance $\\nu \\in \\mathcal E$, OD-LinBAI outputs an arm $i_{\\mathrm{out}}$ satisfying\n$$\n\\Pr\\left[i_{\\mathrm{out}} \\neq 1 \\right] \\le\n\\left( \\frac {4K} { d}+3\\log _{2} d\\right) \\exp \\left( - \\frac {m} {32H_{2,\\mathrm{lin}}} \\right)\n$$\nwhere $m$ is defined in Equation~(\\ref{equation_m}) and\n$$\nH_{2,\\mathrm{lin}} = \\max_{2\\leq i \\leq d} \\frac {i} {\\Delta_i^2}.\n$$\n\\end{theorem}\n\nTheorem~\\ref{theorem_upperbound} shows the error probability of OD-LinBAI is upper bounded by \\begin{equation}\n\\label{equation_upper_OD}\n \\exp\\left(-\\Omega\\left( \\frac{T}{H_{2,\\mathrm{lin}} \\log_2d }\\right) \\right)\n\\end{equation}\nwhich depends on $T$, $d$ and $H_{2,\\mathrm{lin}}$. We remark that none of the three terms is avoidable in view of our lower bounds (see Section~\\ref{subsection_lower}).\n\nIn particular, $T$ is the time budget of the problem and $d$ is the effective dimension of the arm vectors.\\footnote{Recall that we assume the entire set of original arm vectors $ \\{a(1), a(2),\\dots,a(K)\\} $ span $\\mathbb R ^d$.} Given $T$ and $d$, $H_{2,\\mathrm{lin}}$ quantifies the difficulty of identifying the best arm in the linear bandit instance. The parameter $H_{2,\\mathrm{lin}}$ generalizes its analogue\n$$\nH_{2} = \\max_{2\\leq i \\leq K} \\frac {i} {\\Delta_i^2}\n$$\nproposed by \\citet{audibert2010best} for standard multi-armed bandits. However, $H_{2,\\mathrm{lin}}$ is not larger than $H_2$ since $H_{2,\\mathrm{lin}}$ is only a function of the first $d-1$ optimality gaps while $H_2$ considers all of the $K-1$ optimality gaps. In the extreme case that all of the suboptimal arms have the same optimality gaps, i.e., $\\Delta_2=\\Delta_3=\\cdots=\\Delta_K$, the two terms $H_2$ and $H_{2,\\mathrm{lin}}$ can differ significantly. In general, we have\n\\begin{equation*}\n H_{2,\\mathrm{lin}}\\le H_{2} \\le \\frac K d H_{2,\\mathrm{lin}}\n\\end{equation*}\nand both inequalities are essentially sharp, i.e., can be achieved by some linear bandit instances. This highlights a major difference between best arm identification in the fixed-budget setting for linear bandits and standard multi-armed bandits. Due to the linear structure, arms are correlated and we can estimate the mean reward of one arm with the help of the other arms. Thus, the hardness quantity $H_{2,\\mathrm{lin}}$ is only a function of the top $d$ arms and not all the arms.\n\n\\paragraph{Comparisons with BayesGap \\citep{hoffman2014correlation}.} To the best of our knowledge, the only previous algorithm in the same setting of best arm identification in linear bandits with a fixed budget with our work is BayesGap, which is adapted from UGapEb \\citep{gabillon2012best} for standard multi-armed bandits. 
We compare BayesGap and OD-LinBAI with respect to the algorithm design as well as the theoretical guarantees in the following.\n\\begin{enumerate}[(i)]\n\\item The model of BayesGap is Bayesian linear bandits, in which the unknown parameter vector $\\theta^*$ is drawn from a known prior distribution $\\mathcal{N}(0, \\eta^{2} I)$ and the additive noise is required to be strictly Gaussian. However, OD-LinBAI does not require these assumptions and the upper bound holds for any $\\theta^* \\in \\mathbb R^d$.\n\\item The algorithm and theoretical guarantee of BayesGap explicitly require the knowledge of a hardness quantity \n\\begin{equation*}\n H_1 = \\sum _{1\\le i\\le K} \\Delta_i^{-2}\n\\end{equation*}\nto control the confidence region and then allocate exploration. However, this hardness quantity $H_1$ is almost always unknown to the agent in practice. In most practical applications, BayesGap has to estimate $H_1$ in an adaptive way, which works reasonably well in numerical experiments but lacks guarantees. On the contrary, OD-LinBAI is fully parameter-free.\n\n\\item The error probability of BayesGap is upper bounded by\n\\begin{equation}\n\\label{equation_upper_BayesGap}\n \\exp\\left(-\\Omega\\left( \\frac{T}{H_{1}}\\right) \\right)\n\\end{equation}\nwhich depends on $T$ and $H_1$.\n\nCompared with (\\ref{equation_upper_BayesGap}), the upper bound of OD-LinBAI in (\\ref{equation_upper_OD}) has an extra $\\log_2 d$ term. \nThis is an interesting phenomena which also occurs in standard multi-armed bandits \\citep{audibert2010best, carpentier2016tight}. For best arm identification in standard multi-armed bandits, without the knowledge of the hardness quantity $H_1$, the agent has to pay a price of $\\log_2 K$ for the adaptation to the problem complexity. In Theorem~\\ref{theorem_lowerbound}, we prove a similar result for linear bandits, in which the price for the adaptation is $\\log_2 d$.\n\nThe upper bound (\\ref{equation_upper_BayesGap}) involves $H_1$, which is a function of \\emph{all} the optimality gaps. It holds that $H_1 \\ge H_2 \\ge H_{2,\\mathrm{lin}}$. Thus, the upper bound of OD-LinBAI is better with regard to the problem complexity parameter. BayesGap, at least in the theoretical analysis, does not fully utilize the linear structure of the bandit problem. See Section~\\ref{subsection_lower} for a more detailed comparison on different hardness quantities.\n\n\\end{enumerate}\n\n\n\\subsection{Lower bound}\n\\label{subsection_lower}\nBefore stating the lower bound formally, we introduce a generalization of $H_1$ that characterizes the difficulty of a linear bandit instance:\n\\begin{equation*}\n H_{1,\\mathrm{lin}} = \\sum _{1\\le i\\le d} \\Delta_i^{-2}.\n\\end{equation*}\nThis parameter is also associated with the top $d$ arms similarly to $H_{2,\\mathrm{lin}}$. See Table~\\ref{table_H} for a thorough comparison on different hardness quantities. 
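\n\nAs a concrete illustration (our own worked example), consider an instance in which all suboptimal arms share a common optimality gap $\\Delta_2=\\Delta_3=\\cdots=\\Delta_K=\\Delta$. Recalling the convention $\\Delta_1=\\Delta_2$, we have\n\\begin{equation*}\n H_{1} = \\frac{K}{\\Delta^2}, \\qquad H_{2} = \\frac{K}{\\Delta^2}, \\qquad H_{1,\\mathrm{lin}} = \\frac{d}{\\Delta^2}, \\qquad H_{2,\\mathrm{lin}} = \\frac{d}{\\Delta^2},\n\\end{equation*}\nso that $H_{1}\/H_{1,\\mathrm{lin}} = H_{2}\/H_{2,\\mathrm{lin}} = K\/d$, attaining the upper limits in the last row of Table~\\ref{table_H} below.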
\n\n\\begin{table}\n \\caption{Comparisons of different hardness quantities: $H_1$, $H_2$, $H_{1,\\mathrm{lin}}$ and $H_{2,\\mathrm{lin}}$}\n \\label{table_H}\n \\centering\n \\begin{tabular}{lll}\n \\toprule\n $H_{1} = \\sum _{1\\le i\\le K} \\Delta_i^{-2}$ & $H_{2} = \\max_{2\\leq i \\leq K} {i}\\cdot {\\Delta_i^{-2}}$ & $1\\le H_{1}\/H_{2} \\le \\log (2K)$ \\citep{audibert2010best} \\\\\n \\midrule\n $H_{1,\\mathrm{lin}} = \\sum _{1\\le i\\le d} \\Delta_i^{-2}$ & $H_{2,\\mathrm{lin}} = \\max_{2\\leq i \\leq d} {i}\\cdot {\\Delta_i^{-2}}$ & $1\\le H_{1,\\mathrm{lin}}\/H_{2,\\mathrm{lin}} \\le \\log (2d) $ \\\\\n \\midrule\n $1 \\le H_{1}\/H_{1,\\mathrm{lin}} \\le K\/ d $ & $1 \\le H_{2}\/H_{2,\\mathrm{lin}} \\le K \/d $ & \\\\\n \\bottomrule\n \\end{tabular}\n\\end{table}\n\nFor any linear bandit instance $\\nu \\in \\mathcal E$, we denote the hardness quantity $H_{1,\\mathrm{lin}}$ of $\\nu$ as $H_{1,\\mathrm{lin}}(\\nu)$.\\footnote{When there is no ambiguity, $H_{1,\\mathrm{lin}}$ will also be used.} In addition, let $\\mathcal E(a)$ denote the set of linear bandit instances in $\\mathcal E$ whose $H_{1,\\mathrm{lin}}$ is bounded by $a$ ($a>0$), i.e., $\\mathcal E(a) = \\{\\nu \\in \\mathcal E: H_{1,\\mathrm{lin}}(\\nu) \\le a\\}$.\n\nHere we give a minimax lower bound for the problem of best arm identification in linear bandits in the fixed-budget setting. The proof of Theorem~\\ref{theorem_lowerbound} is deferred to Appendix~\\ref{appendix_lowerbound}, which is built on the connection between linear bandits and standard multi-armed bandits and then utilizes the minimax lower bound for standard multi-armed bandits \\citep{carpentier2016tight}.\n\n\\begin{theorem}\n\\label{theorem_lowerbound}\nIf $T \\ge a^{2} \\log (6 T d) \/900$, then \n\\begin{equation*}\n \\min_{\\pi} \\max_{\\nu \\in \\mathcal E(a)} \\Pr\\left[i_{\\mathrm{out}}^{\\pi} \\neq 1 \\right] \\ge \\frac 1 6 \\exp \\left( - \\frac {240T} a \\right).\n\\end{equation*}\nFurther if $a \\ge 15 d^2$, then \n\\begin{equation*}\n \\min_{\\pi} \\max_{\\nu \\in \\mathcal E(a)} \\left( \\Pr\\left[i_{\\mathrm{out}}^{\\pi} \\neq 1 \\right] \\cdot \\exp \\left( \\frac {2700T} {H_{1,\\mathrm{lin}}(\\nu) \\log_2d } \\right) \\right)\\ge \\frac 1 6.\n\\end{equation*}\n\\end{theorem}\nTheorem~\\ref{theorem_lowerbound} first shows that for any best arm identification algorithm $\\pi$, even with the knowledge of an upper bound $a$ on the hardness quantity $H_{1, \\mathrm{lin}}$, there exists a linear bandit instance such that the error probability is at least\n\\begin{equation}\n\\label{equation_lower_1}\n \\exp\\left(-O\\left( \\frac{T}{a}\\right) \\right).\n\\end{equation}\nFurthermore, for any best arm identification algorithm $\\pi$, without the knowledge of an upper bound $a$ on the hardness quantity $H_{1, \\mathrm{lin}}$, there exists a linear bandit instance $\\nu$ such that the error probability is at least\n\\begin{equation}\n\\label{equation_lower_2}\n \\exp\\left(-O\\left( \\frac{T}{H_{1,\\mathrm{lin}}(\\nu) \\log_2d }\\right) \\right).\n\\end{equation}\n\nComparing the lower bounds (\\ref{equation_lower_1}) and (\\ref{equation_lower_2}) in two different settings, we show that the agent has to pay a price of $\\log_2 d$ in the absence of the knowledge about the problem complexity. Finding a best arm identification algorithm that matching the lower bound (\\ref{equation_lower_1}) remains an open problem since the upper bound of BayesGap (\\ref{equation_upper_BayesGap}) involves $H_1$ but not $H_{1, \\mathrm{lin}}$. 
However, notice that the knowledge about the complexity quantity which is required for BayesGap is usually unavailable in real-life applications.\n\nNow we compare the upper bound on the error probability of OD-LinBAI in (\\ref{equation_upper_OD}) with the lower bound (\\ref{equation_lower_2}). Table~\\ref{table_H} shows that $H_{1, \\mathrm{lin}} \\ge H_{2, \\mathrm{lin}}$ always holds. Therefore, the upper bound (\\ref{equation_upper_OD}) is not larger than the lower bound (\\ref{equation_lower_2}) order-wise in the exponent. However, the upper bound holds for {\\em all} instances, while the lower bound is a minimax result which holds for {\\em specific} instances. This shows OD-LinBAI (Algorithm~\\ref{algo1}) is \\emph{minimax optimal} up to multiplicative factors in the exponent. At the same time, since an upper bound can never be smaller than a lower bound, we know that the hard instances for the problem of best arm identification in linear bandits in the fixed-budget setting are those whose $H_{1, \\mathrm{lin}}$ and $H_{2, \\mathrm{lin}}$ are of the same order.\n\n\\section{Numerical experiments}\n\\label{section_exp}\nIn this section, we evaluate the performance of our algorithm OD-LinBAI and compare it with Sequential Halving and BayesGap. We present the results of two synthetic datesets here and the results of a real-world dataset, the Abalone dataset~\\citep{Dua:2019}, in Appendix~\\ref{subsec:abalone} in the supplementary. Sequential Halving \\citep{karnin2013almost} is a state-of-the-art algorithm for best arm identification in standard multi-armed bandits. For BayesGap \\citep{hoffman2014correlation}, there are two versions: one is BayesGap-Oracle, which is given the exact information of the required hardness quantity $H_1$; the other is BayesGap-Adaptive, which adaptively estimates the hardness quantity by the three-sigma rule. Throughout in the synthetic datesets we assume that the additive random noise follows the standard Gaussian distribution $\\mathcal{N}(0, 1)$. In each setting, the reported error probabilities of different algorithms are averaged over $1024$ independent trials. Additional implementation details and numerical results are provided in Appendix~\\ref{appendix_exp}. \n\n\\subsection{Synthetic dataset 1: a hard instance}\nThis dataset, where there are numerous competitors for the second best arm, was considered for the problem of best arm identification in linear bandits in the fixed-confidence setting \\citep{zaki2019towards, fiez2019sequential}. Similarly, we consider the situation that $d = 2$ and $K\\ge 3$. For simplicity, we set the unknown parameter vector $\\theta^* = [1,0]^{\\top}$. There is one best arm and one worst arm, which correspond to the arm vectors $a(1)=[1,0]^{\\top}$ and $a(K) = [\\cos( {3\\pi\/4}),\\sin( {3\\pi\/4})]^{\\top}$ respectively. For any arm $i \\in \\{2,3,\\dots,K-1\\}$, the corresponding arm vector is chosen to be $a(i) = [\\cos( {\\pi\/4+\\phi_{i}}),\\sin( {\\pi\/4}+\\phi_{i})]^{\\top}$ with $\\phi_{i}$ drawn independently from $ \\mathcal{N}(0, 0.09)$ independently. Therefore, there are $K-2$ almost second best arms. Considering the definitions of four hardness quantities, it holds that \n\\begin{equation*}\n H_{1} \\approx H_{2} \\approx \\frac K d H_{1,\\mathrm{lin}} \\approx \\frac K d H_{2,\\mathrm{lin}}.\n\\end{equation*}\nHence this is a hard instance in the sense that the linear structure is extremely strong. 
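\n\nConcretely, this instance can be generated as in the following sketch (our own illustration, assuming NumPy and interpreting $0.09$ as the variance of $\\phi_{i}$):\n\\begin{verbatim}\nimport numpy as np\n\ndef make_hard_instance(K, seed=0):\n    # d = 2: one best arm, one worst arm, and K - 2 near-second-best arms\n    # clustered around the angle pi/4.\n    rng = np.random.default_rng(seed)\n    theta_star = np.array([1.0, 0.0])\n    arms = [np.array([1.0, 0.0])]                            # a(1), the best arm\n    for phi in rng.normal(0.0, np.sqrt(0.09), size=K - 2):   # phi_i ~ N(0, 0.09)\n        arms.append(np.array([np.cos(np.pi / 4 + phi),\n                              np.sin(np.pi / 4 + phi)]))\n    arms.append(np.array([np.cos(3 * np.pi / 4),\n                          np.sin(3 * np.pi / 4)]))           # a(K), the worst arm\n    return np.stack(arms), theta_star\n\\end{verbatim}\n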
A good algorithm needs to fully utilize the correlations of the arms induced by the linear structure to pull arms as efficiently as possible. \n\n\\begin{figure}[htbp]\n\t\\centering\n \\begin{minipage}[t]{0.325\\linewidth}\n\t\t\\centering\n\t\t\\includegraphics[width=1\\textwidth]{T=25.eps}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.325\\linewidth}\n\t\t\\centering\n\t\t\\includegraphics[width=1\\textwidth]{T=50.eps}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.325\\linewidth}\n\t\t\\centering\n\t\t\\includegraphics[width=1\\textwidth]{T=100.eps}\n\t\\end{minipage}\n\t\\caption{Error probabilities for different numbers of arms $K$ with $T=25, 50, 100$ from left to right.}\n\t\\label{fig_dataset1_1}\n\\end{figure}\n\n\\begin{figure}[htbp]\n\t\\centering\n \\begin{minipage}[t]{0.325\\linewidth}\n\t\t\\centering\n\t\t\\includegraphics[width=1\\textwidth]{K=25.eps}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.325\\linewidth}\n\t\t\\centering\n\t\t\\includegraphics[width=1\\textwidth]{K=50.eps}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.325\\linewidth}\n\t\t\\centering\n\t\t\\includegraphics[width=1\\textwidth]{K=100.eps}\n\t\\end{minipage}\n\t\\caption{Error probabilities for different time budgets $T$ with $K=25, 50, 100$ from left to right.}\n\t\\label{fig_dataset1_2}\n\\end{figure}\n\nThe experimental results with fixed $T$ and $K$ are presented in Figure~\\ref{fig_dataset1_1} and Figure~\\ref{fig_dataset1_2} respectively. In terms of this hard linear bandit instance, OD-LinBAI is clearly superior compared to its competitors. In fact, OD-LinBAI consistently pulls only one arm from the $K-2$ almost second best arms and thus suffers minimal impact from the increase in $K$.\n\n\\subsection{Synthetic dataset 2: ramdom arm vectors}\nIn this experiment, the $K$ arm vectors are uniformly sampled from the unit $d$-dimensional sphere $\\mathbb S^{d-1}$. Without loss of generality, we assume that $a(1), a(2)$ are the two closest arm vectors and then set $\\theta^* = a(1) + 0.01(a(1)-a(2))$. Thus the best arm is arm $1$ while the second best arm is arm $2$. Differently from previous works \\citep{tao2018best, zaki2019towards, fiez2019sequential,degenne2020gamification}, we set the number of arms to be $K = c^d$ with a integer constant $c$. According to Theorem $8$ in \\citet{cai2013distributions}, the minimum optimality gap $\\Delta_1$ will converge in probability to a positive number as $d$ tends to infinity so that the random linear bandit instances \nwhich we perform our experiments on are neither too hard nor too easy. \n\n\\begin{figure}[htbp]\n\t\\centering\n \\begin{minipage}[t]{0.49\\linewidth}\n\t\t\\centering\n\t\t\\includegraphics[width=1\\textwidth]{c=2.eps}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.49\\linewidth}\n\t\t\\centering\n\t\t\\includegraphics[width=1\\textwidth]{c=3.eps}\n\t\\end{minipage}\n\t\\caption{Error probabilities for different $d$ with $c=2$ on the left and $c=3$ on the right.}\n\t\\label{fig_dataset2}\n\\end{figure}\n\nFigure~\\ref{fig_dataset2} shows the error probabilities of the $4$ different algorithms for this dataset with $c=2$ or $3$ when the time budget $T=2K$. In most situations, OD-LinBAI outperforms the other algorithms. Moreover, Table~\\ref{table_dataset2_3} and Table~\\ref{table_dataset2_4} in the supplementary show that OD-LinBAI is computationally efficient compared to BayesGap, which involves matrix inverse updates to calculate the confidence widths at each time step. 
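\n\nFor completeness, the random instances of this subsection can be generated as in the following sketch (ours; it assumes NumPy and measures closeness in Euclidean distance, so it is only practical for moderate $K$):\n\\begin{verbatim}\nimport numpy as np\n\ndef make_random_instance(d, c, seed=0):\n    # K = c**d arm vectors drawn uniformly from the unit sphere S^{d-1}.\n    rng = np.random.default_rng(seed)\n    K = c ** d\n    A = rng.standard_normal((K, d))\n    A /= np.linalg.norm(A, axis=1, keepdims=True)\n    # Relabel the two closest arm vectors as arms 1 and 2.\n    dist = np.linalg.norm(A[:, None, :] - A[None, :, :], axis=-1)\n    np.fill_diagonal(dist, np.inf)\n    i, j = np.unravel_index(np.argmin(dist), dist.shape)\n    order = [i, j] + [k for k in range(K) if k not in (i, j)]\n    A = A[order]\n    theta_star = A[0] + 0.01 * (A[0] - A[1])\n    return A, theta_star\n\\end{verbatim}\n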
See Appendix~\\ref{appendix_exp_2} for additional results and discussion on this dataset.\n\n\\section{Conclusion}\n\\label{section_conclusion}\nIn this paper, we introduce the G-optimal design technique into the problem of best arm identification in linear bandits in the fixed-budget setting. We design a parameter-free algorithm OD-LinBAI. To characterize the difficulty of a linear bandit instance, we introduce two hardness quantities $H_{1,\\mathrm{lin}}$ and $H_{2,\\mathrm{lin}}$. The upper bound on the error probability of OD-LinBAI and the minimax lower bound of this problem are characterized by $H_{2,\\mathrm{lin}}$ and $H_{1,\\mathrm{lin}}$ respectively, instead of their analogues $H_2$ and $H_1$ in standard multi-armed bandits. Moreover, OD-LinBAI is minimax optimal up to multiplicative factors in the exponent, which is also supported by the considerable improvement in performance over existing algorithms in the numerical experiments. \n\nA potential direction for future work is to design an instance-dependent asymptotically optimal algorithm for this problem. However, finding such an algorithm, or even an instance-dependent lower bound, for the problem of best arm identification in standard multi-armed bandits in the fixed-budget setting remains open.\n\n\n\n\\bibliographystyle{unsrtnat}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\nDeep neural networks demonstrate undeniable success in several fields and employing them is taking off for information retrieval problems~\\citep{onal2016getting, mitra2017neural}. \nIt has been shown that supervised neural network models perform better as the training dataset grows bigger and becomes more diverse~\\citep{sun2017revisiting}. \nInformation retrieval is an experimental and empirical discipline, thus, having access to large-scale real datasets is essential for designing effective IR systems.\nHowever, in many information retrieval tasks, due to the sensitivity of the data from users and privacy issues, not all researchers have access to large-scale datasets for training their models.\n\nMuch research has been done on the general problem of preserving the privacy of sensitive data in IR applications, where the question is how should we design effective IR systems without damaging users' privacy? \nOne of the solutions so far is to anonymize the data and try to hide the identity of users~\\citep{Carpineto:2013, Zhang:2016}. As an example, \\citet{Zhang:2016} use a differential privacy approach for query log anonymization. However, there is no guarantee that the anonymized data will be as effective as the original data.\n\nUsing machine learning-based approaches, sharing the trained model instead of the original data has turned out to be an option for transferring knowledge~\\citep{Papernot:2017,Shokri:2015,Abadi:2016}. \nThe idea of \\emph{mimic learning} is to use a model that is trained based on the signals from the original training data to annotate a large set of unlabeled data and use these labels as training signals for training a new model. \nIt has been shown, for many tasks in computer vision and natural language processing, that we can transfer knowledge this way and the newly trained models perform as well as the model trained on the original training data~\\citep{Bucilua:2006,Hinton:2015,Romero:2014,Ba:2014}.\n\nHowever, trained models can expose the private information from the dataset they have been trained on~\\citep{Shokri:2015}. Hence, the problem of preserving the privacy of the data is changed into the problem preserving the privacy of the model.\nModeling privacy in machine learning is a challenging problem and there has been much research in this area. Preserving the privacy of deep learning models is even more challenging, as there are more parameters to be safeguarded~\\citep{Phan:2016}. \nSome work has studied the vulnerability of deep neural network as a service, where the interaction with the model is only via an input-output black box~\\citep{Tramer:2016, Fredrikson:2015, Shokri:2016}.\nOthers have proposed approaches to protect privacy against an adversary with a full knowledge of the training mechanism and access to the model's parameters. For instance, \\citet{Abadi:2016} propose a privacy preserving stochastic gradient descent algorithm offering a trade-off between utility and privacy. More recently, \\citet{Papernot:2017} propose a semi-supervised method for transferring the knowledge for deep learning from private training data. They propose a setup for learning privacy-preserving student models by transferring knowledge from an ensemble of teachers trained on disjoint subsets of the data for which privacy guarantees are provided. \n\nWe investigate the possibility of mimic learning for document ranking and study techniques aimed at preserving privacy in mimic learning for this task. 
Generally, we address two research questions:\n\\begin{enumerate}\n \\setlength{\\topsep}{0pt}\n \\setlength{\\partopsep}{0pt}\n \\setlength{\\itemsep}{0pt}\n \\setlength{\\parskip}{0pt}\n \\setlength{\\parsep}{0pt}\n\\item[\\textbf{RQ1}] \\textsl{Can we use mimic learning to train a neural ranker?}\n\\item[\\textbf{RQ2}] \\textsl{Are privacy preserving mimic learning methods effective for training a neural ranker?}\n\\end{enumerate}\n\nBelow, we first assess the general possibility of exploiting mimic learning for document ranking task regardless of the privacy concerns. \nThen we examine the model by~\\citet{Papernot:2017} as a privacy preserving technique for mimic learning.\n\n\n\\section{Training a Neural Ranker with Mimic Learning}\nIn this section, we address our first research question: ``Can we use mimic learning to train a neural ranker?''\n\nThe motivation for mimic learning comes from a well-known property of neural networks, namely that they are universal approximators, i.e., given enough training data, and a deep enough neural net with large enough hidden layers, they can approximate any function to an arbitrary precision~\\citep{Bucilua:2006}. \nThe general idea is to train a very deep and wide network on the original training data which leads to a big model that is able to express the structure from the data very well; such a model is called a \\emph{teacher} model. \nThen the teacher model is used to annotate a large unlabeled dataset. This annotated set is then used to train a neural network which is called a \\emph{student} network. \nFor many applications, it has been shown that the student model makes predictions similar to the teacher model with nearly the same or even better performance~\\citep{Romero:2014,Hinton:2015}. \nThis idea is mostly employed for compressing complex neural models or ensembles of neural models to a small deployable neural model~\\citep{Bucilua:2006,Ba:2014}. \n\nWe have performed a set of preliminary experiments to examine the idea of mimic learning for the task of document ranking. \nThe question is: Can we use a trained neural ranker on a set of training data to annotate unlabeled data and train a new model (another ranker) on the newly generated training data that works nearly as good as the original model?\n\nIn our experiments, as the neural ranker, we have employed \\emph{Rank model} proposed by \\citet{Dehghani:2017}. The general scheme of this model is illustrated in~\\eqref{fig:rankmodel}. \nIn this model, the goal is to learn a scoring function $\\mathcal{S}(q, d; \\theta)$ for a given pair of query $q$ and document $d$ with the set of model parameters $\\theta$. \nThis model uses a pair-wise ranking scenario during training in which there are two point-wise networks that share parameters and their parameters get updated to minimize a pair-wise loss.\nEach training instance has five elements $\\tau = (q,d_1, d_2, s_{q,d_1}, s_{q,d_2})$, where $s_{q,d_i}$ indicates the relevance score of $d_i$ with respect to $q$ from the ground-truth.\nDuring inference, the trained model is treated as a point-wise scoring function to score query-document pairs.\n\nIn this model, the input query and documents are passed through a representation learning layer, which is a function $i$ that learns the representation of the input data instances, i.e. 
$(q, d^+, d^-)$, and consists of three components: (1) an embedding function $\\varepsilon: \\mathcal{V} \\rightarrow \\mathbb{R}^{m}$ (where $\\mathcal{V}$ denotes the vocabulary and $m$ is the number of embedding dimensions), (2) a weighting function $\\omega: \\mathcal{V} \\rightarrow \\mathbb{R}$, and (3) a compositionality function $\\odot: (\\mathbb{R}^{m}, \\mathbb{R})^n \\rightarrow \\mathbb{R}^{m}$. More formally, the function $i$ is defined as:\n\\begin{equation}\n \\begin{aligned}\ni(q, d^+, d^-) = [ & \\odot_{i=1}^{|q|}(\\varepsilon(t_i^q),\\omega(t_i^q)) \\parallel\n& \\\\ &\n\\odot_{i=1}^{|d^+|} (\\varepsilon(t_i^{d^+}),\\omega(t_i^{d^+})) \\parallel\n& \\\\ &\n\\odot_{i=1}^{|d^-|} (\\varepsilon(t_i^{d^-}),\\omega(t_i^{d^-})) ~],\n \\end{aligned} \n\\end{equation}\nwhere $t_i^q$ and $t_i^d$ denote the $i$-{th} term in query $q$ and document $d$, respectively.\nThe weighting function $\\omega$ assigns a weight to each term in the vocabulary.\nIt has been shown that $\\omega$ simulates the effect of inverse document frequency (IDF), which is an important feature in information retrieval~\\cite{Dehghani:2017}.\nThe compositionality function $\\odot$ projects a set of $n$ embedding-weighting pairs to an $m$-\\:dimensional representation, independent of the value of $n$ by taking the element-wise weighted sum over the terms' embedding vectors.\nWe initialize the embedding function $\\varepsilon$ with word2vec embeddings~\\cite{Mikolov:2013} pre-trained on Google News, and the weighting function $\\omega$ with IDF.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[height=5cm]{Images\/m2_n}%\n \\caption{\\label{fig:rankmodel}\\emph{Rank Model}: Neural Ranking model proposed by~\\citet{Dehghani:2017} }\n\\end{figure}\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[height=5.5cm]{Images\/pp_model}%\n \\caption{\\label{fig:pp_model} Privacy preserving annotator\/model sharing, proposed by~\\citet{Papernot:2017}.}\n\\end{figure*}\n\nThe representation learning layer is followed by a simple feed-forward neural network that is composed of $l-1$ hidden layers with ReLU as the activation function, and output layer $z_l$. \nThe output layer $z_l$ is a fully-connected layer with a single continuous output with $\\tanh$ as the activation function. 
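\n\nTo make the scoring pipeline concrete, the following simplified sketch (our own plain-NumPy illustration, not the TensorFlow implementation used in the paper; \\texttt{emb} and \\texttt{idf} stand for the pre-trained embedding and IDF lookup tables) composes the weighted-sum representations and feeds them through the point-wise scorer:\n\\begin{verbatim}\nimport numpy as np\n\ndef compose(tokens, emb, idf):\n    # Weighted-sum compositionality: sum_i omega(t_i) * epsilon(t_i).\n    return sum(idf[t] * emb[t] for t in tokens)\n\ndef score(query, doc, emb, idf, layers):\n    # Point-wise scoring function S(q, d; theta): representation layer,\n    # ReLU hidden layers, and a single tanh output.\n    x = np.concatenate([compose(query, emb, idf), compose(doc, emb, idf)])\n    for W, b in layers[:-1]:\n        x = np.maximum(0.0, W @ x + b)\n    w, b = layers[-1]                 # w: weight vector, b: scalar bias\n    return np.tanh(w @ x + b)\n\\end{verbatim}\nDuring training, two such scores computed with shared parameters enter the pair-wise hinge loss defined below; at inference time the function is used directly to score query-document pairs.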
\nThe model is optimized using the hinge loss (max-margin loss function) on batches of training instances and it is defined as follows:\n\\begin{equation}\n\\begin{aligned}\n\\mathcal{L}(b; \\theta) = \\frac{1}{|b|}\n\\sum_{i=1}^{|b|}\n\\max\\big\\{\n& \n0, 1 - \\text{sign}\n(s_{\\{q, d_1\\}_i} - s_{\\{q, d_2\\}_i})\n& \\\\ & \n\\left(\\mathcal{S}\\left(\\{q, d_1\\}_i; \\theta\\right) -\\mathcal{S}\\left(\\{q, d_2\\}_i; \\theta\\right)\\right)\n\\big\\}\n, \n\\end{aligned} \n\\end{equation}\nThis model is implemented using TensorFlow~\\citep{tang2016:tflearn,tensorflow2015-whitepaper}.\nThe configuration of teacher and student networks is presented in Table~\\ref{tbl:cfg}.\n\n\\begin{table}[h]\n\\centering\n\\caption{Teacher and student neural networks configurations.}\n\\vspace{5pt}\n\\begin{tabularx}{\\linewidth}{Xcc} \n\\toprule\n\\bf Parameter & \\bf Teacher & \\bf Student \\\\\n\\midrule\n Number of hidden layers & 3 & 3 \\\\\n Size of hidden layers & 512 & 128 \\\\\n Initial learning rate & 1E-3 & 1E-3 \\\\\n Dropout & 0.2 & 0.1 \\\\\n Embedding size & 500 & 300 \\\\\n Batch size & 512 & 512 \\\\\n\\bottomrule\n\\end{tabularx}\n\\label{tbl:cfg}\n\\end{table}\n\n\n\\begin{table}[!h]\n\\centering\n\\caption{\\label{tbl_res1}Performance of teacher and student models with different training strategies.}\n\\vspace{5pt}\n\\begin{adjustbox}{max width=\\textwidth}\n\\begin{tabular}{l l c c c}\n\\toprule\n\\bf Training strategy & \\bf model & \\textbf{MAP} & \\textbf{P@20} & \\textbf{nDCG@20} \n\\\\ \\midrule\n\\multirow{2}{*}{{Full supervision}} & {Teacher} \n& 0.1814 & 0.2888 & 0.3419 \n\\\\\n& {Student} \n& 0.2256 & 0.3111 & 0.3891 \n\\\\ \\midrule\n\\multirow{2}{*}{{Weak supervision}} & {Teacher} \n& 0.2716 & 0.3664 & 0.4109 \n\\\\ \n& {Student} \n& 0.2701 & 0.3562 & 0.4145 \n\\\\ \\bottomrule\n\\end{tabular}\n\\end{adjustbox}\n\\end{table}\n\nAs our test collection, we use Robust04 with a set of 250 queries (TREC topics 301--450 and 601--700) with judgments, which has been used in the TREC Robust Track 2004.\nWe follow the knowledge distillation approach~\\cite{Hinton:2015} for training the student network. We have two sets of experiments, in the first one, we train the teacher model with full supervision, i.e., on the set of queries with judgments, using 5-fold cross validation. \nIn the second set of experiments, the set of queries with judgments is only used for evaluation and we train the teacher model using the weak supervision setup proposed in~\\citep{Dehghani:2017}. We use $3$ million queries from the AOL query log as the unlabeled training query set for the teacher model. \nIn all experiments, we use a separate set of $3$ million queries from the AOL query log as unlabeled data that is annotated by the trained teacher model (either using full or weak supervision) for training the student model.\n\nResults obtained from these experiments are summarized in Table~\\ref{tbl_res1}. \nThe results generally suggest that we can train a neural ranker using mimic learning.\nUsing weak supervision to train the teacher model, the student model performs as good as the teacher model.\nIn case of training the teacher with full supervision, as the original training data is small, the performance of the teacher model is rather low which is mostly due to the fact that the big teacher model overfits on the train data and is not able to generalize well. 
\nHowever, due to the regularization effect of mimic learning, the student model, which is trained on the predictions by the teacher model significantly outperforms the teacher model \\citep{Hinton:2015,Romero:2014}.\n\n\\section{Training a Neural Ranker with Privacy Preserving Mimic Learning}\nIn the previous section, we examined using the idea of mimic learning to train a neural ranker regardless of the privacy risks. In this section, we address our second research question: ``Are privacy preserving mimic learning methods effective for training a neural ranker?''\n\nIt has been shown that there is a risk of privacy problems, both where the adversary is just able to query the model, and where the model parameters are exposed to the adversaries inspection.\nFor instance, \\citet{Fredrikson:2015} show that only by observing the prediction of the machine learning models they can approximately reconstruct part of the training data (model-inversion attack). \\citet{Shokri:2016} also demonstrate that it is possible to infer whether a specific training point is included in the model's training data by observing only the predictions of the model (membership inference attack).\n\nWe apply the idea of knowledge transfer for deep neural networks from private training data, proposed by \\citet{Papernot:2017}. The authors propose a private aggregation of teacher ensembles based on the teacher-student paradigm to preserve the privacy of training data.\nFirst, the sensitive training data is divided into $n$ partitions. Then, on each partition, an independent neural network model is trained as a teacher. \nOnce the teachers are trained, an aggregation step is done using majority voting to generate a single global prediction. \nLaplacian noise is injected into the output of the prediction of each teacher before aggregation. The introduction of this noise is what protects privacy because it obfuscates the vulnerable cases, where teachers disagree. \n\nThe aggregated teacher can be considered as a deferentially private API to which we can submit the input and it then returns the privacy preserving label. There are some circumstances where due to efficiency reasons the model is needed to be deployed to the user device~\\cite{Abadi:2016}. To be able to generate a shareable model where the privacy of the training data is preserved, \\citet{Papernot:2017} train an additional model called the student model. The student model has access to unlabeled public data during training. The unlabeled public data is annotated using the aggregated teacher to transfer knowledge from teachers to student model in a privacy preserving fashion. \nThis way, if the adversary tries to recover the training data by inspecting the parameters of the student model, in the worst case, the public training instances with privacy preserving labels from the aggregated teacher are going to be revealed. The privacy guarantee of this approach is formally proved using differential privacy framework.\n\nWe apply the same idea to our task. We use a weak supervision setup, as partitioning the fully supervised training data in our problem leads to very small training sets which are not big enough for training good teachers. \nIn our experiments, we split the training data into three partitions, each contains one million queries annotated by the BM25 method. We train three identical teacher models. Then, we use the aggregated noisy predictions from these teachers to train the student network using the knowledge distillation approach. 
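\n\nA minimal sketch of the noisy aggregation step is given below (our own illustration, assuming NumPy; each teacher emits a discrete relevance label for a query-document pair, and the scale of the Laplacian noise is tied to the privacy parameter $\\gamma$ as in \\citet{Papernot:2017}):\n\\begin{verbatim}\nimport numpy as np\n\ndef noisy_aggregate(teacher_labels, n_labels, gamma, rng=np.random.default_rng(0)):\n    # PATE-style noisy max: count the teachers' votes, perturb each count\n    # with Laplacian noise of scale 1/gamma, and return the winning label.\n    counts = np.bincount(teacher_labels, minlength=n_labels).astype(float)\n    counts += rng.laplace(loc=0.0, scale=1.0 / gamma, size=n_labels)\n    return int(np.argmax(counts))\n\n# Example: three teachers vote on a binary relevance label with gamma = 0.05.\nlabel = noisy_aggregate([1, 1, 0], n_labels=2, gamma=0.05)\n\\end{verbatim}\n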
Configurations of teacher and student networks are similar to the previous experiments, as they are presented in Table~\\ref{tbl:cfg}.\n\nWe evaluate the performance in two situations: In the first one, the privacy parameter, which determines the amount of noise, is set to zero, and in the second one, the noise parameter is set to $0.05$, which guarantees a low privacy risk~\\citep{Papernot:2017}.\nWe report the average performance of the teachers before noise, the performance of noisy and non-noisy aggregated teacher, and the performance of the student networks in two situations. The results of these experiments are reported in Table~\\ref{tbl_res2}.\n\n\\begin{table}[h]\n\\centering\n\\caption{\\label{tbl_res2}Performance of teachers (average) and student models with noisy and non-noisy aggregation.}\n\\vspace{5pt}\n\\begin{adjustbox}{max width=\\textwidth}\n\\begin{tabular}{l c c c}\n\\toprule\n \\bf Model & \\textbf{MAP} & \\textbf{P@20} & \\textbf{nDCG@20} \n\\\\ \\midrule\n{Teachers (avg)} \n& 0.2566 & 0.3300 & 0.3836\n\\\\ \\midrule\n{Non-noisy aggregated teacher} \n& 0.2380 & 0.3055 & 0.3702 \n\\\\\n{Student \\small{(non-noisy aggregation)}} \n& 0.2337 & 0.3192 & 0.3717\n\\\\ \\midrule \n{Noisy aggregated teacher} \n& 0.2110 & 0.2868 & 0.3407 \n\\\\\n{Student \\small{(noisy aggregation)}} \n& 0.2255 & 0.2984 & 0.3559 \n\\\\ \\bottomrule\n\\end{tabular}\n\\end{adjustbox}\n\\end{table}\n\nResults in the table suggest that using the noisy aggregation of multiple teachers as the supervision signal, we can train a neural ranker with an acceptable performance.\nCompared to the single teacher setup in the previous section, the performance of the student network is not as good as the average performance of teachers. Although the student network performs better than the teacher in the noisy aggregation setup. This is more or less the case for a student together with a non-noisy aggregated teacher.\nWe believe drops in the performance on the student networks compared to the results in the previous section are not just due to partitioning, noise, and aggregation. This is also the effect of the change in the amount of training data for the teachers in our experiments. So, in the case of having enough training data in each partition for each teacher, their prediction will be more determined and we will have less disagreement in the aggregation phase and consequently, we will get better signals for training the student model.\n\n\n\\section{Conclusion}\nWith the recent success of deep learning in many fields, IR is also moving from traditional statistical approaches to neural network based approaches.\nSupervised neural networks are data hungry and training an effective model requires a huge amount of labeled samples. \nHowever, for many IR tasks, there are not big enough datasets.\nFor many tasks such as the ad-hoc retrieval task, companies and commercial search engines have access to large amounts of data. However, sharing these datasets with the research community raises concerns such as violating the privacy of users.\nIn this paper, we acknowledge this problem and propose an approach to overcome it. Our suggestion is based on the recent success on mimic learning in computer vision and NLP tasks. Our first research question was: Can we use mimic learning to train a neural ranker?\n\nTo answer this question, we used the idea of mimic learning. Instead of sharing the original training data, we propose to train a model on the data and share the model. 
The trained model can then be used in a knowledge transfer fashion to label a huge amount of unlabeled data and create big datasets. We showed that a student ranker model trained on a dataset labeled based on predictions of a teacher model, can perform almost as well as the teacher model. \nThis shows the potential of mimic learning for the ranking task which can overcome the problem of lack of large datasets for ad-hoc IR task and open-up the future research in this direction.\n\nAs shown in the literature, even sharing the trained model on sensitive training data instead of the original data cannot guarantee the privacy. Our second research question was: Are privacy preserving mimic learning methods effective for training a neural ranker?\n\nTo guarantee the privacy of users, we proposed to use the idea of privacy preserving mimic learning. We showed that using this approach, not only the privacy of users is guaranteed, but also we can achieve an acceptable performance.\nIn this paper, we aim to lay the groundwork for the idea of sharing a privacy preserving model instead of sensitive data in IR applications. This suggests researchers from industry share the knowledge learned from actual users' data with the academic community that leads to a better collaboration of all researchers in the field. \n\nAs a future direction of this research, we aim to establish formal statements regarding the level of privacy that this would entail using privacy preserving mimic learning and strengthen this angel in the experimental evaluation. Besides, we can investigate that which kind of neural network structure is more suitable for mimic learning for the ranking task.\n\n\\begin{acks}\nThis research was supported in part by Netherlands Organization for Scientific Research through the \\textsl{Exploratory Political Search} project (ExPoSe, NWO CI \\# 314.99.108), by the Digging into Data Challenge through the \\textsl{Digging Into Linked Parliamentary Data} project (DiLiPaD, NWO Digging into Data \\# 600.006.014).\n\nThis research was also supported by\nAhold Delhaize,\nAmsterdam Data Science,\nthe Bloomberg Research Grant program,\nthe Criteo Faculty Research Award program,\nthe Dutch national program COMMIT,\nElsevier,\nthe European Community's Seventh Framework Programme (FP7\/2007-2013) under\ngrant agreement nr 312827 (VOX-Pol),\nthe Microsoft Research Ph.D.\\ program,\nthe Netherlands Institute for Sound and Vision,\nthe Netherlands Organisation for Scientific Research (NWO)\nunder pro\\-ject nrs\n612.001.116,\nHOR-11-10,\nCI-14-25,\n652.\\-002.\\-001,\n612.\\-001.\\-551,\n652.\\-001.\\-003,\nand\nYandex.\nAll content represents the opinion of the authors, which is not necessarily shared or endorsed by their respective employers and\/or sponsors.\n\n\\end{acks}\n\n\n\n\\bibliographystyle{ACM-Reference-Format}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\n\nIn this note we state a bit detailed account about \nMacPherson's Chern class transformation $C_*$ for quotient stacks defined in \\cite{O1}, \nalthough all the instructions have already been made in that paper. \nOur approach is also applicable for \nother additive characteristic classes, e.g., \n Baum-Fulton-MacPherson's Todd class transformation \\cite{BFM} \n (see \\cite{EG2, BZ} for the equivariant version) and more generally \n Brasselet-Sch\\\"urmann-Yokura's Hirzebruch class transformation \\cite{BSY} (see section 4 below). \nThroughout we work over the complex number field $\\mathbf{C}$ \nor a base field $k$ of characteristic $0$. \n\nWe begin with recalling $C_*$ for schemes and algebraic spaces. \nThese are spaces having trivial stabilizer groups. \nIn following sections we will deal with quotient stacks having affine stabilizers, \nin particular, `(quasi-)projective' Deligne-Mumford stacks \nin the sense of Kresch \\cite{Kresch}. \n\n\n\\subsection{Schemes}\nFor the category of quasi-projective schemes $U$ and proper morphisms, \nthere is a unique natural transformation from the constructible function functor to \nthe Chow group functor, $C_*: F(U) \\to A_*(U)$,\n so that it satisfies the normalization property: \n $$C_*(\\jeden_U)=c(TU) \\frown [U] \\in A_*(U) \\quad \\mbox{ if $U$ is smooth. }$$\nThis is called the {\\it Chern-MacPherson transformation}, \nsee MacPherson \\cite{Mac} in complex case ($k=\\mathbf{C}$) and Kennedy \\cite{Ken} \nin more general context of $ch(k)=0$. \nHere the naturality means the commutativity $f_*C_*=C_*f_*$ of $C_*$ \nwith pushforward of proper morphisms $f$. \nIn particular, for proper $pt: U \\to pt (={\\rm Spec}(k))$, \nthe ($0$-th) degree of $C_*(\\jeden_U)$ is equal to \nthe Euler characteristic of $U$: $pt_*C_*(\\jeden_U)=\\chi(U)$ \n(as for the definition of $\\chi(U)$ in algebraic context, see \\cite{Ken, Joyce}). \n\nAs a historical comment, \nSchwartz \\cite{Schwartz} firstly studied \na generalization of the Poincar\\'e-Hopf theorem for \ncomplex analytic singular varieties by introducing \na topological obstruction class for certain stratified vector frames, \nwhich in turn coincides with MacPherson's Chern class \\cite{BS}. \nTherefore, $C_*(U):=C_*(\\jeden_U)$ is usually \ncalled the {\\it Chern-Schwartz-MacPherson class} (CSM class) of a possibly singular variety $U$. \n\n\nTo grasp quickly what the CSM class is, there is a convenient way \ndue to Aluffi \\cite{Aluffi1, Aluffi2}. \nLet $U$ be a singular variety and \n$\\iota: U_0 \\hookrightarrow U$ a smooth open dense reduced subscheme. \nBy means of resolution of singularities, \nwe have a birational morphism $p: W \\to U$ so that $W=\\overline{U_0}$ is smooth \nand $D=W-U_0$ is a divisor with smooth irreducible components $D_1, \\cdots , D_r$ \nhaving normal crossings. \nThen by induction on $r$ and properties of $C_*$ it is shown that \n$$C_*(\\jeden_{U_0}) =p_*\\left(\\frac{c(TW)}{\\prod (1+D_i)} \\frown [W]\\right) \\in A_*(U).$$\n(Here $c(TW)\/\\prod (1+D_i)$ is equal to the total Chern class of dual to \n$\\Omega_W^1(\\log D)$ of differential forms with logarithmic poles along $D$). \nBy taking a stratification $U=\\coprod_{j} U_j$, we have \n$C_*(U) = \\sum_j C_*(\\jeden_{U_j})$. \nConversely, \nwe may regard this formula as an alternative definition of CSM class, \nsee \\cite{Aluffi1}. 
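\n\nAs a simple worked example of this formula (included here only for illustration), let $U \\subset \\mathbf{P}^2$ be a nodal cubic curve, $U_0$ its smooth locus, and $p: W=\\mathbf{P}^1 \\to U$ the normalization, so that $D=D_1+D_2$ consists of the two preimages of the node. In $A_*(\\mathbf{P}^1)$ we have $c(T\\mathbf{P}^1)=1+2[pt]$ and $(1+D_1)(1+D_2)=1+2[pt]$, hence \n$$C_*(\\jeden_{U_0}) =p_*\\left(\\frac{c(T\\mathbf{P}^1)}{(1+D_1)(1+D_2)} \\frown [\\mathbf{P}^1]\\right) = p_*[\\mathbf{P}^1]=[U],$$\nand therefore $C_*(U)=C_*(\\jeden_{U_0})+C_*(\\jeden_{U-U_0})=[U]+[pt]$, since $U-U_0$ is the node. The $0$-th degrees are $\\chi(U_0)=0$ and $\\chi(U)=1$, as expected.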
\n\n\n\n\n\\subsection{Algebraic spaces} \nWe extend $C_*$ to the category of arbitrary schemes or algebraic spaces \n(separated and of finite type). \nTo do this, we may generalize Aluffi's approach, or \nwe may trace the same inductive proof by means of Chow envelopes \n(cf. \\cite{Kimura}) \nof the singular Riemann-Roch theorem for arbitrary schemes \\cite{FG}. \n\nHere is a short remark. \nAn {\\it algebraic space} $X$ is \na stack over $Sch\/k$, under \\'etale topology, whose stabilizer groups are trivial: \nPrecisely, there exists a scheme $U$ (called an {\\it atlas}) \nand a morphism of stacks $u: U \\to X$ \nsuch that for any scheme $W$ and any morphism $W \\to X$ \nthe (sheaf) fiber product $U\\times_X W$ exists as a scheme, \nand the map $U\\times_X W \\to W$ is an \\'etale surjective morphism of schemes. \nIn addition, $\\delta: R:=U\\times_X U \\to U \\times_k U$ is quasi-compact, \ncalled the {\\it \\'etale equivalent relation}. \nDenote by $g_i: R \\to U$ (i=1,2) the projection to each factor of $\\delta$. \nThe Chow group $A_*(X)$ \n is defined using an \\'etale atlas $U$\n (Section 6 in \\cite{EG}). \nIn particular, letting $g_{12*}:=g_{1*}-g_{2*}$, \n$$\\xymatrix{\nA_*(R) \\ar[r]^{g_{12*}} & A_*(U) \\ar[r]^{u_*} & A_*(X) \\ar[r] & 0 \n}$$\nis exact (Kimura \\cite{Kimura}, Theorem 1.8). \nThen the CSM class of $X$ is given by \n$C_*(X) = u_*C_*(U)$: \nIn fact, if $U' \\to X$ is another atlas for $X$ with the relation $R'$, \nwe take the third $U''=U\\times_X U'$ with $R''=R\\times_X R'$, \nwhere $p: U'' \\to U$ and $q: U''\\to U'$ are \\'etale and finite. \nChow groups of atlases modulo ${\\rm Im}\\, (g_{12*})$ are mutually identified \nthrough the pullback $p^*$ and $q^*$, and particularly, \n$p^*C_*(U) = C_*(U'')=q^*C_*(U')$, that is checked by using \n resolution of singularities or \nthe Verdier-Riemann-Roch \\cite{Yokura} for $p$ and $q$. \nFinally we put $C_*: F(X) \\to A_*(X)$ by sending $\\jeden_W \\mapsto \\iota_*C_*(W)$ \nfor integral algebraic subspaces $W \\stackrel{\\iota}{\\hookrightarrow} X$ and extending it linearly, and \nthe naturality for proper morphisms \nis proved again using atlases. \nThis is somewhat a prototype of $C_*$ for quotient stacks described below. \n\n\n\n\n\n\\section{Chern class for quotient stacks} \n\n\\subsection{Quotient stacks} \nLet $G$ be a linear algebraic group acting on a scheme or algebraic space $X$. \nIf the $G$-action is set-theoretically free, \ni.e., stabilizer groups are trivial, \nthen the quotient $X \\to X\/G$ always exists as a morphism of algebraic spaces \n (Proposition 22, \\cite{EG}). \nOtherwise, in general we need the notion of quotient stack. \n\nThe {\\it quotient stack} $\\mathcal{X}=[X\/G]$ is \na (possibly non-separated) Artin stack over $Sch\/k$, under fppf topology \n(see, e.g., Vistoli \\cite{Vistoli}, G\\'omez \\cite{Gomez} for the detail): \nAn object of $\\mathcal{X}$ is a family of $G$-orbits in $X$ \nparametrized by a scheme or algebraic space $B$, \nthat is, a diagram $B \\stackrel{q}{\\leftarrow} P \\stackrel{p}{\\rightarrow} X$ \nwhere $P$ is an algebraic space, \n$q$ is a $G$-principal bundle and $p$ is a $G$-equivariant morphism. \nA morphism of $\\mathcal{X}$ is a $G$-bundle morphism $\\phi: P \\to P'$ so that $p'\\circ \\phi = p$, \nwhere $B' \\stackrel{q'}{\\leftarrow} P' \\stackrel{p'}{\\rightarrow} X$ is another object. 
\nNote that there are possibly many non-trivial automorphisms $P \\to P$ over \nthe identity morphism $id: B \\to B$, \nwhich form the stabilizer group associated to the object \n(e.g., the stabilizer group of a `point' ($B=pt$) is non-trivial in general). \nA morphism of stacks $B \\to \\mathcal{X}$ naturally corresponds to an object $B \\leftarrow P \\to X$, \nthat follows from Yoneda lemma: \nIn particular there is a morphism (called {\\it atlas}) $u: X \\to \\mathcal{X}$ \ncorresponding to the diagram $X \\stackrel{q}{\\leftarrow} G\\times X \\stackrel{p}{\\rightarrow} X$, \nbeing $q$ the projection to the second factor and $p$ the group action. \nThe atlas $u$ recovers any object of $\\mathcal{X}$ \nby taking fiber products: $B \\leftarrow P=B \\times_\\mathcal{X} X \\rightarrow X$. \n\nLet $f: \\mathcal{X} \\to \\mathcal{Y}$ be a {\\it proper} and \n{\\it representable} morphism of quotient stacks, i.e., \nfor any scheme or algebraic space $W$ and morphism $W \\to \\mathcal{Y}$, the base change \n$\\mathcal{X} \\times_\\mathcal{Y} W \\to W$ is a proper morphism of algebraic spaces. \nTake presentations $\\mathcal{X}=[X\/G]$, $\\mathcal{Y}=[Y\/H]$, \nand the atlases $u: X \\to \\mathcal{X}$, $u': Y \\to \\mathcal{Y}$. \nThere are two aspects of $f$: \\\\\n\n\\noindent \n(Equivariant morphism): \nPut $B:=\\mathcal{X}\\times_\\mathcal{Y} Y$, which naturally has a $H$-action \nso that $[B\/H]=[X\/G]$, $v: B \\to \\mathcal{X}$ is a new atlas, \nand $\\bar{f}:B \\to Y$ is $H$-equivariant: \n\\begin{equation}\\label{d1}\n\\xymatrix{\nB \\ar[r]^{\\bar{f}} \\ar[d]_{v} & Y \\ar[d]^{u'} \\\\\n\\mathcal{X} \\ar[r]_f & \\mathcal{Y}\n}\n\\end{equation}\n(Change of presentations): \nLet $P:=X \\times_\\mathcal{X} B$, then the following diagram is considered as \na family of $G$-orbits in $X$ and simultaneously as a family of $H$-orbits in $B$, i.e., \n$p: P \\to X$ is a $H$-principal bundle and $G$-equivariant, \n$q: P \\to B$ is a $G$-principal bundle and $H$-equivariant: \n\\begin{equation}\\label{d2}\n\\xymatrix{\nP \\ar[r]^{q} \\ar[d]_{p} & B \\ar[d]^{v} \\\\\nX \\ar[r]_{u} & \\; \\mathcal{X}.\n}\n\\end{equation}\n\n\nA simple example of such $f$ is given by \nproper $\\varphi: X \\to Y$ with an injective homomorphism \n$G \\to H$ so that $\\varphi(g.x)=g.\\varphi(x)$ and $H\/G$ is proper. In this case, \n$P=H \\times_k X$ and $B=H\\times_G X$ with \n$p: P \\to X$ the projection to the second factor, $q: P \\to B$ the quotient morphism. \n\n\n\\subsection{Chow group and pushforward} \nFor schemes or algebraic spaces $X$ (separated, of finite type) with $G$-action, \nthe {\\it $G$-equivariant Chow gourp} $A_*^G(X)$ has been introduced \nin Edidin-Graham \\cite{EG}, \nand the {\\it $G$-equivariant constructible function} $F^G(X)$ in \\cite{O1}. \nThey are based on Totaro's algebraic Borel construction: \nTake a Zariski open subset $U$ in \nan $\\ell$-dimensional linear representation $V$ of $G$ \nso that $G$ acts on $U$ freely. \nThe quotient exists as an algebraic space, denoted by $U_G=U\/G$. \nAlso $G$ acts $X \\times U$ freely, \nhence the mixed quotient $X \\times G \\to X_G:=X\\times_G U$ \nexists as an algebraic space. \nNote that $X_G \\to U_G$ is a fiber bundle with fiber $X$ and group $G$. \nDefine \n $A_n^G(X):=A_{n+\\ell-g}(X_G)$ ($g=\\dim G$) and $F^G(X):=F(X_G)$ \nfor $\\ell \\gg 0$. \nPrecisely saying, we take the direct limit over all linear representations of $G$, \nsee \\cite{EG, O1} for the detail. 
\n\n\n$A_n^G(X)$ is trivial for $n > \\dim X$ but \nit may be non-trivial for negative $n$. Also note that \nthe group $F^G_{inv}(X)$ of {\\it $G$-invariant} functions over $X$ is a subgroup of $F^G(X)$. \n\n\nLet us recall the proof that these groups are actually invariants of quotient stacks $\\mathcal{X}$. \nLook at the diagram (\\ref{d2}) above. \nLet $g=\\dim G$ and $h=\\dim H$. Note that $G\\times H$ acts on $P$. \nTake open subsets $U_1$ and $U_2$ of \nrepresentations of $G$ and $H$, respectively ($\\ell_i=\\dim U_i\\; i=1,2$) \nso that $G$ and $H$ act on $U_1$ and $U_2$ freely respectively. \nPut $U=U_1 \\oplus U_2$, on which $G \\times H$ acts freely. \nWe denote the mixed quotients for spaces arising in the diagram (\\ref{d2}) by \n$P_{G\\times H}:=P\\times_{G\\times H}U$, $X_G:=X\\times_G U_1$ and $B_H:=B\\times_H U_2$. \nThen the projection $p$ induces the fiber bundle \n$\\bar{p}: P_{G\\times H} \\to X_G$ with fiber $U_2$ and group $H$, \nand $q$ induces $\\bar{q}: P_{G\\times H} \\to B_{H}$ with fiber $U_1$ and group $G$. \nThus, the pullback $\\bar{p}^*$ and $\\bar{q}^*$ for Chow groups are isomorphic, \n$A_{n+\\ell_1}(X_G) \\simeq A_{n+\\ell_1+\\ell_2}(P_{G\\times H}) \\simeq A_{n+\\ell_2}(B_H)$. \nTaking the limit, we have the {\\it canonical identification} \n$$\\xymatrix{\nA^G_{n+g}(X) \\ar[r]^{p^*\\;\\; }_{\\simeq\\;\\;} & A^{G\\times H}_{n+g+h}(P) & \\ar[l]^{\\;\\;\\simeq}_{\\;\\;q^*} A^H_{n+h}(B)\n}$$\n(Proposition 16 in \\cite{EG}). \nNote that $(q^*)^{-1}\\circ p^*$ shifts the dimension by $h-g$. \nAlso for constructible functions, put the pullback $p^*\\alpha:=\\alpha \\circ p$, \nthen we have $F^G(X) \\simeq F^{G\\times H}(P) \\simeq F^H(B)$ \nvia pullback $p^*$ and $q^*$ (Lemma 3.3 in \\cite{O1}). \nWe thus define $A_*(\\mathcal{X}):=A_{*+g}^G(X)$ and $F(\\mathcal{X}):=F^G(X)$, \nalso $F_{inv}(\\mathcal{X}):=F^G_{inv}(X)$, through the canonical identification. \n\nGiven proper representable morphisms of quotient stacks $f: \\mathcal{X} \\to \\mathcal{Y}$ \nand any presentations $\\mathcal{X}=[X\/G]$, $\\mathcal{Y}=[Y\/H]$, we define \nthe pushforward $f_*: A_*(\\mathcal{X}) \\to A_*(\\mathcal{Y})$ by \n$$f_*^H\\circ (q^{*})^{-1}\\circ p^*: A_{n+g}^G(X) \\to A_{n+h}^H(Y)$$\nand also \n$f_*: F(\\mathcal{X}) \\to F(\\mathcal{Y})$ in the same way. \nBy the identification $(q^{*})^{-1}\\circ p^*$, \neverything is reduced to the equivariant setting (the diagram (\\ref{d1})). \n\n\\begin{lemma} The above $F$ and $A_*$ satisfy the following properties: \\\\\n{\\rm (i)} For proper representable morphisms of quotient stacks $f$, \nthe pushforward $f_*$ is well-defined; \\\\\n{\\rm (ii)} Let $f_1: \\mathcal{X}_1 \\to \\mathcal{X}_2$, $f_2: \\mathcal{X}_2 \\to \\mathcal{X}_3$ and \n$f_3:\\mathcal{X}_1 \\to \\mathcal{X}_3$ be proper representable morphisms of stacks \nso that $f_2\\circ f_1$ is isomorphic to $f_3$, \nthen $f_{2*}\\circ f_{1*}$ is isomorphic to $f_{3*}$ \n{\\rm (}$f_{3*}=f_{2*}\\circ f_{1*}$ \nusing a notational convention in Remark 5.3, \\cite{Gomez}{\\rm )}. \n\n\\end{lemma}\n\n\\noindent {\\sl Proof} :\\; \nLook at the diagram below, where $\\mathcal{X}_i=[X_i\/G_i]$ ($i=1,2,3$). \nWe may regard $\\mathcal{X}_1=[X_1\/G_1]=[B_1\/G_2]=[B_3\/G_3]$, and so on. \n{\\rm (i)} Put $f=f_1$, then the well-definedness of the pushforward $f_{1*}$ (in both of $F$ and $A_*$) \nis easily checked by taking fiber products and by the canonical identification. 
\n{\\rm (ii)} Assume that there exists \nan isomorphism of functors $\\alpha: f_2\\circ f_1 \\to f_3$ \n(i.e., a $2$-isomorphism of $1$-morphisms). \nThen two $G_3$-equivariant morphisms \n $\\bar{f}_2\\circ \\bar{f}_1$ \nand $\\bar{f}_3$ from $B_3$ to $X_3$ \ncoincide up to isomorphisms of $B_3$ and of $X_3$ \nwhich are encoded in the definition of $\\alpha$, \nhence their $G_3$-pushforwards coincide up to the chosen isomorphisms. \n\\hfill $\\Box$\n\n\\\n\n\n\\begin{center}\n{\\small \n\\begin{xy}\n(0,0)*+{ }; \n(25,0) *+{P_1}=\"P1\"; (45,0) *+{X_1}=\"X1\";\n(35,13) *+{B_1}=\"B1\"; (55,13) *+{\\mathcal{X}_1}=\"XX1\"; \n(25,20) *+{P'}=\"Pd\"; (45,20) *+{P_3}=\"P3\"; \n(45,26) *+{X_2}=\"X2\"; (65,26) *+{\\mathcal{X}_2}=\"XX2\"; (85,26) *+{\\mathcal{X}_3}=\"XX3\";\n(35,33) *+{B'}=\"Bd\"; (55,33) *+{B_3}=\"B3\"; \n(45,46) *+{P_2}=\"P2\"; (65,46) *+{B_2}=\"B2\"; (85,46) *+{X_3}=\"X3\";\n(64,20) *+{{}_{f_1}};\n{\\ar@{->} \"P1\";\"X1\"}; {\\ar@{->} \"P1\";\"B1\"};\n{\\ar@{->} \"X1\";\"XX1\"};\n{\\ar@{->} \"Pd\";\"P1\"};{\\ar@{->} \"Pd\";\"P3\"};{\\ar@{->} \"Pd\";\"Bd\"};\n{\\ar@{->} \"P3\";\"X1\"};{\\ar@{->} \"P3\";\"B3\"};\n{\\ar@{->} \"Pd\";\"P3\"};{\\ar@{->} \"Pd\";\"Bd\"};\n{\\ar@{->} \"B1\";\"XX1\"};{\\ar@{->} \"B1\";\"X2\"};\n{\\ar@{->} \"Bd\";\"B3\"};{\\ar@{->} \"Bd\";\"P2\"};{\\ar@{->} \"Bd\";\"B1\"};\n{\\ar@{->} \"XX1\";\"XX2\"};{\\ar@{->}_{f_3} \"XX1\";\"XX3\"};\n{\\ar@{->} \"X2\";\"XX2\"};\n{\\ar@{->}^{f_2} \"XX2\";\"XX3\"};\n{\\ar@{->} \"B3\";\"X3\"};{\\ar@{->} \"B3\";\"B2\"};{\\ar@{->} \"B3\";\"XX1\"};\n{\\ar@{->} \"P2\";\"X2\"};{\\ar@{->} \"P2\";\"B2\"};\n{\\ar@{->} \"B2\";\"X3\"};{\\ar@{->} \"B2\";\"XX2\"};\n{\\ar@{->} \"X3\";\"XX3\"};\n\\end{xy}\n}\n\\end{center}\n\n\n\\subsection{Chern-MacPherson transformation} \nWe assume that $X$ is a quasi-projective scheme or algebraic space \nwith action of $G$. \nThen $X_G$ exists as an algebraic space, \nhence $C_*(X_G)$ makes sense. \nTake the vector bundle $TU_G:=X\\times_G (U\\oplus V)$ over $X_G$, \n i.e., the pullback \nof the tautological vector bundle $(U\\times V)\/G$ over $U_G$ \ninduced by the projection $X_G \\to U_G$. \nOur natural transformation \n$$C^{G}_*: F^{G}(X) \\to A_*^{G}(X)$$ \nis defined to be the inductive limit of \n$$\nT_{U, *}:=c(TU_G)^{-1} \\frown C_*: F(X_G) \\to A_*(X_G)\n$$\nover the direct system of representations of $G$, \nsee \\cite{O1} for the detail. \n\nRoughly speaking, the $G$-equivariant CSM class \n$C_*^G(X)\\, (:=C_*^G(\\jeden_X))$ looks like \n``$c(T_{BG})^{-1}\\frown C_*(EG\\times_G X)$\", \nwhere $EG\\times_G X \\to BG$ means the universal bundle (as ind-schemes) \nwith fiber $X$ and group $G$, \nthat has been justified using a different inductive limit of Chow groups, \n see Remark 3.3 in \\cite{O1}. \n\n\\begin{lemma}\n{\\rm (i)} In the same notation as in the diagram (\\ref{d2}) in 2.1, \nthe following diagram commutes: \n $$\\xymatrix{\nF^{G}(X) \\ar[d]_{C^{G}_*} \\ar[r]^{p^{*}\\;}_{\\simeq\\;\\;} & F^{G\\times H}(P) \\ar[d]^{C^{G\\times H}_*} \\\\\nA^{G}_{*+g}(X) \\ar[r]_{p^*\\;}^{\\simeq\\;\\;} & A^{G\\times H}_{*+g+h}(P) \n}\n$$\n{\\rm (ii)} In particular, $C_*: F(\\mathcal{X}) \\to A_*(\\mathcal{X})$ is well-defined. \\\\\n{\\rm (iii)} $C_*f_*=f_*C_*$ for proper representable morphisms $f: \\mathcal{X} \\to \\mathcal{Y}$. \n\\end{lemma}\n\n\\noindent {\\sl Proof} :\\; \n{\\rm (i)} This is essentially the same as Lemma 3.1 in \\cite{O1} which \nshows the well-definedness of $C_*^G$. 
\nApply \nthe Verdier-Riemann-Roch \\cite{Yokura} to \nthe projection of the affine bundle $\\bar{p}: P_{G\\times H} \\to X_G$ (with fiber $U_2$), \nthen we have the following commutative diagram \n $$\\xymatrix{\nF(X_G) \\ar[d]_{C_*} \\ar[r]^{\\bar{p}^{*}\\;}_{\\;\\;} & F(P_{G\\times H}) \\ar[d]^{C_*} \\\\\nA_{*+\\ell_1}(X_G) \\ar[r]_{\\bar{p}^{**}\\;}^{\\;\\;} & A_{*+\\ell_1+\\ell_2}(P_{G\\times H}) \n}\n$$\nwhere $\\bar{p}^{**}=c(T_{\\bar{p}}) \\frown \\bar{p}^*$ \nand $T_{\\bar{p}}$ is the relative tangent bundle of $\\bar{p}$. \nThe twisting factor $c(T_{\\bar{p}})$ in $\\bar{p}^{**}$ is \ncancelled by the factors in $T_{U_1,*}$ \nand $T_{U,*}$: \nIn fact, since $T_{\\bar{p}} = \\bar{q}^*TU_{2H}$, $T_{\\bar{q}} = \\bar{p}^*TU_{1G}$ and \n$$TU_{G\\times H}= P \\times_{G\\times H} (T(U_1\\oplus U_2)) \n= T_{\\bar{p}}\\oplus T_{\\bar{q}},$$\nwe have \n\\begin{eqnarray*}\nT_{U,*} \\circ \\bar{p}^* (\\alpha)&=& c(TU_{G\\times H})^{-1} \\frown C_*(\\bar{p}^*\\alpha)\\\\\n&=&\nc(T_{\\bar{p}}\\oplus T_{\\bar{q}})^{-1} c(T_{\\bar{p}}) \\frown \\bar{p}^*C_*(\\alpha)\\\\\n&=&\nc(T_{\\bar{q}})^{-1} \\frown \\bar{p}^*C_*(\\alpha)\\\\\n&=& \n \\bar{p}^*(c(T{U_1}_G)^{-1} \\frown C_*(\\alpha)) \\\\\n &=& \\bar{p}^* \\circ T_{U_1, *} (\\alpha). \n\\end{eqnarray*}\nTaking the inductive limit, we conclude that $C_*^{G\\times H} \\circ p^*=p^* \\circ C_*^{G}$. \nThus {\\rm (i)} is proved. \nThe claim {\\rm (ii)} follows from {\\rm (i)} . \nBy {\\rm (ii)} , we may consider $C_*$ as the $H$-equivariant \nChern-MacPherson transformation $C_*^H$ given in \\cite{O1}, \nthus {\\rm (iii)} immediately follows from the naturality of $C_*^H$. \n\\hfill $\\Box$ \n\n\\\n\nThe above lemmas show the following theorem \n(cf. Theorem 3.5, \\cite{O1}): \n\n\\begin{theorem}\\label{theorem} \nLet $\\mathcal{C}$ be the category whose objects are \n{\\rm (}possibly non-separated{\\rm )} Artin quotient stacks $\\mathcal{X}$ \nhaving the form $[X\/G]$ of \nseparated algebraic spaces $X$ of finite type with action \nof smooth linear algebraic groups $G$; \nmorphisms in $\\mathcal{C}$ are assumed to be proper and representable. \nThen for the category $\\mathcal{C}$, \nwe have a unique natural transformation $C_*: F(\\mathcal{X}) \\to A_*(\\mathcal{X})$ \nwith integer coefficients \nso that it coincides with the ordinary MacPherson transformation \nwhen restricted to the category of quasi-projective schemes. \n\\end{theorem}\n\n\n\n\n\n\\subsection{Degree} \nLet $g=\\dim G$. The $G$-classifying stack $BG=[pt\/G]$ has \n(non-positive) virtual dimension $-g$, hence \n$$A_{-n}(BG)=A_{-n+g}^G(pt)=A^{n-g}_G(pt)=A^{n-g}(BG)$$\n for any integer $n$ (trivial for $n