\\section{Introduction}\n\nTreating interference as noise (TIN) is a strategy that is universally applied in wireless networks to deal with interference from users that are far away. Interestingly, it is also known to be capacity optimal when the interference is sufficiently weak \\cite{Etkin_Tse_Wang, Shang_Kramer_Chen, Motahari_Khandani, Sreekanth_Veeravalli, Geng_TIN}. Most relevant to this work is the recent result by Geng et al. in \\cite{Geng_TIN}, where a broadly applicable condition is identified and shown to be sufficient (also conjectured to be necessary in almost all cases) for TIN to achieve the generalized degrees of freedom (GDoF) region. The GDoF optimality of TIN then serves as a stepping stone to a further tightening of the result, so that whenever Geng et al.'s condition holds, TIN is shown to achieve the entire capacity region within a constant gap. \n\nGeng et al.'s result highlights the advantage of the GDoF metric for obtaining finer insights into the capacity of wireless networks, relative to the more widely studied degrees of freedom (DoF) metric. While DoF studies have contributed a number of fundamental insights, the DoF metric is limited in that it treats all non-zero channels as essentially equally strong (capable of carrying exactly 1 DoF). Thus, insights into schemes such as TIN, which rely very much on certain signals being much weaker than others, cannot be obtained directly from DoF studies. The GDoF perspective is crucial for such insights, and serves as the logical next step after DoF in the pursuit of capacity through progressively refined approximations. The advantage of the GDoF metric is amply evident in the study of the 2 user interference network by Etkin et al. in \\cite{Etkin_Tse_Wang}, where the DoF metric only provides a trivial answer, whereas the GDoF metric identifies all of the important operational regimes, leading ultimately to a characterization of the entire capacity region within a 1 bit gap. \n\nThe richness of the GDoF metric naturally comes at the cost of reduced tractability, especially since even the simpler DoF metric is far from fully understood for wireless networks. As such, GDoF characterizations are few and far between \\cite{Etkin_Tse_Wang, Parker_Bliss_Tarokh, Jafar_Vishwanath_GDOF, Gou_Jafar_O1, Huang_Cadambe_Jafar, Karmakar_Varanasi}. This motivates simpler alternatives such as the ADT deterministic model of \\cite{Avestimehr_Diggavi_Tse, Bresler_Tse, Bresler_Parekh_Tse}. The ADT deterministic model captures much of the essence of the GDoF framework --- the diversity of signal strengths --- but is less useful when the finer details such as the channel phase or the distinction between rational and irrational realizations become critical. Unfortunately, since these finer details are important for wireless interference networks with 3 or more users (even from a DoF perspective) \\cite{Etkin_Ordentlich, Motahari_Gharan_Khandani, Zamanighomi_Wang, Cadambe_Jafar_Wang}, the ADT deterministic model has found limited use in such settings. \n\nThe main idea motivating this work is that while the ADT deterministic model may not be suitable for studying the more fragile regimes, it could still be well suited for studying those robust regimes where the finer aspects of channel realizations are not relevant. 
Given this insight, and since the regime where TIN is optimal is arguably the most robust regime, it follows that the ADT deterministic model should suffice to identify this regime in the GDoF sense and to study its properties. As an initial verification of this insight, we begin by exploring the TIN optimality result of Geng et al. in the ADT framework. Indeed, the optimality conditions and the GDoF region are not only easily mapped to the ADT deterministic model, but also become more transparent in the deterministic setting. Encouraged by this observation, we proceed to the main contribution of this work --- exploring the optimality of TIN for $K$ user parallel Gaussian interference networks. \n\nOptimality of TIN for parallel Gaussian interference networks is an intriguing question for the following reasons. On the one hand, with the exception of the MAC-Z-BC network (which contains the multiple access channel, Z-channel and broadcast channel as special cases), it is known that all parallel Gaussian networks are in general inseparable \\cite{Cadambe_Jafar_inseparable, Cadambe_Jafar_MACZBC, Lalitha_inseparable}. The benefits of joint coding across parallel channels can be quite substantial and extend all the way from higher DoF \\cite{Cadambe_Jafar_inseparable} to simple achievable schemes and near-optimal rates at finite SNR \\cite{Nazer_Gastpar_Jafar_Vishwanath, Jafar_Ergodic}. On the other hand, for the 2 user interference network, extensions to parallel channels have been made from an exact \\emph{sum-capacity} perspective in \\cite{Shang_ParallelNoisy} and from a GDoF perspective in \\cite{Karmakar_Varanasi}\\footnote{Parallel interference networks may be seen as a special case of MIMO interference networks.}. In both cases, the results support separability of TIN optimal sub-channels. However, the insights from the $2$ user setting do not directly extend to the $K$ user interference network. For example, the GDoF region for the TIN optimal 2 user interference network is easily seen to be polymatroidal, whereas the GDoF region of TIN optimal $K$ user interference networks, with $K\\geq 3$, is no longer polymatroidal.\nThe distinction is particularly significant for parallel channels. The GDoF region of 2 user TIN optimal parallel interference networks is simply the direct sum of the corresponding sum-rate bounds for all the sub-channels \\emph{and} is achieved by separate TIN on each sub-channel. This is in general not the case with $3$ or more users (a simple example is provided in Section \\ref{sec:SFregion}). Given the significant challenges in going beyond $2$ users, it is most intriguing to ask whether the separability of parallel Gaussian interference networks continues to hold in the regime where TIN is sum-GDoF optimal. In other words, if each of the sub-channels of a $K$ user interference network satisfies the TIN optimality condition of Geng et al., then will TIN continue to be sum-GDoF optimal for the parallel channel setting?\n\nThe focus on sum-GDoF motivates us to first seek a more explicit characterization. To this end, we show that the sum-GDoF characterization for a $K$ user interference network is essentially a minimum weight perfect matching problem in combinatorial optimization. Consequently, the sum-GDoF value is characterized in terms of a partition of the interference network into disjoint cycles. Aided by the insights from the cyclic partition approach, we explore the sum-capacity optimality of TIN for $K$ user parallel deterministic interference networks under the ADT deterministic model. 
A separable extension of the optimality of TIN to parallel interference networks is obtained subject to a mild invertibility condition. The result is then translated into the GDoF framework for parallel Gaussian interference networks. In terms of answering the main question, the implication is that if each of the sub-channels satisfies the TIN optimality condition of Geng et al., then subject to a mild invertibility condition, a separate TIN scheme for each sub-channel continues to be sum-GDoF optimal for the overall $K$ user parallel Gaussian interference networks.\n\n\n\\section{System Model, Definitions, and Notation}\\label{sec:systemmodel}\n\\subsection{Gaussian Interference Network Model}\nConsider the $K$ user real Gaussian interference network, with $M$ parallel sub-channels, described as\n\\begin{equation}\n\\label{original}\n{\\bf Y}_k(t) = \\sum_{i=1}^{K}\\tilde{\\bf H}_{ki}\\tilde{\\bf X}_i(t) + {\\bf Z}_k(t),~~~\\forall k \\in [K] \\triangleq \\{1,2,\\ldots,K\\},\n\\end{equation}\nwhere over the $t$-th channel use,\n\\begin{eqnarray}\n{\\bf Y}_k(t) &=& \\left[ Y_k^{[1]}(t), Y_k^{[2]}(t), \\ldots, Y_k^{[M]}(t) \\right]^T \\\\\n\\tilde{\\bf X}_i(t) &=& \\left[ \\tilde{X}_i^{[1]}(t), \\tilde{X}_i^{[2]}(t), \\ldots, \\tilde{X}_i^{[M]}(t) \\right]^T\n\\end{eqnarray}\nare the vectors containing the received signals observed at Receiver $k$ and the transmitted symbols from Transmitter $i$, respectively, and\n\\begin{eqnarray}\n{\\tilde{\\bf H}_{ki} = \\left[ \\begin{array}{cccc} \\tilde h_{ki}^{[1]} & 0 & \\ldots & 0\\\\\n 0 & \\tilde h_{ki}^{[2]} & \\ldots & 0\\\\\n \\vdots & \\vdots & \\ddots & \\vdots\\\\\n 0 & 0 & \\cdots & \\tilde h_{ki}^{[M]}\\\\\n \\end{array}\\right]} \\label{h}\n\\end{eqnarray}\nis a diagonal channel matrix comprised of the channel coefficients from Transmitter $i$ to Receiver $k$. The superscript within the square parentheses represents the sub-channel index, $m \\in [M] \\triangleq \\{1,2,\\ldots,M\\}$. All channel coefficients are fixed across channel uses. Perfect channel knowledge is available at all transmitters and receivers. The AWGN vector at Receiver $k$ over the $t$-th channel use,\n\\begin{eqnarray}\n{\\bf Z}_k(t) &=& \\left[ Z_k^{[1]}(t), Z_k^{[2]}(t), \\ldots, Z_k^{[M]}(t) \\right]^T\n\\end{eqnarray}\nhas zero mean and covariance matrix ${\\bf I}_{M}$, where ${\\bf I}_M$ represents the $M \\times M$ identity matrix. Noise processes are i.i.d over time. All symbols are real.\n\nAt Transmitter $i$, an independent message $W_i$ uniformly distributed over the message index set $\\{1,2,\\ldots,\\lceil 2^{nR_i}\\rceil\\}$ is mapped to the transmitted codeword $[ \\tilde{\\bf X}_i(1), \\tilde{\\bf X}_i(2), \\ldots, \\tilde{\\bf X}_i(n) ]$ (abbreviated as $\\tilde{\\bf X}_i^n$) over $n$ channel uses, and is subject to the average power constraint,\n\\begin{eqnarray}\n\\frac{1}{n}\\sum_{t=1}^{n}\\sum_{m=1}^{M} \\mathbb{E}\\left| \\tilde{X}_i^{[m]}(t) \\right|^2 \\leq P_i\n\\end{eqnarray}\nwhere the expectation is over the messages.\n\nAt Receiver $k$, the received signal $ [ {\\bf Y}_k(1), {\\bf Y}_k(2), \\ldots, {\\bf Y}_k(n) ]$ (abbreviated as $ {\\bf Y}_k^n$) is used to produce the estimate $\\hat{W}_k$ of the message $W_k$. The probability of error for Receiver $k$ is given by the probability that $\\hat{W}_k$ is not equal to $W_k$. A rate tuple $(R_1, R_2, \\ldots, R_K)$ is said to be achievable if we have an encoding and decoding mapping such that the probability of error for each receiver approaches zero as $n$ approaches infinity. 
The capacity region $\\mathcal{C}$ is the closure of the set of all achievable rate tuples. The sum-capacity is defined as $\\mathcal{C}_{\\Sigma} = \\max_{\\mathcal{C} } \\sum_{k=1}^K R_k$. \n\n\\subsection{GDoF Framework}\nFollowing \\cite{Geng_TIN}, we now translate the channel model (\\ref{original}) into an equivalent normalized form to facilitate GDoF studies. For such a purpose, we define $\\tilde{X}_i^{[m]}(t) = \\sqrt{P_i}{X}_i^{[m]}(t)$. Then over the $t$-th channel use, the received signal for Receiver $k$ across the $m$-th sub-channel is described by\n\\begin{equation}\n\\label{now}\nY_k^{[m]}(t) = \\sum_{i=1}^{K} \\tilde h_{ki}^{[m]} \\sqrt{P_i} {X}_i^{[m]}(t) + {Z}_k^{[m]}(t).\n\\end{equation}\n\nFurther, we take $P >1$ as a nominal power value, and define\n\\begin{equation}\n\\alpha_{ki}^{[m]} \\triangleq \\left( \\frac{\\log \\left( \\left| \\tilde h_{ki}^{[m]} \\right|^2 P_i \\right)} {\\log P} \\right)^+.\\footnote{As noted in \\cite{Geng_TIN}, avoiding negative $\\alpha$'s, will not influence the GDoF results.}\n\\end{equation}\n\nThe channel model (\\ref{now}) becomes\n\\begin{eqnarray}\\label{model}\nY_k^{[m]}(t) &=& \\sum_{i=1}^{K} \\mbox{sign}(\\tilde h_{ki}^{[m]}) \\sqrt{P^{\\alpha_{ki}^{[m]}}} {X}_i^{[m]}(t) + {Z}_k^{[m]}(t) \\\\\n&=& \\sum_{i=1}^{K} h_{ki}^{[m]} {X}_i^{[m]}(t) + {Z}_k^{[m]}(t) \\label{use}\n\\end{eqnarray}\nwhere $h_{ki}^{[m]} \\triangleq \\mbox{sign}(\\tilde h_{ki}^{[m]}) \\sqrt{P^{\\alpha_{ki}^{[m]}}} $ is the effective channel coefficient and $ {X}_i^{[m]}(t)$ is the equivalent channel input whose power is absorbed into the channel,\n\\begin{eqnarray}\n\\frac{1}{n}\\sum_{t=1}^{n}\\sum_{m=1}^{M} \\mathbb{E} \\left| {X}_i^{[m]}(t) \\right|^2 \\leq 1.\n\\end{eqnarray}\nAs in \\cite{Geng_TIN}, we call $\\alpha_{ki}^{[m]}$ the channel strength level. The equivalent model (\\ref{use}) will be used in the rest of this paper.\n\nWe define the GDoF region as\n\\begin{eqnarray}\n\\mathcal{D} \\triangleq \\left\\{ (d_1, d_2,\\ldots, d_K) : d_i = \\lim_{P \\rightarrow \\infty} \\frac{R_i}{\\frac{1}{2} \\log P}, \\forall i \\in \\{1,2,\\ldots,K\\}, (R_1, R_2, \\ldots, R_K) \\in \\mathcal{C} \\right\\}.\n\\end{eqnarray}\n\n\nThe sum-GDoF value is defined as $\\mathcal{D}_{\\Sigma} = \\max_{\\mathcal{D} } \\sum_{k=1}^K d_k$.\n\n\n\\subsection{ADT Deterministic Interference Network Model}\nAs in the Gaussian case, there are $K$ transmitter-receiver pairs in the ADT deterministic interference network model. Each transmitter wants to communicate with its corresponding receiver. The signal sent from Transmitter $i$, as observed at Receiver $k$, over the $m$-th sub-channel, is scaled up by a nonnegative integer value $n_{ki}^{[m]} \\triangleq \\lfloor \\log_2 |h_{ki}^{[m]}| \\rfloor = \\lfloor \\frac{1}{2}\\alpha_{ki}^{[m]} \\log_2 P \\rfloor$. \n\nThe channel may be written as\n\\begin{eqnarray}\\label{det}\nY_k^{[m]} &=& \\lfloor 2^{n_{k1}^{[m]}} {X}_1^{[m]} \\rfloor \\oplus \\lfloor 2^{n_{k2}^{[m]}} {X}_2^{[m]} \\rfloor \\oplus \\cdots \\oplus \\lfloor 2^{n_{kK}^{[m]}} {X}_K^{[m]} \\rfloor\n\\end{eqnarray}\nwhere addition is performed on each bit (modulo two). The time index is omitted for compactness. We assume the real-valued channel input is positive and has peak power constraint 1, then it can be written in base 2 as \n\\begin{eqnarray}\nX_i^{[m]} = 0.X_{i,(1)}^{[m]}X_{i,(2)}^{[m]}X_{i,(3)}^{[m]} \\ldots. 
\n\\end{eqnarray}\nThe capacity region and the associated notions are defined similarly to those in the Gaussian setting.\n\n\nThe following directed graph representation will be useful to efficiently present the results in this work.\n\\subsection{Weighted Directed Graph Representation} The directed graph representation of the $K$ user interference network consists of $K$ vertices, $V_1, V_2, \\cdots, V_K$, one for each user. Since the vertices correspond directly to users, we will also refer to them as users. For all $(i,j)\\in[K]\\times[K]$, there is a directed edge $e_{ij}$ from user $j$ to user $i$, with weight $w(e_{ij})$ defined as follows:\n\\begin{eqnarray}\nw(e_{ij})&=&\\left\\{\n\\begin{array}{cc}\n\\alpha_{ij}& \\mbox{ if } i\\neq j,\\\\\n0 &\\mbox{ if } i=j\n\\end{array}\n\\right.\n\\end{eqnarray}\nThe directed graph for $K=3$ is illustrated in Fig. \\ref{dir}.\n\\begin{figure}[h]\n\\center\n\\includegraphics[width=2.5in]{dir}\n\\caption{\\small The directed graph representation of a 3 user interference network.}\n\\label{dir}\n\\end{figure}\nThe directed graph is similarly defined for the ADT deterministic model, with all $\\alpha_{ij}$ values replaced by $n_{ij}$ values. \n\nWe are particularly interested in the notion of cycles on this directed graph. We define a cycle, $\\pi$, as a cyclically ordered subset of users, without repetitions. The set of all cycles is denoted as $[\\Pi]$. The cardinality of a cycle, denoted as $|\\pi|$, is the number of users that it involves. \n\\begin{eqnarray}\n|\\pi|&=&\\sum_{V_k\\in\\pi} 1, ~~~\\forall \\pi \\in [\\Pi]\n\\end{eqnarray}\nA cycle with only one user is a trivial cycle. Two cycles $\\pi_p, \\pi_q$ are said to be disjoint if they contain no common user, denoted as $\\pi_p\\cap\\pi_q=\\phi$.\n\nIntroducing a slight abuse of notation in the interest of conciseness, the same cycle, $\\pi$, can also be equivalently represented as a set of edges representing a closed path where no user is visited more than once. The weight of a cycle, denoted as $w(\\pi)$, is the sum of the weights of all the edges traversed in completing the cycle.\n\n\\begin{eqnarray}\nw(\\pi)&=&\\sum_{e_{ij}\\in\\pi}w(e_{ij}), ~~~\\forall \\pi\\in[\\Pi]\n\\end{eqnarray}\nNote that the weight of a trivial cycle is zero. Intuitively, the weight of a cycle is the accumulation of the strengths of interference terms encountered in the cycle.\n\nAs an example, consider the 3 user interference network, for which we have a total of 8 possible cycles, so that\n\\begin{eqnarray}\n[\\Pi]&=&\\{\\{1\\}, \\{2\\}, \\{3\\}, \\{1,2\\}, \\{1,3\\}, \\{2,3\\}, \\{1,2,3\\}, \\{3,2,1\\}\\}\\\\\n&=&\\{\\{e_{11}\\}, \\{e_{22}\\}, \\{e_{33}\\},\\{e_{12},e_{21}\\},\\{e_{13},e_{31}\\},\\{e_{23},e_{32}\\},\\{e_{12},e_{23},e_{31}\\},\\{e_{32},e_{21},e_{13}\\}\\}\n\\end{eqnarray}\n\\begin{eqnarray}\nw(\\{1,2,3\\})&=&\\alpha_{12}+\\alpha_{23}+\\alpha_{31}\\\\\n&=&w(\\{e_{12},e_{23},e_{31}\\})\n\\end{eqnarray}\n\n{\\bf Cyclic Partition:} A subset of the set of all cycles, $\\Pi\\subset [\\Pi]$, is said to be a cyclic partition if \n\\begin{eqnarray}\n\\pi_p\\cap\\pi_q&=&\\phi, ~~\\forall \\pi_p,\\pi_q\\in\\Pi, \\pi_p\\neq\\pi_q\\\\\n\\sum_{\\pi\\in\\Pi}|\\pi|&=&K\n\\end{eqnarray}\nIn other words, a cyclic partition is a disjoint cyclic cover of the $K$ users. 
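\n\nTo make the cycle notation concrete before building on it, the following Python sketch may be helpful. It is purely illustrative: the helper names and the numerical strengths are ours, not part of the formal development. It enumerates $[\\Pi]$ and evaluates $w(\\pi)$ for a strength matrix given as a dictionary of dictionaries; for $K=3$ it produces exactly the 8 cycles listed above.\n\\begin{verbatim}\nfrom itertools import combinations, permutations\n\ndef all_cycles(K):\n    # Enumerate [Pi]: every cyclically ordered subset of users 1..K.\n    # Fixing the smallest user of each subset in the first position\n    # ensures each cyclic order is generated exactly once.\n    cycles = []\n    for size in range(1, K + 1):\n        for subset in combinations(range(1, K + 1), size):\n            for rest in permutations(subset[1:]):\n                cycles.append((subset[0],) + rest)\n    return cycles\n\ndef cycle_weight(cycle, alpha):\n    # w(pi): sum of the cross-link strengths alpha[i][j] over the edges\n    # e_{ij} traversed by the cycle; a trivial cycle has weight zero.\n    L = len(cycle)\n    if L == 1:\n        return 0\n    return sum(alpha[cycle[l]][cycle[(l + 1) % L]] for l in range(L))\n\n# Hypothetical 3 user strengths alpha[i][j] (illustrative values only).\nalpha = {1: {1: 2.0, 2: 0.5, 3: 0.4},\n         2: {1: 0.6, 2: 2.0, 3: 0.3},\n         3: {1: 0.2, 2: 0.7, 3: 2.0}}\nfor pi in all_cycles(3):\n    print(pi, cycle_weight(pi, alpha))\n\\end{verbatim}\nFor instance, the cycle $(1,2,3)$ is reported with weight $\\alpha_{12}+\\alpha_{23}+\\alpha_{31}$, matching the worked example above.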
\n\n\n\n{\\bf Cyclic Partition Bound:} For any cyclic partition $\\Pi$, define the corresponding cyclic partition bound, $\\mathcal{D}^\\Pi_\\Sigma$, as\n\\begin{eqnarray}\n\\sum_{k=1}^K d_k&\\leq& \\sum_{k=1}^K\\alpha_{kk}-w(\\Pi)\n\\end{eqnarray}\nwhere \n\\begin{eqnarray}\nw(\\Pi) &=&\\sum_{\\pi\\in\\Pi}w(\\pi)\n\\end{eqnarray}\nis the net weight of the cyclic partition, representing the total interference encountered in this partition.\n\nSince there are many cyclic partitions, each of which gives rise to a cyclic partition bound, let us denote the tightest of these bounds as the \\emph{best cyclic partition bound}, $\\mathcal{D}^{\\Pi*}_\\Sigma$. In the deterministic setting, a cyclic partition bound is denoted by $\\mathcal{C}^\\Pi_\\Sigma$ and the best cyclic partition bound is denoted by $\\mathcal{C}^{\\Pi*}_\\Sigma$. A cyclic partition that produces the best cyclic partition bound is labeled an \\emph{optimal} cyclic partition, and denoted by $\\Pi^*$.\n\n\nFor example, when $K=6$, one possible cyclic partition is $\\Pi=\\{\\{1,3,5\\}, \\{4,2\\},\\{6\\}\\}$ which decomposes the users into three cycles, such that each user is represented in exactly one cycle. The corresponding cyclic partition bound is \n\\begin{eqnarray}\n\\sum_{k=1}^6d_k&\\leq&\\sum_{k=1}^6\\alpha_{kk}-(\\alpha_{13}+\\alpha_{35}+\\alpha_{51})-(\\alpha_{42}+\\alpha_{24})-(0)\n\\end{eqnarray}\n\n{\\bf Participating Edge:} Edge $e_{ij}$ is a participating edge for the cyclic partition $\\Pi$ if $i\\neq j$ and $e_{ij}\\in\\pi$ for some $\\pi\\in\\Pi$.\n\n{\\bf Cyclic Predecessor:} Under cyclic partition $\\Pi$, the cyclic predecessor for user $k$ is user $\\Pi(k)$, if $e_{\\Pi(k) k}$ is a participating edge for $\\Pi$. Note that if user $k$ belongs to a trivial cycle in $\\Pi$ then $\\Pi(k)=\\phi$.\n\nFinally, $\\mathbb{R}^K_+$ is the set of all $K$-tuples over non-negative real numbers.\n\n\\section{Optimality of TIN through the ADT Deterministic Model}\n\nWe first review Geng et al.'s result\\footnote{Complex channel model is considered in \\cite{Geng_TIN}, but the results therein are easily extended to real channel setting. Here we state the result for real channel model.} on the optimality of TIN for the $K$ user interference network with one sub-channel, i.e., $M=1$. The sub-channel index superscript is omitted in this section for compactness. \n\n\n\\begin{theorem}\\label{theorem:Geng}\n(Theorem 1 in \\cite{Geng_TIN})\nIn a $K$ user interference network, where the channel strength level from Transmitter $i$ to Receiver $j$ is equal to $\\alpha_{ji}$, $\\forall i,j\\in [K]$, if the following condition is satisfied\n\\begin{equation}\n\\alpha_{ii}\\geq \\max_{j:j\\neq i}\\{\\alpha_{ji}\\}+\\max_{k:k\\neq i}\\{\\alpha_{ik}\\},~~~\\forall i,j,k\\in [K],\\label{eq:cond}\n\\end{equation}\nthen power control and treating interference as noise achieve the entire GDoF region. Moreover, the GDoF region is given by\n\n\\begin{eqnarray}\n\\mathcal{D}_{\\mbox{\\tiny TIN}}&=&\\left\\{(d_1, d_2,\\cdots, d_K)\\in\\mathbb{R}^K_+: \\sum_{V_k\\in\\pi} d_k\\leq \\sum_{V_k\\in\\pi}\\alpha_{kk}-w(\\pi),~~\\forall \\pi\\in[\\Pi] \\right\\}\n\\end{eqnarray}\n\n\\end{theorem}\n\n{\\it Remark: Henceforth, we refer to (\\ref{eq:cond}) as the TIN optimality condition for Gaussian networks. 
If a network (sub-channel) satisfies the TIN optimality condition (\\ref{eq:cond}), the network (sub-channel) will be referred to as a TIN optimal network (sub-channel)}.\n\nNote that each of the bounds defining the GDoF region represents the sum-GDoF of a cyclic interference sub-network contained in the $K$ user fully connected interference network. A cyclic sub-network is comprised of a cyclically ordered subset of users where each user causes interference only to the preceding user and suffers interference only from the following user in the cycle. As shown by Zhou et al. \\cite{Zhou_Yu} and translated into the GDoF setting by Geng et al. in \\cite{Geng_TIN}, the sum-GDoF of a cyclic interference sub-network is simply the sum of all desired link strengths minus the sum of all cross link strengths. For example, the cycle $2\\rightarrow 4\\rightarrow 1\\rightarrow 3\\rightarrow 2$ corresponds to a 4 user cyclic interference sub-network with 4 desired and 4 interfering links, and its sum-GDoF are characterized by the outer bound $d_2+d_4+d_1+d_3\\leq \\alpha_{22}+\\alpha_{44}+\\alpha_{11}+\\alpha_{33}-\\alpha_{24}-\\alpha_{41}-\\alpha_{13}-\\alpha_{32}$. Note that because a subset of users of cardinality $L$ has $(L-1)!$ distinct cycles, there are a total of $(L-1)!$ sum-GDoF bounds for each cardinality-$L$ subset of users, out of which all but the tightest bound are redundant. Moreover, excluding the empty set and the singletons, there are $2^K-K-1$ subsets of users that give rise to cycle bounds, some of which may again be redundant. Nevertheless, when considered together, the cycle bounds describe the precise GDoF region of the fully connected network whenever condition (\\ref{eq:cond}) is satisfied. This remarkable aspect of Geng et al.'s result greatly simplifies the proof of the outer bound of the GDoF region, because only cyclic interference networks need to be considered. \n\n\nFollowing similar arguments as Geng et al., it is not difficult to obtain a corresponding TIN optimality result for the ADT deterministic model.\n\\begin{theorem} \\label{single}\nIn a $K$ user ADT deterministic interference network, where the channel strength level from Transmitter $i$ to Receiver $j$ is equal to $n_{ji}$, $\\forall i,j\\in [K]$, if the following condition is satisfied\n\\begin{equation}\nn_{ii} \\geq \\max_{j:j\\neq i}\\{n_{ji} \\}+\\max_{k:k\\neq i}\\{n_{ik} \\},~~~\\forall i,j,k\\in [K], \\label{tin}\n\\end{equation}\nthen power control and treating interference as noise can achieve the whole capacity region. Moreover, the capacity region is given by\n\\begin{eqnarray}\n\\mathcal{C}_{\\mbox{\\tiny TIN}}&=&\\left\\{(R_1, R_2,\\cdots, R_K)\\in\\mathbb{R}^K_+: \\sum_{V_k\\in\\pi} R_k\\leq \\sum_{V_k\\in\\pi}n_{kk}-w(\\pi),~~\\forall \\pi\\in[\\Pi] \\right\\}\n\\end{eqnarray}\n\n\\end{theorem}\n\n{\\it Remark: Following a similar convention as the Gaussian case, we refer to (\\ref{tin}) as the TIN optimality condition for the ADT deterministic model. A network (sub-channel) is called TIN optimal if the TIN optimality condition (\\ref{tin}) is satisfied over the network (sub-channel).}\n\n\nNote the translation from Theorem \\ref{theorem:Geng} for the Gaussian case to Theorem \\ref{single} for the ADT deterministic model is remarkably direct. The capacity region of the TIN optimal ADT deterministic interference network is exactly the scaled version of the GDoF region of the corresponding TIN optimal Gaussian interference network. 
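\n\nCondition (\\ref{eq:cond}), and equally condition (\\ref{tin}), is straightforward to test numerically. The following Python sketch is a hypothetical helper of our own (not part of \\cite{Geng_TIN}); it checks the condition for a strength matrix given as a dictionary of dictionaries and applies verbatim to either the $\\alpha_{ji}$ or the $n_{ji}$ values.\n\\begin{verbatim}\ndef is_tin_optimal(s):\n    # s[j][i] is the strength of the link from Transmitter i to Receiver j.\n    # The condition requires, for every user i, that s[i][i] be at least\n    # max over j != i of s[j][i], plus max over k != i of s[i][k].\n    users = list(s.keys())\n    for i in users:\n        caused = max(s[j][i] for j in users if j != i)    # strongest interference caused by user i\n        suffered = max(s[i][k] for k in users if k != i)  # strongest interference suffered by user i\n        if caused + suffered > s[i][i]:\n            return False\n    return True\n\\end{verbatim}\nIf the function returns True for a given sub-channel, Theorem \\ref{theorem:Geng} or Theorem \\ref{single} applies to that sub-channel in isolation.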
\n\nThe ADT deterministic model also reveals an interesting interpretation of the TIN optimality condition (\\ref{tin}), and by association (\\ref{eq:cond}). As highlighted in Figure \\ref{fig:cond}, the TIN optimality condition is \\emph{equivalent} to the following statements.\n\\begin{itemize}\n\\item Signal levels that suffer interference at their desired receiver do not cause interference to others.\n\\item Signal levels that cause interference to others do not suffer interference at their desired receiver.\n\\end{itemize}\n\n\n\\begin{figure}[t]\n\\center\n\\includegraphics[width=3.5 in]{cond} \n\\caption{\\small The TIN optimality condition for a $K$ user \\emph{fully} connected ADT deterministic interference network. Signal levels that cause interference do not suffer interference, and those that suffer interference cause no interference. Note that each user $i$ has $n_{ii}- \\max_{j:j\\neq i}\\{n_{ji}\\}-\\max_{k:k\\neq i}\\{n_{ik}\\}$ signal levels that neither cause interference nor suffer interference. To avoid cluttering the figure, not all channels are shown.\n}\n\\label{fig:cond}\n\\end{figure}\n\nWhile we omit the proof details for Theorem \\ref{single} because they parallel those for Theorem \\ref{theorem:Geng} presented by Geng et al. in \\cite{Geng_TIN}, we will briefly present a simple alternative proof for the cycle bounds due to their central importance to this work. \n\nConsider the cyclic interference sub-network comprised of the cyclically ordered user indices $\\pi = \\{i_1, i_2, \\ldots, i_L\\}$ (with the indices interpreted modulo $L$, so that $i_0\\equiv i_L$ and $i_{L+1}\\equiv i_1$), obtained by eliminating all remaining links, users and messages. To each receiver $i_l$, let us give all messages except $W_{i_l}, W_{i_{l+1}}$, i.e., $\\{W_1,W_2,\\ldots,W_K\\}\\setminus \\{W_{i_l}, W_{i_{l+1}}\\}$, denoted as $W_{i_l,i_{l+1}}^c$, through a genie. \nFrom Fano's inequality, we have\n\\begin{eqnarray*}\nn(R_{i_l} - \\epsilon) &\\leq& I(W_{i_l}; Y_{i_l}^n | W_{i_l,i_{l+1}}^c) \\\\\n&=& H(Y_{i_l}^n | W_{i_l,i_{l+1}}^c) - H(Y_{i_l}^n | W_{i_l,i_{l+1}}^c, W_{i_l}) \\\\\n&=& H( \\lfloor 2^{n_{i_l i_l}} {X}_{i_l}^n \\rfloor \\oplus \\lfloor 2^{n_{i_l i_{l+1}}} {X}_{i_{l+1}}^n \\rfloor) - H(\\lfloor 2^{n_{i_l i_{l+1}}} {X}_{i_{l+1}}^n \\rfloor) \\\\\n&\\overset{(a)}{=}& H(\\lfloor 2^{n_{i_{l-1} i_{l}}} {X}_{i_{l}}^n \\rfloor) + H(\\lfloor 2^{n_{i_l i_l}} {X}_{i_l}^n \\rfloor \\oplus \\lfloor 2^{n_{i_l i_{l+1}}} {X}_{i_{l+1}}^n \\rfloor | \\lfloor 2^{n_{i_{l-1} i_{l}}} {X}_{i_{l}}^n \\rfloor) - H(\\lfloor 2^{n_{i_l i_{l+1}}} {X}_{i_{l+1}}^n \\rfloor) \\\\\n&\\overset{(b)}{\\leq}& n( n_{i_l i_l} - n_{i_{l-1} i_l} ) + H(\\lfloor 2^{n_{i_{l-1} i_{l}}} {X}_{i_{l}}^n \\rfloor) - H(\\lfloor 2^{n_{i_l i_{l+1}}} {X}_{i_{l+1}}^n \\rfloor)\n\\end{eqnarray*}\nwhere $(a)$ follows from the assumption $n_{i_l i_l} \\geq n_{i_{l-1} i_l} + n_{i_l i_{l+1}}$, which ensures that the interference-causing bits $\\lfloor 2^{n_{i_{l-1} i_{l}}} {X}_{i_{l}}^n \\rfloor$ suffer no interference at the desired receiver $i_l$, and $(b)$ is due to the fact that the entropy of a variable is no more than the number of bits therein. 
See Figure \\ref{newcyc} for a pictorial illustration.\n\\begin{figure}[t]\n\\center\n\\includegraphics[width=2.5 in]{newcyc} \n\\caption{\\small A cyclic ADT deterministic interference network that satisfies (\\ref{tin}).}\n\\label{newcyc}\n\\end{figure}\nAdding the above inequalities for $l \\in \\{1,2,\\ldots,L\\}$, we find that the entropy terms cancel out leaving us with\n\\begin{eqnarray*}\n\\sum_{l=1}^L n(R_{i_l} - \\epsilon) &\\leq& \\sum_{l=1}^L n( n_{i_l i_l} - n_{i_{l-1} i_l} ) = n \\sum_{l=1}^L n_{i_l i_l} - n w(\\pi)\n\\end{eqnarray*}\nfrom which we arrive at the desired bound by normalizing by $n$ on both sides of the inequality and letting $n$ approach infinity .\n\n\n{\\it Remark: Henceforth, since we are only interested in networks that satisfy the TIN optimality conditions, (\\ref{tin}) in the deterministic setting and (\\ref{eq:cond}) in the Gaussian setting, we will assume throughout that these conditions are satisfied.}\n\n\\section{Sum-Capacity (Sum-GDoF)}\nWe now switch our attention from capacity region to sum-capacity in the deterministic case, and from GDoF region to sum-GDoF in the Gaussian case. To avoid repetition, we will focus the discussion in this section to the Gaussian setting, i.e., GDoF region, sum-GDoF, channel strengths $\\alpha_{ij}$, etc., but all arguments made in this section also apply to the deterministic setting, with capacity region, sum-capacity, channel strengths $n_{ij}$. \n\nSince we already have the GDoF region characterization in Theorem \\ref{theorem:Geng}, the sum-GDoF characterization may appear trivial. However, there are certain interesting aspects of this problem that we will highlight in this section, which will be especially useful when we move on to parallel interference networks in subsequent sections.\n\nConsider, for example, the GDoF region of the TIN optimal 3 user interference network, which is the set of tuples $(d_1, d_2, d_3)\\in\\mathbb{R}^3_+$, defined by the following constraints.\n\\allowdisplaybreaks\n\\begin{eqnarray}\n d_1 &\\leq& \\alpha_{11}-w(\\{1\\})=\\alpha_{11}\\label{r1} \\\\\nd_2 &\\leq& \\alpha_{22}-w(\\{2\\}) =\\alpha_{22}\\label{r2} \\\\\nd_3 &\\leq& \\alpha_{33}-w(\\{3\\}) =\\alpha_{33}\\label{r3} \\\\\nd_1 + d_2 &\\leq& \\alpha_{11} + \\alpha_{22} -w(\\{1,2\\})=\\alpha_{11} + \\alpha_{22} -\\alpha_{12}-\\alpha_{21}\\label{r4} \\\\\nd_2 + d_3 &\\leq& \\alpha_{22} + \\alpha_{33} -w(\\{2,3\\})=\\alpha_{22} + \\alpha_{33} -\\alpha_{23}-\\alpha_{32}\\label{r5} \\\\\nd_3 + d_1 &\\leq& \\alpha_{33} + \\alpha_{11} -w(\\{3,1\\})=\\alpha_{11} + \\alpha_{33} -\\alpha_{31}-\\alpha_{13}\\label{r6} \\\\\nd_1 + d_2 + d_3 &\\leq& \\alpha_{11} + \\alpha_{22} + \\alpha_{33} -w(\\{1,2,3\\})=\\alpha_{11} + \\alpha_{22}+\\alpha_{33} -\\alpha_{12}-\\alpha_{23}-\\alpha_{31}\\label{r7} \\\\\nd_1 + d_2 + d_3 &\\leq& \\alpha_{11} + \\alpha_{22} + \\alpha_{33} -w(\\{3,2,1\\})=\\alpha_{11} + \\alpha_{22} + \\alpha_{33} -\\alpha_{21} - \\alpha_{32} - \\alpha_{13}\\label{r8}\n\\end{eqnarray}\nThe last two bounds are already sum-GDoF bounds. However, remarkably, neither of these may be tight. This is because, unlike similar forms that are commonly encountered e.g., the capacity region of the multiple access channel, this region is not polymatroidal. It is easy to see that a direct sum of (\\ref{r1}) and (\\ref{r6}), for example, \\emph{could} provide a tighter sum-GDoF bound. Incidentally, this would be a cyclic partition bound for the cyclic partition $\\Pi=\\{\\{1\\},\\{2,3\\}\\}$. 
But, how about something a bit more involved, such as $1\/2$ times the sum of (\\ref{r4}), (\\ref{r5}), (\\ref{r6}), which would also produce a sum-rate bound (but not a cyclic partition bound)? Let us consider this bound.\n\\begin{eqnarray}\n\\frac{(\\ref{r4})+(\\ref{r5})+(\\ref{r6})}{2}\\Rightarrow d_1+d_2+d_3&\\leq &\\sum_{k=1}^3\\alpha_{kk}-\\frac{w(\\{1,2\\})+w(\\{2,3\\})+w(\\{3,1\\})}{2}\n\\end{eqnarray}\nInterestingly, this is the same bound as $1\/2$ times the sum of $(\\ref{r7})$ and $(\\ref{r8})$. Therefore, it can never be tighter than the tightest of $(\\ref{r7})$ and $(\\ref{r8})$. Thus, even though the GDoF region is not polymatroidal, the special structure of the cycle bounds imparts some special properties. This is what we will explore in this section. In fact, these examples are representative of our general result. We will show that for a TIN optimal $K$ user interference network, the sum-GDoF value is always given by a cyclic partition bound. This is the main result of this section, and we state it in the following theorem.\n\\begin{theorem}\\label{decompose}\nFor TIN optimal Gaussian interference networks\n\\begin{eqnarray}\n\\mathcal{D}_\\Sigma &=&\\mathcal{D}^{\\Pi*}_{\\Sigma}\n\\end{eqnarray}\nwhere $\\mathcal{D}^{\\Pi*}_{\\Sigma}$ is the best cyclic partition bound.\n\\end{theorem}\n\\noindent\\hspace{2em}{\\it Proof: } \nThe sum-GDoF value is expressed by the linear program\n\\begin{align}\n(LP_1)~~~~~~~ \\mathcal{D}_\\Sigma=\\max & \\mbox{ }d_1+d_2+\\cdots+d_K\\\\\n\\mbox{such that}& \\sum_{V_k\\in\\pi}d_k\\leq \\sum_{V_k\\in\\pi}\\alpha_{kk}-w(\\pi), ~~\\forall \\pi\\in[\\Pi]\\\\\n&d_k\\geq 0, ~~ \\forall k\\in[K] \\label{eq:positive}\n\\end{align}\n\n\nIn Section \\ref{sec:sign} we show that the non-negativity constraint (\\ref{eq:positive}) can be eliminated from $LP_1$ without affecting its value. This allows us to express the sum-GDoF in terms of the dual LP as follows.\n\\begin{align}\n(LP_2)~~~~~~~ \\mathcal{D}_\\Sigma=\\min~ & \\sum_{\\pi\\in[\\Pi]}\\lambda_\\pi\\left( \\sum_{V_k\\in\\pi}\\alpha_{kk}-w(\\pi)\\right)\\\\\n\\mbox{such that}&\\sum_{\\pi\\in[\\Pi]}\\lambda_\\pi 1(V_k\\in\\pi)=1, ~~\\forall k\\in[K]\\\\\n&\\lambda_\\pi\\geq 0, ~~ \\forall \\pi\\in[\\Pi] \n\\end{align}\nwhere $1(\\cdot)$ is the indicator function that returns the values 1 or 0 when the argument to the function is true or false, respectively.\n\n\\noindent Equivalently,\n\\begin{align}\n(LP_3)~~~~~~~ \\mathcal{D}_\\Sigma=& \\sum_{k=1}^K\\alpha_{kk}-\\max\\sum_{\\pi\\in[\\Pi]}\\lambda_\\pi w(\\pi)\\\\\n\\mbox{such that}&\\sum_{\\pi\\in[\\Pi]}\\lambda_\\pi 1(V_k\\in\\pi)=1, ~~\\forall k\\in[K]\\label{eq:lambdaconstraint}\\\\\n&\\lambda_\\pi\\geq 0, ~~ \\forall \\pi\\in[\\Pi] \n\\end{align}\n\n\\noindent Let us also define the integer constrained version of this LP.\n\\begin{align}\n(IP_4)~~~~~~~ \\mathcal{D}^{\\Pi*}_{\\Sigma}=& \\sum_{k=1}^K\\alpha_{kk}-\\max\\sum_{\\pi\\in[\\Pi]}\\lambda_\\pi w(\\pi)\\\\\n\\mbox{such that}&\\sum_{\\pi\\in[\\Pi]}\\lambda_\\pi 1(V_k\\in\\pi)=1, ~~\\forall k\\in[K]\\\\\n&\\lambda_\\pi\\in\\{0,1\\}, ~~ \\forall \\pi\\in[\\Pi] \n\\end{align}\nNote that the value of the integer program $IP_4$ is simply the best cyclic partition bound $\\mathcal{D}^{\\Pi*}_{\\Sigma}$.\n\nSince imposing an integer constraint cannot make the $\\max$ term larger, it is already clear that $\\mathcal{D}^{\\Pi*}_{\\Sigma}\\geq\\mathcal{D}_\\Sigma$. To prove the other direction, let us reformulate $LP_3$ by changing the perspective from cycles to edges. 
Instead of the multipliers $\\lambda_\\pi$ that are associated with cycles, we will use multipliers $t_{ij}$ that are associated with edges. Define\n\\begin{eqnarray}\nt_{ij}&=&\\sum_{\\pi\\in[\\Pi]} \\lambda_\\pi 1(e_{ij} \\in\\pi), ~~~\\forall (i,j)\\in[K]\\times[K] \\label{t}\\label{eq:tconstraint}\n\\end{eqnarray}\n\nWe now translate the constraints (\\ref{eq:lambdaconstraint}) on cycles to edges. A cycle that passes through vertex $k$ contributes exactly one incoming and one outgoing edge at that vertex. (\\ref{eq:lambdaconstraint}) says that the net contribution from $\\lambda_\\pi$ for all cycles associated with any particular vertex is 1. Clearly, then, the net contribution for all edges leaving a transmitter (vertex), or all edges entering a receiver (vertex), must be unity.\n\\begin{eqnarray}\n\\sum_{j=1}^Kt_{ij}&=&1, ~\\forall i\\in[K]\\\\\n\\sum_{i=1}^Kt_{ij}&=&1, ~\\forall j\\in[K]\n\\end{eqnarray}\nand the objective value is equivalently re-written as\n\\begin{eqnarray}\n\\sum_{\\pi\\in[\\Pi]} \\lambda_\\pi w(\\pi) &=&\\sum_{\\pi\\in[\\Pi]} \\lambda_\\pi \\sum_{e_{ij}\\in\\pi}w(e_{ij})\\\\\n&=&\\sum_{\\pi\\in[\\Pi]} \\lambda_\\pi \\sum_{(i,j)\\in[K]\\times[K]}w(e_{ij})1(e_{ij}\\in\\pi)\\\\\n&=&\\sum_{(i,j)\\in[K]\\times[K]}w(e_{ij})\\sum_{\\pi\\in[\\Pi]} \\lambda_\\pi 1(e_{ij}\\in\\pi)\\\\\n&=&\\sum_{(i,j)\\in[K]\\times[K]}t_{ij}w(e_{ij})\n\\end{eqnarray}\n\n\\noindent Substituting into $LP_3$ gives us the new LP\n\\begin{align}\n(LP_5)~~~~~~~ \\mathcal{D}_\\Sigma\\geq& \\sum_{k=1}^K\\alpha_{kk}+\\min\\sum_{(i,j)\\in[K]\\times[K]}c_{ij}t_{ij}\\\\\n\\mbox{such that }&\\sum_{j=1}^Kt_{ij}=1, ~\\forall i\\in[K]\\\\\n&\\sum_{i=1}^Kt_{ij}=1, ~\\forall j\\in[K]\\\\\n&t_{ij}\\geq 0, ~\\forall (i,j)\\in[K]\\times[K]\n\\end{align}\nwhere we defined $c_{ij}=-w(e_{ij})$, and the $\\geq$ sign appears because we dropped the constraint (\\ref{eq:tconstraint}). In this standard form, this LP is recognizable as the \\emph{minimum weight perfect matching} problem, and its solution is known to be integral, i.e., the optimizing $t_{ij}$ must take values in $\\{0,1\\}$ (See \\cite{Schrijver} and Theorem 5 of \\cite{matching}).\n\nMoreover, note that any integral solution to $LP_5$ gives us a valid cyclic partition bound, $\\mathcal{D}^{\\Pi}_\\Sigma$. Therefore we have\n\\begin{eqnarray}\n\\mathcal{D}_\\Sigma&\\geq&\\mathcal{D}^{\\Pi}_\\Sigma\\\\\n&\\geq&\\mathcal{D}^{\\Pi*}_\\Sigma\n\\end{eqnarray}\nbecause a cyclic partition bound cannot be smaller than the optimal cyclic partition bound. Since we have already shown that $\\mathcal{D}_\\Sigma\\leq \\mathcal{D}^{\\Pi*}_\\Sigma$, we must have $\\mathcal{D}^{\\Pi*}_\\Sigma=\\mathcal{D}_\\Sigma$. \\hfill\\mbox{\\rule[0pt]{1.5ex}{1.5ex}}\n\nFinally, since the same proof also works for the deterministic setting, let us conclude this section by stating the deterministic counterpart of Theorem \\ref{decompose} as the following corollary.\n\\begin{corollary}\\label{col:decompose}\nFor TIN optimal ADT deterministic interference networks\n\\begin{eqnarray}\n\\mathcal{C}_\\Sigma &=&\\mathcal{C}^{\\Pi*}_{\\Sigma}\n\\end{eqnarray}\nwhere $\\mathcal{C}^{\\Pi*}_{\\Sigma}$ is the best cyclic partition bound.\n\\end{corollary}\n\n\\section{Optimality of TIN for Parallel Interference Networks}\nAs we move from the single sub-channel case to multiple parallel sub-channels, the outer bound proof becomes significantly more challenging. 
Whereas formerly it was sufficient to only consider each cyclic sub-network obtained by eliminating all other users, messages and links, this is no longer possible for parallel interference networks. For example, a different cycle may be active in each sub-channel, however one cannot eliminate a different set of links for each sub-channel. As an outer bounding argument, eliminating a link is justified by including a genie that takes all the messages originating at the transmitter of that link, and provides them to the receiver of that link, so that the receiver can reconstruct and subtract the transmitted symbols from its received signal. However, in a parallel channels setting, the message information provided by the genie allows a receiver to reconstruct and subtract the transmitted symbols from a transmitter on \\emph{all} sub-channels. Thus, if a link from Transmitter $i$ to Receiver $j$ is removed for one sub-channel, it must be removed for all sub-channels. This makes it impossible to reduce a fully connected parallel interference network directly into \\emph{different} cyclic sub-networks over each sub-channel. As such, for parallel interference networks, the reduction to cyclic networks is in general no longer an option, and the entire network must be directly considered for the outer bound.\nGiven this added source of difficulty, the relative simplicity of the ADT deterministic model is tremendously useful. Thus, we start to explore parallel interference networks with the ADT deterministic model.\n\n\\subsection{ADT Deterministic Model}\nWhile we deal with multiple parallel sub-channels in this section, recall that we assume throughout that each sub-channel satisfies condition (\\ref{tin}). In other words, by itself, each sub-channel is TIN optimal. What we wish to explore is whether \\emph{collectively} such parallel channels remain separable and therefore TIN optimal. Let us start with a few relevant definitions. \n\nFor the definitions that have been introduced for the single sub-channel case, we will add a superscript to indicate the sub-channel index, for example cyclic partition $\\Pi^{[m]}$, cyclic predecessor $\\Pi^{[m]}(k)$, and cyclic partition bound $\\mathcal{C}_{\\Sigma}^{\\Pi^{[m]}}$. Note that many cyclic partitions are possible for each sub-channel, and a different cyclic partition may be used for each sub-channel. \n\n\n\n\n\n\n{\\bf Participating Input and Output Levels ($X_{i,u}^{[m]}, Y_{k,u}^{[m]}$)}: For the $m$-th sub-channel, we define participating input levels $$X_{i,u}^{[m]} \\triangleq 0.X_{i,(1)}^{[m]}, \\ldots, X_{i,\\left(n_{\\Pi^{[m]}{(i)} i}^{[m]}\\right)}^{[m]}$$ to be the bits that are sent from Transmitter $i$ and observed at its predecessor Receiver $\\Pi^{[m]} (i)$. The received signal levels resulting from all interfering $X_{i,u}^{[m]}$ are defined as the participating output levels $$Y_{k,u}^{[m]} \\triangleq \\sum_{i=1,i \\neq k}^K 2^{n_{ki}^{[m]}} {X}_{i,u}^{[m]}$$ where the summation is bit-wise modulo two. We can also write $X_{i,u}$ in a vector form as $$X_{i,u}^{[m]} = [ X_{i,(1)}^{[m]}, \\ldots, X_{i,\\left(n_{\\Pi^{[m]}(i) i}^{[m]}\\right)}^{[m]} ].$$ Similar vector notation is used for $Y_{k,u}^{[m]}$ when the vector form is clearer. \n\n{\\bf Invertibility}: The $m$-th sub-channel is said to be invertible if the mapping from ${\\bf X}_u^{[m]} \\triangleq (X_{1,u}^{[m]}, \\ldots, X_{K,u}^{[m]})$ to ${\\bf Y}_u^{[m]} \\triangleq (Y_{1,u}^{[m]}, \\ldots, Y_{K,u}^{[m]})$ is invertible for an optimal cyclic partition $\\Pi^{[m]*}$. 
Mathematically, we require \n\\begin{eqnarray}\nH({\\bf X}_u^{[m]}|{\\bf Y}_u^{[m]})&=&0.\\label{eq:invertADT}\n\\end{eqnarray}\n\n\n\n\n\nThe significance of these definitions will become clear with the statement of the result, illustrative examples, and finally from the details of the proof. Perhaps the most intriguing is the invertibility property. At this point it suffices to say that it is a ``mild\" property and is easily testable for a given problem instance. The mildness of this property will be explicitly addressed in Section \\ref{sec:mild}.\nWith these definitions, we are now ready to state the main result of this section in the following theorem.\n\n\n\n\n\\begin{theorem}\\label{Kdetthm}\nIn a $K$ user ADT deterministic interference network with $M$ sub-channels, \nif each sub-channel is individually TIN optimal and invertible, then even collectively for all the sub-channels of the parallel interference network, the sum-capacity is achieved by a separate TIN solution over each sub-channel. \n\\end{theorem}\n\n\nThe proof of Theorem \\ref{Kdetthm} is deferred to Section \\ref{p1}. At this point it is important to understand the statement of the theorem and its limitations through illustrative examples.\n\n\\begin{figure}[h]\n\\center\n\\includegraphics[width=6in]{ex1}\n\\caption{\\small A 3 user ADT deterministic interference network with 3 sub-channels, where each sub-channel is TIN optimal. Under the optimal cyclic partitions $\\Pi^{[1]*}=\\{\\{1,2,3\\}\\}, \\Pi^{[2]*}=\\{\\{3,2,1\\}\\}, \\Pi^{[3]*}=\\{\\{1\\}, \\{2,3\\}\\}$, the participating input and output levels, $X_{i,u}^{[m]}, Y_{i,u}^{[m]}, i,m \\in \\{1,2,3\\}$ are labeled and the mapping from $(X_{1,u}^{[m]}, X_{2,u}^{[m]}, X_{3,u}^{[m]})$ to $(Y_{1,u}^{[m]}, Y_{2,u}^{[m]}, Y_{3,u}^{[m]})$ is easily verified to be invertible for each sub-channel.}\n\\label{ex1}\n\\end{figure}\n\n\\begin{example}\nConsider the $K=3$ user ADT deterministic interference network with $M=3$ parallel sub-channels, shown in Figure \\ref{ex1}. It is readily verified that each sub-channel by itself is TIN optimal. For example, consider user 2 in sub-channel 1. The desired signal strength for this user is $n_{22}^{[1]}=3$, the strongest interference caused by this user is $n_{12}^{[1]}=2$ and the strongest interference suffered by this user is $n_{23}^{[1]}=1$. Thus, the desired signal strength is no less than the sum of the signal strengths of the strongest interference caused and the strongest interference received by this user. The same is true for each of the 3 users in each of the 3 parallel sub-channels. 
Therefore, according to Theorem \\ref{single}, TIN is optimal for each sub-channel by itself.\nFor the 3 sub-channels, consider the optimal cyclic partitions $$\\Pi^{[1]*}=\\{\\{1,2,3\\}\\}, \\Pi^{[2]*}=\\{\\{3,2,1\\}\\}, \\Pi^{[3]*}=\\{\\{1\\}, \\{2,3\\}\\}.$$\nThe weights of the participating edges are \n\\begin{align}\nw(\\Pi^{[1]*}) &= w(\\{e_{12}^{[1]}, e_{23}^{[1]}, e_{31}^{[1]} \\}) = n_{12}^{[1]} + n_{23}^{[1]} + n_{31}^{[1]} = 3 \\\\\nw(\\Pi^{[2]*}) &= w(\\{e_{32}^{[2]}, e_{21}^{[2]}, e_{13}^{[2]} \\}) = n_{32}^{[2]} + n_{21}^{[2]} + n_{13}^{[2]} = 3 \\\\\nw(\\Pi^{[3]*}) &= w(\\{e_{11}^{[3]}, e_{23}^{[3]}, e_{32}^{[3]} \\}) = 0 + n_{23}^{[3]} + n_{32}^{[3]} = 3\n\\end{align}\nThen according to Corollary \\ref{col:decompose}, the sum-capacity values for each sub-channel by itself are given by $$\\mathcal{C}^{[m]}_{\\Sigma}= \\sum_{i=1}^3 n_{ii}^{[m]} - w(\\Pi^{[m]*}) =9-3=6, ~m = 1,2,3.$$ What we wish to know is if TIN continues to be the sum-capacity optimal scheme for all 3 sub-channels collectively. \n\nLet us check for invertibility for each sub-channel. According to the definitions, the participating inputs for sub-channel 1 are $X_{1,u}^{[1]} = [X_{1,(1)}^{[1]}, \\ldots, X_{1,(n_{31}^{[1]})}^{[1]}] = \\phi, X_{2,u}^{[1]} = [X_{2,(1)}^{[1]}, \\ldots, X_{2,(n_{12}^{[1]})}^{[1]}] = [ X_{2,(1)}^{[1]}, X_{2,(2)}^{[1]} ]$, $X_{3,u}^{[1]} = [X_{3,(1)}^{[1]}, \\ldots, X_{3,(n_{23}^{[1]})}^{[1]}] = [ X_{3,(1)}^{[1]}]$ and the participating outputs for sub-channel 1 are $Y_{1,u}^{[1]} = [X_{2,(1)}^{[1]} \\oplus X_{3,(1)}^{[1]}, X_{2,(2)}^{[1]}], Y_{2,u}^{[1]} = [X_{3,(1)}^{[1]}]$ and $Y_{3,u}^{[1]} = \\phi$. It is now trivial to verify that from $(Y_{1,u}^{[1]}, Y_{2,u}^{[1]}, Y_{3,u}^{[1]})$, we can recover $(X_{1,u}^{[1]},X_{2,u}^{[1]},X_{3,u}^{[1]})$. Therefore, sub-channel 1 is invertible. Similarly, the participating inputs and outputs for sub-channels 2 and 3 are shown in Figure \\ref{ex1} and it is easily verified that sub-channels 2 and 3 are invertible as well. Therefore, since all the conditions of Theorem \\ref{Kdetthm} are satisfied, we conclude that separate TIN is optimal for this parallel interference network, and therefore, the sum-capacity of the 3 sub-channels collectively, is the sum of their individual sum-capacities. In other words, the sum-capacity is $6+6+6=18$ and is achieved by separate TIN on each sub-channel.\n\n\\end{example}\n\nTo also expose the limitation of Theorem \\ref{Kdetthm}, the next example illustrates a relatively rare situation where invertibility is not satisfied, and so Theorem \\ref{Kdetthm} cannot be applied.\n\n\\begin{figure}[h]\n\\center\n\\includegraphics[width=6in]{ex2}\n\\caption{\\small A 3 user ADT deterministic interference network with 3 sub-channels, where each sub-channel is TIN optimal. For the optimal cyclic partitions $\\Pi^{[1]*}=\\{ \\{ 1,2,3 \\} \\}, \\Pi^{[2]*}=\\{ \\{ 3,2,1\\}\\}$ and $\\Pi^{[3]*}=\\{ \\{ 1,2,3 \\} \\}$, participating inputs and outputs $X_{i,u}^{[m]}, Y_{i,u}^{[m]}, i, m \\in \\{1,2,3\\}$ are labeled. In this case, the mapping from $(X_{1,u}^{[3]}, X_{2,u}^{[3]}, X_{3,u}^{[3]})$ to $(Y_{1,u}^{[3]}, Y_{2,u}^{[3]}, Y_{3,u}^{[3]})$ is not invertible.}\n\\label{ex2}\n\\end{figure}\n\\begin{example}\nConsider the 3 user ADT deterministic interference network with 3 sub-channels, as shown in Figure \\ref{ex2}, with the optimal cyclic partitions $\\Pi^{[1]*}=\\{ \\{ 1,2,3 \\} \\}, \\Pi^{[2]*}=\\{ \\{ 3,2,1\\}\\}$ and $\\Pi^{[3]*}=\\{ \\{ 1,2,3 \\} \\}$ for the first, second and third sub-channel, respectively. 
It is easy to verify that all 3 sub-channels are TIN optimal individually. However, with the participating inputs and outputs $X_{i,u}^{[m]}, Y_{i,u}^{[m]}$ shown in the figure, it is also easy to see that while the first two sub-channels are invertible, the third sub-channel is not.\n\\end{example}\n\nNote that when the network only has one sub-channel, i.e., $M=1$, we can delete all the interfering links except the participating interference links (ones in $\\Pi^{[1]}$) without violating the outer bound argument, so that the invertibility becomes trivially true. Thus, Theorem \\ref{Kdetthm} recovers the outer bound result of Theorem \\ref{single}. \n\nThere are many interesting classes of networks where invertibility can be shown to hold easily. For example, when $K=3$, invertibility is fully characterized in Section \\ref{sec:mild}. Another interesting class is the class of cyclic interference networks where each sub-channel contains only one cycle (different sub-channels may have different cycles). These and other interesting cases will be discussed in Section \\ref{sec:mild}.\n\n\\subsection{GDoF}\n\nWe now explore the extension to the Gaussian setting and show that the insights from the deterministic framework go through. We obtain the corresponding result on the sum-GDoF optimality of TIN for parallel Gaussian interference networks subject to a similar invertibility property. $X_{i,u}^{[m]}, Y_{k,u}^{[m]}$ are defined similarly to the deterministic case. Participating input bit levels $X_{i,u}^{[m]}$ are made up of the bit levels below the binary point, sent from Transmitter $i$ and heard by Receiver $\\Pi^{[m]}(i)$, i.e., $X_{i,u}^{[m]} = \\mbox{sign}(X_i^{[m]}) \\times 0.X_{i,(1)}^{[m]},\\ldots,X_{i,\\left(n_{\\Pi^{[m]}(i)i}^{[m]}\\right)}^{[m]}$, where $n_{ki}^{[m]} = \\lfloor \\frac{1}{2} \\alpha_{ki}^{[m]} \\log_2 P \\rfloor$. Participating output levels $Y_{k,u}^{[m]}$ are the resulting interference from $X_{i,u}^{[m]}$ plus additive Gaussian noise, i.e., $Y_{k,u}^{[m]} = \\sum_{i=1,i \\neq k}^K h_{ki}^{[m]} X_{i,u}^{[m]} + Z_k^{[m]}$. \n\nThe invertibility property is a bit more delicate to translate, because of the presence of noise, average power constraints, and the focus on GDoF rather than exact capacity. Given a cyclic partition, for the invertibility property in the Gaussian case, it suffices to require the mapping from ${\\bf X}_u^{[m]} \\triangleq (X_{1,u}^{[m]}, \\ldots, X_{K,u}^{[m]})$ to ${\\bf Y}_u^{[m]} \\triangleq (Y_{1,u}^{[m]}, \\ldots, Y_{K,u}^{[m]})$ to be invertible \\emph{within bounded noise distortion}.\nMathematically, we express the counterpart of (\\ref{eq:invertADT}) as\n\\begin{eqnarray}\n\\mbox{\\bf (Invertibility Property): } H({\\bf X}_u^{[m]} | {\\bf Y}_u^{[m]})&=& o(\\log(P)) \\label{equ:ginv}\n\\end{eqnarray}\nAs before, the $m$-th sub-channel is said to be invertible if there exists an optimal cyclic partition $\\Pi^{[m]*}$ under which invertibility is satisfied.\n\nWe have the following theorem.\n\n\\begin{theorem}\\label{thm}\nIn a $K$ user parallel Gaussian interference network with $M$ sub-channels, \nif each sub-channel is individually both TIN optimal and invertible, then the sum-GDoF value of the parallel Gaussian interference network is achieved by separate TIN over each sub-channel.\n\\end{theorem}\n\nThe proof of Theorem \\ref{thm} appears in Section \\ref{p3}. 
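\n\nTo make the separate TIN benchmark concrete, the following Python sketch may be useful. It is our own illustration, not part of the formal development, and it assumes that every sub-channel is individually TIN optimal and invertible. It computes the best cyclic partition bound of each sub-channel (Theorem \\ref{decompose}) by brute force over all perfect matchings, i.e., permutations, and then sums the per-sub-channel values, which by Theorem \\ref{thm} equals the sum-GDoF of the parallel network; the brute force is practical only for small $K$.\n\\begin{verbatim}\nfrom itertools import permutations\n\ndef best_cyclic_partition_bound(alpha):\n    # Sum-GDoF of one TIN optimal sub-channel: the sum of direct-link\n    # strengths minus the maximum-weight perfect matching over the\n    # cross-link weights (self-loops count as weight zero).\n    users = sorted(alpha)\n    direct = sum(alpha[k][k] for k in users)\n    def w(i, j):\n        return 0 if i == j else alpha[i][j]\n    heaviest = max(sum(w(i, s) for i, s in zip(users, sigma))\n                   for sigma in permutations(users))\n    return direct - heaviest\n\ndef separate_tin_sum_gdof(sub_channels):\n    # Parallel network: with every sub-channel TIN optimal and invertible,\n    # the sum-GDoF is the sum of the per-sub-channel values.\n    return sum(best_cyclic_partition_bound(a) for a in sub_channels)\n\\end{verbatim}\nHere each element of \\texttt{sub\\_channels} is a strength matrix in the same dictionary-of-dictionaries format used in the earlier sketches.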
\n\n\\subsection{Mildness of Invertibility Condition}\\label{sec:mild}\nThe intuition behind the mildness of the invertibility condition is analogous to the commonly encountered issue of invertibility of channel matrices in wireless networks, i.e., the property is satisfied everywhere except over an algebraic variety of lower dimension than the parameter space, and therefore is increasingly likely to be true when the parameter space is a large field. In particular, we expect invertibility to hold in the Gaussian setting almost surely. In the deterministic setting also, because the signal levels $n_{ij}$ are defined as quantized versions of $\\alpha_{ij}\\log(P)$, with $\\alpha_{ij}$ drawn from a continuum of real values, as the quality of the quantization improves (with increasing $P$), the invertibility is increasingly likely to hold.\n\nTo strengthen this intuition, we take a closer look at the invertibility condition in this section. We will go into details mainly for the deterministic setting. For the Gaussian setting, while the insights from the deterministic setting are expected to go through via the usual machinery of translating between deterministic and Gaussian settings, as used in a number of works \\cite{Bresler_Tse, Bresler_Parekh_Tse, Avestimehr_Diggavi_Tse, Avestimehr_Sezgin_Tse, Jafar_Vishwanath_GDOF, Niesen_Maddah_Ali_X}, an in-depth analysis appears to be extremely cumbersome with little by way of new insights. Hence we will restrict the discussion in the Gaussian setting primarily to an intuitive level.\n\n\\subsubsection{ADT Deterministic Model}\n\\paragraph{3 users}\n Let us start with the ADT deterministic model for $K=3$, with arbitrary $M$, where we explicitly characterize the invertibility condition.\n\\begin{lemma}\\label{lemma:3inv}\nFor the $m$-th sub-channel of a 3 user ADT deterministic interference network, if $n_{12}^{[m]} + n_{23}^{[m]} + n_{31}^{[m]} \\neq n_{21}^{[m]} + n_{32}^{[m]} + n_{13}^{[m]}$, then sub-channel $m$ is invertible under any cyclic partition.\n\\end{lemma}\n{\\it Proof:} Consider the bi-partite graph comprised of the participating input and output levels as the two sets of vertices and the cross links between them as the edges. According to Theorem \\ref{lemma:inv}, if this graph is acyclic then invertibility must hold. Therefore, we only need to show that when $n_{12}^{[m]} + n_{23}^{[m]} + n_{31}^{[m]} \\neq n_{21}^{[m]} + n_{32}^{[m]} + n_{13}^{[m]}$, the bipartite graph is acyclic. Let us suppose the opposite, i.e., the graph has a cycle. Since only cross links are considered, for the 3 user case, the cycle must traverse all 3 users. The 6 edges along the way correspond to 6 interfering links with strength $n_{ji}^{[m]}$. The bit sent from Transmitter $i$ to Receiver $j$ is shifted $n_{ji}^{[m]}$ places. Therefore, as we traverse the 6 edges, the net shift factor encountered is $n_{12}^{[m]} + n_{23}^{[m]} + n_{31}^{[m]} - n_{21}^{[m]} - n_{32}^{[m]} - n_{13}^{[m]}$, which must equal zero for the cyclical path to return to its origin. But this contradicts the assumption that $n_{12}^{[m]} + n_{23}^{[m]} + n_{31}^{[m]} \\neq n_{21}^{[m]} + n_{32}^{[m]} + n_{13}^{[m]}$. 
This completes the proof by contradiction.\n\\hfill\\mbox{\\rule[0pt]{1.5ex}{1.5ex}}\n\nCombining the result of Lemma \\ref{lemma:3inv} with the result of Theorem \\ref{Kdetthm}, we have the explicit result for the 3 user parallel ADT deterministic interference network.\n\n\\begin{theorem}\\label{3detthm}\nFor the $3$ user parallel ADT deterministic interference network where each sub-channel is individually TIN optimal, if each sub-channel also satisfies\n\\begin{eqnarray}\nn_{12}^{[m]} + n_{23}^{[m]} + n_{31}^{[m]} \\neq n_{21}^{[m]} + n_{32}^{[m]} + n_{13}^{[m]}, \\forall m \\in[M] \\label{detrd}\n\\end{eqnarray}\nthen the sum-capacity of the $3$ user parallel ADT deterministic interference network is achieved by a separate TIN solution over each sub-channel.\n\\end{theorem}\n\n\\paragraph{Acyclic Bipartite Graph of Cross Channels between Participating Levels (Includes Cyclic Interference Networks)}\n\nThe following theorem presents a general result which was also used in the proof of invertibility for the 3 user case.\n\n\\begin{theorem}\\label{lemma:inv}\nFor each sub-channel of a $K$ user parallel ADT deterministic interference network, view the cross links between the participating input and output levels as the edges of an undirected bipartite graph. If this bipartite graph is acyclic, then the sub-channel is invertible. If, in addition to being individually TIN optimal, every sub-channel has an acyclic bipartite graph of this kind, then separate TIN over each sub-channel achieves the sum-capacity of the $K$ user parallel ADT deterministic interference network.\n\\end{theorem}\n\n{\\it Proof:} Since the optimality of separate TIN is already established subject to invertibility, all that remains is to show that invertibility holds. We will prove that, in the absence of cycles in the bi-partite graph described above, one can start from any participating input bit level as the root and build a tree whose leaves are participating output bit levels, such that inverting sequentially from the leaves recovers all participating input levels along the tree. The construction is as follows. Start at any participating input bit level as the root. When we leave the input bit level for an output bit level, always choose a \\emph{participating} edge. \nNote that for each input bit level, there is only one participating edge. Also, there is only one participating edge for each output bit level. After reaching an output bit level, if it is connected nowhere else then it is a leaf and we are done. If it is connected to other input bit levels, the edges must all be \\emph{non-participating} edges as the only participating edge has been used to arrive at the output bit. Again, for each input level reached, choose the only participating edge to reach the next output bit level. Because the graph has no cycles, the process must end eventually. We cannot end at an input level, because every input bit level must have a participating edge going out. Therefore we must end at output bit levels (leaves). Then we can traverse this tree back and find the original input bit level and all input bits along the way. \\hfill\\mbox{\\rule[0pt]{1.5ex}{1.5ex}}\n\nTo illustrate the inverting process, an example would be most useful. Consider a sub-channel of a 4 user ADT deterministic interference network, whose acyclic bipartite graph is shown in Figure \\ref{fig:inv}. The sub-channel is TIN optimal. 
Consider the optimal cyclic partition ${\\Pi}^* = \\{\\{1,2,3,4\\}\\}$ with participating edges $\\{e_{12}, e_{23}, e_{34}, e_{41}\\}$. Let us show that it is invertible. Start from input bit $X_{2,(1)}$ and create the tree as shown in Figure \\ref{fig:inv}. Inverting from the leaves would recover all input levels.\n\n\n\\begin{figure}[h]\n\\center\n\\includegraphics[width=5 in]{inv}\n\\caption{\\small $(a)$ The acyclic bipartite graph of a sub-channel that satisfies the TIN optimality condition and the tree created to invert the input bit levels. Note that the graph is undirected, direction sign is added to highlight the order of how the tree is created. $(b)$ A more tree-centric view. Note that as the graph is bipartite, the levels alternate between input and output. As participating edge is used to go from input to output, there is only one edge from an input node to an output node. The participating output level is the modulo sum of all its connected input nodes. The leaves are output bits and are only connected to one input node from above. As such, an iterative inverting from bottom to top is feasible.}\n\\label{fig:inv}\n\\end{figure}\n\n\nWe mention that although Theorem \\ref{lemma:inv} establishes that the acyclic condition is sufficient for a sub-channel to be invertible, it is not necessary. Such examples are not uncommon, e.g., one appears in Figure \\ref{fig:dom} in this paper.\n\n\nNext we consider another interesting subclass of the general $K$ user ADT deterministic interference network, i.e., the cyclic interference networks where each sub-channel contains only one cycle (different sub-channels may have different cycles). As the bi-partite graph is trivially acyclic, invertibility holds. Combined with Theorem \\ref{Kdetthm}, we settle the optimality of separate TIN for cyclic interference networks. The result is stated in the following corollary.\n\n\\begin{corollary}\\label{col:cyc}\nFor a $K$ user parallel ADT deterministic interference network where each sub-channel is individually TIN optimal, if each sub-channel is also a cyclic interference network, then the sum-capacity of the $K$ user parallel ADT deterministic interference network is achieved by a separate TIN solution over each sub-channel.\n\\end{corollary}\n\n{\\it Remark:} Note that a \\emph{cyclic} interference network has an \\emph{acyclic} bi-partite graph as defined in Theorem \\ref{lemma:inv}. This is because in a cyclic network each receiver receives interference from only one transmitter, so that each output level can only be connected to one input level in the bi-partite graph.\n\n\\paragraph{Networks with Dominant Partitions}\nOur study of invertibility can be naturally extended to the following situation. For sub-channel $m$, consider an optimal cyclic partition ${\\Pi}^{[m]*}$. If the interference caused by each Transmitter $k\\in[K]$ to its cyclic predecessor $\\Pi^{[m]*}(k)$ is strictly the strongest, i.e., \n$n_{\\Pi^{[m]*}(k)k}^{[m]} > n_{jk}^{[m]}, \\forall j \\notin\\{k, \\Pi^{[m]*}(k)\\}$, we say that ${\\Pi}^{[m]*}$ is a \\emph{dominant} cyclic partition and sub-channel $m$ satisfies the \\emph{dominant} interference condition. 
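\n\nFor concreteness, the dominant interference condition for a given cyclic partition is straightforward to check numerically. The following short Python sketch is purely illustrative (the array \\texttt{n} of signal levels, the predecessor map \\texttt{pred}, and the 0-indexed user labels are hypothetical choices of representation, not notation used elsewhere in this paper); it simply spells out the comparisons in the definition above for one sub-channel.\n\\begin{verbatim}\ndef is_dominant(n, pred):\n    """Check the dominant interference condition for one sub-channel.\n\n    n[j][i] : signal level of the link from Transmitter i to Receiver j\n              (users are 0-indexed here).\n    pred[i] : cyclic predecessor Pi(i) of user i under the given cyclic\n              partition; pred[i] == i marks a singleton cycle, which is\n              assumed to impose no constraint.\n    """\n    K = len(n)\n    for k in range(K):\n        if pred[k] == k:          # singleton cycle: no participating cross link\n            continue\n        for j in range(K):\n            if j in (k, pred[k]):\n                continue\n            if not n[pred[k]][k] > n[j][k]:\n                return False      # some other cross link is at least as strong\n    return True\n\\end{verbatim}\nFor the 4 user example of Figure \\ref{fig:dom} below, with \\texttt{pred = [3, 0, 1, 2]} (a 0-indexed encoding of $\\Pi^* = \\{\\{1,2,3,4\\}\\}$), the check reduces to comparisons such as \\texttt{n[0][1] > max(n[2][1], n[3][1])}, i.e., $n_{12} > \\max(n_{32}, n_{42})$.\n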
The following theorem considers the networks where each sub-channel satisfies the dominant interference condition.\n\n\\begin{theorem}\\label{dom}\nFor a $K$ user parallel ADT deterministic interference network where the TIN optimality condition is satisfied in each sub-channel, if each sub-channel also satisfies\n\\begin{eqnarray}\nn_{\\Pi^{[m]*}(k)k}^{[m]} > n_{jk}^{[m]},&& \\forall j,k\\in[K], j \\notin\\{k, \\Pi^{[m]*}(k)\\}, m\\in[M] \\label{eq:dom}\n\\end{eqnarray}\nthen the sum-capacity of the $K$ user parallel ADT deterministic interference network is achieved by a separate TIN solution over each sub-channel.\n\\end{theorem}\n\n{\\it Proof:} We only need to prove that when each sub-channel satisfies the dominant interference condition (\\ref{eq:dom}), invertibility is implied. Although in this case the bipartite graph may contain cycles, we are still able to construct trees in a way that no cycle is encountered, such that inverting from the output bit leaves can recover all input levels. Similar to the construction given in Theorem \\ref{lemma:inv}, for any input bit level, we leave it through a \\emph{participating} edge and for any output bit level, we leave it through a \\emph{non-participating} edge. When this rule is used in traversing the graph, no cycle can be created. To see this, we assume the opposite. If a cycle exists when we build the tree, then each input bit node is connected to a participating edge for leaving and a non-participating edge for returning. As this is a cycle, the net shift factor encountered must be 0, which means the sum of the strengths of all leaving edges must equal that of all returning edges. This is a contradiction because, by the dominant cyclic partition condition, for each input bit node the strength of the leaving edge is strictly larger than that of the returning edge. So we are guaranteed to end up with the desired tree. Repeating this process completes the proof.\n\\hfill\\mbox{\\rule[0pt]{1.5ex}{1.5ex}}\n\nWe illustrate the process with an example. Consider a sub-channel of a 4 user ADT deterministic interference network, shown in Figure \\ref{fig:dom}. The sub-channel is TIN optimal, as for each user, signal levels that cause interference do not suffer interference, and those that suffer interference cause no interference. Consider the optimal cyclic partition ${\\Pi}^* = \\{\\{1,2,3,4\\}\\}$ with participating edges $\\{e_{12}, e_{23}, e_{34}, e_{41}\\}$. It is easy to verify that the participating link from each transmitter is the strongest. For example, for Transmitter 2, $n_{12} = 3 > \\max(n_{32}, n_{42}) = \\max(2,1) = 2$. Thus the sub-channel also satisfies the dominant interference condition. We now show that it is invertible. Toward this end, consider the input bit $X_{2,(3)}$. Choose the participating edge to connect to the cyclic predecessor Receiver 1. As Receiver 1 is not an end yet, we pass through all of its non-participating edges to reach input nodes (see Figure \\ref{fig:dom}). After arriving at Transmitters 3 and 4, again, follow the participating edges to their cyclic predecessors, Receivers 2 and 3, respectively. Receiver 2 is an end, and from Receiver 3 we go to Transmitter 2 along the non-participating edge. Finally, pass through the participating edge to Receiver 1 and reach the end. It is easy to see that we can invert sequentially from the output end nodes all the way back to recover the desired input bit $X_{2,(3)}$ and the input bits along the way. 
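\n\nThe traversal rule used in this example is easy to state programmatically. The following minimal Python sketch is only illustrative: the dictionaries \\texttt{part\\_out} (mapping each participating input level to the output level at the other end of its unique participating edge) and \\texttt{nonpart\\_in} (mapping each output level to the input levels reached through its non-participating edges) are a hypothetical encoding of the bipartite graphs in Figures \\ref{fig:inv} and \\ref{fig:dom}.\n\\begin{verbatim}\ndef inversion_order(part_out, nonpart_in, root):\n    """Visit order of bit levels when growing the inversion tree from the\n    participating input level `root`: inputs are left through their unique\n    participating edge, outputs are left through all non-participating edges.\n    Under the conditions of this section the walk meets no repeated node;\n    the `seen` set is only a safety guard."""\n    order, seen = [], set()\n\n    def visit_input(x):\n        if ("in", x) in seen:\n            return\n        seen.add(("in", x))\n        order.append(("input", x))\n        visit_output(part_out[x])\n\n    def visit_output(y):\n        if ("out", y) in seen:\n            return\n        seen.add(("out", y))\n        order.append(("output", y))\n        for x in nonpart_in.get(y, []):   # leaves have no such edges\n            visit_input(x)\n\n    visit_input(root)\n    return order\n\\end{verbatim}\nInverting then proceeds from the last visited output levels (the leaves) back toward the root, exactly as described above.\n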
All the other input bits can be recovered following similar procedures.\n\n\\begin{figure}[h]\n\\center\n\\includegraphics[width=5 in]{dom}\n\\caption{\\small $(a)$ A sub-channel that satisfies the TIN optimality condition and dominant interference condition (\\ref{eq:dom}) for the dominant cyclic partition $\\Pi^* = \\{\\{1,2,3,4\\}\\}$. A cycle is highlighted in red. $(b)$ The \\emph{cyclic} bipartite graph and the tree created to invert $X_{2,(3)}$. Note that the graph is undirected, direction sign is added to highlight the order of how the tree is created.}\n\\label{fig:dom}\n\\end{figure}\n\n\\subsubsection{Gaussian Setting}\nWe now proceed to the Gaussian setting. Starting with the 3 user case, we provide an intuitive discussion on why invertibility holds here almost surely.\n\n\\paragraph{3 users}\n\n\n\n\n\nIf the optimal cyclic partition $\\Pi^*$ has two cycles, we assume $\\Pi^* =\\{ \\{1\\},\\{2,3\\} \\}$, without loss of generality. Then $X_{1,u} = \\phi, Y_{2,u} = h_{23} X_{3,u} + n_2 , Y_{3,u} = h_{32} X_{2,u} +n_3 $. The participating inputs are trivially invertible from the outputs within bounded variance noise distortion here, simply by normalizing by the channel realization.\n\nIf $\\Pi^*$ is a single cycle with all 3 users, we assume $\\Pi^* = \\{\\{1,2,3\\}\\}$. Then $X_{1,u} = \\mbox{sign}(X_1) \\times 0.X_{1,(1)}\\ldots X_{1,(n_{31})}, X_{2,u} = \\mbox{sign}(X_2) \\times 0.X_{2,(1)}\\ldots X_{2,(n_{12})}, X_{3,u} = \\mbox{sign}(X_3) \\times 0.X_{3,(1)}\\ldots X_{3,(n_{23})}$. We define $\\Delta \\triangleq n_{12} + n_{23} + n_{31} - n_{21} - n_{32} - n_{13} $, which is larger than 0 almost surely for appropriately large $P$. Instead of finding a single bit as in the ADT deterministic model, we consider a chunk with $\\Delta$ bits, e.g., $X_{2,[1]} = [ X_{2,(1)} \\ldots X_{2,(\\min(\\Delta, n_{12}))} ]$. Operating in units of $\\Delta$ bits, the invertibility process parallels the ADT deterministic model. The effect of additive noise terms becomes vanishingly small at the higher signal levels (thus limited to only an $o(\\log P)$ impact, see \\cite{Cadambe_Jafar_fullyconnected} for this argument), the carry overs across chunks are vanishingly small relative to the size of the chunks, and their number also does not scale with $P$ because the number of chunks remains constant. Thus, the Gaussian setting parallels the deterministic setting within $o(\\log P)$. Note that as the condition for non-invertibility in the ADT deterministic model is approached, i.e., as $\\alpha_{12} + \\alpha_{23} + \\alpha_{31} - \\alpha_{21} - \\alpha_{32} - \\alpha_{13}$ approaches zero, the size of the chunks becomes smaller, and the overhead of carry over bits increases proportionately. However, except when it is exactly zero (the setting with infinite overhead), the overhead does not scale with $P$, thus the GDoF, almost surely, continue to mimic the deterministic setting. 
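\n\nAs a purely illustrative numerical check, the following Python sketch evaluates $\\Delta$ for a hypothetical choice of channel strength exponents, using the quantization $n_{ji} = \\lfloor \\frac{1}{2}\\alpha_{ji}\\log_2 P\\rfloor$ (as used later in the proof of Theorem \\ref{thm}); the specific $\\alpha$ values below are made up for illustration only.\n\\begin{verbatim}\nimport math\n\ndef chunk_size(alpha, P):\n    """Shift mismatch Delta for the 3 user single-cycle case.\n\n    alpha[j][i] : channel strength exponent from Transmitter i to Receiver j\n                  (0-indexed; only the cross exponents are used here).\n    Delta != 0 is the 3 user invertibility condition stated earlier; when\n    Delta > 0 it is also the chunk size used in the argument above."""\n    n = lambda j, i: math.floor(0.5 * alpha[j][i] * math.log2(P))\n    return (n(0, 1) + n(1, 2) + n(2, 0)) - (n(1, 0) + n(2, 1) + n(0, 2))\n\nalpha = [[1.0, 0.6, 0.1],   # hypothetical exponents; diagonal entries unused\n         [0.2, 1.0, 0.5],\n         [0.7, 0.3, 1.0]]\nprint(chunk_size(alpha, P=2**20))   # prints 12: chunks of 12 bits at P = 2^20\n\\end{verbatim}\nAt this hypothetical operating point the chunk comfortably dominates the $O(1)$ carry-over bits, consistent with the discussion above.\n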
\n\n\n\n\\paragraph{Networks with Dominant Partitions}\n\n\\begin{theorem}\\label{gdom}\nFor a $K$ user parallel Gaussian interference network where the TIN optimality condition is satisfied in each sub-channel, if each sub-channel also satisfies\n\\begin{eqnarray}\n\\alpha_{\\Pi^{[m]*}(k)k}^{[m]} > \\alpha_{jk}^{[m]},&& \\forall j,k\\in[K], j \\notin\\{k, \\Pi^{[m]*}(k)\\}, m\\in[M] \\label{eq:gdom}\n\\end{eqnarray}\nthen the sum-GDoF value of the $K$ user parallel Gaussian interference network is achieved by a separate TIN solution over each sub-channel.\n\\end{theorem}\n\n{\\it Proof:} Instead of an appeal to the ADT deterministic model, which could still be made, it is worthwhile in this section to consider a more direct proof. So let us see why the invertibility property is almost surely true, i.e., $H({\\bf X}_u^{[m]} | {\\bf Y}_u^{[m]}) = o(\\log P).$\nWe focus on one sub-channel, and the sub-channel index is omitted. We have\n\\begin{eqnarray}\n- I({\\bf X}_u ; {\\bf Y}_u) \n&=& H({\\bf X}_u| {\\bf Y}_u) - H({\\bf X}_u) \\\\\n&=& \\underbrace{ h({\\bf Y}_u | {\\bf X}_u) }_{o(\\log P)} - h({\\bf Y}_u)\\\\\n\\Longleftrightarrow H({\\bf X}_u | {\\bf Y}_u) &=& H({\\bf X}_u) - h({\\bf Y}_u) + o(\\log P)\n\\end{eqnarray}\nThus, for invertibility, it suffices to prove $H({\\bf X}_u) - h({\\bf Y}_u) = o(\\log P).$\n\nWe prove that when (\\ref{eq:gdom}) holds for sub-channel $m$, the invertibility property is implied.\nTowards this end, we define $V_{i,u} = h_{\\Pi^{[m]*}(i)i} X_{i,u} + Z_{\\Pi^{[m]*}(i)}$, ${\\bf V}_{u} = (V_{1,u}, \\ldots, V_{K,u})$ and prove\n\\begin{eqnarray}\nH({\\bf X}_u) - h({\\bf V}_u) &=& o(\\log P) \\label{inve1} \\\\\nh({\\bf V}_u) - h({\\bf Y}_u) &=& o(\\log P) \\label{inve2} .\n\\end{eqnarray}\nLet us prove them one by one. First, consider (\\ref{inve1}). It can be proved by noticing that $|h_{ \\Pi^{[m]*}(i) i}| = \\sqrt{P^{\\alpha_{\\Pi^{[m]*}(i)i} }}$ so that in ${\\bf V}_u$, all bits in ${\\bf X}_u$ are received above the noise floor. The derivations are similar to those in \\cite{Bresler_Tse,Cadambe_Jafar_fullyconnected}, thus we omit them.\n\nNext, we prove (\\ref{inve2}). Let us rewrite ${\\bf V}_u$ and ${\\bf Y}_u$ in the matrix form\n\\begin{eqnarray}\n{\\bf V}_u = {\\bf G} {\\bf {X}}_u + {\\bf {\\bar{Z}}}, {\\bf Y}_u = {\\bf F} {\\bf X}_u + {\\bf Z}\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n{\\bf G} &=& \\mbox{diag} ( h_{ \\Pi^{[m]*}(1) 1}, h_{ \\Pi^{[m]*}(2) 2} , \\ldots, h_{ \\Pi^{[m]*}(K) K}) \\\\\n{\\bf F} &=& [h_{ji}]_{K \\times K} - \\mbox{diag}(h_{11}, \\ldots, h_{KK})\n\\end{eqnarray}\nand ${\\bf {\\bar{Z}}} = (Z_{\\Pi^{[m]*}(1) }, \\ldots, Z_{\\Pi^{[m]*}(K) })$ is a permutation of ${\\bf {{Z}}} = (Z_1, \\ldots, Z_K)$.\n${\\bf G}$ and ${\\bf F}$ are invertible almost surely and\n\\begin{eqnarray}\n{\\bf F}{\\bf G}^{-1} &=& \\left[\\frac{h_{ji}}{h_{\\Pi^{[m]*}(i) i}}\\right]_{K \\times K} - \\mbox{diag}\\left(\\frac{h_{11}}{h_{\\Pi^{[m]*}(1)1}}, \\ldots, \\frac{h_{KK}}{h_{\\Pi^{[m]*}(K) K}}\\right)\n\\end{eqnarray}\n\nDefine $\\sigma$ as the smallest singular value of ${\\bf F} {\\bf G}^{-1}$, and introduce $\\beta \\triangleq \\min(\\sigma, 1)$. Let us also define ${\\bf Z'} \\sim \\mathcal{N} (0, {\\bf F}{\\bf G}^{-1} ({\\bf F} {\\bf G}^{-1})^{T} - \\beta^2 {\\bf I})$, with ${\\bf Z'}$ independent of ${\\bf Z}$. The positive semidefinite property of the covariance\nmatrix is easily established from the definition of $\\beta$. 
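\n\nExplicitly, although this one-line verification is not spelled out above, for any unit vector ${\\bf x}\\in\\mathbb{R}^{K}$ we have\n$$ {\\bf x}^T\\left({\\bf F}{\\bf G}^{-1}({\\bf F}{\\bf G}^{-1})^{T} - \\beta^2 {\\bf I}\\right){\\bf x} = \\left\\|({\\bf F}{\\bf G}^{-1})^{T}{\\bf x}\\right\\|^2 - \\beta^2 \\geq \\sigma^2 - \\beta^2 \\geq 0,$$\nsince $({\\bf F}{\\bf G}^{-1})^{T}$ has the same singular values as ${\\bf F}{\\bf G}^{-1}$ and $\\beta = \\min(\\sigma,1)\\leq \\sigma$.\n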
We now have\n\\begin{eqnarray}\n&& h({\\bf V}_u) - h({\\bf Y}_u) \\notag\\\\\n&=& h({\\bf G} {\\bf {X}}_u + {\\bf {\\bar{Z}}} ) - h( {\\bf F} {\\bf {X}}_u + {\\bf {{Z}}}) \\\\\n&\\leq& h({\\bf G} {\\bf {X}}_u + {\\bf {\\bar{Z}}}) - h( {\\bf F} {\\bf {X}}_u + \\beta{\\bf {{Z}}}) \\label{z} \\\\\n&=& h({\\bf G} {\\bf {X}}_u + {\\bf {\\bar{Z}}} ) - I( {\\bf F} {\\bf {X}}_u + \\beta{\\bf {{Z}}} ; {\\bf F} {\\bf {X}}_u) - h(\\beta {\\bf Z})\\\\\n&\\leq& h({\\bf G} {\\bf {X}}_u + {\\bf {\\bar{Z}}} ) - I( {\\bf F} {\\bf {X}}_u + \\beta {\\bf {{Z}}} + {\\bf Z}'; {\\bf F} {\\bf {X}}_u) - h(\\beta {\\bf Z}) \\label{data}\\\\\n&=& h({\\bf G} {\\bf {X}}_u + {\\bf {\\bar{Z}}} ) - \\underbrace{h( {\\bf F} {\\bf {X}}_u + \\beta {\\bf {{Z}}} + {\\bf Z'})}_{= h\\left( {\\bf F} {\\bf G}^{-1} ({\\bf G} {\\bf {X}}_u + {\\bf {\\bar{Z}}}) \\right)} + h(\\beta {\\bf {{Z}}} + {\\bf Z'}) - h(\\beta {\\bf Z}) \\\\\n&=& h({\\bf G} {\\bf {X}}_u + {\\bf {\\bar{Z}}} ) - h({\\bf G} {\\bf {X}}_u + {\\bf {\\bar{Z}}} ) - \\log \\left| {\\bf F} {\\bf G}^{-1} \\right|\\\\\n&& +~ \\frac{1}{2} \\log(2 \\pi e)^K \\left| {\\bf F} {\\bf G}^{-1} ({\\bf F} {\\bf G}^{-1})^T - \\beta^2 {\\bf I} + \\beta^2 {\\bf I} \\right| - \\frac{1}{2} \\log(2 \\pi e)^K |\\beta^2 {\\bf I} |\\\\\n&=& - \\log \\left| {\\bf F} {\\bf G}^{-1} \\right| + \\frac{1}{2} \\log \\left( \\left|{\\bf F} {\\bf G}^{-1}\\right| \\left|({\\bf F} {\\bf G}^{-1})^T \\right| \\right)\n- \\frac{K}{2} \\log (\\beta^2) \\\\\n&=& -\\frac{K}{2} \\log (\\beta^2) \\label{st}\n\\end{eqnarray}\nwhere (\\ref{z}) follows from the fact that $\\beta\\leq1$. In (\\ref{data}), we use the data processing inequality, as $ {\\bf F} {\\bf X}_u \\rightarrow {\\bf F} {\\bf X}_u + \\beta{\\bf Z} \\rightarrow{\\bf F} {\\bf X}_u + \\beta {\\bf Z} + {\\bf Z'}$ forms a Markov chain. \n\nIt only remains to show that $-\\log \\beta = o(\\log P)$. As $\\beta = \\min(\\sigma,1)$, it suffices to show that $\\sigma$ is bounded away from zero by a constant that does not depend on $P$. By definition, $\\sigma = \\min_{{\\bf x}} || {\\bf F} {\\bf G}^{-1} {\\bf x}||$, where ${\\bf x} \\in {\\mathbb R}^{K \\times 1}$ is a unit vector. \nLet us prove the claim by contradiction. Choose a small positive $\\epsilon$ such that $\\epsilon^2 < \\frac{1}{2K^3}$. Suppose, on the contrary, that $\\sigma \\leq \\epsilon$ for arbitrarily large $P$; then we can choose such a $P$, sufficiently large that\n\\begin{eqnarray}\n\\sigma = \\min_{||{\\bf x}||=1} ||{\\bf F} {\\bf G}^{-1} {\\bf x}|| &\\leq& \\epsilon \\label{yy}\\\\\n\\frac{|h_{ji}|}{|h_{\\Pi^{[m]*}(i) i}|} = \\sqrt{P^{ \\alpha_{ji} - \\alpha_{\\Pi^{[m]*}(i) i} } }&\\leq& \\epsilon, \\forall j \\notin \\{ i, \\Pi^{[m]*}(i) \\}. \\label{small}\n\\end{eqnarray}\nSuppose the minimizing unit vector that corresponds to $\\sigma$ is ${\\bf x^*} = [x_1, \\ldots, x_K]^T$. Then the $j$-th entry of the $K \\times 1$ vector ${\\bf F} {\\bf G}^{-1} {\\bf x^*}$ (denoted as $y_j$) is \n\\begin{align}\ny_j = \\sum_{i=1, i \\neq j}^K \\frac{h_{ji}}{h_{\\Pi^{[m]*}(i) i}} x_i = \\sum_{i=1, i \\neq j, \\Pi^{[m]*}(i) \\neq j}^K \\frac{h_{ji}}{h_{\\Pi^{[m]*}(i) i}} x_i + x_{i_o}\n\\end{align}\nwhere $\\Pi^{[m]*}(i_o) = j$, and its absolute value satisfies\n\\begin{align}\n|y_j| &\\geq |x_{i_o}| - \\sum_{i=1, i \\neq j, \\Pi^{[m]*}(i) \\neq j}^K \\left| \\frac{h_{ji}}{h_{\\Pi^{[m]*}(i) i}} x_i \\right| \\\\\n& \\geq |x_{i_o}| - (K-2) \\epsilon \\label{ee}\n\\end{align}\nwhere (\\ref{ee}) follows from (\\ref{small}) and $|x_i| \\leq 1$ as ${\\bf x}^*$ is a unit vector. 
Also,\n\\begin{align}\n1 = \\sum_{i_o = 1}^K |x_{i_o}|^2 &\\leq \\sum_{j=1}^K \\Big( |y_j| + (K-2) \\epsilon \\Big)^2 \\label{yj} \\\\\n&\\leq \\sum_{j=1}^K 2 \\Big( |y_j|^2 + (K-2)^2 \\epsilon^2 \\Big) \\\\\n&\\leq 2 \\epsilon^2 + 2K(K-2)^2 \\epsilon^2 \\leq 2K^3 \\epsilon^2 \\label{ss}\n\\end{align}\nwhere we use (\\ref{ee}) to get (\\ref{yj}), and (\\ref{yy}) is used in (\\ref{ss}) to obtain $\\sum_{j=1}^K |y_j|^2 \\leq \\epsilon^2$. We get the desired contradiction, as $\\epsilon^2 < \\frac{1}{2K^3}$ by the choice of $\\epsilon$.\\hfill\\mbox{\\rule[0pt]{1.5ex}{1.5ex}}\n\nThe above proof relies heavily on the fact that $\\sigma$ is bounded away from zero, so that $-\\log\\beta = o(\\log P)$, which is guaranteed only when the dominant interference condition is satisfied. In general, we cannot use this direct matrix inversion method to prove invertibility in the Gaussian case. Proofs along the lines of the ADT deterministic model seem more generally applicable.\n\n\n\\subsection{GDoF Region for Parallel Networks}\\label{sec:SFregion}\nThe GDoF region of a TIN optimal $K$ user interference network, as stated in Theorem \\ref{theorem:Geng}, is comprised only of sum-GDoF bounds for all subsets of users. For parallel TIN optimal interference networks, our results characterize the tight sum-GDoF bounds of any subset of users. So it is natural to wonder if the set of all tight sum-GDoF bounds for all subsets of users characterizes the entire GDoF region, and therefore settles the optimality of TIN for the entire GDoF \\emph{region} in the \\emph{parallel} setting. In this section, we show through a counter-example that this is not the case. The following theorem states the result.\n\n\\begin{theorem}\\label{theorem:gap}\nFor the parallel $K>2$ user Gaussian interference network with $M>1$ sub-channels, each of which is individually TIN optimal and invertible, the region described by the tightest sum-GDoF bounds of all subsets of users is in general \\emph{not} the same as the region achievable by separate TIN over each sub-channel.\n\\end{theorem}\n\n\\noindent{\\it Remark: Note that if either $K=2$ or $M=1$, then the two regions are the same. When $K>2$ and $M>1$, even though the regions are not the same, the sum-GDoF values are indeed the same, as we have shown in Theorem \\ref{thm}. Theorem \\ref{theorem:gap} also applies to the ADT deterministic model. This is readily seen because the counter-example presented below extends to the deterministic setting by choosing integer values $n_{ij}^{[m]}=10\\alpha_{ij}^{[m]}$, $\\forall i,j\\in\\{1,2,3\\}, m \\in \\{1,2\\}$, $\\epsilon=1$.}\n\n\n\\begin{figure}[h]\n\\center\n\\includegraphics[width= 3 in]{reg}\n\\caption{\\small A $K=3$ user Gaussian interference network with 2 sub-channels. The channel strength level is indicated for each link. Each sub-channel satisfies the TIN optimality condition and the dominant interference condition.}\n\\label{reg}\n\\end{figure}\n\\noindent\\hspace{2em}{\\it Proof: }\nConsider a $K = 3$ user Gaussian interference network with $M = 2$ sub-channels, as shown in Figure \\ref{reg}. \nIt is easily seen that both sub-channels satisfy the TIN optimality condition and the dominant interference condition, for all subsets of users. Therefore, Theorem \\ref{gdom} establishes that the sum-GDoF value of all subsets of users in this parallel Gaussian interference network is achieved by separate TIN over each sub-channel. 
Incidentally, the sum-GDoF value for all $3$ users is 3, achieved by the GDoF tuple $(d_1,d_2,d_3) = (1,1,1)$ where every user gets 0.5 GDoF over each sub-channel by transmitting at full power and each receiver treats interference as noise.\n\nWe now view each TIN optimal sub-channel by itself. The GDoF region of the first sub-channel by itself is the set of tuples $(d_1^{[1]}, d_2^{[1]}, d_3^{[1]}) \\in \\mathbb{R}_{+}^3$ defined by the following constraints. \n\n\\begin{eqnarray}\n d_i^{[1]} &\\leq& 1, ~~\\forall i \\in \\{1,2,3\\}\\\\\nd_i^{[1]} + d_j^{[1]} &\\leq& 1.5, ~~\\forall i, j \\in \\{1,2,3\\}, i \\neq j \\label{zs} \\\\\nd_1^{[1]} + d_2^{[1]} + d_3^{[1]} &\\leq& 1.5\n\\end{eqnarray}\n\nSimilarly, the individual GDoF region for the second sub-channel is \n\\begin{eqnarray}\n d_i^{[2]} &\\leq& 1, ~~\\forall i \\in \\{1,2,3\\}\\\\\nd_i^{[2]} + d_j^{[2]} &\\leq& 1 + \\epsilon, ~~\\forall i, j \\in \\{1,2,3\\}, i \\neq j \\label{f1} \\\\\nd_1^{[2]} + d_2^{[2]} + d_3^{[2]} &\\leq& 1.5\n\\end{eqnarray}\n\nConsidering all sub-channels together, the sum-GDoF bounds for the parallel interference network (each of which is tight by itself, as proved in Theorem \\ref{thm}) are the following.\n\\begin{eqnarray}\n{d}_i &\\leq& 1+ 1 = 2, ~~\\forall i \\in \\{1,2,3\\}\\label{eq:c1}\\\\\n{d}_i + {d}_j &\\leq& 1.5 + 1 + \\epsilon = 2.5 + \\epsilon, ~~\\forall i, j \\in \\{1,2,3\\}, i \\neq j\\\\\n{d}_1 + {d}_2 + {d}_3 &\\leq& 1.5 + 1.5 = 3\\label{eq:c3}\n\\end{eqnarray}\n\nNow, consider the GDoF tuple $({d}_1, {d}_2, {d}_3) = (2, 0.5, 0.5)$ which is inside the region described by (\\ref{eq:c1})-(\\ref{eq:c3}). We prove this tuple is not achievable by separate TIN. In other words, we show that there does not exist a valid $(d_1^{[1]}, d_2^{[1]}, d_3^{[1]})$ and a valid $(d_1^{[2]}, d_2^{[2]}, d_3^{[2]})$, such that $(d_1^{[1]}+d_1^{[2]}, d_2^{[1]}+d_2^{[2]}, d_3^{[1]}+d_3^{[2]}) = (2, 0.5, 0.5)$. This is shown as follows.\n\nIn order to have $d_1^{[1]}+d_1^{[2]} = 2$, we must have $d_1^{[1]} = d_1^{[2]} = 1$. Given $d_1^{[2]} = 1$, from (\\ref{f1}), we must have $d_2^{[2]}\\leq\\epsilon$ and $d_3^{[2]} \\leq \\epsilon$. Since $d_2^{[2]}\\leq\\epsilon$, then, in order to have $d_2^{[1]}+d_2^{[2]}=0.5$, we must have \n$d_2^{[1]} \\geq 0.5 - \\epsilon$. \nSince $d^{[1]}_1=1$, $d_2^{[1]}\\geq 0.5-\\epsilon$ and $d_1^{[1]}+d^{[1]}_2+d^{[1]}_3\\leq 1.5$, we must have $d^{[1]}_3\\leq \\epsilon$. Now, since $d^{[1]}_3\\leq\\epsilon$ and $d^{[2]}_3\\leq\\epsilon$, we must have $d^{[1]}_3+d^{[2]}_3\\leq 2\\epsilon$. And since $\\epsilon>0$ can be arbitrarily small, it contradicts the requirement that $d^{[1]}_3+d^{[2]}_3=0.5$, thus completing the proof by counter-example.\\hfill\\mbox{\\rule[0pt]{1.5ex}{1.5ex}}\n\nTo summarize, for parallel interference networks (deterministic and Gaussian), where each sub-channel is individually TIN optimal and invertible, either the separate TIN achievable region is not tight or we need more than sum-rate bounds. In light of this observation, the optimality of separate TIN for sum-GDoF is especially remarkable.\n\n\\section{Proofs}\n\\subsection{Redundancy of non-negativity constraints in $LP_1$}\\label{sec:sign}\nBefore we prove the redundancy of non-negativity constraints in $LP_1$, let us first highlight the non-trivial nature of the problem. 
Consider the following $LP$, which seems similar to $LP_1$.\n$$\\max{R_1+R_2+R_3~\\mbox{ such that }~ R_1+R_2\\leq 10, R_1+R_3\\leq 10, R_2+R_3\\leq 30, (R_1, R_2, R_3)\\in\\mathbb{R}^3_+}$$\nIt is easy to see that the max value is $20$, achieved with $(R_1,R_2,R_3)=(0,10,10)$. However, if we ignore the non-negativity constraint $(R_1, R_2, R_3)\\in\\mathbb{R}^3_+$, then we can achieve a sum value of 25 with $(R_1,R_2,R_3)=(-5,15,15)$. Thus, in this $LP$, which looks similar to $LP_1$, one \\emph{cannot} ignore the non-negativity constraints. So let us see why this can be done in $LP_1$.\n\nReturning to the sum-GDoF characterization in $LP_1$, we already assumed that the TIN optimality condition (\\ref{eq:cond}) is satisfied by the network, but let us now further assume that it is satisfied with \\emph{strict} inequality. We note that there is no loss of generality here, because the case with equality immediately follows from a continuity argument. Strict inequality in the TIN optimality condition means the following is true.\n\\begin{equation}\n\\alpha_{ii}> \\max_{j:j\\neq i}\\{\\alpha_{ji}\\}+\\max_{k:k\\neq i}\\{\\alpha_{ik}\\},~~~\\forall i\\in[K] \\label{eq:positiverate}\n\\end{equation}\nWe need the following lemmas.\n\\begin{lemma}\\label{lemma:positiverate}\nGiven that (\\ref{eq:positiverate}) is satisfied, the sum-GDoF must be achieved by a GDoF tuple $(d_1, d_2, \\cdots, d_K)$ with $d_k>0, \\forall k\\in[K]$.\n\\end{lemma}\n\\noindent\\hspace{2em}{\\it Proof: } Suppose that the sum-GDoF are achieved with a GDoF tuple where $d_i=0$ for some $i\\in[K]$. Replacing $n_{ij}$ with $\\alpha_{ij}$ in Fig. \\ref{fig:cond}, it is evident that user $i$ has $\\alpha_{ii}- \\max_{j:j\\neq i}\\{\\alpha_{ji}\\}-\\max_{k:k\\neq i}\\{\\alpha_{ik}\\}$ signal levels that neither cause interference, nor suffer interference. Thus, user $i$ can be assigned $d_i = \\alpha_{ii}- \\max_{j:j\\neq i}\\{\\alpha_{ji}\\}-\\max_{k:k\\neq i}\\{\\alpha_{ik}\\}>0$ GDoF without hurting any other user, thus improving the sum-GDoF value. Since the sum-GDoF value cannot be improved, we have a contradiction that completes the proof.\\hfill$\\Box$\n\n\\begin{lemma}\\label{lemma:drop}\nConsider a region\n\\begin{eqnarray}\n\\mathcal{D}&=&\\mathbb{R}^K_+\\cap\\mathcal{D}_u\n\\end{eqnarray}\nwhere $\\mathcal{D}_u\\subset\\mathbb{R}^K$ is closed and convex. If $\\max_{(d_1, d_2, \\cdots, d_K)\\in \\mathcal{D}}\\sum_{k=1}^Kd_k\\stackrel{\\triangle}{=} S<\\infty$ is achieved by a tuple $(d_1, d_2,\\cdots, d_K)$ with $d_k>0, \\forall k\\in[K]$, then \n\\begin{eqnarray}\n\\max_{(d_1, d_2, \\cdots, d_K)\\in \\mathcal{D}}\\sum_{k=1}^Kd_k&=&\n\\max_{(d_1, d_2, \\cdots, d_K)\\in \\mathcal{D}_u}\\sum_{k=1}^Kd_k\n\\end{eqnarray}\n\\end{lemma}\n\\noindent\\hspace{2em}{\\it Proof: } To set up a proof by contradiction, suppose, on the contrary, that while the $\\max$ sum value in $\\mathcal{D}$ is $S$, which is achieved by the tuple ${\\bf d}\\in\\mathcal{D}$ with $d_k>0, \\forall k\\in[K]$, there exists a tuple ${\\bf d}_u\\in\\mathcal{D}_u$ that achieves the sum value $S_u$ such that $S_u>S$. Define ${\\bf v}\\triangleq{\\bf d}_u-{\\bf d}$ and $S_v\\triangleq\\sum_{k=1}^K v_k=S_u-S>0$. Consider the tuple ${\\bf d}_\\epsilon={\\bf d}+\\epsilon {\\bf v}$, with $\\epsilon\\in(0,1]$, chosen such that ${\\bf d}_\\epsilon\\in\\mathbb{R}^K_+$. This is possible because all elements of ${\\bf d}$ are strictly positive. Since $\\mathcal{D}_u$ is convex, and we have both ${\\bf d}\\in\\mathcal{D}_u$ and ${\\bf d}_u\\in\\mathcal{D}_u$, we must have the convex combination of the two, ${\\bf d}_\\epsilon=(1-\\epsilon){\\bf d}+\\epsilon{\\bf d}_u\\in\\mathcal{D}_u$. 
Since we also have ${\\bf d}_\\epsilon\\in\\mathbb{R}^K_+$, it follows that ${\\bf d}_\\epsilon\\in\\mathcal{D}$. But this is a contradiction, because the sum-value achieved by ${\\bf d}_\\epsilon$ is $S_\\epsilon=S+\\epsilon S_v>S$, whereas $S$ was assumed to be the max value in $\\mathcal{D}$. \\hfill$\\Box$\n\n\nBy choosing $\\mathcal{D}$ as the constraint space for $LP_1$, and $\\mathcal{D}_u$ as the same region without the non-negativity constraint on the $d_i$, Lemma \\ref{lemma:positiverate} and Lemma \\ref{lemma:drop} imply that if (\\ref{eq:positiverate}) is satisfied, then there is no loss of generality in dropping the non-negativity constraints in $LP_1$. \n\nFinally, in the case where the TIN optimality condition is satisfied possibly with equalities, a simple continuity argument can be applied as follows. Let us increase all $\\alpha_{ii}$ by a small positive amount $\\epsilon$. The resulting network is still TIN optimal, but now it satisfies the TIN optimality condition with a strict inequality. Since each of the bounds is perturbed by at most $K\\epsilon$, the sum-GDoF for the new network cannot exceed that of the original by more than $K\\epsilon$. Note that for the new network, because of Lemma \\ref{lemma:positiverate} and Lemma \\ref{lemma:drop} one can drop the non-negativity constraints with no loss of generality. Thus, in the limit $\\epsilon\\rightarrow 0+$, the sum-GDoF of the old network and the new network converge to the same value, as do the two linear programs, with and without the non-negativity constraints. Thus, even when the users satisfy only the TIN optimality condition (\\ref{eq:cond}) there is no loss of generality in dropping the non-negativity constraints. \n\n\\subsection{Outer Bound Proof of Example 1}\nBefore presenting the outer bound proof of Theorem \\ref{Kdetthm}, we first provide a proof specifically for Example 1, in order to illustrate the main insights in a simpler setting. For clarity of exposition, we redraw the network in Figure \\ref{pex1}. We want to prove that the sum-capacity of this 3 user parallel ADT deterministic interference network is bounded above by $18$.\n\n\\begin{figure}[h]\n\\center\n\\includegraphics[width=6in]{pex1}\n\\caption{\\small The same 3 user parallel ADT deterministic interference network as in Example 1. All the interfering input bits are labeled. Those that do \\emph{not} belong to $X_{i,u}^{[m]}$ are made solid.}\n\\label{pex1}\n\\end{figure}\n\nFor Receiver 1, from Fano's inequality, we have\n\\begin{eqnarray}\nn(R_1 - \\epsilon) &\\leq& I(W_1; Y_1^{[1]^n}, Y_1^{[2]^n}, Y_1^{[3]^n}) \\\\\n&=& H(Y_1^{[1]^n}, Y_1^{[2]^n}, Y_1^{[3]^n}) - H(Y_1^{[1]^n}, Y_1^{[2]^n}, Y_1^{[3]^n} | W_1) \\\\\n&\\leq& 9n - H(X_{2,(1)}^{[1]^n} \\oplus X_{3,(1)}^{[1]^n}, X_{2,(2)}^{[1]^n} \\oplus X_{3,(2)}^{[1]^n}, X_{2,(1)}^{[2]^n} \\oplus X_{3,(1)}^{[2]^n}, X_{2,(2)}^{[2]^n} \\oplus X_{3,(2)}^{[2]^n}) \\label{e1}\n\\end{eqnarray}\nwhere (\\ref{e1}) follows from the fact that each bit level can carry at most 1 bit of information.\n\nFor Receiver 2, we provide the bits that are sent from Transmitter 2 and cause interference at undesired receivers, i.e., the bits labeled in Figure \\ref{pex1}, as side information from a genie. 
Then we have\n\\begin{eqnarray}\n&& n(R_2 - \\epsilon) \\notag \\\\\n&\\leq& I(W_2; Y_2^{[1]^n}, Y_2^{[2]^n}, Y_2^{[3]^n}, X_{2,(1)}^{[1]^n}, X_{2,(2)}^{[1]^n}, X_{2,(1)}^{[2]^n}, X_{2,(2)}^{[2]^n}, X_{2,(1)}^{[3]^n}) \\\\\n&=& H(Y_2^{[1]^n}, Y_2^{[2]^n}, Y_2^{[3]^n}, X_{2,(1)}^{[1]^n}, X_{2,(2)}^{[1]^n}, X_{2,(1)}^{[2]^n}, X_{2,(2)}^{[2]^n},X_{2,(1)}^{[3]^n}) \\notag \\\\\n&& -~ H(Y_2^{[1]^n}, Y_2^{[2]^n}, Y_2^{[3]^n}, X_{2,(1)}^{[1]^n}, X_{2,(2)}^{[1]^n}, X_{2,(1)}^{[2]^n}, X_{2,(2)}^{[2]^n},X_{2,(1)}^{[3]^n} | W_2) \\\\\n&=& H(X_{2,(2)}^{[2]^n}) + H(X_{2,(1)}^{[1]^n}, X_{2,(2)}^{[1]^n}, X_{2,(1)}^{[2]^n},X_{2,(1)}^{[3]^n} | X_{2,(2)}^{[2]^n}) \\notag \\\\\n&& +~ \\underbrace{ H(Y_2^{[1]^n}, Y_2^{[2]^n}, Y_2^{[3]^n} | X_{2,(1)}^{[1]^n}, X_{2,(2)}^{[1]^n}, X_{2,(1)}^{[2]^n}, X_{2,(2)}^{[2]^n},X_{2,(1)}^{[3]^n})}_{\\leq 4n} \\notag \\\\\n&& - H(X_{3,(1)}^{[1]^n}, X_{3,(1)}^{[3]^n}, X_{3,(2)}^{[3]^n} \\oplus X_{1,(1)}^{[3]^n} ) \\\\\n&\\leq& 5n + H(X_{2,(1)}^{[1]^n}, X_{2,(2)}^{[1]^n}, X_{2,(1)}^{[2]^n},X_{2,(1)}^{[3]^n} | X_{2,(2)}^{[2]^n}) - H(X_{3,(1)}^{[1]^n}, X_{3,(1)}^{[3]^n}, X_{3,(2)}^{[3]^n} \\oplus X_{1,(1)}^{[3]^n}) \\label{e2}\n\\end{eqnarray}\nwhere in (\\ref{e2}), the positive term is exactly ${\\bf X}_{2,u}$ with conditioning on other interfering bit (solid node in Figure \\ref{pex1}), and the negative term is the interference. Similarly, for Receiver 3, we have\n\\begin{eqnarray}\nn(R_3 - \\epsilon) &\\leq& 4n + \\underbrace{ H(X_{3,(1)}^{[1]^n}, X_{3,(1)}^{[2]^n}, X_{3,(2)}^{[2]^n},X_{3,(1)}^{[3]^n}, X_{3,(2)}^{[3]^n} | X_{3,(2)}^{[1]^n})}_{=H( {\\bf X}_{3,u}^n | X_{3,(2)}^{[1]^n}) } - \\underbrace{H(X_{2,(1)}^{[2]^n}, X_{2,(1)}^{[3]^n})}_{\\mbox{Interference}}. \\label{e3}\n\\end{eqnarray}\nAdding (\\ref{e1}), (\\ref{e2}) and (\\ref{e3}), we have\n\\begin{eqnarray}\nn(R_1 + R_2 + R_3 - \\epsilon) &\\leq& 18n + H( {\\bf X}_{2,u}^n | X_{2,(2)}^{[2]^n}) + H( {\\bf X}_{3,u}^n | X_{3,(2)}^{[1]^n}) \\notag \\\\\n&& -~ H(X_{2,(1)}^{[1]^n} \\oplus X_{3,(1)}^{[1]^n}, X_{2,(2)}^{[1]^n} \\oplus X_{3,(2)}^{[1]^n}, X_{2,(1)}^{[2]^n} \\oplus X_{3,(1)}^{[2]^n}, X_{2,(2)}^{[2]^n} \\oplus X_{3,(2)}^{[2]^n}) \\notag \\\\\n&&-~ H(X_{3,(1)}^{[1]^n}, X_{3,(1)}^{[3]^n}, X_{3,(2)}^{[3]^n} \\oplus X_{1,(1)}^{[3]^n} ) - H(X_{2,(1)}^{[2]^n}, X_{2,(1)}^{[3]^n}) \\\\\n&\\leq& 18n + H( {\\bf X}_{2,u}^n, {\\bf X}_{3,u}^n | X_{1,(1)}^{[3]^n}, X_{2,(2)}^{[2]^n}, X_{3,(2)}^{[1]^n}) \\notag \\\\\n&& -~ H(X_{2,(1)}^{[1]^n} \\oplus X_{3,(1)}^{[1]^n}, X_{2,(2)}^{[1]^n}, X_{2,(1)}^{[2]^n} \\oplus X_{3,(1)}^{[2]^n}, X_{3,(2)}^{[2]^n},\\ldots \\notag \\\\\n&& X_{3,(1)}^{[1]^n}, X_{3,(1)}^{[3]^n}, X_{3,(2)}^{[3]^n}, X_{2,(1)}^{[2]^n}, X_{2,(1)}^{[3]^n} | X_{1,(1)}^{[3]^n}, X_{2,(2)}^{[2]^n}, X_{3,(2)}^{[1]^n}) \\label{e4} \\\\\n&=& 18n + H( {\\bf X}_{1,u}^n, {\\bf X}_{2,u}^n, {\\bf X}_{3,u}^n | X_{1,(1)}^{[3]^n}, X_{2,(2)}^{[2]^n}, X_{3,(2)}^{[1]^n}) \\notag \\\\\n&& -~ H( {\\bf Y}_{1,u}^n, {\\bf Y}_{2,u}^n, {\\bf Y}_{3,u}^n | X_{1,(1)}^{[3]^n}, X_{2,(2)}^{[2]^n}, X_{3,(2)}^{[1]^n}) \\\\\n&=& 18n\n\\end{eqnarray}\nwhere in (\\ref{e4}), the second term follows from the independence of ${\\bf X}_i$ and in the third term, we add conditioning on $X_{1,(1)}^{[3]}, X_{2,(2)}^{[2]}, X_{3,(2)}^{[1]}$, which cannot increase entropy. The negative term in (\\ref{e4}) is now the interfering signals resulting from ${\\bf X}_{i,u}$, i.e., ${\\bf Y}_{i,u}$. In the last step, we use the invertibility property, already verified for Example 1. 
Normalizing by $n$ and applying the limit $n\\rightarrow\\infty$, we arrive at the desired outer bound.\n\n\\subsection{Proof of Theorem \\ref{Kdetthm}} \\label{p1}\nCorollary \\ref{col:decompose} provides the achievable rate $\\sum_{m=1}^M \\left[ \\sum_{i=1}^K n_{ii}^{[m]} - w(\\Pi^{[m]*})\\right]$ by separate TIN over each sub-channel. We only need to prove that it is an outer bound, under the assumption that each sub-channel is invertible. Consider the optimal cyclic partition for each sub-channel. Then by definition, $w(\\Pi^{[m]*}) = \\sum_{i=1}^K n_{\\Pi^{[m]*} (i) i}$. We define $i_{\\max}^{[m]} \\triangleq {\\mbox{argmax}}_ {j \\neq i} ~n_{ji}^{[m]}$ to be the user that receives the most interference from Transmitter $i$ in sub-channel $m$. Writing the binary expansion of the channel input,\n\\begin{eqnarray}\nX_i^{[m]}\n&=& \\sum_{b = 1}^{n_{ii}^{[m]}} X_{i,(b)}^{[m]} 2^{-b} \\\\\n&=& \\underbrace{ \\sum_{b = 1}^{n_{\\Pi^{[m]*} (i)i}^{[m]}} X_{i,(b)}^{[m]} 2^{-b} }_{\\triangleq X_{i,u}^{[m]}} + \\underbrace{ \\sum_{b = n_{\\Pi^{[m]*} (i)i}^{[m]}+1}^{n_{i_{\\max}^{[m]}i}^{[m]}} X_{i,(b)}^{[m]} 2^{-b} }_{\\triangleq X_{i,v}^{[m]}} + \\underbrace{ \\sum_{b = n_{i_{\\max}^{[m]}i}^{[m]}+1}^{n_{ii}^{[m]}} X_{i,(b)}^{[m]} 2^{-b} }_{\\triangleq X_{i,q}^{[m]}}\\\\\n&=& X_{i,u}^{[m]} + X_{i,v}^{[m]} + X_{i,q}^{[m]}\n\\end{eqnarray}\nwhere $X_{i,u}^{[m]}, X_{i,v}^{[m]}, X_{i,q}^{[m]}$ are the bits that interfere at Receiver $\\Pi^{[m]*} (i)$, the other bits that interfere at Receiver $i^{[m]}_{\\max}$ and the remaining input bits, respectively (see Figure \\ref{weak}). We use ${\\bf X}_{i,u}$ to denote the stack of $X_{i,u}^{[m]}$ for all sub-channels, i.e., ${\\bf X}_{i,u} = [X_{i,u}^{[1]}, \\ldots, X_{i,u}^{[M]}]$. Similar notation is used for ${\\bf X}_{i,v}$ with $v$ replacing $u$. \n\n\\begin{figure}\n\\center\n\\includegraphics[width = 4 in]{weak}\n\\caption{\\small The signal levels of Transmitter $i$ and Receiver $i$. As $n_{ii}^{[m]} \\geq \\max_{j\\neq i} n_{ji}^{[m]} + \\max_{k \\neq i} n_{ik}^{[m]}$, the signal levels that cause interference ($X_{i,u}^{[m]}, X_{i,v}^{[m]}$) suffer no interference at the desired receiver.}\n\\label{weak}\n\\end{figure}\n\nGive ${\\bf X}_{i,u}, {\\bf X}_{i,v}$ as side information from a genie to Receiver $i$. 
Then from Fano's inequality, we have\n\\begin{eqnarray}\n&& n(R_i - \\epsilon) \\notag \\\\\n&\\leq& I(W_i; {\\bf Y}_i^n, {\\bf X}_{i,u}^n, {\\bf X}_{i,v}^n) \\\\\n&=& H({\\bf Y}_i^n, {\\bf X}_{i,u}^n, {\\bf X}_{i,v}^n) - H( {\\bf Y}_i^n, {\\bf X}_{i,u}^n, {\\bf X}_{i,v}^n | W_i) \\\\\n&=& H({\\bf X}_{i,u}^n | {\\bf X}_{i,v}^n) + H({\\bf X}_{i,v}^n) + H({\\bf Y}_i^n | {\\bf X}_{i,u}^n, {\\bf X}_{i,v}^n) - H( {\\bf Y}_i^n| W_i) \\\\\n&\\leq& H({\\bf X}_{i,u}^n | {\\bf X}_{i,v}^n) + n\\sum_{m=1}^{M}( n_{i_{\\max}^{[m]}i}^{[m]} - n_{\\Pi^{[m]*} (i)i}^{[m]}) + n\\sum_{m=1}^{M}( n_{ii}^{[m]} - n_{i_{\\max}^{[m]}i}^{[m]} ) - H( {\\bf Y}_i^n | W_i) \\label{detn} \\\\\n&=& H({\\bf X}_{i,u}^n | {\\bf X}_{i,v}^n) - H( {\\bf Y}_i^n | W_i) + n\\sum_{m=1}^{M}( n_{ii}^{[m]} - n_{\\Pi^{[m]*} (i)i}^{[m]} ) \\label{detri}\n\\end{eqnarray}\nwhere the second term in (\\ref{detn}) follows from the fact that the entropy of $X_{i,v}^{[m]}$ is at most the number of bits therein, and the third term in (\\ref{detn}) is due to the property that the signal levels in ${\\bf Y}_i$ that receive ${\\bf X}_{i,u}, {\\bf X}_{i,v}$ do not suffer interference (see Figure \\ref{weak}), because each sub-channel is TIN optimal.\n\nAdding (\\ref{detri}) for $i \\in \\{1,\\ldots,K\\}$, we have\n\\begin{eqnarray}\n\\sum_{i=1}^K n( R_i - \\epsilon) &\\leq& \\sum_{i=1}^K H({\\bf X}_{i,u}^n | {\\bf X}_{i,v}^n) - \\sum_{i=1}^K H( {\\bf Y}_i^n | W_i) + n\\sum_{i=1}^K \\sum_{m=1}^{M}( n_{ii}^{[m]} - n_{\\Pi^{[m]*} (i)i}^{[m]} ) \\\\\n&\\leq& H({\\bf X}_{1,u}^n, \\ldots, {\\bf X}_{K,u}^n| {\\bf X}_{1,v}^n, \\ldots, {\\bf X}_{K,v}^n) - \\sum_{i=1}^K H( {\\bf Y}_i^n | W_i, {\\bf X}_{1,v}^n, \\ldots , {\\bf X}_{K,v}^n) \\notag \\\\\n&& ~+ n \\sum_{m=1}^{M}( \\sum_{i=1}^K n_{ii}^{[m]} - \\sum_{i=1}^K n_{\\Pi^{[m]*} (i)i}^{[m]} ) \\label{ind} \\\\\n&=& H({\\bf X}_{1,u}^n, \\ldots, {\\bf X}_{K,u}^n| {\\bf X}_{1,v}^n, \\ldots, {\\bf X}_{K,v}^n) - H( {\\bf Y}_{1,u}^n, \\ldots, {\\bf Y}_{K,u}^n | {\\bf X}_{1,v}^n, \\ldots , {\\bf X}_{K,v}^n) \\notag \\\\\n&& +~ n \\sum_{m=1}^{M}\\left[ \\sum_{i=1}^K n_{ii}^{[m]} - w(\\Pi^{[m]*}) \\right] \\label{yu} \\\\\n&=&n \\sum_{m=1}^{M}\\left[ \\sum_{i=1}^K n_{ii}^{[m]} - w(\\Pi^{[m]*}) \\right] \\label{fin}\n\\end{eqnarray}\nwhere (\\ref{ind}) follows from the independence of ${\\bf X}_i$ and the fact that conditioning does not increase entropy. The second term of (\\ref{yu}) follows from the definition of $Y_{k,u}^{[m]} = \\sum_{i=1,i \\neq k}^K 2^{ n_{ki}^{[m]} } X_{i,u}^{[m]}$ and the fact that given the desired message $W_k$ and the $X_{i,v}^{[m]}$'s, the only remaining uncertainty in $Y_{k}^{[m]}$ is due to $Y_{k,u}^{[m]}$. (\\ref{fin}) is due to the invertibility assumption. Normalizing (\\ref{fin}) by $n$ and applying the limit $n\\rightarrow\\infty$, we arrive at the desired outer bound. \n\n\\subsection{Proof of Theorem \\ref{thm}} \\label{p3}\nAs separate TIN achieves GDoF $\\sum_{m=1}^M \\left[ \\sum_{i=1}^K \\alpha_{ii}^{[m]} - w(\\Pi^{[m]*}) \\right]$, we prove that this is an outer bound. The proof is similar to that for the ADT deterministic model, with the difference that the inputs of the Gaussian network have an average power constraint of 1. 
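\n\nAs a side note, for small $K$ the quantities in this expression are easy to evaluate exhaustively. The sketch below is illustrative only: it assumes the cyclic partition characterization discussed earlier, namely that the optimal cyclic partition $\\Pi^{[m]*}$ is the one minimizing $\\sum_{i}\\alpha_{ii}^{[m]} - w(\\Pi^{[m]})$ (equivalently, maximizing the weight $w(\\Pi^{[m]})$), identifies cyclic partitions with permutations whose cycles form the partition, and takes singleton cycles to contribute zero weight; the array \\texttt{alpha} is a hypothetical input.\n\\begin{verbatim}\nfrom itertools import permutations\n\ndef optimal_cyclic_partition(alpha):\n    """Brute-force search (O(K!), fine for small K) over cyclic partitions.\n\n    alpha[j][i] : channel strength exponent from Transmitter i to Receiver j\n                  (0-indexed), with alpha[i][i] the direct links.\n    Returns (w_star, sum_gdof, pred_star), where pred_star[i] is the cyclic\n    predecessor of user i in a weight-maximizing partition."""\n    K = len(alpha)\n    w_star, pred_star = 0.0, tuple(range(K))\n    for pred in permutations(range(K)):\n        w = sum(alpha[pred[i]][i] for i in range(K) if pred[i] != i)\n        if w > w_star:\n            w_star, pred_star = w, pred\n    sum_gdof = sum(alpha[i][i] for i in range(K)) - w_star\n    return w_star, sum_gdof, pred_star\n\\end{verbatim}\nThe cycle structure of \\texttt{pred\\_star} identifies $\\Pi^{[m]*}$ for the sub-channel whose exponents are given in \\texttt{alpha}.\n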
\nFor sub-channel $m$, consider the optimal cyclic partition $\\Pi^{[m]*}$ with weight $w(\\Pi^{[m]*}) = \\sum_{i=1}^K \\alpha_{\\Pi^{[m]*} (i)i}^{[m]}$.\nLet us define $i_{\\max}^{[m]}$ to be the user that receives the strongest interference from Transmitter $i$ over sub-channel $m$, i.e., $i_{\\max}^{[m]} \\triangleq{\\mbox{argmax}}_ {j \\neq i} ~\\alpha_{ji}^{[m]}$. \nWriting the binary expansion of the channel input,\n\\begin{eqnarray}\nX_i^{[m]}\n&=& \\mbox{sign}(X_i^{[m]}) \\sum_{b = -\\infty}^{\\infty} X_{i,(b)}^{[m]} 2^{-b} \\\\\n&=& \\underbrace{ \\mbox{sign}(X_i^{[m]}) \\sum_{b = -\\infty}^{0} X_{i,(b)}^{[m]} 2^{-b} }_{\\triangleq X_{i,p}^{[m]}} + \\underbrace{ \\mbox{sign}(X_i^{[m]}) \\sum_{b = 1}^{n_{\\Pi^{[m]*} (i)i}^{[m]} } X_{i,(b)}^{[m]} 2^{-b} }_{\\triangleq X_{i,u}^{[m]}} \\notag \\\\\n&& + \\underbrace{ \\mbox{sign}(X_i^{[m]}) \\sum_{b = n_{\\Pi^{[m]*} (i)i}^{[m]} +1}^{n_{i^{[m]}_{\\max}i}^{[m]}} X_{i,(b)}^{[m]} 2^{-b} }_{\\triangleq X_{i,v}^{[m]}} + \\underbrace{\\mbox{sign}(X_i^{[m]}) \\sum_{b = n_{i^{[m]}_{\\max}i}^{[m]} +1}^{\\infty} X_{i,(b)}^{[m]} 2^{-b} }_{\\triangleq X_{i,q}^{[m]}} \\label{iq} \\\\\n&=& X_{i,p}^{[m]} + X_{i,u}^{[m]} + X_{i,v}^{[m]} + X_{i,q}^{[m]}\n\\end{eqnarray}\nwhere $X_{i,p}^{[m]}, X_{i,u}^{[m]}, X_{i,v}^{[m]}, X_{i,q}^{[m]}$ are the bits that have power more than 1, the bits that interfere at Receiver $\\Pi^{[m]*} (i)$, the other interfering bits that appear at Receiver $i^{[m]}_{\\max}$ and the remaining input bits that may only appear at the desired receiver, respectively. \n${\\bf X}_{i,u}$ is used to denote $[X_{i,u}^{[1]}, \\ldots, X_{i,u}^{[M]}]$. Similar notations are used for ${\\bf X}_{i,p}, {\\bf X}_{i,v}$ with $p,v$ replacing $u$, respectively.\n\nWe borrow a lemma from \\cite{Bresler_Tse} to bound the entropy of ${\\bf X}_{i,p}$, the bits that have peak power more than 1. 
Intuitively, it means that those bits only have bounded entropy, thus limited influence on capacity.\n\\begin{lemma}\n(Lemma 6 in \\cite{Bresler_Tse}) The following bound on the entropy holds: $H({\\bf X}_{i,p}^n) \\leq 2nM$.\n\\end{lemma}\nFor a proof, we refer the readers to \\cite{Bresler_Tse}.\n\nGiving ${\\bf X}_{i,u}, {\\bf X}_{i,v}$ and ${\\bf X}_p \\triangleq ({\\bf X}_{1,p}, \\ldots ,{\\bf X}_{K,p})$ as side information from a genie to Receiver $i$, we have\n\\begin{eqnarray}\n&& n(R_i - \\epsilon) \\notag\\\\\n&\\leq& I(W_i; {\\bf {Y}}_i^n, {\\bf X}_{i,u}^n, {\\bf X}_{i,v}^n, {\\bf X}_{p}^n) \\\\\n&=& I(W_i; {\\bf X}_{p}^n) + I(W_i; {\\bf X}_{i,u}^n, {\\bf X}_{i,v}^n | {\\bf X}_{p}^n) + I(W_i; {\\bf {Y}}_i^n | {\\bf X}_{i,u}^n, {\\bf X}_{i,v}^n,{\\bf X}_{p}^n) \\\\\n&=& \\underbrace{ H( {\\bf X}_{p}^n)}_{\\leq nO(1)} - \\underbrace{ H( {\\bf X}_{p}^n | W_i) }_{\\geq 0} + H({\\bf X}_{i,u}^n, {\\bf X}_{i,v}^n | {\\bf X}_{p}^n) - \\underbrace{ H({\\bf X}_{i,u}^n, {\\bf X}_{i,v}^n | {\\bf X}_{p}^n,W_i) }_{=0} \\notag \\\\\n&& ~+ h({\\bf {Y}}_i^n | {\\bf X}_{i,u}^n, {\\bf X}_{i,v}^n,{\\bf X}_{p}^n) - \\underbrace{ h( {\\bf {Y}}_i^n | {\\bf X}_{i,u}^n, {\\bf X}_{i,v}^n, {\\bf X}_{p}^n,W_i) }_{= h( {\\bf {Y}}_i^n| W_i)} \\label{le} \\\\\n&\\leq& H({\\bf X}_{i,u}^n | {\\bf X}_{i,v}^n, {\\bf X}_{p}^n) + H({\\bf X}_{i,v}^n | {\\bf X}_{p}^n) + h({\\bf {Y}}_i^n | {\\bf X}_{i,u}^n, {\\bf X}_{i,v}^n, {\\bf X}_{p}^n) - h( {\\bf {Y}}_i^n| W_i) + nO(1) \\\\\n&\\leq& H({\\bf X}_{i,u}^n | {\\bf X}_{i,v}^n, {\\bf X}_{p}^n) + n\\sum_{m=1}^{M}( n_{i^{[m]}_{\\max}i}^{[m]} - n_{\\Pi^{[m]*} (i)i}^{[m]}) \\notag \\\\ && ~+ n\\sum_{m=1}^{M} \\frac{1}{2} \\log \\Big[ 2\\pi e \\big( \\sum_{j\\neq i} P^{\\alpha_{ij}^{[m]}} + P^{\\alpha_{ii}^{[m]}}2^{ -2 n_{i^{[m]} _{\\max}i}^{[m] } }\\big) \\Big]\n- h( {\\bf {Y}}_i^n | W_i) + nO(1) \\label{gdetn} \\\\\n&\\leq& H({\\bf X}_{i,u}^n | {\\bf X}_{i,v}^n, {\\bf X}_{p}^{n}) + n\\sum_{m=1}^{M}\\left[ \\frac{1}{2}( \\alpha_{i_{\\max}^{[m]} i}^{[m]} - \\alpha_{\\Pi^{[m]*} (i)i}^{[m]} )\\log P + 1 \\right] \\notag \\\\\n&& ~+ n\\sum_{m=1}^{M} \\frac{1}{2} \\log \\Big[ 2\\pi e \\big(K P^{\\alpha_{ii}^{[m]} - \\alpha_{i^{[m]} _{\\max}i}^{[m] } }\\big) \\Big] - h( {\\bf {Y}}_i^n | W_i) + nO(1) \\label{po} \\\\\n&=& H({\\bf X}_{i,u}^n | {\\bf X}_{i,v}^n, {\\bf X}_{p}^{n}) - h( {\\bf {Y}}_i^n | W_i) + n\\sum_{m=1}^{M}\\frac{1}{2}( \\alpha_{ii}^{[m]} - \\alpha_{\\Pi^{[m]*} (i)i}^{[m]} )\\log P + nO(1) \\label{last}\n\\end{eqnarray}\nwhere we use Lemma 1 in the first term of (\\ref{le}). The third term in (\\ref{gdetn}) is due to the fact that the differential entropy of a random variable is maximized by Gaussian distribution given the covariance constraint, and conditioning on $X_{i,p}^{[m]}, X_{i,u}^{[m]}, X_{i,v}^{[m]}$, the magnitude of desired input is smaller than $2^{-n^{[m]}_{i^{[m]}_{\\max} i}}$ (see (\\ref{iq})). All the remaining interfering input has power constraint 1. 
In (\\ref{po}), we use $n_{ki}^{[m]} = \\lfloor \\frac{1}{2} \\alpha_{ki}^{[m]} \\log_2 P\\rfloor \\in (\\frac{1}{2} \\alpha_{ki}^{[m]} \\log_2 P - 1, \\frac{1}{2} \\alpha_{ki}^{[m]} \\log_2 P ]$ and the TIN optimality condition, which gives $\\alpha_{ij}^{[m]} \\leq \\alpha_{ii}^{[m]} - \\alpha_{i^{[m]} _{\\max}i}^{[m] }$.\n\nAdding (\\ref{last}) for $i \\in \\{1,\\ldots,K\\}$, we have\n\\begin{eqnarray}\n&& \\sum_{i=1}^K n( R_i - \\epsilon) \\notag\\\\\n&\\leq& \\sum_{i=1}^K \\left[ H({\\bf X}_{i,u}^n | {\\bf X}_{i,v}^n, {\\bf X}_{p}^{n}) - h( {\\bf {Y}}_i^n | W_i) \\right] + n\\sum_{i=1}^K \\sum_{m=1}^{M}\\frac{1}{2}( \\alpha_{ii}^{[m]} - \\alpha_{\\Pi^{[m]*} (i)i}^{[m]} )\\log P + nO(1) \\\\\n&\\leq& H({\\bf X}_{u}^n | {\\bf X}_{v}^n, {\\bf X}_{p}^{n}) - \\sum_{i=1}^K h( {\\bf {Y}}_i^n | W_i, {\\bf X}_{v}^n, {\\bf X}_{p}^{n}) \\notag \\\\\n&& +~ \\frac{n}{2} \\log P \\sum_{m=1}^{M} \\left( \\sum_{i=1}^K \\alpha_{ii}^{[m]} - \\sum_{i=1}^K \\alpha_{\\Pi^{[m]*} (i)i}^{[m]} \\right) + nO(1) \\label{gfin} \\\\\n&\\leq& H({\\bf X}_{u}^n | {\\bf X}_{v}^n, {\\bf X}_{p}^{n}) - h( {\\bf {Y}}_{u}^n | {\\bf X}_{v}^n, {\\bf X}_{p}^{n}) + \\frac{n}{2} \\log P \\sum_{m=1}^{M} \\left[ \\sum_{i=1}^K \\alpha_{ii}^{[m]} - w(\\Pi^{[m]*}) \\right] + nO(1) \\label{y} \\\\\n&\\leq& \\frac{n}{2} \\log P \\sum_{m=1}^{M} \\left[ \\sum_{i=1}^K \\alpha_{ii}^{[m]} - w(\\Pi^{[m]*}) \\right] + no(\\log P) \\label{inv}\n\\end{eqnarray}\nwhere in (\\ref{gfin}), ${\\bf X}_u$ is the collection of ${\\bf X}_{i,u}$ for all users, i.e., ${\\bf X}_{u} = ({\\bf X}_{1,u}, \\ldots, {\\bf X}_{K,u})$. Similar notations are used for ${\\bf X}_{v}$ and ${\\bf Y}_{u}$. The second term of (\\ref{y}) is due to the definition that $Y_{i,u}^{[m]} = \\sum_{j=1,j\\neq i}^K h_{ij}^{[m]} X_{j,u}^{[m]} + Z_{i}^{[m]}$ and given the desired message $W_i$ and $X_{j,v}^{[m]}, X_{j,p}^{[m]}$, the only thing left in the received signal $Y_{i}^{[m]}$ is $Y_{i,u}^{[m]}$. (\\ref{inv}) is due to the invertibility assumption and is derived as follows.\n\\begin{eqnarray}\n&& H({\\bf X}_{u}^n | {\\bf X}_{v}^n, {\\bf X}_{p}^{n}) - h( {\\bf {Y}}_{u}^n | {\\bf X}_{v}^n, {\\bf X}_{p}^{n}) \\notag \\\\\n&=& H({\\bf X}_{u}^n | {\\bf {Y}}_{u}^n, {\\bf X}_{v}^n, {\\bf X}_{p}^{n}) - h( {\\bf {Y}}_{u}^n | {\\bf {X}}_{u}^n, {\\bf X}_{v}^n, {\\bf X}_{p}^{n}) \\\\\n&\\leq& \\sum_{t=1}^n H({\\bf X}_{u}(t) | {\\bf {Y}}_{u}(t) , {\\bf X}_{v}(t), {\\bf X}_{p}(t)) + n o(\\log P) \\\\\n&\\leq& \\sum_{t=1}^n H({\\bf X}_{u}(t) | {\\bf {Y}}_{u}(t)) + n o(\\log P) \\\\\n&\\leq& \\sum_{t=1}^n \\sum_{m=1}^M H({\\bf X}_{u}^{[m]}(t) | {\\bf {Y}}_{u}^{[m]}(t)) + n o(\\log P) \\\\\n&\\leq& Mn o(\\log P) + n o(\\log P) = n o(\\log P) \n\\end{eqnarray}\nwhere the last inequality follows from the definition of invertibility as stated in (\\ref{equ:ginv}).\n\nNormalizing (\\ref{inv}) by $\\frac{1}{2} n \\log P$ and letting first $n$ and then $P$ approach infinity, we obtain the matching outer bound and complete the proof.\n\n\n\n\\section{Discussion}\nIn the context of $K$ user parallel Gaussian interference networks where each sub-channel satisfies the TIN optimality condition of Geng et al., we show that separate TIN over each sub-channel is optimal from the perspective of sum-GDoF under a mild (invertibility) condition. The main message is that the simple ADT deterministic model is still very insightful for the optimality of TIN, because TIN is robust enough not to be sensitive to the details that are not captured by the ADT deterministic model. 
\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} {"text":"\\section{Introduction}\n\nOver the past few decades, ML has risen from a relatively obscure research area to an extremely influential discipline, actively being deployed in myriad applications and contexts around the world. The objectives and values of ML research are influenced by many factors, including the personal preferences of researchers and reviewers, other work in science and engineering, the interests of academic institutions, funding agencies and companies, and larger institutional and systemic pressures, including systems of oppression impacting who is able to do research and on which topics. Together these forces shape patterns in what research gets done and who benefits from this research. Therefore, it is important to document and understand the emergent values of the field: what the field is prioritizing and working toward. To this end, we perform a comprehensive analysis of 100 highly cited NeurIPS and ICML papers from four recent years spanning more than a decade.\n\nOur key contributions are as follows: \n\n(1) We develop and open source a fine-grained annotation scheme for the detection of values in research papers, including identifying a list of 67 values significant in machine learning research. To our knowledge, our annotation scheme is the first of its kind, and opens the door to further qualitative and quantitative analyses of research.\\footnote{We include our template and all annotations as supplementary material at \\url{https:\/\/github.com\/wagnew3\/The-Values-Encoded-in-Machine-Learning-Research} with a CC BY-NC-SA license.} \n\n(2) We apply our annotation scheme to annotate 100 influential ML research papers and extract their value commitments.These reflect and shape the values of the field more broadly. Like the annotation scheme itself, the resulting repository of annotated papers is available and is valuable not only in the context of this paper's analysis, but also as foundation for further qualitative and quantitative study of ML research.\n\n(3) We perform extensive textual analysis to understand the dominant values: performance, accuracy, state-of-the-art (SOTA), quantitative results, generalization, efficiency, building on previous work, and novelty.\nOur analysis indicates that while these values may seem on their face to be purely technical, they are \nsocially and politically charged: specifically, we find that these values are currently defined and operationalized in ways that centralize power, i.e., disproportionally benefit and empower the already powerful, such as large corporations, while neglecting society's least advantaged. \n\n(4) We present a quantitative analysis of the affiliations and funding sources of these most influential papers. We find substantive and increasing presence of tech corporations. For example, in 2008\/09, 24\\% of these top cited papers had corporate affiliated authors, and in 2018\/19 this statistic almost tripled, to 71\\%. Moreover, of these corporations connected to influential papers, the presence of \"big-tech\" firms, such as Google and Microsoft, increased more than fivefold, from 11\\% to 58\\%.\n\n\\section{Methodology}\n\nTo understand the values of ML research, we examined the most highly cited papers from NeurIPS and ICML from the years 2008, 2009, 2018, and 2019. 
We chose to focus on highly cited papers because they both reflect and shape the values of the discipline, drawing from NeurIPS and ICML because they are the most prestigious of the long-running ML conferences.\\footnote{At the time of writing, these two venues, along with the newer ICLR (2013-present), comprised the top 3 conferences\naccording to h5-index (and h5-median) in the AI category on Google Scholar, by a large margin.} Acceptance to these conferences is a valuable commodity used to evaluate researchers, and submitted papers are explicitly written so as to win the approval of the community, particularly the reviewers who will be drawn from that community. As such, these papers effectively reveal the values that authors believe are most valued by that community. Citations indicate amplification by the community, and help to position these papers as influential exemplars of ML research. To avoid detecting only short-lived trends and enable comparisons over time, we drew papers from two recent years (2018\/19) and from ten years earlier (2008\/09).\nWe focused on conference papers because they tend to follow a standard format and \nallow limited space, meaning that researchers must make hard choices about what to emphasize.\nCollectively, we annotated 100 papers, analyzing over 3,500 sentences drawn from them. In the context of expert qualitative content analysis, this is a significant scale which allows us to meaningfully comment on the values central to ML research.\n\nIn more detail, we began by creating an annotation scheme, and then used it to manually annotate each paper, examining the abstract, introduction, discussion, and conclusion: (1) We examined the chain of reasoning by which each paper justified its contributions, which we call the \\textit{justificatory chain}, rating the extent to which papers used technical or societal problems to justify or motivate their contributions. (2) We carefully read each sentence of these sections, annotating any and all values from our list that were uplifted or exhibited by the sentence.\\footnote{We use a conceptualization of \"value\" that is widespread in philosophy of science in theorizing about values in sciences. In this approach, a value of an entity is a property that is desirable for that kind of entity. For example, speed can be described as valuable in an antelope \\cite{mcmullin1982values}. Well-know scientific values include accuracy, consistency, scope, simplicity, and fruitfulness \\cite{kuhn.1977}. See \\cite{longino.1996} for a critical discussion of value-laden aspects of these values.} (3) We documented the extent to which the paper included a discussion of potential negative impacts. \n\nManual annotation was necessary, both to create the list of emergent values, and to obtain and richly understand the values present in each paper. Automated approaches, such as keyword searches, would suffer significant limitation: these approaches would only capture values that we anticipate; additionally, they would run the risk of systematically skewing the results towards values which are easy to identify, missing or mischaracterizing values which are exhibited in more nuanced ways. 
The qualitative approach was key for analyzing the values as well, as it requires a subtle understanding of how the values function in the text and understanding of taken for granted assumptions underlying the values, which methods such as keyword matching would fail to capture.\n\nTo assess consistency, 40\\% of the papers were annotated by two annotators. The intercoder consensus on values in these papers achieved a Cohen kappa coefficient of .61, which indicates substantial agreement \\citep{viera.2005}. Furthermore, we used several established strategies to increase consistency, including recoding data coded early in the process \\citep{krefting.1991} and conducting frequent discussions and assessments of the coding process, code list, and annotation scheme \\citep{krippendorff.2018}. \n\nTo create the list of values specifically (see Figure~\\ref{fig:value-totals}), we followed best practices in manual content analysis.\n(1) We began with a list of values we expected to be relevant based on prior knowledge, augmenting this list with seven ethical principles of interest from existing literature \\citep{dittrich.2012,floridi.2019}.\n(2) We randomly selected a subset of 10 papers for initial annotation, searching for the values on the list sentence by sentence and adding new values as they emerged. (3) Through discussion, we revisited all values and produced a values list. (4) We annotated the full set of papers using this list of values, meeting regularly to discuss difficult examples, and individually nominating and deciding by consensus when sentences justified inductively adding additional, emergent values to the values list. (5) For the final analysis presented here, we combined closely related values into clusters by consensus, such that they could be discussed together (for completeness, all values are treated separately in the Appendix). Formally stated, we establish our codes (short phrases that represent the relevant essence of information, in this case the list of values) using the an inductive-deductive approach. The deductive component involves starting with codes established in existing literature, which ensures we note and can speak to values of interest, including established ethical principles.\nThe inductive component involves the discovery of codes from the data, and impedes inappropriately biased or pre-conceived findings by focusing on emergent codes \\citep{bengtsson.2016,krippendorff.2018}.\n\nThe composition of our team confers additional validity to our work. We are a diverse, multi-racial, multi-gender team working closely, including undergraduate, graduate, and post-graduate researchers from machine learning, NLP, robotics, cognitive science, and philosophy. 
This team captures several advantages that other methods of manual annotation such as crowd sourcing lack: the nature of this team minimizes intra-disciplinary biases, affords the unique combination of expertise required to read the values in ML papers, allows meaningful engagement with relevant work in other fields, and enables best practices including continually clarifying the procedure, ensuring agreement, vetting consistency, reannotating, and discussing themes \\citep{krippendorff.2018}.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[origin=c,width=\\textwidth]{figures\/value-freq-graph.png}\n\\caption{Proportion of annotated papers that uplifted each value.}\n\\label{fig:value-totals}\n\\end{figure}\n\n\\section{Quantitative Summary}\nIn Figure~\\ref{fig:value-totals}, we plot the prevalence of values in 100 annotated papers. The top values are: performance (87\\% of papers), building on past work (79\\%), generalization (79\\%), efficiency (73\\%), quantitative evidence (72\\%), and novelty (63\\%). Values related to user rights and stated in ethical principles appeared very rarely if at all: none of the papers mentioned autonomy, justice, or respect for persons.\nIn Table~\\ref{fig:just_chain} (top), we show the distribution of justification scores. Most papers only justify how they achieve their internal, technical goal; 71\\% don't make any mention of societal need or impact, and only 3\\% make a rigorous attempt to present links connecting their research to societal needs.\nIn Table~\\ref{fig:just_chain} (bottom), we show the distribution of negative impact discussion scores. One annotated paper included a discussion of negative impacts and a second mentioned the possibility; none of the remaining 98 papers contained any reference to potential negative impacts.\nIn Figure~\\ref{fig:vcorp-ties}, we show stated connections (funding and affiliations) of paper authors to institutions. Comparing papers written in 08\/09 to those written in 18\/19, ties to corporations nearly doubled to 79\\% of all annotated papers, ties to big tech multiplied over fivefold to 58\\%, while ties to universities declined to 81\\%, putting corporations nearly on par with universities in the most cited ML research. 
In the next sections, we present extensive qualitative examples and analysis of our findings, with additional analyses in the Appendix.\n\n\\begin{table}\n\\caption{Annotation scheme and results for justificatory chain (top) and negative impacts (bottom).}\n\\centering\n\\small\n\\begin{tabular}{lc}\n\\toprule\n\\textbf{Justificatory Chain Condition} & \\textbf{\\% of Papers} \\\\ \\midrule\nDoesn't rigorously justify how it achieves technical goal & 1\\% \\\\\nJustifies how it achieves technical goal but no mention of societal need & 71\\% \\\\\nStates but does not justify how it connects to a societal need & 16\\% \\\\\nStates and somewhat justifies how it connects to a societal need & 9\\% \\\\\nStates and rigorously justifies how it connects to a societal need & 3\\% \\\\ \\bottomrule\n\\toprule\n\\textbf{Negative Impacts Condition} & \\textbf{\\% of Papers} \\\\ \\midrule\nDoesn't mention negative potential & 98\\% \\\\\nMentions but does not discuss negative potential \\hspace{18 mm} & 1\\% \\\\\nDiscusses negative potential & 1\\% \\\\\nDeepens our understanding of negative potential & 0\\% \\\\ \\bottomrule\n\\end{tabular}\n\\hfill\n\n\\label{fig:just_chain}\n\\end{table}\n\n\\section{Qualitative Analysis of Justifications and Negative Potential}\n\n\\subsection{Justificatory Chain}\n\nPapers typically motivate their projects by appealing to the needs of the ML research community and rarely mention potential societal benefits. Research-driven needs of the ML community include researcher understanding (e.g., understanding the effect of pre-training on performance\/robustness, theoretically understanding multi-layer networks) as well as more practical research problems (e.g., improving efficiency of models for large datasets, creating a new benchmark for NLP tasks). Some papers do appeal to needs of broader society, such as building models with realistic assumptions, catering to more languages, or understanding the world. However, even when societal needs are mentioned as part of the justification of the project, the connection is usually loose. Almost no papers explain how their project is meant to promote a social need they identify with the kind of rigorous justification that is typically expected of, and given for, technical contributions. \n\n\\subsection{Negative Potential}\n\\label{negative potential}\n\nOnly two of the 100 papers engaged with potential harms (one mentioning and one discussing them), whereas the remaining 98 did not mention them at all. The lack of discussion of potential harms is especially striking for papers which deal with socially contentious application areas, such as surveillance and misinformation. For example, the annotated corpus includes a paper advancing the identification of people in images, a paper advancing face-swapping, and a paper advancing video synthesis. These papers contained no mention of the well-studied negative potential of facial surveillance, DeepFakes, or misleading videos, respectively.\n\nFurthermore, among the two papers that do mention negative potential, the discussions were mostly abstract and hypothetical, rather than grounded in the negative potential of their specific contributions. For example, authors may acknowledge \"possible unwanted social biases\" when applying the model to a real-world setting, without discussing the social biases encoded in their own proposed model.
\n\n\\section{Stated Values}\n\\label{sec:values}\n\nThe dominant values in ML research, e.g., accuracy and efficiency, may seem purely technical. However, the following analysis of several of these values shows how they can become politically loaded in the process of prioritizing and operationalizing them: sensitivity to the way that they are operationalized, and to the fact that they are uplifted at all, reveals value-laden assumptions that are often taken for granted.\nWe thus challenge a conception of prevalent values as politically neutral and consider alternatives to their dominant conceptualization that may be equally or more intellectually interesting or socially beneficial.\\footnote{By challenging a politically neutral conception of the top values in machine learning research, this paper also contributes to the literature in philosophy. Philosophers of science have been working to understand the roles of values in science for decades. For example, Thomas Kuhn \\citep{kuhn.1977} presented a list of five scientific values which he deems important in scientific research (accuracy, consistency, scope, simplicity, and fruitfulness). Helen Longino \\citep{longino.1996} and others have argued that prominent values are politically loaded, focusing mostly on how some of these values function in disciplines such as biology and social sciences. However, \"technical\" values, such as accuracy, are often left out of this type of critical discussion. Our paper shows that even the \"technical\" values aren't politically neutral, and it does so in the context of machine learning, which is often conceived as a less politically loaded discipline than biology or social sciences.} We have encouraged ourselves, and now encourage the reader, to remember that values once held to be intrinsic, obvious, or definitional have in many cases been transformed over time.\n\nTo provide a sense of what the values look like in context, we include three randomly selected examples of sentences annotated for each value discussed here (Tables 2--5), with extensive additional examples in the Appendix. Note that most sentences are annotated with multiple values.\\footnote{To avoid the impression that we are drawing attention to anything special about these randomly chosen example sentences, we omit attribution, and include a list of all annotated papers in the Appendix.}\n\n\\subsection{Performance}\n\n\\begin{table}\n\\caption{Random examples of \\textit{performance}, the most common emergent value.}\n\\small\n\\begin{tabular}{p{.96\\linewidth}}\n\\midrule \"Our model significantly outperforms SVM's, and it also outperforms convolutional neural nets when given additional unlabeled data produced by small translations of the training images.\"\\\\\n\\midrule \"We show in simulations on synthetic examples and on the IEDB MHC-I binding dataset, that our approach outperforms well-known convex methods for multi-task learning, as well as related non-convex methods dedicated to the same problem.\"\\\\\n\\midrule \"Furthermore, the learning accuracy and performance of our LGP approach will be compared with other important standard methods in Section 4, e.g., LWPR [8], standard GPR [1], sparse online Gaussian process regression (OGP) [5] and $\\upsilon$-support vector regression ($\\upsilon$-SVR) [11], respectively.\"\\\\\n\\bottomrule\n\\end{tabular}\n\\hfill\n\n\\label{fig}\n\\end{table}\n\nPerformance, accuracy, and achieving SOTA form the most common cluster of related values in annotated papers.
While it might seem intrinsic for the field to care about performance, it is important to remember that models are not simply \"well-performing\" or \"accurate\" in the abstract but always in relation to and as \\textit{quantified} by some metric on some dataset. Examining prevalent choices of operationalization reveals political aspects of performance values. First, we find performance values are consistently and unquestioningly operationalized as correctness averaged across individual predictions, giving equal weight to each instance (see the brief illustration at the end of this subsection). However, choosing equal weights when averaging is a value-laden move which might deprioritize those underrepresented in the data or the world, as well as societal and evaluee needs and preferences. Extensive research in ML fairness and related fields has considered alternatives, but we found no such discussions among the papers we examined. \n\nChoices of datasets are revealing. They are often driven purely by past work, so as to demonstrate improvement over a previous baseline (see also \\S\\ref{sec:novelty}). Another common justification for using a certain dataset is applicability to the \"real world\". Assumptions about how to characterize the \"real world\" are value-laden. One common assumption is the availability of very large datasets. However, presupposing the availability of large datasets is power-centralizing because it encodes favoritism to those with resources to obtain and process them \\cite{dotan2019value}. Further overlooked assumptions include that the real world is binary or discrete, and that datasets come with a predefined ground-truth label for each example, presuming that a true label always exists \"out there\" independent of those carving it out, defining and labelling it. This contrasts with marginalized scholars' calls for ML models that allow for non-binaries, plural truths, contextual truths, and many ways of being \\cite{costanza2018design, hamidi2018gender, lewis2020indigenous}.\n\nThe prioritization of performance values also requires scrutiny. Valuing these properties is so entrenched in the field that generic success terms, such as \"success\", \"progress\", or \"improvement\", are often used as synonyms for performance and accuracy. However, one might alternatively invoke generic success to mean increasingly safe, consensual, or participatory ML that reckons with impacted communities and the environment. In fact, \"performance\" itself is a general success term that could have been associated with properties other than accuracy and SOTA.
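
To make the equal-weighting point concrete, the following is a minimal illustration; it is not drawn from any annotated paper, and the weighting scheme shown is only one hypothetical alternative. On a test set $\\{(x_i, y_i)\\}_{i=1}^{n}$ with predictions $\\hat{y}_i$, accuracy is typically computed as
\\[
\\mathrm{Acc} = \\frac{1}{n} \\sum_{i=1}^{n} \\mathbf{1}[\\hat{y}_i = y_i],
\\]
where $\\mathbf{1}[\\cdot]$ is the indicator function, so every instance contributes equally. A reweighted alternative,
\\[
\\mathrm{Acc}_{w} = \\sum_{i=1}^{n} w_i \\, \\mathbf{1}[\\hat{y}_i = y_i], \\qquad \\sum_{i=1}^{n} w_i = 1,
\\]
with weights $w_i$ chosen, for example, to up-weight instances from underrepresented groups, makes explicit that the uniform choice $w_i = 1\/n$ is itself a choice rather than a given.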
\n\n\\subsection{Generalization}\n\n\\begin{table}\n\\caption{Random examples of \\textit{generalization}, the third most common emergent value.}\n\\small\n\\begin{tabular}{p{0.96\\linewidth}}\n\\midrule \"The range of applications that come with generative models are vast, where audio synthesis [55] and semi-supervised classification [38, 31, 44] are examples hereof.\"\\\\\n\\midrule \"Furthermore, the infinite limit could conceivably make sense in deep learning, since over-parametrization seems to help optimization a lot and doesn't hurt generalization much [Zhang et al., 2017]: deep neural nets with millions of parameters work well even for datasets with 50k training examples.\"\\\\\n\\midrule \"Combining the optimization and generalization results, we uncover a broad class of learnable functions, including linear functions, two-layer neural networks with polynomial activation $\\phi(z) = z^{2l}$ or cosine activation, etc.\"\\\\\n\\bottomrule\n\\end{tabular}\n\\hfill\n\n\\label{fig}\n\\end{table}\n\nA common way of appraising the merits of one's work in ML is to claim that it generalizes well. Typically, generalization is understood in terms of performance or accuracy: a model generalizes when it achieves good performance on a range of samples, datasets, domains, or applications. \nUplifting generalization raises two kinds of questions. First, which datasets, domains, or applications show that the model generalizes well? Typically, a paper shows that a model generalizes by showing that it performs well on multiple tasks or datasets. However, the choice of particular tasks and datasets is rarely justified; the choice of tasks can often seem arbitrary, and authors rarely present evidence that their results will generalize to more realistic settings, or help to directly address societal needs.\n\nSecond, uplifting generalization itself reveals substantive assumptions. The prizing of generalization means that there is an incentive to harvest many datasets from a variety of domains, and to treat these as the only datasets that matter for that space of problems. Generalization thus prioritizes distilling every scenario down to a common set of representations or affordances, rather than treating each setting as unique. Critical scholars have advocated for valuing \\emph{context}, which stands in opposition to the striving for generalization \\citep{d2020data}. Others have argued that this kind of totalizing lens (in which model developers have unlimited power to determine how the world is represented) leads to \\emph{representational} harms, due to applying a single representational framework to everything \\citep{crawford.2017,abbasi.2019}. \n\nFinally, the belief that generalization is even possible implicitly assumes a conservative approach in which new data will be sufficiently similar to previously seen data. \nWhen used in the context of ML, the assumption that the future resembles the past is often problematic, as past societal stereotypes and injustice can be encoded in the process \\cite{o2016weapons}. Furthermore, to the extent that predictions are performative \\cite{perdomo2020performative}, especially predictions that are enacted, those ML models which are deployed in the world will contribute to shaping social patterns.\nNone of the annotated papers attempt to counteract this dynamic or acknowledge its presence.\n\n\\subsection{Efficiency}\n\nEfficiency is another common value in ML research.
Abstractly, saying that a model is efficient typically means saying that the model uses less of some resource, such as time, memory, energy, or number of labeled examples. \nIn practice, however, efficiency is commonly invoked to imply the ability to scale up: a more efficient inference method allows one to do inference in much larger models or on larger datasets, using the same amount of resources.\nThis is reflected in our value annotations, where 72\\% of papers mention valuing efficiency, but only 14\\% of those value requiring \\textit{few} resources.\nIn this way, valuing efficiency facilitates and encourages the most powerful actors to scale up their computation to ever higher orders of magnitude, making their models even less accessible to those without resources to use them and decreasing the ability to compete with them. Alternative usages of efficiency could encode accessibility instead of scalability, aiming to create more equitable conditions for ML research.\n\n\\begin{table}\n\\caption{Random examples of \\textit{efficiency}, the fourth most common emergent value.}\n\\begin{small}\n\\begin{tabular}{p{0.96\\linewidth}}\n\\midrule \"Our model allows for controllable yet efficient generation of an entire news article \u2013 not just the body, but also the title, news source, publication date, and author list.\"\\\\\n\\midrule \"We show that Bayesian PMF models can be efficiently trained using Markov chain Monte Carlo methods by applying them to the Netflix dataset, which consists of over 100 million movie ratings.\"\\\\\n\\midrule \"In particular, our EfficientNet-B7 surpasses the best existing GPipe accuracy (Huang et al., 2018), but using 8.4x fewer parameters and running 6.1x faster on inference.\"\\\\\n\\bottomrule\n\\end{tabular}\n\\hfill\n\n\\end{small}\n\\label{fig}\n\\end{table}\n\n\\subsection{Novelty and Building on Past Work}\n\\label{sec:novelty}\n\n\\begin{table}\n\\caption{Random examples of \\emph{building on past work} and \\emph{novelty}, the second and sixth most common emergent values, respectively.}\n\\begin{small}\n\\begin{tabular}{p{0.96\\linewidth}}\n\\toprule\n\\textbf{Building on past work}\\\\\n\\midrule \"Recent work points towards sample complexity as a possible reason for the small gains in robustness: Schmidt et al.
[41] show that in a simple model, learning a classifier with non-trivial adversarially robust accuracy requires substantially more samples than achieving good `standard' accuracy.\"\\\\\n\\midrule \"Experiments indicate that our method is much faster than state of the art solvers such as Pegasos, TRON, SVMperf, and a recent primal coordinate descent implementation.\"\\\\\n\\midrule \"There is a large literature on GP (response surface) optimization.\"\\\\\n\\bottomrule\n\\toprule\n\\textbf{Novelty} \\\\\n\\midrule \"In this paper, we propose a video-to-video synthesis approach under the generative adversarial learning framework.\"\\\\\n\\midrule \"Third, we propose a novel method for the listwise approach, which we call ListMLE.\"\\\\\n\\midrule \"The distinguishing feature of our work is the use of Markov chain Monte Carlo (MCMC) methods for approximate inference in this model.\"\\\\\n\\bottomrule\n\\end{tabular}\n\\hfill\n\n\\end{small}\n\\label{fig:novelty}\n\\end{table}\n\nMost authors devote space in the introduction to positioning their paper in relation to past work and to describing what is novel.\nMentioning past work serves to signal awareness of related publications, to establish the new work as relevant to the community, and to provide the basis upon which to make claims about what is new. Novelty is sometimes suggested implicitly (e.g., \"we develop\" or \"we propose\"), but frequently it is emphasized explicitly (e.g., \"a new algorithm\" or \"a novel approach\").\n\nThis combined focus on novelty and building on recent work establishes a continuity of ideas, and might be expected to contribute to the self-correcting nature of science \\citep{merton.1973}. However, this is not always the case \\citep{ioannidis.2012}, and attention to the ways novelty and building on past work are implemented reveals value commitments.\nWe find a clear emphasis on technical novelty, rather than critique of past work, or demonstration of measurable progress on societal problems, as has previously been observed \\citep{wagstaff.2012}. Although introductions sometimes point out limitations of past work (so as to further emphasize the contributions of their own paper), they are rarely explicitly critical of other papers in terms of methods or goals. Indeed, papers uncritically reuse the same datasets for years or decades to benchmark their algorithms, even if those datasets fail to represent more realistic contexts in which their algorithms will be used \\cite{bender2021dangers}. \nMeanwhile, novelty is denied to work that rectifies socially harmful aspects of existing datasets; together with the strong pressure to benchmark on those datasets, and thereby perpetuate their use, this enforces a fundamentally conservative bent in ML research.\n\n\\section{Corporate Affiliations and Funding}\n\\label{sec:corporate}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.77\\linewidth]{figures\/corp_affils_pie.png}\n\\caption{Corporate and Big Tech author affiliations.}\n\\label{fig:corp_affils}\n\\end{figure}\n\n\\begin{figure*}\n\\centering\n\\captionsetup{justification=centering}\n\\includegraphics[width=0.87\\linewidth]{figures\/affils.png}\n\\caption{Corporate affiliations and funding ties. Non-N.A. Universities are those \\\\outside the U.S.
and Canada.}\n\\label{fig:vcorp-ties}\n\\end{figure*}\n\n\nOur analysis shows substantive and increasing corporate presence in the most highly-cited papers.\nIn 2008\/09, 24\\% of the top cited papers had corporate-affiliated authors, and in 2018\/19 this statistic almost tripled, to 71\\%. Furthermore, we find a much greater concentration of a few large tech firms, such as Google and Microsoft, with the presence of these \"big tech\" firms (as identified in \\citep{ahmed2020democratization}) increasing more than fivefold, from 11\\% to 58\\% (see Figure \\ref{fig:corp_affils}). The fraction of the annotated papers with corporate ties, by author affiliation or funding, dramatically increased from 43\\% in 2008\/09 to 79\\% in 2018\/19.\nIn addition, our analysis shows a pronounced dominance of elite universities, as shown in Figure \\ref{fig:vcorp-ties}. Of the total papers with university affiliations, we found 82\\% were from elite universities (defined as the top 50 universities by QS World University Rankings, following past work \\cite{ahmed2020democratization}).\nThese findings are consistent with previous work indicating a pronounced corporate presence in ML research. In an automated analysis of peer-reviewed papers from 57 major computer science conferences, Ahmed and Wahed \\cite{ahmed2020democratization} show that the share of papers that have at least one corporate-affiliated co-author increased from 10\\% in 2005 for both ICML and NeurIPS to 30\\% and 35\\% respectively in 2019. Our analysis shows that corporate presence is even more pronounced in those papers from ICML and NeurIPS that end up receiving the most citations.\n\nThe influence of powerful players in ML research is consistent with field-wide value commitments that centralize power. Others have also argued for causal connections. For example, Abdalla and Abdalla \\cite{abdalla2020grey} argue that big tech firms sway and influence academic and public discourse using strategies which closely resemble strategies used by Big Tobacco.\n\nMoreover, examining the prevalent values of big tech, critiques have repeatedly pointed out that objectives such as efficiency, scale, and wealth accumulation \\cite{o2016weapons,pasquale2015black,hanna2020against} drive the industry at large, often at the expense of individuals' rights, respect for persons, consideration of negative impacts, beneficence, and justice. Thus, the top stated values of ML that we presented in this paper, such as performance, generalization, and efficiency, may not only enable and facilitate the realization of big tech's objectives, but also suppress values such as beneficence, justice, and inclusion. A \"state-of-the-art\" large image dataset, for example, is instrumental for large-scale models, further benefiting ML researchers and big tech in possession of huge computing power. \nIn the current climate, where values such as accuracy, efficiency, and scale, as currently defined, are a priority, user safety, informed consent, and participation may be perceived as costly and time consuming, leaving societal needs unmet. \n\n\\section{Discussion and Related Work}\n\nML research is often perceived as value-neutral, and emphasis is placed on positive applications or potential.\nThis fits into a historical strain of thinking which has tended to frame technology as \"neutral\", based on the notion that new technologies can be unpredictably applied for both beneficial and harmful purposes \\citep{winner.1977}.
Ironically, this claim of neutrality frequently serves as insulation from critiques of AI and as permission to emphasize its benefits \\citep{rus.2018, weizenbaum1972impact}. Although it is rare to see anyone explicitly argue in print that ML is neutral, related ideas are part of contemporary conversation, including these canonical claims: long-term impacts are too difficult to predict; sociological impacts are outside the expertise or purview of ML researchers \\citep{holstein.2019}; critiques of AI are really misdirected critiques of those deploying AI with bad data (\"garbage in, garbage out\"), again outside the purview of many AI researchers; and proposals such as broader impact statements represent merely a \"bureaucratic constraint\" \\citep{abuhamad.2020}. A recent qualitative analysis of required broader impact statements from NeurIPS 2020 similarly observed that these statements leaned towards positive consequences (often mentioning negative consequences only briefly and in some cases not at all), emphasized uncertainty about how a technology might be used, or simply omitted any discussion of societal consequences altogether \\citep{nanayakkara.2021}.\n\nImportantly, there is a foundational understanding in Science, Technology, and Society Studies (STSS), Critical Theory, and Philosophy of Science that science and technologies are inherently value-laden, and these values are encoded in technological artifacts, many times in contrast to a field's formal research criteria, espoused consequences, or ethics guidelines \\cite{winner1980artifacts,bowker2000sorting,benjamin2019race}. There is a long tradition of exposing and critiquing such values in technology and computer science. For example, Winner \\cite{winner1980artifacts} introduced several ways technology can encode political values. This work is closely related to Rogaway \\cite{rogaway2015moral}, who notes that cryptography has political and moral dimensions and argues for a cryptography that better addresses societal needs. Weizenbaum \\cite{weizenbaum1976computer} argued in 1976 that the computer has from the beginning been a fundamentally conservative force which solidified existing power: in place of fundamental social changes, he argued, the computer renders technical solutions that allow existing power hierarchies to remain intact. \n\nOur paper extends these critiques to the field of ML. It is part of a rich space of interdisciplinary critiques and alternative lenses used to examine the field. Works such as \\cite{mohamed2020decolonial, birhane2020algorithmic} critique AI, ML, and data using a decolonial lens, noting how these technologies replicate colonial power relationships and values, and propose decolonial values and methods.\nOthers \\cite{benjamin2019race, noble2018algorithms, d2020data} examine technology and data science from an anti-racist and intersectional feminist lens, discussing how our infrastructure has largely been built by and for white men; D'Ignazio and Klein \\cite{d2020data} present a set of alternative principles and methodologies for an intersectional feminist data science. Similarly, Kalluri \\cite{kalluri2020don} notes that the core values of ML are closely aligned with the values of the most privileged and outlines a vision where ML models are used to shift power from the most to the least powerful. Dotan and Milli \\cite{dotan2019value} argue that the rise of deep learning is value-laden, promoting the centralization of power among other political values.
Many researchers, as well as organizations such as Data for Black Lives, the Algorithmic Justice League, Our Data Bodies, the Radical AI Network, Indigenous AI, Black in AI, and Queer in AI, explicitly work to uncover the particular ways that technology in general, and ML in particular, can encode and amplify racist, sexist, queerphobic, transphobic, and otherwise marginalizing values \\cite{buolamwini2018gender, prabhu2020large}.\n\nWe present this paper in part in order to expose the contingency of the present state of the field; it could be otherwise. \nFor individuals, communities, and institutions wading through the difficult-to-pin-down values of the field, as well as those striving toward alternative values, a characterization of the field as it now stands is a useful tool: for understanding, shaping, dismantling, or transforming what is, and for articulating and bringing about alternative visions.\n\nAs with all methods, our chosen approach \u2014 coding important sections of highly-cited papers \u2014 has limitations. Most notably, this approach requires human expertise and does not automatically scale or generalize to other data, which limits our ability to draw strong conclusions about other conferences or different years. Similarly, this approach is less reproducible than fully automated approaches, and for both our final list of values and specific annotation of individual sentences, different researchers might make somewhat different choices. However, given the overwhelming presence of certain values, the high agreement rate among annotators, and the similarity of observations made by our team, we strongly believe other researchers following a similar approach would reach similar conclusions about what values are most frequently uplifted by the most influential papers in this field. Lastly, we cannot claim to have identified every relevant value in ML. However, by including important ethical values identified by past work, and specifically looking for these, we can confidently assert their relative absence in this set of papers, which captures an important aspect of influential work in ML.\n\n\\section{Conclusion}\n\nWe reject the vague conceptualization of the discipline of ML as value-neutral. Instead, we investigate the ways that the discipline of ML is inherently value-laden. Our analysis of highly influential papers in the discipline finds not only that they favor the needs of research communities and large firms over broader social needs, but also that they take this favoritism for granted. The favoritism manifests in the choice of projects, the lack of consideration of potential negative impacts, and the prioritization and operationalization of values such as performance, generalization, efficiency, and novelty. These values are operationalized in ways that disfavor societal needs, usually without discussion or acknowledgment. Moreover, we uncover an overwhelming and increasing presence of big tech and elite universities in highly cited papers, which is consistent with a system of power-centralizing value commitments. \nThe upshot is that the discipline of ML is not value-neutral. We find that it is socially and politically loaded, frequently neglecting societal needs and harms, while prioritizing and promoting the concentration of power in the hands of already powerful actors.\n\n\\section*{Acknowledgements}\n\nWe would like to thank Luke Stark, Dan Jurafsky, and Sarah K. Dreier for helpful feedback on this work.
We owe gratitude and accountability to the long history of work exposing how technology shifts power, work primarily done by communities at the margins. Abeba Birhane was supported, in part, by Science Foundation Ireland grant 13\/RC\/2094\\_2. Dallas Card was supported in part by the Stanford Data Science Institute. William Agnew was supported by an NDSEG Fellowship.\n\n\\Urlmuskip=0mu plus 1mu\n\\bibliographystyle{plain}\n\n\\section{Appendix}\\label{A}\n\\setcounter{figure}{0} \n\n\\subsection{Relevance to NeurIPS}\n\nThe ML community has recently introduced a number of efforts to understand the societal impacts of the field, such as NeurIPS's requirement for the inclusion of broader impacts statements for all submitted papers in 2020, the Resistance AI Workshop at NeurIPS 2020, which investigated how AI concentrates power, and the Navigating the Broader Impacts of AI Research Workshop at NeurIPS 2020, which sought to understand the impacts of ML research as a whole on society. Understanding the social impact of a paper, let alone the discipline, is difficult. Merely looking at various benchmarks, abstract assessments, or broader impact statements, for example, is insufficient to get at the many concrete impacts encoded in the research itself. This paper attempts to begin to bridge this gap by seeking to understand the value commitments in papers published at NeurIPS and a closely related conference, ICML. As such, this paper is highly relevant and a timely contribution to the NeurIPS audience. As research into technical ML topics -- reinforcement learning, deep learning, optimization, etc. -- is core to NeurIPS, it is increasingly important to understand where these research areas stand with regard to societal impact, both positive and negative, as well as the benefits they bring and to whom.\n\n\\subsection{Additional Methodological Details}\n\n\\subsubsection{Data Sources}\n\\label{sec:data}\n\nTo determine the most-cited papers from each conference, we rely on the publicly-available Semantic Scholar database \\citep{ammar.2018}, which includes bibliographic information for scientific papers, including citation counts.\\footnote{\\url{http:\/\/s2-public-api.prod.s2.allenai.org\/corpus\/}}\nUsing this data, we chose the most cited papers from each of 2008, 2009, 2018, and 2019 published at NeurIPS and ICML. \n\nLike all bibliographic databases, Semantic Scholar is imperfect. Upon manual review, we wish to document that our selection includes one paper that was actually published in 2010, and one that was retracted from NeurIPS prior to publication (see \\S\\ref{sec:paperlist} for details). In addition, the citation counts used to determine the most cited papers reflect a static moment in time, and may differ from other sources. \n\nBecause our artifacts of study are papers that have been previously published at NeurIPS or ICML, we surmise that the authors normatively expect and consent to their papers and themselves as authors being referenced and analyzed in future papers, e.g., this paper. Accordingly, we chose not to seek explicit permission from the original authors to reference, annotate, and analyze their papers. The annotations we generated do not introduce any new personally identifying information or offensive content.
The sentences from the original published papers are necessarily part of our annotations; to the extent that these papers have these issues, these sentences may contain personally identifying information or offensive content. Given that the original authors contributed their work to the same venues as our own work, we believe that the potential to cause new harm from this inclusion is minimal.\n\n\\subsubsection{Defining elite universities}\nTo determine the list of elite universities, we follow Ahmed and Wahed \\citep{ahmed2020democratization}, and rely on the QS World University Rankings for the discipline of computer science. For 2018\/19, we take the top 50 schools from the CS rankings for 2018. For 2008\/09, we take the top 50 schools from the CS rankings for 2011, as the closest year for which data is available. \n\n\\subsubsection{Defining big tech}\nWe used Abdalla and Abdalla's \\citep{abdalla2020grey} criterion for what is considered \"big tech\", which comprises: Alibaba, Amazon, Apple, Element AI, Facebook, Google, Huawei, IBM, Intel, Microsoft, Nvidia, OpenAI, Samsung, and Uber. Furthermore, we added DeepMind, which Google acquired in 2014, to this list. We considered all other companies as \"non-big tech\". \n\n\\subsection{Annotations}\nWe include the annotations of all papers at \\url{https:\/\/github.com\/wagnew3\/The-Values-Encoded-in-Machine-Learning-Research}.\nTo present a bird's-eye view of the value annotations, we present randomly selected examples of annotated sentences in \\S\\ref{sec:randomexamples}. \nIn addition, here we present the frequency of occurrence for all values prior to clustering (see \\S\\ref{sec:value_clusters}) in Figure \\ref{fig:value-totals-all}. \n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.97\\textwidth]{figures\/value-freq-graph_all.png}\n\\caption{Value occurrences across papers.}\n\\label{fig:value-totals-all}\n\\end{figure}\n\n\\subsection{Value Hierarchy}\n\\label{sec:value_clusters}\n\nDuring ongoing discussions held throughout the annotation process, several values emerged as particularly salient to all annotators, including \\emph{Performance}, \\emph{Efficiency}, \\emph{Generalization}, \\emph{Novelty}, and \\emph{Building on Past Work}. In some cases, these had strong overlap with related values (e.g., \\emph{Performance} is closely related to \\emph{Accuracy}). In other cases, we had annotated for several fine-grained values that we felt could be combined (e.g., \\emph{Data Efficiency} and \\emph{Label Efficiency} are types of \\emph{Efficiency}). As such, we decided to group together certain related sets of values into clusters for presentation in the main paper.
The values that were combined into each cluster are listed in Table \\ref{tab:value_clusters} below.\n\n\\begin{table}[ht]\n \\centering\n \\small\n \\caption{Sets of values combined for discussion in the main paper}\n \\begin{tabular}{l l}\n Cluster & Values \\\\\n \\hline\n Performance values & Performance, Accuracy, State-of-the-art \\\\\n Building on past work & Building on classic work, Building on recent work \\\\\n Generalization values & Generalization, Avoiding train\/test discrepancy, Flexibility\/extensibility \\\\\n Efficiency values & Efficiency, Data efficiency, Energy efficiency, Fast, Label efficiency, \\\\\n & Low cost, Memory efficiency, Reduced training time \\\\\n \\end{tabular}\n \\label{tab:value_clusters}\n\\end{table}\n\n\\subsection{Experiments with Using Text Classification to Identify Values}\n\nAlthough it was not our primary purpose in annotating highly-cited papers, we include here a brief report on using the annotations we generated as potential training data for classifiers that could in principle be used to estimate the prevalence of these values in a larger set of ML papers. This is something that we should approach with great caution for several reasons: i) we have only a relatively small training set of annotated examples by the standards of machine learning best practice;\nii) these annotations are taken from a non-random set of papers, and any models trained on these data may not generalize to all papers; iii) an automated approach will fail to detect additional, previously unobserved, emergent values; and iv) based on our experiences annotating these papers, we expect that many values would be expressed subtly and in varied ways that would be difficult to detect automatically, at least without considerably more training data.\n\nTo present a baseline for testing the potential of this approach, while avoiding any biases that might be introduced by pretrained language models, we make use of simple regularized logistic regression classifiers operating on unigram features. We trained models separately for each value (for all values that had at least 20 relevant sentences, using all relevant sentences for the higher-order grouped values), treating each sentence as an instance with a binary label (present or not), tokenizing each sentence using \\emph{spaCy} and converting each to a binary feature representation indicating the presence or absence of each word in the vocabulary (all words occurring at least twice in the corpus). These choices were not tuned. We randomly selected 300 sentences to use as a held-out test set (using the same test set for each value), and trained a model using the remaining data, using 5-fold cross-validation to tune the regularization strength. A minimal sketch of this kind of pipeline is shown below.\n\nF1 scores on the test set for the various models are shown in Figure \\ref{fig:textcat} (right), and can generally be seen to be unimpressive. \nThe F1 score for most values is on the order of 0.5 or less, and some values, even relatively common ones such as \\textit{Unifying Ideas}, ended up with an F1 score of 0. The most highly-weighted features for most classifiers were quite reasonable, but this is evidently a relatively difficult task, at least given this amount of data.
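
For concreteness, the following is a minimal sketch of the kind of unigram logistic-regression baseline described above, written in Python with \\emph{spaCy} and \\emph{scikit-learn}. It is illustrative rather than an exact reproduction of our implementation: the data-loading function, the grid of regularization strengths, and other details are placeholders.

\\begin{verbatim}
# Illustrative sketch (not the exact code used): a unigram logistic
# regression baseline for a single value, with binary sentence labels.
import spacy
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

nlp = spacy.blank('en')  # tokenizer only; no pretrained model weights

def tokenize(text):
    return [token.text.lower() for token in nlp(text)]

# sentences: list of strings; labels: 0 or 1 for one value
# (load_annotated_sentences is a hypothetical placeholder loader)
sentences, labels = load_annotated_sentences()

train_x, test_x, train_y, test_y = train_test_split(
    sentences, labels, test_size=300, random_state=0)

# Binary unigram features, keeping words appearing in at least two sentences
vectorizer = CountVectorizer(tokenizer=tokenize, token_pattern=None,
                             binary=True, min_df=2)
X_train = vectorizer.fit_transform(train_x)
X_test = vectorizer.transform(test_x)

# Regularization strength tuned by 5-fold cross-validation
clf = LogisticRegressionCV(Cs=10, cv=5, scoring='f1', max_iter=1000)
clf.fit(X_train, train_y)
print('test F1:', f1_score(test_y, clf.predict(X_test)))
\\end{verbatim}

In this sketch, one such classifier would be trained per value, with the held-out set of 300 sentences shared across values, matching the setup described above.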
The exceptions to this generally poor classifier performance were the Performance-related values (\\emph{Performance}, \\emph{Accuracy}, and \\emph{State-of-the-art}), as well as \\emph{Effectiveness} and \\emph{Facilitating Use}, all of which had F1 scores greater than 0.75, and most of which were typically represented by a relatively small set of terms (e.g., \"accurate\", \"accuracy\", \"accurately\", \"inaccurate\", \"accuracies\", \"errors\", etc. for \\emph{Accuracy}).\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.85\\textwidth]{figures\/f1s.pdf}\n\\caption{Proportion of papers from 2008--2020 (combining NeurIPS and ICML) predicted to have at least one sentence expressing each value (left), and estimated performance (F1) of the corresponding classifiers (right). Note that the overall performance of most classifiers is generally poor, indicating that the estimates on the left should be treated as unreliable in most cases. Grey bars represent the clustered values. Classifiers were not trained for values with fewer than 20 representative sentences.}\n\\label{fig:textcat}\n\\end{figure}\n\nAlthough the poor performance of these classifiers means we should interpret any use of them with caution, we explore applying them to a broader set of papers for the sake of completeness. To do so, we download PDFs of all papers published at NeurIPS and ICML for the years 2008 through 2020, convert these to text using \\emph{pdftotext}, and extract sentences from this text, excluding references, as well as very short sentences (fewer than 6 tokens) or lines without alphabetic characters. Note that due to the difficulty of automatically parsing papers into sections, these textual representations are not limited to the abstract, introduction, discussion, and conclusion, in contrast to our annotations; thus, we would expect most values to occur more frequently, especially those that are likely to occur in sections about experiments and results. \n\nWe then apply the classifiers trained above to each sentence in each paper. For each value, we compute the proportion of papers (combining NeurIPS and ICML for this entire time period) that had at least one sentence predicted to exhibit that value. The overall proportions are shown in Figure \\ref{fig:textcat} (left). As can be seen, the relative prevalence of values is broadly similar to our annotated sample, though many are predicted to occur with greater frequency, as expected. However, to reiterate, we should be highly skeptical of these findings, given the poor performance of the classifiers, and we view this analysis as useful primarily for deepening our understanding of appropriate methodology.\n\nFinally, as an additional exploration, we focus on the Performance-related values (\\emph{Performance}, \\emph{Accuracy}, and \\emph{State-of-the-art}), which represent the overall most prevalent cluster in our annotations and were relatively easy to identify using classification due to their typically simple and explicit expression. We plot the estimated frequency over time for both conferences. For NeurIPS, which has better archival practices, we extend the analysis back to 1987. We should again treat these results with caution, given all the caveats above, as well as the fact that we are now applying these classifiers outside the temporal range from which the annotations were collected.
Nevertheless, the results, shown in Figure \\ref{fig:performance-temporal}, suggest that these values have gradually become more common in NeurIPS over time, reinforcing the contingent nature of the dominance of the current set of values. Further investigation is required, however, in order to verify this finding.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.4\\textwidth]{figures\/performance_plots.pdf}\n\\caption{Proportion of papers per year (of those published in ICML and NeurIPS) that are classified as having at least one sentence expressing \\emph{Performance}, \\emph{Accuracy}, or \\emph{State-of-the-art} (top, middle, and bottom), based on simple text classifiers trained on our annotations. Bands show $\\pm$2 standard deviations, reflecting the changing overall number of papers per year.}\n\\label{fig:performance-temporal}\n\\end{figure}\n\n\\subsection{Code and Reproducibility}\nOur code and annotations are available at \\url{https:\/\/github.com\/wagnew3\/The-Values-Encoded-in-Machine-Learning-Research}. The text classification experiments were run on a 2019 MacBook Air.\n\n\\subsection{Potential Negative Societal Impacts}\n\nBecause this paper primarily relies on socially conscientious manual annotation of papers already published at NeurIPS and ICML, we believe that the potential negative societal impacts of carrying out these annotations and sharing them are minimal. However, we still briefly comment on this here.\n\nFirst, in terms of the specific concerns highlighted in the NeurIPS call for papers, we believe our annotation work poses no risk to living beings, human rights, or livelihoods.\nSimilarly, all annotators are co-authors on this paper; thus, there was no risk to participants beyond what we chose to take on for ourselves. We have further discussed these aspects of the data in \\S\\ref{sec:data}. Our computational experiments are done locally and have resource usage on par with everyday computer usage.\n\nOne area of potential concern to readers, particularly researchers, may be that this work adopts a punitive stance toward individuals, unintentionally casts certain authors in a negative light, or unintentionally contributes to harmful tensions within the ML community. We wish to directly express that throughout this paper we have sought to avoid punitive language toward individuals and to adopt language emphasizing systematic patterns. To further minimize this risk, we have chosen to include randomly selected examples and to omit author attributions from quoted sources in the main paper. To complement this and meet the need for completeness, transparency, and reproducibility of our work, we include a full list of cited papers below, so as to acknowledge this work without drawing unnecessary attention to any one particular source.\n\nAlthough our intention is to broaden and deepen the conversation, we acknowledge that some authors may perceive our work as not representative of the type of work they would like to see at NeurIPS, and possibly detrimental to the conference.
However, because of the prominence and influence of machine learning today, it is especially important to have these conversations at the premier venues, and we hope that our paper will be the basis for useful conversations and future work.\nAs expressed in the main paper, these perceptions and norms may be precisely those that are more contingent than the community realizes; these norms may be shaped, dismantled, transformed, or reenvisioned for the better. \n\n\n\\subsection{Random Examples}\n\\label{sec:randomexamples}\n\nThe list below contains 100 random examples drawn from the annotated data, along with the set of annotated values for each. These sentences were annotated for values within the context of the paper.\n\\textit{\n\\begin{itemize}\n \\item The problem of minimizing the rank of a matrix variable subject to certain constraints arises in many fields including machine learning, automatic control, and image compression. {\\textbf{Used in practice\/Popular}}\n\t\\item Locality-sensitive hashing [6] is an effective technique that performs approximate nearest neighbor searches in time that is sub-linear in the size of the database {\\textbf{Approximation}, \\textbf{Building on recent work}, \\textbf{Effectiveness}, \\textbf{Fast}}\n\t\\item In the finite case, analysis of optimization and generalization of fully-trained nets is of course an open problem {\\textbf{Formal description\/analysis}, \\textbf{Generalization}}\n\t\\item So to achieve adversarial robustness, a classifier must generalize in a stronger sense. {\\textbf{Generalization}, \\textbf{Robustness}}\n\t\\item Robustness to label corruption is similarly improved by wide margins, such that pre-training alone outperforms certain task-specific methods, sometimes even after combining these methods with pre-training. {\\textbf{Performance}, \\textbf{Robustness}, \\textbf{Understanding (for researchers)}}\n\t\\item RBMs have been particularly successful in classification problems either as feature extractors for text and image data (Gehler et al., 2006) or as a good initial\ntraining phase for deep neural network classifiers (Hinton, 2007). {\\textbf{Building on recent work}, \\textbf{Flexibility\/Extensibility}, \\textbf{Successful}}\n\t\\item Our theoretical analysis naturally leads to a new formulation\nof adversarial defense which has several appealing properties; in particular, it inherits the benefits of scalability to\nlarge datasets exhibited by Tiny ImageNet, and the algorithm achieves state-of-the-art performance on a range of\nbenchmarks while providing theoretical guarantees. {\\textbf{Robustness}, \\textbf{Scales up}, \\textbf{Security}, \\textbf{Theoretical guarantees}}\n\t\\item The current paper focuses on the training loss, but\ndoes not address the test loss. {\\textbf{Generalization}}\n\t\\item This result is significant since stochastic methods are highly preferred for their efficiency over deterministic\ngradient methods in machine learning applications. {\\textbf{Efficiency}}\n\t\\item Ranking, which is to sort objects based on certain factors, is the central problem of applications such as in- formation retrieval (IR) and information filtering. {\\textbf{Applies to real world}, \\textbf{Used in practice\/Popular}}\n\t\\item This subspace is important, because, when projected onto this subspace, the means of the distributions are well-separated, yet the typical distance between points from the same distribution is smaller than in the original space. 
{\\textbf{Important}}\n\t\\item Overall, the existence of such adversarial examples raises concerns about the robustness of current classifiers. {\\textbf{Identifying limitations}, \\textbf{Robustness}}\n\t\\item We have shown that biased compressors if naively used can lead to bad generalization, and even non-convergence. {\\textbf{Formal description\/analysis}, \\textbf{Generalization}}\n\t\\item Bartlett and Mendelson [2002] provide a generalization bound for Lipschitz loss functions. {\\textbf{Building on classic work}, \\textbf{Generalization}}\n\t\\item The principal advantage of taking this \"lateral\" approach arises from the fact that compact representation in trajectory space is better motivated physically than compact representation in shape space {\\textbf{Realistic world model}}\n\t\\item In this paper, we show that gradient descent on deep overparametrized networks can obtain zero training loss {\\textbf{Formal description\/analysis}, \\textbf{Theoretical guarantees}}\n\t\\item Moreover, web queries often have different meanings for different users (a canonical example is the query jaguar ) suggesting that a ranking with diverse documents may be preferable. {\\textbf{Diverse output}, \\textbf{User influence}}\n\t\\item We include human performance estimates for all benchmark tasks, which verify that substantial headroom exists between a strong BERT-based baseline and human performance. {\\textbf{Learning from humans}, \\textbf{Performance}}\n\t\\item Inthis paper we propose a simple and fast algorithm SVP(Singular Value Projec-tion) for rank minimization under affine constraints (ARMP) and show that SVP recovers the minimum rank solution for affine constraints that satisfy a restricted isometry property(RIP). {\\textbf{Fast}, \\textbf{Novelty}, \\textbf{Simplicity}}\n\t\\item We use standard formalization of multiclass classification, where data consists of sample x and its label y\n(an integer from 1 to k). {\\textbf{Building on classic work}}\n\t\\item A number of recent work has shown that the low rank solution can be recovered exactly via minimizing the trace norm under certain conditions (Recht et al., 2008a; Recht et al., 2008b; Cand\\`es and Recht, 2008). {\\textbf{Building on recent work}}\n\t\\item This difficulty has necessitated the use of a heuristic inference procedure, that\nnonetheless was accurate enough for successful learning. {\\textbf{Accuracy}, \\textbf{Successful}}\n\t\\item We illustrate such potential by measuring search space properties relevant to architecture search. {\\textbf{Quantitative evidence (e.g. experiments)}}\n\t\\item Deep architectures consist of feature detector units arranged in layers. Lower layers detect simple features and feed into higher layers, which in turn detect more complex features. {\\textbf{Simplicity}}\n\t\\item This makes the updates hard to massively parallelize at a coarse, dataparallel level (e.g., by computing the updates in parallel and summing them together centrally) without losing the critical stochastic nature of the updates. {\\textbf{Large scale}, \\textbf{Parallelizability \/ distributed}}\n\t\\item This suggests future work on model robustness should evaluate proposed methods with pretraining in order to correctly gauge their utility, and some work could specialize pre-training for these downstream tasks. {\\textbf{Robustness}}\n\t\\item Adversarial training remains among the most trusted defenses, but it is nearly intractable on largescale problems. 
{\\textbf{Scales up}, \\textbf{Security}}\n\t\\item For complex robots such as humanoids or light-weight arms, it is often hard to model the system sufficiently well and, thus, modern regression methods offer a viable alternative [7,8]. {\\textbf{Realistic world model}}\n\t\\item In\ncontrast to prior work that operates in this goal-setting model, we use states as goals directly, which\nallows for simple and fast training of the lower layer. {\\textbf{Reduced training time}, \\textbf{Simplicity}}\n\t\\item Meanwhile, using less resources tends to produce less compelling results (Negrinho and Gordon, 2017; Baker et al., 2017a). {\\textbf{Requires few resources}}\n\t\\item This finding represents an exciting opportunity for defense against neural fake news: the best models for generating neural disinformation are also the best models at detecting it. {\\textbf{Applies to real world}}\n\t\\item Our strong empirical results suggest that\nrandomized smoothing is a promising direction\nfor future research into adversarially robust classification. {\\textbf{Quantitative evidence (e.g. experiments)}, \\textbf{Robustness}, \\textbf{Security}}\n\t\\item We then turn our attention to identifying the roots of BatchNorm's success. {\\textbf{Successful}, \\textbf{Understanding (for researchers)}}\n\t\\item We also report the results of large-scale experiments comparing these three methods which demonstrate the benefits of the mixture weight method: this method consumes less resources, while achieving a performance comparable to that of standard approaches. {\\textbf{Large scale}, \\textbf{Performance}, \\textbf{Requires few resources}}\n\t\\item This paper does not cover the the generalization of over-parameterized neural networks to the test data. {\\textbf{Avoiding train\/test discrepancy}, \\textbf{Generalization}}\n\t\\item While there has been success with robust classifiers on simple datasets [31, 36, 44, 48], more complicated datasets still exhibit a large gap between \"'standard\" and robust accuracy [3, 11]. {\\textbf{Applies to real world}, \\textbf{Robustness}, \\textbf{Successful}}\n\t\\item In this paper, we have shown theoretically how independence between examples can make the actual effect much smaller. {\\textbf{Novelty}, \\textbf{Theoretical guarantees}}\n\t\\item We provide empirical evidence that several recently-used methods for estimating the probability of held-out documents are inaccurate and can change the results of model comparison. {\\textbf{Accuracy}, \\textbf{Building on recent work}, \\textbf{Quantitative evidence (e.g. experiments)}}\n\t\\item This agreement is robust across different architectures, optimization methods, and loss functions {\\textbf{Robustness}}\n\t\\item Unfortunately, due to the slow-changing policy in an actor-critic setting, the current and target value estimates remain too similar to avoid maximization bias. {\\textbf{Accuracy}}\n\t\\item As a future work, we are pursuing a better understanding of probabilistic distributions on the Grassmann manifold. {\\textbf{Understanding (for researchers)}}\n\t\\item We also view these results as an opportunity to encourage the community to pursue a more systematic investigation of the algorithmic toolkit of deep learning and the underpinnings of its effectiveness. {\\textbf{Effectiveness}, \\textbf{Understanding (for researchers)}}\n\t\\item This challenge is further exacerbated in continuous state and action spaces, where a separate actor network is often used to perform the maximization in Q-learning. 
{\\textbf{Performance}}\n\t\\item The vulnerability of neural networks to adversarial perturbations has recently been a source of much discussion and is still poorly understood. {\\textbf{Robustness}, \\textbf{Understanding (for researchers)}}\n\t\\item Most of the evaluation methods described in this paper extend readily to more complicated topic models\u2014 including non-parametric versions based on hierarchical Dirichlet processes (Teh et al., 2006)\u2014since they only require a MCMC algorithm for sampling the latent topic assignments z for each document and a way of evaluating probability P(w | z, $\\Phi$, $\\alpha$m). {\\textbf{Flexibility\/Extensibility}, \\textbf{Understanding (for researchers)}}\n\t\\item In a formulation closely related to the dual problem, we have: $\\hat{w} = \\arg\\min_{w: F(w) \\leq c} \\frac{1}{n} \\sum_{i=1}^{n} \\ell(\\langle w, x_i \\rangle, y_i)$ (2) where, instead of regularizing, a hard restriction over the parameter space is imposed (by the constant c). {\\textbf{Formal description\/analysis}}\n\t\\item Second, we evaluate a surrogate loss function from four aspects: (a) consistency, (b) soundness, (c) mathematical properties of continuity, differentiability, and convexity, and (d) computational efficiency in learning. {\\textbf{Efficiency}}\n\t\\item This leads to two natural questions that we try to answer in this paper: (1) Is it feasible to perform optimization in this very large feature space with cost which is polynomial in the size of the input space? {\\textbf{Performance}}\n\t\\item Despite its pervasiveness, the exact reasons for BatchNorm's effectiveness are still poorly understood. {\\textbf{Understanding (for researchers)}}\n\t\\item We have presented confidence-weighted linear classifiers, a new learning method designed for NLP problems based on the notion of parameter confidence. {\\textbf{Novelty}}\n\t\\item In addition, the experiments reported here suggest that (like other strategies recently proposed to train deep deterministic or stochastic neural networks) the curriculum strategies appear on the surface to operate like a regularizer, i.e., their beneficial effect is most pronounced on the test set. {\\textbf{Beneficence}, \\textbf{Quantitative evidence (e.g. experiments)}}\n\t\\item These\ngive further inside into hash-spaces and explain previously\nmade empirical observations. {\\textbf{Understanding (for researchers)}}\n\t\\item This means that current algorithms reach their limit at problems of size 1TB whenever the algorithm is I\/O bound (this amounts to a training time of 3 hours), or even smaller problems whenever the model parametrization makes the algorithm CPU bound. {\\textbf{Memory efficiency}, \\textbf{Reduced training time}}\n\t\\item Much of the results presented were based on the assumption that the target distribution is some mixture of the source distributions. {\\textbf{Valid assumptions}}\n\t\\item Empirical investigation revealed that this agrees well with actual training dynamics and predictive distributions across fully-connected, convolutional, and even wide residual network architectures, as well as with different optimizers (gradient descent, momentum, mini-batching) and loss functions (MSE, cross-entropy). {\\textbf{Generalization}, \\textbf{Quantitative evidence (e.g. experiments)}, \\textbf{Understanding (for researchers)}}\n\t\\item We design a new spectral norm that encodes this a priori assumption, without the prior knowledge of the partition of tasks into groups, resulting\nin a new convex optimization formulation for multi-task learning. {\\textbf{Novelty}}\n\t\\item Recent progress in natural language generation has raised dual-use concerns. {\\textbf{Progress}}\n\t\\item These kernel functions can be used\nin shallow architectures, such as support vector machines (SVMs), or in deep\nkernel-based architectures that we call multilayer kernel machines (MKMs). {\\textbf{Flexibility\/Extensibility}}\n\t\\item Using MCMC instead of variational methods for approximate inference in Bayesian matrix factorization models leads to much larger improvements over the MAP trained models, which suggests that the assumptions made by the variational methods about the structure of the posterior are not entirely reasonable. {\\textbf{Understanding (for researchers)}}\n\t\\item In particular, the deep belief network (DBN) (Hinton et al., 2006) is a multilayer generative model where each layer encodes statistical dependencies among the units in the layer below it; it is trained to (approximately) maximize the likelihood of its training data. {\\textbf{Approximation}, \\textbf{Data efficiency}}\n\t\\item Furthermore, the learning accuracy and performance of our LGP approach will be compared with other important standard methods in Section 4, e.g., LWPR [8], standard GPR [1], sparse online Gaussian process regression (OGP) [5] and $\\nu$-support vector regression ($\\nu$-SVR) [11], respectively {\\textbf{Accuracy}, \\textbf{Performance}, \\textbf{Quantitative evidence (e.g. experiments)}}\n\t\\item \u2022 propose a simple method based on weighted minibatches to stochastically train with arbitrary weights on the terms of our decomposition without any additional hyperparameters. {\\textbf{Efficiency}, \\textbf{Simplicity}}\n\t\\item For example, Ng (2004) examined the task of PAC learning a sparse predictor and analyzed cases in which an $\\ell$1 constraint results in better solutions than an $\\ell$2 constraint. {\\textbf{Building on recent work}}\n\t\\item Graph Convolutional Networks (GCNs) (Kipf and Welling, 2017) are an efficient variant of Convolutional Neural Networks (CNNs) on graphs. GCNs stack layers of learned first-order spectral filters followed by a nonlinear activation function to learn graph representations. {\\textbf{Efficiency}}\n\t\\item This is a linear convergence rate. {\\textbf{Building on recent work}, \\textbf{Efficiency}, \\textbf{Quantitative evidence (e.g. experiments)}, \\textbf{Theoretical guarantees}}\n\t\\item However, as we\nobserve more interactions, this could emerge as a clear feature. {\\textbf{Building on recent work}, \\textbf{Data efficiency}}\n\t\\item Here we propose the first method that supports arbitrary low accuracy and even biased\ncompression operators, such as in (Alistarh et al., 2018; Lin et al., 2018; Stich et al., 2018). {\\textbf{Accuracy}, \\textbf{Novelty}}\n\t\\item Much recent work has been done on understanding under what conditions we can learn a mixture model. {\\textbf{Understanding (for researchers)}}\n\t\\item For this reason, we present an extension of the standard greedy OMP algorithm that can be applied to general structured sparsity problems, and more importantly, meaningful sparse recovery bounds can be obtained for this algorithm. 
{\\textbf{Building on recent work}}\n\t\\item In this paper we show that this assumption is indeed necessary: by considering a simple yet prototypical example of GAN training we analytically show that (unregularized) GAN training is not always locally convergent {\\textbf{Formal description\/analysis}}\n\t\\item Overestimation bias is a property of Q-learning in which the maximization of a noisy value estimate induces a consistent overestimation {\\textbf{Accuracy}}\n\t\\item This drawback prevents GPR from applications which need large amounts of training data and require fast computation, e.g., online learning of inverse dynamics model for model-based robot control {\\textbf{Fast}, \\textbf{Large scale}}\n\t\\item This is problematic since we find there are techniques which do not comport well with pre-training; thus some evaluations of robustness are less representative of real-world performance than previously thought. {\\textbf{Applies to real world}, \\textbf{Performance}, \\textbf{Robustness}}\n\t\\item Approximation of this prior structure through simple, efficient hyperparameter optimization steps is sufficient to achieve these performance gains {\\textbf{Approximation}, \\textbf{Efficiency}, \\textbf{Performance}, \\textbf{Simplicity}}\n\t\\item The second mysterious phenomenon in training deep neural networks is \"deeper networks are harder to train.\" {\\textbf{Performance}}\n\t\\item However, the definition of our metric is sufficiently general that it could easily be used to test, for example, invariance of auditory features to rate of speech, or invariance of textual features to author identity. {\\textbf{Generalization}}\n\t\\item In Sec. 6 we test the proposed algorithm for\nface recognition and object categorization tasks. {\\textbf{Applies to real world}, \\textbf{Quantitative evidence (e.g. experiments)}}\n\t\\item It is possible to train classification\nRBMs directly for classification performance; the gradient is fairly simple and certainly tractable. {\\textbf{Performance}}\n\t\\item Figure 1 contrasts these two approaches. Defining and evaluating models using ODE solvers has several benefits: {\\textbf{Beneficence}}\n\t\\item They claim to achieve 12\\% robustness against non-targeted attacks that are within an\n$\\ell_2$ radius of 3 (for images with pixels in [0, 1]). {\\textbf{Generalization}, \\textbf{Robustness}}\n\t\\item Two commonly used penalties are the 1-norm and the square of the 2-norm of w. {\\textbf{Used in practice\/Popular}}\n\t\\item What should platforms do? Video-sharing platforms like YouTube use deep neural networks to scan videos while they are uploaded, to filter out content like pornography (Hosseini et al., 2017). {\\textbf{Applies to real world}}\n\t\\item We mention various properties\nof this penalty, and provide conditions for the consistency\nof support estimation in the regression setting. Finally, we\nreport promising results on both simulated and real data {\\textbf{Applies to real world}}\n\t\\item There\ncould be a separate feature for \"high school student,\" \"male,\" \"athlete,\" and \"musician\" and the\npresence or absence of each of these features is what defines each person and determines their\nrelationships. {\\textbf{Building on recent work}}\n\t\\item So, the over-parameterized convergence theory of DNN is much simpler than that of RNN. 
{\\textbf{Simplicity}, \\textbf{Understanding (for researchers)}}\n\t\\item Other threat models are possible: for instance, an adversary might generate comments or have entire dialogue agents, they might start with a human-written news article and modify a few sentences, and they might fabricate images or video. {\\textbf{Learning from humans}}\n\t\\item More generally, we hope that future work will be able to avoid relying on obfuscated gradients (and other methods that only prevent gradient descent-based attacks) for perceived robustness, and use our evaluation approach to detect when this occurs. {\\textbf{Generality}, \\textbf{Robustness}}\n\t\\item For example, the learned linear combination does not consistently outperform either the uniform combination of base kernels or simply the best single base kernel (see, for example, UCI dataset experiments in [9, 12], see also NIPS 2008 workshop). {\\textbf{Performance}}\n\t\\item Our main contributions are: \u2022 We analyze GP-UCB, an intuitive algorithm for GP optimization, when the function is either sampled from a known GP, or has low RKHS norm. {\\textbf{Optimal}}\n\t\\item For the standard linear setting, Dani et al. (2008) provide a near-complete characterization explicitly dependent on the dimensionality. In the GP setting, the challenge is to characterize complexity in a different manner, through properties of the kernel function. {\\textbf{Building on classic work}}\n\t\\item This allows us to map each architecture A to its approximate hyperparameter optimized accuracy {\\textbf{Accuracy}}\n\t\\item Unfortunately, they could only apply their method to linear networks. {\\textbf{Flexibility\/Extensibility}}\n\t\\item The strength of the adversary then allows for a trade-off between the enforced prior, and the data-dependent features. {\\textbf{Understanding (for researchers)}}\n\t\\item We observe that the computational bottleneck of NAS is the training of each child model to convergence, only to measure its accuracy whilst throwing away all the trained weights. {\\textbf{Accuracy}}\n\t\\item We show that the number of subproblems need only be logarithmic in the total number of possible labels, making thisapproach radically more efficient than others. {\\textbf{Efficiency}}\n\t\\item We establish a new\nnotion of quadratic approximation of the neural network, and connect it to the\nSGD theory of escaping saddle points. {\\textbf{Novelty}, \\textbf{Unifying ideas or integrating components}}\n\t\\item In this work, we decompose the prediction error for adversarial\nexamples (robust error) as the sum of the natural\n(classification) error and boundary error, and provide a differentiable upper bound using the theory\nof classification-calibrated loss, which is shown to\nbe the tightest possible upper bound uniform over\nall probability distributions and measurable predictors. {\\textbf{Accuracy}, \\textbf{Robustness}, \\textbf{Theoretical guarantees}}\n\t\\item A limit on the number of queries can be a result\nof limits on other resources, such as a time limit if inference time is a bottleneck or a monetary limit if the\nattacker incurs a cost for each query. {\\textbf{Applies to real world}, \\textbf{Low cost}, \\textbf{Requires few resources}}\n\t\\item Preliminary experiments demonstrate that it is significantly faster than batch alternatives on large datasets that may contain millions of training examples, yet it does not require learning rate tuning like regular stochastic gradient descent methods. 
{\\textbf{Quantitative evidence (e.g. experiments)}, \\textbf{Reduced training time}}\n\t\\item SuperGLUE is available at super.gluebenchmark.com. {\\textbf{Facilitating use (e.g. sharing code)}}\n\\end{itemize}}\n\n\n\\subsection{Full List of Cited Papers}\n\\label{sec:paperlist}\n\nThe full list of annotated papers is given below, along with the annotated scores (in square brackets) for \\emph{Discussion of Negative Potential} [left] (0 = Doesn't mention negative potential; 1 = Mentions but does not discuss negative potential; 2 = Discusses negative potential) and \\emph{Justification} [right] (0 = Doesn't rigorously justify how it achieves technical goal; 1 = Justifies how it achieves technical goal but no mention of societal need; 2 = States but does not justify how it connects to a societal need; 3 = States and somewhat justifies how it connects to a societal need; 4 = States and rigorously justifies how it connects to a societal need). Note that due to minor errors in the data sources used, the distribution of papers over venues and years is not perfectly balanced. For the same reason, the list also contains one paper from 2010 (rather than 2009), as well as one paper that was retracted before publication at NeurIPS (marked with a $^*$).\n\n\\begin{itemize}\n\t\\item Mingxing Tan, Quoc Le. \\href{http:\/\/proceedings.mlr.press\/v97\/tan19a.html}{EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks.} In \\emph{Proceedings of ICML}, 2019. [0\/1]\n\t\\item Sanjeev Arora, Simon Du, Wei Hu, Zhiyuan Li, Ruosong Wang. \\href{http:\/\/proceedings.mlr.press\/v97\/arora19a.html}{Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks.} In \\emph{Proceedings of ICML}, 2019. [0\/1]\n\t\\item Jeremy Cohen, Elan Rosenfeld, Zico Kolter. \\href{http:\/\/proceedings.mlr.press\/v97\/cohen19c.html}{Certified Adversarial Robustness via Randomized Smoothing.} In \\emph{Proceedings of ICML}, 2019. [0\/1]\n\t\\item Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El Ghaoui, Michael Jordan. \\href{http:\/\/proceedings.mlr.press\/v97\/zhang19p.html}{Theoretically Principled Trade-off between Robustness and Accuracy.} In \\emph{Proceedings of ICML}, 2019. [0\/2]\n\t\\item Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu. \\href{http:\/\/proceedings.mlr.press\/v97\/song19d.html}{MASS: Masked Sequence to Sequence Pre-training for Language Generation.} In \\emph{Proceedings of ICML}, 2019. [0\/1]\n\t\\item Felix Wu, Amauri Souza, Tianyi Zhang, Christopher Fifty, Tao Yu, Kilian Weinberger. \\href{http:\/\/proceedings.mlr.press\/v97\/wu19e.html}{Simplifying Graph Convolutional Networks.} In \\emph{Proceedings of ICML}, 2019. [0\/1]\n\t\\item Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, Vaishaal Shankar. \\href{http:\/\/proceedings.mlr.press\/v97\/recht19a.html}{Do ImageNet Classifiers Generalize to ImageNet?} In \\emph{Proceedings of ICML}, 2019. [0\/2]\n\t\\item Justin Gilmer, Nicolas Ford, Nicholas Carlini, Ekin Cubuk. \\href{http:\/\/proceedings.mlr.press\/v97\/gilmer19a.html}{Adversarial Examples Are a Natural Consequence of Test Error in Noise.} In \\emph{Proceedings of ICML}, 2019. [0\/1]\n\t\\item Chris Ying, Aaron Klein, Eric Christiansen, Esteban Real, Kevin Murphy, Frank Hutter. \\href{http:\/\/proceedings.mlr.press\/v97\/ying19a.html}{NAS-Bench-101: Towards Reproducible Neural Architecture Search.} In \\emph{Proceedings of ICML}, 2019. [0\/2]\n\t\\item Dan Hendrycks, Kimin Lee, Mantas Mazeika. 
\\href{http:\/\/proceedings.mlr.press\/v97\/hendrycks19a.html}{Using Pre-Training Can Improve Model Robustness and Uncertainty.} In \\emph{Proceedings of ICML}, 2019. [0\/1]\n\t\\item Sai Praneeth Karimireddy, Quentin Rebjock, Sebastian Stich, Martin Jaggi. \\href{http:\/\/proceedings.mlr.press\/v97\/karimireddy19a.html}{Error Feedback Fixes SignSGD and other Gradient Compression Schemes.} In \\emph{Proceedings of ICML}, 2019. [0\/1]\n\t\\item Anastasia Koloskova, Sebastian Stich, Martin Jaggi. \\href{http:\/\/proceedings.mlr.press\/v97\/koloskova19a.html}{Decentralized Stochastic Optimization and Gossip Algorithms with Compressed Communication.} In \\emph{Proceedings of ICML}, 2019. [0\/2]\n\t\\item Han Zhang, Ian Goodfellow, Dimitris Metaxas, Augustus Odena. \\href{http:\/\/proceedings.mlr.press\/v97\/zhang19d.html}{Self-Attention Generative Adversarial Networks.} In \\emph{Proceedings of ICML}, 2019. [0\/1]\n\t\\item Zeyuan Allen-Zhu, Yuanzhi Li, Zhao Song. \\href{http:\/\/proceedings.mlr.press\/v97\/allen-zhu19a.html}{A Convergence Theory for Deep Learning via Over-Parameterization.} In \\emph{Proceedings of ICML}, 2019. [0\/1]\n\t\\item Simon Du, Jason Lee, Haochuan Li, Liwei Wang, Xiyu Zhai. \\href{http:\/\/proceedings.mlr.press\/v97\/du19c.html}{Gradient Descent Finds Global Minima of Deep Neural Networks.} In \\emph{Proceedings of ICML}, 2019. [0\/1]\n\t\\item Anish Athalye, Nicholas Carlini, David Wagner. \\href{http:\/\/proceedings.mlr.press\/v80\/athalye18a.html}{Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples.} In \\emph{Proceedings of ICML}, 2018. [0\/2]\n\t\\item Hieu Pham, Melody Guan, Barret Zoph, Quoc Le, Jeff Dean. \\href{http:\/\/proceedings.mlr.press\/v80\/pham18a.html}{Efficient Neural Architecture Search via Parameters Sharing.} In \\emph{Proceedings of ICML}, 2018. [0\/1]\n\t\\item Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, Sergey Levine. \\href{http:\/\/proceedings.mlr.press\/v80\/haarnoja18b.html}{Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor.} In \\emph{Proceedings of ICML}, 2018. [0\/2]\n\t\\item Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Vlad Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, Shane Legg, Koray Kavukcuoglu. \\href{http:\/\/proceedings.mlr.press\/v80\/espeholt18a.html}{IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures.} In \\emph{Proceedings of ICML}, 2018. [0\/1]\n\t\\item Scott Fujimoto, Herke Hoof, David Meger. \\href{http:\/\/proceedings.mlr.press\/v80\/fujimoto18a.html}{Addressing Function Approximation Error in Actor-Critic Methods.} In \\emph{Proceedings of ICML}, 2018. [0\/1]\n\t\\item Hyunjik Kim, Andriy Mnih. \\href{http:\/\/proceedings.mlr.press\/v80\/kim18b.html}{Disentangling by Factorising.} In \\emph{Proceedings of ICML}, 2018. [0\/0]\n\t\\item Lars Mescheder, Andreas Geiger, Sebastian Nowozin. \\href{http:\/\/proceedings.mlr.press\/v80\/mescheder18a.html}{Which Training Methods for GANs do actually Converge?} In \\emph{Proceedings of ICML}, 2018. [0\/1]\n\t\\item Sanjeev Arora, Rong Ge, Behnam Neyshabur, Yi Zhang. \\href{http:\/\/proceedings.mlr.press\/v80\/arora18b.html}{Stronger generalization bounds for deep nets via a compression approach.} In \\emph{Proceedings of ICML}, 2018. [0\/3]\n\t\\item Andrew Ilyas, Logan Engstrom, Anish Athalye, Jessy Lin. 
\\href{http:\/\/proceedings.mlr.press\/v80\/ilyas18a.html}{Black-box Adversarial Attacks with Limited Queries and Information.} In \\emph{Proceedings of ICML}, 2018. [0\/2]\n\t\\item Niranjan Srinivas, Andreas Krause, Sham Kakade, Matthias Seeger. \\href{https:\/\/icml.cc\/Conferences\/2010\/papers\/422.pdf}{Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design.} In \\emph{Proceedings of ICML}, 2010. [0\/1]\n\t\\item Honglak Lee, Roger Grosse, Rajesh Ranganath and Andrew Ng. \\href{https:\/\/icml.cc\/Conferences\/2009\/papers\/571.pdf}{Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations.} In \\emph{Proceedings of ICML}, 2009. [0\/1]\n\t\\item Julien Mairal, Francis Bach, Jean Ponce and Guillermo Sapiro. \\href{https:\/\/icml.cc\/Conferences\/2009\/papers\/364.pdf}{Online dictionary learning for sparse coding.} In \\emph{Proceedings of ICML}, 2009. [0\/1]\n\t\\item Yoshua Bengio, Jerome Louradour, Ronan Collobert and Jason Weston. \\href{https:\/\/icml.cc\/Conferences\/2009\/papers\/119.pdf}{Curriculum learning.} In \\emph{Proceedings of ICML}, 2009. [0\/1]\n\t\\item Laurent Jacob, Guillaume Obozinski and Jean-Philippe Vert. \\href{https:\/\/icml.cc\/Conferences\/2009\/papers\/471.pdf}{Group Lasso with Overlaps and Graph Lasso.} In \\emph{Proceedings of ICML}, 2009. [0\/3]\n\t\\item Chun-Nam Yu and Thorsten Joachims. \\href{https:\/\/icml.cc\/Conferences\/2009\/papers\/420.pdf}{Learning structural SVMs with latent variables.} In \\emph{Proceedings of ICML}, 2009. [0\/2]\n\t\\item Kilian Weinberger, Anirban Dasgupta, Josh Attenberg, John Langford and Alex Smola. \\href{https:\/\/icml.cc\/Conferences\/2009\/papers\/407.pdf}{Feature hashing for large scale multitask learning.} In \\emph{Proceedings of ICML}, 2009. [0\/2]\n\t\\item Hanna Wallach, Iain Murray, Ruslan Salakhutdinov, and David Mimno. \\href{https:\/\/icml.cc\/Conferences\/2009\/papers\/356.pdf}{Evaluation methods for topic models.} In \\emph{Proceedings of ICML}, 2009. [0\/1]\n\t\\item Kamalika Chaudhuri, Sham Kakade, Karen Livescu and Karthik Sridharan. \\href{https:\/\/icml.cc\/Conferences\/2009\/papers\/317.pdf}{Multi-view clustering via canonical correlation analysis.} In \\emph{Proceedings of ICML}, 2009. [0\/2]\n\t\\item Shuiwang Ji and Jieping Ye. \\href{https:\/\/icml.cc\/Conferences\/2009\/papers\/151.pdf}{An accelerated gradient method for trace norm minimization.} In \\emph{Proceedings of ICML}, 2009. [0\/3]\n\t\\item Junzhou Huang, Tong Zhang and Dimitris Metaxas. \\href{https:\/\/icml.cc\/Conferences\/2009\/papers\/452.pdf}{Learning with structured sparsity.} In \\emph{Proceedings of ICML}, 2009. [0\/1]\n\t\\item Rajat Raina, Anand Madhavan and Andrew Ng. \\href{https:\/\/icml.cc\/Conferences\/2009\/papers\/218.pdf}{Large-scale deep unsupervised learning using graphics processors.} In \\emph{Proceedings of ICML}, 2009. [0\/2]\n\t\\item Ronan Collobert and Jason Weston. \\href{https:\/\/icml.cc\/Conferences\/2008\/papers\/391.pdf}{A unified architecture for natural language processing: deep neural networks with multitask learning.} In \\emph{Proceedings of ICML}, 2008. [0\/2]\n\t\\item Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. \\href{https:\/\/icml.cc\/Conferences\/2008\/papers\/592.pdf}{Extracting and composing robust features with denoising autoencoders.} In \\emph{Proceedings of ICML}, 2008. [0\/1]\n\t\\item Ruslan Salakhutdinov and Andriy Mnih. 
\\href{https:\/\/icml.cc\/Conferences\/2008\/papers\/600.pdf}{Bayesian probabilistic matrix factorization using Markov chain Monte Carlo.} In \\emph{Proceedings of ICML}, 2008. [0\/1]\n\t\\item John Duchi, Shai Shalev-Shwartz, Yoram Singer, and Tushar Chandra. \\href{https:\/\/icml.cc\/Conferences\/2008\/papers\/361.pdf}{Efficient projections onto the l1-ball for learning in high dimensions.} In \\emph{Proceedings of ICML}, 2008. [0\/1]\n\t\\item Cho-Jui Hsieh, Kai-Wei Chang, Chih-Jen Lin, S. Sathiya Keerthi, and S. Sundararajan. \\href{https:\/\/icml.cc\/Conferences\/2008\/papers\/166.pdf}{A dual coordinate descent method for large-scale linear SVM.} In \\emph{Proceedings of ICML}, 2008. [0\/1]\n\t\\item Tijmen Tieleman. \\href{https:\/\/icml.cc\/Conferences\/2008\/papers\/638.pdf}{Training restricted Boltzmann machines using approximations to the likelihood gradient.} In \\emph{Proceedings of ICML}, 2008. [0\/1]\n\t\\item Hugo Larochelle and Yoshua Bengio. \\href{https:\/\/icml.cc\/Conferences\/2008\/papers\/601.pdf}{Classification using discriminative restricted Boltzmann machines.} In \\emph{Proceedings of ICML}, 2008. [0\/1]\n\t\\item Jihun Hamm and Daniel Lee. \\href{https:\/\/icml.cc\/Conferences\/2008\/papers\/312.pdf}{Grassmann discriminant analysis: a unifying view on subspace-based learning.} In \\emph{Proceedings of ICML}, 2008. [0\/1]\n\t\\item Fen Xia, Tie-Yan Liu, Jue Wang, Wensheng Zhang, and Hang Li. \\href{https:\/\/icml.cc\/Conferences\/2008\/papers\/167.pdf}{Listwise Approach to Learning to Rank - Theory and Algorithm.} In \\emph{Proceedings of ICML}, 2008. [0\/1]\n\t\\item Filip Radlinski, Robert Kleinberg, and Thorsten Joachims. \\href{https:\/\/icml.cc\/Conferences\/2008\/papers\/264.pdf}{Learning diverse rankings with multi-armed bandits.} In \\emph{Proceedings of ICML}, 2008. [0\/1]\n\t\\item Mark Dredze, Koby Crammer, and Fernando Pereira. \\href{https:\/\/icml.cc\/Conferences\/2008\/papers\/322.pdf}{Confidence-weighted linear classification.} In \\emph{Proceedings of ICML}, 2008. [0\/1]\n\t\\item Ruslan Salakhutdinov and Iain Murray. \\href{https:\/\/icml.cc\/Conferences\/2008\/papers\/573.pdf}{On the quantitative analysis of deep belief networks.} In \\emph{Proceedings of ICML}, 2008. [0\/1]\n\t\\item Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R. Salakhutdinov, Quoc V. Le. \\href{https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/dc6a7e655d7e5840e66733e9ee67cc69-Abstract.html}{XLNet: Generalized Autoregressive Pretraining for Language Understanding.} In \\emph{Proceedings of NeurIPS}, 2019. [0\/1]\n\t\\item Alexis CONNEAU, Guillaume Lample. \\href{https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/c04c19c2c2474dbf5f7ac4372c5b9af1-Abstract.html}{Cross-lingual Language Model Pretraining.} In \\emph{Proceedings of NeurIPS}, 2019. [0\/4]\n\t\\item Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry. \\href{https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/e2c420d928d4bf8ce0ff2ec19b371514-Abstract.html}{Adversarial Examples Are Not Bugs, They Are Features.} In \\emph{Proceedings of NeurIPS}, 2019. [0\/1]\n\t\\item Jaehoon Lee, Lechao Xiao, Samuel Schoenholz, Yasaman Bahri, Roman Novak, Jascha Sohl-Dickstein, Jeffrey Pennington. \\href{https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/0d1a9651497a38d8b1c3871c84528bd4-Abstract.html}{Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent.} In \\emph{Proceedings of NeurIPS}, 2019. 
[0\/1]\n\t\\item David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, Colin A. Raffel. \\href{https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/1cd138d0499a68f4bb72bee04bbec2d7-Abstract.html}{MixMatch: A Holistic Approach to Semi-Supervised Learning.} In \\emph{Proceedings of NeurIPS}, 2019. [0\/1]\n\t\\item Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, Soumith Chintala. \\href{https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/bdbca288fee7f92f2bfa9f7012727740-Abstract.html}{PyTorch: An Imperative Style, High-Performance Deep Learning Library.} In \\emph{Proceedings of NeurIPS}, 2019. [0\/1]\n\t\\item Sanjeev Arora, Simon S. Du, Wei Hu, Zhiyuan Li, Russ R. Salakhutdinov, Ruosong Wang. \\href{https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/dbc4d84bfcfe2284ba11beffb853a8c4-Abstract.html}{On Exact Computation with an Infinitely Wide Neural Net.} In \\emph{Proceedings of NeurIPS}, 2019. [0\/1]\n\t\\item Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, Hsiao-Wuen Hon. \\href{https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/c20bb2d9a50d5ac1f713f8b34d9aac5a-Abstract.html}{Unified Language Model Pre-training for Natural Language Understanding and Generation.} In \\emph{Proceedings of NeurIPS}, 2019. [0\/1]\n\t\\item Ali Shafahi, Mahyar Najibi, Mohammad Amin Ghiasi, Zheng Xu, John Dickerson, Christoph Studer, Larry S. Davis, Gavin Taylor, Tom Goldstein. \\href{https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/7503cfacd12053d309b6bed5c89de212-Abstract.html}{Adversarial Training for Free!} In \\emph{Proceedings of NeurIPS}, 2019. [0\/3]\n\t\\item Jiasen Lu, Dhruv Batra, Devi Parikh, Stefan Lee. \\href{https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/c74d97b01eae257e44aa9d5bade97baf-Abstract.html}{ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks.} In \\emph{Proceedings of NeurIPS}, 2019. [0\/1]\n\t\\item Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel Bowman. \\href{https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/4496bf24afe7fab6f046bf4923da8de6-Abstract.html}{SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems.} In \\emph{Proceedings of NeurIPS}, 2019. [1\/1]\n\t\\item Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, Yejin Choi. \\href{https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/3e9f0fc9b2f89e043bc6233994dfcf76-Abstract.html}{Defending Against Neural Fake News.} In \\emph{Proceedings of NeurIPS}, 2019. [2\/4]\n\t\\item Yuan Cao, Quanquan Gu. \\href{https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/cf9dc5e4e194fc21f397b4cac9cc3ae9-Abstract.html}{Generalization Bounds of Stochastic Gradient Descent for Wide and Deep Neural Networks.} In \\emph{Proceedings of NeurIPS}, 2019. [0\/1]\n\t\\item Florian Tramer, Dan Boneh. \\href{https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/5d4ae76f053f8f2516ad12961ef7fe97-Abstract.html}{Adversarial Training and Robustness for Multiple Perturbations.} In \\emph{Proceedings of NeurIPS}, 2019. [0\/2]\n\t\\item Yair Carmon, Aditi Raghunathan, Ludwig Schmidt, John C. Duchi, Percy S. Liang. 
\\href{https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/32e0bd1497aa43e02a42f47d9d6515ad-Abstract.html}{Unlabeled Data Improves Adversarial Robustness.} In \\emph{Proceedings of NeurIPS}, 2019. [0\/1]\n\t\\item Lars Maal\u00f8e, Marco Fraccaro, Valentin Li\u00e9vin, Ole Winther. \\href{https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/9bdb8b1faffa4b3d41779bb495d79fb9-Abstract.html}{BIVA: A Very Deep Hierarchy of Latent Variables for Generative Modeling.} In \\emph{Proceedings of NeurIPS}, 2019. [0\/1]\n\t\\item Zeyuan Allen-Zhu, Yuanzhi Li, Yingyu Liang. \\href{https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/62dad6e273d32235ae02b7d321578ee8-Abstract.html}{Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers.} In \\emph{Proceedings of NeurIPS}, 2019. [0\/1]\n\t\\item Durk P. Kingma, Prafulla Dhariwal. \\href{https:\/\/proceedings.neurips.cc\/paper\/2018\/hash\/d139db6a236200b21cc7f752979132d0-Abstract.html}{Glow: Generative Flow with Invertible 1x1 Convolutions.} In \\emph{Proceedings of NeurIPS}, 2018. [0\/2]\n\t\\item Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, David K. Duvenaud. \\href{https:\/\/proceedings.neurips.cc\/paper\/2018\/hash\/69386f6bb1dfed68692a24c8686939b9-Abstract.html}{Neural Ordinary Differential Equations.} In \\emph{Proceedings of NeurIPS}, 2018. [0\/1]\n\t\\item Zhitao Ying, Jiaxuan You, Christopher Morris, Xiang Ren, Will Hamilton, Jure Leskovec. \\href{https:\/\/proceedings.neurips.cc\/paper\/2018\/hash\/e77dbaf6759253c7c6d0efc5690369c7-Abstract.html}{Hierarchical Graph Representation Learning with Differentiable Pooling.} In \\emph{Proceedings of NeurIPS}, 2018. [0\/1]\n\t\\item Ricky T. Q. Chen, Xuechen Li, Roger B. Grosse, David K. Duvenaud. \\href{https:\/\/proceedings.neurips.cc\/paper\/2018\/hash\/1ee3dfcd8a0645a25a35977997223d22-Abstract.html}{Isolating Sources of Disentanglement in Variational Autoencoders.} In \\emph{Proceedings of NeurIPS}, 2018. [0\/1]\n\t\\item Yangyan Li, Rui Bu, Mingchao Sun, Wei Wu, Xinhan Di, Baoquan Chen. \\href{https:\/\/proceedings.neurips.cc\/paper\/2018\/hash\/f5f8590cd58a54e94377e6ae2eded4d9-Abstract.html}{PointCNN: Convolution On X-Transformed Points.} In \\emph{Proceedings of NeurIPS}, 2018. [0\/1]\n\t\\item Arthur Jacot, Franck Gabriel, Clement Hongler. \\href{https:\/\/proceedings.neurips.cc\/paper\/2018\/hash\/5a4be1fa34e62bb8a6ec6b91d2462f5a-Abstract.html}{Neural Tangent Kernel: Convergence and Generalization in Neural Networks.} In \\emph{Proceedings of NeurIPS}, 2018. [0\/1]\n\t\\item Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Guilin Liu, Andrew Tao, Jan Kautz, Bryan Catanzaro. \\href{https:\/\/proceedings.neurips.cc\/paper\/2018\/hash\/d86ea612dec96096c5e0fcc8dd42ab6d-Abstract.html}{Video-to-Video Synthesis.} In \\emph{Proceedings of NeurIPS}, 2018. [0\/1]\n\t\\item Yuanzhi Li, Yingyu Liang. \\href{https:\/\/proceedings.neurips.cc\/paper\/2018\/hash\/54fe976ba170c19ebae453679b362263-Abstract.html}{Learning Overparameterized Neural Networks via Stochastic Gradient Descent on Structured Data.} In \\emph{Proceedings of NeurIPS}, 2018. [0\/1]\n\t\\item Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, Aleksander Madry. \\href{https:\/\/proceedings.neurips.cc\/paper\/2018\/hash\/f708f064faaf32a43e4d3c784e6af9ea-Abstract.html}{Adversarially Robust Generalization Requires More Data.} In \\emph{Proceedings of NeurIPS}, 2018. [0\/2]\n\t\\item Shibani Santurkar, Dimitris Tsipras, Andrew Ilyas, Aleksander Madry. 
\\href{https:\/\/proceedings.neurips.cc\/paper\/2018\/hash\/905056c1ac1dad141560467e0a99e1cf-Abstract.html}{How Does Batch Normalization Help Optimization?} In \\emph{Proceedings of NeurIPS}, 2018. [0\/1]\n\t\\item Harini Kannan, Alexey Kurakin, Ian Goodfellow. \\href{https:\/\/arxiv.org\/abs\/1803.06373}{Adversarial Logit Pairing.} In \\emph{Proceedings of NeurIPS$^*$}, 2018. [0\/2]\n\t\\item Ofir Nachum, Shixiang (Shane) Gu, Honglak Lee, Sergey Levine. \\href{https:\/\/proceedings.neurips.cc\/paper\/2018\/hash\/e6384711491713d29bc63fc5eeb5ba4f-Abstract.html}{Data-Efficient Hierarchical Reinforcement Learning.} In \\emph{Proceedings of NeurIPS}, 2018. [0\/3]\n\t\\item Prateek Jain, Raghu Meka, Inderjit Dhillon. \\href{https:\/\/papers.nips.cc\/paper\/2010\/hash\/08d98638c6fcd194a4b1e6992063e944-Abstract.html}{Guaranteed Rank Minimization via Singular Value Projection.} In \\emph{Proceedings of NeurIPS}, 2010. [0\/1]\n\t\\item Hanna Wallach, David Mimno, Andrew McCallum. \\href{https:\/\/papers.nips.cc\/paper\/2009\/hash\/0d0871f0806eae32d30983b62252da50-Abstract.html}{Rethinking LDA: Why Priors Matter.} In \\emph{Proceedings of NeurIPS}, 2009. [0\/4]\n\t\\item Geoffrey E. Hinton, Russ R. Salakhutdinov. \\href{https:\/\/papers.nips.cc\/paper\/2009\/hash\/31839b036f63806cba3f47b93af8ccb5-Abstract.html}{Replicated Softmax: an Undirected Topic Model.} In \\emph{Proceedings of NeurIPS}, 2009. [0\/1]\n\t\\item Daniel J. Hsu, Sham M. Kakade, John Langford, Tong Zhang. \\href{https:\/\/papers.nips.cc\/paper\/2009\/hash\/67974233917cea0e42a49a2fb7eb4cf4-Abstract.html}{Multi-Label Prediction via Compressed Sensing.} In \\emph{Proceedings of NeurIPS}, 2009. [0\/1]\n\t\\item Youngmin Cho, Lawrence Saul. \\href{https:\/\/papers.nips.cc\/paper\/2009\/hash\/5751ec3e9a4feab575962e78e006250d-Abstract.html}{Kernel Methods for Deep Learning.} In \\emph{Proceedings of NeurIPS}, 2009. [0\/1]\n\t\\item Kurt Miller, Michael Jordan, Thomas Griffiths. \\href{https:\/\/papers.nips.cc\/paper\/2009\/hash\/437d7d1d97917cd627a34a6a0fb41136-Abstract.html}{Nonparametric Latent Feature Models for Link Prediction.} In \\emph{Proceedings of NeurIPS}, 2009. [0\/3]\n\t\\item Ian Goodfellow, Honglak Lee, Quoc Le, Andrew Saxe, Andrew Ng. \\href{https:\/\/papers.nips.cc\/paper\/2009\/hash\/428fca9bc1921c25c5121f9da7815cde-Abstract.html}{Measuring Invariances in Deep Networks.} In \\emph{Proceedings of NeurIPS}, 2009. [0\/1]\n\t\\item Vinod Nair, Geoffrey E. Hinton. \\href{https:\/\/papers.nips.cc\/paper\/2009\/hash\/6e7b33fdea3adc80ebd648fffb665bb8-Abstract.html}{3D Object Recognition with Deep Belief Nets.} In \\emph{Proceedings of NeurIPS}, 2009. [0\/1]\n\t\\item Martin Zinkevich, John Langford, Alex Smola. \\href{https:\/\/papers.nips.cc\/paper\/2009\/hash\/b55ec28c52d5f6205684a473a2193564-Abstract.html}{Slow Learners are Fast.} In \\emph{Proceedings of NeurIPS}, 2009. [0\/1]\n\t\\item Ryan Mcdonald, Mehryar Mohri, Nathan Silberman, Dan Walker, Gideon Mann. \\href{https:\/\/papers.nips.cc\/paper\/2009\/hash\/d81f9c1be2e08964bf9f24b15f0e4900-Abstract.html}{Efficient Large-Scale Distributed Training of Conditional Maximum Entropy Models.} In \\emph{Proceedings of NeurIPS}, 2009. [0\/1]\n\t\\item Corinna Cortes, Mehryar Mohri, Afshin Rostamizadeh. \\href{https:\/\/papers.nips.cc\/paper\/2009\/hash\/e7f8a7fb0b77bcb3b283af5be021448f-Abstract.html}{Learning Non-Linear Combinations of Kernels.} In \\emph{Proceedings of NeurIPS}, 2009. [0\/1]\n\t\\item Laurent Jacob, Jean-philippe Vert, Francis Bach. 
\\href{https:\/\/papers.nips.cc\/paper\/2008\/hash\/fccb3cdc9acc14a6e70a12f74560c026-Abstract.html}{Clustered Multi-Task Learning: A Convex Formulation.} In \\emph{Proceedings of NeurIPS}, 2008. [0\/1]\n\t\\item Kamalika Chaudhuri, Claire Monteleoni. \\href{https:\/\/papers.nips.cc\/paper\/2008\/hash\/8065d07da4a77621450aa84fee5656d9-Abstract.html}{Privacy-preserving logistic regression.} In \\emph{Proceedings of NeurIPS}, 2008. [0\/3]\n\t\\item Byron M. Yu, John P. Cunningham, Gopal Santhanam, Stephen Ryu, Krishna V. Shenoy, Maneesh Sahani. \\href{https:\/\/papers.nips.cc\/paper\/2008\/hash\/ad972f10e0800b49d76fed33a21f6698-Abstract.html}{Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity.} In \\emph{Proceedings of NeurIPS}, 2008. [0\/3]\n\t\\item Ilya Sutskever, Geoffrey E. Hinton, Graham W. Taylor. \\href{https:\/\/papers.nips.cc\/paper\/2008\/hash\/9ad6aaed513b73148b7d49f70afcfb32-Abstract.html}{The Recurrent Temporal Restricted Boltzmann Machine.} In \\emph{Proceedings of NeurIPS}, 2008. [0\/1]\n\t\\item Wenyuan Dai, Yuqiang Chen, Gui-rong Xue, Qiang Yang, Yong Yu. \\href{https:\/\/papers.nips.cc\/paper\/2008\/hash\/0060ef47b12160b9198302ebdb144dcf-Abstract.html}{Translated Learning: Transfer Learning across Different Feature Spaces.} In \\emph{Proceedings of NeurIPS}, 2008. [0\/3]\n\t\\item Yishay Mansour, Mehryar Mohri, Afshin Rostamizadeh. \\href{https:\/\/papers.nips.cc\/paper\/2008\/hash\/0e65972dce68dad4d52d063967f0a705-Abstract.html}{Domain Adaptation with Multiple Sources.} In \\emph{Proceedings of NeurIPS}, 2008. [0\/1]\n\t\\item Sham M. Kakade, Karthik Sridharan, Ambuj Tewari. \\href{https:\/\/papers.nips.cc\/paper\/2008\/hash\/5b69b9cb83065d403869739ae7f0995e-Abstract.html}{On the Complexity of Linear Prediction: Risk Bounds, Margin Bounds, and Regularization.} In \\emph{Proceedings of NeurIPS}, 2008. [0\/1]\n\t\\item Francis Bach. \\href{https:\/\/papers.nips.cc\/paper\/2008\/hash\/a4a042cf4fd6bfb47701cbc8a1653ada-Abstract.html}{Exploring Large Feature Spaces with Hierarchical Multiple Kernel Learning.} In \\emph{Proceedings of NeurIPS}, 2008. [0\/1]\n\t\\item Ijaz Akhter, Yaser Sheikh, Sohaib Khan, Takeo Kanade. \\href{https:\/\/papers.nips.cc\/paper\/2008\/hash\/dc82d632c9fcecb0778afbc7924494a6-Abstract.html}{Nonrigid Structure from Motion in Trajectory Space.} In \\emph{Proceedings of NeurIPS}, 2008. [0\/1]\n\t\\item Prateek Jain, Brian Kulis, Inderjit Dhillon, Kristen Grauman. \\href{https:\/\/papers.nips.cc\/paper\/2008\/hash\/aa68c75c4a77c87f97fb686b2f068676-Abstract.html}{Online Metric Learning and Fast Similarity Search.} In \\emph{Proceedings of NeurIPS}, 2008. [0\/1]\n\t\\item Duy Nguyen-tuong, Jan Peters, Matthias Seeger. \\href{https:\/\/papers.nips.cc\/paper\/2008\/hash\/01161aaa0b6d1345dd8fe4e481144d84-Abstract.html}{Local Gaussian Process Regression for Real Time Online Model Learning.} In \\emph{Proceedings of NeurIPS}, 2008. [0\/1]\n\t\\item Lester Mackey. \\href{https:\/\/papers.nips.cc\/paper\/2008\/hash\/85d8ce590ad8981ca2c8286f79f59954-Abstract.html}{Deflation Methods for Sparse PCA.} In \\emph{Proceedings of NeurIPS}, 2008. [0\/1]\n\\end{itemize}\n\n\\subsection{Checklist}\n\nMany conferences, including NeurIPS, have begun requiring reproducability checklists. 
We include a modified version of the NeurIPS checklist here to provide a quick summary of common reproducibility questions and encourage this practice in future papers.\n\n\\begin{enumerate}\n\n\\item For all authors...\n\\begin{enumerate}\n \\item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?\n \\answerYes{}\n \\item Did you describe the limitations of your work?\n \\answerYes{See Discussion.} \n \\item Did you discuss any potential negative societal impacts of your work?\n \\answerYes{Included in the Appendix.}\n \\item Have you read the ethics review guidelines and ensured that your paper conforms to them?\n \\answerYes{}\n\\end{enumerate}\n\n\\item If you are including theoretical results...\n\\begin{enumerate}\n \\item Did you state the full set of assumptions of all theoretical results?\n \\answerNA{}\n\t\\item Did you include complete proofs of all theoretical results?\n \\answerNA{}\n\\end{enumerate}\n\n\\item If you ran experiments...\n\\begin{enumerate}\n \\item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?\n \\answerYes{Available on GitHub.}\n \\item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?\n \\answerYes{Included in Appendix.}\n\t\\item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?\n \\answerNo{}\n\t\\item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?\n \\answerYes{Included in Appendix.}\n\\end{enumerate}\n\n\\item If you are using existing assets (e.g., code, data, models) or curating\/releasing new assets...\n\\begin{enumerate}\n \\item If your work uses existing assets, did you cite the creators?\n \\answerYes{Full listing of annotated papers is given in the Appendix.}\n \\item Did you mention the license of the assets?\n \\answerYes{See Footnote 1.}\n \\item Did you include any new assets either in the supplemental material or as a URL?\n \\answerYes{Included in supplementary zipfile.}\n \\item Did you discuss whether and how consent was obtained from people whose data you're using\/curating?\n \\answerYes{Discussed in Appendix A.2. Additional Methodological Details.}\n \\item Did you discuss whether the data you are using\/curating contains personally identifiable information or offensive content?\n \\answerYes{Discussed in Appendix A.2. 
Additional Methodological Details.}\n\\end{enumerate}\n\n\\item If you used crowdsourcing or conducted research with human subjects...\n\\begin{enumerate}\n \\item Did you include the full text of instructions given to participants and screenshots, if applicable?\n \\answerNA{}\n \\item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?\n \\answerNA{}\n \\item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?\n \\answerNA{}\n\\end{enumerate}\n\n\\end{enumerate}\n\n\n\n\\section{Appendix}\\label{A}\n\\setcounter{figure}{0} \n\n\\subsection{Relevance to NeurIPS}\n\nThe ML community has recently introduced a number of efforts to understand the societal impacts of the field, such as NeurIPS's requirement for the inclusion of broader impacts statements for all submitted papers in 2020, the Resistance AI Workshop at NeurIPS 2020, which investigated how AI concentrates power, and the Navigating the Broader Impacts of AI Research Workshop at NeurIPS 2020, which sought to understand the impacts of ML research as a whole on society. Understanding the social impact of a paper, let alone the discipline, is difficult. Merely looking at various benchmarks, abstract assessments, or broader impact statements, for example, is insufficient to get at the many concrete impacts encoded in the research itself. This paper begins to bridge this gap by seeking to understand the value commitments in papers published at NeurIPS and a closely related conference, ICML. As such, this paper is a highly relevant and timely contribution to the NeurIPS audience. Because research into technical ML topics -- reinforcement learning, deep learning, optimization, etc. -- is core to NeurIPS, it is increasingly important to understand where these research areas stand with regard to societal impact, both positive and negative, as well as the benefits they bring and to whom.\n\n\n\n\n\\subsection{Additional Methodological Details}\n\n\\subsubsection{Data Sources}\n\\label{sec:data}\n\n\nTo determine the most-cited papers from each conference, we rely on the publicly available Semantic Scholar database \\citep{ammar.2018}, which includes bibliographic information for scientific papers, including citation counts.\\footnote{\\url{http:\/\/s2-public-api.prod.s2.allenai.org\/corpus\/}}\nUsing these data, we chose the most-cited papers published at NeurIPS and ICML in each of 2008, 2009, 2018, and 2019. \n\nLike all bibliographic databases, Semantic Scholar is imperfect. Upon manual review, we wish to document that our selection includes one paper that was actually published in 2010, and one that was retracted from NeurIPS prior to publication (see \\S\\ref{sec:paperlist} for details). In addition, the citation counts used to determine the most-cited papers reflect a static moment in time, and may differ from other sources. \n\nBecause our artifacts of study are papers that have been previously published at NeurIPS or ICML, we surmise that the authors normatively expect and consent to their papers and themselves as authors being referenced and analyzed in future papers, e.g. this paper. Accordingly, we chose not to seek explicit permission from the original authors to reference, annotate, and analyze their papers. The annotations we generated do not introduce any new personally identifying information or offensive content. 
The sentences from the original published papers are necessarily part of our annotations; to the extent that these papers have these issues, these sentences may contain personally identifying information or offensive content. Given that the original authors contributed their work to the same venues as our own work, we believe that the potential to cause new harm from this inclusion is minimal.\n\n\n\n\\subsubsection{Defining elite universities}\nTo determine the list of elite universities, we follow Ahmed and Wahed \\citep{ahmed2020democratization} and rely on the QS World University Rankings for the discipline of computer science. For 2018\/19, we take the top 50 schools from the CS rankings for 2018. For 2008\/09, we take the top 50 schools from the CS rankings for 2011, as the closest year for which data are available. \n\n\\subsubsection{Defining big tech}\nWe used Abdalla and Abdalla's \\citep{abdalla2020grey} criterion for what is considered \"big tech\", which comprises: Alibaba, Amazon, Apple, Element AI, Facebook, Google, Huawei, IBM, Intel, Microsoft, Nvidia, OpenAI, Samsung, and Uber. Furthermore, we added DeepMind, which Google acquired in 2014, to this list. We considered all other companies as \"non-big tech\". \n\n\n\n\n\\subsection{Annotations}\nWe include the annotations of all papers at \\url{https:\/\/github.com\/wagnew3\/The-Values-Encoded-in-Machine-Learning-Research}.\nTo present a bird's-eye view of the value annotations, we present randomly selected examples of annotated sentences in \\S\\ref{sec:randomexamples}. \nIn addition, here we present the frequency of occurrence for all values prior to clustering (see \\S\\ref{sec:value_clusters}) in Figure \\ref{fig:value-totals-all}. \n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.97\\textwidth]{figures\/value-freq-graph_all.png}\n\\caption{Value occurrences across papers.}\n\\label{fig:value-totals-all}\n\\end{figure}\n\n\n\\subsection{Value Hierarchy}\n\\label{sec:value_clusters}\n\nDuring discussions held throughout the annotation process, several values emerged as particularly salient to all annotators, including \\emph{Performance}, \\emph{Efficiency}, \\emph{Generalization}, \\emph{Novelty}, and \\emph{Building on Past Work}. In some cases, these had strong overlap with related values (e.g., \\emph{Performance} is closely related to \\emph{Accuracy}). In other cases, we had annotated for several fine-grained values that we felt could be combined (e.g., \\emph{Data Efficiency} and \\emph{Label Efficiency} are types of \\emph{Efficiency}). As such, we decided to group together certain related sets of values into clusters for presentation in the main paper. 
The values that were combined into each cluster are listed in Table \\ref{tab:value_clusters} below.\n\n\\begin{table}[ht]\n \\centering\n \\small\n \\caption{Sets of values combined for discussion in the main paper}\n \\begin{tabular}{l l}\n Cluster & Values \\\\\n \\hline\n Performance values & Performance, Accuracy, State-of-the-art \\\\\n Building on past work & Building on classic work, Building on recent work \\\\\n Generalization values & Generalization, Avoiding train\/test discrepancy, Flexibility\/extensibility \\\\\n Efficiency values & Efficiency, Data efficiency, Energy efficiency, Fast, Label efficiency, \\\\\n & Low cost, Memory efficiency, Reduced training time \\\\\n \\end{tabular}\n \\label{tab:value_clusters}\n\\end{table}\n\n\n\\subsection{Experiments with Using Text Classification to Identify Values}\n\nAlthough it was not our primary purpose in annotating highly-cited papers, we include here a brief report on using the annotations we generated as potential training data for classifiers that could in principle be used to estimate the prevalence of these values in a larger set of ML papers. This is something that we should approach with great caution for several reasons: i) we only have a relatively small training set of annotated examples with respect to machine learning best practices;\nii) these annotations are taken from a non-random set of papers, and any models trained on these data may not generalize to all papers; iii) an automated approach will fail to detect additional, previously unobserved, emergent values; and iv) based on our experiences annotating these papers, we expect that many would be expressed subtly and in varied ways that would be difficult to detect automatically, at least without considerably more training data.\n\nTo present a baseline for testing the potential of this approach, while avoiding any biases that might be introduced by pretrained language models, we make use of simple regularized logistic regression classifiers operating on unigram features. We trained models separately for each value (for all values that had at least 20 relevant sentences, using all relevant sentences for the higher-order grouped values), treating each sentence as an instance with a binary label (present or not), tokenizing each sentence using \\emph{spaCy} and converting each to a binary feature representation indicating the presence or absence of each word in the vocabulary (all words occurring at least twice in the corpus). These choices were not tuned. We randomly selected 300 sentences to use as a held out test set (using the same test set for each value), and trained a model using the remaining data, using 5-fold cross validation to tune the regularization strength.\n\nF1 scores on the test set for the various models are shown in Figure \\ref{fig:textcat} (right), and can generally be seen to be unimpressive. \nThe F1 score for most values is on the order of 0.5 or less, and some values, even relatively common ones such as \\textit{Unifying Ideas}, ended up with an F1 score of 0. The most highly-weighted features for most classifiers were quite reasonable, but this is evidently a relatively difficult task, at least given this amount of data. 
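For concreteness, the following minimal sketch illustrates the per-value training setup described above. It assumes scikit-learn and spaCy, and \\texttt{annotated\\_sentences} (a list of (sentence, binary label) pairs for a single value) is a hypothetical placeholder for the annotated data; the sketch should be read as an illustration of the approach rather than as the exact pipeline we used.

\\begin{verbatim}
# Sketch only: one binary unigram-feature logistic regression classifier
# per value, tuning the regularization strength with 5-fold CV.
# `annotated_sentences` is a hypothetical list of (sentence, 0/1) pairs.
import numpy as np
import spacy
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

nlp = spacy.blank("en")  # tokenizer only

def tokenize(text):
    return [token.text for token in nlp(text)]

texts = [sentence for sentence, _ in annotated_sentences]
labels = np.array([label for _, label in annotated_sentences])

# Binary presence/absence features for words occurring at least twice.
vectorizer = CountVectorizer(tokenizer=tokenize, binary=True, min_df=2)
features = vectorizer.fit_transform(texts)

# Hold out 300 sentences for testing; tune C by 5-fold CV on the rest.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=300, random_state=0)
classifier = LogisticRegressionCV(Cs=10, cv=5, penalty="l2",
                                  scoring="f1", max_iter=1000)
classifier.fit(X_train, y_train)
print("held-out F1:", f1_score(y_test, classifier.predict(X_test)))
\\end{verbatim}

Treating each value as an independent binary classification task keeps the models simple and directly inspectable (the per-value feature weights can be examined), at the cost of ignoring correlations between values.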
The exceptions to this poor performance included the Performance-related values (\\emph{Performance}, \\emph{Accuracy}, and \\emph{State-of-the-art}), as well as \\emph{Effectiveness} and \\emph{Facilitating Use}, all of which had F1 scores greater than 0.75, and most of which were typically represented by a relatively small set of terms (e.g., \"accurate\", \"accuracy\", \"accurately\", \"inaccurate\", \"accuracies\", \"errors\", etc. for \\emph{Accuracy}).\n\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.85\\textwidth]{figures\/f1s.pdf}\n\\caption{Proportion of papers from 2008--2020 (combining NeurIPS and ICML) predicted to have at least one sentence expressing each value (left), and estimated performance (F1) of the corresponding classifiers (right). Note that the overall performance of most classifiers is generally poor, indicating that the estimates on the left should be treated as unreliable in most cases. Grey bars represent the clustered values. Classifiers were not trained for values with fewer than 20 representative sentences.}\n\\label{fig:textcat}\n\\end{figure}\n\nAlthough the poor performance of these classifiers means we should interpret any use of them with caution, we explore applying them to a broader set of papers for the sake of completeness. To do so, we download PDFs of all papers published at NeurIPS and ICML for the years 2008 through 2020, convert these to text using \\emph{pdftotext}, and extract sentences from this text, excluding references, as well as very short sentences (fewer than 6 tokens) or lines without alphabetic characters. Note that due to the difficulty of automatically parsing papers into sections, these textual representations are not limited to the abstract, introduction, discussion, and conclusion, in contrast to our annotations; thus we would expect most values to occur more frequently, especially those that are likely to occur in sections about experiments and results. \n\nWe then apply the classifiers trained above to each sentence in each paper. For each value, we then compute the proportion of papers (combining NeurIPS and ICML for this entire time period) that had at least one sentence predicted to exhibit that value. The overall proportions are shown in Figure \\ref{fig:textcat} (left). As can be seen, the relative prevalence of values is broadly similar to our annotated sample, though many are predicted to occur with greater frequency, as expected. However, to reiterate, we should be highly skeptical of these findings, given the poor performance of the classifiers, and we view this analysis as useful primarily for deepening our understanding of appropriate methodology.\n\nFinally, as an additional exploration, we focus on the Performance-related values (\\emph{Performance}, \\emph{Accuracy}, and \\emph{State-of-the-art}), which represent the overall most prevalent cluster in our annotations and were relatively easy to identify using classification due to their typically simple and explicit expression. We plot the estimated frequency over time for both conferences. For NeurIPS, which has better archival practices, we extend the analysis back to 1987. We should again treat these results with caution, given all the caveats above, as well as the fact that we are now applying these classifiers outside the temporal range from which the annotations were collected. 
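As a rough sketch of this corpus-level procedure, the code below shows the sentence filtering and per-paper aggregation for a single value. It assumes the \\texttt{vectorizer} and \\texttt{classifier} from the previous sketch, plus a hypothetical directory \\texttt{txt\\_papers} containing one \\emph{pdftotext} output file per paper; details such as how references are stripped are simplified here and may differ from the actual pipeline, which is a further reason to treat the resulting estimates cautiously.

\\begin{verbatim}
# Sketch only: estimate, for one value, the proportion of papers with at
# least one sentence predicted to express that value. Reuses `vectorizer`
# and `classifier` from the previous sketch; the paths are hypothetical.
from pathlib import Path
import spacy

nlp = spacy.blank("en")
nlp.add_pipe("sentencizer")  # rule-based sentence boundaries

def paper_sentences(txt_path):
    text = Path(txt_path).read_text(errors="ignore")
    # Crude cutoff at the last occurrence of a references heading.
    cut = text.lower().rfind("references")
    if cut != -1:
        text = text[:cut]
    for sentence in nlp(text).sents:
        tokens = [t for t in sentence if not t.is_space]
        # Keep sentences with at least 6 tokens and some alphabetic content.
        if len(tokens) >= 6 and any(t.is_alpha for t in tokens):
            yield sentence.text

papers = sorted(Path("txt_papers").glob("*.txt"))
papers_with_value = 0
for path in papers:
    sentences = list(paper_sentences(path))
    if sentences:
        predictions = classifier.predict(vectorizer.transform(sentences))
        papers_with_value += int(predictions.any())

print("proportion:", papers_with_value / max(len(papers), 1))
\\end{verbatim}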
Nevertheless, the results, shown in Figure \\ref{fig:performance-temporal}, suggest that these values have gradually become more common in NeurIPS over time, reinforcing the contingent nature of the dominance of the current set of values. Further investigation is required, however, in order to verify this finding.\n\n\n\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.4\\textwidth]{figures\/performance_plots.pdf}\n\\caption{Proportion of papers per year (of those published in ICML and NeurIPS) that are classified as having at least one sentence expressing \\emph{Performance}, \\emph{Accuracy}, or \\emph{State-of-the-art} (top, middle, and bottom), based on simple text classifiers trained on our annotations. Bands show $\\pm$2 standard deviations, reflecting the changing overall number of papers per year.}\n\\label{fig:performance-temporal}\n\\end{figure}\n\n\n\\subsection{Code and Reproducibility}\nOur code and annotations are available at \\url{https:\/\/github.com\/wagnew3\/The-Values-Encoded-in-Machine-Learning-Research}. The text classification experiments were run on a 2019 MacBook Air.\n\n\\subsection{Potential Negative Societal Impacts}\n\nBecause this paper primarily relies on socially conscientious manual annotation of papers already published at NeurIPS and ICML, we believe that the potential negative societal impacts of carrying out these annotations and sharing them are minimal. However, we still briefly comment on this here.\n\nFirst, in terms of the specific concerns highlighted in the NeurIPS call for papers, we believe our annotation work poses no risks in terms of harm to living beings, human rights concerns, threats to livelihoods, etc.\nSimilarly, all annotators are co-authors on this paper; thus there was no risk to participants, beyond what we chose to take on for ourselves. We have further discussed these aspects of the data in \\S\\ref{sec:data}. Our computational experiments are done locally and have resource usage on par with everyday computer use.\n\nOne area of potential concern to readers, particularly researchers, may be the possibility of adopting a punitive stance toward individuals, unintentionally casting certain authors in a negative light, or unintentionally contributing to harmful tensions within the ML community. We wish to directly express that throughout this paper we have sought to avoid punitive language toward individuals and to adopt language emphasizing systematic patterns. To further minimize the former risk, we have chosen to include randomly selected examples in the main paper, omitting author attributions from quoted sources. To complement this and meet the need for completeness, transparency, and reproducibility of our work, we include a full list of cited papers below, so as to acknowledge this work without drawing unnecessary attention to any one particular source.\n\nAlthough our intention is to broaden and deepen the conversation, we acknowledge that some authors may perceive our work as not representative of the type of work they would like to see at NeurIPS, and possibly detrimental to the conference. 
However, because of the prominence and influence of machine learning today, it is especially important to have these conversations at the premier venues, and we hope that our paper will be the basis for useful conversations and future work.\nAs expressed in the main paper, these perceptions and norms may be precisely those that are more contingent than the community realizes; these norms may be shaped, dismantled, transformed, or reenvisioned for the better. \n\n\n\\subsection{Random Examples}\n\\label{sec:randomexamples}\n\nThe list below contains 100 random examples drawn from the annotated data, along with the set of annotated values for each. These sentences were annotated for values within the context of the paper.\n\\textit{\n\\begin{itemize}\n \\item The problem of minimizing the rank of a matrix variable subject to certain constraints arises in many fields including machine learning, automatic control, and image compression. {\\textbf{Used in practice\/Popular}}\n\t\\item Locality-sensitive hashing [6] is an effective technique that performs approximate nearest neighbor searches in time that is sub-linear in the size of the database {\\textbf{Approximation}, \\textbf{Building on recent work}, \\textbf{Effectiveness}, \\textbf{Fast}}\n\t\\item In the finite case, analysis of optimization and generalization of fully-trained nets is of course an open problem {\\textbf{Formal description\/analysis}, \\textbf{Generalization}}\n\t\\item So to achieve adversarial robustness, a classifier must generalize in a stronger sense. {\\textbf{Generalization}, \\textbf{Robustness}}\n\t\\item Robustness to label corruption is similarly improved by wide margins, such that pre-training alone outperforms certain task-specific methods, sometimes even after combining these methods with pre-training. {\\textbf{Performance}, \\textbf{Robustness}, \\textbf{Understanding (for researchers)}}\n\t\\item RBMs have been particularly successful in classification problems either as feature extractors for text and image data (Gehler et al., 2006) or as a good initial\ntraining phase for deep neural network classifiers (Hinton, 2007). {\\textbf{Building on recent work}, \\textbf{Flexibility\/Extensibility}, \\textbf{Successful}}\n\t\\item Our theoretical analysis naturally leads to a new formulation\nof adversarial defense which has several appealing properties; in particular, it inherits the benefits of scalability to\nlarge datasets exhibited by Tiny ImageNet, and the algorithm achieves state-of-the-art performance on a range of\nbenchmarks while providing theoretical guarantees. {\\textbf{Robustness}, \\textbf{Scales up}, \\textbf{Security}, \\textbf{Theoretical guarantees}}\n\t\\item The current paper focuses on the training loss, but\ndoes not address the test loss. {\\textbf{Generalization}}\n\t\\item This result is significant since stochastic methods are highly preferred for their efficiency over deterministic\ngradient methods in machine learning applications. {\\textbf{Efficiency}}\n\t\\item Ranking, which is to sort objects based on certain factors, is the central problem of applications such as in- formation retrieval (IR) and information filtering. {\\textbf{Applies to real world}, \\textbf{Used in practice\/Popular}}\n\t\\item This subspace is important, because, when projected onto this subspace, the means of the distributions are well-separated, yet the typical distance between points from the same distribution is smaller than in the original space. 
{\\textbf{Important}}\n\t\\item Overall, the existence of such adversarial examples raises concerns about the robustness of current classifiers. {\\textbf{Identifying limitations}, \\textbf{Robustness}}\n\t\\item We have shown that biased compressors if naively used can lead to bad generalization, and even non-convergence. {\\textbf{Formal description\/analysis}, \\textbf{Generalization}}\n\t\\item Bartlett and Mendelson [2002] provide a generalization bound for Lipschitz loss functions. {\\textbf{Building on classic work}, \\textbf{Generalization}}\n\t\\item The principal advantage of taking this \"lateral\" approach arises from the fact that compact representation in trajectory space is better motivated physically than compact representation in shape space {\\textbf{Realistic world model}}\n\t\\item In this paper, we show that gradient descent on deep overparametrized networks can obtain zero training loss {\\textbf{Formal description\/analysis}, \\textbf{Theoretical guarantees}}\n\t\\item Moreover, web queries often have different meanings for different users (a canonical example is the query jaguar ) suggesting that a ranking with diverse documents may be preferable. {\\textbf{Diverse output}, \\textbf{User influence}}\n\t\\item We include human performance estimates for all benchmark tasks, which verify that substantial headroom exists between a strong BERT-based baseline and human performance. {\\textbf{Learning from humans}, \\textbf{Performance}}\n\t\\item Inthis paper we propose a simple and fast algorithm SVP(Singular Value Projec-tion) for rank minimization under affine constraints (ARMP) and show that SVP recovers the minimum rank solution for affine constraints that satisfy a restricted isometry property(RIP). {\\textbf{Fast}, \\textbf{Novelty}, \\textbf{Simplicity}}\n\t\\item We use standard formalization of multiclass classification, where data consists of sample x and its label y\n(an integer from 1 to k). {\\textbf{Building on classic work}}\n\t\\item A number of recent work has shown that the low rank solution can be recovered exactly via minimizing the trace norm under certain conditions (Recht et al., 2008a; Recht et al., 2008b; Cand\\`es and Recht, 2008). {\\textbf{Building on recent work}}\n\t\\item This difficulty has necessitated the use of a heuristic inference procedure, that\nnonetheless was accurate enough for successful learning. {\\textbf{Accuracy}, \\textbf{Successful}}\n\t\\item We illustrate such potential by measuring search space properties relevant to architecture search. {\\textbf{Quantitative evidence (e.g. experiments)}}\n\t\\item Deep architectures consist of feature detector units arranged in layers. Lower layers detect simple features and feed into higher layers, which in turn detect more complex features. {\\textbf{Simplicity}}\n\t\\item This makes the updates hard to massively parallelize at a coarse, dataparallel level (e.g., by computing the updates in parallel and summing them together centrally) without losing the critical stochastic nature of the updates. {\\textbf{Large scale}, \\textbf{Parallelizability \/ distributed}}\n\t\\item This suggests future work on model robustness should evaluate proposed methods with pretraining in order to correctly gauge their utility, and some work could specialize pre-training for these downstream tasks. {\\textbf{Robustness}}\n\t\\item Adversarial training remains among the most trusted defenses, but it is nearly intractable on largescale problems. 
{\\textbf{Scales up}, \\textbf{Security}}\n\t\\item For complex robots such as humanoids or light-weight arms, it is often hard to model the system sufficiently well and, thus, modern regression methods offer a viable alternative [7,8]. {\\textbf{Realistic world model}}\n\t\\item In\ncontrast to prior work that operates in this goal-setting model, we use states as goals directly, which\nallows for simple and fast training of the lower layer. {\\textbf{Reduced training time}, \\textbf{Simplicity}}\n\t\\item Meanwhile, using less resources tends to produce less compelling results (Negrinho and Gordon, 2017; Baker et al., 2017a). {\\textbf{Requires few resources}}\n\t\\item This finding represents an exciting opportunity for defense against neural fake news: the best models for generating neural disinformation are also the best models at detecting it. {\\textbf{Applies to real world}}\n\t\\item Our strong empirical results suggest that\nrandomized smoothing is a promising direction\nfor future research into adversarially robust classification. {\\textbf{Quantitative evidence (e.g. experiments)}, \\textbf{Robustness}, \\textbf{Security}}\n\t\\item We then turn our attention to identifying the roots of BatchNorm's success. {\\textbf{Successful}, \\textbf{Understanding (for researchers)}}\n\t\\item We also report the results of large-scale experiments comparing these three methods which demonstrate the benefits of the mixture weight method: this method consumes less resources, while achieving a performance comparable to that of standard approaches. {\\textbf{Large scale}, \\textbf{Performance}, \\textbf{Requires few resources}}\n\t\\item This paper does not cover the the generalization of over-parameterized neural networks to the test data. {\\textbf{Avoiding train\/test discrepancy}, \\textbf{Generalization}}\n\t\\item While there has been success with robust classifiers on simple datasets [31, 36, 44, 48], more complicated datasets still exhibit a large gap between \"'standard\" and robust accuracy [3, 11]. {\\textbf{Applies to real world}, \\textbf{Robustness}, \\textbf{Successful}}\n\t\\item In this paper, we have shown theoretically how independence between examples can make the actual effect much smaller. {\\textbf{Novelty}, \\textbf{Theoretical guarantees}}\n\t\\item We provide empirical evidence that several recently-used methods for estimating the probability of held-out documents are inaccurate and can change the results of model comparison. {\\textbf{Accuracy}, \\textbf{Building on recent work}, \\textbf{Quantitative evidence (e.g. experiments)}}\n\t\\item This agreement is robust across different architectures, optimization methods, and loss functions {\\textbf{Robustness}}\n\t\\item Unfortunately, due to the slow-changing policy in an actor-critic setting, the current and target value estimates remain too similar to avoid maximization bias. {\\textbf{Accuracy}}\n\t\\item As a future work, we are pursuing a better understanding of probabilistic distributions on the Grassmann manifold. {\\textbf{Understanding (for researchers)}}\n\t\\item We also view these results as an opportunity to encourage the community to pursue a more systematic investigation of the algorithmic toolkit of deep learning and the underpinnings of its effectiveness. {\\textbf{Effectiveness}, \\textbf{Understanding (for researchers)}}\n\t\\item This challenge is further exacerbated in continuous state and action spaces, where a separate actor network is often used to perform the maximization in Q-learning. 
{\\textbf{Performance}}\n\t\\item The vulnerability of neural networks to adversarial perturbations has recently been a source of much discussion and is still poorly understood. {\\textbf{Robustness}, \\textbf{Understanding (for researchers)}}\n\t\\item Most of the evaluation methods described in this paper extend readily to more complicated topic models\u2014 including non-parametric versions based on hierarchical Dirichlet processes (Teh et al., 2006)\u2014since they only require a MCMC algorithm for sampling the latent topic assignments z for each document and a way of evaluating probability P(w | z, $\\Phi$, $\\alpha$m). {\\textbf{Flexibility\/Extensibility}, \\textbf{Understanding (for researchers)}}\n\t\\item In a formulation closely related to the dual problem, we have: $\\hat{w}$ = argmin w:F (w)$\\leq$c 1 n Xn i=1 $\\ell$(hw, xii, yi) (2) where, instead of regularizing, a hard restriction over the parameter space is imposed (by the constant c). {\\textbf{Formal description\/analysis}}\n\t\\item Second, we evaluate a surrogate loss function from four aspects: (a) consistency, (b) soundness, (c) mathemat- ical properties of continuity, differentiability, and con- vexity, and (d) computational efficiency in learning. {\\textbf{Efficiency}}\n\t\\item This leads to two natural questions that we try to answer in this paper: (1) Is it feasible to perform optimization in this very large feature space with cost which is polynomial in the size of the input space? {\\textbf{Performance}}\n\t\\item Despite its pervasiveness, the exact reasons for BatchNorm's effectiveness are still poorly understood. {\\textbf{Understanding (for researchers)}}\n\t\\item We have presented confidenceweighted linear classifiers, a new learning method designed for NLP problems based on the notion of parameter confidence. {\\textbf{Novelty}}\n\t\\item In addition, the experiments reported here suggest that (like other strategies recently proposed to train deep deterministic or stochastic neural networks) the curriculum strategies appear on the surface to operate like a regularizer, i.e., their beneficial effect is most pronounced on the test set. {\\textbf{Beneficence}, \\textbf{Quantitative evidence (e.g. experiments)}}\n\t\\item These\ngive further inside into hash-spaces and explain previously\nmade empirical observations. {\\textbf{Understanding (for researchers)}}\n\t\\item This means that current algorithms reach their limit at problems of size 1TB whenever the algorithm is I\/O bound (this amounts to a training time of 3 hours), or even smaller problems whenever the model parametrization makes the algorithm CPU bound. {\\textbf{Memory efficiency}, \\textbf{Reduced training time}}\n\t\\item Much of the results presented were based on the assumption that the target distribution is some mixture of the source distributions. {\\textbf{Valid assumptions}}\n\t\\item Empirical investigation revealed that this agrees well with actual training dynamics and predictive distributions across fully-connected, convolutional, and even wide residual network architectures, as well as with different optimizers (gradient descent, momentum, mini-batching) and loss functions (MSE, cross-entropy). {\\textbf{Generalization}, \\textbf{Quantitative evidence (e.g. 
experiments)}, \\textbf{Understanding (for researchers)}}\n\t\\item We design a new spectral norm that encodes this a priori assumption, without the prior knowledge of the partition of tasks into groups, resulting\nin a new convex optimization formulation for multi-task learning. {\\textbf{Novelty}}\n\t\\item Recent progress in natural language generation has raised dual-use concerns. {\\textbf{Progress}}\n\t\\item These kernel functions can be used\nin shallow architectures, such as support vector machines (SVMs), or in deep\nkernel-based architectures that we call multilayer kernel machines (MKMs). {\\textbf{Flexibility\/Extensibility}}\n\t\\item Using MCMC instead of variational methods for approximate inference in Bayesian matrix factorization models leads to much larger improvements over the MAP trained models, which suggests that the assumptions made by the variational methods about the structure of the posterior are not entirely reasonable. {\\textbf{Understanding (for researchers)}}\n\t\\item In particular, the deep belief network (DBN) (Hinton et al., 2006) is a multilayer generative model where each layer encodes statistical dependencies among the units in the layer below it; it is trained to (approximately) maximize the likelihood of its training data. {\\textbf{Approximation}, \\textbf{Data efficiency}}\n\t\\item Furthermore, the learning accuracy and performance of our LGP approach will be compared with other important standard methods in Section 4, e.g., LWPR [8], standard GPR [1], sparse online Gaussian process regression (OGP) [5] and $\\nu$-support vector regression ($\\nu$-SVR) [11], respectively {\\textbf{Accuracy}, \\textbf{Performance}, \\textbf{Quantitative evidence (e.g. experiments)}}\n\t\\item \u2022 propose a simple method based on weighted minibatches to stochastically train with arbitrary weights on the terms of our decomposition without any additional hyperparameters. {\\textbf{Efficiency}, \\textbf{Simplicity}}\n\t\\item For example, Ng (2004) examined the task of PAC learning a sparse predictor and analyzed cases in which an $\\ell$1 constraint results in better solutions than an $\\ell$2 constraint. {\\textbf{Building on recent work}}\n\t\\item Graph Convolutional Networks (GCNs) (Kipf and Welling, 2017) are an efficient variant of Convolutional Neural Networks (CNNs) on graphs. GCNs stack layers of learned first-order spectral filters followed by a nonlinear activation function to learn graph representations. {\\textbf{Efficiency}}\n\t\\item This is a linear convergence rate. {\\textbf{Building on recent work}, \\textbf{Efficiency}, \\textbf{Quantitative evidence (e.g. experiments)}, \\textbf{Theoretical guarantees}}\n\t\\item However, as we\nobserve more interactions, this could emerge as a clear feature. {\\textbf{Building on recent work}, \\textbf{Data efficiency}}\n\t\\item Here we propose the first method that supports arbitrary low accuracy and even biased\ncompression operators, such as in (Alistarh et al., 2018; Lin et al., 2018; Stich et al., 2018). {\\textbf{Accuracy}, \\textbf{Novelty}}\n\t\\item Much recent work has been done on understanding under what conditions we can learn a mixture model. {\\textbf{Understanding (for researchers)}}\n\t\\item For this reason, we present an extension of the standard greedy OMP algorithm that can be applied to general struc- tured sparsity problems, and more importantly, meaningful sparse recovery bounds can be obtained for this algorithm. 
{\\textbf{Building on recent work}}\n\t\\item In this paper we show that this assumption is indeed necessary: by considering a simple yet prototypical exampleof GAN training we analytically show that (unregularized) GAN training is not always locally convergent {\\textbf{Formal description\/analysis}}\n\t\\item Overestimation bias is a property of Q-learning in which the maximization of a noisy value estimate induces a consistent overestimation {\\textbf{Accuracy}}\n\t\\item This drawback prevents GPR from applications which need large amounts of training data and require fast computation, e.g., online learning of inverse dynamics model for model-based robot control {\\textbf{Fast}, \\textbf{Large scale}}\n\t\\item This is problematic since we find there are techniques which do not comport well with pre-training; thus some evaluations of robustness are less representative of real-world performance than previously thought. {\\textbf{Applies to real world}, \\textbf{Performance}, \\textbf{Robustness}}\n\t\\item Approximation of this prior structure through simple, efficient hyperparameter optimization steps is sufficient to achieve these performance gains {\\textbf{Approximation}, \\textbf{Efficiency}, \\textbf{Performance}, \\textbf{Simplicity}}\n\t\\item The second mysterious phenomenon in training deep neural networks is \"deeper networks are harder to train.\" {\\textbf{Performance}}\n\t\\item However, the definition of our metric is sufficiently general that it could easily be used to test, for example, invariance of auditory features to rate of speech, or invariance of textual features to author identity. {\\textbf{Generalization}}\n\t\\item In Sec. 6 we test the proposed algorithm for\nface recognition and object categorization tasks. {\\textbf{Applies to real world}, \\textbf{Quantitative evidence (e.g. experiments)}}\n\t\\item It is possible to train classification\nRBMs directly for classification performance; the gradient is fairly simple and certainly tractable. {\\textbf{Performance}}\n\t\\item Figure 1 contrasts these two approaches. Defining and evaluating models using ODE solvers has several benefits: {\\textbf{Beneficence}}\n\t\\item They claim to achieve 12\\% robustness against non-targeted attacks that are within an\n`2 radius of 3 (for images with pixels in [0, 1]). {\\textbf{Generalization}, \\textbf{Robustness}}\n\t\\item Two commonly used penalties are the 1- norm and the square of the 2-norm of w. {\\textbf{Used in practice\/Popular}}\n\t\\item What should platforms do? Video-sharing platforms like YouTube use deep neural networks to scan videos while they are uploaded, to filter out content like pornography (Hosseini et al., 2017). {\\textbf{Applies to real world}}\n\t\\item We mention various properties\nof this penalty, and provide conditions for the consistency\nof support estimation in the regression setting. Finally, we\nreport promising results on both simulated and real data {\\textbf{Applies to real world}}\n\t\\item There\ncould be a separate feature for \"high school student,\" \"male,\" \"athlete,\" and \"musician\" and the\npresence or absence of each of these features is what defines each person and determines their\nrelationships. {\\textbf{Building on recent work}}\n\t\\item So, the over-parameterized convergence theory of DNN is much simpler than that of RNN. 
{\\textbf{Simplicity}, \\textbf{Understanding (for researchers)}}\n\t\\item Other threat models are possible: for instance, an adversary might generate comments or have entire dialogue agents, they might start with a human-written news article and modify a few sentences, and they might fabricate images or video. {\\textbf{Learning from humans}}\n\t\\item More generally, we hope that future work will be able to avoid relying on obfuscated gradients (and other methods that only prevent gradient descent-based attacks) for perceived robustness, and use our evaluation approach to detect when this occurs. {\\textbf{Generality}, \\textbf{Robustness}}\n\t\\item For example, the learned linear combination does not consistently outperform either the uniform combination of base kernels or simply the best single base kernel (see, for example, UCI dataset experiments in [9, 12], see also NIPS 2008 workshop). {\\textbf{Performance}}\n\t\\item Our main contributions are: \u2022 We analyze GP-UCB, an intuitive algorithm for GP optimization, when the function is either sampled from a known GP, or has low RKHS norm. {\\textbf{Optimal}}\n\t\\item For the standard linear setting, Dani et al. (2008) provide a near-complete characterization explicitly dependent on the dimensionality. In the GP setting, the challenge is to characterize complexity in a different manner, through properties of the kernel function. {\\textbf{Building on classic work}}\n\t\\item This allows us to map each architecture A to its approximate hyperparameter optimized accuracy {\\textbf{Accuracy}}\n\t\\item Unfortunately, they could only apply their method to linear networks. {\\textbf{Flexibility\/Extensibility}}\n\t\\item The strength of the adversary then allows for a trade-off between the enforced prior, and the data-dependent features. {\\textbf{Understanding (for researchers)}}\n\t\\item We observe that the computational bottleneck of NAS is the training of each child model to convergence, only to measure its accuracy whilst throwing away all the trained weights. {\\textbf{Accuracy}}\n\t\\item We show that the number of subproblems need only be logarithmic in the total number of possible labels, making thisapproach radically more efficient than others. {\\textbf{Efficiency}}\n\t\\item We establish a new\nnotion of quadratic approximation of the neural network, and connect it to the\nSGD theory of escaping saddle points. {\\textbf{Novelty}, \\textbf{Unifying ideas or integrating components}}\n\t\\item In this work, we decompose the prediction error for adversarial\nexamples (robust error) as the sum of the natural\n(classification) error and boundary error, and provide a differentiable upper bound using the theory\nof classification-calibrated loss, which is shown to\nbe the tightest possible upper bound uniform over\nall probability distributions and measurable predictors. {\\textbf{Accuracy}, \\textbf{Robustness}, \\textbf{Theoretical guarantees}}\n\t\\item A limit on the number of queries can be a result\nof limits on other resources, such as a time limit if inference time is a bottleneck or a monetary limit if the\nattacker incurs a cost for each query. {\\textbf{Applies to real world}, \\textbf{Low cost}, \\textbf{Requires few resources}}\n\t\\item Preliminary experiments demonstrate that it is significantly faster than batch alternatives on large datasets that may contain millions of training examples, yet it does not require learning rate tuning like regular stochastic gradient descent methods. 
{\\textbf{Quantitative evidence (e.g. experiments)}, \\textbf{Reduced training time}}\n\t\\item SuperGLUE is available at super.gluebenchmark.com. {\\textbf{Facilitating use (e.g. sharing code)}}\n\\end{itemize}}\n\n\n\\subsection{Full List of Cited Papers}\n\\label{sec:paperlist}\n\nThe full list of annotated papers is given below, along with the annotated scores (in square brackets) for \\emph{Discussion of Negative Potential} [left] (0 = Doesn't mention negative potential; 1 = Mentions but does not discuss negative potential; 2 = Discusses negative potential) and \\emph{Justification} [right] (0 = Doesn't rigorously justify how it achieves technical goal; 1 = Justifies how it achieves technical goal but no mention of societal need; 2 = States but does not justify how it connects to a societal need; 3 = States and somewhat justifies how it connects to a societal need; 4 = States and rigorously justifies how it connects to a societal need). Note that due to minor errors in the data sources used, the distribution of papers over venues and years is not perfectly balanced. For the same reason, the list also contains one paper from 2010 (rather than 2009), as well as one paper that was retracted before publication at NeurIPS (marked with a $^*$).\n\n\\begin{itemize}\n\t\\item Mingxing Tan, Quoc Le. \\href{http:\/\/proceedings.mlr.press\/v97\/tan19a.html}{EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks.} In \\emph{Proceedings of ICML}, 2019. [0\/1]\n\t\\item Sanjeev Arora, Simon Du, Wei Hu, Zhiyuan Li, Ruosong Wang. \\href{http:\/\/proceedings.mlr.press\/v97\/arora19a.html}{Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks.} In \\emph{Proceedings of ICML}, 2019. [0\/1]\n\t\\item Jeremy Cohen, Elan Rosenfeld, Zico Kolter. \\href{http:\/\/proceedings.mlr.press\/v97\/cohen19c.html}{Certified Adversarial Robustness via Randomized Smoothing.} In \\emph{Proceedings of ICML}, 2019. [0\/1]\n\t\\item Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El Ghaoui, Michael Jordan. \\href{http:\/\/proceedings.mlr.press\/v97\/zhang19p.html}{Theoretically Principled Trade-off between Robustness and Accuracy.} In \\emph{Proceedings of ICML}, 2019. [0\/2]\n\t\\item Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu. \\href{http:\/\/proceedings.mlr.press\/v97\/song19d.html}{MASS: Masked Sequence to Sequence Pre-training for Language Generation.} In \\emph{Proceedings of ICML}, 2019. [0\/1]\n\t\\item Felix Wu, Amauri Souza, Tianyi Zhang, Christopher Fifty, Tao Yu, Kilian Weinberger. \\href{http:\/\/proceedings.mlr.press\/v97\/wu19e.html}{Simplifying Graph Convolutional Networks.} In \\emph{Proceedings of ICML}, 2019. [0\/1]\n\t\\item Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, Vaishaal Shankar. \\href{http:\/\/proceedings.mlr.press\/v97\/recht19a.html}{Do ImageNet Classifiers Generalize to ImageNet?} In \\emph{Proceedings of ICML}, 2019. [0\/2]\n\t\\item Justin Gilmer, Nicolas Ford, Nicholas Carlini, Ekin Cubuk. \\href{http:\/\/proceedings.mlr.press\/v97\/gilmer19a.html}{Adversarial Examples Are a Natural Consequence of Test Error in Noise.} In \\emph{Proceedings of ICML}, 2019. [0\/1]\n\t\\item Chris Ying, Aaron Klein, Eric Christiansen, Esteban Real, Kevin Murphy, Frank Hutter. \\href{http:\/\/proceedings.mlr.press\/v97\/ying19a.html}{NAS-Bench-101: Towards Reproducible Neural Architecture Search.} In \\emph{Proceedings of ICML}, 2019. [0\/2]\n\t\\item Dan Hendrycks, Kimin Lee, Mantas Mazeika.
\\href{http:\/\/proceedings.mlr.press\/v97\/hendrycks19a.html}{Using Pre-Training Can Improve Model Robustness and Uncertainty.} In \\emph{Proceedings of ICML}, 2019. [0\/1]\n\t\\item Sai Praneeth Karimireddy, Quentin Rebjock, Sebastian Stich, Martin Jaggi. \\href{http:\/\/proceedings.mlr.press\/v97\/karimireddy19a.html}{Error Feedback Fixes SignSGD and other Gradient Compression Schemes.} In \\emph{Proceedings of ICML}, 2019. [0\/1]\n\t\\item Anastasia Koloskova, Sebastian Stich, Martin Jaggi. \\href{http:\/\/proceedings.mlr.press\/v97\/koloskova19a.html}{Decentralized Stochastic Optimization and Gossip Algorithms with Compressed Communication.} In \\emph{Proceedings of ICML}, 2019. [0\/2]\n\t\\item Han Zhang, Ian Goodfellow, Dimitris Metaxas, Augustus Odena. \\href{http:\/\/proceedings.mlr.press\/v97\/zhang19d.html}{Self-Attention Generative Adversarial Networks.} In \\emph{Proceedings of ICML}, 2019. [0\/1]\n\t\\item Zeyuan Allen-Zhu, Yuanzhi Li, Zhao Song. \\href{http:\/\/proceedings.mlr.press\/v97\/allen-zhu19a.html}{A Convergence Theory for Deep Learning via Over-Parameterization.} In \\emph{Proceedings of ICML}, 2019. [0\/1]\n\t\\item Simon Du, Jason Lee, Haochuan Li, Liwei Wang, Xiyu Zhai. \\href{http:\/\/proceedings.mlr.press\/v97\/du19c.html}{Gradient Descent Finds Global Minima of Deep Neural Networks.} In \\emph{Proceedings of ICML}, 2019. [0\/1]\n\t\\item Anish Athalye, Nicholas Carlini, David Wagner. \\href{http:\/\/proceedings.mlr.press\/v80\/athalye18a.html}{Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples.} In \\emph{Proceedings of ICML}, 2018. [0\/2]\n\t\\item Hieu Pham, Melody Guan, Barret Zoph, Quoc Le, Jeff Dean. \\href{http:\/\/proceedings.mlr.press\/v80\/pham18a.html}{Efficient Neural Architecture Search via Parameters Sharing.} In \\emph{Proceedings of ICML}, 2018. [0\/1]\n\t\\item Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, Sergey Levine. \\href{http:\/\/proceedings.mlr.press\/v80\/haarnoja18b.html}{Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor.} In \\emph{Proceedings of ICML}, 2018. [0\/2]\n\t\\item Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Vlad Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, Shane Legg, Koray Kavukcuoglu. \\href{http:\/\/proceedings.mlr.press\/v80\/espeholt18a.html}{IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures.} In \\emph{Proceedings of ICML}, 2018. [0\/1]\n\t\\item Scott Fujimoto, Herke Hoof, David Meger. \\href{http:\/\/proceedings.mlr.press\/v80\/fujimoto18a.html}{Addressing Function Approximation Error in Actor-Critic Methods.} In \\emph{Proceedings of ICML}, 2018. [0\/1]\n\t\\item Hyunjik Kim, Andriy Mnih. \\href{http:\/\/proceedings.mlr.press\/v80\/kim18b.html}{Disentangling by Factorising.} In \\emph{Proceedings of ICML}, 2018. [0\/0]\n\t\\item Lars Mescheder, Andreas Geiger, Sebastian Nowozin. \\href{http:\/\/proceedings.mlr.press\/v80\/mescheder18a.html}{Which Training Methods for GANs do actually Converge?} In \\emph{Proceedings of ICML}, 2018. [0\/1]\n\t\\item Sanjeev Arora, Rong Ge, Behnam Neyshabur, Yi Zhang. \\href{http:\/\/proceedings.mlr.press\/v80\/arora18b.html}{Stronger generalization bounds for deep nets via a compression approach.} In \\emph{Proceedings of ICML}, 2018. [0\/3]\n\t\\item Andrew Ilyas, Logan Engstrom, Anish Athalye, Jessy Lin. 
\\href{http:\/\/proceedings.mlr.press\/v80\/ilyas18a.html}{Black-box Adversarial Attacks with Limited Queries and Information.} In \\emph{Proceedings of ICML}, 2018. [0\/2]\n\t\\item Niranjan Srinivas, Andreas Krause, Sham Kakade, Matthias Seeger. \\href{https:\/\/icml.cc\/Conferences\/2010\/papers\/422.pdf}{Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design.} In \\emph{Proceedings of ICML}, 2010. [0\/1]\n\t\\item Honglak Lee, Roger Grosse, Rajesh Ranganath and Andrew Ng. \\href{https:\/\/icml.cc\/Conferences\/2009\/papers\/571.pdf}{Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations.} In \\emph{Proceedings of ICML}, 2009. [0\/1]\n\t\\item Julien Mairal, Francis Bach, Jean Ponce and Guillermo Sapiro. \\href{https:\/\/icml.cc\/Conferences\/2009\/papers\/364.pdf}{Online dictionary learning for sparse coding.} In \\emph{Proceedings of ICML}, 2009. [0\/1]\n\t\\item Yoshua Bengio, Jerome Louradour, Ronan Collobert and Jason Weston. \\href{https:\/\/icml.cc\/Conferences\/2009\/papers\/119.pdf}{Curriculum learning.} In \\emph{Proceedings of ICML}, 2009. [0\/1]\n\t\\item Laurent Jacob, Guillaume Obozinski and Jean-Philippe Vert. \\href{https:\/\/icml.cc\/Conferences\/2009\/papers\/471.pdf}{Group Lasso with Overlaps and Graph Lasso.} In \\emph{Proceedings of ICML}, 2009. [0\/3]\n\t\\item Chun-Nam Yu and Thorsten Joachims. \\href{https:\/\/icml.cc\/Conferences\/2009\/papers\/420.pdf}{Learning structural SVMs with latent variables.} In \\emph{Proceedings of ICML}, 2009. [0\/2]\n\t\\item Kilian Weinberger, Anirban Dasgupta, Josh Attenberg, John Langford and Alex Smola. \\href{https:\/\/icml.cc\/Conferences\/2009\/papers\/407.pdf}{Feature hashing for large scale multitask learning.} In \\emph{Proceedings of ICML}, 2009. [0\/2]\n\t\\item Hanna Wallach, Iain Murray, Ruslan Salakhutdinov, and David Mimno. \\href{https:\/\/icml.cc\/Conferences\/2009\/papers\/356.pdf}{Evaluation methods for topic models.} In \\emph{Proceedings of ICML}, 2009. [0\/1]\n\t\\item Kamalika Chaudhuri, Sham Kakade, Karen Livescu and Karthik Sridharan. \\href{https:\/\/icml.cc\/Conferences\/2009\/papers\/317.pdf}{Multi-view clustering via canonical correlation analysis.} In \\emph{Proceedings of ICML}, 2009. [0\/2]\n\t\\item Shuiwang Ji and Jieping Ye. \\href{https:\/\/icml.cc\/Conferences\/2009\/papers\/151.pdf}{An accelerated gradient method for trace norm minimization.} In \\emph{Proceedings of ICML}, 2009. [0\/3]\n\t\\item Junzhou Huang, Tong Zhang and Dimitris Metaxas. \\href{https:\/\/icml.cc\/Conferences\/2009\/papers\/452.pdf}{Learning with structured sparsity.} In \\emph{Proceedings of ICML}, 2009. [0\/1]\n\t\\item Rajat Raina, Anand Madhavan and Andrew Ng. \\href{https:\/\/icml.cc\/Conferences\/2009\/papers\/218.pdf}{Large-scale deep unsupervised learning using graphics processors.} In \\emph{Proceedings of ICML}, 2009. [0\/2]\n\t\\item Ronan Collobert and Jason Weston. \\href{https:\/\/icml.cc\/Conferences\/2008\/papers\/391.pdf}{A unified architecture for natural language processing: deep neural networks with multitask learning.} In \\emph{Proceedings of ICML}, 2008. [0\/2]\n\t\\item Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. \\href{https:\/\/icml.cc\/Conferences\/2008\/papers\/592.pdf}{Extracting and composing robust features with denoising autoencoders.} In \\emph{Proceedings of ICML}, 2008. [0\/1]\n\t\\item Ruslan Salakhutdinov and Andriy Mnih. 
\\href{https:\/\/icml.cc\/Conferences\/2008\/papers\/600.pdf}{Bayesian probabilistic matrix factorization using Markov chain Monte Carlo.} In \\emph{Proceedings of ICML}, 2008. [0\/1]\n\t\\item John Duchi, Shai Shalev-Shwartz, Yoram Singer, and Tushar Chandra. \\href{https:\/\/icml.cc\/Conferences\/2008\/papers\/361.pdf}{Efficient projections onto the l1-ball for learning in high dimensions.} In \\emph{Proceedings of ICML}, 2008. [0\/1]\n\t\\item Cho-Jui Hsieh, Kai-Wei Chang, Chih-Jen Lin, S. Sathiya Keerthi, and S. Sundararajan. \\href{https:\/\/icml.cc\/Conferences\/2008\/papers\/166.pdf}{A dual coordinate descent method for large-scale linear SVM.} In \\emph{Proceedings of ICML}, 2008. [0\/1]\n\t\\item Tijmen Tieleman. \\href{https:\/\/icml.cc\/Conferences\/2008\/papers\/638.pdf}{Training restricted Boltzmann machines using approximations to the likelihood gradient.} In \\emph{Proceedings of ICML}, 2008. [0\/1]\n\t\\item Hugo Larochelle and Yoshua Bengio. \\href{https:\/\/icml.cc\/Conferences\/2008\/papers\/601.pdf}{Classification using discriminative restricted Boltzmann machines.} In \\emph{Proceedings of ICML}, 2008. [0\/1]\n\t\\item Jihun Hamm and Daniel Lee. \\href{https:\/\/icml.cc\/Conferences\/2008\/papers\/312.pdf}{Grassmann discriminant analysis: a unifying view on subspace-based learning.} In \\emph{Proceedings of ICML}, 2008. [0\/1]\n\t\\item Fen Xia, Tie-Yan Liu, Jue Wang, Wensheng Zhang, and Hang Li. \\href{https:\/\/icml.cc\/Conferences\/2008\/papers\/167.pdf}{Listwise Approach to Learning to Rank - Theory and Algorithm.} In \\emph{Proceedings of ICML}, 2008. [0\/1]\n\t\\item Filip Radlinski, Robert Kleinberg, and Thorsten Joachims. \\href{https:\/\/icml.cc\/Conferences\/2008\/papers\/264.pdf}{Learning diverse rankings with multi-armed bandits.} In \\emph{Proceedings of ICML}, 2008. [0\/1]\n\t\\item Mark Dredze, Koby Crammer, and Fernando Pereira. \\href{https:\/\/icml.cc\/Conferences\/2008\/papers\/322.pdf}{Confidence-weighted linear classification.} In \\emph{Proceedings of ICML}, 2008. [0\/1]\n\t\\item Ruslan Salakhutdinov and Iain Murray. \\href{https:\/\/icml.cc\/Conferences\/2008\/papers\/573.pdf}{On the quantitative analysis of deep belief networks.} In \\emph{Proceedings of ICML}, 2008. [0\/1]\n\t\\item Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R. Salakhutdinov, Quoc V. Le. \\href{https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/dc6a7e655d7e5840e66733e9ee67cc69-Abstract.html}{XLNet: Generalized Autoregressive Pretraining for Language Understanding.} In \\emph{Proceedings of NeurIPS}, 2019. [0\/1]\n\t\\item Alexis CONNEAU, Guillaume Lample. \\href{https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/c04c19c2c2474dbf5f7ac4372c5b9af1-Abstract.html}{Cross-lingual Language Model Pretraining.} In \\emph{Proceedings of NeurIPS}, 2019. [0\/4]\n\t\\item Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry. \\href{https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/e2c420d928d4bf8ce0ff2ec19b371514-Abstract.html}{Adversarial Examples Are Not Bugs, They Are Features.} In \\emph{Proceedings of NeurIPS}, 2019. [0\/1]\n\t\\item Jaehoon Lee, Lechao Xiao, Samuel Schoenholz, Yasaman Bahri, Roman Novak, Jascha Sohl-Dickstein, Jeffrey Pennington. \\href{https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/0d1a9651497a38d8b1c3871c84528bd4-Abstract.html}{Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent.} In \\emph{Proceedings of NeurIPS}, 2019. 
[0\/1]\n\t\\item David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, Colin A. Raffel. \\href{https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/1cd138d0499a68f4bb72bee04bbec2d7-Abstract.html}{MixMatch: A Holistic Approach to Semi-Supervised Learning.} In \\emph{Proceedings of NeurIPS}, 2019. [0\/1]\n\t\\item Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, Soumith Chintala. \\href{https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/bdbca288fee7f92f2bfa9f7012727740-Abstract.html}{PyTorch: An Imperative Style, High-Performance Deep Learning Library.} In \\emph{Proceedings of NeurIPS}, 2019. [0\/1]\n\t\\item Sanjeev Arora, Simon S. Du, Wei Hu, Zhiyuan Li, Russ R. Salakhutdinov, Ruosong Wang. \\href{https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/dbc4d84bfcfe2284ba11beffb853a8c4-Abstract.html}{On Exact Computation with an Infinitely Wide Neural Net.} In \\emph{Proceedings of NeurIPS}, 2019. [0\/1]\n\t\\item Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, Hsiao-Wuen Hon. \\href{https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/c20bb2d9a50d5ac1f713f8b34d9aac5a-Abstract.html}{Unified Language Model Pre-training for Natural Language Understanding and Generation.} In \\emph{Proceedings of NeurIPS}, 2019. [0\/1]\n\t\\item Ali Shafahi, Mahyar Najibi, Mohammad Amin Ghiasi, Zheng Xu, John Dickerson, Christoph Studer, Larry S. Davis, Gavin Taylor, Tom Goldstein. \\href{https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/7503cfacd12053d309b6bed5c89de212-Abstract.html}{Adversarial Training for Free!} In \\emph{Proceedings of NeurIPS}, 2019. [0\/3]\n\t\\item Jiasen Lu, Dhruv Batra, Devi Parikh, Stefan Lee. \\href{https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/c74d97b01eae257e44aa9d5bade97baf-Abstract.html}{ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks.} In \\emph{Proceedings of NeurIPS}, 2019. [0\/1]\n\t\\item Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel Bowman. \\href{https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/4496bf24afe7fab6f046bf4923da8de6-Abstract.html}{SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems.} In \\emph{Proceedings of NeurIPS}, 2019. [1\/1]\n\t\\item Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, Yejin Choi. \\href{https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/3e9f0fc9b2f89e043bc6233994dfcf76-Abstract.html}{Defending Against Neural Fake News.} In \\emph{Proceedings of NeurIPS}, 2019. [2\/4]\n\t\\item Yuan Cao, Quanquan Gu. \\href{https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/cf9dc5e4e194fc21f397b4cac9cc3ae9-Abstract.html}{Generalization Bounds of Stochastic Gradient Descent for Wide and Deep Neural Networks.} In \\emph{Proceedings of NeurIPS}, 2019. [0\/1]\n\t\\item Florian Tramer, Dan Boneh. \\href{https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/5d4ae76f053f8f2516ad12961ef7fe97-Abstract.html}{Adversarial Training and Robustness for Multiple Perturbations.} In \\emph{Proceedings of NeurIPS}, 2019. [0\/2]\n\t\\item Yair Carmon, Aditi Raghunathan, Ludwig Schmidt, John C. Duchi, Percy S. Liang. 
\\href{https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/32e0bd1497aa43e02a42f47d9d6515ad-Abstract.html}{Unlabeled Data Improves Adversarial Robustness.} In \\emph{Proceedings of NeurIPS}, 2019. [0\/1]\n\t\\item Lars Maal\u00f8e, Marco Fraccaro, Valentin Li\u00e9vin, Ole Winther. \\href{https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/9bdb8b1faffa4b3d41779bb495d79fb9-Abstract.html}{BIVA: A Very Deep Hierarchy of Latent Variables for Generative Modeling.} In \\emph{Proceedings of NeurIPS}, 2019. [0\/1]\n\t\\item Zeyuan Allen-Zhu, Yuanzhi Li, Yingyu Liang. \\href{https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/62dad6e273d32235ae02b7d321578ee8-Abstract.html}{Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers.} In \\emph{Proceedings of NeurIPS}, 2019. [0\/1]\n\t\\item Durk P. Kingma, Prafulla Dhariwal. \\href{https:\/\/proceedings.neurips.cc\/paper\/2018\/hash\/d139db6a236200b21cc7f752979132d0-Abstract.html}{Glow: Generative Flow with Invertible 1x1 Convolutions.} In \\emph{Proceedings of NeurIPS}, 2018. [0\/2]\n\t\\item Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, David K. Duvenaud. \\href{https:\/\/proceedings.neurips.cc\/paper\/2018\/hash\/69386f6bb1dfed68692a24c8686939b9-Abstract.html}{Neural Ordinary Differential Equations.} In \\emph{Proceedings of NeurIPS}, 2018. [0\/1]\n\t\\item Zhitao Ying, Jiaxuan You, Christopher Morris, Xiang Ren, Will Hamilton, Jure Leskovec. \\href{https:\/\/proceedings.neurips.cc\/paper\/2018\/hash\/e77dbaf6759253c7c6d0efc5690369c7-Abstract.html}{Hierarchical Graph Representation Learning with Differentiable Pooling.} In \\emph{Proceedings of NeurIPS}, 2018. [0\/1]\n\t\\item Ricky T. Q. Chen, Xuechen Li, Roger B. Grosse, David K. Duvenaud. \\href{https:\/\/proceedings.neurips.cc\/paper\/2018\/hash\/1ee3dfcd8a0645a25a35977997223d22-Abstract.html}{Isolating Sources of Disentanglement in Variational Autoencoders.} In \\emph{Proceedings of NeurIPS}, 2018. [0\/1]\n\t\\item Yangyan Li, Rui Bu, Mingchao Sun, Wei Wu, Xinhan Di, Baoquan Chen. \\href{https:\/\/proceedings.neurips.cc\/paper\/2018\/hash\/f5f8590cd58a54e94377e6ae2eded4d9-Abstract.html}{PointCNN: Convolution On X-Transformed Points.} In \\emph{Proceedings of NeurIPS}, 2018. [0\/1]\n\t\\item Arthur Jacot, Franck Gabriel, Clement Hongler. \\href{https:\/\/proceedings.neurips.cc\/paper\/2018\/hash\/5a4be1fa34e62bb8a6ec6b91d2462f5a-Abstract.html}{Neural Tangent Kernel: Convergence and Generalization in Neural Networks.} In \\emph{Proceedings of NeurIPS}, 2018. [0\/1]\n\t\\item Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Guilin Liu, Andrew Tao, Jan Kautz, Bryan Catanzaro. \\href{https:\/\/proceedings.neurips.cc\/paper\/2018\/hash\/d86ea612dec96096c5e0fcc8dd42ab6d-Abstract.html}{Video-to-Video Synthesis.} In \\emph{Proceedings of NeurIPS}, 2018. [0\/1]\n\t\\item Yuanzhi Li, Yingyu Liang. \\href{https:\/\/proceedings.neurips.cc\/paper\/2018\/hash\/54fe976ba170c19ebae453679b362263-Abstract.html}{Learning Overparameterized Neural Networks via Stochastic Gradient Descent on Structured Data.} In \\emph{Proceedings of NeurIPS}, 2018. [0\/1]\n\t\\item Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, Aleksander Madry. \\href{https:\/\/proceedings.neurips.cc\/paper\/2018\/hash\/f708f064faaf32a43e4d3c784e6af9ea-Abstract.html}{Adversarially Robust Generalization Requires More Data.} In \\emph{Proceedings of NeurIPS}, 2018. [0\/2]\n\t\\item Shibani Santurkar, Dimitris Tsipras, Andrew Ilyas, Aleksander Madry. 
\\href{https:\/\/proceedings.neurips.cc\/paper\/2018\/hash\/905056c1ac1dad141560467e0a99e1cf-Abstract.html}{How Does Batch Normalization Help Optimization?} In \\emph{Proceedings of NeurIPS}, 2018. [0\/1]\n\t\\item Harini Kannan, Alexey Kurakin, Ian Goodfellow. \\href{https:\/\/arxiv.org\/abs\/1803.06373}{Adversarial Logit Pairing.} In \\emph{Proceedings of NeurIPS$^*$}, 2018. [0\/2]\n\t\\item Ofir Nachum, Shixiang (Shane) Gu, Honglak Lee, Sergey Levine. \\href{https:\/\/proceedings.neurips.cc\/paper\/2018\/hash\/e6384711491713d29bc63fc5eeb5ba4f-Abstract.html}{Data-Efficient Hierarchical Reinforcement Learning.} In \\emph{Proceedings of NeurIPS}, 2018. [0\/3]\n\t\\item Prateek Jain, Raghu Meka, Inderjit Dhillon. \\href{https:\/\/papers.nips.cc\/paper\/2010\/hash\/08d98638c6fcd194a4b1e6992063e944-Abstract.html}{Guaranteed Rank Minimization via Singular Value Projection.} In \\emph{Proceedings of NeurIPS}, 2010. [0\/1]\n\t\\item Hanna Wallach, David Mimno, Andrew McCallum. \\href{https:\/\/papers.nips.cc\/paper\/2009\/hash\/0d0871f0806eae32d30983b62252da50-Abstract.html}{Rethinking LDA: Why Priors Matter.} In \\emph{Proceedings of NeurIPS}, 2009. [0\/4]\n\t\\item Geoffrey E. Hinton, Russ R. Salakhutdinov. \\href{https:\/\/papers.nips.cc\/paper\/2009\/hash\/31839b036f63806cba3f47b93af8ccb5-Abstract.html}{Replicated Softmax: an Undirected Topic Model.} In \\emph{Proceedings of NeurIPS}, 2009. [0\/1]\n\t\\item Daniel J. Hsu, Sham M. Kakade, John Langford, Tong Zhang. \\href{https:\/\/papers.nips.cc\/paper\/2009\/hash\/67974233917cea0e42a49a2fb7eb4cf4-Abstract.html}{Multi-Label Prediction via Compressed Sensing.} In \\emph{Proceedings of NeurIPS}, 2009. [0\/1]\n\t\\item Youngmin Cho, Lawrence Saul. \\href{https:\/\/papers.nips.cc\/paper\/2009\/hash\/5751ec3e9a4feab575962e78e006250d-Abstract.html}{Kernel Methods for Deep Learning.} In \\emph{Proceedings of NeurIPS}, 2009. [0\/1]\n\t\\item Kurt Miller, Michael Jordan, Thomas Griffiths. \\href{https:\/\/papers.nips.cc\/paper\/2009\/hash\/437d7d1d97917cd627a34a6a0fb41136-Abstract.html}{Nonparametric Latent Feature Models for Link Prediction.} In \\emph{Proceedings of NeurIPS}, 2009. [0\/3]\n\t\\item Ian Goodfellow, Honglak Lee, Quoc Le, Andrew Saxe, Andrew Ng. \\href{https:\/\/papers.nips.cc\/paper\/2009\/hash\/428fca9bc1921c25c5121f9da7815cde-Abstract.html}{Measuring Invariances in Deep Networks.} In \\emph{Proceedings of NeurIPS}, 2009. [0\/1]\n\t\\item Vinod Nair, Geoffrey E. Hinton. \\href{https:\/\/papers.nips.cc\/paper\/2009\/hash\/6e7b33fdea3adc80ebd648fffb665bb8-Abstract.html}{3D Object Recognition with Deep Belief Nets.} In \\emph{Proceedings of NeurIPS}, 2009. [0\/1]\n\t\\item Martin Zinkevich, John Langford, Alex Smola. \\href{https:\/\/papers.nips.cc\/paper\/2009\/hash\/b55ec28c52d5f6205684a473a2193564-Abstract.html}{Slow Learners are Fast.} In \\emph{Proceedings of NeurIPS}, 2009. [0\/1]\n\t\\item Ryan Mcdonald, Mehryar Mohri, Nathan Silberman, Dan Walker, Gideon Mann. \\href{https:\/\/papers.nips.cc\/paper\/2009\/hash\/d81f9c1be2e08964bf9f24b15f0e4900-Abstract.html}{Efficient Large-Scale Distributed Training of Conditional Maximum Entropy Models.} In \\emph{Proceedings of NeurIPS}, 2009. [0\/1]\n\t\\item Corinna Cortes, Mehryar Mohri, Afshin Rostamizadeh. \\href{https:\/\/papers.nips.cc\/paper\/2009\/hash\/e7f8a7fb0b77bcb3b283af5be021448f-Abstract.html}{Learning Non-Linear Combinations of Kernels.} In \\emph{Proceedings of NeurIPS}, 2009. [0\/1]\n\t\\item Laurent Jacob, Jean-philippe Vert, Francis Bach. 
\\href{https:\/\/papers.nips.cc\/paper\/2008\/hash\/fccb3cdc9acc14a6e70a12f74560c026-Abstract.html}{Clustered Multi-Task Learning: A Convex Formulation.} In \\emph{Proceedings of NeurIPS}, 2008. [0\/1]\n\t\\item Kamalika Chaudhuri, Claire Monteleoni. \\href{https:\/\/papers.nips.cc\/paper\/2008\/hash\/8065d07da4a77621450aa84fee5656d9-Abstract.html}{Privacy-preserving logistic regression.} In \\emph{Proceedings of NeurIPS}, 2008. [0\/3]\n\t\\item Byron M. Yu, John P. Cunningham, Gopal Santhanam, Stephen Ryu, Krishna V. Shenoy, Maneesh Sahani. \\href{https:\/\/papers.nips.cc\/paper\/2008\/hash\/ad972f10e0800b49d76fed33a21f6698-Abstract.html}{Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity.} In \\emph{Proceedings of NeurIPS}, 2008. [0\/3]\n\t\\item Ilya Sutskever, Geoffrey E. Hinton, Graham W. Taylor. \\href{https:\/\/papers.nips.cc\/paper\/2008\/hash\/9ad6aaed513b73148b7d49f70afcfb32-Abstract.html}{The Recurrent Temporal Restricted Boltzmann Machine.} In \\emph{Proceedings of NeurIPS}, 2008. [0\/1]\n\t\\item Wenyuan Dai, Yuqiang Chen, Gui-rong Xue, Qiang Yang, Yong Yu. \\href{https:\/\/papers.nips.cc\/paper\/2008\/hash\/0060ef47b12160b9198302ebdb144dcf-Abstract.html}{Translated Learning: Transfer Learning across Different Feature Spaces.} In \\emph{Proceedings of NeurIPS}, 2008. [0\/3]\n\t\\item Yishay Mansour, Mehryar Mohri, Afshin Rostamizadeh. \\href{https:\/\/papers.nips.cc\/paper\/2008\/hash\/0e65972dce68dad4d52d063967f0a705-Abstract.html}{Domain Adaptation with Multiple Sources.} In \\emph{Proceedings of NeurIPS}, 2008. [0\/1]\n\t\\item Sham M. Kakade, Karthik Sridharan, Ambuj Tewari. \\href{https:\/\/papers.nips.cc\/paper\/2008\/hash\/5b69b9cb83065d403869739ae7f0995e-Abstract.html}{On the Complexity of Linear Prediction: Risk Bounds, Margin Bounds, and Regularization.} In \\emph{Proceedings of NeurIPS}, 2008. [0\/1]\n\t\\item Francis Bach. \\href{https:\/\/papers.nips.cc\/paper\/2008\/hash\/a4a042cf4fd6bfb47701cbc8a1653ada-Abstract.html}{Exploring Large Feature Spaces with Hierarchical Multiple Kernel Learning.} In \\emph{Proceedings of NeurIPS}, 2008. [0\/1]\n\t\\item Ijaz Akhter, Yaser Sheikh, Sohaib Khan, Takeo Kanade. \\href{https:\/\/papers.nips.cc\/paper\/2008\/hash\/dc82d632c9fcecb0778afbc7924494a6-Abstract.html}{Nonrigid Structure from Motion in Trajectory Space.} In \\emph{Proceedings of NeurIPS}, 2008. [0\/1]\n\t\\item Prateek Jain, Brian Kulis, Inderjit Dhillon, Kristen Grauman. \\href{https:\/\/papers.nips.cc\/paper\/2008\/hash\/aa68c75c4a77c87f97fb686b2f068676-Abstract.html}{Online Metric Learning and Fast Similarity Search.} In \\emph{Proceedings of NeurIPS}, 2008. [0\/1]\n\t\\item Duy Nguyen-tuong, Jan Peters, Matthias Seeger. \\href{https:\/\/papers.nips.cc\/paper\/2008\/hash\/01161aaa0b6d1345dd8fe4e481144d84-Abstract.html}{Local Gaussian Process Regression for Real Time Online Model Learning.} In \\emph{Proceedings of NeurIPS}, 2008. [0\/1]\n\t\\item Lester Mackey. \\href{https:\/\/papers.nips.cc\/paper\/2008\/hash\/85d8ce590ad8981ca2c8286f79f59954-Abstract.html}{Deflation Methods for Sparse PCA.} In \\emph{Proceedings of NeurIPS}, 2008. [0\/1]\n\\end{itemize}\n\n\\subsection{Checklist}\n\nMany conferences, including NeurIPS, have begun requiring reproducability checklists. 
We include a modified version of the NeurIPS checklist here to provide a quick summary of common reproducibility questions and encourage this practice in future papers.\n\n\\begin{enumerate}\n\n\\item For all authors...\n\\begin{enumerate}\n \\item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?\n \\answerYes{}\n \\item Did you describe the limitations of your work?\n \\answerYes{See Discussion.} \n \\item Did you discuss any potential negative societal impacts of your work?\n \\answerYes{Included in the Appendix.}\n \\item Have you read the ethics review guidelines and ensured that your paper conforms to them?\n \\answerYes{}\n\\end{enumerate}\n\n\\item If you are including theoretical results...\n\\begin{enumerate}\n \\item Did you state the full set of assumptions of all theoretical results?\n \\answerNA{}\n\t\\item Did you include complete proofs of all theoretical results?\n \\answerNA{}\n\\end{enumerate}\n\n\\item If you ran experiments...\n\\begin{enumerate}\n \\item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?\n \\answerYes{Available on GitHub.}\n \\item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?\n \\answerYes{Included in Appendix.}\n\t\\item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?\n \\answerNo{}\n\t\\item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?\n \\answerYes{Included in Appendix.}\n\\end{enumerate}\n\n\\item If you are using existing assets (e.g., code, data, models) or curating\/releasing new assets...\n\\begin{enumerate}\n \\item If your work uses existing assets, did you cite the creators?\n \\answerYes{Full listing of annotated papers is given in the Appendix.}\n \\item Did you mention the license of the assets?\n \\answerYes{See Footnote 1.}\n \\item Did you include any new assets either in the supplemental material or as a URL?\n \\answerYes{Included in supplementary zipfile.}\n \\item Did you discuss whether and how consent was obtained from people whose data you're using\/curating?\n \\answerYes{Discussed in Appendix A.2, Additional Methodological Details.}\n \\item Did you discuss whether the data you are using\/curating contains personally identifiable information or offensive content?\n \\answerYes{Discussed in Appendix A.2, Additional Methodological Details.}\n\\end{enumerate}\n\n\\item If you used crowdsourcing or conducted research with human subjects...\n\\begin{enumerate}\n \\item Did you include the full text of instructions given to participants and screenshots, if applicable?\n \\answerNA{}\n \\item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?\n \\answerNA{}\n \\item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?\n \\answerNA{}\n\\end{enumerate}\n\n\\end{enumerate}\n\n\n\n\\section{Introduction}\n\nOver the past few decades, machine learning (ML) has risen from a relatively obscure research area to an extremely influential discipline, actively being deployed in myriad applications and contexts around the world.
The objectives and values of ML research are influenced by many factors, including the personal preferences of researchers and reviewers, other work in science and engineering, the interests of academic institutions, funding agencies and companies, and larger institutional and systemic pressures, including systems of oppression impacting who is able to do research and on which topics. Together, these forces shape patterns in what research gets done and who benefits from this research. Therefore, it is important to document and understand the emergent values of the field: what the field is prioritizing and working toward. To this end, we perform a comprehensive analysis of 100 highly cited NeurIPS and ICML papers from four years spanning more than a decade.\n\nOur key contributions are as follows: \n\n(1) We develop and open-source a fine-grained annotation scheme for the detection of values in research papers, including identifying a list of 67 values significant in machine learning research. To our knowledge, our annotation scheme is the first of its kind, and it opens the door to further qualitative and quantitative analyses of research.\\footnote{We include our template and all annotations as supplementary material at \\url{https:\/\/github.com\/wagnew3\/The-Values-Encoded-in-Machine-Learning-Research} with a CC BY-NC-SA license.} \n\n(2) We apply our annotation scheme to annotate 100 influential ML research papers and extract their value commitments. These commitments reflect and shape the values of the field more broadly. Like the annotation scheme itself, the resulting repository of annotated papers is available and is valuable not only in the context of this paper's analysis, but also as a foundation for further qualitative and quantitative study of ML research.\n\n(3) We perform extensive textual analysis to understand the dominant values: performance, accuracy, state-of-the-art (SOTA), quantitative results, generalization, efficiency, building on previous work, and novelty.\nOur analysis indicates that while these values may seem on their face to be purely technical, they are socially and politically charged: specifically, we find that these values are currently defined and operationalized in ways that centralize power, i.e., disproportionately benefit and empower the already powerful, such as large corporations, while neglecting society's least advantaged.\n\n(4) We present a quantitative analysis of the affiliations and funding sources of these most influential papers. We find a substantive and increasing presence of tech corporations. For example, in 2008\/09, 24\\% of these top-cited papers had corporate-affiliated authors, and in 2018\/19 this statistic almost tripled, to 71\\%. Moreover, among these corporate connections to influential papers, the presence of "big-tech" firms, such as Google and Microsoft, increased more than fivefold, from 11\\% to 58\\%.\n\n\\section{Methodology}\n\nTo understand the values of ML research, we examined the most highly cited papers from NeurIPS and ICML from the years 2008, 2009, 2018, and 2019.
We chose to focus on highly cited papers because they both reflect and shape the values of the discipline, drawing from NeurIPS and ICML because they are the most prestigious of the long-running ML conferences.\\footnote{At the time of writing, these two venues, along with the newer ICLR (2013--present), comprised the top 3 conferences according to h5-index (and h5-median) in the AI category on Google Scholar, by a large margin.} Acceptance to these conferences is a valuable commodity used to evaluate researchers, and submitted papers are explicitly written so as to win the approval of the community, particularly the reviewers who will be drawn from that community. As such, these papers effectively reveal the values that authors believe are most prized by that community. Citations indicate amplification by the community, and help to position these papers as influential exemplars of ML research. To avoid detecting only short-lived trends and to enable comparisons over time, we drew papers from two recent years (2018\/19) and from ten years earlier (2008\/09).\nWe focused on conference papers because they tend to follow a standard format and allow limited space, meaning that researchers must make hard choices about what to emphasize.\nCollectively, we annotated 100 papers, analyzing over 3,500 sentences drawn from them. In the context of expert qualitative content analysis, this is a significant scale that allows us to meaningfully comment on the values central to ML research.\n\nIn more detail, we began by creating an annotation scheme, and then used it to manually annotate each paper, examining the abstract, introduction, discussion, and conclusion: (1) We examined the chain of reasoning by which each paper justified its contributions, which we call the \\textit{justificatory chain}, rating the extent to which papers used technical or societal problems to justify or motivate their contributions. (2) We carefully read each sentence of these sections, annotating any and all values from our list that were uplifted or exhibited by the sentence.\\footnote{We use a conceptualization of \"value\" that is widespread in the philosophy of science when theorizing about values in science. In this approach, a value of an entity is a property that is desirable for that kind of entity. For example, speed can be described as valuable in an antelope \\cite{mcmullin1982values}. Well-known scientific values include accuracy, consistency, scope, simplicity, and fruitfulness \\cite{kuhn.1977}. See \\cite{longino.1996} for a critical discussion of value-laden aspects of these values.} (3) We documented the extent to which the paper included a discussion of potential negative impacts. \n\nManual annotation was necessary, both to create the list of emergent values and to richly understand the values present in each paper. Automated approaches, such as keyword searches, would suffer significant limitations: they would only capture values that we anticipate, and they would run the risk of systematically skewing the results toward values that are easy to identify, missing or mischaracterizing values that are exhibited in more nuanced ways.
The qualitative approach was also key for analyzing the values, as it requires a subtle understanding of how the values function in the text and of the taken-for-granted assumptions underlying them, which methods such as keyword matching would fail to capture.\n\nTo assess consistency, 40\\% of the papers were annotated by two annotators. The intercoder consensus on values in these papers achieved a Cohen's kappa coefficient of 0.61, which indicates substantial agreement \\citep{viera.2005}. Furthermore, we used several established strategies to increase consistency, including recoding data coded early in the process \\citep{krefting.1991} and conducting frequent discussions and assessments of the coding process, code list, and annotation scheme \\citep{krippendorff.2018}. \n\nTo create the list of values specifically (see Figure~\\ref{fig:value-totals}), we followed best practices in manual content analysis.\n(1) We began with a list of values we expected to be relevant based on prior knowledge, augmenting this list with seven ethical principles of interest from existing literature \\citep{dittrich.2012,floridi.2019}.\n(2) We randomly selected a subset of 10 papers for initial annotation, searching for the values on the list sentence by sentence and adding new values as they emerged. (3) Through discussion, we revisited all values and produced a values list. (4) We annotated the full set of papers using this list of values, meeting regularly to discuss difficult examples, and individually nominating and deciding by consensus when sentences justified inductively adding additional, emergent values to the values list. (5) For the final analysis presented here, we combined closely related values into clusters by consensus, such that they could be discussed together (for completeness, all values are treated separately in the Appendix). Formally stated, we establish our codes (short phrases that represent the relevant essence of information, in this case the list of values) using an inductive-deductive approach. The deductive component involves starting with codes established in existing literature, which ensures we note and can speak to values of interest, including established ethical principles.\nThe inductive component involves the discovery of codes from the data, and guards against inappropriately biased or pre-conceived findings by focusing on emergent codes \\citep{bengtsson.2016,krippendorff.2018}.\n\nThe composition of our team confers additional validity to our work. We are a diverse, multi-racial, multi-gender team working closely, including undergraduate, graduate, and post-graduate researchers from machine learning, NLP, robotics, cognitive science, and philosophy.
This team captures several advantages that other methods of manual annotation such as crowd sourcing lack: the nature of this team minimizes intra-disciplinary biases, affords the unique combination of expertise required to read the values in ML papers, allows meaningful engagement with relevant work in other fields, and enables best practices including continually clarifying the procedure, ensuring agreement, vetting consistency, reannotating, and discussing themes \\citep{krippendorff.2018}.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[origin=c,width=\\textwidth]{figures\/value-freq-graph.png}\n\\caption{Proportion of annotated papers that uplifted each value.}\n\\label{fig:value-totals}\n\\end{figure}\n\n\\section{Quantitative Summary}\nIn Figure~\\ref{fig:value-totals}, we plot the prevalence of values in 100 annotated papers. The top values are: performance (87\\% of papers), building on past work (79\\%), generalization (79\\%), efficiency (73\\%), quantitative evidence (72\\%), and novelty (63\\%). Values related to user rights and stated in ethical principles appeared very rarely if at all: none of the papers mentioned autonomy, justice, or respect for persons.\nIn Table~\\ref{fig:just_chain} (top), we show the distribution of justification scores. Most papers only justify how they achieve their internal, technical goal; 71\\% don't make any mention of societal need or impact, and only 3\\% make a rigorous attempt to present links connecting their research to societal needs.\nIn Table~\\ref{fig:just_chain} (bottom), we show the distribution of negative impact discussion scores. One annotated paper included a discussion of negative impacts and a second mentioned the possibility; none of the remaining 98 papers contained any reference to potential negative impacts.\nIn Figure~\\ref{fig:vcorp-ties}, we show stated connections (funding and affiliations) of paper authors to institutions. Comparing papers written in 08\/09 to those written in 18\/19, ties to corporations nearly doubled to 79\\% of all annotated papers, ties to big tech multiplied over fivefold to 58\\%, while ties to universities declined to 81\\%, putting corporations nearly on par with universities in the most cited ML research. 
In the next sections, we present extensive qualitative examples and analysis of our findings, with additional analyses in the Appendix.\n\n\\begin{table}\n\\caption{Annotation scheme and results for justificatory chain (top) and negative impacts (bottom).}\n\\centering\n\\small\n\\begin{tabular}{lc}\n\\toprule\n\\textbf{Justificatory Chain Condition} & \\textbf{\\% of Papers} \\\\ \\midrule\nDoesn't rigorously justify how it achieves technical goal & 1\\% \\\\\nJustifies how it achieves technical goal but no mention of societal need & 71\\% \\\\\nStates but does not justify how it connects to a societal need & 16\\% \\\\\nStates and somewhat justifies how it connects to a societal need & 9\\% \\\\\nStates and rigorously justifies how it connects to a societal need & 3\\% \\\\ \\bottomrule\n\\toprule\n\\textbf{Negative Impacts Condition} & \\textbf{\\% of Papers} \\\\ \\midrule\nDoesn't mention negative potential & 98\\% \\\\\nMentions but does not discuss negative potential \\hspace{18 mm} & 1\\% \\\\\nDiscusses negative potential & 1\\% \\\\\nDeepens our understanding of negative potential & 0\\% \\\\ \\bottomrule\n\\end{tabular}\n\\hfill\n\n\\label{fig:just_chain}\n\\end{table}\n\n\\section{Qualitative Analysis of Justifications and Negative Potential}\n\n\\subsection{Justificatory Chain}\n\nPapers typically motivate their projects by appealing to the needs of the ML research community and rarely mention potential societal benefits. Research-driven needs of the ML community include researcher understanding (e.g., understanding the effect of pre-training on performance\/robustness, theoretically understanding multi-layer networks) as well as more practical research problems (e.g., improving efficiency of models for large datasets, creating a new benchmark for NLP tasks). Some papers do appeal to needs of broader society, such as building models with realistic assumptions, catering to more languages, or understanding the world. However, even when societal needs are mentioned as part of the justification of the project, the connection is usually loose. Almost no papers explain how their project promotes the social need they identify with the kind of rigorous justification that is typically expected of, and given for, technical contributions. \n\n\\subsection{Negative Potential}\n\\label{negative potential}\n\nTwo of the 100 papers discussed potential harms, whereas the remaining 98 did not mention them at all. The lack of discussion of potential harms is especially striking for papers which deal with socially contentious application areas, such as surveillance and misinformation. For example, the annotated corpus includes a paper advancing the identification of people in images, a paper advancing face-swapping, and a paper advancing video synthesis. These papers contained no mention of the well-studied negative potential of facial surveillance, DeepFakes, or misleading videos, respectively.\n\nFurthermore, among the two papers that do mention negative potential, the discussions were mostly abstract and hypothetical, rather than grounded in the negative potential of their specific contributions. For example, authors may acknowledge \"possible unwanted social biases\" when applying the model to a real-world setting, without discussing the social biases encoded in the authors' proposed model.
\n\n\\section{Stated values}\n\\label{sec:values}\n\nThe dominant values in ML research, e.g., accuracy and efficiency, may seem purely technical. However, the following analysis of several of these values shows how they can become politically loaded in the process of prioritizing and operationalizing them: sensitivity to the way that they are operationalized, and to the fact that they are uplifted at all, reveals value-laden assumptions that are often taken for granted.\nWe thus challenge a conception of prevalent values as politically neutral and consider alternatives to their dominant conceptualization that may be equally or more intellectually interesting or socially beneficial.\\footnote{By challenging a politically neutral conception of the top values in machine learning research, this paper also contributes to the literature in philosophy. Philosophers of science have been working to understand the roles of values in science for decades. For example, Thomas Kuhn \\citep{kuhn.1977} presented a list of five scientific values which he deems important in scientific research (accuracy, consistency, scope, simplicity, and fruitfulness). Helen Longino \\citep{longino.1996} and others have argued that prominent values are politically loaded, focusing mostly on how some of these values function in disciplines such as biology and social sciences. However, \"technical\" values, such as accuracy, are often left out of this type of critical discussion. Our paper shows that even the \"technical\" values aren't politically neutral, and it does so in the context of machine learning, which is often conceived as a less politically loaded discipline than biology or social sciences.} We have encouraged ourselves, and now encourage the reader, to remember that values once held to be intrinsic, obvious, or definitional have been in many cases transformed over time.\n\nTo provide a sense of what the values look like in context, we include three randomly selected examples of sentences annotated for each value discussed here (Tables 2-5), with extensive additional examples in the Appendix. Note that most sentences are annotated with multiple values.\\footnote{To avoid the impression that we are drawing attention to anything special about these randomly chosen example sentences, we omit attribution, and include a list of all annotated papers in the Appendix.}\n\n\\subsection{Performance}\n\n\\begin{table}\n\\caption{Random examples of \\textit{performance}, the most common emergent value.}\n\\small\n\\begin{tabular}{p{.96\\linewidth}}\n\\midrule \"Our model significantly outperforms SVM's, and it also outperforms convolutional neural nets when given additional unlabeled data produced by small translations of the training images.\"\\\\\n\\midrule \"We show in simulations on synthetic examples and on the IEDB MHC-I binding dataset, that our approach outperforms well-known convex methods for multi-task learning, as well as related non-convex methods dedicated to the same problem.\"\\\\\n\\midrule \"Furthermore, the learning accuracy and performance of our LGP approach will be compared with other important standard methods in Section 4, e.g., LWPR [8], standard GPR [1], sparse online Gaussian process regression (OGP) [5] and $\\upsilon$-support vector regression ($\\upsilon$-SVR) [11], respectively.\"\\\\\n\\bottomrule\n\\end{tabular}\n\\hfill\n\n\\label{tab:performance}\n\\end{table}\n\nPerformance, accuracy, and achieving SOTA form the most common cluster of related values in annotated papers.
While it might seem intrinsic for the field to care about performance, it is important to remember that models are not simply \"well-performing\" or \"accurate\" in the abstract but always in relation to and as \\textit{quantified} by some metric on some dataset. Examining prevalent choices of operationalization reveals political aspects of performance values. First, we find performance values are consistently and unquestioningly operationalized as correctness averaged across individual predictions, giving equal weight to each instance. However, choosing equal weights when averaging is a value-laden move which might deprioritize those underrepresented in the data or the world, as well as societal and evaluee needs and preferences. Extensive research in ML fairness and related fields has considered alternatives, but we found no such discussions among the papers we examined. \n\nChoices of datasets are revealing. They are often driven purely by past work, so as to demonstrate improvement over a previous baseline (see also \\S\\ref{sec:novelty}). Another common justification for using a certain dataset is applicability to the \"real world\". Assumptions about how to characterize the \"real world\" are value-laden. One common assumption is the availability of very large datasets. However, presupposing the availability of large datasets is power centralizing because it encodes favoritism toward those with resources to obtain and process them \\cite{dotan2019value}. Further overlooked assumptions include that the real world is binary or discrete, and that datasets come with a predefined ground-truth label for each example, presuming that a true label always exists \"out there\" independent of those carving it out, defining and labelling it. This contrasts with marginalized scholars' calls for ML models that allow for non-binaries, plural truths, contextual truths, and many ways of being \\cite{costanza2018design, hamidi2018gender, lewis2020indigenous}.\n\nThe prioritization of performance values also requires scrutiny. Valuing these properties is so entrenched in the field that generic success terms, such as \"success\", \"progress\", or \"improvement\", are often used as synonyms for performance and accuracy. However, one might alternatively invoke generic success to mean increasingly safe, consensual, or participatory ML that reckons with impacted communities and the environment. In fact, \"performance\" itself is a general success term that could have been associated with properties other than accuracy and SOTA.
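
To make the point about operationalization concrete, the following sketch (ours; the group sizes and outcomes are hypothetical) contrasts the prevalent instance-averaged accuracy with a group-balanced alternative of the kind considered in the ML fairness literature:

\\begin{verbatim}
import numpy as np

# Hypothetical outcomes: 90 instances from a majority group, 10 from a minority.
correct = np.array([1]*85 + [0]*5 + [1]*3 + [0]*7)
group   = np.array(['maj']*90 + ['min']*10)

instance_averaged = correct.mean()                   # equal weight per instance
per_group = {g: correct[group == g].mean() for g in ('maj', 'min')}
group_balanced = np.mean(list(per_group.values()))   # equal weight per group

print(instance_averaged)   # 0.88 -- dominated by the majority group
print(group_balanced)      # ~0.62 -- exposes poor minority-group accuracy
\\end{verbatim}

Both numbers are "accuracy" under some weighting; which one counts as the measure of success is a choice, not a given.
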
\n\n\n\\subsection{Generalization}\n\n\n\n\n\n\\begin{table}\n\\caption{Random examples of \\textit{generalization}, the third most common emergent value.}\n\\small\n\\begin{tabular}\n{p{0.96\\linewidth}}\n\\midrule \"The range of applications that come with generative models are vast, where audio synthesis [55] and semi-supervised classification [38, 31, 44] are examples hereof.\"\\\\\n\\midrule \"Furthermore, the infinite limit could conceivably make sense in deep learning, since over-parametrization seems to help optimization a lot and doesn't hurt generalization much [Zhang et al., 2017]: deep neural nets with millions of parameters work well even for datasets with 50k training examples.\"\\\\\n\\midrule \"Combining the optimization and generalization results, we uncover a broad class of learnable functions, including linear functions, two-layer neural networks with polynomial activation $\\phi(z) = z^{2l}$ or cosine activation, etc.\"\\\\\n\\bottomrule\n\\end{tabular}\n\\hfill\n\n\\label{fig}\n\\end{table}\n\nA common way of appraising the merits of one's work in ML is to claim that it generalizes well. Typically, generalization is understood in terms of performance or accuracy: a model generalizes when it achieves good performance on a range of samples, datasets, domains, or applications. \nUplifting generalization raises two kinds of questions. First, which datasets, domains, or applications show that the model generalizes well? Typically, a paper shows that a model generalizes by showing that it performs well on multiple tasks or datasets. However, the choice of particular tasks and datasets is rarely justified; the choice of tasks can often seem arbitrary, and authors rarely present evidence that their results will generalize to more realistic settings, or help to directly address societal needs.\n\nSecond, uplifting generalization itself reveals substantive assumptions. The prizing of generalization means that there is an incentive to harvest many datasets from a variety of domains, and to treat these as the only datasets that matter for that space of problems. Generalization thus prioritizes distilling every scenario down to a common set of representations or affordances, rather than treating each setting as unique. Critical scholars have advocated for valuing \\emph{context}, which stands at the opposite side of striving for generalization \\citep{d2020data}. Others have argued that this kind of totalizing lens (in which model developers have unlimited power to determine how the world is represented) leads to \\emph{representational} harms, due to applying a single representational framework to everything \\citep{crawford.2017,abbasi.2019}. \n\n\nFinally, the belief that generalization is even possible implicitly assumes a conservative approach in which new data will be sufficiently similar to previously seen data. \nWhen used in the context of ML, the assumption that the future resembles the past is often problematic as past societal stereotypes and injustice can be encoded in the process \\cite{o2016weapons}. Furthermore, to the extent that predictions are performative \\cite{perdomo2020performative}, especially predictions that are enacted, those ML models which are deployed to the world will contribute to shaping social patterns.\nNo papers attempt to counteract this quality or acknowledge its presence.\n\n\n\n\\subsection{Efficiency}\n\n\nEfficiency is another common value in ML research. 
Abstractly, saying that a model is efficient typically means saying that the model uses less of some resource, such as time, memory, energy, or number of labeled examples. \nIn practice, however, efficiency is commonly referenced to imply the ability to scale up: a more efficient inference method allows you to do inference in much larger models or on larger datasets, using the same amount of resources.\nThis is reflected in our value annotations, where 72\\% of papers mention valuing efficiency, but only 14\\% of those value requiring \\textit{few} resources.\nIn this way, valuing efficiency facilitates and encourages the most powerful actors to scale up their computation to ever higher orders of magnitude, making their models even less accessible to those without resources to use them and decreasing the ability to compete with them. Alternative usages of efficiency could encode accessibility instead of scalability, aiming to create more equitable conditions for ML research.\n\n\\begin{table}\n\\caption{Random examples of \\textit{efficiency}, the fourth most common emergent value.}\n\\begin{small}\n\\begin{tabular}{p{0.96\\linewidth}}\n\\midrule \"Our model allows for controllable yet efficient generation of an entire news article \u2013 not just the body, but also the title, news source, publication date, and author list.\"\\\\\n\\midrule \"We show that Bayesian PMF models can be efficiently trained using Markov chain Monte Carlo methods by applying them to the Netflix dataset, which consists of over 100 million movie ratings.\"\\\\\n\\midrule \"In particular, our EfficientNet-B7 surpasses the best existing GPipe accuracy (Huang et al., 2018), but using 8.4x fewer parameters and running 6.1x faster on inference.\"\n\\\\\n\\bottomrule\n\\end{tabular}\n\\hfill\n\n\\end{small}\n\\label{tab:efficiency}\n\\end{table}\n\n\\subsection{Novelty and Building on Past Work}\n\\label{sec:novelty}\n\n\\begin{table}\n\\caption{Random examples of \\emph{building on past work} and \\emph{novelty}, the second and sixth most common emergent values, respectively.}\n\\begin{small}\n\\begin{tabular}{p{0.96\\linewidth}}\n\\toprule\n\\textbf{Building on past work}\\\\\n\\midrule \"Recent work points towards sample complexity as a possible reason for the small gains in robustness: Schmidt et al.
[41] show that in a simple model, learning a classifier with non-trivial adversarially robust accuracy requires substantially more samples than achieving good `standard' accuracy.\"\\\\\n\\midrule \"Experiments indicate that our method is much faster than state of the art solvers such as Pegasos, TRON, SVMperf, and a recent primal coordinate descent implementation.\"\\\\\n\\midrule \"There is a large literature on GP (response surface) optimization.\"\\\\\n\\bottomrule\n\\toprule\n\\textbf{Novelty} \\\\\n\\midrule \"In this paper, we propose a video-to-video synthesis approach under the generative adversarial learning framework.\"\\\\\n\\midrule \"Third, we propose a novel method for the listwise approach, which we call ListMLE.\"\\\\\n\\midrule \"The distinguishing feature of our work is the use of Markov chain Monte Carlo (MCMC) methods for approximate inference in this model.\"\\\\\n\\bottomrule\n\\end{tabular}\n\\hfill\n\n\\end{small}\n\\label{fig:novelty}\n\\end{table}\n\nMost authors devote space in the introduction to positioning their paper in relation to past work, and describing what is novel.\nMentioning past work serves to signal awareness of related publications, to establish the new work as relevant to the community, and to provide the basis upon which to make claims about what is new. \nNovelty is sometimes suggested implicitly (e.g., \"we develop\" or \"we propose\"), but frequently it is emphasized explicitly (e.g., \"a new algorithm\" or \"a novel approach\").\n\nThis combined focus on novelty and building on recent work establishes a continuity of ideas, and might be expected to contribute to the self-correcting nature of science \\citep{merton.1973}. However, this is not always the case \\citep{ioannidis.2012}, and attention to the ways novelty and building on past work are implemented reveals value commitments.\nWe find a clear emphasis on technical novelty, rather than critique of past work, or demonstration of measurable progress on societal problems, as has previously been observed \\citep{wagstaff.2012}. Although introductions sometimes point out limitations of past work (so as to further emphasize the contributions of their own paper), they are rarely explicitly critical of other papers in terms of methods or goals. Indeed, papers uncritically reuse the same datasets for years or decades to benchmark their algorithms, even if those datasets fail to represent more realistic contexts in which their algorithms will be used \\cite{bender2021dangers}. \nNovelty is denied to work that rectifies socially harmful aspects of existing datasets, while strong pressure remains to benchmark on those same datasets and thereby perpetuate their use, enforcing a fundamentally conservative bent to ML research.\n\n\\section{Corporate Affiliations and Funding}\n\\label{sec:corporate}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.77\\linewidth]{figures\/corp_affils_pie.png}\n\\caption{Corporate and Big Tech author affiliations.}\n\\label{fig:corp_affils}\n\\end{figure}\n\n\\begin{figure*}\n\\centering\n\\captionsetup{justification=centering}\n\\includegraphics[width=0.87\\linewidth]{figures\/affils.png}\n\\caption{Corporate affiliations and funding ties. Non-N.A. Universities are those \\\\outside the U.S.
and Canada.}\n\\label{fig:vcorp-ties}\n\\end{figure*}\n\n\nOur analysis shows a substantive and increasing corporate presence in the most highly-cited papers.\nIn 2008\/09, 24\\% of the top cited papers had corporate-affiliated authors, and in 2018\/19 this statistic almost tripled, to 71\\%. Furthermore, we also find a much greater concentration of a few large tech firms, such as Google and Microsoft, with the presence of these \"big tech\" firms (as identified in \\citep{ahmed2020democratization}) increasing more than fivefold, from 11\\% to 58\\% (see Figure \\ref{fig:corp_affils}). The fraction of the annotated papers with corporate ties, by author affiliation or funding, dramatically increased from 43\\% in 2008\/09 to 79\\% in 2018\/19.\nIn addition, we found a pronounced dominance of elite universities in our analysis, as shown in Figure \\ref{fig:vcorp-ties}. Of the total papers with university affiliations, we found 82\\% were from elite universities (defined as the top 50 universities by QS World University Rankings, following past work \\cite{ahmed2020democratization}).\nThese findings are consistent with previous work indicating a pronounced corporate presence in ML research. In an automated analysis of peer-reviewed papers from 57 major computer science conferences, Ahmed and Wahed \\cite{ahmed2020democratization} show that the share of papers that have at least one corporate-affiliated co-author increased from 10\\% in 2005 for both ICML and NeurIPS to 30\\% and 35\\%, respectively, in 2019. Our analysis shows that corporate presence is even more pronounced in those papers from ICML and NeurIPS that end up receiving the most citations.\n\nThe influence of powerful players in ML research is consistent with field-wide value commitments that centralize power. Others have also argued for causal connections. For example, Abdalla and Abdalla \\cite{abdalla2020grey} argue that big tech sways and influences academic and public discourse using strategies that closely resemble those used by Big Tobacco.\n\nMoreover, examining the prevalent values of big tech, critiques have repeatedly pointed out that objectives such as efficiency, scale, and wealth accumulation \\cite{o2016weapons,pasquale2015black,hanna2020against} drive the industry at large, often at the expense of individuals' rights, respect for persons, consideration of negative impacts, beneficence, and justice. Thus, the top stated values of ML that we presented in this paper, such as performance, generalization, and efficiency, may not only enable and facilitate the realization of big tech's objectives, but also suppress values such as beneficence, justice, and inclusion. A \"state-of-the-art\" large image dataset, for example, is instrumental for large-scale models, further benefiting ML researchers and big tech in possession of huge computing power. \nIn the current climate where values such as accuracy, efficiency, and scale, as currently defined, are a priority, user safety, informed consent, or participation may be perceived as costly and time-consuming, sidelining social needs. \n\n\\section{Discussion and Related Work}\n\nML research is often perceived as value-neutral, and emphasis is placed on positive applications or potential.\nThis fits into a historical strain of thinking which has tended to frame technology as \"neutral\", based on the notion that new technologies can be unpredictably applied for both beneficial and harmful purposes \\citep{winner.1977}.
Ironically, this claim of neutrality frequently serves as insulation from critiques of AI and as permission to emphasize the benefits of AI \\citep{rus.2018, weizenbaum1972impact}. Although it is rare to see anyone explicitly argue in print that ML is neutral, related ideas are part of contemporary conversation, including these canonical claims: long-term impacts are too difficult to predict; sociological impacts are outside the expertise or purview of ML researchers \\citep{holstein.2019}; critiques of AI are really misdirected critiques of those deploying AI with bad data (\"garbage in, garbage out\"), again outside the purview of many AI researchers; and proposals such as broader impact statements represent merely a \"bureaucratic constraint\" \\citep{abuhamad.2020}. A recent qualitative analysis of required broader impact statements from NeurIPS 2020 similarly observed that these statements leaned towards positive consequences (often mentioning negative consequences only briefly and in some cases not at all), emphasized uncertainty about how a technology might be used, or simply omitted any discussion of societal consequences altogether \\citep{nanayakkara.2021}.\n\nImportantly, there is a foundational understanding in Science, Technology, and Society Studies (STSS), Critical Theory, and Philosophy of Science that science and technologies are inherently value-laden, and these values are encoded in technological artifacts, many times in contrast to a field's formal research criteria, espoused consequences, or ethics guidelines \\cite{winner1980artifacts,bowker2000sorting,benjamin2019race}. There is a long tradition of exposing and critiquing such values in technology and computer science. For example, Winner \\cite{winner1980artifacts} introduced several ways technology can encode political values. This work is closely related to Rogaway \\cite{rogaway2015moral}, who notes that cryptography has political and moral dimensions and argues for a cryptography that better addresses societal needs. Weizenbaum \\cite{weizenbaum1976computer} argued in 1976 that the computer has from the beginning been a fundamentally conservative force which solidified existing power: in place of fundamental social changes, he argued, the computer renders technical solutions that allow existing power hierarchies to remain intact. \n\nOur paper extends these critiques to the field of ML. It is a part of a rich space of interdisciplinary critiques and alternative lenses used to examine the field. Works such as \\cite{mohamed2020decolonial, birhane2020algorithmic} critique AI, ML, and data using a decolonial lens, noting how these technologies replicate colonial power relationships and values, and propose decolonial values and methods.\nOthers \\cite{benjamin2019race, noble2018algorithms, d2020data} examine technology and data science from an anti-racist and intersectional feminist lens, discussing how our infrastructure has largely been built by and for white men; D'Ignazio and Klein \\cite{d2020data} present a set of alternative principles and methodologies for an intersectional feminist data science. Similarly, Kalluri \\cite{kalluri2020don} notes that the core values of ML are closely aligned with the values of the most privileged and outlines a vision where ML models are used to shift power from the most to the least powerful. Dotan and Milli \\cite{dotan2019value} argue that the rise of deep learning is value-laden, promoting the centralization of power among other political values.
Many researchers, as well as organizations such as Data for Black Lives, the Algorithmic Justice League, Our Data Bodies, the Radical AI Network, Indigenous AI, Black in AI, and Queer in AI, explicitly work on continuing to uncover particular ways technology in general and ML in particular can encode and amplify racist, sexist, queerphobic, transphobic, and otherwise marginalizing values \\cite{buolamwini2018gender, prabhu2020large}.\n\n\n\n\n\n\n\n\n\n\n\nWe present this paper in part in order to expose the contingency of the present state of the field; it could be otherwise. \nFor individuals, communities, and institutions wading through difficult-to-pin-down values of the field, as well as those striving toward alternative values, it is a useful tool to have a characterization of the way the field is now, for understanding, shaping, dismantling, or transforming what is, and for\narticulating and \nbringing about alternative visions.\n\nAs with all methods, our chosen approach \u2014 coding important sections of highly-cited papers \u2014 has limitations. Most notably, this approach requires human expertise and does not automatically scale or generalize to other data, which limits our ability to draw strong conclusions about other conferences or different years. Similarly, this approach is less reproducible than fully automated approaches, and for both our final list of values and specific annotation of individual sentences, different researchers might make somewhat different choices. However, given the overwhelming presence of certain values, the high agreement rate among annotators, and the similarity of observations made by our team, we strongly believe other researchers following a similar approach would reach similar conclusions about what values are most frequently uplifted by the most influential papers in this field. Lastly, we cannot claim to have identified every relevant value in ML. However, by including important ethical values identified by past work, and specifically looking for these, we can confidently assert their relative absence in this set of papers, which captures an important aspect of influential work in ML.\n\n\\section{Conclusion}\n\nWe reject the vague conceptualization of the discipline of ML as value-neutral. Instead, we investigate the ways that the discipline of ML is inherently value-laden. Our analysis of highly influential papers in the discipline finds that they not only favor the needs of research communities and large firms over broader social needs, but also that they take this favoritism for granted. The favoritism manifests in the choice of projects, the lack of consideration of potential negative impacts, and the prioritization and operationalization of values such as performance, generalization, efficiency, and novelty. These values are operationalized in ways that disfavor societal needs, usually without discussion or acknowledgment. Moreover, we uncover an overwhelming and increasing presence of big tech and elite universities in highly cited papers, which is consistent with a system of power-centralizing value-commitments. \nThe upshot is that the discipline of ML is not value-neutral. We find that it is socially and politically loaded, frequently neglecting societal needs and harms, while prioritizing and promoting \nthe concentration of power in the hands of already powerful actors.\n\n\n\n \n\n\n\n\n\n\\section*{Acknowledgements}\n\nWe would like to thank Luke Stark, Dan Jurafsky, and Sarah K. Dreier for helpful feedback on this work. 
We owe gratitude and accountability to the long history of work exposing how technology shifts power, work primarily done by communities at the margins. Abeba Birhane was supported, in part, by Science Foundation Ireland grant 13\/RC\/2094\\_2. Dallas Card was supported in part by the Stanford Data Science Institute. William Agnew was supported by an NDSEG Fellowship.\n\n\\Urlmuskip=0mu plus 1mu\n\\bibliographystyle{plain}\n\n\\section{Introduction}\n\\label{sect:introduction}\n\nThe study of solar oscillations has proven to be a powerful tool to infer the properties of the solar interior. Global helioseismology, based on the interpretation of the eigenfrequencies of the resonant modes of oscillations, has provided a robust description of the internal structure and dynamics of the Sun. In recent years, these studies have been complemented by analyses focusing on local features. Local helioseismology interprets the full wave field observed at the surface. Several complementary techniques have been developed to probe local perturbations, such as Fourier-Hankel spectral analysis \\citep{Braun1995}, ring-diagram analysis \\citep{Hill1988}, time-distance helioseismology \\citep{Duvall+etal1993}, and acoustic holography \\citep{Lindsey+Braun1990}. \n\nFor an accurate interpretation of the measurements obtained from all these procedures, it is extremely important to achieve a deep understanding of the physics involved in the wave propagation. Due to the increasing interest of helioseismologists in active regions, such as sunspots, the knowledge of the interaction of waves with magnetic structures has been greatly developed in recent years. The conversion from fast-mode high-$\\beta$ acoustic waves to slow-mode low-$\\beta$ waves in solar active regions is well understood from the theoretical point of view \\citep{Cally+Bogdan1993, Crouch+Cally2003, Schunker+Cally2006, Cally2006}. An example of the success of this analytical development is the comparison of the observational absorption and phase shift data \\citep{Braun+etal1988, Braun1995} with the modeled results obtained from the conversion \\citep{Cally+etal2003, Crouch+etal2005}. Numerically, the fast-to-slow conversion is also well studied in sunspot-like atmospheres \\citep{Bogdan+etal2003, Rosenthal+etal2002, Khomenko+Collados2006, Khomenko+etal2009, Felipe+etal2010a}, as well as flux tubes \\citep{Hasan+etal2003, Hasan+Ulmschneider2004}. See \\citet{Khomenko2009} for a review.\n\nNot all the developments of phenomena associated with wave propagation in magnetic structures have reached the same degree of maturity. An insufficiently modelled effect is the fast-to-Alfv\\'en conversion. Pure Alfv\\'en waves can only exist in media that are homogeneous in the direction perpendicular to both magnetic field and wavenumber. In general, this will not be the situation in a gravitationally stratified atmosphere with complex magnetic field structure, although we will simply refer to Alfv\\'en waves even in this case. The fast-to-Alfv\\'en conversion only occurs when the wave propagation is not contained in the plane defined by the density stratification and the magnetic field and, thus, three-dimensional (3D) analyses are necessary. The study of this process was started by \\citet{Crouch+Cally2005}, who analyzed the 3D propagation of oscillations in a polytrope permeated by a uniform magnetic field of arbitrary inclination, and found downward Alfv\\'en waves.
\\citet{Cally+Goossens2008} found that the conversion to the Alfv\\'en mode is most efficient for field inclinations from vertical between 30 and 40 degrees, and azimuth angles (the angle between the magnetic field and wave propagation planes) between 60 and 80 degrees. \\citet{Cally+Hansen2011} found that the interaction between fast and Alfv\\'en waves is spread across many scale heights, unlike the fast-to-slow conversion, which is confined to the region around the layer where the sound speed $c_S$ and the Alfv\\'en speed $v_A$ are similar. As the frequency increases, the mode conversion becomes progressively more localized, although at the frequencies relevant to local helioseismology (around 3-5 mHz) the fast-to-Alfv\\'en conversion region spans the whole chromosphere. \n\nRecently, Khomenko and Cally have studied the conversion to the Alfv\\'en mode by means of 2.5D numerical simulations in homogeneous field configurations \\citep{Khomenko+Cally2011}, as well as realistic sunspot-like structures \\citep{Khomenko+Cally2012}. In these works they obtained the dependence of the conversion efficiency on the inclination and azimuth. However, in the sunspot configuration the conversion was only evaluated at a limited set of angles corresponding to selected 2D planes of the model. The aim of this work is to extend the results from \\citet{Khomenko+Cally2012} to 3D. This extension allows us to populate the voids in their diagrams of fast-to-Alfv\\'en conversion efficiency. More importantly, full 3D simulations provide the complete wave field, where waves can propagate freely in all spatial directions without being restricted to a two-dimensional plane. The conversion to the Alfv\\'en mode in realistic 3D atmospheres has not been studied before with the detailed evaluation of the efficiency discussed in this paper. \n\n\\section{Numerical procedures}\n\\label{sect:procedures}\n\nThe three-dimensional (3D) non-linear magnetohydrodynamic equations are solved using the numerical code Mancha \\citep{Khomenko+Collados2006, Khomenko+etal2008, Felipe+etal2010a}. The code solves the equations for perturbations, obtained after removing the equilibrium state from the equations. A Perfectly Matched Layer (PML) is used to avoid wave reflection at the top boundary \\citep{Berenger1996}, while periodic boundary conditions are imposed at the horizontal boundaries. The initial perturbation is set at the bottom boundary with a small amplitude in order to ensure that the simulations are in the linear regime.\n\nAs a background atmosphere we use a magnetostatic (MHS) sunspot model, adopted from \\citet{Khomenko+Collados2008}. This model is a thick flux tube with distributed currents; it is azimuthally symmetric and has no twist. We set the height reference $z=0$ Mm at the photospheric level, where the optical depth at 500 nm is unity in the quiet Sun atmosphere. At this height the magnetic field at the axis is 900 G. The spatial resolution is 150 km in the horizontal directions and 50 km in the vertical direction. The computational domain spans from $z=-5$ Mm, where the minus sign indicates that it is below the photosphere, to $z=2.4$ Mm. The upper 500 km (10 grid points) correspond to the PML boundary layer, so the effective top of the simulation is located at $z=1.9$ Mm. The horizontal extent of the domain is $x \\in [-39,39]$ Mm and $y \\in [-30,30]$ Mm, with the axis of the sunspot located at $x=0$, $y=0$ Mm.
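
For orientation, the grid implied by these figures can be reconstructed as follows (a sketch based only on the numbers quoted above, not on the actual Mancha setup):

\\begin{verbatim}
import numpy as np

# Grid implied by the quoted numbers (a reconstruction, not the Mancha input).
dx, dy, dz = 0.150, 0.150, 0.050                 # resolution in Mm
x = np.arange(-39.0, 39.0 + dx/2, dx)            # 521 points
y = np.arange(-30.0, 30.0 + dy/2, dy)            # 401 points
z = np.arange(-5.0, 2.4 + dz/2, dz)              # 149 points; top 10 are the PML
n_pml = 10
print(x.size, y.size, z.size)                    # 521 401 149
print('effective top:', round(z[-1 - n_pml], 2), 'Mm')   # 1.9 Mm
\\end{verbatim}
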
At 39 Mm from the sunspot axis the thermodynamic variables of the model are taken from Model S \\citep{Christensen-Dalsgaard+etal1996} in the deep sub-photospheric layers and the VAL-C model \\citep{Vernazza+etal1981} in the photosphere and chromosphere, stabilized following the method by \\citet{Parchevsky+Kosovichev2007} to avoid the convective instability. The axis of the sunspot is given by the \\citet{Avrett1981} model. The atmosphere merges smoothly between the quiet Sun boundary and the umbral model at the axis.\n\nWaves are driven at a few grid points at the bottom boundary at $z=-5$ Mm. The perturbations in pressure, density, and velocity are calculated analytically as an acoustic-gravity wave of a given frequency and wavenumber, neglecting the magnetic field and temperature gradient \\citep{Mihalas+Mihalas1984}. The detailed form of the perturbations can be found in \\citet{Khomenko+Cally2012}. In this simulation the frequency was set to $\\nu=\\omega \/2\\pi=5$ mHz, slightly below the maximum cut-off frequency reached at the temperature minimum, and the horizontal wave number to $k_x=1.37$ Mm$^{-1}$.\n\nIn order to identify the slow, fast, and Alfv\\'en wave modes in the magnetically dominated region of the computational domain, the velocity and magnetic field perturbations have been projected according to their orientation with respect to the equilibrium magnetic field onto these three characteristic directions: \n\n\\begin{equation}\n\\label{eq:elong}\n\\hat{e}_{long}=({\\rm cos}\\phi {\\rm sin}\\theta, {\\rm sin}\\phi {\\rm sin}\\theta, {\\rm cos}\\theta),\n\\end{equation}\n\\begin{eqnarray}\n\\label{eq:eperp}\n\\lefteqn{\\hat{e}_{perp}=(-{\\rm cos}\\phi {\\rm sin}^2\\theta {\\rm sin}\\phi,}\\nonumber\\\\\n&&1-{\\rm sin}^2\\theta {\\rm sin}^2\\phi,-{\\rm cos}\\theta {\\rm sin}\\theta {\\rm sin}\\phi)\\,,\n\\end{eqnarray}\n\\begin{equation}\n\\label{eq:etrans}\n\\hat{e}_{trans}=(-{\\rm cos}\\theta,0,{\\rm cos}\\phi{\\rm sin}\\theta).\n\\end{equation}\n\n\\noindent where $\\theta$ is the magnetic field inclination from the vertical and $\\phi$ is the field azimuth, measured from the $xz$ plane. The projection $\\hat{e}_{long}$ is along the magnetic field and it selects the slow magneto-acoustic wave in the low-$\\beta$ region; the projection $\\hat{e}_{perp}$ was chosen after \\citet{Cally+Goossens2008} and it gives the asymptotic polarization direction of the Alfv\\'en mode; and the last projection $\\hat{e}_{trans}$ is set normal to the other two, corresponding to the fast wave in the low-$\\beta$ regime. These projections have already been successfully used to separate the three wave modes in idealized magnetic field configurations \\citep{Khomenko+Cally2011} as well as more complex magnetic topologies \\citep{Felipe+etal2010a, Khomenko+Cally2012}. These projections assume that the wavevector ${\\bf k}$ is contained in the $xz$ plane. In these simulations ${\\bf k}$ will not be limited to that plane, since the raypath can bend due to background variations in the $y$ direction. This would have some effect on the accuracy of the projections, but it can probably be neglected.
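
To make the decomposition explicit, the following sketch implements the directions of Equations \\ref{eq:elong}-\\ref{eq:etrans} and projects a velocity vector onto them. It is an illustration only: the projections are taken here as plain dot products with the directions as written, and any normalization convention adopted in the actual analysis is not specified in the text.

\\begin{verbatim}
import numpy as np

def characteristic_directions(theta, phi):
    # The three characteristic directions defined in the text;
    # theta = field inclination from the vertical,
    # phi = field azimuth measured from the xz plane, both in radians.
    st, ct, sp, cp = np.sin(theta), np.cos(theta), np.sin(phi), np.cos(phi)
    e_long  = np.array([cp*st, sp*st, ct])                          # slow (low beta)
    e_perp  = np.array([-cp*st**2*sp, 1 - st**2*sp**2, -ct*st*sp])  # Alfven
    e_trans = np.array([-ct, 0.0, cp*st])                           # fast (low beta)
    return e_long, e_perp, e_trans

# Example: project an arbitrary velocity vector at theta = 45 deg, phi = 60 deg.
v = np.array([0.10, 0.05, 0.30])
e_long, e_perp, e_trans = characteristic_directions(np.pi/4, np.pi/3)
v_long, v_perp, v_trans = (v @ e for e in (e_long, e_perp, e_trans))
\\end{verbatim}
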
As a measure of the efficiency of the conversion to each mode, the time-averaged wave energy fluxes \\citep{Bray+Loughhead1974} were calculated in the magnetically dominated region. The acoustic energy flux is obtained from the expression:\n\n\\begin{equation}\n{\\bf F_{ac}}=\\langle p_1{\\bf v}\\rangle,\n\\label{eq:Fac}\n\\end{equation}\n\n\\noindent while the magnetic energy flux is given by: \n\n\\begin{equation}\n{\\bf F_{mag}}=\\langle {\\bf B_1}\\times({\\bf v}\\times {\\bf B_0})\\rangle\/\\mu_0.\n\\label{eq:Fmag}\n\\end{equation}\n\n\\noindent where $p_1$, ${\\bf v}$, and ${\\bf B_1}$ are the Eulerian perturbations in pressure, velocity, and magnetic field, respectively, ${\\bf B_0}$ is the background magnetic field, and $\\mu_0$ is the magnetic permeability. In the region where $v_A>c_S$ the acoustic energy flux contains the energy of the slow mode, while the magnetic flux includes the fast and Alfv\\'en modes. The time-averaged energy of the three wave modes was also calculated from the relations: \n\n\\begin{equation}\nE_{long}=\\rho_0c_S\\langle v_{long}^2\\rangle\n\\label{eq:Elong}\n\\end{equation}\n\n\\begin{equation}\nE_{perp}=\\rho_0v_A\\langle v_{perp}^2\\rangle\n\\label{eq:Eperp}\n\\end{equation}\n\n\\begin{equation}\nE_{trans}=\\rho_0v_A\\langle v_{trans}^2\\rangle\n\\label{eq:Etrans}\n\\end{equation}\n\n\\noindent where $v_{long}$, $v_{perp}$, and $v_{trans}$ are the velocity projections into the characteristic directions from Equations \\ref{eq:elong}-\\ref{eq:etrans}, and $\\rho_0$ is the density in the equilibrium state. These expressions provide an approximation of the wave energy assuming equipartition between kinetic and other energies. In the case of pure acoustic or Alfv\\'en waves, there is strict equipartition between kinetic and compressional or magnetic energy, respectively, and Equations (\\ref{eq:Elong}-\\ref{eq:Etrans}) correspond to the real energies.\n\n\\section{Velocity projections}\n\\label{sect:velocity}\n\nWhen the fast acoustic wave which was driven at the bottom boundary reaches the $v_A=c_S$ layer, several mode transformations take place. Above that layer, in the magnetically dominated atmosphere, the incident wave splits into a slow acoustic mode, a fast magnetic mode, and an Alfv\\'en mode.\n\nFigure \\ref{fig:velocities} shows snapshots of the projected velocities scaled with a factor $\\sqrt{\\rho_0v_{ph}}$, where $v_{ph}=c_S$ for the $v_{long}$ component and $v_{ph}=v_A$ for the other two components, after 19 min of simulation. The stationary regime is achieved after about 10 minutes of simulation. The inclination of the magnetic field $\\theta$ varies from $0^o$ at the center of the sunspot to being almost horizontal at the boundaries of the computational domain. The azimuth $\\phi=0^o$ corresponds to all the positions at $y=0$ Mm with positive $X$ value, and it increases up to $\\phi=180^o$ at $y=0$ Mm and negative $X$ values, including all the angles between these two extremes.\n\n\\begin{figure*}[!ht] \n \\centering\n \\includegraphics[width=18cm]{f1.eps}\n \\caption{Snapshots of the three orthogonal components of the velocity at $t=19$ min. Blue and red colors indicate positive and negative velocity directions, respectively; the range of the color coding is the same in all panels. The projection $v_{long}$ is scaled with a factor of $\\sqrt{\\rho_0c_S}$ and the projections $v_{trans}$ and $v_{perp}$ with a factor $\\sqrt{\\rho_0v_A}$. Top panels correspond to horizontal cuts at $z=1.65$ Mm, from left to right: $v_{long}$, $v_{trans}$, and $v_{perp}$. Dashed lines represent contours of equal inclination of the background magnetic field, measured at the height where $c_S=v_A$.
The three bottom panels show vertical cuts at $y=7.5$ Mm, from top to bottom: $v_{long}$, $v_{trans}$, and $v_{perp}$. The horizontal solid line marks the height where $c_S=v_A$; the horizontal dashed line marks the fast mode reflection level. Inclined black lines are magnetic field lines.}\n \\label{fig:velocities}\n\\end{figure*}\n\nEach of the wave modes presents a different distribution across the 3D atmosphere of the sunspot. In the magnetically dominated atmosphere the slow wave appears in the $v_{long}$ projection. The $xy$ cut (top left panel) shows that the conversion to the slow mode is significant at almost all positions of the sunspot. However, there are some locations where this transformation is especially favoured. The strongest slow wave signal appears for $X$ between $3$ and $12$ Mm and $Y$ between $-10$ and $10$ Mm. This region corresponds to moderate inclinations around $\\theta =30^o$. Although the frequency of the wave is below the cut-off frequency, the slow mode can reach the upper atmosphere because of the reduced cut-off value due to the inclination of the magnetic field. However, the vertical magnetic field at the axis of the sunspot prevents the propagation of slow waves around the center of the sunspot, where they become evanescent. In general, the amplitude of the slow mode in the x-positive region of the sunspot is higher than in the x-negative region. The driving perturbation generates an acoustic-gravity wave which propagates from left to right. Thus, the right half of the sunspot has a better alignment between the direction of propagation and the field lines, producing a more efficient conversion from fast acoustic waves (in the region where $v_A<c_S$) to slow acoustic waves (in the region where $v_A>c_S$). \n\n\\begin{figure}[!ht] \n \\centering\n \\includegraphics[width=8.5cm]{f2.eps}\n \\caption{Variation of the temperature with height at the axis of the sunspot (solid line), at 7.5 Mm from the axis (dotted line), and in the quiet Sun atmosphere (dashed line).}\n \\label{fig:temperature}\n\\end{figure}\n\nThe vertical cut of the slow mode (Figure \\ref{fig:velocities}d) shows an interference pattern above the solid line, near the top boundary. It appears mainly in the left part of the sunspot, between $x=-20$ Mm and $x=5$ Mm. The interference is produced between the upward propagating slow mode and a downward propagating wave produced by the partial reflection of the slow mode due to the steep temperature increase at the chromosphere. A similar result was found in the 2.5D simulations from \\citet{Khomenko+Cally2012}, where they confirmed that this reflection is a physical effect rather than an artifact from the top boundary. This reflection is stronger near the center of the sunspot, where the gradient of the temperature is steeper (Figure \\ref{fig:temperature}), as can be seen in panel $(a)$ of Figure \\ref{fig:velocities}. When the reflected wave reaches the $c_S=v_A$ layer it undergoes a secondary transformation, generating a new fast acoustic mode and a slow magnetic mode visible below the solid line in Figures \\ref{fig:velocities}d and \\ref{fig:velocities}e, respectively. \n\nThe fast magnetic mode in the low-$\\beta$ region is reflected back down due to the gradients of the Alfv\\'en speed (Figure \\ref{fig:velocities}e). If we neglect the contribution of the sound speed to the fast wave speed, the reflection height is given by the layer where the wave frequency $\\omega$ and the horizontal wave number $k_x$ are related by $\\omega=v_Ak_x$, and it is represented in the figure by a dashed line.
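
As an illustration of this condition, the following sketch locates the reflection level for the driving frequency and wavenumber used here; the exponential Alfv\\'en speed profile is a placeholder, not the actual sunspot model:

\\begin{verbatim}
import numpy as np

# Locate the fast-mode reflection level, where omega = v_A(z) * k_x.
omega = 2*np.pi*5e-3                      # 5 mHz driving frequency, rad/s
kx    = 1.37                              # horizontal wavenumber, 1/Mm
z     = np.arange(0.0, 2.0, 0.05)         # height above the photosphere, Mm
vA    = 8.0*np.exp(z/0.6)                 # placeholder Alfven speed, km/s
idx   = np.argmax(vA*1e-3*kx >= omega)    # first level fulfilling the condition
print('approximate reflection height:', round(z[idx], 2), 'Mm')
\\end{verbatim}
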
Around the center of the sunspot the reflection is complete, and at the height where panel $(b)$ is obtained there is no fast mode in that region. Farther from the axis of the sunspot the transformation layer is located at a greater height, and the limited height of the top boundary of the computational domain prevents the complete reflection of the fast mode. \n\nThe Alfv\\'en wave can be seen in panels $(c)$ and $(f)$. Along $y=0$ Mm, which corresponds to $\\phi=0^o$, there is no wave power in the Alfv\\'en mode. At this position the magnetic field is contained in the $xz$ plane, since $B_y=0$ G, making this plane equivalent to a 2D case. Under these conditions the Alfv\\'en mode is decoupled from the fast and slow magneto-acoustic modes, and it is not possible for the incident wave to undergo conversion to the Alfv\\'en mode. As regions farther from the $y=0$ Mm plane are considered, the conversion to the Alfv\\'en mode becomes more efficient. It reaches its highest amplitude at around $\\theta=45^o$.\n\n\\begin{figure*}[h]\n\\centering\n\\subfloat{\n\\includegraphics[width=0.33\\textwidth]{f3a.eps}}\n\\subfloat{\n\\includegraphics[width=0.33\\textwidth]{f3b.eps}}\n\\subfloat{\n\\includegraphics[width=0.33\\textwidth]{f3c.eps}}\n\\caption{Log$_{10}$ of the amplitude ratio $R={\\bf B_1}\/\\sqrt{\\mu_0\\rho_0}\/{\\bf v_1}$ for the projected directions in the upper atmosphere (averages at heights from 0.4 Mm above the $c_S = v_A$ layer up to $z = 1.9$ Mm). Left panel: slow acoustic mode ($\\hat{e}_{long}$); middle panel: fast magnetic mode ($\\hat{e}_{trans}$); right panel: Alfv\\'en mode ($\\hat{e}_{perp}$).}\n\\label{fig:polarization}\n\\end{figure*}\n\n\\section{Alfv\\'en mode polarization relations}\n\\label{sect:polarization}\n\nBefore evaluating the contribution of the different wave modes to the energy in the upper part of the atmosphere, we have checked the validity of the projections described in the previous section, in order to ensure that they provide an accurate decoupling of the slow, fast, and Alfv\\'en waves. Following \\citet{Khomenko+Cally2012}, we have calculated the ratio $R={\\bf B_1}\/\\sqrt{\\mu_0\\rho_0}\/{\\bf v_1}$. Since the kinetic and magnetic energies for an Alfv\\'en wave are in equipartition, one would expect the ratio $R$ to be equal to one for such waves. \n\nFigure \\ref{fig:polarization} shows the ratio $R$ for the three projected components of the velocity and magnetic field. In each case, ${\\bf B_1}$ and ${\\bf v_1}$ pairs were obtained from the decomposition in the directions defined by Equations \\ref{eq:elong}-\\ref{eq:etrans}. The ratio is evaluated at heights from 400 km above the transformation layer to the top boundary, and it is averaged in time for the stationary stage of the simulations. It reveals the different nature of the three projections. For the $\\hat{e}_{long}$ direction at $\\theta <50^o$, the ratio $R$ presents small values, around $10^{-2}$, indicating that this component is dominated by the velocity variations rather than the magnetic ones, and confirming that this projection contains the slow acoustic waves. However, at higher inclinations the magnetic variations associated with velocity variations are larger. The opposite behavior is found for the $\\hat{e}_{trans}$ component. In this case, in the regions where $\\theta <50^o$ the ratio $R$ is large, around $10^2-10^3$, and it becomes smaller at higher inclinations.
These plots indicate that the projections $\\hat{e}_{long}$ and $\\hat{e}_{trans}$ provide a good estimation of the slow and fast waves, respectively, in the upper atmosphere when the inclination of the magnetic field is below $50^o$. At higher inclinations both wave modes are mixed up, since the projection directions are asymptotic, valid strictly only where the Alfv\\'en speed is much higher than the sound speed. At these locations this criterion is not fulfilled. \n\nThe right panel of Figure \\ref{fig:polarization} illustrates the ratio $R$ for the $\\hat{e}_{perp}$ component. It shows a ratio around $10^{0}$ throughout the atmosphere, indicating that for this projection the velocity and magnetic perturbations are in equipartition, which confirms the Alfv\\'enic nature of the oscillations in this characteristic direction. \n\n\\begin{figure*}[h]\n\\centering\n\\subfloat{\n\\includegraphics[width=0.66\\textwidth]{f4ab.eps}}\n\\qquad\n\\subfloat{\n\\includegraphics[width=0.66\\textwidth]{f4c.eps}}\n\\caption{Phase shift between the variations of $v_{perp}$ and $B_{perp}$ in the upper atmosphere (averages at heights from 0.4 Mm above the $c_S = v_A$ layer up to $z = 1.9$ Mm) as a function of inclination and azimuth (top left panel) and $X$ and $Y$ (top right panel). The bottom panel shows the same quantity, but as a function of height and horizontal distance at $y=7.5$ Mm.}\n\\label{fig:phase}\n\\end{figure*}\n\nFigure \\ref{fig:phase} shows the phase difference between the perturbations in the velocity and magnetic field in the direction $\\hat{e}_{perp}$, which corresponds to the Alfv\\'en wave. For a pure Alfv\\'en mode a phase shift of 180$^o$ indicates upward propagation, while negative phase shifts correspond to downward propagation. Most of the right part of the sunspot, that is, for azimuths between $0^o$ and $90^o$, presents a phase shift around $180^o$. At these locations the Alfv\\'en waves come from the conversion of the upward propagating fast acoustic mode introduced in the high-$\\beta$ region, and they keep their upward propagation to higher layers. However, at some positions in the left side of the sunspot (with $\\phi$ between $90^o$ and $180^o$) the negative sign of the phase shift indicates downward propagating waves. The direction of the propagation of the Alfv\\'en waves in this simulation shows perfect agreement with that obtained by \\citet{Khomenko+Cally2012}. Since the efficiency of the conversion to the Alfv\\'en mode is enhanced by the alignment between the direction of propagation and the magnetic field, in the right part of the sunspot the upward propagating fast waves couple to upward Alfv\\'en waves. On the other hand, in the left part of the sunspot the most efficient conversion to Alfv\\'en waves occurs for the refracted downward propagating fast waves \\citep{Cally+Hansen2011}. See Figure 1 from \\citet{Khomenko+Cally2012} for a schematic representation of these mode transformations.\n\n\\section{Energy fluxes}\n\\label{sect:flux}\n\nThe acoustic and magnetic energy fluxes were calculated using Equations \\ref{eq:Fac} and \\ref{eq:Fmag}, respectively. Figures \\ref{fig:fluxXZ} and \\ref{fig:fluxXY} show the time-averaged results, including all the time steps after the stationary regime is achieved. The former corresponds to an $xz$ cut at $y=6$ Mm.
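
Before describing the figures in detail, the following sketch indicates how such time-averaged vertical fluxes can be assembled from simulation snapshots following Equations \\ref{eq:Fac} and \\ref{eq:Fmag}; the array names, shapes, and units are assumptions, not the actual post-processing code:

\\begin{verbatim}
import numpy as np
mu0 = 4*np.pi*1e-7

# Assumed snapshot arrays in SI units, with a leading time axis over the
# stationary stage of the simulation:
#   p1: (nt, nx, ny, nz)        Eulerian pressure perturbation
#   v, B1: (nt, nx, ny, nz, 3)  velocity and magnetic field perturbations
#   B0: (nx, ny, nz, 3)         static background magnetic field
def time_averaged_vertical_fluxes(p1, v, B1, B0):
    F_ac  = np.mean(p1[..., None]*v, axis=0)                     # <p1 v>
    F_mag = np.mean(np.cross(B1, np.cross(v, B0)), axis=0)/mu0   # <B1 x (v x B0)>/mu0
    return F_ac[..., 2], F_mag[..., 2]                           # vertical components
\\end{verbatim}
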
\\section{Energy fluxes}\n\\label{sect:flux}\n\n\nThe acoustic and magnetic energy fluxes were calculated using Equations \\ref{eq:Fac} and \\ref{eq:Fmag}, respectively. Figures \\ref{fig:fluxXZ} and \\ref{fig:fluxXY} show the time-averaged results, including all the time steps after the stationary regime is achieved. The former corresponds to an $xz$ cut at $y=6$ Mm. The magnetic flux is only plotted above the $c_S=v_A$ layer, where the distinction of the three different modes using the projections described in Equations \\ref{eq:elong}-\\ref{eq:etrans} is meaningful. The latter figure shows an $xy$ plot at $z=1.65$ Mm.\n\n\\begin{figure*}[!ht] \n \\centering\n \\includegraphics[width=18cm]{f5.eps}\n \\caption{Vertical component of the energy fluxes at $y=6$ Mm. Top panel: acoustic flux; middle panel: magnetic flux; bottom panel: magnetic flux due to Alfv\\'en waves. The units of the color coding are $10^6$ erg cm$^{-2}$s$^{-1}$. Positive fluxes mean energy propagating upward and negative fluxes downward. The horizontal solid line is the height where $c_S=v_A$; the horizontal dashed line is the fast mode reflection level. Magnetic field lines are shown as inclined black lines.}\n \\label{fig:fluxXZ}\n\\end{figure*}\n\n\nMost of the domain presents a positive acoustic flux, which corresponds to the upward propagating slow mode in the low-$\\beta$ region. The highest contribution of this mode is located at the right side of the sunspot, especially at moderate inclinations where the amplitude of the $v_{long}$ projections was higher in Figure \\ref{fig:velocities}. However, at the left part of the sunspot a negative flux is obtained. This downward propagating slow mode flux is particularly large near the axis of the sunspot and represents the slow waves reflected by the temperature gradient, as discussed in the previous section. The 5 mHz fast acoustic wave which propagates upward in the high-$\\beta$ region reaches the conversion layer $c_S=v_A$ before the high value of the cut-off frequency prevents its propagation. Note that near the center of the sunspot the cut-off frequency presents its highest value of $5.7$ mHz at the temperature minimum, which is located 375 km above the $c_S=v_A$ layer. Just above this layer, the recently converted slow acoustic mode becomes an evanescent wave. This situation differs from what happens at locations far from the axis of the sunspot, where the field inclination reduces the effective cut-off and allows the slow mode waves to escape to the upper atmosphere. At a height a few hundred kilometers above the temperature minimum, the cut-off frequency of the atmosphere is again below 5 mHz due to the chromospheric increase of the temperature, and the slow mode near the axis can propagate again, allowing the downward propagation of the waves reflected by the rise of the temperature.\n\n\\begin{figure*}[h]\n\\centering\n\\subfloat{\n\\includegraphics[width=0.33\\textwidth]{f6a.eps}}\n\\subfloat{\n\\includegraphics[width=0.33\\textwidth]{f6b.eps}}\n\\subfloat{\n\\includegraphics[width=0.33\\textwidth]{f6c.eps}}\n\\caption[width=\\textwidth]{Vertical component of the energy fluxes at $z=1.65$ Mm. Left panel: acoustic flux; middle panel: magnetic flux; right panel: magnetic flux due to Alfv\\'en waves. The units of the color coding are $10^6$ erg cm$^{-2}$s$^{-1}$. Positive fluxes mean energy propagating upward and negative fluxes downward.}\n\\label{fig:fluxXY}\n\\end{figure*}\n\n\nAbove the transformation layer the magnetic energy flux includes the fast and Alfv\\'en modes. The middle panel of Figure \\ref{fig:fluxXZ} shows that the magnetic flux is positive at most heights, while the middle panel of Figure \\ref{fig:fluxXY} shows how it increases in the $Y$ direction toward the periphery of the sunspot.
On the other hand, near the $X$ boundaries of the model a negative magnetic flux appears, larger in magnitude than the highest positive magnetic flux. This negative flux must be produced by the reflected fast wave.\n\nFor a more detailed analysis of the contribution of the different wave modes, the magnetic flux of the Alfv\\'en wave was separated from the total magnetic flux by recalculating the magnetic flux but using the projections of the velocity and magnetic field in the $\\hat{e}_{trans}$ direction in Equation \\ref{eq:Fmag}. The result is plotted in the bottom and right panels of Figures \\ref{fig:fluxXZ} and \\ref{fig:fluxXY}, respectively. The vertical cut reveals that at $y=6$ Mm the vertical Alfv\\'en flux is positive in the right part and negative in the left part, that is, at the right part of the domain the Alfv\\'en waves propagate upward and at the left part they propagate downward. This result agrees with the one obtained in the previous section from the phase shift between $v_{perp}$ and $B_{perp}$. The $xy$ cut shows that the Alfv\\'en energy flux increases at locations with higher inclinations, far from the axis of the sunspot, but only for those regions of the atmosphere whose azimuth $\\phi$ is different from $0^o$. Thus, the highest Alfv\\'en energy flux is obtained near the $Y$ boundaries. Note that it is comparable to the highest acoustic energy flux. The negative flux at the left side of the sunspot shown in the bottom panel of Figure \\ref{fig:fluxXZ} is hardly visible in the right panel of Figure \\ref{fig:fluxXY} because of the different scale used in the plots. \n\nA comparison between the middle and right panels of Figure \\ref{fig:fluxXY} reveals that the negative magnetic flux from the $X$ boundaries is missing in the Alfv\\'en energy flux, so it corresponds to the reflected fast mode, as previously stated. At the positions where the Alfv\\'en energy flux is larger the total magnetic flux shows a lower positive value, meaning that at these locations the fast wave must also contribute with a negative flux. \n\n\n\\section{Energy of the three wave modes}\n\\label{sect:energy}\n\n\nThe wave energy of the slow, fast, and Alfv\\'en modes was calculated from Equations \\ref{eq:Elong}-\\ref{eq:Etrans}, using the corresponding projected velocities. To be consistent with the work by \\citet{Khomenko+Cally2012} and provide a direct comparison with their 2.5D simulations, time-averaged energies were obtained at heights from 400 km above the $c_S=v_A$ layer up to the upper boundary of the simulation box in the stationary stage of the simulations. \n\n\nFigure \\ref{fig:energy_XY} illustrates the results for all the horizontal positions of the computational domain. The left panel shows that there is a prominent maximum of the slow wave energy at the right part of the sunspot, not far from the axis. The highest slow mode energy appears at around $x=10$ Mm and $y=0$ Mm. The main interest of this measurement is to quantify the amount of energy which is converted at the $c_S=v_A$ layer from the incident fast acoustic mode to the outgoing upward propagating slow acoustic mode. For this reason, in the left panel the regions with negative acoustic flux in Figure \\ref{fig:fluxXY} have been masked, since the slow wave energy at those locations does not come directly from the conversion layer but from the reflection due to the temperature gradient. A minimal sketch of this masking procedure is given below.
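As a purely illustrative sketch of this post-processing step (the actual energy expressions of Equations \\ref{eq:Elong}-\\ref{eq:Etrans} are not reproduced here; a kinetic energy density is used as a proxy, and all names and array shapes are assumptions), the masking could be implemented as follows.\n\n\\begin{verbatim}
import numpy as np

def masked_mean_energy(rho0, v_proj, F_ac_z):
    # rho0:    background density, shape (nz, ny, nx)
    # v_proj:  projected velocity, shape (nt, nz, ny, nx)
    # F_ac_z:  time-averaged vertical acoustic flux map, shape (ny, nx)
    energy = 0.5 * rho0 * v_proj**2          # kinetic energy density (proxy)
    e_map = energy.mean(axis=(0, 1))         # average over time and height
    return np.where(F_ac_z >= 0.0, e_map, np.nan)   # mask downward-flux regions
\\end{verbatim}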
\n\nThe wave energies have been plotted as a function of inclination and azimuth of the sunspot magnetic field lines at the corresponding horizontal locations in Figure \\ref{fig:energy_angulo}. This format allows a direct comparison with \\citet{Cally+Goossens2008} Figure 2, \\citet{Khomenko+Cally2011} Figure 4, and \\citet{Khomenko+Cally2012} Figure 5. In the left panel the angles corresponding to the positions with negative acoustic flux have also been masked. It shows that the maximum slow wave energy is obtained when the direction of the wave incidence forms an angle $\\phi=0^o$ with the magnetic field, for an inclination around $\\theta=25^o$. This result is consistent with the 2D analysis from \\citet{Schunker+Cally2006}, where it was shown that around this field inclination the attack angle, \\rmit{i.e.}, the angle between the wavevector and the magnetic field at the conversion layer, is small and produces an enhancement of the conversion from the fast acoustic to the slow acoustic wave. The region where the conversion to the slow mode is efficient extends to higher azimuths, although it decreases significantly for $\\phi$ higher than $60^o$. The distribution of the slow mode energy as a function of $\\theta$ and $\\phi$ shows excellent agreement with Figure 2 from \\citet{Cally+Goossens2008}. The only difference appears at higher azimuths, which correspond to the left part of the sunspot, where the reflection of the slow mode produced by the temperature increase complicates the evaluation of the upward propagating slow mode energy in this simulation of conversion in a realistic sunspot model. \n\n\nThe Alfv\\'en wave energy increases toward the periphery of the sunspot, except along $y=0$ Mm (right panel of Figure \\ref{fig:energy_XY}). Its maximum value is obtained very close to the boundary of the computational domain around $x=0$ Mm. Its representation as a function of inclination and azimuth (right panel of Figure \\ref{fig:energy_angulo}) reveals that the conversion to the Alfv\\'en mode is efficient at inclinations above $40^o$ and azimuths between $50^o$ and $120^o$. At $\\phi\\approx50^o$, which corresponds to the right side of the sunspot, the Alfv\\'en energy extends to higher inclinations than in the left part of the sunspot. This is also visible in Figure \\ref{fig:energy_XY}. The distribution of the Alfv\\'en energy with $\\theta$ and $\\phi$ is remarkably similar to the one found previously from the 3D analysis by \\citet{Cally+Goossens2008} in homogeneous fields. However, a few differences arise. In \\citet{Cally+Goossens2008} the highest conversion to Alfv\\'en waves appears at $\\theta\\approx40^o$, while in our simulations the maximum energy is shifted toward higher inclinations. A similar shift in the position of the maximum was obtained from the 2.5D simulations of \\citet{Khomenko+Cally2011} in homogeneous magnetic fields. Since the Alfv\\'enic energy peaks so close to the boundary of our simulations, it is hard to say whether the maximum conversion occurs at $\\theta\\approx47^o$, as shown by the plot, or at even higher inclinations. Moreover, the maximum energy appears in the left part of the sunspot, corresponding to $\\phi\\approx100^o$. This differs from the analysis of homogeneous magnetic fields, where the maximum is located at $\\phi=60^o$ \\citep{Cally+Goossens2008,Khomenko+Cally2011} or at $\\phi=80^o-90^o$ \\citep{Cally+Hansen2011}.
In the 2.5D simulations in a sunspot model by \\citet{Khomenko+Cally2012} the energy of the Alfv\\'en wave peaks at azimuths between $70^o$ and $100^o$. \n\n\nThe middle panel of Figure \\ref{fig:energy_angulo} shows some fast wave energy at high inclinations.\n\n\n\\begin{figure*}[h]\n\\centering\n\\subfloat{\n\\includegraphics[width=0.33\\textwidth]{f7a.eps}}\n\\subfloat{\n\\includegraphics[width=0.33\\textwidth]{f7b.eps}}\n\\subfloat{\n\\includegraphics[width=0.33\\textwidth]{f7c.eps}}\n\\caption[width=\\textwidth]{Wave energies at the top of the atmosphere (averages at heights from 0.4 Mm above the $c_S = v_A$ layer up to z = 1.9 Mm) for the three projected velocity components as a function of the horizontal locations. Left panel: slow mode energy; middle panel: fast mode energy; right panel: Alfv\\'en mode energy. The units of the color coding are $10^6$ erg cm$^{-2}$s$^{-1}$. In the left panel the regions with negative acoustic vertical flux have been masked.}\n\\label{fig:energy_XY}\n\\end{figure*}\n\n\n\\begin{figure*}[h]\n\\centering\n\\subfloat{\n\\includegraphics[width=0.33\\textwidth]{f8a.eps}}\n\\subfloat{\n\\includegraphics[width=0.33\\textwidth]{f8b.eps}}\n\\subfloat{\n\\includegraphics[width=0.33\\textwidth]{f8c.eps}}\n\\caption[width=\\textwidth]{Wave energies at the top of the atmosphere (averages at heights from 0.4 Mm above the $c_S = v_A$ layer up to z = 1.9 Mm) for the three projected velocity components as a function of inclination and azimuth of the sunspot magnetic field lines at the corresponding horizontal locations. Left panel: slow mode energy; middle panel: fast mode energy; right panel: Alfv\\'en mode energy. The units of the color coding are $10^6$ erg cm$^{-2}$s$^{-1}$. In the left panel the regions with negative acoustic vertical flux have been masked.}\n\\label{fig:energy_angulo}\n\\end{figure*}\n\n\n\\section{Discussion and conclusions}\n\\label{sect:scattering}\n \nIn this paper we have studied the conversion from fast to Alfv\\'en modes in a realistic sunspot-like atmosphere by means of 3D numerical simulations. This study provides a direct comparison with analytical works \\citep{Cally+Goossens2008, Cally+Hansen2011} as well as numerical simulations \\citep{Khomenko+Cally2011,Khomenko+Cally2012}.\n\n\nWith regard to the fast-to-slow conversion, the efficiency of the transformation peaks at inclinations around 25$^o$ and low azimuths, and it decreases at higher azimuths. This result shows good agreement with previous works \\citep{Schunker+Cally2006, Cally+Goossens2008} in homogeneous magnetic field configurations. However, the sunspot-like structure of this simulation produces some differences in the wave modes of the upper atmosphere. In the central and left part of the sunspot the simulation shows downward propagating slow waves, which are reflected due to the chromospheric increase of the temperature. A similar result was found in the 2.5D simulations from \\citet{Khomenko+Cally2012}, although in our case the downward flux near the axis is enhanced because of the higher increase of the temperature at the center of the sunspot.
In \\citet{Khomenko+Cally2012} simulations the closest plane to the axis of the sunspot is located at a distance of 7.5 Mm, where the increase of the temperature in the transition region is lower (Figure \\ref{fig:temperature}).\n\nThe simulation presented in this paper supposes a step forward in the study of the conversion to the Alfv\\'en mode, since it accounts for full 3D. The work by \\citet{Khomenko+Cally2012} was limited to 2.5D, and only a few vertical slices across the sunspot model at several distances from the axis were analyzed. Since the fast-to-Alfv\\'en conversion depends strongly on the angle between the direction of wave propagation and the magnetic field, a full 3D simulation is required to retrieve the complete description. The simulation shows that the highest Alfv\\'enic wave energy is obtained at high inclinations and for azimuths between $50^o$ and $120^o$. The efficiency of the conversion obtained for the sunspot model shows a similar pattern to the homogeneous magnetic fields from \\citet{Cally+Goossens2008}, although the maximum of the magnetic energy of Alfv\\'en waves is shifted toward more inclined fields.\n\nA comparison of the total energy in the upper atmosphere of the upward propagating slow acoustic energy and the corresponding to the Alfv\\'en mode reveals that the former one is around two times higher. This result remarks the important role of the Alfv\\'en energy at the chromosphere. At $\\theta\\approx 25^o$ the acoustic energy is much higher than the Alfv\\'en energy, especially at low azimuths where its conversion is more efficient. However, at higher inclinations the Alfv\\'en energy becomes dominant and at $\\theta=47^o$ and $\\phi$ around $90^o$ it is more than 7 times higher than the acoustic energy. The horizontal size of the computational domain prevent us to measure the energy at higher inclinations for these azimuths, but from the tendency shown in the computed region one would expect a strongest dominance of the Alfv\\'en energy at higher inclinations. Moreover, the vertical limitation of the domain also reduces the energy in the Alfv\\'en mode. At the 5 mHz frequency of this simulation the fast-to-Alfv\\'en conversion region spans over 20 scale heights \\citep{Cally+Hansen2011}. Since our computational box has a much lower size, only a small fraction of the conversion is completed in our domain. The same limitation affects the simulations from \\citet{Khomenko+Cally2012}. However, in our case we have found a higher energy in the Alfv\\'en wave due to the higher inclinations covered by our computational domain. \n\nDespite of the dependence of the conversion with the azimuth, due to the stochastic direction of the wave propagation in the Sun, one would expect to find an efficient conversion to the Alfv\\'en wave at all the locations surrounding a sunspot with a certain inclination. Near the umbra-penumbra boundary, the waves whose attack angle is below $90^o$ will generate upward propagating Alfv\\'en waves, while the incident waves which propagate in the opposite direction will produce downward Alfv\\'en waves. In this way, in these regions upgoing and downgoing waves will coexist. On the other hand, at the higher inclinations of the penumbra, the energy flux of the Alfv\\'en mode is only composed by upward propagating waves for all the directions of incidence of the fast acoustic wave where the conversion to the Alfv\\'en wave is efficient, as shown by the right panel from Figure \\ref{fig:fluxXY} at $\\theta\\approx 45^o$. 
This differs from the situation at $\\theta\\approx 30^o$, where in some regions of the sunspot there is a negative flux. In the case of solar observations, where the incident waves propagate in all directions, the Alfv\\'en waves in an annular region surrounding the sunspot with higher inclinations (around $\\theta\\approx 45^o$) will consist only of upward propagating waves.\n\nThe small time step imposed by the high chromospheric Alfv\\'en speed has been an important concern in the development of the numerical simulations presented in this paper. Some considerations have been taken into account in the numerical configuration regarding this issue. Firstly, a magnetic field strength of 900 G was adopted at the axis of the sunspot at the photosphere. This value is well below what might be expected in a mature sunspot. A more realistic field strength would lower the height of the $c_S=v_A$ and the fast mode reflection layers. This would allow the fast-to-Alfv\\'en conversion to take place in a larger region of the computational domain, and might generate an even higher Alfv\\'en energy flux at the top of the computational domain. Secondly, this top boundary has been set at the chromosphere. The recent work by \\citet{Hansen+Cally2012} has shown the interesting effects of the chromosphere-corona transition region on fast-to-Alfv\\'en conversion. They found that the reflection of the Alfv\\'en waves at the transition region \\citep{Uchida+Sakurai1975} is sensitive to the distance between the fast reflection point and the transition region. The Alfv\\'en flux that can reach the corona is increased when this distance is small. In the present work the computational box does not include the transition region, since it would produce an even smaller time step, compromising the realization of this simulation within a reasonable amount of computational time. It would be interesting to extend the simulations to the corona and adopt a realistic magnetic field strength, but this is beyond the scope of this paper and will be addressed in forthcoming works.\n\n\n\\acknowledgements I would like to thank Elena Khomenko, Paul Cally, Manuel Collados, and Ashley Crouch for their examination of and suggestions on an early version of this paper. This work used the LaPalma supercomputer at Centro de Astrof\\'{\\i}sica de La Palma and the MareNostrum supercomputer at the Barcelona Supercomputing Center (nodes of the Spanish National Supercomputing Center). This work is supported by NASA contract NNH09CE43C. \n\n\\bibliographystyle{apj}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} {"text":"\\section{Introduction}\n\n\nQuantum annealing (QA) is a technique for solving combinatorial optimization problems \\cite{kadowaki1998quantum, farhi2000quantum, farhi2001quantum}.\nThe solution of the problem is mapped to the ground state of the Ising Hamiltonian, and this state is searched for in a large Hilbert space during QA.
The adiabatic theorem guarantees that the solution can be found as long as the dynamics is adiabatic.\nMany other applications have been proposed for finding the ground state of the Ising Hamiltonian using QA.\nOne such application is prime factorization \\cite{jiang2018quantum, dridi2017prime}.\nThe shortest-vector problem is a promising candidate for post-quantum cryptography, and an approach for solving this problem using QA has been proposed \\cite{joseph2020not, joseph2021two}.\nAnother application of QA is machine learning \\cite{adachi2015application, wilson2021quantum, li2019improved, sasdelli2021quantum, date2020adiabatic, neven2008training, neven2012qboost, willsch2020support, winci2020path}.\nIn addition, an application to clustering has been reported \\cite{kumar2018quantum, kurihara2014quantum}.\nFurther, a method of using QA to perform calculations for topological data analysis (TDA) has been investigated \\cite{berwald2018computing}.\nMoreover, a method for mapping the graph coloring problem to an Ising model suitable for QA has also been proposed \\cite{kudo2018constrained, kudo2020localization}.\n\n\nSeveral applications of QA to the simulation of condensed matter have also been reported.\nFor example, simulation of the $\\mathbb{Z}_{2}$ spin liquid has been investigated using QA \\cite{zhou2021experimental}.\nFurther, the Kosterlitz--Thouless (KT) phase transition has been experimentally observed \\cite{king2018observation}.\nIn addition, the Shastry--Sutherland Ising model has been simulated \\cite{kairys2020simulating}.\nMoreover, Harris et al. investigated the spin-glass phase transition using QA \\cite{harris2018phase}.\n\nHowever, it is difficult to prepare the exact ground state using QA with the existing technology.\nThe main causes of deterioration are non-adiabatic transitions and decoherence.\nIn the case of QA for Ising models to solve combinatorial optimization problems, the probability of successfully obtaining the ground state increases with the number of trials.\nFor a given density matrix $\\rho =\\sum _n p_n |E_n\\rangle \\langle E_n|$, where $|E_n\\rangle $ denotes an eigenvector of the problem Hamiltonian and $p_n$ denotes the population, single-shot measurements in the computational basis randomly select an eigenstate $|E_n\\rangle$ and reveal the corresponding energy $E_n$. Hence, as long as the ground state has a finite population, we can obtain the ground state energy if we perform a large number of measurements.\nMeanwhile, if we consider a Hamiltonian having non-diagonal matrix elements in the computational basis, measurements in the computational basis do not directly provide the energy of the Hamiltonian. Instead, we need to measure an observable of $H$ with Pauli measurements. In this case, we obtain $\\langle H \\rangle = \\sum _n p_n E_n$ for a given density matrix $\\rho =\\sum _n p_n |E_n\\rangle \\langle E_n|$ in the limit of a large number of measurements, and this is certainly different from the true ground-state energy if the density matrix has a finite population of any excited state.\nTherefore, the imperfection of the ground-state preparation induces a crucial error in the estimation of the ground-state energy of a Hamiltonian having non-diagonal elements, as illustrated by the simple numerical sketch below.
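As a minimal illustration of this point (not part of the original analysis), the following Python sketch compares the measured energy $\\langle H \\rangle$ of an imperfectly prepared mixed state with the true ground-state energy for a small random Hamiltonian with non-diagonal elements; all numbers are chosen only for illustration.\n\n\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# small random Hermitian "problem" Hamiltonian with non-diagonal elements
A = rng.normal(size=(8, 8))
H = (A + A.T) / 2

E, V = np.linalg.eigh(H)          # eigenvalues E[0] <= E[1] <= ...

# populations p_n of an imperfectly prepared state (90% ground state)
p = np.array([0.9, 0.06, 0.04] + [0.0] * 5)

E_measured = np.sum(p * E)        # <H> = sum_n p_n E_n
print("true ground-state energy:", E[0])
print("estimated energy <H>    :", E_measured)  # biased by excited states
\\end{verbatim}\nEven a modest excited-state population shifts $\\langle H \\rangle$ away from the ground-state energy, which is exactly the error discussed above.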
The objective of this study is to address this problem.\nFurthermore, there have been reported cases where quantum annealing fails due to the existence of symmetry \\cite{francis2022determining, imoto2021preparing}.\n\n\nIn this paper, we propose the use of a drive Hamiltonian that preserves the symmetry of the problem Hamiltonian for more efficient QA, which can potentially suppress non-adiabatic transitions.\nThe relevant Hamiltonians in condensed matter physics occasionally commute with the total magnetization $S_z=\\sum_{j}\\sigma_{j}^{(z)}$, and we aim to obtain the ground state of such a problem Hamiltonian. We employ the XY model (flip-flop interaction) as the drive Hamiltonian. As both the problem and the drive Hamiltonian commute with the total magnetization, the unitary evolution is restricted to the symmetric subspace with a specific magnetization.\nAs the total Hamiltonian commutes with the total magnetization, the non-adiabatic transitions are restricted to the same sector with respect to the total magnetization. Thus, the non-adiabatic transitions can potentially be suppressed.\nThrough numerical simulations, we demonstrate that our method can suppress non-adiabatic transitions for some problem Hamiltonians.\n\n\nThe remainder of this paper is organized as follows. Section \\ref{sec:QA} reviews QA.\nSection \\ref{sec:method} describes our method of using the flip-flop interaction for the drive Hamiltonian. Section \\ref{sec:numerical_result} describes numerical simulations conducted to evaluate the performance of our scheme, where the deformed spin star and a random XXZ spin chain are selected as the problem Hamiltonians. Finally, Section \\ref{sec:conclusion} concludes the paper.\n\n\n\\section{Quantum annealing}\\label{sec:QA}\n\nIn this section, we review QA \\cite{kadowaki1998quantum,farhi2000quantum,farhi2001quantum}.\nQA was proposed by Kadowaki and Nishimori as a method for solving combinatorial optimization problems \\cite{kadowaki1998quantum}.\nWe choose the magnetic transverse field\n\\begin{align}\n H_{D}=-B\\sum_{j=1}^{L}\\sigma_{j}^{(x)}\\label{eq:transverse_field}\n\\end{align}\nas the drive Hamiltonian, where $B$ denotes the amplitude of the magnetic field.\nIn the original idea of QA \\cite{kadowaki1998quantum}, the problem Hamiltonian was chosen as an Ising model mapped from a combinatorial optimization problem to be solved.\nThe ground state of the drive Hamiltonian is\n\\begin{align}\n \\ket{\\Phi(0)}=\\ket{+\\cdots+},\n\\end{align}\nwhere the state $\\ket{+}$ denotes the eigenstate of $\\sigma^{(x)}$ with an eigenvalue of $+1$.\nThe total Hamiltonian for QA is written as follows:\n\\begin{align}\n H=\\frac{t}{T}H_{P}+\\biggl(1-\\frac{t}{T}\\biggr)H_{D}\\label{conventionalqa},\n\\end{align}\nwhere $T$ is the annealing time, $H_{D}$ is the drive Hamiltonian, and $H_{P}$ is the problem Hamiltonian.\nFirst, we prepare the ground state of the drive Hamiltonian.\nSecond, the drive Hamiltonian is adiabatically changed into the problem Hamiltonian according to Eq.
(\\ref{conventionalqa}).\nAs long as this dynamics is adiabatic, we can obtain the ground state of the problem Hamiltonian, which is guaranteed by the adiabatic theorem \\cite{morita2008mathematical}.\n\n\n\n\nNon-adiabatic transitions and decoherence are the main causes of performance degradation of QA.\nConsiderable effort has been devoted toward suppressing such non-adiabatic transitions and decoherence.\nIt is\nreported that a non-stoquastic Hamiltonian with off-diagonal matrix elements improves the performance of QA for some problem Hamiltonians \\cite{seki2012quantum, seki2015quantum,susa2022nonstoquastic}.\nSusa et al. suggested that the accuracy of QA can be improved using an inhomogeneous drive Hamiltonian for specific cases \\cite{susa2018exponential, susa2018quantum}.\nFurther, it has been shown that Ramsey interference can be used to suppress non-adiabatic transitions \\cite{matsuzaki2021direct}.\nOther methods have been reported for suppressing non-adiabatic transitions and decoherence using variational techniques \\cite{susa2021variational, matsuura2021variationally,imoto2021quantum}.\nError correction of QA has been investigated for suppressing decoherence \\cite{pudenz2014error}.\nIn addition, the use of decoherence-free subspaces for QA has been proposed \\cite{albash2015decoherence, suzuki2020proposal}.\nMoreover, spin-lock techniques have been theoretically proposed to use long-lived qubits for QA \\cite{chen2011experimental,nakahara2013lectures, matsuzaki2020quantum}.\n\n\n\n\nExperimental demonstrations of QA have also been reported. D-Wave Systems Inc. realized QA machines composed of thousands of qubits by using superconducting flux qubits \\cite{johnson2011quantum}.\nMany experimental demonstrations of QA using such devices have been reported, such as machine learning, graph coloring problems, and magnetic material simulation \\cite{kudo2018constrained, adachi2015application, hu2019quantum, kudo2020localization, harris2018phase}.\n\n\n\n\n\\section{Method}\\label{sec:method}\n\nIn this section, we describe our proposal of QA with symmetric subspaces.\nIn general, when a Hamiltonian has symmetry, it commutes with an observable that is a conserved quantity.\nIn this case, the Hamiltonian is block-diagonalized, and the eigenstates of the Hamiltonian are divided into subspaces characterized by the conserved quantity.\nWhen the drive Hamiltonian and problem Hamiltonian\nhave the same conserved quantities,\nany non-adiabatic transitions during QA\nwill occur inside the same subspace, which is in stark contrast to the conventional scheme where non-adiabatic transitions can occur in any subspace.\nOwing to such a restriction on non-adiabatic transitions, our scheme can potentially suppress non-adiabatic transitions.\n\n\n\nWe employ the one-dimensional XY model as the drive Hamiltonian.\nThe one-dimensional XY model is defined by\n\\begin{align}\n H=g\\sum_{j=1}^{L}\\bigl(\\sigma_{j}^{(+)}\\sigma_{j+1}^{(-)}+\\sigma_{j}^{(-)}\\sigma_{j+1}^{(+)}\\bigr),\\ \\ \\sigma_{L+1}^{(k)}=\\sigma_{1}^{(k)}\\label{eq:xy_ham},\n\\end{align}\nwhere $g$ denotes the coupling strength, $L$ is the number of sites, $\\sigma_{j}^{(\\alpha)}(\\alpha=x,y,z)$ are the Pauli matrices defined on the $j$-th site, and $\\sigma_{j}^{(\\pm)}=\\sigma_{j}^{(x)}\\pm i\\sigma_{j}^{(y)}$.\nFurther, $S_z$ is a conserved quantity of the XY model.\nThis model can be mapped to the free fermion system by the Jordan--Wigner transformation; hence, the ground-state energy of this Hamiltonian can be easily calculated 
using a classical computer. Therefore, when we try to prepare the ground state of this Hamiltonian (by using thermalization, for example), we can experimentally verify how close the state is to the true ground state by measuring the expectation value and variance of the energy \\cite{imoto2021improving}.\nIt is worth mentioning that, although the XY model was employed as the drive Hamiltonian in previous studies \\cite{kudo2018constrained, kudo2020localization}, the purpose was to add a proper constraint to solve graph coloring problems. To the best of our knowledge, our study is the first to use the XY model as the drive Hamiltonian for the suppression of non-adiabatic transitions.\n\n\nWe choose a problem Hamiltonian that commutes with $S_z$.\nThis guarantees that the non-adiabatic transitions can occur only inside the symmetric subspace. \nSome Hamiltonians used in condensed matter physics also commute with $S_z$.\n\n\nFrom the Lieb--Schultz--Mattis theorem \\cite{lieb1961two}, it is known that the energy gap between the ground state and the first excited state of the XY model is $O(1\/L)$ if we fix the value of $g$.\nThus, as the number of qubits increases, this energy gap becomes smaller, which can make it difficult to realize adiabatic dynamics. However, as we are going to optimize the amplitude of the drive Hamiltonian (as explained below), the problem of the small energy gap can be solved by setting an appropriate value of $g$. \n\n\n\\section{Numerical results}\\label{sec:numerical_result}\n\nThis section describes numerical simulations performed to compare our scheme with the conventional scheme.\nWe employ the Gorini--Kossakowski--Sudarshan--Lindblad (GKSL) master equation for the numerical simulations to account for the decoherence.\nFor the drive Hamiltonian, the transverse magnetic field Hamiltonian described in Eq. (\\ref{eq:transverse_field}) is used in the conventional scheme, while we employ the one-dimensional XY model in Eq.
(\\ref{eq:xy_ham}).\nMoreover, we choose a deformed spin star model and a random XXZ spin chain as the problem Hamiltonians.\nWe define the estimation error as $|E^{(\\rm{true})}_g-E^{(\\rm{QA})}_g|$, where $E^{(\\rm{true})}_g$ denotes the true ground state energy and $E^{(\\rm{QA})}_g$ denotes the energy obtained by QA.\nFor a fair comparison, we optimize the values of $B$ and $g$ to minimize the expectation value of the problem Hamiltonian after QA (or equivalently, to minimize the estimation error).\n\n\nWe introduce the GKSL master equation to take the decoherence into account \\cite{gorini1976completely, lindblad1976generators}.\nThe GKSL master equation is given by\n\\begin{align}\n \\frac{d\\rho(t)}{dt}=-i[H(t), \\rho(t)]+\\sum_{n}\\gamma[\\sigma^{(k)}_{n}\\rho(t)\\sigma^{(k)}_{n}-\\rho(t)],\n\\end{align}\nwhere $\\sigma^{(k)}_{j}$ $(k=x,y,z)$ denotes the Lindblad operator acting on site $j$, $\\gamma$ denotes the decoherence rate, and $\\rho(t)$ is the density matrix of the quantum state at time $t$.\nWe solve the GKSL master equation using QuTiP \\cite{johansson184nation,johansson2012qutip}.\nThroughout this paper, we choose $\\sigma^{(z)}$ as the Lindblad operator. A minimal sketch of such a simulation is shown below.
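The following Python sketch illustrates this type of dissipative annealing simulation with QuTiP (using the QuTiP~4 format for time-dependent Hamiltonians). It is only a schematic illustration: the problem Hamiltonian, the coupling values, and the choice of the initial state are placeholders, the optimization of the drive amplitude described below is not included, and QuTiP's $\\sigma^{\\pm}$ operators differ from the convention used in this paper by a constant factor.\n\n\\begin{verbatim}
import numpy as np
from qutip import tensor, qeye, sigmap, sigmam, sigmaz, mesolve

L, g, gamma, T = 4, 1.0, 1e-4, 100.0  # sites, XY amplitude, decay rate, annealing time

def op(single, j):
    ops = [qeye(2)] * L                # embed a single-site operator at site j
    ops[j] = single
    return tensor(ops)

# drive Hamiltonian: one-dimensional XY (flip-flop) model, periodic boundaries
H_D = g * sum(op(sigmap(), j) * op(sigmam(), (j + 1) % L)
              + op(sigmam(), j) * op(sigmap(), (j + 1) % L) for j in range(L))

# problem Hamiltonian: an S_z-conserving XXZ-type chain with illustrative couplings
Delta, J = 0.7, [0.84, 0.48, 0.07]
H_P = sum(Jj * (2 * (op(sigmap(), j) * op(sigmam(), j + 1)
                     + op(sigmam(), j) * op(sigmap(), j + 1))
                + Delta * op(sigmaz(), j) * op(sigmaz(), j + 1))
          for j, Jj in enumerate(J))

H = [[H_D, lambda t, args: 1 - t / T], [H_P, lambda t, args: t / T]]
c_ops = [np.sqrt(gamma) * op(sigmaz(), j) for j in range(L)]

# start from the ground state of the drive; it should lie in the same S_z
# sector as the targeted ground state of the problem Hamiltonian
psi0 = H_D.groundstate()[1]
result = mesolve(H, psi0, np.linspace(0.0, T, 201), c_ops=c_ops, e_ops=[H_P])

print("estimation error:", abs(H_P.groundstate()[0] - result.expect[0][-1]))
\\end{verbatim}\nIn the actual calculations reported below, the amplitude $g$ (or $B$) is additionally optimized for each annealing time.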
\\subsection{Deformed spin star model}\n\\begin{figure}[htbp]\n\\begin{center}\n\\begin{minipage}{0.48\\textwidth}\n\\begin{center}\n\\begin{minipage}{0.90\\textwidth}\n\\begin{center}\n\\includegraphics[width=0.95\\textwidth]{Fig1_a.pdf}\\\\\n(a) Energy spectrum of QA for the deformed spin star model with the drive Hamiltonian of the transverse field.\n\\end{center}\n\\end{minipage}\\\\\n\\begin{minipage}{0.90\\textwidth}\n\\begin{center}\n\\includegraphics[width=0.95\\textwidth]{Fig1_b.pdf}\\\\\n\n(b) Energy spectrum of QA for the deformed spin star model with the drive Hamiltonian of the XY model.\n\\end{center}\n\\end{minipage}\n\\end{center}\n\\end{minipage}\n\\caption{Energy spectrum during QA plotted against time $t$ to prepare the ground state of the deformed spin star model.\n(a) The transverse field is chosen as the drive Hamiltonian.\n(b) The XY model is chosen as the drive Hamiltonian.\n}\n\\label{fig:hydro_energy_spec}\n\\end{center}\n\\end{figure}\n\n\nIn this subsection, we consider the deformed spin star model as the problem Hamiltonian.\nThe Hamiltonian of the deformed spin star model is given by\n\\begin{align}\n H&= \\omega\\hat{\\sigma}_{0}^{z}+\\omega_{1}\\hat{J}^{z}+J(\\hat{\\sigma}_{0}^{+}\\hat{J}^{-}+\\hat{\\sigma}_{0}^{-}\\hat{J}^{+})\\label{eq:deformed_spin_star},\n\\end{align}\nwhere $\\hat{J}^{+}\\equiv\\sum_{j=1}^{L}e^{2\\pi i\\frac{j}{L}}\\sigma_{j}^{+}$ and $\\hat{J}^{-}\\equiv\\sum_{j=1}^{L}e^{-2\\pi i\\frac{j}{L}}\\sigma_{j}^{-}$.\nThis kind of model has been investigated because it describes the dynamics of a hybrid system composed of a superconducting flux qubit and nitrogen-vacancy centers in diamond \\cite{marcos2010coupling,twamley2010superconducting,zhu2011coherent,zhu2014observation,matsuzaki2015improving,cai2015analysis}.\n\n\nThe energy spectra of the annealing Hamiltonians for the conventional scheme and our scheme are plotted in Fig. \\ref{fig:hydro_energy_spec}.\nIt is worth mentioning that the performance of QA depends on the amplitude of the drive Hamiltonian, i.e., $B$ and $g$. Hence, we sweep these parameters and compare the performance of our scheme with that of the conventional one when we optimize the amplitude of the drive Hamiltonian.\nWe performed the numerical simulations with a decoherence rate of $\\gamma=2.5\\times10^{-5}$ GHz.\nIn this case, we choose the optimal amplitude of the drive Hamiltonian at each annealing time.\nWe can see that the estimation error of our scheme is around three times smaller than that of the conventional scheme, as shown in Fig. \\ref{fig:hydro_const_log}.\nWe find that the optimal annealing time of our scheme, which provides the smallest estimation error, is much shorter than that of the conventional scheme.\nThis implies that, as the effect of the non-adiabatic transitions is significantly reduced, we can shorten the annealing time and thus suppress the effect of the decoherence.\n\n\n\\begin{figure}[ht]\n\n\\includegraphics[width=80mm]{Fig2.pdf}\n\n\\caption{\nEstimation error $|E^{(\\rm{true})}_g-E^{(\\rm{QA})}_g|$ for the deformed spin star model plotted against the annealing time, where $E^{(\\rm{true})}_g$ denotes the true ground state energy and $E^{(\\rm{QA})}_g$ denotes the energy obtained by QA. \nThe blue (orange) line denotes the case when we choose the XY model (transverse field) with an amplitude of $g$ ($B$) as the drive Hamiltonian. The amplitude is optimized for each annealing time, and the decoherence rate is $\\gamma=2.5\\times10^{-5}$ GHz.\nIn addition, we choose the model parameters $\\omega=\\omega_{1}=0.5$ GHz, $J=5$ GHz, and $L=3$ (i.e., the total spin number is $4$).\n}\n\\label{fig:hydro_const_log}\n\n\\end{figure}\n\n\n\\subsection{Random XXZ spin chain}\n\n\nIn this subsection, we consider a random XXZ spin chain as the problem Hamiltonian.\nThe Hamiltonian of the random XXZ spin chain is given by\n\\begin{align}\n H=\\sum_{j=1}^{L-1}J_{j}\\biggl(\\sigma^{x}_{j}\\sigma^{x}_{j+1}+\\sigma^{y}_{j}\\sigma^{y}_{j+1}+\\Delta\\sigma^{z}_{j}\\sigma^{z}_{j+1}\\biggr)\\label{eq:xxz_spin_chain},\n\\end{align}\nwhere $\\Delta$ is an anisotropic parameter and $\\{J_{j}\\}_{j=1}^{L-1}$ is a random interaction.\nWe employ an open boundary condition, and we assume that $\\{J_{j}\\}_{j=1}^{L-1}$ is chosen from independent uniform distributions on the interval $[0,2]$ (i.e., $\\{J_{j}\\}_{j=1}^{L-1}\\overset{\\text\\small\\textrm{iid}}{\\sim}U(0,2)$).\nIt is worth mentioning that this Hamiltonian commutes with the total magnetization $S_{z}$; hence, we can use our scheme to obtain the ground state of this Hamiltonian.\n\n\n\\begin{table}[htb]\n\\centering\n \\begin{tabular}{|c||c|}\n \\hline\n $J_{1}$ & $0.8441683664299817$ \\\\\n $J_{2}$ & $0.47574391516586223$ \\\\\n $J_{3}$ & $0.06980280523824778$ \\\\\n $J_{4}$ & $0.6197240483819366$ \\\\ \\hline\n \\end{tabular}\n \\caption{Coupling constants of the random XXZ spin chain $\\{J_{j}\\}_{j=1}^{4}$ used in Fig. \\ref{fig:random_xxz_energy_spec}.
The unit of these values is GHz, as described in the main text.}\n \\label{tb:rand_XXZ_coefficient_jw}\n\\end{table}\n\n\n\\begin{figure}[htbp]\n\\begin{center}\n\\begin{minipage}{0.48\\textwidth}\n\\begin{center}\n\\begin{minipage}{0.90\\textwidth}\n\\begin{center}\n\\includegraphics[width=0.95\\textwidth]{Fig3_a.pdf}\\\\\n\n(a) Energy spectrum of QA for the random XXZ spin chain with the drive Hamiltonian of the random transverse field.\n\n\\end{center}\n\\end{minipage}\\\\\n\\begin{minipage}{0.90\\textwidth}\n\\begin{center}\n\\includegraphics[width=0.95\\textwidth]{Fig3_b.pdf}\\\\\n\n(b) Energy spectrum of QA for the random XXZ spin chain with the drive Hamiltonian of the random XY model.\n\\end{center}\n\\end{minipage}\n\\end{center}\n\\end{minipage}\n\n\\caption{Energy spectrum during QA plotted against time to prepare the ground state of the random XXZ spin chain. The anisotropic parameter is $\\Delta=0.7$ GHz and the number of qubits is $L=4$.\n The coupling strengths of the XXZ spin chain are listed in Table \\ref{tb:rand_XXZ_coefficient_jw}.\n(a) The uniform transverse field is chosen as the drive Hamiltonian with an amplitude of $B=1.0$ GHz.\n(b) The XY model is chosen as the drive Hamiltonian with an amplitude of $g=1.0$ GHz.\n}\n\\label{fig:random_xxz_energy_spec}\n\\end{center}\n\\end{figure}\n\n\n\\begin{figure}[ht]\n\n\\includegraphics[width=80mm]{Fig4.pdf}\n\n\\caption{Estimation error plotted against the annealing time. We choose the drive Hamiltonian as the uniform transverse field (blue) and the XY model (orange) with a decoherence rate of $\\gamma=10^{-4}$ GHz. We use the same parameters as in Fig. \\ref{fig:random_xxz_energy_spec}.\n}\n\\label{fig:random_xxz_random_tf}\n\n\\end{figure}\n\n\nThe energy spectrum of the annealing Hamiltonian for each scheme is plotted in Fig. \\ref{fig:random_xxz_energy_spec}.\nFurthermore, the estimation error is plotted against the annealing time in Fig. \\ref{fig:random_xxz_random_tf}.\nWe performed numerical simulations with a decoherence rate of $\\gamma=10^{-4}$ GHz.\nIn this case, we choose the optimal amplitude of the drive Hamiltonian at each annealing time.\nThe estimation error of our scheme is around 10 times smaller than that of the conventional scheme.\nSimilar to the case of the deformed spin star model, we find that the optimal annealing time of our scheme becomes shorter than that of the conventional scheme, which confirms the suppression of the non-adiabatic transitions in our scheme.\n\n\n\\section{Conclusion}\\label{sec:conclusion}\n\n\nWe proposed the use of a drive Hamiltonian that preserves the symmetry of the problem Hamiltonian. Owing to the symmetry, we can search for the solution in an appropriate symmetric subspace during QA. As the non-adiabatic transitions occur only inside this specific subspace, our approach can potentially suppress unwanted non-adiabatic transitions. To evaluate the performance of our scheme, we employed the XY model as the drive Hamiltonian in order to find the ground state of problem Hamiltonians that commute with the total magnetization along the z-axis. We found that our scheme outperforms the conventional scheme in terms of the estimation error of QA.\n\n\n\\begin{acknowledgments}\nWe thank Shiro Kawabata for a useful comment.\nThis work was supported by MEXT's Leading Initiative for Excellent Young Researchers and JST PRESTO (Grant No. JPMJPR1919), Japan.
This paper is partly based on results obtained from a project, JPNP16007, commissioned by the New Energy and Industrial Technology Development Organization (NEDO), Japan.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} {"text":"\\section*{Acknowledgment}\nMartin Meindlhumer acknowledges support of Johannes Kepler University Linz, Linz Institute of Technology (LIT).\n\n\n\\bibliographystyle{unsrt}\n\n\\section{Eigenvalue problem} \\label{sec:eigenvalues}\nA wide range of technical applications involving piezoelectric structures is operated in the regime of (mechanical) vibrations, e.g., noise and vibration control and damping, shape control, or energy harvesting. To design and study these applications, the computation of their eigenfrequencies and eigenforms may be required.\nThe finite element discretization leads to a system with a huge number of eigenvalues, of which typically only a few of the smallest are of interest. In our contribution we propose to use the inverse iteration procedure, which allows the efficient computation of a certain number of eigenvalues. \n\n\\subsection{Inverse iteration}\n\nThe inverse iteration procedure \\cite{schwarz2006NuMa,KNYAZEV2003,NEYMEYR200161} is designed to find the smallest eigenvalues and eigenvectors of the generalized eigenvalue problem\n\\begin{equation}\\label{eq:ev_problem}\n\t\\Am\\cdot\\qv_k=\\lamv_k\\Mm\\cdot\\qv_k\n\\end{equation}\nwith symmetric positive definite stiffness matrix $\\Am$ and mass matrix $\\Mm$.\n\nFor a given $\\lambda_k^n$ and $\\qv_k^n$, the next iterate $\\qv_k^{n+1}$ is computed by solving the linear system\n\n\\begin{equation}\\label{eq:inv_iter}\n\t\\Am \\cdot\\qv^{n+1}_k=\\lamv_k^n\\Mm\\cdot\\qv^n_k.\n\\end{equation}\n\nAn approximation of the eigenvalue is given by the Rayleigh quotient\n\\begin{equation}\\label{eq:rayleight_quotient}\n\\lamv(\\qv)=\\frac{\\qv^T\\cdot\\Am\\cdot\\qv}{\\qv^T\\cdot\\Mm\\cdot\\qv}.\n\\end{equation}\n For a suitable start vector $\\qv^0$, which is not perpendicular to the smallest eigenvector, the inverse iteration procedure converges to the smallest eigenvalue $\\lamv_1$ and the corresponding eigenvector $\\qv_1$.\n\n\n\\subsection{Structure of assembled system}\n\nThe TDNNS (mixed) method for piezoelasticity with the variational formulation (\\ref{eq:V11}-\\ref{eq:V12}) leads to a certain structure of the global assembled system. The global stiffness matrix $\\Am_{global}$ is indefinite, the global mass matrix $\\Mm_{global}$ is positive semidefinite.
The eigenvalue problem is of the form\n\n\\begin{comment}\n\\begin{eqnarray*}\n\\drawmatrix[size=1.8]{ \\mathbf{0} }\\drawmatrix[size=1.8, width=1.3]{\\mathbf{B}^T} \\drawmatrix[size=1.8,width=.9]{\\mathbf{0}} \\hspace{15pt} \\drawmatrix[size=1.8,width=1]{\\uv}\\hspace{15pt} \\drawmatrix[size=1.8,width=1]{\\lambda \\mathbf{M}}\\\\\n\\drawmatrix[size=1.3,width=1.8]{ \\mathbf{B} }\\drawmatrix[size=1.3]{-\\mathbf{C}} \\drawmatrix[size=1.3,width=.9]{\\mathbf{D}^T} \\times \\drawmatrix[size=1.3,width=1]{\\mathbf{\\sigma}}=\\drawmatrix[size=1.3,width=1]{\\mathbf{0}} \\\\\n\\drawmatrix[size=.9,width=1.8]{ \\mathbf{B} }\\drawmatrix[size=.9,width=1.3]{\\mathbf{D}} \\drawmatrix[size=.9]{\\mathbf-{E}} \\hspace{15pt}\\drawmatrix[size=0.9,width=1]{\\mathbf{\\Phi}}\\hspace{15pt} \\drawmatrix[size=0.9,width=1]{\\mathbf{0}}\n\\end{eqnarray*}\n\\end{comment}\n\n\\begin{eqnarray}\n\\underbrace{\\left[\\begin{BMAT}(e)[1pt,3cm,3cm]{c:c:c}{c:c:c}\n\\nullt&\\Bm^T&\\nullt\\\\\n\\Bm&-\\Cm&\\Dm^T\\\\\n\\nullt&\\Dm&-\\Em\n\\end{BMAT}\\right] \\cdot}_{\\Am_{global}}\n\\underbrace{\\left[\\begin{BMAT}(e)[1pt,.5cm,3cm]{c}{c:c:c} \n\\qv_{\\uv}\\\\ \\qv_{\\sigmat }\\\\ \\qv_{\\phim}\\end{BMAT}\\right]}_{\\qv} = \n\\lambda\\underbrace{\\left[\\begin{BMAT}[1pt,3cm,3cm]{c:c:c}{c:c:c} \n \\Mm &\\nullt& \\nullt \\\\ \\nullt & \\nullt &\\nullt \\\\ \\nullt&\\nullt & \\nullt\n\\end{BMAT}\\right]}_{\\Mm_{global}}\\cdot\\underbrace{\\left[\\begin{BMAT}(e)[1pt,.5cm,3cm]{c}{c:c:c} \n\\qv_{\\uv}\\\\ \\qv_{\\sigmat }\\\\ \\qv_{\\phim}\\end{BMAT}\\right]}_{\\qv}.\n\\label{eq:structur}\n\\end{eqnarray}\nThe global vector of degrees of freedom is denoted by $\\qv$, where $\\qv_{\\uv}$, $\\qv_{\\sigmat }$ and $\\qv_{\\phim}$ denote the degrees of freedom of displacement, stress and electric potential, respectively.\nThe symmetric positive definite matrices $\\Cm$ and $\\Em$ correspond to the mechanical compliance and electrical permittivity of the constitutive equations. In the variational formulation these parts are represented by $\\int_{\\Omega}\\sigmat:\\St^E:\\delta\\sigmat d\\Omega$ and $\\int_{\\Omega}\\nabla\\phi\\cdot\\epsilont\\cdot\\delta\\nabla\\phi d\\Omega$. The matrix $\\Dm$ corresponds to the electromechanical coupling terms $\\int_{\\Omega}\\left( \\dt: \\sigmat\\cdot\\nabla \\delta\\phi\\right)d\\Omega$, while $\\Bm$ represents $\\langle \\epst(\\uv), \\delta \\sigmat\\rangle$, defined in (\\ref{eq:defdiv1})-(\\ref{eq:defdiv2}). Note that in (\\ref{eq:structur}) the right-hand side is $\\mathbf{0}$ for the stresses and the electric potential. Therefore the global mass matrix is positive semidefinite. \\par \nIn the following, we motivate why the inverse iteration procedure can be used although neither the mass nor the stiffness matrix is positive definite. We emphasize that these considerations are purely theoretical, while the procedure is then directly applied to the indefinite or semidefinite matrices in implementations.
\nFrom the last line of (\\ref{eq:structur}) the potential degrees of freedom can be computed as\n\n\\begin{equation}\\label{eq:phi_red}\n\t\\qv_{\\phim}=\\Em^{-1}\\cdot\\Dm\\cdot\\qv_{\\sigmat }.\n\\end{equation}\nInserting (\\ref{eq:phi_red}) into the second line of (\\ref{eq:structur}), we get for the stresses\n\\begin{equation}\\label{eq:sigma_red}\n\t\\Bm\\cdot\\qv_{\\uv}=\\left(\\Cm-\\Dm^T\\cdot\\Em^{-1}\\cdot\\Dm\\right)\\qv_{\\sigmat }=\\bar{\\Cm}\\cdot\\qv_{\\sigmat }.\n\\end{equation}\nFinally, applying (\\ref{eq:sigma_red}) to the first line of (\\ref{eq:structur}), we get the eigenvalue problem\n\\begin{equation}\\label{eq:ev_problem_struct}\n\t\\Bm^T\\cdot\\bar{\\Cm}^{-1}\\cdot\\Bm\\cdot\\qv_{\\uv}=\\lambda\\Mm\\cdot\\qv_{\\uv}.\t\n\\end{equation}\nFor typical technical piezoelectric materials the matrix $\\bar{\\Cm}$ can be assumed to be positive definite, as it represents the compliance at constant dielectric displacement. Therefore, (\\ref{eq:ev_problem_struct}) is formally equivalent to a generalized eigenvalue problem with symmetric positive definite matrices, and can be treated by the inverse iteration procedure.\n\\par \nNote that the elimination of the degrees of freedom of stress and potential in (\\ref{eq:phi_red})-(\\ref{eq:ev_problem_struct}) is only of theoretical interest, to show that the eigenvalue problem can be solved by the inverse iteration procedure. In the practical implementation the global stiffness and mass matrices $\\Am_{global}$ and $\\Mm_{global}$ are used.\nAn exemplary implementation of the inverse iteration procedure in \\textit{Netgen\/NGSolve} can be found in the tutorial section of \\cite{netgen}. A schematic sketch of the iteration, independent of the finite element software, is given below.
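For illustration, a minimal Python sketch of the inverse iteration operating directly on the assembled (indefinite) stiffness matrix and (semidefinite) mass matrix is given below. The sparse-matrix setup and all variable names are placeholders and do not correspond to the actual \\textit{Netgen\/NGSolve} implementation.\n\n\\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import splu

def inverse_iteration(A, M, q0, tol=1e-10, maxit=100):
    # A: global (indefinite) stiffness matrix, sparse
    # M: global (semidefinite) mass matrix, sparse
    # q0: start vector, not perpendicular to the sought eigenvector
    lu = splu(A.tocsc())                  # factorize A once, reuse in every step
    q = q0 / np.linalg.norm(q0)
    lam = (q @ (A @ q)) / (q @ (M @ q))   # Rayleigh quotient
    for _ in range(maxit):
        q_new = lu.solve(lam * (M @ q))   # solve A q_new = lam M q
        q_new /= np.linalg.norm(q_new)
        lam_new = (q_new @ (A @ q_new)) / (q_new @ (M @ q_new))
        if abs(lam_new - lam) < tol * abs(lam_new):
            return lam_new, q_new
        lam, q = lam_new, q_new
    return lam, q
\\end{verbatim}\nBlock variants with orthogonalization against already computed eigenvectors allow the computation of several of the smallest eigenvalues, cf. \\cite{KNYAZEV2003}.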
\\section{Numerical Results} \\label{sec:num_results}\nIn the sequel we study two numerical examples. First, we show the discretization of a circular piezoelectric patch actuator applied to a square aluminium plate. We evaluate the displacement of the plate and the stress field for an applied static voltage as well as the eigenfrequencies and eigenforms of the assembly. Our second example is a radially polarized piezoelectric half cylinder, studied in \\cite{ZOUARI:2010}. We calculate (static) displacements and present a convergence study.\nFor both examples we compare our results to results obtained with the commercial software tool \\textit{ABAQUS} 6.14 \\cite{abaqus20146}.\n\n\\subsection{Circular patch actuator}\nOur first numerical example is a circular piezoelectric patch applied on a square plate. A schematic sketch is shown in Figure \\ref{fig:circPatch_sketch_and_mesh}. The piezoelectric patch has a diameter of $15\\operatorname{mm}$ and a thickness of $0.5\\operatorname{mm}$. The patch material is considered to be PZT-5H, polarized in thickness direction. The material properties are taken from \\cite{ZOUARI:2010} and summarized in Table \\ref{tab:mat_piezp}. The square aluminium plate (Young's modulus $65\\operatorname{GPa}$, Poisson ratio $0.3$, density $2.7 \\times 10^{-9} \\operatorname{kg\/mm^3}$) has a length of $25 \\operatorname{mm}$ and a thickness of $1\\operatorname{mm}$. One side of the plate is clamped. A potential difference $\\Delta\\phi=100\\operatorname{V}$ is applied to the electrodes of the piezoelectric patch, which are located at the top and bottom of the patch actuator.\n\n\\begin{figure}[htp]\n\n\\centering\n\\begin{subfigure}[b]{0.45\\textwidth}\n\t\\centering\n\t\t\\psfrag{x}{$x$}\n\t\t\\psfrag{y}{$y$}\n\t\t\\psfrag{z}{$z$}\n\t\t\\psfrag{voltage}{\\footnotesize $\\Delta\\phi=100$ V}\n\t\t\\psfrag{back}{\\footnotesize fixed}\n\t\t\\psfrag{def}{\\footnotesize deformation}\n\t\t\\psfrag{piezo}{\\begin{tabular}{@{}l@{}}\n\t\t\t\t\\footnotesize piezoelectric patch actuator\n\t\t\\end{tabular}}\n\t\t\\psfrag{aluminium}{\\footnotesize{\\begin{tabular}{@{}l@{}}\n\t\t\t\t aluminium \\\\\n\t\t\t\t plate\n\t\t\\end{tabular}}}\n\t\t\n\t\t\\includegraphics[width=0.95\\textwidth]{scetch_geom_circPatch.eps}\n\n\t\\caption{\\centering Sketch of circular patch on square aluminium plate}\n\t\\label{fig:scetch_circ_patch}\n\t\n\\end{subfigure}\\hspace{1eM}\n\\begin{subfigure}[b]{0.45\\textwidth}\n\t\n\t\\centering\n\t\\includegraphics[width=0.85\\textwidth]{mesch_circularPatch.eps}\n\t\n\t\\caption{\\centering Mesh for calculation in \\textit{Netgen\/NGSolve} ($\\approx 12k$ DOF) }\n\t\\label{fig:mesh_circ_patch}\n\\end{subfigure}\n\n\\caption{Sketch and mesh for circular piezoelectric patch actuator}\n\\label{fig:circPatch_sketch_and_mesh}\n\\end{figure}\n\n\\begin{table}[htp]\n\t\\centering\n\t\\caption{Elastic ($[C]=\\operatorname{GPa}$), piezoelectric ($[e]=\\operatorname{C\/mm^2} $) and dielectric ($[\\epsilon]=\\operatorname{C\/Vm}$) properties and density ($[\\rho]=\\operatorname{kg\/mm^3}$) of PZT-5H}\n\t\\begin{tabular}{|l l|}\n\t\t\\hline\n\t\t$C_{11}^E=127.205 $ & $e_{31}=-6.62\\times 10^{-3}$ \\\\ \n\t\t$C_{12}^E=80.212 $ & $e_{33}=23.24\\times 10^{-3}$ \\\\ \n\t\t$C_{13}^E=84.670 $ & $e_{15}=17.03\\times 10^{-3}$ \\\\ \n\t\t$C_{33}^E=117.436 $ & $\\epsilon_{11}=-6.62\\times 10^{-9}$ \\\\ \n\t\t$C_{66}^E=23.474 $ & $\\epsilon_{33}=-6.62\\times 10^{-9}$ \\\\ \n\t\t$C_{44}^E=22.988 $ & $\\rho=7.5\\times10^{-9}$\\\\ \\hline\n\t\\end{tabular}\t\n\t\\label{tab:mat_piezp}\n\\end{table}\n\n For our reference calculations in \\textit{ABAQUS} we use a mesh with an element size of $0.25\\operatorname{mm}$, consisting of four elements across the thickness of the aluminium and two elements across the thickness of the patch. We use \\textit{ABAQUS} 6.14 C3D20R elements (20-node quadratic brick elements, reduced integration) for the linear elastic aluminium plate and C3D20RE elements (20-node quadratic piezoelectric brick elements, reduced integration) for the piezoelectric patch. Our \\textit{ABAQUS} model requires approximately $830k$ degrees of freedom.\n \\par\n The mesh used for the calculation in \\textit{Netgen\/NGSolve} is shown in Figure \\ref{fig:mesh_circ_patch}. It consists of a very coarse prismatic mesh using only one element in thickness direction, for both the plate and the patch.\n On the circumference of the patch the mesh is refined by two ring layers of hexahedral elements, which have an aspect ratio of $\\approx 25:2:1$. We use shape functions of order $p=2$ for displacements and stresses and order $p_{\\phi}=3$ for the electric potential. Summing up, we use $12439$ degrees of freedom for the calculations in \\textit{Netgen\/NGSolve}.\n \n \\par Note that in the TDNNS method, contrary to elements using nodal degrees of freedom, a reduction of the thickness of the patch (or the plate) has no effect on the number of degrees of freedom used.
Actually, we have chosen a rather thick patch actuator in order to get accurate reference solutions at reasonable computational costs. \n \n\\par We compare the resulting stress fields. Figure \\ref{fig:compare_stess_circ_patch} shows contour plots of the stress component $S_{11}$ calculated in \\textit{Netgen\/NGSolve} together with the reference solution.\n We also evaluate the resulting vertical displacements of the corner point marked in Figure \\ref{fig:circPatch_sketch_and_mesh}(\\subref{fig:mesh_circ_patch}). While the reference displacement is $-6.07325\\operatorname{\\mu m}$, our result is $-6.03698\\operatorname{\\mu m}$. The relative difference of the results is $-0.597\\%$.\n\n\\begin{figure}[htp]\n\t\\centering\n\t\\begin{subfigure}[t]{0.4\\textwidth}\n\t\t\\includegraphics[width=0.8\\textwidth]{stress_circularPatch_ngsolve.eps}\n\t\t\\subcaption{Solution \\textit{Netgen\/NGSolve} \\\\ ($\\approx12k$ DOF)}\n\t\t\\label{fig:comp_stress_netgen}\n\t\\end{subfigure}\n\t\\begin{subfigure}[t]{0.4\\textwidth}\n\t\t\\includegraphics[width=.8\\textwidth]{stress_circularPatch_ABAQUS.eps}\n\t\t\\subcaption{Solution \\textit{ABAQUS} \\\\ ($\\approx 830k$ DOF)}\n\t\t\\label{fig:comp_stress_abaqus}\n\t\\end{subfigure}\n\t\\begin{subfigure}[t]{0.15\\textwidth}\n\t\t\\psfrag{stress}{$S_{11}$}\n\t\t\\psfrag{-2.0}{\\footnotesize$-2.0$}\n\t\t\\psfrag{-1.0}{\\footnotesize$-1.0$}\n\t\t\\psfrag{0.0}{\\footnotesize$0$}\n\t\t\\psfrag{1.0}{\\footnotesize$1.0$}\n\t\t\\psfrag{1.5}{\\footnotesize$1.5$}\n\t\t\\includegraphics[width=0.9\\textwidth]{label_stress.eps}\n\t\\end{subfigure}\n\t\\caption{Contour plot of stress $S_{11}$ in \\textit{ABAQUS} and \\textit{Netgen\/NGSolve}}\n\t\\label{fig:compare_stess_circ_patch}\n\\end{figure}\n\nFinally, we show results of the eigenvalue calculation. The lowest five eigenfrequencies of the assembly for the open circuit ($OC$) and short circuit ($SC$) configurations are listed in Table \\ref{tab:circPatch_EF}. \n For the short circuit configuration, reference eigenfrequencies were calculated in \\textit{ABAQUS}. The relative frequency error $\\Delta f_{SC}=\\frac{f_{SC,ref}}{f_{SC}}-1 $ is listed in Table \\ref{tab:circPatch_EF}.
The lowest three eigenforms are plotted in Figure \\ref{fig:circPatch_EF}.\n\n\\begin{table}[htp]\n\t\\centering\n\n\t\\caption{Eigenfrequencies for the open circuit ($f_{OC}$) and short circuit ($f_{SC}$) configurations}\n\t\\label{tab:circPatch_EF}\n\t\\begin{tabular}{|c||c|c||c|c|}\n\t\t\\hline\n\t\t$\\#$ & $f_{OC} \/ \\operatorname{kHz} $ & $f_{SC} \/ \\operatorname{kHz}$& $f_{SC,ref} \/ \\operatorname{kHz}$ & $\\Delta f_{SC} $ \\\\ \\hline \\hline\n\t\t1 & 1.2681 & 1.2650 &1.2647 & -0.001066 \\\\ \\hline\n\t\t2 & 3.8039 & 3.8039 &3.7987 & -0.000324 \\\\ \\hline\n\t\t3 & 8.9729 & 8.8098 &8.8475 & 0.003226 \\\\ \\hline\n\t\t4 & 11.7410 & 11.5219 &11.6111 & 0.012628 \t \\\\ \\hline\n\t\t5 & 12.0651 & 12.0646 &12.1416 & 0.011402 \t \\\\ \\hline\n\t\\end{tabular}\n\\end{table}\n\n\\begin{figure}[htp]\n\t\\centering\n\t\\begin{subfigure}[t]{0.3\\textwidth}\n\t\t\\includegraphics[width=0.8\\textwidth]{circPatch_SC_EF1.eps}\n\t\\end{subfigure}\n\t\\begin{subfigure}[t]{0.3\\textwidth}\n\t\t\\includegraphics[width=0.8\\textwidth]{circPatch_SC_EF2.eps}\n\t\\end{subfigure}\n\t\\begin{subfigure}[t]{0.3\\textwidth}\n\t\t\\includegraphics[width=0.8\\textwidth]{circPatch_SC_EF3.eps}\n\t\\end{subfigure}\n\n\t\\caption{Contour plot of the first three eigenforms of the plate with circular patch actuator}\n\t\\label{fig:circPatch_EF}\n\\end{figure}\n\n\\subsection{Radially polarized semicylinder}\nOur second example is a radially polarized piezoelectric semicylinder that was first presented in \\cite{ZOUARI:2010}. The semicylinder has a diameter of $15\\operatorname{mm}$, a thickness of $1\\operatorname{mm}$, and a length (in $z$ direction) of $5\\operatorname{mm}$. A sketch of the geometry is shown in Figure \\ref{fig:semicyl}. We re-use the material parameters from Table \\ref{tab:mat_piezp} in the local frame of the polarization direction. To the electrodes, located at the inner and outer cylinder surfaces, we apply a potential difference of $100~V$. \n\\par \n\n\\begin{figure}[htp]\n\t\\centering\n\t\\begin{subfigure}{0.4\\textwidth}\n\t\t\n\t\t\\psfrag{x}{$x$}\n\t\t\\psfrag{y}{$y$}\n\t\t\\psfrag{z}{$z$}\n\t\t\\psfrag{clamped}{\\footnotesize {clamped face}}\n\t\t\\psfrag{tp}[l]{\\footnotesize {tip displacement}}\n\t\t\\psfrag{phi100}[r]{$\\phi=100V$}\n\t\t\\psfrag{phi0}{$\\phi=0V$}\n\t\t\\psfrag{ra}[r]{\\begin{tabular}{@{}r@{}}\n\t\t\t\t\\footnotesize {radially polarized}\\\\\n\t\t\t\t\\footnotesize {piezoelectric material}\n\t\t\\end{tabular}}\n\t\t\\psfrag{dm}[r]{$r=15~mm$}\n\t\t\\psfrag{hp}{$1~mm$}\n\t\t\\includegraphics[width=0.75\\textwidth]{scetch_semizylinder.eps}\n\t\t\\caption{Sketch }\n\t\t\\label{fig:scetch_semizylinder}\n\t\\end{subfigure}\n\t\\begin{subfigure}{0.3\\textwidth}\n\t\t\n\t\t\\includegraphics[width=0.9\\textwidth]{mesh_semizylinder2.eps}\n\t\t\\subcaption{Mesh (mesh size $h=5~\\operatorname{mm}$)}\n\t\t\\label{fig:mesh_semicly}\n\t\\end{subfigure}\n\t\\caption{Sketch of radially polarized piezoelectric semicylinder and mesh used in \\textit{Netgen\/NGSolve} }\n\t\\label{fig:semicyl}\n\\end{figure}\n\nWe compute a reference solution in \\textit{ABAQUS} 6.14 using $10540$ C3D20E elements (four across the thickness, 17 along the length and 155 along the circumference), with in total $210k$ degrees of freedom. The reference tip displacement at the marked corner point in Figure \\ref{fig:semicyl}(\\subref{fig:mesh_semicly}) is $u^{tip}_{ref}=2.75525\\operatorname{\\mu m}$.\n\n\\begin{comment}\nWe use the variational formulation (\\ref{eq:TDNNS_update})-(\\ref{eq:TDNNS_update_rhs}) to implement the constitutive equations.
The deformation gradient for this example is given by\n\\begin{equation}\\label{eq:def_grad_semicyl}\n\tF_M=F_{M,1}\\cdot F_{M,2}= \\begin{bmatrix}\n\t\t\tcp & sp & 0\\\\\n\t\t\t-sp & cp & 0\\\\\n\t\t\t0 & 0 & 1 \n\t \\end{bmatrix} \\cdot\n\t \\begin{bmatrix}\n\t \t\t0 & 0 & 1 \\\\ 0& 1 & 0 \\\\ 1&0 &0\n\t \\end{bmatrix}\n\\end{equation}\nwith $cp=\\frac{x}{\\sqrt{x^2+y^2}}$ and $sp=\\frac{y}{\\sqrt{x^2+y^2}}$. Therefore, $F_{M,1}$ performs a rotation about the $z$ axis, while $F_{M,2}$ performs an additional rotation about the $y$ axis by an angle of $\\pi\/2$, which can as well be interpreted as an interchange of the $x$ and $z$ axes. \n\\end{comment}\n\nThe mesh used for calculations in \\textit{Netgen\/NGSolve} is shown in Figure \\ref{fig:mesh_semicly}. Again, a very coarse prismatic mesh consisting of one element along the length of the semicylinder and 10 elements along the circumference is used. This corresponds to a mesh size $h=5~\\operatorname{mm}$. A second (finer) mesh with two elements in $z$ direction and 20 elements along the circumference (mesh size $h=2.5~\\operatorname{mm}$) is studied. For both meshes we use only one element in thickness direction. We performed calculations with different interpolation orders $k$ for displacements and stresses; the interpolation order for the electric potential is $k+1$. \nFor each calculation, the tip displacement $u^{tip}$ at the corner point is evaluated. We evaluate the relative difference of the results, defined as $\\delta_{rel}=u^{tip}\/u^{tip}_{ref}-1 $. The results of this convergence study are shown in Table \\ref{tab:semicyl} together with the number of degrees of freedom $n_{dof}$ used in \\textit{Netgen\/NGSolve}. \nWe see that the accuracy of the solution increases both when decreasing the mesh size and when increasing the polynomial order.\nMore detailed convergence studies can be found in \\cite{PechsteinMeindlhumerHumer:2018}.\n\n\\begin{table}[htp]\n\t\\centering\n\t\\caption{Relative difference of tip displacement computed by \\textit{Netgen\/NGSolve} and \\textit{ABAQUS}}\n\t\\begin{tabular}{|c||c| c||c|c|}\n\t\t\\hline\n\t\t & \\multicolumn{2}{|c||}{$h=5~\\operatorname{mm}$} & \\multicolumn{2}{c|}{$h=2.5~\\operatorname{mm}$} \\\\ \\hline\n\t\t k & $n_{dof}$ & $\\delta_{rel}$ & $n_{dof}$ & $\\delta_{rel}$ \\\\ \\hline\n\t\t 1 & 845 & $-0.016034$ & 2801 & $-0.002189$ \\\\ \\hline\n\t\t 2 & 2209 & $0.006113$ & 7543 & $0.000552$ \\\\ \\hline\n\t\t 3 & 4521 & $0.001890$ & 15729 & $0.000320$ \\\\ \\hline\n\t\\end{tabular}\t\n\t\\label{tab:semicyl}\n\\end{table}\n\n\\begin{comment}\n\\subsection{Patch on semicylinder}\nOur last example is an originally flat piezoelectric patch applied on an aluminium semicylinder. A sketch of the geometry is shown in Figure \\ref{fig:scetch_patch_zyl}. The cylinder has a thickness of $0.5\\operatorname{mm}$ and an (outer) radius of $150\\operatorname{mm}$, the piezoelectric patch is $0.1~\\operatorname{mm}$ thick and covers an angle of $\\pi\/3$, material properties are shown in Table \\ref{tab:mat_piezp}. The width of cylinder and patch is $3\\operatorname{mm}$. The cylinder is defined as an electrode with an electric potential $\\phi=0\\operatorname{V}$. We assume that in a first order approximation the piezoelectric patch behaves like a Bernoulli-Euler beam. Its middle fiber is not stretched and the patch has an (initial) length of $(r+h_p\/2)\\pi\/3$. 
The gradient of the predeformation is \n\n\\begin{equation}\\label{eq:def_grad_semicyl_patch}\n\tF_P=\\begin{bmatrix}\n\t\\frac{z}{r_m}& 0 & \\frac{x}{\\sqrt{x^2+y^2}} \\\\\n\t0 & 1 & 0 \\\\\n\t\\frac{-x}{r_m}& 0 & \\frac{z}{\\sqrt{x^2+y^2}} \n\t\n\t\\end{bmatrix} \n\\end{equation}\nwith the radius of the neutral phase $r_m=(r+h_p\/2)$.\n\nWe used a mesh with one element across thickness and width (for patch and epoxy) and $55$ elements along the circumference of the epoxy ($20$ element along the circumference ot the patch), interpolation order is $3$. In Figure \\ref{fig:reslut_smidcyl_patch} a contour plot of the $z$ component of the electric field and of the electric potential in the plane $x=0$ (the $y$-$z$-plane) is shown. Due to the linear distribution of strain $E_{xx}$ according to the approximated strain distribution of an Bernoulli Euler beam and the $d_{31}$ effect the $z$-component ot the electric field is linearly distributed.\nThe electric potential can be seen in the results in Figure \\ref{fig:patch_def_Ez} for the electric field and in Figure \\ref{fig:patch_def_phi} for the electric potential. Electric field and electric potential in the aluminium cylinder are zero. \n\n\t\\begin{figure}[htp]\n\t\\centering\n\t\\psfrag{x}{$x$}\n\t\\psfrag{y}{$y$}\n\t\\psfrag{z}{$z$}\n\t\\psfrag{clamped}{\\footnotesize {clamped face}}\n\t\\psfrag{phi0}{\\footnotesize $\\phi=0\\operatorname{V}$}\n\t\\psfrag{alu}[r]{\\footnotesize aluminium}\n\t\\psfrag{dm}[l]{\\footnotesize $r=150\\operatorname{mm}$}\n\t\\psfrag{hp}[r]{\\footnotesize $h_p=0.1\\operatorname{mm}$}\n\t\\psfrag{he}[c]{\\footnotesize $h_e=0.5\\operatorname{mm}$}\n\t\\psfrag{alp}{\\footnotesize $\\pi\/3$}\n\t\\psfrag{pie}{\\footnotesize piezoelectric patch}\n\t\\includegraphics[width=0.4\\textwidth]{scetch_zyl_patch.eps}\n\t\\caption{Sketch of piezoelectric patch on aluminium semicylinder}\n\t\\label{fig:scetch_patch_zyl}\n\t\\end{figure}\n\n\\begin{figure}[htp]\n\t\\centering\n\t\\begin{subfigure}{0.45\\textwidth}\n\t\\psfrag{E}[c]{\\footnotesize $z$-component of electric field (V \/ mm) }\n\t\\psfrag{-500}[l]{\\tiny$-500$}\n\t\\psfrag{-400}[l]{\\tiny$-400$}\n\t\\psfrag{-300}[l]{\\tiny$-300$}\n\t\\psfrag{-200}[l]{\\tiny$-200$}\n\t\\psfrag{-100}[l]{\\tiny$-100$}\n\t\\psfrag{0}[c]{\\tiny$0$}\n\t\\psfrag{100}[r]{\\tiny$100$}\n\t\\psfrag{200}[r]{\\tiny$200$}\n\t\\psfrag{300}[r]{\\tiny$300$}\n\t\\psfrag{400}[r]{\\tiny$400$}\n\t\\psfrag{500}[l]{\\tiny$600$}\n\t\\includegraphics[width=0.95\\textwidth]{patch_def_Ez.eps}\n\t\\caption{Contour plot of $z$-component of electric field}\n\t\\label{fig:patch_def_Ez}\n\t\\end{subfigure}\n\t\\begin{subfigure}{0.45\\textwidth}\n\t\\psfrag{pot}[c]{\\footnotesize electric potential (V) }\n\t\\psfrag{-2}[l]{\\tiny$-500$}\n\t\\psfrag{0}[l]{\\tiny$0$}\n\t\\psfrag{2}[r]{\\tiny$2$}\n\t\\psfrag{4}[r]{\\tiny$4$}\n\t\\psfrag{6}[r]{\\tiny$6$}\n\t\\psfrag{8}[r]{\\tiny$8$}\n\t\\psfrag{10}[l]{\\tiny$10$}\n\t\\psfrag{12}[l]{\\tiny$12$}\n\t\\psfrag{15}[l]{\\tiny$15$}\n\t\\includegraphics[width=0.95\\textwidth]{patch_def_phi.eps}\n\t\\caption{Contour plot of electric potential }\n\t\\label{fig:patch_def_phi}\n\t\\end{subfigure}\n\t\\caption{Results of semicylinder with predeformed piezoelectric patch 
}\n\t\\label{fig:reslut_smidcyl_patch}\n\\end{figure}\n\n\n\\begin{figure}[htp]\n\t\\psfrag{g2}{$\\Gamma_1$}\n\t\\psfrag{u}{$u=0$}\n\t\\psfrag{om}{$\\Omega$}\n\t\\psfrag{f}{$\\fv$}\n\t\\psfrag{g3}{$\\Gamma_3$}\n\t\\psfrag{phi}{$\\phi=\\phi_0$}\t\n\t\\includegraphics[width=0.5\\textwidth]{principle_sketch.eps}\n\\end{figure}\n\n\n\\begin{figure}\n\t\\psfrag{ut}{$\\boldsymbol{u}_t$}\n\t\\psfrag{snn}{$\\sigma_{nn}$}\n\t\\includegraphics[width=0.3\\textwidth]{2elements.eps}\n\\end{figure}\n\n\n\t\\begin{figure}[htp]\n\t\\centering\n\t\\psfrag{x}{$x$}\n\t\\psfrag{y}{$y$}\n\t\\psfrag{z}{$z$}\n\t\\psfrag{clamped}{\\footnotesize {clamped face}}\n\t\\psfrag{phi0}{\\footnotesize $\\phi=0\\operatorname{V}$}\n\t\\psfrag{alu}[r]{\\footnotesize aluminium}\n\t\\psfrag{dm}[l]{\\footnotesize $r=75\\operatorname{mm}$}\n\t\\psfrag{hp}[r]{\\footnotesize $h_p=0.5\\operatorname{mm}$}\n\t\\psfrag{he}[c]{\\footnotesize $h_e=1\\operatorname{mm}$}\n\t\\psfrag{alp}{\\footnotesize $\\pi\/3$}\n\t\\psfrag{pie}{\\footnotesize piezoelectric patch}\n\t\\includegraphics[width=0.4\\textwidth]{scetch_zyl_patch.eps}\n\\end{figure}\n\\end{comment}\n\n\\section{Conclusion} In our contribution, we have provided prismatic and hexahedral elements of arbitrary polynomial order for the discretization of flat, possibly curved, piezoelectric geometries by the TDNNS method. \nThe inverse iteration has been proposed to calculate eigenvalues and eigenforms.\n\nIn our numerical examples we have shown that flat and curved piezoelectric structures can be simulated accurately with only one element in thickness direction.\n Good accuracy was achieved for displacements and stresses as well as for eigenfrequencies, while the number of degrees of freedom could be reduced significantly. \n\\section{Introduction}\n\n\nThe direct and inverse piezoelectric effects allow the transformation of mechanical energy into electrical energy and vice versa. Moreover, modern piezoceramics are cheap to produce and offer a fast and accurate response. Therefore, piezoelectric materials are the first choice in the realization of smart structures. \n\nWhen it comes to the design and control of piezoelectric structures, simulation results are of high interest. A popular method to obtain approximate solutions is the finite element (FE) method. \nAllik and Hughes \\cite{AllikHughes:1970} and later Lerch \\cite{Lerch:1988} carried out the first simulations using volume elements based on the principle of virtual work. These elements are an extension of standard small-strain mechanical elements by degrees of freedom for the electric potential. However, piezoelectric structures are typically flat (e.g. patch actors); therefore, in order to avoid locking effects, their discretization into well-shaped elements leads to a computational system with a huge number of unknowns, even for simple applications. \\par \n\nNowadays, there are two ways to avoid this problem: the first is the derivation of equations for layered beams, plates and shells. The second is the design of locking-free volume elements. In either case, some relaxation technique is necessary to avoid shear locking. Prominent methods of relaxation are reduced integration, assumed strains, or mixed methods involving stresses, strains or dielectric displacements as additional independent unknowns. Also, a higher order of polynomial approximation is often feasible.\n\nMixed methods are probably the most involved of the methods mentioned above. 
Both from a theoretical point of view, when proving uniqueness and convergence results, and from an implementational point of view, when additional unknowns have to be realized in a computational code, mixed methods require intricate treatment. However, when done correctly, mixed methods yield highly accurate results, as contributions from various authors demonstrate. \nAmong the mixed methods, we cite hybrid stress elements \\cite{SzePan:1999,SzeYaoYi:2000}, a solid shell element based on a Hu-Washizu formulation \\cite{KlinkelWagner:2006} or the multi-variable formulation \\cite{OrtigosaGil:2016}. Reissner-type mixed zigzag formulations were successfully used in \\cite{CarreraBoscolo:2007,CarreraNali:2009,WuLin:2014}. High-order methods comprise, among many others, isogeometric and hierarchical elements from the group around Gabbert \\cite{WillbergGabbert:2012,DuczekGabbert:2013}. A high-order element with zigzag function is proposed by Polit et al.~\\cite{PolitDOttavioVidal:2016}.\n\n Pechstein and Sch{\\\"o}berl \\cite{PechsteinSchoeberl:11} introduced an arbitrary-order mixed FE method for linear elasticity based on a Hellinger-Reissner formulation, which uses tangential displacements and normal normal stresses as degrees of freedom (TDNNS method). This method was shown to be locking-free \\cite{PechsteinSchoeberl:12}, and later extended to piezoelectric materials \\cite{PechsteinMeindlhumerHumer:2018}. In the current contribution, we want to show the capability of the method. We discuss high-order geometry approximation,\n such that curved geometries can be treated accurately by a small number of curvilinear elements. Moreover, we address the problem of eigenvalue computation for the mixed method. We propose to use an inverse iteration \\cite{KNYAZEV2003} for the coupled electromechanical problem.\n\t\n\t\n\n\nThis paper is organized as follows: below, we introduce some tensor notation. In Section~\\ref{sec:linear_piezoelasticity}, the problem of linear piezoelasticity is introduced. Section~\\ref{sec:TDNNS_piezoelasticity} provides a short introduction to the variational formulation of the mixed TDNNS method that is the foundation of the current contribution. The implementation of the underlying shape functions and their derivatives is treated in Section~\\ref{sec:curvilinear_elements}. The coupled eigenvalue problem is described in Section~\\ref{sec:eigenvalues}, and an inverse iteration is proposed for its solution. Finally, numerical results illustrating the capability of the method are presented in Section~\\ref{sec:num_results}.\n\n\\paragraph{Notation}\nIn the current contribution, we use tensor calculus. Vectorial and tensorial quantities are denoted by boldface symbols. When summing over tensor components, we use Einstein's summation convention. The single dot denotes contraction, the colon double contraction, and $\\otimes^s$ the symmetric tensor product:\n\\begin{align}\n(\\at \\cdot \\bt)_{ij} &= a_{ik} b_{kj}, & \\at : \\bt&= a_{ij} b_{ji}, & (\\av \\otimes^{s} \\bv)_{ij} &= \\tfrac{1}{2} ( a_i b_j + a_j b_i)\n\\end{align}\nThe gradients of a scalar field $a$ and of a vector field $\\bv$ are denoted by the nabla operator. We indicate with respect to which coordinates the differentiation is carried out,\n\\begin{align}\n(\\nabla_\\xv a)_i &= \\frac{\\partial a}{\\partial x_i}, & (\\nabla_\\xv \\bv)_{ij} &= \\frac{\\partial b_i}{\\partial x_j}.\n\\end{align} \nWe may omit the index if the differentiation is with respect to $\\xv$. 
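\n\nThese conventions translate directly into code. The following small Python\/numpy sketch is purely illustrative (random tensors, unrelated to any finite element computation) and shows the single contraction, the double contraction and the symmetric tensor product as defined above:\n\\begin{verbatim}\nimport numpy as np\n\na, b = np.random.rand(3, 3), np.random.rand(3, 3)   # second order tensors\nu, v = np.random.rand(3), np.random.rand(3)         # vectors\n\nsingle_dot = np.einsum('ik,kj->ij', a, b)           # (a . b)_ij = a_ik b_kj\ndouble_dot = np.einsum('ij,ji->', a, b)             # a : b = a_ij b_ji\nsym_prod = 0.5*(np.outer(u, v) + np.outer(v, u))    # (u otimes^s v)_ij\n\\end{verbatim}\n\n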
The divergence operator of a vector field is defined in the standard way; for a tensor field, it is understood row-wise,\n\\begin{align}\n\\opdiv_\\xv \\av &= \\frac{\\partial a_i}{\\partial x_i}, & (\\opdiv_\\xv \\bt)_i &= \\frac{\\partial b_{ij}}{\\partial x_j}.\n\\end{align}\n\n\n\\section{Linear piezoelasticity} \\label{sec:linear_piezoelasticity}\n\n\n\nWe consider an elastic, in part piezoelectric, solid body in the domain $\\Omega \\subset \\mathbb{R}^3$, subjected to body forces $\\fv$ and free of body charges. We assume to stay in the regime of small deformations, where Voigt's theory of linear piezoelasticity \\cite{voigt1910lehrbuch} is appropriate. We are interested in the displacement field $\\uv$ and in the electric potential $\\phi$. From these quantities the electric field $\\Ev=-\\nabla\\phi$ and the linear strain tensor $\\epst=\\frac{1}{2}(\\nabla \\uv+ {\\nabla\\uv}^T)$ are derived. For the static case the mechanical and electrical balance equations in differential form are given by\n\\begin{eqnarray}\n\\opdiv \\sigmat &=& \\fv, \\label{eq:balance1}\\\\\n\\opdiv \\Dv &=& 0, \\label{eq:balance2}\n\\end{eqnarray}\nwith the Cauchy stress tensor $\\sigmat$ and the dielectric displacement $\\Dv$. The mechanical boundary conditions are\n\\begin{equation}\n\\uv = \\nullv \\text{ on } \\Gamma_1 \\qquad \\text{and} \\qquad \\sigmat \\cdot \\nv = \\nullv \\text { on } \\Gamma_2 = \\partial \\Omega \\backslash \\Gamma_1,\n\\end{equation}\nand the electrical boundary conditions\n\\begin{equation}\n\\phi = \\phi_0 \\text{ on } \\Gamma_3 \\qquad \\text{and} \\qquad \\Dv \\cdot \\nv = 0 \\text { on } \\Gamma_4 = \\partial \\Omega \\backslash \\Gamma_3.\n\\end{equation}\n\n\n\n\nWe are concerned with the most common variant of the constitutive equations, which is often referred to as Voigt's linear theory of piezoelectricity \\cite{voigt1910lehrbuch}.\nIn our notation, it reads\n\\begin{eqnarray}\n\\sigmat &=& \\Ct^E: \\epst - \\et^T\\cdot \\Ev, \\label{eq:etype1}\\\\\n\\Dv &=& \\et: \\epst + \\epsilont^{\\eps}\\cdot \\Ev, \\label{eq:etype2}\n\\end{eqnarray}\nwith $\\Ct^E$ the elasticity tensor measured under constant electric field, $\\epsilont^{\\eps}$ the dielectric tensor at constant strain and $\\et$ the piezoelectric coupling tensor. We use the following alternative variant of the material law\n\\begin{align}\n\\epst &=\\ \\St^E: \\sigmat + \\dt^T\\cdot \\Ev, \\label{eq:dtype1}\\\\\n\\Dv &=\\ \\dt: \\sigmat + \\epsilont^{\\sigma}\\cdot \\Ev. \\label{eq:dtype2}\n\\end{align}\nOf course, the material parameters are connected; their relations are\n\\begin{align}\n\t \\St^{E} &=\\ (\\Ct^{E})^{-1},& \\dt &=\\ \\et\\cdot \\St^{E}, & \\epsilont^{\\sigma} &=\\ \\epsilont^{\\eps} + \\dt\\cdot \\et^T.\n\\end{align}\n\n\n\\section{TDNNS for piezoelasticity} \\label{sec:TDNNS_piezoelasticity}\nIn this section we briefly introduce the ``Tangential Displacement Normal Normal Stress'' (TDNNS) finite element method. The TDNNS finite element method was introduced by Pechstein and Sch{\\\"o}berl \\cite{PechsteinSchoeberl:11,pechstein2018analysis} for elasticity and later extended to piezoelasticity \\cite{PechsteinMeindlhumerHumer:2018}. It is a mixed finite element method based on Reissner's principle, which considers displacements and stresses as unknowns. A more detailed description can be found in \\cite{PechsteinMeindlhumerHumer:2018}.\n\\par \n We assume $\\mathcal{T} = \\{T\\}$ to be a finite element mesh of the domain $\\Omega$. 
The unit vector $\\nv$ denotes the outer normal on element or domain boundaries $\\partial T$ or $\\partial \\Omega$. In general on a (boundary) surface a vector field $\\vv$ can be split into a normal component $v_n=\\vv\\cdot\\nv$ and a tangential component $\\vv_t=\\vv-v_n\\nv$. A tensor field $\\taut$ has a normal vector on a surface $\\tauv_n=\\taut\\cdot\\nv$, which analogously can be split into a normal component $\\tau_{nn}$ and tangential component ${\\tauv_{nt}}$.\n \n\\subsection{Finite element formulation for elastic problem } \nThe TDNNS finite element method uses the tangential component of the displacement $\\uv_t$ and the normal component of the stress vector $\\sigma_{nn}$ as degrees of freedom. Those quantities are continuous across the element interfaces. Note that the normal displacement $u_n$ is discontinuous. For the displacements, tangential-continuous elements introduced by N{\\'e}d{\\'e}lec \\cite{Nedelec:86} are used. Elements for the stresses are normal-normal-continuous and were introduced by Pechstein and Sch{\\\"o}berl \\cite{PechsteinSchoeberl:11}. \n{For the electric potential, standard continuous (e.g. nodal or hierarchical) finite elements are used. }\nWe assume the tangential continuous Nedelec elements for the displacements and continuous nodal or hierarchical elements for the electric potential on hybrid meshes to be well known. The definition of arbitrary-order prismatic or hexahedral normal-normal continuous stress elements is postponed to Section \\ref{sec:curvilinear_elements}. Other element types were introduced in \\cite{PechsteinSchoeberl:11}. At present, it is only necessary to assume that admissible stresses $\\sigmat$ are such that $\\sigma_{nn}$ is continuous across all element interfaces.\n\\begin{comment}\nThe finite element space on simplicial meshes can be described by\n\n\\begin{eqnarray}\\label{eq:fespace}\n\\uv, \\delta \\uv &\\in& \\spaceV_h = \\{ \\vv: \\vv|_T \\in [P^k(T)]^3, \\vv_t \\text{ continuous}\\}, \\label{eq:spaceVh}\\\\\n\\sigmat, \\delta \\sigmat &\\in& \\spaceSigma_h = \\{ \\taut: \\taut|_T \\in [P^k(T)]^{3\\times3}_{sym}, \\tau_{nn} \\text{ continuous} \\},\\\\ \\label{eq:spaceSigmah}\n\\phi, \\delta \\phi &\\in& \\ \\{ \\phi:\\phi_h |_T \\in P^{k+1}(T),\\phi \\text{ continuous}\\}\\label{eq:wh}\n\\end{eqnarray}\n\\end{comment}\n\n\\begin{comment}\n$P^{k+1}(T)$ denotes a polynomial space of highest order $k$ on simplicial element $T$. We consider tetrahedral, hexahedral and prismatic elements. For the standard continuous space $W_h$ and the tangential continuous space $V_h$ elements and shape functions are provided by Zaglmayr in \\cite[p. \\ 92ff.]{Zaglmayr:2006}. For the normal-normal continuous space elements and shape functions are described in \\cite{PechsteinSchoeberl:11}. In the open-source software package Netgen\/Ngsolve (\\href{https:\/\/ngsolve.org}{https:\/\/ngsolve.org} ) all elements are implemented. Additionally two-\\-di\\-men\\-sional quadrilateral and triangular elements are provided. 
\n\\end{comment}\n\nThe variational formulation of the elastic problem based on Reissner's principle is the following: find $\\uv$ satisfying $\\uv_t = \\mathbf{0}$ on $\\Gamma_1$ and $\\sigmat$ satisfying $\\sigma_{nn} = 0$ on $\\Gamma_2$ such that\n\\begin{align}\n\\int_\\Omega \\St :\\sigmat : \\delta \\sigmat\\, d\\Omega - \\langle \\epst(\\uv), \\delta \\sigmat\\rangle -\\langle \\epst(\\delta \\uv), \\sigmat \\rangle &=\\ \\int_\\Omega \\fv \\cdot \\delta \\uv\\,d\\Omega\n\\label{eq:TDNNS}\n\\end{align}\nfor all admissible virtual displacements $\\delta \\uv $ and virtual stresses $\\delta\\sigmat $ which satisfy the corresponding homogeneous essential boundary conditions. In (\\ref{eq:TDNNS}), instead of the integral $\\int_\\Omega \\epst(\\uv) : \\sigmat\\, d\\Omega$, the duality product $\\langle \\epst(\\uv), \\sigmat\\rangle$ occurs. This is due to the fact that the displacement field $\\uv$ is discontinuous, and therefore gaps between elements may arise. On finite element meshes, the duality product can be computed element-wise by surface and volume integrals. Due to (additional) distributional parts of the strain, the surface integrals in (\\ref{eq:defdiv1})-(\\ref{eq:defdiv2}) do not vanish. The duality product is well defined only if the stress field is normal-normal continuous,\n\n\\begin{eqnarray}\n\\langle\\epst(\\uv), \\sigmat \\rangle &=& \\sum_{T \\in \\mathcal{T}} \\Big( \\int_T \\sigmat : \\epst(\\uv)\\, d\\Omega - \\int_{\\partial T} \\sigma_{nn}\\, u_n\\, d\\Gamma \\Big) \\label{eq:defdiv1}\\\\\n&=& \\sum_{T \\in \\mathcal{T}} \\Big( -\\int_T \\opdiv \\sigmat \\cdot \\uv\\, d\\Omega + \\int_{\\partial T} \\sigmat_{nt}\\cdot \\uv_t\\, d\\Gamma \\Big)\\ = \\\n-\\langle \\opdiv \\sigmat, \\uv\\rangle. \\label{eq:defdiv2}\n\\end{eqnarray}\n\nEquations (\\ref{eq:defdiv1})-(\\ref{eq:defdiv2}) can be shown by partial integration, using the continuity of the normal normal stress and of the tangential displacement; this is shown in detail in \\cite{PechsteinSchoeberl:11} and \\cite{pechstein2018analysis}.\n\\par \n\n\\subsection{Finite element formulation for piezoelastic problem}\nTo obtain equations for the (fully) coupled problem of piezoelasticity, we eliminate the dielectric displacement from the balance equations (\\ref{eq:balance1})-(\\ref{eq:balance2}) using the constitutive equations (\\ref{eq:dtype1})-(\\ref{eq:dtype2}), and get\n\n\\begin{eqnarray}\n- \\St^{E}: \\sigmat + \\dt^T\\cdot \\nabla \\phi + \\epst &= \\mathbf{0}, \\label{eq:dif1}\\\\\n- \\opdiv \\sigmat &=\\fv, \\label{eq:dif2}\\\\\n-\\opdiv(\\dt: \\sigmat - \\epsilont^{\\sigma} \\cdot\\nabla \\phi) &= 0. \\label{eq:dif3}\n\\end{eqnarray}\n\n We multiply (\\ref{eq:dif1}) by a virtual stress $\\delta \\sigmat$, (\\ref{eq:dif2}) by a virtual displacement $\\delta \\uv $ and (\\ref{eq:dif3}) by a virtual electric potential $\\delta \\phi$ to get a variational formulation. The virtual quantities have to satisfy the corresponding homogeneous boundary conditions $\\delta u_t=0$ on $\\Gamma_1$, $\\delta \\sigma_{nn}=0$ on $\\Gamma_2$ and $\\delta\\phi=0$ on $\\Gamma_3$. 
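\n\nFor the purely elastic formulation (\\ref{eq:TDNNS}), the element-wise representation (\\ref{eq:defdiv1})-(\\ref{eq:defdiv2}) of the duality product translates quite directly into the Python interface of \\textit{Netgen\/NGSolve}. The following listing is only a structural sketch, not the implementation used for the results in this paper: it assumes a unit compliance tensor, a placeholder geometry with placeholder boundary labels, and relies on the tangential-continuous and normal-normal continuous spaces \\texttt{HCurl} and \\texttt{HDivDiv} provided by the package.\n\\begin{verbatim}\nfrom ngsolve import *\nfrom netgen.csg import unit_cube\n\nmesh = Mesh(unit_cube.GenerateMesh(maxh=0.3))   # placeholder geometry\nk = 2                                           # polynomial order\n\nV = HCurl(mesh, order=k, dirichlet='back')      # tangential-continuous displacements\nS = HDivDiv(mesh, order=k)                      # normal-normal continuous stresses\nfes = V*S\n(u, sigma), (du, dsigma) = fes.TnT()\n\nn = specialcf.normal(mesh.dim)\ndef tang(v):                                    # tangential part v_t = v - (v.n) n\n    return v - (v*n)*n\n\n# -<eps(u),dsigma> - <eps(du),sigma> is realized element-wise as in (defdiv2):\n# volume terms with div(sigma), element-boundary terms with sigma_nt . u_t\na = BilinearForm(fes, symmetric=True)\na += (InnerProduct(sigma, dsigma)               # compliance term, here S = identity\n      + div(sigma)*du + div(dsigma)*u)*dx\na += (-(sigma*n)*tang(du) - (dsigma*n)*tang(u))*dx(element_boundary=True)\n\nf = LinearForm(fes)\nf += CoefficientFunction((0, 0, -1))*du*dx      # body force\n\\end{verbatim}\nThe coupled piezoelectric terms derived below can be added along the same lines, with the electric potential discretized by standard continuous elements.\n\n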
After integration, using the identities in (\\ref{eq:defdiv1})-(\\ref{eq:defdiv2}), we get from (\\ref{eq:dif1}) and (\\ref{eq:dif2})\n\n\\begin{eqnarray}\n\\int_\\Omega (-\\St^{E}: \\sigmat + \\dt^T\\cdot \\nabla \\phi): \\delta \\sigmat\\,d\\Omega + \\langle\\epst, \\delta \\sigmat\\rangle &=& 0,\\label{eq:varform1}\\\\\n\\label{eq:varform2} \\langle \\delta \\epst, \\sigmat\\rangle &=& \\int_\\Omega \\fv \\cdot \\delta \\uv\\, d\\Omega.\n\\end{eqnarray}\n\nIntegration by parts in (\\ref{eq:dif3}), taking into account the homogeneous boundary condition on $\\Gamma_3$ ($\\delta\\phi=0$) and the natural one on $\\Gamma_4$ (no surface charges), leads to \n\n\\begin{align}\n\\int_\\Omega (\\dt: \\sigmat - \\epsilont^{\\sigma} \\cdot\\nabla \\phi) \\cdot \\nabla\\delta \\phi\\,d\\Omega = 0.\n\\end{align}\n\nSumming up, we finally arrive at \n\n\\begin{align}\n-\\int_\\Omega (\\St^{E}: \\sigmat - \\dt^T\\cdot \\nabla \\phi) : \\delta \\sigmat\\, d\\Omega + \\langle \\epst(\\uv), \\delta \\sigmat\\rangle + \\langle \\epst(\\delta \\uv), \\sigmat \\rangle + \\label{eq:V11}\\\\+\\int_\\Omega (\\dt: \\sigmat - \\epsilont^{\\sigma} \\cdot\\nabla \\phi) \\cdot \\nabla \\delta\\phi\\, d\\Omega\n=\\label{eq:V12} \\int_\\Omega \\fv \\cdot \\delta \\uv\\,d\\Omega.\n\\end{align}\n\nThe performance of thin prismatic elements has been demonstrated in \\cite{PechsteinMeindlhumerHumer:2018}. They have been shown to be free from locking when using at least order $k=2$ for the electric potential and linear elements for the mechanical quantities (stress and displacement). \n\n\n\\section{Curvilinear elements} \\label{sec:curvilinear_elements}\n\nTo implement the equations from the previous section in a computational code, we use the well-known concept of reference elements. The reference element is of unit size and not distorted. In contrast, elements in the finite element mesh are in general distorted, and may be curved in order to enhance the geometry approximation. Integration of the virtual work is then carried out after transformation to the reference element. Also, shape functions are defined on the reference element.\n\nIn Section~\\ref{sec_refelement}, we define the reference element for elements of prismatic and hexahedral shapes. Then we introduce shape functions for the stresses on these elements. Note that, for prismatic elements, we use fewer shape functions than proposed in the original paper \\cite{PechsteinSchoeberl:12}, while shape functions for hexahedral elements are introduced for the first time. In Section~\\ref{sec_transformation}, we describe in detail how quantities on the reference element are transformed to a distorted element in the mesh. Displacement shape functions are transformed by a covariant transformation, which preserves tangential degrees of freedom. From this transformation, we derive the correct transformation for the strain on a curvilinear element. Moreover, we present the correct transformation for the stress shape functions, as well as the transformation of the divergence of the stresses.\n\n\n\\subsection{Reference element and shape functions} \\label{sec_refelement}\n\nIn the following, we will provide shape functions of arbitrary order for displacements, stresses and electric potential on the reference hexahedron and prism. As we use the tensor product structure of these elements, we first describe the reference segment and reference triangle. All quantities associated to reference elements are denoted by a hat. 
The shape functions are then mapped to a possibly distorted element in the mesh by the corresponding transformations described in Section~\\ref{sec_transformation}.\n\nFor the electric potential, we use standard continuous hierarchical elements. For the displacements, we use tangential-continuous elements well known from electromagnetics \\cite{Nedelec:86}.\nAll elements are implemented in the open-source software package Netgen\/NGSolve \\cite{netgen}. The exact basis functions implemented there are described in detail in \\cite{Zaglmayr:2006}. We keep close to the notation adopted in that reference. The stress basis functions were introduced for tetrahedra in \\cite{PechsteinSchoeberl:11} and for prismatic elements in \\cite{PechsteinSchoeberl:12}. In the following, we provide an adapted set of stress basis functions for prisms and stress basis functions for hexahedral elements.\n\n\n\\paragraph{The reference segment}\nWe use the unit reference segment $\\hat T_{seg} = [0,1]$. For the reference coordinate $\\hat x$, we define\nthe barycentric coordinates or ``hat basis functions''\n\\begin{align}\n\\hat \\lambda_1(\\hat x) &= 1-\\hat x, & \\hat \\lambda_2(\\hat x) &= \\hat x,\n\\end{align}\nand the family of Legendre polynomials, $\\hat q_i = \\hat \\ell_i(\\hat x)$, where $i$ indicates the polynomial order.\n\n\\begin{figure}\n\\begin{center}\n\\psfrag{Ttrig}{$\\hat T_{trig}$}\n\\psfrag{hatt}{$\\hat T_{seg}$}\n\\psfrag{Tprism}{$\\hat T_{prism}$}\n\\psfrag{Thex}{$\\hat T_{hex}$}\n\\psfrag{lam1}{$\\hat \\lambda_1$}\n\\psfrag{lam2}{$\\hat \\lambda_2$}\n\\psfrag{x}{$\\hat x$}\n\\psfrag{y}{$\\hat y$}\n\\psfrag{z}{$\\hat z$}\n\\psfrag{E1}{$\\hat E_1$}\n\\psfrag{E2}{$\\hat E_2$}\n\\psfrag{E3}{$\\hat E_3$}\n\\psfrag{V1}{$\\hat V_1$}\n\\psfrag{V2}{$\\hat V_2$}\n\\psfrag{V3}{$\\hat V_3$}\n\\psfrag{Fz1}{$\\hat F^{\\hat z}_1$}\n\\psfrag{Fz2}{$\\hat F^{\\hat z}_2$}\n\\psfrag{Fx1}{$\\hat F^{\\hat x}_1$}\n\\psfrag{Fx2}{$\\hat F^{\\hat x}_2$}\n\\psfrag{Fy1}{$\\hat F^{\\hat y}_1$}\n\\psfrag{Fy2}{$\\hat F^{\\hat y}_2$}\n\\psfrag{Fxy1}{$\\hat F^{\\hat x\\hat y}_1$}\n\\psfrag{Fxy2}{$\\hat F^{\\hat x\\hat y}_2$}\n\\psfrag{Fxy3}{$\\hat F^{\\hat x\\hat y}_3$}\n\\includegraphics[width=0.7\\textwidth]{unitelements}\n\\end{center}\n\\caption{The various unit elements.} \\label{fig_refelements}\n\\end{figure}\n\\paragraph{The reference triangle}\n\nThe unit triangle shall be denoted by $\\hat T_{trig} = \\{(\\hat x, \\hat y): \\hat x \\geq0, \\hat y \\geq 0, \\hat x + \\hat y \\leq 1\\}$. For an enumeration of vertices and edges we refer to Figure~\\ref{fig_refelements}. Also on the triangle, we define barycentric coordinates \n\n\\begin{align}\n\\hat \\lambda_1(\\hat x, \\hat y) &= 1-\\hat x-\\hat y, & \\hat \\lambda_2(\\hat x, \\hat y) &= \\hat x, & \\hat \\lambda_3(\\hat x, \\hat y) &= \\hat y.\n\\end{align}\nWe need a family of polynomials on the edge $E_\\gamma$ between vertices $V_\\alpha$ and $V_\\beta$ that is extended to the whole triangle, e.g. the family of scaled Legendre polynomials\n\\begin{align}\n\\hat \\ell^S_{\\gamma,i}(\\hat x, \\hat y) &= \n\\hat \\ell_i\\left(\\frac{\\hat \\lambda_\\alpha - \\hat \\lambda_\\beta}{\\hat \\lambda_\\alpha + \\hat \\lambda_\\beta} \\right) \n(\\hat \\lambda_\\alpha + \\hat \\lambda_\\beta )^{i}.\n\\end{align}\nA family of bivariate polynomials of polynomial order $i+j$ on the triangle can, e.g., 
be realized by\n\\begin{align}\n\\hat q_{ij}(\\hat x, \\hat y) = \\hat \\ell^S_{3,i} \\hat \\ell_j(\\hat \\lambda_3 - \\hat \\lambda_1 - \\hat \\lambda_2).\n\\end{align}\nOf course, any other family of polynomials on the triangle may be used. The conditioning of the finite element matrices for high polynomial orders is affected by this choice, but this is not within the scope of the current contribution.\n\n\n\nOn the triangle, we introduce three unit tensor fields that have unit normal stress on one edge and zero normal stress on the other edges. Multiplying these unit tensors by (scalar) polynomials, we arrive at the desired hierarchical shape functions on the triangle; they can be found in \\cite{PechsteinSchoeberl:11}. We set the unit tensors\n\\begin{align}\n\\hat \\St_1 &= \\left[ \\begin{array}{cc} 0 & 1 \\\\ 1 & 0 \\end{array} \\right], &\n\\hat \\St_2 &= \\left[ \\begin{array}{cc} 1 & -1\/2 \\\\ -1\/2 & 0 \\end{array} \\right], &\n\\hat \\St_3 &= \\left[ \\begin{array}{cc} 0 & -1\/2 \\\\ -1\/2 & 1 \\end{array} \\right].\n\\end{align}\n\n\\paragraph{The reference prism}\n\nThe reference prism is constructed as a tensor product of the reference triangle (for the $\\hat x$ and $\\hat y$ coordinates) and the reference segment (for the $\\hat z$ coordinate): $\\hat T_{prism} = \\hat T_{trig} \\otimes \\hat T_{seg} = \\{(\\hat x, \\hat y, \\hat z): (\\hat x, \\hat y) \\in \\hat T_{trig} \\text{ and } \\hat z \\in \\hat T_{seg}\\}$. The reference prism is depicted in Figure~\\ref{fig_refelements}.\n\nWe define the local reference shape functions for the element of order $p$.\nOn the prism, we have shape functions associated to each of the three quadrilateral faces $\\hat F^{\\hat x \\hat y}_1$, $\\hat F^{\\hat x \\hat y}_2$ and $\\hat F^{\\hat x \\hat y}_3$, as well as shape functions associated to the two triangular faces $\\hat F^{\\hat z}_1$ and $\\hat F^{\\hat z}_2$, and shape functions that have zero normal stress and are associated with the element itself. The last class of shape functions is element-local and can be eliminated directly at assembly. 
We use\n\\begin{itemize}\n\\item for each quadrilateral face $\\hat F^{\\hat x \\hat y}_\\gamma$, $(p+1)^2$ shape functions\n\\begin{align}\n\\hat \\Nt^{\\sigma,\\hat F^{\\hat x \\hat y}_\\gamma} _{ik} &= \\left[ \\begin{array}{cc}\n\\hat \\ell_{\\gamma,i}^S(\\hat x, \\hat y) \\hat \\ell_k(\\hat z) \\hat\\St_\\gamma & \\nullv \\\\ \\nullv & 0\n\\end{array}\\right], & \n\\begin{array}{l}0 \\leq i,k \\leq p, \\\\ \\gamma = 1,2,3 .\\end{array}\n\\end{align}\n\\item for each triangular face $\\hat F^{\\hat z}_\\gamma$, $\\frac12(p+1)(p+2)$ shape functions\n\\begin{align}\n\\hat \\Nt^{\\sigma,\\hat F^{\\hat z}_\\gamma} _{ij} &=\n\\hat q_{ij}(\\hat x, \\hat y) \\hat \\lambda_{\\gamma}(\\hat z)\\ \\ev_{\\hat z} \\otimes^s \\ev_{\\hat z}\n, & \\begin{array}{l}0 \\leq i+j \\leq p, \\\\ \\gamma = 1,2 .\\end{array}\n\\end{align}\n\\item three types of element-local shape functions with vanishing normal stress, each associated to one block of the symmetric three-by-three tensor,\n\\begin{align}\n\\hat \\Nt^{\\sigma,\\hat x \\hat y} _{\\gamma ijk} &= \\left[ \\begin{array}{cc}\n\\hat q_{ij}(\\hat x, \\hat y)\\hat \\lambda_\\gamma(\\hat x, \\hat y) \\hat \\ell_k(\\hat z) \\hat\\St_\\gamma & \\nullv \\\\ \\nullv & 0\n\\end{array}\\right], & \n\\begin{array}{l}\n0 \\leq i+j \\leq p-1,\\\\ \n0 \\leq k \\leq p+1,\\\\ \n\\gamma = 1,2,3, \\end{array}\\\\\n\\hat \\Nt^{\\sigma,\\hat z}_{ijk} &=\n\\hat q_{ij}(\\hat x, \\hat y) \\hat \\lambda_{1}(\\hat z) \\hat \\lambda_{2}(\\hat z) \\hat \\ell_k(\\hat z)\\ \\ev_{\\hat z} \\otimes^s \\ev_{\\hat z}, &\n\\begin{array}{l}\n0 \\leq i+j \\leq p+1,\\\\ \n0 \\leq k \\leq p-1,\\end{array}\\\\\n\\hat \\Nt^{\\sigma,\\hat \\xi \\hat z} _{\\hat \\xi ijk} &= \\hat q_{ij}(\\hat x, \\hat y) \\hat \\ell_k(\\hat z)\\ \\ev_{\\hat \\xi} \\otimes^s \\ev_{\\hat z}, & \n\\begin{array}{l}\n0 \\leq i+j \\leq p,\\\\ \n0 \\leq k \\leq p, \\\\\n\\hat \\xi = \\hat x, \\hat y.\\end{array}\n\\end{align}\n\\end{itemize}\nNote that some components of the shape functions are of order $p+1$; however, the element normal stresses are of order $p$.\n\\paragraph{The reference hexahedron}\nThe reference hexahedron is again constructed in tensor product style: $\\hat T_{hex} = \\hat T_{seg} \\otimes \\hat T_{seg} \\otimes \\hat T_{seg} $.\nThe reference hexahedron is depicted in Figure~\\ref{fig_refelements}.\n\nAgain, we distinguish between shape functions that are associated with one of the quadrilateral faces, with zero normal stress on all other faces, and shape functions that have zero normal stress on all faces. Additionally, the shape functions are built in such a way that the normal stress of the face-based shape functions corresponds to the normal stress distribution of the face-based shape functions for prisms. This way, it is possible to use prismatic and hexahedral elements in the same mesh. 
The shape functions for tetrahedral elements, which are also implemented in NGSolve but not discussed here, match accordingly, such that hybrid meshes can be used.\nWe use the shape functions\n\\begin{itemize}\n\\item for each of the faces $\\hat F^{\\hat \\xi}_\\gamma$ with $\\hat \\xi \\in \\{\\hat x, \\hat y, \\hat z\\}$ and $\\gamma = 1,2$\n\\begin{align}\n\\hat \\Nt^{\\sigma,\\hat F^{\\hat \\xi}_\\gamma}_{ij} &= \\hat \\ell_i(\\hat \\eta) \\hat \\ell_j(\\hat \\zeta) \\hat \\lambda_\\gamma(\\hat \\xi)\\ \\ev_{\\hat \\xi} \\otimes^s \\ev_{\\hat \\xi}, & \\begin{array}{l}0 \\leq i,j \\leq p, \\\\ \\gamma = 1,2,\\\\ \\{\\hat \\xi, \\hat \\eta, \\hat \\zeta\\} = \\{\\hat x, \\hat y, \\hat z\\}.\\end{array}\\end{align}\n\\item the element-local shape functions with vanishing normal stress\n\\begin{align}\n\\hat \\Nt^{\\sigma,\\hat \\xi\\hat \\xi}_{ijk} &= \\hat \\ell_j(\\hat \\eta) \\hat \\ell_k(\\hat \\zeta) \\hat \\lambda_1(\\hat \\xi)\\hat \\lambda_2(\\hat \\xi) \\hat \\ell_i(\\hat \\xi)\\ \\ev_{\\hat \\xi} \\otimes^s \\ev_{\\hat \\xi}, & \\begin{array}{l}0 \\leq j,k \\leq p+1, \\\\ 0 \\leq i \\leq p-1,\\\\ \\{\\hat \\xi, \\hat \\eta, \\hat \\zeta\\} = \\{\\hat x, \\hat y, \\hat z\\},\\end{array}\\\\\n\\hat \\Nt^{\\sigma,\\hat \\xi\\hat \\eta}_{ijk} &= \\hat \\ell_i(\\hat \\xi)\\hat \\ell_j(\\hat \\eta) \\hat \\ell_k(\\hat \\zeta) \\ \\ev_{\\hat \\xi} \\otimes^s \\ev_{\\hat \\eta}, & \\begin{array}{l}0 \\leq i,j \\leq p, \\\\ 0 \\leq k \\leq p+1,\\\\ \\{\\hat \\xi, \\hat \\eta, \\hat \\zeta\\} = \\{\\hat x, \\hat y, \\hat z\\}.\\end{array}\n\\end{align}\n\n\\end{itemize}\n\n\\subsection{Transformations} \\label{sec_transformation}\n\nLet now $\\hat T$ denote the reference element of any type (triangular, quadrilateral, prismatic, tetrahedral or hexahedral), and let $T$ be a corresponding element in the finite element mesh. We assume $\\Phiv_T(\\hat{\\xv})$ to be a smooth one-to-one mapping from $\\hat{T}$ to $T$. A point in the reference element $\\hat{\\xv} \\in \\hat{T}$ is mapped by $\\Phiv_T$ to some point $\\xv\\in T$. In Figure \\ref{fig:tranform} this transformation is illustrated. Note that, in general, $\\Phiv_T$ is nonlinear. The Jacobian of the transformation (similar to the deformation gradient tensor) is denoted by\n\n\n\\begin{equation}\\label{eq:Ft}\n\t\\Ft_T(\\hat{\\xv})=\\nabla_{\\hat{\\xv}} \\Phiv_T(\\hat{\\xv}).\n\\end{equation}\n\n\\begin{figure}[htp]\n\t\\begin{center}\n\t\t\\psfrag{Tdach}{$\\widehat{T}$}\n\t\t\\psfrag{xd1}{$1$}\n\t\t\\psfrag{yd1}{$1$}\n\t\t\\psfrag{xd}{$\\widehat{x}_1$}\n\t\t\\psfrag{yd}{$\\widehat{x}_2$}\n\t\t\\psfrag{x1}{$x_1$}\n\t\t\\psfrag{y1}[r]{$x_2$}\n\t\t\\psfrag{T}{$T$}\n\t\t\\psfrag{phit}{$\\Phi_T$,~$\\Ft_T=\\nabla\\Phi_T$}\n\t\t\\includegraphics[width=0.7\\textwidth]{transformation.eps}\n\t\\end{center}\n\t\\caption{Transformation from the unit element to a deformed element}\n\t\\label{fig:tranform}\n\\end{figure}\n\n The determinant of the Jacobian is denoted by $J_T=\\det(\\Ft_T)$. Furthermore, the Hessian of the $i^{th}$ component of $\\Phiv_T $, $\\Ht^i_T$, is given by\n\n\\begin{equation}\\label{eq:hessian}\n\t(\\Ht_T^i)_{jk}= \\frac{\\partial^2\\Phiv_{T,i}}{\\partial\\hat{x}_j \\partial\\hat{x}_k}(\\hat{\\xv}). \n\\end{equation}\n\nAs already mentioned, the degrees of freedom of the method are $\\uv_t$ and $\\sigma_{nn}$. These degrees of freedom need to be preserved when shape functions are transferred to the curvilinear element. Assume $\\hat{\\Nt}^{u}$ and $\\hat{\\Nt}^{\\sigma}$ to be one specific displacement and stress shape function, respectively. 
To calculate a finite element function, we transform these shape functions like\n\n\\begin{align}\n\t\\Nt^u &=\\Ft_T^{-T}\\cdot\\hat{\\Nt}^u,\\label{eq:transform_u}\\\\\n\t\\Nt^{\\sigma} &=\\frac{1}{J_T^2}\\Ft_T \\cdot\\hat{\\Nt}^{\\sigma} \\cdot\\Ft_T^T.\\label{eq:transform_sigma}\n\\end{align}\nThe displacement and stress fields are then weighted sums of these basis functions. \\par \n The first transformation, for the displacement shape functions \\eqref{eq:transform_u}, is well known from finite elements for the electric field in Maxwell's equations, see e.g. \\cite{Monk:03}. The linear strain $\\epst$ of a finite element function is the weighted sum of the strains of such basis functions $\\epst(\\Nt^u)$. On curvilinear elements, this strain of a basis function has to be evaluated using the chain rule. In contrast to standard elements, it depends not only on the reference strain $\\hat{\\epst}(\\hat{\\Nt}^u)=\\frac{1}{2}\\left(\\nabla_{\\hat{\\xv}}\\hat{\\Nt}^u+\\nabla_{\\hat{\\xv}}\\hat{\\Nt}^{u,T}\\right)$ but also on the shape function $\\hat{\\Nt}^u$ itself,\n\n\\begin{equation}\\label{eq:strain_u}\n\t\\epst(\\Nt^u)=\\Ft_T^{-1}\\cdot\\hat{\\epst}(\\hat{\\Nt}^u)\\cdot\\Ft_T^{-T} + \\sum_{i=1}^{d}\\hat{\\Nt}^u_i\\Ft_T^{-1}\\cdot\\Ht^i_T\\cdot\\Ft_T^{-T}.\n\\end{equation}\n\nThe second part of (\\ref{eq:strain_u}) only vanishes if $\\Ft_T$ is constant, which is the case only for affine linear elements.\\par\nThe transformation for the stress shape functions (\\ref{eq:transform_sigma}) can be found in \\cite{PechsteinSchoeberl:11}. Note that, compared to the Piola transformation, we have a factor of $\\frac{1}{J_T^2}$ instead of $\\frac{1}{J_T}$.\n For the (mechanical) balance equation (\\ref{eq:balance1}) the divergence of the stress tensor has to be derived. Again, it consists of a weighted sum of $\\opdiv_{\\xv}(\\Nt^{\\sigma})$. From basic calculus for the standard Piola transformation, we deduce \n\n\\begin{equation}\n\t \\opdiv_{\\xv}(\\Nt^{\\sigma})=\\opdiv_{\\xv}\\left(\\frac{1}{J_T^2}\\Ft_T\\cdot\\hat{\\Nt}^{\\sigma}\\cdot\\Ft_T^T\\right)=\\frac{1}{J_T}\\opdiv_{\\hat{\\xv}}\\left(\\tilde{\\Ft}_T\\cdot\\hat{\\Nt}^{\\sigma}\\right),\n \\end{equation}\n where $\\tilde\\Ft_T:=\\frac{1}{J_T}\\Ft_T$. By application of the product rule we get the $i^{th}$ component of the divergence of the stress tensor as\n\\begin{equation}\n\t\\label{eq:div_sigma}\n\t\\left(\\opdiv_{\\xv}\\Nt^{\\sigma}\\right)_i=\\frac{1}{J_T^2}\\Ft_{T,ij}\\left(\\opdiv_{\\hat{\\xv}}\\hat{\\Nt}^{\\sigma}\\right)_j+\\frac{1}{J_T}\\frac{\\partial\\tilde{\\Ft}_{T,ik}}{\\partial\\hat{x}_j}\\hat{\\Nt}^{\\sigma}_{kj}.\n\\end{equation}\n \n \\begin{comment}\n From the Piola transformation for the divergence of a tensor $\\taut=\\frac{1}{J_T}\\hat{\\taut}~ \\Ft_T$ in the actual configuration the identity\n\n\\begin{equation}\\label{eq:piolatransfomation}\n\t\\opdiv_x(\\taut)=\\opdiv_x \\left(\\frac{1}{J_T}\\cdot\\hat{\\taut}~ \\Ft_T^T\\right)=\\frac{1}{J_T}\\opdiv_{\\hat{x}}(\\hat{\\taut})\n\\end{equation}\n\nis known.\n We introduce $\\tilde{\\Ft}_T=\\frac{1}{J_T}\\Ft_T$ and use (\\ref{eq:piolatransfomation}) and the transformation (\\ref{eq:transform_sigma}) get the divergence of the stress tensor \n\n\\begin{eqnarray}\n\t\\opdiv_x(\\sigmat) &=& \\opdiv_x \\left( \\frac{1}{J_T}\\tilde{\\Ft}_T\\cdot\\hat{\\sigmat} \\cdot\\Ft_T^T \\right) \\\\\n\t&=& \\frac{1}{J_T} \\opdiv_{\\hat{x}}\\left(\\tilde{\\Ft}_T \\cdot\\hat{\\sigmat} \\right)\\label{eq:div_sigma2}\n\\end{eqnarray}\n\nin the reference system $\\hat{x}$. 
We further treat (\\ref{eq:div_sigma2}) by application of the product rule. We get the $i^{th}$ component of the divergence of the stress tensor , using the summation convention,\n\n\\begin{equation}\n(\\opdiv_x(\\sigmat))_i = \\frac{1}{J_T^2}\\Ft_{T,ij}(\\opdiv_{\\hat{x}}\\hat{\\sigmat})_j + \\frac{1}{J_T}\\frac{\\partial\\tilde{\\Ft}_{T,ik}}{\\partial \\hat{x}_j}\\hat{\\sigmat}_{kj}.\n\\end{equation}\n\\end{comment}\nAssuming that the shape functions $\\hat{\\Nt}^{\\sigma}$ as well as their divergence $\\opdiv_{\\hat{\\xv}}\\hat{\\Nt}^{\\sigma}$ are known analytically on the reference element, formula (\\ref{eq:div_sigma}) allows to evaluate $\\opdiv_{\\xv}\\Nt^{\\sigma}$ for a curvilinear mesh element. \n\n\\begin{comment}\n\\subsection{Considering predeformations}\nPiezoelectric materials are typically delivered as flat patches, but often applied to curved structures. Therefore the flat patch has to be deformed. In this section we show how such a deformation can be considered within our method. \n\nWe use common notation from large deformation elasticity, where a material point $\\Xv$ is mapped to the spatial point $\\xv= \\Phiv_T (\\Xv)$ by the (possibly large) predeformation of the piezoelectric material. The according displacement is added as $\\uv_P=\\Phiv_P-\\Xv $, and the deformation gradient of the predeformation is given as $\\Ft_P=\\nabla_{\\Xv}\\Phiv_P$.\nThe displacement of a point with initial location $\\Xv$ is the sum of the predeformation and the (small, incremental) deformation $\\tilde{\\uv}$,\n\n\\begin{equation}\\label{eq:total_displ}\n\\uv(\\Xv)=\\uv_P(\\Xv)+\\tilde{\\uv}(\\Phiv_P(\\Xv)).\n\\end{equation}\n\nNote, that $\\tilde{\\uv}$ is supposed to be small. It is seen as a function of the spatial coordinate $\\xv$ rather than the material coordinate $\\Xv$.\n\n The constitutive equations (\\ref{eq:etype1}) - (\\ref{eq:etype2}) are given for linear piezoelasticity. We assume, when considering predeformations, the constitutive equations connect Green's strain $\\Et_G$, Piola-Kirchhoff stress $\\sigmat_M$, and electric field $\\Ev_M=-\\nabla_{\\Xv}\\phi$ in material coordinates. \n Moreover we assume that strain, stress and electric field caused by the predeformations are small enough that the conditions for linear constitutive equations are not violated. \n\n In the variational formulation, we use the Cauchy stress $\\sigmat$, the incremental linearized strain $\\tilde{\\epst}$ and the electric potential in global ($x$) coordinates.\n The relation between the stress tensors and the gradients of the electric field in material and global coordinates is\n\n\\begin{eqnarray}\\label{eq:piola-transform}\n\\sigmat_M &=&J_P \\Ft_P^{-1}\\cdot\\sigmat\\cdot \\Ft_P^{-T} \\\\\n\\Ev_M&=&\\Ft_P^T\\cdot \\Ev =\\Ft_P^T\\cdot \\nabla_x \\phi.\n\\end{eqnarray}\n\nThe total deformation gradient tensor is \n\\begin{equation}\\label{eq:total_def_grad}\n\t\\Ft=\\nabla_{\\Xv} \\Phiv_P +\\nabla_x \\tilde{\\uv}\\cdot\\nabla_{\\Xv} \\Phiv_P \\ = (\\It+ \\nabla_{\\xv}\\tilde{\\uv})\\cdot\\Ft_P\n\\end{equation}\nwith the identity tensor $\\It$. 
The total strain is \n \\begin{eqnarray}\n \\Et_G&=&\\frac{1}{2}(\\Ft^T\\cdot\\Ft-\\It)\n \\\\&=& \\frac{1}{2}(\\Ft_P^T\\cdot\\Ft_P-\\It)+\\frac{1}{2}\\left(\\Ft_P^T\\cdot(\\nabla_{\\xv}\\tilde{\\uv}+\\nabla_{\\xv}\\tilde{\\uv}^T+ \\nabla_{\\xv}\\tilde{\\uv}^T\\cdot\\nabla_{\\xv}\\tilde{\\uv})\\cdot \\Ft_P\\right)\\label{eq:total_strain}.\n \\end{eqnarray}\nWith the assumption of small incremental strains, the quadratic term $\\nabla_x\\tilde{u}^T\\cdot\\nabla_x\\tilde{u}$ in equation (\\ref{eq:total_strain}) can be neglected. With the definition of the strain caused by the predeformation $\\Et_P=\\frac{1}{2}(\\Ft_P^T\\cdot\\Ft_P-\\It)$ and $\\tilde{\\epst}=\\frac{1}{2}(\\nabla_{\\xv}\\tilde{\\uv}+\\nabla_{\\xv}\\tilde{\\uv}^T)$ we get \n\n\\begin{equation}\\label{eq:totoal_strain2}\n\t\\Et_G=\\Et_P+\\Ft_P^T\\cdot\\tilde{\\epst}\\cdot\\Ft_P.\n\\end{equation}\n\nIn the variational formulation, $\\Et_P$ is treated on the right hand side as an eigenstrain. We show the effect of the predeformation for the mechanical and coupling parts of the variational formulation and apply equation (\\ref{eq:piola-transform}) and equation (\\ref{eq:totoal_strain2}) in equations (\\ref{eq:V11})-(\\ref{eq:V12}). We get\n\n\\begin{eqnarray}\\label{eq:TDNNS_update}\n\\int_\\Omega (-\\St^{E}: J_P \\Ft_P^{-1}\\cdot\\sigmat\\cdot \\Ft_P^{-T} + \\dt^T\\cdot \\Ft_P^T\\cdot \\nabla \\phi): J_P \\cdot\\Ft_P^{-1}\\delta\\sigmat\\cdot\\Ft_P^{-T}\\,d\\Omega + \\langle\\tilde{\\epst}, \\delta \\sigmat\\rangle + \\langle\\delta {\\epst}, \\sigmat\\rangle +\\\\ \n+ \\int_\\Omega (J_P\\dt :\\Ft_P^{-1}\\cdot\\sigmat\\cdot \\Ft_P^{-T} - \\epsilont^{\\sigma}\\cdot \\Ft_P^{T}\\cdot \\nabla_x \\phi) \\cdot \\Ft_P^T\\cdot\\nabla \\delta\\phi\\,d\\Omega+\\int_{\\Omega} J_P \\Et_P:\\Ft_P\\cdot \\delta \\sigmat\\cdot \\Ft_P^T \\,d\\Omega=\\delta w^{ext} \\label{eq:TDNNS_update_rhs} .\n\\end{eqnarray}\nFinally, note that in the implementation only $\\Ft_P$ is needed; therefore, the deformation $\\Phiv_P$ does not have to be known explicitly. \\par \n\n\n\n\n\n\nThe same technique can be used if the constitutive equations of non-isotropic materials are given in a local material coordinate system different from the global one. Then $\\Ft_P$ is a rotation matrix and $J_P=1$. \n\\end{comment}