Dataset columns (one value per record, in order):
title: string, 15 to 153 characters
url: string, 97 characters
authors: string, 6 to 328 characters
detail_url: string, 97 characters
tags: string, 1 distinct value
Bibtex: string, 54 characters
Paper: string, 93 characters
Reviews And Public Comment »: string, 63 to 65 characters
Supplemental: string, 100 characters
abstract: string, 310 to 2.42k characters
Supplemental Errata: string, 1 distinct value
Beyond Value-Function Gaps: Improved Instance-Dependent Regret Bounds for Episodic Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2021/hash/000c076c390a4c357313fca29e390ece-Abstract.html
Christoph Dann, Teodor Vanislavov Marinov, Mehryar Mohri, Julian Zimmert
https://papers.nips.cc/paper_files/paper/2021/hash/000c076c390a4c357313fca29e390ece-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11624-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/000c076c390a4c357313fca29e390ece-Paper.pdf
https://openreview.net/forum?id=CU8qQMhB3dh
https://papers.nips.cc/paper_files/paper/2021/file/000c076c390a4c357313fca29e390ece-Supplemental.pdf
We provide improved gap-dependent regret bounds for reinforcement learning in finite episodic Markov decision processes. Compared to prior work, our bounds depend on alternative definitions of gaps. These definitions are based on the insight that, in order to achieve a favorable regret, an algorithm does not need to learn how to behave optimally in states that are not reached by an optimal policy. We prove tighter upper regret bounds for optimistic algorithms and accompany them with new information-theoretic lower bounds for a large class of MDPs. Our results show that optimistic algorithms cannot achieve the information-theoretic lower bounds even in deterministic MDPs unless there is a unique optimal policy.
null
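Editorial note on the entry above: the classical value-function gap that prior instance-dependent analyses are phrased in, and the typical shape of a gap-dependent bound, can be restated as background (standard notation with optimal value and action-value functions $V^*_h$, $Q^*_h$, horizon $H$, and $T$ episodes; the paper's alternative gap definitions are not reproduced here):

```latex
\[
\mathrm{gap}_h(s,a) \;=\; V^*_h(s) - Q^*_h(s,a),
\qquad
\mathrm{Regret}(T) \;\lesssim\;
\sum_{h,s,a:\ \mathrm{gap}_h(s,a) > 0} \frac{H \,\log T}{\mathrm{gap}_h(s,a)} .
\]
```

The abstract's insight is that summing over all positive-gap pairs can be pessimistic, since some of those states are never reached by an optimal policy and need not be learned accurately.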
Learning One Representation to Optimize All Rewards
https://papers.nips.cc/paper_files/paper/2021/hash/003dd617c12d444ff9c80f717c3fa982-Abstract.html
Ahmed Touati, Yann Ollivier
https://papers.nips.cc/paper_files/paper/2021/hash/003dd617c12d444ff9c80f717c3fa982-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11625-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/003dd617c12d444ff9c80f717c3fa982-Paper.pdf
https://openreview.net/forum?id=2a96Bf7Qdrg
https://papers.nips.cc/paper_files/paper/2021/file/003dd617c12d444ff9c80f717c3fa982-Supplemental.pdf
We introduce the forward-backward (FB) representation of the dynamics of a reward-free Markov decision process. It provides explicit near-optimal policies for any reward specified a posteriori. During an unsupervised phase, we use reward-free interactions with the environment to learn two representations via off-the-shelf deep learning methods and temporal difference (TD) learning. In the test phase, a reward representation is estimated either from reward observations or an explicit reward description (e.g., a target state). The optimal policy for that reward is directly obtained from these representations, with no planning. We assume access to an exploration scheme or replay buffer for the first phase. The corresponding unsupervised loss is well-principled: if training is perfect, the policies obtained are provably optimal for any reward function. With imperfect training, the sub-optimality is proportional to the unsupervised approximation error. The FB representation learns long-range relationships between states and actions, via a predictive occupancy map, without having to synthesize states as in model-based approaches. This is a step towards learning controllable agents in arbitrary black-box stochastic environments. This approach compares well to goal-oriented RL algorithms on discrete and continuous mazes, pixel-based MsPacman, and the FetchReach virtual robot arm. We also illustrate how the agent can immediately adapt to new tasks beyond goal-oriented RL.
null
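Editorial note on the entry above: a minimal sketch of the test phase it describes, assuming the unsupervised phase has already produced a forward map F and a backward map B (stubbed here as random arrays over a small discrete state-action space); the reward embedding and the greedy policy follow the general forward-backward recipe, but the exact parameterization and training are the paper's and are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, D = 10, 4, 8              # states, actions, embedding dimension

# Stand-ins for the learned representations (assumed already trained).
F = rng.normal(size=(S, A, D))  # forward embeddings (dependence on z simplified away)
B = rng.normal(size=(S, D))     # backward embeddings B(s')

# Test phase: a reward is specified a posteriori, e.g. reaching a target state.
reward = np.zeros(S)
reward[7] = 1.0

# Estimate a reward representation z from reward observations over states
# (a plain average here; a data-distribution weighting would be more faithful).
z = (reward[:, None] * B).mean(axis=0)

# The policy is obtained directly from the representations, with no planning:
# act greedily on the inner product between forward embeddings and z.
q_values = F @ z                # shape (S, A)
policy = q_values.argmax(axis=1)
print("greedy action per state:", policy)
```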
Matrix factorisation and the interpretation of geodesic distance
https://papers.nips.cc/paper_files/paper/2021/hash/007ff380ee5ac49ffc34442f5c2a2b86-Abstract.html
Nick Whiteley, Annie Gray, Patrick Rubin-Delanchy
https://papers.nips.cc/paper_files/paper/2021/hash/007ff380ee5ac49ffc34442f5c2a2b86-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11626-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/007ff380ee5ac49ffc34442f5c2a2b86-Paper.pdf
https://openreview.net/forum?id=gwP8pc1OgN_
https://papers.nips.cc/paper_files/paper/2021/file/007ff380ee5ac49ffc34442f5c2a2b86-Supplemental.pdf
Given a graph or similarity matrix, we consider the problem of recovering a notion of true distance between the nodes, and so their true positions. We show that this can be accomplished in two steps: matrix factorisation, followed by nonlinear dimension reduction. This combination is effective because the point cloud obtained in the first step lives close to a manifold in which latent distance is encoded as geodesic distance. Hence, a nonlinear dimension reduction tool, approximating geodesic distance, can recover the latent positions, up to a simple transformation. We give a detailed account of the case where spectral embedding is used, followed by Isomap, and provide encouraging experimental evidence for other combinations of techniques.
null
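Editorial note on the entry above: its two-step recipe (matrix factorisation, then nonlinear dimension reduction) can be illustrated with an adjacency spectral embedding followed by Isomap. The sketch below uses a random toy graph and scikit-learn's Isomap, and is not the paper's experimental setup.

```python
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
n, p, d = 200, 0.1, 2

# Toy similarity matrix: a symmetric Erdos-Renyi adjacency matrix.
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1)
A = A + A.T

# Step 1: matrix factorisation via spectral embedding
# (top-d eigenpairs by absolute eigenvalue, scaled by sqrt of |eigenvalue|).
vals, vecs = np.linalg.eigh(A)
idx = np.argsort(np.abs(vals))[::-1][:d]
X_hat = vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

# Step 2: nonlinear dimension reduction; Isomap approximates geodesic
# distance on the manifold near which the point cloud concentrates.
positions = Isomap(n_neighbors=10, n_components=d).fit_transform(X_hat)
print(positions.shape)   # (n, d) latent positions, recovered up to a simple transformation
```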
UniDoc: Unified Pretraining Framework for Document Understanding
https://papers.nips.cc/paper_files/paper/2021/hash/0084ae4bc24c0795d1e6a4f58444d39b-Abstract.html
Jiuxiang Gu, Jason Kuen, Vlad I Morariu, Handong Zhao, Rajiv Jain, Nikolaos Barmpalios, Ani Nenkova, Tong Sun
https://papers.nips.cc/paper_files/paper/2021/hash/0084ae4bc24c0795d1e6a4f58444d39b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11627-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/0084ae4bc24c0795d1e6a4f58444d39b-Paper.pdf
https://openreview.net/forum?id=UMcd6l1msUK
null
Document intelligence automates the extraction of information from documents and supports many business applications. Recent self-supervised learning methods on large-scale unlabeled document datasets have opened up promising directions towards reducing annotation efforts by training models with self-supervised objectives. However, most of the existing document pretraining methods are still language-dominated. We present UDoc, a new unified pretraining framework for document understanding. UDoc is designed to support most document understanding tasks, extending the Transformer to take multimodal embeddings as input. Each input element is composed of words and visual features from a semantic region of the input document image. An important feature of UDoc is that it learns a generic representation by making use of three self-supervised losses, encouraging the representation to model sentences, learn similarities, and align modalities. Extensive empirical analysis demonstrates that the pretraining procedure learns better joint representations and leads to improvements in downstream tasks.
null
Finding Discriminative Filters for Specific Degradations in Blind Super-Resolution
https://papers.nips.cc/paper_files/paper/2021/hash/008bd5ad93b754d500338c253d9c1770-Abstract.html
Liangbin Xie, Xintao Wang, Chao Dong, Zhongang Qi, Ying Shan
https://papers.nips.cc/paper_files/paper/2021/hash/008bd5ad93b754d500338c253d9c1770-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11628-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/008bd5ad93b754d500338c253d9c1770-Paper.pdf
https://openreview.net/forum?id=az0BBDjDvwD
https://papers.nips.cc/paper_files/paper/2021/file/008bd5ad93b754d500338c253d9c1770-Supplemental.pdf
Recent blind super-resolution (SR) methods typically consist of two branches, one for degradation prediction and the other for conditional restoration. However, our experiments show that a one-branch network can achieve comparable performance to the two-branch scheme. Then we wonder: how can one-branch networks automatically learn to distinguish degradations? To find the answer, we propose a new diagnostic tool -- Filter Attribution method based on Integral Gradient (FAIG). Unlike previous integral gradient methods, our FAIG aims at finding the most discriminative filters instead of input pixels/features for degradation removal in blind SR networks. With the discovered filters, we further develop a simple yet effective method to predict the degradation of an input image. Based on FAIG, we show that, in one-branch blind SR networks, 1) we can find a very small number (1%) of discriminative filters for each specific degradation; 2) the weights, locations, and connections of the discovered filters are all important in determining the specific network function; and 3) the task of degradation prediction can be implicitly realized by these discriminative filters without explicit supervised learning. Our findings can not only help us better understand network behaviors inside one-branch blind SR networks, but also provide guidance on designing more efficient architectures and diagnosing networks for blind SR.
null
Counterfactual Explanations Can Be Manipulated
https://papers.nips.cc/paper_files/paper/2021/hash/009c434cab57de48a31f6b669e7ba266-Abstract.html
Dylan Slack, Anna Hilgard, Himabindu Lakkaraju, Sameer Singh
https://papers.nips.cc/paper_files/paper/2021/hash/009c434cab57de48a31f6b669e7ba266-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11629-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/009c434cab57de48a31f6b669e7ba266-Paper.pdf
https://openreview.net/forum?id=iaO_IH7CnGJ
https://papers.nips.cc/paper_files/paper/2021/file/009c434cab57de48a31f6b669e7ba266-Supplemental.pdf
Counterfactual explanations are emerging as an attractive option for providing recourse to individuals adversely impacted by algorithmic decisions. As they are deployed in critical applications (e.g. law enforcement, financial lending), it becomes important to ensure that we clearly understand the vulnerabilities of these methods and find ways to address them. However, there is little understanding of the vulnerabilities and shortcomings of counterfactual explanations. In this work, we introduce the first framework that describes the vulnerabilities of counterfactual explanations and shows how they can be manipulated. More specifically, we show that counterfactual explanations may converge to drastically different counterfactuals under a small perturbation, indicating that they are not robust. Leveraging this insight, we introduce a novel objective to train seemingly fair models where counterfactual explanations find much lower cost recourse under a slight perturbation. We describe how these models can unfairly provide low-cost recourse for specific subgroups in the data while appearing fair to auditors. We perform experiments on loan and violent crime prediction data sets where certain subgroups achieve up to 20x lower cost recourse under the perturbation. These results raise concerns regarding the dependability of current counterfactual explanation techniques, which we hope will inspire investigations in robust counterfactual explanations.
null
From Canonical Correlation Analysis to Self-supervised Graph Neural Networks
https://papers.nips.cc/paper_files/paper/2021/hash/00ac8ed3b4327bdd4ebbebcb2ba10a00-Abstract.html
Hengrui Zhang, Qitian Wu, Junchi Yan, David Wipf, Philip S Yu
https://papers.nips.cc/paper_files/paper/2021/hash/00ac8ed3b4327bdd4ebbebcb2ba10a00-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11630-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/00ac8ed3b4327bdd4ebbebcb2ba10a00-Paper.pdf
https://openreview.net/forum?id=X3TdREzbZN
https://papers.nips.cc/paper_files/paper/2021/file/00ac8ed3b4327bdd4ebbebcb2ba10a00-Supplemental.pdf
We introduce a conceptually simple yet effective model for self-supervised representation learning with graph data. It follows previous methods that generate two views of an input graph through data augmentation. However, unlike contrastive methods that focus on instance-level discrimination, we optimize an innovative feature-level objective inspired by classical Canonical Correlation Analysis. Compared with other works, our approach requires no parameterized mutual information estimator, additional projector, or asymmetric structures, and, most importantly, no negative samples, which can be costly. We show that the new objective essentially 1) aims at discarding augmentation-variant information by learning invariant representations, and 2) can prevent degenerated solutions by decorrelating features in different dimensions. Our theoretical analysis further provides an understanding of the new objective, which can be equivalently seen as an instantiation of the Information Bottleneck Principle under the self-supervised setting. Despite its simplicity, our method performs competitively on seven public graph datasets.
null
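Editorial note on the entry above: a minimal NumPy sketch of the feature-level objective it describes, with an invariance term between two augmented views plus a decorrelation term on the standardized feature dimensions of each view. The variable names and the trade-off weight lam are illustrative, not the paper's exact implementation.

```python
import numpy as np

def cca_ssg_loss(z1, z2, lam=1e-3):
    """Invariance + decorrelation objective on two views of node embeddings."""
    n, d = z1.shape
    # Standardize each feature dimension (zero mean, unit variance).
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-8)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-8)

    # Invariance: pull the two views of each node together,
    # discarding augmentation-variant information.
    invariance = ((z1 - z2) ** 2).sum() / n

    # Decorrelation: push each view's feature covariance toward identity,
    # which prevents degenerate (collapsed) solutions.
    c1 = (z1.T @ z1) / n
    c2 = (z2.T @ z2) / n
    eye = np.eye(d)
    decorrelation = ((c1 - eye) ** 2).sum() + ((c2 - eye) ** 2).sum()

    return invariance + lam * decorrelation

rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=(64, 16)), rng.normal(size=(64, 16))
print(cca_ssg_loss(z1, z2))
```

Note that, unlike contrastive objectives, nothing here compares a node against negative samples; the decorrelation term alone rules out the trivial constant embedding.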
BAST: Bayesian Additive Regression Spanning Trees for Complex Constrained Domain
https://papers.nips.cc/paper_files/paper/2021/hash/00b76fddeaaa7d8c2c43d504b2babd8a-Abstract.html
Zhao Tang Luo, Huiyan Sang, Bani Mallick
https://papers.nips.cc/paper_files/paper/2021/hash/00b76fddeaaa7d8c2c43d504b2babd8a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11631-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/00b76fddeaaa7d8c2c43d504b2babd8a-Paper.pdf
https://openreview.net/forum?id=Yw7ZNeDVpBS
https://papers.nips.cc/paper_files/paper/2021/file/00b76fddeaaa7d8c2c43d504b2babd8a-Supplemental.pdf
Nonparametric regression on complex domains has been a challenging task as most existing methods, such as ensemble models based on binary decision trees, are not designed to account for intrinsic geometries and domain boundaries. This article proposes a Bayesian additive regression spanning trees (BAST) model for nonparametric regression on manifolds, with an emphasis on complex constrained domains or irregularly shaped spaces embedded in Euclidean spaces. Our model is built upon a random spanning tree manifold partition model as each weak learner, which is capable of capturing any irregularly shaped spatially contiguous partitions while respecting intrinsic geometries and domain boundary constraints. Utilizing many nice properties of spanning tree structures, we design an efficient Bayesian inference algorithm. Equipped with a soft prediction scheme, BAST is demonstrated to significantly outperform other competing methods in simulation experiments and in an application to the chlorophyll data in the Aral Sea, due to its strong local adaptivity to different levels of smoothness.
null
Hyperbolic Busemann Learning with Ideal Prototypes
https://papers.nips.cc/paper_files/paper/2021/hash/01259a0cb2431834302abe2df60a1327-Abstract.html
Mina Ghadimi Atigh, Martin Keller-Ressel, Pascal Mettes
https://papers.nips.cc/paper_files/paper/2021/hash/01259a0cb2431834302abe2df60a1327-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11632-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/01259a0cb2431834302abe2df60a1327-Paper.pdf
https://openreview.net/forum?id=c_XcmuxwAY
https://papers.nips.cc/paper_files/paper/2021/file/01259a0cb2431834302abe2df60a1327-Supplemental.zip
Hyperbolic space has become a popular choice of manifold for representation learning of various datatypes from tree-like structures and text to graphs. Building on the success of deep learning with prototypes in Euclidean and hyperspherical spaces, a few recent works have proposed hyperbolic prototypes for classification. Such approaches enable effective learning in low-dimensional output spaces and can exploit hierarchical relations amongst classes, but require privileged information about class labels to position the hyperbolic prototypes. In this work, we propose Hyperbolic Busemann Learning. The main idea behind our approach is to position prototypes on the ideal boundary of the Poincar\'{e} ball, which does not require prior label knowledge. To be able to compute proximities to ideal prototypes, we introduce the penalised Busemann loss. We provide theory supporting the use of ideal prototypes and the proposed loss by proving its equivalence to logistic regression in the one-dimensional case. Empirically, we show that our approach provides a natural interpretation of classification confidence, while outperforming recent hyperspherical and hyperbolic prototype approaches.
null
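Editorial note on the entry above: on the Poincaré ball, the Busemann function toward an ideal prototype p (a unit-norm point on the boundary) has a simple closed form, sketched below. The log-barrier style penalty added here is one common way to keep embeddings strictly inside the ball and stands in for the paper's penalised loss, whose exact form is not reproduced.

```python
import numpy as np

def busemann(z, p, eps=1e-7):
    """Busemann function on the Poincare ball toward an ideal point p with ||p|| = 1."""
    num = np.sum((p - z) ** 2, axis=-1)
    den = 1.0 - np.sum(z ** 2, axis=-1)
    return np.log(num / (den + eps) + eps)

def penalised_busemann_loss(z, p, phi=0.75):
    """Busemann proximity to the class prototype plus a barrier that penalizes
    embeddings drifting to the boundary (illustrative penalty, phi is a placeholder)."""
    den = 1.0 - np.sum(z ** 2, axis=-1)
    return busemann(z, p) - phi * np.log(den + 1e-7)

# Ideal prototypes live on the boundary of the ball; embeddings stay strictly inside.
p = np.array([1.0, 0.0])
z = np.array([[0.3, 0.2], [-0.1, 0.4]])
print(penalised_busemann_loss(z, p))
```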
Backward-Compatible Prediction Updates: A Probabilistic Approach
https://papers.nips.cc/paper_files/paper/2021/hash/012d9fe15b2493f21902cd55603382ec-Abstract.html
Frederik Träuble, Julius von Kügelgen, Matthäus Kleindessner, Francesco Locatello, Bernhard Schölkopf, Peter Gehler
https://papers.nips.cc/paper_files/paper/2021/hash/012d9fe15b2493f21902cd55603382ec-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11633-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/012d9fe15b2493f21902cd55603382ec-Paper.pdf
https://openreview.net/forum?id=YjZoWjTKYvH
https://papers.nips.cc/paper_files/paper/2021/file/012d9fe15b2493f21902cd55603382ec-Supplemental.pdf
When machine learning systems meet real world applications, accuracy is only one of several requirements. In this paper, we assay a complementary perspective originating from the increasing availability of pre-trained and regularly improving state-of-the-art models. While new improved models develop at a fast pace, downstream tasks vary more slowly or stay constant. Assume that we have a large unlabelled data set for which we want to maintain accurate predictions. Whenever a new and presumably better ML model becomes available, we encounter two problems: (i) given a limited budget, which data points should be re-evaluated using the new model?; and (ii) if the new predictions differ from the current ones, should we update? Problem (i) is about compute cost, which matters for very large data sets and models. Problem (ii) is about maintaining consistency of the predictions, which can be highly relevant for downstream applications; our demand is to avoid negative flips, i.e., changing correct to incorrect predictions. In this paper, we formalize the Prediction Update Problem and present an efficient probabilistic approach as an answer to the above questions. In extensive experiments on standard classification benchmark data sets, we show that our method outperforms alternative strategies along key metrics for backward-compatible prediction updates.
null
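Editorial note on the entry above: a toy illustration of its two decisions, for intuition only. The simple confidence-gap rule below is a stand-in and not the probabilistic approach the paper develops: (i) spend a limited re-evaluation budget where the two models disagree most, and (ii) accept the new prediction only when its confidence clearly exceeds the old one, to limit negative flips.

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, n_classes, budget, margin = 1000, 5, 100, 0.10

def random_probs(n, k):
    """Stand-in for per-point class probabilities from a classifier."""
    p = rng.random((n, k))
    return p / p.sum(axis=1, keepdims=True)

# In practice the new model would only be run on the selected points;
# here both probability tables are precomputed for simplicity.
p_old, p_new = random_probs(n_points, n_classes), random_probs(n_points, n_classes)
old_pred, old_conf = p_old.argmax(1), p_old.max(1)
new_pred, new_conf = p_new.argmax(1), p_new.max(1)

# (i) Budgeted re-evaluation: rank points by disagreement between the models.
disagreement = np.abs(p_new - p_old).sum(axis=1)
reeval = np.argsort(disagreement)[::-1][:budget]

# (ii) Update rule: switch only when the new model is confidently better.
keep_old = new_conf[reeval] < old_conf[reeval] + margin
updated = np.where(keep_old, old_pred[reeval], new_pred[reeval])
print("re-evaluated:", len(reeval), "labels changed:", int((updated != old_pred[reeval]).sum()))
```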
Truncated Marginal Neural Ratio Estimation
https://papers.nips.cc/paper_files/paper/2021/hash/01632f7b7a127233fa1188bd6c2e42e1-Abstract.html
Benjamin K Miller, Alex Cole, Patrick Forré, Gilles Louppe, Christoph Weniger
https://papers.nips.cc/paper_files/paper/2021/hash/01632f7b7a127233fa1188bd6c2e42e1-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11634-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/01632f7b7a127233fa1188bd6c2e42e1-Paper.pdf
https://openreview.net/forum?id=VA18aFPYfkd
https://papers.nips.cc/paper_files/paper/2021/file/01632f7b7a127233fa1188bd6c2e42e1-Supplemental.pdf
Parametric stochastic simulators are ubiquitous in science, often featuring high-dimensional input parameters and/or an intractable likelihood. Performing Bayesian parameter inference in this context can be challenging. We present a neural simulation-based inference algorithm which simultaneously offers simulation efficiency and fast empirical posterior testability, which is unique among modern algorithms. Our approach is simulation efficient by simultaneously estimating low-dimensional marginal posteriors instead of the joint posterior and by proposing simulations targeted to an observation of interest via a prior suitably truncated by an indicator function. Furthermore, by estimating a locally amortized posterior our algorithm enables efficient empirical tests of the robustness of the inference results. Since scientists cannot access the ground truth, these tests are necessary for trusting inference in real-world applications. We perform experiments on a marginalized version of the simulation-based inference benchmark and two complex and narrow posteriors, highlighting the simulator efficiency of our algorithm as well as the quality of the estimated marginal posteriors.
null
ReAct: Out-of-distribution Detection With Rectified Activations
https://papers.nips.cc/paper_files/paper/2021/hash/01894d6f048493d2cacde3c579c315a3-Abstract.html
Yiyou Sun, Chuan Guo, Yixuan Li
https://papers.nips.cc/paper_files/paper/2021/hash/01894d6f048493d2cacde3c579c315a3-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11635-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/01894d6f048493d2cacde3c579c315a3-Paper.pdf
https://openreview.net/forum?id=IBVBtz_sRSm
https://papers.nips.cc/paper_files/paper/2021/file/01894d6f048493d2cacde3c579c315a3-Supplemental.pdf
Out-of-distribution (OOD) detection has received much attention lately due to its practical importance in enhancing the safe deployment of neural networks. One of the primary challenges is that models often produce highly confident predictions on OOD data, which undermines the driving principle in OOD detection that the model should only be confident about in-distribution samples. In this work, we propose ReAct—a simple and effective technique for reducing model overconfidence on OOD data. Our method is motivated by novel analysis on internal activations of neural networks, which displays highly distinctive signature patterns for OOD distributions. Our method can generalize effectively to different network architectures and different OOD detection scores. We empirically demonstrate that ReAct achieves competitive detection performance on a comprehensive suite of benchmark datasets, and give theoretical explication for our method’s efficacy. On the ImageNet benchmark, ReAct reduces the false positive rate (FPR95) by 25.05% compared to the previous best method.
null
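Editorial note on the entry above: a minimal NumPy sketch of the rectification it describes, where penultimate activations are clamped at a percentile threshold estimated on in-distribution data before computing logits and an energy-style OOD score. The array names, the 90th-percentile choice, and the pairing with an energy score are illustrative defaults, not the paper's tuned settings.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)
d, k = 512, 10                                  # feature dim, number of classes
W, b = rng.normal(size=(d, k)) * 0.01, np.zeros(k)

# Threshold c is set from in-distribution penultimate activations.
id_features = rng.gamma(2.0, 1.0, size=(5000, d))
c = np.percentile(id_features, 90)

def react_score(features, clip=c):
    """Energy-style OOD score after ReAct-style activation clipping."""
    rectified = np.minimum(features, clip)      # truncate unusually high activations
    logits = rectified @ W + b
    return logsumexp(logits, axis=1)            # higher means more in-distribution

test_features = rng.gamma(2.0, 1.5, size=(8, d))   # pretend OOD batch
print(react_score(test_features))
```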
Non-local Latent Relation Distillation for Self-Adaptive 3D Human Pose Estimation
https://papers.nips.cc/paper_files/paper/2021/hash/018b59ce1fd616d874afad0f44ba338d-Abstract.html
Jogendra Nath Kundu, Siddharth Seth, Anirudh Jamkhandi, Pradyumna YM, Varun Jampani, Anirban Chakraborty, Venkatesh Babu R
https://papers.nips.cc/paper_files/paper/2021/hash/018b59ce1fd616d874afad0f44ba338d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11636-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/018b59ce1fd616d874afad0f44ba338d-Paper.pdf
https://openreview.net/forum?id=AIQOddM5Xm
https://papers.nips.cc/paper_files/paper/2021/file/018b59ce1fd616d874afad0f44ba338d-Supplemental.pdf
Available 3D human pose estimation approaches leverage different forms of strong (2D/3D pose) or weak (multi-view or depth) paired supervision. Barring synthetic or in-studio domains, acquiring such supervision for each new target environment is highly inconvenient. To this end, we cast 3D pose learning as a self-supervised adaptation problem that aims to transfer the task knowledge from a labeled source domain to a completely unpaired target. We propose to infer image-to-pose via two explicit mappings viz. image-to-latent and latent-to-pose where the latter is a pre-learned decoder obtained from a prior-enforcing generative adversarial auto-encoder. Next, we introduce relation distillation as a means to align the unpaired cross-modal samples i.e., the unpaired target videos and unpaired 3D pose sequences. To this end, we propose a new set of non-local relations in order to characterize long-range latent pose interactions, unlike general contrastive relations where positive couplings are limited to a local neighborhood structure. Further, we provide an objective way to quantify non-localness in order to select the most effective relation set. We evaluate different self-adaptation settings and demonstrate state-of-the-art 3D human pose estimation performance on standard benchmarks.
null
Fast Training of Neural Lumigraph Representations using Meta Learning
https://papers.nips.cc/paper_files/paper/2021/hash/01931a6925d3de09e5f87419d9d55055-Abstract.html
Alexander Bergman, Petr Kellnhofer, Gordon Wetzstein
https://papers.nips.cc/paper_files/paper/2021/hash/01931a6925d3de09e5f87419d9d55055-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11637-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/01931a6925d3de09e5f87419d9d55055-Paper.pdf
https://openreview.net/forum?id=XCaZKu00a_D
https://papers.nips.cc/paper_files/paper/2021/file/01931a6925d3de09e5f87419d9d55055-Supplemental.pdf
Novel view synthesis is a long-standing problem in machine learning and computer vision. Significant progress has recently been made in developing neural scene representations and rendering techniques that synthesize photorealistic images from arbitrary views. These representations, however, are extremely slow to train and often also slow to render. Inspired by neural variants of image-based rendering, we develop a new neural rendering approach with the goal of quickly learning a high-quality representation which can also be rendered in real-time. Our approach, MetaNLR++, accomplishes this by using a unique combination of a neural shape representation and 2D CNN-based image feature extraction, aggregation, and re-projection. To push representation convergence times down to minutes, we leverage meta learning to learn neural shape and image feature priors which accelerate training. The optimized shape and image features can then be extracted using traditional graphics techniques and rendered in real time. We show that MetaNLR++ achieves similar or better novel view synthesis results in a fraction of the time that competing methods require.
null
Analytical Study of Momentum-Based Acceleration Methods in Paradigmatic High-Dimensional Non-Convex Problems
https://papers.nips.cc/paper_files/paper/2021/hash/019f8b946a256d9357eadc5ace2c8678-Abstract.html
Stefano Sarao Mannelli, Pierfrancesco Urbani
https://papers.nips.cc/paper_files/paper/2021/hash/019f8b946a256d9357eadc5ace2c8678-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11638-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/019f8b946a256d9357eadc5ace2c8678-Paper.pdf
https://openreview.net/forum?id=KnAMQ3nH8Pq
https://papers.nips.cc/paper_files/paper/2021/file/019f8b946a256d9357eadc5ace2c8678-Supplemental.pdf
The optimization step in many machine learning problems rarely relies on vanilla gradient descent; instead, it is common practice to use momentum-based accelerated methods. Despite these algorithms being widely applied to arbitrary loss functions, their behaviour in generically non-convex, high-dimensional landscapes is poorly understood. In this work, we use dynamical mean field theory techniques to describe analytically the average dynamics of these methods in a prototypical non-convex model: the (spiked) matrix-tensor model. We derive a closed set of equations that describe the behaviour of heavy-ball momentum and Nesterov acceleration in the infinite dimensional limit. By numerical integration of these equations, we observe that these methods speed up the dynamics but do not improve the algorithmic threshold with respect to gradient descent in the spiked model.
null
Multimodal Few-Shot Learning with Frozen Language Models
https://papers.nips.cc/paper_files/paper/2021/hash/01b7575c38dac42f3cfb7d500438b875-Abstract.html
Maria Tsimpoukelli, Jacob L Menick, Serkan Cabi, S. M. Ali Eslami, Oriol Vinyals, Felix Hill
https://papers.nips.cc/paper_files/paper/2021/hash/01b7575c38dac42f3cfb7d500438b875-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11639-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/01b7575c38dac42f3cfb7d500438b875-Paper.pdf
https://openreview.net/forum?id=WtmMyno9Tq2
https://papers.nips.cc/paper_files/paper/2021/file/01b7575c38dac42f3cfb7d500438b875-Supplemental.pdf
When trained at sufficient scale, auto-regressive language models exhibit the notable ability to learn a new language task after being prompted with just a few examples. Here, we present a simple, yet effective, approach for transferring this few-shot learning ability to a multimodal setting (vision and language). Using aligned image and caption data, we train a vision encoder to represent each image as a sequence of continuous embeddings, such that a pre-trained, frozen language model presented with this prefix generates the appropriate caption. The resulting system is a multimodal few-shot learner, with the surprising ability to learn a variety of new tasks when conditioned on examples, represented as a sequence of any number of interleaved image and text embeddings. We demonstrate that it can rapidly learn words for new objects and novel visual categories, do visual question-answering with only a handful of examples, and make use of outside knowledge, by measuring a single model on a variety of established and new benchmarks.
null
Approximating the Permanent with Deep Rejection Sampling
https://papers.nips.cc/paper_files/paper/2021/hash/01d8bae291b1e4724443375634ccfa0e-Abstract.html
Juha Harviainen, Antti Röyskö, Mikko Koivisto
https://papers.nips.cc/paper_files/paper/2021/hash/01d8bae291b1e4724443375634ccfa0e-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11640-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/01d8bae291b1e4724443375634ccfa0e-Paper.pdf
https://openreview.net/forum?id=mgkxmKYW62
https://papers.nips.cc/paper_files/paper/2021/file/01d8bae291b1e4724443375634ccfa0e-Supplemental.zip
We present a randomized approximation scheme for the permanent of a matrix with nonnegative entries. Our scheme extends a recursive rejection sampling method of Huber and Law (SODA 2008) by replacing the permanent upper bound with a linear combination of the subproblem bounds at a moderately large depth of the recursion tree. This method, we call deep rejection sampling, is empirically shown to outperform the basic, depth-zero variant, as well as a related method by Kuck et al. (NeurIPS 2019). We analyze the expected running time of the scheme on random $(0, 1)$-matrices where each entry is independently $1$ with probability $p$. Our bound is superior to a previous one for $p$ less than $1/5$, matching another bound that was only known to hold when every row and column has density exactly $p$.
null
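Editorial note on the entry above: as a point of reference, the permanent itself can be computed exactly for small matrices with Ryser's inclusion-exclusion formula. The sketch below is that brute-force baseline (O(2^n n) time), useful for checking any approximation scheme on toy inputs; it is not the paper's deep rejection sampler.

```python
import numpy as np
from itertools import combinations

def permanent_ryser(A):
    """Exact permanent via Ryser's formula (practical only for small n)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    total = 0.0
    for size in range(1, n + 1):
        for cols in combinations(range(n), size):
            rowsums = A[:, cols].sum(axis=1)
            total += (-1) ** size * np.prod(rowsums)
    return (-1) ** n * total

rng = np.random.default_rng(0)
p = 0.3
A = (rng.random((6, 6)) < p).astype(float)   # random (0,1)-matrix as in the analysis above
print(permanent_ryser(A))
print(permanent_ryser(np.ones((4, 4))))      # sanity check: permanent of all-ones is 4! = 24
```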
Revisiting Model Stitching to Compare Neural Representations
https://papers.nips.cc/paper_files/paper/2021/hash/01ded4259d101feb739b06c399e9cd9c-Abstract.html
Yamini Bansal, Preetum Nakkiran, Boaz Barak
https://papers.nips.cc/paper_files/paper/2021/hash/01ded4259d101feb739b06c399e9cd9c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11641-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/01ded4259d101feb739b06c399e9cd9c-Paper.pdf
https://openreview.net/forum?id=ak06J5jNR4
https://papers.nips.cc/paper_files/paper/2021/file/01ded4259d101feb739b06c399e9cd9c-Supplemental.pdf
We revisit and extend model stitching (Lenc & Vedaldi 2015) as a methodology to study the internal representations of neural networks. Given two trained and frozen models $A$ and $B$, we consider a "stitched model" formed by connecting the bottom-layers of $A$ to the top-layers of $B$, with a simple trainable layer between them. We argue that model stitching is a powerful and perhaps under-appreciated tool, which reveals aspects of representations that measures such as centered kernel alignment (CKA) cannot. Through extensive experiments, we use model stitching to obtain quantitative verifications for intuitive statements such as "good networks learn similar representations", by demonstrating that good networks of the same architecture, but trained in very different ways (eg: supervised vs. self-supervised learning), can be stitched to each other without drop in performance. We also give evidence for the intuition that "more is better" by showing that representations learnt with (1) more data, (2) bigger width, or (3) more training time can be "plugged in" to weaker models to improve performance. Finally, our experiments reveal a new structural property of SGD which we call "stitching connectivity", akin to mode-connectivity: typical minima reached by SGD are all "stitching-connected" to each other.
null
AugMax: Adversarial Composition of Random Augmentations for Robust Training
https://papers.nips.cc/paper_files/paper/2021/hash/01e9565cecc4e989123f9620c1d09c09-Abstract.html
Haotao Wang, Chaowei Xiao, Jean Kossaifi, Zhiding Yu, Anima Anandkumar, Zhangyang Wang
https://papers.nips.cc/paper_files/paper/2021/hash/01e9565cecc4e989123f9620c1d09c09-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11642-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/01e9565cecc4e989123f9620c1d09c09-Paper.pdf
https://openreview.net/forum?id=P5MtdcVdFZ4
https://papers.nips.cc/paper_files/paper/2021/file/01e9565cecc4e989123f9620c1d09c09-Supplemental.pdf
Data augmentation is a simple yet effective way to improve the robustness of deep neural networks (DNNs). Diversity and hardness are two complementary dimensions of data augmentation to achieve robustness. For example, AugMix explores random compositions of a diverse set of augmentations to enhance broader coverage, while adversarial training generates adversarially hard samples to spot the weakness. Motivated by this, we propose a data augmentation framework, termed AugMax, to unify the two aspects of diversity and hardness. AugMax first randomly samples multiple augmentation operators and then learns an adversarial mixture of the selected operators. Being a stronger form of data augmentation, AugMax leads to a significantly augmented input distribution which makes model training more challenging. To solve this problem, we further design a disentangled normalization module, termed DuBIN (Dual-Batch-and-Instance Normalization), that disentangles the instance-wise feature heterogeneity arising from AugMax. Experiments show that AugMax-DuBIN leads to significantly improved out-of-distribution robustness, outperforming prior arts by 3.03%, 3.49%, 1.82% and 0.71% on CIFAR10-C, CIFAR100-C, Tiny ImageNet-C and ImageNet-C. Codes and pretrained models are available: https://github.com/VITA-Group/AugMax.
null
Habitat 2.0: Training Home Assistants to Rearrange their Habitat
https://papers.nips.cc/paper_files/paper/2021/hash/021bbc7ee20b71134d53e20206bd6feb-Abstract.html
Andrew Szot, Alexander Clegg, Eric Undersander, Erik Wijmans, Yili Zhao, John Turner, Noah Maestre, Mustafa Mukadam, Devendra Singh Chaplot, Oleksandr Maksymets, Aaron Gokaslan, Vladimír Vondruš, Sameer Dharur, Franziska Meier, Wojciech Galuba, Angel Chang, Zsolt Kira, Vladlen Koltun, Jitendra Malik, Manolis Savva, Dhruv Batra
https://papers.nips.cc/paper_files/paper/2021/hash/021bbc7ee20b71134d53e20206bd6feb-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11643-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/021bbc7ee20b71134d53e20206bd6feb-Paper.pdf
https://openreview.net/forum?id=DPHsCQ8OpA
https://papers.nips.cc/paper_files/paper/2021/file/021bbc7ee20b71134d53e20206bd6feb-Supplemental.pdf
We introduce Habitat 2.0 (H2.0), a simulation platform for training virtual robots in interactive 3D environments and complex physics-enabled scenarios. We make comprehensive contributions to all levels of the embodied AI stack – data, simulation, and benchmark tasks. Specifically, we present: (i) ReplicaCAD: an artist-authored, annotated, reconfigurable 3D dataset of apartments (matching real spaces) with articulated objects (e.g. cabinets and drawers that can open/close); (ii) H2.0: a high-performance physics-enabled 3D simulator with speeds exceeding 25,000 simulation steps per second (850x real-time) on an 8-GPU node, representing 100x speed-ups over prior work; and, (iii) Home Assistant Benchmark (HAB): a suite of common tasks for assistive robots (tidy the house, stock groceries, set the table) that test a range of mobile manipulation capabilities. These large-scale engineering contributions allow us to systematically compare deep reinforcement learning (RL) at scale and classical sense-plan-act (SPA) pipelines in long-horizon structured tasks, with an emphasis on generalization to new objects, receptacles, and layouts. We find that (1) flat RL policies struggle on HAB compared to hierarchical ones; (2) a hierarchy with independent skills suffers from ‘hand-off problems’, and (3) SPA pipelines are more brittle than RL policies.
null
Time Discretization-Invariant Safe Action Repetition for Policy Gradient Methods
https://papers.nips.cc/paper_files/paper/2021/hash/024677efb8e4aee2eaeef17b54695bbe-Abstract.html
Seohong Park, Jaekyeom Kim, Gunhee Kim
https://papers.nips.cc/paper_files/paper/2021/hash/024677efb8e4aee2eaeef17b54695bbe-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11644-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/024677efb8e4aee2eaeef17b54695bbe-Paper.pdf
https://openreview.net/forum?id=xNmhYNQruJX
https://papers.nips.cc/paper_files/paper/2021/file/024677efb8e4aee2eaeef17b54695bbe-Supplemental.pdf
In reinforcement learning, continuous time is often discretized by a time scale $\delta$, to which the resulting performance is known to be highly sensitive. In this work, we seek to find a $\delta$-invariant algorithm for policy gradient (PG) methods, which performs well regardless of the value of $\delta$. We first identify the underlying reasons that cause PG methods to fail as $\delta \to 0$, proving that the variance of the PG estimator can diverge to infinity in stochastic environments under a certain assumption of stochasticity. While durative actions or action repetition can be employed to have $\delta$-invariance, previous action repetition methods cannot immediately react to unexpected situations in stochastic environments. We thus propose a novel $\delta$-invariant method named Safe Action Repetition (SAR) applicable to any existing PG algorithm. SAR can handle the stochasticity of environments by adaptively reacting to changes in states during action repetition. We empirically show that our method is not only $\delta$-invariant but also robust to stochasticity, outperforming previous $\delta$-invariant approaches on eight MuJoCo environments with both deterministic and stochastic settings. Our code is available at https://vision.snu.ac.kr/projects/sar.
null
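Editorial note on the entry above: a minimal sketch of the adaptive action-repetition idea it describes, under the assumption that an action chosen by the policy is repeated only while the state stays within a fixed distance of the state at which it was chosen. The toy dynamics, placeholder policy, and threshold below are illustrative and not the paper's exact mechanism or benchmarks.

```python
import numpy as np

rng = np.random.default_rng(0)

def env_step(state, action, dt=0.01):
    """Toy stochastic continuous-time dynamics, discretized with time scale dt."""
    drift = action - 0.5 * state
    noise = rng.normal(scale=np.sqrt(dt), size=state.shape)
    return state + drift * dt + 0.3 * noise

def policy(state):
    return np.clip(-state, -1.0, 1.0)   # placeholder policy

state = np.zeros(2)
threshold, steps, decisions, t = 0.2, 2000, 0, 0
while t < steps:
    anchor = state.copy()
    action = policy(anchor)
    decisions += 1
    # Repeat the action only while the state stays near the decision point,
    # so the agent can still react quickly to unexpected changes.
    while t < steps and np.linalg.norm(state - anchor) < threshold:
        state = env_step(state, action)
        t += 1
print(f"{decisions} policy decisions over {steps} low-level steps")
```

Because repetition ends as soon as the state drifts, the number of policy decisions stays roughly stable as dt shrinks, which is the intuition behind the claimed time-scale invariance.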
Meta-Learning Reliable Priors in the Function Space
https://papers.nips.cc/paper_files/paper/2021/hash/024d2d699e6c1a82c9ba986386f4d824-Abstract.html
Jonas Rothfuss, Dominique Heyn, jinfan Chen, Andreas Krause
https://papers.nips.cc/paper_files/paper/2021/hash/024d2d699e6c1a82c9ba986386f4d824-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11645-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/024d2d699e6c1a82c9ba986386f4d824-Paper.pdf
https://openreview.net/forum?id=H_qljL8t_A
https://papers.nips.cc/paper_files/paper/2021/file/024d2d699e6c1a82c9ba986386f4d824-Supplemental.pdf
Meta-Learning promises to enable more data-efficient inference by harnessing previous experience from related learning tasks. While existing meta-learning methods help us to improve the accuracy of our predictions in the face of data scarcity, they fail to supply reliable uncertainty estimates, often being grossly overconfident in their predictions. Addressing these shortcomings, we introduce a novel meta-learning framework, called F-PACOH, that treats meta-learned priors as stochastic processes and performs meta-level regularization directly in the function space. This allows us to directly steer the probabilistic predictions of the meta-learner towards high epistemic uncertainty in regions of insufficient meta-training data and, thus, obtain well-calibrated uncertainty estimates. Finally, we showcase how our approach can be integrated with sequential decision making, where reliable uncertainty quantification is imperative. In our benchmark study on meta-learning for Bayesian Optimization (BO), F-PACOH significantly outperforms all other meta-learners and standard baselines. Even in a challenging lifelong BO setting, where optimization tasks arrive one at a time and the meta-learner needs to build up informative prior knowledge incrementally, our proposed method demonstrates strong positive transfer.
null
VoiceMixer: Adversarial Voice Style Mixup
https://papers.nips.cc/paper_files/paper/2021/hash/0266e33d3f546cb5436a10798e657d97-Abstract.html
Sang-Hoon Lee, Ji-Hoon Kim, Hyunseung Chung, Seong-Whan Lee
https://papers.nips.cc/paper_files/paper/2021/hash/0266e33d3f546cb5436a10798e657d97-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11646-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/0266e33d3f546cb5436a10798e657d97-Paper.pdf
https://openreview.net/forum?id=1Sy9EwFCyFQ
null
Although recent advances in voice conversion have shown significant improvement, there still remains a gap between the converted voice and the target voice. A key factor that maintains this gap is the insufficient decomposition of content and voice style from the source speech. This insufficiency leads to the converted speech containing source speech style or losing source speech content. In this paper, we present VoiceMixer, which can effectively decompose and transfer voice style through a novel information bottleneck and adversarial feedback. With self-supervised representation learning, the proposed information bottleneck can decompose the content and style with only a small loss of content information. Also, to provide adversarial feedback for each type of information, the discriminator is decomposed into a content discriminator and a style discriminator with self-supervision, which enables our model to achieve better generalization to the voice style of the converted speech. The experimental results show the superiority of our model in disentanglement and transfer performance, as well as improved audio quality from preserving content information.
null
Predicting What You Already Know Helps: Provable Self-Supervised Learning
https://papers.nips.cc/paper_files/paper/2021/hash/02e656adee09f8394b402d9958389b7d-Abstract.html
Jason D. Lee, Qi Lei, Nikunj Saunshi, JIACHENG ZHUO
https://papers.nips.cc/paper_files/paper/2021/hash/02e656adee09f8394b402d9958389b7d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11647-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/02e656adee09f8394b402d9958389b7d-Paper.pdf
https://openreview.net/forum?id=Yx1OzVU_SRi
https://papers.nips.cc/paper_files/paper/2021/file/02e656adee09f8394b402d9958389b7d-Supplemental.pdf
Self-supervised representation learning solves auxiliary prediction tasks (known as pretext tasks), that do not require labeled data, to learn semantic representations. These pretext tasks are created solely using the input features, such as predicting a missing image patch, recovering the color channels of an image from context, or predicting missing words, yet predicting this \textit{known} information helps in learning representations effective for downstream prediction tasks. This paper posits a mechanism based on approximate conditional independence to formalize how solving certain pretext tasks can learn representations that provably decrease the sample complexity of downstream supervised tasks. Formally, we quantify how the approximate independence between the components of the pretext task (conditional on the label and latent variables) allows us to learn representations that can solve the downstream task with drastically reduced sample complexity by just training a linear layer on top of the learned representation.
null
Oracle Complexity in Nonsmooth Nonconvex Optimization
https://papers.nips.cc/paper_files/paper/2021/hash/030e65da2b1c944090548d36b244b28d-Abstract.html
Guy Kornowski, Ohad Shamir
https://papers.nips.cc/paper_files/paper/2021/hash/030e65da2b1c944090548d36b244b28d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11648-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/030e65da2b1c944090548d36b244b28d-Paper.pdf
https://openreview.net/forum?id=aMZJBOiOOPg
https://papers.nips.cc/paper_files/paper/2021/file/030e65da2b1c944090548d36b244b28d-Supplemental.pdf
It is well-known that given a smooth, bounded-from-below, and possibly nonconvex function, standard gradient-based methods can find $\epsilon$-stationary points (with gradient norm less than $\epsilon$) in $\mathcal{O}(1/\epsilon^2)$ iterations. However, many important nonconvex optimization problems, such as those associated with training modern neural networks, are inherently not smooth, making these results inapplicable. In this paper, we study nonsmooth nonconvex optimization from an oracle complexity viewpoint, where the algorithm is assumed to be given access only to local information about the function at various points. We provide two main results (under mild assumptions): First, we consider the problem of getting \emph{near} $\epsilon$-stationary points. This is perhaps the most natural relaxation of \emph{finding} $\epsilon$-stationary points, which is impossible in the nonsmooth nonconvex case. We prove that this relaxed goal cannot be achieved efficiently, for any distance and $\epsilon$ smaller than some constants. Our second result deals with the possibility of tackling nonsmooth nonconvex optimization by reduction to smooth optimization: Namely, applying smooth optimization methods on a smooth approximation of the objective function. For this approach, we prove an inherent trade-off between oracle complexity and smoothness: On the one hand, smoothing a nonsmooth nonconvex function can be done very efficiently (e.g., by randomized smoothing), but with dimension-dependent factors in the smoothness parameter, which can strongly affect iteration complexity when plugging into standard smooth optimization methods. On the other hand, these dimension factors can be eliminated with suitable smoothing methods, but only by making the oracle complexity of the smoothing process exponentially large.
null
CentripetalText: An Efficient Text Instance Representation for Scene Text Detection
https://papers.nips.cc/paper_files/paper/2021/hash/03227b950778ab86436ff79fe975b596-Abstract.html
Tao Sheng, Jie Chen, Zhouhui Lian
https://papers.nips.cc/paper_files/paper/2021/hash/03227b950778ab86436ff79fe975b596-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11649-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/03227b950778ab86436ff79fe975b596-Paper.pdf
https://openreview.net/forum?id=z1F9G4VnGZ-
https://papers.nips.cc/paper_files/paper/2021/file/03227b950778ab86436ff79fe975b596-Supplemental.pdf
Scene text detection remains a grand challenge due to the variation in text curvatures, orientations, and aspect ratios. One of the hardest problems in this task is how to represent text instances of arbitrary shapes. Although many methods have been proposed to model irregular texts in a flexible manner, most of them lose simplicity and robustness. Their complicated post-processings and the regression under Dirac delta distribution undermine the detection performance and the generalization ability. In this paper, we propose an efficient text instance representation named CentripetalText (CT), which decomposes text instances into the combination of text kernels and centripetal shifts. Specifically, we utilize the centripetal shifts to implement pixel aggregation, guiding the external text pixels to the internal text kernels. The relaxation operation is integrated into the dense regression for centripetal shifts, allowing the correct prediction in a range instead of a specific value. The convenient reconstruction of text contours and the tolerance of prediction errors in our method guarantee the high detection accuracy and the fast inference speed, respectively. Besides, we shrink our text detector into a proposal generation module, namely CentripetalText Proposal Network (CPN), replacing Segmentation Proposal Network (SPN) in Mask TextSpotter v3 and producing more accurate proposals. To validate the effectiveness of our method, we conduct experiments on several commonly used scene text benchmarks, including both curved and multi-oriented text datasets. For the task of scene text detection, our approach achieves superior or competitive performance compared to other existing methods, e.g., F-measure of 86.3% at 40.0 FPS on Total-Text, F-measure of 86.1% at 34.8 FPS on MSRA-TD500, etc. For the task of end-to-end scene text recognition, our method outperforms Mask TextSpotter v3 by 1.1% in F-measure on Total-Text.
null
Learning to Select Exogenous Events for Marked Temporal Point Process
https://papers.nips.cc/paper_files/paper/2021/hash/032abcd424b4312e7087f434ef1c0094-Abstract.html
Ping Zhang, Rishabh Iyer, Ashish Tendulkar, Gaurav Aggarwal, Abir De
https://papers.nips.cc/paper_files/paper/2021/hash/032abcd424b4312e7087f434ef1c0094-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11650-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/032abcd424b4312e7087f434ef1c0094-Paper.pdf
https://openreview.net/forum?id=MckiHYXsBT
https://papers.nips.cc/paper_files/paper/2021/file/032abcd424b4312e7087f434ef1c0094-Supplemental.pdf
Marked temporal point processes (MTPPs) have emerged as a powerful modeling tool for a wide variety of applications which are characterized using discrete events localized in continuous time. In this context, the events are of two types: endogenous events, which occur due to the influence of the previous events, and exogenous events, which occur due to the effect of externalities. However, in practice, the events do not come with endogenous or exogenous labels. To this end, our goal in this paper is to identify the set of exogenous events from a set of unlabelled events. To do so, we first formulate the parameter estimation problem in conjunction with the exogenous event set selection problem and show that this problem is NP-hard. Next, we prove that the underlying objective is a monotone and $\alpha$-submodular set function with respect to the candidate set of exogenous events. Such a characterization subsequently allows us to use a stochastic greedy algorithm, originally proposed in~\cite{greedy} for submodular maximization. However, we show that it also admits an approximation guarantee for maximizing an $\alpha$-submodular set function, even when the learning algorithm provides imperfect estimates of the trained parameters. Finally, our experiments with synthetic and real data show that our method performs better than the existing approaches built upon the superposition of endogenous and exogenous MTPPs.
null
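Editorial note on the entry above: the stochastic greedy subroutine it invokes is simple to state. The sketch below maximizes a generic monotone set function under a cardinality budget by scoring only a random subsample of candidates per step; the toy coverage objective stands in for the exogenous-event likelihood objective and is not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_greedy(candidates, objective, k, eps=0.1):
    """Pick k elements; each step evaluates only a random subsample of the rest."""
    n = len(candidates)
    sample_size = max(1, int(np.ceil(n / k * np.log(1.0 / eps))))
    selected = []
    for _ in range(k):
        remaining = [c for c in candidates if c not in selected]
        pool = rng.choice(len(remaining),
                          size=min(sample_size, len(remaining)), replace=False)
        best = max((remaining[i] for i in pool),
                   key=lambda c: objective(selected + [c]))
        selected.append(best)
    return selected

# Toy objective: coverage of "influenced" events (monotone and submodular).
coverage = {i: set(rng.choice(200, size=15, replace=False)) for i in range(60)}
def objective(S):
    return len(set().union(*(coverage[i] for i in S))) if S else 0

chosen = stochastic_greedy(list(range(60)), objective, k=8)
print(chosen, objective(chosen))
```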
DRIVE: One-bit Distributed Mean Estimation
https://papers.nips.cc/paper_files/paper/2021/hash/0397758f8990c1b41b81b43ac389ab9f-Abstract.html
Shay Vargaftik, Ran Ben-Basat, Amit Portnoy, Gal Mendelson, Yaniv Ben-Itzhak, Michael Mitzenmacher
https://papers.nips.cc/paper_files/paper/2021/hash/0397758f8990c1b41b81b43ac389ab9f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11651-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/0397758f8990c1b41b81b43ac389ab9f-Paper.pdf
https://openreview.net/forum?id=KXRTmcv3dQ8
https://papers.nips.cc/paper_files/paper/2021/file/0397758f8990c1b41b81b43ac389ab9f-Supplemental.pdf
We consider the problem where $n$ clients transmit $d$-dimensional real-valued vectors using $d(1+o(1))$ bits each, in a manner that allows the receiver to approximately reconstruct their mean. Such compression problems naturally arise in distributed and federated learning. We provide novel mathematical results and derive computationally efficient algorithms that are more accurate than previous compression techniques. We evaluate our methods on a collection of distributed and federated learning tasks, using a variety of datasets, and show a consistent improvement over the state of the art.
null
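Editorial note on the entry above: a minimal sketch of the one-bit idea it describes, where each client rotates its vector with a shared random rotation, transmits only the coordinate signs plus one scale, and the receiver inverts the rotation. The explicit orthogonal matrix and the scale choice below (the one minimizing squared error for a single vector) follow the general recipe but are simplifications; structured rotations are used in practice for efficiency.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 256

def random_rotation(dim, seed):
    """Shared random orthogonal matrix, derived from a common seed."""
    g = np.random.default_rng(seed)
    q, _ = np.linalg.qr(g.normal(size=(dim, dim)))
    return q

def encode(x, R):
    y = R @ x
    scale = np.abs(y).sum() / len(y)     # a single real scalar
    return np.sign(y), scale             # d sign bits plus the scale

def decode(signs, scale, R):
    return R.T @ (scale * signs)

R = random_rotation(d, seed=42)
x = rng.normal(size=d)
signs, scale = encode(x, R)
x_hat = decode(signs, scale, R)
err = np.linalg.norm(x - x_hat) ** 2 / np.linalg.norm(x) ** 2
print(f"normalized squared error: {err:.3f}")
```

In a distributed mean-estimation setting, each client would send its own signs and scale, and the receiver would average the decoded vectors.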
Learning Space Partitions for Path Planning
https://papers.nips.cc/paper_files/paper/2021/hash/03a3655fff3e9bdea48de9f49e938e32-Abstract.html
Kevin Yang, Tianjun Zhang, Chris Cummins, Brandon Cui, Benoit Steiner, Linnan Wang, Joseph E. Gonzalez, Dan Klein, Yuandong Tian
https://papers.nips.cc/paper_files/paper/2021/hash/03a3655fff3e9bdea48de9f49e938e32-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11652-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/03a3655fff3e9bdea48de9f49e938e32-Paper.pdf
https://openreview.net/forum?id=LT5QcAeuM15
https://papers.nips.cc/paper_files/paper/2021/file/03a3655fff3e9bdea48de9f49e938e32-Supplemental.pdf
Path planning, the problem of efficiently discovering high-reward trajectories, often requires optimizing a high-dimensional and multimodal reward function. Popular approaches like CEM and CMA-ES greedily focus on promising regions of the search space and may get trapped in local maxima. DOO and VOOT balance exploration and exploitation, but use space partitioning strategies independent of the reward function to be optimized. Recently, LaMCTS empirically learns to partition the search space in a reward-sensitive manner for black-box optimization. In this paper, we develop a novel formal regret analysis for when and why such an adaptive region partitioning scheme works. We also propose a new path planning method LaP3 which improves the function value estimation within each sub-region, and uses a latent representation of the search space. Empirically, LaP3 outperforms existing path planning methods in 2D navigation tasks, especially in the presence of difficult-to-escape local optima, and shows benefits when plugged into the planning components of model-based RL such as PETS. These gains transfer to highly multimodal real-world tasks, where we outperform strong baselines in compiler phase ordering by up to 39% on average across 9 tasks, and in molecular design by up to 0.4 on properties on a 0-1 scale. Code is available at https://github.com/yangkevin2/neurips2021-lap3.
null
Progressive Feature Interaction Search for Deep Sparse Network
https://papers.nips.cc/paper_files/paper/2021/hash/03b2ceb73723f8b53cd533e4fba898ee-Abstract.html
Chen Gao, Yinfeng Li, Quanming Yao, Depeng Jin, Yong Li
https://papers.nips.cc/paper_files/paper/2021/hash/03b2ceb73723f8b53cd533e4fba898ee-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11653-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/03b2ceb73723f8b53cd533e4fba898ee-Paper.pdf
https://openreview.net/forum?id=rl2FreDHTb0
https://papers.nips.cc/paper_files/paper/2021/file/03b2ceb73723f8b53cd533e4fba898ee-Supplemental.pdf
Deep sparse networks (DSNs), of which the crux is exploring the high-order feature interactions, have become the state-of-the-art on the prediction task with high-sparsity features. However, these models suffer from low computation efficiency, including large model size and slow model inference, which largely limits these models' application value. In this work, we approach this problem with neural architecture search by automatically searching the critical component in DSNs, the feature-interaction layer. We propose a distilled search space to cover the desired architectures with fewer parameters. We then develop a progressive search algorithm for efficient search on the space and well capture the order-priority property in sparse prediction tasks. Experiments on three real-world benchmark datasets show promising results of PROFIT in both accuracy and efficiency. Further studies validate the feasibility of our designed search space and search algorithm.
null
Local Explanation of Dialogue Response Generation
https://papers.nips.cc/paper_files/paper/2021/hash/03b92cd507ff5870df0db7f074728830-Abstract.html
Yi-Lin Tuan, Connor Pryor, Wenhu Chen, Lise Getoor, William Yang Wang
https://papers.nips.cc/paper_files/paper/2021/hash/03b92cd507ff5870df0db7f074728830-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11654-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/03b92cd507ff5870df0db7f074728830-Paper.pdf
https://openreview.net/forum?id=1Av2E0EugkA
https://papers.nips.cc/paper_files/paper/2021/file/03b92cd507ff5870df0db7f074728830-Supplemental.pdf
In comparison to the interpretation of classification models, the explanation of sequence generation models is an equally important problem; however, it has received little attention. In this work, we study model-agnostic explanations of a representative text generation task -- dialogue response generation. Dialogue response generation is challenging with its open-ended sentences and multiple acceptable responses. To gain insights into the reasoning process of a generation model, we propose a new method, local explanation of response generation (LERG), that regards the explanations as the mutual interaction of segments in input and output sentences. LERG views the sequence prediction as uncertainty estimation of a human response and then creates explanations by perturbing the input and calculating the certainty change over the human response. We show that LERG adheres to desired properties of explanations for text generation including unbiased approximation, consistency and cause identification. Empirically, our results show that our method consistently improves other widely used methods on proposed automatic- and human- evaluation metrics for this new task by $4.4$-$12.8$\%. Our analysis demonstrates that LERG can extract both explicit and implicit relations between input and output segments.
null
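A minimal, framework-agnostic sketch of the perturb-and-compare idea behind LERG described in the entry above: each input segment is scored by how much masking it changes the model's certainty of the human response. The callable log_prob_fn and the [MASK] token are illustrative assumptions, not the paper's actual interface.

```python
def perturbation_saliency(log_prob_fn, input_tokens, response_tokens, mask_token="[MASK]"):
    """Attribute each input token by the drop in log p(response | input) when it is masked."""
    base = log_prob_fn(input_tokens, response_tokens)
    scores = []
    for i in range(len(input_tokens)):
        perturbed = input_tokens[:i] + [mask_token] + input_tokens[i + 1:]
        scores.append(base - log_prob_fn(perturbed, response_tokens))
    return scores  # larger score: this segment mattered more for the response
```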
Scalable Inference in SDEs by Direct Matching of the Fokker–Planck–Kolmogorov Equation
https://papers.nips.cc/paper_files/paper/2021/hash/03e4d3f831100d4355663f3d425d716b-Abstract.html
Arno Solin, Ella Tamir, Prakhar Verma
https://papers.nips.cc/paper_files/paper/2021/hash/03e4d3f831100d4355663f3d425d716b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11655-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/03e4d3f831100d4355663f3d425d716b-Paper.pdf
https://openreview.net/forum?id=admg0sZZm1e
https://papers.nips.cc/paper_files/paper/2021/file/03e4d3f831100d4355663f3d425d716b-Supplemental.pdf
Simulation-based techniques such as variants of stochastic Runge–Kutta are the de facto approach for inference with stochastic differential equations (SDEs) in machine learning. These methods are general-purpose and used with parametric and non-parametric models, and neural SDEs. Stochastic Runge–Kutta relies on the use of sampling schemes that can be inefficient in high dimensions. We address this issue by revisiting the classical SDE literature and derive direct approximations to the (typically intractable) Fokker–Planck–Kolmogorov equation by matching moments. We show how this workflow is fast, scales to high-dimensional latent spaces, and is applicable to scarce-data applications, where a non-parametric SDE with a driving Gaussian process velocity field specifies the model.
null
The Complexity of Bayesian Network Learning: Revisiting the Superstructure
https://papers.nips.cc/paper_files/paper/2021/hash/040a99f23e8960763e680041c601acab-Abstract.html
Robert Ganian, Viktoriia Korchemna
https://papers.nips.cc/paper_files/paper/2021/hash/040a99f23e8960763e680041c601acab-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11656-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/040a99f23e8960763e680041c601acab-Paper.pdf
https://openreview.net/forum?id=vY2HsMWG2b_
https://papers.nips.cc/paper_files/paper/2021/file/040a99f23e8960763e680041c601acab-Supplemental.pdf
We investigate the parameterized complexity of Bayesian Network Structure Learning (BNSL), a classical problem that has received significant attention in both empirical and purely theoretical studies. We follow up on previous works that have analyzed the complexity of BNSL w.r.t. the so-called superstructure of the input. While known results imply that BNSL is unlikely to be fixed-parameter tractable even when parameterized by the size of a vertex cover in the superstructure, here we show that a different kind of parameterization - notably by the size of a feedback edge set - yields fixed-parameter tractability. We proceed by showing that this result can be strengthened to a localized version of the feedback edge set, and provide corresponding lower bounds that complement previous results to provide a complexity classification of BNSL w.r.t. virtually all well-studied graph parameters. We then analyze how the complexity of BNSL depends on the representation of the input. In particular, while the bulk of past theoretical work on the topic assumed the use of the so-called non-zero representation, here we prove that if an additive representation can be used instead then BNSL becomes fixed-parameter tractable even under significantly milder restrictions on the superstructure, notably when parameterized by the treewidth alone. Last but not least, we show how our results can be extended to the closely related problem of Polytree Learning.
null
Fast Tucker Rank Reduction for Non-Negative Tensors Using Mean-Field Approximation
https://papers.nips.cc/paper_files/paper/2021/hash/040ca38cefb1d9226d79c05dd25469cb-Abstract.html
Kazu Ghalamkari, Mahito Sugiyama
https://papers.nips.cc/paper_files/paper/2021/hash/040ca38cefb1d9226d79c05dd25469cb-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11657-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/040ca38cefb1d9226d79c05dd25469cb-Paper.pdf
https://openreview.net/forum?id=RwdHpzTTGl
https://papers.nips.cc/paper_files/paper/2021/file/040ca38cefb1d9226d79c05dd25469cb-Supplemental.pdf
We present an efficient low-rank approximation algorithm for non-negative tensors. The algorithm is derived from our two findings: First, we show that rank-1 approximation for tensors can be viewed as a mean-field approximation by treating each tensor as a probability distribution. Second, we theoretically provide a sufficient condition for distribution parameters to reduce Tucker ranks of tensors; interestingly, this sufficient condition can be achieved by iterative application of the mean-field approximation. Since the mean-field approximation is always given as a closed formula, our findings lead to a fast low-rank approximation algorithm without using a gradient method. We empirically demonstrate that our algorithm is faster than the existing non-negative Tucker rank reduction methods and achieves competitive or better approximation of given tensors.
null
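As a rough illustration of the mean-field view in the entry above, the sketch below (our own illustration, not the paper's code) forms the closed-form rank-1 approximation of a non-negative tensor as the outer product of its mode-wise marginals, after normalizing the tensor so it can be treated as a probability distribution.

```python
import numpy as np

def rank1_meanfield(T):
    """Rank-1 mean-field approximation: outer product of the mode-wise marginals."""
    total = T.sum()
    P = T / total                                   # treat the tensor as a distribution
    marginals = [P.sum(axis=tuple(j for j in range(P.ndim) if j != i))
                 for i in range(P.ndim)]
    R = marginals[0]
    for m in marginals[1:]:
        R = np.multiply.outer(R, m)                 # build the rank-1 tensor mode by mode
    return total * R
```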
Learning Stochastic Majority Votes by Minimizing a PAC-Bayes Generalization Bound
https://papers.nips.cc/paper_files/paper/2021/hash/0415740eaa4d9decbc8da001d3fd805f-Abstract.html
Valentina Zantedeschi, Paul Viallard, Emilie Morvant, Rémi Emonet, Amaury Habrard, Pascal Germain, Benjamin Guedj
https://papers.nips.cc/paper_files/paper/2021/hash/0415740eaa4d9decbc8da001d3fd805f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11658-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/0415740eaa4d9decbc8da001d3fd805f-Paper.pdf
https://openreview.net/forum?id=2Lq5mDVwBdJ
https://papers.nips.cc/paper_files/paper/2021/file/0415740eaa4d9decbc8da001d3fd805f-Supplemental.pdf
We investigate a stochastic counterpart of majority votes over finite ensembles of classifiers and study its generalization properties. While our approach holds for arbitrary distributions, we instantiate it with Dirichlet distributions: this allows for a closed-form and differentiable expression of the expected risk, which turns the generalization bound into a tractable training objective. In a series of numerical experiments, the resulting stochastic majority vote learning algorithm achieves state-of-the-art accuracy and benefits from tight (non-vacuous) generalization bounds when compared to competing algorithms that also minimize PAC-Bayes objectives -- with both uninformed (data-independent) and informed (data-dependent) priors.
null
Numerical influence of ReLU’(0) on backpropagation
https://papers.nips.cc/paper_files/paper/2021/hash/043ab21fc5a1607b381ac3896176dac6-Abstract.html
David Bertoin, Jérôme Bolte, Sébastien Gerchinovitz, Edouard Pauwels
https://papers.nips.cc/paper_files/paper/2021/hash/043ab21fc5a1607b381ac3896176dac6-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11659-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/043ab21fc5a1607b381ac3896176dac6-Paper.pdf
https://openreview.net/forum?id=urrcVI-_jRm
https://papers.nips.cc/paper_files/paper/2021/file/043ab21fc5a1607b381ac3896176dac6-Supplemental.pdf
In theory, the choice of ReLU'(0) in [0, 1] for a neural network has a negligible influence on both backpropagation and training. Yet, in the real world, the default 32-bit precision combined with the size of deep learning problems makes it a hyperparameter of training methods. We investigate the importance of the value of ReLU'(0) at several precision levels (16, 32, 64 bits), on various networks (fully connected, VGG, ResNet) and datasets (MNIST, CIFAR10, SVHN, ImageNet). We observe considerable variations of backpropagation outputs, which occur around half of the time at 32-bit precision. The effect disappears with double precision, while it is systematic at 16 bits. For vanilla SGD training, the choice ReLU'(0) = 0 seems to be the most efficient. For our experiments on ImageNet, the gain in test accuracy over ReLU'(0) = 1 was more than 10 points (two runs). We also show that reconditioning approaches such as batch norm or Adam tend to buffer the influence of the value of ReLU'(0). Overall, the message we convey is that algorithmic differentiation of nonsmooth problems potentially hides parameters that could be tuned advantageously.
null
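For readers who want to reproduce this kind of experiment, here is a hedged PyTorch sketch of a ReLU whose subgradient at exactly zero is a tunable value alpha; this is our own illustration, not the authors' experimental code.

```python
import torch

class ReLUAlpha(torch.autograd.Function):
    """ReLU whose 'derivative' at x == 0 is a chosen value alpha in [0, 1]."""

    @staticmethod
    def forward(ctx, x, alpha):
        ctx.save_for_backward(x)
        ctx.alpha = alpha
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        grad = torch.where(x > 0, torch.ones_like(x), torch.zeros_like(x))
        grad = torch.where(x == 0, torch.full_like(x, ctx.alpha), grad)
        return grad_out * grad, None        # no gradient w.r.t. alpha

# usage: compare ReLUAlpha.apply(x, 0.0) against ReLUAlpha.apply(x, 1.0) at low precision
```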
A Contrastive Learning Approach for Training Variational Autoencoder Priors
https://papers.nips.cc/paper_files/paper/2021/hash/0496604c1d80f66fbeb963c12e570a26-Abstract.html
Jyoti Aneja, Alex Schwing, Jan Kautz, Arash Vahdat
https://papers.nips.cc/paper_files/paper/2021/hash/0496604c1d80f66fbeb963c12e570a26-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11660-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/0496604c1d80f66fbeb963c12e570a26-Paper.pdf
https://openreview.net/forum?id=LcSfRundgwI
https://papers.nips.cc/paper_files/paper/2021/file/0496604c1d80f66fbeb963c12e570a26-Supplemental.pdf
Variational autoencoders (VAEs) are one of the powerful likelihood-based generative models with applications in many domains. However, they struggle to generate high-quality images, especially when samples are obtained from the prior without any tempering. One explanation for VAEs' poor generative quality is the prior hole problem: the prior distribution fails to match the aggregate approximate posterior. Due to this mismatch, there exist areas in the latent space with high density under the prior that do not correspond to any encoded image. Samples from those areas are decoded to corrupted images. To tackle this issue, we propose an energy-based prior defined by the product of a base prior distribution and a reweighting factor, designed to bring the base closer to the aggregate posterior. We train the reweighting factor by noise contrastive estimation, and we generalize it to hierarchical VAEs with many latent variable groups. Our experiments confirm that the proposed noise contrastive priors improve the generative performance of state-of-the-art VAEs by a large margin on the MNIST, CIFAR-10, CelebA 64, and CelebA HQ 256 datasets. Our method is simple and can be applied to a wide variety of VAEs to improve the expressivity of their prior distribution.
null
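A schematic of the noise-contrastive training of the reweighting factor described above, under our own simplifying assumptions (a single latent group, and r_net returning one logit per latent sample); the exponential of the learned logit then acts as the reweighting factor over the base prior.

```python
import torch
import torch.nn.functional as F

def nce_reweighting_loss(r_net, z_posterior, z_prior):
    """Binary classification: aggregate-posterior samples (1) vs. base-prior samples (0)."""
    logit_pos = r_net(z_posterior)
    logit_neg = r_net(z_prior)
    return (F.binary_cross_entropy_with_logits(logit_pos, torch.ones_like(logit_pos))
            + F.binary_cross_entropy_with_logits(logit_neg, torch.zeros_like(logit_neg)))
```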
What training reveals about neural network complexity
https://papers.nips.cc/paper_files/paper/2021/hash/04a1bf2d968f1ce381cf1f9184a807a9-Abstract.html
Andreas Loukas, Marinos Poiitis, Stefanie Jegelka
https://papers.nips.cc/paper_files/paper/2021/hash/04a1bf2d968f1ce381cf1f9184a807a9-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11661-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/04a1bf2d968f1ce381cf1f9184a807a9-Paper.pdf
https://openreview.net/forum?id=RcjW7p7z8aJ
null
This work explores the Benevolent Training Hypothesis (BTH) which argues that the complexity of the function a deep neural network (NN) is learning can be deduced by its training dynamics. Our analysis provides evidence for BTH by relating the NN's Lipschitz constant at different regions of the input space with the behavior of the stochastic training procedure. We first observe that the Lipschitz constant close to the training data affects various aspects of the parameter trajectory, with more complex networks having a longer trajectory, bigger variance, and often veering further from their initialization. We then show that NNs whose 1st layer bias is trained more steadily (i.e., slowly and with little variation) have bounded complexity even in regions of the input space that are far from any training point. Finally, we find that steady training with Dropout implies a training- and data-dependent generalization bound that grows poly-logarithmically with the number of parameters. Overall, our results support the intuition that good training behavior can be a useful bias towards good generalization.
null
Class-agnostic Reconstruction of Dynamic Objects from Videos
https://papers.nips.cc/paper_files/paper/2021/hash/04da4aea8e38ac933ab23cb2389dddef-Abstract.html
Zhongzheng Ren, Xiaoming Zhao, Alex Schwing
https://papers.nips.cc/paper_files/paper/2021/hash/04da4aea8e38ac933ab23cb2389dddef-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11662-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/04da4aea8e38ac933ab23cb2389dddef-Paper.pdf
https://openreview.net/forum?id=OP6ihHjllEc
null
We introduce REDO, a class-agnostic framework to REconstruct the Dynamic Objects from RGBD or calibrated videos. Compared to prior work, our problem setting is more realistic yet more challenging for three reasons: 1) due to occlusion or camera settings an object of interest may never be entirely visible, but we aim to reconstruct the complete shape; 2) we aim to handle different object dynamics including rigid motion, non-rigid motion, and articulation; 3) we aim to reconstruct different categories of objects with one unified framework. To address these challenges, we develop two novel modules. First, we introduce a canonical 4D implicit function which is pixel-aligned with aggregated temporal visual cues. Second, we develop a 4D transformation module which captures object dynamics to support temporal propagation and aggregation. We study the efficacy of REDO in extensive experiments on synthetic RGBD video datasets SAIL-VOS 3D and DeformingThings4D++, and on real-world video data 3DPW. We find REDO outperforms state-of-the-art dynamic reconstruction methods by a margin. In ablation studies we validate each developed component.
null
Unique sparse decomposition of low rank matrices
https://papers.nips.cc/paper_files/paper/2021/hash/051928341be67dcba03f0e04104d9047-Abstract.html
Dian Jin, Xin Bing, Yuqian Zhang
https://papers.nips.cc/paper_files/paper/2021/hash/051928341be67dcba03f0e04104d9047-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11663-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/051928341be67dcba03f0e04104d9047-Paper.pdf
https://openreview.net/forum?id=2GapPLFKvA
https://papers.nips.cc/paper_files/paper/2021/file/051928341be67dcba03f0e04104d9047-Supplemental.pdf
The problem of finding the unique low-dimensional decomposition of a given matrix has been a fundamental and recurrent problem in many areas. In this paper, we study the problem of seeking a unique decomposition of a low-rank matrix $Y\in \mathbb{R}^{p\times n}$ that admits a sparse representation. Specifically, we consider $ Y = AX\in \mathbb{R}^{p\times n}$ where the matrix $A\in \mathbb{R}^{p\times r}$ has full column rank, with $r < \min\{n,p\}$, and the matrix $X\in \mathbb{R}^{r\times n}$ is element-wise sparse. We prove that this sparse decomposition of $Y$ can be uniquely identified by recovering the ground-truth $A$ column by column, up to an intrinsic signed permutation. Our approach relies on solving a nonconvex optimization problem constrained over the unit sphere. Our geometric analysis of the nonconvex optimization landscape shows that any {\em strict} local solution is close to the ground-truth solution and can be recovered by a simple data-driven initialization followed by any second-order descent algorithm. Finally, we corroborate these theoretical results with numerical experiments.
null
Neighborhood Reconstructing Autoencoders
https://papers.nips.cc/paper_files/paper/2021/hash/05311655a15b75fab86956663e1819cd-Abstract.html
Yonghyeon LEE, Hyeokjun Kwon, Frank Park
https://papers.nips.cc/paper_files/paper/2021/hash/05311655a15b75fab86956663e1819cd-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11664-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/05311655a15b75fab86956663e1819cd-Paper.pdf
https://openreview.net/forum?id=_kaH2bAI3O
https://papers.nips.cc/paper_files/paper/2021/file/05311655a15b75fab86956663e1819cd-Supplemental.pdf
Vanilla autoencoders often produce manifolds that overfit to noisy training data, or have the wrong local connectivity and geometry. Autoencoder regularization techniques, e.g., the denoising autoencoder, have had some success in reducing overfitting, whereas recent graph-based methods that exploit local connectivity information provided by neighborhood graphs have had some success in mitigating local connectivity errors. Neither of these two approaches satisfactorily reduce both overfitting and connectivity errors; moreover, graph-based methods typically involve considerable preprocessing and tuning. To simultaneously address the two issues of overfitting and local connectivity, we propose a new graph-based autoencoder, the Neighborhood Reconstructing Autoencoder (NRAE). Unlike existing graph-based methods that attempt to encode the training data to some prescribed latent space distribution -- one consequence being that only the encoder is the object of the regularization -- NRAE merges local connectivity information contained in the neighborhood graphs with local quadratic approximations of the decoder function to formulate a new neighborhood reconstruction loss. Compared to existing graph-based methods, our new loss function is simple and easy to implement, and the resulting algorithm is scalable and computationally efficient; the only required preprocessing step is the construction of the neighborhood graph. Extensive experiments with standard datasets demonstrate that, compared to existing methods, NRAE improves both overfitting and local connectivity in the learned manifold, in some cases by significant margins. Code for NRAE is available at https://github.com/Gabe-YHLee/NRAE-public.
null
TopicNet: Semantic Graph-Guided Topic Discovery
https://papers.nips.cc/paper_files/paper/2021/hash/0537fb40a68c18da59a35c2bfe1ca554-Abstract.html
Zhibin Duan, Yishi Xu, Bo Chen, Dongsheng Wang, Chaojie Wang, Mingyuan Zhou
https://papers.nips.cc/paper_files/paper/2021/hash/0537fb40a68c18da59a35c2bfe1ca554-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11665-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/0537fb40a68c18da59a35c2bfe1ca554-Paper.pdf
https://openreview.net/forum?id=ZB8Du-E1KUz
https://papers.nips.cc/paper_files/paper/2021/file/0537fb40a68c18da59a35c2bfe1ca554-Supplemental.pdf
Existing deep hierarchical topic models are able to extract semantically meaningful topics from a text corpus in an unsupervised manner and automatically organize them into a topic hierarchy. However, it is unclear how to incorporate prior beliefs, such as a knowledge graph, to guide the learning of the topic hierarchy. To address this issue, we introduce TopicNet, a deep hierarchical topic model that can inject prior structural knowledge as an inductive bias to influence learning. TopicNet represents each topic as a Gaussian-distributed embedding vector, projects the topics of all layers into a shared embedding space, and explores both symmetric and asymmetric similarities between Gaussian embedding vectors to incorporate prior semantic hierarchies. With a variational auto-encoding inference network, the model parameters are optimized by stochastic gradient descent, minimizing the negative evidence lower bound and a supervised loss. Experiments on widely used benchmarks show that TopicNet outperforms related deep topic models in discovering deeper interpretable topics and mining better document representations.
null
(Almost) Free Incentivized Exploration from Decentralized Learning Agents
https://papers.nips.cc/paper_files/paper/2021/hash/054ab897023645cd7ad69525c46992a0-Abstract.html
Chengshuai Shi, Haifeng Xu, Wei Xiong, Cong Shen
https://papers.nips.cc/paper_files/paper/2021/hash/054ab897023645cd7ad69525c46992a0-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11666-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/054ab897023645cd7ad69525c46992a0-Paper.pdf
https://openreview.net/forum?id=2lBhfVPYOM
https://papers.nips.cc/paper_files/paper/2021/file/054ab897023645cd7ad69525c46992a0-Supplemental.pdf
Incentivized exploration in multi-armed bandits (MAB), where a principal offers bonuses to agents to explore on her behalf, has seen increasing interest and much progress in recent years. However, almost all existing studies are confined to temporary myopic agents. In this work, we break this barrier and study incentivized exploration with multiple long-term strategic agents, whose more complicated behaviors often appear in real-world applications. An important observation of this work is that strategic agents' intrinsic need to learn benefits (instead of harming) the principal's exploration by providing "free pulls". Moreover, it turns out that increasing the population of agents significantly lowers the principal's burden of incentivizing. The key and somewhat surprising insight revealed by our results is that when sufficiently many learning agents are involved, the exploration process of the principal can be (almost) free. Our main results are built upon three novel components, which may be of independent interest: (1) a simple yet provably effective incentive-provision strategy; (2) a carefully crafted best-arm identification algorithm for rewards aggregated under unequal confidences; (3) a high-probability finite-time lower bound for UCB algorithms. Experimental results are provided to complement the theoretical analysis.
null
Combining Recurrent, Convolutional, and Continuous-time Models with Linear State Space Layers
https://papers.nips.cc/paper_files/paper/2021/hash/05546b0e38ab9175cd905eebcc6ebb76-Abstract.html
Albert Gu, Isys Johnson, Karan Goel, Khaled Saab, Tri Dao, Atri Rudra, Christopher Ré
https://papers.nips.cc/paper_files/paper/2021/hash/05546b0e38ab9175cd905eebcc6ebb76-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11667-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/05546b0e38ab9175cd905eebcc6ebb76-Paper.pdf
https://openreview.net/forum?id=yWd42CWN3c
https://papers.nips.cc/paper_files/paper/2021/file/05546b0e38ab9175cd905eebcc6ebb76-Supplemental.pdf
Recurrent neural networks (RNNs), temporal convolutions, and neural differential equations (NDEs) are popular families of deep learning models for time-series data, each with unique strengths and tradeoffs in modeling power and computational efficiency. We introduce a simple sequence model inspired by control systems that generalizes these approaches while addressing their shortcomings. The Linear State-Space Layer (LSSL) maps a sequence $u \mapsto y$ by simply simulating a linear continuous-time state-space representation $\dot{x} = Ax + Bu, y = Cx + Du$. Theoretically, we show that LSSL models are closely related to the three aforementioned families of models and inherit their strengths. For example, they generalize convolutions to continuous-time, explain common RNN heuristics, and share features of NDEs such as time-scale adaptation. We then incorporate and generalize recent theory on continuous-time memorization to introduce a trainable subset of structured matrices $A$ that endow LSSLs with long-range memory. Empirically, stacking LSSL layers into a simple deep neural network obtains state-of-the-art results across time series benchmarks for long dependencies in sequential image classification, real-world healthcare regression tasks, and speech. On a difficult speech classification task with length-16000 sequences, LSSL outperforms prior approaches by 24 accuracy points, and even outperforms baselines that use hand-crafted features on 100x shorter sequences.
null
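To make the state-space recurrence concrete, here is a toy single-input single-output simulation of $\dot{x} = Ax + Bu, y = Cx + Du$ with a naive Euler discretization; the paper uses more careful discretizations and structured $A$ matrices, so this is purely illustrative.

```python
import numpy as np

def lssl_scan(A, B, C, D, u, dt=1.0):
    """Simulate y from a 1-D input sequence u. A: (d, d), B: (d,), C: (d,), D: scalar."""
    x = np.zeros(A.shape[0])
    ys = []
    for u_t in u:
        x = x + dt * (A @ x + B * u_t)      # Euler step for the continuous-time state
        ys.append(float(C @ x + D * u_t))   # linear readout
    return np.array(ys)
```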
Revisiting Hilbert-Schmidt Information Bottleneck for Adversarial Robustness
https://papers.nips.cc/paper_files/paper/2021/hash/055e31fa43e652cb4ab6c0ee845c8d36-Abstract.html
Zifeng Wang, Tong Jian, Aria Masoomi, Stratis Ioannidis, Jennifer Dy
https://papers.nips.cc/paper_files/paper/2021/hash/055e31fa43e652cb4ab6c0ee845c8d36-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11668-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/055e31fa43e652cb4ab6c0ee845c8d36-Paper.pdf
https://openreview.net/forum?id=OThHxQUDzkp
https://papers.nips.cc/paper_files/paper/2021/file/055e31fa43e652cb4ab6c0ee845c8d36-Supplemental.pdf
We investigate the HSIC (Hilbert-Schmidt independence criterion) bottleneck as a regularizer for learning an adversarially robust deep neural network classifier. In addition to the usual cross-entropy loss, we add regularization terms for every intermediate layer to ensure that the latent representations retain useful information for output prediction while reducing redundant information. We show that the HSIC bottleneck enhances robustness to adversarial attacks both theoretically and experimentally. In particular, we prove that the HSIC bottleneck regularizer reduces the sensitivity of the classifier to adversarial examples. Our experiments on multiple benchmark datasets and architectures demonstrate that incorporating an HSIC bottleneck regularizer attains competitive natural accuracy and improves adversarial robustness, both with and without adversarial examples during training. Our code and adversarially robust models are publicly available.
null
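A compact empirical HSIC estimator (the standard biased estimator with Gaussian kernels) of the kind that could serve as the per-layer regularizer described above; the kernel and bandwidth choices here are our assumptions.

```python
import torch

def hsic(x, y, sigma=1.0):
    """Biased empirical HSIC with Gaussian kernels: tr(Kx H Ky H) / (n - 1)^2."""
    def gram(a):
        sq_dists = torch.cdist(a, a) ** 2
        return torch.exp(-sq_dists / (2 * sigma ** 2))
    n = x.shape[0]
    H = torch.eye(n) - torch.ones(n, n) / n     # centering matrix
    return torch.trace(gram(x) @ H @ gram(y) @ H) / (n - 1) ** 2
```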
T-LoHo: A Bayesian Regularization Model for Structured Sparsity and Smoothness on Graphs
https://papers.nips.cc/paper_files/paper/2021/hash/05a70454516ecd9194c293b0e415777f-Abstract.html
Changwoo Lee, Zhao Tang Luo, Huiyan Sang
https://papers.nips.cc/paper_files/paper/2021/hash/05a70454516ecd9194c293b0e415777f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11669-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/05a70454516ecd9194c293b0e415777f-Paper.pdf
https://openreview.net/forum?id=Kvef55YMkm3
https://papers.nips.cc/paper_files/paper/2021/file/05a70454516ecd9194c293b0e415777f-Supplemental.pdf
Graphs have been commonly used to represent complex data structures. In models dealing with graph-structured data, multivariate parameters may not only exhibit sparse patterns but have structured sparsity and smoothness in the sense that both zero and non-zero parameters tend to cluster together. We propose a new prior for high-dimensional parameters with graphical relations, referred to as the Tree-based Low-rank Horseshoe (T-LoHo) model, that generalizes the popular univariate Bayesian horseshoe shrinkage prior to the multivariate setting to detect structured sparsity and smoothness simultaneously. The T-LoHo prior can be embedded in many high-dimensional hierarchical models. To illustrate its utility, we apply it to regularize a Bayesian high-dimensional regression problem where the regression coefficients are linked by a graph, so that the resulting clusters have flexible shapes and satisfy the cluster contiguity constraint with respect to the graph. We design an efficient Markov chain Monte Carlo algorithm that delivers full Bayesian inference with uncertainty measures for model parameters such as the number of clusters. We offer theoretical investigations of the clustering effects and posterior concentration results. Finally, we illustrate the performance of the model with simulation studies and a real data application for anomaly detection on a road network. The results indicate substantial improvements over other competing methods such as the sparse fused lasso.
null
The Utility of Explainable AI in Ad Hoc Human-Machine Teaming
https://papers.nips.cc/paper_files/paper/2021/hash/05d74c48b5b30514d8e9bd60320fc8f6-Abstract.html
Rohan Paleja, Muyleng Ghuy, Nadun Ranawaka Arachchige, Reed Jensen, Matthew Gombolay
https://papers.nips.cc/paper_files/paper/2021/hash/05d74c48b5b30514d8e9bd60320fc8f6-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11670-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/05d74c48b5b30514d8e9bd60320fc8f6-Paper.pdf
https://openreview.net/forum?id=w6U6g5Bvug
https://papers.nips.cc/paper_files/paper/2021/file/05d74c48b5b30514d8e9bd60320fc8f6-Supplemental.pdf
Recent advances in machine learning have led to growing interest in Explainable AI (xAI) to enable humans to gain insight into the decision-making of machine learning models. Despite this recent interest, the utility of xAI techniques has not yet been characterized in human-machine teaming. Importantly, xAI offers the promise of enhancing team situational awareness (SA) and shared mental model development, which are the key characteristics of effective human-machine teams. Rapidly developing such mental models is especially critical in ad hoc human-machine teaming, where agents do not have a priori knowledge of others' decision-making strategies. In this paper, we present two novel human-subject experiments quantifying the benefits of deploying xAI techniques within a human-machine teaming scenario. First, we show that xAI techniques can support SA ($p<0.05$). Second, we examine how different SA levels induced via a collaborative AI policy abstraction affect ad hoc human-machine teaming performance. Importantly, we find that the benefits of xAI are not universal, as there is a strong dependence on the composition of the human-machine team. Novices benefit from xAI providing increased SA ($p<0.05$) but are susceptible to cognitive overhead ($p<0.05$). On the other hand, expert performance degrades with the addition of xAI-based support ($p<0.05$), indicating that the cost of paying attention to the xAI outweighs the benefits obtained from being provided additional information to enhance SA. Our results demonstrate that researchers must deliberately design and deploy the right xAI techniques in the right scenario by carefully considering human-machine team composition and how the xAI method augments SA.
null
Subgoal Search For Complex Reasoning Tasks
https://papers.nips.cc/paper_files/paper/2021/hash/05d8cccb5f47e5072f0a05b5f514941a-Abstract.html
Konrad Czechowski, Tomasz Odrzygóźdź, Marek Zbysiński, Michał Zawalski, Krzysztof Olejnik, Yuhuai Wu, Łukasz Kuciński, Piotr Miłoś
https://papers.nips.cc/paper_files/paper/2021/hash/05d8cccb5f47e5072f0a05b5f514941a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11671-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/05d8cccb5f47e5072f0a05b5f514941a-Paper.pdf
https://openreview.net/forum?id=5KCvuCYGi7G
https://papers.nips.cc/paper_files/paper/2021/file/05d8cccb5f47e5072f0a05b5f514941a-Supplemental.pdf
Humans excel in solving complex reasoning tasks through a mental process of moving from one idea to a related one. Inspired by this, we propose the Subgoal Search (kSubS) method. Its key component is a learned subgoal generator that produces a diversity of subgoals that are both achievable and closer to the solution. Using subgoals reduces the search space and induces a high-level search graph suitable for efficient planning. In this paper, we implement kSubS using a transformer-based subgoal module coupled with the classical best-first search framework. We show that the simple approach of generating $k$-th step ahead subgoals is surprisingly efficient on three challenging domains: two popular puzzle games, Sokoban and the Rubik's Cube, and an inequality proving benchmark, INT. kSubS achieves strong results, including state-of-the-art performance on INT, within a modest computational budget.
null
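A schematic of plugging a learned subgoal generator into classical best-first search, as the kSubS entry above describes; generate_subgoals, value_fn and is_solved are assumed callables, and states are assumed hashable.

```python
import heapq

def subgoal_best_first_search(start, is_solved, generate_subgoals, value_fn, max_nodes=10000):
    """Best-first search over generated k-step-ahead subgoals."""
    frontier = [(-value_fn(start), 0, start, [start])]
    seen, counter = {start}, 1
    while frontier and counter < max_nodes:
        _, _, state, path = heapq.heappop(frontier)
        if is_solved(state):
            return path
        for subgoal in generate_subgoals(state):
            if subgoal not in seen:
                seen.add(subgoal)
                heapq.heappush(frontier, (-value_fn(subgoal), counter, subgoal, path + [subgoal]))
                counter += 1
    return None   # no solution found within the node budget
```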
MCMC Variational Inference via Uncorrected Hamiltonian Annealing
https://papers.nips.cc/paper_files/paper/2021/hash/05f971b5ec196b8c65b75d2ef8267331-Abstract.html
Tomas Geffner, Justin Domke
https://papers.nips.cc/paper_files/paper/2021/hash/05f971b5ec196b8c65b75d2ef8267331-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11672-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/05f971b5ec196b8c65b75d2ef8267331-Paper.pdf
https://openreview.net/forum?id=YsZQhCJunjl
https://papers.nips.cc/paper_files/paper/2021/file/05f971b5ec196b8c65b75d2ef8267331-Supplemental.pdf
Given an unnormalized target distribution, we want to obtain approximate samples from it and a tight lower bound on its (log) normalization constant log Z. Annealed Importance Sampling (AIS) with Hamiltonian MCMC is a powerful method that can be used to do this. Its main drawback is that it uses non-differentiable transition kernels, which makes tuning its many parameters hard. We propose a framework to use an AIS-like procedure with Uncorrected Hamiltonian MCMC, called Uncorrected Hamiltonian Annealing. Our method leads to tight and differentiable lower bounds on log Z. We show empirically that our method yields better performance than other competing approaches, and that the ability to tune its parameters using reparameterization gradients may lead to large performance improvements.
null
Landmark-RxR: Solving Vision-and-Language Navigation with Fine-Grained Alignment Supervision
https://papers.nips.cc/paper_files/paper/2021/hash/0602940f23884f782058efac46f64b0f-Abstract.html
Keji He, Yan Huang, Qi Wu, Jianhua Yang, Dong An, Shuanglin Sima, Liang Wang
https://papers.nips.cc/paper_files/paper/2021/hash/0602940f23884f782058efac46f64b0f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11673-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/0602940f23884f782058efac46f64b0f-Paper.pdf
https://openreview.net/forum?id=41QJ--DLjoD
https://papers.nips.cc/paper_files/paper/2021/file/0602940f23884f782058efac46f64b0f-Supplemental.pdf
In the Vision-and-Language Navigation (VLN) task, an agent is asked to navigate inside 3D indoor environments following given instructions. Cross-modal alignment is one of the most critical challenges in VLN because the predicted trajectory needs to match the given instruction accurately. In this paper, we address the cross-modal alignment challenge from a fine-grained perspective. First, to alleviate the weak cross-modal alignment supervision provided by coarse-grained data, we introduce a human-annotated fine-grained VLN dataset, Landmark-RxR. Second, to further enhance local cross-modal alignment under fine-grained supervision, we investigate focal-oriented rewards in both soft and hard forms, focusing on the critical points sampled from the fine-grained Landmark-RxR. Moreover, to fully evaluate the navigation process, we also propose a re-initialization mechanism that makes metrics insensitive to difficult points, which can cause the agent to deviate from the correct trajectories. Experimental results show that our agent achieves superior navigation performance on Landmark-RxR, en-RxR and R2R. Our dataset and code are available at https://github.com/hekj/Landmark-RxR.
null
A Winning Hand: Compressing Deep Networks Can Improve Out-of-Distribution Robustness
https://papers.nips.cc/paper_files/paper/2021/hash/0607f4c705595b911a4f3e7a127b44e0-Abstract.html
James Diffenderfer, Brian Bartoldson, Shreya Chaganti, Jize Zhang, Bhavya Kailkhura
https://papers.nips.cc/paper_files/paper/2021/hash/0607f4c705595b911a4f3e7a127b44e0-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11674-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/0607f4c705595b911a4f3e7a127b44e0-Paper.pdf
https://openreview.net/forum?id=YygA0yppTR
https://papers.nips.cc/paper_files/paper/2021/file/0607f4c705595b911a4f3e7a127b44e0-Supplemental.pdf
Successful adoption of deep learning (DL) in the wild requires models to be: (1) compact, (2) accurate, and (3) robust to distributional shifts. Unfortunately, efforts towards simultaneously meeting these requirements have mostly been unsuccessful. This raises an important question: Is the inability to create Compact, Accurate, and Robust Deep neural networks (CARDs) fundamental? To answer this question, we perform a large-scale analysis of popular model compression techniques which uncovers several intriguing patterns. Notably, in contrast to traditional pruning approaches (e.g., fine tuning and gradual magnitude pruning), we find that ``lottery ticket-style'' approaches can surprisingly be used to produce CARDs, including binary-weight CARDs. Specifically, we are able to create extremely compact CARDs that, compared to their larger counterparts, have similar test accuracy and matching (or better) robustness---simply by pruning and (optionally) quantizing. Leveraging the compactness of CARDs, we develop a simple domain-adaptive test-time ensembling approach (CARD-Decks) that uses a gating module to dynamically select appropriate CARDs from the CARD-Deck based on their spectral-similarity with test samples. The proposed approach builds a "winning hand'' of CARDs that establishes a new state-of-the-art (on RobustBench) on CIFAR-10-C accuracies (i.e., 96.8% standard and 92.75% robust) and CIFAR-100-C accuracies (80.6% standard and 71.3% robust) with better memory usage than non-compressed baselines (pretrained CARDs and CARD-Decks available at https://github.com/RobustBench/robustbench). Finally, we provide theoretical support for our empirical findings.
null
On the Importance of Gradients for Detecting Distributional Shifts in the Wild
https://papers.nips.cc/paper_files/paper/2021/hash/063e26c670d07bb7c4d30e6fc69fe056-Abstract.html
Rui Huang, Andrew Geng, Yixuan Li
https://papers.nips.cc/paper_files/paper/2021/hash/063e26c670d07bb7c4d30e6fc69fe056-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11675-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/063e26c670d07bb7c4d30e6fc69fe056-Paper.pdf
https://openreview.net/forum?id=fmiwLdJCmLS
https://papers.nips.cc/paper_files/paper/2021/file/063e26c670d07bb7c4d30e6fc69fe056-Supplemental.pdf
Detecting out-of-distribution (OOD) data has become a critical component in ensuring the safe deployment of machine learning models in the real world. Existing OOD detection approaches primarily rely on the output or feature space for deriving OOD scores, while largely overlooking information from the gradient space. In this paper, we present GradNorm, a simple and effective approach for detecting OOD inputs by utilizing information extracted from the gradient space. GradNorm directly employs the vector norm of gradients, backpropagated from the KL divergence between the softmax output and a uniform probability distribution. Our key idea is that the magnitude of gradients is higher for in-distribution (ID) data than that for OOD data, making it informative for OOD detection. GradNorm demonstrates superior performance, reducing the average FPR95 by up to 16.33% compared to the previous best method.
null
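A minimal PyTorch sketch of the gradient-norm score described above, with two simplifications of ours: the KL term to the uniform distribution reduces (up to constants) to the mean negative log-softmax, and the L1 norm is aggregated over all parameter gradients rather than a specific layer.

```python
import torch
import torch.nn.functional as F

def gradnorm_score(model, x, temperature=1.0):
    """Score an input by the L1 norm of gradients of the KL-to-uniform loss."""
    model.zero_grad()
    logits = model(x) / temperature
    loss = -F.log_softmax(logits, dim=-1).mean()    # equals KL to uniform up to a constant
    loss.backward()
    return sum(p.grad.abs().sum().item()
               for p in model.parameters() if p.grad is not None)
```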
Iterative Methods for Private Synthetic Data: Unifying Framework and New Methods
https://papers.nips.cc/paper_files/paper/2021/hash/0678c572b0d5597d2d4a6b5bd135754c-Abstract.html
Terrance Liu, Giuseppe Vietri, Steven Z. Wu
https://papers.nips.cc/paper_files/paper/2021/hash/0678c572b0d5597d2d4a6b5bd135754c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11676-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/0678c572b0d5597d2d4a6b5bd135754c-Paper.pdf
https://openreview.net/forum?id=jcCatp6oWZK
https://papers.nips.cc/paper_files/paper/2021/file/0678c572b0d5597d2d4a6b5bd135754c-Supplemental.pdf
We study private synthetic data generation for query release, where the goal is to construct a sanitized version of a sensitive dataset, subject to differential privacy, that approximately preserves the answers to a large collection of statistical queries. We first present an algorithmic framework that unifies a long line of iterative algorithms in the literature. Under this framework, we propose two new methods. The first method, private entropy projection (PEP), can be viewed as an advanced variant of MWEM that adaptively reuses past query measurements to boost accuracy. Our second method, generative networks with the exponential mechanism (GEM), circumvents computational bottlenecks in algorithms such as MWEM and PEP by optimizing over generative models parameterized by neural networks, which capture a rich family of distributions while enabling fast gradient-based optimization. We demonstrate that PEP and GEM empirically outperform existing algorithms. Furthermore, we show that GEM nicely incorporates prior information from public data while overcoming limitations of PMW^Pub, the existing state-of-the-art method that also leverages public data.
null
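As background for the exponential-mechanism building block named in the GEM entry above, here is a generic, not paper-specific, implementation that selects a candidate with probability proportional to exp(epsilon * score / (2 * sensitivity)).

```python
import numpy as np

def exponential_mechanism(scores, epsilon, sensitivity, rng=None):
    """Differentially private selection of an index via the exponential mechanism."""
    rng = rng or np.random.default_rng()
    scores = np.asarray(scores, dtype=float)
    logits = epsilon * scores / (2.0 * sensitivity)
    logits -= logits.max()                  # for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return int(rng.choice(len(scores), p=probs))
```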
Understanding End-to-End Model-Based Reinforcement Learning Methods as Implicit Parameterization
https://papers.nips.cc/paper_files/paper/2021/hash/067a26d87265ea39030f5bd82408ce7c-Abstract.html
Clement Gehring, Kenji Kawaguchi, Jiaoyang Huang, Leslie Kaelbling
https://papers.nips.cc/paper_files/paper/2021/hash/067a26d87265ea39030f5bd82408ce7c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11677-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/067a26d87265ea39030f5bd82408ce7c-Paper.pdf
https://openreview.net/forum?id=xj2sE--Q90e
https://papers.nips.cc/paper_files/paper/2021/file/067a26d87265ea39030f5bd82408ce7c-Supplemental.pdf
Estimating the per-state expected cumulative rewards is a critical aspect of reinforcement learning approaches, regardless of how the experience is obtained, but standard deep neural-network function-approximation methods are often inefficient in this setting. An alternative approach, exemplified by value iteration networks, is to learn transition and reward models of a latent Markov decision process whose value predictions fit the data. This approach has been shown empirically to converge faster to a more robust solution in many cases, but there has been little theoretical study of this phenomenon. In this paper, we explore such implicit representations of value functions via theory and focused experimentation. We prove that, for a linear parametrization, gradient descent converges to global optima despite the non-linearity and non-convexity introduced by the implicit representation. Furthermore, we derive convergence rates for both cases, which allow us to identify conditions under which stochastic gradient descent (SGD) with this implicit representation converges substantially faster than its explicit counterpart. Finally, we provide empirical results in some simple domains that illustrate the theoretical findings.
null
Mirror Langevin Monte Carlo: the Case Under Isoperimetry
https://papers.nips.cc/paper_files/paper/2021/hash/069090145d54bf4aa3894133f7e89873-Abstract.html
Qijia Jiang
https://papers.nips.cc/paper_files/paper/2021/hash/069090145d54bf4aa3894133f7e89873-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11678-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/069090145d54bf4aa3894133f7e89873-Paper.pdf
https://openreview.net/forum?id=9In970xfmNk
https://papers.nips.cc/paper_files/paper/2021/file/069090145d54bf4aa3894133f7e89873-Supplemental.pdf
Motivated by the connection between sampling and optimization, we study a mirror descent analogue of Langevin dynamics and analyze three different discretization schemes, giving nonasymptotic convergence rates under functional inequalities such as Log-Sobolev in the corresponding metric. Compared to the Euclidean setting, the result reveals an intricate relationship between the underlying geometry and the target distribution and suggests that care might need to be taken in order for the discretized algorithm to achieve vanishing bias with diminishing stepsize when sampling from potentials under weaker smoothness/convexity regularity conditions.
null
Do Different Tracking Tasks Require Different Appearance Models?
https://papers.nips.cc/paper_files/paper/2021/hash/06997f04a7db92466a2baa6ebc8b872d-Abstract.html
Zhongdao Wang, Hengshuang Zhao, Ya-Li Li, Shengjin Wang, Philip Torr, Luca Bertinetto
https://papers.nips.cc/paper_files/paper/2021/hash/06997f04a7db92466a2baa6ebc8b872d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11679-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/06997f04a7db92466a2baa6ebc8b872d-Paper.pdf
https://openreview.net/forum?id=HShLSEcVZJ4
https://papers.nips.cc/paper_files/paper/2021/file/06997f04a7db92466a2baa6ebc8b872d-Supplemental.pdf
Tracking objects of interest in a video is one of the most popular and widely applicable problems in computer vision. However, over the years, a Cambrian explosion of use cases and benchmarks has fragmented the problem into a multitude of different experimental setups. As a consequence, the literature has fragmented too, and now novel approaches proposed by the community are usually specialised to fit only one specific setup. To understand to what extent this specialisation is necessary, in this work we present UniTrack, a solution to address five different tasks within the same framework. UniTrack consists of a single and task-agnostic appearance model, which can be learned in a supervised or self-supervised fashion, and multiple ``heads'' that address individual tasks and do not require training. We show how most tracking tasks can be solved within this framework, and that the same appearance model can be successfully used to obtain results that are competitive against specialised methods for most of the tasks considered. The framework also allows us to analyse appearance models obtained with the most recent self-supervised methods, thus extending their evaluation and comparison to a larger variety of important problems.
null
Towards robust vision by multi-task learning on monkey visual cortex
https://papers.nips.cc/paper_files/paper/2021/hash/06a9d51e04213572ef0720dd27a84792-Abstract.html
Shahd Safarani, Arne Nix, Konstantin Willeke, Santiago Cadena, Kelli Restivo, George Denfield, Andreas Tolias, Fabian Sinz
https://papers.nips.cc/paper_files/paper/2021/hash/06a9d51e04213572ef0720dd27a84792-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11680-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/06a9d51e04213572ef0720dd27a84792-Paper.pdf
https://openreview.net/forum?id=3KhhJxaufVF
https://papers.nips.cc/paper_files/paper/2021/file/06a9d51e04213572ef0720dd27a84792-Supplemental.pdf
Deep neural networks set the state-of-the-art across many tasks in computer vision, but their generalization ability to simple image distortions is surprisingly fragile. In contrast, the mammalian visual system is robust to a wide range of perturbations. Recent work suggests that this generalization ability can be explained by useful inductive biases encoded in the representations of visual stimuli throughout the visual cortex. Here, we successfully leveraged these inductive biases with a multi-task learning approach: we jointly trained a deep network to perform image classification and to predict neural activity in macaque primary visual cortex (V1) in response to the same natural stimuli. We measured the out-of-distribution generalization abilities of our resulting network by testing its robustness to common image distortions. We found that co-training on monkey V1 data indeed leads to increased robustness despite the absence of those distortions during training. Additionally, we showed that our network's robustness is often very close to that of an Oracle network where parts of the architecture are directly trained on noisy images. Our results also demonstrated that the network's representations become more brain-like as their robustness improves. Using a novel constrained reconstruction analysis, we investigated what makes our brain-regularized network more robust. We found that our monkey co-trained network is more sensitive to content than noise when compared to a Baseline network that we trained for image classification alone. Using DeepGaze-predicted saliency maps for ImageNet images, we found that the monkey co-trained network tends to be more sensitive to salient regions in a scene, reminiscent of existing theories on the role of V1 in the detection of object borders and bottom-up saliency. Overall, our work expands the promising research avenue of transferring inductive biases from biological to artificial neural networks on the representational level, and provides a novel analysis of the effects of our transfer.
null
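The co-training objective described above amounts to a weighted sum of a classification loss and a neural-response prediction loss; the deliberately simplified sketch below uses loss choices and a weight lam that are our assumptions, not the paper's exact setup.

```python
import torch.nn.functional as F

def cotraining_loss(class_logits, labels, neural_pred, neural_target, lam=1.0):
    """Image-classification loss plus a V1 response-prediction loss."""
    classification = F.cross_entropy(class_logits, labels)
    neural_fit = F.mse_loss(neural_pred, neural_target)
    return classification + lam * neural_fit
```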
Arbitrary Conditional Distributions with Energy
https://papers.nips.cc/paper_files/paper/2021/hash/06c284d3f757b15c02f47f3ff06dc275-Abstract.html
Ryan Strauss, Junier B. Oliva
https://papers.nips.cc/paper_files/paper/2021/hash/06c284d3f757b15c02f47f3ff06dc275-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11681-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/06c284d3f757b15c02f47f3ff06dc275-Paper.pdf
https://openreview.net/forum?id=_idcJrecij
null
Modeling distributions of covariates, or density estimation, is a core challenge in unsupervised learning. However, the majority of work only considers the joint distribution, which has limited relevance to practical situations. A more general and useful problem is arbitrary conditional density estimation, which aims to model any possible conditional distribution over a set of covariates, reflecting the more realistic setting of inference based on prior knowledge. We propose a novel method, Arbitrary Conditioning with Energy (ACE), that can simultaneously estimate the distribution $p(\mathbf{x}_u \mid \mathbf{x}_o)$ for all possible subsets of unobserved features $\mathbf{x}_u$ and observed features $\mathbf{x}_o$. ACE is designed to avoid unnecessary bias and complexity --- we specify densities with a highly expressive energy function and reduce the problem to only learning one-dimensional conditionals (from which more complex distributions can be recovered during inference). This results in an approach that is both simpler and higher-performing than prior methods. We show that ACE achieves state-of-the-art for arbitrary conditional likelihood estimation and data imputation on standard benchmarks.
null
Learning Domain Invariant Representations in Goal-conditioned Block MDPs
https://papers.nips.cc/paper_files/paper/2021/hash/06d172404821f7d01060cc9629171b2e-Abstract.html
Beining Han, Chongyi Zheng, Harris Chan, Keiran Paster, Michael Zhang, Jimmy Ba
https://papers.nips.cc/paper_files/paper/2021/hash/06d172404821f7d01060cc9629171b2e-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11682-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/06d172404821f7d01060cc9629171b2e-Paper.pdf
https://openreview.net/forum?id=oepSB9bsoCF
https://papers.nips.cc/paper_files/paper/2021/file/06d172404821f7d01060cc9629171b2e-Supplemental.pdf
Deep Reinforcement Learning (RL) is successful in solving many complex Markov Decision Process (MDP) problems. However, agents often face unanticipated environmental changes after deployment in the real world. These changes are often spurious and unrelated to the underlying problem, such as background shifts for visual input agents. Unfortunately, deep RL policies are usually sensitive to these changes and fail to act robustly against them. This resembles the problem of domain generalization in supervised learning. In this work, we study this problem for goal-conditioned RL agents. We propose a theoretical framework in the Block MDP setting that characterizes the generalizability of goal-conditioned policies to new environments. Under this framework, we develop a practical method, PA-SkewFit, that enhances domain generalization. The empirical evaluation shows that our goal-conditioned RL agent can perform well in various unseen test environments, improving by 50\% over baselines.
null
Near-Optimal Multi-Perturbation Experimental Design for Causal Structure Learning
https://papers.nips.cc/paper_files/paper/2021/hash/06d5ae105ea1bea4d800bc96491876e9-Abstract.html
Scott Sussex, Caroline Uhler, Andreas Krause
https://papers.nips.cc/paper_files/paper/2021/hash/06d5ae105ea1bea4d800bc96491876e9-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11683-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/06d5ae105ea1bea4d800bc96491876e9-Paper.pdf
https://openreview.net/forum?id=mbm8YOsoSER
https://papers.nips.cc/paper_files/paper/2021/file/06d5ae105ea1bea4d800bc96491876e9-Supplemental.pdf
Causal structure learning is a key problem in many domains. Causal structures can be learnt by performing experiments on the system of interest. We address the largely unexplored problem of designing a batch of experiments that each simultaneously intervene on multiple variables. While potentially more informative than the commonly considered single-variable interventions, selecting such interventions is algorithmically much more challenging, due to the doubly-exponential combinatorial search space over sets of composite interventions. In this paper, we develop efficient algorithms for optimizing different objective functions quantifying the informativeness of a budget-constrained batch of experiments. By establishing novel submodularity properties of these objectives, we provide approximation guarantees for our algorithms. Our algorithms empirically perform superior to both random interventions and algorithms that only select single-variable interventions.
null
Fuzzy Clustering with Similarity Queries
https://papers.nips.cc/paper_files/paper/2021/hash/06f2e099b4f87109d52e15d7c05f0084-Abstract.html
Wasim Huleihel, Arya Mazumdar, Soumyabrata Pal
https://papers.nips.cc/paper_files/paper/2021/hash/06f2e099b4f87109d52e15d7c05f0084-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11684-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/06f2e099b4f87109d52e15d7c05f0084-Paper.pdf
https://openreview.net/forum?id=slvWAZohje
https://papers.nips.cc/paper_files/paper/2021/file/06f2e099b4f87109d52e15d7c05f0084-Supplemental.pdf
The fuzzy or soft $k$-means objective is a popular generalization of the well-known $k$-means problem, extending the clustering capability of $k$-means to datasets that are uncertain, vague, and otherwise hard to cluster. In this paper, we propose a semi-supervised active clustering framework, where the learner is allowed to interact with an oracle (domain expert), asking for the similarity between a certain set of chosen items. We study the query and computational complexities of clustering in this framework. We prove that a few such similarity queries enable one to obtain a polynomial-time approximation algorithm for an otherwise conjecturally NP-hard problem. In particular, we provide algorithms for fuzzy clustering in this setting that ask $O(\mathsf{poly}(k)\log n)$ similarity queries and run in polynomial time, where $n$ is the number of items. The fuzzy $k$-means objective is nonconvex, with $k$-means as a special case, and is equivalent to some other generic nonconvex problems such as non-negative matrix factorization. The ubiquitous Lloyd-type algorithms (or alternating-minimization algorithms) can get stuck at local minima. Our results show that by making a few similarity queries, the problem becomes easier to solve. Finally, we test our algorithms over real-world datasets, showing their effectiveness in real-world applications.
null
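For context on the objective being studied, here is a textbook Lloyd-type (alternating-minimization) routine for fuzzy k-means, the kind of local-search baseline that the query-based algorithms above aim to improve upon; this is standard fuzzy c-means, not the paper's algorithm.

```python
import numpy as np

def fuzzy_kmeans(X, k, m=2.0, iters=100, seed=0):
    """Alternate between weighted-mean center updates and soft-membership updates."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(k), size=X.shape[0])        # soft memberships, rows sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]      # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)                 # renormalize memberships per point
    return centers, U
```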
Improving black-box optimization in VAE latent space using decoder uncertainty
https://papers.nips.cc/paper_files/paper/2021/hash/06fe1c234519f6812fc4c1baae25d6af-Abstract.html
Pascal Notin, José Miguel Hernández-Lobato, Yarin Gal
https://papers.nips.cc/paper_files/paper/2021/hash/06fe1c234519f6812fc4c1baae25d6af-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11685-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/06fe1c234519f6812fc4c1baae25d6af-Paper.pdf
https://openreview.net/forum?id=F7LYy9FnK2x
https://papers.nips.cc/paper_files/paper/2021/file/06fe1c234519f6812fc4c1baae25d6af-Supplemental.pdf
Optimization in the latent space of variational autoencoders is a promising approach to generate high-dimensional discrete objects that maximize an expensive black-box property (e.g., drug-likeness in molecular generation, function approximation with arithmetic expressions). However, existing methods lack robustness as they may decide to explore areas of the latent space for which no data was available during training and where the decoder can be unreliable, leading to the generation of unrealistic or invalid objects. We propose to leverage the epistemic uncertainty of the decoder to guide the optimization process. This is not trivial though, as a naive estimation of uncertainty in the high-dimensional and structured settings we consider would result in high estimator variance. To solve this problem, we introduce an importance sampling-based estimator that provides more robust estimates of epistemic uncertainty. Our uncertainty-guided optimization approach does not require modifications of the model architecture nor the training process. It produces samples with a better trade-off between black-box objective and validity of the generated samples, sometimes improving both simultaneously. We illustrate these advantages across several experimental settings in digit generation, arithmetic expression approximation and molecule generation for drug design.
null
Sample Selection for Fair and Robust Training
https://papers.nips.cc/paper_files/paper/2021/hash/07563a3fe3bbe7e3ba84431ad9d055af-Abstract.html
Yuji Roh, Kangwook Lee, Steven Whang, Changho Suh
https://papers.nips.cc/paper_files/paper/2021/hash/07563a3fe3bbe7e3ba84431ad9d055af-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11686-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/07563a3fe3bbe7e3ba84431ad9d055af-Paper.pdf
https://openreview.net/forum?id=IZNR0RDtGp3
null
Fairness and robustness are critical elements of Trustworthy AI that need to be addressed together. Fairness is about learning an unbiased model while robustness is about learning from corrupted data, and it is known that addressing only one of them may have an adverse effect on the other. In this work, we propose a sample selection-based algorithm for fair and robust training. To this end, we formulate a combinatorial optimization problem for the unbiased selection of samples in the presence of data corruption. Observing that solving this optimization problem is strongly NP-hard, we propose a greedy algorithm that is efficient and effective in practice. Experiments show that our method obtains fairness and robustness that are better than or comparable to the state-of-the-art technique, on both synthetic and real benchmark datasets. Moreover, unlike other fair and robust training baselines, our algorithm can be used by only modifying the sampling step in batch selection, without changing the training algorithm or leveraging additional clean data.
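As a rough illustration of the sample-selection idea only (not the paper's combinatorial formulation or its greedy algorithm), the sketch below drops the highest-loss examples within each (group, label) cell and then equalizes cell sizes; all names and thresholds are hypothetical.

```python
import numpy as np

def select_fair_robust(losses, groups, labels, clean_ratio=0.8):
    """Pick a training subset that (i) drops the highest-loss (likely corrupted)
    examples within each (group, label) cell and (ii) keeps the same number of
    examples per cell so that groups and labels stay balanced. A drastically
    simplified stand-in for the greedy selection described in the abstract."""
    idx_by_cell = {}
    for g in np.unique(groups):
        for y in np.unique(labels):
            cell = np.where((groups == g) & (labels == y))[0]
            keep = cell[np.argsort(losses[cell])][:int(clean_ratio * len(cell))]
            idx_by_cell[(g, y)] = keep
    m = min(len(v) for v in idx_by_cell.values())      # equalise cell sizes
    return np.concatenate([v[:m] for v in idx_by_cell.values()])

rng = np.random.default_rng(0)
n = 1000
groups = rng.integers(0, 2, n)
labels = rng.integers(0, 2, n)
losses = rng.exponential(1.0, n)
losses[rng.random(n) < 0.1] += 5.0                     # corrupted, high-loss points
sel = select_fair_robust(losses, groups, labels)
print(len(sel), np.bincount(groups[sel]), np.bincount(labels[sel]))
```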
null
NeurWIN: Neural Whittle Index Network For Restless Bandits Via Deep RL
https://papers.nips.cc/paper_files/paper/2021/hash/0768281a05da9f27df178b5c39a51263-Abstract.html
Khaled Nakhleh, Santosh Ganji, Ping-Chun Hsieh, I-Hong Hou, Srinivas Shakkottai
https://papers.nips.cc/paper_files/paper/2021/hash/0768281a05da9f27df178b5c39a51263-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11687-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/0768281a05da9f27df178b5c39a51263-Paper.pdf
https://openreview.net/forum?id=5sCVR3Lq6F
https://papers.nips.cc/paper_files/paper/2021/file/0768281a05da9f27df178b5c39a51263-Supplemental.zip
The Whittle index policy is a powerful tool for obtaining asymptotically optimal solutions to the notoriously intractable problem of restless bandits. However, finding the Whittle indices remains difficult for many practical restless bandits with convoluted transition kernels. This paper proposes NeurWIN, a neural Whittle index network that seeks to learn the Whittle indices for any restless bandit by leveraging mathematical properties of the Whittle indices. We show that a neural network that produces the Whittle index is also one that produces the optimal control for a set of Markov decision problems. This property motivates using deep reinforcement learning for the training of NeurWIN. We demonstrate the utility of NeurWIN by evaluating its performance on three recently studied restless bandit problems. Our experimental results show that the performance of NeurWIN is significantly better than that of other RL algorithms.
null
Sageflow: Robust Federated Learning against Both Stragglers and Adversaries
https://papers.nips.cc/paper_files/paper/2021/hash/076a8133735eb5d7552dc195b125a454-Abstract.html
Jungwuk Park, Dong-Jun Han, Minseok Choi, Jaekyun Moon
https://papers.nips.cc/paper_files/paper/2021/hash/076a8133735eb5d7552dc195b125a454-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11688-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/076a8133735eb5d7552dc195b125a454-Paper.pdf
https://openreview.net/forum?id=rA9HFxFT7th
https://papers.nips.cc/paper_files/paper/2021/file/076a8133735eb5d7552dc195b125a454-Supplemental.pdf
While federated learning (FL) allows efficient model training with local data at edge devices, two major issues remain to be resolved: slow devices, known as stragglers, and malicious attacks launched by adversaries. While the presence of both issues raises serious concerns in practical FL systems, no known schemes or combinations of schemes effectively address them at the same time. We propose Sageflow, staleness-aware grouping with entropy-based filtering and loss-weighted averaging, to handle both stragglers and adversaries simultaneously. Model grouping and weighting according to staleness (arrival delay) provides robustness against stragglers, while entropy-based filtering and loss-weighted averaging, working in a highly complementary fashion at each grouping stage, counter a wide range of adversary attacks. A theoretical bound is established to provide key insights into the convergence behavior of Sageflow. Extensive experimental results show that Sageflow outperforms various existing methods aiming to handle stragglers and adversaries.
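A minimal numpy sketch of the two defense ingredients named above, entropy-based filtering and loss-weighted averaging, assuming each received update comes with its model's entropy and loss on a small public dataset; the staleness-aware grouping step and the exact weighting rule are simplified, and all names are illustrative rather than the authors' implementation.

```python
import numpy as np

def aggregate_sageflow_style(updates, entropies, losses,
                             entropy_threshold=1.0, delta=1.0):
    """Aggregate client updates: drop high-entropy (suspicious) ones,
    then weight the rest inversely by (public loss ** delta)."""
    kept = [i for i, H in enumerate(entropies) if H < entropy_threshold]
    if not kept:                                   # nothing passes the filter
        return np.mean(updates, axis=0)
    w = np.array([1.0 / (losses[i] ** delta + 1e-12) for i in kept])
    w = w / w.sum()                                # lower public loss -> larger weight
    return sum(w_i * updates[i] for w_i, i in zip(w, kept))

# toy usage: three benign clients and one inflated, high-entropy adversarial update
rng = np.random.default_rng(0)
true_update = rng.normal(size=10)
updates = [true_update + 0.1 * rng.normal(size=10) for _ in range(3)]
updates.append(10.0 * rng.normal(size=10))          # adversarial client
entropies = [0.3, 0.4, 0.35, 2.2]                    # attacker's model looks uncertain
losses = [0.5, 0.6, 0.55, 3.0]
agg = aggregate_sageflow_style(updates, entropies, losses)
print(np.linalg.norm(agg - true_update))             # close to the benign consensus
```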
null
Alias-Free Generative Adversarial Networks
https://papers.nips.cc/paper_files/paper/2021/hash/076ccd93ad68be51f23707988e934906-Abstract.html
Tero Karras, Miika Aittala, Samuli Laine, Erik Härkönen, Janne Hellsten, Jaakko Lehtinen, Timo Aila
https://papers.nips.cc/paper_files/paper/2021/hash/076ccd93ad68be51f23707988e934906-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11689-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/076ccd93ad68be51f23707988e934906-Paper.pdf
https://openreview.net/forum?id=Owggnutk6lE
https://papers.nips.cc/paper_files/paper/2021/file/076ccd93ad68be51f23707988e934906-Supplemental.pdf
We observe that despite their hierarchical convolutional nature, the synthesis process of typical generative adversarial networks depends on absolute pixel coordinates in an unhealthy manner. This manifests itself as, e.g., detail appearing to be glued to image coordinates instead of the surfaces of depicted objects. We trace the root cause to careless signal processing that causes aliasing in the generator network. Interpreting all signals in the network as continuous, we derive generally applicable, small architectural changes that guarantee that unwanted information cannot leak into the hierarchical synthesis process. The resulting networks match the FID of StyleGAN2 but differ dramatically in their internal representations, and they are fully equivariant to translation and rotation even at subpixel scales. Our results pave the way for generative models better suited for video and animation.
null
Noise2Score: Tweedie’s Approach to Self-Supervised Image Denoising without Clean Images
https://papers.nips.cc/paper_files/paper/2021/hash/077b83af57538aa183971a2fe0971ec1-Abstract.html
Kwanyoung Kim, Jong Chul Ye
https://papers.nips.cc/paper_files/paper/2021/hash/077b83af57538aa183971a2fe0971ec1-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11690-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/077b83af57538aa183971a2fe0971ec1-Paper.pdf
https://openreview.net/forum?id=ZqEUs3sTRU0
https://papers.nips.cc/paper_files/paper/2021/file/077b83af57538aa183971a2fe0971ec1-Supplemental.pdf
Recently, there has been extensive research interest in training deep networks to denoise images without clean reference. However, representative approaches such as Noise2Noise, Noise2Void, and Stein's unbiased risk estimator (SURE) appear to differ from one another, and it has been difficult to find a coherent mathematical structure. To address this, here we present a novel approach, called Noise2Score, which reveals a missing link that unites these seemingly different approaches. Specifically, we show that image denoising problems without clean images can be addressed by finding the mode of the posterior distribution, and that Tweedie's formula offers an explicit solution through the score function (i.e., the gradient of the log-likelihood). Our method then uses the recent finding that the score function can be stably estimated from noisy images using an amortized residual denoising autoencoder, a method closely related to Noise2Noise and Noise2Void. Our Noise2Score approach is so universal that the same network training can be used to remove noise from images corrupted by any exponential-family noise distribution and noise parameters. Through extensive experiments with Gaussian, Poisson, and Gamma noise, we show that Noise2Score significantly outperforms state-of-the-art self-supervised denoising methods on benchmark datasets such as (C)BSD68, Set12, and Kodak.
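A tiny numpy illustration of the Tweedie step at the heart of Noise2Score for additive Gaussian noise: the denoised estimate is y + sigma^2 * d/dy log p(y). Here the clean signal is itself Gaussian, so the marginal score is available in closed form and stands in for the learned amortized score network; this is a sketch of the formula, not the paper's training pipeline.

```python
import numpy as np

mu0, tau, sigma = 2.0, 1.0, 0.5           # prior mean/std of x, noise std
rng = np.random.default_rng(0)
x = rng.normal(mu0, tau, size=100_000)     # clean signal
y = x + rng.normal(0.0, sigma, size=x.shape)   # noisy observation

# closed-form score of the Gaussian marginal p(y), playing the role of the score network
score = -(y - mu0) / (tau**2 + sigma**2)
x_tweedie = y + sigma**2 * score           # Noise2Score-style denoising step
x_bayes = (tau**2 * y + sigma**2 * mu0) / (tau**2 + sigma**2)   # exact posterior mean

print(np.max(np.abs(x_tweedie - x_bayes)))                 # ~1e-15: the estimators coincide
print(np.mean((y - x)**2), np.mean((x_tweedie - x)**2))    # denoising lowers the MSE
```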
null
Continuous Mean-Covariance Bandits
https://papers.nips.cc/paper_files/paper/2021/hash/07811dc6c422334ce36a09ff5cd6fe71-Abstract.html
Yihan Du, Siwei Wang, Zhixuan Fang, Longbo Huang
https://papers.nips.cc/paper_files/paper/2021/hash/07811dc6c422334ce36a09ff5cd6fe71-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11691-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/07811dc6c422334ce36a09ff5cd6fe71-Paper.pdf
https://openreview.net/forum?id=pbAmqUUHsQ
https://papers.nips.cc/paper_files/paper/2021/file/07811dc6c422334ce36a09ff5cd6fe71-Supplemental.pdf
Existing risk-aware multi-armed bandit models typically focus on risk measures of individual options, such as variance. As a result, they cannot be directly applied to important real-world online decision-making problems with correlated options. In this paper, we propose a novel Continuous Mean-Covariance Bandit (CMCB) model to explicitly take option correlation into account. Specifically, in CMCB, a learner sequentially chooses weight vectors over given options and observes random feedback according to its decisions. The learner's objective is to achieve the best trade-off between reward and risk, measured by option covariance. To capture different reward-observation scenarios in practice, we consider three feedback settings, i.e., full-information, semi-bandit, and full-bandit feedback. We propose novel algorithms with optimal regrets (within logarithmic factors) and provide matching lower bounds to validate their optimality. The experimental results also demonstrate the superiority of our algorithms. To the best of our knowledge, this is the first work that considers option correlation in risk-aware bandits and explicitly quantifies how arbitrary covariance structures impact the learning performance. The novel analytical techniques we developed for exploiting the estimated covariance to build concentration results and for bounding the risk of selected actions based on properties of the sampling strategy can likely find applications in other bandit analyses and are of independent interest.
null
Dynamic Visual Reasoning by Learning Differentiable Physics Models from Video and Language
https://papers.nips.cc/paper_files/paper/2021/hash/07845cd9aefa6cde3f8926d25138a3a2-Abstract.html
Mingyu Ding, Zhenfang Chen, Tao Du, Ping Luo, Josh Tenenbaum, Chuang Gan
https://papers.nips.cc/paper_files/paper/2021/hash/07845cd9aefa6cde3f8926d25138a3a2-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11692-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/07845cd9aefa6cde3f8926d25138a3a2-Paper.pdf
https://openreview.net/forum?id=lk1ORT35tbi
https://papers.nips.cc/paper_files/paper/2021/file/07845cd9aefa6cde3f8926d25138a3a2-Supplemental.pdf
In this work, we propose a unified framework, called Visual Reasoning with Differentiable Physics (VRDP), that can jointly learn visual concepts and infer physics models of objects and their interactions from videos and language. This is achieved by seamlessly integrating three components: a visual perception module, a concept learner, and a differentiable physics engine. The visual perception module parses each video frame into object-centric trajectories and represents them as latent scene representations. The concept learner grounds visual concepts (e.g., color, shape, and material) from these object-centric representations based on the language, thus providing prior knowledge for the physics engine. The differentiable physics model, implemented as an impulse-based differentiable rigid-body simulator, performs differentiable physical simulation based on the grounded concepts to infer physical properties, such as mass, restitution, and velocity, by fitting the simulated trajectories to the video observations. Consequently, these learned concepts and physical models can explain what we have seen and imagine what is about to happen in future and counterfactual scenarios. Integrating differentiable physics into the dynamic reasoning framework offers several appealing benefits. More accurate dynamics prediction in learned physics models enables state-of-the-art performance on both synthetic and real-world benchmarks while still maintaining high transparency and interpretability; most notably, VRDP improves the accuracy of predictive and counterfactual questions by 4.5% and 11.5% compared to its best counterpart. VRDP is also highly data-efficient: physical parameters can be optimized from very few videos, and even a single video can be sufficient. Finally, with all physical parameters inferred, VRDP can quickly learn new concepts from a few examples.
null
Solving Soft Clustering Ensemble via $k$-Sparse Discrete Wasserstein Barycenter
https://papers.nips.cc/paper_files/paper/2021/hash/07a4e20a7bbeeb7a736682b26b16ebe8-Abstract.html
Ruizhe Qin, Mengying Li, Hu Ding
https://papers.nips.cc/paper_files/paper/2021/hash/07a4e20a7bbeeb7a736682b26b16ebe8-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11693-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/07a4e20a7bbeeb7a736682b26b16ebe8-Paper.pdf
https://openreview.net/forum?id=yAIYc7YjGbd
null
Clustering ensemble is one of the most important problems in ensemble learning. Though it has been extensively studied in the past decades, existing methods often suffer from issues such as high computational complexity and the difficulty of understanding the consensus. In this paper, we study the more general soft clustering ensemble problem, where each individual solution is a soft clustering. We connect it to the well-known discrete Wasserstein barycenter problem in geometry. Based on some novel geometric insights in high dimensions, we propose sampling-based algorithms with provable quality guarantees. We also provide a systematic analysis of the consensus of our model. Finally, we conduct experiments to evaluate our proposed algorithms.
null
Bayesian Adaptation for Covariate Shift
https://papers.nips.cc/paper_files/paper/2021/hash/07ac7cd13fd0eb1654ccdbd222b81437-Abstract.html
Aurick Zhou, Sergey Levine
https://papers.nips.cc/paper_files/paper/2021/hash/07ac7cd13fd0eb1654ccdbd222b81437-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11694-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/07ac7cd13fd0eb1654ccdbd222b81437-Paper.pdf
https://openreview.net/forum?id=15HPeY8MGQ
https://papers.nips.cc/paper_files/paper/2021/file/07ac7cd13fd0eb1654ccdbd222b81437-Supplemental.zip
When faced with distribution shift at test time, deep neural networks often make inaccurate predictions with unreliable uncertainty estimates. While improving the robustness of neural networks is one promising approach to mitigate this issue, an appealing alternative to robustifying networks against all possible test-time shifts is to instead directly adapt them to unlabeled inputs from the particular distribution shift we encounter at test time. However, this poses a challenging question: in the standard Bayesian model for supervised learning, unlabeled inputs are conditionally independent of model parameters when the labels are unobserved, so what can unlabeled data tell us about the model parameters at test time? In this paper, we derive a Bayesian model that provides a well-defined relationship between unlabeled inputs under distributional shift and model parameters, and show how approximate inference in this model can be instantiated with a simple regularized entropy minimization procedure at test time. We evaluate our method on a variety of distribution shifts for image classification, including image corruptions, natural distribution shifts, and domain adaptation settings, and show that our method improves both accuracy and uncertainty estimation.
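A minimal PyTorch sketch of the entropy-minimization adaptation step described above, shown with only the entropy term and placeholder hyperparameters; the paper's full procedure and its Bayesian regularizer are not reproduced here.

```python
import torch
import torch.nn.functional as F

def adapt_by_entropy_minimization(model, unlabeled_x, steps=10, lr=1e-3):
    """Adapt a classifier to a batch of unlabeled test inputs by minimising the
    predictive entropy, the core of the test-time procedure sketched above."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        probs = F.softmax(model(unlabeled_x), dim=-1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1).mean()
        opt.zero_grad()
        entropy.backward()
        opt.step()
    return model

# toy usage with a random linear classifier and random "test" inputs
model = torch.nn.Linear(16, 4)
x_test = torch.randn(32, 16)
adapt_by_entropy_minimization(model, x_test)
```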
null
Perturb-and-max-product: Sampling and learning in discrete energy-based models
https://papers.nips.cc/paper_files/paper/2021/hash/07b1c04a30f798b5506c1ec5acfb9031-Abstract.html
Miguel Lazaro-Gredilla, Antoine Dedieu, Dileep George
https://papers.nips.cc/paper_files/paper/2021/hash/07b1c04a30f798b5506c1ec5acfb9031-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11695-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/07b1c04a30f798b5506c1ec5acfb9031-Paper.pdf
https://openreview.net/forum?id=xRJ_Xqmb6d
https://papers.nips.cc/paper_files/paper/2021/file/07b1c04a30f798b5506c1ec5acfb9031-Supplemental.pdf
Perturb-and-MAP offers an elegant approach to approximately sample from an energy-based model (EBM) by computing the maximum-a-posteriori (MAP) configuration of a perturbed version of the model. Sampling in turn enables learning. However, this line of research has been hindered by the general intractability of the MAP computation. Very few works venture outside tractable models, and when they do, they use linear programming approaches, which, as we will show, have several limitations. In this work we present perturb-and-max-product (PMP), a parallel and scalable mechanism for sampling and learning in discrete EBMs. Models can be arbitrary as long as they are built using tractable factors. We show that (a) for Ising models, PMP is orders of magnitude faster than Gibbs and Gibbs-with-Gradients (GWG) at learning and generating samples of similar or better quality; (b) PMP is able to learn and sample from RBMs; (c) in a large, entangled graphical model in which Gibbs and GWG fail to mix, PMP succeeds.
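A small numpy sketch of the perturb-and-MAP principle underlying PMP: add Gumbel noise to the potentials and return the MAP of the perturbed model. For a single discrete variable the MAP is a plain argmax and the procedure samples exactly from the Boltzmann distribution; in a factor graph PMP would instead run max-product MAP inference, which is not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([1.0, 0.2, -0.5, 0.7])          # unary potentials of one discrete variable
p_exact = np.exp(theta) / np.exp(theta).sum()    # Boltzmann distribution

# Perturb-and-MAP: add i.i.d. Gumbel noise to the potentials and take the argmax.
n = 200_000
gumbel = rng.gumbel(size=(n, theta.size))
samples = np.argmax(theta + gumbel, axis=1)
p_hat = np.bincount(samples, minlength=theta.size) / n
print(np.round(p_exact, 3), np.round(p_hat, 3))   # the two should closely agree
```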
null
Towards Unifying Behavioral and Response Diversity for Open-ended Learning in Zero-sum Games
https://papers.nips.cc/paper_files/paper/2021/hash/07bba581a2dd8d098a3be0f683560643-Abstract.html
Xiangyu Liu, Hangtian Jia, Ying Wen, Yujing Hu, Yingfeng Chen, Changjie Fan, ZHIPENG HU, Yaodong Yang
https://papers.nips.cc/paper_files/paper/2021/hash/07bba581a2dd8d098a3be0f683560643-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11696-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/07bba581a2dd8d098a3be0f683560643-Paper.pdf
https://openreview.net/forum?id=G_WdNNLj4wU
https://papers.nips.cc/paper_files/paper/2021/file/07bba581a2dd8d098a3be0f683560643-Supplemental.pdf
Measuring and promoting policy diversity is critical for solving games with strong non-transitive dynamics, where strategic cycles exist and there is no consistent winner (e.g., Rock-Paper-Scissors). With that in mind, maintaining a pool of diverse policies via open-ended learning is an attractive solution, which can generate auto-curricula to avoid being exploited. However, in conventional open-ended learning algorithms, there are no widely accepted definitions of diversity, making it hard to construct and evaluate diverse policies. In this work, we summarize previous concepts of diversity and work towards a unified measure of diversity in multi-agent open-ended learning that covers all elements in Markov games, based on both Behavioral Diversity (BD) and Response Diversity (RD). At the trajectory distribution level, we re-define BD in the state-action space as the discrepancy of occupancy measures. For the reward dynamics, we propose RD to characterize diversity through the responses of policies when encountering different opponents. We also show that many current diversity measures fall into one of the categories of BD or RD, but not both. With this unified diversity measure, we design the corresponding diversity-promoting objective and population effectivity when seeking the best responses in open-ended learning. We validate our methods in both relatively simple games, such as the matrix game and the non-transitive mixture model, and in the complex \textit{Google Research Football} environment. The populations found by our methods reveal the lowest exploitability and highest population effectivity in the matrix game and the non-transitive mixture model, as well as the largest goal difference when interacting with opponents of various levels in \textit{Google Research Football}.
null
Towards Better Understanding of Training Certifiably Robust Models against Adversarial Examples
https://papers.nips.cc/paper_files/paper/2021/hash/07c5807d0d927dcd0980f86024e5208b-Abstract.html
Sungyoon Lee, Woojin Lee, Jinseong Park, Jaewook Lee
https://papers.nips.cc/paper_files/paper/2021/hash/07c5807d0d927dcd0980f86024e5208b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11697-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/07c5807d0d927dcd0980f86024e5208b-Paper.pdf
https://openreview.net/forum?id=52weXyh2yh
null
We study the problem of training certifiably robust models against adversarial examples. Certifiable training minimizes an upper bound on the worst-case loss over the allowed perturbation, and thus the tightness of the upper bound is an important factor in building certifiably robust models. However, many studies have shown that Interval Bound Propagation (IBP) training uses much looser bounds yet outperforms other models that use tighter bounds. We identify another key factor that influences the performance of certifiable training: \textit{smoothness of the loss landscape}. We find significant differences in the loss landscapes across many linear relaxation-based methods, and that the current state-of-the-art method often has a landscape with favorable optimization properties. Moreover, to test this claim, we design a new certifiable training method with the desired properties. With both tightness and smoothness, the proposed method achieves decent performance under a wide range of perturbations, while methods with only one of the two factors perform well only for a specific range of perturbations. Our code is available at \url{https://github.com/sungyoon-lee/LossLandscapeMatters}.
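For reference, a small numpy sketch of the Interval Bound Propagation (IBP) bounds discussed above, propagating an L_inf box through affine and ReLU layers; the two-layer network and the perturbation radius are toy placeholders.

```python
import numpy as np

def ibp_affine(l, u, W, b):
    """Propagate the box [l, u] through x -> W x + b (elementwise-tight)."""
    c, r = (l + u) / 2.0, (u - l) / 2.0
    c_out = W @ c + b
    r_out = np.abs(W) @ r
    return c_out - r_out, c_out + r_out

def ibp_relu(l, u):
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

# toy 2-layer network and an L_inf ball of radius eps around an input x
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)
x, eps = rng.normal(size=4), 0.1

l, u = x - eps, x + eps
l, u = ibp_relu(*ibp_affine(l, u, W1, b1))
l, u = ibp_affine(l, u, W2, b2)
print(l, u)   # certified bounds on each logit over the whole perturbation ball
```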
null
Mitigating Covariate Shift in Imitation Learning via Offline Data With Partial Coverage
https://papers.nips.cc/paper_files/paper/2021/hash/07d5938693cc3903b261e1a3844590ed-Abstract.html
Jonathan Chang, Masatoshi Uehara, Dhruv Sreenivas, Rahul Kidambi, Wen Sun
https://papers.nips.cc/paper_files/paper/2021/hash/07d5938693cc3903b261e1a3844590ed-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11698-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/07d5938693cc3903b261e1a3844590ed-Paper.pdf
https://openreview.net/forum?id=7PkfLkyLMRM
https://papers.nips.cc/paper_files/paper/2021/file/07d5938693cc3903b261e1a3844590ed-Supplemental.pdf
This paper studies offline Imitation Learning (IL) where an agent learns to imitate an expert demonstrator without additional online environment interactions. Instead, the learner is presented with a static offline dataset of state-action-next state triples from a potentially less proficient behavior policy. We introduce Model-based IL from Offline data (MILO): an algorithmic framework that utilizes the static dataset to solve the offline IL problem efficiently both in theory and in practice. In theory, even if the behavior policy is highly sub-optimal compared to the expert, we show that as long as the data from the behavior policy provides sufficient coverage on the expert state-action traces (and with no necessity for a global coverage over the entire state-action space), MILO can provably combat the covariate shift issue in IL. Complementing our theory results, we also demonstrate that a practical implementation of our approach mitigates covariate shift on benchmark MuJoCo continuous control tasks. We demonstrate that with behavior policies whose performances are less than half of that of the expert, MILO still successfully imitates with an extremely low number of expert state-action pairs while traditional offline IL methods such as behavior cloning (BC) fail completely. Source code is provided at https://github.com/jdchang1/milo.
null
Global Filter Networks for Image Classification
https://papers.nips.cc/paper_files/paper/2021/hash/07e87c2f4fc7f7c96116d8e2a92790f5-Abstract.html
Yongming Rao, Wenliang Zhao, Zheng Zhu, Jiwen Lu, Jie Zhou
https://papers.nips.cc/paper_files/paper/2021/hash/07e87c2f4fc7f7c96116d8e2a92790f5-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11699-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/07e87c2f4fc7f7c96116d8e2a92790f5-Paper.pdf
https://openreview.net/forum?id=K_Mnsw5VoOW
https://papers.nips.cc/paper_files/paper/2021/file/07e87c2f4fc7f7c96116d8e2a92790f5-Supplemental.pdf
Recent advances in self-attention and pure multi-layer perceptrons (MLP) models for vision have shown great potential in achieving promising performance with fewer inductive biases. These models are generally based on learning interaction among spatial locations from raw data. The complexity of self-attention and MLP grows quadratically as the image size increases, which makes these models hard to scale up when high-resolution features are required. In this paper, we present the Global Filter Network (GFNet), a conceptually simple yet computationally efficient architecture, that learns long-term spatial dependencies in the frequency domain with log-linear complexity. Our architecture replaces the self-attention layer in vision transformers with three key operations: a 2D discrete Fourier transform, an element-wise multiplication between frequency-domain features and learnable global filters, and a 2D inverse Fourier transform. We exhibit favorable accuracy/complexity trade-offs of our models on both ImageNet and downstream tasks. Our results demonstrate that GFNet can be a very competitive alternative to transformer-style models and CNNs in efficiency, generalization ability and robustness. Code is available at https://github.com/raoyongming/GFNet
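A minimal PyTorch sketch of the global filter operation described above (2D FFT, elementwise multiplication with a learnable filter, inverse 2D FFT); the tensor layout and initialization are assumptions and differ from the official GFNet code.

```python
import torch

class GlobalFilter(torch.nn.Module):
    """Token mixing via a learnable filter in the frequency domain, as sketched
    in the abstract: rFFT2 -> elementwise complex multiply -> inverse rFFT2."""
    def __init__(self, h, w, dim):
        super().__init__()
        # one complex filter coefficient per rFFT frequency and channel
        self.weight = torch.nn.Parameter(torch.randn(h, w // 2 + 1, dim, 2) * 0.02)

    def forward(self, x):                      # x: (B, H, W, C), real-valued
        freq = torch.fft.rfft2(x, dim=(1, 2), norm="ortho")
        freq = freq * torch.view_as_complex(self.weight)
        return torch.fft.irfft2(freq, s=x.shape[1:3], dim=(1, 2), norm="ortho")

x = torch.randn(2, 14, 14, 64)
print(GlobalFilter(14, 14, 64)(x).shape)       # torch.Size([2, 14, 14, 64])
```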
null
CAFE: Catastrophic Data Leakage in Vertical Federated Learning
https://papers.nips.cc/paper_files/paper/2021/hash/08040837089cdf46631a10aca5258e16-Abstract.html
Xiao Jin, Pin-Yu Chen, Chia-Yi Hsu, Chia-Mu Yu, Tianyi Chen
https://papers.nips.cc/paper_files/paper/2021/hash/08040837089cdf46631a10aca5258e16-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11700-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/08040837089cdf46631a10aca5258e16-Paper.pdf
https://openreview.net/forum?id=b4YiFnQH3gN
https://papers.nips.cc/paper_files/paper/2021/file/08040837089cdf46631a10aca5258e16-Supplemental.zip
Recent studies show that private training data can be leaked through the gradient-sharing mechanism deployed in distributed machine learning systems, such as federated learning (FL). Increasing the batch size to complicate data recovery is often viewed as a promising defense strategy against data leakage. In this paper, we revisit this defense premise and propose an advanced data leakage attack with theoretical justification to efficiently recover batch data from the shared aggregated gradients. We name our proposed method catastrophic data leakage in vertical federated learning (CAFE). Compared to existing data leakage attacks, our extensive experimental results on vertical FL settings demonstrate the effectiveness of CAFE in performing large-batch data leakage attacks with improved data recovery quality. We also propose a practical countermeasure to mitigate CAFE. Our results suggest that private data participating in standard FL, especially in the vertical case, are at high risk of being leaked from the training gradients. Our analysis implies unprecedented and practical data leakage risks in those learning settings. The code of our work is available at https://github.com/DeRafael/CAFE.
null
Fault-Tolerant Federated Reinforcement Learning with Theoretical Guarantee
https://papers.nips.cc/paper_files/paper/2021/hash/080acdcce72c06873a773c4311c2e464-Abstract.html
Xiaofeng Fan, Yining Ma, Zhongxiang Dai, Wei Jing, Cheston Tan, Bryan Kian Hsiang Low
https://papers.nips.cc/paper_files/paper/2021/hash/080acdcce72c06873a773c4311c2e464-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11701-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/080acdcce72c06873a773c4311c2e464-Paper.pdf
https://openreview.net/forum?id=ospGnpuf6L
https://papers.nips.cc/paper_files/paper/2021/file/080acdcce72c06873a773c4311c2e464-Supplemental.pdf
The growing literature on Federated Learning (FL) has recently inspired Federated Reinforcement Learning (FRL), which encourages multiple agents to federatively build a better decision-making policy without sharing raw trajectories. Despite its promising applications, existing works on FRL fail to (I) provide a theoretical analysis of its convergence, and (II) account for random system failures and adversarial attacks. Towards this end, we propose the first FRL framework whose convergence is guaranteed and which is tolerant to less than half of the participating agents suffering random system failures or acting as adversarial attackers. We prove that the sample efficiency of the proposed framework is guaranteed to improve with the number of agents and is able to account for such potential failures or attacks. All theoretical results are empirically verified on various RL benchmark tasks.
null
Compacter: Efficient Low-Rank Hypercomplex Adapter Layers
https://papers.nips.cc/paper_files/paper/2021/hash/081be9fdff07f3bc808f935906ef70c0-Abstract.html
Rabeeh Karimi Mahabadi, James Henderson, Sebastian Ruder
https://papers.nips.cc/paper_files/paper/2021/hash/081be9fdff07f3bc808f935906ef70c0-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11702-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/081be9fdff07f3bc808f935906ef70c0-Paper.pdf
https://openreview.net/forum?id=bqGK5PyI6-N
https://papers.nips.cc/paper_files/paper/2021/file/081be9fdff07f3bc808f935906ef70c0-Supplemental.pdf
Adapting large-scale pretrained language models to downstream tasks via fine-tuning is the standard method for achieving state-of-the-art performance on NLP benchmarks. However, fine-tuning all weights of models with millions or billions of parameters is sample-inefficient, unstable in low-resource settings, and wasteful, as it requires storing a separate copy of the model for each task. Recent work has developed parameter-efficient fine-tuning methods, but these approaches either still require a relatively large number of parameters or underperform standard fine-tuning. In this work, we propose Compacter, a method for fine-tuning large-scale language models with a better trade-off between task performance and the number of trainable parameters than prior work. Compacter accomplishes this by building on top of ideas from adapters, low-rank optimization, and parameterized hypercomplex multiplication layers. Specifically, Compacter inserts task-specific weight matrices into a pretrained model's weights, which are computed efficiently as a sum of Kronecker products between shared "slow" weights and "fast" rank-one matrices defined per Compacter layer. By only training 0.047% of a pretrained model's parameters, Compacter performs on par with standard fine-tuning on GLUE and outperforms standard fine-tuning on SuperGLUE and in low-resource settings. Our code is publicly available at https://github.com/rabeehk/compacter.
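A small numpy sketch of the parameterization described above: the adapter weight is a sum of Kronecker products between shared "slow" matrices and per-layer rank-one "fast" factors. The shapes and the parameter-count comparison are illustrative, not taken from the paper.

```python
import numpy as np

def compacter_style_weight(A, U, V):
    """Build a (d_in x d_out) adapter weight as sum_i kron(A[i], U[i] V[i]^T).

    A: (n, n, n)        'slow' matrices shared across layers
    U: (n, d_in // n)   'fast' per-layer factors
    V: (n, d_out // n)  'fast' per-layer factors
    """
    n = A.shape[0]
    return sum(np.kron(A[i], np.outer(U[i], V[i])) for i in range(n))

n, d_in, d_out = 4, 16, 8
rng = np.random.default_rng(0)
A = rng.normal(size=(n, n, n))
U = rng.normal(size=(n, d_in // n))
V = rng.normal(size=(n, d_out // n))
W = compacter_style_weight(A, U, V)
print(W.shape)                                            # (16, 8)
# parameters: n^3 shared + n*(d_in//n + d_out//n) per layer vs. a dense d_in*d_out projection
print(n**3 + n * (d_in // n + d_out // n), d_in * d_out)
```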
null
Distilling Image Classifiers in Object Detectors
https://papers.nips.cc/paper_files/paper/2021/hash/082a8bbf2c357c09f26675f9cf5bcba3-Abstract.html
Shuxuan Guo, Jose M. Alvarez, Mathieu Salzmann
https://papers.nips.cc/paper_files/paper/2021/hash/082a8bbf2c357c09f26675f9cf5bcba3-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11703-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/082a8bbf2c357c09f26675f9cf5bcba3-Paper.pdf
https://openreview.net/forum?id=tCYjE8Pf2Zg
https://papers.nips.cc/paper_files/paper/2021/file/082a8bbf2c357c09f26675f9cf5bcba3-Supplemental.pdf
Knowledge distillation constitutes a simple yet effective way to improve the performance of a compact student network by exploiting the knowledge of a more powerful teacher. Nevertheless, the knowledge distillation literature remains limited to the scenario where the student and the teacher tackle the same task. Here, we investigate the problem of transferring knowledge not only across architectures but also across tasks. To this end, we study the case of object detection and, instead of following the standard detector-to-detector distillation approach, introduce a classifier-to-detector knowledge transfer framework. In particular, we propose strategies to exploit the classification teacher to improve both the detector's recognition accuracy and localization performance. Our experiments on several detectors with different backbones demonstrate the effectiveness of our approach, allowing us to outperform the state-of-the-art detector-to-detector distillation methods.
null
Subgroup Generalization and Fairness of Graph Neural Networks
https://papers.nips.cc/paper_files/paper/2021/hash/08425b881bcde94a383cd258cea331be-Abstract.html
Jiaqi Ma, Junwei Deng, Qiaozhu Mei
https://papers.nips.cc/paper_files/paper/2021/hash/08425b881bcde94a383cd258cea331be-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11704-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/08425b881bcde94a383cd258cea331be-Paper.pdf
https://openreview.net/forum?id=68B1ezcffDc
https://papers.nips.cc/paper_files/paper/2021/file/08425b881bcde94a383cd258cea331be-Supplemental.pdf
Despite enormous successful applications of graph neural networks (GNNs), theoretical understanding of their generalization ability, especially for node-level tasks where data are not independent and identically-distributed (IID), has been sparse. The theoretical investigation of the generalization performance is beneficial for understanding fundamental issues (such as fairness) of GNN models and designing better learning methods. In this paper, we present a novel PAC-Bayesian analysis for GNNs under a non-IID semi-supervised learning setup. Moreover, we analyze the generalization performances on different subgroups of unlabeled nodes, which allows us to further study an accuracy-(dis)parity-style (un)fairness of GNNs from a theoretical perspective. Under reasonable assumptions, we demonstrate that the distance between a test subgroup and the training set can be a key factor affecting the GNN performance on that subgroup, which calls special attention to the training node selection for fair learning. Experiments across multiple GNN models and datasets support our theoretical results.
null
Scaling Neural Tangent Kernels via Sketching and Random Features
https://papers.nips.cc/paper_files/paper/2021/hash/08ae6a26b7cb089ea588e94aed36bd15-Abstract.html
Amir Zandieh, Insu Han, Haim Avron, Neta Shoham, Chaewon Kim, Jinwoo Shin
https://papers.nips.cc/paper_files/paper/2021/hash/08ae6a26b7cb089ea588e94aed36bd15-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11705-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/08ae6a26b7cb089ea588e94aed36bd15-Paper.pdf
https://openreview.net/forum?id=vIRFiA658rh
https://papers.nips.cc/paper_files/paper/2021/file/08ae6a26b7cb089ea588e94aed36bd15-Supplemental.zip
The Neural Tangent Kernel (NTK) characterizes the behavior of infinitely wide neural networks trained under least squares loss by gradient descent. Recent works also report that NTK regression can outperform finitely wide neural networks trained on small-scale datasets. However, the computational complexity of kernel methods has limited their use in large-scale learning tasks. To accelerate learning with the NTK, we design a near input-sparsity time approximation algorithm for the NTK by sketching the polynomial expansions of arc-cosine kernels: our sketch for the convolutional counterpart of the NTK (CNTK) can transform any image in time linear in the number of pixels. Furthermore, we prove a spectral approximation guarantee for the NTK matrix by combining random features (based on leverage score sampling) of the arc-cosine kernels with a sketching algorithm. We benchmark our methods on various large-scale regression and classification tasks and show that a linear regressor trained on our CNTK features matches the accuracy of the exact CNTK on the CIFAR-10 dataset while achieving a 150x speedup.
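As background for the random-feature part of the approach, the sketch below approximates the first-order arc-cosine kernel (one building block of the NTK) with plain ReLU random features and checks it against the closed form; the paper's leverage-score sampling and polynomial sketching are not shown.

```python
import numpy as np

def arccos1_kernel(x, y):
    """Closed-form first-order arc-cosine kernel of Cho & Saul."""
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    theta = np.arccos(np.clip(x @ y / (nx * ny), -1.0, 1.0))
    return nx * ny * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / np.pi

rng = np.random.default_rng(0)
d, m = 10, 200_000                        # input dim, number of random features
x, y = rng.normal(size=d), rng.normal(size=d)

W = rng.normal(size=(m, d))               # Gaussian random feature directions
phi = lambda v: np.sqrt(2.0 / m) * np.maximum(W @ v, 0.0)   # ReLU random features
print(arccos1_kernel(x, y), phi(x) @ phi(y))   # the estimate concentrates on the kernel value
```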
null
BatchQuant: Quantized-for-all Architecture Search with Robust Quantizer
https://papers.nips.cc/paper_files/paper/2021/hash/08aee6276db142f4b8ac98fb8ee0ed1b-Abstract.html
Haoping Bai, Meng Cao, Ping Huang, Jiulong Shan
https://papers.nips.cc/paper_files/paper/2021/hash/08aee6276db142f4b8ac98fb8ee0ed1b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11706-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/08aee6276db142f4b8ac98fb8ee0ed1b-Paper.pdf
https://openreview.net/forum?id=qQAtFdyDr-
null
As the applications of deep learning models on edge devices increase at an accelerating pace, fast adaptation to various scenarios with varying resource constraints has become a crucial aspect of model deployment. As a result, model optimization strategies with adaptive configuration are becoming increasingly popular. While single-shot quantized neural architecture search enjoys flexibility in both model architecture and quantization policy, the combined search space comes with many challenges, including instability when training the weight-sharing supernet and difficulty in navigating the exponentially growing search space. Existing methods tend to either limit the architecture search space to a small set of options or limit the quantization policy search space to fixed precision policies. To this end, we propose BatchQuant, a robust quantizer formulation that allows fast and stable training of a compact, single-shot, mixed-precision, weight-sharing supernet. We employ BatchQuant to train a compact supernet (offering over $10^{76}$ quantized subnets) within substantially fewer GPU hours than previous methods. Our approach, Quantized-for-all (QFA), is the first to seamlessly extend one-shot weight-sharing NAS supernet to support subnets with arbitrary ultra-low bitwidth mixed-precision quantization policies without retraining. QFA opens up new possibilities in joint hardware-aware neural architecture search and quantization. We demonstrate the effectiveness of our method on ImageNet and achieve SOTA Top-1 accuracy under a low complexity constraint (<20 MFLOPs).
null
Long Short-Term Transformer for Online Action Detection
https://papers.nips.cc/paper_files/paper/2021/hash/08b255a5d42b89b0585260b6f2360bdd-Abstract.html
Mingze Xu, Yuanjun Xiong, Hao Chen, Xinyu Li, Wei Xia, Zhuowen Tu, Stefano Soatto
https://papers.nips.cc/paper_files/paper/2021/hash/08b255a5d42b89b0585260b6f2360bdd-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11707-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/08b255a5d42b89b0585260b6f2360bdd-Paper.pdf
https://openreview.net/forum?id=aohkNJxjYJX
https://papers.nips.cc/paper_files/paper/2021/file/08b255a5d42b89b0585260b6f2360bdd-Supplemental.pdf
We present Long Short-term TRansformer (LSTR), a temporal modeling algorithm for online action detection, which employs a long- and short-term memory mechanism to model prolonged sequence data. It consists of an LSTR encoder that dynamically leverages coarse-scale historical information from an extended temporal window (e.g., 2048 frames spanning up to 8 minutes), together with an LSTR decoder that focuses on a short time window (e.g., 32 frames spanning 8 seconds) to model the fine-scale characteristics of the data. Compared to prior work, LSTR provides an effective and efficient method to model long videos with fewer heuristics, which is validated by extensive empirical analysis. LSTR achieves state-of-the-art performance on three standard online action detection benchmarks, THUMOS'14, TVSeries, and HACS Segment. Code has been made available at: https://xumingze0308.github.io/projects/lstr.
null
Near Optimal Policy Optimization via REPS
https://papers.nips.cc/paper_files/paper/2021/hash/08d562c1eedd30b15b51e35d8486d14c-Abstract.html
Aldo Pacchiano, Jonathan N Lee, Peter Bartlett, Ofir Nachum
https://papers.nips.cc/paper_files/paper/2021/hash/08d562c1eedd30b15b51e35d8486d14c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11708-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/08d562c1eedd30b15b51e35d8486d14c-Paper.pdf
https://openreview.net/forum?id=ZEhDWKLTvt7
https://papers.nips.cc/paper_files/paper/2021/file/08d562c1eedd30b15b51e35d8486d14c-Supplemental.pdf
Since its introduction a decade ago, relative entropy policy search (REPS) has demonstrated successful policy learning on a number of simulated and real-world robotic domains, not to mention providing algorithmic components used by many recently proposed reinforcement learning (RL) algorithms. While REPS is commonly known in the community, there exist no guarantees on its performance when using stochastic and gradient-based solvers. In this paper we aim to fill this gap by providing guarantees and convergence rates for the sub-optimality of a policy learned using first-order optimization methods applied to the REPS objective. We first consider the setting in which we are given access to exact gradients and demonstrate how near-optimality of the objective translates to near-optimality of the policy. We then consider the practical setting of stochastic gradients, and introduce a technique that uses generative access to the underlying Markov decision process to compute parameter updates that maintain favorable convergence to the optimal regularized policy.
null
Self-Consistent Models and Values
https://papers.nips.cc/paper_files/paper/2021/hash/08f0efebb1c51aada9430a089a2050cc-Abstract.html
Greg Farquhar, Kate Baumli, Zita Marinho, Angelos Filos, Matteo Hessel, Hado P. van Hasselt, David Silver
https://papers.nips.cc/paper_files/paper/2021/hash/08f0efebb1c51aada9430a089a2050cc-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11709-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/08f0efebb1c51aada9430a089a2050cc-Paper.pdf
https://openreview.net/forum?id=x2rdRAx3QF
https://papers.nips.cc/paper_files/paper/2021/file/08f0efebb1c51aada9430a089a2050cc-Supplemental.pdf
Learned models of the environment provide reinforcement learning (RL) agents with flexible ways of making predictions about the environment. Models enable planning, i.e. using more computation to improve value functions or policies, without requiring additional environment interactions. In this work, we investigate a way of augmenting model-based RL, by additionally encouraging a learned model and value function to be jointly \emph{self-consistent}. This lies in contrast to classic planning methods like Dyna, which only update the value function to be consistent with the model. We propose a number of possible self-consistency updates, study them empirically in both the tabular and function approximation settings, and find that with appropriate choices self-consistency can be useful both for policy evaluation and control.
null
Learning on Random Balls is Sufficient for Estimating (Some) Graph Parameters
https://papers.nips.cc/paper_files/paper/2021/hash/08f36fcf88c0a84c19a6ed437b9cbcc9-Abstract.html
Takanori Maehara, Hoang NT
https://papers.nips.cc/paper_files/paper/2021/hash/08f36fcf88c0a84c19a6ed437b9cbcc9-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11710-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/08f36fcf88c0a84c19a6ed437b9cbcc9-Paper.pdf
https://openreview.net/forum?id=Tbq5fYViJzm
https://papers.nips.cc/paper_files/paper/2021/file/08f36fcf88c0a84c19a6ed437b9cbcc9-Supplemental.pdf
Theoretical analyses of graph learning methods often assume a complete observation of the input graph. Such an assumption might not be realistic for graphs of arbitrary size due to scalability issues in practice. In this work, we develop a theoretical framework for graph classification problems in the partial observation setting (i.e., subgraph samplings). Equipped with insights from graph limit theory, we propose a new graph classification model that works on a randomly sampled subgraph and a novel topology to characterize the representability of the model. Our theoretical framework contributes a theoretical validation of mini-batch learning on graphs and leads to new learning-theoretic results on generalization bounds as well as size-generalizability without assumptions on the input.
null
Risk-Averse Bayes-Adaptive Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2021/hash/08f90c1a417155361a5c4b8d297e0d78-Abstract.html
Marc Rigter, Bruno Lacerda, Nick Hawes
https://papers.nips.cc/paper_files/paper/2021/hash/08f90c1a417155361a5c4b8d297e0d78-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11711-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/08f90c1a417155361a5c4b8d297e0d78-Paper.pdf
https://openreview.net/forum?id=dxaINwQdXh1
https://papers.nips.cc/paper_files/paper/2021/file/08f90c1a417155361a5c4b8d297e0d78-Supplemental.pdf
In this work, we address risk-averse Bayes-adaptive reinforcement learning. We pose the problem of optimising the conditional value at risk (CVaR) of the total return in Bayes-adaptive Markov decision processes (MDPs). We show that a policy optimising CVaR in this setting is risk-averse to both the epistemic uncertainty due to the prior distribution over MDPs, and the aleatoric uncertainty due to the inherent stochasticity of MDPs. We reformulate the problem as a two-player stochastic game and propose an approximate algorithm based on Monte Carlo tree search and Bayesian optimisation. Our experiments demonstrate that our approach significantly outperforms baseline approaches for this problem.
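For concreteness, a tiny numpy sketch of the CVaR objective being optimized: the mean of the worst alpha-fraction of sampled returns. In the paper these samples would come from rollouts under MDPs drawn from the posterior; here they are just synthetic Gaussians.

```python
import numpy as np

def cvar(returns, alpha=0.1):
    """Conditional value at risk at level alpha: the mean of the worst
    alpha-fraction of the sampled returns (lower is worse here)."""
    returns = np.sort(np.asarray(returns))
    k = max(1, int(np.ceil(alpha * len(returns))))
    return returns[:k].mean()

rng = np.random.default_rng(0)
returns_a = rng.normal(10.0, 1.0, size=10_000)     # safer policy
returns_b = rng.normal(11.0, 8.0, size=10_000)     # higher mean, heavier left tail
print(cvar(returns_a), cvar(returns_b))            # CVaR prefers the safer policy
```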
null
Iterative Connecting Probability Estimation for Networks
https://papers.nips.cc/paper_files/paper/2021/hash/0919b5c38396c3f0c41f1112d538e42c-Abstract.html
Yichen Qin, Linhan Yu, Yang Li
https://papers.nips.cc/paper_files/paper/2021/hash/0919b5c38396c3f0c41f1112d538e42c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11712-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/0919b5c38396c3f0c41f1112d538e42c-Paper.pdf
https://openreview.net/forum?id=IuH1bVRgvHH
https://papers.nips.cc/paper_files/paper/2021/file/0919b5c38396c3f0c41f1112d538e42c-Supplemental.zip
Estimating the probabilities of connections between vertices in a random network using an observed adjacency matrix is an important task for network data analysis. Many existing estimation methods are based on certain assumptions on network structure, which limit their applicability in practice. Without making strong assumptions, we develop an iterative connecting probability estimation method based on neighborhood averaging. Starting at a random initial point or an existing estimate, our method iteratively updates the pairwise vertex distances, the sets of similar vertices, and connecting probabilities to improve the precision of the estimate. We propose a two-stage neighborhood selection procedure to achieve the trade-off between smoothness of the estimate and the ability to discover local structure. The tuning parameters can be selected by cross-validation. We establish desirable theoretical properties for our method, and further justify its superior performance by comparing with existing methods in simulation and real data analysis.
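A much-simplified numpy sketch of the iterative neighborhood-averaging idea: alternate between measuring vertex similarity on the current estimate and averaging the adjacency over each vertex's most similar peers. The distance measure, neighborhood size, and stopping rule are placeholders, not the paper's two-stage procedure or its cross-validation.

```python
import numpy as np

def iterative_link_probabilities(A, n_neighbors=10, n_iters=3):
    """Estimate connection probabilities P from one observed adjacency matrix A
    by repeatedly (1) measuring vertex similarity on the current estimate and
    (2) averaging the adjacency over each vertex's most similar peers."""
    n = A.shape[0]
    P = A.astype(float).copy()
    for _ in range(n_iters):
        D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)  # vertex distances
        np.fill_diagonal(D, np.inf)
        nbrs = np.argsort(D, axis=1)[:, :n_neighbors]              # similar vertices
        row_avg = np.stack([A[nbrs[i]].mean(axis=0) for i in range(n)])
        P = (row_avg + row_avg.T) / 2.0                            # symmetrise
    return P

# toy 2-block stochastic block model
rng = np.random.default_rng(0)
z = np.repeat([0, 1], 50)
probs = np.array([[0.6, 0.1], [0.1, 0.5]])
A = rng.binomial(1, probs[z][:, z])
A = np.triu(A, 1); A = A + A.T
P_hat = iterative_link_probabilities(A)
print(P_hat[:3, :3].round(2), P_hat[0, 60].round(2))   # within- vs between-block estimates
```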
null
Learning to Adapt via Latent Domains for Adaptive Semantic Segmentation
https://papers.nips.cc/paper_files/paper/2021/hash/092cb13c22d51c22b9035a2b4fe76b00-Abstract.html
Yunan Liu, Shanshan Zhang, Yang Li, Jian Yang
https://papers.nips.cc/paper_files/paper/2021/hash/092cb13c22d51c22b9035a2b4fe76b00-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11713-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/092cb13c22d51c22b9035a2b4fe76b00-Paper.pdf
https://openreview.net/forum?id=PEGc7x_QL2
https://papers.nips.cc/paper_files/paper/2021/file/092cb13c22d51c22b9035a2b4fe76b00-Supplemental.pdf
Domain adaptive semantic segmentation aims to transfer knowledge learned from a labeled source domain to an unlabeled target domain. To narrow the domain gap and ease adaptation, some recent methods translate source images to target-like images (latent domains), which are used as a supplement or substitute for the original source data. Nevertheless, these methods neglect to explicitly model the relationship of knowledge transfer across different domains. Alternatively, in this work we break away from the standard “source-target” single-pair adaptation framework and construct multiple adaptation pairs (e.g. “source-latent” and “latent-target”). The purpose is to use the meta-knowledge (how to adapt) learned from one pair as guidance to assist the adaptation of another pair under a meta-learning framework. Furthermore, we extend our method to the more practical setting of open compound domain adaptation (a.k.a. multiple-target domain adaptation), where the target is a compound of multiple domains without domain labels. In this setting, we embed an additional “latent-latent” pair to reduce the domain gap between the source and the different latent domains, allowing the model to adapt well to multiple target domains simultaneously. When evaluated on standard benchmarks, our method is superior to the state-of-the-art methods in both the single-target and multiple-target domain adaptation settings.
null
Single Layer Predictive Normalized Maximum Likelihood for Out-of-Distribution Detection
https://papers.nips.cc/paper_files/paper/2021/hash/093b60fd0557804c8ba0cbf1453da22f-Abstract.html
Koby Bibas, Meir Feder, Tal Hassner
https://papers.nips.cc/paper_files/paper/2021/hash/093b60fd0557804c8ba0cbf1453da22f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11714-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/093b60fd0557804c8ba0cbf1453da22f-Paper.pdf
https://openreview.net/forum?id=ympqhd5gE9
https://papers.nips.cc/paper_files/paper/2021/file/093b60fd0557804c8ba0cbf1453da22f-Supplemental.pdf
Detecting out-of-distribution (OOD) samples is vital for developing machine learning based models for critical safety systems. Common approaches for OOD detection assume access to some OOD samples during training which may not be available in a real-life scenario. Instead, we utilize the {\em predictive normalized maximum likelihood} (pNML) learner, in which no assumptions are made on the tested input. We derive an explicit expression of the pNML and its generalization error, denoted as the regret, for a single layer neural network (NN). We show that this learner generalizes well when (i) the test vector resides in a subspace spanned by the eigenvectors associated with the large eigenvalues of the empirical correlation matrix of the training data, or (ii) the test sample is far from the decision boundary. Furthermore, we describe how to efficiently apply the derived pNML regret to any pretrained deep NN, by employing the explicit pNML for the last layer, followed by the softmax function. Applying the derived regret to deep NN requires neither additional tunable parameters nor extra data. We extensively evaluate our approach on 74 OOD detection benchmarks using DenseNet-100, ResNet-34, and WideResNet-40 models trained with CIFAR-100, CIFAR-10, SVHN, and ImageNet-30 showing a significant improvement of up to 15.6% over recent leading methods.
null
Prototypical Cross-Attention Networks for Multiple Object Tracking and Segmentation
https://papers.nips.cc/paper_files/paper/2021/hash/093f65e080a295f8076b1c5722a46aa2-Abstract.html
Lei Ke, Xia Li, Martin Danelljan, Yu-Wing Tai, Chi-Keung Tang, Fisher Yu
https://papers.nips.cc/paper_files/paper/2021/hash/093f65e080a295f8076b1c5722a46aa2-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11715-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/093f65e080a295f8076b1c5722a46aa2-Paper.pdf
https://openreview.net/forum?id=OkFPq7ZtsQ
null
Multiple object tracking and segmentation requires detecting, tracking, and segmenting objects belonging to a set of given classes. Most approaches only exploit the temporal dimension to address the association problem, while relying on single-frame predictions for the segmentation mask itself. We propose the Prototypical Cross-Attention Network (PCAN), capable of leveraging rich spatio-temporal information for online multiple object tracking and segmentation. PCAN first distills a space-time memory into a set of prototypes and then employs cross-attention to retrieve rich information from the past frames. To segment each object, PCAN adopts a prototypical appearance module to learn a set of contrastive foreground and background prototypes, which are then propagated over time. Extensive experiments demonstrate that PCAN outperforms current video instance tracking and segmentation competition winners on both the YouTube-VIS and BDD100K datasets, and shows efficacy for both one-stage and two-stage segmentation frameworks. Code and video resources are available at http://vis.xyz/pub/pcan.
null
Algorithmic Instabilities of Accelerated Gradient Descent
https://papers.nips.cc/paper_files/paper/2021/hash/094bb65ef46d3eb4be0a87877ec333eb-Abstract.html
Amit Attia, Tomer Koren
https://papers.nips.cc/paper_files/paper/2021/hash/094bb65ef46d3eb4be0a87877ec333eb-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11716-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/094bb65ef46d3eb4be0a87877ec333eb-Paper.pdf
https://openreview.net/forum?id=Am_qvhPRQTq
null
We study the algorithmic stability of Nesterov's accelerated gradient method. For convex quadratic objectives, Chen et al. (2018) proved that the uniform stability of the method grows quadratically with the number of optimization steps, and conjectured that the same is true for the general convex and smooth case. We disprove this conjecture and show, for two notions of algorithmic stability (including uniform stability), that the stability of Nesterov's accelerated method in fact deteriorates exponentially fast with the number of gradient steps. This stands in sharp contrast to the bounds in the quadratic case, but also to known results for non-accelerated gradient methods where stability typically grows linearly with the number of steps.
null
Learning Optimal Predictive Checklists
https://papers.nips.cc/paper_files/paper/2021/hash/09676fac73eda6cac726c43e43e86c58-Abstract.html
Haoran Zhang, Quaid Morris, Berk Ustun, Marzyeh Ghassemi
https://papers.nips.cc/paper_files/paper/2021/hash/09676fac73eda6cac726c43e43e86c58-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11717-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/09676fac73eda6cac726c43e43e86c58-Paper.pdf
https://openreview.net/forum?id=bDHBNVtB9XA
https://papers.nips.cc/paper_files/paper/2021/file/09676fac73eda6cac726c43e43e86c58-Supplemental.pdf
Checklists are simple decision aids that are often used to promote safety and reliability in clinical applications. In this paper, we present a method to learn checklists for clinical decision support. We represent predictive checklists as discrete linear classifiers with binary features and unit weights. We then learn globally optimal predictive checklists from data by solving an integer programming problem. Our method allows users to customize checklists to obey complex constraints, including constraints to enforce group fairness and to binarize real-valued features at training time. In addition, it pairs models with an optimality gap that can inform model development and determine the feasibility of learning sufficiently accurate checklists on a given dataset. We pair our method with specialized techniques that speed up its ability to train a predictive checklist that performs well and has a small optimality gap. We benchmark the performance of our method on seven clinical classification problems, and demonstrate its practical benefits by training a short-form checklist for PTSD screening. Our results show that our method can fit simple predictive checklists that perform well and that can easily be customized to obey a rich class of custom constraints.
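A predictive checklist, as described above, is just a unit-weight linear classifier over binary features with an integer threshold; the sketch below shows the prediction rule with a hand-picked checklist, while the paper learns the items and threshold via integer programming.

```python
import numpy as np

def checklist_predict(X_binary, items, threshold):
    """Predict positive when at least `threshold` of the selected checklist
    `items` (column indices of binary features) are checked."""
    return (X_binary[:, items].sum(axis=1) >= threshold).astype(int)

# toy screening data: 5 binary symptoms, a 3-item checklist with threshold 2
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(8, 5))
print(checklist_predict(X, items=[0, 2, 4], threshold=2))
```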
null
Finite Sample Analysis of Average-Reward TD Learning and $Q$-Learning
https://papers.nips.cc/paper_files/paper/2021/hash/096ffc299200f51751b08da6d865ae95-Abstract.html
Sheng Zhang, Zhe Zhang, Siva Theja Maguluri
https://papers.nips.cc/paper_files/paper/2021/hash/096ffc299200f51751b08da6d865ae95-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11718-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/096ffc299200f51751b08da6d865ae95-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=1Rxp-demAH0
https://papers.nips.cc/paper_files/paper/2021/file/096ffc299200f51751b08da6d865ae95-Supplemental.pdf
The focus of this paper is on sample complexity guarantees of average-reward reinforcement learning algorithms, which are known to be more challenging to study than their discounted-reward counterparts. To the best of our knowledge, we provide the first known finite sample guarantees using both constant and diminishing step sizes of (i) average-reward TD($\lambda$) with linear function approximation for policy evaluation and (ii) average-reward $Q$-learning in the tabular setting to find the optimal policy. A major challenge is that since the value functions are agnostic to an additive constant, the corresponding Bellman operators are no longer contraction mappings under any norm. We obtain the results for TD($\lambda$) by working in an appropriately defined subspace that ensures uniqueness of the solution. For $Q$-learning, we exploit the span seminorm contractive property of the Bellman operator, and construct a novel Lyapunov function obtained by infimal convolution of a generalized Moreau envelope and the indicator function of a set.
null
Generalization Bounds for Graph Embedding Using Negative Sampling: Linear vs Hyperbolic
https://papers.nips.cc/paper_files/paper/2021/hash/09779bb7930c8a0a44360e12b538ae3c-Abstract.html
Atsushi Suzuki, Atsushi Nitanda, jing wang, Linchuan Xu, Kenji Yamanishi, Marc Cavazza
https://papers.nips.cc/paper_files/paper/2021/hash/09779bb7930c8a0a44360e12b538ae3c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11719-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/09779bb7930c8a0a44360e12b538ae3c-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=Zfk2NOSWoYg
https://papers.nips.cc/paper_files/paper/2021/file/09779bb7930c8a0a44360e12b538ae3c-Supplemental.pdf
Graph embedding, which represents real-world entities in a mathematical space, has enabled numerous applications such as analyzing natural languages, social networks, biochemical networks, and knowledge bases. It has been experimentally shown that graph embedding in hyperbolic space can represent hierarchical tree-like data more effectively than embedding in linear space, owing to hyperbolic space's exponential growth property. However, since the theoretical comparison has been limited to ideal noiseless settings, the potential for the hyperbolic space's property to worsen the generalization error for practical data has not been analyzed. In this paper, we provide a generalization error bound applicable for graph embedding both in linear and hyperbolic spaces under various negative sampling settings that appear in graph embedding. Our bound states that error is polynomial and exponential with respect to the embedding space's radius in linear and hyperbolic spaces, respectively, which implies that hyperbolic space's exponential growth property worsens the error. Using our bound, we clarify the data size condition on which graph embedding in hyperbolic space can represent a tree better than in Euclidean space by discussing the bias-variance trade-off. Our bound also shows that imbalanced data distribution, which often appears in graph embedding, can worsen the error.
null
Gradient Starvation: A Learning Proclivity in Neural Networks
https://papers.nips.cc/paper_files/paper/2021/hash/0987b8b338d6c90bbedd8631bc499221-Abstract.html
Mohammad Pezeshki, Oumar Kaba, Yoshua Bengio, Aaron C. Courville, Doina Precup, Guillaume Lajoie
https://papers.nips.cc/paper_files/paper/2021/hash/0987b8b338d6c90bbedd8631bc499221-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11720-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/0987b8b338d6c90bbedd8631bc499221-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=aExAsh1UHZo
null
We identify and formalize a fundamental gradient descent phenomenon resulting in a learning proclivity in over-parameterized neural networks. Gradient Starvation arises when cross-entropy loss is minimized by capturing only a subset of features relevant for the task, despite the presence of other predictive features that fail to be discovered. This work provides a theoretical explanation for the emergence of such feature imbalance in neural networks. Using tools from Dynamical Systems theory, we identify simple properties of learning dynamics during gradient descent that lead to this imbalance, and prove that such a situation can be expected given certain statistical structure in training data. Based on our proposed formalism, we develop guarantees for a novel regularization method aimed at decoupling feature learning dynamics, improving accuracy and robustness in cases hindered by gradient starvation. We illustrate our findings with simple and real-world out-of-distribution (OOD) generalization experiments.
null

Neural Information Processing Systems NeurIPS 2021 Accepted Paper Meta Info Dataset

This dataset is collected from the accepted papers of NeurIPS 2021, the Thirty-fifth Conference on Neural Information Processing Systems (https://papers.nips.cc/paper_files/paper/2021), as well as the DeepNLP paper page for NeurIPS 2021 arXiv papers (http://www.deepnlp.org/content/paper/nips2021). Researchers interested in analyzing the NeurIPS 2021 accepted papers and potential research trends can use the already cleaned-up JSON file in this dataset; each row contains the meta information of one paper accepted at NeurIPS 2021. To explore more AI & robotics papers (NeurIPS/ICML/ICLR/IROS/ICRA/etc.) and AI equations, feel free to browse the Equation Search Engine (http://www.deepnlp.org/search/equation) as well as the AI Agent Search Engine (http://www.deepnlp.org/search/agent) to find deployed AI apps and agents in your domain.
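
As a minimal sketch of how the cleaned-up file can be loaded with plain Python, assuming the records are stored one JSON object per line under the hypothetical filename nips2021_papers.jsonl (adjust the path and format to the actual file shipped with this dataset):

import json

# Load the cleaned-up metadata file (hypothetical filename; adjust to the
# actual file in this dataset repository).
papers = []
with open("nips2021_papers.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if line:
            papers.append(json.loads(line))  # one paper record per line

print(f"Loaded {len(papers)} NeurIPS 2021 paper records")
print(papers[0]["title"], "by", papers[0]["authors"])

If the dataset is hosted on the Hugging Face Hub, the datasets library's load_dataset function can be used instead with this dataset's repository id.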

Meta Information of the JSON File

Each record in the JSON file carries the fields shown in the example entry below.

{
    "title": "Backward-Compatible Prediction Updates: A Probabilistic Approach",
    "url": "https://papers.nips.cc/paper_files/paper/2021/hash/012d9fe15b2493f21902cd55603382ec-Abstract.html",
    "authors": "Frederik Tr\u00e4uble, Julius von K\u00fcgelgen, Matth\u00e4us Kleindessner, Francesco Locatello, Bernhard Sch\u00f6lkopf, Peter Gehler",
    "detail_url": "https://papers.nips.cc/paper_files/paper/2021/hash/012d9fe15b2493f21902cd55603382ec-Abstract.html",
    "tags": "NIPS 2021",
    "Bibtex": "https://papers.nips.cc/paper_files/paper/11633-/bibtex",
    "Paper": "https://papers.nips.cc/paper_files/paper/2021/file/012d9fe15b2493f21902cd55603382ec-Paper.pdf",
    "Reviews And Public Comment \u00bb": "https://papers.nips.cchttps://openreview.net/forum?id=YjZoWjTKYvH",
    "Supplemental": "https://papers.nips.cc/paper_files/paper/2021/file/012d9fe15b2493f21902cd55603382ec-Supplemental.pdf",
    "abstract": "When machine learning systems meet real world applications, accuracy is only one of several requirements. In this paper, we assay a complementary perspective originating from the increasing availability of pre-trained and regularly improving state-of-the-art models. While new improved models develop at a fast pace, downstream tasks vary more slowly or stay constant. Assume that we have a large unlabelled data set for which we want to maintain accurate predictions. Whenever a new and presumably better ML models becomes available, we encounter two problems: (i) given a limited budget, which data points should be re-evaluated using the new model?; and (ii) if the new predictions differ from the current ones, should we update? Problem (i) is about compute cost, which matters for very large data sets and models. Problem (ii) is about maintaining consistency of the predictions, which can be highly relevant for downstream applications; our demand is to avoid negative flips, i.e., changing correct to incorrect predictions. In this paper, we formalize the Prediction Update Problem and present an efficient probabilistic approach as answer to the above questions. In extensive experiments on standard classification benchmark data sets, we show that our method outperforms alternative strategies along key metrics for backward-compatible prediction updates."
}
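
To illustrate how these fields support a simple trend analysis, here is a small sketch. It assumes the records have already been loaded into a list of dictionaries called papers (as in the loading example above) and that "authors" is a comma-separated string, as in the sample record; both are assumptions rather than guarantees about the file.

from collections import Counter

def keyword_filter(papers, keyword):
    # Return records whose abstract mentions the keyword (case-insensitive).
    keyword = keyword.lower()
    return [p for p in papers if p.get("abstract") and keyword in p["abstract"].lower()]

rl_papers = keyword_filter(papers, "reinforcement learning")
print(f"{len(rl_papers)} abstracts mention reinforcement learning")

# Tally how often each author name appears, assuming the "authors" field is a
# comma-separated string as in the sample record above.
author_counts = Counter()
for p in papers:
    for name in p.get("authors", "").split(","):
        name = name.strip()
        if name:
            author_counts[name] += 1
print(author_counts.most_common(5))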

Related

AI Agent Marketplace and Search

AI Agent Marketplace and Search
Robot Search
Equation and Academic search
AI & Robot Comprehensive Search
AI & Robot Question
AI & Robot Community
AI Agent Marketplace Blog

AI Agent Reviews

AI Agent Marketplace Directory
Microsoft AI Agents Reviews
Claude AI Agents Reviews
OpenAI AI Agents Reviews
Salesforce AI Agents Reviews
AI Agent Builder Reviews

AI Equation

List of AI Equations and Latex
List of Math Equations and Latex
List of Physics Equations and Latex
List of Statistics Equations and Latex
List of Machine Learning Equations and Latex
