Dataset Viewer (auto-converted to Parquet)

Column schema:

| Column | Type | Values |
|---|---|---|
| paperhash | string | length 38-205 |
| s2_corpus_id | string | length 0-9 |
| arxiv_id | string | length 0-13 |
| title | string | length 8-176 |
| abstract | string | length 228-5k |
| authors | sequence | - |
| summary | string | length 1-668 |
| field_of_study | sequence | length 0-22 |
| venue | string | 179 distinct values |
| publication_date | string | length 10-19 |
| n_references | int32 | 0-228 |
| n_citations | int32 | 0-19.4k |
| n_influential_citations | int32 | 0-1.22k |
| introduction | string | 0 values |
| background | string | 0 values |
| methodology | string | 0 values |
| experiments_results | string | 0 values |
| conclusion | string | 0 values |
| full_text | string | 0 values |
| decision | bool | 2 classes |
| decision_text | string | length 0-10k |
| reviews | sequence | - |
| comments | sequence | - |
| references | sequence | - |
| hypothesis | string | 0 values |
| month_since_publication | int32 | 11-108 |
| avg_citations_per_month | float32 | 0-372 |
| mean_score | float32 | 0-1 |
| mean_confidence | float32 | 0.08-1 |
| mean_novelty | float32 | 0-1 |
| mean_correctness | float32 | 0-1 |
| mean_clarity | float32 | - |
| mean_impact | float32 | - |
| mean_reproducibility | float32 | - |
| openreview_submission_id | string | length 9-12 |
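Below is a minimal sketch of loading and inspecting this dataset with the Hugging Face `datasets` library, given the Parquet auto-conversion noted above. The repository id and the split name are placeholders (assumptions), not taken from this page. The rows shown further below are sample records from the viewer, in schema order.

```python
# Hypothetical loading example; replace "<dataset_repo_id>" with the actual repository name
# and adjust the split name if the dataset uses a different one.
from datasets import load_dataset

ds = load_dataset("<dataset_repo_id>", split="train")
print(ds.column_names)      # should list the columns from the schema table above
row = ds[0]
print(row["title"], row["decision"], row["mean_score"])
```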
vertolli|image_quality_assessment_techniques_improve_training_and_evaluation_of_energybased_generative_adversarial_networks|ICLR_cc_2018_Conference
Image Quality Assessment Techniques Improve Training and Evaluation of Energy-Based Generative Adversarial Networks
We propose a new, multi-component energy function for energy-based Generative Adversarial Networks (GANs) based on methods from the image quality assessment literature. Our approach expands on the Boundary Equilibrium Generative Adversarial Network (BEGAN) by outlining some of the short-comings of the original energy and loss functions. We address these short-comings by incorporating an l1 score, the Gradient Magnitude Similarity score, and a chrominance score into the new energy function. We then provide a set of systematic experiments that explore its hyper-parameters. We show that each of the energy function's components is able to represent a slightly different set of features, which require their own evaluation criteria to assess whether they have been adequately learned. We show that models using the new energy function are able to produce better image representations than the BEGAN model in predicted ways.
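For illustration only, here is a rough sketch of the kind of multi-component reconstruction energy this abstract describes (an l1 term, a Gradient Magnitude Similarity term, and a chrominance term). The weights, stability constant, and colour split below are assumptions for the sketch, not the paper's actual energy function.

```python
import numpy as np

def gradient_magnitude(img):
    # Finite-difference gradient magnitude of a 2-D luminance image.
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return np.sqrt(gx ** 2 + gy ** 2)

def gms(x, y, c=0.0026):
    # Gradient Magnitude Similarity (higher = more similar); c is a small stability constant.
    mx, my = gradient_magnitude(x), gradient_magnitude(y)
    return np.mean((2 * mx * my + c) / (mx ** 2 + my ** 2 + c))

def reconstruction_energy(real, recon, w_l1=1.0, w_gms=1.0, w_chroma=1.0):
    # real, recon: (H, W, 3) float arrays in [0, 1]; the weights are hypothetical.
    l1 = np.mean(np.abs(real - recon))
    lum_r, lum_g = real.mean(axis=-1), recon.mean(axis=-1)    # crude luminance
    chroma_r = real - real.mean(axis=-1, keepdims=True)       # crude chrominance residual
    chroma_g = recon - recon.mean(axis=-1, keepdims=True)
    chroma = np.mean(np.abs(chroma_r - chroma_g))
    return w_l1 * l1 + w_gms * (1.0 - gms(lum_r, lum_g)) + w_chroma * chroma
```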
{ "name": [], "affiliation": [] }
Image Quality Assessment Techniques Improve Training and Evaluation of Energy-Based Generative Adversarial Networks
[ "generative adversarial networks", "gans", "deep learning", "image modeling", "image generation", "energy based models" ]
null
2018-02-15 22:29:49
14
null
null
null
null
null
null
null
null
false
The paper received borderline-negative scores (6, 5, 5), with R1 and R2 having significant difficulty with the clarity of the paper. Although R3 was marginally positive, they pointed out that the experiments are "extremely weak". The AC looked at the paper and agrees with R3 on this point. Therefore the paper cannot be accepted in its current form. The experiments and clarity need work before resubmission to another venue.
{ "review_id": [ "HJZIu0Kef", "H1NEs7Clz", "Bk8udEEeM" ], "review": [ { "title": "title: Novelty of the paper is a bit restricted, and design choices appear to be lacking strong justifications.", "paper_summary": null, "main_review": "main_review: This paper proposed some new energy function in the BEGAN (boundary equilibrium GAN framework), including l_1 score, Gradient magnitude similarity score, and chrominance score, which are motivated and borrowed from the image quality assessment techniques. These energy component in the objective function allows learning of different set of features and determination on whether the features are adequately represented. experiments on the using different hyper-parameters of the energy function, as well as visual inspections on the quality of the learned images, are presented. \n\nIt appears to me that the novelty of the paper is limited, in that the main approach is built on the existing BEGAN framework with certain modifications. For example, the new energy function in equation (4) larges achieves similar goal as the original energy (1) proposed by Zhao et. al (2016), except that the margin loss in (1) is changed to a re-weighted linear loss, where the dynamic weighting scheme of k_t is borrowed from the work of Berthelot et. al (2017). It is not very clear why making such changes in the energy would supposedly make the results better, and no further discussions are provided. On the other hand, the several energy component introduced are simply choices of the similarity measures as motivated from the image quality assessment, and there are probably a lot more in the literature whose application can not be deemed as a significant contribution to either theories or algorithm designs in GAN.\n\nMany results from the experimental section rely on visual evaluations, such as in Figure~4 or 5; from these figures, it is difficult to clearly pick out the winning images. In Figure~5, for a fair evaluation on the performance of model interploations, the same human model should be used for competing methods, instead of applying different human models and different interpolation tasks in different methods. \n ", "strength_weakness": null, "questions": null, "limitations": null, "review_summary": null }, { "title": "title: An incremental paper with moderately interesting results on a single dataset", "paper_summary": null, "main_review": "main_review: Summary: \nThe paper extends the the recently proposed Boundary Equilibrium Generative Adversarial Networks (BEGANs), with the hope of generating images which are more realistic. In particular, the authors propose to change the energy function associated with the auto-encoder, from an L2 norm (a single number) to an energy function with multiple components. Their energy function is inspired by the structured similarity index (SSIM), and the three components they use are the L1 score, the gradient magnitude similarity score, and the chromium score. Using this energy function, the authors hypothesize, that it will force the generator to generate realistic images. They test their hypothesis on a single dataset, namely, the CelebA dataset. \n\nReview: \nWhile the idea proposed in the paper is somewhat novel and there is nothing obviously wrong about the proposed approach, I thought the paper is somewhat incremental. As a result I kind of question the impact of this result. My suspicion is reinforced by the fact that the experimental section is extremely weak. 
In particular the authors test their model on a single relatively straightforward dataset. Any reason why the authors did not try on other datasets involving natural images? As a result I feel that the title and the claims in the paper are somewhat misleading and premature: that the proposed techniques improves the training and evaluation of energy based gans. \n\nOver all the paper is clearly written and easy to understand. \n\nBased on its incremental nature and weak experiments, I'm on the margin with regards to its acceptance. Happy to change my opinion if other reviewers strongly think otherwise with good reason and are convinced about its impact. ", "strength_weakness": null, "questions": null, "limitations": null, "review_summary": null }, { "title": "title: A very technical paper with unclear significance.", "paper_summary": null, "main_review": "main_review: Quick summary:\nThis paper proposes an energy based formulation to the BEGAN model and modifies it to include an image quality assessment based term. The model is then trained with CelebA under different parameters settings and results are analyzed.\n\nQuality and significance:\nThis is quite a technical paper, written in a very compressed form and is a bit hard to follow. Mostly it is hard to estimate what is the contribution of the model and how the results differ from baseline models.\n\nClarity:\nI would say this is one of the weak points of the paper - the paper is not well motivated and the results are not clearly presented. \n\nOriginality:\nSeems original.\n\nPros:\n* Interesting energy formulation and variation over BEGAN\n\nCons:\n* Not a clear paper\n* results are only partially motivated and analyzed", "strength_weakness": null, "questions": null, "limitations": null, "review_summary": null } ], "score": [ 0.4444444477558136, 0.5555555820465088, 0.4444444477558136 ], "confidence": [ 0.5, 0.5, 0.5 ], "novelty": [ null, null, null ], "correctness": [ null, null, null ], "clarity": [ null, null, null ], "impact": [ null, null, null ], "reproducibility": [ null, null, null ], "ethics": [ null, null, null ] }
{ "title": [ "Challenges of the BEGAN model", "Clarifications", "Clarifications" ], "comment": [ "Thank you for your review and comments.\n\nWe have been working on extending our research to include other datasets. The primary challenge is that the stock BEGAN model does rather poorly on datasets that do not have a lot of regular structure like the CelebA dataset. Consequently, we have preliminary results that are suggestive for MNIST and the msceleb dataset, but we've been unable to show any interesting results on Imagenet or the LSUN bedrooms dataset. \n\nOur suspicion is that these are issues with the stock network design/structure. We are currently working with an EBM-based modification of the model from \"Progressive Growing of GANs for Improved Quality, Stability, and Variation\" (this conference), which seems to replicate our results on other datasets. Our research is still very preliminary, though.\n\nWe are curious what the 'correct' number of datasets is for a conference proceedings paper (which is extremely short). The original BEGAN paper only uses one, custom dataset. The EBGAN paper has 4 (if you count MNIST). The WGAN-GP paper has 2 plus some artificial datasets. Consequently, there doesn't seem to be any consensus in the community on this point.", "Thank you for your review and comments.\n\nCould you unpack what you mean by, \"It is not very clear why making such changes in the energy would supposedly make the results better, and no further discussions are provided\"? We explicitly state in section 2.1 that:\n\n\"It is not particularly surprising that these modifications to Equation 2 show improvements. Zhao et al. (2016) devote an appendix section to the correct selection of m and explicitly mention that the “balance between... real and fake samples[s]” (italics theirs) is crucial to the correct selection of m. Unsurprisingly, a dynamically updated parameter that accounts for this balance is likely to be the best instantiation of the authors’ intuitions and visual inspection of the resulting output supports this (see Berthelot et al., 2017).\"\n\nWhat kind of discussion would you have liked to see? If you're looking for a formal analysis, we would suggest reviewing Berthelot et al., (2017) and Arjovsky, Chintala, and Bottou (2017) for their discussions of the advantages of the Wasserstein distance over the alternatives. Section 5 bullet 3 in the latter explicitly addresses the differences between the original EBGAN margin loss and the Wasserstein distance, if you are interested. Sections 3.3 and 3.4 of the former address the equilibrium hyper-parameters of the BEGAN model (e.g., gamma, k_t).\n\nCould you also unpack what you mean by, \"there are probably a lot more [similarity measures] in the [IQA] literature whose application can not be deemed as a significant contribution to either theories or algorithm designs in GAN\"? We assume that you are not saying that the mere existence of other methods is damning to the scientific study of some subset of those methods. Please clarify how our modification of the energy-based formulation of GANs to emphasize a more important role for the energy function (generally assumed to be an l1 or l2 norm across many studies) is not a significant contribution to GAN research?\n\nCould you also unpack what you mean by, \"human model\"? We would like to clarify that the function of Figure 5 is to illustrate how image diversity has not been lost when using the new evaluation. 
It is not trying to show how one set of images are better than another.", "Thank you for your review and comments.\n\nCould you be more specific about what needs greater clarity?\n\nAll of the models are modifications upon the original BEGAN model except model 1 (which is the original model). All of the modifications are based upon different hyper-parameter sets of equation 8 which are outlined in Table 1. Sections 2.2 and 2.3 motivate the modifications we made." ] }
{ "paperhash": [ "berthelot|began:_boundary_equilibrium_generative_adversarial_networks", "damon|seven_challenges_in_image_quality_assessment:_past,_present,_and_future_research", "chen|infogan:_interpretable_representation_learning_by_information_maximizing_generative_adversarial_nets", "goodfellow|generative_adversarial_nets", "gragera|semimetric_properties_of_sørensen-dice_and_tversky_indexes", "gupta|objective_color_image_quality_assessment_based_on_sobel_magnitude._signal,_image_and_video_processing", "lecun|a_tutorial_on_energy-based_learning", "liu|coupled_generative_adversarial_networks", "liu|deep_learning_face_attributes_in_the_wild", "radford|unsupervised_representation_learning_with_deep_convolutional_generative_adversarial_networks", "wang|modern_image_quality_assessment", "wang|image_quality_assessment:_from_error_visibility_to_structural_similarity", "xue|gradient_magnitude_similarity_deviation:_a_highly_efficient_perceptual_image_quality_index", "zhao|energy-based_generative_adversarial_network" ], "title": [ "Began: Boundary equilibrium generative adversarial networks", "Seven challenges in image quality assessment: past, present, and future research", "Infogan: Interpretable representation learning by information maximizing generative adversarial nets", "Generative adversarial nets", "Semimetric properties of sørensen-dice and tversky indexes", "Objective color image quality assessment based on sobel magnitude. Signal, Image and Video Processing", "A tutorial on energy-based learning", "Coupled generative adversarial networks", "Deep learning face attributes in the wild", "Unsupervised representation learning with deep convolutional generative adversarial networks", "Modern image quality assessment", "Image quality assessment: from error visibility to structural similarity", "Gradient magnitude similarity deviation: A highly efficient perceptual image quality index", "Energy-based generative adversarial network" ], "abstract": [ "", "", "", "", "", "", "", "", "", "", "", "", "", "" ], "authors": [ { "name": [ "david berthelot", "tom schumm", "luke metz" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "m damon", " chandler" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "xi chen", "yan duan", "rein houthooft", "john schulman", "ilya sutskever", "pieter abbeel" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "ian goodfellow", "jean pouget-abadie", "mehdi mirza", "bing xu", "david warde-farley", "sherjil ozair", "aaron courville", "yoshua bengio" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": 
null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "alonso gragera", "vorapong suppakitpaisarn" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "savita gupta", "akshay gore", "satish kumar", "sneh mani", " srivastava" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "yann lecun", "sumit chopra", "raia hadsell", "m ranzato", " huang" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "ming-yu liu", "oncel tuzel" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "ziwei liu", "ping luo", "xiaogang wang", "xiaoou tang" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "alec radford", "luke metz", "soumith chintala" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "zhou wang", "alan c bovik" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "zhou wang", "alan c bovik", "hamid r sheikh", "eero p simoncelli" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "wufeng xue", "lei zhang", "xuanqin mou", "alan c bovik" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "junbo zhao", "michael mathieu", "yann lecun" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ "1703.10717v4", "", "1606.03657v1", "", "", "", "", "1606.07536v2", "1411.7766v3", "1511.06434v2", "", "", "1308.3052v2", "arXiv:1609.03126[cs.LG" ], "s2_corpus_id": [ "", "", "", "", "", "", "", "", "", "", "", "", "", "" ], "intents": [ null, null, null, null, null, null, null, null, null, null, null, null, null, null ], "isInfluential": [ null, null, null, null, null, null, 
null, null, null, null, null, null, null, null ] }
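As the row above shows, the nested columns (`references`, `reviews`, `comments`) are stored as structs of parallel lists. A small sketch of recovering one record per reference, reusing the `row` object from the loading example earlier; the zip assumes the lists are aligned, which they appear to be in this row.

```python
# Turn the struct-of-lists `references` field into one dict per cited paper.
refs = row["references"]
per_reference = [
    {"paperhash": p, "title": t, "arxiv_id": a}
    for p, t, a in zip(refs["paperhash"], refs["title"], refs["arxiv_id"])
]
print(per_reference[0])
```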
null
84
null
0.481481
0.5
null
null
null
null
null
ryzm6BATZ
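A small worked check of how the aggregate fields of this row relate to the per-review lists in `reviews`. The decision text mentions raw scores of 6, 5 and 5, and the stored values match those under a (raw - 1) / 9 mapping on ICLR's 1-10 scale; that normalisation is inferred from the data, not documented on this page.

```python
# Values copied from the `reviews` field of this row.
scores = [0.4444444477558136, 0.5555555820465088, 0.4444444477558136]
confidences = [0.5, 0.5, 0.5]

mean_score = sum(scores) / len(scores)                   # ~0.481481, matching mean_score above
mean_confidence = sum(confidences) / len(confidences)    # 0.5, matching mean_confidence above
raw_scores = [round(s * 9 + 1) for s in scores]          # [5, 6, 5], assuming a 1-10 scale
```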
brakel|learning_independent_features_with_adversarial_nets_for_nonlinear_ica|ICLR_cc_2018_Conference
Learning Independent Features with Adversarial Nets for Non-linear ICA
Reliable measures of statistical dependence could potentially be useful tools for learning independent features and performing tasks like source separation using Independent Component Analysis (ICA). Unfortunately, many of such measures, like the mutual information, are hard to estimate and optimize directly. We propose to learn independent features with adversarial objectives (Goodfellow et al. 2014, Arjovsky et al. 2017) which optimize such measures implicitly. These objectives compare samples from the joint distribution and the product of the marginals without the need to compute any probability densities. We also propose two methods for obtaining samples from the product of the marginals using either a simple resampling trick or a separate parametric distribution. Our experiments show that this strategy can easily be applied to different types of model architectures and solve both linear and non-linear ICA problems.
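As a rough illustration of the "resampling trick" mentioned in this abstract (and in the reviewer discussion below), the sketch here shuffles each feature coordinate independently across a batch to approximate samples from the product of the marginals; a discriminator then compares these against samples from the joint. Function and variable names are assumptions, not the authors' code.

```python
import numpy as np

def resample_product_of_marginals(features, rng=np.random):
    # features: (batch, dims) encoder outputs, i.e. samples from the joint distribution.
    out = features.copy()
    n = out.shape[0]
    for d in range(out.shape[1]):
        out[:, d] = out[rng.permutation(n), d]   # permute each coordinate independently
    return out  # approximately distributed as the product of the marginals

# A discriminator trained to separate `features` (joint) from the resampled batch
# (product of marginals) gives the encoder an adversarial signal for making its
# output features independent, without computing any densities.
```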
{ "name": [], "affiliation": [] }
null
[ "adversarial networks", "ica", "unsupervised", "independence" ]
null
2018-02-15 22:29:32
30
null
null
null
null
null
null
null
null
false
The paper proposes the use of GANs to match the joint distribution of features to the product of their marginals for ICA. The approach is entirely plausible, but reviewers complain about a lack of rigor and analysis regarding (i) the mixing conditions under which the proposed GAN-based approach will work, given that ICA is ill-posed for general nonlinear mixing, and (ii) comparison with prior work on linear and PNL ICA. Further, in most scenarios where GANs are used, one of the distributions is fixed (say, the real distribution) and the other is dynamic (the fake distribution), trying to come close to the fixed distribution during optimization. In the proposed method, the discriminator encodes the distance between the joint and the product of the marginals, both of which are dynamic during learning. It might be useful to comment on whether or not this has any implications with respect to increased instability of training, etc.
{ "review_id": [ "ry2lpp_ez", "HyoEDdvxG", "H1hlWndxM" ], "review": [ { "title": "title: Thought provoking paper but lacks more detailed analysis", "paper_summary": null, "main_review": "main_review: \nThe idea of ICA is constructing a mapping from dependent inputs to outputs (=the derived features) such that the outputs are as independent as possible. As the input/output densities are often not known and/or are intractable, natural independence measures such as mutual information are hard to estimate. In practice, the independence is characterized by certain functions of higher order moments -- leading to several alternatives in a zoo of independence objectives. \n\nThe current paper makes the iteresting observation that independent features can also be computed via adversarial objectives. The key idea of adversarial training is adapted in this context as comparing samples from the joint distribution and the product of the marginals. \n\nTwo methods are proposed for drawing samples from the products of marginals. \nOne method is generating samples but permuting randomly the sample indices for individual marginals - this resampling mechanism generates approximately independent samples from the product distribution. The second method is essentially samples each marginal separately. \n\nThe approach is demonstrated in the solution of both linear and non-linear ICA problems.\n\nPositive:\nThe paper is well written and easy to follow on a higher level. GAN's provide a fresh look at nonlinear ICA and the paper is certainly thought provoking. \n\n\nNegative:\nMost of the space is devoted for reviewing related work and motivations, while the specifics of the method are described relatively short in section 4. There is no analysis and the paper is \nsomewhat anecdotal. The simulation results section is limited in scope. The sampling from product distribution method is somewhat obvious.\n\n\nQuestions:\n\n- The overcomplete audio source separation case is well known for audio and I could not understand why a convincing baseline can not be found. Is this due to nonlinear mixing?\nAs 26 channels and 6 channels are given, a simple regularization based method can be easily developed to provide a baseline performance, \n\n\n- The need for normalization in section 4 is surprising, as it obviously renders the outputs dependent. \n\n- Figure 1 may be misleading as h are not defined \n", "strength_weakness": null, "questions": null, "limitations": null, "review_summary": null }, { "title": "title: Proposed Wasserstein GAN: not well-suited to ICA", "paper_summary": null, "main_review": "main_review: The focus of the paper is independent component analysis (ICA) and its nonlinear variants such as the post non-linear (PNL) ICA model. Motivated by the fact that estimating mutual information and similar dependency measures require density estimates and hard to optimize, the authors propose a Wasserstein GAN (generative adversarial network) based solution to tackle the problem, with illustrations on 6 (synthetic) and 3-dimemensional (audio) examples. 
The primary idea of the paper is to use the Wasserstein distance as an independence measure of the estimated source coordinates, and optimize it in a neural network (NN) framework.\n\nAlthough finding novel GAN applications is an exciting topic, I am not really convinced that ICA with the proposed Wasserstein GAN based technique fulfills this goal.\n \nBelow I detail my reasons:\n\n1)The ICA problem can be formulated as the minimization of pairwise mutual information [1] or one-dimensional entropy [2]. In other words, estimating the joint dependence of the source coordinates is not necessary; it is worthwhile to avoid it.\n\n2)The PNL ICA task can be efficiently tackled by first 'removing' the nonlinearity followed by classical linear ICA; see for example [3].\n\n3)Estimating information theoretic (IT) measures (mutual information, divergence) is a quite mature field with off-the-self techniques, see for example [4,5,6,8]. These methods do not estimate the underlying densities; it would be superfluous (and hard).\n\n4)Optimizing non-differentiable IT measures can computationally quite efficiently carried out in the ICA context by e.g., Givens rotations [7]; differentiable ICA cost functions can be robustly handled by Stiefel manifold methods; see for example [8,9].\n\n5)Section 3.1: This section is devoted to generating samples from the product of the marginals, even using separate generator networks. I do not see the necessity of these solutions; the subtask can be solved by independently shuffling all the coordinates of the sample.\n\n6)Experiments (Section 6): \ni) It seems to me that the proposed NN-based technique has some quite serious divergence issues: 'After discarding diverged models, ...' or 'Unfortunately, the model selection procedure also didn't identify good settings for the Anica-g model...'.\nii) The proposed method gives pretty comparable results to the chosen baselines (fastICA, PNLMISEP) on the selected small-dimensional tasks. In fact, [7,8,9] are likely to provide more accurate (fastICA is a simple kurtosis based method, which is \na somewhat crude 'estimate' of entropy) and faster estimates; see also 2).\n\nReferences:\n[1] Pierre Comon. Independent component analysis, a new concept? Signal Processing, 36:287-314, 1994.\n[2] Aapo Hyvarinen and Erkki Oja. Independent Component Analysis: Algorithms and Applications. Neural Networks, 13(4-5):411-30, 2000. \n[3] Andreas Ziehe, Motoaki Kawanabe, Stefan Harmeling, and Klaus-Robert Muller. Blind separation of postnonlinear mixtures using linearizing transformations and temporal decorrelation. Journal of Machine Learning Research, 4:1319-1338, 2003.\n[4] Barnabas Poczos, Liang Xiong, and Jeff Schneider. Nonparametric divergence: Estimation with applications to machine learning on distributions. In Conference on Uncertainty in Artificial Intelligence, pages 599-608, 2011.\n[5] Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Scholkopf, Alexander Smola. A Kernel Two-Sample Test. Journal of Machine Learning Research, 13:723-773, 2012.\n[6] Alan Wisler, Visar Berisha, Andreas Spanias, Alfred O. Hero. A data-driven basis for direct estimation of functionals of distributions. TR, 2017. (https://arxiv.org/abs/1702.06516) \n[7] Erik G. Learned-Miller, John W. Fisher III. ICA using spacings estimates of entropy. Journal of Machine Learning Research, 4:1271-1295, 2003.\n[8] Francis R. Bach. Michael I. Jordan. Kernel Independent Component Analysis. 
Journal of Machine Learning Research 3: 1-48, 2002.\n[9] Hao Shen, Stefanie Jegelka and Arthur Gretton. Fast Kernel-Based Independent Component Analysis, IEEE Transactions on Signal Processing, 57:3498-3511, 2009.\n", "strength_weakness": null, "questions": null, "limitations": null, "review_summary": null }, { "title": "title: Interesting nonlinear ICA method, but unfocused presentation and poor comparisons", "paper_summary": null, "main_review": "main_review: The paper proposes a GAN variant for solving the nonlinear independent component analysis (ICA) problem. The method seems interesting, but the presentation has a severe lack of focus.\n\nFirst, the authors should focus their discussion instead of trying to address a broad range of ICA problems from linear to post-nonlinear (PNL) to nonlinear. I would highly recommend the authors to study the review \"Advances in Nonlinear Blind Source Separation\" by Jutten and Karhunen (2003/2004) to understand the problems they are trying to solve.\n\nLinear ICA is a solved problem and the authors do not seem to be able to add anything there, so I would recommend dropping that to save space for the more interesting material.\n\nPNL ICA is solvable and there are a number of algorithms proposed for it, some cited already in the above review, but also more recent ones. From this perspective, the presented comparison seems quite inadequate.\n\nFully general nonlinear ICA is ill-posed, as shown already by Darmois (1953, doi:10.2307/1401511). Given this, the authors should indicate more clearly what is their method expected to do. There are an infinite number of nonlinear ICA solutions - which one is the proposed method going to return and why is that relevant? There are fewer relevant comparisons here, but at least Lappalainen and Honkela (2000) seem to target the same problem as the proposed method.\n\nThe use of 6 dimensional example in the experiments is a very good start, as higher dimensions are quite different and much more interesting than very commonly used 2D examples.\n\nOne idea for evaluation: comparison with ground truth makes sense for PNL, but not so much for general nonlinear because of unidentifiability. For general nonlinear ICA you could consider evaluating the quality of the estimated low-dimensional data manifold or evaluating the mutual information of separated sources on new test data.\n\nUpdate after author feedback: thanks for the response and the revision. The revision seems more cosmetic and does not address the most significant issues so I do not see a need to change my evaluation.", "strength_weakness": null, "questions": null, "limitations": null, "review_summary": null } ], "score": [ 0.5555555820465088, 0.2222222238779068, 0.4444444477558136 ], "confidence": [ 0.5, 1, 1 ], "novelty": [ null, null, null ], "correctness": [ null, null, null ], "clarity": [ null, null, null ], "impact": [ null, null, null ], "reproducibility": [ null, null, null ], "ethics": [ null, null, null ] }
{ "title": [ "Thanks for the feedback.", "Thank you for the feedback", "Thanks for the feedback.", "Thanks for the suggestions and comments. We adjusted the paper." ], "comment": [ "Thanks for the feedback and interesting references.\n\nMany of the criticisms here seem to be based on notions which are specific to linear ICA. Unfortunately this seems to be attributable to a lack of clarity in the paper and we'd like to emphasize that we didn't try to provide an alternative to methods which have been specifically designed for that problem. We evaluated our methods on linear ICA and PNL ICA because solutions to these problems are known and comparisons were possible but the point is that the method we propose is less dependent on the specific mixing process.\n\n\"1)The ICA problem can be formulated as the minimization of pairwise mutual information [1] or one-dimensional entropy [2]. In other words, estimating the joint dependence of the source coordinates is not necessary; it is worthwhile to avoid it.\"\n\nThe first observation is specific to the linear case but interesting to know about. Working with the entropy seems to be based on the same ideas as infomax and introduces other limitations but we consider it complementary to our approach. \n\n\"2)The PNL ICA task can be efficiently tackled by first 'removing' the nonlinearity followed by classical linear ICA; see for example [3].\"\n\nWhile we didn't aim to be optimal for the PNL case either, we'd like to point out that the approach in [3] is still an iterative procedure.\n\n\"4)Optimizing non-differentiable IT measures can computationally quite efficiently carried out in the ICA context by e.g., Givens rotations [7]; differentiable ICA cost functions can be robustly handled by Stiefel manifold methods; see for example [8,9].\"\n\nThese points seem to be specific to the linear case again but are once again interesting.\n\n\"5)Section 3.1: This section is devoted to generating samples from the product of the marginals, even using separate generator networks. I do not see the necessity of these solutions; the subtask can be solved by independently shuffling all the coordinates of the sample.\"\n\nThe first solution is indeed basically shuffling the coordinates of the sample but we admit that the text was a bit overly didactic and we shortened it a bit. The separate generator networks could be interesting in a setup in which shuffling is not desirable because there are temporal dependencies, for example. We changed the text to make this more clear.\n\n\"6)Experiments (Section 6): \ni) It seems to me that the proposed NN-based technique has some quite serious divergence issues: 'After discarding diverged models, ...' or 'Unfortunately, the model selection procedure also didn't identify good settings for the Anica-g model...'.\nii) The proposed method gives pretty comparable results to the chosen baselines (fastICA, PNLMISEP) on the selected small-dimensional tasks. In fact, [7,8,9] are likely to provide more accurate (fastICA is a simple kurtosis based method, which is a somewhat crude 'estimate' of entropy) and faster estimates; see also 2).\"\n\nThe first point is fair in that our model selection heuristic wasn't always able to identify the best model and that GAN training can be unstable. That said, the discarding of models was mainly because we performed a random search with aggressive hyperparameter ranges which could select very high learning rates, for example. 
The second point is fair too in that the cited methods might prove to be stronger baselines. We don't think that obtaining comparable results with a more general method is a bad thing but that is of course somewhat subjective.\n\nWe'd finally like to point out that we don't propose the use of the Wasserstein GAN loss specifically but GAN type objectives in general for learning independent features. The WGAN example in the text was mainly there to illustrate how in some cases the objective can be seen as a proxy for the mutual information.\n\nThanks again.", "Thank you for your response.\n\nMaximizing independence under general mixing conditions does not necessarily lead to the recovery of the underlying independent sources (even up to the standard ambiguities); this is one of the major motivations why the linear and post-nonlinear ICA (PNL-ICA) tasks have been considered in the literature.\n\nConstructing new general ICA 'solvers' can have certain impact, however the merits of the proposed heuristic are not illustrated/clear.\n1)In case of linear and post-nonlinear ICA: Available off-the-shelf methods can solve 1-2 orders-of-magnitude larger tasks than the ones studied with high accuracy in a numerically robust way.\n2)For general non-linear ICA tasks: \n-One should investigate whether techniques maximizing an independence measure lead to provable solution, find the hidden sources. \n-In fact, using approximate independence measures [such as (4)] raises further unhandled issues. \n\nTo sum up, it would be crucial to (i) understand the validity domain of the studied scheme, (ii) make it comparable to existing methods (in terms of scalability, precision and robustness; at least in the ICA and PNL-ICA settings), and (iii) construct new well-posed non-linear ICA tasks.\n\nMy opinion has not changed.", "We first like to thank the reviewer for the valuable feedback and suggestions.\n\nWe acknowledge that the linear and PNL ICA problems are more or less solved. However, we respectfully disagree that we should drop the treatment of these problems because we still think it is interesting that they can be solved with a new approach which in our opinion is very different from previous methods. This was not obvious to us when we started our research. \n\nA better definition of the version of the non-linear problem would indeed have been desirable in the context of source separation. While we presented the overcomplete case as a first step for evaluating the method, we surely realize that it doesn't come with any theoretical guarantees and that the obtained correlation scores are limited in interpretability. We adjusted the text to make this more clear. \n\nThe alternative to use estimates of the mutual information is certainly something we considered for both evaluation and model selection but this proved to be difficult in general. We tried both Kraskov's nearest-neighbor estimator and the Hilbert Schmidt Independence Criterion but both these estimators typically seemed to consider the features fully independent during most stages of training and don't take into account how informative they are about the input. We still like to thank the reviewer for motivating us to pursue this direction further and like to hear more in detail what is meant with the \"quality of the low-dimensional data manifold\". 
If there is some principled way of measuring the latter we would certainly like to investigate it.\n\nThanks.\n\n", "First of all, thanks for the feedback and suggestions.\n\nWe removed some of the text which basically reiterated the sampling process as it seemed that multiple reviewers found it redundant. As you suggested, we made the definitions of the full system in Section 4 a bit more explicit. \n\n\"The overcomplete audio source separation case is well known for audio and I could not understand why a convincing baseline can not be found. Is this due to nonlinear mixing?\"\n\nGood question and something we certainly looked at. The most important reason for our lack of a baseline here is that the real-world audio separation setting is slightly more complicated due to the different arrival times of the source signals and reverberation. Most multi-channel audio separation methods we know of work in the frequency domain to alleviate some of these issues, introducing the necessity to predict phase information to reconstruct the raw signals. Another issue was that the evaluation criteria and benchmark data sets in this domain still seem to be under active development but of course we'd love to hear about good real-world benchmarks we might have overlooked. \n\n\"As 26 channels and 6 channels are given, a simple regularization based method can be easily developed to provide a baseline performance, \"\n\nThis sounds interesting. Do you mean an auto-encoder with more standard regularization? In that case the hidden units wouldn't be trained to become independent but perhaps we misunderstood the suggestion.\n\n\"The need for normalization in section 4 is surprising, as it obviously renders the outputs dependent.\"\n\nWe normalize over samples in a batch and not over the units in the layer but admit that this was not clear in the paper. We changed the text to address this issue.\n\nWe also removed the figure you referred to as it didn't add much and took up a lot of space.\n\nThanks again.\n" ] }
{ "paperhash": [ "chen|infogan:_interpretable_representation_learning_by_information_maximizing_generative_adversarial_nets", "dinh|density_estimation_using_real_nvp", "ganin|domain-adversarial_training_of_neural_networks", "gulrajani|improved_training_of_wasserstein_gans", "ioffe|batch_normalization:_accelerating_deep_network_training_by_reducing_internal_covariate_shift", "mao|least_squares_generative_adversarial_networks", "pedregosa|scikit-learn:_machine_learning_in_python" ], "title": [ "InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets", "Conditional Image Generation with PixelCNN Decoders Aäron van den Oord", "Domain-Adversarial Training of Neural Networks", "Improved Training of Wasserstein GANs", "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", "Conditional Generative Adversarial Nets", "Scikit-learn: Machine Learning in Python" ], "abstract": [ "", "", "", "", "", "", "" ], "authors": [ { "name": [ "xi chen", "yan duan", "rein houthooft", "john schulman", "ilya sutskever", "pieter abbeel", "u c berkeley" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "google deepmind", "nal kalchbrenner", "lasse espeholt", "alex graves", "koray kavukcuoglu" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "yaroslav ganin", "hana ajakan", "pascal germain", "hugo larochelle", "françois laviolette", "mario marchand", "victor lempitsky", "urun dogan", "marius kloft", "francesco orabona", "tatiana tommasi" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "ishaan gulrajani", "faruk ahmed", "martin arjovsky", "vincent dumoulin", "aaron courville" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "sergey ioffe" ], "affiliation": [ { "laboratory": "", "institution": "Christian Szegedy Google Inc", "location": "{}" } ] }, { 
"name": [ "mehdi mirza", "simon osindero" ], "affiliation": [ { "laboratory": "", "institution": "Université de Montréal Montréal", "location": "{'postCode': 'H3C 3J7', 'region': 'QC'}" }, { "laboratory": "", "institution": "Flickr / Yahoo Inc", "location": "{'postCode': '94103', 'settlement': 'San Francisco', 'region': 'CA'}" } ] }, { "name": [ "fabian pedregosa", "vincent michel", "olivier grisel", "mathieu blondel", "andreas müller", "joel nothman", "gilles louppe", "peter prettenhofer", "ron weiss", "vincent dubourg", "jake vanderplas", "gaël varoquaux", "alexandre gramfort", "bertrand thirion", "david cournapeau", "matthieu brucher", "matthieu perrot" ], "affiliation": [ { "laboratory": "", "institution": "CEA", "location": "{'addrLine': 'Bât 145', 'settlement': 'Parietal, Saclay Gif sur Yvette', 'country': 'France'}" }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": "", "institution": "Kobe University Kobe", "location": "{'country': 'Japan'}" }, { "laboratory": "", "institution": "Columbia University New York", "location": "{'country': 'USA'}" }, { "laboratory": "", "institution": "NSW", "location": "{'country': 'Australia'}" }, { "laboratory": "", "institution": "University of Liège Liège", "location": "{'country': 'Belgium'}" }, { "laboratory": "", "institution": "Bauhaus-Universität Weimar", "location": "{'settlement': 'Weimar', 'country': 'Germany'}" }, { "laboratory": "", "institution": "Google Inc New York", "location": "{'region': 'NY', 'country': 'USA'}" }, { "laboratory": "", "institution": "IFMA", "location": "{'postCode': '3867', 'settlement': 'LaMI Clermont-Ferrand', 'region': 'EA', 'country': 'France'}" }, { "laboratory": "", "institution": "University of Washington", "location": "{'postBox': 'Box 351580', 'settlement': 'Seattle', 'region': 'WA', 'country': 'USA'}" }, { "laboratory": "IESL Lab UMass Amherst Amherst", "institution": "", "location": "{'country': 'USA'}" }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": "", "institution": "SA", "location": "{'postCode': 'CSTJF', 'settlement': 'Pau', 'country': 'France'}" }, { "laboratory": "", "institution": "LNAO Neurospin", "location": "{'addrLine': 'Bât 145'}" }, { "laboratory": "", "institution": "CEA", "location": "{'settlement': 'Saclay Gif sur Yvette', 'country': 'France'}" } ] } ], "arxiv_id": [ "", "", "", "", "", "", "" ], "s2_corpus_id": [ "", "", "", "", "", "", "" ], "intents": [ null, null, null, null, null, null, null ], "isInfluential": [ null, null, null, null, null, null, null ] }
null
84
null
0.407407
0.833333
null
null
null
null
null
ryykVe-0W
fortunato|noisy_networks_for_exploration|ICLR_cc_2018_Conference
1706.10295v3
Noisy Networks For Exploration
We introduce NoisyNet, a deep reinforcement learning agent with parametric noise added to its weights, and show that the induced stochasticity of the agent’s policy can be used to aid efficient exploration. The parameters of the noise are learned with gradient descent along with the remaining network weights. NoisyNet is straightforward to implement and adds little computational overhead. We find that replacing the conventional exploration heuristics for A3C, DQN and Dueling agents (entropy reward and epsilon-greedy respectively) with NoisyNet yields substantially higher scores for a wide range of Atari games, in some cases advancing the agent from sub to super-human performance.
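For illustration, a minimal PyTorch-style sketch of a noisy linear layer in the spirit of this abstract: each weight gets a learnable mean and a learnable noise scale, and fresh noise is drawn on every forward pass so that the sigma parameters are trained by gradient descent alongside the other weights. This sketch uses independent Gaussian noise (the reviews below also discuss a factorised variant); the initial sigma of 0.017 is the value mentioned in the reviews, and everything else is an assumption rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLinear(nn.Module):
    def __init__(self, in_features, out_features, sigma_init=0.017):
        super().__init__()
        bound = in_features ** -0.5  # hypothetical initialisation range for the means
        self.mu_w = nn.Parameter(torch.empty(out_features, in_features).uniform_(-bound, bound))
        self.sigma_w = nn.Parameter(torch.full((out_features, in_features), sigma_init))
        self.mu_b = nn.Parameter(torch.empty(out_features).uniform_(-bound, bound))
        self.sigma_b = nn.Parameter(torch.full((out_features,), sigma_init))

    def forward(self, x):
        # Sample fresh noise each call; sigma_* are learned like any other weight.
        eps_w = torch.randn_like(self.sigma_w)
        eps_b = torch.randn_like(self.sigma_b)
        return F.linear(x, self.mu_w + self.sigma_w * eps_w, self.mu_b + self.sigma_b * eps_b)
```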
{ "name": [ "meire fortunato", "mohammad gheshlaghi azar", "bilal piot", "jacob menick", "matteo hessel", "ian osband", "alex graves", "vlad mnih", "remi munos", "demis hassabis", "olivier pietquin", "charles blundell", "shane legg" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }
null
[ "Computer Science", "Mathematics" ]
International Conference on Learning Representations
2017-06-30
45
785
null
null
null
null
null
null
null
true
The paper proposes to add noise to the weights of a policy network during learning in deep-RL settings and finds that this results in better performance for DQN, A3C, and other algorithms that otherwise use different exploration strategies. Unfortunately, the paper does not do a thorough job of exploring the reasons and does not offer a comparison to other methods that had been out on arXiv for several months before the submission, despite requests from reviewers and anonymous commenters. Otherwise I might have supported recommending the paper for a talk.
{ "review_id": [ "rJ6Z7prxf", "Hyf0aUVeM", "H14gEaFxG" ], "review": [ { "title": "title: A good paper, despite a weak analysis", "paper_summary": null, "main_review": "main_review: This paper introdues NoisyNets, that are neural networks whose parameters are perturbed by a parametric noise function, and they apply them to 3 state-of-the-art deep reinforcement learning algorithms: DQN, Dueling networks and A3C. They obtain a substantial performance improvement over the baseline algorithms, without explaining clearly why.\n\nThe general concept is nice, the paper is well written and the experiments are convincing, so to me this paper should be accepted, despite a weak analysis.\n\nBelow are my comments for the authors.\n\n---------------------------------\nGeneral, conceptual comments:\n\nThe second paragraph of the intro is rather nice, but it might be updated with recent work about exploration in RL.\nNote that more than 30 papers are submitted to ICLR 2018 mentionning this topic, and many things have happened since this paper was\nposted on arxiv (see the \"official comments\" too).\n\np2: \"our NoisyNet approach requires only one extra parameter per weight\" Parameters in a NN are mostly weights and biases, so from this sentence\none may understand that you close-to-double the number of parameters, which is not so few! If this is not what you mean, you should reformulate...\n\np2: \"Though these methods often rely on a non-trainable noise of vanishing size as opposed to NoisyNet which tunes the parameter of noise by gradient descent.\"\nTwo ideas seem to be collapsed here: the idea of diminishing noise over an experiment, exploring first and exploiting later, and the idea of\nadapting the amount of noise to a specific problem. It should be made clearer whether NoisyNet can address both issues and whether other\nalgorithms do so too...\n\nIn particular, an algorithm may adapt noise along an experiment or from an experiment to the next.\nFrom Fig.3, one can see that having the same initial noise in all environments is not a good idea, so the second mechanism may help much.\n\nBTW, the short section in Appendix B about initialization of noisy networks should be moved into the main text.\n\np4: the presentation of NoisyNets is not so easy to follow and could be clarified in several respects:\n- a picture could be given to better explain the structure of parameters, particularly in the case of factorised (factorized, factored?) Gaussian noise.\n- I would start with the paragraph \"Considering a linear layer [...] below)\" and only after this I would introduce \\theta and \\xi as a more synthetic notation.\nLater in the paper, you then have to state \"...are now noted \\xi\" several times, which I found rather clumsy.\n\np5: Why do you use option (b) for DQN and Dueling and option (a) for A3C? The reason why (if any) should be made clear from the clearer presentation required above.\n\nBy the way, a wild question: if you wanted to use NoisyNets in an actor-critic architecture like DDPG, would you put noise both in the actor and the critic?\n\nThe paragraph above Fig3 raises important questions which do not get a satisfactory answer.\nWhy is it that, in deterministic environments, the network does not converge to a deterministic policy, which should be able to perform better?\nWhy is it that the adequate level of noise changes depending on the environment? 
By the way, are we sure that the curves of Fig3 correspond to some progress\nin noise tuning (that is, is the level of noise really \"better\" through time with these curves, or they they show something poorly correlated with the true reasons of success?)?\n\nFinally, I would be glad to see the effect of your technique on algorithms like TRPO and PPO which require a stochastic policy for exploration, and where I believe that the role of the KL divergence bound is mostly to prevent the level of stochasticity from collasping too quickly.\n\n-----------------------------------\nLocal comments:\n\nThe first sentence may make the reader think you only know about 4-5 old works about exploration.\n\nPp. 1-2 : \"the approach differs ... from variational inference. [...] It also differs variational inference...\"\nIf you mean it differs from variational inference in two ways, the paragraph should be reorganized.\n\np2: \"At a high level our algorithm induces a randomised network for exploration, with care exploration\nvia randomised value functions can be provably-efficient with suitable linear basis (Osband et al., 2014)\"\n=> I don't understand this sentence at all.\n\nAt the top of p3, you may update your list with PPO and ACKTR, which are now \"classical\" baselines too.\n\nAppendices A1 and A2 are a lot redundant with the main text (some sentences and equations are just copy-pasted), this should be improved.\nThe best would be to need to reject nothing to the Appendix.\n\n---------------------------------------\nTypos, language issues:\n\np2\nthe idea ... the optimization process have been => has\n\np2\nThough these methods often rely on a non-trainable noise of vanishing size as opposed to NoisyNet which tunes the parameter of noise by gradient descent.\n=> you should make a sentence...\n\np3\nthe the double-DQN\n\nseveral times, an equation is cut over two lines, a line finishing with \"=\", which is inelegant\n\nYou should deal better with appendices: Every \"Sec. Ax/By/Cz\" should be replaced by \"Appendix Ax/By/Cz\".\nBesides, the big table and the list of performances figures should themselves be put in two additional appendices\nand you should refer to them as Appendix D or E rather than \"the Appendix\".\n\n\n\n\n", "strength_weakness": null, "questions": null, "limitations": null, "review_summary": null }, { "title": "title: The proposed approach is interesting and has strengths, but the paper has weaknesses. I am somewhat divided for acceptance.", "paper_summary": null, "main_review": "main_review: In this paper, a new heuristic is introduced with the purpose of controlling the exploration in deep reinforcement learning. \n\nThe proposed approach, NoisyNet, seems very simple and smart: a noise of zero mean and unknown variance is added to each weight of the deep network. The matrices of unknown variances are considered as parameters and are learned with a standard gradient descent. The strengths of the proposed approach are the following:\n1 NoisyNet is generic: it is applied to A3C, DQN and Dueling agents. \n2 NoisyNet reduces the number of hyperparameters. 
NoisyNet does not need hyperparameters (only the kind of the noise distribution has to be defined), and replacing the usual exploration heuristics by NoisyNet, a hyperparameter is suppressed (for instance \\epsilon in the case of epsilon-greedy exploration).\n3 NoisyNet exhibits impressive experimental results in comparison to the usual exploration heuristics for to A3C, DQN and Dueling agents.\n\nThe weakness of the proposed approach is the lack of explanation and investigation (experimental or theoretical) of why does Noisy work so well. At the end of the paper a single experiment investigates the behavior of weights of noise during the learning. Unfortunately this experiment seems to be done in a hurry. Indeed, the confidence intervals are not plotted, and probably no conclusion can be reached because the curves are averaged only across three seeds! It’s disappointing. As expected for an exploration heuristic, it seems that the noise weights of the last layer (slowly) tend to zero. However for some games, the weights of the penultimate layer seem to increase. Is it due to NoisyNet or to the lack of seeds? \n\nIn the same vein, in section 3, two kinds of noise are proposed: independent or factorized Gaussian noise. The factorized Gaussian noise, which reduces the number of parameters, is associated with DQN and Dueling agents, while the independent noise is associated with A3C agent. Why? \n\nOverall the proposed approach is interesting and has strengths, but the paper has weaknesses. I am somewhat divided for acceptance. \n", "strength_weakness": null, "questions": null, "limitations": null, "review_summary": null }, { "title": "title: Good paper but lack of empirical comparison & analysis", "paper_summary": null, "main_review": "main_review: A new exploration method for deep RL is presented, based on the idea of injecting noise into the deep networks’ weights. The noise may take various forms (either uncorrelated or factored) and its magnitude is trained by gradient descent along other parameters. It is shown how to implement this idea both in DQN (and its dueling variant) and A3C, with experiments on Atari games showing a significant improvement on average compared to these baseline algorithms.\n\nThis definitely looks like a worthy direction of research, and experiments are convincing enough to show that the proposed algorithms indeed improve on their baseline version. The specific proposed algorithm is close in spirit to the one from “Parameter space noise for exploration”, but there are significant differences. It is also interesting to see (Section 4.1) that the noise evolves in non-obvious ways across different games.\n\nI have two main concerns about this submission. The first one is the absence of a comparison to the method from “Parameter space noise for exploration”, which shares similar key ideas (and was published in early June, so there was enough time to add this comparison by the ICLR deadline). A comparison to the paper(s) by Osband et al (2016, 2017) would have also been worth adding. My second concern is that I find the title and overall discussion in the paper potentially misleading, by focusing only on the “exploration” part of the proposed algorithm(s). 
Although the noise injected in the parameters is indeed responsible for the exploration behavior of the agent, it may also have an important effect on the optimization process: in both DQN and A3C it modifies the cost function being optimized, both through the “target” values (respectively Q_hat and advantage) and the parameters of the policy (respectively Q and pi). Since there is no attempt to disentangle these exploration and optimization effects, it is unclear if one is more important than the other to explain the success of the approach. It also sheds doubt on the interpretation that the agent somehow learns some kind of optimal exploration behavior through gradient descent (something I believe is far from obvious).\n\nEstimating the impact of a paper on future research is an important factor in evaluating it. Here, I find myself in the akward (and unusual to me) situation where I know the proposed approach has been shown to bring a meaningful improvement, more precisely in Rainbow (“Rainbow: Combining Improvements in Deep Reinforcement Learning”). I am unsure whether I should take it into account in this review, but in doubt I am choosing to, which is why I am advocating for acceptance in spite of the above-mentioned concerns.\n\nA few small remarks / questions / typos:\n- In eq. 3 A(...) is missing the action a as input\n- Just below: “the the”\n- Last sentence of p. 3 can be misleading because the gradient is not back-propagated through all paths in the defined cost\n- “In our experiments we used f(x) = sgn(x) p |x|”: this makes sense to me for eq. 9 but why not use f(x) = x in eq. 10?\n- Why use factored noise in DQN and independent noise in A3C? This is presented like an arbitrary choice here.\n- What is the justification for using epsilon’ instead of epsilon in eq. 15? My interpretation of double DQN is that we want to evaluate (with the target network) the action chosen by the Q network, which here is perturbed with epsilon (NB: eq. 15 should have b in the argmax, not b*)\n- Section 4 should say explicitly that results are over 200M frames\n- Assuming the noise is sampled similarly doing evaluation (= as in training), please mention it clearly.\n- In paragraph below eq. 18: “superior performance compare to their corresponding baselines”: compared\n- There is a Section 4.1 but no 4.2\n- Appendix has a lot of redundant material with the main text, for instance it seems to me that A.1 is useless.\n- In appendix B: “σi,j is simply set to 0.017 for all parameters” => where does this magic value come from?\n- List x seems useless in C.1 and C.2\n- C.1 and C.2 should be combined in a single algorithm with a simple “if dueling” on l. 24\n- In C.3: (1) missing pi subscript for zeta in the “Output:” line, (2) it is not clear what the zeta’ parameters are for, in particular should they be used in l. 12 and 22?\n- The paper “Dropout as a Bayesian approximation” seems worth at least adding to the list of related work in the introduction.", "strength_weakness": null, "questions": null, "limitations": null, "review_summary": null } ], "score": [ 0.6666666865348816, 0.4444444477558136, 0.5555555820465088 ], "confidence": [ 0.75, 0.5, 0.75 ], "novelty": [ null, null, null ], "correctness": [ null, null, null ], "clarity": [ null, null, null ], "impact": [ null, null, null ], "reproducibility": [ null, null, null ], "ethics": [ null, null, null ] }
{ "title": [ "Re: Response to AnonReviewer1", "Response to AnonReviewer2", "Response to AnonReviewer1", "Response to AnonReviewer3", "Response to AnonReviewer1", "Re: Response to AnonReviewer2", "Response to AnonReviewer1", "Re: Response to AnonReviewer1", "Response to AnonReviewer2", "General response to the reviewers" ], "comment": [ "Thanks for your response and for editing the paper.\n\nAbout point 3 above, in the case of an actor-critic architecture, the relationship between exploration and noise in the actor is clear. By contrast, the relationship between exploration and noise in the critic is far less obvious. It is very unclear to me why having a noisy value function should help, hence my question. In a later paper (this is too late for this one), I would be glad to see what you get if you put noise only into the critic.\n\nMy general feeling is that the paper could have been improved more in terms of the split and redundancy between the main text and appendices A and B (in Appendix B, the figure alone without a word of explanation is a pity), but some useful improvements have been made.\n\nA new typo: p2, network.Randomised => missing space", "Here we address the main concerns of the reviewer:\n\n1- Concerning the absence of empirical comparison to the method “Parameter space noise for exploration”, we argue that this work is a concurrent submission to ICLR. So we do not think it is necessary to compare with it at this stage. We must emphasize that a fair comparison between the two methods can not be done by directly using the reported results in “Parameter space noise for exploration” since in this work the authors report performance for a selection of Atari games trained for 40 million frames, whereas we use the standard (Nature paper) setting of 57 games and 200 million frames. So to have a fair comparison we would need to implement and run their algorithm in the standard setting. \n\n2- Concerning the focus on the exploration aspect, the reviewer is right when saying that it is difficult to disentangle the exploration effect from the optimization in the final performance. On the other hand, we argue that Noisy Networks is the only exploration technique used by our algorithm. We emphasize on the exploration aspect because having weights with greater uncertainty introduce more variability into the decisions made by the policy, which has potential for exploratory actions. We have added a discussion in the updated version of the paper discussing that improvements might also come from better optimization. Finally we need to emphasize that we do not claim that noisy networks provide an optimal strategy for exploration. Indeed noisy networks does not take into account the uncertainty of the action-value function of the future states, which is required for optimal tradeoff between exploration and exploitation (see Azar et al. 2017). Thus, it cannot be an optimal strategy. However, it can produce an exploration which is state-dependent and automatically tunes the level of exploration for each problem and can be used with any Deep RL agent. This is a step towards a general exploration strategy for Deep RL.\n\nThe reviewer raises an interesting point of adding a graphical representation of the noisy linear layer. We included that in the revision as it could help implementing the method.\n\nFinally, we agree on the minor comments/typos and we have already corrected them in this updated version. 
For a discussion on the choice of factorised noise, please see answer to AnonReviewer1.", "We completely agree with the reviewer that the role of noise in critic and whether it is useful or not requires further investigation. We will include experiments to investigate this in a future version. \n\nRegarding the reviewer comment \"exploration in the actor is precisely meant for keeping exploring the actions with seemingly small return\" it is true that due to the noise in actor network, actions with seemingly small return may be explored. But in practice this might not be enough. The problem is that if there is some error in the estimation of value function then the probability of seemingly \"bad\" actions can go to zero super fast due to the update rule of A3C and the exponential nature of softmax operator in the actor network. In that case adding some small noise to the actor network would not change those exponentially small probabilities that much (at this point the agent has already converged to a wrong \"almost\" deterministic policy). Using stochastic baseline may help to alleviate this problem since by adding noise to the baseline the algorithm does not deterministically decrease the probabilities of seemingly bad actions. \n\nRegarding Appendix B we agree with the reviewer and change the text accordingly.", "1- Concerning the number of seeds, we ran all the experiments for three seeds. Note that these experiments are very computationally intensive and this is why the number of seeds is low (all papers with atari experiments over the 57 games tend to do one or three seeds). Nonetheless, we have provided the errors bars w.r.t. to 3 seeds in the revised version for fig 3 and for table 3 (max score for the 57 games). The error bars were already present for the performance on the 57 games in the appendix (figs 4, 5 and 6). It is not common to compute error bars for the median human normalized score as this score is already averaged over all the 57 atari games. \n\n2- Concerning the question on why factorised noise is used in one case (DQN) and not in the other case (A3C). As we mentioned in our response to the reviewer 1, the main reason is to boost the algorithm speed in the case of DQN, in which generating the independent noise for each weight is costly. In the case of A3C, since it is a distributed algorithm and speed is not a major concern we don’t use the factorisation trick. However, we have done experiments which shows that we can achieve a similar performance with A3C using factorised noise which we are including in the revised version.\n", "1- Concerning the diminishing noise over an experiment and whether NoisyNet addresses this issue, we argue that the NoisyNet adapts automatically the noise during learning which is not the case with the prior methods based on hand-tuned scheduling schemes. As it is shown in Section 4.1 (Fig. 3) it seems the mechanism under which NoisyNet learns to make a balance between exploration and exploitation is problem dependent and does not always follow a same pattern such as exploring first and exploit later. We think this is a useful feature of NoisyNet since it is quite difficult, if not impossible, to know to what extent and when exploration is required in each problem. So it is sensible to let the algorithm learn on its own how to handle the exploration-exploitation tradeoff.\n\n2- Concerning the choice of factorised noise, the main reason is to boost the algorithm speed in the case of DQN. 
In the case of A3C, since it is a distributed algorithm and speed is not a major concern we don’t use the factorization trick. However, we have done experiments which shows that we can achieve a similar performance with A3C using factorised noise. We included this result in the revised version.\n\n3- Concerning the application of NoisyNet in DDPG. We think the adaptation should be straight forward. One can put noise on the actor and the critic as we have done for A3C which is also an actor-critic method.\n\n4- Concerning the convergence to deterministic weights, we are not entirely sure that why this does not happen in the penultimate layer. One hypothesis may be that although there exists a deterministic solution for the optimisation problem of Eq. 2 this solution is not necessarily unique and there may exist a non-deterministic optima to which NoisyNet converges. In fig 3 we wanted to show that even in complex problems such as Atari games we observe the reduction of the noise in the last layer and problem specific evolution of noise parameters across the board. We have provided further clarification in the revised version and also addressed the remainder of the minor comments made by the reviewer. ", "Thank you for the response and updated manuscript, this is appreciated.\n\nRegarding #1, I believe that current research (in ML in general and deep RL in particular) has reached a pace where one can't just dismiss Arxiv papers because they haven't been accepted yet at a conference / journal. Of course it has to be a judgement call taking into account the other paper's visibility, quality, similarity to the proposed approach, and how easy/hard it is to make such a comparison. But in that case my personal opinion is that such a comparison should have been made here. The easiest one would have probably been to compare to your own performance after 40M step on the same subset of games, though a better one would have been to re-run their code which is open sourced (since end of July if I read their commit history correctly).\n\nNB: I'm also disappointed that they didn't compare to your approach in their own ICLR submission :(\n\nIn your revised version you changed the DQN & Dueling algorithms in two ways:\n- The noise is now the same for all transitions in a batch, while originally it was sampled differently for each transition\n- There is a new noise parameter \\xi'' for the action selection network, which wasn't there before (it appears as epsilon'' in eq. 16 which btw doesn't seem to be properly defined)\nCould you please confirm that these changes are fixes to correct mistakes in the original submission and match your implementation? (I don't see them mentioned in your changelog)\n\nMinor: Conclusion, 1st paragraph, last sentence => \"introduceS\"", "We thank the reviewer for the response.\n\nRegarding the use of noise in the critic (i.e., stochastic baseline) we think it is useful since it captures the uncertainty over the value function. Note that in the standard A3C if there is some error in the estimation of baseline value function then the algorithm may stop exploring the actions with seemingly small return prematurely. Stochastic baseline enables A3C-NoisyNet to do a better job in exploring those underappreciated actions as it dose not always decrease their probabilities.\n\nWe agree with the reviewer regarding the lack of description in Appendix B . 
In the new revision we have added a new paragraph describing the block diagram in Appendix B.\n\n", "\"Note that in the standard A3C if there is some error in the estimation of baseline value function then the algorithm may stop exploring the actions with seemingly small return prematurely. \"\nI'm afraid this is wrong: exploration in the actor is precisely meant for keeping exploring the actions with seemingly small return.\nHonestly, this idea of stochasticity in the critic is interesting, but it would deserve a thorough mathematical analysis to figure out what it really does (and an empirical comparison with not using it).\n\nAbout the three added lines in Appendix B, they don't bring much: it would be more useful to bring the detailed explanation of the calculations close to the figure.\n\nAnd there is a new typo: \"whose weights our perturbed\" => are pertubed", "Thanks for the response.\n\nRegarding #1 Reporting results and comparison after 40 M steps 20 games, as it is done in DQN w/ param noise paper, is a non-standard practice (e.g., in the ES paper, used as the baseline of \n DQN w param noise algorithm, they use the standard setting of the nature paper). So we don't think it is a right course of action to report our results in a non-standard setting and we refrain from doing it. \n\nEven if we had considered comparison with DQN w/ param noise after 40 M frames, this would not have been a fair comparison. This is due the fact that the DQN w/ param noise algorithm uses a different optimizer (Adam) and a different set of hyper parameters (e.g., step size =1e-4) than the standard DQN, whereas we use the standard nature paper DQN optimizer (RMSProp) and the corresponding hyper parameters (step size=2.5e-4). So by just comparing the existing results after 40 M frames it would have been difficult to know whether any potential gain/loss is due to the strength of exploration strategy or due to the different choice of hyper parameter and the optimiser. We believe the right course of action would be that the authors of DQN w/ param noise report their results in the standard setting using standard hyper parameters and not the other way around. So a fair comparison between their work and the rest of literature would be straightforward. \n\nRegarding the changes in DQN and Dueling we will include them in the Log. We also confirm that these changes are fixes to correct mistakes in the original submission and match our implementation.\n", "We like to thank the anonymous reviewers for their helpful and constructive comments. We provide individual response to each reviewer's comments. Here we report the list of main changes which we have added to the new revision.\n\n1-A discussion on the optimisation aspects of NoisyNet (Section 5, Paragraph 1). \n2- Further clarifications on why factorised noise is used in some agents as opposed to independent noise in the case of A3C (Section 3, Paragraph 3).\n3- Reporting the learning curves and the scores for NoisyNet-A3C with factorised noise, showing that a similar performance to the case of independent noise can be achieved with significantly less noisy variables (Appendix D).\n4-Adding error bars to the learning curves of Fig. 3 and error bounds to the scores of Table 3.\n5-Adding a graphical representation of noisy linear layer (Appendix B). \n6- Correcting the inconsistencies between the description of the algorithm in the original submission and our implementation (Appendix C Algo. 1 line 13, 14 and 16 and Eq. 16)\n" ] }
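To make the factorised-versus-independent noise discussion in the responses above concrete, the following minimal NumPy sketch shows a NoisyNet-style noisy linear layer. It is an illustration only, not code from the paper or from this record: the layer sizes, the sigma0 initialisation, and all names are assumptions, and only the forward pass is shown (a real agent would train the mu and sigma parameters by gradient descent alongside the rest of the network).

import numpy as np

def _f(x):
    # NoisyNet-style factorised-noise transform: f(x) = sign(x) * sqrt(|x|).
    return np.sign(x) * np.sqrt(np.abs(x))

class NoisyLinear:
    # Illustrative noisy linear layer: y = (mu_w + sigma_w * eps_w) x + (mu_b + sigma_b * eps_b).
    def __init__(self, in_dim, out_dim, sigma0=0.5, seed=0):
        self.rng = np.random.default_rng(seed)
        bound = 1.0 / np.sqrt(in_dim)
        # Learnable means and noise scales (plain arrays here; gradients would update them in practice).
        self.mu_w = self.rng.uniform(-bound, bound, (out_dim, in_dim))
        self.mu_b = self.rng.uniform(-bound, bound, out_dim)
        self.sigma_w = np.full((out_dim, in_dim), sigma0 * bound)
        self.sigma_b = np.full(out_dim, sigma0 * bound)
        self.sample_noise()

    def sample_noise(self, factorised=True):
        out_dim, in_dim = self.mu_w.shape
        if factorised:
            # Factorised noise: in_dim + out_dim Gaussian draws instead of one per weight,
            # the cheaper option discussed above for DQN/Dueling.
            eps_in = _f(self.rng.standard_normal(in_dim))
            eps_out = _f(self.rng.standard_normal(out_dim))
            self.eps_w = np.outer(eps_out, eps_in)
            self.eps_b = eps_out
        else:
            # Independent noise: one Gaussian draw per weight and bias, as used for A3C above.
            self.eps_w = self.rng.standard_normal((out_dim, in_dim))
            self.eps_b = self.rng.standard_normal(out_dim)

    def __call__(self, x):
        w = self.mu_w + self.sigma_w * self.eps_w
        b = self.mu_b + self.sigma_b * self.eps_b
        return x @ w.T + b

layer = NoisyLinear(4, 2)
layer.sample_noise(factorised=True)  # resample noise, e.g. once per forward pass
print(layer(np.ones(4)))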
{ "paperhash": [ "bellemare|a_distributional_perspective_on_reinforcement_learning", "plappert|parameter_space_noise_for_exploration", "fortunato|bayesian_recurrent_neural_networks", "osband|deep_exploration_via_randomized_value_functions", "azar|minimax_regret_bounds_for_reinforcement_learning", "salimans|evolution_strategies_as_a_scalable_alternative_to_reinforcement_learning", "lipton|efficient_exploration_for_dialog_policy_learning_with_deep_bbq_networks_\\&_replay_buffer_spiking", "lipton|bbq-networks:_efficient_exploration_in_deep_reinforcement_learning_for_task-oriented_dialogue_systems", "bellemare|unifying_count-based_exploration_and_intrinsic_motivation", "houthooft|vime:_variational_information_maximizing_exploration", "osband|deep_exploration_via_bootstrapped_dqn", "mnih|asynchronous_methods_for_deep_reinforcement_learning", "mobahi|training_recurrent_neural_networks_by_diffusion", "wang|dueling_network_architectures_for_deep_reinforcement_learning", "hasselt|deep_reinforcement_learning_with_double_q-learning", "lillicrap|continuous_control_with_deep_reinforcement_learning", "blundell|weight_uncertainty_in_neural_network", "gal|dropout_as_a_bayesian_approximation:_representing_model_uncertainty_in_deep_learning", "blundell|weight_uncertainty_in_neural_networks", "hazan|on_graduated_optimization_for_stochastic_non-convex_problems", "mnih|human-level_control_through_deep_reinforcement_learning", "schulman|trust_region_policy_optimization", "osband|generalization_and_exploration_via_randomized_value_functions", "lattimore|the_sample-complexity_of_general_reinforcement_learning", "bellemare|the_arcade_learning_environment:_an_evaluation_platform_for_general_agents", "fix|monte-carlo_swarm_policy_search", "graves|practical_variational_inference_for_neural_networks", "schmidhuber|formal_theory_of_creativity,_fun,_and_intrinsic_motivation_(1990–2010)", "geist|kalman_temporal_differences", "jaksch|near-optimal_regret_bounds_for_reinforcement_learning", "oudeyer|what_is_intrinsic_motivation?_a_typology_of_computational_approaches", "auer|logarithmic_online_regret_bounds_for_undiscounted_reinforcement_learning", "singh|intrinsically_motivated_reinforcement_learning", "kearns|near-optimal_reinforcement_learning_in_polynomial_time", "sutton|policy_gradient_methods_for_reinforcement_learning_with_function_approximation", "moriarty|evolutionary_algorithms_for_reinforcement_learning", "bertsekas|dynamic_programming_and_optimal_control,_two_volume_set", "puterman|markov_decision_processes:_discrete_stochastic_dynamic_programming", "hinton|keeping_the_neural_networks_simple_by_minimizing_the_description_length_of_the_weights", "bellman|dynamic_programming_and_modern_control_theory", "thompson|on_the_likelihood_that_one_unknown_probability_exceeds_another_in_view_of_the_evidence_of_two_samples", "|count-based_exploration_with_neural_density_models", "lipton|efficient_exploration_for_dialogue_policy_learning_with_bbq_networks_&_replay_buffer_spiking", "geist|managing_uncertainty_within_value_function_approximation_in_reinforcement_learning", "williams|simple_statistical_gradient-following_algorithms_for_connectionist_reinforcement_learning", "bertsekas|dynamic_programming_and_optimal_control_volume_1_second_edition", "sutton|reinforcement_learning:_an_introduction", "hochreiter|flat_minima", "|current_address:_microsoft_research," ], "title": [ "A Distributional Perspective on Reinforcement Learning", "Parameter Space Noise for Exploration", "Bayesian Recurrent Neural Networks", "Deep Exploration 
via Randomized Value Functions", "Minimax Regret Bounds for Reinforcement Learning", "Evolution Strategies as a Scalable Alternative to Reinforcement Learning", "Efficient Exploration for Dialog Policy Learning with Deep BBQ Networks \\& Replay Buffer Spiking", "BBQ-Networks: Efficient Exploration in Deep Reinforcement Learning for Task-Oriented Dialogue Systems", "Unifying Count-Based Exploration and Intrinsic Motivation", "VIME: Variational Information Maximizing Exploration", "Deep Exploration via Bootstrapped DQN", "Asynchronous Methods for Deep Reinforcement Learning", "Training Recurrent Neural Networks by Diffusion", "Dueling Network Architectures for Deep Reinforcement Learning", "Deep Reinforcement Learning with Double Q-Learning", "Continuous control with deep reinforcement learning", "Weight Uncertainty in Neural Network", "Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning", "Weight Uncertainty in Neural Networks", "On Graduated Optimization for Stochastic Non-Convex Problems", "Human-level control through deep reinforcement learning", "Trust Region Policy Optimization", "Generalization and Exploration via Randomized Value Functions", "The Sample-Complexity of General Reinforcement Learning", "The Arcade Learning Environment: An Evaluation Platform for General Agents", "Monte-Carlo Swarm Policy Search", "Practical Variational Inference for Neural Networks", "Formal Theory of Creativity, Fun, and Intrinsic Motivation (1990–2010)", "Kalman Temporal Differences", "Near-optimal Regret Bounds for Reinforcement Learning", "What is Intrinsic Motivation? A Typology of Computational Approaches", "Logarithmic Online Regret Bounds for Undiscounted Reinforcement Learning", "Intrinsically Motivated Reinforcement Learning", "Near-Optimal Reinforcement Learning in Polynomial Time", "Policy Gradient Methods for Reinforcement Learning with Function Approximation", "Evolutionary Algorithms for Reinforcement Learning", "Dynamic Programming and Optimal Control, Two Volume Set", "Markov Decision Processes: Discrete Stochastic Dynamic Programming", "Keeping the neural networks simple by minimizing the description length of the weights", "Dynamic Programming and Modern Control Theory", "ON THE LIKELIHOOD THAT ONE UNKNOWN PROBABILITY EXCEEDS ANOTHER IN VIEW OF THE EVIDENCE OF TWO SAMPLES", "Count-Based Exploration with Neural Density Models", "Efficient Exploration for Dialogue Policy Learning with BBQ Networks & Replay Buffer Spiking", "Managing Uncertainty within Value Function Approximation in Reinforcement Learning", "Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning", "Dynamic Programming and Optimal Control Volume 1 SECOND EDITION", "Reinforcement Learning: An Introduction", "Flat Minima", "Current address: Microsoft Research," ], "abstract": [ "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "" ], "authors": [ { "name": [ "Marc G. Bellemare", "Will Dabney", "R. Munos" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Matthias Plappert", "Rein Houthooft", "Prafulla Dhariwal", "Szymon Sidor", "Richard Y. Chen", "Xi Chen", "T. Asfour", "P. 
Abbeel", "Marcin Andrychowicz" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Meire Fortunato", "C. Blundell", "O. Vinyals" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Ian Osband", "Daniel Russo", "Zheng Wen", "Benjamin Van Roy" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "M. G. Azar", "Ian Osband", "R. Munos" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Tim Salimans", "Jonathan Ho", "Xi Chen", "I. Sutskever" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Zachary Chase Lipton", "Jianfeng Gao", "Lihong Li", "Xiujun Li", "Faisal Ahmed", "L. Deng" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Zachary Chase Lipton", "Xiujun Li", "Jianfeng Gao", "Lihong Li", "Faisal Ahmed", "L. Deng" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Marc G. Bellemare", "S. Srinivasan", "Georg Ostrovski", "T. Schaul", "D. Saxton", "R. Munos" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Rein Houthooft", "Xi Chen", "Yan Duan", "John Schulman", "F. Turck", "P. 
Abbeel" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Ian Osband", "C. Blundell", "A. Pritzel", "Benjamin Van Roy" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Volodymyr Mnih", "Adrià Puigdomènech Badia", "Mehdi Mirza", "Alex Graves", "T. Lillicrap", "Tim Harley", "David Silver", "K. Kavukcuoglu" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "H. Mobahi" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Ziyun Wang", "T. Schaul", "Matteo Hessel", "H. V. Hasselt", "Marc Lanctot", "Nando de Freitas" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "H. V. Hasselt", "A. Guez", "David Silver" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "T. Lillicrap", "Jonathan J. Hunt", "A. Pritzel", "N. Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "C. Blundell", "Julien Cornebise", "K. Kavukcuoglu", "Daan Wierstra" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Y. Gal", "Zoubin Ghahramani" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "C. 
Blundell", "Julien Cornebise", "K. Kavukcuoglu", "Daan Wierstra" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Elad Hazan", "K. Levy", "Shai Shalev-Shwartz" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Volodymyr Mnih", "K. Kavukcuoglu", "David Silver", "Andrei A. Rusu", "J. Veness", "Marc G. Bellemare", "Alex Graves", "Martin A. Riedmiller", "A. Fidjeland", "Georg Ostrovski", "Stig Petersen", "Charlie Beattie", "Amir Sadik", "Ioannis Antonoglou", "Helen King", "D. Kumaran", "Daan Wierstra", "S. Legg", "D. Hassabis" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "John Schulman", "S. Levine", "P. Abbeel", "Michael I. Jordan", "Philipp Moritz" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Ian Osband", "Benjamin Van Roy", "Zheng Wen" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Tor Lattimore", "Marcus Hutter", "P. Sunehag" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Marc G. Bellemare", "Yavar Naddaf", "J. 
Veness", "Michael Bowling" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Jérémy Fix", "Matthieu Geist" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Alex Graves" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. Schmidhuber" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "M. Geist", "O. Pietquin" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Thomas Jaksch", "R. Ortner", "P. Auer" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Pierre-Yves Oudeyer", "F. Kaplan" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "P. Auer", "R. Ortner" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Satinder Singh", "A. Barto", "N. Chentanez" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Michael Kearns", "Satinder Singh" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. Sutton", "David A. McAllester", "Satinder Singh", "Y. Mansour" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "David E. Moriarty", "A. Schultz", "J. Grefenstette" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "D. Bertsekas" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "M. Puterman" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Geoffrey E. Hinton", "D. Camp" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. Bellman", "R. Kalaba" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "W. R. Thompson" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [], "affiliation": [] }, { "name": [ "Zachary Chase Lipton", "Jianfeng Gao", "Lihong Li", "Xiujun Li", "Faisal Ahmed", "L. 
Deng" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "M. Geist", "O. Pietquin" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Ronald J. Williams" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "D. Bertsekas" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. S. Sutton", "A. Barto" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Sepp Hochreiter", "J. Schmidhuber" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [], "affiliation": [] } ], "arxiv_id": [ "1707.06887v1", "1706.01905v2", "1704.02798v4", "1703.07608v5", "1703.05449v2", "1703.03864v2", "", "1711.05715", "1606.01868", "1605.09674v4", "1602.04621v3", "1602.01783v2", "1601.04114v2", "1511.06581v3", "1509.06461v3", "1509.02971v6", "1505.05424v2", "1506.02142v6", "1505.05424v2", "1503.03712", "", "1502.05477v5", "1402.0635v3", "1308.4828", "1207.4708v2", "", "", "", "1406.3270v1", "", "", "", "", "", "", "1106.0221v1", "", "", "", "", "", "", "", "", "", "", "", "", "" ], "s2_corpus_id": [ "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "" ], "intents": [ [ "background" ], [ "methodology", "background" ], [ "background" ], [ "background" ], [ "background" ], [ "background" ], [ "background" ], [ "background" ], [ "background" ], [ "background" ], [ "background" ], [ "methodology", "background" ], [ "background" ], [ "methodology", "background" ], [ "methodology", "background" ], [ "background" ], [ "methodology" ], [ "methodology" ], [ "background" ], [ "background" ], [ "methodology", "background" ], [ "background" ], [ "background" ], [ "background" ], [ "methodology", "background" ], [ "background" ], [ "background" ], [ "background" ], [ "methodology" ], [ "background" ], [ "background" ], [ "background" ], [ "background" ], [ "background" ], [ "methodology" ], [ "background" ], [ "background" ], [ "background" ], [ "methodology", "background" ], [ "background" ], [ "background" ], [ "background" ], [ "background" ], [ "methodology" ], [ "methodology", "background" ], [ "background" ], [ "background" ], [ "background" ], [ "methodology" ] ], "isInfluential": [ true, false, false, true, false, false, false, false, false, false, false, true, false, true, false, false, false, false, false, false, true, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false ] }
null
92
8.532609
0.555556
0.666667
null
null
null
null
null
rywHCPkAW
kalyan|neuralguided_deductive_search_for_realtime_program_synthesis_from_examples|ICLR_cc_2018_Conference
4606753
1804.01186
Neural-Guided Deductive Search for Real-Time Program Synthesis from Examples
Synthesizing user-intended programs from a small number of input-output examples is a challenging problem with several important applications like spreadsheet manipulation, data wrangling and code refactoring. Existing synthesis systems either completely rely on deductive logic techniques that are extensively hand-engineered or on purely statistical models that need massive amounts of data, and in general fail to provide real-time synthesis on challenging benchmarks. In this work, we propose Neural Guided Deductive Search (NGDS), a hybrid synthesis technique that combines the best of both symbolic logic techniques and statistical models. Thus, it produces programs that satisfy the provided specifications by construction and generalize well on unseen examples, similar to data-driven systems. Our technique effectively utilizes the deductive search framework to reduce the learning problem of the neural component to a simple supervised learning setup. Further, this allows us to both train on sparingly available real-world data and still leverage powerful recurrent neural network encoders. We demonstrate the effectiveness of our method by evaluating on real-world customer scenarios by synthesizing accurate programs with up to 12× speed-up compared to state-of-the-art systems.
{ "name": [ "ashwin k vijayakumar", "dhruv batra", "abhishek mohta", "prateek jain", "oleksandr polozov", "sumit gulwani" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }
We integrate symbolic (deductive) and statistical (neural-based) methods to enable real-time program synthesis with almost perfect generalization from 1 input-output example.
[ "Program synthesis", "deductive search", "deep learning", "program induction", "recurrent neural networks" ]
null
2018-02-15 22:29:30
31
161
9
null
null
null
null
null
null
true
The pros and cons of this paper cited by the reviewers can be summarized below: Pros: * The method proposed here is highly technically sophisticated and appropriate for the problem of program synthesis from examples * The results are convincing, demonstrating that the proposed method is able to greatly speed up search in an existing synthesis system Cons: * The contribution in terms of machine learning or representation learning is minimal (mainly adding an LSTM to an existing system) * The overall system itself is quite complicated, which might raise the barrier of entry to other researchers who might want to follow the work, limiting impact In our decision, the fact that the paper significantly moves forward the state of the art in this area outweighs the concerns about lack of machine learning contribution or barrier of entry.
{ "review_id": [ "SyFsGdSlM", "SkPNib9ez", "S1qCIfJWz" ], "review": [ { "title": "title: Although the search method chosen was reasonable, the only real innovation here is to use the LSTM to learn a search heuristic.", "paper_summary": null, "main_review": "main_review: The paper presents a branch-and-bound approach to learn good programs\n(consistent with data, expected to generalise well), where an LSTM is\nused to predict which branches in the search tree should lead to good\nprograms (at the leaves of the search tree). The LSTM learns from\ninputs of program spec + candidate branch (given by a grammar\nproduction rule) and ouputs of quality scores for programms. The issue\nof how greedy to be in this search is addressed.\n\nIn the authors' set up we simply assume we are given a 'ranking\nfunction' h as an input (which we treat as black-box). In practice\nthis will simply be a guess (perhaps a good educated one) on which\nprograms will perform correctly on future data. As the authors\nindicate, a more ambitious paper would consider learning h, rather\nthan assuming it as a given.\n\nThe paper has a number of positive features. It is clearly written\n(without typo or grammatical problems). The empirical evaluation\nagainst PROSE is properly done and shows the presented method working\nas hoped. This was a competent approach to an interesting (real)\nproblem. However, the 'deep learning' aspect of the paper is not\nprominent: an LSTM is used as a plug-in and that is about it. Also,\nalthough the search method chosen was reasonable, the only real\ninnovation here is to use the LSTM to learn a search heuristic.\n\n\nThe authors do not explain what \"without attention\" means.\n\n\nI think the authors should mention the existence of (logic) program\nsynthesis using inductive logic programming. There are also (closely\nrelated) methods developed by the LOPSTR (logic-based program\nsynthesis and transformation) community. Many of the ideas here are\nreminiscent of methods existing in those communities (e.g. top-down search\nwith heuristics). The use of a grammar to define the space of programs\nis similar to the \"DLAB\" formalism developed by researchers at KU\nLeuven.\n\nADDED AFTER REVISIONS/DISCUSSIONS\n\nThe revised paper has a number of improvements which had led me to give it slightly higher rating.\n\n", "strength_weakness": null, "questions": null, "limitations": null, "review_summary": null }, { "title": "title: Incremental paper but well-written", "paper_summary": null, "main_review": "main_review: This paper extends and speeds up PROSE, a programming by example system, by posing the selection of the next production rule in the grammar as a supervised learning problem.\n\nThis paper requires a large amount of background knowledge as it depends on understanding program synthesis as it is done in the programming languages community. Moreover the work mentions a neurally-guided search, but little time is spent on that portion of their contribution. I am not even clear how their system is trained.\n\nThe experimental results do show the programs can be faster but only if the user is willing to suffer a loss in accuracy. It is difficult to conclude overall if the technique helps in synthesis.", "strength_weakness": null, "questions": null, "limitations": null, "review_summary": null }, { "title": "title: Strong paper; accept", "paper_summary": null, "main_review": "main_review: This is a strong paper. 
It focuses on an important problem (speeding up program synthesis), it’s generally very well-written, and it features thorough evaluation. The results are impressive: the proposed system synthesizes programs from a single example that generalize better than prior state-of-the-art, and it does so ~50% faster on average.\n\nIn Appendix C, for over half of the tasks, NGDS is slower than PROSE (by up to a factor of 20, in the worst case). What types of tasks are these? In the results, you highlight a couple of specific cases where NGDS is significantly *faster* than PROSE—I would like to see some analysis of the cases were it is slower, as well. I do recognize that in all of these cases, PROSE is already quite fast (less than 1 second, often much less) so these large relative slowdowns likely don’t lead to a noticeable absolute difference in speed. Still, it would be nice to know what is going on here.\n\nOverall, this is a strong paper, and I would advocate for accepting it.\n\n\nA few more specific comments:\n\n\nPage 2, “Neural-Guided Deductive Search” paragraph: use of the word “imbibes” - while technically accurate, this use doesn’t reflect the most common usage of the word (“to drink”). I found it very jarring.\n\nThe paper is very well-written overall, but I found the introduction to be unsatisfyingly vague—it was hard for me to evaluate your “key observations” when I couldn’t quite yet tell what the system you’re proposing actually does. The paragraph about “key observation III” finally reveals some of these details—I would suggest moving this much earlier in the introduction.\n\nPage 4, “Appendix A shows the resulting search DAG” - As this is a figure accompanying a specific illustrative example, it belongs in this section, rather than forcing the reader to hunt for it in the Appendix.\n\n", "strength_weakness": null, "questions": null, "limitations": null, "review_summary": null } ], "score": [ 0.5555555820465088, 0.5555555820465088, 0.7777777910232544 ], "confidence": [ 0.75, 0.5, 0.5 ], "novelty": [ null, null, null ], "correctness": [ null, null, null ], "clarity": [ null, null, null ], "impact": [ null, null, null ], "reproducibility": [ null, null, null ], "ethics": [ null, null, null ] }
{ "title": [ "Error analysis", "Training details + Clarification on the generalization accuracy", "revision with ML-based ranker", "We combine deep learning & symbolic methods in a new milestone for the program synthesis application, as opposed to a pure deep learning contribution", "New revision" ], "comment": [ "Thank you for the constructive feedback! We’ll add more details and clarify the introduction in the next revision.\n\nQ: Which factors lead to NGDS being slower than PROSE on some tasks?\nOur method is slower than PROSE when the predictions do not satisfy the requirements of the controller i.e. all the predicted scores are within the threshold or they violate the actual scores in branch and bound exploration. This leads to NGDS evaluating the LSTM for branches that were previously pruned. This can be especially harmful when branches that got pruned out at the very beginning of the search need to be reconsidered -- as it could lead to evaluating the network many times. While evaluating the network leads to minor additions in run-time, there are many such additions, and since PROSE performance is already << 1s for such cases, this results in considerable relative slowdown.\n\nWhy do the predictions violate the controller's requirements? This happens when the neural network is either indecisive (its predicted scores for all branches are too close) or wrong (its predicted scores have exactly the opposite order of the actual program scores). \nWe will update the draft with this discussion and present some examples below\n\nSome examples:\nA) \"41.711483001709,-91.4123382568359,41.6076278686523,-91.6373901367188\" ==> \"41.711483001709\"\n\tThe intended program is a simple substring extraction. However, at depth 1, the predicted score of Concat is much higher than the predicted score of Atom, and thus we end up exploring only the Concat branch. The found Concat program is incorrect because it uses absolute position indexes and does not generalize to other similar extraction tasks with different floating-point values in the input strings.\nWe found this scenario relatively common when the output string contains punctuation - the model considers it a strong signal for Concat.\nB) \"type size = 36: Bartok.Analysis.CallGraphNode type size = 32: Bartok.Analysis.CallGraphNode CallGraphNode\" ==> \"36->32\"\n\tWe correctly explore only the Concat branch, but the slowdown happens at the level of the `pos` symbol. There are many different logics to extract the “36” and “32” substrings. NGDS explores RelativePosition branch first, but the score of the resulting program is less then the prediction for RegexPositionRelative. Thus, the B&B controller explores both branches anyway and we end up with a relative slowdown caused by the network inference time.", "> Q: Please clarify how the system is trained.\n\n1) We use the industrially collected set of 375 string transformation tasks. Each task is a single input-output examples and 2-10 unseen inputs for evaluating generalization. Further, we split the 375 tasks into 65% train, 15% validation, and 20% test ones.\n2) We run PROSE on each of those tasks and collect the (symbol, production, spec input, spec output -> best program score after learning) information on all nodes of the search tree. As mentioned in the introduction, such traces provide a rich description of the synthesis problem thanks to the Markovian nature of deductive search in PROSE and enabling the creation of large datasets required for learning deep models. 
As a result, we obtain a dataset of ~450,000 search outcomes from mere 375 tasks.\n3) We further split all the search outcomes by the used symbol or its depth in the grammar. In our final evaluation, we present the results for the models trained on the decisions on the `transform` (depth=1), `pp`, `pos` symbols. We have also trained other symbol models as well as a single common model for all symbols/depths, but they didn’t perform as well.\n4) We employ Adam (Kingma and Ba, 2014) to optimize the objective. We use a batch size of 32 and a learning rate of 0.01 and use early stopping to pick the final model. The model architecture and the corresponding loss function (squared error) are discussed in Section 3.1. We will add the specific training details in the next revision of the paper. \n5) As discussed in Section 3.3, the learned models are integrated in the corresponding PROSE controller when the current search tree node matches the model's conditions (i.e. it is on the same respective symbol or depth).\n\n> Q: Is the approach useful for synthesis when there is a loss in program accuracy?\n\nIn fact, NGDS achieves higher average test accuracy than baseline PROSE (68.49% vs. 67.12%), although with slightly lower validation accuracy (63.83% vs. 70.21%) which effectively corresponds to 4 tasks.\n\nHowever, this is not the most important factor: PBE is bound to often fail in synthesizing the _intended_ program from a single input-output example. Even a machine-learned ranking function picks the wrong program 20% of the time (Ellis & Gulwani, IJCAI 2017).\n\nThus, the main goal of this work is speeding up the synthesis process on difficult scenarios without sacrificing the generalization accuracy too much. As a result, we achieve on average 50% faster synthesis time, with 10x speed-ups for many difficult tasks that require multiple seconds while still retaining competitive accuracy. Appendix C shows the breakdown of time and accuracy: out of 120 validation/test tasks, there are:\n- 76 tasks where both systems are correct,\n- 7 tasks where PROSE learns a correct program and NGDS learns a wrong one,\n- 4 tasks where PROSE learns a wrong program and NGDS learns a correct one,\n- 33 tasks where both systems are wrong.", "Following reviewers' feedback, we have updated the draft (Appendix C) with experiments that employ an ML-based ranking function as against the state-of-the-art ranker of PROSE that involves hand engineering. We observe that NGDS achieves ~2X speed-ups on average while still achieving highly comparable generalization accuracy as compared to PROSE with the ML-based ranker. ", "Thank you for the related work suggestions -- we will update this discussion in the next draft. We address your concerns below: \n\n> Q: Limited innovation in terms of deep learning:\n\nRather than being a pure contribution to deep learning, this work applies deep learning to the important field of program synthesis, where statistical approaches are still underexplored. Our main contribution is a hybrid approach to program synthesis that utilizes the best of both neural and symbolic synthesis techniques. 
Combining insights from both worlds in this way achieves a new milestone in program synthesis performance: from a single example it generates programs that generalize better than the prior state of the art (including neural RobustFill, symbolic PROSE, and hybrid DeepCoder), the generated program is provably correct, and the generation is 50% faster on average.\n\nDeepCoder (Balog et al., ICLR 2017) explored a hybrid approach last year by first predicting the likelihood of various operators and then using it to guide an external symbolic synthesis engine. Since deep networks are data-hungry, Balog et al. obtain training data by randomly sampling programs from the DSL and generating satisfying random strings as input-output examples. As noted in Section 1 and as evidenced by its inferior performance against our method, the generated programs tend to be unnatural, leading to poor generalization. In contrast, NGDS closely integrates neural models at each step of the synthesis, and so it is possible to obtain large amounts of training data while utilizing a relatively small number of real-world examples. \n\n> Q: Learning the ranking function instead of taking it as a given: \n\nWhile related, this problem is orthogonal to our work: a ranking function evaluates whether a given full program generalizes well, whereas we aim to predict the generalization of the best program produced from a given partial search state.\n\nImportantly, the proposed technique, NGDS, is independent of the ranking function and can be trivially integrated with any high-quality ranking function. For instance, the manually written ranking function of FlashFill in PROSE that we use is a result of 7 years of engineering and heavy fine-tuning for industrial applications. An even better-quality learned ranking function would only improve the accuracy of predictions, which are already on par with baseline PROSE (68.49% vs 67.12%).\n\nIn fact, a lot of recent prior work focuses on learning a ranking function for program induction, see (Singh & Gulwani, CAV 2015) and (Ellis & Gulwani, IJCAI 2017). For comparison, we are currently performing a set of experiments with an ML-learned ranking function; we'll update with the new results once they are complete.\n\n> Q: What does \"without attention\" mean?\n\nAll the models we explore encode input and output examples using (possibly multi-layered, bi-directional) LSTMs with or without an attention mechanism (Bahdanau et al., ICLR 2015). As mentioned in Section 8, the most accurate predictions arise when we attend to the input string while encoding the output string, similar to the attention-based models proposed by Devlin et al., 2017. We will make this clearer in the next version of the paper. \n\nSuch an attention mechanism allows the network to learn complex features like \"whether the output is a substring of the input\". Unfortunately, such accuracy comes at the cost of increasing the network evaluation time from linear to quadratic. As a result, prediction time at every node of the search tree dominates the search time, and NGDS is slower than PROSE even when its predictions are accurate. Therefore, we only use LSTM models without any attention mechanism in our evaluations. \n", "We have uploaded a new paper revision, as per the reviewers' feedback. 
Here's a summary of the changes:\n\n- Restructured the introduction, making NGDS details clearer and moving them earlier.\n- Added analysis of some erroneous scenarios in the Evaluation.\n- Expanded the related work overview with more symbolic methods such as ILP and LOPSTR.\n- Added details of the training process, including all the hyperparameters.\n- Moved Appendix A into the main text.\n- Replaced the table in Appendix B (formerly C): we found that we had selected the wrong model to generate the table in the previous submission. The summary results in Tables 1-2 and their analysis were on the correct best model (so no change was needed in the Evaluation), but the spreadsheet for detailed results in the appendix was not. We apologize for this confusion. The distribution of the speed-ups did not change substantially, although the correct spread is now from 12x to 0.2x.\n\nWe will upload one more revision later this month, in which we'll include experiments we're currently performing with an ML-learned ranking function (as opposed to the state-of-the-art PROSE ranking function used in the current submission)." ] }
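The rebuttal above spells out the NGDS training recipe (LSTM encoders over the spec's input and output strings, no attention, a squared-error objective on the best achievable program score, Adam with batch size 32 and learning rate 0.01, early stopping) and a threshold-based controller with branch-and-bound fallback. The sketch below is only an illustrative reconstruction of that description, not the authors' code: the class and function names (`ScorePredictor`, `choose_branches`), the PyTorch framework, and all dimensions are assumptions made here for readability.

```python
# Illustrative reconstruction only: framework, names, and dimensions are assumptions,
# not the authors' implementation.
import torch
import torch.nn as nn

class ScorePredictor(nn.Module):
    """Predict the best attainable program score for one search branch from the
    spec's input/output strings (character-level LSTM encoders, no attention,
    as described in the comments above)."""
    def __init__(self, vocab_size=128, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.input_enc = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.output_enc = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(2 * hidden_dim, 1)

    def forward(self, spec_input_ids, spec_output_ids):
        _, (h_in, _) = self.input_enc(self.embed(spec_input_ids))
        _, (h_out, _) = self.output_enc(self.embed(spec_output_ids))
        return self.head(torch.cat([h_in[-1], h_out[-1]], dim=-1)).squeeze(-1)

def train(model, loader, epochs=10):
    # Hyperparameters taken from the comments above: Adam, batch size 32,
    # learning rate 0.01, squared-error loss against the best program score
    # recorded from PROSE's search traces (early stopping omitted for brevity).
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for spec_in, spec_out, true_score in loader:   # loader yields batches of 32
            opt.zero_grad()
            loss = loss_fn(model(spec_in, spec_out), true_score)
            loss.backward()
            opt.step()

def choose_branches(predicted_scores, margin=0.1):
    """Simplified controller heuristic: keep only branches whose predicted score
    is within `margin` of the best prediction; a pruned branch may still be
    revisited later by the branch-and-bound controller if the prediction
    turns out to undershoot the actual program score."""
    best = max(predicted_scores.values())
    return [b for b, s in predicted_scores.items() if best - s <= margin]
```

In the actual system these predictions are interleaved with PROSE's deductive search at each grammar symbol; the re-exploration of wrongly pruned branches sketched in `choose_branches` is exactly the slowdown scenario analyzed in the error-analysis comment above.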
{ "paperhash": [ "gulwani|program_synthesis", "cai|making_neural_programming_architectures_generalize_via_recursion", "devlin|robustfill:_neural_program_learning_under_noisy_i/o", "loos|deep_network_guided_proof_search", "balog|deepcoder:_learning_to_write_programs", "parisotto|neuro-symbolic_program_synthesis", "sousa|learning_syntactic_program_transformations_from_examples", "gaunt|terpret:_a_probabilistic_programming_language_for_program_induction", "seide|cntk:_microsoft's_open-source_deep-learning_toolkit", "bosnjak|programming_with_a_differentiable_forth_interpreter", "kaiser|neural_gpus_learn_algorithms", "zaremba|learning_simple_algorithms_from_examples", "reed|neural_programmer-interpreters", "polozov|flashmeta:_a_framework_for_inductive_program_synthesis", "kingma|adam:_a_method_for_stochastic_optimization", "graves|neural_turing_machines", "lin|bias_reformulation_for_one-shot_function_induction", "le|flashextract:_a_framework_for_data_extraction_by_examples", "torlak|growing_solver-aided_languages_with_rosette", "alur|syntax-guided_synthesis", "udupa|transit:_specifying_protocols_with_concolic_snippets", "püschel|spiral:_code_generation_for_dsp_transforms", "hochreiter|long_short-term_memory", "manna|toward_automatic_program_synthesis", "waldinger|prow:_a_step_toward_automatic_program_writing", "ramsey|learning_to_learn_programs_from_examples_:_going_beyond_program_structure", "clausen|branch_and_bound_algorithms-principles_and_examples" ], "title": [ "Program Synthesis", "Making Neural Programming Architectures Generalize via Recursion", "RobustFill: Neural Program Learning under Noisy I/O", "Deep Network Guided Proof Search", "DeepCoder: Learning to Write Programs", "Neuro-Symbolic Program Synthesis", "Learning Syntactic Program Transformations from Examples", "TerpreT: A Probabilistic Programming Language for Program Induction", "CNTK: Microsoft's Open-Source Deep-Learning Toolkit", "Programming with a Differentiable Forth Interpreter", "Neural GPUs Learn Algorithms", "Learning Simple Algorithms from Examples", "Neural Programmer-Interpreters", "FlashMeta: a framework for inductive program synthesis", "Adam: A Method for Stochastic Optimization", "Neural Turing Machines", "Bias reformulation for one-shot function induction", "FlashExtract: a framework for data extraction by examples", "Growing solver-aided languages with rosette", "Syntax-guided synthesis", "TRANSIT: specifying protocols with concolic snippets", "SPIRAL: Code Generation for DSP Transforms", "Long Short-Term Memory", "Toward automatic program synthesis", "PROW: A Step Toward Automatic Program Writing", "Learning to Learn Programs from Examples : Going Beyond Program Structure", "Branch and Bound Algorithms-Principles and Examples" ], "abstract": [ "Program synthesis is the task of automatically finding a program in the underlying programming language that satisfies the user intent expressed in the form of some specification. Since the inception of AI in the 1950s, this problem has been considered the holy grail of Computer Science. Despite inherent challenges in the problem such as ambiguity of user intent and a typically enormous search space of programs, the field of program synthesis has developed many different techniques that enable program synthesis in different real-life application domains. It is now used successfully in software engineering, biological discovery, computer-aided education, end-user programming, and data cleaning. 
In the last decade, several applications of synthesis in the field of programming by examples have been deployed in mass-market industrial products. This survey is a general overview of the state-of-the-art approaches to program synthesis, its applications, and subfields. We discuss the general principles common to all modern synthesis approaches such as syntactic bias, oracle-guided inductive search, and optimization techniques. We then present a literature review covering the four most common state-of-the-art techniques in program synthesis: enumerative search, constraint solving, stochastic search, and deduction-based programming by examples. We conclude with a brief list of future horizons for the field.", "Empirically, neural networks that attempt to learn programs from data have exhibited poor generalizability. Moreover, it has traditionally been difficult to reason about the behavior of these models beyond a certain level of input complexity. In order to address these issues, we propose augmenting neural architectures with a key abstraction: recursion. As an application, we implement recursion in the Neural Programmer-Interpreter framework on four tasks: grade-school addition, bubble sort, topological sort, and quicksort. We demonstrate superior generalizability and interpretability with small amounts of training data. Recursion divides the problem into smaller pieces and drastically reduces the domain of each neural network component, making it tractable to prove guarantees about the overall system's behavior. Our experience suggests that in order for neural architectures to robustly learn program semantics, it is necessary to incorporate a concept like recursion.", "The problem of automatically generating a computer program from some specification has been studied since the early days of AI. Recently, two competing approaches for automatic program learning have received significant attention: (1) neural program synthesis, where a neural network is conditioned on input/output (I/O) examples and learns to generate a program, and (2) neural program induction, where a neural network generates new outputs directly using a latent program representation. Here, for the first time, we directly compare both approaches on a large-scale, real-world learning task and we additionally contrast to rule-based program synthesis, which uses hand-crafted semantics to guide the program generation. Our neural models use a modified attention RNN to allow encoding of variable-sized sets of I/O pairs, which achieve 92% accuracy on a real-world test set, compared to the 34% accuracy of the previous best neural synthesis approach. The synthesis model also outperforms a comparable induction model on this task, but we more importantly demonstrate that the strength of each approach is highly dependent on the evaluation metric and end-user application. 
Finally, we show that we can train our neural models to remain very robust to the type of noise expected in real-world data (e.g., typos), while a highly-engineered rule-based system fails entirely.", "Deep learning techniques lie at the heart of several significant AI advances in recent years including object recognition and detection, image captioning, machine translation, speech recognition and synthesis, and playing the game of Go. Automated first-order theorem provers can aid in the formalization and verification of mathematical theorems and play a crucial role in program analysis, theory reasoning, security, interpolation, and system verification. Here we suggest deep learning based guidance in the proof search of the theorem prover E. We train and compare several deep neural network models on the traces of existing ATP proofs of Mizar statements and use them to select processed clauses during proof search. We give experimental evidence that with a hybrid, two-phase approach, deep learning based guidance can significantly reduce the average number of proof search steps while increasing the number of theorems proved. Using a few proof guidance strategies that leverage deep neural networks, we have found first-order proofs of 7.36% of the first-order logic translations of the Mizar Mathematical Library theorems that did not previously have ATP generated proofs. This increases the ratio of statements in the corpus with ATP generated proofs from 56% to 59%.", "We develop a first line of attack for solving programming competition-style problems from input-output examples using deep learning. The approach is to train a neural network to predict properties of the program that generated the outputs from the inputs. We use the neural network’s predictions to augment search techniques from the programming languages community, including enumerative search and an SMT-based solver. Empirically, we show that our approach leads to an order of magnitude speedup over the strong non-augmented baselines and a Recurrent Neural Network approach, and that we are able to solve problems of difficulty comparable to the simplest problems on programming competition websites.", "Recent years have seen the proposal of a number of neural architectures for the problem of Program Induction. Given a set of input-output examples, these architectures are able to learn mappings that generalize to new test inputs. While achieving impressive results, these approaches have a number of important limitations: (a) they are computationally expensive and hard to train, (b) a model has to be trained for each task (program) separately, and (c) it is hard to interpret or verify the correctness of the learnt mapping (as it is defined by a neural network). In this paper, we propose a novel technique, Neuro-Symbolic Program Synthesis, to overcome the above-mentioned problems. Once trained, our approach can automatically construct computer programs in a domain-specific language that are consistent with a set of input-output examples provided at test time. Our method is based on two novel neural modules. The first module, called the cross correlation I/O network, given a set of input-output examples, produces a continuous representation of the set of I/O examples. The second module, the Recursive-Reverse-Recursive Neural Network (R3NN), given the continuous representation of the examples, synthesizes a program by incrementally expanding partial programs. 
We demonstrate the effectiveness of our approach by applying it to the rich and complex domain of regular expression based string transformations. Experiments show that the R3NN model is not only able to construct programs from new input-output examples, but it is also able to construct new programs for tasks that it had never observed before during training.", "Automatic program transformation tools can be valuable for programmers to help them with refactoring tasks, and for Computer Science students in the form of tutoring systems that suggest repairs to programming assignments. However, manually creating catalogs of transformations is complex and time-consuming. In this paper, we present REFAZER, a technique for automatically learning program transformations. REFAZER builds on the observation that code edits performed by developers can be used as input-output examples for learning program transformations. Example edits may share the same structure but involve different variables and subexpressions, which must be generalized in a transformation at the right level of abstraction. To learn transformations, REFAZER leverages state-of-the-art programming-by-example methodology using the following key components: (a) a novel domain-specific language (DSL) for describing program transformations, (b) domain-specific deductive algorithms for efficiently synthesizing transformations in the DSL, and (c) functions for ranking the synthesized transformations. We instantiate and evaluate REFAZER in two domains. First, given examples of code edits used by students to fix incorrect programming assignment submissions, we learn program transformations that can fix other students' submissions with similar faults. In our evaluation conducted on 4 programming tasks performed by 720 students, our technique helped to fix incorrect submissions for 87% of the students. In the second domain, we use repetitive code edits applied by developers to the same project to synthesize a program transformation that applies these edits to other locations in the code. In our evaluation conducted on 56 scenarios of repetitive edits taken from three large C# open-source projects, REFAZER learns the intended program transformation in 84% of the cases using only 2.9 examples on average.", "We study machine learning formulations of inductive program synthesis; given input-output examples, we try to synthesize source code that maps inputs to corresponding outputs. Our aims are to develop new machine learning approaches based on neural networks and graphical models, and to understand the capabilities of machine learning techniques relative to traditional alternatives, such as those based on constraint solving from the programming languages community. \nOur key contribution is the proposal of TerpreT, a domain-specific language for expressing program synthesis problems. TerpreT is similar to a probabilistic programming language: a model is composed of a specification of a program representation (declarations of random variables) and an interpreter describing how programs map inputs to outputs (a model connecting unknowns to observations). The inference task is to observe a set of input-output examples and infer the underlying program. TerpreT has two main benefits. First, it enables rapid exploration of a range of domains, program representations, and interpreter models. Second, it separates the model specification from the inference algorithm, allowing like-to-like comparisons between different approaches to inference. 
From a single TerpreT specification we automatically perform inference using four different back-ends. These are based on gradient descent, linear program (LP) relaxations for graphical models, discrete satisfiability solving, and the Sketch program synthesis system. \nWe illustrate the value of TerpreT by developing several interpreter models and performing an empirical comparison between alternative inference algorithms. Our key empirical finding is that constraint solvers dominate the gradient descent and LP-based formulations. We conclude with suggestions for the machine learning community to make progress on program synthesis.", "This tutorial will introduce the Computational Network Toolkit, or CNTK, Microsoft's cutting-edge open-source deep-learning toolkit for Windows and Linux. CNTK is a powerful computation-graph based deep-learning toolkit for training and evaluating deep neural networks. Microsoft product groups use CNTK, for example to create the Cortana speech models and web ranking. CNTK supports feed-forward, convolutional, and recurrent networks for speech, image, and text workloads, also in combination. Popular network types are supported either natively (convolution) or can be described as a CNTK configuration (LSTM, sequence-to-sequence). CNTK scales to multiple GPU servers and is designed around efficiency. The tutorial will give an overview of CNTK's general architecture and describe the specific methods and algorithms used for automatic differentiation, recurrent-loop inference and execution, memory sharing, on-the-fly randomization of large corpora, and multi-server parallelization. We will then show how typical uses looks like for relevant tasks like image recognition, sequence-to-sequence modeling, and speech recognition.", "Given that in practice training data is scarce for all but a small set of problems, a core question is how to incorporate prior knowledge into a model. In this paper, we consider the case of prior procedural knowledge for neural networks, such as knowing how a program should traverse a sequence, but not what local actions should be performed at each step. To this end, we present an end-to-end differentiable interpreter for the programming language Forth which enables programmers to write program sketches with slots that can be filled with behaviour trained from program input-output data. We can optimise this behaviour directly through gradient descent techniques on user-specified objectives, and also integrate the program into any larger neural computation graph. We show empirically that our interpreter is able to effectively leverage different levels of prior program structure and learn complex behaviours such as sequence sorting and addition. When connected to outputs of an LSTM and trained jointly, our interpreter achieves state-of-the-art accuracy for end-to-end reasoning about quantities expressed in natural language stories.", "Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal NTMs have a weakness that is caused by their sequential nature: they are not parallel and are are hard to train due to their large depth when unfolded. \nWe present a neural network architecture to address this problem: the Neural GPU. 
It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel which makes it easier to train and efficient to run. \nAn essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with upto 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. \nTo achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.", "We present an approach for learning simple algorithms such as copying, multi-digit addition and single digit multiplication directly from examples. Our framework consists of a set of interfaces, accessed by a controller. Typical interfaces are 1-D tapes or 2-D grids that hold the input and output data. For the controller, we explore a range of neural network-based models which vary in their ability to abstract the underlying algorithm from training instances and generalize to test examples with many thousands of digits. The controller is trained using $Q$-learning with several enhancements and we show that the bottleneck is in the capabilities of the controller rather than in the search incurred by $Q$-learning.", "We propose the neural programmer-interpreter (NPI): a recurrent and compositional neural network that learns to represent and execute programs. NPI has three learnable components: a task-agnostic recurrent core, a persistent key-value program memory, and domain-specific encoders that enable a single NPI to operate in multiple perceptually diverse environments with distinct affordances. By learning to compose lower-level programs to express higher-level programs, NPI reduces sample complexity and increases generalization ability compared to sequence-to-sequence LSTMs. The program memory allows efficient learning of additional tasks by building on existing programs. NPI can also harness the environment (e.g. a scratch pad with read-write pointers) to cache intermediate results of computation, lessening the long-term memory burden on recurrent hidden units. In this work we train the NPI with fully-supervised execution traces; each program has example sequences of calls to the immediate subprograms conditioned on the input. Rather than training on a huge number of relatively weak labels, NPI learns from a small number of rich examples. We demonstrate the capability of our model to learn several types of compositional programs: addition, sorting, and canonicalizing 3D models. Furthermore, a single NPI learns to execute these programs and all 21 associated subprograms.", "Inductive synthesis, or programming-by-examples (PBE) is gaining prominence with disruptive applications for automating repetitive tasks in end-user programming. However, designing, developing, and maintaining an effective industrial-quality inductive synthesizer is an intellectual and engineering challenge, requiring 1-2 man-years of effort. 
Our novel observation is that many PBE algorithms are a natural fall-out of one generic meta-algorithm and the domain-specific properties of the operators in the underlying domain-specific language (DSL). The meta-algorithm propagates example-based constraints on an expression to its subexpressions by leveraging associated witness functions, which essentially capture the inverse semantics of the underlying operator. This observation enables a novel program synthesis methodology called data-driven domain-specific deduction (D4), where domain-specific insight, provided by the DSL designer, is separated from the synthesis algorithm. Our FlashMeta framework implements this methodology, allowing synthesizer developers to generate an efficient synthesizer from the mere DSL definition (if properties of the DSL operators have been modeled). In our case studies, we found that 10+ existing industrial-quality mass-market applications based on PBE can be cast as instances of D4. Our evaluation includes reimplementation of some prior works, which in FlashMeta become more efficient, maintainable, and extensible. As a result, FlashMeta-based PBE tools are deployed in several industrial products, including Microsoft PowerShell 3.0 for Windows 10, Azure Operational Management Suite, and Microsoft Cortana digital assistant.", "We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm.", "We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-toend, allowing it to be efficiently trained with gradient descent. Preliminary results demonstrate that Neural Turing Machines can infer simple algorithms such as copying, sorting, and associative recall from input and output examples.", "National Science Foundation (U.S.) (STC Center for Brains, Minds and Machines Award CCF-1231216)", "Various document types that combine model and view (e.g., text files, webpages, spreadsheets) make it easy to organize (possibly hierarchical) data, but make it difficult to extract raw data for any further manipulation or querying. We present a general framework FlashExtract to extract relevant data from semi-structured documents using examples. 
It includes: (a) an interaction model that allows end-users to give examples to extract various fields and to relate them in a hierarchical organization using structure and sequence constructs. (b) an inductive synthesis algorithm to synthesize the intended program from few examples in any underlying domain-specific language for data extraction that has been built using our specified algebra of few core operators (map, filter, merge, and pair). We describe instantiation of our framework to three different domains: text files, webpages, and spreadsheets. On our benchmark comprising 75 documents, FlashExtract is able to extract intended data using an average of 2.36 examples in 0.84 seconds per field.", "SAT and SMT solvers have automated a spectrum of programming tasks, including program synthesis, code checking, bug localization, program repair, and programming with oracles. In principle, we obtain all these benefits by translating the program (once) to a constraint system understood by the solver. In practice, however, compiling a language to logical formulas is a tricky process, complicated by having to map the solution back to the program level and extend the language with new solver-aided constructs, such as symbolic holes used in synthesis.\n This paper introduces ROSETTE, a framework for designing solver-aided languages. ROSETTE is realized as a solver-aided language embedded in Racket, from which it inherits extensive support for meta-programming. Our framework frees designers from having to compile their languages to constraints: new languages, and their solver-aided constructs, are defined by shallow (library-based) or deep (interpreter-based) embedding in ROSETTE itself.\n We describe three case studies, by ourselves and others, of using ROSETTE to implement languages and synthesizers for web scraping, spatial programming, and superoptimization of bitvector programs.", "The classical formulation of the program-synthesis problem is to find a program that meets a correctness specification given as a logical formula. Recent work on program synthesis and program optimization illustrates many potential benefits of allowing the user to supplement the logical specification with a syntactic template that constrains the space of allowed implementations. Our goal is to identify the core computational problem common to these proposals in a logical framework. The input to the syntax-guided synthesis problem (SyGuS) consists of a background theory, a semantic correctness specification for the desired program given by a logical formula, and a syntactic set of candidate implementations given by a grammar. The computational problem then is to find an implementation from the set of candidate expressions so that it satisfies the specification in the given theory. We describe three different instantiations of the counter-example-guided-inductive-synthesis (CEGIS) strategy for solving the synthesis problem, report on prototype implementations, and present experimental results on an initial set of benchmarks.", "With the maturing of technology for model checking and constraint solving, there is an emerging opportunity to develop programming tools that can transform the way systems are specified. In this paper, we propose a new way to program distributed protocols using concolic snippets. Concolic snippets are sample execution fragments that contain both concrete and symbolic values. 
The proposed approach allows the programmer to describe the desired system partially using the traditional model of communicating extended finite-state-machines (EFSM), along with high-level invariants and concrete execution fragments. Our synthesis engine completes an EFSM skeleton by inferring guards and updates from the given fragments which is then automatically analyzed using a model checker with respect to the desired invariants. The counterexamples produced by the model checker can then be used by the programmer to add new concrete execution fragments that describe the correct behavior in the specific scenario corresponding to the counterexample. We describe TRANSIT, a language and prototype implementation of the proposed specification methodology for distributed protocols. Experimental evaluations of TRANSIT to specify cache coherence protocols show that (1) the algorithm for expression inference from concolic snippets can synthesize expressions of size 15 involving typical operators over commonly occurring types, (2) for a classical directory-based protocol, TRANSIT automatically generates, in a few seconds, a complete implementation from a specification consisting of the EFSM structure and a few concrete examples for every transition, and (3) a published partial description of the SGI Origin cache coherence protocol maps directly to symbolic examples and leads to a complete implementation in a few iterations, with the programmer correcting counterexamples resulting from underspecified transitions by adding concrete examples in each iteration.", "Fast changing, increasingly complex, and diverse computing platforms pose central problems in scientific computing: How to achieve, with reasonable effort, portable optimal performance? We present SPIRAL, which considers this problem for the performance-critical domain of linear digital signal processing (DSP) transforms. For a specified transform, SPIRAL automatically generates high-performance code that is tuned to the given platform. SPIRAL formulates the tuning as an optimization problem and exploits the domain-specific mathematical structure of transform algorithms to implement a feedback-driven optimizer. Similar to a human expert, for a specified transform, SPIRAL \"intelligently\" generates and explores algorithmic and implementation choices to find the best match to the computer's microarchitecture. The \"intelligence\" is provided by search and learning techniques that exploit the structure of the algorithm and implementation space to guide the exploration and optimization. SPIRAL generates high-performance code for a broad set of DSP transforms, including the discrete Fourier transform, other trigonometric transforms, filter transforms, and discrete wavelet transforms. Experimental results show that the code generated by SPIRAL competes with, and sometimes outperforms, the best available human tuned transform library code.", "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. 
Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.", "An elementary outline of the theorem-proving approach to automatic program synthesis is given, without dwelling on technical details. The method is illustrated by the automatic construction of both recursive and iterative programs operating on natural numbers, lists, and trees. In order to construct a program satisfying certain specifications, a theorem induced by those specifications is proved, and the desired program is extracted from the proof. The same technique is applied to transform recursively defined functions into iterative programs, frequently with a major gain in efficiency. It is emphasized that in order to construct a program with loops or with recursion, the principle of mathematical induction must be applied. The relation between the version of the induction rule used and the form of the program constructed is explored in some detail.", "This paper describes a program, called \"PROW\", which writes programs. PROW accepts the specification of the program in the language of predicate calculus, decides the algorithm for the program and then produces a LISP program which is an implementation of the algorithm. Since the construction of the algorithm is obtained by formal theorem-proving techniques, the programs that PROW writes are free from logical errors and do not have to be debugged. The user of PROW can make PROW write programs in languages other than LISP by modifying the part of PROW that translates an algorithm to a LISP program. Thus PROW can be modified to write programs in any language. At the end of this paper, it is shown that PROW can also be used as a question-answering program.", "Programming-by-example technologies let end users construct and run new programs by providing examples of the intended program behavior. But, the few provided examples seldom uniquely determine the intended program. Previous approaches to picking a program used a bias toward shorter or more naturally structured programs. Our work here gives a machine learning approach for learning to learn programs that departs from previous work by relying upon features that are independent of the program structure, instead relying upon a learned bias over program behaviors, and more generally over program execution traces. Our approach leverages abundant unlabeled data for semisupervised learning, and incorporates simple kinds of world knowledge for common-sense reasoning during program induction. These techniques are evaluated in two programming-by-example domains, improving the accuracy of program learners.", "A large number of real-world planning problems called combinatorial optimization problems share the following properties: They are optimization problems, are easy to state, and have a finite but usually very large number of feasible solutions. While some of these as e.g. 
the Shortest Path problem and the Minimum Spanning Tree problem have polynomial algoritms, the majority of the problems in addition share the property that no polynomial method for their solution is known. Examples here are vehicle ∗Department of Computer Science, University of Copenhagen, Universitetsparken 1, DK2100 Copenhagen, Denmark." ], "authors": [ { "name": [ "Sumit Gulwani", "Oleksandr Polozov", "Rishabh Singh" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Jonathon Cai", "Richard Shin", "D. Song" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Jacob Devlin", "Jonathan Uesato", "Surya Bhupatiraju", "Rishabh Singh", "Abdel-rahman Mohamed", "Pushmeet Kohli" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Sarah M. Loos", "G. Irving", "Christian Szegedy", "C. Kaliszyk" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Matej Balog", "Alexander L. Gaunt", "Marc Brockschmidt", "Sebastian Nowozin", "Daniel Tarlow" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Emilio Parisotto", "Abdel-rahman Mohamed", "Rishabh Singh", "Lihong Li", "Dengyong Zhou", "Pushmeet Kohli" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. Sousa", "Gustavo Soares", "Loris D'antoni", "Oleksandr Polozov", "Sumit Gulwani", "Rohit Gheyi", "Ryo Suzuki", "Bjoern Hartmann" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Alexander L. 
Gaunt", "Marc Brockschmidt", "Rishabh Singh", "Nate Kushman", "Pushmeet Kohli", "Jonathan Taylor", "Daniel Tarlow" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "F. Seide", "Amit Agarwal" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Matko Bosnjak", "Tim Rocktäschel", "Jason Naradowsky", "Sebastian Riedel" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Lukasz Kaiser", "I. Sutskever" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Wojciech Zaremba", "Tomas Mikolov", "Armand Joulin", "R. Fergus" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Scott E. Reed", "Nando de Freitas" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Oleksandr Polozov", "Sumit Gulwani" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Diederik P. Kingma", "Jimmy Ba" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Alex Graves", "Greg Wayne", "Ivo Danihelka" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Dianhuan Lin", "Eyal Dechter", "Kevin Ellis", "J. Tenenbaum", "S. Muggleton" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Vu Le", "Sumit Gulwani" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Emina Torlak", "Rastislav Bodík" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. Alur", "Rastislav Bodík", "Garvit Juniwal", "Milo M. K. Martin", "Mukund Raghothaman", "S. 
Seshia", "Rishabh Singh", "Armando Solar-Lezama", "Emina Torlak", "Abhishek Udupa" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Abhishek Udupa", "Arun Raghavan", "Jyotirmoy V. Deshmukh", "Sela Mador-Haim", "Milo M. K. Martin", "R. Alur" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Markus Püschel", "José M. F. Moura", "Jeremy R. Johnson", "D. Padua", "M. Veloso", "Bryan Singer", "Jianxin Xiong", "F. Franchetti", "Aca Gacic", "Y. Voronenko", "Kang Chen", "Robert W. Johnson", "Nicholas Rizzolo" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Sepp Hochreiter", "J. Schmidhuber" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Z. Manna", "R. Waldinger" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. Waldinger", "Richard C. T. Lee" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Catherine Ramsey", "Judith K. Smith", "Cheryl J. Adams" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. 
Clausen" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, "1704.06611", "1703.07469", "1701.06972", "1611.01989", "1611.01855", "1608.09000", "1608.04428", null, "1605.06640", "1511.08228", "1511.07275", "1511.06279", null, "1412.6980", "1410.5401", null, null, null, null, null, null, null, null, null, null, null ], "s2_corpus_id": [ "40302057", "1844940", "6933074", "11208402", "2906360", "15904815", "11216724", "6429546", "38063112", "2468625", "2009318", "2982122", "7034786", "2867610", "6628106", "15299054", "33970", "11698735", "16038701", "6705760", "185626", "12427750", "1915014", "9539039", "32480327", "10599650", "16580792" ], "intents": [ [ "background" ], [ "background" ], [ "background", "methodology" ], [ "methodology" ], [ "background", "methodology" ], [ "methodology" ], [ "methodology" ], [ "background" ], [ "methodology" ], [ "background" ], [ "background" ], [ "methodology" ], [ "methodology" ], [ "background", "methodology" ], [ "methodology" ], [ "background", "methodology" ], [], [ "methodology" ], [], [ "background" ], [ "methodology" ], [ "methodology" ], [ "methodology" ], [ "methodology" ], [ "background" ], [ "background" ], [ "methodology" ] ], "isInfluential": [ false, false, true, false, true, false, false, false, false, false, false, false, false, true, false, false, false, false, false, false, false, false, false, false, false, false, false ] }
null
84
1.916667
0.62963
0.583333
null
null
null
null
null
rywDjg-RW
wu|enhancing_the_transferability_of_adversarial_examples_with_noise_reduced_gradient|ICLR_cc_2018_Conference
Enhancing the Transferability of Adversarial Examples with Noise Reduced Gradient
Deep neural networks provide state-of-the-art performance for many applications of interest. Unfortunately, they are known to be vulnerable to adversarial examples, formed by applying small but malicious perturbations to the original inputs. Moreover, the perturbations can transfer across models: adversarial examples generated for a specific model will often mislead other unseen models. Consequently, an adversary can leverage this to attack deployed black-box systems. In this work, we demonstrate that the adversarial perturbation can be decomposed into two components: a model-specific one and a data-dependent one, and it is the latter that mainly contributes to the transferability. Motivated by this understanding, we propose to craft adversarial examples by utilizing the noise reduced gradient (NRG), which approximates the data-dependent component. Experiments on various classification models trained on ImageNet demonstrate that the new approach enhances the transferability dramatically. We also find that low-capacity models have more powerful attack capability than their high-capacity counterparts, under the condition that they have comparable test performance. These insights give rise to a principled way to construct adversarial examples with high success rates and could potentially provide guidance for designing effective defense approaches against black-box attacks.
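The abstract does not spell out how the noise reduced gradient is computed. The reviews below note that the data-dependent direction is obtained by averaging and that a noise term is added to FGS/IGSM-style attacks, and the reference list includes SmoothGrad, so one plausible reading is that gradients are averaged over several noise-perturbed copies of the input before taking the attack step. The sketch below illustrates only that interpretation; the function names, `n_samples`, `sigma`, and `eps` are assumptions, not the paper's definitions.

```python
# Sketch of one possible noise-reduced-gradient attack step; assumption-laden,
# the paper's exact NRG definition may differ from this averaging scheme.
import torch

def noise_reduced_gradient(model, x, y, loss_fn, n_samples=10, sigma=0.05):
    """Average input gradients over Gaussian-perturbed copies of x, with the
    intent of suppressing the model-specific, high-frequency part of the
    gradient and keeping the shared, data-dependent direction."""
    grad_sum = torch.zeros_like(x)
    for _ in range(n_samples):
        x_noisy = (x + sigma * torch.randn_like(x)).detach().requires_grad_(True)
        loss = loss_fn(model(x_noisy), y)
        grad_sum += torch.autograd.grad(loss, x_noisy)[0]
    return grad_sum / n_samples

def fgsm_with_nrg(model, x, y, loss_fn, eps=8 / 255):
    """FGSM-style step that uses the averaged gradient in place of the raw one."""
    g = noise_reduced_gradient(model, x, y, loss_fn)
    return (x + eps * g.sign()).clamp(0, 1)
```

An iterative (IGSM-style) variant would simply repeat `fgsm_with_nrg` with a smaller per-step `eps`, re-clipping to the overall perturbation budget after each step.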
{ "name": [], "affiliation": [] }
We propose a new method for enhancing the transferability of adversarial examples by using the noise-reduced gradient.
[ "black-box attack", "adversarial example", "deep learning", "transferability" ]
null
2018-02-15 22:29:42
18
null
null
null
null
null
null
null
null
false
The paper studies the transferability of adversarial examples between model architectures, and proposes a method to improve this transferability. Whilst it covers an interesting and relevant line of research, the paper does not provide strong evidence for its main underlying hypothesis: namely, that adversarial perturbations can be split into a model-specific and a data-specific part. The paper's presentation also warrants improvements. The authors have not provided a rebuttal.
{ "review_id": [ "rkzeadBxf", "rkKt2t2xz", "SJIOPWdgf" ], "review": [ { "title": "title: Review", "paper_summary": null, "main_review": "main_review: This paper postulates that an adversarial perturbation consists of a model-specific and data-specific component, and that amplification of the latter is best suited for adversarial attacks.\n\nThis paper has many grammatical errors. The article is almost always missing from nouns. Some of the sentences need changing. For example:\n\n\"training model paramater\" --> \"training model parameters\" (assuming the neural networks have more than 1 parameter)\n\"same or similar dataset with\" --> \"same or a similar dataset to\"\n\"human eyes\" --> \"the human eye\"!\n\"in analogous to\" --> \"analogous to\"\n\"start-of-the-art\" --> \"state-of-the-art\"\n\nSome roughly chronological comments follow:\n\nIn equation (1) although it is obvious that y is the output of f, you should define it. As you are considering the single highest-scoring class, there should probably be an argmax somewhere.\n\n\"The best metric should be human eyes, which is unfortunately difficult to quantify\". I don't recommend that you quantify things in terms of eyes.\n\nIn section 3.1 I am not convinced there is yet sufficient justification to claim that grad(f||)^A is aligned with the inter-class deviation. It would be helpful to put equation (8) here. The \"human\" line on figure 1a doesn't make much sense. By u & v in the figure 1 caption you presumably the x and y axes on the plot. These should be labelled.\n\nIn section 4 you write \"it is meaningless to construct adversarial perturbations for the images that target models cannot classify correctly\". I'm not sure this is true. Imagenet has a *lot* of dog breeds. For an adversarial attack, it may be advantageous to change the classification from \"wrong breed of dog\" to \"not a dog at all\".\n\nSomething that concerns me is that, although your methods produce good results, it looks like the hyperparameters are chosen so as to overfit to the data (please do correct me if this is not the case). A better procedure would be to split the imagenet validation set in two and optimise the hyperparameters on one split, and test on the second. You also \"try lots of \\alphas\", which again seems like overfitting.\n\nTarget attack experiments are missing from 5.1, in 5.2 you write that it is a harder problem so it is omitted. I would argue it is still worth presenting these results even if they are less flattering.\n\nSection 6.2 feels out of place and disjointed from the narrative of the paper.\n\nA lot of choices in Section 6 feel arbitrary. In 6.3, why is resnet34 the chosen source model? In 6.4 why do you select those two target models?\n\nI think this paper contains an interesting idea, but suffers from poor writing and unprincipled experimentation. I therefore recommend it be rejected.\n\nPros:\n- Promising results\n- Good summary of adversarial methods\n\nCons:\n- Poorly written\n- Appears to overfit to the test data\n", "strength_weakness": null, "questions": null, "limitations": null, "review_summary": null }, { "title": "title: Interesting study of the most intriguing but lesser studied aspect of adversarial examples.", "paper_summary": null, "main_review": "main_review: The problem of exploring the cross-model (and cross-dataset) generalization of adversarial examples is relatively neglected topic. 
However, the paper's list of related work on that topic is a bit lacking, as in section 3.1 it omits referencing the \"Explaining and Harnessing...\" paper by Goodfellow et al., which presented the first convincing attempt at explaining cross-model generalization of the examples.\n\nHowever, this paper seems to extend the explanation by a more principled study of the cross-model generalization. Again, Section 3.1 presents a hypothesis on splitting the space of adversarial perturbations into two sub-manifolds. However, this hypothesis seems like a tautology, as the splitting is engineered in a way to formally describe the informal statement. Anyway, the paper introduces useful terminology to aid analysis and engineer examples with improved generalization across models.\n\nIn the same vein, Section 3.2 presents another hypothesis, but it is claimed as fact. It claims that the model-dependent component of adversarial examples is dominated by images with high-frequency noise. This is a relatively unfounded statement, not backed up by any qualitative or quantitative evidence.\n\nMotivated by the observation that most newly generated adversarial examples are perturbations by a high frequency noise and that noise is often model-specific (which is not measured or studied sufficiently in the paper), the paper suggests adding a noise term to the FGS and IGSM methods and gives extensive experimental evidence on a variety of models on ImageNet demonstrating that the transferability of the newly generated examples is improved.\n\nI am on the fence with this paper. It certainly studies an important, somewhat neglected aspect of adversarial examples, but mostly speculatively, and the experimental results study the resulting algorithm rather than trying to verify the hypotheses on which those algorithms are based.\n\nOn the plus side, the paper presents very strong practical evidence that the transferability of the examples can be enhanced by such a simple methodology significantly.\n\nI think the paper would be much more compelling (and should be accepted) if it contained a more disciplined study of the hypotheses on which the methodology is based.", "strength_weakness": null, "questions": null, "limitations": null, "review_summary": null }, { "title": "title: Some arguments are not well justified ", "paper_summary": null, "main_review": "main_review: This paper focuses on enhancing the transferability of adversarial examples from one model to another model. The main contribution of this paper is to factorize the adversarial perturbation direction into model-specific and data-dependent components. Motivated by finding the data-dependent direction, the paper proposes the noise reduced gradient method. \n\nThe paper is not mature. The authors need to justify their arguments in a more rigorous way, e.g., why the data-dependent direction can be obtained by averaging; is it a true factorization of the perturbation direction, i.e., is the orthogonal direction indeed model-specific? Most of the explanations are not rigorous and somewhat superficial.\n\n\n", "strength_weakness": null, "questions": null, "limitations": null, "review_summary": null } ], "score": [ 0.3333333432674408, 0.4444444477558136, 0.4444444477558136 ], "confidence": [ 0.75, 0.75, 0.5 ], "novelty": [ null, null, null ], "correctness": [ null, null, null ], "clarity": [ null, null, null ], "impact": [ null, null, null ], "reproducibility": [ null, null, null ], "ethics": [ null, null, null ] }
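The reviews discuss black-box transfer success rates measured by crafting adversarial examples on a source model and evaluating them on unseen target models, restricted (per the protocol quoted in the first review) to images the target model classifies correctly. A minimal harness for that transfer setting might look as follows; the models, data loader, and attack function are placeholders, not the paper's experimental code.

```python
# Minimal black-box transfer-evaluation harness (illustrative only; models,
# loader, and attack function are placeholders, not the paper's code).
import torch

def transfer_success_rate(source_model, target_model, attack, loader):
    """Craft adversarial examples on source_model and measure how often they
    flip target_model's prediction, counting only inputs that the target model
    classifies correctly in the first place."""
    loss_fn = torch.nn.CrossEntropyLoss()
    fooled, counted = 0, 0
    for x, y in loader:
        with torch.no_grad():
            clean_pred = target_model(x).argmax(dim=1)
        keep = clean_pred == y          # only originally correct images are counted
        if keep.sum() == 0:
            continue
        x_adv = attack(source_model, x[keep], y[keep], loss_fn)
        with torch.no_grad():
            adv_pred = target_model(x_adv).argmax(dim=1)
        fooled += (adv_pred != y[keep]).sum().item()
        counted += keep.sum().item()
    return fooled / max(counted, 1)
```

For example, `transfer_success_rate(source_model, target_model, fgsm_with_nrg, val_loader)` would plug in the NRG sketch given earlier; swapping in a plain FGSM attack in its place gives the baseline transfer rate such a comparison would be measured against.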
{ "title": [], "comment": [] }
{ "paperhash": [ "athalye|synthesizing_robust_adversarial_examples", "carlini|towards_evaluating_the_robustness_of_neural_networks", "carlini|adversarial_examples_are_not_easily_detected:_bypassing_ten_detection_methods", "goodfellow|explaining_and_harnessing_adversarial_examples", "kurakin|adversarial_examples_in_the_physical_world", "liu|delving_into_transferable_adversarial_examples_and_black-box_attacks", "madry|towards_deep_learning_models_resistant_to_adversarial_attacks", "moosavi-dezfooli|deepfool:_a_simple_and_accurate_method_to_fool_deep_neural_networks", "papernot|distillation_as_a_defense_to_adversarial_perturbations_against_deep_neural_networks", "smilkov|smoothgrad:_removing_noise_by_adding_noise", "szegedy|intriguing_properties_of_neural_networks", "tramèr|ensemble_adversarial_training:_attacks_and_defenses", "balduzzi|the_shattered_gradients_problem:_if_resnets_are_the_answer,_then_what_is_the_question", "feinman|detecting_adversarial_samples_from_artifacts", "li|adversarial_examples_detection_in_deep_networks_with_convolutional_filter_statistics", "lu|no_need_to_worry_about_adversarial_examples_in_object_detection_in_autonomous_vehicles", "tramèr|the_space_of_transferable_adversarial_examples" ], "title": [ "Synthesizing Robust Adversarial Examples", "Towards Evaluating the Robustness of Neural Networks", "Generative Adversarial Nets", "EXPLAINING AND HARNESSING ADVERSARIAL EXAMPLES", "Workshop track -ICLR 2017 ADVERSARIAL EXAMPLES IN THE PHYSICAL WORLD", "DELVING INTO TRANSFERABLE ADVERSARIAL EX-AMPLES AND BLACK-BOX ATTACKS", "Towards Deep Learning Models Resistant to Adversarial Attacks", "DeepFool: a simple and accurate method to fool deep neural networks", "Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks", "SmoothGrad: removing noise by adding noise", "Intriguing properties of neural networks Christian Szegedy", "ENSEMBLE ADVERSARIAL TRAINING: ATTACKS AND DEFENSES", "The Shattered Gradients Problem: If resnets are the answer, then what is the question?", "Detecting Adversarial Samples from Artifacts", "Adversarial Examples Detection in Deep Networks with Convolutional Filter Statistics", "NO Need to Worry about Adversarial Examples in Object Detection in Autonomous Vehicles", "The Space of Transferable Adversarial Examples" ], "abstract": [ "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "" ], "authors": [ { "name": [], "affiliation": [] }, { "name": [ "nicholas carlini", "david wagner" ], "affiliation": [ { "laboratory": "", "institution": "University of California", "location": "{'settlement': 'Berkeley'}" }, { "laboratory": "", "institution": "University of California", "location": "{'settlement': 'Berkeley'}" } ] }, { "name": [ "ian j goodfellow", "jean pouget-abadie", "mehdi mirza", "bing xu", "david warde-farley", "sherjil ozair", "aaron courville", "yoshua bengio", " delhi" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": "", "institution": "Université de Montréal from Ecole Polytechnique", "location": "{}" }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": "Sherjil Ozair is visiting", "institution": "Université de Montréal from Indian Institute of Technology", "location": "{}" }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { 
"laboratory": null, "institution": null, "location": null } ] }, { "name": [ "ian j goodfellow", "jonathon shlens", "christian szegedy" ], "affiliation": [ { "laboratory": "", "institution": "Google Inc", "location": "{'settlement': 'Mountain View', 'region': 'CA'}" }, { "laboratory": "", "institution": "Google Inc", "location": "{'settlement': 'Mountain View', 'region': 'CA'}" }, { "laboratory": "", "institution": "Google Inc", "location": "{'settlement': 'Mountain View', 'region': 'CA'}" } ] }, { "name": [ "alexey kurakin", "ian j goodfellow", "samy bengio" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "yanpei liu", "xinyun chen", "chang liu", "dawn song" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "aleksander m ądry", "aleksandar makelov", "ludwig schmidt", "dimitris tsipras", "adrian vladu" ], "affiliation": [ { "laboratory": "", "institution": "Massachusetts Institute of Technology Cambridge", "location": "{'postCode': '02139', 'region': 'MA', 'country': 'USA'}" }, { "laboratory": "", "institution": "Massachusetts Institute of Technology Cambridge", "location": "{'postCode': '02139', 'region': 'MA', 'country': 'USA'}" }, { "laboratory": "", "institution": "Massachusetts Institute of Technology Cambridge", "location": "{'postCode': '02139', 'region': 'MA', 'country': 'USA'}" }, { "laboratory": "", "institution": "Massachusetts Institute of Technology Cambridge", "location": "{'postCode': '02139', 'region': 'MA', 'country': 'USA'}" }, { "laboratory": "", "institution": "Massachusetts Institute of Technology Cambridge", "location": "{'postCode': '02139', 'region': 'MA', 'country': 'USA'}" } ] }, { "name": [ "seyed-mohsen moosavi-dezfooli", "alhussein fawzi", "pascal frossard", "école polytechnique", "fédérale de lausanne" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "nicolas papernot", "patrick mcdaniel", "xi wu", "jha § somesh", "ananthram swami" ], "affiliation": [ { "laboratory": "", "institution": "University of Wisconsin", "location": "{'settlement': 'Madison'}" }, { "laboratory": "", "institution": "University of Wisconsin", "location": "{'settlement': 'Madison'}" }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": "United States Army Research Laboratory", "institution": "", "location": "{'settlement': 'Adelphi', 'region': 'Maryland'}" } ] }, { "name": [ "daniel smilkov", "nikhil thorat", "been kim", "fernanda viégas", "martin wattenberg" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { 
"name": [ "inc wojciech google", " zaremba", "ilya sutskever", "joan bruna", "ian goodfellow", "rob fergus" ], "affiliation": [ { "laboratory": "", "institution": "New York University", "location": "{}" }, { "laboratory": "", "institution": "New York University", "location": "{}" }, { "laboratory": "", "institution": "Google Inc", "location": "{}" }, { "laboratory": "", "institution": "New York University Dumitru Erhan Google Inc", "location": "{}" }, { "laboratory": "", "institution": "University of Montreal", "location": "{}" }, { "laboratory": "", "institution": "New York University Facebook Inc", "location": "{}" } ] }, { "name": [ "florian tramèr", "nicolas papernot", "ian goodfellow", "dan boneh", "patrick mcdaniel" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "david balduzzi", "marcus frean", "lennox leary", "j p lewis", "kurt wan-duo ma", "brian mcwilliams" ], "affiliation": [ { "laboratory": "", "institution": "Victoria University of Welling", "location": "{}" }, { "laboratory": "", "institution": "Victoria University of Welling", "location": "{}" }, { "laboratory": "", "institution": "Victoria University of Welling", "location": "{}" }, { "laboratory": "", "institution": "Victoria University of Welling", "location": "{}" }, { "laboratory": "", "institution": "Victoria University of Welling", "location": "{}" }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "reuben feinman", "ryan r curtin", "saurabh shintre", "andrew b gardner" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "xin li", "fuxin li" ], "affiliation": [ { "laboratory": "", "institution": "Oregon State University", "location": "{}" }, { "laboratory": "", "institution": "Oregon State University", "location": "{}" } ] }, { "name": [ "jiajun lu", "hussein sibai", "evan fabry", "david forsyth" ], "affiliation": [ { "laboratory": "", "institution": "University of Illinois at Urbana Champaign", "location": "{}" }, { "laboratory": "", "institution": "University of Illinois at Urbana Champaign", "location": "{}" }, { "laboratory": "", "institution": "University of Illinois at Urbana Champaign", "location": "{}" }, { "laboratory": "", "institution": "University of Illinois at Urbana Champaign", "location": "{}" } ] }, { "name": [ "florian tramèr", "nicolas papernot", "ian goodfellow", "dan boneh", "patrick mcdaniel" ], "affiliation": [ { "laboratory": "", "institution": "Stanford University", "location": "{}" }, { "laboratory": "", "institution": "Pennsylvania State University", "location": "{}" }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": "", "institution": "Stanford University", "location": "{}" }, { "laboratory": "", "institution": "Pennsylvania State University", "location": "{}" } ] } ], "arxiv_id": [ "1707.07397v3", "", "", "", "", "", "1706.06083v4", "", "", "", "", "", "", "", "", "", "" ], "s2_corpus_id": [ "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "" ], "intents": [ null, null, null, null, 
null, null, null, null, null, null, null, null, null, null, null, null, null ], "isInfluential": [ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ] }
null
84
null
0.407407
0.666667
null
null
null
null
null
ryvxcPeAb
li|measuring_the_intrinsic_dimension_of_objective_landscapes|ICLR_cc_2018_Conference
1804.08838v1
Measuring the Intrinsic Dimension of Objective Landscapes
"Many recently trained neural networks employ large numbers of parameters to achieve good performanc(...TRUNCATED)
{"name":["chunyuan li","heerad farkhoor","rosanne liu","jason yosinski"],"affiliation":[{"labora(...TRUNCATED)
null
[ "Computer Science", "Mathematics" ]
International Conference on Learning Representations
2018-02-15
34
286
null
null
null
null
null
null
null
true
"The authors make an empirical study of the \"dimension\" of a neural net optimization problem, wher(...TRUNCATED)
{"review_id":["BkJsM2vgf","BJva6gOgM","B1IwI-2xz"],"review":[{"title":"title: Good paper","paper_sum(...TRUNCATED)
{"title":["Thanks for your kind and helpful comments","Thanks for the valuable, critical feedback","(...TRUNCATED)
{"paperhash":["kaiser|one_model_to_learn_them_all","louizos|bayesian_compression_for_deep_learning",(...TRUNCATED)
null
84
3.404762
0.62963
0.5
null
null
null
null
null
ryup8-WCW
"chen|fastgcn_fast_learning_with_graph_convolutional_networks_via_importance_sampling|ICLR_cc_2018_C(...TRUNCATED)
1801.10247v1
FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling
"The graph convolutional networks (GCN) recently proposed by Kipf and Welling are an effective graph(...TRUNCATED)
{"name":["jie chen","tengfei ma","cao xiao"],"affiliation":[{"laboratory":"","institution":"IBM R(...TRUNCATED)
null
[ "Computer Science", "Mathematics" ]
International Conference on Learning Representations
2018-01-30
22
1,285
null
null
null
null
null
null
null
true
"Graph neural networks (incl. GCNs) have been shown effective on a large range of tasks. However, it(...TRUNCATED)
{"review_id":["SJce_4YlM","HJDVPNYgf","H1IdT6AlG","B1ymVPEgM"],"review":[{"title":"title: Interestin(...TRUNCATED)
{"title":["RE: RE: Regarding P and bootstrapping comparison","response, discussion, revision","respo(...TRUNCATED)
{"paperhash":["hamilton|inductive_representation_learning_on_large_graphs","goyal|graph_embedding_te(...TRUNCATED)
null
85
15.117647
0.666667
0.625
null
null
null
null
null
rytstxWAW
"mcdonnell|training_wide_residual_networks_for_deployment_using_a_single_bit_for_each_weight|ICLR_cc(...TRUNCATED)
3476061
1802.08530
Training wide residual networks for deployment using a single bit for each weight
"For fast and energy-efficient deployment of trained deep neural networks on resource-constrained em(...TRUNCATED)
{"name":["mark d mcdonnell"],"affiliation":[{"laboratory":"Computational Learning Systems Laboratory(...TRUNCATED)
null
[ "Computer Science", "Mathematics" ]
International Conference on Learning Representations
2018-02-15
29
71
1
null
null
null
null
null
null
true
"The paper presents a way of training 1bit wide resnet to reduce the model footprint while maintaini(...TRUNCATED)
{"review_id":["SkGtH2Kxf","HJ0pVRqxM","BJyxkbFxz"],"review":[{"title":"title: Solid work","paper_sum(...TRUNCATED)
{"title":["response to AnonReviewer3","Summary of changes to paper following review","Response to An(...TRUNCATED)
{"paperhash":["devries|improved_regularization_of_convolutional_neural_networks_with_cutout","chraba(...TRUNCATED)
null
84
0.845238
0.555556
0.666667
null
null
null
null
null
rytNfI1AZ
"courtiol|classification_and_disease_localization_in_histopathology_using_only_global_labels_a_weakl(...TRUNCATED)
"Classification and Disease Localization in Histopathology Using Only Global Labels: A Weakly-Superv(...TRUNCATED)
"Analysis of histopathology slides is a critical step for many diagnoses, and in particular in oncol(...TRUNCATED)
{ "name": [], "affiliation": [] }
"We propose a weakly supervised learning method for the classification and localization of cancers i(...TRUNCATED)
[ "Weakly Supervised Learning", "Medical Imaging", "Histopathology", "Deep Feature Extraction" ]
null
2018-02-15 22:29:29
34
null
null
null
null
null
null
null
null
false
"Authors present a method for disease classification and localization in histopathology images. Stan(...TRUNCATED)
{"review_id":["SkWQLvebf","S1O8uhkxf","Bk72o4NWM"],"review":[{"title":"title: Down-to-earth practica(...TRUNCATED)
{"title":["Paper Modifications","General Comments to Referees","Response to Referee","Response to Re(...TRUNCATED)
{"paperhash":["amores|multiple_instance_classification:_review,_taxonomy_and_comparative_study._arti(...TRUNCATED)
null
84
null
0.481481
0.583333
null
null
null
null
null
ryserbZR-
"zhang|learning_to_share_simultaneous_parameter_tying_and_sparsification_in_deep_learning|ICLR_cc_20(...TRUNCATED)
93002944
null
LEARNING TO SHARE: SIMULTANEOUS PARAMETER TYING AND SPARSIFICATION IN DEEP LEARNING
"Deep neural networks (DNNs) usually contain millions, maybe billions, of parameters/weights, making(...TRUNCATED)
{"name":["dejiao zhang","haozhu wang","mário a t figueiredo","laura balzano"],"affiliation":[{"l(...TRUNCATED)
"We have proposed using the recent GrOWL regularizer for simultaneous parameter sparsity and tying i(...TRUNCATED)
["Compressing neural network","simultaneously parameter tying and sparsification","group ordered l1 (...TRUNCATED)
null
2018-02-15 22:29:19
43
45
7
null
null
null
null
null
null
true
"The paper proposes to regularize via a family of structured sparsity norms on the weights of a deep(...TRUNCATED)
{"review_id":["rkPj2vjeM","rkJfM20eG","H1fONf_gG"],"review":[{"title":"title: A nice paper that woul(...TRUNCATED)
{"title":["Revision","Updating all results with averaged accuracy and the corresponding variance","A(...TRUNCATED)
{"paperhash":["goldluecke|variable_selection","aghasi|net-trim:_a_layer-wise_convex_pruning_of_deep_(...TRUNCATED)
null
84
0.535714
0.666667
0.75
null
null
null
null
null
rypT3fb0b

Datasets related to the task of Scholarly Document Quality Prediction (SDQP). Each sample is an academic paper for which either the citation count or the review score can be predicted (depending on availability).

ACL-OCL Extended

A dataset for citation count prediction only, based on the ACL-OCL dataset, extended with updated citation counts, references, and annotated research hypotheses.

OpenReview (Last Update: 1.1.2025)

A dataset for review score and citation count prediction, obtained by parsing OpenReview. Due to licensing, the dataset comes in different formats:

Datasets with parsed pdfs of submissions (i.e. the fields introduction, background, methodology, experiments_results, conclusion, full_text are available)

  1. openreview-public: Contains full information on all OpenReview submissions that are accompanied by a CC BY 4.0 license.

Datasets without parsed pdfs of submissions (i.e. the fields introduction, background, methodology, experiments_results, conclusion, full_text are None)

  1. openreview-full: Contains all OpenReview submissions; splits are generated based on publication dates.
  2. openreview-iclr: All ICLR submissions from the years 2018-2023 (training) and 2024 (validation and test).
  3. openreview-neurips: All NeurIPS submissions from the years 2021-2023 (training) and 2024 (validation and test).

All datasets without parsed pdfs of submissions can be completed by running the code available here.
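
A minimal loading sketch, assuming the dataset is published on the Hugging Face Hub: the repository id below is a placeholder, and the configuration name is assumed to correspond to one of the subsets listed above.

```python
from datasets import load_dataset

# Placeholder repository id -- substitute the actual Hub id of this dataset.
REPO_ID = "<org>/<sdqp-dataset>"

# Assumed configuration name; the available configs are expected to correspond to the
# subsets described above (e.g. openreview-public, openreview-full, openreview-iclr,
# openreview-neurips, and the ACL-OCL Extended subset).
ds = load_dataset(REPO_ID, name="openreview-public", split="train")

print(ds.column_names)                 # e.g. ['paperhash', 'title', 'abstract', ...]
print(len(ds), "papers in the train split")
```

For the subsets shipped without parsed pdfs, the full-text fields will simply be None until they are filled in with the completion code linked above.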

Overview of Dataset columns

| Attribute | Type | Explanation |
| --- | --- | --- |
| **Basic Paper Info** | | |
| title | str | The title of the paper. |
| authors | list[Author] | A list of authors of the paper. |
| abstract | str \| None | The abstract of the paper (optional). |
| summary | str \| None | A summary of the paper (optional). |
| month_since_publication | int \| None | Months passed since the publication date (optional). |
| publication_date | str \| None | The publication date of the paper (optional). |
| field_of_study | list[str] \| None | The field(s) of study the paper belongs to (optional). |
| venue | str \| None | The venue where the paper was published (optional). |
| **IDs** | | |
| paperhash | str | A unique hash identifier for the paper (last_name_first_author\|title\|venue). |
| arxiv_id | str \| None | The arXiv ID of the paper (optional). |
| s2_corpus_id | str \| None | The Semantic Scholar (S2) corpus ID of the paper (optional). |
| **Semantic Scholar Metadata** | | |
| n_references | int \| None | The number of references in the paper (optional). |
| n_citations | int \| None | The number of citations the paper has received, only accepted papers (optional). |
| n_influential_citations | int \| None | The number of influential citations the paper has received, only accepted papers (optional). |
| external_ids | dict \| None | External IDs associated with the paper (optional). |
| **Content** | | |
| introduction | str \| None | Introduction of the paper, if available (optional). |
| background | str \| None | Background of the paper, if available (optional). |
| methodology | str \| None | Methodology of the paper, if available (optional). |
| experiments_results | str \| None | Experiments & results of the paper, if available (optional). |
| conclusion | str \| None | Conclusion of the paper, if available (optional). |
| full_text | str \| None | Full text of the paper, if available (optional). |
| **Review Data** | | |
| decision | bool \| None | The decision on the paper (e.g., accepted/rejected) (optional). |
| decision_text | str \| None | The text explaining the decision (optional). |
| reviews | list[Review] \| None | A list of reviews for the paper (optional). |
| comments | list[Comment] \| None | A list of comments on the paper (optional). |
| **Scores** | | |
| mean_score | float \| None | The mean overall score of the reviews (optional). |
| mean_novelty | float \| None | The mean novelty score of the reviews (optional). |
| mean_confidence | float \| None | The mean confidence score of the reviews (optional). |
| mean_correctness | float \| None | The mean correctness score of the reviews (optional). |
| mean_clarity | float \| None | The mean clarity score of the reviews (optional). |
| mean_impact | float \| None | The mean impact score of the reviews (optional). |
| mean_reproducibility | float \| None | The mean reproducibility score of the reviews (optional). |
| avg_citations_per_month | float \| None | The average number of citations per month. |
| **References** | | |
| references | list[Reference] \| None | A list of references cited in the paper (optional). |
| **Hypothesis** | | |
| hypothesis | str \| None | The hypothesis proposed in the paper, annotated via an LLM (optional). |
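
As an illustration of how these columns can be combined into an SDQP target, the sketch below picks whichever label a paper actually carries; the helper itself is a hypothetical example, not code shipped with the dataset.

```python
def prediction_target(sample: dict):
    """Pick a supervision target for one paper row.

    Uses the mean review score when reviews are available, otherwise falls back
    to the citation rate; returns None if the sample carries neither label.
    """
    if sample.get("mean_score") is not None:
        return "mean_score", sample["mean_score"]
    if sample.get("avg_citations_per_month") is not None:
        return "avg_citations_per_month", sample["avg_citations_per_month"]
    return None


# Usage with a loaded split `ds` (see the loading sketch above):
# for sample in ds:
#     target = prediction_target(sample)
#     if target is not None:
#         print(sample["paperhash"], *target)
```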

The attributes of Review, TextReview, Comment, and Reference are the following:

Review

| Attribute | Type | Explanation |
| --- | --- | --- |
| review_id | str | A unique identifier for the review. |
| review | TextReview | The content of the review, represented by the TextReview model. |
| score | float \| None | The overall score given by the reviewer (optional). |
| confidence | float \| None | The reviewer's confidence in their assessment (optional). |
| novelty | float \| None | The novelty score of the paper (optional). |
| correctness | float \| None | The correctness score of the paper (optional). |
| clarity | float \| None | The clarity score of the paper (optional). |
| impact | float \| None | The impact score of the paper (optional). |
| reproducibility | float \| None | The reproducibility score of the paper (optional). |
| ethics | str \| None | Ethical considerations noted by the reviewer (optional). |

TextReview

| Attribute | Type | Explanation |
| --- | --- | --- |
| title | str \| None | The title of the review (optional). |
| paper_summary | str \| None | A summary of the paper being reviewed (optional). |
| main_review | str \| None | The main content of the review (optional). |
| strength_weakness | str \| None | A section discussing the strengths and weaknesses of the paper (optional). |
| questions | str \| None | Questions raised by the reviewer (optional). |
| limitations | str \| None | Limitations of the paper as noted by the reviewer (optional). |
| review_summary | str \| None | A summary of the review (optional). |
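
To make the nested review schema concrete, here is a sketch that mirrors the two tables above as plain Python dataclasses; this is an illustrative model of the structure, not code distributed with the dataset.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class TextReview:
    # Free-text parts of a review; every field may be missing.
    title: Optional[str] = None
    paper_summary: Optional[str] = None
    main_review: Optional[str] = None
    strength_weakness: Optional[str] = None
    questions: Optional[str] = None
    limitations: Optional[str] = None
    review_summary: Optional[str] = None


@dataclass
class Review:
    review_id: str
    review: TextReview                      # the textual content of the review
    score: Optional[float] = None           # overall score
    confidence: Optional[float] = None
    novelty: Optional[float] = None
    correctness: Optional[float] = None
    clarity: Optional[float] = None
    impact: Optional[float] = None
    reproducibility: Optional[float] = None
    ethics: Optional[str] = None
```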

Comment

| Attribute | Type | Explanation |
| --- | --- | --- |
| title | str \| None | The title of the comment (optional). |
| comment | str | The content of the comment. |

Reference

| Attribute | Type | Explanation |
| --- | --- | --- |
| **Basic Paper Info** | | |
| title | str | The title of the referenced paper. |
| abstract | str | The abstract of the referenced paper (default is an empty string). |
| authors | list[str] | A list of authors of the referenced paper. |
| **IDs** | | |
| paperhash | str | Paperhash for the reference paper (first_author_last_name\|title). |
| arxiv_id | str \| None | The arXiv ID of the referenced paper (optional, default is an empty string). |
| s2_corpus_id | str \| None | The Semantic Scholar (S2) corpus ID of the referenced paper (optional, default is an empty string). |
| **Reference Specific Info** | | |
| intents | list[str] \| None | The intents or purposes of the reference (optional). |
| isInfluential | bool \| None | Indicates whether the reference is influential (optional). |
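
A small usage sketch for the reference records, e.g. to collect a paper's influential references. Depending on how the data is loaded, `references` may arrive as a list of dicts or as column-oriented parallel lists, so this helper is an assumption about the former layout.

```python
def influential_reference_titles(sample: dict) -> list:
    """Titles of references flagged as influential for one paper row.

    Assumes `references` is a list of dicts with the Reference fields described
    above; adapt accordingly if your loaded format stores parallel lists per field.
    """
    refs = sample.get("references") or []
    return [ref["title"] for ref in refs if ref.get("isInfluential")]
```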